Sample records for deterministic global search

  1. A deterministic global optimization using smooth diagonal auxiliary functions

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.

    2015-04-01

    In many practical decision-making problems, the functions involved in the optimization process are black-box functions with unknown analytical representations that are hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version; its convergence conditions are studied, and numerical experiments executed on eight hundred test functions are presented.
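
    As a hedged illustration of Lipschitz-based deterministic search (in one dimension, with a known over-estimate of the Lipschitz constant, rather than the authors' diagonal, gradient-based scheme), the sketch below implements a Piyavskii-Shubert-style algorithm; the test function and the constant K = 14 are assumptions for demonstration only.

      import heapq, math

      def lipschitz_minimize(f, a, b, K, tol=1e-4, max_iter=1000):
          # Each subinterval [l, r] carries the lower bound
          # (f(l) + f(r) - K*(r - l)) / 2; always refine the interval with
          # the smallest bound. K must over-estimate the Lipschitz constant.
          fl, fr = f(a), f(b)
          heap = [((fl + fr - K * (b - a)) / 2.0, a, b, fl, fr)]
          best_x, best_f = (a, fl) if fl < fr else (b, fr)
          for _ in range(max_iter):
              bound, l, r, fl, fr = heapq.heappop(heap)
              if best_f - bound < tol:        # bound certifies near-optimality
                  break
              m = 0.5 * (l + r) + (fl - fr) / (2.0 * K)   # minimizer of the V-shaped bound
              fm = f(m)
              if fm < best_f:
                  best_x, best_f = m, fm
              heapq.heappush(heap, ((fl + fm - K * (m - l)) / 2.0, l, m, fl, fm))
              heapq.heappush(heap, ((fm + fr - K * (r - m)) / 2.0, m, r, fm, fr))
          return best_x, best_f

      # Hypothetical multimodal test whose Lipschitz constant is well below K = 14.
      f = lambda x: math.sin(x) + math.sin(10.0 * x / 3.0)
      print(lipschitz_minimize(f, 2.7, 7.5, K=14.0))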

  2. Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.

    PubMed

    Kang, Yun; Lanchier, Nicolas

    2011-06-01

    We investigate the impact of the Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors; a regime with exactly three attractors only appears when the patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density population can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions enlarges both the basin of attraction of global extinction and the basin of attraction of global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases, resulting in either a global expansion or a global extinction. The deterministic model then has only two attractors, while the stochastic model no longer exhibits metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds, as the global population size in both the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction, in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the probability of extinction for the stochastic model.
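
    A minimal sketch of a deterministic two-patch model in this spirit (the cubic strong-Allee growth term and every parameter value below are illustrative assumptions, not the authors' exact equations): each patch grows with Allee threshold a and carrying capacity K, and the patches are coupled by linear dispersal of intensity D.

      def two_patch_allee(x0, y0, r=1.0, a=0.3, K=1.0, D=0.05,
                          dt=0.01, steps=200_000):
          # Forward-Euler integration of
          #   dx/dt = r*x*(x/a - 1)*(1 - x/K) + D*(y - x)
          #   dy/dt = r*y*(y/a - 1)*(1 - y/K) + D*(x - y)
          x, y = x0, y0
          for _ in range(steps):
              fx = r * x * (x / a - 1.0) * (1.0 - x / K) + D * (y - x)
              fy = r * y * (y / a - 1.0) * (1.0 - y / K) + D * (x - y)
              x, y = x + dt * fx, y + dt * fy
          return x, y

      # One patch above and one below the Allee threshold: weak dispersal can
      # sustain a high-density/low-density pair, while stronger dispersal
      # synchronizes the patches (here, a joint expansion).
      for D in (0.01, 0.2):
          print(D, two_patch_allee(0.9, 0.1, D=D))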

  3. A shifted hyperbolic augmented Lagrangian-based artificial fish two-swarm algorithm with guaranteed convergence for constrained global optimization

    NASA Astrophysics Data System (ADS)

    Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.

    2016-12-01

    This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the εk-global minimization of a bound constrained optimization subproblem, where εk → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.

  4. The global Minmax k-means algorithm.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and its initial positions can be poor; after a bad initialization, the k-means algorithm easily settles into a poor local optimum. In this paper, we first modify the global k-means algorithm to eliminate the singleton clusters, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms considered in the paper.
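
    A compact NumPy sketch of the incremental global k-means idea described above (plain Lloyd iterations; each new center is seeded, in turn, at every data point and the best run is kept). The synthetic data is invented, and the MinMax variant, which would replace the summed variance by a weighted worst-cluster objective, is omitted for brevity.

      import numpy as np

      def kmeans(X, centers, iters=100):
          # Plain Lloyd iterations from the given initial centers.
          for _ in range(iters):
              d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
              labels = d.argmin(1)
              new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                              else centers[j] for j in range(len(centers))])
              if np.allclose(new, centers):
                  break
              centers = new
          err = ((X - centers[labels]) ** 2).sum()
          return centers, labels, err

      def global_kmeans(X, k):
          # Deterministic incremental search: grow from 1 to k centers,
          # trying every data point as the seed of the newly added center.
          centers = X.mean(0, keepdims=True)
          best = (centers, None, ((X - centers) ** 2).sum())
          for _ in range(2, k + 1):
              best = None
              for x in X:
                  c, lab, err = kmeans(X, np.vstack([centers, x]))
                  if best is None or err < best[2]:
                      best = (c, lab, err)
              centers = best[0]
          return best

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(m, 0.2, size=(50, 2)) for m in ((0, 0), (2, 2), (4, 0))])
      centers, labels, err = global_kmeans(X, 3)
      print(round(float(err), 3), centers.round(2))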

  5. A Darwinian approach to control-structure design

    NASA Technical Reports Server (NTRS)

    Zimmerman, David C.

    1993-01-01

    Genetic algorithms (GAs), as introduced by Holland (1975), are one form of directed random search, where the direction is based on Darwin's 'survival of the fittest' principle. GAs are radically different from the more traditional design optimization techniques. GAs work with a coding of the design variables, as opposed to working with the design variables directly. The search is conducted from a population of designs (i.e., from a large number of points in the design space), unlike traditional algorithms, which search from a single design point. The GA requires only objective function information, as opposed to gradient or other auxiliary information. Finally, the GA is based on probabilistic transition rules, as opposed to deterministic rules. These features allow the GA to attack problems with multiple local and global minima, discontinuous design spaces and mixed variables, all in a single, consistent framework.
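
    A minimal GA sketch exhibiting the four features just listed: a bit-string coding of one design variable, a population of candidate designs, objective-value-only evaluation, and probabilistic selection, crossover and mutation. The decoding range and the test function (a classic multimodal example) are assumptions.

      import math, random
      random.seed(1)

      BITS, POP, GENS = 16, 40, 60

      def decode(bits):                 # coding: 16 bits -> x in [-1, 2]
          return -1.0 + 3.0 * int("".join(map(str, bits)), 2) / (2 ** BITS - 1)

      def fitness(bits):                # objective value only, no gradients
          x = decode(bits)
          return x * math.sin(10.0 * math.pi * x) + 1.0

      def tournament(pop):              # probabilistic selection rule
          a, b = random.sample(pop, 2)
          return a if fitness(a) >= fitness(b) else b

      pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
      for _ in range(GENS):
          nxt = []
          while len(nxt) < POP:
              p1, p2 = tournament(pop), tournament(pop)
              cut = random.randrange(1, BITS)                 # one-point crossover
              child = [b ^ (random.random() < 0.02)           # bit-flip mutation
                       for b in p1[:cut] + p2[cut:]]
              nxt.append(child)
          pop = nxt
      best = max(pop, key=fitness)
      print(decode(best), fitness(best))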

  6. Calculating complete and exact Pareto front for multiobjective optimization: a new deterministic approach for discrete problems.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel

    2013-06-01

    Searching the Pareto front for multiobjective optimization problems usually involves either a population-based search algorithm or a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms that find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee finding the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted where, by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
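
    The paper's exactness guarantee rests on problem-specific k-best solvers; the fragment below only sketches the generic final step, filtering a pooled candidate set down to its exact non-dominated subset (minimization in every objective). The candidate vectors are invented.

      def pareto_front(points):
          # Keep p unless some other point q is at least as good in every
          # objective (weak dominance) and differs from p somewhere.
          front = []
          for p in points:
              dominated = any(q != p and all(qi <= pi for qi, pi in zip(q, p))
                              for q in points)
              if not dominated:
                  front.append(p)
          return front

      cands = [(3, 5), (4, 4), (5, 2), (6, 6), (4, 5)]
      print(pareto_front(cands))  # -> [(3, 5), (4, 4), (5, 2)]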

  7. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many real-world problems today lack analyzable structure and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.

  8. SPIN or LURCH: A Comparative Assessment of Model Checking and Stochastic Search for Temporal Properties in Procedural Code

    NASA Technical Reports Server (NTRS)

    Powell, John D.; Owens, David; Menzies, Tim

    2004-01-01

    The difficulty of testing large systems, such as the one on board a NASA robotic remote explorer (RRE) vehicle, is fundamentally a search issue: exploring the global state space representing all possible behaviors is a problem that has yet to be solved, even after many decades of work. Randomized algorithms have been known to outperform their deterministic counterparts for search problems representing a wide range of applications. In the case study presented here, the LURCH randomized algorithm proved adequate to the task of testing a NASA RRE vehicle: LURCH found all the errors found by an earlier analysis with a more complete method (SPIN). Our empirical results show that LURCH can scale to much larger models than standard model checkers like SMV and SPIN. Further, the LURCH analysis was simpler than the SPIN analysis. The simplicity and scalability of LURCH are two compelling reasons for experimenting further with this tool.

  9. Analysis of correlations and search for evidence of deterministic chaos in rhythmic motor control by the human brain

    NASA Astrophysics Data System (ADS)

    Roberts, Sean; Eykholt, R.; Thaut, Michael H.

    2000-08-01

    We investigate rhythmic finger tapping in both the presence and the absence of a metronome. We examine both the time intervals between taps and the time lags between the stimulus tones from the metronome and the response taps by the subject. We analyze the correlations in these data sets, and we search for evidence of deterministic chaos, as opposed to randomness, in the fluctuations.

  10. Global Search Capabilities of Indirect Methods for Impulsive Transfers

    NASA Astrophysics Data System (ADS)

    Shen, Hong-Xin; Casalino, Lorenzo; Luo, Ya-Zhong

    2015-09-01

    An optimization method that combines an indirect method with a homotopic approach is proposed and applied to impulsive trajectories. Minimum-fuel, multiple-impulse solutions, with either fixed or open time, are obtained. The homotopic approach at hand is relatively straightforward to implement and does not require an initial guess of the adjoints, unlike previous adjoint-estimation methods. A multiple-revolution Lambert solver is used to find multiple starting solutions for the homotopic procedure; this approach guarantees obtaining multiple local solutions without relying on the user's intuition, thus efficiently exploring the solution space to find the global optimum. The indirect/homotopic approach proves to be quite effective and efficient in finding optimal solutions, and outperforms the joint use of evolutionary algorithms and deterministic methods in the test cases.

  11. Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view

    NASA Astrophysics Data System (ADS)

    Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.

    2015-12-01

    Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODEs) as the elementary part of the system. To perform the analyses, scenes of study are generated based on ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics are chosen to be chaotic in order to ensure sensitivity to initial conditions, that is, one fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are considered: the chaotic oscillators composing the scene of study are taken either independent or in phase synchronization. Scale behaviors are analyzed by considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, but with a modified parameterization [6]. This is shown based on numerical analyses; it is then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E, 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons & Fractals, 28, 337-360 (2006). [5] Mangiarotti, Coudret, Drapeau & Jarlan, Polynomial search and global modeling, Phys. Rev. E, 86(4), 046205 (2012). [6] Mangiarotti, Modélisation globale et Caractérisation Topologique de dynamiques environnementales, Habilitation à Diriger des Recherches, Univ. Toulouse 3 (2014).

  12. An ITK framework for deterministic global optimization for medical image registration

    NASA Astrophysics Data System (ADS)

    Dru, Florence; Wachowiak, Mark P.; Peters, Terry M.

    2006-03-01

    Similarity metric optimization is an essential step in intensity-based rigid and nonrigid medical image registration. For clinical applications, such as image guidance of minimally invasive procedures, registration accuracy and efficiency are prime considerations. In addition, clinical utility is enhanced when registration is integrated into image analysis and visualization frameworks, such as the popular Insight Toolkit (ITK). ITK is an open source software environment increasingly used to aid the development, testing, and integration of new imaging algorithms. In this paper, we present a new ITK-based implementation of the DIRECT (Dividing Rectangles) deterministic global optimization algorithm for medical image registration. Previously, it has been shown that DIRECT improves the capture range and accuracy for rigid registration. Our ITK class also contains enhancements over the original DIRECT algorithm by improving stopping criteria, adaptively adjusting a locality parameter, and by incorporating Powell's method for local refinement. 3D-3D registration experiments with ground-truth brain volumes and clinical cardiac volumes show that combining DIRECT with Powell's method improves registration accuracy over Powell's method used alone, is less sensitive to initial misorientation errors, and, with the new stopping criteria, facilitates adequate exploration of the search space without expending expensive iterations on non-improving function evaluations. Finally, in this framework, a new parallel implementation for computing mutual information is presented, resulting in near-linear speedup with two processors.

  13. A noisy chaotic neural network for solving combinatorial optimization problems: stochastic chaotic simulated annealing.

    PubMed

    Wang, Lipo; Li, Sa; Tian, Fuyu; Fu, Xiuju

    2004-10-01

    Recently, Chen and Aihara have demonstrated both experimentally and mathematically that their chaotic simulated annealing (CSA) has better search ability for solving combinatorial optimization problems compared to both the Hopfield-Tank approach and stochastic simulated annealing (SSA). However, CSA may not find a globally optimal solution no matter how slowly annealing is carried out, because the chaotic dynamics are completely deterministic. In contrast, SSA tends to settle down to a global optimum if the temperature is reduced sufficiently slowly. Here we combine the best features of both SSA and CSA, thereby proposing a new approach for solving optimization problems, i.e., stochastic chaotic simulated annealing, by using a noisy chaotic neural network. We show the effectiveness of this new approach with two difficult combinatorial optimization problems, i.e., a traveling salesman problem and a channel assignment problem for cellular mobile communications.
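
    Wang et al.'s method is a noisy chaotic neural network for combinatorial problems; the fragment below only illustrates, on a toy continuous objective, the underlying idea of superposing a decaying deterministic chaotic drive (a logistic map) on a stochastic Metropolis acceptance loop. Every constant here is an assumption.

      import math, random
      random.seed(0)

      def f(x):                          # toy multimodal objective
          return x * x + 10.0 * math.sin(3.0 * x)

      x, fx = 4.0, f(4.0)
      z, T, amp = 0.37, 2.0, 1.5         # logistic state, temperature, chaos amplitude
      best_x, best_f = x, fx
      for _ in range(5000):
          z = 4.0 * z * (1.0 - z)        # deterministic chaotic driver
          step = amp * (z - 0.5) + random.gauss(0.0, math.sqrt(T))
          cand = x + step                # chaotic + stochastic proposal
          fc = f(cand)
          if fc < fx or random.random() < math.exp(-(fc - fx) / T):
              x, fx = cand, fc
              if fc < best_f:
                  best_x, best_f = cand, fc
          T *= 0.999                     # slow stochastic annealing
          amp *= 0.999                   # decay the chaotic term over time
      print(best_x, best_f)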

  14. Diagnostic Assessment of the Difficulty Using Direct Policy Search in Many-Objective Reservoir Control

    NASA Astrophysics Data System (ADS)

    Zatarain-Salazar, J.; Reed, P. M.; Herman, J. D.; Giuliani, M.; Castelletti, A.

    2014-12-01

    Globally, reservoir operations provide fundamental services for water supply, energy generation, recreation, and ecosystems. The pressures of expanding populations, climate change, and increased energy demands are motivating significant investment in re-operationalizing existing reservoirs or defining operations for new ones. Recent work has highlighted the potential benefits of exploiting advances in many-objective optimization and direct policy search (DPS) to help address these systems' multi-sector demand tradeoffs. This study contributes a comprehensive diagnostic assessment of the efficiency, effectiveness, reliability, and controllability of multi-objective evolutionary optimization algorithms (MOEAs) when supporting DPS for the Conowingo dam in the Lower Susquehanna River Basin. The Lower Susquehanna River is an interstate water body that has been subject to intensive water management efforts due to the system's competing demands from urban water supply, atomic power plant cooling, hydropower production, and federally regulated environmental flows. Seven benchmark and state-of-the-art MOEAs are tested on deterministic and stochastic instances of the Susquehanna test case. In the deterministic formulation, the operating objectives are evaluated over the historical realization of the hydroclimatic variables (i.e., inflows and evaporation rates). In the stochastic formulation, the same objectives are instead evaluated over an ensemble of stochastic inflow and evaporation-rate realizations. The algorithms are evaluated on their ability to support DPS in discovering reservoir operations that compose the tradeoffs for six multi-sector performance objectives with thirty-two decision variables. Our diagnostic results highlight that many-objective DPS is very challenging for modern MOEAs and that epsilon dominance is critical for attaining high levels of performance. The epsilon-dominance algorithms epsilon-MOEA, epsilon-NSGAII, and the auto-adaptive Borg MOEA are statistically superior for the six-objective Susquehanna instance of this important class of problems. Additionally, shifting from deterministic history-based DPS to stochastic DPS significantly increases the difficulty of the problem.

  15. Optimal design of piezoelectric transformers: a rational approach based on an analytical model and a deterministic global optimization.

    PubMed

    Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand

    2007-07-01

    This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on the association of analytical models describing the corresponding optimization problem with an exact global optimization software package, named IBBA and developed by the second author, to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.

  16. Automated Calibration For Numerical Models Of Riverflow

    NASA Astrophysics Data System (ADS)

    Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey

    2017-04-01

    Calibration of numerical models has been fundamental since the beginning of all types of hydro-system modeling, as a way to approximate the parameters that can mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. The uncertainty of the most suitable methods is also analyzed. These optimization methods minimize an objective function that compares synthetic measurements and simulated data. Synthetic measurement data replace the observed data set to guarantee an existing parameter solution. The input data for the objective function derive from a hydro-morphological dynamics numerical model which represents a 180-degree bend channel. The hydro-morphological numerical model exhibits a high level of ill-posedness in the mathematical problem. The minimization of the objective function by the different candidate methods indicates a failure of some of the gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others reveal partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient. Further ones yield parameter solutions that range outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for Brute Force methods. The deterministic Sequential Least Squares Programming and the stochastic Bayesian inference methods produce the best optimization results. Keywords: automated calibration of hydro-morphological dynamic numerical models, Bayesian inference theory, deterministic optimization methods.

  17. Magnified gradient function with deterministic weight modification in adaptive learning.

    PubMed

    Ng, Sin-Chun; Cheung, Chi-Chung; Leung, Shu-Hung

    2004-11-01

    This paper presents two novel approaches, backpropagation (BP) with magnified gradient function (MGFPROP) and deterministic weight modification (DWM), to speed up the convergence rate and improve the global convergence capability of the standard BP learning algorithm. The purpose of MGFPROP is to increase the convergence rate by magnifying the gradient function of the activation function, while the main objective of DWM is to reduce the system error by changing the weights of a multilayered feedforward neural network in a deterministic way. Simulation results show that the performance of the above two approaches is better than BP and other modified BP algorithms for a number of learning problems. Moreover, the integration of the above two approaches, forming a new algorithm called MDPROP, can further improve the performance of MGFPROP and DWM. In our simulation results, the MDPROP algorithm always outperforms BP and other modified BP algorithms in terms of convergence rate and global convergence capability.

  18. A stochastic tabu search algorithm to align physician schedule with patient flow.

    PubMed

    Niroumandrad, Nazgol; Lahrichi, Nadia

    2018-06-01

    In this study, we consider the pretreatment phase for cancer patients. This is defined as the period between the referral to a cancer center and the confirmation of the treatment plan. Physicians have been identified as bottlenecks in this process, and the goal is to determine a weekly cyclic schedule that improves the patient flow and shortens the pretreatment duration. High uncertainty is associated with the arrival day, profile and type of cancer of each patient. We also include physician satisfaction in the objective function. We present a MIP model for the problem and develop a tabu search algorithm, considering both deterministic and stochastic cases. Experiments show that our method compares very well to CPLEX under deterministic conditions. We describe the stochastic approach in detail and present a real application.
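
    A generic tabu-search skeleton in the spirit of the method described (the real algorithm schedules physicians against uncertain patient arrivals; here a toy assignment of tasks to weekdays with an invented load target stands in). Recently reversed moves are tabu for a fixed tenure.

      import random
      random.seed(3)

      TASKS, DAYS, TENURE = 12, 5, 6
      target = [2, 3, 3, 2, 2]           # hypothetical tasks-per-day target

      def cost(assign):                  # penalize deviation from the target load
          counts = [assign.count(d) for d in range(DAYS)]
          return sum((c - t) ** 2 for c, t in zip(counts, target))

      assign = [random.randrange(DAYS) for _ in range(TASKS)]
      best, best_cost = assign[:], cost(assign)
      tabu = {}                          # move (task, day) -> iteration it stays tabu until

      for it in range(200):
          candidates = []
          for t in range(TASKS):         # neighborhood: move one task to another day
              for d in range(DAYS):
                  if d == assign[t] or tabu.get((t, d), -1) > it:
                      continue
                  trial = assign[:]
                  trial[t] = d
                  candidates.append((cost(trial), t, d, trial))
          c, t, d, trial = min(candidates)
          tabu[(t, assign[t])] = it + TENURE   # forbid moving the task straight back
          assign = trial
          if c < best_cost:
              best, best_cost = trial[:], c
      print(best_cost, best)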

  19. Soil pH mediates the balance between stochastic and deterministic assembly of bacteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Binu M.; Stegen, James C.; Kim, Mincheol

    Little is known about the factors affecting the relative influence of the stochastic and deterministic processes that govern the assembly of microbial communities in successional soils. Here, we conducted a meta-analysis of bacterial communities using six different successional soil data sets, scattered across different regions, with different pH conditions in early and late successional soils. We found that soil pH was the best predictor of bacterial community assembly and of the relative importance of stochastic and deterministic processes along successional soils. Extreme acidic or alkaline pH conditions lead to the assembly of phylogenetically more clustered bacterial communities through deterministic processes, whereas pH conditions close to neutral lead to phylogenetically less clustered bacterial communities with more stochasticity. We suggest that the influence of pH, rather than successional age, is the main driving force producing trends in the phylogenetic assembly of bacteria, and that pH also influences the relative balance of stochastic and deterministic processes along successional soils. Given that pH had a much stronger association with community assembly than did successional age, we evaluated whether the inferred influence of pH was maintained when studying globally distributed samples collected without regard for successional age. This data set confirmed the strong influence of pH, suggesting that the influence of soil pH on community assembly processes occurs globally. Extreme pH conditions likely exert more stringent limits on survival and fitness, imposing strong selective pressures through ecological and evolutionary time. Taken together, these findings suggest that the degree to which stochastic vs. deterministic processes shape soil bacterial community assembly is a consequence of soil pH rather than successional age.

  20. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background: We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results: We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems, and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion: Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289

  1. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.

    PubMed

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-11-02

    We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems, and we make a critical comparison with respect to the previous (above mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.

  2. Pyrotechnic modeling for the NSI and pin puller

    NASA Technical Reports Server (NTRS)

    Powers, Joseph M.; Gonthier, Keith A.

    1993-01-01

    A discussion concerning the modeling of pyrotechnically driven actuators is presented in viewgraph format. The following topics are discussed: literature search, constitutive data for full-scale model, simple deterministic model, observed phenomena, and results from simple model.

  3. Markovian Search Games in Heterogeneous Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, Christopher H

    2009-01-01

    We consider how to search for a mobile evader in a large heterogeneous region when sensors are used for detection. Sensors are modeled using probability of detection. Due to environmental effects, this probability will not be constant over the entire region. We map this problem to a graph search problem and, even though deterministic graph search is NP-complete, we derive a tractable, optimal, probabilistic search strategy. We do this by defining the problem as a differential game played on a Markov chain. We prove that this strategy is optimal in the sense of Nash. Simulations of an example problem illustrate our approach and verify our claims.

  4. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    NASA Astrophysics Data System (ADS)

    Soltanzadeh, I.; Azadi, M.; Vakili, G. A.

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited-area models (WRF, MM5 and HRM), with WRF used in five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS); for HRM, the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that the application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
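
    A compact sketch of the Gaussian BMA calibration step in the spirit of Raftery et al. (2005), on which this kind of application builds: EM alternates between assigning responsibilities to ensemble members and refitting the weights and a common kernel variance over a training window. The synthetic forecasts, member error levels, and window length are assumptions.

      import numpy as np
      rng = np.random.default_rng(42)

      # Hypothetical training window: T days, M ensemble members.
      T, M = 40, 7
      truth = rng.normal(15.0, 3.0, T)
      spread = np.array([0.8, 1.0, 1.2, 1.5, 1.0, 2.0, 2.5])
      fcst = truth[:, None] + rng.normal(0.0, 1.0, (T, M)) * spread

      w = np.full(M, 1.0 / M)            # BMA weights
      var = 1.0                          # common Gaussian kernel variance
      for _ in range(200):               # EM iterations
          dens = np.exp(-0.5 * (truth[:, None] - fcst) ** 2 / var) \
                 / np.sqrt(2.0 * np.pi * var)
          z = w * dens
          z /= z.sum(axis=1, keepdims=True)                   # E-step: responsibilities
          w = z.mean(axis=0)                                  # M-step: weights
          var = (z * (truth[:, None] - fcst) ** 2).sum() / T  # M-step: variance

      det_fcst = fcst @ w                # deterministic-style BMA forecast
      print(w.round(3), round(float(var), 3), det_fcst[:3].round(2))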

  5. Effective Biot theory and its generalization to poroviscoelastic models

    NASA Astrophysics Data System (ADS)

    Liu, Xu; Greenhalgh, Stewart; Zhou, Bing; Greenhalgh, Mark

    2018-02-01

    A method is suggested to express the effective bulk modulus of the solid frame of a poroelastic material as a function of the saturated bulk modulus. This method enables effective Biot theory to be described through the use of seismic dispersion measurements or other models developed for the effective saturated bulk modulus. The effective Biot theory is generalized to a poroviscoelastic model of which the moduli are represented by the relaxation functions of the generalized fractional Zener model. The latter covers the general Zener and the Cole-Cole models as special cases. A global search method is described to determine the parameters of the relaxation functions, and a simple deterministic method is also developed to find the defining parameters of the single Cole-Cole model. These methods enable poroviscoelastic models to be constructed, which are based on measured seismic attenuation functions, and ensure that the model dispersion characteristics match the observations.

  6. Optimum Parameters of a Tuned Liquid Column Damper in a Wind Turbine Subject to Stochastic Load

    NASA Astrophysics Data System (ADS)

    Alkmim, M. H.; de Morais, M. V. G.; Fabro, A. T.

    2017-12-01

    Parameter optimization for tuned liquid column dampers (TLCDs), a class of passive structural control, has been previously proposed in the literature for reducing vibration in wind turbines and several other applications. However, most of the available work considers the wind excitation as either a deterministic harmonic load or a random load with a white-noise spectrum. In this paper, a global direct search optimization algorithm to reduce the vibration of a TLCD is presented. The objective is to find optimized parameters for the TLCD under stochastic loads from different wind power spectral densities. A verification is made by considering the analytical solution of an undamped primary system under white-noise excitation and comparing with results from the literature. Finally, it is shown that different wind profiles can significantly affect the optimum TLCD parameters.

  7. Probabilistic track coverage in cooperative sensor networks.

    PubMed

    Ferrari, Silvia; Zhang, Guoxian; Wettergren, Thomas A

    2010-12-01

    The quality of service of a network performing cooperative track detection is represented by the probability of obtaining multiple elementary detections over time along a target track. Recently, two different lines of research, namely, distributed-search theory and geometric transversals, have been used in the literature for deriving the probability of track detection as a function of random and deterministic sensors' positions, respectively. In this paper, we prove that these two approaches are equivalent under the same problem formulation. Also, we present a new performance function that is derived by extending the geometric-transversal approach to the case of random sensors' positions using Poisson flats. As a result, a unified approach for addressing track detection in both deterministic and probabilistic sensor networks is obtained. The new performance function is validated through numerical simulations and is shown to bring about considerable computational savings for both deterministic and probabilistic sensor networks.

  8. Human Health Risk Assessment Applied to Rural Populations Dependent on Unregulated Drinking Water Sources: A Scoping Review.

    PubMed

    Ford, Lorelei; Bharadwaj, Lalita; McLeod, Lianne; Waldner, Cheryl

    2017-07-28

    Safe drinking water is a global challenge for rural populations dependent on unregulated water. A scoping review of research on human health risk assessments (HHRA) applied to this vulnerable population may be used to improve assessments applied by government and researchers. This review aims to summarize and describe the characteristics of HHRA methods, publications, and current literature gaps of HHRA studies on rural populations dependent on unregulated or unspecified drinking water. Peer-reviewed literature was systematically searched (January 2000 to May 2014) and identified at least one drinking water source as unregulated (21%) or unspecified (79%) in 100 studies. Only 7% of reviewed studies identified a rural community dependent on unregulated drinking water. Source water and hazards most frequently cited included groundwater (67%) and chemical water hazards (82%). Most HHRAs (86%) applied deterministic methods with 14% reporting probabilistic and stochastic methods. Publications increased over time with 57% set in Asia, and 47% of studies identified at least one literature gap in the areas of research, risk management, and community exposure. HHRAs applied to rural populations dependent on unregulated water are poorly represented in the literature even though almost half of the global population is rural.

  9. Spiral bacterial foraging optimization method: Algorithm, evaluation and convergence analysis

    NASA Astrophysics Data System (ADS)

    Kasaiezadeh, Alireza; Khajepour, Amir; Waslander, Steven L.

    2014-04-01

    A biologically-inspired algorithm called Spiral Bacterial Foraging Optimization (SBFO) is investigated in this article. SBFO, previously proposed by the same authors, is a multi-agent, gradient-based algorithm that minimizes both the main objective function (local cost) and the distance between each agent and a temporary central point (global cost). A random jump is included normal to the line connecting each agent to the central point, which produces a vortex around the temporary central point. This random jump also helps to cope with premature convergence, a known issue in swarm-based optimization methods. The most important advantages of this algorithm are as follows. First, the algorithm involves a stochastic type of search with a deterministic convergence. Second, as gradient-based methods are employed, faster convergence is demonstrated over GA, DE, BFO, etc. Third, the algorithm can be implemented in a parallel fashion in order to decentralize large-scale computation. Fourth, the algorithm has a limited number of tunable parameters, and finally, SBFO has a strong certainty of convergence, which is rare among existing global optimization algorithms. A detailed convergence analysis of SBFO for continuously differentiable objective functions is also presented in this article.

  10. Human Health Risk Assessment Applied to Rural Populations Dependent on Unregulated Drinking Water Sources: A Scoping Review

    PubMed Central

    Ford, Lorelei; Bharadwaj, Lalita; McLeod, Lianne; Waldner, Cheryl

    2017-01-01

    Safe drinking water is a global challenge for rural populations dependent on unregulated water. A scoping review of research on human health risk assessments (HHRA) applied to this vulnerable population may be used to improve assessments applied by government and researchers. This review aims to summarize and describe the characteristics of HHRA methods, publications, and current literature gaps of HHRA studies on rural populations dependent on unregulated or unspecified drinking water. Peer-reviewed literature was systematically searched (January 2000 to May 2014) and identified at least one drinking water source as unregulated (21%) or unspecified (79%) in 100 studies. Only 7% of reviewed studies identified a rural community dependent on unregulated drinking water. Source water and hazards most frequently cited included groundwater (67%) and chemical water hazards (82%). Most HHRAs (86%) applied deterministic methods with 14% reporting probabilistic and stochastic methods. Publications increased over time with 57% set in Asia, and 47% of studies identified at least one literature gap in the areas of research, risk management, and community exposure. HHRAs applied to rural populations dependent on unregulated water are poorly represented in the literature even though almost half of the global population is rural. PMID:28788087

  11. Global forecasting of thermal health hazards: the skill of probabilistic predictions of the Universal Thermal Climate Index (UTCI).

    PubMed

    Pappenberger, F; Jendritzky, G; Staiger, H; Dutra, E; Di Giuseppe, F; Richardson, D S; Cloke, H L

    2015-03-01

    Although over a hundred thermal indices can be used for assessing thermal health hazards, many ignore the human heat budget, physiology and clothing. The Universal Thermal Climate Index (UTCI) addresses these shortcomings by using an advanced thermo-physiological model. This paper assesses the potential of using the UTCI for forecasting thermal health hazards. Traditionally, such hazard forecasting has had two further limitations: it has been narrowly focused on a particular region or nation and has relied on the use of single 'deterministic' forecasts. Here, the UTCI is computed on a global scale, which is essential for international health-hazard warnings and disaster preparedness, and it is provided as a probabilistic forecast. It is shown that probabilistic UTCI forecasts are superior in skill to deterministic forecasts and that despite global variations, the UTCI forecast is skilful for lead times up to 10 days. The paper also demonstrates the utility of probabilistic UTCI forecasts on the example of the 2010 heat wave in Russia.

  12. Architecture studies and system demonstrations for optical parallel processor for AI and NI

    NASA Astrophysics Data System (ADS)

    Lee, Sing H.

    1988-03-01

    In solving deterministic AI problems, the data search for matching the arguments of a PROLOG expression causes a serious bottleneck when implemented sequentially by electronic systems. To overcome this bottleneck we have developed the concepts for an optical expert system based on a matrix-algebraic formulation, which will be suitable for parallel optical implementation. The optical AI system based on the matrix-algebraic formulation will offer distinct advantages for parallel search, adult learning, etc.

  13. Short-range solar radiation forecasts over Sweden

    NASA Astrophysics Data System (ADS)

    Landelius, Tomas; Lindskog, Magnus; Körnich, Heiner; Andersson, Sandra

    2018-04-01

    In this article, the performance of short-range solar radiation forecasts from the global deterministic and ensemble models of the European Centre for Medium-Range Weather Forecasts (ECMWF) is compared with an ensemble of the regional mesoscale model HARMONIE-AROME used by the national meteorological services in Sweden, Norway and Finland. Note, however, that only the control members and the ensemble means are included in the comparison. The models' resolutions differ considerably: 18 km for the ECMWF ensemble, 9 km for the ECMWF deterministic model, and 2.5 km for the HARMONIE-AROME ensemble. The models share the same radiation code. It turns out that they all systematically underestimate the Direct Normal Irradiance (DNI) under clear-sky conditions. Except for this shortcoming, the HARMONIE-AROME ensemble model shows the best agreement with the distribution of observed Global Horizontal Irradiance (GHI) and DNI values. During mid-day, the HARMONIE-AROME ensemble mean performs best. The control member of the HARMONIE-AROME ensemble also scores better than the global deterministic ECMWF model. This is an interesting result, since mesoscale models have so far not shown good results when compared to the ECMWF models. Three days with clear, mixed and cloudy skies are used to illustrate the possible added value of a probabilistic forecast. It is shown that in these cases the mesoscale ensemble could provide decision support to a grid operator in terms of forecasts of both the amount of solar power and its probabilities.

  14. Preliminary Analysis of Low-Thrust Gravity Assist Trajectories by An Inverse Method and a Global Optimization Technique.

    NASA Astrophysics Data System (ADS)

    de Pascale, P.; Vasile, M.; Casotto, S.

    The design of interplanetary trajectories requires the solution of an optimization problem, which has traditionally been solved by resorting to various local optimization techniques. All such approaches, apart from the specific method employed (direct or indirect), require an initial guess, which deeply influences the convergence to the optimal solution. Recent developments in low-thrust propulsion have widened the perspectives for exploration of the Solar System, while at the same time increasing the difficulty of the trajectory design process. Continuous-thrust transfers, typically characterized by multiple spiraling arcs, have a large number of design parameters, and thanks to the flexibility offered by such engines, the problem typically turns out to have a multi-modal domain, with a consequently larger number of optimal solutions. The definition of first guesses is thus even more challenging, particularly for a broad search over the design parameters, and it requires an extensive investigation of the domain in order to locate the largest number of optimal candidate solutions and possibly the globally optimal one. In this paper, a tool for the preliminary definition of interplanetary transfers with coast-thrust arcs and multiple swing-bys is presented. This goal is achieved by combining a novel methodology for the description of low-thrust arcs with a global optimization algorithm based on a hybridization of an evolutionary step and a deterministic step. Low-thrust arcs are described in a 3D model, in order to account for the beneficial effects of low-thrust propulsion on a change of inclination, by resorting to a new methodology based on an inverse method. The two-point boundary value problem (TPBVP) associated with a thrust arc is solved by imposing a properly parameterized evolution of the orbital parameters, from which the acceleration required to follow the given trajectory, with respect to the constraints set, is obtained through simple algebraic computation. By this method, a low-thrust transfer satisfying the boundary conditions on position and velocity can be quickly assessed, with low computational effort, since no numerical propagation is required. The hybrid global optimization algorithm consists of two steps. Through the evolutionary search, a large number of optima, and eventually the global one, are located, while the deterministic step consists of a branching process that exhaustively partitions the domain in order to obtain an extensive characterization of such a complex space of solutions. Furthermore, the approach implements a novel direct constraint-handling technique allowing the treatment of the mixed-integer nonlinear programming problems (MINLP) typical of multiple-swing-by trajectories. A low-thrust transfer to Mars is studied as a test bed for the low-thrust model, presenting the main characteristics of the different shapes proposed and the features of the possible sub-arc segmentations between two planets with respect to different objective functions: minimum-time and minimum-fuel-consumption transfers. Various other test cases are also shown and further optimized, proving the capability of the proposed tool.

  15. Stable cycling in discrete-time genetic models.

    PubMed

    Hastings, A

    1981-11-01

    Examples of stable cycling are discussed for two-locus, two-allele, deterministic, discrete-time models with constant fitnesses. The cases that cycle were found by using numerical techniques to search for stable Hopf bifurcations. One consequence of the results is that apparent cases of directional selection may be due to stable cycling.

  16. Acceleration techniques in the univariate Lipschitz global optimization

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela

    2016-10-01

    Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information statistical approaches are presented. The novel powerful local tuning and local improvement techniques are described in the contribution as well as the traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on the class of 100 widely used test functions.

  17. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Shoujun, E-mail: sunnyway@nwpu.edu.cn; Ge, Lefei; Ma, Shaojie

    2014-04-15

    Given the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from the initial weights obtained through improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built with the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of SRMs and verifies the effectiveness of the proposed modeling method.
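
    A stripped-down sketch of the hybrid training idea (not the paper's wavelet network): a crude evolutionary search supplies starting weights, which deterministic gradient descent then refines. The toy regression target, population sizes, and learning rate are all assumptions.

      import numpy as np
      rng = np.random.default_rng(7)

      # Toy regression target standing in for a measured nonlinear surface.
      X = rng.uniform(-1.0, 1.0, (200, 2))
      y = np.tanh(1.5 * X[:, 0] - 0.7 * X[:, 1])

      def loss(w):
          return float(np.mean((np.tanh(X @ w) - y) ** 2))

      # Stage 1: GA-style global search over the initial weights.
      pop = rng.normal(0.0, 2.0, (30, 2))
      for _ in range(40):
          fit = np.array([loss(w) for w in pop])
          parents = pop[np.argsort(fit)[:10]]                 # truncation selection
          pop = parents[rng.integers(0, 10, 30)] + rng.normal(0.0, 0.3, (30, 2))
          pop[0] = parents[0]                                 # keep the elite unmutated
      w = min(pop, key=loss).copy()

      # Stage 2: deterministic gradient descent from the GA optimum.
      for _ in range(500):
          pred = np.tanh(X @ w)
          grad = 2.0 * (X * ((pred - y) * (1.0 - pred ** 2))[:, None]).mean(0)
          w -= 0.3 * grad
      print(w.round(3), loss(w))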

  18. Current issues and actions in radiation protection of patients.

    PubMed

    Holmberg, Ola; Malone, Jim; Rehani, Madan; McLean, Donald; Czarwinski, Renate

    2010-10-01

    Medical application of ionizing radiation is a massive and increasing activity globally. While the use of ionizing radiation in medicine brings tremendous benefits to the global population, the associated risks due to stochastic and deterministic effects make it necessary to protect patients from potential harm. Current issues in radiation protection of patients include not only the rapidly increasing collective dose to the global population from medical exposure, but also that a substantial percentage of diagnostic imaging examinations are unnecessary, and the cumulative dose to individuals from medical exposure is growing. In addition to this, continued reports on deterministic injuries from safety related events in the medical use of ionizing radiation are raising awareness on the necessity for accident prevention measures. The International Atomic Energy Agency is engaged in several activities to reverse the negative trends of these current issues, including improvement of the justification process, the tracking of radiation history of individual patients, shared learning of safety significant events, and the use of comprehensive quality audits in the clinical environment. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  19. Comparison between deterministic and statistical wavelet estimation methods through predictive deconvolution: Seismic to well tie example from the North Sea

    NASA Astrophysics Data System (ADS)

    de Macedo, Isadora A. S.; da Silva, Carolina B.; de Figueiredo, J. J. S.; Omoboya, Bode

    2017-01-01

    Wavelet estimation, as well as seismic-to-well tie procedures, is at the core of every seismic interpretation workflow. In this paper we perform a comparative study of wavelet estimation methods for seismic-to-well ties. Two approaches to wavelet estimation are discussed: a deterministic estimation, based on both seismic and well log data, and a statistical estimation, based on predictive deconvolution and the classical assumptions of the convolutional model, which provides a minimum-phase wavelet. For both wavelet estimation methods, our algorithms introduce a semi-automatic approach to determine the optimum estimation parameters and, further, to estimate the optimum seismic wavelet by searching for the highest correlation coefficient between the recorded trace and the synthetic trace, when the time-depth relationship is accurate. Tests with numerical data provide some qualitative conclusions from a detailed comparison of deterministic and statistical wavelet estimation, which are likely to be useful for seismic inversion and the interpretation of field data. The feasibility of this approach is verified on real seismic and well data from the Viking Graben field, North Sea, Norway. Our results also show the influence of washout zones in the well log data on the quality of the seismic-to-well tie.

  20. Optimal design for robust control of uncertain flexible joint manipulators: a fuzzy dynamical system approach

    NASA Astrophysics Data System (ADS)

    Han, Jiang; Chen, Ye-Hwa; Zhao, Xiaomin; Dong, Fangfang

    2018-04-01

    A novel fuzzy dynamical system approach to the control design of flexible joint manipulators with mismatched uncertainty is proposed. Uncertainties of the system are assumed to lie within prescribed fuzzy sets. The desired system performance includes a deterministic phase and a fuzzy phase. First, by creatively implanting a fictitious control, a robust control scheme is constructed to render the system uniformly bounded and uniformly ultimately bounded. Both the manipulator modelling and the control scheme are deterministic and not based on IF-THEN heuristic rules. Next, a fuzzy-based performance index is proposed. An optimal design problem for a control design parameter is formulated as a constrained optimisation problem. The global solution to this problem can be obtained by solving two quartic equations. The fuzzy dynamical system approach is systematic and is able to assure the deterministic performance as well as to minimise the fuzzy performance index.

  1. Ontology-Based Peer Exchange Network (OPEN)

    ERIC Educational Resources Information Center

    Dong, Hui

    2010-01-01

    In current Peer-to-Peer networks, distributed and semantic-free indexing is widely used by systems adopting "Distributed Hash Table" ("DHT") mechanisms. Although such systems typically solve a user query rather fast in a deterministic way, they only support a very narrow search scheme, namely the exact hash key match. Furthermore, DHT systems put…

  2. Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem

    DOE PAGES

    Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...

    2016-12-12

    In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema in non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address difficulties associated with the non-smooth response, and they are shown to significantly decrease the computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
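
    The two-stage structure described above is straightforward to emulate with standard tools: run a budget-limited global search, then hand the pseudo-optimum to a derivative-free local method. This sketch uses scipy's dual_annealing followed by Nelder-Mead (standing in for implicit filtering, which scipy does not provide) on a hypothetical non-smooth stand-in objective.

```python
import numpy as np
from scipy.optimize import dual_annealing, minimize

def objective(x):
    # hypothetical stand-in for the piecewise-differentiable negative
    # log-likelihood over (x, y) source coordinates in a 250 x 180 m domain
    return abs(x[0] - 120.0) + abs(x[1] - 75.0) + 5.0 * np.sin(0.1 * x[0]) ** 2

bounds = [(0.0, 250.0), (0.0, 180.0)]

# stage 1: global search stopped early, yielding a pseudo-optimum
coarse = dual_annealing(objective, bounds, maxiter=50, seed=1)

# stage 2: derivative-free local polish in a narrow domain around it
fine = minimize(objective, coarse.x, method="Nelder-Mead")
print(coarse.x, fine.x, fine.fun)
```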

  3. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.

    PubMed

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung

    2017-04-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, successfully translating this natural phenomenon to the optimization of complex functions. Many real-life applications of PSO must cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
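
    The classical OCBA allocation rule that the paper adapts to PSO can be sketched directly: under normal-noise assumptions, each non-best design receives effort proportional to (sigma_i / delta_i)^2, where delta_i is the gap between its mean and the best mean, and the best design's share is set from the others. The particle statistics below are invented for illustration.

```python
import numpy as np

def ocba_allocation(means, stds, budget):
    """Split a sampling budget across designs (minimisation setting): designs
    that are close to the best and noisy receive more samples."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = np.argmin(means)                        # index of the current best
    rest = np.arange(len(means)) != b
    ratio = np.empty_like(means)
    ratio[rest] = (stds[rest] / (means[rest] - means[b])) ** 2
    ratio[b] = stds[b] * np.sqrt(np.sum((ratio[rest] / stds[rest]) ** 2))
    alloc = budget * ratio / ratio.sum()
    return np.maximum(1, np.round(alloc)).astype(int)

# four particles: the near-tied second design and the best get the most samples
print(ocba_allocation(means=[1.0, 1.1, 2.0, 3.5], stds=[0.5, 0.6, 0.4, 0.4], budget=100))
```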

  4. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization

    PubMed Central

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Chen, Chun-Hung

    2017-01-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, successfully translating this natural phenomenon to the optimization of complex functions. Many real-life applications of PSO must cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort. PMID:29170617

  5. Characterizing Uncertainty and Variability in PBPK Models ...

    EPA Pesticide Factsheets

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological improvements…

  6. In Search of Determinism-Sensitive Region to Avoid Artefacts in Recurrence Plots

    NASA Astrophysics Data System (ADS)

    Wendi, Dadiyorto; Marwan, Norbert; Merz, Bruno

    As an effort to reduce parameter uncertainties in constructing recurrence plots, and in particular to avoid potential artefacts, this paper presents a technique to derive an artefact-safe region of parameter sets. This technique exploits both deterministic (incl. chaos) and stochastic signal characteristics of recurrence quantification (i.e. diagonal structures). It is useful when the evaluated signal is known to be deterministic. This study focuses on the recurrence plot generated from the reconstructed phase space in order to represent the many real application scenarios in which not all variables describing a system are available (data scarcity). The technique involves random shuffling of the original signal to destroy its original deterministic characteristics. Its purpose is to evaluate whether the determinism values of the original and the shuffled signal remain close together, which would suggest that the recurrence plot might comprise artefacts. The use of such a determinism-sensitive region should be accompanied by standard embedding optimization approaches, e.g. using indices like false nearest neighbor and mutual information, to yield a more reliable recurrence plot parameterization.
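
    The shuffle test above is easy to prototype: compute the determinism (DET) of a recurrence plot for the original signal and for a shuffled copy, and treat parameter sets for which the two values stay close as artefact-prone. A minimal sketch without phase-space embedding; the threshold and test signal are illustrative.

```python
import numpy as np

def det(signal, eps, lmin=2):
    """Determinism: share of recurrence points on diagonal lines of length
    >= lmin, line of identity excluded, no embedding."""
    r = np.abs(signal[:, None] - signal[None, :]) < eps
    n = len(signal)
    total = r.sum() - n                  # recurrence points off the main diagonal
    on_lines = 0
    for k in range(1, n):                # upper diagonals; doubled by symmetry
        d = np.diagonal(r, k).astype(int)
        edges = np.diff(np.concatenate(([0], d, [0])))
        runs = np.where(edges == -1)[0] - np.where(edges == 1)[0]
        on_lines += 2 * runs[runs >= lmin].sum()
    return on_lines / max(total, 1)

rng = np.random.default_rng(3)
x = np.sin(0.3 * np.arange(400)) + 0.1 * rng.normal(size=400)
print(det(x, eps=0.2), det(rng.permutation(x), eps=0.2))
# DET values staying close together would flag (eps, lmin) as artefact-prone
```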

  7. Integrated Arrival and Departure Schedule Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Xue, Min; Zelinski, Shannon

    2014-01-01

    In terminal airspace, integrating arrivals and departures with shared waypoints provides the potential of improving operational efficiency by allowing direct routes when possible. Incorporating stochastic evaluation as a post-analysis process of deterministic optimization, and imposing a safety buffer in deterministic optimization, are two ways to learn and alleviate the impact of uncertainty and to avoid unexpected outcomes. This work presents a third and direct way to take uncertainty into consideration during the optimization. The impact of uncertainty was incorporated into cost evaluations when searching for the optimal solutions. The controller intervention count was computed using a heuristic model and served as another stochastic cost besides total delay. Costs under uncertainty were evaluated using Monte Carlo simulations. The Pareto fronts that contain a set of solutions were identified and the trade-off between delays and controller intervention count was shown. Solutions that shared similar delays but had different intervention counts were investigated. The results showed that optimization under uncertainty could identify compromise solutions on Pareto fronts, which is better than deterministic optimization with extra safety buffers. It helps decision-makers reduce controller intervention while achieving low delays.

  8. FW-CADIS Method for Global and Semi-Global Variance Reduction of Monte Carlo Radiation Transport Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2014-01-01

    This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy-dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.

  9. Test-state approach to the quantum search problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sehrawat, Arun; Nguyen, Le Huy; Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore 117597

    2011-05-15

    The search for 'a quantum needle in a quantum haystack' is a metaphor for the problem of finding out which one of a permissible set of unitary mappings - the oracles - is implemented by a given black box. Grover's algorithm solves this problem with quadratic speedup as compared with the analogous search for 'a classical needle in a classical haystack'. Since the outcome of Grover's algorithm is probabilistic - it gives the correct answer with high probability, not with certainty - the answer requires verification. For this purpose we introduce specific test states, one for each oracle. These test states can also be used to realize 'a classical search for the quantum needle' which is deterministic - it always gives a definite answer after a finite number of steps - and 3.41 times as fast as the purely classical search. Since the test-state search and Grover's algorithm look for the same quantum needle, the average number of oracle queries of the test-state search is the classical benchmark for Grover's algorithm.

  10. Adding flexibility to the search for robust portfolios in non-linear water resource planning

    NASA Astrophysics Data System (ADS)

    Tomlinson, James; Harou, Julien

    2017-04-01

    To date, robust optimisation of water supply systems has sought to find portfolios or strategies that are robust to a range of uncertainties or scenarios. The search for a single portfolio that is robust in all scenarios is necessarily suboptimal compared to portfolios optimised for a single-scenario deterministic future. By contrast, establishing a separate portfolio for each future scenario is unhelpful to the planner, who must make a single decision today under deep uncertainty. In this work we show that a middle ground is possible by allowing a small number of different portfolios to be found that are each robust to a different subset of the global scenarios. We use evolutionary algorithms and a simple water resource system model to demonstrate this approach. The primary contribution is to demonstrate that flexibility can be added to the search for portfolios, in complex non-linear systems, at the expense of complete robustness across all future scenarios. In this context we define flexibility as the ability to design a portfolio in which some decisions are delayed, but those decisions that are not delayed are themselves shown to be robust to the future. We recognise that some decisions in our portfolio are more important than others. An adaptive portfolio is found by allowing no flexibility for these near-term "important" decisions, but maintaining flexibility in the remaining longer-term decisions. In this sense we create an effective two-stage decision process for a non-linear water resource supply system. We show how this reduces a measure of regret versus the inflexible robust solution for the same system.

  11. The Deterministic Origins of Sexism.

    ERIC Educational Resources Information Center

    Perry, Melissa J.; Albee, George W.

    1998-01-01

    Discusses the physical, sexual, and psychological ramifications of biological determinism using examples from the global status of women's health, the continuation of female genital mutilation, and the history of sexist beliefs in psychology that serve a social control function of creating and defining women's psychopathology. (Author/SLD)

  12. Spatial scaling patterns and functional redundancies in a changing boreal lake landscape

    USGS Publications Warehouse

    Angeler, David G.; Allen, Craig R.; Uden, Daniel R.; Johnson, Richard K.

    2015-01-01

    Global transformations extend beyond local habitats; therefore, larger-scale approaches are needed to assess community-level responses and resilience to unfolding environmental changes. Using long-term data (1996–2011), we evaluated spatial patterns and functional redundancies in the littoral invertebrate communities of 85 Swedish lakes, with the objective of assessing their potential resilience to environmental change at regional scales (that is, spatial resilience). Multivariate spatial modeling was used to differentiate groups of invertebrate species exhibiting spatial patterns in composition and abundance (that is, deterministic species) from those lacking spatial patterns (that is, stochastic species). We then determined the functional feeding attributes of the deterministic and stochastic invertebrate species, to infer resilience. Between one and three distinct spatial patterns in invertebrate composition and abundance were identified in approximately one-third of the species; the remainder were stochastic. We observed substantial differences in metrics between deterministic and stochastic species. Functional richness and diversity decreased over time in the deterministic group, suggesting a loss of resilience in regional invertebrate communities. However, taxon richness and redundancy increased monotonically in the stochastic group, indicating the capacity of regional invertebrate communities to adapt to change. Our results suggest that a refined picture of spatial resilience emerges if patterns of both the deterministic and stochastic species are accounted for. Spatially extensive monitoring may help increase our mechanistic understanding of community-level responses and resilience to regional environmental change, insights that are critical for developing management and conservation agendas in this current period of rapid environmental transformation.

  13. Deterministic and stochastic algorithms for resolving the flow fields in ducts and networks using energy minimization

    NASA Astrophysics Data System (ADS)

    Sochi, Taha

    2016-09-01

    Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
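
    For the Newtonian case, the principle reduces to a small exercise: the flow split between two parallel Poiseuille ducts found by minimizing total viscous dissipation reproduces the equal-pressure-drop solution of the conservation-based approach. Geometry and fluid values below are invented for the illustration.

```python
import numpy as np
from scipy.optimize import minimize

def resistance(radius, length, mu=1e-3):
    """Hagen-Poiseuille resistance of a circular duct."""
    return 8.0 * mu * length / (np.pi * radius ** 4)

R = np.array([resistance(1e-3, 1.0), resistance(2e-3, 1.0)])
Q_total = 1e-6   # imposed total volumetric flow rate, m^3/s

def dissipation(q):
    """Total viscous dissipation of two parallel branches; mass conservation
    is built in by giving branch 2 the remainder of the flow."""
    q1 = q[0]
    return R[0] * q1 ** 2 + R[1] * (Q_total - q1) ** 2

q1 = minimize(dissipation, x0=[0.5 * Q_total], method="Nelder-Mead").x[0]
# at the minimum both branches carry the same pressure drop R_i * Q_i,
# as the traditional conservation-based solution requires
print(q1, Q_total - q1, R[0] * q1, R[1] * (Q_total - q1))
```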

  14. Chaos emerging in soil failure patterns observed during tillage: Normalized deterministic nonlinear prediction (NDNP) and its application.

    PubMed

    Sakai, Kenshi; Upadhyaya, Shrinivasa K; Andrade-Sanchez, Pedro; Sviridova, Nina V

    2017-03-01

    Real-world processes are often combinations of deterministic and stochastic processes. Soil failure observed during farm tillage is one example of this phenomenon. In this paper, we investigated the nonlinear features of soil failure patterns in a farm tillage process. We demonstrate determinism emerging in soil failure patterns from stochastic processes under specific soil conditions. We normalize the deterministic nonlinear prediction to account for autocorrelation and propose it as a robust way of extracting a nonlinear dynamical system from noise-contaminated motion. Soil is a typical granular material, so the results obtained here are expected to be applicable to granular materials in general. From the global scale to the nanoscale, granular materials feature in seismology, geotechnology, soil mechanics, and particle technology, and the results and discussion presented here are applicable across these research areas. The proposed method and our findings are useful with respect to the application of nonlinear dynamics to investigate complex motions generated from granular materials.

  15. A Tabu-Search Heuristic for Deterministic Two-Mode Blockmodeling of Binary Network Matrices

    ERIC Educational Resources Information Center

    Brusco, Michael; Steinley, Douglas

    2011-01-01

    Two-mode binary data matrices arise in a variety of social network contexts, such as the attendance or non-attendance of individuals at events, the participation or lack of participation of groups in projects, and the votes of judges on cases. A popular method for analyzing such data is two-mode blockmodeling based on structural equivalence, where…

  16. Fault Tolerant Optimal Control.

    DTIC Science & Technology

    1982-08-01

    subsystem is modelled by deterministic or stochastic finite-dimensional vector differential or difference equations. The parameters of these equations...is no partial differential equation that must be solved. Thus we can sidestep the inability to solve the Bellman equation for control problems with x...transition models and cost functionals can be reduced to the search for solutions of nonlinear partial differential equations using ’verification

  17. Parameter identification using a creeping-random-search algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1971-01-01

    A creeping-random-search algorithm is applied to different types of problems in the field of parameter identification. The studies are intended to demonstrate that a random-search algorithm can be applied successfully to these various problems, which often cannot be handled by conventional deterministic methods, and, also, to introduce methods that speed convergence to an extremal of the problem under investigation. Six two-parameter identification problems with analytic solutions are solved, and two application problems are discussed in some detail. Results of the study show that a modified version of the basic creeping-random-search algorithm chosen does speed convergence in comparison with the unmodified version. The results also show that the algorithm can successfully solve problems that contain limits on state or control variables, inequality constraints (both independent and dependent, and linear and nonlinear), or stochastic models.
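
    The core of a creeping random search is a local random perturbation that is accepted only on improvement, with a step size that gradually shrinks; the shrinking step is one simple convergence-speeding modification of the kind studied above. A minimal sketch on an arbitrary test function.

```python
import numpy as np

def creeping_random_search(f, x0, sigma0=1.0, shrink=0.999, iters=2000, seed=0):
    """Perturb the current point with a shrinking Gaussian step and keep
    the trial point only when it improves the objective."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx, sigma = f(x), sigma0
    for _ in range(iters):
        trial = x + rng.normal(0.0, sigma, size=x.shape)
        ft = f(trial)
        if ft < fx:
            x, fx = trial, ft
        else:
            sigma *= shrink          # creep: contract the search neighbourhood
    return x, fx

rosenbrock = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
print(creeping_random_search(rosenbrock, [-1.0, 2.0]))
```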

  18. Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.

    PubMed

    Ebert, M

    1997-12-01

    This is the final article in a three-part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely those which perform some form of ordered search of the solution space (mathematical programming) and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.

  19. Deterministic quantum annealing expectation-maximization algorithm

    NASA Astrophysics Data System (ADS)

    Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki

    2017-11-01

    Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM depends heavily on initial configurations and can fail to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
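
    The classical deterministic-annealing EM algorithm, a widely known classical relative of DQAEM, conveys the core idea: E-step responsibilities are tempered by an inverse temperature beta that is raised toward 1, flattening the likelihood surface early on to reduce trapping in local optima. A sketch for a one-dimensional Gaussian mixture; the schedule and data are illustrative, and this is not the authors' quantum variant.

```python
import numpy as np

def daem_gmm(x, k=2, betas=(0.2, 0.5, 0.8, 1.0), iters=30, seed=0):
    """Deterministic-annealing EM for a 1-D Gaussian mixture."""
    rng = np.random.default_rng(seed)
    mu, sigma, pi = rng.choice(x, k), np.full(k, x.std()), np.full(k, 1.0 / k)
    for beta in betas:                 # anneal the inverse temperature up to 1
        for _ in range(iters):
            # E-step: responsibilities tempered by beta
            logp = (np.log(pi) - 0.5 * np.log(2 * np.pi * sigma ** 2)
                    - 0.5 * ((x[:, None] - mu) / sigma) ** 2)
            w = np.exp(beta * logp)
            w /= w.sum(axis=1, keepdims=True)
            # M-step: usual weighted updates
            nk = w.sum(axis=0)
            mu = (w * x[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((w * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
            pi = nk / len(x)
    return pi, mu, sigma

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 700)])
print(daem_gmm(data))
```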

  20. Modeling the within-host dynamics of cholera: bacterial-viral interaction.

    PubMed

    Wang, Xueying; Wang, Jin

    2017-08-01

    Novel deterministic and stochastic models are proposed in this paper for the within-host dynamics of cholera, with a focus on the bacterial-viral interaction. The deterministic model is a system of differential equations describing the interaction among the two types of vibrios and the viruses. The stochastic model is a system of Markov jump processes that is derived based on the dynamics of the deterministic model. The multitype branching process approximation is applied to estimate the extinction probability of bacteria and viruses within a human host during the early stage of the bacterial-viral infection. Accordingly, a closed-form expression is derived for the disease extinction probability, and analytic estimates are validated with numerical simulations. The local and global dynamics of the bacterial-viral interaction are analysed using the deterministic model, and the result indicates that there is a sharp disease threshold characterized by the basic reproduction number R0: if R0 ≤ 1, vibrios ingested from the environment into the human body will not cause cholera infection; if R0 > 1, vibrios will grow with increased toxicity and persist within the host, leading to human cholera. In contrast, the stochastic model indicates, more realistically, that there is always a positive probability of disease extinction within the human host.
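
    The branching-process step above reduces to a fixed-point equation: the extinction probability is the smallest root of q = f(q) on [0, 1], with f the offspring probability generating function, and iterating q <- f(q) from 0 converges to it. A single-type sketch with an invented offspring distribution (the paper uses a multitype version).

```python
import numpy as np

# hypothetical offspring distribution: P(0..3 offspring) per generation
p = np.array([0.3, 0.4, 0.2, 0.1])    # mean offspring number = 1.1 > 1

def pgf(s):
    """f(s) = sum_j p[j] * s**j, the offspring probability generating function."""
    return np.polyval(p[::-1], s)

q = 0.0
for _ in range(200):                   # fixed-point iteration from 0
    q = pgf(q)
print(q)                               # ~0.80 here; q < 1 because the mean exceeds 1
```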

  1. Hydraulic tomography of discrete networks of conduits and fractures in a karstic aquifer by using a deterministic inversion algorithm

    NASA Astrophysics Data System (ADS)

    Fischer, P.; Jardani, A.; Lecoq, N.

    2018-02-01

    In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flows numerically in a model and a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and geometry and equivalent transmissivity of the conduits) that are considered unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. This method has been successfully tested on three different theoretical and simplified study cases with hydraulic response data generated from hypothetical karstic models of increasing complexity in the network geometry and the matrix heterogeneity.

  2. Fractional dynamics of globally slow transcription and its impact on deterministic genetic oscillation.

    PubMed

    Wei, Kun; Gao, Shilong; Zhong, Suchuan; Ma, Hong

    2012-01-01

    In dynamical systems theory, a system which can be described by differential equations is called a continuous dynamical system. In studies of genetic oscillation, most early-stage deterministic models are built on ordinary differential equations (ODEs). Gene transcription, a vital part of genetic oscillation, is therefore presupposed to be a continuous dynamical system by default. However, recent studies argued that discontinuous transcription might be more common than continuous transcription. In this paper, by appending the inserted silent interval lying between two neighboring transcriptional events to the end of the preceding event, we established that the running time for an intact transcriptional event increases and gene transcription thus shows slow dynamics. By globally replacing the original time increment for each state increment by a larger one, we introduced fractional differential equations (FDEs) to describe such globally slow transcription. The impact of fractionization on genetic oscillation was then studied in two early-stage models--the Goodwin oscillator and the Rössler oscillator. By constructing a "dual memory" oscillator--the fractional delay Goodwin oscillator--we suggest that the four general requirements for generating genetic oscillation should be revised to: negative feedback, sufficient nonlinearity, sufficient memory, and proper balancing of timescales. The numerical study of the fractional Rössler oscillator implied that globally slow transcription tends to lower the chance of a coupled or more complex nonlinear genetic oscillatory system behaving chaotically.

  3. Genetic information and ecosystem health: arguments for the application of chaos theory to identify boundary conditions for ecosystem management.

    PubMed Central

    Stomp, A M

    1994-01-01

    To meet the demands for goods and services of an exponentially growing human population, global ecosystems will come under increasing human management. The hallmark of successful ecosystem management will be long-term ecosystem stability. Ecosystems and the genetic information and processes which underlie interactions of organisms with the environment in populations and communities exhibit behaviors which have nonlinear characteristics. Nonlinear mathematical formulations describing deterministic chaos have been used successfully to model such systems in physics, chemistry, economics, physiology, and epidemiology. This approach can be extended to ecotoxicology and can be used to investigate how changes in genetic information determine the behavior of populations and communities. This article seeks to provide the arguments for such an approach and to give initial direction to the search for the boundary conditions within which lies ecosystem stability. The identification of a theoretical framework for ecotoxicology and the parameters which drive the underlying model is a critical component in the formulation of a prioritized research agenda and appropriate ecosystem management policy and regulation. PMID:7713038

  4. Potential and flux field landscape theory. I. Global stability and dynamics of spatially dependent non-equilibrium systems.

    PubMed

    Wu, Wei; Wang, Jin

    2013-09-28

    We established a potential and flux field landscape theory to quantify the global stability and dynamics of general spatially dependent non-equilibrium deterministic and stochastic systems. We extended our potential and flux landscape theory for spatially independent non-equilibrium stochastic systems described by Fokker-Planck equations to spatially dependent stochastic systems governed by general functional Fokker-Planck equations as well as functional Kramers-Moyal equations derived from master equations. Our general theory is applied to reaction-diffusion systems. For equilibrium spatially dependent systems with detailed balance, the potential field landscape alone, defined in terms of the steady state probability distribution functional, determines the global stability and dynamics of the system. The global stability of the system is closely related to the topography of the potential field landscape in terms of the basins of attraction and barrier heights in the field configuration state space. The effective driving force of the system is generated by the functional gradient of the potential field alone. For non-equilibrium spatially dependent systems, the curl probability flux field is indispensable in breaking detailed balance and creating non-equilibrium condition for the system. A complete characterization of the non-equilibrium dynamics of the spatially dependent system requires both the potential field and the curl probability flux field. While the non-equilibrium potential field landscape attracts the system down along the functional gradient similar to an electron moving in an electric field, the non-equilibrium flux field drives the system in a curly way similar to an electron moving in a magnetic field. In the small fluctuation limit, the intrinsic potential field as the small fluctuation limit of the potential field for spatially dependent non-equilibrium systems, which is closely related to the steady state probability distribution functional, is found to be a Lyapunov functional of the deterministic spatially dependent system. Therefore, the intrinsic potential landscape can characterize the global stability of the deterministic system. The relative entropy functional of the stochastic spatially dependent non-equilibrium system is found to be the Lyapunov functional of the stochastic dynamics of the system. Therefore, the relative entropy functional quantifies the global stability of the stochastic system with finite fluctuations. Our theory offers an alternative general approach to other field-theoretic techniques, to study the global stability and dynamics of spatially dependent non-equilibrium field systems. It can be applied to many physical, chemical, and biological spatially dependent non-equilibrium systems.

  5. Deterministic walks with inverse-square power-law scaling are an emergent property of predators that use chemotaxis to locate randomly distributed prey

    NASA Astrophysics Data System (ADS)

    Reynolds, A. M.

    2008-07-01

    The results of numerical simulations indicate that deterministic walks with inverse-square power-law scaling are a robust emergent property of predators that use chemotaxis to locate randomly and sparsely distributed stationary prey items. It is suggested that chemotactic destructive foraging accounts for the apparent Lévy flight movement patterns of Oxyrrhis marina microzooplankton in still water containing prey items. This challenges the view that these organisms are executing an innate optimal Lévy flight searching strategy. Crucial for the emergence of inverse-square power-law scaling is the tendency of chemotaxis to occasionally cause predators to miss the nearest prey item, an occurrence which would not arise if prey were located through the employment of a reliable cognitive map or if prey location were visually cued and perfect.

  6. Changes in Examination Performance in English Secondary Schools over the Course of a Decade: Searching for Patterns and Trends over Time

    ERIC Educational Resources Information Center

    Mangan, Jean; Pugh, Geoff; Gray, John

    2005-01-01

    The article explores changes in the examination performance of a random sample of 500 English secondary schools between 1992 and 2001. Using econometric methods, it concludes that: there is an overall deterministic trend in school performance but it is not stable, making prediction accuracy poor; the aggregate trend does not explain improvement…

  7. Boosting association rule mining in large datasets via Gibbs sampling.

    PubMed

    Qian, Guoqi; Rao, Calyampudi Radhakrishna; Sun, Xiaoying; Wu, Yuehua

    2016-05-03

    Current algorithms for association rule mining from transaction data are mostly deterministic and enumerative. They can be computationally intractable even for mining a dataset containing just a few hundred transaction items, if no action is taken to constrain the search space. In this paper, we develop a Gibbs-sampling-induced stochastic search procedure to randomly sample association rules from the itemset space, and perform rule mining from the reduced transaction dataset generated by the sample. Also a general rule importance measure is proposed to direct the stochastic search so that, as a result of the randomly generated association rules constituting an ergodic Markov chain, the overall most important rules in the itemset space can be uncovered from the reduced dataset with probability 1 in the limit. In the simulation study and a real genomic data example, we show how to boost association rule mining by an integrated use of the stochastic search and the Apriori algorithm.

  8. Global seasonal climate predictability in a two tiered forecast system: part I: boreal summer and fall seasons

    NASA Astrophysics Data System (ADS)

    Misra, Vasubandhu; Li, H.; Wu, Z.; DiNapoli, S.

    2014-03-01

    This paper shows demonstrable improvement in the global seasonal climate predictability of boreal summer (at zero lead) and fall (at one-season lead) seasonal mean precipitation and surface temperature from a two-tiered seasonal hindcast forced with forecasted SST, relative to two other contemporary operational coupled ocean-atmosphere climate models. The results from an extensive set of seasonal hindcasts are analyzed to reach this conclusion. This improvement is attributed to: (1) the multi-model bias-corrected SST used to force the atmospheric model; (2) the global atmospheric model, which is run at a relatively high grid resolution of 50 km compared to the two other coupled ocean-atmosphere models; and (3) the physics of the atmospheric model, especially that related to the convective parameterization scheme. The results of the seasonal hindcast are analyzed for both deterministic and probabilistic skill. The probabilistic skill analysis shows that significant forecast skill can be harvested from these seasonal hindcasts relative to the deterministic skill analysis. The paper concludes that the coupled ocean-atmosphere seasonal hindcasts have reached a reasonable fidelity to exploit their SST anomaly forecasts to force such relatively higher-resolution two-tier prediction experiments to glean further boreal summer and fall seasonal prediction skill.
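
    The deterministic/probabilistic split above maps onto familiar verification metrics: correlate the ensemble mean with observations for deterministic skill, and score event probabilities estimated from the ensemble, e.g. with the Brier score, for probabilistic skill. A toy sketch on synthetic anomalies.

```python
import numpy as np

rng = np.random.default_rng(7)
obs = rng.normal(size=200)                          # observed seasonal anomalies
members = obs + rng.normal(0, 0.8, size=(10, 200))  # a 10-member hindcast ensemble

# deterministic skill: anomaly correlation of the ensemble mean
det_skill = np.corrcoef(members.mean(axis=0), obs)[0, 1]

# probabilistic skill: Brier score for the event "anomaly above zero",
# with forecast probability = fraction of members above zero
p_fcst = (members > 0).mean(axis=0)
event = (obs > 0).astype(float)
brier = np.mean((p_fcst - event) ** 2)
print(det_skill, brier)
```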

  9. A Memetic Algorithm for Global Optimization of Multimodal Nonseparable Problems.

    PubMed

    Zhang, Geng; Li, Yangmin

    2016-06-01

    Avoiding local optima is a challenging issue, especially when facing high-dimensional nonseparable problems where the interdependencies among vector elements are unknown. In order to improve the performance of optimization algorithms, a novel memetic algorithm (MA) called cooperative particle swarm optimizer-modified harmony search (CPSO-MHS) is proposed in this paper, where the CPSO is used for local search and the MHS for global search. The CPSO, as a local search method, uses a 1-D swarm to search each dimension separately and thus converges fast. Moreover, according to our experimental results and analyses, it can obtain globally optimal elements. MHS implements the global search by recombining different vector elements and extracting globally optimal elements. The interaction between local search and global search creates a set of local search zones, where globally optimal elements reside within the search space. The CPSO-MHS algorithm is tested and compared with seven other optimization algorithms on a set of 28 standard benchmarks. Meanwhile, some MAs are also compared according to the results derived directly from their corresponding references. The experimental results demonstrate the good performance of the proposed CPSO-MHS algorithm in solving multimodal nonseparable problems.

  10. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is twofold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take location (anterior, lateral, and posterior) into consideration. The deterministic method is based on a rubric that defines failed rib regions according to a threshold on contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between the two methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed-region initiation were lower than those at deterministic failed-region initiation. The probabilistic method predicted more failed regions in the ribs (an analog for fracture) than the deterministic method in all but one case, where they were equal. The failed-region patterns between models are similar; however, differences arise because element elimination reduces stress, causing probabilistic failed regions to continue to accumulate after no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed-region method is more spatially sensitive to failure and more sensitive to belt loads. The probabilistic failed-region method allows increased capability in postprocessing with respect to age, and it predicted more failed regions than the deterministic method due to differences in force distribution.
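
    The contrast between the two rules can be sketched in a few lines: the deterministic rule thresholds strain at the 1.8% value quoted above, while a probabilistic rule maps strain to a failure probability through an age-dependent curve. The logistic form and every parameter below are hypothetical stand-ins, not the study's actual failure functions.

```python
import numpy as np

strains = np.array([0.005, 0.012, 0.019, 0.024])   # peak plastic strain per rib region

# deterministic rule: fail once 1.8% effective plastic strain is exceeded
det_failed = strains > 0.018

def p_fail(strain, age=50.0, mid0=0.020, age_slope=-1e-4, width=0.003):
    """Hypothetical probabilistic rule: logistic in strain, with the
    midpoint strain decreasing for older occupants."""
    mid = mid0 + age_slope * (age - 50.0)
    return 1.0 / (1.0 + np.exp(-(strain - mid) / width))

print(det_failed)                 # binary failed/not-failed regions
print(p_fail(strains, age=65))    # graded probabilities usable in postprocessing
```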

  11. Will systems biology offer new holistic paradigms to life sciences?

    PubMed Central

    Conti, Filippo; Valerio, Maria Cristina; Zbilut, Joseph P.

    2008-01-01

    A biological system, like any complex system, blends stochastic and deterministic features, displaying properties of both. In a certain sense, this blend is exactly what we perceive as the "essence of complexity", given that we tend to consider as non-complex both an ideal gas (fully stochastic and understandable at the statistical level in the thermodynamic limit of a huge number of particles) and a frictionless pendulum (fully deterministic relative to its motion). In this commentary we make the statement that systems biology will have a relevant impact on today's biology if (and only if) it is able to capture the essential character of this blend, which in our opinion is the generation of globally ordered collective modes supported by locally stochastic atomisms. PMID:19003440

  12. Taylorizing Academia, Deskilling Professors and Automating Higher Education: The Recent Role of Moocs

    ERIC Educational Resources Information Center

    Mirrlees, Tanner; Alvi, Shahid

    2014-01-01

    Since 2012, corporations, politicians, journalists and educators have asserted that MOOCs--massive open online courses--are radically changing North American and global education, and for the better. This article offers a counterpoint to the techno-deterministic and optimistic buzz surrounding for-profit MOOCs by contextualizing and analyzing…

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayes, T.; Smith, K.S.; Severino, F.

    A critical capability of the new RHIC low level rf (LLRF) system is the ability to synchronize signals across multiple locations. The 'Update Link' provides this functionality. The 'Update Link' is a deterministic serial data link based on the Xilinx RocketIO protocol that is broadcast over fiber optic cable at 1 gigabit per second (Gbps). The link provides timing events and data packets as well as time stamp information for synchronizing diagnostic data from multiple sources. The new RHIC LLRF was designed to be a flexible, modular system. The system is constructed of numerous independent RF Controller chassis. To provide synchronization among all of these chassis, the Update Link system was designed. The Update Link system provides a low latency, deterministic data path to broadcast information to all receivers in the system. It is based on a central hub, the Update Link Master (ULM), which generates the data stream that is distributed via fiber optic links. Downstream chassis have non-deterministic connections back to the ULM that allow any chassis to provide data that is broadcast globally.

  14. Application of tabu search to deterministic and stochastic optimization problems

    NASA Astrophysics Data System (ADS)

    Gurtuna, Ozgur

    During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well-established and significant progress has been made in the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these two fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, Tabu Search Monte Carlo (TSMC) method, is developed. The theoretical underpinnings of the TSMC method and the flow of the algorithm are explained. Its performance is compared to other existing methods for financial option valuation. In the third, and final, problem, TSMC method is used to determine the conditions of feasibility for hybrid electric vehicles and fuel cell vehicles. There are many uncertainties related to the technologies and markets associated with new generation passenger vehicles. These uncertainties are analyzed in order to determine the conditions in which new generation vehicles can compete with established technologies.
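
    The tabu search loop at the heart of the thesis is compact: always take the best non-tabu neighbouring solution, even when it is worse than the current one, while a tabu list forbids recently used moves so the search cannot cycle straight back into the local optimum it just left. A minimal sketch for a servicing-tour-style problem on random points, using pairwise swaps as the neighbourhood and omitting aspiration criteria.

```python
import numpy as np

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tabu_search_tour(dist, iters=500, tenure=15, seed=0):
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    tour = list(rng.permutation(n))
    best, best_len = tour[:], tour_length(tour, dist)
    tabu = {}                                    # move -> iteration it stays tabu until
    for it in range(iters):
        move, move_len = None, np.inf
        for i in range(n - 1):                   # best admissible 2-swap neighbour
            for j in range(i + 1, n):
                if tabu.get((i, j), -1) > it:
                    continue
                cand = tour[:]
                cand[i], cand[j] = cand[j], cand[i]
                length = tour_length(cand, dist)
                if length < move_len:
                    move, move_len = (i, j), length
        i, j = move
        tour[i], tour[j] = tour[j], tour[i]      # accept even a worsening move
        tabu[(i, j)] = it + tenure
        if move_len < best_len:
            best, best_len = tour[:], move_len
    return best, best_len

pts = np.random.default_rng(2).random((12, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(tabu_search_tour(dist))
```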

  15. National Centers for Environmental Prediction

    Science.gov Websites

  16. Deterministic binary vectors for efficient automated indexing of MEDLINE/PubMed abstracts.

    PubMed

    Wahle, Manuel; Widdows, Dominic; Herskovic, Jorge R; Bernstam, Elmer V; Cohen, Trevor

    2012-01-01

    The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI.
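
    The deterministic trick can be illustrated by deriving each term's binary vector on demand from a hash of the term itself, so no store of random term vectors is needed and any node can regenerate them. This is an illustrative reconstruction under that assumption, not the authors' exact construction; superposing term vectors and binarizing by majority vote is one common way to form document vectors.

```python
import hashlib
import numpy as np

def term_vector(term, n_bits=1024):
    """Deterministic binary vector for a term, expanded from SHA-256 digests
    of "term:counter" so it can be regenerated anywhere, at any time."""
    raw = b""
    counter = 0
    while len(raw) * 8 < n_bits:
        raw += hashlib.sha256(f"{term}:{counter}".encode()).digest()
        counter += 1
    return np.unpackbits(np.frombuffer(raw[:n_bits // 8], dtype=np.uint8))

def document_vector(terms, n_bits=1024):
    """Superpose term vectors in +/-1 form, then binarize by majority vote."""
    acc = np.zeros(n_bits)
    for t in terms:
        acc += 2.0 * term_vector(t, n_bits) - 1.0
    return (acc > 0).astype(np.uint8)

d1 = document_vector(["myocardial", "infarction", "therapy"])
d2 = document_vector(["myocardial", "infarction", "diagnosis"])
print(np.mean(d1 == d2))   # bitwise overlap reflects the shared index terms
```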

  17. Deterministic Binary Vectors for Efficient Automated Indexing of MEDLINE/PubMed Abstracts

    PubMed Central

    Wahle, Manuel; Widdows, Dominic; Herskovic, Jorge R.; Bernstam, Elmer V.; Cohen, Trevor

    2012-01-01

    The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI. PMID:23304369

  18. Data-driven gradient algorithm for high-precision quantum control

    NASA Astrophysics Data System (ADS)

    Wu, Re-Bing; Chu, Bing; Owens, David H.; Rabitz, Herschel

    2018-04-01

    In the quest to achieve scalable quantum information processing technologies, gradient-based optimal control algorithms (e.g., grape) are broadly used for implementing high-precision quantum gates, but their performance is often hindered by deterministic or random errors in the system model and the control electronics. In this paper, we show that grape can be taught to be more effective by jointly learning from the design model and the experimental data obtained from process tomography. The resulting data-driven gradient optimization algorithm (d-grape) can in principle correct all deterministic gate errors, with a mild efficiency loss. The d-grape algorithm may become more powerful with broadband controls that involve a large number of control parameters, while other algorithms usually slow down due to the increased size of the search space. These advantages are demonstrated by simulating the implementation of a two-qubit controlled-not gate.

  19. Global behavior analysis for stochastic system of 1,3-PD continuous fermentation

    NASA Astrophysics Data System (ADS)

    Zhu, Xi; Kliemann, Wolfgang; Li, Chunfa; Feng, Enmin; Xiu, Zhilong

    2017-12-01

    The global behavior of a stochastic system for continuous fermentation in glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae is analyzed in this paper. This bioprocess cannot avoid stochastic perturbations caused by internal and external disturbances, which act on the growth rate. These negative factors can limit and degrade the achievable performance of controlled systems. Based on multiplicity phenomena, the equilibria and bifurcations of the deterministic system are analyzed. Then, a stochastic model is presented by a bounded Markov diffusion process. In order to analyze the global behavior, we compute the control sets for the associated control system. The probability distributions of the relative supports are also computed. The simulation results indicate how the disturbed biosystem tends toward stationary behavior globally.

  20. Reach for the Stars: A Constellational Approach to Ethnographies of Elite Schools

    ERIC Educational Resources Information Center

    Prosser, Howard

    2014-01-01

    This paper offers a method for examining elite schools in a global setting by appropriating Theodor Adorno's constellational approach. I contend that arranging ideas and themes in a non-deterministic fashion can illuminate the social reality of elite schools. Drawing on my own fieldwork at an elite school in Argentina, I suggest that local and…

  1. Potential redistribution of tree species habitat under five climate change scenarios in the eastern US

    Treesearch

    Louis R. Iverson; Anantha M. Prasad

    2002-01-01

    Global climate change could have profound effects on the Earth's biota, including large redistributions of tree species and forest types. We used DISTRIB, a deterministic regression tree analysis model, to examine environmental drivers related to current forest-species distributions and then model potential suitable habitat under five climate change scenarios...

  2. A stochastic chemostat model with an inhibitor and noise independent of population sizes

    NASA Astrophysics Data System (ADS)

    Sun, Shulin; Zhang, Xiaolu

    2018-02-01

    In this paper, a stochastic chemostat model with an inhibitor is considered; the inhibitor is input from an external source, and two organisms in the chemostat compete for a nutrient. Firstly, we show that the system has a unique global positive solution. Secondly, by constructing suitable Lyapunov functions, we show that the time average of the second moment of the solutions of the stochastic model is bounded for relatively small noise. That is, the asymptotic behavior of the stochastic system around the equilibrium points of the deterministic system is studied. However, sufficiently large noise can make the microorganisms become extinct with probability one, even though the solutions of the original deterministic model may be persistent. Finally, the obtained analytical results are illustrated by computer simulations.
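
    Behaviour of this kind is easy to probe with an Euler-Maruyama discretization: multiplicative noise on the growth term leaves the deterministic equilibrium roughly intact when the noise is small and drives the biomass extinct when it is large. A one-species sketch with invented parameters; the paper's model has two competing organisms and an inhibitor.

```python
import numpy as np

# minimal single-species stochastic chemostat:
# dS = (D*(S_in - S) - mu(S)*x/Y) dt,   dx = (mu(S) - D)*x dt + sigma*x dW
D, S_in, Y, mu_max, Ks, sigma = 0.3, 2.0, 0.6, 1.0, 0.5, 0.15
mu = lambda s: mu_max * s / (Ks + s)   # Monod growth rate

dt, steps = 0.01, 20000
rng = np.random.default_rng(5)
S, x = 1.0, 0.1
for _ in range(steps):
    growth = mu(S)
    dW = rng.normal(0.0, np.sqrt(dt))
    S, x = (max(S + (D * (S_in - S) - growth * x / Y) * dt, 0.0),
            max(x + (growth - D) * x * dt + sigma * x * dW, 0.0))
print(S, x)   # rerun with a larger sigma to see the biomass pushed to extinction
```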

  3. System of systems design: Evaluating aircraft in a fleet context using reliability and non-deterministic approaches

    NASA Astrophysics Data System (ADS)

    Frommer, Joshua B.

    This work develops and implements a solution framework for an integrated resource-allocation, system-of-systems problem: designing vehicles for integration into an existing fleet so as to extend that fleet's capability while improving its efficiency. Typically, aircraft design focuses on a specific design mission, whereas a fleet perspective provides a broader view of capability. Aspects of the design of both the vehicles and their missions may be treated, for simplicity, as deterministic or, in a model that better reflects actual conditions, as uncertain. Toward this end, the set of tasks or goals for the to-be-planned system-of-systems is modeled more accurately with non-deterministic values, and the designed platforms are evaluated using reliability analysis. The reliability, defined as the probability that a platform or set of platforms completes the possible missions, contributes to the fitness of the overall system. The framework includes building surrogate models for metrics such as capability and cost, and incorporates reliability into the overall system-level design space. The concurrent design-and-allocation system-of-systems problem is a multi-objective mixed-integer nonlinear programming (MINLP) problem. This study considered two system-of-systems problems that seek to simultaneously design new aircraft and allocate these aircraft into a fleet to provide a desired capability. The Coast Guard's Integrated Deepwater System program inspired the first problem, which consists of a suite of search-and-find missions for aircraft based on descriptions from the National Search and Rescue Manual. The second represents suppression of enemy air defense (SEAD) operations similar to those carried out by the U.S. Air Force, proposed as part of the Department of Defense Network Centric Warfare structure and depicted in MIL-STD-3013. The two problems seem similar, with long surveillance segments, but because of the complex nature of aircraft design, the analysis of a vehicle for high-speed attack combined with a long loiter period is considerably different from that for a quick cruise to an area combined with a low-speed search. However, the framework developed to solve this class of system-of-systems problem handles both scenarios and leads to a solution approach for this kind of problem. At the vehicle level of the problem, different technologies can have an impact at the fleet level. One such technology is morphing, the ability to change shape, an ideal candidate for missions with dissimilar segments such as the two above. A framework using surrogate models based on optimally sized aircraft, with probabilistic parameters defining a concept of operations, is investigated; this has provided insight into the setup of the optimization problem, the use of the reliability metric, and the measurement of fleet-level impacts of morphing aircraft. The research consisted of four phases. The two initial phases built and defined the framework for solving the system-of-systems problem; these investigations used the search-and-find scenario as the example application. The first phase included the design of fixed-geometry and morphing aircraft for a range of missions and evaluated aircraft capability using non-deterministic mission parameters. The second phase introduced the idea of multiple aircraft in a fleet, but considered only a fleet consisting of one aircraft type. The third phase incorporated the simultaneous design of a new vehicle and its allocation into a fleet for the search-and-find scenario; in this phase, multiple types of aircraft are considered. The fourth phase repeated the simultaneous new-aircraft design and fleet allocation for the SEAD scenario to show that the approach is not specific to the search-and-find scenario. The framework presented in this work appears to be a viable approach for concurrently designing and allocating constituents in a system, specifically aircraft in a fleet. The research also shows that the impact of new technology can be assessed at the fleet level using conceptual design principles.

  4. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, the posterior probability is quantified as a function of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generates better deterministic discharges than traditional approaches (weighted-average methods with equal or varying weights and a maximum-likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in the training and validation periods respectively, at least 0.06 and 0.13 higher than the traditional approaches. In addition, with increased training data, the assessment-criteria values of e-Bay fluctuate less than those of the traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic application to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
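
    As a hedged illustration of the averaging step, the sketch below implements plain Bayesian model averaging over an ensemble of simulated discharges in Python; it is a minimal stand-in, not the published e-Bay scheme (which makes the weights depend on discharge magnitude and timing), and the Gaussian error model and sigma value are assumptions.

      import numpy as np

      def bma_weights(sims, obs, sigma):
          # sims: (n_members, n_times) simulated discharge; obs: (n_times,)
          # Gaussian log-likelihood of each member over a training period
          ll = -0.5 * np.sum((sims - obs) ** 2, axis=1) / sigma ** 2
          w = np.exp(ll - ll.max())            # stabilise before normalising
          return w / w.sum()

      def expected_discharge(sims, weights):
          # law of total probability: weighted average over ensemble members
          return weights @ sims

      rng = np.random.default_rng(0)
      obs = 10 + rng.gamma(2.0, 2.0, 365)      # synthetic observed discharge
      sims = obs + rng.normal(0.0, [[0.5], [1.5], [3.0]], (3, 365))
      w = bma_weights(sims, obs, sigma=1.0)
      print(w, expected_discharge(sims, w)[:3])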

  5. Optimal Alignment of Structures for Finite and Periodic Systems.

    PubMed

    Griffiths, Matthew; Niblett, Samuel P; Wales, David J

    2017-10-10

    Finding the optimal alignment between two structures is important for identifying the minimum root-mean-square distance (RMSD) between them and as a starting point for calculating pathways. Most current algorithms for aligning structures are stochastic, scale exponentially with the size of the structure, and can perform unreliably. We present two complementary methods for aligning structures corresponding to isolated clusters of atoms and to condensed matter described by a periodic cubic supercell. The first method (Go-PERMDIST), a branch-and-bound algorithm, locates the global minimum RMSD deterministically in polynomial time, with a run time that increases for larger RMSDs. The second method (FASTOVERLAP) is a heuristic algorithm that aligns structures by finding the global maximum kernel correlation between them using fast Fourier transforms (FFTs) and fast SO(3) transforms (SOFTs). For periodic systems, FASTOVERLAP scales with the square of the number of identical atoms in the system, reliably finds the best alignment between structures that are not too distant, and shows significantly better performance than existing algorithms; the expected run time for Go-PERMDIST is longer. For finite clusters, the FASTOVERLAP algorithm is competitive with existing algorithms. The expected run time for Go-PERMDIST to find the global RMSD between two structures deterministically is generally longer than for existing stochastic algorithms; however, with an earlier exit condition, Go-PERMDIST exhibits similar or better performance.
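
    For a fixed atom-to-atom correspondence (no permutations or periodic images), the optimal rotation that minimises RMSD has a closed-form solution via the Kabsch algorithm; the sketch below shows that deterministic core step, which methods such as Go-PERMDIST and FASTOVERLAP extend to permutational and periodic alignment.

      import numpy as np

      def kabsch_rmsd(P, Q):
          # minimum RMSD between conformations P and Q, each (n_atoms, 3),
          # assuming atom i in P corresponds to atom i in Q
          P = P - P.mean(axis=0)               # remove translation
          Q = Q - Q.mean(axis=0)
          U, S, Vt = np.linalg.svd(P.T @ Q)    # optimal rotation via SVD
          d = np.sign(np.linalg.det(Vt.T @ U.T))
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # exclude reflections
          return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))

      P = np.random.default_rng(0).normal(size=(10, 3))
      c, s = np.cos(0.3), np.sin(0.3)
      Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
      print(kabsch_rmsd(P @ Rz.T, P))          # ~0: a pure rotation is recovered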

  7. Trend analysis of Arctic sea ice extent

    NASA Astrophysics Data System (ADS)

    Silva, M. E.; Barbosa, S. M.; Antunes, Luís; Rocha, Conceição

    2009-04-01

    The extent of Arctic sea ice is a fundamental parameter of Arctic climate variability. In the context of climate change, the area covered by ice in the Arctic is a particularly useful indicator of recent changes in the Arctic environment. Climate models are in near-universal agreement that Arctic sea ice extent will decline through the 21st century as a consequence of global warming, and many studies predict an ice-free Arctic as soon as 2012. Time series of satellite passive-microwave observations make it possible to assess the temporal changes in the extent of Arctic sea ice. Much of the analysis of the ice extent time series, as in most climate studies based on observational data, has focused on the computation of deterministic linear trends by ordinary least squares. However, many different processes, including deterministic, unit-root and long-range dependent processes, can engender trend-like features in a time series. Several parametric tests have been developed, mainly in econometrics, to discriminate between stationarity (no trend), deterministic trends and stochastic trends. Here, these tests are applied in the trend analysis of the sea ice extent time series available at the National Snow and Ice Data Center. The parametric stationarity tests, Augmented Dickey-Fuller (ADF), Phillips-Perron (PP) and KPSS, do not support an overall deterministic trend in the time series of Arctic sea ice extent. Therefore, alternative parametrizations such as long-range dependence should be considered for characterising long-term Arctic sea ice variability.
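
    Two of these tests (ADF and KPSS) are available off the shelf in statsmodels; a Phillips-Perron implementation lives in the separate arch package. A minimal Python sketch, on a synthetic unit-root series standing in for the ice-extent data, looks like this.

      import numpy as np
      from statsmodels.tsa.stattools import adfuller, kpss

      rng = np.random.default_rng(0)
      x = np.cumsum(rng.normal(size=360))      # placeholder: a stochastic trend

      adf_p = adfuller(x)[1]                   # H0: unit root (stochastic trend)
      kpss_p = kpss(x, regression="ct", nlags="auto")[1]  # H0: trend-stationary

      print(f"ADF p={adf_p:.3f}   (small p rejects a unit root)")
      print(f"KPSS p={kpss_p:.3f} (small p rejects trend stationarity)")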

  8. Guaranteed Discrete Energy Optimization on Large Protein Design Problems.

    PubMed

    Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas

    2015-12-08

    In Computational Protein Design (CPD), assuming a rigid backbone and an amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum-energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB of RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full-redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to design sequences that could differ from the optimal sequence by more than 30% of their amino acids.

  9. Introduction to “Global tsunami science: Past and future, Volume I”

    USGS Publications Warehouse

    Geist, Eric L.; Fritz, Hermann; Rabinovich, Alexander B.; Tanioka, Yuichiro

    2016-01-01

    Twenty-five papers on the study of tsunamis are included in Volume I of the PAGEOPH topical issue “Global Tsunami Science: Past and Future”. Six papers examine various aspects of tsunami probability and uncertainty analysis related to hazard assessment. Three papers relate to deterministic hazard and risk assessment. Five more papers present new methods for tsunami warning and detection. Six papers describe new methods for modeling tsunami hydrodynamics. Two papers investigate tsunamis generated by non-seismic sources: landslides and meteorological disturbances. The final three papers describe important case studies of recent and historical events. Collectively, this volume highlights contemporary trends in global tsunami research, both fundamental and applied toward hazard assessment and mitigation.

  11. Stable Satellite Orbits for Global Coverage of the Moon

    NASA Technical Reports Server (NTRS)

    Ely, Todd; Lieb, Erica

    2006-01-01

    A document proposes a constellation of spacecraft to be placed in orbit around the Moon to provide the navigation and communication services with global coverage required for exploration of the Moon. There would be six spacecraft in inclined elliptical orbits: three in each of two orthogonal orbital planes, suggestive of a linked-chain configuration. The orbits have been chosen (1) to provide 99.999-percent global coverage for ten years and (2) to be stable under perturbation by Earth gravitation and solar-radiation pressure, so that no deterministic firing of thrusters would be needed to maintain the orbits. However, a minor amount of orbit control might be needed to correct for such unmodeled effects as outgassing of the spacecraft.

  12. Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M

    2012-08-01

    For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS (Consistent Adjoint Driven Importance Sampling). This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.

  13. A first comprehensive census of fungi in soil reveals both hyperdiversity and fine-scale niche partitioning

    Treesearch

    D. Lee Taylor; Teresa N. Hollingsworth; Jack W. McFarland; Niall J. Lennon; Chad Nusbaum; Roger W. Ruess

    2014-01-01

    Fungi play key roles in ecosystems as mutualists, pathogens, and decomposers. Current estimates of global species richness are highly uncertain, and the importance of stochastic vs. deterministic forces in the assembly of fungal communities is unknown. Molecular studies have so far failed to reach saturated, comprehensive estimates of fungal diversity. To obtain a more...

  14. Chaos in plasma simulation and experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, C.; Newman, D.E.; Sprott, J.C.

    1993-09-01

    We investigate the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas using data from both numerical simulations and experiment. A large repertoire of nonlinear analysis techniques is used to identify low-dimensional chaos. These tools include phase portraits and Poincaré sections, correlation dimension, the spectrum of Lyapunov exponents and short-term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low-dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low-dimensional chaos or other simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
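
    Of the tools listed, the correlation dimension is the most compact to illustrate; below is a hedged Grassberger-Procaccia sketch (delay embedding plus a correlation sum; O(n^2) memory, so suitable only for short series). The dimension estimate is the slope of log C(r) against log r over a scaling region, repeated for increasing embedding dimension m.

      import numpy as np

      def correlation_sum(x, r, m=3, tau=1):
          # delay-embed the scalar series x in m dimensions with delay tau
          x = np.asarray(x, float)
          n = len(x) - (m - 1) * tau
          emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])
          # fraction of distinct pairs of embedded points closer than r
          d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
          iu = np.triu_indices(n, k=1)
          return np.mean(d[iu] < r)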

  15. Modelling the protocol stack in NCS with deterministic and stochastic petri net

    NASA Astrophysics Data System (ADS)

    Hui, Chen; Chunjie, Zhou; Weifeng, Zhu

    2011-06-01

    The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication services and improved system performance. Field testing is currently impractical for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capability. Traditionally, separation between the different layers of the OSI model has been common practice; however, such a layered modelling and analysis framework leads to a lack of global optimisation in protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time-constraint, task-interrelation, processor and bus sub-models from the upper and lower layers (application, data link and physical layers). Cross-layer design can help overcome the inadequacy of global optimisation through information sharing between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help adjust the message-scheduling scheme and obtain better system performance.

  16. Structuring evolution: biochemical networks and metabolic diversification in birds.

    PubMed

    Morrison, Erin S; Badyaev, Alexander V

    2016-08-25

    Recurrence and predictability of evolution are thought to reflect the correspondence between genomic and phenotypic dimensions of organisms, and the connectivity in deterministic networks within these dimensions. Direct examination of the correspondence between opportunities for diversification embedded in such networks and realized diversity is illuminating, but is empirically challenging because both the deterministic networks and phenotypic diversity are modified in the course of evolution. Here we overcome this problem by directly comparing the structure of a "global" carotenoid network - comprising all known enzymatic reactions among naturally occurring carotenoids - with the patterns of evolutionary diversification in carotenoid-producing metabolic networks utilized by birds. We found that phenotypic diversification in carotenoid networks across 250 species was closely associated with the enzymatic connectivity of the underlying biochemical network - compounds with greater connectivity occurred most frequently across species and were the hotspots of metabolic pathway diversification. In contrast, we found no evidence for diversification along the metabolic pathways, corroborating findings that the utilization of the global carotenoid network was not strongly influenced by history in avian evolution. The finding that the diversification in species-specific carotenoid networks is qualitatively predictable from the connectivity of the underlying enzymatic network points to significant structural determinism in phenotypic evolution.

  18. El Niño–Southern Oscillation frequency cascade

    DOE PAGES

    Stuecker, Malte F.; Jin, Fei -Fei; Timmermann, Axel

    2015-10-19

    The El Niño–Southern Oscillation (ENSO) phenomenon, the most pronounced feature of internally generated climate variability, occurs on interannual timescales and impacts the global climate system through an interaction with the annual cycle. The tight coupling between ENSO and the annual cycle is particularly pronounced over the tropical Western Pacific. In this paper, we show that this nonlinear interaction results in a frequency cascade in the atmospheric circulation, which is characterized by deterministic high-frequency variability on near-annual and subannual timescales. Finally, through climate model experiments and observational analysis, it is documented that a substantial fraction of the anomalous Northwest Pacific anticyclone variability, which is the main atmospheric link between ENSO and the East Asian Monsoon system, can be explained by these interactions and is thus deterministic and potentially predictable.

  20. Maximal incompatibility of locally classical behavior and global causal order in multiparty scenarios

    NASA Astrophysics Data System (ADS)

    Baumeler, Ämin; Feix, Adrien; Wolf, Stefan

    2014-10-01

    Quantum theory in a global spacetime gives rise to nonlocal correlations, which cannot be explained causally in a satisfactory way; this motivates the study of theories with reduced global assumptions. Oreshkov, Costa, and Brukner [Nat. Commun. 3, 1092 (2012), 10.1038/ncomms2076] proposed a framework in which quantum theory is valid locally but where, at the same time, no global spacetime, i.e., no predefined causal order, is assumed beyond the absence of logical paradoxes. It was shown for the two-party case, however, that a global causal order always emerges in the classical limit. Quite naturally, it has been conjectured that the same also holds in the multiparty setting. We show that, counter to this belief, there exist classical correlations, locally compatible with classical probability theory, that allow for deterministic signaling between three or more parties incompatible with any predefined causal order.

  1. Bloch-like waves in random-walk potentials based on supersymmetry

    NASA Astrophysics Data System (ADS)

    Yu, Sunkyu; Piao, Xianji; Hong, Jiho; Park, Namkyoo

    2015-09-01

    Bloch's theorem was a major milestone that established the principle of bandgaps in crystals. Although it was once believed that bandgaps could form only under the conditions of periodicity and long-range correlation underlying Bloch's theorem, this restriction was disproved by the discoveries of amorphous media and quasicrystals. While network and liquid models have been suggested for the interpretation of Bloch-like waves in disordered media, these approaches, based on searching for random networks with bandgaps, have failed to create bandgaps deterministically. Here we reveal a deterministic pathway to bandgaps in random-walk potentials by applying the notion of supersymmetry to the wave equation. Inspired by isospectrality, we follow a methodology that contrasts with previous methods: we transform order into disorder while preserving bandgaps. Our approach enables the formation of bandgaps in extremely disordered potentials analogous to Brownian motion, and also allows the tuning of correlations while maintaining identical bandgaps, thereby creating a family of potentials with 'Bloch-like eigenstates'.

  2. Improved Genetic Algorithm Based on the Cooperation of Elite and Inverse-elite

    NASA Astrophysics Data System (ADS)

    Kanakubo, Masaaki; Hagiwara, Masafumi

    In this paper, we propose an improved genetic algorithm based on the combination of the Bee system and Inverse-elitism, both of which are effective strategies for improving GA performance. In the Bee system, each chromosome initially searches for a good solution individually, as a global search. When a chromosome is regarded as a superior one, the other chromosomes search around it. However, since the chromosomes for global search are generated randomly, the Bee system lacks global search ability. In Inverse-elitism, by contrast, an inverse-elite whose gene values are reversed from those of the corresponding elite is produced. This strategy greatly contributes to the diversification of chromosomes, but it lacks local search ability. In the proposed method, Inverse-elitism with a Pseudo-simplex method is employed for the global search of the Bee system in order to strengthen its global search ability, while strong local search ability is also retained. The proposed method thus has synergistic effects of the three strategies. We confirmed the validity and superior performance of the proposed method by computer simulations.
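
    A minimal sketch of the inverse-elite operator for a real-coded GA follows; the box-reflection form is an assumption (for bit-string chromosomes the same idea is a plain bit complement).

      import numpy as np

      def inverse_elite(elite, lo, hi):
          # reflect the elite's genes through the centre of the search box,
          # producing a maximally distant "inverse" individual
          return lo + hi - elite

      elite = np.array([0.8, -0.2, 0.5])
      print(inverse_elite(elite, lo=-1.0, hi=1.0))   # [-0.8  0.2 -0.5]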

  3. Atmospheric Downscaling using Genetic Programming

    NASA Astrophysics Data System (ADS)

    Zerenner, Tanja; Venema, Victor; Simmer, Clemens

    2013-04-01

    Coupling models for the different components of the Soil-Vegetation-Atmosphere system requires up- and downscaling procedures. The subject of our work is the downscaling scheme used to derive high-resolution forcing data for land-surface and subsurface models from coarser atmospheric model output. The current downscaling scheme [Schomburg et al. 2010, 2012] combines a bi-quadratic spline interpolation, deterministic rules and autoregressive noise. For the development of the scheme, training and validation datasets were created by carrying out high-resolution runs of the atmospheric model. The deterministic rules in this scheme are partly based on known physical relations and partly determined by an automated search for linear relationships between the high-resolution fields of the atmospheric model output and high-resolution data on surface characteristics. So far, deterministic rules are available for downscaling surface pressure and, partially, depending on the prevailing weather conditions, for near-surface temperature and radiation. The aim of our work is to improve these rules and to find deterministic rules for the remaining variables that require downscaling, e.g. precipitation or near-surface specific humidity. To accomplish this, we broaden the search by allowing for interdependencies between different atmospheric parameters and for non-linear, non-local and time-lagged relations. To cope with the vast number of possible solutions, we use genetic programming, a machine-learning method based on the principles of natural evolution. We are currently working with GPLAB, a genetic programming toolbox for Matlab. We first tested the GP system by retrieving the known physical rule for downscaling surface pressure, i.e. the hydrostatic equation, from our training data, and found this to be a simple task for the GP system. Furthermore, we improved the accuracy and efficiency of the GP solution by implementing constant variation and optimization as genetic operators. Next, we worked on improving the downscaling rule for the two-meter temperature. We added an if-function with four input arguments to the function set. Since this proved to increase bloat, we additionally modified our fitness function by including penalty terms for both the size of the solutions and the number of intron nodes, i.e. program parts that are never evaluated. Starting from the known downscaling rule for the two-meter temperature, which linearly exploits the orography anomalies, allowed or disallowed by a certain temperature gradient, our GP system was able to find an improvement: the rule produced by the GP clearly shows better performance in reproducing small-scale variability.
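
    The rediscovered surface-pressure rule is the standard hydrostatic reduction; a small sketch, assuming a constant layer temperature (the numbers are illustrative), reads:

      import numpy as np

      G, R_DRY = 9.81, 287.05                  # gravity, dry-air gas constant

      def downscale_pressure(p_coarse, z_coarse, z_fine, temp):
          # p(z_fine) = p(z_coarse) * exp(-g * (z_fine - z_coarse) / (R * T))
          return p_coarse * np.exp(-G * (z_fine - z_coarse) / (R_DRY * temp))

      # coarse cell at 200 m with 1000 hPa, fine pixel orography at 350 m
      print(downscale_pressure(1000.0, 200.0, 350.0, temp=280.0))  # ~981.9 hPa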

  4. On Transform Domain Communication Systems under Spectrum Sensing Mismatch: A Deterministic Analysis.

    PubMed

    Jin, Chuanxue; Hu, Su; Huang, Yixuan; Luo, Qu; Huang, Dan; Li, Yi; Gao, Yuan; Cheng, Shaochi

    2017-07-08

    Towards the era of mobile Internet and the Internet of Things (IoT), numerous sensors and devices are being introduced and interconnected. To support this volume of data traffic, traditional wireless communication technologies face challenges both in the increasing shortage of spectrum resources and in massive multiple access. The transform-domain communication system (TDCS) is considered an alternative multiple-access system, mainly targeted at 5G and mobile IoT. However, previous studies of TDCS assume that the transceiver has global spectrum information, without considering spectrum sensing mismatch (SSM). In this paper, we present a deterministic analysis of TDCS systems under arbitrary given spectrum sensing scenarios, especially the influence of the SSM pattern on the signal-to-noise ratio (SNR) performance. Simulation results show that an arbitrary SSM pattern can lead to inferior bit error rate (BER) performance.

  5. Converting differential-equation models of biological systems to membrane computing.

    PubMed

    Muniyandi, Ravie Chandren; Zin, Abdullah Mohd; Sanders, J W

    2013-12-01

    This paper presents a method to convert the deterministic, continuous representation of a biological system by ordinary differential equations into a non-deterministic, discrete membrane computation. The dynamics of the membrane computation are governed by rewrite rules operating at certain rates. This has the advantage of applying accurately to small systems and of expressing rates of change that are determined locally, by region, but not necessarily globally. Such spatial information augments the standard differentiable approach to provide a more realistic model. A biological case study of the ligand-receptor network of the protein TGF-β is used to validate the effectiveness of the conversion method. It demonstrates the sense in which the behaviours and properties of the system are better preserved in the membrane computing model, suggesting that the proposed conversion method may prove useful for biological systems in particular. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
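
    A hedged miniature of the conversion idea: a single deterministic rate law d[LR]/dt = k[L][R] becomes a discrete rewrite rule L + R -> LR fired stochastically (here with a Gillespie-style clock; the rate constant and counts are illustrative, not taken from the TGF-β case study).

      import numpy as np

      rng = np.random.default_rng(1)
      k, L, R, LR, t = 0.001, 500, 300, 0, 0.0

      while L > 0 and R > 0:
          a = k * L * R                        # propensity of the single rule
          t += rng.exponential(1.0 / a)        # waiting time to the next firing
          L, R, LR = L - 1, R - 1, LR + 1      # apply the rewrite once

      print(f"{LR} complexes formed by t = {t:.2f}")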

  6. The Stochastic Parcel Model: A deterministic parameterization of stochastically entraining convection

    DOE PAGES

    Romps, David M.

    2016-03-01

    Convective entrainment is a process that is poorly represented in existing convective parameterizations. By many estimates, convective entrainment is the leading source of error in global climate models. As a potential remedy, an Eulerian implementation of the Stochastic Parcel Model (SPM) is presented here as a convective parameterization that treats entrainment in a physically realistic and computationally efficient way. Drawing on evidence that convecting clouds comprise air parcels subject to Poisson-process entrainment events, the SPM calculates the deterministic limit of an infinite number of such parcels. For computational efficiency, the SPM groups parcels at each height by their purity, which is a measure of their total entrainment up to that height. This reduces the calculation of convective fluxes to a sequence of matrix multiplications. The SPM is implemented in a single-column model and compared with a large-eddy simulation of deep convection.

  8. Refinement and evaluation of helicopter real-time self-adaptive active vibration controller algorithms

    NASA Technical Reports Server (NTRS)

    Davis, M. W.

    1984-01-01

    A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that give the capability to readily define many different configurations by selecting one of three different controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA), which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.

  9. Ringed Seal Search for Global Optimization via a Sensitive Search Model.

    PubMed

    Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar

    2016-01-01

    The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search for and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. The algorithm mimics the seal pup's movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Reflecting the sensitivity of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states: a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled via a Lévy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than the Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum, and shows an improvement in the balance between exploration (extensive) and exploitation (intensive) of the search space. RSS can efficiently mimic seal pup behavior in finding the best lair, and provides a new algorithm for use in global optimization problems.
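
    The two-state walk at the heart of RSS is easy to sketch; in the snippet below the Lévy-flight component is approximated with a heavy-tailed Cauchy draw and the urgency trigger with a fixed noise probability, both illustrative assumptions rather than the authors' exact operators.

      import numpy as np

      rng = np.random.default_rng(2)

      def rss_step(x, urgent, scale=0.05):
          if urgent:                           # extensive search: long jumps
              return x + rng.standard_cauchy(x.shape) * scale
          return x + rng.normal(0.0, scale, x.shape)   # intensive local search

      x = np.zeros(2)
      for _ in range(100):
          urgent = rng.random() < 0.1          # "predator noise" triggers urgency
          x = rss_step(x, urgent)
      print(x)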

  10. Chaos and simple determinism in reversed field pinch plasmas: Nonlinear analysis of numerical simulation and experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, Christopher A.

    In this dissertation the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas is investigated. To properly assess this possibility, data from both numerical simulations and experiment are analyzed. A large repertoire of nonlinear analysis techniques is used to identify low-dimensional chaos in the data. These tools include phase portraits and Poincaré sections, correlation dimension, the spectrum of Lyapunov exponents and short-term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low-dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low-dimensional chaos or other simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.

  11. Correlation Dimension Estimates of Global and Local Temperature Data.

    NASA Astrophysics Data System (ADS)

    Wang, Qiang

    1995-11-01

    The author has attempted to detect the presence of low-dimensional deterministic chaos in temperature data by estimating the correlation dimension with the Hill estimate recently developed by Mikosch and Wang. There is no convincing evidence of low dimensionality with either the global dataset (Southern Hemisphere monthly average temperatures from 1858 to 1984) or the local temperature dataset (daily minimums at Auckland, New Zealand). Any apparent reduction in the dimension estimates appears to be due largely, if not entirely, to effects of statistical bias; neither series, however, is a purely random stochastic process. The dimension of the climatic attractor may be significantly larger than 10.

  12. Anticolonial climates: physiology, ecology, and global population, 1920s-1950s.

    PubMed

    Bashford, Alison

    2012-01-01

    Historiography on tropical medicine and determinist ideas about climate and racial difference rightly focuses on links with nineteenth- and twentieth-century colonial rule. Occasionally and counterintuitively, however, these ideas have been redeployed as anticolonial argument. This article looks at one such instance: the racial physiology of the Indian economist, ecologist, and anticolonial nationalist Radhakamal Mukerjee (1889-1968). It argues that the explanatory context was the mid-twentieth-century discussion of global population growth, which raised questions of density and belonging to land. Ecology offered a new language and scientific system within which people and place were conceptually integrated, in this instance to anticolonial ends.

  13. A high performance, ad-hoc, fuzzy query processing system for relational databases

    NASA Technical Reports Server (NTRS)

    Mansfield, William H., Jr.; Fleischman, Robert M.

    1992-01-01

    Database queries involving imprecise or fuzzy predicates are currently an evolving area of academic and industrial research. Such queries place severe stress on the indexing and I/O subsystems of conventional database environments since they involve the search of large numbers of records. The Datacycle architecture and research prototype is a database environment that uses filtering technology to perform an efficient, exhaustive search of an entire database. It has recently been modified to include fuzzy predicates in its query processing. The approach obviates the need for complex index structures, provides unlimited query throughput, permits the use of ad-hoc fuzzy membership functions, and provides a deterministic response time largely independent of query complexity and load. This paper describes the Datacycle prototype implementation of fuzzy queries and some recent performance results.
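
    The essential ingredient, scoring every record against a fuzzy membership function during an exhaustive scan, can be hedged into a few lines; the triangular membership and field names below are illustrative only, not the Datacycle interface.

      def about(value, target, tol=20.0):
          # triangular membership: 1 at the target, 0 beyond +/- tol
          return max(0.0, 1.0 - abs(value - target) / tol)

      records = [{"id": 1, "price": 95.0}, {"id": 2, "price": 140.0}]
      hits = sorted(((about(r["price"], 100.0), r) for r in records),
                    key=lambda t: -t[0])
      print([(round(mu, 2), r["id"]) for mu, r in hits if mu > 0.0])  # [(0.75, 1)]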

  14. Comparison of numerical weather prediction based deterministic and probabilistic wind resource assessment methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Draxl, Caroline; Hopson, Thomas

    Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low-resolution NWP model in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse-resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog-ensemble methodology based on MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine-resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble's computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on combining the analog ensemble with intermediate-resolution (e.g., 10-15 km) NWP estimates to considerably reduce the computational burden while providing accurate deterministic estimates and reliable probabilistic assessments.
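
    A toy version of the analog-ensemble step (a single scalar predictor and a plain nearest-analog rule, simplifications of the published method) shows how one coarse dataset yields both a deterministic and a probabilistic estimate.

      import numpy as np

      def analog_ensemble(coarse_hist, obs_hist, coarse_now, k=20):
          # pick the k historical coarse states most similar to the current one
          idx = np.argsort(np.abs(coarse_hist - coarse_now))[:k]
          analogs = obs_hist[idx]              # their matching observations
          return analogs.mean(), analogs       # deterministic mean + ensemble

      rng = np.random.default_rng(4)
      coarse = rng.normal(size=2000)           # historical coarse predictor
      obs = 1.3 * coarse + rng.normal(0.0, 0.3, 2000)   # matching observations
      mean, members = analog_ensemble(coarse, obs, coarse_now=0.5)
      print(mean, members.std())               # estimate and its spread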

  15. Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants

    NASA Astrophysics Data System (ADS)

    Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo

    2017-10-01

    Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy considerable space inside quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selecting the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of the algorithm seeks the best spectral simulation with minimum fitness error relative to the target spectrum, a correlated color temperature (CCT) matching that of the target spectrum, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed for complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for the CCT and CRI calculations, is presented in this paper. A comparative analysis of the M-GEO evolutionary algorithm against the conventional deterministic Levenberg-Marquardt algorithm is also presented.

  16. Threshold matrix for digital halftoning by genetic algorithm optimization

    NASA Astrophysics Data System (ADS)

    Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero

    1998-10-01

    Digital halftoning is used in both low- and high-resolution high-quality printing technologies. Our method is designed mainly for low-resolution ink-jet marking machines, to produce both gray-tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function; to compensate for this, the random dot patterns used are optimized to contain more blue than pink noise. Several such dot-pattern-generator threshold matrices have been created automatically using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of a genetic algorithm and a search method based on local backtracking was developed, together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of the genetic algorithm, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work focused on developing low-resolution marking technology, the resulting family of dot generators can also be applied in other halftoning areas, including high-resolution printing technology.
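
    For context, applying any threshold matrix is ordered dithering; the sketch below uses a standard 4x4 Bayer matrix as a deterministic stand-in for the GA-optimised, blue-noise-shaped matrices the paper evolves.

      import numpy as np

      def halftone(gray, tm):
          # tile the threshold matrix over the image; a pixel fires where its
          # grey level exceeds the local threshold
          h, w = gray.shape
          tiled = np.tile(tm, (h // tm.shape[0] + 1, w // tm.shape[1] + 1))
          return (gray > tiled[:h, :w]).astype(np.uint8)

      bayer = (np.array([[0, 8, 2, 10], [12, 4, 14, 6],
                         [3, 11, 1, 9], [15, 7, 13, 5]]) + 0.5) / 16.0
      img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
      print(halftone(img, bayer).mean())       # ~0.5: mean tone is preserved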

  17. Reliability-based design optimization of reinforced concrete structures including soil-structure interaction using a discrete gravitational search algorithm and a proposed metamodel

    NASA Astrophysics Data System (ADS)

    Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.

    2013-10-01

    A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) and optimizes the structural cost under deterministic and probabilistic constraints. The Monte Carlo simulation (MCS) method is considered the most reliable method for estimating the probabilities of reliability. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least-squares support vector machine (WLS-SVM) and a wavelet kernel function, together called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for RBDO of reinforced concrete structures.

  18. Optimization of a Boiling Water Reactor Loading Pattern Using an Improved Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Yoko; Aiyoshi, Eitaro

    2003-08-15

    A search method based on genetic algorithms (GA) using deterministic operators has been developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). The search method uses improved GA operators: crossover, mutation, and selection. The handling of the encoding technique and constraint conditions is designed so that the GA reflects the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are used effectively to improve the search speed. LP evaluations were performed with a three-dimensional diffusion code that coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and three-dimensional-dependent constraints have always necessitated the use of three-dimensional core simulators for BWRs, so an optimization method is required for computational efficiency. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant applying the Haling technique. In test calculations, candidates that shuffled fresh and burned fuel assemblies were obtained within a reasonable computation time.

  20. Social autopoiesis: A concept in search of a theory

    NASA Astrophysics Data System (ADS)

    Broonen, Jean Paul

    1998-07-01

    This paper is a brief report on the issue of extending the concept of autopoiesis to social systems. The arguments developed by four groups of authors in response to that issue are summarized: Maturana and Varela, the fathers of the concept of autopoiesis; Zeleny and Hufford, who proposed a simple extension of the concept to social systems; Luhmann and Hejl, with two different transformations of the concept; and Morgan, with his metaphorical perspective. The determinist vs. teleological conception of (social) autopoiesis explicitly or implicitly sustained by several of these authors is emphasized.

  1. Stability analysis via the concept of Lyapunov exponents: a case study in optimal controlled biped standing

    NASA Astrophysics Data System (ADS)

    Sun, Yuming; Wu, Christine Qiong

    2012-12-01

    Balancing control is important for biped standing. In spite of large efforts, it is very difficult to design balancing control strategies satisfying three requirements simultaneously: maintaining postural stability, improving energy efficiency and satisfying the constraints between the biped feet and the ground. In this article, a proportional-derivative (PD) controller is proposed for a standing biped, which is simplified as a two-link inverted pendulum with one additional rigid foot-link. The genetic algorithm (GA) is used to search for control gains meeting all three requirements. The stability analysis of such a deterministic biped control system is carried out using the concept of Lyapunov exponents (LEs), based on which the system stability, where the disturbance comes from the initial states, and the structural stability, where the disturbance comes from the PD gains, are examined quantitatively in terms of stability regions. This article contributes to biped balancing control; more significantly, the method shown in the studied biped case provides a general framework of systematic stability analysis for certain deterministic nonlinear dynamical systems.
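
    A compact way to see the LE-based stability check is a Benettin-style two-trajectory estimate; the pendulum below is a simplified stand-in (a single inverted link with illustrative PD gains, not the paper's two-link model with a foot).

      import numpy as np

      def largest_lyapunov(f, x0, eps=1e-8, steps=5000, dt=0.01):
          # average exponential growth rate of a renormalised perturbation
          x = np.asarray(x0, float)
          y = x + eps
          s = 0.0
          for _ in range(steps):
              x = x + dt * f(x)                # explicit-Euler reference orbit
              y = y + dt * f(y)                # perturbed orbit
              d = np.linalg.norm(y - x)
              s += np.log(d / eps)
              y = x + (y - x) * (eps / d)      # rescale the separation
          return s / (steps * dt)

      def pendulum(z, kp=20.0, kd=5.0):        # inverted link with PD feedback
          th, om = z
          return np.array([om, 9.81 * np.sin(th) - kp * th - kd * om])

      print(largest_lyapunov(pendulum, [0.1, 0.0]))  # negative: stable standing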

  2. Deterministic alternatives to the full configuration interaction quantum Monte Carlo method for strongly correlated systems

    NASA Astrophysics Data System (ADS)

    Tubman, Norm; Whaley, Birgitta

    The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently, and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground-state and excited-state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.

  3. Blocked inverted indices for exact clustering of large chemical spaces.

    PubMed

    Thiel, Philipp; Sach-Peltason, Lisa; Ottmann, Christian; Kohlbacher, Oliver

    2014-09-22

    The calculation of pairwise compound similarities based on fingerprints is one of the fundamental tasks in chemoinformatics. Methods for efficient calculation of compound similarities are of the utmost importance for various applications like similarity searching or library clustering. With the increasing size of public compound databases, exact clustering of these databases is desirable, but often computationally prohibitively expensive. We present an optimized inverted index algorithm for the calculation of all pairwise similarities on 2D fingerprints of a given data set. In contrast to other algorithms, it neither requires GPU computing nor yields a stochastic approximation of the clustering. The algorithm has been designed to work well with multicore architectures and shows excellent parallel speedup. As an application example of this algorithm, we implemented a deterministic clustering application, which has been designed to decompose virtual libraries comprising tens of millions of compounds in a short time on current hardware. Our results show that our implementation achieves more than 400 million Tanimoto similarity calculations per second on a common desktop CPU. Deterministic clustering of the available chemical space thus can be done on modern multicore machines within a few days.
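
    The pairwise kernel being evaluated is the Tanimoto coefficient on binary fingerprints; a minimal sketch (fingerprints packed into Python integers, a simplification of the blocked inverted index) is:

      def tanimoto(a: int, b: int) -> float:
          inter = bin(a & b).count("1")        # popcount of the intersection
          union = bin(a).count("1") + bin(b).count("1") - inter
          return inter / union if union else 1.0

      fp1 = 0b10110110
      fp2 = 0b10010111
      print(tanimoto(fp1, fp2))                # 4 common bits / 6 set -> ~0.67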

  4. Modeling Uncertainties in EEG Microstates: Analysis of Real and Imagined Motor Movements Using Probabilistic Clustering-Driven Training of Probabilistic Neural Networks.

    PubMed

    Dinov, Martin; Leech, Robert

    2017-01-01

    Part of the process of EEG microstate estimation involves clustering EEG channel data at the global field power (GFP) maxima, very commonly using a modified K-means (KM) approach. Clustering has also been done deterministically, despite there being uncertainties in multiple stages of the microstate analysis, including the GFP peak definition, the clustering itself and the post-clustering assignment of microstates back onto the EEG timecourse of interest. We perform a fully probabilistic microstate clustering and labeling to account for these sources of uncertainty, using the closest probabilistic analog to KM, called Fuzzy C-means (FCM). We train softmax multi-layer perceptrons (MLPs) using the KM- and FCM-inferred cluster assignments as target labels, to then allow for probabilistic labeling of the full EEG data instead of the usual correlation-based deterministic microstate label assignment typically used. We assess the merits of the probabilistic analysis vs. the deterministic approaches in EEG data recorded while participants perform real or imagined motor movements, from a publicly available data set of 109 subjects. Though FCM group template maps that are almost topographically identical to those of KM were found, there is considerable uncertainty in the subsequent assignment of microstate labels. In general, imagined motor movements are less predictable on a time point-by-time point basis, possibly reflecting the more exploratory nature of the brain state during imagined, compared to real, motor movements. We find that some relationships may be more evident using FCM than KM, and propose that future microstate analysis should preferably be performed probabilistically rather than deterministically, especially in situations such as brain-computer interfaces, where both training and applying models of microstates need to account for uncertainty. Probabilistic neural-network-driven microstate assignment has a number of advantages that we have discussed, and these are likely to be further developed and exploited in future studies. In conclusion, probabilistic clustering and a probabilistic neural-network-driven approach to microstate analysis are likely to better model and reveal the details and variability hidden by current deterministic and binarized microstate assignment and analyses.
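
    A minimal FCM sketch follows (plain Euclidean distances on stand-in data, with illustrative fuzziness m = 2; polarity handling and the MLP labelling stage of the paper are omitted).

      import numpy as np

      def fcm(X, c=4, m=2.0, iters=100, seed=0):
          # soft memberships U (n x c) instead of K-means' hard assignments
          rng = np.random.default_rng(seed)
          U = rng.dirichlet(np.ones(c), size=len(X))
          for _ in range(iters):
              W = U ** m
              centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted means
              d = np.linalg.norm(X[:, None, :] - centers[None], axis=-1) + 1e-12
              U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                               axis=2)
          return centers, U

      X = np.random.default_rng(3).normal(size=(200, 8))  # stand-in for GFP peaks
      centers, U = fcm(X, c=4)
      print(U[:2].round(2))                    # each row sums to 1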

  6. Topology optimization under stochastic stiffness

    NASA Astrophysics Data System (ADS)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities, including, for example, variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that the proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and that results obtained from the proposed methods are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.
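
    As a generic illustration of the PCE machinery (not this work's intrusive formulation), the following sketch projects a scalar response of a single standard-normal input onto probabilists' Hermite polynomials; the response function and expansion order are assumptions.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def pce_coeffs(f, order=6, nquad=40):
    """Project f(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials:
    c_n = E[f(xi) He_n(xi)] / n!  (He_n has squared norm n! under N(0,1))."""
    x, w = He.hermegauss(nquad)          # Gauss-HermiteE nodes and weights
    w = w / sqrt(2 * pi)                 # normalize to the N(0,1) density
    fx = f(x)
    return np.array([np.sum(w * fx * He.hermeval(x, np.eye(order + 1)[n]))
                     / factorial(n) for n in range(order + 1)])

f = lambda xi: np.exp(0.3 * xi)          # toy stochastic response
c = pce_coeffs(f)
mean = c[0]                              # E[f] = c_0
var = sum(c[n] ** 2 * factorial(n) for n in range(1, len(c)))
# compare with the exact lognormal moments:
# mean = exp(0.045) ~ 1.046, var = exp(0.09)*(exp(0.09)-1) ~ 0.103
print(mean, var)
```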

  7. STOCHASTIC OPTICS: A SCATTERING MITIGATION FRAMEWORK FOR RADIO INTERFEROMETRIC IMAGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Michael D., E-mail: mjohnson@cfa.harvard.edu

    2016-12-10

    Just as turbulence in the Earth’s atmosphere can severely limit the angular resolution of optical telescopes, turbulence in the ionized interstellar medium fundamentally limits the resolution of radio telescopes. We present a scattering mitigation framework for radio imaging with very long baseline interferometry (VLBI) that partially overcomes this limitation. Our framework, “stochastic optics,” derives from a simplification of strong interstellar scattering to separate small-scale (“diffractive”) effects from large-scale (“refractive”) effects, thereby separating deterministic and random contributions to the scattering. Stochastic optics extends traditional synthesis imaging by simultaneously reconstructing an unscattered image and its refractive perturbations. Its advantages over direct imaging come from utilizing the many deterministic properties of the scattering—such as the time-averaged “blurring,” polarization independence, and the deterministic evolution in frequency and time—while still accounting for the stochastic image distortions on large scales. These distortions are identified in the image reconstructions through regularization by their time-averaged power spectrum. Using synthetic data, we show that this framework effectively removes the blurring from diffractive scattering while reducing the spurious image features from refractive scattering. Stochastic optics can provide significant improvements over existing scattering mitigation strategies and is especially promising for imaging the Galactic Center supermassive black hole, Sagittarius A*, with the Global mm-VLBI Array and with the Event Horizon Telescope.

  8. Collinearity Impairs Local Element Visual Search

    ERIC Educational Resources Information Center

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  9. Implementation and verification of global optimization benchmark problems

    NASA Astrophysics Data System (ADS)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, as well as interval estimates of the function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
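
    As an illustration of the interval-estimate idea (a toy sketch, not the authors' C++ library), a natural interval extension of an expression yields guaranteed bounds on a function over a box; the example expression is an assumption.

```python
class Interval:
    """Minimal interval arithmetic for natural interval extensions."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def f(x, y):               # works on numbers and on Intervals alike
    return x * x + x * y   # toy benchmark expression

# Encloses (possibly loosely) the true range of f on the box:
print(f(Interval(-1, 2), Interval(0, 1)))  # -> [-3, 6]
```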

  10. Traveling Salesman Problem for Surveillance Mission Using Particle Swarm Optimization

    DTIC Science & Technology

    2001-03-20

    [Search-hit fragments only; no full abstract available:] design of experiments, results of the experiments, and qualitative and quantitative analysis... conclusions and recommendations based on the qualitative and... comparison between the Lin-Kernighan (LK) algorithm and a non-deterministic algorithm... fine tuning of PSO (experiments 5, 6, and 10; see 4.2.1.2).

  11. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quock, D. E. R.; Cianciarulo, M. B.; APS Engineering Support Division

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  12. Noise-induced symmetry breaking far from equilibrium and the emergence of biological homochirality

    NASA Astrophysics Data System (ADS)

    Jafarpour, Farshid; Biancalani, Tommaso; Goldenfeld, Nigel

    2017-03-01

    The origin of homochirality, the observed single-handedness of biological amino acids and sugars, has long been attributed to autocatalysis, a frequently assumed precursor for early life self-replication. However, the stability of homochiral states in deterministic autocatalytic systems relies on cross-inhibition of the two chiral states, an unlikely scenario for early life self-replicators. Here we present a theory for a stochastic individual-level model of autocatalytic prebiotic self-replicators that are maintained out of thermal equilibrium. Without chiral inhibition, the racemic state is the global attractor of the deterministic dynamics, but intrinsic multiplicative noise stabilizes the homochiral states. Moreover, we show that this noise-induced bistability is robust with respect to diffusion of molecules of opposite chirality, and systems of diffusively coupled autocatalytic chemical reactions synchronize their final homochiral states when the self-replication is the dominant production mechanism for the chiral molecules. We conclude that nonequilibrium autocatalysis is a viable mechanism for homochirality, without imposing additional nonlinearities such as chiral inhibition.
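
    As a rough illustration of noise-induced symmetry breaking (a toy birth-death caricature under assumed rates, not the paper's model), a Gillespie simulation of two competing enantiomers shows how a racemic start can drift to an absorbing homochiral state.

```python
import random

def gillespie_chirality(n0=10, k_rep=1.0, k_dec=0.2, t_max=1000.0, seed=1):
    """Gillespie SSA for a toy enantiomer competition (rates are assumptions):
    D -> 2D and L -> 2L at rate k_rep per molecule, throttled by a shared
    substrate pool; D -> 0 and L -> 0 at rate k_dec per molecule.
    Extinction of either species is absorbing, so noise alone can carry a
    racemic start (D = L) to a homochiral state."""
    rng = random.Random(seed)
    D = L = n0
    cap = 4 * n0                      # substrate pool caps the population
    t = 0.0
    while t < t_max and D + L > 0:
        free = max(cap - D - L, 0) / cap
        rates = [k_rep * D * free, k_rep * L * free, k_dec * D, k_dec * L]
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)   # time to the next reaction
        r = rng.uniform(0, total)     # pick which reaction fires
        if r < rates[0]:
            D += 1
        elif r < rates[0] + rates[1]:
            L += 1
        elif r < rates[0] + rates[1] + rates[2]:
            D -= 1
        else:
            L -= 1
    return D, L

D, L = gillespie_chirality()
print(D, L, "chiral order:", (D - L) / max(D + L, 1))  # often near +1 or -1
```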

  13. A robust multi-objective global supplier selection model under currency fluctuation and price discount

    NASA Astrophysics Data System (ADS)

    Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman

    2017-06-01

    A robust supplier selection problem is addressed in a scenario-based approach, where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear programming model is developed; then, the robust counterpart of the proposed mixed integer linear programming model is presented using a recent extension of robust optimization theory. Decision variables are treated, respectively, by a two-stage stochastic planning model, a robust stochastic optimization planning model that integrates the worst-case scenario into the modeling approach, and finally an equivalent deterministic planning model. An experimental study is carried out to compare the performances of the three models. The robust model resulted in remarkable cost savings, illustrating that such uncertainties should be considered in advance in planning. In our case study, different suppliers were selected because of these uncertainties; since supplier selection is a strategic decision, it is crucial to account for these uncertainties in the planning approach.

  14. Damage detection of structures identified with deterministic-stochastic models using seismic data.

    PubMed

    Huang, Ming-Chih; Wang, Yen-Po; Chang, Ming-Lian

    2014-01-01

    A deterministic-stochastic subspace identification method is adopted and experimentally verified in this study to identify the equivalent single-input-multiple-output system parameters of the discrete-time state equation. The method of the damage locating vector (DLV) is then considered for damage detection. A series of shaking table tests using a five-storey steel frame has been conducted. Both single and multiple damage conditions at various locations have been considered. In the system identification analysis, either full or partial observation conditions have been taken into account. It has been shown that the damaged stories can be identified from the global responses of the structure to earthquakes, provided the structure is sufficiently observed. In addition to detecting damage with respect to the intact structure, identification of new or extended damage relative to the as-damaged counterpart has also been studied. This study gives further insights into the scheme in terms of effectiveness, robustness, and limitations for damage localization of frame systems.

  15. A Deterministic Model to Quantify Risk and Guide Mitigation Strategies to Reduce Bluetongue Virus Transmission in California Dairy Cattle

    PubMed Central

    Mayo, Christie; Shelley, Courtney; MacLachlan, N. James; Gardner, Ian; Hartley, David; Barker, Christopher

    2016-01-01

    The global distribution of bluetongue virus (BTV) has been changing recently, perhaps as a result of climate change. To evaluate the risk of BTV infection and transmission in a BTV-endemic region of California, sentinel dairy cows were evaluated for BTV infection, and populations of Culicoides vectors were collected at different sites using carbon dioxide. A deterministic model was developed to quantify risk and guide future mitigation strategies to reduce BTV infection in California dairy cattle. The greatest risk of BTV transmission was predicted within the warm Central Valley of California that contains the highest density of dairy cattle in the United States. Temperature and parameters associated with Culicoides vectors (transmission probabilities, carrying capacity, and survivorship) had the greatest effect on BTV’s basic reproduction number, R0. Based on these analyses, optimal control strategies for reducing BTV infection risk in dairy cattle will be highly reliant upon early efforts to reduce vector abundance during the months prior to peak transmission. PMID:27812161

  16. Visual Search in ASD: Instructed versus Spontaneous Local and Global Processing

    ERIC Educational Resources Information Center

    Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2016-01-01

    Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual…

  17. Global Statistical Learning in a Visual Search Task

    ERIC Educational Resources Information Center

    Jones, John L.; Kaschak, Michael P.

    2012-01-01

    Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…

  18. A chaotic model for the plague epidemic that has occurred in Bombay at the end of the 19th century

    NASA Astrophysics Data System (ADS)

    Mangiarotti, Sylvain

    2015-04-01

    The plague epidemic that occurred in Bombay at the end of the 19th century was detected in 1896. One year before, an Advisory Committee had been appointed by the Secretary of State for India, the Royal Society, and the Lister Institute. This Committee made numerous investigations and gathered a large panel of data, including the numbers of people attacked by and dying from the plague, records of rat and flea populations, and meteorological records of temperature and humidity [1]. The global modeling technique [2] aims to obtain low-dimensional models able to simulate the observed cycles from time series. As far as we know, this technique has been applied to only one case of epidemiological analysis (the whooping cough infection), based on a discrete formulation [3]. In the present work, the continuous-time formulation of this technique is used to analyze the time evolution of the plague epidemic from this data set. A low-dimensional model (three variables) is obtained, exhibiting a period-5 limit cycle. Chaotic behavior could be derived from this model by tuning the model parameters. It provides a strong argument for a dynamical behavior that can be approximated by low-dimensional deterministic equations. This model also provides an empirical argument for chaos in epidemics. [1] Verjbitski D. T., Bannerman W. B. & Kápadiâ R. T., 1908. Reports on Plague Investigations in India (May 1908), The Journal of Hygiene, 8(2), 161-308. [2] Mangiarotti S., Coudret R., Drapeau L. & Jarlan L., 2012. Polynomial search and Global modelling: two algorithms for modeling chaos. Physical Review E, 86(4), 046205. [3] Boudjema G. & Cazelles B., 2003. Extraction of nonlinear dynamics from short and noisy time series. Chaos, Solitons and Fractals, 12, 2051-2069.

  19. An effective PSO-based memetic algorithm for flow shop scheduling.

    PubMed

    Liu, Bo; Wang, Ling; Jin, Yi-Hui

    2007-02-01

    This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective to minimize the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used in SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. Simulation results based on benchmarks demonstrate the effectiveness of the PSOMA. Additionally, the effects of some parameters on optimization performances are also discussed.
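
    For illustration, here is a sketch of the ranked-order-value (ROV) rule described in the abstract, which maps a particle's continuous position vector to a job permutation; the toy position is an assumption.

```python
# Ranked-order-value (ROV) rule: the smallest position value gets job 1,
# the next smallest job 2, and so on, turning a continuous PSO particle
# into a permutation of jobs for the flow shop.
def rov(position):
    order = sorted(range(len(position)), key=lambda i: position[i])
    perm = [0] * len(position)
    for rank, idx in enumerate(order, start=1):
        perm[idx] = rank
    return perm

print(rov([0.8, -1.2, 0.3, 2.5]))  # -> [3, 1, 2, 4]
```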

  20. An experimental study of search in global social networks.

    PubMed

    Dodds, Peter Sheridan; Muhamad, Roby; Watts, Duncan J

    2003-08-08

    We report on a global social-search experiment in which more than 60,000 e-mail users attempted to reach one of 18 target persons in 13 countries by forwarding messages to acquaintances. We find that successful social search is conducted primarily through intermediate to weak strength ties, does not require highly connected "hubs" to succeed, and, in contrast to unsuccessful social search, disproportionately relies on professional relationships. By accounting for the attrition of message chains, we estimate that social searches can reach their targets in a median of five to seven steps, depending on the separation of source and target, although small variations in chain lengths and participation rates generate large differences in target reachability. We conclude that although global social networks are, in principle, searchable, actual success depends sensitively on individual incentives.

  1. An MPI + $X$ implementation of contact global search using Kokkos

    DOE PAGES

    Hansen, Glen A.; Xavier, Patrick G.; Mish, Sam P.; ...

    2015-10-05

    This paper describes an approach that seeks to parallelize the spatial search associated with computational contact mechanics. In contact mechanics, the purpose of the spatial search is to find “nearest neighbors,” which is the prelude to an imprinting search that resolves the interactions between the external surfaces of contacting bodies. In particular, we are interested in the contact global search portion of the spatial search associated with this operation on domain-decomposition-based meshes. Specifically, we describe an implementation that combines standard domain-decomposition-based MPI-parallel spatial search with thread-level parallelism (MPI-X) available on advanced computer architectures (those with GPU coprocessors). Our goal is to demonstrate the efficacy of the MPI-X paradigm in the overall contact search. Standard MPI-parallel implementations typically use a domain decomposition of the external surfaces of bodies within the domain in an attempt to efficiently distribute computational work. This decomposition may or may not be the same as the volume decomposition associated with the host physics. The parallel contact global search phase is then employed to find and distribute surface entities (nodes and faces) that are needed to compute contact constraints between entities owned by different MPI ranks without further inter-rank communication. Key steps of the contact global search include computing bounding boxes, building surface entity (node and face) search trees, and finding and distributing entities required to complete on-rank (local) spatial searches. To enable source-code portability and performance across a variety of different computer architectures, we implemented the algorithm using the Kokkos hardware abstraction library. While we targeted development towards machines with a GPU accelerator per MPI rank, we also report performance results for OpenMP with a conventional multi-core compute node per rank. Results demonstrate a 47% decrease in the time spent within the global search algorithm, comparing the reference ACME algorithm with the GPU implementation, on an 18M-face problem using four MPI ranks. While further work remains to maximize performance on the GPU, this result illustrates the potential of the proposed implementation.
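
    As a language-neutral illustration in Python (Kokkos itself is C++), the following sketch shows the bounding-box stage of a contact global search: compute axis-aligned bounding boxes for faces and flag overlapping candidate pairs; the toy geometry is an assumption.

```python
import itertools

def aabb(points):
    """Axis-aligned bounding box of a face given its corner points."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def overlaps(a, b, tol=0.0):
    """Two AABBs overlap iff their extents overlap on every axis."""
    (alo, ahi), (blo, bhi) = a, b
    return all(alo[k] - tol <= bhi[k] and blo[k] - tol <= ahi[k]
               for k in range(3))

faces = {
    "f0": [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    "f1": [(0.5, 0.5, -0.1), (1.5, 0.5, 0.1), (0.5, 1.5, 0.0)],
    "f2": [(5, 5, 5), (6, 5, 5), (5, 6, 5)],
}
boxes = {name: aabb(pts) for name, pts in faces.items()}
pairs = [(m, n) for m, n in itertools.combinations(boxes, 2)
         if overlaps(boxes[m], boxes[n])]
print(pairs)  # candidate pairs for the fine (imprinting) search
```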

  2. Exhaustive Versus Randomized Searchers for Nonlinear Optimization in 21st Century Computing: Solar Application

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; AliShaykhian, Gholam

    2010-01-01

    We present a simple multi-dimensional exhaustive search method to obtain, in a reasonable time, the optimal solution of a nonlinear programming problem. It is all the more relevant in the present-day non-mainframe computing scenario, where an estimated 95% of computing resources remain unutilized and computing speed touches petaflops. Processor speed is doubling every 18 months, bandwidth every 12 months, and hard disk space every 9 months. A randomized search algorithm or, equivalently, an evolutionary search method is often used instead of an exhaustive search algorithm, the reason being that a randomized approach is usually polynomial-time, i.e., fast, while an exhaustive search method is exponential-time, i.e., slow. We discuss the increasing importance of exhaustive search in optimization, given the steady increase of computing power, for solving many real-world problems of reasonable size. We also discuss the computational error and complexity of the search algorithm, noting that no measuring device can usually measure a quantity with an accuracy greater than 0.005%. We stress that the quality of solution of the exhaustive search - a deterministic method - is better than that of a randomized search. In the 21st-century computing environment, exhaustive search cannot be set aside as untouchable, and it is not always exponential. We also describe a possible application of these algorithms in improving the efficiency of solar cells - a highly topical problem in the current energy crisis. These algorithms could be excellent tools in the hands of experimentalists and could not only save a large amount of time needed for experiments but also quickly validate theory against experimental results.
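
    A minimal sketch of the kind of multi-dimensional exhaustive (grid) search the authors advocate; the objective, bounds, and resolution are assumptions.

```python
import itertools

def exhaustive_search(f, bounds, steps=200):
    """Evaluate f on a uniform grid over `bounds` and return the best point.
    Deterministic: the same grid always yields the same answer."""
    axes = [[lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
            for lo, hi in bounds]
    return min(itertools.product(*axes), key=f)

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + 0.1 * x[0] * x[1]
best = exhaustive_search(f, bounds=[(-5, 5), (-5, 5)])
print(best, f(best))  # near the true minimizer, at grid resolution
```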

  3. Atmospheric and oceanographic research review, 1978. [global weather, ocean/air interactions, and climate

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Research activities related to global weather, ocean/air interactions, and climate are reported. The global weather research is aimed at improving the assimilation of satellite-derived data in weather forecast models, developing analysis/forecast models that can more fully utilize satellite data, and developing new measures of forecast skill to properly assess the impact of satellite data on weather forecasting. The oceanographic research goal is to understand and model the processes that determine the general circulation of the oceans, focusing on those processes that affect sea surface temperature and oceanic heat storage, which are the oceanographic variables with the greatest influence on climate. The climate research objective is to support the development and effective utilization of space-acquired data systems in climate forecast models, and to conduct sensitivity studies to determine the effect of lower boundary conditions on climate and predictability studies to determine which global climate features can be modeled either deterministically or statistically.

  4. Target Search & Selection for the DI/EPOXI Spacecraft

    NASA Technical Reports Server (NTRS)

    Grebow, Daniel J.; Bhaskaran, Shyam; Chesley, Steven R.

    2012-01-01

    Upon completion of the Hartley 2 flyby in November 2010, the Deep Impact (DI) spacecraft resided in a solar orbit without possibility for gravity assist with any large body. Conservative estimates of remaining fuel were enough to provide only an 18 m/s impulse on the spacecraft. We present our method and results of our systematic scan of potential small body encounters for DI, and our criteria to narrow the selection to the asteroid 2002 GT as the target flyby body. The mission profile has two deterministic maneuvers to achieve the encounter, the first of which executed on November 25, 2011.

  6. Fundamental resource-allocating model in colleges and universities based on Immune Clone Algorithms

    NASA Astrophysics Data System (ADS)

    Ye, Mengdie

    2017-05-01

    In this thesis, the optimal course arrangement is converted into combinations of antibodies and antigens, by analogy with Immune Clone Algorithms. According to the character of these algorithms, cloning, clonal mutation, and clonal selection are applied to arrange courses. The clone operator combines evolutionary search with random search, and global search with local search. By cloning and clone-mutating candidate solutions, the global optimal solution can be found quickly.

  7. Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency

    PubMed Central

    Sripati, Arun P.; Olson, Carl R.

    2010-01-01

    Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054

  8. Monte-Carlo Tree Search in Settlers of Catan

    NASA Astrophysics Data System (ADS)

    Szita, István; Chaslot, Guillaume; Spronck, Pieter

    Games are considered important benchmark opportunities for artificial intelligence research. Modern strategic board games can typically be played by three or more people, which makes them suitable test beds for investigating multi-player strategic decision making. Monte-Carlo Tree Search (MCTS) is a recently published family of algorithms that has achieved successful results with classical, two-player, perfect-information games such as Go. In this paper we apply MCTS to the multi-player, non-deterministic board game Settlers of Catan. We implemented an agent that is able to play against computer-controlled and human players. We show that MCTS can be adapted successfully to multi-agent environments, and present two approaches for providing the agent with a limited amount of domain knowledge. Our results show that the agent has considerable playing strength when compared to a game implementation with existing heuristics. We may thus conclude that MCTS is a suitable tool for building a strong Settlers of Catan player.
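
    As a generic illustration of the MCTS machinery (not the Settlers of Catan agent), here is a compact UCT implementation on a toy take-away game; the game, exploration constant, and iteration budget are assumptions.

```python
import math, random

# Toy game: players alternately remove 1-3 stones; whoever takes the
# last stone wins. Losing positions are multiples of 4.
MOVES = (1, 2, 3)

class Node:
    def __init__(self, stones, player, parent=None):
        self.stones, self.player, self.parent = stones, player, parent
        self.children, self.visits, self.wins = {}, 0, 0.0

    def ucb_child(self, c=1.4):
        """UCT rule: exploit win rate, explore rarely visited children."""
        return max(self.children.values(),
                   key=lambda n: n.wins / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def rollout(stones, player):
    while stones > 0:
        stones -= random.choice([m for m in MOVES if m <= stones])
        player = 1 - player
    return 1 - player            # the player who just moved took the last stone

def mcts(root, iters=2000):
    for _ in range(iters):
        node = root
        # 1. Selection: descend while the node is fully expanded
        while node.stones > 0 and len(node.children) == len(
                [m for m in MOVES if m <= node.stones]):
            node = node.ucb_child()
        # 2. Expansion: add one untried move
        if node.stones > 0:
            m = random.choice([m for m in MOVES
                               if m <= node.stones and m not in node.children])
            node.children[m] = Node(node.stones - m, 1 - node.player, node)
            node = node.children[m]
        # 3. Simulation: random playout from the new node
        winner = rollout(node.stones, node.player)
        # 4. Backpropagation: credit nodes from their parent's perspective
        while node:
            node.visits += 1
            node.wins += 1.0 if winner != node.player else 0.0
            node = node.parent
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print(mcts(Node(10, player=0)))  # optimal move is 2 (leave a multiple of 4)
```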

  9. The fully actuated traffic control problem solved by global optimization and complementarity

    NASA Astrophysics Data System (ADS)

    Ribeiro, Isabel M.; de Lurdes de Oliveira Simões, Maria

    2016-02-01

    Global optimization and complementarity are used to determine the signal timing for fully actuated traffic control, in terms of effective green and red times in each cycle. The average values of these parameters can be used to estimate the control delay of vehicles. In this article, a two-phase queuing system for a signalized intersection is outlined, based on the principle of minimizing the total waiting time of the vehicles. The underlying model results in a linear program with linear complementarity constraints, solved by a sequential complementarity algorithm. Departure rates of vehicles during green and yellow periods were treated as deterministic, while arrival rates of vehicles were assumed to follow a Poisson distribution. Several traffic scenarios were created and solved. The numerical results reveal that it is possible to use global optimization and complementarity over a reasonable number of cycles and to determine effective green and red times for a signalized intersection efficiently.

  10. Discovering Motifs in Biological Sequences Using the Micron Automata Processor.

    PubMed

    Roy, Indranil; Aluru, Srinivas

    2016-01-01

    Finding approximately conserved sequences, called motifs, across multiple DNA or protein sequences is an important problem in computational biology. In this paper, we consider the (l, d) motif search problem of identifying one or more motifs of length l present in at least q of the n given sequences, with each occurrence differing from the motif in at most d substitutions. The problem is known to be NP-complete, and the largest solved instance reported to date is (26,11). We propose a novel algorithm for the (l,d) motif search problem using streaming execution over a large set of non-deterministic finite automata (NFA). This solution is designed to take advantage of the micron automata processor, a new technology close to deployment that can simultaneously execute multiple NFA in parallel. We demonstrate the capability for solving much larger instances of the (l, d) motif search problem using the resources available within a single automata processor board, by estimating run-times for problem instances (39,18) and (40,17). The paper serves as a useful guide to solving problems using this new accelerator technology.
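
    For scale, here is a brute-force sketch of the (l, d) motif test defined in the abstract (the automata processor parallelizes exactly this kind of inner matching); the toy sequences and parameters are assumptions.

```python
from itertools import product

def within_d(motif, seq, d):
    """True if some length-|motif| window of seq is within d mismatches."""
    l = len(motif)
    return any(sum(a != b for a, b in zip(motif, seq[i:i + l])) <= d
               for i in range(len(seq) - l + 1))

def motif_search(seqs, l, d, q):
    """Enumerate all 4**l candidate motifs; keep those occurring (with up
    to d substitutions) in at least q sequences. Exponential in l, which
    is why instances like (26,11) demand massive parallelism."""
    return [''.join(m) for m in product("ACGT", repeat=l)
            if sum(within_d(''.join(m), s, d) for s in seqs) >= q]

seqs = ["ACGTACGTGG", "CCGTACGAGT", "TTCGTACGAA"]
print(motif_search(seqs, l=5, d=1, q=3))  # includes "CGTAC" and neighbors
```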

  11. Ultra-low roughness magneto-rheological finishing for EUV mask substrates

    NASA Astrophysics Data System (ADS)

    Dumas, Paul; Jenkins, Richard; McFee, Chuck; Kadaksham, Arun J.; Balachandran, Dave K.; Teki, Ranganath

    2013-09-01

    EUV mask substrates, made of titania-doped fused silica, ideally require sub-Angstrom surface roughness, sub-30 nm flatness, and no bumps/pits larger than 1 nm in height/depth. To achieve the above specifications, substrates must undergo iterative global and local polishing processes. Magnetorheological finishing (MRF) is a local polishing technique which can accurately and deterministically correct substrate figure, but typically results in a higher surface roughness than the current requirements for EUV substrates allow. We describe a new super-fine MRF® polishing fluid which is able to meet both flatness and roughness specifications for EUV mask blanks. This eases the burden on the subsequent global polishing process by decreasing the polishing time, and hence the defectivity and extent of figure distortion.

  12. Software Assessment of the Global Force Management (GFM) Search Capability Study

    DTIC Science & Technology

    2017-02-01

    [Search-hit fragments only; no full abstract available:] report by Timothy Hanratty, Mark Mittrick, Alex Vertlieb, and Frederick Brundick, US Army Research Laboratory; approved for public release; distribution...

  13. Global Optimal Trajectory in Chaos and NP-Hardness

    NASA Astrophysics Data System (ADS)

    Latorre, Vittorio; Gao, David Yang

    This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic numerical error accumulation. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit, and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.

  14. The Non-Signalling theorem in generalizations of Bell's theorem

    NASA Astrophysics Data System (ADS)

    Walleczek, J.; Grössing, G.

    2014-04-01

    Does "epistemic non-signalling" ensure the peaceful coexistence of special relativity and quantum nonlocality? The possibility of an affirmative answer is of great importance to deterministic approaches to quantum mechanics given recent developments towards generalizations of Bell's theorem. By generalizations of Bell's theorem we here mean efforts that seek to demonstrate the impossibility of any deterministic theories to obey the predictions of Bell's theorem, including not only local hidden-variables theories (LHVTs) but, critically, of nonlocal hidden-variables theories (NHVTs) also, such as de Broglie-Bohm theory. Naturally, in light of the well-established experimental findings from quantum physics, whether or not a deterministic approach to quantum mechanics, including an emergent quantum mechanics, is logically possible, depends on compatibility with the predictions of Bell's theorem. With respect to deterministic NHVTs, recent attempts to generalize Bell's theorem have claimed the impossibility of any such approaches to quantum mechanics. The present work offers arguments showing why such efforts towards generalization may fall short of their stated goal. In particular, we challenge the validity of the use of the non-signalling theorem as a conclusive argument in favor of the existence of free randomness, and therefore reject the use of the non-signalling theorem as an argument against the logical possibility of deterministic approaches. We here offer two distinct counter-arguments in support of the possibility of deterministic NHVTs: one argument exposes the circularity of the reasoning which is employed in recent claims, and a second argument is based on the inconclusive metaphysical status of the non-signalling theorem itself. We proceed by presenting an entirely informal treatment of key physical and metaphysical assumptions, and of their interrelationship, in attempts seeking to generalize Bell's theorem on the basis of an ontic, foundational interpretation of the non-signalling theorem. We here argue that the non-signalling theorem must instead be viewed as an epistemic, operational theorem i.e. one that refers exclusively to what epistemic agents can, or rather cannot, do. That is, we emphasize that the non-signalling theorem is a theorem about the operational inability of epistemic agents to signal information. In other words, as a proper principle, the non-signalling theorem may only be employed as an epistemic, phenomenological, or operational principle. Critically, our argument emphasizes that the non-signalling principle must not be used as an ontic principle about physical reality as such, i.e. as a theorem about the nature of physical reality independently of epistemic agents e.g. human observers. One major reason in favor of our conclusion is that any definition of signalling or of non-signalling invariably requires a reference to epistemic agents, and what these agents can actually measure and report. Otherwise, the non-signalling theorem would equal a general "no-influence" theorem. In conclusion, under the assumption that the non-signalling theorem is epistemic (i.e. "epistemic non-signalling"), the search for deterministic approaches to quantum mechanics, including NHVTs and an emergent quantum mechanics, continues to be a viable research program towards disclosing the foundations of physical reality at its smallest dimensions.

  15. Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)

    NASA Astrophysics Data System (ADS)

    Kędra, Mariola

    2014-02-01

    Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques, and tools to daily river flow data gives consistent, reliable, and clear-cut answers to these questions. The outcomes point out that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of a mountain river in southern Poland, the Raba River.

  16. Ecological Succession Pattern of Fungal Community in Soil along a Retreating Glacier

    PubMed Central

    Tian, Jianqing; Qiao, Yuchen; Wu, Bing; Chen, Huai; Li, Wei; Jiang, Na; Zhang, Xiaoling; Liu, Xingzhong

    2017-01-01

    Accelerated by global climate change, retreating glaciers leave behind soil chronosequences of primary succession. Current knowledge of primary succession is mainly from studies of vegetation dynamics, whereas information about belowground microbes remains unclear. Here, we combined shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. We investigated fungal succession and community assembly via high-throughput sequencing along a well-established glacier forefront chronosequence that spans 2–188 years of deglaciation. Shannon diversity and evenness peaked at a distance of 370 m and declined afterwards. The response of fungal diversity to distance varied among phyla: Basidiomycota Shannon diversity significantly decreased with distance, while the pattern of Rozellomycota Shannon diversity was unimodal. Abundance of the most frequent OTU, OTU2 (Cryptococcus terricola), increased with successional distance, whereas that of OTU65 (Tolypocladium tundrense) decreased. Based on null deviation analyses, the composition of the fungal community was initially governed strongly by deterministic processes and later less so. Our results revealed that distance, altitude, soil microbial biomass carbon, soil microbial biomass nitrogen, and NH4+–N significantly correlated with fungal community composition along the chronosequence. These results suggest that the drivers of the fungal community are dynamic along a glacier chronosequence and may relate to fungal ecophysiological traits and adaptation in an evolving ecosystem. This information will help in understanding the mechanistic underpinnings of microbial community assembly during ecosystem succession under different scales and scenarios. PMID:28649234

  18. A physically based model of global freshwater surface temperature

    NASA Astrophysics Data System (ADS)

    Beek, Ludovicus P. H.; Eikelboom, Tessa; Vliet, Michelle T. H.; Bierkens, Marc F. P.

    2012-09-01

    Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism, and indirectly through their tolerance to parasites and diseases. Models used to predict surface water temperature range between physically based deterministic models and statistical approaches. Here we present the initial results of a physically based deterministic model of global freshwater surface temperature. The model adds a surface water energy balance to river discharge modeled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by shortwave and longwave radiation and sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We use the coupled surface water and energy balance model to simulate global freshwater surface temperature at daily time steps with a spatial resolution of 0.5° on a regular grid for the period 1976-2000. We opt to parameterize the model with globally available data and apply it without calibration in order to preserve its physical basis, with the outlook of evaluating the effects of atmospheric warming on freshwater surface temperature. We validate our simulation results with daily temperature data from rivers and lakes (U.S. Geological Survey (USGS), limited to the USA) and compare mean monthly temperatures with those recorded in the Global Environment Monitoring System (GEMS) data set. Results show that the model is able to capture the mean monthly surface temperature for the majority of the GEMS stations, while the interannual variability as derived from the USGS and NOAA data was captured reasonably well. Results are poorest for the Arctic rivers, where the timing of ice breakup is predicted too late in the year because a mechanical breakup mechanism is not included. Moreover, surface water temperatures for tropical rivers were overestimated, most likely due to an overestimation of rainfall temperature and incoming shortwave radiation. The spatiotemporal variation of water temperature reveals large temperature differences between water and atmosphere for the higher latitudes, while considerable lateral transport of heat can be observed for rivers crossing hydroclimatic zones, such as the Nile, the Mississippi, and the large rivers flowing to the Arctic. Overall, our model results show promise for future projection of global surface freshwater temperature under global change.

  19. Genetic algorithms as global random search methods

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.

    1995-01-01

    Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
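
    A minimal GA sketch illustrating proportional (roulette-wheel) selection as a global search operator and recombination as the exploitation of similarities, as characterized in the abstract; the encoding and rates are assumptions.

```python
import random

def evolve(fitness, pop, gens=100, p_cross=0.9, p_mut=0.01, seed=0):
    """Toy GA on bit-string candidates: proportional selection,
    one-point crossover, bit-flip mutation. All rates are assumptions."""
    rng = random.Random(seed)
    n = len(pop[0])
    for _ in range(gens):
        weights = [fitness(x) for x in pop]
        # proportional (roulette-wheel) selection:
        parents = rng.choices(pop, weights=weights, k=len(pop))
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            if rng.random() < p_cross:          # one-point recombination
                cut = rng.randrange(1, n)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            nxt += [[bit ^ (rng.random() < p_mut) for bit in a],
                    [bit ^ (rng.random() < p_mut) for bit in b]]
        pop = nxt
    return max(pop, key=fitness)

onemax = lambda x: sum(x) + 1       # +1 keeps all selection weights positive
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(40)]
print(evolve(onemax, pop))
```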

  1. Time-optimal trajectory planning for underactuated spacecraft using a hybrid particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Yufei; Huang, Haibin

    2014-02-01

    A hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the Legendre pseudospectral method (LPM) is proposed for solving the time-optimal trajectory planning problem of underactuated spacecraft. At the beginning of the search process, an initialization generator is constructed using the PSO algorithm, owing to its strong global searching ability and robustness to random initial values; however, the PSO algorithm's convergence rate around the global optimum is slow. When the change in the fitness function becomes smaller than a predefined value, the search is switched to the LPM to accelerate the process. Thus, with the solutions obtained by the PSO algorithm as a set of proper initial guesses, the hybrid algorithm can find a global optimum more quickly and accurately. Results of 200 Monte Carlo simulations demonstrate that the proposed hybrid PSO-LPM algorithm has greater advantages in terms of global searching capability and convergence rate than both the single PSO algorithm and the LPM alone. Moreover, the PSO-LPM algorithm is also robust to random initial values.
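
    A schematic of the hybrid strategy described in the abstract: run global PSO until progress stalls, then hand the incumbent to a local refinement (here SciPy's Nelder-Mead stands in for the pseudospectral stage, and all constants are assumptions).

```python
import numpy as np
from scipy.optimize import minimize

def pso_then_local(f, bounds, n=30, iters=300, tol=1e-8, seed=0):
    """Global PSO phase; once the improvement in the best fitness drops
    below `tol`, hand the incumbent to a local refinement step."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()]
    last = pval.min()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()]
        if last - pval.min() < tol:     # stalled: switch to local search
            break
        last = pval.min()
    return minimize(f, g, method="Nelder-Mead").x

f = lambda z: (z[0] ** 2 - 2) ** 2 + (z[1] - 1) ** 2   # toy objective
print(pso_then_local(f, bounds=[(-4, 4), (-4, 4)]))    # ~ [±1.414, 1]
```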

  2. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method is proposed, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve computational efficiency. The cycle comprising SO, reliability assessment, and constraint updates is repeated in the RBSO until the reliability requirements of constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission, to demonstrate the effectiveness and efficiency of the proposed method.

  3. On salesmen and tourists: Two-step optimization in deterministic foragers

    NASA Astrophysics Data System (ADS)

    Maya, Miguel; Miramontes, Octavio; Boyer, Denis

    2017-02-01

    We explore a two-step optimization problem in random environments, the so-called restaurant-coffee shop problem, where a walker aims at visiting the nearest and better restaurant in an area and then moving to the nearest and better coffee shop. This is an extension of the Tourist Problem, a one-step optimization dynamics that can be viewed as a deterministic walk in a random medium. A certain amount of heterogeneity in the values of the resources to be visited causes the emergence of power-law distributions for the steps performed by the walker, similar to a Lévy flight. The fluctuations of the step lengths tend to decrease as a consequence of multiple-step planning, thus reducing the foraging uncertainty. We find that the first and second steps of each planned movement play very different roles in heterogeneous environments. The two-step process improves the foraging efficiency only slightly compared to the one-step optimization, at a much higher computational cost. We discuss the implications of these findings for animal and human mobility, in particular in relation to the computational effort that informed agents should deploy to solve search problems.
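
    A sketch of a two-step ("restaurant, then coffee shop") planning rule in the spirit of the abstract, scoring candidate pairs by distance travelled and quality foregone; the cost function, weight beta, and toy data are assumptions.

```python
import math, random

random.seed(3)
# each site is (x, y, quality in [0, 1])
restaurants = [(random.random(), random.random(), random.random())
               for _ in range(50)]
coffee_shops = [(random.random(), random.random(), random.random())
                for _ in range(50)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def plan_two_steps(pos, beta=1.0):
    """Jointly choose (restaurant, coffee shop) minimizing distance walked
    plus a penalty for foregone quality: d1 + d2 + beta*(2 - q_r - q_c)."""
    return min(((r, c) for r in restaurants for c in coffee_shops),
               key=lambda rc: dist(pos, rc[0]) + dist(rc[0], rc[1])
               + beta * (2 - rc[0][2] - rc[1][2]))

r, c = plan_two_steps((0.5, 0.5))
print("restaurant:", r, "\ncoffee shop:", c)
```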

  4. Binary Bees Algorithm - bioinspiration from the foraging mechanism of honeybees to optimize a multiobjective multidimensional assignment problem

    NASA Astrophysics Data System (ADS)

    Xu, Shuo; Ji, Ze; Truong Pham, Duc; Yu, Fan

    2011-11-01

    The simultaneous mission assignment and home allocation problem for hospital service robots studied here is a Multidimensional Assignment Problem (MAP) with multiple objectives and multiple constraints. A population-based metaheuristic, the Binary Bees Algorithm (BBA), is proposed to optimize this NP-hard problem. Inspired by the foraging mechanism of honeybees, the BBA's most important feature is an explicit functional partitioning between global search and local search, for exploration and exploitation respectively. Its key parts consist of adaptive global search, three-step elitism selection (constraint handling, non-dominated solution selection, and diversity preservation), and elites-centred local search within a Hamming neighbourhood. Two comparative experiments were conducted to investigate in detail its single-objective optimization, its optimization effectiveness (indexed by the S-metric and C-metric), and its optimization efficiency (indexed by computational burden and CPU time). The BBA outperformed its competitors in almost all the quantitative indices. Hence the overall scheme, and particularly the search-history-adapted global search strategy, was validated.
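
    The exploration/exploitation split at the heart of the BBA can be illustrated on a toy binary problem. The sketch below keeps only that two-level structure, random global scouts plus single-bit (Hamming-1) local moves around the current elites; the real algorithm's constraint handling and multiobjective selection are omitted, and all parameter values are assumptions.

      # Skeleton of a bees-style binary search: elites improve locally within
      # a Hamming-1 neighbourhood, fresh scouts explore globally (toy problem).
      import numpy as np

      rng = np.random.default_rng(3)
      n_bits, n_scouts, n_elites, iters = 40, 30, 5, 200
      target = rng.integers(0, 2, n_bits)    # toy objective: match a hidden string
      fitness = lambda b: int(np.sum(b == target))

      pop = rng.integers(0, 2, (n_scouts, n_bits))
      for _ in range(iters):
          pop = pop[np.argsort([-fitness(b) for b in pop])]  # best first
          elites = pop[:n_elites].copy()
          for e in elites:                   # exploitation: single-bit flips
              trial = e.copy()
              trial[rng.integers(n_bits)] ^= 1
              if fitness(trial) > fitness(e):
                  e[:] = trial
          scouts = rng.integers(0, 2, (n_scouts - n_elites, n_bits))
          pop = np.vstack([elites, scouts])  # exploration: fresh global scouts
      print("best fitness:", fitness(pop[0]), "of", n_bits)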

  5. Evaluation of hybrid inverse planning and optimization (HIPO) algorithm for optimization in real-time, high-dose-rate (HDR) brachytherapy for prostate.

    PubMed

    Pokharel, Shyam; Rana, Suresh; Blikenstaff, Joseph; Sadeghi, Amir; Prestidge, Bradley

    2013-07-08

    The purpose of this study is to investigate the effectiveness of the HIPO planning and optimization algorithm for real-time prostate HDR brachytherapy. This study included 20 patients who underwent ultrasound-based real-time HDR brachytherapy of the prostate using the treatment planning system Oncentra Prostate (SWIFT version 3.0). The treatment plans for all patients were optimized using inverse dose-volume histogram-based optimization followed by graphical optimization (GRO) in real time. GRO is the manual manipulation of isodose lines slice by slice, so the quality of the plan depends heavily on planner expertise and experience. The data for all patients were retrieved later, and treatment plans were created and optimized using the HIPO algorithm with the same set of dose constraints, number of catheters, and set of contours as in the real-time optimization. The HIPO algorithm is a hybrid because it combines both stochastic and deterministic algorithms. The stochastic algorithm, simulated annealing, searches the optimal catheter distributions for a given set of dose objectives. The deterministic algorithm, dose-volume histogram-based optimization (DVHO), quickly optimizes the three-dimensional dose distribution by moving straight downhill once it is in the advantageous region of the search space found by the stochastic algorithm. The PTV receiving 100% of the prescription dose (V100) was 97.56% and 95.38% with GRO and HIPO, respectively. The mean dose (D(mean)) and minimum dose to 10% volume (D10) for the urethra, rectum, and bladder were all statistically lower with HIPO than with GRO using the paired Student's t-test at the 5% significance level. HIPO can provide treatment plans with target coverage comparable to that of GRO with a reduction in dose to the critical structures.

  6. Wave ensemble forecast system for tropical cyclones in the Australian region

    NASA Astrophysics Data System (ADS)

    Zieger, Stefan; Greenslade, Diana; Kepert, Jeffrey D.

    2018-05-01

    Forecasting of waves under extreme conditions such as tropical cyclones is vitally important for many offshore industries, but many challenges remain. For Northwest Western Australia (NW WA), wave forecasts issued by the Australian Bureau of Meteorology have previously been limited to products from deterministic operational wave models forced by deterministic atmospheric models. The wave models are run over global (resolution 1/4°) and regional (resolution 1/10°) domains with forecast ranges of +7 and +3 days, respectively. Because of this relatively coarse resolution (both in the wave models and in the forcing fields), the accuracy of these products is limited under tropical cyclone conditions. Given this limited accuracy, a new ensemble-based wave forecasting system for the NW WA region has been developed. To achieve this, a new dedicated 8-km resolution grid was nested in the global wave model. Over this grid, the wave model is forced with winds from a bias-corrected European Centre for Medium-Range Weather Forecasts atmospheric ensemble comprising 51 members, to take into account the uncertainties in location, intensity and structure of a tropical cyclone system. A unique technique is used to select restart files for each wave ensemble member. The system is designed to operate in real time during the cyclone season, providing +10-day forecasts. This paper describes the wave forecast components of this system and presents the verification metrics and skill for specific events.

  7. Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Risner, Joel M; Johnson, Seth R.; Remec, Igor

    2015-01-01

    Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep penetration shielding and an unshielded beamline, and a desire to obtain well-converged nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time, and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight window maps and consistent biased sources produced using the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process. We also tested importance function collapsing (in space and energy) capabilities in ADVANTG. These changes, along with the support for parallel Denovo calculations using the current version of ADVANTG, give us the capability to improve the fidelity of the deterministic portion of the hybrid analysis sequence, obtain improved weight-window maps, and reduce both the analyst and computational time required for the analysis process.

  8. Visual Search Targeting Either Local or Global Perceptual Processes Differs as a Function of Autistic-Like Traits in the Typically Developing Population

    ERIC Educational Resources Information Center

    Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.

    2013-01-01

    Relative to low scorers, high scorers on the Autism-Spectrum Quotient (AQ) show enhanced performance on the Embedded Figures Test and the Radial Frequency search task (RFST), which has been attributed to both enhanced local processing and differences in combining global percepts. We investigate the role of local and global processing further using…

  9. Large-extent digital soil mapping approaches for total soil depth

    NASA Astrophysics Data System (ADS)

    Mulder, Titia; Lacoste, Marine; Saby, Nicolas P. A.; Arrouays, Dominique

    2015-04-01

    Total soil depth (SDt) plays a key role in supporting various ecosystem services and properties, including plant growth, water availability and carbon stocks. Therefore, predictive mapping of SDt has been included as one of the deliverables within the GlobalSoilMap project. In this work SDt was predicted for France following the directions of GlobalSoilMap, which requires modelling at 90 m resolution. The first method, further referred to as DM, consisted of modelling the deterministic trend in SDt using data mining, followed by a bias correction and ordinary kriging of the residuals. Considering the total surface area of France, about 540,000 km2, the employed methods need to be able to deal with large datasets. Therefore, a second method, multi-resolution kriging (MrK) for large datasets, was implemented. This method consisted of modelling the deterministic trend by a linear model, followed by interpolation of the residuals. For the two methods, the general trend was assumed to be explained by the biotic and abiotic environmental conditions, as described by the Soil-Landscape paradigm. The mapping accuracy was evaluated by an internal validation and its concordance with previous soil maps. In addition, the prediction interval for DM and the confidence interval for MrK were determined. Finally, the opportunities and limitations of both approaches were evaluated. The results showed consistency in mapped spatial patterns and a good prediction of the mean values. DM was better at predicting extreme values due to the bias correction, and was more powerful in capturing the deterministic trend than the linear model of the MrK approach. However, MrK was found to be more straightforward and flexible in delivering spatially explicit uncertainty measures. The validation indicated that DM was more accurate than MrK. Improvements for DM may be expected by predicting soil depth classes. MrK shows potential for modelling beyond the country level, at high resolution. Large-extent digital soil mapping approaches for SDt may be improved by (1) taking into account SDt observations which are censored and (2) using high-resolution biotic and abiotic environmental data. The latter may improve modelling of the soil-landscape interactions influencing soil pedogenesis. In conclusion, this work provided a robust and reproducible method (DM) for high-resolution soil property modelling, in accordance with the GlobalSoilMap requirements, and an efficient alternative for large-extent digital soil mapping (MrK).

  10. Optimal design of groundwater remediation system using a probabilistic multi-objective fast harmony search algorithm under uncertainty

    NASA Astrophysics Data System (ADS)

    Luo, Qiankun; Wu, Jianfeng; Yang, Yun; Qian, Jiazhong; Wu, Jichun

    2014-11-01

    This study develops a new probabilistic multi-objective fast harmony search algorithm (PMOFHS) for optimal design of groundwater remediation systems under uncertainty associated with the hydraulic conductivity (K) of aquifers. The PMOFHS integrates the previously developed deterministic multi-objective optimization method, namely multi-objective fast harmony search algorithm (MOFHS) with a probabilistic sorting technique to search for Pareto-optimal solutions to multi-objective optimization problems in a noisy hydrogeological environment arising from insufficient K data. The PMOFHS is then coupled with the commonly used flow and transport codes, MODFLOW and MT3DMS, to identify the optimal design of groundwater remediation systems for a two-dimensional hypothetical test problem and a three-dimensional Indiana field application involving two objectives: (i) minimization of the total remediation cost through the engineering planning horizon, and (ii) minimization of the mass remaining in the aquifer at the end of the operational period, whereby the pump-and-treat (PAT) technology is used to clean up contaminated groundwater. Also, Monte Carlo (MC) analysis is employed to evaluate the effectiveness of the proposed methodology. Comprehensive analysis indicates that the proposed PMOFHS can find Pareto-optimal solutions with low variability and high reliability and is a potentially effective tool for optimizing multi-objective groundwater remediation problems under uncertainty.
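
    The harmony search core that MOFHS and PMOFHS build upon is compact enough to sketch. The following single-objective version shows only the improvisation step (memory consideration, pitch adjustment, random selection); the fast variant's refinements and the probabilistic Pareto sorting of PMOFHS are not modelled, and the parameter values and objective are placeholders.

      # Basic single-objective harmony search (illustrative parameters).
      import numpy as np

      def harmony_search(f, lo, hi, dim, hms=20, hmcr=0.9, par=0.3,
                         bw=0.05, iters=5000, seed=4):
          rng = np.random.default_rng(seed)
          hm = rng.uniform(lo, hi, (hms, dim))       # harmony memory
          fit = np.apply_along_axis(f, 1, hm)
          for _ in range(iters):
              new = np.empty(dim)
              for d in range(dim):
                  if rng.random() < hmcr:            # memory consideration
                      new[d] = hm[rng.integers(hms), d]
                      if rng.random() < par:         # pitch adjustment
                          new[d] += bw * (hi - lo) * rng.uniform(-1, 1)
                  else:                              # random selection
                      new[d] = rng.uniform(lo, hi)
              new = np.clip(new, lo, hi)
              worst = fit.argmax()
              if f(new) < fit[worst]:                # replace worst harmony
                  hm[worst], fit[worst] = new, f(new)
          return hm[fit.argmin()], fit.min()

      x, fx = harmony_search(lambda z: float(np.sum(z**2)), -5.0, 5.0, dim=4)
      print(x, fx)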

  11. LETTER TO THE EDITOR: Constant-time solution to the global optimization problem using Brüschweiler's ensemble search algorithm

    NASA Astrophysics Data System (ADS)

    Protopopescu, V.; D'Helon, C.; Barhen, J.

    2003-06-01

    A constant-time solution of the continuous global optimization problem (GOP) is obtained by using an ensemble algorithm. We show that under certain assumptions, the solution can be guaranteed by mapping the GOP onto a discrete unsorted search problem, whereupon Brüschweiler's ensemble search algorithm is applied. For adequate sensitivities of the measurement technique, the query complexity of the ensemble search algorithm depends linearly on the size of the function's domain. Advantages and limitations of an eventual NMR implementation are discussed.

  12. Dynamics of a stochastic multi-strain SIS epidemic model driven by Lévy noise

    NASA Astrophysics Data System (ADS)

    Chen, Can; Kang, Yanmei

    2017-01-01

    A stochastic multi-strain SIS epidemic model is formulated by introducing Lévy noise into the disease transmission rate of each strain. First, we prove that the stochastic model admits a unique global positive solution, and, by the comparison theorem, we show that the solution remains within a positively invariant set almost surely. Next we investigate stochastic stability of the disease-free equilibrium, including stability in probability and pth moment asymptotic stability. Then sufficient conditions for persistence in the mean of the disease are established. Finally, based on an Euler scheme for Lévy-driven stochastic differential equations, numerical simulations for a stochastic two-strain model are carried out to verify the theoretical results. Moreover, numerical comparison results of the stochastic two-strain model and the deterministic version are also given. Lévy noise can cause the two strains to become extinct almost surely, even though there is a dominant strain that persists in the deterministic model. It can be concluded that the introduction of Lévy noise reduces the disease extinction threshold, which indicates that Lévy noise may suppress the disease outbreak.

  13. Multiple vehicle tracking in aerial video sequence using driver behavior analysis and improved deterministic data association

    NASA Astrophysics Data System (ADS)

    Zhang, Xunxun; Xu, Hongke; Fang, Jianwu

    2018-01-01

    Along with the rapid development of unmanned aerial vehicle technology, multiple vehicle tracking (MVT) in aerial video sequences has received widespread interest for providing the required traffic information. Due to camera motion and complex backgrounds, MVT in aerial video sequences poses unique challenges. We propose an efficient MVT algorithm via a driver behavior-based Kalman filter (DBKF) and an improved deterministic data association (IDDA) method. First, a hierarchical image registration method is put forward to compensate for the camera motion. Afterward, to improve the accuracy of the state estimation, we propose the DBKF module by incorporating driver behavior into the Kalman filter, where an artificial potential field is introduced to reflect the driver behavior. Then, to implement the data association, a local optimization method is designed instead of global optimization. By introducing an adaptive operating strategy, the proposed IDDA method can also deal with the situation in which vehicles suddenly appear or disappear. Finally, comprehensive experiments on the DARPA VIVID data set and KIT AIS data set demonstrate that the proposed algorithm generates satisfactory and superior results.

  14. Front propagation and effect of memory in stochastic desertification models with an absorbing state

    NASA Astrophysics Data System (ADS)

    Herman, Dor; Shnerb, Nadav M.

    2017-08-01

    Desertification in dryland ecosystems is considered to be a major environmental threat that may lead to devastating consequences. The concern increases when the system admits two alternative steady states and the transition is abrupt and irreversible (catastrophic shift). However, recent studies show that the inherent stochasticity of the birth-death process, when superimposed on the presence of an absorbing state, may lead to a continuous (second order) transition even if the deterministic dynamics supports a catastrophic transition. Following these works we present here a numerical study of a one-dimensional stochastic desertification model, where the deterministic predictions are confronted with the observed dynamics. Our results suggest that a stochastic spatial system allows for a propagating front only when its active phase invades the inactive (desert) one. In the extinction phase one observes transient front propagation followed by a global collapse. In the presence of a seed bank the vegetation state is shown to be more robust against demographic stochasticity, but the transition in that case still belongs to the directed percolation equivalence class.

  15. Noise-induced transitions and shifts in a climate-vegetation feedback model.

    PubMed

    Alexandrov, Dmitri V; Bashkirtseva, Irina A; Ryashko, Lev B

    2018-04-01

    Motivated by the extremely important role of the Earth's vegetation dynamics in climate changes, we study the stochastic variability of a simple climate-vegetation system. In the case of deterministic dynamics, the system has either one stable equilibrium and a limit cycle, or two stable equilibria corresponding to two opposite (cold and warm) climate-vegetation states. These states are divided by a separatrix going across a point of unstable equilibrium. Some possible stochastic scenarios caused by different externally induced natural and anthropogenic processes inherit properties of the deterministic behaviour and drastically change the system dynamics. We demonstrate that transitions across the separatrix occur with increasing noise intensity. The climate-vegetation system therewith fluctuates, transits and localizes in the vicinity of its attractor. We show that this phenomenon occurs within some critical range of noise intensities. A noise-induced shift into the range of smaller global average temperatures, corresponding to substantial oscillations of the Earth's vegetation cover, is revealed. Our analysis demonstrates that climate-vegetation interactions contribute essentially to climate dynamics and should be taken into account in more precise and complex models of climate variability.

  16. Dynamics of stochastic SEIS epidemic model with varying population size

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Wei, Fengying

    2016-12-01

    In this paper we introduce stochasticity into a deterministic susceptible-exposed-infected model with varying population size. The infected individuals may return to the susceptible compartment after recovering. We show that the stochastic model possesses a unique global solution by constructing a suitable Lyapunov function and using the generalized Itô formula. The densities of the exposed and infected tend to extinction when certain conditions hold. Moreover, conditions for persistence of the global solution are derived when the parameters satisfy some simple criteria. The stochastic model admits a stationary distribution around the endemic equilibrium, which means that the disease will prevail. To check the validity of the main results, numerical simulations are presented at the end of this contribution.

  17. Adversarial search by evolutionary computation.

    PubMed

    Hong, T P; Huang, K Y; Lin, W Y

    2001-01-01

    In this paper, we consider the problem of finding good next moves in two-player games. Traditional search algorithms, such as minimax and alpha-beta pruning, incur great costs in time and space when exploring deep into search trees to find better next moves. Genetic algorithms, with their ability to find global or near-global optima in limited time, seem promising, but they are inept at finding compound optima, such as the minimax of a game-search tree. We thus propose a new genetic algorithm-based approach that can find a good next move by reserving the board evaluation values of new offspring in a partial game-search tree. Experiments show that solution accuracy and search speed are greatly improved by our algorithm.

  18. Global optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Arora, Jasbir S.

    1990-01-01

    The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality. However, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, the existence of a global minimum is guaranteed. However, given that no global optimality conditions are available, a global solution can be found only by an exhaustive search satisfying the global optimality inequality. The exhaustive search can be organized so that the entire design space need not be searched for the solution, which reduces the computational burden somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods; more testing is needed, and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; since the feasible set keeps shrinking, a good algorithm for finding an initial feasible point is required. Such algorithms need to be developed and evaluated.
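
    The zooming idea mentioned above can be sketched as follows, under the assumption of a positive cost function: after each round of local solves, the target value is tightened to a fraction of the best known cost, and the search stops when no restart can beat it. Multi-start local minimization stands in for the unspecified feasible-point machinery, and all settings are illustrative.

      # Rough zooming-style global search: tighten the target after each pass
      # (assumes f > 0 so scaling the target by gamma tightens it).
      import numpy as np
      from scipy.optimize import minimize

      def zooming(f, bounds, gamma=0.9, restarts=20, seed=5):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          best, x_best = np.inf, None
          while True:
              target = gamma * best if np.isfinite(best) else np.inf
              found = False
              for _ in range(restarts):
                  res = minimize(f, rng.uniform(lo, hi), bounds=bounds)
                  if res.fun < min(best, target):    # beat the zoomed target
                      best, x_best, found = res.fun, res.x, True
              if not found:                          # no restart improved it
                  return x_best, best

      f = lambda z: (z[0]**2 - 2)**2 + (z[1] + 1)**2 + 1.0  # two global minima
      print(zooming(f, [(-3.0, 3.0), (-3.0, 3.0)]))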

  19. Competitive Facility Location with Random Demands

    NASA Astrophysics Data System (ADS)

    Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke

    2009-10-01

    This paper proposes a new location problem of competitive facilities, e.g. shops and stores, with uncertain demands in the plane. By representing the demands for facilities as random variables, the location problem is formulated as a stochastic programming problem, and for finding its solution three deterministic programming problems are considered: an expectation maximizing problem, a probability maximizing problem, and a satisfying-level maximizing problem. After showing that one of their optimal solutions can be found by solving 0-1 programming problems, a solution method is proposed by improving the tabu search algorithm with strategic vibration. The efficiency of the solution method is shown by applying it to numerical examples of the facility location problems.

  20. Walking the Filament of Feasibility: Global Optimization of Highly-Constrained, Multi-Modal Interplanetary Trajectories Using a Novel Stochastic Search Technique

    NASA Technical Reports Server (NTRS)

    Englander, Arnold C.; Englander, Jacob A.

    2017-01-01

    Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables, equality and inequality constraints, and many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are insufficiently robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.

  1. Chaos in an imperfectly premixed model combustor.

    PubMed

    Kabiraj, Lipika; Saurabh, Aditya; Karimi, Nader; Sailor, Anna; Mastorakos, Epaminondas; Dowling, Ann P; Paschereit, Christian O

    2015-02-01

    This article reports nonlinear bifurcations observed in a laboratory scale, turbulent combustor operating under imperfectly premixed mode with global equivalence ratio as the control parameter. The results indicate that the dynamics of thermoacoustic instability correspond to quasi-periodic bifurcation to low-dimensional, deterministic chaos, a route that is common to a variety of dissipative nonlinear systems. The results support the recent identification of bifurcation scenarios in a laminar premixed flame combustor (Kabiraj et al., Chaos: Interdiscip. J. Nonlinear Sci. 22, 023129 (2012)) and extend the observation to a practically relevant combustor configuration.

  2. Combining local and global limitations of visual search.

    PubMed

    Põder, Endel

    2017-04-01

    There are different opinions about the roles of local interactions and central processing capacity in visual search. This study attempts to clarify the problem using a new version of relevant set cueing. A central precue indicates two symmetrical segments (that may contain a target object) within a circular array of objects presented briefly around the fixation point. The number of objects in the relevant segments, and density of objects in the array were varied independently. Three types of search experiments were run: (a) search for a simple visual feature (color, size, and orientation); (b) conjunctions of simple features; and (c) spatial configuration of simple features (rotated Ts). For spatial configuration stimuli, the results were consistent with a fixed global processing capacity and standard crowding zones. For simple features and their conjunctions, the results were different, dependent on the features involved. While color search exhibits virtually no capacity limits or crowding, search for an orientation target was limited by both. Results for conjunctions of features can be partly explained by the results from the respective features. This study shows that visual search is limited by both local interference and global capacity, and the limitations are different for different visual features.

  3. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential, but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed towards target regions near constraint boundaries, for accurate representation of the constraints, using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that using adaptive sampling, the number of designs required to find the optimum was reduced drastically, while improving accuracy. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. The separable Monte Carlo method was employed, which allows separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties and loading conditions of the panel, and error in finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was also reduced by employing surrogate models. In order to estimate the error in the probability of failure estimate, the bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.

  4. Tracking the global spread of vaccine sentiments: the global response to Japan's suspension of its HPV vaccine recommendation.

    PubMed

    Larson, Heidi J; Wilson, Rose; Hanley, Sharon; Parys, Astrid; Paterson, Pauline

    2014-01-01

    In June 2013 the Japanese Ministry of Health, Labor, and Welfare (MHLW) suspended its HPV vaccination recommendation after a series of highly publicized alleged adverse events following immunization stoked public doubts about the vaccine's safety. This paper examines the global spread of the news of Japan's HPV vaccine suspension through online media, and takes a retrospective look at non-Japanese media sources that were used to support those claiming HPV vaccine injury in Japan. Two searches were conducted. One searched relevant content in an archive of Google Alerts on vaccines and vaccine preventable diseases. The second search was conducted using Google Search on January 6th 2014 and on July 18th 2014, using the keywords, "HPV vaccine Japan" and "cervical cancer vaccine Japan." Both searches were used as Google Searches render more (and some different) results than Google Alerts. The online media items collected and analyzed totalled 57. Sixty-three percent were published in the USA, 23% in Japan, 5% in the UK, 2% in France, 2% in Switzerland, 2% in the Philippines, 2% in Kenya and 2% in Denmark. The majority took a negative view of the HPV vaccine, the primary concern being vaccine safety. The news of Japan's suspension of the HPV vaccine recommendation has traveled globally through online media and social media networks, being applauded by anti-vaccination groups but not by the global scientific community. The longer the uncertainty around the Japanese HPV vaccine recommendation persists, the further the public concerns are likely to travel.

  5. Tracking the global spread of vaccine sentiments: The global response to Japan's suspension of its HPV vaccine recommendation

    PubMed Central

    Larson, Heidi J; Wilson, Rose; Hanley, Sharon; Parys, Astrid; Paterson, Pauline

    2014-01-01

    In June 2013 the Japanese Ministry of Health, Labor, and Welfare (MHLW) suspended its HPV vaccination recommendation after a series of highly publicized alleged adverse events following immunization stoked public doubts about the vaccine's safety. This paper examines the global spread of the news of Japan's HPV vaccine suspension through online media, and takes a retrospective look at non-Japanese media sources that were used to support those claiming HPV vaccine injury in Japan. Methods: Two searches were conducted. One searched relevant content in an archive of Google Alerts on vaccines and vaccine preventable diseases. The second search was conducted using Google Search on January 6th 2014 and on July 18th 2014, using the keywords, "HPV vaccine Japan" and "cervical cancer vaccine Japan." Both searches were used as Google Searches render more (and some different) results than Google Alerts. Results: The online media items collected and analyzed totalled 57. Sixty-three percent were published in the USA, 23% in Japan, 5% in the UK, 2% in France, 2% in Switzerland, 2% in the Philippines, 2% in Kenya and 2% in Denmark. The majority took a negative view of the HPV vaccine, the primary concern being vaccine safety. Discussion: The news of Japan's suspension of the HPV vaccine recommendation has traveled globally through online media and social media networks, being applauded by anti-vaccination groups but not by the global scientific community. The longer the uncertainty around the Japanese HPV vaccine recommendation persists, the further the public concerns are likely to travel. PMID:25483472

  6. Seismicity map tools for earthquake studies

    NASA Astrophysics Data System (ADS)

    Boucouvalas, Anthony; Kaskebes, Athanasios; Tselikas, Nikos

    2014-05-01

    We report on the development of a new online set of tools, for use within Google Maps, for earthquake research. We demonstrate this server-based online platform (developed with PHP, Javascript, MySQL) and its new tools using a database system with earthquake data. The platform allows us to carry out statistical and deterministic analysis of earthquake data using Google Maps and to plot various seismicity graphs. The tool box has been extended to draw line segments on the map, multiple straight lines horizontally and vertically, as well as multiple circles, including geodesic lines. The application is demonstrated using localized seismic data from the geographic region of Greece as well as other global earthquake data. The application also offers regional segmentation (NxN), which allows studying earthquake clustering and earthquake cluster shift within the segments in space. The platform offers many filters, such as plotting selected magnitude ranges or time periods. The plotting facility allows statistically based plots such as cumulative earthquake magnitude plots and earthquake magnitude histograms, calculation of the b-value, etc. What is novel about the platform is the additional deterministic tools. Using the newly developed horizontal and vertical line and circle tools we have studied the spatial distribution trends of many earthquakes, and we show here for the first time the link between Fibonacci numbers and the spatiotemporal location of some earthquakes. The new tools are valuable for examining and visualizing trends in earthquake research, as they allow calculation of statistics as well as deterministic precursors. We plan to show many new results based on our newly developed platform.

  7. Transmutation approximations for the application of hybrid Monte Carlo/deterministic neutron transport to shutdown dose rate analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D.; Wilson, Paul P. H.

    In fusion energy systems (FES) neutrons born from burning plasma activate system components. The photon dose rate after shutdown from resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and (9 ± 5) × 10^4 relative to analog. As a result, this work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.

  8. Transmutation approximations for the application of hybrid Monte Carlo/deterministic neutron transport to shutdown dose rate analysis

    DOE PAGES

    Biondo, Elliott D.; Wilson, Paul P. H.

    2017-05-08

    In fusion energy systems (FES) neutrons born from burning plasma activate system components. The photon dose rate after shutdown from resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and (9 ± 5) × 10^4 relative to analog. As a result, this work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.

  9. Long-run evolution of the global economy - Part 2: Hindcasts of innovation and growth

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.

    2015-10-01

    Long-range climate forecasts use integrated assessment models to link the global economy to greenhouse gas emissions. This paper evaluates an alternative economic framework outlined in part 1 of this study (Garrett, 2014) that approaches the global economy using purely physical principles rather than explicitly resolved societal dynamics. If this model is initialized with economic data from the 1950s, it yields hindcasts for how fast global economic production and energy consumption grew between 2000 and 2010 with skill scores > 90 % relative to a model of persistence in trends. The model appears to attain high skill partly because there was a strong impulse of discovery of fossil fuel energy reserves in the mid-twentieth century that helped civilization to grow rapidly as a deterministic physical response. Forecasting the coming century may prove more of a challenge because the effect of the energy impulse appears to have nearly run its course. Nonetheless, an understanding of the external forces that drive civilization may help development of constrained futures for the coupled evolution of civilization and climate during the Anthropocene.

  10. Whole-brain structural topology in adult attention-deficit/hyperactivity disorder: Preserved global - disturbed local network organization.

    PubMed

    Sidlauskaite, Justina; Caeyenberghs, Karen; Sonuga-Barke, Edmund; Roeyers, Herbert; Wiersema, Jan R

    2015-01-01

    Prior studies demonstrate altered organization of functional brain networks in attention-deficit/hyperactivity disorder (ADHD). However, the structural underpinnings of these functional disturbances are poorly understood. In the current study, we applied a graph-theoretic approach to whole-brain diffusion magnetic resonance imaging data to investigate the organization of structural brain networks in adults with ADHD and unaffected controls using deterministic fiber tractography. Groups did not differ in terms of global network metrics - small-worldness, global efficiency and clustering coefficient. However, there were widespread ADHD-related effects at the nodal level in relation to local efficiency and clustering. The affected nodes included superior occipital, supramarginal, superior temporal, inferior parietal, angular and inferior frontal gyri, as well as putamen, thalamus and posterior cerebellum. Lower local efficiency of left superior temporal and supramarginal gyri was associated with higher ADHD symptom scores. Also greater local clustering of right putamen and lower local clustering of left supramarginal gyrus correlated with ADHD symptom severity. Overall, the findings indicate preserved global but altered local network organization in adult ADHD implicating regions underpinning putative ADHD-related neuropsychological deficits.

  11. Structural Deterministic Safety Factors Selection Criteria and Verification

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability proved that the deterministic method is not reliability sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.

  12. Time-series modeling and prediction of global monthly absolute temperature for environmental decision making

    NASA Astrophysics Data System (ADS)

    Ye, Liming; Yang, Guixia; Van Ranst, Eric; Tang, Huajun

    2013-03-01

    A generalized, structural, time series modeling framework was developed to analyze the monthly records of absolute surface temperature, one of the most important environmental parameters, using a deterministic-stochastic combined (DSC) approach. Although the development of the framework was based on the characterization of the variation patterns of a global dataset, the methodology could be applied to any monthly absolute temperature record. Deterministic processes were used to characterize the variation patterns of the global trend and the cyclic oscillations of the temperature signal, involving polynomial functions and the Fourier method, respectively, while stochastic processes were employed to account for any remaining patterns in the temperature signal, involving seasonal autoregressive integrated moving average (SARIMA) models. A prediction of the monthly global surface temperature during the second decade of the 21st century using the DSC model shows that the global temperature will likely continue to rise at twice the average rate of the past 150 years. The evaluation of prediction accuracy shows that DSC models perform systematically well against selected models of other authors, suggesting that DSC models, when coupled with other eco-environmental models, can be used as a supplemental tool for short-term (~10-year) environmental planning and decision making.
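
    The DSC decomposition can be illustrated end to end on a synthetic monthly series: a polynomial captures the deterministic trend, Fourier harmonics capture the cyclic component, and a low-order ARMA model (standing in for the full SARIMA stage) absorbs the stochastic residual. The orders, harmonic count and synthetic data below are assumptions for illustration.

      # Deterministic-stochastic combined (DSC) sketch on synthetic data.
      import numpy as np
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      rng = np.random.default_rng(6)
      t = np.arange(600)                                  # 50 years, monthly
      y = 14 + 0.001 * t + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, 600)

      trend = np.polynomial.Polynomial.fit(t, y, deg=2)   # deterministic trend
      detrended = y - trend(t)
      fourier = lambda s: np.column_stack([f(2 * np.pi * k * s / 12)
                                           for k in (1, 2)
                                           for f in (np.sin, np.cos)])
      coef, *_ = np.linalg.lstsq(fourier(t), detrended, rcond=None)
      resid = detrended - fourier(t) @ coef               # stochastic remainder

      arma = SARIMAX(resid, order=(1, 0, 1)).fit(disp=False)
      h = np.arange(600, 720)                             # 10-year-ahead horizon
      forecast = trend(h) + fourier(h) @ coef + arma.forecast(steps=120)
      print(forecast[:6])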

  13. A Tabu-Search Heuristic for Deterministic Two-Mode Blockmodeling of Binary Network Matrices.

    PubMed

    Brusco, Michael; Steinley, Douglas

    2011-10-01

    Two-mode binary data matrices arise in a variety of social network contexts, such as the attendance or non-attendance of individuals at events, the participation or lack of participation of groups in projects, and the votes of judges on cases. A popular method for analyzing such data is two-mode blockmodeling based on structural equivalence, where the goal is to identify partitions for the row and column objects such that the clusters of the row and column objects form blocks that are either complete (all 1s) or null (all 0s) to the greatest extent possible. Multiple restarts of an object relocation heuristic that seeks to minimize the number of inconsistencies (i.e., 1s in null blocks and 0s in complete blocks) with ideal block structure is the predominant approach for tackling this problem. As an alternative, we propose a fast and effective implementation of tabu search. Computational comparisons across a set of 48 large network matrices revealed that the new tabu-search heuristic always provided objective function values that were better than those of the relocation heuristic when the two methods were constrained to the same amount of computation time.
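
    The objective being minimized is simple to state: for each block implied by the row and column partitions, count the cheaper of converting it to all 0s or all 1s. The sketch below implements that cost together with a bare relocation pass over row objects; the tabu layer of the proposed heuristic, which forbids recently reversed moves, would wrap this loop and is omitted here, as is the symmetric pass over column objects. Data and cluster counts are illustrative.

      # Two-mode blockmodeling cost plus a greedy row-relocation pass.
      import numpy as np

      def block_cost(A, rows, cols, kr, kc):
          cost = 0
          for r in range(kr):
              for c in range(kc):
                  block = A[np.ix_(rows == r, cols == c)]
                  ones = int(block.sum())
                  cost += min(ones, block.size - ones)   # null vs complete ideal
          return cost

      def relocate_rows(A, rows, cols, kr, kc, sweeps=20, seed=7):
          rng = np.random.default_rng(seed)
          for _ in range(sweeps):
              moved = False
              for i in rng.permutation(A.shape[0]):
                  old = rows[i]
                  costs = []
                  for r in range(kr):                    # try every row cluster
                      rows[i] = r
                      costs.append(block_cost(A, rows, cols, kr, kc))
                  rows[i] = int(np.argmin(costs))
                  moved = moved or rows[i] != old
              if not moved:                              # local optimum reached
                  break
          return rows, block_cost(A, rows, cols, kr, kc)

      rng = np.random.default_rng(8)
      A = (rng.random((30, 20)) < 0.3).astype(int)       # toy two-mode matrix
      rows = rng.integers(0, 3, 30)
      cols = rng.integers(0, 2, 20)
      print(relocate_rows(A, rows, cols, kr=3, kc=2))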

  14. Novel ID-based anti-collision approach for RFID

    NASA Astrophysics Data System (ADS)

    Zhang, De-Gan; Li, Wen-Bin

    2016-09-01

    A novel correlation-ID-based (CID) anti-collision approach for RFID, under the banner of the Internet of Things (IoT), is presented in this paper. The key insights are as follows: building on the deterministic algorithms based on the binary search tree, we propose a method to increase the association between tags so that tags can proactively send their own IDs under certain trigger conditions; at the same time, we present a multi-tree search method for querying. When the number of tags is small, replacing the actual ID with a temporary ID greatly reduces the number of times the reader reads and writes tag IDs. Active tags send data to the reader by way of modulated binary pulses. When this method is applied to the non-deterministic ALOHA algorithms, the reader can determine the locations of the empty slots from the positions of the binary pulses, and so avoids the efficiency loss caused by querying empty slots. Theory and experiment show that this method can greatly improve the recognition efficiency of the system when applied to either search-tree or ALOHA anti-collision algorithms.
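
    The underlying binary-tree splitting that such deterministic schemes extend is easy to simulate. The toy reader below queries ID prefixes and splits on collisions; the CID temporary-ID assignment and pulse-position tricks described above are not modelled.

      # Toy binary-tree anti-collision: one recursive call = one reader query.
      import random

      def query_tree(tags, prefix=""):
          matching = [t for t in tags if t.startswith(prefix)]
          if len(matching) <= 1:              # empty slot or tag identified
              if matching:
                  print("identified", matching[0])
              return 1
          # collision: split the prefix and query both subtrees
          return 1 + query_tree(tags, prefix + "0") + query_tree(tags, prefix + "1")

      random.seed(11)
      tags = [format(i, "08b") for i in random.sample(range(256), 6)]  # distinct IDs
      print("reader queries needed:", query_tree(tags))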

  15. A deterministic (non-stochastic) low frequency method for geoacoustic inversion.

    PubMed

    Tolstoy, A

    2010-06-01

    It is well known that multiple frequency sources are necessary for accurate geoacoustic inversion. This paper presents an inversion method which uses the low frequency (LF) spectrum only to estimate bottom properties even in the presence of expected errors in source location, phone depths, and ocean sound-speed profiles. Matched field processing (MFP) along a vertical array is used. The LF method first conducts an exhaustive search of the (five) parameter search space (sediment thickness, sound-speed at the top of the sediment layer, the sediment layer sound-speed gradient, the half-space sound-speed, and water depth) at 25 Hz and continues by retaining only the high MFP value parameter combinations. Next, frequency is slowly increased while again retaining only the high value combinations. At each stage of the process, only those parameter combinations which give high MFP values at all previous LF predictions are considered (an ever shrinking set). It is important to note that a complete search of each relevant parameter space seems to be necessary not only at multiple (sequential) frequencies but also at multiple ranges in order to eliminate sidelobes, i.e., false solutions. Even so, there are no mathematical guarantees that one final, unique "solution" will be found.
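
    The frequency-sequenced pruning at the core of the method can be shown generically. In the sketch below a synthetic stand-in replaces the real matched-field correlator, and the five-parameter grids, the frequency schedule and the retention threshold are all illustrative assumptions; only the shrinking-set logic mirrors the description above.

      # Sequential low-frequency pruning over a five-parameter grid (toy MFP).
      import itertools
      import numpy as np

      def mfp_value(params, freq):            # placeholder for a real MFP run
          truth = np.array([20.0, 1600.0, 1.5, 1700.0, 100.0])
          mismatch = np.sum(((np.asarray(params) - truth) / truth) ** 2)
          return float(np.exp(-freq * mismatch))

      grids = [np.linspace(5, 50, 8),         # sediment thickness (m)
               np.linspace(1500, 1700, 8),    # sound speed at sediment top (m/s)
               np.linspace(0.0, 3.0, 6),      # sediment sound-speed gradient (1/s)
               np.linspace(1600, 1800, 8),    # half-space sound speed (m/s)
               np.linspace(90, 110, 5)]       # water depth (m)

      survivors = list(itertools.product(*grids))
      for freq in (25, 50, 75, 100):          # slowly increasing frequency
          scores = [mfp_value(p, freq) for p in survivors]
          cut = 0.9 * max(scores)             # keep only high-MFP combinations
          survivors = [p for p, s in zip(survivors, scores) if s >= cut]
          print(freq, "Hz:", len(survivors), "surviving combinations")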

  16. Probabilistic vs. deterministic fiber tracking and the influence of different seed regions to delineate cerebellar-thalamic fibers in deep brain stimulation.

    PubMed

    Schlaier, Juergen R; Beer, Anton L; Faltermeier, Rupert; Fellner, Claudia; Steib, Kathrin; Lange, Max; Greenlee, Mark W; Brawanski, Alexander T; Anthofer, Judith M

    2017-06-01

    This study compared tractography approaches for identifying cerebellar-thalamic fiber bundles relevant to planning target sites for deep brain stimulation (DBS). In particular, probabilistic and deterministic tracking of the dentate-rubro-thalamic tract (DRTT) and differences between the spatial courses of the DRTT and the cerebello-thalamo-cortical (CTC) tract were compared. Six patients with movement disorders were examined by magnetic resonance imaging (MRI), including two sets of diffusion-weighted images (12 and 64 directions). Probabilistic and deterministic tractography was applied on each diffusion-weighted dataset to delineate the DRTT. Results were compared with regard to sensitivity in revealing the DRTT, detection of additional fiber tracts, and processing time. Two sets of regions of interest (ROIs) guided deterministic tractography of the DRTT or the CTC, respectively. Tract distances to an atlas-based reference target were compared. Probabilistic fiber tracking with 64 orientations detected the DRTT in all twelve hemispheres. Deterministic tracking detected the DRTT in nine (12 directions) and in only two (64 directions) hemispheres. Probabilistic tracking was more sensitive in detecting additional fibers (e.g. ansa lenticularis and medial forebrain bundle) than deterministic tracking. Probabilistic tracking took substantially longer than deterministic tracking. Deterministic tracking was more sensitive in detecting the CTC than the DRTT. CTC tracts were located adjacent but consistently more posterior to DRTT tracts. These results suggest that probabilistic tracking is more sensitive and robust in detecting the DRTT but harder to implement than deterministic approaches. Although the sensitivity of deterministic tracking is higher for the CTC than the DRTT, targets for DBS based on these tracts likely differ. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  17. Searching in clutter : visual attention strategies of expert pilots

    DOT National Transportation Integrated Search

    2012-10-22

    Clutter can slow visual search. However, experts may develop attention strategies that alleviate the effects of clutter on search performance. In the current study we examined the effects of global and local clutter on visual search performance and a...

  18. Ten-year global distribution of downwelling longwave radiation

    NASA Astrophysics Data System (ADS)

    Pavlakis, K. G.; Hatzidimitriou, D.; Matsoukas, C.; Drakakis, E.; Hatzianastassiou, N.; Vardavas, I.

    2003-10-01

    Downwelling longwave fluxes, DLFs, have been derived for each month over a ten year period (1984-1993), on a global scale with a resolution of 2.5° × 2.5°. The fluxes were computed using a deterministic model for atmospheric radiation transfer, along with satellite and reanalysis data for the key atmospheric input parameters, i.e. cloud properties, and specific humidity and temperature profiles. The cloud climatologies were taken from the latest released and improved International Satellite Climatology Project D2 series. Specific humidity and temperature vertical profiles were taken from three different reanalysis datasets; NCEP/NCAR, GEOS, and ECMWF (acronyms explained in main text). DLFs were computed for each reanalysis dataset, with differences reaching values as high as 30 Wm-2 in specific regions, particularly over high altitude areas and deserts. However, globally, the agreement is good, with the rms of the difference between the DLFs derived from the different reanalysis datasets ranging from 5 to 7 Wm-2. The results are presented as geographical distributions and as time series of hemispheric and global averages. The DLF time series based on the different reanalysis datasets show similar seasonal and inter-annual variations, and similar anomalies related to the 86/87 El Niño and 89/90 La Niña events. The global ten-year average of the DLF was found to be between 342.2 Wm-2 and 344.3 Wm-2, depending on the dataset. We also conducted a detailed sensitivity analysis of the calculated DLFs to the key input data. Plots are given that can be used to obtain a quick assessment of the sensitivity of the DLF to each of the three key climatic quantities, for specific climatic conditions corresponding to different regions of the globe. Our model downwelling fluxes are validated against available data from ground-based stations distributed over the globe, as given by the Baseline Surface Radiation Network. There is a negative bias of the model fluxes when compared against BSRN fluxes, ranging from -7 to -9 Wm-2, mostly caused by low cloud amount differences between the station and satellite measurements, particularly in cold climates. Finally, we compare our model results with those of other deterministic models and general circulation models.

  19. Ten-year global distribution of downwelling longwave radiation

    NASA Astrophysics Data System (ADS)

    Pavlakis, K. G.; Hatzidimitriou, D.; Matsoukas, C.; Drakakis, E.; Hatzianastassiou, N.; Vardavas, I.

    2004-01-01

    Downwelling longwave fluxes, DLFs, have been derived for each month over a ten year period (1984-1993), on a global scale with a spatial resolution of 2.5x2.5 degrees and a monthly temporal resolution. The fluxes were computed using a deterministic model for atmospheric radiation transfer, along with satellite and reanalysis data for the key atmospheric input parameters, i.e. cloud properties, and specific humidity and temperature profiles. The cloud climatologies were taken from the latest released and improved International Satellite Climatology Project D2 series. Specific humidity and temperature vertical profiles were taken from three different reanalysis datasets; NCEP/NCAR, GEOS, and ECMWF (acronyms explained in main text). DLFs were computed for each reanalysis dataset, with differences reaching values as high as 30 Wm-2 in specific regions, particularly over high altitude areas and deserts. However, globally, the agreement is good, with the rms of the difference between the DLFs derived from the different reanalysis datasets ranging from 5 to 7 Wm-2. The results are presented as geographical distributions and as time series of hemispheric and global averages. The DLF time series based on the different reanalysis datasets show similar seasonal and inter-annual variations, and similar anomalies related to the 86/87 El Niño and 89/90 La Niña events. The global ten-year average of the DLF was found to be between 342.2 Wm-2 and 344.3 Wm-2, depending on the dataset. We also conducted a detailed sensitivity analysis of the calculated DLFs to the key input data. Plots are given that can be used to obtain a quick assessment of the sensitivity of the DLF to each of the three key climatic quantities, for specific climatic conditions corresponding to different regions of the globe. Our model downwelling fluxes are validated against available data from ground-based stations distributed over the globe, as given by the Baseline Surface Radiation Network. There is a negative bias of the model fluxes when compared against BSRN fluxes, ranging from -7 to -9 Wm-2, mostly caused by low cloud amount differences between the station and satellite measurements, particularly in cold climates. Finally, we compare our model results with those of other deterministic models and general circulation models.

  20. A Comparison of Techniques for Scheduling Earth-Observing Satellites

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2004-01-01

    Scheduling observations by coordinated fleets of Earth Observing Satellites (EOS) involves large search spaces, complex constraints and poorly understood bottlenecks, conditions where evolutionary and related algorithms are often effective. However, there are many such algorithms and the best one to use is not clear. Here we compare several such techniques: variants of the genetic algorithm, stochastic hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling, on ten realistically sized EOS scheduling problems. Schedules are represented by a permutation (non-temporal ordering) of the observation requests. A simple deterministic scheduler assigns times and resources to each observation request in the order indicated by the permutation, discarding those that violate the constraints created by previously scheduled observations. Simulated annealing performs best. Random mutation outperformed a more 'intelligent' mutator. Furthermore, the best mutator, by a small margin, was a novel approach we call temperature-dependent random sampling, which makes large changes in the early stages of evolution and smaller changes towards the end of the search.
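
    The permutation-plus-greedy-scheduler representation described above maps naturally onto a few dozen lines of code. The following is a minimal sketch under invented assumptions (each request is a satellite plus a set of feasible time slots, one observation per satellite per slot); it is not the authors' scheduler, and the geometric cooling schedule is a generic stand-in.

```python
import math
import random

def greedy_schedule(perm, requests):
    """Deterministic scheduler: walk the permutation, give each request its
    first feasible (satellite, slot) pair, discard conflicting requests."""
    occupied, scheduled = set(), []
    for idx in perm:
        sat, windows = requests[idx]
        slot = next((t for t in windows if (sat, t) not in occupied), None)
        if slot is not None:
            occupied.add((sat, slot))
            scheduled.append((idx, slot))
    return scheduled

def anneal(requests, steps=20000, t0=2.0, t_end=0.01, rng=random.Random(0)):
    """Simulated annealing over permutations with a swap mutation."""
    perm = list(range(len(requests)))
    rng.shuffle(perm)
    score = len(greedy_schedule(perm, requests))
    best, best_score = perm[:], score
    for k in range(steps):
        t = t0 * (t_end / t0) ** (k / steps)          # geometric cooling
        i, j = rng.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
        new = len(greedy_schedule(perm, requests))    # maximize scheduled obs
        if new >= score or rng.random() < math.exp((new - score) / t):
            score = new
            if score > best_score:
                best, best_score = perm[:], score
        else:
            perm[i], perm[j] = perm[j], perm[i]       # reject: undo the swap
    return best, best_score

# Toy instance: request -> (satellite id, feasible time slots).
requests = [(0, [1, 2]), (0, [1]), (1, [2]), (1, [2, 3])]
print(anneal(requests)[1])   # number of requests scheduled (best is 4)
```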

  1. A deterministic evaluation of heat stress mitigation and feed cost under climate change within the smallholder dairy sector.

    PubMed

    York, L; Heffernan, C; Rymer, C; Panda, N

    2017-05-01

    In the global South, dairying is often promoted as a means of poverty alleviation. Yet, under conditions of climate warming, little is known regarding the ability of small-scale dairy producers to maintain production and/or the robustness of possible adaptation options in meeting the challenges presented, particularly heat stress. The authors created a simple, deterministic model to explore the influence of breed and heat stress relief options on smallholder dairy farmers in Odisha, India. Breeds included indigenous Indian (non-descript), low-grade Jersey crossbreed and high-grade Jersey crossbreed. Relief strategies included providing shade, fanning and bathing. The impacts of critical predicted global climate parameters, namely a 2°C and a 4°C temperature rise, were explored. A feed price scenario was modelled to illustrate the importance of feed in impact estimation. Feed costs were increased by 10% to 30%. Across the simulations, high-grade Jersey crossbreeds maintained higher milk yields, despite being the most sensitive to the negative effects of temperature. Low-capital relief strategies were the most effective at reducing heat stress impacts on household income. However, as feed costs increased, the lower-grade Jersey crossbreed became the most profitable breed. The high-grade Jersey crossbreed was only marginally (4.64%) more profitable than the indigenous breed. The results demonstrate the importance of understanding the factors and practical trade-offs that underpin adaptation. The model also highlights the need for hot-climate dairying projects and programmes to consider animal genetic resources alongside environmentally sustainable adaptation measures for the greatest poverty impact.

  2. SGO: A fast engine for ab initio atomic structure global optimization by differential evolution

    NASA Astrophysics Data System (ADS)

    Chen, Zhanghui; Jia, Weile; Jiang, Xiangwei; Li, Shu-Shen; Wang, Lin-Wang

    2017-10-01

    As high-throughput calculations and materials genome approaches become more and more popular in materials science, the search for optimal ways to predict the global minimum of an atomic structure is a high research priority. This paper presents a fast method for the global search of atomic structures at the ab initio level. The structures global optimization (SGO) engine consists of a high-efficiency differential evolution algorithm, accelerated local relaxation methods and a plane-wave density functional theory code running on GPU machines. The purpose is to show what can be achieved by combining superior algorithms at the different levels of the searching scheme. SGO can search the global-minimum configurations of crystals, two-dimensional materials and quantum clusters without prior symmetry restrictions in a relatively short time (half an hour to several hours for systems with fewer than 25 atoms), thus making such a task a routine calculation. Comparisons with other existing methods such as minima hopping and the genetic algorithm are provided. One motivation of our study is to investigate the properties of magnetic systems in different phases. The SGO engine is capable of surveying the local minima surrounding the global minimum, which provides information on the overall energy landscape of a given system. Using this capability we have found several new configurations for the test systems, explored their energy landscape, and demonstrated that the magnetic moment of metal clusters fluctuates strongly between different local minima.
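
    As a rough illustration of the differential-evolution layer of such an engine, the sketch below evolves cluster coordinates with the standard DE/rand/1/bin scheme, using a Lennard-Jones potential as a cheap stand-in for the ab initio energies; the real SGO couples the search to a GPU plane-wave DFT code and accelerated local relaxation, both omitted here.

```python
import numpy as np

def lj_energy(coords):
    """Total Lennard-Jones energy of a cluster (epsilon = sigma = 1)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    r = d[np.triu_indices(len(coords), k=1)]   # unique pair distances
    return np.sum(4.0 * (r**-12 - r**-6))

def de_search(n_atoms, pop_size=20, gens=200, F=0.5, CR=0.9, box=3.0,
              rng=np.random.default_rng(0)):
    pop = rng.uniform(-box, box, size=(pop_size, n_atoms, 3))
    energies = np.array([lj_energy(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])   # DE/rand/1 mutation
            mask = rng.random((n_atoms, 3)) < CR      # binomial crossover
            trial = np.where(mask, mutant, pop[i])
            e = lj_energy(trial)
            if e < energies[i]:                       # greedy selection
                pop[i], energies[i] = trial, e
    best = np.argmin(energies)
    return pop[best], energies[best]

coords, e = de_search(6)
print(e)   # energy of the best 6-atom cluster found (no local relaxation)
```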

  3. Omokage search: shape similarity search service for biomolecular structures in both the PDB and EMDB.

    PubMed

    Suzuki, Hirofumi; Kawabata, Takeshi; Nakamura, Haruki

    2016-02-15

    Omokage search is a service for searching the global shape similarity of biological macromolecules and their assemblies in both the Protein Data Bank (PDB) and the Electron Microscopy Data Bank (EMDB). The server compares the global shapes of assemblies independently of sequence order and the number of subunits. As a search query, the user inputs a structure ID (PDB ID or EMDB ID) or uploads an atomic model or 3D density map to the server. The search is usually completed within 1 min, using one-dimensional profiles (incremental distance rank profiles) to characterize the shapes. Using the gmfit (Gaussian mixture model fitting) program, the structures found are fitted onto the query structure and their superimposed structures are displayed in the Web browser. Our service provides new structural perspectives to life science researchers. Omokage search is freely accessible at http://pdbj.org/omokage/. © The Author 2015. Published by Oxford University Press.

  4. Relations between perceptual and conceptual scope: how global versus local processing fits a focus on similarity versus dissimilarity.

    PubMed

    Förster, Jens

    2009-02-01

    Nine studies showed a bidirectional link (a) between a global processing style and generation of similarities and (b) between a local processing style and generation of dissimilarities. In Experiments 1-4, participants were primed with global versus local perception styles and then asked to work on an allegedly unrelated generation task. Across materials, participants generated more similarities than dissimilarities after global priming, whereas for participants with local priming, the opposite was true. Experiments 5-6 demonstrated a bidirectional link whereby participants who were first instructed to search for similarities attended more to the gestalt of a stimulus than to its details, whereas the reverse was true for those who were initially instructed to search for dissimilarities. Because important psychological variables are correlated with processing styles, in Experiments 7-9, temporal distance, a promotion focus, and high power were predicted and shown to enhance the search for similarities, whereas temporal proximity, a prevention focus, and low power enhanced the search for dissimilarities. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  5. A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models

    NASA Astrophysics Data System (ADS)

    Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.

    2010-09-01

    For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline interpolation of the low-resolution data, (2) a so-called 'deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested based on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2 reduces root mean square errors. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and for a fully coupled model system.
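
    A toy version of the three-step disaggregation might look as follows. The spline order, the linear 'deterministic' rule and the AR(1) noise along one axis are illustrative stand-ins for the paper's fitted rules, and surface_hr is assumed to be a surface predictor already available on the fine grid.

```python
import numpy as np
from scipy.ndimage import zoom

def disaggregate(coarse, surface_hr, factor, slope, ar1=0.8, noise_std=0.1,
                 rng=np.random.default_rng(1)):
    # Step 1: smooth spline interpolation of the coarse field to the fine grid
    fine = zoom(coarse, factor, order=2)
    # Step 2: 'deterministic' correction from a high-resolution surface
    # predictor, here a simple linear rule on its anomaly (surface_hr must
    # have the same shape as the interpolated fine field)
    fine = fine + slope * (surface_hr - surface_hr.mean())
    # Step 3: add autoregressive noise (AR(1) along x) to restore subgrid
    # variability and temporal/spatial autocorrelation
    noise = rng.normal(0.0, noise_std, fine.shape)
    for j in range(1, fine.shape[1]):
        noise[:, j] = ar1 * noise[:, j - 1] + np.sqrt(1 - ar1**2) * noise[:, j]
    return fine + noise
```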

  6. Method to deterministically study photonic nanostructures in different experimental instruments.

    PubMed

    Husken, B H; Woldering, L A; Blum, C; Vos, W L

    2009-01-01

    We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments, without the use of artificially fabricated markers, with the aim of studying photonic structures. To this end, a detailed map of the spatial surroundings of the nanostructure is made during its fabrication. These maps are made using a series of micrographs with successively decreasing magnifications. The micrographs reveal intrinsic and characteristic geometric features that can subsequently be used in different setups as markers. As an illustration, we probe surface cavities with radii of 65 nm on a silica opal photonic crystal with various setups: a focused ion beam workstation, a scanning electron microscope (SEM), a wide-field optical microscope and a confocal microscope. We use cross-correlation techniques to recover a small area imaged with the SEM within a large area photographed with the optical microscope, which provides a possible avenue to automatic searching. We show how both structural and optical reflectivity data can be obtained from one and the same nanostructure. Since our approach does not use artificial grids or markers, it is of particular interest for samples whose structure is not known a priori, like samples created solely by self-assembly. In addition, our method is not restricted to conducting samples.
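
    The cross-correlation recovery step can be illustrated in a few lines. The sketch below uses synthetic images and scikit-image's normalized cross-correlation rather than the authors' actual pipeline; the image sizes and crop location are invented.

```python
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(0)
optical = rng.random((512, 512))          # stand-in for the wide-field image
y0, x0 = 200, 310
sem = optical[y0:y0 + 64, x0:x0 + 64]     # stand-in for the small SEM view

ncc = match_template(optical, sem)        # normalized cross-correlation map
y, x = np.unravel_index(np.argmax(ncc), ncc.shape)
print(y, x)                               # recovers the crop origin (200, 310)
```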

  7. Using Mean Orbit Period in Mars Reconnaissance Orbiter Maneuver Design

    NASA Technical Reports Server (NTRS)

    Chung, Min-Kun J.; Menon, Premkumar R.; Wagner, Sean V.; Williams, Jessica L.

    2014-01-01

    Mars Reconnaissance Orbiter (MRO) has provided communication relays for a number of Mars spacecraft. In 2016 MRO is expected to support a relay for NASA's Interior Exploration using Seismic Investigations, Geodesy and Heat Transport (InSight) spacecraft. In addition, support may be needed by another mission, ESA's ExoMars EDL Demonstrator Module (EDM), only 21 days after the InSight coverage. The close proximity of these two events presents a unique challenge to a conventional orbit synchronization maneuver, where one deterministic maneuver is executed prior to each relay. Since the two events are close together and the difference in required phasing between InSight and EDM may be up to half an orbit (yielding a large execution error), the downtrack timing error can increase rapidly at the EDM encounter. Thus, a new maneuver strategy that does not require a deterministic maneuver between the two events (only a small statistical cleanup) is proposed in this paper. This proposed strategy rests heavily on the stability of the mean orbital period. The ability to search for and set a specified mean period is fundamental to the proposed maneuver design, as well as to understanding the scope of the problem. The proposed strategy is explained and its result is used to understand and solve the problem in the flight operations environment.

  8. An efficient and practical approach to obtain a better optimum solution for structural optimization

    NASA Astrophysics Data System (ADS)

    Chen, Ting-Yu; Huang, Jyun-Hao

    2013-08-01

    For many structural optimization problems, it is hard or even impossible to find the global optimum solution owing to unaffordable computational cost. An alternative and practical way of thinking is thus proposed in this research: obtain an optimum design which may not be global but is better than most local optimum solutions that can be found by gradient-based search methods. The way to reach this goal is to find a smaller search space for gradient-based search methods, and it is found in this research that data mining can accomplish this easily. The activities of classification, association and clustering in data mining are employed to reduce the original design space. For unconstrained optimization problems, the data mining activities are used to find a smaller search region which contains the global or better local solutions; for constrained optimization problems, they are used to find the feasible region or the feasible region with better objective values. Numerical examples show that the optimum solutions found in the reduced design space by sequential quadratic programming (SQP) are indeed much better than those found by SQP in the original design space. The optimum solutions found in a reduced space by SQP are sometimes even better than the solution found using a hybrid global search method with approximate structural analyses.
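
    A minimal sketch of the reduce-then-search idea, with invented details: sample the original space, use clustering (one of the data mining activities mentioned) to pick a promising region, then run a gradient-based method inside it. SciPy's SLSQP stands in for SQP, and the objective is a toy multimodal function rather than a structural analysis.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.cluster import KMeans

def f(x):                                    # toy multimodal objective
    return np.sum(x**2) + 10 * np.sum(np.cos(3 * x))

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(2000, 4))       # sample the original space
y = np.array([f(x) for x in X])

labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
best = min(range(8), key=lambda k: y[labels == k].mean())
sub = X[labels == best]                      # reduced search region
lo, hi = sub.min(axis=0), sub.max(axis=0)    # its bounding box

x0 = sub[np.argmin(y[labels == best])]       # best sampled point as start
res = minimize(f, x0, method='SLSQP', bounds=list(zip(lo, hi)))
print(res.x, res.fun)
```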

  9. Contextual cueing by global features

    PubMed Central

    Kunar, Melina A.; Flusberg, Stephen J.; Wolfe, Jeremy M.

    2008-01-01

    In visual search tasks, attention can be guided to a target item, appearing amidst distractors, on the basis of simple features (e.g. find the red letter among green). Chun and Jiang’s (1998) “contextual cueing” effect shows that RTs are also speeded if the spatial configuration of items in a scene is repeated over time. In these studies we ask if global properties of the scene can speed search (e.g. if the display is mostly red, then the target is at location X). In Experiment 1a, the overall background color of the display predicted the target location. Here the predictive color could appear 0, 400 or 800 msec in advance of the search array. Mean RTs are faster in predictive than in non-predictive conditions. However, there is little improvement in search slopes. The global color cue did not improve search efficiency. Experiments 1b-1f replicate this effect using different predictive properties (e.g. background orientation/texture, stimuli color etc.). The results show a strong RT effect of predictive background but (at best) only a weak improvement in search efficiency. A strong improvement in efficiency was found, however, when the informative background was presented 1500 msec prior to the onset of the search stimuli and when observers were given explicit instructions to use the cue (Experiment 2). PMID:17355043

  10. Global OpenSearch

    NASA Astrophysics Data System (ADS)

    Newman, D. J.; Mitchell, A. E.

    2015-12-01

    At AGU 2014, NASA EOSDIS demonstrated a case study of an OpenSearch framework for Earth science data discovery. That framework leverages the IDN and CWIC OpenSearch API implementations to provide seamless discovery of data through the 'two-step' discovery process as outlined by the Federation of Earth Science Information Partners (ESIP) OpenSearch Best Practices. But how would an Earth scientist leverage this framework, and what are the benefits? Using a client that understands the OpenSearch specification and, for further clarity, the various best practices and extensions, a scientist can discover a plethora of data not normally accessible either by traditional methods (NASA Earth Data Search, Reverb, etc.) or by direct methods (going to the source of the data). We will demonstrate, via the CWICSmart web client, how an earth scientist can access regional data on regional phenomena in a uniform and aggregated manner. We will demonstrate how an earth scientist can 'globalize' their discovery. You want to find local data on 'sea surface temperature of the Indian Ocean'? We can help you with that. 'European meteorological data'? Yes. 'Brazilian rainforest satellite imagery'? That too. CWIC allows you to get earth science data in a uniform fashion from a large number of disparate, worldwide agencies. This is what we mean by Global OpenSearch.

  11. Parameterizing sorption isotherms using a hybrid global-local fitting procedure.

    PubMed

    Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J

    2017-05-01

    Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of the state of the art for isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization, followed by Powell's derivative-free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to the previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions, where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure. In some cases, improvements in the fitness function were associated with slight (<10%) changes in parameter values, but in other cases significant (>50%) changes in parameter values were noted. Despite these differences, the influence of isotherm misspecification on contaminant transport predictions was quite variable and difficult to predict from inspection of the isotherms. Copyright © 2017 Elsevier B.V. All rights reserved.
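
    The hybrid global-local idea can be sketched with SciPy, substituting differential evolution plus a least-squares polish for the paper's PSO/Powell/Gauss-Marquardt-Levenberg chain. The Freundlich isotherm, the data and the unit weights in the fitness function are all invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

C = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0])   # aqueous concentrations
q = np.array([0.9, 1.4, 3.2, 4.1, 7.6, 9.3])     # sorbed concentrations

def freundlich(p, C):
    Kf, n = p
    return Kf * C**(1.0 / n)

def wssr(p):                                      # SSR fitness (unit weights)
    return np.sum((q - freundlich(p, C))**2)

# Global stage: search the full parameter box for the basin of the optimum.
glob = differential_evolution(wssr, bounds=[(1e-3, 50), (0.2, 5)], seed=0)
# Local stage: polish the global result with non-linear least squares.
local = least_squares(lambda p: q - freundlich(p, C), glob.x)
print(local.x)   # refined (Kf, n)
```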

  12. Global and public health core competencies for nursing education: A systematic review of essential competencies.

    PubMed

    Clark, Megan; Raffray, Marie; Hendricks, Kristin; Gagnon, Anita J

    2016-05-01

    Nurses are learning and practicing in an increasingly global world. Both nursing schools and nursing students are seeking guidance as they integrate global health into their learning and teaching. This systematic review is intended to identify the most common global and public health core competencies found in the literature and to better inform schools of nursing wishing to include global health content in their curricula. Systematic review. An online search of the CINAHL and Medline databases, as well as inclusion of pertinent gray literature, was conducted for articles published before 2013. Relevant literature on global health (GH) and public and community health (PH/CH) competencies was reviewed to determine recommendations for both sets of competencies using a combination of search terms. Studies must have addressed competencies as defined in the literature and must have been pertinent to GH or PH/CH. The databases were systematically searched and, after reading the full content of the included studies, key concepts were extracted and synthesized. Twenty-five studies were identified, resulting in a list of 14 global health core competencies. These competencies are applicable to a variety of health disciplines, but can particularly inform the efforts of nursing schools to integrate global health concepts into their curricula. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Deterministic quantum dense coding networks

    NASA Astrophysics Data System (ADS)

    Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal

    2018-07-01

    We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state - an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters, as well as the four-qubit Dicke states, can provide a quantum advantage for sending the information in deterministic dense coding. Interestingly, however, numerical simulations in the three-qubit scenario reveal that the percentage of states from the GHZ class that are deterministically dense-codeable is higher than that of states from the W class.

  14. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to problems involving large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for a fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and others such as the Gravitational Search, Cuckoo Search, and Backtracking Search algorithms for global optimization. Moreover, the GGS approach has been applied to computational chemistry problems, namely finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models at efficient computational cost. © 2015 Wiley Periodicals, Inc.

  15. Weighted Global Artificial Bee Colony Algorithm Makes Gas Sensor Deployment Efficient

    PubMed Central

    Jiang, Ye; He, Ziqing; Li, Yanhai; Xu, Zhengyi; Wei, Jianming

    2016-01-01

    This paper proposes an improved artificial bee colony algorithm named Weighted Global ABC (WGABC), which is designed to improve the convergence speed in the search stage of the solution search equation. The new method not only considers the effect of global factors on the convergence speed in the search phase, but also provides an expression for the global factor weights. Experiments on benchmark functions proved that the algorithm can greatly improve the convergence speed. We derive the gas diffusion concentration based on CFD theory and then simulate the gas diffusion model, including the influence of buildings, based on the algorithm. Simulation verified the effectiveness of the WGABC algorithm in improving the convergence speed in the optimal deployment scheme of gas sensors. Finally, it is verified that the optimal deployment method based on WGABC can greatly improve the monitoring efficiency of sensors compared with conventional deployment methods. PMID:27322262
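
    A GABC-style candidate-generation step with a weighted global-best term might look like the sketch below. The weight expression w is an assumption for illustration, not the formula derived in the paper, and for brevity the random neighbor may coincide with the current source (standard ABC excludes it).

```python
import numpy as np

def wgabc_candidate(x, pop, gbest, it, max_it, rng):
    """One solution-search step: perturb a random dimension using a random
    neighbor plus a weighted attraction toward the global best."""
    k = rng.integers(len(pop))           # random neighbor food source
    j = rng.integers(len(x))             # random dimension to perturb
    phi = rng.uniform(-1, 1)
    psi = rng.uniform(0, 1.5)
    w = 0.5 + 0.5 * it / max_it          # assumed global-factor weight
    v = x.copy()
    v[j] = x[j] + phi * (x[j] - pop[k][j]) + w * psi * (gbest[j] - x[j])
    return v

rng = np.random.default_rng(0)
pop = [rng.uniform(-5, 5, 3) for _ in range(10)]
gbest = min(pop, key=lambda s: np.sum(s**2))
v = wgabc_candidate(pop[0], pop, gbest, it=10, max_it=100, rng=rng)
print(v)
```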

  16. Visual Search in ASD: Instructed Versus Spontaneous Local and Global Processing.

    PubMed

    Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2016-09-01

    Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual processing in ASD versus TD children. Experiment 1 revealed both groups to be equally sensitive to the absence or presence of a distractor, regardless of the type of target or type of distractor. Experiment 2 revealed a differential effect of task instruction for ASD compared to TD, regardless of the type of target. Taken together, these results stress the importance of task factors in the study of local-global visual processing in ASD.

  17. Intelligent Text Retrieval and Knowledge Acquisition from Texts for NASA Applications: Preprocessing Issues

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A system that retrieves problem reports from a NASA database is described. The database is queried with natural language questions. Part-of-speech tags are first assigned to each word in the question using a rule-based tagger. A partial parse of the question is then produced with independent sets of deterministic finite state automata. Using the partial parse information, a look-up strategy searches the database for problem reports relevant to the question. A bigram stemmer and irregular verb conjugations have been incorporated into the system to improve accuracy. The system is evaluated on a set of fifty-five questions posed by NASA engineers. A discussion of future research is also presented.

  18. Optimal ancilla-free Pauli+V circuits for axial rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blass, Andreas; Bocharov, Alex; Gurevich, Yuri

    We address the problem of optimal representation of single-qubit rotations in a certain unitary basis consisting of the so-called V gates and Pauli matrices. The V matrices were proposed by Lubotzky, Phillips, and Sarnak [Commun. Pure Appl. Math. 40, 401-420 (1987)] as a purely geometric construct in 1987 and recently found applications in quantum computation. They allow for exceptionally simple quantum circuit synthesis algorithms based on quaternionic factorization. We adapt the deterministic-search technique initially proposed by Ross and Selinger to synthesize approximating Pauli+V circuits of optimal depth for single-qubit axial rotations. Our synthesis procedure, based on simple SL₂(ℤ) geometry, is almost elementary.

  19. Cluster structures influenced by interaction with a surface.

    PubMed

    Witt, Christopher; Dieterich, Johannes M; Hartke, Bernd

    2018-05-30

    Clusters on surfaces are vitally important for nanotechnological applications. Clearly, cluster-surface interactions heavily influence the preferred cluster structures, compared to clusters in vacuum. Nevertheless, systematic explorations and an in-depth understanding of these interactions and how they determine the cluster structures are still lacking. Here we present an extension of our well-established non-deterministic global optimization package OGOLEM from isolated clusters to clusters on surfaces. Applying this approach to intentionally simple Lennard-Jones test systems, we produce a first systematic exploration that relates changes in cluster-surface interactions to resulting changes in adsorbed cluster structures.

  20. The Science-Policy Link: Stakeholder Reactions to the Uncertainties of Future Sea Level Rise

    NASA Astrophysics Data System (ADS)

    Plag, H.; Bye, B.

    2011-12-01

    Policy makers and stakeholders in the coastal zone are equally challenged by the risk of an anticipated rise of coastal Local Sea Level (LSL) as a consequence of future global warming. Many low-lying and often densely populated coastal areas are at risk of increased inundation. More than 40% of the global population lives in or near the coastal zone, and this fraction is steadily increasing. A rise in LSL will increase the vulnerability of coastal infrastructure and population dramatically, with potentially devastating consequences for the global economy, society, and environment. Policy makers are faced with a trade-off between imposing today the often very high costs of coastal protection and adaptation upon national economies and leaving the costs of potential major disasters to future generations. They are in need of actionable information that provides guidance for the development of coastal zones resilient to future sea level changes. Part of this actionable information comes from risk and vulnerability assessments, which require information on future LSL changes as input. In most cases, a deterministic approach has been applied, based on predictions of the plausible range of future LSL trajectories as input. However, there is little consensus in the scientific community on how these trajectories should be determined, and what the boundaries of the plausible range are. Over the last few years, many publications in Science, Nature and other peer-reviewed scientific journals have revealed a broad range of possible futures and significant epistemic uncertainties and gaps concerning LSL changes. Based on this somewhat diffuse science input, policy and decision makers have made rather different choices for mitigation and adaptation in cases such as Venice, The Netherlands, New York City, and the San Francisco Bay area. Replacing the deterministic, prediction-based approach with a statistical one that fully accounts for the uncertainties and epistemic gaps would provide a different kind of science input to policy makers and stakeholders. As in many other insurance problems (for example, earthquakes), where deterministic predictions are not possible and decisions have to be made on the basis of statistics and probabilities, the statistical approach to coastal resilience would require stakeholders to make decisions on the basis of probabilities instead of predictions. The science input for informed decisions on adaptation would consist of general probabilities of decadal- to century-scale sea level changes derived from paleo records, including the probabilities for large and rapid rises. Similar to other problems where the appearance of a hazard is associated with a high risk (like a fire in a house), this approach would also require a monitoring and warning system (a "smoke detector") capable of detecting any onset of a rapid sea level rise.

  1. The detection and stabilisation of limit cycle for deterministic finite automata

    NASA Astrophysics Data System (ADS)

    Han, Xiaoguang; Chen, Zengqiang; Liu, Zhongxin; Zhang, Qing

    2018-04-01

    In this paper, the topological structure properties of deterministic finite automata (DFA), under the framework of the semi-tensor product of matrices, are investigated. First, the dynamics of DFA are converted into a new algebraic form, as a discrete-time linear system, by means of Boolean algebra. Using this algebraic description, an approach for calculating limit cycles of different lengths is given. Second, we present two fundamental concepts, namely, the domain of attraction of a limit cycle and the prereachability set. Based on the prereachability set, an explicit solution for calculating the domain of attraction of a limit cycle is completely characterised. Third, we define the globally attractive limit cycle, and then give a necessary and sufficient condition for verifying whether all state trajectories of a DFA enter a given limit cycle in a finite number of transitions. Fourth, the problem of whether a DFA can be stabilised to a limit cycle by a state feedback controller is discussed. Criteria for limit-cycle stabilisation are established. All state feedback controllers which implement the minimal-length trajectories from each state to the limit cycle are obtained by using the proposed algorithm. Finally, an illustrative example is presented to show the theoretical results.
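
    Outside the semi-tensor-product formalism, the underlying objects are easy to compute directly for a closed-loop DFA with one deterministic transition per state. The sketch below finds all limit cycles and the domain of attraction of a given cycle for an invented transition map.

```python
def find_cycles(step, states):
    """Enumerate all limit cycles of the map step: state -> state."""
    visited, cycles = set(), []
    for s in states:
        path, order = {}, []
        x = s
        while x not in visited and x not in path:
            path[x] = len(order)
            order.append(x)
            x = step(x)
        if x in path:                     # closed a previously unseen cycle
            cycles.append(order[path[x]:])
        visited.update(order)
    return cycles

def attraction_domain(step, states, cycle):
    """States whose trajectories enter the cycle in finitely many steps."""
    cyc, dom = set(cycle), set()
    for s in states:
        x, seen = s, set()
        while x not in cyc and x not in seen:
            seen.add(x)
            x = step(x)
        if x in cyc:
            dom.add(s)
    return dom

step = {0: 1, 1: 2, 2: 1, 3: 2, 4: 4}.get   # toy closed-loop transition map
states = range(5)
print(find_cycles(step, states))             # [[1, 2], [4]]
print(attraction_domain(step, states, [1, 2]))   # {0, 1, 2, 3}
```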

  2. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on the prediction from one model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri, combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which in turn is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
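
    The core of BMA is a weighted mixture of member forecasts, with weights and spreads usually trained by expectation-maximization. The sketch below is a generic Gaussian-kernel version on invented data; it ignores the station-versus-global weighting distinction studied in the paper.

```python
import numpy as np
from scipy.stats import norm

def bma_em(F, y, iters=200):
    """F: (n_obs, n_members) member forecasts; y: (n_obs,) observations."""
    n, m = F.shape
    w = np.full(m, 1.0 / m)
    sigma = np.full(m, y.std() + 1e-6)
    for _ in range(iters):
        like = w * norm.pdf(y[:, None], loc=F, scale=sigma)   # (n, m)
        z = like / like.sum(axis=1, keepdims=True)            # E-step
        w = z.mean(axis=0)                                    # M-step: weights
        sigma = np.sqrt((z * (y[:, None] - F)**2).sum(0) / z.sum(0)) + 1e-9
    return w, sigma

def bma_mean(F_new, w):
    """Deterministic BMA prediction: weighted mean of the members."""
    return F_new @ w

rng = np.random.default_rng(0)
truth = rng.normal(size=50)
F = truth[:, None] + rng.normal(0, [0.3, 0.6, 1.0], size=(50, 3))
w, sigma = bma_em(F, truth)
print(w.round(2))   # most weight goes to the most accurate member
```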

  3. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which adaptively identifies the most significant expansion coefficients. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.

  4. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks

    PubMed Central

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-01-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which adaptively identifies the most significant expansion coefficients. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784

  5. A methodology for the stochastic generation of hourly synthetic direct normal irradiation time series

    NASA Astrophysics Data System (ADS)

    Larrañeta, M.; Moreno-Tejera, S.; Lillo-Bravo, I.; Silva-Pérez, M. A.

    2018-02-01

    Many of the available solar radiation databases only provide global horizontal irradiance (GHI), while there is a growing need for extensive databases of direct normal irradiance (DNI), mainly for the development of concentrated solar power and concentrated photovoltaic technologies. In the present work, we propose a methodology for the generation of synthetic hourly DNI data from hourly average GHI values by dividing the irradiance into a deterministic and a stochastic component, intending to emulate the dynamics of solar radiation. The deterministic component is modeled with a simple classical model. The stochastic component is fitted to measured data in order to maintain the consistency of the synthetic data with the state of the sky, generating statistically significant DNI data with a cumulative frequency distribution very similar to that of the measured data. The adaptation and application of the model to the location of Seville shows significant improvements over the classical models in terms of frequency distribution. Applied to other locations with different climatological characteristics, the proposed methodology also yields better results than the classical models in terms of frequency distribution, reaching a 50% reduction in the Finkelstein-Schafer (FS) and Kolmogorov-Smirnov test integral (KSI) statistics.
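
    The deterministic-plus-stochastic decomposition can be caricatured as follows. The clearness-based deterministic split and the AR(1) residual are invented stand-ins for the paper's fitted components, with night-time values zeroed.

```python
import numpy as np

def synthetic_dni(ghi, cos_zenith, kt_clear=0.7, ar1=0.85, sigma=60.0,
                  rng=np.random.default_rng(0)):
    """ghi, cos_zenith: hourly arrays; returns synthetic hourly DNI (W/m2)."""
    # Deterministic part: a crude clearness-based split of GHI into DNI.
    dni_det = np.clip(kt_clear * ghi / np.maximum(cos_zenith, 0.05), 0, 1100)
    # Stochastic part: AR(1) residual emulating sky-state persistence.
    eps = np.zeros(np.shape(ghi))
    for t in range(1, len(eps)):
        eps[t] = ar1 * eps[t - 1] + rng.normal(0, sigma * np.sqrt(1 - ar1**2))
    return np.clip(dni_det + eps * (cos_zenith > 0), 0, None)
```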

  6. Predictability of short-range forecasting: a multimodel approach

    NASA Astrophysics Data System (ADS)

    García-Moya, Jose-Antonio; Callado, Alfons; Escribà, Pau; Santos, Carlos; Santos-Muñoz, Daniel; Simarro, Juan

    2011-05-01

    Numerical weather prediction (NWP) models (including mesoscale) have limitations when it comes to dealing with severe weather events because extreme weather is highly unpredictable, even in the short range. A probabilistic forecast based on an ensemble of slightly different model runs may help to address this issue. Among other ensemble techniques, Multimodel ensemble prediction systems (EPSs) are proving to be useful for adding probabilistic value to mesoscale deterministic models. A Multimodel Short Range Ensemble Prediction System (SREPS) focused on forecasting the weather up to 72 h has been developed at the Spanish Meteorological Service (AEMET). The system uses five different limited area models (LAMs), namely HIRLAM (HIRLAM Consortium), HRM (DWD), the UM (UKMO), MM5 (PSU/NCAR) and COSMO (COSMO Consortium). These models run with initial and boundary conditions provided by five different global deterministic models, namely IFS (ECMWF), UM (UKMO), GME (DWD), GFS (NCEP) and CMC (MSC). AEMET-SREPS (AE) validation on the large-scale flow, using ECMWF analysis, shows a consistent and slightly underdispersive system. For surface parameters, the system shows high skill forecasting binary events. 24-h precipitation probabilistic forecasts are verified using an up-scaling grid of observations from European high-resolution precipitation networks, and compared with ECMWF-EPS (EC).

  7. Eczema, Atopic Dermatitis, or Atopic Eczema: Analysis of Global Search Engine Trends.

    PubMed

    Xu, Shuai; Thyssen, Jacob P; Paller, Amy S; Silverberg, Jonathan I

    The lack of standardized nomenclature for atopic dermatitis (AD) creates challenges for scientific communication, patient education, and advocacy. We sought to determine the relative popularity of the terms eczema, AD, and atopic eczema (AE) using global search engine volumes. A retrospective analysis of average monthly search volumes from 2014 to 2016 of Google, Bing/Yahoo, and Baidu was performed for eczema, AD, and AE in English and 37 other languages. Google Trends was used to determine the relative search popularity of each term from 2006 to 2016 in English and the top foreign languages, German, Turkish, Russian, and Japanese. Overall, eczema accounted for 1.5 million monthly searches (84%) compared with 247 000 searches for AD (14%) and 44 000 searches for AE (2%). For English language, eczema accounted for 93% of searches compared with 6% for AD and 1% for AE. Search popularity for eczema increased from 2006 to 2016 but remained stable for AD and AE. Given the ambiguity of the term eczema, we recommend the universal use of the next most popular term, AD.

  8. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    USGS Publications Warehouse

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: the geospatial service search finds coarse services on the web, and the ontology reasoning derives refined services from the coarse services. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.

  9. (Meta)Search like Google

    ERIC Educational Resources Information Center

    Rochkind, Jonathan

    2007-01-01

    The ability to search and receive results in more than one database through a single interface--or metasearch--is something many users want. Google Scholar--the search engine of specifically scholarly content--and library metasearch products like Ex Libris's MetaLib, Serials Solution's Central Search, WebFeat, and products based on MuseGlobal used…

  10. Wave ensemble forecast in the Western Mediterranean Sea, application to an early warning system.

    NASA Astrophysics Data System (ADS)

    Pallares, Elena; Hernandez, Hector; Moré, Jordi; Espino, Manuel; Sairouni, Abdel

    2015-04-01

    The Western Mediterranean Sea is a highly heterogeneous and variable area, as reflected in the wind field, the current field and the waves, mainly in the first kilometers offshore. As a result of this variability, the wave forecast in these regions is quite complicated to perform, usually with accuracy problems during energetic storm events. Moreover, it is in these areas that most economic activities take place, including fisheries, sailing, tourism, coastal management and offshore renewable energy platforms. In order to introduce an indicator of the probability of occurrence of the different sea states and give more detailed forecast information to end users, an ensemble wave forecast system is considered. Ensemble prediction systems have been used for meteorological forecasting over the last decades; to deal with the uncertainties in the initial conditions and in the different parametrizations used in the models, which may introduce errors into the forecast, a set of perturbed meteorological simulations is considered as possible future scenarios and compared with the deterministic forecast. In the present work, the SWAN wave model (v41.01) has been implemented for the Western Mediterranean Sea, forced with wind fields produced by the deterministic Global Forecast System (GFS) and the Global Ensemble Forecast System (GEFS). The wind fields include a deterministic forecast (also named the control), between 11 and 21 ensemble members, and 'intelligent' members derived from the ensemble, such as the mean of all members. Four buoys located in the study area, moored in coastal waters, have been used to validate the results. The outputs include the full time series, with a forecast horizon of 8 days, represented in spaghetti diagrams, the spread of the system and the probability of exceeding different thresholds. The main goal of this exercise is to determine the degree of uncertainty of the wave forecast, which is most meaningful between the 5th and the 8th day of the prediction. The information obtained is then included in an early warning system, designed in the framework of the European project iCoast (ECHO/SUB/2013/661009) with the aim of setting alarms in coastal areas depending on the wave conditions, the sea level, flooding and run-up at the coast.

  11. The relationship between stochastic and deterministic quasi-steady state approximations.

    PubMed

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R

    2015-11-23

    The quasi-steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of the QSSA and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using its deterministic counterpart, providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
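
    A heuristic stochastic reduction of the kind discussed can be sketched as a Gillespie simulation whose production propensity is a non-elementary Hill function. The model and parameters below are invented; the sample mean can be compared with the deterministic QSS fixed point obtained from k*K**n / (K**n + x**n) = gamma * x.

```python
import numpy as np

def gillespie_hill(k=10.0, K=20.0, n=2, gamma=0.1, t_end=500.0,
                   rng=np.random.default_rng(0)):
    """Birth-death process with a QSSA-derived (repressive) Hill birth rate."""
    t, x, traj = 0.0, 0, []
    while t < t_end:
        a1 = k * K**n / (K**n + x**n)   # non-elementary production propensity
        a2 = gamma * x                  # elementary degradation propensity
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)  # time to next reaction
        x += 1 if rng.random() < a1 / a0 else -1
        traj.append((t, x))
    return traj

xs = np.array([x for _, x in gillespie_hill()])
print(xs.mean())   # compare with the deterministic QSS fixed point
```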

  12. Operational value of ensemble streamflow forecasts for hydropower production: A Canadian case study

    NASA Astrophysics Data System (ADS)

    Boucher, Marie-Amélie; Tremblay, Denis; Perreault, Luc; Anctil, François

    2010-05-01

    Ensemble and probabilistic forecasts have many advantages over deterministic ones, both in meteorology and hydrology (e.g. Krzysztofowicz, 2001). Mainly, they inform the user about the uncertainty linked to the forecast. It has been brought to attention that such additional information could lead to improved decision making (e.g. Wilks and Hamill, 1995; Mylne, 2002; Roulin, 2007), but very few studies concentrate on operational situations involving the use of such forecasts. In addition, many authors have demonstrated that ensemble forecasts outperform deterministic forecasts in terms of performance (e.g. Jaun et al., 2008; Velazquez et al., 2009; Laio and Tamea, 2007). However, such performance is mostly assessed on the basis of numerical scoring rules, which compare the forecasts to the observations, and seldom in terms of management gains. The proposed case study adopts an operational point of view, on the basis that a novel forecasting system has value only if it leads to increased monetary and societal gains (e.g. Murphy, 1994; Laio and Tamea, 2007). More specifically, Environment Canada operational ensemble precipitation forecasts are used to drive the HYDROTEL distributed hydrological model (Fortin et al., 1995), calibrated on the Gatineau watershed located in Québec, Canada. The resulting hydrological ensemble forecasts are then incorporated into Hydro-Québec's SOHO stochastic management optimization tool, which automatically searches for optimal operation decisions for all reservoirs and hydropower plants located on the basin. The timeline of the study is the fall season of 2003. This period is especially relevant because of heavy precipitation that nearly caused a major spill and forced the preventive evacuation of a portion of the population located near one of the dams. We show that the use of the ensemble forecasts would have reduced the occurrence of spills and flooding, which is of particular importance for dams located in populous areas, and increased hydropower production. The ensemble precipitation forecasts extend from March 1st of 2002 to December 31st of 2003. They were obtained using two atmospheric models, SEF (8 members plus the control deterministic forecast) and GEM (8 members). The corresponding deterministic precipitation forecast issued by the SEF model is also used within HYDROTEL in order to compare ensemble streamflow forecasts with their deterministic counterparts. Although this study does not incorporate all the sources of uncertainty, precipitation is certainly the most important input for hydrological modeling and conveys a great portion of the total uncertainty. References: Fortin, J.P., Moussa, R., Bocquillon, C. and Villeneuve, J.P. 1995: HYDROTEL, un modèle hydrologique distribué pouvant bénéficier des données fournies par la télédétection et les systèmes d'information géographique, Revue des Sciences de l'Eau, 8(1), 94-124. Jaun, S., Ahrens, B., Walser, A., Ewen, T. and Schaer, C. 2008: A probabilistic view on the August 2005 floods in the upper Rhine catchment, Natural Hazards and Earth System Sciences, 8(2), 281-291. Krzysztofowicz, R. 2001: The case for probabilistic forecasting in hydrology, Journal of Hydrology, 249, 2-9. Murphy, A.H. 1994: Assessing the economic value of weather forecasts: An overview of methods, results and issues, Meteorological Applications, 1, 69-73. Mylne, K.R. 2002: Decision-making from probability forecasts based on forecast value, Meteorological Applications, 9, 307-315. Laio, F. and Tamea, S. 2007: Verification tools for probabilistic forecasts of continuous hydrological variables, Hydrology and Earth System Sciences, 11, 1267-1277. Roulin, E. 2007: Skill and relative economic value of medium-range hydrological ensemble predictions, Hydrology and Earth System Sciences, 11, 725-737. Velazquez, J.-A., Petit, T., Lavoie, A., Boucher, M.-A., Turcotte, R., Fortin, V. and Anctil, F. 2009: An evaluation of the Canadian global meteorological ensemble prediction system for short-term hydrological forecasting, Hydrology and Earth System Sciences, 13(11), 2221-2231. Wilks, D.S. and Hamill, T.M. 1995: Potential economic value of ensemble-based surface weather forecasts, Monthly Weather Review, 123(12), 3565-3575.

  13. Deterministic Walks with Choice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.

    2014-01-10

    This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.

  14. Stochastic Forcing for High-Resolution Regional and Global Ocean and Atmosphere-Ocean Coupled Ensemble Forecast System

    NASA Astrophysics Data System (ADS)

    Rowley, C. D.; Hogan, P. J.; Martin, P.; Thoppil, P.; Wei, M.

    2017-12-01

    An extended-range ensemble forecast system is being developed in the US Navy Earth System Prediction Capability (ESPC), and a global ocean ensemble generation capability to represent uncertainty in the ocean initial conditions has been developed. At extended forecast times, the uncertainty due to model error overtakes initial-condition uncertainty as the primary source of forecast uncertainty. Recently, stochastic parameterization or stochastic forcing techniques have been applied to represent the model error in research and operational atmospheric, ocean, and coupled ensemble forecasts. A simple stochastic forcing technique has been developed for application to US Navy high-resolution regional and global ocean models, for use in ocean-only and coupled atmosphere-ocean-ice-wave ensemble forecast systems. Perturbation forcing is added to the tendency equations for the state variables, with the forcing defined by random 3- or 4-dimensional fields with horizontal, vertical, and temporal correlations specified to characterize different possible kinds of error. Here, we demonstrate the stochastic forcing in regional and global ensemble forecasts with varying perturbation amplitudes and length and time scales, and assess the change in ensemble skill as measured by a range of deterministic and probabilistic metrics.
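
    A simple perturbation generator of the kind described, with assumed amplitude, spatial smoothing scale and AR(1) time scale, might look like this sketch; the operational implementation adds the perturbations to specific model tendency equations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stochastic_forcing(shape, n_steps, amp=0.1, space_scale=4.0, tau=6.0,
                       rng=np.random.default_rng(0)):
    """Yield spatially smoothed, AR(1)-correlated perturbation fields."""
    alpha = np.exp(-1.0 / tau)                 # AR(1) coefficient per step
    field = gaussian_filter(rng.normal(size=shape), space_scale)
    field /= field.std()                       # unit variance
    for _ in range(n_steps):
        shock = gaussian_filter(rng.normal(size=shape), space_scale)
        shock /= shock.std()
        field = alpha * field + np.sqrt(1 - alpha**2) * shock
        yield amp * field                      # add to a tendency each step

perts = list(stochastic_forcing((64, 128), n_steps=5))   # five forcing fields
```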

  15. Parallel and Preemptable Dynamically Dimensioned Search Algorithms for Single and Multi-objective Optimization in Water Resources

    NASA Astrophysics Data System (ADS)

    Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.

    2015-12-01

    We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms, including DDS, discrete DDS, PA-DDS and DDS-AU. These parallel algorithms are unique among parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and sending a new candidate solution for evaluation. One key advance in this study is the development of the first parallel PA-DDS multi-objective optimization algorithm. The other key advance is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept. These two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption terminates simulation model runs early, prior to completely simulating the model calibration period for example, when intermediate results indicate that the candidate solution is so poor that it will have no influence on the generation of further candidate solutions. The computational savings of deterministic model pre-emption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations of these algorithms. In addition to the key advances above, we implement the algorithms across a range of computation platforms (Windows and Unix-based operating systems, from multi-core desktops to a supercomputer system) and package them for future modellers in a model-independent calibration software package called Ostrich, as well as in MATLAB versions. Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate the vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance of parallel, pre-emptable DDS algorithms. Case studies include single- and multiple-objective optimization problems in water resources model calibration, and in many cases linear or near-linear speedups are observed.
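
    For reference, the serial DDS kernel that the parallel variants build on can be sketched as follows, using the standard neighborhood parameter r = 0.2 and simple clipping at the bounds (the published algorithm reflects at the bounds instead).

```python
import numpy as np

def dds(obj, lo, hi, max_evals=1000, r=0.2, rng=np.random.default_rng(0)):
    """Dynamically Dimensioned Search: perturb fewer and fewer dimensions
    as evaluations proceed, shifting from global to local search."""
    x_best = lo + rng.random(lo.size) * (hi - lo)
    f_best = obj(x_best)
    for i in range(1, max_evals):
        p = 1.0 - np.log(i) / np.log(max_evals)   # dimension inclusion prob.
        mask = rng.random(lo.size) < p
        if not mask.any():
            mask[rng.integers(lo.size)] = True    # perturb at least one dim
        x = x_best.copy()
        x[mask] += r * (hi - lo)[mask] * rng.normal(size=mask.sum())
        x = np.clip(x, lo, hi)                    # simple bound handling
        f = obj(x)
        if f < f_best:                            # greedy acceptance
            x_best, f_best = x, f
    return x_best, f_best

lo, hi = np.full(10, -5.0), np.full(10, 5.0)
print(dds(lambda x: np.sum(x**2), lo, hi)[1])     # best objective found
```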

  16. Dynamic Search and Working Memory in Social Recall

    ERIC Educational Resources Information Center

    Hills, Thomas T.; Pachur, Thorsten

    2012-01-01

    What are the mechanisms underlying search in social memory (e.g., remembering the people one knows)? Do the search mechanisms involve dynamic local-to-global transitions similar to semantic search, and are these transitions governed by the general control of attention, associated with working memory span? To find out, we asked participants to…

  17. A Framework and Methodology for Navigating Disaster and Global Health in Crisis Literature

    PubMed Central

    Chan, Jennifer L.; Burkle, Frederick M.

    2013-01-01

    Both ‘disasters’ and ‘global health in crisis’ research has grown dramatically due to the ever-increasing frequency and magnitude of crises around the world. Large volumes of peer-reviewed literature are not only a testament to the field’s value and evolution, but also present an unprecedented outpouring of seemingly unmanageable information across a wide array of crises and disciplines. Disaster medicine, health and humanitarian assistance, global health and public health disaster literature all lie within the disaster and global health in crisis literature spectrum and are increasingly accepted as multidisciplinary and transdisciplinary disciplines. Researchers, policy makers, and practitioners now face a new challenge: accessing this expansive literature for decision-making and exploring new areas of research. Individuals are also reaching beyond the peer-reviewed environment to grey literature, using search engines like Google Scholar to access policy documents, consensus reports and conference proceedings. What is needed is a method and mechanism with which to search and retrieve relevant articles from this expansive body of literature. This manuscript presents both a framework and a workable process for a diverse group of users to navigate the growing peer-reviewed and grey disaster and global health in crises literature. Methods: Disaster terms from textbooks, peer-reviewed and grey literature were used to design a framework of thematic clusters and subject matter ‘nodes’. A set of 84 terms, selected from 143 curated terms, was organized within each node, reflecting topics within the disaster and global health in crisis literature. Terms were crossed with one another and with the term ‘disaster’. The results were formatted into tables and matrices. This process created a roadmap of search terms that could be applied to the PubMed database. Each search in the matrix or table results in a listed number of articles. This process was applied to literature from PubMed from 2005-2011. A complementary process was applied to Google Scholar using the same framework of clusters, nodes, and terms, expanding the search process to include the broader grey literature assets. Results: A framework of four thematic clusters and twelve subject matter nodes was designed to capture diverse disaster and global health in crisis-related content. From 2005-2011 there were 18,660 articles referring to the term [disaster]. Restricting the search to human research, MeSH terms, and the English language left 7,736 articles, still an unmanageable number to adequately process for research, policy or best practices. However, the crossed search and matrix process revealed further examples of robust realms of research in disasters, emergency medicine, EMS, public health and global health. Examples of potential gaps in the current peer-reviewed disaster and global health in crisis literature were identified, such as mental health, elderly care, and alternate sites of care. The same framework and process were then applied to Google Scholar, specifically for topics that returned few PubMed results. The Google Scholar example searches retrieved unique peer-reviewed articles not identified in PubMed, as well as documents including books, governmental documents and consensus papers. 
Conclusions: The proposed framework, methodology and process, using four clusters, twelve nodes and a matrix and table process applied to PubMed and Google Scholar, unlock otherwise inaccessible opportunities to better navigate the massively growing body of peer-reviewed disaster and global health in crises literature. This approach will assist researchers, policy makers, and practitioners to generate future research questions, report on the overall evolution of the disaster and global health in crisis field, and further guide disaster planning, prevention, preparedness, mitigation, response and recovery. PMID:23591457
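
    The crossed-term matrix is straightforward to mechanize. Below is a minimal sketch that emits PubMed-style boolean query strings for every pairing of node terms with the anchor term 'disaster'; the five terms shown are invented stand-ins for the authors' 84 curated terms.

```python
from itertools import combinations

# Hypothetical node terms; the paper's framework uses 84 curated terms
# organized across twelve subject matter nodes.
node_terms = ["mental health", "elderly care", "emergency medical services",
              "public health", "humanitarian assistance"]

def crossed_queries(terms, anchor="disaster"):
    """Cross every term with the anchor term, and every pair of terms with
    the anchor, yielding boolean query strings for a PubMed search box."""
    queries = [f'("{t}") AND ("{anchor}")' for t in terms]
    queries += [f'("{a}") AND ("{b}") AND ("{anchor}")'
                for a, b in combinations(terms, 2)]
    return queries

for q in crossed_queries(node_terms)[:5]:
    print(q)
```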

  18. A framework and methodology for navigating disaster and global health in crisis literature.

    PubMed

    Chan, Jennifer L; Burkle, Frederick M

    2013-04-04

    Both 'disasters' and 'global health in crisis' research has grown dramatically due to the ever-increasing frequency and magnitude of crises around the world. Large volumes of peer-reviewed literature are not only a testament to the field's value and evolution, but also present an unprecedented outpouring of seemingly unmanageable information across a wide array of crises and disciplines. Disaster medicine, health and humanitarian assistance, global health and public health disaster literature all lie within the disaster and global health in crisis literature spectrum and are increasingly accepted as multidisciplinary and transdisciplinary disciplines. Researchers, policy makers, and practitioners now face a new challenge: accessing this expansive literature for decision-making and exploring new areas of research. Individuals are also reaching beyond the peer-reviewed environment to grey literature, using search engines like Google Scholar to access policy documents, consensus reports and conference proceedings. What is needed is a method and mechanism with which to search and retrieve relevant articles from this expansive body of literature. This manuscript presents both a framework and a workable process for a diverse group of users to navigate the growing peer-reviewed and grey disaster and global health in crises literature. Disaster terms from textbooks, peer-reviewed and grey literature were used to design a framework of thematic clusters and subject matter 'nodes'. A set of 84 terms, selected from 143 curated terms, was organized within each node, reflecting topics within the disaster and global health in crisis literature. Terms were crossed with one another and with the term 'disaster'. The results were formatted into tables and matrices. This process created a roadmap of search terms that could be applied to the PubMed database. Each search in the matrix or table results in a listed number of articles. This process was applied to literature from PubMed from 2005-2011. A complementary process was applied to Google Scholar using the same framework of clusters, nodes, and terms, expanding the search process to include the broader grey literature assets. A framework of four thematic clusters and twelve subject matter nodes was designed to capture diverse disaster and global health in crisis-related content. From 2005-2011 there were 18,660 articles referring to the term [disaster]. Restricting the search to human research, MeSH terms, and the English language left 7,736 articles, still an unmanageable number to adequately process for research, policy or best practices. However, the crossed search and matrix process revealed further examples of robust realms of research in disasters, emergency medicine, EMS, public health and global health. Examples of potential gaps in the current peer-reviewed disaster and global health in crisis literature were identified, such as mental health, elderly care, and alternate sites of care. The same framework and process were then applied to Google Scholar, specifically for topics that returned few PubMed results. The Google Scholar example searches retrieved unique peer-reviewed articles not identified in PubMed, as well as documents including books, governmental documents and consensus papers. 
The proposed framework, methodology and process, using four clusters, twelve nodes and a matrix and table process applied to PubMed and Google Scholar, unlock otherwise inaccessible opportunities to better navigate the massively growing body of peer-reviewed disaster and global health in crises literature. This approach will assist researchers, policy makers, and practitioners to generate future research questions, report on the overall evolution of the disaster and global health in crisis field, and further guide disaster planning, prevention, preparedness, mitigation, response and recovery.

  19. Fast scattering simulation tool for multi-energy x-ray imaging

    NASA Astrophysics Data System (ADS)

    Sossin, A.; Tabary, J.; Rebuffel, V.; Létang, J. M.; Freud, N.; Verger, L.

    2015-12-01

    A combination of Monte Carlo (MC) and deterministic approaches was employed to create a simulation tool capable of providing energy-resolved x-ray primary and scatter images within a reasonable time interval. Libraries of Sindbad, a previously developed x-ray simulation software, were used in the development. The scatter simulation capabilities of the tool were validated against GATE simulations and experimentally using a spectrometric CdTe detector. A simple cylindrical phantom with cavities and an aluminum insert was used. Cross-validation with GATE showed good agreement, with a global spatial error of 1.5% and a maximum scatter spectrum error of around 6%. Experimental validation also supported the accuracy of the simulations obtained from the developed software, with a global spatial error of 1.8% and a maximum error of around 8.5% in the scatter spectra.

  20. Global solutions to random 3D vorticity equations for small initial data

    NASA Astrophysics Data System (ADS)

    Barbu, Viorel; Röckner, Michael

    2017-11-01

    One proves the existence and uniqueness in (L^p(R^3))^3, 3/2 < p < 2, of a global mild solution to random vorticity equations associated to stochastic 3D Navier-Stokes equations with linear multiplicative Gaussian noise of convolution type, for sufficiently small initial vorticity. This resembles some earlier deterministic results of T. Kato [16] and is obtained by treating the equation in vorticity form and reducing the latter to a random nonlinear parabolic equation. The solution has maximal regularity in the spatial variables and is weakly continuous in (L^3 ∩ L^{3p/(4p-6)})^3 with respect to the time variable. Furthermore, we obtain the pathwise continuous dependence of solutions with respect to the initial data. In particular, one gets a locally unique solution of the 3D stochastic Navier-Stokes equation in vorticity form up to some explosion stopping time τ adapted to the Brownian motion.

  1. Backward Bifurcation in a Cholera Model: A Case Study of Outbreak in Zimbabwe and Haiti

    NASA Astrophysics Data System (ADS)

    Sharma, Sandeep; Kumari, Nitu

    In this paper, a nonlinear deterministic model is proposed with a saturated treatment function. The expression for the basic reproduction number of the proposed model is obtained. The global dynamics of the proposed model are studied using the basic reproduction number and the theory of dynamical systems. It is observed that the proposed model exhibits backward bifurcation, as multiple endemic equilibrium points exist when R0 < 1. The existence of backward bifurcation implies that making R0 < 1 is not enough for disease eradication, which in turn makes it difficult to control the spread of cholera in the community. We also obtain a unique endemic equilibrium when R0 > 1. The global stability analysis of the unique endemic equilibrium is performed using the geometric approach. An extensive numerical study is performed to support our analytical results. Finally, we investigate two major cholera outbreaks, Zimbabwe (2008-09) and Haiti (2010), with the help of the present study.
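
    The bistability implied by backward bifurcation can be probed numerically. The sketch below integrates a generic SIR-type model with a saturated treatment term tau*I/(1 + alpha*I), a common form in this literature rather than the paper's exact system, from two initial infection levels. All parameter values are illustrative, chosen only so that R0 < 1; whether two distinct long-run outcomes actually appear depends on the parameters lying in the backward-bifurcation regime.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only (not fitted to Zimbabwe or Haiti data);
# beta is chosen so that R0 < 1.
Lam, beta, mu, delta, tau, alpha = 1.0, 0.016, 0.02, 0.05, 0.8, 5.0
R0 = beta * Lam / (mu * (mu + delta + tau))
print(f"R0 = {R0:.3f}")

def rhs(t, y):
    S, I, R = y
    treat = tau * I / (1.0 + alpha * I)   # saturated treatment function
    return [Lam - beta * S * I - mu * S,
            beta * S * I - (mu + delta) * I - treat,
            treat - mu * R]

# Integrate from a small and a large initial infection level; with backward
# bifurcation, these can settle on different equilibria even though R0 < 1.
for I0 in (0.05, 5.0):
    sol = solve_ivp(rhs, (0.0, 5000.0), [Lam / mu - I0, I0, 0.0], rtol=1e-8)
    print(f"I0 = {I0}: long-run I = {sol.y[1, -1]:.4f}")
```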

  2. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V. [Oak Ridge, TN]; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan [Ithaca, NY]; Merkulov, Vladimir I. [Knoxville, TN]; Doktycz, Mitchel J. [Knoxville, TN]; Lowndes, Douglas H. [Knoxville, TN]; Simpson, Michael L. [Knoxville, TN]

    2011-05-17

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.

  3. Human brain detects short-time nonlinear predictability in the temporal fine structure of deterministic chaotic sounds

    NASA Astrophysics Data System (ADS)

    Itoh, Kosuke; Nakada, Tsutomu

    2013-04-01

    Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but they are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structure. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds at latencies beyond about 150 ms after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
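
    The notion of short-time predictability in the temporal fine structure can be made concrete with a toy test, not the authors' stimuli: a chaotic series from the logistic map is compared against a phase-randomized surrogate with the same amplitude spectrum, using one-step nearest-neighbour prediction error. The chaotic series should be far more predictable.

```python
import numpy as np

def surrogate(x, rng):
    """Phase-randomized surrogate: same amplitude spectrum, deterministic
    fine structure destroyed."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = phases[-1] = 0.0      # keep mean and Nyquist bins real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def one_step_nn_error(x):
    """Short-time nonlinear predictability: predict x[t+1] as the successor
    of the nearest neighbour of x[t]; report the RMS prediction error."""
    errs = []
    for t in range(len(x) - 1):
        d = np.abs(x[:-1] - x[t])
        d[t] = np.inf                 # exclude the self-match
        j = int(np.argmin(d))
        errs.append((x[j + 1] - x[t + 1]) ** 2)
    return np.sqrt(np.mean(errs))

rng = np.random.default_rng(1)
x = np.empty(2000); x[0] = 0.4
for t in range(1999):                 # logistic map: deterministic chaos
    x[t + 1] = 3.99 * x[t] * (1.0 - x[t])

print("chaotic RMSE  :", one_step_nn_error(x))
print("surrogate RMSE:", one_step_nn_error(surrogate(x, rng)))
```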

  4. A deterministic particle method for one-dimensional reaction-diffusion equations

    NASA Technical Reports Server (NTRS)

    Mascagni, Michael

    1995-01-01

    We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider explicit and implicit time discretizations for this system of ordinary differential equations, and we study Picard and Newton iterations for the solution of the implicit system. Next, we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.
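
    As a worked illustration of the implicit time-stepping and Newton iteration discussed above, the sketch below applies backward Euler with a Newton solve to the Fisher-KPP equation u_t = D u_xx + u(1 - u), using a standard finite-difference discretization as a stand-in for the paper's particle formulation; grid sizes and parameters are illustrative.

```python
import numpy as np

# Backward Euler + Newton for u_t = D u_xx + u(1 - u) on a 1-D grid.
N, D, dt, dx = 100, 1.0, 0.1, 1.0
x = np.arange(N) * dx
u = np.exp(-0.1 * (x - 20.0) ** 2)        # initial bump

def residual(u_new, u_old):
    lap = np.zeros_like(u_new)
    lap[1:-1] = (u_new[2:] - 2 * u_new[1:-1] + u_new[:-2]) / dx**2
    return u_new - u_old - dt * (D * lap + u_new * (1 - u_new))

def jacobian(u_new):
    J = np.eye(N) * (1 - dt * (1 - 2 * u_new))   # reaction part
    for i in range(1, N - 1):                    # diffusion part (interior)
        J[i, i] += 2 * dt * D / dx**2
        J[i, i - 1] -= dt * D / dx**2
        J[i, i + 1] -= dt * D / dx**2
    return J

for _ in range(50):                              # time steps
    u_old, u_new = u, u.copy()
    for _ in range(20):                          # Newton iteration
        F = residual(u_new, u_old)
        if np.abs(F).max() < 1e-10:
            break
        u_new -= np.linalg.solve(jacobian(u_new), F)
    u = u_new
print(u.max(), u.min())
```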

  5. Characterization of forced response of density stratified reacting wake

    NASA Astrophysics Data System (ADS)

    Pawar, Samadhan A.; Sujith, Raman I.; Emerson, Benjamin; Lieuwen, Tim

    2018-02-01

    The hydrodynamic stability of a reacting wake depends primarily on the density ratio [i.e., the ratio of unburnt gas density (ρu) to burnt gas density (ρb)] of the flow across the wake. Varying the density ratio from high to low values, keeping ρu/ρb > 1, transitions the dynamical characteristics of the reacting wake from a linearly globally stable (or convectively unstable) to a globally unstable mode. In this paper, we propose a framework to analyze the effect of harmonic forcing on the deterministic and synchronization characteristics of reacting wakes. Using recurrence quantification analysis of the forced wake response, we show that the deterministic behaviour of the reacting wake increases as the amplitude of forcing is increased. Furthermore, for different density ratios, we find that the synchronization of the top and bottom branches of the wake with the forcing signal depends on whether the mean frequency of the natural oscillations of the wake (fn) is less than or greater than the frequency of external forcing (ff). We notice that the response of the two branches (top and bottom) of the reacting wake to the external forcing is asymmetric for low density ratios and symmetric for high density ratios. Furthermore, we characterize the phase-locking behaviour between the top and bottom branches of the wake for different values of the density ratio. We observe that an increase in the density ratio results in a gradual decrease in the relative phase angle between the top and bottom branches of the wake, which leads to a change in the vortex shedding pattern from a sinuous (anti-phase) to a varicose (in-phase) mode of oscillation.
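
    A minimal version of the recurrence-quantification measure used here is easy to sketch. The function below computes DET, the fraction of recurrent points lying on diagonal lines of the recurrence plot, for a one-dimensional signal (no delay embedding), and the usage loop shows DET rising as the forcing amplitude of a noisy sinusoid increases; thresholds and signals are illustrative assumptions.

```python
import numpy as np

def determinism(x, eps=0.5, lmin=2):
    """Recurrence quantification: DET = fraction of recurrent points that
    lie on diagonal lines of length >= lmin in the recurrence plot."""
    R = np.abs(x[:, None] - x[None, :]) < eps      # recurrence matrix
    n = len(x)
    rec = R.sum() - n                              # exclude the main diagonal
    det_pts = 0
    for k in range(1, n):                          # each upper off-diagonal
        idx = np.flatnonzero(np.diag(R, k))
        runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
        det_pts += 2 * sum(len(r) for r in runs if len(r) >= lmin)  # symmetry
    return det_pts / max(rec, 1)

t = np.linspace(0, 20 * np.pi, 1000)
rng = np.random.default_rng(0)
for amp in (0.2, 1.0, 3.0):                        # increasing forcing amplitude
    sig = amp * np.sin(t) + rng.normal(0, 1, t.size)
    print(amp, round(determinism(sig), 3))
```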

  6. Defining pyromes and global syndromes of fire regimes.

    PubMed

    Archibald, Sally; Lehmann, Caroline E R; Gómez-Dans, Jose L; Bradstock, Ross A

    2013-04-16

    Fire is a ubiquitous component of the Earth system that is poorly understood. To date, a global-scale understanding of fire is largely limited to the annual extent of burning as detected by satellites. This is problematic because fire is multidimensional, and focus on a single metric belies its complexity and importance within the Earth system. To address this, we identified five key characteristics of fire regimes--size, frequency, intensity, season, and extent--and combined new and existing global datasets to represent each. We assessed how these global fire regime characteristics are related to patterns of climate, vegetation (biomes), and human activity. Cross-correlations demonstrate that only certain combinations of fire characteristics are possible, reflecting fundamental constraints in the types of fire regimes that can exist. A Bayesian clustering algorithm identified five global syndromes of fire regimes, or pyromes. Four pyromes represent distinctions between crown, litter, and grass-fueled fires, and the relationship of these to biomes and climate are not deterministic. Pyromes were partially discriminated on the basis of available moisture and rainfall seasonality. Human impacts also affected pyromes and are globally apparent as the driver of a fifth and unique pyrome that represents human-engineered modifications to fire characteristics. Differing biomes and climates may be represented within the same pyrome, implying that pathways of change in future fire regimes in response to changes in climate and human activity may be difficult to predict.
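
    The clustering step can be sketched with scikit-learn's BayesianGaussianMixture as a stand-in for the paper's Bayesian clustering algorithm; the five log-normal features below are synthetic placeholders for the size, frequency, intensity, season, and extent characteristics, and the Dirichlet-process prior lets the data choose the effective number of clusters.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic stand-ins for the five fire-regime characteristics over grid cells.
rng = np.random.default_rng(0)
n = 3000
features = np.column_stack([
    rng.lognormal(mean=m, sigma=0.6, size=n)       # heavy-tailed regime metrics
    for m in (1.0, 0.2, 2.0, 0.5, 1.5)
])
X = np.log(features)                               # log-transform before clustering

# Dirichlet-process mixture: allow up to 10 components and let the data decide
# how many are used; the paper recovered five pyromes.
bgm = BayesianGaussianMixture(n_components=10,
                              weight_concentration_prior_type="dirichlet_process",
                              random_state=0).fit(X)
labels = bgm.predict(X)
print("effective clusters:", np.unique(labels).size)
```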

  7. Defining pyromes and global syndromes of fire regimes

    PubMed Central

    Archibald, Sally; Lehmann, Caroline E. R.; Gómez-Dans, Jose L.; Bradstock, Ross A.

    2013-01-01

    Fire is a ubiquitous component of the Earth system that is poorly understood. To date, a global-scale understanding of fire is largely limited to the annual extent of burning as detected by satellites. This is problematic because fire is multidimensional, and focus on a single metric belies its complexity and importance within the Earth system. To address this, we identified five key characteristics of fire regimes—size, frequency, intensity, season, and extent—and combined new and existing global datasets to represent each. We assessed how these global fire regime characteristics are related to patterns of climate, vegetation (biomes), and human activity. Cross-correlations demonstrate that only certain combinations of fire characteristics are possible, reflecting fundamental constraints in the types of fire regimes that can exist. A Bayesian clustering algorithm identified five global syndromes of fire regimes, or pyromes. Four pyromes represent distinctions between crown, litter, and grass-fueled fires, and the relationship of these to biomes and climate are not deterministic. Pyromes were partially discriminated on the basis of available moisture and rainfall seasonality. Human impacts also affected pyromes and are globally apparent as the driver of a fifth and unique pyrome that represents human-engineered modifications to fire characteristics. Differing biomes and climates may be represented within the same pyrome, implying that pathways of change in future fire regimes in response to changes in climate and human activity may be difficult to predict. PMID:23559374

  8. On-line range images registration with GPGPU

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Naruniec, J.

    2013-03-01

    This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first topic is based on 3D space decomposition, while the latter is based on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (NVIDIA CUDA) parallel computation. The result of the segmentation is a coloured map where different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotated head, to be used in mobile robot applications. The data registration algorithm is designed for on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the time of the matching is deterministic. The first data segmentation technique uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation and an image processing method for defining prerequisites of the known categories. The second technique uses the adapted nearest neighbour search procedure to obtain normal vectors for each range point.
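
    The cubic-bucket decomposition that makes matching time deterministic can be sketched in a few lines (shown here in Python rather than CUDA, for readability). A query inspects only the 27 buckets around a point, so per-point cost is bounded by bucket occupancy; as in ICP with a maximum correspondence distance, points farther away than one cell are ignored. Cell size and data are illustrative.

```python
import numpy as np
from collections import defaultdict
from itertools import product

class BucketGrid:
    """Cubic-bucket decomposition of 3-D space for fast nearest-neighbour
    queries with (near-)deterministic per-point cost."""
    def __init__(self, points, cell):
        self.points, self.cell = points, cell
        self.buckets = defaultdict(list)
        for i, p in enumerate(points):
            self.buckets[tuple((p // cell).astype(int))].append(i)

    def nearest(self, q):
        """Return the index of the nearest stored point within one cell of q,
        or -1 if none is found (treated as 'no correspondence' in ICP)."""
        key = tuple((q // self.cell).astype(int))
        best, best_d = -1, np.inf
        for off in product((-1, 0, 1), repeat=3):   # the 27 neighbouring buckets
            for i in self.buckets.get(tuple(np.add(key, off)), []):
                d = np.sum((self.points[i] - q) ** 2)
                if d < best_d:
                    best, best_d = i, d
        return best

pts = np.random.default_rng(0).uniform(0, 10, (10000, 3))
grid = BucketGrid(pts, cell=0.5)
print(grid.nearest(np.array([5.0, 5.0, 5.0])))
```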

  9. Automated parameterization of intermolecular pair potentials using global optimization techniques

    NASA Astrophysics Data System (ADS)

    Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk

    2014-12-01

    In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
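
    The calibration loop these methods automate can be sketched with one of the compared algorithms, differential evolution, via SciPy. The toy "simulator" below maps two pair-potential parameters to mock noisy observables; polishing is disabled because gradient-based refinement is unreliable on noisy objectives. Everything named here is an illustrative stand-in, not CoSMoS.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)
target = np.array([1.2, 0.8])                 # pretend experimental observables

def simulate(params):
    """Toy stand-in for a molecular simulation: maps (epsilon, sigma) to
    observables, with noise mimicking stochastic simulation output."""
    eps, sig = params
    obs = np.array([eps * sig, eps / (1.0 + sig)])
    return obs + rng.normal(0, 0.01, 2)

def loss(params):
    return np.sum((simulate(params) - target) ** 2)

res = differential_evolution(loss, bounds=[(0.1, 5.0), (0.1, 5.0)],
                             maxiter=100, tol=1e-8, seed=3, polish=False)
print(res.x, res.fun)
```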

  10. Expedite random structure searching using objects from Wyckoff positions

    NASA Astrophysics Data System (ADS)

    Wang, Shu-Wei; Hsing, Cheng-Rong; Wei, Ching-Ming

    2018-02-01

    Random structure searching has proven to be a powerful approach for finding the global minimum and metastable structures. A truly random sampling is in principle needed, yet it would be highly time-consuming and/or practically impossible to find the global minimum for complicated systems in their high-dimensional configuration space. Thus the implementation of reasonable constraints, such as adopting system symmetries to reduce the independent dimensions of the structural space and/or imposing chemical information to reach and relax into low-energy regions, is the most essential issue in the approach. In this paper, we propose the concept of an "object", which is either an atom or a set of atoms (such as molecules or carbonates) carrying a symmetry defined by one of the Wyckoff positions of a space group; this allows the search for the global minimum of a complicated system to be confined to a greatly reduced structural space and become accessible in practice. We examined several representative materials, including the Cd3As2 crystal, solid methanol, high-pressure carbonates (FeCO3), and the Si(111)-7 × 7 reconstructed surface, to demonstrate the power and advantages of using the "object" concept in random structure searching.

  11. A new effective operator for the hybrid algorithm for solving global optimisation problems

    NASA Astrophysics Data System (ADS)

    Duc, Le Anh; Li, Kenli; Nguyen, Tien Trong; Yen, Vu Minh; Truong, Tung Khac

    2018-04-01

    Hybrid algorithms have recently been used to solve complex single-objective optimisation problems. The ultimate goal is to find an optimised global solution by using these algorithms. Based on existing algorithms (HP_CRO, PSO, RCCRO), this study proposes a new hybrid algorithm called MPC (Mean-PSO-CRO), which utilises a new Mean-Search Operator. By employing this new operator, the proposed algorithm improves the search ability in areas of the solution space that the operators of previous algorithms do not explore. Specifically, the Mean-Search Operator helps find better solutions than the other algorithms. Moreover, the authors propose two parameters for balancing local and global search, as well as among various types of local search. In addition, three versions of this operator, which use different constraints, are introduced. The experimental results on 23 benchmark functions, which are used in previous works, show that our framework can find better optimal or close-to-optimal solutions with faster convergence for most of the benchmark functions, especially the high-dimensional functions. Thus, the proposed algorithm is more effective in solving single-objective optimisation problems than the other existing algorithms.
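
    The abstract does not spell out the operator's algebra, so the following is one plausible reading, clearly labeled as an assumption: sample a new candidate around the centroid of the k best solutions, so that the search is steered into regions between existing good solutions.

```python
import numpy as np

def mean_search_operator(population, fitness, k=5, scale=0.1, rng=None):
    """Hypothetical mean-based search operator (an assumed interpretation,
    not the paper's exact MPC operator): propose a candidate around the
    centroid of the k best solutions, scaled by their spread."""
    rng = rng or np.random.default_rng()
    best = population[np.argsort(fitness)[:k]]   # k fittest individuals
    centroid = best.mean(axis=0)
    spread = best.std(axis=0) + 1e-12
    return centroid + scale * spread * rng.standard_normal(centroid.shape)
```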

  12. Calculation of earthquake rupture histories using a hybrid global search algorithm: Application to the 1992 Landers, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.

    1996-01-01

    A method is presented for the simultaneous calculation of slip amplitudes and rupture times for a finite fault using a hybrid global search algorithm. The method combines simulated annealing with the downhill simplex method to produce a more efficient search algorithm than either of the two constituent parts. This formulation has advantages over traditional iterative or linearized approaches to the problem because it is able to escape local minima in its search through model space for the global optimum. We apply this global search method to the calculation of the rupture history of the Landers, California, earthquake. The rupture is modeled using three separate finite-fault planes to represent the three main fault segments that failed during this earthquake. Both the slip amplitude and the time of slip are calculated for a grid work of subfaults. The data used consist of digital, teleseismic P and SH body waves. Long-period, broadband, and short-period records are utilized to obtain a wideband characterization of the source. The results of the global search inversion are compared with a more traditional linear-least-squares inversion for slip amplitudes only. We use a multi-time-window linear analysis to relax the constraints on rupture time and rise time in the least-squares inversion. Both inversions produce similar slip distributions, although the linear-least-squares solution has a 10% larger moment (7.3 × 10^26 dyne-cm compared with 6.6 × 10^26 dyne-cm). Both inversions fit the data equally well and point out the importance of (1) using a parameterization with sufficient spatial and temporal flexibility to encompass likely complexities in the rupture process, (2) including suitable physically based constraints on the inversion to reduce instabilities in the solution, and (3) focusing on those robust rupture characteristics that rise above the details of the parameterization and data set.
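
    The hybrid of simulated annealing and downhill simplex can be sketched generically (an illustrative reconstruction, not the authors' code): a Metropolis-accepted random walk explores globally, while Nelder-Mead periodically refines the incumbent.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_sa_simplex(f, x0, n_outer=200, step=0.5, T0=1.0, seed=0):
    """Simulated annealing for global exploration with periodic downhill
    simplex (Nelder-Mead) refinement of the incumbent solution."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, float), f(x0)
    x_best, f_best = x.copy(), fx
    for i in range(n_outer):
        T = T0 * (1.0 - i / n_outer)                   # linear cooling schedule
        x_new = x + step * rng.standard_normal(x.shape)
        f_new = f(x_new)
        if f_new < fx or rng.random() < np.exp(-(f_new - fx) / max(T, 1e-12)):
            x, fx = x_new, f_new                       # Metropolis acceptance
        if i % 25 == 0:                                # periodic simplex refinement
            res = minimize(f, x, method="Nelder-Mead")
            if res.fun < fx:
                x, fx = res.x, res.fun
        if fx < f_best:
            x_best, f_best = x.copy(), fx
    return x_best, f_best

rastrigin = lambda v: 10 * len(v) + sum(vi**2 - 10 * np.cos(2 * np.pi * vi) for vi in v)
print(hybrid_sa_simplex(rastrigin, [3.0, -2.5]))
```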

  13. Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System

    ERIC Educational Resources Information Center

    Maiti, Alakes; Samanta, G. P.

    2005-01-01

    This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…

  14. ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.

    PubMed

    Morota, Gota

    2017-12-20

    Deterministic formulas for the accuracy of genomic predictions highlight the relationships between prediction accuracy and potential factors influencing prediction accuracy prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. The Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus genetic factors impacting prediction accuracy, while requiring only mouse navigation in a web browser. ShinyGPAS is available at https://chikudaisei.shinyapps.io/shinygpas/. ShinyGPAS is a Shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open-source software and is hosted online as a freely available web-based resource with an intuitive graphical user interface.

  15. Inconvenient Truth or Convenient Fiction? Probable Maximum Precipitation and Nonstationarity

    NASA Astrophysics Data System (ADS)

    Nielsen-Gammon, J. W.

    2017-12-01

    According to the inconvenient truth that Probable Maximum Precipitation (PMP) represents a non-deterministic, statistically very rare event, future changes in PMP involve a complex interplay between future frequencies of storm type, storm morphology, and environmental characteristics, many of which are poorly constrained by global climate models. On the other hand, according to the convenient fiction that PMP represents an estimate of the maximum possible precipitation that can occur at a given location, as determined by storm maximization and transposition, the primary climatic driver of PMP change is simply a change in maximum moisture availability. Increases in boundary-layer and total-column moisture have been observed globally, are anticipated from basic physical principles, and are robustly projected to continue by global climate models. Thus, using the same techniques that are used within the PMP storm maximization process itself, future PMP values may be projected. The resulting PMP trend projections are qualitatively consistent with observed trends of extreme rainfall within Texas, suggesting that in this part of the world the inconvenient truth is congruent with the convenient fiction.

  16. Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory

    2010-01-01

    Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…

  17. Research of converter transformer fault diagnosis based on improved PSO-BP algorithm

    NASA Astrophysics Data System (ADS)

    Long, Qi; Guo, Shuyong; Li, Qing; Sun, Yong; Li, Yi; Fan, Youping

    2017-09-01

    To overcome the disadvantages that the BP (Back Propagation) neural network and conventional Particle Swarm Optimization (PSO) converge repeatedly to the global best particle in the early stage, are easily trapped in local optima, and yield low diagnosis accuracy when applied to converter transformer fault diagnosis, we propose an improved PSO-BP neural network to raise the accuracy rate. The algorithm improves the inertia weight equation by using an attenuation strategy based on a concave function to avoid premature convergence of the PSO algorithm, and a Time-Varying Acceleration Coefficient (TVAC) strategy is adopted to balance local and global search ability. Finally, simulation results show that the proposed approach is better at optimizing the BP neural network in terms of network output error, global searching performance, and diagnosis accuracy.
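
    A sketch of the two ingredients follows, with the concave attenuation function and the TVAC ranges assumed for illustration, since the paper's exact forms are not given here.

```python
import numpy as np

def pso_tvac(f, lo, hi, n_particles=30, n_iter=200, seed=0):
    """PSO with a concave inertia-weight attenuation w(t) and time-varying
    acceleration coefficients (TVAC): cognitive c1 decreases while social c2
    increases, shifting the swarm from exploration to exploitation."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    x = rng.uniform(lo, hi, (n_particles, d))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()
    for t in range(n_iter):
        s = t / n_iter
        w = 0.4 + 0.5 * (1.0 - s**2)    # assumed concave decay: 0.9 -> 0.4
        c1 = 2.5 - 2.0 * s              # TVAC: cognitive 2.5 -> 0.5
        c2 = 0.5 + 2.0 * s              # TVAC: social    0.5 -> 2.5
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, f(g)

sphere = lambda z: float(np.sum(z**2))
print(pso_tvac(sphere, lo=np.array([-5.0] * 4), hi=np.array([5.0] * 4)))
```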

  18. Mechanisms of Age-Related Decline in Memory Search Across the Adult Life Span

    PubMed Central

    Hills, Thomas T.; Mata, Rui; Wilke, Andreas; Samanez-Larkin, Gregory R.

    2013-01-01

    Three alternative mechanisms for age-related decline in memory search have been proposed, which result from either reduced processing speed (global slowing hypothesis), overpersistence on categories (cluster-switching hypothesis), or the inability to maintain focus on local cues related to a decline in working memory (cue-maintenance hypothesis). We investigated these 3 hypotheses by formally modeling the semantic recall patterns of 185 adults between 27 and 99 years of age in the animal fluency task (Thurstone, 1938). The results indicate that people switch between global frequency-based retrieval cues and local item-based retrieval cues to navigate their semantic memory. Contrary to the global slowing hypothesis, which predicts no qualitative differences in dynamic search processes, and the cluster-switching hypothesis, which predicts reduced switching between retrieval cues, the results indicate that as people age, they tend to switch more often between local and global cues per item recalled, supporting the cue-maintenance hypothesis. Additional support for the cue-maintenance hypothesis is provided by a negative correlation between switching and digit span scores and between switching and total items recalled, which suggests that cognitive control may be involved in cue maintenance and the effective search of memory. Overall, the results are consistent with age-related decline in memory search being a consequence of reduced cognitive control, consistent with models suggesting that working memory is related to goal perseveration and the ability to inhibit distracting information. PMID:23586941

  19. Multi-period natural gas market modeling Applications, stochastic extensions and solution approaches

    NASA Astrophysics Data System (ADS)

    Egging, Rudolf Gerardus

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in its combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand, and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum,[1] a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050, with nineteen regions and 78,768 variables. The scenarios vary in the possibility of gas market cartel formation and in the depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in shorter solution times relative to solving the extensive forms. Larger problems, up to 117,481 variables, were solved in extensive form, but not with BD, due to numerical issues. It is discussed how BD could significantly reduce the solution time of large-scale stochastic models, but various challenges remain and more research is needed to assess the potential of Benders decomposition for solving large-scale stochastic MCP. [1] www.gecforum.org

  20. The pseudo-Boolean optimization approach to form the N-version software structure

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

    The problem of developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate for solving it. In this view, exploiting heuristic strategies is more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems of large dimensionality. Additional modifications of MVP have been made to solve the problem of N-version system design; these algorithms take into account the discovered specific features of the objective function. Practical experiments have shown the advantage of these algorithm modifications in reducing the search space.

  1. An efficient algorithm for global periodic orbits generation near irregular-shaped asteroids

    NASA Astrophysics Data System (ADS)

    Shang, Haibin; Wu, Xiaoyu; Ren, Yuan; Shan, Jinjun

    2017-07-01

    Periodic orbits (POs) play an important role in understanding dynamical behaviors around natural celestial bodies. In this study, an efficient algorithm was presented to generate the global POs around irregular-shaped, uniformly rotating asteroids. The algorithm was performed in three steps, namely global search, local refinement, and model continuation. First, a mascon model with a low number of particles and optimized mass distribution was constructed to remodel the exterior gravitational potential of the asteroid. Using this model, a multi-start differential evolution enhanced with a deflection strategy, with strong global exploration and bypassing abilities, was adopted. This algorithm can be regarded as a search engine to find multiple globally optimal regions in which potential POs were located. This was followed by applying a differential correction to locally refine the global search solutions and generate accurate POs in the mascon model, for which an analytical Jacobian matrix was derived to improve convergence. Finally, the concept of numerical model continuation was introduced and used to convert the POs from the mascon model into a high-fidelity polyhedron model by sequentially correcting the initial states. The efficiency of the proposed algorithm was substantiated by computing the global POs around the elongated, shoe-shaped asteroid 433 Eros. Various global POs with different topological structures in the configuration space were successfully located. Specifically, the proposed algorithm was generic and could be conveniently extended to explore periodic motions in other gravitational systems.

  2. Federated or cached searches: Providing expected performance from multiple invasive species databases

    NASA Astrophysics Data System (ADS)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-06-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.
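
    The performance argument can be made concrete with a toy comparison, with the sources, records, and latency below invented for illustration: a federated search pays a network round trip per source at query time, while a periodically harvested cache answers locally.

```python
import time

# Simulated remote invasive-species databases, each holding one record.
REMOTE_DBS = [{"kudzu": "record from source %d" % i} for i in range(20)]
LATENCY = 0.05    # assumed per-source network delay, in seconds

def federated_search(term):
    """Fan out to every source at query time; response time scales with the
    number (and slowness) of the sources."""
    hits = []
    for db in REMOTE_DBS:
        time.sleep(LATENCY)            # simulated network round trip
        hits.extend(v for k, v in db.items() if k == term)
    return hits

# Cache built by periodic harvesting; queries are answered locally.
CACHE = {}
for db in REMOTE_DBS:
    for k, v in db.items():
        CACHE.setdefault(k, []).append(v)

def cached_search(term):
    return CACHE.get(term, [])

t0 = time.perf_counter(); federated_search("kudzu")
t1 = time.perf_counter(); cached_search("kudzu")
print("federated: %.2fs, cached: %.6fs" % (t1 - t0, time.perf_counter() - t1))
```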

  3. Federated or cached searches: providing expected performance from multiple invasive species databases

    USGS Publications Warehouse

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search “deep” web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.

  4. Free energy of RNA-counterion interactions in a tight-binding model computed by a discrete space mapping

    NASA Astrophysics Data System (ADS)

    Henke, Paul S.; Mak, Chi H.

    2014-08-01

    The thermodynamic stability of a folded RNA is intricately tied to the counterions and the free energy of this interaction must be accounted for in any realistic RNA simulations. Extending a tight-binding model published previously, in this paper we investigate the fundamental structure of charges arising from the interaction between small functional RNA molecules and divalent ions such as Mg2+ that are especially conducive to stabilizing folded conformations. The characteristic nature of these charges is utilized to construct a discretely connected energy landscape that is then traversed via a novel application of a deterministic graph search technique. This search method can be incorporated into larger simulations of small RNA molecules and provides a fast and accurate way to calculate the free energy arising from the interactions between an RNA and divalent counterions. The utility of this algorithm is demonstrated within a fully atomistic Monte Carlo simulation of the P4-P6 domain of the Tetrahymena group I intron, in which it is shown that the counterion-mediated free energy conclusively directs folding into a compact structure.
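
    The deterministic graph search itself can be illustrated generically with Dijkstra's algorithm over a discretely connected landscape; the states and non-negative transition costs below are hypothetical placeholders, not the paper's tight-binding energies.

```python
import heapq

def dijkstra(graph, start):
    """Deterministic graph search (Dijkstra) over a discretely connected
    landscape: nodes are discrete states, edges carry non-negative costs.
    Returns the minimal cost to reach every reachable state."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy landscape: states are (site, bound-ion-count) pairs; the edge weights
# are hypothetical non-negative penalties for moving between states.
graph = {
    ("s0", 0): [(("s1", 0), 1.0), (("s0", 1), 0.4)],
    ("s0", 1): [(("s1", 1), 0.7)],
    ("s1", 0): [(("s1", 1), 0.2)],
}
print(dijkstra(graph, ("s0", 0)))
```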

  5. Free energy of RNA-counterion interactions in a tight-binding model computed by a discrete space mapping.

    PubMed

    Henke, Paul S; Mak, Chi H

    2014-08-14

    The thermodynamic stability of a folded RNA is intricately tied to the counterions and the free energy of this interaction must be accounted for in any realistic RNA simulations. Extending a tight-binding model published previously, in this paper we investigate the fundamental structure of charges arising from the interaction between small functional RNA molecules and divalent ions such as Mg(2+) that are especially conducive to stabilizing folded conformations. The characteristic nature of these charges is utilized to construct a discretely connected energy landscape that is then traversed via a novel application of a deterministic graph search technique. This search method can be incorporated into larger simulations of small RNA molecules and provides a fast and accurate way to calculate the free energy arising from the interactions between an RNA and divalent counterions. The utility of this algorithm is demonstrated within a fully atomistic Monte Carlo simulation of the P4-P6 domain of the Tetrahymena group I intron, in which it is shown that the counterion-mediated free energy conclusively directs folding into a compact structure.

  6. Global Emergency Medicine: A review of the literature from 2017.

    PubMed

    Becker, Torben K; Trehan, Indi; Hayward, Alison Schroth; Hexom, Braden J; Kivlehan, Sean M; Lunney, Kevin M; Modi, Payal; Osei-Ampofo, Maxwell; Pousson, Amelia; Cho, Daniel K; Levine, Adam C

    2018-05-23

    The Global Emergency Medicine Literature Review (GEMLR) conducts an annual search of peer-reviewed and gray literature relevant to global emergency medicine (EM) to identify, review, and disseminate the most important new research in this field to a global audience of academics and clinical practitioners. This year, 17,722 articles written in three languages were identified by our electronic search. These articles were distributed among 20 reviewers for initial screening based on their relevance to the field of global EM. Another two reviewers searched the gray literature, yielding an additional 11 articles. All articles that were deemed appropriate by at least one reviewer and approved by their editor underwent formal scoring of overall quality and importance, with two independent reviewers scoring each article. A total of 848 articles met our inclusion criteria and underwent full review: 63% were categorized as emergency care in resource-limited settings, 23% as disaster and humanitarian response, and 14% as emergency medicine development. Twenty-one articles received scores of 18.5 or higher out of a maximum score of 20 and were selected for formal summary and critique. Inter-rater reliability testing between reviewers revealed a Cohen's kappa of 0.344. In 2017, the total number of articles identified by our search continued to increase. Studies and reviews with a focus on infectious diseases, pediatrics, and trauma represented the majority of top-scoring articles.
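
    For reference, the reported inter-rater statistic is computed as follows; the include/exclude labels below are hypothetical.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters over the same items: observed agreement
    corrected for the agreement expected by chance."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n**2            # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical include/exclude decisions from two reviewers.
r1 = ["inc", "inc", "exc", "exc", "inc", "exc", "inc", "exc"]
r2 = ["inc", "exc", "exc", "exc", "inc", "inc", "inc", "exc"]
print(round(cohens_kappa(r1, r2), 3))
```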

  7. PSQP: Puzzle Solving by Quadratic Programming.

    PubMed

    Andalo, Fernanda A; Taubin, Gabriel; Goldenstein, Siome

    2017-02-01

    In this article we present the first effective method based on global optimization for the reconstruction of image puzzles comprising rectangular pieces: Puzzle Solving by Quadratic Programming (PSQP). The proposed novel mathematical formulation reduces the problem to the maximization of a constrained quadratic function, which is solved via a gradient ascent approach. The proposed method is deterministic and can deal with arbitrary identical rectangular pieces. We provide experimental results showing its effectiveness when compared to state-of-the-art approaches. Although the method was developed to solve image puzzles, we also show how to apply it to the reconstruction of simulated strip-shredded documents, broadening its applicability.
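
    The maximization of a constrained quadratic via gradient ascent can be sketched generically with projected gradient ascent over the probability simplex; PSQP's actual constraint set, which encodes piece assignments, is richer than this stand-in.

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection onto the probability simplex (sorting method)."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(y) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(y - theta, 0.0)

def quadratic_ascent(Q, n_steps=500, lr=0.05, seed=0):
    """Projected gradient ascent on f(x) = x^T Q x over the simplex: a
    generic stand-in for PSQP's constrained quadratic maximization."""
    rng = np.random.default_rng(seed)
    x = project_simplex(rng.random(Q.shape[0]))
    for _ in range(n_steps):
        x = project_simplex(x + lr * (Q + Q.T) @ x)   # gradient of x^T Q x
    return x

Q = np.array([[0.0, 2.0, 1.0], [2.0, 0.0, 0.5], [1.0, 0.5, 0.0]])
print(quadratic_ascent(Q))
```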

  8. Reliability-Based Control Design for Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
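
    The probabilistic ingredient, estimating the probability of violating a design requirement under parametric uncertainty, can be sketched with plain Monte Carlo (the paper's hybrid sampling/asymptotic estimator is more refined); the plant map and requirement below are toy assumptions.

```python
import numpy as np

# Monte Carlo estimate of the probability of violating a design requirement
# under probabilistic parametric uncertainty: the quantity minimized in the
# reliability-based design described above (illustrative plant and requirement).
rng = np.random.default_rng(0)
k = rng.normal(1.0, 0.2, 100_000)          # uncertain plant gain
zeta = 0.05 + 0.02 * k                     # resulting closed-loop damping (toy map)
overshoot = np.exp(-np.pi * zeta / np.sqrt(1 - zeta**2))   # 2nd-order step overshoot
p_fail = np.mean(overshoot > 0.8)          # requirement: overshoot <= 80%
print(f"estimated violation probability: {p_fail:.4f}")
```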

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% were observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.

  10. Are seismic hazard assessment errors and earthquake surprises unavoidable?

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir

    2013-04-01

    Why do earthquake occurrences bring us so many surprises? The answer seems evident if we review the relationships that are commonly used to assess seismic hazard. The time-span of physically reliable Seismic History is yet a small portion of a rupture recurrence cycle at an earthquake-prone site, which makes premature any kind of reliable probabilistic statement about narrowly localized seismic hazard. Moreover, seismic evidence accumulated to date demonstrates clearly that most of the empirical relations commonly accepted in the early history of instrumental seismology can be proved erroneous when tests of statistical significance are applied. Seismic events, including mega-earthquakes, cluster, displaying behaviors that are far from independent or periodic. Their distribution in space is possibly fractal and, definitely, far from uniform, even in a single segment of a fault zone. Such a situation contradicts the generally accepted assumptions used for analytically tractable or computer simulations and complicates the design of reliable methodologies for realistic earthquake hazard assessment, as well as the search for and definition of precursory behaviors to be used for forecast/prediction purposes. As a result, the conclusions drawn from such simulations and analyses can MISLEAD TO SCIENTIFICALLY GROUNDLESS APPLICATIONS, which is unwise and extremely dangerous in assessing expected societal risks and losses. For example, a systematic comparison of the GSHAP peak ground acceleration estimates with those related to actual strong earthquakes, unfortunately, discloses the gross inadequacy of this "probabilistic" product, which appears UNACCEPTABLE FOR ANY KIND OF RESPONSIBLE SEISMIC RISK EVALUATION AND KNOWLEDGEABLE DISASTER PREVENTION. The self-evident shortcomings and failures of GSHAP appeal to all earthquake scientists and engineers for an urgent revision of the global seismic hazard maps from first principles, including the background methodologies involved, such that there is: (a) a demonstrated and sufficient justification of hazard assessment protocols; (b) a more complete learning of the actual range of earthquake hazards to local communities and populations; and (c) a more ethically responsible control over how seismic hazard and seismic risk are implemented to protect public safety. It follows that the international project GEM is on the wrong track if it continues to base seismic risk estimates on the standard method of assessing seismic hazard. The situation is not hopeless and could be improved dramatically thanks to available geological, geomorphologic, seismic, and tectonic evidence and data, combined with deterministic pattern recognition methodologies, specifically, when intending to PREDICT THE PREDICTABLE, but not the exact size, site, date, and probability of a target event. Understanding the complexity of the non-linear dynamics of hierarchically organized systems of blocks-and-faults has already led to methodologies of neo-deterministic seismic hazard analysis and intermediate-term middle- to narrow-range earthquake prediction algorithms tested in real-time applications over the last decades. It proves that Contemporary Science can do a better job in disclosing Natural Hazards, assessing Risks, and delivering such information in advance of extreme catastrophes, which are LOW-PROBABILITY EVENTS THAT HAPPEN WITH CERTAINTY. Geoscientists must initiate shifting the minds of the community from pessimistic disbelief to the optimistic challenge of neo-deterministic Hazard Predictability.

  11. The past, present and future of cyber-physical systems: a focus on models.

    PubMed

    Lee, Edward A

    2015-02-26

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical.

  12. The Past, Present and Future of Cyber-Physical Systems: A Focus on Models

    PubMed Central

    Lee, Edward A.

    2015-01-01

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical. PMID:25730486

  13. National Centers for Environmental Prediction

    Science.gov Websites


  14. A Bloom Filter-Powered Technique Supporting Scalable Semantic Discovery in Data Service Networks

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Shi, R.; Bao, Q.; Lee, T. J.; Ramachandran, R.

    2016-12-01

    More and more Earth data analytics software products are published on the Internet as services, in the form of either heavyweight WSDL services or lightweight RESTful APIs. Such reusable data analytics services form a data service network, which allows Earth scientists to compose (mash up) services into value-added ones. Therefore, it is important to have a technique capable of helping Earth scientists quickly identify appropriate candidate datasets and services in the global data service network. Most existing service discovery techniques, however, rely mainly on syntax- or semantics-based matchmaking between service requests and available services. Since the scale of the data service network is increasing rapidly, the run-time computational cost will soon become a bottleneck. To address this issue, this project presents a way of applying a network routing mechanism to facilitate data service discovery in a service network, featuring scalability and performance. Earth data services are automatically annotated in the Web Ontology Language for Services (OWL-S) based on their metadata, semantic information, and usage history. A Deterministic Annealing (DA) technique is applied to dynamically organize annotated data services into a hierarchical network, where virtual routers are created to represent semantic local networks characterized by leading terms. Afterwards, Bloom filters are generated over the virtual routers. A data service search request is transformed into a network routing problem in order to quickly locate candidate services through the network hierarchy. A neural network-powered technique is applied to assure network address encoding and routing performance. A series of empirical studies was conducted to evaluate the applicability and effectiveness of the proposed approach.
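
    A minimal sketch of the filter-guided routing idea, under one plausible reading of the abstract: each virtual router keeps a Bloom filter summarizing every annotation term reachable through its subtree, and a query descends only into subtrees whose filters might match. The Router hierarchy, service names, and parameters below are invented for illustration.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hashes over an m-bit integer bitmap."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, term):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{term}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, term):
        for p in self._positions(term):
            self.bits |= 1 << p

    def might_contain(self, term):  # false positives possible, no false negatives
        return all(self.bits >> p & 1 for p in self._positions(term))

class Router:
    """Hypothetical virtual router: summarizes all terms below it."""
    def __init__(self, services=None, children=None):
        self.services = services or {}   # service name -> annotation terms
        self.children = children or []
        self.filter = BloomFilter()
        for terms in self.services.values():
            for t in terms:
                self.filter.add(t)
        for child in self.children:
            # same (m, k) for every filter lets us OR the bitmaps directly
            self.filter.bits |= child.filter.bits

def search(router, term):
    """Descend only into subtrees whose filter may contain the term."""
    if not router.filter.might_contain(term):
        return []
    hits = [name for name, terms in router.services.items() if term in terms]
    for child in router.children:
        hits += search(child, term)
    return hits

leaf = Router(services={"MODIS-subsetter": {"aerosol", "subsetting"}})
root = Router(children=[leaf, Router(services={"GPM-regridder": {"precipitation"}})])
print(search(root, "aerosol"))   # ['MODIS-subsetter']
```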

  15. Global and local "teachable moments": The role of Nobel Prize and national pride.

    PubMed

    Baram-Tsabari, Ayelet; Segev, Elad

    2018-05-01

    This study examined to what extent Nobel Prize announcements and awards trigger global and local searches or "teachable moments" related to the laureates and their discoveries. We examined the longitudinal trends in Google searches for the names and discoveries of Nobel laureates from 2012 to 2017. The findings show that Nobel Prize events clearly trigger more searches for laureates, but also for their respective discoveries. We suggest that fascination with the Nobel Prize creates a teachable moment not only about the underlying science, but also about the nature of science. Locality also emerged as playing a significant role in intensifying interest.

  16. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing

    2014-09-01

    We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold that determines the persistence or extinction of the disease. Using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail: the infectives persist and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infectives disappear and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending it to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Again regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
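
    As a rough illustration of the deterministic/stochastic pairing (not the paper's multi-group MSIR model), the sketch below integrates a single-group S-I-R model with a vaccinated fraction p, then perturbs the infective compartment with white noise in Euler-Maruyama fashion; all parameter values are invented.

```python
import numpy as np

# Illustrative single-group S-I-R model with vaccinated fraction p;
# parameters are made up for the sketch, not taken from the paper.
beta, gamma, mu, p = 0.5, 0.2, 0.02, 0.3
sigma = 0.05          # intensity of the stochastic perturbation
R0 = beta * (1 - p) / (gamma + mu)
print("R0 =", round(R0, 3))          # > 1 here, so an endemic state exists

def simulate(T=400.0, dt=0.01, noise=False, seed=0):
    rng = np.random.default_rng(seed)
    S, I = 1.0 - p, 1e-3
    for _ in range(int(T / dt)):
        dS = mu * (1 - p) - beta * S * I - mu * S
        dI = beta * S * I - (gamma + mu) * I
        S += dS * dt
        I += dI * dt
        if noise:  # Euler-Maruyama: white noise around the deterministic drift
            I += sigma * I * np.sqrt(dt) * rng.standard_normal()
        I = max(I, 0.0)
    return I

print("deterministic infective level :", simulate())
print("stochastic sample-path level  :", simulate(noise=True))
```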

  17. Hybrid Monte Carlo/deterministic methods for radiation shielding problems

    NASA Astrophysics Data System (ADS)

    Becker, Troy L.

    For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods can be used to achieve user-specified Monte Carlo distributions. Overall, the Transform approach performed more efficiently than the weight window methods, but it performed much more efficiently for source-detector problems than for global problems.
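
    The weight-window element is easy to sketch in isolation. In the fragment below, particles whose statistical weight exceeds the window are split and those below it play Russian roulette, preserving total weight in expectation; the window bounds and particle list are invented, and a production code would take the window map from the deterministic adjoint solution.

```python
import random

def apply_weight_window(particles, w_low, w_high):
    """Split heavy particles and roulette light ones so surviving
    weights land inside [w_low, w_high]; total weight is preserved
    in expectation. `particles` is a list of (position, weight) pairs."""
    survivors = []
    w_target = 0.5 * (w_low + w_high)
    for pos, w in particles:
        if w > w_high:                       # splitting (exactly weight-preserving)
            n = int(w / w_target) + 1
            survivors += [(pos, w / n)] * n
        elif w < w_low:                      # Russian roulette
            if random.random() < w / w_target:
                survivors.append((pos, w_target))
        else:                                # already inside the window
            survivors.append((pos, w))
    return survivors

random.seed(1)
parts = [("cellA", 8.0), ("cellB", 0.01), ("cellC", 0.5)]
print(apply_weight_window(parts, w_low=0.25, w_high=1.0))
```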

  18. Edge states in the climate system: exploring global instabilities and critical transitions

    NASA Astrophysics Data System (ADS)

    Lucarini, Valerio; Bódai, Tamás

    2017-07-01

    Multistability is a ubiquitous feature in systems of geophysical relevance and provides key challenges for our ability to predict a system’s response to perturbations. Near critical transitions small causes can lead to large effects and—for all practical purposes—irreversible changes in the properties of the system. As is well known, the Earth climate is multistable: present astronomical and astrophysical conditions support two stable regimes, the warm climate we live in, and a snowball climate characterized by global glaciation. We first provide an overview of methods and ideas relevant for studying the climate response to forcings and focus on the properties of critical transitions in the context of both stochastic and deterministic dynamics, and assess strengths and weaknesses of simplified approaches to the problem. Following an idea developed by Eckhardt and collaborators for the investigation of multistable turbulent fluid dynamical systems, we study the global instability giving rise to the snowball/warm multistability in the climate system by identifying the climatic edge state, a saddle embedded in the boundary between the two basins of attraction of the stable climates. The edge state attracts initial conditions belonging to such a boundary and, while being defined by the deterministic dynamics, is the gate facilitating noise-induced transitions between competing attractors. We use a simplified yet Earth-like intermediate complexity climate model constructed by coupling a primitive equations model of the atmosphere with a simple diffusive ocean. We refer to the climatic edge states as Melancholia states and provide an extensive analysis of their features. We study their dynamics, their symmetry properties, and we follow a complex set of bifurcations. We find situations where the Melancholia state has chaotic dynamics. In these cases, we have that the basin boundary between the two basins of attraction is a strange geometric set with a nearly zero codimension, and relate this feature to the time scale separation between instabilities occurring on weather and climatic time scales. We also discover a new stable climatic state that is similar to a Melancholia state and is characterized by non-trivial symmetry properties.
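
    The edge-tracking construction credited to Eckhardt and collaborators is usually implemented as repeated bisection between trajectories that fall into the two competing attractors. A toy sketch on the one-dimensional bistable system dx/dt = x - x^3 (attractors at ±1, edge state at 0), standing in for the full climate model:

```python
import numpy as np

def flow(x, t_chunk=1.0, dt=0.01):
    """Integrate the toy bistable system dx/dt = x - x**3 for a short chunk."""
    for _ in range(int(t_chunk / dt)):
        x += (x - x**3) * dt
    return x

def attractor(x):
    """Which attractor an initial condition ends up in (-1 or +1)."""
    return np.sign(flow(x, t_chunk=20.0))

def edge_track(x_lo=-0.9, x_hi=0.9, chunks=30, tol=1e-9):
    """Keep one trajectory on each side of the basin boundary and
    repeatedly bisect, so both shadow the edge state (here x* = 0)."""
    for _ in range(chunks):
        while x_hi - x_lo > tol:             # bisection stage
            mid = 0.5 * (x_lo + x_hi)
            if attractor(mid) < 0:
                x_lo = mid
            else:
                x_hi = mid
        x_lo, x_hi = flow(x_lo), flow(x_hi)  # short free evolution, then re-bisect
    return 0.5 * (x_lo + x_hi)

print("edge state estimate:", edge_track())  # ~0.0
```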

  19. Trend assessment: applications for hydrology and climate research

    NASA Astrophysics Data System (ADS)

    Kallache, M.; Rust, H. W.; Kropp, J.

    2005-02-01

    The assessment of trends in climatology and hydrology still is a matter of debate. Capturing typical properties of time series, like trends, is highly relevant for the discussion of potential impacts of global warming or flood occurrences. It provides indicators for the separation of anthropogenic signals and natural forcing factors by distinguishing between deterministic trends and stochastic variability. In this contribution river run-off data from gauges in Southern Germany are analysed regarding their trend behaviour by combining a deterministic trend component and a stochastic model part in a semi-parametric approach. In this way the trade-off between trend and autocorrelation structure can be considered explicitly. A test for a significant trend is introduced via three steps: First, a stochastic fractional ARIMA model, which is able to reproduce short-term as well as long-term correlations, is fitted to the empirical data. In a second step, wavelet analysis is used to separate the variability of small and large time-scales assuming that the trend component is part of the latter. Finally, a comparison of the overall variability to that restricted to small scales results in a test for a trend. The extraction of the large-scale behaviour by wavelet analysis provides a clue concerning the shape of the trend.
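
    The scale-separation step (the second of the three) can be sketched with a discrete wavelet transform: reconstruct the coarsest approximation as the candidate trend and compare variances. The fragment below assumes the PyWavelets package and synthetic data, and omits the FARIMA fit and the formal test.

```python
import numpy as np
import pywt

# Synthetic "run-off" series: weak linear trend + seasonality + noise.
rng = np.random.default_rng(0)
t = np.arange(1024)
runoff = 0.002 * t + np.sin(2 * np.pi * t / 365) + rng.standard_normal(1024)

coeffs = pywt.wavedec(runoff, "db4", level=6)
# Keep only the coarsest approximation -> large scales (candidate trend).
large = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
trend = pywt.waverec(large, "db4")[: len(runoff)]
residual = runoff - trend               # small-scale variability

print("total variance      :", runoff.var().round(3))
print("small-scale variance:", residual.var().round(3))
print("variance ratio      :", (residual.var() / runoff.var()).round(3))
```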

  20. Automatic design of synthetic gene circuits through mixed integer non-linear programming.

    PubMed

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfy the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits.
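
    For intuition about the part-selection search space (not the paper's method, which hands the problem to an MINLP solver precisely so that it scales), a brute-force enumeration over a toy library looks like this; all strengths, costs, and the response curve are invented.

```python
from itertools import product

# Toy part library (strengths are invented numbers, not measured parts).
promoters = {"pLow": 0.2, "pMed": 1.0, "pHigh": 5.0}
rbs = {"rWeak": 0.3, "rStrong": 2.0}

target = 2.5        # desired steady-state expression of the reporter
budget = 6.0        # constraint on summed part "cost" (here: strength)

def expression(p, r):
    """Invented nonlinear response: saturating in promoter*RBS strength."""
    x = p * r
    return 4.0 * x / (1.0 + x)       # Hill-type saturation

best = None
for (pn, p), (rn, r) in product(promoters.items(), rbs.items()):
    if p + r > budget:               # feasibility check (the constraint)
        continue
    err = abs(expression(p, r) - target)
    if best is None or err < best[0]:
        best = (err, pn, rn)

print("best feasible pick:", best)   # enumeration explodes as parts are added,
                                     # which is why MINLP is used instead
```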

  1. Approximate reduction of linear population models governed by stochastic differential equations: application to multiregional models.

    PubMed

    Sanz, Luis; Alonso, Juan Antonio

    2017-12-01

    In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to transform a complex system, involving many coupled variables and processes acting on different time scales, into a simpler reduced model with a smaller number of 'global' variables, in such a way that the dynamics of the former can be approximated by those of the latter. In our model we contemplate a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with a smaller number of variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced system. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the populations in each patch is affected by additive noise.

  2. Optimal Foraging in Semantic Memory

    ERIC Educational Resources Information Center

    Hills, Thomas T.; Jones, Michael N.; Todd, Peter M.

    2012-01-01

    Do humans search in memory using dynamic local-to-global search strategies similar to those that animals use to forage between patches in space? If so, do their dynamic memory search policies correspond to optimal foraging strategies seen for spatial foraging? Results from a number of fields suggest these possibilities, including the shared…

  3. National Centers for Environmental Prediction

    Science.gov Websites


  4. National Centers for Environmental Prediction

    Science.gov Websites


  5. Global trends in the awareness of sepsis: insights from search engine data between 2012 and 2017.

    PubMed

    Jabaley, Craig S; Blum, James M; Groff, Robert F; O'Reilly-Shah, Vikas N

    2018-01-17

    Sepsis is an established global health priority with high mortality that can be curtailed through early recognition and intervention; as such, efforts to raise awareness are potentially impactful and increasingly common. We sought to characterize trends in the awareness of sepsis by examining temporal, geographic, and other changes in search engine utilization for sepsis information-seeking online. Using time series analyses and mixed descriptive methods, we retrospectively analyzed publicly available global usage data reported by Google Trends (Google, Palo Alto, CA, USA) concerning web searches for the topic of sepsis between 24 June 2012 and 24 June 2017. Google Trends reports aggregated and de-identified usage data for its search products, including interest over time, interest by region, and details concerning the popularity of related queries where applicable. Outlying epochs of search activity were identified using autoregressive integrated moving average modeling with transfer functions. We then identified awareness campaigns and news media coverage that correlated with epochs of significantly heightened search activity. A second-order autoregressive model with transfer functions was specified following preliminary outlier analysis. Nineteen significant outlying epochs above the modeled baseline were identified in the final analysis that correlated with 14 awareness and news media events. Our model demonstrated that the baseline level of search activity increased in a nonlinear fashion. A recurrent cyclic increase in search volume beginning in 2012 was observed that correlates with World Sepsis Day. Numerous other awareness and media events were correlated with outlying epochs. The average worldwide search volume for sepsis was less than that of influenza, myocardial infarction, and stroke. Analyzing aggregate search engine utilization data has promise as a mechanism to measure the impact of awareness efforts. Heightened information-seeking about sepsis occurs in close proximity to awareness events and relevant news media coverage. Future work should focus on validating this approach in other contexts and comparing its results to traditional methods of awareness campaign evaluation.

  6. Application of Metaheuristic and Deterministic Algorithms for Aircraft Reference Trajectory Optimization =

    NASA Astrophysics Data System (ADS)

    Murrieta Mendoza, Alejandro

    Aircraft reference trajectory optimization is an alternative method to reduce fuel consumption, and thus the pollution released to the atmosphere. Fuel consumption reduction is of special importance for two reasons: first, because the aeronautical industry is responsible for 2% of the CO2 released to the atmosphere, and second, because it reduces the flight cost. The aircraft fuel model was obtained from a numerical performance database which was created and validated by our industrial partner from flight experimental test data. A new methodology using the numerical database was proposed in this thesis to compute the fuel burn for a given trajectory. Weather parameters such as wind and temperature were taken into account, as they have an important effect on fuel burn. The open source model used to obtain the weather forecast was provided by Weather Canada. A combination of linear and bi-linear interpolations allowed finding the required weather data. The search space was modelled using different graphs: one graph was used for mapping the different flight phases such as climb, cruise and descent, and another graph was used for mapping the physical space in which the aircraft would perform its flight. The trajectory was optimized in its vertical reference trajectory using the Beam Search algorithm, and a combination of the Beam Search algorithm with a search space reduction technique. The trajectory was then optimized simultaneously for the vertical and lateral reference navigation plans, while fulfilling a Required Time of Arrival constraint, using metaheuristic algorithms such as the artificial bee colony and ant colony optimization. Results were validated using the FlightSIM software, a commercial Flight Management System, an exhaustive search algorithm, and as-flown flights obtained from FlightAware. All algorithms were able to reduce the fuel burn and the flight costs.
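
    The vertical-profile step lends itself to a compact sketch. Below, a beam search keeps only the BEAM_WIDTH cheapest partial profiles while stepping through waypoints, choosing at each one whether to hold, climb, or descend one flight level; the fuel model and numbers are invented, not the thesis's performance database.

```python
import heapq

# Toy vertical-profile search: at each waypoint the aircraft may hold,
# climb or descend one flight level; fuel numbers are invented.
LEVELS = [300, 320, 340, 360, 380]           # flight levels (FL)
N_WAYPOINTS = 8
BEAM_WIDTH = 3

def leg_fuel(level, change):
    cruise = 100 - 0.08 * level              # higher level -> slightly cheaper
    return cruise + (60 if change > 0 else 15 if change < 0 else 0)

def beam_search(start=300):
    beam = [(0.0, [start])]                   # (fuel so far, levels flown)
    for _ in range(N_WAYPOINTS - 1):
        candidates = []
        for fuel, path in beam:
            i = LEVELS.index(path[-1])
            for j in (i - 1, i, i + 1):       # descend / hold / climb
                if 0 <= j < len(LEVELS):
                    step = leg_fuel(LEVELS[j], j - i)
                    candidates.append((fuel + step, path + [LEVELS[j]]))
        beam = heapq.nsmallest(BEAM_WIDTH, candidates)   # prune to beam width
    return min(beam)

fuel, profile = beam_search()
print("fuel:", round(fuel, 1), "profile:", profile)
```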

  7. A surface wave reflector in Southwestern Japan

    NASA Astrophysics Data System (ADS)

    Mak, S.; Koketsu, K.; Miyake, H.; Obara, K.; Sekine, S.

    2009-12-01

    Surface waves at short periods (<35s) are affected severely by heterogeneities in the crust and the uppermost mantle. When the scale of heterogeneity is sufficiently large, its effect can be studied in a deterministic way using conventional concepts of reflection and refraction. A well-known example is surface wave refraction at a continental margin. We present a case study to investigate the composition of surface wave coda in a deterministic approach. A long duration of surface wave coda with a predominant period of 20s is observed during various strong earthquakes around Japan. The coda shows an unambiguous propagation direction, implying a deterministic nature. Beamforming and particle motion analysis suggest that the surface wave later arrivals could be explained by Love wave reflections by a point reflector located offshore southeast of Kyushu. The reflection demonstrates a seemingly incidence-independent favorable azimuth in emitting strength. In addition to beamforming, we use a new regional crustal velocity model to perform a grid-search ray-tracing with the assumption of a point reflector to further constrain the location of coda generation. Because strong velocity anomalies exist near the zone of interest, we decided to use a network shortest-path ray-tracing method, instead of analytical methods like shooting and bending, to avoid problems like convergence, shadow zones, and smooth-model assumptions. Two geological features are found to be related to the formation of the coda. The primary one is the intersection between the Kyushu-Palau Ridge and the Nankai Trough offshore southeast of Kyushu (hereafter referred to as "KPR-NT"), which may act as a point reflector. There is a strong Love wave phase velocity anomaly at KPR-NT but not at other parts of the ridge, implying that topography is irrelevant. Rayleigh wave phase velocity does not experience a strong anomaly there, which is consistent with the absence of Rayleigh wave reflections implied by the observed particle motions. The secondary one is a low phase velocity (<2km/s for T=20s) at the accretionary wedge of the Nankai Trough due to the thick sediment. Such a long and narrow low velocity zone, with its southwest tip at KPR-NT, is a potential wave-guide to channel waves towards KPR-NT. The longer duration of deterministic later arrivals than the direct arrival is partially explained by multi-pathing due to the wave-guide. The surface wave coda is observable for earthquakes whose propagation path does not include the accretionary wedge, implying that the wedge enhances, but is not indispensable to, the formation of the observed coda.
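
    A network shortest-path ray tracer of the kind mentioned reduces to Dijkstra's algorithm on a grid whose edge costs are distances times average slowness. A toy two-dimensional sketch, with an invented low-velocity strip standing in for the accretionary wedge:

```python
import heapq
import math

# Toy network shortest-path ray tracing on a 2-D velocity grid
# (velocities are invented; 8-neighbour connectivity).
NX, NY, H = 40, 40, 1.0                       # grid size, node spacing (km)

def velocity(i, j):
    v = 3.0                                    # background phase velocity (km/s)
    if 15 <= i <= 25 and j < 10:               # slow "accretionary wedge" strip
        v = 1.8
    return v

def travel_time(src, dst):
    """Dijkstra over grid nodes; edge cost = distance * average slowness."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if (i, j) == dst:
            return t
        if t > dist.get((i, j), math.inf):
            continue
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < NX and 0 <= nj < NY:
                    step = H * math.hypot(di, dj)
                    slow = 0.5 * (1 / velocity(i, j) + 1 / velocity(ni, nj))
                    nt = t + step * slow
                    if nt < dist.get((ni, nj), math.inf):
                        dist[(ni, nj)] = nt
                        heapq.heappush(heap, (nt, (ni, nj)))
    return math.inf

print("t(source -> reflector) = %.1f s" % travel_time((0, 0), (20, 5)))
```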

  8. Global Optimization of Interplanetary Trajectories in the Presence of Realistic Mission Constraints

    NASA Technical Reports Server (NTRS)

    Hinckley, David, Jr.; Englander, Jacob; Hitt, Darren

    2015-01-01

    Interplanetary missions are often subject to difficult constraints, like solar phase angle upon arrival at the destination, velocity at arrival, and altitudes for flybys. Preliminary design of such missions is often conducted by solving the unconstrained problem and then filtering away solutions which do not naturally satisfy the constraints. However this can bias the search into non-advantageous regions of the solution space, so it can be better to conduct preliminary design with the full set of constraints imposed. In this work two stochastic global search methods are developed which are well suited to the constrained global interplanetary trajectory optimization problem.

  9. PREFACE: The random search problem: trends and perspectives The random search problem: trends and perspectives

    NASA Astrophysics Data System (ADS)

    da Luz, Marcos G. E.; Grosberg, Alexander; Raposo, Ernesto P.; Viswanathan, Gandhi M.

    2009-10-01

    `I can't find my keys!' Who hasn't gone through this experience when leaving, in a hurry, to attend to some urgent matter? The keys could be in many different places. Unless one remembers where he or she has left the keys, the only solution is to look around, more or less randomly. Random searches are common because in many cases the locations of the specific targets are not known a priori. Indeed, such problems have been discussed in diverse contexts, attracting the interest of scientists from many fields, for example: the dynamical or stochastic search for a stable minimum in a complex energy landscape, relevant to systems such as glasses, protein (folding), and others; oil recovery from mature reservoirs; proteins searching for their specific target sites on DNA; animal foraging; survival at the edge of extinction due to low availability of energetic resources; automated searches of registers in high-capacity databases; search engines (e.g., `crawlers') that explore the internet; and even pizza delivery in a jammed traffic system of a medium-size town. In this way, the subject is interesting, challenging and has recently become an important scientific area of investigation. Although the applications are diverse, the underlying physical mechanisms are the same, as will become clear in this special issue. Moreover, the inherent complexity of the problem and the abundance of ideas and methods found in this growing interdisciplinary field of research draw on many areas of physics. In particular, the concepts and methods of statistical mechanics are particularly useful to the study of random searches. On the one hand, the field centres on how to find the global or local maxima of search efficiency functions with incomplete information. This is, of course, related to the long tradition in physics of using different conceptual and mathematical tools, such as variational methods, to extremize relevant quantities, e.g., energy, entropy and action. Such ideas and approaches are very important to solve computationally complex problems (e.g., protein folding), which involve optimizations in very high dimensional energy landscapes. On the other hand, random searches can also be studied from the perspective of diffusion and transport properties, which is an important topic in condensed matter and statistical physics. For instance, the features of light scattered in a medium where the scatterers have a power-law distribution of sizes may, in many aspects, resemble the patterns generated by a searcher performing Lévy walks. There are many questions related to random searches: how the searcher moves or should move, what patterns are generated during the locomotion, how the encounter rates depend on parameters of the search, etc. But perhaps the most well-known issue is how to optimize the search for specific target scenarios. The optimization can be in either continuous or discrete environments, when the information available is limited. The answer to this question determines specific strategies of movement that would maximize some properly defined search efficiency measure. The relevance of the question stems from the fact that the strategy-dynamics represents one of the most important factors that modulate the rate of encounters (e.g., the encounter rate between predator and prey). In the general context, strategy choices can be essential in determining the outcome and thus the success of a given search. For instance, realistic searches—and locomotion in general—require the expenditure of energy.
Thus, inefficient search could deplete energy reserves (e.g., fat) and lead to rates of encounters below a minimum acceptable threshold (resulting in extinction of a species, for example). The framework of the random search `game' distinguishes between the two interacting players in a context of pursuit and chance. They are either a `searcher' (e.g., predator, protein, radar, `crawler') or a `target' (e.g., prey, DNA sequence, a missing aircraft, a given web site). Regarding the nature of the searching drive, in certain instances it can be guided almost entirely by external cues, either by the cognitive (memory) or detective (olfaction, vision, etc) skills of the searcher. However, in many situations the movement is non-oriented, being in essence a stochastic process. Therefore, in such cases (and even when a small deterministic component in the locomotion exists) a random search effectively defines the final rates of encounters. Hence, one reason underlying the richness of the random search problem relates just to the `ignorance' of the locations of the randomly located targets. Contrary to conventional wisdom, the lack of complete information does not necessarily lead to greater complexity. As an illustrative example, let us consider the case of complete information. If the positions of all target sites are known in advance, then the question of the sequential order in which to visit the sites so as to reduce the energy costs of locomotion itself becomes a rather challenging problem: the famous `travelling salesman' optimization query, belonging to the NP-complete class of problems. The ignorance of the target site locations, however, considerably modifies the problem and renders it not amenable to treatment by purely deterministic computational methods. In fact, as expected, the random search problem is not particularly suited to search algorithms that do not use elements of randomness. So, only a statistical approach to the search problem can adequately deal with the element of ignorance. In other words, the incomplete information renders the search under-determined, i.e., it is not possible to find the `best' solution to the problem because not all the information is given. Instead, one must guess, and probabilistic or stochastic strategies become unavoidable. Also, the random search problem bears a relation to reaction-diffusion processes, because the search involves a diffusive aspect, movement, as well as a reactive component, e.g., eating, mating, etc. From the comments above, it is clear that the subject can be treated from the perspective of different fields and subfields of physics and mathematics: statistical mechanics, stochastic processes, Lévy walks and flights, complex systems, fractal geometry, and non-linear phenomena. Some important questions in random searches, especially in the case of discrete landscapes, are also associated with graph theory, random lattices, and complex networks. The aim of this special issue is to bring together, in a single publication, all or most of the relevant theoretical concepts and ideas, together with discussions of recent findings that are important for understanding the main elements of random searches. In addition, we address the types of problems which are characteristic of random searching. Thus, we sincerely hope that this collection of works will provide a good overview for anyone interested in this field.
We should like to thank the editors and staff of Journal of Physics A: Mathematical and Theoretical for agreeing that random searches are an interesting topic of research deserving a topical publication. We are also very grateful to Rebecca Gillan for helping us at all stages of the preparation and organization of this special issue. Finally, we would like to thank the contributing authors, who share with the guest editors the enthusiasm and interest for this fascinating field of research.

  10. Global Emergency Medicine: A Review of the Literature From 2016.

    PubMed

    Becker, Torben K; Hansoti, Bhakti; Bartels, Susan; Hayward, Alison Schroth; Hexom, Braden J; Lunney, Kevin M; Marsh, Regan H; Osei-Ampofo, Maxwell; Trehan, Indi; Chang, Julia; Levine, Adam C

    2017-09-01

    The Global Emergency Medicine Literature Review (GEMLR) conducts an annual search of peer-reviewed and gray literature relevant to global emergency medicine (EM) to identify, review, and disseminate the most important new research in this field to a global audience of academics and clinical practitioners. This year 13,890 articles written in four languages were identified by our search. These articles were distributed among 20 reviewers for initial screening based on their relevance to the field of global EM. An additional two reviewers searched the gray literature. All articles that were deemed appropriate by at least one reviewer and approved by their editor underwent formal scoring of overall quality and importance. Two independent reviewers scored all articles. A total of 716 articles met our inclusion criteria and underwent full review. Fifty-nine percent were categorized as emergency care in resource-limited settings, 17% as EM development, and 24% as disaster and humanitarian response. Nineteen articles received scores of 18.5 or higher out of a maximum score of 20 and were selected for formal summary and critique. Inter-rater reliability testing between reviewers revealed Cohen's kappa of 0.441. In 2016, the total number of articles identified by our search continued to increase. The proportion of articles in each of the three categories remained stable. Studies and reviews with a focus on infectious diseases, pediatrics, and the use of ultrasound in resource-limited settings represented the majority of articles selected for final review. © 2017 The Authors. Academic Emergency Medicine published by Wiley Periodicals, Inc. on behalf of the Society for Academic Emergency Medicine (SAEM).

  11. Deterministic and stochastic CTMC models from Zika disease transmission

    NASA Astrophysics Data System (ADS)

    Zevika, Mona; Soewono, Edy

    2018-03-01

    Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
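
    The extinction-versus-outbreak estimate from a CTMC can be sketched with the embedded jump chain of the Gillespie algorithm (the exponential waiting times do not affect which outcome occurs, so they are not sampled here). The single-group S-I-R rates below are invented and stand in for the paper's vector-host model.

```python
import random

# Toy single-host S-I-R continuous-time Markov chain; rates invented,
# not the vector-host Zika model of the paper.
N, beta, gamma = 500, 0.4, 0.2                # R0 = beta/gamma = 2

def outbreak(i0=2, threshold=50, seed=None):
    """Run the embedded jump chain until extinction (I = 0) or outbreak."""
    rng = random.Random(seed)
    s, i = N - i0, i0
    while 0 < i < threshold:
        rate_inf = beta * s * i / N            # next infection
        rate_rec = gamma * i                   # next recovery
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            s, i = s - 1, i + 1
        else:
            i -= 1
    return i >= threshold

runs = 5000
p_out = sum(outbreak(seed=k) for k in range(runs)) / runs
# Branching-process theory predicts P(outbreak) ~ 1 - (1/R0)**i0 = 0.75
print("estimated P(outbreak) =", round(p_out, 3))
```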

  12. Distinguishing between stochasticity and determinism: Examples from cell cycle duration variability.

    PubMed

    Pearl Mizrahi, Sivan; Sandler, Oded; Lande-Diner, Laura; Balaban, Nathalie Q; Simon, Itamar

    2016-01-01

    We describe a recent approach for distinguishing between stochastic and deterministic sources of variability, focusing on the mammalian cell cycle. Variability between cells is often attributed to stochastic noise, although it may be generated by deterministic components. Interestingly, lineage information can be used to distinguish between variability and determinism. Analysis of correlations within a lineage of the mammalian cell cycle duration revealed its deterministic nature. Here, we discuss the sources of such variability and the possibility that the underlying deterministic process is due to the circadian clock. Finally, we discuss the "kicked cell cycle" model and its implication on the study of the cell cycle in healthy and cancerous tissues. © 2015 WILEY Periodicals, Inc.

  13. Genetic algorithm approaches for conceptual design of spacecraft systems including multi-objective optimization and design under uncertainty

    NASA Astrophysics Data System (ADS)

    Hassan, Rania A.

    In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of the system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state-of-the-practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly on the same order of magnitude as that of standard single-objective deterministic Genetic Algorithm. The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessary high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.

  14. A Meta-Data Driven Approach to Searching for Educational Resources in a Global Context.

    ERIC Educational Resources Information Center

    Wade, Vincent P.; Doherty, Paul

    This paper presents the design of an Internet-enabled search service that supports educational resource discovery within an educational brokerage service. More specifically, it presents the design and implementation of a metadata-driven approach to implementing the distributed search and retrieval of Internet-based educational resources and…

  15. A modified three-term PRP conjugate gradient algorithm for optimization models.

    PubMed

    Wu, Yanlin

    2017-01-01

    The nonlinear conjugate gradient (CG) algorithm is a very effective method for optimization, especially for large-scale problems, because of its low memory requirement and simplicity. Zhang et al. (IMA J. Numer. Anal. 26:629-649, 2006) first proposed a three-term CG algorithm based on the well-known Polak-Ribière-Polyak (PRP) formula for unconstrained optimization; their method has the sufficient descent property without any line search technique. They proved global convergence under the Armijo line search, but the proof fails for the Wolfe line search technique. Inspired by their method, we make a further study and give a modified three-term PRP CG algorithm. The presented method possesses the following features: (1) the sufficient descent property also holds without any line search technique; (2) the trust region property of the search direction is automatically satisfied; (3) the steplength is bounded from below; (4) global convergence is established under the Wolfe line search. Numerical results show that the new algorithm is more effective than the normal method.
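
    For reference, the classical Zhang et al. three-term PRP direction (the paper proposes a modified variant, not reproduced here) combines with a Wolfe line search as sketched below; SciPy's line_search implements the Wolfe conditions. The direction satisfies g·d = -||g||² by construction.

```python
import numpy as np
from scipy.optimize import line_search

def three_term_prp(f, grad, x, tol=1e-6, max_iter=500):
    """Sketch of the classical Zhang-Zhou-Li three-term PRP direction
    (not the paper's modified variant) with a Wolfe line search."""
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:          # line search failed; restart on steepest descent
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        denom = g @ g                          # ||g_k||^2
        beta = (g_new @ y) / denom
        theta = (g_new @ d) / denom
        d = -g_new + beta * d - theta * y      # gives g_new @ d = -||g_new||^2
        x, g = x_new, g_new
    return x

rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_g = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                              200*(x[1] - x[0]**2)])
print(three_term_prp(rosen, rosen_g, np.array([-1.2, 1.0])))  # should approach [1, 1]
```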

  16. Fast adaptive diamond search algorithm for block-matching motion estimation using spatial correlation

    NASA Astrophysics Data System (ADS)

    Park, Sang-Gon; Jeong, Dong-Seok

    2000-12-01

    In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity via the UESA (Unimodal Error Surface Assumption), whereby the matching error monotonically increases as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) make use of the fact that global minimum points in real world video sequences are centered at the position of zero motion. But these BMAs, especially for large motions, are easily trapped in local minima, resulting in poor matching accuracy. So, we propose a new motion estimation algorithm that uses the spatial correlation among neighboring blocks. We move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). The computer simulation shows that the proposed algorithm has almost the same computational complexity as DS (Diamond Search) but enhances PSNR. Moreover, the proposed algorithm gives almost the same PSNR as FS (Full Search), even for large motions, with half the computational load.
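
    The underlying diamond search is compact enough to sketch: repeat the large diamond pattern (LDSP) until the centre is best, then refine once with the small pattern (SDSP). The proposed FADS modifications (moved search origin, adaptivity) are not reproduced; the test image is synthetic.

```python
import numpy as np

LDSP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
SDSP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]

def mad(cur, ref, bx, by, dx, dy, B=16):
    """Mean absolute difference between a block and its displaced match."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + B > h or y + B > w:
        return np.inf
    return np.abs(cur[bx:bx+B, by:by+B].astype(int)
                  - ref[x:x+B, y:y+B].astype(int)).mean()

def diamond_search(cur, ref, bx, by, B=16):
    cx = cy = 0                                # search centre (motion candidate)
    while True:                                # large diamond until centre is best
        centre = mad(cur, ref, bx, by, cx, cy, B)
        best, dx, dy = min((mad(cur, ref, bx, by, cx + dx, cy + dy, B), dx, dy)
                           for dx, dy in LDSP)
        if best >= centre:
            break
        cx, cy = cx + dx, cy + dy
    _, dx, dy = min((mad(cur, ref, bx, by, cx + dx, cy + dy, B), dx, dy)
                    for dx, dy in SDSP)        # final small-diamond refinement
    return cx + dx, cy + dy

y, x = np.mgrid[0:64, 0:64]
ref = (128 + 60 * np.sin(x / 6.0) + 50 * np.cos(y / 9.0)).astype(np.uint8)
cur = np.roll(ref, (3, -2), axis=(0, 1))       # shift content by (3, -2)
print(diamond_search(cur, ref, 24, 24))        # -> (-3, 2), the matching offset
```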

  17. Disentangling Mechanisms That Mediate the Balance Between Stochastic and Deterministic Processes in Microbial Succession

    DOE PAGES

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; ...

    2015-03-17

    Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about mechanisms shifting their relative importance. To better understand underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.

  18. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.

  19. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    NASA Astrophysics Data System (ADS)

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-07-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
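
    The abstracts do not spell out the two-point probability model; a common choice of this kind is Rosenblueth's point-estimate method, which runs the deterministic model at mean ± one standard deviation of each uncertain input and averages the 2^n results. A sketch with an invented surrogate for the groundwater model:

```python
from itertools import product
from statistics import mean

# Rosenblueth-style two-point estimate (one common "two-point probability
# model"; the papers' exact formulation may differ). Three uncertain
# inputs: hydraulic conductivity K, specific yield Sy, net source Q.
means  = {"K": 10.0, "Sy": 0.15, "Q": 1.0}
sigmas = {"K": 3.0,  "Sy": 0.03, "Q": 0.4}

def head_drawdown(K, Sy, Q):
    """Invented surrogate standing in for the deterministic model."""
    return Q / (K * Sy)

samples = []
for signs in product((-1, 1), repeat=3):       # all 2^3 corner evaluations
    args = {n: means[n] + s * sigmas[n] for n, s in zip(means, signs)}
    samples.append(head_drawdown(**args))

m = mean(samples)
var = mean(x * x for x in samples) - m * m
print("mean drawdown ~ %.3f, coeff. of variation ~ %.2f" % (m, var**0.5 / m))
```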

  20. Statistics of Delta v magnitude for a trajectory correction maneuver containing deterministic and random components

    NASA Technical Reports Server (NTRS)

    Bollman, W. E.; Chadwick, C.

    1982-01-01

    A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of random Delta v standard deviations using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of nonzero deterministic Delta v's. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for single TCM's consisting of both a deterministic and a random component. The plots provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
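
    The Monte Carlo technique described is straightforward to reproduce in outline: draw random execution errors around the deterministic delta-v and tabulate the magnitude statistics. The bias vector and sigma below are invented.

```python
import numpy as np

# Monte Carlo sketch of |delta-v| statistics for a TCM that combines a
# deterministic (bias) component with a zero-mean random execution error.
rng = np.random.default_rng(42)
bias = np.array([2.0, 0.0, 0.0])         # deterministic delta-v, m/s (invented)
sigma = 0.5                               # isotropic 1-sigma of random part, m/s

samples = np.linalg.norm(bias + sigma * rng.standard_normal((100_000, 3)), axis=1)
print("mean |dv| = %.3f m/s" % samples.mean())
print("99th pct  = %.3f m/s" % np.percentile(samples, 99))
```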

  1. Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series

    NASA Astrophysics Data System (ADS)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2018-07-01

    In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics, but they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of the fractal processes in the wavelet domain. This method has been validated over simulated signals and over real signals with economical and biological origin. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems, and uncovering interesting patterns present in time series.

  2. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model

    PubMed Central

    Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.

    2018-01-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183

  3. Biobotic insect swarm based sensor networks for search and rescue

    NASA Astrophysics Data System (ADS)

    Bozkurt, Alper; Lobaton, Edgar; Sichitiu, Mihail; Hedrick, Tyson; Latif, Tahmid; Dirafzoon, Alireza; Whitmire, Eric; Verderber, Alexander; Marin, Juan; Xiong, Hong

    2014-06-01

    The potential benefits of distributed robotics systems in applications requiring situational awareness, such as search-and-rescue in emergency situations, are indisputable. The efficiency of such systems requires robotic agents capable of coping with uncertain and dynamic environmental conditions. For example, after an earthquake, a tremendous effort is spent for days to reach surviving victims, where robotic swarms or other distributed robotic systems might play a great role in achieving this faster. However, current technology falls short of offering centimeter-scale mobile agents that can function effectively under such conditions. Insects, the inspiration of many robotic swarms, exhibit an unmatched ability to navigate through such environments while successfully maintaining control and stability. We have benefitted from recent developments in neural engineering and neuromuscular stimulation research to fuse the locomotory advantages of insects with the latest developments in wireless networking technologies to enable biobotic insect agents to function as search-and-rescue agents. Our research efforts towards this goal include development of biobot electronic backpack technologies, establishment of biobot tracking testbeds to evaluate locomotion control efficiency, investigation of biobotic control strategies with Gromphadorhina portentosa cockroaches and Manduca sexta moths, establishment of a localization and communication infrastructure, modeling and controlling collective motion by learning deterministic and stochastic motion models, topological motion modeling based on these models, and the development of a swarm robotic platform to be used as a testbed for our algorithms.

  4. Spacecraft Attitude Maneuver Planning Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Kornfeld, Richard P.

    2004-01-01

    A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft where the constraints and initial and final conditions are of time-varying nature. This approach for attitude path planning makes full use of a priori constraint knowledge and is computationally tractable enough to be executed onboard a spacecraft. The approach is based on incorporating the constraints into a cost function and using a Genetic Algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used as is or as an initial solution to initialize additional deterministic optimization algorithms. A number of representative case examples for time-fixed and time-varying conditions yielded search times that are typically on the order of minutes, thus demonstrating the viability of this method. This approach is applicable to all deep space and planet Earth missions requiring greater spacecraft autonomy, and greatly facilitates navigation and science observation planning.
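
    A stripped-down version of the idea, with constraints folded into the cost function of a genetic algorithm, might look as follows; the two-dimensional angle space, keep-out cone, and GA settings are all invented and far simpler than flight software.

```python
import math
import random

# Toy GA for slew planning in a 2-D angle space (azimuth, elevation),
# with one keep-out cone folded into the cost; all numbers invented.
START, GOAL = (0.0, 0.0), (120.0, 30.0)
KEEPOUT, RADIUS = (60.0, 10.0), 15.0
N_WAY, POP, GENS = 3, 60, 150

def cost(waypoints):
    path = [START] + list(waypoints) + [GOAL]
    total = 0.0
    for (a1, e1), (a2, e2) in zip(path, path[1:]):
        total += math.hypot(a2 - a1, e2 - e1)          # slew "length"
        for s in (0.25, 0.5, 0.75):                    # samples along segment
            p = (a1 + s * (a2 - a1), e1 + s * (e2 - e1))
            if math.hypot(p[0] - KEEPOUT[0], p[1] - KEEPOUT[1]) < RADIUS:
                total += 200.0                         # constraint penalty
    return total

def random_ind(rng):
    return [(rng.uniform(0, 120), rng.uniform(-20, 50)) for _ in range(N_WAY)]

def evolve(seed=1):
    rng = random.Random(seed)
    pop = [random_ind(rng) for _ in range(POP)]
    for _ in range(GENS):
        nxt = []
        for _ in range(POP):
            a, b = (min(rng.sample(pop, 3), key=cost) for _ in range(2))  # tournaments
            child = [pa if rng.random() < 0.5 else pb for pa, pb in zip(a, b)]
            if rng.random() < 0.3:                     # gaussian mutation
                i = rng.randrange(N_WAY)
                child[i] = (child[i][0] + rng.gauss(0, 5),
                            child[i][1] + rng.gauss(0, 5))
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)

best = evolve()
print("cost %.1f via %s" % (cost(best), [(round(a), round(e)) for a, e in best]))
```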

  5. A modified conjugate gradient coefficient with inexact line search for unconstrained optimization

    NASA Astrophysics Data System (ADS)

    Aini, Nurul; Rivaie, Mohd; Mamat, Mustafa

    2016-11-01

    Conjugate gradient (CG) method is a line search algorithm mostly known for its wide application in solving unconstrained optimization problems. Its low memory requirements and global convergence properties make it one of the most preferred methods in real-life applications such as engineering and business. In this paper, we present a new CG method based on the AMR* and CD methods for solving unconstrained optimization functions. The resulting algorithm is proven to have both the sufficient descent and global convergence properties under an inexact line search. Numerical tests are conducted to assess the effectiveness of the new method in comparison to some previous CG methods. The results obtained indicate that our method is indeed superior.

  6. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.

  7. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive ground truth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. © 2003 Elsevier B.V. All rights reserved.
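
    A minimal frequency-domain illustration of deterministic deconvolution with a known source wavelet follows. The water-level stabilization and the synthetic Ricker-like wavelet are standard textbook choices, not details taken from the study.

      import numpy as np

      def deterministic_deconv(trace, wavelet, water=1e-2):
          """Frequency-domain deconvolution with a known source wavelet.
          A water-level term stabilizes the division where the wavelet
          spectrum is weak; 'water' is an illustrative regularization."""
          n = len(trace) + len(wavelet) - 1
          T = np.fft.rfft(trace, n)
          W = np.fft.rfft(wavelet, n)
          denom = np.maximum(np.abs(W)**2, water * np.max(np.abs(W))**2)
          return np.fft.irfft(T * np.conj(W) / denom, n)[:len(trace)]

      # synthetic check: reflectivity convolved with a "ringing" wavelet
      t = np.linspace(-1, 1, 41)
      wavelet = (1 - 2*(2*np.pi*t)**2) * np.exp(-(2*np.pi*t)**2)
      refl = np.zeros(200); refl[[60, 90, 140]] = [1.0, -0.5, 0.8]
      trace = np.convolve(refl, wavelet)          # "recorded" section
      est = deterministic_deconv(trace, wavelet)
      print("spikes recovered at samples:", sorted(np.argsort(np.abs(est))[-3:]))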

  8. Signatures of a globally optimal searching strategy in the three-dimensional foraging flights of bumblebees

    NASA Astrophysics Data System (ADS)

    Lihoreau, Mathieu; Ings, Thomas C.; Chittka, Lars; Reynolds, Andy M.

    2016-07-01

    Simulated annealing is a powerful stochastic search algorithm for locating a global maximum that is hidden among many poorer local maxima in a search space. It is frequently implemented in computers working on complex optimization problems but until now has not been directly observed in nature as a searching strategy adopted by foraging animals. We analysed high-speed video recordings of the three-dimensional searching flights of bumblebees (Bombus terrestris) made in the presence of large or small artificial flowers within a 0.5 m³ enclosed arena. Analyses of the three-dimensional flight patterns in both conditions reveal signatures of simulated annealing searches. After leaving a flower, bees tend to scan back and forth past that flower before making prospecting flights (loops), whose length increases over time. The search pattern becomes gradually more expansive and culminates when another rewarding flower is found. Bees then scan back and forth in the vicinity of the newly discovered flower and the process repeats. This looping search pattern, in which flight step lengths are typically power-law distributed, provides a relatively simple yet highly efficient strategy for pollinators such as bees to find the best-quality resources in complex environments made of multiple ephemeral feeding sites with nutritionally variable rewards.
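
    For readers unfamiliar with the algorithm named above, here is a textbook Metropolis-style annealer on a toy nectar landscape; the reward surface, cooling schedule and the tying of step length to temperature are invented for illustration and only loosely echo the bees' widening loops.

      import numpy as np

      rng = np.random.default_rng(1)

      def reward(x, y):
          """Toy 'flower field': one global and several local nectar peaks."""
          peaks = [(0.8, 0.2, 1.0), (0.2, 0.7, 0.6), (0.5, 0.5, 0.5)]
          return sum(a * np.exp(-((x - px)**2 + (y - py)**2) / 0.005)
                     for px, py, a in peaks)

      x, y = 0.2, 0.7          # start at a known (local) flower
      T, cooling = 1.0, 0.995  # temperature controls how expansive moves are
      best = (x, y, reward(x, y))
      for _ in range(5000):
          nx, ny = x + rng.normal(0, 0.1*T), y + rng.normal(0, 0.1*T)
          dr = reward(nx, ny) - reward(x, y)
          if dr > 0 or rng.random() < np.exp(dr / T):   # Metropolis acceptance
              x, y = nx, ny
              if reward(x, y) > best[2]:
                  best = (x, y, reward(x, y))
          T *= cooling
      print("best site found near:", best[:2])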

  9. Experimental search for Exact Coherent Structures in turbulent small aspect ratio Taylor-Couette flow

    NASA Astrophysics Data System (ADS)

    Crowley, Christopher J.; Krygier, Michael; Grigoriev, Roman O.; Schatz, Michael F.

    2017-11-01

    Recent theoretical and experimental work suggests that the dynamics of turbulent flows are guided by unstable nonchaotic solutions to the Navier-Stokes equations. These solutions, known as exact coherent structures (ECS), play a key role in a fundamentally deterministic description of turbulence. In order to quantitatively demonstrate that actual turbulence in 3D flows is guided by ECS, high resolution, 3D-3C experimental measurements of the velocity need to be compared to solutions from direct numerical simulation of the Navier-Stokes equations. In this talk, we will present experimental measurements of fully time resolved, velocity measurements in a volume of turbulence in a counter-rotating, small aspect ratio Taylor-Couette flow. This work is supported by the Army Research Office (Contract # W911NF-16-1-0281).

  10. Search | The University of Virginia

    Science.gov Websites


  11. SU-E-T-22: A Deterministic Solver of the Boltzmann-Fokker-Planck Equation for Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Paganetti, H

    2015-06-15

    Purpose: The Boltzmann-Fokker-Planck equation (BFPE) accurately models the migration of photons/charged particles in tissues. While the Monte Carlo (MC) method is popular for solving the BFPE in a statistical manner, we aim to develop a deterministic BFPE solver based on various state-of-the-art numerical acceleration techniques for rapid and accurate dose calculation. Methods: Our BFPE solver is based on a structured grid that is maximally parallelizable, with discretization in energy, angle and space, and its cross section coefficients are derived or directly imported from the Geant4 database. The physical processes that are taken into account are Compton scattering, the photoelectric effect and pair production for photons, and elastic scattering, ionization and bremsstrahlung for charged particles. While the spatial discretization is based on the diamond scheme, the angular discretization synergizes the finite element method (FEM) and spherical harmonics (SH). Thus, SH is used to globally expand the scattering kernel and FEM is used to locally discretize the angular sphere. As a result, this hybrid method (FEM-SH) is both accurate in dealing with forward-peaked scattering via FEM, and efficient for multi-energy-group computation via SH. In addition, FEM-SH enables the analytical integration in the energy variable of the delta scattering kernel for elastic scattering, with reduced truncation error compared to the numerical integration of the classic SH-based multi-energy-group method. Results: The accuracy of the proposed BFPE solver was benchmarked against Geant4 for photon dose calculation. In particular, FEM-SH had improved accuracy compared to FEM alone, while both were within 2% of the results obtained with Geant4. Conclusion: A deterministic solver of the Boltzmann-Fokker-Planck equation is developed for dose calculation, and benchmarked against Geant4. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)

  12. On a numerical solving of random generated hexamatrix games

    NASA Astrophysics Data System (ADS)

    Orlov, Andrei; Strekalovskiy, Alexander

    2016-10-01

    In this paper, we develop a global search method for finding a Nash equilibrium in a hexamatrix game (a polymatrix game of three players). The method is based, on the one hand, on a theorem establishing the equivalence between the problem of finding a Nash equilibrium in the game and a special mathematical optimization problem and, on the other hand, on the use of Global Search Theory for solving the latter problem. The efficiency of this approach is demonstrated by the results of computational testing.

  13. Estimating the epidemic threshold on networks by deterministic connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than the stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
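
    As a numerical illustration of bracketing a threshold from the deterministic part of a network (not the paper's exact construction), the sketch below uses the standard mean-field SIS estimate, threshold ≈ 1/λ_max(A). The ring backbone and the uniform connection probabilities are invented; the bracket follows because adding expected stochastic links can only increase the spectral radius.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 100

      # deterministic backbone: a ring of known contacts
      A_det = np.zeros((n, n))
      for i in range(n):
          A_det[i, (i + 1) % n] = A_det[(i + 1) % n, i] = 1

      # stochastic extra links with heterogeneous probabilities p_ij
      P = rng.uniform(0, 0.02, (n, n)); P = np.triu(P, 1); P = P + P.T
      A_mean = A_det + P   # expected adjacency of the full network

      # mean-field SIS threshold estimate: tau_c ~ 1 / spectral_radius(A)
      lam_det = np.max(np.linalg.eigvalsh(A_det))
      lam_full = np.max(np.linalg.eigvalsh(A_mean))
      print("threshold bracket: [%.3f, %.3f]" % (1/lam_full, 1/lam_det))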

  14. Experimental demonstration on the deterministic quantum key distribution based on entangled photons.

    PubMed

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-02-10

    As an important resource, entangled light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). There are few experiments implementing entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified "Ping-Pong" (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications.

  15. Experimental demonstration on the deterministic quantum key distribution based on entangled photons

    PubMed Central

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-01-01

    As an important resource, entangled light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). There are few experiments implementing entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified “Ping-Pong” (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications. PMID:26860582

  16. Epidemic spreading on adaptively weighted scale-free networks.

    PubMed

    Sun, Mengfeng; Zhang, Haifeng; Kang, Huiyan; Zhu, Guanghu; Fu, Xinchu

    2017-04-01

    We introduce three modified SIS models on scale-free networks that take into account variable population size, nonlinear infectivity, adaptive weights, behavior inertia and time delay, so as to better characterize the actual spread of epidemics. We develop new mathematical methods and techniques to study the dynamics of the models, including the basic reproduction number, and the global asymptotic stability of the disease-free and endemic equilibria. We show the disease-free equilibrium cannot undergo a Hopf bifurcation. We further analyze the effects of local information of diseases and various immunization schemes on epidemic dynamics. We also perform some stochastic network simulations which yield quantitative agreement with the deterministic mean-field approach.

  17. Deterministic Approach to the Kinetic Theory of Gases

    NASA Astrophysics Data System (ADS)

    Beck, József

    2010-02-01

    In the so-called Bernoulli model of the kinetic theory of gases, where (1) the particles are dimensionless points, (2) they are contained in a cube container, (3) no attractive or exterior forces act on them, (4) there are no collisions between the particles, (5) collisions against the walls of the container follow the law of elastic reflection, we deduce from Newtonian mechanics two local probabilistic laws: a Poisson limit law and a central limit theorem. We also prove a global law of large numbers, justifying that "density" and "pressure" are constant. Finally, as a byproduct of our research, we prove the surprising super-uniformity of the typical billiard path in a square.

  18. Multigroup SIR epidemic model with stochastic perturbation

    NASA Astrophysics Data System (ADS)

    Ji, Chunyan; Jiang, Daqing; Shi, Ningzhong

    2011-05-01

    In this paper, we discuss a multigroup SIR model with stochastic perturbation. We deduce the global asymptotic stability of the disease-free equilibrium when R0≤1, which means the disease will die out. On the other hand, when R0>1, we show that the disease will prevail, which is measured through the difference between the solution and the endemic equilibrium of the deterministic model in time average. Furthermore, we prove the system is persistent in the mean, which also reflects that the disease will prevail. The key to our analysis is choosing appropriate Lyapunov functions. Finally, we illustrate the dynamic behavior of the model with n=2 and its approximations via a range of numerical experiments.
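
    A minimal two-group SIR sketch with multiplicative noise, integrated by Euler-Maruyama, with R0 computed as the spectral radius of the next-generation matrix at the disease-free equilibrium. The parameter values and the noise form are illustrative assumptions, not the paper's model.

      import numpy as np

      rng = np.random.default_rng(3)

      # two-group SIR with demography: beta[i, j] = transmission j -> i
      beta = np.array([[0.30, 0.05], [0.05, 0.25]])
      gamma, mu, sigma = 0.10, 0.02, 0.02     # recovery, birth/death, noise
      dt, steps = 0.01, 40000

      S = np.array([0.99, 0.99]); I = np.array([0.01, 0.01])
      for _ in range(steps):
          force = beta @ I
          dW = rng.normal(0.0, np.sqrt(dt), 2)      # stochastic perturbation
          S = S + (mu - mu*S - S*force)*dt - sigma*S*I*dW
          I = I + (S*force - (gamma + mu)*I)*dt + sigma*S*I*dW

      # R0 = spectral radius of the next-generation matrix at S = 1
      R0 = np.max(np.abs(np.linalg.eigvals(beta / (gamma + mu))))
      print("R0 = %.2f, long-run infection levels: %s" % (R0, I.round(4)))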

  19. Apocalypse...now? Molecular epidemiology, predictive genetic tests, and social communication of genetic contents.

    PubMed

    Castiel, L D

    1999-01-01

    The author analyzes the underlying theoretical aspects in the construction of the molecular watershed of epidemiology and the concept of genetic risk, focusing on issues raised by contemporary reality: new technologies, globalization, proliferation of communications strategies, and the dilution of identity matrices. He discusses problems pertaining to the establishment of such new interdisciplinary fields as molecular epidemiology and molecular genetics. Finally, he analyzes the repercussions of the social communication of genetic content, especially as related to predictive genetic tests and cloning of animals, based on triumphal, deterministic metaphors sustaining beliefs relating to the existence and supremacy of concepts such as 'purity', 'essence', and 'unification' of rational, integrated 'I's/egos'.

  20. Research on particle swarm optimization algorithm based on optimal movement probability

    NASA Astrophysics Data System (ADS)

    Ma, Jianhong; Zhang, Han; He, Baofeng

    2017-01-01

    Particle swarm optimization (PSO) improves control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used for training feed-forward neural networks, its search efficiency is low and it easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. Particles are ranked by fitness so that the optimization problem is considered as a whole, and the error back-propagation gradient descent used to train the BP neural network is incorporated into the velocity and position updates, which otherwise follow each particle's individual best and the global best. Particles are made to learn more from the social (global) optimum and less from their individual optima, which helps them avoid falling into local optima, while the gradient information accelerates the local search ability of PSO and improves search efficiency. Simulation results show that the algorithm converges rapidly towards the global optimal solution in the initial stage and keeps approaching it thereafter; for the same running time it has faster convergence speed and better search performance, with particularly improved efficiency in the later stages of the search.
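
    A sketch of the hybrid idea: generic PSO with an added gradient-descent term in the velocity update and a social weight larger than the cognitive one. The sphere function stands in for the network training loss, and all coefficients are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)

      def f(x):      # sphere function as a stand-in training loss
          return np.sum(x**2, axis=-1)

      def grad(x):
          return 2*x

      POP, DIM, ITERS = 30, 5, 300
      w, c1, c2, eta = 0.7, 1.0, 2.0, 0.05   # c2 > c1: more social learning
      x = rng.uniform(-5, 5, (POP, DIM))
      v = np.zeros((POP, DIM))
      pbest = x.copy(); pbest_f = f(x)
      gbest = x[np.argmin(pbest_f)].copy()

      for _ in range(ITERS):
          r1, r2 = rng.random((2, POP, 1))
          v = (w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
               - eta*grad(x))               # gradient term sharpens local search
          x = x + v
          fx = f(x)
          better = fx < pbest_f
          pbest[better], pbest_f[better] = x[better], fx[better]
          gbest = pbest[np.argmin(pbest_f)].copy()

      print("best loss:", f(gbest))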

  1. Reliability assessment of slender concrete columns at the stability failure

    NASA Astrophysics Data System (ADS)

    Valašík, Adrián; Benko, Vladimír; Strauss, Alfred; Täubling, Benjamin

    2018-01-01

    The European standard for designing concrete columns with non-linear methods shows deficiencies in terms of global reliability when the columns fail by loss of stability. Buckling is a brittle failure which occurs without warning, and the probability of its occurrence depends on the column's slenderness. Experiments with slender concrete columns were carried out in cooperation with STRABAG Bratislava LTD in the Central Laboratory of the Faculty of Civil Engineering SUT in Bratislava. The following article aims to compare the global reliability of slender concrete columns with slenderness of 90 and higher. The columns were designed according to methods offered by EN 1992-1-1 [1]. The mentioned experiments were used as the basis for deterministic non-linear modelling of the columns and the subsequent probabilistic evaluation of the variability of the structural response. The final results may be utilized as thresholds for the loading of produced structural elements, and they aim to present probabilistic design as less conservative compared to classic partial-safety-factor-based design and the alternative ECOV method.

  2. Quantum computation over the butterfly network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soeda, Akihito; Kinjo, Yoshiyuki; Turner, Peter S.

    2011-07-15

    In order to investigate distributed quantum computation under restricted network resources, we introduce a quantum computation task over the butterfly network where both quantum and classical communications are limited. We consider deterministically performing a two-qubit global unitary operation on two unknown inputs given at different nodes, with outputs at two distinct nodes. By using a particular resource setting introduced by M. Hayashi [Phys. Rev. A 76, 040301(R) (2007)], which is capable of performing a swap operation by adding two maximally entangled qubits (ebits) between the two input nodes, we show that unitary operations can be performed without adding any entanglement resource, if and only if the unitary operations are locally unitary equivalent to controlled unitary operations. Our protocol is optimal in the sense that the unitary operations cannot be implemented if we relax the specifications of any of the channels. We also construct protocols for performing controlled traceless unitary operations with a 1-ebit resource and for performing global Clifford operations with a 2-ebit resource.

  3. A stochastic two-scale model for pressure-driven flow between rough surfaces

    PubMed Central

    Larsson, Roland; Lundström, Staffan; Wall, Peter; Almqvist, Andreas

    2016-01-01

    Seal surface topography typically consists of global-scale geometric features as well as local-scale roughness details and homogenization-based approaches are, therefore, readily applied. These provide for resolving the global scale (large domain) with a relatively coarse mesh, while resolving the local scale (small domain) in high detail. As the total flow decreases, however, the flow pattern becomes tortuous and this requires a larger local-scale domain to obtain a converged solution. Therefore, a classical homogenization-based approach might not be feasible for simulation of very small flows. In order to study small flows, a model allowing feasibly-sized local domains, for really small flow rates, is developed. Realization was made possible by coupling the two scales with a stochastic element. Results from numerical experiments, show that the present model is in better agreement with the direct deterministic one than the conventional homogenization type of model, both quantitatively in terms of flow rate and qualitatively in reflecting the flow pattern. PMID:27436975

  4. Sustainability or collapse: what can we learn from integrating the history of humans and the rest of nature?

    PubMed

    Costanza, Robert; Graumlich, Lisa; Steffen, Will; Crumley, Carole; Dearing, John; Hibbard, Kathy; Leemans, Rik; Redman, Charles; Schimel, David

    2007-11-01

    Understanding the history of how humans have interacted with the rest of nature can help clarify the options for managing our increasingly interconnected global system. Simple, deterministic relationships between environmental stress and social change are inadequate. Extreme drought, for instance, triggered both social collapse and ingenious management of water through irrigation. Human responses to change, in turn, feed into climate and ecological systems, producing a complex web of multidirectional connections in time and space. Integrated records of the co-evolving human-environment system over millennia are needed to provide a basis for a deeper understanding of the present and for forecasting the future. This requires the major task of assembling and integrating regional and global historical, archaeological, and paleoenvironmental records. Humans cannot predict the future. But, if we can adequately understand the past, we can use that understanding to influence our decisions and to create a better, more sustainable and desirable future.

  5. A robust and efficient stepwise regression method for building sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be; Raisee, Mehrdad; Ghorbaniasl, Ghader

    2017-03-01

    Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable, as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimensions. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools relevant to probabilistic methods. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method by repeating the analyses using random experimental designs.

  6. National Centers for Environmental Prediction

    Science.gov Websites

    : Influence of convective parameterization on the systematic errors of Climate Forecast System (CFS) model; Climate Dynamics, 41, 45-61, 2013. Saha, S., S. Pokhrel and H. S. Chaudhari: Influence of Eurasian snow

  7. The spreading dynamics of sexually transmitted diseases with birth and death on heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Cao, Jinde; Alsaedi, Ahmed; Hayat, Tasawar

    2017-02-01

    In this paper, we formulate a deterministic model by including the vacant sites, which represent inactive individuals or potential contacts, to investigate the spreading dynamics of sexually transmitted diseases in heterogeneous networks. We first analytically derive the basic reproduction number R0, which completely determines the global dynamics of the system in the long run. Specifically, if R0 < 1, the disease-free equilibrium is globally asymptotically stable, i.e. the disease disappears from the network irrespective of the initial infected numbers and distributions, whereas if R0 > 1, the system is uniformly persistent around a unique endemic equilibrium, i.e. the disease persists in the network. Furthermore, by using a suitable Lyapunov function the global stability of the endemic equilibrium for low/high-risk infected individuals only is proved. Finally, the effects of three immunization schemes are studied and compared, and extensive numerical simulations are performed to investigate the effect of network topology and population turnover on disease spread. Our results suggest that population turnover could have a great impact on sexually transmitted disease systems in heterogeneous networks, including the basic reproduction number and infection prevalence.

  8. Statistical Maps of Ground Magnetic Disturbance Derived from Global Geospace Models

    NASA Astrophysics Data System (ADS)

    Rigler, E. J.; Wiltberger, M. J.; Love, J. J.

    2017-12-01

    Electric currents in space are the principal driver of magnetic variations measured at Earth's surface. These in turn induce geoelectric fields that present a natural hazard for technological systems like high-voltage power distribution networks. Modern global geospace models can reasonably simulate large-scale geomagnetic response to solar wind variations, but they are less successful at deterministic predictions of intense localized geomagnetic activity that most impacts technological systems on the ground. Still, recent studies have shown that these models can accurately reproduce the spatial statistical distributions of geomagnetic activity, suggesting that their physics are largely correct. Since the magnetosphere is a largely externally driven system, most model-measurement discrepancies probably arise from uncertain boundary conditions. So, with realistic distributions of solar wind parameters to establish its boundary conditions, we use the Lyon-Fedder-Mobarry (LFM) geospace model to build a synthetic multivariate statistical model of gridded ground magnetic disturbance. From this, we analyze the spatial modes of geomagnetic response, regress on available measurements to fill in unsampled locations on the grid, and estimate the global probability distribution of extreme magnetic disturbance. The latter offers a prototype geomagnetic "hazard map", similar to those used to characterize better-known geophysical hazards like earthquakes and floods.

  9. Solar variability: Implications for global change

    NASA Technical Reports Server (NTRS)

    Lean, Judith; Rind, David

    1994-01-01

    Solar variability is examined in search of implications for global change. The topics covered include the following: solar variation modification of global surface temperature; the significance of solar variability with respect to future climate change; and methods of reducing the uncertainty of the potential amplitude of solar variability on longer time scales.

  10. The Globalization Classroom: New Option for Becoming More Human?

    ERIC Educational Resources Information Center

    Svetelj, Tony

    2014-01-01

    Within a multi-cultural and multi-religious society, exposed to the challenges of globalization, a traditional understanding of humanism offers insufficient frameworks for an adequate comprehension of human agency, its flourishing and search for meaning. The process of globalization continuously shakes the pedagogical assumptions and principles of…

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Jeff; Cornish, Neil J.; Reddinger, J. Lucas

    This work presents the first application of the method of genetic algorithms (GAs) to data analysis for the Laser Interferometer Space Antenna (LISA). In the low frequency regime of the LISA band there are expected to be tens of thousands of galactic binary systems that will be emitting gravitational waves detectable by LISA. The challenge of parameter extraction of such a large number of sources in the LISA data stream requires a search method that can efficiently explore the large parameter spaces involved. As signals of many of these sources will overlap, a global search method is desired. GAs represent such a global search method for parameter extraction of multiple overlapping sources in the LISA data stream. We find that GAs are able to correctly extract source parameters for overlapping sources. Several optimizations of a basic GA are presented, with results derived from applications of the GA searches to simulated LISA data.

  12. When being narrow minded is a good thing: locally biased people show stronger contextual cueing.

    PubMed

    Bellaera, Lauren; von Mühlenen, Adrian; Watson, Derrick G

    2014-01-01

    Repeated contexts allow us to find relevant information more easily. Learning such contexts has been proposed to depend upon either global processing of the repeated context or processing of the local region surrounding the target information. In this study, we measured the extent to which observers were biased by default towards processing at a more global or a more local level. The findings showed that the ability to use context to guide search was strongly related to an observer's local/global processing bias. Locally biased people could use context to improve their search better than globally biased people. The results suggest that the extent to which context can be used depends crucially on the observer's attentional bias, and thus also on factors and influences that can change this bias.

  13. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems

    PubMed Central

    Cao, Leilei; Xu, Lihong; Goodman, Erik D.

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421
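
    A compact rendering of the mechanics the abstract lists (crossover with the current global best, a dynamic mutation probability, and a probabilistic local probe) on the Rastrigin benchmark; the rates and schedules are invented, so this is a sketch of the idea rather than the authors' GEA.

      import numpy as np

      rng = np.random.default_rng(5)

      def f(x):                 # Rastrigin: a standard multimodal benchmark
          return 10*len(x) + np.sum(x**2 - 10*np.cos(2*np.pi*x))

      POP, DIM, GENS = 40, 5, 300
      pop = rng.uniform(-5.12, 5.12, (POP, DIM))
      best = min(pop, key=f)

      for g in range(GENS):
          p_mut = 0.3 * (1 - g/GENS)          # dynamic mutation probability
          p_loc = 0.1 * (g/GENS)              # local search grows more likely
          new = []
          for ind in pop:
              mask = rng.random(DIM) < 0.5
              child = np.where(mask, ind, best)        # cross with global best
              if rng.random() < p_mut:
                  child = child + rng.normal(0, 0.3, DIM)
              if rng.random() < p_loc:                 # cheap local probe
                  trial = child + rng.normal(0, 0.01, DIM)
                  if f(trial) < f(child):
                      child = trial
              new.append(child)
          pop = np.array(new)
          cand = min(pop, key=f)
          if f(cand) < f(best):
              best = cand

      print("best value:", f(best))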

  14. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems.

    PubMed

    Cao, Leilei; Xu, Lihong; Goodman, Erik D

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared.

  15. Characterization of normality of chaotic systems including prediction and detection of anomalies

    NASA Astrophysics Data System (ADS)

    Engler, Joseph John

    Accurate prediction and control pervades domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented by linear, periodic nor random data. It has been shown that these systems exhibit deterministic chaos behavior. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear to be random or quasi-periodic distributions. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions manifested through rapid divergence of states initially close to one another. Due to this characterization, it has been deemed impossible to accurately predict future states of these systems for longer time scales. Fortunately, the deterministic nature of these systems allows for accurate short term predictions, given the dynamics of the system are well understood. This fact has been exploited in the research community and has resulted in various algorithms for short term predictions. Detection of normality in deterministically chaotic systems is critical in understanding the system sufficiently to able to predict future states. Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may result in bifurcation of the normal states, further complicates the problem. The detection of anomalies and prediction of future states of the chaotic system allows for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detection of anomalous behavior, and the more accurate prediction of future states of the system. Additionally, the ability to detect subtle system state changes is discussed. The dissertation addresses these goals by proposing new representational techniques and novel prediction methodologies. The value and efficiency of these methods are explored in various case studies. Presented is an overview of chaotic systems with examples taken from the real world. A representation schema for rapid understanding of the various states of deterministically chaotic systems is presented. This schema is then used to detect anomalies and system state changes. Additionally, a novel prediction methodology which utilizes Lyapunov exponents to facilitate longer term prediction accuracy is presented and compared with other nonlinear prediction methodologies. These novel methodologies are then demonstrated on applications such as wind energy, cyber security and classification of social networks.
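
    A small, self-contained illustration of why deterministic chaos permits only short-term prediction: the largest Lyapunov exponent of the logistic map, and the usual back-of-envelope prediction horizon it implies. This is a generic example, not the dissertation's methodology.

      import numpy as np

      def lyapunov_logistic(r=4.0, n=100000, x0=0.2):
          """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
          estimated as the average log of the local stretching rate."""
          x, total = x0, 0.0
          for _ in range(n):
              total += np.log(abs(r * (1 - 2*x)))
              x = r * x * (1 - x)
          return total / n

      lam = lyapunov_logistic()
      print("lambda ~ %.3f (ln 2 = %.3f)" % (lam, np.log(2)))

      # rule of thumb: an initial error eps grows like eps * exp(lambda * t),
      # so the horizon for tolerance 'tol' is roughly log(tol/eps) / lambda
      eps, tol = 1e-8, 1e-2
      print("useful prediction horizon ~ %.0f steps" % (np.log(tol/eps) / lam))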

  16. Construction of nested maximin designs based on successive local enumeration and modified novel global harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin

    2017-01-01

    Engineering design often involves different types of simulation, which results in expensive computational costs. Variable-fidelity approximation-based design optimization approaches can realize effective simulation and efficient optimization of the design space using approximation models with different levels of fidelity, and have been widely used in different fields. The selection of sample points for variable-fidelity approximation, through so-called nested designs, is essential, as these designs are the foundation of variable-fidelity approximation models. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
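
    As a simple stand-in for the construction above (neither successive local enumeration nor harmony search is reproduced here), the sketch below builds a maximin Latin hypercube by brute-force restarts; it only shows what the maximin criterion optimizes, and the nesting of the high-fidelity subset is omitted.

      import numpy as np
      from scipy.spatial.distance import pdist

      rng = np.random.default_rng(6)

      def lhs(n, d):
          """Random Latin hypercube: one point in each axis-aligned stratum."""
          u = np.column_stack([rng.permutation(n) for _ in range(d)])
          return (u + rng.random((n, d))) / n

      def maximin_lhs(n, d, tries=200):
          """Keep the Latin hypercube whose minimum pairwise distance is
          largest; a brute-force stand-in for SLE / harmony-search machinery."""
          best, best_score = None, -np.inf
          for _ in range(tries):
              X = lhs(n, d)
              score = pdist(X).min()
              if score > best_score:
                  best, best_score = X, score
          return best

      low = maximin_lhs(20, 2)    # e.g. a low-fidelity design
      print("min pairwise distance:", pdist(low).min().round(3))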

  17. A novel global Harmony Search method based on Ant Colony Optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi

    2016-03-01

    The Global-best Harmony Search (GHS) is a stochastic optimisation algorithm recently developed, which hybridises the Harmony Search (HS) method with the concept of swarm intelligence in the particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which is different from that of the GHS in the following aspects. (i) A modified harmony memory (HM) representation and conception. (ii) The use of a global random switching mechanism to monitor the choice between the ACO and GHS. (iii) An additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.
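
    For readers unfamiliar with the baseline being modified, a minimal global-best-flavoured harmony search improvisation loop is sketched below; the HMCR/PAR values and the sphere objective are illustrative, and the ACO coupling of GHSACO is not reproduced.

      import numpy as np

      rng = np.random.default_rng(7)

      def f(x):
          return np.sum(x**2)     # sphere benchmark

      HMS, DIM, ITERS = 10, 5, 2000
      HMCR, PAR = 0.9, 0.3
      hm = rng.uniform(-5, 5, (HMS, DIM))       # harmony memory
      fit = np.apply_along_axis(f, 1, hm)

      for _ in range(ITERS):
          new = np.empty(DIM)
          best = hm[np.argmin(fit)]
          for j in range(DIM):
              if rng.random() < HMCR:           # memory consideration
                  new[j] = hm[rng.integers(HMS), j]
                  if rng.random() < PAR:        # "global-best" pitch
                      new[j] = best[j]          # adjustment, GHS-style
              else:
                  new[j] = rng.uniform(-5, 5)   # random selection
          if f(new) < fit.max():                # replace the worst harmony
              worst = np.argmax(fit)
              hm[worst], fit[worst] = new, f(new)

      print("best:", fit.min())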

  18. QuickVina: accelerating AutoDock Vina using gradient-based heuristics for global optimization.

    PubMed

    Handoko, Stephanus Daniel; Ouyang, Xuchang; Su, Chinh Tran To; Kwoh, Chee Keong; Ong, Yew Soon

    2012-01-01

    Predicting binding between a macromolecule and a small molecule is a crucial phase in the field of rational drug design. AutoDock Vina, one of the most widely used docking programs, released in 2009, uses an empirical scoring function to evaluate the binding affinity between the molecules and employs the iterated local search global optimizer for global optimization, achieving a significantly improved speed and better accuracy of binding mode prediction compared to its predecessor, AutoDock 4. In this paper, we propose a further improvement in the local search algorithm of Vina by heuristically preventing some intermediate points from undergoing local search. Our improved version of Vina, dubbed QVina, achieved a maximum acceleration of about 25 times, with an average speed-up of 8.34 times compared to the original Vina when tested on a set of 231 protein-ligand complexes, while keeping the optimal scores mostly identical.

  19. Global Emergency Medicine: A Review of the Literature From 2015.

    PubMed

    Becker, Torben K; Hansoti, Bhakti; Bartels, Susan; Bisanzo, Mark; Jacquet, Gabrielle A; Lunney, Kevin; Marsh, Regan; Osei-Ampofo, Maxwell; Trehan, Indi; Lam, Christopher; Levine, Adam C

    2016-10-01

    The Global Emergency Medicine Literature Review (GEMLR) conducts an annual search of peer-reviewed and gray literature relevant to global emergency medicine (EM) to identify, review, and disseminate the most important new research in this field to a global audience of academics and clinical practitioners. This year 12,435 articles written in six languages were identified by our search. These articles were distributed among 20 reviewers for initial screening based on their relevance to the field of global EM. An additional two reviewers searched the gray literature. A total of 723 articles met our predetermined inclusion criteria, were deemed appropriate by at least one reviewer and approved by their editor for formal scoring of overall quality and importance, and underwent full review, with two independent reviewers scoring each article. Sixty percent were categorized as emergency care in resource-limited settings (ECRLS), 17% as EM development (EMD), and 23% as disaster and humanitarian response (DHR). Twenty-four articles received scores of 18.5 or higher out of a maximum score of 20 and were selected for formal summary and critique. Inter-rater reliability between reviewers gave an intraclass correlation coefficient of 0.71 (95% confidence interval = 0.66 to 0.75). Studies and reviews with a focus on infectious diseases, trauma, and the diagnosis and treatment of diseases common in resource-limited settings represented the majority of articles selected for final review. In 2015, almost twice as many articles were found by our search compared to the 2014 review. The number of EMD articles increased, while the number of ECRLS articles decreased. The number of DHR articles remained stable. As in prior years, the majority of articles focused on infectious diseases. © 2016 by the Society for Academic Emergency Medicine.

  20. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    PubMed

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.

  1. Testing calibration routines for LISFLOOD, a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Pannemans, B.

    2009-04-01

    Traditionally, hydrological models are considered difficult to calibrate: their high non-linearity results in rugged response surfaces where calibration algorithms easily get stuck in local minima. For the calibration of distributed hydrological models two extra factors play an important role: on the one hand they are often computationally costly, thus restricting the feasible number of model runs; on the other hand their distributed nature smooths the response surface, thus facilitating the search for a global minimum. Lisflood is a distributed hydrological model currently used for the European Flood Alert System - EFAS (Van der Knijff et al., 2008). Its upcoming recalibration over more than 200 catchments, each with an average runtime of 2-3 minutes, proved a perfect occasion to put several existing calibration algorithms to the test. The tested routines are Downhill Simplex (DHS, Nelder and Mead, 1965), SCEUA (Duan et al., 1993), SCEM (Vrugt et al., 2003) and AMALGAM (Vrugt et al., 2008), and they were evaluated on their capability to efficiently converge onto the global minimum and on the spread in the solutions found in repeated runs. The routines were let loose on a simple hyperbolic function, on a Lisflood catchment using model output as observation, and on two Lisflood catchments using real observations (one on the river Inn in the Alps, the other along the downstream stretch of the Elbe). On the mathematical problem and on the catchment with synthetic observations DHS proved to be the fastest and the most efficient in finding a solution. SCEUA and AMALGAM are slower, but while SCEUA keeps converging on the exact solution, AMALGAM slows down after about 600 runs. For the Lisflood models with real-time observations AMALGAM (a hybrid algorithm that combines several other algorithms; we used CMA, PSO and GA) came out of the tests as the fastest, giving comparable results in consecutive runs; however, some more work is needed to tweak the stopping criteria. SCEUA is a bit slower, but has very transparent stopping rules. Both have closed in on the minima after about 600 runs. DHS only equals SCEUA in convergence speed; the stopping criteria we applied so far are too strict, causing it to stop too early. SCEM converges 5-6 times slower, which is a high price for the parameter uncertainty analysis that is simultaneously done. The ease with which all algorithms find the same optimum suggests that we are dealing with a smooth and relatively simple response surface. This leaves room for other deterministic calibration algorithms that are smarter than DHS in sliding downhill. PEST seems promising, but so far we haven't managed to get it running with LISFLOOD. • Duan, Q.; Gupta, V. & Sorooshian, S., 1993, Shuffled complex evolution approach for effective and efficient global minimization, J Optim Theory Appl, Kluwer Academic Publishers-Plenum Publishers, 76, 501-521 • Nelder, J. & Mead, R., 1965, A simplex method for function minimization, Comput. J., 7, 308-313 • Van Der Knijff, J. M.; Younis, J. & De Roo, A. P. J., 2008, LISFLOOD: a GIS-based distributed model for river basin scale water balance and flood simulation, International Journal of Geographical Information Science • Vrugt, J.; Gupta, H.; Bouten, W. & Sorooshian, S., 2003, A Shuffled Complex Evolution Metropolis algorithm for optimization and uncertainty assessment of hydrologic model parameters, Water Resour. Res., 39 • Vrugt, J.; Robinson, B. & Hyman, J., 2008, Self-Adaptive Multimethod Search for Global Optimization in Real-Parameter Spaces, IEEE Trans Evol Comput, IEEE

  2. Hungarian contribution to the Global Soil Organic Carbon Map (GSOC17) using advanced machine learning algorithms and geostatistics

    NASA Astrophysics Data System (ADS)

    Szatmári, Gábor; Laborczi, Annamária; Takács, Katalin; Pásztor, László

    2017-04-01

    The knowledge about soil organic carbon (SOC) baselines and changes, and the detection of vulnerable hot spots for SOC losses and gains under climate change and changed land management, is still fairly limited. Thus the Global Soil Partnership (GSP) has been requested to develop a global SOC mapping campaign by 2017. GSP's concept builds on official national data sets; therefore, a bottom-up (country-driven) approach is pursued. The elaborated Hungarian methodology suits the general specifications of GSOC17 provided by GSP. The input data for the GSOC17@HU mapping approach have involved legacy soil data bases, as well as proper environmental covariates related to the main soil forming factors, such as climate, organisms, relief and parent material. Nowadays, digital soil mapping (DSM) highly relies on the assumption that soil properties of interest can be modelled as a sum of a deterministic and a stochastic component, which can be treated and modelled separately. We also adopted this assumption in our methodology. In practice, multiple regression techniques are commonly used to model the deterministic part. However, these global (and usually linear) models commonly oversimplify the often complex and non-linear relationship, which has a crucial effect on the resulting soil maps. Thus, we integrated machine learning algorithms (namely random forest and quantile regression forest) in the elaborated methodology, supposing them to be more suitable for the problem at hand. This approach has enabled us to model the GSOC17 soil properties in forms as complex and non-linear as the soil itself. Furthermore, it has enabled us to model and assess the uncertainty of the results, which is highly relevant in decision making. The applied methodology used a geostatistical approach to model the stochastic part of the spatial variability of the soil properties of interest. We created the GSOC17@HU map with 1 km grid resolution according to the GSP's specifications. The map contributes to the GSP's GSOC17 proposals, as well as to the development of a global soil information system under GSP Pillar 4 on soil data and information. Furthermore, we elaborated the accompanying code (created in the R software environment) in such a way that it can be improved, specified and applied for further uses. Hence, it opens the door to creating countrywide maps with higher grid resolution for SOC (or other soil-related properties) using the advanced methodology, as well as to contributing to and supporting SOC-related (or other soil-related) country-level decision making. Our paper will present the soil mapping methodology itself, the resulting GSOC17@HU map, and some of our conclusions drawn from the experiences and their effects on further uses. Acknowledgement: Our work was supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).

  3. Controllability of Deterministic Networks with the Identical Degree Sequence

    PubMed Central

    Ma, Xiujuan; Zhao, Haixing; Wang, Binghong

    2015-01-01

    Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network is dependent on the network's topology. Liu, Barabási et al. speculated that the degree distribution was one of the most important factors affecting controllability for arbitrary complex directed networks with random link weights. In this paper, we analysed the effect of the degree distribution on the controllability of unweighted, undirected deterministic networks. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analysed the controllability of two deterministic networks ((1, 3)-flower and (2, 2)-flower) in detail by exact controllability theory and give accurate results for the minimum number of driver nodes for the two networks. In simulation, we compare the controllability of (x,y)-flower networks. Our results show that the family of (x,y)-flower networks have the same degree sequence, but their controllability is totally different. So the degree distribution by itself is not sufficient to characterize the controllability of unweighted, undirected deterministic networks. PMID:26020920
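
    A small numeric sketch of exact controllability for undirected, unweighted networks, where the minimum number of driver nodes equals the maximum geometric multiplicity over the adjacency eigenvalues; the star-graph example is ours, not from the paper.

      import numpy as np

      def min_driver_nodes(A, tol=1e-8):
          """Exact controllability for an undirected, unweighted network:
          N_D = max over eigenvalues lambda of (N - rank(lambda*I - A))."""
          n = A.shape[0]
          eigs = np.unique(np.round(np.linalg.eigvalsh(A), 6))
          return max(n - np.linalg.matrix_rank(l*np.eye(n) - A, tol=tol)
                     for l in eigs)

      # star graph: hub plus 4 leaves; the degenerate leaves force N_D = 3
      A = np.zeros((5, 5)); A[0, 1:] = A[1:, 0] = 1
      print(min_driver_nodes(A))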

  4. Inverse kinematic problem for a random gradient medium in geometric optics approximation

    NASA Astrophysics Data System (ADS)

    Petersen, N. V.

    1990-03-01

    Scattering at random inhomogeneities in a gradient medium results in systematic deviations of the rays and travel times of refracted body waves from those corresponding to the deterministic velocity component. The character of the difference depends on the parameters of the deterministic and random velocity components. However, at great distances to the source, independently of the velocity parameters (weakly or strongly inhomogeneous medium), the most probable depth of the ray turning point is smaller than that corresponding to the deterministic velocity component, the most probable travel times also being lower. The relative uncertainty in the deterministic velocity component, derived from the mean travel times using methods developed for laterally homogeneous media (for instance, the Herglotz-Wiechert method), is systematic in character, but does not exceed the contrast of velocity inhomogeneities in magnitude. The gradient of the deterministic velocity component has a significant effect on the travel-time fluctuations. The variance at great distances to the source is mainly controlled by shallow inhomogeneities. The travel-time fluctuations are studied only for weakly inhomogeneous media.

  5. GlobiPack v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe

    2010-03-31

    GlobiPack contains a small collection of optimization globalization algorithms. These algorithms are used by optimization and various nonlinear equation solver algorithms, serving as the line-search procedure for Newton and quasi-Newton optimization and nonlinear equation solver methods. These are standard published 1-D line search algorithms such as those described in Nocedal and Wright, Numerical Optimization, 2nd edition, 2006. One set of algorithms was copied and refactored from the existing open-source Trilinos package MOOCHO, where the line search code is used to globalize SQP methods. This software is generic to any mathematical optimization problem where smooth derivatives exist. There is no specific connection or mention whatsoever of any specific application. You cannot find more general mathematical software.
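
    A generic backtracking (Armijo) line search of the textbook kind the record refers to is sketched below; this illustrates the idea, not GlobiPack's actual interface, and the quadratic example is invented.

      def backtracking_armijo(f, grad_fx, x, d, alpha0=1.0, c=1e-4, rho=0.5):
          """Textbook backtracking line search: shrink alpha until the
          Armijo sufficient-decrease condition holds along direction d."""
          fx = f(x)
          slope = sum(g*di for g, di in zip(grad_fx, d))  # directional derivative
          alpha = alpha0
          while f([xi + alpha*di for xi, di in zip(x, d)]) > fx + c*alpha*slope:
              alpha *= rho
          return alpha

      # quadratic example: f(x) = x1^2 + 10*x2^2, steepest-descent direction
      f = lambda x: x[0]**2 + 10*x[1]**2
      x = [2.0, 1.0]
      g = [2*x[0], 20*x[1]]
      d = [-gi for gi in g]
      print(backtracking_armijo(f, g, x, d))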

  6. Global search and rescue - A new concept. [orbital digital radar system with passive reflectors

    NASA Technical Reports Server (NTRS)

    Sivertson, W. E., Jr.

    1976-01-01

    A new terrestrial search and rescue concept is defined embodying the use of simple passive radiofreqeuncy reflectors in conjunction with a low earth-orbiting, all-weather, synthetic aperture radar to detect, identify, and position locate earth-bound users in distress. Users include ships, aircraft, small boats, explorers, hikers, etc. Airborne radar tests were conducted to evaluate the basic concept. Both X-band and L-band, dual polarization radars were operated simultaneously. Simple, relatively small, corner-reflector targets were successfully imaged and digital data processing approaches were investigated. Study of the basic concept and evaluation of results obtained from aircraft flight tests indicate an all-weather, day or night, global search and rescue system is feasible.

  7. Quasi-Static Probabilistic Structural Analyses Process and Criteria

    NASA Technical Reports Server (NTRS)

    Goldberg, B.; Verderaime, V.

    1999-01-01

    Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insight and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout prevailing deterministic stress processes leads to a safety factor in statistical format which, integrated into the safety index, provides a safety factor and first-order reliability relationship. The safety factor embedded in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with the traditional safety factor methods and NASA Std. 5001 criteria.
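
    The safety-index relationship mentioned above can be made concrete with the standard two-variable first-order reliability formula; the strength and stress numbers below are invented for illustration.

      from math import sqrt, erf

      # First-order reliability with normal strength R and stress S:
      # safety index beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2),
      # failure probability P_f = Phi(-beta). Values are illustrative only.
      mu_R, sd_R = 60.0, 5.0      # strength
      mu_S, sd_S = 40.0, 4.0      # stress

      beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)
      Pf = 0.5 * (1 + erf(-beta / sqrt(2)))   # standard normal CDF at -beta
      SF = mu_R / mu_S                        # central safety factor embedded
                                              # in the index
      print("beta = %.2f, P_f = %.2e, central safety factor = %.2f"
            % (beta, Pf, SF))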

  8. Effect of Uncertainty on Deterministic Runway Scheduling

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam; Malik, Waqar; Jung, Yoon C.

    2012-01-01

    Active runway scheduling involves scheduling departures for takeoffs and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-serve (FCFS) scheme. In particular, the sequence from a deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison is done for both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. Uncertainty is modeled in two ways: as uncertainty in runway availability that is equal for all aircraft, and as uncertainty that increases for aircraft scheduled later. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
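
    The frozen-sequence approach can be sketched in a few lines: the deterministic sequence is kept fixed while operation times are recomputed to satisfy a minimum separation, and the result is compared with FCFS ordering on the perturbed availability times. All numbers below (separation, uncertainty magnitude) are illustrative, not the paper's values.

```python
import random

SEP = 90  # required separation between consecutive runway operations, seconds (illustrative)

def schedule(sequence, ready_times):
    """Given a fixed sequence, assign the earliest times satisfying readiness and separation."""
    t, times = -float("inf"), {}
    for ac in sequence:
        t = max(ready_times[ac], t + SEP)
        times[ac] = t
    return times

random.seed(1)
planned = {i: 60 * i for i in range(10)}                           # nominal ready times
actual = {i: t + random.gauss(0, 30) for i, t in planned.items()}  # uncertain availability

fcfs_seq = sorted(actual, key=actual.get)     # first-come-first-serve on realized times
frozen_seq = sorted(planned, key=planned.get) # deterministic sequence, frozen in advance

for name, seq in [("FCFS", fcfs_seq), ("frozen", frozen_seq)]:
    times = schedule(seq, actual)
    delay = sum(times[ac] - actual[ac] for ac in times)
    print(name, "total delay:", round(delay, 1))
```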

  9. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    PubMed

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  10. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task

    PubMed Central

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-01-01

    When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
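
    A minimal sketch of the priority-map computation described above: toy feature maps are combined under target-specific top-down gains and then divisively normalized by pooled local activity, so that a single highly salient feature cannot monopolize attention. The map sizes, weights, and normalization constant are illustrative, not the model's fitted values.

```python
import numpy as np

def priority_map(feature_maps, target_weights, sigma=0.1):
    """Combine feature maps with target-specific top-down gains, then apply
    divisive normalization so salient bottom-up features cannot dominate."""
    modulated = sum(w * fm for w, fm in zip(target_weights, feature_maps))
    pooled = np.sqrt(sum(fm**2 for fm in feature_maps))  # local pooled activity
    return modulated / (sigma + pooled)

rng = np.random.default_rng(0)
maps = [rng.random((32, 32)) for _ in range(4)]          # toy retinotopic feature maps
weights = [0.1, 0.9, 0.2, 0.1]                           # top-down modulation for the target
p = priority_map(maps, weights)
fix_y, fix_x = np.unravel_index(np.argmax(p), p.shape)   # locus of attention = map maximum
print("next fixation at", (fix_y, fix_x))
```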

  11. More than Just Finding Color: Strategy in Global Visual Search Is Shaped by Learned Target Probabilities

    ERIC Educational Resources Information Center

    Williams, Carrick C.; Pollatsek, Alexander; Cave, Kyle R.; Stroud, Michael J.

    2009-01-01

    In 2 experiments, eye movements were examined during searches in which elements were grouped into four 9-item clusters. The target (a red or blue "T") was known in advance, and each cluster contained different numbers of target-color elements. Rather than color composition of a cluster invariantly guiding the order of search through…

  12. On the Local Convergence of Pattern Search

    NASA Technical Reports Server (NTRS)

    Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.
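
    The class of algorithms analyzed here can be sketched in its simplest (compass search) form: the pattern is polled along coordinate directions, and the step-length control parameter is shrunk only when no poll direction improves, so the final step length can serve as the stopping criterion whose use the paper justifies. This is a generic illustration, not the paper's exact pattern.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, shrink=0.5):
    """Coordinate (compass) pattern search: poll +/- each coordinate direction;
    shrink the step only when no poll point improves.  The final step length is
    the asymptotic measure of first-order stationarity used as a stopping rule."""
    x = np.asarray(x0, dtype=float)
    n, fx = x.size, f(np.asarray(x0, dtype=float))
    while step > tol:
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            trial = x + step * d
            if f(trial) < fx:
                x, fx = trial, f(trial)
                break
        else:                    # no poll direction improved: shrink the step
            step *= shrink
    return x, fx, step

x, fx, step = compass_search(lambda v: (v[0] - 1)**2 + 4 * (v[1] + 2)**2, [5.0, 5.0])
print(x, fx, step)   # near (1, -2); a small final step signals near-stationarity
```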

  13. [Study on Abnormal Topological Properties of Structural Brain Networks of Patients with Depression Comorbid with Anxiety].

    PubMed

    Wu, Xiuyong; Wu, Xiaoming; Peng, Hongjun; Ning, Yuping; Wu, Kai

    2016-06-01

    This paper aims to analyze the topological properties of structural brain networks in depressive patients with and without anxiety and to explore the neuropathological mechanisms of depression comorbid with anxiety. Diffusion tensor imaging and deterministic tractography were applied to map the white matter structural networks. We collected 20 depressive patients with anxiety (DPA), 18 depressive patients without anxiety (DP), and 28 normal controls (NC) as comparative groups. The global and nodal properties of the structural brain networks in the three groups were analyzed with graph theoretical methods. The results showed that: (1) the structural brain networks in all three groups showed small-world properties and highly connected global hubs predominately from association cortices; (2) the DP group showed lower local efficiency and global efficiency compared to the NC group, whereas the DPA group showed higher local efficiency and global efficiency compared to the NC group; (3) significant differences in network properties (clustering coefficient, characteristic path length, local efficiency, global efficiency) were found between the DPA and DP groups; (4) the DP group showed significant changes of nodal efficiency in brain areas primarily in the temporal lobe and bilateral frontal gyrus, compared to the DPA and NC groups. The analysis indicated that the DP and DPA groups showed altered nodal properties of the structural brain networks compared to the NC group. Moreover, the two diseased groups showed opposite trends in the network properties. The results of this study may provide a new imaging index for the clinical diagnosis of depression comorbid with anxiety.

  14. Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David

    2016-01-01

    Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.

  15. The behaviour of basic autocatalytic signalling modules in isolation and embedded in networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, J.; Mois, Kristina; Suwanmajo, Thapanar

    2014-11-07

    In this paper, we examine the behaviour of basic autocatalytic feedback modules involving a species catalyzing its own production, either directly or indirectly. We first perform a systematic study of the autocatalytic feedback module in isolation, examining the effect of different factors, showing how this module is capable of exhibiting monostable threshold and bistable switch-like behaviour. We then study the behaviour of this module embedded in different kinds of basic networks including (essentially) irreversible cycles, open and closed reversible chains, and networks with additional feedback. We study the behaviour of the networks deterministically and also stochastically, using simulations, analytical work, and bifurcation analysis. We find that (i) there are significant differences between the behaviour of this module in isolation and in a network: thresholds may be altered or destroyed and bistability may be destroyed or even induced, even when the ambient network is simple. The global characteristics and topology of this network and the position of the module in the ambient network can play important and unexpected roles. (ii) There can be important differences between the deterministic and stochastic dynamics of the module embedded in networks, which may be accentuated by the ambient network. This provides new insights into the functioning of such enzymatic modules individually and as part of networks, with relevance to other enzymatic signalling modules as well.

  16. An Evaluation of the Predictability of Austral Summer Season Precipitation over South America.

    NASA Astrophysics Data System (ADS)

    Misra, Vasubandhu

    2004-03-01

    In this study, the predictability of austral summer seasonal precipitation over South America is investigated using a 12-yr set of 3.5-month range (seasonal) and 17-yr range (continuous multiannual) five-member ensemble integrations of the Center for Ocean Land Atmosphere Studies (COLA) atmospheric general circulation model (AGCM). These integrations were performed with prescribed observed sea surface temperature (SST); the skill attained therefore represents an estimate of the upper bound of the skill achievable by the COLA AGCM with predicted SST. The seasonal runs outperform the multiannual model integrations in both deterministic and probabilistic skill. The simulation of the January-February-March (JFM) seasonal climatology of precipitation is vastly superior in the seasonal runs, except over the Nordeste region where the multiannual runs show a marginal improvement. The teleconnection of the ensemble mean JFM precipitation over tropical South America with global contemporaneous observed sea surface temperature in the seasonal runs conforms more closely to observations than in the multiannual runs. Both sets of runs clearly beat persistence in predicting the interannual precipitation anomalies over the Amazon River basin, Nordeste, South Atlantic convergence zone, and subtropical South America. However, both types of runs display poorer simulations over the subtropical regions than over the tropical areas of South America. The examination of the probabilistic skill of precipitation supports the conclusion from the deterministic skill analysis that the seasonal runs yield simulations superior to those from the multiannual-type runs.
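
    The abstract does not name the probabilistic skill metric; one standard choice for ensemble precipitation forecasts is the Brier score,

$$ BS = \frac{1}{N}\sum_{k=1}^{N}\left(p_k - o_k\right)^2, $$

    where $p_k$ is the forecast probability of an event (for example, the fraction of the five ensemble members predicting above-normal precipitation), $o_k \in \{0,1\}$ is the observed outcome, and lower values indicate better probabilistic skill.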

  17. Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming

    PubMed Central

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfy the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398
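
    The part-selection problem has this general shape: choose one part per slot from a characterized library so that a simple model of circuit behavior best approximates a user-specified target, subject to constraints. The toy sketch below uses exhaustive enumeration in place of an MINLP solver (feasible only at toy sizes), and all part names, strengths, and constraint values are hypothetical.

```python
from itertools import product

# Hypothetical characterized part library: promoter strengths and RBS efficiencies
promoters = {"pJ23100": 1.0, "pJ23106": 0.47, "pJ23114": 0.10}
rbs_sites = {"B0030": 0.9, "B0032": 0.3, "B0034": 1.0}

TARGET = 0.30      # desired relative expression level of the circuit node
MAX_BURDEN = 1.5   # user-defined constraint on total metabolic burden (illustrative)

best, best_err = None, float("inf")
for (p_name, p), (r_name, r) in product(promoters.items(), rbs_sites.items()):
    if p + r > MAX_BURDEN:             # constraint check
        continue
    expression = p * r                 # simple multiplicative expression model
    err = abs(expression - TARGET)     # objective: approximate the target level
    if err < best_err:
        best, best_err = (p_name, r_name), err

print("globally optimal selection:", best, "error:", round(best_err, 3))
```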

  18. Spatial dynamics of invasion: the geometry of introduced species.

    PubMed

    Korniss, Gyorgy; Caraco, Thomas

    2005-03-07

    Many exotic species combine low probability of establishment at each introduction with rapid population growth once introduction does succeed. To analyse this phenomenon, we note that invaders often cluster spatially when rare, and consequently an introduced exotic's population dynamics should depend on locally structured interactions. Ecological theory for spatially structured invasion relies on deterministic approximations, and determinism does not address the observed uncertainty of the exotic-introduction process. We take a new approach to the population dynamics of invasion and, by extension, to the general question of invasibility in any spatial ecology. We apply the physical theory for nucleation of spatial systems to a lattice-based model of competition between plant species, a resident and an invader, and the analysis reaches conclusions that differ qualitatively from the standard ecological theories. Nucleation theory distinguishes between dynamics of single- and multi-cluster invasion. Low introduction rates and small system size produce single-cluster dynamics, where success or failure of introduction is inherently stochastic. Single-cluster invasion occurs only if the cluster reaches a critical size, typically preceded by a number of failed attempts. For this case, we identify the functional form of the probability distribution of time elapsing until invasion succeeds. Although multi-cluster invasion for sufficiently large systems exhibits spatial averaging and almost-deterministic dynamics of the global densities, an analytical approximation from nucleation theory, known as Avrami's law, describes our simulation results far better than standard ecological approximations.
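
    Avrami's law, referenced above, has the classical form

$$ X(t) = 1 - \exp\!\left(-K t^{n}\right), $$

    where $X(t)$ is the fraction of the system converted to the new phase (here, the invader's global density relative to saturation), $K$ lumps together nucleation and growth rates, and the exponent $n$ depends on dimensionality and growth mode; with a constant nucleation rate in $d$ dimensions, $n = d + 1$.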

  19. The behaviour of basic autocatalytic signalling modules in isolation and embedded in networks

    NASA Astrophysics Data System (ADS)

    Krishnan, J.; Mois, Kristina; Suwanmajo, Thapanar

    2014-11-01

    In this paper, we examine the behaviour of basic autocatalytic feedback modules involving a species catalyzing its own production, either directly or indirectly. We first perform a systematic study of the autocatalytic feedback module in isolation, examining the effect of different factors, showing how this module is capable of exhibiting monostable threshold and bistable switch-like behaviour. We then study the behaviour of this module embedded in different kinds of basic networks including (essentially) irreversible cycles, open and closed reversible chains, and networks with additional feedback. We study the behaviour of the networks deterministically and also stochastically, using simulations, analytical work, and bifurcation analysis. We find that (i) there are significant differences between the behaviour of this module in isolation and in a network: thresholds may be altered or destroyed and bistability may be destroyed or even induced, even when the ambient network is simple. The global characteristics and topology of this network and the position of the module in the ambient network can play important and unexpected roles. (ii) There can be important differences between the deterministic and stochastic dynamics of the module embedded in networks, which may be accentuated by the ambient network. This provides new insights into the functioning of such enzymatic modules individually and as part of networks, with relevance to other enzymatic signalling modules as well.

  20. A mixed SIR-SIS model to contain a virus spreading through networks with two degrees

    NASA Astrophysics Data System (ADS)

    Essouifi, Mohamed; Achahbar, Abdelfattah

    Because the “nodes” and “links” of real networks are heterogeneous, we borrow the idea of the reduced scale-free network, introduced recently, to model the prevalence of computer viruses throughout the Internet. The purpose of this paper is to extend the previous deterministic two-subchain Susceptible-Infected-Susceptible (SIS) model into a mixed Susceptible-Infected-Recovered and Susceptible-Infected-Susceptible (SIR-SIS) model to contain computer viruses spreading over networks with two degrees. Moreover, we develop its stochastic counterpart. Owing to the high protection and security measures taken for the hub class, we suggest treating it with an SIR epidemic model rather than an SIS one. The analytical study reveals that the proposed model admits a stable viral equilibrium, and it is shown numerically that the mean dynamic behavior of the stochastic model is in agreement with the deterministic one. Unlike the infection densities i2 and i, which both tend to a viral equilibrium under both approaches as in the previous study, i1 tends to the virus-free equilibrium. Furthermore, since a proportion of infectives are recovered, the global infection density i is reduced; the permanent presence of viruses in the network is therefore due to the lower-degree node class. Many suggestions are put forward for containing virus propagation and minimizing its damage.
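
    A minimal deterministic sketch of a mixed model of this kind, with the hub class (subscript 1) treated as SIR and the lower-degree class (subscript 2) as SIS; the coupling through a shared force of infection and all parameter values are illustrative, not the paper's equations.

```python
from scipy.integrate import solve_ivp

beta1, beta2 = 0.5, 0.25    # infection rates: hub class, low-degree class (illustrative)
gamma1, delta2 = 0.2, 0.1   # SIR recovery rate for hubs, SIS cure rate for low-degree nodes

def rhs(t, y):
    i1, r1, i2 = y                           # infected hubs, recovered hubs, infected low-degree
    s1, s2 = 1 - i1 - r1, 1 - i2
    force = i1 + i2                          # shared force of infection (mixing between classes)
    di1 = beta1 * s1 * force - gamma1 * i1   # SIR: recovered hubs stay immune
    dr1 = gamma1 * i1
    di2 = beta2 * s2 * force - delta2 * i2   # SIS: cured low-degree nodes become susceptible again
    return [di1, dr1, di2]

sol = solve_ivp(rhs, (0, 400), [0.01, 0.0, 0.01])
i1, r1, i2 = sol.y[:, -1]
# simple average as a stand-in for a degree-weighted global infection density
print(f"i1 -> {i1:.3f} (hubs approach virus-free), i2 -> {i2:.3f} (endemic), i = {(i1 + i2) / 2:.3f}")
```

    With these rates, i1 decays toward the virus-free equilibrium as recovered hubs accumulate, while i2 settles at an endemic level, matching the qualitative behavior described above.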

  1. Nonlinear Time Series Analysis of Nodulation Factor Induced Calcium Oscillations: Evidence for Deterministic Chaos?

    PubMed Central

    Hazledine, Saul; Sun, Jongho; Wysham, Derin; Downie, J. Allan; Oldroyd, Giles E. D.; Morris, Richard J.

    2009-01-01

    Legume plants form beneficial symbiotic interactions with nitrogen fixing bacteria (called rhizobia), with the rhizobia being accommodated in unique structures on the roots of the host plant. The legume/rhizobial symbiosis is responsible for a significant proportion of the global biologically available nitrogen. The initiation of this symbiosis is governed by a characteristic calcium oscillation within the plant root hair cells and this signal is activated by the rhizobia. Recent analyses on calcium time series data have suggested that stochastic effects have a large role to play in defining the nature of the oscillations. The use of multiple nonlinear time series techniques, however, suggests an alternative interpretation, namely deterministic chaos. We provide an extensive, nonlinear time series analysis on the nature of this calcium oscillation response. We build up evidence through a series of techniques that test for determinism, quantify linear and nonlinear components, and measure the local divergence of the system. Chaos is common in nature and it seems plausible that properties of chaotic dynamics might be exploited by biological systems to control processes within the cell. Systems possessing chaotic control mechanisms are more robust in the sense that the enhanced flexibility allows more rapid response to environmental changes with less energetic costs. The desired behaviour could be most efficiently targeted in this manner, supporting some intriguing speculations about nonlinear mechanisms in biological signaling. PMID:19675679

  2. Stochastic integrated assessment of climate tipping points indicates the need for strict climate policy

    NASA Astrophysics Data System (ADS)

    Lontzek, Thomas S.; Cai, Yongyang; Judd, Kenneth L.; Lenton, Timothy M.

    2015-05-01

    Perhaps the most 'dangerous' aspect of future climate change is the possibility that human activities will push parts of the climate system past tipping points, leading to irreversible impacts. The likelihood of such large-scale singular events is expected to increase with global warming, but is fundamentally uncertain. A key question is how should the uncertainty surrounding tipping events affect climate policy? We address this using a stochastic integrated assessment model, based on the widely used deterministic DICE model. The temperature-dependent likelihood of tipping is calibrated using expert opinions, which we find to be internally consistent. The irreversible impacts of tipping events are assumed to accumulate steadily over time (rather than instantaneously), consistent with scientific understanding. Even with conservative assumptions about the rate and impacts of a stochastic tipping event, today's optimal carbon tax is increased by ~50%. For a plausibly rapid, high-impact tipping event, today's optimal carbon tax is increased by >200%. The additional carbon tax to delay climate tipping grows at only about half the rate of the baseline carbon tax. This implies that the effective discount rate for the costs of stochastic climate tipping is much lower than the discount rate for deterministic climate damages. Our results support recent suggestions that the costs of carbon emission used to inform policy are being underestimated, and that uncertain future climate damages should be discounted at a low rate.

  3. Efficient room-temperature source of polarized single photons

    DOEpatents

    Lukishova, Svetlana G.; Boyd, Robert W.; Stroud, Carlos R.

    2007-08-07

    An efficient technique for producing deterministically polarized single photons uses liquid-crystal hosts of either monomeric or oligomeric/polymeric form to preferentially align the single emitters for maximum excitation efficiency. Deterministic molecular alignment also provides deterministically polarized output photons; using planar-aligned cholesteric liquid crystal hosts as 1-D photonic-band-gap microcavities tunable to the emitter fluorescence band to increase source efficiency, using liquid crystal technology to prevent emitter bleaching. Emitters comprise soluble dyes, inorganic nanocrystals or trivalent rare-earth chelates.

  4. Finding the global minimum: a fuzzy end elimination implementation

    NASA Technical Reports Server (NTRS)

    Keller, D. A.; Shibata, M.; Marcus, E.; Ornstein, R. L.; Rein, R.

    1995-01-01

    The 'fuzzy end elimination theorem' (FEE) is a mathematically proven theorem that identifies rotameric states in proteins which are incompatible with the global minimum energy conformation. While implementing the FEE we noticed two different aspects that directly affected the final results at convergence. First, the identification of a single dead-ending rotameric state can trigger a 'domino effect' that initiates the identification of additional rotameric states which become dead-ending. A recursive check for dead-ending rotameric states is therefore necessary every time a dead-ending rotameric state is identified. It is shown that, if the recursive check is omitted, it is possible to miss the identification of some dead-ending rotameric states causing a premature termination of the elimination process. Second, we examined the effects of removing dead-ending rotameric states from further considerations at different moments of time. Two different methods of rotameric state removal were examined for an order dependence. In one case, each rotamer found to be incompatible with the global minimum energy conformation was removed immediately following its identification. In the other, dead-ending rotamers were marked for deletion but retained during the search, so that they influenced the evaluation of other rotameric states. When the search was completed, all marked rotamers were removed simultaneously. In addition, to expand further the usefulness of the FEE, a novel method is presented that allows for further reduction in the remaining set of conformations at the FEE convergence. In this method, called a tree-based search, each dead-ending pair of rotamers which does not lead to the direct removal of either rotameric state is used to reduce significantly the number of remaining conformations. In the future this method can also be expanded to triplet and quadruplet sets of rotameric states. We tested our implementation of the FEE by exhaustively searching ten protein segments and found that the FEE identified the global minimum every time. For each segment, the global minimum was exhaustively searched in two different environments: (i) the segments were extracted from the protein and exhaustively searched in the absence of the surrounding residues; (ii) the segments were exhaustively searched in the presence of the remaining residues fixed at crystal structure conformations. We also evaluated the performance of the method for accurately predicting side chain conformations. We examined the influence of factors such as type and accuracy of backbone template used, and the restrictions imposed by the choice of potential function, parameterization and rotamer database. Conclusions are drawn on these results and future prospects are given.
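
    The recursive "domino effect" described above can be sketched with a generic pairwise-energy rotamer model and a simple Goldstein-style elimination inequality (used here for illustration; it is a relative of, not identical to, the paper's FEE criterion): removing one rotamer tightens the best-case bounds of the others, so the test must be re-run until no further eliminations occur.

```python
def eliminate_dead_ends(E_self, E_pair, rotamers):
    """Iteratively remove rotamers that provably cannot belong to the global
    minimum-energy conformation, re-checking after every removal (the 'domino
    effect'): each elimination can expose new dead-ending rotamers."""
    alive = {pos: set(rs) for pos, rs in rotamers.items()}
    changed = True
    while changed:                          # recursive re-check loop
        changed = False
        for pos, rs in alive.items():
            for r in list(rs):
                for r2 in rs:
                    if r2 == r:
                        continue
                    # r is dead-ending if r2 is always better, even granting r
                    # its best-case interactions with the surviving rotamers.
                    gap = E_self[pos][r] - E_self[pos][r2]
                    gap += sum(min(E_pair[pos][q][r][u] - E_pair[pos][q][r2][u]
                                   for u in alive[q])
                               for q in alive if q != pos)
                    if gap > 0:
                        rs.discard(r)       # eliminate; may trigger further eliminations
                        changed = True
                        break
    return alive

# Tiny two-position example (illustrative energies)
E_self = {0: {"a": 1.0, "b": 3.0}, 1: {"x": 0.5, "y": 0.6}}
E_pair = {0: {1: {"a": {"x": 0.0, "y": 0.2}, "b": {"x": 0.5, "y": 0.9}}},
          1: {0: {"x": {"a": 0.0, "b": 0.5}, "y": {"a": 0.2, "b": 0.9}}}}
print(eliminate_dead_ends(E_self, E_pair, {0: ["a", "b"], 1: ["x", "y"]}))
```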

  5. Rational Drug Discovery of HCV Helicase Inhibitor: Improved Docking Accuracy with Multiple Seeding in AutoDock Vina and In Situ Minimization.

    PubMed

    Lim, See K; Othman, Rozana; Yusof, Rohana; Heh, Choon H

    2017-01-01

    Hepatitis C is a significant cause of end-stage liver disease and liver transplantation, affecting approximately 3% of the global population. Despite the several current direct antiviral agents for the treatment of Hepatitis C, the standard treatment for HCV infection is accompanied by several drawbacks, such as adverse side effects, high pricing of medications and the rapidly emerging resistant HCV variants. The objective was to discover potential inhibitors of HCV helicase through an optimized in silico approach. In this study, a homology model (HCV Genotype 3 helicase) was used as the target and screened through a benzopyran-based virtual library. Multiple seedings of AutoDock Vina and in situ minimization were used to account for the non-deterministic nature of the AutoDock Vina search algorithm and for binding site flexibility, respectively. ADME/T and interaction analyses were also done on the top hits via the FAFDRUG3 web server and Discovery Studio 4.5. This study involved the development of an improved flow for virtual screening via implementation of a multiple-seeding screening approach and in situ minimization. With the new docking protocol, the redocked standards showed better RMSD values in reference to their native conformations. Ten benzopyran-like compounds with satisfactory physicochemical properties were discovered to be potential inhibitors of HCV helicase, with ZINC38649350 identified as the most promising. These findings could contribute to the discovery of novel HCV antivirals and serve as an alternative approach to in silico rational drug discovery.
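
    A multiple-seeding run of this kind can be driven by a small script. The sketch below assumes the documented AutoDock Vina 1.x command line (--config, --ligand, --seed, --out) and the "REMARK VINA RESULT" affinity line in the output PDBQT; all file names are placeholders, and a prepared receptor/config is assumed to exist.

```python
import re
import subprocess

SEEDS = [1, 42, 1234, 2017, 31337]   # multiple seeds to cover Vina's stochastic search

def best_affinity(out_pdbqt):
    """Parse the top-ranked binding affinity (kcal/mol) from a Vina output file."""
    with open(out_pdbqt) as fh:
        for line in fh:
            m = re.match(r"REMARK VINA RESULT:\s+(-?\d+\.\d+)", line)
            if m:
                return float(m.group(1))
    return float("inf")              # no result parsed; treat as worst case

results = {}
for seed in SEEDS:
    out = f"ligand_seed{seed}.pdbqt"
    subprocess.run(["vina", "--config", "conf.txt", "--ligand", "ligand.pdbqt",
                    "--seed", str(seed), "--out", out], check=True)
    results[seed] = best_affinity(out)

best_seed = min(results, key=results.get)    # most negative = strongest predicted binding
print("best pose from seed", best_seed, "affinity:", results[best_seed])
```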

  6. Polynomial-time solution of prime factorization and NP-complete problems with digital memcomputing machines

    NASA Astrophysics Data System (ADS)

    Traversa, Fabio L.; Di Ventra, Massimiliano

    2017-02-01

    We introduce a class of digital machines, which we name Digital Memcomputing Machines (DMMs), able to solve a wide range of problems including Non-deterministic Polynomial (NP) ones with polynomial resources (in time, space, and energy). An abstract DMM with this power must satisfy a set of compatible mathematical constraints underlying its practical realization. We prove this by making a connection with dynamical systems theory. This leads us to a set of physical constraints for poly-resource resolvability. Once the mathematical requirements have been assessed, we propose a practical scheme to solve the above class of problems based on the novel concept of self-organizing logic gates and circuits (SOLCs). These are logic gates and circuits able to accept input signals from any terminal, without distinction between conventional input and output terminals. They can solve Boolean problems by self-organizing into their solution. They can be fabricated either with circuit elements with memory (such as memristors) or with standard MOS technology. Using tools of functional analysis, we prove mathematically the following constraints for poly-resource resolvability: (i) SOLCs possess a global attractor; (ii) their only equilibrium points are the solutions of the problems to solve; (iii) the system converges exponentially fast to the solutions; (iv) the equilibrium convergence rate scales at most polynomially with input size. We finally provide arguments that periodic orbits and strange attractors cannot coexist with equilibria. As examples, we show how to solve prime factorization and the search version of the NP-complete subset-sum problem. Since DMMs map integers into integers, they are robust against noise and hence scalable. We finally discuss the implications of the DMM realization through SOLCs for the NP = P question related to the constraints of poly-resource resolvability.

  7. Explorative search of distributed bio-data to answer complex biomedical questions

    PubMed Central

    2014-01-01

    Background: The huge amount of biomedical-molecular data increasingly produced is providing scientists with potentially valuable information. Yet, such data quantity makes it difficult to find and extract those data that are most reliable and most related to the biomedical questions to be answered, which are increasingly complex and often involve many different biomedical-molecular aspects. Such questions can be addressed only by comprehensively searching and exploring different types of data, which frequently are ordered and provided by different data sources. Search Computing has been proposed for the management and integration of ranked results from heterogeneous search services. Here, we present its novel application to the explorative search of distributed biomedical-molecular data and the integration of the search results to answer complex biomedical questions. Results: A set of available bioinformatics search services has been modelled and registered in the Search Computing framework, and a Bioinformatics Search Computing application (Bio-SeCo) using such services has been created and made publicly available at http://www.bioinformatics.deib.polimi.it/bio-seco/seco/. It offers an integrated environment which eases search, exploration and ranking-aware combination of heterogeneous data provided by the available registered services, and supplies global results that can support answering complex multi-topic biomedical questions. Conclusions: By using Bio-SeCo, scientists can explore the very large and very heterogeneous biomedical-molecular data available. They can easily make different explorative search attempts, inspect obtained results, select the most appropriate, expand or refine them, and move forward and backward in the construction of a global complex biomedical query on multiple distributed sources that could eventually find the most relevant results. Thus, it provides an extremely useful automated support for exploratory integrated bio search, which is fundamental for Life Science data-driven knowledge discovery. PMID:24564278

  8. Searching bioremediation patents through Cooperative Patent Classification (CPC).

    PubMed

    Prasad, Rajendra

    2016-03-01

    Patent classification systems have traditionally evolved independently at each patent jurisdiction to classify patents handled by their examiners, so that previous patents could be searched while dealing with new patent applications. As the patent databases maintained by them went online, for free access to the public as well as for global search of prior art by examiners, the need arose for a common platform and uniform structure of patent databases. The diversity of classifications, however, posed problems for integrating and searching relevant patents across patent jurisdictions. To address this problem of comparability of data from different sources and of searching patents, WIPO in the recent past developed what is known as the International Patent Classification (IPC) system, which most countries readily adopted, coding their patents with IPC codes along with their own codes. The Cooperative Patent Classification (CPC) is the latest patent classification system, based on the IPC/European Classification (ECLA) system and developed by the European Patent Office (EPO) and the United States Patent and Trademark Office (USPTO), which is likely to become a global standard. This paper discusses this new classification system with reference to patents on bioremediation.

  9. Association of Structural Global Brain Network Properties with Intelligence in Normal Aging

    PubMed Central

    Fischer, Florian U.; Wolf, Dominik; Scheurich, Armin; Fellgiebel, Andreas

    2014-01-01

    Higher general intelligence attenuates age-associated cognitive decline and the risk of dementia. Thus, intelligence has been associated with cognitive reserve or resilience in normal aging. Neurophysiologically, intelligence is considered as a complex capacity that is dependent on a global cognitive network rather than isolated brain areas. An association of structural as well as functional brain network characteristics with intelligence has already been reported in young adults. We investigated the relationship between global structural brain network properties, general intelligence and age in a group of 43 cognitively healthy elderly, age 60–85 years. Individuals were assessed cross-sectionally using Wechsler Adult Intelligence Scale-Revised (WAIS-R) and diffusion-tensor imaging. Structural brain networks were reconstructed individually using deterministic tractography, global network properties (global efficiency, mean shortest path length, and clustering coefficient) were determined by graph theory and correlated to intelligence scores within both age groups. Network properties were significantly correlated to age, whereas no significant correlation to WAIS-R was observed. However, in a subgroup of 15 individuals aged 75 and above, the network properties were significantly correlated to WAIS-R. Our findings suggest that general intelligence and global properties of structural brain networks may not be generally associated in cognitively healthy elderly. However, we provide first evidence of an association between global structural brain network properties and general intelligence in advanced elderly. Intelligence might be affected by age-associated network deterioration only if a certain threshold of structural degeneration is exceeded. Thus, age-associated brain structural changes seem to be partially compensated by the network and the range of this compensation might be a surrogate of cognitive reserve or brain resilience. PMID:24465994

  10. Fast protein tertiary structure retrieval based on global surface shape similarity.

    PubMed

    Sael, Lee; Li, Bin; La, David; Fang, Yi; Ramani, Karthik; Rustamov, Raif; Kihara, Daisuke

    2008-09-01

    Characterization and identification of similar tertiary structures of proteins provides rich information for investigating function and evolution. The importance of structure similarity searches is increasing as structure databases continue to expand, partly due to the structural genomics projects. A crucial drawback of conventional protein structure comparison methods, which compare structures by their main-chain orientation or the spatial arrangement of secondary structure, is that a database search is too slow to be done in real-time. Here we introduce a global surface shape representation by three-dimensional (3D) Zernike descriptors, which represent a protein structure compactly as a series expansion of 3D functions. With this simplified representation, a search against a few thousand structures takes less than a minute. To investigate the agreement between the surface representation defined by the 3D Zernike descriptor and the conventional main-chain based representation, a benchmark was performed against a protein classification generated by the combinatorial extension algorithm. Despite the different representation, the 3D Zernike descriptor retrieved proteins of the same conformation defined by combinatorial extension in 89.6% of the cases within the top five closest structures. Real-time protein structure search by 3D Zernike descriptors will open up new possibilities for large-scale global and local protein surface shape comparison.

  11. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2011-08-23

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoreplicant structure coupled to a surface of the substrate.

  12. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915

  13. Stochasticity and determinism in models of hematopoiesis.

    PubMed

    Kimmel, Marek

    2014-01-01

    This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.

  14. Competitive Facility Location with Fuzzy Random Demands

    NASA Astrophysics Data System (ADS)

    Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke

    2010-10-01

    This paper proposes a new location problem for competitive facilities, e.g. shops, with uncertain and vague demands for the facilities in a plane. By representing the demands for facilities as fuzzy random variables, the location problem can be formulated as a fuzzy random programming problem. To solve the fuzzy random programming problem, first the α-level sets of fuzzy numbers are used to transform it into a stochastic programming problem; secondly, by using their expectations and variances, it can be reformulated as a deterministic programming problem. After showing that one of the optimal solutions can be found by solving 0-1 programming problems, a solution method is proposed by improving the tabu search algorithm with strategic oscillation. The efficiency of the proposed method is shown by applying it to numerical examples of facility location problems.

  15. Seven rules to avoid the tragedy of the commons.

    PubMed

    Murase, Yohsuke; Baek, Seung Ki

    2018-07-14

    Cooperation among self-interested players in a social dilemma is fragile and easily interrupted by mistakes. In this work, we study the repeated n-person public-goods game and search for a strategy that forms a cooperative Nash equilibrium in the presence of implementation error, with a guarantee that the resulting payoff will be no less than any of the co-players'. By enumerating strategic possibilities for n=3, we show that such a strategy indeed exists when its memory length m equals three. This means that a deterministic strategy can be publicly employed to stabilize cooperation against error while avoiding the risk of being exploited. We furthermore show that, for the general n-person public-goods game, m ≥ n is necessary to satisfy the above criteria.

  16. 47 CFR 80.1125 - Search and rescue coordinating communications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    Search and rescue coordinating communications. 80.1125 Section 80.1125 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Global Maritime Distress and Safety System (GMDSS...

  17. 2005 8th Annual Systems Engineering Conference. Volume 1, Tuesday

    DTIC Science & Technology

    2005-10-27

    [Fragmentary slide text; recoverable topics include NCES Discovery Services, Federated Search, Enterprise Service Management, Security Services, and Global Strike Mission Planning.]

  18. Exposure to Organic Solvents Used in Dry Cleaning Reduces Low and High Level Visual Function

    PubMed Central

    Jiménez Barbosa, Ingrid Astrid

    2015-01-01

    Purpose: To investigate whether exposure to occupational levels of organic solvents in the dry cleaning industry is associated with neurotoxic symptoms and visual deficits in the perception of basic visual features such as luminance contrast and colour, higher level processing of global motion and form (Experiment 1), and cognitive function as measured in a visual search task (Experiment 2). Methods: The Q16 neurotoxic questionnaire, a commonly used measure of neurotoxicity (by the World Health Organization), was administered to assess the neurotoxic status of a group of 33 dry cleaners exposed to occupational levels of organic solvents (OS) and 35 age-matched non dry-cleaners who had never worked in the dry cleaning industry. In Experiment 1, to assess visual function, contrast sensitivity, colour/hue discrimination (Munsell Hue 100 test), and global motion and form thresholds were assessed using computerised psychophysical tests. Sensitivity to global motion or form structure was quantified by varying the pattern coherence of global dot motion (GDM) and Glass patterns (oriented dot pairs) respectively (i.e., the percentage of dots/dot pairs that contribute to the perception of global structure). In Experiment 2, a letter visual-search task was used to measure reaction times (as a function of the number of elements: 4, 8, 16, 32, 64 and 100) in both parallel and serial search conditions. Results: Dry cleaners exposed to organic solvents had significantly higher scores on the Q16 compared to non dry-cleaners, indicating that dry cleaners experienced more neurotoxic symptoms on average. The contrast sensitivity function for dry cleaners was significantly lower at all spatial frequencies relative to non dry-cleaners, which is consistent with previous studies. Poorer colour discrimination performance was also noted in dry cleaners than in non dry-cleaners, particularly along the blue/yellow axis. In a new finding, we report that global form and motion thresholds for dry cleaners were also significantly higher, almost double those obtained from non dry-cleaners. However, reaction time performance on both parallel and serial visual search did not differ between dry cleaners and non dry-cleaners. Conclusions: Exposure to occupational levels of organic solvents is associated with neurotoxicity, which is in turn associated with both low level deficits (such as the perception of contrast and discrimination of colour) and high level visual deficits such as the perception of global form and motion, but not with visual search performance. The latter finding indicates that the deficits in visual function are unlikely to be due to changes in general cognitive performance. PMID:25933026

  19. Taboo Search: An Approach to the Multiple Minima Problem

    NASA Astrophysics Data System (ADS)

    Cvijovic, Djurdje; Klinowski, Jacek

    1995-02-01

    Described here is a method, based on Glover's taboo search for discrete functions, of solving the multiple minima problem for continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimization, this procedure is generally applicable, easy to implement, derivative-free, and conceptually simple.
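
    A minimal continuous-variable sketch of the idea: candidate moves are sampled around the current point, moves that land inside "taboo" balls around recently visited points are rejected, and the best admissible neighbor is accepted even if it is uphill, which is what lets the search escape local minima. Radii, memory length, and step sizes are illustrative.

```python
import numpy as np

def taboo_search(f, x0, n_iter=2000, n_cand=20, radius=0.3, taboo_len=50, sigma=0.5, seed=0):
    """Taboo search for continuous minimization: reject candidates falling inside
    taboo balls around recently visited points, forcing the walk out of local
    minima instead of letting it cycle back into them."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    taboo = [x.copy()]
    for _ in range(n_iter):
        cands = x + sigma * rng.standard_normal((n_cand, x.size))
        # keep only candidates outside all taboo balls
        ok = [c for c in cands if all(np.linalg.norm(c - t) > radius for t in taboo)]
        if not ok:
            continue
        x = min(ok, key=f)                 # best admissible neighbor, even if uphill
        taboo.append(x.copy())
        taboo = taboo[-taboo_len:]         # finite taboo memory
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# Rastrigin-like multiple-minima test function
f = lambda v: 10 * v.size + np.sum(v**2 - 10 * np.cos(2 * np.pi * v))
print(taboo_search(f, [3.5, -2.5]))
```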

  20. Hybrid deterministic/stochastic simulation of complex biochemical systems.

    PubMed

    Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina

    2017-11-21

    In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explored with wet lab experiments. To serve their purpose, computational models should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented by efficient algorithms requiring the shortest possible execution time, to avoid excessively lengthening the time that elapses between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which is often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundances fluctuate by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally time-expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reaction set into fast reactions, moderate reactions, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. Moderate reactions are those whose reaction waiting time is greater than the fast-reaction waiting time but smaller than the slow-reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crossed the threshold of low (high) waiting time. A Gillespie First Reaction Method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics: DNA transcription regulation. The simulated dynamic profile of the reagents' abundance and the estimate of the error introduced by the fully deterministic approach were used to evaluate the consistency of the computational model and that of the software tool.
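
    The two ingredients that distinguish this scheme, waiting-time-based partitioning with hysteresis and a First Reaction Method step for the stochastic channels, can be sketched as follows; the thresholds and reaction names are illustrative, not the tool's actual values or API.

```python
import math
import random

def classify(a, current, t_fast=1e-4, t_slow=1e-1):
    """Hysteresis switching on the expected waiting time 1/a of a channel:
    fast channels go deterministic, slow ones stochastic, and moderate ones
    keep their previous label until they cross a threshold (illustrative values)."""
    tau = math.inf if a == 0 else 1.0 / a
    if tau < t_fast:
        return "deterministic"
    if tau > t_slow:
        return "stochastic"
    return current

def first_reaction_step(propensities, rng=random):
    """Gillespie First Reaction Method: draw a tentative firing time for every
    stochastic channel and execute the earliest one."""
    times = {j: rng.expovariate(a) for j, a in propensities.items() if a > 0}
    if not times:
        return None, math.inf
    j = min(times, key=times.get)
    return j, times[j]

# Toy usage: three channels with very different time scales
a = {"binding": 5e4, "phosphorylation": 60.0, "gene_switch": 0.5}
labels = {j: classify(aj, "stochastic") for j, aj in a.items()}
stoch = {j: a[j] for j in a if labels[j] == "stochastic"}
print(labels, first_reaction_step(stoch))
```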

  1. Restricted random search method based on taboo search in the multiple minima problem

    NASA Astrophysics Data System (ADS)

    Hong, Seung Do; Jhon, Mu Shik

    1997-03-01

    The restricted random search method is proposed as a simple Monte Carlo sampling method for fast searching of minima in the multiple minima problem. This method is based on taboo search, recently applied to continuous test functions. The concept of a taboo region is used instead of a taboo list, so that sampling of the region near an old configuration is restricted. The method is applied to 2-dimensional test functions and to argon clusters, and is found to be a practical and efficient way to find near-global configurations of the test functions and the argon clusters.

  2. WorldWideScience.org: the global science gateway.

    PubMed

    Fitzpatrick, Roberta Bronson

    2009-10-01

    WorldWideScience.org is a Web-based global gateway connecting users to both national and international scientific databases and portals. This column will provide background information on the resource as well as introduce basic searching practices for users.

  3. Hybrid Genetic Algorithm - Local Search Method for Ground-Water Management

    NASA Astrophysics Data System (ADS)

    Chiu, Y.; Nishikawa, T.; Martin, P.

    2008-12-01

    Ground-water management problems commonly are formulated as a mixed-integer, non-linear programming problem (MINLP). Relying only on conventional gradient-search methods to solve the management problem is computationally fast; however, the methods may become trapped in a local optimum. Global-optimization schemes can identify the global optimum, but the convergence is very slow when the optimal solution approaches the global optimum. In this study, we developed a hybrid optimization scheme, which includes a genetic algorithm and a gradient-search method, to solve the MINLP. The genetic algorithm identifies a near-optimal solution, and the gradient search uses the near optimum to identify the global optimum. Our methodology is applied to a conjunctive-use project in the Warren ground-water basin, California. Hi-Desert Water District (HDWD), the primary water-manager in the basin, plans to construct a wastewater treatment plant to reduce future septic-tank effluent from reaching the ground-water system. The treated wastewater instead will recharge the ground-water basin via percolation ponds as part of a larger conjunctive-use strategy, subject to State regulations (e.g. minimum distances and travel times). HDWD wishes to identify the least-cost conjunctive-use strategies that control ground-water levels, meet regulations, and identify new production-well locations. As formulated, the MINLP objective is to minimize water-delivery costs subject to constraints including pump capacities, available recharge water, water-supply demand, water-level constraints, and potential new-well locations. The methodology was demonstrated by an enumerative search of the entire feasible solution space and by comparing the optimum solution with results from the branch-and-bound algorithm. The results also indicate that the hybrid method identifies the global optimum within an affordable computation time. Sensitivity analyses, which include testing different recharge-rate scenarios, pond layouts, and water-supply constraints, indicate that the number of new wells is insensitive to water-supply constraints; however, pumping rates and patterns of the existing wells are sensitive. The locations of new wells are mildly sensitive to the pond layout.
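
    The two-stage hybrid can be illustrated on a generic multimodal objective: a small real-coded genetic algorithm supplies a near-optimal starting point, which a gradient-based local solver then polishes. This is a sketch of the scheme's structure only; the actual management model couples the optimizer to a ground-water flow simulation with the constraints listed above.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Multimodal stand-in for the pumping-cost objective (illustrative)."""
    return np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x))

def genetic_stage(f, dim=2, pop=60, gens=100, lo=-5.0, hi=5.0, seed=0):
    """Simple real-coded GA: truncation selection plus Gaussian mutation."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, (pop, dim))
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, P)
        parents = P[np.argsort(fit)[: pop // 2]]                  # keep the best half
        kids = (parents[rng.integers(0, len(parents), pop - len(parents))]
                + rng.normal(0, 0.3, (pop - len(parents), dim)))  # mutated offspring
        P = np.vstack([parents, np.clip(kids, lo, hi)])
    return P[np.argmin(np.apply_along_axis(f, 1, P))]

x_near = genetic_stage(objective)                 # global stage: near-optimal solution
res = minimize(objective, x_near, method="BFGS")  # local stage: polish to the optimum
print("GA point:", x_near, "-> polished:", res.x, "f =", res.fun)
```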

  4. Barriers, supports, and effective interventions for uptake of human papillomavirus- and other vaccines within global and Canadian Indigenous peoples: a systematic review protocol.

    PubMed

    Mrklas, Kelly J; MacDonald, Shannon; Shea-Budgell, Melissa A; Bedingfield, Nancy; Ganshorn, Heather; Glaze, Sarah; Bill, Lea; Healy, Bonnie; Healy, Chyloe; Guichon, Juliet; Colquhoun, Amy; Bell, Christopher; Richardson, Ruth; Henderson, Rita; Kellner, James; Barnabe, Cheryl; Bednarczyk, Robert A; Letendre, Angeline; Nelson, Gregg S

    2018-03-02

    Despite the existence of human papilloma virus (HPV) vaccines with demonstrated safety and effectiveness and funded HPV vaccination programs, coverage rates are persistently lower and cervical cancer burden higher among Canadian Indigenous peoples. Barriers and supports to HPV vaccination in Indigenous peoples have not been systematically documented, nor have interventions to increase uptake in this population. This protocol aims to appraise the literature in Canadian and global Indigenous peoples, relating to documented barriers and supports to vaccination and interventions to increase acceptability/uptake or reduce hesitancy of vaccination. Although HPV vaccination is the primary focus, we anticipate only a small number of relevant studies to emerge from the search and will, therefore, employ a broad search strategy to capture literature related to both HPV vaccination and vaccination in general in global Indigenous peoples. Eligible studies will include global Indigenous peoples and discuss barriers or supports and/or interventions to improve uptake or to reduce hesitancy, for the HPV vaccine and/or other vaccines. Primary outcomes are documented barriers or supports or interventions. All study designs meeting inclusion criteria will be considered, without restricting by language, location, or data type. We will use an a priori search strategy, comprised of key words and controlled vocabulary terms, developed in consultation with an academic librarian, and reviewed by a second academic librarian using the PRESS checklist. We will search several electronic databases from date of inception, without restrictions. A pre-defined group of global Indigenous websites will be reviewed for relevant gray literature. Bibliographic searches will be conducted for all included studies to identify relevant reviews. Data analysis will include an inductive, qualitative, thematic synthesis and a quantitative analysis of measured barriers and supports, as well as a descriptive synthesis and quantitative summary of measures for interventions. To our knowledge, this study will contribute the first systematic review of documented barriers, supports, and interventions for vaccination in general and for HPV vaccination. The results of this study are expected to inform future research, policies, programs, and community-driven initiatives to enhance acceptability and uptake of HPV vaccination among Indigenous peoples. PROSPERO Registration Number: CRD42017048844.

  5. Pro Free Will Priming Enhances “Risk-Taking” Behavior in the Iowa Gambling Task, but Not in the Balloon Analogue Risk Task: Two Independent Priming Studies

    PubMed Central

    Schrag, Yann; Tremea, Alessandro; Lagger, Cyril; Ohana, Noé; Mohr, Christine

    2016-01-01

    Studies indicated that people behave less responsibly after exposure to information containing deterministic statements as compared to free will statements or neutral statements. Thus, deterministic primes should lead to enhanced risk-taking behavior. We tested this prediction in two studies with healthy participants. In experiment 1, we tested 144 students (24 men) in the laboratory using the Iowa Gambling Task. In experiment 2, we tested 274 participants (104 men) online using the Balloon Analogue Risk Task. In the Iowa Gambling Task, the free will priming condition resulted in more risky decisions than both the deterministic and neutral priming conditions. We observed no priming effects on risk-taking behavior in the Balloon Analogue Risk Task. To explain these unpredicted findings, we consider the somatic marker hypothesis, a gain frequency approach, as well as attention to gains and/or inattention to losses. In addition, we highlight the necessity to consider both pro free will and deterministic priming conditions in future studies. Importantly, our and previous results indicate that the effects of pro free will and deterministic priming do not oppose each other on a frequently assumed continuum. PMID:27018854

  6. Pro Free Will Priming Enhances "Risk-Taking" Behavior in the Iowa Gambling Task, but Not in the Balloon Analogue Risk Task: Two Independent Priming Studies.

    PubMed

    Schrag, Yann; Tremea, Alessandro; Lagger, Cyril; Ohana, Noé; Mohr, Christine

    2016-01-01

    Studies indicated that people behave less responsibly after exposure to information containing deterministic statements as compared to free will statements or neutral statements. Thus, deterministic primes should lead to enhanced risk-taking behavior. We tested this prediction in two studies with healthy participants. In experiment 1, we tested 144 students (24 men) in the laboratory using the Iowa Gambling Task. In experiment 2, we tested 274 participants (104 men) online using the Balloon Analogue Risk Task. In the Iowa Gambling Task, the free will priming condition resulted in more risky decisions than both the deterministic and neutral priming conditions. We observed no priming effects on risk-taking behavior in the Balloon Analogue Risk Task. To explain these unpredicted findings, we consider the somatic marker hypothesis, a gain frequency approach, as well as attention to gains and/or inattention to losses. In addition, we highlight the necessity to consider both pro free will and deterministic priming conditions in future studies. Importantly, our and previous results indicate that the effects of pro free will and deterministic priming do not oppose each other on a frequently assumed continuum.

  7. A systematic review of models used in cost-effectiveness analyses of preventing osteoporotic fractures.

    PubMed

    Si, L; Winzenberg, T M; Palmer, A J

    2014-01-01

    This review examined the evolution of health economic models used in evaluations of clinical approaches to preventing osteoporotic fractures. Models have improved, with medical continuance becoming increasingly recognized as a contributor to health and economic outcomes, as well as advancements in epidemiological data. Model-based health economic evaluation studies are increasingly used to investigate the cost-effectiveness of osteoporotic fracture prevention and treatment. The objective of this study was to carry out a systematic review of the evolution of health economic models used in the evaluation of osteoporotic fracture prevention. Electronic searches within MEDLINE and EMBASE were carried out using a predefined search strategy. Inclusion and exclusion criteria were used to select relevant studies. The reference lists of included studies were searched to identify any potential studies not captured in our electronic search. Data on country, interventions, type of fracture prevention, evaluation perspective, type of model, time horizon, fracture sites, expressed costs, types of costs included, and effectiveness measurement were extracted. Seventy-four models were described in 104 publications, of which 69% were European. Earlier models focused mainly on hip, vertebral, and wrist fractures, but later models included multiple fracture sites (humerus, pelvis, tibia, and other fractures). Modeling techniques have evolved from simple decision trees, through deterministic Markov processes, to individual patient simulation models accounting for uncertainty in multiple parameters. Treatment continuance has been increasingly taken into account in the models in the last decade. Models have evolved in their complexity and emphasis, with medical continuance becoming increasingly recognized as a contributor to health and economic outcomes. This evolution may be driven in part by the desire to capture all the important differentiating characteristics of the medications under scrutiny, as well as by advancements in epidemiological data relevant to osteoporotic fractures.

  8. GLCF: Global Land Survey (GLS)

    Science.gov Websites

    The Global Land Survey (GLS) collection of Landsat imagery is accessible via the ESDI Search and Preview Tool. GLS 1975, 1990, 2000, and 2005 are available to the public for free at the US Geological Survey.

  9. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    PubMed

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2017-08-01

    Eye-tracking research has been conducted for decades to gain understanding of visual diagnostic processes, such as those in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye-tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross-sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. Eye tracking literature in radiology indicates several search patterns are related to high levels of expertise, but teaching novices to search as an expert may not be effective. Experimental research is needed to find out which search strategies can improve image perception in learners.

  10. Application of Non-Deterministic Methods to Assess Modeling Uncertainties for Reinforced Carbon-Carbon Debris Impacts

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Fasanella, Edwin L.; Melis, Matthew; Carney, Kelly; Gabrys, Jonathan

    2004-01-01

    The Space Shuttle Columbia Accident Investigation Board (CAIB) made several recommendations for improving the NASA Space Shuttle Program. An extensive experimental and analytical program has been developed to address two recommendations related to structural impact analysis. The objective of the present work is to demonstrate the application of probabilistic analysis to assess the effect of uncertainties on debris impacts on Space Shuttle Reinforced Carbon-Carbon (RCC) panels. The probabilistic analysis is used to identify the material modeling parameters controlling the uncertainty. A comparison of the finite element results with limited experimental data provided confidence that the simulations were adequately representing the global response of the material. Five input parameters were identified as significantly controlling the response.

  11. Prototyping the E-ELT M1 local control system communication infrastructure

    NASA Astrophysics Data System (ADS)

    Argomedo, J.; Kornweibel, N.; Grudzien, T.; Dimmler, M.; Andolfato, L.; Barriga, P.

    2016-08-01

    The primary mirror of the E-ELT is composed of 798 hexagonal segments, each about 1.45 meters across. Each segment can be moved in piston and tip-tilt using three position actuators. Inductive edge sensors are used to provide feedback for global reconstruction of the mirror shape. The E-ELT M1 Local Control System will provide a deterministic infrastructure for collecting edge sensor and actuator readings and distributing the new position actuator references, while at the same time providing failure detection, isolation and notification, synchronization, monitoring and configuration management. The present paper describes the prototyping activities carried out to verify the feasibility of the E-ELT M1 local control system communication architecture design and assess its performance and potential limitations.

  12. The impact of ARM on climate modeling

    DOE PAGES

    Randall, David A.; Del Genio, Anthony D.; Donner, Lee J.; ...

    2016-07-15

    Climate models are among humanity’s most ambitious and elaborate creations. They are designed to simulate the interactions of the atmosphere, ocean, land surface, and cryosphere on time scales far beyond the limits of deterministic predictability and including the effects of time-dependent external forcings. The processes involved include radiative transfer, fluid dynamics, microphysics, and some aspects of geochemistry, biology, and ecology. The models explicitly simulate processes on spatial scales ranging from the circumference of Earth down to 100 km or smaller and implicitly include the effects of processes on even smaller scales down to a micron or so. In addition, the atmospheric component of a climate model can be called an atmospheric general circulation model (AGCM).

  13. Ion implantation for deterministic single atom devices

    NASA Astrophysics Data System (ADS)

    Pacheco, J. L.; Singh, M.; Perry, D. L.; Wendt, J. R.; Ten Eyck, G.; Manginell, R. P.; Pluym, T.; Luhman, D. R.; Lilly, M. P.; Carroll, M. S.; Bielejec, E.

    2017-12-01

    We demonstrate a capability of deterministic doping at the single atom level using a combination of direct write focused ion beam and solid-state ion detectors. The focused ion beam system can position a single ion to within 35 nm of a targeted location and the detection system is sensitive to single low energy heavy ions. This platform can be used to deterministically fabricate single atom devices in materials where the nanostructure and ion detectors can be integrated, including donor-based qubits in Si and color centers in diamond.

  14. Counterfactual Quantum Deterministic Key Distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng; Wang, Jian; Tang, Chao-Jing

    2013-01-01

    We propose a new counterfactual quantum cryptography protocol concerned with distributing a deterministic key. By adding a controlled blocking operation module to the original protocol [T.G. Noh, Phys. Rev. Lett. 103 (2009) 230501], the correlation between the polarizations of the two parties, Alice and Bob, is extended; therefore, one can distribute both deterministic keys and random ones using our protocol. We also give a simple proof of the security of our protocol using the technique we previously applied to the original protocol. Most importantly, our analysis produces a bound tighter than the existing ones.

  15. Ion implantation for deterministic single atom devices

    DOE PAGES

    Pacheco, J. L.; Singh, M.; Perry, D. L.; ...

    2017-12-04

    Here, we demonstrate a capability of deterministic doping at the single atom level using a combination of direct write focused ion beam and solid-state ion detectors. The focused ion beam system can position a single ion to within 35 nm of a targeted location and the detection system is sensitive to single low energy heavy ions. This platform can be used to deterministically fabricate single atom devices in materials where the nanostructure and ion detectors can be integrated, including donor-based qubits in Si and color centers in diamond.

  16. Deterministic quantum splitter based on time-reversed Hong-Ou-Mandel interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jun; Lee, Kim Fook; Kumar, Prem

    2007-09-15

    By utilizing a fiber-based indistinguishable photon-pair source in the 1.55 μm telecommunications band [J. Chen et al., Opt. Lett. 31, 2798 (2006)], we present the first, to the best of our knowledge, deterministic quantum splitter based on the principle of time-reversed Hong-Ou-Mandel quantum interference. The deterministically separated identical photons' indistinguishability is then verified by using a conventional Hong-Ou-Mandel quantum interference, which exhibits a near-unity dip visibility of 94±1%, making this quantum splitter useful for various quantum information processing applications.

  17. The Art Gallery Test: A Preliminary Comparison between Traditional Neuropsychological and Ecological VR-Based Tests.

    PubMed

    Gamito, Pedro; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Rosa, Pedro; Sousa, Tatiana; Maia, Ines; Morais, Diogo; Lopes, Paulo; Brito, Rodrigo

    2017-01-01

    Ecological validity should be the cornerstone of any assessment of cognitive functioning. For this purpose, we have developed a preliminary study to test the Art Gallery Test (AGT) as an alternative to traditional neuropsychological testing. The AGT involves three visual search subtests displayed in a virtual reality (VR) art gallery, designed to assess visual attention within an ecologically valid setting. To evaluate the relation between the AGT and standard neuropsychological assessment scales, data were collected on a normative sample of healthy adults ( n = 30). The measures consisted of concurrent paper-and-pencil neuropsychological measures [Montreal Cognitive Assessment (MoCA), Frontal Assessment Battery (FAB), and Color Trails Test (CTT)] along with the outcomes from the three subtests of the AGT. The results showed significant correlations between the AGT subtests, which involve different visual search strategies, and both global and specific cognitive measures. Comparative visual search was associated with attention and cognitive flexibility (CTT), whereas visual searches involving pictograms correlated with global cognitive function (MoCA).

  18. On computing the global time-optimal motions of robotic manipulators in the presence of obstacles

    NASA Technical Reports Server (NTRS)

    Shiller, Zvi; Dubowsky, Steven

    1991-01-01

    A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch and bound search and a series of lower bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.
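
    A toy sketch of the pruning idea follows: best-first branch and bound over paths on a position grid, with an admissible lower bound given by straight-line distance divided by a velocity cap (ignoring dynamics). The grid, the bound, and the segment-time stand-in are our illustrative assumptions, not the paper's manipulator model.

      import heapq
      import math

      V_MAX = 1.0  # velocity cap used by the admissible lower bound

      def lower_bound(p, goal):
          return math.dist(p, goal) / V_MAX      # optimistic: ignores dynamics

      def edge_time(a, b):
          return math.dist(a, b) / V_MAX         # stand-in for a segment's optimal time

      def branch_and_bound(start, goal, neighbors):
          best_time, best_path = math.inf, None
          heap = [(lower_bound(start, goal), 0.0, start, [start])]
          best_arrival = {}                      # best time found to reach each node
          while heap:
              bound, t, p, path = heapq.heappop(heap)
              if bound >= best_time:             # prune: cannot beat the incumbent
                  continue
              if p == goal:
                  best_time, best_path = t, path
                  continue
              if t >= best_arrival.get(p, math.inf):
                  continue
              best_arrival[p] = t
              for q in neighbors(p):
                  tq = t + edge_time(p, q)
                  heapq.heappush(heap, (tq + lower_bound(q, goal), tq, q, path + [q]))
          return best_time, best_path

      # Example: 4-connected unit grid on [0, 5] x [0, 5].
      def grid_neighbors(p):
          x, y = p
          steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
          return [(x + dx, y + dy) for dx, dy in steps
                  if 0 <= x + dx <= 5 and 0 <= y + dy <= 5]

      t_opt, path = branch_and_bound((0, 0), (5, 5), grid_neighbors)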

  19. Changing contributions of stochastic and deterministic processes in community assembly over a successional gradient.

    PubMed

    Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis

    2018-01-01

    Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) were initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but the early stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. In conclusion, stochastic and individualistic species responses early in succession give way to more niche-driven dynamics in later successional stages. Grazing reduces predictability in both successional trends and species-level dynamics, especially in plant functional groups that are not well adapted to disturbance. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.

  20. MDTri: robust and efficient global mixed integer search of spaces of multiple ternary alloys: A DIRECT-inspired optimization algorithm for experimentally accessible computational material design

    DOE PAGES

    Graf, Peter A.; Billups, Stephen

    2017-07-24

    Computational materials design has suffered from a lack of algorithms formulated in terms of experimentally accessible variables. Here we formulate the problem of (ternary) alloy optimization at the level of choice of atoms and their composition that is normal for synthesists. Mathematically, this is a mixed integer problem where a candidate solution consists of a choice of three elements, and how much of each of them to use. This space has the natural structure of a set of equilateral triangles. We solve this problem by introducing a novel version of the DIRECT algorithm that (1) operates on equilateral triangles instead of rectangles and (2) works across multiple triangles. We demonstrate on a test case that the algorithm is both robust and efficient. Lastly, we offer an explanation of the efficacy of DIRECT -- specifically, its balance of global and local search -- by showing that 'potentially optimal rectangles' of the original algorithm are akin to the Pareto front of the 'multi-component optimization' of global and local search.

  1. SASS: A symmetry adapted stochastic search algorithm exploiting site symmetry

    NASA Astrophysics Data System (ADS)

    Wheeler, Steven E.; Schleyer, Paul v. R.; Schaefer, Henry F.

    2007-03-01

    A simple symmetry adapted search algorithm (SASS) exploiting point group symmetry increases the efficiency of systematic explorations of complex quantum mechanical potential energy surfaces. In contrast to previously described stochastic approaches, which do not employ symmetry, candidate structures are generated within simple point groups, such as C2, Cs, and C2v. This facilitates efficient sampling of the (3N-6)-dimensional configuration space and increases the speed and effectiveness of quantum chemical geometry optimizations. Pople's concept of framework groups [J. Am. Chem. Soc. 102, 4615 (1980)] is used to partition the configuration space into structures spanning all possible distributions of sets of symmetry equivalent atoms. This provides an efficient means of computing all structures of a given symmetry with minimum redundancy. This approach also is advantageous for generating initial structures for global optimizations via genetic algorithm and other stochastic global search techniques. Application of the SASS method is illustrated by locating 14 low-lying stationary points on the cc-pwCVDZ ROCCSD(T) potential energy surface of Li5H2. The global minimum structure is identified, along with many unique, nonintuitive, energetically favorable isomers.
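
    The core trick, sampling only an asymmetric unit and completing the structure with the group's operations, is easy to illustrate. The sketch below builds Cs-symmetric candidates from a single mirror plane; the function name, box size, and atom counts are our choices, not part of SASS.

      import numpy as np

      def random_cs_structure(n_pairs, n_on_plane, box=3.0, seed=None):
          """Random geometry with Cs symmetry: atoms off the z=0 mirror plane
          come in reflected pairs; atoms on the plane map to themselves."""
          rng = np.random.default_rng(seed)
          half = rng.uniform([-box, -box, 0.01], [box, box, box], (n_pairs, 3))
          mirrored = half * np.array([1.0, 1.0, -1.0])   # reflect z -> -z
          on_plane = rng.uniform([-box, -box, 0.0], [box, box, 0.0], (n_on_plane, 3))
          return np.vstack([half, mirrored, on_plane])

      # Only 3*n_pairs + 2*n_on_plane coordinates are free, so the sampled
      # space is smaller than the full configuration space of all atoms.
      geometry = random_cs_structure(n_pairs=2, n_on_plane=3, seed=7)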

  2. MDTri: robust and efficient global mixed integer search of spaces of multiple ternary alloys: A DIRECT-inspired optimization algorithm for experimentally accessible computational material design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Peter A.; Billups, Stephen

    Computational materials design has suffered from a lack of algorithms formulated in terms of experimentally accessible variables. Here we formulate the problem of (ternary) alloy optimization at the level of choice of atoms and their composition that is normal for synthesists. Mathematically, this is a mixed integer problem where a candidate solution consists of a choice of three elements, and how much of each of them to use. This space has the natural structure of a set of equilateral triangles. We solve this problem by introducing a novel version of the DIRECT algorithm that (1) operates on equilateral triangles instead of rectangles and (2) works across multiple triangles. We demonstrate on a test case that the algorithm is both robust and efficient. Lastly, we offer an explanation of the efficacy of DIRECT -- specifically, its balance of global and local search -- by showing that 'potentially optimal rectangles' of the original algorithm are akin to the Pareto front of the 'multi-component optimization' of global and local search.

  3. Multi-objective flexible job-shop scheduling problem using modified discrete particle swarm optimization.

    PubMed

    Huang, Song; Tian, Na; Wang, Yan; Ji, Zhicheng

    2016-01-01

    Taking resource allocation into account, the flexible job-shop scheduling problem (FJSP) is a class of complex scheduling problems in manufacturing systems. In order to utilize machine resources rationally, multi-objective particle swarm optimization (MOPSO) integrated with variable neighborhood search is introduced to address the FJSP efficiently. Firstly, assignment rules (AL) and dispatching rules (DR) are provided to initialize the population, and special discrete operators are designed to produce new individuals; the earliest completion machine (ECM) rule is adopted in the disturbance operator to escape local optima. Secondly, personal-best archives (cognitive memories) and a global-best archive (social memory), which are updated by a predefined non-dominated archive update strategy, are simultaneously designed to preserve non-dominated individuals and to select personal-best positions and the global-best position. Finally, three neighborhoods are provided to search around members of the global-best archive, enhancing local search ability. The proposed algorithm is evaluated using Kacem instances and Brdata instances, and a comparison with other approaches shows the effectiveness of the proposed algorithm for the FJSP.
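
    One named ingredient, the non-dominated archive update, can be sketched compactly. The version below is a generic minimization-based update (our simplification; the paper's predefined strategy may differ, e.g. in how it truncates a full archive):

      def dominates(a, b):
          """a dominates b if it is no worse in every objective and better in one."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def update_archive(archive, candidate, max_size=50):
          """Insert candidate if non-dominated; drop members it dominates."""
          if any(dominates(m, candidate) for m in archive):
              return archive                     # candidate is dominated: reject
          archive = [m for m in archive if not dominates(candidate, m)]
          archive.append(candidate)
          return archive[:max_size]              # crude truncation when over capacity

      # e.g. objectives = (makespan, total workload, critical machine workload)
      archive = []
      for objectives in [(90, 300, 45), (85, 310, 44), (88, 305, 43)]:
          archive = update_archive(archive, objectives)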

  4. Deterministic multidimensional nonuniform gap sampling.

    PubMed

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
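
    As a rough illustration of the deterministic idea, the sketch below replaces the Poisson draw with the rounded expected gap under a sinusoidal gap equation, then bisects the weight to approximately hit a requested point count; the particular gap equation and tuning loop are our assumptions, not the authors' published generator.

      import math

      def gap_schedule(grid_size, lam):
          """Sample indices on [0, grid_size) with gaps that grow sinusoidally."""
          idx, pos = [], 0
          while pos < grid_size:
              idx.append(pos)
              gap = lam * math.sin((pos + 0.5) / grid_size * math.pi / 2.0)
              pos += 1 + int(round(gap))         # deterministic: expected gap, no draw
          return idx

      def schedule_with_count(grid_size, n_points):
          lo, hi = 0.0, float(grid_size)         # larger lam -> larger gaps -> fewer points
          for _ in range(60):
              lam = 0.5 * (lo + hi)
              if len(gap_schedule(grid_size, lam)) > n_points:
                  lo = lam
              else:
                  hi = lam
          return gap_schedule(grid_size, 0.5 * (lo + hi))

      points = schedule_with_count(grid_size=256, n_points=64)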

  5. A Comparison of Probabilistic and Deterministic Campaign Analysis for Human Space Exploration

    NASA Technical Reports Server (NTRS)

    Merrill, R. Gabe; Andraschko, Mark; Stromgren, Chel; Cirillo, Bill; Earle, Kevin; Goodliff, Kandyce

    2008-01-01

    Human space exploration is by its very nature an uncertain endeavor. Vehicle reliability, technology development risk, budgetary uncertainty, and launch uncertainty all contribute to stochasticity in an exploration scenario. However, traditional strategic analysis has been done in a deterministic manner, analyzing and optimizing the performance of a series of planned missions. History has shown that exploration scenarios rarely follow such a planned schedule. This paper describes a methodology to integrate deterministic and probabilistic analysis of scenarios in support of human space exploration. Probabilistic strategic analysis is used to simulate "possible" scenario outcomes, based upon the likelihood of occurrence of certain events and a set of pre-determined contingency rules. The results of the probabilistic analysis are compared to the nominal results from the deterministic analysis to evaluate the robustness of the scenario to adverse events and to test and optimize contingency planning.
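
    A toy version of the probabilistic side might look like the sketch below: each mission succeeds with some probability, a contingency rule allows one re-flight at a schedule cost, and statistics over many sampled campaigns are compared with the deterministic plan in which every mission flies as scheduled. All probabilities and rules here are invented for illustration.

      import random

      def simulate_campaign(n_missions=10, p_success=0.9, slip_per_retry=0.5, rng=None):
          rng = rng or random.Random()
          completed, slip = 0, 0.0
          for _ in range(n_missions):
              if rng.random() < p_success:       # nominal flight
                  completed += 1
              elif rng.random() < p_success:     # contingency rule: one re-flight
                  completed += 1
                  slip += slip_per_retry         # re-flight consumes schedule margin
          return completed, slip

      runs = [simulate_campaign(rng=random.Random(i)) for i in range(10_000)]
      frac_full = sum(c == 10 for c, _ in runs) / len(runs)
      print(f"P(all 10 missions fly) ~ {frac_full:.2f}; deterministic plan assumes 1.0")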

  6. First Order Reliability Application and Verification Methods for Semistatic Structures

    NASA Technical Reports Server (NTRS)

    Verderaime, Vincent

    1994-01-01

    The escalating risks of aerostructures, driven by increasing size, complexity, and cost, should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high-strength materials performance. A reliability method is proposed which combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the pace of semistatic structural designs.
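
    For reference, the classical first-order safety index the abstract builds on takes the following textbook form for independent, normally distributed strength S and load L (the paper's contribution is to fold its defined uncertainty-error terms into this expression; the bare form below is not the paper's final result):

      \beta = \frac{\mu_S - \mu_L}{\sqrt{\sigma_S^2 + \sigma_L^2}},
      \qquad R = \Phi(\beta),

    where \mu and \sigma denote means and standard deviations, \Phi is the standard normal cumulative distribution function, and R is the reliability associated with the index \beta.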

  7. Apparatus for fixing latency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, David R; Bartholomew, David B; Moon, Justin

    2009-09-08

    An apparatus for fixing computational latency within a deterministic region on a network comprises a network interface modem, a high priority module and at least one deterministic peripheral device. The network interface modem is in communication with the network. The high priority module is in communication with the network interface modem. The at least one deterministic peripheral device is connected to the high priority module. The high priority module comprises a packet assembler/disassembler, and hardware for performing at least one operation. Also disclosed is an apparatus for executing at least one instruction on a downhole device within a deterministic region,more » the apparatus comprising a control device, a downhole network, and a downhole device. The control device is near the surface of a downhole tool string. The downhole network is integrated into the tool string. The downhole device is in communication with the downhole network.« less

  8. Stochastic Petri Net extension of a yeast cell cycle model.

    PubMed

    Mura, Ivan; Csikász-Nagy, Attila

    2008-10-21

    This paper presents the definition, solution and validation of a stochastic model of the budding yeast cell cycle, based on Stochastic Petri Nets (SPN). A specific family of SPNs is selected for building a stochastic version of a well-established deterministic model. We describe the procedure followed in defining the SPN model from the deterministic ODE model, a procedure that can be largely automated. The validation of the SPN model is conducted with respect to both the results provided by the deterministic one and the experimental results available from literature. The SPN model catches the behavior of the wild type budding yeast cells and a variety of mutants. We show that the stochastic model matches some characteristics of budding yeast cells that cannot be found with the deterministic model. The SPN model fine-tunes the simulation results, enriching the breadth and the quality of its outcome.
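
    Solving such a net stochastically usually means simulating the underlying continuous-time Markov chain. A minimal Gillespie-style kernel for a toy birth-death species is sketched below; the real yeast-cycle SPN has many places and transitions, and the rates here are arbitrary.

      import numpy as np

      def gillespie(x0=10, k_make=5.0, k_decay=0.5, t_end=20.0, seed=0):
          """Exact stochastic simulation of: 0 -> X (k_make), X -> 0 (k_decay*x)."""
          rng = np.random.default_rng(seed)
          t, x, traj = 0.0, x0, [(0.0, x0)]
          while t < t_end:
              rates = np.array([k_make, k_decay * x])   # transition propensities
              total = rates.sum()
              if total == 0.0:
                  break
              t += rng.exponential(1.0 / total)         # waiting time to next firing
              if rng.random() < rates[0] / total:       # pick which transition fires
                  x += 1
              else:
                  x -= 1
              traj.append((t, x))
          return traj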

  9. Effect of sample volume on metastable zone width and induction time

    NASA Astrophysics Data System (ADS)

    Kubota, Noriaki

    2012-04-01

    The metastable zone width (MSZW) and the induction time, measured for a large sample (say, >0.1 L), are reproducible and deterministic, while, for a small sample (say, <1 mL), these values are irreproducible and stochastic. Such behaviors of MSZW and induction time were theoretically discussed with both stochastic and deterministic models. Equations for the distribution of stochastic MSZW and induction time were derived. The average values of stochastic MSZW and induction time both decreased with an increase in sample volume, while the deterministic MSZW and induction time remained unchanged. Such different behaviors with variation in sample volume were explained in terms of the detection sensitivity of crystallization events. The average values of MSZW and induction time in the stochastic model were compared with the deterministic MSZW and induction time, respectively. Literature data reported for aqueous paracetamol solution were explained theoretically with the presented models.
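
    One standard single-nucleation relation consistent with these trends treats nucleation as a Poisson process with rate J per unit volume, so that for sample volume V (a textbook form, not necessarily the exact equations derived in the paper):

      \Pr(t_{\mathrm{ind}} \le t) = 1 - \exp(-J V t),
      \qquad \langle t_{\mathrm{ind}} \rangle = \frac{1}{J V},

    which makes individual induction times widely scattered when V is small and their average fall as V grows, while a detection-limited deterministic description is recovered for large volumes.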

  10. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-07

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  11. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-14

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  12. Canada's contribution to global research in cardiovascular diseases.

    PubMed

    Nguyen, Hai V; de Oliveira, Claire; Wijeysundera, Harindra C; Wong, William W L; Woo, Gloria; Grootendorst, Paul; Liu, Peter P; Krahn, Murray D

    2013-06-01

    The burden of cardiovascular disease (CVD) in Canada and other developed countries is growing, in part because of the aging of the population and the alarming rise of obesity. Studying Canada's contribution to the global body of CVD research output will shed light on the effectiveness of investments in Canadian CVD research and indicate whether Canada has been responding to its CVD burden. A search was conducted using the Web of Science database for publications from 1981 through 2010 on major areas and specific interventions in CVD. A search was also conducted using Canadian and US online databases for patents issued between 1981 and 2010. These search data were used to estimate the proportions of the world's pool of research publications and of patents conducted by researchers based in Canada. The results indicate that Canada contributed 6% of global research in CVD during 1981 through 2010. Further, Canada's contribution shows a strong upward trend during the period. Based on patent data, Canada's contribution level was similar (5%-7%). Canada's contribution to the global pool of CVD research is on par with France and close to the UK, Japan, and Germany. Canada's contribution to global CVD research is higher than its average contribution across all fields of research (6% vs 3%). As the burden of chronic diseases including CVD rises with Canada's aging population, the increase in Canadian research into CVD is encouraging. Copyright © 2013 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.

  13. Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication

    NASA Astrophysics Data System (ADS)

    Peng, Qi; Guan, Weipeng; Wu, Yuxiang; Cai, Ye; Xie, Canyu; Wang, Pengfei

    2018-01-01

    This paper proposes a three-dimensional (3-D) high-precision indoor positioning strategy using Tabu search based on visible light communication. Tabu search is a powerful global optimization algorithm, and 3-D indoor positioning can be transformed into an optimal solution problem. Therefore, in 3-D indoor positioning, the optimal receiver coordinate can be obtained by the Tabu search algorithm. To the best of our knowledge, this is the first time the Tabu search algorithm has been applied to visible light positioning. Each light-emitting diode (LED) in the system broadcasts a unique identity (ID) and transmits the ID information. When the receiver detects optical signals with ID information from different LEDs, using the global optimization of the Tabu search algorithm, 3-D high-precision indoor positioning can be realized when the fitness value meets certain conditions. Simulation results show that the average positioning error is 0.79 cm, and the maximum error is 5.88 cm. The extended experiment of trajectory tracking also shows that 95.05% of positioning errors are below 1.428 cm. It can be concluded from the data that 3-D indoor positioning based on the Tabu search algorithm achieves the requirements of centimeter-level indoor positioning. The algorithm used in indoor positioning is very effective and practical and is superior to other existing methods for visible light indoor positioning.
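
    A hedged sketch of the search itself is given below: tabu search over candidate receiver positions on a fine 3-D lattice, minimizing the mismatch between modeled and estimated LED distances. The LED layout, the distance estimates, and the least-squares fitness are placeholders; the paper's actual fitness comes from its VLC channel model.

      import itertools
      import math

      LEDS = [(0, 0, 3), (4, 0, 3), (0, 4, 3), (4, 4, 3)]   # hypothetical layout (m)
      D_EST = [3.2, 4.1, 4.1, 4.9]               # hypothetical RSS-derived distances

      def fitness(p):
          return sum((math.dist(p, led) - d) ** 2 for led, d in zip(LEDS, D_EST))

      def tabu_search(start=(2.0, 2.0, 1.0), step=0.05, iters=2000, tabu_len=50):
          moves = [m for m in itertools.product((-step, 0.0, step), repeat=3)
                   if m != (0.0, 0.0, 0.0)]
          current = best = start
          tabu = []                              # short-term memory of visited points
          for _ in range(iters):
              cands = [tuple(round(c + m, 3) for c, m in zip(current, mv))
                       for mv in moves]
              cands = [c for c in cands if c not in tabu] or cands
              current = min(cands, key=fitness)  # best admissible neighbor
              tabu.append(current)
              if len(tabu) > tabu_len:
                  tabu.pop(0)
              if fitness(current) < fitness(best):
                  best = current
          return best, fitness(best)

      position, err = tabu_search()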

  14. A statistical approach to nuclear fuel design and performance

    NASA Astrophysics Data System (ADS)

    Cunning, Travis Andrew

    As CANDU fuel failures can have significant economic and operational consequences on the Canadian nuclear power industry, it is essential that factors impacting fuel performance are adequately understood. Current industrial practice relies on deterministic safety analysis and the highly conservative "limit of operating envelope" approach, where all parameters are assumed to be at their limits simultaneously. This results in a conservative prediction of event consequences with little consideration given to the high quality and precision of current manufacturing processes. This study employs a novel approach to the prediction of CANDU fuel reliability. Probability distributions are fitted to actual fuel manufacturing datasets provided by Cameco Fuel Manufacturing, Inc. They are used to form input for two industry-standard fuel performance codes: ELESTRES for the steady-state case and ELOCA for the transient case---a hypothesized 80% reactor outlet header break loss of coolant accident. Using a Monte Carlo technique for input generation, 10^5 independent trials are conducted and probability distributions are fitted to key model output quantities. Comparing model output against recognized industrial acceptance criteria, no fuel failures are predicted for either case. Output distributions are well removed from failure limit values, implying that margin exists in current fuel manufacturing and design. To validate the results and attempt to reduce the simulation burden of the methodology, two dimensional-reduction methods are assessed. Using just 36 trials, both methods are able to produce output distributions that agree strongly with those obtained via the brute-force Monte Carlo method, often to a relative discrepancy of less than 0.3% when predicting the first statistical moment, and a relative discrepancy of less than 5% when predicting the second statistical moment. In terms of global sensitivity, pellet density proves to have the greatest impact on fuel performance, with an average sensitivity index of 48.93% on key output quantities. Pellet grain size and dish depth are also significant contributors, at 31.53% and 13.46%, respectively. A traditional limit of operating envelope case is also evaluated. This case produces output values that exceed the maximum values observed during the 10^5 Monte Carlo trials for all output quantities of interest. In many cases the difference between the predictions of the two methods is very prominent, and the highly conservative nature of the deterministic approach is demonstrated. A reliability analysis of CANDU fuel manufacturing parametric data, specifically pertaining to the quantification of fuel performance margins, has not been conducted previously. Key Words: CANDU, nuclear fuel, Cameco, fuel manufacturing, fuel modelling, fuel performance, fuel reliability, ELESTRES, ELOCA, dimensional reduction methods, global sensitivity analysis, deterministic safety analysis, probabilistic safety analysis.
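
    The sampling step is easy to illustrate. Everything below is a stand-in: the distributions are hypothetical fits (the study fits them to Cameco manufacturing data), and the response surface is a placeholder for the ELESTRES/ELOCA codes, which cannot be called here.

      import numpy as np

      rng = np.random.default_rng(42)
      N = 100_000                                # 10^5 Monte Carlo trials

      pellet_density = rng.normal(10.55, 0.03, N)       # g/cm^3, hypothetical fit
      grain_size = rng.lognormal(np.log(9.0), 0.10, N)  # micrometres, hypothetical
      dish_depth = rng.normal(0.30, 0.01, N)            # mm, hypothetical

      def fuel_response(rho, g, d):
          # Placeholder response surface standing in for the fuel-performance code.
          return 0.5 * rho + 0.2 * g - 0.8 * d

      out = fuel_response(pellet_density, grain_size, dish_depth)
      ACCEPT_LIMIT = 8.0                         # hypothetical acceptance criterion
      print(f"P(output exceeds limit) = {np.mean(out > ACCEPT_LIMIT):.1e}")
      print(f"margin at 99.9th percentile: {ACCEPT_LIMIT - np.percentile(out, 99.9):.3f}")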

  15. Comparing physically-based and statistical landslide susceptibility model outputs - a case study from Lower Austria

    NASA Astrophysics Data System (ADS)

    Canli, Ekrem; Thiebes, Benni; Petschko, Helene; Glade, Thomas

    2015-04-01

    By now there is a broad consensus that, due to human-induced global change, the frequency and magnitude of heavy precipitation events are expected to increase in certain parts of the world. Given that rainfall serves as the most common triggering agent for landslide initiation, increased landslide activity can also be expected there. Landslide occurrence is a globally widespread phenomenon that clearly needs to be handled. The present and well-known problems in modelling landslide susceptibility and hazard give uncertain results in the prediction. These include the lack of a universally applicable modelling solution for adequately assessing landslide susceptibility (which can be seen as the relative indication of the spatial probability of landslide initiation). Generally speaking, there are three major approaches for performing landslide susceptibility analysis: heuristic, statistical, and deterministic models, all with different assumptions, distinctive data requirements, and differently interpretable outcomes. Still, detailed comparisons of resulting landslide susceptibility maps are rare. In this presentation, the susceptibility modelling outputs of a deterministic model (Stability INdex MAPping - SINMAP) and a statistical modelling approach (generalized additive model - GAM) are compared. SINMAP is an infinite-slope stability model which requires parameterization of soil mechanical parameters. Modelling with the generalized additive model, which represents a non-linear extension of a generalized linear model, requires a high-quality landslide inventory that serves as the dependent variable in the statistical approach. Both methods rely on topographical data derived from the DTM. The comparison has been carried out in a study area located in the district of Waidhofen/Ybbs in Lower Austria. For the whole district (ca. 132 km²), 1063 landslides have been mapped and partially used within the analysis and the validation of the model outputs. The respective susceptibility maps have been reclassified to contain three susceptibility classes each. The comparison of the susceptibility maps was performed on a grid-cell basis. A match of the maps was observed for grid cells located in the same susceptibility class. In contrast, a mismatch or deviation was observed for locations with different assigned susceptibility classes (up to two classes' difference). Although the modelling approaches differ significantly, more than 70% of the pixels reveal a match in the same susceptibility class. A mismatch by two classes' difference occurred in less than 2% of all pixels. Although the result looks promising and strengthens the confidence in the susceptibility zonation for this area, some of the general drawbacks related to the respective approaches still have to be addressed in further detail. Future work is heading towards an integration of probabilistic aspects into deterministic modelling.
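
    The grid-cell comparison itself reduces to a cross-tabulation of two class rasters. The sketch below uses synthetic stand-ins for the SINMAP and GAM outputs (values 1-3 after reclassification):

      import numpy as np

      rng = np.random.default_rng(0)
      sinmap = rng.integers(1, 4, size=(500, 500))           # classes 1..3
      gam = np.clip(sinmap + rng.integers(-1, 2, size=sinmap.shape), 1, 3)

      diff = np.abs(sinmap - gam)                # 0 = match, 2 = two-class mismatch
      for d in range(3):
          print(f"cells differing by {d} class(es): {np.mean(diff == d):.1%}")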

  16. Seismo-induced effects in the near-earth space: Combined ground and space investigations as a contribution to earthquake prediction

    NASA Astrophysics Data System (ADS)

    Sgrigna, V.; Buzzi, A.; Conti, L.; Picozza, P.; Stagni, C.; Zilpimiani, D.

    2007-02-01

    The paper aims at giving a few methodological suggestions for deterministic earthquake prediction studies based on combined ground-based and space observations of earthquake precursors. What is still lacking is the demonstration of a causal relationship, with explained physical processes, based on correlating data gathered simultaneously and continuously by space observations and ground-based measurements. Coordinated space and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board LEO satellites. To this end, a new result reported in the paper is an original and specific space mission project (ESPERIA) and two instruments of its payload. The ESPERIA space project has been performed for the Italian Space Agency, and three ESPERIA instruments (the ARINA and LAZIO particle detectors and the EGLE search-coil magnetometer) have been built and tested in space. The EGLE experiment started on April 15, 2005 on board the ISS, within the ENEIDE mission. The launch of ARINA occurred on June 15, 2006, on board the RESURS DK-1 Russian LEO satellite. As an introduction and justification of these experiments, the paper clarifies some basic concepts and critical methodological aspects concerning deterministic and statistical approaches and their use in earthquake prediction. We also take the liberty of giving the scientific community a few critical hints based on our personal experience in the field and propose a joint study devoted to earthquake prediction and warning.

  17. A random utility model of delay discounting and its application to people with externalizing psychopathology.

    PubMed

    Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R

    2016-10-01

    Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter, a measure of delay discounting, was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference point method might be inappropriate. Second, the estimation procedure using the R^2 measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has 2 parameters, 1 for delay discounting and 1 for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
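
    The two-parameter model described above is compact enough to sketch: a hyperbolic value V = A / (1 + kD), a logistic choice rule whose sensitivity parameter captures choice variability, and maximum-likelihood fitting. The choice data below are invented for illustration, and the parameterization is our reading of the abstract, not the authors' code.

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_lik(params, a_ss, d_ss, a_ll, d_ll, chose_ll):
          k, theta = np.exp(params)              # optimize on log scale to keep > 0
          v_ss = a_ss / (1.0 + k * d_ss)         # hyperbolic discounted values
          v_ll = a_ll / (1.0 + k * d_ll)
          p_ll = 1.0 / (1.0 + np.exp(-theta * (v_ll - v_ss)))   # logistic choice rule
          p_ll = np.clip(p_ll, 1e-9, 1.0 - 1e-9)
          return -np.sum(np.where(chose_ll, np.log(p_ll), np.log(1.0 - p_ll)))

      # Hypothetical trials: $40 now versus $100 after d days.
      a_ss, d_ss = np.full(6, 40.0), np.zeros(6)
      a_ll = np.full(6, 100.0)
      d_ll = np.array([7.0, 14.0, 30.0, 60.0, 120.0, 365.0])
      chose_ll = np.array([1, 1, 1, 1, 0, 0], dtype=bool)

      fit = minimize(neg_log_lik, x0=np.log([0.01, 1.0]),
                     args=(a_ss, d_ss, a_ll, d_ll, chose_ll), method="Nelder-Mead")
      k_hat, theta_hat = np.exp(fit.x)           # low theta = high choice variability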

  18. Realistic Simulation for Body Area and Body-To-Body Networks

    PubMed Central

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D’Errico, Raffaele

    2016-01-01

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices. PMID:27104537

  19. Realistic Simulation for Body Area and Body-To-Body Networks.

    PubMed

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D'Errico, Raffaele

    2016-04-20

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices.

  20. Emergency Preparedness & Recovery News Releases - PHE

    Science.gov Websites

    Emergency preparedness and recovery news releases from the U.S. Department of Health and Human Services Public Health Emergency (PHE) website (PHE Home > Emergency Preparedness).

  1. The Solar System Large Planets influence on a new Maunder Minimum

    NASA Astrophysics Data System (ADS)

    Yndestad, Harald; Solheim, Jan-Erik

    2016-04-01

    In the 1890s, G. Spörer and E. W. Maunder reported that solar activity stopped for a period of 70 years, from 1645 to 1715. Later reconstructions of solar activity confirm the grand minima Maunder (1640-1720), Spörer (1390-1550), and Wolf (1270-1340), and the minima Oort (1010-1070) and Dalton (1785-1810) since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with less irradiation from the Sun and cold climate periods on Earth. The identification of three grand Maunder-type periods and two Dalton-type periods within a thousand years indicates that sooner or later there will be a colder climate on Earth from a new Maunder- or Dalton-type period. The cause of these minimum periods is not well understood. An expected new Maunder-type period is based on the properties of solar variability. If the solar variability has a deterministic element, we can better estimate a new Maunder grand minimum; a purely random solar variability can only explain the past. This investigation is based on the simple idea that if the solar variability has a deterministic property, it must have a deterministic source as a first cause. If this deterministic source is known, we can compute better estimates of the next expected Maunder grand minimum period. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., a sunspot data series from 1611, and a Solar Barycenter orbit data series from 1000. The analysis method is based on wavelet spectrum analysis, to identify stationary periods, coincidence periods, and their phase relations. The result shows that the TSI variability and the sunspot variability have deterministic oscillations, controlled by the large planets Jupiter, Uranus and Neptune as the first cause. A deterministic model of TSI variability and sunspot variability confirms the known minimum and grand minimum periods since 1000. From this deterministic model we may expect a new Maunder-type sunspot minimum period from about 2018 to 2055. The deterministic model of the TSI ACRIM data series from 1700 computes a new Maunder-type grand minimum period from 2015 to 2071. A model of the longer TSI ACRIM data series from 1000 computes a new Dalton- to Maunder-type minimum irradiation period from 2047 to 2068.

  2. Visual search in scenes involves selective and non-selective pathways

    PubMed Central

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: a “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global/statistical information. PMID:21227734

  3. Increasing atmospheric CO2 overrides the historical legacy of multiple stable biome states in Africa.

    PubMed

    Moncrieff, Glenn R; Scheiter, Simon; Bond, William J; Higgins, Steven I

    2014-02-01

    The dominant vegetation over much of the global land surface is not predetermined by contemporary climate, but also influenced by past environmental conditions. This confounds attempts to predict current and future biome distributions, because even a perfect model would project multiple possible biomes without knowledge of the historical vegetation state. Here we compare the distribution of tree- and grass-dominated biomes across Africa simulated using a dynamic global vegetation model (DGVM). We explicitly evaluate where and under what conditions multiple stable biome states are possible for current and projected future climates. Our simulation results show that multiple stable biome states are possible for vast areas of tropical and subtropical Africa under current conditions. Widespread loss of the potential for multiple stable biome states is projected in the 21st Century, driven by increasing atmospheric CO2 . Many sites where currently both tree-dominated and grass-dominated biomes are possible become deterministically tree-dominated. Regions with multiple stable biome states are widespread and require consideration when attempting to predict future vegetation changes. Testing for behaviour characteristic of systems with multiple stable equilibria, such as hysteresis and dependence on historical conditions, and the resulting uncertainty in simulated vegetation, will lead to improved projections of global change impacts. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.

  4. Effects of deterministic and random refuge in a prey-predator model with parasite infection.

    PubMed

    Mukhopadhyay, B; Bhattacharyya, R

    2012-09-01

    Most natural ecosystem populations suffer from various infectious diseases, and the resulting host-pathogen dynamics depend on the host's characteristics. On the other hand, empirical evidence shows that for most host-pathogen systems, a part of the host population always forms a refuge. To study the role of refuge in the host-pathogen interaction, we study a predator-prey-pathogen model where the susceptible and the infected prey can enter refuges of constant size to evade predator attack. The stability aspects of the model system are investigated from a local and global perspective. The study reveals that the refuge sizes for the susceptible and the infected prey are the key parameters that control possible predator extinction as well as species co-existence. Next we perform a global study of the model system using Lyapunov functions and show the existence of a global attractor. Finally we perform a stochastic extension of the basic model to study the phenomenon of random refuge arising from various intrinsic, habitat-related and environmental factors. The stochastic model is analyzed for exponential mean square stability. Numerical study of the stochastic model shows that increasing the refuge rates has a stabilizing effect on the stochastic dynamics.
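
    The following minimal sketch (illustrative equations and parameters, not the paper's exact system) shows how a constant refuge m enters a predator-prey model: only the prey outside the refuge, max(x - m, 0), is exposed to predation.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y, r=1.0, K=10.0, a=0.8, b=0.5, d=0.4, m=2.0):
        """Logistic prey x with a constant refuge of size m; predator p."""
        x, p = y
        exposed = max(x - m, 0.0)            # prey outside the refuge
        dx = r * x * (1 - x / K) - a * exposed * p
        dp = b * a * exposed * p - d * p
        return [dx, dp]

    sol = solve_ivp(rhs, (0.0, 200.0), [5.0, 1.0])
    print("final (prey, predator):", sol.y[:, -1])
    ```

    Varying m in such a toy model reproduces the behavior the abstract describes: a large enough refuge starves the predator toward extinction, while intermediate sizes permit coexistence.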

  5. British American Tobacco on Facebook: undermining Article 13 of the global World Health Organization Framework Convention on Tobacco Control.

    PubMed

    Freeman, Becky; Chapman, Simon

    2010-06-01

    The World Health Organization Framework Convention on Tobacco Control (WHO FCTC) bans all forms of tobacco advertising, promotion and sponsorship. The comprehensiveness of this ban has yet to be tested by online social networking media such as Facebook. In this paper, the activities of employees of the transnational tobacco company British American Tobacco (BAT) on Facebook and the type of content associated with two globally popular BAT brands (Dunhill and Lucky Strike) are mapped. BAT employees on Facebook were identified and then the term 'British American Tobacco' was searched for in the Facebook search engine and results recorded, including titles, descriptions, names and the number of Facebook participants involved for each search result. To further detail any potential promotional activities, a search for two of BAT's global brands, 'Dunhill' and 'Lucky Strike', was conducted. Each of the 3 search terms generated more than 500 items across a variety of Facebook subsections. Some BAT employees are energetically promoting BAT and BAT brands on Facebook through joining and administrating groups, joining pages as fans and posting photographs of BAT events, products and promotional items. BAT employees undertaking these actions are from countries that have ratified the WHO FCTC, which requires signatories to ban all forms of tobacco advertising, including online and cross-border exposure from countries that are not enforcing advertising restrictions. The results of the present research could be used to test the comprehensiveness of the advertising ban by requesting that governments mandate the removal of this promotional material from Facebook.

  6. British American Tobacco on Facebook: undermining article 13 of the global World Health Organization Framework Convention on Tobacco Control

    PubMed Central

    Chapman, Simon

    2010-01-01

    Background The World Health Organization Framework Convention on Tobacco Control (WHO FCTC) bans all forms of tobacco advertising, promotion and sponsorship. The comprehensiveness of this ban has yet to be tested by online social networking media such as Facebook. In this paper, the activities of employees of the transnational tobacco company British American Tobacco (BAT) on Facebook and the type of content associated with two globally popular BAT brands (Dunhill and Lucky Strike) are mapped. Methods BAT employees on Facebook were identified and then the term ‘British American Tobacco’ was searched for in the Facebook search engine and results recorded, including titles, descriptions, names and the number of Facebook participants involved for each search result. To further detail any potential promotional activities, a search for two of BAT's global brands, ‘Dunhill’ and ‘Lucky Strike’, was conducted. Results Each of the 3 search terms generated more than 500 items across a variety of Facebook subsections. Discussion Some BAT employees are energetically promoting BAT and BAT brands on Facebook through joining and administrating groups, joining pages as fans and posting photographs of BAT events, products and promotional items. BAT employees undertaking these actions are from countries that have ratified the WHO FCTC, which requires signatories to ban all forms of tobacco advertising, including online and cross-border exposure from countries that are not enforcing advertising restrictions. The results of the present research could be used to test the comprehensiveness of the advertising ban by requesting that governments mandate the removal of this promotional material from Facebook. PMID:20395406

  7. Easing the Discovery of NASA and International Near-Real-Time Data Using the Global Change Master Directory

    NASA Technical Reports Server (NTRS)

    Olsen, Lola; Morahan, Michael; Aleman, Alicia; Cepero, Laurel; Stevens, Tyler; Ritz, Scott; Holland, Monica

    2011-01-01

    The Global Change Master Directory (GCMD) provides an extensive directory of descriptive and spatial information about data sets and data-related services, which are relevant to Earth science research. The directory's data discovery components include controlled keywords, free-text searches, and map/date searches. The GCMD portal for NASA's Land Atmosphere Near-real-time Capability for EOS (LANCE) data products leverages these discovery features by providing users a direct route to NASA's Near-Real-Time (NRT) collections. This portal offers direct access to collection entries by instrument name, informing users of the availability of data. After a relevant collection entry is found through the GCMD's search components, the "Get Data" URL within the entry directs the user to the desired data. http://gcmd.nasa.gov/r/p/gcmd_lance_nrt.

  8. Search of exploration opportunity for near earth objects based on analytical gradients

    NASA Astrophysics Data System (ADS)

    Ren, Y.; Cui, P. Y.; Luan, E. J.

    2008-01-01

    The problem of searching for exploration opportunities for near Earth objects is investigated. For rendezvous missions, the analytical gradients of the performance index with respect to the free parameters are derived by combining the calculus of variations with the theory of the state-transition matrix. Some initial guesses are then generated randomly in the search space, and the performance index is optimized from these initial guesses under the guidance of the analytical gradients. This method not only keeps the global search property of the traditional method, but also avoids the blindness of the traditional exploration opportunity search; hence, the computing speed can be increased greatly. Furthermore, by using this method, the search precision can be controlled effectively.
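
    A generic sketch of this multi-start, gradient-guided strategy is given below on a toy objective (the actual performance index and its analytical gradients come from the orbital-mechanics derivation and are not reproduced here).

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def f(x):                       # toy multimodal performance index
        return np.sum(x ** 2) + 3.0 * np.sum(np.sin(5.0 * x))

    def grad_f(x):                  # its analytical gradient
        return 2.0 * x + 15.0 * np.cos(5.0 * x)

    rng = np.random.default_rng(0)
    starts = rng.uniform(-2.0, 2.0, size=(20, 2))   # random initial guesses
    best = min((minimize(f, x0, jac=grad_f, method="BFGS") for x0 in starts),
               key=lambda res: res.fun)
    print("best point:", best.x, "value:", best.fun)
    ```

    Supplying the analytical gradient via `jac` spares the optimizer costly finite-difference evaluations, which is the speed-up the abstract describes.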

  9. Residual vibration control based on a global search method in a high-speed white light scanning interferometer.

    PubMed

    Song, Zhenyuan; Guo, Tong; Fu, Xing; Hu, Xiaotang

    2018-05-01

    To achieve high-speed measurements using white light scanning interferometers, the scanning devices used need to have high feedback gain in closed-loop operations. However, flexure hinges induce a residual vibration that can cause a misidentification of the fringe order. The reduction of this residual vibration is crucial because the highly nonlinear distortions in interferograms lead to clearly incorrect measured profiles. Input shaping can be used to control the amplitude of the residual vibration. The conventional method uses continuous wavelet transform (CWT) to estimate parameters of the scanning device. Our proposed method extracts equivalent modal parameters using a global search algorithm. Due to its simplicity, ease of implementation, and response speed, this global search method outperforms CWT. The delay time is shortened by searching, because fewer modes are needed for the shaper. The effectiveness of the method has been confirmed by the agreement between simulated shaped responses and experimental displacement information from the capacitive sensor inside the scanning device, and the intensity profiles of the interferometer have been greatly improved. An experiment measuring the surface of a silicon wafer is also presented. The method is shown to be effective at improving the intensity profiles and recovering accurate surface topography. Finally, frequency localizations are found to be almost stable with different proportional gains, but their energy distributions change.

  10. A weighted sampling algorithm for the design of RNA sequences with targeted secondary structure and nucleotide distribution.

    PubMed

    Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme

    2013-07-01

    The design of RNA sequences folding into predefined secondary structures is a milestone for many synthetic biology and gene therapy studies. Most of the current software uses similar local search strategies (i.e. a random seed is progressively adapted to acquire the desired folding properties) and, more importantly, does not allow the user to explicitly control the nucleotide distribution, such as the GC-content, in their sequences. However, the latter is an important criterion for large-scale applications, as it could presumably be used to design sequences with better transcription rates and/or structural plasticity. In this article, we introduce IncaRNAtion, a novel algorithm to design RNA sequences folding into target secondary structures with a predefined nucleotide distribution. IncaRNAtion uses a global sampling approach and weighted sampling techniques. We show that our approach is fast (i.e. running time comparable to or better than local search methods), seedless (we remove the bias of the seed in local search heuristics) and successfully generates high-quality sequences (i.e. thermodynamically stable) for any GC-content. To complete this study, we develop a hybrid method combining our global sampling approach with local search strategies. Remarkably, our glocal methodology outperforms both local and global approaches for sampling sequences with a specific GC-content and target structure. IncaRNAtion is available at csb.cs.mcgill.ca/incarnation/. Supplementary data are available at Bioinformatics online.
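
    The global weighted-sampling idea can be illustrated with a much simpler toy than IncaRNAtion itself: choose nucleotide weights so that the expected GC-content of sampled sequences matches a user-specified target (structure constraints are omitted in this sketch).

    ```python
    import numpy as np

    def sample_sequences(length, gc_target, n_seqs, seed=0):
        """Sample RNA sequences whose expected GC-content is gc_target."""
        rng = np.random.default_rng(seed)
        # weights: split gc_target between G and C, the rest between A and U
        p = [gc_target / 2, gc_target / 2,
             (1 - gc_target) / 2, (1 - gc_target) / 2]
        alphabet = np.array(list("GCAU"))
        idx = rng.choice(4, size=(n_seqs, length), p=p)
        return ["".join(row) for row in alphabet[idx]]

    seqs = sample_sequences(length=50, gc_target=0.6, n_seqs=3)
    gc = (seqs[0].count("G") + seqs[0].count("C")) / 50
    print(seqs[0], "GC =", gc)
    ```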

  11. Technologies That Assess the Location of Physical Activity and Sedentary Behavior: A Systematic Review.

    PubMed

    Loveday, Adam; Sherar, Lauren B; Sanders, James P; Sanderson, Paul W; Esliger, Dale W

    2015-08-05

    The location in which physical activity and sedentary behavior are performed can provide valuable behavioral information, both in isolation and synergistically with other areas of physical activity and sedentary behavior research. Global positioning systems (GPS) have been used in physical activity research to identify outdoor location; however, while GPS can receive signals in certain indoor environments, it is not able to provide room- or subroom-level location. On average, adults spend a high proportion of their time indoors. A measure of indoor location would, therefore, provide valuable behavioral information. This systematic review sought to identify and critique technology which has been or could be used to assess the location of physical activity and sedentary behavior. To identify published research papers, four electronic databases were searched using key terms built around behavior, technology, and location. To be eligible for inclusion, papers were required to be published in English and describe a wearable or portable technology or device capable of measuring location. Searches were performed up to February 4, 2015. This was supplemented by backward and forward reference searching. In an attempt to include novel devices which may not yet have made their way into the published research, searches were also performed using three Internet search engines. Specialized software was used to download search results and thus mitigate the potential pitfalls of changing search algorithms. A total of 188 research papers met the inclusion criteria. Global positioning systems were the most widely used location technology in the published research, followed by wearable cameras, and radio-frequency identification. Internet search engines identified 81 global positioning systems, 35 real-time locating systems, and 21 wearable cameras. Real-time locating systems determine the indoor location of a wearable tag via the known location of reference nodes. Although the type of reference node and location determination method varies between manufacturers, Wi-Fi appears to be the most popular method. The addition of location information to existing measures of physical activity and sedentary behavior will provide important behavioral information.

  12. Technologies That Assess the Location of Physical Activity and Sedentary Behavior: A Systematic Review

    PubMed Central

    Sherar, Lauren B; Sanders, James P; Sanderson, Paul W; Esliger, Dale W

    2015-01-01

    Background The location in which physical activity and sedentary behavior are performed can provide valuable behavioral information, both in isolation and synergistically with other areas of physical activity and sedentary behavior research. Global positioning systems (GPS) have been used in physical activity research to identify outdoor location; however, while GPS can receive signals in certain indoor environments, it is not able to provide room- or subroom-level location. On average, adults spend a high proportion of their time indoors. A measure of indoor location would, therefore, provide valuable behavioral information. Objective This systematic review sought to identify and critique technology which has been or could be used to assess the location of physical activity and sedentary behavior. Methods To identify published research papers, four electronic databases were searched using key terms built around behavior, technology, and location. To be eligible for inclusion, papers were required to be published in English and describe a wearable or portable technology or device capable of measuring location. Searches were performed up to February 4, 2015. This was supplemented by backward and forward reference searching. In an attempt to include novel devices which may not yet have made their way into the published research, searches were also performed using three Internet search engines. Specialized software was used to download search results and thus mitigate the potential pitfalls of changing search algorithms. Results A total of 188 research papers met the inclusion criteria. Global positioning systems were the most widely used location technology in the published research, followed by wearable cameras, and radio-frequency identification. Internet search engines identified 81 global positioning systems, 35 real-time locating systems, and 21 wearable cameras. Real-time locating systems determine the indoor location of a wearable tag via the known location of reference nodes. Although the type of reference node and location determination method varies between manufacturers, Wi-Fi appears to be the most popular method. Conclusions The addition of location information to existing measures of physical activity and sedentary behavior will provide important behavioral information. PMID:26245157

  13. Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and the material removal rate extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.

  14. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  15. Deterministic and efficient quantum cryptography based on Bell's theorem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Zengbing; Pan, Jianwei (Physikalisches Institut, Universitaet Heidelberg, Philosophenweg 12, 69120 Heidelberg)

    2006-05-15

    We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology.

  16. Heart rate variability as determinism with jump stochastic parameters.

    PubMed

    Zheng, Jiongxuan; Skufca, Joseph D; Bollt, Erik M

    2013-08-01

    We use measured heart rate information (RR intervals) to develop a one-dimensional nonlinear map that describes short-term deterministic behavior in the data. Our study suggests that there is a stochastic parameter with persistence which causes the heart rate and rhythm system to wander about a bifurcation point. We propose a modified circle map with a jump-process noise term as a model that can qualitatively capture this behavior of low-dimensional transient determinism with occasional (stochastically defined) jumps from one deterministic system to another within a one-parameter family of deterministic systems.
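
    A minimal sketch in the spirit of this model (parameter values are illustrative, not fitted to RR data) is a standard circle map whose rotation parameter performs occasional persistent jumps, switching the dynamics within a one-parameter family.

    ```python
    import numpy as np

    def circle_map_with_jumps(n, K=1.0, omega0=0.28, jump_prob=0.01,
                              jump_scale=0.05, seed=0):
        """Circle map iterates with rare, persistent jumps of omega."""
        rng = np.random.default_rng(seed)
        theta, omega = 0.1, omega0
        out = np.empty(n)
        for i in range(n):
            if rng.random() < jump_prob:         # rare parameter jump
                omega += jump_scale * rng.standard_normal()
            theta = (theta + omega
                     - K / (2 * np.pi) * np.sin(2 * np.pi * theta)) % 1.0
            out[i] = theta
        return out

    series = circle_map_with_jumps(5000)   # RR-interval-like surrogate
    ```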

  17. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead-based applications

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-04-01

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead-encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin-biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results showed that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules.

  18. Stochastic assembly in a subtropical forest chronosequence: evidence from contrasting changes of species, phylogenetic and functional dissimilarity over succession.

    PubMed

    Mi, Xiangcheng; Swenson, Nathan G; Jia, Qi; Rao, Mide; Feng, Gang; Ren, Haibao; Bebber, Daniel P; Ma, Keping

    2016-09-07

    Deterministic and stochastic processes jointly determine the community dynamics of forest succession. However, it has been widely held in previous studies that deterministic processes dominate forest succession. Furthermore, inference of mechanisms for community assembly may be misleading if based on a single axis of diversity alone. In this study, we evaluated the relative roles of deterministic and stochastic processes along a disturbance gradient by integrating species, functional, and phylogenetic beta diversity in a subtropical forest chronosequence in Southeastern China. We found a general pattern of increasing species turnover, but little-to-no change in phylogenetic and functional turnover over succession at two spatial scales. Meanwhile, the phylogenetic and functional beta diversity were not significantly different from random expectation. This result suggested a dominance of stochastic assembly, contrary to the general expectation that deterministic processes dominate forest succession. On the other hand, we found significant interactions of environment and disturbance and limited evidence for significant deviations of phylogenetic or functional turnover from random expectations for different size classes. This result provided weak evidence of deterministic processes over succession. Stochastic assembly of forest succession suggests that post-disturbance restoration may be largely unpredictable and difficult to control in subtropical forests.

  19. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead-based applications.

    PubMed

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-04-10

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead-encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin-biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results showed that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules.

  20. Discrete-State Stochastic Models of Calcium-Regulated Calcium Influx and Subspace Dynamics Are Not Well-Approximated by ODEs That Neglect Concentration Fluctuations

    PubMed Central

    Weinberg, Seth H.; Smith, Gregory D.

    2012-01-01

    Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted “domains” associated with calcium influx are small enough (e.g., 10−17 liters) that local signaling may involve 1–100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume is unrealistically large and/or the kinetics of the calcium binding are sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers. PMID:23509597
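
    The paper's central point can be reproduced with a toy birth-death process (illustrative rates, not the published channel model): when the influx rate depends nonlinearly on the copy number n, the stochastic mean deviates from the ODE prediction because E[f(n)] differs from f(E[n]).

    ```python
    import numpy as np

    def k_in(n):                      # nonlinear, "calcium-regulated" influx
        return 50.0 * n ** 2 / (n ** 2 + 100.0) + 1.0

    K_OUT = 1.0                       # linear efflux rate constant

    def gillespie_mean(t_end=20.0, n0=0, reps=200, seed=0):
        """Mean copy number at t_end over many exact SSA realizations."""
        rng = np.random.default_rng(seed)
        finals = []
        for _ in range(reps):
            t, n = 0.0, n0
            while True:
                rates = (k_in(n), K_OUT * n)
                total = rates[0] + rates[1]
                t += rng.exponential(1.0 / total)
                if t > t_end:
                    break
                n += 1 if rng.random() < rates[0] / total else -1
            finals.append(n)
        return np.mean(finals)

    # Compare with the deterministic fixed point n* solving k_in(n) = n:
    print("stochastic mean:", gillespie_mean())
    ```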

  1. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead–based applications

    PubMed Central

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-01-01

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead–encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin–biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results showed that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules. PMID:28393911

  2. Mixing Single Scattering Properties in Vector Radiative Transfer for Deterministic and Stochastic Solutions

    NASA Astrophysics Data System (ADS)

    Mukherjee, L.; Zhai, P.; Hu, Y.; Winker, D. M.

    2016-12-01

    Among the primary factors that determine the polarized radiation field of a turbid medium are the single scattering properties of the medium. When multiple types of scatterers are present, their single scattering properties need to be properly mixed in order to find solutions to the vector radiative transfer (VRT) theory. VRT solvers can be divided into two types: deterministic and stochastic. A deterministic solver can only accept one set of single scattering properties in its smallest discretized spatial volume, so when the medium contains more than one kind of scatterer, their single scattering properties are averaged and then used as input for the deterministic solver. A stochastic solver, by contrast, can work with different kinds of scatterers explicitly. In this work, two different mixing schemes are studied using the Successive Order of Scattering (SOS) method and Monte Carlo (MC) methods: one scheme is used for the deterministic solver and the other for the stochastic Monte Carlo method. It is found that the solutions from the two VRT solvers using the two different mixing schemes agree with each other extremely well. This confirms the equivalence of the two mixing schemes and also provides a benchmark VRT solution for the medium studied.

  3. A top-down approach to projecting market impacts of climate change

    NASA Astrophysics Data System (ADS)

    Lemoine, Derek; Kapnick, Sarah

    2016-01-01

    To evaluate policies to reduce greenhouse-gas emissions, economic models require estimates of how future climate change will affect well-being. So far, nearly all estimates of the economic impacts of future warming have been developed by combining estimates of impacts in individual sectors of the economy. Recent work has used variation in warming over time and space to produce top-down estimates of how past climate and weather shocks have affected economic output. Here we propose a statistical framework for converting these top-down estimates of past economic costs of regional warming into projections of the economic cost of future global warming. Combining the latest physical climate models, socioeconomic projections, and economic estimates of past impacts, we find that future warming could raise the expected rate of economic growth in richer countries, reduce the expected rate of economic growth in poorer countries, and increase the variability of growth by increasing the climate's variability. This study suggests we should rethink the focus on global impacts and the use of deterministic frameworks for modelling impacts and policy.

  4. Evaluation of ensemble precipitation forecasts generated through post-processing in a Canadian catchment

    NASA Astrophysics Data System (ADS)

    Jha, Sanjeev K.; Shrestha, Durga L.; Stadnyk, Tricia A.; Coulibaly, Paulin

    2018-03-01

    Flooding in Canada is often caused by heavy rainfall during the snowmelt period. Hydrologic forecast centers rely on precipitation forecasts obtained from numerical weather prediction (NWP) models to force hydrological models for streamflow forecasting. The uncertainties in raw quantitative precipitation forecasts (QPFs) are enhanced by physiography and orography effects over a diverse landscape, particularly in the western catchments of Canada. A Bayesian post-processing approach called rainfall post-processing (RPP), developed in Australia (Robertson et al., 2013; Shrestha et al., 2015), has been applied to assess its forecast performance in a Canadian catchment. Raw QPFs obtained from two sources, the Global Ensemble Forecast System (GEFS) Reforecast 2 project from the National Centers for Environmental Prediction, and the Global Deterministic Prediction System (GDPS) from Environment and Climate Change Canada, are used in this study. The study period from January 2013 to December 2015 covered a major flood event in Calgary, Alberta, Canada. Post-processed results show that the RPP is able to remove the bias and reduce the errors of both GEFS and GDPS forecasts. Ensembles generated from the RPP reliably quantify the forecast uncertainty.

  5. A multiscale climate emulator for long-term morphodynamics (MUSCLE-morpho)

    NASA Astrophysics Data System (ADS)

    Antolínez, José Antonio A.; Méndez, Fernando J.; Camus, Paula; Vitousek, Sean; González, E. Mauricio; Ruggiero, Peter; Barnard, Patrick

    2016-01-01

    Interest in understanding long-term coastal morphodynamics has recently increased as climate change impacts become perceptible and accelerated. Multiscale, behavior-oriented and process-based models, or hybrids of the two, are typically applied with deterministic approaches which require considerable computational effort. In order to reduce the computational cost of modeling large spatial and temporal scales, input reduction and morphological acceleration techniques have been developed. Here we introduce a general framework for reducing dimensionality of wave-driver inputs to morphodynamic models. The proposed framework seeks to account for dependencies with global atmospheric circulation fields and deals simultaneously with seasonality, interannual variability, long-term trends, and autocorrelation of wave height, wave period, and wave direction. The model is also able to reproduce future wave climate time series accounting for possible changes in the global climate system. An application of long-term shoreline evolution is presented by comparing the performance of the real and the simulated wave climate using a one-line model. This article was corrected on 2 FEB 2016. See the end of the full text for details.

  6. The Tunneling Method for Global Optimization in Multidimensional Scaling.

    ERIC Educational Resources Information Center

    Groenen, Patrick J. F.; Heiser, Willem J.

    1996-01-01

    A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS value. (SLD)
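
    A one-dimensional sketch of the tunneling idea (toy function, not the STRESS function of multidimensional scaling): after a local minimum x* is found, minimize a tunneling function that divides out the pole at x*, so the search is pushed toward configurations whose value does not exceed f(x*).

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        x = np.atleast_1d(x)[0]
        return np.sin(3.0 * x) + 0.1 * x ** 2     # toy multimodal loss

    def tunnel(x, x_star, f_star, lam=1.0):
        x = np.atleast_1d(x)[0]
        return (f(x) - f_star) / (abs(x - x_star) ** lam + 1e-12)

    x_star = minimize(f, x0=[2.0]).x[0]           # local search step
    f_star = f(x_star)
    # tunneling step: start away from the pole and seek tunnel(x) <= 0,
    # i.e. a different configuration with f(x) <= f(x_star)
    x_new = minimize(tunnel, x0=[x_star + 0.5], args=(x_star, f_star)).x[0]
    print(x_star, f_star, "->", x_new, f(x_new))
    ```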

  7. Suicide rates and information seeking via search engines: A cross-national correlational approach.

    PubMed

    Arendt, Florian

    2018-09-01

    The volume of Google searches for suicide-related terms is positively associated with suicide rates, but previous studies used data from specific, restricted geographical contexts, thus, limiting the generalizability of this finding. We investigated the correlation between suicide-related search volume and suicide rates of 50 nations from five continents. We found a positive correlation between suicide rates and search volume, even after controlling for the level of industrialization. Results give credence to the global existence of a correlation. However, the reason why suicide-related search volume is higher in countries with higher suicide rates is still unclear and up to future research.

  8. Deterministic models for traffic jams

    NASA Astrophysics Data System (ADS)

    Nagel, Kai; Herrmann, Hans J.

    1993-10-01

    We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car.
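
    A deterministic one-dimensional traffic update with integer positions and velocities can be sketched as follows (a noise-free Nagel-Schreckenberg-type rule, offered as an illustration rather than the paper's exact model).

    ```python
    import numpy as np

    def step(pos, vel, road_len, v_max=5):
        """One synchronous update on a circular road of road_len cells."""
        order = np.argsort(pos)
        pos, vel = pos[order], vel[order]
        gaps = (np.roll(pos, -1) - pos - 1) % road_len      # empty cells ahead
        vel = np.minimum(np.minimum(vel + 1, v_max), gaps)  # accelerate/brake
        return (pos + vel) % road_len, vel

    rng = np.random.default_rng(1)
    road_len, n_cars = 100, 30
    pos = rng.choice(road_len, size=n_cars, replace=False)
    vel = np.zeros(n_cars, dtype=int)
    for _ in range(200):
        pos, vel = step(pos, vel, road_len)
    print("mean velocity:", vel.mean())  # drops sharply above a critical density
    ```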

  9. Extended Range Prediction of Indian Summer Monsoon: Current status

    NASA Astrophysics Data System (ADS)

    Sahai, A. K.; Abhilash, S.; Borah, N.; Joseph, S.; Chattopadhyay, R.; S, S.; Rajeevan, M.; Mandal, R.; Dey, A.

    2014-12-01

    The main focus of this study is to develop a forecast consensus for the extended range prediction (ERP) of monsoon intraseasonal oscillations using a suite of different variants of the Climate Forecast System (CFS) model. In this CFS-based Grand MME prediction system (CGMME), the ensemble members are generated by perturbing the initial condition and using different configurations of CFSv2, in order to address the role of different physical mechanisms known to control error growth in the ERP on the 15-20 day time scale. The final formulation of CGMME is based on 21 ensembles of the standalone Global Forecast System (GFS) forced with bias-corrected forecast SST from CFS, 11 low-resolution CFST126, and 11 high-resolution CFST382. Thus, we develop the multi-model consensus forecast for the ERP of the Indian summer monsoon (ISM) using a suite of different variants of the CFS model. This coordinated international effort leads to the development of specific tailor-made regional forecast products over the Indian region. The skill of deterministic and probabilistic categorical rainfall forecasts, as well as the verification of large-scale low-frequency monsoon intraseasonal oscillations, has been assessed using hindcasts from 2001-2012 for the monsoon season, in which all models are initialized every five days from 16 May to 28 September. The skill of the deterministic forecast from CGMME is better than that of the best participating single model ensemble configuration (SME). The CGMME approach is believed to quantify the uncertainty in both initial conditions and model formulation. The main improvement is attained in the probabilistic forecast, because of an increase in the ensemble spread, which reduces the error due to over-confident ensembles in a single model configuration. For the probabilistic forecast, three tercile ranges are determined by a ranking method based on the percentage of ensemble members from all participating models falling in those three categories. CGMME adds further value to both deterministic and probabilistic forecasts compared to the raw SMEs, and this better skill probably flows from the larger spread and improved spread-error relationship. The CGMME system is currently capable of generating ER predictions in real time and has successfully delivered its experimental operational ER forecast of the ISM for the last few years.
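
    The tercile probability assignment described above can be sketched as a simple ranking count (hypothetical climatology and member values; operational thresholds would come from the hindcast climatology).

    ```python
    import numpy as np

    def tercile_probs(members, climo):
        """P(below), P(normal), P(above) from ensemble member counts."""
        t1, t2 = np.percentile(climo, [100 / 3, 200 / 3])  # tercile bounds
        members = np.asarray(members, float)
        return np.array([(members < t1).mean(),
                         ((members >= t1) & (members <= t2)).mean(),
                         (members > t2).mean()])

    rng = np.random.default_rng(0)
    climo = rng.gamma(2.0, 5.0, size=1000)  # hypothetical rainfall climatology
    members = rng.gamma(2.5, 5.0, size=43)  # 21 GFS + 11 T126 + 11 T382 members
    print(tercile_probs(members, climo))
    ```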

  10. Model selection for integrated pest management with stochasticity.

    PubMed

    Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel

    2018-04-07

    In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors here have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determined conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Resolution improvement of 3D stereo-lithography through the direct laser trajectory programming: Application to microfluidic deterministic lateral displacement device.

    PubMed

    Juskova, Petra; Ollitrault, Alexis; Serra, Marco; Viovy, Jean-Louis; Malaquin, Laurent

    2018-02-13

    The vast majority of current microfluidic devices are produced using soft lithography, a technique with strong limitations regarding the fabrication of three-dimensional architectures. Additive manufacturing holds great promise to overcome these limitations, but conventional machines still lack the resolution required by most microfluidic applications. 3D printing machines based on two-photon lasers, in contrast, have the needed resolution but are severely limited in speed and overall device size. Here we demonstrate how the resolution of conventional stereolithographic machines can be improved by direct programming of the laser path, helping to bridge the gap between the two above technologies and allowing the direct printing of features between 10 and 100 μm, a range covering a large fraction of microfluidic applications. This strategy achieves resolutions limited only by the physical size of the laser beam, decreasing the size of the smallest printable features by a factor of at least 2 and increasing their reproducibility by a factor of 5. The approach was applied to produce an open microfluidic device with a reversible seal, integrating periodic patterns built from simple motifs, and was validated by the fabrication of a deterministic lateral displacement particle-sorting device. The sorting of polystyrene beads (diameters: 20 μm and 45 μm) was achieved with a specificity >95%, comparable with that achieved with arrays prepared by microlithography. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Stochastic dynamics and non-equilibrium thermodynamics of a bistable chemical system: the Schlögl model revisited.

    PubMed

    Vellela, Melissa; Qian, Hong

    2009-10-06

    Schlögl's model is the canonical example of a chemical reaction system that exhibits bistability. Because the biological examples of bistability and switching behaviour are increasingly numerous, this paper presents an integrated deterministic, stochastic and thermodynamic analysis of the model. After a brief review of the deterministic and stochastic modelling frameworks, the concepts of chemical and mathematical detailed balances are discussed and non-equilibrium conditions are shown to be necessary for bistability. Thermodynamic quantities such as the flux, chemical potential and entropy production rate are defined and compared across the two models. In the bistable region, the stochastic model exhibits an exchange of the global stability between the two stable states under changes in the pump parameters and volume size. The stochastic entropy production rate shows a sharp transition that mirrors this exchange. A new hybrid model that includes continuous diffusion and discrete jumps is suggested to deal with the multiscale dynamics of the bistable system. Accurate approximations of the exponentially small eigenvalue associated with the time scale of this switching and the full time-dependent solution are calculated using Matlab. A breakdown of previously known asymptotic approximations on small volume scales is observed through comparison with these and Monte Carlo results. Finally, in the appendix section is an illustration of how the diffusion approximation of the chemical master equation can fail to represent correctly the mesoscopically interesting steady-state behaviour of the system.
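
    The deterministic skeleton of the model is a single cubic rate equation; the sketch below (illustrative rate constants chosen to put the system in the bistable regime) locates its three positive fixed points: two stable states bracketing one unstable state.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Schlogl rate equation dx/dt = k1*x^2 - k2*x^3 - k3*x + k4 (a = b = 1);
    # these illustrative constants give fixed points at x = 1, 2 and 4.
    k1, k2, k3, k4 = 7.0, 1.0, 14.0, 8.0
    f = lambda x: k1 * x ** 2 - k2 * x ** 3 - k3 * x + k4

    roots = []
    grid = np.linspace(0.0, 5.0, 500)
    for lo, hi in zip(grid[:-1], grid[1:]):
        if f(lo) * f(hi) < 0:                    # sign change brackets a root
            roots.append(brentq(f, lo, hi))
    print("fixed points:", roots)   # low stable, middle unstable, high stable
    ```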

  13. Temporal assessment of microbial communities in soils of two contrasting mangroves.

    PubMed

    Rigonato, Janaina; Kent, Angela D; Gumiere, Thiago; Branco, Luiz Henrique Zanini; Andreote, Fernando Dini; Fiore, Marli Fátima

    Variations in microbial communities promoted by alterations in environmental conditions are reflected in similarities/differences at both taxonomic and functional levels. Here we used a natural gradient within mangroves, from seashore to upland, to contrast the natural variability in bacterial, cyanobacterial and diazotroph assemblages in a pristine area and an oil-polluted area over a timespan of three years, based on ARISA (bacteria and cyanobacteria) and nifH T-RFLP (diazotrophs) fingerprinting. The data presented herein indicate that changes in all the communities evaluated were mainly driven by the temporal effect in the contaminated area, while local effects were dominant in the pristine mangrove. A positive correlation of community structure between diazotrophs and cyanobacteria was observed, suggesting the functional importance of this phylum as nitrogen fixers in mangrove soils. Different ecological patterns explained the microbial behavior in the pristine and polluted mangroves. Stochastic models in the pristine mangrove indicate that no single environmental factor determines the bacterial distribution, while cyanobacteria and diazotrophs in the same area were better fitted by a deterministic model. For the contaminated mangrove site, deterministic models better represented the variations in the communities, suggesting that the presence of oil might change the microbial ecological structures over time. Mangroves represent a unique environment threatened by global change, and this study contributes to the knowledge of the microbial distribution in such areas and its response to a history of persistent contamination events. Copyright © 2017 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.

  14. Towards the reliable calculation of residence time for off-lattice kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Alexander, Kathleen C.; Schuh, Christopher A.

    2016-08-01

    Kinetic Monte Carlo (KMC) methods have the potential to extend the accessible timescales of off-lattice atomistic simulations beyond the limits of molecular dynamics by making use of transition state theory and parallelization. However, it is a challenge to identify a complete catalog of events accessible to an off-lattice system in order to accurately calculate the residence time for KMC. Here we describe possible approaches to some of the key steps needed to address this problem. These include methods to compare and distinguish individual kinetic events, to deterministically search an energy landscape, and to define local atomic environments. When applied to the ground-state Σ5 (2 1 0) grain boundary in copper, these methods achieve a converged residence time, accounting for the full set of kinetically relevant events for this off-lattice system, with calculable uncertainty.
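
    Once a complete event catalog is in hand, the residence-time step itself is standard; the sketch below shows the rejection-free (BKL-style) form with hypothetical rates.

    ```python
    import numpy as np

    def kmc_step(rates, rng):
        """One rejection-free KMC step given the full catalog of event rates."""
        total = rates.sum()
        dt = rng.exponential(1.0 / total)                # residence time in state
        event = rng.choice(len(rates), p=rates / total)  # which event fires
        return event, dt

    rng = np.random.default_rng(0)
    rates = np.array([1e9, 5e8, 2e7])                    # hypothetical rates (1/s)
    event, dt = kmc_step(rates, rng)
    print(f"event {event} after {dt:.2e} s;",
          f"mean residence time {1.0 / rates.sum():.2e} s")
    ```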

  15. The genetics of Alzheimer disease: current status and future prospects.

    PubMed

    Blacker, D; Tanzi, R E

    1998-03-01

    Four genes involved in the development of Alzheimer disease have been identified. Three fully penetrant (deterministic) genes lead to the development of Alzheimer disease in patients younger than 60 years: the amyloid beta-protein precursor on chromosome 21, presenilin 1 on chromosome 14, and presenilin 2 on chromosome 1. Together, they account for about half of this early-onset form of the disease. One genetic risk factor--apolipoprotein E-4--is associated with late-onset Alzheimer disease. It accounts for a substantial fraction of disease burden but seems to act primarily to lower the age of disease onset. In general, none of these genes can be easily adapted for use as a diagnostic or predictive test for Alzheimer disease. Research activity includes searching for additional genes, especially for late-onset disease, and elucidating the mechanism of action of all identified genes as part of a long-term effort to develop more effective therapeutic and preventive strategies.

  16. Costs of eliminating malaria and the impact of the global fund in 34 countries.

    PubMed

    Zelman, Brittany; Kiszewski, Anthony; Cotter, Chris; Liu, Jenny

    2014-01-01

    International financing for malaria increased more than 18-fold between 2000 and 2011; the largest source came from The Global Fund to Fight AIDS, Tuberculosis and Malaria (Global Fund). Countries have made substantial progress, but achieving elimination requires sustained finances to interrupt transmission and prevent reintroduction. Since 2011, global financing for malaria has declined, fueling concerns that further progress will be impeded, especially for current malaria-eliminating countries that may face resurgent malaria if programs are disrupted. This study aims to 1) assess past total and Global Fund funding to the 34 current malaria-eliminating countries, and 2) estimate their future funding needs to achieve malaria elimination and prevent reintroduction through 2030. Historical funding is assessed against trends in country-level malaria annual parasite incidences (APIs) and income per capita. Following Kiszewski et al. (2007), program costs to eliminate malaria and prevent reintroduction through 2030 are estimated using a deterministic model. The cost parameters are tailored to a package of interventions aimed at malaria elimination and prevention of reintroduction. In the majority of Global Fund-supported countries, increases in total funding from 2005 to 2010 coincided with reductions in malaria APIs and with average annual growth in overall GNI per capita. The total amount of projected funding needed for the current malaria-eliminating countries to achieve elimination and prevent reintroduction through 2030 is approximately US$8.5 billion, or about $1.84 per person at risk per year (PPY) (ranging from $2.51 PPY in 2014 to $1.43 PPY in 2030). Although external donor funding, particularly from the Global Fund, has been key for many malaria-eliminating countries, sustained and sufficient financing is critical for furthering global malaria elimination. Projected cost estimates for elimination provide policymakers with an indication of the level of financial resources that should be mobilized to achieve malaria elimination goals.

  17. Travelling Policies and Global Buzzwords: How International Non-Governmental Organizations and Charities Spread the Word about Early Childhood in the Global South

    ERIC Educational Resources Information Center

    Penn, Helen

    2011-01-01

    This article is based on a web-search commissioned by an international charity to review the work of international non-governmental organizations (INGOs) and charities which promote and support early childhood education and care (ECEC) in the global South. The article examines examples of such initiatives. It is suggested that there is…

  18. The meta-Gaussian Bayesian Processor of forecasts and associated preliminary experiments

    NASA Astrophysics Data System (ADS)

    Chen, Fajing; Jiao, Meiyan; Chen, Jing

    2013-04-01

    Public weather services are trending toward providing users with probabilistic weather forecasts, in place of traditional deterministic forecasts. Probabilistic forecasting techniques are continually being improved to optimize available forecasting information. The Bayesian Processor of Forecasts (BPF), a new statistical method for probabilistic forecasting, can transform a deterministic forecast into a probabilistic forecast according to the historical statistical relationship between observations and forecasts generated by that forecasting system. This technique accounts for the typical forecasting performance of a deterministic forecasting system in quantifying the forecast uncertainty. The meta-Gaussian likelihood model is suitable for a variety of stochastic dependence structures with monotone likelihood ratios. The meta-Gaussian BPF adopting this kind of likelihood model can therefore be applied across many fields, including meteorology and hydrology. The Bayes theorem with two continuous random variables and the normal-linear BPF are briefly introduced. The meta-Gaussian BPF for a continuous predictand using a single predictor is then presented and discussed. The performance of the meta-Gaussian BPF is tested in a preliminary experiment. Control forecasts of daily surface temperature at 0000 UTC at Changsha and Wuhan stations are used as the deterministic forecast data. These control forecasts are taken from ensemble predictions with a 96-h lead time generated by the National Meteorological Center of the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the US National Centers for Environmental Prediction during January 2008. The results of the experiment show that the meta-Gaussian BPF can transform a deterministic control forecast of surface temperature from any one of the three ensemble predictions into a useful probabilistic forecast of surface temperature. These probabilistic forecasts quantify the uncertainty of the control forecast; accordingly, the performance of the probabilistic forecasts differs based on the source of the underlying deterministic control forecasts.
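
    For the normal-linear special case mentioned above, the posterior is available in closed form; the sketch below (illustrative numbers, not the paper's fitted values) assumes a climatic prior W ~ N(M, S2) and a likelihood model Y | W = w ~ N(a + b*w, s2) estimated from past forecast-observation pairs.

    ```python
    def normal_linear_bpf(y, M, S2, a, b, s2):
        """Posterior mean and variance of the predictand W given forecast y."""
        precision = 1.0 / S2 + b * b / s2
        mean = (M / S2 + b * (y - a) / s2) / precision
        return mean, 1.0 / precision

    # Hypothetical climatology and regression fit for surface temperature (degC)
    M, S2 = 2.0, 16.0             # prior mean and variance from climatology
    a, b, s2 = 0.5, 0.9, 4.0      # fitted likelihood: Y ~ N(a + b*W, s2)
    mu, var = normal_linear_bpf(y=-3.0, M=M, S2=S2, a=a, b=b, s2=s2)
    print(f"posterior: N({mu:.2f}, {var:.2f})")
    ```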

  19. Deterministic Factors Overwhelm Stochastic Environmental Fluctuations as Drivers of Jellyfish Outbreaks.

    PubMed

    Benedetti-Cecchi, Lisandro; Canepa, Antonio; Fuentes, Veronica; Tamburello, Laura; Purcell, Jennifer E; Piraino, Stefano; Roberts, Jason; Boero, Ferdinando; Halpin, Patrick

    2015-01-01

    Jellyfish outbreaks are increasingly viewed as a deterministic response to escalating levels of environmental degradation and climate extremes. However, a comprehensive understanding of the influence of deterministic drivers and stochastic environmental variations favouring population renewal processes has remained elusive. This study quantifies the deterministic and stochastic components of environmental change that lead to outbreaks of the jellyfish Pelagia noctiluca in the Mediterranean Sea. Using data of jellyfish abundance collected at 241 sites along the Catalan coast from 2007 to 2010 we: (1) tested hypotheses about the influence of time-varying and spatial predictors of jellyfish outbreaks; (2) evaluated the relative importance of stochastic vs. deterministic forcing of outbreaks through the environmental bootstrap method; and (3) quantified return times of extreme events. Outbreaks were common in May and June and less likely in other summer months, which resulted in a negative relationship between outbreaks and SST. Cross- and along-shore advection by geostrophic flow were important concentrating forces of jellyfish, but most outbreaks occurred in the proximity of two canyons in the northern part of the study area. This result supported the recent hypothesis that canyons can funnel P. noctiluca blooms towards shore during upwelling. This can be a general, yet unappreciated mechanism leading to outbreaks of holoplanktonic jellyfish species. The environmental bootstrap indicated that stochastic environmental fluctuations have negligible effects on return times of outbreaks. Our analysis emphasized the importance of deterministic processes leading to jellyfish outbreaks compared to the stochastic component of environmental variation. A better understanding of how environmental drivers affect demographic and population processes in jellyfish species will increase the ability to anticipate jellyfish outbreaks in the future.

  20. Memoryless cooperative graph search based on the simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Hou, Jian; Yan, Gang-Feng; Fan, Zhen

    2011-04-01

    We have studied the problem of reaching a globally optimal segment in a graph-like environment with a single or a group of autonomous mobile agents. Firstly, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. We show that under both proposed control strategies, the agent eventually converges to a globally optimal segment with probability 1. Secondly, we use multi-agent searching to simultaneously reduce the computational complexity and accelerate convergence, based on the algorithms given for a single agent. By exploiting graph partitioning, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment.
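
    A generic memoryless simulated-annealing walk on a weighted graph looks as follows (toy node costs, not the paper's segment-search objective): a worse neighbour is accepted with probability exp(-delta/T) while the temperature is cooled.

    ```python
    import math
    import random

    def anneal(neighbors, cost, start, T0=1.0, alpha=0.995, steps=5000, seed=0):
        """Memoryless annealing walk; returns the node occupied at the end."""
        rng = random.Random(seed)
        node, T = start, T0
        for _ in range(steps):
            cand = rng.choice(neighbors[node])
            delta = cost[cand] - cost[node]
            if delta <= 0 or rng.random() < math.exp(-delta / T):
                node = cand                      # accept the move
            T *= alpha                           # cool down
        return node

    # Hypothetical 4-node ring with node costs; node 2 is globally optimal.
    neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    cost = {0: 3.0, 1: 2.0, 2: 0.5, 3: 4.0}
    print("ended at node", anneal(neighbors, cost, start=0))
    ```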

  1. Cognitive Diagnostic Analysis Using Hierarchically Structured Skills

    ERIC Educational Resources Information Center

    Su, Yu-Lan

    2013-01-01

    This dissertation proposes two modified cognitive diagnostic models (CDMs), the deterministic, inputs, noisy, "and" gate with hierarchy (DINA-H) model and the deterministic, inputs, noisy, "or" gate with hierarchy (DINO-H) model. Both models incorporate the hierarchical structures of the cognitive skills in the model estimation…

  2. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE PAGES

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
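
    For contrast with the deterministic density-based approximation described above, a minimal standard (stochastic, perturbed-observation) EnKF analysis step for a linear observation y = Hx + noise can be sketched as follows (toy dimensions and noise levels).

    ```python
    import numpy as np

    def enkf_update(X, y, H, R, rng):
        """X: (d, N) forecast ensemble; y: (m,) obs; H: (m, d); R: (m, m)."""
        N = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)
        P = A @ A.T / (N - 1)                        # sample covariance
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
        return X + K @ (Y - H @ X)                   # analysis ensemble

    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(2, 50))           # prior ensemble
    H = np.array([[1.0, 0.0]])
    R = np.array([[0.1]])
    Xa = enkf_update(X, np.array([0.5]), H, R, rng)
    print("posterior mean:", Xa.mean(axis=1))
    ```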

  3. Active temporal multiplexing of indistinguishable heralded single photons

    PubMed Central

    Xiong, C.; Zhang, X.; Liu, Z.; Collins, M. J.; Mahendra, A.; Helt, L. G.; Steel, M. J.; Choi, D. -Y.; Chae, C. J.; Leong, P. H. W.; Eggleton, B. J.

    2016-01-01

    It is a fundamental challenge in quantum optics to deterministically generate indistinguishable single photons through non-deterministic nonlinear optical processes, due to the intrinsic coupling of single- and multi-photon-generation probabilities in these processes. Actively multiplexing photons generated in many temporal modes can decouple these probabilities, but key issues are to minimize resource requirements to allow scalability, and to ensure indistinguishability of the generated photons. Here we demonstrate the multiplexing of photons from four temporal modes solely using fibre-integrated optics and off-the-shelf electronic components. We show a 100% enhancement to the single-photon output probability without introducing additional multi-photon noise. Photon indistinguishability is confirmed by a fourfold Hong–Ou–Mandel quantum interference with a 91±16% visibility after subtracting multi-photon noise due to high pump power. Our demonstration paves the way for scalable multiplexing of many non-deterministic photon sources to a single near-deterministic source, which will be of benefit to future quantum photonic technologies. PMID:26996317

  4. Recent progress in the assembly of nanodevices and van der Waals heterostructures by deterministic placement of 2D materials.

    PubMed

    Frisenda, Riccardo; Navarro-Moratalla, Efrén; Gant, Patricia; Pérez De Lara, David; Jarillo-Herrero, Pablo; Gorbachev, Roman V; Castellanos-Gomez, Andres

    2018-01-02

    Designer heterostructures can now be assembled layer-by-layer with unmatched precision thanks to the recently developed deterministic placement methods to transfer two-dimensional (2D) materials. This possibility constitutes the birth of a very active research field on the so-called van der Waals heterostructures. Moreover, these deterministic placement methods also open the door to fabricate complex devices, which would be otherwise very difficult to achieve by conventional bottom-up nanofabrication approaches, and to fabricate fully-encapsulated devices with exquisite electronic properties. The integration of 2D materials with existing technologies such as photonic and superconducting waveguides and fiber optics is another exciting possibility. Here, we review the state-of-the-art of the deterministic placement methods, describing and comparing the different alternative methods available in the literature, and we illustrate their potential to fabricate van der Waals heterostructures, to integrate 2D materials into complex devices and to fabricate artificial bilayer structures where the layers present a user-defined rotational twisting angle.

  5. First-order reliability application and verification methods for semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-11-01

    Escalating risks in aerostructures, driven by increasing size, complexity, and cost, should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor in place of the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
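
    A minimal sketch of the classical first-order safety index that such methods build on, assuming normally distributed strength and stress and omitting the paper's uncertainty-error terms:

    ```python
    import math

    def safety_index(mu_r, sigma_r, mu_s, sigma_s):
        """First-order (Cornell) safety index for resistance R and
        stress S, both assumed normal:
        beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2)."""
        return (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)

    # Example: mean strength 400 MPa (sd 20), mean stress 300 MPa (sd 25)
    beta = safety_index(400, 20, 300, 25)   # ~3.1, nominal Pf near 1e-3
    ```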

  6. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    PubMed

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability in case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault-detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
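
    A minimal sketch of the just-in-time learning idea (a generic local-model prediction for a query sample, not the authors' exact JITL-DD scheme; the fault threshold is left as a placeholder):

    ```python
    import numpy as np

    def jitl_predict(X_hist, y_hist, x_q, k=20):
        """Just-in-time learning: fit a local affine least-squares model
        on the k nearest historical samples and predict the query output."""
        d = np.linalg.norm(X_hist - x_q, axis=1)
        idx = np.argsort(d)[:k]                          # k nearest neighbours
        Xk = np.hstack([X_hist[idx], np.ones((k, 1))])   # local affine model
        w, *_ = np.linalg.lstsq(Xk, y_hist[idx], rcond=None)
        return np.append(x_q, 1.0) @ w

    # a fault is flagged when |y_measured - jitl_predict(...)| exceeds
    # a residual threshold chosen from fault-free data
    ```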

  7. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d

  8. Improving Deterministic Reserve Requirements for Security Constrained Unit Commitment and Scheduling Problems in Power Systems

    NASA Astrophysics Data System (ADS)

    Wang, Fengyu

    Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserve. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict transfer capabilities and network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserve by explicitly modelling uncertainties, scalability and pricing issues remain. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. The three main goals of this thesis are: (1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch structure, especially with the consideration of renewables; (2) to develop a market settlement scheme for the proposed dynamic reserve policies such that market efficiency is improved; and (3) to evaluate the market impacts and price signal of the proposed dynamic reserve policies.

  9. HOW DO RADIOLOGISTS USE THE HUMAN SEARCH ENGINE?

    PubMed

    Wolfe, Jeremy M; Evans, Karla K; Drew, Trafton; Aizenman, Avigael; Josephs, Emilie

    2016-06-01

    Radiologists perform many 'visual search tasks' in which they look for one or more instances of one or more types of target item in a medical image (e.g. cancer screening). To understand and improve how radiologists perform such tasks, we must understand how the human 'search engine' works. This article briefly reviews some of the relevant work on this aspect of medical image perception. Questions include: How are attention and the eyes guided in radiologic search? How is global (image-wide) information used in search? How might properties of human vision and human cognition lead to errors in radiologic search?

  10. Stochastic search in structural optimization - Genetic algorithms and simulated annealing

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat

    1993-01-01

    An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.

  11. NCEP SST Analysis

    Science.gov Websites

    Environmental Modeling Center (EMC) Marine Modeling and Analysis Branch (MMAB) sea surface temperature (SST) analysis page.

  12. Emergency Home - PHE

    Science.gov Websites

    U.S. Department of Health and Human Services Public Health Emergency (PHE) home page, including 2017 hurricane response resources and tools for public health communicators and the media.

  13. Federal Public Health Actions - PHE

    Science.gov Websites

    HHS Public Health Emergency (PHE) newsroom page listing federal public health actions (e.g., the April 20, 2018 renewal of a determination).

  14. Committee on Women in Science, Engineering, and Medicine (CWSEM)

    Science.gov Websites

    Home page of the National Academies of Sciences, Engineering, and Medicine Committee on Women in Science, Engineering, and Medicine (CWSEM), under Policy and Global Affairs.

  15. Mechanisms of Age-Related Decline in Memory Search across the Adult Life Span

    ERIC Educational Resources Information Center

    Hills, Thomas T.; Mata, Rui; Wilke, Andreas; Samanez-Larkin, Gregory R.

    2013-01-01

    Three alternative mechanisms for age-related decline in memory search have been proposed, which result from either reduced processing speed (global slowing hypothesis), overpersistence on categories (cluster-switching hypothesis), or the inability to maintain focus on local cues related to a decline in working memory (cue-maintenance hypothesis).…

  16. Search Problems in Mission Planning and Navigation of Autonomous Aircraft. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Krozel, James A.

    1988-01-01

    An architecture for the control of an autonomous aircraft is presented. The architecture is a hierarchical system representing an anthropomorphic breakdown of the control problem into planner, navigator, and pilot systems. The planner system determines high-level global plans from overall mission objectives. This abstract mission planning is investigated by focusing on the Traveling Salesman Problem with variations on local and global constraints. Tree search techniques are applied, including the breadth-first, depth-first, and best-first algorithms. The minimum column and row entries of the Traveling Salesman Problem cost matrix provide a powerful heuristic to guide these search techniques. Mission planning subgoals are directed from the planner to the navigator for planning routes in mountainous terrain with threats. Terrain/threat information is abstracted into a graph of possible paths for which graph searches are performed. It is shown that paths can be well represented by a search graph based on the Voronoi diagram of points representing the vertices of mountain boundaries. A comparison of Dijkstra's dynamic programming algorithm and the A* graph search algorithm from artificial intelligence/operations research is performed for several navigation path planning examples. These examples illustrate paths that minimize a combination of distance and exposure to threats. Finally, the pilot system synthesizes the flight trajectory by creating the control commands to fly the aircraft.
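
    A compact sketch of the A* graph search compared in the thesis (standard formulation; the neighbour function and heuristic are placeholders):

    ```python
    import heapq
    import itertools

    def a_star(start, goal, neighbours, h):
        """A* search: neighbours(n) yields (m, step_cost) pairs and h is
        an admissible heuristic. Returns the lowest-cost path as a list."""
        tie = itertools.count()                     # break heap ties safely
        frontier = [(h(start), next(tie), 0.0, start, None)]
        came_from, best_g = {}, {start: 0.0}
        while frontier:
            _, _, g, n, parent = heapq.heappop(frontier)
            if n in came_from:                      # already expanded
                continue
            came_from[n] = parent
            if n == goal:
                path = [n]
                while came_from[path[-1]] is not None:
                    path.append(came_from[path[-1]])
                return path[::-1]
            for m, c in neighbours(n):
                ng = g + c
                if ng < best_g.get(m, float("inf")):
                    best_g[m] = ng
                    heapq.heappush(frontier, (ng + h(m), next(tie), ng, m, n))
        return None
    ```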

  17. A Study of Impact Point Detecting Method Based on Seismic Signal

    NASA Astrophysics Data System (ADS)

    Huo, Pengju; Zhang, Yu; Xu, Lina; Huang, Yong

    In targeting tests, the projectile impact point must be located to recover the projectile and determine its range. In this paper, a global search method based on velocity variance is proposed. To verify the applicability of this method, simulations over an area of four million square meters were conducted with the same array structure as the commonly used linear positioning method, and MATLAB was used to compare and analyze the two methods. The simulation results show that the global search method based on velocity variance has high positioning accuracy and stability, and can meet the needs of impact point location.
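
    One plausible reading of the velocity-variance method, sketched as a grid search: at each candidate impact point, compute the propagation speed implied by each sensor's arrival time and pick the point where those speeds are most consistent. Arrival times referenced to the impact instant and a homogeneous medium are assumptions here:

    ```python
    import numpy as np

    def locate_impact(sensors, arrivals, xs, ys):
        """Grid search for the impact point: the implied propagation speed
        per sensor is distance / arrival time, and the true impact point
        should minimize the variance of those speeds.
        sensors: (m, 2) coordinates; arrivals: (m,) times after impact."""
        best, best_var = None, np.inf
        for x in xs:
            for y in ys:
                d = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
                v = d / arrivals                      # implied speeds
                if v.var() < best_var:
                    best, best_var = (x, y), v.var()
        return best, best_var
    ```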

  18. International volunteerism and global responsibility

    PubMed Central

    2017-01-01

    To review the current status of volunteerism in the field of urology as a global responsibility, a PubMed search was conducted with the keywords: international volunteer, urology, global health, international resident education. In addition, internet searches using Google and Bing were performed for the same keywords. Websites of multinational urological organizations were searched for available information, and expert opinions of key individuals in the arena were obtained. There is a paucity of published literature on the efforts and outcomes of volunteerism in urology; most of the available information concerns fistula repair outcomes in the medical literature. There is credible information on the impact on resident training in the fields of dentistry, general surgery, orthopedics and plastic surgery. The desire to volunteer is demonstrated in a survey of urologists, but the delivery of volunteer missions is fragmented, and the outcomes or impact of these efforts are less well documented. There is a need to create pathways to harness the desire of urologists to volunteer. Simultaneously, the effort to establish ethical standards and guidelines for surgical care provided by visiting doctors in low- and middle-income countries (LMICs) is important. There is a tremendous opportunity for resident education in this arena, as demonstrated by the ACGME-approved path in general surgery. PMID:28540233

  19. Adding statistical regularity results in a global slowdown in visual search.

    PubMed

    Vaskevich, Anna; Luria, Roy

    2018-05-01

    Current statistical learning theories predict that embedding implicit regularities within a task should further improve online performance, beyond general practice. We challenged this assumption by contrasting performance in a visual search task containing either a consistent-mapping (regularity) condition, a random-mapping condition, or both conditions, mixed. Surprisingly, performance in a random visual search, without any regularity, was better than performance in a mixed design search that contained a beneficial regularity. This result was replicated using different stimuli and different regularities, suggesting that mixing consistent and random conditions leads to an overall slowing down of performance. Relying on the predictive-processing framework, we suggest that this global detrimental effect depends on the validity of the regularity: when its predictive value is low, as it is in the case of a mixed design, reliance on all prior information is reduced, resulting in a general slowdown. Our results suggest that our cognitive system does not maximize speed, but rather continues to gather and implement statistical information at the expense of a possible slowdown in performance.

  20. Recurrence quantification analysis of global stock markets

    NASA Astrophysics Data System (ADS)

    Bastos, João A.; Caiado, Jorge

    2011-04-01

    This study investigates the presence of deterministic dependencies in international stock markets using recurrence plots and recurrence quantification analysis (RQA). The results are based on a large set of free float-adjusted market capitalization stock indices, covering a period of 15 years. The statistical tests suggest that the dynamics of stock prices in emerging markets are characterized by higher values of RQA measures when compared to their developed counterparts. The behavior of stock markets during critical financial events, such as the burst of the technology bubble, the Asian currency crisis, and the recent subprime mortgage crisis, is analyzed by performing RQA in sliding windows. It is shown that during these events stock markets exhibit a distinctive behavior that is characterized by temporary decreases in the fraction of recurrence points contained in diagonal and vertical structures.
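
    For orientation, a minimal sketch of the recurrence matrix underlying RQA measures (the time-delay embedding parameters are placeholders; measures such as determinism add line-structure statistics on top of this):

    ```python
    import numpy as np

    def recurrence_matrix(x, eps, dim=3, tau=1):
        """Recurrence plot of a scalar series after time-delay embedding:
        R[i, j] = 1 when embedded states i and j are closer than eps."""
        n = len(x) - (dim - 1) * tau
        emb = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
        d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        return (d < eps).astype(int)

    # recurrence rate = fraction of recurrent point pairs:
    # RR = recurrence_matrix(prices, eps).mean()
    ```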

  1. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

    In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and kriging models (using a constant underlying global model and a Gaussian correlation function) yield comparable results.
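
    A minimal sketch of an ordinary kriging predictor with a constant underlying global model and a Gaussian correlation function, as described; the correlation parameter `theta` is assumed known here rather than estimated:

    ```python
    import numpy as np

    def kriging_predict(X, y, X_new, theta=1.0):
        """Ordinary kriging with a constant global mean and Gaussian
        correlation exp(-theta * d^2).
        X: (n, d) samples, y: (n,) responses, X_new: (k, d) queries."""
        def corr(a, b):
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return np.exp(-theta * d2)
        n = len(X)
        R = corr(X, X) + 1e-10 * np.eye(n)           # jitter for stability
        ones = np.ones(n)
        mu = ones @ np.linalg.solve(R, y) / (ones @ np.linalg.solve(R, ones))
        r = corr(X, X_new)                            # (n, k) correlations
        return mu + r.T @ np.linalg.solve(R, y - mu * ones)
    ```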

  2. The Collaborative Seismic Earth Model: Generation 1

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner

    2018-05-01

    We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.

  3. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models, like the multi-strain dynamics used to describe the virus-host interaction in dengue fever, even the most recently developed parameter estimation techniques, such as maximum likelihood iterated filtering, reach their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and the deterministic skeleton. The deterministic system on its own already displays complex dynamics, up to deterministic chaos and coexistence of multiple attractors.
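
    As a toy illustration of the trajectory-matching end of this spectrum (far simpler than multi-strain models or iterated filtering), a least-squares calibration of a basic SIR model to an observed infective series:

    ```python
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import least_squares

    def sir(y, t, beta, gamma):
        """Classic SIR right-hand side with normalized compartments."""
        s, i, r = y
        return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

    def fit_sir(t_obs, i_obs, y0):
        """Least-squares calibration of (beta, gamma) to an observed
        infective fraction time series i_obs at times t_obs."""
        def resid(p):
            traj = odeint(sir, y0, t_obs, args=(p[0], p[1]))
            return traj[:, 1] - i_obs
        return least_squares(resid, x0=[1.5, 0.5],
                             bounds=([0, 0], [10, 10])).x
    ```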

  4. Inherent Conservatism in Deterministic Quasi-Static Structural Analysis

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1997-01-01

    The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.

  5. Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.

    PubMed

    Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen

    In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses are explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
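
    A toy sketch of consensus under random (Bernoulli) packet losses, illustrating the setting rather than the paper's controller design; the step size and loss rate are placeholders:

    ```python
    import numpy as np

    def sampled_consensus(x, A, steps=200, h=0.1, p_loss=0.2, rng=None):
        """Discrete-time consensus under Bernoulli packet losses: each
        link delivers its sampled value with probability 1 - p_loss.
        x: (n,) agent states; A: (n, n) adjacency matrix, zero diagonal."""
        rng = rng or np.random.default_rng(0)
        for _ in range(steps):
            kept = rng.random(A.shape) >= p_loss      # surviving packets
            W = np.where(kept, A, 0.0)
            L = np.diag(W.sum(axis=1)) - W            # graph Laplacian
            x = x - h * L @ x                         # consensus update
        return x
    ```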

  6. An application of ensemble/multi model approach for wind power production forecast.

    NASA Astrophysics Data System (ADS)

    Alessandrini, S.; Decimi, G.; Hagedorn, R.; Sperati, S.

    2010-09-01

    Wind power forecasts for the three-days-ahead period are becoming increasingly useful and important for reducing grid-integration problems and for energy price trading as wind power penetration grows. The accuracy of such forecasts is therefore one of the most important requirements for a successful application. The wind power forecast is based on a mesoscale meteorological model that provides the three-days-ahead wind data. A Model Output Statistics correction is then performed to reduce systematic errors caused, for instance, by a wrong representation of surface roughness or topography in the meteorological model. The corrected wind data are then used as input to the wind farm power curve to obtain the power forecast. These computations require historical time series of measured wind data (from an anemometer located in the wind farm or on the nacelle) and power data in order to perform the statistical analysis on the past. For this purpose a neural network (NN) is trained on the past data and then applied in the forecast task. Since anemometer measurements are not always available in a wind farm, a different approach has also been adopted: training the NN to link the forecasted meteorological data directly to the power data. The normalized RMSE forecast error is lower in most cases with this second approach. We examined two wind farms, one located in Denmark on flat terrain and one located in a mountainous area in the south of Italy (Sicily). In both cases we compare the performance of a prediction based on meteorological data from a single model with that obtained using two or more models (RAMS, ECMWF deterministic, LAMI, HIRLAM). It is shown that the multi-model approach reduces the day-ahead normalized RMSE forecast error by at least 1% compared with the single-model approach. Moreover, a deterministic global model (e.g., the ECMWF deterministic model) seems to reach a level of accuracy similar to that of the mesoscale models (LAMI and RAMS). Finally, we focused on the possibility of using the ensemble model (ECMWF) to estimate the accuracy of the hourly, three-days-ahead power forecast. Contingency diagrams between the RMSE of the deterministic power forecast and the spread of the ensemble members' wind forecasts have been produced. This first analysis suggests that the ensemble spread can be used as an indicator of forecast accuracy, at least for the first day-ahead period: low spreads often correspond to low forecast errors. For longer forecast horizons the correlation between RMSE and ensemble spread decreases, becoming too low to be used for this purpose.
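
    A minimal sketch of the spread-error diagnostic behind that contingency analysis (the array shapes are assumptions):

    ```python
    import numpy as np

    def spread_error_correlation(ens, det, obs):
        """Spread-skill check: correlate ensemble spread with the error
        of the deterministic forecast at each lead time.
        ens: (time, member) forecasts; det and obs: (time,)."""
        spread = ens.std(axis=1)          # ensemble spread per time step
        error = np.abs(det - obs)         # deterministic forecast error
        return np.corrcoef(spread, error)[0, 1]
    ```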

  7. Stochastic approach to data analysis in fluorescence correlation spectroscopy.

    PubMed

    Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo

    2006-09-21

    Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data are conventionally fit using standard local search techniques, for example, the Marquardt-Levenberg (ML) algorithm. A prerequisite for this category of algorithms is sound knowledge of the behavior of the fit parameters and, in most cases, good initial guesses for accurate fitting; otherwise fitting artifacts arise. For known fit models and with user experience about the behavior of the fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems, or where automated data analysis is a prerequisite, there is a need for a procedure that treats FCS data fitting as a black box and generates reliable fit parameters with accuracy for the chosen model. We present a computational approach to analyze FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and performs the fitting by searching for solutions through global sampling. It is flexible and at the same time computationally fast for multiparameter evaluations. We present a performance study of PGSL for two-component fits with a triplet state. The statistical study and the goodness-of-fit criterion for PGSL are also presented. The robustness of PGSL for parameter estimation on noisy experimental data is also verified. We further extend the scope of PGSL by a hybrid analysis wherein the output of PGSL is fed as initial guesses to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).
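
    As a stand-in illustration of fitting by global sampling (pure random search within bounds, simpler than PGSL's adaptive probability updates), one might write:

    ```python
    import numpy as np

    def random_global_fit(model, t, y, bounds, n_samples=20000, rng=None):
        """Generic global random-search fit: draw parameter vectors
        within bounds and keep the one minimizing the mean-squared error.
        bounds: list of (lo, hi) pairs, one per parameter."""
        rng = rng or np.random.default_rng(0)
        lo, hi = np.asarray(bounds, dtype=float).T
        best_p, best_mse = None, np.inf
        for _ in range(n_samples):
            p = rng.uniform(lo, hi)
            mse = np.mean((model(t, *p) - y) ** 2)
            if mse < best_mse:
                best_p, best_mse = p, mse
        return best_p, best_mse
    ```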

  8. Performance comparisons between PCA-EA-LBG and PCA-LBG-EA approaches in VQ codebook generation for image compression

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Chou, Jyh-Horng

    2015-11-01

    The aim of this study is to generate vector quantisation (VQ) codebooks by integrating the principal component analysis (PCA) algorithm, the Linde-Buzo-Gray (LBG) algorithm, and evolutionary algorithms (EAs). The EAs include the genetic algorithm (GA), particle swarm optimisation (PSO), honey bee mating optimisation (HBMO), and the firefly algorithm (FF). The study provides performance comparisons between PCA-EA-LBG and PCA-LBG-EA approaches. The PCA-EA-LBG approaches contain PCA-GA-LBG, PCA-PSO-LBG, PCA-HBMO-LBG, and PCA-FF-LBG, while the PCA-LBG-EA approaches contain PCA-LBG, PCA-LBG-GA, PCA-LBG-PSO, PCA-LBG-HBMO, and PCA-LBG-FF. All training vectors of the test images are grouped according to PCA. The PCA-EA-LBG approaches use the vectors grouped by PCA as initial individuals, and the best solution gained by the EAs is given to LBG to discover a codebook. The PCA-LBG approach uses PCA to select vectors as initial individuals for LBG to find a codebook. The PCA-LBG-EA approaches use the final result of PCA-LBG as an initial individual for the EAs to find a codebook. The search schemes in PCA-EA-LBG first use global search and then apply local search, while those in PCA-LBG-EA first use local search and then employ global search. The results verify that PCA-EA-LBG indeed gains superior results compared to PCA-LBG-EA, because PCA-EA-LBG explores a global area to find a solution, and then exploits a better one from the local area of that solution. Furthermore, the proposed PCA-EA-LBG approaches to designing VQ codebooks outperform existing approaches in the literature.
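
    For reference, a minimal sketch of the LBG (generalized Lloyd) iteration at the core of all these pipelines; random initialization from training vectors is a placeholder for the PCA- or EA-derived starting points:

    ```python
    import numpy as np

    def lbg_codebook(vectors, k, iters=50, rng=None):
        """Minimal LBG training: alternate nearest-codeword assignment
        and centroid update. vectors: (n, d) training vectors."""
        rng = rng or np.random.default_rng(0)
        vectors = np.asarray(vectors, dtype=float)
        codebook = vectors[rng.choice(len(vectors), k, replace=False)]
        for _ in range(iters):
            d = np.linalg.norm(vectors[:, None] - codebook[None], axis=-1)
            nearest = d.argmin(axis=1)                # assignment step
            for j in range(k):
                cell = vectors[nearest == j]
                if len(cell):
                    codebook[j] = cell.mean(axis=0)   # centroid update
        return codebook
    ```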

  9. Comparison of space radiation calculations for deterministic and Monte Carlo transport codes

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo

    For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit, and HETC-HEDS. The deterministic codes and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry and material. Typical space radiation environments, such as the 1977 solar minimum galactic cosmic ray environment, are used as the well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.

  10. Extraction of angle deterministic signals in the presence of stationary speed fluctuations with cyclostationary blind source separation

    NASA Astrophysics Data System (ADS)

    Delvecchio, S.; Antoni, J.

    2012-02-01

    This paper addresses the use of a cyclostationary blind source separation algorithm (namely RRCR) to extract angle deterministic signals from mechanical rotating machines in the presence of stationary speed fluctuations. This means that only phase fluctuations while the machine is running in steady-state conditions are considered; run-up or run-down speed variations are not taken into account. The machine is also assumed to run in idle conditions, so non-stationary phenomena due to the load are not considered. It is theoretically assessed that in such operating conditions the deterministic (periodic) signal in the angle domain becomes cyclostationary at first and second orders in the time domain. This fact justifies the use of the RRCR algorithm, which is able to directly extract the angle deterministic signal from the time domain without performing any kind of interpolation. This is particularly valuable when angular resampling fails because of uncontrolled speed fluctuations. The capability of the proposed approach is verified by means of simulated and actual vibration signals captured on a pneumatic screwdriver handle. In this case, not only can the angle deterministic part be extracted, but the main sources of excitation (i.e., motor shaft imbalance, epicycloidal gear meshing, and air pressure forces) affecting the user's hand during operation can also be separated.

  11. Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise

    NASA Astrophysics Data System (ADS)

    Meyers, Stephen R.; Hinnov, Linda A.

    2010-08-01

    Deterministic orbital controls on climate variability are commonly inferred to dominate across timescales of 10^4-10^6 years, although some studies have suggested that stochastic processes may be of equal or greater importance. Here we explicitly quantify changes in deterministic orbital processes (forcing and/or pacing) versus stochastic climate processes during the Plio-Pleistocene, via time-frequency analysis of two prominent foraminifera oxygen isotopic stacks. Our results indicate that development of the Northern Hemisphere ice sheet is paralleled by an overall amplification of both deterministic and stochastic climate energy, but their relative dominance is variable. The progression from a more stochastic early Pliocene to a strongly deterministic late Pleistocene is primarily accommodated during two transitory phases of Northern Hemisphere ice sheet growth. This long-term trend is punctuated by "stochastic events," which we interpret as evidence for abrupt reorganization of the climate system at the initiation and termination of the mid-Pleistocene transition and at the onset of Northern Hemisphere glaciation. In addition to highlighting a complex interplay between deterministic and stochastic climate change during the Plio-Pleistocene, our results support an early onset for Northern Hemisphere glaciation (between 3.5 and 3.7 Ma) and reveal some new characteristics of the orbital signal response, such as the puzzling emergence of 100 ka and 400 ka cyclic climate variability during theoretical eccentricity nodes.

  12. Tag-mediated cooperation with non-deterministic genotype-phenotype mapping

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Chen, Shu

    2016-01-01

    Tag-mediated cooperation provides a helpful framework for resolving evolutionary social dilemmas. However, most previous studies have not taken into account the genotype-phenotype distinction in tags, which may play an important role in the process of evolution. To take this into consideration, we introduce non-deterministic genotype-phenotype mapping into a tag-based model with a spatial prisoner's dilemma. By our definition, similarity between genotypic tags does not directly imply similarity between phenotypic tags. We find that the non-deterministic mapping from genotypic tag to phenotypic tag has non-trivial effects on tag-mediated cooperation. Although we observe that high levels of cooperation can be established under a wide variety of conditions, especially when the decisiveness is moderate, the uncertainty in the determination of phenotypic tags may have a detrimental effect on the tag mechanism by disturbing the homophilic interaction structure that explains the promotion of cooperation in tag systems. Furthermore, the non-deterministic mapping may undermine the robustness of the tag mechanism with respect to various factors such as the structure of the tag space and the tag flexibility. This observation warns us about the danger of applying classical tag-based models to the analysis of empirical phenomena if the genotype-phenotype distinction is significant in the real world. Non-deterministic genotype-phenotype mapping thus provides a new perspective on the understanding of tag-mediated cooperation.
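
    A toy sketch of the distinction the model introduces: the displayed (phenotypic) tag is a noisy function of the genotypic tag, and interaction is decided on phenotypes. This is illustrative only; the paper's model is richer:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def phenotype(genotypic_tag, noise_sd):
        """Non-deterministic genotype-to-phenotype mapping: the displayed
        tag is the genotypic tag plus zero-mean Gaussian noise."""
        return genotypic_tag + rng.normal(0.0, noise_sd)

    def cooperates(tag_i, tag_j, tolerance):
        """Tag-based rule: cooperate when phenotypic tags are similar."""
        return abs(tag_i - tag_j) < tolerance

    # identical genotypes no longer guarantee similar phenotypes:
    p1, p2 = phenotype(0.5, 0.2), phenotype(0.5, 0.2)
    print(cooperates(p1, p2, tolerance=0.1))
    ```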

  13. Integration Policies in Europe--A Web-Based Search for Consensus

    ERIC Educational Resources Information Center

    Öttl, Ulrich Franz Josef; Pichler, Bernhard; Schultze-Naumburg, Jonas; Wadispointner, Sabine

    2014-01-01

    Purpose: The purpose of the present paper is to describe a web-based consensus-finding procedure, resulting in an agreement among the group of participants representing global stakeholders regarding the interdisciplinary topic in a university master's seminar on "Global Studies". The result of the collectively elaborated solution…

  14. Challenges in the global QCD analysis of parton structure of nucleons

    NASA Astrophysics Data System (ADS)

    Tung, Wu-Ki

    2000-12-01

    We briefly summarize the current status of global QCD analysis of the parton structure of the nucleon and then highlight the open questions and challenges which confront this endeavor on which much of the phenomenology of the Standard Model and the search of New Physics depend.

  15. Anger and Globalization among Young People in India

    ERIC Educational Resources Information Center

    Suchday, Sonia

    2015-01-01

    This article addresses the challenges faced by youth in developing countries. Using India as an example of a fast-globalizing country, this article highlights the experience and challenges faced by adolescents and emerging adults as they search for their interpersonal and professional identities. The difficulties of defining identity in the…

  16. On-board autonomous attitude maneuver planning for planetary spacecraft using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Kornfeld, Richard P.

    2003-01-01

    A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft where the constraints and the initial and final conditions are time-varying. This paper presents an approach for attitude path planning that makes full use of a priori constraint knowledge and is computationally tractable enough to be executed on board a spacecraft. The approach is based on incorporating the constraints into a cost function and using a genetic algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used 'as is' or as an initial solution for additional deterministic optimization algorithms. A number of example simulations are presented, including the case examples of a generic Europa Orbiter spacecraft in cruise as well as in orbit around Europa. The search times are typically on the order of minutes, thus demonstrating the viability of the presented approach. The results are applicable to all future deep space missions where greater spacecraft autonomy is required. In addition, onboard autonomous attitude planning greatly facilitates navigation and science observation planning, thus also benefiting all missions to planet Earth.
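
    A compact sketch of the kind of real-coded genetic algorithm described, with constraints folded into the cost function (the operators and rates are generic choices, not the paper's exact configuration):

    ```python
    import numpy as np

    def genetic_search(cost, bounds, pop=60, gens=200, mut=0.1, rng=None):
        """Generic real-coded GA: tournament selection, uniform crossover,
        Gaussian mutation. cost maps a parameter vector to a scalar with
        constraint penalties already folded in."""
        rng = rng or np.random.default_rng(0)
        lo, hi = np.asarray(bounds, dtype=float).T
        P = rng.uniform(lo, hi, size=(pop, len(lo)))
        for _ in range(gens):
            f = np.array([cost(x) for x in P])
            i, j = rng.integers(pop, size=(2, pop))    # tournament selection
            parents = np.where((f[i] < f[j])[:, None], P[i], P[j])
            mask = rng.random(P.shape) < 0.5           # uniform crossover
            children = np.where(mask, parents, np.roll(parents, 1, axis=0))
            children += rng.normal(0, mut * (hi - lo), children.shape)
            P = np.clip(children, lo, hi)              # respect bounds
        f = np.array([cost(x) for x in P])
        return P[f.argmin()], f.min()
    ```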

  17. Space shuttle search and rescue experiment using synthetic aperture radar

    NASA Technical Reports Server (NTRS)

    Sivertson, W. E., Jr.; Larson, R. W.; Zelenka, J. S.

    1977-01-01

    The feasibility of a synthetic aperture radar for search and rescue applications was demonstrated with aircraft experiments. One experiment was conducted using the ERIM four-channel radar and several test sites in the Michigan area. In this test simple corner-reflector targets were successfully imaged. Results from this investigation were positive and indicate that the concept can be used to investigate new approaches focused on the development of a global search and rescue system. An orbital experiment to demonstrate the application of synthetic aperture radar to search and rescue is proposed using the space shuttle.

  18. Language Preferences on Websites and in Google Searches for Human Health and Food Information

    PubMed Central

    Singh, Punam Mony; Wight, Carly A; Sercinoglu, Olcan; Wilson, David C; Boytsov, Artem

    2007-01-01

    Background While it is known that the majority of pages on the World Wide Web are in English, little is known about the preferred language of users searching for health information online. Objectives (1) To help global and domestic publishers, for example health and food agencies, to determine the need for translation of online information from English into local languages. (2) To help these agencies determine which language(s) they should select when publishing information online in target nations and for target subpopulations within nations. Methods To estimate the percentage of Web publishers that translate their health and food websites, we measured the frequency at which domain names retrieved by Google overlap for language translations of the same health-related search term. To quantify language choice of searchers from different countries, Google provided estimates of the rate at which its search engine was queried in six languages relative to English for the terms “avian flu,” “tuberculosis,” “schizophrenia,” and “maize” (corn) from January 2004 to April 2006. The estimate was based on a 20% sample of all Google queries from 227 nations. Results We estimate that 80%-90% of health- and food-related institutions do not translate their websites into multiple languages, even when the information concerns pandemic disease such as avian influenza. Although Internet users are often well-educated, there was a strong preference for searching for health and food information in the local language, rather than English. For “avian flu,” we found that only 1% of searches in non-English-speaking nations were in English, whereas for “tuberculosis” or “schizophrenia,” about 4%-40% of searches in non-English countries employed English. A subset of searches for health information presumably originating from immigrants occurred in their native tongue, not the language of the adopted country. However, Spanish-language online searches for “avian flu,” “schizophrenia,” and “maize/corn” in the United States occurred at only <1% of the English search rate, although the US online Hispanic population constitutes 12% of the total US online population. Sub-Saharan Africa and Bangladesh searches for health information occurred in unexpected languages, perhaps reflecting the presence of aid workers and the global migration of Internet users, respectively. In Latin America, indigenous-language search terms were often used rather than Spanish. Conclusions (1) Based on the strong preference for searching the Internet for health information in the local language, indigenous language, or immigrant language of origin, global and domestic health and food agencies should continue their efforts to translate their institutional websites into more languages. (2) We have provided linguistic online search pattern data to help health and food agencies better select languages for targeted website publishing. PMID:17613488

  19. Optimizing human activity patterns using global sensitivity analysis.

    PubMed

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
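
    A minimal sketch of the harmony search loop used as the global optimizer (standard HS with fixed parameters; the bandwidth and rates are placeholder values):

    ```python
    import numpy as np

    def harmony_search(cost, bounds, hms=30, hmcr=0.9, par=0.3,
                       iters=5000, rng=None):
        """Minimal harmony search: new candidates are drawn from harmony
        memory (prob. hmcr), pitch-adjusted (prob. par), or sampled anew."""
        rng = rng or np.random.default_rng(0)
        lo, hi = np.asarray(bounds, dtype=float).T
        n = len(lo)
        HM = rng.uniform(lo, hi, size=(hms, n))          # harmony memory
        f = np.array([cost(x) for x in HM])
        bw = 0.05 * (hi - lo)                            # pitch bandwidth
        for _ in range(iters):
            x = np.empty(n)
            for d in range(n):
                if rng.random() < hmcr:
                    x[d] = HM[rng.integers(hms), d]      # memory consideration
                    if rng.random() < par:
                        x[d] += rng.uniform(-bw[d], bw[d])  # pitch adjustment
                else:
                    x[d] = rng.uniform(lo[d], hi[d])     # random selection
            x = np.clip(x, lo, hi)
            fx = cost(x)
            worst = f.argmax()
            if fx < f[worst]:                            # replace worst harmony
                HM[worst], f[worst] = x, fx
        return HM[f.argmin()], f.min()
    ```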

  20. Optimizing human activity patterns using global sensitivity analysis

    PubMed Central

    Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2014-01-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080
