Sample records for characterizing non-deterministic complexity

  1. Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series

    NASA Astrophysics Data System (ADS)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2018-07-01

    In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics; they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of fractal processes in the wavelet domain. This method has been validated on simulated signals and on real signals of economic and biological origin. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems and for uncovering interesting patterns present in time series.
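    The wavelet-shrinkage idea can be sketched in a few lines of NumPy. This is not the paper's Bayesian procedure: as a stand-in, it uses a plain Haar transform with a universal soft threshold, ordinary white Gaussian noise takes the place of the fractional Brownian motion component, and the band-limited deterministic part is a hypothetical sinusoid.

```python
import numpy as np

def haar_analysis(x):
    """Forward Haar DWT: returns the final approximation and all detail bands."""
    a, details = x.astype(float), []
    while len(a) > 1:
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
        details.append(d)
    return a, details

def haar_synthesis(a, details):
    """Inverse Haar DWT."""
    for d in reversed(details):
        out = np.empty(2 * len(d))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def shrink(x, sigma):
    """Soft-threshold all detail coefficients with the universal threshold."""
    a, details = haar_analysis(x)
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
    return haar_synthesis(a, details)

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n) / n
clean = np.sin(2 * np.pi * 4 * t)              # band-limited "deterministic" part
noisy = clean + 0.3 * rng.standard_normal(n)   # white noise stands in for fBm
estimate = shrink(noisy, sigma=0.3)            # recovered deterministic component
```

    Large coefficients at coarse scales carry the smooth deterministic component and survive the threshold, while the stochastic component spreads thinly across all scales and is suppressed.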

  2. The Study of the Relationship between Probabilistic Design and Axiomatic Design Methodology. Volume 2

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chin-Yere; Onyebueke, Landon

    1996-01-01

    The structural design, or the design of machine elements, has traditionally been based on deterministic design methodology. The deterministic method considers all design parameters to be known with certainty. This methodology is, therefore, inadequate for designing complex structures that are subjected to a variety of complex, severe loading conditions. A nonlinear behavior that depends on stress, stress rate, temperature, number of load cycles, and time is observed in all components subjected to complex conditions. These complex conditions introduce uncertainties; hence, the actual safety margin remains unknown. In the deterministic methodology, the contingency of failure is discounted; hence, a high factor of safety is used. The deterministic method may be most useful in situations where the design structures are simple. The probabilistic method is concerned with the probability of non-failure performance of structures or machine elements. It is much more useful in situations where the design is characterized by complex geometry, possibility of catastrophic failure, or sensitive loads and material properties. Also included: Comparative Study of the Use of AGMA Geometry Factors and Probabilistic Design Methodology in the Design of Compact Spur Gear Set.
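    The contrast between a deterministic safety factor and a probability of failure can be made concrete with the classic stress-strength interference calculation; the numbers below are hypothetical.

```python
import math

def failure_probability(mu_strength, sd_strength, mu_load, sd_load):
    """P(load > strength) for independent, normally distributed load and strength."""
    z = (mu_strength - mu_load) / math.sqrt(sd_strength**2 + sd_load**2)
    return 0.5 * math.erfc(z / math.sqrt(2))   # = 1 - Phi(z)

# Two designs with the same deterministic safety factor (300/200 = 1.5),
# but very different scatter in load and strength:
p_tight = failure_probability(300.0, 10.0, 200.0, 10.0)
p_loose = failure_probability(300.0, 40.0, 200.0, 40.0)
```

    The deterministic factor of safety is identical for both designs, yet the noisier one is many orders of magnitude more likely to fail; this is exactly the information the probabilistic method surfaces.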

  3. Analysis of wireless sensor network topology and estimation of optimal network deployment by deterministic radio channel characterization.

    PubMed

    Aguirre, Erik; Lopez-Iturri, Peio; Azpilicueta, Leire; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco

    2015-02-05

    One of the main challenges in the implementation and design of context-aware scenarios is the adequate deployment strategy for Wireless Sensor Networks (WSNs), mainly due to the strong dependence of the radiofrequency physical layer on the surrounding media, which can lead to non-optimal network designs. In this work, radioplanning analysis for WSN deployment is proposed, employing a deterministic 3D ray launching technique to provide insight into complex wireless channel behavior in context-aware indoor scenarios. The proposed radioplanning procedure is validated with a testbed implemented with a Mobile Ad Hoc Network WSN following a chain configuration, enabling the analysis and assessment of a rich variety of parameters, such as received signal level, signal quality, and estimation of power consumption. The adoption of deterministic radio channel techniques allows the design and subsequent deployment of WSNs in heterogeneous wireless scenarios with optimized behavior in terms of coverage, capacity, quality of service, and energy consumption.
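    Full 3D ray launching is beyond a short sketch, but the link-budget check that radioplanning ultimately feeds can be illustrated with a simple log-distance path-loss model; the node spacings, path-loss exponent, and receiver sensitivity below are hypothetical.

```python
import math

def received_power_dbm(p_tx_dbm, d_m, f_hz, n=2.0, d0=1.0):
    """Log-distance path loss: free-space loss at reference distance d0,
    then 10*n*log10(d/d0) beyond it (n ~ 2 in free space, larger indoors)."""
    c = 3e8
    fspl_d0 = 20 * math.log10(4 * math.pi * d0 * f_hz / c)
    return p_tx_dbm - fspl_d0 - 10 * n * math.log10(d_m / d0)

# Chain-configuration WSN: check every hop against a -90 dBm sensitivity.
hops_m = [5.0, 8.0, 12.0]                      # hypothetical node spacings
all_links_ok = all(received_power_dbm(0.0, d, 2.4e9, n=2.8) > -90.0
                   for d in hops_m)
```

    A deterministic ray-launching tool replaces the single exponent n with site-specific multipath computation, but the pass/fail decision per link has the same shape.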

  4. Characterizing Uncertainty and Variability in PBPK Models ...

    EPA Pesticide Factsheets

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state of the science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretical and practical methodological improvements…
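    Variability propagation of the kind discussed above can be illustrated with a deliberately minimal one-compartment PK model (far simpler than a real PBPK model); the dose, volume, and rate parameters and their lognormal spreads are hypothetical.

```python
import numpy as np

def concentration(t, dose, V, k):
    """One-compartment IV bolus model: C(t) = (dose / V) * exp(-k * t)."""
    return dose / V * np.exp(-k * t)

rng = np.random.default_rng(1)
n = 5000
# Hypothetical inter-individual variability as lognormal parameters:
V = rng.lognormal(mean=np.log(40.0), sigma=0.2, size=n)   # volume, L
k = rng.lognormal(mean=np.log(0.1), sigma=0.3, size=n)    # elimination, 1/h
c6 = concentration(6.0, dose=100.0, V=V, k=k)             # concentration at 6 h
lo, med, hi = np.percentile(c6, [5, 50, 95])              # population band
```

    A deterministic run returns only the single value concentration(6.0, 100.0, 40.0, 0.1); the Monte-Carlo percentile band is what characterizes variability.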

  5. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a deterministic computational interpolation scheme which identifies the most significant expansion coefficients adaptively. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
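    The core idea, approximating the parametric solution by a polynomial expansion and keeping only the significant coefficients, can be sketched in one parameter dimension with a Chebyshev expansion (the full method uses adaptive Smolyak grids in high dimension; the parametric "solution" u(p) below is a hypothetical smooth function).

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def u(p):
    """Hypothetical smooth parametric solution of a kinetic model, p in [-1, 1]."""
    return np.exp(0.5 * p) / (1.0 + 0.3 * p**2)

deg = 32
nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))  # Chebyshev points
coeffs = C.chebfit(nodes, u(nodes), deg)

# "Adaptive" truncation: retain only coefficients above a tolerance.
tol = 1e-10
sparse = np.where(np.abs(coeffs) > tol, coeffs, 0.0)
n_kept = int(np.count_nonzero(sparse))

p_test = np.linspace(-1.0, 1.0, 201)
max_err = np.max(np.abs(C.chebval(p_test, sparse) - u(p_test)))
```

    Because u is analytic, its coefficients decay geometrically, so a small fraction of them reproduces the solution over the whole parameter interval; smooth parameter dependence is the mechanism behind the higher-than-Monte-Carlo convergence rates.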

  6. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks

    PubMed Central

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-01-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a deterministic computational interpolation scheme which identifies the most significant expansion coefficients adaptively. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784

  7. Controllability of Deterministic Networks with the Identical Degree Sequence

    PubMed Central

    Ma, Xiujuan; Zhao, Haixing; Wang, Binghong

    2015-01-01

    Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network depends on the network's topology. Liu, Barabási, and co-workers speculated that the degree distribution was one of the most important factors affecting controllability for arbitrary complex directed networks with random link weights. In this paper, we analysed the effect of the degree distribution on the controllability of deterministic networks that are unweighted and undirected. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analysed the controllability of two deterministic networks ((1,3)-flower and (2,2)-flower) in detail using exact controllability theory and give exact results for the minimum number of driver nodes of the two networks. In simulation, we compare the controllability of (x,y)-flower networks. Our results show that networks in the (x,y)-flower family have the same degree sequence, but their controllability is totally different. Thus the degree distribution by itself is not sufficient to characterize the controllability of deterministic networks that are unweighted and undirected. PMID:26020920
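    Under exact controllability theory for undirected, unweighted networks, the minimum number of driver nodes equals the largest algebraic multiplicity among the adjacency-matrix eigenvalues. A sketch, using hypothetical toy graphs rather than the paper's (x,y)-flowers:

```python
import numpy as np

def min_driver_nodes(adj, tol=1e-8):
    """N_D = maximum eigenvalue multiplicity of the (symmetric) adjacency matrix."""
    eig = np.sort(np.linalg.eigvalsh(np.asarray(adj, dtype=float)))
    best = run = 1
    for i in range(1, len(eig)):
        run = run + 1 if eig[i] - eig[i - 1] < tol else 1
        best = max(best, run)
    return best

star = np.array([[0, 1, 1, 1],          # K_{1,3}: eigenvalue 0 has multiplicity 2
                 [1, 0, 0, 0],
                 [1, 0, 0, 0],
                 [1, 0, 0, 0]])
path = np.array([[0, 1, 0],             # P_3: all eigenvalues are simple
                 [1, 0, 1],
                 [0, 1, 0]])
```

    Here min_driver_nodes(star) is 2 (the leaves cannot all be steered through the hub) while min_driver_nodes(path) is 1.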

  8. Using Reputation Systems and Non-Deterministic Routing to Secure Wireless Sensor Networks

    PubMed Central

    Moya, José M.; Vallejo, Juan Carlos; Fraga, David; Araujo, Álvaro; Villanueva, Daniel; de Goyeneche, Juan-Mariano

    2009-01-01

    Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios. PMID:22412345
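    The interplay of reputation and non-deterministic routing can be sketched as weighted random next-hop selection; the node names and trust scores are hypothetical, and the real framework's detection and feedback logic is not shown.

```python
import random

def choose_next_hop(neighbors, reputation, rng):
    """Pick the next hop at random, weighted by trust, so that traffic mostly
    avoids low-reputation nodes while staying unpredictable to an attacker."""
    return rng.choices(neighbors, weights=[reputation[n] for n in neighbors], k=1)[0]

rng = random.Random(42)
reputation = {"a": 0.9, "b": 0.8, "c": 0.05}   # "c" is suspected compromised
picks = [choose_next_hop(["a", "b", "c"], reputation, rng) for _ in range(2000)]
share_c = picks.count("c") / len(picks)
```

    The suspected node still receives a trickle of traffic (so its reputation can recover if it behaves), but it can no longer predict or monopolize the route.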

  9. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective on a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity continues to tax the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model into a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.
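    Goal softening can be illustrated numerically: instead of demanding the single true best design from noisy evaluations, accept any of the observed top-s designs and ask whether that set intersects the true top-g. The design count and noise level below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n_designs, noise_sd = 1000, 1.0
true_perf = np.sort(rng.uniform(0.0, 10.0, n_designs))  # lower is better; index < g means truly good

def alignment_probability(s, g, trials=200):
    """Fraction of trials in which the observed top-s set contains at least
    one truly top-g design, despite heavy noise in every evaluation."""
    hits = 0
    for _ in range(trials):
        observed = true_perf + noise_sd * rng.standard_normal(n_designs)
        if np.any(np.argsort(observed)[:s] < g):
            hits += 1
    return hits / trials

p_soft = alignment_probability(s=50, g=50)
```

    Ordinal comparison is robust because ranks settle long before values do: even with noise comparable to the performance spread, a softened goal is met with probability close to one.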

  10. A Study of Students' Reasoning about Probabilistic Causality: Implications for Understanding Complex Systems and for Instructional Design

    ERIC Educational Resources Information Center

    Grotzer, Tina A.; Solis, S. Lynneth; Tutwiler, M. Shane; Cuzzolino, Megan Powell

    2017-01-01

    Understanding complex systems requires reasoning about causal relationships that behave or appear to behave probabilistically. Features such as distributed agency, large spatial scales, and time delays obscure co-variation relationships and complex interactions can result in non-deterministic relationships between causes and effects that are best…

  11. Spatial delineation, fluid-lithology characterization, and petrophysical modeling of deepwater Gulf of Mexico reservoirs through joint AVA deterministic and stochastic inversion of three-dimensional partially-stacked seismic amplitude data and well logs

    NASA Astrophysics Data System (ADS)

    Contreras, Arturo Javier

    This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generates typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that the presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results.
By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.

  12. Hybrid deterministic/stochastic simulation of complex biochemical systems.

    PubMed

    Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina

    2017-11-21

    In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, which are not always explorable with wet-lab experiments. To serve their purpose, computational models should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented with efficient algorithms requiring the shortest possible execution time, to avoid excessively enlarging the time elapsing between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which are often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundances fluctuate by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reaction set into fast reactions, moderate reactions, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. Moderate reactions are those whose reaction waiting time is greater than the fast-reaction waiting time but smaller than the slow-reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crossed the threshold of low (high) waiting time. A Gillespie First Reaction Method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics: DNA transcription regulation. The simulated dynamic profile of the reagents' abundance and the estimate of the error introduced by the fully deterministic approach were used to evaluate the consistency of the computational model and of the software tool.
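    The Gillespie First Reaction Method mentioned above can be shown standalone on a minimal birth-death system. This is only the slow-reaction engine, not the MoBioS hybrid with its ODE part and hysteresis switching, and the rate constants are hypothetical.

```python
import math
import random

def first_reaction_ssa(k_birth, k_death, x0, t_end, rng):
    """First Reaction Method for 0 -> X (rate k_birth) and X -> 0 (rate k_death*x):
    sample a tentative firing time for every reaction, execute the earliest one."""
    t, x, samples = 0.0, x0, []
    while t < t_end:
        rates = [k_birth, k_death * x]
        taus = [-math.log(rng.random()) / a if a > 0 else math.inf for a in rates]
        j = 0 if taus[0] <= taus[1] else 1
        t += taus[j]
        x += 1 if j == 0 else -1
        samples.append(x)
    return samples

rng = random.Random(3)
traj = first_reaction_ssa(k_birth=50.0, k_death=1.0, x0=50, t_end=200.0, rng=rng)
tail = traj[len(traj) // 2:]
mean_x = sum(tail) / len(tail)   # fluctuates around k_birth / k_death = 50
```

    In a hybrid scheme, only the reactions classified as slow would be handled this way, with the fast ones folded into deterministic rate equations between stochastic events.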

  13. Origins of Chaos in Autonomous Boolean Networks

    NASA Astrophysics Data System (ADS)

    Socolar, Joshua; Cavalcante, Hugo; Gauthier, Daniel; Zhang, Rui

    2010-03-01

    Networks with nodes consisting of ideal Boolean logic gates are known to display either steady states, periodic behavior, or an ultraviolet catastrophe where the number of logic-transition events circulating in the network per unit time grows as a power law. In an experiment, non-ideal behavior of the logic gates prevents the ultraviolet catastrophe and may lead to deterministic chaos. We identify certain non-ideal features of real logic gates that enable chaos in experimental networks. We find that short-pulse rejection and the asymmetry between the logic states tend to engender periodic behavior, whereas a memory effect termed "degradation" can generate chaos. Our results strongly suggest that deterministic chaos can be expected in a large class of experimental Boolean-like networks. Such devices may find application in a variety of technologies requiring fast complex waveforms or flat power spectra. The non-ideal effects identified here also have implications for the statistics of attractors in large complex networks.

  14. Automatic design of synthetic gene circuits through mixed integer non-linear programming.

    PubMed

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits.
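    For a library small enough to enumerate, the deterministic global optimum that an MINLP solver would return can be found by brute force. The parts, strengths, objective, and constraint below are all hypothetical stand-ins for a characterized part library.

```python
import itertools

promoters = [("pLow", 0.2), ("pMed", 1.0), ("pHigh", 5.0)]   # (name, strength)
rbss      = [("rWeak", 0.5), ("rStrong", 2.0)]

target = 2.0   # desired expression level of one circuit node
budget = 6.0   # constraint: combined strength must not exceed this

best = min(
    (pair for pair in itertools.product(promoters, rbss)
     if pair[0][1] * pair[1][1] <= budget),                  # feasibility
    key=lambda pair: abs(pair[0][1] * pair[1][1] - target),  # objective
)
```

    Like MINLP on this scale, exhaustive enumeration is deterministic and certifies global optimality; it simply stops scaling as the library grows, which is exactly the gap the MINLP formulation addresses.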

  15. ASP-based method for the enumeration of attractors in non-deterministic synchronous and asynchronous multi-valued networks.

    PubMed

    Ben Abdallah, Emna; Folschette, Maxime; Roux, Olivier; Magnin, Morgan

    2017-01-01

    This paper addresses the problem of finding attractors in biological regulatory networks. We focus here on non-deterministic synchronous and asynchronous multi-valued networks, modeled using automata networks (AN). AN is a general and well-suited formalism for studying complex interactions between different components (genes, proteins, ...). An attractor is a minimal trap domain, that is, a part of the state-transition graph that cannot be escaped. Such structures are terminal components of the dynamics and take the form of steady states (singletons) or complex compositions of cycles (non-singletons). Studying the effect of a disease or a mutation on an organism requires finding the attractors in the model to understand the long-term behaviors. We present a computational logical method based on answer set programming (ASP) to identify all attractors. Performed without any network reduction, the method can be applied to any dynamical semantics. In this paper, we present the two most widespread non-deterministic semantics: the asynchronous and the synchronous updating modes. The logical approach goes through a complete enumeration of the states of the network to find the attractors without having to construct the whole state-transition graph. We perform extensive computational experiments, which show good performance and fit the theoretical results expected in the literature. The originality of our approach lies in the exhaustive enumeration of all possible (sets of) states verifying the properties of an attractor, thanks to the use of ASP. Our method is applied to non-deterministic semantics in two different schemes (asynchronous and synchronous). The merits of our methods are illustrated by applying them to biological examples of various sizes and comparing the results with some existing approaches. It turns out that our approach succeeds in exhaustively enumerating, on a desktop computer, all attractors of a large model (100 components) up to a given size (20 states). This size is limited only by memory and computation time.
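    On a toy scale the attractor definition can be checked directly: enumerate the asynchronous state-transition graph and keep the terminal strongly connected parts (the minimal trap domains). The two-gene toggle switch below is a hypothetical example, and brute force stands in for the paper's ASP enumeration.

```python
from itertools import product

def successors(state):
    """Asynchronous updates of the toggle switch x1' = not x2, x2' = not x1."""
    x1, x2 = state
    succ = set()
    if x1 != int(not x2):
        succ.add((int(not x2), x2))       # update gene 1 only
    if x2 != int(not x1):
        succ.add((x1, int(not x1)))       # update gene 2 only
    return succ or {state}                # fixed point: self-loop

def attractors():
    """Attractors = sets of mutually reachable states that cannot be escaped."""
    states = list(product([0, 1], repeat=2))
    reach = {s: {s} for s in states}
    changed = True
    while changed:                        # transitive closure of reachability
        changed = False
        for s in states:
            new = reach[s].union(*(reach[t] for t in successors(s)))
            if new != reach[s]:
                reach[s], changed = new, True
    return {frozenset(reach[s]) for s in states
            if all(s in reach[t] for t in reach[s])}
```

    Here attractors() returns the two steady states {(1, 0)} and {(0, 1)}, the expected bistability of a toggle switch; the ASP approach reaches the same answer without materializing the full state-transition graph.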

  16. Will systems biology offer new holistic paradigms to life sciences?

    PubMed Central

    Conti, Filippo; Valerio, Maria Cristina; Zbilut, Joseph P.

    2008-01-01

    A biological system, like any complex system, blends stochastic and deterministic features, displaying properties of both. In a certain sense, this blend is exactly what we perceive as the “essence of complexity”, given that we tend to consider as non-complex both an ideal gas (fully stochastic and understandable at the statistical level in the thermodynamic limit of a huge number of particles) and a frictionless pendulum (fully deterministic relative to its motion). In this commentary we make the statement that systems biology will have a relevant impact on today's biology if (and only if) it is able to capture the essential character of this blend, which in our opinion is the generation of globally ordered collective modes supported by locally stochastic atomisms. PMID:19003440

  17. Disentangling the stochastic behavior of complex time series

    NASA Astrophysics Data System (ADS)

    Anvari, Mehrnaz; Tabar, M. Reza Rahimi; Peinke, Joachim; Lehnertz, Klaus

    2016-10-01

    Complex systems involving a large number of degrees of freedom generally exhibit non-stationary dynamics, which can result in either continuous or discontinuous sample paths of the corresponding time series. The latter sample paths may be caused by discontinuous events - or jumps - with some distributed amplitudes, and disentangling effects caused by such jumps from effects caused by normal diffusion processes is a main problem for a detailed understanding of the stochastic dynamics of complex systems. Here we introduce a non-parametric method to address this general problem. By means of stochastic dynamical jump-diffusion modelling, we separate deterministic drift terms from different stochastic behaviors, namely diffusive and jumpy ones, and show that all of the unknown functions and coefficients of this modelling can be derived directly from measured time series. We demonstrate the applicability of our method to empirical observations by a data-driven inference of the deterministic drift term and of the diffusive and jumpy behavior in brain dynamics from ten epilepsy patients. In particular, these different stochastic behaviors provide extra information that can be regarded as valuable for diagnostic purposes.
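    The drift-estimation half of this program (without the jump part) can be sketched via the first Kramers-Moyal coefficient of a simulated Ornstein-Uhlenbeck series; all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
dw = rng.standard_normal(n - 1) * np.sqrt(dt)
for i in range(n - 1):                    # Euler-Maruyama: dx = -theta*x dt + sigma dW
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * dw[i]

# First Kramers-Moyal coefficient: D1(x) = <X(t+dt) - X(t) | X(t) = x> / dt
edges = np.linspace(-1.0, 1.0, 21)
idx = np.digitize(x[:-1], edges)
centers, drift = [], []
for b in range(1, len(edges)):
    mask = idx == b
    if mask.sum() > 500:                  # keep only well-populated bins
        centers.append(x[:-1][mask].mean())
        drift.append((x[1:][mask] - x[:-1][mask]).mean() / dt)
slope = np.polyfit(centers, drift, 1)[0]  # should recover -theta
```

    The conditional mean increment per unit time, estimated purely from the measured series, recovers the linear restoring drift; the paper's method extends this with second-moment (diffusion) and jump-amplitude estimators.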

  18. Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming

    PubMed Central

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guaranties on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfy the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398

  19. Calibration of an Unsteady Groundwater Flow Model for a Complex, Strongly Heterogeneous Aquifer

    NASA Astrophysics Data System (ADS)

    Curtis, Z. K.; Liao, H.; Li, S. G.; Phanikumar, M. S.; Lusch, D.

    2016-12-01

    Modeling of groundwater systems characterized by complex three-dimensional structure and heterogeneity remains a significant challenge. Most of today's groundwater models are developed from relatively simple conceptual representations to keep model calibration tractable. As more complexities are modeled, e.g., by adding more layers and/or zones, or by introducing transient processes, more parameters have to be estimated, and issues related to ill-posed groundwater problems and non-unique calibration arise. Here, we explore the use of an alternative conceptual representation for groundwater modeling that is fully three-dimensional and can capture complex 3D heterogeneity (both systematic and "random") without over-parameterizing the aquifer system. In particular, we apply Transition Probability (TP) geostatistics to high-resolution borehole data from a water well database to characterize the complex 3D geology. Different aquifer material classes, e.g., 'AQ' (aquifer material), 'MAQ' (marginal aquifer material), 'PCM' (partially confining material), and 'CM' (confining material), are simulated, with the hydraulic properties of each material type as tuning parameters during calibration. The TP-based approach is applied to simulate unsteady groundwater flow in a large, complex, and strongly heterogeneous glacial aquifer system in Michigan across multiple spatial and temporal scales. The resulting model is calibrated to observed static water level data over a time span of 50 years. The results show that the TP-based conceptualization enables much more accurate and robust calibration/simulation than conventional deterministic layer/zone-based conceptual representations.
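    The first step of TP geostatistics, estimating a transition-probability matrix between material classes from borehole logs, can be sketched directly; the facies sequence below is a short hypothetical log, not the Michigan data.

```python
import numpy as np

# Hypothetical vertical facies log from one borehole (listed top to bottom).
facies = ["AQ", "AQ", "MAQ", "PCM", "CM", "CM", "CM", "PCM", "MAQ", "AQ",
          "AQ", "AQ", "MAQ", "PCM", "PCM", "CM", "CM", "MAQ", "AQ", "AQ"]
labels = ["AQ", "MAQ", "PCM", "CM"]
index = {f: i for i, f in enumerate(labels)}

counts = np.zeros((len(labels), len(labels)))
for a, b in zip(facies[:-1], facies[1:]):        # downward transitions
    counts[index[a], index[b]] += 1
tp = counts / counts.sum(axis=1, keepdims=True)  # row-stochastic TP matrix
```

    Each row of tp is the conditional probability of the next material class given the current one; in the full method these probabilities, expressed as a function of lag, drive conditional simulation of the 3D heterogeneity.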

  20. Non-Lipschitzian dynamics for neural net modelling

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    Failure of the Lipschitz condition in unstable equilibrium points of dynamical systems leads to a multiple-choice response to an initial deterministic input. The evolution of such systems is characterized by a special type of unpredictability measured by unbounded Liapunov exponents. Possible relation of these systems to future neural networks is discussed.
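    The loss of uniqueness described above can be verified numerically on the textbook example dx/dt = |x|^(1/3), which is continuous but not Lipschitz at x = 0: both x(t) = 0 and x(t) = (2t/3)^(3/2) solve it from x(0) = 0.

```python
def f(x):
    return abs(x) ** (1.0 / 3.0)      # continuous, non-Lipschitz at x = 0

def x_branch(t):
    return (2.0 * t / 3.0) ** 1.5     # a non-trivial solution leaving x = 0

# Check both solutions satisfy dx/dt = f(x): the trivial one exactly
# (f(0) == 0), the branching one via a central finite difference.
h = 1e-6
residuals = [abs((x_branch(t + h) - x_branch(t - h)) / (2 * h) - f(x_branch(t)))
             for t in (0.5, 1.0, 2.0)]
```

    From the same deterministic initial condition the system may stay at rest or depart along x_branch, illustrating the multiple-choice response to a deterministic input that the abstract describes.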

  21. Application fields for the new Object Management Group (OMG) Standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN) in the perioperative field.

    PubMed

    Wiemuth, M; Junger, D; Leitritz, M A; Neumann, J; Neumuth, T; Burgert, O

    2017-08-01

    Medical processes can be modeled using different methods and notations. Currently used modeling systems like Business Process Model and Notation (BPMN) are not capable of describing the highly flexible and variable medical processes in sufficient detail. We combined two modeling systems, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes. We used the new Standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN). First, we explain how CMMN, DMN and BPMN could be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN. We were able to expand the system with more complex situations that might appear during an intervention. An effective modeling of the cataract intervention is possible using the combination of BPM and ACM. The combination gives the possibility to depict complex processes with complex decisions. This combination allows a significant advantage for modeling perioperative processes.

  2. Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments.

    PubMed

    Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter

    2017-01-01

    Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments - one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line model. The derived time domain channel model takes into account major propagation controlling factors including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both channel delay spread and angular spread for tunnel environments are relatively small compared to those of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments.
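
    The tapped-delay-line form of the CIR lends itself to a compact sketch; the tap delays and gains below are invented for illustration, not derived from the paper's waveguide model:

```python
import numpy as np

# Hypothetical tapped-delay-line CIR h(t) = sum_k a_k * delta(t - tau_k);
# the taps are illustrative, not computed from the tunnel geometry.
taus = np.array([0.0, 15e-9, 40e-9, 90e-9])   # excess delays (s)
gains = np.array([1.0, 0.5, 0.25, 0.1])       # amplitude of each path

def rms_delay_spread(taus, gains):
    """Power-weighted RMS delay spread of a discrete multipath profile."""
    p = gains ** 2                            # tap powers
    mean = np.sum(p * taus) / np.sum(p)       # mean excess delay
    mean_sq = np.sum(p * taus ** 2) / np.sum(p)
    return np.sqrt(mean_sq - mean ** 2)

sigma_tau = rms_delay_spread(taus, gains)     # on the order of tens of ns
```

    Delay spread computed this way is the quantity the paper compares between tunnel and indoor channels.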

  3. Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments

    PubMed Central

    Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter

    2018-01-01

    Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments — one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line model. The derived time domain channel model takes into account major propagation controlling factors including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both channel delay spread and angular spread for tunnel environments are relatively small compared to those of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments. PMID:29457801

  4. Complexity and health professions education: a basic glossary.

    PubMed

    Mennin, Stewart

    2010-08-01

    The study of health professions education in the context of complexity science and complex adaptive systems involves different concepts and terminology that are likely to be unfamiliar to many health professions educators. A list of selected key terms and definitions from the literature of complexity science is provided to assist readers to navigate familiar territory from a different perspective. Terms defined include agent, attractor, bifurcation, chaos, co-evolution, collective variable, complex adaptive systems, complexity science, deterministic systems, dynamical system, edge of chaos, emergence, equilibrium, far from equilibrium, fuzzy boundaries, linear system, non-linear system, random, self-organization and self-similarity.

  5. Correlations in electrically coupled chaotic lasers.

    PubMed

    Rosero, E J; Barbosa, W A S; Martinez Avila, J F; Khoury, A Z; Rios Leite, J R

    2016-09-01

    We show how two electrically coupled semiconductor lasers having optical feedback can present simultaneous antiphase correlated fast power fluctuations, and strong in-phase synchronized spikes of chaotic power drops. This quite counterintuitive phenomenon is demonstrated experimentally and confirmed by numerical solutions of a deterministic dynamical system of rate equations. The occurrence of negative and positive cross correlation between parts of a complex system according to time scales, as proved in our simple arrangement, is relevant for the understanding and characterization of collective properties in complex networks.
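
    The sign flip of correlation with time scale can be illustrated on synthetic data (a toy stand-in, not the laser rate equations): two series share in-phase slow power drops but carry antiphase fast fluctuations:

```python
import numpy as np

# Synthetic signals: shared (in-phase) slow dropouts plus antiphase noise.
rng = np.random.default_rng(0)
t = np.arange(20000)
drops = (np.sin(2 * np.pi * t / 4000) < -0.95).astype(float)  # shared drops
fast = rng.standard_normal(t.size)
a = -5.0 * drops + fast
b = -5.0 * drops - fast

def smooth(x, w=200):
    # Moving-average low-pass filter isolating the slow component.
    return np.convolve(x, np.ones(w) / w, mode="same")

corr_fast = np.corrcoef(np.diff(a), np.diff(b))[0, 1]   # fast scale: antiphase
corr_slow = np.corrcoef(smooth(a), smooth(b))[0, 1]     # slow scale: in phase
```

    The same pair of signals is strongly anticorrelated at the fast scale and strongly correlated at the slow scale, mirroring the lasers' behavior.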

  6. Down to the roughness scale assessment of piston-ring/liner contacts

    NASA Astrophysics Data System (ADS)

    Checo, H. M.; Jaramillo, A.; Ausas, R. F.; Jai, M.; Buscaglia, G. C.

    2017-02-01

    The effects of surface roughness in hydrodynamic bearings have been accounted for through several approaches, the most widely used being averaging or stochastic techniques. With these, the surface is not treated "as it is" but through an assumed probability distribution for the roughness. The so-called direct, deterministic, or measured-surface simulations solve the lubrication problem with realistic surfaces down to the roughness scale. This leads to computationally expensive problems. Most researchers have tackled this problem considering non-moving surfaces and neglecting the ring dynamics to reduce the computational burden. What is proposed here is to solve the fully deterministic simulation both in space and in time, so that the actual movement of the surfaces and the ring dynamics are taken into account. This simulation is much more complex than previous ones, as it is intrinsically transient. The feasibility of these fully deterministic simulations is illustrated in two cases: simulation of liner surfaces with diverse finishes (honed and coated bores) with constant piston velocity and load on the ring, and simulation in real engine conditions.

  7. Assessing information content and interactive relationships of subgenomic DNA sequences of the MHC using complexity theory approaches based on the non-extensive statistical mechanics

    NASA Astrophysics Data System (ADS)

    Karakatsanis, L. P.; Pavlos, G. P.; Iliopoulos, A. C.; Pavlos, E. G.; Clark, P. M.; Duke, J. L.; Monos, D. S.

    2018-09-01

    This study combines two independent domains of science, the high-throughput DNA sequencing capabilities of genomics and complexity theory from physics, to assess the information encoded by the different genomic segments of exonic, intronic and intergenic regions of the Major Histocompatibility Complex (MHC) and to identify possible interactive relationships. The dynamic and non-extensive statistical characteristics of two well characterized MHC sequences from the homozygous cell lines, PGF and COX, in addition to two other genomic regions of comparable size used as controls, have been studied using the reconstructed phase space theorem and the non-extensive statistical theory of Tsallis. The results reveal similar non-linear dynamical behavior with respect to complexity and self-organization features. In particular, the low-dimensional deterministic nonlinear chaotic and non-extensive statistical character of the DNA sequences was verified, with strong multifractal characteristics and long-range correlations. The nonlinear indices repeatedly verified that MHC sequences, whether exonic, intronic or intergenic, include varying levels of information and reveal an interaction of the genes with intergenic regions, whereby the lower the number of genes in a region, the less the complexity and information content of the intergenic region. Finally, we showed the significance of the intergenic region in the production of the DNA dynamics. The findings reveal interesting information content in all three genomic elements and interactive relationships of the genes with the intergenic regions. The results most likely are relevant to the whole genome and not only to the MHC. These findings are consistent with the ENCODE project, which has established that the non-coding regions of the genome remain relevant, as they are functionally important and play a significant role in the regulation of expression of genes and the coordination of the many biological processes of the cell.
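
    A minimal sketch of the non-extensive statistic involved, computed for a toy nucleotide string (the actual analysis uses full MHC segments together with phase-space reconstruction):

```python
import numpy as np

# Tsallis q-entropy S_q = (1 - sum p_i**q) / (q - 1), with the Shannon
# entropy recovered in the limit q -> 1 (Boltzmann constant set to 1).
def tsallis_entropy(probs, q):
    probs = np.asarray(probs, dtype=float)
    probs = probs[probs > 0]
    if q == 1.0:                       # Shannon limit as q -> 1
        return -np.sum(probs * np.log(probs))
    return (1.0 - np.sum(probs ** q)) / (q - 1.0)

seq = "ACGTACGGTTACGATC"               # toy sequence, not MHC data
counts = np.array([seq.count(b) for b in "ACGT"])
p = counts / counts.sum()

s_shannon = tsallis_entropy(p, 1.0)
s_sub = tsallis_entropy(p, 1.8)        # sub-extensive value for q > 1
```

    Scanning such q-dependent entropies over genomic windows is one way non-extensive statistics quantify the information content differences the abstract describes.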

  8. Estimation of electromagnetic dosimetric values from non-ionizing radiofrequency fields in an indoor commercial airplane environment.

    PubMed

    Aguirre, Erik; Arpón, Javier; Azpilicueta, Leire; López, Peio; de Miguel, Silvia; Ramos, Victoria; Falcone, Francisco

    2014-12-01

    In this article, the impact of the topology and morphology of a complex indoor environment, such as a commercial aircraft, on the estimation of dosimetric values is presented. By means of an in-house developed deterministic 3D ray-launching code, an estimate of electric field amplitude as a function of position for the complete volume of a commercial passenger airplane is obtained. Estimation of electromagnetic field exposure in this environment is challenging due to the complexity and size of the scenario, as well as the large metallic content, which gives rise to strong multipath components. By performing the calculation with a deterministic technique, the complete scenario can be considered with an optimized balance between accuracy and computational cost. The proposed method can aid in the assessment of electromagnetic dosimetry in the future deployment of embarked wireless systems in commercial aircraft.
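
    The flavor of a ray-based field estimate can be sketched by summing the complex contributions of a direct and a single reflected ray (geometry, frequency, and reflection coefficient invented for illustration; a real ray-launching code traces thousands of rays against the aircraft geometry):

```python
import numpy as np

C = 3e8
F = 2.4e9                       # Wi-Fi-band frequency
K = 2 * np.pi * F / C           # wavenumber

def ray_field(d_direct, d_reflected, refl_coeff=-0.9):
    """Complex E-field (arbitrary units) of direct + one reflected ray."""
    direct = np.exp(-1j * K * d_direct) / d_direct
    bounced = refl_coeff * np.exp(-1j * K * d_reflected) / d_reflected
    return direct + bounced

# Multipath fading: amplitude varies strongly as the reflected path length
# sweeps through one wavelength (0.125 m at 2.4 GHz).
amps = [abs(ray_field(5.0, 5.0 + delta)) for delta in np.linspace(0, 0.125, 50)]
```

    The large swing in amplitude over sub-wavelength displacements is the strong multipath behavior the abstract attributes to the metallic cabin.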

  9. Self-Organized Dynamic Flocking Behavior from a Simple Deterministic Map

    NASA Astrophysics Data System (ADS)

    Krueger, Wesley

    2007-10-01

    Coherent motion exhibiting large-scale order, such as flocking, swarming, and schooling behavior in animals, can arise from simple rules applied to an initial random array of self-driven particles. We present a completely deterministic dynamic map that exhibits emergent, collective, complex motion for a group of particles. Each individual particle is driven with a constant speed in two dimensions adopting the average direction of a fixed set of non-spatially related partners. In addition, the particle changes direction by π as it reaches a circular boundary. The dynamical patterns arising from these rules range from simple circular-type convective motion to highly sophisticated, complex, collective behavior which can be easily interpreted as flocking, schooling, or swarming depending on the chosen parameters. We present the results as a series of short movies and we also explore possible order parameters and correlation functions capable of quantifying the resulting coherence.
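
    A hedged sketch of such a deterministic map (parameter values and the fixed partner assignment are our guesses, not the paper's); randomness enters only through the initial condition:

```python
import numpy as np

N, SPEED, R_BOUND, K = 50, 0.05, 5.0, 3

rng = np.random.default_rng(1)          # randomness only in the initial state
pos = rng.uniform(-1, 1, size=(N, 2))
theta = rng.uniform(0, 2 * np.pi, size=N)
# Fixed, non-spatially related partner sets, as in the abstract.
partners = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])

def step(pos, theta):
    # Each particle adopts the mean heading of its fixed partners, then
    # turns by pi if it has crossed the circular boundary.
    new_theta = np.array([np.arctan2(np.sin(theta[p]).mean(),
                                     np.cos(theta[p]).mean()) for p in partners])
    new_pos = pos + SPEED * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
    outside = np.linalg.norm(new_pos, axis=1) > R_BOUND
    new_theta[outside] += np.pi
    return new_pos, new_theta

for _ in range(200):
    pos, theta = step(pos, theta)

order = np.abs(np.exp(1j * theta).mean())   # polar order parameter in [0, 1]
```

    The final line computes the alignment order parameter mentioned at the end of the abstract; values near 1 indicate flock-like coherence.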

  10. Implicit Sequence Learning in Dyslexia: A Within-Sequence Comparison of First- and Higher-Order Information

    ERIC Educational Resources Information Center

    Du, Wenchong; Kelly, Steve W.

    2013-01-01

    The present study examines implicit sequence learning in adult dyslexics with a focus on comparing sequence transitions with different statistical complexities. Learning of a 12-item deterministic sequence was assessed in 12 dyslexic and 12 non-dyslexic university students. Both groups showed equivalent standard reaction time increments when the…

  11. Cancer dormancy and criticality from a game theory perspective.

    PubMed

    Wu, Amy; Liao, David; Kirilin, Vlamimir; Lin, Ke-Chih; Torga, Gonzalo; Qu, Junle; Liu, Liyu; Sturm, James C; Pienta, Kenneth; Austin, Robert

    2018-01-01

    The physics of cancer dormancy, the time between initial cancer treatment and re-emergence after a protracted period, is a puzzle. Cancer cells interact with host cells via complex, non-linear population dynamics, which can lead to very non-intuitive but perhaps deterministic and understandable progression dynamics of cancer and dormancy. We explore here the dynamics of host-cancer cell populations in the presence of (1) payoff gradients and (2) perturbations due to cell migration. We determine to what extent the time-dependence of the populations can be quantitatively understood in spite of the underlying complexity of the individual agents and model the phenomena of dormancy.
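
    As a toy version of the game-theoretic population dynamics (with an arbitrary 2x2 payoff matrix, not the paper's), deterministic replicator dynamics for the cancer-cell fraction can be sketched as:

```python
import numpy as np

# Illustrative 2x2 payoff matrix for a host-cell vs. cancer-cell game
# (entries are invented for the sketch; the paper's payoffs differ).
#            opponent: host   cancer
A = np.array([[1.0, 0.2],    # row player is a host cell
              [1.4, 0.6]])   # row player is a cancer cell

def replicator(x0, a, dt=0.01, steps=5000):
    """Deterministic replicator dynamics for the cancer-cell fraction x."""
    x = x0
    traj = [x]
    for _ in range(steps):
        f_cancer = a[1, 0] * (1 - x) + a[1, 1] * x   # cancer fitness
        f_host = a[0, 0] * (1 - x) + a[0, 1] * x     # host fitness
        phi = x * f_cancer + (1 - x) * f_host        # mean fitness
        x += dt * x * (f_cancer - phi)
        traj.append(x)
    return np.array(traj)

traj = replicator(0.01, A)   # cancer dominates for this payoff choice
```

    With payoff gradients and migration added, as in the paper, the equilibrium structure of such equations is what determines whether a small cancer population stays dormant or escapes.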

  12. Application of deterministic deconvolution of ground-penetrating radar data in a study of carbonate strata

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.

    2004-01-01

    We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features and more accurate representation of geologic features as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.
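
    Deterministic deconvolution with a known source wavelet reduces to a regularized spectral division; the sketch below uses a synthetic Ricker wavelet as a stand-in for the measured air-wave operator:

```python
import numpy as np

def ricker(n, dt, f0):
    # Standard Ricker wavelet centered in the window.
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f0 * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

n, dt = 256, 0.1e-9                          # 0.1 ns sampling
w = np.roll(ricker(n, dt, 400e6), -n // 2)   # zero-phase 400 MHz wavelet
refl = np.zeros(n)
refl[[60, 120, 200]] = [1.0, -0.6, 0.4]      # synthetic reflectivity series

W = np.fft.rfft(w)
trace = np.fft.irfft(np.fft.rfft(refl) * W, n=n)   # circular convolution

level = 0.01 * np.max(np.abs(W)) ** 2        # "water level" avoids dividing by ~0
R = np.fft.rfft(trace) * np.conj(W) / (np.abs(W) ** 2 + level)
recovered = np.fft.irfft(R, n=n)             # band-limited reflectivity estimate
```

    With the wavelet known deterministically (here by construction, in the paper by direct air-wave measurement), the reflector positions and polarities are recovered without the statistical assumptions of blind deconvolution.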

  13. The relationship between stochastic and deterministic quasi-steady state approximations.

    PubMed

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R

    2015-11-23

    The quasi-steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi-steady state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of the QSSA, and of the resulting non-elementary functions, has been justified in the deterministic case, it is not clear when the stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using its deterministic counterpart, providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
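
    The deterministic side of the reduction can be sketched with the classic Michaelis-Menten system (illustrative rate constants; the paper's two-state promoter model differs), comparing the full model against its QSSA reduction:

```python
import numpy as np

# S + E <-> C -> P + E with illustrative rate constants; the QSSA eliminates
# the fast complex variable c, leaving a non-elementary (hyperbolic) rate.
k1, km1, k2 = 100.0, 1.0, 1.0
E_T, S0 = 0.01, 1.0                    # total enzyme << substrate

def full_model(dt=1e-4, t_end=5.0):
    s, c = S0, 0.0
    for _ in range(int(t_end / dt)):
        ds = -k1 * s * (E_T - c) + km1 * c
        dc = k1 * s * (E_T - c) - (km1 + k2) * c
        s, c = s + dt * ds, c + dt * dc
    return s

def qssa_model(dt=1e-4, t_end=5.0):
    Km = (km1 + k2) / k1               # Michaelis constant
    s = S0
    for _ in range(int(t_end / dt)):
        s += dt * (-k2 * E_T * s / (Km + s))   # non-elementary reduced rate
        s = max(s, 0.0)
    return s

s_full, s_qssa = full_model(), qssa_model()
# With E_T << S0 + Km, the reduced model tracks the full one closely.
```

    The paper's question is when the reduced rate may also be used as a propensity in stochastic simulation; the deterministic agreement shown here is the necessary starting point of that test.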

  14. Soils: man-caused radioactivity and radiation forecast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gablin, Vassily

    2007-07-01

    Available in abstract form only. Full text of publication follows: One of the main tasks of radiation safety assurance is preventing critical radiation levels from being exceeded. In Russia, these are man-caused radiation levels. Meanwhile, any radiation measurement represents total radioactivity, which is why it is hard to assess the natural and man-caused contributions to total radioactivity. It is shown that soil radioactivity depends on natural factors, including radioactivity of rocks and cosmic radiation, as well as man-caused factors, including nuclear and non-nuclear technologies. The whole set of these factors includes unpredictable (non-deterministic) factors, i.e., nuclear explosions and radiation accidents, and predictable (deterministic) ones, i.e., all the rest. Deterministic factors represent background radioactivity, whose trend is the basis of the radiation forecast. Non-deterministic factors represent the man-caused contribution which is to be controlled. This contribution is equal to the difference between measured radioactivity and the radiation background. A way of calculating background radioactivity is proposed. Contemporary soils are complicated, technologically influenced systems with multi-leveled spatial and temporal inhomogeneity of radionuclide distribution. Generally, an analysis area can be characterized by any set of soil radioactivity factors, including natural and man-caused factors. Natural factors are cosmic radiation and radioactivity of rocks. Man-caused factors are shown in Fig. 1. It is obvious that man-caused radioactivity is due to both artificial and natural emitters. Any result of a radiation measurement represents total radioactivity, i.e., the sum of activities resulting from natural and man-caused emitters. There is no gauge which could separately measure natural and man-caused radioactivity, which is why it is so hard to assess the natural and man-caused contributions to soil radioactivity. It would have been possible if human activity had led to contamination of soil only by artificial radionuclides. But we can view the totality of soil radioactivity factors in the following way. (author)

  15. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models, like the multi-strain dynamics describing the virus-host interaction in dengue fever, even the most recently developed parameter estimation techniques, such as maximum likelihood iterated filtering, reach their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and the deterministic skeleton. The deterministic system on its own already displays complex dynamics, up to deterministic chaos and the coexistence of multiple attractors.
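
    A minimal version of the estimation problem (far simpler than the multi-strain dengue models discussed): recover the transmission rate of a deterministic SIR model from a synthetic epidemic curve by least squares:

```python
import numpy as np

GAMMA = 0.1          # recovery rate (1/day), assumed known for the sketch

def sir_infected(beta, i0=1e-3, dt=0.1, days=120):
    """Euler-integrate SIR and record the infected fraction once per day."""
    s, i = 1.0 - i0, i0
    out = []
    for step in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - GAMMA * i
        s, i = s + dt * ds, i + dt * di
        if step % int(1 / dt) == 0:
            out.append(i)
    return np.array(out)

true_beta = 0.3
data = sir_infected(true_beta)        # synthetic "observed" epidemic curve

# Grid-search least squares over the transmission rate.
betas = np.linspace(0.1, 0.6, 101)
sse = [np.sum((sir_infected(b) - data) ** 2) for b in betas]
beta_hat = betas[int(np.argmin(sse))]
```

    The multi-strain case adds many coupled compartments and parameters, which is where such brute-force fitting fails and iterated-filtering methods reach their limits.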

  16. Realistic Simulation for Body Area and Body-To-Body Networks

    PubMed Central

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D’Errico, Raffaele

    2016-01-01

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices. PMID:27104537
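
    The link-type-dependent path loss idea can be sketched with a log-distance model using different exponents for LOS and NLOS links (exponents and reference loss are illustrative, not the IEEE 802.15.6 figures):

```python
import numpy as np

def path_loss_db(d, link="LOS", d0=1.0, pl0=40.0):
    """Log-distance path loss with an assumed per-link-type exponent."""
    n = {"LOS": 2.0, "NLOS": 3.5}[link]    # assumed path loss exponents
    return pl0 + 10.0 * n * np.log10(d / d0)

d = np.linspace(1.0, 10.0, 10)
los = path_loss_db(d, "LOS")
nlos = path_loss_db(d, "NLOS")
# NLOS loss grows faster with distance, as the body blocks the direct path.
```

    In the deterministic approach described above, the biomechanical model supplies the time-varying distance d and the LOS/NLOS classification for each link.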

  17. Realistic Simulation for Body Area and Body-To-Body Networks.

    PubMed

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D'Errico, Raffaele

    2016-04-20

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices.

  18. Geometric Universality in Brain Allosteric Protein Dynamics: Complex Hydrophobic Transformation Predicts Mutual Recognition by Polypeptides and Proteins,

    DTIC Science & Technology

    1986-10-01

    organic acids using the Hammett equation, has been called the hydrophobic effect. Water adjusts its geometry to maximize the number of intact hydrogen...understanding both structural stability with respect to the underlying equations (not initial values) and phase transitions in these dynamical hierarchies...for quantitative characterization. Although the complicated behavior is generated by deterministic equations, its description in entropies leads to

  19. Potential and flux field landscape theory. I. Global stability and dynamics of spatially dependent non-equilibrium systems.

    PubMed

    Wu, Wei; Wang, Jin

    2013-09-28

    We established a potential and flux field landscape theory to quantify the global stability and dynamics of general spatially dependent non-equilibrium deterministic and stochastic systems. We extended our potential and flux landscape theory for spatially independent non-equilibrium stochastic systems described by Fokker-Planck equations to spatially dependent stochastic systems governed by general functional Fokker-Planck equations as well as functional Kramers-Moyal equations derived from master equations. Our general theory is applied to reaction-diffusion systems. For equilibrium spatially dependent systems with detailed balance, the potential field landscape alone, defined in terms of the steady state probability distribution functional, determines the global stability and dynamics of the system. The global stability of the system is closely related to the topography of the potential field landscape in terms of the basins of attraction and barrier heights in the field configuration state space. The effective driving force of the system is generated by the functional gradient of the potential field alone. For non-equilibrium spatially dependent systems, the curl probability flux field is indispensable in breaking detailed balance and creating non-equilibrium condition for the system. A complete characterization of the non-equilibrium dynamics of the spatially dependent system requires both the potential field and the curl probability flux field. While the non-equilibrium potential field landscape attracts the system down along the functional gradient similar to an electron moving in an electric field, the non-equilibrium flux field drives the system in a curly way similar to an electron moving in a magnetic field. 
In the small fluctuation limit, the intrinsic potential field, defined as the small-fluctuation limit of the potential field for spatially dependent non-equilibrium systems and closely related to the steady state probability distribution functional, is found to be a Lyapunov functional of the deterministic spatially dependent system. Therefore, the intrinsic potential landscape can characterize the global stability of the deterministic system. The relative entropy functional of the stochastic spatially dependent non-equilibrium system is found to be the Lyapunov functional of the stochastic dynamics of the system. Therefore, the relative entropy functional quantifies the global stability of the stochastic system with finite fluctuations. Our theory offers a general approach, complementary to other field-theoretic techniques, for studying the global stability and dynamics of spatially dependent non-equilibrium field systems. It can be applied to many physical, chemical, and biological spatially dependent non-equilibrium systems.
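
    The gradient-plus-curl decomposition has a zero-dimensional analogue that conveys the intuition (a linear toy system, not the paper's field-functional construction): split the drift matrix into a symmetric (potential-like) and an antisymmetric (flux-like) part:

```python
import numpy as np

A = np.array([[-1.0, 2.0],
              [-2.0, -1.0]])          # drift matrix of dx/dt = A x

S = 0.5 * (A + A.T)                   # symmetric part: gradient-like force
R = 0.5 * (A - A.T)                   # antisymmetric part: curl-like flux

x = np.array([0.3, -0.7])
grad_force = S @ x                    # equals -grad U for U(x) = -x @ S @ x / 2
curl_force = R @ x                    # rotational part, always orthogonal to x
# The antisymmetric flux does no "work" along x: x . (R x) = 0, mirroring the
# electric-field vs. magnetic-field analogy in the abstract.
```

    In the paper this split is carried out for functionals: the potential field plays the role of S and the curl probability flux field the role of R.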

  20. Menstruation, perimenopause, and chaos theory.

    PubMed

    Derry, Paula S; Derry, Gregory N

    2012-01-01

    This article argues that menstruation, including the transition to menopause, results from a specific kind of complex system, namely, one that is nonlinear, dynamical, and chaotic. A complexity-based perspective changes how we think about and research menstruation-related health problems and positive health. Chaotic systems are deterministic but not predictable, characterized by sensitivity to initial conditions and strange attractors. Chaos theory provides a coherent framework that qualitatively accounts for puzzling results from perimenopause research. It directs attention to variability within and between women, adaptation, lifespan development, and the need for complex explanations of disease. Whether the menstrual cycle is chaotic can be empirically tested, and a summary of our research on 20- to 40-year-old women is provided.
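
    Determinism without predictability is easy to exhibit with the logistic map (our illustration, not the authors' menstrual-cycle model): two trajectories started 10^-10 apart decorrelate within a few dozen iterations:

```python
# The logistic map x -> r x (1 - x) at r = 4 is fully deterministic yet
# sensitive to initial conditions, the defining property of chaos.
def logistic_orbit(x0, r=4.0, n=60):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)
divergence = [abs(x - y) for x, y in zip(a, b)]
# Early differences are ~1e-10; after a few dozen iterations they are order one.
```

    This is the empirical signature the authors propose testing for in cycle-length data: deterministic rules, bounded variability, yet no long-range predictability.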

  1. Deterministic quantum teleportation with feed-forward in a solid state system.

    PubMed

    Steffen, L; Salathe, Y; Oppliger, M; Kurpiers, P; Baur, M; Lang, C; Eichler, C; Puebla-Hellmann, G; Fedorov, A; Wallraff, A

    2013-08-15

    Engineered macroscopic quantum systems based on superconducting electronic circuits are attractive for experimentally exploring diverse questions in quantum information science. At the current state of the art, quantum bits (qubits) are fabricated, initialized, controlled, read out and coupled to each other in simple circuits. This enables the realization of basic logic gates, the creation of complex entangled states and the demonstration of algorithms or error correction. Using different variants of low-noise parametric amplifiers, dispersive quantum non-demolition single-shot readout of single-qubit states with high fidelity has enabled continuous and discrete feedback control of single qubits. Here we realize full deterministic quantum teleportation with feed-forward in a chip-based superconducting circuit architecture. We use a set of two parametric amplifiers for both joint two-qubit and individual qubit single-shot readout, combined with flexible real-time digital electronics. Our device uses a crossed quantum bus technology that allows us to create complex networks with arbitrary connecting topology in a planar architecture. The deterministic teleportation process succeeds with order unit probability for any input state, as we prepare maximally entangled two-qubit states as a resource and distinguish all Bell states in a single two-qubit measurement with high efficiency and high fidelity. We teleport quantum states between two macroscopic systems separated by 6 mm at a rate of 10⁴ s⁻¹, exceeding other reported implementations. The low transmission loss of superconducting waveguides is likely to enable the range of this and other schemes to be extended to significantly larger distances, enabling tests of non-locality and the realization of elements for quantum communication at microwave frequencies. The demonstrated feed-forward may also find application in error correction schemes.
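
    The deterministic feed-forward step can be sketched in a statevector toy model (an idealized simulation, not the superconducting hardware): each Bell outcome (m0, m1) fixes the Pauli correction Z^m0 X^m1, so every outcome returns the input state:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)

def kron(*ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# CNOT with control qubit 0 and target qubit 1 (qubit 0 is the MSB).
CNOT01 = np.zeros((8, 8), dtype=complex)
for i in range(8):
    b0, b1, b2 = (i >> 2) & 1, (i >> 1) & 1, i & 1
    CNOT01[(b0 << 2) | ((b1 ^ b0) << 1) | b2, i] = 1

def teleport(psi, m0, m1):
    """Teleport psi given Bell-measurement outcome (m0, m1) on Alice's pair."""
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # |Phi+> on (A, B)
    state = kron(H, I2, I2) @ CNOT01 @ np.kron(psi, bell)      # Bell rotation
    keep = [(m0 << 2) | (m1 << 1) | b for b in (0, 1)]         # project on outcome
    bob = state[keep]
    bob = bob / np.linalg.norm(bob)
    # Deterministic feed-forward: apply X^m1 then Z^m0 to Bob's qubit.
    return np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ bob

psi = np.array([0.6, 0.8j])            # arbitrary normalized input state
fidelities = [abs(np.vdot(psi, teleport(psi, m0, m1))) ** 2
              for m0 in (0, 1) for m1 in (0, 1)]
```

    All four measurement outcomes yield unit fidelity after the correction, which is what makes the protocol deterministic rather than post-selected.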

  2. Validation of a Deterministic Vibroacoustic Response Prediction Model

    NASA Technical Reports Server (NTRS)

    Caimi, Raoul E.; Margasahayam, Ravi

    1997-01-01

    This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Statistical Energy Analysis (SEA) methods are not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.

  3. Tag-mediated cooperation with non-deterministic genotype-phenotype mapping

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Chen, Shu

    2016-01-01

    Tag-mediated cooperation provides a helpful framework for resolving evolutionary social dilemmas. However, most previous studies have not taken into account the genotype-phenotype distinction in tags, which may play an important role in the process of evolution. To take this into consideration, we introduce non-deterministic genotype-phenotype mapping into a tag-based model with a spatial prisoner's dilemma. By our definition, similarity between genotypic tags does not directly imply similarity between phenotypic tags. We find that the non-deterministic mapping from genotypic tag to phenotypic tag has non-trivial effects on tag-mediated cooperation. Although we observe that high levels of cooperation can be established under a wide variety of conditions, especially when the decisiveness is moderate, the uncertainty in the determination of phenotypic tags may have a detrimental effect on the tag mechanism by disturbing the homophilic interaction structure that explains the promotion of cooperation in tag systems. Furthermore, the non-deterministic mapping may undermine the robustness of the tag mechanism with respect to various factors such as the structure of the tag space and the tag flexibility. This observation warns us of the danger of applying classical tag-based models to the analysis of empirical phenomena if the genotype-phenotype distinction is significant in the real world. Non-deterministic genotype-phenotype mapping thus provides a new perspective on the understanding of tag-mediated cooperation.

  4. Microscopic molecular dynamics characterization of the second-order non-Navier-Fourier constitutive laws in the Poiseuille gas flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, A.; Ravichandran, R.; Park, J. H.

    The second-order non-Navier-Fourier constitutive laws, expressed in a compact algebraic mathematical form, were validated for the force-driven Poiseuille gas flow by the deterministic atomic-level microscopic molecular dynamics (MD). Emphasis is placed on how completely different methods (a second-order continuum macroscopic theory based on the kinetic Boltzmann equation, the probabilistic mesoscopic direct simulation Monte Carlo, and, in particular, the deterministic microscopic MD) describe the non-classical physics, and whether the second-order non-Navier-Fourier constitutive laws derived from the continuum theory can be validated using MD solutions for the viscous stress and heat flux calculated directly from the molecular data using the statistical method. Peculiar behaviors (non-uniform tangential pressure profile and exotic instantaneous heat conduction from cold to hot [R. S. Myong, “A full analytical solution for the force-driven compressible Poiseuille gas flow based on a nonlinear coupled constitutive relation,” Phys. Fluids 23(1), 012002 (2011)]) were re-examined using atomic-level MD results. It was shown that all three results were in strong qualitative agreement with each other, implying that the second-order non-Navier-Fourier laws are indeed physically legitimate in the transition regime. Furthermore, it was shown that the non-Navier-Fourier constitutive laws are essential for describing non-zero normal stress and tangential heat flux, while the classical and non-classical laws remain similar for shear stress and normal heat flux.

  5. Advanced Spectroscopic and Thermal Imaging Instrumentation for Shock Tube and Ballistic Range Facilities

    DTIC Science & Technology

    2010-04-01

    the development process, increase its quality and reduce development time through automation of synthesis, analysis or verification. For this purpose...made of time-non-deterministic systems, improving efficiency and reducing complexity of formal analysis . We also show how our theory relates to, and...of the most recent investigations for Earth and Mars atmospheres will be discussed in the following sections. 2.4.1 Earth: lunar return NASA’s

  6. Edge states in the climate system: exploring global instabilities and critical transitions

    NASA Astrophysics Data System (ADS)

    Lucarini, Valerio; Bódai, Tamás

    2017-07-01

    Multistability is a ubiquitous feature in systems of geophysical relevance and provides key challenges for our ability to predict a system’s response to perturbations. Near critical transitions small causes can lead to large effects and—for all practical purposes—irreversible changes in the properties of the system. As is well known, the Earth climate is multistable: present astronomical and astrophysical conditions support two stable regimes, the warm climate we live in, and a snowball climate characterized by global glaciation. We first provide an overview of methods and ideas relevant for studying the climate response to forcings and focus on the properties of critical transitions in the context of both stochastic and deterministic dynamics, and assess strengths and weaknesses of simplified approaches to the problem. Following an idea developed by Eckhardt and collaborators for the investigation of multistable turbulent fluid dynamical systems, we study the global instability giving rise to the snowball/warm multistability in the climate system by identifying the climatic edge state, a saddle embedded in the boundary between the two basins of attraction of the stable climates. The edge state attracts initial conditions belonging to such a boundary and, while being defined by the deterministic dynamics, is the gate facilitating noise-induced transitions between competing attractors. We use a simplified yet Earth-like intermediate complexity climate model constructed by coupling a primitive equations model of the atmosphere with a simple diffusive ocean. We refer to the climatic edge states as Melancholia states and provide an extensive analysis of their features. We study their dynamics, their symmetry properties, and we follow a complex set of bifurcations. We find situations where the Melancholia state has chaotic dynamics. 
In these cases, the basin boundary between the two basins of attraction is a strange geometric set with nearly zero codimension, and we relate this feature to the time-scale separation between instabilities occurring on weather and climatic time scales. We also discover a new stable climatic state that is similar to a Melancholia state and is characterized by non-trivial symmetry properties.
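The edge-state construction can be illustrated on a minimal bistable system. The sketch below is a stand-in for the climate model, not the authors' code: it locates the saddle of dx/dt = x - x^3 (attractors at x = ±1, edge state at x = 0) by bisecting initial conditions between the two basins of attraction, the same edge-tracking idea used to find the Melancholia states.

```python
# Hedged sketch: edge tracking by bisection for dx/dt = x - x**3, whose
# attractors are x = +1 and x = -1 and whose edge state (the saddle on the
# basin boundary) is x = 0. Illustrative only; the paper uses a coupled
# atmosphere-ocean model, not this toy ODE.

def evolve(x, dt=0.01, steps=2000):
    for _ in range(steps):
        x += dt * (x - x**3)        # forward Euler integration
    return x

def attractor(x0):
    return 1 if evolve(x0) > 0 else -1   # classify the final basin

lo, hi = -0.7, 0.9                  # initial conditions straddling the boundary
assert attractor(lo) != attractor(hi)
for _ in range(50):                 # bisect toward the basin boundary
    mid = 0.5 * (lo + hi)
    if attractor(mid) == attractor(lo):
        lo = mid
    else:
        hi = mid
edge = 0.5 * (lo + hi)
print("edge state estimate:", edge)   # converges to the saddle at 0
```

The same bisection strategy works in high dimensions when combined with repeated re-initialization along the trajectory, which is how edge states are tracked in turbulence and climate applications.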

  7. Direct generation of linearly polarized single photons with a deterministic axis in quantum dots

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Puchtler, Tim J.; Patra, Saroj K.; Zhu, Tongtong; Ali, Muhammad; Badcock, Tom J.; Ding, Tao; Oliver, Rachel A.; Schulz, Stefan; Taylor, Robert A.

    2017-07-01

    We report the direct generation of linearly polarized single photons with a deterministic polarization axis in self-assembled quantum dots (QDs), achieved by the use of non-polar InGaN without complex device geometry engineering. Here, we present a comprehensive investigation of the polarization properties of these QDs and their origin with statistically significant experimental data and rigorous k·p modeling. The experimental study of 180 individual QDs allows us to compute an average polarization degree of 0.90, with a standard deviation of only 0.08. When coupled with theoretical insights, we show that these QDs are highly insensitive to size differences, shape anisotropies, and material content variations. Furthermore, 91% of the studied QDs exhibit a polarization axis along the crystal [1-100] axis, with the other 9% polarized orthogonal to this direction. These features give non-polar InGaN QDs unique advantages in polarization control over other materials, such as conventional polar nitride, InAs, or CdSe QDs. Hence, the ability to generate single photons with polarization control makes non-polar InGaN QDs highly attractive for quantum cryptography protocols.

  8. Traveling Salesman Problem for Surveillance Mission Using Particle Swarm Optimization

    DTIC Science & Technology

    2001-03-20

    design of experiments, results of the experiments, and qualitative and quantitative analysis . Conclusions and recommendations based on the qualitative and...characterize the algorithm. Such analysis and comparison between LK and a non-deterministic algorithm produces claims such as "Lin-Kernighan algorithm takes... based on experiments 5 and 6. All other parameters are the same as the baseline (see 4.2.1.2). 4.2.2.6 Experiment 10 - Fine Tuning PSO AS: 85,95% Global

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polettini, M., E-mail: matteo.polettini@uni.lu; Wachtel, A., E-mail: artur.wachtel@uni.lu; Esposito, M., E-mail: massimilano.esposito@uni.lu

    We study the effect of intrinsic noise on the thermodynamic balance of complex chemical networks subtending cellular metabolism and gene regulation. A topological network property called deficiency, known to determine the possibility of complex behavior such as multistability and oscillations, is shown to also characterize the entropic balance. In particular, when deficiency is zero the average stochastic dissipation rate equals that of the corresponding deterministic model, where correlations are disregarded. In fact, dissipation can be reduced by the effect of noise, as occurs in a toy model of metabolism that we employ to illustrate our findings. This phenomenon highlights that there is a close interplay between deficiency and the activation of new dissipative pathways at low molecule numbers.

  10. Detecting nonlinear dynamics of functional connectivity

    NASA Astrophysics Data System (ADS)

    LaConte, Stephen M.; Peltier, Scott J.; Kadah, Yasser; Ngan, Shing-Chung; Deshpande, Gopikrishna; Hu, Xiaoping

    2004-04-01

    Functional magnetic resonance imaging (fMRI) is a technique that is sensitive to correlates of neuronal activity. The application of fMRI to measure functional connectivity of related brain regions across hemispheres (e.g. left and right motor cortices) has great potential for revealing fundamental physiological brain processes. Primarily, functional connectivity has been characterized by linear correlations in resting-state data, which may not provide a complete description of its temporal properties. In this work, we broaden the measure of functional connectivity to study not only linear correlations, but also those arising from deterministic, non-linear dynamics. Here the delta-epsilon approach is extended and applied to fMRI time series. The method of delays is used to reconstruct the joint system defined by a reference pixel and a candidate pixel. The crux of this technique relies on determining whether the candidate pixel provides additional information concerning the time evolution of the reference. As in many correlation-based connectivity studies, we fix the reference pixel. Every brain location is then used as a candidate pixel to estimate the spatial pattern of deterministic coupling with the reference. Our results indicate that measured connectivity is often emphasized in the motor cortex contra-lateral to the reference pixel, demonstrating the suitability of this approach for functional connectivity studies. In addition, discrepancies with traditional correlation analysis provide initial evidence for non-linear dynamical properties of resting-state fMRI data. Consequently, the non-linear characterization provided from our approach may provide a more complete description of the underlying physiology and brain function measured by this type of data.
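The method of delays used above can be sketched in a few lines. The example below is illustrative only (a logistic-map series stands in for an fMRI pixel's signal): it builds delay-coordinate vectors and shows that, for a deterministic signal, the nearest neighbour in the reconstructed state space predicts the future of a held-out point — the kind of additional predictive information the delta-epsilon approach tests for.

```python
def delay_embed(series, dim=2, tau=1):
    """Method of delays: map x(t) to vectors (x(t), x(t+tau), ..., x(t+(dim-1)tau))."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

# Deterministic "time series" from the chaotic logistic map (illustrative stand-in).
x, series = 0.3, []
for _ in range(20000):
    x = 3.9 * x * (1 - x)
    series.append(x)

emb = delay_embed(series, dim=2, tau=1)
m = len(emb) - 2                      # query vector whose future we hold out
query = emb[m]
best_i, best_d = None, float("inf")
for i in range(m - 1):                # search past states for the closest analogue
    d = max(abs(a - b) for a, b in zip(emb[i], query))
    if d < best_d:
        best_i, best_d = i, d

# In a deterministic system, close points in state space have close futures.
pred, actual = series[best_i + 2], series[m + 2]
print("nearest-neighbour prediction error:", abs(pred - actual))
```

For stochastic data the same nearest-neighbour prediction degrades sharply, which is the contrast such methods exploit.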

  11. Asymmetrical Damage Partitioning in Bacteria: A Model for the Evolution of Stochasticity, Determinism, and Genetic Assimilation

    PubMed Central

    Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara

    2016-01-01

    Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother’s old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother’s old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. 
We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington’s genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation. PMID:26761487
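The partitioning model can be caricatured in a few lines. The sketch below is a simplification, not the authors' in silico model: it propagates per-cell damage through synchronous divisions and shows that deterministic asymmetry creates the population-level damage (and hence fitness) variance that deterministic symmetric partitioning lacks.

```python
import statistics

def grow(old_fraction, generations=10, new_damage=1.0):
    """Propagate per-cell damage through synchronous divisions. Each mother
    accrues new_damage, then passes a fraction old_fraction of her total
    damage to the old-pole daughter and the remainder to the new-pole one."""
    cells = [0.0]
    for _ in range(generations):
        nxt = []
        for d in cells:
            total = d + new_damage
            nxt.append(total * old_fraction)          # old-pole daughter
            nxt.append(total * (1.0 - old_fraction))  # new-pole daughter
        cells = nxt
    return cells

symmetric = grow(0.5)    # deterministic symmetric partitioning
asymmetric = grow(0.75)  # deterministic asymmetric partitioning

# Asymmetry spreads damage unevenly across the population, creating the
# variance selection acts on; deterministic symmetry creates none.
print(statistics.pvariance(symmetric), round(statistics.pvariance(asymmetric), 3))
```

Adding a stochastic term to the partition fraction would reproduce the paper's comparison between stochastic and deterministic sources of variance; the 0.75 fraction here is purely illustrative.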

  12. Asymmetrical Damage Partitioning in Bacteria: A Model for the Evolution of Stochasticity, Determinism, and Genetic Assimilation.

    PubMed

    Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara

    2016-01-01

    Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother's old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother's old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. 
We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington's genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation.

  13. Kinetics of Thermal Unimolecular Decomposition of Acetic Anhydride: An Integrated Deterministic and Stochastic Model.

    PubMed

    Mai, Tam V-T; Duong, Minh V; Nguyen, Hieu T; Lin, Kuang C; Huynh, Lam K

    2017-04-27

    An integrated deterministic and stochastic model within the master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) framework was first used to characterize temperature- and pressure-dependent behaviors of the thermal decomposition of acetic anhydride over a wide range of conditions (i.e., 300-1500 K and 0.001-100 atm). In particular, using the potential energy surface and molecular properties obtained from high-level electronic structure calculations at CCSD(T)/CBS, macroscopic thermodynamic properties and rate coefficients of the title reaction were derived with corrections for hindered internal rotation and tunneling treatments. Being in excellent agreement with the scattered experimental data, the results from the deterministic and stochastic frameworks confirmed and complemented each other to reveal that the main decomposition pathway proceeds via a 6-membered-ring transition state with a 0 K barrier of 35.2 kcal·mol^-1. This observation was further understood and confirmed by the sensitivity analysis of the time-resolved species profiles and the derived rate coefficients with respect to the ab initio barriers. Such agreement suggests the integrated model can be confidently used over a wide range of conditions as a powerful postfacto and predictive tool in detailed chemical kinetic modeling and simulation for the title reaction, and can thus be extended to complex chemical reactions.
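Rate coefficients of this kind are commonly summarized in modified-Arrhenius form, k(T) = A T^n exp(-Ea/RT). The sketch below is illustrative only — the parameters are hypothetical (Ea loosely echoes the 35.2 kcal/mol barrier; A and n are invented): it generates synthetic k(T) data over the abstract's temperature range and recovers the parameters by linear least squares on ln k.

```python
import numpy as np

R = 1.987e-3  # gas constant, kcal mol^-1 K^-1
# Hypothetical parameters, for illustration only (not the paper's fit):
A_true, n_true, Ea_true = 1.0e13, 0.8, 35.2

T = np.linspace(300.0, 1500.0, 25)            # the abstract's temperature range
k = A_true * T ** n_true * np.exp(-Ea_true / (R * T))

# Linearized least squares: ln k = ln A + n ln T - Ea / (R T)
M = np.column_stack([np.ones_like(T), np.log(T), -1.0 / (R * T)])
coef, *_ = np.linalg.lstsq(M, np.log(k), rcond=None)
A_fit, n_fit, Ea_fit = np.exp(coef[0]), coef[1], coef[2]
print(round(float(n_fit), 3), round(float(Ea_fit), 2))   # recovers 0.8 and 35.2
```

With noise-free synthetic data the fit is exact up to rounding; real derived rate coefficients would carry scatter, and the same regression gives the best-fit Arrhenius parameters.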

  14. Periodicity and chaos from switched flow systems - Contrasting examples of discretely controlled continuous systems

    NASA Technical Reports Server (NTRS)

    Chase, Christopher; Serrano, Joseph; Ramadge, Peter J.

    1993-01-01

    We analyze two examples of the discrete control of a continuous variable system. These examples exhibit what may be regarded as the two extremes of complexity of the closed-loop behavior: one is eventually periodic, the other is chaotic. Our examples are derived from sampled deterministic flow models. These are of interest in their own right but have also been used as models for certain aspects of manufacturing systems. In each case, we give a precise characterization of the closed-loop behavior.
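A minimal discretely controlled continuous system with eventually periodic closed-loop behavior is a thermostat with hysteresis. The sketch below is a stand-in, not the paper's switched flow model: a continuous state relaxes toward one of two targets depending on a discrete on/off control, and after one transient the on/off durations lock onto constant alternating values.

```python
import math

# Hedged sketch: thermostat with hysteresis thresholds lo/hi. The closed loop
# is eventually periodic; the parameters here are illustrative only.
ambient, heat, lo, hi = 0.0, 100.0, 20.0, 22.0
x, on, t, dt = 50.0, False, 0.0, 1e-4
switch_times = []
while len(switch_times) < 8:
    target = heat if on else ambient
    x += dt * (target - x)            # dx/dt = target - x (Euler step)
    t += dt
    if (not on and x <= lo) or (on and x >= hi):
        on = not on                   # discrete control switch
        switch_times.append(t)

periods = [b - a for a, b in zip(switch_times, switch_times[1:])]
print([round(p, 4) for p in periods[-4:]])   # two alternating durations
```

Analytically the limit cycle's heating and cooling durations are ln(80/78) and ln(22/20); more intricate switched-flow systems of the kind the paper studies can instead produce chaotic switching sequences.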

  15. Least Squares Best Fit Method for the Three Parameter Weibull Distribution: Analysis of Tensile and Bend Specimens with Volume or Surface Flaw Failure

    NASA Technical Reports Server (NTRS)

    Gross, Bernard

    1996-01-01

    Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.
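The least squares best fit procedure for a three-parameter Weibull model can be sketched as follows. The example is illustrative (synthetic "strength" data built from assumed parameters, with median-rank plotting positions): for each trial location parameter gamma it regresses ln(-ln(1-F)) on ln(x - gamma) and keeps the gamma giving the most linear fit, recovering shape beta from the slope and scale eta from the intercept.

```python
import math

def lsq_line(xs, ys):
    """Ordinary least squares line fit; returns intercept, slope, R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope, sxy ** 2 / (sxx * syy)

# Synthetic data from assumed parameters (illustrative only):
# location gamma = 100, scale eta = 50, shape beta = 7.
gamma_t, eta_t, beta_t = 100.0, 50.0, 7.0
n = 30
F = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]     # median ranks
x = [gamma_t + eta_t * (-math.log(1 - f)) ** (1 / beta_t) for f in F]
ys = [math.log(-math.log(1 - f)) for f in F]

# Scan trial locations; keep the gamma with the most linear Weibull plot.
best = None
for k in range(130):
    g = float(k)
    if g >= min(x):
        break
    a, b, r2 = lsq_line([math.log(xi - g) for xi in x], ys)
    if best is None or r2 > best[0]:
        best = (r2, g, b, math.exp(-a / b))
r2, gamma_f, beta_f, eta_f = best
print(gamma_f, round(beta_f, 3), round(eta_f, 2))   # recovers 100.0, 7.0, 50.0
```

With real fracture data the ranked strengths replace the synthetic x values, and the same scan yields the threshold stress, Weibull modulus, and characteristic strength.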

  16. Recent progress in the assembly of nanodevices and van der Waals heterostructures by deterministic placement of 2D materials.

    PubMed

    Frisenda, Riccardo; Navarro-Moratalla, Efrén; Gant, Patricia; Pérez De Lara, David; Jarillo-Herrero, Pablo; Gorbachev, Roman V; Castellanos-Gomez, Andres

    2018-01-02

    Designer heterostructures can now be assembled layer-by-layer with unmatched precision thanks to the recently developed deterministic placement methods to transfer two-dimensional (2D) materials. This possibility constitutes the birth of a very active research field on the so-called van der Waals heterostructures. Moreover, these deterministic placement methods also open the door to fabricate complex devices, which would be otherwise very difficult to achieve by conventional bottom-up nanofabrication approaches, and to fabricate fully-encapsulated devices with exquisite electronic properties. The integration of 2D materials with existing technologies such as photonic and superconducting waveguides and fiber optics is another exciting possibility. Here, we review the state-of-the-art of the deterministic placement methods, describing and comparing the different alternative methods available in the literature, and we illustrate their potential to fabricate van der Waals heterostructures, to integrate 2D materials into complex devices and to fabricate artificial bilayer structures where the layers present a user-defined rotational twisting angle.

  17. A measurement of disorder in binary sequences

    NASA Astrophysics Data System (ADS)

    Gong, Longyan; Wang, Haihong; Cheng, Weiwen; Zhao, Shengmei

    2015-03-01

    We propose a complex quantity, A_L, to characterize the degree of disorder of L-length binary symbolic sequences. As examples, we apply it to typical random and deterministic sequences. One kind of random sequence is generated from a periodic binary sequence and the other is generated from the logistic map. The deterministic sequences are the Fibonacci and Thue-Morse sequences. In these analyzed sequences, we find that the modulus of A_L, denoted by |A_L|, is a (statistically) equivalent quantity to the Boltzmann entropy, the metric entropy, the conditional block entropy and/or other quantities, so it is a useful quantitative measure of disorder. It can serve as a fruitful index to discern which sequence is more disordered. Moreover, there is one and only one value of |A_L| for the overall disorder characteristics. It requires extremely low computational cost and can easily be realized experimentally. Given all this, we believe that the proposed measure of disorder is a valuable complement to existing ones for symbolic sequences.
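Since the definition of A_L is not reproduced in the abstract, the sketch below instead computes one of the quantities it names as statistically equivalent — the block (Shannon) entropy — for the sequence types discussed. The ordering periodic < Thue-Morse < random illustrates the kind of disorder ranking such a measure provides.

```python
import math, random
from collections import Counter

def block_entropy(seq, L):
    """Shannon entropy (bits) of the empirical length-L block distribution."""
    blocks = Counter(tuple(seq[i:i + L]) for i in range(len(seq) - L + 1))
    n = sum(blocks.values())
    return -sum(c / n * math.log2(c / n) for c in blocks.values())

N, L = 4096, 8
periodic = [i % 2 for i in range(N)]                    # ...010101...
random.seed(1)
rand = [random.randint(0, 1) for _ in range(N)]         # fair coin flips
thue_morse = [bin(i).count("1") % 2 for i in range(N)]  # deterministic, aperiodic

for name, s in [("periodic", periodic), ("thue-morse", thue_morse), ("random", rand)]:
    print(name, round(block_entropy(s, L), 3))
```

The periodic sequence has only two length-8 blocks (entropy near 1 bit), the Thue-Morse sequence has low factor complexity despite being aperiodic, and the random sequence approaches the maximum of 8 bits.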

  18. Topological chaos of the spatial prisoner's dilemma game on regular networks.

    PubMed

    Jin, Weifeng; Chen, Fangyue

    2016-02-21

    The spatial version of the evolutionary prisoner's dilemma on an infinitely large regular lattice, with purely deterministic strategies and no memories among players, is investigated in this paper. Based on statistical inference, we confirm that the frequency of cooperation characterizing its macroscopic behavior is very sensitive to the initial conditions, which is the most practically significant property of chaos. Its intrinsic complexity is then justified on firm ground from the theory of symbolic dynamics; that is, this game is topologically mixing and possesses positive topological entropy on its subsystems. It is demonstrated therefore that its frequency of cooperation cannot be obtained by simply averaging over several steps after the game reaches the equilibrium state. Furthermore, the chaotically changing spatial patterns seen in empirical observations can be defined and justified in view of symbolic dynamics. It is worth mentioning that the procedure proposed in this work is also applicable to other deterministic spatial evolutionary games. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Deterministic Evolutionary Trajectories Influence Primary Tumor Growth: TRACERx Renal.

    PubMed

    Turajlic, Samra; Xu, Hang; Litchfield, Kevin; Rowan, Andrew; Horswell, Stuart; Chambers, Tim; O'Brien, Tim; Lopez, Jose I; Watkins, Thomas B K; Nicol, David; Stares, Mark; Challacombe, Ben; Hazell, Steve; Chandra, Ashish; Mitchell, Thomas J; Au, Lewis; Eichler-Jonsson, Claudia; Jabbar, Faiz; Soultati, Aspasia; Chowdhury, Simon; Rudman, Sarah; Lynch, Joanna; Fernando, Archana; Stamp, Gordon; Nye, Emma; Stewart, Aengus; Xing, Wei; Smith, Jonathan C; Escudero, Mickael; Huffman, Adam; Matthews, Nik; Elgar, Greg; Phillimore, Ben; Costa, Marta; Begum, Sharmin; Ward, Sophia; Salm, Max; Boeing, Stefan; Fisher, Rosalie; Spain, Lavinia; Navas, Carolina; Grönroos, Eva; Hobor, Sebastijan; Sharma, Sarkhara; Aurangzeb, Ismaeel; Lall, Sharanpreet; Polson, Alexander; Varia, Mary; Horsfield, Catherine; Fotiadis, Nicos; Pickering, Lisa; Schwarz, Roland F; Silva, Bruno; Herrero, Javier; Luscombe, Nick M; Jamal-Hanjani, Mariam; Rosenthal, Rachel; Birkbak, Nicolai J; Wilson, Gareth A; Pipek, Orsolya; Ribli, Dezso; Krzystanek, Marcin; Csabai, Istvan; Szallasi, Zoltan; Gore, Martin; McGranahan, Nicholas; Van Loo, Peter; Campbell, Peter; Larkin, James; Swanton, Charles

    2018-04-19

    The evolutionary features of clear-cell renal cell carcinoma (ccRCC) have not been systematically studied to date. We analyzed 1,206 primary tumor regions from 101 patients recruited into the multi-center prospective study, TRACERx Renal. We observe up to 30 driver events per tumor and show that subclonal diversification is associated with known prognostic parameters. By resolving the patterns of driver event ordering, co-occurrence, and mutual exclusivity at clone level, we show the deterministic nature of clonal evolution. ccRCC can be grouped into seven evolutionary subtypes, ranging from tumors characterized by early fixation of multiple mutational and copy number drivers and rapid metastases to highly branched tumors with >10 subclonal drivers and extensive parallel evolution associated with attenuated progression. We identify genetic diversity and chromosomal complexity as determinants of patient outcome. Our insights reconcile the variable clinical behavior of ccRCC and suggest evolutionary potential as a biomarker for both intervention and surveillance. Copyright © 2018 Francis Crick Institute. Published by Elsevier Inc. All rights reserved.

  20. Study on the evaluation method for fault displacement based on characterized source model

    NASA Astrophysics Data System (ADS)

    Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.

    2016-12-01

    IAEA Specific Safety Guide (SSG) 9 describes that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively, using the deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX on realizing the seismic hazard assessment for nuclear facilities described in SSG-9, showing the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, it is required that important nuclear facilities be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these circumstances, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance the safety of nuclear facilities. We are studying deterministic and probabilistic methods with tentative analyses using observed records, such as surface fault displacement and near-fault strong ground motions, of inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of evaluation methods for fault displacement. We then show parts of the tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method, which combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on the characterized source model. This research was part of the 2015 research project `Development of evaluating method for fault displacement` by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  1. Optical characterization limits of nanoparticle aggregates at different wavelengths using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Eriçok, Ozan Burak; Ertürk, Hakan

    2018-07-01

    Optical characterization of nanoparticle aggregates is a complex inverse problem that can be solved by deterministic or statistical methods. Previous studies showed that there exists a lower size limit of reliable characterization that depends on the wavelength of the light source used. In this study, these characterization limits are determined for light source wavelengths ranging from the ultraviolet to the near infrared (266-1064 nm), relying on numerical light scattering experiments. Two different measurement ensembles are considered: a collection of well-separated aggregates made up of same-sized particles, and one with a particle size distribution. Filippov's cluster-cluster algorithm is used to generate the aggregates, and the light scattering behavior is calculated by the discrete dipole approximation. A likelihood-free Approximate Bayesian Computation, relying on the Adaptive Population Monte Carlo method, is used for characterization. It is found that, over the 266-1064 nm wavelength range, the successful characterization limit changes from 21 to 62 nm effective radius for monodisperse and polydisperse soot aggregates.
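The rejection flavour of likelihood-free Approximate Bayesian Computation can be sketched as follows. Everything here is a toy stand-in (a Gaussian forward model replaces the discrete-dipole scattering computation; the prior, tolerance, and "true" 30 nm radius are all hypothetical): sample from the prior, simulate, and keep parameters whose summary statistic lands within a tolerance of the observation.

```python
import random, statistics

random.seed(0)

def simulate(radius, n=50):
    """Toy forward model standing in for the light-scattering computation:
    noisy measurements whose mean scales with the (hypothetical) radius."""
    return statistics.mean(random.gauss(2.0 * radius, 1.0) for _ in range(n))

observed = simulate(30.0)            # pretend 30 (nm) is the unknown truth

# ABC rejection: no likelihood is ever evaluated, only forward simulations.
accepted = []
while len(accepted) < 300:
    theta = random.uniform(10.0, 60.0)          # prior on the radius
    if abs(simulate(theta) - observed) < 0.5:   # epsilon tolerance
        accepted.append(theta)

print("posterior mean radius:", round(statistics.mean(accepted), 1))
```

Adaptive Population Monte Carlo, used in the paper, improves on plain rejection by shrinking the tolerance over a sequence of weighted particle populations, but the accept/reject core is the same.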

  2. Observations and modeling of the effects of waves and rotors on submeso and turbulence variability within the stable boundary layer over central Pennsylvania

    NASA Astrophysics Data System (ADS)

    Suarez Mullins, Astrid

    Terrain-induced gravity waves and rotor circulations have been hypothesized to enhance the generation of submeso motions (i.e., nonstationary shear events with spatial and temporal scales greater than the turbulence scale and smaller than the meso-gamma scale) and to modulate low-level intermittency in the stable boundary layer (SBL). Intermittent turbulence, generated by submeso motions and/or the waves, can affect the atmospheric transport and dispersion of pollutants and hazardous materials. Thus, the study of these motions and the mechanisms through which they impact the weakly to very stable SBL is crucial for improving air quality modeling and hazard predictions. In this thesis, the effects of waves and rotor circulations on submeso and turbulence variability within the SBL are investigated over the moderate terrain of central Pennsylvania using special observations from a network deployed at Rock Springs, PA and high-resolution Weather Research and Forecasting (WRF) model forecasts. The investigation of waves and rotors over central PA is important because 1) the moderate topography of this region is common to most of the eastern US and thus the knowledge acquired from this study can be of significance to a large population, 2) there has been little evidence of complex wave structures and rotors reported for this region, and 3) little is known about the waves and rotors generated by smaller and more moderate topographies. Six case studies exhibiting an array of wave and rotor structures are analyzed. Observational evidence of the presence of complex wave structures, resembling nonstationary trapped gravity waves and downslope windstorms, and complex rotor circulations, resembling trapped and jump-type rotors, is presented. These motions and the mechanisms through which they modulate the SBL are further investigated using high-resolution WRF forecasts.
First, the efficacy of the 0.444-km horizontal grid spacing WRF model to reproduce submeso and meso-gamma motions, generated by waves and rotors and hypothesized to impact the SBL, is investigated using a new wavelet-based verification methodology for assessing non-deterministic model skill in the submeso and meso-gamma range to complement standard deterministic measures. This technique allows the verification and/or intercomparison of any two nonstationary stochastic systems without many of the limitations of typical wavelet-based verification approaches (e.g., selection of noise models, testing for significance, etc.). Through this analysis, it is shown that the WRF model largely underestimates the number of small amplitude fluctuations in the small submeso range, as expected; and it overestimates the number of small amplitude fluctuations in the meso-gamma range, generally resulting in forecasts that are too smooth. Investigation of the variability for different initialization strategies shows that deterministic wind speed predictions are less sensitive to the choice of initialization strategy than temperature forecasts. Similarly, investigation of the variability for various planetary boundary layer (PBL) parameterizations reveals that turbulent kinetic energy (TKE)-based schemes have an advantage over the non-local schemes for non-deterministic motions. The larger spread in the verification scores for various PBL parameterizations than initialization strategies indicates that PBL parameterization may play a larger role modulating the variability of non-deterministic motions in the SBL for these cases. These results confirm previous findings that have shown WRF to have limited skill forecasting submeso variability for periods greater than ~20 min. The limited skill of the WRF at these scales in these cases is related to the systematic underestimation of the amplitude of observed fluctuations. 
These results are implemented in the model design and configuration for the investigation of nonstationary waves and rotor structures modulating submeso and meso-gamma motions and the SBL. Observations and WRF forecasts of two wave cases characterized by nonstationary waves and rotors are investigated, showing that the WRF model has reasonable accuracy in forecasting low-level temperature and wind speed in the SBL and qualitatively produces rotors similar to those observed, as well as some of the mechanisms modulating their development and evolution. Finally, observations and high-resolution WRF forecasts under different environmental conditions using various initialization strategies are used to investigate the impact of nonlinear gravity waves and rotor structures on the generation of intermittent turbulence and valley transport in the SBL. Evidence of the presence of elevated regions of TKE generated by the complex waves and rotors is presented and investigated using an additional four case studies exhibiting two synoptic flow regimes and different wave and rotor structures. Throughout this thesis, terrain-induced gravity waves and rotors in the SBL are shown to synergistically interact with the surface cold pool and to enhance low-level turbulence intermittency through the development of submeso and meso-gamma motions. These motions are shown to be an important source of uncertainty for the atmospheric transport and dispersion of pollutants and hazardous materials under very stable conditions. (Abstract shortened by ProQuest.).

  3. Uncertainty Quantification of Non-linear Oscillation Triggering in a Multi-injector Liquid-propellant Rocket Combustion Chamber

    NASA Astrophysics Data System (ADS)

    Popov, Pavel; Sideris, Athanasios; Sirignano, William

    2014-11-01

    We examine the non-linear dynamics of the transverse modes of combustion-driven acoustic instability in a liquid-propellant rocket engine. Triggering can occur, whereby small perturbations from mean conditions decay, while larger disturbances grow to a limit cycle whose amplitude may be comparable to the mean pressure. For a deterministic perturbation, the response is deterministic as well, and a single realization can be computed by coupled finite-volume solvers at low computational cost. The randomness of the triggering disturbance is captured by treating the injector flow rates, local pressure disturbances, and sudden acceleration of the entire combustion chamber as random variables. The combustor chamber with its many sub-fields resulting from many injector ports may be viewed as a multi-scale complex system wherein the developing acoustic oscillation is the emergent structure. Numerical simulation of the resulting stochastic PDE system is performed using the polynomial chaos expansion method. The overall probability of unstable growth is assessed in different regions of the parameter space. We address, in particular, the seven-injector, rectangular Purdue University experimental combustion chamber. In addition to the novel geometry, new features include disturbances caused by engine acceleration and unsteady thruster nozzle flow.
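The projection step behind a polynomial chaos expansion can be sketched in one dimension (a toy version, not the paper's multi-dimensional stochastic solver): a quadratic response of a standard-normal random input is projected onto probabilists' Hermite polynomials, and the coefficients then give the response mean and variance directly. The response function and quadrature grid are illustrative assumptions.

```python
import math

def hermite_e(n, x):
    # probabilists' Hermite polynomial He_n(x) via the three-term recurrence
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def pce_coefficients(g, order, grid):
    # project g onto He_n under the standard normal weight (midpoint quadrature);
    # the normalization uses ||He_n||^2 = n!
    coeffs = []
    for n in range(order + 1):
        acc = 0.0
        for i in range(len(grid) - 1):
            xm = 0.5 * (grid[i] + grid[i + 1])
            w = math.exp(-xm * xm / 2) / math.sqrt(2 * math.pi)
            acc += g(xm) * hermite_e(n, xm) * w * (grid[i + 1] - grid[i])
        coeffs.append(acc / math.factorial(n))
    return coeffs

grid = [-8 + 16 * i / 2000 for i in range(2001)]
c = pce_coefficients(lambda a: a * a, 2, grid)   # toy response g(A) = A^2
mean = c[0]                                       # E[g(A)] = 1 for this g
var = sum(math.factorial(n) * c[n] ** 2 for n in range(1, 3))  # Var[g(A)] = 2
```

For g(A) = A^2 the expansion is exact at order 2 (c = [1, 0, 1]), so the recovered mean and variance match the analytic values.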

  4. The Non-Signalling theorem in generalizations of Bell's theorem

    NASA Astrophysics Data System (ADS)

    Walleczek, J.; Grössing, G.

    2014-04-01

    Does "epistemic non-signalling" ensure the peaceful coexistence of special relativity and quantum nonlocality? The possibility of an affirmative answer is of great importance to deterministic approaches to quantum mechanics given recent developments towards generalizations of Bell's theorem. By generalizations of Bell's theorem we here mean efforts that seek to demonstrate the impossibility of any deterministic theory obeying the predictions of Bell's theorem, including not only local hidden-variables theories (LHVTs) but, critically, nonlocal hidden-variables theories (NHVTs) as well, such as de Broglie-Bohm theory. Naturally, in light of the well-established experimental findings from quantum physics, whether or not a deterministic approach to quantum mechanics, including an emergent quantum mechanics, is logically possible depends on compatibility with the predictions of Bell's theorem. With respect to deterministic NHVTs, recent attempts to generalize Bell's theorem have claimed the impossibility of any such approaches to quantum mechanics. The present work offers arguments showing why such efforts towards generalization may fall short of their stated goal. In particular, we challenge the validity of the use of the non-signalling theorem as a conclusive argument in favor of the existence of free randomness, and therefore reject the use of the non-signalling theorem as an argument against the logical possibility of deterministic approaches. We here offer two distinct counter-arguments in support of the possibility of deterministic NHVTs: one argument exposes the circularity of the reasoning which is employed in recent claims, and a second argument is based on the inconclusive metaphysical status of the non-signalling theorem itself.
We proceed by presenting an entirely informal treatment of key physical and metaphysical assumptions, and of their interrelationship, in attempts seeking to generalize Bell's theorem on the basis of an ontic, foundational interpretation of the non-signalling theorem. We here argue that the non-signalling theorem must instead be viewed as an epistemic, operational theorem, i.e., one that refers exclusively to what epistemic agents can, or rather cannot, do. That is, we emphasize that the non-signalling theorem is a theorem about the operational inability of epistemic agents to signal information. In other words, as a proper principle, the non-signalling theorem may only be employed as an epistemic, phenomenological, or operational principle. Critically, our argument emphasizes that the non-signalling principle must not be used as an ontic principle about physical reality as such, i.e., as a theorem about the nature of physical reality independently of epistemic agents, e.g., human observers. One major reason in favor of our conclusion is that any definition of signalling or of non-signalling invariably requires a reference to epistemic agents, and what these agents can actually measure and report. Otherwise, the non-signalling theorem would equal a general "no-influence" theorem. In conclusion, under the assumption that the non-signalling theorem is epistemic (i.e. "epistemic non-signalling"), the search for deterministic approaches to quantum mechanics, including NHVTs and an emergent quantum mechanics, continues to be a viable research program towards disclosing the foundations of physical reality at its smallest dimensions.

  5. Chaotic dynamics and control of deterministic ratchets.

    PubMed

    Family, Fereydoon; Larrondo, H A; Zarlenga, D G; Arizmendi, C M

    2005-11-30

    Deterministic ratchets, in the inertial and also in the overdamped limit, exhibit very complex dynamics, including chaotic motion. This deterministically induced chaos mimics, to some extent, the role of noise, while changing, on the other hand, some of the basic properties of thermal ratchets; for example, inertial ratchets can exhibit multiple reversals in the current direction. The direction depends on the amount of friction and inertia, which makes these systems especially interesting for technological applications such as biological particle separation. In this work we review different strategies to control the current of inertial ratchets. The control parameters analysed are the strength and frequency of the periodic external force, the strength of the quenched noise that models a non-perfectly-periodic potential, and the mass of the particles. Control mechanisms are associated with the fractal nature of the basins of attraction of the mean velocity attractors. The control of the overdamped motion of noninteracting particles in a rocking periodic asymmetric potential is also reviewed. The analysis is focused on synchronization of the motion of the particles with the external sinusoidal driving force. Two cases are considered: a perfect lattice without disorder and a lattice with noncorrelated quenched noise. The amplitude of the driving force and the strength of the quenched noise are used as control parameters.
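A minimal sketch of the kind of overdamped rocked-ratchet dynamics reviewed here (the potential, drive parameters, and integrator are arbitrary illustrative choices, not taken from the paper): a particle in the standard asymmetric two-harmonic potential, driven by a sinusoidal force and integrated with an explicit Euler step, from which the mean velocity (the "current") is read off.

```python
import math

def ratchet_mean_velocity(force_amp, omega, dt=1e-3, periods=50):
    # Overdamped rocked ratchet: dx/dt = -V'(x) + F*sin(omega*t), with the
    # standard asymmetric potential V(x) = -[sin(2*pi*x) + 0.25*sin(4*pi*x)]/(2*pi).
    def dV(x):
        return -(math.cos(2 * math.pi * x) + 0.5 * math.cos(4 * math.pi * x))
    x, t = 0.0, 0.0
    t_end = periods * 2 * math.pi / omega
    for _ in range(int(t_end / dt)):
        x += dt * (-dV(x) + force_amp * math.sin(omega * t))  # explicit Euler step
        t += dt
    return x / t_end  # mean velocity, in potential periods per unit time

v = ratchet_mean_velocity(force_amp=1.2, omega=0.5)
```

Sweeping `force_amp` or `omega` in this sketch is the kind of parameter scan the review discusses for controlling the current; the sign and magnitude of `v` depend sensitively on those parameters.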

  6. Interesting examples of supervised continuous variable systems

    NASA Technical Reports Server (NTRS)

    Chase, Christopher; Serrano, Joe; Ramadge, Peter

    1990-01-01

    The authors analyze two simple deterministic flow models for multiple buffer servers, which are examples of the supervision of continuous variable systems by a discrete controller. These systems exhibit what may be regarded as the two extremes of complexity of closed-loop behavior: one is eventually periodic, the other chaotic. The first example exhibits chaotic behavior that can be characterized statistically. Its dual, the switched server system, exhibits very predictable behavior, which is modeled by a finite state automaton. This research has application to multimodal discrete time systems where the controller can choose which of a set of transition maps to implement.

  7. Lévy-like behaviour in deterministic models of intelligent agents exploring heterogeneous environments

    NASA Astrophysics Data System (ADS)

    Boyer, D.; Miramontes, O.; Larralde, H.

    2009-10-01

    Many studies on animal and human movement patterns report the existence of scaling laws and power-law distributions. Whereas a number of random walk models have been proposed to explain observations, in many situations individuals actually rely on mental maps to explore strongly heterogeneous environments. In this work, we study a model of a deterministic walker, visiting sites randomly distributed on the plane and with varying weight or attractiveness. At each step, the walker minimizes a function that depends on the distance to the next unvisited target (cost) and on the weight of that target (gain). If the target weight distribution is a power law, p(k) ~ k-β, in some range of the exponent β, the foraging medium induces movements that are similar to Lévy flights and are characterized by non-trivial exponents. We explore variations of the choice rule in order to test the robustness of the model and argue that the addition of noise has a limited impact on the dynamics in strongly disordered media.
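The walker's greedy gain-over-cost rule can be sketched as follows; the specific choice function, the Pareto weights standing in for the power-law attractiveness, and all parameters are illustrative guesses rather than the authors' exact model.

```python
import math
import random

def deterministic_walk(n_sites=300, beta=2.0, steps=100, seed=7):
    # Sites scattered on the unit square, with heavy-tailed weights standing in
    # for a power-law attractiveness p(k) ~ k^-beta (Pareto shape beta - 1).
    rng = random.Random(seed)
    sites = [(rng.random(), rng.random()) for _ in range(n_sites)]
    weights = [rng.paretovariate(beta - 1) for _ in range(n_sites)]
    pos, visited, path = (0.5, 0.5), set(), [(0.5, 0.5)]
    for _ in range(steps):
        # deterministic rule: move to the unvisited site with the best
        # weight-to-distance ratio (one simple gain-over-cost choice)
        best = max((i for i in range(n_sites) if i not in visited),
                   key=lambda i: weights[i] / (math.dist(pos, sites[i]) + 1e-12))
        visited.add(best)
        pos = sites[best]
        path.append(pos)
    return path

path = deterministic_walk()
step_lengths = [math.dist(path[i], path[i + 1]) for i in range(len(path) - 1)]
```

Once the medium is fixed, the trajectory is fully deterministic; the broad spread of `step_lengths` is where the Lévy-like statistics described in the abstract would be measured.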

  8. Bursting as a source of non-linear determinism in the firing patterns of nigral dopamine neurons

    PubMed Central

    Jeong, Jaeseung; Shi, Wei-Xing; Hoffman, Ralph; Oh, Jihoon; Gore, John C.; Bunney, Benjamin S.; Peterson, Bradley S.

    2012-01-01

    Nigral dopamine (DA) neurons in vivo exhibit complex firing patterns consisting of tonic single-spikes and phasic bursts that encode information for certain types of reward-related learning and behavior. Non-linear dynamical analysis has previously demonstrated the presence of a non-linear deterministic structure in complex firing patterns of DA neurons, yet the origin of this non-linear determinism remains unknown. In this study, we hypothesized that bursting activity is the primary source of non-linear determinism in the firing patterns of DA neurons. To test this hypothesis, we investigated the dimension complexity of inter-spike interval data recorded in vivo from bursting and non-bursting DA neurons in the chloral hydrate-anesthetized rat substantia nigra. We found that bursting DA neurons exhibited non-linear determinism in their firing patterns, whereas non-bursting DA neurons showed truly stochastic firing patterns. Determinism was also detected in the isolated burst and inter-burst interval data extracted from firing patterns of bursting neurons. Moreover, the less bursty DA neurons of halothane-anesthetized rats exhibited higher-dimensional spiking dynamics than did the more bursty DA neurons of chloral hydrate-anesthetized rats. These results strongly indicate that bursting activity is the main source of low-dimensional, non-linear determinism in the firing patterns of DA neurons. This finding furthermore suggests that bursts are the likely carriers of meaningful information in the firing activities of DA neurons. PMID:22831464

  9. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE PAGES

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
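The ensemble filter being approximated can be illustrated with a one-dimensional, perturbed-observation EnKF analysis step; this is a textbook sketch under a direct-observation assumption (H = identity), not the paper's density-based DMFEnKF, and all numbers are invented.

```python
import random

def enkf_update(ensemble, obs, obs_noise_std, seed=0):
    # One scalar EnKF analysis step in perturbed-observation form,
    # with the state observed directly (observation operator H = identity).
    rng = random.Random(seed)
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # ensemble variance
    gain = var / (var + obs_noise_std ** 2)                 # Kalman gain
    # each member assimilates its own noisy copy of the observation
    return [x + gain * (obs + rng.gauss(0, obs_noise_std) - x) for x in ensemble]

rng = random.Random(1)
prior = [rng.gauss(0.0, 2.0) for _ in range(500)]   # prior ensemble ~ N(0, 4)
post = enkf_update(prior, obs=1.0, obs_noise_std=1.0)
```

The analysis pulls the ensemble mean toward the observation and shrinks its spread; the mean-field limit studied in the paper replaces this finite sample with a density evolved by a PDE solver.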

  10. Active temporal multiplexing of indistinguishable heralded single photons

    PubMed Central

    Xiong, C.; Zhang, X.; Liu, Z.; Collins, M. J.; Mahendra, A.; Helt, L. G.; Steel, M. J.; Choi, D. -Y.; Chae, C. J.; Leong, P. H. W.; Eggleton, B. J.

    2016-01-01

    It is a fundamental challenge in quantum optics to deterministically generate indistinguishable single photons through non-deterministic nonlinear optical processes, due to the intrinsic coupling of single- and multi-photon-generation probabilities in these processes. Actively multiplexing photons generated in many temporal modes can decouple these probabilities, but key issues are to minimize resource requirements to allow scalability, and to ensure indistinguishability of the generated photons. Here we demonstrate the multiplexing of photons from four temporal modes solely using fibre-integrated optics and off-the-shelf electronic components. We show a 100% enhancement to the single-photon output probability without introducing additional multi-photon noise. Photon indistinguishability is confirmed by a fourfold Hong–Ou–Mandel quantum interference with a 91±16% visibility after subtracting multi-photon noise due to high pump power. Our demonstration paves the way for scalable multiplexing of many non-deterministic photon sources to a single near-deterministic source, which will be of benefit to future quantum photonic technologies. PMID:26996317
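The probability decoupling that temporal multiplexing buys can be seen in a back-of-the-envelope sketch: an idealized binomial model with an assumed per-mode heralding probability and switch transmission, not the paper's measured numbers.

```python
def multiplexed_success(p_single, n_modes, eta=1.0):
    # Probability that at least one of n_modes temporal modes heralds a photon,
    # times an overall transmission eta for the switch/delay network.
    return (1 - (1 - p_single) ** n_modes) * eta

base = multiplexed_success(0.05, 1)              # single non-deterministic source
mux4 = multiplexed_success(0.05, 4, eta=0.6)     # 4 modes routed through lossy optics
enhancement = mux4 / base
```

Even with a fairly lossy switching network (eta = 0.6 assumed here), four modes more than double the single-photon output probability, in the spirit of the ~100% enhancement reported, and crucially without raising the per-mode pump power that drives multi-photon noise.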

  11. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d

  12. Vagal-dependent nonlinear variability in the respiratory pattern of anesthetized, spontaneously breathing rats

    PubMed Central

    Dhingra, R. R.; Jacono, F. J.; Fishman, M.; Loparo, K. A.; Rybak, I. A.

    2011-01-01

    Physiological rhythms, including respiration, exhibit endogenous variability associated with health, and deviations from this are associated with disease. Specific changes in the linear and nonlinear sources of breathing variability have not been investigated. In this study, we used information theory-based techniques, combined with surrogate data testing, to quantify and characterize the vagal-dependent nonlinear pattern variability in urethane-anesthetized, spontaneously breathing adult rats. Surrogate data sets preserved the amplitude distribution and linear correlations of the original data set, but nonlinear correlation structure in the data was removed. Differences in mutual information and sample entropy between original and surrogate data sets indicated the presence of deterministic nonlinear or stochastic non-Gaussian variability. With vagi intact (n = 11), the respiratory cycle exhibited significant nonlinear behavior in templates of points separated by time delays ranging from one sample to one cycle length. After vagotomy (n = 6), even though nonlinear variability was reduced significantly, nonlinear properties were still evident at various time delays. Nonlinear deterministic variability did not change further after subsequent bilateral microinjection of MK-801, an N-methyl-d-aspartate receptor antagonist, in the Kölliker-Fuse nuclei. Reversing the sequence (n = 5), blocking N-methyl-d-aspartate receptors bilaterally in the dorsolateral pons significantly decreased nonlinear variability in the respiratory pattern, even with the vagi intact, and subsequent vagotomy did not change nonlinear variability. Thus both vagal and dorsolateral pontine influences contribute to nonlinear respiratory pattern variability. Furthermore, breathing dynamics of the intact system are mutually dependent on vagal and pontine sources of nonlinear complexity. 
Understanding the structure and modulation of variability provides insight into disease effects on respiratory patterning. PMID:21527661
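Surrogate construction of the kind described, preserving the value distribution and the linear correlation structure while destroying nonlinear structure, can be sketched with a crude AAFT-style procedure built on a direct DFT; this is an illustrative simplification, not the authors' exact algorithm, and the test series is synthetic.

```python
import cmath
import math
import random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def aaft_surrogate(x, seed=0):
    # Randomize Fourier phases (preserving the power spectrum, i.e. the linear
    # correlations), then rank-remap the original values onto the result so the
    # amplitude distribution is preserved exactly. Nonlinear structure is lost.
    rng = random.Random(seed)
    n = len(x)
    X = dft(x)
    Y = [X[0]] + [0j] * (n - 1)
    for k in range(1, n // 2 + 1):
        Y[k] = abs(X[k]) * cmath.exp(1j * rng.uniform(0, 2 * math.pi))
        if k < n - k:
            Y[n - k] = Y[k].conjugate()   # keep the spectrum conjugate-symmetric
    y = idft(Y)
    order = sorted(range(n), key=lambda i: y[i])
    ranked_x = sorted(x)
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = ranked_x[rank]
    return out

x = [math.sin(0.4 * t) + 0.5 * math.sin(1.1 * t) for t in range(64)]
s = aaft_surrogate(x)
```

Differences in mutual information or sample entropy between `x` and an ensemble of such surrogates are then attributed to nonlinear (or non-Gaussian) structure, as in the study above.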

  13. Stochastic and Deterministic Approaches to Gas-grain Modeling of Interstellar Sources

    NASA Astrophysics Data System (ADS)

    Vasyunin, Anton; Herbst, Eric; Caselli, Paola

    During the last decade, our understanding of the chemistry on surfaces of interstellar grains has been significantly enhanced. Extensive laboratory studies have revealed complex structure and dynamics in interstellar ice analogues, thus making our knowledge much more detailed. In addition, the first qualitative investigations of new processes were made, such as non-thermal chemical desorption of species from dust grains into the gas. Not surprisingly, the rapid growth of knowledge about the physics and chemistry of interstellar ices led to the development of a new generation of astrochemical models. The models are typically characterized by more detailed treatments of the ice physics and chemistry than their predecessors. The utilized numerical approaches vary greatly from microscopic models, in which every single molecule is traced, to ``mean field'' macroscopic models, which simulate the evolution of averaged characteristics of interstellar ices, such as overall bulk composition. While microscopic models based on a stochastic Monte Carlo approach are potentially able to simulate the evolution of interstellar ices with an account of the most subtle effects found in the laboratory, their use is often impractical due to limited knowledge about star-forming regions and huge computational demands. On the other hand, deterministic macroscopic models that often utilize kinetic rate equations are computationally efficient but experience difficulties in incorporation of such potentially important effects as ice segregation or discreteness of surface chemical reactions. In my talk, I will review the state of the art in the development of gas-grain astrochemical models. I will discuss how to incorporate key features of ice chemistry and dynamics in the gas-grain astrochemical models, and how the incorporation of recent laboratory findings into gas-grain models helps to better match observations.
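The deterministic-versus-stochastic modeling trade-off described here can be sketched with a toy unimolecular conversion A → B, comparing a kinetic rate equation against a Gillespie-style stochastic simulation. The rate, population, and time span are made up; real gas-grain networks couple many such reactions across gas and ice phases.

```python
import math
import random

def rate_equation(k, a0, t_end, dt=1e-3):
    # Deterministic kinetics for A -> B: da/dt = -k*a (explicit Euler).
    a = a0
    for _ in range(int(t_end / dt)):
        a -= dt * k * a
    return a

def gillespie(k, n0, t_end, seed=1):
    # Stochastic simulation algorithm for the same unimolecular decay:
    # waiting times between single-molecule events are exponential, rate k*n.
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > 0:
        t += rng.expovariate(k * n)
        if t > t_end:
            break
        n -= 1
    return n

k, n0, t_end = 0.5, 2000, 2.0
det = rate_equation(k, n0, t_end)                     # ~ n0 * exp(-k * t_end)
sto = sum(gillespie(k, n0, t_end, seed=s) for s in range(20)) / 20
```

For large populations the two agree, which is why rate equations suffice for bulk gas-phase chemistry; the stochastic treatment becomes essential when a grain hosts only a handful of reactive species and the discreteness the abstract mentions dominates.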

  14. Technologies for precision manufacture of current and future windows and domes

    NASA Astrophysics Data System (ADS)

    Hallock, Bob; Shorey, Aric

    2009-05-01

    The final finish and characterization of windows and domes presents a number of challenges in achieving desired precision with acceptable cost and schedule. This becomes more difficult with advanced materials and as window and dome shapes and requirements become more complex, including acute angle corners, transmitted wavefront specifications, aspheric geometries, and a trend toward conformal surfaces. Magnetorheological Finishing (MRF®) and Magnetorheological Jet (MR Jet®), along with metrology provided by Sub-aperture Stitching Interferometry (SSI®), have several unique attributes that give them advantages in enhancing fabrication of current and next generation windows and domes. The advantages that MRF brings to the precision finishing of a wide range of shapes such as flats, spheres (including hemispheres), cylinders, aspheres and even freeform optics have been well documented. Recent advancements include the ability to finish freeform shapes up to 2 meters in size as well as progress in finishing challenging IR materials. Due to its shear-based removal mechanism, in contrast to the pressure-based process of other techniques, edges are not typically rolled, in particular on parts with acute angle corners. MR Jet provides additional benefits, particularly in the finishing of the inside of steep concave domes and other irregular shapes. The ability of MR Jet to correct the figure of conformal domes deterministically and to high precision has been demonstrated. Combining these technologies with metrology techniques such as SSI provides a solution for finishing current and future windows and domes in a reliable, deterministic and cost-effective way. The ability to use the SSI to characterize a range of shapes such as domes and aspheres, as well as progress in using MRF and MR Jet for finishing conventional and conformal windows and domes with increasing size and complexity of design, will be presented.

  15. Chaotic behavior in the locomotion of Amoeba proteus.

    PubMed

    Miyoshi, H; Kagawa, Y; Tsuchiya, Y

    2001-01-01

    The locomotion of Amoeba proteus has been investigated using algorithms, developed in the field of nonlinear science, that evaluate the correlation dimension and the Lyapunov spectrum. These parameters indicate whether the random behavior of the system is stochastic or deterministic. For the analysis of the nonlinear parameters, n-dimensional time-delayed vectors were reconstructed from a time series of the periphery and area of A. proteus images captured with a charge-coupled-device camera, which characterize its random motion. The correlation dimension analysis has shown that the random motion of A. proteus is governed by only 3-4 macrovariables, even though the system is a complex system composed of many degrees of freedom. Furthermore, the analysis of the Lyapunov spectrum has shown that its largest exponent takes positive values. These results indicate that the random behavior of A. proteus is chaotic, deterministic motion on a low-dimensional attractor. It may be important for the elucidation of cell locomotion to take account of nonlinear interactions among a small number of dynamics, such as the sol-gel transformation, the cytoplasmic streaming, and the related chemical reactions occurring in the cell.
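The correlation-dimension estimate used here can be illustrated with a bare-bones Grassberger-Procaccia sketch on a known low-dimensional system; the chaotic logistic map stands in for the cell-image time series, and the radii and embedding parameters are arbitrary choices.

```python
import math

def embed(series, dim, delay):
    # time-delay embedding of a scalar series into dim-dimensional vectors
    n = len(series) - (dim - 1) * delay
    return [tuple(series[i + j * delay] for j in range(dim)) for i in range(n)]

def correlation_sum(points, r):
    # Grassberger-Procaccia C(r): fraction of point pairs closer than r (max-norm)
    n = len(points)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if max(abs(a - b) for a, b in zip(points[i], points[j])) < r:
                count += 1
    return 2 * count / (n * (n - 1))

# scalar observable from the chaotic logistic map x -> 4x(1-x)
x, series = 0.3, []
for _ in range(600):
    x = 4 * x * (1 - x)
    series.append(x)

pts = embed(series, dim=2, delay=1)
c1, c2 = correlation_sum(pts, 0.05), correlation_sum(pts, 0.1)
dim_est = math.log(c2 / c1) / math.log(0.1 / 0.05)  # local slope of log C(r) vs log r
```

The estimated slope comes out near 1, consistent with the map's one-dimensional attractor; a genuinely stochastic series would instead fill the embedding space and push the estimate toward the embedding dimension.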

  16. A hydro-meteorological ensemble prediction system for real-time flood forecasting purposes in the Milano area

    NASA Astrophysics Data System (ADS)

    Ravazzani, Giovanni; Amengual, Arnau; Ceppi, Alessandro; Romero, Romualdo; Homar, Victor; Mancini, Marco

    2015-04-01

    Analysis of forecasting strategies that can provide a tangible basis for flood early warning procedures and mitigation measures over the Western Mediterranean region is one of the fundamental motivations of the European HyMeX programme. Here, we examine a set of hydro-meteorological episodes that affected the Milano urban area, for which the city's complex flood protection system did not fully cope with the flash floods that occurred. Indeed, flood damages in the area have increased exponentially during the last 60 years, due to industrial and urban development. Thus, the improvement of the Milano flood control system requires a synergism between structural and non-structural approaches. The flood forecasting system tested in this work comprises the Flash-flood Event-based Spatially distributed rainfall-runoff Transformation, including Water Balance (FEST-WB) and the Weather Research and Forecasting (WRF) models, in order to provide a hydrological ensemble prediction system (HEPS). Deterministic and probabilistic quantitative precipitation forecasts (QPFs) have been provided by the WRF model in a set of 48-hour experiments. The HEPS has been generated by combining different physical parameterizations (i.e. cloud microphysics, moist convection and boundary-layer schemes) of the WRF model in order to better encompass the atmospheric processes leading to high precipitation amounts. We have thus been able to test the value of a probabilistic versus a deterministic framework when driving Quantitative Discharge Forecasts (QDFs). Results highlight (i) the benefits of using a high-resolution HEPS in conveying uncertainties for this complex orographic area and (ii) a better simulation of most of the extreme precipitation events, potentially enabling valuable probabilistic QDFs. Hence, the HEPS copes with the significant deficiencies found in the deterministic QPFs.
These shortcomings would prevent correct forecasting of the location and timing of high precipitation rates and total amounts at the catchment scale, thus heavily impacting the deterministic QDFs. In contrast, early warnings would have been possible within a HEPS context for the Milano area, proving the suitability of such a system for civil protection purposes.

  17. Discrete Deterministic and Stochastic Petri Nets

    NASA Technical Reports Server (NTRS)

    Zijal, Robert; Ciardo, Gianfranco

    1996-01-01

    Petri nets augmented with timing specifications have gained wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous-time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions, where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented, comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
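The geometric-for-exponential approximation at the heart of the DDSPN formalism can be checked in a few lines (the rate and step sizes are chosen arbitrarily): discretizing an exponential firing time onto a grid of step dt gives a geometric distribution whose mean converges to the exponential mean as dt shrinks.

```python
import math

def geometric_mean_firing_time(rate, dt):
    # Discretize an exponential firing time with the given rate onto the grid
    # {dt, 2*dt, ...}: the transition fires within a step with probability
    # p = 1 - exp(-rate*dt), so the geometric mean firing time is dt / p.
    p = 1 - math.exp(-rate * dt)
    return dt / p

rate = 2.0                                   # exponential mean = 1 / rate = 0.5
means = [geometric_mean_firing_time(rate, dt) for dt in (0.1, 0.01, 0.001)]
```

The approximation always overshoots slightly (by about dt/2) and converges to the exponential mean from above as the time step is refined.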

  18. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
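The cost-function idea can be miniaturized to a two-state channel C ⇌ O, where the steady-state open probability and the relaxation rate together pin down both rates; recovering "true" rates from pseudo-data by brute-force search illustrates the identifiability question. This is a deliberately tiny stand-in for the PDE-constrained optimization in the paper, with invented rates.

```python
def open_prob(kco, koc):
    # steady-state open probability of a two-state C <-> O Markov channel
    return kco / (kco + koc)

def relax_rate(kco, koc):
    # relaxation rate (inverse time constant) of the two-state model
    return kco + koc

true_rates = (0.8, 0.2)
target = (open_prob(*true_rates), relax_rate(*true_rates))  # pseudo-experimental data

def cost(kco, koc):
    # squared mismatch between model statistics and the target statistics
    return ((open_prob(kco, koc) - target[0]) ** 2
            + (relax_rate(kco, koc) - target[1]) ** 2)

# brute-force grid search over candidate rate pairs
best = min((cost(a / 100, b / 100), a / 100, b / 100)
           for a in range(1, 201) for b in range(1, 201))
```

Because two independent statistics constrain two rates, the minimum is unique; dropping one statistic from the cost leaves a whole curve of rate pairs with zero cost, which is exactly the kind of non-identifiability the paper's cost-function analysis is designed to expose.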

  19. Proposed principles of maximum local entropy production.

    PubMed

    Ross, John; Corlan, Alexandru D; Müller, Stefan C

    2012-07-12

    Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.

  20. Deterministic Role of Collision Cascade Density in Radiation Defect Dynamics in Si

    NASA Astrophysics Data System (ADS)

    Wallace, J. B.; Aji, L. B. Bayu; Shao, L.; Kucheyev, S. O.

    2018-05-01

    The formation of stable radiation damage in solids often proceeds via complex dynamic annealing (DA) processes, involving point defect migration and interaction. The dependence of DA on irradiation conditions remains poorly understood even for Si. Here, we use a pulsed ion beam method to study defect interaction dynamics in Si bombarded in the temperature range from ~-30 °C to 210 °C with ions in a wide range of masses, from Ne to Xe, creating collision cascades with different densities. We demonstrate that the complexity of the influence of irradiation conditions on defect dynamics can be reduced to a deterministic effect of a single parameter, the average cascade density, calculated by taking into account the fractal nature of collision cascades. For each ion species, the DA rate exhibits two well-defined Arrhenius regions where different DA mechanisms dominate. These two regions intersect at a critical temperature, which depends linearly on the cascade density. The low-temperature DA regime is characterized by an activation energy of ~0.1 eV, independent of the cascade density. The high-temperature regime, however, exhibits a change in the dominant DA process for cascade densities above ~0.04 at.%, evidenced by an increase in the activation energy. These results clearly demonstrate a crucial role of the collision cascade density and can be used to predict radiation defect dynamics in Si.
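The intersection of two Arrhenius branches that defines the critical temperature can be written down directly. The prefactors and the high-temperature barrier below are invented for illustration; only the ~0.1 eV low-temperature activation energy echoes the abstract.

```python
import math

KB = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius(prefactor, ea, temp_k):
    # Arrhenius rate: A * exp(-Ea / (kB * T))
    return prefactor * math.exp(-ea / (KB * temp_k))

def critical_temperature(a1, e1, a2, e2):
    # Temperature where two Arrhenius branches predict the same rate:
    # a1*exp(-e1/kT) = a2*exp(-e2/kT)  =>  T = (e2 - e1) / (kB * ln(a2 / a1))
    return (e2 - e1) / (KB * math.log(a2 / a1))

tc = critical_temperature(a1=1e3, e1=0.1, a2=1e6, e2=0.5)   # illustrative values
r1 = arrhenius(1e3, 0.1, tc)
r2 = arrhenius(1e6, 0.5, tc)
```

The closed form makes the paper's observation concrete: shifting either activation energy or prefactor (as a change in cascade density would) moves the crossover temperature linearly in the energy difference.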

  1. Deterministic Role of Collision Cascade Density in Radiation Defect Dynamics in Si.

    PubMed

    Wallace, J B; Aji, L B Bayu; Shao, L; Kucheyev, S O

    2018-05-25

    The formation of stable radiation damage in solids often proceeds via complex dynamic annealing (DA) processes, involving point defect migration and interaction. The dependence of DA on irradiation conditions remains poorly understood even for Si. Here, we use a pulsed ion beam method to study defect interaction dynamics in Si bombarded in the temperature range from ∼-30 °C to 210 °C with ions in a wide range of masses, from Ne to Xe, creating collision cascades with different densities. We demonstrate that the complexity of the influence of irradiation conditions on defect dynamics can be reduced to a deterministic effect of a single parameter, the average cascade density, calculated by taking into account the fractal nature of collision cascades. For each ion species, the DA rate exhibits two well-defined Arrhenius regions where different DA mechanisms dominate. These two regions intersect at a critical temperature, which depends linearly on the cascade density. The low-temperature DA regime is characterized by an activation energy of ∼0.1  eV, independent of the cascade density. The high-temperature regime, however, exhibits a change in the dominant DA process for cascade densities above ∼0.04 at.%, evidenced by an increase in the activation energy. These results clearly demonstrate a crucial role of the collision cascade density and can be used to predict radiation defect dynamics in Si.
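    The intersection of two Arrhenius branches, as described above, can be made concrete with a short numerical sketch. The prefactors and the high-temperature activation energy below are hypothetical placeholders, not values fitted from the paper; only the ∼0.1 eV low-temperature figure echoes the abstract.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius(temp_k, prefactor, e_act):
    """Arrhenius rate: A * exp(-Ea / (kB * T))."""
    return prefactor * math.exp(-e_act / (K_B * temp_k))

def critical_temperature(a1, e1, a2, e2):
    """Temperature at which two Arrhenius branches predict equal
    rates: kB*Tc*ln(A2/A1) = E2 - E1."""
    return (e2 - e1) / (K_B * math.log(a2 / a1))

# Illustrative parameters: a low-T branch with Ea ~ 0.1 eV and a
# high-T branch with a larger activation energy.
a_low, e_low = 1.0e3, 0.1
a_high, e_high = 1.0e6, 0.3

t_c = critical_temperature(a_low, e_low, a_high, e_high)
r1 = arrhenius(t_c, a_low, e_low)    # both branches give the same
r2 = arrhenius(t_c, a_high, e_high)  # rate at the crossover
```

Below t_c the small-Ea branch dominates; above it the large-Ea branch takes over, mirroring the change in the dominant DA mechanism.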

  2. Measuring predictability in ultrasonic signals: an application to scattering material characterization.

    PubMed

    Carrión, Alicia; Miralles, Ramón; Lara, Guillermo

    2014-09-01

    In this paper, we present a novel and completely different approach to the problem of scattering material characterization: measuring the degree of predictability of the time series. Measuring predictability can provide information about the strength of the deterministic component of the time series relative to the whole acquired time series. This relationship can provide information about coherent reflections in material grains with respect to the rest of the incoherent noise that typically appears in non-destructive testing using ultrasonics. This is a non-parametric technique commonly used in chaos theory that does not require making any kind of assumption about attenuation profiles. In highly scattering media (low SNR), it has been shown theoretically that the degree of predictability allows material characterization. The experimental results obtained in this work with 32 cement probes of 4 different porosities demonstrate the ability of this technique to perform classification. It has also been shown that, in this particular application, the measurement of predictability can be used as an indicator of the porosity percentages of the test samples with great accuracy. Copyright © 2014 Elsevier B.V. All rights reserved.
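    A generic way to quantify the "degree of predictability" of a series is a nearest-neighbor prediction-skill score in delay-embedding space. The sketch below is an illustrative stand-in, not the estimator used in the paper: it compares the one-step prediction error against the series variance, so a deterministic signal scores near 1 and white noise scores near or below 0.

```python
import numpy as np

def predictability(x, dim=3, tau=1):
    """Degree of predictability: 1 - (nearest-neighbor prediction
    error / variance of the series). Close to 1 for a deterministic
    series, near or below 0 for white noise."""
    n = len(x) - (dim - 1) * tau - 1
    emb = np.array([x[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])
    err = 0.0
    for i in range(n):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                      # exclude the self-match
        j = int(np.argmin(d))
        # predict the successor of window i by the successor of its
        # nearest neighbor in the embedding space
        err += (x[j + (dim - 1) * tau + 1] - x[i + (dim - 1) * tau + 1]) ** 2
    return 1.0 - (err / n) / np.var(x)

# Deterministic chaos: the fully developed logistic map
z = np.empty(500)
z[0] = 0.4
for k in range(499):
    z[k + 1] = 4.0 * z[k] * (1.0 - z[k])

p_chaos = predictability(z)                                            # high
p_noise = predictability(np.random.default_rng(0).uniform(0, 1, 500))  # low
```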

  3. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.

  4. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    NASA Astrophysics Data System (ADS)

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-07-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
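    A two-point probability model of the kind described above (a Rosenblueth-style point estimate) evaluates the deterministic model at mu ± sigma for each uncertain variable and averages the 2^k outputs to approximate the output moments. The `head_drop` function and all numbers below are hypothetical stand-ins, not the paper's groundwater model; the three uncertain variables mirror the three the abstract names.

```python
import itertools
import math
import statistics

def two_point_estimate(model, means, sds):
    """Rosenblueth-style two-point estimate: run the deterministic
    model at mu +/- sigma for each of k uncertain variables (2**k
    runs) and average to estimate the output mean and std dev."""
    outs = []
    for signs in itertools.product((-1.0, 1.0), repeat=len(means)):
        args = [m + s * sd for m, s, sd in zip(means, signs, sds)]
        outs.append(model(*args))
    mean = statistics.fmean(outs)
    var = statistics.fmean((o - mean) ** 2 for o in outs)
    return mean, math.sqrt(var)

# Hypothetical steady-state water-table proxy: drawdown scales with
# the source-sink term over (conductivity * specific yield).
def head_drop(conductivity, specific_yield, source_sink):
    return source_sink / (conductivity * specific_yield)

mean_h, sd_h = two_point_estimate(
    head_drop,
    means=[10.0, 0.2, 5.0],   # K, Sy, source-sink (arbitrary units)
    sds=[2.0, 0.05, 1.0],
)
cv = sd_h / mean_h  # spatial coefficient of variation analogue
```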

  5. Decentralized stochastic control

    NASA Technical Reports Server (NTRS)

    Speyer, J. L.

    1980-01-01

    In decentralized stochastic control, the information available to one controller is not the same as the information available to another. The system, including its information structure, has a stochastic or uncertain component, which complicates the development of decision rules that would otherwise be determined under the assumption that the system is deterministic. The system is also dynamic, meaning that present decisions affect future system responses and the information in the system. This circumstance presents a complex problem in which tools like dynamic programming are no longer applicable. These difficulties are discussed from an intuitive viewpoint. Particular assumptions are introduced which allow a limited theory that produces mechanizable affine decision rules.

  6. Unifying Complexity and Information

    NASA Astrophysics Data System (ADS)

    Ke, Da-Guan

    2013-04-01

    Complex systems, arising in many contexts in the computer, life, social, and physical sciences, have not shared a generally accepted complexity measure playing a fundamental role like that of the Shannon entropy H in statistical mechanics. Superficially conflicting criteria of complexity measurement, i.e. complexity-randomness (C-R) relations, have given rise to a special measure intrinsically adaptable to more than one criterion. However, the deep causes of the conflict and of the adaptability have remained unclear. Here I trace the root of each representative or adaptable measure to its particular universal data-generating or -regenerating model (UDGM or UDRM). A representative measure for deterministic dynamical systems is found as a counterpart of the H for random processes, clearly redefining the boundary between the different criteria. And a specific UDRM achieving the intrinsic adaptability enables a general information measure that ultimately resolves all major disputes. This work encourages a single framework covering deterministic systems, statistical mechanics and real-world living organisms.

  7. P ≠NP Millenium-Problem(MP) TRIVIAL Physics Proof Via NATURAL TRUMPS Artificial-``Intelligence'' Via: Euclid Geometry, Plato Forms, Aristotle Square-of-Opposition, Menger Dimension-Theory Connections!!! NO Computational-Complexity(CC)/ANYthing!!!: Geometry!!!

    NASA Astrophysics Data System (ADS)

    Clay, London; Menger, Karl; Rota, Gian-Carlo; Euclid, Alexandria; Siegel, Edward

    P ≠ NP MP proof is by computer-"science"/SEANCE(!!!)(CS) computational-"intelligence" lingo jargonial-obfuscation (JO) NATURAL-Intelligence (NI) DISambiguation! CS P =(?)= NP MEANS (Deterministic)(PC) =(?)= (Non-D)(PC), i.e. D(P) =(?)= N(P). For inclusion (equality) vs. exclusion (inequality) the irrelevant (P) simply cancels!!! (Equally any/all other CCs IF both sides are identical.) Crucial question left: (D) =(?)= (ND), i.e. D =(?)= N. Algorithmics [Sipser, Intro. Thy. Comp. ('97), p. 49, Fig. 1.15]!!!

  8. Limit Theorems for Dispersing Billiards with Cusps

    NASA Astrophysics Data System (ADS)

    Bálint, P.; Chernov, N.; Dolgopyat, D.

    2011-12-01

    Dispersing billiards with cusps are deterministic dynamical systems with a mild degree of chaos, exhibiting "intermittent" behavior that alternates between regular and chaotic patterns. Their statistical properties are therefore weak and delicate. They are characterized by a slow (power-law) decay of correlations, and as a result the classical central limit theorem fails. We prove that a non-classical central limit theorem holds, with a scaling factor of √(n log n) replacing the standard √n. We also derive the respective Weak Invariance Principle, and we identify the class of observables for which the classical CLT still holds.
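    Schematically, and under the paper's hypotheses on the billiard map F and the observable f (stated precisely there), the non-classical central limit theorem replaces the usual normalization of the Birkhoff sums; here mu denotes the mean of f and sigma_f^2 a nonstandard variance parameter:

```latex
% Classical CLT (strongly chaotic billiards): (S_n - n\mu)/\sqrt{n} is
% asymptotically normal. With cusps, correlations decay slowly and the
% normalization gains a logarithmic factor:
S_n = \sum_{k=0}^{n-1} f \circ F^k, \qquad
\frac{S_n - n\mu}{\sqrt{n \log n}} \;\xrightarrow{\;d\;}\;
\mathcal{N}\!\left(0, \sigma_f^2\right).
```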

  9. Characterization of normality of chaotic systems including prediction and detection of anomalies

    NASA Astrophysics Data System (ADS)

    Engler, Joseph John

    Accurate prediction and control pervade domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented by linear, periodic, or random data. It has been shown that these systems exhibit deterministic chaos. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear to be random or quasi-periodic distributions. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions, manifested through rapid divergence of states initially close to one another. Due to this characterization, it has been deemed impossible to accurately predict future states of these systems over longer time scales. Fortunately, the deterministic nature of these systems allows for accurate short-term predictions, given that the dynamics of the system are well understood. This fact has been exploited in the research community and has resulted in various algorithms for short-term prediction. Detection of normality in deterministically chaotic systems is critical to understanding the system sufficiently to be able to predict future states. Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may result in bifurcation of the normal states, further complicates the problem. The detection of anomalies and prediction of future states of the chaotic system allow for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detecting anomalous behavior, and more accurately predicting future states of the system. Additionally, the ability to detect subtle system state changes is discussed. 
The dissertation addresses these goals by proposing new representational techniques and novel prediction methodologies. The value and efficiency of these methods are explored in various case studies. Presented is an overview of chaotic systems with examples taken from the real world. A representation schema for rapid understanding of the various states of deterministically chaotic systems is presented. This schema is then used to detect anomalies and system state changes. Additionally, a novel prediction methodology which utilizes Lyapunov exponents to facilitate longer term prediction accuracy is presented and compared with other nonlinear prediction methodologies. These novel methodologies are then demonstrated on applications such as wind energy, cyber security and classification of social networks.
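    The Lyapunov exponents mentioned above set the horizon over which prediction of a chaotic system is feasible. The following is a textbook sketch for the logistic map, not the dissertation's methodology: it averages the map's local stretching rate log|f'(x)| along a trajectory to estimate the largest exponent.

```python
import math

def largest_lyapunov(r, x0, n_iter=100000, discard=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the trajectory average of log|f'(x)| with
    f'(x) = r*(1 - 2x)."""
    x = x0
    for _ in range(discard):          # let transients die out
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n_iter

lam = largest_lyapunov(4.0, 0.3)      # exact value for r = 4 is ln 2
# A positive exponent bounds the useful prediction horizon: an initial
# error of size eps grows roughly like eps * exp(lam * t).
```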

  10. Bursting as a source of non-linear determinism in the firing patterns of nigral dopamine neurons.

    PubMed

    Jeong, Jaeseung; Shi, Wei-Xing; Hoffman, Ralph; Oh, Jihoon; Gore, John C; Bunney, Benjamin S; Peterson, Bradley S

    2012-11-01

    Nigral dopamine (DA) neurons in vivo exhibit complex firing patterns consisting of tonic single-spikes and phasic bursts that encode information for certain types of reward-related learning and behavior. Non-linear dynamical analysis has previously demonstrated the presence of a non-linear deterministic structure in complex firing patterns of DA neurons, yet the origin of this non-linear determinism remains unknown. In this study, we hypothesized that bursting activity is the primary source of non-linear determinism in the firing patterns of DA neurons. To test this hypothesis, we investigated the dimension complexity of inter-spike interval data recorded in vivo from bursting and non-bursting DA neurons in the chloral hydrate-anesthetized rat substantia nigra. We found that bursting DA neurons exhibited non-linear determinism in their firing patterns, whereas non-bursting DA neurons showed truly stochastic firing patterns. Determinism was also detected in the isolated burst and inter-burst interval data extracted from firing patterns of bursting neurons. Moreover, less bursting DA neurons in halothane-anesthetized rats exhibited higher dimensional spiking dynamics than do more bursting DA neurons in chloral hydrate-anesthetized rats. These results strongly indicate that bursting activity is the main source of low-dimensional, non-linear determinism in the firing patterns of DA neurons. This finding furthermore suggests that bursts are the likely carriers of meaningful information in the firing activities of DA neurons. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  11. Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of combined failure stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by 0.50 and 0.999 probabilities.

  12. Simple deterministic models and applications. Comment on "Coupled disease-behavior dynamics on complex networks: A review" by Z. Wang et al.

    NASA Astrophysics Data System (ADS)

    Yang, Hyun Mo

    2015-12-01

    Currently, discrete modelling is widely used, thanks to access to computers with huge storage capacity and high-performance processors and to the easy implementation of algorithms, allowing increasingly sophisticated models to be developed and simulated. Wang et al. [7] present a review of dynamics in complex networks, focusing on the interaction between disease dynamics and human behavioral and social dynamics. After an extensive review of human behavior in response to disease dynamics, the authors briefly describe the complex dynamics found in the literature: well-mixed population networks, where spatial structure can be neglected, and other networks that account for heterogeneity in spatially distributed populations. As controlling mechanisms are implemented, such as social distancing due to 'social contagion', quarantine, non-pharmaceutical interventions and vaccination, adaptive behavior can occur in the human population, which can easily be taken into account in the dynamics formulated by networked populations.

  13. Domain imaging in ferroelectric thin films via channeling-contrast backscattered electron microscopy

    DOE PAGES

    Ihlefeld, Jon F.; Michael, Joseph R.; McKenzie, Bonnie B.; ...

    2016-09-16

    We report that ferroelastic domain walls provide opportunities for deterministically controlling mechanical, optical, electrical, and thermal energy. Domain wall characterization in micro- and nanoscale systems, where the wall spacing may be of the order of 100 nm or less, is presently limited to only a few techniques, such as piezoresponse force microscopy and transmission electron microscopy. These techniques cannot, however, independently characterize domain polarization orientation and domain wall motion in technologically relevant capacitor structures or in a non-destructive manner, which limits their utility. In this work, we show how backscatter scanning electron microscopy utilizing channeling contrast can image the ferroelastic domain structure of ferroelectric films with domain wall spacing as narrow as 10 nm.

  14. The Endogenous GRP78 Interactome in Human Head and Neck Cancers: A Deterministic Role of Cell Surface GRP78 in Cancer Stemness.

    PubMed

    Chen, Hsin-Ying; Chang, Joseph Tung-Chieh; Chien, Kun-Yi; Lee, Yun-Shien; You, Guo-Rung; Cheng, Ann-Joy

    2018-01-11

    Cell surface glucose regulated protein 78 (GRP78), an endoplasmic reticulum (ER) chaperone, has been suggested to be a cancer stem cell marker, but the influence of this molecule on cancer stemness is poorly characterized. In this study, we developed a mass spectrometry platform to detect the endogenous interactome of GRP78 and investigated its role in cancer stemness. The interactome results showed that cell surface GRP78 associates with multiple molecules. The cell population heterogeneity of head and neck cancer cell lines (OECM1, FaDu, and BM2) was investigated according to the cell surface expression levels of GRP78 and the GRP78 interactome protein Progranulin. The four sorted cell groups exhibited distinct cell cycle distributions, asymmetric/symmetric cell divisions, and different relative expression levels of stemness markers. Our results demonstrate that cell surface GRP78 promotes cancer stemness, whereas it drives cells toward a non-stemlike phenotype when it chaperones Progranulin. We conclude that cell surface GRP78 is a chaperone exerting a deterministic influence on cancer stemness.

  15. Multifractal Approach to the Analysis of Crime Dynamics: Results for Burglary in San Francisco

    NASA Astrophysics Data System (ADS)

    Melgarejo, Miguel; Obregon, Nelson

    This paper provides evidence of fractal, multifractal and chaotic behaviors in urban crime by computing key statistical attributes over a long data register of criminal activity. Fractal and multifractal analyses based on power spectrum, Hurst exponent computation, hierarchical power law detection and multifractal spectrum are considered ways to characterize and quantify the footprint of complexity of criminal activity. Moreover, observed chaos analysis is considered a second step to pinpoint the nature of the underlying crime dynamics. This approach is carried out on a long database of burglary activity reported by 10 police districts of San Francisco city. In general, interarrival time processes of criminal activity in San Francisco exhibit fractal and multifractal patterns. The behavior of some of these processes is close to 1/f noise. Therefore, a characterization as deterministic, high-dimensional, chaotic phenomena is viable. Thus, the nature of crime dynamics can be studied from geometric and chaotic perspectives. Our findings support that crime dynamics may be understood from complex systems theories like self-organized criticality or highly optimized tolerance.
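    Of the statistical attributes listed above, the Hurst exponent is the easiest to sketch. The following rescaled-range (R/S) estimator is a textbook simplification, not the paper's multifractal machinery: H near 0.5 indicates uncorrelated increments, while values approaching 1 indicate persistent, 1/f-like behavior.

```python
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Hurst exponent via rescaled-range (R/S) analysis: the slope
    of log(R/S) against log(window size)."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())      # cumulative deviations
            s = w.std()
            if s > 0:
                rs.append((dev.max() - dev.min()) / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(1)
h_white = hurst_rs(rng.standard_normal(4096))  # uncorrelated: H near 0.5
h_walk = hurst_rs(np.cumsum(rng.standard_normal(4096)))  # persistent path
```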

  16. The Stochastic Multi-strain Dengue Model: Analysis of the Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.

    2011-09-01

    Dengue dynamics is well known to be particularly complex, with large fluctuations of disease incidence. An epidemic multi-strain model motivated by dengue fever epidemiology shows deterministic chaos in wide parameter regions. The addition of seasonal forcing, mimicking the vectorial dynamics, and of a low import of infected individuals, which is realistic in the dynamics of infectious disease epidemics, produces complex dynamics and qualitatively good agreement between empirical DHF monitoring data and the model simulation. The addition of noise can explain the fluctuations observed in the empirical data, and for large enough population size the stochastic system can be well described by the deterministic skeleton.
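    The relation between the deterministic skeleton and its noisy counterpart can be illustrated with a minimal single-strain, seasonally forced SIR sketch (not the paper's multi-strain model; all parameters are hypothetical). The deterministic variant propagates expected flows, while the stochastic variant draws individual Bernoulli transitions, adding demographic noise that shrinks relative to the deterministic signal as the population grows.

```python
import math
import random

def run_sir(n_pop, beta0, eps, gamma, days, stochastic=False, seed=0):
    """Seasonally forced SIR. Deterministic skeleton: expected flows.
    Stochastic variant: per-individual Bernoulli transitions."""
    rng = random.Random(seed)
    s, i = n_pop - 10, 10
    infected_traj = []
    for t in range(days):
        # seasonal contact rate mimicking vectorial dynamics
        beta = beta0 * (1.0 + eps * math.cos(2.0 * math.pi * t / 365.0)) / n_pop
        p_inf = 1.0 - math.exp(-beta * i)   # per-susceptible infection prob.
        p_rec = 1.0 - math.exp(-gamma)      # per-infective recovery prob.
        if stochastic:
            new_inf = sum(rng.random() < p_inf for _ in range(int(s)))
            new_rec = sum(rng.random() < p_rec for _ in range(int(i)))
        else:
            new_inf, new_rec = s * p_inf, i * p_rec
        s, i = s - new_inf, i + new_inf - new_rec
        infected_traj.append(i)
    return infected_traj

det = run_sir(10000, beta0=0.3, eps=0.1, gamma=0.1, days=200)
sto = run_sir(10000, beta0=0.3, eps=0.1, gamma=0.1, days=200,
              stochastic=True, seed=1)
```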

  17. On the dimension of complex responses in nonlinear structural vibrations

    NASA Astrophysics Data System (ADS)

    Wiebe, R.; Spottswood, S. M.

    2016-07-01

    The ability to accurately model engineering systems under extreme dynamic loads would prove a major breakthrough in many aspects of aerospace, mechanical, and civil engineering. Extreme loads frequently induce both nonlinearities and coupling which increase the complexity of the response and the computational cost of finite element models. Dimension reduction has recently gained traction and promises the ability to distill dynamic responses down to a minimal dimension without sacrificing accuracy. In this context, the dimensionality of a response is related to the number of modes needed in a reduced order model to accurately simulate the response. Thus, an important step is characterizing the dimensionality of complex nonlinear responses of structures. In this work, the dimensionality of the nonlinear response of a post-buckled beam is investigated. Significant detail is dedicated to carefully introducing the experiment, the verification of a finite element model, and the dimensionality estimation algorithm as it is hoped that this system may help serve as a benchmark test case. It is shown that with minor modifications, the method of false nearest neighbors can quantitatively distinguish between the response dimension of various snap-through, non-snap-through, random, and deterministic loads. The state-space dimension of the nonlinear system in question increased from 2-to-10 as the system response moved from simple, low-level harmonic to chaotic snap-through. Beyond the problem studied herein, the techniques developed will serve as a prescriptive guide in developing fast and accurate dimensionally reduced models of nonlinear systems, and eventually as a tool for adaptive dimension-reduction in numerical modeling. 
The results are especially relevant in the aerospace industry for the design of thin structures such as beams, panels, and shells, which are all capable of spatio-temporally complex dynamic responses that are difficult and computationally expensive to model.
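    The method of false nearest neighbors used above admits a compact sketch. This is a simplified Kennel-style criterion, not the authors' modified variant: a neighbor pair is "false" if adding one more delay coordinate blows their separation up by more than a tolerance factor, so the fraction of false neighbors drops to zero once the embedding dimension matches the response dimension.

```python
import numpy as np

def false_nn_fraction(x, dim, tau=1, rtol=5.0):
    """Fraction of false nearest neighbors at embedding dimension
    `dim`: pairs close in dim dimensions that separate strongly when
    the (dim+1)-th delay coordinate is added."""
    n = len(x) - dim * tau
    emb = np.array([x[i:i + dim * tau:tau] for i in range(n)])
    false_count = 0
    for i in range(n):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                       # ignore the self-match
        j = int(np.argmin(d))
        extra = abs(x[i + dim * tau] - x[j + dim * tau])
        if d[j] > 0 and extra / d[j] > rtol:
            false_count += 1
    return false_count / n

t = np.linspace(0.0, 40.0 * np.pi, 2000)
f_sine = false_nn_fraction(np.sin(t), dim=2)      # low-dimensional signal
f_noise = false_nn_fraction(
    np.random.default_rng(2).uniform(0.0, 1.0, 2000), dim=2)  # stochastic
```

A deterministic sine unfolds by dimension 2, so its false-neighbor fraction is near zero there, while white noise never unfolds and keeps a large fraction at any dimension.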

  18. Production scheduling and rescheduling with genetic algorithms.

    PubMed

    Bierwirth, C; Mattfeld, D C

    1999-01-01

    A general model for job shop scheduling is described which applies to static, dynamic and non-deterministic production environments. Next, a Genetic Algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. Thereby, a highly efficient decoding procedure is proposed which strongly improves the quality of schedules. Finally, this technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable run-time costs.
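    The decoding step, where a chromosome is turned into a feasible schedule, is central to the approach above. The sketch below uses a standard operation-based representation with a toy two-job instance and plain random search standing in for the evolutionary loop; it illustrates the decode/evaluate cycle, not the authors' decoding procedure or GA.

```python
import random

# Two toy jobs, each a machine-ordered list of (machine, duration)
# operations. These data are illustrative, not from the paper.
JOBS = [
    [(0, 3), (1, 2)],   # job 0: machine 0, then machine 1
    [(1, 4), (0, 1)],   # job 1: machine 1, then machine 0
]

def decode(chromosome):
    """Operation-based decoding: the k-th occurrence of job j in the
    chromosome schedules job j's k-th operation at the earliest time
    feasible for both the job and its machine. Returns the makespan."""
    next_op = [0] * len(JOBS)
    job_free = [0] * len(JOBS)
    mach_free = {}
    for j in chromosome:
        machine, dur = JOBS[j][next_op[j]]
        start = max(job_free[j], mach_free.get(machine, 0))
        job_free[j] = mach_free[machine] = start + dur
        next_op[j] += 1
    return max(job_free)

def random_search(n_trials=200, seed=3):
    """Stand-in for the GA's evolutionary loop: evaluate decoded
    random chromosomes and keep the best makespan."""
    chromo = [j for j, ops in enumerate(JOBS) for _ in ops]
    rng = random.Random(seed)
    best = decode(chromo)
    for _ in range(n_trials):
        rng.shuffle(chromo)
        best = min(best, decode(chromo))
    return best

best_makespan = random_search()
```

Because every permutation of job identifiers decodes to a feasible schedule, the same decode step works unchanged when operations arrive dynamically or get rescheduled.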

  19. From Astrochemistry to prebiotic chemistry? An hypothetical approach toward Astrobiology

    NASA Astrophysics Data System (ADS)

    Le Sergeant d'Hendecourt, L.; Danger, G.

    2012-12-01

    We present in this paper a general perspective on the evolution of molecular complexity, as observed from an astrophysicist's point of view, and its possible relation to the problem of the origin of life on Earth. Based on the cosmic abundances of the elements and the molecular composition of terrestrial life, we propose that life cannot really be based on other elements. We discuss where the necessary molecular complexity is built up in astrophysical environments, namely within inter/circumstellar solid-state materials known as "grains". Considerations based on non-directed laboratory experiments, which must be further extended into the prebiotic domain, lead to the hypothesis that, while the chemistry at the origin of life may indeed be a rather universal and deterministic phenomenon once molecular complexity is installed, the chemical evolution that generated the first prebiotic reactions involving autoreplication must be treated with a systemic approach, because of the strong contingency imposed by the complex local environment(s) and associated processes in which these chemical systems have evolved.

  20. Transfer of non-Gaussian quantum states of mechanical oscillator to light

    NASA Astrophysics Data System (ADS)

    Filip, Radim; Rakhubovsky, Andrey A.

    2015-11-01

    Non-Gaussian quantum states are key resources for quantum optics with continuous-variable oscillators. The non-Gaussian states can be deterministically prepared by a continuous evolution of the mechanical oscillator isolated in a nonlinear potential. We propose feasible and deterministic transfer of non-Gaussian quantum states of mechanical oscillators to a traveling light beam, using purely all-optical methods. The method relies on only basic feasible and high-quality elements of quantum optics: squeezed states of light, linear optics, homodyne detection, and electro-optical feedforward control of light. By this method, a wide range of novel non-Gaussian states of light can be produced in the future from the mechanical states of levitating particles in optical tweezers, including states necessary for the implementation of an important cubic phase gate.

  1. Fumonisin B1 Toxicity in Grower-Finisher Pigs: A Comparative Analysis of Genetically Engineered Bt Corn and non-Bt Corn by Using Quantitative Dietary Exposure Assessment Modeling

    PubMed Central

    Delgado, James E.; Wolt, Jeffrey D.

    2011-01-01

    In this study, we investigate the long-term exposure (20 weeks) to fumonisin B1 (FB1) in grower-finisher pigs by conducting a quantitative exposure assessment (QEA). Our analytical approach involved both deterministic and semi-stochastic modeling for dietary comparative analyses of FB1 exposures originating from genetically engineered Bacillus thuringiensis (Bt) corn, conventional non-Bt corn, and distiller's dried grains with solubles (DDGS) derived from Bt and/or non-Bt corn. Both the deterministic and the semi-stochastic results demonstrated a distinct difference in FB1 toxicity in feed between Bt corn and non-Bt corn. Semi-stochastic results predicted the lowest FB1 exposure for Bt grain, with a mean of 1.5 mg FB1/kg diet, and the highest FB1 exposure for a diet consisting of non-Bt grain and non-Bt DDGS, with a mean of 7.87 mg FB1/kg diet; the chronic toxicological incipient level of concern is 1.0 mg of FB1/kg of diet. Deterministic results closely mirrored, but tended to slightly underpredict, the mean results of the semi-stochastic analysis. This novel comparative QEA model reveals that diet scenarios in which the source of grain is derived from Bt corn present less potential to induce FB1 toxicity than diets containing non-Bt corn. PMID:21909298
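    The deterministic-versus-semi-stochastic contrast above can be sketched with a toy dietary model. Everything here is a hypothetical placeholder rather than the paper's QEA: the contamination levels, diet fractions, and lognormal parameters are invented, and the roughly threefold concentration of fumonisins in DDGS relative to the source grain is the only domain fact assumed. Because the stochastic distribution is right-skewed, the point estimate sits below the Monte Carlo mean, mirroring the underprediction the abstract notes.

```python
import math
import random
import statistics

LEVEL_OF_CONCERN = 1.0  # mg FB1 per kg diet (chronic incipient level)

def diet_fb1(corn_conc, ddgs_conc, corn_frac=0.6, ddgs_frac=0.2):
    """FB1 concentration of a complete diet from its corn and DDGS
    fractions; DDGS carries ~3x the fumonisin of the source grain."""
    return corn_frac * corn_conc + ddgs_frac * 3.0 * ddgs_conc

# Deterministic: single point estimates for grain contamination.
det = diet_fb1(2.0, 2.0)

# Semi-stochastic: sample grain contamination from a lognormal whose
# median equals the deterministic point estimate.
rng = random.Random(42)
samples = [
    diet_fb1(rng.lognormvariate(math.log(2.0), 0.5),
             rng.lognormvariate(math.log(2.0), 0.5))
    for _ in range(20000)
]
mc_mean = statistics.fmean(samples)
frac_over = sum(s > LEVEL_OF_CONCERN for s in samples) / len(samples)
```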

  2. First Order Reliability Application and Verification Methods for Semistatic Structures

    NASA Technical Reports Server (NTRS)

    Verderaime, Vincent

    1994-01-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test technique to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.
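    The link between the safety index and the conventional safety factor can be shown with the classical first-order expression for normally distributed resistance R and stress S. The strength/stress statistics below are hypothetical, and this is the textbook formulation rather than the paper's full method with uncertainty-error terms.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def safety_index(mu_r, sd_r, mu_s, sd_s):
    """First-order safety index for normal R and S:
    beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2)."""
    return (mu_r - mu_s) / math.hypot(sd_r, sd_s)

# Hypothetical strength/stress statistics (same units).
beta = safety_index(mu_r=60.0, sd_r=4.0, mu_s=40.0, sd_s=3.0)
reliability = phi(beta)     # probability that resistance exceeds stress
central_sf = 60.0 / 40.0    # the traditional (central) safety factor
```

The same beta can be read either as a reliability (via the normal CDF) or back-solved for the factor that a deterministic stress analysis would apply, which is the bridge the abstract describes.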

  3. Quasi-Static Probabilistic Structural Analyses Process and Criteria

    NASA Technical Reports Server (NTRS)

    Goldberg, B.; Verderaime, V.

    1999-01-01

    Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insight and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout prevailing deterministic stress processes leads to a safety factor in statistical format which, integrated into the safety index, provides a safety factor and first order reliability relationship. The embedded safety factor in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with the traditional safety factor methods and NASA Std. 5001 criteria.

  4. Markov Logic Networks in the Analysis of Genetic Data

    PubMed Central

    Sakhanenko, Nikita A.

    2010-01-01

    Complex, non-additive genetic interactions are common and can be critical in determining phenotypes. Genome-wide association studies (GWAS) and similar statistical studies of linkage data, however, assume additive models of gene interactions in looking for genotype-phenotype associations. These statistical methods view the compound effects of multiple genes on a phenotype as a sum of influences of each gene and often miss a substantial part of the heritable effect. Such methods do not use any biological knowledge about underlying mechanisms. Modeling approaches from the artificial intelligence (AI) field that incorporate deterministic knowledge into models to perform statistical analysis can be applied to include prior knowledge in genetic analysis. We chose to use the most general such approach, Markov Logic Networks (MLNs), for combining deterministic knowledge with statistical analysis. Using simple, logistic regression-type MLNs we can replicate the results of traditional statistical methods, but we also show that we are able to go beyond finding independent markers linked to a phenotype by using joint inference without an independence assumption. The method is applied to genetic data on yeast sporulation, a complex phenotype with gene interactions. In addition to detecting all of the previously identified loci associated with sporulation, our method identifies four loci with smaller effects. Since their effect on sporulation is small, these four loci were not detected with methods that do not account for dependence between markers due to gene interactions. We show how gene interactions can be detected using more complex models, which can be used as a general framework for incorporating systems biology with genetics. PMID:20958249

  5. Taking a gamble or playing by the rules: Dissociable prefrontal systems implicated in probabilistic versus deterministic rule-based decisions

    PubMed Central

    Bhanji, Jamil P.; Beer, Jennifer S.; Bunge, Silvia A.

    2014-01-01

    A decision may be difficult because complex information processing is required to evaluate choices according to deterministic decision rules and/or because it is not certain which choice will lead to the best outcome in a probabilistic context. Factors that tax decision making such as decision rule complexity and low decision certainty should be disambiguated for a more complete understanding of the decision making process. Previous studies have examined the brain regions that are modulated by decision rule complexity or by decision certainty but have not examined these factors together in the context of a single task or study. In the present functional magnetic resonance imaging study, both decision rule complexity and decision certainty were varied in comparable decision tasks. Further, the level of certainty about which choice to make (choice certainty) was varied separately from certainty about the final outcome resulting from a choice (outcome certainty). Lateral prefrontal cortex, dorsal anterior cingulate cortex, and bilateral anterior insula were modulated by decision rule complexity. Anterior insula was engaged more strongly by low than high choice certainty decisions, whereas ventromedial prefrontal cortex showed the opposite pattern. These regions showed no effect of the independent manipulation of outcome certainty. The results disambiguate the influence of decision rule complexity, choice certainty, and outcome certainty on activity in diverse brain regions that have been implicated in decision making. Lateral prefrontal cortex plays a key role in implementing deterministic decision rules, ventromedial prefrontal cortex in probabilistic rules, and anterior insula in both. PMID:19781652

  6. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using an SROM, a low-dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
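    A minimal sketch of the SROM idea, with quantile-based support points and equal weights; an actual SROM optimizes both the points and the probabilities against target statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)  # target random element

m = 5  # SROM size: the continuous variable is replaced by 5 weighted points
x = np.quantile(samples, (np.arange(m) + 0.5) / m)  # support points at quantiles
p = np.full(m, 1.0 / m)  # equal weights here; a real SROM optimizes p as well

# Expectations become finite sums, so a downstream deterministic solver would be
# called only m times -- the non-intrusive property the abstract refers to.
srom_mean = np.sum(p * x)
srom_var = np.sum(p * x ** 2) - srom_mean ** 2
```

    Even this crude five-point surrogate reproduces the mean of the continuous distribution to within a few percent, which illustrates how the stochastic problem collapses to a small number of deterministic evaluations.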

  7. Ecosystems Biology Approaches To Determine Key Fitness Traits of Soil Microorganisms

    NASA Astrophysics Data System (ADS)

    Brodie, E.; Zhalnina, K.; Karaoz, U.; Cho, H.; Nuccio, E. E.; Shi, S.; Lipton, M. S.; Zhou, J.; Pett-Ridge, J.; Northen, T.; Firestone, M.

    2014-12-01

    Theoretical approaches such as trait-based modeling represent powerful tools to explain and perhaps predict complex patterns in microbial distribution and function across environmental gradients in space and time. These models are mostly deterministic and, where available, are built upon a detailed understanding of microbial physiology and response to environmental factors. However, as most soil microorganisms have not been cultivated, our understanding of the majority is limited to insights from environmental 'omic information. Information gleaned from 'omic studies of complex systems should be regarded as providing hypotheses, and these hypotheses should be tested under controlled laboratory conditions if they are to be propagated into deterministic models. In a semi-arid Mediterranean grassland system we are attempting to dissect microbial communities into functional guilds with defined physiological traits, and are using a range of 'omics approaches to characterize their metabolic potential and niche preference. Initially, two physiologically relevant time points (peak plant activity and prior to wet-up) were sampled and metagenomes were sequenced deeply (600-900 Gbp). Following assembly, differential coverage and nucleotide frequency binning were carried out to yield draft genomes. In addition, using a range of cultivation media, we have isolated a broad range of bacteria representing abundant bacterial genotypes, and with genome sequences of almost 40 isolates we are testing genomic predictions regarding growth rate, temperature and substrate utilization in vitro. This presentation will discuss the opportunities and challenges in parameterizing microbial functional guilds from environmental 'omic information for use in trait-based models.

  8. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model

    PubMed Central

    Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.

    2018-01-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183
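    The contrast can be sketched schematically (not the authors' exact equations): deterministic logistic growth of a variant under selection, with a hypothetical establishment lag standing in for the stochastic delay before a new mutant begins deterministic growth:

```python
import math

def logistic_freq(x0, s, t):
    """Deterministic frequency at time t of a variant with selection coefficient s."""
    g = math.exp(s * t)
    return x0 * g / (1.0 - x0 + x0 * g)

def delayed_freq(x0, s, t, lag):
    """Delay-deterministic variant: growth only begins after an establishment lag."""
    return x0 if t < lag else logistic_freq(x0, s, t - lag)

x_plain = logistic_freq(1e-3, 0.05, 200.0)            # plain deterministic model
x_delay = delayed_freq(1e-3, 0.05, 200.0, lag=20.0)   # hypothetical 20-generation lag
```

    The undelayed model overstates how far the variant has risen, which is the qualitative error the delay is meant to correct; the lag value here is illustrative, whereas the paper derives it from the population dynamics.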

  9. Performance evaluation of a distance learning program.

    PubMed

    Dailey, D J; Eno, K R; Brinkley, J F

    1994-01-01

    This paper presents a performance metric which uses a single number to characterize the response time for a non-deterministic client-server application operating over the Internet. When applied to a Macintosh-based distance learning application called the Digital Anatomist Browser, the metric allowed us to observe that "A typical student doing a typical mix of Browser commands on a typical data set will experience the same delay if they use a slow Macintosh on a local network or a fast Macintosh on the other side of the country accessing the data over the Internet." The methodology presented is applicable to other client-server applications that are rapidly appearing on the Internet.
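    A single-number metric of this kind can be sketched as a usage-weighted mean of per-command response times; the command names, mix, and timings below are hypothetical, not the Browser's measured values:

```python
# Hypothetical command mix (usage frequencies) and per-command response times (s).
MIX = {"load_image": 0.25, "rotate": 0.40, "query_label": 0.35}
SLOW_MAC_LOCAL = {"load_image": 2.0, "rotate": 1.5, "query_label": 0.8}
FAST_MAC_REMOTE = {"load_image": 2.4, "rotate": 1.1, "query_label": 0.9}

def typical_delay(mix, times):
    """Single-number metric: expected response time over the typical command mix."""
    return sum(mix[cmd] * times[cmd] for cmd in mix)

d_local = typical_delay(MIX, SLOW_MAC_LOCAL)     # ~1.38 s
d_remote = typical_delay(MIX, FAST_MAC_REMOTE)   # ~1.36 s
```

    In this made-up example the slow local setup and the fast remote one collapse to nearly the same number, which is exactly the kind of comparison the record's "typical student, typical mix" metric enables.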

  10. Efficient analysis of stochastic gene dynamics in the non-adiabatic regime using piecewise deterministic Markov processes

    PubMed Central

    2018-01-01

    Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. In addition to slow chromatin and/or DNA looping dynamics, one source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analysing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying chemical master equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Last, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site. PMID:29386401
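    A minimal PDMP sketch: a two-state promoter with constant switching rates, with the protein level flowing deterministically between stochastic flips. (In the titration models above the switching rates would instead depend on the deterministically evolving transcription-factor level; the rate constants here are illustrative.)

```python
import math, random

random.seed(1)

# Two-state promoter: slow stochastic switching, deterministic protein dynamics.
K_ON, K_OFF = 0.02, 0.02    # slow (non-adiabatic) promoter switching rates
BETA, DELTA = 5.0, 0.1      # production rate when ON, first-order degradation

def simulate_pdmp(t_end):
    """Evolve protein level x deterministically between stochastic promoter flips."""
    t, x, state = 0.0, 0.0, 0
    while t < t_end:
        tau = min(random.expovariate(K_ON if state == 0 else K_OFF), t_end - t)
        # Deterministic flow between jumps: dx/dt = BETA*state - DELTA*x,
        # which relaxes exponentially toward x_inf = BETA*state/DELTA.
        x_inf = BETA * state / DELTA
        x = x_inf + (x - x_inf) * math.exp(-DELTA * tau)
        t += tau
        state = 1 - state
    return x

x_final = simulate_pdmp(1000.0)   # bounded by BETA/DELTA = 50
```

    Because switching is slow relative to relaxation, the protein level spends long stretches near 0 or near BETA/DELTA, the hallmark of the non-adiabatic regime.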

  11. Watching cellular machinery in action, one molecule at a time.

    PubMed

    Monachino, Enrico; Spenkelink, Lisanne M; van Oijen, Antoine M

    2017-01-02

    Single-molecule manipulation and imaging techniques have become important elements of the biologist's toolkit to gain mechanistic insights into cellular processes. By removing ensemble averaging, single-molecule methods provide unique access to the dynamic behavior of biomolecules. Recently, the use of these approaches has expanded to the study of complex multiprotein systems and has enabled detailed characterization of the behavior of individual molecules inside living cells. In this review, we provide an overview of the various force- and fluorescence-based single-molecule methods with applications both in vitro and in vivo, highlighting these advances by describing their applications in studies on cytoskeletal motors and DNA replication. We also discuss how single-molecule approaches have increased our understanding of the dynamic behavior of complex multiprotein systems. These methods have shown that the behavior of multicomponent protein complexes is highly stochastic and less linear and deterministic than previously thought. Further development of single-molecule tools will help to elucidate the molecular dynamics of these complex systems both inside the cell and in solutions with purified components. © 2017 Monachino et al.

  12. Single-photon non-linear optics with a quantum dot in a waveguide

    NASA Astrophysics Data System (ADS)

    Javadi, A.; Söllner, I.; Arcari, M.; Hansen, S. Lindskov; Midolo, L.; Mahmoodian, S.; Kiršanskė, G.; Pregnolato, T.; Lee, E. H.; Song, J. D.; Stobbe, S.; Lodahl, P.

    2015-10-01

    Strong non-linear interactions between photons enable logic operations for both classical and quantum-information technology. Unfortunately, non-linear interactions are usually feeble and therefore all-optical logic gates tend to be inefficient. A quantum emitter deterministically coupled to a propagating mode fundamentally changes the situation, since each photon inevitably interacts with the emitter, and highly correlated many-photon states may be created. Here we show that a single quantum dot in a photonic-crystal waveguide can be used as a giant non-linearity sensitive at the single-photon level. The non-linear response is revealed from the intensity and quantum statistics of the scattered photons, and contains contributions from an entangled photon-photon bound state. The quantum non-linearity will find immediate applications for deterministic Bell-state measurements and single-photon transistors and paves the way to scalable waveguide-based photonic quantum-computing architectures.

  13. Modeling and Simulation for Mission Operations Work System Design

    NASA Technical Reports Server (NTRS)

    Sierhuis, Maarten; Clancey, William J.; Seah, Chin; Trimble, Jay P.; Sims, Michael H.

    2003-01-01

    Work System analysis and design is complex and non-deterministic. In this paper we describe Brahms, a multiagent modeling and simulation environment for designing complex interactions in human-machine systems. Brahms was originally conceived as a business process design tool that simulates work practices, including social systems of work. We describe our modeling and simulation method for mission operations work systems design, based on a research case study in which we used Brahms to design mission operations for a proposed discovery mission to the Moon. We then describe the results of an actual method application project: the Brahms Mars Exploration Rover. Space mission operations are similar to operations of traditional organizations; we show that the application of Brahms for space mission operations design is relevant and transferable to other types of business processes in organizations.

  14. A new method for predicting response in complex linear systems. II. [under random or deterministic steady state excitation

    NASA Technical Reports Server (NTRS)

    Bogdanoff, J. L.; Kayser, K.; Krieger, W.

    1977-01-01

    The paper describes convergence and response studies in the low frequency range of complex systems, particularly with low values of damping of different distributions, and reports on the modification of the relaxation procedure required under these conditions. A new method is presented for response estimation in complex lumped parameter linear systems under random or deterministic steady state excitation. The essence of the method is the use of relaxation procedures with a suitable error function to find the estimated response; natural frequencies and normal modes are not computed. For a 45 degree of freedom system, and two relaxation procedures, convergence studies and frequency response estimates were performed. The low frequency studies are considered in the framework of earlier studies (Kayser and Bogdanoff, 1975) involving the mid to high frequency range.
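    The flavor of the approach can be shown with a generic Gauss-Seidel relaxation on the dynamic stiffness matrix of a hypothetical 3-degree-of-freedom chain (not the paper's 45-DOF system or its particular error function). Note that no natural frequencies or normal modes are computed:

```python
import numpy as np

# Hypothetical 3-DOF spring-mass chain with light proportional damping.
M = np.eye(3)
K = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])
C = 0.01 * K                        # low, proportionally distributed damping
F = np.array([1.0, 0.0, 0.0])       # steady-state forcing amplitude
w = 0.3                             # excitation frequency (rad/s)

A = K - w**2 * M + 1j * w * C       # dynamic stiffness matrix
x = np.zeros(3, dtype=complex)
for _ in range(200):                # Gauss-Seidel relaxation sweeps
    for i in range(3):
        x[i] = (F[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]

residual = np.linalg.norm(A @ x - F)   # converged response; no modes computed
```

    The iteration sweeps through the equations of motion directly, which is the sense in which relaxation sidesteps the eigenvalue problem; the paper's contribution concerns making such sweeps converge at low frequency and low damping, which this toy case does not probe.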

  15. Resolution of Infinite-Loop in Hyperincursive and Nonlocal Cellular Automata: Introduction to Slime Mold Computing

    NASA Astrophysics Data System (ADS)

    Aono, Masashi; Gunji, Yukio-Pegio

    2004-08-01

    How can non-algorithmic/non-deterministic computational syntax be computed? "The hyperincursive system" introduced by Dubois is an anticipatory system embracing the contradiction/uncertainty. Although it may provide a novel viewpoint for the understanding of complex systems, conventional digital computers cannot run faithfully as the hyperincursive computational syntax specifies, in a strict sense. Then is it an imaginary story? In this paper we try to argue that it is not. We show that a model of complex systems "Elementary Conflictable Cellular Automata (ECCA)" proposed by Aono and Gunji is embracing the hyperincursivity and the nonlocality. ECCA is based on locality-only type settings basically as well as other CA models, and/but at the same time, each cell is required to refer to globality-dominant regularity. Due to this contradictory locality-globality loop, the time evolution equation specifies that the system reaches the deadlock/infinite-loop. However, we show that there is a possibility of the resolution of these problems if the computing system has parallel and/but non-distributed property like an amoeboid organism. This paper is an introduction to "the slime mold computing" that is an attempt to cultivate an unconventional notion of computation.

  16. Do rational numbers play a role in selection for stochasticity?

    PubMed

    Sinclair, Robert

    2014-01-01

    When a given tissue must, to be able to perform its various functions, consist of different cell types, each fairly evenly distributed and with specific probabilities, then there are at least two quite different developmental mechanisms which might achieve the desired result. Let us begin with the case of two cell types, and first imagine that the proportion of numbers of cells of these types should be 1:3. Clearly, a regular structure composed of repeating units of four cells, three of which are of the dominant type, will easily satisfy the requirements, and a deterministic mechanism may lend itself to the task. What if, however, the proportion should be 10:33? The same simple, deterministic approach would now require a structure of repeating units of 43 cells, and this certainly seems to require a far more complex and potentially prohibitive deterministic developmental program. Stochastic development, replacing regular units with random distributions of given densities, might not be evolutionarily competitive in comparison with the deterministic program when the proportions should be 1:3, but it has the property that, whatever developmental mechanism underlies it, its complexity does not need to depend very much upon target cell densities at all. We are immediately led to speculate that proportions which correspond to fractions with large denominators (such as the 33 of 10/33) may be more easily achieved by stochastic developmental programs than by deterministic ones, and this is the core of our thesis: that stochastic development may tend to occur more often in cases involving rational numbers with large denominators. To be imprecise: that simple rationality and determinism belong together, as do irrationality and randomness.
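    The arithmetic in the argument above reduces to the size of the minimal repeating unit for a target proportion a:b, namely (a + b)/gcd(a, b):

```python
from math import gcd

def repeating_unit_size(a, b):
    """Cells per minimal repeating unit realizing an a:b mix of two cell types."""
    return (a + b) // gcd(a, b)

print(repeating_unit_size(1, 3))    # 4 cells, as in the 1:3 example
print(repeating_unit_size(10, 33))  # 43 cells, as in the 10:33 example
```

    The thesis is that a deterministic program's complexity grows with this unit size, while a stochastic program's does not.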

  17. Non-uniform multivariate embedding to assess the information transfer in cardiovascular and cardiorespiratory variability series.

    PubMed

    Faes, Luca; Nollo, Giandomenico; Porta, Alberto

    2012-03-01

    The complexity of the short-term cardiovascular control prompts for the introduction of multivariate (MV) nonlinear time series analysis methods to assess directional interactions reflecting the underlying regulatory mechanisms. This study introduces a new approach for the detection of nonlinear Granger causality in MV time series, based on embedding the series by a sequential, non-uniform procedure, and on estimating the information flow from one series to another by means of the corrected conditional entropy. The approach is validated on short realizations of linear stochastic and nonlinear deterministic processes, and then evaluated on heart period, systolic arterial pressure and respiration variability series measured from healthy humans in the resting supine position and in the upright position after head-up tilt. Copyright © 2011 Elsevier Ltd. All rights reserved.
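    A toy version of the underlying idea, using a plug-in conditional-entropy estimate of information flow between symbolic series; the paper's method uses sequential non-uniform embedding and a corrected conditional entropy estimator, neither of which is reproduced here:

```python
import math, random
from collections import Counter

def entropy(seq):
    """Plug-in Shannon entropy (nats) of a sequence of hashable symbols."""
    n = len(seq)
    return -sum(c / n * math.log(c / n) for c in Counter(seq).values())

def info_flow(x, y):
    """H(y_t | y_{t-1}) - H(y_t | y_{t-1}, x_{t-1}): reduction in uncertainty
    about y's next value gained by also conditioning on x's past."""
    h_y = entropy(list(zip(y[1:], y[:-1]))) - entropy(y[:-1])
    h_yx = entropy(list(zip(y[1:], y[:-1], x[:-1]))) - entropy(list(zip(y[:-1], x[:-1])))
    return h_y - h_yx

random.seed(4)
x = [random.randint(0, 1) for _ in range(4000)]
y_coupled = [0] + x[:-1]                             # y is x delayed by one step
y_indep = [random.randint(0, 1) for _ in range(4000)]

flow_coupled = info_flow(x, y_coupled)   # near ln 2: x's past determines y
flow_indep = info_flow(x, y_indep)       # near zero (small estimator bias)
```

    The directional asymmetry of such estimates is what allows causality statements like "respiration drives heart period" rather than mere correlation.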

  18. First-order reliability application and verification methods for semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-11-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
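    The "solve for a design factor that satisfies the specified reliability" step can be sketched for the simplest case of normal resistance and stress with hypothetical coefficients of variation; the paper additionally folds accumulative and propagation uncertainty-error terms into the safety-index expression:

```python
import math

def design_factor(beta, v_r, v_s):
    """Mean design factor F = mu_R/mu_S giving safety index beta for normally
    distributed resistance R and stress S with coefficients of variation v_r, v_s;
    solves beta = (F - 1) / sqrt(v_r**2 * F**2 + v_s**2) for F."""
    a = 1.0 - (beta * v_r) ** 2
    c = 1.0 - (beta * v_s) ** 2
    return (2.0 + math.sqrt(4.0 - 4.0 * a * c)) / (2.0 * a)

f = design_factor(3.0, 0.08, 0.10)   # hypothetical material and load scatter
```

    The resulting factor (about 1.46 here) is then used exactly where a conventional safety factor would appear in stress analyses, which is why the method stays compatible with existing verification practice.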

  19. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    PubMed

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.

  20. Coexistence and chaos in complex ecologies [rapid communication]

    NASA Astrophysics Data System (ADS)

    Sprott, J. C.; Vano, J. A.; Wildenberg, J. C.; Anderson, M. B.; Noel, J. K.

    2005-02-01

    Many complex dynamical systems in ecology, economics, neurology, and elsewhere, in which agents compete for limited resources, exhibit apparently chaotic fluctuations. This Letter proposes a purely deterministic mechanism for evolving robustly but weakly chaotic systems that exhibit adaptation, self-organization, sporadic volatility, and punctuated equilibria.
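    The claim that purely deterministic dynamics can generate apparently random fluctuations is easy to illustrate with the simplest textbook example, the fully chaotic logistic map (not the Letter's multispecies model): two populations started a negligible distance apart diverge to order-one separation.

```python
def logistic(x, r=4.0):
    """Fully chaotic logistic map: deterministic, yet erratic trajectories."""
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10        # two almost identical initial conditions
for _ in range(60):
    a, b = logistic(a), logistic(b)
separation = abs(a - b)        # grows from 1e-10 to order one
```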

  1. Deterministic photon bias in speckle imaging

    NASA Technical Reports Server (NTRS)

    Beletic, James W.

    1989-01-01

    A method for determining photon bias terms in speckle imaging is presented, and photon bias is shown to be a deterministic quantity that can be calculated without the use of the expectation operator. The quantities obtained are found to be identical to previous results. The present results extend photon bias calculations to the important case of the bispectrum where photon events are assigned different weights, in which regime the bias is a frequency-dependent complex quantity that must be calculated for each frame.
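    The deterministic character of photon bias is easiest to see in the power spectrum (a simplified sketch, not the weighted-bispectrum case the record treats): for a frame containing N photon events, the raw power spectrum carries an additive bias equal to N at every non-zero frequency, so it can be subtracted exactly rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_frames, n_photons = 64, 400, 100   # hypothetical detector and frame count

# Photon-limited frames of a featureless (flat) source: any power at non-zero
# frequency in the averaged spectrum is pure photon bias.
power = np.zeros(n_pix)
for _ in range(n_frames):
    counts = np.bincount(rng.integers(0, n_pix, n_photons), minlength=n_pix)
    power += np.abs(np.fft.fft(counts)) ** 2
power /= n_frames

bias_estimate = power[1:].mean()   # sits at n_photons, the deterministic bias
```

    Because each frame contains exactly n_photons events, the bias is known per frame without invoking any expectation, which is the spirit of the record's result; the bispectrum case adds frequency dependence and complex-valued bias terms.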

  2. Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TX; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2012-03-27

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.

  3. Efficient analysis of stochastic gene dynamics in the non-adiabatic regime using piecewise deterministic Markov processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Yen Ting; Buchler, Nicolas E.

    Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. In addition to slow chromatin and/or DNA looping dynamics, one source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analysing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying chemical master equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Finally, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site.

  4. Efficient analysis of stochastic gene dynamics in the non-adiabatic regime using piecewise deterministic Markov processes

    DOE PAGES

    Lin, Yen Ting; Buchler, Nicolas E.

    2018-01-31

    Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. In addition to slow chromatin and/or DNA looping dynamics, one source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analysing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying chemical master equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Finally, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site.

  5. Deterministic chaos and fractal complexity in the dynamics of cardiovascular behavior: perspectives on a new frontier.

    PubMed

    Sharma, Vijay

    2009-09-10

    Physiological systems such as the cardiovascular system are capable of five kinds of behavior: equilibrium, periodicity, quasi-periodicity, deterministic chaos and random behavior. Systems adopt one or more of these behaviors depending on the function they have evolved to perform. The emerging mathematical concepts of fractal mathematics and chaos theory are extending our ability to study physiological behavior. Fractal geometry is observed in the physical structure of pathways, networks and macroscopic structures such as the vasculature and the His-Purkinje network of the heart. Fractal structure is also observed in processes in time, such as heart rate variability. Chaos theory describes the underlying dynamics of the system, and chaotic behavior is also observed at many levels, from effector molecules in the cell to heart function and blood pressure. This review discusses the role of fractal structure and chaos in the cardiovascular system at the level of the heart and blood vessels, and at the cellular level. Key functional consequences of these phenomena are highlighted, and a perspective is provided on the possible evolutionary origins of chaotic behavior and fractal structure. The discussion is non-mathematical, with an emphasis on the key underlying concepts.

  6. Deterministic Chaos and Fractal Complexity in the Dynamics of Cardiovascular Behavior: Perspectives on a New Frontier

    PubMed Central

    Sharma, Vijay

    2009-01-01

    Physiological systems such as the cardiovascular system are capable of five kinds of behavior: equilibrium, periodicity, quasi-periodicity, deterministic chaos and random behavior. Systems adopt one or more of these behaviors depending on the function they have evolved to perform. The emerging mathematical concepts of fractal mathematics and chaos theory are extending our ability to study physiological behavior. Fractal geometry is observed in the physical structure of pathways, networks and macroscopic structures such as the vasculature and the His-Purkinje network of the heart. Fractal structure is also observed in processes in time, such as heart rate variability. Chaos theory describes the underlying dynamics of the system, and chaotic behavior is also observed at many levels, from effector molecules in the cell to heart function and blood pressure. This review discusses the role of fractal structure and chaos in the cardiovascular system at the level of the heart and blood vessels, and at the cellular level. Key functional consequences of these phenomena are highlighted, and a perspective is provided on the possible evolutionary origins of chaotic behavior and fractal structure. The discussion is non-mathematical, with an emphasis on the key underlying concepts. PMID:19812706

  7. Transforming Better Babies into Fitter Families: archival resources and the history of American eugenics movement, 1908-1930.

    PubMed

    Selden, Steven

    2005-06-01

    In the early 1920s, determinist conceptions of biology helped to transform Better Babies contests into Fitter Families competitions with a strong commitment to controlled human breeding. While the earlier competitions were concerned with physical and mental standards, the later contests collected data on a broad range of presumed hereditary characters. The complex behaviors thought to be determined by one's heredity included being generous, jealous, and cruel. In today's context, the popular media often interpret advances in molecular genetics in a similarly reductive and determinist fashion. This paper argues that such a narrow interpretation of contemporary biology unnecessarily constrains the public in developing social policies concerning complex social behaviors ranging from crime to intelligence.

  8. About influence of input rate random part of nonstationary queue system on statistical estimates of its macroscopic indicators

    NASA Astrophysics Data System (ADS)

    Korelin, Ivan A.; Porshnev, Sergey V.

    2018-05-01

    A model of a non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ = λdet(t) + λrnd(t), where λdet(t) is a deterministic function of time and λrnd(t) is a random function. The parameters of λdet(t) and λrnd(t) were identified from statistical information on visitor flows collected at various Russian football stadiums. Statistical modeling of the NQS is carried out and the following average dependences are obtained: the length of the queue of requests waiting for service, the average waiting time for service, and the number of visitors who have entered the stadium as a function of time. It is shown that these dependences can be characterized by the following parameters: the number of visitors who had entered by the start of the match; the time required to service all incoming visitors; the maximum value; and the time at which the studied dependence reaches that maximum. The dependences of these parameters on the energy ratio of the deterministic and random components of the input rate are investigated.
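
    The arrival model above lends itself to a compact simulation. The sketch below is a minimal discrete-time illustration, not the authors' model: the bell-shaped λdet(t), the Gaussian λrnd(t), the service rate and all parameter values are invented for illustration.

```python
import math
import random

def poisson(rng, lam):
    # Knuth's method; adequate for the moderate rates used here
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def lambda_det(t, peak=60.0, t0=100.0, width=40.0):
    # Hypothetical bell-shaped deterministic component (requests/min)
    return peak * math.exp(-((t - t0) / width) ** 2)

def simulate_queue(T=200, mu=40.0, sigma=5.0, seed=1):
    """Queue-length trace for input rate lambda(t) = lambda_det(t) + lambda_rnd(t),
    with lambda_rnd drawn as zero-mean Gaussian noise."""
    rng = random.Random(seed)
    q, served_total, trace = 0, 0, []
    for t in range(T):
        lam = max(0.0, lambda_det(t) + rng.gauss(0.0, sigma))
        q += poisson(rng, lam)              # arrivals during this minute
        served = min(q, poisson(rng, mu))   # service capacity this minute
        q -= served
        served_total += served
        trace.append(q)
    return trace, served_total

trace, total = simulate_queue()
```

    Averaging such traces over many seeds would give the kind of average statistical dependences the abstract describes.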

  9. Multiple object tracking with non-unique data-to-object association via generalized hypothesis testing. [tracking several aircraft near each other or ships at sea

    NASA Technical Reports Server (NTRS)

    Porter, D. W.; Lefler, R. M.

    1979-01-01

    A generalized hypothesis testing approach is applied to the problem of tracking several objects where several different associations of data with objects are possible. Such problems occur, for instance, when attempting to distinctly track several aircraft maneuvering near each other or when tracking ships at sea. Conceptually, the problem is solved by first, associating data with objects in a statistically reasonable fashion and then, tracking with a bank of Kalman filters. The objects are assumed to have motion characterized by a fixed but unknown deterministic portion plus a random process portion modeled by a shaping filter. For example, the object might be assumed to have a mean straight line path about which it maneuvers in a random manner. Several hypothesized associations of data with objects are possible because of ambiguity as to which object the data comes from, false alarm/detection errors, and possible uncertainty in the number of objects being tracked. The statistical likelihood function is computed for each possible hypothesized association of data with objects. Then the generalized likelihood is computed by maximizing the likelihood over parameters that define the deterministic motion of the object.
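
    The association step described above can be illustrated with a deliberately simplified sketch: scalar positions, unit-variance Gaussian measurement errors, and a brute-force search over assignments in place of the paper's generalized hypothesis testing machinery. All numbers are invented.

```python
import math
from itertools import permutations

def log_gauss(z, mean, var=1.0):
    # Log-likelihood of a scalar measurement under N(mean, var)
    return -0.5 * (math.log(2 * math.pi * var) + (z - mean) ** 2 / var)

def best_association(measurements, predictions):
    """Brute-force likelihood-based association: score every assignment
    of measurements to objects, keep the most likely one."""
    best, best_ll = None, -math.inf
    for perm in permutations(range(len(predictions))):
        ll = sum(log_gauss(measurements[i], predictions[j])
                 for i, j in enumerate(perm))
        if ll > best_ll:
            best, best_ll = perm, ll
    return best, best_ll

# Two objects predicted near 0.0 and 5.0; measurements arrive unordered
assoc, ll = best_association([5.1, -0.2], [0.0, 5.0])
print(assoc)  # (1, 0): measurement 0 is matched to object 1
```

    In the paper's setting each hypothesis would instead be scored by Kalman-filter innovation likelihoods, maximized over the unknown deterministic motion parameters.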

  10. Review of smoothing methods for enhancement of noisy data from heavy-duty LHD mining machines

    NASA Astrophysics Data System (ADS)

    Wodecki, Jacek; Michalak, Anna; Stefaniak, Paweł

    2018-01-01

    Appropriate analysis of data measured on heavy-duty mining machines is essential for process monitoring, management and optimization. Some classes of machines, for example LHD (load-haul-dump) machines, hauling trucks, drilling/bolting machines etc., are characterized by cyclicity of operations. In those cases, identification of cycles and their segments (in other words, data segmentation) is key to evaluating their performance, which may be very useful from the management point of view, for example by enabling process optimization. However, in many cases such raw signals are contaminated with various artifacts and are in general very noisy, which makes the segmentation task very difficult or even impossible. To deal with that problem, efficient smoothing methods are needed that retain the informative trends in the signals while discarding noise and other undesired non-deterministic components. In this paper the authors present a review of various approaches to diagnostic data smoothing. The described methods can be used in a fast and efficient way, effectively cleaning the signals while preserving the informative deterministic behaviour that is crucial to precise segmentation and other approaches to industrial data analysis.
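
    As a minimal illustration of the kind of smoothing the review surveys (the paper's specific methods are not reproduced here), a centred moving average already shows the trade-off between noise suppression and trend retention; the window length and the toy duty-cycle signal below are illustrative choices.

```python
import random

def moving_average(signal, window=9):
    """Centred moving average; a minimal stand-in for the smoothing
    methods reviewed in the paper (window length is illustrative)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# Toy duty-cycle signal (0/1 operating segments) plus Gaussian noise,
# imitating cyclic LHD operation data
rng = random.Random(0)
raw = [float((i // 50) % 2) + rng.gauss(0.0, 0.2) for i in range(200)]
smooth = moving_average(raw)
```

    After smoothing, simple thresholding of `smooth` would recover the operating segments far more reliably than thresholding `raw`.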

  11. The way to uncover community structure with core and diversity

    NASA Astrophysics Data System (ADS)

    Chang, Y. F.; Han, S. K.; Wang, X. D.

    2018-07-01

    Communities are ubiquitous in nature and society. Individuals that share common properties often self-organize to form communities. Avoiding the shortcomings of high computational complexity, reliance on pre-given information, and unstable results across different runs, in this paper we propose a simple and efficient method to deepen our understanding of the emergence and diversity of communities in complex systems. By introducing rational random selection, our method reveals the hidden deterministic and the normal diverse states of community structure. To demonstrate this method, we test it on real-world systems. The results show that our method can not only detect community structure with high sensitivity and reliability, but also provide instructive information about the hidden deterministic community world and the real, normally diverse community world by identifying the core-community, the real-community, the tide and the diversity. This is of paramount importance in understanding, predicting, and controlling a variety of collective behaviors in complex systems.

  12. Non-Deterministic Dynamic Instability of Composite Shells

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2004-01-01

    A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is an incrementally updated Lagrangian approach. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio, the fiber longitudinal modulus, the dynamic load and the loading rate are the dominant uncertainties, in that order.
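
    A probabilistic buckling evaluation of the kind described can be caricatured by Monte Carlo sampling over uncertain inputs. The response function, the input distributions and every number below are invented for illustration; the paper's actual method combines finite element, composite mechanics and probabilistic analysis codes.

```python
import random

def buckling_load(E, t, rate):
    # Hypothetical response surface (NOT the paper's model): load grows
    # with modulus and thickness, and mildly with loading rate
    return E * t ** 3 * (1.0 + 0.1 * rate)

def probabilistic_buckling(n=10000, rate=1.0, seed=0):
    """Monte Carlo sketch of non-deterministic buckling: sample the
    uncertain inputs and read off low/high probability levels."""
    rng = random.Random(seed)
    loads = []
    for _ in range(n):
        E = rng.gauss(130.0, 6.5)    # fiber modulus, GPa (illustrative 5% CoV)
        t = rng.gauss(0.127, 0.005)  # ply thickness, mm (illustrative)
        loads.append(buckling_load(E, t, rate))
    loads.sort()
    return loads[n // 100], loads[n - n // 100]   # ~1% and ~99% levels

lo, hi = probabilistic_buckling()
```

    Repeating the sampling for several load rates would produce a family of curves analogous to the universal plot described in the abstract.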

  13. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.

  14. Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.

    PubMed

    Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen

    In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria such that linear multi-agent systems with sampled data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses is explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
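
    The Bernoulli packet-loss mechanism described above can be sketched in a few lines. This is a toy simulation, not the paper's controller design: single-integrator agents on a hypothetical four-node ring, with an illustrative gain and drop probability.

```python
import random

def consensus_step(x, neighbors, eps, drop_p, rng):
    """One sampled-data consensus update in which each directed link
    independently drops its packet with probability drop_p (Bernoulli)."""
    new = list(x)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            if rng.random() >= drop_p:        # packet from j reached i
                new[i] += eps * (x[j] - x[i])
    return new

def run(x0, neighbors, eps=0.3, drop_p=0.4, steps=300, seed=0):
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(steps):
        x = consensus_step(x, neighbors, eps, drop_p, rng)
    return x

# Four single-integrator agents on a ring (illustrative topology)
nbrs = [[1, 3], [0, 2], [1, 3], [0, 2]]
x = run([0.0, 1.0, 2.0, 3.0], nbrs)
print(max(x) - min(x))  # spread shrinks toward zero despite losses
```

    Raising `drop_p` or the sampling interval slows the shrinkage of the spread, which is the qualitative interplay the paper's criteria quantify.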

  15. Complex dynamics in ecological time series

    Treesearch

    Peter Turchin; Andrew D. Taylor

    1992-01-01

    Although the possibility of complex dynamical behaviors (limit cycles, quasiperiodic oscillations, and aperiodic chaos) has been recognized theoretically, most ecologists are skeptical of their importance in nature. In this paper we develop a methodology for reconstructing endogenous (or deterministic) dynamics from ecological time series. Our method consists of fitting...

  16. Deterministic transfection drives efficient nonviral reprogramming and uncovers reprogramming barriers.

    PubMed

    Gallego-Perez, Daniel; Otero, Jose J; Czeisler, Catherine; Ma, Junyu; Ortiz, Cristina; Gygli, Patrick; Catacutan, Fay Patsy; Gokozan, Hamza Numan; Cowgill, Aaron; Sherwood, Thomas; Ghatak, Subhadip; Malkoc, Veysi; Zhao, Xi; Liao, Wei-Ching; Gnyawali, Surya; Wang, Xinmei; Adler, Andrew F; Leong, Kam; Wulff, Brian; Wilgus, Traci A; Askwith, Candice; Khanna, Savita; Rink, Cameron; Sen, Chandan K; Lee, L James

    2016-02-01

    Safety concerns and/or the stochastic nature of current transduction approaches have hampered nuclear reprogramming's clinical translation. We report a novel non-viral nanotechnology-based platform permitting deterministic large-scale transfection with single-cell resolution. The superior capabilities of our technology are demonstrated by modification of the well-established direct neuronal reprogramming paradigm using overexpression of the transcription factors Brn2, Ascl1, and Myt1l (BAM). Reprogramming efficiencies were comparable to viral methodologies (up to ~9-12%) without the constraints of capsid size and with the ability to control plasmid dosage, in addition to showing superior performance relative to existing non-viral methods. Furthermore, increased neuronal complexity could be tailored by varying the BAM ratio and by adding additional proneural genes to the BAM cocktail. In addition, high-throughput NEP allowed easy interrogation of the reprogramming process. We discovered that BAM-mediated reprogramming is regulated by Ascl1 dosage and the S-phase cyclin CCNA2, and that some induced neurons passed through a nestin-positive cell stage. In the field of regenerative medicine, the ability to direct cell fate by nuclear reprogramming is an important facet in terms of clinical application. In this article, the authors describe their novel technique of cell reprogramming through overexpression of the transcription factors Brn2, Ascl1, and Myt1l (BAM) by in situ electroporation through nanochannels. This new technique could provide a platform for further future designs. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Hierarchical cluster-based partial least squares regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models.

    PubMed

    Tøndel, Kristin; Indahl, Ulf G; Gjuvsland, Arne B; Vik, Jon Olav; Hunter, Peter; Omholt, Stig W; Martens, Harald

    2011-06-01

    Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. 
HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems.
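
    The cluster-then-regress idea behind HC-PLSR can be caricatured in one dimension. The sketch below substitutes plain (hard) k-means for fuzzy C-means and ordinary least squares lines for PLS regression, so it is only a structural illustration of the approach, applied to an invented piecewise-linear response.

```python
def kmeans_1d(xs, k=2, iters=20):
    # Plain (hard) 1-D k-means stands in for fuzzy C-means;
    # the data extremes give a deterministic initialisation
    centers = [min(xs), max(xs)] if k == 2 else sorted(xs)[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda c: abs(x - centers[c]))].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def fit_line(pts):
    # Ordinary least squares line through (x, y) pairs
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    slope = sxy / sxx
    return slope, my - slope * mx

def local_regression(data, k=2):
    """Cluster the inputs, then fit one linear model per cluster:
    a one-dimensional caricature of the hierarchical HC-PLSR idea."""
    centers = kmeans_1d([x for x, _ in data], k)
    models = {}
    for c in centers:
        pts = [(x, y) for x, y in data
               if abs(x - c) == min(abs(x - cc) for cc in centers)]
        models[c] = fit_line(pts)
    return models

# Invented piecewise-linear response: slope 1 below zero, slope 3 above
data = [(i / 10, i / 10 if i < 0 else 3 * i / 10) for i in range(-50, 51)]
models = local_regression(data)
```

    A single global line fitted to this data would split the difference between the two regimes; the local models recover each slope, which is the advantage the paper reports for nonlinear and non-monotone input-output maps.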

  18. Hierarchical Cluster-based Partial Least Squares Regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models

    PubMed Central

    2011-01-01

    Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. 
Conclusions HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems. PMID:21627852

  19. Analyzing simulation-based PRA data through traditional and topological clustering: A BWR station blackout case study

    DOE PAGES

    Maljovec, D.; Liu, S.; Wang, B.; ...

    2015-07-14

    Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.

  20. Large deviations and mixing for dissipative PDEs with unbounded random kicks

    NASA Astrophysics Data System (ADS)

    Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.

    2018-02-01

    We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer’s criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroup, and a coupling argument. These tools combined together constitute a new approach to LDP for infinite-dimensional processes without strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.

  1. Coupled Effects of non-Newtonian Rheology and Aperture Variability on Flow in a Single Fracture

    NASA Astrophysics Data System (ADS)

    Di Federico, V.; Felisa, G.; Lauriola, I.; Longo, S.

    2017-12-01

    Modeling of non-Newtonian flow in fractured media is essential in hydraulic fracturing and drilling operations, EOR, environmental remediation, and for understanding magma intrusions. An important step in the modeling effort is a detailed understanding of flow in a single fracture, as the fracture aperture is spatially variable. A large bibliography exists on Newtonian and non-Newtonian flow in variable aperture fractures. Ultimately, stochastic or deterministic modeling leads to the flowrate under a given pressure gradient as a function of the parameters describing the aperture variability and the fluid rheology. Typically, analytical or numerical studies are performed adopting a power-law (Ostwald-de Waele) model. Yet the power-law model, routinely used e.g. for hydro-fracturing modeling, does not characterize real fluids at low and high shear rates. A more appropriate rheological model is provided by e.g. the four-parameter Carreau constitutive equation, which is in turn approximated by the more tractable truncated power-law model. Moreover, fluids of interest may exhibit yield stress, which requires the Bingham or Herschel-Bulkley model. This study employs different rheological models in the context of flow in variable aperture fractures, with the aim of understanding the coupled effect of rheology and aperture spatial variability with a simplified model. The aperture variation, modeled within a stochastic or deterministic framework, is taken to be one-dimensional and either i) perpendicular or ii) parallel to the flow direction; for stochastic modeling, the influence of different distribution functions is examined. Results for the different rheological models are compared with those obtained for the pure power law. Adoption of the latter model leads to overestimation of the flowrate, more so for large aperture variability. The presence of yield stress also induces significant changes in the resulting flowrate for an assigned external pressure gradient.
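
    The rheological models contrasted above differ most visibly at the extremes of shear rate. The sketch below compares the Ostwald-de Waele power law with the four-parameter Carreau equation; all parameter values are illustrative, not fitted to any fluid.

```python
def power_law(gdot, m=1.0, n=0.5):
    """Ostwald-de Waele apparent viscosity: eta = m * gdot**(n - 1)."""
    return m * gdot ** (n - 1)

def carreau(gdot, mu0=10.0, mu_inf=0.01, lam=1.0, n=0.5):
    """Four-parameter Carreau apparent viscosity; unlike the pure power
    law it is bounded at both low and high shear rates."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * gdot) ** 2) ** ((n - 1) / 2)

# The power law blows up as the shear rate -> 0; Carreau plateaus at mu0
low_shear = 1e-6
print(power_law(low_shear), carreau(low_shear))
```

    The unbounded low-shear viscosity of the power law is exactly the pathology behind its overestimation of flowrate in wide-aperture regions noted in the abstract.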

  2. Non-covalent pomegranate (Punica granatum) hydrolyzable tannin-protein complexes modulate antigen uptake, processing and presentation by a T-cell hybridoma line co-cultured with murine peritoneal macrophages.

    PubMed

    Madrigal-Carballo, Sergio; Haas, Linda; Vestling, Martha; Krueger, Christian G; Reed, Jess D

    2016-12-01

    In this work we characterize the interaction of pomegranate hydrolyzable tannins (HT) with hen egg-white lysozyme (HEL) and determine the effects of non-covalent tannin-protein complexes on macrophage endocytosis, processing and presentation of antigen. We isolated HT from pomegranate and complexed them to HEL; the resulting non-covalent tannin-protein complexes were characterized by gel electrophoresis and MALDI-TOF MS. Finally, cell culture studies and confocal microscopy imaging were conducted on the non-covalent pomegranate HT-HEL protein complexes to evaluate their effect on macrophage antigen uptake, processing and presentation to T-cell hybridomas. Our results indicate that non-covalent pomegranate HT-HEL protein complexes modulate uptake, processing and antigen presentation by mouse peritoneal macrophages. After 4 h of pre-incubation, only trace amounts of IL-2 were detected in the co-cultures treated with HEL alone, whereas a non-covalent pomegranate HT-HEL complex had already reached maximum IL-2 expression. Pomegranate HT may increase the rate of endocytosis of HEL and the subsequent expression of IL-2 by the T-cell hybridomas.

  3. Contact Versus Non-Contact Measurement of a Helicopter Main Rotor Composite Blade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luczak, Marcin; Dziedziech, Kajetan; Peeters, Bart

    2010-05-28

    The dynamic characterization of lightweight structures is particularly complex as the impact of the weight of sensors and instrumentation (cables, mounting of exciters...) can distort the results. Varying mass loading or constraint effects between partial measurements may determine several errors on the final conclusions. Frequency shifts can lead to erroneous interpretations of the dynamics parameters. Typically these errors remain limited to a few percent. Inconsistent data sets however can result in major processing errors, with all related consequences towards applications based on the consistency assumption, such as global modal parameter identification, model-based damage detection and FRF-based matrix inversion in substructuring, load identification and transfer path analysis [1]. This paper addresses the subject of accuracy in the context of the measurement of the dynamic properties of a particular lightweight structure. It presents a comprehensive comparative study between the use of accelerometer, laser vibrometer (scanning LDV) and PU-probe (acoustic particle velocity and pressure) measurements to measure the structural responses, with as final aim the comparison of modal model quality assessment. The object of the investigation is a composite material blade from the main rotor of a helicopter. The presented results are part of an extensive test campaign performed with application of SIMO, MIMO, random and harmonic excitation, and the use of the mentioned contact and non-contact measurement techniques. The advantages and disadvantages of the applied instrumentation are discussed. Presented are real-life measurement problems related to the different set up conditions. Finally an analysis of estimated models is made in view of assessing the applicability of the various measurement approaches for successful fault detection based on modal parameters observation as well as in uncertain non-deterministic numerical model updating.

  4. Contact Versus Non-Contact Measurement of a Helicopter Main Rotor Composite Blade

    NASA Astrophysics Data System (ADS)

    Luczak, Marcin; Dziedziech, Kajetan; Vivolo, Marianna; Desmet, Wim; Peeters, Bart; Van der Auweraer, Herman

    2010-05-01

    The dynamic characterization of lightweight structures is particularly complex as the impact of the weight of sensors and instrumentation (cables, mounting of exciters…) can distort the results. Varying mass loading or constraint effects between partial measurements may determine several errors on the final conclusions. Frequency shifts can lead to erroneous interpretations of the dynamics parameters. Typically these errors remain limited to a few percent. Inconsistent data sets however can result in major processing errors, with all related consequences towards applications based on the consistency assumption, such as global modal parameter identification, model-based damage detection and FRF-based matrix inversion in substructuring, load identification and transfer path analysis [1]. This paper addresses the subject of accuracy in the context of the measurement of the dynamic properties of a particular lightweight structure. It presents a comprehensive comparative study between the use of accelerometer, laser vibrometer (scanning LDV) and PU-probe (acoustic particle velocity and pressure) measurements to measure the structural responses, with as final aim the comparison of modal model quality assessment. The object of the investigation is a composite material blade from the main rotor of a helicopter. The presented results are part of an extensive test campaign performed with application of SIMO, MIMO, random and harmonic excitation, and the use of the mentioned contact and non-contact measurement techniques. The advantages and disadvantages of the applied instrumentation are discussed. Presented are real-life measurement problems related to the different set up conditions. Finally an analysis of estimated models is made in view of assessing the applicability of the various measurement approaches for successful fault detection based on modal parameters observation as well as in uncertain non-deterministic numerical model updating.

  5. Forecasting transitions in systems with high-dimensional stochastic complex dynamics: a linear stability analysis of the tangled nature model.

    PubMed

    Cairoli, Andrea; Piovani, Duccio; Jensen, Henrik Jeldtoft

    2014-12-31

    We propose a new procedure to monitor and forecast the onset of transitions in high-dimensional complex systems. We describe our procedure by an application to the tangled nature model of evolutionary ecology. The quasistable configurations of the full stochastic dynamics are taken as input for a stability analysis by means of the deterministic mean-field equations. Numerical analysis of the high-dimensional stability matrix allows us to identify unstable directions associated with eigenvalues with a positive real part. The overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean-field approximation is found to be a good early warning of the transitions occurring intermittently.
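
    The early-warning recipe described above (linearize the mean-field dynamics, find the unstable eigendirections, project the instantaneous configuration onto them) can be sketched for a 2x2 Jacobian; the matrix and the state vector below are invented for illustration.

```python
import math

def eig2(a, b, c, d):
    """Real eigenvalues of the 2x2 Jacobian [[a, b], [c, d]]
    (assumes a positive discriminant)."""
    tr, det = a + d, a * d - b * c
    s = math.sqrt(tr * tr - 4.0 * det)
    return (tr + s) / 2.0, (tr - s) / 2.0

def unstable_direction(a, b, c, d):
    """Unit eigenvector for the largest eigenvalue (assumes b != 0):
    (A - lam*I) v = 0 gives v proportional to (b, lam - a)."""
    lam = max(eig2(a, b, c, d))
    n = math.hypot(b, lam - a)
    return b / n, (lam - a) / n, lam

def warning_signal(state, a, b, c, d):
    """Overlap of the instantaneous configuration with the unstable
    direction; a growing |overlap| flags an imminent transition."""
    vx, vy, lam = unstable_direction(a, b, c, d)
    return state[0] * vx + state[1] * vy, lam

# Invented mean-field Jacobian with one unstable direction (lam > 0)
overlap, lam = warning_signal((0.3, 0.4), 0.4, 0.3, 0.2, -0.9)
```

    In the paper this projection is computed for the high-dimensional stability matrix of the tangled nature model rather than a 2x2 toy, but the principle is the same.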

  6. Deterministic direct reprogramming of somatic cells to pluripotency.

    PubMed

    Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H

    2013-10-03

    Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from successfully and synchronously reprogramming remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency-promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for dissecting the molecular dynamics that lead to the establishment of pluripotency with unprecedented flexibility and resolution.

  7. The deterministic optical alignment of the HERMES spectrograph

    NASA Astrophysics Data System (ADS)

    Gers, Luke; Staszak, Nicholas

    2014-07-01

    The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four channel, VPH-grating spectrograph fed by two 400 fiber slit assemblies whose construction and commissioning has now been completed at the Anglo Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles about which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.

  8. Autonomous choices among deterministic evolution-laws as source of uncertainty

    NASA Astrophysics Data System (ADS)

    Trujillo, Leonardo; Meyroneinc, Arnaud; Campos, Kilver; Rendón, Otto; Sigalotti, Leonardo Di G.

    2018-03-01

    We provide evidence of an extreme form of sensitivity to initial conditions in a family of one-dimensional self-ruling dynamical systems. We prove that some hyperchaotic sequences are closed-form expressions of the orbits of these pseudo-random dynamical systems. Each chaotic system in this family exhibits a sensitivity to initial conditions that encompasses the sequence of choices of the evolution rule in some collection of maps. This opens a possibility to extend current theories of complex behaviors on the basis of intrinsic uncertainty in deterministic chaos.
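This kind of rule-choice sensitivity can be illustrated with invented maps (not the specific family studied in the paper): two orbits started from the same initial condition, but differing in a single choice of evolution law, separate quickly.

```python
# Illustrative sketch with invented maps (not the family studied in the
# paper): the evolution rule is chosen anew at every step from a small
# collection, and a single different choice separates two otherwise
# identical orbits.

def iterate(x, rule_sequence, maps):
    for r in rule_sequence:
        x = maps[r](x)
    return x

maps = [lambda x: 4.0 * x * (1.0 - x),   # logistic map, r = 4
        lambda x: 3.9 * x * (1.0 - x)]   # logistic map, r = 3.9

rules_a = [0] * 40
rules_b = [1] + [0] * 39                 # differ only in the first choice

xa = iterate(0.3, rules_a, maps)
xb = iterate(0.3, rules_b, maps)
print(abs(xa - xb))                      # typically an order-one separation
```

Here the sequence of rule choices plays the same role as an initial-condition perturbation in ordinary deterministic chaos.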

  9. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. 
The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. © 2003 Elsevier B.V. All rights reserved.
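The core of deterministic deconvolution, spectral division by the known source wavelet, can be sketched as follows; the water-level stabilizer `eps` is an assumption added here for numerical safety, not necessarily part of the authors' processing:

```python
import numpy as np

# Frequency-domain sketch of deterministic deconvolution with a known
# source wavelet. The water-level stabilizer `eps` is our addition for
# numerical safety; the exact processing in the study may differ.

def deterministic_deconv(trace, wavelet, eps=1e-3):
    n = len(trace)
    T = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    power = (W * np.conj(W)).real
    power = np.maximum(power, eps * power.max())   # water level
    return np.fft.irfft(T * np.conj(W) / power, n)

# Synthetic check: a trace built by convolving the wavelet with two spikes
wavelet = np.array([0.0, 1.0, -0.8, 0.3, 0.0])
spikes = np.zeros(64)
spikes[10], spikes[30] = 1.0, -0.5
trace = np.convolve(spikes, wavelet)[:64]
out = deterministic_deconv(trace, wavelet)
print(int(np.argmax(np.abs(out))))   # 10: the spike positions are recovered
```

Because the source wavelet is measured in air rather than estimated statistically, the division compresses the wavelet without the instability of blind deconvolution.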

  10. Deterministic generation of multiparticle entanglement by quantum Zeno dynamics.

    PubMed

    Barontini, Giovanni; Hohmann, Leander; Haas, Florian; Estève, Jérôme; Reichel, Jakob

    2015-09-18

    Multiparticle entangled quantum states, a key resource in quantum-enhanced metrology and computing, are usually generated by coherent operations exclusively. However, unusual forms of quantum dynamics can be obtained when environment coupling is used as part of the state generation. In this work, we used quantum Zeno dynamics (QZD), based on nondestructive measurement with an optical microcavity, to deterministically generate different multiparticle entangled states in an ensemble of 36 qubit atoms in less than 5 microseconds. We characterized the resulting states by performing quantum tomography, yielding a time-resolved account of the entanglement generation. In addition, we studied the dependence of quantum states on measurement strength and quantified the depth of entanglement. Our results show that QZD is a versatile tool for fast and deterministic entanglement generation in quantum engineering applications. Copyright © 2015, American Association for the Advancement of Science.

  11. Mesoscopic chaos mediated by Drude electron-hole plasma in silicon optomechanical oscillators

    PubMed Central

    Wu, Jiagui; Huang, Shu-Wei; Huang, Yongjun; Zhou, Hao; Yang, Jinghui; Liu, Jia-Ming; Yu, Mingbin; Lo, Guoqiang; Kwong, Dim-Lee; Duan, Shukai; Wei Wong, Chee

    2017-01-01

    Chaos has revolutionized the field of nonlinear science and stimulated foundational studies from neural networks, extreme event statistics, to physics of electron transport. Recent studies in cavity optomechanics provide a new platform to uncover quintessential architectures of chaos generation and the underlying physics. Here, we report the generation of dynamical chaos in silicon-based monolithic optomechanical oscillators, enabled by the strong and coupled nonlinearities of two-photon absorption induced Drude electron–hole plasma. Deterministic chaotic oscillation is achieved, and statistical and entropic characterization quantifies the chaos complexity at 60 fJ intracavity energies. The correlation dimension D2 is determined at 1.67 for the chaotic attractor, along with a maximal Lyapunov exponent rate of about 2.94 times the fundamental optomechanical oscillation for fast adjacent trajectory divergence. Nonlinear dynamical maps demonstrate the subharmonics, bifurcations and stable regimes, along with distinct transitional routes into chaos. This provides a CMOS-compatible and scalable architecture for understanding complex dynamics on the mesoscopic scale. PMID:28598426
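A maximal Lyapunov exponent of the kind quoted above is, for a simple map, just the time-average of the log of the local stretching rate; the sketch below uses the logistic map (exact exponent ln 2) as a stand-in, not the optomechanical equations of motion:

```python
import math

# Maximal-Lyapunov-exponent sketch of the kind quoted above, applied to the
# logistic map x -> 4x(1-x), whose exact exponent is ln 2. The actual
# optomechanical system would require its full equations of motion.

def lyapunov_logistic(x0=0.2, n=100_000, burn=1_000):
    x = x0
    for _ in range(burn):                     # discard the transient
        x = 4.0 * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(4.0 - 8.0 * x))   # log |f'(x)|, f'(x) = 4 - 8x
        x = 4.0 * x * (1.0 - x)
    return acc / n

print(lyapunov_logistic())   # ~0.693, i.e. ln 2
```

A positive value certifies exponential divergence of adjacent trajectories, the same diagnostic reported for the chaotic attractor above.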

  12. Imitative and best response behaviors in a nonlinear Cournotian setting

    NASA Astrophysics Data System (ADS)

    Cerboni Baiardi, Lorenzo; Naimzada, Ahmad K.

    2018-05-01

    We consider the competition among quantity setting players in a deterministic nonlinear oligopoly framework characterized by an isoelastic demand curve. Players are characterized by having heterogeneous decisional mechanisms to set their outputs: some players are imitators, while the remaining others adopt a rational-like rule according to which their past decisions are adjusted towards their static-expectation best response. The Cournot-Nash production level is a stationary state of our model, together with a further production level that can be interpreted as the competitive outcome in case only imitators are present. We find that both the number of players and the relative fraction of imitators influence the stability of the Cournot-Nash equilibrium, playing an ambiguous role, and double instability thresholds may be observed. Global analysis shows that a wide variety of complex dynamic scenarios emerge. Chaotic trajectories as well as multi-stabilities, where different attractors coexist, are robust phenomena that can be observed for a wide spectrum of parameter sets.

  13. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.

    PubMed

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n^2). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposed algorithm has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
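For contrast with the method described, a minimal version of classic partition refinement (Moore's O(n^2) scheme, not the paper's backward-depth algorithm) looks like this:

```python
# Baseline for comparison: classic partition refinement (Moore's algorithm,
# O(n^2) worst case), not the backward-depth method proposed in the paper.
# States are 0..n-1 and delta[s][a] gives the transition on symbol a.

def minimize(n, alphabet, delta, accepting):
    block = {s: s in accepting for s in range(n)}      # initial split
    while True:
        # Two states stay together only if they agree on acceptance and
        # every symbol sends them into the same current block.
        sig = {s: (block[s],) + tuple(block[delta[s][a]] for a in alphabet)
               for s in range(n)}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        new_block = {s: ids[sig[s]] for s in range(n)}
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block                           # no further refinement
        block = new_block

# Example: states 1 and 2 accept everything and are equivalent.
delta = {0: {'a': 1, 'b': 2}, 1: {'a': 1, 'b': 1}, 2: {'a': 2, 'b': 2}}
blocks = minimize(3, 'ab', delta, {1, 2})
print(len(set(blocks.values())))   # 2 states in the minimal DFA
```

The returned mapping assigns each state to its block; merging states within a block yields the minimal DFA.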

  14. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).

  15. Large system change challenges: addressing complex critical issues in linked physical and social domains

    NASA Astrophysics Data System (ADS)

    Waddell, Steve; Cornell, Sarah; Hsueh, Joe; Ozer, Ceren; McLachlan, Milla; Birney, Anna

    2015-04-01

    Most action to address contemporary complex challenges, including the urgent issues of global sustainability, occurs piecemeal and without meaningful guidance from leading complex change knowledge and methods. The potential benefit of using such knowledge is greater efficacy of effort and investment. However, this knowledge and its associated tools and methods are under-utilized because understanding of them is low and fragmented between diverse knowledge traditions, and because applying them often requires shifts in mindsets and skills from expert-led to participant-based action. We have been engaged in diverse action-oriented research efforts in Large System Change for sustainability. For us, "large" systems can be characterized as large-scale systems (up to global) with many components, of many kinds (physical, biological, institutional, cultural/conceptual), operating at multiple levels, driven by multiple forces, and presenting major challenges for the people involved. We see change of such systems as complex challenges, in contrast with simple or complicated problems, or chaotic situations. In other words, issues and sub-systems have unclear boundaries, interact with each other, and are often contradictory; dynamics are non-linear; issues are not "controllable", and "solutions" are "emergent" and often paradoxical. Since choices are opportunity-, power- and value-driven, these social, institutional and cultural factors need to be made explicit in any actionable theory of change. Our emerging network is sharing and building a knowledge base of experience, heuristics, and theories of change from multiple disciplines and practice domains. 
We will present our views on focal issues for the development of the field of large system change, which include processes of goal-setting and alignment; leverage of systemic transitions and transformation; and the role of choice in influencing critical change processes, when only some sub-systems or levels of the system behave in purposeful ways, while others are undeniably and unavoidably deterministic.

  16. Natural selection and self-organization in complex adaptive systems.

    PubMed

    Di Bernardo, Mirko

    2010-01-01

    The central theme of this work is self-organization, "interpreted" both from the point of view of theoretical biology and from a philosophical point of view. By analysing, on the one hand, what are now considered (not only in physics) some of the most important discoveries, namely complex systems and deterministic chaos, and, on the other hand, the new frontiers of systemic biology, this work highlights how large open thermodynamic systems can spontaneously stay in an orderly regime. Such systems can represent the natural source of the order required for stable self-organization, for homoeostasis and for hereditary variations. The order emerging in enormous, randomly interconnected nets of binary variables is almost certainly only the precursor of similar orders emerging in all varieties of complex systems. Hence this work, by finding new foundations for the order pervading the living world, advances the daring hypothesis that Darwinian natural selection is not the only source of order in the biosphere. Thus the article, by examining the passage from Prigogine's dissipative-structures theory to the contemporary theory of biological complexity, highlights the development of a coherent and continuous line of research that seeks to identify the general principles marking the profound reality of the mysterious self-organization characterizing the complexity of life.

  17. Computing exponentially faster: implementing a non-deterministic universal Turing machine using DNA

    PubMed Central

    Currin, Andrew; Korovin, Konstantin; Ababi, Maria; Roper, Katherine; Kell, Douglas B.; Day, Philip J.

    2017-01-01

    The theory of computer science is based around universal Turing machines (UTMs): abstract machines able to execute all possible algorithms. Modern digital computers are physical embodiments of classical UTMs. For the most important class of problems in computer science, non-deterministic polynomial complete problems, non-deterministic UTMs (NUTMs) are theoretically exponentially faster than both classical UTMs and quantum mechanical UTMs (QUTMs). However, no attempt has previously been made to build an NUTM, and their construction has been regarded as impossible. Here, we demonstrate the first physical design of an NUTM. This design is based on Thue string rewriting systems, and thereby avoids the limitations of most previous DNA computing schemes: all the computation is local (simple edits to strings) so there is no need for communication, and there is no need to order operations. The design exploits DNA's ability to replicate to execute an exponential number of computational paths in P time. Each Thue rewriting step is embodied in a DNA edit implemented using a novel combination of polymerase chain reactions and site-directed mutagenesis. We demonstrate that the design works using both computational modelling and in vitro molecular biology experimentation: the design is thermodynamically favourable, microprogramming can be used to encode arbitrary Thue rules, all classes of Thue rule can be implemented, and rules can be implemented non-deterministically. In an NUTM, the resource limitation is space, which contrasts with classical UTMs and QUTMs where it is time. This fundamental difference enables an NUTM to trade space for time, which is significant for both theoretical computer science and physics. It is also of practical importance, for to quote Richard Feynman 'there's plenty of room at the bottom'. 
This means that a desktop DNA NUTM could potentially utilize more processors than all the electronic computers in the world combined, and thereby outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy. PMID:28250099
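The non-deterministic branching of a Thue rewriting system can be mimicked in software by expanding every applicable rule at every position, a breadth-first stand-in for the parallelism the DNA design achieves by replication; the rules below are invented for this example:

```python
from collections import deque

# Toy sketch of the non-deterministic branching in a Thue string-rewriting
# system: breadth-first exploration of every string reachable by applying
# any rule at any position, the branching the DNA design executes
# physically by replication. The rules are invented for this example; Thue
# systems are symmetric, so each rule appears in both directions.

def reachable(start, rules, max_len=6):
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for lhs, rhs in rules:
            i = s.find(lhs)
            while i != -1:                         # every position counts
                t = s[:i] + rhs + s[i + len(lhs):]
                if len(t) <= max_len and t not in seen:
                    seen.add(t)
                    frontier.append(t)
                i = s.find(lhs, i + 1)
    return seen

rules = [("ab", "ba"), ("ba", "ab")]               # one symmetric rule pair
print(sorted(reachable("aab", rules)))             # ['aab', 'aba', 'baa']
```

In software the frontier grows exponentially in the worst case; in the proposed NUTM, replication lets all branches proceed at once, trading space for time.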

  18. Artificial Bee Colony Optimization of Capping Potentials for Hybrid Quantum Mechanical/Molecular Mechanical Calculations.

    PubMed

    Schiffmann, Christoph; Sebastiani, Daniel

    2011-05-10

    We present an algorithmic extension of a numerical optimization scheme for analytic capping potentials for use in mixed quantum-classical (quantum mechanical/molecular mechanical, QM/MM) ab initio calculations. Our goal is to minimize bond-cleavage-induced perturbations in the electronic structure, measured by means of a suitable penalty functional. The optimization algorithm, a variant of the artificial bee colony (ABC) algorithm that relies on swarm intelligence, couples deterministic (downhill gradient) and stochastic elements to avoid trapping in local minima. The ABC algorithm outperforms the conventional downhill gradient approach if the penalty hypersurface exhibits wiggles that prevent a straight minimization pathway. We characterize the optimized capping potentials by computing NMR chemical shifts. This approach will increase the accuracy of QM/MM calculations of complex biomolecules.
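A minimal, generic ABC loop (simplified: the separate onlooker phase is omitted) on a toy sum-of-squares objective, standing in for the capping-potential penalty functional, might look like:

```python
import random

# Simplified, generic ABC sketch (employed bees + scouts; the onlooker
# phase is omitted) on a toy sum-of-squares objective standing in for the
# capping-potential penalty functional, which is not reproduced here.

def abc_minimize(f, dim=3, n_sources=10, limit=20, iters=300,
                 lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    trials = [0] * n_sources
    best = min(X, key=f)
    for _ in range(iters):
        for i in range(n_sources):
            k = rng.randrange(n_sources - 1)
            if k >= i:
                k += 1                              # partner source != i
            j = rng.randrange(dim)
            cand = X[i][:]
            cand[j] += rng.uniform(-1.0, 1.0) * (X[i][j] - X[k][j])
            if f(cand) < f(X[i]):                   # greedy downhill move
                X[i], trials[i] = cand, 0
            else:
                trials[i] += 1                      # stochastic miss
            if trials[i] > limit:                   # scout: random restart
                X[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
        best = min(X + [best], key=f)
    return best

sphere = lambda v: sum(x * x for x in v)
best = abc_minimize(sphere)
print(sphere(best))   # small residual near the optimum at the origin
```

The greedy acceptance supplies the deterministic downhill element, while partner-based perturbations and scout restarts supply the stochastic escape from local minima, the coupling the abstract describes.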

  19. Path selection in the growth of rivers

    DOE PAGES

    Cohen, Yossi; Devauchelle, Olivier; Seybold, Hansjörg F.; ...

    2015-11-02

    River networks exhibit a complex ramified structure that has inspired decades of studies. Yet an understanding of the propagation of a single stream remains elusive. In this paper, we invoke a criterion for path selection from fracture mechanics and apply it to the growth of streams in a diffusion field. We show that, as it cuts through the landscape, a stream maintains a symmetric groundwater flow around its tip. The local flow conditions therefore determine the growth of the drainage network. We use this principle to reconstruct the history of a network and to find a growth law associated with it. Finally, our results show that the deterministic growth of a single channel based on its local environment can be used to characterize the structure of river networks.

  20. Optimal Vaccination in a Stochastic Epidemic Model of Two Non-Interacting Populations

    DTIC Science & Technology

    2015-02-17

    of diminishing returns from vaccination will generally take place at smaller vaccine allocations V compared to the deterministic model. Optimal...take place and small r0 values where it does not is illustrated in Fig. 4C. As r0 is decreased, the region between the two instances of switching...approximately distribute vaccine in proportion to population size. For large r0 (r0 ≳ 2.9), two switches take place. In the deterministic optimal solution, a

  1. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing.

    PubMed

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-10-23

    The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP)-complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing the different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and in O(mn) time. We extend the application of DNA molecular operations and exploit their parallelism to reduce the complexity of the computation.
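For very small instances, the UAP can be checked by exhaustive enumeration, which is exactly the exponential search the DNA algorithm is designed to parallelize away; this sketch assumes the common variant in which every individual receives at least one job:

```python
from itertools import product

# Exhaustive reference for a tiny UAP instance: the exponential
# enumeration the parallel DNA algorithm is meant to absorb. Assumes the
# common variant where each of the m individuals gets at least one job.

def uap_bruteforce(cost):
    """cost[j][i] = cost of assigning job j to individual i."""
    n, m = len(cost), len(cost[0])
    best_cost, best_assign = None, None
    for assign in product(range(m), repeat=n):      # m**n candidates
        if len(set(assign)) < m:                    # someone got no job
            continue
        c = sum(cost[j][assign[j]] for j in range(n))
        if best_cost is None or c < best_cost:
            best_cost, best_assign = c, assign
    return best_cost, best_assign

cost = [[4, 1],   # 3 jobs, 2 individuals
        [2, 9],
        [3, 3]]
print(uap_bruteforce(cost))   # (6, (1, 0, 0))
```

The m**n search space is what makes the problem NP-complete and motivates the O(mn)-time molecular approach.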

  2. Mathematical Modeling of the Origins of Life

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The emergence of early metabolism, a network of catalyzed chemical reactions that supported self-maintenance, growth, reproduction and evolution of the ancestors of contemporary cells (protocells), was a critical, but still very poorly understood, step on the path from inanimate to animate matter. Here, it is proposed and tested through mathematical modeling of biochemically plausible systems that the emergence of metabolism and its initial evolution towards higher complexity preceded the emergence of a genome. Even though the formation of protocellular metabolism was driven by non-genomic, highly stochastic processes, the outcome was largely deterministic, strongly constrained by the laws of chemistry. It is shown that such concepts as speciation and fitness to the environment, developed in the context of genomic evolution, also hold in the absence of a genome.

  3. Classical-quantum arbitrarily varying wiretap channel: Secret message transmission under jamming attacks

    NASA Astrophysics Data System (ADS)

    Boche, Holger; Cai, Minglai; Deppe, Christian; Nötzel, Janis

    2017-10-01

    We analyze arbitrarily varying classical-quantum wiretap channels. These channels are subject to two attacks at the same time: one passive (eavesdropping) and one active (jamming). We elaborate on our previous studies [H. Boche et al., Quantum Inf. Process. 15(11), 4853-4895 (2016) and H. Boche et al., Quantum Inf. Process. 16(1), 1-48 (2016)] by introducing a reduced class of allowable codes that fulfills a more stringent secrecy requirement than earlier definitions. In addition, we prove that non-symmetrizability of the legal link is sufficient for equality of the deterministic and the common randomness assisted secrecy capacities. Finally, we focus on analytic properties of both secrecy capacities: We completely characterize their discontinuity points and their super-activation properties.

  4. Non-random nature of spontaneous mIPSCs in mouse auditory brainstem neurons revealed by recurrence quantification analysis

    PubMed Central

    Leao, Richardson N; Leao, Fabricio N; Walmsley, Bruce

    2005-01-01

    A change in the spontaneous release of neurotransmitter is a useful indicator of processes occurring within presynaptic terminals. Linear techniques (e.g. Fourier transform) have been used to analyse spontaneous synaptic events in previous studies, but such methods are inappropriate if the timing pattern is complex. We have investigated spontaneous glycinergic miniature synaptic currents (mIPSCs) in principal cells of the medial nucleus of the trapezoid body. The random versus deterministic (or periodic) nature of mIPSCs was assessed using recurrence quantification analysis. Nonlinear methods were then used to quantify any detected determinism in spontaneous release, and to test for chaotic or fractal patterns. Modelling demonstrated that this procedure is much more sensitive in detecting periodicities than conventional techniques. mIPSCs were found to exhibit periodicities that were abolished by blockade of internal calcium stores with ryanodine, suggesting calcium oscillations in the presynaptic inhibitory terminals. Analysis indicated that mIPSC occurrences were chaotic in nature. Furthermore, periodicities were less evident in congenitally deaf mice than in normal mice, indicating that appropriate neural activity during development is necessary for the expression of deterministic chaos in mIPSC patterns. We suggest that chaotic oscillations of mIPSC occurrences play a physiological role in signal processing in the auditory brainstem. PMID:16271982
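The two basic recurrence measures used in such an analysis, recurrence rate and determinism, can be sketched on a 1-D series of inter-event intervals (a simplified stand-in for the full RQA applied to the mIPSC data):

```python
import numpy as np

# Minimal recurrence-quantification sketch on a 1-D series of inter-event
# intervals (the study used the full RQA machinery; this shows only the
# two basic measures). Two points are "recurrent" when they lie within
# a radius eps of each other.

def rqa(series, eps, dmin=2):
    x = np.asarray(series, dtype=float)
    n = len(x)
    R = np.abs(x[:, None] - x[None, :]) < eps     # recurrence matrix
    rr = R.sum() / n ** 2                         # recurrence rate
    diag_pts = 0                                  # points on long diagonals
    for k in range(1, n):
        run = 0
        for v in list(np.diagonal(R, k)) + [False]:
            if v:
                run += 1
            else:
                if run >= dmin:
                    diag_pts += 2 * run           # count both triangles
                run = 0
    det = diag_pts / max(R.sum() - n, 1)          # determinism (main diag excluded)
    return rr, det

periodic = [0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9]
rr, det = rqa(periodic, eps=0.05)
print(rr, det)   # 0.5 1.0 for a perfectly periodic series
```

High determinism (long diagonal structures in the recurrence plot) is the signature that distinguishes periodic or chaotic event timing from purely random release.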

  5. Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise

    NASA Astrophysics Data System (ADS)

    Meyers, Stephen R.; Hinnov, Linda A.

    2010-08-01

    Deterministic orbital controls on climate variability are commonly inferred to dominate across timescales of 104-106 years, although some studies have suggested that stochastic processes may be of equal or greater importance. Here we explicitly quantify changes in deterministic orbital processes (forcing and/or pacing) versus stochastic climate processes during the Plio-Pleistocene, via time-frequency analysis of two prominent foraminifera oxygen isotopic stacks. Our results indicate that development of the Northern Hemisphere ice sheet is paralleled by an overall amplification of both deterministic and stochastic climate energy, but their relative dominance is variable. The progression from a more stochastic early Pliocene to a strongly deterministic late Pleistocene is primarily accommodated during two transitory phases of Northern Hemisphere ice sheet growth. This long-term trend is punctuated by “stochastic events,” which we interpret as evidence for abrupt reorganization of the climate system at the initiation and termination of the mid-Pleistocene transition and at the onset of Northern Hemisphere glaciation. In addition to highlighting a complex interplay between deterministic and stochastic climate change during the Plio-Pleistocene, our results support an early onset for Northern Hemisphere glaciation (between 3.5 and 3.7 Ma) and reveal some new characteristics of the orbital signal response, such as the puzzling emergence of 100 ka and 400 ka cyclic climate variability during theoretical eccentricity nodes.

  6. The Evolution of Software and Its Impact on Complex System Design in Robotic Spacecraft Embedded Systems

    NASA Technical Reports Server (NTRS)

    Butler, Roy

    2013-01-01

    The growth in computer hardware performance, coupled with reduced energy requirements, has led to a rapid expansion of the resources available to software systems, driving them towards greater logical abstraction, flexibility, and complexity. This shift in focus from compacting functionality into a limited field towards developing layered, multi-state architectures in a grand field has both driven and been driven by the history of embedded processor design in the robotic spacecraft industry. The combinatorial growth of interprocess conditions is accompanied by benefits (concurrent development, situational autonomy, and evolution of goals) and drawbacks (late integration, non-deterministic interactions, and multifaceted anomalies) in achieving mission success, as illustrated by the case of the Mars Reconnaissance Orbiter. Approaches to optimizing the benefits while mitigating the drawbacks have taken the form of the formalization of requirements, modular design practices, extensive system simulation, and spacecraft data trend analysis. The growth of hardware capability and software complexity can be expected to continue, with future directions including stackable commodity subsystems, computer-generated algorithms, runtime reconfigurable processors, and greater autonomy.

  7. Properties of networks with partially structured and partially random connectivity

    NASA Astrophysics Data System (ADS)

    Ahmadian, Yashar; Fumarola, Francesco; Miller, Kenneth D.

    2015-01-01

    Networks studied in many disciplines, including neuroscience and mathematical biology, have connectivity that may be stochastic about some underlying mean connectivity represented by a non-normal matrix. Furthermore, the stochasticity may not be independent and identically distributed (iid) across elements of the connectivity matrix. More generally, the problem of understanding the behavior of stochastic matrices with nontrivial mean structure and correlations arises in many settings. We address this by characterizing large random N × N matrices of the form A = M + LJR, where M, L, and R are arbitrary deterministic matrices and J is a random matrix of zero-mean iid elements. M can be non-normal, and L and R allow correlations that have separable dependence on row and column indices. We first provide a general formula for the eigenvalue density of A. For A non-normal, the eigenvalues do not suffice to specify the dynamics induced by A, so we also provide general formulas for the transient evolution of the magnitude of activity and the frequency power spectrum in an N-dimensional linear dynamical system with a coupling matrix given by A. These quantities can also be thought of as characterizing the stability and the magnitude of the linear response of a nonlinear network to small perturbations about a fixed point. We derive these formulas and work them out analytically for some examples of M, L, and R motivated by neurobiological models. We also argue that the persistence as N → ∞ of a finite number of randomly distributed outlying eigenvalues outside the support of the eigenvalue density of A, as previously observed, arises in regions of the complex plane Ω where there are nonzero singular values of L^-1 (z1 - M) R^-1 (for z ∈ Ω) that vanish as N → ∞. When such singular values do not exist and L and R are equal to the identity, there is a correspondence in the normalized Frobenius norm (but not in the operator norm) between the support of the spectrum of A for J of norm σ and the σ-pseudospectrum of M.
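The A = M + LJR setup is easy to explore numerically; the sketch below uses toy choices of M, L, and R (not the neurobiological examples of the paper) and samples one realization of the eigenvalue cloud:

```python
import numpy as np

# Numerical illustration of the A = M + LJR setup with toy deterministic
# matrices (not the neurobiological examples of the paper): sample one
# realization and inspect the eigenvalue cloud.

rng = np.random.default_rng(0)
N = 400
M = np.diag(np.linspace(-1.0, 1.0, N))     # toy mean (diagonal here; M may be non-normal)
L = np.diag(1.0 + 0.5 * np.sin(np.linspace(0.0, np.pi, N)))  # row-dependent scaling
R = np.eye(N)                              # no column structure
sigma = 0.5
J = rng.standard_normal((N, N)) * sigma / np.sqrt(N)   # zero-mean iid elements

eigs = np.linalg.eigvals(M + L @ J @ R)
print(eigs.shape, float(np.abs(eigs.imag).max()))  # a cloud around the spectrum of M
```

Comparing such sampled clouds against the paper's density formula is the natural check of the theory for any particular M, L, and R.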

  8. On the Hosoya index of a family of deterministic recursive trees

    NASA Astrophysics Data System (ADS)

    Chen, Xufeng; Zhang, Jingyuan; Sun, Weigang

    2017-01-01

    In this paper, we calculate the Hosoya index of a family of deterministic recursive trees whose special feature is that new nodes are connected to existing nodes according to a fixed rule. We then obtain a recursive solution for the Hosoya index based on determinant operations. The computational complexity of our proposed algorithm is O(log^2 n), with n being the network size, which is lower than that of existing numerical methods. Finally, we give a weighted tree-shrinking method as a graphical interpretation of the recurrence formula for the Hosoya index.
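The Hosoya index itself can be computed by the standard deletion recurrence Z(T) = Z(T - v) + sum over neighbours u of v of Z(T - v - u), a plain reference implementation rather than the paper's O(log^2 n) determinant-based method:

```python
# Reference implementation of the Hosoya index (number of matchings,
# including the empty one) via the standard deletion recurrence on a
# forest: Z(T) = Z(T - v) + sum over neighbours u of v of Z(T - v - u).
# This naive recursion is for illustration only; it is not the
# O(log^2 n) determinant-based algorithm of the paper.

def hosoya(adj):
    """adj maps each node to the set of its neighbours (a forest)."""
    if not adj:
        return 1
    v = next(iter(adj))
    def without(g, dead):
        return {x: nbrs - dead for x, nbrs in g.items() if x not in dead}
    total = hosoya(without(adj, {v}))        # v left unmatched
    for u in adj[v]:                         # v matched to neighbour u
        total += hosoya(without(adj, {v, u}))
    return total

# A path on n vertices has Hosoya index Fibonacci(n + 1): P4 -> 5
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(hosoya(p4))   # 5
```

The Fibonacci check on paths gives a quick sanity test of any faster algorithm for the same quantity.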

  9. Distribution and regulation of stochasticity and plasticity in Saccharomyces cerevisiae

    DOE PAGES

    Dar, R. D.; Karig, D. K.; Cooke, J. F.; ...

    2010-09-01

    Stochasticity is an inherent feature of complex systems with nanoscale structure. In such systems information is represented by small collections of elements (e.g. a few electrons on a quantum dot), and small variations in the populations of these elements may lead to big uncertainties in the information. Unfortunately, little is known about how to work within this inherently noisy environment to design robust functionality into complex nanoscale systems. Here, we look to the biological cell as an intriguing model system where evolution has mediated the trade-offs between fluctuations and function, and in particular we look at the relationships and trade-offs between stochastic and deterministic responses in the gene expression of budding yeast (Saccharomyces cerevisiae). We find gene regulatory arrangements that control the stochastic and deterministic components of expression, and show that genes that have evolved to respond to stimuli (stress) in the most strongly deterministic way exhibit the most noise in the absence of the stimuli. We show that this relationship is consistent with a bursty 2-state model of gene expression, and demonstrate that this regulatory motif generates the most uncertainty in gene expression when there is the greatest uncertainty in the optimal level of gene expression.
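    The bursty two-state ("telegraph") model invoked in the abstract can be simulated exactly with a standard Gillespie algorithm. The rate constants below are illustrative assumptions, not fitted yeast parameters.

```python
import numpy as np

def telegraph_ssa(k_on, k_off, k_tx, k_deg, t_end, seed=0):
    """Gillespie simulation of the two-state (telegraph) gene expression model:
    a promoter toggles ON/OFF, transcribes only while ON, and mRNA decays first-order."""
    rng = np.random.default_rng(seed)
    t, gene_on, m = 0.0, False, 0
    counts = []
    while t < t_end:
        rates = [k_off if gene_on else k_on,   # promoter switching
                 k_tx if gene_on else 0.0,     # transcription (bursts while ON)
                 k_deg * m]                    # mRNA degradation
        total = sum(rates)
        t += rng.exponential(1.0 / total)      # exponential waiting time
        r = rng.uniform(0.0, total)
        if r < rates[0]:
            gene_on = not gene_on
        elif r < rates[0] + rates[1]:
            m += 1
        else:
            m -= 1
        counts.append(m)
    return np.array(counts)

m = telegraph_ssa(k_on=0.5, k_off=0.5, k_tx=10.0, k_deg=1.0, t_end=50.0)
print(m.mean())   # stationary mean is k_tx/k_deg * k_on/(k_on + k_off) = 5 in expectation
```

Slow promoter switching relative to degradation produces the bursty, high-noise regime the abstract associates with stress-responsive genes.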

  10. The dual reading of general conditionals: The influence of abstract versus concrete contexts.

    PubMed

    Wang, Moyun; Yao, Xinyun

    2018-04-01

    A main current issue concerning conditionals is whether the meaning of general conditionals (e.g., If a card is red, then it is round) is deterministic (exceptionless) or probabilistic (exception-tolerating). To resolve the issue, two experiments examined the influence of conditional contexts (with vs. without frequency information about truth-table cases) on the reading of general conditionals. Experiment 1 examined the direct reading of general conditionals in a possibility judgment task. Experiment 2 examined the indirect reading of general conditionals in a truth judgment task. Both the direct and the indirect reading of general conditionals exhibited a duality: a predominant deterministic semantic reading of conditionals without frequency information, and a predominant probabilistic pragmatic reading of conditionals with frequency information. The context of a general conditional determined its predominant reading. There were obvious individual differences in reading general conditionals with frequency information. The meaning of general conditionals is thus relative, depending on conditional contexts. The reading of general conditionals is so flexible and complex that neither a simple deterministic nor a simple probabilistic account can explain it; the present findings go beyond the extant deterministic and probabilistic accounts of conditionals.

  11. Shielding Calculations on Waste Packages - The Limits and Possibilities of different Calculation Methods by the example of homogeneous and inhomogeneous Waste Packages

    NASA Astrophysics Data System (ADS)

    Adams, Mike; Smalian, Silva

    2017-09-01

    For nuclear waste packages, the expected dose rates and the nuclide inventory are calculated in advance. Depending on the packaging of the nuclear waste, deterministic programs like MicroShield® provide a range of results for each type of packaging. Stochastic programs like the "Monte-Carlo N-Particle Transport Code System" (MCNP®), on the other hand, provide reliable results for complex geometries. However, this type of program requires a fully trained operator, and calculations are time consuming. The problem is therefore to choose an appropriate program for a specific geometry. To this end, we compared the results of deterministic programs like MicroShield® with those of stochastic programs like MCNP®. These comparisons enable us to make a statement about the applicability of the various programs to chosen types of containers. We found that for thin-walled geometries, deterministic programs like MicroShield® are well suited to calculating the dose rate. For cylindrical containers with inner shielding, however, deterministic programs reach their limits. Furthermore, we investigate the effect of an inhomogeneous material and activity distribution on the results. The calculations are still ongoing. Results will be presented in the final abstract.

  12. Enterprise resource planning for hospitals.

    PubMed

    van Merode, Godefridus G; Groothuis, Siebren; Hasman, Arie

    2004-06-30

    Integrated hospitals need a central planning and control system to plan patients' processes and the required capacity. Given the changes in healthcare, one can ask what type of information system can best support these healthcare delivery organizations. In this review, we focus on the potential of enterprise resource planning (ERP) systems for healthcare delivery organizations. First, ERP systems are explained. An overview is then presented of the characteristics of the planning process in hospital environments. Problems with ERP that are due to the special characteristics of healthcare are presented. The situations in which ERP can or cannot be used are discussed. It is suggested that hospitals be divided into a part concerned only with deterministic processes and a part concerned with non-deterministic processes. ERP can be very useful for planning and controlling the deterministic processes.

  13. Evidence of Deterministic Components in the Apparent Randomness of GRBs: Clues of a Chaotic Dynamic

    PubMed Central

    Greco, G.; Rosa, R.; Beskin, G.; Karpov, S.; Romano, L.; Guarnieri, A.; Bartolini, C.; Bedogni, R.

    2011-01-01

    Prompt γ-ray emissions from gamma-ray bursts (GRBs) exhibit a vast range of extremely complex temporal structures with a typically short variability time-scale – as fast as milliseconds. This work investigates the apparent randomness of GRB time profiles, making extensive use of nonlinear techniques that combine the advanced spectral method of Singular Spectrum Analysis (SSA) with the classical tools of chaos theory. Despite their morphological complexity, we detect evidence of non-stochastic short-term variability during the overall burst duration – seemingly consistent with chaotic behavior. The phase space portrait of such variability shows the existence of a well-defined strange attractor underlying the erratic prompt emission structures. This scenario can shed new light on the ultra-relativistic processes believed to take place in GRB explosions, usually associated with the birth of a fast-spinning magnetar or the accretion of matter onto a newly formed black hole. PMID:22355609
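    Phase-space portraits of the kind described are typically built by Takens time-delay embedding of the scalar light curve. A minimal sketch on a toy quasi-periodic signal follows; the embedding dimension and delay are illustrative choices, not values from the paper.

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Takens time-delay embedding: lift a scalar series into dim-dimensional phase space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

# toy quasi-periodic series standing in for a GRB light curve
t = np.linspace(0.0, 40.0 * np.pi, 4000)
x = np.sin(t) + 0.5 * np.sin(0.3 * t)
orbit = delay_embed(x, dim=3, tau=25)
print(orbit.shape)   # → (3950, 3)
```

Plotting the columns of `orbit` against each other gives the phase-space portrait in which an attractor, if present, appears as a bounded structured set rather than a diffuse cloud.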

  14. Evidence of deterministic components in the apparent randomness of GRBs: clues of a chaotic dynamic.

    PubMed

    Greco, G; Rosa, R; Beskin, G; Karpov, S; Romano, L; Guarnieri, A; Bartolini, C; Bedogni, R

    2011-01-01

    Prompt γ-ray emissions from gamma-ray bursts (GRBs) exhibit a vast range of extremely complex temporal structures with a typically short variability time-scale - as fast as milliseconds. This work investigates the apparent randomness of GRB time profiles, making extensive use of nonlinear techniques that combine the advanced spectral method of Singular Spectrum Analysis (SSA) with the classical tools of chaos theory. Despite their morphological complexity, we detect evidence of non-stochastic short-term variability during the overall burst duration - seemingly consistent with chaotic behavior. The phase space portrait of such variability shows the existence of a well-defined strange attractor underlying the erratic prompt emission structures. This scenario can shed new light on the ultra-relativistic processes believed to take place in GRB explosions, usually associated with the birth of a fast-spinning magnetar or the accretion of matter onto a newly formed black hole.

  15. An improved spanning tree approach for the reliability analysis of supply chain collaborative network

    NASA Astrophysics Data System (ADS)

    Lam, C. Y.; Ip, W. H.

    2012-11-01

    A higher degree of reliability in the collaborative network can increase the competitiveness and performance of an entire supply chain. As supply chain networks grow more complex, the consequences of unreliable behaviour become increasingly severe in terms of cost, effort and time. Moreover, it is computationally difficult to calculate the network reliability of a Non-deterministic Polynomial-time hard (NP-hard) all-terminal network using state enumeration, as this may require a huge number of iterations for topology optimisation. Therefore, this paper proposes an alternative approach, an improved spanning tree for reliability analysis, to help effectively evaluate and analyse the reliability of collaborative networks in supply chains and reduce the computational complexity relative to existing algorithms. Set theory is employed to evaluate and model the all-terminal reliability of the improved spanning tree algorithm, and a case study of a supply chain used in lamp production illustrates the application of the proposed approach.
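    The blow-up of state enumeration mentioned above is easy to see in code: exact all-terminal reliability requires summing over all 2^|E| edge states. The brute-force sketch below, on a toy triangle network, is illustrative only and is not the paper's improved spanning-tree algorithm.

```python
from itertools import combinations

def all_terminal_reliability(nodes, edges, p):
    """Exact all-terminal reliability by brute-force enumeration of the 2^|E| edge
    states, each edge independently up with probability p. Feasible only for tiny
    networks -- exactly the cost that motivates spanning-tree based methods."""
    nodes = set(nodes)

    def connected(up_edges):
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            v = stack.pop()
            for a, b in up_edges:
                w = b if a == v else a if b == v else None
                if w is not None and w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen == nodes

    rel = 0.0
    for k in range(len(edges) + 1):
        for up in combinations(edges, k):
            if connected(up):
                rel += p ** k * (1 - p) ** (len(edges) - k)
    return rel

# triangle: any 2 of 3 edges keep it connected, so R = 3 p^2 (1-p) + p^3 = 0.972 at p = 0.9
print(all_terminal_reliability({0, 1, 2}, [(0, 1), (1, 2), (0, 2)], 0.9))
```

Every extra edge doubles the number of states, which is why exact enumeration is abandoned in favour of structural approaches for realistic supply chain networks.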

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaul, Alexander; Holzinger, Dennis; Müglich, Nicolas David

    A magnetic domain texture has been deterministically engineered in a topographically flat exchange-biased (EB) thin film system. The texture consists of long-range periodically arranged unit cells of four individual domains, characterized by individual anisotropies, individual geometry, and with non-collinear remanent magnetizations. The texture has been engineered by a sequence of light-ion bombardment induced magnetic patterning of the EB layer system. The magnetic texture's in-plane spatial magnetization distribution and the corresponding domain walls have been characterized by scanning electron microscopy with polarization analysis (SEMPA). The influence of magnetic stray fields emerging from neighboring domain walls and the influence of the different anisotropies of the adjacent domains on the Néel type domain wall core's magnetization rotation sense and widths were investigated. It is shown that the usual energy degeneracy of clockwise and counterclockwise rotating magnetization through the walls is revoked, suppressing Bloch lines along the domain wall. Estimates of the domain wall widths for different domain configurations based on material parameters determined by vibrating sample magnetometry were quantitatively compared to the SEMPA data.

  17. Characteristics of Early Stages of Corrosion Fatigue in Aircraft Skin

    DOT National Transportation Integrated Search

    1996-02-01

    SRI International is conducting research to characterize and quantitatively describe the early stages of corrosion fatigue in the fuselage skin of commercial aircraft. Specific objectives are to gain an improved deterministic understanding of the tra...

  18. Investigating Neuromagnetic Brain Responses against Chromatic Flickering Stimuli by Wavelet Entropies

    PubMed Central

    Bhagat, Mayank; Bhushan, Chitresh; Saha, Goutam; Shimjo, Shinsuke; Watanabe, Katsumi; Bhattacharya, Joydeep

    2009-01-01

    Background Photosensitive epilepsy is a type of reflexive epilepsy triggered by various visual stimuli, including colourful ones. Despite the ubiquitous presence of colourful displays, brain responses to different colour combinations have not been properly studied. Methodology/Principal Findings Here, we studied the photosensitivity of the human brain to three types of chromatic flickering stimuli by recording neuromagnetic brain responses (magnetoencephalogram, MEG) from nine adult controls, an unmedicated patient, a medicated patient, and two controls age-matched with the patients. Dynamical complexities of the MEG signals were investigated by a family of wavelet entropies. Wavelet entropy is a newly proposed measure to characterize large-scale brain responses, quantifying the degree of order/disorder associated with a multi-frequency signal response. In particular, we found that, compared to the unmedicated patient, controls showed significantly larger wavelet entropy values. We also found that Renyi entropy is the most powerful feature for participant classification. Finally, we demonstrated the effect of combinational chromatic sensitivity on the underlying order/disorder in the MEG signals. Conclusions/Significance Our results suggest that when perturbed by a potentially epileptic-triggering stimulus, the healthy human brain manages to maintain a non-deterministic, possibly nonlinear state with a high degree of disorder, whereas an epileptic brain represents a highly ordered state, making it prone to hyper-excitation. Further, certain colour combinations were found to be more threatening than others. PMID:19779630
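    A wavelet entropy of the kind described can be sketched as the Shannon entropy of the relative signal energy across wavelet decomposition levels. The Haar-based implementation below is an illustrative stand-in for the paper's wavelet family: an ordered signal concentrates energy in few scales (low entropy), a disordered one spreads it out (high entropy).

```python
import numpy as np

def haar_wavelet_entropy(x, levels=5):
    """Shannon entropy of the relative energy across Haar detail levels."""
    approx = np.asarray(x, dtype=float)
    energies = []
    for _ in range(levels):
        if len(approx) < 2:
            break
        if len(approx) % 2:
            approx = np.append(approx, approx[-1])   # pad to even length
        detail = (approx[0::2] - approx[1::2]) / np.sqrt(2.0)
        approx = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)
        energies.append(float(np.sum(detail ** 2)))
    p = np.array(energies) / np.sum(energies)        # relative wavelet energies
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

h_sine = haar_wavelet_entropy(np.sin(2 * np.pi * np.arange(1024) / 8))   # ordered
h_noise = haar_wavelet_entropy(np.random.default_rng(1).normal(size=1024))  # disordered
print(h_sine, h_noise)   # the disordered signal yields the higher entropy
```

This mirrors the abstract's finding qualitatively: higher wavelet entropy corresponds to the more disordered, "healthier" dynamical state.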

  19. Investigating neuromagnetic brain responses against chromatic flickering stimuli by wavelet entropies.

    PubMed

    Bhagat, Mayank; Bhushan, Chitresh; Saha, Goutam; Shimjo, Shinsuke; Watanabe, Katsumi; Bhattacharya, Joydeep

    2009-09-25

    Photosensitive epilepsy is a type of reflexive epilepsy triggered by various visual stimuli, including colourful ones. Despite the ubiquitous presence of colourful displays, brain responses to different colour combinations have not been properly studied. Here, we studied the photosensitivity of the human brain to three types of chromatic flickering stimuli by recording neuromagnetic brain responses (magnetoencephalogram, MEG) from nine adult controls, an unmedicated patient, a medicated patient, and two controls age-matched with the patients. Dynamical complexities of the MEG signals were investigated by a family of wavelet entropies. Wavelet entropy is a newly proposed measure to characterize large-scale brain responses, quantifying the degree of order/disorder associated with a multi-frequency signal response. In particular, we found that, compared to the unmedicated patient, controls showed significantly larger wavelet entropy values. We also found that Renyi entropy is the most powerful feature for participant classification. Finally, we demonstrated the effect of combinational chromatic sensitivity on the underlying order/disorder in the MEG signals. Our results suggest that when perturbed by a potentially epileptic-triggering stimulus, the healthy human brain manages to maintain a non-deterministic, possibly nonlinear state with a high degree of disorder, whereas an epileptic brain represents a highly ordered state, making it prone to hyper-excitation. Further, certain colour combinations were found to be more threatening than others.

  20. On the holistic approach in cellular and cancer biology: nonlinearity, complexity, and quasi-determinism of the dynamic cellular network.

    PubMed

    Waliszewski, P; Molski, M; Konarski, J

    1998-06-01

    A keystone of the molecular reductionist approach to cellular biology is a specific deductive strategy relating genotype to phenotype - two distinct categories. This relationship is based on the assumption that the intermediary cellular network of actively transcribed genes and their regulatory elements is deterministic (i.e., a link between expression of a gene and a phenotypic trait can always be identified, and evolution of the network in time is predetermined). However, experimental data suggest that the relationship between genotype and phenotype is nonbijective (i.e., a gene can contribute to the emergence of more than just one phenotypic trait, or a phenotypic trait can be determined by expression of several genes). This implies nonlinearity (i.e., lack of a proportional relationship between input and outcome), complexity (i.e., emergence of a hierarchical network of multiple cross-interacting elements that is sensitive to initial conditions, possesses multiple equilibria, organizes spontaneously into different morphological patterns, and is controlled in a dispersed rather than a centralized manner), and quasi-determinism (i.e., coexistence of deterministic and nondeterministic events) of the network. Nonlinearity within the space of cellular molecular events underlies the existence of a fractal structure within a number of metabolic processes and patterns of tissue growth, which is measured experimentally as a fractal dimension. Because of its complexity, the same phenotype can be associated with a number of alternative sequences of cellular events. Moreover, the primary cause initiating phenotypic evolution of cells, such as malignant transformation, can be favored probabilistically but not identified unequivocally. Thermodynamic fluctuations of energy, rather than gene mutations, alter both the molecular and the informational structure of the network.
Then, the interplay between deterministic chaos, complexity, self-organization, and natural selection drives formation of malignant phenotype. This concept offers a novel perspective for investigation of tumorigenesis without invalidating current molecular findings. The essay integrates the ideas of the sciences of complexity in a biological context.

  1. Fatal and nonfatal risk associated with recycle of D&D-generated concrete

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boren, J.K.; Ayers, K.W.; Parker, F.L.

    1997-02-01

    As decontamination and decommissioning activities proceed within the U.S. Department of Energy Complex, vast volumes of uncontaminated and contaminated concrete will be generated. The current practice of decontaminating and landfilling the concrete is an expensive and potentially wasteful practice. Research is being conducted at Vanderbilt University to assess the economic, social, legal, and political ramifications of alternate methods of dealing with waste concrete. An important aspect of this research work is the assessment of risk associated with the various alternatives. A deterministic risk assessment model has been developed which quantifies radiological as well as non-radiological risks associated with concrete disposal and recycle activities. The risk model accounts for fatal as well as non-fatal risks to both workers and the public. Preliminary results indicate that recycling of concrete presents potentially lower risks than the current practice. Radiological considerations are shown to be of minor importance in comparison to other sources of risk, with conventional transportation fatalities and injuries dominating. Onsite activities can also be a major contributor to non-fatal risk.

  2. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information

    PubMed Central

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than the naive comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft’s algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
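    For comparison with the backward-depth approach, the classical partition-refinement idea it improves upon can be sketched in a few lines. This is Moore-style refinement (O(n²) worst case) on a complete DFA, an illustrative baseline rather than the paper's algorithm.

```python
def minimize_dfa(states, alphabet, delta, accepting):
    """Moore-style partition refinement for a complete DFA: start from the
    accepting/non-accepting split and refine until transition signatures stabilize."""
    part = {s: (s in accepting) for s in states}
    while True:
        # signature: own block plus the block each symbol leads to
        sig = {s: (part[s], tuple(part[delta[s][a]] for a in alphabet)) for s in states}
        blocks = {}
        for s in states:
            blocks.setdefault(sig[s], []).append(s)
        new_part = {s: i for i, members in enumerate(blocks.values()) for s in members}
        if len(set(new_part.values())) == len(set(part.values())):
            return new_part          # state -> block id of the minimal DFA
        part = new_part

# states 1 and 2 are equivalent: both accepting, looping between each other on 'a'
part = minimize_dfa({0, 1, 2}, ['a'], {0: {'a': 1}, 1: {'a': 2}, 2: {'a': 1}}, {1, 2})
print(len(set(part.values())))   # → 2 minimal states
```

The paper's backward-depth pre-partitioning can be read as a cheap way to make the initial partition much finer, so fewer refinement rounds are needed.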

  3. Deterministic ripple-spreading model for complex networks.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. Differently, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications.

  4. The "Chaos" Pattern in Piaget's Theory of Cognitive Development.

    ERIC Educational Resources Information Center

    Lindsay, Jean S.

    Piaget's theory of the cognitive development of the child is related to the recently developed non-linear "chaos" model. The term "chaos" refers to the tendency of dynamical, non-linear systems toward irregular, sometimes unpredictable, deterministic behavior. Piaget identified this same pattern in his model of cognitive…

  5. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing

    PubMed Central

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-01-01

    The unbalanced assignment problem (UAP) is to optimally assign n jobs to m individuals (m < n) such that the minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial-time (NP) complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and get the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and simultaneity to simplify the complexity of the computation. PMID:26512650
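    For a concrete feel of the UAP, a tiny instance can be solved by exhaustive search over all job-to-individual maps. The m^n enumeration below is exactly the exponential cost that motivates parallel approaches; this sketch assumes the variant where every individual must receive at least one job, which is an illustrative assumption rather than the paper's exact formulation.

```python
from itertools import product

def uap_min_cost(cost):
    """Brute-force unbalanced assignment: cost[i][j] is the cost of individual i
    doing job j; every job gets one individual, every individual gets >= 1 job."""
    m, n = len(cost), len(cost[0])
    best, best_map = float("inf"), None
    for assign in product(range(m), repeat=n):   # assign[j] = individual for job j
        if len(set(assign)) < m:                 # some individual left idle
            continue
        c = sum(cost[i][j] for j, i in enumerate(assign))
        if c < best:
            best, best_map = c, assign
    return best, best_map

# 2 individuals, 3 jobs: optimum gives jobs 0 and 1 to individual 1, job 2 to individual 0
cost = [[4, 1, 3],
        [2, 0, 5]]
print(uap_min_cost(cost))  # → (5, (1, 1, 0))
```

With m = 10 and n = 20 this loop already visits 10^20 maps, which is the scale at which heuristic or massively parallel (e.g. DNA-based) schemes become attractive.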

  6. Non-Gaussian Correlations between Reflected and Transmitted Intensity Patterns Emerging from Opaque Disordered Media

    NASA Astrophysics Data System (ADS)

    Starshynov, I.; Paniagua-Diaz, A. M.; Fayard, N.; Goetschy, A.; Pierrat, R.; Carminati, R.; Bertolotti, J.

    2018-04-01

    The propagation of monochromatic light through a scattering medium produces speckle patterns in reflection and transmission, and the apparent randomness of these patterns prevents direct imaging through thick turbid media. Yet, since elastic multiple scattering is fundamentally a linear and deterministic process, information is not lost but distributed among many degrees of freedom that can be resolved and manipulated. Here, we demonstrate experimentally that the reflected and transmitted speckle patterns are robustly correlated, and we unravel all the complex and unexpected features of this fundamentally non-Gaussian and long-range correlation. In particular, we show that it is preserved even for opaque media with thickness much larger than the scattering mean free path, proving that information survives the multiple scattering process and can be recovered. The existence of correlations between the two sides of a scattering medium opens up new possibilities for the control of transmitted light without any feedback from the target side, but using only information gathered from the reflected speckle.

  7. Efficient solution of the Wigner-Liouville equation using a spectral decomposition of the force field

    NASA Astrophysics Data System (ADS)

    Van de Put, Maarten L.; Sorée, Bart; Magnus, Wim

    2017-12-01

    The Wigner-Liouville equation is reformulated using a spectral decomposition of the classical force field instead of the potential energy. The latter is shown to simplify the Wigner-Liouville kernel both conceptually and numerically as the spectral force Wigner-Liouville equation avoids the numerical evaluation of the highly oscillatory Wigner kernel which is nonlocal in both position and momentum. The quantum mechanical evolution is instead governed by a term local in space and non-local in momentum, where the non-locality in momentum has only a limited range. An interpretation of the time evolution in terms of two processes is presented; a classical evolution under the influence of the averaged driving field, and a probability-preserving quantum-mechanical generation and annihilation term. Using the inherent stability and reduced complexity, a direct deterministic numerical implementation using Chebyshev and Fourier pseudo-spectral methods is detailed. For the purpose of illustration, we present results for the time-evolution of a one-dimensional resonant tunneling diode driven out of equilibrium.
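    The Fourier pseudo-spectral machinery mentioned at the end can be illustrated with its basic building block: a spectrally accurate derivative on a periodic grid. This is a generic sketch of the method, not the paper's full Wigner-Liouville discretization.

```python
import numpy as np

def fourier_derivative(f, L):
    """Pseudo-spectral derivative of a periodic sample: multiply by i*k in Fourier space."""
    n = len(f)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
err = np.max(np.abs(fourier_derivative(np.sin(x), 2.0 * np.pi) - np.cos(x)))
print(err)   # near machine precision, vs. O(h^2) for a centered finite difference
```

For smooth periodic data the error decays faster than any power of the grid spacing, which is why pseudo-spectral schemes suit the oscillatory structure of Wigner functions.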

  8. [Radiotherapy and chaos theory: the tit bird and the butterfly...].

    PubMed

    Denis, F; Letellier, C

    2012-09-01

    Although the same simple laws govern cancer outcome (cell division repeated again and again), each tumour has a different outcome, before as well as after irradiation therapy. The linear-quadratic radiosensitivity model allows an assessment of tumour sensitivity to radiotherapy. This model presents some limitations in clinical practice because it does not take into account the interactions between tumour cells and non-tumoral bystander cells (such as endothelial cells, fibroblasts, immune cells...) that modulate radiosensitivity and tumour growth dynamics. These interactions can lead to non-linear and complex tumour growth that appears random but is not, since there are not so many tumours that spontaneously regress. In this paper we propose to develop a deterministic approach to tumour growth dynamics using chaos theory. Various characteristics of cancer dynamics and tumour radiosensitivity can be explained using mathematical models of competing cell species. Copyright © 2012 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
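    The linear-quadratic model referred to above has a one-line form, S(D) = exp(-(αD + βD²)). The α and β values below are common textbook-style assumptions (α/β = 10 Gy), not values from the paper.

```python
import math

def lq_survival(dose, alpha=0.3, beta=0.03):
    """Linear-quadratic cell-survival fraction S = exp(-(alpha*D + beta*D^2)).
    alpha [1/Gy] and beta [1/Gy^2] are illustrative; alpha/beta = 10 Gy here."""
    return math.exp(-(alpha * dose + beta * dose ** 2))

s1 = lq_survival(2.0)    # one 2 Gy fraction
print(s1, s1 ** 30)      # 30 daily fractions, assuming full repair between fractions
```

The abstract's point is that this cell-autonomous model ignores bystander interactions, which is what motivates the chaos-theoretic extension.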

  9. The Bilinear Product Model of Hysteresis Phenomena

    NASA Astrophysics Data System (ADS)

    Kádár, György

    1989-01-01

    In ferromagnetic materials non-reversible magnetization processes are represented by rather complex hysteresis curves. The phenomenological description of such curves needs the use of multi-valued, yet unambiguous, deterministic functions. The history dependent calculation of consecutive Everett-integrals of the two-variable Preisach-function can account for the main features of hysteresis curves in uniaxial magnetic materials. The traditional Preisach model has recently been modified on the basis of population dynamics considerations, removing the non-real congruency property of the model. The Preisach-function was proposed to be a product of two factors of distinct physical significance: a magnetization dependent function taking into account the overall magnetization state of the body and a bilinear form of a single variable, magnetic field dependent, switching probability function. The most important statement of the bilinear product model is, that the switching process of individual particles is to be separated from the book-keeping procedure of their states. This empirical model of hysteresis can easily be extended to other irreversible physical processes, such as first order phase transitions.
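    A minimal classical Preisach model (a grid of rectangular hysterons whose average gives the magnetization) reproduces the multi-valued, history-dependent curves the abstract describes. The bilinear product refinement itself is not implemented in this illustrative sketch, which uses the traditional uniform hysteron grid.

```python
import numpy as np

def preisach(field_history, n=60):
    """Classical Preisach sketch: hysterons with up-threshold a >= down-threshold b;
    magnetization is the mean hysteron state after sweeping the applied field."""
    a, b = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    mask = a >= b                       # valid hysterons lie on the half-plane a >= b
    state = np.where(mask, -1.0, 0.0)   # start saturated "down"
    curve = []
    for h in field_history:
        state[mask & (h >= a)] = 1.0    # switch up past the upper threshold
        state[mask & (h <= b)] = -1.0   # switch down past the lower threshold
        curve.append(state[mask].mean())
    return curve

up = np.linspace(-1.2, 1.2, 50)
loop = preisach(np.concatenate([up, up[::-1]]))   # field sweep up, then back down
print(loop[49], loop[-1])   # saturated up at the sweep top, saturated down at the end
```

Because each hysteron remembers its last switching, the up- and down-sweep branches differ at the same field value, giving the unambiguous multi-valued curves described above.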

  10. Extraction of angle deterministic signals in the presence of stationary speed fluctuations with cyclostationary blind source separation

    NASA Astrophysics Data System (ADS)

    Delvecchio, S.; Antoni, J.

    2012-02-01

    This paper addresses the use of a cyclostationary blind source separation algorithm (namely RRCR) to extract angle deterministic signals from mechanical rotating machines in the presence of stationary speed fluctuations. This means that only phase fluctuations while the machine is running in steady-state conditions are considered; run-up or run-down speed variations are not taken into account. The machine is also supposed to run in idle conditions, so non-stationary phenomena due to the load are not considered. It is theoretically assessed that in such operating conditions the deterministic (periodic) signal in the angle domain becomes cyclostationary at first and second orders in the time domain. This fact justifies the use of the RRCR algorithm, which is able to directly extract the angle deterministic signal from the time domain without performing any kind of interpolation. This is particularly valuable when angular resampling fails because of uncontrolled speed fluctuations. The capability of the proposed approach is verified by means of simulated and actual vibration signals captured on a pneumatic screwdriver handle. In this particular case, not only can the angle deterministic part be extracted, but the main sources of excitation (i.e. motor shaft imbalance, epicycloidal gear meshing and air pressure forces) affecting the user's hand during operation can also be separated.

  11. Field-free deterministic ultrafast creation of magnetic skyrmions by spin-orbit torques

    NASA Astrophysics Data System (ADS)

    Büttner, Felix; Lemesh, Ivan; Schneider, Michael; Pfau, Bastian; Günther, Christian M.; Hessing, Piet; Geilhufe, Jan; Caretta, Lucas; Engel, Dieter; Krüger, Benjamin; Viefhaus, Jens; Eisebitt, Stefan; Beach, Geoffrey S. D.

    2017-11-01

    Magnetic skyrmions are stabilized by a combination of external magnetic fields, stray field energies, higher-order exchange interactions and the Dzyaloshinskii-Moriya interaction (DMI). The last favours homochiral skyrmions, whose motion is driven by spin-orbit torques and is deterministic, which makes systems with a large DMI relevant for applications. Asymmetric multilayers of non-magnetic heavy metals with strong spin-orbit interactions and transition-metal ferromagnetic layers provide a large and tunable DMI. Also, the non-magnetic heavy metal layer can inject a vertical spin current with transverse spin polarization into the ferromagnetic layer via the spin Hall effect. This leads to torques that can be used to switch the magnetization completely in out-of-plane magnetized ferromagnetic elements, but the switching is deterministic only in the presence of a symmetry-breaking in-plane field. Although spin-orbit torques led to domain nucleation in continuous films and to stochastic nucleation of skyrmions in magnetic tracks, no practical means to create individual skyrmions controllably in an integrated device design at a selected position has been reported yet. Here we demonstrate that sub-nanosecond spin-orbit torque pulses can generate single skyrmions at custom-defined positions in a magnetic racetrack deterministically using the same current path as used for the shifting operation. The effect of the DMI implies that no external in-plane magnetic fields are needed for this aim. This implementation exploits a defect, such as a constriction in the magnetic track, that can serve as a skyrmion generator. The concept is applicable to any track geometry, including three-dimensional designs.

  12. Developing Stochastic Models as Inputs for High-Frequency Ground Motion Simulations

    NASA Astrophysics Data System (ADS)

    Savran, William Harvey

    High-frequency (10 Hz) deterministic ground motion simulations are challenged by our limited understanding of the small-scale structure of the earth's crust and of the rupture process during an earthquake. We will likely never obtain deterministic models that can accurately describe these processes down to the meter-scale lengths required for broadband wave propagation. Instead, we can attempt to explain the behavior, in a statistical sense, by including stochastic models defined by correlations observed in the natural earth and through physics-based simulations of the earthquake rupture process. Toward this goal, we develop stochastic models to address both of the primary considerations for deterministic ground motion simulations: namely, the description of the material properties in the crust, and broadband earthquake source descriptions. Using borehole sonic log data recorded in the Los Angeles basin, we estimate the spatial correlation structure of the small-scale fluctuations in P-wave velocities by determining the best-fitting parameters of a von Karman correlation function. We find that Hurst exponents, nu, between 0.0 and 0.2, vertical correlation lengths, az, of 15-150 m, and a standard deviation, sigma, of about 5% characterize the variability in the borehole data. Using these parameters, we generated a stochastic model of velocity and density perturbations, combined it with leading seismic velocity models, and performed a validation exercise for the 2008 Chino Hills, CA, earthquake using heterogeneous media. We find that models of velocity and density perturbations can have significant effects on the wavefield at frequencies as low as 0.3 Hz, with ensemble median values of various ground motion metrics varying up to +/-50%, at certain stations, compared to those computed solely from the CVM. Finally, we develop a kinematic rupture generator based on dynamic rupture simulations on geometrically complex faults.
We analyze 100 dynamic rupture simulations on strike-slip faults ranging from Mw 6.4-7.2. We find that our dynamic simulations follow empirical scaling relationships for inter-plate strike-slip events, and provide source spectra comparable with an omega-squared model. Our rupture generator reproduces GMPE medians and intra-event standard deviations of spectral accelerations for an ensemble of 10 Hz fully deterministic ground motion simulations, as compared to NGA-West2 GMPE relationships up to 0.2 seconds.
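
    The von Karman parameterization described above can be illustrated with a short sketch (not the author's code): a 1-D perturbation field is generated by shaping white noise with the square root of a von Karman-type power spectrum, P(k) proportional to (1 + k^2 az^2)^-(nu + 1/2), then rescaling to the target standard deviation. Function and parameter names are illustrative assumptions.

```python
import numpy as np

def von_karman_field(n, dz, az, nu, sigma, seed=0):
    """Generate a 1-D Gaussian random field whose power spectrum follows a
    von Karman correlation model: P(k) ~ (1 + k^2 az^2)^-(nu + 0.5)."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=dz) * 2 * np.pi          # angular wavenumbers
    amp = (1.0 + (k * az) ** 2) ** (-(nu + 0.5) / 2)  # sqrt of the power spectrum
    spec = np.fft.rfft(rng.standard_normal(n)) * amp  # shape white noise in k-space
    field = np.fft.irfft(spec, n)
    field *= sigma / field.std()                      # rescale to target std dev
    return field

# Parameters in the range reported for the Los Angeles basin borehole logs
perturbation = von_karman_field(n=4096, dz=1.0, az=50.0, nu=0.1, sigma=0.05)
```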

  13. Probability and Locality: Determinism Versus Indeterminism in Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Dickson, William Michael

    1995-01-01

    Quantum mechanics is often taken to be necessarily probabilistic. However, this view of quantum mechanics appears to be more the result of historical accident than of careful analysis. Moreover, quantum mechanics in its usual form faces serious problems. Although the mathematical core of quantum mechanics--quantum probability theory--does not face conceptual difficulties, the application of quantum probability to the physical world leads to problems. In particular, quantum mechanics seems incapable of describing our everyday macroscopic experience. Therefore, several authors have proposed new interpretations--including (but not limited to) modal interpretations, spontaneous localization interpretations, the consistent histories approach, and the Bohm theory--each of which deals with quantum-mechanical probabilities differently. Each of these interpretations promises to describe our macroscopic experience and, arguably, each succeeds. Is there any way to compare them? Perhaps, if we turn to another troubling aspect of quantum mechanics, non-locality. Non-locality is troubling because prima facie it threatens the compatibility of quantum mechanics with special relativity. This prima facie threat is mitigated by the no-signalling theorems in quantum mechanics, but nonetheless one may find a 'conflict of spirit' between non-locality in quantum mechanics and special relativity. Do any of these interpretations resolve this conflict of spirit? There is a strong relation between how an interpretation deals with quantum-mechanical probabilities and how it deals with non-locality. The main argument here is that only a completely deterministic interpretation can be completely local. That is, locality together with the empirical predictions of quantum mechanics (specifically, its strict correlations) entails determinism.
But even with this entailment in hand, comparison of the various interpretations requires a look at each, to see how non-locality arises, or in the case of deterministic interpretations, whether it arises. The result of this investigation is that, at the least, deterministic interpretations are no worse off with respect to special relativity than indeterministic interpretations. This conclusion runs against a common view that deterministic interpretations, specifically the Bohm theory, have more difficulty with special relativity than other interpretations.

  14. Location of coating defects and assessment of level of cathodic protection on underground pipelines using AC impedance, deterministic and non-deterministic models

    NASA Astrophysics Data System (ADS)

    Castaneda-Lopez, Homero

    A methodology for detecting and locating defects or discontinuities on the outside covering of coated metal underground pipelines subjected to cathodic protection has been addressed. On the basis of wide range AC impedance signals for various frequencies applied to a steel-coated pipeline system and by measuring its corresponding transfer function under several laboratory simulation scenarios, a physical laboratory setup of an underground cathodic-protected, coated pipeline was built. This model included different variables and elements that exist under real conditions, such as soil resistivity, soil chemical composition, defect (holiday) location in the pipeline covering, defect area and geometry, and level of cathodic protection. The AC impedance data obtained under different working conditions were used to fit an electrical transmission line model. This model was then used as a tool to fit the impedance signal for different experimental conditions and to establish trends in the impedance behavior without the necessity of further experimental work. However, due to the chaotic nature of the transfer function response of this system under several conditions, it is believed that non-deterministic models based on pattern recognition algorithms are suitable for field condition analysis. A non-deterministic approach was used for experimental analysis by applying an artificial neural network (ANN) algorithm based on classification analysis capable of studying the pipeline system and differentiating the variables that can change impedance conditions. These variables include level of cathodic protection, location of discontinuities (holidays), and severity of corrosion. This work demonstrated a proof-of-concept for a well-known technique and a novel algorithm capable of classifying impedance data for experimental results to predict the exact location of the active holidays and defects on the buried pipelines. 
Laboratory findings from this procedure are promising, and efforts to develop it for field conditions should continue.

  15. Chaos emerging in soil failure patterns observed during tillage: Normalized deterministic nonlinear prediction (NDNP) and its application.

    PubMed

    Sakai, Kenshi; Upadhyaya, Shrinivasa K; Andrade-Sanchez, Pedro; Sviridova, Nina V

    2017-03-01

    Real-world processes are often combinations of deterministic and stochastic processes. Soil failure observed during farm tillage is one example of this phenomenon. In this paper, we investigated the nonlinear features of soil failure patterns in a farm tillage process. We demonstrate emerging determinism in soil failure patterns from stochastic processes under specific soil conditions. We normalized the deterministic nonlinear prediction considering autocorrelation and propose it as a robust way of extracting a nonlinear dynamical system from noise-contaminated motion. Soil is a typical granular material, so the results obtained here are expected to be applicable to granular materials in general. From the global scale to the nanoscale, granular materials feature in seismology, geotechnology, soil mechanics, and particle technology, and the results and discussion presented here are applicable across these research areas. The proposed method and our findings are useful for applying nonlinear dynamics to investigate complex motions generated by granular materials.
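
    The core idea of deterministic nonlinear prediction (before the normalization step the paper proposes) can be sketched as a nearest-neighbour forecast in a delay-embedded space: high prediction skill signals determinism, low skill signals stochasticity. This is a generic illustration, not the authors' NDNP implementation.

```python
import numpy as np

def nn_prediction_skill(x, dim=2, horizon=1):
    """Nearest-neighbour nonlinear prediction: embed the series with delay 1,
    predict each point as the successor of its nearest neighbour (self excluded),
    and return the correlation between predictions and true values."""
    n = len(x) - (dim - 1) - horizon
    emb = np.column_stack([x[i:i + n] for i in range(dim)])
    target = np.asarray(x[(dim - 1) + horizon:(dim - 1) + horizon + n])
    preds = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                       # a point may not predict itself
        preds[i] = target[np.argmin(d)]
    return np.corrcoef(preds, target)[0, 1]

# Deterministic (chaotic logistic map) vs purely stochastic (white noise) series
x = np.empty(1000); x[0] = 0.3
for k in range(999):
    x[k + 1] = 4.0 * x[k] * (1.0 - x[k])
skill_chaos = nn_prediction_skill(x)
skill_noise = nn_prediction_skill(np.random.default_rng(0).standard_normal(1000))
```

    The chaotic series is predictable one step ahead despite looking irregular, while the white-noise series is not; this contrast is what such tests exploit to detect emerging determinism.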

  16. Deterministic Integration of Quantum Dots into on-Chip Multimode Interference Beamsplitters Using in Situ Electron Beam Lithography.

    PubMed

    Schnauber, Peter; Schall, Johannes; Bounouar, Samir; Höhne, Theresa; Park, Suk-In; Ryu, Geun-Hwan; Heindel, Tobias; Burger, Sven; Song, Jin-Dong; Rodt, Sven; Reitzenstein, Stephan

    2018-04-11

    The development of multinode quantum optical circuits has attracted great attention in recent years. In particular, interfacing quantum-light sources, gates, and detectors on a single chip is highly desirable for the realization of large networks. In this context, fabrication techniques that enable the deterministic integration of preselected quantum-light emitters into nanophotonic elements play a key role when moving forward to circuits containing multiple emitters. Here, we present the deterministic integration of an InAs quantum dot into a 50/50 multimode interference beamsplitter via in situ electron beam lithography. We demonstrate the combined emitter-gate interface functionality by measuring triggered single-photon emission on-chip with g^(2)(0) = 0.13 ± 0.02. Due to its high patterning resolution as well as spectral and spatial control, in situ electron beam lithography allows for integration of preselected quantum emitters into complex photonic systems. Being a scalable single-step approach, it paves the way toward multinode, fully integrated quantum photonic chips.

  17. A deterministic and stochastic model for the system dynamics of tumor-immune responses to chemotherapy

    NASA Astrophysics Data System (ADS)

    Liu, Xiangdong; Li, Qingze; Pan, Jianxin

    2018-06-01

    Modern medical studies show that chemotherapy can help most cancer patients, especially those diagnosed early, to stabilize their disease condition for months to years, meaning that the population of tumor cells remains nearly unchanged for quite a long time after fighting against the immune system and drugs. In order to better understand the dynamics of tumor-immune responses under chemotherapy, deterministic and stochastic differential equation models are constructed in this paper to characterize the dynamical change of tumor cells and immune cells. The basic dynamical properties, such as boundedness and the existence and stability of equilibrium points, are investigated in the deterministic model. Extended stochastic models include a stochastic differential equation (SDE) model and a continuous-time Markov chain (CTMC) model, which account for the variability in cellular reproduction, growth and death, interspecific competition, and immune response to chemotherapy. The CTMC model is harnessed to estimate the extinction probability of tumor cells. Numerical simulations are performed, which confirm the obtained theoretical results.
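
    The deterministic/SDE pairing described above can be illustrated with a toy sketch (a single logistic growth term only, not the paper's full tumor-immune system): an Euler-Maruyama scheme whose noise intensity sigma recovers the deterministic model when set to zero.

```python
import numpy as np

def simulate(r=0.5, K=1e3, T0=10.0, sigma=0.0, dt=0.01, steps=4000, seed=1):
    """Euler-Maruyama path of dT = r T (1 - T/K) dt + sigma T dW.
    With sigma = 0 this reduces to the deterministic logistic model."""
    rng = np.random.default_rng(seed)
    T = np.empty(steps + 1)
    T[0] = T0
    for k in range(steps):
        drift = r * T[k] * (1 - T[k] / K)
        diff = sigma * T[k] * np.sqrt(dt) * rng.standard_normal()
        T[k + 1] = max(T[k] + drift * dt + diff, 0.0)  # population stays non-negative
    return T

det = simulate(sigma=0.0)   # deterministic limit: settles at the carrying capacity K
sto = simulate(sigma=0.2)   # one stochastic realization fluctuating around it
```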

  18. How the growth rate of host cells affects cancer risk in a deterministic way

    NASA Astrophysics Data System (ADS)

    Draghi, Clément; Viger, Louise; Denis, Fabrice; Letellier, Christophe

    2017-09-01

    It is well known that cancers are encountered significantly more often in some tissues than in others. In this paper, by using a deterministic model describing the interactions between host, effector immune and tumor cells at the tissue level, we show that this can be explained by the dependence of tumor growth on parameter values characterizing the type as well as the state of the tissue considered, which in turn reflect the "way of life" (environmental factors, food consumption, drinking or smoking habits, etc.). Our approach is purely deterministic and, consequently, the strong correlation (r = 0.99) between the number of detectable growing tumors and the growth rate of cells from the nesting tissue can be explained without invoking random mutations arising during DNA replication in nonmalignant cells or "bad luck". Strategies to limit the mortality induced by cancer could therefore well be based on improving the way of life, that is, by better preserving the tissue where mutant cells randomly arise.

  19. Analysis of deterministic swapping of photonic and atomic states through single-photon Raman interaction

    NASA Astrophysics Data System (ADS)

    Rosenblum, Serge; Borne, Adrien; Dayan, Barak

    2017-03-01

    The long-standing goal of deterministic quantum interactions between single photons and single atoms was recently realized in various experiments. Among these, an appealing demonstration relied on single-photon Raman interaction (SPRINT) in a three-level atom coupled to a single-mode waveguide. In essence, the interference-based process of SPRINT deterministically swaps the qubits encoded in a single photon and a single atom, without the need for additional control pulses. It can also be harnessed to construct passive entangling quantum gates, and can therefore form the basis for scalable quantum networks in which communication between the nodes is carried out only by single-photon pulses. Here we present an analytical and numerical study of SPRINT, characterizing its limitations and defining parameters for its optimal operation. Specifically, we study the effect of losses, imperfect polarization, and the presence of multiple excited states. In all cases we discuss strategies for restoring the operation of SPRINT.

  20. Kinematic source inversions of teleseismic data based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.

    2014-12-01

    One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, these inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO that was developed at ICES (UT Austin). The approach has advantages over deterministic inversion approaches as it provides not only a single (non-unique) solution but also uncertainty bounds with it. Those uncertainty bounds help to qualitatively and quantitatively judge how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only teleseismically recorded body waves, but future developments may lead us towards joint inversion schemes. After giving an insight into the inversion scheme itself (based on delayed rejection adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate teleseismic data, add, for example, different levels of noise and/or change the fault plane parameterization, and then apply our inversion scheme in the attempt to extract the (known) kinematic rupture model. We conclude by inverting, as an example, real teleseismic data of a recent large earthquake and comparing the results with deterministically derived kinematic source models provided by other research groups.
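
    DRAM extends the basic Metropolis algorithm with delayed rejection and proposal adaptation; the core accept/reject loop it builds on can be sketched for a toy one-parameter "inversion". The Gaussian toy problem and all names below are illustrative assumptions, not code from QUESO.

```python
import numpy as np

def metropolis(log_post, x0, prop_std, n_samples, seed=0):
    """Plain Metropolis sampler (DRAM adds delayed rejection and
    proposal adaptation on top of this core loop)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    chain = np.empty(n_samples)
    for i in range(n_samples):
        cand = x + prop_std * rng.standard_normal()
        lp_cand = log_post(cand)
        if np.log(rng.random()) < lp_cand - lp:   # accept/reject step
            x, lp = cand, lp_cand
        chain[i] = x
    return chain

# Toy "inversion": recover a slip amplitude m from noisy data d = m + noise
rng = np.random.default_rng(42)
true_m, noise_std = 2.0, 0.5
data = true_m + noise_std * rng.standard_normal(50)
log_post = lambda m: -0.5 * np.sum((data - m) ** 2) / noise_std**2
chain = metropolis(log_post, x0=0.0, prop_std=0.3, n_samples=5000)
posterior_mean = chain[2000:].mean()   # discard burn-in
```

    The spread of `chain` after burn-in is the uncertainty bound the abstract refers to, something a single deterministic best-fit solution does not provide.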

  1. Substrate growth dynamics and biomineralization of an Ediacaran encrusting poriferan.

    PubMed

    Wood, Rachel; Penny, Amelia

    2018-01-10

    The ability to encrust in order to secure and maintain growth on a substrate is a key competitive innovation in benthic metazoans. Here we describe the substrate growth dynamics, mode of biomineralization and possible affinity of Namapoikia rietoogensis, a large (up to 1 m), robustly skeletal, and modular Ediacaran metazoan which encrusted the walls of synsedimentary fissures within microbial-metazoan reefs. Namapoikia formed laminar or domal morphologies with an internal structure of open tubules and transverse elements, and had a very plastic, non-deterministic growth form which could encrust both fully lithified surfaces as well as living microbial substrates, the latter via modified skeletal holdfasts. Namapoikia shows complex growth interactions and substrate competition with contemporary living microbialites and thrombolites, including the production of plate-like dissepiments in response to microbial overgrowth which served to elevate soft tissue above the microbial surface. Namapoikia could also recover from partial mortality due to microbial fouling. We infer initial skeletal growth to have propagated via the rapid formation of an organic scaffold via a basal pinacoderm prior to calcification. This is likely an ancient mode of biomineralization with similarities to the living calcified demosponge Vaceletia. Namapoikia also shows inferred skeletal growth banding which, combined with its large size, implies notable individual longevity. In sum, Namapoikia was a large, relatively long-lived Ediacaran clonal skeletal metazoan that propagated via an organic scaffold prior to calcification, enabling rapid, effective and dynamic substrate occupation and competition in cryptic reef settings. The open tubular internal structure, highly flexible, non-deterministic skeletal organization, and inferred style of biomineralization of Namapoikia places probable affinity within total-group poriferans. © 2018 The Author(s).

  2. Hydraulic tomography of discrete networks of conduits and fractures in a karstic aquifer by using a deterministic inversion algorithm

    NASA Astrophysics Data System (ADS)

    Fischer, P.; Jardani, A.; Lecoq, N.

    2018-02-01

    In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flows numerically in a model, and on a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and geometry and equivalent transmissivity of the conduits) that are considered unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. This method has been successfully tested on three different simplified theoretical study cases with hydraulic response data generated from hypothetical karstic models with increasing complexity of the network geometry and of the matrix heterogeneity.

  3. Modeling the within-host dynamics of cholera: bacterial-viral interaction.

    PubMed

    Wang, Xueying; Wang, Jin

    2017-08-01

    Novel deterministic and stochastic models are proposed in this paper for the within-host dynamics of cholera, with a focus on the bacterial-viral interaction. The deterministic model is a system of differential equations describing the interaction among the two types of vibrios and the viruses. The stochastic model is a system of Markov jump processes that is derived based on the dynamics of the deterministic model. The multitype branching process approximation is applied to estimate the extinction probability of bacteria and viruses within a human host during the early stage of the bacterial-viral infection. Accordingly, a closed-form expression is derived for the disease extinction probability, and analytic estimates are validated with numerical simulations. The local and global dynamics of the bacterial-viral interaction are analysed using the deterministic model, and the result indicates that there is a sharp disease threshold characterized by the basic reproduction number R0: if R0 < 1, vibrios ingested from the environment into the human body will not cause cholera infection; if R0 > 1, vibrios will grow with increased toxicity and persist within the host, leading to human cholera. In contrast, the stochastic model indicates, more realistically, that there is always a positive probability of disease extinction within the human host.
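
    The branching-process calculation behind such extinction estimates can be sketched for a single type: the extinction probability is the smallest fixed point of the offspring probability generating function, found by iterating from zero. The offspring distribution below is an arbitrary example, not taken from the paper.

```python
def extinction_probability(pgf, tol=1e-12, max_iter=10000):
    """Smallest root of q = f(q) for a branching process:
    iterate q_{n+1} = f(q_n) starting from q_0 = 0."""
    q = 0.0
    for _ in range(max_iter):
        q_next = pgf(q)
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

# Example offspring law: P(0 offspring) = 0.25, P(2 offspring) = 0.75,
# so f(s) = 0.25 + 0.75 s^2 and the smallest fixed point is q = 1/3.
q = extinction_probability(lambda s: 0.25 + 0.75 * s * s)
```

    Here the mean offspring number is 1.5 > 1, so the process is supercritical, yet (as the abstract notes for the stochastic model) extinction still occurs with positive probability.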

  4. Random attractor of non-autonomous stochastic Boussinesq lattice system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Min, E-mail: zhaomin1223@126.com; Zhou, Shengfan, E-mail: zhoushengfan@yahoo.com

    2015-09-15

    In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations with time-dependent coupled coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of random attractors as the intensity of the noise approaches zero.

  5. Nonlinear unitary quantum collapse model with self-generated noise

    NASA Astrophysics Data System (ADS)

    Geszti, Tamás

    2018-04-01

    Collapse models including some external noise of unknown origin are routinely used to describe phenomena on the quantum-classical border; in particular, quantum measurement. Although containing nonlinear dynamics and thereby exposed to the possibility of superluminal signaling in individual events, such models are widely accepted on the basis of fully reproducing the non-signaling statistical predictions of quantum mechanics. Here we present a deterministic nonlinear model without any external noise, in which randomness—instead of being universally present—emerges in the measurement process, from deterministic irregular dynamics of the detectors. The treatment is based on a minimally nonlinear von Neumann equation for a Stern–Gerlach or Bell-type measuring setup, containing coordinate and momentum operators in a self-adjoint skew-symmetric, split scalar product structure over the configuration space. The microscopic states of the detectors act as a nonlocal set of hidden parameters, controlling individual outcomes. The model is shown to display pumping of weights between setup-defined basis states, with a single winner randomly selected and the rest collapsing to zero. Environmental decoherence has no role in the scenario. Through stochastic modelling, based on Pearle’s ‘gambler’s ruin’ scheme, outcome probabilities are shown to obey Born’s rule under a no-drift or ‘fair-game’ condition. This fully reproduces quantum statistical predictions, implying that the proposed non-linear deterministic model satisfies the non-signaling requirement. Our treatment is still vulnerable to hidden signaling in individual events, which remains to be handled by future research.
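
    Pearle's "gambler's ruin" picture under the no-drift ("fair-game") condition can be checked with a small linear-algebra sketch (illustrative only, not the paper's model): for an unbiased walk, the probability of absorbing all the weight is exactly proportional to the initial weight, which is how Born's rule emerges in this scheme.

```python
import numpy as np

def win_probabilities(N):
    """Absorption-at-N probabilities for a fair gambler's ruin on {0,...,N}:
    p_0 = 0, p_N = 1, and p_i = (p_{i-1} + p_{i+1}) / 2 for interior states."""
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0] = 1.0          # boundary: ruined with weight 0
    A[N, N] = 1.0          # boundary: won with weight N
    b[N] = 1.0
    for i in range(1, N):  # fair-game balance at interior states
        A[i, i - 1], A[i, i], A[i, i + 1] = -0.5, 1.0, -0.5
    return np.linalg.solve(A, b)

p = win_probabilities(10)  # under the fair-game condition, p_i = i / N
```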

  6. Evaluation of the selection methods used in the exIWO algorithm based on the optimization of multidimensional functions

    NASA Astrophysics Data System (ADS)

    Kostrzewa, Daniel; Josiński, Henryk

    2016-06-01

    The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO version, which is inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for the selection of individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for the optimization of multidimensional numerical functions. The optimized functions (Griewank, Rastrigin, and Rosenbrock) are frequently used as benchmarks because of their characteristics.
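
    The three benchmark functions mentioned have standard closed forms, each with global minimum 0 (at the origin for Griewank and Rastrigin, and at (1, ..., 1) for Rosenbrock). A minimal sketch of their usual definitions:

```python
import numpy as np

def rastrigin(x):
    """Highly multimodal; global minimum 0 at x = 0."""
    x = np.asarray(x, float)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def rosenbrock(x):
    """Narrow curved valley; global minimum 0 at x = (1, ..., 1)."""
    x = np.asarray(x, float)
    return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

def griewank(x):
    """Many regularly spaced local minima; global minimum 0 at x = 0."""
    x = np.asarray(x, float)
    i = np.arange(1, x.size + 1)
    return 1 + np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))
```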

  7. How synapses can enhance sensibility of a neural network

    NASA Astrophysics Data System (ADS)

    Protachevicz, P. R.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Baptista, M. S.; Viana, R. L.; Lameu, E. L.; Macau, E. E. N.; Batista, A. M.

    2018-02-01

    In this work, we study the dynamic range in a neural network modelled by a cellular automaton. We consider deterministic and non-deterministic rules to simulate electrical and chemical synapses. Chemical synapses have an intrinsic time delay and are susceptible to parameter variations guided by Hebbian learning rules of behaviour. The learning rules are related to neuroplasticity, which describes change in the neural connections of the brain. Our results show that chemical synapses can abruptly enhance the sensibility of the neural network, a manifestation that can become even more predominant if learning rules of evolution are applied to the chemical synapses.

  8. Long Non-Coding RNAs (lncRNAs) of Sea Cucumber: Large-Scale Prediction, Expression Profiling, Non-Coding Network Construction, and lncRNA-microRNA-Gene Interaction Analysis of lncRNAs in Apostichopus japonicus and Holothuria glaberrima During LPS Challenge and Radial Organ Complex Regeneration.

    PubMed

    Mu, Chuang; Wang, Ruijia; Li, Tianqi; Li, Yuqiang; Tian, Meilin; Jiao, Wenqian; Huang, Xiaoting; Zhang, Lingling; Hu, Xiaoli; Wang, Shi; Bao, Zhenmin

    2016-08-01

    Long non-coding RNA (lncRNA) structurally resembles mRNA but cannot be translated into protein. Although the systematic identification and characterization of lncRNAs have been increasingly reported in model species, information concerning non-model species is still lacking. Here, we report the first systematic identification and characterization of lncRNAs in two sea cucumber species: (1) Apostichopus japonicus during lipopolysaccharide (LPS) challenge and in healthy tissues and (2) Holothuria glaberrima during radial organ complex regeneration, using RNA-seq datasets and bioinformatics analysis. We identified A. japonicus and H. glaberrima lncRNAs that were differentially expressed during LPS challenge and radial organ complex regeneration, respectively. Notably, the predicted lncRNA-microRNA-gene trinities revealed that, in addition to targeting protein-coding transcripts, miRNAs might also target lncRNAs, thereby participating in a potential novel layer of regulatory interactions among non-coding RNA classes in echinoderms. Furthermore, the constructed coding-non-coding network implied the potential involvement of lncRNA-gene interactions during the regulation of several important genes (e.g., Toll-like receptor 1 [TLR1] and transglutaminase-1 [TGM1]) in response to LPS challenge and radial organ complex regeneration in sea cucumbers. Overall, this pioneering systematic identification, annotation, and characterization of lncRNAs in echinoderms paves the way for similar studies and future genetic, genomic, and evolutionary research in non-model species.

  9. A simplified model to evaluate the effect of fluid rheology on non-Newtonian flow in variable aperture fractures

    NASA Astrophysics Data System (ADS)

    Felisa, Giada; Ciriello, Valentina; Longo, Sandro; Di Federico, Vittorio

    2017-04-01

    Modeling of non-Newtonian flow in fractured media is essential in hydraulic fracturing operations, largely used for optimal exploitation of oil, gas and thermal reservoirs. Complex fluids interact with pre-existing rock fractures also during drilling operations, enhanced oil recovery, environmental remediation, and other natural phenomena such as magma and sand intrusions, and mud volcanoes. A first step in the modeling effort is a detailed understanding of flow in a single fracture, as the fracture aperture is typically spatially variable. A large bibliography exists on Newtonian flow in single, variable aperture fractures. Ultimately, stochastic modeling of aperture variability at the single fracture scale leads to determination of the flowrate under a given pressure gradient as a function of the parameters describing the variability of the aperture field and the fluid rheological behaviour. From the flowrate, a flow, or 'hydraulic', aperture can then be derived. The equivalent flow aperture for non-Newtonian fluids of power-law nature in single, variable aperture fractures has been obtained in the past both for deterministic and stochastic variations. Detailed numerical modeling of power-law fluid flow in a variable aperture fracture demonstrated that pronounced channelization effects are associated to a nonlinear fluid rheology. The availability of an equivalent flow aperture as a function of the parameters describing the fluid rheology and the aperture variability is enticing, as it allows taking their interaction into account when modeling flow in fracture networks at a larger scale. A relevant issue in non-Newtonian fracture flow is the rheological nature of the fluid. The constitutive model routinely used for hydro-fracturing modeling is the simple, two-parameter power-law. 
Yet this model does not characterize real fluids at low and high shear rates, as it implies, for shear-thinning fluids, an apparent viscosity which becomes unbounded for zero shear rate and tends to zero for infinite shear rate. On the contrary, the four-parameter Carreau constitutive equation includes asymptotic values of the apparent viscosity at those limits; in turn, the Carreau rheological equation is well approximated by the more tractable truncated power-law model. Results for flow of such fluids between parallel walls are already available. This study extends the adoption of the truncated power-law model to variable aperture fractures, with the aim of understanding the joint influence of rheology and aperture spatial variability. The aperture variation, modeled within a stochastic or deterministic framework, is taken to be one-dimensional and perpendicular to the flow direction; for stochastic modeling, the influence of different distribution functions is examined. Results are then compared with those obtained for pure power-law fluids for different combinations of model parameters. It is seen that the adoption of the pure power law model leads to significant overestimation of the flowrate with respect to the truncated model, more so for large external pressure gradient and/or aperture variability.
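
    The truncated power-law idea can be sketched in a few lines (parameter values are arbitrary; this is not the authors' code): the apparent viscosity follows m * gamma^(n-1) between two cutoff shear rates and is held at the limiting plateau values mu0 and mu_inf outside them, avoiding the unbounded/vanishing behaviour of the pure power law.

```python
def truncated_power_law_viscosity(gamma, mu0=10.0, mu_inf=0.1, m=1.0, n=0.5):
    """Apparent viscosity of a truncated power-law fluid.
    For a shear-thinning fluid (n < 1) the pure power law mu = m * gamma^(n-1)
    diverges as gamma -> 0 and vanishes as gamma -> infinity; the truncation
    bounds it between mu0 (low-shear plateau) and mu_inf (high-shear plateau)."""
    g_low = (mu0 / m) ** (1.0 / (n - 1.0))     # below this, mu is capped at mu0
    g_high = (mu_inf / m) ** (1.0 / (n - 1.0)) # above this, mu is floored at mu_inf
    if gamma <= g_low:
        return mu0
    if gamma >= g_high:
        return mu_inf
    return m * gamma ** (n - 1.0)
```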

  10. Metal complexes of diisopropylthiourea: synthesis, characterization and antibacterial studies.

    PubMed

    Ajibade, Peter A; Zulu, Nonkululeko H

    2011-01-01

    Co(II), Cu(II), Zn(II) and Fe(III) complexes of diisopropylthiourea have been synthesized and characterized by elemental analyses, molar conductivity, magnetic susceptibility, FTIR and electronic spectroscopy. The compounds are non-electrolytes in solution, and the spectroscopic data of the complexes are consistent with 4-coordinate geometry for the metal(II) complexes and six-coordinate octahedral geometry for the Fe(III) complex. The complexes were screened for their antibacterial activities against six bacteria: Escherichia coli, Pseudomonas aeruginosa, Klebsiella pneumoniae, Bacillus cereus, Staphylococcus aureus and Bacillus pumilus. The complexes showed varied antibacterial activities and their minimum inhibitory concentrations (MICs) were determined.

  11. Exposure Assessment Tools by Tiers and Types - Deterministic and Probabilistic Assessments

    EPA Pesticide Factsheets

    EPA ExpoBox is a toolbox for exposure assessors. Its purpose is to provide a compendium of exposure assessment and risk characterization tools that presents comprehensive step-by-step guidance and links to relevant exposure assessment databases.

  12. Theory and applications of a deterministic approximation to the coalescent model

    PubMed Central

    Jewett, Ethan M.; Rosenberg, Noah A.

    2014-01-01

    Under the coalescent model, the random number nt of lineages ancestral to a sample is nearly deterministic as a function of time when nt is moderate to large in value, and it is well approximated by its expectation E[nt]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[nt] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation nt ≈ E[nt] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[nt] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation nt ≈ E[nt] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
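As a generic illustration of the approximation n_t ≈ E[n_t] (not the paper's extended multi-population formulas), the expected number of ancestral lineages under a constant haploid population size N approximately satisfies dn/dt = -n(n-1)/(2N), whose logistic-type solution is easy to evaluate:

```python
import math

# Deterministic stand-in for E[n_t]: closed-form solution of the ODE
# dn/dt = -n(n-1)/(2N) with n(0) = n0. Time t is in generations and N is a
# constant haploid population size (illustrative assumptions; the paper
# covers time-varying sizes and migration).
def expected_lineages(n0, N, t):
    a = math.exp(-t / (2.0 * N))
    return n0 / (n0 - (n0 - 1.0) * a)
```

Evaluating this function costs one exponential, versus the numerically unstable exact coalescent sums when the sample size is large, which is the trade-off the abstract describes.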

  13. Invited Review: A review of deterministic effects in cyclic variability of internal combustion engines

    DOE PAGES

    Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; ...

    2015-02-18

    Here we review developments in the understanding of cycle-to-cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short-term predictability. In some cases, models of this type appear to work very well at describing experimental observations, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes and thus mitigate them. Some recent studies have demonstrated that under some conditions, cyclic combustion variations can have a relatively high degree of low-dimensional deterministic structure, which implies some degree of predictability and potential for real-time control. These deterministic effects are typically more pronounced near critical stability limits (e.g., near tipping points associated with ignition or flame propagation), such as during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in experimental and analytical characterization of cyclic variability where low-dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. In conclusion, taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.

  14. Deterministic photon-emitter coupling in chiral photonic circuits.

    PubMed

    Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter

    2015-09-01

    Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light.

  15. Deterministic photon-emitter coupling in chiral photonic circuits

    NASA Astrophysics Data System (ADS)

    Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter

    2015-09-01

    Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light.

  16. Population density equations for stochastic processes with memory kernels

    NASA Astrophysics Data System (ADS)

    Lai, Yi Ming; de Kamps, Marc

    2017-06-01

    We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two different disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. The method uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic processes cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism by a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory describing a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We are able to model jump responses for both models accurately for both excitatory and inhibitory input under the assumption that all inputs are generated by one renewal process.

  17. Simulation of anaerobic digestion processes using stochastic algorithm.

    PubMed

    Palanichamy, Jegathambal; Palani, Sundarambal

    2014-01-01

    Anaerobic digestion (AD) processes involve numerous complex biological and chemical reactions occurring simultaneously, and appropriate, efficient models need to be developed for simulating anaerobic digestion systems. Although several models have been developed, most suffer from a lack of knowledge of constants, from complexity and from weak generalization. The basis of the deterministic approach for modelling the physicochemical and biochemical reactions occurring in the AD system is the law of mass action, which gives a simple relationship between the reaction rates and the species concentrations. The assumptions made in deterministic models do not hold true for reactions involving chemical species at low concentrations. The stochastic behaviour of the physicochemical processes can be modeled at the mesoscopic level by applying stochastic algorithms. In this paper, a stochastic algorithm (the Gillespie tau-leap method) implemented in MATLAB was applied to predict the concentrations of glucose, acids and methane at different time intervals, through which the performance of the digester system can be controlled. The processes given by ADM1 (Anaerobic Digestion Model 1) were taken for verification of the model. The proposed model was verified by comparing the results of Gillespie's algorithm with the deterministic solution for the conversion of glucose into methane through degraders. At higher values of 'τ' (the timestep), the computational time required to reach the steady state is greater, since the number of chosen reactions is smaller. When the simulation time step is reduced, the results are similar to those of an ODE solver. It was concluded that the stochastic algorithm is a suitable approach for the simulation of complex anaerobic digestion processes, with the accuracy of the results depending on the optimum selection of the tau value.
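The tau-leaping idea can be sketched on a toy first-order chain A → B → C (a stand-in for glucose → acids → methane; the reactions, rate constants and counts here are illustrative, not ADM1 values):

```python
import math
import random

# Minimal Gillespie tau-leaping sketch: over each leap of length tau, each
# reaction fires a Poisson-distributed number of times with mean equal to
# its propensity times tau, instead of simulating every single event.
def _poisson(rng, lam):
    """Knuth's algorithm for sampling a Poisson(lam) count."""
    if lam <= 0.0:
        return 0
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def tau_leap(A, B, C, k1, k2, tau, t_end, seed=0):
    """Advance the chain A -> B -> C (rates k1, k2) by leaps of length tau."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        # Firing counts over one leap, clamped so no count goes negative.
        n1 = min(_poisson(rng, k1 * A * tau), A)
        n2 = min(_poisson(rng, k2 * B * tau), B)
        A, B, C = A - n1, B + n1 - n2, C + n2
        t += tau
    return A, B, C
```

Shrinking tau drives the trajectory toward the exact stochastic (and, for large counts, the ODE) behavior at a higher computational cost, which is the timestep trade-off the abstract describes.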

  18. Programming with non-heap memory in the real time specification for Java

    NASA Technical Reports Server (NTRS)

    Bollella, G.; Canham, T.; Carson, V.; Champlin, V.; Dvorak, D.; Giovannoni, B.; Indictor, M.; Meyer, K.; Reinholtz, A.; Murray, K.

    2003-01-01

    The Real-Time Specification for Java (RTSJ) provides facilities for deterministic, real-time execution in a language that is otherwise subject to variable latencies in memory allocation and garbage collection.

  19. A comprehensive multi-scenario based approach for a reliable flood-hazard assessment: a case-study application

    NASA Astrophysics Data System (ADS)

    Lanni, Cristiano; Mazzorana, Bruno; Volcan, Claudio; Bertagnolli, Rudi

    2015-04-01

    Flood hazard is generally assessed by assuming the return period of the rainfall as a proxy for the return period of the discharge and the related hydrograph. Frequently, this deterministic view is also extended to the straightforward application of hydrodynamic models. However, the climate (i.e. precipitation), the catchment (i.e. geology, soil and antecedent soil-moisture condition) and the anthropogenic (i.e. drainage system and its regulation) systems interact in a complex way, and the occurrence probability of a flood inundation event can significantly differ from the occurrence probability of the triggering event (i.e. rainfall). In order to reliably determine the spatial patterns of flood intensities and probabilities, the rigorous determination of flood event scenarios is beneficial because it provides a clear, rational method to recognize and unveil the inherent stochastic behavior of natural processes. Therefore, a multi-scenario approach for hazard assessment should be applied and should consider the possible events taking place in the area potentially subject to flooding (i.e. floodplains). Here, we apply a multi-scenario approach for the assessment of the flood hazard around the Idro lake (Italy). We consider and estimate the probability of occurrence of several scenarios related to the initial (i.e. initial water level in the lake) and boundary (i.e. shape of the hydrograph, downslope drainage, spillway opening operations) conditions characterizing the lake. Finally, we discuss the advantages and issues of the presented methodological procedure compared to traditional (and essentially deterministic) approaches.

  20. Cell-Free Optogenetic Gene Expression System.

    PubMed

    Jayaraman, Premkumar; Yeoh, Jing Wui; Jayaraman, Sudhaghar; Teh, Ai Ying; Zhang, Jingyun; Poh, Chueh Loo

    2018-04-20

    Optogenetic tools provide a new and efficient way to dynamically program gene expression with unmatched spatiotemporal precision. To date, their vast potential remains untapped in the field of cell-free synthetic biology, largely due to the lack of simple and efficient light-switchable systems. Here, to bridge the gap between cell-free systems and optogenetics, we studied our previously engineered one component-based blue light-inducible Escherichia coli promoter in a cell-free environment through experimental characterization and mathematical modeling. We achieved >10-fold dynamic expression and demonstrated rapid and reversible activation of the target gene to generate oscillatory response. The deterministic model developed was able to recapitulate the system behavior and helped to provide quantitative insights to optimize dynamic response. This in vitro optogenetic approach could be a powerful new high-throughput screening technology for rapid prototyping of complex biological networks in both space and time without the need for chemical induction.
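A deterministic model of light-inducible expression of the kind described above can be sketched as a single ODE: production while the light is on, first-order degradation/dilution otherwise. The parameter values and the square-wave light program below are illustrative assumptions, not the paper's fitted model:

```python
# Minimal deterministic sketch of light-inducible gene expression:
# dG/dt = k * u(t) - d * G, where u(t) is a 50% duty-cycle square-wave
# light input with the given period. Euler integration; illustrative only.
def light_driven_expression(light_period, k=1.0, d=0.1, dt=0.01, t_end=100.0):
    """Return the time trace of expression level G under pulsed light."""
    g, t, trace = 0.0, 0.0, []
    while t < t_end:
        light_on = (t % light_period) < (light_period / 2.0)
        g += (k * (1.0 if light_on else 0.0) - d * g) * dt
        trace.append(g)
        t += dt
    return trace
```

Toggling the light slower than the degradation timescale produces the sustained oscillatory response described in the abstract; expression saturates below the fully-induced level k/d.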

  1. Progressively expanded neural network for automatic material identification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Paheding, Sidike

    The science of hyperspectral remote sensing focuses on the exploitation of the spectral signatures of various materials to enhance capabilities including object detection, recognition, and material characterization. Hyperspectral imagery (HSI) has been extensively used for object detection and identification applications, since it provides plenty of spectral information to uniquely identify materials by their reflectance spectra. HSI-based object detection algorithms can be generally classified into stochastic and deterministic approaches. Deterministic approaches are comparatively simple to apply, since they are usually based on direct spectral similarity such as spectral angles or spectral correlation. In contrast, stochastic algorithms require statistical modeling and estimation for the target class and non-target class. Over the decades, many single-class object detection methods have been proposed in the literature; however, deterministic multiclass object detection in HSI has not been explored. In this work, we propose a deterministic multiclass object detection scheme, named class-associative spectral fringe-adjusted joint transform correlation. The human brain is capable of simultaneously processing high volumes of multi-modal data received every second of the day. In contrast, a machine sees input data simply as random binary numbers. Although machines are computationally efficient, they are inferior when it comes to data abstraction and interpretation. Thus, mimicking the learning strength of the human brain has been a current trend in artificial intelligence. In this work, we present a biologically inspired neural network, named the progressively expanded neural network (PEN Net), based on a nonlinear transformation of input neurons to a feature space for better pattern differentiation. In PEN Net, discrete fixed excitations are disassembled and scattered in the feature space as a nonlinear line. Each disassembled element on the line corresponds to a pattern with similar features. Unlike conventional neural networks, where hidden neurons need to be iteratively adjusted to achieve better accuracy, our proposed PEN Net does not require hidden-neuron tuning, which achieves better computational efficiency, and it has also shown superior performance in HSI classification tasks compared to the state of the art. Spectral-spatial feature-based HSI classification frameworks have shown greater strength than spectral-only methods. In our last proposed technique, PEN Net is incorporated with multiscale spatial features (i.e., the multiscale complete local binary pattern) to perform a spectral-spatial classification of HSI. Several experiments demonstrate the excellent performance of our proposed technique compared to more recently developed approaches.

  2. Complexity in Soil Systems: What Does It Mean and How Should We Proceed?

    NASA Astrophysics Data System (ADS)

    Faybishenko, B.; Molz, F. J.; Brodie, E.; Hubbard, S. S.

    2015-12-01

    The complex soil systems approach is needed fundamentally for the development of integrated, interdisciplinary methods to measure and quantify the physical, chemical and biological processes taking place in soil, and to determine the role of fine-scale heterogeneities. This presentation is aimed at a review of the concepts and observations concerning complexity and complex systems theory, including terminology, emergent complexity and simplicity, self-organization and a general approach to the study of complex systems using the Weaver (1948) concept of "organized complexity." These concepts are used to provide understanding of complex soil systems, and to develop experimental and mathematical approaches to soil microbiological processes. The results of numerical simulations, observations and experiments are presented that indicate the presence of deterministic chaotic dynamics in soil microbial systems. So what are the implications for the scientists who wish to develop mathematical models in the area of organized complexity or to perform experiments to help clarify an aspect of an organized complex system? The modelers have to deal with coupled systems having at least three dependent variables, and they have to forgo making linear approximations to nonlinear phenomena. The analogous rule for experimentalists is that they need to perform experiments that involve measurement of at least three interacting entities (variables depending on time, space, and each other). These entities could be microbes in soil penetrated by roots. If a process being studied in a soil affects the soil properties, like biofilm formation, then this effect has to be measured and included. The mathematical implications of this viewpoint are examined, and results of numerical solutions to a system of equations demonstrating deterministic chaotic behavior are also discussed using time series and the 3D strange attractors.
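The requirement of "at least three interacting variables with nonlinear coupling" for deterministic chaos can be illustrated with the classic Lorenz system; this is purely an illustration of the mathematical point, not a model of soil microbial dynamics:

```python
# Classic Lorenz system: three coupled nonlinear ODEs exhibiting
# deterministic chaos at the standard parameter values (sigma, rho, beta).
# Explicit Euler integration, adequate for a qualitative illustration.
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(n, x=1.0, y=1.0, z=1.0):
    """Integrate n steps; the visited points trace the 3D strange attractor."""
    pts = []
    for _ in range(n):
        x, y, z = lorenz_step(x, y, z)
        pts.append((x, y, z))
    return pts
```

The trajectory stays bounded yet never repeats, and nearby initial conditions diverge, which is exactly the behavior probed by the time-series and strange-attractor analyses mentioned above.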

  3. Stochastic associative memory

    NASA Astrophysics Data System (ADS)

    Baumann, Erwin W.; Williams, David L.

    1993-08-01

    Artificial neural networks capable of learning and recalling stochastic associations between non-deterministic quantities have received relatively little attention to date. One potential application of such stochastic associative networks is the generation of sensory 'expectations' based on arbitrary subsets of sensor inputs to support anticipatory and investigative behavior in sensor-based robots. Another application of this type of associative memory is the prediction of how a scene will look in one spectral band, including noise, based upon its appearance in several other wavebands. This paper describes a semi-supervised neural network architecture composed of self-organizing maps associated through stochastic inter-layer connections. This 'Stochastic Associative Memory' (SAM) can learn and recall non-deterministic associations between multi-dimensional probability density functions. The stochastic nature of the network also enables it to represent noise distributions that are inherent in any true sensing process. The SAM architecture, training process, and initial application to sensor image prediction are described. Relationships to Fuzzy Associative Memory (FAM) are discussed.

  4. A Pipelined Non-Deterministic Finite Automaton-Based String Matching Scheme Using Merged State Transitions in an FPGA

    PubMed Central

    Choi, Kang-Il

    2016-01-01

    This paper proposes a pipelined non-deterministic finite automaton (NFA)-based string matching scheme using field programmable gate array (FPGA) implementation. The characteristics of the NFA such as shared common prefixes and no failure transitions are considered in the proposed scheme. In the implementation of the automaton-based string matching using an FPGA, each state transition is implemented with a look-up table (LUT) for the combinational logic circuit between registers. In addition, multiple state transitions between stages can be performed in a pipelined fashion. In this paper, it is proposed that multiple one-to-one state transitions, called merged state transitions, can be performed with an LUT. By cutting down the number of used LUTs for implementing state transitions, the hardware overhead of combinational logic circuits is greatly reduced in the proposed pipelined NFA-based string matching scheme. PMID:27695114
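The NFA state-vector idea can be illustrated in software by the classic bit-parallel Shift-And algorithm, where one machine word holds all active NFA states and a single shift updates every transition at once; this is a generic software analogue, not the paper's FPGA pipeline with merged LUT transitions:

```python
# Bit-parallel Shift-And string matching: bit i of `state` is set iff the
# NFA state "matched pattern[0..i]" is active. One shift-and-mask per text
# character updates all state transitions simultaneously.
def shift_and(text, pattern):
    """Return the starting positions of all occurrences of pattern in text."""
    m = len(pattern)
    masks = {}
    for i, ch in enumerate(pattern):
        masks[ch] = masks.get(ch, 0) | (1 << i)
    state, accept, hits = 0, 1 << (m - 1), []
    for pos, ch in enumerate(text):
        # Advance every active state by one character; re-inject the start state.
        state = ((state << 1) | 1) & masks.get(ch, 0)
        if state & accept:
            hits.append(pos - m + 1)
    return hits
```

Because the whole state vector advances in one operation per character, there are no failure transitions to handle, mirroring the NFA property the scheme above exploits in hardware.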

  5. A Pipelined Non-Deterministic Finite Automaton-Based String Matching Scheme Using Merged State Transitions in an FPGA.

    PubMed

    Kim, HyunJin; Choi, Kang-Il

    2016-01-01

    This paper proposes a pipelined non-deterministic finite automaton (NFA)-based string matching scheme using field programmable gate array (FPGA) implementation. The characteristics of the NFA such as shared common prefixes and no failure transitions are considered in the proposed scheme. In the implementation of the automaton-based string matching using an FPGA, each state transition is implemented with a look-up table (LUT) for the combinational logic circuit between registers. In addition, multiple state transitions between stages can be performed in a pipelined fashion. In this paper, it is proposed that multiple one-to-one state transitions, called merged state transitions, can be performed with an LUT. By cutting down the number of used LUTs for implementing state transitions, the hardware overhead of combinational logic circuits is greatly reduced in the proposed pipelined NFA-based string matching scheme.

  6. Phonon arithmetic in a trapped ion system

    NASA Astrophysics Data System (ADS)

    Um, Mark; Zhang, Junhua; Lv, Dingshun; Lu, Yao; An, Shuoming; Zhang, Jing-Ning; Nha, Hyunchul; Kim, M. S.; Kim, Kihwan

    2016-04-01

    Single-quantum level operations are important tools to manipulate a quantum state. Annihilation or creation of single particles translates a quantum state to another by adding or subtracting a particle, depending on how many are already in the given state. The operations are probabilistic and the success rate has yet been low in their experimental realization. Here we experimentally demonstrate (near) deterministic addition and subtraction of a bosonic particle, in particular a phonon of ionic motion in a harmonic potential. We realize the operations by coupling phonons to an auxiliary two-level system and applying transitionless adiabatic passage. We show handy repetition of the operations on various initial states and demonstrate by the reconstruction of the density matrices that the operations preserve coherences. We observe the transformation of a classical state to a highly non-classical one and a Gaussian state to a non-Gaussian one by applying a sequence of operations deterministically.

  7. No-go theorem for passive single-rail linear optical quantum computing.

    PubMed

    Wu, Lian-Ao; Walther, Philip; Lidar, Daniel A

    2013-01-01

    Photonic quantum systems are among the most promising architectures for quantum computers. It is well known that for dual-rail photons effective non-linearities and near-deterministic non-trivial two-qubit gates can be achieved via the measurement process and by introducing ancillary photons. While in principle this opens a legitimate path to scalable linear optical quantum computing, the technical requirements are still very challenging and thus other optical encodings are being actively investigated. One of the alternatives is to use single-rail encoded photons, where entangled states can be deterministically generated. Here we prove that even for such systems universal optical quantum computing using only passive optical elements such as beam splitters and phase shifters is not possible. This no-go theorem proves that photon bunching cannot be passively suppressed even when extra ancilla modes and an arbitrary number of photons are used. Our result provides useful guidance for the design of optical quantum computers.

  8. A combinatorial model of malware diffusion via bluetooth connections.

    PubMed

    Merler, Stefano; Jurman, Giuseppe

    2013-01-01

    We outline here the mathematical expression of a diffusion model for cellphone malware transmitted through Bluetooth channels. In particular, we provide the deterministic formula underlying the proposed infection model, in its equivalent recursive (simple but computationally heavy) and closed-form (more complex but efficiently computable) expressions.

  9. General Methodology Combining Engineering Optimization of Primary HVAC and R Plants with Decision Analysis Methods--Part II: Uncertainty and Decision Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Wei; Reddy, T. A.; Gurian, Patrick

    2007-01-31

    A companion paper to Jiang and Reddy that presents a general and computationally efficient methodology for dynamic scheduling and optimal control of complex primary HVAC&R plants using a deterministic engineering optimization approach.

  10. Optimization of forest wildlife objectives

    Treesearch

    John Hof; Robert Haight

    2007-01-01

    This chapter presents an overview of methods for optimizing wildlife-related objectives. These objectives hinge on landscape pattern, so we refer to these methods as "spatial optimization." It is currently possible to directly capture deterministic characterizations of the most basic spatial relationships: proximity relationships (including those that lead to...

  11. Deterministic quantum controlled-PHASE gates based on non-Markovian environments

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Chen, Tian; Wang, Xiang-Bin

    2017-12-01

    We study the realization of the quantum controlled-PHASE gate in an atom-cavity system beyond the Markovian approximation. A general description of the dynamics of the atom-cavity system without any approximation is presented. When the spectral density of the reservoir has the Lorentz form, by making use of the memory backflow from the reservoir we can always construct the deterministic quantum controlled-PHASE gate between a photon and an atom, regardless of whether the atom-cavity coupling strength is weak or strong. In contrast, the phase shift in the output pulse hinders the implementation of quantum controlled-PHASE gates in sub-Ohmic, Ohmic or super-Ohmic reservoirs.

  12. Deterministic nonlinear phase gates induced by a single qubit

    NASA Astrophysics Data System (ADS)

    Park, Kimin; Marek, Petr; Filip, Radim

    2018-05-01

    We propose deterministic realizations of nonlinear phase gates by repeating a finite sequence of non-commuting Rabi interactions between a harmonic oscillator and only a single two-level ancillary qubit. We show explicitly that the key nonclassical features of the ideal cubic phase gate and the quartic phase gate are generated in the harmonic oscillator faithfully by our method. We numerically analyzed the performance of our scheme under realistic imperfections of the oscillator and the two-level system. The methodology is extended further to higher-order nonlinear phase gates. This theoretical proposal completes the set of operations required for continuous-variable quantum computation.

  13. Discrete-time systems with random switches: From systems stability to networks synchronization.

    PubMed

    Guo, Yao; Lin, Wei; Ho, Daniel W C

    2016-03-01

    In this article, we develop some approaches, which enable us to more accurately and analytically identify the essential patterns that guarantee the almost sure stability of discrete-time systems with random switches. We allow for the case that the elements in the switching connection matrix even obey some unbounded and continuous-valued distributions. In addition to the almost sure stability, we further investigate the almost sure synchronization in complex dynamical networks consisting of randomly connected nodes. Numerical examples illustrate that a chaotic dynamics in the synchronization manifold is preserved when statistical parameters enter some almost sure synchronization region established by the developed approach. Moreover, some delicate configurations are considered on probability space for ensuring synchronization in networks whose nodes are described by nonlinear maps. Both theoretical and numerical results on synchronization are presented by setting only a few random connections in each switch duration. More interestingly, we analytically find it possible to achieve almost sure synchronization in the randomly switching complex networks even with very large population sizes, which cannot be easily realized in non-switching but deterministically connected networks.
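A scalar toy version of the almost-sure-stability question is instructive (illustrative only; the work above treats matrix-valued switches and networks): a product of i.i.d. random gains decays almost surely exactly when the expected log-gain is negative, even if the mean gain exceeds one.

```python
import math
import random

# Toy randomly switched system x_{k+1} = a_k * x_k with a_k drawn i.i.d.
# from {3, 0.1} with equal probability. E[a] = 1.55 > 1, yet the system is
# almost surely stable because E[log a] = (log 3 + log 0.1) / 2 < 0.
def log_growth_rate(n, seed=0):
    """Empirical Lyapunov exponent over n switches; negative => a.s. decay."""
    rng = random.Random(seed)
    s = 0.0
    for _ in range(n):
        a = 3.0 if rng.random() < 0.5 else 0.1
        s += math.log(a)
    return s / n
```

The empirical rate converges to the expected log-gain by the law of large numbers, which is the mechanism behind almost-sure (as opposed to mean-square) stability criteria.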

  14. Discrete-time systems with random switches: From systems stability to networks synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yao; Lin, Wei, E-mail: wlin@fudan.edu.cn; Shanghai Key Laboratory of Contemporary Applied Mathematics, LMNS, and Shanghai Center for Mathematical Sciences, Shanghai 200433

    2016-03-15

    In this article, we develop some approaches, which enable us to more accurately and analytically identify the essential patterns that guarantee the almost sure stability of discrete-time systems with random switches. We allow for the case that the elements in the switching connection matrix even obey some unbounded and continuous-valued distributions. In addition to the almost sure stability, we further investigate the almost sure synchronization in complex dynamical networks consisting of randomly connected nodes. Numerical examples illustrate that a chaotic dynamics in the synchronization manifold is preserved when statistical parameters enter some almost sure synchronization region established by the developed approach. Moreover, some delicate configurations are considered on probability space for ensuring synchronization in networks whose nodes are described by nonlinear maps. Both theoretical and numerical results on synchronization are presented by setting only a few random connections in each switch duration. More interestingly, we analytically find it possible to achieve almost sure synchronization in the randomly switching complex networks even with very large population sizes, which cannot be easily realized in non-switching but deterministically connected networks.

  15. The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes

    NASA Astrophysics Data System (ADS)

    Bogdanova, E. V.; Kuznetsov, A. N.

    2017-01-01

    The solution of the problem of increasing the safety of nuclear power plants implies the existence of complete and reliable information about the processes occurring in the core of a working reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it requires long computation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research into the possibility of combining stochastic and deterministic algorithms in calculations of the BN-600 reactor. This is only one part of the work, which was carried out in the framework of a graduation project at the NRC “Kurchatov Institute” in cooperation with S. S. Gorodkov and M. A. Kalugin. It considers a 2-D layer of the BN-600 reactor core from the international benchmark test published in report IAEA-TECDOC-1623. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macro cross-sections, diffusion coefficients, the effective multiplication factor, and the distributions of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.
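A minimal sketch of the deterministic half of such a coupled scheme: power iteration for a one-group, one-dimensional slab diffusion eigenvalue problem. The cross sections here are hypothetical placeholders, not constants from the MCU computation, and the actual calculation uses 15 energy groups and 2-D geometry.

```python
import numpy as np

def k_eff_slab(D, sigma_a, nu_sigma_f, width=100.0, n=200, tol=1e-10):
    """Power iteration for -D phi'' + Sigma_a phi = (1/k) nu*Sigma_f phi
    on a slab with zero-flux boundaries, discretized by finite differences."""
    h = width / (n + 1)
    # Tridiagonal loss operator: -D * second difference + absorption
    A = (np.diag(np.full(n, 2 * D / h**2 + sigma_a))
         + np.diag(np.full(n - 1, -D / h**2), 1)
         + np.diag(np.full(n - 1, -D / h**2), -1))
    phi = np.ones(n)
    k = 1.0
    for _ in range(1000):
        phi_new = np.linalg.solve(A, nu_sigma_f * phi / k)  # fission source sweep
        k_new = k * phi_new.sum() / phi.sum()               # eigenvalue update
        phi, dk, k = phi_new, abs(k_new - k), k_new
        if dk < tol:
            break
    return k

# Hypothetical one-group constants (not from the MCU calculation):
k = k_eff_slab(D=1.3, sigma_a=0.01, nu_sigma_f=0.012)
print(round(k, 4))  # close to nu*Sigma_f / (Sigma_a + D*(pi/width)^2), ~1.06 here
```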

  16. Combining Deterministic structures and stochastic heterogeneity for transport modeling

    NASA Astrophysics Data System (ADS)

    Zech, Alraune; Attinger, Sabine; Dietrich, Peter; Teutsch, Georg

    2017-04-01

    Contaminant transport in highly heterogeneous aquifers is extremely challenging and a subject of current scientific debate. Tracer plumes often show non-symmetric, highly skewed plume shapes. Predicting such transport behavior using the classical advection-dispersion equation (ADE) in combination with a stochastic description of aquifer properties requires a dense measurement network, in contrast to the information available for most aquifers. A new conceptual aquifer structure model is presented which combines large-scale deterministic information with a stochastic approach for incorporating sub-scale heterogeneity. The conceptual model is designed to allow for a goal-oriented, site-specific transport analysis making use of as few data as possible. The basic idea is to reproduce highly skewed tracer plumes in heterogeneous media by incorporating deterministic contrasts and effects of connectivity, instead of using unimodal heterogeneous models with high variances. The conceptual model consists of deterministic blocks of mean hydraulic conductivity, which might be measured by pumping tests indicating values differing by orders of magnitude. Sub-scale heterogeneity is introduced within every block and can be modeled as bimodal or log-normally distributed. The impact of input parameters, structure and conductivity contrasts is investigated in a systematic manner. Furthermore, a first successful application of the model to the well-known MADE site is presented.
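A toy version of the block-plus-heterogeneity idea, assuming log-normal sub-scale fluctuations (the bimodal variant is analogous); the block means, block length, and log-variance are invented for illustration, not values from the MADE site.

```python
import numpy as np

def block_heterogeneous_field(block_means, block_len=50, var_log=1.0, seed=0):
    """Conductivity field built from deterministic blocks of mean K
    (e.g. from pumping tests) with multiplicative log-normal
    sub-scale heterogeneity inside every block."""
    rng = np.random.default_rng(seed)
    field = []
    for K_mean in block_means:
        # log-normal fluctuations around the block value (geometric mean K_mean)
        log_k = np.log(K_mean) + np.sqrt(var_log) * rng.standard_normal(block_len)
        field.append(np.exp(log_k))
    return np.concatenate(field)

# Hypothetical block means differing by orders of magnitude:
K = block_heterogeneous_field([1e-3, 1e-5, 1e-4])
print(K.shape)  # (150,)
```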

  17. Deterministic and stochastic models for middle east respiratory syndrome (MERS)

    NASA Astrophysics Data System (ADS)

    Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning

    2018-03-01

    World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), with 628 deaths, across 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest MERS outbreak outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs directly, through contact between infected and non-infected individuals, or indirectly, through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of free virus in the environment. Mathematical modeling is used to describe the transmission of MERS with a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and analyze its steady states. The stochastic model, a Continuous Time Markov Chain (CTMC), is used to predict future states through random variables. From the models that were built, the threshold values for the deterministic and stochastic models have the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations of both models using several different parameter sets are shown, and the probability of disease extinction is compared across several initial conditions.
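The deterministic/stochastic pairing can be sketched for a stripped-down SIR-type model. This is not the paper's MERS model (which additionally tracks free virus in the environment), and the parameters are invented for illustration.

```python
import numpy as np

# Hypothetical SIR-type parameters; R0 = beta/gamma = 3 (a major outbreak).
beta, gamma, N = 0.3, 0.1, 1000

def sir_deterministic(i0=10, dt=0.01, t_end=200):
    """Forward-Euler integration of the deterministic SIR equations."""
    s, i = N - i0, i0
    for _ in range(int(t_end / dt)):
        new_inf = beta * s * i / N * dt
        rec = gamma * i * dt
        s, i = s - new_inf, i + new_inf - rec
    return s  # susceptibles remaining after the outbreak

def sir_ctmc(i0=10, seed=1):
    """Embedded jump chain of the corresponding CTMC: at each event,
    infection occurs with probability proportional to its rate."""
    rng = np.random.default_rng(seed)
    s, i = N - i0, i0
    while i > 0:
        rate_inf, rate_rec = beta * s * i / N, gamma * i
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            s, i = s - 1, i + 1   # infection event
        else:
            i -= 1                # recovery event
    return s

s_det, s_sto = sir_deterministic(), sir_ctmc()
print(s_det, s_sto)  # both small relative to N when R0 > 1
```

Unlike the ODE, the CTMC can also go extinct early from a small initial infective count, which is how the extinction probability mentioned in the abstract is obtained.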

  18. On the usage of ultrasound computational models for decision making under ambiguity

    NASA Astrophysics Data System (ADS)

    Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron

    2018-04-01

    Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantify the differences using maximum amplitude and power spectrum density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.

  19. On the influence of additive and multiplicative noise on holes in dissipative systems.

    PubMed

    Descalzi, Orazio; Cartes, Carlos; Brand, Helmut R

    2017-05-01

    We investigate the influence of noise on deterministically stable holes in the cubic-quintic complex Ginzburg-Landau equation. Inspired by experimental possibilities, we specifically study two types of noise: additive noise delta-correlated in space and spatially homogeneous multiplicative noise on the formation of π-holes and 2π-holes. Our results include the following main features. For large enough additive noise, we always find a transition to the noisy version of the spatially homogeneous finite amplitude solution, while for sufficiently large multiplicative noise, a collapse occurs to the zero amplitude solution. The latter type of behavior, while unexpected deterministically, can be traced back to a characteristic feature of multiplicative noise; the zero solution acts as the analogue of an absorbing boundary: once trapped at zero, the system cannot escape. For 2π-holes, which exist deterministically over a fairly small range of values of subcriticality, one can induce a transition to a π-hole (for additive noise) or to a noise-sustained pulse (for multiplicative noise). This observation opens the possibility of noise-induced switching back and forth from and to 2π-holes.

  20. Deterministic Integration of Quantum Dots into on-Chip Multimode Interference Beamsplitters Using in Situ Electron Beam Lithography

    NASA Astrophysics Data System (ADS)

    Schnauber, Peter; Schall, Johannes; Bounouar, Samir; Höhne, Theresa; Park, Suk-In; Ryu, Geun-Hwan; Heindel, Tobias; Burger, Sven; Song, Jin-Dong; Rodt, Sven; Reitzenstein, Stephan

    2018-04-01

    The development of multi-node quantum optical circuits has attracted great attention in recent years. In particular, interfacing quantum-light sources, gates and detectors on a single chip is highly desirable for the realization of large networks. In this context, fabrication techniques that enable the deterministic integration of pre-selected quantum-light emitters into nanophotonic elements play a key role when moving forward to circuits containing multiple emitters. Here, we present the deterministic integration of an InAs quantum dot into a 50/50 multi-mode interference beamsplitter via in situ electron beam lithography. We demonstrate the combined emitter-gate interface functionality by measuring triggered single-photon emission on-chip with g^(2)(0) = 0.13 ± 0.02. Due to its high patterning resolution as well as spectral and spatial control, in situ electron beam lithography allows for the integration of pre-selected quantum emitters into complex photonic systems. Being a scalable single-step approach, it paves the way towards multi-node, fully integrated quantum photonic chips.

  1. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    PubMed

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. 
In regulatory circuits that require precise coordination, ODE modeling is thus still expected to provide relevant indications on the underlying dynamics.
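The ODE/CME correspondence is easiest to see in a linear birth-death model of gene expression, where the deterministic fixed point, the stationary mean, and the stationary mode all (nearly) coincide; it is the nonlinear, low-copy-number systems studied in the paper where this agreement breaks down. A minimal sketch (rates are illustrative):

```python
import numpy as np

# Birth-death gene expression: constant production a, degradation rate b per copy.
a, b = 20.0, 1.0

# Deterministic ODE dx/dt = a - b*x has the fixed point x* = a/b.
x_star = a / b

# CME stationary distribution: detailed balance a*p[n] = b*(n+1)*p[n+1]
# gives p[n] proportional to (a/b)^n / n!, i.e. a Poisson distribution.
n_max = 200
log_p = np.concatenate([[0.0],
    np.cumsum(np.log(a / b) - np.log(np.arange(1, n_max + 1)))])
p = np.exp(log_p - log_p.max())
p /= p.sum()

mode = int(np.argmax(p))
mean = float(np.sum(np.arange(n_max + 1) * p))
print(x_star, mode, mean)  # fixed point, stationary mode, stationary mean
```

For this linear scheme all three quantities sit at a/b = 20 (the Poisson mode is 19 or 20); adding nonlinear reactions or large stoichiometries is what drives them apart.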

  2. Randomized central limit theorems: A unified theory.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic: all ensemble components are scaled by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic: the ensemble components are scaled by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs), in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes, and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
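The difference between deterministic and stochastic scaling can be illustrated numerically: scaling i.i.d. components by a common deterministic factor yields Gaussian aggregate statistics, while scaling them by heavy-tailed random factors (the "random environment") produces pronounced heavy tails. The Pareto tail index 1.5 is an arbitrary illustrative choice, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 1000, 5000

# Deterministic scaling: every component shares the common scale 1/sqrt(n).
det = rng.standard_normal((reps, n)).sum(axis=1) / np.sqrt(n)

# Stochastic scaling ("random environment"): each component gets its own
# heavy-tailed random scale, here Pareto-distributed with tail index 1.5
# (infinite variance), applied to the same Gaussian components.
scales = rng.pareto(1.5, size=(reps, n))
sto = (scales * rng.standard_normal((reps, n))).sum(axis=1) / np.sqrt(n)

kurt = lambda x: np.mean((x - x.mean())**4) / np.var(x)**2
k_det, k_sto = kurt(det), kurt(sto)
print(k_det, k_sto)  # ~3 (Gaussian) vs much larger (heavy-tailed)
```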

  3. Development of DCGLs by using both probabilistic and deterministic analyses in RESRAD (onsite) and RESRAD-OFFSITE codes.

    PubMed

    Kamboj, Sunita; Yu, Charley; Johnson, Robert

    2013-05-01

    The Derived Concentration Guideline Levels for two building areas previously used in waste processing and storage at Argonne National Laboratory were developed using both probabilistic and deterministic radiological environmental pathway analysis. Four scenarios were considered. The two current uses considered were on-site industrial use and off-site residential use with farming. The two future uses (i.e., after an institutional control period of 100 y) were on-site recreational use and on-site residential use with farming. The RESRAD-OFFSITE code was used for the current-use off-site residential/farming scenario and RESRAD (onsite) was used for the other three scenarios. Contaminants of concern were identified from the past operations conducted in the buildings and the actual characterization done at the site. Derived Concentration Guideline Levels were developed for all four scenarios using deterministic and probabilistic approaches, which include both "peak-of-the-means" and "mean-of-the-peaks" analyses. The future-use on-site residential/farming scenario resulted in the most restrictive Derived Concentration Guideline Levels for most radionuclides.

  4. Synthesis, characterization and solid state electrical properties of 1-D coordination polymer of the type [CuₓNi₁₋ₓ(dadb)·yH₂O]ₙ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, R.L., E-mail: rlpjc@yahoo.co.in; Kushwaha, A.; Shrivastava, O.N.

    2012-12-15

    New heterobimetallic complexes [CuₓNi₁₋ₓ(dadb)·yH₂O]ₙ {where dadb = 2,5-diamino-3,6-dichloro-1,4-benzoquinone (1); x = 1 (2), 0.5 (4), 0.25 (5), 0.125 (6), 0.0625 (7) and 0 (3); y = 2; n = degree of polymerization} were synthesized and characterized. The heterobimetallic complexes show normal magnetic moments, whereas the monometallic complexes exhibit magnetic moments less than the spin-only value. Thermo-gravimetric analysis shows that degradation of the dadb ligand moiety is controlled by the electronic environment of the Cu(II) ions in preference over Ni(II) in the heterobimetallic complexes. The existence of mixed valency/non-integral oxidation states of the copper and nickel ions in complex 4 has been inferred from magnetic moment and ESR spectral results. Solid state dc electrical conductivity of all the complexes was investigated. The monometallic complexes were found to be semiconductors, whereas the heterobimetallic coordination polymer 4 was found to exhibit metallic behaviour. The mixed valency/non-integral oxidation state of the metal ions seems to be responsible for the metallic behaviour. Graphical abstract: In contrast to the semiconducting monometallic complexes 2 and 3, the heterobimetallic complex 4 exhibits metallic behaviour, attributed to the mixed valency/non-integral oxidation state of the metal ions concluded from magnetic and ESR spectral studies. Highlights: • 1-D coordination compounds of the type CuₓNi₁₋ₓ(dadb)·yH₂O were synthesized and characterized. • Thermal degradation of the complexes provides an indication of long-range electronic communication between metal and ligand. • On inclusion of Ni(II) into the 1-D coordination polymer of Cu(II): (a) the Cu(II) and Ni(II) ions exhibit non-integral oxidation states; (b) the resulting heterobimetallic complex 4 exhibits metallic behaviour over the entire temperature range of the present study, whereas the monometallic complexes are semiconductors.

  5. A Combinatorial Model of Malware Diffusion via Bluetooth Connections

    PubMed Central

    Merler, Stefano; Jurman, Giuseppe

    2013-01-01

    We outline here the mathematical expression of a diffusion model for cellphone malware transmitted through Bluetooth channels. In particular, we provide the deterministic formula underlying the proposed infection model, in its equivalent recursive (simple but computationally heavy) and closed-form (more complex but efficiently computable) expressions. PMID:23555677
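The recursive/closed-form duality the abstract mentions can be illustrated with a toy contact model; the paper's actual Bluetooth diffusion model is combinatorially richer, and the per-step infection probability p here is invented.

```python
def infected_recursive(x0, N, p, t):
    """Each step, every not-yet-infected device is reached with probability p
    (a toy stand-in for a Bluetooth contact model)."""
    x = x0
    for _ in range(t):
        x = x + p * (N - x)
    return x

def infected_closed_form(x0, N, p, t):
    """Equivalent closed form: the susceptible pool shrinks geometrically."""
    return N - (N - x0) * (1 - p) ** t

r = infected_recursive(1, 1000, 0.05, 30)
c = infected_closed_form(1, 1000, 0.05, 30)
print(r, c)  # identical up to floating-point rounding
```

The closed form evaluates in constant time, mirroring the efficiency argument made for the paper's closed-form expression.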

  6. Improving Deterministic Reserve Requirements for Security Constrained Unit Commitment and Scheduling Problems in Power Systems

    NASA Astrophysics Data System (ADS)

    Wang, Fengyu

    Traditional deterministic reserve requirements rely on ad hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. The modeling of operating reserves in the existing deterministic reserve requirements acquires the operating reserves on a zonal basis and does not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserve. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict the transfer capabilities and the network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserve by explicitly modelling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch structure, especially with the consideration of renewables; 2) to develop a market settlement scheme for the proposed dynamic reserve policies such that market efficiency is improved; and 3) to evaluate the market impacts and price signal of the proposed dynamic reserve policies.

  7. A Deterministic Computational Procedure for Space Environment Electron Transport

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.

    2010-01-01

    A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made which have indicated that accuracy is not compromised at the expense of the computational speed.

  8. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    PubMed

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
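The balanced regional idea can be sketched as a simple genome-chunking routine. This is a simplified illustration only; Churchill's real region boundaries are chosen more carefully (e.g. to keep subregions independent across processing steps), and the worker count is arbitrary.

```python
def balanced_regions(chrom_lengths, n_workers):
    """Split a genome into near-equal-size regions so each worker
    processes roughly the same number of bases."""
    total = sum(chrom_lengths.values())
    target = total // n_workers + 1  # bases per region, rounded up
    regions = []
    for chrom, length in chrom_lengths.items():
        start = 0
        while start < length:
            end = min(start + target, length)
            regions.append((chrom, start, end))
            start = end
    return regions

# GRCh38 lengths for chr1 and chr2, split for 8 hypothetical workers:
regions = balanced_regions({"chr1": 248_956_422, "chr2": 242_193_529}, 8)
print(len(regions), regions[0])
```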

  9. Synchrony and entrainment properties of robust circadian oscillators

    PubMed Central

    Bagheri, Neda; Taylor, Stephanie R.; Meeker, Kirsten; Petzold, Linda R.; Doyle, Francis J.

    2008-01-01

    Systems theoretic tools (i.e. mathematical modelling, control, and feedback design) advance the understanding of robust performance in complex biological networks. We highlight phase entrainment as a key performance measure used to investigate dynamics of a single deterministic circadian oscillator for the purpose of generating insight into the behaviour of a population of (synchronized) oscillators. More specifically, the analysis of phase characteristics may facilitate the identification of appropriate coupling mechanisms for the ensemble of noisy (stochastic) circadian clocks. Phase also serves as a critical control objective to correct mismatch between the biological clock and its environment. Thus, we introduce methods of investigating synchrony and entrainment in both stochastic and deterministic frameworks, and as a property of a single oscillator or population of coupled oscillators. PMID:18426774
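Synchrony in a population of coupled oscillators is often summarized by a phase-coherence order parameter. The sketch below uses the generic Kuramoto mean-field model, not the paper's circadian clock model; the frequency spread and coupling strengths are invented for illustration.

```python
import numpy as np

def kuramoto_order(K, n=100, dt=0.01, steps=20000, seed=0):
    """Simulate n coupled phase oscillators (a common abstraction for a
    population of clocks) and return the final order parameter r in [0, 1]."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, n)        # heterogeneous natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()       # complex mean field r*exp(i*psi)
        # Euler step of d(theta_i)/dt = omega_i + (K/n) sum_j sin(theta_j - theta_i)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return abs(np.exp(1j * theta).mean())

r_inc, r_sync = kuramoto_order(K=0.1), kuramoto_order(K=3.0)
print(r_inc, r_sync)  # incoherent (small r) vs synchronized (r near 1)
```

Below a critical coupling the population stays incoherent; above it, a synchronized cluster forms, which is the qualitative transition relevant to coupled circadian clocks.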

  10. Delayed-feedback chimera states: Forced multiclusters and stochastic resonance

    NASA Astrophysics Data System (ADS)

    Semenov, V.; Zakharova, A.; Maistrenko, Y.; Schöll, E.

    2016-07-01

    A nonlinear oscillator model with negative time-delayed feedback is studied numerically under external deterministic and stochastic forcing. It is found that in the unforced system complex partial synchronization patterns like chimera states as well as salt-and-pepper-like solitary states arise on the route from regular dynamics to spatio-temporal chaos. The control of the dynamics by external periodic forcing is demonstrated by numerical simulations. It is shown that one-cluster and multi-cluster chimeras can be achieved by adjusting the external forcing frequency to appropriate resonance conditions. If a stochastic component is superimposed on the deterministic external forcing, chimera states can be induced in a way similar to stochastic resonance; they therefore appear in regimes where they do not exist without noise.

  11. Interictal to Ictal Phase Transition in a Small-World Network

    NASA Astrophysics Data System (ADS)

    Nemzer, Louis; Cravens, Gary; Worth, Robert

    Real-time detection and prediction of seizures in patients with epilepsy is essential for rapid intervention. Here, we perform a full Hodgkin-Huxley calculation using n ∼ 50 in silico neurons configured in a small-world network topology to generate simulated EEG signals. The connectivity matrix, constructed using a Watts-Strogatz algorithm, admits randomized or deterministic entries. We find that situations corresponding to interictal (non-seizure) and ictal (seizure) states are separated by a phase transition that can be influenced by congenital channelopathies, anticonvulsant drugs, and connectome plasticity. The interictal phase exhibits scale-free phenomena, as characterized by a power law form of the spectral power density, while the ictal state suffers from pathological synchronization. We compare the results with intracranial EEG data and show how these findings may be used to detect or even predict seizure onset. Along with the balance of excitatory and inhibitory factors, the network topology plays a large role in determining the overall characteristics of brain activity. We have developed a new platform for testing the conditions that contribute to the phase transition between non-seizure and seizure states.
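The Watts-Strogatz construction mentioned above can be sketched directly. This builds only the binary small-world adjacency matrix; the study additionally runs Hodgkin-Huxley dynamics on such a network, which is omitted here, and the parameters n, k, p are illustrative.

```python
import numpy as np

def watts_strogatz_adjacency(n=50, k=4, p=0.1, seed=0):
    """Watts-Strogatz small-world adjacency matrix: a ring lattice where each
    node links to its k nearest neighbours, then each lattice edge is rewired
    with probability p to a uniformly chosen non-neighbour."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):                       # build the ring lattice
        for j in range(1, k // 2 + 1):
            A[i, (i + j) % n] = A[(i + j) % n, i] = 1
    for i in range(n):                       # rewire each lattice edge
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old = (i + j) % n
                choices = [m for m in range(n) if m != i and A[i, m] == 0]
                if choices:
                    new = rng.choice(choices)
                    A[i, old] = A[old, i] = 0
                    A[i, new] = A[new, i] = 1
    return A

A = watts_strogatz_adjacency()
print(A.sum() // 2)  # edge count is preserved by rewiring: n*k/2 = 100
```

Randomized entries (weights) could replace the 1s to match the "randomized or deterministic entries" variant described in the abstract.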

  12. Deterministic analysis of processes at corroding metal surfaces and the study of electrochemical noise in these systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Latanision, R.M.

    1990-12-01

    Electrochemical corrosion is pervasive in virtually all engineering systems and in virtually all industrial circumstances. Although engineers now understand how to design systems to minimize corrosion in many instances, many fundamental questions remain poorly understood and, therefore, the development of corrosion control strategies is based more on empiricism than on a deep understanding of the processes by which metals corrode in electrolytes. Fluctuations in potential, or current, in electrochemical systems have been observed for many years. To date, all investigations of this phenomenon have utilized non-deterministic analyses. In this work it is proposed to study electrochemical noise from a deterministic viewpoint by comparison of experimental parameters, such as first and second order moments (non-deterministic), with computer simulation of corrosion at metal surfaces. In this way it is proposed to analyze the origins of these fluctuations and to elucidate the relationship between these fluctuations and kinetic parameters associated with metal dissolution and cathodic reduction reactions. This research program addresses in essence two areas of interest: (a) computer modeling of corrosion processes in order to study the electrochemical processes on an atomistic scale, and (b) experimental investigations of fluctuations in electrochemical systems and correlation of experimental results with computer modeling. In effect, the noise generated by mathematical modeling will be analyzed and compared to experimental noise in electrochemical systems.

  13. Evidence for deterministic chaos in aperiodic oscillations of acute lymphoblastic leukemia cells in long-term culture

    NASA Astrophysics Data System (ADS)

    Lambrou, George I.; Chatziioannou, Aristotelis; Vlahopoulos, Spiros; Moschovi, Maria; Chrousos, George P.

    Biological systems are dynamic and possess properties that depend on two key elements: initial conditions and the response of the system over time. Conceptualizing this on tumor models will influence conclusions drawn with regard to disease initiation and progression. Alterations in initial conditions dynamically reshape the properties of proliferating tumor cells. The present work aims to test the hypothesis of Wolfrom et al., that proliferation shows evidence for deterministic chaos in a manner such that subtle differences in the initial conditions give rise to non-linear response behavior of the system. Their hypothesis, tested on adherent Fao rat hepatoma cells, provides evidence that these cells manifest aperiodic oscillations in their proliferation rate. We have tested this hypothesis with some modifications to the proposed experimental setup. We have used the acute lymphoblastic leukemia cell line CCRF-CEM, as it provides an excellent substrate for modeling proliferation dynamics. Measurements were taken at time points varying from 24 h to 48 h, extending the assayed populations beyond those of previously published reports that dealt with the complex dynamic behavior of animal cell populations. We conducted flow cytometry studies to examine the apoptotic and necrotic rate of the system, as well as DNA content changes of the cells over time. The cells exhibited a proliferation rate of nonlinear nature, as this rate presented oscillatory behavior. The obtained data have been fitted to known growth models, such as the logistic and Gompertzian models.
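The two growth laws used for the fits admit simple closed forms; the sketch below evaluates both. The parameters N0, K, r are illustrative placeholders, not fitted values from the study.

```python
import numpy as np

def logistic_growth(t, N0, K, r):
    """Closed-form solution of logistic growth dN/dt = r*N*(1 - N/K)."""
    return K / (1 + (K / N0 - 1) * np.exp(-r * t))

def gompertz_growth(t, N0, K, r):
    """Closed-form solution of Gompertz growth dN/dt = r*N*ln(K/N)."""
    return K * np.exp(np.log(N0 / K) * np.exp(-r * t))

t = np.linspace(0, 10, 11)
final_logistic = logistic_growth(t, 1.0, 100.0, 1.0)[-1]
final_gompertz = gompertz_growth(t, 1.0, 100.0, 1.0)[-1]
print(final_logistic, final_gompertz)  # both approach the carrying capacity K
```

Both curves saturate at the carrying capacity K, but the Gompertz form decelerates earlier, which is why the two are routinely compared when fitting tumor-cell proliferation data.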

  14. Dual noise-like pulse and soliton operation of a fiber ring cavity

    NASA Astrophysics Data System (ADS)

    Bracamontes Rodríguez, Y. E.; Pottiez, O.; García Sanchez, E.; Lauterio Cruz, J. P.; Ibarra-Villalón, H.; Hernandez-Garcia, J. C.; Bello-Jimenez, M.; Beltrán-Pérez, G.; Ibarra-Escamilla, B.; Kuzin, E. A.

    2017-03-01

    Passively mode-locked fiber lasers (PML-FLs) are versatile sources that are capable of generating a broad variety of short and ultrashort optical pulses. Besides conservative solitons, PML-FLs allow the generation of different kinds of dissipative structures, usually called dissipative solitons, a concept that also encompasses more complex structures and collective behaviors such as soliton molecules, gas, rain of solitons, etc. In addition to this, PML-FLs are also able to generate even more complex objects, the so-called noise-like pulses (NLPs). A few recent research results revealed a connection between NLPs and solitons, a sign that deterministic ingredients enter into the composition of NLPs, whose nature is traditionally assumed to be random. Although it is usual that a fiber laser is able to generate either solitons or noise-like pulses, depending on pump power and adjustments in the cavity, these two regimes are rarely observed simultaneously. In this paper, a PML-FL in a ring configuration is presented, in which it is possible to observe and verify experimentally the simultaneous presence of NLPs and solitons. Interestingly, these two components are found in different spectral regions, which greatly facilitates their separation and individual study and characterization.

  15. Art and architecture as experience: an alternative approach to bridging art history and the neurosciences.

    PubMed

    Zschocke, Nina

    2012-08-01

    In 1972, Michael Baxandall characterized the processes responsible for the cultural relativism of art experience as highly complex and unknown in their physiological detail. While art history still shows considerable interest in the brain sciences forty years later, most cross-disciplinary studies today refer to the neurosciences in an attempt to seek scientific legitimization of variations on a generalized and largely deterministic model of perception, reducing the interaction between a work of art and its observers to a set of biological automatisms. I will challenge such an approach and take up art theory's interest in the historico-cultural and situational dimensions of art experience. Looking at two examples of large-scale installation and sculptural post-war American art, I will explore unstable perceptions of depth and changing experiences of space that indicate complex interactions between perceptual and higher cognitive processes. The argument will draw on recent theories describing neuronal processes underlying multistable phenomena, eye movement, visual attention and decision-making. As I will show, a large number of neuroscientific studies provide theoretical models that help us analyse not anthropological constants but the influence of cultural, individual and situational variables on aesthetic experience.

  16. Nonlinear analysis of dynamic signature

    NASA Astrophysics Data System (ADS)

    Rashidi, S.; Fallah, A.; Towhidkhah, F.

    2013-12-01

    A signature is a long-trained motor skill resulting in a well-coordinated combination of segments such as strokes and loops. It is a physical manifestation of complex motor processes. The problem, generally stated, is how relative simplicity in behavior emerges from the considerable complexity of the perception-action system that produces behavior within an infinitely variable biomechanical and environmental context. To address this problem, we present evidence indicating that the motor control dynamics of the signing process are chaotic. This chaotic dynamic may explain a richer array of time series behavior in the motor skill of signature. Nonlinear analysis is a powerful approach and a suitable tool for characterizing dynamical systems through concepts such as fractal dimension and Lyapunov exponent. Accordingly, time series of position and velocity can be analyzed in both the horizontal and vertical directions. We observed from the results that non-integer values of the correlation dimension indicate low-dimensional deterministic dynamics. This result was confirmed by surrogate data tests. We also used the time series to calculate the largest Lyapunov exponent and obtained a positive value. These results constitute significant evidence that signature data are the outcome of chaos in a nonlinear dynamical system of motor control.
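
    The chaos indicators named in this record can be illustrated on a textbook system. The sketch below (an illustration only, not the authors' pipeline) estimates the largest Lyapunov exponent of the chaotic logistic map by averaging the log of the local stretching factor along a trajectory; a positive value, like the one the authors report for signature time series, is the hallmark of chaos.

```python
import math

def logistic_lyapunov(r=4.0, x0=0.2, n=100_000, burn=1_000):
    """Largest Lyapunov exponent of x -> r*x*(1-x), averaging log|f'(x)|."""
    x = x0
    for _ in range(burn):  # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))  # log of local stretching
        x = r * x * (1 - x)
    return total / n
```

    For r = 4 the exact exponent is ln 2 ≈ 0.693, which the estimate approaches; applying the same estimator to surrogate (shuffled) data, as the authors do, would not yield a stable positive value.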

  17. Evaluation of Deployment Challenges of Wireless Sensor Networks at Signalized Intersections

    PubMed Central

    Azpilicueta, Leyre; López-Iturri, Peio; Aguirre, Erik; Martínez, Carlos; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco

    2016-01-01

    With the growing demand for Intelligent Transportation Systems (ITS) for safer and more efficient transportation, research on and development of such vehicular communication systems have increased considerably in recent years. The use of wireless networks in vehicular environments has grown exponentially. However, it is highly important to analyze radio propagation prior to the deployment of a wireless sensor network in such complex scenarios. In this work, the radio wave characterization of ISM 2.4 GHz and 5 GHz Wireless Sensor Networks (WSNs) deployed taking advantage of existing traffic light infrastructure has been assessed. By means of an in-house developed 3D ray launching algorithm, the impact of the topology as well as the urban morphology of the environment has been analyzed, emulating realistic operation in the framework of the scenario. The complexity of the scenario, a city intersection with traffic lights, vehicles, people, buildings, vegetation and urban environment, makes channel characterization with accurate models necessary before the deployment of wireless networks. A measurement campaign has been conducted emulating the interaction of the system in the vicinity of pedestrians as well as nearby vehicles. A real-time interactive application has been developed and tested in order to visualize and monitor traffic as well as pedestrian location and behavior. Results show that the use of deterministic tools in WSN deployment can aid in providing optimal layouts in terms of coverage, capacity and energy efficiency of the network. PMID:27455270
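
    As a minimal illustration of the deterministic propagation analysis described above (not the authors' 3D ray launching code), each ray in such a tool starts from the free-space path loss at the carrier frequency; the standard Friis form, evaluated at the two ISM bands mentioned, is:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def free_space_path_loss_db(distance_m, freq_hz):
    """Friis free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)
```

    At 1 m the loss is about 40 dB at 2.4 GHz and roughly 6.4 dB higher at 5 GHz, one reason the two bands yield different coverage maps for the same traffic-light layout.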

  19. Stochastic Modeling and Generation of Partially Polarized or Partially Coherent Electromagnetic Waves

    NASA Technical Reports Server (NTRS)

    Davis, Brynmor; Kim, Edward; Piepmeier, Jeffrey; Hildebrand, Peter H. (Technical Monitor)

    2001-01-01

    Many new Earth remote-sensing instruments are embracing both the advantages and the added complexity that result from interferometric or fully polarimetric operation. To increase instrument understanding and functionality, a model of the signals these instruments measure is presented. A stochastic model is used because it recognizes the non-deterministic nature of any real-world measurement while also providing a tractable mathematical framework. A stationary, Gaussian-distributed model structure is proposed. Temporal and spectral correlation measures provide a statistical description of the physical properties of coherence and polarization state. From this relationship the model is mathematically defined. The model is shown to be unique for any set of physical parameters. A method of realizing the model (necessary for applications such as synthetic calibration-signal generation) is given and computer simulation results are presented. The signals are constructed from the output of a multi-input multi-output linear filter system driven with white noise.
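
    The essential construction, shaping white noise with a linear system to realize target correlations, can be sketched in its simplest scalar form (an illustration under simplifying assumptions, not the instrument model): two Gaussian channels with a prescribed cross-correlation, the two-channel analogue of the multi-input multi-output filter.

```python
import math
import random

def correlated_pair(rho, n, seed=0):
    """Shape two independent white-noise streams into Gaussian channels
    with cross-correlation rho (scalar analogue of a MIMO shaping filter)."""
    rng = random.Random(seed)
    x, y = [], []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x.append(z1)
        y.append(rho * z1 + math.sqrt(1 - rho * rho) * z2)
    return x, y
```

    The sample correlation of the generated channels converges to the prescribed rho, which is the property a synthetic calibration-signal generator must reproduce.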

  20. Chiral quantum optics.

    PubMed

    Lodahl, Peter; Mahmoodian, Sahand; Stobbe, Søren; Rauschenbeutel, Arno; Schneeweiss, Philipp; Volz, Jürgen; Pichler, Hannes; Zoller, Peter

    2017-01-25

    Advanced photonic nanostructures are currently revolutionizing the optics and photonics that underpin applications ranging from light technology to quantum-information processing. The strong light confinement in these structures can lock the local polarization of the light to its propagation direction, leading to propagation-direction-dependent emission, scattering and absorption of photons by quantum emitters. The possibility of such a propagation-direction-dependent, or chiral, light-matter interaction is not accounted for in standard quantum optics and its recent discovery brought about the research field of chiral quantum optics. The latter offers fundamentally new functionalities and applications: it enables the assembly of non-reciprocal single-photon devices that can be operated in a quantum superposition of two or more of their operational states and the realization of deterministic spin-photon interfaces. Moreover, engineered directional photonic reservoirs could lead to the development of complex quantum networks that, for example, could simulate novel classes of quantum many-body systems.

  1. Simulation of glioblastoma multiforme (GBM) tumor cells using ising model on the Creutz Cellular Automaton

    NASA Astrophysics Data System (ADS)

    Züleyha, Artuç; Ziya, Merdan; Selçuk, Yeşiltaş; Kemal, Öztürk M.; Mesut, Tez

    2017-11-01

    Computational models of tumors face difficulties due to the complexity of tumor biology and the capacities of computational tools; however, such models provide insight into the interactions between a tumor and its microenvironment. Moreover, computational models have the potential to inform strategies for individualized cancer treatment. To model a solid brain tumor, glioblastoma multiforme (GBM), we present a two-dimensional Ising model applied on the Creutz cellular automaton (CCA). The aim of this study is to analyze avascular spherical solid tumor growth, treating transitions between non-tumor cells and cancer cells as analogous to phase transitions in a physical system. The Ising model on the CCA algorithm provides a deterministic approach with discrete time steps and local interactions in position space to view tumor growth as a function of time. Our simulation results are given for fixed tumor radius and are compatible with theoretical and clinical data.
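
    For readers unfamiliar with the Creutz cellular automaton, the sketch below (a generic minimal version, not the authors' tumor code) shows its defining move: a deterministic, energy-conserving Ising update in which a "demon" reservoir pays for, or absorbs, the energy of each spin flip.

```python
import random

def ising_energy(spins):
    """Nearest-neighbour Ising energy on a periodic square lattice."""
    L = len(spins)
    return -sum(spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
                for i in range(L) for j in range(L))

def creutz_sweeps(L=16, sweeps=5, seed=1):
    """Creutz demon dynamics: a flip is accepted only if the demon can pay
    (or absorb) the energy change, so lattice + demon energy is conserved."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    demon = 0
    e0 = ising_energy(spins)
    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                      + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
                de = 2 * spins[i][j] * nb  # energy change if this spin flips
                if demon >= de:            # demon pays, or absorbs if de < 0
                    spins[i][j] *= -1
                    demon -= de
    return e0, ising_energy(spins), demon
```

    Because the acceptance rule is arithmetic rather than probabilistic, the update is deterministic once the initial configuration and sweep order are fixed, which is the property the abstract emphasizes.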

  2. Antibiotic-induced population fluctuations and stochastic clearance of bacteria

    PubMed Central

    Le, Dai; Şimşek, Emrah; Chaudhry, Waqas

    2018-01-01

    Effective antibiotic use that minimizes treatment failures remains a challenge. A better understanding of how bacterial populations respond to antibiotics is necessary. Previous studies of large bacterial populations established the deterministic framework of pharmacodynamics. Here, characterizing the dynamics of population extinction, we demonstrated the stochastic nature of eradicating bacteria with antibiotics. Antibiotics known to kill bacteria (bactericidal) induced population fluctuations. Thus, at high antibiotic concentrations, the dynamics of bacterial clearance were heterogeneous. At low concentrations, clearance still occurred with a non-zero probability. These striking outcomes of population fluctuations were well captured by our probabilistic model. Our model further suggested a strategy to facilitate eradication by increasing extinction probability. We experimentally tested this prediction for antibiotic-susceptible and clinically-isolated resistant bacteria. This new knowledge exposes fundamental limits in our ability to predict bacterial eradication. Additionally, it demonstrates the potential of using antibiotic concentrations that were previously deemed inefficacious to eradicate bacteria. PMID:29508699
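
    The stochastic-clearance idea can be illustrated with the textbook linear birth-death process (a generic illustration, not the paper's pharmacodynamic model): even when births outpace deaths on average, a small population goes extinct with probability (d/b)^n0, so clearance at nominally "inefficacious" drug concentrations is possible but not certain.

```python
import random

def extinction_probability(b=1.0, d=0.5, n0=3, trials=5000, cap=100, seed=7):
    """Monte Carlo extinction probability of a linear birth-death process.
    Reaching `cap` individuals is treated as established (survival)."""
    rng = random.Random(seed)
    p_birth = b / (b + d)  # each event is a birth with this probability
    extinct = 0
    for _ in range(trials):
        n = n0
        while 0 < n < cap:
            n += 1 if rng.random() < p_birth else -1
        extinct += (n == 0)
    return extinct / trials
```

    With b = 1, d = 0.5 and n0 = 3 the theoretical extinction probability is (0.5)^3 = 0.125, which the Monte Carlo estimate reproduces.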

  3. Dynamic characteristics of Non Newtonian fluid Squeeze film damper

    NASA Astrophysics Data System (ADS)

    Palaksha, C. P.; Shivaprakash, S.; Jagadish, H. P.

    2016-09-01

    Fluids that do not follow a linear relationship between rate of strain and shear stress are termed non-Newtonian. Non-Newtonian fluids are usually categorized as those in which the shear stress depends on the rate of shear only, those in which the relation between shear stress and rate of shear depends on time, and visco-inelastic fluids, which possess both elastic and viscous properties. It is quite difficult to provide a single constitutive relation that defines a non-Newtonian fluid, owing to the great diversity found in its physical structure. Non-Newtonian fluids can present complex rheological behaviour involving shear-thinning, viscoelastic or thixotropic effects, and the rheological characterization of complex fluids is an important issue in many areas. This paper analyses the damping and stiffness characteristics of a non-Newtonian fluid (waxy crude oil) used in squeeze film dampers, using the available literature for viscosity characterization. The damping and stiffness characteristics are evaluated as functions of shear strain rate, temperature and wax concentration.
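
    The shear-thinning behaviour mentioned above is commonly modelled by the Ostwald-de Waele power law, tau = K * gamma_dot**n; a minimal sketch (illustrative parameters, not the waxy-crude data of the paper):

```python
def apparent_viscosity(K, n, gamma_dot):
    """Power-law (Ostwald-de Waele) apparent viscosity: eta = K * gamma_dot**(n-1)."""
    return K * gamma_dot ** (n - 1.0)
```

    For n < 1 the apparent viscosity falls as the shear rate rises (shear-thinning); n = 1 recovers a Newtonian fluid with constant viscosity K.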

  4. Non Kolmogorov Probability Models Outside Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Accardi, Luigi

    2009-03-01

    This paper is devoted to analysis of main conceptual problems in the interpretation of QM: reality, locality, determinism, physical state, Heisenberg principle, "deterministic" and "exact" theories, laws of chance, notion of event, statistical invariants, adaptive realism, EPR correlations and, finally, the EPR-chameleon experiment.

  5. Characterization of a high-spin non-heme Fe(III)-OOH intermediate and its quantitative conversion to an Fe(IV)═O complex.

    PubMed

    Li, Feifei; Meier, Katlyn K; Cranswick, Matthew A; Chakrabarti, Mrinmoy; Van Heuvelen, Katherine M; Münck, Eckard; Que, Lawrence

    2011-05-18

    We have generated a high-spin Fe(III)-OOH complex supported by tetramethylcyclam via protonation of its conjugate base and characterized it in detail using various spectroscopic methods. This Fe(III)-OOH species can be converted quantitatively to an Fe(IV)═O complex via O-O bond cleavage; this is the first example of such a conversion. This conversion is promoted by two factors: the strong Fe(III)-OOH bond, which inhibits Fe-O bond lysis, and the addition of protons, which facilitates O-O bond cleavage. This example provides a synthetic precedent for how O-O bond cleavage of high-spin Fe(III)-peroxo intermediates of non-heme iron enzymes may be promoted.

  6. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
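
    The central reduction in an SROM approach is replacing the expectation over uncertain parameters by a weighted sum over a few deterministic samples, so any deterministic optimizer applies unchanged. A one-dimensional sketch (toy objective, samples and weights chosen for illustration, not the structural problem of the paper):

```python
def srom_objective(x, samples, weights, f):
    """Deterministic surrogate for E[f(x, theta)]: weighted sum over SROM samples."""
    return sum(w * f(x, t) for t, w in zip(samples, weights))

def minimize_1d(g, lo, hi, iters=200):
    """Ternary search; assumes g is unimodal on [lo, hi]."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)
```

    Minimizing the weighted objective sum(w_k * (x - theta_k)**2) recovers the weighted mean of the samples (here 1.05), with only a handful of deterministic model calls per evaluation instead of a Monte Carlo ensemble.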

  7. Persistence and memory timescales in root-zone soil moisture dynamics

    Treesearch

    Khaled Ghannam; Taro Nakai; Athanasios Paschalis; Andrew C. Oishi; Ayumi Kotani; Yasunori Igarashi; Tomo' omi Kumagai; Gabriel G. Katul

    2016-01-01

    The memory timescale that characterizes root-zone soil moisture remains the dominant measure in seasonal forecasts of land-climate interactions. This memory is a quasi-deterministic timescale associated with the losses (e.g., evapotranspiration) from the soil column and is often interpreted as persistence in soil moisture states. Persistence, however,...

  8. Time Series Analysis of the Bacillus subtilis Sporulation Network Reveals Low Dimensional Chaotic Dynamics.

    PubMed

    Lecca, Paola; Mura, Ivan; Re, Angela; Barker, Gary C; Ihekwaba, Adaoha E C

    2016-01-01

    Chaotic behavior refers to behavior which, albeit irregular, is generated by an underlying deterministic process and is therefore potentially controllable. This possibility becomes practically amenable especially when chaos is shown to be low-dimensional, i.e., attributable to a small fraction of the total system's components. In this case, including the major drivers of chaos in the modeling approach allows us to improve predictability of the system's dynamics. Here, we analyzed numerical simulations of an accurate ordinary differential equation model of the gene network regulating sporulation initiation in Bacillus subtilis to explore whether the non-linearity underlying the time series data is due to low-dimensional chaos. Low-dimensional chaos is expectedly common in systems with few degrees of freedom, but rare in systems with many degrees of freedom such as the B. subtilis sporulation network. The estimation of a number of indices that reflect the chaotic nature of a system indicates that the dynamics of this network is affected by deterministic chaos. The neat separation between the indices obtained from the time series simulated from the model and those obtained from time series generated by Gaussian white and colored noise confirmed that the B. subtilis sporulation network dynamics is affected by low-dimensional chaos rather than by noise. Furthermore, our analysis identifies the principal driver of the network's chaotic dynamics to be sporulation initiation phosphotransferase B (Spo0B). We then analyzed the parameters and the phase space of the system to characterize the instability points of the network dynamics and, in turn, to identify the ranges of values of Spo0B and of the other drivers of the chaotic dynamics for which the whole system is highly sensitive to minimal perturbation. In summary, we described an unappreciated source of complexity in the B. subtilis sporulation network by gathering evidence for the chaotic behavior of the system and by suggesting candidate molecules driving chaos in the system. The results of our chaos analysis can increase our understanding of the intricacies of the regulatory network under analysis and suggest experimental work to refine our understanding of the mechanisms underlying B. subtilis sporulation initiation control.

  9. Adaptive linearization of phase space. A hydrological case study

    NASA Astrophysics Data System (ADS)

    Angarita, Hector; Domínguez, Efraín

    2013-04-01

    Here we present a method, and its implementation, to extract transition operators from hydrological signals with significant algorithmic complexity, i.e. signals with an identifiable deterministic component and a non-periodic, irregular part, the latter being a source of uncertainty for the observer. The method assumes that in a system such as a hydrological system, from the perspective of information theory, signals cannot be known to an arbitrary level of precision due to limited observation or coding capabilities. According to the Shannon-Hartley theorem, at a given sampling frequency fs there is a theoretical peak capacity C for observing data from a random signal (e.g. the discharge) transmitted through a noisy channel with signal-to-noise ratio SNR. This imposes a limit on the observer's capability to completely reconstruct an observed signal if the sampling frequency fs is lower than the threshold at which the signal can be completely recovered for any given SNR. Since most hydrological monitoring systems have low monitoring frequency, the observations may contain less information than required to describe the process dynamics, and as a result observed signals exhibit some level of uncertainty compared with the "true" signal. In the proposed approach, a simple local phase-space model, with locally linearized deterministic and stochastic differential equations, is applied to extract the system's state transition operators and to probabilistically characterize the signal uncertainty. To determine the optimality of the local operators, three main elements are considered: i. system state dimensionality, ii. sampling frequency, and iii. parameterization window length. Two examples are shown and discussed to illustrate the method. The first is an evaluation of the feasibility of real-time forecasting models for levels and flow rates, from hourly to 14-day lead times. The results of this application demonstrate the operational feasibility of simple predictive models for most of the evaluated cases. The second application is the definition of a stage-discharge decoding method based on the dynamics of the observed water level signal. The results indicate that the method leads to a reduction of hysteresis in the decoded flow; this is not yet fully satisfactory, however, as a quadratic bias emerged in the decoded values that needs explanation. Both examples allow conclusions about the optimal sampling frequency of the studied variables.
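
    The Shannon-Hartley capacity invoked above is straightforward to compute; a minimal sketch (generic formula, illustrative numbers):

```python
import math

def channel_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity: C = B * log2(1 + SNR), in bits/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)
```

    A 3 kHz channel at 30 dB SNR (linear SNR of 1000) carries at most about 29.9 kbit/s; by the same logic, a gauging station sampling too slowly cannot fully reconstruct the discharge signal, whatever its SNR.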

  10. Implementation and characterization of active feed-forward for deterministic linear optics quantum computing

    NASA Astrophysics Data System (ADS)

    Böhi, P.; Prevedel, R.; Jennewein, T.; Stefanov, A.; Tiefenbacher, F.; Zeilinger, A.

    2007-12-01

    In general, quantum computer architectures based on the dynamical evolution of quantum states also require the processing of classical information obtained by measurements of the actual qubits that make up the computer. This classical processing involves fast, active adaptation of subsequent measurements and real-time error correction (feed-forward), so that quantum gates and algorithms can be executed in a deterministic and hence error-free fashion. This is also true in the linear optical regime, where the quantum information is stored in the polarization state of photons. The adaptation of the photon's polarization can be achieved very rapidly by employing electro-optical modulators (EOMs), which change the polarization of a passing photon upon application of a high voltage. In this paper we discuss techniques for implementing fast, active feed-forward at the single-photon level and present their application in the context of photonic quantum computing. This includes the working principles and characterization of the EOMs as well as a description of the switching logic, both of which allow quantum computation at an unprecedented speed.

  11. Development of Methodologies for IV and V of Neural Networks

    NASA Technical Reports Server (NTRS)

    Taylor, Brian; Darrah, Marjorie

    2003-01-01

    Non-deterministic systems often rely upon neural network (NN) technology to "learn" to manage flight systems under controlled conditions using carefully chosen training sets. How can these adaptive systems be certified to ensure that they will become increasingly efficient and behave appropriately in real-time situations? The bulk of Independent Verification and Validation (IV&V) research on non-deterministic software control systems such as Adaptive Flight Controllers (AFCs) addresses NNs in well-behaved and constrained environments such as simulations and strict process control. However, neither substantive research nor effective IV&V techniques have been found to address AFCs learning in real time and adapting to live flight conditions. Adaptive flight control systems offer good extensibility into commercial aviation as well as military aviation and transportation. Consequently, this area of IV&V represents an area of growing interest and urgency. ISR proposes to further the current body of knowledge to meet two objectives: research the current IV&V methods and assess where these methods may be applied toward a methodology for the V&V of neural networks; and identify effective methods for IV&V of NNs that learn in real time, including developing a prototype test bed for IV&V of AFCs. Currently, no practical method exists. ISR will meet these objectives through the tasks identified and described below. First, ISR will conduct a literature review of current IV&V technology. To do this, ISR will collect the existing body of research on IV&V of non-deterministic systems and neural networks. ISR will also develop the framework for disseminating this information through specialized training. This effort will focus on developing NASA's capability to conduct IV&V of neural network systems and to provide training to meet the increasing need for IV&V expertise in such systems.

  12. Stochastic simulations on a model of circadian rhythm generation.

    PubMed

    Miura, Shigehiro; Shimokawa, Tetsuya; Nomura, Taishin

    2008-01-01

    Biological phenomena are often modeled by differential equations, where the states of a model system are described by continuous real values. When we consider concentrations of molecules as dynamical variables for a set of biochemical reactions, we implicitly assume that the numbers of molecules are large enough that their changes can be regarded as continuous and described deterministically. However, for a system with small numbers of molecules, changes in their numbers are discrete and molecular noise becomes significant. In such cases, models with deterministic differential equations may be inappropriate, and the reactions must be described by stochastic equations. In this study, we focus on clock gene expression for circadian rhythm generation, a system known to involve small numbers of molecules. It is therefore appropriate for the system to be modeled by stochastic equations and analyzed by stochastic simulation methodologies. The interlocked feedback model proposed by Ueda et al. as a set of deterministic ordinary differential equations provides the basis of our analyses. We apply two stochastic simulation methods, namely Gillespie's direct method and Gillespie's stochastic differential equation method, to the interlocked feedback model. To this end, we first reformulated the original differential equations back into elementary chemical reactions. With those reactions, we simulate and analyze the dynamics of the model using the two methods, in order to compare the results with the dynamics obtained from the original deterministic model and to characterize how the dynamics depend on the simulation methodology.
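
    Gillespie's direct method, named above, draws an exponential waiting time from the total propensity and then picks a reaction in proportion to its share of that total. A minimal sketch for a single species with synthesis and degradation (a toy reaction set, not the interlocked feedback model):

```python
import random

def gillespie_birth_death(k_syn=10.0, k_deg=0.1, n0=0, t_end=100.0, seed=3):
    """Gillespie's direct method for 0 -> X (rate k_syn) and X -> 0 (rate k_deg*n)."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while True:
        a_syn = k_syn                 # synthesis propensity
        a_deg = k_deg * n             # degradation propensity
        a_total = a_syn + a_deg
        dt = rng.expovariate(a_total)  # exponential waiting time
        if t + dt > t_end:
            return n
        t += dt
        if rng.random() * a_total < a_syn:
            n += 1                    # synthesis event
        else:
            n -= 1                    # degradation event
```

    Averaged over runs, the copy number fluctuates (Poisson-like) around the deterministic steady state k_syn/k_deg = 100; these fluctuations are exactly the molecular noise the deterministic formulation cannot capture.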

  13. Synchronisation of chaos and its applications

    NASA Astrophysics Data System (ADS)

    Eroglu, Deniz; Lamb, Jeroen S. W.; Pereira, Tiago

    2017-07-01

    Dynamical networks are important models for the behaviour of complex systems, modelling physical, biological and societal systems, including the brain, food webs, epidemic disease in populations, power grids and many others. Such dynamical networks can exhibit behaviour in which deterministic chaos, exhibiting unpredictability and disorder, coexists with synchronisation, a classical paradigm of order. We survey the main theory behind complete, generalised and phase synchronisation phenomena in simple as well as complex networks and discuss applications to secure communications, parameter estimation and the anticipation of chaos.

  14. Maxwell Demon Dynamics: Deterministic Chaos, the Szilard Map, and the Intelligence of Thermodynamic Systems

    NASA Astrophysics Data System (ADS)

    Boyd, Alexander B.; Crutchfield, James P.

    2016-05-01

    We introduce a deterministic chaotic system—the Szilard map—that encapsulates the measurement, control, and erasure protocol by which Maxwellian demons extract work from a heat reservoir. Implementing the demon's control function in a dynamical embodiment, our construction symmetrizes the demon and the thermodynamic system, allowing one to explore their functionality and recover the fundamental trade-off between the thermodynamic costs of dissipation due to measurement and those due to erasure. The map's degree of chaos—captured by the Kolmogorov-Sinai entropy—is the rate of energy extraction from the heat bath. Moreover, an engine's statistical complexity quantifies the minimum necessary system memory for it to function. In this way, dynamical instability in the control protocol plays an essential and constructive role in intelligent thermodynamic systems.

  15. Deterministic Joint Remote Preparation of a Four-Qubit Cluster-Type State via GHZ States

    NASA Astrophysics Data System (ADS)

    Wang, Hai-bin; Zhou, Xiao-Yan; An, Xing-xing; Cui, Meng-Meng; Fu, De-sheng

    2016-08-01

    A scheme for the deterministic joint remote preparation of a four-qubit cluster-type state using only two Greenberger-Horne-Zeilinger (GHZ) states as quantum channels is presented. In this scheme, the first sender performs a two-qubit projective measurement according to the real coefficient of the desired state. Then, the other sender utilizes the measurement result and the complex coefficient to perform another projective measurement. To obtain the desired state, the receiver applies appropriate unitary operations to his/her own two qubits and two CNOT operations to the two ancillary ones. Most interestingly, our scheme can achieve unit success probability, i.e., Psuc = 1. Furthermore, comparison reveals that the efficiency is higher than that of most other analogous schemes.

  16. Classification and unification of the microscopic deterministic traffic models.

    PubMed

    Yang, Bo; Monterola, Christopher

    2015-10-01

    We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
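
    As a concrete instance of the optimal velocity (OV) family discussed above, the sketch below integrates a single Bando-type follower behind a constant-speed leader, using dv/dt = a(V(h) - v) and dh/dt = v_leader - v (illustrative parameters and a standard tanh optimal-velocity function, not the generalized OV model derived in the paper):

```python
import math

def ov_follower(v_leader=0.5, a=1.0, h0=3.0, v0=0.0, dt=0.01, t_end=200.0):
    """Euler integration of an optimal velocity (Bando-type) car-following model."""
    def V(h):
        # optimal velocity as a function of headway h
        return math.tanh(h - 2.0) + math.tanh(2.0)
    h, v = h0, v0
    for _ in range(int(t_end / dt)):
        dv = a * (V(h) - v)      # relax toward the optimal velocity
        dh = v_leader - v        # headway change relative to the leader
        v += dv * dt
        h += dh * dt
    return h, v
```

    The follower relaxes to the leader's speed at the equilibrium headway where V(h) = v_leader; linearizing around this fixed point is precisely the "expansion around a set of well-defined ground states" that the abstract uses to classify models.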

  17. Protein Aggregation/Folding: The Role of Deterministic Singularities of Sequence Hydrophobicity as Determined by Nonlinear Signal Analysis of Acylphosphatase and Aβ(1–40)

    PubMed Central

    Zbilut, Joseph P.; Colosimo, Alfredo; Conti, Filippo; Colafranceschi, Mauro; Manetti, Cesare; Valerio, MariaCristina; Webber, Charles L.; Giuliani, Alessandro

    2003-01-01

    The problem of protein folding vs. aggregation was investigated in acylphosphatase and the amyloid protein Aβ(1–40) by means of nonlinear signal analysis of their chain hydrophobicity. Numerical descriptors of recurrence patterns provided the basis for statistical evaluation of folding/aggregation distinctive features. Static and dynamic approaches were used to elucidate conditions coincident with folding vs. aggregation using comparisons with known protein secondary structure classifications, site-directed mutagenesis studies of acylphosphatase, and molecular dynamics simulations of amyloid protein, Aβ(1–40). The results suggest that a feature derived from principal component space characterized by the smoothness of singular, deterministic hydrophobicity patches plays a significant role in the conditions governing protein aggregation. PMID:14645049

  18. On the applicability of low-dimensional models for convective flow reversals at extreme Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Mannattil, Manu; Pandey, Ambrish; Verma, Mahendra K.; Chakraborty, Sagar

    2017-12-01

    Constructing simpler models, either stochastic or deterministic, for exploring the phenomenon of flow reversals in fluid systems is in vogue across disciplines. Using direct numerical simulations and nonlinear time series analysis, we illustrate that the basic nature of flow reversals in convecting fluids can depend on the dimensionless parameters describing the system. Specifically, we find evidence of low-dimensional behavior in flow reversals occurring at zero Prandtl number, whereas we fail to find such signatures for reversals at infinite Prandtl number. Thus, even in a single system, as one varies the system parameters, one can encounter reversals that are fundamentally different in nature. Consequently, we conclude that a single general low-dimensional deterministic model cannot faithfully characterize flow reversals for every set of parameter values.

  19. The origin of life is a spatially localized stochastic transition

    PubMed Central

    2012-01-01

    Background Life depends on biopolymer sequences as catalysts and as genetic material. A key step in the Origin of Life is the emergence of an autocatalytic system of biopolymers. Here we study computational models that address the way a living autocatalytic system could have emerged from a non-living chemical system, as envisaged in the RNA World hypothesis. Results We consider (i) a chemical reaction system describing RNA polymerization, and (ii) a simple model of catalytic replicators that we call the Two’s Company model. Both systems have two stable states: a non-living state, characterized by a slow spontaneous rate of RNA synthesis, and a living state, characterized by rapid autocatalytic RNA synthesis. The origin of life is a transition between these two stable states. The transition is driven by stochastic concentration fluctuations involving relatively small numbers of molecules in a localized region of space. These models are simulated on a two-dimensional lattice in which reactions occur locally on single sites and diffusion occurs by hopping of molecules to neighbouring sites. Conclusions If diffusion is very rapid, the system is well-mixed. The transition to life becomes increasingly difficult as the lattice size is increased because the concentration fluctuations that drive the transition become relatively smaller when larger numbers of molecules are involved. In contrast, when diffusion occurs at a finite rate, concentration fluctuations are local. The transition to life occurs in one local region and then spreads across the rest of the surface. The transition becomes easier with larger lattice sizes because there are more independent regions in which it could occur. 
The key observations that apply to our models and to the real world are that the origin of life is a rare stochastic event that is localized in one region of space due to the limited rate of diffusion of the molecules involved and that the subsequent spread across the surface is deterministic. It is likely that the time required for the deterministic spread is much shorter than the waiting time for the origin, in which case life evolves only once on a planet, and then rapidly occupies the whole surface. Reviewers Reviewed by Omer Markovitch (nominated by Doron Lancet), Claus Wilke, and Nobuto Takeuchi (nominated by Eugene Koonin). PMID:23176307

  20. Complexity in Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Moore, Cristopher David

    The study of chaos has shown us that deterministic systems can have a kind of unpredictability based on limited knowledge of their initial conditions; after a finite time, the motion appears essentially random. This observation has inspired a general interest in the subject of unpredictability, and more generally, complexity: how can we characterize how "complex" a dynamical system is? In this thesis, we attempt to answer this question with a paradigm of complexity that comes from computer science. We extract sets of symbol sequences, or languages, from a dynamical system using standard methods of symbolic dynamics; we then ask what kinds of grammars or automata are needed to generate these languages. This places them in the Chomsky hierarchy, which in turn tells us something about how subtle and complex the dynamical system's behavior is. This gives us insight into the question of unpredictability, since these automata can also be thought of as computers attempting to predict the system. In the culmination of the thesis, we find a class of smooth, two-dimensional maps which are equivalent to the highest class in the Chomsky hierarchy, the Turing machine; they are capable of universal computation. Therefore, these systems possess a kind of unpredictability qualitatively different from the usual "chaos": even if the initial conditions are known exactly, questions about the system's long-term dynamics are undecidable. No algorithm exists to answer them. Although this kind of unpredictability has been discussed in the context of distributed, many-degree-of-freedom systems (for instance, cellular automata), we believe this is the first example of such phenomena in a smooth, finite-degree-of-freedom system.

  1. Rogue waves in terms of multi-point statistics and nonequilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Hadjihosseini, Ali; Lind, Pedro; Mori, Nobuhito; Hoffmann, Norbert P.; Peinke, Joachim

    2017-04-01

    Ocean waves, which lead to rogue waves, are investigated on the background of complex systems. In contrast to deterministic approaches based on the nonlinear Schroedinger equation or focusing effects, we analyze this system in terms of a noisy stochastic system. In particular we present a statistical method that maps the complexity of multi-point data into the statistics of hierarchically ordered height increments for different time scales. We show that the stochastic cascade process with Markov properties is governed by a Fokker-Planck equation. Conditional probabilities as well as the Fokker-Planck equation itself can be estimated directly from the available observational data. This stochastic description enables us to show several new aspects of wave states. Surrogate data sets can in turn be generated, allowing us to work out different statistical features of the complex sea state in general and extreme rogue wave events in particular. The results also open up new perspectives for forecasting the occurrence probability of extreme rogue wave events, and even for forecasting the occurrence of individual rogue waves based on precursory dynamics. As a new outlook, the ocean wave states will be considered in terms of nonequilibrium thermodynamics, for which the entropy production of different wave heights will be considered. We show evidence that rogue waves are characterized by negative entropy production. The statistics of the entropy production can be used to distinguish different wave states.
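    The drift and diffusion terms of such a Fokker-Planck description can be estimated directly from a time series via conditional moments of the increments. A minimal sketch on a synthetic Ornstein-Uhlenbeck series (all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def simulate_ou(theta=1.0, sigma=0.5, dt=1e-3, n=200_000, seed=0):
    """Euler-Maruyama realisation of dx = -theta*x dt + sigma dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    noise = rng.normal(0.0, np.sqrt(dt), n - 1)
    for i in range(n - 1):
        x[i + 1] = x[i] - theta * x[i] * dt + sigma * noise[i]
    return x

def kramers_moyal(x, dt, bins=30):
    """Estimate drift D1(x) and diffusion D2(x) from conditional increment moments."""
    dx = np.diff(x)
    centers = np.linspace(x.min(), x.max(), bins)
    half = (centers[1] - centers[0]) / 2.0
    d1 = np.full(bins, np.nan)
    d2 = np.full(bins, np.nan)
    for i, c in enumerate(centers):
        mask = np.abs(x[:-1] - c) < half   # condition on the current state
        if mask.sum() < 1000:              # skip poorly populated bins
            continue
        d1[i] = dx[mask].mean() / dt
        d2[i] = (dx[mask] ** 2).mean() / dt
    return centers, d1, d2

x = simulate_ou()
centers, d1, d2 = kramers_moyal(x, dt=1e-3)
```

    For this process the estimated drift should be approximately linear with slope -θ and the diffusion approximately flat at σ².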

  2. Quantum teleportation via quantum channels with non-maximal Schmidt rank

    NASA Astrophysics Data System (ADS)

    Solís-Prosser, M. A.; Jiménez, O.; Neves, L.; Delgado, A.

    2013-03-01

    We study the problem of teleporting unknown pure states of a single qudit via a pure quantum channel with non-maximal Schmidt rank. We relate this process to the discrimination of linearly dependent symmetric states with the help of the maximum-confidence discrimination strategy. We show that, with a certain probability, it is possible to teleport with a fidelity larger than the optimal fidelity of deterministic teleportation.

  3. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices

    PubMed Central

    Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen

    2013-01-01

    In compressed sensing, one takes n samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (δ, ρ)-phase diagram (with undersampling ratio δ = n/N and sparsity ratio ρ = k/n), convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a set X for four different choices of X, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
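    The convex-optimization step is basis pursuit, min ‖x‖₁ subject to Ax = y, which reduces to a linear program. A sketch of one success-region experiment for a Gaussian matrix (the dimensions n, N, k are invented for illustration and sit well inside the Gaussian success region, δ = 0.5, ρ = 0.1):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 subject to Ax = y, via the standard LP split x = u - v, u, v >= 0."""
    n, N = A.shape
    res = linprog(c=np.ones(2 * N),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    return res.x[:N] - res.x[N:]

rng = np.random.default_rng(1)
N, n, k = 200, 100, 10                       # ambient dim, measurements, sparsity
A = rng.normal(size=(n, N)) / np.sqrt(n)     # Gaussian sensing matrix
x0 = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x0[support] = rng.normal(size=k)             # k-sparse object
x_hat = basis_pursuit(A, A @ x0)
recovered = np.allclose(x_hat, x0, atol=1e-5)
```

    Below the phase-transition curve the sparsest solution is recovered exactly; repeating the experiment over a grid of (δ, ρ) values traces out the empirical transition.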

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayes, T.; Smith, K.S.; Severino, F.

    A critical capability of the new RHIC low level rf (LLRF) system is the ability to synchronize signals across multiple locations. The 'Update Link' provides this functionality. The 'Update Link' is a deterministic serial data link based on the Xilinx RocketIO protocol that is broadcast over fiber optic cable at 1 gigabit per second (Gbps). The link provides timing events and data packets as well as time stamp information for synchronizing diagnostic data from multiple sources. The new RHIC LLRF was designed to be a flexible, modular system. The system is constructed of numerous independent RF Controller chassis. To provide synchronization among all of these chassis, the Update Link system was designed. The Update Link system provides a low latency, deterministic data path to broadcast information to all receivers in the system. The Update Link system is based on a central hub, the Update Link Master (ULM), which generates the data stream that is distributed via fiber optic links. Downstream chassis have non-deterministic connections back to the ULM that allow any chassis to provide data that is broadcast globally.

  5. Analysis of stochastic model for non-linear volcanic dynamics

    NASA Astrophysics Data System (ADS)

    Alexandrov, D.; Bashkirtseva, I.; Ryashko, L.

    2014-12-01

    Motivated by important geophysical applications, we consider a dynamic model of the magma-plug system previously derived by Iverson et al. (2006) under the influence of stochastic forcing. Due to the strong nonlinearity of the friction force for a solid plug along its margins, the initial deterministic system exhibits impulsive oscillations. Two types of dynamic behavior of the system under parametric stochastic forcing have been found: random trajectories are either scattered on both sides of the deterministic cycle or grouped on its internal side only. It is shown that dispersions are highly inhomogeneous along cycles in the presence of noise. The effects of noise-induced shifts, pressure stabilization and localization of random trajectories have been revealed with increasing noise intensity. The plug velocity, pressure and displacement are highly dependent on noise intensity as well. These new stochastic phenomena are related to the nonlinear peculiarities of the deterministic phase portrait. It is demonstrated that the repetitive stick-slip motions of the magma-plug system in the case of stochastic forcing can be connected with drumbeat earthquakes.

  6. Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.

    PubMed

    Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O

    2006-03-01

    The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables; the aim is to generate counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The goal is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
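    As a minimal illustration of the deterministic framework (in Python rather than MATLAB), the reaction rate equations for a toy reversible isomerisation A ⇌ B can be integrated with a standard ODE solver; the rate constants here are invented for the example, not taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reaction rate equations for A <-> B, forward rate k1, backward rate k2.
k1, k2 = 2.0, 1.0

def rre(t, c):
    a, b = c
    flux = k1 * a - k2 * b      # net conversion A -> B
    return [-flux, flux]

# start with all mass in A; concentrations as continuous state variables
sol = solve_ivp(rre, (0.0, 10.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
a_eq, b_eq = sol.y[:, -1]
# at equilibrium b/a = k1/k2 = 2, with total mass a + b conserved
```

    The trajectory relaxes to the fixed point (a, b) = (1/3, 2/3) of the rate equations, the deterministic counterpart of the CME's stationary mean.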

  7. Exact and approximate stochastic simulation of intracellular calcium dynamics.

    PubMed

    Wieder, Nicolas; Fink, Rainer H A; Wegner, Frederic von

    2011-01-01

    In simulations of chemical systems, the main task is to find an exact or approximate solution of the chemical master equation (CME) that satisfies certain constraints with respect to computation time and accuracy. While Brownian motion simulations of single molecules are often too time consuming to represent the mesoscopic level, the classical Gillespie algorithm is a stochastically exact algorithm that provides satisfying results in the representation of calcium microdomains. Gillespie's algorithm can be approximated via the tau-leap method and the chemical Langevin equation (CLE). Both methods lead to a substantial acceleration in computation time and a relatively small decrease in accuracy. Elimination of the noise terms leads to the classical, deterministic reaction rate equations (RREs). For complex multiscale systems, hybrid simulations are increasingly proposed to combine the advantages of stochastic and deterministic algorithms. Often-used exemplary cell types in this context are striated muscle cells (e.g., cardiac and skeletal muscle cells). The properties of these cells are well described and they express many common calcium-dependent signaling pathways. The purpose of the present paper is to provide an overview of the aforementioned simulation approaches and their mutual relationships in the spectrum ranging from stochastic to deterministic algorithms.
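    The stochastically exact end of that spectrum can be sketched with the Gillespie algorithm for a simple birth-death process (production at constant rate k, first-order degradation at rate γ·x); the process and parameters are a generic illustration, not one of the calcium models discussed in the paper:

```python
import numpy as np

def gillespie_birth_death(k=10.0, gamma=1.0, x0=0, t_end=500.0, seed=0):
    """Exact stochastic simulation of production (propensity k)
    and first-order degradation (propensity gamma*x)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a_birth, a_death = k, gamma * x
        a_total = a_birth + a_death
        t += rng.exponential(1.0 / a_total)           # waiting time to next event
        x += 1 if rng.uniform() * a_total < a_birth else -1   # pick which reaction fires
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie_birth_death()
# time-weighted mean copy number; the deterministic RRE fixed point is k/gamma = 10
mean_x = np.average(states[:-1], weights=np.diff(times))
```

    Averaging the exact trajectory recovers the RRE fixed point k/γ, while the fluctuations around it are what the tau-leap method and the CLE approximate and the RREs discard.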

  8. Optical microtopographic inspection of asphalt pavement surfaces

    NASA Astrophysics Data System (ADS)

    Costa, Manuel F. M.; Freitas, E. F.; Torres, H.; Cerezo, V.

    2017-08-01

    Microtopographic and rugometric characterization of surfaces is routinely and effectively performed non-invasively by a number of different optical methods. Rough surfaces are also inspected using optical profilometers and microtopographers. The characterization of road asphalt pavement surfaces produced in different ways and with different compositions is fundamental for economic and safety reasons. Because these surfaces have complex structures, including topographically, with different ranges of form error and roughness, their inspection is difficult to perform non-invasively. In this communication we report on the optical non-contact rugometric characterization of the surface of different types of road pavements performed at the Microtopography Laboratory of the Physics Department of the University of Minho.

  9. An enhanced deterministic K-Means clustering algorithm for cancer subtype prediction from gene expression data.

    PubMed

    Nidheesh, N; Abdul Nazeer, K A; Ameer, P M

    2017-12-01

    Clustering algorithms with steps involving randomness usually give different results on different executions for the same dataset. This non-deterministic nature of algorithms such as the K-Means clustering algorithm limits their applicability in areas such as cancer subtype prediction using gene expression data. It is hard to sensibly compare the results of such algorithms with those of other algorithms. The non-deterministic nature of K-Means is due to its random selection of data points as initial centroids. We propose an improved, density-based version of K-Means, which involves a novel and systematic method for selecting initial centroids. The key idea of the algorithm is to select as the initial centroids data points which belong to dense regions and which are adequately separated in feature space. We compared the proposed algorithm to a set of eleven widely used single clustering algorithms and a prominent ensemble clustering algorithm which is being used for cancer data classification, based on their performance on ten cancer gene expression datasets. The proposed algorithm showed better overall performance than the others. There is a pressing need in the Biomedical domain for simple, easy-to-use and more accurate Machine Learning tools for cancer subtype prediction. The proposed algorithm is simple, easy-to-use and gives stable results. Moreover, it provides comparatively better predictions of cancer subtypes from gene expression data. Copyright © 2017 Elsevier Ltd. All rights reserved.
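    The seeding idea can be sketched as follows; the density estimate and separation rule below illustrate the principle (dense, well-separated points chosen deterministically) and are not necessarily the exact criteria of the paper:

```python
import numpy as np

def density_separated_seeds(X, k, n_neighbors=10):
    """Deterministic seeding for K-Means: pick points from dense regions
    that are well separated in feature space (illustrative rule)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # local density: inverse mean distance to the nearest neighbours
    nn = np.sort(d, axis=1)[:, 1:n_neighbors + 1]
    density = 1.0 / (nn.mean(axis=1) + 1e-12)
    seeds = [int(np.argmax(density))]          # densest point first
    while len(seeds) < k:
        # favour dense points far from the seeds already chosen
        score = density * d[:, seeds].min(axis=1)
        score[seeds] = -np.inf
        seeds.append(int(np.argmax(score)))
    return X[np.array(seeds)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(60, 2)),
               rng.normal(5.0, 0.3, size=(60, 2))])   # two well-separated blobs
seeds = density_separated_seeds(X, 2)
```

    Because every step is an argmax over fixed quantities, repeated runs on the same dataset return identical seeds, which is the property that makes downstream comparisons reproducible.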

  10. Weakly anomalous diffusion with non-Gaussian propagators

    NASA Astrophysics Data System (ADS)

    Cressoni, J. C.; Viswanathan, G. M.; Ferreira, A. S.; da Silva, M. A. A.

    2012-08-01

    A poorly understood phenomenon seen in complex systems is diffusion characterized by Hurst exponent H≈1/2 but with non-Gaussian statistics. Motivated by such empirical findings, we report an exact analytical solution for a non-Markovian random walk model that gives rise to weakly anomalous diffusion with H=1/2 but with a non-Gaussian propagator.

  11. PROCEEDINGS OF THE SYMPOSIUM ON SYSTEM THEORY, NEW YORK, N. Y. APRIL 20, 21, 22 1965. VOLUME XV.

    DTIC Science & Technology

    The papers presented at the symposium may be grouped as follows: (1) What is system theory; (2) Representations of systems; (3) System dynamics; (4) Non-deterministic systems; (5) Optimal systems; and (6) Applications of system theory.

  12. A stochastic method for stand-alone photovoltaic system sizing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabral, Claudia Valeria Tavora; Filho, Delly Oliveira; Martins, Jose Helvecio

    Photovoltaic systems utilize solar energy to generate electrical energy to meet load demands. Optimal sizing of these systems includes the characterization of solar radiation. Solar radiation at the Earth's surface has random characteristics and has been the focus of various academic studies. The objective of this study was to stochastically analyze the parameters involved in the sizing of photovoltaic generators and to develop a methodology for sizing stand-alone photovoltaic systems. Energy storage for isolated systems and solar radiation were analyzed stochastically due to their random behavior. For the development of the proposed methodology, stochastic analyses were studied, including the Markov chain and the beta probability density function. The obtained results were compared with those from stand-alone sizing using the deterministic Sandia method, against which the stochastic model presented more reliable values. Both models present advantages and disadvantages; however, the stochastic one is more complex and provides more reliable and realistic results. (author)

  13. Entanglement sensitivity to signal attenuation and amplification

    NASA Astrophysics Data System (ADS)

    Filippov, Sergey N.; Ziman, Mário

    2014-07-01

    We analyze general laws of continuous-variable entanglement dynamics during the deterministic attenuation and amplification of the physical signal carrying the entanglement. These processes are inevitably accompanied by noises, so we find fundamental limitations on the noise intensities that destroy entanglement of Gaussian and non-Gaussian input states. The phase-insensitive amplification Φ1⊗Φ2⊗⋯⊗ΦN with power gain κi ≥ 2 (≈3 dB, i = 1,...,N) is shown to destroy the entanglement of any N-mode Gaussian state even in the case of quantum-limited performance. In contrast, we demonstrate non-Gaussian states with an energy of a few photons such that their entanglement survives within a wide range of noises beyond quantum-limited performance for any degree of attenuation or gain. We detect entanglement preservation properties of the channel Φ1⊗Φ2, where each mode is deterministically attenuated or amplified. Gaussian states of high energy are shown to be robust to very asymmetric attenuations, whereas non-Gaussian states are at an advantage in the case of symmetric attenuation and general amplification. If Φ1 = Φ2, the total noise should not exceed (1/2)√(κ²+1) to guarantee entanglement preservation.

  14. Vibroacoustic Response of Pad Structures to Space Shuttle Launch Acoustic Loads

    NASA Technical Reports Server (NTRS)

    Margasahayam, R. N.; Caimi, Raoul E.

    1995-01-01

    This paper presents a deterministic theory for the random vibration problem of predicting the response of structures in the low-frequency range (0 to 20 hertz) of launch transients. Also presented are some innovative ways to characterize noise, along with highlights of ongoing test-analysis correlation efforts under the Verification Test Article (VETA) project.

  15. Mutation Clusters from Cancer Exome.

    PubMed

    Kakushadze, Zura; Yu, Willie

    2017-08-15

    We apply our statistically deterministic machine learning/clustering algorithm *K-means (recently developed in https://ssrn.com/abstract=2908286) to 10,656 published exome samples for 32 cancer types. A majority of cancer types exhibit a mutation clustering structure. Our results are in-sample stable. They are also out-of-sample stable when applied to 1389 published genome samples across 14 cancer types. In contrast, we find in- and out-of-sample instabilities in cancer signatures extracted from exome samples via nonnegative matrix factorization (NMF), a computationally-costly and non-deterministic method. Extracting stable mutation structures from exome data could have important implications for speed and cost, which are critical for early-stage cancer diagnostics, such as novel blood-test methods currently in development.

  16. Mutation Clusters from Cancer Exome

    PubMed Central

    Kakushadze, Zura; Yu, Willie

    2017-01-01

    We apply our statistically deterministic machine learning/clustering algorithm *K-means (recently developed in https://ssrn.com/abstract=2908286) to 10,656 published exome samples for 32 cancer types. A majority of cancer types exhibit a mutation clustering structure. Our results are in-sample stable. They are also out-of-sample stable when applied to 1389 published genome samples across 14 cancer types. In contrast, we find in- and out-of-sample instabilities in cancer signatures extracted from exome samples via nonnegative matrix factorization (NMF), a computationally-costly and non-deterministic method. Extracting stable mutation structures from exome data could have important implications for speed and cost, which are critical for early-stage cancer diagnostics, such as novel blood-test methods currently in development. PMID:28809811

  17. On the Development of a Deterministic Three-Dimensional Radiation Transport Code

    NASA Technical Reports Server (NTRS)

    Rockell, Candice; Tweed, John

    2011-01-01

    Since astronauts on future deep space missions will be exposed to dangerous radiation, there is a need to accurately model the transport of radiation through shielding materials and to estimate the received radiation dose. In response to this need, a three-dimensional deterministic code for space radiation transport is now under development. The new code GRNTRN is based on a Green's function solution of the Boltzmann transport equation that is constructed in the form of a Neumann series. Analytical approximations will be obtained for the first three terms of the Neumann series and the remainder will be estimated by a non-perturbative technique. This work discusses progress made to date and exhibits some computations based on the first two Neumann series terms.

  18. Modelling human decision-making in coupled human and natural systems

    NASA Astrophysics Data System (ADS)

    Feola, G.

    2012-12-01

    A solid understanding of human decision-making is essential to analyze the complexity of coupled human and natural systems (CHANS) and to inform policies that promote resilience in the face of environmental change. Human decisions drive and/or mediate the interactions and feedbacks, and contribute to the heterogeneity and non-linearity that characterize CHANS. However, human decision-making is usually modeled over-simplistically, whereby human agents are represented deterministically as either dumb or clairvoyant decision-makers. Decision-making models fall short in the integration of both environmental and human behavioral drivers and, concerning the latter, tend to focus on only one category, e.g. economic, cultural, or psychological. Furthermore, these models render a linear decision-making process and therefore fail to account for the recursive co-evolutionary dynamics in CHANS. As a result, these models constitute only a weak basis for policy-making. There is therefore scope, and an urgent need, for better approaches to human decision-making that produce the knowledge needed to inform vulnerability reduction policies in the face of environmental change. This presentation synthesizes the current state of the art in modelling human decision-making in CHANS, with particular reference to agricultural systems, and delineates how the above-mentioned shortcomings can be overcome. Through examples from research on pesticide use and adaptation to climate change, both based on the integrative agent-centered framework (Feola and Binder, 2010), the approach for an improved understanding of human agents in CHANS is illustrated. This entails an integrative approach, a focus on behavioral dynamics rather than states, feedbacks between the individual and system levels, and openness to heterogeneity.

  19. Novel physical constraints on implementation of computational processes

    NASA Astrophysics Data System (ADS)

    Wolpert, David; Kolchinsky, Artemy

    Non-equilibrium statistical physics permits us to analyze computational processes, i.e., ways to drive a physical system such that its coarse-grained dynamics implements some desired map. It is now known how to implement any such desired computation without dissipating work, and what the minimal (dissipationless) work is that such a computation will require (the so-called generalized Landauer bound). We consider how these analyses change if we impose realistic constraints on the computational process. First, we analyze how many degrees of freedom of the system must be controlled, in addition to the ones specifying the information-bearing degrees of freedom, in order to avoid dissipating work during a given computation, when local detailed balance holds. We analyze this issue for deterministic computations, deriving a state-space vs. speed trade-off, and use our results to motivate a measure of the complexity of a computation. Second, we consider computations that are implemented with logic circuits, in which only a small number of degrees of freedom are coupled at a time. We show that the way a computation is implemented using circuits affects its minimal work requirements, and relate these minimal work requirements to information-theoretic measures of complexity.

  20. Decrease of cardiac chaos in congestive heart failure

    NASA Astrophysics Data System (ADS)

    Poon, Chi-Sang; Merrill, Christopher K.

    1997-10-01

    The electrical properties of the mammalian heart undergo many complex transitions in normal and diseased states. It has been proposed that the normal heartbeat may display complex nonlinear dynamics, including deterministic chaos, and that such cardiac chaos may be a useful physiological marker for the diagnosis and management of certain heart conditions. However, it is not clear whether the heartbeat series of healthy and diseased hearts are chaotic or stochastic, or whether cardiac chaos represents normal or abnormal behaviour. Here we have used a highly sensitive technique, which is robust to random noise, to detect chaos. We analysed the electrocardiograms from a group of healthy subjects and those with severe congestive heart failure (CHF), a clinical condition associated with a high risk of sudden death. The short-term variations of beat-to-beat interval exhibited strongly and consistently chaotic behaviour in all healthy subjects, but were frequently interrupted by periods of seemingly non-chaotic fluctuations in patients with CHF. Chaotic dynamics in the CHF data, even when discernible, exhibited a high degree of random variability over time, suggesting a weaker form of chaos. These findings suggest that cardiac chaos is prevalent in the healthy heart, and a decrease in such chaos may be indicative of CHF.

  1. Nested polynomial trends for the improvement of Gaussian process-based predictors

    NASA Astrophysics Data System (ADS)

    Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.

    2017-10-01

    The role of simulation keeps increasing for the sensitivity analysis and uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge number of code evaluations. When the computational cost associated with one particular evaluation of the code is high, such direct approaches based on the computer code alone are not affordable. Surrogate models therefore have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. When confronted with deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of this mean function, which is based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.
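    A minimal sketch of kriging with a polynomial mean, the baseline the paper improves upon (here a plain least-squares trend rather than the nested composition of two polynomials; kernel, length scale and test function are all illustrative):

```python
import numpy as np

def rbf(a, b, length_scale=0.2):
    """Squared-exponential covariance for 1-D inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

def gpr_with_poly_trend(x, y, x_new, degree=2, jitter=1e-6):
    """Kriging with a polynomial mean: fit the trend by least squares,
    then interpolate the residuals with a zero-mean GP."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    K = rbf(x, x) + jitter * np.eye(len(x))   # jitter for numerical stability
    alpha = np.linalg.solve(K, resid)
    return np.polyval(coeffs, x_new) + rbf(x_new, x) @ alpha

# surrogate for an "expensive code" evaluated at 20 design points
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * x)
pred = gpr_with_poly_trend(x, y, x)
```

    The predictor interpolates the code evaluations (up to the jitter) and falls back on the polynomial trend away from them; richer trend parametrizations aim to capture more of the nonlinearity in the mean so the GP has less residual to explain.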

  2. Quantum Matching Theory (with new complexity-theoretic, combinatorial and topological insights on the nature of the quantum entanglement)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurvits, L.

    2002-01-01

    Classical matching theory can be defined in terms of matrices with nonnegative entries. The notion of a positive operator, central in quantum theory, is a natural generalization of matrices with non-negative entries. Based on this point of view, we introduce a definition of perfect quantum (operator) matching. We show that the new notion inherits many 'classical' properties, but not all of them. This new notion goes beyond matroids. For separable bipartite quantum states this new notion coincides with the full-rank property of the intersection of two corresponding geometric matroids. In the classical situation, permanents are naturally associated with perfect matchings. We introduce an analog of permanents for positive operators, called the Quantum Permanent, and show how this generalization of the permanent is related to quantum entanglement. Besides many other things, Quantum Permanents provide new rational inequalities necessary for the separability of bipartite quantum states. Using Quantum Permanents, we give a deterministic poly-time algorithm to solve the Hidden Matroids Intersection Problem and indicate some 'classical' complexity difficulties associated with quantum entanglement. Finally, we prove that the weak membership problem for the convex set of separable bipartite density matrices is NP-HARD.

  3. The ESPAT tool: a general-purpose DSS shell for solving stochastic optimization problems in complex river-aquifer systems

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel; Tilmant, Amaury

    2015-04-01

    Stochastic programming methods are better suited to deal with the inherent uncertainty of inflow time series in water resource management. However, one of the most important hurdles to their practical implementation is the lack of generalized Decision Support System (DSS) shells, which are usually based on a deterministic approach. The purpose of this contribution is to present a general-purpose DSS shell, named Explicit Stochastic Programming Advanced Tool (ESPAT), able to build and solve stochastic programming problems for most water resource systems. It implements a hydro-economic approach, optimizing the total system benefits as the sum of the benefits obtained by each user. It has been coded using GAMS, and implements a Microsoft Excel interface with a GAMS-Excel link that allows the user to introduce the required data and recover the results. Therefore, no GAMS skills are required to run the program. The tool is divided into four modules according to its capabilities: 1) the ESPATR module, which performs stochastic optimization procedures in surface water systems using a Stochastic Dual Dynamic Programming (SDDP) approach; 2) the ESPAT_RA module, which optimizes coupled surface-groundwater systems using a modified SDDP approach; 3) the ESPAT_SDP module, capable of performing stochastic optimization procedures in small-size surface systems using a standard SDP approach; and 4) the ESPAT_DET module, which implements a deterministic programming procedure using non-linear programming, able to solve deterministic optimization problems in complex surface-groundwater river basins. The case study of the Mijares river basin (Spain) is used to illustrate the method. It consists of two reservoirs in series, one aquifer and four agricultural demand sites, currently managed using historical (XIV century) rights, which give priority to the most traditional irrigation district over the XX century agricultural developments. 
Its size makes it possible to use either the SDP or the SDDP methods. The independent use of surface and groundwater can be examined with and without the aquifer. The ESPAT_DET, ESPATR and ESPAT_SDP modules were executed for the surface system, while the ESPAT_RA and the ESPAT_DET modules were run for the surface-groundwater system. The surface system's results show a similar performance for the ESPAT_SDP and ESPATR modules, which outperform the current policies while being outperformed by the ESPAT_DET results, which have the advantage of perfect foresight. The surface-groundwater system's results show a robust situation in which the differences between the modules' results and the current policies are smaller due to the use of pumped groundwater in the XX century crops when surface water is scarce. The results are realistic, with the deterministic optimization outperforming the stochastic one, which in turn outperforms the current policies, showing that the tool is able to stochastically optimize river-aquifer water resource systems. We are currently working on the application of these tools to the analysis of changes in system operation under global change conditions. ACKNOWLEDGEMENT: This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) funds.
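As a toy illustration of the standard SDP approach behind tools like ESPAT (not the ESPAT code itself), the following backward recursion optimizes releases for a single discretized reservoir; the inflow distribution, benefit function and horizon are all assumed for illustration.

```python
import numpy as np

S = np.arange(0, 5)            # discrete storage levels (0..4 units)
Q = np.array([0, 1, 2])        # possible stage inflows
P = np.array([0.3, 0.5, 0.2])  # inflow probabilities
T = 12                         # monthly stages
S_MAX = S[-1]

def benefit(release):
    return np.sqrt(release)    # assumed concave benefit of delivered water

V = np.zeros(len(S))           # terminal value function
policy = np.zeros((T, len(S)), dtype=int)
for t in reversed(range(T)):   # backward recursion over stages
    V_new = np.empty(len(S))
    for i, s in enumerate(S):
        best = -np.inf
        for r in range(s + 1):             # feasible release decisions
            val = 0.0
            for q, p in zip(Q, P):         # expectation over random inflow
                s_next = min(s - r + q, S_MAX)  # mass balance with spill at capacity
                val += p * (benefit(r) + V[s_next])
            if val > best:
                best, policy[t, i] = val, r
        V_new[i] = best
    V = V_new
```

The recursion yields a release policy conditioned on storage at each stage; SDDP replaces the exhaustive state enumeration with sampled cuts so that multi-reservoir, multi-aquifer systems remain tractable.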

  4. Synthesis, characterization and biological evaluation of a (67)Ga-labeled (η(6)-Tyr)Ru(η(5)-Cp) peptide complex with the HAV motif.

    PubMed

    Bihari, Zsolt; Vultos, Filipe; Fernandes, Célia; Gano, Lurdes; Santos, Isabel; Correia, João D G; Buglyó, Péter

    2016-07-01

    Heterobimetallic complexes with the evolutionarily well-preserved histidyl-alanyl-valinyl (HAV) sequence for cadherin targeting, an organometallic Ru core with anticancer activity and a radioactive moiety for imaging may hold potential as theranostic agents for cancer. Visible-light irradiation of the HAVAY-NH2 pentapeptide in the presence of [(η(5)-Cp)Ru(η(6)-naphthalene)](+) resulted in the formation of a full sandwich type complex, (η(6)-Tyr-RuCp)-HAVAY-NH2, in aqueous solution, where the metal ion is connected to the Tyr (Y) unit of the peptide. Conjugation of this complex to 2,2'-(7-(1-carboxy-4-((4-isothiocyanatobenzyl)amino)-4-oxobutyl)-1,4,7-triazonane-1,4-diyl)diacetic acid (NODA-GA) and subsequent metalation of the resulting product with stable ((nat)Ga) and radioactive ((67)Ga) isotopes yielded (nat)Ga/(67)Ga-NODA-GA-[(η(6)-Tyr-RuCp)-HAVAY-NH2]. The non-radioactive compounds were characterized by NMR spectroscopy and mass spectrometry. The cellular uptake and cytotoxicity of the radioactive and non-radioactive complexes, respectively, were evaluated in various human cancer cell lines characterized by different levels of N- or E-cadherin expression. Results from these studies indicate moderate cellular uptake of the radioactive complexes. However, the inhibition of cell proliferation was not relevant. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Characterizing Protein Complexes with UV absorption, Light Scattering, and Refractive Index Detection.

    NASA Astrophysics Data System (ADS)

    Trainoff, Steven

    2009-03-01

    Many modern pharmaceuticals and naturally occurring biomolecules consist of complexes of proteins and polyethylene glycol or carbohydrates. In the case of vaccine development, these complexes are often used to induce or amplify immune responses. For protein therapeutics they are used to modify solubility and function, or to control the rate of degradation and elimination of a drug from the body. Characterizing the stoichiometry of these complexes is an important industrial problem that presents a formidable challenge to analytical instrument designers. Traditional analytical methods, such as fluorescent tagging, chemical assays, and mass spectrometry, perturb the system so dramatically that the complexes are often destroyed or uncontrollably modified by the measurement. A solution to this problem consists of fractionating the samples and then measuring the fractions using sequential non-invasive detectors that are sensitive to different components of the complex. We present results using UV absorption, which is primarily sensitive to the protein fraction, light scattering, which measures the total weight-average molar mass, and refractive index detection, which measures the net concentration. We also present a solution to the inter-detector band-broadening problem that has heretofore made this approach impractical. We present instrumentation and an analysis method that overcome these obstacles and make this technique a reliable and robust way of non-invasively characterizing these industrially important compounds.
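The three-detector analysis above can be sketched in its simplest dilute-limit form. The Debye relation Kc/R = 1/Mw is standard for static light scattering, but the single-slice, single-dn/dc treatment and all numbers below are illustrative assumptions; a real conjugate analysis must account for the different refractive index increments and extinction coefficients of the protein and modifier components.

```python
def conjugate_analysis(uv_conc, ri_conc, rayleigh_ratio, K):
    """Toy three-detector slice analysis:
    - UV absorption   -> protein concentration (g/mL, assumed calibrated)
    - refractive index -> total concentration (g/mL)
    - light scattering -> weight-average molar mass via Kc/R = 1/Mw
    Returns (protein mass fraction, Mw in g/mol)."""
    protein_frac = uv_conc / ri_conc
    mw = rayleigh_ratio / (K * ri_conc)
    return protein_frac, mw

# Synthetic slice: 0.6 mg/mL protein in 1.0 mg/mL total conjugate,
# with an excess Rayleigh ratio consistent with Mw = 1e5 g/mol.
protein_frac, mw = conjugate_analysis(0.6e-3, 1.0e-3, 2e-5, 2e-7)
```

Dividing the protein mass out of the slice Mw would then give the average mass of the non-protein modifier per complex, which is the stoichiometric quantity of interest.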

  6. Dynamical signatures of isometric force control as a function of age, expertise, and task constraints.

    PubMed

    Vieluf, Solveig; Sleimen-Malkoun, Rita; Voelcker-Rehage, Claudia; Jirsa, Viktor; Reuter, Eva-Maria; Godde, Ben; Temprado, Jean-Jacques; Huys, Raoul

    2017-07-01

    From the conceptual and methodological framework of the dynamical systems approach, force control results from complex interactions of various subsystems yielding observable behavioral fluctuations, which comprise both deterministic (predictable) and stochastic (noise-like) dynamical components. Here, we investigated these components contributing to the observed variability in force control in groups of participants differing in age and expertise level. To this aim, young (18-25 yr) as well as late middle-aged (55-65 yr) novices and experts (precision mechanics) performed a force maintenance and a force modulation task. Results showed that whereas the amplitude of force variability did not differ across groups in the maintenance tasks, in the modulation task it was higher for late middle-aged novices than for experts and higher for both these groups than for young participants. Within both tasks and for all groups, stochastic fluctuations were lowest where the deterministic influence was smallest. However, although all groups showed similar dynamics underlying force control in the maintenance task, a group effect was found for deterministic and stochastic fluctuations in the modulation task. The latter findings imply that both components were involved in the observed group differences in the variability of force fluctuations in the modulation task. These findings suggest that between groups the general characteristics of the dynamics do not differ in either task and that force control is more affected by age than by expertise. However, expertise seems to counteract some of the age effects. NEW & NOTEWORTHY Stochastic and deterministic dynamical components contribute to force production. Dynamical signatures differ between force maintenance and cyclic force modulation tasks but hardly between age and expertise groups. 
Differences in both stochastic and deterministic components are associated with group differences in behavioral variability, and observed behavioral variability is more strongly task dependent than person dependent. Copyright © 2017 the American Physiological Society.
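Separating the deterministic (drift) and stochastic (diffusion) components of an observed signal, as in the study above, is often done by estimating the first two Kramers-Moyal coefficients from binned increments. A sketch on a simulated Ornstein-Uhlenbeck process standing in for force data (all parameters assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 200_000
theta, D = 1.0, 0.5          # true drift slope and diffusion strength

# Simulated signal: deterministic drift -theta*x plus stochastic
# forcing of strength sqrt(2*D), Euler-Maruyama integration.
noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n - 1)
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + noise[i]

# First two Kramers-Moyal coefficients, estimated bin-by-bin:
# drift  D1(x) = <dx | x> / dt, diffusion D2(x) = <dx^2 | x> / (2 dt).
dx = np.diff(x)
bins = np.linspace(-1.5, 1.5, 13)
idx = np.digitize(x[:-1], bins)
drift, diff2 = {}, {}
for b in range(1, len(bins)):
    sel = idx == b
    if sel.sum() > 500:                    # keep only well-populated bins
        centre = 0.5 * (bins[b - 1] + bins[b])
        drift[centre] = dx[sel].mean() / dt           # should track -theta*x
        diff2[centre] = (dx[sel] ** 2).mean() / (2 * dt)  # should track D
```

The recovered drift is approximately linear with slope -theta and the diffusion approximately constant at D, which is the kind of decomposition used to compare the deterministic and noise-like contributions across groups and tasks.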

  7. Scattering effects of machined optical surfaces

    NASA Astrophysics Data System (ADS)

    Thompson, Anita Kotha

    1998-09-01

    Optical fabrication is one of the most labor-intensive industries in existence. Lensmakers use pitch to affix glass blanks to metal chucks that hold the glass as they grind it with tools that have not changed much in fifty years. Recent demands placed on traditional optical fabrication processes in terms of surface accuracy, smoothness, and cost effectiveness have resulted in the exploitation of precision machining technology to develop a new generation of computer numerically controlled (CNC) optical fabrication equipment. This new kind of precision machining process is called deterministic microgrinding. The most conspicuous feature of optical surfaces manufactured by precision machining processes (such as single-point diamond turning or deterministic microgrinding) is the presence of residual cutting tool marks. These residual tool marks exhibit a highly structured topography of periodic azimuthal or radial deterministic marks in addition to random microroughness. These distinct topographic features give rise to surface scattering effects that can significantly degrade optical performance. In this dissertation project we investigate the scattering behavior of machined optical surfaces and their imaging characteristics. In particular, we characterize the residual optical fabrication errors and relate the resulting scattering behavior to the tool and machine parameters in order to evaluate and improve the deterministic microgrinding process. Other desired information derived from the investigation of scattering behavior is the set of optical fabrication tolerances necessary to satisfy specific image quality requirements. Optical fabrication tolerances are a major cost driver for any precision optical manufacturing technology. 
The derivation and control of the optical fabrication tolerances necessary for different applications and operating wavelength regimes will play a unique and central role in establishing deterministic microgrinding as a preferred and a cost-effective optical fabrication process. Other well understood optical fabrication processes will also be reviewed and a performance comparison with the conventional grinding and polishing technique will be made to determine any inherent advantages in the optical quality of surfaces produced by other techniques.

  8. Synthesis, characterization and solid state electrical properties of 1-D coordination polymer of the type [CuxNi1-x(dadb)·yH2O]n

    NASA Astrophysics Data System (ADS)

    Prasad, R. L.; Kushwaha, A.; Shrivastava, O. N.

    2012-12-01

    New heterobimetallic complexes [CuxNi1-x(dadb)·yH2O]n {where dadb=2,5-Diamino-3,6-dichloro-1,4-benzoquinone (1); x=1 (2), 0.5 (4), 0.25 (5), 0.125 (6), 0.0625 (7) and 0 (3); y=2; n=degree of polymerization} were synthesized and characterized. Heterobimetallic complexes show normal magnetic moments, whereas monometallic complexes exhibit magnetic moments below the spin-only value. Thermogravimetric analysis shows that degradation of the dadb ligand moiety is controlled by the electronic environment of the Cu(II) ions in preference over Ni(II) in heterobimetallic complexes. The existence of mixed valency/non-integral oxidation states of the copper and nickel metal ions in complex 4 has been inferred from magnetic moment and ESR spectral results. Solid state dc electrical conductivity of all the complexes was investigated. Monometallic complexes were found to be semiconductors, whereas heterobimetallic coordination polymer 4 was found to exhibit metallic behaviour. The existence of mixed valency/non-integral oxidation states of the metal ions seems to be responsible for the metallic behaviour.

  9. The importance of diverse data types to calibrate a watershed model of the Trout Lake Basin, Northern Wisconsin, USA

    USGS Publications Warehouse

    Hunt, R.J.; Feinstein, D.T.; Pint, C.D.; Anderson, M.P.

    2006-01-01

    As part of the USGS Water, Energy, and Biogeochemical Budgets project and the NSF Long-Term Ecological Research work, a parameter estimation code was used to calibrate a deterministic groundwater flow model of the Trout Lake Basin in northern Wisconsin. Observations included traditional calibration targets (head, lake stage, and baseflow observations) as well as unconventional targets such as groundwater flows to and from lakes, depth of a lake water plume, and time of travel. The unconventional data types were important for parameter estimation convergence and allowed the development of a more detailed parameterization capable of resolving model objectives with well-constrained parameter values. Independent estimates of groundwater inflow to lakes were most important for constraining lakebed leakance and the depth of the lake water plume was important for determining hydraulic conductivity and conceptual aquifer layering. The most important target overall, however, was a conventional regional baseflow target that led to correct distribution of flow between sub-basins and the regional system during model calibration. The use of an automated parameter estimation code: (1) facilitated the calibration process by providing a quantitative assessment of the model's ability to match disparate observed data types; and (2) allowed assessment of the influence of observed targets on the calibration process. The model calibration required the use of a 'universal' parameter estimation code in order to include all types of observations in the objective function. The methods described in this paper help address issues of watershed complexity and non-uniqueness common to deterministic watershed models. © 2005 Elsevier B.V. All rights reserved.
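Combining disparate observation types in one objective function, as described above, is typically done by weighting each group's residuals in a composite sum of squares. A minimal sketch (group names and weights are hypothetical, not taken from the study):

```python
import numpy as np

def weighted_sse(residuals_by_group, weights):
    """Composite least-squares objective mixing observation groups
    (e.g. heads, baseflows, travel times) via per-group weights,
    in the spirit of universal parameter-estimation codes."""
    return sum(w * np.sum(np.square(r))
               for r, w in zip(residuals_by_group, weights))

# Hypothetical residuals: two head errors (m) and one baseflow error,
# with baseflow down-weighted to balance its different units and scale.
obj = weighted_sse([np.array([1.0, 2.0]), np.array([3.0])], [1.0, 0.5])
```

Choosing the weights is the crux: they encode both measurement uncertainty and unit conversions, and they control how strongly an unconventional target (such as plume depth) constrains the calibration.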

  10. Signal Processing Applications Of Wigner-Ville Analysis

    NASA Astrophysics Data System (ADS)

    Whitehouse, H. J.; Boashash, B.

    1986-04-01

    The Wigner-Ville distribution (WVD), a form of time-frequency analysis, is shown to be useful in the analysis of a variety of non-stationary signals both deterministic and stochastic. The properties of the WVD are reviewed and alternative methods of calculating the WVD are discussed. Applications are presented.
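A minimal discrete Wigner-Ville computation for an analytic tone is sketched below; note the well-known factor-of-two frequency scaling that appears when the discrete WVD is built from integer-lag products (the tone at 0.125 cycles/sample peaks in bin 2f). The signal and sizes are illustrative assumptions.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x.
    Row t holds the FFT over lag m of the kernel x[t+m] * conj(x[t-m]),
    so a tone at frequency f peaks at the bin corresponding to 2f."""
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        taumax = min(t, n - 1 - t)          # lags available at this instant
        kernel = np.zeros(n, dtype=complex)
        for m in range(-taumax, taumax + 1):
            kernel[m % n] = x[t + m] * np.conj(x[t - m])
        W[t] = np.fft.fft(kernel).real
    return W

t = np.arange(128)
f0 = 0.125                                  # cycles per sample
sig = np.exp(2j * np.pi * f0 * t)           # analytic (single-sided) tone
W = wigner_ville(sig)
peak_bin = W[64].argmax()                   # expect bin 2*f0*128 = 32
```

Using the analytic signal, as Ville proposed, suppresses the negative-frequency image and the worst of the cross-term interference that makes the WVD of multicomponent signals hard to read.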

  11. Comparison of Four Probabilistic Models (CARES, Calendex, ConsEspo, SHEDS) to Estimate Aggregate Residential Exposures to Pesticides

    EPA Science Inventory

    Two deterministic models (US EPA’s Office of Pesticide Programs Residential Standard Operating Procedures (OPP Residential SOPs) and Draft Protocol for Measuring Children’s Non-Occupational Exposure to Pesticides by all Relevant Pathways (Draft Protocol)) and four probabilistic mo...

  12. The Evolution of Human Longevity: Toward a Biocultural Theory.

    ERIC Educational Resources Information Center

    Mayer, Peter J.

    Homo sapiens is the only extant species for which there exists a significant post-reproductive period in the normal lifespan. Explanations for the evolution of this species-specific trait are possible through "non-deterministic" theories of aging positing "wear and tear" or the failure of nature to eliminate imperfection, or…

  13. Robust Observation Detection for Single Object Tracking: Deterministic and Probabilistic Patch-Based Approaches

    PubMed Central

    Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill

    2012-01-01

    In video analytics, robust observation detection is very important as the content of the videos varies a lot, especially for tracking implementation. Contrary to the image processing field, the problems of blurring, moderate deformation, low illumination surroundings, illumination change and homogenous texture are normally encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness to complex scenes by fusing both feature- and template-based recognition methods. While feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two methods of PBOD—the deterministic and probabilistic approaches—have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a 2-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the second approach, patch matching is done probabilistically by modelling the histograms of the patches with Poisson distributions for both RGB and HSV colour models. Then, maximum likelihood is applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. This algorithm is best implemented as a complement to other simpler detection methods due to its heavy processing requirements. PMID:23202226
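The probabilistic matching step above, scoring a candidate patch histogram against Poisson-modelled template rates via maximum likelihood, can be sketched as follows. The bin rates and counts are made-up illustrations, and the full PBOD pipeline (position and size smoothing, RGB/HSV fusion) is not reproduced.

```python
import math

def poisson_loglik(rates, counts):
    """Log-likelihood of observed histogram counts under independent
    Poisson bins with the given expected rates."""
    ll = 0.0
    for lam, k in zip(rates, counts):
        lam = max(lam, 1e-9)                      # guard against log(0)
        ll += k * math.log(lam) - lam - math.lgamma(k + 1)
    return ll

template = [12.0, 30.0, 8.0, 2.0]                 # expected bin rates (template patch)
candidates = {
    "patch_a": [11, 29, 9, 3],                    # histogram close to the template
    "patch_b": [2, 5, 30, 15],                    # histogram with shuffled mass
}
best = max(candidates, key=lambda name: poisson_loglik(template, candidates[name]))
```

Maximum likelihood picks the candidate whose histogram is most probable under the template's Poisson rates, which tolerates the count fluctuations that a hard correlation threshold would penalize.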

  14. ACCELERATING FUSION REACTOR NEUTRONICS MODELING BY AUTOMATIC COUPLING OF HYBRID MONTE CARLO/DETERMINISTIC TRANSPORT ON CAD GEOMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W

    2015-01-01

    Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).

  15. Dual Roles for Spike Signaling in Cortical Neural Populations

    PubMed Central

    Ballard, Dana H.; Jehee, Janneke F. M.

    2011-01-01

    A prominent feature of signaling in cortical neurons is that of randomness in the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate repeatedly have been shown to be correlated with stimuli. However, while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing multi-cell action potential correlations in the γ frequency range, together with spike-timing-dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, the spikes need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models such as orientation tuning and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding. PMID:21687798

  16. Stochastic Processes in Physics: Deterministic Origins and Control

    NASA Astrophysics Data System (ADS)

    Demers, Jeffery

    Stochastic processes are ubiquitous in the physical sciences and engineering. While often used to model imperfections and experimental uncertainties in the macroscopic world, stochastic processes can attain deeper physical significance when used to model the seemingly random and chaotic nature of the underlying microscopic world. Nowhere is this notion more prevalent than in the field of stochastic thermodynamics - a modern systematic framework used to describe mesoscale systems in strongly fluctuating thermal environments, which has revolutionized our understanding of, for example, molecular motors, DNA replication, far-from-equilibrium systems, and the laws of macroscopic thermodynamics as they apply to the mesoscopic world. With progress, however, come further challenges and deeper questions, most notably in the thermodynamics of information processing and feedback control. Here it is becoming increasingly apparent that, due to divergences and subtleties of interpretation, the deterministic foundations of the stochastic processes themselves must be explored and understood. This thesis presents a survey of stochastic processes in physical systems, the deterministic origins of their emergence, and the subtleties associated with controlling them. First, we study time-dependent billiards in the quivering limit - a limit where a billiard system is indistinguishable from a stochastic system, and where the simplified stochastic system allows us to view issues associated with deterministic time-dependent billiards in a new light and address some long-standing problems. Then, we embark on an exploration of the deterministic microscopic Hamiltonian foundations of non-equilibrium thermodynamics, and we find that important results from mesoscopic stochastic thermodynamics have simple microscopic origins which would not be apparent without the benefit of both the micro and meso perspectives. 
Finally, we study the problem of stabilizing a stochastic Brownian particle with feedback control, and we find that in order to avoid paradoxes involving the first law of thermodynamics, we need a model for the fine details of the thermal driving noise. The underlying theme of this thesis is the argument that the deterministic microscopic perspective and stochastic mesoscopic perspective are both important and useful, and when used together, we can more deeply and satisfyingly understand the physics occurring over either scale.

  17. El Niño–Southern Oscillation frequency cascade

    DOE PAGES

    Stuecker, Malte F.; Jin, Fei -Fei; Timmermann, Axel

    2015-10-19

    The El Niño–Southern Oscillation (ENSO) phenomenon, the most pronounced feature of internally generated climate variability, occurs on interannual timescales and impacts the global climate system through an interaction with the annual cycle. The tight coupling between ENSO and the annual cycle is particularly pronounced over the tropical Western Pacific. In this paper, we show that this nonlinear interaction results in a frequency cascade in the atmospheric circulation, which is characterized by deterministic high-frequency variability on near-annual and subannual timescales. Finally, through climate model experiments and observational analysis, it is documented that a substantial fraction of the anomalous Northwest Pacific anticyclone variability, which is the main atmospheric link between ENSO and the East Asian Monsoon system, can be explained by these interactions and is thus deterministic and potentially predictable.

  18. Deterministic separation of cancer cells from blood at 10 mL/min

    NASA Astrophysics Data System (ADS)

    Loutherback, Kevin; D'Silva, Joseph; Liu, Liyu; Wu, Amy; Austin, Robert H.; Sturm, James C.

    2012-12-01

    Circulating tumor cells (CTCs) and circulating clusters of cancer and stromal cells have been identified in the blood of patients with malignant cancer and can be used as a diagnostic for disease severity, assess the efficacy of different treatment strategies and possibly determine the eventual location of metastatic invasions for possible treatment. There is thus a critical need to isolate, propagate and characterize viable CTCs and clusters of cancer cells with their associated stroma cells. Here, we present a microfluidic device for mL/min flow rate, continuous-flow capture of viable CTCs from blood using deterministic lateral displacement (DLD) arrays. We show here that a DLD array device can isolate CTCs from blood with a capture efficiency greater than 85% at volumetric flow rates of up to 10 mL/min with no effect on cell viability.
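The size cutoff of a DLD array is commonly summarized by Davis's empirical fit Dc = 1.4 g ε^0.48, with gap g and row-shift fraction ε: particles larger than Dc are displaced ("bumped") while smaller ones zigzag with the flow. The parameters below are illustrative and not taken from the device in the abstract.

```python
def dld_critical_diameter(gap_um, shift_fraction):
    """Empirical critical diameter (same units as gap) of a deterministic
    lateral displacement array, per Davis's fit Dc = 1.4 * g * eps**0.48."""
    return 1.4 * gap_um * shift_fraction ** 0.48

# Hypothetical example: a 42 um gap with a 1/50 row shift gives Dc ~ 9 um,
# between typical blood cells and the larger CTCs.
dc = dld_critical_diameter(42.0, 1 / 50)
```

Increasing the row-shift fraction raises the cutoff, so an array can be tuned to bump CTCs and clusters into a collection stream while letting smaller blood cells pass.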

  19. Current fluctuations in periodically driven systems

    NASA Astrophysics Data System (ADS)

    Barato, Andre C.; Chetrite, Raphael

    2018-05-01

    Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.
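The maximal-Floquet-exponent characterization can be sketched numerically: build the λ-tilted generator of a two-state process with time-periodic rates, propagate it over one driving period, and take the logarithm of the leading eigenvalue of the propagator. The rate functions below are assumed for illustration, not taken from the paper's case studies.

```python
import numpy as np

def scgf(lam, T=1.0, steps=2000):
    """Scaled cumulant generating function of the net 1->2 jump current
    for a two-state Markov process with time-periodic rates, computed as
    the maximal Floquet exponent of the tilted one-period propagator."""
    dt = T / steps
    U = np.eye(2)
    for i in range(steps):
        t = (i + 0.5) * dt
        k12 = 1.0 + 0.5 * np.sin(2 * np.pi * t / T)   # assumed periodic rate 1->2
        k21 = 1.0 + 0.5 * np.cos(2 * np.pi * t / T)   # assumed periodic rate 2->1
        # Tilted generator: counted jumps weighted by exp(+/-lam).
        L = np.array([[-k12, k21 * np.exp(-lam)],
                      [k12 * np.exp(lam), -k21]])
        U = (np.eye(2) + L * dt) @ U                   # Euler step of dU/dt = L(t) U
    mu = max(np.linalg.eigvals(U).real)                # Perron eigenvalue
    return np.log(mu) / T

chi = scgf(0.5)
```

At λ = 0 the tilted generator reduces to the true generator, the propagator is stochastic, and the SCGF vanishes; convexity of the SCGF then encodes the large-deviation statistics of the current.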

  1. A Novel Approach of Understanding and Incorporating Error of Chemical Transport Models into a Geostatistical Framework

    NASA Astrophysics Data System (ADS)

    Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.

    2015-12-01

    The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, there are several areas of the country with sparse monitoring, both spatially and temporally. One means to fill in these monitoring gaps is to use PM2.5 modeled estimates from Chemical Transport Models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Due to the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is employed to quantify the efficacy of these models through different metrics of model performance. Currently, evaluation is specific only to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains. Error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear. This evaluation leads to error quantification for each CMAQ grid, so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than just using observed data by themselves.
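At a single location, the simplest Gaussian special case of fusing a bias-corrected model value with a monitor observation is precision (inverse-variance) weighting; BME generalizes well beyond this, and the numbers below are illustrative only.

```python
def fuse(obs, obs_var, model, model_var):
    """Precision-weighted fusion of a monitor observation with a
    bias-corrected model estimate at one location. Returns the fused
    estimate and its variance; the fused variance is always smaller
    than either input variance."""
    w_obs = 1.0 / obs_var
    w_mod = 1.0 / model_var
    est = (w_obs * obs + w_mod * model) / (w_obs + w_mod)
    var = 1.0 / (w_obs + w_mod)
    return est, var

# Hypothetical PM2.5 slice (ug/m3): a monitor reading and a corrected
# CMAQ value with equal error variances average to their midpoint.
est, var = fuse(10.0, 1.0, 14.0, 1.0)
```

The key point mirrored from the abstract: once the model error is quantified per grid cell, the model can contribute information in proportion to its local reliability instead of being used blindly or ignored.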

  2. Determining the bias and variance of a deterministic finger-tracking algorithm.

    PubMed

    Morash, Valerie S; van der Velden, Bas H M

    2016-06-01

    Finger tracking has the potential to expand haptic research and applications, as eye tracking has done in vision research. In research applications, it is desirable to know the bias and variance associated with a finger-tracking method. However, assessing the bias and variance of a deterministic method is not straightforward. Multiple measurements of the same finger position data will not produce different results, implying zero variance. Here, we present a method of assessing deterministic finger-tracking variance and bias through comparison to a non-deterministic measure. A proof-of-concept is presented using a video-based finger-tracking algorithm developed for the specific purpose of tracking participant fingers during a psychological research study. The algorithm uses ridge detection on videos of the participant's hand, and estimates the location of the right index fingertip. The algorithm was evaluated using data from four participants, who explored tactile maps using only their right index finger and all right-hand fingers. The algorithm identified the index fingertip in 99.78 % of one-finger video frames and 97.55 % of five-finger video frames. Although the algorithm produced slightly biased and more dispersed estimates relative to a human coder, these differences (x=0.08 cm, y=0.04 cm) and standard deviations (σ x =0.16 cm, σ y =0.21 cm) were small compared to the size of a fingertip (1.5-2.0 cm). Some example finger-tracking results are provided where corrections are made using the bias and variance estimates.
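The bias and variance assessment described above reduces, per coordinate, to the mean and standard deviation of the algorithm-minus-reference differences. A sketch with made-up fingertip coordinates (cm), not the study's data:

```python
import numpy as np

# Hypothetical fingertip positions: algorithm output vs. human-coded reference.
algo = np.array([[1.10, 2.05], [3.22, 4.11], [5.04, 6.18], [7.15, 8.02]])
human = np.array([[1.00, 2.00], [3.10, 4.05], [5.00, 6.20], [7.05, 8.00]])

diff = algo - human
bias = diff.mean(axis=0)            # systematic (x, y) offset
spread = diff.std(axis=0, ddof=1)   # variability of the offset
corrected = algo - bias             # bias-corrected estimates
```

Subtracting the estimated bias removes the systematic offset by construction, which is exactly the kind of correction the abstract applies to the deterministic tracker's output.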

  3. Characterization of the Androgen-sensitive MDA-kb2 Cell Line for Assessing Complex Environmental Mixtures

    EPA Science Inventory

    Complex mixtures of synthetic and natural androgens and estrogens, and many other non-steroidal components are commonly released to the aquatic environment from anthropogenic sources. It is important to understand the potential interactive (i.e., additive, synergistic, antagonist...

  4. Modeling stochastic noise in gene regulatory systems

    PubMed Central

    Meister, Arwen; Du, Chao; Li, Ye Henry; Wong, Wing Hung

    2014-01-01

    The Master equation is considered the gold standard for modeling the stochastic mechanisms of gene regulation in molecular detail, but it is too complex to solve exactly in most cases, so approximation and simulation methods are essential. However, there is still a lack of consensus about the best way to carry these out. To help clarify the situation, we review Master equation models of gene regulation, theoretical approximations based on an expansion method due to N.G. van Kampen and R. Kubo, and simulation algorithms due to D.T. Gillespie and P. Langevin. Expansion of the Master equation shows that for systems with a single stable steady-state, the stochastic model reduces to a deterministic model in a first-order approximation. Additional theory, also due to van Kampen, describes the asymptotic behavior of multistable systems. To support and illustrate the theory and provide further insight into the complex behavior of multistable systems, we perform a detailed simulation study comparing the various approximation and simulation methods applied to synthetic gene regulatory systems with various qualitative characteristics. The simulation studies show that for large stochastic systems with a single steady-state, deterministic models are quite accurate, since the probability distribution of the solution has a single peak tracking the deterministic trajectory whose variance is inversely proportional to the system size. In multistable stochastic systems, large fluctuations can cause individual trajectories to escape from the domain of attraction of one steady-state and be attracted to another, so the system eventually reaches a multimodal probability distribution in which all stable steady-states are represented proportional to their relative stability. However, since the escape time scales exponentially with system size, this process can take a very long time in large systems. PMID:25632368
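    Gillespie's direct method, mentioned above, can be sketched for the simplest birth-death gene-expression model (rates and counts here are illustrative, not from the paper); at steady state the mean copy number tracks the deterministic value k_prod/k_deg, with Poisson-like fluctuations about it.

```python
import random

def gillespie_birth_death(k_prod, k_deg, x0, t_end, rng):
    """Gillespie's direct method for 0 -> X (rate k_prod), X -> 0 (rate k_deg*x)."""
    t, x = 0.0, x0
    while True:
        a1, a2 = k_prod, k_deg * x      # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)        # exponential waiting time to next event
        if t > t_end:
            return x
        if rng.random() * a0 < a1:      # pick which reaction fired
            x += 1
        else:
            x -= 1

rng = random.Random(42)
samples = [gillespie_birth_death(50.0, 1.0, 0, 10.0, rng) for _ in range(300)]
mean_x = sum(samples) / len(samples)    # should track k_prod/k_deg = 50
```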

  5. Stochastic blockmodeling of the modules and core of the Caenorhabditis elegans connectome.

    PubMed

    Pavlovic, Dragana M; Vértes, Petra E; Bullmore, Edward T; Schafer, William R; Nichols, Thomas E

    2014-01-01

    Recently, there has been much interest in the community structure or mesoscale organization of complex networks. This structure is characterised either as a set of sparsely inter-connected modules or as a highly connected core with a sparsely connected periphery. However, it is often difficult to disambiguate these two types of mesoscale structure or, indeed, to summarise the full network in terms of the relationships between its mesoscale constituents. Here, we estimate a community structure with a stochastic blockmodel approach, the Erdős-Rényi Mixture Model, and compare it to the much more widely used deterministic methods, such as the Louvain and Spectral algorithms. We used the Caenorhabditis elegans (C. elegans) nervous system (connectome) as a model system in which biological knowledge about each node or neuron can be used to validate the functional relevance of the communities obtained. The deterministic algorithms derived communities with 4-5 modules, defined by sparse inter-connectivity between all modules. In contrast, the stochastic Erdős-Rényi Mixture Model estimated a community structure with 9 blocks or groups, which comprised a similar set of modules but also included a clearly defined core made of 2 small groups. We show that the "core-in-modules" decomposition of the worm brain network, estimated by the Erdős-Rényi Mixture Model, is more compatible with prior biological knowledge about the C. elegans nervous system than the purely modular decomposition defined deterministically. We also show that the blockmodel can be used both to generate stochastic realisations (simulations) of the biological connectome, and to compress the network into a small number of super-nodes and their connectivity. We expect that the Erdős-Rényi Mixture Model may be useful for investigating the complex community structures in other (nervous) systems.
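    As a minimal stand-in (this is deliberately not the Erdős-Rényi Mixture Model itself, whose EM inference is more involved), one can sample a two-block stochastic blockmodel and recover the planted communities with a simple spectral split:

```python
import numpy as np

def planted_partition(n, p_in, p_out, rng):
    """Sample an undirected two-block stochastic blockmodel adjacency matrix."""
    labels = np.array([0] * (n // 2) + [1] * (n - n // 2))
    P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1)
    return A + A.T, labels

def spectral_two_blocks(A):
    """Split nodes by the sign of the eigenvector of the 2nd-largest eigenvalue."""
    w, v = np.linalg.eigh(A)          # eigenvalues in ascending order
    return (v[:, -2] > 0).astype(int)

rng = np.random.default_rng(2)
A, labels = planted_partition(100, 0.5, 0.05, rng)
est = spectral_two_blocks(A)
acc = max(np.mean(est == labels), np.mean(est != labels))  # labels are arbitrary
```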

  6. Efficient quantum computing using coherent photon conversion.

    PubMed

    Langford, N K; Ramelow, S; Prevedel, R; Munro, W J; Milburn, G J; Zeilinger, A

    2011-10-12

    Single photons are excellent quantum information carriers: they were used in the earliest demonstrations of entanglement and in the production of the highest-quality entanglement reported so far. However, current schemes for preparing, processing and measuring them are inefficient. For example, down-conversion provides heralded, but randomly timed, single photons, and linear optics gates are inherently probabilistic. Here we introduce a deterministic process--coherent photon conversion (CPC)--that provides a new way to generate and process complex, multiquanta states for photonic quantum information applications. The technique uses classically pumped nonlinearities to induce coherent oscillations between orthogonal states of multiple quantum excitations. One example of CPC, based on a pumped four-wave-mixing interaction, is shown to yield a single, versatile process that provides a full set of photonic quantum processing tools. This set satisfies the DiVincenzo criteria for a scalable quantum computing architecture, including deterministic multiqubit entanglement gates (based on a novel form of photon-photon interaction), high-quality heralded single- and multiphoton states free from higher-order imperfections, and robust, high-efficiency detection. It can also be used to produce heralded multiphoton entanglement, create optically switchable quantum circuits and implement an improved form of down-conversion with reduced higher-order effects. Such tools are valuable building blocks for many quantum-enabled technologies. Finally, using photonic crystal fibres we experimentally demonstrate quantum correlations arising from a four-colour nonlinear process suitable for CPC and use these measurements to study the feasibility of reaching the deterministic regime with current technology. 
Our scheme, which is based on interacting bosonic fields, is not restricted to optical systems but could also be implemented in optomechanical, electromechanical and superconducting systems with extremely strong intrinsic nonlinearities. Furthermore, exploiting higher-order nonlinearities with multiple pump fields yields a mechanism for multiparty mediation of the complex, coherent dynamics.

  7. Determining Methane Budgets with Eddy Covariance Data ascertained in a heterogeneous Footprint

    NASA Astrophysics Data System (ADS)

    Rößger, N.; Wille, C.; Kutzbach, L.

    2016-12-01

    Amplified climate change in the Arctic may cause methane emissions to increase considerably due to more suitable production conditions. With a focus on methane, we studied the carbon turnover on the modern flood plain of Samoylov Island, situated in the Lena River Delta (72°22'N, 126°28'E), using eddy covariance data. In contrast to the ice-wedge polygonal tundra on the delta's river terraces, the flood plains have to date received little attention. During the warm seasons of 2014 and 2015, the mean methane flux amounted to 0.012 μmol m⁻² s⁻¹. This average is the result of a large variability in methane fluxes, which is attributed to the complexity of the footprint, where methane sources are unevenly distributed. We explain this variability with three modelling approaches: a deterministic model using exponential relationships for flux drivers, a multilinear model created through stepwise regression, and a neural network relying on machine learning techniques. A substantial boost in model performance was achieved by adding footprint information in the form of the contributions of vegetation classes, indicating that vegetation serves as an integrated proxy for potential methane flux drivers. The neural network performed best; however, a robust validation revealed that the deterministic model best captured ecosystem-intrinsic features. Furthermore, the deterministic model allowed a downscaling of the net flux by allocating fractions to three vegetation classes, which in turn form the basis for upscaling methane fluxes to obtain the budget for the entire flood plain. Arctic methane emissions occur in a spatio-temporally complex pattern, and employing fine-scale information is crucial to understanding the flux dynamics.
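    The multilinear-model-with-footprint idea can be sketched in miniature (the drivers, coefficients, and vegetation class below are invented for illustration, not the study's): adding a footprint-fraction regressor to a temperature-only model reduces the residual sum of squares.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
t_soil = rng.uniform(0.0, 15.0, n)   # soil temperature, degC (assumed driver)
f_wet = rng.uniform(0.0, 1.0, n)     # footprint fraction of a wet vegetation class

# Synthetic methane flux responding to both drivers, plus noise.
flux = 0.002 + 0.0005 * t_soil + 0.008 * f_wet + rng.normal(0.0, 0.001, n)

# Multilinear model with the footprint term: flux ~ b0 + b1*T + b2*f_wet.
X_full = np.column_stack([np.ones(n), t_soil, f_wet])
beta_full, *_ = np.linalg.lstsq(X_full, flux, rcond=None)

# Temperature-only model, for comparison.
X_temp = X_full[:, :2]
beta_temp, *_ = np.linalg.lstsq(X_temp, flux, rcond=None)

rss_full = np.sum((flux - X_full @ beta_full) ** 2)
rss_temp = np.sum((flux - X_temp @ beta_temp) ** 2)
```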

  8. Evolution with Stochastic Fitness and Stochastic Migration

    PubMed Central

    Rice, Sean H.; Papadopoulos, Anthony

    2009-01-01

    Background Migration between local populations plays an important role in evolution - influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. Methodology/Principal Findings We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biased - overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways, one result being that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than do other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. Conclusions/Significance As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values. 
The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory. PMID:19816580
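    For reference, the deterministic Price equation that results of this kind generalize (the paper's exact stochastic, migration-aware form adds further terms and is not reproduced here):

```latex
\Delta \bar{z} = \frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}}
               + \frac{\operatorname{E}\left[\, w_i \, \Delta z_i \,\right]}{\bar{w}}
```

    with z_i the trait value of type i, w_i its fitness, and w̄ the population mean fitness; the first term captures selection and the second transmission effects.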

  9. Computational design of enzyme-ligand binding using a combined energy function and deterministic sequence optimization algorithm.

    PubMed

    Tian, Ye; Huang, Xiaoqiang; Zhu, Yushan

    2015-08-01

    Enzyme amino-acid sequences at ligand-binding interfaces are evolutionarily optimized for reactions, and the natural conformation of an enzyme-ligand complex must have a low free energy relative to alternative conformations in native-like or non-native sequences. Based on this assumption, a combined energy function was developed for enzyme design and then evaluated by recapitulating native enzyme sequences at ligand-binding interfaces for 10 enzyme-ligand complexes. In this energy function, the electrostatic interaction between polar or charged atoms at buried interfaces is described by an explicitly orientation-dependent hydrogen-bonding potential and a pairwise-decomposable generalized Born model based on the general side chain in the protein design framework. The energy function is augmented with a pairwise surface-area based hydrophobic contribution for nonpolar atom burial. Using this function, on average, 78% of the amino acids at ligand-binding sites were predicted correctly in the minimum-energy sequences, whereas 84% were predicted correctly in the most-similar sequences, which were selected from the top 20 sequences for each enzyme-ligand complex. Hydrogen bonds at the enzyme-ligand binding interfaces in the 10 complexes were usually recovered with the correct geometries. The binding energies calculated using the combined energy function helped to discriminate the active sequences from a pool of alternative sequences that were generated by repeatedly solving a series of mixed-integer linear programming problems for sequence selection with increasing integer cuts.

  10. Some loopholes to save quantum nonlocality

    NASA Astrophysics Data System (ADS)

    Accardi, Luigi

    2005-02-01

    The EPR-chameleon experiment has closed a long-standing debate between the supporters of quantum nonlocality and those of the thesis of quantum probability, according to which the essence of the quantum peculiarity is non-Kolmogorovianity rather than non-locality. The theory of adaptive systems (symbolized by the chameleon effect) provides a natural intuition for the emergence of non-Kolmogorovian statistics from classical deterministic dynamical systems. These developments are quickly reviewed, and in conclusion some comments are introduced on recent attempts to "reconstruct history" along the lines described by Orwell in "1984".

  11. Bayesian characterization of uncertainty in species interaction strengths.

    PubMed

    Wolf, Christopher; Novak, Mark; Gitelman, Alix I

    2017-06-01

    Considerable effort has been devoted to the estimation of species interaction strengths. This effort has focused primarily on statistical significance testing and obtaining point estimates of parameters that contribute to interaction strength magnitudes, leaving the characterization of uncertainty associated with those estimates unconsidered. We consider a means of characterizing the uncertainty of a generalist predator's interaction strengths by formulating an observational method for estimating a predator's prey-specific per capita attack rates as a Bayesian statistical model. This formulation permits the explicit incorporation of multiple sources of uncertainty. A key insight is the informative nature of several so-called non-informative priors that have been used in modeling the sparse data typical of predator feeding surveys. We introduce to ecology a new neutral prior and provide evidence for its superior performance. We use a case study to consider the attack rates in a New Zealand intertidal whelk predator, and we illustrate not only that Bayesian point estimates can be made to correspond with those obtained by frequentist approaches, but also that estimation uncertainty as described by 95% intervals is more useful and biologically realistic using the Bayesian method. In particular, unlike in bootstrap confidence intervals, the lower bounds of the Bayesian posterior intervals for attack rates do not include zero when a predator-prey interaction is in fact observed. We conclude that the Bayesian framework provides a straightforward, probabilistic characterization of interaction strength uncertainty, enabling future considerations of both the deterministic and stochastic drivers of interaction strength and their impact on food webs.
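    A toy version of the Bayesian point (this is not the authors' actual attack-rate model; the counts and prior are invented): with a Beta prior on a per-survey feeding probability, the 95% posterior interval has a strictly positive lower bound whenever feeding is actually observed, unlike some bootstrap intervals.

```python
import random

# 3 of 100 surveyed predators observed feeding on a given prey (made-up counts).
k, n = 3, 100
a_prior, b_prior = 0.5, 0.5              # Jeffreys prior, one common choice
a_post, b_post = a_prior + k, b_prior + n - k   # Beta posterior parameters

# Monte Carlo posterior draws and a 95% equal-tailed credible interval.
rng = random.Random(5)
draws = sorted(rng.betavariate(a_post, b_post) for _ in range(10_000))
lo, hi = draws[249], draws[9749]
```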

  12. POD Model Reconstruction for Gray-Box Fault Detection

    NASA Technical Reports Server (NTRS)

    Park, Han; Zak, Michail

    2007-01-01

    Proper orthogonal decomposition (POD) is the mathematical basis of a method of constructing low-order mathematical models for the "gray-box" fault-detection algorithm that is a component of a diagnostic system known as beacon-based exception analysis for multi-missions (BEAM). POD has been successfully applied in reducing computational complexity by generating simple models that can be used for control and simulation for complex systems such as fluid flows. In the present application to BEAM, POD brings the same benefits to automated diagnosis. BEAM is a method of real-time or offline, automated diagnosis of a complex dynamic system. The gray-box approach makes it possible to utilize incomplete or approximate knowledge of the dynamics of the system that one seeks to diagnose. In the gray-box approach, a deterministic model of the system is used to filter a time series of system sensor data to remove the deterministic components of the time series from further examination. What is left after the filtering operation is a time series of residual quantities that represent the unknown (or at least unmodeled) aspects of the behavior of the system. Stochastic modeling techniques are then applied to the residual time series. The procedure for detecting abnormal behavior of the system then becomes one of looking for statistical differences between the residual time series and the predictions of the stochastic model.
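    POD itself can be sketched with an SVD of a snapshot matrix (toy signals below; BEAM's actual models are more elaborate): the leading modes give the low-order deterministic reconstruction, and the residual is what is handed to stochastic modeling.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 10.0, 500)

# Toy "sensor" snapshots: two deterministic modes plus small stochastic noise.
X = (np.outer(np.sin(t), [1.0, 0.5, 0.2])
     + np.outer(np.cos(3.0 * t), [0.3, 1.0, 0.1])
     + 0.01 * rng.normal(size=(500, 3)))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 2                                  # retained POD modes
X_det = (U[:, :r] * s[:r]) @ Vt[:r]    # low-order deterministic reconstruction
residual = X - X_det                   # left over for stochastic modeling
```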

  13. Quantum logic using correlated one-dimensional quantum walks

    NASA Astrophysics Data System (ADS)

    Lahini, Yoav; Steinbrecher, Gregory R.; Bookatz, Adam D.; Englund, Dirk

    2018-01-01

    Quantum Walks are unitary processes describing the evolution of an initially localized wavefunction on a lattice potential. The complexity of the dynamics increases significantly when several indistinguishable quantum walkers propagate on the same lattice simultaneously, as these develop non-trivial spatial correlations that depend on the particles' quantum statistics, mutual interactions, initial positions, and the lattice potential. We show that even in the simplest case of a quantum walk on a one-dimensional graph, these correlations can be shaped to yield a complete set of compact quantum logic operations. We provide detailed recipes for implementing quantum logic on one-dimensional quantum walks in two general cases. For non-interacting bosons—such as photons in waveguide lattices—we find high-fidelity probabilistic quantum gates that could be integrated into linear optics quantum computation schemes. For interacting quantum-walkers on a one-dimensional lattice—a situation that has recently been demonstrated using ultra-cold atoms—we find deterministic logic operations that are universal for quantum information processing. The suggested implementation requires minimal resources and a level of control that is within reach using recently demonstrated techniques. Further work is required to address error-correction.
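    The single-walker building block, a continuous-time quantum walk on a one-dimensional lattice, can be sketched as follows (the multi-walker correlations the paper exploits go beyond this toy): evolve the localized state under the hopping Hamiltonian and observe unitary, ballistic spreading.

```python
import numpy as np

n = 101                                  # lattice sites
H = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # nearest-neighbour hopping

# Evolve |psi(t)> = exp(-iHt)|psi(0)> via the eigendecomposition of H.
w, V = np.linalg.eigh(H)
psi0 = np.zeros(n, dtype=complex)
psi0[n // 2] = 1.0                       # walker initially localized at the center
t = 10.0
psi_t = V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))

prob = np.abs(psi_t) ** 2                # position distribution (sums to 1)
pos = np.arange(n) - n // 2
spread = float(np.sqrt(np.sum(prob * pos**2)))   # ballistic: grows linearly in t
```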

  14. Multiscale fracture network characterization and impact on flow: A case study on the Latemar carbonate platform

    NASA Astrophysics Data System (ADS)

    Hardebol, N. J.; Maier, C.; Nick, H.; Geiger, S.; Bertotti, G.; Boro, H.

    2015-12-01

    A fracture network arrangement is quantified across an isolated carbonate platform from outcrop and aerial imagery to address its impact on fluid flow. The network is described in terms of fracture density, orientation, and length distribution parameters. Of particular interest is the role of fracture cross connections and abutments on the effective permeability. Hence, the flow simulations explicitly account for network topology by adopting a Discrete-Fracture-and-Matrix description. The interior of the Latemar carbonate platform (Dolomites, Italy) is taken as an outcrop analogue for subsurface reservoirs of isolated carbonate build-ups that exhibit fracture-dominated permeability. A novel element is our dual strategy of describing the fracture network through both deterministic and stochastic inputs for flow simulations. The fracture geometries are captured explicitly and form a multiscale data set by integration of interpretations from outcrops, airborne imagery, and lidar. The deterministic network descriptions form the basis for descriptive rules that are diagnostic of the complex natural fracture arrangement. The fracture networks exhibit a variable degree of multitier hierarchies, with smaller fractures abutting against larger fractures at both right and oblique angles. The influence of network topology on connectivity is quantified using discrete-fracture single-phase fluid flow simulations. The simulation results show that the effective permeability of the fracture and matrix ensemble can be 50 to 400 times higher than the matrix permeability of 1.0 · 10⁻¹⁴ m². The permeability enhancement is strongly controlled by the connectivity of the fracture network. Therefore, the degree of intersecting and abutting fractures should be captured accurately from outcrops to be of value as an analogue.

  15. On the precision of quasi steady state assumptions in stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Agarwal, Animesh; Adams, Rhys; Castellani, Gastone C.; Shouval, Harel Z.

    2012-07-01

    Many biochemical networks have complex multidimensional dynamics, and there is a long history of methods that have been used for dimensionality reduction of such reaction networks. Usually a deterministic mass action approach is used; however, in small volumes there are significant fluctuations from the mean which the mass action approach cannot capture. In such cases stochastic simulation methods should be used. In this paper, we evaluate the applicability of one such dimensionality reduction method, the quasi-steady state approximation (QSSA) [L. Michaelis and M. L. Menten, "Die Kinetik der Invertinwirkung," Biochem. Z. 49, 333-369 (1913)], in the case of stochastic dynamics. First, the applicability of the QSSA approach is evaluated for a canonical system of enzyme reactions. Application of the QSSA to such a reaction system in a deterministic setting leads to Michaelis-Menten reduced kinetics, which can be used to derive the equilibrium concentrations of the reaction species. In the case of stochastic simulations, however, the steady state is characterized by fluctuations around the mean equilibrium concentration. Our analysis shows that a QSSA-based approach for dimensionality reduction captures well the mean of the distribution as obtained from a full-dimensional simulation, but fails to accurately capture the distribution around that mean. Moreover, the QSSA approximation is not unique. We have then extended the analysis to a simple bistable biochemical network model proposed to account for the stability of synaptic efficacies, the substrate of learning and memory [J. E. Lisman, "A mechanism of memory storage insensitive to molecular turnover: A bistable autophosphorylating kinase," Proc. Natl. Acad. Sci. U.S.A. 82, 3055-3057 (1985)], 10.1073/pnas.82.9.3055. Our analysis shows that a QSSA-based dimensionality reduction method results in errors as big as two orders of magnitude in predicting the residence times in the two stable states.
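    The canonical deterministic side of this comparison can be sketched directly (parameters are illustrative): integrate the full mass-action enzyme system E + S <-> C -> E + P and check the complex concentration against the Michaelis-Menten quasi-steady-state value c ≈ e0·s/(Km + s), which holds when e0 << s0 + Km.

```python
def simulate(s0, e0, k1, km1, k2, t_end, dt=1e-4):
    """Euler integration of E + S <-> C -> E + P with mass-action kinetics."""
    s, e, c, t = s0, e0, 0.0, 0.0
    while t < t_end:
        ds = -k1 * e * s + km1 * c             # substrate
        de = -k1 * e * s + (km1 + k2) * c      # free enzyme
        dc = k1 * e * s - (km1 + k2) * c       # enzyme-substrate complex
        s += ds * dt; e += de * dt; c += dc * dt
        t += dt
    return s, c

k1, km1, k2 = 10.0, 1.0, 1.0
e0, s0 = 0.1, 20.0               # e0 << s0 + Km, so the QSSA should hold well
Km = (km1 + k2) / k1
s, c = simulate(s0, e0, k1, km1, k2, t_end=0.5)
c_qssa = e0 * s / (Km + s)       # quasi-steady-state prediction for the complex
```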

  16. Synthesis, spectroscopic characterization, DNA interaction and antibacterial study of metal complexes of tetraazamacrocyclic Schiff base

    NASA Astrophysics Data System (ADS)

    Shakir, Mohammad; Khanam, Sadiqa; Firdaus, Farha; Latif, Abdul; Aatif, Mohammad; Al-Resayes, Saud I.

    The template condensation reaction between benzil and 3,4-diaminotoluene resulted in mononuclear 12-membered tetraimine macrocyclic complexes of the type [MLCl2] [M = Co(II), Ni(II), Cu(II) and Zn(II)]. The synthesized complexes have been characterized on the basis of the results of elemental analysis, molar conductance, magnetic susceptibility measurements and spectroscopic studies, viz. FT-IR, 1H and 13C NMR, FAB mass, UV-vis and EPR. An octahedral geometry has been envisaged for these complexes, except for the Cu(II) complex, for which a distorted octahedral geometry has been observed. Low conductivity data for all these complexes suggest their non-ionic nature. The interactive studies of these complexes with calf thymus DNA showed that the complexes are avid binders of calf thymus DNA. The in vitro antibacterial studies of these complexes, screened against pathogenic bacteria, proved them to be growth-inhibiting agents.

  17. Speculative behavior and asset price dynamics.

    PubMed

    Westerhoff, Frank

    2003-07-01

    This paper deals with speculative trading. Guided by empirical observations, a nonlinear deterministic asset pricing model is developed in which traders repeatedly choose between technical and fundamental analysis to determine their orders. The interaction between the trading rules produces complex dynamics. The model endogenously replicates the stylized facts of excess volatility, high trading volumes, shifts in the level of asset prices, and volatility clustering.
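    A stylized chartist-fundamentalist map in this spirit (a sketch only, not Westerhoff's actual specification; the parameters are invented) already produces bounded, non-converging price dynamics without any stochastic shocks: the fundamental anchor keeps prices bounded, while trend-chasing destabilizes the fixed point.

```python
import math

def step(p, p_prev, f=0.0, b=1.25, c=0.3):
    """One price update driven by excess demand from two trading rules."""
    d_tech = b * math.tanh(p - p_prev)   # chartists extrapolate the last move
    d_fund = c * (f - p)                 # fundamentalists bet on reversion to f
    return p + d_tech + d_fund

prices = [0.1, 0.105]                    # small initial mispricing
for _ in range(500):
    prices.append(step(prices[-1], prices[-2]))
```

    Because the fixed point at the fundamental value is locally unstable while the tanh demand saturates, trajectories stay bounded but keep fluctuating, mimicking endogenous volatility.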

  18. Native top-down mass spectrometry for the structural characterization of human hemoglobin

    DOE PAGES

    Zhang, Jiang; Malmirchegini, G. Reza; Clubb, Robert T.; ...

    2015-06-09

    Native mass spectrometry (MS) has become an invaluable tool for the characterization of proteins and non-covalent protein complexes under near-physiological solution conditions. Here we report the structural characterization of human hemoglobin (Hb), a 64 kDa oxygen-transporting protein complex, by high resolution native top-down mass spectrometry using electrospray ionization (ESI) and a 15-Tesla Fourier transform ion cyclotron resonance (FTICR) mass spectrometer. Native MS preserves the non-covalent interactions between the globin subunits, and electron capture dissociation (ECD) produces fragments directly from the intact Hb complex without dissociating the subunits. Using activated ion ECD, we observe the gradual unfolding process of the Hb complex in the gas phase. Without protein ion activation, the native Hb shows very limited ECD fragmentation from the N-termini, suggesting a tightly packed structure of the native complex and therefore low fragmentation efficiency. Precursor ion activation allows a steady increase of N-terminal fragment ions, while the C-terminal fragments remain limited (38 c ions and 4 z ions on the α chain; 36 c ions and 2 z ions on the β chain). This ECD fragmentation pattern suggests that upon activation, the Hb complex starts to unfold from the N-termini of both subunits, whereas the C-terminal regions, and therefore the potential regions involved in the subunit binding interactions, remain intact. ECD-MS of the Hb dimer shows similar fragmentation patterns as the Hb tetramer, providing further evidence for the hypothesized unfolding process of the Hb complex in the gas phase. Native top-down ECD-MS allows efficient probing of the Hb complex structure and the subunit binding interactions in the gas phase. Finally, it may provide a fast and effective means to probe the structure of novel protein complexes that are intractable to traditional structural characterization tools.

  19. Who killed Laius?: On Sophocles' enigmatic message.

    PubMed

    Priel, Beatriz

    2002-04-01

    Using Laplanche's basic conceptualisation of the role of the other in unconscious processes, the author proposes a reading of Sophocles' tragedy, Oedipus the King, according to basic principles of dream interpretation. This reading corroborates contemporary literary perspectives suggesting that Sophocles' tragedy may not only convey the myth but also provide a critical analysis of how myths work. Important textual inconsistencies and incoherence, which have been noted through the centuries, suggest the existence of another, repressed story. Moreover, the action of the play points to enigmatic parental messages of infanticide and the silencing of Oedipus's story, as well as their translation into primordial guilt, as the origins of the tragic denouement. Oedipus's self-condemnation of parricide follows these enigmatic codes and is unrelated to, and may even contradict, the evidence offered in the tragedy as to the identity of Laius's murderers. Moreover, Sophocles' text provides a complex intertwining of hermeneutic and deterministic perspectives. Through the use of the mythical deterministic content, the formal characteristics of Sophocles' text, mainly its complex time perspective and extensive use of double meaning, dramatise in the act of reading an acute awareness of interpretation. This reading underscores the fundamental role of the other in the constitution of unconscious processes.

  20. Thermostatted kinetic equations as models for complex systems in physics and life sciences.

    PubMed

    Bianca, Carlo

    2012-12-01

    Statistical mechanics is a powerful method for understanding equilibrium thermodynamics. An equivalent theoretical framework for nonequilibrium systems has remained elusive. The thermodynamic forces driving the system away from equilibrium introduce energy that must be dissipated if nonequilibrium steady states are to be obtained. Historically, further terms were introduced, collectively called a thermostat, whose original application was to generate constant-temperature equilibrium ensembles. This review surveys kinetic models coupled with time-reversible deterministic thermostats for the modeling of large systems composed of both inert matter particles and living entities. The introduction of deterministic thermostats makes it possible to model the onset of the nonequilibrium stationary states that are typical of most real-world complex systems. The first part of the paper is focused on a general presentation of the main physical and mathematical definitions and tools: nonequilibrium phenomena, the Gauss least constraint principle and Gaussian thermostats. The second part provides a review of a variety of thermostatted mathematical models in physics and the life sciences, including the Kac, Boltzmann and Jager-Segel models and the thermostatted (continuous and discrete) kinetic models for active particles. Applications refer to semiconductor devices, nanosciences, biological phenomena, vehicular traffic, social and economic systems, and crowd and swarm dynamics. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Are Individual Differences in Performance on Perceptual and Cognitive Optimization Problems Determined by General Intelligence?

    ERIC Educational Resources Information Center

    Burns, Nicholas R.; Lee, Michael D.; Vickers, Douglas

    2006-01-01

    Studies of human problem solving have traditionally used deterministic tasks that require the execution of a systematic series of steps to reach a rational and optimal solution. Most real-world problems, however, are characterized by uncertainty, the need to consider an enormous number of variables and possible courses of action at each stage in…

  2. Machine tools error characterization and compensation by on-line measurement of artifact

    NASA Astrophysics Data System (ADS)

    Wahid Khan, Abdul; Chen, Wuyi; Wu, Lili

    2009-11-01

    Most manufacturing machine tools are utilized for mass production or batch production with high accuracy under a deterministic manufacturing principle. The volumetric accuracy of machine tools depends on the positional accuracy of the cutting tool, probe or end effector relative to the workpiece in the workspace volume. In this research paper, a methodology is presented for the volumetric calibration of machine tools by on-line measurement of an artifact or an object of a similar type. The machine tool's geometric error characterization was carried out using a standard or an artifact having a geometry similar to the mass production or batch production product. The artifact was measured at an arbitrary position in the volumetric workspace with a calibrated Renishaw touch trigger probe system. Positional errors were stored in a computer for compensation purposes, so that subsequent manufacturing batches could be run through compensated codes. This methodology was found quite effective for manufacturing high precision components with greater dimensional accuracy and reliability. Calibration by on-line measurement offers the advantage of improving the manufacturing process through use of the deterministic manufacturing principle; it was found efficient and economical, but is limited to the workspace or envelope surface of the measured artifact's geometry or profile.
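    The store-then-compensate step can be sketched as a lookup against the measured error map (the probe points and error values below are hypothetical, and a real controller would interpolate rather than take the nearest point):

```python
import math

# Nominal probe point (x, y) -> positional error (dx, dy) measured on the artifact.
error_map = {
    (0.0, 0.0): (0.010, -0.005),
    (100.0, 0.0): (0.018, -0.002),
    (0.0, 100.0): (0.008, 0.004),
    (100.0, 100.0): (0.021, 0.006),
}

def compensate(x, y):
    """Correct a commanded position using the nearest probed artifact point."""
    nearest = min(error_map, key=lambda pt: math.dist(pt, (x, y)))
    dx, dy = error_map[nearest]
    return x - dx, y - dy

cx, cy = compensate(90.0, 10.0)   # nearest probed point is (100.0, 0.0)
```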

  3. Sustainability science: accounting for nonlinear dynamics in policy and social-ecological systems

    EPA Science Inventory

    Resilience is an emergent property of complex systems. Understanding resilience is critical for sustainability science, as linked social-ecological systems and the policy process that governs them are characterized by non-linear dynamics. Non-linear dynamics in these systems mean...

  4. The Transcriptional Regulator CBP Has Defined Spatial Associations within Interphase Nuclei

    PubMed Central

    McManus, Kirk J; Stephens, David A; Adams, Niall M; Islam, Suhail A; Freemont, Paul S; Hendzel, Michael J

    2006-01-01

    It is becoming increasingly clear that nuclear macromolecules and macromolecular complexes are compartmentalized through binding interactions into an apparent three-dimensionally ordered structure. This ordering, however, does not appear to be deterministic to the extent that chromatin and nonchromatin structures maintain a strict 3-D arrangement. Rather, spatial ordering within the cell nucleus appears to conform to stochastic rather than deterministic spatial relationships. The stochastic nature of organization becomes particularly problematic when any attempt is made to describe the spatial relationship between proteins involved in the regulation of the genome. The CREB–binding protein (CBP) is one such transcriptional regulator that, when visualised by confocal microscopy, reveals a highly punctate staining pattern comprising several hundred individual foci distributed within the nuclear volume. Markers for euchromatic sequences have similar patterns. Surprisingly, in most cases, the predicted one-to-one relationship between transcription factor and chromatin sequence is not observed. Consequently, to understand whether spatial relationships that are not coincident are nonrandom and potentially biologically important, it is necessary to develop statistical approaches. In this study, we report on the development of such an approach and apply it to understanding the role of CBP in mediating chromatin modification and transcriptional regulation. We have used nearest-neighbor distance measurements and probability analyses to study the spatial relationship between CBP and other nuclear subcompartments enriched in transcription factors, chromatin, and splicing factors. Our results demonstrate that CBP has an order of spatial association with other nuclear subcompartments. We observe closer associations between CBP and RNA polymerase II–enriched foci and SC35 speckles than nascent RNA or specific acetylated histones. 
Furthermore, we find that CBP has a significantly higher probability of being close to its known in vivo substrate histone H4 lysine 5 compared with the closely related H4 lysine 12. This study demonstrates that complex relationships not described by colocalization exist in the interphase nucleus and can be characterized and quantified. The subnuclear distribution of CBP is difficult to reconcile with a model where chromatin organization is the sole determinant of the nuclear organization of proteins that regulate transcription but is consistent with a close link between spatial associations and nuclear functions. PMID:17054391

  5. Relevance of deterministic chaos theory to studies in functioning of dynamical systems

    NASA Astrophysics Data System (ADS)

    Glagolev, S. N.; Bukhonova, S. M.; Chikina, E. D.

    2018-03-01

    The paper considers the chaotic behavior of dynamical systems typical of social and economic processes. Approaches to the analysis and evaluation of system development processes are studied from the point of view of controllability and determinateness. The paper explains why non-standard mathematical tools, based on fractal theory, are needed to describe the states of dynamical social and economic systems. Features of fractal structures, such as irregularity, self-similarity, dimensionality and fractionality, are considered.

  6. Robustness of non-interdependent and interdependent networks against dependent and adaptive attacks

    NASA Astrophysics Data System (ADS)

    Tyra, Adam; Li, Jingtao; Shang, Yilun; Jiang, Shuo; Zhao, Yanjun; Xu, Shouhuai

    2017-09-01

    Robustness of complex networks has been extensively studied via the notion of site percolation, which typically models independent and non-adaptive attacks (or disruptions). However, real-life attacks are often dependent and/or adaptive. This motivates us to characterize the robustness of complex networks, including non-interdependent and interdependent ones, against dependent and adaptive attacks. For this purpose, dependent attacks are accommodated by L-hop percolation, where the nodes within some L-hop (L ≥ 0) distance of a chosen node are all deleted during one attack (with L = 0 degenerating to site percolation). Adaptive attacks, in turn, are launched by attackers who can make node-selection decisions based on the network state at the beginning of each attack. The resulting characterization enriches the body of knowledge with new insights, such as: (i) the Achilles' heel phenomenon is only valid for independent attacks, but not for dependent attacks; (ii) powerful attack strategies (e.g., targeted attacks combined with dependent attacks, or dependent attacks combined with adaptive attacks) are not compatible and cannot help the attacker when used collectively. Our results shed some light on the design of robust complex networks.
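    The L-hop deletion rule is straightforward to state in code. The sketch below, on an invented five-node path graph, removes a chosen node together with everything within L hops; as the abstract notes, L = 0 reduces to ordinary site percolation:

```python
from collections import deque

def l_hop_attack(adj, target, L):
    """Delete `target` and all nodes within L hops of it (L = 0 reduces
    to ordinary site percolation). `adj` maps node -> set of neighbours.
    Returns the surviving adjacency map."""
    dist = {target: 0}
    q = deque([target])
    while q:                       # breadth-first search out to radius L
        u = q.popleft()
        if dist[u] == L:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    removed = set(dist)
    return {u: adj[u] - removed for u in adj if u not in removed}

# path graph 0-1-2-3-4; attacking node 2 with L = 1 removes {1, 2, 3}
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(sorted(l_hop_attack(adj, 2, 1)))  # [0, 4]
```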

  7. Jackpot Structural Features: Rollover Effect and Goal-Gradient Effect in EGM Gambling.

    PubMed

    Li, En; Rockloff, Matthew J; Browne, Matthew; Donaldson, Phillip

    2016-06-01

    Relatively little research has been undertaken on the influence of jackpot structural features on electronic gaming machine (EGM) gambling behavior. This study considered two common features of EGM jackpots: progressive (i.e., the jackpot incrementally growing in value as players make additional bets), and deterministic (i.e., a guaranteed jackpot after a fixed number of bets, which is determined in advance and at random). Their joint influences on player betting behavior and the moderating role of jackpot size were investigated in a crossed-design experiment. Using real money, players gambled on a computer simulated EGM with real jackpot prizes of either $500 (i.e., small jackpot) or $25,000 (i.e., large jackpot). The results revealed three important findings. Firstly, players placed the largest bets (20.3 % higher than the average) on large jackpot EGMs that were represented to be deterministic and non-progressive. This finding was supportive of a hypothesized 'goal-gradient effect', whereby players might have felt subjectively close to an inevitable payoff for a high-value prize. Secondly, large jackpots that were non-deterministic and progressive also promoted high bet sizes (17.8 % higher than the average), resembling the 'rollover effect' demonstrated in lottery betting, whereby players might imagine that their large bets could be later recouped through a big win. Lastly, neither the hypothesized goal-gradient effect nor the rollover effect was evident among players betting on small jackpot machines. These findings suggest that certain high-value jackpot configurations may have intensifying effects on player behavior.

  8. Stochastic and deterministic causes of streamer branching in liquid dielectrics

    NASA Astrophysics Data System (ADS)

    Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl

    2013-08-01

    Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching, such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations, is inevitable in any dielectric. The fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make branching inevitable depending on the shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is sufficiently thin and slow. Furthermore, the discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the propagating streamer head, even in a perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether branching occurs under particular inhomogeneous circumstances. The estimated number, diameter, and velocity of the resulting branches agree qualitatively with experimental images of streamer branching.

  9. Characterization of Non-Innocent Metal Complexes Using Solid-State NMR Spectroscopy: o-Dioxolene Vanadium Complexes

    PubMed Central

    Chatterjee, Pabitra B.; Goncharov-Zapata, Olga; Quinn, Laurence L.; Hou, Guangjin; Hamaed, Hiyam; Schurko, Robert W.; Polenova, Tatyana; Crans, Debbie C.

    2012-01-01

    51V solid-state NMR (SSNMR) studies of a series of non-innocent vanadium(V) catechol complexes have been conducted to evaluate the possibility that 51V NMR observables, the quadrupolar and chemical shift anisotropies, can be used to characterize the electronic structures of such compounds. The vanadium(V) catechol complexes described in these studies have relatively small quadrupolar coupling constants, which cover a surprisingly small range from 3.4 to 4.2 MHz. On the other hand, isotropic 51V NMR chemical shifts cover a wide range, from −200 ppm to 400 ppm in solution and from −219 to 530 ppm in the solid state. A linear correlation between the 51V NMR isotropic solution and solid-state chemical shifts of complexes containing non-innocent ligands is observed. These experimental results provide the information needed for the application of 51V SSNMR spectroscopy to characterizing the electronic properties of a wide variety of vanadium-containing systems, in particular those containing non-innocent ligands and having chemical shifts outside the populated range of −300 ppm to −700 ppm. The studies presented in this report demonstrate that the small quadrupolar couplings, covering a narrow range of values, reflect the symmetric electronic charge distribution, which is also similar across these complexes. These quadrupolar interaction parameters alone are not sufficient to capture the rich electronic structure of these complexes. In contrast, the chemical shift anisotropy tensor elements accessible from 51V SSNMR experiments are a highly sensitive probe of subtle differences in electronic distribution and orbital occupancy in these compounds. Quantum chemical (DFT) calculations of NMR parameters for [VO(hshed)(Cat)] yield a 51V CSA tensor in reasonable agreement with the experimental results, but surprisingly, the calculated quadrupolar coupling constant is significantly greater than the experimental value. 
The studies demonstrate that substitution of the catechol ligand with electron donating groups results in an increase in the HOMO-LUMO gap and can be directly followed by an upfield shift for the vanadium catechol complex. In contrast, substitution of the catechol ligand with electron withdrawing groups results in a decrease in the HOMO-LUMO gap and can directly be followed by a downfield shift for the complex. The vanadium catechol complexes were used in this work because the 51V is a half-integer quadrupolar nucleus whose NMR observables are highly sensitive to the local environment. However, the results are general and could be extended to other redox active complexes that exhibit similar coordination chemistry as the vanadium catechol complexes. PMID:21842875

  10. A deterministic model of electron transport for electron probe microanalysis

    NASA Astrophysics Data System (ADS)

    Bünger, J.; Richter, S.; Torrilhon, M.

    2018-01-01

    Within the last decades significant improvements in the spatial resolution of electron probe microanalysis (EPMA) were obtained by instrumental enhancements. In contrast, the quantification procedures essentially remained unchanged. As the classical procedures assume either homogeneity or a multi-layered structure of the material, they limit the spatial resolution of EPMA. The possibilities of improving the spatial resolution through more sophisticated quantification procedures are therefore almost untouched. We investigate a new analytical model (M1-model) for the quantification procedure based on fast and accurate modelling of electron-X-ray-matter interactions in complex materials using a deterministic approach to solve the electron transport equations. We outline the derivation of the model from the Boltzmann equation for electron transport using the method of moments with a minimum entropy closure and present first numerical results for three different test cases (homogeneous, thin film and interface). Taking Monte Carlo as a reference, the results for the three test cases show that the M1-model is able to reproduce the electron dynamics in EPMA applications very well. Compared to classical analytical models like XPP and PAP, the M1-model is more accurate and far more flexible, which indicates the potential of deterministic models of electron transport to further increase the spatial resolution of EPMA.

  11. Multi-Scale Modeling of the Gamma Radiolysis of Nitrate Solutions.

    PubMed

    Horne, Gregory P; Donoclift, Thomas A; Sims, Howard E; Orr, Robin M; Pimblott, Simon M

    2016-11-17

    A multiscale modeling approach has been developed for the long-time-scale radiolysis of aqueous systems. The approach uses a combination of stochastic track structure and track chemistry as well as deterministic homogeneous chemistry techniques and involves four key stages: radiation track structure simulation, the subsequent physicochemical processes, nonhomogeneous diffusion-reaction kinetic evolution, and homogeneous bulk chemistry modeling. The first three components model the physical and chemical evolution of an isolated radiation chemical track and provide radiolysis yields, within the extremely low dose isolated track paradigm, as the input parameters for a bulk deterministic chemistry model. This approach to radiation chemical modeling has been tested by comparison with the experimentally observed yield of nitrite from the gamma radiolysis of sodium nitrate solutions. This is a complex radiation chemical system that is strongly dependent on secondary reaction processes. The concentration of nitrite depends not only on the evolution of radiation track chemistry and the scavenging of the hydrated electron and its precursors, but also on the subsequent reactions of the products of these scavenging reactions with other water radiolysis products. Without the inclusion of intratrack chemistry, the deterministic component of the multiscale model is unable to correctly predict the experimental data, highlighting the importance of intratrack radiation chemistry in the chemical evolution of the irradiated system.

  12. Probabilistic Design and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Strack, William C.; Nagpal, Vinod K.

    2010-01-01

    PRODAF is a software package designed to aid analysts and designers in conducting probabilistic analysis of components and systems. PRODAF can integrate multiple analysis programs to ease the tedious process of conducting a complex analysis that requires the use of multiple software packages. The work uses a commercial finite element analysis (FEA) program with modules from NESSUS to conduct a probabilistic analysis of a hypothetical turbine blade, disk, and shaft model. PRODAF applies the response surface method at the component level and extrapolates the component-level responses to the system level. Hypothetical components of a gas turbine engine are first deterministically modeled using FEA. Variations in selected geometric dimensions and loading conditions are analyzed to determine their effects on the stress state within each component. Geometric variations include chord length and height for the blade, and inner radius, outer radius, and thickness for the disk. Probabilistic analysis is carried out using developing software packages such as System Uncertainty Analysis (SUA) and PRODAF. PRODAF was used with a commercial deterministic FEA program in conjunction with modules from the probabilistic analysis program, NESTEM, to perturb loads and geometries to provide a reliability and sensitivity analysis. PRODAF simplified the handling of data among the various programs involved, and will work with many commercial and open-source deterministic programs, probabilistic programs, or modules.
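    The response surface method mentioned above replaces expensive component analyses with a cheap fitted surrogate. A minimal one-variable sketch, with hypothetical load/stress values standing in for FEA runs (this is only the idea, not PRODAF's actual interface):

```python
def quadratic_surrogate(samples):
    """Fit y = a + b*x + c*x^2 exactly through three (x, y) design points
    via Lagrange interpolation."""
    (x0, y0), (x1, y1), (x2, y2) = samples
    def surrogate(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return surrogate

# pretend these stresses came from three FEA runs at different loads
stress = quadratic_surrogate([(1.0, 10.0), (2.0, 16.0), (3.0, 26.0)])
print(stress(2.5))  # 20.5 -- cheap evaluation between design points
```

    In practice the surface is multivariate and fitted by least squares over many design points, but the role is the same: once fitted, the surrogate can be sampled thousands of times for reliability estimates at negligible cost.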

  13. Losers in the 'Rock-Paper-Scissors' game: The role of non-hierarchical competition and chaos as biodiversity sustaining agents in aquatic systems

    EPA Science Inventory

    Processes occurring within small areas (patch-scale) that influence species richness and spatial heterogeneity of larger areas (landscape-scale) have long been an interest of ecologists. This research focused on the role of patch-scale deterministic chaos arising in phytoplankton...

  14. Examining Errors in Simple Spreadsheet Modeling from Different Research Perspectives

    ERIC Educational Resources Information Center

    Kadijevich, Djordje M.

    2012-01-01

    By using a sample of 1st-year undergraduate business students, this study dealt with the development of simple (deterministic and non-optimization) spreadsheet models of income statements within an introductory course on business informatics. The study examined students' errors in doing this for business situations of their choice and found three…

  15. Characterization of the Androgen-sensitive MDA-kb2 Cell Line for Assessing Complex Environmental Mixtures, Presentation

    EPA Science Inventory

    Synthetic and natural steroidal androgens and estrogens and many other non-steroidal endocrine-active compounds commonly occur as complex mixtures in aquatic environments. It is important to understand the potential interactive effects of these mixtures to properly assess their r...

  16. Arctic Sea Ice: Trends, Stability and Variability

    NASA Astrophysics Data System (ADS)

    Moon, Woosok

    A stochastic Arctic sea-ice model is derived and analyzed in detail to interpret the recent decay and associated variability of Arctic sea-ice under changes in greenhouse gas forcing widely referred to as global warming. The approach begins from a deterministic model of the heat flux balance through the air/sea/ice system, which uses observed monthly-averaged heat fluxes to drive a time evolution of sea-ice thickness. This model reproduces the observed seasonal cycle of the ice cover, and it is into this model that stochastic noise, representing high-frequency variability, is introduced. The model takes the form of a single periodic non-autonomous stochastic ordinary differential equation. Following an introductory chapter, the two chapters that follow focus principally on the properties of the deterministic model in order to identify the main properties governing the stability of the ice cover. In chapter 2 the underlying time-dependent solutions to the deterministic model are analyzed for their stability. It is found that the response time-scale of the system to perturbations is dominated by the destabilizing sea-ice albedo feedback, which is operative in the summer, and the stabilizing long wave radiative cooling of the ice surface, which is operative in the winter. This basic competition is found throughout the thesis to define the governing dynamics of the system. In particular, as greenhouse gas forcing increases, the sea-ice albedo feedback becomes more effective at destabilizing the system. Thus, any projection of the future state of Arctic sea-ice will depend sensitively on the treatment of the ice-albedo feedback. This in turn implies that the treatment of a fractional ice cover, as the ice areal extent changes rapidly, must be handled with the utmost care. In chapter 3, the idea of a two-season model, with just winter and summer, is revisited. By breaking the seasonal cycle up in this manner one can simplify the interpretation of the basic dynamics. 
    Whereas in the fully time-dependent seasonal model one finds a stable seasonal ice cover (vanishing in the summer but reappearing in the winter), in previous two-season models such a state could not be found. In this chapter, sufficient conditions for a stable seasonal ice cover are found, which amount to including the time variation of the shortwave radiance during summer. This provides a qualitative interpretation of the continuous and reversible shift from perennial to seasonally-varying states in the more complex deterministic model. In order to put the stochastic model into a realistic observational framework, in chapter 4, the analysis of daily satellite retrievals of ice albedo and ice extent is described. The basic statistics are examined, and a new method, called multi-fractal temporally weighted detrended fluctuation analysis, is applied. Because the basic data are taken on daily time scales, the full fidelity of the retrieved data is accessed and we find time scales from days and weeks to seasonal and decadal. Importantly, the data show a white-noise structure on annual to biannual time scales, and this provides the basis for using a Wiener process for the noise in the stochastic Arctic sea-ice model. In chapter 5 a generalized perturbation analysis of a non-autonomous stochastic differential equation is developed and then applied to interpreting the variability of Arctic sea-ice as greenhouse gas forcing increases. The resulting analytic expressions for the statistical moments provide insight into the transient and memory-delay effects associated with the basic competition in the system: the ice-albedo feedback and long wave radiative stabilization, along with the asymmetry in the nonlinearity of the deterministic contributions to the model and the magnitude and structure of the stochastic noise. A systematic study of the impact of the noise structure, from additive to multiplicative, is undertaken in chapters 6 and 7. 
Finally, in chapter 8 the matter of including a fractional ice cover into a deterministic model is addressed. It is found that a simple but crucial mistake is made in one of the most widely used model schemes and this has a major impact given the important role of areal fraction in the ice-albedo feedback in such a model. The thesis is summarized in chapter 9.
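    The model described in this thesis is a periodic non-autonomous stochastic ODE. A schematic of how such an equation, of the generic form dX = f(X, t) dt + σ dW, is integrated numerically with the standard Euler-Maruyama scheme; the drift and noise terms below are toy stand-ins, not the thesis's sea-ice physics:

```python
import math, random

def euler_maruyama(f, sigma, x0, t0, t1, n, rng):
    """Integrate dX = f(X, t) dt + sigma dW from t0 to t1 in n steps."""
    dt = (t1 - t0) / n
    x, t = x0, t0
    for _ in range(n):
        x += f(x, t) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return x

# toy periodic forcing: relaxation toward a seasonally varying equilibrium
f = lambda x, t: -(x - math.cos(2.0 * math.pi * t))
rng = random.Random(0)
print(euler_maruyama(f, 0.1, 1.0, 0.0, 5.0, 5000, rng))
```

    Setting sigma to zero recovers the deterministic seasonal cycle, which is the same decomposition the thesis uses: a deterministic periodic backbone plus Wiener-process noise representing high-frequency variability.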

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiang; Malmirchegini, G. Reza; Clubb, Robert T.

    Native mass spectrometry (MS) has become an invaluable tool for the characterization of proteins and non-covalent protein complexes under near physiological solution conditions. Here we report the structural characterization of human hemoglobin (Hb), a 64 kDa oxygen-transporting protein complex, by high resolution native top-down mass spectrometry using electrospray ionization (ESI) and a 15-Tesla Fourier transform ion cyclotron resonance (FTICR) mass spectrometer. Native MS preserves the non-covalent interactions between the globin subunits, and electron capture dissociation (ECD) produces fragments directly from the intact Hb complex without dissociating the subunits. Using activated ion ECD, we observe the gradual unfolding process of the Hb complex in the gas phase. Without protein ion activation, the native Hb shows very limited ECD fragmentation from the N-termini, suggesting a tightly packed structure of the native complex and therefore low fragmentation efficiency. Precursor ion activation allows a steady increase of N-terminal fragment ions, while the C-terminal fragments remain limited (38 c ions and 4 z ions on the α chain; 36 c ions and 2 z ions on the β chain). This ECD fragmentation pattern suggests that upon activation, the Hb complex starts to unfold from the N-termini of both subunits, whereas the C-terminal regions, and therefore the potential regions involved in the subunit binding interactions, remain intact. ECD-MS of the Hb dimer shows similar fragmentation patterns to the Hb tetramer, providing further evidence for the hypothesized unfolding process of the Hb complex in the gas phase. Native top-down ECD-MS allows efficient probing of the Hb complex structure and the subunit binding interactions in the gas phase. Finally, it may provide a fast and effective means to probe the structure of novel protein complexes that are intractable to traditional structural characterization tools.

  18. Improved multi-objective ant colony optimization algorithm and its application in complex reasoning

    NASA Astrophysics Data System (ADS)

    Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing

    2013-09-01

    The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning for a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average complexity of test. Under the constraints of both the known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking the minimization of fault-reasoning cost as the target function. Since the problem is non-deterministic polynomial-time hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to optimize the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. 
Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) algorithm can reason about and locate fault positions precisely by solving the multi-objective fault diagnosis model, providing a new method for multi-constraint, multi-objective fault diagnosis and reasoning in complex systems.
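    The pseudo-random-proportional rule mentioned above can be sketched generically. This is the standard ACS-style form; the paper's modified rule additionally constrains the candidate set via the reachability matrix, which is not reproduced here, and the pheromone values below are invented:

```python
import random

def next_node(candidates, tau, eta, beta, q0, rng):
    """Pseudo-random-proportional choice: with probability q0 move greedily
    to the highest tau * eta**beta score, otherwise sample a candidate in
    proportion to that score (roulette wheel)."""
    scores = {j: tau[j] * eta[j] ** beta for j in candidates}
    if rng.random() < q0:                    # exploitation
        return max(scores, key=scores.get)
    total = sum(scores.values())             # biased exploration
    r, acc = rng.random() * total, 0.0
    for j, s in scores.items():
        acc += s
        if acc >= r:
            return j
    return j                                 # guard against rounding

tau = {'a': 0.9, 'b': 0.1}   # pheromone levels (hypothetical)
eta = {'a': 1.0, 'b': 1.0}   # heuristic desirability (hypothetical)
print(next_node(['a', 'b'], tau, eta, 2.0, 1.0, random.Random(1)))  # a
```

    The parameter q0 sets the balance between exploiting accumulated pheromone and exploring alternative reasoning paths, which is the conflict-balancing role the abstract assigns to its modified rule.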

  19. Observation-Driven Configuration of Complex Software Systems

    NASA Astrophysics Data System (ADS)

    Sage, Aled

    2010-06-01

    The ever-increasing complexity of software systems makes them hard to comprehend, predict and tune due to emergent properties and non-deterministic behaviour. Complexity arises from the size of software systems and the wide variety of possible operating environments: the increasing choice of platforms and communication policies leads to ever more complex performance characteristics. In addition, software systems exhibit different behaviour under different workloads. Many software systems are designed to be configurable so that policies can be chosen to meet the needs of various stakeholders. For complex software systems it can be difficult to accurately predict the effects of a change and to know which configuration is most appropriate. This thesis demonstrates that it is useful to run automated experiments that measure a selection of system configurations. Experiments can find configurations that meet the stakeholders' needs, find interesting behavioural characteristics, and help produce predictive models of the system's behaviour. The design and use of ACT (Automated Configuration Tool) for running such experiments is described, in combination with a number of search strategies for deciding which configurations to measure. Design Of Experiments (DOE) is discussed, with emphasis on Taguchi Methods. These statistical methods have been used extensively in manufacturing, but have not previously been used for configuring software systems. The novel contribution here is an industrial case study applying the combination of ACT and Taguchi Methods to DC-Directory, a product from Data Connection Ltd (DCL). The case study investigated the applicability of Taguchi Methods for configuring complex software systems. Taguchi Methods were found to be useful for modelling and configuring DC-Directory, making them a valuable addition to the techniques available to system administrators and developers.
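    The Taguchi approach referred to above screens configuration factors with an orthogonal array rather than a full factorial. A minimal sketch with an L4(2^3) array and hypothetical latency measurements (the factors and numbers are invented, not from the DC-Directory case study):

```python
# L4(2^3) orthogonal array: three two-level factors screened in four runs
# instead of the full 2^3 = 8; each column is balanced against the others.
L4 = [  # rows: runs; columns: levels (0/1) of factors A, B, C
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def main_effects(responses):
    """Main effect of each factor: mean response at level 1 minus level 0."""
    effects = []
    for factor in range(3):
        lo = [y for row, y in zip(L4, responses) if row[factor] == 0]
        hi = [y for row, y in zip(L4, responses) if row[factor] == 1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# pretend these latencies (ms) came from four measured configurations
print(main_effects([120.0, 100.0, 90.0, 70.0]))  # [-30.0, -20.0, 0.0]
```

    Here factor A dominates, factor B matters less, and factor C is inert, exactly the kind of screening result used to decide which configuration knobs deserve further experiments.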

  20. Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)

    NASA Astrophysics Data System (ADS)

    Kędra, Mariola

    2014-02-01

    Is the underlying dynamics of river flow random or deterministic? If deterministic, is it chaotic? This issue is still controversial. The application of several independent methods, techniques and tools to daily river flow data gives consistent, reliable and clear-cut answers to the question. The outcomes indicate that the investigated discharge dynamics is not random but deterministic. Moreover, the results fully confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge data from two selected gauging stations on a mountain river in southern Poland, the Raba River.
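    One standard tool in such studies is estimation of the largest Lyapunov exponent, where a positive value indicates deterministic chaos. The paper's own methods are not reproduced here; the sketch below merely illustrates the idea on the logistic map, whose exponent at r = 4 is known analytically to be ln 2:

```python
import math

def lyapunov_logistic(r, x0, n):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the average of log|f'(x)| along an orbit."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)|
        x = r * x * (1.0 - x)
    return total / n

# positive exponent -> deterministic chaos; exact value at r = 4 is ln 2
print(lyapunov_logistic(4.0, 0.3, 100000))
```

    For measured series such as river discharge, the derivative is unknown and the exponent must instead be estimated from divergence of nearby trajectories in a reconstructed phase space, which is considerably more delicate.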

  1. Real-time flood forecasting

    USGS Publications Warehouse

    Lai, C.; Tsay, T.-K.; Chien, C.-H.; Wu, I.-L.

    2009-01-01

    Researchers at the Hydroinformatic Research and Development Team (HIRDT) of the National Taiwan University undertook a project to create a real-time flood forecasting model, with the aim of predicting flow in the Tamsui River Basin. The model was based on a deterministic approach: mathematical modeling of a complex phenomenon, in which specific parameter values are operated on to produce a discrete result. The project also devised a rainfall-stage model that relates the rate of rainfall upland directly to the change of state of the river, and is further related to another typhoon-rainfall model. Geographic information system (GIS) data, based on a precise contour model of the terrain, were used to estimate the regions most vulnerable to flooding. In response to the project's progress, the HIRDT also applied a deterministic model of unsteady flow to help river authorities issue timely warnings and take other emergency measures.

  2. Mutual Information Rate and Bounds for It

    PubMed Central

    Baptista, Murilo S.; Rubinger, Rero M.; Viana, Emilson R.; Sartorelli, José C.; Parlitz, Ulrich; Grebogi, Celso

    2012-01-01

    The amount of information exchanged per unit of time between two nodes in a dynamical network or between two data sets is a powerful concept for analysing complex systems. This quantity, known as the mutual information rate (MIR), is calculated from the mutual information, which is rigorously defined only for random systems. Moreover, the definition of mutual information is based on probabilities of significant events. This work offers a simple alternative way to calculate the MIR in dynamical (deterministic) networks or between two time series (not fully deterministic), and to calculate its upper and lower bounds without having to calculate probabilities, but rather in terms of well known and well defined quantities in dynamical systems. As possible applications of our bounds, we study the relationship between synchronisation and the exchange of information in a system of two coupled maps and in experimental networks of coupled oscillators. PMID:23112809
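    For background, the probability-based definition of mutual information, which the paper's bounds are designed to avoid estimating directly, can be computed from joint symbol counts of two discretized series. A minimal sketch (the series below are invented toy data):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits from empirical symbol frequencies of two
    equal-length discretized series."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# identical uniform binary series: MI equals the entropy, 1 bit
xs = [0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_information(xs, xs))  # 1.0
```

    The rate (MIR) is this quantity per unit time in the limit of long observations; the paper's contribution is bounding it via dynamical quantities such as Lyapunov exponents instead of these probabilities.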

  3. Survivability of Deterministic Dynamical Systems

    PubMed Central

    Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen

    2016-01-01

    The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: Given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states? We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures. PMID:27405955
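
    The definition suggests a direct Monte Carlo estimator: draw random initial conditions, integrate the deterministic system over the transient window, and count the fraction of trajectories that never leave the desirable region. A minimal sketch under assumed parameters, with a hypothetical lightly damped oscillator and a sup-norm desirable region (the paper's semi-analytic linear bound is not reproduced here):

```python
import numpy as np

def survivability(f, sample_x0, in_desirable, n_samples=500, dt=0.02, t_max=10.0, seed=1):
    """Monte Carlo estimate of survivability: the fraction of random initial
    conditions whose transient trajectory stays inside the desirable region
    for all of [0, t_max] (explicit Euler integration)."""
    rng = np.random.default_rng(seed)
    steps = int(t_max / dt)
    survived = 0
    for _ in range(n_samples):
        x = sample_x0(rng)
        ok = True
        for _ in range(steps):
            x = x + dt * f(x)          # explicit Euler step
            if not in_desirable(x):
                ok = False
                break
        survived += ok
    return survived / n_samples

# Hypothetical example: a lightly damped linear oscillator. Trajectories
# started near the corners of the sampling box transiently overshoot the
# desirable region even though the system is asymptotically stable.
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
s = survivability(
    f=lambda x: A @ x,
    sample_x0=lambda rng: rng.uniform(-1.0, 1.0, size=2),
    in_desirable=lambda x: np.max(np.abs(x)) <= 1.2,
)
```

    The example illustrates the paper's point that survivability measures transient, not asymptotic, behaviour: the system above passes every asymptotic stability test, yet its survivability is strictly below one.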

  4. Deterministic Joint Remote Preparation of an Arbitrary Seven-qubit Cluster-type State

    NASA Astrophysics Data System (ADS)

    Ding, MengXiao; Jiang, Min

    2017-06-01

    In this paper, we propose a scheme for jointly remotely preparing an arbitrary seven-qubit cluster-type state using several GHZ entangled states as the quantum channel. The coefficients of the prepared states can be not only real but also complex. First, Alice performs a three-qubit projective measurement according to the amplitude coefficients of the target state, and Bob then carries out another three-qubit projective measurement based on its phase coefficients. Next, a three-qubit state containing all the information of the target state is prepared with suitable operations. Finally, the target seven-qubit cluster-type state is prepared deterministically by introducing four auxiliary qubits and performing appropriate local unitary operations on the prepared three-qubit state. All of the receiver's recovery operations are summarized in a concise formula. Furthermore, our scheme is more feasible with present technologies than most previous schemes.

  5. Comparison of three controllers applied to helicopter vibration

    NASA Technical Reports Server (NTRS)

    Leyland, Jane A.

    1992-01-01

    A comparison was made of the applicability and suitability of the deterministic controller, the cautious controller, and the dual controller for the reduction of helicopter vibration by using higher harmonic blade pitch control. A randomly generated linear plant model was assumed and the performance index was defined to be a quadratic output metric of this linear plant. A computer code, designed to check out and evaluate these controllers, was implemented and used to accomplish this comparison. The effects of random measurement noise, the initial estimate of the plant matrix, and the plant matrix propagation rate were determined for each of the controllers. With few exceptions, the deterministic controller yielded the greatest vibration reduction (as characterized by the quadratic output metric) and operated with the greatest reliability. Theoretical limitations of these controllers were defined and appropriate candidate alternative methods, including one method particularly suitable to the cockpit, were identified.

  6. Deterministic phase slips in mesoscopic superconducting rings

    PubMed Central

    Petković, I.; Lollo, A.; Glazman, L. I.; Harris, J. G. E.

    2016-01-01

    The properties of one-dimensional superconductors are strongly influenced by topological fluctuations of the order parameter, known as phase slips, which cause the decay of persistent current in superconducting rings and the appearance of resistance in superconducting wires. Despite extensive work, quantitative studies of phase slips have been limited by uncertainty regarding the order parameter's free-energy landscape. Here we show detailed agreement between measurements of the persistent current in isolated flux-biased rings and Ginzburg–Landau theory over a wide range of temperature, magnetic field and ring size; this agreement provides a quantitative picture of the free-energy landscape. We also demonstrate that phase slips occur deterministically as the barrier separating two competing order parameter configurations vanishes. These results will enable studies of quantum and thermal phase slips in a well-characterized system and will provide access to outstanding questions regarding the nature of one-dimensional superconductivity. PMID:27882924

  7. Deterministic phase slips in mesoscopic superconducting rings.

    PubMed

    Petković, I; Lollo, A; Glazman, L I; Harris, J G E

    2016-11-24

    The properties of one-dimensional superconductors are strongly influenced by topological fluctuations of the order parameter, known as phase slips, which cause the decay of persistent current in superconducting rings and the appearance of resistance in superconducting wires. Despite extensive work, quantitative studies of phase slips have been limited by uncertainty regarding the order parameter's free-energy landscape. Here we show detailed agreement between measurements of the persistent current in isolated flux-biased rings and Ginzburg-Landau theory over a wide range of temperature, magnetic field and ring size; this agreement provides a quantitative picture of the free-energy landscape. We also demonstrate that phase slips occur deterministically as the barrier separating two competing order parameter configurations vanishes. These results will enable studies of quantum and thermal phase slips in a well-characterized system and will provide access to outstanding questions regarding the nature of one-dimensional superconductivity.

  8. Hyperchaotic Dynamics for Light Polarization in a Laser Diode

    NASA Astrophysics Data System (ADS)

    Bonatto, Cristian

    2018-04-01

    It is shown that a highly random-like behavior of light polarization states at the output of a free-running laser diode, covering the whole Poincaré sphere, arises from a fully deterministic nonlinear process characterized by hyperchaotic dynamics of two polarization modes nonlinearly coupled with the semiconductor medium inside the optical cavity. A number of statistical distributions were found to describe the deterministic data of the low-dimensional nonlinear flow: a lognormal distribution for the light intensity; Gaussian distributions for the electric field components and electron densities; and Rice and Rayleigh distributions for the modulus, and Weibull and negative exponential distributions for the intensity, of the orthogonal linear components of the electric field. The presented results could be relevant for the generation of compact light-source devices for low-dimensional optical hyperchaos-based applications.

  9. Aquatic bacterial assemblage structure in Pozas Azules, Cuatro Cienegas Basin, Mexico: Deterministic vs. stochastic processes.

    PubMed

    Espinosa-Asuar, Laura; Escalante, Ana Elena; Gasca-Pineda, Jaime; Blaz, Jazmín; Peña, Lorena; Eguiarte, Luis E; Souza, Valeria

    2015-06-01

    The aim of this study was to determine the contributions of stochastic vs. deterministic processes in the distribution of microbial diversity in four ponds (Pozas Azules) within a temporally stable aquatic system in the Cuatro Cienegas Basin, State of Coahuila, Mexico. A sampling strategy for sites that were geographically delimited and had low environmental variation was applied to avoid obscuring distance effects. Aquatic bacterial diversity was characterized following a culture-independent approach (16S sequencing of clone libraries). The results showed a correlation between bacterial beta diversity (1-Sorensen) and geographic distance (distance decay of similarity), which indicated the influence of stochastic processes related to dispersion in the assembly of the ponds' bacterial communities. Our findings are the first to show the influence of dispersal limitation in the prokaryotic diversity distribution of Cuatro Cienegas Basin.
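
    The distance-decay analysis pairs a compositional dissimilarity (1 - Sørensen) with geographic distance for every pair of ponds and checks for a positive correlation. A minimal sketch on a made-up presence/absence matrix (the study's actual data were 16S rRNA clone libraries from four ponds; the matrix and coordinates below are hypothetical):

```python
import numpy as np
from itertools import combinations

def sorensen_dissimilarity(a, b):
    """1 - Sorensen similarity for two presence/absence vectors."""
    shared = int(np.sum(a & b))
    return 1.0 - 2.0 * shared / (int(a.sum()) + int(b.sum()))

# Hypothetical ponds-by-taxa presence/absence matrix and pond coordinates.
ponds = np.array([
    [1, 1, 1, 0, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1, 0, 1, 0],
    [0, 0, 1, 1, 1, 0, 1, 1],
])
coords = np.array([[0.0, 0.0], [1.0, 0.2], [3.0, 0.5], [5.0, 1.0]])

pairs = list(combinations(range(len(ponds)), 2))
beta = np.array([sorensen_dissimilarity(ponds[i], ponds[j]) for i, j in pairs])
dist = np.array([np.linalg.norm(coords[i] - coords[j]) for i, j in pairs])
r = np.corrcoef(dist, beta)[0, 1]  # positive r = distance decay of similarity
```

    A positive correlation between dissimilarity and distance, as in the study, is the signature of dispersal limitation; in practice significance would be assessed with a Mantel-type permutation test rather than a raw Pearson r.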

  10. Amygdala and Ventral Striatum Make Distinct Contributions to Reinforcement Learning.

    PubMed

    Costa, Vincent D; Dal Monte, Olga; Lucas, Daniel R; Murray, Elisabeth A; Averbeck, Bruno B

    2016-10-19

    Reinforcement learning (RL) theories posit that dopaminergic signals are integrated within the striatum to associate choices with outcomes. Often overlooked is that the amygdala also receives dopaminergic input and is involved in Pavlovian processes that influence choice behavior. To determine the relative contributions of the ventral striatum (VS) and amygdala to appetitive RL, we tested rhesus macaques with VS or amygdala lesions on deterministic and stochastic versions of a two-arm bandit reversal learning task. When learning was characterized with an RL model relative to controls, amygdala lesions caused general decreases in learning from positive feedback and choice consistency. By comparison, VS lesions only affected learning in the stochastic task. Moreover, the VS lesions hastened the monkeys' choice reaction times, which emphasized a speed-accuracy trade-off that accounted for errors in deterministic learning. These results update standard accounts of RL by emphasizing distinct contributions of the amygdala and VS to RL.
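
    The RL characterization in studies like this typically fits a delta-rule value update with a softmax choice rule. A minimal sketch of such an agent on deterministic (100/0) and stochastic (80/20) two-arm reversal schedules; the parameters, trial counts and seed are illustrative assumptions, not those fitted in the paper:

```python
import numpy as np

def run_bandit(p_best, alpha=0.3, beta=5.0, n_trials=400, reversal=200, seed=2):
    """Delta-rule RL agent with softmax choice on a two-arm bandit whose
    better arm pays with probability p_best (the other with 1 - p_best)
    and whose contingencies reverse halfway through the session."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)
    best, correct = 0, 0
    for t in range(n_trials):
        if t == reversal:
            best = 1 - best                          # reversal of contingencies
        p_choose_1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))
        choice = int(rng.random() < p_choose_1)
        p_reward = p_best if choice == best else 1.0 - p_best
        reward = float(rng.random() < p_reward)
        q[choice] += alpha * (reward - q[choice])    # delta-rule update
        correct += int(choice == best)
    return correct / n_trials

acc_deterministic = run_bandit(p_best=1.0)  # 100/0 reward schedule
acc_stochastic = run_bandit(p_best=0.8)     # 80/20 reward schedule
```

    Fitting alpha and beta per animal is what lets lesion effects be expressed as changes in learning rate (feedback sensitivity) and choice consistency, as in the paper's analysis.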

  11. Amygdala and ventral striatum make distinct contributions to reinforcement learning

    PubMed Central

    Costa, Vincent D.; Monte, Olga Dal; Lucas, Daniel R.; Murray, Elisabeth A.; Averbeck, Bruno B.

    2016-01-01

    Reinforcement learning (RL) theories posit that dopaminergic signals are integrated within the striatum to associate choices with outcomes. Often overlooked is that the amygdala also receives dopaminergic input and is involved in Pavlovian processes that influence choice behavior. To determine the relative contributions of the ventral striatum (VS) and amygdala to appetitive RL we tested rhesus macaques with VS or amygdala lesions on deterministic and stochastic versions of a two-arm bandit reversal learning task. When learning was characterized with a RL model relative to controls, amygdala lesions caused general decreases in learning from positive feedback and choice consistency. By comparison, VS lesions only affected learning in the stochastic task. Moreover, the VS lesions hastened the monkeys’ choice reaction times, which emphasized a speed-accuracy tradeoff that accounted for errors in deterministic learning. These results update standard accounts of RL by emphasizing distinct contributions of the amygdala and VS to RL. PMID:27720488

  12. Deterministic implementation of a bright, on-demand single photon source with near-unity indistinguishability via quantum dot imaging.

    PubMed

    He, Yu-Ming; Liu, Jin; Maier, Sebastian; Emmerling, Monika; Gerhardt, Stefan; Davanço, Marcelo; Srinivasan, Kartik; Schneider, Christian; Höfling, Sven

    2017-07-20

    Deterministic techniques enabling the implementation and engineering of bright and coherent solid-state quantum light sources are key for the reliable realization of a next generation of quantum devices. Such a technology, at best, should allow one to significantly scale up the number of implemented devices within a given processing time. In this work, we discuss a possible technology platform for such a scaling procedure, relying on the application of nanoscale quantum dot imaging to the pillar microcavity architecture, which promises to combine very high photon extraction efficiency and indistinguishability. We discuss the alignment technology in detail, and present the optical characterization of a selected device which features a strongly Purcell-enhanced emission output. This device, which yields an extraction efficiency of η = (49 ± 4) %, facilitates the emission of photons with (94 ± 2.7) % indistinguishability.

  13. Health risk assessment of inorganic arsenic intake of Ronphibun residents via duplicate diet study.

    PubMed

    Saipan, Piyawat; Ruangwises, Suthep

    2009-06-01

    To assess the health risk of Ronphibun residents from exposure to inorganic arsenic via the duplicate portion sampling method, one hundred and forty samples (140 subject-days) were collected from participants in Ronphibun sub-district. Inorganic arsenic in the duplicate diet samples was determined by acid digestion and hydride generation-atomic absorption spectrometry. Deterministic risk assessment following United States Environmental Protection Agency (U.S. EPA) guidelines is referenced throughout the present paper. The average daily dose and lifetime average daily dose of inorganic arsenic via duplicate diet were 0.0021 mg/kg/d and 0.00084 mg/kg/d, respectively. The resulting hazard quotient was 6.98 and the cancer risk was 1.26 x 10(-3). Both the hazard quotient and the cancer risk from the deterministic risk characterization of inorganic arsenic exposure in duplicate diets exceeded the corresponding safety levels of 1 (hazard quotient) and 1 x 10(-4) (cancer risk).
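
    The reported numbers are consistent with the standard U.S. EPA deterministic risk equations, HQ = ADD/RfD and cancer risk = LADD x CSF, using the IRIS toxicity values for inorganic arsenic; those toxicity values are an assumption here, since the abstract does not state them:

```python
# Standard U.S. EPA deterministic risk equations with the IRIS toxicity
# values for inorganic arsenic (assumed here; not stated in the abstract).
RFD = 0.0003    # oral reference dose, mg/kg/day
CSF = 1.5       # oral cancer slope factor, (mg/kg/day)^-1

ADD = 0.0021    # reported average daily dose, mg/kg/day
LADD = 0.00084  # reported lifetime average daily dose, mg/kg/day

hazard_quotient = ADD / RFD   # non-cancer risk: ~7, vs. the reported 6.98
cancer_risk = LADD * CSF      # 1.26e-3, matching the reported estimate
```

    Both quantities exceed their benchmarks (HQ > 1 and risk > 1e-4), which is the paper's conclusion.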

  14. Deterministic phase slips in mesoscopic superconducting rings

    DOE PAGES

    Petković, Ivana; Lollo, A.; Glazman, L. I.; ...

    2016-11-24

    The properties of one-dimensional superconductors are strongly influenced by topological fluctuations of the order parameter, known as phase slips, which cause the decay of persistent current in superconducting rings and the appearance of resistance in superconducting wires. Despite extensive work, quantitative studies of phase slips have been limited by uncertainty regarding the order parameter’s free-energy landscape. Here we show detailed agreement between measurements of the persistent current in isolated flux-biased rings and Ginzburg–Landau theory over a wide range of temperature, magnetic field and ring size; this agreement provides a quantitative picture of the free-energy landscape. Furthermore, we demonstrate that phase slips occur deterministically as the barrier separating two competing order parameter configurations vanishes. These results will enable studies of quantum and thermal phase slips in a well-characterized system and will provide access to outstanding questions regarding the nature of one-dimensional superconductivity.

  15. Data dependent systems approach to modal analysis Part 1: Theory

    NASA Astrophysics Data System (ADS)

    Pandit, S. M.; Mehta, N. P.

    1988-05-01

    The concept of Data Dependent Systems (DDS) and its applicability in the context of modal vibration analysis is presented. The ability of the DDS difference equation models to provide a complete representation of a linear dynamic system from its sampled response data forms the basis of the approach. The models are decomposed into deterministic and stochastic components so that system characteristics are isolated from noise effects. The modelling strategy is outlined, and the method of analysis associated with modal parameter identification is described in detail. Advantages and special features of the DDS methodology are discussed. Since the correlated noise is appropriately and automatically modelled by the DDS, the modal parameters are shown to be estimated very accurately and hence no preprocessing of the data is needed. Complex mode shapes and non-classical damping are as easily analyzed as the classical normal mode analysis. These features are illustrated by using simulated data in this Part I and real data on a disc-brake rotor in Part II.

  16. Learning to integrate reactivity and deliberation in uncertain planning and scheduling problems

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Gervasio, Melinda T.; Dejong, Gerald F.

    1992-01-01

    This paper describes an approach to planning and scheduling in uncertain domains. In this approach, a system divides a task, on a goal-by-goal basis, into reactive and deliberative components. Initially, a task is handled entirely reactively. When failures occur, the system changes the reactive/deliberative goal division by moving goals into the deliberative component. Because our approach attempts to minimize the number of deliberative goals, we call it Minimal Deliberation (MD). Because MD allows goals to be treated reactively, it gains some of the advantages of reactive systems: computational efficiency, the ability to deal with noise and non-deterministic effects, and the ability to take advantage of unforeseen opportunities. However, because MD can fall back upon deliberation, it can also provide some of the guarantees of classical planning, such as the ability to deal with complex goal interactions. This paper describes the Minimal Deliberation approach to integrating reactivity and deliberation and an ongoing application of the approach to an uncertain planning and scheduling domain.

  17. Monte Carlo Simulations of the Formation Flying Dynamics for the Magnetospheric Multiscale (MMS) Mission

    NASA Technical Reports Server (NTRS)

    Schiff, Conrad; Dove, Edwin

    2011-01-01

    The MMS mission is an ambitious space physics mission that will fly 4 spacecraft in a tetrahedron formation in a series of highly elliptical orbits in order to study magnetic reconnection in the Earth's magnetosphere. The mission design comprises a combination of deterministic orbit-adjust and random maintenance maneuvers distributed over the 2.5-year mission life. Formal verification of the requirements is achieved by analysis through the use of the End-to-End (ETE) code, which is a modular simulation of the maneuver operations over the entire mission duration. Error models for navigation accuracy (knowledge) and maneuver execution (control) are incorporated to realistically simulate the maneuver scenarios that might be realized. These error models, coupled with the complex formation flying physics, lead to non-trivial effects that must be taken into account by the ETE automation. Using the ETE code, the MMS Flight Dynamics team was able to demonstrate that the current mission design satisfies the mission requirements.

  18. A Comparison of Techniques for Scheduling Earth-Observing Satellites

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2004-01-01

    Scheduling observations by coordinated fleets of Earth Observing Satellites (EOS) involves large search spaces, complex constraints and poorly understood bottlenecks, conditions where evolutionary and related algorithms are often effective. However, there are many such algorithms and the best one to use is not clear. Here we compare multiple variants of the genetic algorithm as well as stochastic hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling on ten realistically-sized EOS scheduling problems. Schedules are represented by a permutation (non-temporal ordering) of the observation requests. A simple deterministic scheduler assigns times and resources to each observation request in the order indicated by the permutation, discarding those that violate the constraints created by previously scheduled observations. Simulated annealing performs best. Random mutation outperforms a more 'intelligent' mutator. Furthermore, the best mutator, by a small margin, was a novel approach we call temperature-dependent random sampling, which makes large changes in the early stages of evolution and smaller changes towards the end of search.
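
    The permutation representation can be sketched as a greedy decoder: the search algorithm mutates only the permutation, while a deterministic pass builds the schedule and discards conflicting requests. A toy sketch with a hypothetical pairwise-conflict model (the real constraints involve observation times and satellite resources):

```python
import random

def greedy_schedule(permutation, conflicts):
    """Deterministic decoder: walk the permutation in order, accepting each
    observation request unless it conflicts with one already scheduled.
    Conflicts are unordered pairs of request ids."""
    scheduled = []
    for i in permutation:
        if all((i, j) not in conflicts and (j, i) not in conflicts for j in scheduled):
            scheduled.append(i)
    return scheduled

# Hypothetical toy instance: 6 requests, three mutually exclusive pairs
# (standing in for overlapping time/instrument constraints).
conflicts = {(0, 1), (2, 3), (1, 4)}

# The search algorithms (GA variants, simulated annealing, ...) mutate only
# the permutation; fitness comes from the schedule the decoder builds.
random.seed(0)
candidates = [random.sample(range(6), 6) for _ in range(100)]
best = max((greedy_schedule(p, conflicts) for p in candidates), key=len)
```

    Because the decoder is deterministic, two permutations that decode to the same schedule have identical fitness, which is why mutation operators act on the permutation rather than on the schedule itself.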

  19. Multi-segmental postural coordination in professional ballet dancers.

    PubMed

    Kiefer, Adam W; Riley, Michael A; Shockley, Kevin; Sitton, Candace A; Hewett, Timothy E; Cummins-Sebree, Sarah; Haas, Jacqui G

    2011-05-01

    Ballet dancers have heightened balance skills, but previous studies that compared dancers to non-dancers have not quantified patterns of multi-joint postural coordination. This study utilized a visual tracking task that required professional ballet dancers and untrained control participants to sway with the fore-aft motion of a target while standing on one leg, at target frequencies of 0.2 and 0.6Hz. The mean and variability of relative phase between the ankle and hip, and measures from cross-recurrence quantification analysis (i.e., percent cross-recurrence, percent cross-determinism, and cross-maxline), indexed the coordination patterns and their stability. Dancers exhibited less variable ankle-hip coordination and a less deterministic ankle-hip coupling, compared to controls. The results indicate that ballet dancers have increased coordination stability, potentially achieved through enhanced neuromuscular control and/or perceptual sensitivity, and indicate proficiency at optimizing the constraints that enable dancers to perform complex balance tasks.

  20. Asymmetrical Deterministic Lateral Displacement Gaps for Dual Functions of Enhanced Separation and Throughput of Red Blood Cells

    PubMed Central

    Zeming, Kerwin Kwek; Salafi, Thoriq; Chen, Chia-Hung; Zhang, Yong

    2016-01-01

    The deterministic lateral displacement (DLD) method has been extensively used for particle separation in microfluidic devices in recent years owing to its high resolution and robust separation. DLD has shown versatility across a wide spectrum of sorting applications, from micro-particles such as parasites and blood cells to bacteria and DNA. However, the DLD model is designed for spherical particles, so efficient separation of blood cells is challenging due to their non-uniform shape and size. Moreover, separation in the sub-micron regime requires the gap size of DLD systems to be reduced, which exponentially increases the device resistance and greatly reduces throughput. This paper shows how the simple use of asymmetrical DLD gap sizes, obtained by changing the ratio of the lateral gap (GL) to the downstream gap (GD), enables efficient separation of RBCs without greatly restricting throughput. This method reduces the need for challenging fabrication of DLD pillars and provides new insight into the current DLD model. The separation shows an increase in DLD critical-diameter resolution (separating smaller particles) and increased selectivity for non-spherical RBCs; the RBCs separate better than in a standard DLD model with symmetrical gap sizes. The method can be applied to separate non-spherical bacteria or sub-micron particles to enhance throughput and DLD resolution. PMID:26961061
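
    For context, the critical diameter of a conventional DLD array is commonly estimated with Davis' empirical correlation (not quoted in the abstract; quoted here as a standard result from the DLD literature), which makes the throughput trade-off explicit: shrinking the gap to lower the critical diameter sharply raises the hydraulic resistance.

```python
def dld_critical_diameter(gap_um, row_shift_fraction):
    """Davis' empirical correlation for the critical diameter of a DLD
    array: D_c = 1.4 * g * eps**0.48, where g is the post gap and
    eps = 1/N is the row-shift fraction. Particles larger than D_c are
    laterally displaced; smaller ones follow the flow (zigzag mode)."""
    return 1.4 * gap_um * row_shift_fraction ** 0.48

# Narrowing the gap lowers D_c proportionally, but the hydraulic resistance
# of the array grows steeply as the gap shrinks -- the throughput penalty
# that the asymmetric GL/GD design in this paper is meant to sidestep.
dc_wide = dld_critical_diameter(gap_um=10.0, row_shift_fraction=0.1)
dc_narrow = dld_critical_diameter(gap_um=4.0, row_shift_fraction=0.1)
```

    With a 10 um gap and a 10% row shift the correlation gives a critical diameter of roughly 4.6 um; asymmetric gaps change this relationship, which is part of the new insight the paper claims.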

  1. Selective Attention, Diffused Attention, and the Development of Categorization

    PubMed Central

    Deng, Wei (Sophia); Sloutsky, Vladimir M.

    2016-01-01

    How do people learn categories and what changes with development? The current study attempts to address these questions by focusing on the role of attention in the development of categorization. In Experiment 1, participants (adults, 7-year-olds, and 4-year-olds) were trained with novel categories consisting of deterministic and probabilistic features, and their categorization and memory for features were tested. In Experiment 2, participants’ attention was directed to the deterministic feature, and in Experiment 3 it was directed to the probabilistic features. Attentional cuing affected categorization and memory in adults and 7-year-olds: these participants relied on the cued features in their categorization and exhibited better memory of cued than of non-cued features. In contrast, in 4-year-olds attentional cueing affected only categorization, but not memory: these participants exhibited equally good memory for both cued and non-cued features. Furthermore, across the experiments, 4-year-olds remembered non-cued features better than adults. These results coupled with computational simulations provide novel evidence (1) pointing to differences in category representation and mechanisms of categorization across development, (2) elucidating the role of attention in the development of categorization, and (3) suggesting an important distinction between representation and decision factors in categorization early in development. These issues are discussed with respect to theories of categorization and its development. PMID:27721103

  2. Asymmetrical Deterministic Lateral Displacement Gaps for Dual Functions of Enhanced Separation and Throughput of Red Blood Cells.

    PubMed

    Zeming, Kerwin Kwek; Salafi, Thoriq; Chen, Chia-Hung; Zhang, Yong

    2016-03-10

    The deterministic lateral displacement (DLD) method has been extensively used for particle separation in microfluidic devices in recent years owing to its high resolution and robust separation. DLD has shown versatility across a wide spectrum of sorting applications, from micro-particles such as parasites and blood cells to bacteria and DNA. However, the DLD model is designed for spherical particles, so efficient separation of blood cells is challenging due to their non-uniform shape and size. Moreover, separation in the sub-micron regime requires the gap size of DLD systems to be reduced, which exponentially increases the device resistance and greatly reduces throughput. This paper shows how the simple use of asymmetrical DLD gap sizes, obtained by changing the ratio of the lateral gap (GL) to the downstream gap (GD), enables efficient separation of RBCs without greatly restricting throughput. This method reduces the need for challenging fabrication of DLD pillars and provides new insight into the current DLD model. The separation shows an increase in DLD critical-diameter resolution (separating smaller particles) and increased selectivity for non-spherical RBCs; the RBCs separate better than in a standard DLD model with symmetrical gap sizes. The method can be applied to separate non-spherical bacteria or sub-micron particles to enhance throughput and DLD resolution.

  3. Coupling hydrologic and hydraulic models to take into consideration retention effects on extreme peak discharges in Switzerland

    NASA Astrophysics Data System (ADS)

    Felder, Guido; Zischg, Andreas; Weingartner, Rolf

    2015-04-01

    Estimating peak discharges with very low probabilities is still accompanied by large uncertainties. Common estimation methods are usually based on extreme value statistics applied to observed time series or to hydrological model outputs. However, such methods assume the system to be stationary and do not specifically consider non-stationary effects. Observed time series may exclude events where peak discharge is damped by retention effects, as this process does not occur until specific thresholds, possibly beyond those of the highest measured event, are exceeded. Hydrological models can be complemented and parameterized with non-linear functions. However, in such cases calibration depends on observed data and non-stationary behaviour is not deterministically calculated. Our study discusses the option of considering retention effects on extreme peak discharges by coupling hydrological and hydraulic models. This possibility is tested by forcing the semi-distributed deterministic hydrological model PREVAH with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). The procedure ensures that the estimated extreme peak discharge does not exceed the physical limit given by the riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered.

  4. Enumeration and extension of non-equivalent deterministic update schedules in Boolean networks.

    PubMed

    Palma, Eduardo; Salinas, Lilian; Aracena, Julio

    2016-03-01

    Boolean networks (BNs) are commonly used to model genetic regulatory networks (GRNs). Due to the sensitivity of the dynamical behavior to changes in the updating scheme (the order in which the nodes of a network update their state values), it is increasingly common to use different updating rules when modeling GRNs, to better capture an observed biological phenomenon and thus obtain more realistic models. In Aracena et al., equivalence classes of deterministic update schedules in BNs that yield exactly the same dynamical behavior of the network were defined according to a certain label function on the arcs of the interaction digraph defined for each scheme. The interaction digraphs so labeled (update digraphs) thus encode the non-equivalent schemes. We address the problem of enumerating all non-equivalent deterministic update schedules of a given BN. First, we show that it is an intractable problem in general. To solve it, we construct an algorithm that determines the set of update digraphs of a BN, using a divide-and-conquer methodology based on the structural characteristics of the interaction digraph. Next, for each update digraph we determine an associated scheme. The algorithm also works when there is only partial knowledge about the relative order in which the states of the nodes are updated. We exhibit examples of how the algorithm works on GRNs published in the literature. An executable file of the UpdateLabel algorithm, made in Java, and files with the outputs of the algorithms used with the GRNs are available at: www.inf.udec.cl/ ∼lilian/UDE/ Supplementary data are available at Bioinformatics online.
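
    The objects being enumerated are deterministic block-sequential update schedules: an ordered partition of the nodes, where blocks update in sequence and nodes within a block update synchronously. A toy sketch showing that two schedules of the same BN can produce different trajectories, i.e. belong to different equivalence classes (the three-node rules are hypothetical, not from a published GRN):

```python
def step(state, schedule, rules):
    """One full update round of a Boolean network under a deterministic
    block-sequential schedule: blocks update in order, and nodes inside a
    block update synchronously against the current (partially updated) state."""
    state = list(state)
    for block in schedule:
        new_values = {i: rules[i](state) for i in block}
        for i, v in new_values.items():
            state[i] = v
    return tuple(state)

# Toy 3-node network with hypothetical rules (not from a published GRN).
rules = [
    lambda s: s[1],           # node 0 copies node 1
    lambda s: s[0] and s[2],  # node 1 = node 0 AND node 2
    lambda s: not s[0],       # node 2 = NOT node 0
]

parallel = [[0, 1, 2]]        # fully synchronous schedule
sequential = [[0], [1], [2]]  # fully sequential schedule

traj_par = traj_seq = (True, True, True)
for _ in range(3):
    traj_par = step(traj_par, parallel, rules)
    traj_seq = step(traj_seq, sequential, rules)
# After three rounds the two trajectories differ: the schedules are
# non-equivalent for this network.
```

    The paper's contribution is to enumerate such non-equivalent schedules without brute force, via the labeled update digraphs; the sketch above only illustrates why the equivalence classes matter.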

  5. Non-extensivity and complexity in the earthquake activity at the West Corinth rift (Greece)

    NASA Astrophysics Data System (ADS)

    Michas, Georgios; Vallianatos, Filippos; Sammonds, Peter

    2013-04-01

    Earthquakes exhibit a complex phenomenology that is revealed by their fractal structure in space, time and magnitude. For that reason, tools other than simple Poissonian statistics seem more appropriate to describe the statistical properties of the phenomenon. Here we use Non-Extensive Statistical Physics (NESP) to investigate the inter-event time distribution of the earthquake activity at the west Corinth rift (central Greece). This area is one of the most seismotectonically active areas in Europe, with important continental N-S extension and high seismicity rates. NESP is built on the non-additive Tsallis entropy Sq, which includes the Boltzmann-Gibbs entropy as a particular case. This concept has been successfully used for the analysis of a variety of complex dynamic systems, including earthquakes, where fractality and long-range interactions are important. The analysis indicates that the cumulative inter-event time distribution can be successfully described with NESP, implying the complexity that characterizes the temporal occurrence of earthquakes. Further, we use the Tsallis entropy (Sq) and the Fisher Information Measure (FIM) to investigate the complexity of the inter-event time distribution through different time windows along the evolution of the seismic activity at the west Corinth rift. The results of this analysis reveal different levels of organization and clustering of the seismic activity in time.
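
    For reference, the central quantities in the NESP approach are the non-additive Tsallis entropy and the q-exponential distribution usually fitted to inter-event times; the forms below are the standard ones from the NESP literature, not reproduced from the abstract:

```latex
% Tsallis entropy; recovers Boltzmann-Gibbs entropy in the limit q -> 1
S_q = k \,\frac{1 - \sum_i p_i^{\,q}}{q - 1},
\qquad
\lim_{q \to 1} S_q = -k \sum_i p_i \ln p_i
% q-exponential form commonly fitted to the cumulative inter-event time
% distribution, with tau_0 a characteristic time
P(>\tau) = \exp_q\!\left(-\tau/\tau_0\right),
\qquad
\exp_q(x) = \bigl[\,1 + (1 - q)\,x\,\bigr]^{1/(1-q)}
```

    Values of q above 1 obtained from such fits are the usual NESP signature of long-range temporal correlations in the seismicity.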

  6. Bounds on the sample complexity for private learning and private data release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kasiviswanathan, Shiva; Beimel, Amos; Nissim, Kobbi

    2009-01-01

    Learning is a task that generalizes many of the analyses that are applied to collections of data, and in particular, collections of sensitive individual information. Hence, it is natural to ask what can be learned while preserving individual privacy. [Kasiviswanathan, Lee, Nissim, Raskhodnikova, and Smith; FOCS 2008] initiated such a discussion. They formalized the notion of private learning, as a combination of PAC learning and differential privacy, and investigated what concept classes can be learned privately. Somewhat surprisingly, they showed that, ignoring time complexity, every PAC learning task could be performed privately with polynomially many samples, and in many natural cases this could even be done in polynomial time. While these results seem to equate non-private and private learning, there is still a significant gap: the sample complexity of (non-private) PAC learning is crisply characterized in terms of the VC-dimension of the concept class, whereas this relationship is lost in the constructions of private learners, which exhibit, generally, a higher sample complexity. Looking into this gap, we examine several private learning tasks and give tight bounds on their sample complexity. In particular, we show strong separations between sample complexities of proper and improper private learners (such separation does not exist for non-private learners), and between sample complexities of efficient and inefficient proper private learners. Our results show that VC-dimension is not the right measure for characterizing the sample complexity of proper private learning. We also examine the task of private data release (as initiated by [Blum, Ligett, and Roth; STOC 2008]), and give new lower bounds on the sample complexity. Our results show that the logarithmic dependence on size of the instance space is essential for private data release.

  7. Comparison of numerical weather prediction based deterministic and probabilistic wind resource assessment methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Draxl, Caroline; Hopson, Thomas

    Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low resolution NWP model, in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on the MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. 
Future efforts could focus on the combination of the analog ensemble with intermediate resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden, while providing accurate deterministic estimates and reliable probabilistic assessments.
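
    The analog ensemble of methodology (ii) can be sketched compactly. The toy below (synthetic data and function names of my own choosing; real implementations use multivariate similarity metrics over several weather variables) pairs each new coarse-model forecast with the observations that accompanied its k most similar historical forecasts, which both corrects systematic model bias and yields a probabilistic spread:

```python
import numpy as np

def analog_ensemble(hist_fcst, hist_obs, new_fcst, k=20):
    # For each new coarse-model forecast, find the k most similar historical
    # forecasts (the "analogs") and return their paired observations.
    ens = []
    for f in new_fcst:
        idx = np.argsort(np.abs(hist_fcst - f))[:k]
        ens.append(hist_obs[idx])
    return np.array(ens)                       # shape: (n_forecasts, k)

rng = np.random.default_rng(1)
truth = rng.weibull(2.0, 5000) * 8.0           # synthetic "observed" wind speed
hist_fcst = truth + 2.0 + rng.normal(0.0, 0.5, truth.size)   # biased coarse model
new_truth = rng.weibull(2.0, 200) * 8.0
new_fcst = new_truth + 2.0 + rng.normal(0.0, 0.5, new_truth.size)

ens = analog_ensemble(hist_fcst, truth, new_fcst)
deterministic = ens.mean(axis=1)               # ensemble mean corrects the bias
rmse_raw = np.sqrt(np.mean((new_fcst - new_truth) ** 2))
rmse_aen = np.sqrt(np.mean((deterministic - new_truth) ** 2))
```

    The ensemble members themselves (not just their mean) supply the probabilistic assessment, at essentially no computational cost beyond the archived coarse run.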

  8. Analytical results for the statistical distribution related to a memoryless deterministic walk: dimensionality effect and mean-field models.

    PubMed

    Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto

    2005-08-01

    Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule to go to the nearest point which has not been visited in the preceding µ steps (deterministic tourist walk). Each trajectory generated by this dynamics has an initial nonperiodic part of t steps (transient) and a final periodic part of p steps (attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}[1/2, (d+1)/2]. The joint distribution S_N^(µ,d)(t,p) is relevant, and the marginal distributions previously studied are particular cases. We show that, for the memoryless deterministic tourist walk in Euclidean space, this distribution is S_∞^(1,d)(t,p) = [Γ(1 + I_d⁻¹)(t + I_d⁻¹)/Γ(t + p + I_d⁻¹)] δ_{p,2}, where t = 0, 1, 2, ..., ∞, Γ(z) is the gamma function and δ_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d → ∞, and the random map model which, even for µ = 0, presents a nontrivial cycle distribution [S_N^(0,rm)(p) ∝ p⁻¹]: S_N^(0,rm)(t,p) = Γ(N)/{Γ[N+1-(t+p)] N^(t+p)}. The fundamental quantities are the number of explored points n_e = t + p and I_d. Although the obtained distributions are simple, they do not follow straightforwardly and they have been validated by numerical experiments.
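
    The walk itself is easy to simulate. The sketch below (my own implementation of the rule as stated in the abstract) follows a single walker and reports the transient length t and attractor period p; for the µ = 1 case every trajectory ends in a 2-cycle between mutually nearest neighbours, matching the Kronecker delta that enforces p = 2 in the distribution above:

```python
import numpy as np

def tourist_walk(points, start, mu=1, max_steps=100_000):
    # Deterministic tourist walk: from the current point, jump to the nearest
    # point not visited in the preceding mu steps.  Returns the transient
    # length t and the attractor period p.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)            # neighbours ranked by distance
    history = [start]
    seen = {}
    for step in range(max_steps):
        state = tuple(history[-mu:])          # memory window makes the walk Markovian
        if state in seen:
            return seen[state], step - seen[state]   # (transient t, period p)
        seen[state] = step
        forbidden = set(history[-mu:])        # the mu most recently visited points
        nxt = next(i for i in order[history[-1]] if i not in forbidden)
        history.append(int(nxt))
    raise RuntimeError("no attractor found")

rng = np.random.default_rng(0)
pts = rng.random((200, 2))                    # N = 200 points in the unit square (d = 2)
t, p = tourist_walk(pts, start=0, mu=1)
```

    With µ = 1 the next point depends only on the current one, so the walk is iteration of the nearest-neighbour map; distances are non-increasing around any cycle of that map, which (with probability 1, no ties) forces p = 2.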

  9. Gd-Complexes of New Arylpiperazinyl Conjugates of DTPA-Bis(amides): Synthesis, Characterization and Magnetic Relaxation Properties.

    PubMed

    Ba-Salem, Abdullah O; Ullah, Nisar; Shaikh, M Nasiruzzaman; Faiz, Mohamed; Ul-Haq, Zaheer

    2015-04-29

    Two new DTPA-bis(amide) based ligands conjugated with the arylpiperazinyl moiety were synthesized and subsequently transformed into their corresponding Gd(III) complexes 1 and 2 of the type [Gd(L)H2O]·nH2O. The relaxivity (R1) of these complexes was measured, which turned out to be comparable with that of Omniscan®, a commercially available MRI contrast agent. The cytotoxicity studies of these complexes indicated that they are non-toxic, which reveals their potential and physiological suitability as MRI contrast agents. All the synthesized ligands and complexes were characterized with the aid of analytical and spectroscopic methods, including elemental analysis, 1H-NMR, FT-IR, XPS and fast atom bombardment (FAB) mass spectrometry.

  10. Synthesis, characterization and spectroscopic properties of the hydrazodye and new hydrazodye-metal complexes

    NASA Astrophysics Data System (ADS)

    Bouhdada, M.; EL Amane, M.

    2017-12-01

    The hydrazodye (L) [disodium (7-hydroxyl-8-phenylazo-1,3-naphthalenedisulfonate)] was prepared and characterized by infrared, 1H and 13C NMR, and UV-visible spectra. The spectral data show that the ligand (L) exists predominantly in the hydrazone form. Reaction of the hydrazodye (L) with Cd2+, Ni2+, Cu2+ and Zn2+ chlorides gave two series of metal complexes with the general formulae [ML2(OH2)2]2Cl and [ML2(caf)2]2Cl. The complexes were identified by FTIR, UV-visible, 1H and 13C NMR, EPR and XRD spectra and by molar conductance. The molar conductance reveals that the new series of metal complexes are non-electrolytes. An octahedral structure of the obtained complexes is proposed on the basis of the spectroscopic data analysis.

  11. FW/CADIS-O: An Angle-Informed Hybrid Method for Neutron Transport

    NASA Astrophysics Data System (ADS)

    Munk, Madicken

    The development of methods for deep-penetration radiation transport is of continued importance for radiation shielding, nonproliferation, nuclear threat reduction, and medical applications. As these applications become more ubiquitous, the need for transport methods that can accurately and reliably model the systems' behavior will persist. For these types of systems, hybrid methods are often the best choice to obtain a reliable answer in a short amount of time. Hybrid methods leverage the speed and uniform uncertainty distribution of a deterministic solution to bias Monte Carlo transport to reduce the variance in the solution. At present, the Consistent Adjoint-Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) hybrid methods are the gold standard by which to model systems that have deeply-penetrating radiation. They use an adjoint scalar flux to generate variance reduction parameters for Monte Carlo. However, in problems with strong anisotropy in the flux, CADIS and FW-CADIS are not as effective at reducing the problem variance as they are in isotropic problems. This dissertation covers the theoretical background, implementation, and characterization of a set of angle-informed hybrid methods that can be applied to strongly anisotropic deep-penetration radiation transport problems. These methods use a forward-weighted adjoint angular flux to generate variance reduction parameters for Monte Carlo. As a result, they leverage both adjoint and contributon theory for variance reduction. They have been named CADIS-O and FW-CADIS-O. To characterize CADIS-O, several characterization problems with flux anisotropies were devised. These problems contain different physical mechanisms by which flux anisotropy is induced. Additionally, a series of novel anisotropy metrics by which to quantify flux anisotropy are used to characterize the methods beyond standard Figure of Merit (FOM) and relative error metrics. 
As a result, a more thorough investigation into the effects of anisotropy and the degree of anisotropy on Monte Carlo convergence is possible. The results from the characterization of CADIS-O show that it performs best in strongly anisotropic problems that have preferential particle flowpaths, but only if the flowpaths are not composed of air. Further, the characterization of the method's sensitivity to deterministic angular discretization showed that CADIS-O is less sensitive to discretization than CADIS for both quadrature order and PN order. However, more variation in the results was observed in response to changing quadrature order than PN order. Further, as a result of the forward-normalization in the O-methods, ray-effect mitigation was observed in many of the characterization problems. The characterization of the CADIS-O method in this dissertation serves to outline a path forward for further hybrid methods development. In particular, the response of the O-method to changes in quadrature order and PN order, and its ray-effect mitigation, are strong indicators that the method is more resilient than its predecessors to strong anisotropies in the flux. With further method characterization, the full potential of the O-methods can be realized. The method can then be applied to geometrically complex, materially diverse problems and help to advance system modelling in deep-penetration radiation transport problems with strong anisotropies in the flux.

  12. Stochastic Plume Simulations for the Fukushima Accident and the Deepwater Horizon Oil Spill

    NASA Astrophysics Data System (ADS)

    Coelho, E.; Peggion, G.; Rowley, C.; Hogan, P.

    2012-04-01

    The Fukushima Dai-ichi power plant suffered damage leading to radioactive contamination of coastal waters. Major issues in characterizing the extent of the affected waters were a poor knowledge of the radiation released to the coastal waters and the rather complex coastal dynamics of the region, not deterministically captured by the available prediction systems. Similarly, during the Gulf of Mexico Deepwater Horizon oil platform accident in April 2010, significant amounts of oil and gas were released from the ocean floor. For this case, issues in mapping and predicting the extent of the affected waters in real-time were a poor knowledge of the actual amounts of oil reaching the surface and the fact that coastal dynamics over the region were not deterministically captured by the available prediction systems. To assess the ocean regions and times that were most likely affected by these accidents while capturing the above sources of uncertainty, ensembles of the Navy Coastal Ocean Model (NCOM) were configured over the two regions (NE Japan and Northern Gulf of Mexico). For the Fukushima case tracers were released on each ensemble member; their locations at each instant provided reference positions of water volumes where the signature of water released from the plant could be found. For the Deepwater Horizon oil spill case each ensemble member was coupled with a diffusion-advection solution to estimate possible scenarios of oil concentrations using perturbed estimates of the released amounts as the source terms at the surface. Stochastic plumes were then defined using a Risk Assessment Code (RAC) analysis that associates a number from 1 to 5 to each grid point, determined by the likelihood of having a tracer particle within short range (for the Fukushima case), hence defining the high risk areas and those recommended for monitoring. 
For the Oil Spill case the RAC codes were determined by the likelihood of reaching oil concentrations as defined in the Bonn Agreement Oil Appearance Code. The likelihoods were taken in both cases from probability distribution functions derived from the ensemble runs. Results were compared with a control-deterministic solution and checked against available reports to assess their skill in capturing the actual observed plumes and other in-situ data, as well as their relevance for planning surveys and reconnaissance flights for both cases.

  13. Deterministic Tectonic Origin Tsunami Hazard Analysis for the Eastern Mediterranean and its Connected Seas

    NASA Astrophysics Data System (ADS)

    Necmioglu, O.; Meral Ozel, N.

    2014-12-01

    Accurate earthquake source parameters are essential for any tsunami hazard assessment and mitigation, including early warning systems. Complex tectonic setting makes the a priori accurate assumptions of earthquake source parameters difficult and characterization of the faulting type is a challenge. Information on tsunamigenic sources is of crucial importance in the Eastern Mediterranean and its Connected Seas, especially considering the short arrival times and lack of offshore sea-level measurements. In addition, the scientific community have had to abandon the paradigm of a ''maximum earthquake'' predictable from simple tectonic parameters (Ruff and Kanamori, 1980) in the wake of the 2004 Sumatra event (Okal, 2010) and one of the lessons learnt from the 2011 Tohoku event was that tsunami hazard maps may need to be prepared for infrequent gigantic earthquakes as well as more frequent smaller-sized earthquakes (Satake, 2011). We have initiated an extensive modeling study to perform a deterministic Tsunami Hazard Analysis for the Eastern Mediterranean and its Connected Seas. Characteristic earthquake source parameters (strike, dip, rake, depth, Mwmax) at each 0.5° x 0.5° size bin for 0-40 km depth (total of 310 bins) and for 40-100 km depth (total of 92 bins) in the Eastern Mediterranean, Aegean and Black Sea region (30°N-48°N and 22°E-44°E) have been assigned from the harmonization of the available databases and previous studies. These parameters have been used as input parameters for the deterministic tsunami hazard modeling. Nested Tsunami simulations of 6h duration with a coarse (2 arc-min) and medium (1 arc-min) grid resolution have been simulated at EC-JRC premises for Black Sea and Eastern and Central Mediterranean (30°N-41.5°N and 8°E-37°E) for each source defined using shallow water finite-difference SWAN code (Mader, 2004) for the magnitude range of 6.5 - Mwmax defined for that bin with a Mw increment of 0.1. 
Results show that not only earthquakes resembling the well-known historical earthquakes, such as AD 365 or AD 1303 in the Hellenic Arc, but also earthquakes with lower magnitudes contribute to the tsunami hazard in the study area.

  14. Feasibility for direct rapid energy dispersive X-ray fluorescence (EDXRF) and scattering analysis of complex matrix liquids by partial least squares.

    PubMed

    Angeyo, K H; Gari, S; Mustapha, A O; Mangala, J M

    2012-11-01

    The greatest challenge to material characterization by the XRF technique is encountered in direct trace analysis of complex matrices. We exploited partial least squares (PLS) in conjunction with energy dispersive X-ray fluorescence and scattering (EDXRFS) spectrometry to rapidly (200 s) analyze lubricating oils. The PLS-EDXRFS method affords non-invasive quality assurance (QA) analysis of complex matrix liquids, as it gave promising results for both heavy- and low-Z metal additives. Scatter peaks may further be used for QA characterization via the light elements. Copyright © 2012 Elsevier Ltd. All rights reserved.
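
    The PLS step admits a compact sketch. The code below (a minimal NIPALS-style PLS1 on synthetic spectra; peak shapes, channel counts, and names are illustrative, not the authors' calibration) extracts two latent variables that covary with the analyte concentration and regresses on them, despite an overlapping matrix-scatter band:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    # Minimal PLS1 (NIPALS): extract latent variables that maximize covariance
    # with the response, deflate, then form the regression coefficients.
    Xr = X - X.mean(axis=0)
    yr = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)                 # weight vector
        t = Xr @ w                             # scores
        tt = t @ t
        p = Xr.T @ t / tt                      # X loadings
        q = (yr @ t) / tt                      # y loading
        Xr = Xr - np.outer(t, p)               # deflation
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.inv(P.T @ W) @ Q      # coefficients for centred data

rng = np.random.default_rng(0)
chan = np.arange(200)
peak = np.exp(-((chan - 60) / 5.0) ** 2)       # additive's fluorescence line
band = np.exp(-((chan - 150) / 30.0) ** 2)     # broad matrix-scatter band
conc = rng.uniform(0.0, 1.0, 80)               # concentrations to recover
matrix = rng.uniform(0.0, 1.0, 80)             # interfering matrix signal
X = np.outer(conc, peak) + np.outer(matrix, band) + rng.normal(0, 0.01, (80, 200))
B = pls1_fit(X, conc, n_components=2)
pred = (X - X.mean(axis=0)) @ B + conc.mean()
```

    Because PLS chooses directions by covariance with the response, it ignores the scatter band instead of being confounded by it, which is what makes direct analysis of complex-matrix spectra feasible.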

  15. Intrication temporelle et communication quantique (Time-Bin Entanglement and Quantum Communication)

    NASA Astrophysics Data System (ADS)

    Bussieres, Felix

    Quantum communication is the art of transferring a quantum state from one place to another and the study of tasks that can be accomplished with it. This thesis is devoted to the development of tools and tasks for quantum communication in a real-world setting. These were implemented using an underground optical fibre link deployed in an urban environment. The technological and theoretical innovations presented here broaden the range of applications of time-bin entanglement through new methods of manipulating time-bin qubits, a novel model for characterizing sources of photon pairs, new ways of testing non-locality and the design and the first implementation of a new loss-tolerant quantum coin-flipping protocol. Manipulating time-bin qubits. A single photon is an excellent vehicle in which a qubit, the fundamental unit of quantum information, can be encoded. In particular, the time-bin encoding of photonic qubits is well suited for optical fibre transmission. Before this thesis, the applications of quantum communication based on the time-bin encoding were limited due to the lack of methods to implement arbitrary operations and measurements. We have removed this restriction by proposing the first methods to realize arbitrary deterministic operations on time-bin qubits as well as single qubit measurements in an arbitrary basis. We applied these propositions to the specific case of optical measurement-based quantum computing and showed how to implement the feedforward operations, which are essential to this model. This therefore opens new possibilities for creating an optical quantum computer, but also for other quantum communication tasks. Characterizing sources of photon pairs. Experimental quantum communication requires the creation of single photons and entangled photons. These two ingredients can be obtained from a source of photon pairs based on non-linear spontaneous processes. 
Several tasks in quantum communication require a precise knowledge of the properties of the source being used. We developed and implemented a fast and simple method to characterize a source of photon pairs. This method is well suited for a realistic setting where experimental conditions, such as channel transmittance, may fluctuate, and for which the characterization of the source has to be done in real time. Testing the non-locality of time-bin entanglement. Entanglement is a resource needed for the realization of many important tasks in quantum communication. It also allows two physical systems to be correlated in a way that cannot be explained by classical physics; this manifestation of entanglement is called non-locality. We built a source of time-bin entangled photonic qubits and characterized it with the new methods implementing arbitrary single qubit measurements that we developed. This allowed us to reveal the non-local nature of our source of entanglement in ways that were never implemented before. It also opens the door to studying previously untested features of non-locality using this source. These experiments were performed in a realistic setting where quantum (non-local) correlations were observed even after transmission of one of the entangled qubits over 12.4 km of an underground optical fibre. Flipping quantum coins. Quantum coin-flipping is a quantum cryptographic primitive proposed in 1984, when the very first steps of quantum communication were being taken, in which two players alternate in sending classical and quantum information in order to generate a shared random bit. The use of quantum information is such that a potential cheater cannot force the outcome to his choice with certainty. Classically, however, one of the players can always deterministically choose the outcome. 
Unfortunately, the security of all previous quantum coin-flipping protocols is seriously compromised in the presence of losses on the transmission channel, thereby making this task impractical. We found a solution to this problem and obtained the first loss-tolerant quantum coin-flipping protocol whose security is independent of the amount of the losses. We have also experimentally demonstrated our loss-tolerant protocol using our source of time-bin entanglement combined with our arbitrary single qubit measurement methods. This experiment took place in a realistic setting where qubits travelled over an underground optical fibre link. This new task thus joins quantum key distribution as a practical application of quantum communication. Keywords. quantum communication, photonics, time-bin encoding, source of photon pairs, heralded single photon source, entanglement, non-locality, time-bin entanglement, hybrid entanglement, quantum network, quantum cryptography, quantum coin-flipping, measurement-based quantum computation, telecommunication, optical fibre, nonlinear optics.

  16. Heteroleptic Palladium(II) dithiocarbamates: Synthesis, characterization and in vitro biological screening

    NASA Astrophysics Data System (ADS)

    Khan, Shahan Zeb; Zia-ur-Rehman; Amir, Muhammad Kashif; Ullah, Imdad; Akhter, M. S.; Bélanger-Gariepy, Francine

    2018-03-01

    Two new heteroleptic Pd(II) complexes of sodium 4-(2-pyrimidyl)piperazine-1-carbodithioate with tris-p-fluorophenylphosphine (1) and tris-p-chlorophenylphosphine (2) were prepared and characterized by elemental analysis, FT-IR, multinuclear NMR {1H, 13C and 31P} and single-crystal X-ray diffraction measurements. In both complexes, Pd exhibits a pseudo-square-planar geometry mediated by the S,S chelate, P and Cl. In vitro cytotoxicity against five different cancer cell lines using staurosporine as a standard revealed 1 to be more cytotoxic than 2, though both complexes are more active than cisplatin. Subsequent DNA binding studies revealed that non-covalent complex-DNA interaction may be the reason for arresting cancer cell growth. Furthermore, 1 and 2 are potent antioxidant agents.

  17. Voltage control of magnetic single domains in Ni discs on ferroelectric BaTiO3

    NASA Astrophysics Data System (ADS)

    Ghidini, M.; Zhu, B.; Mansell, R.; Pellicelli, R.; Lesaine, A.; Moya, X.; Crossley, S.; Nair, B.; Maccherozzi, F.; Barnes, C. H. W.; Cowburn, R. P.; Dhesi, S. S.; Mathur, N. D.

    2018-06-01

    For 1 µm-diameter Ni discs on a BaTiO3 substrate, the local magnetization direction is determined by ferroelectric domain orientation as a consequence of growth strain, such that single-domain discs lie on single ferroelectric domains. On applying a voltage across the substrate, ferroelectric domain switching yields non-volatile magnetization rotations of 90°, while small, continuous piezoelectric effects yield non-volatile magnetization reversals that are non-deterministic. This demonstration of magnetization reversal without ferroelectric domain switching implies reduced fatigue, and therefore represents a step towards applications.

  18. Distinguishing humans from computers in the game of go: A complex network approach

    NASA Astrophysics Data System (ADS)

    Coquidé, C.; Georgeot, B.; Giraud, O.

    2017-08-01

    We compare complex networks built from the game of go and obtained from databases of human-played games with those obtained from computer-played games. Our investigations show that statistical features of the human-based networks and the computer-based networks differ, and that these differences can be statistically significant on a relatively small number of games using specific estimators. We show that the deterministic or stochastic nature of the computer algorithm playing the game can also be distinguished from these quantities. This can be seen as a tool to implement a Turing-like test for go simulators.
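
    A minimal version of such a statistical comparison (a toy of my own devising, not the move-pattern networks of the paper) treats moves as nodes and successive moves as weighted directed edges; the branching entropy of the resulting network separates a deterministic policy from a stochastic one:

```python
import math
import random
from collections import defaultdict

def transition_network(games):
    # Weighted directed network: node = move, edge a -> b counted
    # each time move b follows move a in a game record.
    edges = defaultdict(lambda: defaultdict(int))
    for game in games:
        for a, b in zip(game, game[1:]):
            edges[a][b] += 1
    return edges

def mean_branching_entropy(edges):
    # Average Shannon entropy of each node's outgoing-edge distribution.
    # A deterministic player answers a position the same way every time
    # (entropy 0); a stochastic player spreads weight over successors.
    hs = []
    for outs in edges.values():
        total = sum(outs.values())
        hs.append(-sum(c / total * math.log2(c / total) for c in outs.values()))
    return sum(hs) / len(hs)

rng = random.Random(0)
det_games = ["abcabcabc"] * 20                      # deterministic policy
sto_games = ["".join(rng.choice("abc") for _ in range(50)) for _ in range(20)]
h_det = mean_branching_entropy(transition_network(det_games))
h_sto = mean_branching_entropy(transition_network(sto_games))
```

    On real game records the discriminating statistics are richer, but the principle is the same: the network built from play encodes whether the generating process was deterministic or stochastic.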

  19. A Scalable Computational Framework for Establishing Long-Term Behavior of Stochastic Reaction Networks

    PubMed Central

    Khammash, Mustafa

    2014-01-01

    Reaction networks are systems in which the populations of a finite number of species evolve through predefined interactions. Such networks are found as modeling tools in many biological disciplines such as biochemistry, ecology, epidemiology, immunology, systems biology and synthetic biology. It is now well-established that, for small population sizes, stochastic models for biochemical reaction networks are necessary to capture randomness in the interactions. The tools for analyzing such models, however, still lag far behind their deterministic counterparts. In this paper, we bridge this gap by developing a constructive framework for examining the long-term behavior and stability properties of the reaction dynamics in a stochastic setting. In particular, we address the problems of determining ergodicity of the reaction dynamics, which is analogous to having a globally attracting fixed point for deterministic dynamics. We also examine when the statistical moments of the underlying process remain bounded with time and when they converge to their steady state values. The framework we develop relies on a blend of ideas from probability theory, linear algebra and optimization theory. We demonstrate that the stability properties of a wide class of biological networks can be assessed from our sufficient theoretical conditions that can be recast as efficient and scalable linear programs, well-known for their tractability. It is notably shown that the computational complexity is often linear in the number of species. We illustrate the validity, the efficiency and the wide applicability of our results on several reaction networks arising in biochemistry, systems biology, epidemiology and ecology. The biological implications of the results as well as an example of a non-ergodic biological network are also discussed. PMID:24968191
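
    As a concrete toy instance, the birth-death network 0 → X (rate k1), X → 0 (propensity k2·x) is ergodic with stationary mean k1/k2. The Gillespie simulation below (a generic sketch for intuition, not the paper's linear-programming certificates) shows the time-averaged copy number settling at that value:

```python
import math
import random

def ssa_birth_death(k1, k2, t_end, seed=0):
    # Gillespie simulation of 0 -> X (rate k1) and X -> 0 (propensity k2*x).
    # Returns the time-weighted mean copy number after a short burn-in.
    rng = random.Random(seed)
    t, x = 0.0, 0
    burn_in = 0.05 * t_end
    weighted = span = 0.0
    while t < t_end:
        a1, a2 = k1, k2 * x
        a0 = a1 + a2
        dt = -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        if t >= burn_in:
            w = min(dt, t_end - t)
            weighted += x * w
            span += w
        t += dt
        x += 1 if rng.random() * a0 < a1 else -1  # pick the next reaction
    return weighted / span

mean_x = ssa_birth_death(k1=10.0, k2=1.0, t_end=2000.0)   # ergodic: mean -> k1/k2
```

    The value of the paper's framework is that ergodicity and moment convergence like this can be certified by solving small linear programs, without ever running (or being able to run) such simulations to stationarity.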

  20. A Robust Scalable Transportation System Concept

    NASA Technical Reports Server (NTRS)

    Hahn, Andrew; DeLaurentis, Daniel

    2006-01-01

    This report documents the 2005 Revolutionary System Concept for Aeronautics (RSCA) study entitled "A Robust, Scalable Transportation System Concept". The objective of the study was to generate, at a high level of abstraction, characteristics of a new concept for the National Airspace System, or the new NAS, under which transportation goals such as increased throughput, delay reduction, and improved robustness could be realized. Since such an objective can be overwhelmingly complex if pursued at the lowest levels of detail, a System-of-Systems (SoS) approach was instead adopted to model alternative air transportation architectures at a high level. The SoS approach allows the consideration of not only the technical aspects of the NAS, but also incorporates policy, socio-economic, and alternative transportation system considerations into one architecture. While the representations of the individual systems are basic, the higher level approach allows for ways to optimize the SoS at the network level, determining the best topology (i.e. configuration of nodes and links). The final product (concept) is a set of rules of behavior and network structure that not only satisfies national transportation goals, but represents the high impact rules that accomplish those goals by getting the agents to "do the right thing" naturally. The novel combination of Agent Based Modeling and Network Theory provides the core analysis methodology in the System-of-Systems approach. Our method of approach is non-deterministic, which means, fundamentally, that it asks and answers different questions than deterministic models. The non-deterministic method is necessary primarily due to our marriage of human systems with technological ones in a partially unknown set of future worlds. Our goal is to understand and simulate how the SoS, human and technological components combined, evolves.

  1. Modeling the spreading of large-scale wildland fires

    Treesearch

    Mohamed Drissi

    2015-01-01

    The objective of the present study is twofold. First, the last developments and validation results of a hybrid model designed to simulate fire patterns in heterogeneous landscapes are presented. The model combines the features of a stochastic small-world network model with those of a deterministic semi-physical model of the interaction between burning and non-burning...

  2. Neural nets with terminal chaos for simulation of non-deterministic patterns

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1993-01-01

    Models for simulating some aspects of neural intelligence are presented and discussed. Special attention is given to terminal neurodynamics as a particular architecture of terminal dynamics suitable for modeling information flows. Applications of terminal chaos to information fusion as well as to planning and modeling coordination among neurons in biological systems are discussed.

  3. Towards a Non-Deterministic Reading of Pierre Bourdieu: Habitus and Educational Change in Urban Schools

    ERIC Educational Resources Information Center

    Barrett, Brian D.; Martina, Camille Anne

    2012-01-01

    Building on the social reproduction theory of Pierre Bourdieu, this study examines the impact of school context and institutional agency on shaping urban students' access to social and cultural capital resources, which are selectively valued and rewarded by the education system, in two schools across two high-poverty, intensely segregated urban…

  4. Reach for the Stars: A Constellational Approach to Ethnographies of Elite Schools

    ERIC Educational Resources Information Center

    Prosser, Howard

    2014-01-01

    This paper offers a method for examining elite schools in a global setting by appropriating Theodor Adorno's constellational approach. I contend that arranging ideas and themes in a non-deterministic fashion can illuminate the social reality of elite schools. Drawing on my own fieldwork at an elite school in Argentina, I suggest that local and…

  5. Evaluating the risk of death via the hematopoietic syndrome mode for prolonged exposure of nuclear workers to radiation delivered at very low rates.

    PubMed

    Scott, B R; Lyzlov, A F; Osovets, S V

    1998-05-01

    During a Phase-I effort, studies were planned to evaluate deterministic (nonstochastic) effects of chronic exposure of nuclear workers at the Mayak atomic complex in the former Soviet Union to relatively high levels (> 0.25 Gy) of ionizing radiation. The Mayak complex has been used, since the late 1940's, to produce plutonium for nuclear weapons. Workers at Site A of the complex were involved in plutonium breeding using nuclear reactors, and some were exposed to relatively large doses of gamma rays plus relatively small neutron doses. The Weibull normalized-dose model, which has been set up to evaluate the risk of specific deterministic effects of combined, continuous exposure of humans to alpha, beta, and gamma radiations, is here adapted for chronic exposure to gamma rays and neutrons during repeated 6-h work shifts--as occurred for some nuclear workers at Site A. Using the adapted model, key conclusions were reached that will facilitate a Phase-II study of deterministic effects among Mayak workers. 
These conclusions include the following: (1) neutron doses may be more important for Mayak workers than for Japanese A-bomb victims in Hiroshima and can be accounted for using an adjusted dose (which accounts for neutron relative biological effectiveness); (2) to account for dose-rate effects, the normalized dose X (a dimensionless fraction of an LD50 or ED50) can be evaluated in terms of an adjusted dose; (3) nonlinear dose-response curves for the risk of death via the hematopoietic mode can be converted to linear dose-response curves (for low levels of risk) using a newly proposed dimensionless dose, D = X^V, in units of Oklad (where D is pronounced "deh"), and V is the shape parameter in the Weibull model; (4) for X ≤ X0, where X0 is the threshold normalized dose, D = 0; (5) unlike absorbed dose, the dose D can be averaged over different Mayak workers in order to calculate the average risk of death via the hematopoietic mode for the population exposed at Site A; and (6) the expected cases of death via the hematopoietic syndrome mode for Mayak workers chronically exposed during work shifts at Site A to gamma rays and neutrons can be predicted using ln(2)·B·M[D], where B (pronounced "beh") is the number of workers at risk (criticality accident victims excluded) and M[D] is the average (mean) value of D (averaged over the worker population at risk, for Site A, for the time period considered). These results can be used to facilitate a Phase-II study of deterministic radiation effects among Mayak workers chronically exposed to gamma rays and neutrons.
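
    The quantities in conclusions (3)-(6) can be written out directly. In the sketch below, the shape parameter V and the threshold X0 are illustrative placeholders (the abstract does not give the calibrated Mayak values):

```python
import math

# Illustrative parameter values: V (Weibull shape) and X0 (threshold) are
# placeholders, not the calibrated Mayak values.
V_SHAPE = 5.0
X_THRESHOLD = 0.25

def dose_oklad(X, V=V_SHAPE, X0=X_THRESHOLD):
    # Dimensionless dose D = X^V (in Oklad) above the threshold; D = 0 otherwise.
    return 0.0 if X <= X0 else X ** V

def risk_of_death(X, V=V_SHAPE, X0=X_THRESHOLD):
    # Weibull risk R = 1 - 2^(-D): X = 1 (one LD50) gives R = 0.5, and for
    # small D, R ~ ln(2) * D, which is why D can be averaged over a cohort.
    return 1.0 - 2.0 ** (-dose_oklad(X, V, X0))

def expected_deaths(B, doses, V=V_SHAPE, X0=X_THRESHOLD):
    # Expected cases ~ ln(2) * B * M[D], M[D] the population-average dose.
    mean_D = sum(dose_oklad(X, V, X0) for X in doses) / len(doses)
    return math.log(2.0) * B * mean_D
```

    The linearization in D is what makes conclusion (5) work: risks that are nonlinear in absorbed dose become additive in D, so a single population average M[D] predicts the expected case count.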

  6. Development of a Probabilistic Component Mode Synthesis Method for the Analysis of Non-Deterministic Substructures

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1995-01-01

    Standard methods of structural dynamic analysis assume that the structural characteristics are deterministic. Recognizing that these characteristics are actually statistical in nature, researchers have recently developed a variety of methods that use this information to determine probabilities of a desired response characteristic, such as natural frequency, without using expensive Monte Carlo simulations. One of the problems in these methods is correctly identifying the statistical properties of primitive variables such as geometry, stiffness, and mass. This paper presents a method where the measured dynamic properties of substructures are used instead as the random variables. The residual flexibility method of component mode synthesis is combined with the probabilistic methods to determine the cumulative distribution function of the system eigenvalues. A simple cantilever beam test problem is presented that illustrates the theory.
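    As a point of reference for the probabilistic methods described above, the sketch below estimates the cumulative distribution function of the lowest eigenvalue of a toy 2-DOF spring-mass system by plain Monte Carlo, the expensive baseline that the paper's residual-flexibility approach is designed to avoid. The system and the stiffness statistics are invented for illustration.

```python
import random

def first_eigenvalue_2dof(k1, k2, m=1.0):
    # Smallest eigenvalue of the 2-DOF spring-mass chain with
    # K = [[k1+k2, -k2], [-k2, k2]] and M = m*I, via the closed-form
    # quadratic for the eigenvalues of K/m.
    a = (k1 + 2.0 * k2) / m              # trace of K/m
    det = (k1 * k2) / (m * m)            # determinant of K/m
    return 0.5 * (a - (a * a - 4.0 * det) ** 0.5)

def eigenvalue_cdf(samples, x):
    # Empirical cumulative distribution function at x.
    return sum(s <= x for s in samples) / len(samples)

# Hypothetical stiffness statistics, purely for illustration.
random.seed(0)
samples = [first_eigenvalue_2dof(random.gauss(100.0, 5.0),
                                 random.gauss(50.0, 2.0))
           for _ in range(5000)]
```

    Evaluating `eigenvalue_cdf` over a grid of x values traces the CDF that the probabilistic component-mode-synthesis method obtains without the 5000 model evaluations.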

  7. A split-step method to include electron–electron collisions via Monte Carlo in multiple rate equation simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel

    2016-10-01

    A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single-photon absorption by free carriers. The second step is stochastic and models electron–electron collisions using Monte Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high-intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.

  8. A white-box model of S-shaped and double S-shaped single-species population growth

    PubMed Central

    Kalmykov, Lev V.

    2015-01-01

    Complex systems may be mechanistically modelled by white-box modeling using logical deterministic individual-based cellular automata. Mathematical models of complex systems are of three types: black-box (phenomenological), white-box (mechanistic, based on first principles) and grey-box (mixtures of phenomenological and mechanistic models). Most basic ecological models are of the black-box type, including the Malthusian, Verhulst, and Lotka–Volterra models. In black-box models, the individual-based (mechanistic) mechanisms of population dynamics remain hidden. Here we mechanistically model the S-shaped and double S-shaped population growth of vegetatively propagated rhizomatous lawn grasses. Using purely logical deterministic individual-based cellular automata we create a white-box model. From a general physical standpoint, the vegetative propagation of plants is an analogue of excitation propagation in excitable media. Using the Monte Carlo method, we investigate the role of different initial positionings of an individual in the habitat. We have investigated mechanisms of single-species population growth limited by habitat size, intraspecific competition, regeneration time and fecundity of individuals under two types of boundary conditions and two levels of fecundity. In addition, we have compared the S-shaped and J-shaped population growth. We consider this white-box modeling approach as a method of artificial intelligence which works as automatic hyper-logical inference from the first principles of the studied subject. This approach is promising for direct mechanistic insights into the nature of complex systems. PMID:26038717
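    A minimal sketch of a logical deterministic individual-based cellular automaton of this kind: a single individual colonizes its von Neumann neighbours on a bounded grid, and the occupied-cell count over time traces an S-shaped curve. Grid size and neighbourhood are illustrative; this is not the paper's lawn-grass model.

```python
def ca_population_growth(width, height, steps):
    # Deterministic individual-based CA: each occupied cell colonizes its
    # empty von Neumann neighbours after one regeneration step, with a
    # closed (reflecting) boundary. Returns occupied-cell counts per step.
    occupied = {(width // 2, height // 2)}      # single founder individual
    counts = [len(occupied)]
    for _ in range(steps):
        new = set(occupied)
        for (x, y) in occupied:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    new.add((nx, ny))
        occupied = new
        counts.append(len(occupied))
    return counts
```

    On an unbounded habitat the colonized area grows roughly quadratically; the bounded habitat caps it, which is what bends the growth curve into an S shape.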

  9. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research implements the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, even though the structures obtained from a deterministic optimization problem are cost effective, they become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered.
This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analysis, followed by the simulation techniques that are performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, the implementation of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem is presented and discussed.
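    The crude Monte Carlo simulation technique mentioned above, which estimates the probability of failure P(g(X) ≤ 0) for a limit-state function g, can be sketched as follows. The R − S limit state and its distributions are illustrative, not the study's Kevlar model.

```python
import random

def prob_failure_mc(limit_state, sampler, n=20000, seed=1):
    # Crude Monte Carlo: sample the random variables and count
    # limit-state violations, i.e. realizations with g(X) <= 0.
    rng = random.Random(seed)
    fails = sum(limit_state(sampler(rng)) <= 0 for _ in range(n))
    return fails / n

# Illustrative limit state g = R - S with resistance R ~ N(10, 1)
# and load S ~ N(7, 1); the exact P_f is Phi(-3/sqrt(2)) ~ 1.7e-2.
pf = prob_failure_mc(lambda x: x[0] - x[1],
                     lambda rng: (rng.gauss(10, 1), rng.gauss(7, 1)))
```

    First- and second-order reliability methods (FORM/SORM) replace this sampling loop with an approximation around the most probable failure point, which is what makes the decoupled RBDO procedure computationally affordable.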

  10. Can complexity decrease in congestive heart failure?

    NASA Astrophysics Data System (ADS)

    Mukherjee, Sayan; Palit, Sanjay Kumar; Banerjee, Santo; Ariffin, M. R. K.; Rondoni, Lamberto; Bhattacharya, D. K.

    2015-12-01

    The complexity of a signal can be measured by the recurrence period density entropy (RPDE) from the reconstructed phase space. We have chosen a window-based RPDE method for the classification of signals, as RPDE is an average entropic measure of the whole phase space. We have observed the changes in complexity in the cardiac signals of normal healthy persons (NHP) and congestive heart failure patients (CHFP). The results show that the cardiac dynamics of a healthy subject are more complex and random compared to those of a heart failure patient, whose dynamics are more deterministic. We have constructed a general threshold to distinguish the borderline between healthy and congestive heart failure dynamics. The results may be useful for a wide range of physiological and biomedical analyses.
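    A minimal sketch of RPDE, assuming the common formulation (time-delay embedding, first-return times to an ε-ball around each phase-space point, normalized Shannon entropy of their density); the embedding parameters are illustrative and not the paper's choices.

```python
import math

def rpde(signal, dim=2, tau=1, eps=0.1):
    # Recurrence period density entropy, normalized to [0, 1]:
    # 0 for a perfectly periodic signal (one recurrence period),
    # approaching 1 as recurrence periods spread out uniformly.
    pts = [tuple(signal[i + k * tau] for k in range(dim))
           for i in range(len(signal) - (dim - 1) * tau)]
    periods = []
    for i, p in enumerate(pts):
        for j in range(i + 1, len(pts)):
            if all(abs(a - b) <= eps for a, b in zip(p, pts[j])):
                periods.append(j - i)        # first return time
                break
    if not periods:
        return 0.0
    tmax = max(periods)
    density = [periods.count(t) / len(periods) for t in range(1, tmax + 1)]
    h = -sum(d * math.log(d) for d in density if d > 0)
    return h / math.log(tmax) if tmax > 1 else 0.0
```

    In the windowed variant used in the paper, this quantity is computed over sliding segments of the cardiac signal and then averaged.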

  11. Synthesis, spectroscopic characterization, first order nonlinear optical properties and DFT calculations of novel Mn(II), Co(II), Ni(II), Cu(II) and Zn(II) complexes with 1,3-diphenyl-4-phenylazo-5-pyrazolone ligand

    NASA Astrophysics Data System (ADS)

    Abdel-Latif, Samir A.; Mohamed, Adel A.

    2018-02-01

    Novel complexes of Mn(II), Co(II), Ni(II), Cu(II) and Zn(II) metal ions with 1,3-diphenyl-4-phenylazo-5-pyrazolone (L) have been prepared and characterized using different analytical and spectroscopic techniques. The 1:1 complexes of Mn(II), Co(II) and Zn(II) are distorted octahedral, whereas the Ni(II) complex is square planar and the Cu(II) complex is distorted trigonal bipyramidal. The 1:2 complexes of Mn(II), Co(II), Cu(II) and Zn(II) are distorted trigonal bipyramidal, whereas the Ni(II) complex is distorted tetrahedral. All complexes behave as non-ionic in dimethylformamide (DMF). The electronic structure and nonlinear optical (NLO) parameters of the complexes were investigated theoretically at the B3LYP/GEN level of theory. Molecular stability and bond strengths have been investigated by applying natural bond orbital (NBO) analysis. The geometries of the studied complexes are non-planar. DFT calculations have also been carried out to calculate the global properties: hardness (η), global softness (S) and electronegativity (χ). The calculated small energy gap between the HOMO and LUMO energies shows that charge transfer occurs within the complexes. The total static dipole moment (μtot), the mean polarizability (<α>), the anisotropy of the polarizability (Δα) and the mean first-order hyperpolarizability (<β>) were calculated and compared with urea as a reference material. The complexes show promising optical properties.

  12. Definitions of Complexity are Notoriously Difficult

    NASA Astrophysics Data System (ADS)

    Schuster, Peter

    Definitions of complexity are notoriously difficult, if not impossible. A good working hypothesis might be: everything is complex that is not simple. This is precisely the way in which we define nonlinear behavior. Things appear complex for different reasons: i) complexity may result from lack of insight, ii) complexity may result from lack of methods, and iii) complexity may be inherent to the system. The best known example for i) is celestial mechanics: the highly complex Pythagorean epicycles became obsolete with the introduction of Newton's law of universal gravitation. To give an example for ii), pattern formation and deterministic chaos were not really understood before extensive computer simulations became possible. Cellular metabolism may serve as an example for iii); its complexity is caused by the enormous complexity of biochemical reaction networks with up to one hundred individual reaction fluxes. Nevertheless, only a few fluxes are dominant in the sense that using Pareto-optimal values for them provides near-optimal values for all the others...

  13. Cooperation of Deterministic Dynamics and Random Noise in Production of Complex Syntactical Avian Song Sequences: A Neural Network Model

    PubMed Central

    Yamashita, Yuichi; Okumura, Tetsu; Okanoya, Kazuo; Tani, Jun

    2011-01-01

    How the brain learns and generates temporal sequences is a fundamental issue in neuroscience. The production of birdsongs, a process which involves complex learned sequences, provides researchers with an excellent biological model for this topic. The Bengalese finch in particular learns a highly complex song with syntactical structure. The nucleus HVC (HVC), a premotor nucleus within the avian song system, plays a key role in generating the temporal structures of their songs. From lesion studies, the nucleus interfacialis (NIf) projecting to the HVC is considered one of the essential regions that contribute to the complexity of their songs. However, the types of interaction between the HVC and the NIf that can produce complex syntactical songs remain unclear. In order to investigate the function of interactions between the HVC and NIf, we have proposed a neural network model based on previous biological evidence. The HVC is modeled by a recurrent neural network (RNN) that learns to generate temporal patterns of songs. The NIf is modeled as a mechanism that provides auditory feedback to the HVC and generates random noise that feeds into the HVC. The model showed that complex syntactical songs can be replicated by simple interactions between deterministic dynamics of the RNN and random noise. In the current study, the plausibility of the model is tested by the comparison between the changes in the songs of actual birds induced by pharmacological inhibition of the NIf and the changes in the songs produced by the model resulting from modification of parameters representing NIf functions. The efficacy of the model demonstrates that the changes of songs induced by pharmacological inhibition of the NIf can be interpreted as a trade-off between the effects of noise and the effects of feedback on the dynamics of the RNN of the HVC. These facts suggest that the current model provides a convincing hypothesis for the functional role of NIf–HVC interaction. PMID:21559065

  14. Statistical Physics of Complex Substitutive Systems

    NASA Astrophysics Data System (ADS)

    Jin, Qing

    Diffusion processes are central to human interactions. Despite extensive studies that span multiple disciplines, our knowledge is limited to spreading processes in non-substitutive systems. Yet, a considerable number of ideas, products, and behaviors spread by substitution; to adopt a new one, agents must give up an existing one. This captures the spread of scientific constructs (forcing scientists to choose, for example, a deterministic or probabilistic worldview) as well as the adoption of durable items, such as mobile phones, cars, or homes. In this dissertation, I develop a statistical physics framework to describe, quantify, and understand substitutive systems. By empirically exploring three collected high-resolution datasets pertaining to such systems, I build a mechanistic model describing substitutions, which not only analytically predicts the universal macroscopic phenomenon discovered in the collected datasets, but also accurately captures the trajectories of individual items in a complex substitutive system, demonstrating a high degree of regularity and universality in substitutive systems. I also discuss the origins and interpretation of the parameters in the substitution model and a possible generalized form of the mathematical framework. The systematic study of substitutive systems presented in this dissertation could potentially guide the understanding and prediction of all spreading phenomena driven by substitution, from electric cars to scientific paradigms, and from renewable energy to new healthy habits.

  15. Deterministic-random separation in nonstationary regime

    NASA Astrophysics Data System (ADS)

    Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.

    2016-02-01

    In rotating machinery vibration analysis, the synchronous average is perhaps the most widely used technique for extracting periodic components. Periodic components are typically related to gear vibrations, misalignments, unbalances, blade rotations, reciprocating forces, etc. Their separation from other random components is essential in vibration-based diagnosis in order to discriminate useful information from masking noise. However, synchronous averaging theoretically requires the machine to operate under a stationary regime (i.e. the related vibration signals are cyclostationary) and is otherwise jeopardized by the presence of amplitude and phase modulations. A first object of this paper is to investigate the nature of the nonstationarity induced by the response of a linear time-invariant system subjected to a speed-varying excitation. For this purpose, the concept of a cyclo-non-stationary signal is introduced, which extends the class of cyclostationary signals to speed-varying regimes. Next, a "generalized synchronous average" (GSA) is designed to extract the deterministic part of a cyclo-non-stationary vibration signal, i.e. the analog of the periodic part of a cyclostationary signal. Two estimators of the GSA are proposed. The first one returns the synchronous average of the signal at predefined discrete operating speeds. A brief statistical study of it is performed, aiming to provide the user with confidence intervals that reflect the "quality" of the estimator according to the SNR and the estimated speed. The second estimator returns a smoothed version of the former by enforcing continuity over the speed axis. It helps to reconstruct the deterministic component by tracking a specific trajectory dictated by the speed profile (assumed to be known a priori). The proposed method is validated first on synthetic signals and then on actual industrial signals.
The usefulness of the approach is demonstrated on envelope-based diagnosis of bearings in variable-speed operation.
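    The classical synchronous average that the paper generalizes can be sketched in a few lines for a constant, integer period in samples; the GSA itself additionally tracks the varying speed.

```python
def synchronous_average(signal, period):
    # Fold the signal at an integer period (in samples) and average the
    # cycles: zero-mean random content cancels out across cycles, leaving
    # the periodic (deterministic) part.
    n_cycles = len(signal) // period
    return [sum(signal[c * period + i] for c in range(n_cycles)) / n_cycles
            for i in range(period)]
```

    For example, a base pattern [1, 2, 3] repeated over four cycles, with a disturbance of +0.5 on even cycles and -0.5 on odd cycles, averages back to [1.0, 2.0, 3.0].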

  16. Complexity and non-commutativity of learning operations on graphs.

    PubMed

    Atmanspacher, Harald; Filk, Thomas

    2006-07-01

    We present results from numerical studies of supervised learning operations in small recurrent networks considered as graphs, leading from a given set of input conditions to predetermined outputs. Graphs that have optimized their output for particular inputs with respect to predetermined outputs are asymptotically stable and can be characterized by attractors, which form a representation space for an associative multiplicative structure of input operations. As the mapping from a series of inputs onto a series of such attractors generally depends on the sequence of inputs, this structure is generally non-commutative. Moreover, the size of the set of attractors, indicating the complexity of learning, is found to behave non-monotonically as learning proceeds. A tentative relation between this complexity and the notion of pragmatic information is indicated.

  17. Synthesis, characterization and investigation of electrochemical and spectroelectrochemical properties of non-peripherally tetra-5-methyl-1,3,4-thiadiazole substituted copper(II) iron(II) and oxo-titanium (IV) phthalocyanines

    NASA Astrophysics Data System (ADS)

    Demirbaş, Ümit; Akyüz, Duygu; Akçay, Hakkı Türker; Barut, Burak; Koca, Atıf; Kantekin, Halit

    2017-09-01

    In this study novel substituted phthalonitrile (3) and non-peripherally tetra 5-Methyl-1,3,4-thiadiazole substituted copper(II) (4), iron(II) (5) and oxo-titanium (IV) (6) phthalocyanines were synthesized. These novel compounds were fully characterized by FT-IR, 1H NMR, UV-vis and MALDI-TOF mass spectroscopic techniques. Voltammetric and in situ spectroelectrochemical measurements were performed for metallo-phthalocyanines (4-6). TiIVOPc and FeIIPc showed metal-based and ligand-based electron transfer reactions while CuIIPc shows only ligand-based electron transfer reaction. Voltammetric measurements indicated that the complexes have reversible, diffusion controlled and one-electron redox reactions. The assignments of the redox processes and color of the electrogenerated species of the complexes were determined with in-situ spectroelectrochemical and electrocolorimetric measurements. These measurements showed that the complexes can be used as the electrochromic materials for various display technologies.

  18. Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Acoustic full-waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and comparison of Paracousti-UQ results with MC simulations using simple models.

  19. Turbulent Fluid Motion 6: Turbulence, Nonlinear Dynamics, and Deterministic Chaos

    NASA Technical Reports Server (NTRS)

    Deissler, Robert G.

    1996-01-01

    Several turbulent and nonturbulent solutions of the Navier-Stokes equations are obtained. The unaveraged equations are used numerically in conjunction with tools and concepts from nonlinear dynamics, including time series, phase portraits, Poincare sections, Liapunov exponents, power spectra, and strange attractors. Initially neighboring solutions for a low-Reynolds-number fully developed turbulence are compared. The turbulence is sustained by a nonrandom time-independent external force. The solutions, on the average, separate exponentially with time, having a positive Liapunov exponent. Thus, the turbulence is characterized as chaotic. In a search for solutions which contrast with the turbulent ones, the Reynolds number (or strength of the forcing) is reduced. Several qualitatively different flows are noted. These are, respectively, fully chaotic, complex periodic, weakly chaotic, simple periodic, and fixed-point. Of these, we classify only the fully chaotic flows as turbulent. Those flows have both a positive Liapunov exponent and Poincare sections without pattern. By contrast, the weakly chaotic flows, although having positive Liapunov exponents, have some pattern in their Poincare sections. The fixed-point and periodic flows are nonturbulent, since turbulence, as generally understood, is both time-dependent and aperiodic.

  20. White Matter Changes in Tinnitus: Is It All Age and Hearing Loss?

    PubMed

    Yoo, Hye Bin; De Ridder, Dirk; Vanneste, Sven

    2016-02-01

    Tinnitus is a condition characterized by the perception of auditory phantom sounds. It is known to result from complex interactions between auditory and nonauditory regions. However, previous structural imaging studies on tinnitus patients showed evidence of significant white matter changes caused by hearing loss that are positively correlated with aging. The current study focused on which aspects of tinnitus pathology affect white matter integrity the most. We used the diffusion tensor imaging technique to acquire images with higher contrast in brain white matter, and analyzed how white matter is influenced by tinnitus-related factors using voxel-based methods, region-of-interest analysis, and deterministic tractography. As a result, white matter integrity in chronic tinnitus patients was both directly affected by age and mediated by hearing loss. The most important changes in white matter were found bilaterally in the anterior corona radiata, the anterior corpus callosum, and the bilateral sagittal strata. In the tractography analysis, the white matter integrity values in tracts of the right parahippocampus correlated with subjective tinnitus loudness.

  1. Challenges in Wireless System Integration as Enablers for Indoor Context Aware Environments

    PubMed Central

    Aguirre, Erik

    2017-01-01

    The advent of fully interactive environments within Smart Cities and Smart Regions requires the use of multiple wireless systems. In the case of user-device interaction, which finds multiple applications such as Ambient Assisted Living, Intelligent Transportation Systems or Smart Grids, among others, large numbers of transceivers are employed in order to achieve anytime, anyplace and any-device connectivity. The resulting combination of heterogeneous wireless networks exhibits fundamental limitations derived from coverage/capacity relations, as a function of required Quality of Service parameters, required bit rate, energy restrictions and adaptive modulation and coding schemes. In this context, the inherent transceiver density poses challenges in overall system operation, given the multiple-node operation which increases overall interference levels. In this work, a deterministic analysis applied to variable-density wireless sensor network operation within complex indoor scenarios is presented, as a function of topological node distribution. The extensive analysis derives interference characterizations, both for conventional transceivers as well as wearables, which provide relevant information in terms of individual node configuration as well as complete network layout. PMID:28704963

  2. Stability analysis and application of a mathematical cholera model.

    PubMed

    Liao, Shu; Wang, Jin

    2011-07-01

    In this paper, we conduct a dynamical analysis of the deterministic cholera model proposed in [9]. We study the stability of both the disease-free and endemic equilibria so as to explore the complex epidemic and endemic dynamics of the disease. We demonstrate a real-world application of this model by investigating the recent cholera outbreak in Zimbabwe. Meanwhile, we present numerical simulation results to verify the analytical predictions.
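    As a hedged illustration of this kind of deterministic epidemic analysis, the sketch below integrates a basic SIR model, a simplified stand-in for the cholera model of [9] (which also tracks a pathogen reservoir), and shows how outbreak size depends on the basic reproduction number R0 = beta/gamma.

```python
def simulate_sir(beta, gamma, days, dt=0.05, i0=1e-3):
    # Forward-Euler integration of the basic deterministic SIR model
    # (fractions of the population). Returns (final, peak) infectious
    # fractions; parameters are illustrative.
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s += ds * dt
        i += di * dt
        peak = max(peak, i)
    return i, peak
```

    With R0 < 1 the disease-free equilibrium is stable and the infection dies out immediately; with R0 > 1 a substantial outbreak occurs, the dichotomy at the heart of the stability analysis.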

  3. Purification of complex samples: Implementation of a modular and reconfigurable droplet-based microfluidic platform with cascaded deterministic lateral displacement separation modules

    PubMed Central

    Pudda, Catherine; Boizot, François; Verplanck, Nicolas; Revol-Cavalier, Frédéric; Berthier, Jean; Thuaire, Aurélie

    2018-01-01

    Particle separation in microfluidic devices is a common problem in sample preparation for biology. Deterministic lateral displacement (DLD) is efficiently implemented as a size-based fractionation technique to separate two populations of particles around a specific size. However, real biological samples contain components of many different sizes and a single DLD separation step is not sufficient to purify these complex samples. When connecting several DLD modules in series, pressure balancing at the DLD outlets of each step becomes critical to ensure an optimal separation efficiency. A generic microfluidic platform is presented in this paper to optimize pressure balancing, when DLD separation is connected either to another DLD module or to a different microfluidic function. This is made possible by generating droplets at T-junctions connected to the DLD outlets. Droplets act as pressure controllers, which perform at the same time the encapsulation of DLD-sorted particles and the balance of output pressures. The optimized pressures to apply on DLD modules and on T-junctions are determined by a general model that ensures the equilibrium of the entire platform. The proposed separation platform is completely modular and reconfigurable, since the same predictive model applies to any cascaded DLD modules of the droplet-based cartridge. PMID:29768490

  4. Comparing reactive and memory-one strategies of direct reciprocity

    NASA Astrophysics Data System (ADS)

    Baek, Seung Ki; Jeong, Hyeong-Chai; Hilbe, Christian; Nowak, Martin A.

    2016-05-01

    Direct reciprocity is a mechanism for the evolution of cooperation based on repeated interactions. When individuals meet repeatedly, they can use conditional strategies to enforce cooperative outcomes that would not be feasible in one-shot social dilemmas. Direct reciprocity requires that individuals keep track of their past interactions and find the right response. However, there are natural bounds on strategic complexity: Humans find it difficult to remember past interactions accurately, especially over long timespans. Given these limitations, it is natural to ask how complex strategies need to be for cooperation to evolve. Here, we study stochastic evolutionary game dynamics in finite populations to systematically compare the evolutionary performance of reactive strategies, which only respond to the co-player’s previous move, and memory-one strategies, which take into account their own and the co-player’s previous moves. In both cases, we compare deterministic strategy and stochastic strategy spaces. For reactive strategies and small costs, we find that stochasticity benefits cooperation, because it allows for generous tit-for-tat. For memory-one strategies and small costs, we find that stochasticity does not increase the propensity for cooperation, because the deterministic rule of win-stay, lose-shift works best. For memory-one strategies and large costs, however, stochasticity can augment cooperation.
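    The deterministic win-stay, lose-shift rule mentioned above can be written as a one-line memory-one strategy for the prisoner's dilemma; the 'C'/'D' encoding is illustrative.

```python
def wsls(my_prev, opp_prev):
    # Win-stay, lose-shift: repeat your previous move if the co-player
    # cooperated (the two high payoffs R and T), otherwise switch.
    if opp_prev == 'C':
        return my_prev
    return 'D' if my_prev == 'C' else 'C'

# Two WSLS players recover mutual cooperation after a one-shot error:
history = [('C', 'D')]               # player 2 defects by mistake
for _ in range(2):
    a, b = history[-1]
    history.append((wsls(a, b), wsls(b, a)))
# history -> [('C', 'D'), ('D', 'D'), ('C', 'C')]
```

    This built-in error correction is one reason the deterministic rule outperforms stochastic memory-one strategies at small costs.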

  5. Deterministic Assembly of Complex Bacterial Communities in Guts of Germ-Free Cockroaches

    PubMed Central

    Mikaelyan, Aram; Thompson, Claire L.; Hofer, Markus J.

    2015-01-01

    The gut microbiota of termites plays important roles in the symbiotic digestion of lignocellulose. However, the factors shaping the microbial community structure remain poorly understood. Because termites cannot be raised under axenic conditions, we established the closely related cockroach Shelfordella lateralis as a germ-free model to study microbial community assembly and host-microbe interactions. In this study, we determined the composition of the bacterial assemblages in cockroaches inoculated with the gut microbiota of termites and mice using pyrosequencing analysis of their 16S rRNA genes. Although the composition of the xenobiotic communities was influenced by the lineages present in the foreign inocula, their structure resembled that of conventional cockroaches. Bacterial taxa abundant in conventional cockroaches but rare in the foreign inocula, such as Dysgonomonas and Parabacteroides spp., were selectively enriched in the xenobiotic communities. Donor-specific taxa, such as endomicrobia or spirochete lineages restricted to the gut microbiota of termites, however, either were unable to colonize germ-free cockroaches or formed only small populations. The exposure of xenobiotic cockroaches to conventional adults restored their normal microbiota, which indicated that autochthonous lineages outcompete foreign ones. Our results provide experimental proof that the assembly of a complex gut microbiota in insects is deterministic. PMID:26655763

  6. Comparing reactive and memory-one strategies of direct reciprocity

    PubMed Central

    Baek, Seung Ki; Jeong, Hyeong-Chai; Hilbe, Christian; Nowak, Martin A.

    2016-01-01

    Direct reciprocity is a mechanism for the evolution of cooperation based on repeated interactions. When individuals meet repeatedly, they can use conditional strategies to enforce cooperative outcomes that would not be feasible in one-shot social dilemmas. Direct reciprocity requires that individuals keep track of their past interactions and find the right response. However, there are natural bounds on strategic complexity: Humans find it difficult to remember past interactions accurately, especially over long timespans. Given these limitations, it is natural to ask how complex strategies need to be for cooperation to evolve. Here, we study stochastic evolutionary game dynamics in finite populations to systematically compare the evolutionary performance of reactive strategies, which only respond to the co-player’s previous move, and memory-one strategies, which take into account their own and the co-player’s previous moves. In both cases, we compare deterministic strategy and stochastic strategy spaces. For reactive strategies and small costs, we find that stochasticity benefits cooperation, because it allows for generous tit-for-tat. For memory-one strategies and small costs, we find that stochasticity does not increase the propensity for cooperation, because the deterministic rule of win-stay, lose-shift works best. For memory-one strategies and large costs, however, stochasticity can augment cooperation. PMID:27161141

  7. A Magnetorheological Polishing-Based Approach for Studying Precision Microground Surfaces of Tungsten Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafrir, S.N.; Lambropoulos, J.C.; Jacobs, S.D.

    2007-03-23

    Surface features of tungsten carbide composites processed by bound abrasive deterministic microgrinding and magnetorheological finishing (MRF) were studied for five WC-Ni composites, including one binderless material. All the materials studied were nonmagnetic with different microstructures and mechanical properties. White-light interferometry, scanning electron microscopy, and atomic force microscopy were used to characterize the surfaces after various grinding steps, surface etching, and MRF spot-taking.

  8. CRAX/Cassandra Reliability Analysis Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, D.

    1999-02-10

    Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf (COTS) components. In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment, etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CMX (Cassandra Exoskeleton) reliability analysis software.
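
    The general workflow the abstract describes, wrapping a deterministic physics model in sampled input uncertainties, can be sketched as follows. This is not CRAX/CMX itself; the toy model and all distributions are invented for illustration:

```python
import random

def toy_model(load, area, strength):
    """Deterministic physics-based model: the part survives if the
    applied stress (load / area) stays below the material strength."""
    return load / area < strength

def estimate_reliability(n=100_000, seed=1):
    """Propagate input uncertainties through the deterministic model
    by Monte Carlo sampling and report the survival fraction."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n):
        load = rng.gauss(1000.0, 100.0)     # N: uncertain operating load
        area = rng.gauss(4.0e-4, 2.0e-5)    # m^2: manufacturing tolerance
        strength = rng.gauss(4.0e6, 3.0e5)  # Pa: uncertain material property
        survived += toy_model(load, area, strength)
    return survived / n

reliability = estimate_reliability()
print(f"estimated reliability: {reliability:.4f}")
```

    Sensitivity of the estimate to each input distribution then indicates where additional component testing would buy the most reliability, which is the benefit the abstract highlights.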

  9. What does the structure of its visibility graph tell us about the nature of the time series?

    NASA Astrophysics Data System (ADS)

    Franke, Jasper G.; Donner, Reik V.

    2017-04-01

    Visibility graphs are a recently introduced method to construct complex network representations from univariate time series in order to study their dynamical characteristics [1]. In recent years, this approach has been successfully applied to a considerable variety of geoscientific research questions and data sets, including non-trivial temporal patterns in complex earthquake catalogs [2] and time-reversibility in climate time series [3]. It has been shown that several characteristic features of the networks thus constructed differ between stochastic and deterministic (possibly chaotic) processes, which is, however, relatively hard to exploit in real-world applications. In this study, we propose two new measures related to the network complexity of visibility graphs constructed from time series: a special type of network entropy [4] and a recently introduced measure of the heterogeneity of the network's degree distribution [5]. For paradigmatic model systems exhibiting bifurcation sequences between regular and chaotic dynamics, both properties clearly trace the transitions between the two types of regimes and exhibit marked quantitative differences for regular and chaotic dynamics. Moreover, for dynamical systems with a small amount of additive noise, the considered properties show gradual changes prior to the bifurcation point. This finding appears closely related to the subsequent loss of stability of the current state, which is known to lead to critical slowing down as the transition point is approached. In this spirit, both considered visibility graph characteristics provide alternative tracers of dynamical early warning signals consistent with classical indicators.
Our results demonstrate that measures of visibility graph complexity (i) provide a potentially useful means of tracing changes in the dynamical patterns encoded in a univariate time series that originate from increasing autocorrelation and (ii) allow one to systematically distinguish regular from deterministic-chaotic dynamics. We demonstrate the application of our method to different model systems as well as selected paleoclimate time series from the North Atlantic region. Notably, visibility graph based methods are particularly suited for studying the latter type of geoscientific data, since they do not impose intrinsic restrictions or assumptions on the nature of the time series under investigation in terms of noise process, linearity, and sampling homogeneity. [1] Lacasa, Lucas, et al. "From time series to complex networks: The visibility graph." Proceedings of the National Academy of Sciences 105.13 (2008): 4972-4975. [2] Telesca, Luciano, and Michele Lovallo. "Analysis of seismic sequences by using the method of visibility graph." EPL (Europhysics Letters) 97.5 (2012): 50002. [3] Donges, Jonathan F., Reik V. Donner, and Jürgen Kurths. "Testing time series irreversibility using complex network methods." EPL (Europhysics Letters) 102.1 (2013): 10004. [4] Small, Michael. "Complex networks from time series: capturing dynamics." 2013 IEEE International Symposium on Circuits and Systems (ISCAS2013), Beijing (2013): 2509-2512. [5] Jacob, Rinku, K.P. Harikrishnan, Ranjeev Misra, and G. Ambika. "Measure for degree heterogeneity in complex networks and its application to recurrence network analysis." arXiv preprint arXiv:1605.06607 (2016).
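
    The underlying construction can be sketched directly. A minimal natural visibility graph builder [1] together with a degree-distribution entropy, in the spirit of (though not identical to) the measures of [4] and [5]:

```python
import math

def visibility_graph(series):
    """Natural visibility graph of a time series (Lacasa et al., 2008).

    Samples i and j are linked if every intermediate sample lies strictly
    below the straight line connecting (i, y_i) and (j, y_j)."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

def degree_entropy(edges, n):
    """Shannon entropy of the empirical degree distribution."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    counts = {}
    for d in deg:
        counts[d] = counts.get(d, 0) + 1
    return -sum((c / n) * math.log(c / n) for c in counts.values())

print(len(visibility_graph([1, 2, 3, 4, 5])))  # 4: a linear ramp gives a path graph
```

    For a linear ramp every intermediate sample lies exactly on the connecting line, so only adjacent samples are linked; irregular (e.g. chaotic) series produce many long-range links and a far more heterogeneous degree distribution, which is what the entropy and heterogeneity measures quantify.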

  10. Non-linear stochastic growth rates and redshift space distortions

    DOE PAGES

    Jennings, Elise; Jennings, David

    2015-04-09

    The linear growth rate is commonly defined through a simple deterministic relation between the velocity divergence and the matter overdensity in the linear regime. We introduce a formalism that extends this to a non-linear, stochastic relation between θ = ∇ ∙ v(x,t)/aH and δ. This provides a new phenomenological approach that examines the conditional mean, together with the fluctuations of θ around this mean. We measure these stochastic components using N-body simulations and find they are non-negative and increase with decreasing scale, from ~10 per cent at k < 0.2 h Mpc^-1 to 25 per cent at k ~ 0.45 h Mpc^-1 at z = 0. Both the stochastic relation and the non-linearity are more pronounced for haloes, M ≤ 5 × 10^12 M_⊙ h^-1, compared to the dark matter at z = 0 and 1. Non-linear growth effects manifest themselves as a rotation of the mean away from the linear theory prediction -f_LT δ, where f_LT is the linear growth rate. This rotation increases with wavenumber, k, and we show that it can be well described by second-order Lagrangian perturbation theory (2LPT) for k < 0.1 h Mpc^-1. The stochasticity in the θ–δ relation, however, is not so simply described by 2LPT, and we discuss its impact on measurements of f_LT from two-point statistics in redshift space. Finally, given that the relationship between δ and θ is stochastic and non-linear, this has implications for the interpretation and precision of f_LT extracted using models which assume a linear, deterministic expression.
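
    The conditional-mean measurement described above can be sketched with binned statistics on mock data. The linear relation, the growth-rate value, and the scatter amplitude below are illustrative stand-ins, not simulation results:

```python
import numpy as np

def conditional_stats(delta, theta, lo=-3.0, hi=3.0, nbins=60):
    """Bin-averaged estimate of the conditional mean of theta given delta,
    plus the rms scatter of theta around that mean (the stochastic part)."""
    edges = np.linspace(lo, hi, nbins + 1)
    centers, mean, scatter = [], [], []
    for b in range(nbins):
        m = (delta >= edges[b]) & (delta < edges[b + 1])
        if m.any():
            centers.append(0.5 * (edges[b] + edges[b + 1]))
            mean.append(theta[m].mean())
            scatter.append(theta[m].std())
    return np.array(centers), np.array(mean), np.array(scatter)

# Mock data: a linear deterministic relation theta = -f_lt * delta
# plus a small stochastic component around it.
rng = np.random.default_rng(0)
f_lt = 0.5
delta = rng.normal(0.0, 1.0, 200_000)
theta = -f_lt * delta + rng.normal(0.0, 0.05, delta.size)

centers, mean, scatter = conditional_stats(delta, theta)
slope = np.polyfit(centers, mean, 1)[0]
print(round(slope, 2))  # recovers -f_lt = -0.5
```

    A rotation of the fitted slope away from -f_LT, or a scale-dependent scatter, is exactly the kind of non-linear, stochastic signature the paper quantifies.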

  11. Synthesis, characterization, thermal and biological evaluation of Cu (II), Co (II) and Ni (II) complexes of azo dye ligand containing sulfamethoxazole moiety

    NASA Astrophysics Data System (ADS)

    Mallikarjuna, N. M.; Keshavayya, J.; Maliyappa, M. R.; Shoukat Ali, R. A.; Venkatesh, Talavara

    2018-08-01

    Novel bioactive Cu(II), Co(II) and Ni(II) complexes of an azo dye ligand (L) derived from sulfamethoxazole were synthesized. The structures of the newly synthesized compounds were characterized by elemental analysis, molar conductance, magnetic susceptibility, FTIR, UV-visible, 1H NMR, mass spectrometry, thermal analysis and powder XRD techniques. Molar conductivity measurements in DMSO solution confirmed the non-electrolytic nature of the complexes. All the synthesized metal complexes were found to be monomeric and showed square planar geometry except the Co(II) complex, which has a six-coordinate, octahedral environment. The metal complexes exhibited a greater growth-inhibitory effect against the tested bacterial strains than the free ligand. The ligand and complexes also showed significant antioxidant and Calf Thymus DNA cleavage activities. Further, in silico molecular docking studies were performed to predict the possible binding sites of the ligand (L) and its metal complexes with the target receptor Glu-6P.

  12. Chaos-order transition in foraging behavior of ants.

    PubMed

    Li, Lixiang; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian; Schellnhuber, Hans Joachim

    2014-06-10

    The study of the foraging behavior of group animals (especially ants) is of practical ecological importance, but it also contributes to the development of widely applicable optimization problem-solving techniques. Biologists have discovered that single ants exhibit low-dimensional deterministic-chaotic activities. However, the influences of the nest, ants' physical abilities, and ants' knowledge (or experience) on foraging behavior have received relatively little attention in studies of the collective behavior of ants. This paper provides new insights into basic mechanisms of effective foraging for social insects or group animals that have a home. We propose that the whole foraging process of ants is controlled by three successive strategies: hunting, homing, and path building. A mathematical model is developed to study this complex scheme. We show that the transition from chaotic to periodic regimes observed in our model results from an optimization scheme for group animals with a home. According to our investigation, the behavior of such insects is represented not by random walks but rather by deterministic walks (as generated by deterministic dynamical systems, e.g., by maps) in a random environment: the animals use their intelligence and experience to guide them. The more knowledge an ant has, the higher its foraging efficiency is. When young insects join the collective to forage with old and middle-aged ants, it benefits the whole colony in the long run. The resulting strategy can even be optimal.

  13. Chaos–order transition in foraging behavior of ants

    PubMed Central

    Li, Lixiang; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian; Schellnhuber, Hans Joachim

    2014-01-01

    The study of the foraging behavior of group animals (especially ants) is of practical ecological importance, but it also contributes to the development of widely applicable optimization problem-solving techniques. Biologists have discovered that single ants exhibit low-dimensional deterministic-chaotic activities. However, the influences of the nest, ants’ physical abilities, and ants’ knowledge (or experience) on foraging behavior have received relatively little attention in studies of the collective behavior of ants. This paper provides new insights into basic mechanisms of effective foraging for social insects or group animals that have a home. We propose that the whole foraging process of ants is controlled by three successive strategies: hunting, homing, and path building. A mathematical model is developed to study this complex scheme. We show that the transition from chaotic to periodic regimes observed in our model results from an optimization scheme for group animals with a home. According to our investigation, the behavior of such insects is represented not by random walks but rather by deterministic walks (as generated by deterministic dynamical systems, e.g., by maps) in a random environment: the animals use their intelligence and experience to guide them. The more knowledge an ant has, the higher its foraging efficiency is. When young insects join the collective to forage with old and middle-aged ants, it benefits the whole colony in the long run. The resulting strategy can even be optimal. PMID:24912159

  14. Common features and peculiarities of the seismic activity at Phlegraean Fields, Long Valley, and Vesuvius

    USGS Publications Warehouse

    Marzocchi, W.; Vilardo, G.; Hill, D.P.; Ricciardi, G.P.; Ricco, C.

    2001-01-01

    We analyzed and compared the seismic activity that has occurred in the last two to three decades in three distinct volcanic areas: Phlegraean Fields, Italy; Vesuvius, Italy; and Long Valley, California. Our main goal is to identify and discuss common features and peculiarities in the temporal evolution of earthquake sequences that may reflect similarities and differences in the generating processes between these volcanic systems. In particular, we tried to characterize the time series of the number of events and of the seismic energy release in terms of stochastic, deterministic, and chaotic components. The time sequences from each area consist of thousands of earthquakes that allow a detailed quantitative analysis and comparison. The results obtained showed no evidence for either deterministic or chaotic components in the earthquake sequences in Long Valley caldera, which appears to be dominated by stochastic behavior. In contrast, earthquake sequences at Phlegraean Fields and Mount Vesuvius show a deterministic signal mainly consisting of a 24-hour periodicity. Our analysis suggests that the modulation in seismicity is in some way related to diurnal thermal processes rather than luni-solar tidal effects. Independently of the process that generates these periodicities in the seismicity, it is suggested that the lack (or presence) of diurnal cycles in seismic swarms of volcanic areas could be closely linked to the presence (or lack) of magma motion.
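
    A 24-hour periodicity of the kind reported above can be detected by Fourier analysis of hourly event counts. A sketch on synthetic Poisson data with an assumed diurnal modulation (the rates are invented):

```python
import numpy as np

def dominant_period_hours(counts):
    """Return the period (in hours) of the strongest non-zero
    Fourier component of an hourly event-count series."""
    x = np.asarray(counts, dtype=float)
    x = x - x.mean()                        # drop the zero-frequency term
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0)  # cycles per hour
    k = 1 + np.argmax(power[1:])
    return 1.0 / freqs[k]

rng = np.random.default_rng(42)
hours = np.arange(24 * 60)                            # 60 days of hourly bins
rate = 5.0 + 3.0 * np.sin(2 * np.pi * hours / 24.0)   # diurnal modulation
counts = rng.poisson(rate)
period = dominant_period_hours(counts)
print(period)  # 24.0
```

    Distinguishing thermal diurnal forcing from luni-solar tides would then amount to checking whether the peak sits at 24.0 hours or at the tidal periods near 12.4 and 24.8 hours.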

  15. Synthesis and spectral characterization of mono- and binuclear copper(II) complexes derived from 2-benzoylpyridine-N4-methyl-3-thiosemicarbazone: Crystal structure of a novel sulfur bridged copper(II) box-dimer

    NASA Astrophysics Data System (ADS)

    Jayakumar, K.; Sithambaresan, M.; Aiswarya, N.; Kurup, M. R. Prathapachandra

    2015-03-01

    Mononuclear and binuclear copper(II) complexes of 2-benzoylpyridine-N4-methyl thiosemicarbazone (HL) were prepared and characterized by a variety of spectroscopic techniques. Structural evidence for the novel sulfur-bridged copper(II) iodo binuclear complex was obtained by single crystal X-ray diffraction analysis. The complex [Cu2L2I2], a non-centrosymmetric box dimer, crystallizes in the monoclinic C2/c space group and was found to have distorted square pyramidal geometry (Addison parameter, τ = 0.238), with the square basal plane occupied by the thiosemicarbazone moiety and an iodine atom, whereas the sulfur atom from the other coordinated thiosemicarbazone moiety occupies the apical position. This is the first crystallographically studied system having non-centrosymmetrical entities bridged via thiolate S atoms with a Cu(II)–I bond. The tridentate thiosemicarbazone coordinates in the monodeprotonated thionic tautomeric form in all complexes except the sulfato complex, [Cu(HL)(SO4)]·H2O (1), where it binds to the metal centre in the neutral form. The magnetic moment values and the EPR spectral studies reflect the binuclearity of some of the complexes. The spin Hamiltonian and bonding parameters were calculated based on the EPR studies. In all the complexes g|| > g⊥ > 2.0023, and the g values in frozen DMF are consistent with the dx2-y2 ground state. The thermal stabilities of some of the complexes were also determined.

  16. System of systems design: Evaluating aircraft in a fleet context using reliability and non-deterministic approaches

    NASA Astrophysics Data System (ADS)

    Frommer, Joshua B.

    This work develops and implements a solution framework that allows for an integrated solution to a resource allocation system-of-systems problem associated with designing vehicles for integration into an existing fleet to extend that fleet's capability while improving efficiency. Typically, aircraft design focuses on a specific design mission, while a fleet perspective would provide a broader capability. Aspects of the design, for both the vehicles and the missions, may be treated, for simplicity, as deterministic in nature or, in a model that reflects actual conditions, as uncertain. Toward this end, the set of tasks or goals for the to-be-planned system-of-systems will be modeled more accurately with non-deterministic values, and the designed platforms will be evaluated using reliability analysis. The reliability, defined as the probability of a platform or set of platforms completing possible missions, will contribute to the fitness of the overall system. The framework includes building surrogate models for metrics such as capability and cost, and includes the ideas of reliability in the overall system-level design space. The concurrent design and allocation system-of-systems problem is a multi-objective mixed integer nonlinear programming (MINLP) problem. This study considered two system-of-systems problems that seek to simultaneously design new aircraft and allocate these aircraft into a fleet to provide a desired capability. The Coast Guard's Integrated Deepwater System program inspired the first problem, which consists of a suite of search-and-find missions for aircraft based on descriptions from the National Search and Rescue Manual. The second represents suppression of enemy air defense (SEAD) operations similar to those carried out by the U.S. Air Force, proposed as part of the Department of Defense Network Centric Warfare structure, and depicted in MIL-STD-3013.
The two problems seem similar, with long surveillance segments, but because of the complex nature of aircraft design, the analysis of a vehicle for high-speed attack combined with a long loiter period is considerably different from that for a quick cruise to an area combined with a low-speed search. However, the framework developed to solve this class of system-of-systems problem handles both scenarios and leads to a solution type for this kind of problem. At the vehicle level of the problem, different technologies can have an impact at the fleet level. One such technology is morphing, the ability to change shape, an ideal candidate technology for missions with dissimilar segments, such as the aforementioned two. A framework using surrogate models based on optimally sized aircraft, and using probabilistic parameters to define a concept of operations, is investigated; this has provided insight into the setup of the optimization problem, the use of the reliability metric, and the measurement of fleet-level impacts of morphing aircraft. The research consisted of four phases. The two initial phases built and defined the framework to solve the system-of-systems problem; these investigations used the search-and-find scenario as the example application. The first phase included the design of fixed-geometry and morphing aircraft for a range of missions and evaluated the aircraft capability using non-deterministic mission parameters. The second phase introduced the idea of multiple aircraft in a fleet, but only considered a fleet consisting of one aircraft type. The third phase incorporated the simultaneous design of a new vehicle and allocation into a fleet for the search-and-find scenario; in this phase, multiple types of aircraft are considered. The fourth phase repeated the simultaneous new aircraft design and fleet allocation for the SEAD scenario to show that the approach is not specific to the search-and-find scenario.
The framework presented in this work appears to be a viable approach for concurrently designing and allocating constituents in a system, specifically aircraft in a fleet. The research also shows that new technology impact can be assessed at the fleet level using conceptual design principles.

  17. Chance of Necessity: Modeling Origins of Life

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The fundamental nature of the processes that led to the emergence of life has been a subject of long-standing debate. One view holds that the origin of life is an event governed by chance, and the result of so many random events is unpredictable. This view was eloquently expressed by Jacques Monod in his book Chance and Necessity. In an alternative view, the origin of life is considered a deterministic event. Its details need not be deterministic in every respect, but the overall behavior is predictable. A corollary to the deterministic view is that the emergence of life must have been determined primarily by universal chemistry and biochemistry rather than by subtle details of environmental conditions. In my lecture I will explore two different paradigms for the emergence of life and discuss their implications for the predictability and universality of life-forming processes. The dominant approach is that the origin of life was guided by information stored in nucleic acids (the RNA World hypothesis). In this view, selection of improved combinations of nucleic acids obtained through random mutations drove the evolution of biological systems from their conception. An alternative hypothesis states that the formation of protocellular metabolism was driven by non-genomic processes. Even though these processes were highly stochastic, the outcome was largely deterministic, strongly constrained by the laws of chemistry. I will argue that self-replication of macromolecules was not required at the early stages of evolution; the reproduction of cellular functions alone was sufficient for the self-maintenance of protocells. In fact, the precise transfer of information between successive generations of the earliest protocells was unnecessary and could have impeded the discovery of cellular metabolism. I will also show that such concepts as speciation and fitness to the environment, developed in the context of genomic evolution, also hold in the absence of a genome.

  18. Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method

    DOE PAGES

    Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...

    2017-11-20

    Inverse problems arise in almost all fields of science in which real-world parameters are extracted from a set of measured data. The geosteering inversion plays an essential role in the accurate prediction of oncoming strata and provides reliable guidance for adjusting the borehole position on the fly to reach one or more geological targets. This mathematical problem is not easy to solve: it requires finding an optimal solution in a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. These so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation. Hence, the associated inverse problems become much more difficult, since the earth model to be inverted has more detailed structure. Conventional deterministic methods are incapable of solving such a complicated inverse problem, as they suffer from local-minimum traps. Alternatively, stochastic optimizations are in general better at finding globally optimal solutions and handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC-based inference is more efficient in dealing with the increased complexity and uncertainty faced by geosteering problems.
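
    A bare-bones HMC sampler (leapfrog integration plus a Metropolis accept step) can be sketched as below. The target here is a toy standard normal standing in for a geosteering posterior; the step size and trajectory length are illustrative choices:

```python
import numpy as np

def hmc_sample(logp_grad, logp, x0, n_samples=5000, eps=0.1, n_leapfrog=20, seed=0):
    """Minimal Hybrid/Hamiltonian Monte Carlo sampler."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    samples = []
    for _ in range(n_samples):
        p = rng.normal(size=x.shape)            # resample momentum
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian dynamics.
        p_new += 0.5 * eps * logp_grad(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += eps * p_new
            p_new += eps * logp_grad(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * logp_grad(x_new)
        # Metropolis accept/reject on the change in total energy.
        h_old = -logp(x) + 0.5 * p @ p
        h_new = -logp(x_new) + 0.5 * p_new @ p_new
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

logp = lambda x: -0.5 * float(x @ x)  # toy target: standard normal
grad = lambda x: -x
draws = hmc_sample(grad, logp, x0=[3.0])
print(draws[1000:].mean(), draws[1000:].var())  # near 0 and 1
```

    The gradient-guided trajectories are what let HMC traverse the large, multimodal solution spaces where deterministic optimizers stall in local minima, while the accept step keeps the draws distributed according to the posterior.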

  19. Analysis of Phase-Type Stochastic Petri Nets With Discrete and Continuous Timing

    NASA Technical Reports Server (NTRS)

    Jones, Robert L.; Goode, Plesent W. (Technical Monitor)

    2000-01-01

    The Petri net formalism is useful in studying many discrete-state, discrete-event systems exhibiting concurrency, synchronization, and other complex behavior. As a bipartite graph, the net can conveniently capture salient aspects of the system. As a mathematical tool, the net can specify an analyzable state space. Indeed, one can reason about certain qualitative properties (from state occupancies) and how they arise (the sequence of events leading there). By introducing deterministic or random delays, the model is forced to sojourn in states for some amount of time, giving rise to an underlying stochastic process, one that can be specified in a compact way and is capable of providing quantitative, probabilistic measures. We formalize a new non-Markovian extension to the Petri net that captures both discrete and continuous timing in the same model. The approach affords efficient stationary analysis in most cases and efficient transient analysis under certain restrictions. Moreover, this new formalism offers added modeling fidelity stemming from the simultaneous capture of discrete- and continuous-time events (as opposed to capturing only one and approximating the other). We show how the underlying stochastic process, which is non-Markovian, can be resolved into simpler Markovian problems that enjoy efficient solutions. Solution algorithms are provided that can be easily programmed.

  20. A Tabu-Search Heuristic for Deterministic Two-Mode Blockmodeling of Binary Network Matrices

    ERIC Educational Resources Information Center

    Brusco, Michael; Steinley, Douglas

    2011-01-01

    Two-mode binary data matrices arise in a variety of social network contexts, such as the attendance or non-attendance of individuals at events, the participation or lack of participation of groups in projects, and the votes of judges on cases. A popular method for analyzing such data is two-mode blockmodeling based on structural equivalence, where…

  1. Steepest Ascent Low/Non-Low-Frequency Ratio in Empirical Mode Decomposition to Separate Deterministic and Stochastic Velocities From a Single Lagrangian Drifter

    NASA Astrophysics Data System (ADS)

    Chu, Peter C.

    2018-03-01

    SOund Fixing And Ranging (RAFOS) floats deployed by the Naval Postgraduate School (NPS) in the California Current system from 1992 to 2001 at depths between 150 and 600 m (http://www.oc.nps.edu/npsRAFOS/) are used to study 2-D turbulent characteristics. Each drifter trajectory is adaptively decomposed using the empirical mode decomposition (EMD) into a series of intrinsic mode functions (IMFs), each with its own specific scale. A new steepest-ascent low/non-low-frequency ratio is proposed in this paper to separate a Lagrangian trajectory into low-frequency (nondiffusive, i.e., deterministic) and high-frequency (diffusive, i.e., stochastic) components. The 2-D turbulent (or eddy) diffusion coefficients are calculated on the basis of classical turbulent diffusion with mixing-length theory from the stochastic component of a single drifter. Statistical characteristics of the calculated 2-D turbulence length scale, strength, and diffusion coefficients from the NPS RAFOS data are presented, with the mean values (over all drifters) of the 2-D diffusion coefficients comparable to those from the commonly used diffusivity tensor method.
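
    The overall recipe (split the trajectory into deterministic and stochastic parts, then estimate a diffusivity from the stochastic part) can be sketched as follows. A moving average stands in for the EMD-based low-frequency reconstruction, so this illustrates the idea rather than the paper's algorithm:

```python
import numpy as np

def decompose(track, window=25):
    """Split a 1-D series into a smooth (deterministic) part and a
    residual (stochastic) part via a moving average; the paper instead
    selects low-frequency IMFs from an EMD of the trajectory."""
    kernel = np.ones(window) / window
    smooth = np.convolve(track, kernel, mode="same")
    return smooth, track - smooth

def eddy_diffusivity(residual_velocity, dt=1.0, max_lag=100):
    """Lagrangian diffusivity K = <u'^2> * T_L, with the integral time
    scale T_L estimated from the velocity autocorrelation function."""
    u = residual_velocity - residual_velocity.mean()
    var = u.var()
    acf = np.array([np.mean(u[:-k] * u[k:]) for k in range(1, max_lag)]) / var
    stop = np.argmax(acf <= 0) if (acf <= 0).any() else acf.size
    t_l = dt * (0.5 + acf[:stop].sum())  # integrate ACF to first zero crossing
    return var * t_l

# Check on a synthetic Ornstein-Uhlenbeck velocity with integral time
# scale T = 10 and unit variance, for which K = var * T = 10.
rng = np.random.default_rng(1)
a, n = np.exp(-0.1), 200_000
u = np.empty(n)
u[0] = 0.0
for i in range(1, n):
    u[i] = a * u[i - 1] + np.sqrt(1 - a**2) * rng.normal()
k_hat = eddy_diffusivity(u)
print(k_hat)  # close to 10
```

    The same two-step logic applies per trajectory, which is what lets the method extract a diffusion coefficient from a single drifter rather than from an ensemble.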

  2. Negative mobility of a Brownian particle: Strong damping regime

    NASA Astrophysics Data System (ADS)

    Słapik, A.; Łuczka, J.; Spiechowicz, J.

    2018-02-01

    We study the impact of inertia on the directed transport of a Brownian particle under non-equilibrium conditions: the particle moves in a one-dimensional periodic and symmetric potential, is driven by both an unbiased time-periodic force and a constant force, and is coupled to a thermostat of temperature T. Within selected parameter regimes this system exhibits negative mobility, which means that the particle moves in the direction opposite to that of the constant force. It is known that in such a setup the inertial term is essential for the emergence of negative mobility; it cannot be observed in the limiting case of overdamped dynamics. We analyse inertial effects and show that negative mobility can be observed even in the strong damping regime. We determine the optimal dimensionless mass for the presence of negative mobility and reveal three mechanisms behind this anomaly: deterministic chaotic, thermal-noise-induced, and deterministic non-chaotic. The last mechanism has not been reported before. These results may guide the observation of negative mobility in strongly damped dynamics, which is of fundamental importance from the point of view of biological systems, all of which operate in situ in fluctuating environments.
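
    The underdamped Langevin dynamics described above can be integrated with a simple Euler-Maruyama scheme; the time-averaged velocity is the observable whose sign relative to the constant force defines negative mobility. All parameter values below are illustrative, not the regimes identified in the paper:

```python
import numpy as np

def mean_velocity(mass=1.0, gamma=1.0, force=0.2, amp=0.0, omega=1.0,
                  temp=0.1, dt=1e-2, steps=200_000, seed=0,
                  grad_v=lambda x: np.sin(x)):
    """Euler-Maruyama integration of the inertial Langevin equation

        m dv = [-V'(x) - gamma*v + A*cos(omega*t) + F] dt + sqrt(2*gamma*kT) dW

    for a particle in a periodic potential V, returning the
    time-averaged velocity."""
    rng = np.random.default_rng(seed)
    x = v = vsum = 0.0
    kick = np.sqrt(2.0 * gamma * temp * dt) / mass
    for i in range(steps):
        a = (-grad_v(x) - gamma * v + amp * np.cos(omega * i * dt) + force) / mass
        v += a * dt + kick * rng.normal()
        x += v * dt
        vsum += v
    return vsum / steps

# Sanity check: with no potential and no AC drive, this is an ordinary
# damped Brownian particle, so the mean velocity tends to F/gamma = 0.2.
v_bar = mean_velocity(grad_v=lambda x: 0.0)
print(v_bar)
```

    Locating actual negative-mobility regimes requires scanning the mass, drive amplitude, and frequency; the check above only verifies the integrator against the exact free-particle result.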

  3. Structural Deterministic Safety Factors Selection Criteria and Verification

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in the probability distributions of resistive and applied stresses. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability showed that the deterministic method is not reliability-sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
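
    For normally distributed resistive and applied stresses, a safety index and its corresponding reliability of the kind derived above reduce to the classical stress-strength interference formula. A sketch with invented stress statistics (units arbitrary):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def safety_index(mu_r, sigma_r, mu_s, sigma_s):
    """Safety index beta for normal resistive (R) and applied (S) stress
    distributions, and the corresponding reliability P(R > S)."""
    beta = (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)
    return beta, phi(beta)

beta, rel = safety_index(mu_r=60.0, sigma_r=5.0, mu_s=40.0, sigma_s=4.0)
print(round(beta, 2), round(rel, 4))  # 3.12 0.9991
```

    Two designs with the same conventional safety factor (ratio of means) can have very different betas if their scatter differs, which is the sense in which the deterministic method is not reliability-sensitive.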

  4. Selective attention, diffused attention, and the development of categorization.

    PubMed

    Deng, Wei Sophia; Sloutsky, Vladimir M

    2016-12-01

    How do people learn categories and what changes with development? The current study attempts to address these questions by focusing on the role of attention in the development of categorization. In Experiment 1, participants (adults, 7-year-olds, and 4-year-olds) were trained with novel categories consisting of deterministic and probabilistic features, and their categorization and memory for features were tested. In Experiment 2, participants' attention was directed to the deterministic feature, and in Experiment 3 it was directed to the probabilistic features. Attentional cueing affected categorization and memory in adults and 7-year-olds: these participants relied on the cued features in their categorization and exhibited better memory of cued than of non-cued features. In contrast, in 4-year-olds attentional cueing affected only categorization, but not memory: these participants exhibited equally good memory for both cued and non-cued features. Furthermore, across the experiments, 4-year-olds remembered non-cued features better than adults. These results coupled with computational simulations provide novel evidence (1) pointing to differences in category representation and mechanisms of categorization across development, (2) elucidating the role of attention in the development of categorization, and (3) suggesting an important distinction between representation and decision factors in categorization early in development. These issues are discussed with respect to theories of categorization and its development. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Stochastic Blockmodeling of the Modules and Core of the Caenorhabditis elegans Connectome

    PubMed Central

    Pavlovic, Dragana M.; Vértes, Petra E.; Bullmore, Edward T.; Schafer, William R.; Nichols, Thomas E.

    2014-01-01

    Recently, there has been much interest in the community structure or mesoscale organization of complex networks. This structure is characterised either as a set of sparsely inter-connected modules or as a highly connected core with a sparsely connected periphery. However, it is often difficult to disambiguate these two types of mesoscale structure or, indeed, to summarise the full network in terms of the relationships between its mesoscale constituents. Here, we estimate a community structure with a stochastic blockmodel approach, the Erdős-Rényi Mixture Model, and compare it to the much more widely used deterministic methods, such as the Louvain and Spectral algorithms. We used the Caenorhabditis elegans (C. elegans) nervous system (connectome) as a model system in which biological knowledge about each node or neuron can be used to validate the functional relevance of the communities obtained. The deterministic algorithms derived communities with 4–5 modules, defined by sparse inter-connectivity between all modules. In contrast, the stochastic Erdős-Rényi Mixture Model estimated a community with 9 blocks or groups, which comprised a similar set of modules but also included a clearly defined core, made of 2 small groups. We show that the “core-in-modules” decomposition of the worm brain network, estimated by the Erdős-Rényi Mixture Model, is more compatible with prior biological knowledge about the C. elegans nervous system than the purely modular decomposition defined deterministically. We also show that the blockmodel can be used both to generate stochastic realisations (simulations) of the biological connectome, and to compress the network into a small number of super-nodes and their connectivity. We expect that the Erdős-Rényi Mixture Model may be useful for investigating the complex community structures in other (nervous) systems. PMID:24988196
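
    The generative side of a stochastic blockmodel can be sketched in a few lines (the block sizes and edge-probability matrix below are illustrative, not the fitted C. elegans values): nodes are assigned to blocks, and each edge exists independently with a probability that depends only on the blocks of its endpoints, so a densely connected "core" block is just a block with high connection probabilities to everything.

```python
import random

def sample_sbm(block_sizes, p, seed=0):
    """Draw one realization of an undirected stochastic blockmodel.
    block_sizes[k] = number of nodes in block k; p[k][l] = edge probability
    between a node in block k and a node in block l."""
    rng = random.Random(seed)
    labels = [k for k, n in enumerate(block_sizes) for _ in range(n)]
    n = len(labels)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < p[labels[i]][labels[j]]]
    return labels, edges

# Two sparsely inter-connected "modules" plus a small, densely connected "core".
labels, edges = sample_sbm([10, 10, 4],
                           [[0.60, 0.05, 0.30],
                            [0.05, 0.60, 0.30],
                            [0.30, 0.30, 0.90]])
```

    The same block structure also compresses the network: the three blocks and the 3x3 probability matrix summarise all pairwise connectivity, which is the "super-nodes" view mentioned in the abstract.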

  6. Cryptic breakpoint identified by whole-genome mate-pair sequencing in a rare paternally inherited complex chromosomal rearrangement.

    PubMed

    Aristidou, Constantia; Theodosiou, Athina; Ketoni, Andria; Bak, Mads; Mehrjouy, Mana M; Tommerup, Niels; Sismani, Carolina

    2018-01-01

    Precise characterization of apparently balanced complex chromosomal rearrangements in non-affected individuals is crucial as they may result in reproductive failure, recurrent miscarriages or affected offspring. We present a family where the non-affected father and daughter were found, using FISH and karyotyping, to be carriers of a three-way complex chromosomal rearrangement [t(6;7;10)(q16.2;q34;q26.1), de novo in the father]. The family suffered from two stillbirths, one miscarriage, and has a son with severe intellectual disability. In the present study, the family was revisited using whole-genome mate-pair sequencing. Interestingly, whole-genome mate-pair sequencing revealed a cryptic breakpoint on derivative (der) chromosome 6, rendering the rearrangement even more complex. FISH using a chromosome (chr) 6 custom-designed probe and a chr10 control probe confirmed that the interstitial chr6 segment, created by the two chr6 breakpoints, was translocated onto der(10). Breakpoints were successfully validated with Sanger sequencing, and small imbalances as well as microhomology were identified. Finally, the complex chromosomal rearrangement breakpoints disrupted the SIM1, GRIK2, CNTNAP2, and PTPRE genes without any phenotype developing. In contrast to the majority of maternally transmitted complex chromosomal rearrangement cases, our study investigated a rare case where a complex chromosomal rearrangement, which most probably resulted from a Type IV hexavalent during the pachytene stage of meiosis I, was stably transmitted from a fertile father to his non-affected daughter. Whole-genome mate-pair sequencing proved highly successful in identifying cryptic complexity, which consequently provided further insight into the meiotic segregation of chromosomes and the increased reproductive risk in individuals carrying the specific complex chromosomal rearrangement. We propose that such complex rearrangements should be characterized in detail using a combination of conventional cytogenetic and NGS-based approaches to aid better prenatal and preimplantation genetic diagnosis and counseling in couples with reproductive problems.

  7. Second Cancers After Fractionated Radiotherapy: Stochastic Population Dynamics Effects

    NASA Technical Reports Server (NTRS)

    Sachs, Rainer K.; Shuryak, Igor; Brenner, David; Fakir, Hatim; Hahnfeldt, Philip

    2007-01-01

    When ionizing radiation is used in cancer therapy it can induce second cancers in nearby organs. Mainly due to longer patient survival times, these second cancers have become of increasing concern. Estimating the risk of solid second cancers involves modeling: because of long latency times, available data is usually for older, obsolescent treatment regimens. Moreover, modeling second cancers gives unique insights into human carcinogenesis, since the therapy involves administering well-characterized doses of a well-studied carcinogen, followed by long-term monitoring. In addition to putative radiation initiation that produces pre-malignant cells, inactivation (i.e. cell killing) and subsequent cell repopulation by proliferation can be important at the doses relevant to second cancer situations. A recent initiation/inactivation/proliferation (IIP) model characterized quantitatively the observed occurrence of second breast and lung cancers, using a deterministic cell population dynamics approach. To analyze whether radiation-initiated pre-malignant clones become extinct before full repopulation can occur, we here give a stochastic version of this IIP model. Combining Monte Carlo simulations with standard solutions for time-inhomogeneous birth-death equations, we show that repeated cycles of inactivation and repopulation, as occur during fractionated radiation therapy, can lead to distributions of pre-malignant cells per patient with variance >> mean, even when pre-malignant clones are Poisson-distributed. Thus fewer patients would be affected, but with a higher probability, than a deterministic model tracking average pre-malignant cell numbers would predict. Our results are applied to data on breast cancers after radiotherapy for Hodgkin disease.
The stochastic IIP analysis, unlike the deterministic one, indicates: a) initiated, pre-malignant cells can have a growth advantage during repopulation, not just during the longer tumor latency period that follows; b) weekend treatment gaps during radiotherapy, apart from decreasing the probability of eradicating the primary cancer, substantially increase the risk of later second cancers.
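
    The variance >> mean effect can be reproduced with a toy Monte Carlo sketch of repeated initiation/inactivation/repopulation cycles. All parameter values below (initiation rate, survival probability, regrowth factor) are hypothetical illustrations rather than the paper's fitted values, and the geometric offspring model is a simple stand-in for the birth-death solutions used in the study.

```python
import random

def poisson(rng, lam):
    """Poisson sample via Knuth's method (fine for small lam)."""
    threshold = pow(2.718281828459045, -lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def geometric(rng, mean):
    """Offspring count on {1, 2, ...} with the given mean."""
    p = 1.0 / mean
    k = 1
    while rng.random() > p:
        k += 1
    return k

def simulate_patient(rng, fractions=20, init_rate=0.5, survive=0.5, regrow=2.0):
    """Per radiotherapy fraction: initiate new pre-malignant cells,
    inactivate each cell with prob (1 - survive), then let each survivor
    repopulate with a random (mean `regrow`) number of descendants."""
    cells = 0
    for _ in range(fractions):
        cells += poisson(rng, init_rate)                            # initiation
        cells = sum(rng.random() < survive for _ in range(cells))   # inactivation
        cells = sum(geometric(rng, regrow) for _ in range(cells))   # repopulation
    return cells

rng = random.Random(1)
counts = [simulate_patient(rng) for _ in range(500)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
```

    With survive * regrow = 1 the clone dynamics are critical: many clones go extinct while a few grow large, so the patient-level distribution of pre-malignant cells is strongly over-dispersed relative to a Poisson with the same mean.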

  8. Characterizing the topology of probabilistic biological networks.

    PubMed

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-01-01

    Biological interactions are often uncertain events that may or may not take place with some probability. This uncertainty leads to a massive number of alternative interaction topologies for each such network. The existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. In this paper, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. Using our mathematical representation, we develop a method that can accurately describe the degree distribution of such networks. We also take one more step and extend our method to accurately compute the joint-degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in polynomial time in the number of interactions. Our method works quickly even for entire protein-protein interaction (PPI) networks. It also helps us find an adequate mathematical model using maximum likelihood estimation (MLE). We perform a comparative study of node-degree and joint-degree distributions in two types of biological networks: the classical deterministic networks and the more flexible probabilistic networks. Our results confirm that power-law and log-normal models best describe degree distributions for both probabilistic and deterministic networks. Moreover, the inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected. We also show that probabilistic networks are more robust for node-degree distribution computation than the deterministic ones. All the data sets used, the software implemented, and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/projects/probNet/.
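
    The polynomial-time idea can be illustrated for a single node: with n uncertain incident edges there are 2^n possible topologies, but the node's degree follows a Poisson-binomial distribution that a simple O(n^2) dynamic program computes exactly. This is a generic sketch of that principle, not the authors' implementation.

```python
def degree_distribution(edge_probs):
    """Exact degree distribution of a node whose incident edges exist
    independently with the given probabilities (a Poisson-binomial),
    built edge by edge instead of enumerating all 2^n topologies."""
    dist = [1.0]  # P(degree = 0) before considering any edge
    for p in edge_probs:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1.0 - p)   # this edge absent
            new[k + 1] += q * p       # this edge present
        dist = new
    return dist

# A node with three uncertain interactions of differing confidence.
dist = degree_distribution([0.9, 0.5, 0.1])
```

    The expected degree is simply the sum of the edge probabilities (here 1.5), and the full distribution shows how much the uncertainty spreads the degree around that mean.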

  9. A random walk on water (Henry Darcy Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2009-04-01

    Randomness and uncertainty had been well appreciated in hydrology and water resources engineering in their initial steps as scientific disciplines. However, this changed through the years and, following other geosciences, hydrology adopted a naïve view of randomness in natural processes. Such a view separates natural phenomena into two mutually exclusive types, random or stochastic, and deterministic. When a classification of a specific process into one of these two types fails, then a separation of the process into two different, usually additive, parts is typically devised, each of which may be further subdivided into subparts (e.g., deterministic subparts such as periodic and aperiodic or trends). This dichotomous logic is typically combined with a Manichean perception, in which the deterministic part supposedly represents cause-effect relationships and thus is physics and science (the "good"), whereas randomness has little relationship with science and no relationship with understanding (the "evil"). Probability theory and statistics, which traditionally provided the tools for dealing with randomness and uncertainty, have been regarded by some as the "necessary evil" but not as an essential part of hydrology and geophysics. Some took a step further to banish them from hydrology, replacing them with deterministic sensitivity analysis and fuzzy-logic representations. Others attempted to demonstrate that irregular fluctuations observed in natural processes are au fond manifestations of underlying chaotic deterministic dynamics with low dimensionality, thus attempting to render probabilistic descriptions unnecessary. Some of the above recent developments are simply flawed because they make erroneous use of probability and statistics (which, remarkably, provide the tools for such analyses), whereas the entire underlying logic is just a false dichotomy.
To see this, it suffices to recall that Pierre Simon Laplace, perhaps the most famous proponent of determinism in the history of philosophy of science (cf. Laplace's demon), is, at the same time, one of the founders of probability theory, which he regarded as "nothing but common sense reduced to calculation". This harmonizes with James Clerk Maxwell's view that "the true logic for this world is the calculus of Probabilities" and was more recently and epigrammatically formulated in the title of Edwin Thompson Jaynes's book "Probability Theory: The Logic of Science" (2003). Abandoning dichotomous logic, either on ontological or epistemic grounds, we can identify randomness or stochasticity with unpredictability. Admitting that (a) uncertainty is an intrinsic property of nature; (b) causality implies dependence of natural processes in time and thus suggests predictability; but, (c) even the tiniest uncertainty (e.g., in initial conditions) may result in unpredictability after a certain time horizon, we may shape a stochastic representation of natural processes that is consistent with Karl Popper's indeterministic world view. In this representation, probability quantifies uncertainty according to the Kolmogorov system, in which probability is a normalized measure, i.e., a function that maps sets (areas where the initial conditions or the parameter values lie) to real numbers (in the interval [0, 1]). In such a representation, predictability (suggested by deterministic laws) and unpredictability (randomness) coexist, are not separable or additive components, and it is a matter of specifying the time horizon of prediction to decide which of the two dominates. An elementary numerical example has been devised to illustrate the above ideas and demonstrate that they offer a pragmatic and useful guide for practice, rather than just pertaining to philosophical discussions. 
A chaotic model, with fully and a priori known deterministic dynamics and deterministic inputs (without any random agent), is assumed to represent the hydrological balance in an area partly covered by vegetation. Experimentation with this toy model demonstrates, inter alia, that: (1) for short time horizons the deterministic dynamics is able to give good predictions; but (2) these predictions become extremely inaccurate and useless for long time horizons; (3) for such horizons a naïve statistical prediction (average of past data) which fully neglects the deterministic dynamics is more skilful; and (4) if this statistical prediction, in addition to past data, is combined with the probability theory (the principle of maximum entropy, in particular), it can provide a more informative prediction. Also, the toy model shows that the trajectories of the system state (and derivative properties thereof) do not resemble a regular (e.g., periodic) deterministic process nor a purely random process, but exhibit patterns indicating anti-persistence and persistence (where the latter statistically complies with a Hurst-Kolmogorov behaviour). If the process is averaged over long time scales, the anti-persistent behaviour improves predictability, whereas the persistent behaviour substantially deteriorates it. A stochastic representation of this deterministic system, which incorporates dynamics, is not only possible, but also powerful as it provides good predictions for both short and long horizons and helps to decide on when the deterministic dynamics should be considered or neglected. Obviously, a natural system is extremely more complex than this simple toy model and hence unpredictability is naturally even more prominent in the former. 
    In addition, in a complex natural system, we can never know the exact dynamics and we must infer it from past data, which implies additional uncertainty and an additional role of stochastics in the process of formulating the system equations and estimating the involved parameters. Data also offer the only solid grounds to test any hypothesis about the dynamics, and failure of performing such testing against evidence from data renders the hypothesised dynamics worthless. If this perception of natural phenomena is adequately plausible, then it may help in studying interesting fundamental questions regarding the current state and the trends of hydrological and water resources research and their promising future paths. For instance: (i) Will it ever be possible to achieve a fully "physically based" modelling of hydrological systems that will not depend on data or stochastic representations? (ii) To what extent can hydrological uncertainty be reduced and what are the effective means for such reduction? (iii) Are current stochastic methods in hydrology consistent with observed natural behaviours? What paths should we explore for their advancement? (iv) Can deterministic methods provide solid scientific grounds for water resources engineering and management? In particular, can there be risk-free hydraulic engineering and water management? (v) Is the current (particularly important) interface between hydrology and climate satisfactory? In particular, should hydrology rely on climate models that are not properly validated (i.e., for periods and scales not used in calibration)? In effect, is the evolution of climate and its impacts on water resources deterministically predictable?
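
    A toy sketch in the spirit of the lecture's numerical example, using the fully deterministic logistic map as a stand-in for the lecture's hydrological toy model: a forecast run from slightly uncertain initial conditions is excellent at short horizons, while at long horizons the naive "climatological" forecast (the long-run mean) is expected to be the more skilful predictor.

```python
def logistic(x, steps):
    """Fully deterministic chaotic dynamics: x -> 4x(1-x)."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

def rmse(errors):
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

starts = [0.01 * i + 0.005 for i in range(1, 90)]  # many initial conditions
EPS = 1e-9  # tiny uncertainty in the initial condition

# Deterministic forecast: run the exact dynamics from the perturbed state.
det_short = rmse([logistic(x, 3) - logistic(x + EPS, 3) for x in starts])
det_long = rmse([logistic(x, 80) - logistic(x + EPS, 80) for x in starts])

# Naive statistical forecast: always predict the long-run mean, 0.5.
clim_short = rmse([logistic(x, 3) - 0.5 for x in starts])
clim_long = rmse([logistic(x, 80) - 0.5 for x in starts])
```

    The tiny initial error is amplified exponentially, so predictability (deterministic laws) dominates at short horizons and unpredictability (stochastics) at long ones; only the prediction horizon decides which description is more useful.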

  10. Intelligent Manufacturing of Commercial Optics Final Report CRADA No. TC-0313-92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, J. S.; Pollicove, H.

    The project combined the research and development efforts of LLNL and the University of Rochester Center for Optics Manufacturing (COM) to develop a new generation of flexible, computer-controlled optics grinding machines. COM's principal near-term development effort is to commercialize the OPTICAM-SM, a new prototype spherical grinding machine. A crucial requirement for commercializing the OPTICAM-SM is the development of a predictable and repeatable material removal process (deterministic micro-grinding) that yields high-quality surfaces and minimizes non-deterministic polishing. OPTICAM machine tools and the fabrication process development studies are part of COM's response to the DOD (ARPA) request to implement a modernization strategy for revitalizing the U.S. optics manufacturing base. This project was entered into in order to develop a new generation of flexible, computer-controlled optics grinding machines.

  11. Converting differential-equation models of biological systems to membrane computing.

    PubMed

    Muniyandi, Ravie Chandren; Zin, Abdullah Mohd; Sanders, J W

    2013-12-01

    This paper presents a method to convert the deterministic, continuous representation of a biological system by ordinary differential equations into a non-deterministic, discrete membrane computation. The dynamics of the membrane computation is governed by rewrite rules operating at certain rates. This has the advantage of applying accurately to small systems, and of expressing rates of change that are determined locally, by region, but not necessarily globally. Such spatial information augments the standard differentiable approach to provide a more realistic model. A biological case study of the ligand-receptor network of protein TGF-β is used to validate the effectiveness of the conversion method. It demonstrates the sense in which the behaviours and properties of the system are better preserved in the membrane computing model, suggesting that the proposed conversion method may prove useful for biological systems in particular. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
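
    The flavor of such a conversion can be sketched on a toy reaction (simple degradation, not the paper's TGF-β network): the ODE dA/dt = -kA becomes a rewrite rule A -> (nothing) firing at propensity kA, here simulated with Gillespie's direct method as a stand-in for stochastic rule application in a membrane system.

```python
import math
import random

def ode_decay(a0, k, t):
    """Deterministic continuous model: dA/dt = -k*A, solved exactly."""
    return a0 * math.exp(-k * t)

def rewrite_decay(a0, k, t_end, rng):
    """Discrete stochastic counterpart: the rule A -> (nothing) fires
    at total rate k * A (Gillespie's direct method)."""
    a, t = a0, 0.0
    while a > 0:
        t += rng.expovariate(k * a)  # waiting time to the next rule firing
        if t > t_end:
            break
        a -= 1
    return a

rng = random.Random(7)
runs = [rewrite_decay(1000, 0.5, 2.0, rng) for _ in range(200)]
stochastic_mean = sum(runs) / len(runs)
deterministic = ode_decay(1000, 0.5, 2.0)
```

    For large molecule counts the stochastic runs average out to the ODE solution, but each individual run fluctuates; it is exactly this discreteness and variability that the differential-equation representation cannot express for small systems.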

  12. Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model

    NASA Astrophysics Data System (ADS)

    Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon

    Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded in the most prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.

  13. Deterministic control of radiative processes by shaping the mode field

    NASA Astrophysics Data System (ADS)

    Pellegrino, D.; Pagliano, F.; Genco, A.; Petruzzella, M.; van Otten, F. W.; Fiore, A.

    2018-04-01

    Quantum dots (QDs) interacting with confined light fields in photonic crystal cavities represent a scalable light source for the generation of single photons and laser radiation in the solid-state platform. The complete control of light-matter interaction in these sources is needed to fully exploit their potential, but it has been challenging due to the small length scales involved. In this work, we experimentally demonstrate the control of the radiative interaction between InAs QDs and one mode of three coupled nanocavities. By non-locally moulding the mode field experienced by the QDs inside one of the cavities, we are able to deterministically tune, and even inhibit, the spontaneous emission into the mode. The presented method will enable the real-time switching of Rabi oscillations, the shaping of the temporal waveform of single photons, and the implementation of unexplored nanolaser modulation schemes.

  14. History matching of a complex epidemiological model of human immunodeficiency virus transmission by using variance emulation.

    PubMed

    Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G

    2017-08-01

    Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
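
    The core of the adaptation can be sketched with the standard history-matching implausibility measure, in which the emulated, input-dependent stochastic variance enters the denominator alongside the other uncertainty terms. The numbers below are illustrative; in the real method both the emulator mean and the variance are themselves estimated from simulator runs.

```python
def implausibility(z, em_mean, em_var, model_disc=0.0, obs_var=0.0):
    """History-matching implausibility: standardized distance between the
    observation z and the emulator's predicted simulator output, with the
    emulated (input-dependent) variance added to model discrepancy and
    observation error."""
    return abs(z - em_mean) / (em_var + model_disc + obs_var) ** 0.5

# Same emulator mean at two candidate inputs; only the emulated variance
# of the stochastic simulator differs between them.
quiet = implausibility(z=10.0, em_mean=16.0, em_var=4.0)
noisy = implausibility(z=10.0, em_mean=16.0, em_var=36.0)
```

    With the conventional cutoff of 3, the low-variance input is ruled implausible while the high-variance input survives to the next wave: ignoring the input-dependence of the variance would treat both inputs identically and discard plausible parameter space.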

  15. Classification and Discrimination of Sources with Time-Varying Frequency and Spatial Spectra

    DTIC Science & Technology

    2007-04-01

    Masking specific time-frequency points or regions where one or more signals reside enhances the signal-to-noise ratio (SNR) and allows source discrimination and source separation. The proposed algorithm is developed assuming deterministic signals with additive white complex Gaussian noise.

  16. Synthesis, crystal structure and spectroscopy of bioactive Cd(II) polymeric complex of the non-steroidal anti-inflammatory drug diclofenac sodium: Antiproliferative and biological activity

    NASA Astrophysics Data System (ADS)

    Tabrizi, Leila; Chiniforoshan, Hossein; McArdle, Patrick

    2015-02-01

    The interaction of Cd(II) with the non-steroidal anti-inflammatory drug diclofenac sodium (Dic) leads to the formation of the complex [Cd2(L)4·1.5(MeOH)·2(H2O)]n (L = Dic), 1, which has been isolated and structurally characterized by X-ray crystallography. Diclofenac sodium and its metal complex 1 have also been evaluated for antiproliferative activity in vitro against the cells of three human cancer cell lines, MCF-7 (breast cancer cell line), T24 (bladder cancer cell line), A-549 (non-small cell lung carcinoma), and a mouse fibroblast L-929 cell line. The results of cytotoxic activity in vitro, expressed as IC50 values, indicated that diclofenac sodium and cadmium chloride are inactive or less active than the metal complex of diclofenac (1). Complex 1 was also found to be a more potent cytotoxic agent against T24 and MCF-7 cancer cell lines than the prevalent benchmark metallodrug, cisplatin, under the same experimental conditions. The superoxide dismutase activity was measured by the Fridovich test, which showed that complex 1 has a low value in comparison with Cu complexes. The binding properties of this complex to biomolecules, bovine or human serum albumin, are presented and evaluated. Antibacterial and growth inhibitory activity is also higher than that of the parent ligand compound.

  17. Probabilistic vs. deterministic fiber tracking and the influence of different seed regions to delineate cerebellar-thalamic fibers in deep brain stimulation.

    PubMed

    Schlaier, Juergen R; Beer, Anton L; Faltermeier, Rupert; Fellner, Claudia; Steib, Kathrin; Lange, Max; Greenlee, Mark W; Brawanski, Alexander T; Anthofer, Judith M

    2017-06-01

    This study compared tractography approaches for identifying cerebellar-thalamic fiber bundles relevant to planning target sites for deep brain stimulation (DBS). In particular, probabilistic and deterministic tracking of the dentate-rubro-thalamic tract (DRTT) and differences between the spatial courses of the DRTT and the cerebello-thalamo-cortical (CTC) tract were compared. Six patients with movement disorders were examined by magnetic resonance imaging (MRI), including two sets of diffusion-weighted images (12 and 64 directions). Probabilistic and deterministic tractography was applied on each diffusion-weighted dataset to delineate the DRTT. Results were compared with regard to their sensitivity in revealing the DRTT and additional fiber tracts, and with regard to processing time. Two sets of regions-of-interest (ROIs) guided deterministic tractography of the DRTT or the CTC, respectively. Tract distances to an atlas-based reference target were compared. Probabilistic fiber tracking with 64 orientations detected the DRTT in all twelve hemispheres. Deterministic tracking detected the DRTT in nine (12 directions) and in only two (64 directions) hemispheres. Probabilistic tracking was more sensitive in detecting additional fibers (e.g. ansa lenticularis and medial forebrain bundle) than deterministic tracking. Probabilistic tracking took substantially longer than deterministic tracking. Deterministic tracking was more sensitive in detecting the CTC than the DRTT. CTC tracts were located adjacent but consistently more posterior to DRTT tracts. These results suggest that probabilistic tracking is more sensitive and robust in detecting the DRTT but harder to implement than deterministic approaches. Although the sensitivity of deterministic tracking is higher for the CTC than the DRTT, targets for DBS based on these tracts likely differ. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. Laser targets compensate for limitations in inertial confinement fusion drivers

    NASA Astrophysics Data System (ADS)

    Kilkenny, J. D.; Alexander, N. B.; Nikroo, A.; Steinman, D. A.; Nobile, A.; Bernat, T.; Cook, R.; Letts, S.; Takagi, M.; Harding, D.

    2005-10-01

    Success in inertial confinement fusion (ICF) requires sophisticated, characterized targets. The increasing fidelity of three-dimensional (3D), radiation hydrodynamic computer codes has made it possible to design targets for ICF which can compensate for limitations in the existing single shot laser and Z pinch ICF drivers. Developments in ICF target fabrication technology allow more esoteric target designs to be fabricated. Present requirements call for new deterministic nano-material fabrication on the micro scale.

  19. Inclusion of Multiple Functional Types in an Automaton Model of Bioturbation and Their Effects on Sediments Properties

    DTIC Science & Technology

    2007-09-30

    if the traditional models adequately parameterize and characterize the actual mixing. As an example of the application of this method, we have... (2) Deterministic Modelling Results. As noted above, we are working on a stochastic method of modelling transient and short-lived tracers... heterogeneity. RELATED PROJECTS: We have worked in collaboration with Peter Jumars (Univ. Maine) and his PhD student Kelley Dorgan, who are measuring

  20. Radial variations of large-scale magnetohydrodynamic fluctuations in the solar wind

    NASA Technical Reports Server (NTRS)

    Burlaga, L. F.; Goldstein, M. L.

    1983-01-01

    Two time periods are studied for which comprehensive data coverage is available at both 1 AU using IMP-8 and ISEE-3 and beyond using Voyager 1. One of these periods is characterized by the predominance of corotating stream interactions. Relatively small scale transient flows characterize the second period. The evolution of these flows with heliocentric distance is studied using power spectral techniques. The evolution of the transient dominated period is consistent with the hypothesis of turbulent evolution including an inverse cascade of large scales. The evolution of the corotating period is consistent with the entrainment of slow streams by faster streams in a deterministic model.

  1. Used Nuclear Fuel Loading and Structural Performance Under Normal Conditions of Transport- Demonstration of Approach and Results on Used Fuel Performance Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adkins, Harold; Geelhood, Ken; Koeppel, Brian

    2013-09-30

    This document addresses Oak Ridge National Laboratory milestone M2FT-13OR0822015, Demonstration of Approach and Results on Used Nuclear Fuel Performance Characterization. This report provides results of the initial demonstration of the modeling capability developed to perform preliminary deterministic evaluations of moderate-to-high burnup used nuclear fuel (UNF) mechanical performance under normal conditions of storage (NCS) and normal conditions of transport (NCT). This report also provides results from the sensitivity studies that have been performed. Finally, the long-term goals and objectives of this initiative are discussed.

  2. Perfection and complexity in the lower Brazos River

    NASA Astrophysics Data System (ADS)

    Phillips, Jonathan D.

    2007-11-01

    The "perfect landscape" concept is based on the notion that any specific geomorphic system represents the combined, interacting effects of a set of generally applicable global laws and a set of geographically and historically contingent local controls. Because the joint probability of any specific combination of local and global controls is low, and the local controls are inherently idiosyncratic, the probability of existence of any given landscape is vanishingly small. A perfect landscape approach to geomorphic complexity views landscapes as circumstantial, contingent outcomes of deterministic laws operating in a specific environmental and historical context. Thus, explaining evolution of complex landscapes requires the integration of global and local approaches. Because perfection in this sense is the most important and pervasive form of complexity, the study of geomorphic complexity is not restricted to nonlinear dynamics, self-organization, or any other aspects of complexity theory. Beyond what can be achieved via complexity theory, the details of historical and geographic contexts must be addressed. One way to approach this is via synoptic analyses, where the relevant global laws are applied in specific situational contexts. A study of non-acute tributary junctions in the lower Brazos River, Texas illustrates this strategy. The application of generalizations about tributary junction angles, and of relevant theories, does not explain the unexpectedly high occurrence or the specific instances of barbed or straight junctions in the study area. At least five different causes for the development of straight or obtuse junction angles are evident in the lower Brazos. The dominant mechanism, however, is associated with river bank erosion and lateral channel migration which encroaches on upstream-oriented reaches of meandering tributaries. 
Because the tributaries are generally strongly incised in response to Holocene incision of the Brazos, the junctions are not readily reoriented to the expected acute angle. The findings are interpreted in the context of nonlinear divergent evolution, geographical and historical contingency, synoptic frameworks for generalizing results, and applicability of the dominant processes concept in geomorphology.

  3. Distributed Soil Moisture Estimation in a Mountainous Semiarid Basin: Constraining Soil Parameter Uncertainty through Field Studies

    NASA Astrophysics Data System (ADS)

    Yatheendradas, S.; Vivoni, E.

    2007-12-01

    A common practice in distributed hydrological modeling is to assign soil hydraulic properties based on coarse textural datasets. For semiarid regions with poor soil information, the performance of a model can be severely constrained due to the high model sensitivity to near-surface soil characteristics. Neglecting the uncertainty in soil hydraulic properties, their spatial variation and their naturally-occurring horizonation can potentially affect the modeled hydrological response. In this study, we investigate such effects using the TIN-based Real-time Integrated Basin Simulator (tRIBS) applied to the mid-sized (100 km2) Sierra Los Locos watershed in northern Sonora, Mexico. The Sierra Los Locos basin is characterized by complex mountainous terrain leading to topographic organization of soil characteristics and ecosystem distributions. We focus on simulations during the 2004 North American Monsoon Experiment (NAME) when intensive soil moisture measurements and aircraft-based soil moisture retrievals are available in the basin. Our experiments focus on soil moisture comparisons at the point, topographic transect and basin scales using a range of different soil characterizations. We compare the distributed soil moisture estimates obtained using (1) a deterministic simulation based on soil texture from coarse soil maps, (2) a set of ensemble simulations that capture soil parameter uncertainty and their spatial distribution, and (3) a set of simulations that conditions the ensemble on recent soil profile measurements. Uncertainties considered in near-surface soil characterization provide insights into their influence on the modeled uncertainty, into the value of soil profile observations, and into effective use of on-going field observations for constraining the soil moisture response uncertainty.

  4. Tensorial Minkowski functionals of triply periodic minimal surfaces

    PubMed Central

    Mickel, Walter; Schröder-Turk, Gerd E.; Mecke, Klaus

    2012-01-01

    A fundamental understanding of the formation and properties of a complex spatial structure relies on robust quantitative tools to characterize morphology. A systematic approach to the characterization of average properties of anisotropic complex interfacial geometries is provided by integral geometry which furnishes a family of morphological descriptors known as tensorial Minkowski functionals. These functionals are curvature-weighted integrals of tensor products of position vectors and surface normal vectors over the interfacial surface. We here demonstrate their use by application to non-cubic triply periodic minimal surface model geometries, whose Weierstrass parametrizations allow for accurate numerical computation of the Minkowski tensors. PMID:24098847
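
    For a polyhedral interface the curvature-free member of this family reduces to a sum over flat faces, which makes a minimal numerical sketch possible without the Weierstrass machinery used in the paper. The 1/3 normalization prefactor below is one common convention and, like the choice of a unit cube as test geometry, is an illustrative assumption:

```python
import numpy as np

# Outward unit normals and areas of the six faces of the unit cube; for a
# polyhedron the surface integral reduces to a sum over flat faces.
normals = np.array([[1, 0, 0], [-1, 0, 0],
                    [0, 1, 0], [0, -1, 0],
                    [0, 0, 1], [0, 0, -1]], dtype=float)
areas = np.ones(6)

# Tensorial Minkowski functional W1^{0,2} = (1/3) * integral of n (x) n dA.
W1_02 = sum(a * np.outer(n, n) for n, a in zip(normals, areas)) / 3.0

# Degree of anisotropy: ratio of extreme eigenvalues (1 for the isotropic cube).
eigvals = np.linalg.eigvalsh(W1_02)
beta = eigvals.min() / eigvals.max()
```

    For the cube the result is the isotropic tensor (2/3)·I, so the eigenvalue ratio is 1; an elongated geometry would give a ratio below 1, which is how these tensors quantify interfacial anisotropy.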

  5. Hardware-software face detection system based on multi-block local binary patterns

    NASA Astrophysics Data System (ADS)

    Acasandrei, Laurentiu; Barriga, Angel

    2015-03-01

    Face detection is an important aspect of biometrics, video surveillance and human-computer interaction. Due to the complexity of the detection algorithms, any face detection system requires a huge amount of computational and memory resources. In this communication an accelerated implementation of the MB-LBP (multi-block local binary pattern) face detection algorithm targeting low-frequency, low-memory and low-power embedded systems is presented. The resulting implementation is time-deterministic and uses a customizable AMBA IP hardware accelerator. The IP implements the kernel operations of the MB-LBP algorithm and can be used as a universal accelerator for MB-LBP-based applications. The IP employs 8 parallel MB-LBP feature-evaluator cores, uses a deterministic bandwidth, has a low area profile, and consumes ~95 mW on a Virtex5 XC5VLX50T. The overall acceleration gain is between 5 and 8 times, while the hardware MB-LBP feature-evaluation gain is between 69 and 139 times.
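
    The kernel operation accelerated by the IP can be sketched in software, assuming the standard MB-LBP formulation: block means computed via an integral image, with the eight surrounding blocks thresholded against the central block to form an 8-bit code. The block sizes, grid placement, and clockwise bit order here are illustrative choices, not the hardware's:

```python
import numpy as np

def integral_image(img):
    # Summed-area table with a zero row/column prefix for easy block sums.
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def block_sum(ii, y, x, h, w):
    # Sum of the h-by-w block whose top-left pixel is (y, x).
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def mb_lbp(img, y, x, bh, bw):
    """8-bit MB-LBP code for the 3x3 grid of bh-by-bw blocks at (y, x)."""
    ii = integral_image(img.astype(np.int64))
    means = np.array([[block_sum(ii, y + r * bh, x + c * bw, bh, bw)
                       for c in range(3)] for r in range(3)]) / (bh * bw)
    center = means[1, 1]
    # Clockwise neighbour order starting at the top-left block.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if means[r, c] >= center else 0 for r, c in order]
    return sum(b << i for i, b in enumerate(bits))
```

    A uniform image yields code 255 (every block mean ties the centre), while a bright centre block with dark surroundings yields code 0; a cascade classifier would look such codes up in trained tables.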

  6. Quantum resonant activation.

    PubMed

    Magazzù, Luca; Hänggi, Peter; Spagnolo, Bernardo; Valenti, Davide

    2017-04-01

    Quantum resonant activation is investigated for the archetype setup of an externally driven two-state (spin-boson) system subjected to strong dissipation by means of both analytical and extensive numerical calculations. The phenomenon of resonant activation emerges in the presence of either randomly fluctuating or deterministic periodically varying driving fields. Addressing the incoherent regime, a characteristic minimum emerges in the mean first passage time to reach an absorbing neighboring state whenever the intrinsic time scale of the modulation matches the characteristic time scale of the system dynamics. For the case of deterministic periodic driving, the first passage time probability density function (pdf) displays a complex, multipeaked behavior, which depends crucially on the details of initial phase, frequency, and strength of the driving. As an interesting feature we find that the mean first passage time enters the resonant activation regime at a critical frequency ν^{*} which depends very weakly on the strength of the driving. Moreover, we provide the relation between the first passage time pdf and the statistics of residence times.

  7. Asteroids - the modern challenge of celestial dynamics

    NASA Astrophysics Data System (ADS)

    Dikova, Smiliana

    2002-11-01

    Among the most powerful statements in science are those that mark absolute limits to knowledge. For example, Relativity and Quantum Theory touched the limits of speed and accuracy. Deterministic Chaos, the new scientific paradigm of our days, also falls into this class of theories. Chaos means complexity in space and unpredictability in time. It exposes the limits of our basic computational methods and leads to a limited predictability of long-term dynamical evolution. Perhaps for that reason, in 1986 Sir James Lighthill remarked on behalf of all physicists: "We collectively wish to apologize for having misled the general educated public by spreading ideas about the determinism of systems satisfying Newton's laws of motion that, after 1960, were proved incorrect." Our main thesis is that asteroid dynamics is the arena where the drama of chaos versus predictability is initiated and developed. The aim of the present research is to show the way in which Deterministic Chaos restricts the long-term dynamical predictability of asteroid motions.

  8. Evaluation of electromagnetic interference and exposure assessment from s-health solutions based on Wi-Fi devices.

    PubMed

    de Miguel-Bilbao, Silvia; Aguirre, Erik; Lopez Iturri, Peio; Azpilicueta, Leire; Roldán, José; Falcone, Francisco; Ramos, Victoria

    2015-01-01

    In the last decade the number of wireless devices operating at the frequency band of 2.4 GHz has increased in several settings, such as healthcare, occupational, and household. In this work, the emissions from Wi-Fi transceivers applicable to context aware scenarios are analyzed in terms of potential interference and assessment on exposure guideline compliance. Near field measurement results as well as deterministic simulation results on realistic indoor environments are presented, providing insight on the interaction between the Wi-Fi transceiver and implantable/body area network devices as well as other transceivers operating within an indoor environment, exhibiting topological and morphological complexity. By following approaches (near field estimation/deterministic estimation), colocated body situations as well as large indoor emissions can be determined. The results show in general compliance with exposure levels and the impact of overall network deployment, which can be optimized in order to reduce overall interference levels while maximizing system performance.

  9. Evaluation of Electromagnetic Interference and Exposure Assessment from s-Health Solutions Based on Wi-Fi Devices

    PubMed Central

    de Miguel-Bilbao, Silvia; Aguirre, Erik; Lopez Iturri, Peio; Azpilicueta, Leire; Roldán, José; Falcone, Francisco; Ramos, Victoria

    2015-01-01

    In the last decade the number of wireless devices operating at the frequency band of 2.4 GHz has increased in several settings, such as healthcare, occupational, and household. In this work, the emissions from Wi-Fi transceivers applicable to context aware scenarios are analyzed in terms of potential interference and assessment on exposure guideline compliance. Near field measurement results as well as deterministic simulation results on realistic indoor environments are presented, providing insight on the interaction between the Wi-Fi transceiver and implantable/body area network devices as well as other transceivers operating within an indoor environment, exhibiting topological and morphological complexity. By following approaches (near field estimation/deterministic estimation), colocated body situations as well as large indoor emissions can be determined. The results show in general compliance with exposure levels and the impact of overall network deployment, which can be optimized in order to reduce overall interference levels while maximizing system performance. PMID:25632400

  10. Quantum resonant activation

    NASA Astrophysics Data System (ADS)

    Magazzù, Luca; Hänggi, Peter; Spagnolo, Bernardo; Valenti, Davide

    2017-04-01

    Quantum resonant activation is investigated for the archetype setup of an externally driven two-state (spin-boson) system subjected to strong dissipation by means of both analytical and extensive numerical calculations. The phenomenon of resonant activation emerges in the presence of either randomly fluctuating or deterministic periodically varying driving fields. Addressing the incoherent regime, a characteristic minimum emerges in the mean first passage time to reach an absorbing neighboring state whenever the intrinsic time scale of the modulation matches the characteristic time scale of the system dynamics. For the case of deterministic periodic driving, the first passage time probability density function (pdf) displays a complex, multipeaked behavior, which depends crucially on the details of initial phase, frequency, and strength of the driving. As an interesting feature we find that the mean first passage time enters the resonant activation regime at a critical frequency ν* which depends very weakly on the strength of the driving. Moreover, we provide the relation between the first passage time pdf and the statistics of residence times.

  11. Fractals, Coherence and Brain Dynamics

    NASA Astrophysics Data System (ADS)

    Vitiello, Giuseppe

    2010-11-01

    I show that the self-similarity property of deterministic fractals provides a direct connection with the space of the entire analytical functions. Fractals are thus described in terms of coherent states in the Fock-Bargmann representation. Conversely, my discussion also provides insights on the geometrical properties of coherent states: it allows to recognize, in some specific sense, fractal properties of coherent states. In particular, the relation is exhibited between fractals and q-deformed coherent states. The connection with the squeezed coherent states is also displayed. In this connection, the non-commutative geometry arising from the fractal relation with squeezed coherent states is discussed and the fractal spectral properties are identified. I also briefly discuss the description of neuro-phenomenological data in terms of squeezed coherent states provided by the dissipative model of brain and consider the fact that laboratory observations have shown evidence that self-similarity characterizes the brain background activity. This suggests that a connection can be established between brain dynamics and the fractal self-similarity properties on the basis of the relation discussed in this report between fractals and squeezed coherent states. Finally, I do not consider in this paper the so-called random fractals, namely those fractals obtained by randomization processes introduced in their iterative generation. Since self-similarity is still a characterizing property in many of such random fractals, my conjecture is that also in such cases there must exist a connection with the coherent state algebraic structure. In condensed matter physics, in many cases the generation by the microscopic dynamics of some kind of coherent states is involved in the process of the emergence of mesoscopic/macroscopic patterns. 
The discussion presented in this paper suggests that also fractal generation may provide an example of emergence of global features, namely long range correlation at mesoscopic/macroscopic level, from microscopic local deformation processes. In view of the wide spectrum of application of both, fractal studies and coherent state physics, spanning from solid state physics to laser physics, quantum optics, complex dynamical systems and biological systems, the results presented in the present report may lead to interesting practical developments in many research sectors.

  12. Geometric state space uncertainty as a new type of uncertainty addressing disparity in 'emergent properties' between real and modeled systems

    NASA Astrophysics Data System (ADS)

    Montero, J. T.; Lintz, H. E.; Sharp, D.

    2013-12-01

    Do emergent properties that result from models of complex systems match emergent properties of real systems? This question targets a type of uncertainty that we argue requires more attention in system modeling and validation efforts. We define an 'emergent property' to be an attribute or behavior of a modeled or real system that can be surprising or unpredictable and that results from complex interactions among the components of a system. For example, thresholds are common across diverse systems and scales and can represent emergent system behavior that is difficult to predict. Thresholds or other types of emergent system behavior can be characterized by their geometry in state space (where state space is the space containing the set of all states of a dynamic system). One way to expedite our growing mechanistic understanding of how emergent properties arise from complex systems is to compare the geometry of surfaces in state space between real and modeled systems. Here, we present an index (threshold strength) that can quantify a geometric attribute of a surface in state space. We operationally define threshold strength as how strongly a surface in state space resembles a step or an abrupt transition between two system states. First, we validated the index for application in more than three dimensions of state space using simulated data. Then, we demonstrated application of the index in measuring geometric state space uncertainty between a real system and a deterministic modeled system. In particular, we examined geometric state space uncertainty between observed 20th century climate behavior and climate behavior simulated by global climate models (GCMs) in the Coupled Model Intercomparison Project phase 5 (CMIP5). Surfaces from the climate models came from running the models over the same domain as the real data. 
We also created response surfaces from real climate data using an empirical model that produces a geometric surface of predicted values in state space. We used a kernel regression method designed to capture the geometry of real data patterns without imposing shape assumptions a priori on the data; this method is known as Non-parametric Multiplicative Regression (NPMR). We found that quantifying and comparing a geometric attribute in more than three dimensions of state space can discern whether the emergent nature of complex interactions in modeled systems matches that of real systems. Further, this method has potentially wider application in contexts where searching for abrupt change or 'action' in any hyperspace is desired.
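
    The paper defines its threshold-strength index operationally rather than by formula, so the sketch below is a hypothetical one-dimensional stand-in for the idea: compare the peak local slope of a response curve against the slope of a linear ramp spanning the same range, so that a straight line scores 1 and a step-like transition scores much higher:

```python
import numpy as np

def threshold_strength(x, y):
    """Illustrative 1-D index: peak local slope divided by the slope of a
    linear ramp spanning the same range. A straight line scores 1;
    a step-like transition scores much higher."""
    peak_slope = np.abs(np.gradient(y, x)).max()
    ramp_slope = (y.max() - y.min()) / (x.max() - x.min())
    return peak_slope / ramp_slope

x = np.linspace(-1.0, 1.0, 201)
linear = x.copy()               # no threshold: strength of exactly 1
step_like = np.tanh(20.0 * x)   # abrupt transition: strength >> 1
```

    The real index generalizes this comparison to surfaces in hyperspaces of more than three dimensions, where the "ramp" baseline becomes a gradually varying response surface.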

  13. Heme versus non-heme iron-nitroxyl {FeN(H)O}⁸ complexes: electronic structure and biologically relevant reactivity.

    PubMed

    Speelman, Amy L; Lehnert, Nicolai

    2014-04-15

    Researchers have completed extensive studies on heme and non-heme iron-nitrosyl complexes, which are labeled {FeNO}(7) in the Enemark-Feltham notation, but they have had very limited success in producing corresponding, one-electron reduced, {FeNO}(8) complexes where a nitroxyl anion (NO(-)) is formally bound to an iron(II) center. These complexes, and their protonated iron(II)-NHO analogues, are proposed key intermediates in nitrite (NO2(-)) and nitric oxide (NO) reducing enzymes in bacteria and fungi. In addition, HNO is known to have a variety of physiological effects, most notably in the cardiovascular system. HNO may also serve as a signaling molecule in mammals. For these functions, iron-containing proteins may mediate the production of HNO and serve as receptors for HNO in vivo. In this Account, we highlight recent key advances in the preparation, spectroscopic characterization, and reactivity of ferrous heme and non-heme nitroxyl (NO(-)/HNO) complexes that have greatly enhanced our understanding of the potential biological roles of these species. Low-spin (ls) heme {FeNO}(7) complexes (S = 1/2) can be reversibly reduced to the corresponding {FeNO}(8) species, which are stable, diamagnetic compounds. Because the reduction is ligand (NO) centered in these cases, it occurs at extremely negative redox potentials that are at the edge of the biologically feasible range. Interestingly, the electronic structures of ls-{FeNO}(7) and ls-{FeNO}(8) species are strongly correlated with very similar frontier molecular orbitals (FMOs) and thermodynamically strong Fe-NO bonds. In contrast, high-spin (hs) non-heme {FeNO}(7) complexes (S = 3/2) can be reduced at relatively mild redox potentials. Here, the reduction is metal-centered and leads to a paramagnetic (S = 1) {FeNO}(8) complex. The increased electron density at the iron center in these species significantly decreases the covalency of the Fe-NO bond, making the reduced complexes highly reactive. 
In the absence of steric bulk, monomeric high-spin {FeNO}(8) complexes decompose rapidly. Notably, in a recently prepared, dimeric [{FeNO}(7)]2 species, we observed that reduction leads to rapid N-N bond formation and N2O generation, which directly models the reactivity of flavodiiron NO reductases (FNORs). We have also made key progress in the preparation and stabilization of corresponding HNO complexes, {FeNHO}(8), using both heme and non-heme ligand sets. In both cases, we have taken advantage of sterically bulky coligands to stabilize these species. ls-{FeNO}(8) complexes are basic and easily form corresponding ls-{FeNHO}(8) species, which, however, decompose rapidly via disproportionation and H2 release. Importantly, we recently showed that we can suppress this reaction via steric protection of the bound HNO ligand. As a result, we have demonstrated that ls-{FeNHO}(8) model complexes are stable and amenable to spectroscopic characterization. Neither ls-{FeNO}(8) nor ls-{FeNHO}(8) model complexes are active for N-N coupling and, hence, seem unsuitable as reactive intermediates in nitric oxide reductases (NORs). Hs-{FeNO}(8) complexes are more basic than their hs-{FeNO}(7) precursors, but their electronic structure and reactivity are not as well characterized.

  14. Synthesis, characterization and antimicrobial studies of Schiff base complexes

    NASA Astrophysics Data System (ADS)

    Zafar, Hina; Ahmad, Anis; Khan, Asad U.; Khan, Tahir Ali

    2015-10-01

    The Schiff base complexes MLCl2 [M = Fe(II), Co(II), Ni(II), Cu(II) and Zn(II)] have been synthesized by the template reaction of the respective metal ions with 2-acetylpyrrole and 1,3-diaminopropane in a 1:2:1 molar ratio. The complexes have been characterized by elemental analyses, ESI-mass, NMR (1H and 13C), IR, XRD, electronic and EPR spectral studies, magnetic susceptibility and molar conductance measurements. These studies show that all the complexes have an octahedral arrangement around the metal ions. The molar conductance measurements of all the complexes in DMSO indicate their non-electrolytic nature. The complexes were screened for their antibacterial activity in vitro against Gram-positive (Streptococcus pyogenes) and Gram-negative (Klebsiella pneumoniae) bacteria. Among the metal complexes studied, the copper complex [CuLCl2] showed the highest antibacterial activity, nearly equal to that of the standard drug ciprofloxacin. The other complexes also showed considerable antibacterial activity. The relative order of activity against S. pyogenes is Cu(II) > Zn(II) > Co(II) = Fe(II) > Ni(II), and against K. pneumoniae it is Cu(II) > Co(II) > Zn(II) > Fe(II) > Ni(II).

  15. Preparation, spectroscopic, thermal, antihepatotoxicity, hematological parameters and liver antioxidant capacity characterizations of Cd(II), Hg(II), and Pb(II) mononuclear complexes of paracetamol anti-inflammatory drug

    NASA Astrophysics Data System (ADS)

    El-Megharbel, Samy M.; Hamza, Reham Z.; Refat, Moamen S.

    2014-10-01

    Keeping in view that some metal complexes are more potent than their parent drugs, the present paper aimed to synthesize Cd(II), Hg(II) and Pb(II) complexes of the anti-inflammatory drug paracetamol (Para). Paracetamol complexes with the general formula [M(Para)2(H2O)2]·nH2O have been synthesized and characterized on the basis of elemental analysis, conductivity, IR, thermal (TG/DTG), 1H NMR and electronic spectral studies. The conductivity data indicate that these complexes are non-electrolytes. The antimicrobial (bacterial and fungal) behaviors and molecular weights of paracetamol and its complexes have been studied comparatively. In vivo, the antihepatotoxic effect and the levels of some liver function parameters (serum total protein, ALT, AST, and LDH) were measured. Hematological parameters and liver antioxidant capacities were determined for both Para and its complexes. The Cd(II)-Para complex showed amelioration of antioxidant capacity in liver homogenates compared with the groups treated with the other Para complexes.

  16. A series of novel oxovanadium(IV) complexes: Synthesis, spectral characterization and antimicrobial study

    NASA Astrophysics Data System (ADS)

    Sahani, M. K.; Pandey, S. K.; Pandey, O. P.; Sengupta, S. K.

    2014-09-01

    Oxovanadium(IV) complexes have been synthesized by reacting vanadyl sulfate with Schiff bases derived from 4-amino-5-(substituted phenoxyacetic acid)-1,2,4-triazole-3-thiol and benzil. All the complexes are soluble in DMF and DMSO, and their low molar conductance values indicate that they are non-electrolytes. The complexes were characterized by elemental analysis, spectral techniques (UV-Vis, IR, EPR and XRD) and magnetic moment measurements. The EPR spectra indicate that the unpaired electron is in the dxy orbital. The in vitro antifungal activity of the ligands and the synthesized compounds was determined against the fungi Aspergillus niger, Colletotrichum falcatum and Colletotrichum pallescence, and the in vitro antibacterial activity was determined by screening the compounds against Gram-negative (Escherichia coli and Salmonella typhi) and Gram-positive (Staphylococcus aureus and Bacillus subtilis) bacterial strains. The antimicrobial studies show that activity increases upon complexation.

  17. Specialized Silicon Compilers for Language Recognition.

    DTIC Science & Technology

    1984-07-01

    realizations of non-deterministic automata have been reported that solve these problems in different ways. Floyd and Ullman [28] have presented a ... in Applied Mathematics, pages 19-31. American Mathematical Society, 1967. [28] Floyd, R. W. and J. D. Ullman. The Compilation of Regular Expressions ... Shannon (editor). Automata Studies, chapter 1, pages 3-41. Princeton University Press, Princeton, N.J., 1956. [44] Kohavi, Zvi. Switching and Finite

  18. Deterministic quantum dense coding networks

    NASA Astrophysics Data System (ADS)

    Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal

    2018-07-01

    We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state - an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters as well as the four-qubit Dicke states can provide a quantum advantage of sending the information in deterministic dense coding. Interestingly however, numerical simulations in the three-qubit scenario reveal that the percentage of states from the GHZ-class that are deterministic dense codeable is higher than that of states from the W-class.

  19. Synthesis and spectral characterization of mono- and binuclear copper(II) complexes derived from 2-benzoylpyridine-N⁴-methyl-3-thiosemicarbazone: crystal structure of a novel sulfur bridged copper(II) box-dimer.

    PubMed

    Jayakumar, K; Sithambaresan, M; Aiswarya, N; Kurup, M R Prathapachandra

    2015-03-15

    Mononuclear and binuclear copper(II) complexes of 2-benzoylpyridine-N(4)-methyl thiosemicarbazone (HL) were prepared and characterized by a variety of spectroscopic techniques. Structural evidence for the novel sulfur bridged copper(II) iodo binuclear complex is obtained by single crystal X-ray diffraction analysis. The complex [Cu2L2I2], a non-centrosymmetric box dimer, crystallizes in monoclinic C2/c space group and it was found to have distorted square pyramidal geometry (Addison parameter, τ=0.238) with the square basal plane occupied by the thiosemicarbazone moiety and iodine atom whereas the sulfur atom from the other coordinated thiosemicarbazone moiety occupies the apical position. This is the first crystallographically studied system having non-centrosymmetrical entities bridged via thiolate S atoms with Cu(II)I bond. The tridentate thiosemicarbazone coordinates in mono deprotonated thionic tautomeric form in all complexes except in sulfato complex, [Cu(HL)(SO4)]·H2O (1) where it binds to the metal centre in neutral form. The magnetic moment values and the EPR spectral studies reflect the binuclearity of some of the complexes. The spin Hamiltonian and bonding parameters are calculated based on EPR studies. In all the complexes g||>g⊥>2.0023 and the g values in frozen DMF are consistent with the d(x2-y2) ground state. The thermal stabilities of some of the complexes were also determined. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. The linear and non-linear characterization of dust ion acoustic mode in complex plasma in presence of dynamical charging of dust

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharjee, Saurav, E-mail: sauravtsk.bhattacharjee@gmail.com; Das, Nilakshi

    2015-10-15

    A systematic theoretical investigation has been carried out on the role of dust charging dynamics in the nature and stability of the DIA (dust ion acoustic) mode in complex plasma. The study covers both the linear and non-linear regimes of the DIA mode. The observed results have been characterized in terms of the background plasma responses toward the dust surface that are responsible for dust charge fluctuation, invoking important dusty plasma parameters, especially the ion flow speed and dust size. The linear analyses confirm the nature of the instability in the DIA mode in the presence of dust charge fluctuation. The instability shows a damping of the DIA mode in the subsonic flow regime followed by a gradual growth of instability in the supersonic limit of ion flow. The strength of the non-linearity and its existence domain are found to be driven by different dusty plasma parameters. As dust is ubiquitous in the interstellar medium with a plasma background, the study also addresses the possible effect of dust charging dynamics on the gravito-electrostatic characterization and stability of dust molecular clouds, especially in proto-planetary discs. The observations are relevant and interesting for the understanding of dust settling mechanisms and the formation of dust environments in different regions of space.

  1. Characterizing muscular activities using non-negative matrix factorization from EMG channels for driver swings in golf.

    PubMed

    Ozaki, Yasunori; Aoki, Ryosuke; Kimura, Toshitaka; Takashima, Youichi; Yamada, Tomohiro

    2016-08-01

    The goal of this study is to propose a data-driven method to characterize the muscular activities of complex actions in sports, such as golf, from a large number of EMG channels. Two problems arise in many-channel measurements. The first is that inspecting the data from many channels is time-consuming because of combinatorial explosion. The second is that it is difficult to understand muscle activities related to complex actions. To solve these problems, we propose an analysis method for multiple EMG channels using Non-negative Matrix Factorization and apply it to driver swings in golf. We measured 26 EMG channels from 4 professional golf coaches. The results show that the proposed method detected 9 muscle synergies and that the activation of each synergy was well fitted by a sigmoid curve (R2=0.85).
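
    The decomposition step can be sketched with plain multiplicative-update NMF on a synthetic nonnegative "EMG envelope" matrix. Only the channel count (26) matches the study; the synthetic data, the Lee-Seung update rule, and the iteration budget are illustrative assumptions rather than the authors' pipeline:

```python
import numpy as np

def nmf(X, k, n_iter=500, seed=0):
    """Plain Lee-Seung multiplicative updates minimizing ||X - W H||_F."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k)) + 1e-3   # synergy activations over time
    H = rng.random((k, X.shape[1])) + 1e-3   # synergy-to-channel weights
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

rng = np.random.default_rng(1)
T, n_channels, n_synergies = 200, 26, 3
# Synthetic nonnegative envelope matrix with an exact rank-3 structure.
X = rng.random((T, n_synergies)) @ rng.random((n_synergies, n_channels))
W, H = nmf(X, n_synergies)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

    Each row of H summarizes which channels co-activate in a synergy, which is what reduces 26 channels to a handful of interpretable components.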

  2. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Michael

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. 
The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
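    The report's specialized covariance models and exact-arithmetic computations are not reproduced here, but the core operation they build on, Gaussian-process prediction from noisy observations, can be sketched in a few lines. Everything below (the squared-exponential covariance, the synthetic sine data, the noise level) is an illustrative assumption, not taken from the report:

```python
import numpy as np

def sq_exp_cov(t1, t2, variance=1.0, length=1.0):
    """Squared-exponential covariance k(t, t') = s^2 * exp(-(t - t')^2 / (2 l^2))."""
    d = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 10.0, 50)
y_obs = np.sin(t_obs) + 0.1 * rng.standard_normal(t_obs.size)  # noisy observations

t_new = np.array([2.5, 7.5])                 # prediction locations
noise = 0.1 ** 2
K = sq_exp_cov(t_obs, t_obs) + noise * np.eye(t_obs.size)
k_star = sq_exp_cov(t_new, t_obs)

# GP posterior mean (the kriging predictor): k_* K^{-1} y
alpha = np.linalg.solve(K, y_obs)
y_pred = k_star @ alpha
```

    The same linear solve is exactly the step that becomes infeasible for large or awkwardly structured datasets, which is what motivates the grid-based and exact-arithmetic methods described in the abstract.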

  3. Stochastic Analysis and Probabilistic Downscaling of Soil Moisture

    NASA Astrophysics Data System (ADS)

    Deshon, J. P.; Niemann, J. D.; Green, T. R.; Jones, A. S.

    2017-12-01

    Soil moisture is a key variable for rainfall-runoff response estimation, ecological and biogeochemical flux estimation, and biodiversity characterization, each of which is useful for watershed condition assessment. These applications require not only accurate, fine-resolution soil-moisture estimates but also confidence limits on those estimates and soil-moisture patterns that exhibit realistic statistical properties (e.g., variance and spatial correlation structure). The Equilibrium Moisture from Topography, Vegetation, and Soil (EMT+VS) model downscales coarse-resolution (9-40 km) soil moisture from satellite remote sensing or land-surface models to produce fine-resolution (10-30 m) estimates. The model was designed to produce accurate deterministic soil-moisture estimates at multiple points, but the resulting patterns do not reproduce the variance or spatial correlation of observed soil-moisture patterns. The primary objective of this research is to generalize the EMT+VS model to produce a probability density function (pdf) for soil moisture at each fine-resolution location and time. Each pdf has a mean that is equal to the deterministic soil-moisture estimate, and the pdf can be used to quantify the uncertainty in the soil-moisture estimates and to simulate soil-moisture patterns. Different versions of the generalized model are hypothesized based on how uncertainty enters the model, whether the uncertainty is additive or multiplicative, and which distributions describe the uncertainty. These versions are then tested by application to four catchments with detailed soil-moisture observations (Tarrawarra, Satellite Station, Cache la Poudre, and Nerrigundah). The performance of the generalized models is evaluated by comparing the statistical properties of the simulated soil-moisture patterns to those of the observations and the deterministic EMT+VS model. 
The versions of the generalized EMT+VS model with normally distributed stochastic components produce soil-moisture patterns with more realistic statistical properties than the deterministic model. Additionally, the results suggest that the variance and spatial correlation of the stochastic soil-moisture variations do not vary consistently with the spatial-average soil moisture.
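    As a rough illustration of the idea (not the EMT+VS model itself), the sketch below attaches a probability distribution to each fine-scale deterministic estimate, once with additive and once with multiplicative normal uncertainty, so that the ensemble mean equals the deterministic value in both cases. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_det = np.array([0.18, 0.22, 0.25, 0.30])   # deterministic fine-scale soil moisture
sigma = 0.03                                      # assumed uncertainty magnitude

# Additive version: theta = theta_det + eps, eps ~ N(0, sigma^2)
additive = theta_det[None, :] + sigma * rng.standard_normal((100_000, 4))

# Multiplicative version: theta = theta_det * m with E[m] = 1, so the
# ensemble mean still equals the deterministic estimate
m_factor = 1.0 + (sigma / theta_det.mean()) * rng.standard_normal((100_000, 4))
mult = theta_det[None, :] * m_factor
```

    Either ensemble supplies confidence limits (e.g. percentiles of each column) around the deterministic pattern; the multiplicative form lets the spread scale with the local estimate.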

  4. Controlled deterministic implantation by nanostencil lithography at the limit of ion-aperture straggling

    NASA Astrophysics Data System (ADS)

    Alves, A. D. C.; Newnham, J.; van Donkelaar, J. A.; Rubanov, S.; McCallum, J. C.; Jamieson, D. N.

    2013-04-01

    Solid state electronic devices fabricated in silicon employ many ion implantation steps in their fabrication. In nanoscale devices deterministic implants of dopant atoms with high spatial precision will be needed to overcome problems with statistical variations in device characteristics and to open new functionalities based on controlled quantum states of single atoms. However, to deterministically place a dopant atom with the required precision is a significant technological challenge. Here we address this challenge with a strategy based on stepped nanostencil lithography for the construction of arrays of single implanted atoms. We address the limit on spatial precision imposed by ion straggling in the nanostencil—fabricated with the readily available focused ion beam milling technique followed by Pt deposition. Two nanostencils have been fabricated: a 60 nm wide aperture in a 3 μm thick Si cantilever and a 30 nm wide aperture in a 200 nm thick Si3N4 membrane. The 30 nm wide aperture demonstrates the fabrication process for sub-50 nm apertures while the 60 nm aperture was characterized with 500 keV He+ ion forward scattering to measure the effect of ion straggling in the collimator and deduce a model for its internal structure using the GEANT4 ion transport code. This model is then applied to simulate collimation of a 14 keV P+ ion beam in a 200 nm thick Si3N4 membrane nanostencil suitable for the implantation of donors in silicon. We simulate collimating apertures with widths in the range of 10-50 nm because we expect the onset of J-coupling in a device with 30 nm donor spacing. We find that straggling in the nanostencil produces mis-located implanted ions with a probability between 0.001 and 0.08 depending on the internal collimator profile and the alignment with the beam direction. This result is favourable for the rapid prototyping of a proof-of-principle device containing multiple deterministically implanted dopants.
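    A toy Monte Carlo conveys the kind of estimate the paper reports (it is not the GEANT4 model): assume ions enter uniformly across the aperture, acquire a Gaussian lateral straggle in the collimator, and count the fraction landing outside a placement tolerance. Every number below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
n_ions = 200_000
aperture_half_width = 15.0   # nm, for a 30 nm wide aperture
straggle_sigma = 6.0         # nm, assumed lateral straggling in the collimator
tolerance = 25.0             # nm, assumed acceptable placement error

# Entry position uniform across the aperture; straggling adds Gaussian scatter
x_exit = (rng.uniform(-aperture_half_width, aperture_half_width, n_ions)
          + straggle_sigma * rng.standard_normal(n_ions))

# Fraction of mis-located ions (beyond the placement tolerance)
p_mislocated = np.mean(np.abs(x_exit) > tolerance)
```

    With these made-up parameters the mis-location probability lands in the same 0.001-0.08 range the paper quotes, which is the quantity its collimator-profile simulations refine.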

  5. Capping the calix: How toluene completes cesium(i) coordination with calix[4]pyrrole

    DOE PAGES

    Ellis, Ross J.; Reinhart, Benjamin; Williams, Neil J.; ...

    2017-05-04

    The role of solvent in molecular recognition systems is under-researched and often ignored, especially when the solvent is considered “non-interacting”. This study concerns the role of toluene solvent in cesium(I) recognition by calix[4]pyrrole. We show that π-donor interactions bind toluene molecules onto the open face of the cation-receptor complex, thus “capping the calix.” As a result, by characterizing this unusual aromatically-saturated complex, we show how “non-interacting” aromatic solvents can directly coordinate receptor-bound cations and thus influence recognition.

  7. Health Monitoring for Airframe Structural Characterization

    NASA Technical Reports Server (NTRS)

    Munns, Thomas E.; Kent, Renee M.; Bartolini, Antony; Gause, Charles B.; Borinski, Jason W.; Dietz, Jason; Elster, Jennifer L.; Boyd, Clark; Vicari, Larry; Ray, Asok

    2002-01-01

    This study established requirements for structural health monitoring systems, identified and characterized a prototype structural sensor system, developed sensor interpretation algorithms, and demonstrated the sensor systems on operationally realistic test articles. Fiber-optic corrosion sensors (i.e., moisture and metal ion sensors) and low-cycle fatigue sensors (i.e., strain and acoustic emission sensors) were evaluated to validate their suitability for monitoring aging degradation; characterize the sensor performance in aircraft environments; and demonstrate placement processes and multiplexing schemes. In addition, a unique micromachined multi-measurand sensor concept was developed and demonstrated. The results show that structural degradation of aircraft materials could be effectively detected and characterized using available and emerging sensors. A key component of the structural health monitoring capability is the ability to interpret the information provided by the sensor system in order to characterize the structural condition. Novel deterministic and stochastic fatigue damage development and growth models were developed for this program. These models enable real time characterization and assessment of structural fatigue damage.

  8. Experimental phase synchronization detection in non-phase coherent chaotic systems by using the discrete complex wavelet approach

    NASA Astrophysics Data System (ADS)

    Ferreira, Maria Teodora; Follmann, Rosangela; Domingues, Margarete O.; Macau, Elbert E. N.; Kiss, István Z.

    2017-08-01

    Phase synchronization may emerge from mutually interacting non-linear oscillators, even under weak coupling, when phase differences are bounded, while amplitudes remain uncorrelated. However, the detection of this phenomenon can be a challenging problem to tackle. In this work, we apply the Discrete Complex Wavelet Approach (DCWA) for phase assignment, considering signals from coupled chaotic systems and experimental data. The DCWA is based on the Dual-Tree Complex Wavelet Transform (DT-CWT), which is a discrete transformation. Due to its multi-scale properties in the context of phase characterization, it is possible to obtain very good results from scalar time series, even with non-phase-coherent chaotic systems without state space reconstruction or pre-processing. The method correctly predicts the phase synchronization for a chemical experiment with three locally coupled, non-phase-coherent chaotic processes. The impact of different time-scales is demonstrated on the synchronization process that outlines the advantages of DCWA for analysis of experimental data.
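    The DCWA itself is not reproduced below; as a simpler stand-in for phase assignment, this sketch uses the analytic-signal (Hilbert-transform) phase and the phase-locking value on two synthetic noisy oscillators with a fixed phase offset:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(3)
t = np.linspace(0.0, 100.0, 10_000)
x = np.sin(2 * np.pi * 1.0 * t) + 0.1 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 1.0 * t + 0.5) + 0.1 * rng.standard_normal(t.size)

# Instantaneous phases from the analytic signal
phi_x = np.angle(hilbert(x))
phi_y = np.angle(hilbert(y))

# Phase-locking value: |<exp(i*(phi_x - phi_y))>| approaches 1 when the
# phase difference stays bounded, even if amplitudes are uncorrelated
plv = np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))
```

    For non-phase-coherent chaotic signals this analytic-signal phase breaks down, which is precisely the gap the wavelet-based DCWA of the paper is designed to fill.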

  9. multiUQ: An intrusive uncertainty quantification tool for gas-liquid multiphase flows

    NASA Astrophysics Data System (ADS)

    Turnquist, Brian; Owkes, Mark

    2017-11-01

    Uncertainty quantification (UQ) can improve our understanding of the sensitivity of gas-liquid multiphase flows to variability in inflow conditions and fluid properties, creating a valuable tool for engineers. While non-intrusive UQ methods (e.g., Monte Carlo) are simple and robust, the cost associated with these techniques can render them impractical. In contrast, intrusive UQ techniques modify the governing equations by replacing deterministic variables with stochastic variables, adding complexity, but making UQ cost effective. Our numerical framework, called multiUQ, introduces an intrusive UQ approach for gas-liquid flows, leveraging a polynomial chaos expansion of the stochastic variables: density, momentum, pressure, viscosity, and surface tension. The gas-liquid interface is captured using a conservative level set approach, including a modified reinitialization equation which is robust and quadrature free. A least-squares method is leveraged to compute the stochastic interface normal and curvature needed in the continuum surface force method for surface tension. The solver is tested by applying uncertainty to one or two variables and verifying results against the Monte Carlo approach. NSF Grant #1511325.
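    A minimal non-intrusive sketch of the polynomial chaos machinery underneath such solvers (not the multiUQ code): expand a quantity of interest depending on one Gaussian-uncertain input in probabilists' Hermite polynomials, then read the mean and variance directly off the coefficients. The quadratic quantity of interest and its parameters are invented:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

mu, s = 0.07, 0.005                       # uncertain input: sigma_st = mu + s*xi, xi ~ N(0,1)
nodes, weights = hermegauss(20)           # Gauss-Hermite quadrature (probabilists')
weights = weights / np.sqrt(2 * np.pi)    # normalize against the standard normal pdf

def qoi(xi):
    return (mu + s * xi) ** 2             # invented quantity of interest

# Project onto He_k: c_k = E[q(xi) * He_k(xi)] / k!  (since E[He_k^2] = k!)
order = 4
coeffs = []
for k in range(order + 1):
    basis = np.zeros(k + 1)
    basis[k] = 1.0
    He_k = hermeval(nodes, basis)
    coeffs.append(np.sum(weights * qoi(nodes) * He_k) / math.factorial(k))

pce_mean = coeffs[0]                                        # = mu^2 + s^2
pce_var = sum(math.factorial(k) * coeffs[k] ** 2
              for k in range(1, order + 1))                 # = 4 mu^2 s^2 + 2 s^4
```

    An intrusive solver instead carries such coefficient vectors for every flow variable through the governing equations, which is where the added complexity mentioned in the abstract comes from.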

  10. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Procassini, R.J.

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.

  11. MEANS: python package for Moment Expansion Approximation, iNference and Simulation

    PubMed Central

    Fan, Sisi; Geissmann, Quentin; Lakatos, Eszter; Lukauskas, Saulius; Ale, Angelique; Babtie, Ann C.; Kirk, Paul D. W.; Stumpf, Michael P. H.

    2016-01-01

    Motivation: Many biochemical systems require stochastic descriptions. Unfortunately these can only be solved for the simplest cases and their direct simulation can become prohibitively expensive, precluding thorough analysis. As an alternative, moment closure approximation methods generate equations for the time-evolution of the system’s moments and apply a closure ansatz to obtain a closed set of differential equations that can become the basis for the deterministic analysis of the moments of the outputs of stochastic systems. Results: We present a free, user-friendly tool implementing an efficient moment expansion approximation with parametric closures that integrates well with the IPython interactive environment. Our package enables the analysis of complex stochastic systems without any constraints on the number of species and moments studied and the type of rate laws in the system. In addition to the approximation method our package provides numerous tools to help non-expert users in stochastic analysis. Availability and implementation: https://github.com/theosysbio/means Contacts: m.stumpf@imperial.ac.uk or e.lakatos13@imperial.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153663
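    MEANS itself is not shown here; the sketch below illustrates the idea of moment equations on a birth-death process, whose linear rates make the moment hierarchy close exactly. Nonlinear rate laws are where a closure ansatz, and hence a package like MEANS, becomes necessary:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Birth-death process: 0 --b--> n+1, n --d*n--> n-1.
# A +1 jump changes n^2 by 2n+1; a -1 jump (rate d*n) changes it by -2n+1.
b, d = 10.0, 1.0

def moment_odes(t, m):
    m1, m2 = m
    dm1 = b - d * m1                             # d<n>/dt
    dm2 = b * (2 * m1 + 1) + d * (m1 - 2 * m2)   # d<n^2>/dt
    return [dm1, dm2]

sol = solve_ivp(moment_odes, (0.0, 20.0), [0.0, 0.0], rtol=1e-8)
m1, m2 = sol.y[:, -1]
mean, var = m1, m2 - m1 ** 2    # steady state is Poisson with mean b/d = 10
```

    The deterministic ODEs above reproduce the mean and variance of the stochastic process without a single stochastic simulation, which is exactly the trade the abstract describes.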

  13. Low-complexity DOA estimation from short data snapshots for ULA systems using the annihilating filter technique

    NASA Astrophysics Data System (ADS)

    Bellili, Faouzi; Amor, Souheib Ben; Affes, Sofiène; Ghrayeb, Ali

    2017-12-01

    This paper addresses the problem of DOA estimation using uniform linear array (ULA) antenna configurations. We propose a new low-cost method of multiple DOA estimation from very short data snapshots. The new estimator is based on the annihilating filter (AF) technique. It is non-data-aided (NDA) and therefore does not impinge on the throughput of the system. The noise components are assumed temporally and spatially white across the receiving antenna elements. The transmitted signals are also temporally and spatially white across the transmitting sources. The new method is compared in performance to the Cramér-Rao lower bound (CRLB), the root-MUSIC algorithm, the deterministic maximum likelihood estimator and another Bayesian method developed precisely for the single snapshot case. Simulations show that the new estimator performs well over a wide SNR range. The main advantage of the new AF-based method is that it succeeds in accurately estimating the DOAs from short data snapshots, and even from a single snapshot, outperforming state-of-the-art techniques in both DOA estimation accuracy and computational cost.
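    The authors' estimator is not reproduced here, but the annihilating-filter principle it builds on can be sketched for a noiseless single snapshot: the array response is a sum of complex exponentials, so a short filter whose roots sit at the spatial frequencies annihilates it, and those roots give the DOAs. Array size, source angles, and amplitudes below are invented:

```python
import numpy as np

M, K = 8, 2                                      # antennas, sources
thetas_true = np.deg2rad([-10.0, 25.0])
mu = np.pi * np.sin(thetas_true)                 # spatial frequencies for d = lambda/2
x = np.exp(1j * np.outer(np.arange(M), mu)) @ np.array([1.0, 0.8])  # one snapshot

# Annihilating filter h of length K+1: sum_m h[m] x[n-m] = 0 for n = K..M-1,
# because the filter's roots e^{j mu_k} cancel each exponential component.
A = np.array([x[n - np.arange(K + 1)] for n in range(K, M)])
_, _, Vh = np.linalg.svd(A)
h = Vh[-1].conj()                                # nullspace vector -> filter taps

roots = np.roots(h)                              # roots lie at e^{j mu_k}
thetas_est = np.sort(np.rad2deg(np.arcsin(np.angle(roots) / np.pi)))
```

    With noise, the same structure is solved in a least-squares or denoised sense, which is where the design choices of the paper's estimator come in.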

  14. Using heuristic algorithms for capacity leasing and task allocation issues in telecommunication networks under fuzzy quality of service constraints

    NASA Astrophysics Data System (ADS)

    Huseyin Turan, Hasan; Kasap, Nihat; Savran, Huseyin

    2014-03-01

    Nowadays, every firm uses telecommunication networks in different amounts and ways in order to complete its daily operations. In this article, we investigate an optimisation problem that a firm faces when acquiring network capacity from a market in which there exist several network providers offering different pricing and quality of service (QoS) schemes. The QoS level guaranteed by network providers and the minimum quality level of service, which is needed for accomplishing the operations, are denoted as fuzzy numbers in order to handle the non-deterministic nature of the telecommunication network environment. Interestingly, the mathematical formulation of the aforementioned problem leads to a special case of the well-known two-dimensional bin packing problem, which is famous for its computational complexity. We propose two different heuristic solution procedures that have the capability of solving the resulting nonlinear mixed integer programming model with fuzzy constraints. Finally, the efficiency of each algorithm is tested on several problem instances to demonstrate the applicability of the methodology.
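    The paper's heuristics for the fuzzy two-dimensional problem are not reproduced; as background, here is a sketch of the classic first-fit-decreasing heuristic for the one-dimensional bin packing problem that such formulations generalize:

```python
def first_fit_decreasing(demands, capacity):
    """Pack demands into as few capacity-limited 'bins' as a greedy pass manages."""
    bins = []        # remaining capacity of each opened bin
    assignment = []  # (demand, bin index) pairs
    for demand in sorted(demands, reverse=True):
        for i, free in enumerate(bins):
            if demand <= free:          # first bin with enough room
                bins[i] -= demand
                assignment.append((demand, i))
                break
        else:                           # no bin fits: open a new one
            bins.append(capacity - demand)
            assignment.append((demand, len(bins) - 1))
    return len(bins), assignment

n_bins, plan = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
```

    First-fit decreasing is a standard baseline precisely because the exact problem is NP-hard, the same complexity barrier that motivates the heuristics in the paper.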

  15. Dynamic and adaptive policy models for coalition operations

    NASA Astrophysics Data System (ADS)

    Verma, Dinesh; Calo, Seraphin; Chakraborty, Supriyo; Bertino, Elisa; Williams, Chris; Tucker, Jeremy; Rivera, Brian; de Mel, Geeth R.

    2017-05-01

    It is envisioned that the success of future military operations depends on the better integration, organizationally and operationally, among allies, coalition members, inter-agency partners, and so forth. However, this leads to a challenging and complex environment where the heterogeneity and dynamism in the operating environment intertwines with the evolving situational factors that affect the decision-making life cycle of the war fighter. Therefore, the users in such environments need secure, accessible, and resilient information infrastructures where policy-based mechanisms adapt the behaviours of the systems to meet end user goals. By specifying and enforcing a policy based model and framework for operations and security which accommodates heterogeneous coalitions, high levels of agility can be enabled to allow rapid assembly and restructuring of system and information resources. However, current prevalent policy models (e.g., rule based event-condition-action model and its variants) are not sufficient to deal with the highly dynamic and plausibly non-deterministic nature of these environments. Therefore, to address the above challenges, in this paper, we present a new approach for policies which enables managed systems to take more autonomic decisions regarding their operations.

  16. Inconvenient Truth or Convenient Fiction? Probable Maximum Precipitation and Nonstationarity

    NASA Astrophysics Data System (ADS)

    Nielsen-Gammon, J. W.

    2017-12-01

    According to the inconvenient truth that Probable Maximum Precipitation (PMP) represents a non-deterministic, statistically very rare event, future changes in PMP involve a complex interplay between future frequencies of storm type, storm morphology, and environmental characteristics, many of which are poorly constrained by global climate models. On the other hand, according to the convenient fiction that PMP represents an estimate of the maximum possible precipitation that can occur at a given location, as determined by storm maximization and transposition, the primary climatic driver of PMP change is simply a change in maximum moisture availability. Increases in boundary-layer and total-column moisture have been observed globally, are anticipated from basic physical principles, and are robustly projected to continue by global climate models. Thus, using the same techniques that are used within the PMP storm maximization process itself, future PMP values may be projected. The resulting PMP trend projections are qualitatively consistent with observed trends of extreme rainfall within Texas, suggesting that in this part of the world the inconvenient truth is congruent with the convenient fiction.
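    The storm-maximization arithmetic referred to above is simple enough to sketch; all numbers below are illustrative, not from the study:

```python
# Moisture maximization as used in PMP practice: scale an observed storm by the
# ratio of maximum precipitable water to the storm's precipitable water, then
# project the PMP forward with the change in maximum moisture availability.
p_storm = 500.0        # mm, observed storm rainfall (invented)
w_storm = 60.0         # mm, precipitable water during the storm (invented)
w_max_hist = 75.0      # mm, historical maximum precipitable water (invented)
w_max_future = 82.5    # mm, projected maximum, here +10% under warming (invented)

pmp_hist = p_storm * (w_max_hist / w_storm)          # maximized storm: 625 mm
pmp_future = pmp_hist * (w_max_future / w_max_hist)  # projected PMP, about 687.5 mm
```

    The projection step is the abstract's point: whatever one thinks PMP "is", the same moisture ratio used to maximize historical storms also yields a trend under increasing column moisture.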

  17. Dynamical mechanism for sharp orientation tuning in an integrate-and-fire model of a cortical hypercolumn.

    PubMed

    Bressloff, P C; Bressloff, N W; Cowan, J D

    2000-11-01

    Orientation tuning in a ring of pulse-coupled integrate-and-fire (IF) neurons is analyzed in terms of spontaneous pattern formation. It is shown how the ring bifurcates from a synchronous state to a non-phase-locked state whose spike trains are characterized by clustered but irregular fluctuations of the interspike intervals (ISIs). The separation of these clusters in phase space results in a localized peak of activity as measured by the time-averaged firing rate of the neurons. This generates a sharp orientation tuning curve that can lock to a slowly rotating, weakly tuned external stimulus. Under certain conditions, the peak can slowly rotate even to a fixed external stimulus. The ring also exhibits hysteresis due to the subcritical nature of the bifurcation to sharp orientation tuning. Such behavior is shown to be consistent with a corresponding analog version of the IF model in the limit of slow synaptic interactions. For fast synapses, the deterministic fluctuations of the ISIs associated with the tuning curve can support a coefficient of variation of order unity.
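    The analysis above concerns a coupled ring; as a minimal illustration of the integrate-and-fire dynamics involved, the sketch below simulates a single leaky IF neuron with constant input, for which the interspike intervals are perfectly regular. The irregular, clustered ISIs of the paper require the ring coupling and fast synapses it analyzes. Parameters are invented:

```python
import numpy as np

def lif_spike_times(current, tau=10.0, dt=0.01, t_max=500.0):
    """Leaky integrate-and-fire: tau dv/dt = -v + I; spike and reset when v >= 1."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += (dt / tau) * (current - v)   # forward-Euler membrane update
        if v >= 1.0:
            spikes.append(t)
            v = 0.0                        # reset after a spike
        t += dt
    return np.array(spikes)

isis = np.diff(lif_spike_times(current=1.5))
# For constant I > 1 the exact ISI is tau * ln(I / (I - 1)), here ~10.99
```

    A vanishing ISI coefficient of variation is the uncoupled baseline against which the order-unity deterministic ISI fluctuations described in the abstract stand out.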

  18. Identification of gene regulation models from single-cell data

    NASA Astrophysics Data System (ADS)

    Weber, Lisa; Raymond, William; Munsky, Brian

    2018-09-01

    In quantitative analyses of biological processes, one may use many different scales of models (e.g. spatial or non-spatial, deterministic or stochastic, time-varying or at steady-state) or many different approaches to match models to experimental data (e.g. model fitting or parameter uncertainty/sloppiness quantification with different experiment designs). These different analyses can lead to surprisingly different results, even when applied to the same data and the same model. We use a simplified gene regulation model to illustrate many of these concerns, especially for ODE analyses of deterministic processes, chemical master equation and finite state projection analyses of heterogeneous processes, and stochastic simulations. For each analysis, we employ MATLAB and PYTHON software to consider a time-dependent input signal (e.g. a kinase nuclear translocation) and several model hypotheses, along with simulated single-cell data. We illustrate different approaches (e.g. deterministic and stochastic) to identify the mechanisms and parameters of the same model from the same simulated data. For each approach, we explore how uncertainty in parameter space varies with respect to the chosen analysis approach or specific experiment design. We conclude with a discussion of how our simulated results relate to the integration of experimental and computational investigations to explore signal-activated gene expression models in yeast (Neuert et al 2013 Science 339 584–7) and human cells (Senecal et al 2014 Cell Rep. 8 75–83).
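    As a small example of the stochastic-simulation side of such analyses (not the authors' code), here is a Gillespie simulation of constitutive mRNA expression, whose stationary law is Poisson so the mean and variance agree. Rates are invented:

```python
import numpy as np

def ssa_mrna(kr=20.0, g=1.0, t_max=500.0, burn_in=50.0, seed=0):
    """Gillespie SSA for: DNA --kr--> mRNA, mRNA --g*n--> 0 (degradation)."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, 0
    w_sum, w_sq, w_tot = 0.0, 0.0, 0.0
    while t < t_max:
        a_birth, a_death = kr, g * n           # reaction propensities
        a0 = a_birth + a_death
        dwell = rng.exponential(1.0 / a0)      # time to the next reaction
        if t > burn_in:                        # time-weighted moments after burn-in
            w_sum += n * dwell
            w_sq += n * n * dwell
            w_tot += dwell
        t += dwell
        n += 1 if rng.random() < a_birth / a0 else -1
    mean = w_sum / w_tot
    var = w_sq / w_tot - mean ** 2
    return mean, var

m, v = ssa_mrna()   # stationary law is Poisson(kr / g): mean ~ var ~ 20
```

    A chemical master equation or finite state projection analysis of the same model would yield these moments without sampling noise, which is the kind of cross-method comparison the abstract describes.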

  19. Developing a stochastic conflict resolution model for urban runoff quality management: Application of info-gap and bargaining theories

    NASA Astrophysics Data System (ADS)

    Ghodsi, Seyed Hamed; Kerachian, Reza; Estalaki, Siamak Malakpour; Nikoo, Mohammad Reza; Zahmatkesh, Zahra

    2016-02-01

    In this paper, a deterministic and a stochastic multilateral, multi-issue, non-cooperative bargaining methodology are proposed for urban runoff quality management. In the proposed methodologies, a calibrated Storm Water Management Model (SWMM) is used to simulate stormwater runoff quantity and quality for different urban stormwater runoff management scenarios, which have been defined considering several Low Impact Development (LID) techniques. In the deterministic methodology, the best management scenario, representing the location and area of LID controls, is identified using the bargaining model. In the stochastic methodology, uncertainties in some key parameters of SWMM are analyzed using the info-gap theory. For each water quality management scenario, robustness and opportuneness criteria are determined based on the utility functions of the different stakeholders. Then, to find the best solution, the bargaining model is performed considering a combination of robustness and opportuneness criteria for each scenario based on the utility function of each stakeholder. The results of applying the proposed methodology in the Velenjak urban watershed, located in the northeastern part of Tehran, the capital city of Iran, illustrate its practical utility for conflict resolution in urban water quantity and quality management. It is shown that the solution obtained using the deterministic model cannot outperform the result of the stochastic model considering the robustness and opportuneness criteria. Therefore, it can be concluded that the stochastic model, which incorporates the main uncertainties, could provide more reliable results.
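    The info-gap robustness criterion used above can be sketched abstractly: given a performance requirement, robustness is the largest uncertainty horizon whose worst case still meets it. The linear utility model and all numbers below are invented stand-ins, not the SWMM-based utilities of the paper:

```python
# Info-gap sketch: utility R(q, u) = u - c*q for a decision q (say, LID area),
# with uncertain benefit u in U(alpha) = [u0*(1 - alpha), u0*(1 + alpha)].
u0, c, r_crit = 10.0, 2.0, 3.0   # nominal benefit, cost rate, required utility (invented)

def robustness(q):
    """Largest horizon alpha whose worst-case utility still meets r_crit."""
    # worst case in U(alpha) is u = u0*(1 - alpha); solve u0*(1-alpha) - c*q >= r_crit
    alpha = (u0 - c * q - r_crit) / u0
    return max(alpha, 0.0)

alpha_small, alpha_large = robustness(1.0), robustness(3.0)
# less ambitious decisions tolerate more uncertainty: alpha_small > alpha_large
```

    In the paper this robustness (and its dual, opportuneness) is evaluated per stakeholder and scenario, and the bargaining model then negotiates over those criteria.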

  20. Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    The objective of this report is to summarize the deterministic and probabilistic structural evaluation results of two structures made with advanced ceramic matrix composites (CMC): an internally pressurized tube and a uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures that are made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on the material property and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) predicted the failure load and the buckling load, (2) performed coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstrated that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.
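    IPACS itself is not sketched here; below is a minimal Monte Carlo stand-in for the kind of reliability estimate it produces, using thin-walled-tube hoop stress with invented scatter in pressure, wall thickness, and strength:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
p = rng.normal(2.0e6, 0.1e6, n)          # Pa, internal pressure (invented scatter)
t = rng.normal(3.0e-3, 0.1e-3, n)        # m, wall thickness (invented scatter)
r = 25.0e-3                              # m, mean radius (invented)
strength = rng.normal(25.0e6, 2.0e6, n)  # Pa, material strength (invented scatter)

# Thin-walled hoop stress sigma = p*r/t; failure when stress exceeds strength
stress = p * r / t
p_fail = np.mean(stress > strength)
```

    Sampling the limit state this way also exposes which primitive variables drive the failure probability, the same sensitivity information the abstract says can feed back into the design process.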
