Origins of Chaos in Autonomous Boolean Networks
NASA Astrophysics Data System (ADS)
Socolar, Joshua; Cavalcante, Hugo; Gauthier, Daniel; Zhang, Rui
2010-03-01
Networks with nodes consisting of ideal Boolean logic gates are known to display either steady states, periodic behavior, or an ultraviolet catastrophe where the number of logic-transition events circulating in the network per unit time grows as a power law. In an experiment, non-ideal behavior of the logic gates prevents the ultraviolet catastrophe and may lead to deterministic chaos. We identify certain non-ideal features of real logic gates that enable chaos in experimental networks. We find that short-pulse rejection and the asymmetry between the logic states tend to engender periodic behavior. On the other hand, a memory effect termed "degradation" can generate chaos. Our results strongly suggest that deterministic chaos can be expected in a large class of experimental Boolean-like networks. Such devices may find application in a variety of technologies requiring fast complex waveforms or flat power spectra. The non-ideal effects identified here also have implications for the statistics of attractors in large complex networks.
Controllability of Deterministic Networks with the Identical Degree Sequence
Ma, Xiujuan; Zhao, Haixing; Wang, Binghong
2015-01-01
Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network depends on its topology. Liu, Barabási, et al. speculated that the degree distribution is one of the most important factors affecting controllability for arbitrary complex directed networks with random link weights. In this paper, we analysed the effect of degree distribution on the controllability of unweighted, undirected deterministic networks. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analysed the controllability of two such deterministic networks, the (1,3)-flower and the (2,2)-flower, by exact controllability theory in detail and give exact results for the minimum number of driver nodes for the two networks. In simulation, we compare the controllability of (x,y)-flower networks. Our results show that networks in the (x,y)-flower family have the same degree sequence, but their controllability is totally different. Thus the degree distribution itself is not sufficient to characterize the controllability of unweighted, undirected deterministic networks. PMID:26020920
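For unweighted, undirected networks, the exact controllability theory invoked above reduces to an eigenvalue computation: the minimum number of driver nodes equals the maximum eigenvalue multiplicity of the adjacency matrix. A minimal sketch (the star-graph example is illustrative, not one of the (x,y)-flowers):

```python
import numpy as np

def min_driver_nodes(A, tol=1e-8):
    """Exact controllability for an unweighted, undirected network:
    N_D = max over eigenvalues l of mu(l) = N - rank(l*I - A)."""
    N = A.shape[0]
    eigvals = np.linalg.eigvalsh(A)            # adjacency matrix is symmetric
    distinct = []
    for l in eigvals:                          # group numerically equal eigenvalues
        if not any(abs(l - d) < 1e-6 for d in distinct):
            distinct.append(l)
    return max(N - np.linalg.matrix_rank(l*np.eye(N) - A, tol=tol)
               for l in distinct)

A = np.zeros((4, 4))                           # 4-node star graph
A[0, 1:] = A[1:, 0] = 1
print(min_driver_nodes(A))                     # 2 (eigenvalue 0 has multiplicity 2)
```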
Ordinal optimization and its application to complex deterministic problems
NASA Astrophysics Data System (ADS)
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective to approach a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve high levels of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
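The goal-softening/ordinal-comparison idea lends itself to a compact simulation: rank many designs with a cheap, noisy evaluation and check how many of the truly best designs land in the softened selection set. Problem size and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_designs, noise_sd = 1000, 2.0                 # assumed search-space size and noise
true_perf = rng.normal(size=n_designs)          # hidden true performance (lower = better)
observed = true_perf + rng.normal(scale=noise_sd, size=n_designs)  # cheap noisy model

g, s = 10, 50                                   # goal softening: true top-g vs selected top-s
true_top = set(np.argsort(true_perf)[:g])
selected = set(np.argsort(observed)[:s])
print("good designs captured:", len(true_top & selected))  # rank order is robust to noise
```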
Using Reputation Systems and Non-Deterministic Routing to Secure Wireless Sensor Networks
Moya, José M.; Vallejo, Juan Carlos; Fraga, David; Araujo, Álvaro; Villanueva, Daniel; de Goyeneche, Juan-Mariano
2009-01-01
Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios. PMID:22412345
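A toy sketch of the combination described: reputation scores weight a random next-hop choice, so routes are non-deterministic and shun nodes with bad reputation feedback. Node names and scores are invented.

```python
import random

def next_hop(neighbors, trust):
    """Draw the next hop with probability proportional to reputation,
    so no fixed route exists for an attacker to learn or poison."""
    return random.choices(neighbors, weights=[trust[n] for n in neighbors], k=1)[0]

trust = {"B": 0.9, "C": 0.6, "D": 0.1}   # D has accumulated bad reputation feedback
print(next_hop(["B", "C", "D"], trust))
```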
Computing exponentially faster: implementing a non-deterministic universal Turing machine using DNA
Currin, Andrew; Korovin, Konstantin; Ababi, Maria; Roper, Katherine; Kell, Douglas B.; Day, Philip J.
2017-01-01
The theory of computer science is based around universal Turing machines (UTMs): abstract machines able to execute all possible algorithms. Modern digital computers are physical embodiments of classical UTMs. For the most important class of problem in computer science, non-deterministic polynomial complete problems, non-deterministic UTMs (NUTMs) are theoretically exponentially faster than both classical UTMs and quantum mechanical UTMs (QUTMs). However, no attempt has previously been made to build an NUTM, and their construction has been regarded as impossible. Here, we demonstrate the first physical design of an NUTM. This design is based on Thue string rewriting systems, and thereby avoids the limitations of most previous DNA computing schemes: all the computation is local (simple edits to strings) so there is no need for communication, and there is no need to order operations. The design exploits DNA's ability to replicate to execute an exponential number of computational paths in P time. Each Thue rewriting step is embodied in a DNA edit implemented using a novel combination of polymerase chain reactions and site-directed mutagenesis. We demonstrate that the design works using both computational modelling and in vitro molecular biology experimentation: the design is thermodynamically favourable, microprogramming can be used to encode arbitrary Thue rules, all classes of Thue rule can be implemented, and non-deterministic rule implementation is demonstrated. In an NUTM, the resource limitation is space, which contrasts with classical UTMs and QUTMs where it is time. This fundamental difference enables an NUTM to trade space for time, which is significant for both theoretical computer science and physics. It is also of practical importance, for to quote Richard Feynman ‘there's plenty of room at the bottom’. This means that a desktop DNA NUTM could potentially utilize more processors than all the electronic computers in the world combined, and thereby outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy. PMID:28250099
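The computational core, a non-deterministic Thue string-rewriting system, can be emulated in software by replicating every current string under every applicable rule, just as the DNA design replicates to follow all paths in parallel. The rule set below is a made-up example, not the paper's microprogram.

```python
from collections import deque

def thue_reach(start, target, rules, max_steps=6):
    """Breadth-first exploration of all rewriting paths of a Thue system.
    Each level replicates every current string under every applicable rule,
    mimicking how DNA replication explores computational paths in parallel."""
    frontier, seen = {start}, {start}
    for _ in range(max_steps):
        nxt = set()
        for s in frontier:
            for lhs, rhs in rules:             # Thue rules come in symmetric pairs
                i = s.find(lhs)
                while i != -1:
                    t = s[:i] + rhs + s[i + len(lhs):]
                    if t not in seen:
                        nxt.add(t); seen.add(t)
                    i = s.find(lhs, i + 1)
        if target in seen:
            return True
        frontier = nxt
    return False

rules = [("ab", "ba"), ("ba", "ab"), ("aa", "b"), ("b", "aa")]  # hypothetical rule set
print(thue_reach("aab", "bb", rules))   # True: "aab" -> "bb" via the rule aa -> b
```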
Large deviations and mixing for dissipative PDEs with unbounded random kicks
NASA Astrophysics Data System (ADS)
Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.
2018-02-01
We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer’s criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroup, and a coupling argument. These tools combined together constitute a new approach to LDP for infinite-dimensional processes without strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.
NASA Astrophysics Data System (ADS)
García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.
2018-07-01
In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics, but they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of the fractal processes in the wavelet domain. This method has been validated over simulated signals and over real signals with economic and biological origins. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems, and uncovering interesting patterns present in time series.
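A stripped-down sketch of the decomposition: a band-limited deterministic signal plus ordinary Brownian motion (fBm with H = 0.5) is separated by wavelet soft-thresholding. The universal threshold used here is a crude stand-in for the paper's Bayesian shrinkage, which also accounts for the level-dependent variance of fractal noise.

```python
import numpy as np
import pywt   # PyWavelets

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n) / n
deterministic = np.sin(2*np.pi*5*t)                 # band-limited component
fractal = np.cumsum(rng.normal(size=n)) * 0.05      # Brownian motion (fBm, H = 0.5)
x = deterministic + fractal

coeffs = pywt.wavedec(x, "db4", level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise scale from finest level
thr = sigma * np.sqrt(2*np.log(n))                  # universal threshold
den = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
estimate = pywt.waverec(den, "db4")[:n]             # estimated deterministic part
print("rmse:", np.sqrt(np.mean((estimate - deterministic)**2)))
```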
ERIC Educational Resources Information Center
Grotzer, Tina A.; Solis, S. Lynneth; Tutwiler, M. Shane; Cuzzolino, Megan Powell
2017-01-01
Understanding complex systems requires reasoning about causal relationships that behave or appear to behave probabilistically. Features such as distributed agency, large spatial scales, and time delays obscure co-variation relationships, and complex interactions can result in non-deterministic relationships between causes and effects that are best…
NASA Technical Reports Server (NTRS)
Onwubiko, Chin-Yere; Onyebueke, Landon
1996-01-01
The structural design, or the design of machine elements, has been traditionally based on deterministic design methodology. The deterministic method considers all design parameters to be known with certainty. This methodology is, therefore, inadequate to design complex structures that are subjected to a variety of complex, severe loading conditions. A nonlinear behavior that is dependent on stress, stress rate, temperature, number of load cycles, and time is observed in all components subjected to complex conditions. These complex conditions introduce uncertainties; hence, the actual factor of safety margin remains unknown. In the deterministic methodology, the contingency of failure is discounted; hence, a high factor of safety is used. It may be most useful in situations where the design structures are simple. The probabilistic method is concerned with the probability of non-failure performance of structures or machine elements. It is much more useful in situations where the design is characterized by complex geometry, possibility of catastrophic failure, or sensitive loads and material properties. Also included: Comparative Study of the use of AGMA Geometry Factors and Probabilistic Design Methodology in the Design of Compact Spur Gear Set.
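The contrast between the two methodologies fits in a few lines: the deterministic factor of safety is a ratio of nominal values, while the probabilistic method estimates a failure probability from assumed distributions (all numbers below are illustrative).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
strength = rng.normal(400e6, 40e6, n)   # yield strength [Pa], assumed distribution
stress   = rng.normal(250e6, 35e6, n)   # applied stress [Pa], assumed distribution

sf_deterministic = 400e6 / 250e6        # deterministic factor of safety on the means
p_failure = np.mean(stress >= strength) # probabilistic measure of (non-)performance
print(f"FoS = {sf_deterministic:.2f}, P(failure) = {p_failure:.4f}")
```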
Deterministic quantum dense coding networks
NASA Astrophysics Data System (ADS)
Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal
2018-07-01
We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state - an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters as well as the four-qubit Dicke states can provide a quantum advantage of sending the information in deterministic dense coding. Interestingly however, numerical simulations in the three-qubit scenario reveal that the percentage of states from the GHZ-class that are deterministic dense codeable is higher than that of states from the W-class.
Hybrid deterministic/stochastic simulation of complex biochemical systems.
Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina
2017-11-21
In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explored with wet lab experiments. To serve their purpose, computational models, however, should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented by efficient algorithms requiring the shortest possible execution time, to avoid excessively enlarging the time elapsing between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which is often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundance fluctuates by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally time-expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reactions' set into fast reactions, moderate reactions, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. Moderate reactions are those whose reaction waiting time is greater than the fast reaction waiting time but smaller than the slow reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crosses the threshold of low (high) waiting time. A Gillespie First Reaction Method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics: DNA transcription regulation. The simulated dynamic profile of the reagents' abundance and the estimate of the error introduced by the fully deterministic approach were used to evaluate the consistency of the computational model and that of the software tool.
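Two ingredients of the algorithm just described, waiting-time-based partitioning with hysteresis and first-reaction selection for the slow set, can be sketched as follows. Thresholds and propensities are illustrative, and the deterministic integration of the fast set is omitted; this is not the full MoBioS algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def classify(propensities, fast_thr, slow_thr, previous):
    """Partition reactions by mean waiting time 1/a with hysteresis:
    a reaction in the 'moderate' band keeps its previous class."""
    cls = {}
    for j, a in enumerate(propensities):
        w = np.inf if a == 0 else 1.0 / a
        if w < fast_thr:
            cls[j] = "deterministic"
        elif w > slow_thr:
            cls[j] = "stochastic"
        else:                                   # moderate band: hysteresis switching
            cls[j] = previous.get(j, "stochastic")
    return cls

def first_reaction(propensities, stochastic_idx):
    """Gillespie first-reaction method restricted to the slow reactions."""
    taus = [(rng.exponential(1.0 / propensities[j]), j)
            for j in stochastic_idx if propensities[j] > 0]
    return min(taus) if taus else (np.inf, None)

a = np.array([500.0, 2.0, 0.05])                # example propensities [1/s]
cls = classify(a, fast_thr=0.01, slow_thr=1.0, previous={})
tau, j = first_reaction(a, [k for k, c in cls.items() if c == "stochastic"])
print(cls, tau, j)
```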
Ben Abdallah, Emna; Folschette, Maxime; Roux, Olivier; Magnin, Morgan
2017-01-01
This paper addresses the problem of finding attractors in biological regulatory networks. We focus here on non-deterministic synchronous and asynchronous multi-valued networks, modeled using automata networks (AN). AN is a general and well-suited formalism to study complex interactions between different components (genes, proteins,...). An attractor is a minimal trap domain, that is, a part of the state-transition graph that cannot be escaped. Such structures are terminal components of the dynamics and take the form of steady states (singleton) or complex compositions of cycles (non-singleton). Studying the effect of a disease or a mutation on an organism requires finding the attractors in the model to understand the long-term behaviors. We present a computational logical method based on answer set programming (ASP) to identify all attractors. Performed without any network reduction, the method can be applied on any dynamical semantics. In this paper, we present the two most widespread non-deterministic semantics: the asynchronous and the synchronous updating modes. The logical approach goes through a complete enumeration of the states of the network in order to find the attractors without the necessity of constructing the whole state-transition graph. We realize extensive computational experiments which show good performance and fit the expected theoretical results in the literature. The originality of our approach lies in the exhaustive enumeration of all possible (sets of) states verifying the properties of an attractor thanks to the use of ASP. Our method is applied to non-deterministic semantics in two different schemes (asynchronous and synchronous). The merits of our methods are illustrated by applying them to biological examples of various sizes and comparing the results with some existing approaches. It turns out that our approach succeeds in exhaustively enumerating, on a desktop computer, all existing attractors up to a given size (20 states) in a large model (100 components). This size is only limited by memory and computation time.
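For a toy network, the attractor definition used here (terminal strongly connected components of the state-transition graph, i.e., minimal trap domains) can be checked by brute force. The ASP machinery that makes the paper's approach scale is replaced below by explicit enumeration, feasible only for a handful of components, and the Boolean update rules are invented.

```python
import itertools
import networkx as nx

# toy 3-component Boolean network; update functions are assumptions
f = [lambda s: s[1],                 # x0' = x1
     lambda s: s[0],                 # x1' = x0
     lambda s: s[0] and not s[2]]    # x2' = x0 AND NOT x2

G = nx.DiGraph()
for s in itertools.product([0, 1], repeat=3):
    updated = False
    for i in range(3):               # asynchronous semantics: one component at a time
        v = int(f[i](s))
        if v != s[i]:
            G.add_edge(s, s[:i] + (v,) + s[i+1:])
            updated = True
    if not updated:
        G.add_edge(s, s)             # steady state: keep the node via a self-loop

# attractors = terminal SCCs of the state-transition graph (minimal trap domains)
cond = nx.condensation(G)
attractors = [cond.nodes[n]["members"] for n in cond.nodes if cond.out_degree(n) == 0]
print(attractors)                    # one steady state and one 2-state cyclic attractor
```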
Billiard, Sylvain; Castric, Vincent; Vekemans, Xavier
2007-03-01
We developed a general model of sporophytic self-incompatibility under negative frequency-dependent selection allowing complex patterns of dominance among alleles. We used this model deterministically to investigate the effects on equilibrium allelic frequencies of the number of dominance classes, the number of alleles per dominance class, the asymmetry in dominance expression between pollen and pistil, and whether selection acts on male fitness only or both on male and on female fitnesses. We show that the so-called "recessive effect" occurs under a wide variety of situations. We found emerging properties of finite population models with several alleles per dominance class such as that higher numbers of alleles are maintained in more dominant classes and that the number of dominance classes can evolve. We also investigated the occurrence of homozygous genotypes and found that substantial proportions of those can occur for the most recessive alleles. We used the model for two species with complex dominance patterns to test whether allelic frequencies in natural populations are in agreement with the distribution predicted by our model. We suggest that the model can be used to test explicitly for additional, allele-specific, selective forces.
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
Determining Methane Budgets with Eddy Covariance Data ascertained in a heterogeneous Footprint
NASA Astrophysics Data System (ADS)
Rößger, N.; Wille, C.; Kutzbach, L.
2016-12-01
Amplified climate change in the Arctic may cause methane emissions to increase considerably due to more suitable production conditions. With a focus on methane, we studied the carbon turnover on the modern flood plain of Samoylov Island situated in the Lena River Delta (72°22'N, 126°28'E) using eddy covariance data. In contrast to the ice-wedge polygonal tundra on the delta's river terraces, the flood plains have to date received little attention. During the warm seasons of 2014 and 2015, the mean methane flux amounted to 0.012 μmol m⁻² s⁻¹. This average is the result of a large variability in methane fluxes, which is attributed to the complexity of the footprint, where methane sources are unevenly distributed. Explaining this variability is based on three modelling approaches: a deterministic model using exponential relationships for flux drivers, a multilinear model created through stepwise regression, and a neural network which relies on machine learning techniques. A substantial boost in model performance was achieved through inputting footprint information in the form of the contribution of vegetation classes; this indicates that the vegetation is serving as an integrated proxy for potential methane flux drivers. The neural network performed best; however, a robust validation revealed that the deterministic model best captured ecosystem-intrinsic features. Furthermore, the deterministic model allowed a downscaling of the net flux by allocating fractions to three vegetation classes, which in turn form the basis for upscaling methane fluxes in order to obtain the budget for the entire flood plain. Arctic methane emissions occur in a spatio-temporally complex pattern and employing fine-scale information is crucial to understanding the flux dynamics.
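The three modelling approaches compared in the study can be mimicked on synthetic data: a deterministic exponential-response model fitted with curve_fit, a multilinear regression, and a small neural network. Drivers, functional form, and noise level are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n = 500
T_soil = rng.uniform(0, 15, n)                 # driver 1: soil temperature [degC]
veg    = rng.uniform(0, 1, n)                  # driver 2: wet-sedge fraction in footprint
flux = 0.004*veg*np.exp(0.1*T_soil) + rng.normal(0, 5e-4, n)   # synthetic CH4 flux

# 1) deterministic model with an exponential temperature response
det = lambda X, a, b: a * X[1] * np.exp(b * X[0])
p, _ = curve_fit(det, (T_soil, veg), flux, p0=[1e-3, 0.1])

# 2) multilinear model, 3) neural network
X = np.column_stack([T_soil, veg])
lin = LinearRegression().fit(X, flux)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, flux)

for name, pred in [("deterministic", det((T_soil, veg), *p)),
                   ("multilinear", lin.predict(X)), ("neural net", net.predict(X))]:
    print(name, "R2 = %.3f" % (1 - np.var(flux - pred) / np.var(flux)))
```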
NASA Astrophysics Data System (ADS)
Contreras, Arturo Javier
This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generate typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results. By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.
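The Class III AVA responses mentioned above are commonly screened with the two-term Shuey approximation, R(θ) ≈ A + B sin²θ. A sketch with hypothetical shale-over-gas-sand properties follows; this is a generic screening tool, not the dissertation's full pre-stack inversion.

```python
import numpy as np

def shuey(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Two-term Shuey approximation R(theta) ~ A + B sin^2(theta) for
    P-wave reflectivity at an interface (angles well below critical)."""
    th = np.radians(theta_deg)
    vp, vs, rho = (vp1+vp2)/2, (vs1+vs2)/2, (rho1+rho2)/2
    dvp, dvs, drho = vp2-vp1, vs2-vs1, rho2-rho1
    A = 0.5 * (dvp/vp + drho/rho)                    # normal-incidence reflectivity
    B = 0.5*dvp/vp - 2*(vs/vp)**2 * (drho/rho + 2*dvs/vs)   # AVA gradient
    return A + B * np.sin(th)**2

# hypothetical shale over gas sand: negative intercept and gradient (Class III)
angles = np.arange(0, 40, 5)
print(shuey(2700, 1200, 2.45, 2600, 1400, 2.10, angles))
```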
Will systems biology offer new holistic paradigms to life sciences?
Conti, Filippo; Valerio, Maria Cristina; Zbilut, Joseph P.
2008-01-01
A biological system, like any complex system, blends stochastic and deterministic features, displaying properties of both. In a certain sense, this blend is exactly what we perceive as the “essence of complexity” given we tend to consider as non-complex both an ideal gas (fully stochastic and understandable at the statistical level in the thermodynamic limit of a huge number of particles) and a frictionless pendulum (fully deterministic relative to its motion). In this commentary we make the statement that systems biology will have a relevant impact on nowadays biology if (and only if) will be able to capture the essential character of this blend that in our opinion is the generation of globally ordered collective modes supported by locally stochastic atomisms. PMID:19003440
Disentangling the stochastic behavior of complex time series
NASA Astrophysics Data System (ADS)
Anvari, Mehrnaz; Tabar, M. Reza Rahimi; Peinke, Joachim; Lehnertz, Klaus
2016-10-01
Complex systems involving a large number of degrees of freedom generally exhibit non-stationary dynamics, which can result in either continuous or discontinuous sample paths of the corresponding time series. The latter sample paths may be caused by discontinuous events - or jumps - with some distributed amplitudes, and disentangling effects caused by such jumps from effects caused by normal diffusion processes is a main problem for a detailed understanding of the stochastic dynamics of complex systems. Here we introduce a non-parametric method to address this general problem. By means of a stochastic dynamical jump-diffusion modelling, we separate deterministic drift terms from different stochastic behaviors, namely diffusive and jumpy ones, and show that all of the unknown functions and coefficients of this modelling can be derived directly from measured time series. We demonstrate the applicability of our method to empirical observations by a data-driven inference of the deterministic drift term and of the diffusive and jumpy behavior in brain dynamics from ten epilepsy patients. Particularly, these different stochastic behaviors provide extra information that can be regarded as valuable for diagnostic purposes.
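The non-parametric idea, estimating drift and diffusion directly from conditional moments of measured increments, can be demonstrated on a simulated Ornstein-Uhlenbeck process. Separating the jump part requires higher-order Kramers-Moyal moments, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n = 1e-3, 1_000_000
noise = rng.normal(size=n-1) * np.sqrt(dt)
x = np.empty(n); x[0] = 0.0
for i in range(n-1):                       # Ornstein-Uhlenbeck: drift -2x, diffusion 1
    x[i+1] = x[i] - 2.0*x[i]*dt + noise[i]

bins = np.linspace(-1.5, 1.5, 31)          # grid of states for conditional moments
idx = np.digitize(x[:-1], bins)
dxs = np.diff(x)
for k in range(5, 26, 5):
    sel = idx == k
    D1 = dxs[sel].mean() / dt              # drift:     D1(x) ~ <dx | x> / dt
    D2 = (dxs[sel]**2).mean() / dt         # diffusion: D2(x) ~ <dx^2 | x> / dt
    print(f"x~{bins[k-1]:+.1f}: D1={D1:+.2f} (exact {-2*bins[k-1]:+.2f}), D2={D2:.2f}")
```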
Lodahl, Peter; Mahmoodian, Sahand; Stobbe, Søren; Rauschenbeutel, Arno; Schneeweiss, Philipp; Volz, Jürgen; Pichler, Hannes; Zoller, Peter
2017-01-25
Advanced photonic nanostructures are currently revolutionizing the optics and photonics that underpin applications ranging from light technology to quantum-information processing. The strong light confinement in these structures can lock the local polarization of the light to its propagation direction, leading to propagation-direction-dependent emission, scattering and absorption of photons by quantum emitters. The possibility of such a propagation-direction-dependent, or chiral, light-matter interaction is not accounted for in standard quantum optics and its recent discovery brought about the research field of chiral quantum optics. The latter offers fundamentally new functionalities and applications: it enables the assembly of non-reciprocal single-photon devices that can be operated in a quantum superposition of two or more of their operational states and the realization of deterministic spin-photon interfaces. Moreover, engineered directional photonic reservoirs could lead to the development of complex quantum networks that, for example, could simulate novel classes of quantum many-body systems.
Wiemuth, M; Junger, D; Leitritz, M A; Neumann, J; Neumuth, T; Burgert, O
2017-08-01
Medical processes can be modeled using different methods and notations. Currently used modeling systems like Business Process Model and Notation (BPMN) are not capable of describing the highly flexible and variable medical processes in sufficient detail. We combined two modeling systems, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes. We used the new standards Case Management Model and Notation (CMMN) and Decision Model and Notation (DMN). First, we explain how CMMN, DMN and BPMN could be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN. We were able to expand the system with more complex situations that might appear during an intervention. Effective modeling of the cataract intervention is possible using the combination of BPM and ACM. The combination makes it possible to depict complex processes with complex decisions. This combination offers a significant advantage for modeling perioperative processes.
Complexity and health professions education: a basic glossary.
Mennin, Stewart
2010-08-01
The study of health professions education in the context of complexity science and complex adaptive systems involves different concepts and terminology that are likely to be unfamiliar to many health professions educators. A list of selected key terms and definitions from the literature of complexity science is provided to assist readers to navigate familiar territory from a different perspective. Terms include agent, attractor, bifurcation, chaos, co-evolution, collective variable, complex adaptive systems, complexity science, deterministic systems, dynamical system, edge of chaos, emergence, equilibrium, far from equilibrium, fuzzy boundaries, linear system, non-linear system, random, self-organization and self-similarity.
Progressively expanded neural network for automatic material identification in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Paheding, Sidike
The science of hyperspectral remote sensing focuses on the exploitation of the spectral signatures of various materials to enhance capabilities including object detection, recognition, and material characterization. Hyperspectral imagery (HSI) has been extensively used for object detection and identification applications since it provides plenty of spectral information to uniquely identify materials by their reflectance spectra. HSI-based object detection algorithms can be generally classified into stochastic and deterministic approaches. Deterministic approaches are comparatively simple to apply since they are usually based on direct spectral similarity such as spectral angles or spectral correlation. In contrast, stochastic algorithms require statistical modeling and estimation for the target class and non-target class. Over the decades, many single-class object detection methods have been proposed in the literature; however, deterministic multiclass object detection in HSI has not been explored. In this work, we propose a deterministic multiclass object detection scheme, named class-associative spectral fringe-adjusted joint transform correlation. The human brain is capable of simultaneously processing high volumes of multi-modal data received every second of the day. In contrast, a machine sees input data simply as random binary numbers. Although machines are computationally efficient, they are inferior when it comes to data abstraction and interpretation. Thus, mimicking the learning strength of the human brain is a current trend in artificial intelligence. In this work, we present a biologically inspired neural network, named the progressively expanded neural network (PEN Net), based on nonlinear transformation of input neurons to a feature space for better pattern differentiation. In PEN Net, discrete fixed excitations are disassembled and scattered in the feature space as a nonlinear line. Each disassembled element on the line corresponds to a pattern with similar features. Unlike conventional neural networks, where hidden neurons need to be iteratively adjusted to achieve better accuracy, our proposed PEN Net does not require hidden-neuron tuning, which achieves better computational efficiency, and it has also shown superior performance in HSI classification tasks compared to the state of the art. Spectral-spatial feature-based HSI classification frameworks have shown greater strength than spectral-only methods. In our last proposed technique, PEN Net is incorporated with multiscale spatial features (i.e., multiscale complete local binary pattern) to perform a spectral-spatial classification of HSI. Several experiments demonstrate excellent performance of our proposed technique compared to more recently developed approaches.
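One way to read the "progressive expansion" idea is as a fixed nonlinear scattering of each input neuron across basis functions, followed by a single trainable layer with no hidden-neuron tuning. The Gaussian bases and widths below are an interpretation for illustration, not the paper's exact transformation.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def expand(X, centers):
    """Scatter each input feature across fixed Gaussian basis functions,
    standing in for PEN Net's progressive expansion (an assumption)."""
    feats = [np.exp(-(X[:, [j]] - c)**2 / 0.5)
             for j in range(X.shape[1]) for c in centers]
    return np.hstack(feats)

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
centers = np.linspace(X.min(), X.max(), 10)
clf = LogisticRegression(max_iter=2000).fit(expand(Xtr, centers), ytr)
print("accuracy:", clf.score(expand(Xte, centers), yte))  # no hidden-neuron tuning
```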
Theory and applications of a deterministic approximation to the coalescent model
Jewett, Ethan M.; Rosenberg, Noah A.
2014-01-01
Under the coalescent model, the random number n_t of lineages ancestral to a sample is nearly deterministic as a function of time when n_t is moderate to large in value, and it is well approximated by its expectation E[n_t]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[n_t] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation n_t ≈ E[n_t] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[n_t] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation n_t ≈ E[n_t] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
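The approximation n_t ≈ E[n_t] is easy to see numerically: integrate the deterministic lineage-loss ODE dn/dt = -n(n-1)/2 (time in units of 2N generations) and compare with Monte Carlo means of the exact ancestral process. Sample size and horizon below are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
n0, T = 50, 1.0          # sample size; time in coalescent units of 2N generations

# deterministic approximation: dn/dt = -n(n-1)/2
sol = solve_ivp(lambda t, n: -0.5*n*(n-1), (0, T), [n0], dense_output=True)

def sim():
    """One realization of the exact ancestral lineage-count process."""
    n, t = n0, 0.0
    while n > 1:
        t += rng.exponential(2.0 / (n*(n-1)))   # waiting time with rate C(n,2)
        if t > T:
            break
        n -= 1
    return n

mc = np.mean([sim() for _ in range(2000)])
print(f"E[n_T] ~ {mc:.2f} (simulated) vs {sol.sol(T)[0]:.2f} (deterministic ODE)")
```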
Down to the roughness scale assessment of piston-ring/liner contacts
NASA Astrophysics Data System (ADS)
Checo, H. M.; Jaramillo, A.; Ausas, R. F.; Jai, M.; Buscaglia, G. C.
2017-02-01
The effects of surface roughness in hydrodynamic bearings have been accounted for through several approaches, the most widely used being averaging or stochastic techniques. With these the surface is not treated “as it is”, but by means of an assumed probability distribution for the roughness. The so-called direct, deterministic, or measured-surface simulations solve the lubrication problem with realistic surfaces down to the roughness scale. This leads to expensive computational problems. Most researchers have tackled this problem by considering non-moving surfaces and neglecting the ring dynamics to reduce the computational burden. What is proposed here is to solve the fully deterministic simulation both in space and in time, so that the actual movement of the surfaces and the ring dynamics are taken into account. This simulation is much more complex than previous ones, as it is intrinsically transient. The feasibility of these fully deterministic simulations is illustrated in two cases: fully deterministic simulation of liner surfaces with diverse finishes (honed and coated bores) with constant piston velocity and load on the ring, and also in real engine conditions.
Deterministic quantum teleportation and information splitting via a peculiar W-class state
NASA Astrophysics Data System (ADS)
Mei, Feng; Yu, Ya-Fei; Zhang, Zhi-Ming
2010-02-01
In the paper [Phys. Rev. A 74, 062320 (2006)], Agrawal et al. introduced a kind of W-class state which can be used for the quantum teleportation of a single-particle state via a three-particle von Neumann measurement, and they thought that the state could not be used to teleport an unknown state by making two-particle and one-particle measurements. Here we reconsider the features of the W-class state and the quantum teleportation process via the W-class state. We show that, by introducing a unitary operation, the quantum teleportation can be achieved deterministically by making two-particle and one-particle measurements. In addition, our protocol is extended to the process of teleporting a two-particle state and splitting information.
Calibration of an Unsteady Groundwater Flow Model for a Complex, Strongly Heterogeneous Aquifer
NASA Astrophysics Data System (ADS)
Curtis, Z. K.; Liao, H.; Li, S. G.; Phanikumar, M. S.; Lusch, D.
2016-12-01
Modeling of groundwater systems characterized by complex three-dimensional structure and heterogeneity remains a significant challenge. Most of today's groundwater models are developed based on relatively simple conceptual representations that keep the model calibratable. As more complexities are modeled, e.g., by adding more layers and/or zones, or introducing transient processes, more parameters have to be estimated, and issues related to ill-posed groundwater problems and non-unique calibration arise. Here, we explore the use of an alternative conceptual representation for groundwater modeling that is fully three-dimensional and can capture complex 3D heterogeneity (both systematic and "random") without over-parameterizing the aquifer system. In particular, we apply Transition Probability (TP) geostatistics to high-resolution borehole data from a water well database to characterize the complex 3D geology. Different aquifer material classes, e.g., 'AQ' (aquifer material), 'MAQ' (marginal aquifer material), 'PCM' (partially confining material), and 'CM' (confining material), are simulated, with the hydraulic properties of each material type as tuning parameters during calibration. The TP-based approach is applied to simulate unsteady groundwater flow in a large, complex, and strongly heterogeneous glacial aquifer system in Michigan across multiple spatial and temporal scales. The resulting model is calibrated to observed static water level data over a time span of 50 years. The results show that the TP-based conceptualization enables much more accurate and robust calibration/simulation than that based on conventional deterministic layer/zone-based conceptual representations.
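The transition-probability idea can be illustrated in one dimension: a Markov chain over the four material classes generates a synthetic borehole column. The matrix below is invented; real TP geostatistics conditions 3D cosimulations on the borehole data themselves.

```python
import numpy as np

rng = np.random.default_rng(11)
classes = ["AQ", "MAQ", "PCM", "CM"]
# assumed vertical transition probabilities between material classes (rows sum to 1)
P = np.array([[0.80, 0.10, 0.06, 0.04],
              [0.15, 0.70, 0.10, 0.05],
              [0.05, 0.10, 0.75, 0.10],
              [0.05, 0.05, 0.15, 0.75]])

state, column = 0, []
for _ in range(60):                  # simulate one 60-cell borehole column
    column.append(classes[state])
    state = rng.choice(4, p=P[state])
print(" ".join(column))
```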
USDA-ARS?s Scientific Manuscript database
Major histocompatibility complex class I (MHC-I) proteins can be expressed as cell surface or secreted proteins. To investigate whether bovine non-classical MHC-I proteins are expressed as cell surface or secreted proteins, and to assess the reactivity pattern of monoclonal antibodies with non-class...
Aguirre, Erik; Arpón, Javier; Azpilicueta, Leire; López, Peio; de Miguel, Silvia; Ramos, Victoria; Falcone, Francisco
2014-12-01
In this article, the impact of topology as well as morphology of a complex indoor environment such as a commercial aircraft in the estimation of dosimetric assessment is presented. By means of an in-house developed deterministic 3D ray-launching code, estimation of electric field amplitude as a function of position for the complete volume of a commercial passenger airplane is obtained. Estimation of electromagnetic field exposure in this environment is challenging, due to the complexity and size of the scenario, as well as to the large metallic content, giving rise to strong multipath components. By performing the calculation with a deterministic technique, the complete scenario can be considered with an optimized balance between accuracy and computational cost. The proposed method can aid in the assessment of electromagnetic dosimetry in the future deployment of embarked wireless systems in commercial aircraft.
Self-Organized Dynamic Flocking Behavior from a Simple Deterministic Map
NASA Astrophysics Data System (ADS)
Krueger, Wesley
2007-10-01
Coherent motion exhibiting large-scale order, such as flocking, swarming, and schooling behavior in animals, can arise from simple rules applied to an initial random array of self-driven particles. We present a completely deterministic dynamic map that exhibits emergent, collective, complex motion for a group of particles. Each individual particle is driven with a constant speed in two dimensions adopting the average direction of a fixed set of non-spatially related partners. In addition, the particle changes direction by π as it reaches a circular boundary. The dynamical patterns arising from these rules range from simple circular-type convective motion to highly sophisticated, complex, collective behavior which can be easily interpreted as flocking, schooling, or swarming depending on the chosen parameters. We present the results as a series of short movies and we also explore possible order parameters and correlation functions capable of quantifying the resulting coherence.
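The map described is short enough to state directly in code; system size, partner count, and speed below are arbitrary choices, and the polarization order parameter is one of the candidates the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, R, v, steps = 50, 10.0, 0.1, 500
theta = rng.uniform(0, 2*np.pi, N)              # initial headings (only random input)
pos = rng.uniform(-1, 1, (N, 2))
partners = np.array([rng.choice(N, 3, replace=False) for _ in range(N)])  # fixed partners

for _ in range(steps):                          # fully deterministic from here on
    # adopt the average direction of the fixed, non-spatially-related partners
    theta = np.angle(np.exp(1j*theta[partners]).sum(axis=1))
    pos += v * np.column_stack([np.cos(theta), np.sin(theta)])
    outside = np.linalg.norm(pos, axis=1) >= R
    theta[outside] += np.pi                     # reverse direction at circular boundary

print("polarization:", abs(np.exp(1j*theta).mean()))  # order parameter in [0, 1]
```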
ERIC Educational Resources Information Center
Du, Wenchong; Kelly, Steve W.
2013-01-01
The present study examines implicit sequence learning in adult dyslexics with a focus on comparing sequence transitions with different statistical complexities. Learning of a 12-item deterministic sequence was assessed in 12 dyslexic and 12 non-dyslexic university students. Both groups showed equivalent standard reaction time increments when the…
Bossew, Peter; Dubois, Grégoire; Tollefsen, Tore
2008-01-01
Geological classes are used to model the deterministic (drift or trend) component of the Radon potential (Friedmann's RP) in Austria. It is shown that the RP can be grouped according to geological classes, but also according to individual geological units belonging to the same class. Geological classes can thus serve as predictors for mean RP within the classes. Variability of the RP within classes or units is interpreted as the stochastic part of the regionalized variable RP; however, there does not seem to exist a smallest unit which would naturally divide the RP into a deterministic and a stochastic part. Rather, this depends on the scale of the geological maps used, down to which size of geological units is used for modelling the trend. In practice, there must be a sufficient number of data points (measurements) distributed as uniformly as possible within one unit to allow reasonable determination of the trend component.
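The drift/residual split described here amounts to subtracting class means: the per-class mean RP is the deterministic trend, and within-class scatter is the stochastic remainder. Data below are invented.

```python
import pandas as pd

# hypothetical measurements: radon potential (RP) tagged by geological class
df = pd.DataFrame({
    "geo_class": ["granite", "granite", "schist", "schist", "limestone", "limestone"],
    "RP": [45.0, 52.0, 20.0, 26.0, 12.0, 15.0],
})

trend = df.groupby("geo_class")["RP"].transform("mean")  # deterministic (drift) part
residual = df["RP"] - trend                              # stochastic part within class
print(df.assign(trend=trend, residual=residual))
```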
Cancer dormancy and criticality from a game theory perspective.
Wu, Amy; Liao, David; Kirilin, Vlamimir; Lin, Ke-Chih; Torga, Gonzalo; Qu, Junle; Liu, Liyu; Sturm, James C; Pienta, Kenneth; Austin, Robert
2018-01-01
The physics of cancer dormancy, the time between initial cancer treatment and re-emergence after a protracted period, is a puzzle. Cancer cells interact with host cells via complex, non-linear population dynamics, which can lead to very non-intuitive but perhaps deterministic and understandable progression dynamics of cancer and dormancy. We explore here the dynamics of host-cancer cell populations in the presence of (1) payoff gradients and (2) perturbations due to cell migration. We determine to what extent the time-dependence of the populations can be quantitatively understood in spite of the underlying complexity of the individual agents, and model the phenomena of dormancy.
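A minimal model of such host-cancer population dynamics is the replicator equation with an assumed payoff matrix; the slow initial drift from a nearly all-host state gives a caricature of escape from dormancy. Payoffs are illustrative, and the paper's payoff gradients and migration perturbations are not modeled here.

```python
import numpy as np

# assumed 2x2 payoff matrix: row 0 = host, row 1 = cancer
A = np.array([[1.0, 0.4],
              [1.2, 0.6]])

x = 0.99                                  # initial host fraction; cancer nearly dormant
dt = 0.01
for step in range(20000):
    f = A @ np.array([x, 1 - x])          # fitnesses of host and cancer phenotypes
    phi = x*f[0] + (1 - x)*f[1]           # mean fitness
    x += dt * x * (f[0] - phi)            # replicator equation dx/dt = x (f_host - phi)
print("final host fraction:", x)          # cancer slowly escapes dormancy and takes over
```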
Chiu, Chia-Yi; Köhn, Hans-Friedrich
2016-09-01
The asymptotic classification theory of cognitive diagnosis (ACTCD) provided the theoretical foundation for using clustering methods that do not rely on a parametric statistical model for assigning examinees to proficiency classes. Like general diagnostic classification models, clustering methods can be useful in situations where the true diagnostic classification model (DCM) underlying the data is unknown and possibly misspecified, or the items of a test conform to a mix of multiple DCMs. Clustering methods can also be an option when fitting advanced and complex DCMs encounters computational difficulties. These can range from the use of excessive CPU times to plain computational infeasibility. However, the propositions of the ACTCD have only been proven for the Deterministic Input Noisy Output "AND" gate (DINA) model and the Deterministic Input Noisy Output "OR" gate (DINO) model. For other DCMs, there does not exist a theoretical justification to use clustering for assigning examinees to proficiency classes. But if clustering is to be used legitimately, then the ACTCD must cover a larger number of DCMs than just the DINA model and the DINO model. Thus, the purpose of this article is to prove the theoretical propositions of the ACTCD for two other important DCMs, the Reduced Reparameterized Unified Model and the General Diagnostic Model.
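The clustering route referenced by the ACTCD can be sketched end-to-end for the DINA model: simulate responses from latent mastery profiles, compute a per-skill sum-score statistic, and k-means the examinees into proficiency classes. Q-matrix, slip, and guess values are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
Q = np.array([[1,0],[0,1],[1,1],[1,0],[0,1],[1,1]])       # assumed Q-matrix: items x skills
alpha = rng.integers(0, 2, (300, 2))                      # latent mastery profiles
eta = np.all(alpha[:, None, :] >= Q[None, :, :], axis=2)  # DINA ideal responses
s, g = 0.1, 0.1                                           # slip and guess parameters
p = eta*(1 - s) + (~eta)*g
Y = rng.random(p.shape) < p                               # observed item responses

# ACTCD-style statistic: per-skill mean score over items requiring that skill
W = np.column_stack([Y[:, Q[:, k] == 1].mean(axis=1) for k in range(2)])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(W)
print(np.bincount(labels))                                # 4 proficiency classes, 2 skills
```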
Aguirre, Erik; Lopez-Iturri, Peio; Azpilicueta, Leire; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco
2015-02-05
One of the main challenges in the implementation and design of context-aware scenarios is the adequate deployment strategy for Wireless Sensor Networks (WSNs), mainly due to the strong dependence of the radiofrequency physical layer with the surrounding media, which can lead to non-optimal network designs. In this work, radioplanning analysis for WSN deployment is proposed by employing a deterministic 3D ray launching technique in order to provide insight into complex wireless channel behavior in context-aware indoor scenarios. The proposed radioplanning procedure is validated with a testbed implemented with a Mobile Ad Hoc Network WSN following a chain configuration, enabling the analysis and assessment of a rich variety of parameters, such as received signal level, signal quality and estimation of power consumption. The adoption of deterministic radio channel techniques allows the design and further deployment of WSNs in heterogeneous wireless scenarios with optimized behavior in terms of coverage, capacity, quality of service and energy consumption.
Chaotic map clustering algorithm for EEG analysis
NASA Astrophysics Data System (ADS)
Bellotti, R.; De Carlo, F.; Stramaglia, S.
2004-03-01
The non-parametric chaotic map clustering algorithm has been applied to the analysis of electroencephalographic signals, in order to recognize Huntington's disease, one of the most dangerous pathologies of the central nervous system. The performance of the method has been compared with those obtained through parametric algorithms, such as K-means and deterministic annealing, and a supervised multi-layer perceptron. While supervised neural networks need a training phase, performed by means of data tagged by the genetic test, and the parametric methods require a prior choice of the number of classes to find, chaotic map clustering gives a natural evidence of the pathological class, without any training or supervision, thus providing a new efficient methodology for the recognition of patterns affected by Huntington's disease.
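A heavily simplified sketch of chaotic map clustering: data points drive couplings between chaotic maps, and clusters emerge as groups of synchronized trajectories. The coupling kernel, map, length scale, and correlation threshold are all assumptions, and the toy two-cloud data stand in for EEG features.

```python
import numpy as np

rng = np.random.default_rng(9)
# two well-separated point clouds stand in for per-channel EEG feature vectors
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(4, 0.3, (20, 2))])
n = len(X)

d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
a = 1.0                                     # assumed coupling length scale
J = np.exp(-d**2 / (2*a**2))
np.fill_diagonal(J, 0)                      # no self-coupling

x = rng.uniform(-1, 1, n)
traj = np.empty((300, n))
for t in range(300):                        # coupled chaotic (Ulam) maps
    x = (J @ (1 - 2*x**2)) / J.sum(axis=1)
    traj[t] = x

C = np.corrcoef(traj[100:].T)               # discard transient, then correlate
labels = (C[0] > 0.98).astype(int)          # synchronized with map 0 or not
print(labels)                               # the first 20 maps cluster together
```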
Asteroids - the modern challenge of celestial dynamics
NASA Astrophysics Data System (ADS)
Dikova, Smiliana
2002-11-01
Among the most powerful statements in Science are those that mark absolute limits to knowledge. For example, Relativity and Quantum Theory touched the limits of speed and accuracy. Deterministic Chaos - the new scientific paradigm of our days - also falls in this class of theories. Chaos means complexity in space and unpredictability in time. It shows the limit of our basic counting system and leads to a limited predictability of the long-time dynamical evolution. Perhaps for that reason, in 1986 Sir James Lighthill remarked on behalf of all physicists: "We collectively wish to apologize for having misled the general educated public by spreading ideas about the determinism of systems satisfying Newton's laws of motion that, after 1960, were proved incorrect." Our main thesis is that Asteroid Dynamics is the arena where the drama of Chaos versus predictability is initiated and developed. The aim of the present research is to show the way in which Deterministic Chaos restricts the long-term dynamical predictability of asteroid motions.
The relationship between stochastic and deterministic quasi-steady state approximations.
Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R
2015-11-23
The quasi steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady-state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of QSSA, and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using their deterministic counterparts providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
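A stochastic QSSA in the sense discussed here means running Gillespie's algorithm with a non-elementary propensity; the sketch below uses a repressive Hill function for production (parameters illustrative). The paper's message is that this reduction should be trusted only where the deterministic QSSA is accurate over the likely range of fluctuations.

```python
import numpy as np

rng = np.random.default_rng(6)

def ssa_hill(T=2000.0):
    """Gillespie simulation of a reduced gene-expression model: the promoter
    has been eliminated by a QSSA, so production uses a non-elementary
    (Hill-type) propensity. Parameters are illustrative."""
    beta, K, n_h, gamma = 20.0, 40.0, 2.0, 0.1
    x, t, samples = 0, 0.0, []
    while t < T:
        a_prod = beta * K**n_h / (K**n_h + x**n_h)   # repressive Hill propensity
        a_deg = gamma * x
        a0 = a_prod + a_deg
        t += rng.exponential(1.0 / a0)
        if rng.random() < a_prod / a0:
            x += 1
        else:
            x -= 1
        samples.append(x)
    return np.array(samples)

s = ssa_hill()
print("mean = %.1f, Fano factor = %.2f" % (s.mean(), s.var() / s.mean()))
```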
Parameter Estimation in Epidemiology: from Simple to Complex Dynamics
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico
2011-09-01
We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models like multi-strain dynamics to describe the virus-host interaction in dengue fever, even most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, come to their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and deterministic skeleton. The deterministic system on its own already displays complex dynamics up to deterministic chaos and coexistence of multiple attractors.
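At the simple end of the spectrum, parameter estimation reduces to least-squares calibration of an ODE model against a time series; the SIR fit on synthetic data below illustrates the framework, while the multi-strain dengue models discussed require far heavier machinery such as iterated filtering.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def sir(y, t, beta, gamma):
    S, I, R = y
    return [-beta*S*I, beta*S*I - gamma*I, gamma*I]

t_obs = np.arange(0, 30)
true = odeint(sir, [0.99, 0.01, 0.0], t_obs, args=(0.5, 0.2))[:, 1]
rng = np.random.default_rng(10)
i_obs = true + rng.normal(0, 0.005, t_obs.size)          # noisy incidence-like data

def residuals(p):
    beta, gamma = p
    model = odeint(sir, [0.99, 0.01, 0.0], t_obs, args=(beta, gamma))[:, 1]
    return model - i_obs

fit = least_squares(residuals, x0=[1.0, 0.5], bounds=([0, 0], [5, 5]))
print("estimated (beta, gamma):", fit.x)                 # true values were (0.5, 0.2)
```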
GPELab, a Matlab toolbox to solve Gross-Pitaevskii equations II: Dynamics and stochastic simulations
NASA Astrophysics Data System (ADS)
Antoine, Xavier; Duboscq, Romain
2015-08-01
GPELab is a free Matlab toolbox for modeling and numerically solving large classes of systems of Gross-Pitaevskii equations that arise in the physics of Bose-Einstein condensates. The aim of this second paper, which follows (Antoine and Duboscq, 2014), is to first present the various pseudospectral schemes available in GPELab for computing the deterministic and stochastic nonlinear dynamics of Gross-Pitaevskii equations (Antoine, et al., 2013). Next, the corresponding GPELab functions are explained in detail. Finally, some numerical examples are provided to show how the code works for the complex dynamics of BEC problems.
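GPELab itself is Matlab, but the pseudospectral Strang-splitting scheme it implements for deterministic GPE dynamics is easy to sketch in Python for a 1D condensate in a harmonic trap (grid, step size, and interaction strength below are arbitrary):

```python
import numpy as np

# 1D Gross-Pitaevskii dynamics by Strang-split pseudospectral stepping
n, L, dt, g = 256, 20.0, 1e-3, 100.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2*np.pi * np.fft.fftfreq(n, d=L/n)          # angular wavenumbers
V = 0.5 * x**2                                  # harmonic trap

psi = np.exp(-x**2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L/n))  # normalize the wave function

for _ in range(2000):
    psi *= np.exp(-0.5j*dt*(V + g*np.abs(psi)**2))   # half potential/nonlinear step
    psi = np.fft.ifft(np.exp(-0.5j*dt*k**2) * np.fft.fft(psi))  # full kinetic step
    psi *= np.exp(-0.5j*dt*(V + g*np.abs(psi)**2))   # second half step

print("norm:", np.sum(np.abs(psi)**2) * (L/n))       # conserved up to O(dt^2)
```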
Discrete structural features among interface residue-level classes.
Sowmya, Gopichandran; Ranganathan, Shoba
2015-01-01
Protein-protein interaction (PPI) is essential for molecular functions in biological cells. Investigation on protein interfaces of known complexes is an important step towards deciphering the driving forces of PPIs. Each PPI complex is specific, sensitive and selective to binding. Therefore, we have estimated the relative difference in percentage of polar residues between surface and the interface for each complex in a non-redundant heterodimer dataset of 278 complexes to understand the predominant forces driving binding. Our analysis showed ~60% of protein complexes with surface polarity greater than interface polarity (designated as class A). However, a considerable number of complexes (~40%) have interface polarity greater than surface polarity, (designated as class B), with a significantly different p-value of 1.66E-45 from class A. Comprehensive analyses of protein complexes show that interface features such as interface area, interface polarity abundance, solvation free energy gain upon interface formation, binding energy and the percentage of interface charged residue abundance distinguish among class A and class B complexes, while electrostatic visualization maps also help differentiate interface classes among complexes. Class A complexes are classical with abundant non-polar interactions at the interface; however class B complexes have abundant polar interactions at the interface, similar to protein surface characteristics. Five physicochemical interface features analyzed from the protein heterodimer dataset are discriminatory among the interface residue-level classes. These novel observations find application in developing residue-level models for protein-protein binding prediction, protein-protein docking studies and interface inhibitor design as drugs. PMID:26679043
Extreme current fluctuations in lattice gases: Beyond nonequilibrium steady states
NASA Astrophysics Data System (ADS)
Meerson, Baruch; Sasorov, Pavel V.
2014-01-01
We use the macroscopic fluctuation theory (MFT) to study large current fluctuations in nonstationary diffusive lattice gases. We identify two universality classes of these fluctuations, which we call elliptic and hyperbolic. They emerge in the limit where the deterministic mass flux is small compared to the mass flux due to the shot noise. The two classes are determined by the sign of the compressibility of the effective fluid obtained by mapping the MFT onto an inviscid hydrodynamics. An example of the elliptic class is the symmetric simple exclusion process, where, for some initial conditions, we can solve the effective hydrodynamics exactly. This leads to the super-Gaussian extreme current statistics conjectured by Derrida and Gerschenfeld [J. Stat. Phys. 137, 978 (2009), 10.1007/s10955-009-9830-1] and yields the optimal path of the system. For models of the hyperbolic class, the deterministic mass flux cannot be neglected, leading to different extreme current statistics.
Deterministic quantum teleportation with feed-forward in a solid state system.
Steffen, L; Salathe, Y; Oppliger, M; Kurpiers, P; Baur, M; Lang, C; Eichler, C; Puebla-Hellmann, G; Fedorov, A; Wallraff, A
2013-08-15
Engineered macroscopic quantum systems based on superconducting electronic circuits are attractive for experimentally exploring diverse questions in quantum information science. At the current state of the art, quantum bits (qubits) are fabricated, initialized, controlled, read out and coupled to each other in simple circuits. This enables the realization of basic logic gates, the creation of complex entangled states and the demonstration of algorithms or error correction. Using different variants of low-noise parametric amplifiers, dispersive quantum non-demolition single-shot readout of single-qubit states with high fidelity has enabled continuous and discrete feedback control of single qubits. Here we realize full deterministic quantum teleportation with feed-forward in a chip-based superconducting circuit architecture. We use a set of two parametric amplifiers for both joint two-qubit and individual qubit single-shot readout, combined with flexible real-time digital electronics. Our device uses a crossed quantum bus technology that allows us to create complex networks with arbitrary connecting topology in a planar architecture. The deterministic teleportation process succeeds with order unit probability for any input state, as we prepare maximally entangled two-qubit states as a resource and distinguish all Bell states in a single two-qubit measurement with high efficiency and high fidelity. We teleport quantum states between two macroscopic systems separated by 6 mm at a rate of 10^4 s^-1, exceeding other reported implementations. The low transmission loss of superconducting waveguides is likely to enable the range of this and other schemes to be extended to significantly larger distances, enabling tests of non-locality and the realization of elements for quantum communication at microwave frequencies. The demonstrated feed-forward may also find application in error correction schemes.
Soft network composite materials with deterministic and bio-inspired designs
Jang, Kyung-In; Chung, Ha Uk; Xu, Sheng; Lee, Chi Hwan; Luan, Haiwen; Jeong, Jaewoong; Cheng, Huanyu; Kim, Gwang-Tae; Han, Sang Youn; Lee, Jung Woo; Kim, Jeonghyun; Cho, Moongee; Miao, Fuxing; Yang, Yiyuan; Jung, Han Na; Flavin, Matthew; Liu, Howard; Kong, Gil Woo; Yu, Ki Jun; Rhee, Sang Il; Chung, Jeahoon; Kim, Byunggik; Kwak, Jean Won; Yun, Myoung Hee; Kim, Jin Young; Song, Young Min; Paik, Ungyu; Zhang, Yihui; Huang, Yonggang; Rogers, John A.
2015-01-01
Hard and soft structural composites found in biology provide inspiration for the design of advanced synthetic materials. Many examples of bio-inspired hard materials can be found in the literature; far less attention has been devoted to soft systems. Here we introduce deterministic routes to low-modulus thin film materials with stress/strain responses that can be tailored precisely to match the non-linear properties of biological tissues, with application opportunities that range from soft biomedical devices to constructs for tissue engineering. The approach combines a low-modulus matrix with an open, stretchable network as a structural reinforcement that can yield classes of composites with a wide range of desired mechanical responses, including anisotropic, spatially heterogeneous, hierarchical and self-similar designs. Demonstrative application examples in thin, skin-mounted electrophysiological sensors with mechanics precisely matched to the human epidermis and in soft, hydrogel-based vehicles for triggered drug release suggest their broad potential uses in biomedical devices. PMID:25782446
Tag-mediated cooperation with non-deterministic genotype-phenotype mapping
NASA Astrophysics Data System (ADS)
Zhang, Hong; Chen, Shu
2016-01-01
Tag-mediated cooperation provides a helpful framework for resolving evolutionary social dilemmas. However, most of the previous studies have not taken into account the genotype-phenotype distinction in tags, which may play an important role in the process of evolution. To take this into consideration, we introduce non-deterministic genotype-phenotype mapping into a tag-based model with spatial prisoner's dilemma. By our definition, the similarity between genotypic tags does not directly imply the similarity between phenotypic tags. We find that the non-deterministic mapping from genotypic tag to phenotypic tag has non-trivial effects on tag-mediated cooperation. Although we observe that high levels of cooperation can be established under a wide variety of conditions, especially when the decisiveness is moderate, the uncertainty in the determination of phenotypic tags may have a detrimental effect on the tag mechanism by disturbing the homophilic interaction structure which can explain the promotion of cooperation in tag systems. Furthermore, the non-deterministic mapping may undermine the robustness of the tag mechanism with respect to various factors such as the structure of the tag space and the tag flexibility. This observation warns us about the danger of applying the classical tag-based models to the analysis of empirical phenomena if the genotype-phenotype distinction is significant in the real world. Non-deterministic genotype-phenotype mapping thus provides a new perspective on the understanding of tag-mediated cooperation.
Limit Theorems for Dispersing Billiards with Cusps
NASA Astrophysics Data System (ADS)
Bálint, P.; Chernov, N.; Dolgopyat, D.
2011-12-01
Dispersing billiards with cusps are deterministic dynamical systems with a mild degree of chaos, exhibiting "intermittent" behavior that alternates between regular and chaotic patterns. Their statistical properties are therefore weak and delicate. They are characterized by a slow (power-law) decay of correlations, and as a result the classical central limit theorem fails. We prove that a non-classical central limit theorem holds, with a scaling factor of $\sqrt{n\log n}$ replacing the standard $\sqrt{n}$. We also derive the respective Weak Invariance Principle, and we identify the class of observables for which the classical CLT still holds.
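For concreteness, the non-classical limit theorem can be sketched as follows (a paraphrase with assumed notation: $T$ the billiard map, $f$ an observable with mean $\mu$, and $S_n = \sum_{k=0}^{n-1} f \circ T^k$ its Birkhoff sum):

$$\frac{S_n - n\mu}{\sqrt{n \log n}} \;\xrightarrow{d}\; \mathcal{N}(0, \sigma^2),$$

whereas observables in the special class identified by the authors remain subject to the classical normalization by $\sqrt{n}$.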
ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.
Morota, Gota
2017-12-20
Deterministic formulas for the accuracy of genomic predictions highlight the relationships among prediction accuracy and potential factors influencing prediction accuracy prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus genetic factors impacting prediction accuracy, while requiring only mouse navigation in a web browser. ShinyGPAS is available at: https://chikudaisei.shinyapps.io/shinygpas/ . ShinyGPAS is a shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open source software and it is hosted online as a freely available web-based resource with an intuitive graphical user interface.
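As a hedged illustration of the kind of deterministic formula ShinyGPAS visualizes, the sketch below plots a well-known expected-accuracy expression commonly attributed to Daetwyler and colleagues; the function name, parameter values, and plotting choices are ours, not taken from the ShinyGPAS source.

```python
# Illustrative sketch: a deterministic genomic prediction accuracy formula
# (Daetwyler-type). Parameter values below are assumptions for plotting only.
import numpy as np
import matplotlib.pyplot as plt

def daetwyler_accuracy(n, h2, me):
    """Expected prediction accuracy for n training individuals, trait
    heritability h2, and me independent chromosome segments."""
    return np.sqrt(n * h2 / (n * h2 + me))

n = np.linspace(100, 10000, 200)
for h2 in (0.2, 0.5, 0.8):
    plt.plot(n, daetwyler_accuracy(n, h2, me=1000), label=f"h2={h2}")
plt.xlabel("training population size")
plt.ylabel("prediction accuracy")
plt.legend()
plt.show()
```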
The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)
Smith, Philip L.; Ratcliff, Roger; McKoon, Gail
2015-01-01
Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314
Simple Deterministically Constructed Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Rodan, Ali; Tiňo, Peter
A large number of models for time series processing, forecasting or modeling follows a state-space formulation. Models in the specific class of state-space approaches, referred to as Reservoir Computing, fix their state-transition function. The state space with the associated state transition structure forms a reservoir, which is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be potentially exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both the researchers and practitioners having to rely on a series of trials and errors. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performances comparable to those of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proved theoretical limit.
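A minimal sketch of such a deterministically constructed cycle reservoir is given below; the reservoir size, cycle weight, input scale, and the alternating input-sign pattern are illustrative assumptions rather than the exact configuration used in the study.

```python
# Sketch of a simple cycle reservoir (after Rodan & Tino): the recurrent
# matrix is a single unidirectional cycle with one fixed weight r, and all
# input weights share one absolute value v with deterministic signs.
import numpy as np

def cycle_reservoir(u, n_res=100, r=0.9, v=0.5):
    """Drive a simple cycle reservoir with a scalar input sequence u."""
    W = np.zeros((n_res, n_res))
    for i in range(n_res):
        W[i, (i - 1) % n_res] = r                    # cycle topology
    signs = np.where(np.arange(n_res) % 2 == 0, 1.0, -1.0)  # assumed sign pattern
    w_in = v * signs
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)  # a linear readout is then fitted on these states

states = cycle_reservoir(np.sin(0.2 * np.arange(500)))
print(states.shape)  # (500, 100)
```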
Direct generation of linearly polarized single photons with a deterministic axis in quantum dots
NASA Astrophysics Data System (ADS)
Wang, Tong; Puchtler, Tim J.; Patra, Saroj K.; Zhu, Tongtong; Ali, Muhammad; Badcock, Tom J.; Ding, Tao; Oliver, Rachel A.; Schulz, Stefan; Taylor, Robert A.
2017-07-01
We report the direct generation of linearly polarized single photons with a deterministic polarization axis in self-assembled quantum dots (QDs), achieved by the use of non-polar InGaN without complex device geometry engineering. Here, we present a comprehensive investigation of the polarization properties of these QDs and their origin with statistically significant experimental data and rigorous k·p modeling. The experimental study of 180 individual QDs allows us to compute an average polarization degree of 0.90, with a standard deviation of only 0.08. When coupled with theoretical insights, we show that these QDs are highly insensitive to size differences, shape anisotropies, and material content variations. Furthermore, 91% of the studied QDs exhibit a polarization axis along the crystal [1-100] axis, with the other 9% polarized orthogonal to this direction. These features give non-polar InGaN QDs unique advantages in polarization control over other materials, such as conventional polar nitride, InAs, or CdSe QDs. Hence, the ability to generate single photons with polarization control makes non-polar InGaN QDs highly attractive for quantum cryptography protocols.
Khammash, Mustafa
2014-01-01
Reaction networks are systems in which the populations of a finite number of species evolve through predefined interactions. Such networks are found as modeling tools in many biological disciplines such as biochemistry, ecology, epidemiology, immunology, systems biology and synthetic biology. It is now well-established that, for small population sizes, stochastic models for biochemical reaction networks are necessary to capture randomness in the interactions. The tools for analyzing such models, however, still lag far behind their deterministic counterparts. In this paper, we bridge this gap by developing a constructive framework for examining the long-term behavior and stability properties of the reaction dynamics in a stochastic setting. In particular, we address the problems of determining ergodicity of the reaction dynamics, which is analogous to having a globally attracting fixed point for deterministic dynamics. We also examine when the statistical moments of the underlying process remain bounded with time and when they converge to their steady state values. The framework we develop relies on a blend of ideas from probability theory, linear algebra and optimization theory. We demonstrate that the stability properties of a wide class of biological networks can be assessed from our sufficient theoretical conditions that can be recast as efficient and scalable linear programs, well-known for their tractability. It is notably shown that the computational complexity is often linear in the number of species. We illustrate the validity, the efficiency and the wide applicability of our results on several reaction networks arising in biochemistry, systems biology, epidemiology and ecology. The biological implications of the results as well as an example of a non-ergodic biological network are also discussed. PMID:24968191
Deterministic-random separation in nonstationary regime
NASA Astrophysics Data System (ADS)
Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.
2016-02-01
In rotating machinery vibration analysis, the synchronous average is perhaps the most widely used technique for extracting periodic components. Periodic components are typically related to gear vibrations, misalignments, unbalances, blade rotations, reciprocating forces, etc. Their separation from other random components is essential in vibration-based diagnosis in order to discriminate useful information from masking noise. However, synchronous averaging theoretically requires the machine to operate under a stationary regime (i.e. the related vibration signals are cyclostationary) and is otherwise jeopardized by the presence of amplitude and phase modulations. A first object of this paper is to investigate the nature of the nonstationarity induced by the response of a linear time-invariant system subjected to a speed-varying excitation. For this purpose, the concept of a cyclo-non-stationary signal is introduced, which extends the class of cyclostationary signals to speed-varying regimes. Next, a "generalized synchronous average" (GSA) is designed to extract the deterministic part of a cyclo-non-stationary vibration signal, i.e. the analog of the periodic part of a cyclostationary signal. Two estimators of the GSA are proposed. The first one returns the synchronous average of the signal at predefined discrete operating speeds. A brief statistical study of it is performed, aiming to provide the user with confidence intervals that reflect the "quality" of the estimator according to the SNR and the estimated speed. The second estimator returns a smoothed version of the former by enforcing continuity over the speed axis. It helps to reconstruct the deterministic component by tracking a specific trajectory dictated by the speed profile (assumed to be known a priori). The proposed method is validated first on synthetic signals and then on actual industrial signals. The usefulness of the approach is demonstrated on envelope-based diagnosis of bearings in variable-speed operation.
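The stationary special case conveys the core idea: cut the signal into cycles of known period and average them, so the periodic (deterministic) part is reinforced while zero-mean random content cancels. A minimal sketch with synthetic values (all parameters are assumptions for illustration):

```python
# Classical synchronous average: average the signal cycle by cycle to
# estimate its periodic (deterministic) part.
import numpy as np

def synchronous_average(signal, samples_per_cycle):
    n_cycles = len(signal) // samples_per_cycle
    cycles = signal[: n_cycles * samples_per_cycle].reshape(n_cycles, samples_per_cycle)
    return cycles.mean(axis=0)

fs, period = 1000, 100                      # 10 Hz shaft, 1 kHz sampling (assumed)
t = np.arange(50 * period) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.8 * np.random.randn(t.size)
det = synchronous_average(x, period)        # estimate of the periodic part
print(det.shape)  # (100,)
```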
NASA Astrophysics Data System (ADS)
Canli, Ekrem; Thiebes, Benni; Petschko, Helene; Glade, Thomas
2015-04-01
By now there is a broad consensus that, due to human-induced global change, the frequency and magnitude of heavy precipitation events are expected to increase in certain parts of the world. Given that rainfall is the most common trigger of landslide initiation, an increase in landslide activity can also be expected there. Landslide occurrence is a globally widespread phenomenon that clearly needs to be addressed. Well-known problems in modelling landslide susceptibility and hazard make the resulting predictions uncertain. These include the lack of a universally applicable modelling solution for adequately assessing landslide susceptibility (which can be seen as the relative indication of the spatial probability of landslide initiation). Generally speaking, there are three major approaches to landslide susceptibility analysis: heuristic, statistical and deterministic models, each with different assumptions, distinctive data requirements and differently interpretable outcomes. Still, detailed comparisons of the resulting landslide susceptibility maps are rare. In this presentation, the susceptibility modelling outputs of a deterministic model (Stability INdex MAPping - SINMAP) and a statistical modelling approach (generalized additive model - GAM) are compared. SINMAP is an infinite-slope stability model which requires parameterization of soil mechanical parameters. Modelling with the generalized additive model, which represents a non-linear extension of a generalized linear model, requires a high-quality landslide inventory that serves as the dependent variable in the statistical approach. Both methods rely on topographical data derived from the DTM. The comparison has been carried out in a study area located in the district of Waidhofen/Ybbs in Lower Austria. For the whole district (ca. 132 km²), 1063 landslides have been mapped and partially used within the analysis and the validation of the model outputs. The respective susceptibility maps have been reclassified to contain three susceptibility classes each. The comparison of the susceptibility maps was performed on a grid-cell basis. A match of the maps was recorded for grid cells located in the same susceptibility class; a mismatch or deviation was recorded for locations with different assigned susceptibility classes (up to two classes' difference). Although the modelling approaches differ significantly, more than 70% of the pixels show a match in the same susceptibility class. A mismatch of two classes occurred in less than 2% of all pixels. Although the result looks promising and strengthens the confidence in the susceptibility zonation for this area, some of the general drawbacks of the respective approaches still have to be addressed in further detail. Future work is heading towards an integration of probabilistic aspects into deterministic modelling.
Complex Population Dynamics and the Coalescent Under Neutrality
Volz, Erik M.
2012-01-01
Estimates of the coalescent effective population size Ne can be poorly correlated with the true population size. The relationship between Ne and the population size is sensitive to the way in which birth and death rates vary over time. The problem of inference is exacerbated when the mechanisms underlying population dynamics are complex and depend on many parameters. In instances where nonparametric estimators of Ne such as the skyline struggle to reproduce the correct demographic history, model-based estimators that can draw on prior information about population size and growth rates may be more efficient. A coalescent model is developed for a large class of populations such that the demographic history is described by a deterministic nonlinear dynamical system of arbitrary dimension. This class of demographic model differs from those typically used in population genetics. Birth and death rates are not fixed, and no assumptions are made regarding the fraction of the population sampled. Furthermore, the population may be structured in such a way that gene copies reproduce both within and across demes. For this large class of models, it is shown how to derive the rate of coalescence, as well as the likelihood of a gene genealogy with heterochronous sampling and labeled taxa, and how to simulate a coalescent tree conditional on a complex demographic history. This theoretical framework encapsulates many of the models used by ecologists and epidemiologists and should facilitate the integration of population genetics with the study of mathematical population dynamics. PMID:22042576
Enumeration and extension of non-equivalent deterministic update schedules in Boolean networks.
Palma, Eduardo; Salinas, Lilian; Aracena, Julio
2016-03-01
Boolean networks (BNs) are commonly used to model genetic regulatory networks (GRNs). Due to the sensitivity of the dynamical behavior to changes in the updating scheme (the order in which the nodes of a network update their state values), it is increasingly common to use different updating rules in the modeling of GRNs to better capture an observed biological phenomenon and thus to obtain more realistic models. In Aracena et al., equivalence classes of deterministic update schedules in BNs that yield exactly the same dynamical behavior of the network were defined according to a certain label function on the arcs of the interaction digraph defined for each scheme. The interaction digraphs so labeled (update digraphs) thus encode the non-equivalent schemes. We address the problem of enumerating all non-equivalent deterministic update schedules of a given BN. First, we show that it is an intractable problem in general. To solve it, we first construct an algorithm that determines the set of update digraphs of a BN. For that, we use a divide-and-conquer methodology based on the structural characteristics of the interaction digraph. Next, for each update digraph we determine an associated update schedule. This algorithm also works in the case where there is partial knowledge about the relative order of the updating of the states of the nodes. We exhibit some examples of how the algorithm works on some GRNs published in the literature. An executable file of the UpdateLabel algorithm made in Java and the files with the outputs of the algorithms used with the GRNs are available at: www.inf.udec.cl/∼lilian/UDE/ CONTACT: lilisalinas@udec.cl Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Minimum complexity echo state network.
Rodan, Ali; Tino, Peter
2011-01-01
Reservoir computing (RC) refers to a new class of state-space models with a fixed state transition structure (the reservoir) and an adaptable readout from the state space. The reservoir is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be exploited by the reservoir-to-output readout mapping. The field of RC has been growing rapidly with many successful applications. However, RC has been criticized for not being principled enough. Reservoir construction is largely driven by a series of randomized model-building stages, with both researchers and practitioners having to rely on a series of trials and errors. To initialize a systematic study of the field, we concentrate on one of the most popular classes of RC methods, namely the echo state network, and ask: What is the minimal complexity of reservoir construction for obtaining competitive models, and what is the memory capacity (MC) of such simplified reservoirs? On a number of widely used time series benchmarks of different origin and characteristics, as well as by conducting a theoretical analysis, we show that a simple deterministically constructed cycle reservoir is comparable to the standard echo state network methodology. The (short-term) MC of linear cyclic reservoirs can be made arbitrarily close to the proved optimal value.
Network approaches for expert decisions in sports.
Glöckner, Andreas; Heinen, Thomas; Johnson, Joseph G; Raab, Markus
2012-04-01
This paper focuses on a model comparison to explain choices based on gaze behavior via simulation procedures. We tested two classes of models, a parallel constraint satisfaction (PCS) artificial neuronal network model and an accumulator model in a handball decision-making task from a lab experiment. Both models predict action in an option-generation task in which options can be chosen from the perspective of a playmaker in handball (i.e., passing to another player or shooting at the goal). Model simulations are based on a dataset of generated options together with gaze behavior measurements from 74 expert handball players for 22 pieces of video footage. We implemented both classes of models as deterministic vs. probabilistic models including and excluding fitted parameters. Results indicated that both classes of models can fit and predict participants' initially generated options based on gaze behavior data, and that overall, the classes of models performed about equally well. Early fixations were thereby particularly predictive for choices. We conclude that the analyses of complex environments via network approaches can be successfully applied to the field of experts' decision making in sports and provide perspectives for further theoretical developments. Copyright © 2011 Elsevier B.V. All rights reserved.
On chaos synchronization and secure communication.
Kinzel, W; Englert, A; Kanter, I
2010-01-28
Chaos synchronization, in particular isochronal synchronization of two chaotic trajectories to each other, may be used to build a means of secure communication over a public channel. In this paper, we give an overview of coupling schemes of Bernoulli units deduced from chaotic laser systems, different ways to transmit information by chaos synchronization and the advantage of bidirectional over unidirectional coupling with respect to secure communication. We present the protocol for using dynamical private commutative filters for tap-proof transmission of information that maps the task of a passive attacker to the class of non-deterministic polynomial time-complete problems. This journal is © 2010 The Royal Society
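A toy sketch of the underlying mechanism: two chaotic Bernoulli units, x -> (a*x) mod 1, that synchronize isochronally under mutual coupling. The map slope, coupling strength, and initial conditions below are illustrative assumptions, not the coupling schemes analyzed in the paper.

```python
# Two bidirectionally coupled Bernoulli maps; for sufficiently strong
# coupling the trajectory difference contracts and the units synchronize.
import numpy as np

a, eps, n = 1.5, 0.4, 2000   # assumed slope, coupling, and step count
x, y = 0.3, 0.7
for _ in range(n):
    fx, fy = (a * x) % 1.0, (a * y) % 1.0
    x, y = (1 - eps) * fx + eps * fy, (1 - eps) * fy + eps * fx
print(abs(x - y))  # -> ~0 once the units are synchronized
```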
NASA Technical Reports Server (NTRS)
Hendricks, Robert C.; Zaretsky, Erwin V.
2001-01-01
Critical component design is based on minimizing product failures that result in loss of life. Potential catastrophic failures are reduced to secondary failures when components are removed for cause or after a set operating time in the system. Issues of liability and the cost of component removal become of paramount importance. Deterministic design with factors of safety and probabilistic design address, but lack, the essential characteristics for the design of critical components. In deterministic design and fabrication there are heuristic rules and safety factors developed over time for large sets of structural/material components. These factors did not come without cost. Many designs failed, and many rules (codes) have standing committees to oversee their proper usage and enforcement. In probabilistic design, not only are failures a given, the failures are calculated; an element of risk is assumed based on empirical failure data for large classes of component operations. Failure of a class of components can be predicted, yet one cannot predict when a specific component will fail. The analogy is to the life insurance industry, where very careful statistics are kept on classes of individuals. For a specific class, life span can be predicted within statistical limits, yet the life span of a specific member of that class cannot be predicted.
Making classical ground-state spin computing fault-tolerant.
Crosson, I J; Bacon, D; Brown, K R
2010-09-01
We examine a model of classical deterministic computing in which the ground state of the classical system is a spatial history of the computation. This model is relevant to quantum dot cellular automata as well as to recent universal adiabatic quantum computing constructions. In its most primitive form, systems constructed in this model cannot compute in an error-free manner when working at nonzero temperature. However, by exploiting a mapping between the partition function for this model and probabilistic classical circuits we are able to show that it is possible to make this model effectively error-free. We achieve this by using techniques in fault-tolerant classical computing and the result is that the system can compute effectively error-free if the temperature is below a critical temperature. We further link this model to computational complexity and show that a certain problem concerning finite temperature classical spin systems is complete for the complexity class Merlin-Arthur. This provides an interesting connection between the physical behavior of certain many-body spin systems and computational complexity.
Absorbing phase transitions in deterministic fixed-energy sandpile models
NASA Astrophysics Data System (ADS)
Park, Su-Chan
2018-03-01
We investigate the origin of the difference, which was noticed by Fey et al. [Phys. Rev. Lett. 104, 145703 (2010), 10.1103/PhysRevLett.104.145703], between the steady state density of an Abelian sandpile model (ASM) and the transition point of its corresponding deterministic fixed-energy sandpile model (DFES). Being deterministic, the configuration space of a DFES can be divided into two disjoint classes such that every configuration in one class should evolve into one of absorbing states, whereas no configurations in the other class can reach an absorbing state. Since the two classes are separated in terms of toppling dynamics, the system can be made to exhibit an absorbing phase transition (APT) at various points that depend on the initial probability distribution of the configurations. Furthermore, we show that in general the transition point also depends on whether an infinite-size limit is taken before or after the infinite-time limit. To demonstrate, we numerically study the two-dimensional DFES with Bak-Tang-Wiesenfeld toppling rule (BTW-FES). We confirm that there are indeed many thresholds. Nonetheless, the critical phenomena at various transition points are found to be universal. We furthermore discuss a microscopic absorbing phase transition, or a so-called spreading dynamics, of the BTW-FES, to find that the phase transition in this setting is related to the dynamical isotropic percolation process rather than self-organized criticality. In particular, we argue that choosing recurrent configurations of the corresponding ASM as an initial configuration does not allow for a nontrivial APT in the DFES.
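A minimal sketch of a deterministic fixed-energy sandpile with the BTW toppling rule on a small torus, illustrating the absorbing-versus-active dichotomy described above; the lattice size, initial density, and step budget are illustrative assumptions.

```python
# Fixed-energy BTW sandpile on an L x L torus: grains are conserved, and the
# parallel toppling dynamics either reaches an absorbing configuration
# (all sites below threshold) or stays active indefinitely.
import numpy as np

L, density, threshold, max_steps = 32, 2.1, 4, 10000
rng = np.random.default_rng(0)
z = rng.poisson(density, size=(L, L))       # random initial configuration

for step in range(max_steps):
    unstable = z >= threshold
    if not unstable.any():
        print(f"absorbing state reached after {step} parallel updates")
        break
    z = z - 4 * unstable                    # each unstable site sheds 4 grains
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        z = z + np.roll(unstable, shift, axis=axis)  # one grain per neighbor
else:
    print("still active: likely above the transition point")
```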
Nonlocality distillation and postquantum theories with trivial communication complexity.
Brunner, Nicolas; Skrzypczyk, Paul
2009-04-24
We first present a protocol for deterministically distilling nonlocality, building upon a recent result of Forster et al. [Phys. Rev. Lett. 102, 120401 (2009), 10.1103/PhysRevLett.102.120401]. Our protocol, which is optimal for two-copy distillation, works efficiently for a specific class of postquantum nonlocal boxes, which we term correlated nonlocal boxes. In the asymptotic limit, all correlated nonlocal boxes are distilled to the maximally nonlocal box of Popescu and Rohrlich. Then, taking advantage of a result of Brassard et al. [Phys. Rev. Lett. 96, 250401 (2006), 10.1103/PhysRevLett.96.250401], we show that all correlated nonlocal boxes make communication complexity trivial, and therefore appear very unlikely to exist in nature. Astonishingly, some of these nonlocal boxes are arbitrarily close to the set of classical correlations. This result therefore gives new insight into the problem of why quantum nonlocality is limited.
Casey, M
1996-08-15
Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attractor structure of such systems is given. This knowledge effectively predicts activation space dynamics, which allows one to understand RNN computation dynamics in spite of complexity in activation dynamics. This theory provides a theoretical framework for understanding finite state machine (FSM) extraction techniques and can be used to improve training methods for RNNs performing FSM computations. This provides an example of a successful approach to understanding a general class of complex systems that has not been explicitly designed, e.g., systems that have evolved or learned their internal structure.
Frisenda, Riccardo; Navarro-Moratalla, Efrén; Gant, Patricia; Pérez De Lara, David; Jarillo-Herrero, Pablo; Gorbachev, Roman V; Castellanos-Gomez, Andres
2018-01-02
Designer heterostructures can now be assembled layer-by-layer with unmatched precision thanks to the recently developed deterministic placement methods to transfer two-dimensional (2D) materials. This possibility constitutes the birth of a very active research field on the so-called van der Waals heterostructures. Moreover, these deterministic placement methods also open the door to fabricate complex devices, which would be otherwise very difficult to achieve by conventional bottom-up nanofabrication approaches, and to fabricate fully-encapsulated devices with exquisite electronic properties. The integration of 2D materials with existing technologies such as photonic and superconducting waveguides and fiber optics is another exciting possibility. Here, we review the state-of-the-art of the deterministic placement methods, describing and comparing the different alternative methods available in the literature, and we illustrate their potential to fabricate van der Waals heterostructures, to integrate 2D materials into complex devices and to fabricate artificial bilayer structures where the layers present a user-defined rotational twisting angle.
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-08-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies the most significant expansion coefficients adaptively. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
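The sketch below illustrates the general idea of a sparse polynomial surrogate, not the authors' Smolyak scheme: fit a tensorized Legendre expansion to a cheap stand-in model by least squares and retain only the most significant coefficients. The model, degrees, and sample sizes are made-up assumptions.

```python
# Sparse Legendre surrogate: keep only the largest expansion coefficients.
import numpy as np
from numpy.polynomial import legendre
from itertools import product

def model(p):                       # stand-in for an expensive network model
    return np.sin(p[..., 0]) * np.exp(0.3 * p[..., 1])

deg, n_train, keep = 6, 400, 10
rng = np.random.default_rng(1)
P = rng.uniform(-1, 1, size=(n_train, 2))   # parameters scaled to [-1, 1]^2
y = model(P)

# design matrix of products of 1-D Legendre polynomials with total degree <= deg
idx = [(i, j) for i, j in product(range(deg + 1), repeat=2) if i + j <= deg]
A = np.column_stack([
    legendre.legval(P[:, 0], np.eye(deg + 1)[i]) *
    legendre.legval(P[:, 1], np.eye(deg + 1)[j]) for i, j in idx])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

sparse = np.argsort(np.abs(coef))[::-1][:keep]   # most significant terms
approx = A[:, sparse] @ coef[sparse]
print("relative error of sparse surrogate:",
      np.linalg.norm(approx - y) / np.linalg.norm(y))
```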
Packaging Concerns and Techniques for Large Devices: Challenges for Complex Electronics
NASA Technical Reports Server (NTRS)
LaBel, Kenneth A.; Sampson, Michael J.
2010-01-01
NASA is going to have to accept the use of non-hermetic packages for complex devices. There are a large number of packaging options available. Space application subjects the packages to stresses that they were probably not designed for (vacuum, for instance). NASA has to find a way of having assurance in the integrity of the packages. There are manufacturers interested in qualifying non-hermetic packages to MIL-PRF-38535 Class V. Government space users agree that Class V should be for hermetic packages only. NASA is working on a new class for non-hermetic packages for M38535 Appendix B, "Class Y". Testing for package integrity will be required but can be package-specific, as described by a Package Integrity Test Plan. The plan is developed by the manufacturer and approved by DSCC and government space users.
Evidence for a non-canonical role of HDAC5 in regulation of the cardiac Ncx1 and Bnp genes.
Harris, Lillianne G; Wang, Sabina H; Mani, Santhosh K; Kasiganesan, Harinath; Chou, C James; Menick, Donald R
2016-05-05
Class IIa histone deacetylases (HDACs) are very important for tissue specific gene regulation in development and pathology. Because class IIa HDAC catalytic activity is low, their exact molecular roles have not been fully elucidated. Studies have suggested that class IIa HDACs may serve as a scaffold to recruit the catalytically active class I HDAC complexes to their substrate. Here we directly address whether the class IIa HDAC HDAC5 may function as a scaffold to recruit co-repressor complexes to promoters. We examined two well-characterized cardiac promoters, the sodium calcium exchanger (Ncx1) and the brain natriuretic peptide (Bnp), whose hypertrophic upregulation is mediated by both class I and IIa HDACs. Selective inhibition of class IIa HDACs did not prevent adrenergic stimulated Ncx1 upregulation, however HDAC5 knockout prevented pressure overload induced Ncx1 upregulation. Using the HDAC5(-/-) mouse we show that HDAC5 is required for the interaction of the HDAC1/2/Sin3a co-repressor complexes with the Nkx2.5 and YY1 transcription factors and critical for recruitment of the HDAC1/Sin3a co-repressor complex to either the Ncx1 or Bnp promoter. Our novel findings support a non-canonical role of class IIa HDACs in the scaffolding of transcriptional regulatory complexes, which may be relevant for therapeutic intervention for pathologies. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Astrophysics Data System (ADS)
Popov, Pavel; Sideris, Athanasios; Sirignano, William
2014-11-01
We examine the non-linear dynamics of the transverse modes of combustion-driven acoustic instability in a liquid-propellant rocket engine. Triggering can occur, whereby small perturbations from mean conditions decay, while larger disturbances grow to a limit-cycle whose amplitude may be comparable to the mean pressure. For a deterministic perturbation, the system is also deterministic, computed by coupled finite-volume solvers at low computational cost for a single realization. The randomness of the triggering disturbance is captured by treating the injector flow rates, local pressure disturbances, and sudden acceleration of the entire combustion chamber as random variables. The combustor chamber with its many sub-fields resulting from many injector ports may be viewed as a multi-scale complex system wherein the developing acoustic oscillation is the emergent structure. Numerical simulation of the resulting stochastic PDE system is performed using the polynomial chaos expansion method. The overall probability of unstable growth is assessed in different regions of the parameter space. We address, in particular, the seven-injector, rectangular Purdue University experimental combustion chamber. In addition to the novel geometry, new features include disturbances caused by engine acceleration and unsteady thruster nozzle flow.
Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Michael
Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
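A minimal Gaussian-process regression sketch of the generic kind this work builds on, with a squared-exponential covariance and hand-fixed hyperparameters; all values are illustrative assumptions rather than the project's models.

```python
# GP regression: posterior predictive mean and variance on synthetic data.
import numpy as np

def sqexp(a, b, ell=1.0, sig=1.0):
    d = a[:, None] - b[None, :]
    return sig**2 * np.exp(-0.5 * (d / ell) ** 2)

x = np.linspace(0, 10, 25)
y = np.sin(x) + 0.1 * np.random.default_rng(2).standard_normal(x.size)
xs = np.linspace(0, 10, 200)

K = sqexp(x, x) + 0.1**2 * np.eye(x.size)      # training covariance + noise
Ks = sqexp(xs, x)
mean = Ks @ np.linalg.solve(K, y)              # posterior predictive mean
var = sqexp(xs, xs).diagonal() - np.einsum(
    'ij,ji->i', Ks, np.linalg.solve(K, Ks.T))  # posterior predictive variance
print(mean[:3], var[:3])
```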
The Non-Signalling theorem in generalizations of Bell's theorem
NASA Astrophysics Data System (ADS)
Walleczek, J.; Grössing, G.
2014-04-01
Does "epistemic non-signalling" ensure the peaceful coexistence of special relativity and quantum nonlocality? The possibility of an affirmative answer is of great importance to deterministic approaches to quantum mechanics given recent developments towards generalizations of Bell's theorem. By generalizations of Bell's theorem we here mean efforts that seek to demonstrate the impossibility of any deterministic theories to obey the predictions of Bell's theorem, including not only local hidden-variables theories (LHVTs) but, critically, of nonlocal hidden-variables theories (NHVTs) also, such as de Broglie-Bohm theory. Naturally, in light of the well-established experimental findings from quantum physics, whether or not a deterministic approach to quantum mechanics, including an emergent quantum mechanics, is logically possible, depends on compatibility with the predictions of Bell's theorem. With respect to deterministic NHVTs, recent attempts to generalize Bell's theorem have claimed the impossibility of any such approaches to quantum mechanics. The present work offers arguments showing why such efforts towards generalization may fall short of their stated goal. In particular, we challenge the validity of the use of the non-signalling theorem as a conclusive argument in favor of the existence of free randomness, and therefore reject the use of the non-signalling theorem as an argument against the logical possibility of deterministic approaches. We here offer two distinct counter-arguments in support of the possibility of deterministic NHVTs: one argument exposes the circularity of the reasoning which is employed in recent claims, and a second argument is based on the inconclusive metaphysical status of the non-signalling theorem itself. We proceed by presenting an entirely informal treatment of key physical and metaphysical assumptions, and of their interrelationship, in attempts seeking to generalize Bell's theorem on the basis of an ontic, foundational interpretation of the non-signalling theorem. We here argue that the non-signalling theorem must instead be viewed as an epistemic, operational theorem i.e. one that refers exclusively to what epistemic agents can, or rather cannot, do. That is, we emphasize that the non-signalling theorem is a theorem about the operational inability of epistemic agents to signal information. In other words, as a proper principle, the non-signalling theorem may only be employed as an epistemic, phenomenological, or operational principle. Critically, our argument emphasizes that the non-signalling principle must not be used as an ontic principle about physical reality as such, i.e. as a theorem about the nature of physical reality independently of epistemic agents e.g. human observers. One major reason in favor of our conclusion is that any definition of signalling or of non-signalling invariably requires a reference to epistemic agents, and what these agents can actually measure and report. Otherwise, the non-signalling theorem would equal a general "no-influence" theorem. In conclusion, under the assumption that the non-signalling theorem is epistemic (i.e. "epistemic non-signalling"), the search for deterministic approaches to quantum mechanics, including NHVTs and an emergent quantum mechanics, continues to be a viable research program towards disclosing the foundations of physical reality at its smallest dimensions.
Chaotic dynamics and control of deterministic ratchets.
Family, Fereydoon; Larrondo, H A; Zarlenga, D G; Arizmendi, C M
2005-11-30
Deterministic ratchets, in the inertial and also in the overdamped limit, exhibit very complex dynamics, including chaotic motion. This deterministically induced chaos mimics, to some extent, the role of noise, while changing some of the basic properties of thermal ratchets; for example, inertial ratchets can exhibit multiple reversals in the current direction. The direction depends on the amount of friction and inertia, which makes these ratchets especially interesting for technological applications such as biological particle separation. We review in this work different strategies to control the current of inertial ratchets. The control parameters analysed are the strength and frequency of the periodic external force, the strength of the quenched noise that models a non-perfectly-periodic potential, and the mass of the particles. Control mechanisms are associated with the fractal nature of the basins of attraction of the mean-velocity attractors. The control of the overdamped motion of noninteracting particles in a rocking periodic asymmetric potential is also reviewed. The analysis is focused on synchronization of the motion of the particles with the external sinusoidal driving force. Two cases are considered: a perfect lattice without disorder and a lattice with noncorrelated quenched noise. The amplitude of the driving force and the strength of the quenched noise are used as control parameters.
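A sketch of the overdamped deterministic rocking ratchet described above: a particle in an asymmetric periodic potential driven by a sinusoidal force, integrated with explicit Euler. The potential shape and all parameter values are illustrative assumptions.

```python
# Overdamped rocking ratchet: dx/dt = F(x) + A*sin(omega*t).
import numpy as np

def force(x):
    # minus the gradient of the assumed asymmetric potential
    # V(x) = -(sin(2*pi*x) + 0.25*sin(4*pi*x)) / (2*pi)
    return np.cos(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * x)

A, omega, dt, n_steps = 1.2, 0.5, 1e-3, int(2e5)
x, t = 0.0, 0.0
for _ in range(n_steps):
    x += dt * (force(x) + A * np.sin(omega * t))   # explicit Euler step
    t += dt
print("mean velocity:", x / t)   # a nonzero drift signals directed transport
```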
On the number of different dynamics in Boolean networks with deterministic update schedules.
Aracena, J; Demongeot, J; Fanchon, E; Montalva, M
2013-04-01
Deterministic Boolean networks are a type of discrete dynamical systems widely used in the modeling of genetic networks. The dynamics of such systems is characterized by the local activation functions and the update schedule, i.e., the order in which the nodes are updated. In this paper, we address the problem of determining the different dynamics of a Boolean network when the update schedule is changed. We begin by proving that the problem of the existence of a pair of update schedules with different dynamics is NP-complete. However, we show that certain structural properties of the interaction digraph are sufficient to guarantee distinct dynamics of a network. In [1] the authors define equivalence classes which have the property that all the update schedules of a given class yield the same dynamics. In order to determine the dynamics associated to a network, we develop an algorithm to efficiently enumerate the above equivalence classes by selecting a representative update schedule for each class with a minimum number of blocks. Finally, we run this algorithm on the well-known Arabidopsis thaliana network to determine the full spectrum of its different dynamics. Copyright © 2013 Elsevier Inc. All rights reserved.
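A toy illustration (not the paper's algorithm) of why update schedules matter: the same two-node Boolean network reaches a two-cycle under the parallel schedule but a fixed point under a sequential one. The network and schedules are minimal examples chosen for clarity.

```python
# Block-sequential updating of a Boolean network: blocks are applied one
# after another, nodes inside a block synchronously.

# local functions: x0 <- NOT x1, x1 <- NOT x0
f = [lambda x: int(not x[1]), lambda x: int(not x[0])]

def step(x, schedule):
    x = list(x)
    for block in schedule:
        new = {i: f[i](x) for i in block}   # evaluate the block synchronously
        for i, v in new.items():
            x[i] = v
    return tuple(x)

def trajectory(x, schedule, n=4):
    traj = [x]
    for _ in range(n):
        x = step(x, schedule)
        traj.append(x)
    return traj

print(trajectory((0, 0), [(0, 1)]))      # parallel: (0,0) <-> (1,1) two-cycle
print(trajectory((0, 0), [(0,), (1,)]))  # sequential: fixed point (1,0)
```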
Complexity in Dynamical Systems
NASA Astrophysics Data System (ADS)
Moore, Cristopher David
The study of chaos has shown us that deterministic systems can have a kind of unpredictability based on a limited knowledge of their initial conditions; after a finite time, the motion appears essentially random. This observation has inspired a general interest in the subject of unpredictability, and more generally, complexity: how can we characterize how "complex" a dynamical system is? In this thesis, we attempt to answer this question with a paradigm of complexity that comes from computer science: we extract sets of symbol sequences, or languages, from a dynamical system using standard methods of symbolic dynamics; we then ask what kinds of grammars or automata are needed to generate these languages. This places them in the Chomsky hierarchy, which in turn tells us something about how subtle and complex the dynamical system's behavior is. This gives us insight into the question of unpredictability, since these automata can also be thought of as computers attempting to predict the system. In the culmination of the thesis, we find a class of smooth, two-dimensional maps which are equivalent to the highest class in the Chomsky hierarchy, the Turing machine; they are capable of universal computation. Therefore, these systems possess a kind of unpredictability qualitatively different from the usual "chaos": even if the initial conditions are known exactly, questions about the system's long-term dynamics are undecidable. No algorithm exists to answer them. Although this kind of unpredictability has been discussed in the context of distributed, many-degree-of-freedom systems (for instance, cellular automata), we believe this is the first example of such phenomena in a smooth, finite-degree-of-freedom system.
Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.
Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen
In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses are explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
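A schematic simulation of the random-packet-loss setting: single-integrator agents on a ring apply a sampled-data consensus update while each directed link fails independently with a Bernoulli probability. The graph, gains, and loss rate are assumptions for illustration, not the paper's controller design.

```python
# Sampled-data consensus under Bernoulli packet losses on a ring graph.
import numpy as np

rng = np.random.default_rng(3)
n, h, p, steps = 6, 0.1, 0.3, 400           # agents, sampling gain, loss prob.
A = np.zeros((n, n))
for i in range(n):                          # undirected ring topology
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

x = rng.standard_normal(n)
for _ in range(steps):
    mask = (rng.random((n, n)) > p) * A     # directed links surviving this sample
    L = np.diag(mask.sum(axis=1)) - mask    # Laplacian of the surviving graph
    x = x - h * L @ x                       # sampled-data consensus update
print("disagreement:", np.ptp(x))           # -> ~0 when consensus is reached
```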
Review of smoothing methods for enhancement of noisy data from heavy-duty LHD mining machines
NASA Astrophysics Data System (ADS)
Wodecki, Jacek; Michalak, Anna; Stefaniak, Paweł
2018-01-01
Appropriate analysis of data measured on heavy-duty mining machines is essential for process monitoring, management, and optimization. Some particular classes of machines, for example LHD (load-haul-dump) machines, hauling trucks, drilling/bolting machines, etc., are characterized by cyclic operation. In those cases, identification of cycles and their segments, or in other words simply data segmentation, is key to evaluating their performance, which may be very useful from the management point of view, for example by enabling optimization of the process. However, in many cases such raw signals are contaminated with various artifacts and are in general expected to be very noisy, which makes the segmentation task very difficult or even impossible. To deal with that problem, there is a need for efficient smoothing methods that retain the informative trends in the signals while disregarding noise and other undesired non-deterministic components. In this paper the authors present a review of various approaches to diagnostic data smoothing. The described methods can be used in a fast and efficient way, effectively cleaning the signals while preserving the informative deterministic behaviour that is crucial to precise segmentation and other approaches to industrial data analysis.
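As an illustration of the class of methods reviewed, the sketch below smooths a synthetic noisy cyclic signal (a stand-in for an LHD process signal) with a moving median followed by a moving average; the window lengths are arbitrary assumptions.

```python
# Two simple smoothers: a moving median (robust to spike artifacts) chained
# with a moving average (suppresses broadband noise). Window sizes assumed.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000)
raw = np.sign(np.sin(2 * np.pi * 0.5 * t)) + rng.normal(0, 0.8, t.size)  # cyclic + noise
raw[::97] += 6.0                                   # sparse artifacts (spikes)

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

def moving_median(x, w):
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + w]) for i in range(x.size)])

smooth = moving_average(moving_median(raw, 31), 51)  # median kills spikes first
print("signal std before/after smoothing:", raw.std(), smooth.std())
```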
Uncertain dynamical systems: A differential game approach
NASA Technical Reports Server (NTRS)
Gutman, S.
1976-01-01
A class of dynamical systems in a conflict situation is formulated and discussed, and the formulation is applied to the study of an important class of systems in the presence of uncertainty. The uncertainty is deterministic and the only assumption is that its value belongs to a known compact set. Asymptotic stability is fully discussed with application to variable structure and model reference control systems.
Bursting as a source of non-linear determinism in the firing patterns of nigral dopamine neurons
Jeong, Jaeseung; Shi, Wei-Xing; Hoffman, Ralph; Oh, Jihoon; Gore, John C.; Bunney, Benjamin S.; Peterson, Bradley S.
2012-01-01
Nigral dopamine (DA) neurons in vivo exhibit complex firing patterns consisting of tonic single-spikes and phasic bursts that encode information for certain types of reward-related learning and behavior. Non-linear dynamical analysis has previously demonstrated the presence of a non-linear deterministic structure in complex firing patterns of DA neurons, yet the origin of this non-linear determinism remains unknown. In this study, we hypothesized that bursting activity is the primary source of non-linear determinism in the firing patterns of DA neurons. To test this hypothesis, we investigated the dimension complexity of inter-spike interval data recorded in vivo from bursting and non-bursting DA neurons in the chloral hydrate-anesthetized rat substantia nigra. We found that bursting DA neurons exhibited non-linear determinism in their firing patterns, whereas non-bursting DA neurons showed truly stochastic firing patterns. Determinism was also detected in the isolated burst and inter-burst interval data extracted from firing patterns of bursting neurons. Moreover, less bursting DA neurons in halothane-anesthetized rats exhibited higher dimensional spiking dynamics than did more bursting DA neurons in chloral hydrate-anesthetized rats. These results strongly indicate that bursting activity is the main source of low-dimensional, non-linear determinism in the firing patterns of DA neurons. This finding furthermore suggests that bursts are the likely carriers of meaningful information in the firing activities of DA neurons. PMID:22831464
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
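For orientation, the following is a minimal sketch of the standard stochastic EnKF analysis step to which the deterministic mean-field approximation is compared; it uses a scalar state with a linear observation, and all numbers are illustrative.

```python
# Standard (stochastic) EnKF analysis step for a scalar state with a linear
# observation y = x + noise. Numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
N = 500                                  # ensemble size
ens = rng.normal(0.0, 2.0, N)            # forecast ensemble ~ N(0, 4)
R = 1.0                                  # observation noise variance
y = 1.5                                  # observed value

C = np.var(ens, ddof=1)                  # sample forecast covariance
K = C / (C + R)                          # Kalman gain (observation operator H = 1)
perturbed = y + rng.normal(0.0, np.sqrt(R), N)   # perturbed observations
analysis = ens + K * (perturbed - ens)   # EnKF analysis ensemble

print("analysis mean/var:", analysis.mean(), analysis.var(ddof=1))
# Exact Kalman posterior for comparison (with C = 4): mean = 1.2, var = 0.8
```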
Active temporal multiplexing of indistinguishable heralded single photons
Xiong, C.; Zhang, X.; Liu, Z.; Collins, M. J.; Mahendra, A.; Helt, L. G.; Steel, M. J.; Choi, D.-Y.; Chae, C. J.; Leong, P. H. W.; Eggleton, B. J.
2016-01-01
It is a fundamental challenge in quantum optics to deterministically generate indistinguishable single photons through non-deterministic nonlinear optical processes, due to the intrinsic coupling of single- and multi-photon-generation probabilities in these processes. Actively multiplexing photons generated in many temporal modes can decouple these probabilities, but key issues are to minimize resource requirements to allow scalability, and to ensure indistinguishability of the generated photons. Here we demonstrate the multiplexing of photons from four temporal modes solely using fibre-integrated optics and off-the-shelf electronic components. We show a 100% enhancement to the single-photon output probability without introducing additional multi-photon noise. Photon indistinguishability is confirmed by a fourfold Hong–Ou–Mandel quantum interference with a 91±16% visibility after subtracting multi-photon noise due to high pump power. Our demonstration paves the way for scalable multiplexing of many non-deterministic photon sources to a single near-deterministic source, which will be of benefit to future quantum photonic technologies. PMID:26996317
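The resource argument can be illustrated with a two-line calculation: if a single heralded source fires with probability p per pulse, multiplexing m temporal modes ideally raises the success probability to 1-(1-p)^m; switch losses and heralding imperfections, which the experiment must also manage, are ignored here.

```python
# Ideal (lossless) single-photon output probability when multiplexing m
# heralded temporal modes with per-mode heralding probability p. The ~100%
# enhancement reported for four modes is lower than this ideal because of
# real-world switch loss.
p = 0.1                                   # assumed single-mode probability
for m in (1, 2, 4, 8):
    print(m, "modes:", 1 - (1 - p) ** m)
```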
Orthogonal-state-based cryptography in quantum mechanics and local post-quantum theories
NASA Astrophysics Data System (ADS)
Aravinda, S.; Banerjee, Anindita; Pathak, Anirban; Srikanth, R.
2014-02-01
We introduce the concept of cryptographic reduction, in analogy with a similar concept in computational complexity theory. In this framework, class A of crypto-protocols reduces to protocol class B in a scenario X, if for every instance a of A, there is an instance b of B and a secure transformation X that reproduces a given b, such that the security of b guarantees the security of a. Here we employ this reductive framework to study the relationship between security in quantum key distribution (QKD) and quantum secure direct communication (QSDC). We show that, by replacing the streaming of independent qubits in a QKD scheme with block encoding and transmission of qubits (permuting the order of particles block by block), we can construct a QSDC scheme. This forms the basis for the block reduction from a QSDC class of protocols to a QKD class of protocols, whereby if the latter is secure, then so is the former. Conversely, given a secure QSDC protocol, we can of course construct a secure QKD scheme by transmitting a random key as the direct message. Then the QKD class of protocols is secure, assuming the security of the QSDC class from which it is built. We refer to this method of deducing the security of this class of QKD protocols as key reduction. Finally, we propose an orthogonal-state-based deterministic key distribution (KD) protocol which is secure in some local post-quantum theories. Its security arises neither from geographic splitting of a code state nor from Heisenberg uncertainty, but from post-measurement disturbance.
Gaussification and entanglement distillation of continuous-variable systems: a unifying picture.
Campbell, Earl T; Eisert, Jens
2012-01-13
Distillation of entanglement using only Gaussian operations is an important primitive in quantum communication, quantum repeater architectures, and distributed quantum computing. Existing distillation protocols for continuous degrees of freedom are only known to converge to a Gaussian state when measurements yield precisely the vacuum outcome. In sharp contrast, non-Gaussian states can be deterministically converted into Gaussian states while preserving their second moments, albeit by usually reducing their degree of entanglement. In this work, based on a novel instance of a noncommutative central limit theorem, we introduce a picture general enough to encompass the known protocols leading to Gaussian states, and new classes of protocols including multipartite distillation. This gives the experimental option of balancing the merits of success probability against entanglement produced.
NASA Astrophysics Data System (ADS)
Boche, Holger; Cai, Minglai; Deppe, Christian; Nötzel, Janis
2017-10-01
We analyze arbitrarily varying classical-quantum wiretap channels. These channels are subject to two attacks at the same time: one passive (eavesdropping) and one active (jamming). We elaborate on our previous studies [H. Boche et al., Quantum Inf. Process. 15(11), 4853-4895 (2016) and H. Boche et al., Quantum Inf. Process. 16(1), 1-48 (2016)] by introducing a reduced class of allowable codes that fulfills a more stringent secrecy requirement than earlier definitions. In addition, we prove that non-symmetrizability of the legal link is sufficient for equality of the deterministic and the common randomness assisted secrecy capacities. Finally, we focus on analytic properties of both secrecy capacities: We completely characterize their discontinuity points and their super-activation properties.
Mi, Xiangcheng; Swenson, Nathan G; Jia, Qi; Rao, Mide; Feng, Gang; Ren, Haibao; Bebber, Daniel P; Ma, Keping
2016-09-07
Deterministic and stochastic processes jointly determine the community dynamics of forest succession. However, it has been widely held in previous studies that deterministic processes dominate forest succession. Furthermore, inference of mechanisms for community assembly may be misleading if based on a single axis of diversity alone. In this study, we evaluated the relative roles of deterministic and stochastic processes along a disturbance gradient by integrating species, functional, and phylogenetic beta diversity in a subtropical forest chronosequence in Southeastern China. We found a general pattern of increasing species turnover, but little-to-no change in phylogenetic and functional turnover over succession at two spatial scales. Meanwhile, the phylogenetic and functional beta diversity were not significantly different from random expectation. This result suggested a dominance of stochastic assembly, contrary to the general expectation that deterministic processes dominate forest succession. On the other hand, we found significant interactions of environment and disturbance and limited evidence for significant deviations of phylogenetic or functional turnover from random expectations for different size classes. This result provided weak evidence of deterministic processes over succession. Stochastic assembly of forest succession suggests that post-disturbance restoration may be largely unpredictable and difficult to control in subtropical forests.
Hands-on-Entropy, Energy Balance with Biological Relevance
NASA Astrophysics Data System (ADS)
Reeves, Mark
2015-03-01
Entropy changes underlie the physics that dominates biological interactions. Indeed, introductory biology courses often begin with an exploration of the qualities of water that are important to living systems. However, one idea that is not explicitly addressed in most introductory physics or biology textbooks is the important contribution of entropy in driving fundamental biological processes towards equilibrium. From diffusion to cell-membrane formation, to electrostatic binding in protein folding, to the functioning of nerve cells, entropic effects often act to counterbalance deterministic forces such as electrostatic attraction and, in so doing, allow for effective molecular signaling. A small group of biology, biophysics, and computer science faculty have worked together for the past five years to develop curricular modules (based on SCALE-UP pedagogy). This has enabled students to create models of stochastic and deterministic processes. Our students are first-year engineering and science students in the calculus-based physics course, and they are not expected to know biology beyond the high-school level. In our class, they learn to reduce complex biological processes and structures in order to model them mathematically, accounting for both deterministic and probabilistic processes. The students test these models in simulations and in biologically relevant laboratory experiments such as diffusion, ionic transport, and ligand-receptor binding. Moreover, the students confront random forces and traditional forces in problems, simulations, and laboratory exploration throughout the year-long course as they move from traditional kinematics through thermodynamics to electrostatic interactions. This talk will present a number of these exercises, with particular focus on the hands-on experiments done by the students, and will give examples of the tangible material that our students work with throughout the two-semester sequence of their course on introductory physics with a bio focus. Supported by NSF DUE.
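In the spirit of the stochastic modules described here, the following sketch simulates an ensemble of unbiased random walkers: spreading occurs with no deterministic force at all, and a drift term can be switched on to represent a deterministic bias; all parameters are illustrative.

```python
# An ensemble of random walkers spreads (entropy-driven diffusion) even with
# no deterministic force; a constant drift mimics a deterministic bias.
import numpy as np

rng = np.random.default_rng(3)
walkers = np.zeros(10000)                  # all walkers start at the origin
drift = 0.0                                # set > 0 to add a deterministic force
for _ in range(1000):
    walkers += rng.choice((-1.0, 1.0), walkers.size) + drift
print("mean:", walkers.mean(), " spread (std):", walkers.std())  # std ~ sqrt(steps)
```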
NASA Astrophysics Data System (ADS)
Ravazzani, Giovanni; Amengual, Arnau; Ceppi, Alessandro; Romero, Romualdo; Homar, Victor; Mancini, Marco
2015-04-01
Analysis of forecasting strategies that can provide a tangible basis for flood early warning procedures and mitigation measures over the Western Mediterranean region is one of the fundamental motivations of the European HyMeX programme. Here, we examine a set of hydro-meteorological episodes that affected the Milano urban area and for which the complex flood protection system of the city did not completely succeed against the flash floods that occurred. Indeed, flood damages have increased exponentially in the area during the last 60 years, due to industrial and urban development. Thus, the improvement of the Milano flood control system needs a synergy between structural and non-structural approaches. The flood forecasting system tested in this work comprises the Flash-flood Event-based Spatially distributed rainfall-runoff Transformation, including Water Balance (FEST-WB) and the Weather Research and Forecasting (WRF) models, in order to provide a hydrological ensemble prediction system (HEPS). Deterministic and probabilistic quantitative precipitation forecasts (QPFs) have been provided by the WRF model in a set of 48-hour experiments. The HEPS has been generated by combining different physical parameterizations (i.e. cloud microphysics, moist convection, and boundary-layer schemes) of the WRF model in order to better encompass the atmospheric processes leading to high precipitation amounts. We have been able to test the value of a probabilistic versus a deterministic framework when driving Quantitative Discharge Forecasts (QDFs). Results highlight (i) the benefits of using a high-resolution HEPS in conveying uncertainties for this complex orographic area and (ii) a better simulation of most of the extreme precipitation events, potentially enabling valuable probabilistic QDFs. Hence, the HEPS copes with the significant deficiencies found in the deterministic QPFs. These shortcomings would prevent correct forecasting of the location and timing of high precipitation rates and total amounts at the catchment scale, thus heavily impacting the deterministic QDFs. In contrast, early warnings would have been possible within a HEPS context for the Milano area, proving the suitability of such a system for civil protection purposes.
Deterministic Intracellular Modeling
2003-03-01
Eukaryotes encompass all plants, animals, fungi, and protists; cells in this class possess more defined structures. Further research into the construction and evaluation of intracellular models would benefit Air Force toxicology studies.
Automatic design of synthetic gene circuits through mixed integer non-linear programming.
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits.
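As a toy illustration of what "deterministic, globally optimal" means for part selection, the sketch below exhaustively enumerates a tiny hypothetical part library under a constraint; the actual paper uses a MINLP solver rather than enumeration, and all part strengths and targets here are made up.

```python
# Exhaustive part selection on a tiny made-up library: pick one promoter and
# one RBS whose (non-linear) combined expression best approximates a target,
# subject to a budget constraint. Enumeration is deterministic and globally
# optimal for a problem this small; MINLP solvers scale this idea up.
from itertools import product

promoters = {"pA": 1.0, "pB": 2.5, "pC": 4.0}      # hypothetical part strengths
rbss      = {"r1": 0.5, "r2": 1.5, "r3": 3.0}
target = 4.5                                        # desired expression level
budget = 5.0                                        # constraint: sum of strengths

best = None
for (pn, pv), (rn, rv) in product(promoters.items(), rbss.items()):
    if pv + rv > budget:                            # constraint check
        continue
    err = abs(pv * rv - target)                     # non-linear objective
    if best is None or err < best[0]:
        best = (err, pn, rn)
print("optimal parts:", best)                       # -> (0.75, 'pB', 'r2')
```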
Discrete Deterministic and Stochastic Petri Nets
NASA Technical Reports Server (NTRS)
Zijal, Robert; Ciardo, Gianfranco
1996-01-01
Petri nets augmented with timing specifications have gained wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous-time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions, where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete-time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented, comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
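The DTMC machinery underlying this formalism can be illustrated in a few lines: a geometric firing time is exactly the time to absorption of a two-state absorbing DTMC, and transient probabilities follow from matrix powers (values below are illustrative).

```python
# A geometric "firing time" as time to absorption in a two-state absorbing
# DTMC; transient solutions come from standard matrix powers.
import numpy as np

q = 0.3                                     # per-step firing probability (assumed)
P = np.array([[1 - q, q],                   # state 0: not yet fired
              [0.0, 1.0]])                  # state 1: fired (absorbing)

pi0 = np.array([1.0, 0.0])                  # start in state 0
for k in (1, 5, 20):
    print(f"P(fired by step {k}) =", (pi0 @ np.linalg.matrix_power(P, k))[1])
# Matches the geometric distribution: P(fired by k) = 1 - (1-q)^k.
```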
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo-)experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
The optimization of peptide cargo bound to MHC class I molecules by the peptide-loading complex.
Elliott, Tim; Williams, Anthony
2005-10-01
Major histocompatibility complex (MHC) class I complexes present peptides from both self and foreign intracellular proteins on the surface of most nucleated cells. The assembled heterotrimeric complexes consist of a polymorphic glycosylated heavy chain, non-polymorphic β2-microglobulin, and a peptide of typically nine amino acids in length. Assembly of the class I complexes occurs in the endoplasmic reticulum and is assisted by a number of chaperone molecules. A multimolecular unit termed the peptide-loading complex (PLC) is integral to this process. The PLC contains a peptide transporter (transporter associated with antigen processing), a thiol oxidoreductase (ERp57), a glycoprotein chaperone (calreticulin), and tapasin, a class I-specific chaperone. We suggest that class I assembly involves a process of optimization where the peptide cargo of the complex is edited by the PLC. Furthermore, this selective peptide loading is biased toward peptides that have a longer off-rate from the assembled complex. We suggest that tapasin is the key chaperone that directs this action of the PLC with secondary contributions from calreticulin and possibly ERp57. We provide a framework model for how this may operate at the molecular level and draw parallels with the proposed mechanism of action of human leukocyte antigen-DM for MHC class II complex optimization.
NASA Astrophysics Data System (ADS)
Lemarchand, A.; Lesne, A.; Mareschal, M.
1995-05-01
The reaction-diffusion equation associated with the Fisher chemical model A+B-->2A admits wave-front solutions by replacing an unstable stationary state with a stable one. The deterministic analysis concludes that their propagation velocity is not prescribed by the dynamics. For a large class of initial conditions the velocity which is spontaneously selected is equal to the minimum allowed velocity vmin, as predicted by the marginal stability criterion. In order to test the relevance of this deterministic description we investigate the macroscopic consequences, on the velocity and the width of the front, of the intrinsic stochasticity due to the underlying microscopic dynamics. We solve numerically the Langevin equations, deduced analytically from the master equation within a system size expansion procedure. We show that the mean profile associated with the stochastic solution propagates faster than the deterministic solution at a velocity up to 25% greater than vmin.
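A minimal numerical sketch of the deterministic side of this comparison is easy to set up: integrate the FKPP equation du/dt = D u_xx + k u(1-u) from a step initial condition and compare the measured front speed with v_min = 2*sqrt(D k); the grid parameters below are arbitrary choices.

```python
# Deterministic FKPP front: explicit Euler integration of
# du/dt = D u_xx + k u(1-u), with the front speed estimated from the motion
# of the u = 0.5 crossing. Grid and time step are illustrative assumptions.
import numpy as np

D, k, dx, dt = 1.0, 1.0, 0.2, 0.01         # dt < dx^2/(2D) for stability
x = np.arange(0, 200, dx)
u = (x < 10).astype(float)                 # step initial condition

def front_pos(u):                          # position where u first drops below 0.5
    return dx * np.argmax(u < 0.5)

p1 = front_pos(u)
steps = 5000                               # integrate to t = 50
for _ in range(steps):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u += dt * (D * lap + k * u * (1 - u))
    u[0], u[-1] = 1.0, 0.0                 # pinned boundaries
p2 = front_pos(u)
print("measured speed:", (p2 - p1) / (steps * dt), " v_min:", 2 * (D * k) ** 0.5)
```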
NASA Astrophysics Data System (ADS)
Guymon, Gary L.; Yen, Chung-Cheng
1990-07-01
The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position in most of the basin are small, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
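For readers unfamiliar with two-point probability methods, here is a hedged sketch of a Rosenblueth-style two-point estimate, one common technique of this kind: each uncertain input is evaluated at its mean plus or minus one standard deviation and the 2^n corner results are averaged; the response function below is a made-up stand-in for the groundwater model.

```python
# Rosenblueth-style two-point estimate: evaluate the model at the 2^n corners
# mean +/- one std of each uncertain input, then average. The head() function
# is a hypothetical stand-in, not the Owens Valley model.
from itertools import product

def head(K, S, Q):                       # hypothetical water-table response
    return 100.0 - 5.0 * Q / (K * S)

means = {"K": 2.0, "S": 0.15, "Q": 1.0}  # conductivity, storage, source-sink
stds  = {"K": 0.5, "S": 0.03, "Q": 0.2}

vals = [head(*(means[v] + s * stds[v] for v, s in zip(means, signs)))
        for signs in product((-1.0, 1.0), repeat=3)]
mean = sum(vals) / len(vals)
var = sum((v - mean) ** 2 for v in vals) / len(vals)
print("estimated mean head:", mean, " coefficient of variation:", var**0.5 / mean)
```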
Mathematical models of behavior of individual animals.
Tsibulsky, Vladimir L; Norman, Andrew B
2007-01-01
This review is focused on mathematical modeling of the behaviors of a whole organism, with special emphasis on models with a clearly scientific approach to the problem that helps to understand the mechanisms underlying behavior. The aim is to provide an overview of old and contemporary mathematical models without complex mathematical details. Only deterministic and stochastic, but not statistical, models are reviewed. All mathematical models of behavior can be divided into two main classes. First, models that are based on the principle of teleological determinism assume that subjects choose the behavior that will lead them to a better payoff in the future. Examples are game theories and operant behavior models, both of which are based on the matching law. The second class of models is based on the principle of causal determinism, which assumes that subjects do not choose from a set of possibilities but rather are compelled to perform a predetermined behavior in response to specific stimuli. Examples are perception and discrimination models, drug effects models, and individual-based population models. A brief overview of the utility of each mathematical model is provided for each section.
Mode-locking behavior of Izhikevich neurons under periodic external forcing
NASA Astrophysics Data System (ADS)
Farokhniaee, AmirAli; Large, Edward W.
2017-06-01
Many neurons in the auditory system of the brain must encode periodic signals. These neurons under periodic stimulation display rich dynamical states including mode locking and chaotic responses. Periodic stimuli such as sinusoidal waves and amplitude modulated sounds can lead to various forms of n:m mode-locked states, in which a neuron fires n action potentials per m cycles of the stimulus. Here, we study mode-locking in the Izhikevich neurons, a reduced model of the Hodgkin-Huxley neurons. The Izhikevich model is much simpler in terms of the dimension of the coupled nonlinear differential equations compared with other existing models, but excellent for generating the complex spiking patterns observed in real neurons. We obtained the regions of existence of the various mode-locked states on the frequency-amplitude plane, called Arnold tongues, for the Izhikevich neurons. Arnold tongue analysis provides useful insight into the organization of mode-locking behavior of neurons under periodic forcing. We find these tongues for both class-1 and class-2 excitable neurons in both deterministic and noisy regimes.
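A minimal sketch of the kind of experiment behind an Arnold-tongue diagram: integrate an Izhikevich neuron under sinusoidal forcing and count spikes per stimulus cycle; the regular-spiking parameters are standard, but the forcing amplitude and frequency are illustrative choices, not values from the paper.

```python
# Izhikevich neuron (regular-spiking parameters) under sinusoidal forcing;
# comparing spike count to stimulus cycles indicates the n:m locking ratio.
import numpy as np

a, b, c, d = 0.02, 0.2, -65.0, 8.0        # standard regular-spiking parameters
dt, T = 0.1, 2000.0                        # time step and duration (ms)
f, A, I0 = 5.0, 4.0, 4.0                   # forcing: Hz, amplitude, bias (assumed)

v, u, spikes = -65.0, b * -65.0, 0
for k in range(int(T / dt)):
    t = k * dt
    I = I0 + A * np.sin(2 * np.pi * f * t / 1000.0)   # periodic drive
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                          # spike: apply the reset rule
        v, u = c, u + d
        spikes += 1

cycles = T / 1000.0 * f                    # stimulus cycles in the run
print(f"n:m ~ {spikes}:{cycles:.0f}")      # e.g. 10:10 would indicate 1:1 locking
```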
Unifying Complexity and Information
NASA Astrophysics Data System (ADS)
Ke, Da-Guan
2013-04-01
Complex systems, arising in many contexts in the computer, life, social, and physical sciences, have not shared a generally accepted complexity measure playing a fundamental role like that of the Shannon entropy H in statistical mechanics. Superficially conflicting criteria of complexity measurement, i.e. complexity-randomness (C-R) relations, have given rise to a special measure intrinsically adaptable to more than one criterion. However, the deep causes of the conflict and of the adaptability are not entirely clear. Here I trace the root of each representative or adaptable measure to its particular universal data-generating or -regenerating model (UDGM or UDRM). A representative measure for deterministic dynamical systems is found as a counterpart of the H for random processes, clearly redefining the boundary of different criteria. And a specific UDRM achieving the intrinsic adaptability enables a general information measure that ultimately solves all major disputes. This work encourages a single framework covering deterministic systems, statistical mechanics, and real-world living organisms.
NASA Astrophysics Data System (ADS)
Clay, London; Menger, Karl; Rota, Gian-Carlo; Euclid, Alexandria; Siegel, Edward
P ≠NP MP proof is by computer-''science''/SEANCE(!!!)(CS) computational-''intelligence'' lingo jargonial-obfuscation(JO) NATURAL-Intelligence(NI) DISambiguation! CS P =(?) =NP MEANS (Deterministic)(PC) = (?) =(Non-D)(PC) i.e. D(P) =(?) = N(P). For inclusion(equality) vs. exclusion (inequality) irrelevant (P) simply cancels!!! (Equally any/all other CCs IF both sides identical). Crucial question left: (D) =(?) =(ND), i.e. D =(?) = N. Algorithmics[Sipser[Intro. Thy.Comp.(`97)-p.49Fig.1.15!!!
NASA Astrophysics Data System (ADS)
Lorenzetti, Romina; Barbetti, Roberto; L'Abate, Giovanni; Fantappiè, Maria; Costantini, Edoardo A. C.
2013-04-01
Estimating the frequency of soil classes in map units is always affected by some degree of uncertainty, especially at small scales, with a larger generalization. The aim of this study was to compare different possible approaches, namely data mining, geostatistics, and deterministic pedology, to assess the frequency of WRB Reference Soil Groups (RSG) in the major Italian soil regions. In the soil map of Italy (Costantini et al., 2012), a list of the first five RSG was reported for each of the 10 major soil regions. The soil map was produced using the national soil geodatabase, which stored 22,015 analyzed and classified pedons, 1,413 soil typological units (STU), and a set of auxiliary variables (lithology, land use, DEM). Other variables were added to better account for the influence of soil forming factors (slope, soil aridity index, carbon stock, soil inorganic carbon content, clay, sand, geography of soil regions and soil systems), and a grid with a 1 km mesh was set up. The traditional deterministic pedology assessed the STU frequency according to expert judgment of their presence in every elementary landscape forming the mapping unit. Different data mining techniques were first compared in their ability to predict RSG from the auxiliary variables (neural networks, random forests, boosted trees, support vector machines (SVM)). We selected SVM according to the results on a testing set. An SVM model is a representation of the examples as points in space, mapped so that examples of separate categories are divided by a clear gap that is as wide as possible. The geostatistical algorithm we used was indicator collocated cokriging. The class values of the auxiliary variables, available at all points of the grid, were transformed into indicator variables (values 0, 1). A principal component analysis allowed us to select the variables that were able to explain the largest variability and to correlate each RSG with the first principal component, which explained 51% of the total variability. The principal component was used as the collocated variable. The results were as many probability maps as estimated WRB classes. They were summed up in a unique map showing the most probable class at each pixel. The first five most frequent RSG resulting from the three methods were compared. The outcomes were validated with a 10% subset of the pedons, held out before the elaborations. The error estimate was produced for each estimated RSG. The first results, obtained in one of the most widespread soil regions (plains and low hills of central and southern Italy), showed that the first two frequency classes were the same for all three methods. The deterministic method differed from the others at the third position, while the statistical methods inverted the third and fourth positions. An advantage of the SVM was the possibility of using numeric and categorical variables in the same elaboration, without any previous transformation, which reduced the processing time. A Bayesian validation indicated that the SVM method was as reliable as indicator collocated cokriging, and better than the deterministic pedological approach.
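The data-mining branch of such a comparison can be sketched in a few lines with an off-the-shelf SVM: fit a classifier on covariates for pedons with known labels, then read off per-class probabilities as in the probability maps described above. The covariates, labels, and split below are synthetic stand-ins.

```python
# SVM soil-class prediction from gridded covariates, with per-class
# probabilities (cf. "probability maps"). Data are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n = 300
X = np.column_stack([rng.uniform(0, 1500, n),      # elevation (m)
                     rng.uniform(0, 45, n),        # slope (degrees)
                     rng.uniform(0.2, 1.0, n)])    # aridity index
y = np.where(X[:, 0] > 700, "Cambisols",           # toy labeling rule
             np.where(X[:, 2] < 0.5, "Calcisols", "Luvisols"))

clf = SVC(probability=True, gamma="scale").fit(X[:250], y[:250])
proba = clf.predict_proba(X[250:])                 # per-pixel class probabilities
print(clf.classes_)
print("accuracy on held-out 'pedons':", (clf.predict(X[250:]) == y[250:]).mean())
```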
Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of a combined failure-stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization, and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by the 0.50 and 0.999 probabilities.
NASA Astrophysics Data System (ADS)
Yang, Hyun Mo
2015-12-01
Currently, discrete modelling is widely accepted, owing to access to computers with huge storage capacity and high-performance processors and to the easy implementation of algorithms, which allow increasingly sophisticated models to be developed and simulated. Wang et al. [7] present a review of dynamics in complex networks, focusing on the interaction between disease dynamics and human behavioral and social dynamics. Through an extensive review of human behavior in response to disease dynamics, the authors briefly describe the complex dynamics found in the literature: well-mixed population networks, where spatial structure can be neglected, and other networks that account for heterogeneity in spatially distributed populations. As controlling mechanisms are implemented, such as social distancing due to 'social contagion', quarantine, non-pharmaceutical interventions, and vaccination, adaptive behavior can occur in the human population, which can easily be taken into account in dynamics formulated on networked populations.
The maximally entangled set of 4-qubit states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spee, C.; Kraus, B.; Vicente, J. I. de
Entanglement is a resource to overcome the natural restriction of operations used for state manipulation to Local Operations assisted by Classical Communication (LOCC). Hence, a bipartite maximally entangled state is a state which can be transformed deterministically into any other state via LOCC. In the multipartite setting no such state exists; rather, a whole set, the Maximally Entangled Set of states (MES), which we recently introduced, is required. This set has on the one hand the property that any state outside of it can be obtained via LOCC from one of the states within the set, and on the other hand no state in the set can be obtained from any other state via LOCC. Recently, we studied LOCC transformations among pure multipartite states and derived the MES for three and generic four-qubit states. Here, we consider the non-generic four-qubit states and analyze their properties regarding local transformations. As even the most coarse-grained classification of four-qubit states, that induced by Stochastic LOCC (SLOCC), is much richer than in the case of three qubits, the investigation of possible LOCC transformations is correspondingly more difficult. We prove that most SLOCC classes show a similar behavior to the generic states; however, we also identify three classes with very distinct properties. The first consists of the GHZ and W classes, where any state can be transformed into some other state non-trivially; in particular, there exists no isolation. On the other hand, there also exist classes where all states are isolated. Last but not least, we identify an additional class of states whose transformation properties differ drastically from all the other classes. Although the possibility of transforming states into local-unitary-inequivalent states by LOCC turns out to be very rare, we identify those states (with the exception of the latter class) which are in the MES and those which can be obtained (transformed) non-trivially from (into) other states, respectively. These investigations not only identify the most relevant classes of states for LOCC entanglement manipulation, but also reveal new insight into the similarities and differences between separable and LOCC transformations and enable the investigation of LOCC transformations among arbitrary four-qubit states.
The Stochastic Multi-strain Dengue Model: Analysis of the Dynamics
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.
2011-09-01
Dengue dynamics is well known to be particularly complex, with large fluctuations of disease incidence. An epidemic multi-strain model motivated by dengue fever epidemiology shows deterministic chaos in wide parameter regions. The addition of seasonal forcing, mimicking the vectorial dynamics, and of a low import of infected individuals, which is realistic in the dynamics of infectious disease epidemics, produces complex dynamics and qualitatively good agreement between empirical DHF monitoring data and the model simulations. The addition of noise can explain the fluctuations observed in the empirical data, and for large enough population size the stochastic system can be well described by the deterministic skeleton.
Production scheduling and rescheduling with genetic algorithms.
Bierwirth, C; Mattfeld, D C
1999-01-01
A general model for job shop scheduling is described which applies to static, dynamic and non-deterministic production environments. Next, a Genetic Algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. Thereby, a highly efficient decoding procedure is proposed which strongly improves the quality of schedules. Finally, this technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable run-time costs.
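A minimal sketch of the genetic-algorithm machinery (hedged: the paper's job-shop model and decoding procedure are not reproduced) on a simpler single-machine problem, where a permutation of jobs is decoded into a schedule and evolved to minimize total tardiness; all job data are made up.

```python
# A tiny GA for permutation scheduling: truncation selection, an
# order-preserving crossover, and swap mutation. Job data are made up.
import random

random.seed(6)
proc = [4, 3, 7, 2, 5, 6]                 # processing times
due  = [8, 6, 20, 4, 14, 18]              # due dates

def tardiness(perm):                      # decode: run jobs in the given order
    t = total = 0
    for j in perm:
        t += proc[j]
        total += max(0, t - due[j])
    return total

def crossover(p1, p2):                    # one-point order-preserving crossover
    cut = random.randrange(1, len(p1))
    head = p1[:cut]
    return head + [j for j in p2 if j not in head]

pop = [random.sample(range(6), 6) for _ in range(30)]
for _ in range(100):
    pop.sort(key=tardiness)
    elite = pop[:10]                      # truncation selection
    children = [crossover(random.choice(elite), random.choice(elite))
                for _ in range(20)]
    for ch in children:                   # swap mutation
        if random.random() < 0.2:
            i, j = random.sample(range(6), 2)
            ch[i], ch[j] = ch[j], ch[i]
    pop = elite + children
pop.sort(key=tardiness)
print("best sequence:", pop[0], "total tardiness:", tardiness(pop[0]))
```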
From Astrochemistry to prebiotic chemistry? An hypothetical approach toward Astrobiology
NASA Astrophysics Data System (ADS)
Le Sergeant d'Hendecourt, L.; Danger, G.
2012-12-01
We present in this paper a general perspective on the evolution of molecular complexity, as observed from an astrophysicist's point of view, and its possible relation to the problem of the origin of life on Earth. Based on the cosmic abundances of the elements and the molecular composition of life, we propose that life cannot really be based on other elements. We discuss where the necessary molecular complexity is built up in astrophysical environments, namely within inter/circumstellar solid-state materials known as ``grains''. Considerations based on non-directed laboratory experiments, which must be further extended into the prebiotic domain, lead to the hypothesis that, while the chemistry at the origin of life may indeed be a rather universal and deterministic phenomenon once molecular complexity is established, the chemical evolution that generated the first prebiotic reactions involving autoreplication must be treated with a systemic approach, because of the strong contingency imposed by the complex local environment(s) and associated processes in which these chemical systems evolved.
Transfer of non-Gaussian quantum states of mechanical oscillator to light
NASA Astrophysics Data System (ADS)
Filip, Radim; Rakhubovsky, Andrey A.
2015-11-01
Non-Gaussian quantum states are key resources for quantum optics with continuous-variable oscillators. The non-Gaussian states can be deterministically prepared by a continuous evolution of the mechanical oscillator isolated in a nonlinear potential. We propose feasible and deterministic transfer of non-Gaussian quantum states of mechanical oscillators to a traveling light beam, using purely all-optical methods. The method relies on only basic feasible and high-quality elements of quantum optics: squeezed states of light, linear optics, homodyne detection, and electro-optical feedforward control of light. By this method, a wide range of novel non-Gaussian states of light can be produced in the future from the mechanical states of levitating particles in optical tweezers, including states necessary for the implementation of an important cubic phase gate.
Delgado, James E.; Wolt, Jeffrey D.
2011-01-01
In this study, we investigate long-term exposure (20 weeks) to fumonisin B1 (FB1) in grower-finisher pigs by conducting a quantitative exposure assessment (QEA). Our analytical approach involved both deterministic and semi-stochastic modeling for comparative dietary analyses of FB1 exposures originating from genetically engineered Bacillus thuringiensis (Bt) corn, conventional non-Bt corn, and distiller's dried grains with solubles (DDGS) derived from Bt and/or non-Bt corn. Results from both deterministic and semi-stochastic models demonstrated a distinct difference in FB1 toxicity in feed between Bt corn and non-Bt corn. Semi-stochastic results predicted the lowest FB1 exposure for a diet of Bt grain, with a mean of 1.5 mg FB1/kg diet, and the highest FB1 exposure for a diet consisting of non-Bt grain and non-Bt DDGS, with a mean of 7.87 mg FB1/kg diet; the chronic toxicological incipient level of concern is 1.0 mg of FB1/kg of diet. Deterministic results closely mirrored, but tended to slightly underpredict, the mean result of the semi-stochastic analysis. This novel comparative QEA model reveals that diet scenarios in which the grain is derived from Bt corn present less potential to induce FB1 toxicity than diets containing non-Bt corn. PMID:21909298
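The deterministic versus (semi-)stochastic contrast can be sketched with a small Monte Carlo model in which exposure is contamination times dietary inclusion; the lognormal and uniform parameters below are illustrative, not the study's fitted values.

```python
# Deterministic point estimate vs. Monte Carlo mean for a toy dietary
# exposure model: exposure = contamination * inclusion fraction.
# Distribution parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
conc = rng.lognormal(mean=np.log(1.5), sigma=0.6, size=n)   # mg FB1/kg grain
frac = rng.uniform(0.55, 0.75, size=n)                      # corn fraction of diet

exposure = conc * frac                       # mg FB1/kg diet
point = 1.5 * 0.65                           # deterministic point estimate
print("deterministic:", point)
print("stochastic mean:", exposure.mean(),
      " P(exceeds 1.0 mg/kg):", (exposure > 1.0).mean())
```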
Genes in one megabase of the HLA class I region
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, H.; Fan, Wu-Fang; Xu, Hongxia
1993-11-15
To define the gene content of the HLA class I region, cDNA selection was applied to three overlapping yeast artificial chromosomes (YACs) that spanned 1 megabase (Mb) of this region of the human major histocompatibility complex. These YACs extended from the region centromeric to HLA-E to the region telomeric to HLA-F. In addition to the recognized class I genes and pseudogenes and the anonymous non-class-I genes described recently by the authors and others, 20 additional anonymous cDNA clones were identified from this 1-Mb region. They also identified a long repetitive DNA element in the region between HLA-B and HLA-E, with homologues of this element outside of the HLA complex. The portion of the HLA class I region represented by these YACs shows an average gene density as high as the class II and class III regions. Thus, the high gene density portion of the HLA complex is extended to more than 3 Mb.
Engineering fluid flow using sequenced microstructures
NASA Astrophysics Data System (ADS)
Amini, Hamed; Sollier, Elodie; Masaeli, Mahdokht; Xie, Yu; Ganapathysubramanian, Baskar; Stone, Howard A.; di Carlo, Dino
2013-05-01
Controlling the shape of fluid streams is important across scales: from industrial processing to control of biomolecular interactions. Previous approaches to control fluid streams have focused mainly on creating chaotic flows to enhance mixing. Here we develop an approach to apply order using sequences of fluid transformations rather than enhancing chaos. We investigate the inertial flow deformations around a library of single cylindrical pillars within a microfluidic channel and assemble these net fluid transformations to engineer fluid streams. As these transformations provide a deterministic mapping of fluid elements from upstream to downstream of a pillar, we can sequentially arrange pillars to apply the associated nested maps and, therefore, create complex fluid structures without additional numerical simulation. To show the range of capabilities, we present sequences that sculpt the cross-sectional shape of a stream into complex geometries, move and split a fluid stream, perform solution exchange and achieve particle separation. A general strategy to engineer fluid streams into a broad class of defined configurations in which the complexity of the nonlinear equations of fluid motion are abstracted from the user is a first step to programming streams of any desired shape, which would be useful for biological, chemical and materials automation.
First Order Reliability Application and Verification Methods for Semistatic Structures
NASA Technical Reports Server (NTRS)
Verderaime, Vincent
1994-01-01
Escalating risks of aerostructures, driven by increasing size, complexity, and cost, should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises the performance of high-strength materials. A reliability method is proposed which combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulated and propagated design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the pace of semistatic structural designs.
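The classical relationship the method builds on can be shown numerically: for normally distributed strength and load, the safety index beta = (mu_R - mu_L)/sqrt(sd_R^2 + sd_L^2) maps directly to a first-order reliability; the numbers below are illustrative, not from any flight structure.

```python
# Safety index and first-order reliability for normally distributed strength
# R and load-induced stress L. All values are illustrative assumptions.
from math import erf, sqrt

mu_R, sd_R = 60.0, 4.0        # strength mean/std (e.g., ksi)
mu_L, sd_L = 45.0, 3.0        # load-induced stress mean/std (e.g., ksi)

beta = (mu_R - mu_L) / sqrt(sd_R**2 + sd_L**2)
reliability = 0.5 * (1.0 + erf(beta / sqrt(2.0)))   # Phi(beta)
print("safety factor (ratio of means):", mu_R / mu_L)
print("safety index beta:", beta, " reliability:", reliability)   # beta = 3.0
```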
Quasi-Static Probabilistic Structural Analyses Process and Criteria
NASA Technical Reports Server (NTRS)
Goldberg, B.; Verderaime, V.
1999-01-01
Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insight and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout prevailing deterministic stress processes leads to a safety factor in statistical format which, integrated into the safety index, provides a safety factor and first-order reliability relationship. The embedded safety factor in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with traditional safety factor methods and NASA Std. 5001 criteria.
Boldin, Barbara; Kisdi, Éva
2016-03-01
Evolutionary suicide is a riveting phenomenon in which adaptive evolution drives a viable population to extinction. Gyllenberg and Parvinen (Bull Math Biol 63(5):981-993, 2001) showed that, in a wide class of deterministic population models, a discontinuous transition to extinction is a necessary condition for evolutionary suicide. An implicit assumption of their proof is that the invasion fitness of a rare strategy is well-defined also in the extinction state of the population. Epidemic models with frequency-dependent incidence, which are often used to model the spread of sexually transmitted infections or the dynamics of infectious diseases within herds, violate this assumption. In these models, evolutionary suicide can occur through a non-catastrophic bifurcation whereby pathogen adaptation leads to a continuous decline of host (and consequently pathogen) population size to zero. Evolutionary suicide of pathogens with frequency-dependent transmission can occur in two ways, with pathogen strains evolving either higher or lower virulence.
A new diode laser acupuncture therapy apparatus
NASA Astrophysics Data System (ADS)
Li, Chengwei; Huang, Zhen; Li, Dongyu; Zhang, Xiaoyuan
2006-06-01
Since the first laser-needle acupuncture apparatus was introduced in therapy, this kind of apparatus has been widely used in laser biomedicine as a non-invasive, pain-free, aseptic, and safe tool. The laser acupuncture apparatus in this paper is based on a single-chip microcomputer combined with semiconductor laser technology. Functions like those of traditional moxibustion, including reinforcing and reducing, are implemented by applying a chaos method to control the duty cycle of the moxibustion signal, and the traditional lifting and thrusting of acupuncture is implemented by changing the power output of the diode laser. The radiator element of the diode laser is constructed and its drive circuit is designed. A chaotic mathematical model is used to produce a deterministic but noise-like stimulation signal, so as to avoid adaptation by the body. This function overcomes the shortcomings of continuous irradiation or of a simple, regular stimulus signal, which can be produced by simple electronic circuits but is easily adapted to by the human body.
Uncertainty quantification-based robust aerodynamic optimization of laminar flow nacelle
NASA Astrophysics Data System (ADS)
Xiong, Neng; Tao, Yang; Liu, Zhiyong; Lin, Jun
2018-05-01
The aerodynamic performance of a laminar flow nacelle is highly sensitive to uncertain working conditions, especially surface roughness. An efficient robust aerodynamic optimization method, based on non-deterministic computational fluid dynamics (CFD) simulation and the Efficient Global Optimization (EGO) algorithm, was employed. A non-intrusive polynomial chaos method is used in conjunction with an existing well-verified CFD module to quantify the uncertainty propagation in the flow field. This paper investigates roughness modeling behavior with the γ-Reθt shear stress transport model, which accounts for flow transition and surface roughness effects. The roughness effects are modeled to simulate sand-grain roughness. A Class-Shape Transformation-based parametric description of the nacelle contour, as part of an automatic design evaluation process, is presented. A Design of Experiments (DoE) was performed and a surrogate model was built by the Kriging method. The new nacelle design process demonstrates that significant improvements of both the mean and the variance of the efficiency are achieved and that the proposed method can be applied successfully to laminar flow nacelle design.
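Non-intrusive uncertainty propagation of the kind used here can be sketched with Gauss-Hermite quadrature: evaluate a response at a handful of roughness values and recover the mean and variance; the quadratic drag response and the Gaussian roughness below are made-up assumptions.

```python
# Non-intrusive propagation of a Gaussian roughness uncertainty through a
# hypothetical drag response, via probabilists' Gauss-Hermite quadrature.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def drag(roughness):                        # made-up CFD response surface
    return 0.020 + 0.004 * roughness + 0.003 * roughness**2

mu, sigma = 1.0, 0.3                        # roughness ~ N(mu, sigma^2), assumed
x, w = hermegauss(8)                        # nodes/weights, weight exp(-x^2/2)
w = w / np.sqrt(2.0 * np.pi)                # normalize to a probability measure

samples = drag(mu + sigma * x)              # 8 "CFD" evaluations
mean = np.sum(w * samples)
var = np.sum(w * (samples - mean) ** 2)
print("mean drag:", mean, " std:", np.sqrt(var))
```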
Markov Logic Networks in the Analysis of Genetic Data
Sakhanenko, Nikita A.
2010-01-01
Complex, non-additive genetic interactions are common and can be critical in determining phenotypes. Genome-wide association studies (GWAS) and similar statistical studies of linkage data, however, assume additive models of gene interactions in looking for genotype-phenotype associations. These statistical methods view the compound effects of multiple genes on a phenotype as a sum of influences of each gene and often miss a substantial part of the heritable effect. Such methods do not use any biological knowledge about underlying mechanisms. Modeling approaches from the artificial intelligence (AI) field that incorporate deterministic knowledge into models to perform statistical analysis can be applied to include prior knowledge in genetic analysis. We chose to use the most general such approach, Markov Logic Networks (MLNs), for combining deterministic knowledge with statistical analysis. Using simple, logistic regression-type MLNs, we can replicate the results of traditional statistical methods, but we also show that we are able to go beyond finding independent markers linked to a phenotype by using joint inference without an independence assumption. The method is applied to genetic data on yeast sporulation, a complex phenotype with gene interactions. In addition to detecting all of the previously identified loci associated with sporulation, our method identifies four loci with smaller effects. Since their effect on sporulation is small, these four loci were not detected with methods that do not account for dependence between markers due to gene interactions. We show how gene interactions can be detected using more complex models, which can be used as a general framework for incorporating systems biology with genetics. PMID:20958249
Bhanji, Jamil P.; Beer, Jennifer S.; Bunge, Silvia A.
2014-01-01
A decision may be difficult because complex information processing is required to evaluate choices according to deterministic decision rules and/or because it is not certain which choice will lead to the best outcome in a probabilistic context. Factors that tax decision making such as decision rule complexity and low decision certainty should be disambiguated for a more complete understanding of the decision making process. Previous studies have examined the brain regions that are modulated by decision rule complexity or by decision certainty but have not examined these factors together in the context of a single task or study. In the present functional magnetic resonance imaging study, both decision rule complexity and decision certainty were varied in comparable decision tasks. Further, the level of certainty about which choice to make (choice certainty) was varied separately from certainty about the final outcome resulting from a choice (outcome certainty). Lateral prefrontal cortex, dorsal anterior cingulate cortex, and bilateral anterior insula were modulated by decision rule complexity. Anterior insula was engaged more strongly by low than high choice certainty decisions, whereas ventromedial prefrontal cortex showed the opposite pattern. These regions showed no effect of the independent manipulation of outcome certainty. The results disambiguate the influence of decision rule complexity, choice certainty, and outcome certainty on activity in diverse brain regions that have been implicated in decision making. Lateral prefrontal cortex plays a key role in implementing deterministic decision rules, ventromedial prefrontal cortex in probabilistic rules, and anterior insula in both. PMID:19781652
Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398
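For a library small enough to enumerate, the part-selection problem the authors solve with MINLP can be illustrated by brute force, which also returns the global optimum. The promoter strengths, targets, and constraint below are invented toy values, not characterized parts; a real instance would hand this structure to a MINLP solver instead.

```python
import itertools

# Toy part-selection sketch: pick one promoter per circuit slot so predicted
# expression best matches a user target, subject to a simple constraint.
library = {"pA": 0.2, "pB": 1.0, "pC": 3.5, "pD": 7.0}  # relative strengths
targets = [1.0, 5.0]            # desired expression level per slot
max_total = 9.0                 # user-defined cap on summed strength

best, best_err = None, float("inf")
for combo in itertools.product(library, repeat=len(targets)):
    strengths = [library[p] for p in combo]
    if sum(strengths) > max_total:          # infeasible selection
        continue
    err = sum((s - t) ** 2 for s, t in zip(strengths, targets))
    if err < best_err:
        best, best_err = combo, err

print("optimal selection:", best, "squared error:", best_err)
```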
Characterizing Uncertainty and Variability in PBPK Models ...
Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretical and practical methodological improvements …
Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model
Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.
2018-01-01
A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183
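A minimal numerical illustration of the underlying issue can be given, assuming a logistic deterministic trajectory and a crude establishment-lag heuristic rather than the authors' exact delay-deterministic formulation; the parameters s, N, and the delay of roughly log(N·s)/s generations are illustrative choices.

```python
# Sketch: deterministic selection on a beneficial variant, with and without
# a delay representing the stochastic establishment time of a new mutant.
import math

s, N = 0.05, 10**6
p0 = 1.0 / N                       # variant starts as a single copy

def freq(t):
    """Logistic (deterministic) allele-frequency trajectory."""
    odds = (p0 / (1 - p0)) * math.exp(s * t)
    return odds / (1 + odds)

delay = math.log(N * s) / s        # crude establishment-lag heuristic
for t in (200, 400, 600):
    print(f"t={t}: plain={freq(t):.3f}  delayed={freq(max(t - delay, 0)):.3f}")
```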
NASA Astrophysics Data System (ADS)
Shukla, Chitra; Thapliyal, Kishore; Pathak, Anirban
2017-12-01
Semi-quantum protocols that allow some of the users to remain classical are proposed for a large class of problems associated with secure communication and secure multiparty computation. Specifically, the first semi-quantum protocols are proposed for key agreement, controlled deterministic secure communication and dialogue, and it is shown that the semi-quantum protocols for controlled deterministic secure communication and dialogue can be reduced to semi-quantum protocols for e-commerce and private comparison (socialist millionaire problem), respectively. Complementing the earlier proposed semi-quantum schemes for key distribution, secret sharing and deterministic secure communication, the set of schemes proposed here and the subsequent discussions establish that almost every secure communication and computation task that can be performed using fully quantum protocols can also be performed in a semi-quantum manner. Some of the proposed schemes are completely orthogonal-state-based, and thus fundamentally different from the existing semi-quantum schemes that are conjugate-coding-based. Security, efficiency and applicability of the proposed schemes are discussed with appropriate importance.
Phase ordering in disordered and inhomogeneous systems
NASA Astrophysics Data System (ADS)
Corberi, Federico; Zannetti, Marco; Lippiello, Eugenio; Burioni, Raffaella; Vezzani, Alessandro
2015-06-01
We study numerically the coarsening dynamics of the Ising model on a regular lattice with random bonds and on deterministic fractal substrates. We propose a unifying interpretation of the phase-ordering processes based on two classes of dynamical behaviors characterized by different growth laws of the ordered domain size, namely logarithmic or power law, respectively. It is conjectured that the interplay between these dynamical classes is regulated by the same topological feature that governs the presence or the absence of a finite-temperature phase transition.
Teaching Deterministic Chaos through Music.
ERIC Educational Resources Information Center
Chacon, R.; And Others
1992-01-01
Presents music education as a setting for teaching nonlinear dynamics and chaotic behavior connected with fixed-point and limit-cycle attractors. The aim is not music composition but a first approach to an interdisciplinary tool suitable for a single-session class, at either the secondary or undergraduate level, for the introduction of these…
Bayesian Estimation of the DINA Model with Gibbs Sampling
ERIC Educational Resources Information Center
Culpepper, Steven Andrew
2015-01-01
A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
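The deterministic core of the DINA model is compact enough to state in code: an examinee answers item j correctly with probability 1 - s_j if they master every attribute the Q-matrix requires, and g_j otherwise. The Q-matrix and parameter values below are illustrative, and the Gibbs sampler itself is omitted.

```python
import numpy as np

# DINA item-response sketch: eta = 1 if the examinee masters every attribute
# the item requires (per the Q-matrix); response prob is then 1 - slip,
# otherwise guess. Values below are illustrative, not estimated.
Q = np.array([[1, 0], [1, 1]])        # 2 items x 2 attributes
alpha = np.array([1, 0])              # examinee masters attribute 1 only
slip, guess = np.array([0.1, 0.2]), np.array([0.15, 0.1])

eta = np.all(alpha >= Q, axis=1).astype(int)   # [1, 0]: item 2 needs both
p_correct = (1 - slip) ** eta * guess ** (1 - eta)
print("P(correct) per item:", p_correct)       # [0.9, 0.1]
```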
2018-01-01
Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. In addition to slow chromatin and/or DNA looping dynamics, one source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analysing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying chemical master equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Last, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site. PMID:29386401
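A minimal PDMP of this kind is easy to simulate exactly, because between stochastic promoter flips the protein obeys a linear ODE with a closed-form solution. The two-state telegraph promoter and rate constants below are illustrative assumptions, not the titration-oscillator model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state promoter PDMP sketch: the promoter flips ON<->OFF stochastically;
# protein x relaxes deterministically toward beta/delta of the current state.
k_on, k_off = 0.05, 0.05      # slow (non-adiabatic) switching rates
beta = {0: 0.0, 1: 10.0}      # production rate per promoter state
delta = 1.0                   # protein decay rate

t, T_end, state, x = 0.0, 200.0, 0, 0.0
times, xs = [t], [x]
while t < T_end:
    rate = k_on if state == 0 else k_off
    tau = rng.exponential(1.0 / rate)                 # time to next flip
    x_star = beta[state] / delta                      # fixed point of the flow
    x = x_star + (x - x_star) * np.exp(-delta * tau)  # exact ODE solution
    t, state = t + tau, 1 - state
    times.append(t); xs.append(x)

print(f"{len(times) - 1} promoter switches; final x = {xs[-1]:.2f}")
```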
Information-theoretic approach to interactive learning
NASA Astrophysics Data System (ADS)
Still, S.
2009-01-01
The principles of statistical mechanics and information theory play an important role in learning and have inspired both theory and the design of numerous machine learning algorithms. The new aspect in this paper is a focus on integrating feedback from the learner. A quantitative approach to interactive learning and adaptive behavior is proposed, integrating model- and decision-making into one theoretical framework. This paper follows simple principles by requiring that the observer's world model and action policy should result in maximal predictive power at minimal complexity. Classes of optimal action policies and of optimal models are derived from an objective function that reflects this trade-off between prediction and complexity. The resulting optimal models then summarize, at different levels of abstraction, the process's causal organization in the presence of the learner's actions. A fundamental consequence of the proposed principle is that the learner's optimal action policies balance exploration and control as an emerging property. Interestingly, the explorative component is present in the absence of policy randomness, i.e. in the optimal deterministic behavior. This is a direct result of requiring maximal predictive power in the presence of feedback.
Single-photon non-linear optics with a quantum dot in a waveguide
NASA Astrophysics Data System (ADS)
Javadi, A.; Söllner, I.; Arcari, M.; Hansen, S. Lindskov; Midolo, L.; Mahmoodian, S.; Kiršanskė, G.; Pregnolato, T.; Lee, E. H.; Song, J. D.; Stobbe, S.; Lodahl, P.
2015-10-01
Strong non-linear interactions between photons enable logic operations for both classical and quantum-information technology. Unfortunately, non-linear interactions are usually feeble and therefore all-optical logic gates tend to be inefficient. A quantum emitter deterministically coupled to a propagating mode fundamentally changes the situation, since each photon inevitably interacts with the emitter, and highly correlated many-photon states may be created. Here we show that a single quantum dot in a photonic-crystal waveguide can be used as a giant non-linearity sensitive at the single-photon level. The non-linear response is revealed from the intensity and quantum statistics of the scattered photons, and contains contributions from an entangled photon-photon bound state. The quantum non-linearity will find immediate applications for deterministic Bell-state measurements and single-photon transistors and paves the way to scalable waveguide-based photonic quantum-computing architectures.
NASA Astrophysics Data System (ADS)
Frommer, Joshua B.
This work develops and implements a solution framework that allows for an integrated solution to a resource allocation system-of-systems problem associated with designing vehicles for integration into an existing fleet to extend that fleet's capability while improving efficiency. Typically, aircraft design focuses on using a specific design mission while a fleet perspective would provide a broader capability. Aspects of design for both the vehicles and missions may be, for simplicity, deterministic in nature or, in a model that reflects actual conditions, uncertain. Toward this end, the set of tasks or goals for the to-be-planned system-of-systems will be modeled more accurately with non-deterministic values, and the designed platforms will be evaluated using reliability analysis. The reliability, defined as the probability of a platform or set of platforms to complete possible missions, will contribute to the fitness of the overall system. The framework includes building surrogate models for metrics such as capability and cost, and includes the ideas of reliability in the overall system-level design space. The concurrent design and allocation system-of-systems problem is a multi-objective mixed integer nonlinear programming (MINLP) problem. This study considered two system-of-systems problems that seek to simultaneously design new aircraft and allocate these aircraft into a fleet to provide a desired capability. The Coast Guard's Integrated Deepwater System program inspired the first problem, which consists of a suite of search-and-find missions for aircraft based on descriptions from the National Search and Rescue Manual. The second represents suppression of enemy air defense operations similar to those carried out by the U.S. Air Force, proposed as part of the Department of Defense Network Centric Warfare structure, and depicted in MILSTD-3013. The two problems seem similar, with long surveillance segments, but because of the complex nature of aircraft design, the analysis of the vehicle for high-speed attack combined with a long loiter period is considerably different from that for quick cruise to an area combined with a low speed search. However, the framework developed to solve this class of system-of-systems problem handles both scenarios and leads to a solution type for this kind of problem. At the vehicle level of the problem, different technologies can have an impact at the fleet level. One such technology is Morphing, the ability to change shape, which is an ideal candidate technology for missions with dissimilar segments, such as the aforementioned two. A framework, using surrogate models based on optimally-sized aircraft, and using probabilistic parameters to define a concept of operations, is investigated; this has provided insight into the setup of the optimization problem, the use of the reliability metric, and the measurement of fleet level impacts of morphing aircraft. The research consisted of four phases. The two initial phases built and defined the framework to solve the system-of-systems problem; these investigations used the search-and-find scenario as the example application. The first phase included the design of fixed-geometry and morphing aircraft for a range of missions and evaluated the aircraft capability using non-deterministic mission parameters. The second phase introduced the idea of multiple aircraft in a fleet, but only considered a fleet consisting of one aircraft type.
The third phase incorporated the simultaneous design of a new vehicle and allocation into a fleet for the search-and-find scenario; in this phase, multiple types of aircraft are considered. The fourth phase repeated the simultaneous new aircraft design and fleet allocation for the SEAD scenario to show that the approach is not specific to the search-and-find scenario. The framework presented in this work appears to be a viable approach for concurrently designing and allocating constituents in a system, specifically aircraft in a fleet. The research also shows that new technology impact can be assessed at the fleet level using conceptual design principles.
Modeling and Simulation for Mission Operations Work System Design
NASA Technical Reports Server (NTRS)
Sierhuis, Maarten; Clancey, William J.; Seah, Chin; Trimble, Jay P.; Sims, Michael H.
2003-01-01
Work System analysis and design is complex and non-deterministic. In this paper we describe Brahms, a multiagent modeling and simulation environment for designing complex interactions in human-machine systems. Brahms was originally conceived as a business process design tool that simulates work practices, including social systems of work. We describe our modeling and simulation method for mission operations work systems design, based on a research case study in which we used Brahms to design mission operations for a proposed discovery mission to the Moon. We then describe the results of an actual method application project: the Brahms Mars Exploration Rover. Space mission operations are similar to operations of traditional organizations; we show that the application of Brahms for space mission operations design is relevant and transferable to other types of business processes in organizations.
Nuclear power and probabilistic safety assessment (PSA): past through future applications
NASA Astrophysics Data System (ADS)
Stamatelatos, M. G.; Moieni, P.; Everline, C. J.
1995-03-01
Nuclear power reactor safety in the United States is about to enter a new era -- an era of risk-based management and risk-based regulation. First, there was the age of `prescribed safety assessment,' during which a series of design-basis accidents in eight categories of severity, or classes, was postulated and analyzed. Toward the end of that era, it was recognized that `Class 9,' or `beyond design basis,' accidents would need special attention because of the potentially severe health and financial consequences of these accidents. The accident at Three Mile Island showed that sequences of low-consequence, high-frequency events and human errors can be much more risk dominant than the Class 9 accidents. A different form of safety assessment, PSA, emerged and began to gain ground against the deterministic safety establishment. Eventually, this led to the current regulatory requirements for individual plant examinations (IPEs). The IPEs can serve as a basis for risk-based regulation and management, a concept that may ultimately transform the U.S. regulatory process from its traditional deterministic foundations to a process predicated upon PSA. Beyond the possibility of a regulatory environment predicated upon PSA lies the possibility of using PSA as the foundation for managing daily nuclear power plant operations.
NASA Technical Reports Server (NTRS)
Bogdanoff, J. L.; Kayser, K.; Krieger, W.
1977-01-01
The paper describes convergence and response studies in the low frequency range of complex systems, particularly with low values of damping of different distributions, and reports on the modification of the relaxation procedure required under these conditions. A new method is presented for response estimation in complex lumped parameter linear systems under random or deterministic steady state excitation. The essence of the method is the use of relaxation procedures with a suitable error function to find the estimated response; natural frequencies and normal modes are not computed. For a 45 degree of freedom system, and two relaxation procedures, convergence studies and frequency response estimates were performed. The low frequency studies are considered in the framework of earlier studies (Kayser and Bogdanoff, 1975) involving the mid to high frequency range.
Analysis of MHC class I genes across horse MHC haplotypes
Tallmadge, Rebecca L.; Campbell, Julie A.; Miller, Donald C.; Antczak, Douglas F.
2010-01-01
The genomic sequences of 15 horse Major Histocompatibility Complex (MHC) class I genes and a collection of MHC class I homozygous horses of five different haplotypes were used to investigate the genomic structure and polymorphism of the equine MHC. A combination of conserved and locus-specific primers was used to amplify horse MHC class I genes with classical and non-classical characteristics. Multiple clones from each haplotype identified three to five classical sequences per homozygous animal, and two to three non-classical sequences. Phylogenetic analysis was applied to these sequences and groups were identified which appear to be allelic series, but some sequences were left ungrouped. Sequences determined from MHC class I heterozygous horses and previously described MHC class I sequences were then added, representing a total of ten horse MHC haplotypes. These results were consistent with those obtained from the MHC homozygous horses alone, and 30 classical sequences were assigned to four previously confirmed loci and three new provisional loci. The non-classical genes had few alleles and the classical genes had higher levels of allelic polymorphism. Alleles for two classical loci with the expected pattern of polymorphism were found in the majority of haplotypes tested, but alleles at two other commonly detected loci had more variation outside of the hypervariable region than within. Our data indicate that the equine Major Histocompatibility Complex is characterized by variation in the complement of class I genes expressed in different haplotypes in addition to the expected allelic polymorphism within loci. PMID:20099063
NASA Astrophysics Data System (ADS)
Aono, Masashi; Gunji, Yukio-Pegio
2004-08-01
How can non-algorithmic/non-deterministic computational syntax be computed? "The hyperincursive system" introduced by Dubois is an anticipatory system that embraces contradiction/uncertainty. Although it may provide a novel viewpoint for understanding complex systems, conventional digital computers cannot, in a strict sense, run faithfully what the hyperincursive computational syntax specifies. Is it then an imaginary story? In this paper we argue that it is not. We show that "Elementary Conflictable Cellular Automata (ECCA)", a model of complex systems proposed by Aono and Gunji, embraces hyperincursivity and nonlocality. ECCA is based on locality-only settings, as are other CA models, but at the same time each cell is required to refer to a globality-dominant regularity. Due to this contradictory locality-globality loop, the time evolution equation specifies that the system reaches a deadlock/infinite loop. However, we show that these problems can be resolved if the computing system has a parallel but non-distributed character, like an amoeboid organism. This paper is an introduction to "slime mold computing", an attempt to cultivate an unconventional notion of computation.
Singh, G D; McNamara, J A; Lozanoff, S
1998-01-01
While the dynamics of maxillo-mandibular allometry associated with treatment modalities available for the management of Class III malocclusions currently are under investigation, developmental aberration of the soft tissues in untreated Class III malocclusions requires specification. In this study, lateral cephalographs of 124 prepubertal European-American children (71 with untreated Class III malocclusion; 53 with Class I occlusion) were traced, and 12 soft-tissue landmarks digitized. Resultant geometries were scaled to an equivalent size and mean Class III and Class I configurations compared. Procrustes analysis established statistical difference (P < 0.001) between the mean configurations. Comparing the overall untreated Class III and Class I configurations, thin-plate spline (TPS) analysis indicated that both affine and non-affine transformations contribute towards the deformation (total spline) of the averaged Class III soft tissue configuration. For non-affine transformations, partial warp 8 had the highest magnitude, indicating large-scale deformations visualized as a combination of columellar retrusion and lower labial protrusion. In addition, partial warp 5 also had a high magnitude, demonstrating upper labial vertical compression with antero-inferior elongation of the lower labio-mental soft tissue complex. Thus, children with Class III malocclusions demonstrate antero-posterior and vertical deformations of the maxillary soft tissue complex in combination with antero-inferior mandibular soft tissue elongation. This pattern of deformations may represent gene-environment interactions, resulting in Class III malocclusions with characteristic phenotypes, that are amenable to orthodontic and dentofacial orthopedic manipulations.
Figures of Merit for Control Verification
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2008-01-01
This paper proposes a methodology for evaluating a controller's ability to satisfy a set of closed-loop specifications when the plant has an arbitrary functional dependency on uncertain parameters. Control verification metrics applicable to deterministic and probabilistic uncertainty models are proposed. These metrics, which result from sizing the largest uncertainty set of a given class for which the specifications are satisfied, enable systematic assessment of competing control alternatives regardless of the methods used to derive them. A particularly attractive feature of the tools derived is that their efficiency and accuracy do not depend on the robustness of the controller. This is in sharp contrast to Monte Carlo based methods where the number of simulations required to accurately approximate the failure probability grows exponentially with its closeness to zero. This framework allows for the integration of complex, high-fidelity simulations of the integrated system and only requires standard optimization algorithms for its implementation.
An ITK framework for deterministic global optimization for medical image registration
NASA Astrophysics Data System (ADS)
Dru, Florence; Wachowiak, Mark P.; Peters, Terry M.
2006-03-01
Similarity metric optimization is an essential step in intensity-based rigid and nonrigid medical image registration. For clinical applications, such as image guidance of minimally invasive procedures, registration accuracy and efficiency are prime considerations. In addition, clinical utility is enhanced when registration is integrated into image analysis and visualization frameworks, such as the popular Insight Toolkit (ITK). ITK is an open source software environment increasingly used to aid the development, testing, and integration of new imaging algorithms. In this paper, we present a new ITK-based implementation of the DIRECT (Dividing Rectangles) deterministic global optimization algorithm for medical image registration. Previously, it has been shown that DIRECT improves the capture range and accuracy for rigid registration. Our ITK class also contains enhancements over the original DIRECT algorithm by improving stopping criteria, adaptively adjusting a locality parameter, and incorporating Powell's method for local refinement. 3D-3D registration experiments with ground-truth brain volumes and clinical cardiac volumes show that combining DIRECT with Powell's method improves registration accuracy over Powell's method used alone, is less sensitive to initial misorientation errors, and, with the new stopping criteria, facilitates adequate exploration of the search space without expending expensive iterations on non-improving function evaluations. Finally, in this framework, a new parallel implementation for computing mutual information is presented, resulting in near-linear speedup with two processors.
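The hybrid global-then-local strategy can be sketched with SciPy, whose scipy.optimize.direct (available in recent SciPy releases) implements the DIRECT algorithm. The multimodal toy objective below stands in for a negated image-similarity metric; it is not the authors' ITK class.

```python
import numpy as np
from scipy.optimize import direct, minimize, Bounds

def neg_similarity(p):
    """Toy stand-in for a (negated) similarity metric with local traps."""
    return np.sum((p - 1.3) ** 2) + 0.3 * np.sum(np.cos(8.0 * p))

bounds = Bounds([-4.0, -4.0], [4.0, 4.0])

# Global stage: DIRECT deterministically explores the whole search space.
glob = direct(neg_similarity, bounds, maxfun=2000)

# Local stage: Powell's derivative-free method refines DIRECT's best point.
loc = minimize(neg_similarity, glob.x, method="Powell", bounds=bounds)

print("DIRECT:", glob.x, "-> refined:", loc.x)
```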
Do rational numbers play a role in selection for stochasticity?
Sinclair, Robert
2014-01-01
When a given tissue must, to be able to perform its various functions, consist of different cell types, each fairly evenly distributed and with specific probabilities, then there are at least two quite different developmental mechanisms which might achieve the desired result. Let us begin with the case of two cell types, and first imagine that the proportion of numbers of cells of these types should be 1:3. Clearly, a regular structure composed of repeating units of four cells, three of which are of the dominant type, will easily satisfy the requirements, and a deterministic mechanism may lend itself to the task. What if, however, the proportion should be 10:33? The same simple, deterministic approach would now require a structure of repeating units of 43 cells, and this certainly seems to require a far more complex and potentially prohibitive deterministic developmental program. Stochastic development, replacing regular units with random distributions of given densities, might not be evolutionarily competitive in comparison with the deterministic program when the proportions should be 1:3, but it has the property that, whatever developmental mechanism underlies it, its complexity does not need to depend very much upon target cell densities at all. We are immediately led to speculate that proportions which correspond to fractions with large denominators (such as the 33 of 10/33) may be more easily achieved by stochastic developmental programs than by deterministic ones, and this is the core of our thesis: that stochastic development may tend to occur more often in cases involving rational numbers with large denominators. To be imprecise: that simple rationality and determinism belong together, as do irrationality and randomness.
Faes, Luca; Nollo, Giandomenico; Porta, Alberto
2012-03-01
The complexity of the short-term cardiovascular control prompts for the introduction of multivariate (MV) nonlinear time series analysis methods to assess directional interactions reflecting the underlying regulatory mechanisms. This study introduces a new approach for the detection of nonlinear Granger causality in MV time series, based on embedding the series by a sequential, non-uniform procedure, and on estimating the information flow from one series to another by means of the corrected conditional entropy. The approach is validated on short realizations of linear stochastic and nonlinear deterministic processes, and then evaluated on heart period, systolic arterial pressure and respiration variability series measured from healthy humans in the resting supine position and in the upright position after head-up tilt. Copyright © 2011 Elsevier Ltd. All rights reserved.
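A simplified, binned cousin of this estimator can be sketched as follows: directed coupling from X to Y is measured as the drop in the conditional entropy of Y's present when X's past is added to the conditioning set. The synthetic coupled series, lag-one embedding, and coarse eight-bin histograms below are illustrative simplifications of the paper's sequential non-uniform embedding and corrected conditional entropy.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic pair of coupled series: X drives Y with a one-step lag.
n = 20000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def joint_entropy(*cols, bins=8):
    """Plug-in entropy (nats) from a coarse multidimensional histogram."""
    counts, _ = np.histogramdd(np.column_stack(cols), bins=bins)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# H(Y_t | Y_{t-1}) versus H(Y_t | Y_{t-1}, X_{t-1}): the drop measures
# directed information flow from X to Y (a transfer-entropy-style quantity).
h_own = joint_entropy(y[1:], y[:-1]) - joint_entropy(y[:-1])
h_full = joint_entropy(y[1:], y[:-1], x[:-1]) - joint_entropy(y[:-1], x[:-1])
print(f"directed X->Y information flow ~ {h_own - h_full:.3f} nats")
```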
NASA Astrophysics Data System (ADS)
Karakatsanis, L. P.; Pavlos, G. P.; Iliopoulos, A. C.; Pavlos, E. G.; Clark, P. M.; Duke, J. L.; Monos, D. S.
2018-09-01
This study combines two independent domains of science, the high throughput DNA sequencing capabilities of Genomics and complexity theory from Physics, to assess the information encoded by the different genomic segments of exonic, intronic and intergenic regions of the Major Histocompatibility Complex (MHC) and identify possible interactive relationships. The dynamic and non-extensive statistical characteristics of two well characterized MHC sequences from the homozygous cell lines, PGF and COX, in addition to two other genomic regions of comparable size, used as controls, have been studied using the reconstructed phase space theorem and the non-extensive statistical theory of Tsallis. The results reveal similar non-linear dynamical behavior as far as complexity and self-organization features are concerned. In particular, the low-dimensional deterministic nonlinear chaotic and non-extensive statistical character of the DNA sequences was verified with strong multifractal characteristics and long-range correlations. The nonlinear indices repeatedly verified that MHC sequences, whether exonic, intronic or intergenic, include varying levels of information and reveal an interaction of the genes with intergenic regions, whereby the lower the number of genes in a region, the less the complexity and information content of the intergenic region. Finally, we showed the significance of the intergenic region in the production of the DNA dynamics. The findings reveal interesting content information in all three genomic elements and interactive relationships of the genes with the intergenic regions. The results most likely are relevant to the whole genome and not only to the MHC. These findings are consistent with the ENCODE project, which has now established that the non-coding regions of the genome remain relevant, as they are functionally important and play a significant role in the regulation of expression of genes and coordination of the many biological processes of the cell.
A family of small-world network models built by complete graph and iteration-function
NASA Astrophysics Data System (ADS)
Ma, Fei; Yao, Bing
2018-02-01
Small-world networks are common in real-life complex systems. In the past few decades, researchers have presented many small-world models, some stochastic and the rest deterministic. In comparison with random models, it is both convenient and interesting to study the topological properties of deterministic models in fields such as graph theory and theoretical computer science. Community structure (modular topology), another focus of current research, is a useful statistical parameter for uncovering the operating functions of a network, so building and studying models with community structure and the small-world character is a worthwhile task. Hence, in this article, we build a family of sparse networks spanning a network space N(t) that differs from previous deterministic models. Our models are established in the same way, by iterative generation, but because connections are made randomly at each time step, no member of N(t) has the strictly self-similar structure shared by a large number of previous models. This turns our attention from analyzing one particular model to investigating a group of related ones spanning a network space. Somewhat surprisingly, our results prove that all members of N(t) possess similar characteristics: (a) sparsity, (b) an exponential degree distribution P(k) ∼ α^{-k}, and (c) the small-world property. We also stress a striking and intriguing phenomenon: the difference in average path length (APL) between any two members of N(t) is quite small, which indicates that the random connection pattern among members has little effect on the APL. At the end of this article, the number of spanning trees on a representative member NB(t) of N(t) is studied in detail as a topological parameter correlated with the reliability, synchronization capability, and diffusion properties of networks, and an exact analytical solution for its spanning-tree entropy is obtained.
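For the spanning-tree count mentioned at the end, Kirchhoff's matrix-tree theorem gives the number of spanning trees as any cofactor of the graph Laplacian. The sketch below demonstrates it on a small complete graph rather than the paper's NB(t), whose iterative construction is not reproduced here; the per-node normalization of the entropy is one common convention.

```python
import numpy as np

def spanning_tree_count(adj):
    """Kirchhoff's matrix-tree theorem: any cofactor of the Laplacian."""
    L = np.diag(adj.sum(axis=1)) - adj
    return round(np.linalg.det(L[1:, 1:]))

# Demo on K4 (complete graph on 4 nodes); Cayley's formula gives 4^2 = 16.
K4 = np.ones((4, 4)) - np.eye(4)
n_trees = spanning_tree_count(K4)
entropy = np.log(n_trees) / K4.shape[0]   # spanning-tree entropy per node
print(n_trees, f"trees; entropy per node = {entropy:.3f}")
```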
First-order reliability application and verification methods for semistatic structures
NASA Astrophysics Data System (ADS)
Verderaime, V.
1994-11-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
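For independent, normally distributed strength R and stress S, the classical safety-index expression referenced above reduces to a two-line computation; the numbers below are illustrative, and the propagation of design uncertainty errors into the index, which is central to the paper, is not modeled here.

```python
from math import erf, sqrt

# First-order reliability sketch for an R - S (strength minus stress) margin
# with independent normal variables; values are illustrative.
mu_R, sig_R = 100.0, 8.0     # strength mean / std
mu_S, sig_S = 70.0, 10.0     # stress mean / std

beta = (mu_R - mu_S) / sqrt(sig_R**2 + sig_S**2)   # safety index
p_fail = 0.5 * (1.0 - erf(beta / sqrt(2.0)))       # Phi(-beta)

print(f"safety index beta = {beta:.2f}, P(failure) ~ {p_fail:.2e}")
```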
Sullivan, L C; Clements, C S; Rossjohn, J; Brooks, A G
2008-11-01
The non-classical major histocompatibility complex (MHC) class I molecule human leucocyte antigen (HLA)-E is the least polymorphic of all the MHC class I molecules and acts as a ligand for receptors of both the innate and the adaptive immune systems. The recognition of self-peptides complexed to HLA-E by the CD94-NKG2A receptor expressed by natural killer (NK) cells represents a crucial checkpoint for immune surveillance by NK cells. However, HLA-E can also be recognised by the T-cell receptor expressed by αβ CD8 T cells and therefore can play a role in the adaptive immune response to invading pathogens. The recent resolution of HLA-E in complex with both innate and adaptive ligands has provided insight into the dual role of this molecule in immunity.
Differentials on graph complexes II: hairy graphs
NASA Astrophysics Data System (ADS)
Khoroshkin, Anton; Willwacher, Thomas; Živković, Marko
2017-10-01
We study the cohomology of the hairy graph complexes which compute the rational homotopy of embedding spaces, generalizing the Vassiliev invariants of knot theory. We provide spectral sequences converging to zero whose first pages contain the hairy graph cohomology. Our results yield a way to construct many nonzero hairy graph cohomology classes out of (known) non-hairy classes by studying the cancellations in those sequences. This provides a first glimpse at the tentative global structure of the hairy graph cohomology.
Coexistence and chaos in complex ecologies
NASA Astrophysics Data System (ADS)
Sprott, J. C.; Vano, J. A.; Wildenberg, J. C.; Anderson, M. B.; Noel, J. K.
2005-02-01
Many complex dynamical systems in ecology, economics, neurology, and elsewhere, in which agents compete for limited resources, exhibit apparently chaotic fluctuations. This Letter proposes a purely deterministic mechanism for evolving robustly but weakly chaotic systems that exhibit adaptation, self-organization, sporadic volatility, and punctuated equilibria.
Deterministic photon bias in speckle imaging
NASA Technical Reports Server (NTRS)
Beletic, James W.
1989-01-01
A method for determining photon bias terms in speckle imaging is presented, and photon bias is shown to be a deterministic quantity that can be calculated without the use of the expectation operator. The quantities obtained are found to be identical to previous results. The present results have extended photon bias calculations to the important case of the bispectrum where photon events are assigned different weights, in which regime the bias is a frequency dependent complex quantity that must be calculated for each frame.
Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN
2012-03-27
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Yen Ting; Buchler, Nicolas E.
Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. In addition to slow chromatin and/or DNA looping dynamics, one source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analysing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying chemical master equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Finally, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site.
Sharma, Vijay
2009-09-10
Physiological systems such as the cardiovascular system are capable of five kinds of behavior: equilibrium, periodicity, quasi-periodicity, deterministic chaos and random behavior. Systems adopt one or more of these behaviors depending on the function they have evolved to perform. The emerging mathematical concepts of fractal mathematics and chaos theory are extending our ability to study physiological behavior. Fractal geometry is observed in the physical structure of pathways, networks and macroscopic structures such as the vasculature and the His-Purkinje network of the heart. Fractal structure is also observed in processes in time, such as heart rate variability. Chaos theory describes the underlying dynamics of the system, and chaotic behavior is also observed at many levels, from effector molecules in the cell to heart function and blood pressure. This review discusses the role of fractal structure and chaos in the cardiovascular system at the level of the heart and blood vessels, and at the cellular level. Key functional consequences of these phenomena are highlighted, and a perspective provided on the possible evolutionary origins of chaotic behavior and fractal structure. The discussion is non-mathematical with an emphasis on the key underlying concepts.
Some mechanistic requirements for major transitions
2016-01-01
Major transitions in nature and human society are accompanied by a substantial change towards higher complexity in the core of the evolving system. New features are established, novel hierarchies emerge, new regulatory mechanisms are required and so on. An obvious way to achieve higher complexity is integration of autonomous elements into new organized systems whereby the previously independent units give up their autonomy at least in part. In this contribution, we reconsider the more than 40 years old hypercycle model and analyse it by the tools of stochastic chemical kinetics. An open system is implemented in the form of a flow reactor. The formation of new dynamically organized units through integration of competitors is identified with transcritical bifurcations. In the stochastic model, the fully organized state is quasi-stationary whereas the unorganized state corresponds to a population with natural selection. The stability of the organized state depends strongly on the number of individual subspecies, n, that have to be integrated: two and three classes of individuals, n = 2 and n = 3, readily form quasi-stationary states. The four-membered deterministic dynamical system, n = 4, is stable, but in the stochastic approach self-enhancing fluctuations drive it into extinction. In systems with five and more classes of individuals, n ≥ 5, the state of cooperation is unstable and the solutions of the deterministic ODEs exhibit large amplitude oscillations. In the stochastic system self-enhancing fluctuations lead to extinction as observed with n = 4. Interestingly, cooperative systems in nature are commonly two-membered as shown by numerous examples of binary symbiosis. A few cases of symbiosis of three partners, called three-way symbiosis, have been found and were analysed within the past decade. Four-way symbiosis is rather rare but was reported to occur in fungus-growing ants. The model reported here can be used to illustrate the interplay between competition and cooperation whereby we obtain a hint on the role that resources play in major transitions. Abundance of resources seems to be an indispensable prerequisite of radical innovation that apparently needs substantial investments. Economists often claim that scarcity is driving innovation. Our model sheds some light on this apparent contradiction. In a nutshell, the answer is: scarcity drives optimization and increase in efficiency but abundance is required for radical novelty and the development of new features. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431517
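The deterministic side of these claims is easy to reproduce: integrating the elementary hypercycle replicator equations shows relaxation to coexistence for n = 3 but sustained oscillation for n = 5. The Euler stepping, rate constant, and perturbed initial condition below are illustrative choices; the stochastic flow-reactor version of the model is not reproduced here.

```python
import numpy as np

def simulate(n, steps=100_000, dt=0.02, k=1.0):
    """Euler-integrate the elementary hypercycle replicator equations."""
    x = np.full(n, 1.0 / n) + 0.01 * np.cos(2 * np.pi * np.arange(n) / n)
    x /= x.sum()
    for _ in range(steps):
        growth = k * x * np.roll(x, 1)      # catalytic terms x_i * x_{i-1}
        phi = growth.sum()                  # dilution flux keeps sum(x) = 1
        x += dt * (growth - x * phi)
        x = np.clip(x, 1e-12, None)
        x /= x.sum()
    return x

for n in (3, 5):
    x = simulate(n)
    print(f"n={n}: spread of final concentrations = {x.max() - x.min():.4f}")
```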
Selden, Steven
2005-06-01
In the early 1920s, determinist conceptions of biology helped to transform Better Babies contests into Fitter Families competitions with a strong commitment to controlled human breeding. While the earlier competitions were concerned with physical and mental standards, the latter contests collected data on a broad range of presumed hereditary characters. The complex behaviors thought to be determined by one's heredity included being generous, jealous, and cruel. In today's context, the popular media often interpret advances in molecular genetics in a similarly reductive and determinist fashion. This paper argues that such a narrow interpretation of contemporary biology unnecessarily constrains the public in developing social policies concerning complex social behavior ranging from crime to intelligence.
The way to uncover community structure with core and diversity
NASA Astrophysics Data System (ADS)
Chang, Y. F.; Han, S. K.; Wang, X. D.
2018-07-01
Communities are ubiquitous in nature and society. Individuals that share common properties often self-organize to form communities. Avoiding the drawbacks of high computational complexity, reliance on pre-given information, and unstable results across different runs, in this paper we propose a simple and efficient method that deepens our understanding of the emergence and diversity of communities in complex systems. By introducing rational random selection, our method reveals both the hidden deterministic state and the normal diverse states of community structure. To demonstrate this method, we test it on real-world systems. The results show that our method can not only detect community structure with high sensitivity and reliability, but also provide instructive information about the hidden deterministic community world and the real, normally diverse community world by extracting the core-community, the real-community, the tide, and the diversity. This is of paramount importance in understanding, predicting, and controlling a variety of collective behaviors in complex systems.
Kim, Sung-Cheol; Wunsch, Benjamin H; Hu, Huan; Smith, Joshua T; Austin, Robert H; Stolovitzky, Gustavo
2017-06-27
Deterministic lateral displacement (DLD) is a technique for size fractionation of particles in continuous flow that has shown great potential for biological applications. Several theoretical models have been proposed, but experimental evidence has demonstrated that a rich class of intermediate migration behavior exists, which is not predicted. We present a unified theoretical framework to infer the path of particles in the whole array on the basis of trajectories in a unit cell. This framework explains many of the unexpected particle trajectories reported and can be used to design arrays for even nanoscale particle fractionation. We performed experiments that verify these predictions and used our model to develop a condenser array that achieves full particle separation with a single fluidic input.
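For orientation, DLD array design usually starts from the widely used empirical fit due to Davis, D_c ≈ 1.4 g ε^0.48, which predicts the critical diameter separating zigzag from displaced trajectories. This is the standard rule that the paper's unified framework refines, not the framework itself, and the gap and shift values below are illustrative.

```python
# Empirical DLD design rule (Davis): critical diameter separating the
# "zigzag" and "displaced" modes. g = gap between posts, eps = row-shift
# fraction (1/N for a period-N array). Values below are illustrative.
def critical_diameter(gap_um, eps):
    return 1.4 * gap_um * eps**0.48

for eps in (0.1, 0.05, 0.02):
    dc = critical_diameter(10.0, eps)      # 10 um post gap
    print(f"eps = {eps:4}: particles above ~{dc:.2f} um are displaced")
```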
Non-Deterministic Dynamic Instability of Composite Shells
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2004-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower is the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio, the fiber longitudinal modulus, the dynamic load, and the loading rate are the dominant uncertainties, in that order.
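The probabilistic-buckling idea can be miniaturized with Monte Carlo sampling over a closed-form buckling load; the Euler column formula and the assumed normal input distributions below merely stand in for the paper's incremental Lagrangian finite-element analysis of a composite shell.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sketch of a probabilistic buckling load. The Euler column
# formula P_cr = pi^2 E I / L^2 stands in for the shell FE model; E and t
# (hence I ~ t^3) are sampled from assumed normal distributions.
n = 100_000
E = rng.normal(70e9, 3.5e9, n)            # modulus, Pa (5% cov, assumed)
t = rng.normal(2e-3, 1e-4, n)             # thickness, m (5% cov, assumed)
b, L = 0.05, 1.0                          # width and length, m
I = b * t**3 / 12.0
P_cr = np.pi**2 * E * I / L**2

for q in (0.001, 0.01, 0.5):
    print(f"P(buckling load <= {np.quantile(P_cr, q):8.1f} N) = {q}")
```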
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desai, V; Labby, Z; Culberson, W
Purpose: To determine whether body site-specific treatment plans form unique “plan class” clusters in a multi-dimensional analysis of plan complexity metrics such that a single beam quality correction determined for a representative plan could be universally applied within the “plan class”, thereby increasing the dosimetric accuracy of a detector’s response within a subset of similarly modulated nonstandard deliveries. Methods: We collected 95 clinical volumetric modulated arc therapy (VMAT) plans from four body sites (brain, lung, prostate, and spine). The lung data was further subdivided into SBRT and non-SBRT data for a total of five plan classes. For each control point in each plan, a variety of aperture-based complexity metrics were calculated and stored as unique characteristics of each patient plan. A multiple comparison of means analysis was performed such that every plan class was compared to every other plan class for every complexity metric in order to determine which groups could be considered different from one another. Statistical significance was assessed after correcting for multiple hypothesis testing. Results: Six out of a possible 10 pairwise plan class comparisons were uniquely distinguished based on at least nine out of 14 of the proposed metrics (Brain/Lung, Brain/SBRT lung, Lung/Prostate, Lung/SBRT Lung, Lung/Spine, Prostate/SBRT Lung). Eight out of 14 of the complexity metrics could distinguish at least six out of the possible 10 pairwise plan class comparisons. Conclusion: Aperture-based complexity metrics could prove to be useful tools to quantitatively describe a distinct class of treatment plans. Certain plan-averaged complexity metrics could be considered unique characteristics of a particular plan. A new approach to generating plan-class specific reference (pcsr) fields could be established through a targeted preservation of select complexity metrics or a clustering algorithm that identifies plans exhibiting similar modulation characteristics. Measurements and simulations will better elucidate potential plan-class specific dosimetry correction factors.
NASA Astrophysics Data System (ADS)
Samoilov, Michael; Plyasunov, Sergey; Arkin, Adam P.
2005-02-01
Stochastic effects in biomolecular systems have now been recognized as a major physiologically and evolutionarily important factor in the development and function of many living organisms. Nevertheless, they are often thought of as providing only moderate refinements to the behaviors otherwise predicted by the classical deterministic system description. In this work we show, using both analytical and numerical investigation, that in at least one ubiquitous class of (bio)chemical-reaction mechanisms, enzymatic futile cycles, external noise may induce a bistable oscillatory (dynamic switching) behavior that is both quantitatively and qualitatively different from what is predicted or possible deterministically. We further demonstrate that the noise required to produce these distinct properties can itself be caused by a set of auxiliary chemical reactions, making it feasible for biological systems of sufficient complexity to generate such behavior internally. This new stochastic dynamics then serves to confer additional functional modalities on the enzymatic futile cycle mechanism, including stochastic amplification and signaling, whose characteristics could be controlled by both the type and parameters of the driving noise. Hence, such noise-induced phenomena may, among other roles, potentially offer a novel type of control mechanism in pathways that contain these cycles and similar units. In particular, observations of endogenous or externally driven noise-induced dynamics in regulatory networks may thus provide additional insight into their topology, structure, and kinetics.
Complex dynamics in ecological time series
Peter Turchin; Andrew D. Taylor
1992-01-01
Although the possibility of complex dynamical behaviors (limit cycles, quasiperiodic oscillations, and aperiodic chaos) has been recognized theoretically, most ecologists are skeptical of their importance in nature. In this paper we develop a methodology for reconstructing endogenous (or deterministic) dynamics from ecological time series. Our method consists of fitting...
Gallego-Perez, Daniel; Otero, Jose J; Czeisler, Catherine; Ma, Junyu; Ortiz, Cristina; Gygli, Patrick; Catacutan, Fay Patsy; Gokozan, Hamza Numan; Cowgill, Aaron; Sherwood, Thomas; Ghatak, Subhadip; Malkoc, Veysi; Zhao, Xi; Liao, Wei-Ching; Gnyawali, Surya; Wang, Xinmei; Adler, Andrew F; Leong, Kam; Wulff, Brian; Wilgus, Traci A; Askwith, Candice; Khanna, Savita; Rink, Cameron; Sen, Chandan K; Lee, L James
2016-02-01
Safety concerns and/or the stochastic nature of current transduction approaches have hampered the clinical translation of nuclear reprogramming. We report a novel non-viral nanotechnology-based platform permitting deterministic large-scale transfection with single-cell resolution. The superior capabilities of our technology are demonstrated by modification of the well-established direct neuronal reprogramming paradigm using overexpression of the transcription factors Brn2, Ascl1, and Myt1l (BAM). Reprogramming efficiencies were comparable to viral methodologies (up to ~9-12%) without the constraints of capsid size and with the ability to control plasmid dosage, in addition to showing superior performance relative to existing non-viral methods. Increased neuronal complexity could be tailored by varying the BAM ratio and by adding further proneural genes to the BAM cocktail. Furthermore, high-throughput nanochannel electroporation (NEP) allowed easy interrogation of the reprogramming process. We discovered that BAM-mediated reprogramming is regulated by Ascl1 dosage and the S-phase cyclin CCNA2, and that some induced neurons passed through a nestin-positive cell stage. In the field of regenerative medicine, the ability to direct cell fate by nuclear reprogramming is an important facet of clinical application. In this article, the authors describe their novel technique of cell reprogramming through overexpression of the transcription factors Brn2, Ascl1, and Myt1l (BAM) by in situ electroporation through nanochannels. This new technique could provide a platform for future designs. Copyright © 2016 Elsevier Inc. All rights reserved.
Tøndel, Kristin; Indahl, Ulf G; Gjuvsland, Arne B; Vik, Jon Olav; Hunter, Peter; Omholt, Stig W; Martens, Harald
2011-06-01
Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems.
2011-01-01
Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. Conclusions HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems. PMID:21627852
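The local-regression idea behind HC-PLSR can be sketched with off-the-shelf tools: partition the input space by clustering, then fit one PLS model per cluster and route new points to their cluster's model. A minimal illustration, assuming scikit-learn and substituting hard K-means for the paper's fuzzy C-means step (data and cluster count are hypothetical):

```python
# Local PLS regression in the spirit of HC-PLSR: cluster the input space,
# fit one PLS model per cluster, route new points to their cluster's model.
# KMeans stands in for fuzzy C-means; data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 3))             # parameters (inputs)
y = np.sin(X[:, 0] * X[:, 1]) + X[:, 2] ** 2      # nonlinear, non-monotone output

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
local = {c: PLSRegression(n_components=2).fit(X[km.labels_ == c],
                                              y[km.labels_ == c])
         for c in range(4)}

X_new = rng.uniform(-2, 2, size=(5, 3))
y_hat = [local[c].predict(x[None, :]).item()
         for x, c in zip(X_new, km.predict(X_new))]
print(np.round(y_hat, 3))
```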
Model reduction for stochastic chemical systems with abundant species.
Smith, Stephen; Cianci, Claudia; Grima, Ramon
2015-12-07
Biochemical processes typically involve many chemical species, some in abundance and some in low molecule numbers. We first identify the rate constant limits under which the concentrations of a given set of species will tend to infinity (the abundant species) while the concentrations of all other species remain constant (the non-abundant species). Subsequently, we prove that, in this limit, the fluctuations in the molecule numbers of non-abundant species are accurately described by a hybrid stochastic description consisting of a chemical master equation coupled to deterministic rate equations. This is a reduced description when compared to the conventional chemical master equation, which describes the fluctuations in both abundant and non-abundant species. We show that the reduced master equation can be solved exactly for a number of biochemical networks involving gene expression and enzyme catalysis, whose conventional chemical master equation description is analytically impenetrable. We use the linear noise approximation to obtain approximate expressions for the difference between the variance of fluctuations in the non-abundant species as predicted by the hybrid approach and by the conventional chemical master equation. Furthermore, we show that, surprisingly, irrespective of any separation in the mean molecule numbers of various species, the conventional and hybrid master equations exactly agree for a class of chemical systems.
Model reduction for stochastic chemical systems with abundant species
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Stephen; Cianci, Claudia; Grima, Ramon
2015-12-07
Biochemical processes typically involve many chemical species, some in abundance and some in low molecule numbers. We first identify the rate constant limits under which the concentrations of a given set of species will tend to infinity (the abundant species) while the concentrations of all other species remain constant (the non-abundant species). Subsequently, we prove that, in this limit, the fluctuations in the molecule numbers of non-abundant species are accurately described by a hybrid stochastic description consisting of a chemical master equation coupled to deterministic rate equations. This is a reduced description when compared to the conventional chemical master equation, which describes the fluctuations in both abundant and non-abundant species. We show that the reduced master equation can be solved exactly for a number of biochemical networks involving gene expression and enzyme catalysis, whose conventional chemical master equation description is analytically impenetrable. We use the linear noise approximation to obtain approximate expressions for the difference between the variance of fluctuations in the non-abundant species as predicted by the hybrid approach and by the conventional chemical master equation. Furthermore, we show that, surprisingly, irrespective of any separation in the mean molecule numbers of various species, the conventional and hybrid master equations exactly agree for a class of chemical systems.
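The hybrid description can be illustrated on a toy two-stage gene expression model: the low-copy mRNA is simulated with Gillespie's stochastic algorithm, while the abundant protein is integrated deterministically between mRNA events (here exactly, since the mRNA count is constant over each interval). A sketch under those assumptions; all rate constants are hypothetical:

```python
# Hybrid stochastic/deterministic simulation of two-stage gene expression:
# Gillespie SSA for the low-copy mRNA, exact deterministic integration of
# dp/dt = k_p*m - d_p*p for the abundant protein between mRNA events.
import numpy as np

rng = np.random.default_rng(1)
k_m, d_m = 2.0, 1.0          # mRNA synthesis / degradation rates
k_p, d_p = 50.0, 0.1         # protein synthesis / degradation rates

t, t_end = 0.0, 100.0
m, p = 0, 0.0
while t < t_end:
    a_birth, a_death = k_m, d_m * m
    a_tot = a_birth + a_death
    tau = rng.exponential(1.0 / a_tot)     # waiting time to next mRNA event
    p_star = k_p * m / d_p                 # protein fixed point while m is fixed
    p = p_star + (p - p_star) * np.exp(-d_p * tau)
    t += tau
    m += 1 if rng.random() < a_birth / a_tot else -1

print(f"final mRNA = {m}, protein = {p:.1f}")
```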
Theory of Stochastic Laplacian Growth
NASA Astrophysics Data System (ADS)
Alekseev, Oleg; Mineev-Weinstein, Mark
2017-07-01
We generalize diffusion-limited aggregation by issuing many randomly walking particles, which stick to a cluster at discrete time steps, providing its growth. Using simple combinatorial arguments we determine the probabilities of different growth scenarios and prove that the most probable evolution is governed by the deterministic Laplacian growth equation. A potential-theoretic analysis of the growth probabilities reveals connections with the tau-function of the integrable dispersionless limit of the two-dimensional Toda hierarchy, normal matrix ensembles, and the two-dimensional Dyson gas confined in a non-uniform magnetic field. We introduce a time-dependent Hamiltonian, which generates transitions between different equivalence classes of closed curves, and prove the Hamiltonian structure of the interface dynamics. Finally, we propose a relation between the probabilities of growth scenarios and the semi-classical limit of certain correlation functions of "light" exponential operators in Liouville conformal field theory on a pseudosphere.
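The starting point, diffusion-limited aggregation itself, is straightforward to simulate: random walkers wander until they touch the cluster, then stick. A crude on-lattice sketch (launch sites and boundary handling are simplifications; this is the classic DLA baseline, not the multi-particle generalization of the paper):

```python
# Classic on-lattice DLA baseline: each walker wanders until it touches the
# cluster, then sticks. Launch sites and boundaries are crude simplifications.
import numpy as np

rng = np.random.default_rng(2)
N = 61
grid = np.zeros((N, N), dtype=bool)
grid[N // 2, N // 2] = True                   # seed particle
moves = ((1, 0), (-1, 0), (0, 1), (0, -1))

for _ in range(200):                          # release 200 walkers
    x, y = rng.integers(1, N - 1, size=2)
    while True:
        dx, dy = moves[rng.integers(4)]
        x = int(np.clip(x + dx, 1, N - 2))
        y = int(np.clip(y + dy, 1, N - 2))
        if grid[x + 1, y] or grid[x - 1, y] or grid[x, y + 1] or grid[x, y - 1]:
            grid[x, y] = True                 # stick on first contact
            break

print("cluster size:", int(grid.sum()))
```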
Cairoli, Andrea; Piovani, Duccio; Jensen, Henrik Jeldtoft
2014-12-31
We propose a new procedure to monitor and forecast the onset of transitions in high-dimensional complex systems. We describe our procedure by an application to the tangled nature model of evolutionary ecology. The quasistable configurations of the full stochastic dynamics are taken as input for a stability analysis by means of the deterministic mean-field equations. Numerical analysis of the high-dimensional stability matrix allows us to identify unstable directions associated with eigenvalues with a positive real part. The overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean-field approximation is found to be a good early warning of the transitions occurring intermittently.
Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments.
Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter
2017-01-01
Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments - one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line model. The derived time domain channel model takes into account major propagation controlling factors including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both channel delay spread and angular spread for tunnel environments are relatively smaller compared to that of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments.
Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments
Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter
2018-01-01
Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments - one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line model. The derived time domain channel model takes into account major propagation controlling factors including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both channel delay spread and angular spread for tunnel environments are relatively smaller compared to that of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments. PMID:29457801
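Given any tapped-delay-line CIR of the kind derived above, channel parameters such as the RMS delay spread follow from the power delay profile by simple moments. A small numeric sketch with hypothetical tap delays and gains (not values from the paper's model):

```python
# RMS delay spread from a tapped-delay-line CIR, h(t) = sum_k a_k d(t - tau_k),
# via moments of the power delay profile. Tap values are hypothetical.
import numpy as np

tau = np.array([0.0, 15e-9, 40e-9, 85e-9])    # tap delays (s)
a = np.array([1.0, 0.5, 0.25, 0.1])           # tap amplitudes

P = np.abs(a) ** 2                            # power delay profile
mean_delay = np.sum(P * tau) / P.sum()
rms = np.sqrt(np.sum(P * (tau - mean_delay) ** 2) / P.sum())
print(f"mean excess delay = {mean_delay * 1e9:.1f} ns, "
      f"RMS delay spread = {rms * 1e9:.1f} ns")
```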
Deterministic direct reprogramming of somatic cells to pluripotency.
Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H
2013-10-03
Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from successfully and synchronously reprogramming remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of the molecular dynamics leading to the establishment of pluripotency with unprecedented flexibility and resolution.
The deterministic optical alignment of the HERMES spectrograph
NASA Astrophysics Data System (ADS)
Gers, Luke; Staszak, Nicholas
2014-07-01
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four-channel, VPH-grating spectrograph fed by two 400-fiber slit assemblies, whose construction and commissioning have now been completed at the Anglo-Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles by which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.
Autonomous choices among deterministic evolution-laws as source of uncertainty
NASA Astrophysics Data System (ADS)
Trujillo, Leonardo; Meyroneinc, Arnaud; Campos, Kilver; Rendón, Otto; Sigalotti, Leonardo Di G.
2018-03-01
We provide evidence of an extreme form of sensitivity to initial conditions in a family of one-dimensional self-ruling dynamical systems. We prove that some hyperchaotic sequences are closed-form expressions of the orbits of these pseudo-random dynamical systems. Each chaotic system in this family exhibits a sensitivity to initial conditions that encompasses the sequence of choices of the evolution rule in some collection of maps. This opens a possibility to extend current theories of complex behaviors on the basis of intrinsic uncertainty in deterministic chaos.
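The flavor of "autonomous choices among evolution laws" can be mimicked in a few lines: iterate two different chaotic maps, selecting which rule applies at each step from a fixed pseudo-random sequence, and watch nearby initial conditions diverge. A toy illustration (the maps and rule sequence are hypothetical, not the family constructed in the paper):

```python
# Iterate two chaotic maps on [0, 1], choosing the rule at each step from a
# fixed pseudo-random bit sequence, and track the divergence of two nearby
# initial conditions.
import numpy as np

rng = np.random.default_rng(3)
rule = rng.integers(0, 2, size=60)               # per-step choice of law

def step(x, bit):
    # bit = 1: logistic map; bit = 0: tent map
    return 4.0 * x * (1.0 - x) if bit else (2 * x if x < 0.5 else 2 * (1 - x))

x, y = 0.3, 0.3 + 1e-10
for bit in rule:
    x, y = step(x, bit), step(y, bit)
print(f"separation after {rule.size} steps: {abs(x - y):.3e}")
```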
Kim, Sung-Cheol; Wunsch, Benjamin H.; Hu, Huan; Smith, Joshua T.; Stolovitzky, Gustavo
2017-01-01
Deterministic lateral displacement (DLD) is a technique for size fractionation of particles in continuous flow that has shown great potential for biological applications. Several theoretical models have been proposed, but experimental evidence has demonstrated that a rich class of intermediate migration behavior exists, which is not predicted. We present a unified theoretical framework to infer the path of particles in the whole array on the basis of trajectories in a unit cell. This framework explains many of the unexpected particle trajectories reported and can be used to design arrays for even nanoscale particle fractionation. We performed experiments that verify these predictions and used our model to develop a condenser array that achieves full particle separation with a single fluidic input. PMID:28607075
Multi-dimensional photonic states from a quantum dot
NASA Astrophysics Data System (ADS)
Lee, J. P.; Bennett, A. J.; Stevenson, R. M.; Ellis, D. J. P.; Farrer, I.; Ritchie, D. A.; Shields, A. J.
2018-04-01
Quantum states superposed across multiple particles or degrees of freedom offer an advantage in the development of quantum technologies. Creating these states deterministically and with high efficiency is an ongoing challenge. A promising approach is the repeated excitation of multi-level quantum emitters, which have been shown to naturally generate light with quantum statistics. Here we describe how to create one class of higher-dimensional quantum state, a so-called W-state, which is superposed across multiple time bins. We do this by repeated Raman scattering of photons from a charged quantum dot in a pillar microcavity. We show this method can be scaled to larger dimensions with no reduction in coherence or single-photon character. We explain how to extend this work to enable the deterministic creation of arbitrary time-bin encoded qudits.
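In the time-bin picture, a d-dimensional W-state is a single excitation shared equally across d bins, so its state vector and coherences can be checked with plain linear algebra. A minimal numeric sketch:

```python
# A d-dimensional time-bin W-state: one photon in an equal superposition of
# d time bins. Check normalization and the inter-bin coherences numerically.
import numpy as np

d = 4
psi = np.ones(d) / np.sqrt(d)          # amplitude in each time bin
rho = np.outer(psi, psi.conj())        # density matrix of the pure state

print("norm:", np.vdot(psi, psi).real)        # -> 1.0
print("bin 0-1 coherence:", rho[0, 1])        # -> 1/d for a W-state
```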
NASA Technical Reports Server (NTRS)
Huber, Hans
2006-01-01
Air transport forms complex networks that can be measured in order to understand their structural characteristics and functional properties. Recent models for network growth (e.g., preferential attachment) remain stochastic and do not seek to understand other network-specific mechanisms that may account for their development in a more microscopic way. Air traffic is made up of many constituent airlines that are either privately or publicly owned and that operate their own networks, each following broadly similar business policies. The way these airline networks organize among themselves into distinct traffic distributions reveals complex interaction among them, which in turn can be aggregated into larger (macro-) traffic distributions. Our approach allows for a more deterministic methodology that assesses the impact of airline strategies on the distinct distributions of air traffic, particularly inside Europe. One key question this paper seeks to answer is whether there are distinct patterns of preferential attachment for given classes of airline networks to distinct types of European airports. Conclusions are drawn about the advancing degree of concentration in this industry and the airline operators that accelerate this process.
NASA Astrophysics Data System (ADS)
Suarez Mullins, Astrid
Terrain-induced gravity waves and rotor circulations have been hypothesized to enhance the generation of submeso motions (i.e., nonstationary shear events with spatial and temporal scales greater than the turbulence scale and smaller than the meso-gamma scale) and to modulate low-level intermittency in the stable boundary layer (SBL). Intermittent turbulence, generated by submeso motions and/or the waves, can affect the atmospheric transport and dispersion of pollutants and hazardous materials. Thus, the study of these motions and the mechanisms through which they impact the weakly to very stable SBL is crucial for improving air quality modeling and hazard predictions. In this thesis, the effects of waves and rotor circulations on submeso and turbulence variability within the SBL are investigated over the moderate terrain of central Pennsylvania using special observations from a network deployed at Rock Springs, PA and high-resolution Weather Research and Forecasting (WRF) model forecasts. The investigation of waves and rotors over central PA is important because 1) the moderate topography of this region is common to most of the eastern US, and thus the knowledge acquired from this study can be of significance to a large population, 2) there has been little evidence of complex wave structures and rotors reported for this region, and 3) little is known about the waves and rotors generated by smaller and more moderate topographies. Six case studies exhibiting an array of wave and rotor structures are analyzed. Observational evidence of the presence of complex wave structures, resembling nonstationary trapped gravity waves and downslope windstorms, and of complex rotor circulations, resembling trapped and jump-type rotors, is presented. These motions and the mechanisms through which they modulate the SBL are further investigated using high-resolution WRF forecasts. First, the efficacy of the 0.444-km horizontal grid spacing WRF model in reproducing submeso and meso-gamma motions, generated by waves and rotors and hypothesized to impact the SBL, is investigated using a new wavelet-based verification methodology for assessing non-deterministic model skill in the submeso and meso-gamma range, complementing standard deterministic measures. This technique allows the verification and/or intercomparison of any two nonstationary stochastic systems without many of the limitations of typical wavelet-based verification approaches (e.g., selection of noise models, testing for significance, etc.). Through this analysis, it is shown that the WRF model largely underestimates the number of small-amplitude fluctuations in the small submeso range, as expected, and overestimates the number of small-amplitude fluctuations in the meso-gamma range, generally resulting in forecasts that are too smooth. Investigation of the variability for different initialization strategies shows that deterministic wind speed predictions are less sensitive to the choice of initialization strategy than temperature forecasts. Similarly, investigation of the variability for various planetary boundary layer (PBL) parameterizations reveals that turbulent kinetic energy (TKE)-based schemes have an advantage over the non-local schemes for non-deterministic motions. The larger spread in the verification scores across PBL parameterizations than across initialization strategies indicates that the PBL parameterization may play a larger role in modulating the variability of non-deterministic motions in the SBL for these cases.
These results confirm previous findings that have shown WRF to have limited skill in forecasting submeso variability for periods greater than ~20 min. The limited skill of WRF at these scales in these cases is related to the systematic underestimation of the amplitude of observed fluctuations. These results are incorporated into the model design and configuration for the investigation of nonstationary waves and rotor structures modulating submeso and meso-gamma motions and the SBL. Observations and WRF forecasts of two wave cases characterized by nonstationary waves and rotors are investigated to show that the WRF model has reasonable accuracy in forecasting low-level temperature and wind speed in the SBL and qualitatively produces rotors similar to those observed, as well as some of the mechanisms modulating their development and evolution. Finally, observations and high-resolution WRF forecasts under different environmental conditions using various initialization strategies are used to investigate the impact of nonlinear gravity waves and rotor structures on the generation of intermittent turbulence and valley transport in the SBL. Evidence of the presence of elevated regions of TKE generated by the complex waves and rotors is presented and investigated using an additional four case studies, exhibiting two synoptic flow regimes and different wave and rotor structures. Throughout this thesis, terrain-induced gravity waves and rotors in the SBL are shown to interact synergistically with the surface cold pool and to enhance low-level turbulence intermittency through the development of submeso and meso-gamma motions. These motions are shown to be an important source of uncertainty for the atmospheric transport and dispersion of pollutants and hazardous materials under very stable conditions.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
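For contrast with the two-phase algorithm described above, the textbook baseline is Moore-style partition refinement, which repeatedly splits state blocks by transition signatures until stable. A compact sketch for a complete DFA (this is the generic method, not the paper's backward-depth construction):

```python
# Baseline DFA minimization by Moore-style partition refinement: states are
# split by (own block, successor blocks) signatures until stable. Assumes a
# complete DFA, i.e., delta defines a transition for every (state, symbol).
def minimize(states, alphabet, delta, accepting):
    partition = {s: (s in accepting) for s in states}
    while True:
        sig = {s: (partition[s],
                   tuple(partition[delta[(s, a)]] for a in alphabet))
               for s in states}
        label = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        refined = {s: label[sig[s]] for s in states}
        if len(set(refined.values())) == len(set(partition.values())):
            return refined                 # state -> block id of minimal DFA
        partition = refined

# Toy DFA: accepting states 'b' and 'c' are equivalent and share one block.
delta = {('a', '0'): 'b', ('a', '1'): 'c', ('b', '0'): 'b', ('b', '1'): 'c',
         ('c', '0'): 'b', ('c', '1'): 'c'}
print(minimize({'a', 'b', 'c'}, ['0', '1'], delta, {'b', 'c'}))
```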
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft s power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn s internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
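The difference between a deterministic run and a probabilistic analysis can be shown in miniature: sample the uncertain inputs of a (toy) power model and examine the spread of the output rather than a single value. All parameters below are hypothetical stand-ins, not values from the SPACE model:

```python
# Monte Carlo propagation of input uncertainties through a toy power model,
# contrasted with a single deterministic evaluation. Numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

area = rng.normal(100.0, 2.0, n)         # array area, m^2
eff = rng.normal(0.14, 0.01, n)          # conversion efficiency
degrade = rng.uniform(0.90, 1.00, n)     # degradation factor
flux = 1361.0                            # solar constant, W/m^2

power = flux * area * eff * degrade      # power capability samples, W
det = flux * 100.0 * 0.14 * 0.95         # single deterministic run
print(f"deterministic: {det / 1e3:.1f} kW")
print(f"Monte Carlo:   mean {power.mean() / 1e3:.1f} kW, "
      f"5th percentile {np.percentile(power, 5) / 1e3:.1f} kW")
```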
Method of fuzzy inference for one class of MISO-structure systems with non-singleton inputs
NASA Astrophysics Data System (ADS)
Sinuk, V. G.; Panchenko, M. V.
2018-03-01
In fuzzy modeling, the inputs of the simulated systems can receive both crisp (singleton) and non-singleton values. The computational complexity of fuzzy inference with non-singleton fuzzy inputs is exponential. This paper describes a new method of inference based on the theorem of decomposition of a multidimensional fuzzy implication and a fuzzy truth value. This method is considered for fuzzy inputs and has polynomial complexity, which makes it possible to use it for modeling large-dimensional MISO-structure systems.
Zhang, Haihong; Guan, Cuntai; Ang, Kai Keng; Wang, Chuanchu
2012-01-01
Detecting motor imagery activities versus non-control in brain signals is the basis of self-paced brain-computer interfaces (BCIs), but also poses a considerable challenge to signal processing due to the complex and non-stationary characteristics of motor imagery as well as non-control. This paper presents a self-paced BCI based on a robust learning mechanism that extracts and selects spatio-spectral features for differentiating multiple EEG classes. It also employs a non-linear regression and post-processing technique for predicting the time-series of class labels from the spatio-spectral features. The method was validated in the BCI Competition IV on Dataset I where it produced the lowest prediction error of class labels continuously. This report also presents and discusses analysis of the method using the competition data set. PMID:22347153
The evolution of flaring and non-flaring active regions
NASA Astrophysics Data System (ADS)
Kilcik, A.; Yurchyshyn, V.; Sahin, S.; Sarp, V.; Obridko, V.; Ozguc, A.; Rozelot, J. P.
2018-06-01
According to the modified Zurich classification, sunspot groups are classified into seven different classes (A, B, C, D, E, F and H) based on their morphology and evolution. In this classification, classes A and B, which are small groups, describe the beginning of sunspot evolution, while classes D, E and F describe the large and evolved groups. Class C describes the middle phase of sunspot evolution and the class H describes the end of sunspot evolution. Here, we compare the lifetime and temporal evolution of flaring and non-flaring active regions (ARs), and the flaring effect on ARs in these groups in detail for the last two solar cycles (1996 through 2016). Our main findings are as follows: (i) Flaring sunspot groups have longer lifetimes than non-flaring ones. (ii) Most of the class A, B and C flaring ARs rapidly evolve to higher classes, while this is not applicable for non-flaring ARs. More than 50 per cent of the flaring A, B and C groups changed morphologically, while the remaining D, E, F and H groups did not change remarkably after the flare activity. (iii) 75 per cent of all flaring sunspot groups are large and complex. (iv) There is a significant increase in the sunspot group area in classes A, B, C, D and H after flaring activity. In contrast, the sunspot group area of classes E and F decreased. The sunspot counts of classes D, E and F decreased as well, while classes A, B, C and H showed an increase.
Distinct evolutionary strategies of human leucocyte antigen loci in pathogen-rich environments
Sanchez-Mazas, Alicia; Lemaître, Jean-François; Currat, Mathias
2012-01-01
Human leucocyte antigen (HLA) loci have a complex evolution where both stochastic (e.g. genetic drift) and deterministic (natural selection) forces are involved. Owing to their extraordinary level of polymorphism, HLA genes are useful markers for reconstructing human settlement history. However, HLA variation often deviates significantly from neutral expectations towards an excess of genetic diversity. Because HLA molecules play a crucial role in immunity, this observation is generally explained by pathogen-driven-balancing selection (PDBS). In this study, we investigate the PDBS model by analysing HLA allelic diversity on a large database of 535 populations in relation to pathogen richness. Our results confirm that geographical distances are excellent predictors of HLA genetic differentiation worldwide. We also find a significant positive correlation between genetic diversity and pathogen richness at two HLA class I loci (HLA-A and -B), as predicted by PDBS, and a significant negative correlation at one HLA class II locus (HLA-DQB1). Although these effects are weak, as shown by a loss of significance when populations submitted to rapid genetic drift are removed from the analysis, the inverse relationship between genetic diversity and pathogen richness at different loci indicates that HLA genes have adopted distinct evolutionary strategies to provide immune protection in pathogen-rich environments. PMID:22312050
Soils: man-caused radioactivity and radiation forecast
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gablin, Vassily
2007-07-01
Available in abstract form only. Full text of publication follows: One of the main tasks of radiation safety assurance is ensuring that critical radiation levels are not exceeded. In Russia these are man-caused radiation levels. Meanwhile, any radiation measurement represents total radioactivity, which is why it is hard to assess the natural and man-caused contributions to total radioactivity. It is shown that soil radioactivity depends on natural factors, including the radioactivity of rocks and cosmic radiation, as well as man-caused factors, including nuclear and non-nuclear technologies. The totality of these factors includes unpredictable (non-deterministic) factors (nuclear explosions and radiation accidents) and predictable (deterministic) ones (all the rest). Deterministic factors represent background radioactivity, whose trend is the basis of the radiation forecast. Non-deterministic factors represent the man-caused radiation contribution, which is to be controlled. This contribution is equal to the difference between measured radioactivity and the radiation background. A way of calculating background radioactivity is proposed. Contemporary soils are complicated, technologically influenced systems with multi-level spatial and temporal inhomogeneity of radionuclide distribution. Generally, an analysis area can be characterized by a set of factors of soil radioactivity, including natural and man-caused factors. Natural factors are cosmic radiation and the radioactivity of rocks. Man-caused factors are shown in Fig. 1. It is obvious that man-caused radioactivity is due to both artificial and natural emitters. Any result of a radiation measurement represents total radioactivity, i.e., the sum of activities resulting from natural and man-caused emitters. There is no gauge which could separately measure natural and man-caused radioactivity. That is why it is so hard to assess the natural and man-caused contributions to soil radioactivity. It would have been possible if human activity had led to contamination of soil only by artificial radionuclides. But we can view the totality of soil radioactivity factors in the following way. (author)
Optimal Vaccination in a Stochastic Epidemic Model of Two Non-Interacting Populations
2015-02-17
of diminishing returns from vaccination will generally take place at smaller vaccine allocations V compared to the deterministic model. Optimal...take place and small r0 values where it does not is illustrated in Fig. 4C. As r0 is decreased, the region between the two instances of switching...approximately distribute vaccine in proportion to population size. For large r0 (r0 ≳ 2.9), two switches take place. In the deterministic optimal solution, a
Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian
2015-10-23
The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP)-complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and exploit their inherent parallelism to reduce the complexity of the computation.
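As a deterministic point of comparison for the DNA procedure, the UAP variant in which every job must be covered is commonly reduced to a balanced assignment problem by replicating each individual, then solved with the Hungarian algorithm. A sketch using SciPy (cost data hypothetical):

```python
# Deterministic baseline for the UAP: replicate each individual so every job
# can be covered, then run the Hungarian algorithm on the padded matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4.0, 1.0, 3.0, 2.0, 6.0],    # m = 2 individuals
                 [2.0, 0.0, 5.0, 3.0, 1.0]])   # n = 5 jobs
m, n = cost.shape
reps = -(-n // m)                              # ceil(n/m) copies per individual

big = np.tile(cost, (reps, 1))                 # (reps*m) x n cost matrix
rows, cols = linear_sum_assignment(big)        # covers all n jobs
assign = sorted((r % m, c) for r, c in zip(rows, cols))
print("(individual, job):", assign, "| total cost:", big[rows, cols].sum())
```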
Mathematical Modeling of the Origins of Life
NASA Technical Reports Server (NTRS)
Pohorille, Andrew
2006-01-01
The emergence of early metabolism, a network of catalyzed chemical reactions that supported self-maintenance, growth, reproduction, and evolution of the ancestors of contemporary cells (protocells), was a critical, but still very poorly understood, step on the path from inanimate to animate matter. Here, it is proposed and tested through mathematical modeling of biochemically plausible systems that the emergence of metabolism and its initial evolution towards higher complexity preceded the emergence of a genome. Even though the formation of protocellular metabolism was driven by non-genomic, highly stochastic processes, the outcome was largely deterministic, strongly constrained by the laws of chemistry. It is shown that such concepts as speciation and fitness to the environment, developed in the context of genomic evolution, also hold in the absence of a genome.
Leao, Richardson N; Leao, Fabricio N; Walmsley, Bruce
2005-01-01
A change in the spontaneous release of neurotransmitter is a useful indicator of processes occurring within presynaptic terminals. Linear techniques (e.g. Fourier transform) have been used to analyse spontaneous synaptic events in previous studies, but such methods are inappropriate if the timing pattern is complex. We have investigated spontaneous glycinergic miniature synaptic currents (mIPSCs) in principal cells of the medial nucleus of the trapezoid body. The random versus deterministic (or periodic) nature of mIPSCs was assessed using recurrence quantification analysis. Nonlinear methods were then used to quantify any detected determinism in spontaneous release, and to test for chaotic or fractal patterns. Modelling demonstrated that this procedure is much more sensitive in detecting periodicities than conventional techniques. mIPSCs were found to exhibit periodicities that were abolished by blockade of internal calcium stores with ryanodine, suggesting calcium oscillations in the presynaptic inhibitory terminals. Analysis indicated that mIPSC occurrences were chaotic in nature. Furthermore, periodicities were less evident in congenitally deaf mice than in normal mice, indicating that appropriate neural activity during development is necessary for the expression of deterministic chaos in mIPSC patterns. We suggest that chaotic oscillations of mIPSC occurrences play a physiological role in signal processing in the auditory brainstem. PMID:16271982
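Recurrence quantification starts from a thresholded distance matrix of the series; the recurrence rate is then a simple count, and determinism measures how many recurrent points line up along diagonals. A bare-bones sketch on a synthetic signal, using a deliberately simplified determinism proxy (signal, threshold, and embedding choices are hypothetical):

```python
# Bare-bones recurrence quantification: recurrence rate plus a simplified
# "determinism" proxy (fraction of recurrent points whose diagonal
# predecessor is also recurrent).
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(400)
x = np.sin(0.2 * t) + 0.3 * rng.standard_normal(t.size)   # periodic + noise

R = np.abs(x[:, None] - x[None, :]) < 0.25                # recurrence matrix
rec_rate = R.mean()

diag = R[1:, 1:] & R[:-1, :-1]        # recurrent point with recurrent (i-1, j-1)
determinism = diag.sum() / max(R[1:, 1:].sum(), 1)
print(f"recurrence rate = {rec_rate:.3f}, determinism proxy = {determinism:.3f}")
```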
Deterministic Integration of Biological and Soft Materials onto 3D Microscale Cellular Frameworks
McCracken, Joselle M.; Xu, Sheng; Badea, Adina; Jang, Kyung-In; Yan, Zheng; Wetzel, David J.; Nan, Kewang; Lin, Qing; Han, Mengdi; Anderson, Mikayla A.; Lee, Jung Woo; Wei, Zijun; Pharr, Matt; Wang, Renhan; Su, Jessica; Rubakhin, Stanislav S.; Sweedler, Jonathan V.
2018-01-01
Complex 3D organizations of materials represent ubiquitous structural motifs found in the most sophisticated forms of matter, the most notable of which are in life-sustaining hierarchical structures found in biology, but where simpler examples also exist as dense multilayered constructs in high-performance electronics. Each class of system evinces specific enabling forms of assembly to establish their functional organization at length scales not dissimilar to tissue-level constructs. This study describes materials and means of assembly that extend and join these disparate systems—schemes for the functional integration of soft and biological materials with synthetic 3D microscale, open frameworks that can leverage the most advanced forms of multilayer electronic technologies, including device-grade semiconductors such as monocrystalline silicon. Cellular migration behaviors, temporal dependencies of their growth, and contact guidance cues provided by the nonplanarity of these frameworks illustrate design criteria useful for their functional integration with living matter (e.g., NIH 3T3 fibroblast and primary rat dorsal root ganglion cell cultures). PMID:29552634
A generic motif discovery algorithm for sequential data.
Jensen, Kyle L; Styczynski, Mark P; Rigoutsos, Isidore; Stephanopoulos, Gregory N
2006-01-01
Motif discovery in sequential data is a problem of great interest and with many applications. However, previous methods have been unable to combine exhaustive search with complex motif representations and are each typically only applicable to a certain class of problems. Here we present a generic motif discovery algorithm (Gemoda) for sequential data. Gemoda can be applied to any dataset with a sequential character, including both categorical and real-valued data. As we show, Gemoda deterministically discovers motifs that are maximal in composition and length. The algorithm also allows any choice of similarity metric for finding motifs. Finally, Gemoda's output motifs are representation-agnostic: they can be represented using regular expressions, position weight matrices, or any number of other models for any type of sequential data. We demonstrate a number of applications of the algorithm, including the discovery of motifs in amino acid sequences, a new solution to the (l,d)-motif problem in DNA sequences, and the discovery of conserved protein substructures. Gemoda is freely available at http://web.mit.edu/bamel/gemoda
Extinction in neutrally stable stochastic Lotka-Volterra models
NASA Astrophysics Data System (ADS)
Dobrinevski, Alexander; Frey, Erwin
2012-05-01
Populations of competing biological species exhibit a fascinating interplay between the nonlinear dynamics of evolutionary selection forces and random fluctuations arising from the stochastic nature of the interactions. The processes leading to extinction of species, whose understanding is a key component in the study of evolution and biodiversity, are influenced by both of these factors. Here, we investigate a class of stochastic population dynamics models based on generalized Lotka-Volterra systems. In the case of neutral stability of the underlying deterministic model, the impact of intrinsic noise on the survival of species is dramatic: It destroys coexistence of interacting species on a time scale proportional to the population size. We introduce a new method based on stochastic averaging which allows one to understand this extinction process quantitatively by reduction to a lower-dimensional effective dynamics. This is performed analytically for two highly symmetrical models and can be generalized numerically to more complex situations. The extinction probability distributions and other quantities of interest we obtain show excellent agreement with simulations.
Extinction in neutrally stable stochastic Lotka-Volterra models.
Dobrinevski, Alexander; Frey, Erwin
2012-05-01
Populations of competing biological species exhibit a fascinating interplay between the nonlinear dynamics of evolutionary selection forces and random fluctuations arising from the stochastic nature of the interactions. The processes leading to extinction of species, whose understanding is a key component in the study of evolution and biodiversity, are influenced by both of these factors. Here, we investigate a class of stochastic population dynamics models based on generalized Lotka-Volterra systems. In the case of neutral stability of the underlying deterministic model, the impact of intrinsic noise on the survival of species is dramatic: It destroys coexistence of interacting species on a time scale proportional to the population size. We introduce a new method based on stochastic averaging which allows one to understand this extinction process quantitatively by reduction to a lower-dimensional effective dynamics. This is performed analytically for two highly symmetrical models and can be generalized numerically to more complex situations. The extinction probability distributions and other quantities of interest we obtain show excellent agreement with simulations.
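The headline scaling, loss of coexistence on a time proportional to population size, can be probed with a minimal neutral-drift simulation. The Moran-type toy below stands in for the neutrally stable Lotka-Volterra dynamics (it is not one of the models analyzed in the paper):

```python
# Neutral drift to extinction: a Moran-type toy in which two equally fit
# species coexist deterministically, but demographic noise fixes one of them
# on a time scale proportional to the total population size N.
import numpy as np

rng = np.random.default_rng(6)

def extinction_time(N):
    a, events = N // 2, 0                 # start from 50/50 coexistence
    while 0 < a < N:
        events += 1                       # one Moran step = 1/N generation
        if rng.random() < 2 * a * (N - a) / N**2:   # composition changes
            a += 1 if rng.random() < 0.5 else -1
    return events / N                     # time in generations

for N in (50, 100, 200):
    times = [extinction_time(N) for _ in range(20)]
    print(N, f"mean time to extinction ~ {np.mean(times):.0f} generations")
```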
Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise
NASA Astrophysics Data System (ADS)
Meyers, Stephen R.; Hinnov, Linda A.
2010-08-01
Deterministic orbital controls on climate variability are commonly inferred to dominate across timescales of 104-106 years, although some studies have suggested that stochastic processes may be of equal or greater importance. Here we explicitly quantify changes in deterministic orbital processes (forcing and/or pacing) versus stochastic climate processes during the Plio-Pleistocene, via time-frequency analysis of two prominent foraminifera oxygen isotopic stacks. Our results indicate that development of the Northern Hemisphere ice sheet is paralleled by an overall amplification of both deterministic and stochastic climate energy, but their relative dominance is variable. The progression from a more stochastic early Pliocene to a strongly deterministic late Pleistocene is primarily accommodated during two transitory phases of Northern Hemisphere ice sheet growth. This long-term trend is punctuated by “stochastic events,” which we interpret as evidence for abrupt reorganization of the climate system at the initiation and termination of the mid-Pleistocene transition and at the onset of Northern Hemisphere glaciation. In addition to highlighting a complex interplay between deterministic and stochastic climate change during the Plio-Pleistocene, our results support an early onset for Northern Hemisphere glaciation (between 3.5 and 3.7 Ma) and reveal some new characteristics of the orbital signal response, such as the puzzling emergence of 100 ka and 400 ka cyclic climate variability during theoretical eccentricity nodes.
Expendable launch vehicle studies
NASA Technical Reports Server (NTRS)
Bainum, Peter M.; Reiss, Robert
1995-01-01
Analytical support studies of expendable launch vehicles concentrate on the stability of the dynamics during launch, especially in or near the region of maximum dynamic pressure. The in-plane dynamic equations of a generic launch vehicle with multiple flexible bending and fuel sloshing modes are developed and linearized. The information from LeRC about the grids, masses, and modes is incorporated into the model. The eigenvalues of the plant are analyzed for several modeling factors: utilizing a diagonal mass matrix, the uniform beam assumption, inclusion of aerodynamics, and the interaction between the aerodynamics and the flexible bending motion. Preliminary PID, LQR, and LQG control designs with sensor and actuator dynamics for this system, together with simulations, are also conducted. The initial analysis comparing PD (proportional-derivative) and full-state-feedback LQR (linear quadratic regulator) control shows that the split-weighted LQR controller has better performance than the PD controller. In order to meet both the performance and robustness requirements, an H-infinity robust controller for the expendable launch vehicle is developed. The simulation indicates that both the performance and robustness of the H-infinity controller are better than those of the PID and LQG controllers. The modeling and analysis support studies team has continued development of methodology, using eigensensitivity analysis, to solve three classes of discrete eigenvalue equations. In the first class, the matrix elements are non-linear functions of the eigenvector. All non-linear periodic motion can be cast in this form. Here the eigenvector comprises the coefficients of complete basis functions spanning the response space, and the eigenvalue is the frequency. The second class of eigenvalue problems studied is the quadratic eigenvalue problem. Solutions for linear viscously damped structures or viscoelastic structures can be reduced to this form. Particular attention is paid to Maxwell and Kelvin models. The third class consists of linear eigenvalue problems in which the elements of the mass and stiffness matrices are stochastic. Dynamic structural response problems for which the parameters are given by probabilistic distribution functions, rather than deterministic values, can be cast in this form. Solutions for several problems in each class will be presented.
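The second class, the quadratic eigenvalue problem (λ²M + λC + K)x = 0 arising for viscously damped structures, is commonly handled by companion linearization to a generalized eigenproblem A z = λ B z. A small sketch with hypothetical mass, damping, and stiffness matrices:

```python
# Quadratic eigenvalue problem (lam^2 M + lam C + K) x = 0, solved by
# companion linearization A z = lam B z with z = [x, lam*x].
import numpy as np
from scipy.linalg import eig

M = np.eye(2)                                   # mass
C = np.array([[0.4, 0.0], [0.0, 0.2]])          # viscous damping
K = np.array([[2.0, -1.0], [-1.0, 2.0]])        # stiffness

n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
lam, _ = eig(A, B)
print(np.sort_complex(lam))   # complex-conjugate pairs: lightly damped modes
```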
NASA Technical Reports Server (NTRS)
Butler, Roy
2013-01-01
The growth in computer hardware performance, coupled with reduced energy requirements, has led to a rapid expansion of the resources available to software systems, driving them towards greater logical abstraction, flexibility, and complexity. This shift in focus from compacting functionality into a limited field towards developing layered, multi-state architectures in a grand field has both driven and been driven by the history of embedded processor design in the robotic spacecraft industry. The combinatorial growth of interprocess conditions is accompanied by benefits (concurrent development, situational autonomy, and evolution of goals) and drawbacks (late integration, non-deterministic interactions, and multifaceted anomalies) in achieving mission success, as illustrated by the case of the Mars Reconnaissance Orbiter. Approaches to optimizing the benefits while mitigating the drawbacks have taken the form of the formalization of requirements, modular design practices, extensive system simulation, and spacecraft data trend analysis. The growth of hardware capability and software complexity can be expected to continue, with future directions including stackable commodity subsystems, computer-generated algorithms, runtime reconfigurable processors, and greater autonomy.
Deterministic magnetorheological finishing of optical aspheric mirrors
NASA Astrophysics Data System (ADS)
Song, Ci; Dai, Yifan; Peng, Xiaoqiang; Li, Shengyi; Shi, Feng
2009-05-01
A new method, magnetorheological finishing (MRF), is applied to the deterministic finishing of optical aspheric mirrors to overcome disadvantages of conventional polishing, including low finishing efficiency, long iteration times, and unstable convergence. Following an introduction to the basic principle of MRF, the key techniques needed to implement deterministic MRF are discussed. To demonstrate the method, a 200 mm diameter K9 glass concave asphere with a vertex radius of 640 mm was figured on an MRF polishing tool developed in-house. After a single process of about two hours, the surface accuracy was improved from an initial peak-to-valley (PV) of 0.216λ to a final 0.179λ, and the root-mean-square (RMS) error was improved from 0.027λ to 0.017λ (λ = 0.6328 µm). The high-precision, high-efficiency convergence of the aspheric surface error shows that MRF is an advanced optical manufacturing method with a high convergence ratio of surface figure, high surfacing precision, and a stable, controllable finishing process. Therefore, using MRF to finish optical aspheric mirrors deterministically is reliable and stable, and its advantages extend to finishing optical elements of other types, such as plane and spherical mirrors.
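The PV and RMS figures quoted above are simple functionals of the measured surface error map, conventionally expressed in waves at λ = 632.8 nm. A quick sketch on a synthetic error map:

```python
# Peak-to-valley (PV) and RMS of a surface error map, in waves at 632.8 nm.
# The error map below is synthetic.
import numpy as np

rng = np.random.default_rng(7)
lam = 0.6328                                       # HeNe wavelength, micrometres
err_um = 0.02 * rng.standard_normal((128, 128))    # surface error, micrometres

err_waves = err_um / lam
pv = err_waves.max() - err_waves.min()
rms = np.sqrt(np.mean((err_waves - err_waves.mean()) ** 2))
print(f"PV = {pv:.3f} waves, RMS = {rms:.3f} waves")
```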
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
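The paper's central point can be demonstrated with a linear toy model: a calibrated deterministic simulation underestimates the variance of the observations, and resampling the calibration residuals back into the simulated responses restores it. A sketch with synthetic data:

```python
# Deterministic vs stochastic use of a calibrated model: resampling residuals
# into simulated output restores the variance of the observed response.
import numpy as np

rng = np.random.default_rng(8)
x = rng.uniform(0, 10, 1000)                     # e.g., a precipitation index
obs = 2.0 * x + 1.0 + rng.normal(0, 2.0, 1000)   # "observed" runoff

b, a = np.polyfit(x, obs, 1)                     # calibrate a linear model
sim = b * x + a                                  # deterministic simulation
resid = obs - sim

sim_stoch = sim + rng.choice(resid, size=sim.size, replace=True)
print(f"var(obs) = {obs.var():.2f}, var(det sim) = {sim.var():.2f}, "
      f"var(stoch sim) = {sim_stoch.var():.2f}")
```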
On the Hosoya index of a family of deterministic recursive trees
NASA Astrophysics Data System (ADS)
Chen, Xufeng; Zhang, Jingyuan; Sun, Weigang
2017-01-01
In this paper, we calculate the Hosoya index for a family of deterministic recursive trees in which new nodes are connected to existing nodes according to a fixed rule. We then obtain a recursive solution for the Hosoya index based on determinant operations. The computational complexity of our proposed algorithm is O(log_2 n), with n being the network size, which is lower than that of existing numerical methods. Finally, we give a weighted tree shrinking method as a graphical interpretation of the recurrence formula for the Hosoya index.
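For reference, the quantity being computed, the Hosoya index (the number of matchings of a graph), can be obtained for any tree by a standard linear-time dynamic program; the sketch below (Python) shows that baseline, not the determinant-based recursion of the paper:

    import sys

    def hosoya_index(adj, root=0):
        """Number of matchings (Hosoya index) of a tree given as adjacency
        lists.  Standard DP: a[v] = matchings of v's subtree with v unmatched,
        b[v] = matchings where v is matched to one of its children."""
        sys.setrecursionlimit(10000)
        def dfs(v, parent):
            a, b = 1, 0
            for c in adj[v]:
                if c == parent:
                    continue
                ac, bc = dfs(c, v)
                # either c's subtree is arbitrary and v stays unmatched,
                # or v matches c (c must be unmatched inside its own subtree)
                b = b * (ac + bc) + a * ac
                a = a * (ac + bc)
            return a, b
        a, b = dfs(root, -1)
        return a + b

    # Path on 4 vertices: matchings = {}, {01}, {12}, {23}, {01,23} -> 5
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(hosoya_index(adj))  # 5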
[Basic understanding of the HLA system in allogeneic hematopoietic cell transplantation].
Ichinohe, Tatsuo
2015-10-01
Human immune responses are principally characterized by the human leukocyte antigen (HLA) system, a diverse set of cell surface molecules encoded by the major histocompatibility complex gene cluster on the short arm of chromosome 6. Among various members of the HLA family, the best characterized are the classic highly polymorphic class I and class II molecules that are responsible for antigen presentation to T cells and regulation of NK cell functions. In allogeneic hematopoietic cell transplantation, sophisticated approaches to donor-recipient allele-level matching at 3 class I (HLA-A/B/C) and 3 class II (HLA-DRB1/DQB1/DPB1) loci have been proven to lower the risk of immunologic complications such as graft failure and graft-versus-host disease, and possibly to confer effective graft-versus-malignancy effects. Future areas of research include clarifying the role of relatively non-polymorphic non-classical HLA molecules (HLA-E/F/G, HLA-DM/DO) and polymorphic/non-polymorphic class I-related molecules (MICA, MICB, HFE, MR1, CD1, FcRn) in the immune regulation that follows hematopoietic cell transplantation.
ERIC Educational Resources Information Center
DeCarlo, Lawrence T.
2011-01-01
Cognitive diagnostic models (CDMs) attempt to uncover latent skills or attributes that examinees must possess in order to answer test items correctly. The DINA (deterministic input, noisy "and") model is a popular CDM that has been widely used. It is shown here that a logistic version of the model can easily be fit with standard software for…
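The DINA item response function, and the logistic reparameterization that lets it be fit with standard software, can be sketched as follows (Python; the parameter values are illustrative):

    import numpy as np

    def dina_prob(alpha, q, slip, guess):
        """P(correct) under DINA: eta = 1 iff the examinee has every skill
        the item requires.  alpha: (K,) 0/1 skill vector; q: (K,) Q-matrix row."""
        eta = int(np.all(alpha >= q))
        return (1 - slip) ** eta * guess ** (1 - eta)

    def dina_logit_prob(alpha, q, f, a):
        """Logistic reparameterization: logit P = f + a * eta, so the model
        can be fit as a logistic regression with latent classes
        (f = logit(guess), f + a = logit(1 - slip))."""
        eta = int(np.all(alpha >= q))
        return 1.0 / (1.0 + np.exp(-(f + a * eta)))

    alpha = np.array([1, 1, 0])      # examinee masters skills 1 and 2
    q = np.array([1, 1, 0])          # item requires skills 1 and 2
    print(dina_prob(alpha, q, slip=0.1, guess=0.2))   # 0.9
    f = np.log(0.2 / 0.8)
    a = np.log(0.9 / 0.1) - f
    print(dina_logit_prob(alpha, q, f, a))            # 0.9, same model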
Javidi, Soroush; Mandic, Danilo P.; Took, Clive Cheong; Cichocki, Andrzej
2011-01-01
A new class of complex domain blind source extraction algorithms suitable for the extraction of both circular and non-circular complex signals is proposed. This is achieved through sequential extraction based on the degree of kurtosis and in the presence of non-circular measurement noise. The existence and uniqueness analysis of the solution is followed by a study of fast converging variants of the algorithm. The performance is first assessed through simulations on well understood benchmark signals, followed by a case study on real-time artifact removal from EEG signals, verified using both qualitative and quantitative metrics. The results illustrate the power of the proposed approach in real-time blind extraction of general complex-valued sources. PMID:22319461
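The quantity driving the sequential extraction, a kurtosis measure valid for both circular and non-circular complex signals, can be sketched as follows (Python; a commonly used definition of complex kurtosis is assumed, and the test signals are synthetic):

    import numpy as np

    rng = np.random.default_rng(0)

    def complex_kurtosis(x):
        """Kurtosis of a zero-mean complex signal,
        K = E|x|^4 - 2 (E|x|^2)^2 - |E x^2|^2,
        which vanishes for any complex Gaussian, circular or not, so it can
        rank sources for sequential (deflationary) extraction."""
        x = x - x.mean()
        return (np.mean(np.abs(x) ** 4) - 2 * np.mean(np.abs(x) ** 2) ** 2
                - np.abs(np.mean(x ** 2)) ** 2)

    n = 200000
    circ = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    noncirc = rng.normal(size=n) + 1j * 0.1 * rng.normal(size=n)   # non-circular
    binary = rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)  # sub-Gaussian

    for s in (circ, noncirc, binary):
        print(round(complex_kurtosis(s), 3))   # ~0, ~0, negative (-4)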
Sometimes "Newton's Method" Always "Cycles"
ERIC Educational Resources Information Center
Latulippe, Joe; Switkes, Jennifer
2012-01-01
Are there functions for which Newton's method cycles for all non-trivial initial guesses? We construct and solve a differential equation whose solution is a real-valued function that two-cycles under Newton iteration. Higher-order cycles of Newton's method iterates are explored in the complex plane using complex powers of "x." We find a class of…
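The abstract is truncated, but the flavour of the result is easy to reproduce: for f(x) = sqrt(|x|), a classic textbook example rather than necessarily the authors' construction, the Newton iterate is x -> -x, so every non-zero initial guess two-cycles (Python):

    import math

    def newton_step(x):
        # f(x) = sqrt(|x|)  =>  f'(x) = sign(x) / (2 sqrt(|x|))  for x != 0,
        # so x - f(x)/f'(x) = x - 2x = -x: every nonzero guess two-cycles.
        f = math.sqrt(abs(x))
        fp = math.copysign(1.0, x) / (2.0 * math.sqrt(abs(x)))
        return x - f / fp

    x = 1.7
    for _ in range(4):
        x = newton_step(x)
        print(x)        # -1.7, 1.7, -1.7, 1.7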
Duarte, Gabriel M; Braun, Jason D; Giesbrecht, Patrick K; Herbert, David E
2017-12-21
Diiminepyridines are a well-known class of "non-innocent" ligands that confer additional redox activity to coordination complexes beyond metal-centred oxidation/reduction. Here, we demonstrate that metal coordination complexes (MCCs) of diiminepyridine (DIP) ligands with iron are suitable anolytes for redox-flow battery applications, with enhanced capacity and stability compared with bipyridine analogs, and access to storage of up to 1.6 electron equivalents. Substitution of the ligand is shown to be a key factor in the cycling stability and performance of MCCs based on DIP ligands, opening the door to further optimization.
Distribution and regulation of stochasticity and plasticity in Saccharomyces cerevisiae
Dar, R. D.; Karig, D. K.; Cooke, J. F.; ...
2010-09-01
Stochasticity is an inherent feature of complex systems with nanoscale structure. In such systems information is represented by small collections of elements (e.g. a few electrons on a quantum dot), and small variations in the populations of these elements may lead to big uncertainties in the information. Unfortunately, little is known about how to work within this inherently noisy environment to design robust functionality into complex nanoscale systems. Here, we look to the biological cell as an intriguing model system where evolution has mediated the trade-offs between fluctuations and function, and in particular we look at the relationships and trade-offs between stochastic and deterministic responses in the gene expression of budding yeast (Saccharomyces cerevisiae). We find gene regulatory arrangements that control the stochastic and deterministic components of expression, and show that genes that have evolved to respond to stimuli (stress) in the most strongly deterministic way exhibit the most noise in the absence of the stimuli. We show that this relationship is consistent with a bursty 2-state model of gene expression, and demonstrate that this regulatory motif generates the most uncertainty in gene expression when there is the greatest uncertainty in the optimal level of gene expression.
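A bursty 2-state (telegraph) model of the kind invoked here can be simulated with a standard Gillespie algorithm; the following sketch (Python) uses illustrative rates, not values fitted to the yeast data:

    import numpy as np

    rng = np.random.default_rng(1)

    # Two-state ("telegraph") gene: the promoter switches OFF<->ON, mRNA is
    # made only in the ON state and degrades first-order.  Rates illustrative.
    k_on, k_off = 0.05, 0.5     # slow activation, fast inactivation -> bursts
    k_tx, k_deg = 10.0, 1.0

    def gillespie(t_end=200.0):
        t, on, m, trace = 0.0, 0, 0, []
        while t < t_end:
            rates = np.array([k_on * (1 - on), k_off * on, k_tx * on, k_deg * m])
            total = rates.sum()
            t += rng.exponential(1.0 / total)
            r = rng.choice(4, p=rates / total)
            if   r == 0: on = 1
            elif r == 1: on = 0
            elif r == 2: m += 1
            else:        m -= 1
            trace.append((t, m))
        return trace

    counts = np.array([m for _, m in gillespie()])
    # Bursty expression: Fano factor (var/mean) well above 1 (Poisson gives 1).
    print(counts.mean(), counts.var() / counts.mean())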
The dual reading of general conditionals: The influence of abstract versus concrete contexts.
Wang, Moyun; Yao, Xinyun
2018-04-01
A current main issue concerning conditionals is whether the meaning of general conditionals (e.g., If a card is red, then it is round) is deterministic (exceptionless) or probabilistic (exception-tolerating). In order to resolve the issue, two experiments examined the influence of conditional contexts (with vs. without frequency information about truth-table cases) on the reading of general conditionals. Experiment 1 examined the direct reading of general conditionals in a possibility judgment task. Experiment 2 examined the indirect reading of general conditionals in a truth judgment task. Both the direct and the indirect reading of general conditionals exhibited a duality: a predominant deterministic semantic reading of conditionals without frequency information, and a predominant probabilistic pragmatic reading of conditionals with frequency information. The context of a general conditional determined its predominant reading. There were clear individual differences in reading general conditionals with frequency information. The meaning of general conditionals is thus relative, depending on conditional contexts. The reading of general conditionals is flexible and complex, so that neither a simple deterministic account nor a simple probabilistic account can explain it. The present findings go beyond the extant deterministic and probabilistic accounts of conditionals.
NASA Astrophysics Data System (ADS)
Adams, Mike; Smalian, Silva
2017-09-01
For nuclear waste packages, the expected dose rates and nuclide inventory are calculated beforehand. Depending on the packaging of the nuclear waste, deterministic programs such as MicroShield® provide a range of results for each type of packaging. Stochastic programs such as the Monte-Carlo N-Particle Transport Code System (MCNP®), on the other hand, provide reliable results for complex geometries; however, this type of program requires a fully trained operator, and calculations are time consuming. The problem is thus to choose an appropriate program for a specific geometry. We therefore compared the results of deterministic programs such as MicroShield® with those of stochastic programs such as MCNP®. These comparisons enable us to make a statement about the applicability of the various programs for chosen types of containers. As a conclusion, we found that for thin-walled geometries, deterministic programs such as MicroShield® are well suited to calculate the dose rate. For cylindrical containers with inner shielding, however, deterministic programs hit their limits. Furthermore, we investigate the effect of an inhomogeneous material and activity distribution on the results. The calculations are still ongoing; results will be presented in the final abstract.
NASA Astrophysics Data System (ADS)
Reeves, Mark
2014-03-01
Entropy changes underlie the physics that dominates biological interactions. Indeed, introductory biology courses often begin with an exploration of the qualities of water that are important to living systems. However, one idea that is not explicitly addressed in most introductory physics or biology textbooks is the dominant contribution of entropy in driving important biological processes towards equilibrium. From diffusion to cell-membrane formation, to electrostatic binding in protein folding, to the functioning of nerve cells, entropic effects often act to counterbalance deterministic forces such as electrostatic attraction and, in so doing, allow for effective molecular signaling. A small group of biology, biophysics and computer science faculty have worked together for the past five years to develop curricular modules (based on SCALE-UP pedagogy) that enable students to create models of stochastic and deterministic processes. Our students are first-year engineering and science students in the calculus-based physics course, and they are not expected to know biology beyond the high-school level. In our class, they learn to reduce seemingly complex biological processes and structures to tractable models that include deterministic processes and simple probabilistic inference. The students test these models in simulations and in laboratory experiments that are biologically relevant. The students are challenged to bridge the gap between statistical parameterization of their data (mean and standard deviation) and simple model-building by inference. This allows the students to quantitatively describe realistic cellular processes such as diffusion, ionic transport, and ligand-receptor binding. Moreover, the students confront ``random'' forces and traditional forces in problems, simulations, and laboratory exploration throughout the year-long course as they move from traditional kinematics through thermodynamics to electrostatic interactions. This talk will present a number of these exercises, with particular focus on the hands-on experiments done by the students, and will give examples of the tangible material that our students work with throughout the two-semester sequence of their course on introductory physics with a bio focus. Supported by NSF DUE.
Equiangular tight frames and unistochastic matrices
NASA Astrophysics Data System (ADS)
Goyeneche, Dardo; Turek, Ondřej
2017-06-01
We demonstrate that a complex equiangular tight frame composed of N vectors in dimension d, denoted ETF (d, N), exists if and only if a certain bistochastic matrix, univocally determined by N and d, belongs to a special class of unistochastic matrices. This connection allows us to find new complex ETFs in infinitely many dimensions and to derive a method to introduce non-trivial free parameters in ETFs. We present an explicit six-parametric family of complex ETF(6,16), which defines a family of symmetric POVMs. Minimal and maximal possible average entanglement of the vectors within this qubit-qutrit family are described. Furthermore, we propose an efficient numerical procedure to compute the unitary matrix underlying a unistochastic matrix, which we apply to find all existing classes of complex ETFs containing up to 20 vectors.
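The two defining properties of an ETF(d, N), tightness and equiangularity, are easy to check numerically; the sketch below (Python) verifies them for the simplest real example, the Mercedes-Benz frame ETF(2, 3), which is not one of the complex frames constructed in the paper:

    import numpy as np

    # ETF(d, N): N unit vectors in C^d that are (i) a tight frame,
    # sum_k v_k v_k* = (N/d) I, and (ii) equiangular, |<v_j, v_k>| constant.
    # The simplest example is the real "Mercedes-Benz" frame ETF(2, 3).
    ang = 2 * np.pi * np.arange(3) / 3
    V = np.stack([np.cos(ang), np.sin(ang)])        # shape (d=2, N=3)

    gram = V.conj().T @ V
    off = np.abs(gram[~np.eye(3, dtype=bool)])
    print(np.allclose(V @ V.conj().T, 1.5 * np.eye(2)))   # tight: (N/d) I
    print(np.allclose(off, off[0]))                       # equiangular: all 1/2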
Some mechanistic requirements for major transitions.
Schuster, Peter
2016-08-19
Major transitions in nature and human society are accompanied by a substantial change towards higher complexity in the core of the evolving system. New features are established, novel hierarchies emerge, new regulatory mechanisms are required and so on. An obvious way to achieve higher complexity is integration of autonomous elements into new organized systems, whereby the previously independent units give up their autonomy at least in part. In this contribution, we reconsider the more than 40-year-old hypercycle model and analyse it with the tools of stochastic chemical kinetics. An open system is implemented in the form of a flow reactor. The formation of new dynamically organized units through integration of competitors is identified with transcritical bifurcations. In the stochastic model, the fully organized state is quasi-stationary whereas the unorganized state corresponds to a population with natural selection. The stability of the organized state depends strongly on the number of individual subspecies, n, that have to be integrated: systems with two and three classes of individuals, n = 2 and n = 3, readily form quasi-stationary states. The four-membered deterministic dynamical system, n = 4, is stable, but in the stochastic approach self-enhancing fluctuations drive it into extinction. In systems with five and more classes of individuals, n ≥ 5, the state of cooperation is unstable and the solutions of the deterministic ODEs exhibit large amplitude oscillations. In the stochastic system self-enhancing fluctuations lead to extinction, as observed with n = 4. Interestingly, cooperative systems in nature are commonly two-membered, as shown by numerous examples of binary symbiosis. A few cases of symbiosis of three partners, called three-way symbiosis, have been found and were analysed within the past decade. Four-way symbiosis is rather rare but was reported to occur in fungus-growing ants. The model reported here can be used to illustrate the interplay between competition and cooperation, whereby we obtain a hint on the role that resources play in major transitions. Abundance of resources seems to be an indispensable prerequisite of radical innovation that apparently needs substantial investments. Economists often claim that scarcity is driving innovation. Our model sheds some light on this apparent contradiction. In a nutshell, the answer is: scarcity drives optimization and increase in efficiency, but abundance is required for radical novelty and the development of new features. This article is part of the themed issue 'The major synthetic evolutionary transitions'. © 2016 The Author(s).
Enterprise resource planning for hospitals.
van Merode, Godefridus G; Groothuis, Siebren; Hasman, Arie
2004-06-30
Integrated hospitals need a central planning and control system to plan patients' processes and the required capacity. Given the changes in healthcare, one can ask what type of information systems can best support these healthcare delivery organizations. In this review we focus on the potential of enterprise resource planning (ERP) systems for healthcare delivery organizations. First, ERP systems are explained. An overview is then presented of the characteristics of the planning process in hospital environments. Problems with ERP that are due to the special characteristics of healthcare are presented. The situations in which ERP can or cannot be used are discussed. It is suggested to divide hospitals into a part that is concerned only with deterministic processes and a part that is concerned with non-deterministic processes. ERP can be very useful for planning and controlling the deterministic processes.
Evidence of Deterministic Components in the Apparent Randomness of GRBs: Clues of a Chaotic Dynamic
Greco, G.; Rosa, R.; Beskin, G.; Karpov, S.; Romano, L.; Guarnieri, A.; Bartolini, C.; Bedogni, R.
2011-01-01
Prompt γ-ray emissions from gamma-ray bursts (GRBs) exhibit a vast range of extremely complex temporal structures with typical variability time-scales that are remarkably short - as fast as milliseconds. This work investigates the apparent randomness of GRB time profiles, making extensive use of nonlinear techniques that combine the advanced spectral method of Singular Spectrum Analysis (SSA) with the classical tools provided by chaos theory. Despite their morphological complexity, we detect evidence of non-stochastic short-term variability during the overall burst duration - seemingly consistent with chaotic behavior. The phase space portrait of this variability shows the existence of a well-defined strange attractor underlying the erratic prompt emission structures. This scenario can shed new light on the ultra-relativistic processes believed to take place in GRB explosions and usually associated with the birth of a fast-spinning magnetar or accretion of matter onto a newly formed black hole. PMID:22355609
NASA Astrophysics Data System (ADS)
Lam, C. Y.; Ip, W. H.
2012-11-01
A higher degree of reliability in the collaborative network can increase the competitiveness and performance of an entire supply chain. As supply chain networks grow more complex, the consequences of unreliable behaviour become increasingly severe in terms of cost, effort and time. Moreover, it is computationally difficult to calculate the network reliability of a Non-deterministic Polynomial-time hard (NP-hard) all-terminal network using state enumeration, as this may require a huge number of iterations for topology optimisation. Therefore, this paper proposes an alternative approach based on an improved spanning tree for reliability analysis, to help effectively evaluate and analyse the reliability of collaborative networks in supply chains and to reduce the computational complexity of the algorithms involved. Set theory is employed to evaluate and model the all-terminal reliability of the improved spanning tree algorithm, and a case study of a supply chain used in lamp production illustrates the application of the proposed approach.
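The paper's improved spanning-tree algorithm is not reproduced here, but the problem it accelerates can be stated concretely: all-terminal reliability is the probability that the network stays connected when edges fail independently. A standard Monte Carlo baseline (Python with networkx, illustrative topology) looks like this:

    import random
    import networkx as nx

    def all_terminal_reliability_mc(G, p_edge, n_trials=20000, seed=0):
        """Monte Carlo estimate of the probability that the surviving
        subgraph is connected when every edge independently works with
        probability p_edge.  (Baseline only; exact evaluation is NP-hard,
        hence the paper's improved spanning-tree approach.)"""
        rng = random.Random(seed)
        ok = 0
        for _ in range(n_trials):
            H = nx.Graph()
            H.add_nodes_from(G.nodes)
            H.add_edges_from(e for e in G.edges if rng.random() < p_edge)
            ok += nx.is_connected(H)
        return ok / n_trials

    G = nx.petersen_graph()            # stand-in for a supply-chain topology
    print(all_terminal_reliability_mc(G, p_edge=0.9))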
Bounds on the sample complexity for private learning and private data release
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasiviswanathan, Shiva; Beimel, Amos; Nissim, Kobbi
2009-01-01
Learning is a task that generalizes many of the analyses that are applied to collections of data, and in particular, collections of sensitive individual information. Hence, it is natural to ask what can be learned while preserving individual privacy. [Kasiviswanathan, Lee, Nissim, Raskhodnikova, and Smith; FOCS 2008] initiated such a discussion. They formalized the notion of private learning, as a combination of PAC learning and differential privacy, and investigated what concept classes can be learned privately. Somewhat surprisingly, they showed that, ignoring time complexity, every PAC learning task could be performed privately with polynomially many samples, and in many natural cases this could even be done in polynomial time. While these results seem to equate non-private and private learning, there is still a significant gap: the sample complexity of (non-private) PAC learning is crisply characterized in terms of the VC-dimension of the concept class, whereas this relationship is lost in the constructions of private learners, which exhibit, generally, a higher sample complexity. Looking into this gap, we examine several private learning tasks and give tight bounds on their sample complexity. In particular, we show strong separations between sample complexities of proper and improper private learners (such separation does not exist for non-private learners), and between sample complexities of efficient and inefficient proper private learners. Our results show that VC-dimension is not the right measure for characterizing the sample complexity of proper private learning. We also examine the task of private data release (as initiated by [Blum, Ligett, and Roth; STOC 2008]), and give new lower bounds on the sample complexity. Our results show that the logarithmic dependence on the size of the instance space is essential for private data release.
Waliszewski, P; Molski, M; Konarski, J
1998-06-01
A keystone of the molecular reductionist approach to cellular biology is a specific deductive strategy relating genotype to phenotype - two distinct categories. This relationship is based on the assumption that the intermediary cellular network of actively transcribed genes and their regulatory elements is deterministic (i.e., a link between expression of a gene and a phenotypic trait can always be identified, and evolution of the network in time is predetermined). However, experimental data suggest that the relationship between genotype and phenotype is nonbijective (i.e., a gene can contribute to the emergence of more than just one phenotypic trait, or a phenotypic trait can be determined by expression of several genes). This implies nonlinearity (i.e., lack of a proportional relationship between input and outcome), complexity (i.e., emergence of a hierarchical network of multiple cross-interacting elements that is sensitive to initial conditions, possesses multiple equilibria, organizes spontaneously into different morphological patterns, and is controlled in a dispersed rather than centralized manner), and quasi-determinism (i.e., coexistence of deterministic and nondeterministic events) of the network. Nonlinearity within the space of cellular molecular events underlies the existence of a fractal structure within a number of metabolic processes, and patterns of tissue growth, which is measured experimentally as a fractal dimension. Because of its complexity, the same phenotype can be associated with a number of alternative sequences of cellular events. Moreover, the primary cause initiating phenotypic evolution of cells, such as malignant transformation, can be favored probabilistically, but not identified unequivocally. Thermodynamic fluctuations of energy, rather than gene mutations, alter both the molecular and informational structure of the network. Then, the interplay between deterministic chaos, complexity, self-organization, and natural selection drives the formation of the malignant phenotype. This concept offers a novel perspective for investigation of tumorigenesis without invalidating current molecular findings. The essay integrates the ideas of the sciences of complexity in a biological context.
Spectrum of classes of point emitters of electromagnetic wave fields.
Castañeda, Román
2016-09-01
The spectrum of classes of point emitters has been introduced as a numerical tool suitable for the design, analysis, and synthesis of non-paraxial optical fields in arbitrary states of spatial coherence. In this paper, the polarization state of planar electromagnetic wave fields is included in the spectrum of classes, thus increasing its modeling capabilities. In this context, optical processing is realized as a filtering on the spectrum of classes of point emitters, performed by the complex degree of spatial coherence and the two-point correlation of polarization, which could be implemented dynamically by using programmable optical devices.
Fatal and nonfatal risk associated with recycle of D&D-generated concrete
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boren, J.K.; Ayers, K.W.; Parker, F.L.
1997-02-01
As decontamination and decommissioning activities proceed within the U.S. Department of Energy Complex, vast volumes of uncontaminated and contaminated concrete will be generated. The current practice of decontaminating and landfilling the concrete is an expensive and potentially wasteful practice. Research is being conducted at Vanderbilt University to assess the economic, social, legal, and political ramifications of alternate methods of dealing with waste concrete. An important aspect of this research work is the assessment of risk associated with the various alternatives. A deterministic risk assessment model has been developed which quantifies radiological as well as non-radiological risks associated with concrete disposal and recycle activities. The risk model accounts for fatal as well as non-fatal risks to both workers and the public. Preliminary results indicate that recycling of concrete presents potentially lower risks than the current practice. Radiological considerations are shown to be of minor importance in comparison to other sources of risk, with conventional transportation fatalities and injuries dominating. Onsite activities can also be a major contributor to non-fatal risk.
Ambient Sound-Based Collaborative Localization of Indeterministic Devices
Kamminga, Jacob; Le, Duc; Havinga, Paul
2016-01-01
Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and suffer from uncertain input latencies. This uncertainty makes accurate localization challenging. Therefore, we present a collaborative localization algorithm, Cooperative Localization on Android with ambient Sound Sources (CLASS), that simultaneously localizes the position of indeterministic devices and ambient sound sources without local infrastructure. The CLASS algorithm deals with the uncertainty by splitting the devices into subsets so that outliers can be removed from the time difference of arrival values and localization results. Since Android is indeterministic, we select Android devices to evaluate our approach. The algorithm is evaluated with an outdoor experiment and achieves a mean Root Mean Square Error (RMSE) of 2.18 m with a standard deviation of 0.22 m. Estimated directions towards the sound sources have a mean RMSE of 17.5° and a standard deviation of 2.3°. These results show that it is feasible to simultaneously achieve a relative positioning of both devices and sound sources with sufficient accuracy, even when using non-deterministic devices and platforms, such as Android. PMID:27649176
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than the naive pairwise comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
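For contrast with the backward-depth approach, the classical partition-refinement scheme that it improves upon can be sketched in a few lines (Python; this is Moore-style O(n²) refinement, not the paper's algorithm):

    def minimize_dfa(states, alphabet, delta, accepting):
        """Classical Moore partition refinement (O(n^2) worst case), shown
        for contrast with the paper's backward-depth + hash approach."""
        partition = [set(accepting), set(states) - set(accepting)]
        partition = [b for b in partition if b]
        changed = True
        while changed:
            changed = False
            block_of = {s: i for i, b in enumerate(partition) for s in b}
            new_partition = []
            for block in partition:
                groups = {}
                for s in block:
                    # states stay equivalent iff every symbol sends them
                    # into the same current block
                    sig = tuple(block_of[delta[s][a]] for a in alphabet)
                    groups.setdefault(sig, set()).add(s)
                new_partition.extend(groups.values())
                if len(groups) > 1:
                    changed = True
            partition = new_partition
        return partition

    # 3-state DFA over {0,1} where states 'B' and 'C' are equivalent
    delta = {'A': {'0': 'B', '1': 'C'},
             'B': {'0': 'B', '1': 'C'},
             'C': {'0': 'B', '1': 'C'}}
    print(minimize_dfa(['A', 'B', 'C'], ['0', '1'], delta, accepting=['B', 'C']))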
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. Differently, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications.
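As a rough illustration of the idea, not the authors' exact construction rules, a toy ripple-spreading generator might look like the following (Python): ripples expand at constant speed from stimulated nodes, lose energy with distance, and deterministically wire each newly reached node to the ripple's source.

    import heapq
    import numpy as np

    def ripple_network(points, sources, speed=1.0, init_energy=1.0, decay=2.0):
        """Toy reading of ripple spreading (illustrative only): a ripple
        expands from each stimulated node; when it reaches an unreached node
        with positive residual energy, a link is created and the node emits
        its own (weaker) ripple.  Given fixed parameters the resulting
        topology is unique, i.e., deterministic."""
        pts = np.asarray(points)
        reached, edges, heap = set(sources), [], []
        for s in sources:
            for j in range(len(pts)):
                if j != s:
                    d = np.linalg.norm(pts[s] - pts[j])
                    heapq.heappush(heap, (d / speed, s, j, init_energy - decay * d))
        while heap:
            t, i, j, energy = heapq.heappop(heap)
            if j in reached or energy <= 0:
                continue
            reached.add(j)
            edges.append((i, j))
            for k in range(len(pts)):
                if k not in reached:
                    d = np.linalg.norm(pts[j] - pts[k])
                    heapq.heappush(heap, (t + d / speed, j, k, energy - decay * d))
        return edges

    rng = np.random.default_rng(42)        # randomness only in the parameters
    pts = rng.random((30, 2))
    print(ripple_network(pts, sources=[0]))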
Realistic Simulation for Body Area and Body-To-Body Networks
Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D’Errico, Raffaele
2016-01-01
In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices. PMID:27104537
Landis, Eric D.; Purcell, Maureen K.; Thorgaard, Gary H.; Wheeler , Paul A.; Hansen, John D.
2008-01-01
Major histocompatibility complex (MHC) molecules are important mediators of cell-mediated immunity in vertebrates. MHC class IA molecules are important for host anti-viral immunity as they present intracellular antigens and regulate natural killer cell (NK) activity. MHC class Ib molecules on the other hand are less understood and have demonstrated diverse immune and non-immune functions in mammals. Rainbow trout possess a single classical MHC IA locus (Onmy-UBA) that is believed to function similar to that of mammalian MHC class Ia. Numerous MHC class Ib genes with undetermined functions have also been described in trout. Here we utilize quantitative reverse transcriptase PCR (qRT-PCR) techniques to survey the levels of basal and inducible transcription for selected trout MHC class Ib genes, sIgM and sentinels of IFN induction in response to viral infection. Basal transcription of all the class Ib genes examined in this study was lower than Onmy-UBA in naïve fish. UBA, along with all of the non-classical genes were induced in fish infected with virus but not in control fish. Our results support a non-classical designation for the majority of the class IB genes surveyed in this study based upon expression levels while also indicating that they may play an important role in anti-viral immunity in trout.
NASA Astrophysics Data System (ADS)
Vrecica, Teodor; Toledo, Yaron
2015-04-01
One-dimensional deterministic and stochastic evolution equations are derived for dispersive nonlinear waves while taking dissipation of energy into account. The deterministic nonlinear evolution equations are formulated using operational calculus by following the approach of Bredmose et al. (2005). Their formulation is extended to include the linear and nonlinear effects of wave dissipation due to friction and breaking. The resulting equation set describes the linear evolution of the velocity potential for each wave harmonic coupled by quadratic nonlinear terms. These terms describe the nonlinear interactions between triads of waves, which represent the leading-order nonlinear effects in the near-shore region. The equations are translated to the amplitudes of the surface elevation by using the approach of Agnon and Sheremet (1997) with the correction of Eldeberky and Madsen (1999). The only current possibility for calculating the surface gravity wave field over large domains is by using stochastic wave evolution models. Hence, the above deterministic model is formulated as a stochastic one using the method of Agnon and Sheremet (1997) with two types of stochastic closure relations (Benney and Saffman, 1966, and Holloway, 1980). These formulations cannot be applied to the common wave forecasting models without further manipulation, as they include non-local wave shoaling coefficients (i.e., ones that require integration along the wave rays). Therefore, a localization method was applied (see Stiassnie and Drimer, 2006, and Toledo and Agnon, 2012). This process essentially extracts the local terms that constitute the mean nonlinear energy transfer while discarding the remaining oscillatory terms, which transfer energy back and forth. One of the main findings of this work is the understanding that the approximated non-local coefficients behave in two essentially different manners. In intermediate water depths these coefficients indeed consist of rapidly oscillating terms, but as the water depth becomes shallow they change to an exponential growth (or decay) behavior. Hence, the formerly used localization technique cannot be justified for the shallow water region. A new formulation is devised for the localization in shallow water; it approximates the nonlinear non-local shoaling coefficient in shallow water and matches it to the one for the intermediate water region. This allows the model behavior to be consistent from deep water to intermediate depths and up to the shallow water regime. Various simulations of the model were performed for the cases of intermediate and shallow water; overall, the model was found to give good results in both shallow and intermediate water depths. The essential difference between the shallow and intermediate nonlinear shoaling physics is explained via the dominating class III Bragg resonance phenomenon. By inspecting the resonance conditions and the nature of the dispersion relation, it is shown that, unlike in the intermediate water regime, in shallow water depths the formation of resonant interactions is possible without taking bottom components into account. References: Agnon, Y. & Sheremet, A. 1997 Stochastic nonlinear shoaling of directional spectra. J. Fluid Mech. 345, 79-99. Benney, D. J. & Saffman, P. G. 1966 Nonlinear interactions of random waves. Proc. R. Soc. Lond. A 289, 301-321. Bredmose, H., Agnon, Y., Madsen, P. A. & Schäffer, H. A. 2005 Wave transformation models with exact second-order transfer. European J. of Mech. - B/Fluids 24 (6), 659-682. Eldeberky, Y. & Madsen, P. A. 1999 Deterministic and stochastic evolution equations for fully dispersive and weakly nonlinear waves. Coastal Engineering 38, 1-24. Kaihatu, J. M. & Kirby, J. T. 1995 Nonlinear transformation of waves in finite water depth. Phys. Fluids 8, 175-188. Holloway, G. 1980 Oceanic internal waves are not weak waves. J. Phys. Oceanogr. 10, 906-914. Stiassnie, M. & Drimer, N. 2006 Prediction of long forcing waves for harbor agitation studies. J. of Waterways, Port, Coastal and Ocean Engineering 132(3), 166-171. Toledo, Y. & Agnon, Y. 2012 Stochastic evolution equations with localized nonlinear shoaling coefficients. European J. of Mech. - B/Fluids 34, 13-18.
The "Chaos" Pattern in Piaget's Theory of Cognitive Development.
ERIC Educational Resources Information Center
Lindsay, Jean S.
Piaget's theory of the cognitive development of the child is related to the recently developed non-linear "chaos" model. The term "chaos" refers to the tendency of dynamical, non-linear systems toward irregular, sometimes unpredictable, deterministic behavior. Piaget identified this same pattern in his model of cognitive…
Scaling theory for the quasideterministic limit of continuous bifurcations.
Kessler, David A; Shnerb, Nadav M
2012-05-01
Deterministic rate equations are widely used in the study of stochastic, interacting particle systems. This approach assumes that the inherent noise, associated with the discreteness of the elementary constituents, may be neglected when the number of particles N is large. Accordingly, it fails close to the extinction transition, when the amplitude of stochastic fluctuations is comparable with the size of the population. Here we present a general scaling theory of the transition regime for spatially extended systems. We demonstrate this through a detailed study of two fundamental models for out-of-equilibrium phase transitions: the Susceptible-Infected-Susceptible (SIS) model, which belongs to the directed percolation equivalence class, and the Susceptible-Infected-Recovered (SIR) model, which belongs to the dynamic percolation class. Implementing the Ginzburg criterion, we show that the width of the fluctuation-dominated region scales like N^{-κ}, where N is the number of individuals per site and κ = 2/(d_u - d), with d_u the upper critical dimension (for the directed percolation class, d_u = 4, so κ = 2/3 in one dimension). Other exponents that control the approach to the deterministic limit are shown to be calculable once κ is known. The theory is extended to include the corrections to the front velocity above the transition. It is supported by the results of extensive numerical simulations for systems of various dimensionalities.
Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian
2015-01-01
The unbalanced assignment problem (UAP) is to optimally assign n jobs to m individuals (m < n), such that the minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP) complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range in O(mn) time. We extend the application of DNA molecular operations and exploit their massive parallelism to reduce the complexity of the computation. PMID:26512650
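For comparison, the same unbalanced instances can be solved classically; SciPy's linear_sum_assignment accepts rectangular cost matrices directly (Python, illustrative costs):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Unbalanced instance: m = 3 individuals, n = 5 jobs (m < n); only 3 jobs
    # end up assigned.  Costs are illustrative.
    cost = np.array([[4, 1, 3, 9, 2],
                     [2, 0, 5, 8, 7],
                     [3, 2, 2, 6, 4]])

    rows, cols = linear_sum_assignment(cost)   # handles rectangular matrices
    print(list(zip(rows, cols)), cost[rows, cols].sum())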
Inconsistent Investment and Consumption Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kronborg, Morten Tolver, E-mail: mtk@atp.dk; Steffensen, Mogens, E-mail: mogens@math.ku.dk
In a traditional Black–Scholes market we develop a verification theorem for a general class of investment and consumption problems where the standard dynamic programming principle does not hold. The theorem is an extension of the standard Hamilton–Jacobi–Bellman equation in the form of a system of non-linear differential equations. We derive the optimal investment and consumption strategy for a mean-variance investor without pre-commitment endowed with labor income. In the case of constant risk aversion it turns out that the optimal amount of money to invest in stocks is independent of wealth. The optimal consumption strategy is given as a deterministic bang-bang strategy. In order to have a more realistic model we allow the risk aversion to be time and state dependent. Of special interest is the case where the risk aversion is inversely proportional to present wealth plus the financial value of future labor income net of consumption. Using the verification theorem we give a detailed analysis of this problem. It turns out that the optimal amount of money to invest in stocks is given by a linear function of wealth plus the financial value of future labor income net of consumption. The optimal consumption strategy is again given as a deterministic bang-bang strategy. We also calculate, for a general time and state dependent risk aversion function, the optimal investment and consumption strategy for a mean-standard deviation investor without pre-commitment. In that case, it turns out that it is optimal to take no risk at all.
NASA Astrophysics Data System (ADS)
Starshynov, I.; Paniagua-Diaz, A. M.; Fayard, N.; Goetschy, A.; Pierrat, R.; Carminati, R.; Bertolotti, J.
2018-04-01
The propagation of monochromatic light through a scattering medium produces speckle patterns in reflection and transmission, and the apparent randomness of these patterns prevents direct imaging through thick turbid media. Yet, since elastic multiple scattering is fundamentally a linear and deterministic process, information is not lost but distributed among many degrees of freedom that can be resolved and manipulated. Here, we demonstrate experimentally that the reflected and transmitted speckle patterns are robustly correlated, and we unravel all the complex and unexpected features of this fundamentally non-Gaussian and long-range correlation. In particular, we show that it is preserved even for opaque media with thickness much larger than the scattering mean free path, proving that information survives the multiple scattering process and can be recovered. The existence of correlations between the two sides of a scattering medium opens up new possibilities for the control of transmitted light without any feedback from the target side, but using only information gathered from the reflected speckle.
NASA Astrophysics Data System (ADS)
Van de Put, Maarten L.; Sorée, Bart; Magnus, Wim
2017-12-01
The Wigner-Liouville equation is reformulated using a spectral decomposition of the classical force field instead of the potential energy. This reformulation is shown to simplify the Wigner-Liouville kernel both conceptually and numerically, as the spectral force Wigner-Liouville equation avoids the numerical evaluation of the highly oscillatory Wigner kernel, which is non-local in both position and momentum. The quantum mechanical evolution is instead governed by a term local in space and non-local in momentum, where the non-locality in momentum has only a limited range. An interpretation of the time evolution in terms of two processes is presented: a classical evolution under the influence of the averaged driving field, and a probability-preserving quantum-mechanical generation and annihilation term. Using the inherent stability and reduced complexity, a direct deterministic numerical implementation using Chebyshev and Fourier pseudo-spectral methods is detailed. For the purpose of illustration, we present results for the time evolution of a one-dimensional resonant tunneling diode driven out of equilibrium.
[Radiotherapy and chaos theory: the tit bird and the butterfly...].
Denis, F; Letellier, C
2012-09-01
Although the same simple laws govern cancer outcome (cell division repeated again and again), each tumour has a different outcome, both before and after irradiation therapy. The linear-quadratic radiosensitivity model allows an assessment of tumour sensitivity to radiotherapy. This model has some limitations in clinical practice because it does not take into account the interactions between tumour cells and non-tumoral bystander cells (such as endothelial cells, fibroblasts, immune cells...) that modulate radiosensitivity and tumour growth dynamics. These interactions can lead to non-linear and complex tumour growth that appears to be random but is not, since spontaneously regressing tumours are rare. In this paper we propose to develop a deterministic approach to tumour growth dynamics using chaos theory. Various characteristics of cancer dynamics and tumour radiosensitivity can be explained using mathematical models of competing cell species. Copyright © 2012 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
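The linear-quadratic model referred to above has a compact closed form, S = exp(-(αD + βD²)); the sketch below (Python, with textbook-style illustrative α and β) shows why fractionated delivery and a single large dose give enormously different survival fractions:

    import math

    def lq_survival(dose, alpha=0.3, beta=0.03):
        """Linear-quadratic model: surviving fraction after a single dose D
        (Gy) is S = exp(-(alpha*D + beta*D^2)).  alpha/beta = 10 Gy here, a
        typical textbook value for tumour tissue; illustrative only."""
        return math.exp(-(alpha * dose + beta * dose ** 2))

    # Fractionation: n fractions of d Gy multiply survival, which is why
    # 30 x 2 Gy spares tissue relative to a single 60 Gy dose.
    print(lq_survival(2.0) ** 30)     # ~4e-10
    print(lq_survival(60.0))          # ~2e-55, vastly lower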
The Bilinear Product Model of Hysteresis Phenomena
NASA Astrophysics Data System (ADS)
Kádár, György
1989-01-01
In ferromagnetic materials non-reversible magnetization processes are represented by rather complex hysteresis curves. The phenomenological description of such curves needs the use of multi-valued, yet unambiguous, deterministic functions. The history dependent calculation of consecutive Everett-integrals of the two-variable Preisach-function can account for the main features of hysteresis curves in uniaxial magnetic materials. The traditional Preisach model has recently been modified on the basis of population dynamics considerations, removing the non-real congruency property of the model. The Preisach-function was proposed to be a product of two factors of distinct physical significance: a magnetization dependent function taking into account the overall magnetization state of the body and a bilinear form of a single variable, magnetic field dependent, switching probability function. The most important statement of the bilinear product model is, that the switching process of individual particles is to be separated from the book-keeping procedure of their states. This empirical model of hysteresis can easily be extended to other irreversible physical processes, such as first order phase transitions.
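The classical Preisach skeleton that the bilinear product model modifies can be sketched compactly; the code below (Python) uses a uniformly weighted grid of hysterons and therefore omits the paper's factorized, magnetization-dependent weight:

    import numpy as np

    def preisach_curve(field_history, n=60, h_max=1.0):
        """Classical scalar Preisach model: a grid of rectangular hysterons
        with switch-up threshold alpha and switch-down threshold beta
        (beta <= alpha), all weighted uniformly here.  (The bilinear product
        model of the abstract instead factorizes the Preisach weight; this
        is just the skeleton.)"""
        a = np.linspace(-h_max, h_max, n)
        alpha, beta = np.meshgrid(a, a, indexing="ij")
        valid = beta <= alpha
        state = -np.ones((n, n))                 # all hysterons start "down"
        out = []
        for h in field_history:
            state[(h >= alpha) & valid] = 1.0    # field exceeds up-threshold
            state[(h <= beta) & valid] = -1.0    # field below down-threshold
            out.append(state[valid].sum() / valid.sum())
        return out

    h = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, -1, 100),
                        np.linspace(-1, 1, 100)])  # major loop: branches differ
    m = preisach_curve(h)
    print(m[49], m[149], m[-1])   # saturation, negative saturation, back up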
NASA Astrophysics Data System (ADS)
Delvecchio, S.; Antoni, J.
2012-02-01
This paper addresses the use of a cyclostationary blind source separation algorithm (namely RRCR) to extract angle-deterministic signals from mechanical rotating machines in the presence of stationary speed fluctuations. This means that only phase fluctuations while the machine is running in steady-state conditions are considered; run-up or run-down speed variations are not taken into account. The machine is also supposed to run in idle conditions, so non-stationary phenomena due to the load are not considered. It is theoretically assessed that in such operating conditions the deterministic (periodic) signal in the angle domain becomes cyclostationary at first and second orders in the time domain. This fact justifies the use of the RRCR algorithm, which is able to extract the angle-deterministic signal directly from the time domain without performing any kind of interpolation. This is particularly valuable when angular resampling fails because of uncontrolled speed fluctuations. The capability of the proposed approach is verified by means of simulated and actual vibration signals captured on a pneumatic screwdriver handle. In this particular case not only can the angle-deterministic part be extracted, but the main sources of excitation (i.e. motor shaft imbalance, epicycloidal gear meshing and air pressure forces) affecting the user's hand during operation can also be separated.
Field-free deterministic ultrafast creation of magnetic skyrmions by spin-orbit torques
NASA Astrophysics Data System (ADS)
Büttner, Felix; Lemesh, Ivan; Schneider, Michael; Pfau, Bastian; Günther, Christian M.; Hessing, Piet; Geilhufe, Jan; Caretta, Lucas; Engel, Dieter; Krüger, Benjamin; Viefhaus, Jens; Eisebitt, Stefan; Beach, Geoffrey S. D.
2017-11-01
Magnetic skyrmions are stabilized by a combination of external magnetic fields, stray field energies, higher-order exchange interactions and the Dzyaloshinskii-Moriya interaction (DMI). The last favours homochiral skyrmions, whose motion is driven by spin-orbit torques and is deterministic, which makes systems with a large DMI relevant for applications. Asymmetric multilayers of non-magnetic heavy metals with strong spin-orbit interactions and transition-metal ferromagnetic layers provide a large and tunable DMI. Also, the non-magnetic heavy metal layer can inject a vertical spin current with transverse spin polarization into the ferromagnetic layer via the spin Hall effect. This leads to torques that can be used to switch the magnetization completely in out-of-plane magnetized ferromagnetic elements, but the switching is deterministic only in the presence of a symmetry-breaking in-plane field. Although spin-orbit torques led to domain nucleation in continuous films and to stochastic nucleation of skyrmions in magnetic tracks, no practical means to create individual skyrmions controllably in an integrated device design at a selected position has been reported yet. Here we demonstrate that sub-nanosecond spin-orbit torque pulses can generate single skyrmions at custom-defined positions in a magnetic racetrack deterministically using the same current path as used for the shifting operation. The effect of the DMI implies that no external in-plane magnetic fields are needed for this aim. This implementation exploits a defect, such as a constriction in the magnetic track, that can serve as a skyrmion generator. The concept is applicable to any track geometry, including three-dimensional designs.
Probability and Locality: Determinism Versus Indeterminism in Quantum Mechanics
NASA Astrophysics Data System (ADS)
Dickson, William Michael
1995-01-01
Quantum mechanics is often taken to be necessarily probabilistic. However, this view of quantum mechanics appears to be more the result of historical accident than of careful analysis. Moreover, quantum mechanics in its usual form faces serious problems. Although the mathematical core of quantum mechanics--quantum probability theory--does not face conceptual difficulties, the application of quantum probability to the physical world leads to problems. In particular, quantum mechanics seems incapable of describing our everyday macroscopic experience. Therefore, several authors have proposed new interpretations--including (but not limited to) modal interpretations, spontaneous localization interpretations, the consistent histories approach, and the Bohm theory--each of which deals with quantum-mechanical probabilities differently. Each of these interpretations promises to describe our macroscopic experience and, arguably, each succeeds. Is there any way to compare them? Perhaps, if we turn to another troubling aspect of quantum mechanics, non-locality. Non-locality is troubling because prima facie it threatens the compatibility of quantum mechanics with special relativity. This prima facie threat is mitigated by the no-signalling theorems in quantum mechanics, but nonetheless one may find a 'conflict of spirit' between non-locality in quantum mechanics and special relativity. Do any of these interpretations resolve this conflict of spirit? There is a strong relation between how an interpretation deals with quantum-mechanical probabilities and how it deals with non-locality. The main argument here is that only a completely deterministic interpretation can be completely local. That is, locality together with the empirical predictions of quantum mechanics (specifically, its strict correlations) entails determinism. But even with this entailment in hand, comparison of the various interpretations requires a look at each, to see how non-locality arises, or in the case of deterministic interpretations, whether it arises. The result of this investigation is that, at the least, deterministic interpretations are no worse off with respect to special relativity than indeterministic interpretations. This conclusion runs against a common view that deterministic interpretations, specifically the Bohm theory, have more difficulty with special relativity than other interpretations.
NASA Astrophysics Data System (ADS)
Castaneda-Lopez, Homero
A methodology is presented for detecting and locating defects or discontinuities in the outer coating of underground metal pipelines under cathodic protection. A physical laboratory setup of an underground cathodically protected, coated pipeline was built, and wide-range AC impedance signals at various frequencies were applied to the steel-coated pipeline system to measure its transfer function under several laboratory simulation scenarios. This model included different variables and elements that exist under real conditions, such as soil resistivity, soil chemical composition, defect (holiday) location in the pipeline covering, defect area and geometry, and level of cathodic protection. The AC impedance data obtained under different working conditions were used to fit an electrical transmission line model. This model was then used as a tool to fit the impedance signal for different experimental conditions and to establish trends in the impedance behavior without the necessity of further experimental work. However, due to the chaotic nature of the transfer function response of this system under several conditions, it is believed that non-deterministic models based on pattern recognition algorithms are suitable for field condition analysis. A non-deterministic approach was used for experimental analysis by applying an artificial neural network (ANN) algorithm based on classification analysis, capable of studying the pipeline system and differentiating the variables that can change impedance conditions. These variables include level of cathodic protection, location of discontinuities (holidays), and severity of corrosion. This work demonstrated a proof-of-concept for a well-known technique and a novel algorithm capable of classifying impedance data from experimental results to predict the exact location of active holidays and defects on buried pipelines. Laboratory findings from this procedure are promising, and efforts to develop it for field conditions should continue.
Modeling DNA methylation by analyzing the individual configurations of single molecules
Affinito, Ornella; Scala, Giovanni; Palumbo, Domenico; Florio, Ermanno; Monticelli, Antonella; Miele, Gennaro; Avvedimento, Vittorio Enrico; Usiello, Alessandro; Chiariotti, Lorenzo; Cocozza, Sergio
2016-01-01
DNA methylation is often analyzed by reporting the average methylation degree of each cytosine. In this study, we used a single-molecule methylation analysis in order to look at the methylation configuration of individual molecules. Using D-aspartate oxidase as a model gene, we performed an in-depth methylation analysis through the developmental stages of 3 different mouse tissues (brain, lung, and gut), where this gene undergoes opposite methylation destinies. This approach allowed us to track both methylation and demethylation processes at high resolution. The complexity of these dynamics was markedly simplified by introducing the concept of methylation classes (MCs), defined as the number of methylated cytosines per molecule, irrespective of their position. The MC concept smooths the stochasticity of the system, allowing a more deterministic description. In this framework, we also propose a mathematical model based on the Markov chain. This model aims to identify the transition probability of a molecule from one MC to another during methylation and demethylation processes. The results of our model suggest that: 1) both processes are ruled by a dominant class of phenomena, namely, the gain or loss of one methyl group at a time; and 2) the probability of a single CpG site becoming methylated or demethylated depends on the methylation status of the whole molecule at that time. PMID:27748645
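To make the methylation-class (MC) idea concrete, the following is a minimal sketch of a birth-death Markov chain over MCs, in which a molecule gains or loses one methyl group at a time and the transition probability depends on the methylation status of the whole molecule. The CpG count, rates, and class dependence are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Birth-death Markov chain over methylation classes (MCs): an MC is the number
# of methylated CpGs per molecule (0..n_cpg), irrespective of position.
# Transitions gain or lose one methyl group at a time, as the model suggests.
n_cpg = 8
gain = 0.10   # assumed per-step probability scale for gaining a methyl group
loss = 0.05   # assumed per-step probability scale for losing a methyl group

P = np.zeros((n_cpg + 1, n_cpg + 1))
for k in range(n_cpg + 1):
    if k < n_cpg:
        P[k, k + 1] = gain * (n_cpg - k) / n_cpg  # more free sites -> easier gain
    if k > 0:
        P[k, k - 1] = loss * k / n_cpg            # more methylated sites -> easier loss
    P[k, k] = 1.0 - P[k].sum()                    # stay in the same class

# Evolve an initially unmethylated population of molecules through "steps".
state = np.zeros(n_cpg + 1); state[0] = 1.0
for _ in range(50):
    state = state @ P
print(np.round(state, 3))  # distribution over methylation classes
```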
Sakai, Kenshi; Upadhyaya, Shrinivasa K; Andrade-Sanchez, Pedro; Sviridova, Nina V
2017-03-01
Real-world processes are often combinations of deterministic and stochastic processes. Soil failure observed during farm tillage is one example of this phenomenon. In this paper, we investigated the nonlinear features of soil failure patterns in a farm tillage process. We demonstrate emerging determinism in soil failure patterns from stochastic processes under specific soil conditions. We normalize the deterministic nonlinear prediction with respect to autocorrelation and propose this as a robust way of extracting a nonlinear dynamical system from noise-contaminated motion. Soil is a typical granular material, so the results obtained here are expected to be applicable to granular materials in general. From the global scale to the nano scale, granular materials feature in seismology, geotechnology, soil mechanics, and particle technology, and the results and discussions presented here are applicable across these wide research areas. The proposed method and our findings are useful for applying nonlinear dynamics to the investigation of complex motions generated by granular materials.
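The following is a minimal sketch of the general approach described: nearest-neighbour (deterministic nonlinear) prediction in a delay-embedded space, with forecast skill compared against a linear autocorrelation baseline. The embedding parameters and the noisy logistic-map test signal are illustrative assumptions, not the authors' tillage data.

```python
import numpy as np

# Deterministic nonlinear prediction: forecast each test point from the mean
# future of its k nearest neighbours in a delay-embedded library, then compare
# the correlation skill against the lag-`horizon` autocorrelation baseline.
def nonlinear_prediction_skill(x, dim=3, tau=1, horizon=1, k=4):
    n = len(x) - (dim - 1) * tau - horizon
    emb = np.array([x[i:i + dim * tau:tau] for i in range(n)])
    targets = x[(dim - 1) * tau + horizon:(dim - 1) * tau + horizon + n]
    half = n // 2                       # library = first half, test = second half
    preds = []
    for v in emb[half:]:
        d = np.linalg.norm(emb[:half] - v, axis=1)
        nn = np.argsort(d)[:k]          # k nearest neighbours in the library
        preds.append(targets[:half][nn].mean())
    rho_nl = np.corrcoef(preds, targets[half:])[0, 1]
    rho_lin = np.corrcoef(x[:-horizon], x[horizon:])[0, 1]  # linear baseline
    return rho_nl, rho_lin

# Example: a noisy logistic map should show rho_nl well above rho_lin.
rng = np.random.default_rng(0)
x = np.empty(600); x[0] = 0.4
for t in range(599):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])
x += 0.01 * rng.standard_normal(600)
print(nonlinear_prediction_skill(x))
```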
Schnauber, Peter; Schall, Johannes; Bounouar, Samir; Höhne, Theresa; Park, Suk-In; Ryu, Geun-Hwan; Heindel, Tobias; Burger, Sven; Song, Jin-Dong; Rodt, Sven; Reitzenstein, Stephan
2018-04-11
The development of multinode quantum optical circuits has attracted great attention in recent years. In particular, interfacing quantum-light sources, gates, and detectors on a single chip is highly desirable for the realization of large networks. In this context, fabrication techniques that enable the deterministic integration of preselected quantum-light emitters into nanophotonic elements play a key role when moving forward to circuits containing multiple emitters. Here, we present the deterministic integration of an InAs quantum dot into a 50/50 multimode interference beamsplitter via in situ electron beam lithography. We demonstrate the combined emitter-gate interface functionality by measuring triggered single-photon emission on-chip with g^(2)(0) = 0.13 ± 0.02. Due to its high patterning resolution as well as spectral and spatial control, in situ electron beam lithography allows for integration of preselected quantum emitters into complex photonic systems. Being a scalable single-step approach, it paves the way toward multinode, fully integrated quantum photonic chips.
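As a hedged aside on the quoted figure of merit, the sketch below shows one common way to estimate g^(2)(0) from a pulsed coincidence histogram: the zero-delay peak area divided by the mean side-peak area. The counts and the simplified Poisson error model are synthetic assumptions, not the measurement pipeline used in the paper.

```python
import numpy as np

# Estimating g^(2)(0) for triggered single-photon emission: the area of the
# zero-delay coincidence peak divided by the mean area of the side peaks.
rng = np.random.default_rng(1)
side_peaks = rng.poisson(1000, size=10)     # coincidences at +-1..5 pulse delays
zero_peak = rng.poisson(130)                # suppressed zero-delay coincidences

g2_0 = zero_peak / side_peaks.mean()
# Propagate counting error (an assumed, simplified Poisson model).
err = g2_0 * np.sqrt(1 / zero_peak + side_peaks.std(ddof=1)**2
                     / (len(side_peaks) * side_peaks.mean()**2))
print(f"g2(0) = {g2_0:.2f} +/- {err:.2f}")   # ~0.13, as in the abstract
```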
NASA Astrophysics Data System (ADS)
Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.
2014-12-01
One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, those inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO that was developed at ICES (UT Austin). The approach has advantages with respect to deterministic inversion approaches as it provides not only a single (non-unique) solution but also uncertainty bounds with it. Those uncertainty bounds help to judge, qualitatively and quantitatively, how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only tele-seismically recorded body waves, but future developments may lead us towards joint inversion schemes. After giving an insight into the inversion scheme itself (based on delayed rejection adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate tele-seismic data, add, for example, different levels of noise and/or change the fault plane parameterization, and then apply our inversion scheme in an attempt to recover the (known) kinematic rupture model. We conclude by inverting, as an example, real tele-seismic data of a recent large earthquake and comparing those results with deterministically derived kinematic source models provided by other research groups.
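A minimal sketch of the Bayesian machinery underlying such a scheme is given below: plain random-walk Metropolis sampling of a posterior over source parameters, with a toy forward model. Full DRAM adds delayed rejection and adaptive proposal covariance, and QUESO provides production implementations; the forward model, data, and noise level here are assumptions for illustration.

```python
import numpy as np

# Random-walk Metropolis sampling of a posterior p(theta | d) for a toy
# two-parameter forward model G; the posterior spread plays the role of the
# "uncertainty bounds" that a deterministic inversion cannot provide.
rng = np.random.default_rng(0)

def G(theta):                    # toy forward model: amplitude and frequency
    t = np.linspace(0, 1, 100)
    return theta[0] * np.sin(2 * np.pi * theta[1] * t)

d_obs = G(np.array([1.5, 3.0])) + 0.1 * rng.standard_normal(100)
sigma = 0.1

def log_post(theta):             # Gaussian likelihood, flat prior
    r = d_obs - G(theta)
    return -0.5 * np.sum(r**2) / sigma**2

theta = np.array([1.0, 2.5]); lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + 0.02 * rng.standard_normal(2)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:        # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

chain = np.array(chain)[5000:]                     # discard burn-in
print(chain.mean(0), chain.std(0))                 # posterior mean, uncertainty
```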
Substrate growth dynamics and biomineralization of an Ediacaran encrusting poriferan.
Wood, Rachel; Penny, Amelia
2018-01-10
The ability to encrust in order to secure and maintain growth on a substrate is a key competitive innovation in benthic metazoans. Here we describe the substrate growth dynamics, mode of biomineralization and possible affinity of Namapoikia rietoogensis, a large (up to 1 m), robustly skeletal, and modular Ediacaran metazoan which encrusted the walls of synsedimentary fissures within microbial-metazoan reefs. Namapoikia formed laminar or domal morphologies with an internal structure of open tubules and transverse elements, and had a very plastic, non-deterministic growth form which could encrust both fully lithified surfaces and living microbial substrates, the latter via modified skeletal holdfasts. Namapoikia shows complex growth interactions and substrate competition with contemporary living microbialites and thrombolites, including the production of plate-like dissepiments in response to microbial overgrowth, which served to elevate soft tissue above the microbial surface. Namapoikia could also recover from partial mortality due to microbial fouling. We infer initial skeletal growth to have propagated via the rapid formation of an organic scaffold by a basal pinacoderm prior to calcification. This is likely an ancient mode of biomineralization, with similarities to the living calcified demosponge Vaceletia. Namapoikia also shows inferred skeletal growth banding which, combined with its large size, implies notable individual longevity. In sum, Namapoikia was a large, relatively long-lived Ediacaran clonal skeletal metazoan that propagated via an organic scaffold prior to calcification, enabling rapid, effective and dynamic substrate occupation and competition in cryptic reef settings. The open tubular internal structure, highly flexible, non-deterministic skeletal organization, and inferred style of biomineralization of Namapoikia place its probable affinity within total-group poriferans. © 2018 The Author(s).
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-01-01
Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems, and we make a critical comparison with the previous (above-mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289
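For illustration, here is a heavily simplified scatter-search-style sketch: keep a small reference set of the best solutions found so far and recombine all pairs along their connecting line segments. Real scatter search (and the paper's metaheuristic) adds local searches, reference-set update rules, and diversification; the objective function and all settings below are assumptions.

```python
import numpy as np

# Scatter-search-style recombination for black-box parameter estimation:
# the reference set holds the best candidates, and new trial points are
# generated on (extended) segments between every pair of reference solutions.
rng = np.random.default_rng(0)

def cost(theta):                       # toy calibration objective (Rosenbrock)
    return (1 - theta[0])**2 + 100 * (theta[1] - theta[0]**2)**2

dim, pop, ref_size = 2, 40, 8
P = rng.uniform(-2, 2, (pop, dim))     # diverse initial population
for it in range(200):
    scores = np.array([cost(p) for p in P])
    ref = P[np.argsort(scores)[:ref_size]]          # reference set: best members
    children = []
    for i in range(ref_size):
        for j in range(i + 1, ref_size):            # combine all ref-set pairs
            w = rng.uniform(-0.5, 1.5)              # allow extrapolation
            children.append(ref[i] + w * (ref[j] - ref[i]))
    P = np.vstack([ref, np.array(children)])
best = min(P, key=cost)
print(best, cost(best))                # should approach the optimum at (1, 1)
```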
NASA Astrophysics Data System (ADS)
Fischer, P.; Jardani, A.; Lecoq, N.
2018-02-01
In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flows numerically in a model, and a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and the geometry and equivalent transmissivity of the conduits) that are treated as unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. This method has been successfully tested on three different theoretical and simplified study cases, with hydraulic response data generated from hypothetical karstic models of increasing complexity in network geometry and matrix heterogeneity.
Random attractor of non-autonomous stochastic Boussinesq lattice system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Min, E-mail: zhaomin1223@126.com; Zhou, Shengfan, E-mail: zhoushengfan@yahoo.com
2015-09-15
In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations driven by time-dependent coupled coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of random attractors as the intensity of the noise approaches zero.
Crucial HSP70 co–chaperone complex unlocks metazoan protein disaggregation
Nillegoda, Nadinath B.; Kirstein, Janine; Szlachcic, Anna; Berynskyy, Mykhaylo; Stank, Antonia; Stengel, Florian; Arnsburg, Kristin; Gao, Xuechao; Scior, Annika; Aebersold, Ruedi; Guilbride, D. Lys; Wade, Rebecca C.; Morimoto, Richard I.; Mayer, Matthias P.; Bukau, Bernd
2016-01-01
Protein aggregates are the hallmark of stressed and ageing cells, and characterize several pathophysiological states [1, 2]. Healthy metazoan cells effectively eliminate intracellular protein aggregates [3, 4], indicating that efficient disaggregation and/or degradation mechanisms exist. However, metazoans lack the key heat-shock protein disaggregase HSP100 of non-metazoan HSP70-dependent protein disaggregation systems [5, 6], and the human HSP70 system alone, even with the crucial HSP110 nucleotide exchange factor, has poor disaggregation activity in vitro [4, 7]. This unresolved conundrum is central to protein quality control biology. Here we show that synergistic cooperation between complexed J-protein co-chaperones of classes A and B unleashes highly efficient protein disaggregation activity in human and nematode HSP70 systems. Metazoan mixed-class J-protein complexes are transient, involve complementary charged regions conserved in the J-domains and carboxy-terminal domains of each J-protein class, and are flexible with respect to subunit composition. Complex formation allows J-proteins to initiate transient higher-order chaperone structures involving HSP70 and interacting nucleotide exchange factors. A network of cooperative class A and B J-protein interactions therefore provides the metazoan HSP70 machinery with powerful, flexible, and finely regulatable disaggregase activity and a further level of regulation crucial for cellular protein quality control. PMID:26245380
A class of optimum digital phase locked loops for the DSN advanced receiver
NASA Technical Reports Server (NTRS)
Hurd, W. J.; Kumar, R.
1985-01-01
A class of optimum digital filters for the digital phase locked loop of the deep space network advanced receiver is discussed. The filter minimizes a weighted combination of the variance of the random component of the phase error and the sum square of the deterministic dynamic component of phase error at the output of the numerically controlled oscillator (NCO). By varying the weighting coefficient over a suitable range of values, a wide set of filters is obtained such that, to any specified value of the equivalent loop-noise bandwidth, there corresponds a unique filter in this class. This filter thus has the property of having the best transient response over all possible filters of the same bandwidth and type. The optimum filters are also evaluated in terms of their gain margin for stability and their steady-state error performance.
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.
2004-01-01
We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features and more accurate representation of geologic features as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.
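A minimal sketch of deterministic deconvolution in this spirit is shown below: divide the trace spectrum by the known source-wavelet spectrum, with a water-level stabilizer to keep noise from being amplified where the wavelet spectrum is small. The Ricker-like wavelet, the reflectivity, and the water level are illustrative assumptions, not values from the study.

```python
import numpy as np

# Frequency-domain deterministic deconvolution with a water-level stabilizer:
# divide the trace spectrum by the measured source-wavelet spectrum, flooring
# the denominator where the wavelet has little energy.
def deterministic_deconv(trace, wavelet, water_level=0.01):
    T = np.fft.rfft(trace)
    W = np.fft.rfft(wavelet, len(trace))
    power = np.abs(W)**2
    floor = water_level * power.max()            # stabilize small denominators
    return np.fft.irfft(T * np.conj(W) / np.maximum(power, floor), len(trace))

# Synthetic example: spiky reflectivity convolved with a Ricker-like wavelet.
t = np.arange(-32, 32) / 8.0
wavelet = (1 - 2 * (np.pi * t)**2) * np.exp(-(np.pi * t)**2)
refl = np.zeros(256); refl[[60, 100, 110]] = [1.0, -0.6, 0.4]
trace = np.fft.irfft(np.fft.rfft(refl) * np.fft.rfft(wavelet, 256), 256)
out = deterministic_deconv(trace, wavelet)
print(sorted(np.argsort(np.abs(out))[-3:]))      # spikes recovered at 60, 100, 110
```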
Proceedings of the Expert Systems Workshop Held in Pacific Grove, California on 16-18 April 1986
1986-04-18
[Report documentation page; only OCR fragments survive. Recoverable details: 197 pages, unclassified, with a standard distribution statement. The legible abstract fragments describe key design characteristics of the ABE framework (Table 1-1), noting that its modules are distributed and parallel, that some features were unimplemented and scheduled for phase 2, and that a program for the DF framework consists of a number of independent processing modules coordinated via data structuring techniques and a semi-deterministic scheduler.]
Nonlinear unitary quantum collapse model with self-generated noise
NASA Astrophysics Data System (ADS)
Geszti, Tamás
2018-04-01
Collapse models including some external noise of unknown origin are routinely used to describe phenomena on the quantum-classical border; in particular, quantum measurement. Although containing nonlinear dynamics and thereby exposed to the possibility of superluminal signaling in individual events, such models are widely accepted on the basis of fully reproducing the non-signaling statistical predictions of quantum mechanics. Here we present a deterministic nonlinear model without any external noise, in which randomness—instead of being universally present—emerges in the measurement process, from deterministic irregular dynamics of the detectors. The treatment is based on a minimally nonlinear von Neumann equation for a Stern–Gerlach or Bell-type measuring setup, containing coordinate and momentum operators in a self-adjoint skew-symmetric, split scalar product structure over the configuration space. The microscopic states of the detectors act as a nonlocal set of hidden parameters, controlling individual outcomes. The model is shown to display pumping of weights between setup-defined basis states, with a single winner randomly selected and the rest collapsing to zero. Environmental decoherence has no role in the scenario. Through stochastic modelling, based on Pearle’s ‘gambler’s ruin’ scheme, outcome probabilities are shown to obey Born’s rule under a no-drift or ‘fair-game’ condition. This fully reproduces quantum statistical predictions, implying that the proposed non-linear deterministic model satisfies the non-signaling requirement. Our treatment is still vulnerable to hidden signaling in individual events, which remains to be handled by future research.
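The 'gambler's ruin' mechanism invoked here for Born's rule can be sketched in a few lines: the weight of one outcome performs a zero-drift ('fair-game') random walk between absorbing boundaries at 0 and 1, so by the martingale stopping argument the probability of that outcome winning equals its initial weight. The step size and trial count below are arbitrary assumptions.

```python
import numpy as np

# Pearle-style "gambler's ruin": the weight p of one measurement outcome
# fluctuates with zero drift until it is absorbed at 0 or 1; then
# P(absorb at 1) = p0, which is exactly Born's rule.
rng = np.random.default_rng(0)

def collapse(p0, step=0.02):
    p = p0
    while 0.0 < p < 1.0:
        p += step if rng.random() < 0.5 else -step   # no-drift fluctuation
        p = min(max(p, 0.0), 1.0)                    # absorb at the boundaries
    return p                                         # 1.0 -> this outcome wins

p0 = 0.3                                             # Born weight |<a|psi>|^2
trials = 2000
wins = sum(collapse(p0) == 1.0 for _ in range(trials))
print(wins / trials)                                 # ~0.30, matching Born's rule
```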
NASA Astrophysics Data System (ADS)
Kostrzewa, Daniel; Josiński, Henryk
2016-06-01
The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO version, which was inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for selecting individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for the optimization of multidimensional numerical functions. The optimized functions (Griewank, Rastrigin, and Rosenbrock) are frequently used as benchmarks because of their characteristics.
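For reference, the three named benchmark functions in their standard forms (each with global minimum 0 at the stated points), as commonly used in such evaluations; dimensions are arbitrary:

```python
import numpy as np

# Standard multidimensional benchmark functions named in the abstract.
def griewank(x):          # minimum 0 at x = 0
    i = np.arange(1, len(x) + 1)
    return 1 + np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))

def rastrigin(x):         # minimum 0 at x = 0; highly multimodal
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def rosenbrock(x):        # minimum 0 at x = (1, ..., 1); narrow curved valley
    return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

x0 = np.zeros(5)
print(griewank(x0), rastrigin(x0), rosenbrock(np.ones(5)))  # 0.0 0.0 0.0
```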
How synapses can enhance sensibility of a neural network
NASA Astrophysics Data System (ADS)
Protachevicz, P. R.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Baptista, M. S.; Viana, R. L.; Lameu, E. L.; Macau, E. E. N.; Batista, A. M.
2018-02-01
In this work, we study the dynamic range in a neural network modelled by a cellular automaton. We consider deterministic and non-deterministic rules to simulate electrical and chemical synapses. Chemical synapses have an intrinsic time delay and are susceptible to parameter variations guided by Hebbian learning rules of behaviour. The learning rules are related to neuroplasticity, which describes changes to the neural connections in the brain. Our results show that chemical synapses can abruptly enhance the sensibility of the neural network, a manifestation that can become even more predominant if learning rules of evolution are applied to the chemical synapses.
2013-01-01
Background Classical major histocompatibility complex (MHC) class II molecules play an essential role in presenting peptide antigens to CD4+ T lymphocytes in the acquired immune system. The non-classical class II DM molecule, HLA-DM in the case of humans, possesses a critical function in assisting the classical MHC class II molecules with proper peptide loading, and is highly conserved in tetrapod species. Although the absence of DM-like genes in teleost fish has been suggested based on the results of homology searches, it has remained unclear whether the DM system is truly specific to tetrapods. To obtain a clear answer, we comprehensively searched for class II genes in representative teleost fish genomes and analyzed those genes with respect to the critical functional features required for the DM system. Results We discovered a novel ancient class II group (DE) in teleost fish and classified teleost fish class II genes into three major groups (DA, DB and DE). Based on several criteria, we investigated the classical/non-classical nature of various class II genes and showed that only one of the three groups (DA) exhibits classical-type characteristics. Analyses of predicted class II molecules revealed that the critical tryptophan residue required for a classical class II molecule in the DM system could be found only in some non-classical, but not in classical-type, class II molecules of teleost fish. Conclusions Teleost fish, a major group of vertebrates, do not possess the DM system for classical class II peptide loading, and this sophisticated system evolved specifically in the tetrapod lineage. PMID:24279922
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-08
... notice to solicit comments on the proposed rule change from interested persons. \\1\\ 15 U.S.C. 78s(b)(1... exchanges in the listed options marketplace. The Exchange proposes to adopt a set of fees for simple, non... Public Customer simple, non-complex Maker orders in all multiply-listed index and ETF options classes...
Deterministic photon-emitter coupling in chiral photonic circuits.
Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter
2015-09-01
Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light.
Population density equations for stochastic processes with memory kernels
NASA Astrophysics Data System (ADS)
Lai, Yi Ming; de Kamps, Marc
2017-06-01
We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two different disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. The method uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic processes cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism by a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory describing a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We are able to model jump responses for both models accurately for both excitatory and inhibitory input, under the assumption that all inputs are generated by one renewal process.
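As a small illustration of the kind of model treated, here is a direct simulation (not the population density method itself) of a leaky integrate-and-fire neuron receiving jump inputs from a renewal process with gamma-distributed interspike intervals. All parameter values are assumptions.

```python
import numpy as np

# Leaky integrate-and-fire neuron driven by a non-Markovian renewal input:
# interspike intervals of the input train are gamma-distributed, and each
# input spike produces a finite jump in the membrane potential.
rng = np.random.default_rng(0)

tau_m, v_th, v_reset = 20.0, 1.0, 0.0      # membrane time constant (ms), thresholds
jump = 0.05                                # jump per input spike (dimensionless)
rate, shape = 0.8, 5.0                     # input rate (spikes/ms), gamma shape

# Gamma ISIs with mean 1/rate (shape/scale parameterization).
isi = rng.gamma(shape, 1.0 / (shape * rate), size=5000)
spike_times = np.cumsum(isi)

dt, T = 0.1, spike_times[-1]
v, out_spikes, next_in = 0.0, [], 0
for step in range(int(T / dt)):
    t = step * dt
    v += -v / tau_m * dt                   # leak (Euler step)
    while next_in < len(spike_times) and spike_times[next_in] <= t:
        v += jump                          # jump response to each input spike
        next_in += 1
    if v >= v_th:
        out_spikes.append(t); v = v_reset  # fire and reset
print(len(out_spikes) / T, "output spikes per ms")
```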
Simulation of anaerobic digestion processes using stochastic algorithm.
Palanichamy, Jegathambal; Palani, Sundarambal
2014-01-01
Anaerobic Digestion (AD) processes involve numerous complex biological and chemical reactions occurring simultaneously, so appropriate and efficient models need to be developed for the simulation of anaerobic digestion systems. Although several models have been developed, they mostly suffer from a lack of knowledge of constants, from complexity, and from weak generalization. The basis of the deterministic approach for modelling the physicochemical and biochemical reactions occurring in the AD system is the law of mass action, which gives the simple relationship between reaction rates and species concentrations. The assumptions made in deterministic models do not hold true for reactions involving chemical species at low concentration. The stochastic behaviour of the physicochemical processes can be modeled at the mesoscopic level by applying stochastic algorithms. In this paper, a stochastic algorithm (the Gillespie tau-leap method) developed in MATLAB was applied to predict the concentrations of glucose, acids, and methane at different time intervals; in this way, the performance of the digester system can be monitored and controlled. The processes given by ADM1 (Anaerobic Digestion Model 1) were taken for verification of the model. The proposed model was verified by comparing the results of Gillespie's algorithm with the deterministic solution for the conversion of glucose into methane through degraders. At higher values of the time step τ, the computational time required for reaching the steady state is greater, since the number of chosen reactions is smaller. When the simulation time step is reduced, the results are similar to those of the ODE solver. It was concluded that the stochastic algorithm is a suitable approach for the simulation of complex anaerobic digestion processes; the accuracy of the results depends on the optimal selection of the τ value.
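A minimal tau-leap sketch in the spirit described is given below. The paper's implementation is in MATLAB and follows ADM1; this Python toy instead uses an assumed two-step glucose to acids to methane chain with made-up rate constants.

```python
import numpy as np

# Tau-leaping: within each leap of length tau, the number of firings of each
# reaction is drawn from a Poisson distribution with mean (propensity * tau).
rng = np.random.default_rng(0)

k1, k2 = 0.02, 0.01                 # assumed per-molecule conversion rates
X = np.array([500, 0, 0])           # counts: [glucose, acids, methane]
S = np.array([[-1, 1, 0],           # reaction 1: glucose -> acids
              [0, -1, 1]])          # reaction 2: acids  -> methane
tau, t, T = 0.5, 0.0, 200.0

while t < T:
    a = np.array([k1 * X[0], k2 * X[1]])        # propensities
    fires = rng.poisson(a * tau)                # Poisson firings per leap
    X = np.maximum(X + S.T @ fires, 0)          # update counts, forbid negatives
    t += tau
print(X)                                        # most mass ends up as methane
```

A larger tau takes fewer leaps but strains the constant-propensity assumption within each leap; as tau shrinks, the trajectory approaches the ODE (mass-action) solution, matching the comparison reported in the abstract.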
Alien life: how would we know?
NASA Astrophysics Data System (ADS)
Boden, Margaret A.
2003-04-01
To recognize alien life, we would have to be clear about the defining criteria of "life". Metabolism - in other words, biochemical fine-tuning - is one of these criteria. Three senses of metabolism are distinguished. The weakest allows strong artificial life (A-life): virtual creatures having physical existence in computer electronics, but not bodies, are classed as "alive". The second excludes strong A-life but allows that some non-biochemical A-life robots could be classed as alive. The third, which stresses the body's self-production by energy budgeting and self-equilibrating energy exchanges of some (necessary) complexity, excludes both strong A-life and living non-biochemical robots.
Programming with non-heap memory in the real time specification for Java
NASA Technical Reports Server (NTRS)
Bollella, G.; Canham, T.; Carson, V.; Champlin, V.; Dvorak, D.; Giovannoni, B.; Indictor, M.; Meyer, K.; Reinholtz, A.; Murray, K.
2003-01-01
The Real-Time Specification for Java (RTSJ) provides facilities for deterministic, real-time execution in a language that is otherwise subject to variable latencies in memory allocation and garbage collection.
Immunomodulation of classical and non-classical HLA molecules by ionizing radiation.
Gallegos, Cristina E; Michelin, Severino; Dubner, Diana; Carosella, Edgardo D
2016-05-01
Radiotherapy has been employed for the treatment of oncological patients for nearly a century, and together with surgery and chemotherapy, radiation oncology constitutes one of the three pillars of cancer therapy. Ionizing radiation has complex effects on neoplastic cells and on the tumor microenvironment: beyond its action as a direct cytotoxic agent, tumor irradiation triggers a series of alterations in tumoral cells, which includes the de novo synthesis of particular proteins and the up/down-regulation of cell surface molecules. Additionally, ionizing radiation may induce the release of "danger signals" which may, in turn, lead to cellular and molecular responses by the immune system. This immunomodulatory action of ionizing radiation highlights the importance of the combined use of radiotherapy plus immunotherapy for cancer healing. Major histocompatibility complex antigens (also called Human Leukocyte Antigens, HLA in humans) are among the molecules whose expression is modulated after irradiation. This review summarizes the modulatory properties of ionizing radiation on the expression of HLA class I (classical and non-classical) and class II molecules, with special emphasis on non-classical HLA-I molecules. Copyright © 2016 Elsevier Inc. All rights reserved.
Exact posterior computation in non-conjugate Gaussian location-scale parameters models
NASA Astrophysics Data System (ADS)
Andrade, J. A. A.; Rathie, P. N.
2017-12-01
In Bayesian analysis, the class of conjugate models allows one to obtain exact posterior distributions; however, this class is quite restrictive in the sense that it involves only a few distributions. In fact, most practical applications involve non-conjugate models, and thus approximate methods, such as MCMC algorithms, are required. Although these methods can deal with quite complex structures, some practical problems can make their application quite time-demanding. For example, when we use heavy-tailed distributions, convergence may be difficult, and the Metropolis-Hastings algorithm can become very slow, in addition to the extra work inevitably required in choosing efficient candidate generator distributions. In this work, we draw attention to special functions as tools for Bayesian computation, and we propose an alternative method for obtaining the posterior distribution in Gaussian non-conjugate models in exact form. We use complex integration methods based on the H-function in order to obtain the posterior distribution and some of its posterior quantities in an explicitly computable form. Two examples are provided in order to illustrate the theory.
Nilofer, Christina; Sukhwal, Anshul; Mohanapriya, Arumugam; Kangueane, Pandjassarame
2017-01-01
Catalysis, cellular regulation, immune function, cell wall assembly, transport, signaling and inhibition all occur through Protein-Protein Interactions (PPI). This is made possible by the formation of specific yet stable protein-protein interfaces. Therefore, it is of interest to understand their molecular principles using structural data in relation to known function. Several interface features have been documented using known X-ray structures of protein complexes since 1975. This has improved our understanding of the interface through structural features such as interface area, binding energy, hydrophobicity, relative hydrophobicity, salt bridges and hydrogen bonds. The strength of binding between two proteins depends on interface size (the number of residues at the interface) and thus its corresponding interface area. It is known that large interfaces have high binding energy (the sum of van der Waals (vdW), H-bond and electrostatic terms). However, the selective role played by each of these energy components, and especially that of vdW, is not explicitly known. Therefore, it is important to document their individual roles in known protein-protein structural complexes. It is of interest to relate interface size with vdW, H-bond and electrostatic interactions at the interfaces of protein structural complexes with known function, using statistical and multiple linear regression analysis methods to identify the prominent force. We used the manually curated non-redundant dataset of 278 hetero-dimeric protein structural complexes, grouped by known function by Sowmya et al. (2015), to gain additional insight into this phenomenon using PPCheck (Anshul and Sowdhamini, 2015), a robust tool for analyzing inter-atomic non-covalent interactions. This dataset consists of obligatory (enzymes, regulator, biological assembly), immune and non-obligatory (enzyme and regulator inhibitors) complexes. Results show that the total binding energy is greater for large interfaces. However, this is not true for its individual energy factors. Analysis shows that vdW energies contribute about 75% ± 11% on average among all complexes, and that this contribution increases with interface size (r2 ranging from 0.67 to 0.89 with p<0.01) at the 95% confidence limit, irrespective of molecular function. Thus, vdW is both dominant and proportional at the interface, independent of molecular function. Nevertheless, H-bond energy contributes 15% ± 6.5% on average in these complexes. It also increases moderately with interface size (r2 ranging from 0.43 to 0.61 with p<0.01), but only among obligatory and immune complexes. Moreover, there is about an 11.3% ± 8.7% contribution by electrostatic energy, which increases with interface size specifically among non-obligatory regulator-inhibitors (r2 = 0.44). It is implied that H-bonds and electrostatics are neither dominant nor proportional at the interface; nonetheless, their presence cannot be ignored in binding. A specific role for H-bonds and (or) electrostatic energy in improving the stability of complexes is therefore implied. Thus, vdW is common at the interface, stabilized further by selective H-bonds and (or) electrostatic interactions at the atomic level in almost all complexes. Comparison of this observation with residue-level analysis of the interface is compelling.
The contribution of H-bonds (14.83% ± 6.5% and r2 = 0.61 with p<0.01) among obligatory complexes and of electrostatic energy (8.8% ± 4.77% and r2 = 0.63 with p<0.01) among non-obligatory complexes, within interfaces (class A) having more non-polar residues than the surface, influences our inference. However, interfaces (class B) having fewer non-polar residues than the surface show 1.5-fold more electrostatic energy on average. The interpretation of the interface using inter-atomic (vdW, H-bond, electrostatic) interactions, combined with inter-residue predominance (class A and class B) in relation to known function, is the key to revealing its molecular principles, with new challenges ahead.
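The kind of regression analysis described can be sketched as follows: fit an energy contribution against interface size and report the r2 statistic. The data below are synthetic stand-ins for the curated 278-complex dataset, with an assumed linear vdW trend.

```python
import numpy as np

# Linear regression of an interface energy term against interface size,
# reporting the slope and coefficient of determination r^2.
rng = np.random.default_rng(0)
interface_size = rng.integers(20, 200, size=278)            # residues per interface
vdw_energy = 1.2 * interface_size + rng.normal(0, 15, 278)  # assumed linear trend

slope, intercept = np.polyfit(interface_size, vdw_energy, 1)
pred = slope * interface_size + intercept
ss_res = np.sum((vdw_energy - pred)**2)
ss_tot = np.sum((vdw_energy - vdw_energy.mean())**2)
r2 = 1 - ss_res / ss_tot
print(f"slope={slope:.2f}, r^2={r2:.2f}")   # proportionality of vdW with size
```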
Vibratory Regime Classification of Infant Phonation
Buder, Eugene H.; Chorna, Lesya B.; Oller, D. Kimbrough; Robinson, Rebecca B.
2008-01-01
Infant phonation is highly variable in many respects, including the basic vibratory patterns by which the vocal tissues create acoustic signals. Previous studies have identified the regular occurrence of non-modal phonation types in normal infant phonation. The glottis is like many oscillating systems that, because of non-linear relationships among the elements, may vibrate in ways representing the deterministic patterns classified theoretically within the mathematical framework of non-linear dynamics. The infant’s pre-verbal vocal explorations present such a variety of phonations that it may be possible to find effectively all the classes of vibration predicted by non-linear dynamic theory. The current report defines acoustic criteria for an important subset of such vibratory regimes, and demonstrates that analysts can be trained to reliably use these criteria for a classification that includes all instances of infant phonation in the recorded corpora. The method is thus internally comprehensive in the sense that all phonations are classified, but it is not exhaustive in the sense that all vocal qualities are thereby represented. Using the methods thus developed, this study also demonstrates that the distributions of these phonation types vary significantly across sessions of recording in the first year of life, suggesting developmental changes. The method of regime classification is thus capable of tracking changes that may be indicative of maturation of the mechanism, the learning of categories of phonatory control, and the possibly varying use of vocalizations across social contexts. PMID:17509829
Complexity in Soil Systems: What Does It Mean and How Should We Proceed?
NASA Astrophysics Data System (ADS)
Faybishenko, B.; Molz, F. J.; Brodie, E.; Hubbard, S. S.
2015-12-01
The complex soil systems approach is needed fundamentally for the development of integrated, interdisciplinary methods to measure and quantify the physical, chemical and biological processes taking place in soil, and to determine the role of fine-scale heterogeneities. This presentation is aimed at a review of the concepts and observations concerning complexity and complex systems theory, including terminology, emergent complexity and simplicity, self-organization and a general approach to the study of complex systems using the Weaver (1948) concept of "organized complexity." These concepts are used to provide understanding of complex soil systems, and to develop experimental and mathematical approaches to soil microbiological processes. The results of numerical simulations, observations and experiments are presented that indicate the presence of deterministic chaotic dynamics in soil microbial systems. So what are the implications for the scientists who wish to develop mathematical models in the area of organized complexity or to perform experiments to help clarify an aspect of an organized complex system? The modelers have to deal with coupled systems having at least three dependent variables, and they have to forgo making linear approximations to nonlinear phenomena. The analogous rule for experimentalists is that they need to perform experiments that involve measurement of at least three interacting entities (variables depending on time, space, and each other). These entities could be microbes in soil penetrated by roots. If a process being studied in a soil affects the soil properties, like biofilm formation, then this effect has to be measured and included. The mathematical implications of this viewpoint are examined, and results of numerical solutions to a system of equations demonstrating deterministic chaotic behavior are also discussed using time series and the 3D strange attractors.
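The modeling rule of thumb stated here, at least three coupled dependent variables and no linearization of nonlinear terms, is classically illustrated by the Lorenz system, sketched below as a stand-in (it is not the authors' soil model); the parameters are the standard chaotic ones.

```python
import numpy as np

# Lorenz system: three coupled variables with genuinely nonlinear terms,
# the minimal setting in which deterministic chaos and a strange attractor
# can appear, as the abstract's modeling guideline requires.
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, steps = 0.01, 5000
traj = np.empty((steps, 3))
traj[0] = [1.0, 1.0, 1.0]
for i in range(steps - 1):
    s = traj[i]                      # classical 4th-order Runge-Kutta step
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    traj[i + 1] = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print(traj[-1])  # trajectory wanders the strange attractor, never repeating
```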
Acceleration techniques in the univariate Lipschitz global optimization
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela
2016-10-01
Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information statistical approaches are presented. The contribution describes novel, powerful local tuning and local improvement techniques, as well as the traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on a class of 100 widely used test functions.
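The geometric approach can be illustrated by the classical Piyavskii-Shubert scheme: with Lipschitz constant L, each sampled point contributes the bound f(x) >= f(x_i) - L|x - x_i|, and the next evaluation goes where the resulting sawtooth minorant is lowest. The test function and L below are illustrative assumptions.

```python
import numpy as np

# Piyavskii-Shubert Lipschitz global optimization on [a, b]: in each interval
# between adjacent samples, the two cones intersect at
#   x* = (xl + xr)/2 + (fl - fr)/(2L),  bound = (fl + fr)/2 - L(xr - xl)/2,
# and we always evaluate f where the lower bound is smallest.
def f(x):
    return np.sin(3 * x) + 0.5 * x        # toy multiextremal objective

a, b, L = -3.0, 3.0, 3.5                  # box and (over)estimated Lipschitz constant
xs = [a, b]; fs = [f(a), f(b)]
for _ in range(50):
    order = np.argsort(xs)
    x_sorted = np.array(xs)[order]; f_sorted = np.array(fs)[order]
    best_bound, best_x = np.inf, None
    for i in range(len(x_sorted) - 1):
        xl, xr = x_sorted[i], x_sorted[i + 1]
        fl, fr = f_sorted[i], f_sorted[i + 1]
        x_new = 0.5 * (xl + xr) + (fl - fr) / (2 * L)   # cone intersection
        bound = 0.5 * (fl + fr) - 0.5 * L * (xr - xl)   # lower bound there
        if bound < best_bound:
            best_bound, best_x = bound, x_new
    xs.append(best_x); fs.append(f(best_x))
print(xs[int(np.argmin(fs))], min(fs))    # approximate global minimizer and minimum
```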
A mixed SIR-SIS model to contain a virus spreading through networks with two degrees
NASA Astrophysics Data System (ADS)
Essouifi, Mohamed; Achahbar, Abdelfattah
Because the “nodes” and “links” of real networks are heterogeneous, we borrow the recently introduced idea of the reduced scale-free network to model the prevalence of computer viruses throughout the Internet. The purpose of this paper is to extend the previous deterministic two-subchain Susceptible-Infected-Susceptible (SIS) model into a mixed Susceptible-Infected-Recovered and Susceptible-Infected-Susceptible (SIR-SIS) model to contain computer virus spreading over networks with two degrees, and moreover to develop its stochastic counterpart. Due to the high protection and security afforded to the hub class, we suggest treating it with an SIR epidemic model rather than an SIS one. The analytical study reveals that the proposed model admits a stable viral equilibrium, and it is shown numerically that the mean dynamic behavior of the stochastic model is in agreement with the deterministic one. Unlike the infection densities i2 and i, which both tend to a viral equilibrium under both approaches as in the previous study, i1 tends to the virus-free equilibrium. Furthermore, since a proportion of infectives are recovered, the global infection density i is minimized. The permanent presence of viruses in the network is therefore due to the class of lower-degree nodes. Many suggestions are put forward for containing virus propagation and minimizing its damage.
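A hedged sketch of the deterministic mean-field idea is given below: hubs follow SIR, low-degree nodes follow SIS, and the two classes are coupled through a degree-weighted infection pressure. The coupling form and all rates are assumptions for illustration, not the paper's equations.

```python
# Mixed SIR-SIS mean-field model on a two-degree network: class 1 (hubs)
# recovers with immunity (SIR), class 2 (low-degree nodes) returns to the
# susceptible pool (SIS); forward Euler integration.
beta, gamma1, gamma2 = 0.4, 0.2, 0.1   # infection and recovery rates (assumed)
k1, k2, f1 = 8.0, 2.0, 0.2             # degrees and hub fraction (assumed)

s1, i1, r1 = 0.99, 0.01, 0.0           # hub class (SIR)
s2, i2 = 0.99, 0.01                    # low-degree class (SIS)
dt = 0.01
for _ in range(100000):
    # Degree-weighted mean-field infection pressure shared by both classes.
    theta = (f1 * k1 * i1 + (1 - f1) * k2 * i2) / (f1 * k1 + (1 - f1) * k2)
    di1 = beta * k1 * s1 * theta - gamma1 * i1
    di2 = beta * k2 * s2 * theta - gamma2 * i2
    s1 += (-beta * k1 * s1 * theta) * dt
    r1 += gamma1 * i1 * dt
    s2 += (-beta * k2 * s2 * theta + gamma2 * i2) * dt
    i1 += di1 * dt; i2 += di2 * dt
i_global = f1 * i1 + (1 - f1) * i2
print(i1, i2, i_global)  # i1 -> virus-free, i2 -> endemic, i reduced overall
```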
Scavuzzo-Duggan, Tess R; Chaves, Arielle M; Singh, Abhishek; Sethaphong, Latsavongsakda; Slabaugh, Erin; Yingling, Yaroslava G; Haigler, Candace H; Roberts, Alison W
2018-06-01
Cellulose synthases (CESAs) are glycosyltransferases that catalyze formation of cellulose microfibrils in plant cell walls. Seed plant CESA isoforms cluster in six phylogenetic clades, whose non-interchangeable members play distinct roles within cellulose synthesis complexes (CSCs). A 'class specific region' (CSR), with higher sequence similarity within versus between functional CESA classes, has been suggested to contribute to specific activities or interactions of different isoforms. We investigated CESA isoform specificity in the moss, Physcomitrella patens (Hedw.) B. S. G. to gain evolutionary insights into CESA structure/function relationships. Like seed plants, P. patens has oligomeric rosette-type CSCs, but the PpCESAs diverged independently and form a separate CESA clade. We showed that P. patens has two functionally distinct CESAs classes, based on the ability to complement the gametophore-negative phenotype of a ppcesa5 knockout line. Thus, non-interchangeable CESA classes evolved separately in mosses and seed plants. However, testing of chimeric moss CESA genes for complementation demonstrated that functional class-specificity is not determined by the CSR. Sequence analysis and computational modeling showed that the CSR is intrinsically disordered and contains predicted molecular recognition features, consistent with a possible role in CESA oligomerization and explaining the evolution of class-specific sequences without selection for class-specific function. © 2018 Institute of Botany, Chinese Academy of Sciences.
NASA Astrophysics Data System (ADS)
Baumann, Erwin W.; Williams, David L.
1993-08-01
Artificial neural networks capable of learning and recalling stochastic associations between non-deterministic quantities have received relatively little attention to date. One potential application of such stochastic associative networks is the generation of sensory 'expectations' based on arbitrary subsets of sensor inputs, to support anticipatory and investigative behavior in sensor-based robots. Another application of this type of associative memory is the prediction of how a scene will look in one spectral band, including noise, based upon its appearance in several other wavebands. This paper describes a semi-supervised neural network architecture composed of self-organizing maps associated through stochastic inter-layer connections. This 'Stochastic Associative Memory' (SAM) can learn and recall non-deterministic associations between multi-dimensional probability density functions. The stochastic nature of the network also enables it to represent noise distributions that are inherent in any true sensing process. The SAM architecture, training process, and initial application to sensor image prediction are described. Relationships to Fuzzy Associative Memory (FAM) are discussed.
Kim, HyunJin; Choi, Kang-Il
2016-01-01
This paper proposes a pipelined non-deterministic finite automaton (NFA)-based string matching scheme using field programmable gate array (FPGA) implementation. The characteristics of the NFA such as shared common prefixes and no failure transitions are considered in the proposed scheme. In the implementation of the automaton-based string matching using an FPGA, each state transition is implemented with a look-up table (LUT) for the combinational logic circuit between registers. In addition, multiple state transitions between stages can be performed in a pipelined fashion. In this paper, it is proposed that multiple one-to-one state transitions, called merged state transitions, can be performed with an LUT. By cutting down the number of used LUTs for implementing state transitions, the hardware overhead of combinational logic circuits is greatly reduced in the proposed pipelined NFA-based string matching scheme.
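The automaton logic (shared common prefixes, no failure transitions, and all active states advancing in lockstep for each input character) can be sketched in software as below; this illustrates only the NFA behaviour, not the FPGA/LUT pipelining. The patterns are illustrative.

```python
# NFA-based multi-pattern matching: patterns share a common-prefix trie, and
# because there are no failure transitions, dead paths simply drop out of the
# active-state set as each character is consumed.
patterns = ["he", "her", "his"]

# Build the trie; every node is an NFA state, with accepting pattern labels.
trie = [{}]
accept = [set()]
for p in patterns:
    s = 0
    for ch in p:
        if ch not in trie[s]:
            trie[s][ch] = len(trie)
            trie.append({})
            accept.append(set())
        s = trie[s][ch]
    accept[s].add(p)

def match(text):
    active = set()
    hits = []
    for i, ch in enumerate(text):
        active.add(0)                       # a new match may start at any position
        nxt = set()
        for s in active:
            t = trie[s].get(ch)
            if t is not None:               # no failure transitions: dead paths vanish
                nxt.add(t)
                for p in accept[t]:
                    hits.append((i - len(p) + 1, p))
        active = nxt
    return hits

print(match("he sold his heron"))           # [(0, 'he'), (8, 'his'), (12, 'he'), (12, 'her')]
```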
Phonon arithmetic in a trapped ion system
NASA Astrophysics Data System (ADS)
Um, Mark; Zhang, Junhua; Lv, Dingshun; Lu, Yao; An, Shuoming; Zhang, Jing-Ning; Nha, Hyunchul; Kim, M. S.; Kim, Kihwan
2016-04-01
Single-quantum-level operations are important tools for manipulating a quantum state. Annihilation or creation of single particles translates a quantum state into another by adding or subtracting a particle, depending on how many are already in the given state. The operations are probabilistic, and the success rate has so far been low in their experimental realization. Here we experimentally demonstrate (near-)deterministic addition and subtraction of a bosonic particle, in particular a phonon of ionic motion in a harmonic potential. We realize the operations by coupling phonons to an auxiliary two-level system and applying transitionless adiabatic passage. We show that the operations can be readily repeated on various initial states, and demonstrate by reconstruction of the density matrices that the operations preserve coherences. We observe the transformation of a classical state into a highly non-classical one, and of a Gaussian state into a non-Gaussian one, by applying a sequence of operations deterministically.
No-go theorem for passive single-rail linear optical quantum computing.
Wu, Lian-Ao; Walther, Philip; Lidar, Daniel A
2013-01-01
Photonic quantum systems are among the most promising architectures for quantum computers. It is well known that for dual-rail photons effective non-linearities and near-deterministic non-trivial two-qubit gates can be achieved via the measurement process and by introducing ancillary photons. While in principle this opens a legitimate path to scalable linear optical quantum computing, the technical requirements are still very challenging and thus other optical encodings are being actively investigated. One of the alternatives is to use single-rail encoded photons, where entangled states can be deterministically generated. Here we prove that even for such systems universal optical quantum computing using only passive optical elements such as beam splitters and phase shifters is not possible. This no-go theorem proves that photon bunching cannot be passively suppressed even when extra ancilla modes and arbitrary number of photons are used. Our result provides useful guidance for the design of optical quantum computers.
ERIC Educational Resources Information Center
Sampson, Russell D.
2013-01-01
A simple naked eye observational exercise is outlined that teaches non-major astronomy students basic observational and critical thinking skills but does not require complex equipment or extensive knowledge of the night sky. Students measure the relationship between stellar scintillation and the altitude of a set of stars. Successful observations…
A combinatorial model of malware diffusion via bluetooth connections.
Merler, Stefano; Jurman, Giuseppe
2013-01-01
We outline here the mathematical expression of a diffusion model for cellphone malware transmitted through Bluetooth channels. In particular, we provide the deterministic formula underlying the proposed infection model, in its equivalent recursive (simple but computationally heavy) and closed-form (more complex but efficiently computable) expressions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Wei; Reddy, T. A.; Gurian, Patrick
2007-01-31
This companion paper to Jiang and Reddy presents a general and computationally efficient methodology for dynamic scheduling and optimal control of complex primary HVAC&R plants using a deterministic engineering optimization approach.
Liao, Chen; Sa, Niya; Key, Baris; ...
2015-02-02
We developed a unique class of non-Grignard, aluminum-free magnesium electrolytes based on a simple mixture of magnesium compounds: magnesium hexamethyldisilazide (Mg(HMDS)2) and magnesium chloride (MgCl2).
Evolution of major histocompatibility complex class I and class II genes in the brown bear.
Kuduk, Katarzyna; Babik, Wiesław; Bojarska, Katarzyna; Sliwińska, Ewa B; Kindberg, Jonas; Taberlet, Pierre; Swenson, Jon E; Radwan, Jacek
2012-10-02
Major histocompatibility complex (MHC) proteins constitute an essential component of the vertebrate immune response, and are coded by the most polymorphic of the vertebrate genes. Here, we investigated sequence variation and evolution of MHC class I and class II DRB, DQA and DQB genes in the brown bear Ursus arctos to characterise the level of polymorphism, estimate the strength of positive selection acting on them, and assess the extent of gene orthology and trans-species polymorphism in Ursidae. We found 37 MHC class I, 16 MHC class II DRB, four DQB and two DQA alleles. We confirmed the expression of several loci: three MHC class I, two DRB, two DQB and one DQA. MHC class I also contained two clusters of non-expressed sequences. MHC class I and DRB allele frequencies differed between northern and southern populations of the Scandinavian brown bear. The rate of nonsynonymous substitutions (dN) exceeded the rate of synonymous substitutions (dS) at putative antigen binding sites of DRB and DQB loci and, marginally significantly, at MHC class I loci. Models of codon evolution supported positive selection at DRB and MHC class I loci. Both MHC class I and MHC class II sequences showed orthology to gene clusters found in the giant panda Ailuropoda melanoleuca. Historical positive selection has acted on MHC class I, class II DRB and DQB, but not on the DQA locus. The signal of historical positive selection on the DRB locus was particularly strong, which may be a general feature of caniforms. The presence of MHC class I pseudogenes may indicate faster gene turnover in this class through the birth-and-death process. South-north population structure at MHC loci probably reflects origin of the populations from separate glacial refugia.
Deterministic quantum controlled-PHASE gates based on non-Markovian environments
NASA Astrophysics Data System (ADS)
Zhang, Rui; Chen, Tian; Wang, Xiang-Bin
2017-12-01
We study the realization of the quantum controlled-PHASE gate in an atom-cavity system beyond the Markovian approximation. A general description of the dynamics of the atom-cavity system, without any approximation, is presented. When the spectral density of the reservoir has the Lorentzian form, by exploiting the memory backflow from the reservoir we can always construct a deterministic quantum controlled-PHASE gate between a photon and an atom, whether the atom-cavity coupling is weak or strong. In contrast, the phase shift in the output pulse hinders the implementation of quantum controlled-PHASE gates in sub-Ohmic, Ohmic, or super-Ohmic reservoirs.
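A minimal numerical sketch of memory backflow from a Lorentzian reservoir uses the standard single-excitation amplitude-damping kernel f(τ) = (γ0·λ/2)·e^(−λτ) (a textbook model consistent with, but much simpler than, the gate construction itself); the exponential kernel lets the integro-differential equation be rewritten as two coupled ODEs:

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma0, lam = 1.0, 0.2   # strong-coupling regime: lam < 2*gamma0 -> backflow

def rhs(t, y):
    # c: excited-state amplitude; z: exponentially weighted memory of c
    c, z = y
    return [-(gamma0 * lam / 2) * z, c - lam * z]

sol = solve_ivp(rhs, (0, 30), [1.0, 0.0], max_step=0.01)
pop = np.abs(sol.y[0])**2   # excited-state population |c(t)|^2

# Non-monotonic decay (local increases) signals information backflow
print("population revivals present:", bool((np.diff(pop) > 1e-6).any()))
```

In the weak-coupling limit (λ ≫ γ0) the decay becomes monotonic and Markovian; the revivals seen here are the memory resource the gate construction exploits.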
Deterministic nonlinear phase gates induced by a single qubit
NASA Astrophysics Data System (ADS)
Park, Kimin; Marek, Petr; Filip, Radim
2018-05-01
We propose deterministic realizations of nonlinear phase gates by repeating a finite sequence of non-commuting Rabi interactions between a harmonic oscillator and a single two-level ancillary qubit. We show explicitly that the key nonclassical features of the ideal cubic phase gate and the quartic phase gate are generated faithfully in the harmonic oscillator by our method. We numerically analyze the performance of our scheme under realistic imperfections of the oscillator and the two-level system. The methodology is extended further to higher-order nonlinear phase gates. This theoretical proposal completes the set of operations required for continuous-variable quantum computation.
Discrete-time systems with random switches: From systems stability to networks synchronization.
Guo, Yao; Lin, Wei; Ho, Daniel W C
2016-03-01
In this article, we develop approaches that enable us to identify, more accurately and analytically, the essential patterns that guarantee the almost sure stability of discrete-time systems with random switches. We allow the elements of the switching connection matrix to obey unbounded, continuous-valued distributions. In addition to almost sure stability, we further investigate almost sure synchronization in complex dynamical networks consisting of randomly connected nodes. Numerical examples illustrate that chaotic dynamics in the synchronization manifold are preserved when the statistical parameters enter an almost sure synchronization region established by the developed approach. Moreover, delicate configurations on the probability space are considered to ensure synchronization in networks whose nodes are described by nonlinear maps. Both theoretical and numerical results on synchronization are presented for settings with only a few random connections in each switch duration. More interestingly, we analytically find it possible to achieve almost sure synchronization in randomly switching complex networks even with very large population sizes, which cannot easily be realized in non-switching but deterministically connected networks.
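The flavor of the result can be reproduced with two chaotic logistic maps whose mutual coupling is switched on at random; all parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 4.0 * x * (1.0 - x)   # chaotic logistic map at r = 4

x, y = 0.3, 0.7
errors = []
for t in range(5000):
    # Random switch: couple only with probability 1/2, with random strength
    c = rng.uniform(0.4, 0.6) if rng.random() < 0.5 else 0.0
    fx, fy = f(x), f(y)
    x, y = (1 - c) * fx + c * fy, (1 - c) * fy + c * fx
    errors.append(abs(x - y))

print("mean |x - y| over last 1000 steps:", np.mean(errors[-1000:]))
```

Even though the maps are uncoupled half the time, the strong contraction on coupled steps dominates on average and the synchronization error tends to zero, while each trajectory remains chaotic.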
Combining Deterministic structures and stochastic heterogeneity for transport modeling
NASA Astrophysics Data System (ADS)
Zech, Alraune; Attinger, Sabine; Dietrich, Peter; Teutsch, Georg
2017-04-01
Contaminant transport in highly heterogeneous aquifers is extremely challenging and the subject of current scientific debate. Tracer plumes often show non-symmetric, highly skewed shapes. Predicting such transport behavior using the classical advection-dispersion equation (ADE) in combination with a stochastic description of aquifer properties requires a dense measurement network, which is in contrast to the information available for most aquifers. A new conceptual aquifer structure model is presented that combines large-scale deterministic information with a stochastic approach for incorporating sub-scale heterogeneity. The conceptual model is designed to allow a goal-oriented, site-specific transport analysis making use of as few data as possible. The basic idea is to reproduce highly skewed tracer plumes in heterogeneous media by incorporating deterministic contrasts and effects of connectivity, instead of using unimodal heterogeneous models with high variances. The conceptual model consists of deterministic blocks of mean hydraulic conductivity, which might be measured by pumping tests and indicate values differing by orders of magnitude. Sub-scale heterogeneity is introduced within every block; this heterogeneity can be modeled as bimodally or log-normally distributed. The impact of input parameters, structure, and conductivity contrasts is investigated in a systematic manner. Furthermore, a first successful application of the model was achieved for the well-known MADE site.
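A sketch of how such a conductivity field might be assembled, with block means and sub-scale variance as placeholders for pumping-test values (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Deterministic large-scale structure: block-mean log10-conductivities
# differing by orders of magnitude (stand-ins for pumping-test results)
block_log10_K = [-3.0, -5.0, -2.5, -4.0]
cells_per_block = 250

# Stochastic sub-scale heterogeneity: log-normal fluctuations per block
sigma_logK = 0.5  # illustrative sub-block standard deviation
logK = np.concatenate([
    mean + sigma_logK * rng.standard_normal(cells_per_block)
    for mean in block_log10_K
])
K = 10.0 ** logK  # 1-D conductivity field: deterministic blocks plus noise

print(K.shape, K.min(), K.max())
```

The deterministic block contrasts carry the connectivity that skews the plume, while the within-block noise supplies the spreading that a single high-variance unimodal field would otherwise have to mimic.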
Deterministic and stochastic models for middle east respiratory syndrome (MERS)
NASA Astrophysics Data System (ADS)
Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning
2018-03-01
World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), with 628 deaths, across 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest outbreak outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. Transmission occurs directly through contact between infected and non-infected individuals, or indirectly through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of free virus in the environment. Mathematical modeling is used to describe the transmission of MERS with a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and to analyze steady-state conditions. The stochastic approach, a Continuous-Time Markov Chain (CTMC), is used to predict future states using random variables. From the models built, the threshold values for the deterministic and stochastic models are obtained in the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations of both models using several different parameter values are shown, and the probability of disease extinction is compared across several initial conditions.
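A schematic of the two modeling styles on a generic SIR-type system (the compartments and rates are illustrative; the paper's MERS model additionally tracks free virus in the environment):

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, N = 0.3, 0.1, 1000   # illustrative rates and population size

# Deterministic ODE model: temporal dynamics and steady states
def sir(t, y):
    S, I, R = y
    inf = beta * S * I / N
    return [-inf, inf - gamma * I, gamma * I]

det = solve_ivp(sir, (0, 200), [N - 5, 5, 0])

# Stochastic CTMC (Gillespie algorithm): extinction has finite probability
rng = np.random.default_rng(1)
S, I, R, t = N - 5, 5, 0, 0.0
while I > 0 and t < 200:
    rates = np.array([beta * S * I / N, gamma * I])
    total = rates.sum()
    t += rng.exponential(1 / total)          # time to next event
    if rng.random() < rates[0] / total:      # infection event
        S, I = S - 1, I + 1
    else:                                    # recovery event
        I, R = I - 1, R + 1

print("ODE final infectives:", det.y[1, -1], "| CTMC stopped at t =", t)
```

Running the CTMC many times and counting the runs where I hits zero early estimates the extinction probability, which the deterministic model cannot provide.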
On the usage of ultrasound computational models for decision making under ambiguity
NASA Astrophysics Data System (ADS)
Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron
2018-04-01
Computer modeling and simulation are becoming pervasive within the non-destructive evaluation (NDE) industry as convenient tools for designing and assessing inspection techniques. This raises a pressing need to develop quantitative techniques for demonstrating the validity and applicability of computational models. Computational models provide deterministic results based on deterministic, well-defined inputs, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models alone. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities and quantifies the differences using maximum-amplitude and power-spectral-density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.
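The two comparison metrics can be sketched as follows, with synthetic waveforms standing in for the simulated and measured signals (sampling rate, pulse shape, and noise level are all illustrative):

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 10e6   # sampling rate, Hz (illustrative for an ultrasound A-scan)
t = np.arange(2048) / fs
measured = np.sin(2 * np.pi * 1e6 * t) * np.exp(-((t - 5e-5) / 1e-5) ** 2)
simulated = 0.95 * measured + 0.02 * np.random.default_rng(0).standard_normal(t.size)

# Metric 1: relative maximum-amplitude difference
amp_err = abs(measured.max() - simulated.max()) / measured.max()

# Metric 2: power-spectral-density discrepancy (Welch estimate)
f, P_meas = welch(measured, fs=fs, nperseg=512)
_, P_sim = welch(simulated, fs=fs, nperseg=512)
psd_err = trapezoid(abs(P_meas - P_sim), f) / trapezoid(P_meas, f)

print(f"max-amplitude error: {amp_err:.3f}, PSD error: {psd_err:.3f}")
```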
Continuum models of cohesive stochastic swarms: The effect of motility on aggregation patterns
NASA Astrophysics Data System (ADS)
Hughes, Barry D.; Fellner, Klemens
2013-10-01
Mathematical models of swarms of moving agents with non-local interactions have many applications and have been the subject of considerable recent interest. For modest numbers of agents, cellular automata or related algorithms can be used to study such systems, but in the present work, instead of considering discrete agents, we discuss a class of one-dimensional continuum models, in which the agents possess a density ρ(x,t) at location x at time t. The agents are subject to a stochastic motility mechanism and to a global cohesive inter-agent force. The motility mechanisms covered include classical diffusion, nonlinear diffusion (which may be used to model, in a phenomenological way, volume exclusion or other short-range local interactions), and a family of linear redistribution operators related to fractional diffusion equations. A variety of exact analytic results are discussed, including equilibrium solutions and criteria for unimodality of equilibrium distributions, full time-dependent solutions, and transitions between asymptotic collapse and asymptotic escape. We address the behaviour of the system for diffusive motility in the low-diffusivity limit for both smooth and singular interaction potentials and show how this elucidates puzzling behaviour in fully deterministic non-local particle interaction models. We conclude with speculative remarks about extensions and applications of the models.
Development of low friction snake-inspired deterministic textured surfaces
NASA Astrophysics Data System (ADS)
Cuervo, P.; López, D. A.; Cano, J. P.; Sánchez, J. C.; Rudas, S.; Estupiñán, H.; Toro, A.; Abdel-Aal, H. A.
2016-06-01
The use of surface texturization to reduce friction in sliding interfaces has proved successful in some tribological applications. However, it is still difficult to achieve robust surface texturing with controlled designer functionalities, because of the existing gap between enabling texturization technologies and surface design paradigms. Surface engineering, however, is highly advanced in natural surface constructs, especially within legless reptiles. Many intriguing features underpin the tribology of these animals, making it feasible to distill the essence of their surface construction. In this work, we report on the tribological behavior of a novel class of surfaces in which the spatial dimensions of the textural patterns originate from micro-scale features present within the ventral scales of pre-selected snake species. Mask lithography was used to implement elliptical texturing patterns on the surface of titanium alloy (Ti6Al4V) pins. To study the tribological behavior of the texturized pins, pin-on-disc tests were carried out with the pins sliding against ultra-high-molecular-weight polyethylene discs with no lubrication. For comparison, two non-texturized samples were also tested under the same conditions. The results show the feasibility of the texturization technique: the coefficient of friction of the textured surfaces was consistently lower than that of the non-texturized samples.
On the influence of additive and multiplicative noise on holes in dissipative systems.
Descalzi, Orazio; Cartes, Carlos; Brand, Helmut R
2017-05-01
We investigate the influence of noise on deterministically stable holes in the cubic-quintic complex Ginzburg-Landau equation. Inspired by experimental possibilities, we specifically study the effect of two types of noise, additive noise delta-correlated in space and spatially homogeneous multiplicative noise, on the formation of π-holes and 2π-holes. Our results include the following main features. For large enough additive noise, we always find a transition to the noisy version of the spatially homogeneous finite-amplitude solution, while for sufficiently large multiplicative noise, a collapse occurs to the zero-amplitude solution. The latter type of behavior, while unexpected deterministically, can be traced back to a characteristic feature of multiplicative noise: the zero solution acts as the analogue of an absorbing boundary; once trapped at zero, the system cannot escape. For 2π-holes, which exist deterministically over a fairly small range of values of subcriticality, one can induce a transition to a π-hole (for additive noise) or to a noise-sustained pulse (for multiplicative noise). This observation opens the possibility of noise-induced switching back and forth from and to 2π-holes.
NASA Astrophysics Data System (ADS)
Schnauber, Peter; Schall, Johannes; Bounouar, Samir; Höhne, Theresa; Park, Suk-In; Ryu, Geun-Hwan; Heindel, Tobias; Burger, Sven; Song, Jin-Dong; Rodt, Sven; Reitzenstein, Stephan
2018-04-01
The development of multi-node quantum optical circuits has attracted great attention in recent years. In particular, interfacing quantum-light sources, gates and detectors on a single chip is highly desirable for the realization of large networks. In this context, fabrication techniques that enable the deterministic integration of pre-selected quantum-light emitters into nanophotonic elements play a key role when moving forward to circuits containing multiple emitters. Here, we present the deterministic integration of an InAs quantum dot into a 50/50 multi-mode interference beamsplitter via in-situ electron beam lithography. We demonstrate the combined emitter-gate interface functionality by measuring triggered single-photon emission on-chip with $g^{(2)}(0) = 0.13\\pm 0.02$. Due to its high patterning resolution as well as spectral and spatial control, in-situ electron beam lithography allows for integration of pre-selected quantum emitters into complex photonic systems. Being a scalable single-step approach, it paves the way towards multi-node, fully integrated quantum photonic chips.
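One common way such a $g^{(2)}(0)$ value is extracted from a pulsed Hanbury Brown-Twiss measurement is by comparing the zero-delay coincidence peak with the average side peak; a schematic sketch with made-up histogram numbers (not the paper's data):

```python
import numpy as np

# Coincidence counts integrated over each pulse-repetition peak (made-up numbers)
side_peaks = np.array([1040, 980, 1010, 995, 1022, 1001])  # uncorrelated peaks
zero_peak = 131                                            # tau = 0 peak

g2_0 = zero_peak / side_peaks.mean()
err = g2_0 * np.sqrt(1 / zero_peak + 1 / side_peaks.sum())  # Poissonian error

print(f"g2(0) = {g2_0:.2f} +/- {err:.2f}")  # single-photon emission if << 1
```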
On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm
NASA Astrophysics Data System (ADS)
Budiman, M. A.; Rachmawati, D.
2017-12-01
The security of the widely used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, deterministic algorithms such as Euler's algorithm, Kraitchik's algorithm, and variants of Pollard's algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard's rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate for factorizing smaller RSA moduli, its factorization speed is much slower than that of Pollard's rho algorithm.
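Pollard's rho, the baseline in the comparison, fits in a few lines; a standard textbook version with Floyd cycle detection (not necessarily the exact variant the authors used):

```python
from math import gcd

def pollard_rho(n, seed=2, c=1):
    """Pollard's rho: returns a non-trivial factor of composite n, or None."""
    if n % 2 == 0:
        return 2
    f = lambda v: (v * v + c) % n
    x = y = seed
    d = 1
    while d == 1:
        x = f(x)        # tortoise: one step
        y = f(f(y))     # hare: two steps
        d = gcd(abs(x - y), n)
    return d if d != n else None  # None: retry with a different seed or c

# Toy RSA modulus: n = 10403 = 101 * 103
print(pollard_rho(10403))  # -> 101 or 103
```

The expected running time is O(n^(1/4)) group operations, which is why it quickly outpaces generic local search on moduli of growing size.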
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goreac, Dan, E-mail: Dan.Goreac@u-pem.fr; Kobylanski, Magdalena, E-mail: Magdalena.Kobylanski@u-pem.fr; Martinez, Miguel, E-mail: Miguel.Martinez@u-pem.fr
2016-10-15
We study optimal control problems in infinite horizon when the dynamics belong to a specific class of piecewise deterministic Markov processes constrained to star-shaped networks (corresponding to a toy traffic model). We adapt the results in Soner (SIAM J Control Optim 24(6):1110–1122, 1986) to prove the regularity of the value function and the dynamic programming principle. Extending the networks and Krylov's "shaking the coefficients" method, we prove that the value function can be seen as the solution to a linearized optimization problem set on a convenient set of probability measures. The approach relies entirely on viscosity arguments. As a by-product, the dual formulation guarantees that the value function is the pointwise supremum over regular subsolutions of the associated Hamilton–Jacobi integrodifferential system. This ensures that the value function satisfies Perron's prescription for the (unique) candidate viscosity solution.
O'Leary, Brendan; Park, Joonho; Plaxton, William C
2011-05-15
PEPC [PEP (phosphoenolpyruvate) carboxylase] is a tightly controlled enzyme located at the core of plant C-metabolism that catalyses the irreversible β-carboxylation of PEP to form oxaloacetate and Pi. The critical role of PEPC in assimilating atmospheric CO2 during C4 and Crassulacean acid metabolism photosynthesis has been studied extensively. PEPC also fulfils a broad spectrum of non-photosynthetic functions, particularly the anaplerotic replenishment of tricarboxylic acid cycle intermediates consumed during biosynthesis and nitrogen assimilation. An impressive array of strategies has evolved to co-ordinate in vivo PEPC activity with cellular demands for C4-C6 carboxylic acids. To achieve its diverse roles and complex regulation, PEPC belongs to a small multigene family encoding several closely related PTPCs (plant-type PEPCs), along with a distantly related BTPC (bacterial-type PEPC). PTPC genes encode ~110-kDa polypeptides containing conserved serine-phosphorylation and lysine-mono-ubiquitination sites, and typically exist as homotetrameric Class-1 PEPCs. In contrast, BTPC genes encode larger ~117-kDa polypeptides owing to a unique intrinsically disordered domain that mediates BTPC's tight interaction with co-expressed PTPC subunits. This association results in the formation of unusual ~900-kDa Class-2 PEPC hetero-octameric complexes that are desensitized to allosteric effectors. BTPC is a catalytic and regulatory subunit of Class-2 PEPC that is subject to multi-site regulatory phosphorylation in vivo. The interaction between divergent PEPC polypeptides within Class-2 PEPCs adds another layer of complexity to the evolution, physiological functions and metabolic control of this essential CO2-fixing plant enzyme. The present review summarizes exciting developments concerning the functions, post-translational controls and subcellular location of plant PTPC and BTPC isoenzymes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, A.; Ravichandran, R.; Park, J. H.
The second-order non-Navier-Fourier constitutive laws, expressed in a compact algebraic mathematical form, were validated for the force-driven Poiseuille gas flow by the deterministic atomic-level microscopic molecular dynamics (MD). Emphasis is placed on how completely different methods (a second-order continuum macroscopic theory based on the kinetic Boltzmann equation, the probabilistic mesoscopic direct simulation Monte Carlo, and, in particular, the deterministic microscopic MD) describe the non-classical physics, and whether the second-order non-Navier-Fourier constitutive laws derived from the continuum theory can be validated using MD solutions for the viscous stress and heat flux calculated directly from the molecular data using the statistical method. Peculiar behaviors (non-uniform tangent pressure profile and exotic instantaneous heat conduction from cold to hot [R. S. Myong, "A full analytical solution for the force-driven compressible Poiseuille gas flow based on a nonlinear coupled constitutive relation," Phys. Fluids 23(1), 012002 (2011)]) were re-examined using atomic-level MD results. It was shown that all three results were in strong qualitative agreement with each other, implying that the second-order non-Navier-Fourier laws are indeed physically legitimate in the transition regime. Furthermore, it was shown that the non-Navier-Fourier constitutive laws are essential for describing non-zero normal stress and tangential heat flux, while the classical and non-classical laws remain similar for shear stress and normal heat flux.
NASA Astrophysics Data System (ADS)
Wang, Fengyu
Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine adequate reserve levels that ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential for system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies, such as reserve zones, may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict transfer capabilities and network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and deliverability will grow. While stochastic programming can determine reserves by explicitly modeling uncertainties, scalability and pricing issues remain. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is the potential market impact. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. The three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserves while maintaining the deterministic unit commitment and economic dispatch structure, especially in the presence of renewables; 2) to develop a market settlement scheme for the proposed dynamic reserve policies such that market efficiency is improved; and 3) to evaluate the market impacts and price signal of the proposed dynamic reserve policies.
Nano-materials are emerging into the global marketplace. Nano-particles, and other throwaway nano-devices may constitute a whole new class of non-biodegradable pollutants of which scientists have very little understanding. Therefore, the production of significant quantities of n...
A Deterministic Computational Procedure for Space Environment Electron Transport
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.
2010-01-01
A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing the numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean-free-path and average-trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made, indicating that the gain in computational speed does not come at the expense of accuracy.
Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter
2015-01-20
While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
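The balanced regional idea can be caricatured as cutting the genome into near-equal-work chunks that cross chromosome boundaries; a simplified sketch with illustrative chromosome sizes, not Churchill's actual implementation:

```python
def balanced_regions(chrom_sizes, n_workers):
    """Split a genome into n_workers lists of (chrom, start, end) chunks
    whose total lengths are nearly equal, crossing chromosome boundaries."""
    total = sum(chrom_sizes.values())
    target = -(-total // n_workers)      # ceiling division
    regions, current, filled = [], [], 0
    for chrom, size in chrom_sizes.items():
        start = 0
        while start < size:
            take = min(size - start, target - filled)
            current.append((chrom, start, start + take))
            filled += take
            start += take
            if filled == target and len(regions) < n_workers - 1:
                regions.append(current)
                current, filled = [], 0
    regions.append(current)
    return regions

# Illustrative sizes (bp); a real pipeline would read these from a .fai index
sizes = {"chr1": 248_956_422, "chr2": 242_193_529, "chr3": 198_295_559}
for i, r in enumerate(balanced_regions(sizes, 4)):
    print(i, r[0], "...", r[-1])
```

Balancing by total bases keeps every worker busy for roughly the same wall time, which is the property that makes the overall run time scale with worker count.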
Synchrony and entrainment properties of robust circadian oscillators
Bagheri, Neda; Taylor, Stephanie R.; Meeker, Kirsten; Petzold, Linda R.; Doyle, Francis J.
2008-01-01
Systems theoretic tools (i.e. mathematical modelling, control, and feedback design) advance the understanding of robust performance in complex biological networks. We highlight phase entrainment as a key performance measure used to investigate dynamics of a single deterministic circadian oscillator for the purpose of generating insight into the behaviour of a population of (synchronized) oscillators. More specifically, the analysis of phase characteristics may facilitate the identification of appropriate coupling mechanisms for the ensemble of noisy (stochastic) circadian clocks. Phase also serves as a critical control objective to correct mismatch between the biological clock and its environment. Thus, we introduce methods of investigating synchrony and entrainment in both stochastic and deterministic frameworks, and as a property of a single oscillator or population of coupled oscillators. PMID:18426774
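A phase-only caricature of the setting, with noisy Kuramoto-style clocks coupled to each other and entrained by a periodic zeitgeber (all parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 100, 0.05, 20000                    # 100 noisy clocks, Euler-Maruyama
omega = 2 * np.pi / 24 * (1 + 0.02 * rng.standard_normal(n))  # ~24 h periods
K, F, sigma = 0.05, 0.08, 0.02                     # coupling, light forcing, noise
Omega = 2 * np.pi / 24                             # zeitgeber frequency

theta = rng.uniform(0, 2 * np.pi, n)
for step in range(steps):
    t = step * dt
    z = np.exp(1j * theta).mean()                  # mean field of the ensemble
    drift = (omega
             + K * abs(z) * np.sin(np.angle(z) - theta)   # mutual synchrony
             + F * np.sin(Omega * t - theta))             # entrainment to light
    theta += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)

r = abs(np.exp(1j * theta).mean())                 # order parameter: 1 = synchrony
print(f"synchrony r = {r:.2f}")
```

The order parameter r plays the role of the population-level phase coherence the review discusses, and the forcing term is the control input that corrects mismatch between the clock ensemble and its environment.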
Delayed-feedback chimera states: Forced multiclusters and stochastic resonance
NASA Astrophysics Data System (ADS)
Semenov, V.; Zakharova, A.; Maistrenko, Y.; Schöll, E.
2016-07-01
A nonlinear oscillator model with negative time-delayed feedback is studied numerically under external deterministic and stochastic forcing. It is found that in the unforced system complex partial synchronization patterns like chimera states as well as salt-and-pepper-like solitary states arise on the route from regular dynamics to spatio-temporal chaos. The control of the dynamics by external periodic forcing is demonstrated by numerical simulations. It is shown that one-cluster and multi-cluster chimeras can be achieved by adjusting the external forcing frequency to appropriate resonance conditions. If a stochastic component is superimposed to the deterministic external forcing, chimera states can be induced in a way similar to stochastic resonance, they appear, therefore, in regimes where they do not exist without noise.
The noisy edge of traveling waves
Hallatschek, Oskar
2011-01-01
Traveling waves are ubiquitous in nature and control the speed of many important dynamical processes, including chemical reactions, epidemic outbreaks, and biological evolution. Despite their fundamental role in complex systems, traveling waves remain elusive because they are often dominated by rare fluctuations in the wave tip, which have defied any rigorous analysis so far. Here, we show that by adjusting nonlinear model details, noisy traveling waves can be solved exactly. The moment equations of these tuned models are closed and have a simple analytical structure resembling the deterministic approximation supplemented by a nonlocal cutoff term. The peculiar form of the cutoff shapes the noisy edge of traveling waves and is critical for the correct prediction of the wave speed and its fluctuations. Our approach is illustrated and benchmarked using the example of fitness waves arising in simple models of microbial evolution, which are highly sensitive to number fluctuations. We demonstrate explicitly how these models can be tuned to account for finite population sizes and determine how quickly populations adapt as a function of population size and mutation rates. More generally, our method is shown to apply to a broad class of models, in which number fluctuations are generated by branching processes. Because of this versatility, the method of model tuning may serve as a promising route toward unraveling universal properties of complex discrete particle systems. PMID:21187435
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latanision, R.M.
1990-12-01
Electrochemical corrosion is pervasive in virtually all engineering systems and in virtually all industrial circumstances. Although engineers now understand how to design systems to minimize corrosion in many instances, many fundamental questions remain poorly understood and, therefore, the development of corrosion control strategies is based more on empiricism than on a deep understanding of the processes by which metals corrode in electrolytes. Fluctuations in potential, or current, in electrochemical systems have been observed for many years. To date, all investigations of this phenomenon have utilized non-deterministic analyses. In this work it is proposed to study electrochemical noise from a deterministic viewpoint by comparison of experimental parameters, such as first and second order moments (non-deterministic), with computer simulation of corrosion at metal surfaces. In this way it is proposed to analyze the origins of these fluctuations and to elucidate the relationship between these fluctuations and kinetic parameters associated with metal dissolution and cathodic reduction reactions. This research program addresses in essence two areas of interest: (a) computer modeling of corrosion processes in order to study the electrochemical processes on an atomistic scale, and (b) experimental investigations of fluctuations in electrochemical systems and correlation of experimental results with computer modeling. In effect, the noise generated by mathematical modeling will be analyzed and compared to experimental noise in electrochemical systems.
G-structures and domain walls in heterotic theories
NASA Astrophysics Data System (ADS)
Lukas, Andre; Matti, Cyril
2011-01-01
We consider heterotic string solutions based on a warped product of a four-dimensional domain wall and a six-dimensional internal manifold, preserving two supercharges. The constraints on the internal manifolds with SU(3) structure are derived. They are found to be generalized half-flat manifolds with a particular pattern of torsion classes and they include half-flat manifolds and Strominger's complex non-Kahler manifolds as special cases. We also verify that previous heterotic compactifications on half-flat mirror manifolds are based on this class of solutions.
The Impact of Body Mass Index on Abdominal Wall Reconstruction Outcomes: A Comparative Study
Giordano, Salvatore A; Garvey, Patrick B; Baumann, Donald P; Liu, Jun; Butler, Charles E
2016-01-01
Background Obesity and higher body mass index (BMI) may be associated with higher rates of wound healing complications and hernia recurrence following complex abdominal wall reconstruction (AWR). We hypothesized that higher BMIs result in higher rates of postoperative wound healing complications but similar rates of hernia recurrence in AWR patients. Methods We included 511 consecutive patients who underwent AWR with underlay mesh. Patients were divided into three groups on the basis of preoperative BMI: <30 kg/m2 (non-obese), 30–34.9 kg/m2 (class I obesity) and ≥35 kg/m2 (class II/III obesity). We compared postoperative outcomes among these three groups. Results Class I and class II/III obesity patients had higher surgical site occurrence rates than non-obese patients (26.4% vs. 14.9%; p=0.006 and 36.8% vs. 14.9%; p<0.001, respectively) and higher overall complication rates (37.9% vs. 24.7%; p=0.007 and 43.4% vs. 24.7%; p<0.001, respectively). Similarly, obese patients had significantly higher skin dehiscence (19.3% vs 7.2%; p<0.001 and 26.5% vs 7.2%; p<0.001, respectively) and fat necrosis rates (10.0% vs 2.1%; p=0.001 and 11.8% vs 2.1%; p<0.001, respectively) than non-obese patients. Class II/III obesity patients had higher infection and seroma rates than non-obese patients (9.6% vs 4.3%; p=0.041 and 8.1% vs 2.1%; p=0.006, respectively). However, class I and class II/III obesity patients experienced hernia recurrence rates (11.4% vs. 7.7%; p=0.204 and 10.3% vs. 7.7%; p=0.381, respectively) and freedom from hernia recurrence (overall log-rank p=0.41) similar to non-obese patients. Conclusions Hernia recurrence rates do not appear to be affected by obesity on long-term follow-up in AWR. PMID:28445378
NASA Astrophysics Data System (ADS)
Lambrou, George I.; Chatziioannou, Aristotelis; Vlahopoulos, Spiros; Moschovi, Maria; Chrousos, George P.
Biological systems are dynamic and possess properties that depend on two key elements: initial conditions and the response of the system over time. Conceptualizing this on tumor models influences the conclusions drawn about disease initiation and progression. Alterations in initial conditions dynamically reshape the properties of proliferating tumor cells. The present work aims to test the hypothesis of Wolfrom et al. that proliferation shows evidence of deterministic chaos, in the sense that subtle differences in the initial conditions give rise to non-linear response behavior of the system. Their hypothesis, tested on adherent Fao rat hepatoma cells, provides evidence that these cells manifest aperiodic oscillations in their proliferation rate. We have tested this hypothesis with some modifications to the proposed experimental setup. We used the acute lymphoblastic leukemia cell line CCRF-CEM, as it provides an excellent substrate for modeling proliferation dynamics. Measurements were taken at time points varying from 24 h to 48 h, extending the assayed populations beyond those of previously published reports on the complex dynamic behavior of animal cell populations. We conducted flow cytometry studies to examine the apoptotic and necrotic rates of the system, as well as DNA content changes of the cells over time. The cells exhibited a proliferation rate of nonlinear nature, as this rate presented oscillatory behavior. The data obtained were fitted to known growth models, such as the logistic and Gompertzian models.
Acoustic Wave Dispersion and Scattering in Complex Marine Sediment Structures
2018-03-21
Developed theory and methodology to distinguish between the two major classes of volume heterogeneities, discrete particles or a fluctuation... acoustics of muddy sediments has become of intense interest in the ONR community and very large and non-linear gradients have been observed in such... method was applied to measured reflection data in a muddy sediment area, where highly non-linear depth-dependent profiles were obtained – informed by the
Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara
2016-01-01
Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother’s old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother’s old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington’s genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation. PMID:26761487
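A stripped-down version of the in silico comparison, assuming a toy fitness rule in which survival probability declines linearly with damage (this is an illustrative caricature, not the paper's full model):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_fitness(asymmetry, generations=60, n0=200, cap=2000):
    """Toy lineage model: cells accrue 1 damage unit per generation, survive
    with probability ~ fitness, then split damage between two daughters.
    asymmetry a: old daughter inherits (0.5+a) of the damage, new (0.5-a)."""
    d = np.zeros(n0)
    for _ in range(generations):
        d += 1.0
        fit = np.clip(1.0 - d / 20.0, 0.0, None)
        d = d[rng.random(d.size) < fit]            # viability selection
        if d.size == 0:
            return 0.0
        d = np.concatenate([(0.5 + asymmetry) * d, (0.5 - asymmetry) * d])
        if d.size > cap:                           # keep population bounded
            d = rng.choice(d, cap, replace=False)
    return np.clip(1.0 - d / 20.0, 0.0, None).mean()

print("symmetric partitioning :", mean_fitness(0.0))
print("asymmetric partitioning:", mean_fitness(0.25))
```

Asymmetric splitting leaves the population mean damage unchanged before selection but widens its variance, so selection removes high-damage old daughters more efficiently, which is the fitness-variance argument the paper formalizes.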
Strandh, Maria; Westerdahl, Helena; Pontarp, Mikael; Canbäck, Björn; Dubois, Marie-Pierre; Miquel, Christian; Taberlet, Pierre; Bonadonna, Francesco
2012-11-07
Mate choice for major histocompatibility complex (MHC) compatibility has been found in several taxa, although rarely in birds. MHC is a crucial component in adaptive immunity and by choosing an MHC-dissimilar partner, heterozygosity and potentially broad pathogen resistance is maximized in the offspring. The MHC genotype influences odour cues and preferences in mammals and fish and hence olfactory-based mate choice can occur. We tested whether blue petrels, Halobaena caerulea, choose partners based on MHC compatibility. This bird is long-lived, monogamous and can discriminate between individual odours using olfaction, which makes it exceptionally well suited for this analysis. We screened MHC class I and II B alleles in blue petrels using 454-pyrosequencing and quantified the phylogenetic, functional and allele-sharing similarity between individuals. Partners were functionally more dissimilar at the MHC class II B loci than expected from random mating (p = 0.033), whereas there was no such difference at the MHC class I loci. Phylogenetic and non-sequence-based MHC allele-sharing measures detected no MHC dissimilarity between partners for either MHC class I or II B. Our study provides evidence of mate choice for MHC compatibility in a bird with a high dependency on odour cues, suggesting that MHC odour-mediated mate choice occurs in birds.
NASA Technical Reports Server (NTRS)
Davis, Brynmor; Kim, Edward; Piepmeier, Jeffrey; Hildebrand, Peter H. (Technical Monitor)
2001-01-01
Many new Earth remote-sensing instruments are embracing both the advantages and added complexity that result from interferometric or fully polarimetric operation. To increase instrument understanding and functionality, a model of the signals these instruments measure is presented. A stochastic model is used as it recognizes the non-deterministic nature of any real-world measurements while also providing a tractable mathematical framework. A stationary, Gaussian-distributed model structure is proposed. Temporal and spectral correlation measures provide a statistical description of the physical properties of coherence and polarization-state. From this relationship the model is mathematically defined. The model is shown to be unique for any set of physical parameters. A method of realizing the model (necessary for applications such as synthetic calibration-signal generation) is given and computer simulation results are presented. The signals are constructed using the output of a multi-input multi-output linear filter system, driven with white noise.
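The realization method can be sketched in miniature: feed independent white-noise drives through a mixing matrix and a shared filter, so the output pair has a designed correlation (standing in for coherence or polarization state) and a shaped spectrum. The matrix, filter, and values here are illustrative, not the paper's design:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
n = 100_000

# Two independent white Gaussian drives
w = rng.standard_normal((2, n))

# 2-input/2-output linear system: the mixing matrix sets cross-correlation,
# the shared filter sets the common power spectrum.
mix = np.array([[1.0, 0.0],
                [0.6, 0.8]])          # unit-norm rows -> unit output variances
b, a = signal.butter(4, 0.2)          # band-limiting filter (illustrative)
x = signal.lfilter(b, a, mix @ w, axis=-1)

# Empirical correlation approximates the designed value mix[0] @ mix[1] = 0.6
print(np.corrcoef(x)[0, 1])
```

Because both channels pass through the same filter, the designed correlation survives the spectral shaping, which is the essence of synthesizing signals with a prescribed coherence.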
NASA Astrophysics Data System (ADS)
Züleyha, Artuç; Ziya, Merdan; Selçuk, Yeşiltaş; Kemal, Öztürk M.; Mesut, Tez
2017-11-01
Computational models of tumors face difficulties arising from the complexity of tumor biology and the capacities of computational tools; nevertheless, such models provide insight into the interactions between a tumor and its micro environment. Moreover, computational models have the potential to inform individualized cancer treatment strategies. To study a solid brain tumor, glioblastoma multiforme (GBM), we present a two-dimensional Ising model implemented on a Creutz cellular automaton (CCA), treating transitions between non-tumor cells and cancer cells as analogous to phase transitions in a physical system. The Ising model on the CCA algorithm provides a deterministic approach, with discrete time steps and local interactions in position space, to view tumor growth as a function of time. Our simulation results, given for a fixed tumor radius, are compatible with theoretical and clinical data.
Joint passive radar tracking and target classification using radar cross section
NASA Astrophysics Data System (ADS)
Herman, Shawn M.
2004-01-01
We present a recursive Bayesian solution for the problem of joint tracking and classification of airborne targets. In our system, we allow for complications due to multiple targets, false alarms, and missed detections. More importantly, though, we utilize the full benefit of a joint approach by implementing our tracker using an aerodynamically valid flight model that requires aircraft-specific coefficients such as wing area and vehicle mass, which are provided by our classifier. A key feature that bridges the gap between tracking and classification is radar cross section (RCS). By modeling the true deterministic relationship that exists between RCS and target aspect, we are able to gain both valuable class information and an estimate of target orientation. However, the lack of a closed-form relationship between RCS and target aspect prevents us from using the Kalman filter or its variants. Instead, we rely upon a sequential Monte Carlo-based approach known as particle filtering. In addition to allowing us to include RCS as a measurement, the particle filter also simplifies the implementation of our nonlinear non-Gaussian flight model.
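A bare-bones bootstrap particle filter for a scalar nonlinear problem shows the mechanics the paper relies on; the dynamics and measurement function below are toy stand-ins, far simpler than the flight-dynamics and RCS models used there:

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, steps = 2000, 50

def propagate(x):
    """Nonlinear state transition plus process noise."""
    return x + np.sin(x) + 0.3 * rng.standard_normal(np.shape(x))

def likelihood(z, x):
    """Measurement model: any evaluable pdf works (no Kalman linearity needed)."""
    return np.exp(-0.5 * ((z - x**3 / 50.0) / 0.5) ** 2)

truth = 0.1
particles = rng.normal(0, 2, n_particles)
for _ in range(steps):
    truth = truth + np.sin(truth) + 0.3 * rng.standard_normal()
    z = truth**3 / 50.0 + 0.5 * rng.standard_normal()   # noisy nonlinear measurement
    particles = propagate(particles)
    w = likelihood(z, particles)
    w /= w.sum()
    # Systematic resampling to avoid weight degeneracy
    positions = (rng.random() + np.arange(n_particles)) / n_particles
    particles = particles[np.searchsorted(np.cumsum(w), positions)]

print("truth:", round(truth, 2), "estimate:", round(particles.mean(), 2))
```

Because the filter only ever evaluates the measurement likelihood pointwise, a deterministic but non-invertible map such as RCS-versus-aspect slots in directly where the cubic toy measurement sits here.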
Fiber optic voice/data network
NASA Technical Reports Server (NTRS)
Bergman, Larry A. (Inventor)
1989-01-01
An asynchronous, high-speed, fiber optic local area network originally developed for tactical environments, with additional benefits for other environments such as spacecraft. The network supports ordinary data packet traffic simultaneously with synchronous T1 voice traffic over a common token ring channel; however, the techniques and apparatus of this invention can be applied to any deterministic class of packet data networks, including multitier backbones, that must transport stream data (e.g., video, SAR, sensors) as well as ordinary data. A voice interface module parses, buffers, and resynchronizes the voice data to the packet network, employing elastic buffers on both the sending and receiving ends. Voice call setup and switching functions are performed external to the network with ordinary PABX equipment. Clock information is passed across network boundaries in a token-passing ring by preceding the token with an idle period of non-transmission, which allows the token to be used to re-establish a clock synchronized to the data. Provision is made to monitor and compensate the elastic receiving buffers so as to prevent them from overflowing or going empty.
Nuclear binding of progesterone in hen oviduct. Binding to multiple sites in vitro.
Pikler, G M; Webster, R A; Spelsberg, T C
1976-01-01
Steroid hormones, including progesterone, are known to bind with high affinity (Kd approximately 1x10^-10 M) to receptor proteins once they enter target cells. This complex (the progesterone-receptor) then undergoes a temperature- and/or salt-dependent activation which allows it to migrate to the cell nucleus and to bind to the deoxyribonucleoproteins. The present studies demonstrate that binding the hormone-receptor complex in vitro to isolated nuclei from the oviducts of laying hens requires the same conditions as do other in vitro binding studies reported previously, e.g. the hormone must be complexed to intact and activated receptor. Assaying the nuclear binding using multiple concentrations of progesterone receptor reveals the presence of more than one class of binding site in the oviduct nuclei. The affinity of each of these classes of binding sites ranges from Kd approximately 1x10^-9 to 1x10^-8 M. Assays using free steroid (not complexed with receptor) show no binding to these sites. The binding to each of the classes of sites displays a differential stability to increasing ionic concentrations, suggesting a primarily ionic-type interaction for all classes. Only the highest-affinity class of binding site is capable of binding progesterone receptor under physiological-saline conditions. This class represents 6000-10000 sites per cell nucleus and resembles the sites detected in vivo (Spelsberg, 1976, Biochem. J. 156, 391-398) which cause maximal transcriptional response when saturated with the progesterone receptor. The multiple binding sites for the progesterone receptor either are not present or are found in limited numbers in the nuclei of non-target organs. Differences in the extent of binding to the nuclear material between a target tissue (oviduct) and other tissues (spleen or erythrocyte) are markedly dependent on the ionic conditions, and are probably due to binding to different classes of sites in the nuclei. PMID:182147
Luminescent macrocyclic lanthanide complexes
Raymond, Kenneth N; Corneillie, Todd M; Xu, Jide
2014-05-20
The present invention provides a novel class of macrocyclic compounds as well as complexes formed between a metal (e.g., lanthanide) ion and the compounds of the invention. Preferred complexes exhibit high stability as well as high quantum yields of lanthanide ion luminescence in aqueous media without the need for secondary activating agents. Preferred compounds incorporate hydroxy-isophthalamide moieties within their macrocyclic structure and are characterized by surprisingly low, non-specific binding to a variety of polypeptides such as antibodies and proteins as well as high kinetic stability. These characteristics distinguish them from known, open-structured ligands.
Wu, Wei; Wang, Jin
2013-09-28
We established a potential and flux field landscape theory to quantify the global stability and dynamics of general spatially dependent non-equilibrium deterministic and stochastic systems. We extended our potential and flux landscape theory for spatially independent non-equilibrium stochastic systems described by Fokker-Planck equations to spatially dependent stochastic systems governed by general functional Fokker-Planck equations as well as functional Kramers-Moyal equations derived from master equations. Our general theory is applied to reaction-diffusion systems. For equilibrium spatially dependent systems with detailed balance, the potential field landscape alone, defined in terms of the steady state probability distribution functional, determines the global stability and dynamics of the system. The global stability of the system is closely related to the topography of the potential field landscape in terms of the basins of attraction and barrier heights in the field configuration state space. The effective driving force of the system is generated by the functional gradient of the potential field alone. For non-equilibrium spatially dependent systems, the curl probability flux field is indispensable in breaking detailed balance and creating non-equilibrium condition for the system. A complete characterization of the non-equilibrium dynamics of the spatially dependent system requires both the potential field and the curl probability flux field. While the non-equilibrium potential field landscape attracts the system down along the functional gradient similar to an electron moving in an electric field, the non-equilibrium flux field drives the system in a curly way similar to an electron moving in a magnetic field. In the small fluctuation limit, the intrinsic potential field as the small fluctuation limit of the potential field for spatially dependent non-equilibrium systems, which is closely related to the steady state probability distribution functional, is found to be a Lyapunov functional of the deterministic spatially dependent system. Therefore, the intrinsic potential landscape can characterize the global stability of the deterministic system. The relative entropy functional of the stochastic spatially dependent non-equilibrium system is found to be the Lyapunov functional of the stochastic dynamics of the system. Therefore, the relative entropy functional quantifies the global stability of the stochastic system with finite fluctuations. Our theory offers an alternative general approach to other field-theoretic techniques, to study the global stability and dynamics of spatially dependent non-equilibrium field systems. It can be applied to many physical, chemical, and biological spatially dependent non-equilibrium systems.
Complexity, information loss, and model building: from neuro- to cognitive dynamics
NASA Astrophysics Data System (ADS)
Arecchi, F. Tito
2007-06-01
A scientific problem described within a given code is mapped to a corresponding computational problem. We call (algorithmic) complexity the bit length of the shortest instruction which solves the problem. Deterministic chaos in general affects a dynamical system, making the corresponding problem experimentally and computationally heavy, since one must reset the initial conditions at a rate higher than that of information loss (Kolmogorov entropy). One can control chaos by adding to the system new degrees of freedom (information swapping: information lost by chaos is replaced by that arising from the new degrees of freedom). This implies a change of code, or a new augmented model. Within a single code, changing hypotheses is equivalent to fixing different sets of control parameters, each with a different a-priori probability, to be then confirmed and transformed to an a-posteriori probability via Bayes theorem. Sequential application of Bayes rule is nothing else than the Darwinian strategy in evolutionary biology. The sequence is a steepest-ascent algorithm, which stops once maximum probability has been reached. At this point the hypothesis exploration stops. By changing code (and hence the set of relevant variables) one can start again to formulate new classes of hypotheses. We call semantic complexity the number of accessible scientific codes, or models, that describe a situation. It is however a fuzzy concept, insofar as this number changes due to interaction of the operator with the system under investigation. These considerations are illustrated with reference to a cognitive task, starting from synchronization of neuron arrays in a perceptual area and tracing the putative path toward model building.
Non Kolmogorov Probability Models Outside Quantum Mechanics
NASA Astrophysics Data System (ADS)
Accardi, Luigi
2009-03-01
This paper is devoted to the analysis of the main conceptual problems in the interpretation of QM: reality, locality, determinism, physical state, the Heisenberg principle, "deterministic" and "exact" theories, laws of chance, the notion of event, statistical invariants, adaptive realism, EPR correlations and, finally, the EPR-chameleon experiment.
NASA Astrophysics Data System (ADS)
Oladyshkin, Sergey; Class, Holger; Helmig, Rainer; Nowak, Wolfgang
2010-05-01
CO2 storage in geological formations is currently being discussed intensively as a technology for mitigating CO2 emissions. However, any large-scale application requires a thorough analysis of the potential risks. Current numerical simulation models are too expensive for probabilistic risk analysis and for stochastic approaches based on brute-force repeated simulation. Even single deterministic simulations may require parallel high-performance computing. The multiphase flow processes involved are too non-linear for quasi-linear error propagation and other simplified stochastic tools. As an alternative approach, we propose a massive stochastic model reduction based on the probabilistic collocation method. The model response is projected onto an orthogonal basis of higher-order polynomials to approximate dependence on uncertain parameters (porosity, permeability etc.) and design parameters (injection rate, depth etc.). This allows for a non-linear propagation of model uncertainty affecting the predicted risk, ensures fast computation and provides a powerful tool for combining design variables and uncertain variables into one approach based on an integrative response surface. Thus, the design task of finding optimal injection regimes explicitly includes uncertainty, which leads to robust designs of the non-linear system that minimize failure probability and provide valuable support for risk-informed management decisions. We validate our proposed stochastic approach by Monte Carlo simulation using a common 3D benchmark problem (Class et al., Computational Geosciences 13, 2009). A reasonable compromise between computational effort and precision was reached already with second-order polynomials. In our case study, the proposed approach yields a significant computational speedup by a factor of 100 compared to Monte Carlo simulation. We demonstrate that, due to the non-linearity of the flow and transport processes during CO2 injection, including uncertainty in the analysis leads to a systematic and significant shift of predicted leakage rates towards higher values compared with deterministic simulations, affecting both risk estimates and the design of injection scenarios. This implies that neglecting uncertainty can be a strong simplification for modeling CO2 injection, and the consequences can be stronger than those of neglecting several physical phenomena (e.g. phase transition, convective mixing, capillary forces etc.). The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart. Keywords: polynomial chaos; CO2 storage; multiphase flow; porous media; risk assessment; uncertainty; integrative response surfaces
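The projection step can be made concrete with a minimal sketch in Python/NumPy; the two-parameter toy model, sample sizes and polynomial order below are illustrative assumptions, not the paper's CO2 simulator. A second-order polynomial chaos surrogate is fitted by regression on probabilists' Hermite polynomials and then evaluated cheaply in place of the full model:

import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

def model(xi):  # hypothetical expensive simulator stand-in
    return np.exp(0.5 * xi[..., 0]) * (1 + 0.3 * xi[..., 1] ** 2)

# collocation points: standard normal samples (Gauss-Hermite nodes also work)
xi = rng.standard_normal((200, 2))
y = model(xi)

# tensorized probabilists' Hermite basis, total degree <= 2
V1 = hermevander(xi[:, 0], 2)
V2 = hermevander(xi[:, 1], 2)
cols = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]
A = np.column_stack([V1[:, i] * V2[:, j] for i, j in cols])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# the surrogate is now cheap: estimate the response statistics by sampling it
xi_mc = rng.standard_normal((100000, 2))
W1 = hermevander(xi_mc[:, 0], 2); W2 = hermevander(xi_mc[:, 1], 2)
y_hat = np.column_stack([W1[:, i] * W2[:, j] for i, j in cols]) @ coef
print(y_hat.mean(), y_hat.std())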
Rare events in finite and infinite dimensions
NASA Astrophysics Data System (ADS)
Reznikoff, Maria G.
Thermal noise introduces stochasticity into deterministic equations and makes possible events which are never seen in the zero temperature setting. The driving force behind the thesis work is a desire to bring analysis and probability to bear on a class of relevant and intriguing physical problems, and in so doing, to allow applications to drive the development of new mathematical theory. The unifying theme is the study of rare events under the influence of small, random perturbations, and the manifold mathematical problems which ensue. In the first part, we apply large deviation theory and prefactor estimates to a coherent rotation micromagnetic model in order to analyze thermally activated magnetic switching. We consider recent physical experiments and the mathematical questions "asked" by them. A stochastic resonance type phenomenon is discovered, leading to the definition of finite temperature astroids. Non-Arrhenius behavior is discussed. The analysis is extended to ramped astroids. In addition, we discover that for low damping and ultrashort pulses, deterministic effects can override thermal effects, in accord with very recent ultrashort pulse experiments. Even more interesting, perhaps, is the study of large deviations in the infinite dimensional context, i.e. in spatially extended systems. Inspired by recent numerical investigations, we study the stochastically perturbed Allen-Cahn and Cahn-Hilliard equations. For the Allen-Cahn equation, we study the action minimization problem (a deterministic variational problem) and prove the action scaling in four parameter regimes, via upper and lower bounds. The sharp interface limit is studied. We formally derive a reduced action functional which lends insight into the connection between action minimization and curvature flow. For the Cahn-Hilliard equation, we prove upper and lower bounds for the scaling of the energy barrier in the nucleation and growth regime. Finally, we consider rare events in large or infinite domains, in one spatial dimension. We introduce a natural reference measure through which to analyze the invariant measure of stochastically perturbed, nonlinear partial differential equations. Also, for noisy reaction diffusion equations with an asymmetric potential, we discover how to rescale space and time in order to map the dynamics in the zero temperature limit to the Poisson Model, a simple version of the Johnson-Mehl-Avrami-Kolmogorov model for nucleation and growth.
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
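The gist of the SROM idea — a handful of deterministic model calls standing in for full Monte Carlo propagation — can be sketched as follows. This is not the authors' framework: the model, the input distribution and the equal-weight quantile construction are placeholder assumptions, and real SROMs optimize both samples and weights against the input CDF and moments.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def deterministic_model(x):              # stand-in for an expensive FEM solve
    return np.sin(x) + 0.1 * x ** 2

# reference answer by brute-force Monte Carlo (expensive in practice)
x_mc = rng.normal(1.0, 0.3, 100000)
y_mc = deterministic_model(x_mc)

# reduced order model: m samples with weights (equal-weight quantile points
# used here as a crude stand-in for the optimized SROM construction)
m = 7
x_srom = norm.ppf((np.arange(m) + 0.5) / m, loc=1.0, scale=0.3)
w = np.full(m, 1.0 / m)

y_srom = deterministic_model(x_srom)     # m independent deterministic calls
mean_srom = w @ y_srom
print("MC   mean/std:", y_mc.mean(), y_mc.std())
print("SROM mean/std:", mean_srom, np.sqrt(w @ (y_srom - mean_srom) ** 2))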
Mwanga, Gasper G; Haario, Heikki; Capasso, Vicenzo
2015-03-01
The main scope of this paper is to study the optimal control practices of malaria, by discussing the implementation of a catalog of optimal control strategies in presence of parameter uncertainties, which is typical of infectious diseases data. In this study we focus on a deterministic mathematical model for the transmission of malaria, including in particular asymptomatic carriers and two age classes in the human population. A partial qualitative analysis of the relevant ODE system has been carried out, leading to a realistic threshold parameter. For the deterministic model under consideration, four possible control strategies have been analyzed: the use of Long-lasting treated mosquito nets, indoor residual spraying, screening and treatment of symptomatic and asymptomatic individuals. The numerical results show that using optimal control the disease can be brought to a stable disease free equilibrium when all four controls are used. The Incremental Cost-Effectiveness Ratio (ICER) for all possible combinations of the disease-control measures is determined. The numerical simulations of the optimal control in the presence of parameter uncertainty demonstrate the robustness of the optimal control: the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for the designing of cost-effective strategies for disease controls with multiple interventions, even under considerable uncertainty of model parameters. Copyright © 2014 Elsevier Inc. All rights reserved.
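The flavor of such a deterministic transmission model can be conveyed with a toy host-vector ODE system under a constant bed-net coverage u. All rates below are invented for illustration; the paper's actual model additionally tracks two human age classes and asymptomatic carriers, and optimizes time-varying controls.

import numpy as np
from scipy.integrate import solve_ivp

beta_h, beta_v = 0.3, 0.25   # assumed transmission rates (human, vector)
gamma, mu_v = 0.1, 0.1       # recovery rate, mosquito mortality
u = 0.6                      # assumed constant net coverage in [0, 1]

def rhs(t, y):
    S, I, V = y              # susceptible/infectious humans, infected vectors
    lam = beta_h * (1 - u) * V            # nets reduce the biting rate
    return [-lam * S,
            lam * S - gamma * I,
            beta_v * (1 - u) * I * (1 - V) - mu_v * V]

sol = solve_ivp(rhs, (0, 365), [0.99, 0.01, 0.05], max_step=1.0)
print("final infectious fraction:", sol.y[1, -1])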
Development of Methodologies for IV and V of Neural Networks
NASA Technical Reports Server (NTRS)
Taylor, Brian; Darrah, Marjorie
2003-01-01
Non-deterministic systems often rely upon neural network (NN) technology to "learn" to manage flight systems under controlled conditions using carefully chosen training sets. How can these adaptive systems be certified to ensure that they will become increasingly efficient and behave appropriately in real-time situations? The bulk of Independent Verification and Validation (IV&V) research of non-deterministic software control systems such as Adaptive Flight Controllers (AFCs) addresses NNs in well-behaved and constrained environments such as simulations and strict process control. However, neither substantive research nor effective IV&V techniques have been found to address AFCs learning in real-time and adapting to live flight conditions. Adaptive flight control systems offer good extensibility into commercial aviation as well as military aviation and transportation. Consequently, this area of IV&V represents an area of growing interest and urgency. ISR proposes to further the current body of knowledge to meet two objectives: research the current IV&V methods and assess where these methods may be applied toward a methodology for the V&V of neural networks; and identify effective methods for IV&V of NNs that learn in real-time, including developing a prototype test bed for IV&V of AFCs. Currently, no practical method exists. ISR will meet these objectives through the tasks identified and described below. First, ISR will conduct a literature review of current IV&V technology. To do this, ISR will collect the existing body of research on IV&V of non-deterministic systems and neural networks. ISR will also develop the framework for disseminating this information through specialized training. This effort will focus on developing NASA's capability to conduct IV&V of neural network systems and to provide training to meet the increasing need for IV&V expertise in such systems.
microRNAs of parasites: current status and future perspectives
USDA-ARS?s Scientific Manuscript database
MicroRNAs (miRNAs) are a class of endogenous non-coding small RNAs regulating gene expression in eukaryotes at the post-transcriptional level. The complex life cycles of parasites may require the ability to respond to environmental and developmental signals through miRNA-mediated gene expression. Ov...
Time-Frequency Signal Analysis And Synthesis The Choice Of A Method And Its Application
NASA Astrophysics Data System (ADS)
Boashash, Boualem
1988-02-01
In this paper, the problem of choosing a method for time-frequency signal analysis is discussed. It is shown that a natural approach leads to the introduction of the concepts of the analytic signal and instantaneous frequency. The Wigner-Ville Distribution (WVD) is a method of analysis based upon these concepts and it is shown that an accurate Time-Frequency representation of a signal can be obtained by using the WVD for the analysis of a class of signals referred to as "asymptotic". For this class of signals, the instantaneous frequency describes an important physical parameter characteristic of the process under investigation. The WVD procedure for signal analysis and synthesis is outlined and its properties are reviewed for deterministic and random signals.
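A minimal, unsmoothed discrete Wigner-Ville distribution can be computed directly from the instantaneous autocorrelation of the analytic signal. In the sketch below (the test chirp and lengths are arbitrary choices), the WVD energy concentrates along the linear-chirp instantaneous frequency:

import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution of an analytic signal x."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        kmax = min(n, N - 1 - n)
        k = np.arange(-kmax, kmax + 1)
        r = x[n + k] * np.conj(x[n - k])   # instantaneous autocorrelation
        K = np.zeros(N, dtype=complex)
        K[k % N] = r                       # wrap negative lags
        W[n] = np.real(np.fft.fft(K))      # FFT over lag -> frequency
    return W

# linear chirp, made analytic via the Hilbert transform
t = np.arange(256) / 256.0
x = hilbert(np.cos(2 * np.pi * (10 * t + 40 * t ** 2)))
W = wigner_ville(x)
print(W.shape, W.argmax(axis=1)[:5])       # peak frequency bin per time sample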
Correlations in electrically coupled chaotic lasers.
Rosero, E J; Barbosa, W A S; Martinez Avila, J F; Khoury, A Z; Rios Leite, J R
2016-09-01
We show how two electrically coupled semiconductor lasers having optical feedback can present simultaneous antiphase-correlated fast power fluctuations and strong in-phase synchronized spikes of chaotic power drops. This quite counterintuitive phenomenon is demonstrated experimentally and confirmed by numerical solutions of a deterministic dynamical system of rate equations. The occurrence of negative and positive cross-correlation between parts of a complex system, depending on the time scale, as proved in our simple arrangement, is relevant for the understanding and characterization of collective properties in complex networks.
Synchronisation of chaos and its applications
NASA Astrophysics Data System (ADS)
Eroglu, Deniz; Lamb, Jeroen S. W.; Pereira, Tiago
2017-07-01
Dynamical networks are important models for the behaviour of complex systems, modelling physical, biological and societal systems, including the brain, food webs, epidemic disease in populations, power grids and many others. Such dynamical networks can exhibit behaviour in which deterministic chaos, exhibiting unpredictability and disorder, coexists with synchronisation, a classical paradigm of order. We survey the main theory behind complete, generalised and phase synchronisation phenomena in simple as well as complex networks and discuss applications to secure communications, parameter estimation and the anticipation of chaos.
NASA Astrophysics Data System (ADS)
Boyd, Alexander B.; Crutchfield, James P.
2016-05-01
We introduce a deterministic chaotic system—the Szilard map—that encapsulates the measurement, control, and erasure protocol by which Maxwellian demons extract work from a heat reservoir. Implementing the demon's control function in a dynamical embodiment, our construction symmetrizes the demon and the thermodynamic system, allowing one to explore their functionality and recover the fundamental trade-off between the thermodynamic costs of dissipation due to measurement and those due to erasure. The map's degree of chaos—captured by the Kolmogorov-Sinai entropy—is the rate of energy extraction from the heat bath. Moreover, an engine's statistical complexity quantifies the minimum necessary system memory for it to function. In this way, dynamical instability in the control protocol plays an essential and constructive role in intelligent thermodynamic systems.
Deterministic Joint Remote Preparation of a Four-Qubit Cluster-Type State via GHZ States
NASA Astrophysics Data System (ADS)
Wang, Hai-bin; Zhou, Xiao-Yan; An, Xing-xing; Cui, Meng-Meng; Fu, De-sheng
2016-08-01
A scheme for the deterministic joint remote preparation of a four-qubit cluster-type state using only two Greenberger-Horne-Zeilinger (GHZ) states as quantum channels is presented. In this scheme, the first sender performs a two-qubit projective measurement according to the real coefficient of the desired state. Then, the other sender utilizes the measurement result and the complex coefficient to perform another projective measurement. To obtain the desired state, the receiver applies appropriate unitary operations to his/her own two qubits and two CNOT operations to the two ancillary ones. Most interestingly, our scheme can achieve unit success probability, i.e., P_suc = 1. Furthermore, comparison reveals that the efficiency is higher than that of most other analogous schemes.
Quantum teleportation via quantum channels with non-maximal Schmidt rank
NASA Astrophysics Data System (ADS)
Solís-Prosser, M. A.; Jiménez, O.; Neves, L.; Delgado, A.
2013-03-01
We study the problem of teleporting unknown pure states of a single qudit via a pure quantum channel with non-maximal Schmidt rank. We relate this process to the discrimination of linearly dependent symmetric states with the help of the maximum-confidence discrimination strategy. We show that with a certain probability, it is possible to teleport with a fidelity larger than the fidelity of optimal deterministic teleportation.
Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen
2013-01-01
In compressed sensing, one takes n < N samples of an N-dimensional vector x_0 using an n × N matrix A, obtaining undersampled measurements y = Ax_0. For random matrices with independent standard Gaussian entries, it is known that, when x_0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (δ, ρ)-phase diagram, with δ = n/N and ρ = k/n, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a set X for four different sets X, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
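One point of such a phase-diagram experiment can be reproduced with basis pursuit posed as a linear program; the small dimensions below are arbitrary, and the paper's ensembles and problem sizes are much larger. The chosen (δ, ρ) = (0.5, 0.2) lies inside the Gaussian success region, so recovery should typically succeed:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, k = 80, 40, 8                      # ambient dim, measurements, sparsity
A = rng.standard_normal((n, N))
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

# min ||x||_1  s.t.  Ax = y, via the standard LP split x = u - v, u, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
print("recovered:", np.allclose(x_hat, x0, atol=1e-6))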
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayes, T.; Smith, K.S.; Severino, F.
A critical capability of the new RHIC low level rf (LLRF) system is the ability to synchronize signals across multiple locations. The 'Update Link' provides this functionality. The 'Update Link' is a deterministic serial data link based on the Xilinx RocketIO protocol that is broadcast over fiber optic cable at 1 gigabit per second (Gbps). The link provides timing events and data packets as well as time stamp information for synchronizing diagnostic data from multiple sources. The new RHIC LLRF was designed to be a flexible, modular system. The system is constructed of numerous independent RF Controller chassis. To provide synchronization among all of these chassis, the Update Link system was designed. The Update Link system provides a low latency, deterministic data path to broadcast information to all receivers in the system. The Update Link system is based on a central hub, the Update Link Master (ULM), which generates the data stream that is distributed via fiber optic links. Downstream chassis have non-deterministic connections back to the ULM that allow any chassis to provide data that is broadcast globally.
Analysis of stochastic model for non-linear volcanic dynamics
NASA Astrophysics Data System (ADS)
Alexandrov, D.; Bashkirtseva, I.; Ryashko, L.
2014-12-01
Motivated by important geophysical applications we consider a dynamic model of the magma-plug system previously derived by Iverson et al. (2006) under the influence of stochastic forcing. Due to the strong nonlinearity of the friction force for a solid plug along its margins, the initial deterministic system exhibits impulsive oscillations. Two types of dynamic behavior of the system under the influence of the parametric stochastic forcing have been found: random trajectories are scattered on both sides of the deterministic cycle or grouped on its internal side only. It is shown that dispersions are highly inhomogeneous along cycles in the presence of noise. The effects of noise-induced shifts, pressure stabilization and localization of random trajectories have been revealed with increasing noise intensity. The plug velocity, pressure and displacement are highly dependent on noise intensity as well. These new stochastic phenomena are related to the nonlinear peculiarities of the deterministic phase portrait. It is demonstrated that the repetitive stick-slip motions of the magma-plug system in the case of stochastic forcing can be connected with drumbeat earthquakes.
Co-evolution with chicken class I genes.
Kaufman, Jim
2015-09-01
The concept of co-evolution (or co-adaptation) has a long history, but application at molecular levels (e.g., 'supergenes' in genetics) is more recent, with a consensus definition still developing. One interesting example is the chicken major histocompatibility complex (MHC). In contrast to typical mammals that have many class I and class I-like genes, only two classical class I genes, two CD1 genes and some non-classical Rfp-Y genes are known in chicken, and all are found on the microchromosome that bears the MHC. Rarity of recombination between the closely linked and polymorphic genes encoding classical class I and TAPs allows co-evolution, leading to a single dominantly expressed class I molecule in each MHC haplotype, with strong functional consequences in terms of resistance to infectious pathogens. Chicken tapasin is highly polymorphic, but co-evolution with TAP and class I genes remains unclear. T-cell receptors, natural killer (NK) cell receptors, and CD8 co-receptor genes are found on non-MHC chromosomes, with some evidence for co-evolution of surface residues and number of genes along the avian and mammalian lineages. Over even longer periods, co-evolution has been invoked to explain how the adaptive immune system of jawed vertebrates arose from closely linked receptor, ligand, and antigen-processing genes in the primordial MHC. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
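For the deterministic route, the reaction rate equations are just an ODE system. A one-species birth-death example makes this concrete (the rates are invented, and the sketch is written in Python rather than the paper's MATLAB):

import numpy as np
from scipy.integrate import solve_ivp

# reaction rate equations for:  0 -> X at rate k1,  X -> 0 at rate k2 * [X]
k1, k2 = 10.0, 0.1

def rre(t, x):
    return [k1 - k2 * x[0]]

sol = solve_ivp(rre, (0.0, 60.0), [0.0], max_step=0.5)
print("concentration approaches k1/k2 =", sol.y[0, -1])   # -> 100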
Exact and approximate stochastic simulation of intracellular calcium dynamics.
Wieder, Nicolas; Fink, Rainer H A; Wegner, Frederic von
2011-01-01
In simulations of chemical systems, the main task is to find an exact or approximate solution of the chemical master equation (CME) that satisfies certain constraints with respect to computation time and accuracy. While Brownian motion simulations of single molecules are often too time consuming to represent the mesoscopic level, the classical Gillespie algorithm is a stochastically exact algorithm that provides satisfying results in the representation of calcium microdomains. Gillespie's algorithm can be approximated via the tau-leap method and the chemical Langevin equation (CLE). Both methods lead to a substantial acceleration in computation time and a relatively small decrease in accuracy. Elimination of the noise terms leads to the classical, deterministic reaction rate equations (RRE). For complex multiscale systems, hybrid simulations are increasingly proposed to combine the advantages of stochastic and deterministic algorithms. An often-used exemplary cell type in this context is the striated muscle cell (e.g., cardiac and skeletal muscle cells). The properties of these cells are well described and they express many common calcium-dependent signaling pathways. The purpose of the present paper is to provide an overview of the aforementioned simulation approaches and their mutual relationships in the spectrum ranging from stochastic to deterministic algorithms.
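The stochastically exact end of this spectrum is Gillespie's direct method: draw an exponential waiting time from the total propensity, then pick a reaction with probability proportional to its propensity. A minimal sketch for the same birth-death process as above (rates assumed):

import numpy as np

rng = np.random.default_rng(42)
k1, k2 = 10.0, 0.1             # birth rate, per-molecule death rate
x, t, t_end = 0, 0.0, 60.0
ts, xs = [0.0], [0]

while t < t_end:
    a = np.array([k1, k2 * x])          # reaction propensities
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)      # time to next reaction
    x += 1 if rng.random() < a[0] / a0 else -1
    ts.append(t); xs.append(x)

print("final count:", x, "(the RRE steady state is k1/k2 = 100)")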
Hilbert complexes of nonlinear elasticity
NASA Astrophysics Data System (ADS)
Angoshtari, Arzhang; Yavari, Arash
2016-12-01
We introduce some Hilbert complexes involving second-order tensors on flat compact manifolds with boundary that describe the kinematics and the kinetics of motion in nonlinear elasticity. We then use the general framework of Hilbert complexes to write Hodge-type and Helmholtz-type orthogonal decompositions for second-order tensors. As some applications of these decompositions in nonlinear elasticity, we study the strain compatibility equations of linear and nonlinear elasticity in the presence of Dirichlet boundary conditions and the existence of stress functions on non-contractible bodies. As an application of these Hilbert complexes in computational mechanics, we briefly discuss the derivation of a new class of mixed finite element methods for nonlinear elasticity.
Nidheesh, N; Abdul Nazeer, K A; Ameer, P M
2017-12-01
Clustering algorithms with steps involving randomness usually give different results on different executions for the same dataset. This non-deterministic nature of algorithms such as the K-Means clustering algorithm limits their applicability in areas such as cancer subtype prediction using gene expression data. It is hard to sensibly compare the results of such algorithms with those of other algorithms. The non-deterministic nature of K-Means is due to its random selection of data points as initial centroids. We propose an improved, density based version of K-Means, which involves a novel and systematic method for selecting initial centroids. The key idea of the algorithm is to select data points which belong to dense regions and which are adequately separated in feature space as the initial centroids. We compared the proposed algorithm to a set of eleven widely used single clustering algorithms and a prominent ensemble clustering algorithm which is being used for cancer data classification, based on the performances on a set of datasets comprising ten cancer gene expression datasets. The proposed algorithm has shown better overall performance than the others. There is a pressing need in the Biomedical domain for simple, easy-to-use and more accurate Machine Learning tools for cancer subtype prediction. The proposed algorithm is simple, easy-to-use and gives stable results. Moreover, it provides comparatively better predictions of cancer subtypes from gene expression data. Copyright © 2017 Elsevier Ltd. All rights reserved.
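The ingredients of such a deterministic initialization can be sketched as follows. This is a simplified stand-in, not the authors' exact criterion: the neighbourhood radius, the density estimate and the separation rule are illustrative choices.

import numpy as np
from sklearn.cluster import KMeans

def density_based_seeds(X, k, radius):
    """Pick k high-density points that are mutually well separated."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    density = (d < radius).sum(axis=1)          # neighbours within radius
    order = np.argsort(-density)                # densest first
    seeds = [X[order[0]]]
    for i in order[1:]:
        if all(np.linalg.norm(X[i] - s) > radius for s in seeds):
            seeds.append(X[i])
        if len(seeds) == k:
            break
    return np.array(seeds)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in ((0, 0), (3, 3), (0, 3))])
init = density_based_seeds(X, k=3, radius=1.0)
km = KMeans(n_clusters=3, init=init, n_init=1).fit(X)   # deterministic result
print(km.cluster_centers_)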
The role of population inertia in predicting the outcome of stage-structured biological invasions.
Guiver, Chris; Dreiwi, Hanan; Filannino, Donna-Maria; Hodgson, Dave; Lloyd, Stephanie; Townley, Stuart
2015-07-01
Deterministic dynamic models for coupled resident and invader populations are considered with the purpose of finding quantities that are effective at predicting when the invasive population will become established asymptotically. A key feature of the models considered is the stage-structure, meaning that the populations are described by vectors of discrete developmental stage- or age-classes. The vector structure permits exotic transient behaviour: phenomena not encountered in scalar models. Analysis using a linear Lyapunov function demonstrates that for the class of population models considered, a large so-called population inertia is indicative of successful invasion. Population inertia is an indicator of transient growth or decline. Furthermore, for the class of models considered, we find that the so-called invasion exponent, an existing index used in models for invasion, is not always a reliable comparative indicator of successful invasion. We highlight these findings through numerical examples and a biological interpretation of why this might be the case is discussed. Copyright © 2015. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Esquível, Manuel L.; Fernandes, José Moniz; Guerreiro, Gracinda R.
2016-06-01
We introduce a schematic formalism for the time evolution of a random population entering some set of classes, such that each member of the population evolves among these classes according to a scheme based on a Markov chain model. We consider that the flow of incoming members is modeled by a time series, and we detail the time series structure of the elements in each of the classes. We present a practical application to data from a credit portfolio of a Cape Verdean bank; after modeling the entering population in two different ways - namely as an ARIMA process and as a deterministic sigmoid-type trend plus a SARMA process for the residuals - we simulate the behavior of the population and compare the results. We find that the second method is more accurate in describing the behavior of the population when compared to the observed values in a direct simulation of the Markov chain.
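The scheme can be mimicked in a few lines: members arrive according to a fitted time-series model (replaced here by a plain Poisson stream for illustration) and then migrate among classes by a Markov transition matrix. The matrix below is invented, with an absorbing "default" class, and is not the bank's fitted model:

import numpy as np

rng = np.random.default_rng(7)
# transition matrix among credit classes: performing / late / default (assumed)
P = np.array([[0.90, 0.08, 0.02],
              [0.30, 0.55, 0.15],
              [0.00, 0.00, 1.00]])      # default is absorbing

T = 120
counts = np.zeros((T, 3), dtype=int)
members = []                            # current class of each member

for t in range(T):
    arrivals = rng.poisson(50)          # stand-in for the fitted time series
    members += [0] * arrivals           # new members enter class 0
    members = [rng.choice(3, p=P[c]) for c in members]
    for c in members:
        counts[t, c] += 1

print(counts[-1])                       # class occupancy at the final period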
Fuzzy membership functions for analysis of high-resolution CT images of diffuse pulmonary diseases.
Almeida, Eliana; Rangayyan, Rangaraj M; Azevedo-Marques, Paulo M
2015-08-01
We propose the use of fuzzy membership functions to analyze images of diffuse pulmonary diseases (DPDs) based on fractal and texture features. The features were extracted from preprocessed regions of interest (ROIs) selected from high-resolution computed tomography images. The ROIs represent five different patterns of DPDs and normal lung tissue. A Gaussian mixture model (GMM) was constructed for each feature, with six Gaussians modeling the six patterns. Feature selection was performed and the GMMs of the five significant features were used. From the GMMs, fuzzy membership functions were obtained by a probability-possibility transformation and further statistical analysis was performed. An average classification accuracy of 63.5% was obtained for the six classes. For four of the six classes, the classification accuracy was above 65%, and the best classification accuracy was 75.5% for one class. The use of fuzzy membership functions to assist in pattern classification is an alternative to deterministic approaches and a way to explore strategies for medical diagnosis.
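In the simplest one-feature view, each class's fitted Gaussian can be turned into a fuzzy membership function by rescaling the density so each mode has possibility one. This peak-normalization is only a crude stand-in for a formal probability-possibility transformation, and all means and spreads below are invented:

import numpy as np
from scipy.stats import norm

# one fitted Gaussian per tissue pattern for a single feature (assumed values)
means = np.array([0.2, 0.4, 0.5, 0.6, 0.8, 1.0])
sds   = np.array([0.05, 0.06, 0.05, 0.07, 0.06, 0.05])

def memberships(x):
    """Fuzzy memberships: each class density rescaled to peak at 1."""
    pdf = norm.pdf(x, means, sds)
    return pdf / norm.pdf(means, means, sds)

x = 0.55
mu = memberships(x)
print(np.round(mu, 3), "-> class", mu.argmax())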
Living in the city: school friendships, diversity and the middle classes.
Vincent, Carol; Neal, Sarah; Iqbal, Humera
2018-06-01
Much of the literature on the urban middle classes describes processes of both affiliation (often to the localities) and disaffiliation (often from some of the non-middle-class residents). In this paper, we consider this situation from a different position, drawing on research exploring whether and how children and adults living in diverse localities develop friendships with those different to themselves in terms of social class and ethnicity. This paper focuses on the interviews with the ethnically diverse, but predominantly white British, middle-class parent participants, considering their attitudes towards social and cultural difference. We emphasize the importance of highlighting inequalities that arise from social class and its intersection with ethnicity in analyses of complex urban populations. The paper's contribution is, first, to examine processes of clustering amongst the white British middle-class parents, particularly in relation to social class. Second, we contrast this process, and its moments of reflection and unease, with the more deliberate and purposeful efforts of one middle-class, Bangladeshi-origin mother who engages in active labour to facilitate relationships across social and ethnic difference. © London School of Economics and Political Science 2017.
Linear dynamics of classical spin as Mobius transformation
Galda, Alexey; Vinokur, Valerii
2017-04-26
Though the overwhelming majority of natural processes occur far from equilibrium, general theoretical approaches to non-equilibrium phase transitions remain scarce. Recent breakthroughs introduced a description of open dissipative systems in terms of non-Hermitian quantum mechanics, enabling the identification of a class of non-equilibrium phase transitions associated with the loss of combined parity (reflection) and time-reversal symmetries. Here we report that the time evolution of a single classical spin (e.g. a monodomain ferromagnet) governed by the Landau-Lifshitz-Gilbert-Slonczewski equation in the absence of magnetic anisotropy terms is described by a Mobius transformation in complex stereographic coordinates. We identify the parity-time symmetry-breaking phase transition occurring in spin-transfer torque-driven linear spin systems as a transition between hyperbolic and loxodromic classes of Mobius transformations, with the critical point of the transition corresponding to the parabolic transformation. This establishes the understanding of non-equilibrium phase transitions as topological transitions in configuration space.
Modeling Defects, Shape Evolution, and Programmed Auto-origami in Liquid Crystal Elastomers
NASA Astrophysics Data System (ADS)
Konya, Andrew; Gimenez-Pinto, Vianney; Selinger, Robin
2016-06-01
Liquid crystal elastomers represent a novel class of programmable shape-transforming materials whose shape change trajectory is encoded in the material’s nematic director field. Using three-dimensional nonlinear finite element elastodynamics simulation, we model a variety of different actuation geometries and device designs: thin films containing topological defects, patterns that induce formation of folds and twists, and a bas-relief structure. The inclusion of finite bending energy in the simulation model reveals features of actuation trajectory that may be absent when bending energy is neglected. We examine geometries with a director pattern uniform through the film thickness encoding multiple regions of positive Gaussian curvature. Simulations indicate that heating such a system uniformly produces a disordered state with curved regions emerging randomly in both directions due to the film’s up/down symmetry. By contrast, applying a thermal gradient by heating the material first on one side breaks up/down symmetry and results in a deterministic trajectory producing a more ordered final shape. We demonstrate that a folding zone design containing cut-out areas accommodates transverse displacements without warping or buckling; and demonstrate that bas-relief and more complex bent/twisted structures can be assembled by combining simple design motifs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoare, Hilary L; Sullivan, Lucy C; Clements, Craig S
2008-03-31
Human leukocyte antigen (HLA)-E is a non-classical major histocompatibility complex class I molecule that binds peptides derived from the leader sequences of other HLA class I molecules. Natural killer cell recognition of these HLA-E molecules, via the CD94-NKG2 natural killer family, represents a central innate mechanism for monitoring major histocompatibility complex expression levels within a cell. The leader sequence-derived peptides bound to HLA-E exhibit very limited polymorphism, yet subtle differences affect the recognition of HLA-E by the CD94-NKG2 receptors. To better understand the basis for this peptide-specific recognition, we determined the structure of HLA-E in complex with two leader peptides, namely, HLA-Cw*07 (VMAPRALLL), which is poorly recognised by CD94-NKG2 receptors, and HLA-G*01 (VMAPRTLFL), a high-affinity ligand of CD94-NKG2 receptors. A comparison of these structures, both of which were determined to 2.5-Å resolution, revealed that allotypic variations in the bound leader sequences do not result in conformational changes in the HLA-E heavy chain, although subtle changes in the conformation of the peptide within the binding groove of HLA-E were evident. Accordingly, our data indicate that the CD94-NKG2 receptors interact with HLA-E in a manner that maximises the ability of the receptors to discriminate between subtle changes in both the sequence and conformation of peptides bound to HLA-E.
PROCEEDINGS OF THE SYMPOSIUM ON SYSTEM THEORY, NEW YORK, N. Y. APRIL 20, 21, 22 1965. VOLUME XV.
The papers presented at the symposium may be grouped as follows: (1) What is system theory; (2) Representations of systems; (3) System dynamics; (4) Non-deterministic systems; (5) Optimal systems; and (6) Applications of system theory.
Universal photonic quantum computation via time-delayed feedback
Pichler, Hannes; Choi, Soonwon; Zoller, Peter; Lukin, Mikhail D.
2017-01-01
We propose and analyze a deterministic protocol to generate two-dimensional photonic cluster states using a single quantum emitter via time-delayed quantum feedback. As a physical implementation, we consider a single atom or atom-like system coupled to a 1D waveguide with a distant mirror, where guided photons represent the qubits, while the mirror allows the implementation of feedback. We identify the class of many-body quantum states that can be produced using this approach and characterize them in terms of 2D tensor network states. PMID:29073057
Entanglement sensitivity to signal attenuation and amplification
NASA Astrophysics Data System (ADS)
Filippov, Sergey N.; Ziman, Mário
2014-07-01
We analyze general laws of continuous-variable entanglement dynamics during the deterministic attenuation and amplification of the physical signal carrying the entanglement. These processes are inevitably accompanied by noises, so we find fundamental limitations on noise intensities that destroy entanglement of Gaussian and non-Gaussian input states. The phase-insensitive amplification Φ_1⊗Φ_2⊗⋯⊗Φ_N with the power gain κ_i ≥ 2 (≈3 dB, i = 1,...,N) is shown to destroy entanglement of any N-mode Gaussian state even in the case of quantum-limited performance. In contrast, we demonstrate non-Gaussian states with the energy of a few photons such that their entanglement survives within a wide range of noises beyond quantum-limited performance for any degree of attenuation or gain. We detect entanglement preservation properties of the channel Φ_1⊗Φ_2, where each mode is deterministically attenuated or amplified. Gaussian states of high energy are shown to be robust to very asymmetric attenuations, whereas non-Gaussian states are at an advantage in the case of symmetric attenuation and general amplification. If Φ_1 = Φ_2, the total noise should not exceed (1/2)√(κ²+1) to guarantee entanglement preservation.
Papenfuss, Anthony T; Feng, Zhi-Ping; Krasnec, Katina; Deakin, Janine E; Baker, Michelle L; Miller, Robert D
2015-07-22
Major histocompatibility complex (MHC) class I genes are found in the genomes of all jawed vertebrates. The evolution of this gene family is closely tied to the evolution of the vertebrate genome. Family members are frequently found in four paralogous regions, which were formed in two rounds of genome duplication in the early vertebrates, but in some species class Is have been subject to additional duplication or translocation, creating additional clusters. The gene family is traditionally grouped into two subtypes: classical MHC class I genes, which are usually MHC-linked, highly polymorphic, expressed in a broad range of tissues and present endogenously-derived peptides to cytotoxic T-cells; and non-classical MHC class I genes, which generally have lower polymorphism, may have tissue-specific expression and have evolved to perform immune-related or non-immune functions. As immune genes can evolve rapidly and are subject to different selection pressures, we hypothesised that there may be divergent, as yet unannotated or uncharacterised class I genes. Application of a novel method of sensitive genome searching of available vertebrate genome sequences revealed a new, extensive sub-family of divergent MHC class I genes, denoted as UT, which has not previously been characterised. These class I genes are found in both American and Australian marsupials, and in monotremes, at an evolutionary chromosomal breakpoint, but are not present in non-mammalian genomes and have been lost from the eutherian lineage. We show that UT family members are expressed in the thymus of the gray short-tailed opossum and in other immune tissues of several Australian marsupials. Structural homology modelling shows that the proteins encoded by this family are predicted to have an open, though short, antigen-binding groove. We have identified a novel sub-family of putatively non-classical MHC class I genes that are specific to marsupials and monotremes. This family was present in the ancestral mammal and is found in extant marsupials and monotremes, but has been lost from the eutherian lineage. The function of this family is as yet unknown; however, their predicted structure may be consistent with presentation of antigens to T-cells.
Mutation Clusters from Cancer Exome.
Kakushadze, Zura; Yu, Willie
2017-08-15
We apply our statistically deterministic machine learning/clustering algorithm *K-means (recently developed in https://ssrn.com/abstract=2908286) to 10,656 published exome samples for 32 cancer types. A majority of cancer types exhibit a mutation clustering structure. Our results are in-sample stable. They are also out-of-sample stable when applied to 1389 published genome samples across 14 cancer types. In contrast, we find in- and out-of-sample instabilities in cancer signatures extracted from exome samples via nonnegative matrix factorization (NMF), a computationally-costly and non-deterministic method. Extracting stable mutation structures from exome data could have important implications for speed and cost, which are critical for early-stage cancer diagnostics, such as novel blood-test methods currently in development.
On the Development of a Deterministic Three-Dimensional Radiation Transport Code
NASA Technical Reports Server (NTRS)
Rockell, Candice; Tweed, John
2011-01-01
Since astronauts on future deep space missions will be exposed to dangerous radiation, there is a need to accurately model the transport of radiation through shielding materials and to estimate the received radiation dose. In response to this need a three-dimensional deterministic code for space radiation transport is now under development. The new code GRNTRN is based on a Green's function solution of the Boltzmann transport equation that is constructed in the form of a Neumann series. Analytical approximations will be obtained for the first three terms of the Neumann series and the remainder will be estimated by a non-perturbative technique. This work discusses progress made to date and exhibits some computations based on the first two Neumann series terms.
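The structure of such a solution is easy to see on a toy linear problem: for (I - K)x = b the Neumann series x = b + Kb + K^2 b + ... converges when the spectral radius of K is below one, and keeping the first few terms while estimating the remainder mirrors the strategy described above. The matrix here is random and illustrative, not a transport kernel:

import numpy as np

rng = np.random.default_rng(3)
K = 0.3 * rng.random((5, 5)) / 5     # small spectral radius -> fast convergence
b = rng.random(5)

x, term = b.copy(), b.copy()
for _ in range(3):                   # first few Neumann terms
    term = K @ term
    x += term

print("truncated series:", x)
print("exact solve:     ", np.linalg.solve(np.eye(5) - K, b))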
"A Complicated Tangle of Circumstances"
ERIC Educational Resources Information Center
Miller, Carole; Saxton, Juliana
2009-01-01
The post-modern curriculum, drawing on chaos and complexity theory, recognises the realities of a world in flux and posits that the teacher and the class are always teetering "in the midst" of chaos, "not linked by chains of causality but [by] layers of meaning, recursive dynamics, non-linear effects and chance" (Osberg 2008,…
Alcaide, Miguel; Liu, Mark
2013-01-01
Genes of the Major Histocompatibility Complex (MHC) have become an important marker for the investigation of adaptive genetic variation in vertebrates because of their critical role in pathogen resistance. However, despite significant advances in the last few years, the characterization of MHC variation in non-model species still remains a challenging task due to the redundancy and high variation of this gene complex. Here we report the utility of a single pair of primers for the cross-amplification of the third exon of MHC class I genes, which encodes the more polymorphic half of the peptide-binding region (PBR), in oscine passerines (songbirds; Aves: Passeriformes), a group especially challenging for MHC characterization due to the presence of large and complex MHC multigene families. In our survey, although the primers failed to amplify exon 3 from two suboscine passerine birds, they amplified exon 3 of multiple MHC class I genes in all 16 species of oscine songbirds tested, yielding a total of 120 sequences. The 16 songbird species belong to 14 different families, primarily within the Passerida, but also in the Corvida. Using a conservative approach based on the analysis of cloned amplicons (n = 16) from each species, we found between 3 and 10 MHC sequences per individual. Each allele repertoire was highly divergent, with the overall number of polymorphic sites per species ranging from 33 to 108 (out of 264 sites) and the average number of nucleotide differences between alleles ranging from 14.67 to 43.67. Our survey in songbirds allowed us to compare macroevolutionary dynamics of exon 3 between songbirds and non-passerine birds. We found compelling evidence of positive selection acting specifically upon peptide-binding codons across birds, and we estimate the strength of diversifying selection in songbirds to be about twice that in non-passerines. Analysis using comparative methods suggests weaker evidence for a higher GC content in the 3rd codon position of exon 3 in non-passerine birds, a pattern that contrasts with among-clade GC patterns found in other avian studies and may suggest different mutational mechanisms. Our primers represent a useful tool for the characterization of functional and evolutionarily relevant MHC variation across the hyperdiverse songbirds. PMID:23781408
Strandh, Maria; Westerdahl, Helena; Pontarp, Mikael; Canbäck, Björn; Dubois, Marie-Pierre; Miquel, Christian; Taberlet, Pierre; Bonadonna, Francesco
2012-01-01
Mate choice for major histocompatibility complex (MHC) compatibility has been found in several taxa, although rarely in birds. MHC is a crucial component in adaptive immunity and by choosing an MHC-dissimilar partner, heterozygosity and potentially broad pathogen resistance is maximized in the offspring. The MHC genotype influences odour cues and preferences in mammals and fish and hence olfactory-based mate choice can occur. We tested whether blue petrels, Halobaena caerulea, choose partners based on MHC compatibility. This bird is long-lived, monogamous and can discriminate between individual odours using olfaction, which makes it exceptionally well suited for this analysis. We screened MHC class I and II B alleles in blue petrels using 454-pyrosequencing and quantified the phylogenetic, functional and allele-sharing similarity between individuals. Partners were functionally more dissimilar at the MHC class II B loci than expected from random mating (p = 0.033), whereas there was no such difference at the MHC class I loci. Phylogenetic and non-sequence-based MHC allele-sharing measures detected no MHC dissimilarity between partners for either MHC class I or II B. Our study provides evidence of mate choice for MHC compatibility in a bird with a high dependency on odour cues, suggesting that MHC odour-mediated mate choice occurs in birds. PMID:22951737
Deterministic versus evidence-based attitude towards clinical diagnosis.
Soltani, Akbar; Moayyeri, Alireza
2007-08-01
Generally, two basic classes have been proposed for the scientific explanation of events. Deductive reasoning emphasizes reaching conclusions about a hypothesis based on verification of universal laws pertinent to that hypothesis, while inductive or probabilistic reasoning explains an event by calculating the probability of that event being related to a given hypothesis. Although both types of reasoning are used in clinical practice, evidence-based medicine stresses the advantages of the second approach for most instances in medical decision making. While 'probabilistic or evidence-based' reasoning seems to involve more mathematical formulas at first glance, this attitude is more dynamic and less imprisoned by the rigidity of mathematics compared with the 'deterministic or mathematical' attitude. In the field of medical diagnosis, appreciation of uncertainty in clinical encounters and utilization of the likelihood ratio as a measure of accuracy seem to be the most important characteristics of evidence-based doctors. Other characteristics include the use of series of tests for refining probability, changing diagnostic thresholds in light of external evidence and the nature of the disease, and attention to confidence intervals to estimate the uncertainty of research-derived parameters.
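The likelihood-ratio arithmetic behind this attitude fits in a few lines; the pre-test probability and LR below are invented numbers for illustration:

def post_test_probability(pretest, lr):
    """Bayes theorem in odds form: post-test odds = pre-test odds * LR."""
    odds = pretest / (1 - pretest)       # probability -> odds
    post_odds = odds * lr
    return post_odds / (1 + post_odds)   # odds -> probability

# e.g. pre-test probability 30%, positive test with LR+ = 8
print(round(post_test_probability(0.30, 8), 3))   # ~0.774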
Hong, Hyunsuk; O'Keeffe, Kevin P; Strogatz, Steven H
2016-10-01
We consider a mean-field model of coupled phase oscillators with quenched disorder in the natural frequencies and coupling strengths. A fraction p of oscillators are positively coupled, attracting all others, while the remaining fraction 1-p are negatively coupled, repelling all others. The frequencies and couplings are deterministically chosen in a manner which correlates them, thereby correlating the two types of disorder in the model. We first explore the effect of this correlation on the system's phase coherence. We find that there is a critical width γ_c in the frequency distribution below which the system spontaneously synchronizes. Moreover, this γ_c is independent of p. Hence, our model and the traditional Kuramoto model (recovered when p = 1) have the same critical width γ_c. We next explore the critical behavior of the system by examining the finite-size scaling and the dynamic fluctuation of the traditional order parameter. We find that the model belongs to the same universality class as the Kuramoto model with deterministically (not randomly) chosen natural frequencies for the case of p < 1.
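A direct simulation of this mean-field model is straightforward. In the sketch below, the Euler stepping, the Lorentzian frequency distribution realized through its quantiles, and the specific pairing of frequencies with couplings are all illustrative choices; the first pN oscillators attract and the rest repel:

import numpy as np

rng = np.random.default_rng(0)
N, p, gamma, dt = 2000, 0.7, 0.2, 0.05
# deterministically chosen, correlated disorder (in the spirit of the model):
# frequencies from the quantiles of a Lorentzian of width gamma
q = (np.arange(N) + 0.5) / N
omega = gamma * np.tan(np.pi * (q - 0.5))
K = np.where(np.arange(N) < p * N, 1.0, -1.0)    # attractive vs repulsive
theta = rng.uniform(0, 2 * np.pi, N)

for _ in range(4000):                            # Euler mean-field update
    z = np.exp(1j * theta).mean()                # complex order parameter
    r, psi = np.abs(z), np.angle(z)
    theta += dt * (omega + K * r * np.sin(psi - theta))

print("coherence r =", np.abs(np.exp(1j * theta).mean()))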
Front propagation and effect of memory in stochastic desertification models with an absorbing state
NASA Astrophysics Data System (ADS)
Herman, Dor; Shnerb, Nadav M.
2017-08-01
Desertification in dryland ecosystems is considered to be a major environmental threat that may lead to devastating consequences. The concern increases when the system admits two alternative steady states and the transition is abrupt and irreversible (catastrophic shift). However, recent studies show that the inherent stochasticity of the birth-death process, when superimposed on the presence of an absorbing state, may lead to a continuous (second order) transition even if the deterministic dynamics supports a catastrophic transition. Following these works we present here a numerical study of a one-dimensional stochastic desertification model, where the deterministic predictions are confronted with the observed dynamics. Our results suggest that a stochastic spatial system allows for a propagating front only when its active phase invades the inactive (desert) one. In the extinction phase one observes transient front propagation followed by a global collapse. In the presence of a seed bank the vegetation state is shown to be more robust against demographic stochasticity, but the transition in that case still belongs to the directed percolation equivalence class.
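The flavor of these results can be reproduced with a one-dimensional contact process, the textbook member of the directed percolation class. The lattice size, rates and half-filled initial front below are arbitrary choices, and the model's seed-bank and memory effects are not included:

import numpy as np

rng = np.random.default_rng(1)
L, lam = 400, 4.0                    # lattice size, colonisation rate
# (the 1D contact process goes extinct for lam below roughly 3.3)
occ = np.zeros(L, dtype=bool)
occ[:L // 2] = True                  # vegetated half facing a desert half

for _ in range(2000 * L):            # random sequential updates
    i = rng.integers(L)
    if occ[i]:
        if rng.random() < 1.0 / (1.0 + lam):
            occ[i] = False           # local extinction at rate 1
        else:
            j = (i + rng.choice((-1, 1))) % L
            occ[j] = True            # colonisation of a neighbour at rate lam

print("vegetated fraction:", occ.mean())   # the front invades for lam = 4.0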
Novel physical constraints on implementation of computational processes
NASA Astrophysics Data System (ADS)
Wolpert, David; Kolchinsky, Artemy
Non-equilibrium statistical physics permits us to analyze computational processes, i.e., ways to drive a physical system such that its coarse-grained dynamics implements some desired map. It is now known how to implement any such desired computation without dissipating work, and what the minimal (dissipationless) work is that such a computation will require (the so-called 'generalized Landauer bound'). We consider how these analyses change if we impose realistic constraints on the computational process. First, we analyze how many degrees of freedom of the system must be controlled, in addition to the ones specifying the information-bearing degrees of freedom, in order to avoid dissipating work during a given computation, when local detailed balance holds. We analyze this issue for deterministic computations, deriving a state-space vs. speed trade-off, and use our results to motivate a measure of the complexity of a computation. Second, we consider computations that are implemented with logic circuits, in which only a small number of degrees of freedom are coupled at a time. We show that the way a computation is implemented using circuits affects its minimal work requirements, and relate these minimal work requirements to information-theoretic measures of complexity.
Decrease of cardiac chaos in congestive heart failure
NASA Astrophysics Data System (ADS)
Poon, Chi-Sang; Merrill, Christopher K.
1997-10-01
The electrical properties of the mammalian heart undergo many complex transitions in normal and diseased states. It has been proposed that the normal heartbeat may display complex nonlinear dynamics, including deterministic chaos, and that such cardiac chaos may be a useful physiological marker for the diagnosis and management of certain heart ailments. However, it is not clear whether the heartbeat series of healthy and diseased hearts are chaotic or stochastic, or whether cardiac chaos represents normal or abnormal behaviour. Here we have used a highly sensitive technique, which is robust to random noise, to detect chaos. We analysed the electrocardiograms from a group of healthy subjects and those with severe congestive heart failure (CHF), a clinical condition associated with a high risk of sudden death. The short-term variations of beat-to-beat interval exhibited strongly and consistently chaotic behaviour in all healthy subjects, but were frequently interrupted by periods of seemingly non-chaotic fluctuations in patients with CHF. Chaotic dynamics in the CHF data, even when discernible, exhibited a high degree of random variability over time, suggesting a weaker form of chaos. These findings suggest that cardiac chaos is prevalent in the healthy heart, and a decrease in such chaos may be indicative of CHF.
Nested polynomial trends for the improvement of Gaussian process-based predictors
NASA Astrophysics Data System (ADS)
Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.
2017-10-01
The role of simulation keeps increasing for the sensitivity analysis and the uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge number of code evaluations. When the computational cost of one particular evaluation of the code is high, such direct approaches, based on the computer code only, are not affordable. Surrogate models therefore have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. When confronted with deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of this mean function, based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.
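As a rough illustration of a trend-plus-GPR surrogate, the sketch below fits a low-order polynomial mean and a zero-mean Gaussian process on the residuals. The nested composition of two polynomials proposed in the paper is not reproduced here; the toy function, sample budget, and scikit-learn usage are assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.5 * x**2       # stand-in for the costly code
X = rng.uniform(-2, 2, 12)                     # a small budget of evaluations
y = f(X)

# Polynomial trend as the (non-constant) GPR mean function...
coeffs = np.polyfit(X, y, deg=2)
trend = np.polyval(coeffs, X)

# ...and a zero-mean Gaussian process on the residuals.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-8)
gp.fit(X.reshape(-1, 1), y - trend)

Xs = np.linspace(-2, 2, 200)
pred = np.polyval(coeffs, Xs) + gp.predict(Xs.reshape(-1, 1))
print("max surrogate error:", np.max(np.abs(pred - f(Xs))))
```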
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurvits, L.
2002-01-01
Classical matching theory can be defined in terms of matrices with nonnegative entries. The notion of Positive operator, central in Quantum Theory, is a natural generalization of matrices with non-negative entries. Based on this point of view, we introduce a definition of perfect Quantum (operator) matching. We show that the new notion inherits many 'classical' properties, but not all of them. This new notion goes somewhere beyond matroids. For separable bipartite quantum states this new notion coincides with the full rank property of the intersection of two corresponding geometric matroids. In the classical situation, permanents are naturally associated with perfect matchings. We introduce an analog of permanents for positive operators, called Quantum Permanent, and show how this generalization of the permanent is related to the Quantum Entanglement. Besides many other things, Quantum Permanents provide new rational inequalities necessary for the separability of bipartite quantum states. Using Quantum Permanents, we give a deterministic poly-time algorithm to solve the Hidden Matroids Intersection Problem and indicate some 'classical' complexity difficulties associated with the Quantum Entanglement. Finally, we prove that the weak membership problem for the convex set of separable bipartite density matrices is NP-HARD.
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel; Tilmant, Amaury
2015-04-01
Stochastic programming methods are better suited to deal with the inherent uncertainty of inflow time series in water resource management. However, one of the most important hurdles to their use in practical implementations is the lack of generalized Decision Support System (DSS) shells, which are usually based on a deterministic approach. The purpose of this contribution is to present a general-purpose DSS shell, named Explicit Stochastic Programming Advanced Tool (ESPAT), able to build and solve stochastic programming problems for most water resource systems. It implements a hydro-economic approach, optimizing the total system benefits as the sum of the benefits obtained by each user. It has been coded using GAMS, and implements a Microsoft Excel interface with a GAMS-Excel link that allows the user to introduce the required data and recover the results. Therefore, no GAMS skills are required to run the program. The tool is divided into four modules according to its capabilities: 1) the ESPATR module, which performs stochastic optimization procedures in surface water systems using a Stochastic Dual Dynamic Programming (SDDP) approach; 2) the ESPAT_RA module, which optimizes coupled surface-groundwater systems using a modified SDDP approach; 3) the ESPAT_SDP module, capable of performing stochastic optimization procedures in small-size surface systems using a standard SDP approach; and 4) the ESPAT_DET module, which implements a deterministic programming procedure using non-linear programming, able to solve deterministic optimization problems in complex surface-groundwater river basins. The case study of the Mijares river basin (Spain) is used to illustrate the method. It consists of two reservoirs in series, one aquifer and four agricultural demand sites currently managed using historical (XIV century) rights, which give priority to the most traditional irrigation district over the XX century agricultural developments. Its size makes it possible to use either the SDP or the SDDP methods. The independent use of surface and groundwater can be examined with and without the aquifer. The ESPAT_DET, ESPATR and ESPAT_SDP modules were executed for the surface system, while the ESPAT_RA and the ESPAT_DET modules were run for the surface-groundwater system. The surface system's results show similar performance for the ESPAT_SDP and ESPATR modules, which outperform the current policies while being outperformed by the ESPAT_DET results, which have the advantage of perfect foresight. The surface-groundwater system's results show a robust situation in which the differences between the modules' results and the current policies are smaller, due to the use of pumped groundwater in the XX century crops when surface water is scarce. The results are realistic, with the deterministic optimization outperforming the stochastic one, which in turn outperforms the current policies, showing that the tool is able to stochastically optimize river-aquifer water resource systems. We are currently working on the application of these tools to the analysis of changes in system operation under global change conditions. ACKNOWLEDGEMENT: This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) funds.
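For orientation, a toy standard SDP of the kind the ESPAT_SDP module implements for small surface systems might look as follows; the reservoir size, inflow scenarios, and benefit function are invented for illustration and are unrelated to the Mijares basin data:

```python
import numpy as np

# Minimal backward SDP for one reservoir: storage and inflows are
# discretized, releases earn a concave benefit, and the recursion
# V_t(s) = max_r E_q[ b(r) + V_{t+1}(s + q - r) ] is solved backwards.
S = np.arange(0, 11)                    # storage levels (hm^3)
Q = np.array([1, 3, 5])                 # inflow scenarios (hm^3/stage)
PQ = np.array([0.3, 0.4, 0.3])          # scenario probabilities
T = 12                                  # monthly stages
benefit = lambda r: np.sqrt(r)          # concave benefit of release

V = np.zeros(len(S))                    # terminal value function
policy = np.zeros((T, len(S)), dtype=int)
for t in reversed(range(T)):
    Vt = np.full(len(S), -np.inf)
    for i, s in enumerate(S):
        for r in range(0, int(s + Q.max()) + 1):
            if not (s + Q - r >= 0).all():
                continue                # release infeasible in some scenario
            nxt = np.clip(s + Q - r, S[0], S[-1])   # spill above capacity
            val = benefit(r) + PQ @ V[nxt]
            if val > Vt[i]:
                Vt[i], policy[t, i] = val, r
    V = Vt

print(policy[0])   # optimal first-stage release for each storage level
```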
Vieluf, Solveig; Sleimen-Malkoun, Rita; Voelcker-Rehage, Claudia; Jirsa, Viktor; Reuter, Eva-Maria; Godde, Ben; Temprado, Jean-Jacques; Huys, Raoul
2017-07-01
From the conceptual and methodological framework of the dynamical systems approach, force control results from complex interactions of various subsystems yielding observable behavioral fluctuations, which comprise both deterministic (predictable) and stochastic (noise-like) dynamical components. Here, we investigated these components contributing to the observed variability in force control in groups of participants differing in age and expertise level. To this aim, young (18-25 yr) as well as late middle-aged (55-65 yr) novices and experts (precision mechanics) performed a force maintenance and a force modulation task. Results showed that whereas the amplitude of force variability did not differ across groups in the maintenance tasks, in the modulation task it was higher for late middle-aged novices than for experts and higher for both these groups than for young participants. Within both tasks and for all groups, stochastic fluctuations were lowest where the deterministic influence was smallest. However, although all groups showed similar dynamics underlying force control in the maintenance task, a group effect was found for deterministic and stochastic fluctuations in the modulation task. The latter findings imply that both components were involved in the observed group differences in the variability of force fluctuations in the modulation task. These findings suggest that between groups the general characteristics of the dynamics do not differ in either task and that force control is more affected by age than by expertise. However, expertise seems to counteract some of the age effects. NEW & NOTEWORTHY Stochastic and deterministic dynamical components contribute to force production. Dynamical signatures differ between force maintenance and cyclic force modulation tasks but hardly between age and expertise groups. Differences in both stochastic and deterministic components are associated with group differences in behavioral variability, and observed behavioral variability is more strongly task dependent than person dependent. Copyright © 2017 the American Physiological Society.
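The abstract does not spell out how the deterministic and stochastic components were separated; one standard way to do this for a one-dimensional time series is to estimate drift and diffusion terms from conditional moments of the increments (the first two Kramers-Moyal coefficients), sketched here on a synthetic Ornstein-Uhlenbeck signal rather than force data:

```python
import numpy as np

def kramers_moyal(x, dt, nbins=40):
    """Drift D1(x) and diffusion D2(x) from conditional moments of the
    increments: D1 = <dx | x>/dt, D2 = <dx^2 | x>/(2 dt)."""
    dx = np.diff(x)
    edges = np.linspace(x.min(), x.max(), nbins + 1)
    idx = np.digitize(x[:-1], edges) - 1
    centers = 0.5 * (edges[:-1] + edges[1:])
    D1, D2 = np.full(nbins, np.nan), np.full(nbins, np.nan)
    for b in range(nbins):
        m = idx == b
        if m.sum() > 50:                  # require enough samples per bin
            D1[b] = dx[m].mean() / dt
            D2[b] = (dx[m] ** 2).mean() / (2.0 * dt)
    return centers, D1, D2

# Synthetic test: Ornstein-Uhlenbeck, true drift -x and diffusion 0.125.
rng = np.random.default_rng(1)
dt, n = 1e-3, 400_000
x = np.empty(n); x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - x[i] * dt + 0.5 * rng.normal(0.0, np.sqrt(dt))
c, D1, D2 = kramers_moyal(x, dt)
print(c[20], D1[20], D2[20])   # near x ~ 0: D1 ~ 0, D2 ~ 0.125
```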
Hunt, R.J.; Feinstein, D.T.; Pint, C.D.; Anderson, M.P.
2006-01-01
As part of the USGS Water, Energy, and Biogeochemical Budgets project and the NSF Long-Term Ecological Research work, a parameter estimation code was used to calibrate a deterministic groundwater flow model of the Trout Lake Basin in northern Wisconsin. Observations included traditional calibration targets (head, lake stage, and baseflow observations) as well as unconventional targets such as groundwater flows to and from lakes, depth of a lake water plume, and time of travel. The unconventional data types were important for parameter estimation convergence and allowed the development of a more detailed parameterization capable of resolving model objectives with well-constrained parameter values. Independent estimates of groundwater inflow to lakes were most important for constraining lakebed leakance and the depth of the lake water plume was important for determining hydraulic conductivity and conceptual aquifer layering. The most important target overall, however, was a conventional regional baseflow target that led to correct distribution of flow between sub-basins and the regional system during model calibration. The use of an automated parameter estimation code: (1) facilitated the calibration process by providing a quantitative assessment of the model's ability to match disparate observed data types; and (2) allowed assessment of the influence of observed targets on the calibration process. The model calibration required the use of a 'universal' parameter estimation code in order to include all types of observations in the objective function. The methods described in this paper help address issues of watershed complexity and non-uniqueness common to deterministic watershed models. © 2005 Elsevier B.V. All rights reserved.
Deconstructing zero: resurgence, supersymmetry and complex saddles
Dunne, Gerald V.; Ünsal, Mithat
2016-12-01
We explain how a vanishing, or truncated, perturbative expansion, such as often arises in semi-classically tractable supersymmetric theories, can nevertheless be related to fluctuations about non-perturbative sectors via resurgence. We also demonstrate that, in the same class of theories, the vanishing of the ground state energy (unbroken supersymmetry) can be attributed to the cancellation between a real saddle and a complex saddle (with hidden topological angle π), and positivity of the ground state energy (broken supersymmetry) can be interpreted as the dominance of complex saddles. In either case, despite the fact that the ground state energy is zero to all orders in perturbation theory, all orders of fluctuations around non-perturbative saddles are encoded in the perturbative E(N, g). Finally, we illustrate these ideas with examples from supersymmetric quantum mechanics and quantum field theory.
Signal Processing Applications Of Wigner-Ville Analysis
NASA Astrophysics Data System (ADS)
Whitehouse, H. J.; Boashash, B.
1986-04-01
The Wigner-Ville distribution (WVD), a form of time-frequency analysis, is shown to be useful in the analysis of a variety of non-stationary signals both deterministic and stochastic. The properties of the WVD are reviewed and alternative methods of calculating the WVD are discussed. Applications are presented.
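A direct O(n^2) implementation of the discrete WVD, using the analytic signal to suppress aliasing, can be sketched as follows (the chirp test signal and frequency-axis conventions are illustrative):

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a real signal via the
    analytic signal; W = FFT over the lag of z(t+tau) * conj(z(t-tau))."""
    z = hilbert(x)
    n = len(z)
    acf = np.zeros((n, n), dtype=complex)   # instantaneous autocorrelation
    for t in range(n):
        taumax = min(t, n - 1 - t)
        for tau in range(-taumax, taumax + 1):
            acf[t, tau % n] = z[t + tau] * np.conj(z[t - tau])
    return np.real(np.fft.fft(acf, axis=1))

t = np.linspace(0.0, 1.0, 256, endpoint=False)
chirp = np.cos(2.0 * np.pi * (10.0 * t + 40.0 * t ** 2))  # linear FM signal
tfr = wigner_ville(chirp)
# Energy should concentrate along the instantaneous frequency 10 + 80 t.
print(tfr.shape, tfr[128].argmax())
```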
Non-Lipschitzian dynamics for neural net modelling
NASA Technical Reports Server (NTRS)
Zak, Michail
1989-01-01
Failure of the Lipschitz condition in unstable equilibrium points of dynamical systems leads to a multiple-choice response to an initial deterministic input. The evolution of such systems is characterized by a special type of unpredictability measured by unbounded Liapunov exponents. Possible relation of these systems to future neural networks is discussed.
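The failure mode is easy to reproduce numerically: for dx/dt = x^(1/3) the Lipschitz condition fails at x = 0, so x(t) = 0 and x(t) = ±(2t/3)^(3/2) all solve the same initial-value problem. The toy below (not Zak's neural-net construction) shows how indistinguishable initial data lead to qualitatively different fates:

```python
import numpy as np

def rhs(x):
    # Cube root, defined for negative arguments as well.
    return np.sign(x) * np.abs(x) ** (1.0 / 3.0)

def euler(x0, dt=1e-3, steps=3000):
    x = x0
    for _ in range(steps):
        x += dt * rhs(x)
    return x

# Three initial conditions that are indistinguishable in any finite-precision
# sense end up in qualitatively different states, reflecting the unbounded
# local Liapunov exponent at the equilibrium x = 0.
for eps in (0.0, 1e-12, -1e-12):
    print(eps, euler(eps))
```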
Two deterministic models (US EPA’s Office of Pesticide Programs Residential Standard Operating Procedures (OPP Residential SOPs) and Draft Protocol for Measuring Children’s Non-Occupational Exposure to Pesticides by all Relevant Pathways (Draft Protocol)) and four probabilistic mo...
The Evolution of Human Longevity: Toward a Biocultural Theory.
ERIC Educational Resources Information Center
Mayer, Peter J.
Homo sapiens is the only extant species for which there exists a significant post-reproductive period in the normal lifespan. Explanations for the evolution of this species-specific trait are possible through "non-deterministic" theories of aging positing "wear and tear" or the failure of nature to eliminate imperfection, or…
Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill
2012-01-01
In video analytics, robust observation detection is very important as the content of the videos varies a lot, especially for tracking implementation. Contrary to the image processing field, the problems of blurring, moderate deformation, low illumination surroundings, illumination change and homogenous texture are normally encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness to complex scenes by fusing both feature- and template-based recognition methods. While feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two methods of PBOD—the deterministic and probabilistic approaches—have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordination. For the deterministic method, patch matching is done in a 2-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the second approach, patch matching is done probabilistically by modelling the histograms of the patches by Poisson distributions for both RGB and HSV colour models. Then, maximum likelihood is applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. This algorithm is best implemented as a complement to other, simpler detection methods due to its heavy processing requirements. PMID:23202226
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
NASA Astrophysics Data System (ADS)
Klügel, J.
2006-12-01
Deterministic scenario-based seismic hazard analysis has a long tradition in earthquake engineering for developing the design basis of critical infrastructures like dams, transport infrastructures, chemical plants and nuclear power plants. For many applications beyond the design of infrastructures it is of interest to assess the efficiency of the design measures taken. These applications require a method that allows one to perform a meaningful quantitative risk analysis. A new method for a probabilistic scenario-based seismic risk analysis has been developed, based on a probabilistic extension of proven deterministic methods like the MCE methodology. The input data required for the method are entirely based on the information which is necessary to perform any meaningful seismic hazard analysis. The method is based on the probabilistic risk analysis approach common for applications in nuclear technology, developed originally by Kaplan & Garrick (1981). It rests on (1) a classification of earthquake events into different size classes (by magnitude), (2) the evaluation of the frequency of occurrence of events assigned to the different classes (frequency of initiating events), (3) the development of bounding critical scenarios assigned to each class based on the solution of an optimization problem, and (4) the evaluation of the conditional probability of exceedance of critical design parameters (vulnerability analysis). The advantage of the method in comparison with traditional PSHA consists in (1) its flexibility, allowing the use of different probabilistic models for earthquake occurrence as well as the incorporation of advanced physical models into the analysis, (2) the mathematically consistent treatment of uncertainties, and (3) the explicit consideration of the lifetime of the critical structure as a criterion to formulate different risk goals. The method was applied to the evaluation of the risk of production interruption losses of a nuclear power plant during its residual lifetime.
Stochastic Processes in Physics: Deterministic Origins and Control
NASA Astrophysics Data System (ADS)
Demers, Jeffery
Stochastic processes are ubiquitous in the physical sciences and engineering. While often used to model imperfections and experimental uncertainties in the macroscopic world, stochastic processes can attain deeper physical significance when used to model the seemingly random and chaotic nature of the underlying microscopic world. Nowhere is this notion more prevalent than in the field of stochastic thermodynamics - a modern systematic framework used to describe mesoscale systems in strongly fluctuating thermal environments - which has revolutionized our understanding of, for example, molecular motors, DNA replication, far-from-equilibrium systems, and the laws of macroscopic thermodynamics as they apply to the mesoscopic world. With progress, however, come further challenges and deeper questions, most notably in the thermodynamics of information processing and feedback control. Here it is becoming increasingly apparent that, due to divergences and subtleties of interpretation, the deterministic foundations of the stochastic processes themselves must be explored and understood. This thesis presents a survey of stochastic processes in physical systems, the deterministic origins of their emergence, and the subtleties associated with controlling them. First, we study time-dependent billiards in the quivering limit - a limit where a billiard system is indistinguishable from a stochastic system, and where the simplified stochastic system allows us to view issues associated with deterministic time-dependent billiards in a new light and address some long-standing problems. Then, we embark on an exploration of the deterministic microscopic Hamiltonian foundations of non-equilibrium thermodynamics, and we find that important results from mesoscopic stochastic thermodynamics have simple microscopic origins which would not be apparent without the benefit of both the micro and meso perspectives. Finally, we study the problem of stabilizing a stochastic Brownian particle with feedback control, and we find that in order to avoid paradoxes involving the first law of thermodynamics, we need a model for the fine details of the thermal driving noise. The underlying theme of this thesis is the argument that the deterministic microscopic perspective and the stochastic mesoscopic perspective are both important and useful, and when used together, we can more deeply and satisfyingly understand the physics occurring over either scale.
On the generation of log-Lévy distributions and extreme randomness
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2011-10-01
The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution; they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of a deterministic underlying setting, and the latter in the case of a stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot's extreme randomness.
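The log-normal half of this mechanism is simple to demonstrate: under multiplicative growth with well-behaved (finite-variance) rates, log X is a sum of increments and the CLT applies. The sketch below illustrates only that deterministic-setting limit; heavy-tailed rates, which would produce the log-Lévy case, are not simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_paths = 400, 50_000
logX = np.zeros(n_paths)
for _ in range(n_steps):
    # X <- X * (1 + r) with bounded, finite-variance growth rates r.
    logX += np.log1p(rng.uniform(-0.05, 0.06, n_paths))

z = (logX - logX.mean()) / logX.std()
print("mean, sd of log X:", logX.mean(), logX.std())
print("skewness of log X (near 0 for a normal law):", np.mean(z ** 3))
```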
Determining the bias and variance of a deterministic finger-tracking algorithm.
Morash, Valerie S; van der Velden, Bas H M
2016-06-01
Finger tracking has the potential to expand haptic research and applications, as eye tracking has done in vision research. In research applications, it is desirable to know the bias and variance associated with a finger-tracking method. However, assessing the bias and variance of a deterministic method is not straightforward. Multiple measurements of the same finger position data will not produce different results, implying zero variance. Here, we present a method of assessing deterministic finger-tracking variance and bias through comparison to a non-deterministic measure. A proof-of-concept is presented using a video-based finger-tracking algorithm developed for the specific purpose of tracking participant fingers during a psychological research study. The algorithm uses ridge detection on videos of the participant's hand, and estimates the location of the right index fingertip. The algorithm was evaluated using data from four participants, who explored tactile maps using only their right index finger and all right-hand fingers. The algorithm identified the index fingertip in 99.78% of one-finger video frames and 97.55% of five-finger video frames. Although the algorithm produced slightly biased and more dispersed estimates relative to a human coder, these differences (x = 0.08 cm, y = 0.04 cm) and standard deviations (σ_x = 0.16 cm, σ_y = 0.21 cm) were small compared to the size of a fingertip (1.5-2.0 cm). Some example finger-tracking results are provided where corrections are made using the bias and variance estimates.
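The comparison logic reduces to simple statistics on per-frame differences between the deterministic tracker and the non-deterministic reference; the sketch below uses fabricated numbers of roughly the magnitudes reported above, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
truth = np.cumsum(rng.normal(0.0, 0.1, 300))            # fingertip x (cm)
algorithm = truth + 0.08 + rng.normal(0.0, 0.16, 300)   # tracker output
human = truth + rng.normal(0.0, 0.05, 300)              # human-coded reference

# Per-frame differences against the reference yield the bias and the
# dispersion of the deterministic algorithm's estimates.
diff = algorithm - human
print(f"bias = {diff.mean():.2f} cm, sd = {diff.std(ddof=1):.2f} cm")
```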
Modeling stochastic noise in gene regulatory systems
Meister, Arwen; Du, Chao; Li, Ye Henry; Wong, Wing Hung
2014-01-01
The Master equation is considered the gold standard for modeling the stochastic mechanisms of gene regulation in molecular detail, but it is too complex to solve exactly in most cases, so approximation and simulation methods are essential. However, there is still a lack of consensus about the best way to carry these out. To help clarify the situation, we review Master equation models of gene regulation, theoretical approximations based on an expansion method due to N.G. van Kampen and R. Kubo, and simulation algorithms due to D.T. Gillespie and P. Langevin. Expansion of the Master equation shows that for systems with a single stable steady-state, the stochastic model reduces to a deterministic model in a first-order approximation. Additional theory, also due to van Kampen, describes the asymptotic behavior of multistable systems. To support and illustrate the theory and provide further insight into the complex behavior of multistable systems, we perform a detailed simulation study comparing the various approximation and simulation methods applied to synthetic gene regulatory systems with various qualitative characteristics. The simulation studies show that for large stochastic systems with a single steady-state, deterministic models are quite accurate, since the probability distribution of the solution has a single peak tracking the deterministic trajectory whose variance is inversely proportional to the system size. In multistable stochastic systems, large fluctuations can cause individual trajectories to escape from the domain of attraction of one steady-state and be attracted to another, so the system eventually reaches a multimodal probability distribution in which all stable steady-states are represented proportional to their relative stability. However, since the escape time scales exponentially with system size, this process can take a very long time in large systems. PMID:25632368
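For a concrete instance of the simulation side, here is a minimal Gillespie stochastic simulation algorithm for a birth-death process, a stand-in for the simplest gene expression model; in the single-steady-state regime discussed above, the trajectory fluctuates around the deterministic fixed point k/g:

```python
import numpy as np

def gillespie_birth_death(k=50.0, g=1.0, x0=0, t_end=20.0, rng=None):
    """Gillespie SSA for 0 -> X (rate k), X -> 0 (rate g*x);
    the deterministic limit is dx/dt = k - g*x with fixed point k/g."""
    rng = rng or np.random.default_rng()
    t, x = 0.0, x0
    ts, xs = [t], [x]
    while t < t_end:
        a1, a2 = k, g * x                 # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)    # time to the next reaction
        x += 1 if rng.uniform() * a0 < a1 else -1
        ts.append(t); xs.append(x)
    return np.array(ts), np.array(xs)

ts, xs = gillespie_birth_death()
# Fluctuations around k/g = 50 with near-Poisson variance, consistent with
# the first-order (deterministic) reduction described in the review.
print(xs[len(xs) // 2:].mean(), xs[len(xs) // 2:].var())
```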
Stochastic blockmodeling of the modules and core of the Caenorhabditis elegans connectome.
Pavlovic, Dragana M; Vértes, Petra E; Bullmore, Edward T; Schafer, William R; Nichols, Thomas E
2014-01-01
Recently, there has been much interest in the community structure or mesoscale organization of complex networks. This structure is characterised either as a set of sparsely inter-connected modules or as a highly connected core with a sparsely connected periphery. However, it is often difficult to disambiguate these two types of mesoscale structure or, indeed, to summarise the full network in terms of the relationships between its mesoscale constituents. Here, we estimate a community structure with a stochastic blockmodel approach, the Erdős-Rényi Mixture Model, and compare it to the much more widely used deterministic methods, such as the Louvain and Spectral algorithms. We used the Caenorhabditis elegans (C. elegans) nervous system (connectome) as a model system in which biological knowledge about each node or neuron can be used to validate the functional relevance of the communities obtained. The deterministic algorithms derived communities with 4-5 modules, defined by sparse inter-connectivity between all modules. In contrast, the stochastic Erdős-Rényi Mixture Model estimated a community structure with 9 blocks or groups, which comprised a similar set of modules but also included a clearly defined core made of 2 small groups. We show that the "core-in-modules" decomposition of the worm brain network, estimated by the Erdős-Rényi Mixture Model, is more compatible with prior biological knowledge about the C. elegans nervous system than the purely modular decomposition defined deterministically. We also show that the blockmodel can be used both to generate stochastic realisations (simulations) of the biological connectome, and to compress the network into a small number of super-nodes and their connectivity. We expect that the Erdős-Rényi Mixture Model may be useful for investigating the complex community structures in other (nervous) systems.
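The generative view in the last sentence is easy to sketch: in a stochastic blockmodel, edges are independent Bernoulli variables whose probabilities depend only on the endpoints' group labels. The block matrix below, with two modules plus a densely connected core, is an invented illustration of the "core-in-modules" structure, not the fitted C. elegans parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
sizes = [40, 40, 10]                      # module, module, core (illustrative)
P = np.array([[0.30, 0.02, 0.40],
              [0.02, 0.30, 0.40],
              [0.40, 0.40, 0.80]])        # within/between-block edge probs
labels = np.repeat(np.arange(3), sizes)
n = labels.size

# Sample an undirected adjacency matrix without self-loops.
A = rng.uniform(size=(n, n)) < P[labels[:, None], labels[None, :]]
A = np.triu(A, 1); A = A | A.T

# Empirical block densities recovered from one realisation approximate P.
for a in range(3):
    for b in range(3):
        mask = (labels[:, None] == a) & (labels[None, :] == b)
        print(a, b, round(A[mask].mean(), 2))
```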
Efficient quantum computing using coherent photon conversion.
Langford, N K; Ramelow, S; Prevedel, R; Munro, W J; Milburn, G J; Zeilinger, A
2011-10-12
Single photons are excellent quantum information carriers: they were used in the earliest demonstrations of entanglement and in the production of the highest-quality entanglement reported so far. However, current schemes for preparing, processing and measuring them are inefficient. For example, down-conversion provides heralded, but randomly timed, single photons, and linear optics gates are inherently probabilistic. Here we introduce a deterministic process--coherent photon conversion (CPC)--that provides a new way to generate and process complex, multiquanta states for photonic quantum information applications. The technique uses classically pumped nonlinearities to induce coherent oscillations between orthogonal states of multiple quantum excitations. One example of CPC, based on a pumped four-wave-mixing interaction, is shown to yield a single, versatile process that provides a full set of photonic quantum processing tools. This set satisfies the DiVincenzo criteria for a scalable quantum computing architecture, including deterministic multiqubit entanglement gates (based on a novel form of photon-photon interaction), high-quality heralded single- and multiphoton states free from higher-order imperfections, and robust, high-efficiency detection. It can also be used to produce heralded multiphoton entanglement, create optically switchable quantum circuits and implement an improved form of down-conversion with reduced higher-order effects. Such tools are valuable building blocks for many quantum-enabled technologies. Finally, using photonic crystal fibres we experimentally demonstrate quantum correlations arising from a four-colour nonlinear process suitable for CPC and use these measurements to study the feasibility of reaching the deterministic regime with current technology. Our scheme, which is based on interacting bosonic fields, is not restricted to optical systems but could also be implemented in optomechanical, electromechanical and superconducting systems with extremely strong intrinsic nonlinearities. Furthermore, exploiting higher-order nonlinearities with multiple pump fields yields a mechanism for multiparty mediation of the complex, coherent dynamics.
Evolution with Stochastic Fitness and Stochastic Migration
Rice, Sean H.; Papadopoulos, Anthony
2009-01-01
Background Migration between local populations plays an important role in evolution - influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. Methodology/Principal Findings We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biased - overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways, one result being that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than do other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. Conclusions/Significance As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values. The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory. PMID:19816580
Crabtree, Nathaniel M; Moore, Jason H; Bowyer, John F; George, Nysia I
2017-01-01
A computational evolution system (CES) is a knowledge discovery engine that can identify subtle, synergistic relationships in large datasets. Pareto optimization allows CESs to balance accuracy with model complexity when evolving classifiers. Using Pareto optimization, a CES is able to identify a very small number of features while maintaining high classification accuracy. A CES can be designed for various types of data, and the user can exploit expert knowledge about the classification problem in order to improve discrimination between classes. These characteristics give CES an advantage over other classification and feature selection algorithms, particularly when the goal is to identify a small number of highly relevant, non-redundant biomarkers. Previously, CESs have been developed only for binary class datasets. In this study, we developed a multi-class CES. The multi-class CES was compared to three common feature selection and classification algorithms: support vector machine (SVM), random k-nearest neighbor (RKNN), and random forest (RF). The algorithms were evaluated on three distinct multi-class RNA sequencing datasets. The comparison criteria were run-time, classification accuracy, number of selected features, and stability of selected feature set (as measured by the Tanimoto distance). The performance of each algorithm was data-dependent. CES performed best on the dataset with the smallest sample size, indicating that CES has a unique advantage since the accuracy of most classification methods suffer when sample size is small. The multi-class extension of CES increases the appeal of its application to complex, multi-class datasets in order to identify important biomarkers and features.
Tian, Ye; Huang, Xiaoqiang; Zhu, Yushan
2015-08-01
Enzyme amino-acid sequences at ligand-binding interfaces are evolutionarily optimized for reactions, and the natural conformation of an enzyme-ligand complex must have a low free energy relative to alternative conformations in native-like or non-native sequences. Based on this assumption, a combined energy function was developed for enzyme design and then evaluated by recapitulating native enzyme sequences at ligand-binding interfaces for 10 enzyme-ligand complexes. In this energy function, the electrostatic interaction between polar or charged atoms at buried interfaces is described by an explicitly orientation-dependent hydrogen-bonding potential and a pairwise-decomposable generalized Born model based on the general side chain in the protein design framework. The energy function is augmented with a pairwise surface-area based hydrophobic contribution for nonpolar atom burial. Using this function, on average, 78% of the amino acids at ligand-binding sites were predicted correctly in the minimum-energy sequences, whereas 84% were predicted correctly in the most-similar sequences, which were selected from the top 20 sequences for each enzyme-ligand complex. Hydrogen bonds at the enzyme-ligand binding interfaces in the 10 complexes were usually recovered with the correct geometries. The binding energies calculated using the combined energy function helped to discriminate the active sequences from a pool of alternative sequences that were generated by repeatedly solving a series of mixed-integer linear programming problems for sequence selection with increasing integer cuts.
Some loopholes to save quantum nonlocality
NASA Astrophysics Data System (ADS)
Accardi, Luigi
2005-02-01
The EPR-chameleon experiment has closed a long-standing debate between the supporters of quantum nonlocality and the thesis of quantum probability, according to which the essence of the quantum peculiarity is non-Kolmogorovianity rather than non-locality. The theory of adaptive systems (symbolized by the chameleon effect) provides a natural intuition for the emergence of non-Kolmogorovian statistics from classical deterministic dynamical systems. These developments are quickly reviewed and, in conclusion, some comments are introduced on recent attempts to "reconstruct history" along the lines described by Orwell in "1984".
POD Model Reconstruction for Gray-Box Fault Detection
NASA Technical Reports Server (NTRS)
Park, Han; Zak, Michail
2007-01-01
Proper orthogonal decomposition (POD) is the mathematical basis of a method of constructing low-order mathematical models for the "gray-box" fault-detection algorithm that is a component of a diagnostic system known as beacon-based exception analysis for multi-missions (BEAM). POD has been successfully applied in reducing computational complexity by generating simple models that can be used for control and simulation for complex systems such as fluid flows. In the present application to BEAM, POD brings the same benefits to automated diagnosis. BEAM is a method of real-time or offline, automated diagnosis of a complex dynamic system. The gray-box approach makes it possible to utilize incomplete or approximate knowledge of the dynamics of the system that one seeks to diagnose. In the gray-box approach, a deterministic model of the system is used to filter a time series of system sensor data to remove the deterministic components of the time series from further examination. What is left after the filtering operation is a time series of residual quantities that represent the unknown (or at least unmodeled) aspects of the behavior of the system. Stochastic modeling techniques are then applied to the residual time series. The procedure for detecting abnormal behavior of the system then becomes one of looking for statistical differences between the residual time series and the predictions of the stochastic model.
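The POD step itself is compactly expressed as an SVD of a mean-centered snapshot matrix; the sketch below, on a synthetic travelling-wave "sensor" dataset, shows the low-order reconstruction and the residual time series that the gray-box approach would hand to the stochastic model:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 200)
t = np.linspace(0, 10, 500)
# Made-up sensor data: travelling wave plus measurement noise (space x time).
snapshots = (np.sin(2 * np.pi * (x[:, None] - 0.1 * t[None, :]))
             + 0.05 * rng.normal(size=(200, 500)))

mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.99)) + 1     # modes for 99% of the energy
print("POD modes kept:", r)                    # 2 for a pure travelling wave

# Deterministic low-order reconstruction; the residual is what remains for
# the stochastic model after the deterministic filter.
recon = mean + U[:, :r] @ (U[:, :r].T @ (snapshots - mean))
residual = snapshots - recon
print("residual std:", residual.std())         # close to the noise level
```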
Quantum logic using correlated one-dimensional quantum walks
NASA Astrophysics Data System (ADS)
Lahini, Yoav; Steinbrecher, Gregory R.; Bookatz, Adam D.; Englund, Dirk
2018-01-01
Quantum Walks are unitary processes describing the evolution of an initially localized wavefunction on a lattice potential. The complexity of the dynamics increases significantly when several indistinguishable quantum walkers propagate on the same lattice simultaneously, as these develop non-trivial spatial correlations that depend on the particles' quantum statistics, mutual interactions, initial positions, and the lattice potential. We show that even in the simplest case of a quantum walk on a one-dimensional graph, these correlations can be shaped to yield a complete set of compact quantum logic operations. We provide detailed recipes for implementing quantum logic on one-dimensional quantum walks in two general cases. For non-interacting bosons—such as photons in waveguide lattices—we find high-fidelity probabilistic quantum gates that could be integrated into linear optics quantum computation schemes. For interacting quantum-walkers on a one-dimensional lattice—a situation that has recently been demonstrated using ultra-cold atoms—we find deterministic logic operations that are universal for quantum information processing. The suggested implementation requires minimal resources and a level of control that is within reach using recently demonstrated techniques. Further work is required to address error-correction.
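As a baseline for intuition, the single-walker version of such dynamics is a continuous-time quantum walk: unitary evolution under a lattice hopping Hamiltonian. The interacting two-walker case that yields the deterministic gates is not reproduced in this sketch:

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time quantum walk of one particle on a 1-D lattice: the
# amplitude evolves as psi(t) = exp(-i H t) psi(0) with hopping strength J.
# Two indistinguishable walkers would evolve under the symmetrized
# two-particle version of the same H.
n, J, t = 101, 1.0, 20.0
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i + 1] = H[i + 1, i] = -J

psi0 = np.zeros(n, dtype=complex)
psi0[n // 2] = 1.0                        # walker localized at the center
psi = expm(-1j * H * t) @ psi0
prob = np.abs(psi) ** 2                   # twin ballistic lobes near +/- 2*J*t
print(prob.sum(), prob.argmax())
```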
Speculative behavior and asset price dynamics.
Westerhoff, Frank
2003-07-01
This paper deals with speculative trading. Guided by empirical observations, a nonlinear deterministic asset pricing model is developed in which traders repeatedly choose between technical and fundamental analysis to determine their orders. The interaction between the trading rules produces complex dynamics. The model endogenously replicates the stylized facts of excess volatility, high trading volumes, shifts in the level of asset prices, and volatility clustering.
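A minimal deterministic map in this spirit (the coefficients and the switching rule are invented for illustration, not the paper's specification) already produces bounded, irregular price dynamics:

```python
import numpy as np

# Technical traders extrapolate the last price change, fundamentalists bet
# on reversion to a fundamental value F, and a market maker moves the log
# price in proportion to aggregate excess demand. The weight of each rule
# depends on the mispricing, which makes the map nonlinear.
F, a, c, f = 0.0, 1.0, 1.9, 0.4
T = 5000
p = np.zeros(T); p[1] = 0.01              # log prices, small initial shock
for t in range(1, T - 1):
    w = 1.0 / (1.0 + 5.0 * abs(p[t] - F))  # chartist weight fades far from F
    d_c = c * (p[t] - p[t - 1])            # trend-chasing demand
    d_f = f * (F - p[t])                   # mean-reverting demand
    p[t + 1] = p[t] + a * (w * d_c + (1 - w) * d_f)

r = np.diff(p)
# Positive autocorrelation of |returns| is the volatility-clustering
# signature mentioned in the abstract.
print(r.std(), np.corrcoef(np.abs(r[:-1]), np.abs(r[1:]))[0, 1])
```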
Who killed Laius?: On Sophocles' enigmatic message.
Priel, Beatriz
2002-04-01
Using Laplanche's basic conceptualisation of the role of the other in unconscious processes, the author proposes a reading of Sophocles' tragedy, Oedipus the King, according to basic principles of dream interpretation. This reading corroborates contemporary literary perspectives suggesting that Sophocles' tragedy may not only convey the myth but also provide a critical analysis of how myths work. Important textual inconsistencies and incoherence, which have been noted through the centuries, suggest the existence of another, repressed story. Moreover, the action of the play points to enigmatic parental messages of infanticide and the silencing of Oedipus's story, as well as their translation into primordial guilt, as the origins of the tragic denouement. Oedipus's self-condemnation of parricide follows these enigmatic codes and is unrelated to, and may even contradict, the evidence offered in the tragedy as to the identity of Laius's murderers. Moreover, Sophocles' text provides a complex intertwining of hermeneutic and deterministic perspectives. Through the use of the mythical deterministic content, the formal characteristics of Sophocles' text, mainly its complex time perspective and extensive use of double meaning, dramatise in the act of reading an acute awareness of interpretation. This reading underscores the fundamental role of the other in the constitution of unconscious processes.
Thermostatted kinetic equations as models for complex systems in physics and life sciences.
Bianca, Carlo
2012-12-01
Statistical mechanics is a powerful method for understanding equilibrium thermodynamics. An equivalent theoretical framework for nonequilibrium systems has remained elusive. The thermodynamic forces driving the system away from equilibrium introduce energy that must be dissipated if nonequilibrium steady states are to be obtained. Historically, further terms were introduced, collectively called a thermostat, whose original application was to generate constant-temperature equilibrium ensembles. This review surveys kinetic models coupled with time-reversible deterministic thermostats for the modeling of large systems composed of both inert matter particles and living entities. The introduction of deterministic thermostats makes it possible to model the onset of the nonequilibrium stationary states that are typical of most real-world complex systems. The first part of the paper is focused on a general presentation of the main physical and mathematical definitions and tools: nonequilibrium phenomena, the Gauss least constraint principle and Gaussian thermostats. The second part provides a review of a variety of thermostatted mathematical models in physics and the life sciences, including the Kac, Boltzmann, Jager-Segel and thermostatted (continuous and discrete) kinetic models for active particles. Applications refer to semiconductor devices, nanosciences, biological phenomena, vehicular traffic, social and economic systems, and crowd and swarm dynamics. Copyright © 2012 Elsevier B.V. All rights reserved.
Zhao, Nan; Han, Jing Ginger; Shyu, Chi-Ren; Korkin, Dmitry
2014-01-01
Single nucleotide polymorphisms (SNPs) are among the most common types of genetic variation in complex genetic disorders. A growing number of studies link the functional role of SNPs with the networks and pathways mediated by the disease-associated genes. For example, many non-synonymous missense SNPs (nsSNPs) have been found near or inside the protein-protein interaction (PPI) interfaces. Determining whether such an nsSNP will disrupt or preserve a PPI is a challenging task to address, both experimentally and computationally. Here, we present this task as three related classification problems, and develop a new computational method, called the SNP-IN tool (non-synonymous SNP INteraction effect predictor). Our method predicts the effects of nsSNPs on PPIs, given the interaction's structure. It leverages supervised and semi-supervised feature-based classifiers, including our new Random Forest self-learning protocol. The classifiers are trained based on a dataset of comprehensive mutagenesis studies for 151 PPI complexes, with experimentally determined binding affinities of the mutant and wild-type interactions. Three classification problems were considered: (1) a 2-class problem (strengthening/weakening PPI mutations), (2) another 2-class problem (mutations that disrupt/preserve a PPI), and (3) a 3-class classification (detrimental/neutral/beneficial mutation effects). In total, 11 different supervised and semi-supervised classifiers were trained and assessed, resulting in a promising performance, with the weighted f-measure ranging from 0.87 for Problem 1 to 0.70 for the most challenging Problem 3. By integrating prediction results of the 2-class classifiers into the 3-class classifier, we further improved its performance for Problem 3. To demonstrate the utility of the SNP-IN tool, it was applied to study the nsSNP-induced rewiring of two disease-centered networks. The accurate and balanced performance of the SNP-IN tool makes it readily available to study the rewiring of large-scale protein-protein interaction networks, and can be useful for functional annotation of disease-associated SNPs. The SNP-IN tool is freely accessible as a web-server at http://korkinlab.org/snpintool/. PMID:24784581
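The "Random Forest self-learning protocol" is described only by name in the abstract; a generic self-training loop of that kind (the threshold, iteration count, and synthetic data here are assumptions, not the SNP-IN implementation) looks like this:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Self-training: fit on the labeled pool, promote the most confidently
# predicted unlabeled points into the training set, and repeat.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
labeled = np.zeros(len(y), dtype=bool); labeled[:100] = True
y_work = y.copy(); y_work[~labeled] = -1       # -1 marks unlabeled points

for _ in range(5):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[labeled], y_work[labeled])
    proba = clf.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.9        # confidence threshold
    if not confident.any():
        break
    idx = np.flatnonzero(~labeled)[confident]
    y_work[idx] = clf.predict(X[~labeled])[confident]
    labeled[idx] = True                        # promote pseudo-labeled points

print("final training size:", labeled.sum())
print("accuracy on all data:", (clf.predict(X) == y).mean())
```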
ERIC Educational Resources Information Center
Hornberger, Nancy H.; De Korne, Haley; Weinberg, Miranda
2016-01-01
The experiences of a community of people learning and teaching Lenape in Pennsylvania provide insights into the complexities of current ways of talking and acting about language reclamation. We illustrate how Native and non-Native participants in a university-based Indigenous language class constructed language, identity, and place in nuanced ways…
USDA-ARS?s Scientific Manuscript database
Peptidergic neurons are not easily integrated into current connectomics concepts, since their peptide messages can be distributed via non-synaptic paracrine signaling or even via volume transmission. Moreover, and especially in insects, the polarity of peptidergic interneurons in terms of in- and o...
1998-04-01
information representation and processing technology, although faster than the wheels and gears of the Charles Babbage computation machine, is still in...the same computational complexity class as the Babbage machine, with bits of information represented by entities which obey classical (non-quantum...nuclear double resonances Charles M Bowden and Jonathan P. Dowling Weapons Sciences Directorate, AMSMI-RD-WS-ST Missile Research, Development, and
Karlsson, Maria; Westerdahl, Helena
2013-08-01
In birds the major histocompatibility complex (MHC) organization differs both among and within orders; chickens Gallus gallus of the order Galliformes have a simple arrangement, while many songbirds of the order Passeriformes have a more complex arrangement with larger numbers of MHC class I and II genes. Chicken MHC genes are found at two independent loci, classical MHC-B and non-classical MHC-Y, whereas non-classical MHC genes are yet to be verified in passerines. Here we characterize MHC class I transcripts (α1 to α3 domain) and perform amplicon sequencing using a next-generation sequencing technique on exon 3 from house sparrow Passer domesticus (a passerine) families. Then we use phylogenetic, selection, and segregation analyses to gain a better understanding of the MHC class I organization. Trees based on the α1 and α2 domains revealed a distinct cluster with short terminal branches for transcripts with a 6-bp deletion. Interestingly, this cluster was not seen in the tree based on the α3 domain. Twenty-one exon 3 sequences were verified in a single individual, and the average numbers within an individual were nine and five for sequences with and without a 6-bp deletion, respectively. All individuals had exon 3 sequences with and without a 6-bp deletion. The sequences with a 6-bp deletion have many characteristics in common with non-classical MHC: e.g., highly conserved amino acid positions were substituted compared with the other alleles, nucleotide diversity was low, and just a single site was subject to positive selection. However, these alleles also have characteristics that suggest they could be classical, e.g., complete linkage and the absence of a distinct cluster in a tree based on the α3 domain. Thus, we cannot determine for certain whether or not the alleles with a 6-bp deletion are non-classical based on our present data. Further analyses of the segregation patterns of these alleles, in combination with dating the 6-bp deletion through MHC characterization across the genus Passer, may resolve this matter in the future.
Selective Shielding of Bone Marrow: An Approach to Protecting Humans from External Gamma Radiation.
Waterman, Gideon; Kase, Kenneth; Orion, Itzhak; Broisman, Andrey; Milstein, Oren
2017-09-01
The current feasibility of protecting emergency responders through selective shielding of the bone marrow is highlighted in the recent OECD/NEA report on severe accident management. Until recently, there was no effective personal protection from externally penetrating gamma radiation. In Chernobyl, first responders wore makeshift lead sheeting, whereas in Fukushima protective equipment against gamma radiation was not available. Older protective solutions that use thin layers of shielding over large body surfaces are ineffective against energetic gamma radiation. Acute exposures may result in Acute Radiation Syndrome, where, for uniform homogeneous exposures up to 10 Gy, the survival-limiting factor is irreversible bone marrow damage. Protracted, lower exposures may result in malignancies, to which bone marrow is especially susceptible, a risk compounded by leukemia's short latency time. This highlights the importance of shielding bone marrow for preventing both deterministic and stochastic effects. Due to the extraordinary regenerative potential of hematopoietic stem cells, to effectively prevent the deterministic effects of bone marrow exposure it is sufficient to protect only a small fraction of this tissue. This biological principle allows for a new class of equipment providing unprecedented attenuation of radiation to select marrow-rich regions, deferring the hematopoietic sub-syndrome of Acute Radiation Syndrome to much higher doses. As approximately half of the body's active bone marrow resides within the pelvic region, shielding this area holds great promise for preventing the deterministic effects of bone marrow exposure and concomitantly reducing stochastic effects. The efficacy of a device that selectively shields this region and other radiosensitive organs in the abdominal area is shown here.
Relevance of deterministic chaos theory to studies in functioning of dynamical systems
NASA Astrophysics Data System (ADS)
Glagolev, S. N.; Bukhonova, S. M.; Chikina, E. D.
2018-03-01
The paper considers chaotic behavior of dynamical systems typical for social and economic processes. Approaches to the analysis and evaluation of system development processes are studied from the point of view of controllability and determinateness. Explanations are given for the necessity of applying non-standard mathematical tools to describe the states of dynamical social and economic systems on the basis of fractal theory. Features of fractal structures, such as non-regularity, self-similarity, dimensionality and fractionality, are considered.
Jackpot Structural Features: Rollover Effect and Goal-Gradient Effect in EGM Gambling.
Li, En; Rockloff, Matthew J; Browne, Matthew; Donaldson, Phillip
2016-06-01
Relatively little research has been undertaken on the influence of jackpot structural features on electronic gaming machine (EGM) gambling behavior. This study considered two common features of EGM jackpots: progressive (i.e., the jackpot incrementally growing in value as players make additional bets), and deterministic (i.e., a guaranteed jackpot after a fixed number of bets, which is determined in advance and at random). Their joint influences on player betting behavior and the moderating role of jackpot size were investigated in a crossed-design experiment. Using real money, players gambled on a computer simulated EGM with real jackpot prizes of either $500 (i.e., small jackpot) or $25,000 (i.e., large jackpot). The results revealed three important findings. Firstly, players placed the largest bets (20.3 % higher than the average) on large jackpot EGMs that were represented to be deterministic and non-progressive. This finding was supportive of a hypothesized 'goal-gradient effect', whereby players might have felt subjectively close to an inevitable payoff for a high-value prize. Secondly, large jackpots that were non-deterministic and progressive also promoted high bet sizes (17.8 % higher than the average), resembling the 'rollover effect' demonstrated in lottery betting, whereby players might imagine that their large bets could be later recouped through a big win. Lastly, neither the hypothesized goal-gradient effect nor the rollover effect was evident among players betting on small jackpot machines. These findings suggest that certain high-value jackpot configurations may have intensifying effects on player behavior.
Validation of a Deterministic Vibroacoustic Response Prediction Model
NASA Technical Reports Server (NTRS)
Caimi, Raoul E.; Margasahayam, Ravi
1997-01-01
This report documents the recently completed effort to validate a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz), where Statistical Energy Analysis (SEA) methods are not suitable. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.
Thin-plate spline analysis of the cranial base in subjects with Class III malocclusion.
Singh, G D; McNamara, J A; Lozanoff, S
1997-08-01
The role of the cranial base in the emergence of Class III malocclusion is not fully understood. This study determines deformations that contribute to a Class III cranial base morphology, employing thin-plate spline analysis on lateral cephalographs. A total of 73 children of European-American descent, aged between 5 and 11 years, with Class III malocclusion were compared with an equivalent group of subjects with a normal, untreated Class I molar occlusion. The cephalographs were traced, checked and subdivided into seven age- and sex-matched groups. Thirteen points on the cranial base were identified and digitized. The datasets were scaled to an equivalent size, and statistical analysis indicated significant differences between average Class I and Class III cranial base morphologies for each group. Thin-plate spline analysis indicated that both affine (uniform) and non-affine transformations contribute toward the total spline for each average cranial base morphology at each age group analysed. For non-affine transformations, Partial warps 10, 8 and 7 had high magnitudes, indicating large-scale deformations affecting Bolton point, basion, pterygo-maxillare, Ricketts' point and articulare. In contrast, high eigenvalues associated with Partial warps 1-3, indicating localized shape changes, were found at tuberculum sellae, sella, and the frontonasomaxillary suture. It is concluded that large spatial-scale deformations affect the occipital complex of the cranial base and sphenoidal region, in combination with localized distortions at the frontonasal suture. These deformations may contribute to reduced orthocephalization or deficient flattening of the cranial base antero-posteriorly that, in turn, leads to the formation of a Class III malocclusion.
Stochastic and deterministic causes of streamer branching in liquid dielectrics
NASA Astrophysics Data System (ADS)
Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl
2013-08-01
Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching, such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations, is inevitable in any dielectric. A fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make branching inevitable depending on the shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is sufficiently thin and slow. Furthermore, the discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of a streamer head propagating even in a perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether branching occurs under particular inhomogeneous circumstances. The estimated number, diameter, and velocity of the resulting branches agree qualitatively with experimental images of streamer branching.
A deterministic model of electron transport for electron probe microanalysis
NASA Astrophysics Data System (ADS)
Bünger, J.; Richter, S.; Torrilhon, M.
2018-01-01
Within the last decades significant improvements in the spatial resolution of electron probe microanalysis (EPMA) were obtained by instrumental enhancements. In contrast, the quantification procedures essentially remained unchanged. As the classical procedures assume either homogeneity or a multi-layered structure of the material, they limit the spatial resolution of EPMA. The possibilities of improving the spatial resolution through more sophisticated quantification procedures are therefore almost untouched. We investigate a new analytical model (M 1-model) for the quantification procedure based on fast and accurate modelling of electron-X-ray-matter interactions in complex materials using a deterministic approach to solve the electron transport equations. We outline the derivation of the model from the Boltzmann equation for electron transport using the method of moments with a minimum entropy closure and present first numerical results for three different test cases (homogeneous, thin film and interface). Taking Monte Carlo as a reference, the results for the three test cases show that the M 1-model is able to reproduce the electron dynamics in EPMA applications very well. Compared to classical analytical models like XPP and PAP, the M 1-model is more accurate and far more flexible, which indicates the potential of deterministic models of electron transport to further increase the spatial resolution of EPMA.
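The minimum entropy closure named in this abstract can be illustrated with the standard Levermore-type M1 Eddington factor; whether the EPMA M1-model uses exactly this algebraic form is not stated in the abstract, so the sketch below is an assumption for illustration only.

```python
import numpy as np

def eddington_factor(f):
    """Minimum-entropy (Levermore-type) M1 Eddington factor chi(f), where
    f = |first moment| / zeroth moment is the anisotropy parameter in [0, 1]."""
    f = np.clip(f, 0.0, 1.0)
    return (3.0 + 4.0 * f**2) / (5.0 + 2.0 * np.sqrt(4.0 - 3.0 * f**2))

def close_second_moment(E, F):
    """Closure step: second moment P (3x3) from the zeroth moment E and the
    first moment F (3-vector), assuming speeds normalized to 1."""
    Fn = np.linalg.norm(F)
    chi = eddington_factor(Fn / E)
    n = F / Fn if Fn > 0 else np.zeros(3)
    # Isotropic part plus an anisotropic part aligned with the flux direction;
    # the trace of P equals E, as required of a second moment.
    return E * ((1.0 - chi) / 2.0 * np.eye(3)
                + (3.0 * chi - 1.0) / 2.0 * np.outer(n, n))

print(close_second_moment(1.0, np.array([0.3, 0.0, 0.0])))
```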
Multi-Scale Modeling of the Gamma Radiolysis of Nitrate Solutions.
Horne, Gregory P; Donoclift, Thomas A; Sims, Howard E; Orr, Robin M; Pimblott, Simon M
2016-11-17
A multiscale modeling approach has been developed for the extended time scale long-term radiolysis of aqueous systems. The approach uses a combination of stochastic track structure and track chemistry as well as deterministic homogeneous chemistry techniques and involves four key stages: radiation track structure simulation, the subsequent physicochemical processes, nonhomogeneous diffusion-reaction kinetic evolution, and homogeneous bulk chemistry modeling. The first three components model the physical and chemical evolution of an isolated radiation chemical track and provide radiolysis yields, within the extremely low dose isolated track paradigm, as the input parameters for a bulk deterministic chemistry model. This approach to radiation chemical modeling has been tested by comparison with the experimentally observed yield of nitrite from the gamma radiolysis of sodium nitrate solutions. This is a complex radiation chemical system which is strongly dependent on secondary reaction processes. The concentration of nitrite is not just dependent upon the evolution of radiation track chemistry and the scavenging of the hydrated electron and its precursors but also on the subsequent reactions of the products of these scavenging reactions with other water radiolysis products. Without the inclusion of intratrack chemistry, the deterministic component of the multiscale model is unable to correctly predict experimental data, highlighting the importance of intratrack radiation chemistry in the chemical evolution of the irradiated system.
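The final stage of the approach, deterministic homogeneous bulk chemistry driven by track-derived yields, reduces to a system of ODEs. The toy sketch below shows the structure with a three-species system; the G-value and rate constant are order-of-magnitude placeholders, not the paper's fitted values, and unit solution density is assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 3-species system: hydrated electron (e), nitrate, nitrite. Values are
# illustrative; 1 kg/L density is assumed so that mol/(kg s) ~ mol/(L s).
G_e = 2.7e-7      # mol/J, placeholder radiolytic yield of e_aq- from track chemistry
dose_rate = 1.0   # Gy/s
k_scav = 1e10     # L/(mol s), e_aq- + NO3- scavenging, order of magnitude

def rhs(t, y):
    e, no3, no2 = y
    scav = k_scav * e * no3
    return [G_e * dose_rate - scav,  # electrons: track-derived source minus scavenging
            -scav,                   # nitrate consumed
            +scav]                   # nitrite formed (toy 1:1 stoichiometry)

sol = solve_ivp(rhs, (0.0, 1e3), [0.0, 1e-2, 0.0], method="LSODA", rtol=1e-8)
print("final nitrite concentration (M):", sol.y[2, -1])
```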
Mai, Tam V-T; Duong, Minh V; Nguyen, Hieu T; Lin, Kuang C; Huynh, Lam K
2017-04-27
An integrated deterministic and stochastic model within the master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) framework was first used to characterize temperature- and pressure-dependent behaviors of thermal decomposition of acetic anhydride in a wide range of conditions (i.e., 300-1500 K and 0.001-100 atm). Particularly, using potential energy surface and molecular properties obtained from high-level electronic structure calculations at CCSD(T)/CBS, macroscopic thermodynamic properties and rate coefficients of the title reaction were derived with corrections for hindered internal rotation and tunneling treatments. Being in excellent agreement with the scattered experimental data, the results from the deterministic and stochastic frameworks confirmed and complemented each other to reveal that the main decomposition pathway proceeds via a 6-membered-ring transition state with a 0 K barrier of 35.2 kcal·mol⁻¹. This observation was further understood and confirmed by the sensitivity analysis on the time-resolved species profiles and the derived rate coefficients with respect to the ab initio barriers. Such an agreement suggests the integrated model can be confidently used for a wide range of conditions as a powerful postfacto and predictive tool in detailed chemical kinetic modeling and simulation for the title reaction and thus can be extended to complex chemical reactions.
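For orientation, the sketch below shows how a reported 0 K barrier translates into a crude canonical transition-state-theory rate estimate. The activation entropy and tunneling factor are placeholders, and this deliberately ignores the pressure dependence that the ME/RRKM treatment in the paper resolves.

```python
import numpy as np

kB = 1.380649e-23    # J/K
h = 6.62607015e-34   # J s
R = 8.314462618      # J/(mol K)

def eyring_rate(T, barrier_kcal, dS=0.0, kappa=1.0):
    """k(T) = kappa * (kB*T/h) * exp(dS/R) * exp(-dH/(R*T)). The 0 K barrier
    stands in for the activation enthalpy; dS and kappa are placeholders."""
    dH = barrier_kcal * 4184.0   # kcal/mol -> J/mol
    return kappa * (kB * T / h) * np.exp(dS / R) * np.exp(-dH / (R * T))

for T in (300.0, 800.0, 1500.0):
    print(f"T = {T:6.0f} K, k ~ {eyring_rate(T, 35.2):.3e} s^-1")
```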
Probabilistic Design and Analysis Framework
NASA Technical Reports Server (NTRS)
Strack, William C.; Nagpal, Vinod K.
2010-01-01
PRODAF is a software package designed to aid analysts and designers in conducting probabilistic analysis of components and systems. PRODAF can integrate multiple analysis programs to ease the tedious process of conducting a complex analysis that requires the use of multiple software packages. The work uses a commercial finite element analysis (FEA) program with modules from NESSUS to conduct a probabilistic analysis of a hypothetical turbine blade, disk, and shaft model. PRODAF applies the response surface method at the component level and extrapolates the component-level responses to the system level. Hypothetical components of a gas turbine engine are first deterministically modeled using FEA. Variations in selected geometrical dimensions and loading conditions are analyzed to determine the effects on the stress state within each component. Geometric variations include the chord length and height for the blade, and the inner radius, outer radius, and thickness for the disk. Probabilistic analysis is carried out using developing software packages like System Uncertainty Analysis (SUA) and PRODAF. PRODAF was used with a commercial deterministic FEA program in conjunction with modules from the probabilistic analysis program, NESTEM, to perturb loads and geometries to provide a reliability and sensitivity analysis. PRODAF simplified the handling of data among the various programs involved, and will work with many commercial and open-source deterministic programs, probabilistic programs, or modules.
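The response surface idea at the heart of this workflow is easy to show in miniature: fit a cheap polynomial surrogate to a handful of expensive deterministic runs, then do Monte Carlo on the surrogate. Everything below (the stand-in "FEA" function, coefficients, scatter) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an expensive FEA run: stress as a function of two inputs
# (say, blade chord length and load factor); entirely hypothetical.
def fea_stress(x):
    return 350.0 + 80.0 * x[..., 0] + 45.0 * x[..., 1] + 20.0 * x[..., 0] * x[..., 1]

# 1) Small design of experiments, then a quadratic response surface fit.
X = rng.uniform(-1, 1, size=(30, 2))
y = fea_stress(X) + rng.normal(0, 2.0, size=30)           # 30 "FEA runs"
A = np.column_stack([np.ones(30), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# 2) Monte Carlo on the cheap surrogate instead of the FEA model.
U = rng.normal(0, 0.3, size=(100000, 2))                  # input scatter
B = np.column_stack([np.ones(len(U)), U[:, 0], U[:, 1],
                     U[:, 0]**2, U[:, 1]**2, U[:, 0] * U[:, 1]])
stress = B @ coef
print("P(stress > 400):", np.mean(stress > 400.0))
```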
Processes occurring within small areas (patch-scale) that influence species richness and spatial heterogeneity of larger areas (landscape-scale) have long been an interest of ecologists. This research focused on the role of patch-scale deterministic chaos arising in phytoplankton...
Examining Errors in Simple Spreadsheet Modeling from Different Research Perspectives
ERIC Educational Resources Information Center
Kadijevich, Djordje M.
2012-01-01
By using a sample of 1st-year undergraduate business students, this study dealt with the development of simple (deterministic and non-optimization) spreadsheet models of income statements within an introductory course on business informatics. The study examined students' errors in doing this for business situations of their choice and found three…
Pinning impulsive control algorithms for complex network
NASA Astrophysics Data System (ADS)
Sun, Wen; Lü, Jinhu; Chen, Shihua; Yu, Xinghuo
2014-03-01
In this paper, we further investigate the synchronization of complex dynamical networks via pinning control, in which a selection of nodes are controlled at discrete times. Different from most existing work, the pinning control algorithms utilize only impulsive signals at discrete time instants, which may greatly improve communication channel efficiency and reduce control cost. Two classes of algorithms are designed, one for strongly connected complex networks and another for non-strongly connected complex networks. It is suggested that in a strongly connected network with suitable coupling strength, a single controller at any one of the network's nodes can always pin the network to its homogeneous solution. In the non-strongly connected case, the location and minimum number of nodes needed to pin the network are determined by the Frobenius normal form of the coupling matrix. In addition, the coupling matrix is not necessarily symmetric or irreducible. Illustrative examples are then given to validate the proposed pinning impulsive control algorithms.
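A toy numerical sketch of the control pattern described here: continuous coupled dynamics, with control acting only as instantaneous state jumps at a single pinned node at discrete instants. The node dynamics, ring topology, and gains are illustrative assumptions, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, T = 10, 0.001, 10.0

# Ring coupling (strongly connected); Laplacian L = D - A
A = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
L = np.diag(A.sum(1)) - A
c = 2.0                       # coupling strength
pinned = [0]                  # a single pinned node, as the abstract suggests suffices
impulse_every = 100           # control acts only at these discrete steps
gain = 0.5

def f(x):                     # node dynamics with homogeneous solution x* = 0
    return -x + np.tanh(x)

x = rng.normal(0, 1.0, N)
for k in range(int(T / dt)):
    x += dt * (f(x) - c * (L @ x))        # continuous evolution, no control
    if k % impulse_every == 0:            # impulsive pinning at discrete instants
        for i in pinned:
            x[i] -= gain * x[i]           # instantaneous jump toward x* = 0
print("max deviation from homogeneous solution:", np.abs(x).max())
```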
Arctic Sea Ice: Trends, Stability and Variability
NASA Astrophysics Data System (ADS)
Moon, Woosok
A stochastic Arctic sea-ice model is derived and analyzed in detail to interpret the recent decay and associated variability of Arctic sea-ice under changes in greenhouse gas forcing widely referred to as global warming. The approach begins from a deterministic model of the heat flux balance through the air/sea/ice system, which uses observed monthly-averaged heat fluxes to drive a time evolution of sea-ice thickness. This model reproduces the observed seasonal cycle of the ice cover, and it is to this that stochastic noise---representing high-frequency variability---is introduced. The model takes the form of a single periodic non-autonomous stochastic ordinary differential equation. Following an introductory chapter, the two that follow focus principally on the properties of the deterministic model in order to identify the main properties governing the stability of the ice cover. In chapter 2 the underlying time-dependent solutions to the deterministic model are analyzed for their stability. It is found that the response time-scale of the system to perturbations is dominated by the destabilizing sea-ice albedo feedback, which is operative in the summer, and the stabilizing long wave radiative cooling of the ice surface, which is operative in the winter. This basic competition is found throughout the thesis to define the governing dynamics of the system. In particular, as greenhouse gas forcing increases, the sea-ice albedo feedback becomes more effective at destabilizing the system. Thus, any projections of the future state of Arctic sea-ice will depend sensitively on the treatment of the ice-albedo feedback. This in turn implies that the treatment of a fractional ice cover, as the ice areal extent changes rapidly, must be handled with the utmost care. In chapter 3, the idea of a two-season model, with just winter and summer, is revisited. By breaking the seasonal cycle up in this manner one can simplify the interpretation of the basic dynamics. Whereas in the fully time-dependent seasonal model one finds stable seasonal ice cover (vanishing in the summer but reappearing in the winter), in previous two-season models such a state could not be found. In this chapter the sufficient conditions for a stable seasonal ice cover are found, which reside in including a time variation in the shortwave radiance during summer. This provides a qualitative interpretation of the continuous and reversible shift from perennial to seasonally-varying states in the more complex deterministic model. In order to put the stochastic model into a realistic observational framework, in chapter 4, the analysis of daily satellite retrievals of ice albedo and ice extent is described. Both the basic statistics are examined and a new method, called multi-fractal temporally weighted detrended fluctuation analysis, is applied. Because the basic data are taken on daily time scales, the full fidelity of the retrieved data is accessed and we find time scales from days and weeks to seasonal and decadal. Importantly, the data show a white-noise structure on annual to biannual time scales and this provides the basis for using a Wiener process for the noise in the stochastic Arctic sea-ice model. In chapter 5 a generalized perturbation analysis of a non-autonomous stochastic differential equation is developed and then applied to interpreting the variability of Arctic sea-ice as greenhouse gas forcing increases.
The resulting analytic expressions of the statistical moments provide insight into the transient and memory-delay effects associated with the basic competition in the system: the ice-albedo feedback and long wave radiative stabilization along with the asymmetry in the nonlinearity of the deterministic contributions to the model and the magnitude and structure of the stochastic noise. A systematic study of the impact of the noise structure, from additive to multiplicative, is undertaken in chapters 6 and 7. Finally, in chapter 8 the matter of including a fractional ice cover into a deterministic model is addressed. It is found that a simple but crucial mistake is made in one of the most widely used model schemes and this has a major impact given the important role of areal fraction in the ice-albedo feedback in such a model. The thesis is summarized in chapter 9.
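The class of models at the core of this thesis, a periodic non-autonomous stochastic ODE driven by a Wiener process, is straightforward to simulate by Euler-Maruyama. The drift, diffusion, and coefficients below are purely illustrative stand-ins, not the thesis's calibrated sea-ice model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy periodic non-autonomous SDE for an effective thickness anomaly E:
#   dE = f(E, t) dt + sigma(E) dW,  with period-1 seasonal drift.
def drift(E, t):
    return -0.5 * E + 0.8 * np.cos(2 * np.pi * t)   # relaxation plus seasonal forcing

def diffusion(E):
    return 0.1 + 0.05 * np.abs(E)                   # a multiplicative-noise example

dt, years, n_paths = 1e-3, 50, 200
E = np.ones(n_paths)
for k in range(int(years / dt)):
    t = k * dt
    dW = np.sqrt(dt) * rng.normal(size=n_paths)
    E += drift(E, t) * dt + diffusion(E) * dW       # Euler-Maruyama step
print(f"ensemble mean {E.mean():.3f}, variance {E.var():.3f} after {years} cycles")
```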
Modular assembly of chimeric phi29 packaging RNAs that support DNA packaging.
Fang, Yun; Shu, Dan; Xiao, Feng; Guo, Peixuan; Qin, Peter Z
2008-08-08
The bacteriophage phi29 DNA packaging motor is a protein/RNA complex that can produce strong force to condense the linear-double-stranded DNA genome into a pre-formed protein capsid. The RNA component, called the packaging RNA (pRNA), utilizes magnesium-dependent inter-molecular base-pairing interactions to form ring-shaped complexes. The pRNA is a class of non-coding RNA, interacting with phi29 motor proteins to enable DNA packaging. Here, we report a two-piece chimeric pRNA construct that is fully competent in interacting with partner pRNA to form ring-shaped complexes, in packaging DNA via the motor, and in assembling infectious phi29 virions in vitro. This is the first example of a fully functional pRNA assembled using two non-covalently interacting fragments. The results support the notion of modular pRNA architecture in the phi29 packaging motor.
BFV-Complex and Higher Homotopy Structures
NASA Astrophysics Data System (ADS)
Schätz, Florian
2009-03-01
We present a connection between the BFV-complex (abbreviation for Batalin-Fradkin-Vilkovisky complex) and the strong homotopy Lie algebroid associated to a coisotropic submanifold of a Poisson manifold. We prove that the latter structure can be derived from the BFV-complex by means of homotopy transfer along contractions. Consequently the BFV-complex and the strong homotopy Lie algebroid structure are L∞ quasi-isomorphic and control the same formal deformation problem. However there is a gap between the non-formal information encoded in the BFV-complex and in the strong homotopy Lie algebroid respectively. We prove that there is a one-to-one correspondence between coisotropic submanifolds given by graphs of sections and equivalence classes of normalized Maurer-Cartan elements of the BFV-complex. This does not hold if one uses the strong homotopy Lie algebroid instead.
Improved multi-objective ant colony optimization algorithm and its application in complex reasoning
NASA Astrophysics Data System (ADS)
Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing
2013-09-01
The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning of a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem starting from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum probability of average fault, maximum average importance, and minimum average complexity of test. Under the constraints of both known symptoms and the causal relationship among different components, a multi-objective optimization mathematical model is set up, taking the minimization of the cost of fault reasoning as the target function. Since the problem is non-deterministic polynomial hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to optimize the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can realize reasoning and locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method to solve the problem of multi-constraint and multi-objective fault diagnosis and reasoning of complex systems.
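Two of the ingredients named above, a reachability matrix constraining feasible nodes and a pseudo-random-proportional selection rule, can be sketched compactly. For brevity the multi-objective cost is scalarized into a single path cost here, so this is a simplified single-objective cousin of IMACO; the graph, parameters and cost are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy layered fault graph: node 0 is a symptom, nodes 3-4 are candidate causes.
# The reachability (adjacency) matrix constrains which nodes an ant may visit.
reach = np.array([[0, 1, 1, 0, 0],
                  [0, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1],
                  [0, 0, 0, 0, 0],
                  [0, 0, 0, 0, 0]], dtype=float)
eta = rng.uniform(0.1, 1.0, reach.shape) * reach   # heuristic desirability
tau = np.ones_like(reach) * reach                  # pheromone
q0, beta, rho = 0.7, 2.0, 0.1                      # ACS-style parameters

def path_cost(path):                               # placeholder scalarized objective
    return sum(1.0 / eta[i, j] for i, j in zip(path, path[1:]))

best, best_cost = None, np.inf
for it in range(200):
    for ant in range(10):
        node, path = 0, [0]
        while reach[node].any():                   # walk until a terminal cause
            w = (tau[node] * eta[node]**beta) * reach[node]
            if rng.random() < q0:                  # pseudo-random-proportional rule:
                nxt = int(np.argmax(w))            # exploit the best edge...
            else:
                nxt = int(rng.choice(len(w), p=w / w.sum()))  # ...or explore
            path.append(nxt); node = nxt
        c = path_cost(path)
        if c < best_cost:
            best, best_cost = path, c
    tau *= (1 - rho)                               # evaporation
    for i, j in zip(best, best[1:]):               # reinforce the best path
        tau[i, j] += 1.0 / best_cost
print("best reasoning path:", best, "cost:", round(best_cost, 3))
```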
Observation-Driven Configuration of Complex Software Systems
NASA Astrophysics Data System (ADS)
Sage, Aled
2010-06-01
The ever-increasing complexity of software systems makes them hard to comprehend, predict and tune due to emergent properties and non-deterministic behaviour. Complexity arises from the size of software systems and the wide variety of possible operating environments: the increasing choice of platforms and communication policies leads to ever more complex performance characteristics. In addition, software systems exhibit different behaviour under different workloads. Many software systems are designed to be configurable so that policies can be chosen to meet the needs of various stakeholders. For complex software systems it can be difficult to accurately predict the effects of a change and to know which configuration is most appropriate. This thesis demonstrates that it is useful to run automated experiments that measure a selection of system configurations. Experiments can find configurations that meet the stakeholders' needs, find interesting behavioural characteristics, and help produce predictive models of the system's behaviour. The design and use of ACT (Automated Configuration Tool) for running such experiments is described, in combination with a number of search strategies for deciding on the configurations to measure. Design Of Experiments (DOE) is discussed, with emphasis on Taguchi Methods. These statistical methods have been used extensively in manufacturing, but have not previously been used for configuring software systems. The novel contribution here is an industrial case study, applying the combination of ACT and Taguchi Methods to DC-Directory, a product from Data Connection Ltd (DCL). The case study investigated the applicability of Taguchi Methods for configuring complex software systems. Taguchi Methods were found to be useful for modelling and configuring DC-Directory, making them a valuable addition to the techniques available to system administrators and developers.
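The Taguchi workflow mentioned here is mechanical enough to sketch: measure one configuration per row of an orthogonal array, then compare mean signal-to-noise ratios per factor level. The benchmark function and factor names below are hypothetical stand-ins for real configuration experiments.

```python
import numpy as np

# Standard L8(2^7) orthogonal array (levels coded 0/1); the first three
# columns are used for three hypothetical configuration parameters.
L8 = np.array([[0,0,0,0,0,0,0],
               [0,0,0,1,1,1,1],
               [0,1,1,0,0,1,1],
               [0,1,1,1,1,0,0],
               [1,0,1,0,1,0,1],
               [1,0,1,1,0,1,0],
               [1,1,0,0,1,1,0],
               [1,1,0,1,0,0,1]])

def measure(config):
    """Stand-in for one automated experiment (e.g., a throughput benchmark);
    entirely hypothetical response surface."""
    cache, threads, batch = config
    return 100 + 20*cache + 35*threads - 10*batch + 5*cache*threads

# One measurement per row, then larger-the-better S/N ratios:
# S/N = -10 log10( mean(1/y^2) ), here with a single replicate per run.
y = np.array([measure(row[:3]) for row in L8], dtype=float)
sn = -10 * np.log10(1.0 / y**2)

for col in range(3):
    lo = sn[L8[:, col] == 0].mean()
    hi = sn[L8[:, col] == 1].mean()
    print(f"factor {col}: mean S/N at level 0 = {lo:.2f}, at level 1 = {hi:.2f}")
```

Factors whose level change moves the mean S/N most are the ones worth tuning first, which is the essence of the screening role Taguchi Methods play in the case study.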
The Reconstruction Toolkit (RTK), an open-source cone-beam CT reconstruction toolkit based on the Insight Toolkit (ITK)
NASA Astrophysics Data System (ADS)
Rit, S.; Vila Oliva, M.; Brousmiche, S.; Labarbe, R.; Sarrut, D.; Sharp, G. C.
2014-03-01
We propose the Reconstruction Toolkit (RTK, http://www.openrtk.org), an open-source toolkit for fast cone-beam CT reconstruction, based on the Insight Toolkit (ITK) and using GPU code extracted from Plastimatch. RTK is developed by an open consortium (see affiliations) under the non-contaminating Apache 2.0 license. The quality of the platform is daily checked with regression tests in partnership with Kitware, the company supporting ITK. Several features are already available: Elekta, Varian and IBA inputs, multi-threaded Feldkamp-Davis-Kress reconstruction on CPU and GPU, Parker short scan weighting, multi-threaded CPU and GPU forward projectors, etc. Each feature is either accessible through command line tools or C++ classes that can be included in independent software. A MIDAS community has been opened to share CatPhan datasets of several vendors (Elekta, Varian and IBA). RTK will be used in the upcoming cone-beam CT scanner developed by IBA for proton therapy rooms. Many features are under development: new input format support, iterative reconstruction, hybrid Monte Carlo / deterministic CBCT simulation, etc. RTK has been built to freely share tomographic reconstruction developments between researchers and is open for new contributions.
A parallel implementation of an off-lattice individual-based model of multicellular populations
NASA Astrophysics Data System (ADS)
Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe
2015-07-01
As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
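A minimal sketch of the communication pattern described here: the spatial domain is cut into strips, and each process exchanges boundary-cell positions ("halos") with its neighbours every step. The domain split, halo width, periodic wrap and message contents are illustrative assumptions, not the paper's actual implementation. Run with, e.g., `mpiexec -n 4 python demo.py`.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(rank)
width = 1.0 / size                              # this process owns x in [lo, hi)
lo, hi = rank * width, (rank + 1) * width
cells = np.column_stack([lo + width * rng.random(50), rng.random(50)])
halo = 0.05                                     # interaction radius

for step in range(10):
    # Cells near my right edge must be visible to the right-hand neighbour.
    send_right = cells[cells[:, 0] > hi - halo]
    send_left = cells[cells[:, 0] < lo + halo]
    right, left = (rank + 1) % size, (rank - 1) % size
    from_left = comm.sendrecv(send_right, dest=right, source=left)
    from_right = comm.sendrecv(send_left, dest=left, source=right)
    # ... compute forces using cells plus from_left and from_right, move the
    # cells, and migrate any cell whose new position leaves [lo, hi) (omitted).
if rank == 0:
    print("halo exchange completed")
```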
Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)
NASA Astrophysics Data System (ADS)
Kędra, Mariola
2014-02-01
Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools for studying daily river flow data gives consistent, reliable and clear-cut answers to the question. The outcomes indicate that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of a mountain river in southern Poland, the Raba River.
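One of the standard tools behind studies like this is the Grassberger-Procaccia correlation sum computed on a time-delay embedding of the series. The river data are not available here, so the sketch below uses a chaotic logistic map as a stand-in signal; embedding parameters are illustrative.

```python
import numpy as np

def embed(x, dim, tau):
    """Time-delay embedding of a scalar series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def correlation_sum(Y, r):
    """Fraction of point pairs closer than r (Grassberger-Procaccia)."""
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    iu = np.triu_indices(len(Y), k=1)
    return np.mean(d[iu] < r)

# Stand-in for daily discharge: a chaotic logistic map trajectory.
x = np.empty(1200); x[0] = 0.4
for i in range(1199):
    x[i + 1] = 3.99 * x[i] * (1 - x[i])

Y = embed(x, dim=3, tau=1)
radii = np.logspace(-2, -0.5, 8)
C = [correlation_sum(Y, r) for r in radii]
# The slope of log C(r) vs log r estimates the correlation dimension; a low,
# non-integer value is one signature of deterministic chaos.
slope = np.polyfit(np.log(radii), np.log(C), 1)[0]
print("correlation dimension estimate:", round(slope, 2))
```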
New class of generalized photon-added coherent states and some of their non-classical properties
NASA Astrophysics Data System (ADS)
Mojaveri, B.; Dehghani, A.; Mahmoodi, S.
2014-08-01
In this paper, we construct a new class of generalized photon-added coherent states (GPACSs), |z,m⟩_r, by excitations on a newly introduced family of generalized coherent states (GCSs) |z⟩_r (A Dehghani and B Mojaveri 2012 J. Phys. A: Math. Theor. 45 095304), obtained via generalized hypergeometric-type displacement operators acting on the vacuum state of the simple harmonic oscillator. We show that these states realize the resolution of the identity property through positive definite measures on the complex plane. Meanwhile, we demonstrate that the introduced states can also be interpreted as nonlinear coherent states (NLCSs) with a special nonlinearity function. Finally, some of their non-classical features as well as their quantum statistical properties are compared with those of Agarwal's photon-added coherent states (PACSs), |z,m⟩.
Lai, C.; Tsay, T.-K.; Chien, C.-H.; Wu, I.-L.
2009-01-01
Researchers at the Hydroinformatic Research and Development Team (HIRDT) of the National Taiwan University undertook a project to create a real-time flood forecasting model, with the aim of predicting currents in the Tamsui River Basin. The model was designed on a deterministic approach, with mathematical modeling of complex phenomena and specific parameter values operated to produce a discrete result. The project also devised a rainfall-stage model that relates the rate of rainfall upland directly to the change of the state of the river, and is further related to another typhoon-rainfall model. The geographic information system (GIS) data, based on a precise contour model of the terrain, were used to estimate the regions that were perilous to flooding. The HIRDT, in response to the project's progress, also devoted their application of a deterministic model of unsteady-flow hydrodynamics to help river authorities issue timely warnings and take other emergency measures.
Mutual Information Rate and Bounds for It
Baptista, Murilo S.; Rubinger, Rero M.; Viana, Emilson R.; Sartorelli, José C.; Parlitz, Ulrich; Grebogi, Celso
2012-01-01
The amount of information exchanged per unit of time between two nodes in a dynamical network or between two data sets is a powerful concept for analysing complex systems. This quantity, known as the mutual information rate (MIR), is calculated from the mutual information, which is rigorously defined only for random systems. Moreover, the definition of mutual information is based on probabilities of significant events. This work offers a simple alternative way to calculate the MIR in dynamical (deterministic) networks or between two time series (not fully deterministic), and to calculate its upper and lower bounds without having to calculate probabilities, but rather in terms of well known and well defined quantities in dynamical systems. As possible applications of our bounds, we study the relationship between synchronisation and the exchange of information in a system of two coupled maps and in experimental networks of coupled oscillators. PMID:23112809
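The paper's contribution is precisely that MIR bounds can be obtained *without* estimating probabilities; for contrast, the sketch below shows the conventional probability-based route (a plug-in histogram estimate of mutual information divided by a decorrelation time) on two coupled logistic maps. Parameters and the assumed decorrelation time are illustrative.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in MI estimate (nats) from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(1), pxy.sum(0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

# Two diffusively coupled logistic maps as the "two nodes".
n, eps = 5000, 0.3
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 0.3, 0.7
for i in range(n - 1):
    fx, fy = 3.99 * x[i] * (1 - x[i]), 3.99 * y[i] * (1 - y[i])
    x[i + 1] = (1 - eps) * fx + eps * fy
    y[i + 1] = (1 - eps) * fy + eps * fx

mi = mutual_information(x, y)
T = 1   # decorrelation time in map iterations, assumed here
print(f"MI = {mi:.3f} nats; crude MIR = {mi / T:.3f} nats per iteration")
```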
Survivability of Deterministic Dynamical Systems
Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen
2016-01-01
The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: Given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states. We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures. PMID:27405955
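The definition lends itself directly to Monte Carlo estimation: sample random initial conditions, integrate the deterministic system, and count the transients that never leave the desirable region. The sketch below uses a damped driven pendulum (a toy "swing equation" node) with illustrative parameters and region bounds.

```python
import numpy as np

rng = np.random.default_rng(5)

# Damped driven pendulum:  phi'' = P - alpha*phi' - K*sin(phi)
P, alpha, K = 0.5, 0.1, 1.0
dt, T = 0.01, 50.0

def survives(phi, omega, omega_max=2.0):
    """Does the transient stay inside the desirable region |omega| < omega_max?"""
    for _ in range(int(T / dt)):
        dphi, domega = omega, P - alpha * omega - K * np.sin(phi)
        phi, omega = phi + dt * dphi, omega + dt * domega
        if abs(omega) > omega_max:
            return False
    return True

# Survivability = fraction of random initial conditions whose transient survives.
n = 2000
hits = sum(survives(rng.uniform(-np.pi, np.pi), rng.uniform(-1.5, 1.5))
           for _ in range(n))
print("estimated survivability:", hits / n)
```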
Deterministic Joint Remote Preparation of an Arbitrary Seven-qubit Cluster-type State
NASA Astrophysics Data System (ADS)
Ding, MengXiao; Jiang, Min
2017-06-01
In this paper, we propose a scheme for the joint remote preparation of an arbitrary seven-qubit cluster-type state by using several GHZ entangled states as the quantum channel. The coefficients of the prepared states can be not only real but also complex. Firstly, Alice performs a three-qubit projective measurement according to the amplitude coefficients of the target state, and then Bob carries out another three-qubit projective measurement based on its phase coefficients. Next, a three-qubit state containing all the information of the target state is prepared with suitable operations. Finally, the target seven-qubit cluster-type state can be prepared deterministically by introducing four auxiliary qubits and performing appropriate local unitary operations based on the prepared three-qubit state. All of the receiver's recovery operations are summarized in a concise formula. Furthermore, our scheme is more feasible with present technologies than most previous schemes.
Data dependent systems approach to modal analysis Part 1: Theory
NASA Astrophysics Data System (ADS)
Pandit, S. M.; Mehta, N. P.
1988-05-01
The concept of Data Dependent Systems (DDS) and its applicability in the context of modal vibration analysis is presented. The ability of the DDS difference equation models to provide a complete representation of a linear dynamic system from its sampled response data forms the basis of the approach. The models are decomposed into deterministic and stochastic components so that system characteristics are isolated from noise effects. The modelling strategy is outlined, and the method of analysis associated with modal parameter identification is described in detail. Advantages and special features of the DDS methodology are discussed. Since the correlated noise is appropriately and automatically modelled by the DDS, the modal parameters are shown to be estimated very accurately and hence no preprocessing of the data is needed. Complex mode shapes and non-classical damping are as easily analyzed as the classical normal mode analysis. These features are illustrated by using simulated data in this Part I and real data on a disc-brake rotor in Part II.
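The core of the approach, fitting a difference-equation model to sampled response data and reading modal parameters off its characteristic roots, can be sketched as follows. DDS models include a moving-average part for the correlated noise, which is omitted here; the simulated mode, sample rate and model order are illustrative.

```python
import numpy as np

dt, n = 0.01, 4000
t = np.arange(n) * dt
rng = np.random.default_rng(6)
# Simulated response: one 5 Hz mode with 2% damping, plus measurement noise.
wn, zeta = 2 * np.pi * 5.0, 0.02
x = np.exp(-zeta * wn * t) * np.sin(wn * np.sqrt(1 - zeta**2) * t)
x = x + 0.01 * rng.normal(size=n)

# Least-squares fit of an AR(p) difference-equation model to the samples.
p = 6
A = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
a = np.linalg.lstsq(A, x[p:], rcond=None)[0]

# Characteristic roots -> continuous-time poles -> modal parameters.
roots = np.roots(np.concatenate([[1.0], -a]))
s = np.log(roots[np.abs(roots) < 1.0]) / dt
s = s[np.imag(s) > 0]                      # keep one of each conjugate pair
for si in s:
    freq = np.imag(si) / (2 * np.pi)
    damp = -np.real(si) / np.abs(si)
    print(f"mode: {freq:6.2f} Hz, damping ratio {damp:.3f}")
```

Spurious noise modes appear alongside the physical one; in practice they are recognized by their high damping, which is part of how the DDS decomposition separates system characteristics from noise effects.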
Learning to integrate reactivity and deliberation in uncertain planning and scheduling problems
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Gervasio, Melinda T.; Dejong, Gerald F.
1992-01-01
This paper describes an approach to planning and scheduling in uncertain domains. In this approach, a system divides a task on a goal-by-goal basis into reactive and deliberative components. Initially, a task is handled entirely reactively. When failures occur, the system changes the reactive/deliberative goal division by moving goals into the deliberative component. Because our approach attempts to minimize the number of deliberative goals, we call it Minimal Deliberation (MD). Because MD allows goals to be treated reactively, it gains some of the advantages of reactive systems: computational efficiency, the ability to deal with noise and non-deterministic effects, and the ability to take advantage of unforeseen opportunities. However, because MD can fall back upon deliberation, it can also provide some of the guarantees of classical planning, such as the ability to deal with complex goal interactions. This paper describes the Minimal Deliberation approach to integrating reactivity and deliberation and describes an ongoing application of the approach to an uncertain planning and scheduling domain.
NASA Technical Reports Server (NTRS)
Schiff, Conrad; Dove, Edwin
2011-01-01
The MMS mission is an ambitious space physics mission that will fly 4 spacecraft in a tetrahedron formation in a series of highly elliptical orbits in order to study magnetic reconnection in the Earth's magnetosphere. The mission design comprises a combination of deterministic orbit adjust and random maintenance maneuvers distributed over the 2.5-year mission life. Formal verification of the requirements is achieved by analysis through the use of the End-to-End (ETE) code, which is a modular simulation of the maneuver operations over the entire mission duration. Error models for navigation accuracy (knowledge) and maneuver execution (control) are incorporated to realistically simulate the maneuver scenarios that might be realized. These error models, coupled with the complex formation flying physics, lead to non-trivial effects that must be taken into account by the ETE automation. Using the ETE code, the MMS Flight Dynamics team was able to demonstrate that the current mission design satisfies the mission requirements.
A Comparison of Techniques for Scheduling Earth-Observing Satellites
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2004-01-01
Scheduling observations by coordinated fleets of Earth Observing Satellites (EOS) involves large search spaces, complex constraints and poorly understood bottlenecks, conditions where evolutionary and related algorithms are often effective. However, there are many such algorithms and the best one to use is not clear. Here we compare multiple variants of the genetic algorithm with stochastic hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling on ten realistically-sized EOS scheduling problems. Schedules are represented by a permutation (non-temporal ordering) of the observation requests. A simple deterministic scheduler assigns times and resources to each observation request in the order indicated by the permutation, discarding those that violate the constraints created by previously scheduled observations. Simulated annealing performs best. Random mutation outperformed a more 'intelligent' mutator. Furthermore, the best mutator, by a small margin, was a novel approach we call temperature-dependent random sampling, which makes large changes in the early stages of evolution and smaller changes towards the end of search.
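The permutation representation plus deterministic greedy decoder described here is simple to reproduce in miniature. The toy problem below (two instruments, a fixed horizon, random swap mutation under simulated annealing) uses entirely hypothetical data and a simplified conflict rule.

```python
import math
import random

random.seed(7)

# Toy EOS problem: each request is (instrument, earliest start, duration).
requests = [(random.choice("AB"), random.uniform(0, 50), random.uniform(1, 4))
            for _ in range(40)]

def schedule(perm):
    """Deterministic greedy decoder: place requests in permutation order,
    discarding any that cannot fit before the horizon."""
    busy_until = {"A": 0.0, "B": 0.0}
    placed = 0
    for idx in perm:
        inst, earliest, dur = requests[idx]
        start = max(earliest, busy_until[inst])
        if start + dur <= 60.0:               # scheduling horizon
            busy_until[inst] = start + dur
            placed += 1
    return placed

# Simulated annealing over the permutation (the paper's best performer).
perm = list(range(len(requests)))
best = cur = schedule(perm)
temp = 5.0
for step in range(5000):
    i, j = random.sample(range(len(perm)), 2)
    perm[i], perm[j] = perm[j], perm[i]       # random swap mutation
    new = schedule(perm)
    if new >= cur or random.random() < math.exp((new - cur) / temp):
        cur = new
        best = max(best, cur)
    else:
        perm[i], perm[j] = perm[j], perm[i]   # undo the swap
    temp *= 0.999                             # cooling schedule
print("best number of scheduled observations:", best)
```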
Multi-segmental postural coordination in professional ballet dancers.
Kiefer, Adam W; Riley, Michael A; Shockley, Kevin; Sitton, Candace A; Hewett, Timothy E; Cummins-Sebree, Sarah; Haas, Jacqui G
2011-05-01
Ballet dancers have heightened balance skills, but previous studies that compared dancers to non-dancers have not quantified patterns of multi-joint postural coordination. This study utilized a visual tracking task that required professional ballet dancers and untrained control participants to sway with the fore-aft motion of a target while standing on one leg, at target frequencies of 0.2 and 0.6 Hz. The mean and variability of relative phase between the ankle and hip, and measures from cross-recurrence quantification analysis (i.e., percent cross-recurrence, percent cross-determinism, and cross-maxline), indexed the coordination patterns and their stability. Dancers exhibited less variable ankle-hip coordination and a less deterministic ankle-hip coupling, compared to controls. The results indicate that ballet dancers have increased coordination stability, potentially achieved through enhanced neuromuscular control and/or perceptual sensitivity, and indicate proficiency at optimizing the constraints that enable dancers to perform complex balance tasks.
Fractional dynamics using an ensemble of classical trajectories
NASA Astrophysics Data System (ADS)
Sun, Zhaopeng; Dong, Hao; Zheng, Yujun
2018-01-01
A trajectory-based formulation for fractional dynamics is presented, and the trajectories are generated deterministically. In this theoretical framework, we derive a new class of estimators in terms of the confluent hypergeometric function ₁F₁ to represent the Riesz fractional derivative. Using this method, simulations of free and confined Lévy flights are in excellent agreement with exact numerical and analytical results. In addition, barrier crossing in a bistable potential driven by Lévy noise of index α is investigated. In phase space, the behavior of the trajectories reveals the features of Lévy flights from a better perspective.
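For comparison with the deterministic trajectory ensemble described above, a direct stochastic simulation of free and harmonically confined Lévy flights takes only a few lines; step sizes, index, and the Euler confinement scheme are illustrative.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(8)
alpha, n_traj, n_steps, dt = 1.5, 200, 500, 0.01

# Symmetric Lévy-stable increments scaled for the time step.
increments = levy_stable.rvs(alpha, 0.0, size=(n_traj, n_steps),
                             random_state=rng) * dt**(1.0 / alpha)
free_flight = np.cumsum(increments, axis=1)

# Confinement in a harmonic potential V(x) = x^2 / 2, via an Euler scheme.
x = np.zeros(n_traj)
for k in range(n_steps):
    x += -x * dt + increments[:, k]

# Interquartile range is a robust spread measure (the variance diverges
# for alpha < 2).
iqr = lambda v: np.percentile(v, 75) - np.percentile(v, 25)
print("free-flight spread (IQR):     ", round(iqr(free_flight[:, -1]), 3))
print("confined stationary spread:   ", round(iqr(x), 3))
```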
Zeming, Kerwin Kwek; Salafi, Thoriq; Chen, Chia-Hung; Zhang, Yong
2016-01-01
Deterministic lateral displacement (DLD) is a method for particle separation in microfluidic devices that has been extensively used in recent years due to its high resolution and robust separation. DLD has shown versatility across a wide spectrum of applications, sorting microparticles ranging from parasites and blood cells to bacteria and DNA. The DLD model is designed for spherical particles, so efficient separation of blood cells is challenging due to their non-uniform shape and size. Moreover, separation in the sub-micron regime requires the gap size of DLD systems to be reduced, which exponentially increases the device resistance and greatly reduces throughput. This paper shows how a simple application of asymmetric DLD gap sizes, changing the ratio of the lateral gap (GL) to the downstream gap (GD), enables efficient separation of RBCs without greatly restricting throughput. This method reduces the need for challenging fabrication of DLD pillars and provides new insight into the current DLD model. The separation shows an increase in DLD critical diameter resolution (separating smaller particles) and increased selectivity for non-spherical RBCs. The RBCs separate better than in a standard DLD model with symmetrical gap sizes. This method can be applied to separate non-spherical bacteria or sub-micron particles to enhance throughput and DLD resolution. PMID:26961061
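For context, the symmetric-gap baseline that this paper modifies is usually designed around a Davis-type empirical correlation for the critical diameter. Treating a single effective gap this way is an assumption for illustration; the paper's point is precisely that asymmetric GL/GD ratios change this picture.

```python
# Davis-type empirical estimate of the DLD critical diameter:
#   Dc = 1.4 * G * eps**0.48,
# where G is the (symmetric) gap size and eps is the row-shift fraction.
def critical_diameter(gap_um: float, eps: float) -> float:
    return 1.4 * gap_um * eps**0.48

for gap in (4.0, 10.0):
    for eps in (0.1, 0.05):
        print(f"G = {gap:5.1f} um, eps = {eps}: "
              f"Dc = {critical_diameter(gap, eps):.2f} um")
```

The eps**0.48 scaling shows why sub-micron separation forces small gaps (and hence high fluidic resistance), which is the throughput bottleneck the asymmetric-gap design targets.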
Selective Attention, Diffused Attention, and the Development of Categorization
Deng, Wei (Sophia); Sloutsky, Vladimir M.
2016-01-01
How do people learn categories and what changes with development? The current study attempts to address these questions by focusing on the role of attention in the development of categorization. In Experiment 1, participants (adults, 7-year-olds, and 4-year-olds) were trained with novel categories consisting of deterministic and probabilistic features, and their categorization and memory for features were tested. In Experiment 2, participants’ attention was directed to the deterministic feature, and in Experiment 3 it was directed to the probabilistic features. Attentional cuing affected categorization and memory in adults and 7-year-olds: these participants relied on the cued features in their categorization and exhibited better memory of cued than of non-cued features. In contrast, in 4-year-olds attentional cueing affected only categorization, but not memory: these participants exhibited equally good memory for both cued and non-cued features. Furthermore, across the experiments, 4-year-olds remembered non-cued features better than adults. These results coupled with computational simulations provide novel evidence (1) pointing to differences in category representation and mechanisms of categorization across development, (2) elucidating the role of attention in the development of categorization, and (3) suggesting an important distinction between representation and decision factors in categorization early in development. These issues are discussed with respect to theories of categorization and its development. PMID:27721103
NASA Astrophysics Data System (ADS)
Felder, Guido; Zischg, Andreas; Weingartner, Rolf
2015-04-01
Estimating peak discharges with very low probabilities is still accompanied by large uncertainties. Common estimation methods are usually based on extreme value statistics applied to observed time series or to hydrological model outputs. However, such methods assume the system to be stationary and do not specifically consider non-stationary effects. Observed time series may exclude events where peak discharge is damped by retention effects, as this process does not occur until specific thresholds, possibly beyond those of the highest measured event, are exceeded. Hydrological models can be complemented and parameterized with non-linear functions. However, in such cases calibration depends on observed data and non-stationary behaviour is not deterministically calculated. Our study discusses the option of considering retention effects on extreme peak discharges by coupling hydrological and hydraulic models. This possibility is tested by forcing the semi-distributed deterministic hydrological model PREVAH with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). The procedure ensures that the estimated extreme peak discharge does not exceed the physical limit given by the riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered.
PREDICTION OF SOLAR FLARE SIZE AND TIME-TO-FLARE USING SUPPORT VECTOR MACHINE REGRESSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boucheron, Laura E.; Al-Ghraibah, Amani; McAteer, R. T. James
We study the prediction of solar flare size and time-to-flare using 38 features describing magnetic complexity of the photospheric magnetic field. This work uses support vector regression to formulate a mapping from the 38-dimensional feature space to a continuous-valued label vector representing flare size or time-to-flare. When we consider flaring regions only, we find an average error in estimating flare size of approximately half a geostationary operational environmental satellite (GOES) class. When we additionally consider non-flaring regions, we find an increased average error of approximately three-fourths a GOES class. We also consider thresholding the regressed flare size for the experiment containing both flaring and non-flaring regions and find a true positive rate of 0.69 and a true negative rate of 0.86 for flare prediction. The results for both of these size regression experiments are consistent across a wide range of predictive time windows, indicating that the magnetic complexity features may be persistent in appearance long before flare activity. This is supported by our larger error rates of some 40 hr in the time-to-flare regression problem. The 38 magnetic complexity features considered here appear to have discriminative potential for flare size, but their persistence in time makes them less discriminative for the time-to-flare problem.
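The regress-then-threshold pipeline described here maps directly onto scikit-learn primitives. The data below are synthetic stand-ins (500 regions, 38 features, a made-up linear relation), not the paper's magnetograms.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(9)

# Synthetic stand-in: 500 active regions x 38 magnetic-complexity features,
# with a continuous "flare size" label in arbitrary GOES-class units.
X = rng.normal(size=(500, 38))
y = 0.6 * X[:, 0] - 0.3 * X[:, 5] + 0.1 * rng.normal(size=500)

model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X[:400], y[:400])
pred = model.predict(X[400:])
print("mean absolute error (class units):",
      round(float(np.mean(np.abs(pred - y[400:]))), 3))

# Thresholding the regressed size turns regression into flare/no-flare
# prediction, as in the paper's true/false positive analysis.
flare = pred > 0.0
print("predicted flaring fraction:", round(float(flare.mean()), 3))
```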
Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.
Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel
2017-06-01
Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.
Optimum Parameters of a Tuned Liquid Column Damper in a Wind Turbine Subject to Stochastic Load
NASA Astrophysics Data System (ADS)
Alkmim, M. H.; de Morais, M. V. G.; Fabro, A. T.
2017-12-01
Parameter optimization for tuned liquid column dampers (TLCDs), a class of passive structural control, has been previously proposed in the literature for reducing vibration in wind turbines, among several other applications. However, most of the available work considers the wind excitation as either a deterministic harmonic load or a random load with a white-noise spectrum. In this paper, a global direct-search optimization algorithm for reducing the vibration of a structure fitted with a TLCD is presented. The objective is to find optimized parameters for the TLCD under stochastic loads from different wind power spectral densities. A verification is made by considering the analytical solution of an undamped primary system under white-noise excitation and comparing with results from the literature. Finally, it is shown that different wind profiles can significantly affect the optimum TLCD parameters.
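A compressed sketch of the optimization loop: compute the primary-structure response variance by integrating |H(ω)|² against a non-white wind spectrum, and search the damper parameters globally. The TLCD is approximated here by an equivalent tuned mass damper, and the spectrum, masses and bounds are all illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Primary structure (SDOF tower mode) with the damper as an equivalent TMD.
m1, w1, z1 = 1.0, 2.0, 0.01          # mass, natural frequency (rad/s), damping
mu = 0.02                            # damper-to-structure mass ratio
w = np.linspace(0.5, 4.0, 1500)      # frequency grid (rad/s)

def wind_psd(w):
    """Illustrative low-frequency wind spectrum (deliberately not white)."""
    return 1.0 / (1.0 + (w / 1.0)**(5.0 / 3.0))

def response_variance(params):
    f_ratio, z2 = params             # damper tuning ratio and damping ratio
    m2, w2 = mu * m1, f_ratio * w1
    k1, k2 = m1 * w1**2, m2 * w2**2
    c1, c2 = 2 * z1 * m1 * w1, 2 * z2 * m2 * w2
    # 2-DOF impedance; [Z^-1]_00 by Cramer's rule gives force-to-motion FRF
    Z00 = -w**2 * m1 + 1j * w * (c1 + c2) + k1 + k2
    Z01 = -1j * w * c2 - k2
    Z11 = -w**2 * m2 + 1j * w * c2 + k2
    H = Z11 / (Z00 * Z11 - Z01 * Z01)
    return np.trapz(np.abs(H)**2 * wind_psd(w), w)

res = differential_evolution(response_variance,
                             bounds=[(0.8, 1.1), (0.01, 0.3)],
                             seed=0, maxiter=40)
print("optimal tuning ratio and damping ratio:", res.x)
print("minimized response variance:", res.fun)
```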
Menstruation, perimenopause, and chaos theory.
Derry, Paula S; Derry, Gregory N
2012-01-01
This article argues that menstruation, including the transition to menopause, results from a specific kind of complex system, namely, one that is nonlinear, dynamical, and chaotic. A complexity-based perspective changes how we think about and research menstruation-related health problems and positive health. Chaotic systems are deterministic but not predictable, characterized by sensitivity to initial conditions and strange attractors. Chaos theory provides a coherent framework that qualitatively accounts for puzzling results from perimenopause research. It directs attention to variability within and between women, adaptation, lifespan development, and the need for complex explanations of disease. Whether the menstrual cycle is chaotic can be empirically tested, and a summary of our research on 20- to 40-year-old women is provided.
Luštrek, Mitja; Lorenz, Peter; Kreutzer, Michael; Qian, Zilliang; Steinbeck, Felix; Wu, Di; Born, Nadine; Ziems, Bjoern; Hecker, Michael; Blank, Miri; Shoenfeld, Yehuda; Cao, Zhiwei; Glocker, Michael O; Li, Yixue; Fuellen, Georg; Thiesen, Hans-Jürgen
2013-01-01
Epitope-antibody-reactivities (EAR) of intravenous immunoglobulins (IVIGs) determined for 75,534 peptides by microarray analysis demonstrate that roughly 9% of peptides derived from 870 different human protein sequences react with antibodies present in IVIG. Computational prediction of linear B cell epitopes was conducted using machine learning with an ensemble of classifiers in combination with position weight matrix (PWM) analysis. Machine learning slightly outperformed PWM with area under the curve (AUC) of 0.884 vs. 0.849. Two different types of epitope-antibody recognition-modes (Type I EAR and Type II EAR) were found. Peptides of Type I EAR are high in tyrosine, tryptophan and phenylalanine, and low in asparagine, glutamine and glutamic acid residues, whereas for peptides of Type II EAR it is the other way around. Representative crystal structures present in the Protein Data Bank (PDB) of Type I EAR are PDB 1TZI and PDB 2DD8, while PDB 2FD6 and 2J4W are typical for Type II EAR. Type I EAR peptides share predicted propensities for being presented by MHC class I and class II complexes. The latter interaction possibly favors T cell-dependent antibody responses including IgG class switching. Peptides of Type II EAR are predicted not to be preferentially presented by MHC complexes, thus implying the involvement of T cell-independent IgG class switch mechanisms. The high extent of IgG immunoglobulin reactivity with human peptides implies that circulating IgG molecules are prone to bind to human protein/peptide structures under non-pathological, non-inflammatory conditions. A webserver for predicting EAR of peptide sequences is available at www.sysmed-immun.eu/EAR.
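The PWM side of the comparison above is short enough to sketch end to end: build log-odds scores from a set of reactive peptides and score a query. The peptide set below is a made-up aromatic-rich toy in the spirit of the Type I EAR description, with a uniform amino-acid background assumed.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
idx = {a: i for i, a in enumerate(AA)}

def build_pwm(positives, length, pseudocount=1.0):
    """Log-odds position weight matrix from reactive peptides of fixed length;
    the background is assumed uniform over the 20 amino acids."""
    counts = np.full((length, 20), pseudocount)
    for pep in positives:
        for pos, aa in enumerate(pep):
            counts[pos, idx[aa]] += 1
    probs = counts / counts.sum(axis=1, keepdims=True)
    return np.log(probs / (1.0 / 20.0))

def score(pwm, pep):
    return sum(pwm[pos, idx[aa]] for pos, aa in enumerate(pep))

# Hypothetical Type I-like reactive peptides (tyrosine/tryptophan-rich).
positives = ["YWFAY", "WYFLY", "FWYAY", "YYWFW"]
pwm = build_pwm(positives, 5)
print("aromatic query score:", round(score(pwm, "YWFAW"), 2))
print("Asn/Gln query score: ", round(score(pwm, "NQENQ"), 2))
```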
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-01
... this notice to solicit comments on the proposed rule change from interested persons. \\1\\ 15 U.S.C. 78s... structure for simple, non-complex orders in equity options classes.\\3\\ This new fees structure factors BBO... that such action is necessary or appropriate in the public interest, for the protection of investors...
Singh, G D; McNamara, J A; Lozanoff, S
1999-01-01
The purpose of this study was to assess soft tissue facial matrices in subjects of diverse ethnic origins with underlying dentoskeletal malocclusions. Pre-treatment lateral cephalographs of 71 Korean and 70 European-American children aged between 5 and 11 years with Angle's Class III malocclusions were traced, and 12 homologous soft tissue landmarks were digitized. Comparing mean Korean and European-American Class III soft tissue profiles, Procrustes analysis established a statistically significant difference (P < 0.001) between the configurations, and this difference held at all seven age groups tested (P < 0.001). Comparing the overall European-American and Korean transformation, thin-plate spline analysis indicated that both affine and non-affine transformations contribute towards the total spline (deformation) of the averaged Class III soft tissue configurations. For non-affine transformations, partial warp (PW) 8 had the highest magnitude, indicating large-scale deformations visualized predominantly as labio-mental protrusion. In addition, PW9, PW4, and PW5 also had high magnitudes, demonstrating labio-mental vertical compression and antero-posterior compression of the lower labio-mental soft tissues. Thus, Korean children with Class III malocclusions demonstrate antero-posterior and vertical deformations of the labio-mental soft tissue complex with respect to their European-American counterparts. Morphological heterogeneity of the soft tissue integument in subjects of diverse ethnic origin may obscure the underlying skeletal morphology, but the soft tissue integument appears to have minimal ontogenetic association with Class III malocclusions.
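The Procrustes superimposition step used in this comparison is available directly in scipy. The landmark arrays below are random stand-ins for the two mean configurations; a permutation test on the disparity would give the kind of P-value the abstract reports.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(10)

# Hypothetical digitized profiles: 12 soft tissue landmarks (x, y) for the
# mean Korean and mean European-American Class III configurations.
korean = rng.normal(size=(12, 2))
euro_american = korean + 0.04 * rng.normal(size=(12, 2))

# Procrustes superimposition removes location, scale and rotation, leaving
# only shape difference; `disparity` is the sum of squared residuals.
mtx1, mtx2, disparity = procrustes(korean, euro_american)
print("Procrustes disparity:", disparity)
```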
Voltage control of magnetic single domains in Ni discs on ferroelectric BaTiO3
NASA Astrophysics Data System (ADS)
Ghidini, M.; Zhu, B.; Mansell, R.; Pellicelli, R.; Lesaine, A.; Moya, X.; Crossley, S.; Nair, B.; Maccherozzi, F.; Barnes, C. H. W.; Cowburn, R. P.; Dhesi, S. S.; Mathur, N. D.
2018-06-01
For 1 µm-diameter Ni discs on a BaTiO3 substrate, the local magnetization direction is determined by ferroelectric domain orientation as a consequence of growth strain, such that single-domain discs lie on single ferroelectric domains. On applying a voltage across the substrate, ferroelectric domain switching yields non-volatile magnetization rotations of 90°, while piezoelectric effects that are small and continuous yield non-volatile magnetization reversals that are non-deterministic. This demonstration of magnetization reversal without ferroelectric domain switching implies reduced fatigue, and therefore represents a step towards applications.
Mu, Chuang; Wang, Ruijia; Li, Tianqi; Li, Yuqiang; Tian, Meilin; Jiao, Wenqian; Huang, Xiaoting; Zhang, Lingling; Hu, Xiaoli; Wang, Shi; Bao, Zhenmin
2016-08-01
Long non-coding RNA (lncRNA) structurally resembles mRNA but cannot be translated into protein. Although the systematic identification and characterization of lncRNAs have been increasingly reported in model species, information concerning non-model species is still lacking. Here, we report the first systematic identification and characterization of lncRNAs in two sea cucumber species: (1) Apostichopus japonicus during lipopolysaccharide (LPS) challenge and in healthy tissues and (2) Holothuria glaberrima during radial organ complex regeneration, using RNA-seq datasets and bioinformatics analysis. We identified A. japonicus and H. glaberrima lncRNAs that were differentially expressed during LPS challenge and radial organ complex regeneration, respectively. Notably, the predicted lncRNA-microRNA-gene trinities revealed that, in addition to targeting protein-coding transcripts, miRNAs might also target lncRNAs, thereby participating in a potential novel layer of regulatory interactions among non-coding RNA classes in echinoderms. Furthermore, the constructed coding-non-coding network implied the potential involvement of lncRNA-gene interactions during the regulation of several important genes (e.g., Toll-like receptor 1 [TLR1] and transglutaminase-1 [TGM1]) in response to LPS challenge and radial organ complex regeneration in sea cucumbers. Overall, this pioneer systematic identification, annotation, and characterization of lncRNAs in echinoderm pave the way for similar studies and future genetic, genomic, and evolutionary research in non-model species.
Immunological Functions of the Membrane Proximal Region of MHC Class II Molecules
Harton, Jonathan; Jin, Lei; Hahn, Amy; Drake, Jim
2016-01-01
Major histocompatibility complex (MHC) class II molecules present exogenously derived antigen peptides to CD4 T cells, driving activation of naïve T cells and supporting CD4-driven immune functions. However, MHC class II molecules are not inert protein pedestals that simply bind and present peptides. These molecules also serve as multi-functional signaling molecules delivering activation, differentiation, or death signals (or a combination of these) to B cells, macrophages, as well as MHC class II-expressing T cells and tumor cells. Although multiple proteins are known to associate with MHC class II, interaction with STING (stimulator of interferon genes) and CD79 is essential for signaling. In addition, alternative transmembrane domain pairing between class II α and β chains influences association with membrane lipid sub-domains, impacting both signaling and antigen presentation. In contrast to the membrane-distal region of the class II molecule responsible for peptide binding and T-cell receptor engagement, the membrane-proximal region (composed of the connecting peptide, transmembrane domain, and cytoplasmic tail) mediates these “non-traditional” class II functions. Here, we review the literature on the function of the membrane-proximal region of the MHC class II molecule and discuss the impact of this aspect of class II immunobiology on immune regulation and human disease. PMID:27006762
Distinguishing humans from computers in the game of go: A complex network approach
NASA Astrophysics Data System (ADS)
Coquidé, C.; Georgeot, B.; Giraud, O.
2017-08-01
We compare complex networks built from the game of go and obtained from databases of human-played games with those obtained from computer-played games. Our investigations show that statistical features of the human-based networks and the computer-based networks differ, and that these differences can be statistically significant on a relatively small number of games using specific estimators. We show that the deterministic or stochastic nature of the computer algorithm playing the game can also be distinguished using these quantities. This can be seen as a tool to implement a Turing-like test for go simulators.
Mathew, Geetha; Unnikrishnan, M K
2015-10-01
Inflammation is a complex, metabolically expensive process involving multiple signaling pathways and regulatory mechanisms which have evolved over evolutionary timescales. Addressing multiple targets of inflammation holistically, in moderation, is probably a more evolutionarily viable strategy than current therapy, which addresses drug targets in isolation. Polypharmacology, addressing multiple targets, is commonly used in complex ailments, suggesting the superior safety and efficacy profile of multi-target (MT) drugs. Phenotypic drug discovery, which generated successful MT and first-in-class drugs in the past, is now re-emerging. A multi-pronged approach, which modulates the evolutionarily conserved, robust and pervasive cellular mechanisms of tissue repair, with AMPK at the helm, regulating the complex metabolic/immune/redox pathways underlying inflammation, is perhaps a more viable strategy than addressing single targets in isolation. Molecules that modulate multiple molecular mechanisms of inflammation in moderation (modulating TH cells toward the anti-inflammatory phenotype, activating AMPK, stimulating Nrf2 and inhibiting NFκB) might serve as a model for a novel Darwinian "first-in-class" therapeutic category that holistically addresses immune, redox and metabolic processes associated with inflammatory repair. Such multimodal biological activity is supported by the fact that several non-calorific pleiotropic natural products with anti-inflammatory action have been incorporated into diet (chiefly guided by the adaptive development of olfacto-gustatory preferences over evolutionary timescales), rendering such molecules, endowed with evolutionarily privileged molecular scaffolds, naturally oriented toward multiple targets.
Dhingra, R. R.; Jacono, F. J.; Fishman, M.; Loparo, K. A.; Rybak, I. A.
2011-01-01
Physiological rhythms, including respiration, exhibit endogenous variability associated with health, and deviations from this are associated with disease. Specific changes in the linear and nonlinear sources of breathing variability have not been investigated. In this study, we used information theory-based techniques, combined with surrogate data testing, to quantify and characterize the vagal-dependent nonlinear pattern variability in urethane-anesthetized, spontaneously breathing adult rats. Surrogate data sets preserved the amplitude distribution and linear correlations of the original data set, but nonlinear correlation structure in the data was removed. Differences in mutual information and sample entropy between original and surrogate data sets indicated the presence of deterministic nonlinear or stochastic non-Gaussian variability. With vagi intact (n = 11), the respiratory cycle exhibited significant nonlinear behavior in templates of points separated by time delays ranging from one sample to one cycle length. After vagotomy (n = 6), even though nonlinear variability was reduced significantly, nonlinear properties were still evident at various time delays. Nonlinear deterministic variability did not change further after subsequent bilateral microinjection of MK-801, an N-methyl-d-aspartate receptor antagonist, in the Kölliker-Fuse nuclei. Reversing the sequence (n = 5), blocking N-methyl-d-aspartate receptors bilaterally in the dorsolateral pons significantly decreased nonlinear variability in the respiratory pattern, even with the vagi intact, and subsequent vagotomy did not change nonlinear variability. Thus both vagal and dorsolateral pontine influences contribute to nonlinear respiratory pattern variability. Furthermore, breathing dynamics of the intact system are mutually dependent on vagal and pontine sources of nonlinear complexity. Understanding the structure and modulation of variability provides insight into disease effects on respiratory patterning. PMID:21527661
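The surrogate-data logic described above can be illustrated with a short sketch. The study used amplitude-preserving surrogates together with mutual information and sample entropy; the simplified version below, a sketch under assumed parameters, uses plain FFT phase randomization (which preserves only the linear correlations) and a naive sample-entropy estimate on a synthetic periodic signal standing in for a breathing trace.

```python
# Minimal illustration of surrogate-data testing for nonlinear structure.
# Phase randomization here is a simplified stand-in for the
# amplitude-preserving surrogates described in the abstract.
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate preserving the power spectrum (linear correlations)."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0                # keep the mean
    if n % 2 == 0:
        phases[-1] = 0.0           # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)

def sample_entropy(x, m=2, r_frac=0.2):
    """Naive O(n^2) sample entropy of a 1-D series."""
    x = np.asarray(x)
    r = r_frac * x.std()
    def pair_count(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        return (np.sum(d <= r) - len(emb)) / 2   # exclude self-matches
    return -np.log(pair_count(m + 1) / pair_count(m))

rng = np.random.default_rng(0)
t = np.arange(1000)
signal = np.sin(0.3 * t) ** 3 + 0.1 * rng.standard_normal(len(t))
orig = sample_entropy(signal)
surr = [sample_entropy(phase_randomized_surrogate(signal, rng)) for _ in range(19)]
# the original falling outside the surrogate range suggests nonlinear structure
print(orig, min(surr), max(surr))
```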
Stochastic and Deterministic Approaches to Gas-grain Modeling of Interstellar Sources
NASA Astrophysics Data System (ADS)
Vasyunin, Anton; Herbst, Eric; Caselli, Paola
During the last decade, our understanding of the chemistry on surfaces of interstellar grains has been significantly enhanced. Extensive laboratory studies have revealed complex structure and dynamics in interstellar ice analogues, thus making our knowledge much more detailed. In addition, the first qualitative investigations of new processes were made, such as non-thermal chemical desorption of species from dust grains into the gas. Not surprisingly, the rapid growth of knowledge about the physics and chemistry of interstellar ices led to the development of a new generation of astrochemical models. The models are typically characterized by more detailed treatments of the ice physics and chemistry than previously. The numerical approaches utilized vary greatly, from microscopic models, in which every single molecule is traced, to ``mean field'' macroscopic models, which simulate the evolution of averaged characteristics of interstellar ices, such as overall bulk composition. While microscopic models based on a stochastic Monte Carlo approach are potentially able to simulate the evolution of interstellar ices with an account of the most subtle effects found in the laboratory, their use is often impractical due to limited knowledge about star-forming regions and huge computational demands. On the other hand, deterministic macroscopic models that often utilize kinetic rate equations are computationally efficient but experience difficulties in incorporating such potentially important effects as ice segregation or the discreteness of surface chemical reactions. In my talk, I will review the state of the art in the development of gas-grain astrochemical models. I will discuss how to incorporate key features of ice chemistry and dynamics in gas-grain astrochemical models, and how the incorporation of recent laboratory findings into gas-grain models helps to better match observations.
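The contrast between the two families of models can be made concrete with a toy reaction system. The sketch below compares a mean-field rate equation against a Gillespie-style stochastic simulation for hypothetical H-atom accretion and recombination on a single grain; both rate constants are invented, and real networks involve many more species and processes.

```python
# Deterministic rate equation versus stochastic (Gillespie) simulation
# for a toy grain-surface system: H accretion and H + H -> H2.
# Rates are illustrative, not taken from the literature.
import numpy as np

k_acc = 1e-3    # accretion events per second (assumed)
k_rec = 1e-2    # recombination rate per pair per second (assumed)
T = 2.0e5       # simulated time (s)

# mean-field rate equation: dN/dt = k_acc - 2*k_rec*N^2
dt, N = 1.0, 0.0
for _ in range(int(T / dt)):
    N += dt * (k_acc - 2.0 * k_rec * N * N)
print("rate-equation steady state:", N)

# Gillespie simulation with a discrete population n
rng = np.random.default_rng(1)
n, t, acc = 0, 0.0, 0.0
while t < T:
    a1 = k_acc                    # accretion propensity
    a2 = k_rec * n * (n - 1)      # recombination needs at least 2 atoms
    tau = rng.exponential(1.0 / (a1 + a2))
    acc += n * tau                # time-weighted population
    t += tau
    if rng.uniform() * (a1 + a2) < a1:
        n += 1
    else:
        n -= 2
print("stochastic time-averaged population:", acc / t)
```

With these numbers the mean-field steady state falls below one atom per grain, the regime in which rate equations are known to become unreliable and a stochastic treatment matters.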
A Robust Scalable Transportation System Concept
NASA Technical Reports Server (NTRS)
Hahn, Andrew; DeLaurentis, Daniel
2006-01-01
This report documents the 2005 Revolutionary System Concept for Aeronautics (RSCA) study entitled "A Robust, Scalable Transportation System Concept". The objective of the study was to generate, at a high level of abstraction, characteristics of a new concept for the National Airspace System, or the new NAS, under which transportation goals such as increased throughput, delay reduction, and improved robustness could be realized. Since such an objective can be overwhelmingly complex if pursued at the lowest levels of detail, a System-of-Systems (SoS) approach was instead adopted to model alternative air transportation architectures at a high level. The SoS approach allows the consideration of not only the technical aspects of the NAS, but also incorporates policy, socio-economic, and alternative transportation system considerations into one architecture. While the representations of the individual systems are basic, the higher-level approach allows for ways to optimize the SoS at the network level, determining the best topology (i.e. configuration of nodes and links). The final product (concept) is a set of rules of behavior and network structure that not only satisfies national transportation goals, but represents the high-impact rules that accomplish those goals by getting the agents to "do the right thing" naturally. The novel combination of Agent Based Modeling and Network Theory provides the core analysis methodology in the System-of-Systems approach. Our method of approach is non-deterministic, which means, fundamentally, that it asks and answers different questions than deterministic models. The non-deterministic method is necessary primarily due to our marriage of human systems with technological ones in a partially unknown set of future worlds. Our goal is to understand and simulate how the SoS, human and technological components combined, evolve.
Partner symmetries and non-invariant solutions of four-dimensional heavenly equations
NASA Astrophysics Data System (ADS)
Malykh, A. A.; Nutku, Y.; Sheftel, M. B.
2004-07-01
We extend our method of partner symmetries to the hyperbolic complex Monge-Ampère equation and the second heavenly equation of Plebański. We show the existence of partner symmetries and derive the relations between them. For certain simple choices of partner symmetries the resulting differential constraints together with the original heavenly equations are transformed to systems of linear equations by an appropriate Legendre transformation. The solutions of these linear equations are generically non-invariant. As a consequence we obtain explicitly new classes of heavenly metrics without Killing vectors.
NASA Technical Reports Server (NTRS)
Yunis, Isam S.; Carney, Kelly S.
1993-01-01
A new aerospace application of structural reliability techniques is presented, where the applied forces depend on many probabilistic variables. This application is the plume impingement loading of the Space Station Freedom Photovoltaic Arrays. When the space shuttle berths with Space Station Freedom it must brake and maneuver towards the berthing point using its primary jets. The jet exhaust, or plume, may cause high loads on the photovoltaic arrays. The many parameters governing this problem are highly uncertain and random. An approach, using techniques from structural reliability, as opposed to the accepted deterministic methods, is presented which assesses the probability of failure of the array mast due to plume impingement loading. A Monte Carlo simulation of the berthing approach is used to determine the probability distribution of the loading. A probability distribution is also determined for the strength of the array. Structural reliability techniques are then used to assess the array mast design. These techniques are found to be superior to the standard deterministic dynamic transient analysis, for this class of problem. The results show that the probability of failure of the current array mast design, during its 15 year life, is minute.
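As a schematic of the load-versus-strength reliability calculation described above, a minimal Monte Carlo sketch follows; the distributions and parameters are invented for illustration, whereas the study derived them from berthing simulations and array strength models.

```python
# Schematic load-versus-strength Monte Carlo reliability estimate.
# Distribution shapes and parameters are assumed, not the study's.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# plume-impingement load on the mast (lognormal, assumed parameters)
load = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=n)
# mast strength (normal, assumed parameters), same load units
strength = rng.normal(loc=120.0, scale=10.0, size=n)

p_fail = np.mean(load > strength)
print(f"estimated probability of failure: {p_fail:.2e}")
```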
Modeling the spreading of large-scale wildland fires
Mohamed Drissi
2015-01-01
The objective of the present study is twofold. First, the latest developments and validation results of a hybrid model designed to simulate fire patterns in heterogeneous landscapes are presented. The model combines the features of a stochastic small-world network model with those of a deterministic semi-physical model of the interaction between burning and non-burning...
Neural nets with terminal chaos for simulation of non-deterministic patterns
NASA Technical Reports Server (NTRS)
Zak, Michail
1993-01-01
Models for simulating some aspects of neural intelligence are presented and discussed. Special attention is given to terminal neurodynamics as a particular architecture of terminal dynamics suitable for modeling information flows. Applications of terminal chaos to information fusion as well as to planning and modeling coordination among neurons in biological systems are discussed.
ERIC Educational Resources Information Center
Barrett, Brian D.; Martina, Camille Anne
2012-01-01
Building on the social reproduction theory of Pierre Bourdieu, this study examines the impact of school context and institutional agency on shaping urban students' access to social and cultural capital resources, which are selectively valued and rewarded by the education system, in two schools across two high-poverty, intensely segregated urban…
Reach for the Stars: A Constellational Approach to Ethnographies of Elite Schools
ERIC Educational Resources Information Center
Prosser, Howard
2014-01-01
This paper offers a method for examining elite schools in a global setting by appropriating Theodor Adorno's constellational approach. I contend that arranging ideas and themes in a non-deterministic fashion can illuminate the social reality of elite schools. Drawing on my own fieldwork at an elite school in Argentina, I suggest that local and…
Scott, B R; Lyzlov, A F; Osovets, S V
1998-05-01
During a Phase-I effort, studies were planned to evaluate deterministic (nonstochastic) effects of chronic exposure of nuclear workers at the Mayak atomic complex in the former Soviet Union to relatively high levels (> 0.25 Gy) of ionizing radiation. The Mayak complex has been used, since the late 1940's, to produce plutonium for nuclear weapons. Workers at Site A of the complex were involved in plutonium breeding using nuclear reactors, and some were exposed to relatively large doses of gamma rays plus relatively small neutron doses. The Weibull normalized-dose model, which has been set up to evaluate the risk of specific deterministic effects of combined, continuous exposure of humans to alpha, beta, and gamma radiations, is here adapted for chronic exposure to gamma rays and neutrons during repeated 6-h work shifts--as occurred for some nuclear workers at Site A. Using the adapted model, key conclusions were reached that will facilitate a Phase-II study of deterministic effects among Mayak workers. These conclusions include the following: (1) neutron doses may be more important for Mayak workers than for Japanese A-bomb victims in Hiroshima and can be accounted for using an adjusted dose (which accounts for neutron relative biological effectiveness); (2) to account for dose-rate effects, the normalized dose X (a dimensionless fraction of an LD50 or ED50) can be evaluated in terms of an adjusted dose; (3) nonlinear dose-response curves for the risk of death via the hematopoietic mode can be converted to linear dose-response curves (for low levels of risk) using a newly proposed dimensionless dose, D = X^V, in units of Oklad (where D is pronounced "deh"), where V is the shape parameter in the Weibull model; (4) for X <= X0, where X0 is the threshold normalized dose, D = 0; (5) unlike absorbed dose, the dose D can be averaged over different Mayak workers in order to calculate the average risk of death via the hematopoietic mode for the population exposed at Site A; and (6) the expected cases of death via the hematopoietic syndrome mode for Mayak workers chronically exposed during work shifts at Site A to gamma rays and neutrons can be predicted using ln(2)·B·M[D], where B (pronounced "beh") is the number of workers at risk (criticality accident victims excluded) and M[D] is the average (mean) value of D (averaged over the worker population at risk, for Site A, for the time period considered). These results can be used to facilitate a Phase-II study of deterministic radiation effects among Mayak workers chronically exposed to gamma rays and neutrons.
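The dose bookkeeping in conclusions (1)-(6) can be sketched numerically. In the snippet below, the shape parameter V, threshold X0, LD50, neutron RBE, and the per-worker doses are all placeholder values chosen only to show the flow of the calculation.

```python
# Numerical sketch of the normalized-dose bookkeeping described above.
# All parameter values are placeholders, not fitted values.
import numpy as np

V, X0, LD50, RBE_N = 6.0, 0.5, 3.0, 2.0   # shape, threshold, Gy, neutron RBE

def normalized_dose(gamma_gy, neutron_gy):
    """X: adjusted dose expressed as a fraction of the LD50."""
    return (gamma_gy + RBE_N * neutron_gy) / LD50

def oklad(X):
    """D = X**V above the threshold X0, and 0 otherwise."""
    return np.where(X <= X0, 0.0, X ** V)

# hypothetical per-worker cumulative doses (Gy)
gamma = np.array([0.3, 0.8, 1.2, 2.0, 0.5])
neutron = np.array([0.02, 0.05, 0.10, 0.15, 0.03])
D = oklad(normalized_dose(gamma, neutron))

B = len(D)                                   # workers at risk
expected_deaths = np.log(2) * B * D.mean()   # ln(2) * B * M[D]
print(expected_deaths)
```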
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1995-01-01
Standard methods of structural dynamic analysis assume that the structural characteristics are deterministic. Recognizing that these characteristics are actually statistical in nature, researchers have recently developed a variety of methods that use this information to determine probabilities of a desired response characteristic, such as natural frequency, without using expensive Monte Carlo simulations. One of the problems in these methods is correctly identifying the statistical properties of primitive variables such as geometry, stiffness, and mass. This paper presents a method where the measured dynamic properties of substructures are used instead as the random variables. The residual flexibility method of component mode synthesis is combined with the probabilistic methods to determine the cumulative distribution function of the system eigenvalues. A simple cantilever beam test problem is presented that illustrates the theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel
2016-10-01
A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single photon absorption by free carriers. The second step is stochastic and models electron–electron collisions using Monte-Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
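A toy version of such a split-step scheme might look as follows: a deterministic substep (uniform free-carrier heating) alternates with a stochastic substep (random pairwise energy exchange standing in for an electron-electron collision kernel). The rates and the collision rule are invented, not those of the published model.

```python
# Toy split-step integrator: deterministic heating step followed by a
# stochastic Monte-Carlo collision step. All rates are assumptions.
import numpy as np

rng = np.random.default_rng(7)
n_el, dt, n_steps = 10000, 1e-16, 500     # electrons, step (s), steps
heat_rate = 1e13                          # eV/s per electron (assumed)
coll_prob = 0.05                          # pair collision probability per step

energy = np.abs(rng.normal(1.0, 0.8, n_el))   # non-equilibrium start (eV)
for _ in range(n_steps):
    # step 1 (deterministic): uniform single-photon heating
    energy += heat_rate * dt
    # step 2 (stochastic): random pairs share their energy equally,
    # a crude energy-conserving stand-in for e-e collisions
    idx = rng.permutation(n_el)
    a, b = idx[::2], idx[1::2]
    hit = rng.uniform(size=len(a)) < coll_prob
    pair_mean = 0.5 * (energy[a[hit]] + energy[b[hit]])
    energy[a[hit]] = pair_mean
    energy[b[hit]] = pair_mean
print(energy.mean(), energy.std())   # mean rises; the spread relaxes
```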
Edge states in the climate system: exploring global instabilities and critical transitions
NASA Astrophysics Data System (ADS)
Lucarini, Valerio; Bódai, Tamás
2017-07-01
Multistability is a ubiquitous feature in systems of geophysical relevance and provides key challenges for our ability to predict a system’s response to perturbations. Near critical transitions small causes can lead to large effects and—for all practical purposes—irreversible changes in the properties of the system. As is well known, the Earth climate is multistable: present astronomical and astrophysical conditions support two stable regimes, the warm climate we live in, and a snowball climate characterized by global glaciation. We first provide an overview of methods and ideas relevant for studying the climate response to forcings and focus on the properties of critical transitions in the context of both stochastic and deterministic dynamics, and assess strengths and weaknesses of simplified approaches to the problem. Following an idea developed by Eckhardt and collaborators for the investigation of multistable turbulent fluid dynamical systems, we study the global instability giving rise to the snowball/warm multistability in the climate system by identifying the climatic edge state, a saddle embedded in the boundary between the two basins of attraction of the stable climates. The edge state attracts initial conditions belonging to such a boundary and, while being defined by the deterministic dynamics, is the gate facilitating noise-induced transitions between competing attractors. We use a simplified yet Earth-like intermediate complexity climate model constructed by coupling a primitive equations model of the atmosphere with a simple diffusive ocean. We refer to the climatic edge states as Melancholia states and provide an extensive analysis of their features. We study their dynamics, their symmetry properties, and we follow a complex set of bifurcations. We find situations where the Melancholia state has chaotic dynamics. In these cases, we have that the basin boundary between the two basins of attraction is a strange geometric set with a nearly zero codimension, and relate this feature to the time scale separation between instabilities occurring on weather and climatic time scales. We also discover a new stable climatic state that is similar to a Melancholia state and is characterized by non-trivial symmetry properties.
A white-box model of S-shaped and double S-shaped single-species population growth
Kalmykov, Lev V.
2015-01-01
Complex systems may be mechanistically modelled by white-box modeling using logical deterministic individual-based cellular automata. Mathematical models of complex systems are of three types: black-box (phenomenological), white-box (mechanistic, based on first principles) and grey-box (mixtures of phenomenological and mechanistic models). Most basic ecological models are of the black-box type, including the Malthusian, Verhulst, and Lotka–Volterra models. In black-box models, the individual-based (mechanistic) mechanisms of population dynamics remain hidden. Here we mechanistically model the S-shaped and double S-shaped population growth of vegetatively propagated rhizomatous lawn grasses. Using purely logical deterministic individual-based cellular automata, we create a white-box model. From a general physical standpoint, the vegetative propagation of plants is an analogue of excitation propagation in excitable media. Using the Monte Carlo method, we investigate the role of different initial positionings of an individual in the habitat. We have investigated mechanisms of single-species population growth limited by habitat size, intraspecific competition, regeneration time and fecundity of individuals under two types of boundary conditions and two levels of fecundity. Besides that, we have compared S-shaped and J-shaped population growth. We consider this white-box modeling approach a method of artificial intelligence which works as automatic hyper-logical inference from the first principles of the studied subject. This approach is promising for direct mechanistic insights into the nature of any complex system. PMID:26038717
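A minimal cellular-automaton sketch in this spirit follows. It keeps only a deterministic neighbor-colonization rule in a bounded habitat; the regeneration time, fecundity, and boundary-condition variants studied above are omitted.

```python
# Deterministic cellular automaton producing S-shaped growth in a
# bounded habitat (a stripped-down sketch, not the published model).
import numpy as np

size, steps = 51, 60
grid = np.zeros((size, size), dtype=bool)
grid[size // 2, size // 2] = True            # single founder individual

history = [int(grid.sum())]
for _ in range(steps):
    # an empty cell is colonized if any von Neumann neighbour is
    # occupied (closed boundary)
    nbr = np.zeros_like(grid)
    nbr[1:, :] |= grid[:-1, :]
    nbr[:-1, :] |= grid[1:, :]
    nbr[:, 1:] |= grid[:, :-1]
    nbr[:, :-1] |= grid[:, 1:]
    grid |= nbr
    history.append(int(grid.sum()))
print(history)   # slow start, acceleration, saturation: an S-shaped curve
```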
Can complexity decrease in congestive heart failure?
NASA Astrophysics Data System (ADS)
Mukherjee, Sayan; Palit, Sanjay Kumar; Banerjee, Santo; Ariffin, M. R. K.; Rondoni, Lamberto; Bhattacharya, D. K.
2015-12-01
The complexity of a signal can be measured by the recurrence period density entropy (RPDE) of the reconstructed phase space. We have chosen a window-based RPDE method for the classification of signals, as RPDE is an average entropic measure of the whole phase space. We have observed changes in the complexity of cardiac signals of normal healthy persons (NHP) and congestive heart failure patients (CHFP). The results show that the cardiac dynamics of a healthy subject is more complex and random compared to that of a heart failure patient, whose dynamics is more deterministic. We have constructed a general threshold to distinguish the borderline between healthy and congestive heart failure dynamics. The results may be useful for a wide range of physiological and biomedical analyses.
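A compact sketch of an RPDE estimate, following the generic recipe (time-delay embedding, first-return periods after leaving an ε-ball, normalized Shannon entropy), is given below. The embedding dimension, delay, and ε are illustrative choices, not the windowed settings of the study.

```python
# Sketch of recurrence period density entropy (RPDE) for a scalar signal.
import numpy as np

def rpde(x, dim=3, tau=5, eps=0.12):
    x = (x - x.mean()) / x.std()
    n = len(x) - (dim - 1) * tau
    emb = np.array([x[i:i + n] for i in range(0, dim * tau, tau)]).T
    periods = []
    for i in range(n):
        left = False
        for j in range(i + 1, n):
            inside = np.max(np.abs(emb[j] - emb[i])) < eps
            if not inside:
                left = True
            elif left:                     # first return after leaving the ball
                periods.append(j - i)
                break
    hist = np.bincount(periods)
    p = hist[hist > 0] / len(periods)
    return -(p * np.log(p)).sum() / np.log(len(hist))   # normalized to [0, 1]

rng = np.random.default_rng(3)
t = np.linspace(0, 24 * np.pi, 1200)
print(rpde(np.sin(t)))                     # periodic signal: low RPDE
print(rpde(rng.standard_normal(1200)))     # white noise: much higher RPDE
```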
Non-invasive biomarkers for monitoring the fibrogenic process in liver: A short survey
Gressner, Axel M; Gao, Chun-Fang; Gressner, Olav A
2009-01-01
The clinical course of chronic liver diseases is significantly dependent on the progression rate and the extent of fibrosis, i.e. the non-structured replacement of necrotic parenchyma by extracellular matrix. Fibrogenesis, i.e. the development of fibrosis, can be regarded as an unlimited wound healing process, which is based on matrix (connective tissue) synthesis in activated hepatic stellate cells, fibroblasts (fibrocytes), hepatocytes and biliary epithelial cells, which are converted to matrix-producing (myo-)fibroblasts by a process defined as epithelial-mesenchymal transition. Blood (non-invasive) biomarkers of fibrogenesis and fibrosis can be divided into class I and class II analytes. Class I biomarkers are single tests based on the pathophysiology of fibrosis, whereas class II biomarkers are mostly multiparametric algorithms that have been statistically evaluated with regard to the detection and activity of ongoing fibrosis. Currently available markers fulfil the criteria of ideal clinical-chemical tests only partially, but increased understanding of the complex pathogenesis of fibrosis offers additional routes to pathophysiologically well-founded serum (plasma) biomarkers. They include TGF-β-driven marker proteins, bone marrow-derived cells (fibrocytes), and cytokines, which govern pro- and anti-fibrotic activities. Proteomic and glycomic approaches to serum are under investigation to establish specific protein or carbohydrate profiles in patients with liver fibrosis. These and other novel parameters will supplement or eventually replace liver biopsy/histology, high-resolution imaging analysis, and elastography for the detection and monitoring of patients at risk of developing liver fibrosis. PMID:19468990
Non-Systemic Drugs: A Critical Review
Charmot, Dominique
2012-01-01
Non-systemic drugs act within the intestinal lumen without reaching the systemic circulation. The first generation included polymeric resins that sequester phosphate ions, potassium ions, or bile acids for the treatment of electrolyte imbalances or hypercholesteremia. The field has evolved towards non-absorbable small molecules or peptides targeting luminal enzymes or transporters for the treatment of mineral metabolism disorders, diabetes, gastrointestinal (GI) disorders, and enteric infections. From a drug design and development perspective, non-systemic agents offer novel opportunities to address unmet medical needs while minimizing toxicity risks, but also present new challenges, including developing a better understanding and control of non-transcellular leakage pathways into the systemic circulation. The pharmacokinetic-pharmacodynamic relationship of drugs acting in the GI tract can be complex due to the variability of intestinal transit, interaction with chyme, and the complex environment of the surface epithelia. We review the main classes of non-absorbable agents at various stages of development, and their therapeutic potential and limitations. The rapid progress in the identification of intestinal receptors and transporters, their functional characterization and role in metabolic and inflammatory disorders, will undoubtedly renew interest in the development of novel, safe, non-systemic therapeutics. PMID:22300258
Definitions of Complexity are Notoriously Difficult
NASA Astrophysics Data System (ADS)
Schuster, Peter
Definitions of complexity are notoriously difficult, if not impossible. A good working hypothesis might be: everything is complex that is not simple. This is precisely the way in which we define nonlinear behavior. Things appear complex for different reasons: i) complexity may result from lack of insight, ii) complexity may result from lack of methods, and iii) complexity may be inherent to the system. The best known example for i) is celestial mechanics: the highly complex Pythagorean epicycles became obsolete with the introduction of Newton's law of universal gravitation. To give an example for ii), pattern formation and deterministic chaos were not really understandable before extensive computer simulations became possible. Cellular metabolism may serve as an example for iii): its complexity is inherent, arising from the enormous complexity of biochemical reaction networks with up to one hundred individual reaction fluxes. Nevertheless, only a few fluxes are dominant, in the sense that using Pareto optimal values for them provides near optimal values for all the others...
Yamashita, Yuichi; Okumura, Tetsu; Okanoya, Kazuo; Tani, Jun
2011-01-01
How the brain learns and generates temporal sequences is a fundamental issue in neuroscience. The production of birdsongs, a process which involves complex learned sequences, provides researchers with an excellent biological model for this topic. The Bengalese finch in particular learns a highly complex song with syntactical structure. The nucleus HVC (HVC), a premotor nucleus within the avian song system, plays a key role in generating the temporal structures of their songs. From lesion studies, the nucleus interfacialis (NIf) projecting to the HVC is considered one of the essential regions that contribute to the complexity of their songs. However, the types of interaction between the HVC and the NIf that can produce complex syntactical songs remain unclear. In order to investigate the function of interactions between the HVC and NIf, we have proposed a neural network model based on previous biological evidence. The HVC is modeled by a recurrent neural network (RNN) that learns to generate temporal patterns of songs. The NIf is modeled as a mechanism that provides auditory feedback to the HVC and generates random noise that feeds into the HVC. The model showed that complex syntactical songs can be replicated by simple interactions between deterministic dynamics of the RNN and random noise. In the current study, the plausibility of the model is tested by the comparison between the changes in the songs of actual birds induced by pharmacological inhibition of the NIf and the changes in the songs produced by the model resulting from modification of parameters representing NIf functions. The efficacy of the model demonstrates that the changes of songs induced by pharmacological inhibition of the NIf can be interpreted as a trade-off between the effects of noise and the effects of feedback on the dynamics of the RNN of the HVC. These facts suggest that the current model provides a convincing hypothesis for the functional role of NIf–HVC interaction. PMID:21559065
Statistical Physics of Complex Substitutive Systems
NASA Astrophysics Data System (ADS)
Jin, Qing
Diffusion processes are central to human interactions. Despite extensive studies that span multiple disciplines, our knowledge is limited to spreading processes in non-substitutive systems. Yet a considerable number of ideas, products, and behaviors spread by substitution; to adopt a new one, agents must give up an existing one. This captures the spread of scientific constructs--forcing scientists to choose, for example, a deterministic or probabilistic worldview--as well as the adoption of durable items, such as mobile phones, cars, or homes. In this dissertation, I develop a statistical physics framework to describe, quantify, and understand substitutive systems. By empirically exploring three collected high-resolution datasets pertaining to such systems, I build a mechanistic model describing substitutions, which not only analytically predicts the universal macroscopic phenomenon discovered in the collected datasets, but also accurately captures the trajectories of individual items in a complex substitutive system, demonstrating a high degree of regularity and universality in substitutive systems. I also discuss the origins and interpretation of the parameters in the substitution model and possible generalized forms of the mathematical framework. The systematic study of substitutive systems presented in this dissertation could potentially guide the understanding and prediction of all spreading phenomena driven by substitutions, from electric cars to scientific paradigms, and from renewable energy to new healthy habits.
NASA Astrophysics Data System (ADS)
Menapace, Joseph A.
2010-11-01
Over the last eight years we have been developing advanced MRF tools and techniques to manufacture meter-scale optics for use in Megajoule class laser systems. These systems call for optics having unique characteristics that can complicate their fabrication using conventional polishing methods. First, exposure to the high-power nanosecond and sub-nanosecond pulsed laser environment in the infrared (>27 J/cm2 at 1053 nm), visible (>18 J/cm2 at 527 nm), and ultraviolet (>10 J/cm2 at 351 nm) demands ultra-precise control of optical figure and finish to avoid intensity modulation and scatter that can result in damage to the optics chain or system hardware. Second, the optics must be super-polished and virtually free of surface and subsurface flaws that can limit optic lifetime through laser-induced damage initiation and growth at the flaw sites, particularly at 351 nm. Lastly, ultra-precise optics for beam conditioning are required to control laser beam quality. These optics contain customized surface topographical structures that cannot be made using traditional fabrication processes. In this review, we will present the development and implementation of large-aperture MRF tools and techniques specifically designed to meet the demanding optical performance challenges required in large-aperture high-power laser systems. In particular, we will discuss the advances made by using MRF technology to expose and remove surface and subsurface flaws in optics during final polishing to yield optics with improved laser damage resistance, the novel application of MRF deterministic polishing to imprint complex topographical information and wavefront correction patterns onto optical surfaces, and our efforts to advance the technology to manufacture large-aperture damage-resistant optics.
Engen, Steinar; Lee, Aline Magdalena; Sæther, Bernt-Erik
2018-02-01
We analyze a spatial age-structured model with density regulation, age specific dispersal, stochasticity in vital rates and proportional harvesting. We include two age classes, juveniles and adults, where juveniles are subject to logistic density dependence. There are environmental stochastic effects with arbitrary spatial scales on all birth and death rates, and individuals of both age classes are subject to density independent dispersal with given rates and specified distributions of dispersal distances. We show how to simulate the joint density fields of the age classes and derive results for the spatial scales of all spatial autocovariance functions for densities. A general result is that the squared scale has an additive term equal to the squared scale of the environmental noise, corresponding to the Moran effect, as well as additive terms proportional to the dispersal rate and variance of dispersal distance for the age classes and approximately inversely proportional to the strength of density regulation. We show that the optimal harvesting strategy in the deterministic case is to harvest only juveniles when their relative value (e.g. financial) is large, and otherwise only adults. With increasing environmental stochasticity there is an interval of increasing length of values of juveniles relative to adults where both age classes should be harvested. Harvesting generally tends to increase all spatial scales of the autocovariances of densities.
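A nonspatial toy version of such a two-age-class model, with common environmental noise and proportional harvesting, can be written in a few lines; every parameter value below is invented, and the spatial structure, dispersal, and autocovariance analysis of the actual model are omitted.

```python
# Two-age-class toy model: logistic regulation of juveniles, stochastic
# vital rates, proportional harvesting. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(11)
b, s_j, s_a, K = 1.2, 0.5, 0.8, 1000.0    # fecundity, survivals, capacity
h_j, h_a = 0.0, 0.1                       # harvest fractions per class
J, A = 50.0, 50.0
for _ in range(200):
    eps = rng.normal(0.0, 0.1)            # common environmental noise
    births = b * A * np.exp(eps)
    J_next = max(births * (1.0 - J / K), 0.0)   # logistic density dependence
    A_next = s_j * J + s_a * A
    J = J_next * (1.0 - h_j)
    A = A_next * (1.0 - h_a)
print(J, A)
```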
Oxime Ether Lipids as Transfection Agents: Assembly and Complexation with siRNA.
Puri, Anu; Zampino, Serena; Viard, Mathias; Shapiro, Bruce A
2017-01-01
RNAi-based therapeutic approaches to combat cancer and other diseases are currently an area of great interest. However, practical applications of this approach rely on optimal tools to carry and deliver siRNA to the desired site. Oxime ether lipids (OELs) are a class of molecules among other various carriers being examined for siRNA delivery. OELs, relatively new candidates, belong to a class of non-glycerol based lipids and have begun to claim their place as an siRNA delivery carrier in the field of RNAi therapy. Chemical synthesis steps of OELs are considered relatively simple with the ability to modify the functionalities as desired. OEL-siRNA complexes can be assembled in the presence of serum-containing buffers (or cell culture media) and recent data from our and other groups have demonstrated that OELs are viable carriers for siRNA delivery in the cell culture systems. In this chapter, we provide the details of experimental protocols routinely used in our laboratory to examine OEL-siRNA complexes including their assembly, stability, and transfection efficiencies.
Atzei, A; Luchetti, R; Garagnani, L
2017-05-01
The classical definition of 'Palmer Type IB' triangular fibrocartilage complex tear includes a spectrum of clinical conditions. This review highlights the clinical and arthroscopic criteria that enable us to categorize five classes in a treatment-oriented classification system of triangular fibrocartilage complex peripheral tears. Class 1 lesions represent isolated tears of the distal triangular fibrocartilage complex without distal radio-ulnar joint instability and are amenable to arthroscopic suture. Class 2 tears include rupture of both the distal triangular fibrocartilage complex and the proximal attachments of the triangular fibrocartilage complex to the fovea. Class 3 tears constitute isolated ruptures of the proximal attachment of the triangular fibrocartilage complex to the fovea; they are not visible at radio-carpal arthroscopy. Both Class 2 and Class 3 tears are diagnosed with a positive hook test and are typically associated with distal radio-ulnar joint instability. If required, treatment is through reattachment of the distal radio-ulnar ligament insertions to the fovea. Class 4 lesions are irreparable tears due to the size of the defect or to poor tissue quality and, if required, treatment is through distal radio-ulnar ligament reconstruction with tendon graft. Class 5 tears are associated with distal radio-ulnar joint arthritis and can only be treated with salvage procedures. This subdivision of the type IB triangular fibrocartilage complex tear provides more insight into the pathomechanics and treatment strategies. Level of evidence: II.
Stability analysis and application of a mathematical cholera model.
Liao, Shu; Wang, Jin
2011-07-01
In this paper, we conduct a dynamical analysis of the deterministic cholera model proposed in [9]. We study the stability of both the disease-free and endemic equilibria so as to explore the complex epidemic and endemic dynamics of the disease. We demonstrate a real-world application of this model by investigating the recent cholera outbreak in Zimbabwe. Meanwhile, we present numerical simulation results to verify the analytical predictions.
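A generic SIR-plus-reservoir cholera formulation of the kind analyzed in such studies can be integrated as below; the exact system in [9] may differ in form, and the coefficients here are purely illustrative.

```python
# Generic cholera model: susceptibles S, infectious I (fractions) and
# pathogen reservoir concentration B. Parameters are assumed values.
import numpy as np
from scipy.integrate import solve_ivp

def cholera(t, y, beta=0.8, kappa=1e4, gamma=0.2, mu=1e-4, xi=5e3, delta=0.33):
    S, I, B = y
    infection = beta * S * B / (kappa + B)   # saturating ingestion term
    dS = mu * (1.0 - S) - infection
    dI = infection - (gamma + mu) * I
    dB = xi * I - delta * B                  # shedding and pathogen decay
    return [dS, dI, dB]

sol = solve_ivp(cholera, [0.0, 400.0], [0.999, 0.001, 0.0], max_step=1.0)
S, I, B = sol.y
print("peak infectious fraction:", I.max())
print("final infectious fraction:", I[-1])
```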
Pudda, Catherine; Boizot, François; Verplanck, Nicolas; Revol-Cavalier, Frédéric; Berthier, Jean; Thuaire, Aurélie
2018-01-01
Particle separation in microfluidic devices is a common problem in sample preparation for biology. Deterministic lateral displacement (DLD) is efficiently implemented as a size-based fractionation technique to separate two populations of particles around a specific size. However, real biological samples contain components of many different sizes, and a single DLD separation step is not sufficient to purify these complex samples. When connecting several DLD modules in series, pressure balancing at the DLD outlets of each step becomes critical to ensure optimal separation efficiency. A generic microfluidic platform is presented in this paper to optimize pressure balancing when DLD separation is connected either to another DLD module or to a different microfluidic function. This is made possible by generating droplets at T-junctions connected to the DLD outlets. The droplets act as pressure controllers, which simultaneously encapsulate the DLD-sorted particles and balance the output pressures. The optimized pressures to apply to the DLD modules and the T-junctions are determined by a general model that ensures the equilibrium of the entire platform. The proposed separation platform is completely modular and reconfigurable, since the same predictive model applies to any cascaded DLD modules of the droplet-based cartridge. PMID:29768490
Comparing reactive and memory-one strategies of direct reciprocity
NASA Astrophysics Data System (ADS)
Baek, Seung Ki; Jeong, Hyeong-Chai; Hilbe, Christian; Nowak, Martin A.
2016-05-01
Direct reciprocity is a mechanism for the evolution of cooperation based on repeated interactions. When individuals meet repeatedly, they can use conditional strategies to enforce cooperative outcomes that would not be feasible in one-shot social dilemmas. Direct reciprocity requires that individuals keep track of their past interactions and find the right response. However, there are natural bounds on strategic complexity: humans find it difficult to remember past interactions accurately, especially over long timespans. Given these limitations, it is natural to ask how complex strategies need to be for cooperation to evolve. Here, we study stochastic evolutionary game dynamics in finite populations to systematically compare the evolutionary performance of reactive strategies, which only respond to the co-player's previous move, and memory-one strategies, which take into account their own and the co-player's previous move. In both cases, we compare deterministic and stochastic strategy spaces. For reactive strategies and small costs, we find that stochasticity benefits cooperation, because it allows for generous tit-for-tat. For memory-one strategies and small costs, we find that stochasticity does not increase the propensity for cooperation, because the deterministic rule of win-stay, lose-shift works best. For memory-one strategies and large costs, however, stochasticity can augment cooperation.
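The basic machinery behind such comparisons is that any pair of memory-one strategies induces a Markov chain over the four outcomes CC, CD, DC, DD, whose stationary distribution yields the long-run payoff. A sketch with the customary prisoner's dilemma payoffs (values assumed):

```python
# Long-run payoff between two memory-one strategies via the stationary
# distribution of the induced 4-state Markov chain (states CC, CD, DC, DD).
# A strategy is its cooperation probabilities after (CC, CD, DC, DD).
import numpy as np

def payoff(p, q, R=3.0, S=0.0, T=5.0, P=1.0):
    q_perm = [q[0], q[2], q[1], q[3]]      # co-player sees CD/DC mirrored
    M = np.zeros((4, 4))
    for s in range(4):
        pc, qc = p[s], q_perm[s]
        M[s] = [pc * qc, pc * (1 - qc), (1 - pc) * qc, (1 - pc) * (1 - qc)]
    w, v = np.linalg.eig(M.T)              # stationary distribution of M
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi /= pi.sum()
    return pi @ np.array([R, S, T, P])

wsls = [1, 0, 0, 1]        # win-stay, lose-shift (deterministic memory-one)
gtft = [1, 1/3, 1, 1/3]    # generous tit-for-tat (stochastic, reactive)
alld = [0, 0, 0, 0]        # always defect
print(payoff(wsls, wsls), payoff(wsls, alld), payoff(gtft, alld))
```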
Deterministic Assembly of Complex Bacterial Communities in Guts of Germ-Free Cockroaches
Mikaelyan, Aram; Thompson, Claire L.; Hofer, Markus J.
2015-01-01
The gut microbiota of termites plays important roles in the symbiotic digestion of lignocellulose. However, the factors shaping the microbial community structure remain poorly understood. Because termites cannot be raised under axenic conditions, we established the closely related cockroach Shelfordella lateralis as a germ-free model to study microbial community assembly and host-microbe interactions. In this study, we determined the composition of the bacterial assemblages in cockroaches inoculated with the gut microbiota of termites and mice using pyrosequencing analysis of their 16S rRNA genes. Although the composition of the xenobiotic communities was influenced by the lineages present in the foreign inocula, their structure resembled that of conventional cockroaches. Bacterial taxa abundant in conventional cockroaches but rare in the foreign inocula, such as Dysgonomonas and Parabacteroides spp., were selectively enriched in the xenobiotic communities. Donor-specific taxa, such as endomicrobia or spirochete lineages restricted to the gut microbiota of termites, however, either were unable to colonize germ-free cockroaches or formed only small populations. The exposure of xenobiotic cockroaches to conventional adults restored their normal microbiota, which indicated that autochthonous lineages outcompete foreign ones. Our results provide experimental proof that the assembly of a complex gut microbiota in insects is deterministic. PMID:26655763
What does the structure of its visibility graph tell us about the nature of the time series?
NASA Astrophysics Data System (ADS)
Franke, Jasper G.; Donner, Reik V.
2017-04-01
Visibility graphs are a recently introduced method to construct complex network representations from univariate time series in order to study their dynamical characteristics [1]. In recent years, this approach has been successfully applied to a considerable variety of geoscientific research questions and data sets, including non-trivial temporal patterns in complex earthquake catalogs [2] and time-reversibility in climate time series [3]. It has been shown that several characteristic features of the networks thus constructed differ between stochastic and deterministic (possibly chaotic) processes, which is, however, relatively hard to exploit in real-world applications. In this study, we propose studying two new measures related to the network complexity of visibility graphs constructed from time series, one being a special type of network entropy [4] and the other a recently introduced measure of the heterogeneity of the network's degree distribution [5]. For paradigmatic model systems exhibiting bifurcation sequences between regular and chaotic dynamics, both properties clearly trace the transitions between the two types of regimes and exhibit marked quantitative differences for regular and chaotic dynamics. Moreover, for dynamical systems with a small amount of additive noise, the considered properties demonstrate gradual changes prior to the bifurcation point. This finding appears closely related to the loss of stability of the current state, which is known to lead to critical slowing down as the transition point is approached. In this spirit, both considered visibility graph characteristics provide alternative tracers of dynamical early warning signals consistent with classical indicators. Our results demonstrate that measures of visibility graph complexity (i) provide a potentially useful means of tracing changes in the dynamical patterns encoded in a univariate time series that originate from increasing autocorrelation and (ii) allow one to systematically distinguish regular from deterministic-chaotic dynamics. We demonstrate the application of our method for different model systems as well as selected paleoclimate time series from the North Atlantic region. Notably, visibility graph based methods are particularly suited for studying the latter type of geoscientific data, since they do not impose intrinsic restrictions or assumptions on the nature of the time series under investigation in terms of noise process, linearity and sampling homogeneity. [1] Lacasa, Lucas, et al. "From time series to complex networks: The visibility graph." Proceedings of the National Academy of Sciences 105.13 (2008): 4972-4975. [2] Telesca, Luciano, and Michele Lovallo. "Analysis of seismic sequences by using the method of visibility graph." EPL (Europhysics Letters) 97.5 (2012): 50002. [3] Donges, Jonathan F., Reik V. Donner, and Jürgen Kurths. "Testing time series irreversibility using complex network methods." EPL (Europhysics Letters) 102.1 (2013): 10004. [4] Small, Michael. "Complex networks from time series: capturing dynamics." 2013 IEEE International Symposium on Circuits and Systems (ISCAS2013), Beijing (2013): 2509-2512. [5] Jacob, Rinku, K.P. Harikrishnan, Ranjeev Misra, and G. Ambika. "Measure for degree heterogeneity in complex networks and its application to recurrence network analysis." arXiv preprint 1605.06607 (2016).
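For concreteness, a minimal construction of the natural visibility graph, with degree-distribution entropy as a simple stand-in for the complexity measures discussed above (cf. [4, 5]), might look like this:

```python
# Natural visibility graph of a time series plus a degree-distribution
# entropy, used here as a simple illustrative complexity measure.
import numpy as np
from itertools import combinations

def visibility_graph(y):
    """Adjacency matrix of the natural visibility graph (O(n^2) check)."""
    n = len(y)
    adj = np.zeros((n, n), dtype=bool)
    for a, b in combinations(range(n), 2):
        between = np.arange(a + 1, b)
        line = y[b] + (y[a] - y[b]) * (b - between) / (b - a)
        if np.all(y[between] < line):   # no intermediate point blocks the view
            adj[a, b] = adj[b, a] = True
    return adj

def degree_entropy(adj):
    deg = adj.sum(axis=0)
    counts = np.bincount(deg).astype(float)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

x, series = 0.4, []
for _ in range(300):                    # chaotic logistic map, r = 4
    x = 4.0 * x * (1.0 - x)
    series.append(x)
rng = np.random.default_rng(5)
print(degree_entropy(visibility_graph(np.array(series))))
print(degree_entropy(visibility_graph(rng.uniform(size=300))))
```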
Non-linear stochastic growth rates and redshift space distortions
Jennings, Elise; Jennings, David
2015-04-09
The linear growth rate is commonly defined through a simple deterministic relation between the velocity divergence and the matter overdensity in the linear regime. We introduce a formalism that extends this to a non-linear, stochastic relation between θ = ∇ ∙ v(x,t)/aH and δ. This provides a new phenomenological approach that examines the conditional mean ⟨θ|δ⟩, together with the fluctuations of θ around this mean. We also measure these stochastic components using N-body simulations and find they are non-negative and increase with decreasing scale, from ~10 per cent at k < 0.2 h Mpc^-1 to 25 per cent at k ~ 0.45 h Mpc^-1 at z = 0. Both the stochastic relation and non-linearity are more pronounced for haloes, M ≤ 5 × 10^12 M_⊙ h^-1, compared to the dark matter at z = 0 and 1. Non-linear growth effects manifest themselves as a rotation of the mean away from the linear theory prediction -f_LT δ, where f_LT is the linear growth rate. This rotation increases with wavenumber, k, and we show that it can be well described by second-order Lagrangian perturbation theory (2LPT) for k < 0.1 h Mpc^-1. Furthermore, the stochasticity in the θ–δ relation is not so simply described by 2LPT, and we discuss its impact on measurements of f_LT from two-point statistics in redshift space. Given that the relationship between δ and θ is stochastic and non-linear, this will have implications for the interpretation and precision of f_LT extracted using models which assume a linear, deterministic expression.
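Operationally, the measurement described above amounts to binning δ and estimating the conditional mean and scatter of θ in each bin. The sketch below does this on mock Gaussian data, with an invented non-linear, stochastic θ-δ relation standing in for N-body measurements (f_LT and all coefficients are assumptions):

```python
# Binned estimate of the conditional mean <theta|delta> and its scatter,
# using mock data in place of N-body velocity-divergence fields.
import numpy as np

rng = np.random.default_rng(8)
n = 200_000
delta = rng.normal(0.0, 1.0, n)
f_LT = 0.5                                  # linear growth rate (assumed)
# mock relation: linear part, a small non-linear rotation, and scatter
theta = -f_LT * delta + 0.05 * delta**2 + 0.1 * rng.normal(size=n)

bins = np.linspace(-3.0, 3.0, 25)
idx = np.digitize(delta, bins)
for k in range(1, len(bins)):
    sel = idx == k
    if sel.sum() > 100:
        centre = 0.5 * (bins[k - 1] + bins[k])
        print(f"delta={centre:+.2f}  mean={theta[sel].mean():+.3f}  "
              f"scatter={theta[sel].std():.3f}")
```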
Chaos-order transition in foraging behavior of ants.
Li, Lixiang; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian; Schellnhuber, Hans Joachim
2014-06-10
The study of the foraging behavior of group animals (especially ants) is of practical ecological importance, but it also contributes to the development of widely applicable optimization problem-solving techniques. Biologists have discovered that single ants exhibit low-dimensional deterministic-chaotic activities. However, the influences of the nest, ants' physical abilities, and ants' knowledge (or experience) on foraging behavior have received relatively little attention in studies of the collective behavior of ants. This paper provides new insights into basic mechanisms of effective foraging for social insects or group animals that have a home. We propose that the whole foraging process of ants is controlled by three successive strategies: hunting, homing, and path building. A mathematical model is developed to study this complex scheme. We show that the transition from chaotic to periodic regimes observed in our model results from an optimization scheme for group animals with a home. According to our investigation, the behavior of such insects is not represented by random but rather deterministic walks (as generated by deterministic dynamical systems, e.g., by maps) in a random environment: the animals use their intelligence and experience to guide them. The more knowledge an ant has, the higher its foraging efficiency is. When young insects join the collective to forage with old and middle-aged ants, it benefits the whole colony in the long run. The resulting strategy can even be optimal.
Akemann, G; Bloch, J; Shifrin, L; Wettig, T
2008-01-25
We analyze how individual eigenvalues of the QCD Dirac operator at nonzero quark chemical potential are distributed in the complex plane. Exact and approximate analytical results for both quenched and unquenched distributions are derived from non-Hermitian random matrix theory. When comparing these to quenched lattice QCD spectra close to the origin, excellent agreement is found for zero and nonzero topology at several values of the quark chemical potential. Our analytical results are also applicable to other physical systems in the same symmetry class.
Chance or Necessity: Modeling Origins of Life
NASA Technical Reports Server (NTRS)
Pohorille, Andrew
2006-01-01
The fundamental nature of the processes that led to the emergence of life has been a subject of long-standing debate. One view holds that the origin of life is an event governed by chance, and the result of so many random events is unpredictable. This view was eloquently expressed by Jacques Monod in his book Chance and Necessity. In an alternative view, the origin of life is considered a deterministic event. Its details need not be deterministic in every respect, but the overall behavior is predictable. A corollary to the deterministic view is that the emergence of life must have been determined primarily by universal chemistry and biochemistry rather than by subtle details of environmental conditions. In my lecture I will explore two different paradigms for the emergence of life and discuss their implications for the predictability and universality of life-forming processes. The dominant approach is that the origin of life was guided by information stored in nucleic acids (the RNA World hypothesis). In this view, selection of improved combinations of nucleic acids obtained through random mutations drove the evolution of biological systems from their conception. An alternative hypothesis states that the formation of protocellular metabolism was driven by non-genomic processes. Even though these processes were highly stochastic, the outcome was largely deterministic, strongly constrained by the laws of chemistry. I will argue that self-replication of macromolecules was not required at the early stages of evolution; the reproduction of cellular functions alone was sufficient for the self-maintenance of protocells. In fact, the precise transfer of information between successive generations of the earliest protocells was unnecessary and could have impeded the discovery of cellular metabolism. I will also show that such concepts as speciation and fitness to the environment, developed in the context of genomic evolution, also hold in the absence of a genome.
Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method
Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...
2017-11-20
Inverse problems arise in almost all fields of science where real-world parameters are extracted from a set of measured data. Geosteering inversion plays an essential role in the accurate prediction of oncoming strata as well as in reliable guidance for adjusting the borehole position on the fly to reach one or more geological targets. This mathematical problem is not easy to solve: it requires finding an optimum solution in a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. The so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation. Hence, the associated inverse problems become much more difficult, since the earth model to be inverted has more detailed structure. Conventional deterministic methods are incapable of solving such a complicated inverse problem, as they suffer from local-minimum traps. Alternatively, stochastic optimizations are in general better at finding globally optimal solutions and handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC-based inference is more efficient in dealing with the increased complexity and uncertainty faced by geosteering problems.
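A minimal sketch of the Hybrid (Hamiltonian) Monte Carlo step on a toy target may clarify the machinery. A real geosteering inversion would replace log_post and its gradient with the forward-model data misfit; the step size and trajectory length below are illustrative assumptions:

    # Minimal HMC sketch: leapfrog integration plus a Metropolis correction.
    import numpy as np

    def log_post(m):                       # toy log-posterior (standard Gaussian)
        return -0.5 * np.dot(m, m)

    def grad_log_post(m):
        return -m

    def hmc_step(m, rng, eps=0.1, L=20):
        p = rng.standard_normal(m.shape)            # sample auxiliary momentum
        m_new, p_new = m.copy(), p.copy()
        p_new += 0.5 * eps * grad_log_post(m_new)   # half kick
        for _ in range(L - 1):
            m_new += eps * p_new                    # drift
            p_new += eps * grad_log_post(m_new)     # full kick
        m_new += eps * p_new
        p_new += 0.5 * eps * grad_log_post(m_new)   # final half kick
        # Metropolis accept/reject on the total Hamiltonian
        h_old = -log_post(m) + 0.5 * np.dot(p, p)
        h_new = -log_post(m_new) + 0.5 * np.dot(p_new, p_new)
        return m_new if rng.random() < np.exp(h_old - h_new) else m

    rng = np.random.default_rng(1)
    m = np.zeros(2)
    samples = []
    for _ in range(1000):
        m = hmc_step(m, rng)
        samples.append(m.copy())
    print(np.mean(samples, axis=0))        # near zero for this toy posterior

The gradient-guided trajectories are what let HMC traverse large, correlated parameter spaces more efficiently than random-walk samplers, which is the property the article exploits.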
Analysis of Phase-Type Stochastic Petri Nets With Discrete and Continuous Timing
NASA Technical Reports Server (NTRS)
Jones, Robert L.; Goode, Plesent W. (Technical Monitor)
2000-01-01
The Petri net formalism is useful in studying many discrete-state, discrete-event systems exhibiting concurrency, synchronization, and other complex behavior. As a bipartite graph, the net can conveniently capture salient aspects of the system. As a mathematical tool, the net can specify an analyzable state space. Indeed, one can reason about certain qualitative properties (from state occupancies) and how they arise (the sequence of events leading there). By introducing deterministic or random delays, the model is forced to sojourn in states some amount of time, giving rise to an underlying stochastic process, one that can be specified in a compact way and is capable of providing quantitative, probabilistic measures. We formalize a new non-Markovian extension to the Petri net that captures both discrete and continuous timing in the same model. The approach affords efficient, stationary analysis in most cases and efficient transient analysis under certain restrictions. Moreover, this new formalism has the added benefit of modeling fidelity, stemming from the simultaneous capture of discrete- and continuous-time events (as opposed to capturing only one and approximating the other). We show how the underlying stochastic process, which is non-Markovian, can be resolved into simpler Markovian problems that enjoy efficient solutions. Solution algorithms are provided that can be easily programmed.
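One standard ingredient behind such Markovian resolutions, shown here only as a hedged illustration (the paper's actual construction may differ), is the phase-type idea: a deterministic delay d is replaced by an Erlang chain of k exponential stages of rate k/d, which keeps the expanded state space Markovian while the variance d²/k shrinks as k grows:

    # Phase-type sketch: approximate a deterministic delay by an Erlang chain.
    import random

    def erlang_delay(d, k, rng):
        # sum of k exponential phases of rate k/d -> mean d, variance d*d/k
        return sum(rng.expovariate(k / d) for _ in range(k))

    rng = random.Random(0)
    samples = [erlang_delay(5.0, 50, rng) for _ in range(10000)]
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    print(mean, var)   # mean ~ 5.0, variance ~ 5.0**2 / 50 = 0.5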
A Tabu-Search Heuristic for Deterministic Two-Mode Blockmodeling of Binary Network Matrices
ERIC Educational Resources Information Center
Brusco, Michael; Steinley, Douglas
2011-01-01
Two-mode binary data matrices arise in a variety of social network contexts, such as the attendance or non-attendance of individuals at events, the participation or lack of participation of groups in projects, and the votes of judges on cases. A popular method for analyzing such data is two-mode blockmodeling based on structural equivalence, where…
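Although the abstract is truncated here, the named ingredients invite a generic sketch. The skeleton below is an illustrative assumption, not the authors' heuristic: the impurity objective and the tabu tenure are placeholders. It moves one object at a time between blocks, forbids immediate reversals with a tabu list, and keeps the best partition of a binary matrix found:

    # Generic tabu-search skeleton for partitioning rows of a binary matrix.
    import random

    def cost(assign, X, k):
        # per block, per column: count entries differing from the block majority
        total = 0
        for b in range(k):
            rows = [i for i, a in enumerate(assign) if a == b]
            for j in range(len(X[0])):
                ones = sum(X[i][j] for i in rows)
                total += min(ones, len(rows) - ones)
        return total

    def tabu_search(X, k=2, iters=200, tenure=7, seed=0):
        rng = random.Random(seed)
        assign = [rng.randrange(k) for _ in X]
        best, best_cost = assign[:], cost(assign, X, k)
        tabu = {}
        for t in range(iters):
            candidates = []
            for i in range(len(X)):
                for b in range(k):
                    if b != assign[i] and tabu.get((i, b), -1) < t:
                        trial = assign[:]
                        trial[i] = b
                        candidates.append((cost(trial, X, k), i, b))
            if not candidates:
                break
            c, i, b = min(candidates)
            tabu[(i, assign[i])] = t + tenure   # forbid moving i straight back
            assign[i] = b
            if c < best_cost:
                best, best_cost = assign[:], c
        return best, best_cost

    X = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
    print(tabu_search(X))   # recovers the two-block structure with cost 0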
NASA Astrophysics Data System (ADS)
Chu, Peter C.
2018-03-01
SOund Fixing And Ranging (RAFOS) floats deployed by the Naval Postgraduate School (NPS) in the California Current system from 1992 to 2001 at depths between 150 and 600 m (http://www.oc.nps.edu/npsRAFOS/) are used to study 2-D turbulent characteristics. Each drifter trajectory is adaptively decomposed using the empirical mode decomposition (EMD) into a series of intrinsic mode functions (IMFs), each with its own characteristic scale. A new steepest ascent low/non-low-frequency ratio is proposed in this paper to separate a Lagrangian trajectory into low-frequency (nondiffusive, i.e., deterministic) and high-frequency (diffusive, i.e., stochastic) components. The 2-D turbulent (or eddy) diffusion coefficients are calculated on the basis of classical turbulent diffusion with mixing-length theory from the stochastic component of a single drifter. Statistical characteristics of the calculated 2-D turbulence length scale, strength, and diffusion coefficients from the NPS RAFOS data are presented, with the mean values (over all drifters) of the 2-D diffusion coefficients comparable to the commonly used diffusivity tensor method.
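A hedged sketch of the trajectory-splitting idea: decompose one velocity component with EMD, call the slow IMFs plus residue "deterministic" and the fast IMFs "stochastic", then estimate a diffusivity from the fast part. This uses the third-party PyEMD package (distributed as EMD-signal); the fixed split index and the simple variance-times-integral-time estimator below are simplifying assumptions, not the paper's steepest-ascent criterion:

    # Sketch: EMD split of a synthetic velocity record, then a crude diffusivity.
    import numpy as np
    from PyEMD import EMD

    rng = np.random.default_rng(0)
    t = np.arange(0.0, 200.0, 0.5)                     # time (toy units)
    u = 0.1 * np.sin(2 * np.pi * t / 60) + 0.05 * rng.standard_normal(t.size)

    imfs = EMD().emd(u)                                # components, fast -> slow
    split = imfs.shape[0] // 2                         # assumed split index
    u_fast = imfs[:split].sum(axis=0)                  # "stochastic" component
    u_slow = u - u_fast                                # "deterministic" component

    # diffusivity ~ velocity variance times an integral (decorrelation) time
    dt = t[1] - t[0]
    acf = np.correlate(u_fast, u_fast, "full")[u_fast.size - 1:]
    acf /= acf[0]
    T_int = dt * acf[:np.argmax(acf < 0)].sum()        # integrate to first zero
    print("K ~", np.var(u_fast) * T_int, "(toy units)")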
Negative mobility of a Brownian particle: Strong damping regime
NASA Astrophysics Data System (ADS)
Słapik, A.; Łuczka, J.; Spiechowicz, J.
2018-02-01
We study the impact of inertia on directed transport of a Brownian particle under non-equilibrium conditions: the particle moves in a one-dimensional periodic and symmetric potential, is driven by both an unbiased time-periodic force and a constant force, and is coupled to a thermostat of temperature T. Within selected parameter regimes this system exhibits negative mobility, which means that the particle moves in the direction opposite to the direction of the constant force. It is known that in such a setup the inertial term is essential for the emergence of negative mobility and it cannot be detected in the limiting case of overdamped dynamics. We analyse inertial effects and show that negative mobility can be observed even in the strong damping regime. We determine the optimal dimensionless mass for the presence of negative mobility and reveal three mechanisms standing behind this anomaly: deterministic chaotic, thermal-noise-induced, and deterministic non-chaotic. The last mechanism has not been reported before. It may provide guidance for the observation of negative mobility in strongly damped dynamics, which is of fundamental importance from the point of view of biological systems, all of which operate in situ in fluctuating environments.
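The setup described above is a driven inertial Langevin equation, m x'' = -V'(x) - gamma x' + a cos(omega t) + f + sqrt(2 gamma T) xi(t). The sketch below integrates it with Euler-Maruyama; the potential and all parameter values are placeholders, not the regimes in which the authors report negative mobility:

    # Euler-Maruyama sketch of the driven inertial Langevin dynamics.
    import numpy as np

    def mean_velocity(m=0.5, gamma=1.0, a=4.0, omega=2.0, f=0.1, T=0.01,
                      dt=1e-3, steps=200_000, seed=0):
        rng = np.random.default_rng(seed)
        x, v = 0.0, 0.0
        noise = np.sqrt(2.0 * gamma * T * dt)
        for i in range(steps):
            t = i * dt
            # V(x) = sin(x), so -V'(x) = -cos(x)
            force = -np.cos(x) - gamma * v + a * np.cos(omega * t) + f
            v += (force / m) * dt + (noise / m) * rng.standard_normal()
            x += v * dt
        return x / (steps * dt)   # time-averaged velocity

    print(mean_velocity())        # negative sign relative to f would signal the anomaly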
Structural Deterministic Safety Factors Selection Criteria and Verification
NASA Technical Reports Server (NTRS)
Verderaime, V.
1992-01-01
Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability proved that the deterministic method is not reliability sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
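For orientation, a standard probabilistic reading of such a safety index (textbook form, not necessarily the exact index derived in the study) relates the resistive stress R and applied stress S, assumed independent and normal, to a reliability index and failure probability:

    \beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}}, \qquad P_f = \Phi(-\beta),

where \Phi is the standard normal CDF. A deterministic safety factor SF = \mu_R/\mu_S fixes \beta only if the coefficients of variation are also fixed, which is one way to see why a purely deterministic method is not reliability sensitive.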
2012-01-01
Background The critical role of Major Histocompatibility Complex (Mhc) genes in disease resistance and their highly polymorphic nature make them exceptional candidates for studies investigating genetic effects on survival, mate choice and conservation. Species that harbor many Mhc loci and high allelic diversity are particularly intriguing as they are potentially under strong selection, and studies of such species provide valuable information as to the mechanisms maintaining Mhc diversity. However, comprehensive genotyping of complex multilocus systems has been a major challenge to date, with the result that little is known about the consequences of this complexity in terms of fitness effects and disease resistance. Results In this study, we genotyped the Mhc class I exon 3 of the great tit (Parus major) from two nest-box breeding populations near Oxford, UK that have been monitored for decades. Characterization of Mhc class I exon 3 was adopted and bidirectional sequencing was carried out using the 454 sequencing platform. Full analysis of sequences through a stepwise variant validation procedure allowed reliable typing of more than 800 great tits based on 214,357 reads; from duplicates we estimated the repeatability of typing as 0.94. A total of 862 alleles were detected, and the presence of at least 16 functional loci was shown - the highest number characterized in a wild bird species. Finally, the functional alleles were grouped into 17 supertypes based on their antigen binding affinities. Conclusions We found extreme complexity at the Mhc class I of the great tit both in terms of allelic diversity and gene number. The presence of many functional loci was shown, together with a pseudogene family and putatively non-functional alleles; there was clear evidence that functional alleles were under strong balancing selection. This study is the first step towards an in-depth analysis of this gene complex in this species, which will help understanding how parasite-mediated and sexual selection shape and maintain host genetic variation in nature. We believe that study systems like ours can make important contributions to the field of evolutionary biology and emphasize the necessity of integrating long-term field-based studies with detailed genetic analysis to unravel complex evolutionary processes. PMID:22587557
Hpm of Estrogen Model on the Dynamics of Breast Cancer
NASA Astrophysics Data System (ADS)
Govindarajan, A.; Balamuralitharan, S.; Sundaresan, T.
2018-04-01
We enhance a deterministic mathematical model of breast cancer dynamics with immune response. This is a population model comprising four classes: normal cells, tumor cells, immune cells, and estrogen. The effects of estrogen are incorporated in the model; they show that the arrival of excess estrogen increases the risk of developing breast cancer. Furthermore, an approximate solution of the nonlinear differential equations is obtained by the Homotopy Perturbation Method (HPM). He's HPM is an efficient and accurate technique for solving nonlinear differential equations directly. The approximate solution obtained with this method agrees well with the actual results for this model.
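For reference, the standard construction of He's homotopy perturbation method (textbook form; the paper's specific operators are not reproduced here) writes the equation as A(u) = f(r) with A = L + N, where L is linear and N nonlinear, and deforms from an easy problem to the original one:

    H(v, p) = (1 - p)\,[L(v) - L(u_0)] + p\,[A(v) - f(r)] = 0, \qquad p \in [0, 1],

    v = v_0 + p\,v_1 + p^2 v_2 + \cdots, \qquad u = \lim_{p \to 1} v = \sum_{k \ge 0} v_k.

At p = 0 the homotopy reduces to the solvable problem L(v) = L(u_0); at p = 1 it recovers the original equation, and collecting powers of p yields the successive correction terms v_k.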
Selective attention, diffused attention, and the development of categorization.
Deng, Wei Sophia; Sloutsky, Vladimir M
2016-12-01
How do people learn categories and what changes with development? The current study attempts to address these questions by focusing on the role of attention in the development of categorization. In Experiment 1, participants (adults, 7-year-olds, and 4-year-olds) were trained with novel categories consisting of deterministic and probabilistic features, and their categorization and memory for features were tested. In Experiment 2, participants' attention was directed to the deterministic feature, and in Experiment 3 it was directed to the probabilistic features. Attentional cueing affected categorization and memory in adults and 7-year-olds: these participants relied on the cued features in their categorization and exhibited better memory of cued than of non-cued features. In contrast, in 4-year-olds attentional cueing affected only categorization, but not memory: these participants exhibited equally good memory for both cued and non-cued features. Furthermore, across the experiments, 4-year-olds remembered non-cued features better than adults. These results coupled with computational simulations provide novel evidence (1) pointing to differences in category representation and mechanisms of categorization across development, (2) elucidating the role of attention in the development of categorization, and (3) suggesting an important distinction between representation and decision factors in categorization early in development. These issues are discussed with respect to theories of categorization and its development.
Stochastic Blockmodeling of the Modules and Core of the Caenorhabditis elegans Connectome
Pavlovic, Dragana M.; Vértes, Petra E.; Bullmore, Edward T.; Schafer, William R.; Nichols, Thomas E.
2014-01-01
Recently, there has been much interest in the community structure or mesoscale organization of complex networks. This structure is characterised either as a set of sparsely inter-connected modules or as a highly connected core with a sparsely connected periphery. However, it is often difficult to disambiguate these two types of mesoscale structure or, indeed, to summarise the full network in terms of the relationships between its mesoscale constituents. Here, we estimate a community structure with a stochastic blockmodel approach, the Erdős-Rényi Mixture Model, and compare it to the much more widely used deterministic methods, such as the Louvain and Spectral algorithms. We used the Caenorhabditis elegans (C. elegans) nervous system (connectome) as a model system in which biological knowledge about each node or neuron can be used to validate the functional relevance of the communities obtained. The deterministic algorithms derived communities with 4–5 modules, defined by sparse inter-connectivity between all modules. In contrast, the stochastic Erdős-Rényi Mixture Model estimated a community with 9 blocks or groups which comprised a similar set of modules but also included a clearly defined core, made of 2 small groups. We show that the “core-in-modules” decomposition of the worm brain network, estimated by the Erdős-Rényi Mixture Model, is more compatible with prior biological knowledge about the C. elegans nervous system than the purely modular decomposition defined deterministically. We also show that the blockmodel can be used both to generate stochastic realisations (simulations) of the biological connectome, and to compress the network into a small number of super-nodes and their connectivity. We expect that the Erdős-Rényi Mixture Model may be useful for investigating the complex community structures in other (nervous) systems. PMID:24988196
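A toy sketch of the stochastic-blockmodel idea may help (this is not the paper's EM fitting of the Erdős-Rényi Mixture Model): given a block assignment, estimate the block-to-block edge probabilities and score the partition by its Bernoulli log-likelihood. A full fit would also search over assignments and the number of blocks:

    # Score a block assignment of an undirected graph under a Bernoulli SBM.
    import numpy as np

    def sbm_loglik(A, z, k, eps=1e-9):
        n = A.shape[0]
        theta = np.zeros((k, k))
        for r in range(k):
            for s in range(k):
                block = A[np.ix_(z == r, z == s)]
                pairs = block.size - (np.sum(z == r) if r == s else 0)
                theta[r, s] = block.sum() / max(pairs, 1)   # MLE edge density
        ll = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                p = np.clip(theta[z[i], z[j]], eps, 1 - eps)
                ll += A[i, j] * np.log(p) + (1 - A[i, j]) * np.log(1 - p)
        return ll

    # two clear 4-node communities
    A = np.kron(np.eye(2, dtype=int), np.ones((4, 4), dtype=int))
    np.fill_diagonal(A, 0)
    z_good = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    z_bad = np.array([0, 1, 0, 1, 0, 1, 0, 1])
    print(sbm_loglik(A, z_good, 2) > sbm_loglik(A, z_bad, 2))   # True

Because theta is estimated per block pair, the same machinery scores modular structure (dense diagonal) and core-periphery structure (one dense row/column) on an equal footing, which is the flexibility the paper exploits.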
Punchi-Manage, Ruwan; Wiegand, Thorsten; Wiegand, Kerstin; Getzin, Stephan; Huth, Andreas; Gunatilleke, C V Savitri; Gunatilleke, I A U Nimal
2015-07-01
Interactions among neighboring individuals influence plant performance and should create spatial patterns in local community structure. In order to assess the role of large trees in generating spatial patterns in local species richness, we used the individual species-area relationship (ISAR) to evaluate the species richness of trees of different size classes (and dead trees) in circular neighborhoods with varying radius around large trees of different focal species. To reveal signals of species interactions, we compared the ISAR function of the individuals of focal species with that of randomly selected nearby locations. We expected that large trees should strongly affect the community structure of smaller trees in their neighborhood, but that these effects should fade away with increasing size class. Unexpectedly, we found that only few focal species showed signals of species interactions with trees of the different size classes and that this was less likely for less abundant focal species. However, the few and relatively weak departures from independence were consistent with expectations of the effect of competition for space and the dispersal syndrome on spatial patterns. A noisy signal of competition for space found for large trees built up gradually with increasing life stage; it was not yet present for large saplings but detectable for intermediates. Additionally, focal species with animal-dispersed seeds showed higher species richness in their neighborhood than those with gravity- and gyration-dispersed seeds. Our analysis across the entire ontogeny from recruits to large trees supports the hypothesis that stochastic effects dilute deterministic species interactions in highly diverse communities. Stochastic dilution is a consequence of the stochastic geometry of biodiversity in species-rich communities where the identities of the nearest neighbors of a given plant are largely unpredictable. While the outcome of local species interactions is governed for each plant by deterministic fitness and niche differences, the large variability of competitors also causes a large variability in the outcomes of interactions and does not allow for strong directed responses at the species level. Collectively, our results highlight the critical effect of the stochastic geometry of biodiversity in structuring local spatial patterns of tropical forest diversity.
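A minimal sketch of the ISAR statistic used above: the mean number of species found within distance r of individuals of a focal species. The input format and the fixed radius are illustrative assumptions:

    # ISAR sketch: mean species richness in circular neighborhoods of focal trees.
    import math

    def isar(points, focal_species, r):
        """points: list of (x, y, species); returns mean number of species
        within distance r of each focal individual (the focal point excluded)."""
        focal = [p for p in points if p[2] == focal_species]
        richness = []
        for fx, fy, _ in focal:
            seen = set()
            for x, y, sp in points:
                if (x, y) != (fx, fy) and math.hypot(x - fx, y - fy) <= r:
                    seen.add(sp)
            richness.append(len(seen))
        return sum(richness) / len(richness) if richness else 0.0

    pts = [(0, 0, "A"), (1, 0, "B"), (0, 1, "C"), (5, 5, "D"), (0.5, 0.5, "A")]
    print(isar(pts, "A", r=2.0))   # 3.0: species A, B, C within radius 2

Comparing this curve for focal trees against the same statistic at randomly chosen nearby locations is what reveals (or fails to reveal) the interaction signals discussed above.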
A random walk on water (Henry Darcy Medal Lecture)
NASA Astrophysics Data System (ADS)
Koutsoyiannis, D.
2009-04-01
Randomness and uncertainty had been well appreciated in hydrology and water resources engineering in their initial steps as scientific disciplines. However, this changed through the years and, following other geosciences, hydrology adopted a naïve view of randomness in natural processes. Such a view separates natural phenomena into two mutually exclusive types, random or stochastic, and deterministic. When a classification of a specific process into one of these two types fails, then a separation of the process into two different, usually additive, parts is typically devised, each of which may be further subdivided into subparts (e.g., deterministic subparts such as periodic and aperiodic or trends). This dichotomous logic is typically combined with a manichean perception, in which the deterministic part supposedly represents cause-effect relationships and thus is physics and science (the "good"), whereas randomness has little relationship with science and no relationship with understanding (the "evil"). Probability theory and statistics, which traditionally provided the tools for dealing with randomness and uncertainty, have been regarded by some as the "necessary evil" but not as an essential part of hydrology and geophysics. Some took a step further to banish them from hydrology, replacing them with deterministic sensitivity analysis and fuzzy-logic representations. Others attempted to demonstrate that irregular fluctuations observed in natural processes are au fond manifestations of underlying chaotic deterministic dynamics with low dimensionality, thus attempting to render probabilistic descriptions unnecessary. Some of the above recent developments are simply flawed because they make erroneous use of probability and statistics (which, remarkably, provide the tools for such analyses), whereas the entire underlying logic is just a false dichotomy. To see this, it suffices to recall that Pierre Simon Laplace, perhaps the most famous proponent of determinism in the history of philosophy of science (cf. Laplace's demon), is, at the same time, one of the founders of probability theory, which he regarded as "nothing but common sense reduced to calculation". This harmonizes with James Clerk Maxwell's view that "the true logic for this world is the calculus of Probabilities" and was more recently and epigrammatically formulated in the title of Edwin Thompson Jaynes's book "Probability Theory: The Logic of Science" (2003). Abandoning dichotomous logic, either on ontological or epistemic grounds, we can identify randomness or stochasticity with unpredictability. Admitting that (a) uncertainty is an intrinsic property of nature; (b) causality implies dependence of natural processes in time and thus suggests predictability; but, (c) even the tiniest uncertainty (e.g., in initial conditions) may result in unpredictability after a certain time horizon, we may shape a stochastic representation of natural processes that is consistent with Karl Popper's indeterministic world view. In this representation, probability quantifies uncertainty according to the Kolmogorov system, in which probability is a normalized measure, i.e., a function that maps sets (areas where the initial conditions or the parameter values lie) to real numbers (in the interval [0, 1]). 
In such a representation, predictability (suggested by deterministic laws) and unpredictability (randomness) coexist, are not separable or additive components, and it is a matter of specifying the time horizon of prediction to decide which of the two dominates. An elementary numerical example has been devised to illustrate the above ideas and demonstrate that they offer a pragmatic and useful guide for practice, rather than just pertaining to philosophical discussions. A chaotic model, with fully and a priori known deterministic dynamics and deterministic inputs (without any random agent), is assumed to represent the hydrological balance in an area partly covered by vegetation. Experimentation with this toy model demonstrates, inter alia, that: (1) for short time horizons the deterministic dynamics is able to give good predictions; but (2) these predictions become extremely inaccurate and useless for long time horizons; (3) for such horizons a naïve statistical prediction (average of past data) which fully neglects the deterministic dynamics is more skilful; and (4) if this statistical prediction, in addition to past data, is combined with the probability theory (the principle of maximum entropy, in particular), it can provide a more informative prediction. Also, the toy model shows that the trajectories of the system state (and derivative properties thereof) do not resemble a regular (e.g., periodic) deterministic process nor a purely random process, but exhibit patterns indicating anti-persistence and persistence (where the latter statistically complies with a Hurst-Kolmogorov behaviour). If the process is averaged over long time scales, the anti-persistent behaviour improves predictability, whereas the persistent behaviour substantially deteriorates it. A stochastic representation of this deterministic system, which incorporates dynamics, is not only possible, but also powerful as it provides good predictions for both short and long horizons and helps to decide on when the deterministic dynamics should be considered or neglected. Obviously, a natural system is extremely more complex than this simple toy model and hence unpredictability is naturally even more prominent in the former. In addition, in a complex natural system, we can never know the exact dynamics and we must infer it from past data, which implies additional uncertainty and an additional role of stochastics in the process of formulating the system equations and estimating the involved parameters. Data also offer the only solid grounds to test any hypothesis about the dynamics, and failure of performing such testing against evidence from data renders the hypothesised dynamics worthless. If this perception of natural phenomena is adequately plausible, then it may help in studying interesting fundamental questions regarding the current state and the trends of hydrological and water resources research and their promising future paths. For instance: (i) Will it ever be possible to achieve a fully "physically based" modelling of hydrological systems that will not depend on data or stochastic representations? (ii) To what extent can hydrological uncertainty be reduced and what are the effective means for such reduction? (iii) Are current stochastic methods in hydrology consistent with observed natural behaviours? What paths should we explore for their advancement? (iv) Can deterministic methods provide solid scientific grounds for water resources engineering and management? 
In particular, can there be risk-free hydraulic engineering and water management? (v) Is the current (particularly important) interface between hydrology and climate satisfactory? In particular, should hydrology rely on climate models that are not properly validated (i.e., for periods and scales not used in calibration)? In effect, is the evolution of climate and its impacts on water resources deterministically predictable?
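The lecture's toy-model argument is easy to reproduce in miniature. The sketch below is a minimal stand-in of my own, not the lecture's hydrological model: it iterates a chaotic logistic map and compares a forecast from the known deterministic dynamics, started from a slightly uncertain initial condition, against the naive climatological mean:

    # Short-horizon skill vs. long-horizon failure of a deterministic forecast.
    import numpy as np

    def logistic(x, n):
        for _ in range(n):
            x = 4.0 * x * (1.0 - x)    # fully known chaotic dynamics
        return x

    rng = np.random.default_rng(0)
    truth0 = rng.uniform(0.1, 0.9, size=1000)
    obs0 = truth0 + 1e-6 * rng.standard_normal(1000)   # tiny initial uncertainty
    clim_mean = 0.5                                     # naive statistical predictor

    for horizon in (3, 10, 30):
        truth = logistic(truth0, horizon)
        dyn = logistic(obs0, horizon)                   # deterministic forecast
        err_dyn = np.sqrt(np.mean((dyn - truth) ** 2))
        err_clim = np.sqrt(np.mean((clim_mean - truth) ** 2))
        print(horizon, round(err_dyn, 4), round(err_clim, 4))

At short horizons the dynamical forecast wins by orders of magnitude; by horizon 30 the initial error of 10^-6 has been amplified past saturation and the climatological mean is the better predictor, which is precisely the coexistence of predictability and unpredictability the lecture describes.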
Intelligent Manufacturing of Commercial Optics Final Report CRADA No. TC-0313-92
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J. S.; Pollicove, H.
The project combined the research and development efforts of LLNL and the University of Rochester Center for Optics Manufacturing (COM) to develop a new generation of flexible, computer-controlled optics grinding machines. COM's principal near-term development effort is to commercialize the OPTICAM-SM, a new prototype spherical grinding machine. A crucial requirement for commercializing the OPTICAM-SM is the development of a predictable and repeatable material-removal process (deterministic micro-grinding) that yields high-quality surfaces and minimizes non-deterministic polishing. OPTICAM machine tools and the fabrication process development studies are part of COM's response to the DOD (ARPA) request to implement a modernization strategy for revitalizing the U.S. optics manufacturing base. This project was entered into in order to develop a new generation of flexible, computer-controlled optics grinding machines.
Converting differential-equation models of biological systems to membrane computing.
Muniyandi, Ravie Chandren; Zin, Abdullah Mohd; Sanders, J W
2013-12-01
This paper presents a method to convert the deterministic, continuous representation of a biological system by ordinary differential equations into a non-deterministic, discrete membrane computation. The dynamics of the membrane computation is governed by rewrite rules operating at certain rates. That has the advantage of applying accurately to small systems, and of expressing rates of change that are determined locally, by region, but not necessarily globally. Such spatial information augments the standard differential-equation approach to provide a more realistic model. A biological case study of the ligand-receptor network of protein TGF-β is used to validate the effectiveness of the conversion method. It demonstrates the sense in which the behaviours and properties of the system are better preserved in the membrane computing model, suggesting that the proposed conversion method may prove useful for biological systems in particular.
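A hedged sketch of the general conversion idea (the single-rule system and rate below are illustrative, not the TGF-β network of the case study): the ODE d[LR]/dt = k[L][R] becomes a discrete rewrite rule "L, R -> LR" applied stochastically with a propensity, here simulated with a plain Gillespie step and compared against the deterministic integration:

    # Discrete stochastic rewriting vs. deterministic ODE for L + R -> LR.
    import random

    def gillespie(L, R, LR, k=0.01, t_end=50.0, seed=0):
        rng = random.Random(seed)
        t = 0.0
        while t < t_end and L > 0 and R > 0:
            a = k * L * R                       # propensity of rule L,R -> LR
            t += rng.expovariate(a)             # time to next rule application
            if t >= t_end:
                break
            L, R, LR = L - 1, R - 1, LR + 1     # apply the rewrite rule once
        return L, R, LR

    def ode(L, R, LR, k=0.01, t_end=50.0, dt=0.01):
        for _ in range(int(t_end / dt)):        # explicit Euler integration
            rate = k * L * R
            L, R, LR = L - rate * dt, R - rate * dt, LR + rate * dt
        return L, R, LR

    print("SSA:", gillespie(100, 80, 0))
    print("ODE:", ode(100.0, 80.0, 0.0))

For large populations the two agree closely; for small systems the discrete runs fluctuate around the deterministic curve, which is exactly the regime where the paper argues the membrane computation is the more faithful representation.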
Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model
NASA Astrophysics Data System (ADS)
Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon
Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the most prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression, and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.
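An illustrative sketch of peak-based spectral embedding (an assumption about the general approach, not the authors' exact SMS scheme): for each non-overlapping frame, find the most prominent magnitude peak and nudge it up or down to encode one watermark bit:

    # Embed one bit per frame into the dominant spectral peak.
    import numpy as np

    def embed(signal, bits, frame=1024, delta=0.05):
        out = signal.astype(float).copy()
        for n, bit in enumerate(bits):
            seg = out[n * frame:(n + 1) * frame]
            spec = np.fft.rfft(seg)
            k = np.argmax(np.abs(spec[1:])) + 1           # dominant non-DC peak
            spec[k] *= (1 + delta) if bit else (1 - delta)
            out[n * frame:(n + 1) * frame] = np.fft.irfft(spec, n=frame)
        return out

    t = np.arange(4096) / 44100.0
    audio = np.sin(2 * np.pi * 440 * t)
    marked = embed(audio, [1, 0, 1, 1])
    print(np.max(np.abs(marked - audio)))    # small, perceptually minor change

Detection would compare the received peak magnitudes against the expected trajectory; tying the embedding to the SMS deterministic-plus-stochastic decomposition is what gives the published scheme its robustness to the attacks listed above.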
Deterministic control of radiative processes by shaping the mode field
NASA Astrophysics Data System (ADS)
Pellegrino, D.; Pagliano, F.; Genco, A.; Petruzzella, M.; van Otten, F. W.; Fiore, A.
2018-04-01
Quantum dots (QDs) interacting with confined light fields in photonic crystal cavities represent a scalable light source for the generation of single photons and laser radiation in the solid-state platform. The complete control of light-matter interaction in these sources is needed to fully exploit their potential, but it has been challenging due to the small length scales involved. In this work, we experimentally demonstrate the control of the radiative interaction between InAs QDs and one mode of three coupled nanocavities. By non-locally moulding the mode field experienced by the QDs inside one of the cavities, we are able to deterministically tune, and even inhibit, the spontaneous emission into the mode. The presented method will enable the real-time switching of Rabi oscillations, the shaping of the temporal waveform of single photons, and the implementation of unexplored nanolaser modulation schemes.
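The underlying knob is textbook cavity QED (stated here generically; the paper's three-cavity mode structure is not reproduced): the spontaneous emission rate of a dipole d into a cavity mode scales with the local field of that mode at the emitter,

    \frac{\Gamma(\mathbf{r}_0)}{\Gamma_{\max}} =
    \frac{|\mathbf{d}\cdot\mathbf{E}(\mathbf{r}_0)|^2}{|\mathbf{d}|^2 \, \max_{\mathbf{r}}|\mathbf{E}(\mathbf{r})|^2},
    \qquad
    F_P = \frac{3}{4\pi^2}\left(\frac{\lambda}{n}\right)^{3}\frac{Q}{V},

where F_P is the Purcell factor of the mode with quality factor Q and mode volume V. Reshaping E(r) at the quantum-dot position therefore tunes Γ continuously, down to inhibition when the local field amplitude is driven toward zero, which is the mechanism the experiment exploits non-locally.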