Science.gov

Sample records for networks quasi-stationary approximation

  1. Quasi-stationary phase change heat transfer on a fin

    NASA Astrophysics Data System (ADS)

    Orzechowski, Tadeusz; Stokowiec, Katarzyna

    2016-03-01

    The paper presents heat transfer research based on a long fin with a circular cross-section. Its base is welded to the pipe through which hot liquid paraffin, with a temperature of 70°C at the inflow, is pumped. The analyzed element is a recurring part of a refrigeration condenser immersed in paraffin. The temperature of the inflowing liquid is higher than the melting temperature of the paraffin, which allows the paraffin to liquefy. The temperature at the base of the fin changes, and the heat transfer is assumed to be quasi-stationary. On this basis the mean value of the heat transfer coefficient was estimated. The unsteady thermal field of the investigated system was recorded with a V50 infrared camera produced by the Polish company Vigo System. This device is equipped with a microbolometric detector with 384 × 288 elements and a single pixel size of 25 × 25 μm. Its thermal resolution is better than 70 mK at a temperature of 30 °C. The camera operates in the 7.5-14 μm long-wave infrared range. For a typical 35 mm lens the spatial resolution is 0.7 mrad. The result of the calculations is the mean heat transfer coefficient for the considered time series: 50 W m⁻² K⁻¹ and 47 W m⁻² K⁻¹ on the left and right sides of the fin, respectively. The deviation between the experimental data and the curve approximating the temperature distribution was assessed with the standard deviation, Sd = 0.04 K.
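
    The abstract does not reproduce the fitting relations, but for a long fin the standard quasi-stationary model behind such an estimate is the one-dimensional fin equation with an exponential temperature profile (a textbook sketch under the long-fin assumption, not taken from the paper):

    \[
      \theta(x) \;=\; T(x) - T_\infty \;=\; \theta_b\, e^{-m x},
      \qquad
      m \;=\; \sqrt{\frac{h P}{k A_c}}
      \;\;\Longrightarrow\;\;
      h \;=\; \frac{m^2 k A_c}{P},
    \]

    where θ_b is the temperature excess at the fin base, P and A_c are the perimeter and cross-sectional area of the circular fin, and k its thermal conductivity; fitting m to the infrared temperature profile at each instant of the time series then yields the quasi-stationary estimate of h.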

  2. The Southern Hemisphere quasi-stationary eddies and their relationship with Antarctic sea ice

    NASA Astrophysics Data System (ADS)

    Hobbs, William Richard

    The west Antarctic region shows one of the strongest warming trends globally over the late 20th century, whilst much of the Antarctic continent shows little trend or even cooling. Additionally, sea ice reductions in the Antarctic Peninsula region have been balanced by sea ice increases in the Ross Sea region. Despite this heterogeneity, much recent research in the Southern Hemisphere has focused on the approximately zonally-symmetric Southern Annular Mode. In this research, reanalysis and satellite data are analyzed to show that at monthly and annual timescales the zonally asymmetric circulation over the Southern Ocean is dominated by two quasi-stationary anticyclones: a stable western anticyclone located approximately south of New Zealand, and a more variable eastern anticyclone located over the Drake Passage region. Time series describing each anticyclone's strength and longitude are derived, and these time series are used to investigate the physical nature and influence of the anticyclones. The anticyclones are found to have some covariance, and in particular they tend to shift in phase, but their strengths are negatively correlated. Quasi-geostrophic diagnosis indicates that the west anticyclone is maintained by meridional vorticity advection by poleward airflow south of Australia, whereas the east anticyclone is forced by zonal convergence over the Pacific Ocean. The differences in variability and dynamic nature between the anticyclones bring into question the utility of the zonal wave decomposition, which is commonly used in analysis of the Southern Hemisphere zonally asymmetric circulation. It is shown that the quasi-stationary anticyclones influence west Antarctic sea ice in a pattern that resembles the first and third principal components of ice variability. The anticyclones have some effect on wind-driven sea ice motion, but the primary mechanism explaining their link to sea ice appears to be meridional thermal advection.

  3. Damping of Quasi-stationary Waves Between Two Miscible Liquids

    NASA Technical Reports Server (NTRS)

    Duval, Walter M. B.

    2002-01-01

    Two viscous miscible liquids with an initially sharp interface oriented vertically inside a cavity become unstable against oscillatory external forcing due to the Kelvin-Helmholtz instability. The instability causes growth of quasi-stationary (q-s) waves at the interface between the two liquids. We examine computationally the dynamics of a four-mode q-s wave, for a fixed energy input, when one of the components of the external forcing suddenly ceases. The external forcing consists of a steady and an oscillatory component, as realizable in a microgravity environment. Results show that when there is a jump discontinuity in the oscillatory excitation that produced the four-mode q-s wave, the interface does not return to its equilibrium position; the structure of the q-s wave remains embedded between the two fluids over a long time scale. The damping characteristics of the q-s wave obtained from the time history of the velocity field show overdamped and critically damped responses; there is no underdamped oscillation as the flow field approaches steady state. Viscous effects serve as a dissipative mechanism that effectively damps the system. The stability of the four-mode q-s wave depends on both a geometric length scale and the level of background steady acceleration.

  4. Approximate entropy of network parameters.

    PubMed

    West, James; Lacasa, Lucas; Severini, Simone; Teschendorff, Andrew

    2012-04-01

    We study the notion of approximate entropy within the framework of network theory. Approximate entropy is an uncertainty measure originally proposed in the context of dynamical systems and time series. We first define a purely structural entropy obtained by computing the approximate entropy of the so-called slide sequence. This is a surrogate of the degree sequence and it is suggested by the frequency partition of a graph. We examine this quantity for standard scale-free and Erdős-Rényi networks. By using classical results of Pincus, we show that our entropy measure often converges with network size to a certain binary Shannon entropy. As a second step, with specific attention to networks generated by dynamical processes, we investigate approximate entropy of horizontal visibility graphs. Visibility graphs allow us to naturally associate with a network the notion of temporal correlations, therefore providing the measure with a dynamical garment. We show that approximate entropy distinguishes visibility graphs generated by processes with different complexity. The result further probes the usefulness of these networks for the study of dynamical systems. Applications to certain biological data arising in cancer genomics are finally considered in the light of both approaches. PMID:22680542

  5. Approximate entropy of network parameters

    NASA Astrophysics Data System (ADS)

    West, James; Lacasa, Lucas; Severini, Simone; Teschendorff, Andrew

    2012-04-01

    We study the notion of approximate entropy within the framework of network theory. Approximate entropy is an uncertainty measure originally proposed in the context of dynamical systems and time series. We first define a purely structural entropy obtained by computing the approximate entropy of the so-called slide sequence. This is a surrogate of the degree sequence and it is suggested by the frequency partition of a graph. We examine this quantity for standard scale-free and Erdős-Rényi networks. By using classical results of Pincus, we show that our entropy measure often converges with network size to a certain binary Shannon entropy. As a second step, with specific attention to networks generated by dynamical processes, we investigate approximate entropy of horizontal visibility graphs. Visibility graphs allow us to naturally associate with a network the notion of temporal correlations, therefore providing the measure with a dynamical garment. We show that approximate entropy distinguishes visibility graphs generated by processes with different complexity. The result further probes the usefulness of these networks for the study of dynamical systems. Applications to certain biological data arising in cancer genomics are finally considered in the light of both approaches.
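
    Neither record spells out the computation, and the slide sequence itself is not reproduced here; the minimal sketch below applies Pincus's standard ApEn(m, r) to an arbitrary integer sequence (a sorted degree sequence is used as a stand-in), which is the basic ingredient of the structural entropy described above. Parameter values and the Erdős-Rényi-style example are illustrative assumptions.

```python
import numpy as np

def approximate_entropy(x, m=2, r=0.2):
    """Pincus's ApEn(m, r) for a 1-D sequence x; r is given in units of std(x)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * x.std()

    def phi(m):
        # All length-m windows of x, compared with the Chebyshev (max-abs) distance.
        windows = np.array([x[i:i + m] for i in range(n - m + 1)])
        fractions = []
        for w in windows:
            d = np.max(np.abs(windows - w), axis=1)
            fractions.append(np.mean(d <= tol))   # self-match included, as in ApEn
        return np.mean(np.log(fractions))

    return phi(m) - phi(m + 1)

# Example: ApEn of the sorted degree sequence of an Erdős-Rényi-like graph.
rng = np.random.default_rng(0)
degrees = np.sort(rng.binomial(n=999, p=0.01, size=1000))
print(approximate_entropy(degrees, m=2, r=0.2))
```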

  6. On the structure of quasi-stationary laser ablation fronts in strongly radiating plasmas

    SciTech Connect

    Basko, M. M.; Novikov, V. G.; Grushin, A. S.

    2015-05-15

    The effect of strong thermal radiation on the structure of quasi-stationary laser ablation fronts is investigated under the assumption that all the laser flux is absorbed at the critical surface. Special attention is paid to adequate formulation of the boundary-value problem for a steady-state planar ablation flow. The dependence of the laser-to-x-ray conversion efficiency ϕ_r on the laser intensity I_L and wavelength λ_L is analyzed within the non-equilibrium diffusion approximation for radiation transfer. The scaling of the main ablation parameters with I_L and λ_L in the strongly radiative regime 1 − ϕ_r ≪ 1 is derived. It is demonstrated that strongly radiating ablation fronts develop a characteristic extended cushion of “radiation-soaked” plasma between the condensed ablated material and the critical surface, which can efficiently suppress perturbations from the instabilities at the critical surface.

  7. Broadband quasi-stationary pulses in mode-locked fiber ring laser

    NASA Astrophysics Data System (ADS)

    Kang, Jin U.

    2000-08-01

    We show experimentally an enhancement and systematic dependence of the optical spectral bandwidth of quasi-stationary or noise-like pulses due to changes in the net dispersion of a fiber ring laser cavity. When the net dispersion was significantly normal a maximum spectral width of about 80 nm was obtained compared to about 30 nm where no dispersion mapping was used. We numerically show that this is a result of the strong nonlinear chirping due to the propagation of quasi-stationary pulses in the dispersion-managed cavity.

  8. Function approximation in inhibitory networks.

    PubMed

    Tripp, Bryan; Eliasmith, Chris

    2016-05-01

    In performance-optimized artificial neural networks, such as convolutional networks, each neuron makes excitatory connections with some of its targets and inhibitory connections with others. In contrast, physiological neurons are typically either excitatory or inhibitory, not both. This is a puzzle, because it seems to constrain computation, and because there are several counter-examples that suggest that it may not be a physiological necessity. Parisien et al. (2008) showed that any mixture of excitatory and inhibitory functional connections could be realized by a purely excitatory projection in parallel with a two-synapse projection through an inhibitory population. They showed that this works well with ratios of excitatory and inhibitory neurons that are realistic for the neocortex, suggesting that perhaps the cortex efficiently works around this apparent computational constraint. Extending this work, we show here that mixed excitatory and inhibitory functional connections can also be realized in networks that are dominated by inhibition, such as those of the basal ganglia. Further, we show that the function-approximation capacity of such connections is comparable to that of idealized mixed-weight connections. We also study whether such connections are viable in recurrent networks, and find that such recurrent networks can flexibly exhibit a wide range of dynamics. These results offer a new perspective on computation in the basal ganglia, and also perhaps on inhibitory networks within the cortex. PMID:26963256

  9. Quasi-stationary states and a classification of the range of pair interactions

    SciTech Connect

    Gabrielli, A.; Joyce, M.; Marcos, B.

    2011-03-24

    Systems of long-range interacting particles typically present 'quasi-stationary' states (QSS). Investigating their lifetime for a generic pair interaction V(r → ∞) ~ 1/r^γ, we give a classification of the range of the interactions according to the dynamical properties of the system.

  10. Salinity Exchange through the Quasi-Stationary Jet from the Subtropical to the Subpolar Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Miyama, T.; Mitsudera, H.

    2014-12-01

    It is known that a quasi-stationary jet-like current [referred to as J1 in Isoguchi et al. (2006)] flows along the northern part of the Kuroshio/Oyashio mixed water region in the western Pacific Ocean. Observations (Isoguchi et al. 2006, Wagawa et al. 2014) have shown that the jet transports saline water from the subtropical Pacific Ocean to the subpolar region. To investigate how the subtropical water is transported through the quasi-stationary jet, numerical particles were tracked using a high-resolution ocean reanalysis dataset, the Japan Coastal Ocean Predictability Experiment (JCOPE2). Particles released from the region near the quasi-stationary jet (152-158°E, 42-45°N) are tracked for one year from the 15th day of every month of every year (1993-2013), using the daily velocity of the JCOPE2 reanalysis at 30 m depth. Backward particle tracking shows that the particles near the jet come from a wide area to the south, which suggests that eddies are important in the transport of the saline subtropical water. The number of particles that go back to the region south of 36°N within one year varies greatly in time, from 0.002% to 20% of the total particles. Forward particle tracking shows that part of the particles flow northeastward, toward the western subpolar gyre, while another part of the particles are trapped in a second jet-like current [referred to as J2 in Isoguchi et al. (2006)].
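
    The tracking itself is conceptually simple even though the JCOPE2 velocity fields are not available here; the sketch below advects particles through a gridded velocity field with an Euler-forward step and grid interpolation. The domain, the synthetic stand-in velocity field, and the time step are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic stationary velocity field standing in for the daily reanalysis fields.
lon = np.linspace(140.0, 170.0, 121)          # degrees E
lat = np.linspace(30.0, 50.0, 81)             # degrees N
LON, LAT = np.meshgrid(lon, lat, indexing="ij")
u = 0.3 * np.ones_like(LON)                   # eastward velocity, m/s
v = 0.1 * np.sin((LON - 140.0) / 5.0)         # northward velocity, m/s

u_interp = RegularGridInterpolator((lon, lat), u)
v_interp = RegularGridInterpolator((lon, lat), v)

def track(positions, days=30, dt_hours=6.0):
    """Euler-forward advection of particles given as (lon, lat) pairs in degrees."""
    pos = np.array(positions, dtype=float)
    meters_per_degree = 111e3
    dt = dt_hours * 3600.0
    for _ in range(int(days * 24 / dt_hours)):
        uu = u_interp(pos)                     # m/s at each particle position
        vv = v_interp(pos)
        pos[:, 0] += uu * dt / (meters_per_degree * np.cos(np.radians(pos[:, 1])))
        pos[:, 1] += vv * dt / meters_per_degree
    return pos

# Seed particles in the release box quoted in the abstract (152-158°E, 42-45°N).
seeds = np.column_stack([np.random.uniform(152, 158, 100),
                         np.random.uniform(42, 45, 100)])
print(track(seeds)[:3])
```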

  11. Quasi-stationary simulations of the directed percolation universality class in d = 3 dimensions

    NASA Astrophysics Data System (ADS)

    Sander, Renan S.; de Oliveira, Marcelo M.; Ferreira, Silvio C.

    2009-08-01

    We present quasi-stationary simulations of three-dimensional models with a single absorbing configuration, namely the contact process (CP), the susceptible-infected-susceptible (SIS) model and the contact replication process (CRP). The moment ratios of the order parameters for the DP class in three dimensions were determined using the well-established SIS and CP models. We also show that the mean-field exponent for d = 3 reported previously for the CRP (Ferreira 2005 Phys. Rev. E 71 017104) is a transient observed in the spreading analysis.
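
    The quasi-stationary simulation method is only named in the abstract; a minimal sketch of the standard scheme (due to de Oliveira and Dickman) for the one-dimensional contact process is given below: whenever the dynamics would fall into the absorbing state, the configuration is replaced by one drawn from a stored sample of previously visited active configurations. The paper treats d = 3; the 1-D ring, rates and parameters here are purely illustrative.

```python
import random

def qs_contact_process(N=200, lam=4.0, steps=200_000, n_saved=50, p_save=0.01, seed=1):
    """Quasi-stationary simulation of the 1-D contact process (illustrative sketch).

    Whenever the absorbing (all-empty) state is about to be reached, the configuration
    is replaced by one drawn from a list of previously visited active configurations.
    Returns the quasi-stationary estimate of the density of active sites.
    """
    rng = random.Random(seed)
    active = set(range(N))               # start fully occupied
    history = [set(active)]
    density_sum, samples = 0.0, 0

    for _ in range(steps):
        # Occasionally refresh the stored history with the current configuration.
        if rng.random() < p_save:
            if len(history) < n_saved:
                history.append(set(active))
            else:
                history[rng.randrange(n_saved)] = set(active)

        site = rng.choice(tuple(active))
        if rng.random() < lam / (1.0 + lam):
            # Infection attempt on a random nearest neighbour.
            active.add((site + rng.choice((-1, 1))) % N)
        else:
            active.discard(site)
            if not active:               # absorbing state reached: resample from history
                active = set(rng.choice(history))

        density_sum += len(active) / N
        samples += 1

    return density_sum / samples

print(qs_contact_process())
```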

  12. High-current lanthanum-hexaboride electron emitter for a quasi-stationary arc plasma generator

    SciTech Connect

    Davydenko, V. I.; Ivanov, A. A.; Shul’zhenko, G. I.

    2015-11-15

    A high-current electron emitter based on lanthanum hexaboride has been developed for quasi-stationary arc plasma generators of ion sources. The emitter consists of a set of LaB₆ washers interleaved with washers made of thermally expanded graphite. The emitter is heated by the current flowing through the graphite washers. The thermal regime of emitter operation during plasma generation is considered. The emitter has been successfully used in the ion sources of the diagnostic injectors of fast hydrogen atomic beams.

  13. Stability and hierarchy of quasi-stationary states: financial markets as an example

    NASA Astrophysics Data System (ADS)

    Stepanov, Yuriy; Rinn, Philip; Guhr, Thomas; Peinke, Joachim; Schäfer, Rudi

    2015-08-01

    We combine geometric data analysis and stochastic modeling to describe the collective dynamics of complex systems. As an example we apply this approach to financial data and focus on the non-stationarity of the market correlation structure. We identify the dominating variable and extract its explicit stochastic model. This allows us to establish a connection between its time evolution and known historical events on the market. We discuss the dynamics, the stability and the hierarchy of the recently proposed quasi-stationary market states.

  14. Poly-coil design for a 60 tesla quasi-stationary magnet

    NASA Astrophysics Data System (ADS)

    Boenig, H. J.; Campbell, L. J.; Hodgdon, M. L.; Lopez, E. A.; Rickel, D. G.; Rogers, J. D.; Schillig, J. B.; Sims, J. R.; Pernambuco-Wise, P.; Schneider-Muntau, H. J.

    1993-02-01

    Among the new facilities to be offered by the National Science Foundation through the National High Magnetic Field Laboratory (NHMFL) are pulsed fields that can only be achieved at a national user facility by virtue of their strength, duration, and volume. In particular, a 44 mm bore pulsed magnet giving a 60 tesla field for 100 ms is in the final design stage. This magnet will be powered by a 1.4 GW motor-generator at Los Alamos and is an important step toward proving design principles that will be needed for the higher field quasi-stationary pulsed magnets that this power source is capable of driving.

  15. Collisionless kinetic regimes for quasi-stationary axisymmetric accretion disc plasmas

    SciTech Connect

    Cremaschini, C.; Tessarotto, M.

    2012-08-15

    This paper is concerned with the kinetic treatment of quasi-stationary axisymmetric collisionless accretion disc plasmas. The conditions of validity of the kinetic description for non-relativistic magnetized and gravitationally bound plasmas of this type are discussed. A classification of the possible collisionless plasma regimes which can arise in these systems is proposed, which can apply to accretion discs around both stellar-mass compact objects and galactic-center black holes. Two different classifications are determined, which are referred to, respectively, as energy-based and magnetic field-based classifications. Different regimes are pointed out for each plasma species, depending both on the relative magnitudes of kinetic and potential energies and the magnitude of the magnetic field. It is shown that in all cases, there can be quasi-stationary Maxwellian-like solutions of the Vlasov equation. The perturbative approach outlined here permits unique analytical determination of the functional form for the distribution function consistent, in each kinetic regime, with the explicit inclusion of finite Larmor radius-diamagnetic and/or energy-correction effects.

  16. Thermophysical properties of medium density fiberboards measured by quasi-stationary method: experimental and numerical evaluation

    NASA Astrophysics Data System (ADS)

    Troppová, Eva; Tippner, Jan; Hrčka, Richard

    2016-04-01

    This paper presents an experimental measurement of the thermal properties of medium density fiberboards with different thicknesses (12, 18 and 25 mm) and sample sizes (50 × 50 mm and 100 × 100 mm) by the quasi-stationary method. The quasi-stationary method is a transient method which allows measurement of three thermal parameters (thermal conductivity, thermal diffusivity and heat capacity). The experimentally obtained values were used to verify a numerical model and furthermore served as input parameters for the numerical probabilistic analysis. The sensitivity of the measured outputs (time course of temperature) to influential factors (density, heat transfer coefficient and thermal conductivities) was established and described by Spearman's rank correlation coefficients. The dependence of the thermal properties on density was confirmed by the measured data. Density also proved to be an important factor in the sensitivity analyses, as it correlated highly with all output parameters. The accuracy of the measurement method can be improved based on the results of the probabilistic analysis. The relevance of the experiment is mainly influenced by the choice of a proper ratio between the thickness and width of the samples.

  17. Kinetic description of quasi-stationary axisymmetric collisionless accretion disk plasmas with arbitrary magnetic field configurations

    SciTech Connect

    Cremaschini, Claudio; Miller, John C.; Tessarotto, Massimo

    2011-06-15

    A kinetic treatment is developed for collisionless magnetized plasmas occurring in high-temperature, low-density astrophysical accretion disks, such as are thought to be present in some radiatively inefficient accretion flows onto black holes. Quasi-stationary configurations are investigated, within the framework of a Vlasov-Maxwell description. The plasma is taken to be axisymmetric and subject to the action of slowly time-varying gravitational and electromagnetic fields. The magnetic field is assumed to be characterized by a family of locally nested but open magnetic surfaces. The slow collisionless dynamics of these plasmas is investigated, yielding a reduced gyrokinetic Vlasov equation for the kinetic distribution function. To do this, an asymptotic quasi-stationary solution is first determined, represented by a generalized bi-Maxwellian distribution expressed in terms of the relevant adiabatic invariants. The existence of the solution is shown to depend on having suitable kinetic constraints and conditions leading to particle trapping phenomena. With this solution, one can treat temperature anisotropy, toroidal and poloidal flow velocities, and finite Larmor-radius effects. An asymptotic expansion for the distribution function permits analytic evaluation of all the relevant fluid fields. Basic theoretical features of the solution and their astrophysical implications are discussed. As an application, the possibility of describing the dynamics of slowly time-varying accretion flows and the self-generation of magnetic field by means of a “kinetic dynamo effect” are discussed. Both effects are shown to be related to intrinsically kinetic physical mechanisms.

  18. Constructive approximate interpolation by neural networks

    NASA Astrophysics Data System (ADS)

    Llanas, B.; Sainz, F. J.

    2006-04-01

    We present a type of single-hidden layer feedforward neural networks with sigmoidal nondecreasing activation function. We call them ai-nets. They can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. They can uniformly approximate any continuous function of one variable and can be used for constructing uniform approximants of continuous functions of several variables. All these capabilities are based on a closed expression of the networks.
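
    The closed expression defining the ai-nets is not reproduced in the abstract; the sketch below shows a construction in the same spirit for one-dimensional data: a single hidden layer of steep logistic units, one per gap between consecutive sorted data points, approximately interpolates the data, with the error shrinking as the slope parameter k grows. This is an illustrative stand-in, not the authors' formula.

```python
import numpy as np
from scipy.special import expit

def steep_sigmoid_interpolant(xs, ys, k=200.0):
    """Single-hidden-layer sigmoid network that approximately interpolates 1-D data.

    One steep logistic unit per gap between consecutive sorted data points; the linear
    output layer adds the corresponding jumps in the target values.  As k grows, the
    network values at the data points approach the targets.
    """
    order = np.argsort(xs)
    xs, ys = np.asarray(xs, float)[order], np.asarray(ys, float)[order]
    midpoints = (xs[:-1] + xs[1:]) / 2.0
    jumps = np.diff(ys)

    def f(x):
        hidden = expit(k * (np.atleast_1d(x)[:, None] - midpoints))  # hidden layer
        return ys[0] + hidden @ jumps                                # linear output layer
    return f

xs = [0.0, 0.3, 0.5, 0.9, 1.0]
ys = [1.0, -0.5, 2.0, 0.0, 1.5]
f = steep_sigmoid_interpolant(xs, ys)
print(np.round(f(xs), 3))   # approximately [1.0, -0.5, 2.0, 0.0, 1.5]
```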

  19. Birth of a quasi-stationary black hole in an outcoupled Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Sols, Fernando; de Nova, Juan R. M.; Guery-Odelin, David; Zapata, Ivar

    2015-05-01

    We study the evolution of an initially confined atom condensate, which is progressively outcoupled by gradually lowering the confining barrier on one side. The goal is to identify protocols that best lead to a quasi-stationary sonic black hole separating regions of subsonic and supersonic flow. An optical lattice is found to be more efficient than a single barrier in yielding a long-time stationary flow. This is best achieved if the final conduction band is broad and its minimum is not much lower than the initial chemical potential. An optical lattice with a realistic Gaussian envelope yields similar results. We analytically prove and numerically check that, within a spatially coarse-grained description, the sonic horizon is bound to lie right at the envelope maximum. We derive an analytical formula for the Hawking temperature in that setup. Work supported by MINECO (Spain), grants FIS2010-21372 and FIS2013-41716-P.

  20. Core-halo quasi-stationary states in the Hamiltonian mean-field model

    NASA Astrophysics Data System (ADS)

    Konishi, Eiji

    2016-04-01

    A characteristic feature of long-range interacting systems is that they become trapped in a non-equilibrium and long-lived quasi-stationary state (QSS) during the early stages of their development. We present a comprehensive review of recent studies of the core-halo structure of QSSs, in the Hamiltonian mean-field model (HMF), which is a mean-field model of mutually coupled ferromagnetic XY spins located at a point, obtained by starting from various unsteady rectangular water-bag type initial phase-space distributions. The main result exposed in this review is that the core-halo structure can be described by the superposition of two independent Lynden-Bell distributions. We discuss the completeness of collisionless relaxation of this double Lynden-Bell distribution by using both of Lynden-Bell entropy and double Lynden-Bell entropy for the systems at low energies per particle.

  1. Quasi-stationary mechanics of elastic continua with bending stiffness wrapping on a pulley system

    NASA Astrophysics Data System (ADS)

    Kaczmarczyk, S.; Mirhadizadeh, S.

    2016-05-01

    In many engineering applications elastic continua such as ropes and belts often are subject to bending when they pass over pulleys / sheaves. In this paper the quasi-stationary mechanics of a cable-pulley system is studied. The cable is modelled as a moving Euler- Bernoulli beam. The distribution of tension is non-uniform along its span and due to the bending stiffness the contact points at the pulley-beam boundaries are not unknown. The system is described by a set of nonlinear ordinary differential equations with undetermined boundary conditions. The resulting nonlinear Boundary Value Problem (BVP) with unknown boundaries is solved by converting the problem into the ‘standard’ form defined over a fixed interval. Numerical results obtained for a range of typical configurations with relevant boundary conditions applied demonstrate that due to the effects of bending stiffness the angels of wrap are reduced and the span tensions are increased.

  2. Rogue wave formation under the action of quasi-stationary pressure

    NASA Astrophysics Data System (ADS)

    Abrashkin, A. A.; Oshmarina, O. E.

    2016-05-01

    The process of rogue wave formation on deep water is considered. A wave of extreme amplitude is born against the background of uniform waves (Gerstner waves) under the action of external pressure on free surface. The pressure distribution has a form of a quasi-stationary "pit". The fluid motion is supposed to be a vortex one and is described by an exact solution of equations of 2D hydrodynamics for an ideal fluid in Lagrangian coordinates. Liquid particles are moving around circumferences of different radii in the absence of drift flow. Values of amplitude and wave steepness optimal for rogue wave formation are found numerically. The influence of vorticity distribution and pressure drop on parameters of the fluid is investigated.

  3. Kinetic theory of quasi-stationary collisionless axisymmetric plasmas in the presence of strong rotation phenomena

    SciTech Connect

    Cremaschini, Claudio; Stuchlík, Zdeněk; Tessarotto, Massimo

    2013-05-15

    The problem of formulating a kinetic treatment for quasi-stationary collisionless plasmas in axisymmetric systems subject to the possibly independent presence of local strong velocity-shear and supersonic rotation velocities is posed. The theory is developed in the framework of the Vlasov-Maxwell description for multi-species non-relativistic plasmas. Applications to astrophysical accretion discs arising around compact objects and to plasmas in laboratory devices are considered. Explicit solutions for the equilibrium kinetic distribution function (KDF) are constructed based on the identification of the relevant particle adiabatic invariants. These are shown to be expressed in terms of generalized non-isotropic Gaussian distributions. A suitable perturbative theory is then developed which allows for the treatment of non-uniform strong velocity-shear/supersonic plasmas. This yields a series representation for the equilibrium KDF in which the leading-order term depends on both a finite set of fluid fields as well as on the gradients of an appropriate rotational frequency. Constitutive equations for the fluid number density, flow velocity, and pressure tensor are explicitly calculated. As a notable outcome, the discovery of a new mechanism for generating temperature and pressure anisotropies is pointed out, which represents a characteristic feature of plasmas considered here. This is shown to arise as a consequence of the canonical momentum conservation and to contribute to the occurrence of temperature anisotropy in combination with the adiabatic conservation of the particle magnetic moment. The physical relevance of the result and the implications of the kinetic solution for the self-generation of quasi-stationary electrostatic and magnetic fields through a kinetic dynamo are discussed.

  4. A 'Boscastle-type' quasi-stationary convective system over the UK Southwest Peninsula

    NASA Astrophysics Data System (ADS)

    Warren, Robert; Kirshbaum, Daniel; Plant, Robert; Lean, Humphrey

    2013-04-01

    Quasi-stationary convective systems (QSCSs) can produce extreme rainfall accumulations and have been responsible for many devastating flash floods worldwide. An oft-cited case from the UK is the 'Boscastle storm' which occurred on 16 August 2004 over the southwest peninsula of England. This system produced over 200 mm of precipitation in just four hours, leading to severe flooding in several coastal settlements. This presentation will focus on a QSCS from July 2010 which showed remarkable similarity to the Boscastle storm in terms of its location and structure, but produced much smaller rainfall accumulations and no flooding. First, observational data from the two cases will be compared to highlight three factors which made the Boscastle case more extreme: (1) higher rain rates, associated with a warmer and moister tropospheric column and deeper convective clouds; (2) a more stationary system, due to slower evolution of the large-scale flow; and (3) distribution of the heaviest precipitation over fewer river catchments. Results from numerical simulations of the July 2010 case (performed with convection-permitting configurations of the Met Office Unified Model) will then be presented. A control simulation, using 1.5-km grid spacing, reveals that convection was repeatedly initiated through lifting of low-level air parcels along a quasi-stationary coastal convergence line. Sensitivity tests suggest that this convergence line was a sea breeze front which temporarily stalled along the coastline due to the retarding influence of an offshore-direction background wind component. Several deficiencies are apparent in the 1.5-km model's representation of the storm system, including delayed convective initiation; however, significant improvements are observed when the grid length is reduced to 500 m. These result in part from an improved representation of the convergence line, which enhances the associated low-level ascent, allowing air parcels to more readily reach their level of free convection.

  5. Quasi-Stationary Shear-parallel MCS in a Near-saturated Environment

    NASA Astrophysics Data System (ADS)

    Liu, Changhai; Moncrieff, Mitchell

    2016-04-01

    Idealized simulations are performed to investigate a poorly-understood category of Mesoscale Convective Systems (MCSs) - quasi-stationary convective lines with upstream building and downstream stratiform regions observed in very moist environments. A specific feature in the experimental design is the inclusion of a highly idealized moisture front, mimicking the water vapor variations across the large-scale quasi-stationary (Mei-Yu) front during the Asian summer monsoon, where this regime of convective organization has been frequently observed. The numerical experiment with a wind profile of significant low-level vertical shear, plus a moist thermodynamic sounding with low convective inhibition, generates a long-lasting convective system which is down-shear tilted with a morphology resembling the documented MCSs with back-building or parallel stratiform in East Asia and North America. These are the first successful simulations of the carrot-like MCS morphology, where cells initiate near the upstream edge in either back-building or forward-building form depending on the system propagation direction. A major difference from most types of MCSs, especially the well-studied squall line, is the weak and shallow cold pool and its negligible effect on system sustenance and propagation. Instead of the cold-pool-shear interaction, it is found that convectively-excited gravity waves are responsible for the intermittent upstream initiation of convective elements. Sensitivity tests show that both the moisture front and the shear are critical for this MCS category. Our study suggests that the background spatial moisture variability affects the selection of the modes of organization, and that a systematic investigation of its role in convective organization under various wind shear conditions should be undertaken.

  6. Approximating maximum clique with a Hopfield network.

    PubMed

    Jagota, A

    1995-01-01

    In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics, both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic. PMID:18263357
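
    The specific dynamics studied in the paper are not reproduced here; the sketch below uses the textbook Hopfield-style encoding in which one binary unit per vertex and a penalty A > 1 on active non-adjacent pairs make the stable states of asynchronous energy descent exactly the maximal cliques. Function names and the choice A = 2 are illustrative.

```python
import random
import networkx as nx

def hopfield_max_clique(G, A=2.0, sweeps=100, seed=0):
    """Asynchronous descent on E(s) = -sum_i s_i + A * sum over non-edges of s_i*s_j.

    With A > 1, an energy-decreasing update turns a vertex on only if every currently
    active vertex is one of its neighbours, so the stable states are exactly the
    maximal cliques of G (illustrative encoding, not the paper's specific dynamics).
    """
    rng = random.Random(seed)
    nodes = list(G.nodes())
    s = {v: rng.randint(0, 1) for v in nodes}

    for _ in range(sweeps):
        changed = False
        rng.shuffle(nodes)
        for v in nodes:
            # Switching v on changes the energy by -1 + A * (# active non-neighbours of v).
            active_nonneighbours = sum(
                1 for u in G.nodes()
                if u != v and s[u] == 1 and not G.has_edge(u, v)
            )
            new_state = 1 if A * active_nonneighbours < 1 else 0
            if new_state != s[v]:
                s[v], changed = new_state, True
        if not changed:
            break
    return {v for v in G.nodes() if s[v] == 1}

G = nx.erdos_renyi_graph(60, 0.5, seed=3)
clique = hopfield_max_clique(G)
assert all(G.has_edge(u, v) for u in clique for v in clique if u != v)
print("maximal clique found, size:", len(clique))
```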

  7. Approximation abilities of neuro-fuzzy networks

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2010-01-01

    The paper presents the operation of two neuro-fuzzy systems of an adaptive type, intended for solving problems of the approximation of multi-variable functions in the domain of real numbers. Neuro-fuzzy systems, which combine the methodology of artificial neural networks and fuzzy sets, operate on the basis of a set of fuzzy "if-then" rules generated by means of self-organized data grouping and the estimation of relations between fuzzy experiment results. The article includes a description of the Takagi-Sugeno-Kang (TSK) and Wang-Mendel (WM) neuro-fuzzy systems and, to complement the problem in question, a hierarchical structural self-organizing method of training a fuzzy network. The multi-layer structure of the systems is analogous to the structure of "classic" neural networks. In its final part the article presents selected areas of application of neuro-fuzzy systems in the field of geodesy and surveying engineering. The numerical examples showing how the systems work concerned the approximation of functions of several variables to be used as algorithms in Geographic Information Systems (the approximation of a terrain model), the transformation of coordinates, and the prediction of a time series. The accuracy characteristics of the results obtained have been taken into consideration.
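
    As a concrete reference point for the TSK system mentioned above, the sketch below evaluates a first-order Takagi-Sugeno-Kang rule base with Gaussian memberships: the output is the firing-strength-weighted average of the linear rule consequents. The two-rule parameter values are made up for illustration; in the systems the article describes they would come from data grouping and estimation.

```python
import numpy as np

def tsk_predict(x, centers, widths, coeffs, intercepts):
    """First-order Takagi-Sugeno-Kang inference for a single input vector x.

    Each rule r has a Gaussian membership per input dimension (centers[r], widths[r])
    and a linear consequent coeffs[r] . x + intercepts[r]; the output is the
    firing-strength-weighted average of the consequents.
    """
    x = np.asarray(x, dtype=float)
    firing = np.exp(-((x - centers) ** 2 / (2.0 * widths ** 2)).sum(axis=1))
    consequents = coeffs @ x + intercepts
    return float(firing @ consequents / firing.sum())

# Two illustrative rules over two inputs (e.g. planar coordinates -> terrain height).
centers = np.array([[0.2, 0.2], [0.8, 0.8]])
widths = np.full((2, 2), 0.3)
coeffs = np.array([[1.0, 0.0], [0.0, -1.0]])
intercepts = np.array([0.0, 2.0])
print(tsk_predict([0.5, 0.5], centers, widths, coeffs, intercepts))
```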

  8. Native globular actin has a thermodynamically unstable quasi-stationary structure with elements of intrinsic disorder.

    PubMed

    Kuznetsova, Irina M; Povarova, Olga I; Uversky, Vladimir N; Turoverov, Konstantin K

    2016-02-01

    The native form of globular actin, G-actin, is formed in vivo as a result of complex post-translational folding processes that require ATP energy expenditure and are assisted by the 70 kDa heat shock protein, prefoldin and chaperonin containing TCP-1. G-actin is stabilized by the binding of one ATP molecule and one Ca(2+) ion (or Mg(2+) in vivo). Chemical denaturants, heating or Ca(2+) removal transform native actin (N) into 'inactivated actin' (I), a compact oligomer comprising 14-16 subunits. Viscogenic and crowding agents slow this process but do not stop it. The lack of calcium in the solution accelerates the spontaneous N → I transition. Thus, native G-actin has a kinetically stable (as a result of the high free energy barrier between the N and I states) but thermodynamically unstable structure, which, in the absence of Ca(2+) or other bivalent metal ions, spontaneously converts to the thermodynamically stable I state. It was noted that native actin has much in common with intrinsically disordered proteins: it has functionally important disordered regions; it is constantly in complex with one of its numerous partners; and it plays key roles in many cellular processes, in a manner similar to disordered hub proteins. By analyzing actin folding in vivo and unfolding in vitro, we advanced the hypothesis that proteins in a native state may have a thermodynamically unstable quasi-stationary structure. The kinetically stable native state of these proteins appears forcibly under the influence of intracellular folding machinery. The denaturation of such proteins is always irreversible because the inactivated state, for which the structure is determined by the amino acid sequence of a protein, comprises the thermodynamically stable state under physiological conditions. PMID:26460158

  9. Non-equilibrium relaxation between two quasi-stationary states in a stochastic lattice Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Chen, Sheng; Täuber, Uwe C.

    2015-03-01

    Spatially extended stochastic models for predator-prey competition and coexistence display complex, correlated spatio-temporal structures and are governed by remarkably large fluctuations. Both populations are characterized by damped erratic oscillations whose properties are governed by the reaction rates. Here, we specifically study a stochastic lattice Lotka-Volterra model by means of Monte Carlo simulations that impose spatial restrictions on the number of occupants per site. The system tends to relax into a quasi-stationary state, independent of the imposed initial conditions. We investigate the non-equilibrium relaxation between two such quasi-stationary states, following an instantaneous change of the predation rate. The ensuing relaxation times are measured via the peak width of the population density Fourier transforms. As expected, we find that the initial state only influences the oscillations for the duration of this relaxation time, implying that the system quickly loses any memory of the initial configuration. Research supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-FG02-09ER46613.

  10. Forecast of quasi-stationary solar wind parameters based on EUV images and magnetograms in 24 solar cycle

    NASA Astrophysics Data System (ADS)

    Shugay, Yulia; Kalegaev, Vladimir; Barinova, Vera; Rodkin, Denis

    2016-07-01

    Forecasting of quasi-stationary solar wind (SW) parameters is important for automated prediction of the geomagnetic and radiation conditions in the near-Earth environment. SDO solar images representing the location of quasi-stationary SW coronal sources, such as large coronal holes and small areas of open magnetic field near active regions, give the information needed for such analysis. Different coronal sources correspond to different plasma temperatures and can be identified more easily and reliably using different EUV spectral bands. We use the EUV solar images centered at different wavelengths (19.3, 21.1 and 17.1 nm) obtained by AIA/SDO and the solar magnetograms obtained by HMI/SDO for the automated separation of different types of SW sources and filaments. Several simple models were created for estimating the SW parameters using properties of coronal SW sources in different spectral bands. The operational model was developed by combining the responses of various simple models. The combination of empirical relationships for different spectral bands within this hierarchical approach allows the SW forecast to be improved. Model validation has been carried out by comparison of calculated and measured solar wind speed at the L1 point. Implementation of this operational model in the Space Monitoring Data Center is under development.

  11. Approximating the largest eigenvalue of network adjacency matrices

    NASA Astrophysics Data System (ADS)

    Restrepo, Juan G.; Ott, Edward; Hunt, Brian R.

    2007-11-01

    The largest eigenvalue of the adjacency matrix of a network plays an important role in several network processes (e.g., synchronization of oscillators, percolation on directed networks, and linear stability of equilibria of network coupled systems). In this paper we develop approximations to the largest eigenvalue of adjacency matrices and discuss the relationships between these approximations. Numerical experiments on simulated networks are used to test our results.
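
    The abstract does not list the approximations themselves; a widely used mean-field estimate for uncorrelated networks expresses the largest eigenvalue through degree moments as <k²>/<k>, and the sketch below compares that estimate with a direct power-iteration computation on a scale-free test graph. This is an illustrative baseline, not necessarily one of the paper's refined formulas.

```python
import numpy as np
import networkx as nx

def largest_eigenvalue_power_iteration(A, iters=200):
    """Numerical largest eigenvalue of a non-negative adjacency matrix."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ (A @ v)

G = nx.barabasi_albert_graph(2000, 4, seed=7)
A = nx.to_numpy_array(G)
k = A.sum(axis=1)                              # degree of each node

lam_exact = largest_eigenvalue_power_iteration(A)
lam_meanfield = (k ** 2).mean() / k.mean()     # <k^2>/<k>: uncorrelated-network estimate

print(f"power iteration: {lam_exact:.2f}   <k^2>/<k> approximation: {lam_meanfield:.2f}")
```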

  12. A closer look at the indications of q-generalized Central Limit Theorem behavior in quasi-stationary states of the HMF model

    NASA Astrophysics Data System (ADS)

    Pluchino, Alessandro; Rapisarda, Andrea; Tsallis, Constantino

    2008-05-01

    We give a closer look at the Central Limit Theorem (CLT) behavior in quasi-stationary states of the Hamiltonian Mean Field model, a paradigmatic one for long-range-interacting classical many-body systems. We present new calculations which show that, following their time evolution, we can observe and classify three kinds of long-standing quasi-stationary states (QSS) with different correlations. The frequency of occurrence of each class depends on the size of the system. The different microscopic nature of the QSS leads to different dynamical correlations and therefore to different results for the observed CLT behavior.

  13. The use of neural networks for approximation of nuclear data

    SciTech Connect

    Korovin, Yu. A.; Maksimushkina, A. V.

    2015-12-15

    The article discusses the possibility of using neural networks for the approximation or reconstruction of data such as reaction cross sections. The quality of the approximation is also evaluated using fitting criteria. The activity of materials under irradiation is then calculated from the data obtained using the neural networks.

  14. A spiking neural network architecture for nonlinear function approximation.

    PubMed

    Iannella, N; Back, A D

    2001-01-01

    Multilayer perceptrons have received much attention in recent years due to their universal approximation capabilities. Normally, such models use real-valued continuous signals, although they are loosely based on biological neuronal networks that encode signals using spike trains. Spiking neural networks are of interest both from a biological point of view and in terms of a method of robust signaling in particularly noisy or difficult environments. It is important to consider networks based on spike trains. A basic question that needs to be considered, however, is what type of architecture can be used to provide universal function approximation capabilities in spiking networks? In this paper, we propose a spiking neural network architecture using both integrate-and-fire units and delays that is capable of approximating a real-valued function mapping to within a specified degree of accuracy. PMID:11665783

  15. Cluster and propensity based approximation of a network

    PubMed Central

    2013-01-01

    Background: The models in this article generalize current models for both correlation networks and multigraph networks. Correlation networks are widely applied in genomics research. In contrast to general networks, it is straightforward to test the statistical significance of an edge in a correlation network. It is also easy to decompose the underlying correlation matrix and generate informative network statistics such as the module eigenvector. However, correlation networks only capture the connections between numeric variables. An open question is whether one can find suitable decompositions of the similarity measures employed in constructing general networks. Multigraph networks are attractive because they support likelihood based inference. Unfortunately, it is unclear how to adjust current statistical methods to detect the clusters inherent in many data sets. Results: Here we present an intuitive and parsimonious parametrization of a general similarity measure such as a network adjacency matrix. The cluster and propensity based approximation (CPBA) of a network not only generalizes correlation network methods but also multigraph methods. In particular, it gives rise to a novel and more realistic multigraph model that accounts for clustering and provides likelihood based tests for assessing the significance of an edge after controlling for clustering. We present a novel Majorization-Minimization (MM) algorithm for estimating the parameters of the CPBA. To illustrate the practical utility of the CPBA of a network, we apply it to gene expression data and to a bi-partite network model for diseases and disease genes from the Online Mendelian Inheritance in Man (OMIM). Conclusions: The CPBA of a network is theoretically appealing since a) it generalizes correlation and multigraph network methods, b) it improves likelihood based significance tests for edge counts, c) it directly models higher-order relationships between clusters, and d) it suggests novel clustering

  16. Engine With Regression and Neural Network Approximators Designed

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2001-01-01

    At the NASA Glenn Research Center, the NASA engine performance program (NEPP, ref. 1) and the design optimization testbed COMETBOARDS (ref. 2) with regression and neural network analysis-approximators have been coupled to obtain a preliminary engine design methodology. The solution to a high-bypass-ratio subsonic waverotor-topped turbofan engine, which is shown in the preceding figure, was obtained by the simulation depicted in the following figure. This engine is made of 16 components mounted on two shafts with 21 flow stations. The engine is designed for a flight envelope with 47 operating points. The design optimization utilized both neural network and regression approximations, along with the cascade strategy (ref. 3). The cascade used three algorithms in sequence: the method of feasible directions, the sequence of unconstrained minimizations technique, and sequential quadratic programming. The normalized optimum thrusts obtained by the three methods are shown in the following figure: the cascade algorithm with regression approximation is represented by a triangle, a circle is shown for the neural network solution, and a solid line indicates original NEPP results. The solutions obtained from both approximate methods lie within one standard deviation of the benchmark solution for each operating point. The simulation improved the maximum thrust by 5 percent. The performance of the linear regression and neural network methods as alternate engine analyzers was found to be satisfactory for the analysis and operation optimization of air-breathing propulsion engines (ref. 4).

  17. An application of artificial neural networks to experimental data approximation

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J., Jr.

    1993-01-01

    As an initial step in the evaluation of networks, a feedforward architecture is trained to approximate experimental data by the backpropagation algorithm. Several drawbacks were detected and an alternative learning algorithm was then developed to partially address the drawbacks. This noniterative algorithm has a number of advantages over the backpropagation method and is easily implemented on existing hardware.

  18. Subsonic Aircraft With Regression and Neural-Network Approximators Designed

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2004-01-01

    At the NASA Glenn Research Center, NASA Langley Research Center's Flight Optimization System (FLOPS) and the design optimization testbed COMETBOARDS with regression and neural-network-analysis approximators have been coupled to obtain a preliminary aircraft design methodology. For a subsonic aircraft, the optimal design, that is the airframe-engine combination, is obtained by the simulation. The aircraft is powered by two high-bypass-ratio engines with a nominal thrust of about 35,000 lbf. It is to carry 150 passengers at a cruise speed of Mach 0.8 over a range of 3000 n mi and to operate on a 6000-ft runway. The aircraft design utilized a neural network and a regression-approximations-based analysis tool, along with a multioptimizer cascade algorithm that uses sequential linear programming, sequential quadratic programming, the method of feasible directions, and then sequential quadratic programming again. Optimal aircraft weight versus the number of design iterations is shown. The central processing unit (CPU) time to solution is given. It is shown that the regression-method-based analyzer exhibited a smoother convergence pattern than the FLOPS code. The optimum weight obtained by the approximation technique and the FLOPS code differed by 1.3 percent. Prediction by the approximation technique exhibited no error for the aircraft wing area and turbine entry temperature, whereas it was within 2 percent for most other parameters. Cascade strategy was required by FLOPS as well as the approximators. The regression method had a tendency to hug the data points, whereas the neural network exhibited a propensity to follow a mean path. The performance of the neural network and regression methods was considered adequate. It was at about the same level for small, standard, and large models with redundancy ratios (defined as the number of input-output pairs to the number of unknown coefficients) of 14, 28, and 57, respectively. In an SGI octane workstation (Silicon Graphics

  19. Approximating Attractors of Boolean Networks by Iterative CTL Model Checking

    PubMed Central

    Klarner, Hannes; Siebert, Heike

    2015-01-01

    This paper introduces the notion of approximating asynchronous attractors of Boolean networks by minimal trap spaces. We define three criteria for determining the quality of an approximation: “faithfulness” which requires that the oscillating variables of all attractors in a trap space correspond to their dimensions, “univocality” which requires that there is a unique attractor in each trap space, and “completeness” which requires that there are no attractors outside of a given set of trap spaces. Each is a reachability property for which we give equivalent model checking queries. Whereas faithfulness and univocality can be decided by model checking the corresponding subnetworks, the naive query for completeness must be evaluated on the full state space. Our main result is an alternative approach which is based on the iterative refinement of an initially poor approximation. The algorithm detects so-called autonomous sets in the interaction graph, variables that contain all their regulators, and considers their intersection and extension in order to perform model checking on the smallest possible state spaces. A benchmark, in which we apply the algorithm to 18 published Boolean networks, is given. In each case, the minimal trap spaces are faithful, univocal, and complete, which suggests that they are in general good approximations for the asymptotics of Boolean networks. PMID:26442247
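
    The model-checking machinery is not reproduced here, but for a very small network the object being approximated can be computed exactly: the asynchronous attractors are the attracting strongly connected components of the state transition graph. The three-variable example network below is made up for illustration; its two attractors (one fixed point and one two-state cycle) are exactly what minimal trap spaces would localize.

```python
from itertools import product
import networkx as nx

# A toy 3-variable Boolean network (illustrative; not taken from the paper).
update = {
    0: lambda s: s[1],
    1: lambda s: s[0],
    2: lambda s: s[0] and not s[2],
}

# Build the asynchronous state transition graph: one variable is updated per step.
stg = nx.DiGraph()
for state in product((0, 1), repeat=3):
    stg.add_node(state)
    for i, f in update.items():
        nxt = list(state)
        nxt[i] = int(f(state))
        stg.add_edge(state, tuple(nxt))

# Attractors = attracting strongly connected components of the transition graph.
attractors = [sorted(component) for component in nx.attracting_components(stg)]
print(attractors)   # a fixed point (0,0,0) and the cycle {(1,1,0), (1,1,1)}
```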

  20. Network meta-analysis with integrated nested Laplace approximations.

    PubMed

    Sauter, Rafael; Held, Leonhard

    2015-11-01

    Analyzing the collected evidence of a systematic review in the form of a network meta-analysis (NMA) enjoys increasing popularity and provides a valuable instrument for decision making. Bayesian inference of NMA models is often advocated, especially if correlated random effects for multiarm trials are included. The standard choice for Bayesian inference is Markov chain Monte Carlo (MCMC) sampling, which is computationally intensive. An alternative to MCMC sampling is the recently suggested approximate Bayesian method of integrated nested Laplace approximations (INLA), which dramatically saves computation time without any substantial loss in accuracy. We show how INLA applies to NMA models for summary-level as well as trial-arm-level data. Specifically, we outline the modeling of multiarm trials and inference for functional contrasts with INLA. We demonstrate how INLA facilitates the assessment of network inconsistency with node-splitting. Three applications illustrate the use of INLA for an NMA. PMID:26360927
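
    Node-splitting rests on the consistency assumption that indirect evidence agrees with direct evidence; for three treatments A, B and C connected in the network, the basic relation being tested is

    \[ d_{BC} \;=\; d_{AC} - d_{AB}, \]

    where d_{XY} denotes the relative effect of treatment Y versus treatment X. Node-splitting estimates a chosen contrast twice, once from the direct comparisons alone and once from the indirect evidence implied by the rest of the network, and checks whether the two estimates agree. This is the generic NMA consistency relation, not a detail specific to the INLA implementation.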

  1. Neural Network and Regression Approximations Used in Aircraft Design

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Lavelle, Thomas M.

    1999-01-01

    NASA Lewis Research Center's CometBoards Test Bed was used to create regression and neural network models for a High-Speed Civil Transport (HSCT) aircraft. Both approximation models that replaced the actual analysis tool predicted the aircraft response in a trivial computational effort. The models allow engineers to quickly study the effects of design variables on constraint and objective values for a given aircraft configuration. For example, an engineer can change the engine size by 1000 pounds of thrust and quickly see how this change affects all the output values without rerunning the entire simulation. In addition, an engineer can change a constraint and use the approximation models to quickly reoptimize the configuration. Generating the neural network and the regression models is a time-consuming process, but this exercise has to be carried out only once. Furthermore, an automated process can reduce calculations substantially.
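
    The NEPP/FLOPS/COMETBOARDS tools themselves are not reproducible here, but the workflow these NASA records describe (sample the expensive analysis once, fit regression and neural-network approximators, then optimize on the cheap surrogates) can be sketched with generic libraries. The analysis function, design variables, and network size below are all illustrative stand-ins.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from scipy.optimize import minimize

# Stand-in for the expensive analysis code: a cheap analytic function of two
# normalized "design variables".
def expensive_analysis(x):
    return 1.0 + (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2 + 0.1 * np.sin(5 * x[0])

# 1. Sample the analysis once to build a training set.
rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = np.array([expensive_analysis(x) for x in X])

# 2. Fit the two kinds of approximators described in the records above.
regression = make_pipeline(PolynomialFeatures(degree=3), LinearRegression()).fit(X, y)
neural_net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                          random_state=0).fit(X, y)

# 3. Optimize on the cheap surrogates instead of the analysis code.
for name, surrogate in [("regression", regression), ("neural net", neural_net)]:
    res = minimize(lambda x: float(surrogate.predict(x.reshape(1, -1))[0]),
                   x0=[0.5, 0.5], bounds=[(0.0, 1.0), (0.0, 1.0)])
    print(f"{name:10s}: optimum at {res.x.round(3)}, predicted objective {res.fun:.3f}")
```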

  2. Approximating frustration scores in complex networks via perturbed Laplacian spectra

    NASA Astrophysics Data System (ADS)

    Savol, Andrej J.; Chennubhotla, Chakra S.

    2015-12-01

    Systems of many interacting components, as found in physics, biology, infrastructure, and the social sciences, are often modeled by simple networks of nodes and edges. The real-world systems frequently confront outside intervention or internal damage whose impact must be predicted or minimized, and such perturbations are then mimicked in the models by altering nodes or edges. This leads to the broad issue of how to best quantify changes in a model network after some type of perturbation. In the case of node removal there are many centrality metrics which associate a scalar quantity with the removed node, but it can be difficult to associate the quantities with some intuitive aspect of physical behavior in the network. This presents a serious hurdle to the application of network theory: real-world utility networks are rarely altered according to theoretic principles unless the kinetic impact on the network's users are fully appreciated beforehand. In pursuit of a kinetically interpretable centrality score, we discuss the f-score, or frustration score. Each f-score quantifies whether a selected node accelerates or inhibits global mean first passage times to a second, independently selected target node. We show that this is a natural way of revealing the dynamical importance of a node in some networks. After discussing merits of the f-score metric, we combine spectral and Laplacian matrix theory in order to quickly approximate the exact f-score values, which can otherwise be expensive to compute. Following tests on both synthetic and real medium-sized networks, we report f-score runtime improvements over exact brute force approaches in the range of 0 to 400 % with low error (<3 % ).
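
    The perturbed-spectra shortcut is the paper's contribution and is not reproduced here; the sketch below computes the quantity it approximates by brute force: mean first passage times of an unbiased random walk to a target node, and the change in their average when another node is removed. The sign and normalisation of this illustrative score differ from the paper's f-score.

```python
import numpy as np
import networkx as nx

def mean_first_passage_times(G, target):
    """MFPT of an unbiased random walk from every node to `target`.

    Solves (I - Q) m = 1, where Q is the walk's transition matrix restricted to the
    non-target nodes (the standard absorbing-chain linear system).
    """
    nodes = [v for v in G if v != target]
    index = {v: i for i, v in enumerate(nodes)}
    Q = np.zeros((len(nodes), len(nodes)))
    for v in nodes:
        for u in G.neighbors(v):
            if u != target:
                Q[index[v], index[u]] = 1.0 / G.degree(v)
    m = np.linalg.solve(np.eye(len(nodes)) - Q, np.ones(len(nodes)))
    return dict(zip(nodes, m))

def frustration_like_score(G, node, target):
    """Change in the average MFPT to `target` when `node` is removed (brute force)."""
    base = np.mean(list(mean_first_passage_times(G, target).values()))
    H = G.copy()
    H.remove_node(node)
    reduced = np.mean(list(mean_first_passage_times(H, target).values()))
    return reduced - base          # > 0: removing the node slows access to the target

G = nx.karate_club_graph()
print(frustration_like_score(G, node=2, target=33))
```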

  3. Optimized approximation algorithm in neural networks without overfitting.

    PubMed

    Liu, Yinyin; Starzyk, Janusz A; Zhu, Zhen

    2008-06-01

    In this paper, an optimized approximation algorithm (OAA) is proposed to address the overfitting problem in function approximation using neural networks (NNs). The optimized approximation algorithm avoids overfitting by means of a novel and effective stopping criterion based on the estimation of the signal-to-noise-ratio figure (SNRF). Using SNRF, which checks the goodness-of-fit in the approximation, overfitting can be automatically detected from the training error only without use of a separate validation set. The algorithm has been applied to problems of optimizing the number of hidden neurons in a multilayer perceptron (MLP) and optimizing the number of learning epochs in MLP's backpropagation training using both synthetic and benchmark data sets. The OAA algorithm can also be utilized in the optimization of other parameters of NNs. In addition, it can be applied to the problem of function approximation using any kind of basis functions, or to the problem of learning model selection when overfitting needs to be considered. PMID:18541499

  4. Approximating frustration scores in complex networks via perturbed Laplacian spectra

    PubMed Central

    Savol, Andrej J.; Chennubhotla, Chakra S.

    2016-01-01

    Systems of many interacting components, as found in physics, biology, infrastructure, and the social sciences, are often modeled by simple networks of nodes and edges. The real-world systems frequently confront outside intervention or internal damage whose impact must be predicted or minimized, and such perturbations are then mimicked in the models by altering nodes or edges. This leads to the broad issue of how to best quantify changes in a model network after some type of perturbation. In the case of node removal there are many centrality metrics which associate a scalar quantity with the removed node, but it can be difficult to associate the quantities with some intuitive aspect of physical behavior in the network. This presents a serious hurdle to the application of network theory: real-world utility networks are rarely altered according to theoretic principles unless the kinetic impact on the network’s users is fully appreciated beforehand. In pursuit of a kinetically-interpretable centrality score, we discuss the f-score, or frustration score. Each f-score quantifies whether a selected node accelerates or inhibits global mean first passage times to a second, independently-selected target node. We show that this is a natural way of revealing the dynamical importance of a node in some networks. After discussing merits of the f-score metric, we combine spectral and Laplacian matrix theory in order to quickly approximate the exact f-score values, which can otherwise be expensive to compute. Following tests on both synthetic and real medium-sized networks, we report f-score runtime improvements over exact brute force approaches in the range of 0 to 400% with low error (< 3%). PMID:26764743

  5. Beyond the locally treelike approximation for percolation on real networks

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo; Castellano, Claudio

    2016-03-01

    Theoretical attempts proposed so far to describe ordinary percolation processes on real-world networks rely on the locally treelike ansatz. Such an approximation, however, holds only to a limited extent, because real graphs are often characterized by high frequencies of short loops. We present here a theoretical framework able to overcome such a limitation for the case of site percolation. Our method is based on a message passing algorithm that discounts redundant paths along triangles in the graph. We systematically test the approach on 98 real-world graphs and on synthetic networks. We find excellent accuracy in the prediction of the whole percolation diagram, with significant improvement with respect to the prediction obtained under the locally treelike approximation. Residual discrepancies between theory and simulations do not depend on clustering and can be attributed to the presence of loops longer than three edges. We present also a method to account for clustering in bond percolation, but the improvement with respect to the method based on the treelike approximation is much less apparent.

  6. A Multithreaded Algorithm for Network Alignment Via Approximate Matching

    SciTech Connect

    Khan, Arif; Gleich, David F.; Pothen, Alex; Halappanavar, Mahantesh

    2012-11-16

    Network alignment is an optimization problem to find the best one-to-one map between the vertices of a pair of graphs that overlaps in as many edges as possible. It is a relaxation of the graph isomorphism problem and is closely related to the subgraph isomorphism problem. The best current approaches are entirely heuristic, and are iterative in nature. They generate real-valued heuristic approximations that must be rounded to find integer solutions. This rounding requires solving a bipartite maximum weight matching problem at each step in order to avoid missing high quality solutions. We investigate substituting a parallel half-approximation algorithm for maximum weight matching in place of an exact computation. Our experiments show that the resulting difference in solution quality is negligible. We demonstrate almost a 20-fold speedup using 40 threads on an 8-processor Intel Xeon E7-8870 system (from 10 minutes to 36 seconds).
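    The half-approximation referred to above is the classical greedy heuristic for maximum weight matching, which guarantees at least half the optimal weight. A minimal serial sketch is given below; the paper's contribution is a parallel, multithreaded variant, which this does not attempt to reproduce.

```python
def greedy_half_approx_matching(edges):
    """Greedy 1/2-approximation to maximum weight matching.

    edges: iterable of (weight, u, v) tuples.
    Returns a list of matched (u, v) pairs whose total weight is at least
    half of the optimal matching weight.
    """
    matched = set()
    matching = []
    for w, u, v in sorted(edges, reverse=True):     # consider heaviest edges first
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching

if __name__ == "__main__":
    edges = [(3.0, "a", "b"), (2.5, "b", "c"), (2.0, "c", "d"), (1.0, "a", "d")]
    print(greedy_half_approx_matching(edges))       # [('a', 'b'), ('c', 'd')]
```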

  7. The Replica Symmetric Approximation of the Analogical Neural Network

    NASA Astrophysics Data System (ADS)

    Barra, Adriano; Genovese, Giuseppe; Guerra, Francesco

    2010-08-01

    In this paper we continue our investigation of the analogical neural network, by introducing and studying its replica symmetric approximation in the absence of external fields. Bridging the neural network to a bipartite spin-glass, we introduce and apply a new interpolation scheme to its free energy, that naturally extends the interpolation via cavity fields or stochastic perturbations from the usual spin glass case to these models. While our methods allow the formulation of a fully broken replica symmetry scheme, in this paper we limit ourselves to the replica symmetric case, in order to give the basic essence of our interpolation method. The order parameters in this case are given by the assumed averages of the overlaps for the original spin variables, and for the new Gaussian variables. As a result, we obtain the free energy of the system as a sum rule, which, at least at the replica symmetric level, can be solved exactly, through a self-consistent mini-max variational principle. The replica symmetric approximation obtained in this way turns out to be exactly correct in the ergodic region, where it coincides with the annealed expression for the free energy, and in the low density limit of stored patterns. Moreover, in the spin glass limit it gives the correct expression for the replica symmetric approximation in this case. We also calculate the entropy density in the low temperature region, where we find that it becomes negative, as expected for this kind of approximation. Interestingly, in contrast with the case where the stored patterns are digital, no phase transition is found in the low temperature limit, as a function of the density of stored patterns.

  8. Convergence and Rate Analysis of Neural Networks for Sparse Approximation

    PubMed Central

    Balavoine, Aurèle; Romberg, Justin; Rozell, Christopher J.

    2013-01-01

    We present an analysis of the Locally Competitive Algorithm (LCA), which is a Hopfield-style neural network that efficiently solves sparse approximation problems (e.g., approximating a vector from a dictionary using just a few nonzero coefficients). This class of problems plays a significant role in both theories of neural coding and applications in signal processing. However, the LCA lacks analysis of its convergence properties, and previous results on neural networks for nonsmooth optimization do not apply to the specifics of the LCA architecture. We show that the LCA has desirable convergence properties, such as stability and global convergence to the optimum of the objective function when it is unique. Under some mild conditions, the support of the solution is also proven to be reached in finite time. Furthermore, some restrictions on the problem specifics allow us to characterize the convergence rate of the system by showing that the LCA converges exponentially fast with an analytically bounded convergence rate. We support our analysis with several illustrative simulations. PMID:24199030

  9. Comparison of gravitational wave detector network sky localization approximations

    NASA Astrophysics Data System (ADS)

    Grover, K.; Fairhurst, S.; Farr, B. F.; Mandel, I.; Rodriguez, C.; Sidery, T.; Vecchio, A.

    2014-02-01

    Gravitational waves emitted during compact binary coalescences are a promising source for gravitational-wave detector networks. The accuracy with which the location of the source on the sky can be inferred from gravitational-wave data is a limiting factor for several potential scientific goals of gravitational-wave astronomy, including multimessenger observations. Various methods have been used to estimate the ability of a proposed network to localize sources. Here we compare two techniques for predicting the uncertainty of sky localization—timing triangulation and the Fisher information matrix approximations—with Bayesian inference on the full, coherent data set. We find that timing triangulation alone tends to overestimate the uncertainty in sky localization by a median factor of 4 for a set of signals from nonspinning compact object binaries ranging up to a total mass of 20M⊙, and the overestimation increases with the mass of the system. We find that average predictions can be brought to better agreement by the inclusion of phase consistency information in timing-triangulation techniques. However, even after corrections, these techniques can yield significantly different results to the full analysis on specific mock signals. Thus, while the approximate techniques may be useful in providing rapid, large scale estimates of network localization capability, the fully coherent Bayesian analysis gives more robust results for individual signals, particularly in the presence of detector noise.

  10. Mobile Calibration Based on Laser Metrology and Approximation Networks

    PubMed Central

    Muñoz-Rodriguez, J. Apolinar

    2010-01-01

    A mobile calibration technique for three-dimensional vision is presented. In this method, vision parameters are computed automatically by approximation networks built based on the position of a camera and image processing of a laser line. The networks also perform three-dimensional visualization. In the proposed system, the setup geometry can be modified online, whereby an online re-calibration is performed based on data provided by the network and the required modifications of extrinsic and intrinsic parameters are thus determined, overcoming any calibration limitations caused by the modification procedure. The mobile calibration also avoids procedures involving references, which are used in traditional online re-calibration methods. The proposed mobile calibration thus improves the accuracy and performance of the three-dimensional vision because online data of calibrated references are not passed on to the vision system. This work represents a contribution to the field of online re-calibration, as verified by a comparison with the results based on lighting methods, which are calibrated and re-calibrated via perspective projection. Processing time is also studied. PMID:22163622

  11. Locally supervised neural networks for approximating terramechanics models

    NASA Astrophysics Data System (ADS)

    Song, Xingguo; Gao, Haibo; Ding, Liang; Spanos, Pol D.; Deng, Zongquan; Li, Zhijun

    2016-06-01

    Neural networks (NNs) have been widely implemented for identifying nonlinear models, and predicting the distribution of targets, due to their ability to store and learn training samples. However, for highly complex systems, it is difficult to build a robust global network model, and efficiently managing the large amounts of experimental data is often required in real-time applications. In this paper, an effective method for building local models is proposed to enhance robustness and learning speed in globally supervised NNs. Unlike NNs, Gaussian processes (GP) produce predictions that capture the uncertainty inherent in actual systems, and typically provide superior results. Therefore, in this study, each local NN is learned in the same manner as a Gaussian process. A mixture of local model NNs is created and then augmented using weighted regression. This proposed method, referred to as locally supervised NN for weighted regression like GP and abbreviated "LGPN", is utilized for approximating a wheel-terrain interaction model under fixed soil parameters. The prediction results show that the proposed method yields significant robustness, modeling accuracy, and rapid learning speed.

  12. Sub-problem Optimization With Regression and Neural Network Approximators

    NASA Technical Reports Server (NTRS)

    Guptill, James D.; Hopkins, Dale A.; Patnaik, Surya N.

    2003-01-01

    Design optimization of large systems can be attempted through a sub-problem strategy. In this strategy, the original problem is divided into a number of smaller problems that are clustered together to obtain a sequence of sub-problems. Solution to the large problem is attempted iteratively through repeated solutions to the modest sub-problems. This strategy is applicable to structures and to multidisciplinary systems. For structures, clustering the substructures generates the sequence of sub-problems. For a multidisciplinary system, individual disciplines, accounting for coupling, can be considered as sub-problems. A sub-problem, if required, can be further broken down to accommodate sub-disciplines. The sub-problem strategy is being implemented into the NASA design optimization test bed, referred to as "CometBoards." Neural network and regression approximators are employed for reanalysis and sensitivity analysis calculations at the sub-problem level. The strategy has been implemented in sequential as well as parallel computational environments. This strategy, which attempts to alleviate algorithmic and reanalysis deficiencies, has the potential to become a powerful design tool. However, several issues have to be addressed before its full potential can be harnessed. This paper illustrates the strategy and addresses some issues.

  13. Functional approximation using artificial neural networks in structural mechanics

    NASA Technical Reports Server (NTRS)

    Alam, Javed; Berke, Laszlo

    1993-01-01

    The artificial neural networks (ANN) methodology is an outgrowth of research in artificial intelligence. In this study, the feed-forward network model that was proposed by Rumelhart, Hinton, and Williams was applied to the mapping of functions that are encountered in structural mechanics problems. Several different network configurations were chosen to train the available data for problems in materials characterization and structural analysis of plates and shells. By using the recall process, the accuracy of these trained networks was assessed.

  14. S-curve networks and an approximate method for estimating degree distributions of complex networks

    NASA Astrophysics Data System (ADS)

    Guo, Jin-Li

    2010-12-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of the China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using the S curve (logistic curve). The growing trend of IPv4 addresses in China is forecasted. The results provide reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, the paper proposes a finite network model with bulk growth. The model is said to be an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for the network. An approximate method is developed to predict the growth dynamics of the individual nodes, and this is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with the simulations, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research.
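    As an illustration of the forecasting step, an S curve (logistic curve) can be fitted to cumulative counts with a standard nonlinear least-squares routine. The sketch below uses synthetic data and assumed parameter names in place of the Chinese IPv4 statistics; it is not the paper's calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """S curve: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic yearly cumulative counts standing in for the IPv4 address data.
t = np.arange(2000, 2013, dtype=float)
rng = np.random.default_rng(1)
y = logistic(t, K=3.3e8, r=0.6, t0=2008) * (1 + 0.02 * rng.standard_normal(t.size))

popt, _ = curve_fit(logistic, t, y, p0=(y.max() * 2, 0.5, t.mean()))
K_hat, r_hat, t0_hat = popt
print(f"estimated limit K={K_hat:.3g}, rate r={r_hat:.2f}, midpoint t0={t0_hat:.1f}")
print("forecast for 2015:", logistic(2015.0, *popt))
```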

  15. A non-Boltzmannian behavior of the energy distribution for quasi-stationary regimes of the Fermi–Pasta–Ulam β system

    SciTech Connect

    Leo, Mario; Leo, Rosario Antonio; Tempesta, Piergiulio

    2013-06-15

    In a recent paper [M. Leo, R.A. Leo, P. Tempesta, C. Tsallis, Phys. Rev. E 85 (2012) 031149], the existence of quasi-stationary states for the Fermi–Pasta–Ulam β system has been shown numerically, by analyzing the stability properties of the N/4-mode exact nonlinear solution. Here we study the energy distribution of the modes N/4, N/3 and N/2, when they are unstable, as a function of N and of the initial excitation energy. We observe that the classical Boltzmann weight is replaced by a different weight, expressed by a q-exponential function. -- Highlights: ► New statistical properties of the Fermi–Pasta–Ulam beta system are found. ► The energy distribution of specific observables is studied: a deviation from the standard Boltzmann behavior is found. ► A q-exponential weight should be used instead. ► The classical exponential weight is restored in the large particle limit (mesoscopic nature of the phenomenon).
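    For reference, the q-exponential weight that replaces the Boltzmann factor is the Tsallis function e_q(x) = [1 + (1 - q) x]^(1/(1 - q)), which reduces to exp(x) as q approaches 1. The short numerical sketch below uses illustrative parameter values only.

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0.0, base ** (1.0 / (1.0 - q)), 0.0)

# Boltzmann-like weight exp(-beta*E) versus a q-exponential weight for the same energies.
E = np.linspace(0.0, 5.0, 6)
beta = 1.0
print("Boltzmann :", np.exp(-beta * E))
print("q = 1.2   :", q_exponential(-beta * E, q=1.2))   # power-law-like, heavier tail
```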

  16. Best approximation of Gaussian neural networks with nodes uniformly spaced.

    PubMed

    Mulero-Martinez, J I

    2008-02-01

    This paper is aimed at exposing the reader to certain aspects in the design of the best approximants with Gaussian radial basis functions (RBFs). The class of functions to which this approach applies consists of those compactly supported in frequency. The approximation properties of uniqueness and existence are restricted to this class. Functions which are smooth enough can be expanded in Gaussian series converging uniformly to the objective function. The uniqueness of these series is demonstrated in the context of the orthonormal basis in a Hilbert space. Furthermore, the best approximation to a given band-limited function from a truncated Gaussian series is analyzed by an energy-based argument. This analysis not only gives a theoretical proof concerned with the existence of best approximations but also addresses the problems of architectural selection. Specifically, guidance for selecting the variance and the oversampling parameters is provided for practitioners. PMID:18269959

  17. A numerical study of back-building process in a quasi-stationary rainband with extreme rainfall over northern Taiwan during 11-12 June 2012

    NASA Astrophysics Data System (ADS)

    Wang, C.-C.; Chiou, B.-K.; Chen, G. T.-J.; Kuo, H.-C.

    2015-11-01

    During 11-12 June 2012, quasi-stationary linear mesoscale convective systems (MCSs) developed near northern Taiwan and produced extreme rainfall up to 510 mm and severe flooding in Taipei. An evident back-building (BB) process in these MCSs contributed to the extreme rainfall, and thus is investigated using a cloud-resolving model. Specifically, we seek answers to the question of why the location about 15-30 km upstream from the old cell is often more favorable for new cell initiation without the cold pool mechanism in this subtropical event during the mei-yu season. With a horizontal grid size of 1.5 km, the model successfully reproduced the linear MCS and the BB process, which is found to be influenced by both dynamical and thermodynamical effects. During initiation in a background with convective instability, new cells are associated with positive (negative) buoyancy below (above) due to latent heating (adiabatic cooling), which represents a gradual destabilization. At the beginning, the new development is close to the old convection, which provides stronger warming below and additional cooling at mid-levels from evaporation of condensates, thus yielding a more rapid destabilization. This enhanced upward decrease in buoyancy at a lower height eventually creates an upward perturbation pressure gradient force to drive further development along with the buoyancy itself. After the new cell has gained sufficient strength, a descending branch at the old cell's rear flank acts to separate the new cell to about 20 km upstream. Therefore, the advantage of this location in the BB process can be explained.

  18. Nonlinear functional approximation with networks using adaptive neurons

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1992-01-01

    A novel mathematical framework for the rapid learning of nonlinear mappings and topological transformations is presented. It is based on allowing the neuron's parameters to adapt as a function of learning. This fully recurrent adaptive neuron model (ANM) has been successfully applied to complex nonlinear function approximation problems such as the highly degenerate inverse kinematics problem in robotics.

  19. Distributed density estimation in sensor networks based on variational approximations

    NASA Astrophysics Data System (ADS)

    Safarinejadian, Behrooz; Menhaj, Mohammad B.

    2011-09-01

    This article presents a peer-to-peer (P2P) distributed variational Bayesian (P2PDVB) algorithm for density estimation and clustering in sensor networks. It is assumed that measurements of the nodes can be statistically modelled by a common Gaussian mixture model. The variational approach allows simultaneous estimation of the component parameters and the model complexity. In this algorithm, each node independently calculates local sufficient statistics first by using local observations. A P2P averaging approach is then used to diffuse local sufficient statistics to neighbours and estimate global sufficient statistics in each node. Finally, each sensor node uses the estimated global sufficient statistics to estimate the model order as well as the parameters of this model. Because the P2P averaging approach only requires that each node communicate with its neighbours, the P2PDVB algorithm is scalable and robust. Diffusion speed and convergence of the proposed algorithm are also studied. Finally, simulated and real data sets are used to verify the remarkable performance of the proposed algorithm.
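    The peer-to-peer averaging step can be illustrated with randomized pairwise gossip: each node repeatedly averages its local sufficient statistics with a random neighbour, and every node converges to the global average. This is a generic sketch under that assumption, not the exact diffusion scheme of the P2PDVB algorithm; the ring topology and the statistics vectors are hypothetical.

```python
import numpy as np

def gossip_average(local_stats, neighbours, n_rounds=200, seed=0):
    """Randomized pairwise gossip: nodes repeatedly average a statistics vector
    with a random neighbour; every node converges to the global mean."""
    rng = np.random.default_rng(seed)
    stats = np.array(local_stats, dtype=float)
    nodes = list(range(len(stats)))
    for _ in range(n_rounds):
        i = rng.choice(nodes)
        j = rng.choice(neighbours[i])
        avg = 0.5 * (stats[i] + stats[j])
        stats[i] = avg
        stats[j] = avg
    return stats

if __name__ == "__main__":
    # Local sufficient statistics (e.g. sums of observations) on a ring of 6 sensor nodes.
    local = [[1.0, 2.0], [3.0, 1.0], [2.0, 2.0], [0.5, 4.0], [1.5, 3.0], [2.0, 0.0]]
    ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    print(gossip_average(local, ring))   # every row converges to the global mean, about [1.67, 2.0]
```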

  20. Adaptive hybrid simulations for multiscale stochastic reaction networks

    NASA Astrophysics Data System (ADS)

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-01

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods includes hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, for certain classes of networks, the quasi-stationary assumption can be applied to approximate the dynamics of fast subnetworks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
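    The exact baseline that such hybrid methods aim to accelerate is Gillespie's SSA. A minimal sketch for a one-species birth-death network follows; the rates are chosen purely for illustration.

```python
import numpy as np

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0, seed=0):
    """Gillespie SSA for the simple reaction network  0 -> X (rate k_birth),
    X -> 0 (rate k_death * x).  Returns the sampled trajectory."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a1, a2 = k_birth, k_death * x           # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)          # waiting time to the next reaction
        if rng.random() < a1 / a0:
            x += 1                              # birth event
        else:
            x -= 1                              # death event
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

if __name__ == "__main__":
    times, states = ssa_birth_death()
    print("final copy number:", states[-1], "(stationary mean is k_birth/k_death = 100)")
```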

  1. Adaptive hybrid simulations for multiscale stochastic reaction networks

    SciTech Connect

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-21

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods includes hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, for certain classes of networks, the quasi-stationary assumption can be applied to approximate the dynamics of fast subnetworks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.

  2. A Subsonic Aircraft Design Optimization With Neural Network and Regression Approximators

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.; Haller, William J.

    2004-01-01

    The Flight-Optimization-System (FLOPS) code encountered difficulty in analyzing a subsonic aircraft. The limitation made the design optimization problematic. The deficiencies have been alleviated through use of neural network and regression approximations. The insight gained from using the approximators is discussed in this paper. The FLOPS code is reviewed. Analysis models are developed and validated for each approximator. The regression method appears to hug the data points, while the neural network approximation follows a mean path. For an analysis cycle, the approximate model required milliseconds of central processing unit (CPU) time versus seconds by the FLOPS code. Performance of the approximators was satisfactory for aircraft analysis. A design optimization capability has been created by coupling the derived analyzers to the optimization test bed CometBoards. The approximators were efficient reanalysis tools in the aircraft design optimization. Instability encountered in the FLOPS analyzer was eliminated. The convergence characteristics were improved for the design optimization. The CPU time required to calculate the optimum solution, measured in hours with the FLOPS code was reduced to minutes with the neural network approximation and to seconds with the regression method. Generation of the approximators required the manipulation of a very large quantity of data. Design sensitivity with respect to the bounds of aircraft constraints is easily generated.

  3. Application of neurocomputing for data approximation and classification in wireless sensor networks.

    PubMed

    Jabbari, Amir; Jedermann, Reiner; Muthuraman, Ramanan; Lang, Walter

    2009-01-01

    A new application of neurocomputing for data approximation and classification is introduced to process data in a wireless sensor network. For this purpose, a simplified dynamic sliding backpropagation algorithm is implemented on a wireless sensor network for transportation applications. It is able to approximate temperature and humidity in sensor nodes. In addition, two architectures of "radial basis function" (RBF) classifiers are introduced with probabilistic features for data classification in sensor nodes. The applied approximation and classification algorithms could be used in similar applications for data processing in embedded systems. PMID:22574062

  4. Application of Neurocomputing for Data Approximation and Classification in Wireless Sensor Networks

    PubMed Central

    Jabbari, Amir; Jedermann, Reiner; Muthuraman, Ramanan; Lang, Walter

    2009-01-01

    A new application of neurocomputing for data approximation and classification is introduced to process data in a wireless sensor network. For this purpose, a simplified dynamic sliding backpropagation algorithm is implemented on a wireless sensor network for transportation applications. It is able to approximate temperature and humidity in sensor nodes. In addition, two architectures of “radial basis function” (RBF) classifiers are introduced with probabilistic features for data classification in sensor nodes. The applied approximation and classification algorithms could be used in similar applications for data processing in embedded systems. PMID:22574062

  5. Neural Network and Regression Approximations in High Speed Civil Transport Aircraft Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.

    1998-01-01

    Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are frequently required, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers derived from an original analyzer utilizing neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, thereby producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solution of the problem, which had been measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers. The CPU time required to generate the input-output pairs and to train the approximating analyzers was seven times that required for solution of the problem.

  6. Approximate Entropy Based Fault Localization and Fault Type Recognition for Non-solidly Earthed Network

    NASA Astrophysics Data System (ADS)

    Pang, Qingle; Liu, Xinyun; Sun, Bo; Ling, Qunli

    2012-12-01

    For non-solidly earthed networks, fault localization of single-phase grounding faults has been a problem. A novel fault localization and fault type recognition method for single-phase grounding faults, based on approximate entropy, is presented. The approximate entropies of the transient zero sequence current at both ends of a healthy section are approximately equal, and their ratio is close to 1. On the contrary, the approximate entropies at both ends of the fault section are different, and the ratio is far from 1. In this way, the fault section is located. For the same fault section, the smaller the fault resistance, the larger the approximate entropy of the transient zero sequence current. From the functional relationship between approximate entropy and fault resistance, the fault type is determined. The method has the advantages of requiring less data transfer and not requiring accurate synchronous sampling. The simulation results show that the proposed method is feasible and accurate.
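    The approximate entropy referred to here is, presumably, the standard ApEn(m, r) statistic of Pincus, which compares how often length-m and length-(m+1) templates of a signal repeat within a tolerance r. A direct O(N^2) sketch (not the authors' implementation):

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D signal x (Pincus' definition).
    r defaults to 0.2 * std(x)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()

    def phi(m):
        # embed the signal into overlapping templates of length m
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        counts = (dist <= r).mean(axis=1)       # fraction of templates within tolerance r
        return np.log(counts).mean()

    return phi(m) - phi(m + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    regular = np.sin(np.linspace(0, 20 * np.pi, 500))
    noisy = rng.standard_normal(500)
    print("ApEn(sine) :", approximate_entropy(regular))   # low: predictable signal
    print("ApEn(noise):", approximate_entropy(noisy))     # higher: irregular signal
```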

  7. Approximating natural connectivity of scale-free networks based on largest eigenvalue

    NASA Astrophysics Data System (ADS)

    Tan, S.-Y.; Wu, J.; Li, M.-J.; Lu, X.

    2016-06-01

    It has been recently proposed that natural connectivity can be used to efficiently characterize the robustness of complex networks. The natural connectivity has an intuitive physical meaning and a simple mathematical formulation, which corresponds to an average eigenvalue calculated from the graph spectrum. However, for the scale-free network, a model close to many widely occurring real-world systems, it is difficult to obtain the spectrum analytically. In this article, we investigate the approximation of natural connectivity based on the largest eigenvalue in both random and correlated scale-free networks. It is demonstrated that the natural connectivity of scale-free networks can be dominated by the largest eigenvalue, which can be expressed asymptotically and analytically to approximate natural connectivity with small errors. Then we show that the natural connectivity of random scale-free networks increases linearly with the average degree given the scaling exponent and decreases monotonically with the scaling exponent given the average degree. Moreover, it is found that, given the degree distribution, the more assortative a scale-free network is, the more robust it is. Experiments in real networks validate our methods and results.
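    For reference, natural connectivity is defined from the adjacency spectrum as ln((1/N) * sum_i exp(lambda_i)), and when the largest eigenvalue dominates it is approximately lambda_1 - ln N. The sketch below checks this on a random-graph stand-in; the paper's analysis concerns scale-free networks, which are not generated here.

```python
import numpy as np

def natural_connectivity(A):
    """Natural connectivity: ln( (1/N) * sum_i exp(lambda_i) ) of adjacency matrix A."""
    lam = np.linalg.eigvalsh(A)
    return np.log(np.mean(np.exp(lam)))

def largest_eigenvalue_approx(A):
    """Approximation dominated by the largest eigenvalue: lambda_1 - ln N."""
    lam_max = np.linalg.eigvalsh(A)[-1]
    return lam_max - np.log(A.shape[0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 200, 0.05
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1); A = A + A.T               # undirected Erdos-Renyi stand-in
    print("exact natural connectivity :", natural_connectivity(A))
    print("largest-eigenvalue estimate:", largest_eigenvalue_approx(A))
```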

  8. Relative entropy minimizing noisy non-linear neural network to approximate stochastic processes.

    PubMed

    Galtier, Mathieu N; Marini, Camille; Wainrib, Gilles; Jaeger, Herbert

    2014-08-01

    A method is provided for designing and training noise-driven recurrent neural networks as models of stochastic processes. The method unifies and generalizes two known separate modeling approaches, Echo State Networks (ESN) and Linear Inverse Modeling (LIM), under the common principle of relative entropy minimization. The power of the new method is demonstrated on a stochastic approximation of the El Niño phenomenon studied in climate research. PMID:24815743

  9. Approximating the largest eigenvalue of the modified adjacency matrix of networks with heterogeneous node biases

    NASA Astrophysics Data System (ADS)

    Ott, Edward; Pomerance, Andrew

    2009-05-01

    Motivated by its relevance to various types of dynamical behavior of network systems, the maximum eigenvalue λA of the adjacency matrix A of a network has been considered and mean-field-type approximations to λA have been developed for different kinds of networks. Here A is defined by Aij=1 (Aij=0) if there is (is not) a directed network link to i from j . However, in at least two recent problems involving networks with heterogeneous node properties (percolation on a directed network and the stability of Boolean models of gene networks), an analogous but different eigenvalue problem arises, namely, that of finding the largest eigenvalue λQ of the matrix Q , where Qij=qiAij and the “bias” qi may be different at each node i . (In the previously mentioned percolation and gene network contexts, qi is a probability and so lies in the range 0≤qi≤1 .) The purposes of this paper are to extend the previous considerations of the maximum eigenvalue λA of A to λQ , to develop suitable analytic approximations to λQ , and to test these approximations with numerical experiments. In particular, three issues considered are (i) the effect of the correlation (or anticorrelation) between the value of qi and the number of links to and from node i , (ii) the effect of correlation between the properties of two nodes at either end of a network link (“assortativity”), and (iii) the effect of community structure allowing for a situation in which different q values are associated with different communities.
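    Numerically, lambda_Q can always be obtained by power iteration on Q = diag(q) A. The mean-field estimate in the sketch below is a simple uncorrelated formula, <q * d_in * d_out> / <d>, included as an assumption-level illustration rather than the paper's exact expression.

```python
import numpy as np

def power_iteration(M, n_iter=500, seed=0):
    """Largest-magnitude eigenvalue of a nonnegative matrix M by power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.random(M.shape[0])
    for _ in range(n_iter):
        w = M @ v
        v = w / np.linalg.norm(w)
    return v @ (M @ v) / (v @ v)

def mean_field_lambda_q(A, q):
    """Simple uncorrelated mean-field estimate of lambda_Q for Q = diag(q) A
    (an assumption-level formula: <q * d_in * d_out> / <d>)."""
    d_in, d_out = A.sum(axis=1), A.sum(axis=0)   # A_ij = 1 means a link to i from j
    return np.sum(q * d_in * d_out) / np.sum(d_in)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 500
    A = (rng.random((n, n)) < 0.02).astype(float)    # random directed network
    q = rng.uniform(0.2, 1.0, size=n)                # heterogeneous node biases
    Q = np.diag(q) @ A
    print("power iteration lambda_Q :", power_iteration(Q))
    print("mean-field estimate      :", mean_field_lambda_q(A, q))
```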

  10. Approximating the largest eigenvalue of the modified adjacency matrix of networks with heterogeneous node biases.

    PubMed

    Ott, Edward; Pomerance, Andrew

    2009-05-01

    Motivated by its relevance to various types of dynamical behavior of network systems, the maximum eigenvalue lambdaA of the adjacency matrix A of a network has been considered and mean-field-type approximations to lambdaA have been developed for different kinds of networks. Here A is defined by Aij=1 (Aij=0) if there is (is not) a directed network link to i from j. However, in at least two recent problems involving networks with heterogeneous node properties (percolation on a directed network and the stability of Boolean models of gene networks), an analogous but different eigenvalue problem arises, namely, that of finding the largest eigenvalue lambdaQ of the matrix Q, where Qij=qiAij and the "bias" qi may be different at each node i. (In the previously mentioned percolation and gene network contexts, qi is a probability and so lies in the range 0approximations to lambdaQ, and to test these approximations with numerical experiments. In particular, three issues considered are (i) the effect of the correlation (or anticorrelation) between the value of qi and the number of links to and from node i, (ii) the effect of correlation between the properties of two nodes at either end of a network link ("assortativity"), and (iii) the effect of community structure allowing for a situation in which different q values are associated with different communities. PMID:19518525

  11. Pointwise and uniform approximation by multivariate neural network operators of the max-product type.

    PubMed

    Costarelli, Danilo; Vinti, Gianluca

    2016-09-01

    In this article, the theory of multivariate max-product neural network (NN) and quasi-interpolation operators has been introduced. Pointwise and uniform approximation results have been proved, together with estimates concerning the rate of convergence. At the end, several examples of sigmoidal activation functions have been provided. PMID:27389570

  12. Exact and approximate moment closures for non-Markovian network epidemics.

    PubMed

    Pellis, Lorenzo; House, Thomas; Keeling, Matt J

    2015-10-01

    Moment-closure techniques are commonly used to generate low-dimensional deterministic models to approximate the average dynamics of stochastic systems on networks. The quality of such closures is usually difficult to assess and furthermore the relationship between model assumptions and closure accuracy is often difficult, if not impossible, to quantify. Here we carefully examine some commonly used moment closures, in particular a new one based on the concept of maximum entropy, for approximating the spread of epidemics on networks by reconstructing the probability distributions over triplets based on those over pairs. We consider various models (SI, SIR, SEIR and Reed-Frost-type) under Markovian and non-Markovian assumptions characterising the latent and infectious periods. We initially study with care two special networks, namely the open triplet and closed triangle, for which we can obtain analytical results. We then explore numerically the exactness of moment closures for a wide range of larger motifs, thus gaining understanding of the factors that introduce errors in the approximations, in particular the presence of a random duration of the infectious period and the presence of overlapping triangles in a network. We also derive a simpler and more intuitive proof than previously available concerning the known result that pair-based moment closure is exact for the Markovian SIR model on tree-like networks under pure initial conditions. We also extend such a result to all infectious models, Markovian and non-Markovian, in which susceptibles escape infection independently from each infected neighbour and for which infectives cannot regain susceptible status, provided the network is tree-like and initial conditions are pure. This work represents a valuable step in enriching intuition and deepening understanding of the assumptions behind moment closure approximations and for putting them on a more rigorous mathematical footing. PMID:25975999
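    A minimal example of a pair-based closure is the standard pairwise SIR model on a homogeneous network of degree n, where triples are closed as [ABC] ~ ((n-1)/n)[AB][BC]/[B]. The sketch below integrates these closed equations; parameter values and initial pair counts are illustrative, and this is the textbook pairwise model rather than any of the specific hierarchies introduced in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pairwise_sir(t, y, tau, gamma, n):
    """Pair-level SIR ODEs on a homogeneous network of degree n, closed with
    the standard pair approximation [ABC] ~ ((n-1)/n) [AB][BC]/[B]."""
    S, I, SS, SI = y
    kappa = (n - 1) / n
    SSI = kappa * SS * SI / max(S, 1e-12)       # closed triple counts
    ISI = kappa * SI * SI / max(S, 1e-12)
    dS = -tau * SI
    dI = tau * SI - gamma * I
    dSS = -2.0 * tau * SSI
    dSI = tau * (SSI - ISI - SI) - gamma * SI
    return [dS, dI, dSS, dSI]

if __name__ == "__main__":
    N, n = 1000.0, 5            # population size and network degree
    I0 = 1.0
    S0 = N - I0
    SS0 = n * S0 * (S0 / N)     # initial pair counts under random mixing
    SI0 = n * S0 * (I0 / N)
    sol = solve_ivp(pairwise_sir, (0, 100), [S0, I0, SS0, SI0],
                    args=(0.3, 0.2, n), dense_output=True)
    print("final epidemic size:", N - sol.y[0, -1])
```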

  13. Parameter inference in small world network disease models with approximate Bayesian Computational methods

    NASA Astrophysics Data System (ADS)

    Walker, David M.; Allingham, David; Lee, Heung Wing Joseph; Small, Michael

    2010-02-01

    Small world network models have been effective in capturing the variable behaviour of reported case data of the SARS coronavirus outbreak in Hong Kong during 2003. Simulations of these models have previously been realized using informed “guesses” of the proposed model parameters and tested for consistency with the reported data by surrogate analysis. In this paper we attempt to provide statistically rigorous parameter distributions using Approximate Bayesian Computation sampling methods. We find that such sampling schemes are a useful framework for fitting parameters of stochastic small world network models where simulation of the system is straightforward but expressing a likelihood is cumbersome.
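    The ABC rejection scheme itself is simple: draw parameters from the prior, simulate, and keep draws whose summary statistic is close to the observed one. The sketch below uses a toy discrete-time stochastic SIR simulator in place of the small-world SARS model; the prior, summary statistic, and tolerance are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_outbreak(beta, gamma=0.1, n=1000, i0=5, t_max=60):
    """Toy discrete-time stochastic SIR used as a stand-in for the small-world model."""
    s, i, cases = n - i0, i0, []
    for _ in range(t_max):
        new_inf = rng.binomial(s, 1 - np.exp(-beta * i / n))
        new_rec = rng.binomial(i, 1 - np.exp(-gamma))
        s, i = s - new_inf, i + new_inf - new_rec
        cases.append(new_inf)
    return np.array(cases)

# "Observed" daily case counts generated with a known beta = 0.3.
observed = simulate_outbreak(0.3)

def abc_rejection(obs_total, n_samples=2000, eps=100.0):
    """ABC rejection: keep prior draws whose simulated outbreak size is close to the data."""
    accepted = []
    for _ in range(n_samples):
        beta = rng.uniform(0.05, 0.6)            # uniform prior on the contact rate
        sim_total = simulate_outbreak(beta).sum()
        if abs(sim_total - obs_total) < eps:
            accepted.append(beta)
    return np.array(accepted)

posterior = abc_rejection(observed.sum())
if posterior.size:
    print(f"accepted {posterior.size} draws, posterior mean beta = {posterior.mean():.3f}")
```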

  14. Artificial neural networks and approximate reasoning for intelligent control in space

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1991-01-01

    A method is introduced for learning to refine the control rules of approximate reasoning-based controllers. A reinforcement-learning technique is used in conjunction with a multi-layer neural network model of an approximate reasoning-based controller. The model learns by updating its prediction of the physical system's behavior. The model can use the control knowledge of an experienced operator and fine-tune it through the process of learning. Some of the space domains suitable for applications of the model such as rendezvous and docking, camera tracking, and tethered systems control are discussed.

  15. Mean Field Approximation for Biased Diffusion on Japanese Inter-Firm Trading Network

    PubMed Central

    Watanabe, Hayafumi

    2014-01-01

    By analysing the financial data of firms across Japan, a nonlinear power law with an exponent of 1.3 was observed between the number of business partners (i.e. the degree of the inter-firm trading network) and sales. In a previous study using numerical simulations, we found that this scaling can be explained by both the money-transport model, where a firm (i.e. customer) distributes money to its out-edges (suppliers) in proportion to the in-degree of destinations, and by the correlations among the Japanese inter-firm trading network. However, in this previous study, we could not specifically identify what types of structure properties (or correlations) of the network determine the 1.3 exponent. In the present study, we more clearly elucidate the relationship between this nonlinear scaling and the network structure by applying mean-field approximation of the diffusion in a complex network to this money-transport model. Using theoretical analysis, we obtained the mean-field solution of the model and found that, in the case of the Japanese firms, the scaling exponent of 1.3 can be determined from the power law of the average degree of the nearest neighbours of the network with an exponent of −0.7. PMID:24626149

  16. Mean field approximation for biased diffusion on Japanese inter-firm trading network.

    PubMed

    Watanabe, Hayafumi

    2014-01-01

    By analysing the financial data of firms across Japan, a nonlinear power law with an exponent of 1.3 was observed between the number of business partners (i.e. the degree of the inter-firm trading network) and sales. In a previous study using numerical simulations, we found that this scaling can be explained by both the money-transport model, where a firm (i.e. customer) distributes money to its out-edges (suppliers) in proportion to the in-degree of destinations, and by the correlations among the Japanese inter-firm trading network. However, in this previous study, we could not specifically identify what types of structure properties (or correlations) of the network determine the 1.3 exponent. In the present study, we more clearly elucidate the relationship between this nonlinear scaling and the network structure by applying mean-field approximation of the diffusion in a complex network to this money-transport model. Using theoretical analysis, we obtained the mean-field solution of the model and found that, in the case of the Japanese firms, the scaling exponent of 1.3 can be determined from the power law of the average degree of the nearest neighbours of the network with an exponent of -0.7. PMID:24626149

  17. Mean-field approximation for the Sznajd model in complex networks

    NASA Astrophysics Data System (ADS)

    Araújo, Maycon S.; Vannucchi, Fabio S.; Timpanaro, André M.; Prado, Carmen P. C.

    2015-02-01

    This paper studies the Sznajd model for opinion formation in a population connected through a general network. A master equation describing the time evolution of opinions is presented and solved in a mean-field approximation. Although quite simple, this approximation allows us to capture the most important features regarding the steady states of the model. When spontaneous opinion changes are included, a discontinuous transition from consensus to polarization can be found as the rate of spontaneous change is increased. In this case we show that a hybrid mean-field approach including interactions between second nearest neighbors is necessary to estimate correctly the critical point of the transition. The analytical prediction of the critical point is also compared with numerical simulations in a wide variety of networks, in particular Barabási-Albert networks, finding reasonable agreement despite the strong approximations involved. The same hybrid approach that made it possible to deal with second-order neighbors could just as well be adapted to treat other problems such as epidemic spreading or predator-prey systems.

  18. Accuracy criterion for the mean-field approximation in susceptible-infected-susceptible epidemics on networks

    NASA Astrophysics Data System (ADS)

    Van Mieghem, P.; van de Bovenkamp, R.

    2015-03-01

    Mean-field approximations (MFAs) are frequently used in physics. When a process (such as an epidemic or a synchronization) on a network is approximated by MFA, a major hurdle is the determination of those graphs for which MFA is reasonably accurate. Here, we present an accuracy criterion for Markovian susceptible-infected-susceptible (SIS) epidemics on any network, based on the spectrum of the adjacency and SIS covariance matrix. We evaluate the MFA criterion for the complete and star graphs analytically, and numerically for connected Erdős-Rényi random graphs for small size N ≤ 14. The accuracy of MFA increases with average degree and with N. Precise simulations (up to network sizes N = 100) of the MFA accuracy criterion versus N for the complete graph, star, square lattice, and path graphs lead us to conjecture that the worst MFA accuracy decreases, for large N, proportionally to the inverse of the spectral radius of the adjacency matrix of the graph.
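    The mean-field approximation in question, the N-intertwined MFA, replaces the 2^N-state SIS Markov chain by one infection-probability ODE per node, dv_i/dt = beta (1 - v_i) sum_j A_ij v_j - delta v_i. A sketch on the complete graph, where the MFA steady state is known in closed form (parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

def nimfa_sis(t, v, A, beta, delta):
    """N-intertwined mean-field approximation of SIS: one infection-probability
    ODE per node,  dv_i/dt = beta*(1 - v_i)*sum_j A_ij v_j - delta*v_i."""
    return beta * (1.0 - v) * (A @ v) - delta * v

if __name__ == "__main__":
    n = 50
    A = np.ones((n, n)) - np.eye(n)                  # complete graph K_n
    beta, delta = 0.05, 1.0                          # infection rate above the MFA threshold 1/(n-1)
    v0 = np.full(n, 0.01)
    sol = solve_ivp(nimfa_sis, (0, 50), v0, args=(A, beta, delta))
    print("steady-state MFA prevalence:", sol.y[:, -1].mean())
    print("analytic MFA prediction    :", 1 - delta / (beta * (n - 1)))
```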

  19. A rapid supervised learning neural network for function interpolation and approximation.

    PubMed

    Chen, C P

    1996-01-01

    This paper presents a neural-network architecture and an instant learning algorithm that rapidly decides the weights of the designed single-hidden layer neural network. For an n-dimensional N-pattern training set, with a constant bias, a maximum of N-r-1 hidden nodes is required to learn the mapping within a given precision (where r is the rank, usually the dimension, of the input patterns). For off-line training, the proposed network and algorithm is able to achieve "one-shot" training as opposed to most iterative training algorithms in the literature. An online training algorithm is also presented. Similar to most of the backpropagation type of learning algorithms, the given algorithm also interpolates the training data. To eliminate outlier data which may appear in some erroneous training data, a robust weighted least squares method is proposed. The robust weighted least squares learning algorithm can eliminate outlier samples and the algorithm approximates the training data rather than interpolates them. The advantage of the designed network architecture is also mathematically proved. Several experiments show very promising results. PMID:18263516
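    The one-shot idea, choosing the hidden-layer responses and then obtaining the output weights from a single linear least-squares solve rather than iterative backpropagation, can be sketched as follows. The random hidden layer used here is an illustrative stand-in for the paper's construction, and the robust weighted least squares variant is not reproduced.

```python
import numpy as np

def one_shot_train(X, y, n_hidden=40, seed=0):
    """Single-hidden-layer network whose output weights come from one
    least-squares solve (no iterative training).  The random hidden layer is
    an illustrative stand-in for the paper's construction."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # hidden-layer responses
    H = np.hstack([H, np.ones((X.shape[0], 1))])      # constant bias column
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # "one-shot" output weights
    return W, b, beta

def predict(X, W, b, beta):
    H = np.hstack([np.tanh(X @ W + b), np.ones((X.shape[0], 1))])
    return H @ beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
    W, b, beta = one_shot_train(X, y)
    print("max abs error on training data:", np.abs(predict(X, W, b, beta) - y).max())
```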

  20. Applying Monte Carlo Simulation to Biomedical Literature to Approximate Genetic Network.

    PubMed

    Al-Dalky, Rami; Taha, Kamal; Al Homouz, Dirar; Qasaimeh, Murad

    2016-01-01

    Biologists often need to know the set of genes associated with a given set of genes or a given disease. We propose in this paper a classifier system called Monte Carlo for Genetic Network (MCforGN) that can construct genetic networks, identify functionally related genes, and predict gene-disease associations. MCforGN identifies functionally related genes based on their co-occurrences in the abstracts of biomedical literature. For a given gene g , the system first extracts the set of genes found within the abstracts of biomedical literature associated with g. It then ranks these genes to determine the ones with high co-occurrences with g . It overcomes the limitations of current approaches that employ analytical deterministic algorithms by applying Monte Carlo Simulation to approximate genetic networks. It does so by conducting repeated random sampling to obtain numerical results and to optimize these results. Moreover, it analyzes results to obtain the probabilities of different genes' co-occurrences using series of statistical tests. MCforGN can detect gene-disease associations by employing a combination of centrality measures (to identify the central genes in disease-specific genetic networks) and Monte Carlo Simulation. MCforGN aims at enhancing state-of-the-art biological text mining by applying novel extraction techniques. We evaluated MCforGN by comparing it experimentally with nine approaches. Results showed marked improvement. PMID:26415184

  1. Linear noise approximation for oscillations in a stochastic inhibitory network with delay

    NASA Astrophysics Data System (ADS)

    Dumont, Grégory; Northoff, Georg; Longtin, André

    2014-07-01

    Understanding neural variability is currently one of the biggest challenges in neuroscience. Using theory and computational modeling, we study the behavior of a globally coupled inhibitory neural network, in which each neuron follows a purely stochastic two-state spiking process. We investigate the role of both this intrinsic randomness and the conduction delay on the emergence of fast (e.g., gamma) oscillations. Toward that end, we expand the recently proposed linear noise approximation (LNA) technique to this non-Markovian "delay" case. The analysis first leads to a nonlinear delay-differential equation (DDE) with multiplicative noise for the mean activity. The LNA then yields two coupled DDEs, one of which is driven by additive Gaussian white noise. These equations on their own provide an excellent approximation to the full network dynamics, which are much longer to integrate. They further allow us to compute a theoretical expression for the power spectrum of the population activity. Our analytical result is in good agreement with the power spectrum obtained via numerical simulations of the full network dynamics, for the large range of parameters where both the intrinsic stochasticity and the conduction delay are necessary for the occurrence of oscillations. The intrinsic noise arises from the probabilistic description of each neuron, yet it is expressed at the system activity level, and it can only be controlled by the system size. In fact, its effect on the fluctuations in system activity disappears in the infinite network size limit, but the characteristics of the oscillatory activity depend on all model parameters including the system size. Using the Hilbert transform, we further show that the intrinsic noise causes sporadic strong fluctuations in the phase of the gamma rhythm.

  2. How reliable is the linear noise approximation of gene regulatory networks?

    PubMed Central

    2013-01-01

    Background The linear noise approximation (LNA) is commonly used to predict how noise is regulated and exploited at the cellular level. These predictions are exact for reaction networks composed exclusively of first order reactions or for networks involving bimolecular reactions and large numbers of molecules. It is however well known that gene regulation involves bimolecular interactions with molecule numbers as small as a single copy of a particular gene. It is therefore questionable how reliable the LNA predictions are for these systems. Results We implement in the software package intrinsic Noise Analyzer (iNA), a system size expansion based method which calculates the mean concentrations and the variances of the fluctuations to an order of accuracy higher than the LNA. We then use iNA to explore the parametric dependence of the Fano factors and of the coefficients of variation of the mRNA and protein fluctuations in models of genetic networks involving nonlinear protein degradation, post-transcriptional, post-translational and negative feedback regulation. We find that the LNA can significantly underestimate the amplitude and period of noise-induced oscillations in genetic oscillators. We also identify cases where the LNA predicts that noise levels can be optimized by tuning a bimolecular rate constant whereas our method shows that no such regulation is possible. All our results are confirmed by stochastic simulations. Conclusion The software iNA allows the investigation of parameter regimes where the LNA fares well and where it does not. We have shown that the parametric dependence of the coefficients of variation and Fano factors for common gene regulatory networks is better described by including terms of higher order than LNA in the system size expansion. This analysis is considerably faster than stochastic simulations due to the extensive ensemble averaging needed to obtain statistically meaningful results. Hence iNA is well suited for performing this type of analysis.

  3. Complete hierarchies of SIR models on arbitrary networks with exact and approximate moment closure.

    PubMed

    Sharkey, Kieran J; Wilkinson, Robert R

    2015-06-01

    We first generalise ideas discussed by Kiss et al. (2015) to prove a theorem for generating exact closures (here expressing joint probabilities in terms of their constituent marginal probabilities) for susceptible-infectious-removed (SIR) dynamics on arbitrary graphs (networks). For Poisson transmission and removal processes, this enables us to obtain a systematic reduction in the number of differential equations needed for an exact 'moment closure' representation of the underlying stochastic model. We define 'transmission blocks' as a possible extension of the block concept in graph theory and show that the order at which the exact moment closure representation is curtailed is the size of the largest transmission block. More generally, approximate closures of the hierarchy of moment equations for these dynamics are typically defined for the first and second order yielding mean-field and pairwise models respectively. It is frequently implied that, in principle, closed models can be written down at arbitrary order if only we had the time and patience to do this. However, for epidemic dynamics on networks, these higher-order models have not been defined explicitly. Here we unambiguously define hierarchies of approximate closed models that can utilise subsystem states of any order, and show how well-known models are special cases of these hierarchies. PMID:25829147

  4. Application of Approximate Pattern Matching in Two Dimensional Spaces to Grid Layout for Biochemical Network Maps

    PubMed Central

    Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki

    2012-01-01

    Background For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance the understanding or traceability of them. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. Results We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of the calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performances compared with other existing grid layouts. Conclusions Use of an approximate pattern matching algorithm quickly redistributes the laid-out nodes by fast, non-grid algorithms on the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of the pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html. PMID:22679486

  5. Cascade Optimization Strategy with Neural Network and Regression Approximations Demonstrated on a Preliminary Aircraft Engine Design

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Patnaik, Surya N.

    2000-01-01

    A preliminary aircraft engine design methodology is being developed that utilizes a cascade optimization strategy together with neural network and regression approximation methods. The cascade strategy employs different optimization algorithms in a specified sequence. The neural network and regression methods are used to approximate solutions obtained from the NASA Engine Performance Program (NEPP), which implements engine thermodynamic cycle and performance analysis models. The new methodology is proving to be more robust and computationally efficient than the conventional optimization approach of using a single optimization algorithm with direct reanalysis. The methodology has been demonstrated on a preliminary design problem for a novel subsonic turbofan engine concept that incorporates a wave rotor as a cycle-topping device. Computations of maximum thrust were obtained for a specific design point in the engine mission profile. The results (depicted in the figure) show a significant improvement in the maximum thrust obtained using the new methodology in comparison to benchmark solutions obtained using NEPP in a manual design mode.

  6. Cascade Optimization for Aircraft Engines With Regression and Neural Network Analysis - Approximators

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.

    2000-01-01

    The NASA Engine Performance Program (NEPP) can configure and analyze almost any type of gas turbine engine that can be generated through the interconnection of a set of standard physical components. In addition, the code can optimize engine performance by changing adjustable variables under a set of constraints. However, for engine cycle problems at certain operating points, the NEPP code can encounter difficulties: nonconvergence in the currently implemented Powell's optimization algorithm and deficiencies in the Newton-Raphson solver during engine balancing. A project was undertaken to correct these deficiencies. Nonconvergence was avoided through a cascade optimization strategy, and deficiencies associated with engine balancing were eliminated through neural network and linear regression methods. An approximation-interspersed cascade strategy was used to optimize the engine's operation over its flight envelope. Replacement of Powell's algorithm by the cascade strategy improved the optimization segment of the NEPP code. The performance of the linear regression and neural network methods as alternative engine analyzers was found to be satisfactory. This report considers two examples-a supersonic mixed-flow turbofan engine and a subsonic waverotor-topped engine-to illustrate the results, and it discusses insights gained from the improved version of the NEPP code.

  7. Perfect plastic approximation revisited: a flowline network model for calving glaciers

    NASA Astrophysics Data System (ADS)

    Ultee, E.; Bassis, J. N.

    2015-12-01

    Accurate modeling of outlet glacier dynamics requires knowledge of many factors—ice thickness, bed topography, air/ocean temperature, precipitation rate—specific to individual glaciers, and for which only limited data exists. Furthermore, key processes such as iceberg calving remain poorly understood and difficult to include in models. In light of these challenges to even the most sophisticated models, there is great value in simple, computationally efficient models that can capture first-order effects. Many of the simplest models currently in use produce glacier profiles along a central flowline, either ignoring the contribution of tributaries or relying on a measure of "equivalent width" to handle those contributions. Here, we present a simple model that generalizes Nye's 1953 perfect plastic approximation so that it also predicts the position of the glacier terminus based on the yield strength. Moreover, our model simulates not only a central flowline, but the interactions of a network of tributaries. The model requires only minimal information: glacier geometry (network structure and bed topography, available from observation for select glaciers) and basal shear strength (a reasonably-constrained parameter). We apply the model to Columbia Glacier, Alaska and show that, despite its simplicity, the model is able to reproduce observed centerline profiles and terminus retreat for the main branch as well as selected tributaries. Finally, we illustrate how our model can be applied to constrain the calving contribution of individual glaciers to 21st century sea level rise.
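
    For reference, the flat-bed form of Nye's perfect plastic approximation that this model generalizes gives the thickness profile h(x) = sqrt(2 τ_y (L - x) / (ρ g)) for yield strength τ_y and flowline length L. The short sketch below simply evaluates that classical profile with assumed values; it does not reproduce the paper's terminus criterion or tributary-network coupling.

    import numpy as np

    RHO_ICE = 917.0   # ice density, kg m^-3
    G = 9.81          # gravitational acceleration, m s^-2

    def nye_profile(x, length, tau_y=100e3):
        """Nye perfect-plastic thickness over a flat bed:
        h(x) = sqrt(2 * tau_y * (length - x) / (rho * g))."""
        x = np.asarray(x, dtype=float)
        return np.sqrt(2.0 * tau_y * np.clip(length - x, 0.0, None) / (RHO_ICE * G))

    x = np.linspace(0.0, 50e3, 201)          # hypothetical 50 km flowline
    print("thickness at the head (m):", round(float(nye_profile(0.0, 50e3)), 1))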

  8. Least squares solutions of the HJB equation with neural network value-function approximators.

    PubMed

    Tassa, Yuval; Erez, Tom

    2007-07-01

    In this paper, we present an empirical study of iterative least squares minimization of the Hamilton-Jacobi-Bellman (HJB) residual with a neural network (NN) approximation of the value function. Although the nonlinearities in the optimal control problem and NN approximator preclude theoretical guarantees and raise concerns of numerical instabilities, we present two simple methods for promoting convergence, the effectiveness of which is presented in a series of experiments. The first method involves the gradual increase of the horizon time scale, with a corresponding gradual increase in value function complexity. The second method involves the assumption of stochastic dynamics which introduces a regularizing second derivative term to the HJB equation. A gradual reduction of this term provides further stabilization of the convergence. We demonstrate the solution of several problems, including the 4-D inverted-pendulum system with bounded control. Our approach requires no initial stabilizing policy or any restrictive assumptions on the plant or cost function, only knowledge of the plant dynamics. In the Appendix, we provide the equations for first- and second-order differential backpropagation. PMID:17668659

  9. Transition modes in Ising networks: an approximate theory for macromolecular recognition.

    PubMed Central

    Keating, S; Di Cera, E

    1993-01-01

    For a statistical lattice, or Ising network, composed of N identical units existing in two possible states, 0 and 1, and interacting according to a given geometry, a set of values can be found for the mean free energy of the 0→1 transition of a single unit. Each value defines a transition mode in an ensemble of ν_N = 3^N − 2^N possible values and reflects the role played by intermediate states in shaping the energetics of the system as a whole. The distribution of transition modes has a number of intriguing properties. Some of them apply quite generally to any Ising network, regardless of its dimension, while others are specific for each interaction geometry and dimensional embedding and bear on fundamental aspects of analytical number theory. The landscape of transition modes encapsulates all of the important thermodynamic properties of the network. The free energy terms defining the partition function of the system can be derived from the modes by simple transformations. Classical mean-field expressions can be obtained from consideration of the properties of transition modes in a rather straightforward way. The results obtained in the analysis of the transition mode distributions have been used to develop an approximate treatment of the problem of macromolecular recognition. This phenomenon is modeled as a cooperative process that involves a number of recognition subsites across an interface generated by the binding of two macromolecular components. The distribution of allowed binding free energies for the system is shown to be a superposition of Gaussian terms with mean and variance determined a priori by the theory. Application to the analysis of the biologically relevant interaction of thrombin with hirudin has provided some useful information on basic aspects of the interaction, such as the number of recognition subsites involved and the energy balance for binding and cooperative coupling among them. Our results agree quite well with information derived independently

  10. Binary-State Dynamics on Complex Networks: Pair Approximation and Beyond

    NASA Astrophysics Data System (ADS)

    Gleeson, James P.

    2013-04-01

    A wide class of binary-state dynamics on networks—including, for example, the voter model, the Bass diffusion model, and threshold models—can be described in terms of transition rates (spin-flip probabilities) that depend on the number of nearest neighbors in each of the two possible states. High-accuracy approximations for the emergent dynamics of such models on uncorrelated, infinite networks are given by recently developed compartmental models or approximate master equations (AMEs). Pair approximations (PAs) and mean-field theories can be systematically derived from the AME. We show that PA and AME solutions can coincide under certain circumstances, and numerical simulations confirm that PA is highly accurate in these cases. For monotone dynamics (where transitions out of one nodal state are impossible, e.g., susceptible-infected disease spread or Bass diffusion), PA and the AME give identical results for the fraction of nodes in the infected (active) state for all time, provided that the rate of infection depends linearly on the number of infected neighbors. In the more general nonmonotone case, we derive a condition—that proves to be equivalent to a detailed balance condition on the dynamics—for PA and AME solutions to coincide in the limit t→∞. This equivalence permits bifurcation analysis, yielding explicit expressions for the critical (ferromagnetic or paramagnetic transition) point of such dynamics, that is closely analogous to the critical temperature of the Ising spin model. Finally, the AME for threshold models of propagation is shown to reduce to just two differential equations and to give excellent agreement with numerical simulations. As part of this work, the Octave or Matlab code for implementing and solving the differential-equation systems is made available for download.

  11. A Single Hidden Layer Feedforward Network with Only One Neuron in the Hidden Layer Can Approximate Any Univariate Function.

    PubMed

    Guliyev, Namig J; Ismailov, Vugar E

    2016-07-01

    The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in a hidden layer is permitted. In this note, we consider constructive approximation on any finite interval of the real axis by neural networks with only one neuron in the hidden layer. We construct algorithmically a smooth, sigmoidal, almost monotone activation function providing approximation to an arbitrary continuous function within any degree of accuracy. This algorithm is implemented in a computer program, which computes the value of this activation function at any reasonable point of the real axis. PMID:27171269

  12. Scalable learning method for feedforward neural networks using minimal-enclosing-ball approximation.

    PubMed

    Wang, Jun; Deng, Zhaohong; Luo, Xiaoqing; Jiang, Yizhang; Wang, Shitong

    2016-06-01

    Training feedforward neural networks (FNNs) is one of the most critical issues in FNN studies. However, most FNN training methods cannot be directly applied to very large datasets because they have high computational and space complexity. In order to tackle this problem, the CCMEB (Center-Constrained Minimum Enclosing Ball) problem in the hidden feature space of FNNs is discussed and a novel learning algorithm called HFSR-GCVM (hidden-feature-space regression using generalized core vector machine) is developed accordingly. In HFSR-GCVM, a novel learning criterion using an L2-norm penalty-based ε-insensitive function is formulated and the parameters in the hidden nodes are generated randomly, independently of the training set. Moreover, the learning of the parameters in its output layer is proved equivalent to a special CCMEB problem in the FNN hidden feature space. As with most CCMEB-approximation-based machine learning algorithms, the proposed HFSR-GCVM training algorithm has the following merits: the maximal training time is linear in the size of the training dataset and the maximal space consumption is independent of that size. Experiments on regression tasks confirm these conclusions. PMID:27049545

  13. Computing approximate blocking probability of inverse multiplexing and sub-band conversion in the flexible-grid optical networks

    NASA Astrophysics Data System (ADS)

    Gu, Yamei; You, Shanhong

    2016-07-01

    With the rapid growth of data rates, optical networks are evolving from fixed-grid to flexible-grid to provide spectrum-efficient and scalable transport of 100 Gb/s services and beyond. In addition, the deployment of wavelength converters in existing networks can increase the flexibility of routing and wavelength allocation (RWA) and improve the blocking performance of optical networks. In this paper, we present a methodology for computing approximate blocking probabilities for the provision of multiclass services in flexible-grid optical networks with sub-band spectrum conversion and with inverse multiplexing, respectively. Numerical results based on the model are compared to simulation results for the different cases. It is shown that the calculated results match the simulation results well for flexible-grid optical networks in different scenarios.
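
    Blocking computations of this kind build on classical loss-network formulas. As a hedged, minimal illustration (a single link with full conversion capability rather than the paper's multiclass flexible-grid model with sub-band conversion and inverse multiplexing), the Erlang-B recursion below computes the blocking probability for an assumed offered load and number of spectrum slots.

    def erlang_b(offered_load, servers):
        """Erlang-B blocking probability via the standard stable recursion:
        B(0) = 1;  B(m) = a*B(m-1) / (m + a*B(m-1))."""
        b = 1.0
        for m in range(1, servers + 1):
            b = offered_load * b / (m + offered_load * b)
        return b

    # Example: 40 erlangs offered to a link with 50 slots (illustrative values).
    print(f"blocking probability: {erlang_b(40.0, 50):.4f}")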

  14. Correlated Spatio-Temporal Data Collection in Wireless Sensor Networks Based on Low Rank Matrix Approximation and Optimized Node Sampling

    PubMed Central

    Piao, Xinglin; Hu, Yongli; Sun, Yanfeng; Yin, Baocai; Gao, Junbin

    2014-01-01

    The emerging low rank matrix approximation (LRMA) method provides an energy efficient scheme for data collection in wireless sensor networks (WSNs) by randomly sampling a subset of sensor nodes for data sensing. However, the existing LRMA based methods generally underutilize the spatial or temporal correlation of the sensing data, resulting in uneven energy consumption and thus shortening the network lifetime. In this paper, we propose a correlated spatio-temporal data collection method for WSNs based on LRMA. In the proposed method, both the temporal consistency and the spatial correlation of the sensing data are simultaneously integrated under a new LRMA model. Moreover, the network energy consumption issue is considered in the node sampling procedure. We use the Gini index to measure both the spatial distribution of the selected nodes and the evenness of the network energy status, and then formulate and solve an optimization problem to achieve optimized node sampling. The proposed method is evaluated on both simulated and real wireless networks and compared with state-of-the-art methods. The experimental results show that the proposed method efficiently reduces the energy consumption of the network and prolongs the network lifetime with high data recovery accuracy and good stability. PMID:25490583
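
    The recovery step at the heart of any LRMA-based collection scheme, reconstructing the full sensing matrix from a sampled subset of entries, can be sketched with a generic iterative truncated-SVD completion. The matrix sizes, sampling rate and target rank below are assumptions on synthetic data; the paper's correlated spatio-temporal model and Gini-index-based node sampling are not reproduced.

    import numpy as np

    def svd_impute(M, mask, rank=3, n_iter=200):
        """Iteratively (i) fill unobserved entries with the current low-rank
        estimate and (ii) truncate to the leading `rank` singular values."""
        X = np.where(mask, M, 0.0)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            X = np.where(mask, M, X_low)      # keep observed entries fixed
        return X_low

    # Synthetic spatio-temporal data: 50 nodes x 200 time steps, true rank 3,
    # with 30% of the entries observed (all values are illustrative).
    rng = np.random.default_rng(0)
    truth = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 200))
    mask = rng.random(truth.shape) < 0.3
    recovered = svd_impute(truth, mask, rank=3)
    err = np.linalg.norm(recovered - truth) / np.linalg.norm(truth)
    print(f"relative recovery error: {err:.3f}")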

  15. Characteristics of pattern formation and evolution in approximations of Physarum transport networks.

    PubMed

    Jones, Jeff

    2010-01-01

    Most studies of pattern formation place particular emphasis on its role in the development of complex multicellular body plans. In simpler organisms, however, pattern formation is intrinsic to growth and behavior. Inspired by one such organism, the true slime mold Physarum polycephalum, we present examples of complex emergent pattern formation and evolution formed by a population of simple particle-like agents. Using simple local behaviors based on chemotaxis, the mobile agent population spontaneously forms complex and dynamic transport networks. By adjusting simple model parameters, maps of characteristic patterning are obtained. Certain areas of the parameter mapping yield particularly complex long term behaviors, including the circular contraction of network lacunae and bifurcation of network paths to maintain network connectivity. We demonstrate the formation of irregular spots and labyrinthine and reticulated patterns by chemoattraction. Other Turing-like patterning schemes were obtained by using chemorepulsion behaviors, including the self-organization of regular periodic arrays of spots, and striped patterns. We show that complex pattern types can be produced without resorting to the hierarchical coupling of reaction-diffusion mechanisms. We also present network behaviors arising from simple pre-patterning cues, giving simple examples of how the emergent pattern formation processes evolve into networks with functional and quasi-physical properties including tensionlike effects, network minimization behavior, and repair to network damage. The results are interpreted in relation to classical theories of biological pattern formation in natural systems, and we suggest mechanisms by which emergent pattern formation processes may be used as a method for spatially represented unconventional computation. PMID:20067403
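
    A stripped-down version of this kind of particle-based transport-network formation fits in a few dozen lines: each agent senses a shared trail map at three offset sensors, turns toward the strongest reading, moves, and deposits trail, after which the map decays and blurs. The sensor geometry, deposition and decay values below are assumptions, and the original model's further rules (directional noise, occupancy constraints, pre-patterning stimuli) are omitted.

    import numpy as np

    def physarum_step(pos, ang, trail, sensor_angle=np.pi / 4, sensor_dist=9.0,
                      rot_angle=np.pi / 4, step=1.0, deposit=5.0, decay=0.1):
        """One sense-rotate-move-deposit-diffuse update of the agent population
        on a toroidal lattice holding the chemoattractant trail map."""
        h, w = trail.shape

        def sense(offset):
            a = ang + offset
            sx = (pos[:, 0] + sensor_dist * np.cos(a)).astype(int) % w
            sy = (pos[:, 1] + sensor_dist * np.sin(a)).astype(int) % h
            return trail[sy, sx]

        left, front, right = sense(-sensor_angle), sense(0.0), sense(sensor_angle)
        ang = np.where(left > np.maximum(front, right), ang - rot_angle, ang)
        ang = np.where(right > np.maximum(front, left), ang + rot_angle, ang)

        pos[:, 0] = (pos[:, 0] + step * np.cos(ang)) % w
        pos[:, 1] = (pos[:, 1] + step * np.sin(ang)) % h

        ix, iy = pos[:, 0].astype(int), pos[:, 1].astype(int)
        np.add.at(trail, (iy, ix), deposit)          # chemoattractant deposition
        trail *= (1.0 - decay)                       # decay
        trail = (trail + np.roll(trail, 1, 0) + np.roll(trail, -1, 0)
                 + np.roll(trail, 1, 1) + np.roll(trail, -1, 1)) / 5.0  # blur
        return pos, ang, trail

    # Hypothetical run: 5000 agents on a 200 x 200 lattice.
    rng = np.random.default_rng(1)
    pos = rng.random((5000, 2)) * 200.0
    ang = rng.random(5000) * 2.0 * np.pi
    trail = np.zeros((200, 200))
    for _ in range(200):
        pos, ang, trail = physarum_step(pos, ang, trail)
    print("cells above mean trail density:", int((trail > trail.mean()).sum()))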

  16. Communication: Limitations of the stochastic quasi-steady-state approximation in open biochemical reaction networks

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Straube, Arthur V.; Grima, Ramon

    2011-11-01

    It is commonly believed that, whenever timescale separation holds, the predictions of reduced chemical master equations obtained using the stochastic quasi-steady-state approximation are in very good agreement with the predictions of the full master equations. We use the linear noise approximation to obtain a simple formula for the relative error between the predictions of the two master equations for the Michaelis-Menten reaction with substrate input. The reduced approach is predicted to overestimate the variance of the substrate concentration fluctuations by as much as 30%. The theoretical results are validated by stochastic simulations using experimental parameter values for enzymes involved in proteolysis, gluconeogenesis, and fermentation.
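
    The comparison behind this result can be reproduced in outline with two Gillespie simulations: the full Michaelis-Menten scheme with substrate input, and the reduced description in which the enzymatic steps are replaced by a single Michaelis-Menten removal propensity. The rate constants below are illustrative assumptions (the 30% figure in the abstract refers to specific experimental enzyme parameters); the sketch only estimates the stationary substrate mean and variance under each description.

    import numpy as np

    def ssa_timeavg(x0, stoich, prop, t_end=500.0, burn=50.0, seed=0):
        """Gillespie SSA returning time-averaged mean and variance of species 0."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        t = s0 = s1 = s2 = 0.0
        while t < t_end:
            a = prop(x)
            a_tot = a.sum()
            dt = rng.exponential(1.0 / a_tot)
            if t > burn:
                w = min(dt, t_end - t)
                s0 += w; s1 += w * x[0]; s2 += w * x[0] ** 2
            t += dt
            x = x + stoich[rng.choice(len(a), p=a / a_tot)]
        mean = s1 / s0
        return mean, s2 / s0 - mean ** 2

    # Assumed rate constants (molecule-number units).
    k_in, k1, km1, k2, E_T = 30.0, 1.0, 10.0, 20.0, 10

    # Full model: x = (S, E, C); reactions: input, bind, unbind, catalyse.
    stoich_full = np.array([[1, 0, 0], [-1, -1, 1], [1, 1, -1], [0, 1, -1]])
    prop_full = lambda x: np.array([k_in, k1 * x[0] * x[1], km1 * x[2], k2 * x[2]])

    # Reduced (QSSA) model: x = (S,); input and Michaelis-Menten removal.
    K_M = (km1 + k2) / k1
    stoich_red = np.array([[1], [-1]])
    prop_red = lambda x: np.array([k_in, k2 * E_T * x[0] / (K_M + x[0])])

    m_f, v_f = ssa_timeavg([0, E_T, 0], stoich_full, prop_full)
    m_r, v_r = ssa_timeavg([0], stoich_red, prop_red)
    print(f"full model:    mean S = {m_f:.1f}, var S = {v_f:.1f}")
    print(f"reduced model: mean S = {m_r:.1f}, var S = {v_r:.1f}")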

  17. Communication: limitations of the stochastic quasi-steady-state approximation in open biochemical reaction networks.

    PubMed

    Thomas, Philipp; Straube, Arthur V; Grima, Ramon

    2011-11-14

    It is commonly believed that, whenever timescale separation holds, the predictions of reduced chemical master equations obtained using the stochastic quasi-steady-state approximation are in very good agreement with the predictions of the full master equations. We use the linear noise approximation to obtain a simple formula for the relative error between the predictions of the two master equations for the Michaelis-Menten reaction with substrate input. The reduced approach is predicted to overestimate the variance of the substrate concentration fluctuations by as much as 30%. The theoretical results are validated by stochastic simulations using experimental parameter values for enzymes involved in proteolysis, gluconeogenesis, and fermentation. PMID:22088045

  18. Approximate optimal control design for nonlinear one-dimensional parabolic PDE systems using empirical eigenfunctions and neural network.

    PubMed

    Luo, Biao; Wu, Huai-Ning

    2012-12-01

    This paper addresses the approximate optimal control problem for a class of parabolic partial differential equation (PDE) systems with nonlinear spatial differential operators. An approximate optimal control design method is proposed on the basis of the empirical eigenfunctions (EEFs) and neural network (NN). First, based on the data collected from the PDE system, the Karhunen-Loève decomposition is used to compute the EEFs. With those EEFs, the PDE system is formulated as a high-order ordinary differential equation (ODE) system. To further reduce its dimension, the singular perturbation (SP) technique is employed to derive a reduced-order model (ROM), which can accurately describe the dominant dynamics of the PDE system. Second, the Hamilton-Jacobi-Bellman (HJB) method is applied to synthesize an optimal controller based on the ROM, where the closed-loop asymptotic stability of the high-order ODE system can be guaranteed by the SP theory. By dividing the optimal control law into two parts, the linear part is obtained by solving an algebraic Riccati equation, and a new type of HJB-like equation is derived for designing the nonlinear part. Third, a control update strategy based on successive approximation is proposed to solve the HJB-like equation, and its convergence is proved. Furthermore, an NN approach is used to approximate the cost function. Finally, we apply the developed approximate optimal control method to a diffusion-reaction process with a nonlinear spatial operator, and the simulation results illustrate its effectiveness. PMID:22588610
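
    The first stage of such a design, extracting empirical eigenfunctions from PDE snapshot data via the Karhunen-Loève (proper orthogonal) decomposition, amounts to an SVD of mean-subtracted snapshots. The field, grid and mode count below are illustrative assumptions; the later stages (singular-perturbation model reduction, the HJB-like equation and the NN cost approximation) are not reproduced.

    import numpy as np

    def empirical_eigenfunctions(snapshots, n_modes=3):
        """Karhunen-Loeve/POD: given snapshots (n_space x n_time) of a PDE field,
        return the leading spatial modes, their time coefficients and the
        fraction of fluctuation energy they capture."""
        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
        modes = U[:, :n_modes]                     # empirical eigenfunctions
        coeffs = modes.T @ (snapshots - mean)      # reduced (ODE) coordinates
        energy = (s[:n_modes] ** 2).sum() / (s ** 2).sum()
        return modes, coeffs, energy

    # Synthetic 1-D diffusion-reaction-like snapshots (illustrative only).
    x = np.linspace(0, 1, 100)[:, None]
    t = np.linspace(0, 2, 80)[None, :]
    field = np.sin(np.pi * x) * np.exp(-t) + 0.3 * np.sin(3 * np.pi * x) * np.cos(4 * t)
    modes, coeffs, energy = empirical_eigenfunctions(field, n_modes=2)
    print(f"energy captured by 2 modes: {energy:.4f}")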

  19. Phase patterns in finite oscillator networks with insights from the piecewise linear approximation

    NASA Astrophysics Data System (ADS)

    Goldstein, Daniel

    2015-03-01

    Recent experiments on spatially extended arrays of droplets containing Belousov-Zhabotinsky reactants have shown a rich variety of spatio-temporal patterns. Motivated by this experimental setup, we study a simple model of chemical oscillators in the highly nonlinear excitable regime in order to gain insight into the mechanism giving rise to the observed multistable attractors. When coupled, these two attractors have different preferred phase synchronizations, leading to complex behavior. We study rings of coupled oscillators and observe a rich array of oscillating patterns. We combine Turing analysis and a piecewise linear approximation to better understand the observed patterns.

  20. A high order approximation of hyperbolic conservation laws in networks: Application to one-dimensional blood flow

    NASA Astrophysics Data System (ADS)

    Müller, Lucas O.; Blanco, Pablo J.

    2015-11-01

    We present a methodology for the high order approximation of hyperbolic conservation laws in networks by using the Dumbser-Enaux-Toro solver and exact solvers for the classical Riemann problem at junctions. The proposed strategy can be applied to any hyperbolic system, conservative or non-conservative, and possibly with flux functions containing discontinuous parameters, as long as an exact or approximate Riemann problem solver is available. The methodology is implemented for a one-dimensional blood flow model that considers discontinuous variations of the mechanical and geometrical properties of vessels. The achievement of formal order of accuracy, as well as the robustness of the resulting numerical scheme, is verified through the simulation of both academic tests and physiological flows.

  1. Approximate formula and bounds for the time-varying susceptible-infected-susceptible prevalence in networks

    NASA Astrophysics Data System (ADS)

    Van Mieghem, P.

    2016-05-01

    Based on a recent exact differential equation, the time dependence of the SIS prevalence, the average fraction of infected nodes, in any graph is first studied and then upper and lower bounded by an explicit analytic function of time. That new approximate "tanh formula" obeys a Riccati differential equation and bears resemblance to the classical expression in epidemiology of Kermack and McKendrick [Proc. R. Soc. London A 115, 700 (1927), 10.1098/rspa.1927.0118] but enhanced with graph specific properties, such as the algebraic connectivity, the second smallest eigenvalue of the Laplacian of the graph. We further revisit the challenge of finding tight upper bounds for the SIS (and SIR) epidemic threshold for all graphs. We propose two new upper bounds and show the importance of the variance of the number of infected nodes. Finally, a formula for the epidemic threshold in the cycle (or ring graph) is presented.
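
    The "tanh" character of such prevalence approximations can be made concrete with a generic Riccati/logistic example: dy/dt = r y (1 - y/y_inf) has the closed form y(t) = (y_inf/2)[1 + tanh(r(t - t0)/2)]. The sketch below checks this form against direct numerical integration; the rates are assumed values and the expression is only a shape illustration, not the paper's graph-specific formula involving, e.g., the algebraic connectivity.

    import numpy as np

    def logistic_prevalence(t, y0, r, y_inf):
        """Closed-form solution of the Riccati/logistic equation
        dy/dt = r*y*(1 - y/y_inf), written in its equivalent tanh form."""
        t0 = np.log((y_inf - y0) / y0) / r        # time at which y = y_inf / 2
        return 0.5 * y_inf * (1.0 + np.tanh(0.5 * r * (t - t0)))

    # Check against explicit Euler integration of the same ODE (assumed rates).
    r, y_inf, y0 = 0.8, 0.6, 0.01
    t = np.linspace(0.0, 20.0, 2001)
    y_num = np.empty_like(t); y_num[0] = y0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        y_num[i] = y_num[i - 1] + dt * r * y_num[i - 1] * (1 - y_num[i - 1] / y_inf)
    diff = np.abs(logistic_prevalence(t, y0, r, y_inf) - y_num).max()
    print("max |closed form - numerical|:", float(diff))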

  2. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space searching capabilities of genetic algorithms they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
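
    The selection loop itself is easy to sketch: binary chromosomes encode which candidate inputs are used, and each chromosome is scored by how well a model built on those inputs fits the data. For self-containment the sketch below scores masks with an ordinary least-squares fit on synthetic data rather than a neural network trained on SSME measurements, and the population size, mutation rate and complexity penalty are assumed values.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(mask, X, y):
        """Score an input subset by the residual of a least-squares fit,
        minus an (assumed) penalty that discourages large input sets."""
        if not mask.any():
            return -np.inf
        A = np.column_stack([X[:, mask], np.ones(len(y))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((A @ coef - y) ** 2)
        return -rss - 0.05 * mask.sum()

    def ga_select(X, y, pop=40, gens=60, p_mut=0.05):
        """Genetic-algorithm input selection: tournament selection,
        uniform crossover and bit-flip mutation over binary masks."""
        n_in = X.shape[1]
        population = rng.random((pop, n_in)) < 0.5
        for _ in range(gens):
            scores = np.array([fitness(ind, X, y) for ind in population])
            new_pop = []
            for _ in range(pop):
                i, j = rng.integers(pop, size=2)        # tournament of two
                a = population[i] if scores[i] > scores[j] else population[j]
                k, l = rng.integers(pop, size=2)
                b = population[k] if scores[k] > scores[l] else population[l]
                child = np.where(rng.random(n_in) < 0.5, a, b)   # crossover
                child ^= rng.random(n_in) < p_mut                # mutation
                new_pop.append(child)
            population = np.array(new_pop)
        scores = np.array([fitness(ind, X, y) for ind in population])
        return population[scores.argmax()]

    # Synthetic data: 20 candidate inputs, only three actually drive the target.
    X = rng.normal(size=(300, 20))
    y = 2 * X[:, 1] - X[:, 4] + 0.5 * X[:, 7] + 0.1 * rng.normal(size=300)
    print("selected inputs:", np.flatnonzero(ga_select(X, y)))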

  3. Moving Objects Trajectory Prediction Based on Artificial Neural Network Approximator by Considering Instantaneous Reaction Time, Case Study: Car Following

    NASA Astrophysics Data System (ADS)

    Poor Arab Moghadam, M.; Pahlavani, P.

    2015-12-01

    Car following models as well-known moving objects trajectory problems have been used for more than half a century in all traffic simulation software for describing driving behaviour in traffic flows. However, previous empirical studies and modeling about car following behavior had some important limitations. One of the main and clear defects of the introduced models was the very large number of parameters that made their calibration very time-consuming and costly. Also, any change in these parameters, even slight ones, severely disrupted the output. In this study, an artificial neural network approximator was used to introduce a trajectory model for vehicle movements. In this regard, the Levenberg-Marquardt back propagation function and the hyperbolic tangent sigmoid function were employed as the training and the transfer functions, respectively. One of the important aspects in identifying driver behavior is the reaction time. This parameter shows the period between the time the driver recognizes a stimulus and the time a suitable response is shown to that stimulus. In this paper, the actual data on car following from the NGSIM project was used to determine the performance of the proposed model. This dataset was used for the purpose of expanding behavioral algorithm in micro simulation. Sixty percent of the data was entered into the designed artificial neural network approximator as the training data, twenty percent as the testing data, and twenty percent as the evaluation data. A statistical and a micro simulation method were employed to show the accuracy of the proposed model. Moreover, the two popular Gipps and Helly models were implemented. Finally, it was shown that the accuracy of the proposed model was much higher - and its computational costs were lower - than those of other models when calibration operations were not performed on these models. Therefore, the proposed model can be used for displaying and predicting trajectories of moving objects being

  4. Metallopeptide Based Mimics with Substituted Histidines Approximate a Key Hydrogen Bonding Network in the Metalloenzyme Nickel Superoxide Dismutase

    SciTech Connect

    Shearer, J.; Neupane, K; Callan, P

    2009-01-01

    Nickel superoxide dismutase (NiSOD) is a recently discovered superoxide dismutase that utilizes the Ni(III)/Ni(II) couple to facilitate the disproportionation of O2•- into H2O2 and O2. A key structural component of NiSOD is an elongated axial His-imidazole Ni(III) bond (2.3-2.6 Å) that is the result of an H-bonding network between His(1), Glu(17), and Arg(47). Herein we utilize metallopeptide based mimics of NiSOD with His(1) ε-nitrogen substituted imidazoles to approximate the electronic influence of this H-bonding network ({Ni(III/II)(SODM1-Im-X)}; X = Me, H, DNP, and Tos; SODM1-Im-X = H'CDLPCGVYDPA where H' is an N-substituted His). All reduced {Ni(II)(SODM1-Im-X)} are similar to one another as assessed by electronic absorption spectroscopy, circular dichroism (CD) spectroscopy, and Ni K-edge X-ray absorption spectroscopy (XAS). This indicates that the change in His(1) has little influence on the square-planar Ni(II)N2S2 center. In contrast, changes to the axial His(1) ligand impart differential spectroscopic properties on the oxidized {Ni(III)(SODM1-Im-X)} metallopeptides. Resonance Raman spectroscopy (405 nm excitation) in conjunction with a normal coordinate analysis indicates that as the axial His imidazole is made less Lewis basic there is an increase in Ni(III)-S bond strength in the equatorial plane, with force constants for the Ni-S bond trans to the amine ranging from 1.54 to 1.70 mdyn Å-1. The rhombic electron paramagnetic resonance (EPR) spectra of the four oxidized metallopeptides are all consistent with low-spin Ni(III) contained in a square pyramidal coordination environment, but show changes in the hyperfine coupling to 14N along gz. This is attributable to a reorientation of the gz vector in the more (along the Ni(III

  5. Temporal relaxation of electrons in multi-term approximation

    SciTech Connect

    Loffhagen, D.; Winkler, R.

    1995-12-31

    The study of the temporal relaxation of energetic electrons in weakly ionized, spatially homogeneous, collision dominated plasmas under the action of an electric field constitutes a topic of widespread interest (e.g. problems of plasma light sources, gas laser physics, swarm techniques, after-glow decay). Starting point for the electron kinetic investigations is the nonstationary Boltzmann equation. When choosing a fixed direction of the electric field, usually the solution of this electron kinetic equation is based on the Legendre polynomial expansion of the velocity distribution function leading to a hierarchy of partial differential equations. Conventionally this expansion is truncated after two terms (two-term approximation of the velocity distribution) and a quasi-stationary treatment of the distribution anisotropy is adopted. These two approximations are almost generally used in investigations of the temporal relaxation of electrons in collision dominated, weakly ionized plasmas. However, this approach is incorrect in several cases of practical interest. Based upon recent studies of the electron relaxation a new and very efficient technique for the solution of the electron Boltzmann equation in strict nonstationary multi-term approximation has been developed. First results on the electron relaxation in a time-independent electric field for a model gas plasma using this new approach have already been presented in. This paper reports results for the temporal relaxation of electrons in various real inert and molecular gas plasmas.

  6. Development of generalized potential-energy surfaces using many-body expansions, neural networks, and moiety energy approximations

    NASA Astrophysics Data System (ADS)

    Malshe, M.; Narulkar, R.; Raff, L. M.; Hagan, M.; Bukkapatnam, S.; Agrawal, P. M.; Komanduri, R.

    2009-05-01

    A general method for the development of potential-energy hypersurfaces is presented. The method combines a many-body expansion to represent the potential-energy surface with two-layer neural networks (NN) for each M-body term in the summations. The total number of NNs required is significantly reduced by employing a moiety energy approximation. An algorithm is presented that efficiently adjusts all the coupled NN parameters to the database for the surface. Application of the method to four different systems of increasing complexity shows that the fitting accuracy of the method is good to excellent. For some cases, it exceeds that available by other methods currently in literature. The method is illustrated by fitting large databases of ab initio energies for Si_n (n = 3,4,…,7) clusters obtained from density functional theory calculations and for vinyl bromide (C2H3Br) and all products for dissociation into six open reaction channels (12 if the reverse reactions are counted as separate open channels) that include C-H and C-Br bond scissions, three-center HBr dissociation, and three-center H2 dissociation. The vinyl bromide database comprises the ab initio energies of 71 969 configurations computed at MP4(SDQ) level with a 6-31G(d,p) basis set for the carbon and hydrogen atoms and Huzinaga's (4333/433/4) basis set augmented with split outer s and p orbitals (43321/4321/4) and a polarization f orbital with an exponent of 0.5 for the bromine atom. It is found that an expansion truncated after the three-body terms is sufficient to fit the Si_5 system with a mean absolute testing set error of 5.693×10⁻⁴ eV. Expansions truncated after the four-body terms for Si_n (n = 3,4,5) and Si_n (n = 3,4,…,7) provide fits whose mean absolute testing set errors are 0.0056 and 0.0212 eV, respectively. For vinyl bromide, a many-body expansion truncated after the four-body terms provides fitting accuracy with mean absolute testing set errors that range between 0.0782 and 0.0808 eV. These

  7. The theory of pattern formation on directed networks

    NASA Astrophysics Data System (ADS)

    Asllani, Malbor; Challenger, Joseph D.; Pavone, Francesco Saverio; Sacconi, Leonardo; Fanelli, Duccio

    2014-07-01

    Dynamical processes on networks have generated widespread interest in recent years. The theory of pattern formation in reaction-diffusion systems defined on symmetric networks has often been investigated, due to its applications in a wide range of disciplines. Here we extend the theory to the case of directed networks, which are found in a number of different fields, such as neuroscience, computer networks and traffic systems. Owing to the structure of the network Laplacian, the dispersion relation has both real and imaginary parts, at variance with the case for a symmetric, undirected network. The homogeneous fixed point can become unstable due to the topology of the network, resulting in a new class of instabilities, which cannot be induced on undirected graphs. Results from a linear stability analysis allow the instability region to be analytically traced. Numerical simulations show travelling waves, or quasi-stationary patterns, depending on the characteristics of the underlying graph.
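
    The key quantity in this analysis, the dispersion relation evaluated over the (generally complex) spectrum of the directed network Laplacian, can be sketched in a few lines. The adjacency model, the Brusselator-style Jacobian and the diffusivities below are assumed placeholder values; the sketch only shows how a complex Laplacian spectrum feeds into the per-mode growth rates of a linear stability analysis.

    import numpy as np

    rng = np.random.default_rng(2)

    # Directed Erdos-Renyi-style adjacency (assume A[i, j] = 1 means j -> i).
    N, p = 100, 0.06
    A = (rng.random((N, N)) < p).astype(float)
    np.fill_diagonal(A, 0.0)
    L = A - np.diag(A.sum(axis=1))     # network Laplacian (in-degree on the diagonal)

    # Homogeneous fixed point of a Brusselator-like model: Jacobian and diffusivities.
    b = 1.8
    J = np.array([[b - 1.0, 1.0], [-b, -1.0]])
    D = np.diag([0.07, 1.0])

    lam = np.linalg.eigvals(L)         # complex spectrum of the directed Laplacian
    growth = np.array([np.linalg.eigvals(J + l * D).real.max() for l in lam])
    print("largest |Im| of Laplacian eigenvalues:", float(np.abs(lam.imag).max()))
    print("max Re of per-mode growth rates:      ", float(growth.max()))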

  8. Quasi-Stationary Global Auroral Ionospheric Model: E-layer

    NASA Astrophysics Data System (ADS)

    Nikolaeva, Vera; Gordeev, Evgeny; Kotikov, Andrey; Makarova, Ludmila; Shirochkov, Aleksander

    2014-05-01

    The E-layer Auroral Ionospheric Model (E-AIM) is developed to provide the temporal and spatial density distribution of the main ionospheric neutral species (NO, N(4S), N(2D)) and ions (N2+, NO+, O2+, O+) in the altitude range from 90 to 150 km. The NRLMSISE-00 model [Picone et al., JGR 2003] is used to determine the neutral atmosphere composition and temperature, which serve as input to the E-AIM model. E-AIM is based on the chemical equilibrium state of the E-layer, which is reached through chemical reactions between ionospheric species driven by solar radiation ionization, superposed with sporadic precipitation of magnetospheric electrons. The chemical equilibrium state at each location, under specific solar and geomagnetic activity conditions, is obtained by numerically solving the continuity equations for the neutrals and ions using the high-performance Gear method [Gear, 1971] for ordinary differential equation (ODE) systems. Applying the Gear method to this stiff ODE system strongly reduces the computation time and machine resources compared with widely used methods and makes it possible to calculate the global spatial distribution of the E-layer ion content. In contrast to the mid-latitude ionosphere, the structure and dynamics of the auroral zone ionosphere (φ ≡ 60-75° MLat) are associated not only with shortwave solar radiation. Precipitating magnetospheric particle flux is the most important ionization source and is the main cause of E-layer disturbances. Precipitating electrons with initial energies of 1-30 keV influence the auroral ionosphere E-layer. The E-AIM model can estimate the ionization rate corresponding to auroral electron precipitation in two different ways: 1. with direct electron flux satellite data; 2. with a differential energy spectrum reconstructed from OVATION-Prime empirical model [Newell, JGR 2009] average values, which allows the ionospheric ion content to be estimated for any time and location in the auroral zone. Comparison of E-AIM results with direct ionospheric observations (ionosonde, incoherent scatter radar) shows good agreement in the vertical distribution of electron concentration.

  9. Dynamics of quasi-stationary systems: Finance as an example

    NASA Astrophysics Data System (ADS)

    Rinn, Philip; Stepanov, Yuriy; Peinke, Joachim; Guhr, Thomas; Schäfer, Rudi

    2015-06-01

    We propose a combination of cluster analysis and stochastic process analysis to characterize high-dimensional complex dynamical systems by few dominating variables. As an example, stock market data are analyzed, for which the dynamical stability as well as transitions between different stable states are found. In particular, this combined method allows new criteria to be set up for merging clusters in order to uncover dynamically distinct states. The low-dimensional approach makes it possible to recover the high-dimensional fixed points of the system by means of an optimization procedure.

  10. Progress on the self-service kiosk for testing the UV protection on sunglasses: polynomial and neural network approximation for calculating light transmittance

    NASA Astrophysics Data System (ADS)

    Mello, M. M.; Ventura, L.

    2015-03-01

    A method using different light sources and sensors has already been used to approximate the weighting functions needed to calculate light transmittance in sunglasses. Although it made possible a low-cost device that informs the user about their sunglasses, each transmittance test is still dependent on its components. We tested two methods, using polynomial approximation and an artificial neural network, that would open the possibility of using a fixed light source and sensor for all light transmittance tests from the standard. Spectrophotometry, visible transmittance and traffic light transmittance were calculated for 45 sunglass lenses, used as samples for testing the methodologies. The tests included a white LED, an RGB sensor, and electronics for control and signal acquisition. The Bland-Altman analysis tool was used to calculate the agreement between each method and the transmittances calculated with the spectrophotometer. Both methods had an approximation within the deviation limit required by NBR 15111. The system with the polynomial regression showed lower deviations than the artificial neural network. A larger number of samples can improve the methods in order to obtain an optimal calibration that includes all sunglasses. No meter on the market can accurately calculate all the light transmittance measurements required for sunglasses. The methodology was applied only to visible light, while the UV and infrared spectra remain to be tested. The methodology tested presents a way towards simple, low-cost equipment for all light transmittance tests on sunglasses.

  11. Mixed evolutionary strategies imply coexisting opinions on networks

    NASA Astrophysics Data System (ADS)

    Cao, Lang; Li, Xiang

    2008-01-01

    An evolutionary battle-of-the-sexes game is proposed to model opinion formation on networks. The individuals of a network are partitioned into different classes according to their unaltered opinion preferences, and their factual opinions are considered as the evolutionary strategies, which are updated with the birth-death or death-birth rules to imitate the process of opinion formation. The individuals finally reach a consensus in the dominant opinion or fall into (quasi)stationary fractions of coexisting mixed opinions, presenting a phase transition at the critical modularity of the multiclass individuals' partitions on networks. The stability analysis of the coexistence of mixed strategies among multiclass individuals is given, and the analytical predictions agree well with the numerical simulations, indicating that the individuals of a community (or modular) structured network are prone to form coexisting opinions, and that the coexistence of mixed evolutionary strategies implies the modularity of networks.

  12. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    PubMed

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. PMID:26561777
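
    The core of the ABC approach can be sketched with its simplest (rejection) variant, which ABC-SMC refines by propagating weighted particles through a sequence of shrinking tolerances. For self-containment the sketch replaces the glucose-metabolism ODE network with a toy logistic growth model; the priors, tolerance and noise level are assumed values.

    import numpy as np

    rng = np.random.default_rng(3)

    def simulate(mu, K, t):
        """Euler integration of a toy model dX/dt = mu*X*(1 - X/K), standing in
        for the (much larger) metabolic ODE network."""
        x = np.empty_like(t); x[0] = 0.05
        for i in range(1, len(t)):
            x[i] = x[i - 1] + (t[i] - t[i - 1]) * mu * x[i - 1] * (1 - x[i - 1] / K)
        return x

    # Synthetic "experimental" data generated with known parameters plus noise.
    t_obs = np.linspace(0.0, 10.0, 25)
    data = simulate(0.9, 2.0, t_obs) + 0.05 * rng.normal(size=t_obs.size)

    # Rejection ABC: draw from uniform priors, keep draws whose simulated
    # trajectories lie within tolerance eps of the data.
    n_draws, eps = 20000, 0.5
    mu_s = rng.uniform(0.1, 3.0, n_draws)
    K_s = rng.uniform(0.5, 5.0, n_draws)
    dist = np.array([np.linalg.norm(simulate(m, k, t_obs) - data)
                     for m, k in zip(mu_s, K_s)])
    accepted = dist < eps
    print(f"acceptance rate: {accepted.mean():.3f}")
    print(f"posterior means: mu = {mu_s[accepted].mean():.2f}, K = {K_s[accepted].mean():.2f}")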

  13. Networks based on collisions among mobile agents

    NASA Astrophysics Data System (ADS)

    González, Marta C.; Lind, Pedro G.; Herrmann, Hans J.

    2006-12-01

    We investigate in detail a recent model of colliding mobile agents [M.C. González, P.G. Lind, H.J. Herrmann, Phys. Rev. Lett. 96 (2006) 088702. cond-mat/0602091], used as an alternative approach for constructing evolving networks of interactions formed by collisions governed by suitable dynamical rules. The system of mobile agents evolves towards a quasi-stationary state which is, apart from small fluctuations, well characterized by the density of the system and the residence time of the agents. The residence time defines a collision rate, and by varying this collision rate, the system percolates at a critical value, with the emergence of a giant cluster whose critical exponents are the ones of two-dimensional percolation. Further, the degree and clustering coefficient distributions, and the average path length, show that the network associated with such a system presents non-trivial features which, depending on the collision rules, enables one not only to recover the main properties of standard networks, such as exponential, random and scale-free networks, but also to obtain other topological structures. To illustrate, we show a specific example where the obtained structure has topological features which characterize the structure and evolution of social networks accurately in different contexts, ranging from networks of acquaintances to networks of sexual contacts.

  14. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C. |; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E. |

    1997-12-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  15. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1998-06-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  16. Noctilucent clouds: modern ground-based photographic observations by a digital camera network.

    PubMed

    Dubietis, Audrius; Dalin, Peter; Balčiūnas, Ričardas; Černis, Kazimieras; Pertsev, Nikolay; Sukhodoev, Vladimir; Perminov, Vladimir; Zalcik, Mark; Zadorozhny, Alexander; Connors, Martin; Schofield, Ian; McEwan, Tom; McEachran, Iain; Frandsen, Soeren; Hansen, Ole; Andersen, Holger; Grønne, Jesper; Melnikov, Dmitry; Manevich, Alexander; Romejko, Vitaly

    2011-10-01

    Noctilucent, or "night-shining," clouds (NLCs) are a spectacular optical nighttime phenomenon that is very often neglected in the context of atmospheric optics. This paper gives a brief overview of current understanding of NLCs by providing a simple physical picture of their formation, relevant observational characteristics, and scientific challenges of NLC research. Modern ground-based photographic NLC observations, carried out in the framework of automated digital camera networks around the globe, are outlined. In particular, the obtained results refer to studies of single quasi-stationary waves in the NLC field. These waves exhibit specific propagation properties--high localization, robustness, and long lifetime--that are the essential requisites of solitary waves. PMID:22016249

  17. Networks.

    ERIC Educational Resources Information Center

    Maughan, George R.; Petitto, Karen R.; McLaughlin, Don

    2001-01-01

    Describes the connectivity features and options of modern campus communication and information system networks, including signal transmission (wire-based and wireless), signal switching, convergence of networks, and network assessment variables, to enable campus leaders to make sound future-oriented decisions. (EV)

  18. Approximate flavor symmetries

    SciTech Connect

    Rasin, A.

    1994-04-01

    We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.

  19. Networking.

    ERIC Educational Resources Information Center

    Duvall, Betty

    Networking is an information giving and receiving system, a support system, and a means whereby women can get ahead in careers--either in new jobs or in current positions. Networking information can create many opportunities: women can talk about how other women handle situations and tasks, and previously established contacts can be used in…

  20. Quasi-stationary North Equatorial Undercurrent jets across the tropical North Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Qiu, Bo; Rudnick, Daniel L.; Chen, Shuiming; Kashino, Yuji

    2013-05-01

    Subthermocline circulation in the tropical North Pacific Ocean (2°N-30°N) is investigated using profiling float temperature-salinity data from the International Argo and the Origins of the Kuroshio and Mindanao Current (OKMC) projects. Three well-defined eastward jets are detected beneath the wind-driven, westward flowing North Equatorial Current. Dubbed the North Equatorial Undercurrent (NEUC) jets, these subthermocline jets have a typical core velocity of 2-5 cm s⁻¹ and are spatially coherent from the western boundary to about 120°W across the North Pacific basin. Centered around 9°N, 13°N, and 18°N in the western basin, the NEUC jet cores tend to migrate northward by ˜4° in the eastern basin. Vertically, the cores of the southern, central, and northern NEUC jets reside on the 26.9, 27.2, and 27.3 σθ surfaces, respectively, and they tend to shoal to lighter density surfaces, by about 0.2 σθ, as the jets progress eastward.

  1. Evidence of nonlinear interaction between quasi 2 day wave and quasi-stationary wave

    NASA Astrophysics Data System (ADS)

    Gu, Sheng-Yang; Liu, Han-Li; Li, Tao; Dou, Xiankang; Wu, Qian; Russell, James M.

    2015-02-01

    The nonlinear interaction between the westward quasi 2 day wave (QTDW) with zonal wave number s = 3 (W3) and stationary planetary wave with s = 1 (SPW1) is first investigated using both Thermosphere, Ionosphere, and Mesosphere Electric Dynamics (TIMED) satellite observations and the thermosphere-ionosphere-mesosphere electrodynamics general circulation model (TIME-GCM) simulations. A QTDW with westward s = 2 (W2) is identified in the mesosphere and lower thermosphere (MLT) region in TIMED/Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) temperature and TIMED/TIMED Doppler Imager (TIDI) wind observations during 2011/2012 austral summer period, which coincides with a strong SPW1 episode at high latitude of the northern winter hemisphere. The temperature perturbation of W2 QTDW reaches a maximum amplitude of ~8 K at ~30°S and ~88 km in the Southern Hemisphere, with a smaller amplitude in the Northern Hemisphere at similar latitude and minimum amplitude at the equator. The maximum meridional wind amplitude of the W2 QTDW is observed to be ~40 m/s at 95 km in the equatorial region. The TIME-GCM is utilized to simulate the nonlinear interactions between W3 QTDW and SPW1 by specifying both W3 QTDW and SPW1 perturbations at the lower model boundary. The model results show a clear W2 QTDW signature in the MLT region, which agrees well with the TIMED/SABER temperature and TIMED/TIDI horizontal wind observations. We conclude that the W2 QTDW during the 2011/2012 austral summer period results from the nonlinear interaction between W3 QTDW and SPW1.

  2. Ocean tides and quasi-stationary departures from the marine geoid investigation

    NASA Technical Reports Server (NTRS)

    Siry, J. W.; Kahn, W. D.; Bryan, J. W.; Vonbun, F. O.

    1973-01-01

    The detection of tides and/or currents through the analysis of data generated in connection with the Ocean Geoid Determination Investigation is presented. A discussion of the detailed objectives and approach are included.

  3. Characterisation of quasi-stationary planetary waves in the Northern MLT during summer

    NASA Astrophysics Data System (ADS)

    Stray, Nora H.; Espy, Patrick J.; Limpasuvan, Varavut; Hibbins, Robert E.

    2015-05-01

    Observations of planetary wave (PW) activity in the northern hemisphere, polar summer mesosphere and lower thermosphere (MLT) are presented. Meteor winds from a northern hemisphere chain of SuperDARN radars have been used to monitor the meridional wind along a latitude band (51-66°N) in the MLT. A stationary PW-like longitudinal structure with a strong zonal PW number 1 characteristic is persistently observed year-to-year during summer. Here we characterize the amplitude and the phase structure of this wave in the MLT. The Modern-Era Retrospective Analysis for Research and Application (MERRA) of the NASA Global Modelling and Assimilation Office has been used to evaluate possible sources of the observed longitudinal perturbation in the mesospheric meridional wind by investigating the amplitudes and phases of PWs in the underlying atmosphere. The investigation shows that neither gravity wave modulation by lower atmospheric PWs nor direct propagation of PWs from the lower atmosphere are a significant cause of the observed longitudinal perturbation. However, the data are not of sufficient scope to investigate longitudinal differences in gravity wave sources, or to separate the effects of instabilities and inter-hemispheric propagation as possible causes for the large PW present in the summer MLT.

  4. On the formation of a quasi-stationary twisted disc after a tidal disruption event

    NASA Astrophysics Data System (ADS)

    Xiang-Gruess, M.; Ivanov, P. B.; Papaloizou, J. C. B.

    2016-08-01

    We investigate misaligned accretion discs formed after tidal disruption events that occur when a star encounters a supermassive black hole. We employ the linear theory of warped accretion discs to find the shape of a disc for which the stream arising from the disrupted star provides a source of angular momentum that is misaligned with that of the black hole. For quasi-steady configurations we find that when the warp diffusion or propagation time is large compared to the local mass accretion time and/or the natural disc alignment radius is small, misalignment is favoured. These results have been verified using SPH simulations. We also simulated 1D model discs including gas and radiation pressure. As accretion rates initially exceed the Eddington limit, the disc is initially advection dominated. Assuming the α model for the disc, where it can be thermally unstable it subsequently undergoes cyclic transitions between high and low states. During these transitions the aspect ratio varies from ˜1 to ˜10⁻³, which is reflected in changes in the degree of disc misalignment at the stream impact location. For maximal black hole rotation and sufficiently large values of the viscosity parameter α > ˜0.01-0.1, the ratio of the disc inclination to that of the initial stellar orbit is estimated to be 0.1-0.2 in the advection dominated state, while reaching of order unity in the low state. Misalignment decreases with decreasing α, but increases as the black hole rotation parameter decreases. Thus, it is always significant when the latter is small.

  5. Water formation in early solar nebula: I. Quasi-stationary cloud core

    NASA Astrophysics Data System (ADS)

    Tornow, C.; Gast, P.; Pelivan, I.; Kupper, S.; Kührt, E.; Motschmann, U.

    2014-08-01

    An important condition for the habitability of rocky planets is the existence of water in or on their upper lithospheric layer. We will show that the available amount of this water depends on the conditions in the parental cloud the planetary system has formed from. These clouds can be giant gas clusters with a complex structure associated with bright nebulae, or smaller gas aggregations appearing as quiescent dark regions. It has been observed that in both cloud types young stars are formed in dense cores consisting mainly of molecular hydrogen. We assume that the physical and chemical state of these cores, which defines the initial conditions of star formation, is also representative of the initial state of the solar nebula 4.6 gigayears ago. Based on this assumption, we have developed a radially symmetric model to study the physical and chemical evolution of the earliest period of the solar nebula, described by a cloud core with 1.01 solar masses and a radius of about 10^4 AU. The evolution of this core is simulated for a few megayears, while its molecular gas is in hydrostatic equilibrium. The related radial distributions of the gas and dust temperature can be calculated from thermal balance equations. These equations depend on the radial profile of the dust-to-gas density, which follows from the continuity equation of the dust phase. The velocity of the dust grains is influenced by the radiation pressure of the local interstellar radiation field and the gas drag. The resulting temperature and dust profiles derived from our model depend on the grain size distribution of the dust. These profiles determine the chemical evolution of the cloud core. It is shown that in the dust phase about 10^6 to 10^7 times more water is produced than in the gas phase. Further, the total mass of the water formed in the core varies only marginally, between 0.11 and 0.12 wt%, for a core lifetime between 1 and 6.5 megayears. Roughly 84% of the oxygen atoms are incorporated into water molecules if the intensity of the radiation field is about 1 Habing. The fraction of oxygen atoms incorporated decreases to 77% if this intensity triples. The amount of water produced in the gas phase depends more strongly on the interstellar radiation field and the lifetime of the core than the amount formed on dust. For the 1 Habing radiation intensity the size distribution of the dust grains has nearly no influence. Finally, a number of species representing compounds mainly formed in the dust or in the gas phase were selected (H2O, CO, etc.) in order to use them for a validation of our model. To this end, we compared the abundances of these compounds simulated with the model to the related data from observations published in the literature. For all cases except N2H+ a sufficient agreement was found.

  6. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.
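
    The abstract does not reproduce the algorithms themselves; the sketch below is only a minimal illustration of the underlying idea, representing an approximate linguistic distance term such as "near" by a fuzzy membership function and combining terms with the usual min-conjunction. All membership ranges and names are hypothetical.

      # Minimal fuzzy-logic sketch of approximate linguistic spatial terms.
      # The membership ranges below are illustrative, not taken from the paper.

      def trapezoid(x, a, b, c, d):
          """Trapezoidal membership: rises on [a, b], is 1 on [b, c], falls on [c, d]."""
          if x <= a or x >= d:
              return 0.0
          if b <= x <= c:
              return 1.0
          if x < b:
              return (x - a) / (b - a)
          return (d - x) / (d - c)

      # Linguistic distance terms over metres.
      near = lambda dist: trapezoid(dist, -1.0, 0.0, 5.0, 15.0)
      far = lambda dist: trapezoid(dist, 10.0, 30.0, 1e9, 2e9)

      # Degree to which "the obstacle is near AND the exit is far" holds,
      # using Zadeh's min as the fuzzy conjunction.
      obstacle_distance, exit_distance = 8.0, 25.0
      truth = min(near(obstacle_distance), far(exit_distance))
      print(f"near={near(obstacle_distance):.2f} far={far(exit_distance):.2f} AND={truth:.2f}")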

  7. SIR model on a dynamical network and the endemic state of an infectious disease

    NASA Astrophysics Data System (ADS)

    Dottori, M.; Fabricius, G.

    2015-09-01

    In this work we performed a numerical study of an epidemic model that mimics the endemic state of whooping cough in the pre-vaccine era. We considered a stochastic SIR model on dynamical networks that involve local and global contacts among individuals and analysed the influence of the network properties on the characterization of the quasi-stationary state. We computed probability density functions (PDF) for the infected fraction of individuals and found that they are well fitted by gamma functions, except for the tails of the distributions, which are q-exponentials. We also computed the fluctuation power spectra of infective time series for different networks. We found that network effects can be partially absorbed by rescaling the rate of infective contacts of the model. An explicit relation between the effective transmission rate of the disease and the correlation of susceptible individuals with their infective nearest neighbours was obtained. This relation quantifies the known screening of infective individuals observed in these networks. We finally discuss the strengths and limitations of the SIR model with homogeneous mixing and parameters taken from epidemiological data in describing the dynamic behaviour observed in the networks studied.
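
    As a rough illustration of the kind of simulation behind such a study, the sketch below runs a discrete-time stochastic SIR model with demographic turnover on a ring network with random shortcuts and records the infective time series from which quasi-stationary statistics (PDFs, power spectra) would be computed. The network construction and all parameter values are illustrative assumptions, not the authors' model.

      import random

      def build_network(n=2000, k_local=2, p_shortcut=0.01, rng=None):
          """Ring lattice with k_local neighbours on each side plus sparse random shortcuts."""
          rng = rng or random.Random(0)
          adj = [set() for _ in range(n)]
          for i in range(n):
              for j in range(1, k_local + 1):
                  adj[i].add((i + j) % n)
                  adj[(i + j) % n].add(i)
              if rng.random() < p_shortcut:
                  j = rng.randrange(n)
                  if j != i:
                      adj[i].add(j)
                      adj[j].add(i)
          return adj

      def sir_step(adj, state, beta, gamma, mu, rng):
          """One synchronous time step of SIR with turnover rate mu (keeps an endemic state possible)."""
          new_state = state[:]
          for i, s in enumerate(state):
              if s == 'S':
                  infective_nbrs = sum(1 for j in adj[i] if state[j] == 'I')
                  if infective_nbrs and rng.random() < 1 - (1 - beta) ** infective_nbrs:
                      new_state[i] = 'I'
              elif s == 'I' and rng.random() < gamma:
                  new_state[i] = 'R'
              if rng.random() < mu:          # demographic turnover: node replaced by a susceptible
                  new_state[i] = 'S'
          return new_state

      rng = random.Random(1)
      adj = build_network()
      state = ['I' if rng.random() < 0.01 else 'S' for _ in range(len(adj))]
      series = []
      for t in range(2000):
          state = sir_step(adj, state, beta=0.06, gamma=0.1, mu=0.002, rng=rng)
          series.append(state.count('I') / len(state))
      print("mean infected fraction over the last 1000 steps:", sum(series[-1000:]) / 1000)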

  8. Calculator Function Approximation.

    ERIC Educational Resources Information Center

    Schelin, Charles W.

    1983-01-01

    The general algorithm used in most hand calculators to approximate elementary functions is discussed. Comments on tabular function values and on computer function evaluation are given first; then the CORDIC (Coordinate Rotation Digital Computer) scheme is described. (MNS)
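
    The CORDIC iteration itself is short enough to sketch. The version below works in floating point and only for arguments in [-π/2, π/2]; real calculator firmware uses fixed-point arithmetic and range reduction on top of this core loop.

      import math

      def cordic_sin_cos(theta, iterations=40):
          """Approximate (sin, cos) of theta in [-pi/2, pi/2] using CORDIC rotation mode."""
          angles = [math.atan(2.0 ** -i) for i in range(iterations)]   # rotation angles atan(2^-i)
          gain = 1.0
          for i in range(iterations):
              gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))                 # accumulated scale correction
          x, y, z = 1.0, 0.0, theta
          for i, angle in enumerate(angles):
              d = 1.0 if z >= 0 else -1.0                              # rotate toward zero residual angle
              x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
              z -= d * angle
          return y * gain, x * gain                                    # (sin, cos)

      s, c = cordic_sin_cos(0.7)
      print(s - math.sin(0.7), c - math.cos(0.7))   # both differences should be ~1e-12 or smaller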

  9. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise, rejecting the fuzziness of concepts in natural use and replacing them with non-fuzzy scientific explicata through a process of precisiation. As an alternative to this approach, it has been suggested that rather than regarding human reasoning processes as approximating some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lies in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. It is therefore desirable to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.

  10. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to compute and keep in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision compared to related approximate clustering approaches. PMID:25528318

  11. Covariant approximation averaging

    NASA Astrophysics Data System (ADS)

    Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2015-06-01

    We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
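
    The structure of such an estimator can be shown with a toy example: one expensive, unbiased evaluation at a reference source is combined with a cheap, biased approximation averaged over many source positions, and the bias cancels in expectation. The "exact" and "approximate" evaluators below are stand-ins, not lattice QCD code.

      import random

      random.seed(0)
      TRUE_VALUE = 3.0
      SOURCES = list(range(64))
      FLUCT = {s: random.gauss(0.0, 1.0) for s in SOURCES}   # fixed fluctuation per source position

      def exact_observable(src):
          """Stand-in for an expensive, unbiased measurement at source position src."""
          return TRUE_VALUE + FLUCT[src]

      def approx_observable(src):
          """Stand-in for a cheap approximation (e.g. a relaxed solver stopping condition):
          strongly correlated with the exact result but slightly biased."""
          return 0.95 * exact_observable(src) + 0.02

      ref = SOURCES[0]
      # One exact solve at the reference source plus the cheap approximation averaged over
      # all translated sources; the approximation bias enters with opposite signs and cancels.
      rest_term = exact_observable(ref) - approx_observable(ref)
      average_term = sum(approx_observable(s) for s in SOURCES) / len(SOURCES)
      print("improved estimate:", rest_term + average_term)
      print("single exact measurement:", exact_observable(ref))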

  12. Fast approximate motif statistics.

    PubMed

    Nicodème, P

    2001-01-01

    We present in this article a fast approximate method for computing the statistics of a number of non-self-overlapping matches of motifs in a random text in the nonuniform Bernoulli model. This method is well suited for protein motifs where the probability of self-overlap of motifs is small. For 96% of the PROSITE motifs, the expectations of occurrences of the motifs in a 7-million-amino-acids random database are computed by the approximate method with less than 1% error when compared with the exact method. Processing of the whole PROSITE takes about 30 seconds with the approximate method. We apply this new method to a comparison of the C. elegans and S. cerevisiae proteomes. PMID:11535175

  13. The Guiding Center Approximation

    NASA Astrophysics Data System (ADS)

    Pedersen, Thomas Sunn

    The guiding center approximation for charged particles in strong magnetic fields is introduced here. This approximation is very useful in situations where the charged particles are very well magnetized, such that the gyration (Larmor) radius is small compared to relevant length scales of the confinement device, and the gyration is fast relative to relevant timescales in an experiment. The basics of motion in a straight, uniform, static magnetic field are reviewed, and are used as a starting point for analyzing more complicated situations where more forces are present, as well as inhomogeneities in the magnetic field -- magnetic curvature as well as gradients in the magnetic field strength. The first and second adiabatic invariant are introduced, and slowly time-varying fields are also covered. As an example of the use of the guiding center approximation, the confinement concept of the cylindrical magnetic mirror is analyzed.
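
    The basic check behind the approximation, that the gyration radius is small and the gyration fast compared with the scales of interest, can be made with the standard formulas; a short sketch for a proton with illustrative field values (the E×B drift shown is the simplest of the guiding-centre drifts):

      import numpy as np

      Q = 1.602e-19   # proton charge [C]
      M = 1.673e-27   # proton mass [kg]

      def larmor_radius(v_perp, b_mag):
          """Gyration (Larmor) radius r_L = m v_perp / (q B)."""
          return M * v_perp / (Q * b_mag)

      def exb_drift(e_field, b_field):
          """Guiding-centre E x B drift velocity (E x B) / B^2, independent of charge and mass."""
          return np.cross(e_field, b_field) / np.dot(b_field, b_field)

      # Illustrative values: 1 keV perpendicular energy, 1 T axial field, small transverse E field.
      v_perp = np.sqrt(2 * 1e3 * Q / M)
      B = np.array([0.0, 0.0, 1.0])      # tesla
      E = np.array([100.0, 0.0, 0.0])    # volt/metre

      print(f"Larmor radius: {larmor_radius(v_perp, np.linalg.norm(B)) * 1e3:.2f} mm")
      print("E x B drift [m/s]:", exb_drift(E, B))   # points along -y for these fields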

  14. Monotone Boolean approximation

    SciTech Connect

    Hulme, B.L.

    1982-12-01

    This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
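
    For small numbers of variables the best monotone bounds can be computed by brute force: the tightest monotone lower bound of f at x is the minimum of f over all points above x, and the tightest monotone upper bound is the maximum over all points below x. The sketch below uses that standard construction on a toy noncoherent function; it is not the report's algorithms for large Boolean expressions.

      from itertools import product

      def leq(x, y):
          """Componentwise order on Boolean vectors: x <= y."""
          return all(a <= b for a, b in zip(x, y))

      def monotone_bounds(f, n):
          """Best monotone lower and upper bounds of an arbitrary Boolean function on {0,1}^n."""
          points = list(product((0, 1), repeat=n))
          lower = {x: min(f(y) for y in points if leq(x, y)) for x in points}
          upper = {x: max(f(y) for y in points if leq(y, x)) for x in points}
          return lower, upper

      # A noncoherent example: f is 1 iff exactly one of the three inputs is 1.
      f = lambda x: int(sum(x) == 1)
      lower, upper = monotone_bounds(f, 3)
      for x in sorted(lower):
          print(x, "lower =", lower[x], "f =", f(x), "upper =", upper[x])
      # lower <= f <= upper holds everywhere, and both bounds are monotone increasing.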

  15. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    This article is part of a discussion of Monte Carlo methods and outlines how to use probability expectations to approximate the value of a definite integral. The purpose of the paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
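
    The underlying identity is that for X uniform on [a, b], (b - a)·E[f(X)] equals the definite integral of f over [a, b], so a sample mean provides the approximation. The article's examples use Visual Basic; a minimal Python equivalent of the same idea:

      import math
      import random

      def monte_carlo_integral(f, a, b, n=100_000, rng=None):
          """Approximate the integral of f over [a, b] as (b - a) times the sample mean of f(X),
          where X is drawn uniformly from [a, b]."""
          rng = rng or random.Random(0)
          total = sum(f(rng.uniform(a, b)) for _ in range(n))
          return (b - a) * total / n

      # Example: the integral of sin(x) from 0 to pi is exactly 2.
      print(monte_carlo_integral(math.sin, 0.0, math.pi))   # close to 2; error shrinks like 1/sqrt(n)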

  16. Optimizing the Zeldovich approximation

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.

    1994-01-01

    We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (sigma approximately 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the cross correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k^2/(2 k_G^2)) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross correlation in those cases which most needed improvement, e.g. those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
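
    The Gaussian-window truncation described here amounts to multiplying the initial Fourier amplitudes by exp(-k^2/(2 k_G^2)) before applying the Zeldovich displacement. A minimal sketch of that filtering step on a toy two-dimensional Gaussian random field (grid size, box size and k_G are illustrative; this is not the paper's full pipeline):

      import numpy as np

      def gaussian_truncate(delta, box_size, k_g):
          """Multiply the Fourier amplitudes of the density field delta by exp(-k^2 / (2 k_g^2))."""
          n = delta.shape[0]
          k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
          kx, ky = np.meshgrid(k, k, indexing="ij")
          window = np.exp(-(kx**2 + ky**2) / (2 * k_g**2))
          return np.real(np.fft.ifft2(np.fft.fft2(delta) * window))

      rng = np.random.default_rng(0)
      delta = rng.standard_normal((128, 128))               # toy Gaussian initial conditions
      smoothed = gaussian_truncate(delta, box_size=100.0, k_g=1.0)
      print(delta.std(), smoothed.std())                    # small-scale power is strongly suppressed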

  17. The de novo formation of a vascular network, in warm-blooded embryos, occurs via a self-assembly process that spans multiple length and time scales

    NASA Astrophysics Data System (ADS)

    Little, Charles D.

    2007-03-01

    Taking advantage of wide-field, time-lapse microscopy we examined the assembly of vascular polygonal networks in whole bird embryos and in explanted embryonic mouse tissue (allantois). Primary vasculogenesis assembly steps range from cellular (1-10 μm) to tissue (100 μm-1 mm) level events: Individual vascular endothelial cells extend protrusions and move with respect to the extracellular matrix/surrounding tissue. Consequently, long-range, tissue-level deformations directly influence the vascular pattern. Experimental perturbation of endothelial-specific cell-cell adhesions (VE-cadherin), during mouse vasculogenesis, permitted dissection of the cellular motion required for sprout formation. In particular, cells are shown to move actively onto vascular cords that are subject to strain via tissue deformations. Based on the empirical data we propose a simple model of preferential migration along stretched cells. Numerical simulations reveal that the model evolves into a quasi-stationary pattern containing linear segments, which interconnect above a critical volume fraction. In the quasi-stationary state the generation of new branches offsets the coarsening driven by surface tension. In agreement with empirical data, the characteristic size of the resulting polygonal pattern is density-independent within a wide range of volume fractions. These data underscore the potential of combining physical studies with experimental embryology as a means of studying complex morphogenetic systems. In collaboration with Brenda J. Rongish (1), András Czirók (1,2), Erica D. Perryn (1), Cheng Cui (1), and Evan A. Zamir (1): (1) Department of Anatomy and Cell Biology, the University of Kansas Medical Center, Kansas City, KS; (2) Department of Biological Physics, Eötvös Loránd University, Budapest, Hungary.

  18. The Cell Cycle Switch Computes Approximate Majority

    NASA Astrophysics Data System (ADS)

    Cardelli, Luca; Csikász-Nagy, Attila

    2012-09-01

    Both computational and biological systems have to make decisions about switching from one state to another. The `Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates the mitotic entry of the cell-cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis promoting Cyclin Dependent Kinases is driven by a system that is related to both the structure and the dynamics of the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and they are exchangeable as components of oscillatory networks.
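
    The Approximate Majority protocol referred to above is usually stated with two committed states (X, Y) and one undecided state (B), updated by random pairwise interactions: disagreeing committed agents knock one of the pair into B, and committed agents recruit undecided ones. A minimal simulation of that standard three-rule form (not the cell-cycle model itself):

      import random

      def approximate_majority(n_x, n_y, rng=None):
          """Run the three-state Approximate Majority protocol to consensus.
          Returns the winning state and the number of pairwise interactions used."""
          rng = rng or random.Random(0)
          pop = ['X'] * n_x + ['Y'] * n_y
          steps = 0
          while any(s != pop[0] for s in pop):
              i, j = rng.sample(range(len(pop)), 2)        # initiator i meets responder j
              a, b = pop[i], pop[j]
              if {a, b} == {'X', 'Y'}:
                  pop[j] = 'B'                             # disagreement: responder becomes undecided
              elif a in ('X', 'Y') and b == 'B':
                  pop[j] = a                               # committed agent recruits an undecided one
              elif b in ('X', 'Y') and a == 'B':
                  pop[i] = b
              steps += 1
          return pop[0], steps

      winner, steps = approximate_majority(550, 450)
      print(winner, steps)   # with high probability the initial majority wins in O(n log n) interactions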

  19. Approximate option pricing

    SciTech Connect

    Chalasani, P.; Saias, I.; Jha, S.

    1996-04-08

    As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
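
    For reference, the binomial pricing model mentioned above is easy to sketch for the non-path-dependent case: a Cox-Ross-Rubinstein tree with backward induction and an early-exercise check gives the n-period American put that the perpetual-put approximation is compared against. Parameter values below are illustrative.

      import math

      def american_put_binomial(s0, strike, rate, sigma, maturity, n):
          """Price an n-period American put on a Cox-Ross-Rubinstein binomial tree."""
          dt = maturity / n
          u = math.exp(sigma * math.sqrt(dt))            # up factor
          d = 1.0 / u                                    # down factor
          disc = math.exp(-rate * dt)
          p = (math.exp(rate * dt) - d) / (u - d)        # risk-neutral up probability

          # Payoffs at maturity; node j = number of up moves.
          values = [max(strike - s0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
          # Backward induction with the early-exercise choice at every node.
          for step in range(n - 1, -1, -1):
              for j in range(step + 1):
                  continuation = disc * (p * values[j + 1] + (1 - p) * values[j])
                  exercise = max(strike - s0 * u**j * d**(step - j), 0.0)
                  values[j] = max(continuation, exercise)
          return values[0]

      print(american_put_binomial(s0=100.0, strike=100.0, rate=0.05, sigma=0.2, maturity=1.0, n=500))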

  20. Beyond the Kirchhoff approximation

    NASA Technical Reports Server (NTRS)

    Rodriguez, Ernesto

    1989-01-01

    The three most successful models for describing scattering from random rough surfaces are the Kirchhoff approximation (KA), the small-perturbation method (SPM), and the two-scale-roughness (or composite roughness) surface-scattering (TSR) models. In this paper it is shown how these three models can be derived rigorously from one perturbation expansion based on the extinction theorem for scalar waves scattering from a perfectly rigid surface. It is also shown how corrections to the KA proportional to the surface curvature and higher-order derivatives may be obtained. Using these results, the scattering cross section is derived for various surface models.

  1. An exact relationship between invasion probability and endemic prevalence for Markovian SIS dynamics on networks.

    PubMed

    Wilkinson, Robert R; Sharkey, Kieran J

    2013-01-01

    Understanding models which represent the invasion of network-based systems by infectious agents can give important insights into many real-world situations, including the prevention and control of infectious diseases and computer viruses. Here we consider Markovian susceptible-infectious-susceptible (SIS) dynamics on finite strongly connected networks, applicable to several sexually transmitted diseases and computer viruses. In this context, a theoretical definition of endemic prevalence is easily obtained via the quasi-stationary distribution (QSD). By representing the model as a percolation process and utilising the property of duality, we also provide a theoretical definition of invasion probability. We then show that, for undirected networks, the probability of invasion from any given individual is equal to the (probabilistic) endemic prevalence, following successful invasion, at the individual (we also provide a relationship for the directed case). The total (fractional) endemic prevalence in the population is thus equal to the average invasion probability (across all individuals). Consequently, for such systems, the regions or individuals already supporting a high level of infection are likely to be the source of a successful invasion by another infectious agent. This could be used to inform targeted interventions when there is a threat from an emerging infectious disease. PMID:23935916

  2. Countably QC-Approximating Posets

    PubMed Central

    Mao, Xuxin; Xu, Luoshan

    2014-01-01

    As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σ_c(L)^op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730

  3. Approximate Bayesian multibody tracking.

    PubMed

    Lanz, Oswald

    2006-09-01

    Visual tracking of multiple targets is a challenging problem, especially when efficiency is an issue. Occlusions, if not properly handled, are a major source of failure. Solutions supporting principled occlusion reasoning have been proposed but are yet unpractical for online applications. This paper presents a new solution which effectively manages the trade-off between reliable modeling and computational efficiency. The Hybrid Joint-Separable (HJS) filter is derived from a joint Bayesian formulation of the problem, and shown to be efficient while optimal in terms of compact belief representation. Computational efficiency is achieved by employing a Markov random field approximation to joint dynamics and an incremental algorithm for posterior update with an appearance likelihood that implements a physically-based model of the occlusion process. A particle filter implementation is proposed which achieves accurate tracking during partial occlusions, while in cases of complete occlusion, tracking hypotheses are bound to estimated occlusion volumes. Experiments show that the proposed algorithm is efficient, robust, and able to resolve long-term occlusions between targets with identical appearance. PMID:16929730

  4. Approximation by hinge functions

    SciTech Connect

    Faber, V.

    1997-05-01

    Breiman has defined "hinge functions" for use as basis functions in least squares approximations to data. A hinge function is the max (or min) of two linear functions. In this paper, the author assumes the existence of a smooth function f(x) and a set of samples of the form (x, f(x)) drawn from a probability distribution ρ(x). The author hopes to find the best-fitting hinge function h(x) in the least squares sense. There are two problems with this plan. First, Breiman has suggested an algorithm to perform this fit. The author shows that this algorithm is not robust and also shows how to create examples on which the algorithm diverges. Second, if the author tries to use the data to minimize the fit in the usual discrete least squares sense, the functional that must be minimized is continuous in the variables, but has a derivative which jumps at the data. This paper takes a different approach. This approach is an example of a method that the author has developed called "Monte Carlo Regression". (A paper on the general theory is in preparation.) The author shows that since the function f is continuous, the analytic form of the least squares equation is continuously differentiable. A local minimum is found using Newton's method, where the entries of the Hessian are estimated directly from the data by Monte Carlo. The algorithm has the desirable properties that it is quadratically convergent from any starting guess sufficiently close to a solution and that each iteration requires only a linear system solve.

  5. APPROXIMATION ALGORITHMS FOR DISTANCE-2 EDGE COLORING.

    SciTech Connect

    BARRETT, CHRISTOPHER L; ISTRATE, GABRIEL; VILIKANTI, ANIL KUMAR; MARATHE, MADHAV; THITE, SHRIPAD V

    2002-07-17

    The authors consider the link scheduling problem for packet radio networks: assigning channels to the connecting links so that transmission may proceed simultaneously, without collisions, on all links assigned the same channel. This problem can be cast as the distance-2 edge coloring problem, a variant of proper edge coloring, on the graph with transceivers as vertices and links as edges. They present efficient approximation algorithms for the distance-2 edge coloring problem for various classes of graphs.
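
    The problem is simple to state in code. The greedy baseline below colours edges one at a time with the smallest colour not used by any edge within distance one (sharing an endpoint, or with endpoints joined by another edge); it is only a naive sketch, not the approximation algorithms of the paper.

      def strong_edge_coloring(edges):
          """Greedy distance-2 (strong) edge colouring; returns a dict edge -> colour index."""
          edge_set = {frozenset(e) for e in edges}

          def conflict(e1, e2):
              a, b = e1
              c, d = e2
              if {a, b} & {c, d}:                         # edges share a transceiver
                  return True
              return any(frozenset(p) in edge_set         # endpoints joined by a third link
                         for p in ((a, c), (a, d), (b, c), (b, d)))

          colours = {}
          for e in edges:
              used = {colours[f] for f in colours if conflict(e, f)}
              c = 0
              while c in used:
                  c += 1
              colours[e] = c
          return colours

      # A 6-cycle of links: its strong edge colouring needs 3 channels.
      cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
      for edge, colour in strong_edge_coloring(cycle).items():
          print(edge, "-> channel", colour)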

  6. Approximation by fully complex multilayer perceptrons.

    PubMed

    Kim, Taehwan; Adali, Tülay

    2003-07-01

    We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as stated by Liouville's theorem. To avoid the conflict between the boundedness and the analyticity of a nonlinear complex function in the complex domain, a number of ad hoc MLPs that include using two real-valued MLPs, one processing the real part and the other processing the imaginary part, have been traditionally employed. However, since nonanalytic functions do not meet the Cauchy-Riemann conditions, they render themselves into degenerative backpropagation algorithms that compromise the efficiency of nonlinear approximation and learning in the complex vector field. A number of elementary transcendental functions (ETFs) derivable from the entire exponential function e^z that are analytic are defined as fully complex activation functions and are shown to provide a parsimonious structure for processing data in the complex domain and address most of the shortcomings of the traditional approach. The introduction of ETFs, however, raises a new question in the approximation capability of this fully complex MLP. In this letter, three proofs of the approximation capability of the fully complex MLP are provided based on the characteristics of singularity among ETFs. First, the fully complex MLPs with continuous ETFs over a compact set in the complex vector field are shown to be the universal approximator of any continuous complex mappings. The complex universal approximation theorem extends to bounded measurable ETFs possessing a removable singularity. Finally, it is shown that the output of complex MLPs using ETFs with isolated and essential singularities uniformly converges to any nonlinear mapping in the deleted annulus of singularity nearest to the origin. PMID:12816570

  7. Cavity approximation for graphical models.

    PubMed

    Rizzo, T; Wemmenhove, B; Kappen, H J

    2007-07-01

    We reformulate the cavity approximation (CA), a class of algorithms recently introduced for improving the Bethe approximation estimates of marginals in graphical models. In our formulation, which allows for the treatment of multivalued variables, a further generalization to factor graphs with arbitrary order of interaction factors is explicitly carried out, and a message passing algorithm that implements the first order correction to the Bethe approximation is described. Furthermore, we investigate an implementation of the CA for pairwise interactions. In all cases considered we could confirm that CA[k] with increasing k provides a sequence of approximations of markedly increasing precision. Furthermore, in some cases we could also confirm the general expectation that the approximation of order k, whose computational complexity is O(N^(k+1)), has an error that scales as 1/N^(k+1) with the size of the system. We discuss the relation between this approach and some recent developments in the field. PMID:17677405

  8. Approximate circuits for increased reliability

    SciTech Connect

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
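
    A software analogue of the voting idea is easy to sketch: three word-level "approximate" functions whose error sets are disjoint feed a bitwise majority, whose output then always matches the reference function. The toy approximate adders below are hypothetical, not the circuits of the patent.

      def majority3(a, b, c):
          """Bitwise majority of three equal-width words: each output bit is the majority of the inputs."""
          return (a & b) | (a & c) | (b & c)

      def exact_adder(x, y):
          return (x + y) & 0xFF

      # Hypothetical approximate 8-bit adders, each wrong on a different single input pair.
      def adder_a(x, y):
          return 0 if (x, y) == (3, 5) else exact_adder(x, y)

      def adder_b(x, y):
          return 0 if (x, y) == (7, 9) else exact_adder(x, y)

      def adder_c(x, y):
          return exact_adder(x, y)

      # Because the error sets are disjoint, the voter output equals the reference circuit everywhere.
      assert all(majority3(adder_a(x, y), adder_b(x, y), adder_c(x, y)) == exact_adder(x, y)
                 for x in range(256) for y in range(256))
      print("voter output matches the reference adder on all 65536 input pairs")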

  9. Approximate circuits for increased reliability

    SciTech Connect

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-12-22

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.

  10. Structural optimization with approximate sensitivities

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.

    1994-01-01

    Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, the sequence of quadratic programming, and the sequence of linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.

  11. Approximate Genealogies Under Genetic Hitchhiking

    PubMed Central

    Pfaffelhuber, P.; Haubold, B.; Wakolbinger, A.

    2006-01-01

    The rapid fixation of an advantageous allele leads to a reduction in linked neutral variation around the target of selection. The genealogy at a neutral locus in such a selective sweep can be simulated by first generating a random path of the advantageous allele's frequency and then a structured coalescent in this background. Usually the frequency path is approximated by a logistic growth curve. We discuss an alternative method that approximates the genealogy by a random binary splitting tree, a so-called Yule tree that does not require first constructing a frequency path. Compared to the coalescent in a logistic background, this method gives a slightly better approximation for identity by descent during the selective phase and a much better approximation for the number of lineages that stem from the founder of the selective sweep. In applications such as the approximation of the distribution of Tajima's D, the two approximation methods perform equally well. For relevant parameter ranges, the Yule approximation is faster. PMID:17182733

  12. Function approximation using adaptive and overlapping intervals

    SciTech Connect

    Patil, R.B.

    1995-05-01

    A problem common to many disciplines is to approximate a function given only the values of the function at various points in input variable space. A method is proposed for approximating a function from several input variables to one output variable. The model takes the form of a weighted average of overlapping basis functions defined over intervals. The number of such basis functions and their parameters (widths and centers) are automatically determined from given training data by a learning algorithm. The proposed algorithm can be seen as placing a nonuniform multidimensional grid in the input domain with overlapping cells. The non-uniformity and overlap of the cells are achieved by a learning algorithm that optimizes a given objective function. This approach is motivated by the fuzzy modeling approach and by learning algorithms used for clustering and classification in pattern recognition. The basics of why and how the approach works are given. A few examples of nonlinear regression and classification problems are modeled. The relationship between the proposed technique, radial basis neural networks, kernel regression, probabilistic neural networks, and fuzzy modeling is explained. Finally, advantages and disadvantages are discussed.

  13. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To distinguish these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away

  14. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization because it reduces the number of exact analyses, which involve computationally expensive finite element analysis.

  15. Approximate factorization with source terms

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Chyu, W. J.

    1991-01-01

    A comparative evaluation is made of three methodologies to determine which offers the smallest approximate factorization error. While two of these methods are found to lead to more efficient algorithms in cases where factors which do not contain source terms can be diagonalized, the third method generates the lowest approximate factorization error. This method may be preferred when the norms of the source terms are large and transient solutions are of interest.

  16. Approximating random quantum optimization problems

    NASA Astrophysics Data System (ADS)

    Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.

    2013-06-01

    We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT (simulated annealing exhibits metastability in similar “hard” regions of parameter space); and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.

  17. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies by Grote and Huckle and by Chow and Saad also show that the sparse approximate inverse preconditioner can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless a drawback is that it requires rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.

  18. Relativistic regular approximations revisited: An infinite-order relativistic approximation

    SciTech Connect

    Dyall, K.G.; van Lenthe, E.

    1999-07-01

    The concept of the regular approximation is presented as the neglect of the energy dependence of the exact Foldy-Wouthuysen transformation of the Dirac Hamiltonian. Expansion of the normalization terms leads immediately to the zeroth-order regular approximation (ZORA) and first-order regular approximation (FORA) Hamiltonians as the zeroth- and first-order terms of the expansion. The expansion may be taken to infinite order by using an un-normalized Foldy-Wouthuysen transformation, which results in the ZORA Hamiltonian and a nonunit metric. This infinite-order regular approximation, IORA, has eigenvalues which differ from the Dirac eigenvalues by order E^3/c^4 for a hydrogen-like system, which is a considerable improvement over the ZORA eigenvalues, and similar to the nonvariational FORA energies. A further perturbation analysis yields a third-order correction to the IORA energies, TIORA. Results are presented for several systems including the neutral U atom. The IORA eigenvalues for all but the 1s spinor of the neutral system are superior even to the scaled ZORA energies, which are exact for the hydrogenic system. The third-order correction reduces the IORA error for the inner orbitals to a very small fraction of the Dirac eigenvalue. © 1999 American Institute of Physics.

  19. Gadgets, approximation, and linear programming

    SciTech Connect

    Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.

    1996-12-31

    We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. This method also answers a previously posed question on how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45 respectively is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.

  20. On stochastic approximation algorithms for classes of PAC learning problems

    SciTech Connect

    Rao, N.S.V.; Uppuluri, V.R.R.; Oblow, E.M.

    1994-03-01

    The classical stochastic approximation methods are shown to yield algorithms to solve several formulations of the PAC learning problem defined on the domain [0,1]^d. Under some assumptions on the differentiability of the probability measure functions, simple algorithms to solve some PAC learning problems are proposed based on networks of non-polynomial units (e.g. artificial neural networks). Conditions on the sizes of the samples required to ensure the error bounds are derived using martingale inequalities.

  1. Heat pipe transient response approximation

    NASA Astrophysics Data System (ADS)

    Reid, Robert S.

    2002-01-01

    A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.

  2. Approximation methods for stochastic petri nets

    NASA Technical Reports Server (NTRS)

    Jungnitz, Hauke Joerg

    1992-01-01

    Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better.

  3. Pythagorean Approximations and Continued Fractions

    ERIC Educational Resources Information Center

    Peralta, Javier

    2008-01-01

    In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
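
    The coincidence described in the article can be checked directly: the convergents of the continued fraction [1; 2, 2, 2, ...] of √2 and the classical side/diagonal recurrence s' = s + d, d' = 2s + d (starting from s = d = 1) generate the same sequence of rational approximations. A short verification:

      from fractions import Fraction

      def cf_convergents_sqrt2(n):
          """First n convergents of the continued fraction [1; 2, 2, 2, ...] of sqrt(2)."""
          h_prev, h = 1, 1        # numerator recurrence  h_k = 2 h_{k-1} + h_{k-2}
          k_prev, k = 0, 1        # denominator recurrence k_k = 2 k_{k-1} + k_{k-2}
          out = [Fraction(h, k)]
          for _ in range(n - 1):
              h_prev, h = h, 2 * h + h_prev
              k_prev, k = k, 2 * k + k_prev
              out.append(Fraction(h, k))
          return out

      def side_diagonal_approximations(n):
          """Pythagorean side/diagonal numbers: s' = s + d, d' = 2s + d, starting from s = d = 1."""
          s, d = 1, 1
          out = [Fraction(d, s)]
          for _ in range(n - 1):
              s, d = s + d, 2 * s + d
              out.append(Fraction(d, s))
          return out

      print(cf_convergents_sqrt2(6))            # 1, 3/2, 7/5, 17/12, 41/29, 99/70
      print(side_diagonal_approximations(6))    # the same sequence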

  4. Chemical Laws, Idealization and Approximation

    NASA Astrophysics Data System (ADS)

    Tobin, Emma

    2013-07-01

    This paper examines the notion of laws in chemistry. Vihalemm (Found Chem 5(1):7-22, 2003) argues that the laws of chemistry are fundamentally the same as the laws of physics: they are all ceteris paribus laws which are true "in ideal conditions". In contrast, Scerri (2000) contends that the laws of chemistry are fundamentally different to the laws of physics, because they involve approximations. Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) agree that the laws of chemistry are operationally different to the laws of physics, but claim that the distinction between exact and approximate laws is too simplistic to taxonomise them. Approximations in chemistry involve diverse kinds of activity, and often what counts as a scientific law in chemistry is dictated by the context of its use in scientific practice. This paper addresses the question of what makes chemical laws distinctive independently of the separate question as to how they are related to the laws of physics. From an analysis of some candidate ceteris paribus laws in chemistry, this paper argues that there are two distinct kinds of ceteris paribus laws in chemistry: idealized and approximate chemical laws. Thus, while Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) are correct to point out that the candidate generalisations in chemistry are diverse and heterogeneous, a distinction between idealizations and approximations can nevertheless be used to successfully taxonomise them.

  5. One sign ion mobile approximation

    NASA Astrophysics Data System (ADS)

    Barbero, G.

    2011-12-01

    The electrical response of an electrolytic cell to an external excitation is discussed in the simple case where only one group of positive and negative ions is present. The particular case where the diffusion coefficient of the negative ions, Dm, is very small with respect to that of the positive ions, Dp, is considered. In this framework, it is discussed under what conditions the one mobile ion approximation, in which the negative ions are assumed fixed, works well. The analysis is performed by assuming that the external excitation is sinusoidal with circular frequency ω, as used in the impedance spectroscopy technique. In this framework, we show that there exists a circular frequency, ω*, such that for ω > ω*, the one mobile ion approximation works well. We also show that for Dm ≪ Dp, ω* is independent of Dm.

  6. Testing the frozen flow approximation

    NASA Technical Reports Server (NTRS)

    Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

    1993-01-01

    We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and n-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cell distribution at small scales, but it does poorly in the cross correlation with n-body simulations, which means it is generally not moving mass to the right place, especially in models with high small-scale power.

  7. Approximate Counting of Graphical Realizations

    PubMed Central

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations. PMID:26161994

  8. Computer Experiments for Function Approximations

    SciTech Connect

    Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C

    2007-10-15

    This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LP_τ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
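
    Of the four sampling methods compared, a plain Latin hypercube is the simplest to sketch: each one-dimensional projection of the sample hits every equal-width stratum exactly once. The minimal version below omits the maximin and orthogonal-array refinements used in the study.

      import numpy as np

      def latin_hypercube(n_samples, n_dims, rng=None):
          """Basic Latin hypercube sample in the unit hypercube [0, 1]^n_dims."""
          rng = rng or np.random.default_rng(0)
          jitter = rng.random((n_samples, n_dims))     # random position inside each stratum
          strata = np.column_stack([rng.permutation(n_samples) for _ in range(n_dims)])
          return (strata + jitter) / n_samples

      points = latin_hypercube(10, 2)
      # Each column of the stratum indices contains every value 0..9 exactly once.
      print(np.sort(np.floor(points * 10).astype(int), axis=0))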

  9. Approximate reasoning using terminological models

    NASA Technical Reports Server (NTRS)

    Yen, John; Vaidya, Nitin

    1992-01-01

    Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSS's have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on the top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.

  10. Approximate Counting of Graphical Realizations.

    PubMed

    Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations. PMID:26161994
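
    A toy illustration of the underlying sampling idea, using degree-preserving double edge swaps from networkx rather than the specific Markov chain analysed in the paper, and without the forbidden-edge constraint; the degree sequence and swap counts below are arbitrary:

        # Sketch: swap-based Markov chain over realizations of a fixed degree sequence.
        import networkx as nx

        degree_sequence = [3, 3, 2, 2, 2, 2]
        assert nx.is_graphical(degree_sequence)

        # Start from one realization, then randomize it with degree-preserving
        # double edge swaps; each swap is one step of the chain.
        G = nx.havel_hakimi_graph(degree_sequence)
        nx.double_edge_swap(G, nswap=1000, max_tries=10000, seed=0)

        print(sorted(d for _, d in G.degree()))  # the degree sequence is preserved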

  11. Approximate von Neumann entropy for directed graphs.

    PubMed

    Ye, Cheng; Wilson, Richard C; Comin, César H; Costa, Luciano da F; Hancock, Edwin R

    2014-05-01

    In this paper, we develop an entropy measure for assessing the structural complexity of directed graphs. Although there are many existing alternative measures for quantifying the structural properties of undirected graphs, there are relatively few corresponding measures for directed graphs. To fill this gap in the literature, we explore an alternative technique that is applicable to directed graphs. We commence by using Chung's generalization of the Laplacian of a directed graph to extend the computation of von Neumann entropy from undirected to directed graphs. We provide a simplified form of the entropy which can be expressed in terms of simple node in-degree and out-degree statistics. Moreover, we find approximate forms of the von Neumann entropy that apply to both weakly and strongly directed graphs, and that can be used to characterize network structure. We illustrate the usefulness of these simplified entropy forms defined in this paper on both artificial and real-world data sets, including structures from protein databases and high energy physics theory citation networks. PMID:25353841
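
    For orientation, the sketch below computes the exact von Neumann entropy of an undirected graph from its Laplacian spectrum; the paper's actual contribution, Chung's directed generalization and its degree-based approximation, is not reproduced here, and the example graph is arbitrary:

        # Sketch: von Neumann entropy of an undirected graph from its Laplacian spectrum.
        import numpy as np
        import networkx as nx

        G = nx.karate_club_graph()
        L = nx.laplacian_matrix(G).toarray().astype(float)
        rho = L / np.trace(L)                     # density matrix associated with the graph
        lam = np.linalg.eigvalsh(rho)
        lam = lam[lam > 1e-12]                    # drop numerical zeros before taking logs
        S = -np.sum(lam * np.log(lam))            # von Neumann entropy
        print(f"von Neumann entropy: {S:.4f}")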

  12. Improved non-approximability results

    SciTech Connect

    Bellare, M.; Sudan, M.

    1994-12-31

    We indicate strong non-approximability factors for central problems: N^{1/4} for Max Clique; N^{1/10} for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones.

  13. Quantum tunneling beyond semiclassical approximation

    NASA Astrophysics Data System (ADS)

    Banerjee, Rabin; Ranjan Majhi, Bibhas

    2008-06-01

    Hawking radiation as tunneling by Hamilton-Jacobi method beyond semiclassical approximation is analysed. We compute all quantum corrections in the single particle action revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.

  14. Fermion tunneling beyond semiclassical approximation

    NASA Astrophysics Data System (ADS)

    Majhi, Bibhas Ranjan

    2009-02-01

    Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 [doi:10.1088/1126-6708/2008/06/095] for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related to the trace anomaly.

  15. Generalized Gradient Approximation Made Simple

    SciTech Connect

    Perdew, J.P.; Burke, K.; Ernzerhof, M.

    1996-10-01

    Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.

  16. The structural physical approximation conjecture

    NASA Astrophysics Data System (ADS)

    Shultz, Fred

    2016-01-01

    It was conjectured that the structural physical approximation (SPA) of an optimal entanglement witness is separable (or equivalently, that the SPA of an optimal positive map is entanglement breaking). This conjecture was disproved, first for indecomposable maps and more recently for decomposable maps. The arguments in both cases are sketched along with important related results. This review includes background material on topics including entanglement witnesses, optimality, duality of cones, decomposability, and the statement and motivation for the SPA conjecture so that it should be accessible for a broad audience.

  17. Diophantine networks

    NASA Astrophysics Data System (ADS)

    Bedogne', C.; Masucci, A. P.; Rodgers, G. J.

    2008-03-01

    We introduce a new class of deterministic networks by associating networks with Diophantine equations, thus relating network topology to algebraic properties. The network is formed by representing integers as vertices and by drawing cliques between M vertices every time that M distinct integers satisfy the equation. We analyse the network generated by the Pythagorean equation x² + y² = z², showing that its degree distribution is well approximated by a power law with exponential cut-off. We also show that the properties of this network differ considerably from the features of scale-free networks generated through preferential attachment. Remarkably we also recover a power law for the clustering coefficient. We then study the network associated with the equation x² + y² = z, showing that the degree distribution is consistent with a power law for several decades of values of k and that, after having reached a minimum, the distribution begins rising again. The power-law exponent, in this case, is given by γ ∼ 4.5. We then analyse clustering and ageing and compare our results to the ones obtained in the Pythagorean case.
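
    A rough reconstruction of the Pythagorean construction described above; the cut-off N and the brute-force triple search are assumptions made purely for illustration:

        # Sketch: build the Pythagorean network (a 3-clique per triple x^2 + y^2 = z^2, x,y,z <= N).
        import networkx as nx
        from collections import Counter

        N = 500
        G = nx.Graph()
        for x in range(1, N + 1):
            for y in range(x, N + 1):
                z2 = x * x + y * y
                z = int(round(z2 ** 0.5))
                if z <= N and z * z == z2:
                    G.add_edges_from([(x, y), (y, z), (x, z)])   # clique for this triple

        degree_counts = Counter(d for _, d in G.degree())
        print(sorted(degree_counts.items())[:10])   # inspect the low end of P(k)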

  18. Moment closure and the stochastic logistic model.

    PubMed

    Nåsell, Ingemar

    2003-03-01

    The quasi-stationary distribution of the stochastic logistic model is studied in the parameter region where its body is approximately normal. Improved asymptotic approximations of its first three cumulants are derived. It is shown that the same results can be derived with the aid of the moment closure method. This indicates that the moment closure method leads to expressions for the cumulants that are asymptotic approximations of the cumulants of the quasi-stationary distribution. PMID:12615498
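
    A crude numerical companion to this result, assuming the standard stochastic logistic (SIS-type) rates, birth λn(1 - n/N) and death μn, and estimating quasi-stationary cumulants by Gillespie simulation restarted on extinction; all parameters are arbitrary, and sampling at jump times (rather than time-weighting) makes this only a rough check:

        # Crude sketch (assumed rates): quasi-stationary cumulants of a stochastic logistic model.
        import numpy as np

        rng = np.random.default_rng(0)
        N, lam, mu = 100, 2.0, 1.0              # carrying capacity, birth and death rates
        n, t, t_burn = N // 2, 0.0, 10.0
        samples = []
        while len(samples) < 200000:
            birth, death = lam * n * (1 - n / N), mu * n
            total = birth + death
            t += rng.exponential(1.0 / total)
            n += 1 if rng.random() < birth / total else -1
            if n == 0:                          # condition on non-extinction: restart
                n, t = N // 2, 0.0
                continue
            if t > t_burn:
                samples.append(n)

        s = np.array(samples, dtype=float)
        m = s.mean()
        print("mean, variance, third central moment:", m, s.var(), np.mean((s - m) ** 3))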

  19. Wavelet Approximation in Data Assimilation

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
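
    A small sketch of the compression step on a synthetic correlation field, using PyWavelets; the wavelet family, field shape and 3% retention level are assumptions chosen to mirror the numbers quoted above, not the assimilation code itself:

        # Sketch: compress a synthetic 2-D correlation field, keeping ~3% of coefficients.
        import numpy as np
        import pywt

        x = np.linspace(-1, 1, 128)
        X, Y = np.meshgrid(x, x)
        field = np.exp(-((X - 0.2) ** 2 + (Y + 0.3) ** 2) / 0.05)   # localized bump

        coeffs = pywt.wavedec2(field, "db4", level=4)
        arr, slices = pywt.coeffs_to_array(coeffs)

        keep = 0.03
        thresh = np.quantile(np.abs(arr), 1 - keep)
        arr[np.abs(arr) < thresh] = 0.0

        recon = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                              "db4")[:128, :128]
        print("relative L2 error:", np.linalg.norm(recon - field) / np.linalg.norm(field))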

  20. Plasma Physics Approximations in Ares

    SciTech Connect

    Managan, R. A.

    2015-01-08

    Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals, F_n(μ/θ), the chemical potential, μ or ζ = ln(1 + e^(μ/θ)), and the temperature, θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A_α(ζ), A_β(ζ), ζ, f(ζ) = (1 + e^(-μ/θ))F_{1/2}(μ/θ), F_{1/2}'/F_{1/2}, F_{cα}, and F_{cβ}. In each case the relative error of the fit is minimized since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits or as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
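
    For reference, the sketch below evaluates a Fermi-Dirac integral directly by quadrature, the expensive computation the rational-function fits are meant to replace. The convention assumed here is F_n(eta) = integral from 0 to infinity of x^n / (1 + exp(x - eta)) dx, with no 1/Gamma(n+1) prefactor, which may differ from the normalization used in Ares:

        # Sketch: direct quadrature for a Fermi-Dirac integral (assumed convention).
        from scipy.integrate import quad
        from scipy.special import expit      # expit(z) = 1/(1 + exp(-z)), numerically stable

        def fermi_dirac(n, eta):
            val, _ = quad(lambda x: x ** n * expit(eta - x), 0.0, float("inf"))
            return val

        for eta in (-5.0, 0.0, 5.0, 20.0):
            print(eta, fermi_dirac(0.5, eta))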

  1. Interplay of approximate planning strategies.

    PubMed

    Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P

    2015-03-10

    Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options." PMID:25675480

  2. Approximating metal-insulator transitions

    NASA Astrophysics Data System (ADS)

    Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej

    2015-12-01

    We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate Metal-Insulator Transitions (MIT) at the finite iteration steps. We also report evidence on mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.
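
    A minimal sketch of the general idea using the textbook Aubry-André chain with a rational (Fibonacci) approximant of the incommensurate frequency; the lattice size, the approximant 55/89 and the potential strengths are illustrative assumptions, and the paper's iterative construction and mobility-edge analysis are not reproduced:

        # Sketch: Aubry-Andre chain with a rational approximant of the golden mean; the mean
        # inverse participation ratio (IPR) jumps across the transition at lam = 2.
        import numpy as np

        def mean_ipr(L, lam, alpha=55 / 89):    # 55/89 is a Fibonacci approximant
            n = np.arange(L)
            H = np.diag(lam * np.cos(2 * np.pi * alpha * n))
            H += np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)  # hopping = 1
            _, vecs = np.linalg.eigh(H)
            return float(np.mean(np.sum(vecs ** 4, axis=0)))

        for lam in (1.0, 2.0, 3.0):
            print(lam, mean_ipr(89, lam))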

  3. Strong shock implosion, approximate solution

    NASA Astrophysics Data System (ADS)

    Fujimoto, Y.; Mishkin, E. A.; Alejaldre, C.

    1983-01-01

    The self-similar, center-bound motion of a strong spherical, or cylindrical, shock wave moving through an ideal gas with a constant γ = c_p/c_v is considered and a linearized, approximate solution is derived. An X, Y phase plane of the self-similar solution is defined and the representative curve of the system behind the shock front is replaced by a straight line connecting the mappings of the shock front with that of its tail. The reduced pressure P(ξ), density R(ξ) and velocity U1(ξ) are found in closed, quite accurate, form. Comparison with numerically obtained results, for γ = 5/3 and γ = 7/5, is shown.

  4. Approximate analytic solutions to the NPDD: Short exposure approximations

    NASA Astrophysics Data System (ADS)

    Close, Ciara E.; Sheridan, John T.

    2014-04-01

    There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.

  5. B-term approximation using tree-structured Haar transforms

    NASA Astrophysics Data System (ADS)

    Ho, Hsin-Han; Egiazarian, Karen O.; Mitra, Sanjit K.

    2009-02-01

    We present a heuristic solution for B-term approximation using Tree-Structured Haar (TSH) transforms. Our solution consists of two main stages: best basis selection and greedy approximation. In addition, when approximating the same signal with a different B constraint or error metric, our solution also provides the flexibility of having less overall running time at the expense of more storage space. We adopted a lattice structure to index basis vectors, so that one index value can fully specify a basis vector. Based on the concept of fast computation of the TSH transform by a butterfly network, we also developed an algorithm for directly deriving butterfly parameters and incorporated it into our solution. Results show that, when the error metric is normalized l1-norm or normalized l2-norm, our solution has approximation quality comparable to (and sometimes better than) prior data synopsis algorithms.
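
    As a point of reference, plain (non-tree-structured) B-term approximation in an orthonormal Haar basis simply keeps the B largest-magnitude coefficients; the sketch below does this with PyWavelets on a hypothetical signal, with the signal, length and B chosen arbitrarily:

        # Sketch: keep the B largest-magnitude Haar coefficients of an arbitrary signal.
        import numpy as np
        import pywt

        signal = np.cumsum(np.random.default_rng(0).standard_normal(256))
        arr, slices = pywt.coeffs_to_array(pywt.wavedec(signal, "haar"))

        B = 16
        arr[np.argsort(np.abs(arr))[:-B]] = 0.0     # zero all but the B largest coefficients

        approx = pywt.waverec(pywt.array_to_coeffs(arr, slices, output_format="wavedec"),
                              "haar")
        print("normalized l2 error:",
              np.linalg.norm(approx - signal) / np.linalg.norm(signal))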

  6. Evolution-development congruence in pattern formation dynamics: Bifurcations in gene expression and regulation of networks structures.

    PubMed

    Kohsokabe, Takahiro; Kaneko, Kunihiko

    2016-01-01

    Search for possible relationships between phylogeny and ontogeny is important in evolutionary-developmental biology. Here we uncover such relationships by numerical evolution and unveil their origin in terms of dynamical systems theory. By representing the developmental dynamics of spatially located cells by gene expression dynamics with cell-to-cell interaction under an external morphogen gradient, gene regulation networks are evolved under mutation and selection, with fitness defined by approach to a prescribed spatial pattern of expressed genes. For most numerical evolution experiments, evolution of the pattern over generations and development of the pattern by an evolved network exhibit remarkable congruence. In both evolution and development, pattern changes consist of several epochs in which stripes are formed in a short time, while in other temporal regimes the pattern hardly changes. In evolution, these quasi-stationary regimes are generations needed to hit relevant mutations, while in development they are due to some gene expression that varies slowly and controls the pattern change. The morphogenesis is regulated by combinations of feedback or feedforward regulations, where the upstream feedforward network reads the external morphogen gradient and generates a pattern used as a boundary condition for the later patterns. The ordering from upstream to downstream is common in evolution and development, while the successive epochal changes in development and evolution are represented as common bifurcations in dynamical-systems theory, which lead to the evolution-development congruence. The mechanism behind exceptional violations of the congruence is also unveiled. Our results provide a new look at developmental stages, punctuated equilibrium, developmental bottlenecks, and evolutionary acquisition of novelty in morphogenesis. PMID:26678220

  7. Interplay of approximate planning strategies

    PubMed Central

    Huys, Quentin J. M.; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J.; Dayan, Peter; Roiser, Jonathan P.

    2015-01-01

    Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or “options.” PMID:25675480

  8. Multidimensional stochastic approximation Monte Carlo.

    PubMed

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E_{1},E_{2}). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_{1}+E_{2}) from g(E_{1},E_{2}). PMID:27415383

  9. Decision analysis with approximate probabilities

    NASA Technical Reports Server (NTRS)

    Whalen, Thomas

    1992-01-01

    This paper concerns decisions under uncertainty in which the probabilities of the states of nature are only approximately known. Decision problems involving three states of nature are studied. This is due to the fact that some key issues do not arise in two-state problems, while probability spaces with more than three states of nature are essentially impossible to graph. The primary focus is on two levels of probabilistic information. In one level, the three probabilities are separately rounded to the nearest tenth. This can lead to sets of rounded probabilities which add up to 0.9, 1.0, or 1.1. In the other level, probabilities are rounded to the nearest tenth in such a way that the rounded probabilities are forced to sum to 1.0. For comparison, six additional levels of probabilistic information, previously analyzed, were also included in the present analysis. A simulation experiment compared four criteria for decisionmaking using linearly constrained probabilities (Maximin, Midpoint, Standard Laplace, and Extended Laplace) under the eight different levels of information about probability. The Extended Laplace criterion, which uses a second order maximum entropy principle, performed best overall.

  10. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2).

  11. On the distributed approximation of edge coloring

    SciTech Connect

    Panconesi, A.

    1994-12-31

    An edge coloring of a graph G is an assignment of colors to the edges such that incident edges always have different colors. The edge coloring problem is to find an edge coloring with the aim of minimizing the number of colors used. The importance of this problem in distributed computing, and computer science generally, stems from the fact that several scheduling and resource allocation problems can be modeled as edge coloring problems. Given that determining an optimal (minimal) coloring is an NP-hard problem, this requirement is usually relaxed to consider approximate, hopefully even near-optimal, colorings. In this talk, we discuss a distributed, randomized algorithm for the edge coloring problem that uses (1 + o(1))Δ colors and runs in O(log n) time with high probability (Δ denotes the maximum degree of the underlying network, and n denotes the number of nodes). The algorithm is based on a beautiful probabilistic strategy called the Rödl nibble. This talk describes joint work with Devdatt Dubhashi of the Max Planck Institute, Saarbrücken, Germany.
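
    Not the distributed algorithm above, but a quick sequential baseline for comparison: greedy edge coloring via the line graph, which only guarantees at most 2Δ - 1 colors; the graph size and density below are arbitrary:

        # Sketch: sequential greedy edge coloring via the line graph (bound: <= 2*Delta - 1 colors).
        import networkx as nx

        G = nx.erdos_renyi_graph(50, 0.1, seed=0)
        coloring = nx.coloring.greedy_color(nx.line_graph(G), strategy="largest_first")

        num_colors = max(coloring.values()) + 1
        delta = max(d for _, d in G.degree())
        print("colors used:", num_colors, " max degree Delta:", delta)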

  12. Studying geomagnetic pulsation characteristics with the local approximation method

    NASA Astrophysics Data System (ADS)

    Getmanov, V. G.; Dabagyan, R. A.; Sidorov, R. V.

    2016-03-01

    A local approximation method based on piecewise sinusoidal models has been proposed in order to study the frequency and amplitude characteristics of geomagnetic pulsations registered at a network of magnetic observatories. It has been established that synchronous variations in the geomagnetic pulsation frequency in the specified frequency band can be studied with the use of calculations performed according to this method. The method was used to analyze the spectral-time structure of Pc3 geomagnetic pulsations registered at the network of equatorial observatories. Local approximation variants have been formed for single-channel and multichannel cases of estimating the geomagnetic pulsation frequency and amplitude, which made it possible to decrease estimation errors via filtering with moving weighted averaging.

  13. Producing approximate answers to database queries

    NASA Technical Reports Server (NTRS)

    Vrbsky, Susan V.; Liu, Jane W. S.

    1993-01-01

    We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.

  14. An approximation technique for jet impingement flow

    SciTech Connect

    Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.

    2015-03-10

    The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.

  15. Comparison of two Pareto frontier approximations

    NASA Astrophysics Data System (ADS)

    Berezkin, V. E.; Lotov, A. V.

    2014-09-01

    A method for comparing two approximations to the multidimensional Pareto frontier in nonconvex nonlinear multicriteria optimization problems, namely the inclusion functions method, is described. A feature of the method is that Pareto frontier approximations are compared by computing and comparing inclusion functions that show which fraction of points of one Pareto frontier approximation is contained in the neighborhood of the Edgeworth-Pareto hull approximation for the other Pareto frontier.
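
    A simplified sketch of the inclusion-function idea: the share of points of one frontier approximation that fall within a distance eps of the other. The full method uses neighborhoods of the Edgeworth-Pareto hull, which is not constructed here, and the two point sets and eps are made up for illustration:

        # Sketch: fraction of points of one frontier approximation within eps of the other.
        import numpy as np
        from scipy.spatial import cKDTree

        def inclusion_fraction(A, B, eps):
            dist, _ = cKDTree(B).query(A)       # nearest-neighbour distances from A to B
            return float(np.mean(dist <= eps))

        t = np.linspace(0.0, 1.0, 200)
        A = np.column_stack([t, 1 - t ** 2])    # two made-up frontier approximations
        B = np.column_stack([t, 1 - t ** 2 + 0.01 * np.sin(20 * t)])
        print(inclusion_fraction(A, B, 0.02), inclusion_fraction(B, A, 0.02))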

  16. Fractal Trigonometric Polynomials for Restricted Range Approximation

    NASA Astrophysics Data System (ADS)

    Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.

    2016-05-01

    One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.

  17. The Deep Space Network

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The objectives, functions, and organization of the Deep Space Network are summarized. Deep Space stations, ground communications, and network operations control capabilities are described. The network is designed for two-way communications with unmanned spacecraft traveling approximately 1600 km from Earth to the farthest planets in the solar system. It has provided tracking and data acquisition support for the following projects: Ranger, Surveyor, Mariner, Pioneer, Apollo, Helios, Viking, and the Lunar Orbiter.

  18. Albuquerque Basin seismic network

    USGS Publications Warehouse

    Jaksha, Lawrence H.; Locke, Jerry; Thompson, J.B.; Garcia, Alvin

    1977-01-01

    The U.S. Geological Survey has recently completed the installation of a seismic network around the Albuquerque Basin in New Mexico. The network consists of two seismometer arrays, a thirteen-station array monitoring an area of approximately 28,000 km² and an eight-element array monitoring the area immediately adjacent to the Albuquerque Seismological Laboratory. This report describes the instrumentation deployed in the network.

  19. A unified approach to the Darwin approximation

    SciTech Connect

    Krause, Todd B.; Apte, A.; Morrison, P. J.

    2007-10-15

    There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting.

  20. Approximate Analysis of Semiconductor Laser Arrays

    NASA Technical Reports Server (NTRS)

    Marshall, William K.; Katz, Joseph

    1987-01-01

    Simplified equation yields useful information on gains and output patterns. Theoretical method based on approximate waveguide equation enables prediction of lateral modes of gain-guided planar array of parallel semiconductor lasers. Equation for entire array solved directly using piecewise approximation of index of refraction by simple functions without the customary approximation based on coupled waveguide modes of individual lasers. Improved results yield better understanding of laser-array modes and help in development of well-behaved high-power semiconductor laser arrays.

  1. Piecewise linear approximation for hereditary control problems

    NASA Technical Reports Server (NTRS)

    Propst, Georg

    1990-01-01

    This paper presents finite-dimensional approximations for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems, when a quadratic cost integral must be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval, as well as in the case where it ranges over an infinite time interval. The arguments in the last case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense.

  2. Probabilistic Interaction Network of Evidence Algorithm and its Application to Complete Labeling of Peak Lists from Protein NMR Spectroscopy

    PubMed Central

    Bahrami, Arash; Assadi, Amir H.; Markley, John L.; Eghbalnia, Hamid R.

    2009-01-01

    The process of assigning a finite set of tags or labels to a collection of observations, subject to side conditions, is notable for its computational complexity. This labeling paradigm is of theoretical and practical relevance to a wide range of biological applications, including the analysis of data from DNA microarrays, metabolomics experiments, and biomolecular nuclear magnetic resonance (NMR) spectroscopy. We present a novel algorithm, called Probabilistic Interaction Network of Evidence (PINE), that achieves robust, unsupervised probabilistic labeling of data. The computational core of PINE uses estimates of evidence derived from empirical distributions of previously observed data, along with consistency measures, to drive a fictitious system M with Hamiltonian H to a quasi-stationary state that produces probabilistic label assignments for relevant subsets of the data. We demonstrate the successful application of PINE to a key task in protein NMR spectroscopy: that of converting peak lists extracted from various NMR experiments into assignments associated with probabilities for their correctness. This application, called PINE-NMR, is available from a freely accessible computer server (http://pine.nmrfam.wisc.edu). The PINE-NMR server accepts as input the sequence of the protein plus user-specified combinations of data corresponding to an extensive list of NMR experiments; it provides as output a probabilistic assignment of NMR signals (chemical shifts) to sequence-specific backbone and aliphatic side chain atoms plus a probabilistic determination of the protein secondary structure. PINE-NMR can accommodate prior information about assignments or stable isotope labeling schemes. As part of the analysis, PINE-NMR identifies, verifies, and rectifies problems related to chemical shift referencing or erroneous input data. PINE-NMR achieves robust and consistent results that have been shown to be effective in subsequent steps of NMR structure determination. PMID

  3. An Examination of New Paradigms for Spline Approximations.

    PubMed

    Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A

    2006-01-01

    Lavery splines are examined in the univariate and bivariate cases. In both instances relaxation-based algorithms for approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. In this paper, the univariate case, however, is investigated in greater detail so as to further the understanding of the bivariate case. PMID:27274917

  4. Spline approximations for nonlinear hereditary control systems

    NASA Technical Reports Server (NTRS)

    Daniel, P. L.

    1982-01-01

    A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.

  5. Quirks of Stirling's Approximation

    ERIC Educational Resources Information Center

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
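
    A quick numerical check of the point (not taken from the article): the naive form ln n! ≈ n ln n - n versus the fuller form that includes the (1/2) ln(2πn) term:

        # Quick check: ln n! versus Stirling's approximation with and without the sqrt term.
        import math

        for n in (10, 100, 1000):
            exact = math.lgamma(n + 1)                       # ln n!
            crude = n * math.log(n) - n
            full = crude + 0.5 * math.log(2 * math.pi * n)
            print(n, round(exact, 3), round(crude, 3), round(full, 3))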

  6. Taylor approximations of multidimensional linear differential systems

    NASA Astrophysics Data System (ADS)

    Lomadze, Vakhtang

    2016-06-01

    The Taylor approximations of a multidimensional linear differential system are of importance as they contain complete information about it. It is shown that in order to construct them it is sufficient to truncate the exponential trajectories only. A computation of the Taylor approximations is provided using purely algebraic means, without requiring explicit knowledge of the trajectories.

  7. Approximation for nonresonant beam target fusion reactivities

    SciTech Connect

    Mikkelsen, D.R.

    1988-11-01

    The beam target fusion reactivity for a monoenergetic beam in a Maxwellian target is approximately evaluated for nonresonant reactions. The approximation is accurate for the DD and TT fusion reactions to better than 4% for all beam energies up to 300 keV and all ion temperatures up to 2/3 of the beam energy. 12 refs., 1 fig., 1 tab.

  8. Computing Functions by Approximating the Input

    ERIC Educational Resources Information Center

    Goldberg, Mayer

    2012-01-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

  9. Diagonal Pade approximations for initial value problems

    SciTech Connect

    Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.

    1987-06-01

    Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab.
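
    For concreteness, the lowest diagonal Padé approximant to the propagator, exp(A dt) ≈ (I - A dt/2)^(-1)(I + A dt/2), applied to an arbitrary 2x2 test system and compared against the exact matrix exponential; this illustrates the approximant itself, not the explicit polynomial factoring described in the report:

        # Sketch: (1,1) diagonal Pade step versus the exact propagator expm(A*dt).
        import numpy as np
        from scipy.linalg import expm

        A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # arbitrary test system (harmonic oscillator)
        dt = 0.1
        I = np.eye(2)

        pade11 = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)
        print("error vs expm:", np.linalg.norm(pade11 - expm(dt * A)))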

  10. Inversion and approximation of Laplace transforms

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.

  11. An approximation for inverse Laplace transforms

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1981-01-01

    Programmable calculator runs simple finite-series approximation for Laplace transform inversions. Utilizing family of orthonormal functions, approximation is used for wide range of transforms, including those encountered in feedback control problems. Method works well as long as F(t) decays to zero as t approaches infinity and so is applicable to most physical systems.

  12. Linear radiosity approximation using vertex radiosities

    SciTech Connect

    Max, N. Lawrence Livermore National Lab., CA ); Allison, M. )

    1990-12-01

    Using radiosities computed at vertices, the radiosity across a triangle can be approximated by linear interpolation. We develop vertex-to-vertex form factors based on this linear radiosity approximation, and show how they can be computed efficiently using modern hardware-accelerated shading and z-buffer technology. 9 refs., 4 figs.

  13. Evolutionary reconstruction of networks

    NASA Astrophysics Data System (ADS)

    Ipsen, Mads; Mikhailov, Alexander S.

    2002-10-01

    Can a graph specifying the pattern of connections of a dynamical network be reconstructed from statistical properties of a signal generated by such a system? In this model study, we present a Metropolis algorithm for reconstruction of graphs from their Laplacian spectra. Through a stochastic process of mutations and selection, evolving test networks converge to a reference graph. Applying the method to several examples of random graphs, clustered graphs, and small-world networks, we show that the proposed stochastic evolution allows exact reconstruction of relatively small networks and yields good approximations in the case of large sizes.
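
    A stripped-down sketch of the spectral-matching idea: Metropolis moves on the edges of a test graph, accepted when they bring its Laplacian spectrum closer to that of a hidden reference graph. The graph sizes, inverse temperature and move set are illustrative assumptions rather than the parameters used in the paper:

        # Sketch: Metropolis edge moves driving a test graph toward a reference Laplacian spectrum.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        ref = nx.erdos_renyi_graph(12, 0.3, seed=1)
        target = np.sort(nx.laplacian_spectrum(ref))

        def cost(G):
            return np.linalg.norm(np.sort(nx.laplacian_spectrum(G)) - target)

        G = nx.gnm_random_graph(12, ref.number_of_edges(), seed=2)
        c, beta = cost(G), 50.0
        for _ in range(20000):
            H = G.copy()
            u, v = rng.choice(12, size=2, replace=False)
            if H.has_edge(u, v):
                H.remove_edge(u, v)
            else:
                H.add_edge(u, v)
            c_new = cost(H)
            if c_new < c or rng.random() < np.exp(-beta * (c_new - c)):  # Metropolis rule
                G, c = H, c_new

        print("final spectral distance:", c)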

  14. An approximate model for pulsar navigation simulation

    NASA Astrophysics Data System (ADS)

    Jovanovic, Ilija; Enright, John

    2016-02-01

    This paper presents an approximate model for the simulation of pulsar aided navigation systems. High fidelity simulations of these systems are computationally intensive and impractical for simulating periods of a day or more. Simulation of yearlong missions is done by abstracting navigation errors as periodic Gaussian noise injections. This paper presents an intermediary approximate model to simulate position errors for periods of several weeks, useful for building more accurate Gaussian error models. This is done by abstracting photon detection and binning, replacing it with a simple deterministic process. The approximate model enables faster computation of error injection models, allowing the error model to be inexpensively updated throughout a simulation. Testing of the approximate model revealed an optimistic performance prediction for non-millisecond pulsars with more accurate predictions for pulsars in the millisecond spectrum. This performance gap was attributed to noise which is not present in the approximate model but can be predicted and added to improve accuracy.

  15. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  16. APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD

    SciTech Connect

    Semerák, O.

    2015-02-10

    A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable—and very accurate—for practically solving the ray-deflection exercise.

  17. Detecting Gravitational Waves using Pade Approximants

    NASA Astrophysics Data System (ADS)

    Porter, E. K.; Sathyaprakash, B. S.

    1998-12-01

    We look at the use of Pade Approximants in defining a metric tensor for the inspiral waveform template manifold. By using this method we investigate the curvature of the template manifold and the number of templates needed to carry out a realistic search for a Gravitational Wave signal. By comparing this method with the normal use of Taylor Approximant waveforms we hope to show that (a) Pade Approximants are a superior method for calculating the inspiral waveform, and (b) the number of search templates needed, and hence computing power, is reduced.

  18. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.

  19. Approximate knowledge compilation: The first order case

    SciTech Connect

    Val, A. del

    1996-12-31

    Knowledge compilation procedures make a knowledge base more explicit so as to make inference with respect to the compiled knowledge base tractable or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) We present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm. (2) We show that both ground algorithms can be lifted to the first order case preserving their correctness for approximate compilation.

  20. Adiabatic approximation for nucleus-nucleus scattering

    SciTech Connect

    Johnson, R.C.

    2005-10-14

    Adiabatic approximations to few-body models of nuclear scattering are described with emphasis on reactions with deuterons and halo nuclei (frozen halo approximation) as projectiles. The different ways the approximation should be implemented in a consistent theory of elastic scattering, stripping and break-up are explained and the conditions for the theory's validity are briefly discussed. A formalism which links few-body models and the underlying many-body system is outlined and the connection between the adiabatic and CDCC methods is reviewed.

  1. Approximate Brueckner orbitals in electron propagator calculations

    SciTech Connect

    Ortiz, J.V.

    1999-12-01

    Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with the third-order algebraic diagrammatic construction [2ph-TDA, ADC(3)] and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.

  2. Information geometry of mean-field approximation.

    PubMed

    Tanaka, T

    2000-08-01

    I present a general theory of mean-field approximation based on information geometry and applicable not only to Boltzmann machines but also to wider classes of statistical models. Using perturbation expansion of the Kullback divergence (or Plefka expansion in statistical physics), a formulation of mean-field approximation of general orders is derived. It includes in a natural way the "naive" mean-field approximation and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics. PMID:10953246

  3. A Survey of Techniques for Approximate Computing

    DOE PAGES Beta

    Mittal, Sparsh

    2016-03-18

    Approximate computing trades off computation quality with the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.

  4. Adiabatic approximation for the density matrix

    NASA Astrophysics Data System (ADS)

    Band, Yehuda B.

    1992-05-01

    An adiabatic approximation for the Liouville density-matrix equation which includes decay terms is developed. The adiabatic approximation employs the eigenvectors of the non-normal Liouville operator. The approximation is valid when there exists a complete set of eigenvectors of the non-normal Liouville operator (i.e., the eigenvectors span the density-matrix space), the time rate of change of the Liouville operator is small, and an auxiliary matrix is nonsingular. Numerical examples are presented involving efficient population transfer in a molecule by stimulated Raman scattering, with the intermediate level of the molecule decaying on a time scale that is fast compared with the pulse durations of the pump and Stokes fields. The adiabatic density-matrix approximation can be simply used to determine the density matrix for atomic or molecular systems interacting with cw electromagnetic fields when spontaneous emission or other decay mechanisms prevail.

  5. Approximate probability distributions of the master equation

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Grima, Ramon

    2015-07-01

    Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.

  6. An approximation method for electrostatic Vlasov turbulence

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.

    1979-01-01

    Electrostatic Vlasov turbulence in a bounded spatial region is considered. An iterative approximation method with a proof of convergence is constructed. The method is non-linear and applicable to strong turbulence.

  7. Linear Approximation SAR Azimuth Processing Study

    NASA Technical Reports Server (NTRS)

    Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.

    1979-01-01

    A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratic varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focussed processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 IC's, 1.2 cubic feet of volume, and 350 watts of power for a single look, 4000 range cell azimuth processor with 25 meters resolution.

  8. Approximation concepts for efficient structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Miura, H.

    1976-01-01

    It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

  9. Some Recent Progress for Approximation Algorithms

    NASA Astrophysics Data System (ADS)

    Kawarabayashi, Ken-ichi

    We survey some recent progress on approximation algorithms. Our main focus is the following two problems, which have seen recent breakthroughs: the edge-disjoint paths problem and the graph coloring problem. These breakthroughs involve the following three ingredients that are quite central in approximation algorithms: (1) Combinatorial (graph theoretical) approach, (2) LP based approach and (3) Semi-definite programming approach. We also sketch how they are used to obtain these recent developments.

  10. Polynomial approximation of functions in Sobolev spaces

    NASA Technical Reports Server (NTRS)

    Dupont, T.; Scott, R.

    1980-01-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  11. Approximate Solutions Of Equations Of Steady Diffusion

    NASA Technical Reports Server (NTRS)

    Edmonds, Larry D.

    1992-01-01

    Rigorous analysis yields reliable criteria for "best-fit" functions. Improved "curve-fitting" method yields approximate solutions to differential equations of steady-state diffusion. Method applies to problems in which rates of diffusion depend linearly or nonlinearly on concentrations of diffusants, approximate solutions analytic or numerical, and boundary conditions of Dirichlet type, of Neumann type, or mixture of both types. Applied to equations for diffusion of charge carriers in semiconductors in which mobilities and lifetimes of charge carriers depend on concentrations.

  12. Polynomial approximation of functions in Sobolev spaces

    SciTech Connect

    Dupont, T.; Scott, R.

    1980-04-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  13. An improved proximity force approximation for electrostatics

    SciTech Connect

    Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.

    2012-08-15

    A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs of patches, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: The proximity force approximation (PFA) has been widely used in different areas. The PFA can be improved using a derivative expansion in the shape of the surfaces. We use the improved PFA to compute electrostatic forces between conductors. The results can be used as an analytic benchmark for numerical calculations in AFM. Insight is provided for people who use the PFA to compute nuclear and Casimir forces.

  14. Compressive Network Analysis

    PubMed Central

    Jiang, Xiaoye; Yao, Yuan; Liu, Han; Guibas, Leonidas

    2014-01-01

    Modern data acquisition routinely produces massive amounts of network data. Though many methods and models have been proposed to analyze such data, the research of network data is largely disconnected from the classical theory of statistical learning and signal processing. In this paper, we present a new framework for modeling network data, which connects two seemingly different areas: network data analysis and compressed sensing. From a nonparametric perspective, we model an observed network using a large dictionary. In particular, we consider the network clique detection problem and show connections between our formulation and a new algebraic tool, namely Radon basis pursuit in homogeneous spaces. Such a connection allows us to identify rigorous recovery conditions for clique detection problems. Though this paper is mainly conceptual, we also develop practical approximation algorithms for solving empirical problems and demonstrate their usefulness on real-world datasets. PMID:25620806

  15. National Highway Planning Network

    Energy Science and Technology Software Center (ESTSC)

    1992-02-02

    NHPN, the National Highway Planning Network, is a database of major highways in the continental United States that is used for national-level analyses of highway transportation issues that require use of a network, such as studies of highway performance, network design, social and environmental impacts of transportation, vehicle routing and scheduling, and mapping. The network is based on a set of roadways digitized by the U.S. Geological Survey (USGS) from the 1980 National Atlas and has been enhanced with additional roads, attribute detail, and topological error corrections to produce a true analytic network. All data have been derived from or checked against information obtained from state and Federal governmental agencies. Two files comprise this network: one describing links and the other nodes. This release, NHPN1.0, contains 44,960 links and 28,512 nodes representing approximately 380,000 miles of roadway.

  16. Bicriteria network design problems

    SciTech Connect

    Marathe, M.V.; Ravi, R.; Sundaram, R.; Ravi, S.S.; Rosenkrantz, D.J.; Hunt, H.B. III

    1994-12-31

    We study several bicriteria network design problems phrased as follows: given an undirected graph and two minimization objectives with a budget specified on one objective, find a subgraph satisfying certain connectivity requirements that minimizes the second objective subject to the budget on the first. Define an (α, β)-approximation algorithm as a polynomial-time algorithm that produces a solution in which the first objective value is at most α times the budget, and the second objective value is at most β times the minimum cost of a network obeying the budget on the first objective. We present the first approximation algorithms for bicriteria problems obtained by combining classical minimization objectives such as the total edge cost of the network, the diameter of the network and a weighted generalization of the maximum degree of any node in the network. We first develop some formalism related to bicriteria problems that leads to a clean way to state bicriteria approximation results. Secondly, when the two objectives are similar but only differ based on the cost function under which they are computed, we present a general parametric search technique that yields approximation algorithms by reducing the problem to one of minimizing a single objective of the same type. Thirdly, we present an (O(log n), O(log n))-approximation algorithm for finding a diameter-constrained minimum cost spanning tree of an undirected graph on n nodes, generalizing the notion of shallow, light trees and light approximate shortest-path trees that have been studied before. Finally, for the class of treewidth-bounded graphs, we provide pseudopolynomial-time algorithms for a number of bicriteria problems using dynamic programming. These pseudopolynomial-time algorithms can be converted to fully polynomial-time approximation schemes using a scaling technique.

  17. Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas

    PubMed Central

    Bedford, Tim; Daneshkhah, Alireza

    2015-01-01

    Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing work of Cooke, Bedford, Kurowicka, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets. PMID:26332240

  18. Novel determination of differential-equation solutions: universal approximation method

    NASA Astrophysics Data System (ADS)

    Leephakpreeda, Thananchai

    2002-09-01

    In a conventional approach to numerical computation, finite difference and finite element methods are usually implemented to determine the solution of a set of differential equations (DEs). This paper presents a novel approach to solving DEs by applying the universal approximation method through an artificial intelligence utility in a simple way. In the proposed method, a neural network model (NNM) and a fuzzy linguistic model (FLM) are applied as universal approximators for any nonlinear continuous functions. With this outstanding capability, the solutions of DEs can be approximated by an appropriate NNM or FLM within arbitrary accuracy. The adjustable parameters of such an NNM or FLM are determined by implementing an optimization algorithm. This systematic search yields sub-optimal adjustable parameters of the NNM and FLM that satisfy the governing equations, with minimum residual errors, subject to the constraints imposed by the boundary conditions of the DEs. The simulation results are investigated for the viability of efficiently determining the solutions of ordinary and partial nonlinear DEs.
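    The following is a minimal sketch of the universal-approximation idea for DEs (an illustration under assumed settings, not the paper's NNM/FLM implementation or its optimization algorithm): a tiny neural network parameterizes a trial solution of y' = -y with y(0) = 1, and its parameters are fitted by minimizing the squared residual at collocation points.

```python
import numpy as np

# Sketch: approximate the solution of y' = -y, y(0) = 1 on [0, 1] with a small
# neural network used as a universal approximator.  The trial form
# y(x) = 1 + x * N(x; theta) satisfies the boundary condition by construction;
# the parameters are fitted by minimizing the squared ODE residual.
# Learning rate, network size, and iteration count are illustrative choices.

rng = np.random.default_rng(0)
H = 6                                      # hidden units
theta = 0.1 * rng.standard_normal(3 * H)   # packed parameters [w, b, v]
xs = np.linspace(0.0, 1.0, 21)             # collocation points

def net(x, p):
    w, b, v = p[:H], p[H:2 * H], p[2 * H:]
    return np.tanh(np.outer(x, w) + b) @ v

def trial(x, p):                           # trial solution with y(0) = 1 built in
    return 1.0 + x * net(x, p)

def loss(p, h=1e-4):
    y = trial(xs, p)
    dy = (trial(xs + h, p) - trial(xs - h, p)) / (2 * h)
    return np.mean((dy + y) ** 2)          # residual of y' + y = 0

def grad(p, h=1e-6):                       # finite-difference gradient
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p); e[i] = h
        g[i] = (loss(p + e) - loss(p - e)) / (2 * h)
    return g

for _ in range(3000):                      # plain gradient descent
    theta -= 0.1 * grad(theta)

print("max abs error vs exp(-x):", np.abs(trial(xs, theta) - np.exp(-xs)).max())
```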

  19. Multi-level methods and approximating distribution functions

    NASA Astrophysics Data System (ADS)

    Wilson, D.; Baker, R. E.

    2016-07-01

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie's direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie's direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146-179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
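    As a concrete illustration of the exact-simulation baseline mentioned above (a sketch with arbitrary illustrative rate constants, not the multi-level estimator itself), Gillespie's direct method for a single-species birth-death network can be written as follows.

```python
import numpy as np

# Sketch of Gillespie's direct method (exact SSA) for a single-species
# birth-death network:  0 --k1--> X,  X --k2--> 0.  The multi-level estimator
# discussed above would combine many coarse (tau-leap) paths with a few exact
# paths like this one; only the exact simulator is sketched here.

def gillespie_birth_death(k1=10.0, k2=0.1, x0=0, t_end=50.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a = np.array([k1, k2 * x])        # reaction propensities
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)    # waiting time to the next reaction
        r = rng.random() * a0
        x += 1 if r < a[0] else -1        # choose which reaction fired
        times.append(t); states.append(x)
    return np.array(times), np.array(states)

t, x = gillespie_birth_death()
print("final copy number:", x[-1], "(stationary mean is k1/k2 = 100)")
```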

  20. Parallel SVD updating using approximate rotations

    NASA Astrophysics Data System (ADS)

    Goetze, Juergen; Rieder, Peter; Nossek, J. A.

    1995-06-01

    In this paper a parallel implementation of the SVD-updating algorithm using approximate rotations is presented. In its original form the SVD-updating algorithm had numerical problems if no reorthogonalization steps were applied. Representing the orthogonal matrix V (right singular vectors) using its parameterization in terms of the rotation angles of n(n - 1)/2 plane rotations, these reorthogonalization steps can be avoided during the SVD-updating algorithm. This results in an SVD-updating algorithm where all computations (matrix-vector multiplication, QRD-updating, Kogbetliantz's algorithm) are entirely based on the evaluation and application of orthogonal plane rotations. Therefore, in this form the SVD-updating algorithm is amenable to an implementation using CORDIC-based approximate rotations. Using CORDIC-based approximate rotations, the n(n - 1)/2 rotations representing V (as well as all other rotations) are only computed to a certain approximation accuracy (in the basis arctan 2^{-i}). All necessary computations required during the SVD-updating algorithm (exclusively rotations) are executed with the same accuracy, i.e., only r << w (w: wordlength) elementary orthonormal μ-rotations are used per plane rotation. Simulations show the efficiency of the implementation using CORDIC-based approximate rotations.
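    The essence of a CORDIC-based approximate rotation can be sketched as follows (a generic illustration of the classic CORDIC recurrence with scaling compensation, not the exact scaling-free μ-rotations of the paper): the target angle is approximated by only r micro-rotations with angles arctan 2^{-i}, so each plane rotation costs r shift-and-add steps.

```python
import numpy as np

# Sketch of a CORDIC-style approximate plane rotation: the target angle is
# approximated by r micro-rotations with angles arctan(2^-i), i = 0..r-1.
# This is the classic CORDIC recurrence with explicit scaling compensation,
# not the mu-rotation scheme of the paper.

def cordic_rotate(x, y, phi, r=8):
    z = phi
    scale = 1.0
    for i in range(r):
        d = 1.0 if z >= 0 else -1.0       # rotate toward the remaining angle
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * np.arctan(2.0**-i)
        scale *= np.sqrt(1.0 + 4.0**-i)   # each micro-rotation stretches by this
    return x / scale, y / scale           # compensate the accumulated scaling

phi = 0.7
xr, yr = cordic_rotate(1.0, 0.0, phi, r=8)
print("approx:", (xr, yr))
print("exact: ", (np.cos(phi), np.sin(phi)))
```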

  1. 'LTE-diffusion approximation' for arc calculations

    NASA Astrophysics Data System (ADS)

    Lowke, J. J.; Tanaka, M.

    2006-08-01

    This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on De/W, where De is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations, which include diffusion of charges, agree with experimental results of current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode.

  2. Separable approximations of two-body interactions

    NASA Astrophysics Data System (ADS)

    Haidenbauer, J.; Plessas, W.

    1983-01-01

    We perform a critical discussion of the efficiency of the Ernst-Shakin-Thaler method for a separable approximation of arbitrary two-body interactions by a careful examination of separable 3S1-3D1 N-N potentials that were constructed via this method by Pieper. Not only the on-shell properties of these potentials are considered, but also a comparison is made of their off-shell characteristics relative to the Reid soft-core potential. We point out a peculiarity in Pieper's application of the Ernst-Shakin-Thaler method, which leads to a resonant-like behavior of his potential 3SD1D. It is indicated where care has to be taken in order to circumvent drawbacks inherent in the Ernst-Shakin-Thaler separable approximation scheme. NUCLEAR REACTIONS Critical discussion of the Ernst-Shakin-Thaler separable approximation method. Pieper's separable N-N potentials examined on shell and off shell.

  3. Approximate solutions of the hyperbolic Kepler equation

    NASA Astrophysics Data System (ADS)

    Avendano, Martín; Martín-Molina, Verónica; Ortigas-Galindo, Jorge

    2015-12-01

    We provide an approximate zero $\widetilde{S}(g,L)$ for the hyperbolic Kepler equation $S - g\,\mathrm{arcsinh}(S) - L = 0$ for $g \in (0,1)$ and $L \in [0,\infty)$. We prove, by using Smale's $\alpha$-theory, that Newton's method starting at our approximate zero produces a sequence that converges to the actual solution $S(g,L)$ at quadratic speed, i.e. if $S_n$ is the value obtained after $n$ iterations, then $|S_n - S| \le 0.5^{2^n - 1}|\widetilde{S} - S|$. The approximate zero $\widetilde{S}(g,L)$ is a piecewise-defined function involving several linear expressions and one with cubic and square roots. In bounded regions of $(0,1) \times [0,\infty)$ that exclude a small neighborhood of $g = 1$, $L = 0$, we also provide a method to construct simpler starters involving only constants.
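    A minimal sketch of the underlying Newton iteration is given below; the crude starter S0 = L is an illustrative assumption, not the paper's certified piecewise starter.

```python
import numpy as np

# Sketch of Newton's method on the hyperbolic Kepler equation
#   f(S) = S - g * arcsinh(S) - L = 0,   g in (0,1),  L >= 0.
# The starter S0 = L is a crude illustrative choice; the paper's piecewise
# starter is designed so that Smale's alpha-theory certifies quadratic
# convergence from the first iteration.

def solve_hyperbolic_kepler(g, L, tol=1e-14, max_iter=50):
    S = L                                     # crude starter (not the paper's)
    for _ in range(max_iter):
        f = S - g * np.arcsinh(S) - L
        fp = 1.0 - g / np.sqrt(1.0 + S * S)   # f'(S) > 0 for g < 1
        step = f / fp
        S -= step
        if abs(step) < tol * max(1.0, abs(S)):
            break
    return S

g, L = 0.5, 3.0
S = solve_hyperbolic_kepler(g, L)
print(S, S - g * np.arcsinh(S) - L)           # root and residual (~0)
```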

  4. Ancilla-approximable quantum state transformations

    SciTech Connect

    Blass, Andreas; Gurevich, Yuri

    2015-04-15

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider not only approximation to within a specified positive ε, but also the question of arbitrarily close approximation.

  5. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies of Huckle and Grote and of Chow and Saad showed that the sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense that their performance is not independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically vary in a piecewise smooth way. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.

  6. Approximation methods in gravitational-radiation theory

    NASA Technical Reports Server (NTRS)

    Will, C. M.

    1986-01-01

    The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.

  7. Faddeev random-phase approximation for molecules

    SciTech Connect

    Degroote, Matthias; Van Neck, Dimitri; Barbieri, Carlo

    2011-04-15

    The Faddeev random-phase approximation is a Green's function technique that makes use of Faddeev equations to couple the motion of a single electron to the two-particle-one-hole and two-hole-one-particle excitations. This method goes beyond the frequently used third-order algebraic diagrammatic construction method: all diagrams involving the exchange of phonons in the particle-hole and particle-particle channel are retained, but the phonons are now described at the level of the random-phase approximation, which includes ground-state correlations, rather than at the Tamm-Dancoff approximation level, where ground-state correlations are excluded. Previously applied to atoms, this paper presents results for small molecules at equilibrium geometry.

  8. On the Accuracy of the MINC approximation

    SciTech Connect

    Lai, C.H.; Pruess, K.; Bodvarsson, G.S.

    1986-02-01

    The method of "multiple interacting continua" is based on the assumption that changes in thermodynamic conditions of rock matrix blocks are primarily controlled by the distance from the nearest fracture. The accuracy of this assumption was evaluated for regularly shaped (cubic and rectangular) rock blocks with uniform initial conditions, which are subjected to a step change in boundary conditions on the surface. Our results show that pressures (or temperatures) predicted from the MINC approximation may deviate from the exact solutions by as much as 10 to 15% at certain points within the blocks. However, when fluid (or heat) flow rates are integrated over the entire block surface, the MINC approximation and the exact solution agree to better than 1%. This indicates that the MINC approximation can accurately represent transient inter-porosity flow in fractured porous media, provided that matrix blocks are indeed subjected to nearly uniform boundary conditions at all times.

  9. Exponential Approximations Using Fourier Series Partial Sums

    NASA Technical Reports Server (NTRS)

    Banerjee, Nana S.; Geer, James F.

    1997-01-01

    The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^{-M-2}), and the associated jump of the kth derivative of f is approximated to within O(N^{-M-1+k}), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
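    The Gibbs behaviour that the method exploits and then removes is easy to reproduce; the sketch below (an illustration only, omitting the singular basis functions and least-squares refinement of the paper) computes a truncated Fourier series of a square wave and shows the characteristic overshoot near the jump.

```python
import numpy as np

# Sketch: partial Fourier sum of a 2*pi-periodic square wave, illustrating
# the Gibbs overshoot near the jump that the reconstruction method above
# detects and removes.

N = 64                                    # number of retained harmonics
x = np.linspace(-np.pi, np.pi, 2001)
# Square wave f(x) = sign(x); its Fourier series has only odd sine terms.
partial = np.zeros_like(x)
for k in range(1, 2 * N, 2):
    partial += (4.0 / (np.pi * k)) * np.sin(k * x)

overshoot = partial.max()                 # ~1.18 regardless of N (Gibbs phenomenon)
print("max of partial sum:", overshoot, "(true max of f is 1)")
```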

  10. [Diagnostics of approximal caries - literature review].

    PubMed

    Berczyński, Paweł; Gmerek, Anna; Buczkowska-Radlińska, Jadwiga

    2015-01-01

    The most important issue in modern cariology is the early diagnostics of carious lesions, because only early detected lesions can be treated with as little intervention as possible. This is extremely difficult on approximal surfaces because of their anatomy, late onset of pain, and very few clinical symptoms. Modern diagnostic methods make dentists' everyday work easier, often detecting lesions unseen during visual examination. This work presents a review of the literature on the subject of modern diagnostic methods that can be used to detect approximal caries. PMID:27344873

  11. Approximate convective heating equations for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Zoby, E. V.; Moss, J. N.; Sutton, K.

    1979-01-01

    Laminar and turbulent heating-rate equations appropriate for engineering predictions of the convective heating rates about blunt reentry spacecraft at hypersonic conditions are developed. The approximate methods are applicable to both nonreacting and reacting gas mixtures for either constant or variable-entropy edge conditions. A procedure which accounts for variable-entropy effects and is not based on mass balancing is presented. Results of the approximate heating methods are in good agreement with existing experimental results as well as boundary-layer and viscous-shock-layer solutions.

  12. Congruence Approximations for Entropy Endowed Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.

  13. Characterizing inflationary perturbations: The uniform approximation

    SciTech Connect

    Habib, Salman; Heinen, Andreas; Heitmann, Katrin; Jungman, Gerard; Molina-Paris, Carmen

    2004-10-15

    The spectrum of primordial fluctuations from inflation can be obtained using a mathematically controlled, and systematically extendable, uniform approximation. Closed-form expressions for power spectra and spectral indices may be found without making explicit slow-roll assumptions. Here we provide details of our previous calculations, extend the results beyond leading-order in the approximation, and derive general error bounds for power spectra and spectral indices. Already at next-to-leading-order, the errors in calculating the power spectrum are less than a percent. This meets the accuracy requirement for interpreting next-generation cosmic microwave background observations.

  14. HALOGEN: Approximate synthetic halo catalog generator

    NASA Astrophysics Data System (ADS)

    Avila Perez, Santiago; Murray, Steven

    2015-05-01

    HALOGEN generates approximate synthetic halo catalogs. Written in C, it decomposes the problem of generating cosmological tracer distributions (e.g., halos) into four steps: generating an approximate density field, generating the required number of tracers from a CDF over mass, placing the tracers on field particles according to a bias scheme dependent on local density, and assigning velocities to the tracers based on velocities of local particles. It also implements a default set of four models for these steps. HALOGEN uses 2LPTic (ascl:1201.005) and CUTE (ascl:1505.016); the software is flexible and can be adapted to varying cosmologies and simulation specifications.

  15. ANALOG QUANTUM NEURON FOR FUNCTIONS APPROXIMATION

    SciTech Connect

    A. EZHOV; A. KHROMOV; G. BERMAN

    2001-05-01

    We describe a system able to perform universal stochastic approximations of continuous multivariable functions in both a neuron-like and a quantum manner. The implementation of this model in the form of a multi-barrier multiple-slit system has been proposed earlier. For the simplified waveguide variant of this model it is proved that the system can approximate any continuous function of many variables. This theorem is also applied to the 2-input quantum neural model analogous to the schemes developed for quantum control.

  16. Bicriteria network design problems

    SciTech Connect

    Marathe, M.V.; Ravi, R.; Sundaram, R.; Ravi, S.S.; Rosenkrantz, D.J.; Hunt, H.B. III

    1997-11-20

    The authors study a general class of bicriteria network design problems. A generic problem in this class is as follows: Given an undirected graph and two minimization objectives (under different cost functions), with a budget specified on the first, find a subgraph from a given subgraph class that minimizes the second objective subject to the budget on the first. They consider three different criteria -- the total edge cost, the diameter and the maximum degree of the network. Here, they present the first polynomial-time approximation algorithms for a large class of bicriteria network design problems for the above mentioned criteria. The following general types of results are presented. First, they develop a framework for bicriteria problems and their approximations. Second, when the two criteria are the same, they present a black-box parametric search technique. This black box takes as input an (approximation) algorithm for the unicriterion situation and generates an approximation algorithm for the bicriteria case with only a constant factor loss in the performance guarantee. Third, when the two criteria are the diameter and the total edge costs, they use a cluster-based approach to devise approximation algorithms. The solutions violate both criteria by a logarithmic factor. Finally, for the class of treewidth-bounded graphs, they provide pseudopolynomial-time algorithms for a number of bicriteria problems using dynamic programming. The authors show how these pseudopolynomial-time algorithms can be converted to fully polynomial-time approximation schemes using a scaling technique.
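    The flavour of the parametric search idea can be sketched as follows (a generic Lagrangian-style illustration with made-up costs, not the authors' construction, and with no approximation guarantee claimed): a unicriterion black box, here a minimum spanning tree routine, is run on the combined cost c2 + λ·c1 while λ is adjusted by bisection so that the budget on c1 is respected.

```python
import random

# Illustration (an assumption, not the paper's algorithm): parametric search
# over a Lagrangian combination of two edge costs.  The unicriterion black box
# is Kruskal's minimum spanning tree; we bisect on the multiplier lam so that
# the tree minimising  c2 + lam * c1  respects the budget on c1.

def kruskal(n, edges, weight):
    parent = list(range(n))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u
    tree = []
    for u, v, c1, c2 in sorted(edges, key=weight):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, c1, c2))
    return tree

def bicriteria_tree(n, edges, budget_c1, iters=40):
    lo, hi = 0.0, 1e6
    best = None
    for _ in range(iters):                  # bisection on the multiplier
        lam = 0.5 * (lo + hi)
        tree = kruskal(n, edges, weight=lambda e: e[3] + lam * e[2])
        cost1 = sum(e[2] for e in tree)
        if cost1 <= budget_c1:
            best, hi = tree, lam            # feasible: ease off the penalty on c1
        else:
            lo = lam                        # infeasible: penalise c1 more
    return best

random.seed(1)
n = 8
edges = [(u, v, random.randint(1, 9), random.randint(1, 9))
         for u in range(n) for v in range(u + 1, n)]
tree = bicriteria_tree(n, edges, budget_c1=25)
if tree is not None:
    print("c1 =", sum(e[2] for e in tree), " c2 =", sum(e[3] for e in tree))
```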

  17. Spatio-temporal analysis of brain electrical activity in epilepsy based on cellular nonlinear networks

    NASA Astrophysics Data System (ADS)

    Gollas, Frank; Tetzlaff, Ronald

    2009-05-01

    Spatio-temporal autoregressive filter models are considered for the prediction of EEG signal values. In this way, signal feature values for successive, short, quasi-stationary segments of brain electrical activity can be obtained, with the objective of detecting distinct changes prior to impending epileptic seizures. Furthermore, long-term recordings gained during presurgical diagnostics in temporal lobe epilepsy are analyzed and the predictive performance of the extracted features is evaluated statistically. To this end, a Receiver Operating Characteristic analysis is considered, assessing the distinguishability between distributions of supposed preictal and interictal periods.

  18. Progressive Image Coding by Hierarchical Linear Approximation.

    ERIC Educational Resources Information Center

    Wu, Xiaolin; Fang, Yonggang

    1994-01-01

    Proposes a scheme of hierarchical piecewise linear approximation as an adaptive image pyramid. A progressive image coder comes naturally from the proposed image pyramid. The new pyramid is semantically more powerful than regular tessellation but syntactically simpler than free segmentation. This compromise between adaptability and complexity…

  19. Median Approximations for Genomes Modeled as Matrices.

    PubMed

    Zanetti, Joao Paulo Pereira; Biller, Priscila; Meidanis, Joao

    2016-04-01

    The genome median problem is an important problem in phylogenetic reconstruction under rearrangement models. It can be stated as follows: Given three genomes, find a fourth that minimizes the sum of the pairwise rearrangement distances between it and the three input genomes. In this paper, we model genomes as matrices and study the matrix median problem using the rank distance. It is known that, for any metric distance, at least one of the corners is a [Formula: see text]-approximation of the median. Our results allow us to compute up to three additional matrix median candidates, all of them with approximation ratios at least as good as the best corner, when the input matrices come from genomes. We also show a class of instances where our candidates are optimal. From the application point of view, it is usually more interesting to locate medians farther from the corners, and therefore, these new candidates are potentially more useful. In addition to the approximation algorithm, we suggest a heuristic to get a genome from an arbitrary square matrix. This is useful to translate the results of our median approximation algorithm back to genomes, and it has good results in our tests. To assess the relevance of our approach in the biological context, we ran simulated evolution tests and compared our solutions to those of an exact DCJ median solver. The results show that our method is capable of producing very good candidates. PMID:27072561

  20. Approximate analysis of electromagnetically coupled microstrip dipoles

    NASA Astrophysics Data System (ADS)

    Kominami, M.; Yakuwa, N.; Kusaka, H.

    1990-10-01

    A new dynamic analysis model for analyzing electromagnetically coupled (EMC) microstrip dipoles is proposed. The formulation is based on an approximate treatment of the dielectric substrate. Calculations of the equivalent impedance of two different EMC dipole configurations are compared with measured data and full-wave solutions. The agreement is very good.

  1. Approximations For Controls Of Hereditary Systems

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1988-01-01

    Convergence properties of controls, trajectories, and feedback kernels analyzed. Report discusses use of factorization techniques to approximate optimal feedback gains in finite-time, linear-regulator/quadratic-cost-function problem of system governed by retarded functional differential equations (RFDEs) with control delays. Presents approach to factorization based on discretization of state penalty, leading to simple structure for feedback control law.

  2. Revisiting Twomey's approximation for peak supersaturation

    NASA Astrophysics Data System (ADS)

    Shipway, B. J.

    2015-04-01

    Twomey's seminal 1959 paper provided lower and upper bound approximations to the estimation of peak supersaturation within an updraft and thus provides the first closed expression for the number of nucleated cloud droplets. The form of this approximation is simple, but provides a surprisingly good estimate and has subsequently been employed in more sophisticated treatments of nucleation parametrization. In the current paper, we revisit the lower bound approximation of Twomey and make a small adjustment that can be used to obtain a more accurate calculation of peak supersaturation under all potential aerosol loadings and thermodynamic conditions. In order to make full use of this improved approximation, the underlying integro-differential equation for supersaturation evolution and the condition for calculating peak supersaturation are examined. A simple rearrangement of the algebra allows for an expression to be written down that can then be solved with a single lookup table with only one independent variable for an underlying lognormal aerosol population. While multimodal aerosol with N different dispersion characteristics requires 2N+1 inputs to calculate the activation fraction, only N of these one-dimensional lookup tables are needed. No additional information is required in the lookup table to deal with additional chemical, physical or thermodynamic properties. The resulting implementation provides a relatively simple, yet computationally cheap, physically based parametrization of droplet nucleation for use in climate and Numerical Weather Prediction models.

  3. Padé approximations and diophantine geometry

    PubMed Central

    Chudnovsky, D. V.; Chudnovsky, G. V.

    1985-01-01

    Using methods of Padé approximations we prove a converse to Eisenstein's theorem on the boundedness of denominators of coefficients in the expansion of an algebraic function, for classes of functions, parametrized by meromorphic functions. This result is applied to the Tate conjecture on the effective description of isogenies for elliptic curves. PMID:16593552

  4. Achievements and Problems in Diophantine Approximation Theory

    NASA Astrophysics Data System (ADS)

    Sprindzhuk, V. G.

    1980-08-01

    Contents: Introduction. I. Metrical theory of approximation on manifolds (§ 1. The basic problem. § 2. Brief survey of results. § 3. The principal conjecture). II. Metrical theory of transcendental numbers (§ 1. Mahler's classification of numbers. § 2. Metrical characterization of numbers with a given type of approximation. § 3. Further problems). III. Approximation of algebraic numbers by rationals (§ 1. Simultaneous approximations. § 2. The inclusion of p-adic metrics. § 3. Effective improvements of Liouville's inequality). IV. Estimates of linear forms in logarithms of algebraic numbers (§ 1. The basic method. § 2. Survey of results. § 3. Estimates in the p-adic metric). V. Diophantine equations (§ 1. Ternary exponential equations. § 2. The Thue and Thue-Mahler equations. § 3. Equations of hyperelliptic type. § 4. Algebraic-exponential equations). VI. The arithmetic structure of polynomials and the class number (§ 1. The greatest prime divisor of a polynomial in one variable. § 2. The greatest prime divisor of a polynomial in two variables. § 3. Square-free divisors of polynomials and the class number. § 4. The general problem of the size of the class number). Conclusion. References.

  5. Approximation of virus structure by icosahedral tilings.

    PubMed

    Salthouse, D G; Indelicato, G; Cermelli, P; Keef, T; Twarock, R

    2015-07-01

    Viruses are remarkable examples of order at the nanoscale, exhibiting protein containers that in the vast majority of cases are organized with icosahedral symmetry. Janner used lattice theory to provide blueprints for the organization of material in viruses. An alternative approach is provided here in terms of icosahedral tilings, motivated by the fact that icosahedral symmetry is non-crystallographic in three dimensions. In particular, a numerical procedure is developed to approximate the capsid of icosahedral viruses by icosahedral tiles via projection of high-dimensional tiles based on the cut-and-project scheme for the construction of three-dimensional quasicrystals. The goodness of fit of our approximation is assessed using techniques related to the theory of polygonal approximation of curves. The approach is applied to a number of viral capsids and it is shown that detailed features of the capsid surface can indeed be satisfactorily described by icosahedral tilings. This work complements previous studies in which the geometry of the capsid is described by point sets generated as orbits of extensions of the icosahedral group, as such point sets are by construction related to the vertex sets of icosahedral tilings. The approximations of virus geometry derived here can serve as coarse-grained models of viral capsids as a basis for the study of virus assembly and structural transitions of viral capsids, and also provide a new perspective on the design of protein containers for nanotechnology applications. PMID:26131897

  6. Parameter Choices for Approximation by Harmonic Splines

    NASA Astrophysics Data System (ADS)

    Gutting, Martin

    2016-04-01

    The approximation by harmonic trial functions allows the construction of the solution of boundary value problems in geoscience, e.g., in terms of harmonic splines. Due to their localizing properties regional modeling or the improvement of a global model in a part of the Earth's surface is possible with splines. Fast multipole methods have been developed for some cases of the occurring kernels to obtain a fast matrix-vector multiplication. The main idea of the fast multipole algorithm consists of a hierarchical decomposition of the computational domain into cubes and a kernel approximation for the more distant points. This reduces the numerical effort of the matrix-vector multiplication from quadratic to linear in reference to the number of points for a prescribed accuracy of the kernel approximation. The application of the fast multipole method to spline approximation which also allows the treatment of noisy data requires the choice of a smoothing parameter. We investigate different methods to (ideally automatically) choose this parameter with and without prior knowledge of the noise level. Thereby, the performance of these methods is considered for different types of noise in a large simulation study. Applications to gravitational field modeling are presented as well as the extension to boundary value problems where the boundary is the known surface of the Earth itself.
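    One standard, noise-level-free way to pick such a smoothing parameter is generalized cross validation (GCV); the sketch below is a generic illustration on a ridge-regression smoother (an assumption for illustration, not the harmonic-spline and fast-multipole code of the study), evaluating the GCV score over a grid of candidate parameters.

```python
import numpy as np

# Generic illustration: choosing a smoothing parameter by generalized cross
# validation for a linear smoother y_hat = S(lam) y, here plain ridge
# regression.  GCV requires no prior knowledge of the noise level.

rng = np.random.default_rng(3)
n, p = 80, 30
A = rng.standard_normal((n, p))
coef = np.zeros(p); coef[:5] = [3, -2, 1.5, 1, -1]
y = A @ coef + 0.5 * rng.standard_normal(n)       # noisy observations

def gcv_score(lam):
    S = A @ np.linalg.solve(A.T @ A + lam * np.eye(p), A.T)   # smoother matrix
    resid = y - S @ y
    return (resid @ resid / n) / (1.0 - np.trace(S) / n) ** 2

lams = np.logspace(-3, 3, 61)
best = min(lams, key=gcv_score)
print("GCV-selected smoothing parameter:", best)
```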

  7. Can Distributional Approximations Give Exact Answers?

    ERIC Educational Resources Information Center

    Griffiths, Martin

    2013-01-01

    Some mathematical activities and investigations for the classroom or the lecture theatre can appear rather contrived. This cannot, however, be levelled at the idea given here, since it is based on a perfectly sensible question concerning distributional approximations that was posed by an undergraduate student. Out of this simple question, and…

  8. Large Hierarchies from Approximate R Symmetries

    SciTech Connect

    Kappl, Rolf; Ratz, Michael; Schmidt-Hoberg, Kai; Nilles, Hans Peter; Ramos-Sanchez, Saul; Vaudrevange, Patrick K. S.

    2009-03-27

    We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales.

  9. An approximate classical unimolecular reaction rate theory

    NASA Astrophysics Data System (ADS)

    Zhao, Meishan; Rice, Stuart A.

    1992-05-01

    We describe a classical theory of unimolecular reaction rate which is derived from the analysis of Davis and Gray by use of simplifying approximations. These approximations concern the calculation of the locations of, and the fluxes of phase points across, the bottlenecks to fragmentation and to intramolecular energy transfer. The bottleneck to fragment separation is represented as a vibration-rotation state dependent separatrix, which approximation is similar to but extends and improves the approximations for the separatrix introduced by Gray, Rice, and Davis and by Zhao and Rice. The novel feature in our analysis is the representation of the bottlenecks to intramolecular energy transfer as dividing surfaces in phase space; the locations of these dividing surfaces are determined by the same conditions as locate the remnants of robust tori with frequency ratios related to the golden mean (in a two degree of freedom system these are the cantori). The flux of phase points across each dividing surface is calculated with an analytic representation instead of a stroboscopic mapping. The rate of unimolecular reaction is identified with the net rate at which phase points escape from the region of quasiperiodic bounded motion to the region of free fragment motion by consecutively crossing the dividing surfaces for intramolecular energy exchange and the separatrix. This new theory generates predictions of the rates of predissociation of the van der Waals molecules HeI2, NeI2 and ArI2 which are in very good agreement with available experimental data.

  10. Approximation and compression with sparse orthonormal transforms.

    PubMed

    Sezer, Osman Gokhan; Guleryuz, Onur G; Altunbasak, Yucel

    2015-08-01

    We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loeve transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show consistent increase in compression and approximation performance compared with conventional methods. PMID:25823033
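    The KLT baseline that the proposed SOTs extend, together with n-term approximation, can be sketched in a few lines (an illustration on synthetic correlated patches, not the SOT design algorithm or the prototype codec of the paper).

```python
import numpy as np

# Sketch of the KLT baseline: the transform is the eigenbasis of the patch
# covariance; n-term approximation keeps the n largest-magnitude coefficients.

rng = np.random.default_rng(0)
mix = rng.standard_normal((16, 16))
patches = rng.standard_normal((1000, 16)) @ mix   # synthetic correlated 4x4 patches
patches -= patches.mean(axis=0)

cov = patches.T @ patches / len(patches)
_, klt = np.linalg.eigh(cov)                      # columns = KLT basis vectors

def n_term(x, basis, n):
    c = basis.T @ x
    keep = np.argsort(np.abs(c))[-n:]             # n largest coefficients
    c_sparse = np.zeros_like(c); c_sparse[keep] = c[keep]
    return basis @ c_sparse

x = patches[0]
err = np.linalg.norm(x - n_term(x, klt, n=4)) / np.linalg.norm(x)
print("relative 4-term approximation error:", err)
```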

  11. Quickly Approximating the Distance Between Two Objects

    NASA Technical Reports Server (NTRS)

    Hammen, David

    2009-01-01

    A method of quickly approximating the distance between two objects (one smaller, regarded as a point; the other larger and complexly shaped) has been devised for use in computationally simulating motions of the objects for the purpose of planning the motions to prevent collisions.

  12. Fostering Formal Commutativity Knowledge with Approximate Arithmetic

    PubMed Central

    Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert

    2015-01-01

    How can we enhance the understanding of abstract mathematical principles in elementary school? Different studies found out that nonsymbolic estimation could foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated if the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not been instructed about commutativity in school yet. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311

  13. Fostering Formal Commutativity Knowledge with Approximate Arithmetic.

    PubMed

    Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A; Gaschler, Robert

    2015-01-01

    How can we enhance the understanding of abstract mathematical principles in elementary school? Different studies found out that nonsymbolic estimation could foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated if the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not been instructed about commutativity in school yet. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311

  14. Block Addressing Indices for Approximate Text Retrieval.

    ERIC Educational Resources Information Center

    Baeza-Yates, Ricardo; Navarro, Gonzalo

    2000-01-01

    Discusses indexing in large text databases, approximate text searching, and space-time tradeoffs for indexed text searching. Studies the space overhead and retrieval times as functions of the text block size, concludes that an index can be sublinear in space overhead and query time, and applies the analysis to the Web. (Author/LRW)

  15. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A structural synthesis methodology for the minimum mass design of 3-dimensional frame-truss structures under multiple static loading conditions and subject to limits on displacements, rotations, stresses, local buckling, and element cross-sectional dimensions is presented. A variety of approximation concept options are employed to yield near optimum designs after no more than 10 structural analyses. Available options include: (A) formulation of the nonlinear mathematical programming problem in either reciprocal section property (RSP) or cross-sectional dimension (CSD) space; (B) two alternative approximate problem structures in each design space; and (C) three distinct assumptions about element end-force variations. Fixed element, design element linking, and temporary constraint deletion features are also included. The solution of each approximate problem, in either its primal or dual form, is obtained using CONMIN, a feasible directions program. The frame-truss synthesis methodology is implemented in the COMPASS computer program and is used to solve a variety of problems. These problems were chosen so that, in addition to exercising the various approximation concepts options, the results could be compared with previously published work.

  16. An adiabatic approximation for grain alignment theory

    NASA Astrophysics Data System (ADS)

    Roberge, W. G.

    1997-10-01

    The alignment of interstellar dust grains is described by the joint distribution function for certain `internal' and `external' variables, where the former describe the orientation of the axes of a grain with respect to its angular momentum, J, and the latter describe the orientation of J relative to the interstellar magnetic field. I show how the large disparity between the dynamical time-scales of the internal and external variables - which is typically 2-3 orders of magnitude - can be exploited to simplify calculations of the required distribution greatly. The method is based on an `adiabatic approximation' which closely resembles the Born-Oppenheimer approximation in quantum mechanics. The adiabatic approximation prescribes an analytic distribution function for the `fast' dynamical variables and a simplified Fokker-Planck equation for the `slow' variables which can be solved straightforwardly using various techniques. These solutions are accurate to O(epsilon), where epsilon is the ratio of the fast and slow dynamical time-scales. As a simple illustration of the method, I derive an analytic solution for the joint distribution established when Barnett relaxation acts in concert with gas damping. The statistics of the analytic solution agree with the results of laborious numerical calculations which do not exploit the adiabatic approximation.

  17. An Adiabatic Approximation for Grain Alignment Theory

    NASA Astrophysics Data System (ADS)

    Roberge, W. G.

    1997-12-01

    The alignment of interstellar dust grains is described by the joint distribution function for certain "internal" and "external" variables, where the former describe the orientation of a grain's axes with respect to its angular momentum, J, and the latter describe the orientation of J relative to the interstellar magnetic field. I show how the large disparity between the dynamical timescales of the internal and external variables - which is typically 2-3 orders of magnitude - can be exploited to greatly simplify calculations of the required distribution. The method is based on an "adiabatic approximation" which closely resembles the Born-Oppenheimer approximation in quantum mechanics. The adiabatic approximation prescribes an analytic distribution function for the "fast" dynamical variables and a simplified Fokker-Planck equation for the "slow" variables which can be solved straightforwardly using various techniques. These solutions are accurate to O(ε), where ε is the ratio of the fast and slow dynamical timescales. As a simple illustration of the method, I derive an analytic solution for the joint distribution established when Barnett relaxation acts in concert with gas damping. The statistics of the analytic solution agree with the results of laborious numerical calculations which do not exploit the adiabatic approximation.

  18. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
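    A minimal sketch of an anytime algorithm follows (an illustrative toy, not the scheduling architecture described in the abstract): the estimate improves monotonically with computation, and whatever answer is current when the deadline expires is returned.

```python
import time

# Sketch of an anytime algorithm: the answer improves monotonically and can be
# interrupted at any deadline.  Here the "problem" is estimating pi by the
# Leibniz series; a planner would allocate the deadline based on the expected
# value of further refinement, as discussed above.

def anytime_pi(deadline_s):
    start = time.monotonic()
    total, k = 0.0, 0
    while time.monotonic() - start < deadline_s:
        total += (-1.0) ** k / (2 * k + 1)    # next term of the series
        k += 1
    return 4.0 * total, k                      # best answer so far, work done

for budget in (0.001, 0.01, 0.1):
    est, terms = anytime_pi(budget)
    print(f"budget {budget:6.3f}s  terms {terms:8d}  estimate {est:.6f}")
```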

  19. Kravchuk functions for the finite oscillator approximation

    NASA Technical Reports Server (NTRS)

    Atakishiyev, Natig M.; Wolf, Kurt Bernardo

    1995-01-01

    Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.

  20. Discovering natural communities in networks

    NASA Astrophysics Data System (ADS)

    Li, Angsheng; Li, Jiankou; Pan, Yicheng

    2015-10-01

    Understanding and detecting natural communities in networks have been a fundamental challenge in the study of networks, and in science generally. Recently, we proposed the hypothesis that homophyly/kinship is the principle underlying natural communities, based on real network experiments; we proposed a model of networks to explore the principle of natural selection in the evolution of networks; and we proposed the measure of structure entropy of networks. Here we propose a community finding algorithm based on our measure of structure entropy of networks. We found that our community finding algorithm exactly identifies almost all natural communities of networks generated by natural selection, if any, and that the algorithm exactly identifies or precisely approximates almost all the communities planted in the networks of the existing models. We verified that our algorithm identifies or very well approximates the ground-truth communities of some real world networks, if the ground-truth communities are semantically well-defined, that our algorithm naturally finds balanced communities, and that the communities found by our algorithm may have larger modularity than those found by modularity-based algorithms, for some networks. Our algorithm provides for the first time an approach to detecting and analyzing natural or true communities in real world networks. Our results demonstrate that structure entropy minimization is the principle for detecting natural or true communities in large-scale networks.

  1. Counting independent sets using the Bethe approximation

    SciTech Connect

    Chertkov, Michael; Chandrasekaran, V; Gamarmik, D; Shah, D; Sin, J

    2009-01-01

    The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges within a multiplicative error 1 + ε of a fixed point in O(n^2 ε^{-4} log^3(n ε^{-1})) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^{-γ}) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function - this is quite surprising, as previous physics-based predictions were expecting an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
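    Plain belief propagation for the hard-core model is short enough to sketch (an illustration of standard damped BP, not the authors' provably convergent 'time-varying' variant or their error analysis); fixed points of these updates are stationary points of the Bethe free energy.

```python
import numpy as np

# Sketch: belief propagation for the hard-core (independent set) model with
# activity lam, returning approximate occupation marginals.  Messages are
# damped for stability; convergence is not guaranteed in general.

def bp_hardcore(adj, lam=1.0, iters=200, damping=0.5):
    # m[i][j] = approximate probability that i is unoccupied when j is removed
    m = {i: {j: 0.5 for j in nbrs} for i, nbrs in adj.items()}
    for _ in range(iters):
        new = {i: {} for i in adj}
        for i, nbrs in adj.items():
            for j in nbrs:
                prod = np.prod([m[k][i] for k in nbrs if k != j])
                new[i][j] = damping * m[i][j] + (1 - damping) / (1.0 + lam * prod)
        m = new
    marg = {}
    for i, nbrs in adj.items():
        prod = np.prod([m[k][i] for k in nbrs])
        marg[i] = lam * prod / (1.0 + lam * prod)   # P(node i occupied)
    return marg

# 4-cycle: the exact occupation marginal of each node is 2/7 ~ 0.286;
# BP gives ~ 0.276 on this short-girth graph.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(bp_hardcore(adj))
```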

  2. Quantization Effects on Complex Networks.

    PubMed

    Wang, Ying; Wang, Lin; Yang, Wen; Wang, Xiaofan

    2016-01-01

    Weights of edges in many complex networks we constructed are quantized values of the real weights. To what extent does the quantization affect the properties of a network? In this work, quantization effects on network properties are investigated based on the spectrum of the corresponding Laplacian. In contrast to the intuition that larger quantization level always implies a better approximation of the quantized network to the original one, we find a ubiquitous periodic jumping phenomenon with peak-value decreasing in a power-law relationship in all the real-world weighted networks that we investigated. We supply theoretical analysis on the critical quantization level and the power laws. PMID:27226049

  3. Quantization Effects on Complex Networks

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Wang, Lin; Yang, Wen; Wang, Xiaofan

    2016-05-01

    Weights of edges in many complex networks we constructed are quantized values of the real weights. To what extent does the quantization affect the properties of a network? In this work, quantization effects on network properties are investigated based on the spectrum of the corresponding Laplacian. In contrast to the intuition that larger quantization level always implies a better approximation of the quantized network to the original one, we find a ubiquitous periodic jumping phenomenon with peak-value decreasing in a power-law relationship in all the real-world weighted networks that we investigated. We supply theoretical analysis on the critical quantization level and the power laws.

  4. Quantization Effects on Complex Networks

    PubMed Central

    Wang, Ying; Wang, Lin; Yang, Wen; Wang, Xiaofan

    2016-01-01

    Weights of edges in many complex networks we constructed are quantized values of the real weights. To what extent does the quantization affect the properties of a network? In this work, quantization effects on network properties are investigated based on the spectrum of the corresponding Laplacian. In contrast to the intuition that larger quantization level always implies a better approximation of the quantized network to the original one, we find a ubiquitous periodic jumping phenomenon with peak-value decreasing in a power-law relationship in all the real-world weighted networks that we investigated. We supply theoretical analysis on the critical quantization level and the power laws. PMID:27226049
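
    A hedged illustration of the kind of numerical experiment this record describes: quantize the edge weights of a weighted graph and track how the Laplacian spectrum moves as the quantization level changes. The random test graph, the uniform rounding quantizer, and the worst-case eigenvalue shift used as the error measure are all illustrative assumptions, not the paper's definitions.

      import numpy as np

      def laplacian(W):
          # combinatorial graph Laplacian of a symmetric weight matrix
          return np.diag(W.sum(axis=1)) - W

      def quantize(W, levels):
          # uniform rounding of the weights to `levels` steps of the maximum weight
          step = W.max() / levels
          return np.round(W / step) * step

      def spectral_error(W, levels):
          # worst-case shift of the Laplacian eigenvalues caused by quantization
          orig = np.sort(np.linalg.eigvalsh(laplacian(W)))
          quant = np.sort(np.linalg.eigvalsh(laplacian(quantize(W, levels))))
          return np.abs(orig - quant).max()

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          n = 30
          W = rng.random((n, n)) * (rng.random((n, n)) < 0.2)   # sparse random weights
          W = np.triu(W, 1)
          W = W + W.T                                           # symmetric weighted adjacency
          for L in (2, 4, 8, 16, 32):
              print(L, spectral_error(W, L))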

  5. Approximate gauge symmetry of composite vector bosons

    SciTech Connect

    Suzuki, Mahiko

    2010-06-01

    It can be shown in a solvable field theory model that the couplings of the composite vector mesons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to the vector bosons made of a fermion pair, we extend it to the case of bosons being constituents and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.

  6. Private Medical Record Linkage with Approximate Matching

    PubMed Central

    Durham, Elizabeth; Xue, Yuan; Kantarcioglu, Murat; Malin, Bradley

    2010-01-01

    Federal regulations require patient data to be shared for reuse in a de-identified manner. However, disparate providers often share data on overlapping populations, such that a patient’s record may be duplicated or fragmented in the de-identified repository. To perform unbiased statistical analysis in a de-identified setting, it is crucial to integrate records that correspond to the same patient. Private record linkage techniques have been developed, but most methods are based on encryption and preclude the ability to determine similarity, decreasing the accuracy of record linkage. The goal of this research is to integrate a private string comparison method that uses Bloom filters to provide an approximate match, with a medical record linkage algorithm. We evaluate the approach with 100,000 patients’ identifiers and demographics from the Vanderbilt University Medical Center. We demonstrate that the private approximation method achieves sensitivity that is, on average, 3% higher than previous methods. PMID:21346965

  7. Approximate gauge symmetry of composite vector bosons

    NASA Astrophysics Data System (ADS)

    Suzuki, Mahiko

    2010-08-01

    It can be shown in a solvable field theory model that the couplings of the composite vector bosons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to the vector bosons made of a fermion pair, we extend it to the case of bosons being constituents and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.

  8. Approximate locality for quantum systems on graphs.

    PubMed

    Osborne, Tobias J

    2008-10-01

    In this Letter we make progress on a long-standing open problem of Aaronson and Ambainis [Theory Comput. 1, 47 (2005)]: we show that if U is a sparse unitary operator with a gap Delta in its spectrum, then there exists an approximate logarithm H of U which is also sparse. The sparsity pattern of H gets more dense as 1/Delta increases. This result can be interpreted as a way to convert between local continuous-time and local discrete-time quantum processes. As an example we show that the discrete-time coined quantum walk can be realized stroboscopically from an approximately local continuous-time quantum walk. PMID:18851512

  9. Approximation of pseudospectra on a Hilbert space

    NASA Astrophysics Data System (ADS)

    Schmidt, Torge; Lindner, Marko

    2016-06-01

    The study of spectral properties of linear operators on an infinite-dimensional Hilbert space is of great interest. This task is especially difficult when the operator is non-selfadjoint or even non-normal. Standard approaches like spectral approximation by finite sections generally fail in that case. In this talk we present an algorithm which rigorously computes upper and lower bounds for the spectrum and pseudospectrum of such operators using finite-dimensional approximations. One of our main fields of research is an efficient implementation of this algorithm. To this end we will demonstrate and evaluate methods for the computation of the pseudospectrum of finite-dimensional operators based on continuation techniques.
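
    For a finite-dimensional matrix, the ε-pseudospectrum can be computed directly from the characterization σ_min(zI - A) ≤ ε on a grid of complex points z; a small sketch of that standard dense computation is given below, only to make the approximated object concrete. The Jordan-type test matrix and the grid are illustrative choices; the paper's rigorous bounds for infinite-dimensional operators are not reproduced here.

      import numpy as np

      def smin_grid(A, re, im):
          # sigma_min(z*I - A) on a rectangular grid of complex points z = x + i*y;
          # the contour {S = eps} bounds the eps-pseudospectrum of A
          n = A.shape[0]
          S = np.empty((len(im), len(re)))
          for i, y in enumerate(im):
              for j, x in enumerate(re):
                  S[i, j] = np.linalg.svd((x + 1j * y) * np.eye(n) - A,
                                          compute_uv=False)[-1]
          return S

      if __name__ == "__main__":
          A = np.diag(np.ones(4), k=1)              # non-normal 5x5 Jordan-type block
          re = np.linspace(-1.5, 1.5, 61)
          im = np.linspace(-1.5, 1.5, 61)
          S = smin_grid(A, re, im)
          print(S.min())                            # smallest grid value of sigma_min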

  10. Approximated solutions to Born-Infeld dynamics

    NASA Astrophysics Data System (ADS)

    Ferraro, Rafael; Nigro, Mauro

    2016-02-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  11. Weizsacker-Williams approximation in quantum chromodynamics

    NASA Astrophysics Data System (ADS)

    Kovchegov, Yuri V.

    The Weizsacker-Williams approximation for a large nucleus in quantum chromodynamics is developed. The non-Abelian Weizsacker-Williams field for a large ultrarelativistic nucleus is constructed. This field is an exact solution of the classical Yang-Mills equations of motion in light cone gauge. The connection is made to the McLerran-Venugopalan model of a large nucleus, and the color charge density for a nucleus in this model is found. The density of states distribution, as a function of color charge density, is proved to be Gaussian. We construct the Feynman diagrams in the light cone gauge which correspond to the classical Weizsacker-Williams field. Analyzing these diagrams we obtain a limitation on using the quasi-classical approximation for nuclear collisions.

  12. Small Clique Detection and Approximate Nash Equilibria

    NASA Astrophysics Data System (ADS)

    Minder, Lorenz; Vilenchik, Dan

    Recently, Hazan and Krauthgamer showed [12] that if, for a fixed small ɛ, an ɛ-best ɛ-approximate Nash equilibrium can be found in polynomial time in two-player games, then it is also possible to find a planted clique in G_{n,1/2} of size C log n, where C is a large fixed constant independent of ɛ. In this paper, we extend their result to show that if an ɛ-best ɛ-approximate equilibrium can be efficiently found for arbitrarily small ɛ > 0, then one can detect the presence of a planted clique of size (2 + δ) log n in G_{n,1/2} in polynomial time for arbitrarily small δ > 0. Our result is optimal in the sense that graphs in G_{n,1/2} have cliques of size (2 - o(1)) log n with high probability.

  13. Planetary ephemerides approximation for radar astronomy

    NASA Technical Reports Server (NTRS)

    Sadr, R.; Shahshahani, M.

    1991-01-01

    The planetary ephemerides approximation for radar astronomy is discussed, and, in particular, the effect of this approximation on the performance of the programmable local oscillator (PLO) used in the Goldstone Solar System Radar is presented. Four different approaches are considered, and it is shown that the Gram polynomials outperform the commonly used technique based on Chebyshev polynomials. These methods are used to analyze the mean-square phase error and the frequency tracking error in the presence of the worst-case Doppler shift that one may encounter within the solar system. It is shown that in the worst case the phase error is under one degree and the frequency tracking error is less than one hertz when the frequency to the PLO is updated every millisecond.
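
    A small sketch of the kind of polynomial ephemeris approximation this record evaluates: a least-squares Chebyshev fit to a smooth Doppler-frequency history using NumPy's Chebyshev module. The synthetic frequency profile and the fit degree are illustrative assumptions; the Gram-polynomial alternative favoured in the record is not shown.

      import numpy as np

      # normalized time window and a synthetic, smooth Doppler-frequency history (Hz)
      t = np.linspace(-1.0, 1.0, 201)
      freq = 1.0e6 * np.cos(0.3 * np.pi * t) + 40.0 * t

      # degree-5 least-squares Chebyshev fit and its evaluation on the same grid
      coef = np.polynomial.chebyshev.chebfit(t, freq, deg=5)
      fit = np.polynomial.chebyshev.chebval(t, coef)

      print(np.max(np.abs(fit - freq)))             # worst-case approximation error (Hz)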

  14. Flow past a porous approximate spherical shell

    NASA Astrophysics Data System (ADS)

    Srinivasacharya, D.

    2007-07-01

    In this paper, the creeping flow of an incompressible viscous liquid past a porous approximate spherical shell is considered. The flow in the free fluid region outside the shell and in the cavity region of the shell is governed by the Navier-Stokes equation. The flow within the porous annulus region of the shell is governed by Darcy's Law. The boundary conditions used at the interface are continuity of the normal velocity, continuity of the pressure, and the Beavers and Joseph slip condition. An exact solution for the problem is obtained. An expression for the drag on the porous approximate spherical shell is obtained. The drag experienced by the shell is evaluated numerically for several values of the parameters governing the flow.

  15. Approximate Solutions in Planted 3-SAT

    NASA Astrophysics Data System (ADS)

    Hsu, Benjamin; Laumann, Christopher; Moessner, Roderich; Sondhi, Shivaji

    2013-03-01

    In many computational settings, there exist many instances where finding a solution requires a computing time that grows exponentially in the number of variables. Concrete examples occur in combinatorial optimization problems and cryptography in computer science, or glassy systems in physics. However, while exact solutions are often known to require exponential time, a related and important question is the running time required to find approximate solutions. Treating this problem as a problem in statistical physics at finite temperature, we examine the computational running time in finding approximate solutions in 3-satisfiability for randomly generated 3-SAT instances which are guaranteed to have a solution. Analytic predictions are corroborated by numerical evidence using stochastic local search algorithms. A first order transition is found in the running time of these algorithms.

  16. Analysing organic transistors based on interface approximation

    SciTech Connect

    Akiyama, Yuto; Mori, Takehiko

    2014-01-15

    Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region.

  17. Uncertainty relations for approximation and estimation

    NASA Astrophysics Data System (ADS)

    Lee, Jaeha; Tsutsui, Izumi

    2016-05-01

    We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér-Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position-momentum and the time-energy relations in one framework albeit handled differently.

  18. Approximate inverse preconditioners for general sparse matrices

    SciTech Connect

    Chow, E.; Saad, Y.

    1994-12-31

    Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.

  19. Some approximation concepts for structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Farshi, B.

    1974-01-01

    An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres and high efficiency is achieved by using several approximation concepts including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.

  20. Some approximation concepts for structural synthesis.

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Farshi, B.

    1973-01-01

    An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres and high efficiency is achieved by using several approximation concepts including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.

  1. Second derivatives for approximate spin projection methods

    SciTech Connect

    Thompson, Lee M.; Hratchian, Hrant P.

    2015-02-07

    The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.

  2. Flexible least squares for approximately linear systems

    NASA Astrophysics Data System (ADS)

    Kalaba, Robert; Tesfatsion, Leigh

    1990-10-01

    A probability-free multicriteria approach is presented to the problem of filtering and smoothing when prior beliefs concerning dynamics and measurements take an approximately linear form. Consideration is given to applications in the social and biological sciences, where obtaining agreement among researchers regarding probability relations for discrepancy terms is difficult. The essence of the proposed flexible-least-squares (FLS) procedure is the cost-efficient frontier, a curve in a two-dimensional cost plane which provides an explicit and systematic way to determine the efficient trade-offs between the separate costs incurred for dynamic and measurement specification errors. The FLS estimates show how the state vector could have evolved over time in a manner minimally incompatible with the prior dynamic and measurement specifications. A FORTRAN program for implementing the FLS filtering and smoothing procedure for approximately linear systems is provided.

  3. Babylonian Resistor Networks

    ERIC Educational Resources Information Center

    Mungan, Carl E.; Lipscombe, Trevor C.

    2012-01-01

    The ancient Babylonians had an iterative technique for numerically approximating the values of square roots. Their method can be physically implemented using series and parallel resistor networks. A recursive formula for the equivalent resistance R_eq is developed and converted into a nonrecursive solution for circuits using…
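
    The Babylonian (Heron) iteration referred to here, together with the elementary series/parallel combination rules a resistor-ladder implementation would rely on, sketched under illustrative values; the specific network mapping of the article is not reproduced.

      def babylonian_sqrt(S, x0=1.0, steps=6):
          # Heron's iteration: average the current guess with S divided by the guess
          x = x0
          for _ in range(steps):
              x = 0.5 * (x + S / x)
          return x

      def series(r1, r2):
          return r1 + r2

      def parallel(r1, r2):
          return r1 * r2 / (r1 + r2)

      if __name__ == "__main__":
          print(babylonian_sqrt(2.0))                # ~1.41421356
          # one series/parallel step of a ladder built from 1-ohm resistors
          print(series(1.0, parallel(1.0, 1.0)))     # 1.5 ohms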

  4. Approximating spheroid inductive responses using spheres

    SciTech Connect

    Smith, J. Torquil; Morrison, H. Frank

    2003-12-12

    The response of high permeability (μ_r ≥ 50) conductive spheroids of moderate aspect ratios (0.25 to 4) to excitation by uniform magnetic fields in the axial or transverse directions is approximated by the response of spheres of appropriate diameters, of the same conductivity and permeability, with magnitude rescaled based on the differing volumes, D.C. magnetizations, and high frequency limit responses of the spheres and modeled spheroids.

  5. Beyond the Kirchhoff approximation. II - Electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Rodriguez, Ernesto

    1991-01-01

    In a paper by Rodriguez (1981), the momentum transfer expansion was introduced for scalar wave scattering. It was shown that this expansion can be used to obtain wavelength-dependent curvature corrections to the Kirchhoff approximation. This paper extends the momentum transfer perturbation expansion to electromagnetic waves. Curvature corrections to the surface current are obtained. Using these results, the specular field and the backscatter cross section are calculated.

  6. Relativistic point interactions: Approximation by smooth potentials

    NASA Astrophysics Data System (ADS)

    Hughes, Rhonda J.

    1997-06-01

    We show that the four-parameter family of one-dimensional relativistic point interactions studied by Benvegnu and Dąbrowski may be approximated in the strong resolvent sense by smooth, local, short-range perturbations of the Dirac Hamiltonian. In addition, we prove that the nonrelativistic limits correspond to the Schrödinger point interactions studied extensively by the author and Paul Chernoff.

  7. Approximation methods in relativistic eigenvalue perturbation theory

    NASA Astrophysics Data System (ADS)

    Noble, Jonathan Howard

    In this dissertation, three questions concerning approximation methods for the eigenvalues of quantum mechanical systems are investigated: (i) What is a pseudo-Hermitian Hamiltonian, and how can its eigenvalues be approximated via numerical calculations? This is a fairly broad topic, and the scope of the investigation is narrowed by focusing on a subgroup of pseudo-Hermitian operators, namely, PT-symmetric operators. Within a numerical approach, one projects a PT-symmetric Hamiltonian onto an appropriate basis and uses a straightforward two-step algorithm to diagonalize the resulting matrix, leading to numerically approximated eigenvalues. (ii) Within an analytic ansatz, how can a relativistic Dirac Hamiltonian be decoupled into particle and antiparticle degrees of freedom, in appropriate kinematic limits? One possible answer is the Foldy-Wouthuysen transform; however, there are alternative methods which seem to have some advantages over the time-tested approach. One such method is investigated by applying both the traditional Foldy-Wouthuysen transform and the "chiral" Foldy-Wouthuysen transform to a number of Dirac Hamiltonians, including the central-field Hamiltonian for a gravitationally bound system, namely, the Dirac-(Einstein-)Schwarzschild Hamiltonian, which requires the formalism of general relativity. (iii) Are there pseudo-Hermitian variants of Dirac Hamiltonians that can be approximated using a decoupling transformation? The tachyonic Dirac Hamiltonian, which describes faster-than-light spin-1/2 particles, is γ5-Hermitian, i.e., pseudo-Hermitian. Superluminal particles remain faster than light upon a Lorentz transformation, and hence the Foldy-Wouthuysen program is unsuited for this case. Thus, inspired by the Foldy-Wouthuysen program, a decoupling transform in the ultrarelativistic limit is proposed, which is applicable to both sub- and superluminal particles.

  8. JIMWLK evolution in the Gaussian approximation

    NASA Astrophysics Data System (ADS)

    Iancu, E.; Triantafyllopoulos, D. N.

    2012-04-01

    We demonstrate that the Balitsky-JIMWLK equations describing the high-energy evolution of the n-point functions of the Wilson lines (the QCD scattering amplitudes in the eikonal approximation) admit a controlled mean field approximation of the Gaussian type, for any value of the number of colors N_c. This approximation is strictly correct in the weak scattering regime at relatively large transverse momenta, where it reproduces the BFKL dynamics, and in the strong scattering regime deeply at saturation, where it properly describes the evolution of the scattering amplitudes towards the respective black disk limits. The approximation scheme is fully specified by giving the 2-point function (the S-matrix for a color dipole), which in turn can be related to the solution to the Balitsky-Kovchegov equation, including at finite N_c. Any higher n-point function with n ≥ 4 can be computed in terms of the dipole S-matrix by solving a closed system of evolution equations (a simplified version of the respective Balitsky-JIMWLK equations) which are local in the transverse coordinates. For simple configurations of the projectile in the transverse plane, our new results for the 4-point and the 6-point functions coincide with the high-energy extrapolations of the respective results in the McLerran-Venugopalan model. One cornerstone of our construction is a symmetry property of the JIMWLK evolution, that we notice here for the first time: the fact that, with increasing energy, a hadron is expanding its longitudinal support symmetrically around the light-cone. This corresponds to invariance under time reversal for the scattering amplitudes.

  9. Capacitor-Chain Successive-Approximation ADC

    NASA Technical Reports Server (NTRS)

    Cunningham, Thomas

    2003-01-01

    A proposed successive-approximation analog-to-digital converter (ADC) would contain a capacitively terminated chain of identical capacitor cells. Like a conventional successive-approximation ADC containing a bank of binary-scaled capacitors, the proposed ADC would store an input voltage on a sample-and-hold capacitor and would digitize the stored input voltage by finding the closest match between this voltage and a capacitively generated sum of binary fractions of a reference voltage (Vref). However, the proposed capacitor-chain ADC would offer two major advantages over a conventional binary-scaled-capacitor ADC: (1) In a conventional ADC that digitizes to n bits, the largest capacitor (representing the most significant bit) must have 2^(n-1) times as much capacitance, and hence, approximately 2^(n-1) times as much area as does the smallest capacitor (representing the least significant bit), so that the total capacitor area must be 2^n times that of the smallest capacitor. In the proposed capacitor-chain ADC, there would be three capacitors per cell, each approximately equal to the smallest capacitor in the conventional ADC, and there would be one cell per bit. Therefore, the total capacitor area would be only about 3n times that of the smallest capacitor. The net result would be that the proposed ADC could be considerably smaller than the conventional ADC. (2) Because of edge effects, parasitic capacitances, and manufacturing tolerances, it is difficult to make capacitor banks in which the values of capacitance are scaled by powers of 2 to the required precision. In contrast, because all the capacitors in the proposed ADC would be identical, the problem of precise binary scaling would not arise.
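
    A minimal sketch of the successive-approximation (binary-search) conversion loop that both the conventional and the proposed capacitor-chain architectures implement; it models only the digitization logic, not either capacitor circuit, and the input, reference voltage, and resolution are illustrative.

      def sar_adc(vin, vref, n_bits):
          # classic successive-approximation search: test one binary fraction of
          # Vref per bit, keeping it whenever the accumulated trial stays below Vin
          code = 0
          acc = 0.0
          for bit in range(n_bits - 1, -1, -1):
              trial = acc + vref / (2 ** (n_bits - bit))
              if vin >= trial:
                  acc = trial
                  code |= 1 << bit
          return code

      if __name__ == "__main__":
          print(sar_adc(vin=1.3, vref=3.3, n_bits=8))   # 100, i.e. floor(256 * 1.3 / 3.3)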

  10. Microscopic justification of the equal filling approximation

    SciTech Connect

    Perez-Martin, Sara; Robledo, L. M.

    2008-07-15

    The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.

  11. Solving Math Problems Approximately: A Developmental Perspective

    PubMed Central

    Ganor-Stern, Dana

    2016-01-01

    Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders, 6th graders and adults’ ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense of magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy which involves rounding and multiplication procedures, and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children estimation skills in an effective manner. PMID:27171224

  12. Finite size effects in epidemic spreading: the problem of overpopulated systems

    NASA Astrophysics Data System (ADS)

    Ganczarek, Wojciech

    2013-12-01

    In this paper we analyze the impact of network size on the dynamics of epidemic spreading. In particular, we investigate the pace of infection in overpopulated systems. In order to do that, we design a model for epidemic spreading on a finite complex network with a restriction to at most one contamination per time step, which can serve as a model for sexually transmitted diseases spreading in some student communes. Because of the highly discrete character of the process, the analysis cannot use the continuous approximation widely exploited for most models. Using a discrete approach, we investigate the epidemic threshold and the quasi-stationary distribution. The main results are two theorems about the mixing time for the process: it scales like the logarithm of the network size and it is proportional to the inverse of the distance from the epidemic threshold.
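
    A rough illustrative simulation in the spirit of the model described here: contact-process-like dynamics on a finite graph with at most one contamination event per time step. The infection and recovery rules, the rates, and the ring-graph example are guesses for illustration only, not the paper's exact model.

      import random

      def simulate(adj, beta, mu, steps, seed=0):
          # at most one contamination event per time step, then independent recoveries
          rng = random.Random(seed)
          infected = {rng.choice(list(adj))}            # a single initial case
          history = []
          for _ in range(steps):
              frontier = [(i, j) for i in infected for j in adj[i] if j not in infected]
              if frontier and rng.random() < beta:
                  infected.add(rng.choice(frontier)[1])
              infected = {i for i in infected if rng.random() > mu}
              history.append(len(infected))
              if not infected:                          # absorbing (disease-free) state
                  break
          return history

      if __name__ == "__main__":
          ring = {i: {(i - 1) % 50, (i + 1) % 50} for i in range(50)}
          print(simulate(ring, beta=0.8, mu=0.05, steps=200)[-10:])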

  13. Spacecraft attitude control using neuro-fuzzy approximation of the optimal controllers

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Woo; Park, Sang-Young; Park, Chandeok

    2016-01-01

    In this study, a neuro-fuzzy controller (NFC) was developed for spacecraft attitude control to mitigate large computational load of the state-dependent Riccati equation (SDRE) controller. The NFC was developed by training a neuro-fuzzy network to approximate the SDRE controller. The stability of the NFC was numerically verified using a Lyapunov-based method, and the performance of the controller was analyzed in terms of approximation ability, steady-state error, cost, and execution time. The simulations and test results indicate that the developed NFC efficiently approximates the SDRE controller, with asymptotic stability in a bounded region of angular velocity encompassing the operational range of rapid-attitude maneuvers. In addition, it was shown that an approximated optimal feedback controller can be designed successfully through neuro-fuzzy approximation of the optimal open-loop controller.

  14. Strong washout approximation to resonant leptogenesis

    NASA Astrophysics Data System (ADS)

    Garbrecht, Björn; Gautier, Florian; Klaric, Juraj

    2014-09-01

    We show that the effective decay asymmetry for resonant Leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ε = X sin(2φ)/(X² + sin²φ), where X = 8πΔ/(|Y1|² + |Y2|²), Δ = 4(M1 - M2)/(M1 + M2), φ = arg(Y2/Y1), and M1,2, Y1,2 are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y1,2|² ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.
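
    A direct numerical evaluation of the late-time asymmetry formula quoted above, with the inputs (masses and Yukawa couplings) chosen purely for illustration rather than taken from the paper.

      import math

      def late_time_asymmetry(M1, M2, Y1, Y2):
          # eps = X sin(2*phi) / (X^2 + sin(phi)^2) with X, Delta, phi as defined above
          delta = 4.0 * (M1 - M2) / (M1 + M2)
          X = 8.0 * math.pi * delta / (abs(Y1) ** 2 + abs(Y2) ** 2)
          ratio = Y2 / Y1
          phi = math.atan2(ratio.imag, ratio.real)      # arg(Y2/Y1)
          return X * math.sin(2.0 * phi) / (X ** 2 + math.sin(phi) ** 2)

      if __name__ == "__main__":
          Y1 = 1e-4
          Y2 = 1e-4 * complex(math.cos(math.pi / 4), math.sin(math.pi / 4))
          print(late_time_asymmetry(M1=1.000001, M2=1.0, Y1=Y1, Y2=Y2))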

  15. Green-Ampt approximations: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    Green-Ampt (GA) model and its modifications are widely used for simulating infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degree of accuracy. In this study, performance of nine explicit approximations to the GA model is compared with the implicit GA model using the published data for broad range of soil classes and infiltration time. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (e.g., percent relative error, maximum absolute percent relative error, average absolute percent relative errors, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computer computation time are used for assessing the model performance. Models are ranked based on the overall performance index (OPI). The BA model is found to be the most accurate followed by the PA and VA models for variety of soil classes and infiltration periods. The AE, SW, SE, and LI model also performed comparatively better. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in selection of accurate and simple explicit approximate GA models for solving variety of hydrological problems.
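
    The reference point for all nine explicit formulas is the implicit Green-Ampt cumulative-infiltration equation F - ψΔθ ln(1 + F/(ψΔθ)) = Ks t; a minimal Newton-iteration solver for it is sketched below, with illustrative (not study-specific) soil parameters.

      import math

      def green_ampt_F(t, Ks, psi, dtheta, tol=1e-10, max_iter=50):
          # Newton iteration on g(F) = F - s*ln(1 + F/s) - Ks*t with s = psi*dtheta
          s = psi * dtheta
          F = max(Ks * t, 1e-9)                 # starting guess
          for _ in range(max_iter):
              g = F - s * math.log(1.0 + F / s) - Ks * t
              dg = F / (F + s)                  # dg/dF
              step = g / dg
              F -= step
              if abs(step) < tol:
                  break
          return F

      if __name__ == "__main__":
          # cumulative infiltration (cm) after 2 h for an illustrative loam-like soil:
          # Ks = 1.0 cm/h, suction head psi = 11.0 cm, moisture deficit dtheta = 0.3
          print(green_ampt_F(t=2.0, Ks=1.0, psi=11.0, dtheta=0.3))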

  16. A coastal ocean model with subgrid approximation

    NASA Astrophysics Data System (ADS)

    Walters, Roy A.

    2016-06-01

    A wide variety of coastal ocean models exist, each having attributes that reflect specific application areas. The model presented here is based on finite element methods with unstructured grids containing triangular and quadrilateral elements. The model optimizes robustness, accuracy, and efficiency by using semi-implicit methods in time in order to remove the most restrictive stability constraints, by using a semi-Lagrangian advection approximation to remove Courant number constraints, and by solving a wave equation at the discrete level for enhanced efficiency. An added feature is the approximation of the effects of subgrid objects. Here, the Reynolds-averaged Navier-Stokes equations and the incompressibility constraint are volume averaged over one or more computational cells. This procedure gives rise to new terms which must be approximated as a closure problem. A study of tidal power generation is presented as an example of this method. A problem that arises is specifying appropriate thrust and power coefficients for the volume averaged velocity when they are usually referenced to free stream velocity. A new contribution here is the evaluation of three approaches to this problem: an iteration procedure and two mapping formulations. All three sets of results for thrust (form drag) and power are in reasonable agreement.

  17. Generalized Quasilinear Approximation: Application to Zonal Jets

    NASA Astrophysics Data System (ADS)

    Marston, J. B.; Chini, G. P.; Tobias, S. M.

    2016-05-01

    Quasilinear theory is often utilized to approximate the dynamics of fluids exhibiting significant interactions between mean flows and eddies. We present a generalization of quasilinear theory to include dynamic mode interactions on the large scales. This generalized quasilinear (GQL) approximation is achieved by separating the state variables into large and small zonal scales via a spectral filter rather than by a decomposition into a formal mean and fluctuations. Nonlinear interactions involving only small zonal scales are then removed. The approximation is conservative and allows for scattering of energy between small-scale modes via the large scale (through nonlocal spectral interactions). We evaluate GQL for the paradigmatic problems of the driving of large-scale jets on a spherical surface and on the beta plane and show that it is accurate even for a small number of large-scale modes. As GQL is formally linear in the small zonal scales, it allows for the closure of the system and can be utilized in direct statistical simulation schemes that have proved an attractive alternative to direct numerical simulation for many geophysical and astrophysical problems.

  18. New Hardness Results for Diophantine Approximation

    NASA Astrophysics Data System (ADS)

    Eisenbrand, Friedrich; Rothvoß, Thomas

    We revisit simultaneous Diophantine approximation, a classical problem from the geometry of numbers which has many applications in algorithms and complexity. The input to the decision version of this problem consists of a rational vector α ∈ ℚ^n, an error bound ɛ and a denominator bound N ∈ ℕ+. One has to decide whether there exists an integer, called the denominator, Q with 1 ≤ Q ≤ N such that the distance of each number Q·α_i to its nearest integer is bounded by ɛ. Lagarias has shown that this problem is NP-complete, and optimization versions have been shown to be hard to approximate within a factor n^(c/log log n) for some constant c > 0. We strengthen the existing hardness results and show that the optimization problem of finding the smallest denominator Q ∈ ℕ+ such that the distances of Q·α_i to the nearest integer are bounded by ɛ is hard to approximate within a factor 2^n unless P = NP.

  19. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.

  20. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    PubMed

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. PMID:26587963

  1. An n log n Generalized Born Approximation.

    PubMed

    Anandakrishnan, Ramu; Daga, Mayank; Onufriev, Alexey V

    2011-03-01

    Molecular dynamics (MD) simulations based on the generalized Born (GB) model of implicit solvation offer a number of important advantages over the traditional explicit solvent based simulations. Yet, in MD simulations, the GB model has not been able to reach its full potential, partly due to its computational cost, which scales as ~n², where n is the number of solute atoms. We present here an ~n log n approximation for the generalized Born (GB) implicit solvent model. The approximation is based on the hierarchical charge partitioning (HCP) method (Anandakrishnan and Onufriev, J. Comput. Chem. 2010, 31, 691-706) previously developed and tested for electrostatic computations in gas-phase and distance-dependent dielectric models. The HCP uses the natural organization of biomolecular structures to partition the structures into multiple hierarchical levels of components. The charge distribution for each of these components is approximated by a much smaller number of charges. The approximate charges are then used for computing electrostatic interactions with distant components, while the full set of atomic charges is used for nearby components. To apply the HCP concept to the GB model, we define the equivalent of the effective Born radius for components. The component effective Born radius is then used in GB computations for points that are distant from the component. This HCP approximation for GB (HCP-GB) is implemented in the open source MD software, NAB in AmberTools, and tested on a set of representative biomolecular structures ranging in size from 632 atoms to ~3 million atoms. For this set of test structures, the HCP-GB method is 1.1-390 times faster than the GB computation without additional approximations (the reference GB computation), depending on the size of the structure. Similar to the spherical cutoff method with GB (cutoff-GB), which also scales as ~n log n, the HCP-GB is relatively simple. However, for the structures considered here, we show

  2. Cluster variation approximations for a contact process living on a graph

    NASA Astrophysics Data System (ADS)

    Peyrard, Nathalie; Franc, Alain

    2005-12-01

    A model classically used for modelling the spread of an infectious disease in a network is the time-continuous contact process, which is one simple example of an interacting particle system. It displays a non-equilibrium phase transition, related to the burst of an epidemic within a population in case of an accidental introduction. Several studies have recently emphasized the role of some geometrical properties of the graph on which the contact process lives, like the degree distribution, for quantities of interest like the singlet density at equilibrium or the critical value of the infectivity parameter for the emergence of an epidemic, but this role is not yet fully understood. As the contact process on a graph still cannot be solved analytically (even on a 1D lattice), some approximations are needed. The most naive, but well-studied, approximation is the mean field approximation. We explore in this paper the potential of a finer approximation: the pair approximation used in ecology. We give an analytical formulation, on a graph, of the site occupancy probability at equilibrium, depending on the site degree, under the pair approximation and another dependence structure approximation. We point out improvements brought about in the case of realistic graph structures, far from the well-mixed assumption. We also identify the limits of the pair approximation in answering the question of the effects of the graph characteristics. We show how to improve the method using a more appropriate order-2 cluster variation method, the Bethe approximation.

  3. Approximating the maximum weight clique using replicator dynamics.

    PubMed

    Bomze, I R; Pelillo, M; Stix, V

    2000-01-01

    Given an undirected graph with weights on the vertices, the maximum weight clique problem (MWCP) is to find a subset of mutually adjacent vertices (i.e., a clique) having the largest total weight. This is a generalization of the classical problem of finding the maximum cardinality clique of an unweighted graph, which arises as a special case of the MWCP when all the weights associated to the vertices are equal. The problem is known to be NP-hard for arbitrary graphs and, according to recent theoretical results, so is the problem of approximating it within a constant factor. Although there has recently been much interest around neural-network algorithms for the unweighted maximum clique problem, no effort has been directed so far toward its weighted counterpart. In this paper, we present a parallel, distributed heuristic for approximating the MWCP based on dynamics principles developed and studied in various branches of mathematical biology. The proposed framework centers around a recently introduced continuous characterization of the MWCP which generalizes an earlier remarkable result by Motzkin and Straus. This allows us to formulate the MWCP (a purely combinatorial problem) in terms of a continuous quadratic programming problem. One drawback associated with this formulation, however, is the presence of "spurious" solutions, and we present characterizations of these solutions. To avoid them we introduce a new regularized continuous formulation of the MWCP inspired by previous works on the unweighted problem, and show how this approach completely solves the problem. The continuous formulation of the MWCP naturally maps onto a parallel, distributed computational network whose dynamical behavior is governed by the so-called replicator equations. These are dynamical systems introduced in evolutionary game theory and population genetics to model evolutionary processes on a macroscopic scale.We present theoretical results which guarantee that the solutions provided by
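
    A minimal sketch of discrete replicator dynamics applied to the Motzkin-Straus quadratic program max xᵀAx over the simplex, i.e. the plain unweighted, unregularized setting; the paper's regularized weighted formulation is not reproduced, and the example graph is arbitrary.

      import numpy as np

      def replicator_clique(A, iters=500, seed=0):
          # discrete replicator update x_i <- x_i * (A x)_i / (x^T A x) on the simplex
          rng = np.random.default_rng(seed)
          n = A.shape[0]
          x = rng.random(n)
          x /= x.sum()                           # interior starting point
          for _ in range(iters):
              Ax = A @ x
              x = x * Ax / (x @ Ax)
          return x

      if __name__ == "__main__":
          # 5-node graph whose unique maximum clique is {0, 1, 2}
          A = np.array([[0, 1, 1, 1, 0],
                        [1, 0, 1, 0, 0],
                        [1, 1, 0, 0, 1],
                        [1, 0, 0, 0, 1],
                        [0, 0, 1, 1, 0]], dtype=float)
          x = replicator_clique(A)
          # the support of the limit typically marks a maximal clique, although the
          # plain formulation can also stop at spurious mixed solutions (the issue
          # the regularization discussed above removes)
          print(np.where(x > 1e-3)[0])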

  4. A fractal-like resistive network

    NASA Astrophysics Data System (ADS)

    Saggese, A.; De Luca, R.

    2014-11-01

    The equivalent resistance of a fractal-like network is calculated by means of approaches similar to those employed in defining the equivalent resistance of an infinite ladder. Starting from an elementary triangular circuit, a fractal-like network, named after Saggese, is developed. The equivalent resistance of finite approximations of this network is measured, and the didactical implications of the model are highlighted.

  5. Strong washout approximation to resonant leptogenesis

    SciTech Connect

    Garbrecht, Björn; Gautier, Florian; Klaric, Juraj E-mail: florian.gautier@tum.de

    2014-09-01

    We show that the effective decay asymmetry for resonant Leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ε = X sin(2φ)/(X² + sin²φ), where X = 8πΔ/(|Y1|² + |Y2|²), Δ = 4(M1 - M2)/(M1 + M2), φ = arg(Y2/Y1), and M1,2, Y1,2 are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y1,2|² ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.

  6. Photoelectron spectroscopy and the dipole approximation

    SciTech Connect

    Hemmers, O.; Hansen, D.L.; Wang, H.

    1997-04-01

    Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.

  7. Product-State Approximations to Quantum States

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Harrow, Aram W.

    2016-02-01

    We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the same way as classical correlation. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results put restrictions on the possible Hamiltonians that could be used for a possible proof of the qPCP or NLTS conjectures. By contrast the classical PCP constructions are often based on constraint graphs with high degree. Likewise we show that the parallel repetition that is possible with classical constraint satisfaction problems cannot also be possible for quantum Hamiltonians, unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.

  8. Partially coherent contrast-transfer-function approximation.

    PubMed

    Nesterets, Yakov I; Gureyev, Timur E

    2016-04-01

    The contrast-transfer-function (CTF) approximation, widely used in various phase-contrast imaging techniques, is revisited. CTF validity conditions are extended to a wide class of strongly absorbing and refracting objects, as well as to nonuniform partially coherent incident illumination. Partially coherent free-space propagators, describing amplitude and phase in-line contrast, are introduced and their properties are investigated. The present results are relevant to the design of imaging experiments with partially coherent sources, as well as to the analysis and interpretation of the corresponding images. PMID:27140752

  9. [Bond selective chemistry beyond the adiabatic approximation

    SciTech Connect

    Butler, L.J.

    1993-02-28

    The adiabatic Born-Oppenheimer potential energy surface approximation is not valid for reaction of a wide variety of energetic materials and organic fuels; coupling between electronic states of reacting species plays a key role in determining the selectivity of the chemical reactions induced. This research program initially studies this coupling in (1) selective C-Br bond fission in 1,3-bromoiodopropane, (2) C-S:S-H bond fission branching in CH3SH, and (3) competition between bond fission channels and H2 elimination in CH3NH2.

  10. Virial expansion coefficients in the harmonic approximation.

    PubMed

    Armstrong, J R; Zinner, N T; Fedorov, D V; Jensen, A S

    2012-08-01

    The virial expansion method is applied within a harmonic approximation to an interacting N-body system of identical fermions. We compute the canonical partition functions for two and three particles to get the two lowest orders in the expansion. The energy spectrum is carefully interpolated to reproduce ground-state properties at low temperature and the noninteracting high-temperature limit of constant virial coefficients. This resembles the smearing of shell effects in finite systems with increasing temperature. Numerical results are discussed for the second and third virial coefficients as functions of dimension, temperature, interaction, and transition temperature between low- and high-energy limits. PMID:23005730
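    For context, a worked form of the expansion referred to above, written in one common convention for trapped gases (the paper's normalization may differ): the grand potential is expanded in the fugacity z, with virial coefficients built from few-body canonical partition functions Q_N.

```latex
% Virial (cluster) expansion in the fugacity z = e^{\mu/k_B T}; one common
% convention -- the paper's normalization may differ.
\Omega = -k_B T\, Q_1 \left( z + b_2 z^2 + b_3 z^3 + \cdots \right),
\qquad
b_2 = \frac{Q_2 - Q_1^2/2}{Q_1}, \qquad
b_3 = \frac{Q_3 - Q_1 Q_2 + Q_1^3/3}{Q_1},
```

    where Q_N denotes the N-body canonical partition function, computed here within the harmonic approximation.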

  11. Simple analytic approximations for the Blasius problem

    NASA Astrophysics Data System (ADS)

    Iacono, R.; Boyd, John P.

    2015-08-01

    The classical boundary layer problem formulated by Heinrich Blasius more than a century ago is revisited, with the purpose of deriving simple and accurate analytical approximations to its solution. This is achieved through the combined use of a generalized Padé approach and of an integral iteration scheme devised by Hermann Weyl. The iteration scheme is also used to derive very accurate bounds for the value of the second derivative of the Blasius function at the origin, which plays a crucial role in this problem.
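    For reference, the boundary-value problem being approximated, in one common normalization (conventions differ by a factor of two in the nonlinear term):

```latex
% Blasius boundary-layer equation and boundary conditions.
2 f'''(\eta) + f(\eta)\, f''(\eta) = 0,
\qquad f(0) = f'(0) = 0, \qquad f'(\eta) \to 1 \ \text{as}\ \eta \to \infty .
```

    The quantity bounded in the paper is the wall-shear parameter f''(0), which is approximately 0.332 in this normalization.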

  12. Approximations for crossing two nearby spin resonances

    NASA Astrophysics Data System (ADS)

    Ranjbar, V. H.

    2015-01-01

    Solutions to the Thomas-Bargmann-Michel-Telegdi spin equation for spin-1/2 particles have to date been confined to the single-resonance crossing. However, in reality, most cases of interest concern the overlapping of several resonances. While there have been several serious studies of this problem, a good analytical solution or even an approximation has eluded the community. We show that this system can be transformed into a Hill-like equation. In this representation, we show that, while the single-resonance crossing represents the solution to the parabolic cylinder equation, the overlapping case becomes a parametric type of resonance.

  13. Rapidly converging series approximation to Kepler's equation

    NASA Astrophysics Data System (ADS)

    Peters, R. D.

    1984-08-01

    A power series solution in eccentricity e and normalized mean anomaly f has been developed for elliptic orbits. Expansion through the fourth order yields approximate errors about an order of magnitude smaller than the corresponding Lagrange series. For large e, a particular algorithm is shown to be superior to published initializers for Newton iteration solutions. The normalized variable f varies between zero and one on each of two separately defined intervals: 0 to x = (pi/2-e) and x to pi. The expansion coefficients are polynomials based on a one-time evaluation of sine and cosine terms in f.
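    The comparison against Newton iteration mentioned above can be made concrete with a minimal solver sketch; this is a hypothetical generic implementation of the iteration that the proposed series would initialize or replace, not the paper's expansion itself.

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E.

    Newton iteration with a simple initializer; a series expansion such as the
    one in the paper would instead supply E directly (or a better first guess
    for large eccentricity e).
    """
    M = math.fmod(M, 2.0 * math.pi)           # wrap the mean anomaly
    E = M if e < 0.8 else math.pi             # crude initial guess
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M           # residual of Kepler's equation
        fp = 1.0 - e * math.cos(E)            # derivative df/dE
        dE = f / fp
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Example: moderately eccentric orbit
print(solve_kepler(M=1.0, e=0.3))
```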

  14. Approximate risk assessment prioritizes remedial decisions

    SciTech Connect

    Bergmann, E.P.

    1993-08-01

    Approximate risk assessment (ARA) is a management tool that prioritizes cost/benefit options for risk reduction decisions. Management needs a method that quantifies how much control is satisfactory for each level of risk reduction. Two risk matrices develop a scheme that estimates the necessary control a unit should implement with its present probability and severity of consequences/disaster. A second risk assessment matrix attaches a dollar value to each failure possibility at various severities. Now HPI operators can see the cost and benefit for each control step contemplated and justify returns based on removing the likelihood of the disaster.

  15. Shear viscosity in the postquasistatic approximation

    SciTech Connect

    Peralta, C.; Rosales, L.; Rodriguez-Mueller, B.; Barreto, W.

    2010-05-15

    We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of anisotropic nonadiabatic radiating and dissipative distributions in general relativity. Dissipation is described by viscosity and free-streaming radiation, assuming an equation of state to model anisotropy induced by the shear viscosity. We match the interior solution, in noncomoving coordinates, with the Vaidya exterior solution. Two simple models are presented, based on the Schwarzschild and Tolman VI solutions, in the nonadiabatic and adiabatic limit. In both cases, the eventual collapse or expansion of the distribution is mainly controlled by the anisotropy induced by the viscosity.

  16. Fast Approximate Analysis Of Modified Antenna Structure

    NASA Technical Reports Server (NTRS)

    Levy, Roy

    1991-01-01

    Abbreviated algorithms developed for fast approximate analysis of effects of modifications in supporting structures upon root-mean-square (rms) path-length errors of paraboloidal-dish antennas. Involves combination of methods of structural-modification reanalysis with new extensions of correlation analysis to obtain revised rms path-length error. Full finite-element analysis, which usually requires computer of substantial capacity, necessary only to obtain responses of unmodified structure to known external loads and to selected self-equilibrating "indicator" loads. Responses used in shortcut calculations which, although theoretically "exact", are simple enough to be performed on hand-held calculator. Useful in design, design-sensitivity analysis, and parametric studies.

  17. A function approximation approach to anomaly detection in propulsion system test data

    NASA Astrophysics Data System (ADS)

    Whitehead, Bruce A.; Hoyt, W. A.

    1993-06-01

    Ground test data from propulsion systems such as the Space Shuttle Main Engine (SSME) can be automatically screened for anomalies by a neural network. The neural network screens data after being trained with nominal data only. Given the values of 14 measurements reflecting external influences on the SSME at a given time, the neural network predicts the expected nominal value of a desired engine parameter at that time. We compared the ability of three different function-approximation techniques to perform this nominal value prediction: a novel neural network architecture based on Gaussian bar basis functions, a conventional back propagation neural network, and linear regression. These three techniques were tested with real data from six SSME ground tests containing two anomalies. The basis function network trained more rapidly than back propagation. It yielded nominal predictions with a tight enough confidence interval to distinguish anomalous deviations from the nominal fluctuations in an engine parameter. Since the function-approximation approach requires nominal training data only, it is capable of detecting unknown classes of anomalies for which training data is not available.
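    A minimal sketch of the general approach described above, assuming synthetic stand-in data and an off-the-shelf regressor in place of the Gaussian-bar-basis network: train on nominal data only, then flag observations whose deviation from the nominal prediction exceeds the nominal residual scatter.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical arrays: X holds the 14 external-influence measurements per
# time step, y the engine parameter to be predicted; all data are "nominal".
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 14))
y = X @ rng.normal(size=14) + 0.05 * rng.normal(size=2000)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, y)

# Confidence band from nominal residuals; exceedances are flagged as anomalies.
residuals = y - model.predict(X)
sigma = residuals.std()

def is_anomalous(x_new, y_observed, k=4.0):
    """Flag a sample whose deviation from the nominal prediction exceeds
    k standard deviations of the nominal residuals."""
    return abs(y_observed - model.predict(x_new.reshape(1, -1))[0]) > k * sigma
```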

  18. A function approximation approach to anomaly detection in propulsion system test data

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Hoyt, W. A.

    1993-01-01

    Ground test data from propulsion systems such as the Space Shuttle Main Engine (SSME) can be automatically screened for anomalies by a neural network. The neural network screens data after being trained with nominal data only. Given the values of 14 measurements reflecting external influences on the SSME at a given time, the neural network predicts the expected nominal value of a desired engine parameter at that time. We compared the ability of three different function-approximation techniques to perform this nominal value prediction: a novel neural network architecture based on Gaussian bar basis functions, a conventional back propagation neural network, and linear regression. These three techniques were tested with real data from six SSME ground tests containing two anomalies. The basis function network trained more rapidly than back propagation. It yielded nominal predictions with a tight enough confidence interval to distinguish anomalous deviations from the nominal fluctuations in an engine parameter. Since the function-approximation approach requires nominal training data only, it is capable of detecting unknown classes of anomalies for which training data is not available.

  19. On some applications of diophantine approximations

    PubMed Central

    Chudnovsky, G. V.

    1984-01-01

    Siegel's results [Siegel, C. L. (1929) Abh. Preuss. Akad. Wiss. Phys.-Math. Kl. 1] on the transcendence and algebraic independence of values of E-functions are refined to obtain the best possible bound for the measures of irrationality and linear independence of values of arbitrary E-functions at rational points. Our results show that values of E-functions at rational points have measures of diophantine approximations typical to “almost all” numbers. In particular, any such number has the “2 + ε” exponent of irrationality: |Θ - p/q| > |q|^(-2-ε) for relatively prime rational integers p,q, with q ≥ q_0(Θ, ε). These results answer some problems posed by Lang. The methods used here are based on the introduction of graded Padé approximations to systems of functions satisfying linear differential equations with rational function coefficients. The constructions and proofs of this paper were used in the functional (nonarithmetic case) in a previous paper [Chudnovsky, D. V. & Chudnovsky, G. V. (1983) Proc. Natl. Acad. Sci. USA 80, 5158-5162]. PMID:16593441

  20. On some applications of diophantine approximations.

    PubMed

    Chudnovsky, G V

    1984-03-01

    Siegel's results [Siegel, C. L. (1929) Abh. Preuss. Akad. Wiss. Phys.-Math. Kl. 1] on the transcendence and algebraic independence of values of E-functions are refined to obtain the best possible bound for the measures of irrationality and linear independence of values of arbitrary E-functions at rational points. Our results show that values of E-functions at rational points have measures of diophantine approximations typical to "almost all" numbers. In particular, any such number has the "2 + ε" exponent of irrationality: |Θ - p/q| > q^(-2-ε) for relatively prime rational integers p,q, with q ≥ q_0(Θ, ε). These results answer some problems posed by Lang. The methods used here are based on the introduction of graded Padé approximations to systems of functions satisfying linear differential equations with rational function coefficients. The constructions and proofs of this paper were used in the functional (nonarithmetic case) in a previous paper [Chudnovsky, D. V. & Chudnovsky, G. V. (1983) Proc. Natl. Acad. Sci. USA 80, 5158-5162]. PMID:16593441

  1. Investigating Material Approximations in Spacecraft Radiation Analysis

    NASA Technical Reports Server (NTRS)

    Walker, Steven A.; Slaba, Tony C.; Clowdsley, Martha S.; Blattnig, Steve R.

    2011-01-01

    During the design process, the configuration of space vehicles and habitats changes frequently and the merits of design changes must be evaluated. Methods for rapidly assessing astronaut exposure are therefore required. Typically, approximations are made to simplify the geometry and speed up the evaluation of each design. In this work, the error associated with two common approximations used to simplify space radiation vehicle analyses, scaling into equivalent materials and material reordering, are investigated. Over thirty materials commonly found in spacesuits, vehicles, and human bodies are considered. Each material is placed in a material group (aluminum, polyethylene, or tissue), and the error associated with scaling and reordering was quantified for each material. Of the scaling methods investigated, range scaling is shown to be the superior method, especially for shields less than 30 g/cm2 exposed to a solar particle event. More complicated, realistic slabs are examined to quantify the separate and combined effects of using equivalent materials and reordering. The error associated with material reordering is shown to be at least comparable to, if not greater than, the error associated with range scaling. In general, scaling and reordering errors were found to grow with the difference between the average nuclear charge of the actual material and average nuclear charge of the equivalent material. Based on this result, a different set of equivalent materials (titanium, aluminum, and tissue) are substituted for the commonly used aluminum, polyethylene, and tissue. The realistic cases are scaled and reordered using the new equivalent materials, and the reduced error is shown.

  2. Chiral Magnetic Effect in Hydrodynamic Approximation

    NASA Astrophysics Data System (ADS)

    Zakharov, Valentin I.

    We review derivations of the chiral magnetic effect (ChME) in hydrodynamic approximation. The reader is assumed to be familiar with the basics of the effect. The main challenge now is to account for the strong interactions between the constituents of the fluid. The main result is that the ChME is not renormalized: in the hydrodynamic approximation it remains the same as for non-interacting chiral fermions moving in an external magnetic field. The key ingredients in the proof are general laws of thermodynamics and the Adler-Bardeen theorem for the chiral anomaly in external electromagnetic fields. The chiral magnetic effect in hydrodynamics represents a macroscopic manifestation of a quantum phenomenon (chiral anomaly). Moreover, one can argue that the current induced by the magnetic field is dissipation free and talk about a kind of "chiral superconductivity". A more precise description is quantum ballistic transport along the magnetic field, taking place in equilibrium and in the absence of a driving force. The basic limitation is the exact chiral limit, while temperature, excitingly enough, does not seemingly matter. What is still lacking is a detailed quantum microscopic picture for the ChME in hydrodynamics. Probably, the chiral currents propagate through lower-dimensional defects, like vortices in superfluid. In the case of a superfluid, the prediction for the chiral magnetic effect remains unmodified although the emerging dynamical picture differs from the standard one.

  3. Optimal Approximation of Quadratic Interval Functions

    NASA Technical Reports Server (NTRS)

    Koshelev, Misha; Taillibert, Patrick

    1997-01-01

    Measurements are never absolutely accurate; as a result, after each measurement, we do not get the exact value of the measured quantity; at best, we get an interval of its possible values. For dynamically changing quantities x, the additional problem is that we cannot measure them continuously; we can only measure them at certain discrete moments of time t_1, t_2, ... If we know that the value x(t_j) at the moment t_j of the last measurement was in the interval [x^-(t_j), x^+(t_j)], and if we know the upper bound D on the rate with which x changes, then, for any given moment of time t, we can conclude that x(t) belongs to the interval [x^-(t_j) - D (t - t_j), x^+(t_j) + D (t - t_j)]. This interval changes linearly with time and is, therefore, called a linear interval function. When we process these intervals, we get an expression that is quadratic and higher order w.r.t. time t. Such "quadratic" intervals are difficult to process and, therefore, it is necessary to approximate them by linear ones. In this paper, we describe an algorithm that gives the optimal approximation of quadratic interval functions by linear ones.
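    The linear-interval bookkeeping described above is simple to state in code; a minimal sketch (the function and variable names are hypothetical):

```python
def extrapolate_interval(x_lo, x_hi, t_j, D, t):
    """Linear interval extrapolation as described in the abstract: if x(t_j)
    is known to lie in [x_lo, x_hi] and |dx/dt| <= D, then at any time
    t >= t_j the value x(t) lies in the returned interval, which widens
    linearly with the elapsed time."""
    dt = t - t_j
    return (x_lo - D * dt, x_hi + D * dt)

# Example: measurement at t_j = 0 gave x in [1.9, 2.1]; rate bound D = 0.5.
print(extrapolate_interval(1.9, 2.1, t_j=0.0, D=0.5, t=2.0))  # (0.9, 3.1)
```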

  4. Iterative Sparse Approximation of the Gravitational Potential

    NASA Astrophysics Data System (ADS)

    Telschow, R.

    2012-04-01

    In recent applications in the approximation of gravitational potential fields, several new challenges arise. We are concerned with a huge quantity of data (e.g. in case of the Earth) or strongly irregularly distributed data points (e.g. in case of the Juno mission to Jupiter), where both of these problems bring the established approximation methods to their limits. Our novel method, which is a matching pursuit, however, iteratively chooses a best basis out of a large redundant family of trial functions to reconstruct the signal. It is independent of the data points, which makes it possible to take into account a much higher amount of data and, furthermore, handle irregularly distributed data, since the algorithm is able to combine arbitrary spherical basis functions, i.e., global as well as local trial functions. This additionally results in a solution which is sparse in the sense that it features more basis functions where the signal has a higher local detail density. Summarizing, we get a method which reconstructs large quantities of data with a preferably low number of basis functions, combining global as well as several localizing functions to a sparse basis and a solution which is locally adapted to the data density and also to the detail density of the signal.
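    A generic matching-pursuit iteration of the kind described above, sketched with a random stand-in dictionary rather than spherical basis functions (all names and sizes are illustrative assumptions):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: at each step pick the trial function most
    correlated (after normalization) with the current residual and subtract
    its contribution. `dictionary` holds one candidate basis function per
    column of a redundant family."""
    residual = signal.astype(float).copy()
    norms = np.linalg.norm(dictionary, axis=0)
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual / norms            # normalized correlations
        k = np.argmax(np.abs(corr))                       # best-matching atom
        c = (dictionary[:, k] @ residual) / norms[k] ** 2
        coeffs[k] += c
        residual -= c * dictionary[:, k]
    return coeffs, residual

# Toy example with a random redundant dictionary.
rng = np.random.default_rng(1)
D = rng.normal(size=(100, 300))
x = 2.0 * D[:, 5] + 0.5 * D[:, 42]
c, r = matching_pursuit(x, D, n_atoms=5)
```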

  5. Spectrally Invariant Approximation within Atmospheric Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Marshak, A.; Knyazikhin, Y.; Chiu, J. C.; Wiscombe, W. J.

    2011-01-01

    Certain algebraic combinations of single scattering albedo and solar radiation reflected from, or transmitted through, vegetation canopies do not vary with wavelength. These spectrally invariant relationships are the consequence of wavelength independence of the extinction coefficient and scattering phase function in vegetation. In general, this wavelength independence does not hold in the atmosphere, but in cloud-dominated atmospheres the total extinction and total scattering phase function vary only weakly with wavelength. This paper identifies the atmospheric conditions under which the spectrally invariant approximation can accurately describe the extinction and scattering properties of cloudy atmospheres. The validity of the assumptions and the accuracy of the approximation are tested with 1D radiative transfer calculations using publicly available radiative transfer models: Discrete Ordinate Radiative Transfer (DISORT) and Santa Barbara DISORT Atmospheric Radiative Transfer (SBDART). It is shown for cloudy atmospheres with cloud optical depth above 3, and for spectral intervals that exclude strong water vapor absorption, that the spectrally invariant relationships found in vegetation canopy radiative transfer are valid to better than 5%. The physics behind this phenomenon, its mathematical basis, and possible applications to remote sensing and climate are discussed.

  6. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
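    A toy sketch of the conditional-sampling idea, assuming a hypothetical failure criterion and a bounding set B that contains the failure region and whose probability is known analytically; then P(failure) = P(B) * P(failure | B), and only the conditional factor is estimated by sampling (the geometry and rates here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def failure(x):
    """Toy failure criterion on a 2-D uncertain parameter, uniform on [-5, 5]^2."""
    return x[0] + 0.1 * abs(x[1]) > 4.0

# Failure implies x[0] > 3.5, so the failure set is contained in the bounding
# set B = [3.5, 5] x [-5, 5], whose probability is available analytically.
p_B = (5.0 - 3.5) / 10.0

# Conditional Monte Carlo: sample only inside B and estimate P(failure | B).
n = 10_000
samples = np.column_stack([rng.uniform(3.5, 5.0, n), rng.uniform(-5.0, 5.0, n)])
p_F_given_B = np.mean([failure(s) for s in samples])

print("P(failure) approx", p_B * p_F_given_B)
```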

  7. Analytic approximate radiation effects due to Bremsstrahlung

    SciTech Connect

    Ben-Zvi I.

    2012-02-01

    The purpose of this note is to provide analytic approximate expressions that can provide quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick but approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited. In these calculations the good range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked out example for the beam dump of the R&D Energy Recovery Linac.

  8. Approximating Markov Chains: What and why

    SciTech Connect

    Pincus, S.

    1996-06-01

    Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical, analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques, the approximation of dynamical systems by suitable finite state Markov Chains. Steady state distributions for these Markov Chains, a straightforward calculation, will converge to the true dynamical system steady state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly with system evolution. © 1996 American Institute of Physics.
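    An illustrative Ulam-type construction of the kind of finite-state Markov chain approximation discussed above, applied to a simple one-dimensional map (bin counts and sample sizes are arbitrary choices, not taken from the paper):

```python
import numpy as np

# Approximate the logistic map x -> 4 x (1 - x) on [0, 1] by a finite-state
# Markov chain: partition [0, 1] into bins and estimate transition
# probabilities by sampling each bin.
n_bins = 200
edges = np.linspace(0.0, 1.0, n_bins + 1)
P = np.zeros((n_bins, n_bins))

samples_per_bin = 500
rng = np.random.default_rng(0)
for i in range(n_bins):
    x = rng.uniform(edges[i], edges[i + 1], samples_per_bin)
    x_next = 4.0 * x * (1.0 - x)
    j = np.clip(np.searchsorted(edges, x_next, side="right") - 1, 0, n_bins - 1)
    P[i] = np.bincount(j, minlength=n_bins) / samples_per_bin

# Stationary distribution of the chain (power iteration); as the partition is
# refined this approximates the steady state distribution of the map.
pi = np.full(n_bins, 1.0 / n_bins)
for _ in range(5000):
    pi = pi @ P
pi /= pi.sum()
```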

  9. Proportional damping approximation using the energy gain and simultaneous perturbation stochastic approximation

    NASA Astrophysics Data System (ADS)

    Sultan, Cornel

    2010-10-01

    The design of vector second-order linear systems for accurate proportional damping approximation is addressed. For this purpose an error system is defined using the difference between the generalized coordinates of the non-proportionally damped system and its proportionally damped approximation in modal space. The accuracy of the approximation is characterized using the energy gain of the error system and the design problem is formulated as selecting parameters of the non-proportionally damped system to ensure that this gain is sufficiently small. An efficient algorithm that combines linear matrix inequalities and simultaneous perturbation stochastic approximation is developed to solve the problem and examples of its application to tensegrity structures design are presented.
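    For readers unfamiliar with the second ingredient, a bare-bones SPSA iteration on a toy objective; this uses the standard gain schedules and Rademacher perturbations, not the paper's LMI-coupled formulation, and the objective is a stand-in for the energy gain of the error system.

```python
import numpy as np

def spsa_minimize(loss, theta0, n_iter=200, a=0.1, c=0.1, seed=0):
    """Simultaneous perturbation stochastic approximation (SPSA): estimate the
    gradient from only two loss evaluations per iteration by perturbing all
    parameters simultaneously with a random +/-1 vector."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602                               # standard gain decay
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.size)  # Rademacher perturbation
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * g_hat
    return theta

# Toy quadratic objective; the minimizer is the all-ones vector.
print(spsa_minimize(lambda t: np.sum((t - 1.0) ** 2), np.zeros(4)))
```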

  10. Matrix Pade-type approximant and directional matrix Pade approximant in the inner product space

    NASA Astrophysics Data System (ADS)

    Gu, Chuanqing

    2004-03-01

    A new matrix Pade-type approximant (MPTA) is defined in the paper by introducing a generalized linear functional in the inner product space. The expressions of MPTA are provided with the generating function form and the determinant form. Moreover, a directional matrix Pade approximant is also established by giving a set of linearly independent matrices. In the end, it is shown that the method of MPTA can be applied to the reduction problems of the high degree multivariable linear system.

  11. Epidemic thresholds for bipartite networks

    NASA Astrophysics Data System (ADS)

    Hernández, D. G.; Risau-Gusman, S.

    2013-11-01

    It is well known that sexually transmitted diseases (STD) spread across a network of human sexual contacts. This network is most often bipartite, as most STD are transmitted between men and women. Even though network models in epidemiology have quite a long history now, there are few general results about bipartite networks. One of them is the simple dependence, predicted using the mean field approximation, between the epidemic threshold and the average and variance of the degree distribution of the network. Here we show that going beyond this approximation can lead to qualitatively different results that are supported by numerical simulations. One of the new features, that can be relevant for applications, is the existence of a critical value for the infectivity of each population, below which no epidemics can arise, regardless of the value of the infectivity of the other population.

  12. Fast Approximate Quadratic Programming for Graph Matching

    PubMed Central

    Vogelstein, Joshua T.; Conroy, John M.; Lyzinski, Vince; Podrazik, Louis J.; Kratzer, Steven G.; Harley, Eric T.; Fishkind, Donniell E.; Vogelstein, R. Jacob; Priebe, Carey E.

    2015-01-01

    Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves good performance. PMID:25886624
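    An implementation of this FAQ approach is available in SciPy as scipy.optimize.quadratic_assignment with method="faq" (recent SciPy versions); a usage sketch for matching two graphs that differ only by a relabeling, with the caveat that option names should be checked against the installed release:

```python
import numpy as np
from scipy.optimize import quadratic_assignment

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(10, 10))
A = np.triu(A, 1)
A = A + A.T                                   # symmetric adjacency, zero diagonal
perm = rng.permutation(10)
B = A[np.ix_(perm, perm)]                     # the same graph with relabeled nodes

# FAQ: relax the permutation constraint, optimize, then project back to a
# permutation; for graph matching the overlap objective is maximized.
res = quadratic_assignment(A, B, method="faq", options={"maximize": True})
print(res.col_ind)                            # candidate node correspondence
print(res.fun)                                # value of the QAP objective
```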

  13. Generic sequential sampling for metamodel approximations

    SciTech Connect

    Turner, C. J.; Campbell, M. I.

    2003-01-01

    Metamodels approximate complex multivariate data sets from simulations and experiments. These data sets often are not based on an explicitly defined function. The resulting metamodel represents a complex system's behavior for subsequent analysis or optimization. Often an exhaustive data search to obtain the data for the metamodel is impossible, so an intelligent sampling strategy is necessary. While multiple approaches have been advocated, the majority of these approaches were developed in support of a particular class of metamodel, known as Kriging. A more generic, commonsense approach to this problem allows sequential sampling techniques to be applied to other types of metamodels. This research compares recent search techniques for Kriging metamodels with a generic, multi-criteria approach combined with a new type of B-spline metamodel. This B-spline metamodel is competitive with prior results obtained with a Kriging metamodel. Furthermore, the results of this research highlight several important features necessary for these techniques to be extended to more complex domains.

  14. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.

  15. Approximate Techniques for Representing Nuclear Data Uncertainties

    SciTech Connect

    Williams, Mark L; Broadhead, Bryan L; Dunn, Michael E; Rearden, Bradley T

    2007-01-01

    Computational tools are available to utilize sensitivity and uncertainty (S/U) methods for a wide variety of applications in reactor analysis and criticality safety. S/U analysis generally requires knowledge of the underlying uncertainties in evaluated nuclear data, as expressed by covariance matrices; however, only a few nuclides currently have covariance information available in ENDF/B-VII. Recently new covariance evaluations have become available for several important nuclides, but a complete set of uncertainties for all materials needed in nuclear applications is unlikely to be available for several years at least. Therefore if the potential power of S/U techniques is to be realized for near-term projects in advanced reactor design and criticality safety analysis, it is necessary to establish procedures for generating approximate covariance data. This paper discusses an approach to create applications-oriented covariance data by applying integral uncertainties to differential data within the corresponding energy range.

  16. A Gradient Descent Approximation for Graph Cuts

    NASA Astrophysics Data System (ADS)

    Yildiz, Alparslan; Akgul, Yusuf Sinan

    Graph cuts have become very popular in many areas of computer vision including segmentation, energy minimization, and 3D reconstruction. Their ability to find optimal results efficiently and the convenience of usage are some of the factors of this popularity. However, there are a few issues with graph cuts, such as the inherent sequential nature of popular algorithms and the memory bloat in large scale problems. In this paper, we introduce a novel method for the approximation of the graph cut optimization by posing the problem as a gradient descent formulation. The advantages of our method are the ability to work efficiently on large problems and the possibility of convenient implementation on parallel architectures such as inexpensive Graphics Processing Units (GPUs). We have implemented the proposed method on the Nvidia 8800GTS GPU. The classical segmentation experiments on static images and video data showed the effectiveness of our method.

  17. Gutzwiller approximation in strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Li, Chunhua

    Gutzwiller wave function is an important theoretical technique for treating local electron-electron correlations nonperturbatively in condensed matter and materials physics. It is concerned with calculating variationally the ground state wave function by projecting out multi-occupation configurations that are energetically costly. The projection can be carried out analytically in the Gutzwiller approximation that offers an approximate way of calculating expectation values in the Gutzwiller projected wave function. This approach has proven to be very successful in strongly correlated systems such as the high temperature cuprate superconductors, the sodium cobaltates, and the heavy fermion compounds. In recent years, it has become increasingly evident that strongly correlated systems have a strong propensity towards forming inhomogeneous electronic states with spatially periodic superstructural modulations. A good example is the commonly observed stripes and checkerboard states in high-Tc superconductors under a variety of conditions where superconductivity is weakened. There exists currently a real challenge and demand for new theoretical ideas and approaches that treat strongly correlated inhomogeneous electronic states, which is the subject matter of this thesis. This thesis contains four parts. In the first part of the thesis, the Gutzwiller approach is formulated in the grand canonical ensemble where, for the first time, a spatially (and spin) unrestricted Gutzwiller approximation (SUGA) is developed for studying inhomogeneous (both ordered and disordered) quantum electronic states in strongly correlated electron systems. The second part of the thesis applies the SUGA to the t-J model for doped Mott insulators which led to the discovery of checkerboard-like inhomogeneous electronic states competing with d-wave superconductivity, consistent with experimental observations made on several families of high-Tc superconductors. In the third part of the thesis, new

  18. Statistical model semiquantitatively approximates arabinoxylooligosaccharides' structural diversity.

    PubMed

    Dotsenko, Gleb; Nielsen, Michael Krogsgaard; Lange, Lene

    2016-05-13

    A statistical model describing the random distribution of substituted xylopyranosyl residues in arabinoxylooligosaccharides is suggested and compared with existing experimental data. Structural diversity of arabinoxylooligosaccharides of various length, originating from different arabinoxylans (wheat flour arabinoxylan (arabinose/xylose, A/X = 0.47); grass arabinoxylan (A/X = 0.24); wheat straw arabinoxylan (A/X = 0.15); and hydrothermally pretreated wheat straw arabinoxylan (A/X = 0.05)), is semiquantitatively approximated using the proposed model. The suggested approach can be applied not only for prediction and quantification of arabinoxylooligosaccharides' structural diversity, but also for estimating the yield and selecting the optimal source of arabinoxylan for production of arabinoxylooligosaccharides with desired structural features. PMID:27043469

  19. Spline Approximation of Thin Shell Dynamics

    NASA Technical Reports Server (NTRS)

    delRosario, R. C. H.; Smith, R. C.

    1996-01-01

    A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.

  20. Sivers function in the quasiclassical approximation

    NASA Astrophysics Data System (ADS)

    Kovchegov, Yuri V.; Sievert, Matthew D.

    2014-03-01

    We calculate the Sivers function in semi-inclusive deep inelastic scattering (SIDIS) and in the Drell-Yan process (DY) by employing the quasiclassical Glauber-Mueller/McLerran-Venugopalan approximation. Modeling the hadron as a large "nucleus" with nonzero orbital angular momentum (OAM), we find that its Sivers function receives two dominant contributions: one contribution is due to the OAM, while another one is due to the local Sivers function density in the nucleus. While the latter mechanism, being due to the "lensing" interactions, dominates at large transverse momentum of the produced hadron in SIDIS or of the dilepton pair in DY, the former (OAM) mechanism is leading in saturation power counting and dominates when the above transverse momenta become of the order of the saturation scale. We show that the OAM channel allows for a particularly simple and intuitive interpretation of the celebrated sign flip between the Sivers functions in SIDIS and DY.

  1. CT reconstruction via denoising approximate message passing

    NASA Astrophysics Data System (ADS)

    Perelli, Alessandro; Lexa, Michael A.; Can, Ali; Davies, Mike E.

    2016-05-01

    In this paper, we adapt and apply a compressed sensing based reconstruction algorithm to the problem of computed tomography reconstruction for luggage inspection. Specifically, we propose a variant of the denoising generalized approximate message passing (D-GAMP) algorithm and compare its performance to the performance of traditional filtered back projection and to a penalized weighted least squares (PWLS) based reconstruction method. D-GAMP is an iterative algorithm that at each iteration estimates the conditional probability of the image given the measurements and employs a non-linear "denoising" function which implicitly imposes an image prior. Results on real baggage show that D-GAMP is well-suited to limited-view acquisitions.

  2. Fast approximate quadratic programming for graph matching.

    PubMed

    Vogelstein, Joshua T; Conroy, John M; Lyzinski, Vince; Podrazik, Louis J; Kratzer, Steven G; Harley, Eric T; Fishkind, Donniell E; Vogelstein, R Jacob; Priebe, Carey E

    2015-01-01

    Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves good performance. PMID:25886624

  3. Turbo Equalization Using Partial Gaussian Approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Chuanzong; Wang, Zhongyong; Manchon, Carles Navarro; Sun, Peng; Guo, Qinghua; Fleury, Bernard Henri

    2016-09-01

    This paper deals with turbo-equalization for coded data transmission over intersymbol interference (ISI) channels. We propose a message-passing algorithm that uses the expectation-propagation rule to convert messages passed from the demodulator-decoder to the equalizer and computes messages returned by the equalizer by using a partial Gaussian approximation (PGA). Results from Monte Carlo simulations show that this approach leads to a significant performance improvement compared to state-of-the-art turbo-equalizers and allows for trading performance with complexity. We exploit the specific structure of the ISI channel model to significantly reduce the complexity of the PGA compared to that considered in the initial paper proposing the method.

  4. Heat flow in the postquasistatic approximation

    SciTech Connect

    Rodriguez-Mueller, B.; Peralta, C.; Barreto, W.; Rosales, L.

    2010-08-15

    We apply the postquasistatic approximation to study the evolution of spherically symmetric fluid distributions undergoing dissipation in the form of radial heat flow. For a model that corresponds to an incompressible fluid departing from the static equilibrium, it is not possible to go far from the initial state after the emission of a small amount of energy. Initially collapsing distributions of matter are not permitted. Emission of energy can be considered as a mechanism to avoid the collapse. If the distribution collapses initially and emits one hundredth of the initial mass only the outermost layers evolve. For a model that corresponds to a highly compressed Fermi gas, only the outermost shell can evolve with a shorter hydrodynamic time scale.

  5. Improved effective vector boson approximation revisited

    NASA Astrophysics Data System (ADS)

    Bernreuther, Werner; Chen, Long

    2016-03-01

    We reexamine the improved effective vector boson approximation which is based on two-vector-boson luminosities L_pol for the computation of weak gauge-boson hard scattering subprocesses V_1 V_2 → W in high-energy hadron-hadron or e⁻e⁺ collisions. We calculate these luminosities for the nine combinations of the transverse and longitudinal polarizations of V_1 and V_2 in the unitary and axial gauge. For these two gauge choices the quality of this approach is investigated for the reactions e⁻e⁺ → W⁻W⁺ ν_e ν̄_e and e⁻e⁺ → t t̄ ν_e ν̄_e using appropriate phase-space cuts.

  6. Improved approximations for control augmented structural synthesis

    NASA Technical Reports Server (NTRS)

    Thomas, H. L.; Schmit, L. A.

    1990-01-01

    A methodology for control-augmented structural synthesis is presented for structure-control systems which can be modeled as an assemblage of beam, truss, and nonstructural mass elements augmented by a noncollocated direct output feedback control system. Truss areas, beam cross sectional dimensions, nonstructural masses and rotary inertias, and controller position and velocity gains are treated simultaneously as design variables. The structural mass and a control-system performance index can be minimized simultaneously, with design constraints placed on static stresses and displacements, dynamic harmonic displacements and forces, structural frequencies, and closed-loop eigenvalues and damping ratios. Intermediate design-variable and response-quantity concepts are used to generate new approximations for displacements and actuator forces under harmonic dynamic loads and for system complex eigenvalues. This improves the overall efficiency of the procedure by reducing the number of complete analyses required for convergence. Numerical results which illustrate the effectiveness of the method are given.

  7. Iterative image restoration using approximate inverse preconditioning.

    PubMed

    Nagy, J G; Plemmons, R J; Torgersen, T C

    1996-01-01

    Removing a linear shift-invariant blur from a signal or image can be accomplished by inverse or Wiener filtering, or by an iterative least-squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, filtering methods often yield poor results. On the other hand, iterative methods often suffer from slow convergence at high spatial frequencies. This paper concerns solving deconvolution problems for atmospherically blurred images by the preconditioned conjugate gradient algorithm, where a new approximate inverse preconditioner is used to increase the rate of convergence. Theoretical results are established to show that fast convergence can be expected, and test results are reported for a ground-based astronomical imaging problem. PMID:18285203
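    A small sketch of preconditioned conjugate gradients with a crude approximate-inverse (Jacobi) preconditioner, using SciPy's solver; the paper's preconditioner is far more sophisticated, and the tridiagonal system below is a toy stand-in for a regularized deblurring problem.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# Toy symmetric positive definite system (diagonally dominant tridiagonal).
n = 500
main = 2.0 + 10.0 * np.arange(n) / n
A = diags([-np.ones(n - 1), main, -np.ones(n - 1)], offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

# Crude approximate inverse: reciprocal of the diagonal, applied as a
# LinearOperator so CG can use it as a preconditioner.
M = LinearOperator((n, n), matvec=lambda x: x / main)

def make_counter():
    counter = {"n": 0}
    def cb(xk):
        counter["n"] += 1
    return counter, cb

c1, cb1 = make_counter()
x_plain, _ = cg(A, b, callback=cb1)
c2, cb2 = make_counter()
x_prec, _ = cg(A, b, M=M, callback=cb2)
print("CG iterations:", c1["n"], " preconditioned:", c2["n"])
```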

  8. Comparing numerical and analytic approximate gravitational waveforms

    NASA Astrophysics Data System (ADS)

    Afshari, Nousha; Lovelace, Geoffrey; SXS Collaboration

    2016-03-01

    A direct observation of gravitational waves will test Einstein's theory of general relativity under the most extreme conditions. The Laser Interferometer Gravitational-Wave Observatory, or LIGO, began searching for gravitational waves in September 2015 with three times the sensitivity of initial LIGO. To help Advanced LIGO detect as many gravitational waves as possible, a major research effort is underway to accurately predict the expected waves. In this poster, I will explore how the gravitational waveform produced by a long binary-black-hole inspiral, merger, and ringdown is affected by how fast the larger black hole spins. In particular, I will present results from simulations of merging black holes, completed using the Spectral Einstein Code (black-holes.org/SpEC.html), including some new, long simulations designed to mimic black hole-neutron star mergers. I will present comparisons of the numerical waveforms with analytic approximations.

  9. PROX: Approximated Summarization of Data Provenance

    PubMed Central

    Ainy, Eleanor; Bourhis, Pierre; Davidson, Susan B.; Deutch, Daniel; Milo, Tova

    2016-01-01

    Many modern applications involve collecting large amounts of data from multiple sources, and then aggregating and manipulating it in intricate ways. The complexity of such applications, combined with the size of the collected data, makes it difficult to understand the application logic and how information was derived. Data provenance has been proven helpful in this respect in different contexts; however, maintaining and presenting the full and exact provenance may be infeasible, due to its size and complex structure. For that reason, we introduce the notion of approximated summarized provenance, where we seek a compact representation of the provenance at the possible cost of information loss. Based on this notion, we have developed PROX, a system for the management, presentation and use of data provenance for complex applications. We propose to demonstrate PROX in the context of a movies rating crowd-sourcing system, letting participants view provenance summarization and use it to gain insights on the application and its underlying data. PMID:27570843

  10. An approximate CPHD filter for superpositional sensors

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald; El-Fallah, Adel

    2012-06-01

    Most multitarget tracking algorithms, such as JPDA, MHT, and the PHD and CPHD filters, presume the following measurement model: (a) targets are point targets, (b) every target generates at most a single measurement, and (c) any measurement is generated by at most a single target. However, the most familiar sensors, such as surveillance and imaging radars, violate assumption (c). This is because they are actually superpositional; that is, any measurement is a sum of signals generated by all of the targets in the scene. At this conference in 2009, the first author derived exact formulas for PHD and CPHD filters that presume general superpositional measurement models. Unfortunately, these formulas are computationally intractable. In this paper, we modify and generalize a Gaussian approximation technique due to Thouin, Nannuru, and Coates to derive a computationally tractable superpositional-CPHD filter. Implementation requires sequential Monte Carlo (particle filter) techniques.

  11. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.

  12. Animal Models and Integrated Nested Laplace Approximations

    PubMed Central

    Holand, Anna Marie; Steinsland, Ingelin; Martino, Sara; Jensen, Henrik

    2013-01-01

    Animal models are generalized linear mixed models used in evolutionary biology and animal breeding to identify the genetic part of traits. Integrated Nested Laplace Approximation (INLA) is a methodology for making fast, nonsampling-based Bayesian inference for hierarchical Gaussian Markov models. In this article, we demonstrate that the INLA methodology can be used for many versions of Bayesian animal models. We analyze animal models for both synthetic case studies and house sparrow (Passer domesticus) population case studies with Gaussian, binomial, and Poisson likelihoods using INLA. Inference results are compared with results using Markov Chain Monte Carlo methods. For model choice we use difference in deviance information criteria (DIC). We suggest and show how to evaluate differences in DIC by comparing them with sampling results from simulation studies. We also introduce an R package, AnimalINLA, for easy and fast inference for Bayesian Animal models using INLA. PMID:23708299

  13. Robust Generalized Low Rank Approximations of Matrices

    PubMed Central

    Shi, Jiarong; Yang, Wei; Zheng, Xiuyun

    2015-01-01

    In recent years, the intrinsic low rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete the missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims its superiority on computation time and compression ratio over the SVD. However, GLRAM is very sensitive to sparse large noise or outliers and its robust version has not yet been explored or solved. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem which minimizes the l1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, the extensive experiments on synthetic data show that it is possible for RGLRAM to exactly recover both the low rank and the sparse components while it may be difficult for previous state-of-the-art algorithms. We also discuss three issues on RGLRAM: the sensitivity to initialization, the generalization ability and the relationship between the running time and the size/number of matrices. Moreover, the experimental results on images of faces with large corruptions illustrate that RGLRAM obtains better denoising and compression performance than other methods. PMID:26367116

  14. Distance approximating dimension reduction of Riemannian manifolds.

    PubMed

    Chen, Changyou; Zhang, Junping; Fleischer, Rudolf

    2010-02-01

    We study the problem of projecting high-dimensional tensor data on an unspecified Riemannian manifold onto some lower dimensional subspace (we note that, technically, the low-dimensional space we compute may not be a subspace of the original high-dimensional space; however, it is convenient to envision it as a subspace when explaining the algorithms) without much distorting the pairwise geodesic distances between data points on the Riemannian manifold while preserving discrimination ability. Existing algorithms, e.g., ISOMAP, that try to learn an isometric embedding of data points on a manifold have an unsatisfactory discrimination ability in practical applications such as face and gait recognition. In this paper, we propose a two-stage algorithm named tensor-based Riemannian manifold distance-approximating projection (TRIMAP), which can quickly compute an approximately optimal projection for a given tensor data set. In the first stage, we construct a graph from labeled or unlabeled data, which correspond to the supervised and unsupervised scenario, respectively, such that we can use the graph distance to obtain an upper bound on an objective function that preserves pairwise geodesic distances. Then, we perform some tensor-based optimization of this upper bound to obtain a projection onto a low-dimensional subspace. In the second stage, we propose three different strategies to enhance the discrimination ability, i.e., make data points from different classes easier to separate and make data points in the same class more compact. Experimental results on two benchmark data sets from the University of South Florida human gait database and the Face Recognition Technology face database show that the discrimination ability of TRIMAP exceeds that of other popular algorithms. We theoretically show that TRIMAP converges. We demonstrate, through experiments on six synthetic data sets, its potential ability to unfold nonlinear manifolds in the first stage. PMID:19622439

  15. The Guarding Problem - Complexity and Approximation

    NASA Astrophysics Data System (ADS)

    Reddy, T. V. Thirumala; Krishna, D. Sai; Rangan, C. Pandu

    Let G = (V, E) be the given graph and G_R = (V_R, E_R) and G_C = (V_C, E_C) be the subgraphs of G such that V_R ∩ V_C = ∅ and V_R ∪ V_C = V. G_C is referred to as the cops region and G_R is called the robber region. Initially a robber is placed at some vertex of V_R and the cops are placed at some vertices of V_C. The robber and cops may move from their current vertices to one of their neighbours. While a cop can move only within the cops region, the robber may move to any neighbour. The robber and cops move alternately. A vertex v ∈ V_C is said to be attacked if the current turn is the robber's turn, the robber is at vertex u where u ∈ V_R, (u,v) ∈ E, and no cop is present at v. The guarding problem is to find the minimum number of cops required to guard the graph G_C from the robber's attack. We first prove that the decision version of this problem when G_R is an arbitrary undirected graph is PSPACE-hard. We also prove that the decision version of the guarding problem when G_R is a wheel graph is NP-hard. We then present approximation algorithms for the cases where G_R is a star graph, a clique, and a wheel graph, with approximation ratios H(n_1), 2 H(n_1), and (H(n_1) + 3/2), respectively, where H(n_1) = 1 + 1/2 + ... + 1/n_1 and n_1 = |V_R|.

  16. Approximate reasoning-based learning and control for proximity operations and docking in space

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Jani, Yashvant; Lea, Robert N.

    1991-01-01

    A recently proposed hybrid neural-network and fuzzy-logic-control architecture is applied to a fuzzy logic controller developed for attitude control of the Space Shuttle. A model using reinforcement learning and learning from past experience for fine-tuning its knowledge base is proposed. Two main components of this approximate reasoning-based intelligent control (ARIC) model, an action-state evaluation network and an action selection network, are described, as well as the Space Shuttle attitude controller. An ARIC model for the controller is presented, and it is noted that the input layer in each network includes three nodes representing the angle error, angle error rate, and bias node. Preliminary results indicate that the controller can hold the pitch rate within its desired deadband and starts to use the jets at about 500 sec in the run.

  17. Accelerating Learning By Neural Networks

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1992-01-01

    Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.

  18. Network Cosmology

    PubMed Central

    Krioukov, Dmitri; Kitsak, Maksim; Sinkovits, Robert S.; Rideout, David; Meyer, David; Boguñá, Marián

    2012-01-01

    Prediction and control of the dynamics of complex networks is a central problem in network science. Structural and dynamical similarities of different real networks suggest that some universal laws might accurately describe the dynamics of these networks, albeit the nature and common origin of such laws remain elusive. Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social, or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks. This equivalence suggests that unexpectedly similar laws govern the dynamics of complex networks and spacetime in the universe, with implications to network science and cosmology. PMID:23162688

  19. Stochastically evolving networks

    NASA Astrophysics Data System (ADS)

    Chan, Derek Y.; Hughes, Barry D.; Leong, Alex S.; Reed, William J.

    2003-12-01

    We discuss a class of models for the evolution of networks in which new nodes are recruited into the network at random times, and links between existing nodes that are not yet directly connected may also form at random times. The class contains both models that produce “small-world” networks and less tightly linked models. We produce both trees, appropriate in certain biological applications, and networks in which closed loops can appear, which model communication networks and networks of human sexual interactions. One of our models is closely related to random recursive trees, and some exact results known in that context can be exploited. The other models are more subtle and difficult to analyze. Our analysis includes a number of exact results for moments, correlations, and distributions of coordination number and network size. We report simulations and also discuss some mean-field approximations. If the system has evolved for a long time and the state of a random node (which thus has a random age) is observed, power-law distributions for properties of the system arise in some of these models.
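
    As a concrete toy version of the tree-growing models mentioned above, the following sketch (our own illustration in Python, not the authors' code) grows a random recursive tree by attaching each new node to a uniformly chosen earlier node and tabulates the degrees; for this particular model the degree distribution decays geometrically, while the power-law behavior described in the abstract arises when the observed node's age is itself random.

        import random
        from collections import Counter

        def random_recursive_tree(n, seed=0):
            """Grow a tree on n nodes: node k (k >= 1) attaches to a uniformly
            chosen earlier node. Returns the degree of every node."""
            rng = random.Random(seed)
            degree = [0] * n
            for k in range(1, n):
                parent = rng.randrange(k)   # uniform over existing nodes 0..k-1
                degree[parent] += 1
                degree[k] += 1
            return degree

        degrees = random_recursive_tree(10000)
        print(Counter(degrees).most_common(5))   # low degrees dominate; the tail falls off geometrically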

  20. Dynamical Vertex Approximation for the Hubbard Model

    NASA Astrophysics Data System (ADS)

    Toschi, Alessandro

    A full understanding of correlated electron systems in the physically relevant situations of three and two dimensions represents a challenge for the contemporary condensed matter theory. However, in recent years considerable progress has been achieved by means of increasingly more powerful quantum many-body algorithms, applied to the basic model for correlated electrons, the Hubbard Hamiltonian. Here, I will review the physics emerging from studies performed with the dynamical vertex approximation, which includes diagrammatic corrections to the local description of the dynamical mean field theory (DMFT). In particular, I will first discuss the phase diagram in three dimensions with a special focus on the commensurate and incommensurate magnetic phases, their (quantum) critical properties, and the impact of fluctuations on electronic lifetimes and spectral functions. In two dimensions, the effects of non-local fluctuations beyond DMFT grow enormously, determining the appearance of a low-temperature insulating behavior for all values of the interaction in the unfrustrated model: Here the prototypical features of the Mott-Hubbard metal-insulator transition, as well as the existence of magnetically ordered phases, are completely overwhelmed by antiferromagnetic fluctuations of exponentially large extension, in accordance with the Mermin-Wagner theorem. Eventually, by a fluctuation diagnostics analysis of cluster DMFT self-energies, the same magnetic fluctuations are identified as responsible for the pseudogap regime in the hole-doped frustrated case, with important implications for the theoretical modeling of the cuprate physics.

  1. Protein alignment: Exact versus approximate. An illustration.

    PubMed

    Randić, Milan; Pisanski, Tomaž

    2015-05-30

    We illustrate solving the protein alignment problem exactly using the algorithm VESPA (very efficient search for protein alignment). We have compared our result with the approximate solution obtained with BLAST (basic local alignment search tool) software, which is currently the most widely used for searching for protein alignment. We have selected human and mouse proteins having around 170 amino acids for comparison. The exact solution has found 78 pairs of amino acids, to which one should add 17 individual amino acid alignments giving a total of 95 aligned amino acids. BLAST has identified 64 aligned amino acids which involve pairs of more than two adjacent amino acids. However, the difference between the two outputs is not as large as it may appear, because a number of amino acids that are adjacent have been reported by BLAST as single amino acids. So if one counts all amino acids, whether isolated (single) or in a group of two and more amino acids, then the count for BLAST is 89 and for VESPA is 95, a difference of only six. PMID:25800773

  2. Self-Consistent Random Phase Approximation

    NASA Astrophysics Data System (ADS)

    Rohr, Daniel; Hellgren, Maria; Gross, E. K. U.

    2012-02-01

    We report self-consistent Random Phase Approximation (RPA) calculations within the Density Functional Theory. The calculations are performed by the direct minimization scheme for the optimized effective potential method developed by Yang et al. [1]. We show results for the dissociation curve of H2^+, H2 and LiH with the RPA, where the exchange correlation kernel has been set to zero. For H2^+ and H2 we also show results for RPAX, where the exact exchange kernel has been included. The RPA, in general, over-correlates. At intermediate distances a maximum is obtained that lies above the exact energy. This is known from non-self-consistent calculations and is still present in the self-consistent results. The RPAX energies are higher than the RPA energies. At equilibrium distance they accurately reproduce the exact total energy. In the dissociation limit they improve upon RPA, but are still too low. For H2^+ the RPAX correlation energy is zero. Consequently, RPAX gives the exact dissociation curve. We also present the local potentials. They indicate that a peak at the bond midpoint builds up with increasing bond distance. This is expected for the exact KS potential.[4pt] [1] W. Yang, and Q. Wu, Phys. Rev. Lett., 89, 143002 (2002)

  3. Adaptive approximation of higher order posterior statistics

    SciTech Connect

    Lee, Wonjung

    2014-02-01

    Filtering is an approach for incorporating observed data into time-evolving systems. Instead of a family of Dirac delta masses that is widely used in Monte Carlo methods, we here use the Wiener chaos expansion for the parametrization of the conditioned probability distribution to solve the nonlinear filtering problem. The Wiener chaos expansion is not the best method for uncertainty propagation without observations. Nevertheless, the projection of the system variables in a fixed polynomial basis spanning the probability space might be a competitive representation in the presence of relatively frequent observations, because the Wiener chaos approach not only leads to an accurate and efficient prediction for short time uncertainty quantification, but it also allows several data assimilation methods to be applied to yield a better approximate filtering solution. The aim of the present paper is to investigate this hypothesis. We answer in the affirmative for the (stochastic) Lorenz-63 system based on numerical simulations in which the uncertainty quantification method and the data assimilation method are adaptively selected by whether the dynamics is driven by Brownian motion and the near-Gaussianity of the measure to be updated, respectively.

  4. Approximate Model for Turbulent Stagnation Point Flow.

    SciTech Connect

    Dechant, Lawrence

    2016-01-01

    Here we derive an approximate turbulent self-similar model for a class of favorable pressure gradient wedge-like flows, focusing on the stagnation point limit. While the self-similar model provides a useful gross flow field estimate, this approach must be combined with a near-wall model to determine the skin friction and, by Reynolds analogy, the heat transfer coefficient. The combined approach is developed in detail for the stagnation point flow problem where turbulent skin friction and Nusselt number results are obtained. Comparison to the classical Van Driest (1958) result suggests overall reasonable agreement. Though the model is only valid near the stagnation region of cylinders and spheres, it nonetheless provides a reasonable model for overall cylinder and sphere heat transfer. The enhancement effect of free stream turbulence upon the laminar flow is used to derive a similar expression which is valid for turbulent flow. Examination of free stream enhanced laminar flow suggests that, rather than an enhancement of laminar flow behavior, the free stream disturbance results in early transition to turbulent stagnation point behavior. Excellent agreement is shown between enhanced laminar flow and turbulent flow behavior for high levels, e.g., 5%, of free stream turbulence. Finally, the blunt body turbulent stagnation results are shown to provide realistic heat transfer results for turbulent jet impingement problems.

  5. Approximate algorithms for partitioning and assignment problems

    NASA Technical Reports Server (NTRS)

    Iqbal, M. A.

    1986-01-01

    The problem of optimally assigning the modules of a parallel/pipelined program over the processors of a multiple computer system under certain restrictions on the interconnection structure of the program as well as the multiple computer system was considered. For a variety of such programs it is possible to determine in linear time whether a partition of the program exists in which the load on any processor is within a certain bound. This method, when combined with a binary search over a finite range, provides an approximate solution to the partitioning problem. The specific problems considered were: a chain structured parallel program over a chain-like computer system, multiple chain-like programs over a host-satellite system, and a tree structured parallel program over a host-satellite system. For a problem with m modules and n processors, the complexity of the algorithm is no worse than O(mn log(W_T/epsilon)), where W_T is the cost of assigning all modules to one processor and epsilon is the desired accuracy.
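
    The chain-partitioning case lends itself to a short illustration. The sketch below is our own reconstruction of the "linear-time feasibility check plus binary search" scheme described above, not the paper's code; the function names and the tolerance eps are hypothetical, and sum(weights) plays the role of W_T.

        def feasible(weights, n, bound):
            """Greedy linear-time check: can the chain be cut into at most
            n contiguous segments, each with total load <= bound?"""
            if max(weights) > bound:
                return False
            segments, load = 1, 0.0
            for w in weights:
                if load + w > bound:      # start a new segment
                    segments += 1
                    load = w
                else:
                    load += w
            return segments <= n

        def chain_partition_bound(weights, n, eps=1e-3):
            """Binary search over the processor load bound."""
            lo, hi = max(weights), sum(weights)
            while hi - lo > eps:
                mid = 0.5 * (lo + hi)
                if feasible(weights, n, mid):
                    hi = mid
                else:
                    lo = mid
            return hi

        print(chain_partition_bound([4, 2, 7, 1, 3, 5, 6, 2], 3))   # 8 modules, 3 processors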

  6. Approximate theory for radial filtration/consolidation

    SciTech Connect

    Tiller, F.M.; Kirby, J.M.; Nguyen, H.L.

    1996-10-01

    Approximate solutions are developed for filtration and subsequent consolidation of compactible cakes on a cylindrical filter element. Darcy's flow equation is coupled with equations for equilibrium stress under the conditions of plane strain and axial symmetry for radial flow inwards. The solutions are based on power function forms involving the relationships of the solidosity ε_s (volume fraction of solids) and the permeability K to the solids effective stress p_s. The solutions allow determination of the various parameters in the power functions and the ratio k_0 of the lateral to radial effective stress (earth stress ratio). Measurements were made of liquid and effective pressures, flow rates, and cake thickness versus time. Experimental data are presented for a series of tests in a radial filtration cell with a central filter element. Slurries prepared from two materials (Microwate, which is mainly SrSO4, and kaolin) were used in the experiments. Transient deposition of filter cakes was followed by static (i.e., no flow) conditions in the cake. The no-flow condition was accomplished by introducing bentonite which produced a nearly impermeable layer with negligible flow. Measurement of the pressure at the cake surface and the transmitted pressure on the central element permitted calculation of k_0.

  7. Semiclassical approximation to supersymmetric quantum gravity

    NASA Astrophysics Data System (ADS)

    Kiefer, Claus; Lück, Tobias; Moniz, Paulo

    2005-08-01

    We develop a semiclassical approximation scheme for the constraint equations of supersymmetric canonical quantum gravity. This is achieved by a Born-Oppenheimer type of expansion, in analogy to the case of the usual Wheeler-DeWitt equation. The formalism is only consistent if the states at each order depend on the gravitino field. We recover at consecutive orders the Hamilton-Jacobi equation, the functional Schrödinger equation, and quantum gravitational correction terms to this Schrödinger equation. In particular, the following consequences are found: (i) the Hamilton-Jacobi equation and therefore the background spacetime must involve the gravitino, (ii) a (many-fingered) local time parameter has to be present on super Riem Σ (the space of all possible tetrad and gravitino fields), (iii) quantum supersymmetric gravitational corrections affect the evolution of the very early Universe. The physical meaning of these equations and results, in particular, the similarities to and differences from the pure bosonic case, are discussed.

  8. Magnetic reconnection under anisotropic magnetohydrodynamic approximation

    SciTech Connect

    Hirabayashi, K.; Hoshino, M.

    2013-11-15

    We study the formation of slow-mode shocks in collisionless magnetic reconnection by using one- and two-dimensional collisionless MHD codes based on the double adiabatic approximation and the Landau closure model. We bridge the gap between the Petschek-type MHD reconnection model accompanied by a pair of slow shocks and the observational evidence of the rare occasion of in-situ slow shock observations. Our results showed that once magnetic reconnection takes place, a firehose-sense (p_∥ > p_⊥) pressure anisotropy arises in the downstream region, and the generated slow shocks are quite weak compared with those in an isotropic MHD. In spite of the weakness of the shocks, however, the resultant reconnection rate is 10%–30% higher than that in an isotropic case. This result implies that the slow shock does not necessarily play an important role in the energy conversion in the reconnection system and is consistent with the satellite observation in the Earth's magnetosphere.

  9. Configuring Airspace Sectors with Approximate Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Gupta, Pramod

    2010-01-01

    In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
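
    The rollout idea itself is compact enough to sketch: score each admissible next configuration by its immediate cost plus the cost of following the myopic base heuristic for the rest of the horizon, and pick the cheapest. The code below is a generic illustration rather than the paper's implementation; next_configs, step_cost, and heuristic_next are hypothetical problem-specific callbacks.

        def rollout_step(state, horizon, next_configs, step_cost, heuristic_next):
            """Choose the next airspace configuration by one-step rollout of a
            myopic base heuristic (generic sketch, placeholder callbacks)."""
            def heuristic_cost_to_go(s, t):
                total = 0.0
                for k in range(t, horizon):
                    s_next = heuristic_next(s, k)
                    total += step_cost(s, s_next, k)
                    s = s_next
                return total

            best_cfg, best_cost = None, float("inf")
            for cfg in next_configs(state, 0):
                cost = step_cost(state, cfg, 0) + heuristic_cost_to_go(cfg, 1)
                if cost < best_cost:
                    best_cfg, best_cost = cfg, cost
            return best_cfg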

  10. Approximation Schemes for Scheduling with Availability Constraints

    NASA Astrophysics Data System (ADS)

    Fu, Bin; Huo, Yumei; Zhao, Hairong

    We investigate the problems of scheduling n weighted jobs to m identical machines with availability constraints. We consider two different models of availability constraints: the preventive model, where the unavailability is due to preventive machine maintenance, and the fixed job model, where the unavailability is due to a priori assignment of some of the n jobs to certain machines at certain times. Both models have applications such as turnaround scheduling or overlay computing. In both models, the objective is to minimize the total weighted completion time. We assume that m is a constant, and the jobs are non-resumable. For the preventive model, it has been shown that there is no approximation algorithm if all machines have unavailable intervals, even when w_i = p_i for all jobs. In this paper, we assume there is one machine permanently available and the processing time of each job is equal to its weight for all jobs. We develop the first PTAS when there is a constant number of unavailable intervals. One main feature of our algorithm is that the classification of large and small jobs is with respect to each individual interval, and thus not fixed. This classification allows us (1) to enumerate the assignments of large jobs efficiently, and (2) to move small jobs around without increasing the objective value too much, and thus derive our PTAS. Then we show that there is no FPTAS in this case unless P = NP.

  11. The time-dependent Gutzwiller approximation

    NASA Astrophysics Data System (ADS)

    Fabrizio, Michele

    2015-03-01

    The time-dependent Gutzwiller Approximation (t-GA) is shown to be capable of tracking the off-equilibrium evolution both of coherent quasiparticles and of incoherent Hubbard bands. The method is used to demonstrate that the sharp dynamical crossover observed by time-dependent DMFT in the quench-dynamics of a half-filled Hubbard model can be identified within the t-GA as a genuine dynamical transition separating two distinct physical phases. This result, strictly variational for lattices of infinite coordination number, is intriguing as it actually questions the occurrence of thermalization. Next, we shall present how t-GA works in a multi-band model for V2O3 that displays a first-order Mott transition. We shall show that a physically accessible excitation pathway is able to collapse the Mott gap down and drive off-equilibrium the insulator into a metastable metal phase. Work supported by the European Union, Seventh Framework Programme, under the project GO FAST, Grant Agreement No. 280555.

  12. A simple, approximate model of parachute inflation

    SciTech Connect

    Macha, J.M.

    1992-11-01

    A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing fine tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.

  13. A simple, approximate model of parachute inflation

    SciTech Connect

    Macha, J.M.

    1992-01-01

    A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing fine tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.

  14. Rainbows: Mie computations and the Airy approximation.

    PubMed

    Wang, R T; van de Hulst, H C

    1991-01-01

    Efficient and accurate computation of the scattered intensity pattern by the Mie formulas is now feasible for size parameters up to x = 50,000 at least, which in visual light means spherical drops with diameters up to 6 mm. We present a method for evaluating the Mie coefficients from the ratios between Riccati-Bessel and Neumann functions of successive order. We probe the applicability of the Airy approximation, which we generalize to rainbows of arbitrary p (number of internal reflections = p - 1), by comparing the Mie and Airy intensity patterns. Millimeter size water drops show a match in all details, including the position and intensity of the supernumerary maxima and the polarization. A fairly good match is still seen for drops of 0.1 mm. A small spread in sizes helps to smooth out irrelevant detail. The dark band between the rainbows is used to test more subtle features. We conclude that this band contains not only externally reflected light (p = 0) but also a sizable contribution from the p = 6 and p = 7 rainbows, which shift rapidly with wavelength. The higher the refractive index, the closer both theories agree on the first primary rainbow (p = 2) peak for drop diameters as small as 0.02 mm. This may be useful in supporting experimental work. PMID:20581954

  15. Network Solutions.

    ERIC Educational Resources Information Center

    Vietzke, Robert; And Others

    1996-01-01

    This special section explains the latest developments in networking technologies, profiles school districts benefiting from successful implementations, and reviews new products for building networks. Highlights include ATM (asynchronous transfer mode), cable modems, networking switches, Internet screening software, file servers, network management…

  16. Construction and accuracy of partial differential equation approximations to the chemical master equation.

    PubMed

    Grima, Ramon

    2011-11-01

    The mesoscopic description of chemical kinetics, the chemical master equation, can be exactly solved in only a few simple cases. The analytical intractability stems from the discrete character of the equation, and hence considerable effort has been invested in the development of Fokker-Planck equations, second-order partial differential equation approximations to the master equation. We here consider two different types of higher-order partial differential approximations, one derived from the system-size expansion and the other from the Kramers-Moyal expansion, and derive the accuracy of their predictions for chemical reactive networks composed of arbitrary numbers of unimolecular and bimolecular reactions. In particular, we show that the partial differential equation approximation of order Q from the Kramers-Moyal expansion leads to estimates of the mean number of molecules accurate to order Ω(-(2Q-3)/2), of the variance of the fluctuations in the number of molecules accurate to order Ω(-(2Q-5)/2), and of skewness accurate to order Ω(-(Q-2)). We also show that for large Q, the accuracy in the estimates can be matched only by a partial differential equation approximation from the system-size expansion of approximate order 2Q. Hence, we conclude that partial differential approximations based on the Kramers-Moyal expansion generally lead to considerably more accurate estimates in the mean, variance, and skewness than approximations of the same order derived from the system-size expansion. PMID:22181475
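
    For orientation, the expansion being truncated can be written in its standard textbook form (our own addition, not taken from the paper); the order-Q partial differential approximation discussed above retains the derivative terms up to k = Q, and Q = 2 recovers the Fokker-Planck equation:

        \frac{\partial P(x,t)}{\partial t}
          = \sum_{k=1}^{Q} \frac{(-1)^k}{k!}\,
            \frac{\partial^k}{\partial x^k}\bigl[a_k(x)\,P(x,t)\bigr],
        \qquad
        a_k(x) = \int (x'-x)^k\, w(x' \mid x)\,\mathrm{d}x',

    where w(x'|x) denotes the transition rate of the underlying master equation and P(x,t) the probability density.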

  17. Networking standards

    NASA Technical Reports Server (NTRS)

    Davies, Mark

    1991-01-01

    The enterprise network is currently a multivendor environment consisting of many defacto and proprietary standards. During the 1990s, these networks will evolve towards networks which are based on international standards in both Local Area Network (LAN) and Wide Area Network (WAN) space. Also, you can expect to see the higher level functions and applications begin the same transition. Additional information is given in viewgraph form.

  18. Bilayer graphene spectral function in the random phase approximation and self-consistent GW approximation

    NASA Astrophysics Data System (ADS)

    Sabashvili, Andro; Östlund, Stellan; Granath, Mats

    2013-08-01

    We calculate the single-particle spectral function for doped bilayer graphene in the low energy limit, described by two parabolic bands with zero band gap and long range Coulomb interaction. Calculations are done using thermal Green's functions in both the random phase approximation (RPA) and the fully self-consistent GW approximation. Consistent with previous studies, RPA yields a spectral function which, apart from the Landau quasiparticle peaks, shows additional coherent features interpreted as plasmarons, i.e., composite electron-plasmon excitations. In the GW approximation the plasmaron becomes incoherent and peaks are replaced by much broader features. The deviation of the quasiparticle weight and mass renormalization from their noninteracting values is small, which indicates that bilayer graphene is a weakly interacting system. The electron energy loss function, Im[-1/ε_q(ω)], shows a sharp plasmon mode in RPA which in the GW approximation becomes less coherent and thus consistent with the weaker plasmaron features in the corresponding single-particle spectral function.

  19. Hydration thermodynamics beyond the linear response approximation.

    PubMed

    Raineri, Fernando O

    2016-10-19

    The solvation energetics associated with the transformation of a solute molecule at infinite dilution in water from an initial state A to a final state B is reconsidered. The two solute states have different potential energies of interaction, [Formula: see text] and [Formula: see text], with the solvent environment. Throughout the A [Formula: see text] B transformation of the solute, the solvation system is described by a Hamiltonian [Formula: see text] that changes linearly with the coupling parameter ξ. By focusing on the characterization of the probability density [Formula: see text] that the dimensionless perturbational solute-solvent interaction energy [Formula: see text] has numerical value y when the coupling parameter is ξ, we derive a hierarchy of differential equation relations between the ξ-dependent cumulant functions of various orders in the expansion of the appropriate cumulant generating function. On the basis of this theoretical framework we then introduce an inherently nonlinear solvation model for which we are able to find analytical results for both [Formula: see text] and for the solvation thermodynamic functions. The solvation model is based on the premise that there is an upper or a lower bound (depending on the nature of the interactions considered) to the amplitude of the fluctuations of Y in the solution system at equilibrium. The results reveal essential differences in behavior for the model when compared with the linear response approximation to solvation, particularly with regards to the probability density [Formula: see text]. The analytical expressions for the solvation properties show, however, that the linear response behavior is recovered from the new model when the room for the thermal fluctuations in Y is not restricted by the existence of a nearby bound. We compare the predictions of the model with the results from molecular dynamics computer simulations for aqueous solvation, in which either (1) the solute

  20. Rapid approximate inversion of airborne TEM

    NASA Astrophysics Data System (ADS)

    Fullagar, Peter K.; Pears, Glenn A.; Reid, James E.; Schaa, Ralf

    2015-11-01

    Rapid interpretation of large airborne transient electromagnetic (ATEM) datasets is highly desirable for timely decision-making in exploration. Full solution 3D inversion of entire airborne electromagnetic (AEM) surveys is often still not feasible on current day PCs. Therefore, two algorithms to perform rapid approximate 3D interpretation of AEM have been developed. The loss of rigour may be of little consequence if the objective of the AEM survey is regional reconnaissance. Data coverage is often quasi-2D rather than truly 3D in such cases, belying the need for `exact' 3D inversion. Incorporation of geological constraints reduces the non-uniqueness of 3D AEM inversion. Integrated interpretation can be achieved most readily when inversion is applied to a geological model, attributed with lithology as well as conductivity. Geological models also offer several practical advantages over pure property models during inversion. In particular, they permit adjustment of geological boundaries. In addition, optimal conductivities can be determined for homogeneous units. Both algorithms described here can operate on geological models; however, they can also perform `unconstrained' inversion if the geological context is unknown. VPem1D performs 1D inversion at each ATEM data location above a 3D model. Interpretation of cover thickness is a natural application; this is illustrated via application to Spectrem data from central Australia. VPem3D performs 3D inversion on time-integrated (resistive limit) data. Conversion to resistive limits delivers a massive increase in speed since the TEM inverse problem reduces to a quasi-magnetic problem. The time evolution of the decay is lost during the conversion, but the information can be largely recovered by constructing a starting model from conductivity depth images (CDIs) or 1D inversions combined with geological constraints if available. The efficacy of the approach is demonstrated on Spectrem data from Brazil. Both separately and in

  1. Coronal Loops: Evolving Beyond the Isothermal Approximation

    NASA Astrophysics Data System (ADS)

    Schmelz, J. T.; Cirtain, J. W.; Allen, J. D.

    2002-05-01

    Are coronal loops isothermal? A controversy over this question has arisen recently because different investigators using different techniques have obtained very different answers. Analysis of SOHO-EIT and TRACE data using narrowband filter ratios to obtain temperature maps has produced several key publications that suggest that coronal loops may be isothermal. We have constructed a multi-thermal distribution for several pixels along a relatively isolated coronal loop on the southwest limb of the solar disk using spectral line data from SOHO-CDS taken on 1998 Apr 20. These distributions are clearly inconsistent with isothermal plasma along either the line of sight or the length of the loop, and suggested rather that the temperature increases from the footpoints to the loop top. We speculated originally that these differences could be attributed to pixel size -- CDS pixels are larger, and more `contaminating' material would be expected along the line of sight. To test this idea, we used CDS iron line ratios from our data set to mimic the isothermal results from the narrowband filter instruments. These ratios indicated that the temperature gradient along the loop was flat, despite the fact that a more complete analysis of the same data showed this result to be false! The CDS pixel size was not the cause of the discrepancy; rather, the problem lies with the isothermal approximation used in EIT and TRACE analysis. These results should serve as a strong warning to anyone using this simplistic method to obtain temperature. This warning is echoed on the EIT web page: ``Danger! Enter at your own risk!'' In other words, values for temperature may be found, but they may have nothing to do with physical reality. Solar physics research at the University of Memphis is supported by NASA grant NAG5-9783. This research was funded in part by the NASA/TRACE MODA grant for Montana State University.

  2. Cophylogeny Reconstruction via an Approximate Bayesian Computation

    PubMed Central

    Baudet, C.; Donati, B.; Sinaimeri, B.; Crescenzi, P.; Gautier, C.; Matias, C.; Sagot, M.-F.

    2015-01-01

    Despite an increasingly vast literature on cophylogenetic reconstructions for studying host–parasite associations, understanding the common evolutionary history of such systems remains a problem that is far from being solved. Most algorithms for host–parasite reconciliation use an event-based model, where the events include in general (a subset of) cospeciation, duplication, loss, and host switch. All known parsimonious event-based methods then assign a cost to each type of event in order to find a reconstruction of minimum cost. The main problem with this approach is that the cost of the events strongly influences the reconciliation obtained. Some earlier approaches attempt to avoid this problem by finding a Pareto set of solutions and hence by considering event costs under some minimization constraints. To deal with this problem, we developed an algorithm, called Coala, for estimating the frequency of the events based on an approximate Bayesian computation approach. The benefits of this method are 2-fold: (i) it provides more confidence in the set of costs to be used in a reconciliation, and (ii) it allows estimation of the frequency of the events in cases where the data set consists of trees with a large number of taxa. We evaluate our method on simulated and on biological data sets. We show that in both cases, for the same pair of host and parasite trees, different sets of frequencies for the events lead to equally probable solutions. Moreover, often these solutions differ greatly in terms of the number of inferred events. It appears crucial to take this into account before attempting any further biological interpretation of such reconciliations. More generally, we also show that the set of frequencies can vary widely depending on the input host and parasite trees. Indiscriminately applying a standard vector of costs may thus not be a good strategy. PMID:25540454

  3. Generalized stationary phase approximations for mountain waves

    NASA Astrophysics Data System (ADS)

    Knight, H.; Broutman, D.; Eckermann, S. D.

    2016-04-01

    Large altitude asymptotic approximations are derived for vertical displacements due to mountain waves generated by hydrostatic wind flow over arbitrary topography. This leads to new asymptotic analytic expressions for wave-induced vertical displacement for mountains with an elliptical Gaussian shape and with the major axis oriented at any angle relative to the background wind. The motivation is to understand local maxima in vertical displacement amplitude at a given height for elliptical mountains aligned at oblique angles to the wind direction, as identified in Eckermann et al. ["Effects of horizontal geometrical spreading on the parameterization of orographic gravity-wave drag. Part 1: Numerical transform solutions," J. Atmos. Sci. 72, 2330-2347 (2015)]. The standard stationary phase method reproduces one type of local amplitude maximum that migrates downwind with increasing altitude. Another type of local amplitude maximum stays close to the vertical axis over the center of the mountain, and a new generalized stationary phase method is developed to describe this other type of local amplitude maximum and the horizontal variation of wave-induced vertical displacement near the vertical axis of the mountain in the large altitude limit. The new generalized stationary phase method describes the asymptotic behavior of integrals where the asymptotic parameter is raised to two different powers (1/2 and 1) rather than just one power as in the standard stationary phase method. The vertical displacement formulas are initially derived assuming a uniform background wind but are extended to accommodate both vertical shear with a fixed wind direction and vertical variations in the buoyancy frequency.
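
    For context, the leading-order result that is being generalized is the standard one-dimensional stationary phase formula (a textbook expression quoted here for orientation, not taken from the paper): for a single non-degenerate stationary point x_0 with phi'(x_0) = 0,

        \int f(x)\, e^{\,i\lambda\varphi(x)}\,\mathrm{d}x
          \;\sim\; f(x_0)\, e^{\,i\lambda\varphi(x_0)}
          \sqrt{\frac{2\pi}{\lambda\,\lvert\varphi''(x_0)\rvert}}\;
          e^{\,\pm i\pi/4},
        \qquad \lambda \to \infty,

    with the sign of the final phase factor set by the sign of phi''(x_0). The generalized method described above is built for integrals in which the large parameter enters with two different powers (1/2 and 1), a case this standard formula does not cover.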

  4. Compressive Hyperspectral Imaging via Approximate Message Passing

    NASA Astrophysics Data System (ADS)

    Tan, Jin; Ma, Yanting; Rueda, Hoover; Baron, Dror; Arce, Gonzalo R.

    2016-03-01

    We consider a compressive hyperspectral imaging reconstruction problem, where three-dimensional spatio-spectral information about a scene is sensed by a coded aperture snapshot spectral imager (CASSI). The CASSI imaging process can be modeled as suppressing three-dimensional coded and shifted voxels and projecting these onto a two-dimensional plane, such that the number of acquired measurements is greatly reduced. On the other hand, because the measurements are highly compressive, the reconstruction process becomes challenging. We previously proposed a compressive imaging reconstruction algorithm that is applied to two-dimensional images based on the approximate message passing (AMP) framework. AMP is an iterative algorithm that can be used in signal and image reconstruction by performing denoising at each iteration. We employed an adaptive Wiener filter as the image denoiser, and called our algorithm "AMP-Wiener." In this paper, we extend AMP-Wiener to three-dimensional hyperspectral image reconstruction, and call it "AMP-3D-Wiener." Applying the AMP framework to the CASSI system is challenging, because the matrix that models the CASSI system is highly sparse, and such a matrix is not suitable for AMP and makes it difficult for AMP to converge. Therefore, we modify the adaptive Wiener filter and employ a technique called damping to resolve the divergence issue of AMP. Our approach is adaptive in nature, and the numerical experiments show that AMP-3D-Wiener outperforms existing widely used algorithms such as gradient projection for sparse reconstruction (GPSR) and two-step iterative shrinkage/thresholding (TwIST) given a similar amount of runtime. Moreover, in contrast to GPSR and TwIST, AMP-3D-Wiener need not tune any parameters, which simplifies the reconstruction process.
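
    For background, the generic AMP recursion that such denoiser-based reconstructions build on can be written as follows (a standard form added for orientation; in AMP-3D-Wiener the denoiser eta_t is the adaptive Wiener filter mentioned above):

        x^{t+1} = \eta_t\!\left(x^{t} + A^{*} z^{t}\right),
        \qquad
        z^{t} = y - A x^{t}
              + \tfrac{1}{\delta}\, z^{t-1}
                \bigl\langle \eta_{t-1}'\!\left(x^{t-1} + A^{*} z^{t-1}\right) \bigr\rangle,

    where delta is the sampling ratio and the angle brackets denote an empirical average; the final term is the Onsager correction, and damping replaces the update of z^t (or x^t) by a convex combination with its previous value to keep the iteration from diverging for ill-conditioned sensing matrices.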

  5. Visual nesting impacts approximate number system estimation.

    PubMed

    Chesney, Dana L; Gelman, Rochel

    2012-08-01

    The approximate number system (ANS) allows people to quickly but inaccurately enumerate large sets without counting. One popular account of the ANS is known as the accumulator model. This model posits that the ANS acts analogously to a graduated cylinder to which one "cup" is added for each item in the set, with set numerosity read from the "height" of the cylinder. Under this model, one would predict that if all the to-be-enumerated items were not collected into the accumulator, either the sets would be underestimated, or the misses would need to be corrected by a subsequent process, leading to longer reaction times. In this experiment, we tested whether such miss effects occur. Fifty participants judged numerosities of briefly presented sets of circles. In some conditions, circles were arranged such that some were inside others. This circle nesting was expected to increase the miss rate, since previous research had indicated that items in nested configurations cannot be preattentively individuated in parallel. Logically, items in a set that cannot be simultaneously individuated cannot be simultaneously added to an accumulator. Participants' response times were longer and their estimations were lower for sets whose configurations yielded greater levels of nesting. The level of nesting in a display influenced estimation independently of the total number of items present. This indicates that miss effects, predicted by the accumulator model, are indeed seen in ANS estimation. We speculate that ANS biases might, in turn, influence cognition and behavior, perhaps by influencing which kinds of sets are spontaneously counted. PMID:22810562

  6. Cophylogeny reconstruction via an approximate Bayesian computation.

    PubMed

    Baudet, C; Donati, B; Sinaimeri, B; Crescenzi, P; Gautier, C; Matias, C; Sagot, M-F

    2015-05-01

    Despite an increasingly vast literature on cophylogenetic reconstructions for studying host-parasite associations, understanding the common evolutionary history of such systems remains a problem that is far from being solved. Most algorithms for host-parasite reconciliation use an event-based model, where the events include in general (a subset of) cospeciation, duplication, loss, and host switch. All known parsimonious event-based methods then assign a cost to each type of event in order to find a reconstruction of minimum cost. The main problem with this approach is that the cost of the events strongly influences the reconciliation obtained. Some earlier approaches attempt to avoid this problem by finding a Pareto set of solutions and hence by considering event costs under some minimization constraints. To deal with this problem, we developed an algorithm, called Coala, for estimating the frequency of the events based on an approximate Bayesian computation approach. The benefits of this method are 2-fold: (i) it provides more confidence in the set of costs to be used in a reconciliation, and (ii) it allows estimation of the frequency of the events in cases where the data set consists of trees with a large number of taxa. We evaluate our method on simulated and on biological data sets. We show that in both cases, for the same pair of host and parasite trees, different sets of frequencies for the events lead to equally probable solutions. Moreover, often these solutions differ greatly in terms of the number of inferred events. It appears crucial to take this into account before attempting any further biological interpretation of such reconciliations. More generally, we also show that the set of frequencies can vary widely depending on the input host and parasite trees. Indiscriminately applying a standard vector of costs may thus not be a good strategy. PMID:25540454

  7. Bond selective chemistry beyond the adiabatic approximation

    SciTech Connect

    Butler, L.J.

    1993-12-01

    One of the most important challenges in chemistry is to develop predictive ability for the branching between energetically allowed chemical reaction pathways. Such predictive capability, coupled with a fundamental understanding of the important molecular interactions, is essential to the development and utilization of new fuels and the design of efficient combustion processes. Existing transition state and exact quantum theories successfully predict the branching between available product channels for systems in which each reaction coordinate can be adequately described by different paths along a single adiabatic potential energy surface. In particular, unimolecular dissociation following thermal, infrared multiphoton, or overtone excitation in the ground state yields a branching between energetically allowed product channels which can be successfully predicted by the application of statistical theories, i.e., the weakest bond breaks. (The predictions are particularly good for competing reactions in which there is no saddle point along the reaction coordinates, as in simple bond fission reactions.) The predicted lack of bond selectivity results from the assumption of rapid internal vibrational energy redistribution and the implicit use of a single adiabatic Born-Oppenheimer potential energy surface for the reaction. However, the adiabatic approximation is not valid for the reaction of a wide variety of energetic materials and organic fuels; coupling between the electronic states of the reacting species plays a key role in determining the selectivity of the chemical reactions induced. The work described below investigated the central role played by coupling between electronic states in polyatomic molecules in determining the selective branching between energetically allowed fragmentation pathways in two key systems.

  8. Improved Approximability and Non-approximability Results for Graph Diameter Decreasing Problems

    NASA Astrophysics Data System (ADS)

    Bilò, Davide; Gualà, Luciano; Proietti, Guido

    In this paper we study two variants of the problem of adding edges to a graph so as to reduce the resulting diameter. More precisely, given a graph G = (V,E), and two positive integers D and B, the Minimum-Cardinality Bounded-Diameter Edge Addition (MCBD) problem is to find a minimum cardinality set F of edges to be added to G in such a way that the diameter of G + F is less than or equal to D, while the Bounded-Cardinality Minimum-Diameter Edge Addition (BCMD) problem is to find a set F of B edges to be added to G in such a way that the diameter of G + F is minimized. Both problems are well known to be NP-hard, as well as approximable within O(log n log D) and 4 (up to an additive term of 2), respectively. In this paper, we improve these long-standing approximation ratios to O(log n) and to 2 (up to an additive term of 2), respectively. As a consequence, we close, in an asymptotic sense, the gap on the approximability of the MCBD problem, which was known to be not approximable within c log n, for some constant c > 0, unless P = NP. Remarkably, as we further show in the paper, our approximation ratio remains asymptotically tight even if we allow for a solution whose diameter is optimal up to a multiplicative factor approaching 5/3. On the other hand, on the positive side, we show that at most twice the minimal number of additional edges suffices to get at most twice the required diameter.

  9. Steady flow approximations to the helium r-process

    NASA Technical Reports Server (NTRS)

    Cameron, A. G. W.; Cowan, J. J.; Klapdor, H. V.; Metzinger, J.; Oda, T.; Truran, J. W.

    1983-01-01

    A steady flow approximation to the r-process is presented and used for numerical experiments with physical quantities to determine the sensitivity of the process to variations in those quantities. The effect of neutron capture cross sections along the capture path and of recently available improved beta decay rates on the r-process is discussed. The peaks in the observed r-process yield curve near mass numbers 80 and 130 are roughly characterized by a neutron number density of 10^20 per cubic centimeter; the mean beta decay rates are about 10/s, and the freezing time is comparable to or less than 0.1 s. The peak near mass number 195 is roughly characterized by a neutron number density of 10^21 per cubic centimeter, the mean beta decay rates are about 100/s, and the freezing time is comparable to or less than 0.01 s. The flow path of the steady state r-process is sensitively dependent upon the neutron capture cross sections in the flow network and on the values of the beta decay rates.

  10. Semantic Networks and Social Networks

    ERIC Educational Resources Information Center

    Downes, Stephen

    2005-01-01

    Purpose: To illustrate the need for social network metadata within semantic metadata. Design/methodology/approach: Surveys properties of social networks and the semantic web, suggests that social network analysis applies to semantic content, argues that semantic content is more searchable if social network metadata is merged with semantic web…

  11. A comparison of approximate interval estimators for the Bernoulli parameter

    NASA Technical Reports Server (NTRS)

    Leemis, Lawrence; Trivedi, Kishor S.

    1993-01-01

    The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
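
    The two interval types being compared admit a compact sketch (our own illustration of the usual textbook formulas, not the paper's code): the normal-approximation (Wald) interval and a Poisson-approximation interval obtained from the chi-square form of the Poisson confidence limits, both for x successes in n trials; scipy is assumed for the quantile functions.

        from math import sqrt
        from scipy import stats

        def normal_ci(x, n, conf=0.95):
            """Wald interval from the normal approximation to the binomial."""
            z = stats.norm.ppf(0.5 + conf / 2)
            p_hat = x / n
            half = z * sqrt(p_hat * (1 - p_hat) / n)
            return max(0.0, p_hat - half), min(1.0, p_hat + half)

        def poisson_ci(x, n, conf=0.95):
            """Interval from the Poisson approximation: treat x as Poisson(n*p)
            and use the chi-square form of the Poisson limits."""
            alpha = 1 - conf
            lo = 0.0 if x == 0 else stats.chi2.ppf(alpha / 2, 2 * x) / 2
            hi = stats.chi2.ppf(1 - alpha / 2, 2 * (x + 1)) / 2
            return lo / n, min(1.0, hi / n)

        print(normal_ci(3, 50), poisson_ci(3, 50))   # compare the two intervals for p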

  12. Approximate nearest neighbors via dictionary learning

    NASA Astrophysics Data System (ADS)

    Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2011-06-01

    Approximate Nearest Neighbors (ANN) in high dimensional vector spaces is a fundamental, yet challenging problem in many areas of computer science, including computer vision, data mining and robotics. In this work, we investigate this problem from the perspective of compressive sensing, especially the dictionary learning aspect. High dimensional feature vectors are seldom seen to be sparse in the feature domain; examples include, but are not limited to, Scale Invariant Feature Transform (SIFT) descriptors, Histogram Of Gradients, Shape Contexts, etc. Compressive sensing advocates that if a given vector has a dense support in a feature space, then there should exist an alternative high dimensional subspace where the features are sparse. This idea is leveraged by dictionary learning techniques through learning an overcomplete projection from the feature space so that the vectors are sparse in the new space. The learned dictionary aids in refining the search for the nearest neighbors to a query feature vector into the most likely subspace combination indexed by its non-zero active basis elements. Since the size of the dictionary is generally very large, distinct feature vectors are most likely to have distinct non-zero basis. Utilizing this observation, we propose a novel representation of the feature vectors as tuples of non-zero dictionary indices, which then reduces the ANN search problem to hashing the tuples to an index table, thereby dramatically improving the speed of the search. A drawback of this naive approach is that it is very sensitive to feature perturbations. This can be due to two possibilities: (i) the feature vectors are corrupted by noise, (ii) the true data vectors undergo perturbations themselves. Existing dictionary learning methods address the first possibility. In this work we investigate the second possibility and approach it from a robust optimization perspective. This boils down to the problem of learning a dictionary robust to feature

  13. Global Electricity Trade Network: Structures and Implications.

    PubMed

    Ji, Ling; Jia, Xiaoping; Chiu, Anthony S F; Xu, Ming

    2016-01-01

    Nations increasingly trade electricity, and understanding the structure of the global power grid can help identify nations that are critical for its reliability. This study examines the global grid as a network with nations as nodes and international electricity trade as links. We analyze the structure of the global electricity trade network and find that the network consists of four sub-networks, and provide a detailed analysis of the largest network, Eurasia. Russia, China, Ukraine, and Azerbaijan have high betweenness measures in the Eurasian sub-network, indicating the degrees of centrality of the positions they hold. The analysis reveals that the Eurasian sub-network consists of seven communities based on the network structure. We find that the communities do not fully align with geographical proximity, and that the present international electricity trade in the Eurasian sub-network causes an approximately 11 million additional tons of CO2 emissions. PMID:27504825

  14. Global Electricity Trade Network: Structures and Implications

    PubMed Central

    Ji, Ling; Jia, Xiaoping; Chiu, Anthony S. F.; Xu, Ming

    2016-01-01

    Nations increasingly trade electricity, and understanding the structure of the global power grid can help identify nations that are critical for its reliability. This study examines the global grid as a network with nations as nodes and international electricity trade as links. We analyze the structure of the global electricity trade network and find that the network consists of four sub-networks, and provide a detailed analysis of the largest network, Eurasia. Russia, China, Ukraine, and Azerbaijan have high betweenness measures in the Eurasian sub-network, indicating the degrees of centrality of the positions they hold. The analysis reveals that the Eurasian sub-network consists of seven communities based on the network structure. We find that the communities do not fully align with geographical proximity, and that the present international electricity trade in the Eurasian sub-network causes an approximately 11 million additional tons of CO2 emissions. PMID:27504825

  15. Statistical analysis of 22 public transport networks in Poland.

    PubMed

    Sienkiewicz, Julian; Hołyst, Janusz A

    2005-10-01

    Public transport systems in 22 Polish cities have been analyzed. The sizes of these networks range from N = 152 to 2881. Depending on the assumed definition of network topology, the degree distribution can follow a power law or can be described by an exponential function. Distributions of path lengths in all considered networks are given by asymmetric, unimodal functions. Clustering, assortativity, and betweenness are studied. All considered networks exhibit small-world behavior and are hierarchically organized. A transition between dissortative small networks (N ≲ 500) and assortative large networks (N ≳ 500) is observed. PMID:16383488
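
    The quantities reported above (degree distribution, path lengths, clustering, assortativity, betweenness) can all be computed with standard tooling; the sketch below is a generic illustration on a stand-in random graph, not the authors' pipeline, and assumes the networkx package.

        import networkx as nx

        # Stand-in for a public-transport graph; in the paper, nodes are stops and
        # edges depend on the assumed topology definition (e.g. stops linked when
        # they are served by a common route).
        G = nx.erdos_renyi_graph(500, 0.02, seed=1)

        degrees = [d for _, d in G.degree()]
        print("mean degree:", sum(degrees) / len(degrees))
        print("clustering:", nx.average_clustering(G))
        print("assortativity:", nx.degree_assortativity_coefficient(G))
        print("mean betweenness:",
              sum(nx.betweenness_centrality(G).values()) / G.number_of_nodes())
        if nx.is_connected(G):
            print("mean path length:", nx.average_shortest_path_length(G))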

  16. Fermionic networks

    NASA Astrophysics Data System (ADS)

    Javarone, Marco Alberto

    2016-08-01

    We study the structure of fermionic networks, i.e. a model of networks based on the behavior of fermionic gases, and we analyze dynamical processes over them. In this model, particle dynamics have been mapped to the domain of networks, hence a parameter representing the temperature controls the evolution of the system. In doing so, it is possible to generate adaptive networks, i.e. networks whose structure varies over time. As shown in previous works, networks generated by quantum statistics can undergo critical phenomena such as phase transitions and, moreover, they can be considered as thermodynamic systems. In this study, we analyze fermionic networks and opinion dynamics processes over them, framing this network model as a computational model useful for representing complex and adaptive systems. Results highlight that a strong relation holds between the gas temperature and the structure of the achieved networks. Notably, both the degree distribution and the assortativity vary as the temperature varies, hence we can state that fermionic networks behave as adaptive networks. On the other hand, it is worth highlighting that we did not find a relation between the outcomes of opinion dynamics processes and the gas temperature. Therefore, although the latter plays a fundamental role in gas dynamics, on the network domain its importance is related only to structural properties of fermionic networks.

  17. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.

  18. Network Basics.

    ERIC Educational Resources Information Center

    Tennant, Roy

    1992-01-01

    Explains how users can find and access information resources available on the Internet. Highlights include network information centers (NICs); lists, both formal and informal; computer networking protocols, including international standards; electronic mail; remote log-in; and file transfer. (LRW)

  19. Beyond feedforward models trained by backpropagation: a practical training tool for a more efficient universal approximator.

    PubMed

    Ilin, Roman; Kozma, Robert; Werbos, Paul J

    2008-06-01

    The cellular simultaneous recurrent neural network (SRN) has been shown to be a function approximator more powerful than the multilayer perceptron (MLP). This means that the complexity of the MLP would be prohibitively large for some problems while the SRN could realize the desired mapping with acceptable computational constraints. The speed of training of complex recurrent networks is crucial to their successful application. This work improves on previous results by training the network with an extended Kalman filter (EKF). We implemented a generic cellular SRN (CSRN) and applied it to solving two challenging problems: 2-D maze navigation and a subset of the connectedness problem. The speed of convergence has been improved by several orders of magnitude in comparison with the earlier results in the case of maze navigation, and superior generalization has been demonstrated in the case of connectedness. The implications of these improvements are discussed. PMID:18541494

  20. Neural network tomography: network replication from output surface geometry.

    PubMed

    Minnett, Rupert C J; Smith, Andrew T; Lennon, William C; Hecht-Nielsen, Robert

    2011-06-01

    Multilayer perceptron networks whose outputs consist of affine combinations of hidden units using the tanh activation function are universal function approximators and are used for regression, typically by reducing the MSE with backpropagation. We present a neural network weight learning algorithm that directly positions the hidden units within input space by numerically analyzing the curvature of the output surface. Our results show that under some sampling requirements, this method can reliably recover the parameters of a neural network used to generate a data set. PMID:21377326
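
    A minimal numpy sketch of the network class described above (tanh hidden units, an affine output layer, trained by reducing the MSE with backpropagation) follows; the target function, layer size, and learning rate are illustrative, and this is not the curvature-based tomography algorithm of the paper.

      # Minimal sketch (numpy only; toy 1-D regression target): an MLP whose output is
      # an affine combination of tanh hidden units, trained by gradient descent on the MSE.
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(-2, 2, 200).reshape(-1, 1)
      y = np.sin(np.pi * x)                      # illustrative target function

      H = 20                                     # number of tanh hidden units
      W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
      W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)
      lr = 0.02

      for step in range(10000):
          h = np.tanh(x @ W1 + b1)               # hidden activations
          yhat = h @ W2 + b2                     # affine output layer
          err = yhat - y
          g_out = 2 * err / len(x)               # gradient of the MSE w.r.t. the output
          gW2 = h.T @ g_out; gb2 = g_out.sum(0)
          g_h = (g_out @ W2.T) * (1 - h ** 2)    # backprop through tanh: tanh' = 1 - tanh^2
          gW1 = x.T @ g_h; gb1 = g_h.sum(0)
          W1 -= lr * gW1; b1 -= lr * gb1
          W2 -= lr * gW2; b2 -= lr * gb2

      print("final MSE:", float(np.mean(err ** 2)))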

  1. Epigenetic switches and network transitions

    NASA Astrophysics Data System (ADS)

    Sasai, Masaki

    2012-02-01

    We investigate the dynamics of gene networks which are regulated by both the fast binding/unbinding of transcription factors to/from DNA and the slow processes of chromatin structural change or histone modification. This heterogeneous dynamics, consisting of different time scales, is analyzed by a mean-field approximation and stochastic simulation to show that the network exhibits multiple metastable states and is characterized by transitions among them. We discuss the distribution and fluctuation of states of the core gene network of embryonic stem cells as an example of such heterogeneous dynamics, and the simulated transitions are compared with experimental data on the distribution of stem cell states.

  2. An asymptotic homogenized neutron diffusion approximation. II. Numerical comparisons

    SciTech Connect

    Trahan, T. J.; Larsen, E. W.

    2012-07-01

    In a companion paper, a monoenergetic, homogenized, anisotropic diffusion equation is derived asymptotically for large, 3-D, multiplying systems with a periodic lattice structure [1]. In the present paper, this approximation is briefly compared to several other well known diffusion approximations. Although the derivation is different, the asymptotic diffusion approximation matches that proposed by Deniz and Gelbard, and is closely related to those proposed by Benoist. The focus of this paper, however, is a numerical comparison of the various methods for simple reactor analysis problems in 1-D. The comparisons show that the asymptotic diffusion approximation provides a more accurate estimate of the eigenvalue than the Benoist diffusion approximations. However, the Benoist diffusion approximations and the asymptotic diffusion approximation provide very similar estimates of the neutron flux. The asymptotic method and the Benoist methods both outperform the standard homogenized diffusion approximation, with flux weighted cross sections. (authors)

  3. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1984-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589

  4. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1982-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.

  5. Multivariate Padé Approximations For Solving Nonlinear Diffusion Equations

    NASA Astrophysics Data System (ADS)

    Turut, V.

    2015-11-01

    In this paper, multivariate Padé approximation is applied to power series solutions of nonlinear diffusion equations. As can be seen from the tables, the multivariate Padé approximation (MPA) gives reliable solutions and numerical results.
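
    For readers unfamiliar with Padé approximants, the univariate construction that the multivariate version generalizes can be sketched with SciPy; the use of scipy.interpolate.pade and exp(x) as the test series is purely illustrative and is not the scheme of the paper.

      # Illustration only: a univariate [2/2] Pade approximant of exp(x) built from its
      # Taylor coefficients; the paper itself uses the multivariate construction.
      import numpy as np
      from scipy.interpolate import pade

      taylor = [1, 1, 1/2, 1/6, 1/24]     # Taylor coefficients of exp(x) about x = 0
      p, q = pade(taylor, 2)              # numerator and denominator polynomials
      x = 1.0
      print(p(x) / q(x), np.exp(x))       # ~2.7143 vs 2.7183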

  6. First-principles local density approximation + U and generalized gradient approximation + U study of plutonium oxides.

    PubMed

    Sun, Bo; Zhang, Ping; Zhao, Xian-Geng

    2008-02-28

    The electronic structure and properties of PuO2 and Pu2O3 have been studied from first principles by the all-electron projector-augmented-wave method. The local density approximation+U and the generalized gradient approximation+U formalisms have been used to account for the strong on-site Coulomb repulsion among the localized Pu 5f electrons. We discuss how the properties of PuO2 and Pu2O3 are affected by the choice of U as well as the choice of exchange-correlation potential. Also, the oxidation reaction of Pu2O3, leading to the formation of PuO2, and its dependence on U and the exchange-correlation potential have been studied. Our results show that by choosing an appropriate U, it is promising to correctly and consistently describe the structural, electronic, and thermodynamic properties of PuO2 and Pu2O3, which makes the modeling of redox processes involving Pu-based materials possible. PMID:18315070

  7. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations.

    PubMed

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-01

    In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N⁴). The restricted versions of second RPAs and TDAs are tested with various small molecules and show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance, with a correlation coefficient similar to TDDFT but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground-state correlation energy and remains to be investigated further. Overall, the r2ph-TDA is recommended for studying systems with both single and some low-lying double excitations with moderate accuracy. Some expressions for excited-state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested. PMID:25481124

  8. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    SciTech Connect

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-07

    In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N⁴). The restricted versions of second RPAs and TDAs are tested with various small molecules and show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance, with a correlation coefficient similar to TDDFT but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground-state correlation energy and remains to be investigated further. Overall, the r2ph-TDA is recommended for studying systems with both single and some low-lying double excitations with moderate accuracy. Some expressions for excited-state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

  9. Restricted second random phase approximations and Tamm-Dancoff approximations for electronic excitation energy calculations

    NASA Astrophysics Data System (ADS)

    Peng, Degao; Yang, Yang; Zhang, Peng; Yang, Weitao

    2014-12-01

    In this article, we develop systematically second random phase approximations (RPA) and Tamm-Dancoff approximations (TDA) of particle-hole and particle-particle channels for calculating molecular excitation energies. The second particle-hole RPA/TDA can capture double excitations missed by the particle-hole RPA/TDA and time-dependent density-functional theory (TDDFT), while the second particle-particle RPA/TDA recovers non-highest-occupied-molecular-orbital excitations missed by the particle-particle RPA/TDA. With proper orbital restrictions, these restricted second RPAs and TDAs have a formal scaling of only O(N⁴). The restricted versions of second RPAs and TDAs are tested with various small molecules and show some positive results. Data suggest that the restricted second particle-hole TDA (r2ph-TDA) has the best overall performance, with a correlation coefficient similar to TDDFT but with a larger negative bias. The negative bias of the r2ph-TDA may be induced by the unaccounted ground-state correlation energy and remains to be investigated further. Overall, the r2ph-TDA is recommended for studying systems with both single and some low-lying double excitations with moderate accuracy. Some expressions for excited-state property evaluations, such as ⟨Ŝ²⟩, are also developed and tested.

  10. Pawlak Algebra and Approximate Structure on Fuzzy Lattice

    PubMed Central

    Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai

    2014-01-01

    The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties. PMID:25152922

  11. A new approximation method for stress constraints in structural synthesis

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garret N.; Salajegheh, Eysa

    1987-01-01

    A new approximation method for dealing with stress constraints in structural synthesis is presented. The finite element nodal forces are approximated and these are used to create an explicit, but often nonlinear, approximation to the original problem. The principal motivation is to create the best approximation possible, in order to reduce the number of detailed finite element analyses needed to reach the optimum. Examples are offered and compared with published results, to demonstrate the efficiency and reliability of the proposed method.

  12. Spatial networks

    NASA Astrophysics Data System (ADS)

    Barthélemy, Marc

    2011-02-01

    Complex systems are very often organized under the form of networks where nodes and edges are embedded in space. Transportation and mobility networks, Internet, mobile phone networks, power grids, social and contact networks, and neural networks, are all examples where space is relevant and where topology alone does not contain all the information. Characterizing and understanding the structure and the evolution of spatial networks is thus crucial for many different fields, ranging from urbanism to epidemiology. An important consequence of space on networks is that there is a cost associated with the length of edges which in turn has dramatic effects on the topological structure of these networks. We will thoroughly explain the current state of our understanding of how the spatial constraints affect the structure and properties of these networks. We will review the most recent empirical observations and the most important models of spatial networks. We will also discuss various processes which take place on these spatial networks, such as phase transitions, random walks, synchronization, navigation, resilience, and disease spread.

  13. Vulnerability of network of networks

    NASA Astrophysics Data System (ADS)

    Havlin, S.; Kenett, D. Y.; Bashan, A.; Gao, J.; Stanley, H. E.

    2014-10-01

    Our dependence on networks - be they infrastructure, economic, social or others - leaves us prone to crises caused by the vulnerabilities of these networks. There is a great need to develop new methods to protect infrastructure networks and prevent cascades of failures (especially in cases of coupled networks). Terrorist attacks on transportation networks have traumatized modern societies. With a single blast, it has become possible to paralyze airline traffic, electric power supply, ground transportation or Internet communication. How, and at what cost, can one restructure the network such that it will become more robust against malicious attacks? The gradual increase in attacks on the networks society depends on - Internet, mobile phone, transportation, air travel, banking, etc. - emphasizes the need to develop new strategies to protect and defend these crucial networks of communication and infrastructure. One example is the threat of liquid explosives a few years ago, which completely shut down air travel for days and created extreme changes in regulations. Such threats and dangers warrant the need for new tools and strategies to defend critical infrastructure. In this paper we review recent advances in the theoretical understanding of the vulnerabilities of interdependent networks with and without spatial embedding, attack strategies and their effect on such networks of networks, as well as recently developed strategies to optimize and repair failures caused by such attacks.

  14. United States National seismograph network

    USGS Publications Warehouse

    Masse, R.P.; Filson, J.R.; Murphy, A.

    1989-01-01

    The USGS National Earthquake Information Center (NEIC) has planned and is developing a broadband digital seismograph network for the United States. The network will consist of approximately 150 seismograph stations distributed across the contiguous 48 states and across Alaska, Hawaii, Puerto Rico and the Virgin Islands. Data transmission will be via two-way satellite telemetry from the network sites to a central recording facility at the NEIC in Golden, Colorado. The design goal for the network is the on-scale recording by at least five well-distributed stations of any seismic event of magnitude 2.5 or greater in all areas of the United States except possibly part of Alaska. All event data from the network will be distributed to the scientific community on compact disc with read-only memory (CD-ROM). ?? 1989.

  15. 43 CFR 2201.5 - Exchanges at approximately equal value.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false Exchanges at approximately equal value... PROCEDURES Exchanges-Specific Requirements § 2201.5 Exchanges at approximately equal value. (a) The authorized officer may exchange lands that are of approximately equal value when it is determined that:...

  16. 43 CFR 2201.5 - Exchanges at approximately equal value.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false Exchanges at approximately equal value... PROCEDURES Exchanges-Specific Requirements § 2201.5 Exchanges at approximately equal value. (a) The authorized officer may exchange lands that are of approximately equal value when it is determined that:...

  17. 43 CFR 2201.5 - Exchanges at approximately equal value.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false Exchanges at approximately equal value... PROCEDURES Exchanges-Specific Requirements § 2201.5 Exchanges at approximately equal value. (a) The authorized officer may exchange lands that are of approximately equal value when it is determined that:...

  18. Boundary control of parabolic systems - Finite-element approximation

    NASA Technical Reports Server (NTRS)

    Lasiecka, I.

    1980-01-01

    The finite element approximation of a Dirichlet type boundary control problem for parabolic systems is considered. An approach based on the direct approximation of an input-output semigroup formula is applied. Error estimates are derived for optimal state and optimal control, and it is noted that these estimates are actually optimal with respect to the approximation theoretic properties.

  19. The Use of Approximations in a High School Chemistry Course

    ERIC Educational Resources Information Center

    Matsumoto, Paul S.; Tong, Gary; Lee, Stephanie; Kam, Bonita

    2009-01-01

    While approximations are used frequently in science, high school students may be unaware of the use of approximations in science, the motivation for their use, and the limitations of their use. In the article, we consider the use of approximations in a high school chemistry class as opportunities to increase student understanding of the use of…

  20. The Lockheed Martin Network: An Intranet Analysis.

    ERIC Educational Resources Information Center

    Okey, Robert M.

    Lockheed Martin Corporation, which is comprised of approximately 72 operating units (some 200,000 employees) worldwide, has set up an intranet called the Lockheed Martin Network. On the network, the corporation released a set of corporate policies via web pages which must be implemented by each of its companies. Because each company varied on what…

  1. Universal approximation of extreme learning machine with adaptive growth of hidden nodes.

    PubMed

    Zhang, Rui; Lan, Yuan; Huang, Guang-Bin; Xu, Zong-Ben

    2012-02-01

    Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks which need not be neuron-like and perform well in both regression and classification applications. In this brief, we propose an ELM with adaptive growth of hidden nodes (AG-ELM), which provides a new approach for the automated design of networks. Different from other incremental ELMs (I-ELMs) whose existing hidden nodes are frozen when the new hidden nodes are added one by one, in AG-ELM the number of hidden nodes is determined in an adaptive way in the sense that the existing networks may be replaced by newly generated networks which have fewer hidden nodes and better generalization performance. We then prove that such an AG-ELM using Lebesgue p-integrable hidden activation functions can approximate any Lebesgue p-integrable function on a compact input set. Simulation results demonstrate and verify that this new approach can achieve a more compact network architecture than the I-ELM. PMID:24808516
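
    The basic (fixed-size) ELM that AG-ELM builds on can be sketched in a few lines of numpy: hidden-node parameters are drawn at random and never trained, and only the output weights are computed, here by ordinary least squares. The sizes and target function below are illustrative; the adaptive-growth mechanism of the paper is not reproduced.

      # Minimal sketch of a basic (fixed-size) ELM, not the adaptive-growth AG-ELM:
      # random hidden nodes plus a least-squares output layer. Toy 1-D regression.
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(0, 1, 300).reshape(-1, 1)
      y = np.sin(2 * np.pi * x).ravel()

      L = 50                                        # number of random hidden nodes
      a = rng.normal(size=(1, L))                   # random input weights (untrained)
      b = rng.normal(size=L)                        # random biases (untrained)
      H = np.tanh(x @ a + b)                        # hidden-layer output matrix
      beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights

      rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
      print("training RMSE:", round(float(rmse), 4))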

  2. Accuracy of the Michaelis-Menten approximation when analysing effects of molecular noise.

    PubMed

    Lawson, Michael J; Petzold, Linda; Hellander, Andreas

    2015-05-01

    Quantitative biology relies on the construction of accurate mathematical models, yet the effectiveness of these models is often predicated on making simplifying approximations that allow for direct comparisons with available experimental data. The Michaelis-Menten (MM) approximation is widely used in both deterministic and discrete stochastic models of intracellular reaction networks, owing to the ubiquity of enzymatic activity in cellular processes and the clear biochemical interpretation of its parameters. However, it is not well understood how the approximation applies to the discrete stochastic case or how it extends to spatially inhomogeneous systems. We study the behaviour of the discrete stochastic MM approximation as a function of system size and show that significant errors can occur for small volumes, in comparison with a corresponding mass-action system. We then explore some consequences of these results for quantitative modelling. One consequence is that fluctuation-induced sensitivity, or stochastic focusing, can become highly exaggerated in models that make use of MM kinetics even if the approximations are excellent in a deterministic model. Another consequence is that spatial stochastic simulations based on the reaction-diffusion master equation can become highly inaccurate if the model contains MM terms. PMID:25833240
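
    A deterministic toy comparison of the two descriptions mentioned above (the full mass-action mechanism E + S <-> ES -> E + P versus its Michaelis-Menten reduction) is sketched below with plain Euler integration; the rate constants are illustrative, and the sketch does not reproduce the paper's stochastic or spatial analysis.

      # Toy deterministic comparison: full mass-action enzyme kinetics versus the
      # Michaelis-Menten reduction dS/dt = -Vmax*S/(Km + S). Illustrative constants only.
      k1, km1, k2 = 10.0, 1.0, 1.0          # mass-action rate constants
      E0, S0 = 1.0, 10.0                    # total enzyme and initial substrate
      Km = (km1 + k2) / k1
      Vmax = k2 * E0

      dt, T = 1e-4, 20.0
      steps = int(T / dt)

      S, ES = S0, 0.0                       # full mechanism
      for _ in range(steps):
          E = E0 - ES
          dS = -k1 * E * S + km1 * ES
          dES = k1 * E * S - (km1 + k2) * ES
          S += dt * dS
          ES += dt * dES

      S_mm = S0                             # Michaelis-Menten approximation
      for _ in range(steps):
          S_mm += dt * (-Vmax * S_mm / (Km + S_mm))

      print(f"substrate remaining at t={T}: mass-action {S + ES:.3f}, MM {S_mm:.3f}")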

  3. Control of Stochastic Master Equation Models of Genetic Regulatory Networks by Approximating Their Average Behavior

    NASA Astrophysics Data System (ADS)

    Umut Caglar, Mehmet; Pal, Ranadip

    2010-10-01

    The central dogma of molecular biology states that "information cannot be transferred back from protein to either protein or nucleic acid." However, this assumption is not exactly correct in most cases. There are many feedback loops and interactions between different levels of systems. These types of interactions are hard to analyze due to the lack of data at the cellular level and the probabilistic nature of interactions. Probabilistic models like the Stochastic Master Equation (SME) or deterministic models like differential equations (DE) can be used to analyze these types of interactions. SME models based on the chemical master equation (CME) can provide a detailed representation of the genetic regulatory system, but their use is restricted by large data requirements and the computational cost of the calculations. The differential equation models, on the other hand, have low calculation costs and are much more adequate for generating control procedures for the system, but they are not adequate for investigating the probabilistic nature of interactions. In this work the success of the mapping between SME and DE is analyzed, and the success of a control policy generated by the DE model with respect to the SME model is examined. Index Terms: Stochastic Master Equation models, Differential Equation Models, Control Policy Design, Systems Biology

  4. NEURAL NETWORK AND REGRESSION SPLINE VALUE FUNCTION APPROXIMATIONS FOR STOCHASTIC DYNAMIC PROGRAMMING. (R828207)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  5. Unit hydrograph approximations assuming linear flow through topologically random channel networks.

    USGS Publications Warehouse

    Troutman, B.M.; Karlinger, M.R.

    1985-01-01

    The instantaneous unit hydrograph (IUH) of a drainage basin is derived in terms of fundamental basin characteristics (Z, alpha, beta), where alpha parameterizes the link (channel segment) length distribution, and beta is a vector of hydraulic parameters. -from Authors

  6. Channel Networks

    NASA Astrophysics Data System (ADS)

    Rinaldo, Andrea; Rodriguez-Iturbe, Ignacio; Rigon, Riccardo

    This review proceeds from Luna Leopold's and Ronald Shreve's lasting accomplishments dealing with the study of random-walk and topologically random channel networks. According to the random perspective, which has had a profound influence on the interpretation of natural landforms, nature's resiliency in producing recurrent networks and landforms was interpreted to be the consequence of chance. In fact, central to models of topologically random networks is the assumption of equal likelihood of any tree-like configuration. However, a general framework of analysis exists that argues that all possible network configurations draining a fixed area are not necessarily equally likely. Rather, a probability P(s) is assigned to a particular spanning tree configuration, say s, which can be generally assumed to obey a Boltzmann distribution: P(s) ∝ e^{-H(s)/T}, where T is a parameter and H(s) is a global property of the network configuration s related to energetic characters, i.e. its Hamiltonian. One extreme case is the random topology model where all trees are equally likely, i.e. the limit case T → ∞. The other extreme case is T → 0, and this corresponds to network configurations that tend to minimize their total energy dissipation to improve their likelihood. Networks obtained in this manner are termed optimal channel networks (OCNs). Observational evidence suggests that the characters of real river networks are reproduced extremely well by OCNs. Scaling properties of energy and entropy of OCNs suggest that large network development is likely to effectively occur at zero temperature (i.e. minimizing its Hamiltonian). We suggest a corollary of dynamic accessibility of a network configuration and speculate towards a thermodynamics of critical self-organization. We thus conclude that both chance and necessity are equally important ingredients for the dynamic origin of channel networks---and perhaps of the geometry of nature.
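
    The Boltzmann weighting P(s) ∝ e^{-H(s)/T} and its two limits can be illustrated with a generic Metropolis sampler over a toy discrete state space; this is only a sketch of the weighting itself (the energy function and state space are made up), not a sampler over actual spanning-tree configurations.

      # Generic Metropolis sketch for P(s) ~ exp(-H(s)/T) on a toy state space: large T
      # visits configurations almost uniformly (the random-topology limit), small T
      # concentrates on the minimum-energy configuration (the OCN limit).
      import math, random

      states = list(range(10))
      H = {s: (s - 3) ** 2 for s in states}        # toy "energy" of configuration s

      def sample(T, n=100000, seed=1):
          random.seed(seed)
          s = random.choice(states)
          counts = {k: 0 for k in states}
          for _ in range(n):
              s_new = random.choice(states)        # propose any configuration
              dH = H[s_new] - H[s]
              if dH <= 0 or random.random() < math.exp(-dH / T):
                  s = s_new
              counts[s] += 1
          return counts

      print(sample(T=100.0))   # roughly uniform occupation of configurations
      print(sample(T=0.05))    # essentially all mass on the minimum-energy state s = 3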

  7. Network morphospace.

    PubMed

    Avena-Koenigsberger, Andrea; Goñi, Joaquín; Solé, Ricard; Sporns, Olaf

    2015-02-01

    The structure of complex networks has attracted much attention in recent years. It has been noted that many real-world examples of networked systems share a set of common architectural features. This raises important questions about their origin, for example whether such network attributes reflect common design principles or constraints imposed by selectional forces that have shaped the evolution of network topology. Is it possible to place the many patterns and forms of complex networks into a common space that reveals their relations, and what are the main rules and driving forces that determine which positions in such a space are occupied by systems that have actually evolved? We suggest that these questions can be addressed by combining concepts from two currently relatively unconnected fields. One is theoretical morphology, which has conceptualized the relations between morphological traits defined by mathematical models of biological form. The second is network science, which provides numerous quantitative tools to measure and classify different patterns of local and global network architecture across disparate types of systems. Here, we explore a new theoretical concept that lies at the intersection between both fields, the 'network morphospace'. Defined by axes that represent specific network traits, each point within such a space represents a location occupied by networks that share a set of common 'morphological' characteristics related to aspects of their connectivity. Mapping a network morphospace reveals the extent to which the space is filled by existing networks, thus allowing a distinction between actual and impossible designs and highlighting the generative potential of rules and constraints that pervade the evolution of complex systems. PMID:25540237

  8. Network morphospace

    PubMed Central

    Avena-Koenigsberger, Andrea; Goñi, Joaquín; Solé, Ricard; Sporns, Olaf

    2015-01-01

    The structure of complex networks has attracted much attention in recent years. It has been noted that many real-world examples of networked systems share a set of common architectural features. This raises important questions about their origin, for example whether such network attributes reflect common design principles or constraints imposed by selectional forces that have shaped the evolution of network topology. Is it possible to place the many patterns and forms of complex networks into a common space that reveals their relations, and what are the main rules and driving forces that determine which positions in such a space are occupied by systems that have actually evolved? We suggest that these questions can be addressed by combining concepts from two currently relatively unconnected fields. One is theoretical morphology, which has conceptualized the relations between morphological traits defined by mathematical models of biological form. The second is network science, which provides numerous quantitative tools to measure and classify different patterns of local and global network architecture across disparate types of systems. Here, we explore a new theoretical concept that lies at the intersection between both fields, the ‘network morphospace’. Defined by axes that represent specific network traits, each point within such a space represents a location occupied by networks that share a set of common ‘morphological’ characteristics related to aspects of their connectivity. Mapping a network morphospace reveals the extent to which the space is filled by existing networks, thus allowing a distinction between actual and impossible designs and highlighting the generative potential of rules and constraints that pervade the evolution of complex systems. PMID:25540237

  9. Innovation Networks

    NASA Astrophysics Data System (ADS)

    Pyka, Andreas; Scharnhorst, Andrea

    The idea for this book started when we organized a topical workshop entitled "Innovation Networks - New Approaches in Modeling and Analyzing" (held in Augsburg, Germany in October 2005), under the auspices of Exystence, a network of excellence funded in the European Union's Fifth Framework Program. Unlike other conferences on innovation and networks, however, this workshop brought together scientists from economics, sociology, communication science, science and technology studies, and physics. With this book we aim to build further on a bridge connecting the bodies of knowledge on networks in economics, the social sciences and, more recently, statistical physics.

  10. Network reliability

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1985-01-01

    Network control (or network management) functions are essential for efficient and reliable operation of a network. Some control functions are currently included as part of the Open System Interconnection model. For local area networks, it is widely recognized that there is a need for additional control functions, including fault isolation functions, monitoring functions, and configuration functions. These functions can be implemented in either a central or distributed manner. The Fiber Distributed Data Interface Medium Access Control and Station Management protocols provide an example of distributed implementation. Relative information is presented here in outline form.

  11. Structural damage assessment using linear approximation with maximum entropy and transmissibility data

    NASA Astrophysics Data System (ADS)

    Meruane, V.; Ortiz-Bernardin, A.

    2015-03-01

    Supervised learning algorithms have been proposed as a suitable alternative to model updating methods in structural damage assessment, with Artificial Neural Networks being the most frequently used. Notwithstanding, the slow learning speed and the large number of parameters that need to be tuned within the training stage have been a major bottleneck in their application. This article presents a new algorithm for real-time damage assessment that uses a linear approximation method in conjunction with antiresonant frequencies that are identified from transmissibility functions. The linear approximation is handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided and data is processed in a period of time that is comparable to that of Neural Networks. The performance of the proposed methodology is validated by considering three experimental structures: an eight-degree-of-freedom (DOF) mass-spring system, a beam, and an exhaust system of a car. To demonstrate the potential of the proposed algorithm over existing ones, the obtained results are compared with those of a model updating method based on parallel genetic algorithms and a multilayer feedforward neural network approach.

  12. Approximate N-Player Nonzero-Sum Game Solution for an Uncertain Continuous Nonlinear System.

    PubMed

    Johnson, Marcus; Kamalapurkar, Rushikesh; Bhasin, Shubhendu; Dixon, Warren E

    2015-08-01

    An approximate online equilibrium solution is developed for an N -player nonzero-sum game subject to continuous-time nonlinear unknown dynamics and an infinite horizon quadratic cost. A novel actor-critic-identifier structure is used, wherein a robust dynamic neural network is used to asymptotically identify the uncertain system with additive disturbances, and a set of critic and actor NNs are used to approximate the value functions and equilibrium policies, respectively. The weight update laws for the actor neural networks (NNs) are generated using a gradient-descent method, and the critic NNs are generated by least square regression, which are both based on the modified Bellman error that is independent of the system dynamics. A Lyapunov-based stability analysis shows that uniformly ultimately bounded tracking is achieved, and a convergence analysis demonstrates that the approximate control policies converge to a neighborhood of the optimal solutions. The actor, critic, and identifier structures are implemented in real time continuously and simultaneously. Simulations on two and three player games illustrate the performance of the developed method. PMID:25312943

  13. Discrete-time nonlinear HJB solution using approximate dynamic programming: convergence proof.

    PubMed

    Al-Tamimi, Asma; Lewis, Frank L; Abu-Khalaf, Murad

    2008-08-01

    Convergence of the value-iteration-based heuristic dynamic programming (HDP) algorithm is proven in the case of general nonlinear systems. That is, it is shown that HDP converges to the optimal control and the optimal value function that solves the Hamilton-Jacobi-Bellman equation appearing in infinite-horizon discrete-time (DT) nonlinear optimal control. It is assumed that, at each iteration, the value and action update equations can be exactly solved. The following two standard neural networks (NNs) are used: a critic NN is used to approximate the value function, whereas an action network is used to approximate the optimal control policy. It is stressed that this approach allows the implementation of HDP without knowing the internal dynamics of the system. The exact solution assumption holds for some classes of nonlinear systems and, in particular, for the DT linear quadratic regulator (LQR), where the action is linear, the value is quadratic in the states, and the NNs have zero approximation error. It is stressed that, for the LQR, HDP may be implemented without knowing the system A matrix by using two NNs. This fact is not generally appreciated in the folklore of HDP for the DT LQR, where only one critic NN is generally used. PMID:18632382
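
    For the DT LQR special case mentioned above, the value and action updates can be carried out exactly without NNs, and HDP reduces to iterating the Riccati recursion; the short sketch below (with illustrative system matrices) shows this iteration converging to the solution of the discrete-time algebraic Riccati equation.

      # Value iteration (HDP with zero approximation error) for a DT LQR: the value
      # function is x'Px, and iterating the value/action updates is the Riccati recursion.
      import numpy as np

      A = np.array([[1.0, 0.1], [0.0, 1.0]])   # illustrative system matrices
      B = np.array([[0.0], [0.1]])
      Q = np.eye(2)
      R = np.array([[1.0]])

      P = np.zeros((2, 2))                      # initial value-function weights
      for k in range(1000):
          K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # action (policy) update
          P_next = Q + A.T @ P @ (A - B @ K)                  # value update
          if np.max(np.abs(P_next - P)) < 1e-10:
              break
          P = P_next

      print("converged P:\n", P)
      print("optimal feedback gain K:", K)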

  14. Wavelet approximation of correlated wave functions. II. Hyperbolic wavelets and adaptive approximation schemes

    NASA Astrophysics Data System (ADS)

    Luo, Hongjun; Kolb, Dietmar; Flad, Heinz-Jurgen; Hackbusch, Wolfgang; Koprucki, Thomas

    2002-08-01

    We have studied various aspects concerning the use of hyperbolic wavelets and adaptive approximation schemes for wavelet expansions of correlated wave functions. In order to analyze the consequences of reduced regularity of the wave function at the electron-electron cusp, we first considered a realistic exactly solvable many-particle model in one dimension. Convergence rates of wavelet expansions, with respect to L2 and H1 norms and the energy, were established for this model. We compare the performance of hyperbolic wavelets and their extensions through adaptive refinement in the cusp region, to a fully adaptive treatment based on the energy contribution of individual wavelets. Although hyperbolic wavelets show an inferior convergence behavior, they can be easily refined in the cusp region yielding an optimal convergence rate for the energy. Preliminary results for the helium atom are presented, which demonstrate the transferability of our observations to more realistic systems. We propose a contraction scheme for wavelets in the cusp region, which reduces the number of degrees of freedom and yields a favorable cost to benefit ratio for the evaluation of matrix elements.

  15. Approximate solutions for certain bidomain problems in electrocardiography

    NASA Astrophysics Data System (ADS)

    Johnston, Peter R.

    2008-10-01

    The simulation of problems in electrocardiography using the bidomain model for cardiac tissue often creates issues with satisfaction of the boundary conditions required to obtain a solution. Recent studies have proposed approximate methods for solving such problems by satisfying the boundary conditions only approximately. This paper presents an analysis of their approximations using a similar method, but one which ensures that the boundary conditions are satisfied during the whole solution process. Also considered are additional functional forms, used in the approximate solutions, which are more appropriate to specific boundary conditions. The analysis shows that the approximations introduced by Patel and Roth [Phys. Rev. E 72, 051931 (2005)] generally give accurate results. However, there are certain situations where functional forms based on the geometry of the problem under consideration can give improved approximations. It is also demonstrated that the recent methods are equivalent to different approaches to solving the same problems introduced 20 years earlier.

  16. The selection of approximating functions for tabulated numerical data

    NASA Technical Reports Server (NTRS)

    Ingram, H. L.; Hooker, W. R.

    1972-01-01

    A computer program was developed that selects, from a list of candidate functions, the approximating functions and associated coefficients which result in the best curve fit of a given set of numerical data. The advantages of the approach used here are: (1) Multivariable approximations can be performed. (2) Flexibility with respect to the type of approximations used is available. (3) The program is designed to choose the best terms to be used in the approximation from an arbitrary list of possible terms so that little knowledge of the proper approximating form is required. (4) Recursion relations are used in determining the coefficients of the approximating functions, which reduces the computer execution time of the program.
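
    The idea of choosing the best terms from an arbitrary candidate list can be sketched as a greedy forward selection scored by the least-squares residual, as below; the data, the candidate terms, and the use of numpy.linalg.lstsq are illustrative, and the recursion relations for the coefficients mentioned above are not reproduced.

      # Sketch: greedy forward selection of approximating terms from a candidate list,
      # scored by the least-squares residual on "tabulated" data (illustrative toy data).
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(0.1, 3.0, 60)
      y = 2.0 + 0.5 * x**2 + 0.3 * np.log(x) + rng.normal(0, 0.01, x.size)

      candidates = {                                # arbitrary list of possible terms
          "1": np.ones_like(x), "x": x, "x^2": x**2, "x^3": x**3,
          "log x": np.log(x), "exp(-x)": np.exp(-x), "sin x": np.sin(x),
      }

      chosen = []
      for _ in range(3):                            # greedily pick the 3 most useful terms
          best_name, best_score = None, np.inf
          for name, col in candidates.items():
              if name in chosen:
                  continue
              A = np.column_stack([candidates[n] for n in chosen] + [col])
              coef, *_ = np.linalg.lstsq(A, y, rcond=None)
              score = np.sum((A @ coef - y) ** 2)
              if score < best_score:
                  best_name, best_score = name, score
          chosen.append(best_name)

      A = np.column_stack([candidates[n] for n in chosen])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      print("selected terms:", chosen)
      print("coefficients  :", np.round(coef, 3))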

  17. The gravimetric boundary value problem in spheroidal approximation

    NASA Astrophysics Data System (ADS)

    Panou, Georgios

    2015-04-01

    In this presentation the linear gravimetric boundary value problem is discussed in spheroidal approximation. The input to the problem is gravity disturbances, using the known Earth's topography as boundary and corresponds to an oblique derivative problem. From the physical viewpoint, it has many advantages and can serve as the basis in establishing a world vertical datum. Adopting the spheroidal approximation in this boundary value problem, an integral equation results which can be solved analytically using successive approximations. However, the mathematical model becomes simpler and can be solved more easily by applying certain permissible approximations: neglecting the Earth's topography, a spheroidal normal derivative (Neumann) problem is obtained. Under the spherical approximation, the result is a normal derivative problem plus suitable corrections. In this case, neglecting the Earth's topography, the solution corresponds to the well-known spherical Hotine integral. Finally, the relative errors in the above approximations and derivations are quantitatively estimated.

  18. A lattice-theoretic approach to multigranulation approximation space.

    PubMed

    He, Xiaoli; She, Yanhong

    2014-01-01

    In this paper, we mainly investigate the equivalence between multigranulation approximation space and single-granulation approximation space from the lattice-theoretic viewpoint. It is proved that multigranulation approximation space is equivalent to single-granulation approximation space if and only if the pair of multigranulation rough approximation operators [Formula in text] forms an order-preserving Galois connection, if and only if the collection of lower (resp., upper) definable sets forms an intersection (resp., union) structure, if and only if the collection of multigranulation upper (lower) definable sets forms a distributive lattice when n = 2, and if and only if [Formula in text]. The obtained results help us gain more insights into the mathematical structure of multigranulation approximation spaces. PMID:25243226

  19. A Lattice-Theoretic Approach to Multigranulation Approximation Space

    PubMed Central

    He, Xiaoli

    2014-01-01

    In this paper, we mainly investigate the equivalence between multigranulation approximation space and single-granulation approximation space from the lattice-theoretic viewpoint. It is proved that multigranulation approximation space is equivalent to single-granulation approximation space if and only if the pair of multigranulation rough approximation operators $(\overline{\sum_{i=1}^{n}R_i}, \underline{\sum_{i=1}^{n}R_i})$ forms an order-preserving Galois connection, if and only if the collection of lower (resp., upper) definable sets forms an intersection (resp., union) structure, if and only if the collection of multigranulation upper (lower) definable sets forms a distributive lattice when n = 2, and if and only if $\forall X \subseteq U$, $\underline{\sum_{i=1}^{n}R_i}(X) = \bigcap_{i=1}^{n}\underline{R_i}(X)$. The obtained results help us gain more insights into the mathematical structure of multigranulation approximation spaces. PMID:25243226
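
    The classical single-granulation Pawlak approximations that these multigranulation operators generalize can be sketched with plain Python sets; the universe and the partition below are illustrative.

      # Sketch of the classical (single-granulation) Pawlak lower/upper approximations;
      # the multigranulation operators above combine such approximations over several
      # equivalence relations at once. Universe and partition are illustrative.
      U = set(range(10))
      blocks = [{0, 1}, {2, 3, 4}, {5, 6}, {7, 8, 9}]   # equivalence classes of one relation

      def lower(X):
          return set().union(*[b for b in blocks if b <= X])     # classes contained in X

      def upper(X):
          return set().union(*[b for b in blocks if b & X])      # classes meeting X

      X = {1, 2, 3, 4, 6}
      print("lower:", lower(X))   # {2, 3, 4}
      print("upper:", upper(X))   # {0, 1, 2, 3, 4, 5, 6}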

  20. Finding communities in sparse networks

    PubMed Central

    Singh, Abhinav; Humphries, Mark D.

    2015-01-01

    Spectral algorithms based on matrix representations of networks are often used to detect communities, but classic spectral methods based on the adjacency matrix and its variants fail in sparse networks. New spectral methods based on non-backtracking random walks have recently been introduced that successfully detect communities in many sparse networks. However, the spectrum of non-backtracking random walks ignores hanging trees in networks that can contain information about their community structure. We introduce the reluctant backtracking operators that explicitly account for hanging trees as they admit a small probability of returning to the immediately previous node, unlike the non-backtracking operators that forbid an immediate return. We show that the reluctant backtracking operators can detect communities in certain sparse networks where the non-backtracking operators cannot, while performing comparably on benchmark stochastic block model networks and real world networks. We also show that the spectrum of the reluctant backtracking operator approximately optimises the standard modularity function. Interestingly, for this family of non- and reluctant-backtracking operators the main determinant of performance on real-world networks is whether or not they are normalised to conserve probability at each node. PMID:25742951
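
    The basic object involved, the non-backtracking matrix indexed by directed edges, can be built directly as in the sketch below (the graph is illustrative, and the reluctant-backtracking variant of the paper is not implemented); community labels are then typically read off the leading eigenvectors of this matrix.

      # Sketch: the non-backtracking matrix B of a small graph, indexed by directed edges,
      # with B[(u->v), (v->w)] = 1 iff w != u, and its leading eigenvalues. Illustrative
      # graph only; this is not the reluctant-backtracking operator of the paper.
      import numpy as np
      import networkx as nx

      G = nx.karate_club_graph()
      edges = [(u, v) for u, v in G.edges()] + [(v, u) for u, v in G.edges()]
      idx = {e: i for i, e in enumerate(edges)}

      B = np.zeros((len(edges), len(edges)))
      for (u, v) in edges:
          for w in G.neighbors(v):
              if w != u:                            # forbid an immediate return
                  B[idx[(u, v)], idx[(v, w)]] = 1.0

      eigvals = sorted(np.linalg.eigvals(B), key=abs, reverse=True)
      print("leading eigenvalues:", np.round(eigvals[:4], 3))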

  1. Multijet final states: exact results and the leading pole approximation

    SciTech Connect

    Ellis, R.K.; Owens, J.F.

    1984-09-01

    Exact results for the process gg ..-->.. ggg are compared with those obtained using the leading pole approximation. Regions of phase space where the approximation breaks down are discussed. A specific example relevant for background estimates to W boson production is presented. It is concluded that in this instance the leading pole approximation may underestimate the standard QCD background by more than a factor of two in certain kinematic regions of physical interest.

  2. Generalized Lorentzian approximations for the Voigt line shape.

    PubMed

    Martin, P; Puerta, J

    1981-01-15

    The object of the work reported in this paper was to find a simple and easy to calculate approximation to the Voigt function using the Padé method. To do this we calculated the multipole approximation to the complex function as the error function or as the plasma dispersion function. This generalized Lorentzian approximation can be used instead of the exact function in experiments that do not require great accuracy. PMID:20309100
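
    For context, the "exact" Voigt profile is commonly evaluated through the Faddeeva function, and the sketch below compares it with a plain Lorentzian of the same half-width; it assumes scipy.special.wofz is available and is not the Padé/multipole approximant developed in the paper.

      # Reference evaluation of the Voigt profile via the Faddeeva function w(z),
      # V(x; sigma, gamma) = Re[w((x + i*gamma)/(sigma*sqrt(2)))] / (sigma*sqrt(2*pi)),
      # next to a plain Lorentzian. Not the Pade approximant of the paper.
      import numpy as np
      from scipy.special import wofz

      def voigt(x, sigma, gamma):
          z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
          return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

      def lorentzian(x, gamma):
          return gamma / (np.pi * (x ** 2 + gamma ** 2))

      x = np.linspace(-5, 5, 11)
      print(np.round(voigt(x, sigma=1.0, gamma=0.5), 4))
      print(np.round(lorentzian(x, gamma=0.5), 4))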

  3. On approximating hereditary dynamics by systems of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Cliff, E. M.; Burns, J. A.

    1978-01-01

    The paper deals with methods of obtaining approximate solutions to linear retarded functional differential equations (hereditary systems). The basic notion is to project the infinite dimensional space of initial functions for the hereditary system onto a finite dimensional subspace. Within this framework, two particular schemes are discussed. The first uses well-known piecewise constant approximations, while the second is a new method based on piecewise linear approximating functions. Numerical results are given.

  4. LCAO approximation for scaling properties of the Menger sponge fractal.

    PubMed

    Sakoda, Kazuaki

    2006-11-13

    The electromagnetic eigenmodes of a three-dimensional fractal called the Menger sponge were analyzed by the LCAO (linear combination of atomic orbitals) approximation and a first-principle calculation based on the FDTD (finite-difference time-domain) method. Due to the localized nature of the eigenmodes, the LCAO approximation gives a good guiding principle to find scaled eigenfunctions and to observe the approximate self-similarity in the spectrum of the localized eigenmodes. PMID:19529555

  5. Stochastic approximation methods-Powerful tools for simulation and optimization: A survey of some recent work on multi-agent systems and cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Yin, George; Wang, Le Yi; Zhang, Hongwei

    2014-12-01

    Stochastic approximation methods have found extensive and diversified applications. Recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey on some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided.
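
    The prototypical stochastic approximation recursion (Robbins-Monro root finding from noisy evaluations, with step sizes a_n = a/n) is sketched below; the target function and noise level are toy choices, far simpler than the networked vehicle-platoon settings surveyed in the paper.

      # Robbins-Monro sketch: find the root of an unknown increasing mean function g
      # using only noisy evaluations, with decreasing step sizes a_n = 1/n.
      import numpy as np

      rng = np.random.default_rng(0)
      g = lambda x: 2.0 * (x - 1.5)                  # unknown to the algorithm; root at 1.5
      noisy_g = lambda x: g(x) + rng.normal(0, 1.0)  # only noisy samples are observed

      x = 0.0
      for n in range(1, 20001):
          x -= (1.0 / n) * noisy_g(x)                # x_{n+1} = x_n - a_n * Y_n
      print("root estimate:", round(x, 3))           # close to 1.5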

  6. Stochastic approximation methods-Powerful tools for simulation and optimization: A survey of some recent work on multi-agent systems and cyber-physical systems

    SciTech Connect

    Yin, George; Wang, Le Yi; Zhang, Hongwei

    2014-12-10

    Stochastic approximation methods have found extensive and diversified applications. Recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey on some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided.

  7. A commercial application of survivable network design: ITP/INPLANS CCS Network Topology Analyzer

    SciTech Connect

    Mihail, M.; Shallcross, D.; Dean, N.; Mostrel, M.

    1996-12-31

    The ITP/INPLANS CCS Network Topology Analyzer is a Bellcore product which performs automated design of cost effective survivable CCS (Common Channel Signaling) networks, with survivability meaning that certain path-connectivity is preserved under limited failures of network elements. The algorithmic core of this product consists of suitable extensions of primal-dual approximation schemes for Steiner network problems. Even though most of the survivability problems arising in CCS networks are not strictly of the form for which the approximation algorithms with proven performance guarantees apply, we implemented modifications of these algorithms with success: In addition to duality-based performance guarantees that indicate, mathematically, discrepancy of no more than 20% from optimality for generic Steiner problems and no more than 40% for survivable CCS networks, our software passed all commercial benchmark tests, and our code was deployed with the August 94 release of the product. CCS networks fall in the general category of low bit-rate backbone networks. The main characteristic of survivability problems for these networks is that each edge, once present, can be assumed to carry arbitrarily many paths. For high bit-rate backbone networks, such as the widely used ATM and SONET, this is no longer the case. We discuss versions of network survivability with capacitated edges that appear to model survivability considerations in such networks.

  8. Approximation functions for airblast environments from buried charges

    SciTech Connect

    Reichenbach, H.; Behrens, K.; Kuhl, A.L.

    1993-11-01

    In EMI report E 1/93, "Airblast Environments from Buried HE-Charges," fit functions were used for the compact description of blastwave parameters. The coefficients of these functions were approximated by means of second order polynomials versus DOB. In most cases, the agreement with the measured data was satisfactory; to reduce remaining noticeable deviations, an approximation by polygons (i.e., piecewise-linear approximation) was used instead of polynomials. The present report describes the results of the polygon approximation and compares them to previous data. We conclude that the polygon representation leads to a better agreement with the measured data.

  9. 13. BUILDING #5, HOSPITAL, RENDERING OF EAST ELEVATION, APPROXIMATELY 1946 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. BUILDING #5, HOSPITAL, RENDERING OF EAST ELEVATION, APPROXIMATELY 1946 - Sioux Falls Veterans Administration Medical & Regional Office Center, 2501 West Twenty-second, Sioux Falls, Minnehaha County, SD

  10. Impact of inflow transport approximation on light water reactor analysis

    NASA Astrophysics Data System (ADS)

    Choi, Sooyoung; Smith, Kord; Lee, Hyun Chul; Lee, Deokjung

    2015-10-01

    The impact of the inflow transport approximation on light water reactor analysis is investigated, and it is verified that the inflow transport approximation significantly improves the accuracy of the transport and transport/diffusion solutions. A methodology for an inflow transport approximation is implemented in order to generate an accurate transport cross section. The inflow transport approximation is compared to the conventional methods, which are the consistent-PN and the outflow transport approximations. The three transport approximations are implemented in the lattice physics code STREAM, and verification is performed for various verification problems in order to investigate their effects and accuracy. From the verification, it is noted that the consistent-PN and the outflow transport approximations cause significant error in calculating the eigenvalue and the power distribution. The inflow transport approximation shows very accurate and precise results for the verification problems. The inflow transport approximation shows significant improvements not only for the high leakage problem but also for practical large core problem analyses.

  11. Spatial Ability Explains the Male Advantage in Approximate Arithmetic

    PubMed Central

    Wei, Wei; Chen, Chuansheng; Zhou, Xinlin

    2016-01-01

    Previous research has shown that females consistently outperform males in exact arithmetic, perhaps due to the former’s advantage in language processing. Much less is known about gender differences in approximate arithmetic. Given that approximate arithmetic is closely associated with visuospatial processing, which shows a male advantage, we hypothesized that males would perform better than females in approximate arithmetic. In two experiments (496 children in Experiment 1 and 554 college students in Experiment 2), we found that males showed better performance in approximate arithmetic, which was accounted for by gender differences in spatial ability. PMID:27014124

  12. How to Solve Schroedinger Problems by Approximating the Potential Function

    SciTech Connect

    Ledoux, Veerle; Van Daele, Marnix

    2010-09-30

    We give a survey of the efforts toward solving the Schroedinger equation by using piecewise approximations of the potential function. Two types of approximating potentials have been considered in the literature, namely piecewise constant and piecewise linear functions. For polynomials of higher degree the approximating problem is not so easy to integrate analytically. This obstacle can be circumvented by using a perturbative approach to construct the solution of the approximating problem, leading to the so-called piecewise perturbation methods (PPM). We discuss the construction of a PPM in its most convenient form for applications and show that different PPM versions (CPM, LPM) are in fact equivalent.
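
    The piecewise-constant idea can be illustrated by replacing a smooth potential with its block averages and comparing eigenvalues on a grid; the sketch below (units hbar = m = 1, harmonic oscillator, plain finite-difference diagonalization) is only that illustration and not the CPM/LPM perturbation machinery itself.

      # Sketch: approximate a smooth 1-D potential by a piecewise-constant (step) function
      # and compare the lowest eigenvalues of the finite-difference Hamiltonian built from
      # the exact and the stepped potentials. Units hbar = m = 1; grid sizes illustrative.
      import numpy as np

      N, L = 600, 10.0
      x = np.linspace(-L / 2, L / 2, N)
      h = x[1] - x[0]
      V = 0.5 * x ** 2                                          # harmonic oscillator potential

      V_step = np.repeat(V.reshape(-1, 20).mean(axis=1), 20)    # average over 20-point blocks

      def lowest_eigenvalues(Vdiag, k=3):
          main = 1.0 / h ** 2 + Vdiag                            # -(1/2) d^2/dx^2 discretized
          off = -0.5 / h ** 2 * np.ones(N - 1)
          Ham = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
          return np.linalg.eigvalsh(Ham)[:k]

      print("smooth potential   :", np.round(lowest_eigenvalues(V), 4))       # ~0.5, 1.5, 2.5
      print("piecewise-constant :", np.round(lowest_eigenvalues(V_step), 4))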

  13. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-Hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and the reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-Hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared, and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed, based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the cost of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for selecting an appropriate approximation technique as a function of the matrix size, the number of design variables, the number of eigenvalues of interest, and the number of design points at which the approximation is sought.
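
    As a minimal numerical illustration of the reanalysis idea (restricted to a small random symmetric matrix rather than the non-Hermitian problems treated above; the matrices, perturbation sizes, and seed are made up), the sketch below compares a first-order Taylor step for a perturbed eigenvalue with a Rayleigh-quotient reanalysis that uses a first-order eigenvector update:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 6
        A0 = rng.normal(size=(n, n)); A0 = 0.5 * (A0 + A0.T)   # baseline symmetric matrix
        dA = rng.normal(size=(n, n)); dA = 0.5 * (dA + dA.T)   # dA/dp for one design variable

        w, Vec = np.linalg.eigh(A0)
        lam0, v0 = w[0], Vec[:, 0]                              # lowest eigenpair at p = 0

        for dp in (0.02, 0.05, 0.1):
            A = A0 + dp * dA
            wA = np.linalg.eigvalsh(A)
            exact = wA[np.argmin(np.abs(wA - lam0))]            # branch continuing from lam0
            taylor = lam0 + dp * (v0 @ dA @ v0)                 # first-order Taylor series
            # first-order eigenvector update, then a Rayleigh-quotient reanalysis with it
            coeffs = (Vec[:, 1:].T @ (dA @ v0)) / (lam0 - w[1:])
            v1 = v0 + dp * (Vec[:, 1:] @ coeffs)
            rayleigh = (v1 @ A @ v1) / (v1 @ v1)
            print(f"dp={dp:.2f}  exact={exact:+.5f}  taylor={taylor:+.5f}  rayleigh={rayleigh:+.5f}")

    Because the Rayleigh quotient is stationary at an eigenvector, its error is quadratic in the eigenvector error, which is the basic reason quotient-based reanalysis tends to outperform a plain Taylor step as the design perturbation grows.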

  14. Monotonically improving approximate answers to relational algebra queries

    NASA Technical Reports Server (NTRS)

    Smith, Kenneth P.; Liu, J. W. S.

    1989-01-01

    We present here a query processing method that produces approximate answers to queries posed in standard relational algebra. This method is monotone in the sense that the accuracy of the approximate result improves with the amount of time spent producing the result. This strategy enables us to trade the time to produce the result for the accuracy of the result. An approximate relational model that characterizes approximate relations and a partial order for comparing them is developed. Relational operators which operate on and return approximate relations are defined.

  15. Bethe free-energy approximations for disordered quantum systems

    NASA Astrophysics Data System (ADS)

    Biazzo, I.; Ramezanpour, A.

    2014-06-01

    Given a locally consistent set of reduced density matrices, we construct approximate density matrices which are globally consistent with the local density matrices we started from when the trial density matrix has a tree structure. We employ the cavity method of statistical physics to find the optimal density matrix representation by slowly decreasing the temperature in an annealing algorithm, or by minimizing an approximate Bethe free energy depending on the reduced density matrices and some cavity messages originating from the Bethe approximation of the entropy. We obtain the classical Bethe expression for the entropy within a naive (mean-field) approximation of the cavity messages, which is expected to work well at high temperatures. In the next order of the approximation, we obtain another expression for the Bethe entropy depending only on the diagonal elements of the reduced density matrices. In principle, we can improve the entropy approximation by considering more accurate cavity messages in the Bethe approximation of the entropy. We compare the annealing algorithm and the naive approximation of the Bethe entropy with exact and approximate numerical simulations for small and large samples of the random transverse Ising model on random regular graphs.

  16. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret the sensitivity equations as differential equations that may be solved explicitly for closed-form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes, and static displacements. The DEB approximation method was applied to a cantilever beam and the results compared with the commonly used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate and, in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.

  17. Legendre-tau approximations for functional differential equations

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1986-01-01

    The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison with cubic spline approximation is made.

  18. Legendre-Tau approximations for functional differential equations

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1983-01-01

    The numerical approximation of solutions to linear functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison with cubic spline approximations is made.

  19. An approximation based global optimization strategy for structural synthesis

    NASA Technical Reports Server (NTRS)

    Sepulveda, A. E.; Schmit, L. A.

    1991-01-01

    A global optimization strategy for structural synthesis based on approximation concepts is presented. The methodology involves the solution of a sequence of highly accurate approximate problems using a global optimization algorithm. The global optimization algorithm implemented consists of a branch-and-bound strategy based on the interval evaluation of the objective and constraint functions, combined with a local feasible-directions algorithm. The approximate design optimization problems are constructed using first-order approximations of selected intermediate response quantities in terms of intermediate design variables. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.
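
    The branch-and-bound ingredient can be illustrated in one dimension with a deliberately crude interval bound (a sketch only; the objective x*sin(x), the bound, and the tolerances are illustrative stand-ins for the first-order approximate problems and feasible-directions steps of the actual strategy):

        import numpy as np

        def f(x):
            return x * np.sin(x)

        def lower_bound(lo, hi):
            # crude interval enclosure: for 0 <= x <= hi, x*sin(x) >= -x >= -hi
            return -hi

        best_x, best_f = 0.0, f(0.0)
        boxes = [(0.0, 10.0)]
        while boxes:
            lo, hi = boxes.pop()
            if lower_bound(lo, hi) >= best_f - 1e-9:   # box cannot beat the incumbent: prune it
                continue
            mid = 0.5 * (lo + hi)
            if f(mid) < best_f:                        # cheap "local" step: sample the midpoint
                best_x, best_f = mid, f(mid)
            if hi - lo > 1e-4:                         # otherwise keep bisecting the box
                boxes += [(lo, mid), (mid, hi)]
        print("approximate global minimum of x*sin(x) on [0, 10]:", best_x, best_f)

    Tighter interval bounds prune more boxes earlier, which is the role the highly accurate approximate problems play in the strategy described above.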

  20. Various approximations made in augmented-plane-wave calculations

    NASA Astrophysics Data System (ADS)

    Bacalis, N. C.; Blathras, K.; Thomaides, P.; Papaconstantopoulos, D. A.

    1985-10-01

    The effects of various approximations used in performing augmented-plane-wave calculations were studied for elements of the fifth and sixth columns of the Periodic Table, namely V, Nb, Ta, Cr, Mo, and W. Two kinds of approximations have been checked: (i) variation of the number of k points used to iterate to self-consistency, and (ii) approximations for the treatment of the core states. In addition a comparison between relativistic and nonrelativistic calculations is made, and an approximate method of calculating the spin-orbit splitting is given.

  1. Accuracy of the non-relativistic approximation for momentum diffusion

    NASA Astrophysics Data System (ADS)

    Liang, Shiuan-Ni; Lan, Boon Leong

    2016-06-01

    The accuracy of the non-relativistic approximation, calculated using the same parameters and the same initial ensemble of trajectories, to relativistic momentum diffusion at low speed is studied numerically for a prototypical nonlinear Hamiltonian system: the periodically delta-kicked particle. We find that if the initial ensemble is a non-localized semi-uniform ensemble, the non-relativistic approximation to the relativistic mean-square momentum displacement is always accurate. However, if the initial ensemble is a localized Gaussian, the non-relativistic approximation may not always be accurate and the approximation can break down rapidly.
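
    A minimal numerical sketch of the comparison (unit mass, a non-localized uniform initial ensemble, and illustrative values for the kick strength, light speed, and ensemble size, not the parameters of the study): iterate the non-relativistic and relativistic kick-and-drift maps on the same ensemble and compare the mean-square momentum displacement.

        import numpy as np

        K, c, kicks, N = 1.0, 300.0, 100, 5000    # kick strength, light speed, number of kicks, ensemble size
        rng = np.random.default_rng(1)

        x = rng.uniform(0.0, 2*np.pi, N)          # non-localized (uniform) initial positions
        p = rng.normal(0.0, 0.5, N)               # low initial speeds, |p| << c
        x_r, p_r = x.copy(), p.copy()
        p0 = p.copy()

        for _ in range(kicks):
            # non-relativistic map (unit mass, unit kick period)
            p = p + K * np.sin(x)
            x = (x + p) % (2*np.pi)
            # relativistic map: free flight with velocity p / sqrt(1 + (p/c)^2)
            p_r = p_r + K * np.sin(x_r)
            x_r = (x_r + p_r / np.sqrt(1.0 + (p_r / c)**2)) % (2*np.pi)

        print("non-relativistic <dp^2> =", np.mean((p - p0)**2))
        print("relativistic     <dp^2> =", np.mean((p_r - p0)**2))

    At these low speeds the two ensemble averages should agree closely; shrinking the initial ensemble to a narrow localized Gaussian is the situation in which, according to the abstract, the agreement can break down.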

  2. Noise in coevolving networks

    NASA Astrophysics Data System (ADS)

    Diakonova, Marina; Eguíluz, Víctor M.; San Miguel, Maxi

    2015-09-01

    Coupling dynamics of the states of the nodes of a network to the dynamics of the network topology leads to generic absorbing and fragmentation transitions. The coevolving voter model is a typical system that exhibits such transitions at some critical rewiring. We study the robustness of these transitions under two distinct ways of introducing noise. Noise affecting all the nodes destroys the absorbing-fragmentation transition, giving rise in finite-size systems to two regimes: bimodal magnetization and dynamic fragmentation. Noise targeting a fraction of nodes preserves the transitions but introduces shattered fragmentation with its characteristic fraction of isolated nodes and one or two giant components. Both the lack of absorbing state for homogeneous noise and the shift in the absorbing transition to higher rewiring for targeted noise are supported by analytical approximations.
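
    A bare-bones sketch of a coevolving voter model with homogeneous noise (random state flips applied to any node); the network size, mean degree, rewiring probability, noise rate, and number of updates are illustrative, and the update rule is the generic rewire-or-copy step rather than the exact protocol of the paper:

        import random

        random.seed(2)
        N, mean_deg, plasticity, noise = 200, 4, 0.3, 0.01
        state = [random.choice([-1, 1]) for _ in range(N)]
        adj = [set() for _ in range(N)]
        while sum(len(a) for a in adj) < N * mean_deg:            # random graph with ~mean_deg
            i, j = random.sample(range(N), 2)
            adj[i].add(j); adj[j].add(i)

        for _ in range(50 * N):
            i = random.randrange(N)
            if random.random() < noise:                            # homogeneous noise: random flip
                state[i] = -state[i]
                continue
            if not adj[i]:
                continue
            j = random.choice(tuple(adj[i]))
            if state[i] != state[j]:
                if random.random() < plasticity:                   # rewire to a like-minded node
                    candidates = [k for k in range(N)
                                  if state[k] == state[i] and k != i and k not in adj[i]]
                    if candidates:
                        k = random.choice(candidates)
                        adj[i].discard(j); adj[j].discard(i)
                        adj[i].add(k); adj[k].add(i)
                else:                                              # copy the neighbor's state
                    state[i] = state[j]

        active = sum(1 for i in range(N) for j in adj[i] if state[i] != state[j]) // 2
        print("magnetization =", sum(state) / N, " active links =", active)

    With the flip noise switched off (noise = 0), the same loop either freezes into consensus or fragments the network, which is the absorbing-fragmentation transition whose robustness the paper probes.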

  3. Network extreme eigenvalue: From multimodal to scale-free networks

    NASA Astrophysics Data System (ADS)

    Chung, N. N.; Chew, L. Y.; Lai, C. H.

    2012-03-01

    The extreme eigenvalues of adjacency matrices are important indicators of the influence of topological structure on the collective dynamical behavior of complex networks. Recent findings on the ensemble averageability of the extreme eigenvalue have further authenticated its applicability to the study of network dynamics. However, the ensemble average of the extreme eigenvalue has only been solved analytically up to the second-order correction. Here, we determine the ensemble average of the extreme eigenvalue and characterize its deviation across the ensemble through the discrete form of a random scale-free network. Remarkably, the analytical approximation derived from the discrete form shows significant improvement over previous results, which implies a more accurate prediction of the epidemic threshold. In addition, we show that bimodal networks, which are more robust against both random and targeted removal of nodes, are more vulnerable to the spreading of diseases.

  4. Temporal networks

    NASA Astrophysics Data System (ADS)

    Holme, Petter; Saramäki, Jari

    2012-10-01

    A great variety of systems in nature, society and technology - from the web of sexual contacts to the Internet, from the nervous system to power grids - can be modeled as graphs of vertices coupled by edges. The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via e-mail, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network, to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. The study of temporal networks is very interdisciplinary in nature. Reflecting this, even the object of study has many names - temporal graphs, evolving graphs, time-varying graphs, time-aggregated graphs, time-stamped graphs, dynamic networks, dynamic graphs, dynamical graphs, and so on. This review covers different fields where temporal graphs are considered

  5. Balanced echo state networks.

    PubMed

    Koryakin, Danil; Lohmann, Johannes; Butz, Martin V

    2012-12-01

    This paper investigates the interaction between the driving output feedback and the internal reservoir dynamics in echo state networks (ESNs). The interplay is studied experimentally on the multiple superimposed oscillators (MSOs) benchmark. The experimental data reveals a dual effect of the output feedback strength on the network dynamics: it drives the dynamic reservoir but it can also block suitable reservoir dynamics. Moreover, the data shows that the reservoir size crucially co-determines the likelihood of generating an effective ESN. We show that, depending on the complexity of the MSO dynamics, somewhat smaller networks can yield better performance. Optimizing the output feedback weight range and the network size is thus crucial for generating an effective ESN. With proper parameter choices, we show that it is possible to generate ESNs that approximate MSOs with errors several orders of magnitude smaller than those previously reported. We conclude that there appears to be still much more potential in ESNs than previously thought and sketch out some promising future research directions. PMID:23037774
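
    For orientation, here is a toy echo state network with output feedback run on a two-sine MSO signal (a sketch under assumed settings: the reservoir size, spectral radius, feedback weight range, washout, and ridge regularization are illustrative guesses, not the tuned values from the paper):

        import numpy as np

        rng = np.random.default_rng(4)
        n_res, fb_scale, rho = 60, 0.6, 0.8              # reservoir size, feedback range, spectral radius

        t = np.arange(600)
        y = 0.5*np.sin(0.2*t) + 0.5*np.sin(0.311*t)      # multiple superimposed oscillators target

        W = rng.uniform(-1.0, 1.0, (n_res, n_res))
        W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # rescale to the desired spectral radius
        W_fb = rng.uniform(-fb_scale, fb_scale, n_res)   # output feedback weights

        # collect reservoir states with teacher forcing
        X = np.zeros((len(t), n_res))
        x = np.zeros(n_res)
        for k in range(1, len(t)):
            x = np.tanh(W @ x + W_fb * y[k - 1])
            X[k] = x

        washout, train_end = 100, 500
        A = X[washout:train_end]
        W_out = np.linalg.solve(A.T @ A + 1e-6*np.eye(n_res), A.T @ y[washout:train_end])

        # free-running generation: feed the network's own output back in
        x = X[train_end - 1].copy()
        y_hat = y[train_end - 1]                         # seed the feedback with the last teacher value
        err = []
        for k in range(train_end, len(t)):
            x = np.tanh(W @ x + W_fb * y_hat)
            y_hat = W_out @ x
            err.append((y_hat - y[k]) ** 2)
        print("free-running MSE over", len(err), "steps:", np.mean(err))

    Sweeping fb_scale and n_res in such a script is the kind of experiment the paper reports: too weak a feedback fails to drive the reservoir, while too strong a feedback can suppress the reservoir dynamics needed to separate the superimposed frequencies.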

  6. Network Simulation

    SciTech Connect

    Fujimoto, Richard; Perumalla, Kalyan S; Riley, George F.

    2006-01-01

    A detailed introduction to the design, implementation, and use of network simulation tools is presented. The requirements and issues faced in the design of simulators for wired and wireless networks are discussed. Abstractions such as packet- and fluid-level network models are covered. Several existing simulations are given as examples, with details and rationales regarding design decisions presented. Issues regarding performance and scalability are discussed in detail, describing how one can utilize distributed simulation methods to increase the scale and performance of a simulation environment. Finally, a case study is presented of two simulation tools that have been developed using distributed simulation techniques. This text is essential for any student, researcher, or network architect desiring a detailed understanding of how network simulation tools are designed, implemented, and used.

  7. Information communication on complex networks

    NASA Astrophysics Data System (ADS)

    Igarashi, Akito; Kawamoto, Hiroki; Maruyama, Takahiro; Morioka, Atsushi; Naganuma, Yuki

    2013-02-01

    Since communication networks such as the Internet, which is regarded as a complex network, have recently grown to a huge scale and a lot of data pass through them, the improvement of packet routing strategies for transport is one of the most significant themes in the study of computer networks. It is especially important to find routing strategies which can carry as much traffic as possible without congestion in complex networks. First, using neural networks, we introduce a strategy for packet routing on complex networks, where path lengths and queue lengths in nodes are taken into account within a framework of statistical physics. Secondly, instead of using shortest paths, we propose efficient paths which avoid hubs, i.e., nodes of very high degree, on scale-free networks with a weight assigned to each node. We improve the heuristic algorithm proposed by Danila et al., which optimizes routing properties step by step using the information of betweenness, the fraction of optimal paths (defined according to a given rule) that pass through a node, and thereby mitigates congestion. We confirm that the new heuristic algorithm balances traffic on networks by minimizing the maximum betweenness in a much smaller number of iteration steps. Finally, we model virus spreading and data transfer on peer-to-peer (P2P) networks. Using a mean-field approximation, we obtain an analytical formulation, emulate virus spreading on the network, and compare the results with those of simulation. Moreover, we investigate the mitigation of information traffic congestion in P2P networks.
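
    A rough sketch in the spirit of the betweenness-balancing heuristic described above (not the authors' exact algorithm; the graph model, penalty factor, and iteration count are illustrative): repeatedly compute node betweenness under the current node weights and make the most loaded node less attractive to shortest-path routing.

        import networkx as nx

        G = nx.barabasi_albert_graph(200, 2, seed=5)      # stand-in scale-free topology
        node_cost = {v: 1.0 for v in G}

        def node_betweenness(G, node_cost):
            # route along paths that are cheap in terms of the cost of the nodes they enter
            H = nx.DiGraph()
            for u, v in G.edges():
                H.add_edge(u, v, w=node_cost[v])
                H.add_edge(v, u, w=node_cost[u])
            return nx.betweenness_centrality(H, weight="w")

        for it in range(20):
            bc = node_betweenness(G, node_cost)
            hub = max(bc, key=bc.get)
            node_cost[hub] *= 1.5                         # penalize the most congested node
            if it % 5 == 0:
                print(f"iteration {it:2d}: max betweenness = {max(bc.values()):.4f}")

    Driving down the maximum betweenness in this way spreads traffic away from hubs, which is the congestion-mitigation effect the abstract refers to.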

  8. Interval neural networks

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Traditional neural networks like multi-layered perceptrons (MLPs) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations, input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical backpropagation learning algorithm to interval domains. Results are presented with simple examples demonstrating a few properties of nonlinear interval mapping, such as noise resistance and finding a set of solutions to the function approximation problem.
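
    To make the interval idea concrete, the sketch below propagates an input box through a tiny two-layer network using exact interval arithmetic for the affine layers and the monotonicity of tanh; the weights are random placeholders, not weights learned by the interval backpropagation variant discussed in the paper:

        import numpy as np

        rng = np.random.default_rng(6)
        W1, b1 = rng.normal(size=(5, 2)), rng.normal(size=5)
        W2, b2 = rng.normal(size=(1, 5)), rng.normal(size=1)

        def interval_linear(W, b, lo, hi):
            # exact image of the box [lo, hi] under the affine map W x + b
            center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
            c = W @ center + b
            r = np.abs(W) @ radius
            return c - r, c + r

        def interval_forward(lo, hi):
            lo, hi = interval_linear(W1, b1, lo, hi)
            lo, hi = np.tanh(lo), np.tanh(hi)             # tanh is monotone, so bounds map directly
            return interval_linear(W2, b2, lo, hi)

        lo, hi = interval_forward(np.array([0.1, -0.3]), np.array([0.2, -0.1]))
        print("output interval:", lo, hi)

    A precise observation is simply the degenerate interval lo = hi, so the mixed data types mentioned above fit the same forward pass.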

  9. Bicriteria network design problems

    SciTech Connect

    Marathe, M.V.; Ravi, R.; Sundaram, R.; Ravi, S.S.; Rosenkrantz, D.J.; Hunt, H.B. III

    1995-05-01

    We study several bicriteria network design problems phrased as follows: given an undirected graph and two minimization objectives, with a budget specified on one objective, find a subgraph satisfying certain connectivity requirements that minimizes the second objective subject to the budget on the first. First, we develop a formalism for bicriteria problems and their approximations. Secondly, we use a simple parametric search technique to provide bicriteria approximation algorithms for problems with two similar criteria, where both criteria are the same measure (such as the diameter or the total cost of a tree) but differ only in the cost function under which the measure is computed. Thirdly, we present an (O(log n), O(log n))-approximation algorithm for finding a diameter-constrained minimum cost spanning tree of an undirected graph on n nodes. Finally, for the class of treewidth-bounded graphs, we provide pseudopolynomial-time algorithms for a number of bicriteria problems using dynamic programming. These pseudopolynomial-time algorithms can be converted to fully polynomial-time approximation schemes using a scaling technique.

  10. Stress controls the mechanics of collagen networks

    PubMed Central

    Licup, Albert James; Münster, Stefan; Sharma, Abhinav; Sheinman, Michael; Jawerth, Louise M.; Fabry, Ben; Weitz, David A.; MacKintosh, Fred C.

    2015-01-01

    Collagen is the main structural and load-bearing element of various connective tissues, where it forms the extracellular matrix that supports cells. It has long been known that collagenous tissues exhibit a highly nonlinear stress–strain relationship, although the origins of this nonlinearity remain unknown. Here, we show that the nonlinear stiffening of reconstituted type I collagen networks is controlled by the applied stress and that the network stiffness becomes surprisingly insensitive to network concentration. We demonstrate how a simple model for networks of elastic fibers can quantitatively account for the mechanics of reconstituted collagen networks. Our model points to the important role of normal stresses in determining the nonlinear shear elastic response, which can explain the approximate exponential relationship between stress and strain reported for collagenous tissues. This further suggests principles for the design of synthetic fiber networks with collagen-like properties, as well as a mechanism for the control of the mechanics of such networks. PMID:26195769

  11. An analytical framework for local feedforward networks.

    PubMed

    Weaver, S; Baird, L; Polycarpou, M

    1998-01-01

    Interference in neural networks occurs when learning in one area of the input space causes unlearning in another area. Networks that are less susceptible to interference are referred to as spatially local networks. To obtain a better understanding of these properties, a theoretical framework, consisting of a measure of interference and a measure of network localization, is developed. These measures incorporate not only the network weights and architecture but also the learning algorithm. Using this framework to analyze sigmoidal, multilayer perceptron (MLP) networks that employ the backpropagation learning algorithm on the quadratic cost function, we address a familiar misconception that single-hidden-layer sigmoidal networks are inherently nonlocal by demonstrating that, given a sufficiently large number of adjustable weights, single-hidden-layer sigmoidal MLPs exist that are arbitrarily local and retain the ability to approximate any continuous function on a compact domain. PMID:18252471

  12. Structural factoring approach for analyzing stochastic networks

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Shier, Douglas R.

    1991-01-01

    The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
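
    For contrast with the factoring approach, the following is the complete-enumeration baseline on a toy four-edge network: every joint outcome of the discrete edge lengths is enumerated and the shortest s-t path length is tallied into an exact distribution (the network, lengths, and probabilities are made up, and this brute force is precisely what conditional factoring is designed to avoid on larger networks):

        from itertools import product

        edges = {                                   # edge -> list of (length, probability)
            ("s", "a"): [(1, 0.5), (3, 0.5)],
            ("s", "b"): [(2, 1.0)],
            ("a", "t"): [(2, 0.7), (4, 0.3)],
            ("b", "t"): [(1, 0.4), (5, 0.6)],
        }
        paths = [[("s", "a"), ("a", "t")],          # the two s-t paths in this tiny network
                 [("s", "b"), ("b", "t")]]

        dist = {}
        names = list(edges)
        for outcome in product(*(edges[e] for e in names)):
            length = {e: l for e, (l, _) in zip(names, outcome)}
            prob = 1.0
            for _, p in outcome:
                prob *= p
            shortest = min(sum(length[e] for e in path) for path in paths)
            dist[shortest] = dist.get(shortest, 0.0) + prob
        print(sorted(dist.items()))                 # exact shortest-path-length distribution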

  13. Approximate solution for frequency synchronization in a finite-size Kuramoto model

    NASA Astrophysics Data System (ADS)

    Wang, Chengwei; Rubido, Nicolás; Grebogi, Celso; Baptista, Murilo S.

    2015-12-01

    Scientists have been considering the Kuramoto model to understand the mechanism behind the appearance of collective behavior, such as frequency synchronization (FS) as a paradigm, in real-world networks with a finite number of oscillators. A major current challenge is to obtain an analytical solution for the phase angles. Here, we provide an approximate analytical solution for this problem by deriving a master solution for the finite-size Kuramoto model, with arbitrary finite-variance distribution of the natural frequencies of the oscillators. The master solution embodies all particular solutions of the finite-size Kuramoto model for any frequency distribution and coupling strength larger than the critical one. Furthermore, we present a criterion to determine the stability of the FS solution. This allows one to analytically infer the relationship between the physical parameters and the stable behavior of networks.
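
    A compact numerical check of frequency synchronization in the finite-size, globally coupled Kuramoto model (Euler integration, Gaussian natural frequencies, and a coupling chosen above the critical value; all parameters are illustrative): after a transient, the effective frequencies of the locked oscillators collapse onto the mean natural frequency.

        import numpy as np

        rng = np.random.default_rng(7)
        N, K, dt, steps = 50, 2.0, 0.01, 15000
        omega = rng.normal(0.0, 0.5, N)               # finite-variance natural frequencies
        theta = rng.uniform(0.0, 2*np.pi, N)

        history = np.empty((steps, N))
        for s in range(steps):
            coupling = (K / N) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
            theta = theta + dt * (omega + coupling)   # forward-Euler step, phases left unwrapped
            history[s] = theta

        half = steps // 2
        eff = (history[-1] - history[half]) / (dt * (steps - 1 - half))   # effective frequencies
        print("spread of effective frequencies:", eff.max() - eff.min())
        print("mean natural frequency         :", omega.mean())

    The approximate analytical solution discussed above aims to predict the locked phase angles of such a system directly, without integrating the dynamics.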

  14. Embedding impedance approximations in the analysis of SIS mixers

    NASA Technical Reports Server (NTRS)

    Kerr, A. R.; Pan, S.-K.; Withington, S.

    1992-01-01

    Future millimeter-wave radio astronomy instruments will use arrays of many SIS receivers, either as focal plane arrays on individual radio telescopes, or as individual receivers on the many antennas of radio interferometers. Such applications will require broadband integrated mixers without mechanical tuners. To produce such mixers, it will be necessary to improve present mixer design techniques, most of which use the three-frequency approximation to Tucker's quantum mixer theory. This paper examines the adequacy of three approximations to Tucker's theory: (1) the usual three-frequency approximation which assumes a sinusoidal LO voltage at the junction, and a short-circuit at all frequencies above the upper sideband; (2) a five-frequency approximation which allows two LO voltage harmonics and five small-signal sidebands; and (3) a quasi five-frequency approximation in which five small-signal sidebands are allowed, but the LO voltage is assumed sinusoidal. These are compared with a full harmonic-Newton solution of Tucker's equations, including eight LO harmonics and their corresponding sidebands, for realistic SIS mixer circuits. It is shown that the accuracy of the three approximations depends strongly on the value of ωR_NC for the SIS junctions used. For large ωR_NC, all three approximations approach the eight-harmonic solution. For ωR_NC values in the range 0.5 to 10, the range of most practical interest, the quasi five-frequency approximation is a considerable improvement over the three-frequency approximation, and should be suitable for much design work. For the realistic SIS mixers considered here, the five-frequency approximation gives results very close to those of the eight-harmonic solution. Use of these approximations, where appropriate, considerably reduces the computational effort needed to analyze an SIS mixer, and allows the design and optimization of mixers using a personal computer.

  15. Quasiparticle random-phase approximation and β-decay physics: Higher-order approximations in a boson formalism

    SciTech Connect

    Sambataro, M.; Suhonen, J.

    1997-08-01

    The quasiparticle random-phase approximation (QRPA) is reviewed and higher-order approximations are discussed with reference to β-decay physics. The approach is fully developed in a boson formalism. Working within a schematic model, we first illustrate a fermion-boson mapping procedure and apply it to construct boson images of the fermion Hamiltonian at different levels of approximation. The quality of these images is tested through a comparison between approximate and exact spectra. Standard QRPA equations are derived in correspondence with the quasi-boson limit of the first-order boson Hamiltonian. The use of higher-order Hamiltonians is seen to improve considerably the stability of the approximate solutions. The mapping procedure is also applied to Fermi β operators: exact and approximate transition amplitudes are discussed together with the Ikeda sum rule. The range of applicability of the QRPA formalism is analyzed.

  16. Approximate Optimal Control as a Model for Motor Learning

    ERIC Educational Resources Information Center

    Berthier, Neil E.; Rosenstein, Michael T.; Barto, Andrew G.

    2005-01-01

    Current models of psychological development rely heavily on connectionist models that use supervised learning. These models adapt network weights when the network output does not match the target outputs computed by some agent. The authors present a model of motor learning in which the child uses exploration to discover appropriate ways of…

  17. An approximate internal model-based neural control for unknown nonlinear discrete processes.

    PubMed

    Li, Han-Xiong; Deng, Hua

    2006-05-01

    An approximate internal model-based neural control (AIMNC) strategy is proposed for unknown nonaffine nonlinear discrete processes in a disturbed environment. The proposed control strategy has some clear advantages with respect to existing neural internal model control methods: it can be used for open-loop unstable nonlinear processes or for a class of systems with unstable zero dynamics. Based on a novel input-output approximation, the proposed neural control law can be derived directly and implemented straightforwardly for an unknown process. Only one neural network needs to be trained, and the control algorithm can be obtained directly from model identification without further training. The stability and robustness of the closed-loop system can be derived analytically. Extensive simulations demonstrate the superior performance of the proposed AIMNC strategy. PMID:16722170

  18. RaceTrack: An Approximation Algorithm for the Mobile Sink Routing Problem

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; Peng, Yuxing

    In large-scale monitoring applications, randomly deployed wireless sensor networks may not be fully connected. Using a mobile sink for data collection is one of the feasible solutions. For energy saving, it is necessary to plan the shortest possible route for the mobile sink. The mobile sink routing problem can be regarded as a special case of the TSP with neighborhoods (TSPN) problem. In this paper, we propose a novel approximation algorithm called RaceTrack. The algorithm forms a "racetrack" based on the TSP route constructed from the locations of the deployed sensor nodes. By using the inner-lane and concave-bend heuristics of auto racing, together with a shortcut-finding step, we optimize the obtained TSP route within O(n) computation time. Through formal proofs and large-scale simulations, we verify that the RaceTrack algorithm achieves a good approximation ratio.

  19. Reinforcement learning controller design for affine nonlinear discrete-time systems using online approximators.

    PubMed

    Yang, Qinmin; Jagannathan, Sarangapani

    2012-04-01

    In this paper, reinforcement learning state- and output-feedback-based adaptive critic controller designs are proposed by using online approximators (OLAs) for general multi-input, multi-output affine unknown nonlinear discrete-time systems in the presence of bounded disturbances. The proposed controller design has two entities: an action network that is designed to produce an optimal control signal, and a critic network that evaluates the performance of the action network. The critic estimates the cost-to-go function, which is tuned online using recursive equations derived from heuristic dynamic programming. Here, neural networks (NNs) are used for both the action and critic networks, whereas any OLAs, such as radial basis functions, splines, fuzzy logic, etc., can be utilized. For the output-feedback counterpart, an additional NN is designated as the observer to estimate the unavailable system states, and thus a separation principle is not required. The NN weight tuning laws for the controller schemes are also derived while ensuring uniform ultimate boundedness of the closed-loop system using Lyapunov theory. Finally, the effectiveness of the two controllers is tested in simulation on a pendulum balancing system and a two-link robotic arm system. PMID:21947529

  20. Technological Networks

    NASA Astrophysics Data System (ADS)

    Mitra, Bivas

    The study of networks in the form of mathematical graph theory is one of the fundamental pillars of discrete mathematics. However, recent years have witnessed a substantial new movement in network research. The focus of the research is shifting away from the analysis of small graphs and the properties of individual vertices or edges to consideration of statistical properties of large-scale networks. This new approach has been driven largely by the availability of technological networks like the Internet [12], the World Wide Web [2], etc., that allow us to gather and analyze data on a scale far larger than previously possible. At the same time, technological networks have evolved as a socio-technological system, as the concepts of social systems that are based on self-organization theory have become unified in technological networks [13]. In today’s society, we have simple and universal access to great amounts of information and services. These information services are based upon the infrastructure of the Internet and the World Wide Web. The Internet is the system composed of ‘computers’ connected by cables or some other form of physical connection. Over this physical network, it is possible to exchange e-mails, transfer files, etc. On the other hand, the World Wide Web (commonly shortened to the Web) is a system of interlinked hypertext documents accessed via the Internet, where nodes represent web pages and links represent hyperlinks between the pages. Peer-to-peer (P2P) networks [26] have also recently become a popular medium through which huge amounts of data can be shared. P2P file sharing systems, where files are searched and downloaded among peers without the help of central servers, have emerged as a major component of Internet traffic. An important advantage of P2P networks is that all clients provide resources, including bandwidth, storage space, and computing power. In this chapter, we discuss these technological networks in detail. The review