Sample records for analytical upper bound

  1. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low-density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific-heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.

  2. Upper and lower bounds for the speed of pulled fronts with a cut-off

    NASA Astrophysics Data System (ADS)

    Benguria, R. D.; Depassier, M. C.; Loss, M.

    2008-02-01

    We establish rigorous upper and lower bounds for the speed of pulled fronts with a cut-off. For all reaction terms of KPP type, a simple analytic upper bound is given. The lower bounds, however, depend on details of the reaction term. For a small cut-off parameter, the two leading-order terms in the asymptotic expansions of the upper and lower bounds coincide and correspond to the Brunet-Derrida formula. For large cut-off parameters the bounds do not coincide but still permit a simple estimate of the front speed.
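
    For reference, a commonly quoted form of the Brunet-Derrida formula mentioned above (stated here for the classic KPP nonlinearity f(u) = u(1 - u), an assumption not spelled out in this record) gives the leading-order correction to the minimal front speed for a small cut-off ε:

    ```latex
    % Brunet--Derrida correction to the minimal KPP front speed v_0 = 2
    % for a reaction term with a small cut-off \varepsilon (leading order only):
    v(\varepsilon) \simeq 2 - \frac{\pi^{2}}{(\ln \varepsilon)^{2}}, \qquad \varepsilon \to 0^{+}.
    ```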

  3. Lower and upper bounds for entanglement of Rényi-α entropy.

    PubMed

    Song, Wei; Chen, Lin; Cao, Zhuo-Liang

    2016-12-23

    Entanglement Rényi-α entropy is an entanglement measure. It reduces to the standard entanglement of formation when α tends to 1. We derive analytical lower and upper bounds for the entanglement Rényi-α entropy of arbitrary-dimensional bipartite quantum systems. We also demonstrate the application of our bounds to some concrete examples. Moreover, we establish the relation between the entanglement Rényi-α entropy and some other entanglement measures.
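
    As a point of reference for the quantity being bounded, the sketch below (my own illustration, not code from the paper) evaluates the Rényi-α entanglement entropy of a pure bipartite state from the eigenvalues of its reduced density matrix; for pure states the α → 1 limit reproduces the entanglement of formation.

    ```python
    import numpy as np

    def renyi_alpha_entanglement(schmidt_probs, alpha):
        """Renyi-alpha entanglement entropy of a pure bipartite state.

        schmidt_probs: squared Schmidt coefficients, i.e. the eigenvalues of
        the reduced density matrix (nonnegative, summing to 1).
        """
        p = np.asarray(schmidt_probs, dtype=float)
        p = p[p > 0]
        if np.isclose(alpha, 1.0):
            return float(-np.sum(p * np.log2(p)))           # von Neumann limit
        return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

    # Example: the two-qubit pure state cos(t)|00> + sin(t)|11>.
    t = np.pi / 6
    p = [np.cos(t) ** 2, np.sin(t) ** 2]
    print(renyi_alpha_entanglement(p, alpha=2.0))   # Renyi-2 entanglement
    print(renyi_alpha_entanglement(p, alpha=1.0))   # entanglement of formation (pure state)
    ```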

  4. The Laughlin liquid in an external potential

    NASA Astrophysics Data System (ADS)

    Rougerie, Nicolas; Yngvason, Jakob

    2018-04-01

    We study natural perturbations of the Laughlin state arising from the effects of trapping and disorder. These are N-particle wave functions that have the form of a product of Laughlin states and analytic functions of the N variables. We derive an upper bound to the ground state energy in a confining external potential, matching exactly a recently derived lower bound in the large N limit. Irrespective of the shape of the confining potential, this sharp upper bound can be achieved through a modification of the Laughlin function by suitably arranged quasi-holes.

  5. Bounds for the price of discrete arithmetic Asian options

    NASA Astrophysics Data System (ADS)

    Vanmaele, M.; Deelstra, G.; Liinev, J.; Dhaene, J.; Goovaerts, M. J.

    2006-01-01

    In this paper the pricing of European-style discrete arithmetic Asian options with fixed and floating strike is studied by deriving analytical lower and upper bounds. In our approach we use a general technique for deriving upper (and lower) bounds for stop-loss premiums of sums of dependent random variables, as explained in Kaas et al. (Ins. Math. Econom. 27 (2000) 151-168), together with the ideas of Rogers and Shi (J. Appl. Probab. 32 (1995) 1077-1088) and of Nielsen and Sandmann (J. Financial Quant. Anal. 38(2) (2003) 449-473). Through these bounds we obtain a unifying framework for European-style discrete arithmetic Asian options that generalizes several approaches in the literature and improves existing results. The resulting bounds are analytical and easily computable. The aim of the paper is to formulate advice on the appropriate choice of the bounds given the parameters, to investigate the effect of different conditioning variables, and to compare their efficiency numerically. Several sets of numerical results are included. We also discuss hedging using these bounds. Moreover, our methods are applicable to a wide range of (pricing) problems involving a sum of dependent random variables.

  6. Computer search for binary cyclic UEP codes of odd length up to 65

    NASA Technical Reports Server (NTRS)

    Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu

    1990-01-01

    Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distance at least 3 are found. For those codes for which only upper bounds on the unequal error protection capabilities could be computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds coincide with the exact unequal error protection capabilities.

  7. Quantum State Tomography via Linear Regression Estimation

    PubMed Central

    Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan

    2013-01-01

    A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d^4), where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
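
    To illustrate the flavor of the LRE approach for a single qubit (a generic sketch of my own, not the authors' code), write the state in the Pauli basis, estimate each Pauli expectation value from measured frequencies, and recover the Bloch vector by least squares; with general measurement bases the design matrix below would no longer be the identity, which is what makes the analytical MSE bound basis-dependent.

    ```python
    import numpy as np

    # Any single-qubit state is rho = (I + r . sigma) / 2 with |r| <= 1.
    I2 = np.eye(2)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    paulis = [sx, sy, sz]

    rng = np.random.default_rng(0)
    r_true = np.array([0.3, -0.4, 0.5])                               # true Bloch vector
    rho_true = 0.5 * (I2 + sum(r * P for r, P in zip(r_true, paulis)))

    # Simulate N projective measurements of each Pauli operator.
    N = 10_000
    y = []
    for P in paulis:
        p_plus = 0.5 * (1 + np.real(np.trace(rho_true @ P)))          # Prob(outcome = +1)
        n_plus = rng.binomial(N, p_plus)
        y.append((2 * n_plus - N) / N)                                 # empirical <P>

    # Linear regression model y = A r + noise; here A is the identity, so least
    # squares reduces to the empirical means.
    A = np.eye(3)
    r_hat, *_ = np.linalg.lstsq(A, np.array(y), rcond=None)
    rho_hat = 0.5 * (I2 + sum(r * P for r, P in zip(r_hat, paulis)))   # reconstructed state
    print(np.round(r_hat, 3))
    ```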

  8. Necessary and sufficient criterion for extremal quantum correlations in the simplest Bell scenario

    NASA Astrophysics Data System (ADS)

    Ishizaka, Satoshi

    2018-05-01

    In the study of quantum nonlocality, one obstacle is that an analytical criterion for identifying the boundary between quantum and postquantum correlations has not yet been given, even in the simplest Bell scenario. We propose a plausible, analytical, necessary and sufficient condition ensuring that a nonlocal quantum correlation in the simplest scenario is an extremal boundary point. Our extremality condition amounts to certifying an information-theoretic quantity: the probability of guessing a measurement outcome of a distant party, optimized over quantum instruments. We show that this quantity can be upper and lower bounded from any correlation in a device-independent way, and we use numerical calculations to confirm that coincidence of the upper and lower bounds appears to be necessary and sufficient for extremality.

  9. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating the time delay margin for model-reference adaptive control of systems with almost-linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in the form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of the time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate yet not overly conservative estimate of the time delay margin.

  10. Improved bounds on the energy-minimizing strains in martensitic polycrystals

    NASA Astrophysics Data System (ADS)

    Peigney, Michaël

    2016-07-01

    This paper is concerned with the theoretical prediction of the energy-minimizing (or recoverable) strains in martensitic polycrystals, considering a nonlinear elasticity model of phase transformation at finite strains. The main results are some rigorous upper bounds on the set of energy-minimizing strains. Those bounds depend on the polycrystalline texture through the volume fractions of the different orientations. The simplest form of the bounds presented is obtained by combining recent results for single crystals with a homogenization approach proposed previously for martensitic polycrystals. However, the polycrystalline bound delivered by that procedure may fail to recover the monocrystalline bound in the homogeneous limit, as is demonstrated in this paper by considering an example related to tetragonal martensite. This motivates the development of a more detailed analysis, leading to improved polycrystalline bounds that are notably consistent with results for single crystals in the homogeneous limit. A two-orientation polycrystal of tetragonal martensite is studied as an illustration. In that case, analytical expressions of the upper bounds are derived and the results are compared with lower bounds obtained by considering laminate textures.

  11. A Novel Capacity Analysis for Wireless Backhaul Mesh Networks

    NASA Astrophysics Data System (ADS)

    Chung, Tein-Yaw; Lee, Kuan-Chun; Lee, Hsiao-Chih

    This paper derives a closed-form expression for the inter-flow capacity of a backhaul wireless mesh network (WMN) with centralized scheduling by employing a ring-based approach. Through the definition of an interference area, we are able to accurately describe a bottleneck collision area for a WMN and calculate the upper bound of inter-flow capacity. The closed-form expression shows that the upper bound is a function of the ratio between transmission range and network radius. Simulations and numerical analysis show that our analytic solution estimates the inter-flow capacity of WMNs better than previous approaches.

  12. Bounds for Asian basket options

    NASA Astrophysics Data System (ADS)

    Deelstra, Griselda; Diallo, Ibrahima; Vanmaele, Michèle

    2008-09-01

    In this paper we propose pricing bounds for European-style discrete arithmetic Asian basket options in a Black and Scholes framework. We start from methods used for basket options and Asian options. First, we use the general approach for deriving upper and lower bounds for stop-loss premia of sums of non-independent random variables as in Kaas et al. [Upper and lower bounds for sums of random variables, Insurance Math. Econom. 27 (2000) 151-168] or Dhaene et al. [The concept of comonotonicity in actuarial science and finance: theory, Insurance Math. Econom. 31(1) (2002) 3-33]. We generalize the methods in Deelstra et al. [Pricing of arithmetic basket options by conditioning, Insurance Math. Econom. 34 (2004) 55-57] and Vanmaele et al. [Bounds for the price of discrete sampled arithmetic Asian options, J. Comput. Appl. Math. 185(1) (2006) 51-90]. Afterwards we show how to derive an analytical closed-form expression for a lower bound in the non-comonotonic case. Finally, we derive upper bounds for Asian basket options by applying techniques as in Thompson [Fast narrow bounds on the value of Asian options, Working Paper, University of Cambridge, 1999] and Lord [Partially exact and bounded approximations for arithmetic Asian options, J. Comput. Finance 10 (2) (2006) 1-52]. Numerical results are included and on the basis of our numerical tests, we explain which method we recommend depending on moneyness and time-to-maturity.

  13. Integrability and chemical potential in the (3 + 1)-dimensional Skyrme model

    NASA Astrophysics Data System (ADS)

    Alvarez, P. D.; Canfora, F.; Dimakis, N.; Paliathanasis, A.

    2017-10-01

    Using a remarkable mapping from the original (3+1)-dimensional Skyrme model to the sine-Gordon model, we construct the first analytic examples of Skyrmions as well as of Skyrmion-anti-Skyrmion bound states within a finite box in (3+1)-dimensional flat space-time. An analytic upper bound on the number of these Skyrmion-anti-Skyrmion bound states is derived. We compute the critical isospin chemical potential beyond which these Skyrmions cease to exist. With these tools, we also construct topologically protected time-crystals: time-periodic configurations whose time-dependence is protected by their non-trivial winding number. These are striking realizations of the ideas of Shapere and Wilczek. The critical isospin chemical potential for these time-crystals is determined.

  14. Exact lower and upper bounds on stationary moments in stochastic biochemical systems

    NASA Astrophysics Data System (ADS)

    Ghusinga, Khem Raj; Vargas-Garcia, Cesar A.; Lamperski, Andrew; Singh, Abhyudai

    2017-08-01

    In the stochastic description of biochemical reaction systems, the time evolution of statistical moments for species population counts is described by a linear dynamical system. However, except for some ideal cases (such as zero- and first-order reaction kinetics), the moment dynamics is underdetermined as lower-order moments depend upon higher-order moments. Here, we propose a novel method to find exact lower and upper bounds on stationary moments for a given arbitrary system of biochemical reactions. The method exploits the fact that statistical moments of any positive-valued random variable must satisfy some constraints that are compactly represented through the positive semidefiniteness of moment matrices. Our analysis shows that solving moment equations at steady state in conjunction with constraints on moment matrices provides exact lower and upper bounds on the moments. These results are illustrated by three different examples—the commonly used logistic growth model, stochastic gene expression with auto-regulation and an activator-repressor gene network motif. Interestingly, in all cases the accuracy of the bounds is shown to improve as moment equations are expanded to include higher-order moments. Our results provide avenues for development of approximation methods that provide explicit bounds on moments for nonlinear stochastic systems that are otherwise analytically intractable.
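
    To make the moment-matrix constraint concrete, here is a small worked inequality for a toy system chosen by me for illustration (production ∅ → X at rate k and pair annihilation X + X → ∅ with propensity c n(n-1)); it demonstrates the mechanism only and is not one of the paper's examples. The stationary first-moment equation gives ⟨n²⟩ = ⟨n⟩ + k/(2c), and positive semidefiniteness of the 2×2 moment matrix already yields a finite upper bound on the mean:

    ```latex
    % Stationary first-moment equation:  0 = k - 2c\,(\langle n^{2}\rangle - \langle n\rangle),
    % combined with the moment-matrix constraint:
    \begin{pmatrix} 1 & \langle n\rangle \\ \langle n\rangle & \langle n^{2}\rangle \end{pmatrix} \succeq 0
    \;\Longrightarrow\; \langle n^{2}\rangle \ge \langle n\rangle^{2}
    \;\Longrightarrow\; \langle n\rangle^{2} - \langle n\rangle - \frac{k}{2c} \le 0
    \;\Longrightarrow\; \langle n\rangle \le \frac{1 + \sqrt{1 + 2k/c}}{2}.
    ```

    Expanding the matrix to include higher moments tightens such bounds, which is the trend the authors report.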

  15. Diffusion Influenced Adsorption Kinetics.

    PubMed

    Miura, Toshiaki; Seki, Kazuhiko

    2015-08-27

    When the kinetics of adsorption is influenced by the diffusive flow of solutes, the solute concentration at the surface is influenced by the surface coverage of solutes, which is given by the Langmuir-Hinshelwood adsorption equation. The diffusion equation with the boundary condition given by the Langmuir-Hinshelwood adsorption equation leads to the nonlinear integro-differential equation for the surface coverage. In this paper, we solved the nonlinear integro-differential equation using the Grünwald-Letnikov formula developed to solve fractional kinetics. Guided by the numerical results, analytical expressions for the upper and lower bounds of the exact numerical results were obtained. The upper and lower bounds were close to the exact numerical results in the diffusion- and reaction-controlled limits, respectively. We examined the validity of the two simple analytical expressions obtained in the diffusion-controlled limit. The results were generalized to include the effect of dispersive diffusion. We also investigated the effect of molecular rearrangement of anisotropic molecules on surface coverage.

  16. Polygamy of entanglement in multipartite quantum systems

    NASA Astrophysics Data System (ADS)

    Kim, Jeong San

    2009-08-01

    We show that bipartite entanglement distribution (or entanglement of assistance) in multipartite quantum systems is by nature polygamous. We first provide an analytical upper bound for the concurrence of assistance in bipartite quantum systems and derive a polygamy inequality of multipartite entanglement in arbitrary-dimensional quantum systems.

  17. Validation of the SURE Program, phase 1

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1987-01-01

    Presented are the results of the first phase in the validation of the SURE (Semi-Markov Unreliability Range Evaluator) program. The SURE program gives lower and upper bounds on the death-state probabilities of a semi-Markov model. With these bounds, the reliability of a semi-Markov model of a fault-tolerant computer system can be analyzed. For the first phase in the validation, fifteen semi-Markov models were solved analytically for the exact death-state probabilities and these solutions compared to the corresponding bounds given by SURE. In every case, the SURE bounds covered the exact solution. The bounds, however, had a tendency to separate in cases where the recovery rate was slow or the fault arrival rate was fast.

  18. Tidal disruption of Periodic Comet Shoemaker-Levy 9 and a constraint on its mean density

    NASA Technical Reports Server (NTRS)

    Boss, Alan P.

    1994-01-01

    The apparent tidal disruption of Periodic Comet Shoemaker-Levy 9 (1993e) during a close encounter within approximately 1.62 planetary radii of Jupiter can be used along with theoretical models of tidal disruption to place an upper bound on the density of the predisruption body. Depending on the theoretical model used, these upper bounds range from ρ_c < 0.702 ± 0.080 g/cm³ for a simple analytical model calibrated by numerical smoothed particle hydrodynamics (SPH) simulations to ρ_c < 1.50 ± 0.17 g/cm³ for a detailed semianalytical model. The quoted uncertainties stem from an assumed uncertainty in the perijove radius. However, the uncertainty introduced by the different theoretical models is the major source of error; this uncertainty could be eliminated by future SPH simulations specialized to cometary disruptions, including the effects of initially prolate, spinning comets. If the SPH-based upper bound turns out to be most appropriate, it would be consistent with the predisruption body being a comet with a relatively low density and porous structure, as has been asserted previously based on observations of cometary outgassing. Regardless of which upper bound is preferable, the models all agree that the predisruption body could not have been a relatively high-density body, such as an asteroid with ρ ≈ 2 g/cm³.

  19. Limit analysis of hollow spheres or spheroids with Hill orthotropic matrix

    NASA Astrophysics Data System (ADS)

    Pastor, Franck; Pastor, Joseph; Kondo, Djimedo

    2012-03-01

    Recent theoretical studies of the literature are concerned by the hollow sphere or spheroid (confocal) problems with orthotropic Hill type matrix. They have been developed in the framework of the limit analysis kinematical approach by using very simple trial velocity fields. The present Note provides, through numerical upper and lower bounds, a rigorous assessment of the approximate criteria derived in these theoretical works. To this end, existing static 3D codes for a von Mises matrix have been easily extended to the orthotropic case. Conversely, instead of the non-obvious extension of the existing kinematic codes, a new original mixed approach has been elaborated on the basis of the plane strain structure formulation earlier developed by F. Pastor (2007). Indeed, such a formulation does not need the expressions of the unit dissipated powers. Interestingly, it delivers a numerical code better conditioned and notably more rapid than the previous one, while preserving the rigorous upper bound character of the corresponding numerical results. The efficiency of the whole approach is first demonstrated through comparisons of the results to the analytical upper bounds of Benzerga and Besson (2001) or Monchiet et al. (2008) in the case of spherical voids in the Hill matrix. Moreover, we provide upper and lower bounds results for the hollow spheroid with the Hill matrix which are compared to those of Monchiet et al. (2008).

  20. More on the decoder error probability for Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1987-01-01

    The decoder error probability for Reed-Solomon codes (more generally, linear maximum distance separable codes) is examined. McEliece and Swanson offered an upper bound on P_E(u), the decoder error probability given that u symbol errors occur. This upper bound is slightly greater than Q, the probability that a completely random error pattern will cause decoder error. By using a combinatoric technique, the principle of inclusion and exclusion, an exact formula for P_E(u) is derived. The P_E(u) for the (255, 223) Reed-Solomon code used by NASA and for the (31, 15) Reed-Solomon code (JTIDS code) are calculated using the exact formula, and the P_E(u)'s are observed to approach the Q's of the codes rapidly as u gets larger. An upper bound for the expression is derived and is shown to decrease nearly exponentially as u increases. This proves analytically that P_E(u) indeed approaches Q as u becomes large, and some laws of large numbers come into play.
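
    For orientation, Q can be computed directly under the usual bounded-distance decoding model for an (n, k) MDS code over GF(q): the decoding spheres of radius t = ⌊(n-k)/2⌋ around codewords are disjoint, so a uniformly random error pattern is miscorrected exactly when it falls in the sphere of a wrong (nonzero) codeword. The sketch below is my own rendering of that counting argument, not the paper's exact formula for P_E(u).

    ```python
    from math import comb

    def q_random_miscorrection(n, k, q):
        """Probability that a uniformly random error pattern lands within the
        bounded-distance decoding radius t = (n - k) // 2 of a wrong codeword
        of an (n, k) MDS code over GF(q); spheres around codewords are
        disjoint, so this count is exact under that model."""
        t = (n - k) // 2
        volume = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
        return (q ** k - 1) * volume / q ** n

    print(q_random_miscorrection(255, 223, 256))   # (255, 223) Reed-Solomon code
    print(q_random_miscorrection(31, 15, 32))      # (31, 15) Reed-Solomon (JTIDS) code
    ```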

  1. Analytic Confusion Matrix Bounds for Fault Detection and Isolation Using a Sum-of-Squared- Residuals Approach

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2009-01-01

    Given a system which can fail in 1 of n different ways, a fault detection and isolation (FDI) algorithm uses sensor data in order to determine which fault is the most likely to have occurred. The effectiveness of an FDI algorithm can be quantified by a confusion matrix, which indicates the probability that each fault is isolated given that each fault has occurred. Confusion matrices are often generated with simulation data, particularly for complex systems. In this paper we perform FDI using sums of squares of sensor residuals (SSRs). We assume that the sensor residuals are Gaussian, which gives the SSRs a chi-squared distribution. We then generate analytic lower and upper bounds on the confusion matrix elements. This allows for the generation of optimal sensor sets without numerical simulations. The confusion matrix bounds are verified with simulated aircraft engine data.
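
    As a generic illustration of the chi-squared machinery behind such bounds (a sketch with made-up numbers, not the authors' engine model), the no-fault SSR of m independent unit-variance Gaussian residuals follows a chi-squared distribution with m degrees of freedom, while a fault that biases the residuals makes it noncentral chi-squared; detection probabilities then follow from the corresponding tail probabilities.

    ```python
    import numpy as np
    from scipy import stats

    m = 8                        # number of sensor residuals (hypothetical)
    sigma = 1.0                  # residual standard deviation (assumed known)
    bias = np.array([0.8, 0.0, 0.5, 0.0, 0.0, 1.2, 0.0, 0.0])   # fault-induced biases

    # No fault: SSR / sigma^2 ~ chi2(m).  Choose a threshold for a 1% false-alarm rate.
    threshold = stats.chi2.ppf(0.99, df=m)

    # With the fault: SSR / sigma^2 ~ noncentral chi2, lambda = sum(bias^2) / sigma^2.
    lam = float(np.sum((bias / sigma) ** 2))
    p_detect = 1.0 - stats.ncx2.cdf(threshold, df=m, nc=lam)
    print(f"false-alarm rate: 0.010, detection probability: {p_detect:.3f}")
    ```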

  2. Diffractive variable beam splitter: optimal design.

    PubMed

    Borghi, R; Cincotti, G; Santarsiero, M

    2000-01-01

    The analytical expression of the phase profile of the optimum diffractive beam splitter with an arbitrary power ratio between the two output beams is derived. The phase function is obtained by an analytical optimization procedure such that the diffraction efficiency of the resulting optical element is the highest for an actual device. Comparisons are presented with the efficiency of a diffractive beam splitter specified by a sawtooth phase function and with the pertinent theoretical upper bound for this type of element.

  3. Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System

    NASA Astrophysics Data System (ADS)

    Goluskin, David

    2018-04-01

    We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) ↦ (-x, -y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r-1)^3 at the nonzero equilibria, and the mean of xy^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.
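
    The core inequality behind this auxiliary-function approach can be stated compactly (this paraphrase is mine; the paper supplies the precise sum-of-squares formulation). For an ODE ẋ = f(x) whose trajectories remain bounded, the long-time average of f·∇V vanishes for every continuously differentiable V, so:

    ```latex
    % Auxiliary-function bound on an infinite-time average of \varphi along
    % bounded solutions of \dot{x} = f(x):
    \overline{\varphi} \;:=\; \limsup_{T\to\infty}\frac{1}{T}\int_{0}^{T}\varphi\bigl(x(t)\bigr)\,dt
    \;\le\; \sup_{x}\Bigl[\varphi(x) + f(x)\cdot\nabla V(x)\Bigr]
    \qquad\text{for every } V \in C^{1}.
    ```

    Choosing V to minimize the right-hand side, with nonnegativity certified by sums of squares, is what the SDP automates.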

  4. Circuit bounds on stochastic transport in the Lorenz equations

    NASA Astrophysics Data System (ADS)

    Weady, Scott; Agarwal, Sahil; Wilen, Larry; Wettlaufer, J. S.

    2018-07-01

    In turbulent Rayleigh-Bénard convection one seeks the relationship between the heat transport, captured by the Nusselt number, and the temperature drop across the convecting layer, captured by the Rayleigh number. In experiments, one measures the Nusselt number for a given Rayleigh number, and the question of how close that value is to the maximal transport is a key prediction of variational fluid mechanics in the form of an upper bound. The Lorenz equations have traditionally been studied as a simplified model of turbulent Rayleigh-Bénard convection, and hence it is natural to investigate their upper bounds, which has previously been done numerically and analytically, but they are not as easily accessible in an experimental context. Here we describe a specially built circuit that is the experimental analogue of the Lorenz equations and compare its output to the recently determined upper bounds of the stochastic Lorenz equations [1]. The circuit is substantially more efficient than computational solutions, and hence we can more easily examine the system. Because of offsets that appear naturally in the circuit, we are motivated to study unique bifurcation phenomena that arise as a result. Namely, for a given Rayleigh number, we find a reentrant behavior of the transport on noise amplitude and this varies with Rayleigh number passing from the homoclinic to the Hopf bifurcation.

  5. Some Factor Analytic Approximations to Latent Class Structure.

    ERIC Educational Resources Information Center

    Dziuban, Charles D.; Denton, William T.

    Three procedures, alpha, image, and uniqueness rescaling, were applied to a joint occurrence probability matrix. That matrix was the basis of a well-known latent class structure. The values of the recurring subscript elements were varied as follows: Case 1 - The known elements were input; Case 2 - The upper bounds to the recurring subscript…

  6. Improved upper bounds on energy dissipation rates in plane Couette flow with boundary injection and suction

    NASA Astrophysics Data System (ADS)

    Lee, Harry; Wen, Baole; Doering, Charles

    2017-11-01

    The rate of viscous energy dissipation ɛ in incompressible Newtonian planar Couette flow (a horizontal shear layer) with uniform boundary injection and suction is studied numerically. Specifically, fluid is steadily injected through the top plate at a constant rate and a constant angle of injection, and the same amount of fluid is sucked out vertically through the bottom plate at the same rate. This set-up leads to two control parameters, namely the angle of injection, θ, and the Reynolds number of the horizontal shear flow, Re. We numerically implement the 'background field' variational problem formulated by Constantin and Doering with a one-dimensional unidirectional background field ϕ(z), where z is the wall-normal coordinate between the plates. Computation is carried out at various levels of Re with θ = 0°, 0.1°, 1° and 2°, respectively. The computed upper bounds on ɛ scale like Re^0 for Re > 20,000 at each fixed θ; this agrees with Kolmogorov's hypothesis on isotropic turbulence. The outcome provides new upper bounds on ɛ valid for any solution of the underlying Navier-Stokes equations, and they are sharper than the analytical bounds presented in Doering et al. (2000). This research was partially supported by the NSF Award DMS-1515161, and the University of Michigan's Rackham Graduate Student Research Grant.

  7. N = 4 superconformal bootstrap of the K3 CFT

    DOE PAGES

    Lin, Ying-Hsuan; Shao, Shu-Heng; Simmons-Duffin, David; ...

    2017-05-23

    We study two-dimensional (4, 4) superconformal field theories of central charge c = 6, corresponding to nonlinear sigma models on K3 surfaces, using the superconformal bootstrap. This is made possible through a surprising relation between the BPS N = 4 superconformal blocks with c = 6 and bosonic Virasoro conformal blocks with c = 28, and an exact result on the moduli dependence of a certain integrated BPS 4-point function. Nontrivial bounds on the non-BPS spectrum in the K3 CFT are obtained as functions of the CFT moduli, that interpolate between the free orbifold points and singular CFT points. We observe directly from the CFT perspective the signature of a continuous spectrum above a gap at the singular moduli, and find numerically an upper bound on this gap that is saturated by the A1 N = 4 cigar CFT. We also derive an analytic upper bound on the first nonzero eigenvalue of the scalar Laplacian on K3 in the large volume regime, that depends on the K3 moduli data. As two byproducts, we find an exact equivalence between a class of BPS N = 2 superconformal blocks and Virasoro conformal blocks in two dimensions, and an upper bound on the four-point functions of operators of sufficiently low scaling dimension in three and four dimensional CFTs.

  8. N = 4 superconformal bootstrap of the K3 CFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Ying-Hsuan; Shao, Shu-Heng; Simmons-Duffin, David

    We study two-dimensional (4, 4) superconformal field theories of central charge c = 6, corresponding to nonlinear sigma models on K3 surfaces, using the superconformal bootstrap. This is made possible through a surprising relation between the BPS N = 4 superconformal blocks with c = 6 and bosonic Virasoro conformal blocks with c = 28, and an exact result on the moduli dependence of a certain integrated BPS 4-point function. Nontrivial bounds on the non-BPS spectrum in the K3 CFT are obtained as functions of the CFT moduli, that interpolate between the free orbifold points and singular CFT points. We observe directly from the CFT perspective the signature of a continuous spectrum above a gap at the singular moduli, and find numerically an upper bound on this gap that is saturated by the A1 N = 4 cigar CFT. We also derive an analytic upper bound on the first nonzero eigenvalue of the scalar Laplacian on K3 in the large volume regime, that depends on the K3 moduli data. As two byproducts, we find an exact equivalence between a class of BPS N = 2 superconformal blocks and Virasoro conformal blocks in two dimensions, and an upper bound on the four-point functions of operators of sufficiently low scaling dimension in three and four dimensional CFTs.

  9. Long-distance measurement-device-independent quantum key distribution with coherent-state superpositions.

    PubMed

    Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B

    2014-09-15

    Measurement-device-independent quantum key distribution (MDI-QKD) with the decoy-state method is believed to be secure against various hacking attacks on practical quantum key distribution systems. Recently, coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. In this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound of the yield and the upper bound of the bit error rate. We exploit standard statistical analysis and the Chernoff bound to perform the parameter estimation. The Chernoff bound provides good bounds in long-distance MDI-QKD. Our results show that with CSS, both the secure transmission distance and the secure key rate are significantly improved compared with those of weak coherent states in the finite-data case.

  10. Quantum Discord for d⊗2 Systems

    PubMed Central

    Ma, Zhihao; Chen, Zhihua; Fanchini, Felipe Fernandes; Fei, Shao-Ming

    2015-01-01

    We present an analytical solution for classical correlation, defined in terms of linear entropy, in an arbitrary system when the second subsystem is measured. We show that the optimal measurements used in the maximization of the classical correlation in terms of linear entropy, when used to calculate the quantum discord in terms of von Neumann entropy, result in a tight upper bound for arbitrary systems. This bound agrees with all known analytical results about quantum discord in terms of von Neumann entropy and, when comparing it with the numerical results for 10^6 two-qubit random density matrices, we obtain an average deviation of order 10^-4. Furthermore, our results give a way to calculate the quantum discord for arbitrary n-qubit GHZ and W states evolving under the action of the amplitude damping noisy channel. PMID:26036771

  11. Curvature Continuous and Bounded Path Planning for Fixed-Wing UAVs

    PubMed Central

    Jiang, Peng; Li, Deshi; Sun, Tao

    2017-01-01

    Unmanned Aerial Vehicles (UAVs) play an important role in applications such as data collection and target reconnaissance. An accurate and optimal path can effectively increase the mission success rate in the case of small UAVs. Although path planning for UAVs is similar to that for traditional mobile robots, the special kinematic characteristics of UAVs (such as their minimum turning radius) have not been taken into account in previous studies. In this paper, we propose a locally-adjustable, continuous-curvature, bounded path-planning algorithm for fixed-wing UAVs. To deal with the curvature discontinuity problem, an optimal interpolation algorithm and a key-point shift algorithm are proposed based on the derivation of a curvature continuity condition. To meet the upper bound for curvature and to render the curvature extrema controllable, a local replanning scheme is designed by combining arcs and Bezier curves with monotonic curvature. In particular, a path transition mechanism is built for the replanning phase using minimum curvature circles for a planning philosophy. Numerical results demonstrate that the analytical planning algorithm can effectively generate continuous-curvature paths, while satisfying the curvature upper bound constraint and allowing UAVs to pass through all predefined waypoints in the desired mission region. PMID:28925960
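
    As a minimal companion to the curvature-bound requirement (a generic sketch of my own; the paper's replanning scheme combining arcs and monotone-curvature Bezier segments is more elaborate), the planar curvature of a parametric curve is κ(t) = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), and a candidate cubic Bezier segment can be screened against a vehicle's minimum turning radius like this:

    ```python
    import numpy as np

    def cubic_bezier(P, t):
        """Return points, first and second derivatives of a cubic Bezier curve
        with control points P (4 x 2) at parameter values t."""
        P = np.asarray(P, dtype=float)
        t = np.atleast_1d(t)[:, None]
        b = ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
             + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])
        d1 = (3 * (1 - t) ** 2 * (P[1] - P[0]) + 6 * (1 - t) * t * (P[2] - P[1])
              + 3 * t ** 2 * (P[3] - P[2]))
        d2 = 6 * (1 - t) * (P[2] - 2 * P[1] + P[0]) + 6 * t * (P[3] - 2 * P[2] + P[1])
        return b, d1, d2

    def max_curvature(P, samples=1000):
        _, d1, d2 = cubic_bezier(P, np.linspace(0.0, 1.0, samples))
        num = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
        den = (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5
        return float(np.max(num / den))

    r_min = 50.0                                    # hypothetical minimum turning radius [m]
    P = [(0, 0), (60, 0), (120, 40), (180, 40)]     # hypothetical control points [m]
    kappa_max = max_curvature(P)
    print(kappa_max <= 1.0 / r_min, kappa_max)      # does the segment respect the bound?
    ```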

  12. Curvature Continuous and Bounded Path Planning for Fixed-Wing UAVs.

    PubMed

    Wang, Xiaoliang; Jiang, Peng; Li, Deshi; Sun, Tao

    2017-09-19

    Unmanned Aerial Vehicles (UAVs) play an important role in applications such as data collection and target reconnaissance. An accurate and optimal path can effectively increase the mission success rate in the case of small UAVs. Although path planning for UAVs is similar to that for traditional mobile robots, the special kinematic characteristics of UAVs (such as their minimum turning radius) have not been taken into account in previous studies. In this paper, we propose a locally-adjustable, continuous-curvature, bounded path-planning algorithm for fixed-wing UAVs. To deal with the curvature discontinuity problem, an optimal interpolation algorithm and a key-point shift algorithm are proposed based on the derivation of a curvature continuity condition. To meet the upper bound for curvature and to render the curvature extrema controllable, a local replanning scheme is designed by combining arcs and Bezier curves with monotonic curvature. In particular, a path transition mechanism is built for the replanning phase using minimum curvature circles for a planning philosophy. Numerical results demonstrate that the analytical planning algorithm can effectively generate continuous-curvature paths, while satisfying the curvature upper bound constraint and allowing UAVs to pass through all predefined waypoints in the desired mission region.

  13. An Analytical Framework for Runtime of a Class of Continuous Evolutionary Algorithms.

    PubMed

    Zhang, Yushan; Hu, Guiwu

    2015-01-01

    Although there have been many studies on the runtime of evolutionary algorithms in discrete optimization, relatively few theoretical results have been proposed for continuous optimization, such as evolutionary programming (EP). This paper proposes an analysis of the runtime of two EP algorithms based on Gaussian and Cauchy mutations, using an absorbing Markov chain. Given a constant variation, we calculate the runtime upper bound of special Gaussian mutation EP and Cauchy mutation EP. Our analysis reveals that the upper bounds are affected by the number of individuals, the problem dimension n, the search range, and the Lebesgue measure of the optimal neighborhood. Furthermore, we provide conditions under which the average runtime of the considered EP can be no more than a polynomial in n; the condition is that the Lebesgue measure of the optimal neighborhood is larger than a combination of an exponential term and the given polynomial in n.
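
    As a reminder of the absorbing-Markov-chain tool this kind of analysis leans on (a generic textbook sketch, not the paper's EP-specific chain): if Q collects the transition probabilities among the transient states, the fundamental matrix N = (I - Q)^{-1} gives the expected number of steps to absorption from each transient state as the row sums of N.

    ```python
    import numpy as np

    # Hypothetical chain with three transient states and one absorbing
    # "optimum found" state; row i of Q holds the transition probabilities
    # among transient states only (the remainder goes to the absorbing state).
    Q = np.array([[0.60, 0.30, 0.05],
                  [0.20, 0.50, 0.20],
                  [0.00, 0.10, 0.70]])

    N = np.linalg.inv(np.eye(3) - Q)      # fundamental matrix
    expected_steps = N @ np.ones(3)       # expected absorption time from each state
    print(expected_steps)
    ```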

  14. Constraints on the [Formula: see text] form factor from analyticity and unitarity.

    PubMed

    Ananthanarayan, B; Caprini, I; Kubis, B

    Motivated by the discrepancies noted recently between the theoretical calculations of the electromagnetic [Formula: see text] form factor and certain experimental data, we investigate this form factor using analyticity and unitarity in a framework known as the method of unitarity bounds. We use a QCD correlator computed on the spacelike axis by operator product expansion and perturbative QCD as input, and exploit unitarity and the positivity of its spectral function, including the two-pion contribution that can be reliably calculated using high-precision data on the pion form factor. From this information, we derive upper and lower bounds on the modulus of the [Formula: see text] form factor in the elastic region. The results provide a significant check on those obtained with standard dispersion relations, confirming the existence of a disagreement with experimental data in the region around [Formula: see text].

  15. Adiabatic description of capture into resonance and surfatron acceleration of charged particles by electromagnetic waves.

    PubMed

    Artemyev, A V; Neishtadt, A I; Zelenyi, L M; Vainchtein, D L

    2010-12-01

    We present an analytical and numerical study of the surfatron acceleration of nonrelativistic charged particles by electromagnetic waves. The acceleration is caused by capture of particles into resonance with one of the waves. We investigate capture for systems with one or two waves and provide conditions under which the obtained results can be applied to systems with more than two waves. In the case of a single wave, the once captured particles never leave the resonance and their velocity grows linearly with time. However, if there are two waves in the system, the upper bound of the energy gain may exist and we find the analytical value of that bound. We discuss several generalizations including the relativistic limit, different wave amplitudes, and a wide range of the waves' wavenumbers. The obtained results are used for qualitative description of some phenomena observed in the Earth's magnetosphere. © 2010 American Institute of Physics.

  16. Study of the geodesic equations of a spherical symmetric spacetime in conformal Weyl gravity

    NASA Astrophysics Data System (ADS)

    Hoseini, Bahareh; Saffari, Reza; Soroushfar, Saheb

    2017-03-01

    A set of analytic solutions of the geodesic equation in a spherical conformal spacetime is presented. Solutions of these geodesic equations can be expressed in terms of the Weierstrass ℘ function and the Kleinian σ function. Using the conserved energy and angular momentum we characterize the different orbits. Also, considering parametric diagrams and effective potentials, we plot some possible orbits. Moreover, with the help of the analytical solutions, we investigate the light deflection for an escape orbit. Finally, using the periastron advance we obtain an upper bound on the magnitude of γ.

  17. Conservative Analytical Collision Probabilities for Orbital Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2004-01-01

    The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.
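
    For context on the quantity being approximated (a generic encounter-plane formulation written by me, not the paper's conservative approximation itself), the collision probability is commonly computed as the integral of a two-dimensional Gaussian relative-position density over the disk of combined hard-body radius R:

    ```python
    import numpy as np
    from scipy import integrate

    def collision_probability(mu, sigma, R):
        """Integrate an uncorrelated 2-D Gaussian (mean mu, per-axis std sigma)
        over the disk x^2 + y^2 <= R^2 in the encounter plane."""
        mx, my = mu
        sx, sy = sigma
        def density(y, x):
            return (np.exp(-0.5 * (((x - mx) / sx) ** 2 + ((y - my) / sy) ** 2))
                    / (2.0 * np.pi * sx * sy))
        val, _ = integrate.dblquad(density, -R, R,
                                   lambda x: -np.sqrt(R ** 2 - x ** 2),
                                   lambda x: np.sqrt(R ** 2 - x ** 2))
        return val

    # Hypothetical numbers: 20 m combined radius, 300 m miss distance, 200/400 m sigmas.
    print(collision_probability(mu=(300.0, 0.0), sigma=(200.0, 400.0), R=20.0))
    ```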

  18. Conservative Analytical Collision Probability for Design of Orbital Formations

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2004-01-01

    The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.

  19. An Analytical Integration of the Averaged Equations of Variation due to Sun-Moon Perturbations and its Application.

    DTIC Science & Technology

    1979-10-01

    The longitude of the ascending node crossing of the satellite, λ_ANX, repeats every Q revolutions. Here t_0 is the epoch time at which the ascending node crossing occurs at longitude λ_ANX, P is the nodal period, α_G is the right ascension of Greenwich, and Ω is the right ascension of the ascending node. [Fig. 11: schematic drawing of the time history of λ_ANX between its upper and lower bounds.]

  20. Measures and limits of models of fixation selection.

    PubMed

    Wilming, Niklas; Betz, Torsten; Kietzmann, Tim C; König, Peter

    2011-01-01

    Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes comparison difficult. We make three main contributions to this line of research. First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties; however, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure between probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound for these measures, based on image-independent properties of fixation data and on between-subject consistency, respectively. Based on these bounds it is possible to give a reference frame for judging the predictive power of a model of fixation selection. We provide open-source Python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the information required for a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
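
    To make the ROC-based measure concrete (a minimal sketch under simplified assumptions, not the authors' published toolbox), the AUC of a saliency map can be estimated as the probability that the map scores a fixated pixel above a randomly sampled non-fixated pixel, i.e., via the Mann-Whitney statistic:

    ```python
    import numpy as np

    def auc_fixation(saliency, fixated_mask, n_nonfix=10_000, seed=0):
        """AUC: probability that the saliency value at a fixated location exceeds
        that at a random non-fixated location (ties count one half)."""
        rng = np.random.default_rng(seed)
        pos = saliency[fixated_mask]
        neg = rng.choice(saliency[~fixated_mask], size=n_nonfix, replace=True)
        greater = (pos[:, None] > neg[None, :]).mean()
        ties = (pos[:, None] == neg[None, :]).mean()
        return float(greater + 0.5 * ties)

    # Hypothetical data: a random saliency map and a handful of "fixated" pixels.
    rng = np.random.default_rng(1)
    sal = rng.random((60, 80))
    fix = np.zeros((60, 80), dtype=bool)
    fix[rng.integers(0, 60, 50), rng.integers(0, 80, 50)] = True
    print(auc_fixation(sal, fix))          # close to 0.5 for an uninformative map
    ```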

  1. Hardening Effect Analysis by Modular Upper Bound and Finite Element Methods in Indentation of Aluminum, Steel, Titanium and Superalloys

    PubMed Central

    Bermudo, Carolina; Sevilla, Lorenzo; Martín, Francisco; Trujillo, Francisco Javier

    2017-01-01

    The application of incremental processes in the manufacturing industry has seen great development in recent years. The first stage of an incremental forming process can be defined as an indentation. Because of this, the indentation process is starting to be widely studied, not only as a hardening test but also as a forming process. Thus, in this work, an analysis of the indentation process under the new Modular Upper Bound perspective has been performed. The modular implementation has several advantages, including the possibility of introducing different parameters to extend the study, such as the friction effect, the temperature or the hardening effect studied in this paper. The main objective of the present work is to analyze the three hardening models developed depending on the material characteristics. In order to support the validation of the hardening models, finite element analyses of diverse materials under indentation are carried out. Results obtained from the Modular Upper Bound are in concordance with the results obtained from the numerical analyses. In addition, the numerical and analytical methods are in concordance with the results previously obtained in the experimental indentation of annealed aluminum A92030. Due to the introduction of the hardening factor, the new modular distribution is a suitable option for the analysis of the indentation process. PMID:28772914

  2. Upper bounds on superpartner masses from upper bounds on the Higgs boson mass.

    PubMed

    Cabrera, M E; Casas, J A; Delgado, A

    2012-01-13

    The LHC is putting bounds on the Higgs boson mass. In this Letter we use those bounds to constrain the minimal supersymmetric standard model (MSSM) parameter space, using the fact that, in supersymmetry, the Higgs mass is a function of the masses of the sparticles, so an upper bound on the Higgs mass translates into an upper bound on the masses of the superpartners. We show that, although current bounds do not constrain the MSSM parameter space from above, once the Higgs mass bound improves, big regions of this parameter space will be excluded, putting upper bounds on supersymmetry (SUSY) masses. On the other hand, for the case of split-SUSY we show that, for moderate or large tanβ, the present bounds on the Higgs mass imply that the common mass for scalars cannot be greater than 10^11 GeV. We show how these bounds will evolve as the LHC continues to improve the limits on the Higgs mass.

  3. Hawking effects as a noisy quantum channel

    NASA Astrophysics Data System (ADS)

    Ahn, Doyeol

    2018-01-01

    In this work, we show that the evolution of a bipartite entangled state near a black hole with Hawking radiation can be described by a noisy quantum channel, i.e., a completely positive map with an operator-sum representation. The entanglement fidelity is obtained in analytic form from the operator-sum representation. The bipartite entangled state becomes a bipartite mixed Gaussian state as the black hole evaporates. By comparing the negativity and the entanglement monotone with the analytical form of the entanglement fidelity, we find that the negativity and the entanglement monotone for s = 1/2 provide the upper and lower bounds of the entanglement fidelity, respectively.

  4. Adaptive steganography

    NASA Astrophysics Data System (ADS)

    Chandramouli, Rajarathnam; Li, Grace; Memon, Nasir D.

    2002-04-01

    Steganalysis techniques attempt to differentiate between stego-objects and cover-objects. In recent work we developed an explicit analytic upper bound for the steganographic capacity of LSB-based steganographic techniques for a given probability of false detection. In this paper we look at adaptive steganographic techniques, which take explicit steps to escape detection. We explore different techniques that can be used to adapt message embedding to the image content or to a known steganalysis technique. We investigate the advantages of adaptive steganography within an analytical framework. We also give experimental results with a state-of-the-art steganalysis technique demonstrating that adaptive embedding results in a significant number of bits embedded without detection.

  5. Non-linear collisional Penrose process: How much energy can a black hole release?

    NASA Astrophysics Data System (ADS)

    Nakao, Ken-ichi; Okawa, Hirotada; Maeda, Kei-ichi

    2018-01-01

    Energy extraction from a rotating or charged black hole is one of the fascinating issues in general relativity. The collisional Penrose process is one such extraction mechanism and has been reconsidered intensively since Bañados, Silk, and West pointed out the physical importance of very high energy collisions around a maximally rotating black hole. In order to get results analytically, the test particle approximation has been adopted so far. Successive works based on this approximation scheme have not yet revealed the upper bound on the efficiency of the energy extraction because of the lack of backreaction. In the Reissner-Nordström spacetime, by fully taking into account the self-gravity of the shells, we find that there is an upper bound on the extracted energy that is consistent with the area law of a black hole. We also show one particular scenario in which almost the maximum energy extraction is achieved even without the Bañados-Silk-West collision.

  6. Performance Analysis of Amplify-and-Forward Systems with Single Relay Selection in Correlated Environments.

    PubMed

    Van Nguyen, Binh; Kim, Kiseon

    2016-09-11

    In this paper, we consider amplify-and-forward (AnF) cooperative systems under correlated fading environments. We first present a brief overview of existing works on the effect of channel correlations on system performance. We then focus on our main contribution, which is analyzing the outage probability of a multi-AnF-relay system with the best relay selection (BRS) scheme under the condition that the two channels of each relay, the source-relay and relay-destination channels, are correlated. Using lower and upper bounds on the end-to-end received signal-to-noise ratio (SNR) at the destination, we derive corresponding upper and lower bounds on the system outage probability. We prove that the system can achieve a diversity order (DO) equal to the number of relays. In addition, and importantly, we show that the considered correlation form has a constructive effect on the system performance; in other words, the larger the correlation coefficient, the better the system performance. Our analytic results are corroborated by extensive Monte Carlo simulations.

  7. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations.

    PubMed

    Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke

    2018-02-01

    In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, the two nonlinear recurrent neural networks are proved to converge in finite time. Besides, by solving a differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., the upper bound is lower), and thus accurate solutions of general time-varying LMEs can be obtained in less time. Finally, various situations are considered by setting different coefficient matrices of general time-varying LMEs, and a great variety of computer simulations (including an application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Transient and steady state viscoelastic rolling contact

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Paramadilok, O.

    1985-01-01

    Based on moving total Lagrangian coordinates, a so-called traveling Hughes-type contact strategy is developed. Employing the modified contact scheme in conjunction with a traveling finite element strategy, an overall solution methodology is developed to handle transient and steady-state viscoelastic rolling contact. To verify the scheme, the results of both experimental and analytical benchmarking are presented. The experimental benchmarking includes the handling of rolling tires up to their upper-bound behavior, namely the standing-wave response.

  9. New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Rodemich, E. R.; Rumsey, H., Jr.; Welch, L. R.

    1977-01-01

    An upper bound on the rate of a binary code as a function of minimum code distance (in the Hamming metric) is derived from the Delsarte-MacWilliams inequalities. The upper bound so found is asymptotically less than Levenshtein's bound, and a fortiori less than Elias' bound. Appendices review properties of the Krawtchouk polynomials and Q-polynomials used in the rigorous proofs.
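
    To illustrate the linear-programming machinery behind bounds of this kind (a generic Delsarte LP sketch of my own, not the derivation in this report), one maximizes the number of codewords subject to nonnegativity of the distance distribution and of its Krawtchouk transform:

    ```python
    from math import comb

    import numpy as np
    from scipy.optimize import linprog

    def krawtchouk(n, k, x):
        """Krawtchouk polynomial K_k(x) for the binary Hamming scheme of length n."""
        return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

    def delsarte_lp_bound(n, d):
        """Delsarte linear-programming upper bound on A(n, d) for binary codes."""
        idx = list(range(d, n + 1))          # A_1 .. A_{d-1} are forced to zero
        c = -np.ones(len(idx))               # maximize 1 + sum A_i  <=>  minimize -sum A_i
        # Delsarte inequalities: sum_i A_i K_k(i) >= -K_k(0) = -C(n, k) for k = 1..n
        A_ub = np.array([[-krawtchouk(n, k, i) for i in idx] for k in range(1, n + 1)],
                        dtype=float)
        b_ub = np.array([comb(n, k) for k in range(1, n + 1)], dtype=float)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(idx),
                      method="highs")
        return 1.0 - res.fun

    print(delsarte_lp_bound(13, 5))          # LP upper bound on A(13, 5)
    ```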

  10. Money Gone Up in Smoke: The Tobacco Use and Malnutrition Nexus in Bangladesh

    PubMed Central

    Husain, Muhammad Jami; Virk-Baker, Mandeep; Parascandola, Mark; Khondker, Bazlul Haque; Ahluwalia, Indu B.

    2017-01-01

    BACKGROUND The tobacco epidemic in Bangladesh is pervasive. Expenditures on tobacco may reduce money available for food in a country with a high malnutrition rate. OBJECTIVES The aims of the study are to quantify the opportunity costs of tobacco expenditure in terms of nutrition (ie, food energy) forgone and the potential improvements in the household level food-energy status if the money spent on tobacco were diverted for food consumption. METHOD We analyzed data from the 2010 Bangladesh Household Income and Expenditure Survey, a nationally representative survey conducted among 12,240 households. We present 2 analytical scenarios: (1) the lower-bound gain scenario entailing money spent on tobacco partially diverted to acquiring food according to households’ food consumption share in total expenditures; and (2) the upper-bound gain scenario entailing money spent on tobacco diverted to acquiring food only. Age- and gender-based energy norms were used to identify food-energy deficient households. Data were analyzed by mutually exclusive smoking-only, smokeless-only, and dual-tobacco user households. FINDINGS On average, a smoking-only household could gain 269–497 kilocalories (kcal) daily under the lower-bound and upper-bound scenarios, respectively. The potential energy gains for smokeless-only and dual-tobacco user households ranged from 148–268 kcal and 508–924 kcal, respectively. Under these lower- and upper-bound estimates, the percentage of smoking-only user households that are malnourished declined significantly from the baseline rate of 38% to 33% and 29%, respectively. For the smokeless-only and dual-tobacco user households, there were 2–3 and 6–9 percentage point drops in the malnutrition prevalence rates. The tobacco expenditure shift could translate to an additional 4.6–7.7 million food-energy malnourished persons meeting their caloric requirements. CONCLUSIONS The findings suggest that tobacco use reduction could facilitate concomitant improvements in population-level nutrition status and may inform the development and refinement of tobacco prevention and control efforts in Bangladesh. PMID:28283125

  11. Money Gone Up in Smoke: The Tobacco Use and Malnutrition Nexus in Bangladesh.

    PubMed

    Husain, Muhammad Jami; Virk-Baker, Mandeep; Parascandola, Mark; Khondker, Bazlul Haque; Ahluwalia, Indu B

    The tobacco epidemic in Bangladesh is pervasive. Expenditures on tobacco may reduce money available for food in a country with a high malnutrition rate. The aims of the study are to quantify the opportunity costs of tobacco expenditure in terms of nutrition (ie, food energy) forgone and the potential improvements in the household level food-energy status if the money spent on tobacco were diverted for food consumption. We analyzed data from the 2010 Bangladesh Household Income and Expenditure Survey, a nationally representative survey conducted among 12,240 households. We present 2 analytical scenarios: (1) the lower-bound gain scenario entailing money spent on tobacco partially diverted to acquiring food according to households' food consumption share in total expenditures; and (2) the upper-bound gain scenario entailing money spent on tobacco diverted to acquiring food only. Age- and gender-based energy norms were used to identify food-energy deficient households. Data were analyzed by mutually exclusive smoking-only, smokeless-only, and dual-tobacco user households. On average, a smoking-only household could gain 269-497 kilocalories (kcal) daily under the lower-bound and upper-bound scenarios, respectively. The potential energy gains for smokeless-only and dual-tobacco user households ranged from 148-268 kcal and 508-924 kcal, respectively. Under these lower- and upper-bound estimates, the percentage of smoking-only user households that are malnourished declined significantly from the baseline rate of 38% to 33% and 29%, respectively. For the smokeless-only and dual-tobacco user households, there were 2-3 and 6-9 percentage point drops in the malnutrition prevalence rates. The tobacco expenditure shift could translate to an additional 4.6-7.7 million food-energy malnourished persons meeting their caloric requirements. The findings suggest that tobacco use reduction could facilitate concomitant improvements in population-level nutrition status and may inform the development and refinement of tobacco prevention and control efforts in Bangladesh. Copyright © 2016. Published by Elsevier Inc.

  12. The superradiant instability regime of the spinning Kerr black hole

    NASA Astrophysics Data System (ADS)

    Hod, Shahar

    2016-07-01

    Spinning Kerr black holes are known to be superradiantly unstable to massive scalar perturbations. We here prove that the instability regime of the composed Kerr-black-hole-massive-scalar-field system is bounded from above by the dimensionless inequality Mμ < m·√{[2(1 + γ)(1 − √(1 − γ²)) − γ²]/(4γ²)}, where {μ, m} are respectively the proper mass and azimuthal harmonic index of the scalar field and γ ≡ r₋/r₊ is the dimensionless ratio between the horizon radii of the black hole. It is further shown that this analytically derived upper bound on the superradiant instability regime of the spinning Kerr black hole agrees with recent numerical computations of the instability resonance spectrum.
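
    A short numerical evaluation of the bound as reconstructed above; the grouping of terms under the square root follows that reconstruction and should be checked against the published formula before use.

```python
import numpy as np

def kerr_instability_upper_bound(gamma, m=1):
    """Upper bound on M*mu for the superradiant instability regime, using the
    reconstructed expression with gamma = r_-/r_+ (reconstruction assumed)."""
    num = 2.0 * (1.0 + gamma) * (1.0 - np.sqrt(1.0 - gamma**2)) - gamma**2
    return m * np.sqrt(num / (4.0 * gamma**2))

for gamma in (0.1, 0.5, 0.9, 0.999):
    print(f"gamma = {gamma:5.3f}:  M*mu < {kerr_instability_upper_bound(gamma):.4f}")
```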

  13. UPPER BOUND RISK ESTIMATES FOR MIXTURES OF CARCINOGENS

    EPA Science Inventory

    The excess cancer risk that might result from exposure to a mixture of chemical carcinogens usually is estimated with data from experiments conducted on individual chemicals. An upper bound on the total excess risk is estimated commonly by summing individual upper bound risk esti...

  14. Dynamical Origin of the Effective Storage Capacity in the Brain's Working Memory

    NASA Astrophysics Data System (ADS)

    Bick, Christian; Rabinovich, Mikhail I.

    2009-11-01

    The capacity of working memory (WM), a short-term buffer for information in the brain, is limited. We suggest a model for sequential WM that is based upon winnerless competition amongst representations of available informational items. Analytical results for the underlying mathematical model relate WM capacity and relative lateral inhibition in the corresponding neural network. This implies an upper bound for WM capacity, which is, under reasonable neurobiological assumptions, close to the “magical number seven.”

  15. Synchronizability of random rectangular graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estrada, Ernesto, E-mail: ernesto.estrada@strath.ac.uk; Chen, Guanrong

    2015-08-15

    Random rectangular graphs (RRGs) represent a generalization of the random geometric graphs in which the nodes are embedded into hyperrectangles instead of on hypercubes. The synchronizability of the RRG model is studied. Both upper and lower bounds of the eigenratio of the network Laplacian matrix are determined analytically. It is proven that as the rectangular network becomes more elongated, the network becomes harder to synchronize. The synchronization behavior of an RRG network of chaotic Lorenz system nodes is numerically investigated, showing complete consistency with the theoretical results.
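
    A small numerical sketch (not the authors' code; the network size and connection radius are arbitrary choices) of the quantity analyzed here: drop nodes uniformly in an a×b rectangle, connect pairs closer than a given radius, and compute the Laplacian eigenratio λ_N/λ_2, the standard synchronizability measure. Elongating the rectangle at fixed area should raise the eigenratio, i.e. make the network harder to synchronize.

```python
import numpy as np

def rrg_eigenratio(n=300, a=2.0, b=0.5, radius=0.15, seed=0):
    """Laplacian eigenratio lambda_max / lambda_2 of a random rectangular graph.
    Nodes are dropped uniformly in an a-by-b rectangle and joined when their
    Euclidean distance is below `radius`."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2)) * np.array([a, b])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = ((d < radius) & (d > 0)).astype(float)   # adjacency matrix
    L = np.diag(A.sum(axis=1)) - A                # graph Laplacian
    eig = np.sort(np.linalg.eigvalsh(L))
    if eig[1] < 1e-9:                             # disconnected graph
        return np.inf
    return eig[-1] / eig[1]

# More elongated rectangles of the same area should be harder to synchronize,
# i.e. give a larger eigenratio.
for a, b in [(1.0, 1.0), (2.0, 0.5), (4.0, 0.25)]:
    print(f"{a} x {b}: eigenratio =", round(rrg_eigenratio(a=a, b=b), 1))
```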

  16. Investigation of geomagnetic field forecasting and fluid dynamics of the core

    NASA Technical Reports Server (NTRS)

    Benton, E. R. (Principal Investigator)

    1981-01-01

    The magnetic determination of the depth of the core-mantle boundary using MAGSAT data is discussed. Refinements to the approach of using the pole-strength of Earth to evaluate the radius of the Earth's core-mantle boundary are reported. The downward extrapolation through the electrically conducting mantle was reviewed. Estimates of an upper bound for the time required for Earth's liquid core to overturn completely are presented. High order analytic approximations to the unsigned magnetic flux crossing the Earth's surface are also presented.

  17. Exact Fundamental Limits of the First and Second Hyperpolarizabilities

    NASA Astrophysics Data System (ADS)

    Lytel, Rick; Mossman, Sean; Crowell, Ethan; Kuzyk, Mark G.

    2017-08-01

    Nonlinear optical interactions of light with materials originate in the microscopic response of the molecular constituents to excitation by an optical field, and are expressed by the first (β ) and second (γ ) hyperpolarizabilities. Upper bounds to these quantities were derived seventeen years ago using approximate, truncated state models that violated completeness and unitarity, and far exceed those achieved by potential optimization of analytical systems. This Letter determines the fundamental limits of the first and second hyperpolarizability tensors using Monte Carlo sampling of energy spectra and transition moments constrained by the diagonal Thomas-Reiche-Kuhn (TRK) sum rules and filtered by the off-diagonal TRK sum rules. The upper bounds of β and γ are determined from these quantities by applying error-refined extrapolation to perfect compliance with the sum rules. The method yields the largest diagonal component of the hyperpolarizabilities for an arbitrary number of interacting electrons in any number of dimensions. The new method provides design insight to the synthetic chemist and nanophysicist for approaching the limits. This analysis also reveals that the special cases which lead to divergent nonlinearities in the many-state catastrophe are not physically realizable.

  18. Performance Analysis of Amplify-and-Forward Systems with Single Relay Selection in Correlated Environments

    PubMed Central

    Nguyen, Binh Van; Kim, Kiseon

    2016-01-01

    In this paper, we consider amplify-and-forward (AnF) cooperative systems under correlated fading environments. We first present a brief overview of existing works on the effect of channel correlations on the system performance. We then focus on our main contribution, which is analyzing the outage probability of a multi-AnF-relay system with the best relay selection (BRS) scheme under the condition that the two channels of each relay, the source-relay and relay-destination channels, are correlated. Using lower and upper bounds on the end-to-end received signal-to-noise ratio (SNR) at the destination, we derive corresponding upper and lower bounds on the system outage probability. We prove that the system can achieve a diversity order (DO) equal to the number of relays. In addition, and importantly, we show that the considered correlation form has a constructive effect on the system performance; in other words, the larger the correlation coefficient, the better the system performance. Our analytic results are corroborated by extensive Monte-Carlo simulations. PMID:27626426

  19. Upper bound on the slope of steady water waves with small adverse vorticity

    NASA Astrophysics Data System (ADS)

    So, Seung Wook; Strauss, Walter A.

    2018-03-01

    We consider the angle of inclination (with respect to the horizontal) of the profile of a steady 2D inviscid symmetric periodic or solitary water wave subject to gravity. There is an upper bound of 31.15° in the irrotational case [1] and an upper bound of 45° in the case of favorable vorticity [13]. On the other hand, if the vorticity is adverse, the profile can become vertical. We prove here that if the adverse vorticity is sufficiently small, then the angle still has an upper bound which is slightly larger than 45°.

  20. Dominating Scale-Free Networks Using Generalized Probabilistic Methods

    PubMed Central

    Molnár, F.; Derzsy, N.; Czabarka, É.; Székely, L.; Szymanski, B. K.; Korniss, G.

    2014-01-01

    We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds of dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. In addition, we also find the precise limit where selecting high-degree nodes exclusively becomes inefficient for network domination. We validate our results on several real-world networks, and provide highly accurate analytical estimates for our methods. PMID:25200937
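
    A toy sketch of a degree-dependent probabilistic selection followed by a patch-up step that guarantees domination; the inclusion probabilities and the patch-up rule here are illustrative choices, not the strategies proposed in the paper.

```python
import random
import networkx as nx

def probabilistic_dominating_set(G, alpha=1.0, seed=0):
    """Toy degree-dependent probabilistic selection: include node v with
    probability (deg(v)/deg_max)**alpha, then greedily patch any nodes left
    undominated. An illustration only, not the paper's exact rule."""
    random.seed(seed)
    dmax = max(d for _, d in G.degree()) or 1
    D = {v for v, d in G.degree() if random.random() < (d / dmax) ** alpha}
    # Greedy patch-up so that the result is a valid dominating set.
    undominated = set(G) - D - {u for v in D for u in G[v]}
    while undominated:
        v = max(undominated, key=G.degree)
        D.add(v)
        undominated -= {v} | set(G[v])
    return D

G = nx.barabasi_albert_graph(2000, 3, seed=1)   # scale-free test network
D = probabilistic_dominating_set(G, alpha=1.0)
assert all(v in D or any(u in D for u in G[v]) for v in G)
print("dominating set size:", len(D), "of", G.number_of_nodes())
```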

  1. (U) An Analytic Examination of Piezoelectric Ejecta Mass Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tregillis, Ian Lee

    2017-02-02

    Ongoing efforts to validate a Richtmyer-Meshkov instability (RMI) based ejecta source model [1, 2, 3] in LANL ASC codes use ejecta areal masses derived from piezoelectric sensor data [4, 5, 6]. However, the standard technique for inferring masses from sensor voltages implicitly assumes instantaneous ejecta creation [7], which is not a feature of the RMI source model. To investigate the impact of this discrepancy, we define separate “areal mass functions” (AMFs) at the source and sensor in terms of typically unknown distribution functions for the ejecta particles, and derive an analytic relationship between them. Then, for the case of single-shock ejection into vacuum, we use the AMFs to compare the analytic (or “true”) accumulated mass at the sensor with the value that would be inferred from piezoelectric voltage measurements. We confirm the inferred mass is correct when creation is instantaneous, and furthermore prove that when creation is not instantaneous, the inferred values will always overestimate the true mass. Finally, we derive an upper bound for the error imposed on a perfect system by the assumption of instantaneous ejecta creation. When applied to shots in the published literature, this bound is frequently less than several percent. Errors exceeding 15% may require velocities or timescales at odds with experimental observations.

  2. Lower bounds on the violation of the monogamy inequality for quantum correlation measures

    NASA Astrophysics Data System (ADS)

    Kumar, Asutosh; Dhar, Himadri Shekhar

    2016-06-01

    In multiparty quantum systems, the monogamy inequality proposes an upper bound on the distribution of bipartite quantum correlation between a single party and each of the remaining parties in the system, in terms of the amount of quantum correlation shared by that party with the rest of the system taken as a whole. However, it is well known that not all quantum correlation measures universally satisfy the monogamy inequality. In this work, we aim at determining the nontrivial value by which the monogamy inequality can be violated by a quantum correlation measure. Using an information-theoretic complementarity relation between the normalized purity and quantum correlation in any given multiparty state, we obtain a nontrivial lower bound on the negative monogamy score for the quantum correlation measure. In particular, for the three-qubit states the lower bound is equal to the negative von Neumann entropy of the single qubit reduced density matrix. We analytically examine the tightness of the derived lower bound for certain n -qubit quantum states. Further, we report numerical results of the same for monogamy violating correlation measures using Haar uniformly generated three-qubit states.

  3. Error analysis of analytic solutions for self-excited near-symmetric rigid bodies - A numerical study

    NASA Technical Reports Server (NTRS)

    Kia, T.; Longuski, J. M.

    1984-01-01

    Analytic error bounds are presented for the solutions of approximate models for self-excited near-symmetric rigid bodies. The error bounds are developed for analytic solutions to Euler's equations of motion. The results are applied to obtain a simplified analytic solution for Eulerian rates and angles. The results of a sample application of the range and error bound expressions for the case of the Galileo spacecraft experiencing transverse torques demonstrate the use of the bounds in analyses of rigid body spin change maneuvers.

  4. Mechanical metamaterials at the theoretical limit of isotropic elastic stiffness

    NASA Astrophysics Data System (ADS)

    Berger, J. B.; Wadley, H. N. G.; McMeeking, R. M.

    2017-02-01

    A wide variety of high-performance applications require materials for which shape control is maintained under substantial stress, and that have minimal density. Bio-inspired hexagonal and square honeycomb structures and lattice materials based on repeating unit cells composed of webs or trusses, when made from materials of high elastic stiffness and low density, represent some of the lightest, stiffest and strongest materials available today. Recent advances in 3D printing and automated assembly have enabled such complicated material geometries to be fabricated at low (and declining) cost. These mechanical metamaterials have properties that are a function of their mesoscale geometry as well as their constituents, leading to combinations of properties that are unobtainable in solid materials; however, a material geometry that achieves the theoretical upper bounds for isotropic elasticity and strain energy storage (the Hashin-Shtrikman upper bounds) has yet to be identified. Here we evaluate the manner in which strain energy distributes under load in a representative selection of material geometries, to identify the morphological features associated with high elastic performance. Using finite-element models, supported by analytical methods, and a heuristic optimization scheme, we identify a material geometry that achieves the Hashin-Shtrikman upper bounds on isotropic elastic stiffness. Previous work has focused on truss networks and anisotropic honeycombs, neither of which can achieve this theoretical limit. We find that stiff but well distributed networks of plates are required to transfer loads efficiently between neighbouring members. The resulting low-density mechanical metamaterials have many advantageous properties: their mesoscale geometry can facilitate large crushing strains with high energy absorption, optical bandgaps and mechanically tunable acoustic bandgaps, high thermal insulation, buoyancy, and fluid storage and transport. Our relatively simple design can be manufactured using origami-like sheet folding and bonding methods.
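
    For orientation, the classical two-phase Hashin-Shtrikman expressions can be evaluated directly for a solid-void composite. The sketch below uses the textbook closed forms (quoted without derivation, so they should be verified against a standard reference) with steel-like moduli as placeholder inputs; it is not part of the paper's analysis.

```python
def hashin_shtrikman_upper(rho, K1=160.0, G1=79.0):
    """Classical two-phase Hashin-Shtrikman upper bounds on the effective bulk
    and shear moduli of a solid-void composite (phase 1 = solid, phase 2 = void)
    at relative density rho. Moduli in GPa; defaults are roughly steel-like.
    Illustrative only -- check the expressions against a reference before use."""
    f1, f2 = rho, 1.0 - rho
    K_u = K1 + f2 / (-1.0 / K1 + 3.0 * f1 / (3.0 * K1 + 4.0 * G1))
    G_u = G1 + f2 / (-1.0 / G1 + 6.0 * f1 * (K1 + 2.0 * G1)
                     / (5.0 * G1 * (3.0 * K1 + 4.0 * G1)))
    return K_u, G_u

for rho in (0.1, 0.3, 0.5):
    K_u, G_u = hashin_shtrikman_upper(rho)
    print(f"rho = {rho:.1f}: K_HS+ = {K_u:6.1f} GPa, G_HS+ = {G_u:6.1f} GPa")
```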

  5. Mechanical metamaterials at the theoretical limit of isotropic elastic stiffness.

    PubMed

    Berger, J B; Wadley, H N G; McMeeking, R M

    2017-03-23

    A wide variety of high-performance applications require materials for which shape control is maintained under substantial stress, and that have minimal density. Bio-inspired hexagonal and square honeycomb structures and lattice materials based on repeating unit cells composed of webs or trusses, when made from materials of high elastic stiffness and low density, represent some of the lightest, stiffest and strongest materials available today. Recent advances in 3D printing and automated assembly have enabled such complicated material geometries to be fabricated at low (and declining) cost. These mechanical metamaterials have properties that are a function of their mesoscale geometry as well as their constituents, leading to combinations of properties that are unobtainable in solid materials; however, a material geometry that achieves the theoretical upper bounds for isotropic elasticity and strain energy storage (the Hashin-Shtrikman upper bounds) has yet to be identified. Here we evaluate the manner in which strain energy distributes under load in a representative selection of material geometries, to identify the morphological features associated with high elastic performance. Using finite-element models, supported by analytical methods, and a heuristic optimization scheme, we identify a material geometry that achieves the Hashin-Shtrikman upper bounds on isotropic elastic stiffness. Previous work has focused on truss networks and anisotropic honeycombs, neither of which can achieve this theoretical limit. We find that stiff but well distributed networks of plates are required to transfer loads efficiently between neighbouring members. The resulting low-density mechanical metamaterials have many advantageous properties: their mesoscale geometry can facilitate large crushing strains with high energy absorption, optical bandgaps and mechanically tunable acoustic bandgaps, high thermal insulation, buoyancy, and fluid storage and transport. Our relatively simple design can be manufactured using origami-like sheet folding and bonding methods.

  6. Optimal Micropatterns in 2D Transport Networks and Their Relation to Image Inpainting

    NASA Astrophysics Data System (ADS)

    Brancolini, Alessio; Rossmanith, Carolin; Wirth, Benedikt

    2018-04-01

    We consider two different variational models of transport networks: the so-called branched transport problem and the urban planning problem. Based on a novel relation to Mumford-Shah image inpainting and techniques developed in that field, we show for a two-dimensional situation that both highly non-convex network optimization tasks can be transformed into a convex variational problem, which may be very useful from analytical and numerical perspectives. As applications of the convex formulation, we use it to perform numerical simulations (to our knowledge this is the first numerical treatment of urban planning), and we prove a lower bound for the network cost that matches a known upper bound (in terms of how the cost scales in the model parameters) which helps better understand optimal networks and their minimal costs.

  7. Calculation of upper confidence bounds on proportion of area containing not-sampled vegetation types: An application to map unit definition for existing vegetation maps

    Treesearch

    Paul L. Patterson; Mark Finco

    2011-01-01

    This paper explores the information forest inventory data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977)....
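
    The reduction to a Bernoulli variable admits a simple closed-form bound when a vegetation type is never observed in n sample plots: the exact one-sided (Clopper-Pearson) upper confidence bound for zero successes is 1 − α^(1/n). A short sketch of that simplified calculation (not the authors' code, which follows Cochran 1977 and may include design or finite-population corrections):

```python
def upper_bound_not_sampled(n_plots, confidence=0.95):
    """One-sided upper confidence bound on the proportion of area occupied by a
    vegetation type observed in none of n_plots sample plots, treating each
    plot as an independent Bernoulli trial (an illustrative simplification)."""
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n_plots)

for n in (30, 100, 500):
    print(f"n = {n:4d} plots: not-sampled types occupy at most "
          f"{100 * upper_bound_not_sampled(n):.2f}% of the area (95% confidence)")
```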

  8. The Lyapunov dimension and its estimation via the Leonov method

    NASA Astrophysics Data System (ADS)

    Kuznetsov, N. V.

    2016-06-01

    Along with widely used numerical methods for estimating and computing the Lyapunov dimension, there is an effective analytical approach proposed by G.A. Leonov in 1991. The Leonov method is based on the direct Lyapunov method with special Lyapunov-like functions. The advantage of the method is that it allows one to estimate the Lyapunov dimension of invariant sets without localization of the set in the phase space and, in many cases, to obtain an exact Lyapunov dimension formula effectively. In this work the invariance of the Lyapunov dimension with respect to diffeomorphisms and its connection with the Leonov method are discussed. For discrete-time dynamical systems an analog of the Leonov method is suggested. Here, the connection between the Leonov method and the key related works is presented in a simple but rigorous way: Kaplan and Yorke (the concept of the Lyapunov dimension, 1979), Douady and Oesterlé (upper bounds of the Hausdorff dimension via the Lyapunov dimension of maps, 1980), Constantin, Eden, Foiaş, and Temam (upper bounds of the Hausdorff dimension via the Lyapunov exponents and Lyapunov dimension of dynamical systems, 1985-90), and the numerical calculation of the Lyapunov exponents and dimension.

  9. Reducing Conservatism of Analytic Transient Response Bounds via Shaping Filters

    NASA Technical Reports Server (NTRS)

    Kwan, Aiyueh; Bedrossian, Nazareth; Jan, Jiann-Woei; Grigoriadis, Karolos; Hua, Tuyen (Technical Monitor)

    1999-01-01

    Recent results show that the peak transient response of a linear system to bounded-energy inputs can be computed using the energy-to-peak gain of the system. However, the analytically computed peak response bound can be conservative for a class of bounded-energy signals, specifically pulse trains generated from jet firings encountered in space vehicles. In this paper, shaping filters are proposed as a methodology to reduce the conservatism of peak response analytic bounds. This methodology was applied to a realistic Space Station assembly operation subject to jet firings. The results indicate that shaping filters indeed reduce the predicted peak response bounds.
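
    For context (a sketch of the standard state-space computation, not the paper's specific analysis): one common form of the energy-to-peak gain of a stable system ẋ = Ax + Bu, y = Cx is sqrt(λ_max(C P Cᵀ)), with P the controllability Gramian, and a shaping filter is modeled by augmenting the state with filter dynamics placed in series with the input. The system matrices and filter pole below are illustrative placeholders.

```python
import numpy as np
from scipy.linalg import solve_lyapunov

def energy_to_peak_gain(A, B, C):
    """Peak-output bound per unit input energy for a stable LTI system
    xdot = A x + B u, y = C x: gain = sqrt(lambda_max(C P C^T)),
    with P the controllability Gramian solving A P + P A^T + B B^T = 0."""
    P = solve_lyapunov(A, -B @ B.T)
    return float(np.sqrt(np.max(np.linalg.eigvalsh(C @ P @ C.T))))

# Lightly damped second-order mode (illustrative numbers).
A = np.array([[0.0, 1.0], [-1.0, -0.05]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
print("raw bound:", round(energy_to_peak_gain(A, B, C), 3))

# Appending a first-order low-pass shaping filter (pole at -w) in series with
# the input restricts the admissible signal class and can tighten the bound.
w = 2.0
Af = np.block([[A, B * w], [np.zeros((1, 2)), np.array([[-w]])]])
Bf = np.array([[0.0], [0.0], [1.0]])
Cf = np.hstack([C, np.zeros((1, 1))])
print("with shaping filter:", round(energy_to_peak_gain(Af, Bf, Cf), 3))
```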

  10. Calculation of upper confidence bounds on not-sampled vegetation types using a systematic grid sample: An application to map unit definition for existing vegetation maps

    Treesearch

    Paul L. Patterson; Mark Finco

    2009-01-01

    This paper explores the information FIA data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977). Examples are...

  11. General upper bound on single-event upset rate. [due to ionizing radiation in orbiting vehicle avionics

    NASA Technical Reports Server (NTRS)

    Chlouber, Dean; O'Neill, Pat; Pollock, Jim

    1990-01-01

    A technique of predicting an upper bound on the rate at which single-event upsets due to ionizing radiation occur in semiconducting memory cells is described. The upper bound on the upset rate, which depends on the high-energy particle environment in earth orbit and accelerator cross-section data, is given by the product of an upper-bound linear energy-transfer spectrum and the mean cross section of the memory cell. Plots of the spectrum are given for low-inclination and polar orbits. An alternative expression for the exact upset rate is also presented. Both methods rely only on experimentally obtained cross-section data and are valid for sensitive bit regions having arbitrary shape.

  12. Evolutionary potential of upper thermal tolerance: biogeographic patterns and expectations under climate change.

    PubMed

    Diamond, Sarah E

    2017-02-01

    How will organisms respond to climate change? The rapid changes in global climate are expected to impose strong directional selection on fitness-related traits. A major open question then is the potential for adaptive evolutionary change under these shifting climates. At the most basic level, evolutionary change requires the presence of heritable variation and natural selection. Because organismal tolerances of high temperature place an upper bound on responding to temperature change, there has been a surge of research effort on the evolutionary potential of upper thermal tolerance traits. Here, I review the available evidence on heritable variation in upper thermal tolerance traits, adopting a biogeographic perspective to understand how heritability of tolerance varies across space. Specifically, I use meta-analytical models to explore the relationship between upper thermal tolerance heritability and environmental variability in temperature. I also explore how variation in the methods used to obtain these thermal tolerance heritabilities influences the estimation of heritable variation in tolerance. I conclude by discussing the implications of a positive relationship between thermal tolerance heritability and environmental variability in temperature and how this might influence responses to future changes in climate. © 2016 New York Academy of Sciences.

  13. Upper bounds on secret-key agreement over lossy thermal bosonic channels

    NASA Astrophysics Data System (ADS)

    Kaur, Eneet; Wilde, Mark M.

    2017-12-01

    Upper bounds on the secret-key-agreement capacity of a quantum channel serve as a way to assess the performance of practical quantum-key-distribution protocols conducted over that channel. In particular, if a protocol employs a quantum repeater, achieving secret-key rates exceeding these upper bounds is evidence of having a working quantum repeater. In this paper, we extend a recent advance [Liuzzo-Scorpo et al., Phys. Rev. Lett. 119, 120503 (2017), 10.1103/PhysRevLett.119.120503] in the theory of the teleportation simulation of single-mode phase-insensitive Gaussian channels such that it now applies to the relative entropy of entanglement measure. As a consequence of this extension, we find tighter upper bounds on the nonasymptotic secret-key-agreement capacity of the lossy thermal bosonic channel than were previously known. The lossy thermal bosonic channel serves as a more realistic model of communication than the pure-loss bosonic channel, because it can model the effects of eavesdropper tampering and imperfect detectors. An implication of our result is that the previously known upper bounds on the secret-key-agreement capacity of the thermal channel are too pessimistic for the practical finite-size regime in which the channel is used a finite number of times, and so it should now be somewhat easier to witness a working quantum repeater when using secret-key-agreement capacity upper bounds as a benchmark.

  14. Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk; Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent; Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com

    2014-10-01

    We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ_1, …, σ_r. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ_1, …, σ_r), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, min_{j<k} C(σ_j, σ_k).
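
    As background (standard definitions, not the paper's derivations): the binary quantum Chernoff divergence is C(ρ, σ) = −log min_{0≤s≤1} Tr(ρ^s σ^(1−s)), and the multi-hypothesis quantity is its minimum over all pairs. A small sketch with three illustrative qubit states:

```python
import itertools
import numpy as np
from scipy.linalg import fractional_matrix_power
from scipy.optimize import minimize_scalar

def chernoff_divergence(rho, sigma):
    """Binary quantum Chernoff divergence C(rho, sigma) =
    -log min_{0<=s<=1} Tr(rho^s sigma^(1-s)) for full-rank density matrices."""
    def q(s):
        return np.real(np.trace(fractional_matrix_power(rho, s)
                                @ fractional_matrix_power(sigma, 1.0 - s)))
    res = minimize_scalar(q, bounds=(0.0, 1.0), method="bounded")
    return -np.log(res.fun)

def multi_chernoff(states):
    """Multi-hypothesis Chernoff quantity: minimum pairwise divergence."""
    return min(chernoff_divergence(a, b)
               for a, b in itertools.combinations(states, 2))

def qubit_state(theta, p=0.95):
    """Slightly mixed single-qubit state pointing along angle theta."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return p * np.outer(psi, psi) + (1 - p) * np.eye(2) / 2

states = [qubit_state(0.0), qubit_state(0.4), np.eye(2) / 2]
print("minimum pairwise Chernoff divergence:", round(multi_chernoff(states), 4))
```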

  15. Upper bounds on sequential decoding performance parameters

    NASA Technical Reports Server (NTRS)

    Jelinek, F.

    1974-01-01

    This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.

  16. Breakdown of a 2D Heteroclinic Connection in the Hopf-Zero Singularity (I)

    NASA Astrophysics Data System (ADS)

    Baldomá, I.; Castejón, O.; Seara, T. M.

    2018-04-01

    In this paper we study a beyond all orders phenomenon which appears in the analytic unfoldings of the Hopf-zero singularity. It consists in the breakdown of a two-dimensional heteroclinic surface which exists in the truncated normal form of this singularity at any order. The results in this paper are twofold: on the one hand, we give results for generic unfoldings which lead to sharp exponentially small upper bounds of the difference between these manifolds. On the other hand, we provide asymptotic formulas for this difference by means of the Melnikov function for some non-generic unfoldings.

  17. Measuring Integrated Information from the Decoding Perspective

    PubMed Central

    Oizumi, Masafumi; Amari, Shun-ichi; Yanagawa, Toru; Fujii, Naotaka; Tsuchiya, Naotsugu

    2016-01-01

    Accumulating evidence indicates that the capacity to integrate information in the brain is a prerequisite for consciousness. Integrated Information Theory (IIT) of consciousness provides a mathematical approach to quantifying the information integrated in a system, called integrated information, Φ. Integrated information is defined theoretically as the amount of information a system generates as a whole, above and beyond the amount of information its parts independently generate. IIT predicts that the amount of integrated information in the brain should reflect levels of consciousness. Empirical evaluation of this theory requires computing integrated information from neural data acquired from experiments, although difficulties with using the original measure Φ precludes such computations. Although some practical measures have been previously proposed, we found that these measures fail to satisfy the theoretical requirements as a measure of integrated information. Measures of integrated information should satisfy the lower and upper bounds as follows: The lower bound of integrated information should be 0 and is equal to 0 when the system does not generate information (no information) or when the system comprises independent parts (no integration). The upper bound of integrated information is the amount of information generated by the whole system. Here we derive the novel practical measure Φ* by introducing a concept of mismatched decoding developed from information theory. We show that Φ* is properly bounded from below and above, as required, as a measure of integrated information. We derive the analytical expression of Φ* under the Gaussian assumption, which makes it readily applicable to experimental data. Our novel measure Φ* can generally be used as a measure of integrated information in research on consciousness, and also as a tool for network analysis on diverse areas of biology. PMID:26796119

  18. How entangled can a multi-party system possibly be?

    NASA Astrophysics Data System (ADS)

    Qi, Liqun; Zhang, Guofeng; Ni, Guyan

    2018-06-01

    The geometric measure of entanglement of a pure quantum state is defined to be its distance to the space of pure product (separable) states. Given an n-partite system composed of subsystems of dimensions d1 , … ,dn, an upper bound for maximally allowable entanglement is derived in terms of geometric measure of entanglement. This upper bound is characterized exclusively by the dimensions d1 , … ,dn of composite subsystems. Numerous examples demonstrate that the upper bound appears to be reasonably tight.

  19. Deviations from LTE in a stellar atmosphere

    NASA Technical Reports Server (NTRS)

    Kalkofen, W.; Klein, R. I.; Stein, R. F.

    1979-01-01

    Deviations from LTE are investigated in an atmosphere of hydrogen atoms with one bound level, satisfying the equations of radiative, hydrostatic, and statistical equilibrium. The departure coefficient and the kinetic temperature as functions of the frequency dependence of the radiative cross section are studied analytically and numerically. Near the outer boundary of the atmosphere, the departure coefficient is smaller than unity when the radiative cross section grows with frequency faster than with the square of frequency; it exceeds unity otherwise. Far from the boundary the departure coefficient tends to exceed unity for any frequency dependence of the radiative cross section. Overpopulation always implies that the kinetic temperature in the statistical-equilibrium atmosphere is higher than the temperature in the corresponding LTE atmosphere. Upper and lower bounds on the kinetic temperature are given for an atmosphere with deviations from LTE only in the optically shallow layers when the emergent intensity can be described by a radiation temperature.

  20. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.

  1. A new upper bound for the norm of interval matrices with application to robust stability analysis of delayed neural networks.

    PubMed

    Faydasicok, Ozlem; Arik, Sabri

    2013-08-01

    The main problem with the analysis of robust stability of neural networks is to find the upper bound norm for the intervalized interconnection matrices of neural networks. In the previous literature, the three major upper bound norms for the intervalized interconnection matrices have been reported and successfully applied to derive new sufficient conditions for robust stability of delayed neural networks. One of the main contributions of this paper is the derivation of a new upper bound for the norm of the intervalized interconnection matrices of neural networks. Then, by exploiting this new upper bound norm of interval matrices and using the stability theory of Lyapunov functionals and the theory of homomorphic mapping, we obtain new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. The results obtained in this paper are shown to be new and can be considered alternatives to previously published corresponding results. We also give some illustrative and comparative numerical examples to demonstrate the effectiveness and applicability of the proposed robust stability condition. Copyright © 2013 Elsevier Ltd. All rights reserved.
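
    For orientation (this is a classical bound, not the new norm derived in the paper): writing each matrix in the interval through its center A_c and nonnegative radius A_Δ, every admissible A satisfies ||A||₂ ≤ || |A_c| + A_Δ ||₂ and, by the triangle inequality, ||A||₂ ≤ ||A_c||₂ + ||A_Δ||₂. A small numerical check on random interval data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A_lower = rng.normal(size=(n, n)) - 0.3
A_upper = A_lower + 0.6 * rng.random((n, n))     # elementwise A_lower <= A <= A_upper

A_c = 0.5 * (A_upper + A_lower)                  # center matrix
A_d = 0.5 * (A_upper - A_lower)                  # nonnegative radius matrix

bound_abs      = np.linalg.norm(np.abs(A_c) + A_d, 2)
bound_triangle = np.linalg.norm(A_c, 2) + np.linalg.norm(A_d, 2)

# Sample matrices from the interval and check that both bounds dominate ||A||_2.
worst = max(np.linalg.norm(A_lower + (A_upper - A_lower) * rng.random((n, n)), 2)
            for _ in range(2000))
print(f"sampled max ||A||_2 = {worst:.3f}")
print(f"|A_c|+A_d bound     = {bound_abs:.3f}")
print(f"triangle bound      = {bound_triangle:.3f}")
assert worst <= bound_abs + 1e-9 and worst <= bound_triangle + 1e-9
```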

  2. Estimation variance bounds of importance sampling simulations in digital communication systems

    NASA Technical Reports Server (NTRS)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.

  3. On the likelihood of single-peaked preferences.

    PubMed

    Lackner, Marie-Louise; Lackner, Martin

    2017-01-01

    This paper contains an extensive combinatorial analysis of the single-peaked domain restriction and investigates the likelihood that an election is single-peaked. We provide a very general upper bound result for domain restrictions that can be defined by certain forbidden configurations. This upper bound implies that many domain restrictions (including the single-peaked restriction) are very unlikely to appear in a random election chosen according to the Impartial Culture assumption. For single-peaked elections, this upper bound can be refined and complemented by a lower bound that is asymptotically tight. In addition, we provide exact results for elections with few voters or candidates. Moreover, we consider the Pólya urn model and the Mallows model and obtain lower bounds showing that single-peakedness is considerably more likely to appear for certain parameterizations.
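
    To make the counted object concrete: a ranking is single-peaked with respect to a left-right axis exactly when each of its top-k prefixes occupies a contiguous block of axis positions. The sketch below checks this and estimates the single-peakedness probability under the Impartial Culture assumption for a small election; the sizes are arbitrary, and the brute-force search over axes only scales to a few candidates.

```python
import itertools
import random

def single_peaked_on_axis(ranking, axis):
    """True iff every top-k prefix of `ranking` is a contiguous block on `axis`."""
    pos = {c: i for i, c in enumerate(axis)}
    lo = hi = pos[ranking[0]]
    for c in ranking[1:]:
        p = pos[c]
        if p == lo - 1:
            lo = p
        elif p == hi + 1:
            hi = p
        else:
            return False
    return True

def profile_single_peaked(profile, candidates):
    """True if some axis renders every ranking in the profile single-peaked."""
    return any(all(single_peaked_on_axis(r, axis) for r in profile)
               for axis in itertools.permutations(candidates))

random.seed(1)
candidates, n_voters, trials = list("abcd"), 5, 20000
hits = sum(profile_single_peaked([random.sample(candidates, 4) for _ in range(n_voters)],
                                 candidates)
           for _ in range(trials))
print(f"estimated P(single-peaked), m=4 candidates, n=5 voters: {hits / trials:.4f}")
```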

  4. A tight upper bound for quadratic knapsack problems in grid-based wind farm layout optimization

    NASA Astrophysics Data System (ADS)

    Quan, Ning; Kim, Harrison M.

    2018-03-01

    The 0-1 quadratic knapsack problem (QKP) in wind farm layout optimization models possible turbine locations as nodes, and power loss due to wake effects between pairs of turbines as edges in a complete graph. The goal is to select up to a certain number of turbine locations such that the sum of selected node and edge coefficients is maximized. Finding the optimal solution to the QKP is difficult in general, but it is possible to obtain a tight upper bound on the QKP's optimal value which facilitates the use of heuristics to solve QKPs by giving a good estimate of the optimality gap of any feasible solution. This article applies an upper bound method that is especially well-suited to QKPs in wind farm layout optimization due to certain features of the formulation that reduce the computational complexity of calculating the upper bound. The usefulness of the upper bound was demonstrated by assessing the performance of the greedy algorithm for solving QKPs in wind farm layout optimization. The results show that the greedy algorithm produces good solutions within 4% of the optimal value for small to medium sized problems considered in this article.
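
    A generic sketch of the greedy heuristic whose optimality gap the upper bound is used to assess; the grid, wake coefficients, and turbine count below are placeholders, not the article's model.

```python
import numpy as np

def greedy_qkp(node_value, edge_value, k):
    """Greedy heuristic for the 0-1 quadratic knapsack in grid-based layout
    optimization: node_value[i] is the stand-alone power of location i,
    edge_value[i, j] (typically negative) the pairwise wake interaction,
    and k the number of turbines to place."""
    selected, remaining = [], set(range(len(node_value)))
    for _ in range(k):
        best = max(remaining,
                   key=lambda i: node_value[i] + sum(edge_value[i, j] for j in selected))
        selected.append(best)
        remaining.remove(best)
    objective = (sum(node_value[i] for i in selected)
                 + sum(edge_value[selected[a], selected[b]]
                       for a in range(len(selected)) for b in range(a)))
    return selected, objective

# Toy instance: a 5x5 grid of candidate cells, identical stand-alone power,
# wake losses decaying with the distance between cells.
xy = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
edge = -1.0 / (1.0 + dist**2)
np.fill_diagonal(edge, 0.0)
node = np.full(25, 5.0)
layout, power = greedy_qkp(node, edge, k=6)
print("chosen cells:", sorted(layout), " objective value:", round(power, 2))
```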

  5. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.

  6. Edge connectivity and the spectral gap of combinatorial and quantum graphs

    NASA Astrophysics Data System (ADS)

    Berkolaiko, Gregory; Kennedy, James B.; Kurasov, Pavel; Mugnolo, Delio

    2017-09-01

    We derive a number of upper and lower bounds for the first nontrivial eigenvalue of Laplacians on combinatorial and quantum graphs in terms of the edge connectivity, i.e. the minimal number of edges which need to be removed to make the graph disconnected. On combinatorial graphs, one of the bounds corresponds to a well-known inequality of Fiedler, of which we give a new variational proof. On quantum graphs, the corresponding bound generalizes a recent result of Band and Lévy. All proofs are general enough to yield corresponding estimates for the p-Laplacian and allow us to identify the minimizers. Based on the Betti number of the graph, we also derive upper and lower bounds on all eigenvalues which are ‘asymptotically correct’, i.e. agree with the Weyl asymptotics for the eigenvalues of the quantum graph. In particular, the lower bounds improve the bounds of Friedlander on any given graph for all but finitely many eigenvalues, while the upper bounds improve recent results of Ariturk. Our estimates are also used to derive bounds on the eigenvalues of the normalized Laplacian matrix that improve known bounds of spectral graph theory.

  7. On the role of entailment patterns and scalar implicatures in the processing of numerals

    PubMed Central

    Panizza, Daniele; Chierchia, Gennaro; Clifton, Charles

    2009-01-01

    There has been much debate, in both the linguistics and the psycholinguistics literature, concerning numbers and the interpretation of number-denoting determiners ('numerals'). Such debate concerns, in particular, the nature and distribution of upper-bounded ('exact') interpretations vs. lower-bounded ('at-least') construals. In the present paper we show that the interpretation and processing of numerals are affected by the entailment properties of the context in which they occur. Experiment 1 established off-line preferences using a questionnaire. Experiment 2 investigated the processing issue through an eye tracking experiment using a silent reading task. Our results show that the upper-bounded interpretation of numerals occurs more often in an upward entailing context than in a downward entailing context. Reading times of the numeral itself were longer when it was embedded in an upward entailing context than when it was not, indicating that processing resources were required when the context triggered an upper-bounded interpretation. However, reading of a following context that required an upper-bounded interpretation triggered more regressions towards the numeral when it had occurred in a downward entailing context than in an upward entailing one. Such findings show that speakers' interpretation and processing of numerals is systematically affected by the polarity of the sentence in which they occur, and support the hypothesis that the upper-bounded interpretation of numerals is due to a scalar implicature. PMID:20161494

  8. The Upper and Lower Bounds of the Prediction Accuracies of Ensemble Methods for Binary Classification

    PubMed Central

    Wang, Xueyi; Davidson, Nicholas J.

    2011-01-01

    Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracies of ensemble methods for binary classification that have been missed or misinterpreted in the previous literature. First we show the upper and lower bounds of the prediction accuracies (i.e. the best and worst possible prediction accuracies) of ensemble methods. Next we show that an ensemble method can achieve > 0.5 prediction accuracy while the individual classifiers each have < 0.5 prediction accuracy. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify the results and show that it is hard to achieve the upper- and lower-bound accuracies with random individual classifiers, so better algorithms need to be developed. PMID:21853162
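
    A hand-built correctness pattern illustrating the second claim (the bound formulas themselves are in the paper and are not reproduced here): three binary classifiers, each correct on only 6 of 15 examples, whose majority vote is nevertheless correct on 9 of 15.

```python
import numpy as np

# Correctness matrix: rows = 15 test examples, columns = 3 binary classifiers.
# On 9 examples exactly two classifiers are correct (the correct pair cycles),
# on the remaining 6 examples all three are wrong.
pattern = [[1, 1, 0]] * 3 + [[1, 0, 1]] * 3 + [[0, 1, 1]] * 3 + [[0, 0, 0]] * 6
correct = np.array(pattern)

individual = correct.mean(axis=0)                 # each classifier: 6/15 = 0.4
majority = (correct.sum(axis=1) >= 2).mean()      # majority vote: 9/15 = 0.6
print("individual accuracies:", individual)       # [0.4 0.4 0.4]
print("majority-vote accuracy:", majority)        # 0.6
```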

  9. Upper bound of abutment scour in laboratory and field data

    USGS Publications Warehouse

    Benedict, Stephen

    2016-01-01

    The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted a field investigation of abutment scour in South Carolina and used those data to develop envelope curves that define the upper bound of abutment scour. To expand on this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with abutment scour data from other sources and evaluate upper bound patterns with this larger data set. To facilitate this analysis, 446 laboratory and 331 field measurements of abutment scour were compiled into a digital database. This extensive database was used to evaluate the South Carolina abutment scour envelope curves and to develop additional envelope curves that reflected the upper bound of abutment scour depth for the laboratory and field data. The envelope curves provide simple but useful supplementary tools for assessing the potential maximum abutment scour depth in the field setting.

  10. Exact PDF equations and closure approximations for advective-reactive transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venturi, D.; Tartakovsky, Daniel M.; Tartakovsky, Alexandre M.

    2013-06-01

    Mathematical models of advection–reaction phenomena rely on advective flow velocity and (bio)chemical reaction rates that are notoriously random. By using functional integral methods, we derive exact evolution equations for the probability density function (PDF) of the state variables of the advection–reaction system in the presence of random transport velocity and random reaction rates with rather arbitrary distributions. These PDF equations are solved analytically for transport with deterministic flow velocity and a linear reaction rate represented mathematically by a heterogeneous and strongly-correlated random field. Our analytical solution is then used to investigate the accuracy and robustness of the recently proposed large-eddy diffusivity (LED) closure approximation [1]. We find that the solution to the LED-based PDF equation, which is exact for uncorrelated reaction rates, is accurate even in the presence of strong correlations and it provides an upper bound of predictive uncertainty.

  11. Fully synchronous solutions and the synchronization phase transition for the finite-N Kuramoto model

    NASA Astrophysics Data System (ADS)

    Bronski, Jared C.; DeVille, Lee; Park, Moon Jip

    2012-09-01

    We present a detailed analysis of the stability of phase-locked solutions to the Kuramoto system of oscillators. We derive an analytical expression counting the dimension of the unstable manifold associated to a given stationary solution. From this we are able to derive a number of consequences, including analytic expressions for the first and last frequency vectors to phase-lock, upper and lower bounds on the probability that a randomly chosen frequency vector will phase-lock, and very sharp results on the large N limit of this model. One of the surprises in this calculation is that for frequencies that are Gaussian distributed, the correct scaling for full synchrony is not the one commonly studied in the literature; rather, there is a logarithmic correction to the scaling which is related to the extremal value statistics of the random frequency vector.
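
    A sketch (illustrative sizes, couplings, and a simple forward-Euler integrator) of the finite-N system under study: integrate the Kuramoto equations for Gaussian frequency draws and count how often a fully phase-locked state is reached.

```python
import numpy as np

def kuramoto_locks(omega, K, T=100.0, dt=0.02, tol=1e-2, seed=0):
    """Integrate theta_i' = omega_i + (K/N) sum_j sin(theta_j - theta_i) and
    report whether the instantaneous frequencies have collapsed to a common
    value (full phase locking) by the final time."""
    rng = np.random.default_rng(seed)
    n = len(omega)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(int(T / dt)):
        coupling = (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + coupling)
    dtheta = omega + (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return np.ptp(dtheta) < tol

rng = np.random.default_rng(1)
N, trials = 20, 20
omegas = [rng.normal(0.0, 1.0, N) for _ in range(trials)]
for K in (1.0, 3.0):
    locked = sum(kuramoto_locks(w, K, seed=i) for i, w in enumerate(omegas))
    print(f"K = {K}: phase-locked in {locked}/{trials} random frequency draws")
```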

  12. Energy and energy gradient matrix elements with N-particle explicitly correlated complex Gaussian basis functions with L =1

    NASA Astrophysics Data System (ADS)

    Bubin, Sergiy; Adamowicz, Ludwik

    2008-03-01

    In this work we consider explicitly correlated complex Gaussian basis functions for expanding the wave function of an N-particle system with the L =1 total orbital angular momentum. We derive analytical expressions for various matrix elements with these basis functions including the overlap, kinetic energy, and potential energy (Coulomb interaction) matrix elements, as well as matrix elements of other quantities. The derivatives of the overlap, kinetic, and potential energy integrals with respect to the Gaussian exponential parameters are also derived and used to calculate the energy gradient. All the derivations are performed using the formalism of the matrix differential calculus that facilitates a way of expressing the integrals in an elegant matrix form, which is convenient for the theoretical analysis and the computer implementation. The new method is tested in calculations of two systems: the lowest P state of the beryllium atom and the bound P state of the positronium molecule (with the negative parity). Both calculations yielded new, lowest-to-date, variational upper bounds, while the number of basis functions used was significantly smaller than in previous studies. It was possible to accomplish this due to the use of the analytic energy gradient in the minimization of the variational energy.

  13. Energy and energy gradient matrix elements with N-particle explicitly correlated complex Gaussian basis functions with L=1.

    PubMed

    Bubin, Sergiy; Adamowicz, Ludwik

    2008-03-21

    In this work we consider explicitly correlated complex Gaussian basis functions for expanding the wave function of an N-particle system with the L=1 total orbital angular momentum. We derive analytical expressions for various matrix elements with these basis functions including the overlap, kinetic energy, and potential energy (Coulomb interaction) matrix elements, as well as matrix elements of other quantities. The derivatives of the overlap, kinetic, and potential energy integrals with respect to the Gaussian exponential parameters are also derived and used to calculate the energy gradient. All the derivations are performed using the formalism of the matrix differential calculus that facilitates a way of expressing the integrals in an elegant matrix form, which is convenient for the theoretical analysis and the computer implementation. The new method is tested in calculations of two systems: the lowest P state of the beryllium atom and the bound P state of the positronium molecule (with the negative parity). Both calculations yielded new, lowest-to-date, variational upper bounds, while the number of basis functions used was significantly smaller than in previous studies. It was possible to accomplish this due to the use of the analytic energy gradient in the minimization of the variational energy.

  14. Physical Uncertainty Bounds (PUB)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughan, Diane Elizabeth; Preston, Dean L.

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  15. Effects of stochastic noise on dynamical decoupling procedures

    NASA Astrophysics Data System (ADS)

    Bernád, J. Z.; Frydrych, H.

    2014-06-01

    Dynamical decoupling is an important tool to counter decoherence and dissipation effects in quantum systems originating from environmental interactions. It has been used successfully in many experiments; however, there is still a gap between fidelity improvements achieved in practice compared to theoretical predictions. We propose a model for imperfect dynamical decoupling based on a stochastic Ito differential equation which could explain the observed gap. We discuss the impact of our model on the time evolution of various quantum systems in finite- and infinite-dimensional Hilbert spaces. Analytical results are given for the limit of continuous control, whereas we present numerical simulations and upper bounds for the case of finite control.

  16. Approximation solution of Schrodinger equation for Q-deformed Rosen-Morse using supersymmetry quantum mechanics (SUSY QM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alemgadmi, Khaled I. K., E-mail: azozkied@yahoo.com; Suparmi; Cari

    2015-09-30

    The approximate analytical solution of the Schrodinger equation for the Q-deformed Rosen-Morse potential was investigated using the supersymmetric quantum mechanics (SUSY QM) method. The approximate bound-state energy is given in closed form, and the corresponding approximate wave function for an arbitrary l-state is given for the ground state. The first excited state is obtained by applying the upper operator to the ground-state wave function. The special case of the ground state is given for various values of q. The q-deformation of the Rosen-Morse potential reduces the energy spectrum of the system: the larger the value of q, the smaller the energy spectrum.

  17. Perturbative unitarity constraints on gauge portals

    NASA Astrophysics Data System (ADS)

    El Hedri, Sonia; Shepherd, William; Walker, Devin G. E.

    2017-12-01

    Dark matter that was once in thermal equilibrium with the Standard Model is generally prohibited from obtaining all of its mass from the electroweak phase transition. This implies a new scale of physics and mediator particles to facilitate dark matter annihilation. In this work, we focus on dark matter that annihilates through a generic gauge boson portal. We show how partial wave unitarity places upper bounds on the dark gauge boson, dark Higgs and dark matter masses. Outside of well-defined fine-tuned regions, we find an upper bound of 9 TeV for the dark matter mass when the dark Higgs and dark gauge bosons both facilitate the dark matter annihilations. In this scenario, the upper bound on the dark Higgs and dark gauge boson masses are 10 TeV and 16 TeV, respectively. When only the dark gauge boson facilitates dark matter annihilations, we find an upper bound of 3 TeV and 6 TeV for the dark matter and dark gauge boson, respectively. Overall, using the gauge portal as a template, we describe a method to not only place upper bounds on the dark matter mass but also on the new particles with Standard Model quantum numbers. We briefly discuss the reach of future accelerator, direct and indirect detection experiments for this class of models.

  18. Eigenvalues of the Wentzell-Laplace operator and of the fourth order Steklov problems

    NASA Astrophysics Data System (ADS)

    Xia, Changyu; Wang, Qiaoling

    2018-05-01

    We prove a sharp upper bound and a lower bound for the first nonzero eigenvalue of the Wentzell-Laplace operator on compact manifolds with boundary, and an isoperimetric inequality for the same eigenvalue in the case where the manifold is a bounded domain in a Euclidean space. We study some fourth-order Steklov problems and obtain an isoperimetric upper bound for their first eigenvalue. We also find all the eigenvalues and eigenfunctions for two kinds of fourth-order Steklov problems on a Euclidean ball.

  19. SAS and SPSS macros to calculate standardized Cronbach's alpha using the upper bound of the phi coefficient for dichotomous items.

    PubMed

    Sun, Wei; Chou, Chih-Ping; Stacy, Alan W; Ma, Huiyan; Unger, Jennifer; Gallaher, Peggy

    2007-02-01

    Cronbach's α is widely used in social science research to estimate the internal consistency (reliability) of a measurement scale. However, when items are not strictly parallel, the Cronbach's α coefficient provides a lower-bound estimate of true reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach's α for a scale with dichotomous items can be improved by using the upper bound of the coefficient phi. SAS and SPSS macros have been developed in this article to obtain standardized Cronbach's α via this method. The simulation analysis showed that Cronbach's α from upper-bound phi might be appropriate for estimating the real reliability when standardized Cronbach's α is problematic.
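
    The record above describes SAS and SPSS macros; as a rough cross-language illustration (not the authors' macros), the following Python sketch computes standardized Cronbach's α from the mean inter-item correlation and, for dichotomous items, divides each observed phi coefficient by its upper bound phi_max before averaging. The toy data, item difficulties, and function names are assumptions made for this example.

    ```python
    import numpy as np

    def phi_max(p_i, p_j):
        """Upper bound of the phi coefficient for two dichotomous items
        with endorsement proportions p_i and p_j."""
        lo, hi = min(p_i, p_j), max(p_i, p_j)
        return np.sqrt(lo * (1 - hi) / (hi * (1 - lo)))

    def standardized_alpha(mean_r, k):
        """Standardized Cronbach's alpha from a mean inter-item correlation."""
        return k * mean_r / (1 + (k - 1) * mean_r)

    def alpha_phi_corrected(items):
        """items: (n_subjects, k_items) binary array.
        Returns (usual standardized alpha, phi-max-corrected alpha)."""
        items = np.asarray(items, dtype=float)
        _, k = items.shape
        p = items.mean(axis=0)
        R = np.corrcoef(items, rowvar=False)     # Pearson r of binary items = phi
        iu = np.triu_indices(k, 1)
        raw_mean = R[iu].mean()
        corrected = np.mean([R[i, j] / phi_max(p[i], p[j]) for i, j in zip(*iu)])
        return standardized_alpha(raw_mean, k), standardized_alpha(corrected, k)

    # toy example: 4 dichotomous items driven by one latent trait with unequal difficulties
    rng = np.random.default_rng(0)
    theta = rng.normal(size=500)
    cuts = [-0.8, -0.2, 0.3, 1.0]
    X = np.column_stack([(theta + rng.normal(size=500) > c).astype(int) for c in cuts])
    print(alpha_phi_corrected(X))                # the corrected alpha should be the larger value
    ```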

  20. The Problem of Limited Inter-rater Agreement in Modelling Music Similarity

    PubMed Central

    Flexer, Arthur; Grill, Thomas

    2016-01-01

    One of the central goals of Music Information Retrieval (MIR) is the quantification of similarity between or within pieces of music. These quantitative relations should mirror the human perception of music similarity, which is however highly subjective with low inter-rater agreement. Unfortunately this principal problem has been given little attention in MIR so far. Since it is not meaningful to have computational models that go beyond the level of human agreement, these levels of inter-rater agreement present a natural upper bound for any algorithmic approach. We will illustrate this fundamental problem in the evaluation of MIR systems using results from two typical application scenarios: (i) modelling of music similarity between pieces of music; (ii) music structure analysis within pieces of music. For both applications, we derive upper bounds of performance which are due to the limited inter-rater agreement. We compare these upper bounds to the performance of state-of-the-art MIR systems and show how the upper bounds prevent further progress in developing better MIR systems. PMID:28190932

  1. Evidence for a bound on the lifetime of de Sitter space

    NASA Astrophysics Data System (ADS)

    Freivogel, Ben; Lippert, Matthew

    2008-12-01

    Recent work has suggested a surprising new upper bound on the lifetime of de Sitter vacua in string theory. The bound is parametrically longer than the Hubble time but parametrically shorter than the recurrence time. We investigate whether the bound is satisfied in a particular class of de Sitter solutions, the KKLT vacua. Despite the freedom to make the supersymmetry breaking scale exponentially small, which naively would lead to extremely stable vacua, we find that the lifetime is always less than about exp(10^22) Hubble times, in agreement with the proposed bound. This result, however, is contingent on several estimates and assumptions; in particular, we rely on a conjectural upper bound on the Euler number of the Calabi-Yau fourfolds used in KKLT compactifications.

  2. Upper bound on the efficiency of certain nonimaging concentrators in the physical-optics model

    NASA Astrophysics Data System (ADS)

    Welford, W. T.; Winston, R.

    1982-09-01

    Upper bounds on the performance of nonimaging concentrators are obtained within the framework of scalar-wave theory by using a simple approach to avoid complex calculations on multiple phase fronts. The approach consists in treating a theoretically perfect image-forming device and postulating that no non-image-forming concentrator can have a better performance than such an ideal image-forming system. The performance of such a system can be calculated according to wave theory, and this will provide, in accordance with the postulate, upper bounds on the performance of nonimaging systems. The method is demonstrated for a two-dimensional compound parabolic concentrator.

  3. Global a priori estimates for the inhomogeneous Landau equation with moderately soft potentials

    NASA Astrophysics Data System (ADS)

    Cameron, Stephen; Silvestre, Luis; Snelson, Stanley

    2018-05-01

    We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.

  4. Global solutions of restricted open-shell Hartree-Fock theory from semidefinite programming with applications to strongly correlated quantum systems.

    PubMed

    Veeraraghavan, Srikant; Mazziotti, David A

    2014-03-28

    We present a density matrix approach for computing global solutions of restricted open-shell Hartree-Fock theory, based on semidefinite programming (SDP), that gives upper and lower bounds on the Hartree-Fock energy of quantum systems. While wave function approaches to Hartree-Fock theory yield an upper bound to the Hartree-Fock energy, we derive a semidefinite relaxation of Hartree-Fock theory that yields a rigorous lower bound on the Hartree-Fock energy. We also develop an upper-bound algorithm in which Hartree-Fock theory is cast as a SDP with a nonconvex constraint on the rank of the matrix variable. Equality of the upper- and lower-bound energies guarantees that the computed solution is the globally optimal solution of Hartree-Fock theory. The work extends a previously presented method for closed-shell systems [S. Veeraraghavan and D. A. Mazziotti, Phys. Rev. A 89, 010502-R (2014)]. For strongly correlated systems the SDP approach provides an alternative to the locally optimized Hartree-Fock energies and densities with a certificate of global optimality. Applications are made to the potential energy curves of C2, CN, Cr2, and NO2.

  5. Formation of the Aerosol of Space Origin in Earth's Atmosphere

    NASA Technical Reports Server (NTRS)

    Kozak, P. M.; Kruchynenko, V. G.

    2011-01-01

    The problem of the formation of aerosol of space origin in Earth's atmosphere is examined. Meteoroids in the mass range of 10^-18 to 10^-8 g are considered as a source of its origin. The lower bound of the mass range is chosen according to the data presented in the literature; the upper bound is determined in accordance with the theory of Whipple's micrometeorites. Basing on the classical equations of deceleration and heating for small meteor bodies, we have determined the maximal temperatures of the particles and the altitudes at which they reach critically low velocities, which can be called velocities of stopping. As a condition for the transformation of a space particle into an aerosol one, we have used the condition of not reaching the melting temperature of the meteoroid. The simplified equation of deceleration without Earth's gravity and the barometric formula for the atmosphere density are used. In the equation of heat balance the energy loss for heating is neglected. The analytical solution of the simplified equations is used for the analysis.

  6. Perturbative unitarity constraints on gauge portals

    DOE PAGES

    El Hedri, Sonia; Shepherd, William; Walker, Devin G. E.

    2017-10-03

    Dark matter that was once in thermal equilibrium with the Standard Model is generally prohibited from obtaining all of its mass from the electroweak phase transition. This implies a new scale of physics and mediator particles to facilitate dark matter annihilation. In this work, we focus on dark matter that annihilates through a generic gauge boson portal. We show how partial wave unitarity places upper bounds on the dark gauge boson, dark Higgs and dark matter masses. Outside of well-defined fine-tuned regions, we find an upper bound of 9 TeV for the dark matter mass when the dark Higgs and dark gauge bosons both facilitate the dark matter annihilations. In this scenario, the upper bounds on the dark Higgs and dark gauge boson masses are 10 TeV and 16 TeV, respectively. When only the dark gauge boson facilitates dark matter annihilations, we find upper bounds of 3 TeV and 6 TeV for the dark matter and dark gauge boson, respectively. Overall, using the gauge portal as a template, we describe a method to not only place upper bounds on the dark matter mass but also on the new particles with Standard Model quantum numbers. Here, we briefly discuss the reach of future accelerator, direct and indirect detection experiments for this class of models.

  7. Perturbative unitarity constraints on gauge portals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Hedri, Sonia; Shepherd, William; Walker, Devin G. E.

    Dark matter that was once in thermal equilibrium with the Standard Model is generally prohibited from obtaining all of its mass from the electroweak phase transition. This implies a new scale of physics and mediator particles to facilitate dark matter annihilation. In this work, we focus on dark matter that annihilates through a generic gauge boson portal. We show how partial wave unitarity places upper bounds on the dark gauge boson, dark Higgs and dark matter masses. Outside of well-defined fine-tuned regions, we find an upper bound of 9 TeV for the dark matter mass when the dark Higgs and dark gauge bosons both facilitate the dark matter annihilations. In this scenario, the upper bounds on the dark Higgs and dark gauge boson masses are 10 TeV and 16 TeV, respectively. When only the dark gauge boson facilitates dark matter annihilations, we find upper bounds of 3 TeV and 6 TeV for the dark matter and dark gauge boson, respectively. Overall, using the gauge portal as a template, we describe a method to not only place upper bounds on the dark matter mass but also on the new particles with Standard Model quantum numbers. Here, we briefly discuss the reach of future accelerator, direct and indirect detection experiments for this class of models.

  8. Noisy metrology: a saturable lower bound on quantum Fisher information

    NASA Astrophysics Data System (ADS)

    Yousefjani, R.; Salimi, S.; Khorashad, A. S.

    2017-06-01

    In order to provide a guaranteed precision and a more accurate judgement about the true value of the Cramér-Rao bound and its scaling behavior, an upper bound (equivalently a lower bound on the quantum Fisher information) for precision of estimation is introduced. Unlike the bounds previously introduced in the literature, the upper bound is saturable and yields a practical instruction to estimate the parameter through preparing the optimal initial state and optimal measurement. The bound is based on the underlying dynamics, and its calculation is straightforward and requires only the matrix representation of the quantum maps responsible for encoding the parameter. This allows us to apply the bound to open quantum systems whose dynamics are described by either semigroup or non-semigroup maps. Reliability and efficiency of the method to predict the ultimate precision limit are demonstrated by three main examples.

  9. Evaluation of the availability of bound analyte for passive sampling in the presence of mobile binding matrix.

    PubMed

    Xu, Jianqiao; Huang, Shuyao; Jiang, Ruifen; Cui, Shufen; Luan, Tiangang; Chen, Guosheng; Qiu, Junlang; Cao, Chenyang; Zhu, Fang; Ouyang, Gangfeng

    2016-04-21

    Elucidating the availability of the bound analytes for the mass transfer through the diffusion boundary layers (DBLs) adjacent to passive samplers is important for understanding the passive sampling kinetics in complex samples, in which the lability factor of the bound analyte in the DBL is an important parameter. In this study, the mathematical expression of lability factor was deduced by assuming a pseudo-steady state during passive sampling, and the equation indicated that the lability factor was equal to the ratio of normalized concentration gradients between the bound and free analytes. Through the introduction of the mathematical expression of lability factor, the modified effective average diffusion coefficient was proven to be more suitable for describing the passive sampling kinetics in the presence of mobile binding matrixes. Thereafter, the lability factors of the bound polycyclic aromatic hydrocarbons (PAHs) with sodium dodecylsulphate (SDS) micelles as the binding matrixes were figured out according to the improved theory. The lability factors were observed to decrease with larger binding ratios and smaller micelle sizes, and were successfully used to predict the mass transfer efficiencies of PAHs through DBLs. This study would promote the understanding of the availability of bound analytes for passive sampling based on the theoretical improvements and experimental assessments. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Standard Deviation for Small Samples

    ERIC Educational Resources Information Center

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
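
    Although the abstract above is truncated, the kind of representation it refers to can be sketched with the standard pairwise-difference identity, which also makes a crude range-based bound on the standard deviation immediate. The bound R/sqrt(2) used below is only the obvious consequence of that identity, not necessarily the sharper bound derived in the article; the sample data are assumptions for illustration.

    ```python
    import itertools, random, statistics

    def var_pairwise(xs):
        """Sample variance via sum_{i<j} (x_i - x_j)^2 / (n (n - 1)),
        which for n = 3 or 4 can be evaluated mentally for integer data."""
        n = len(xs)
        return sum((a - b) ** 2 for a, b in itertools.combinations(xs, 2)) / (n * (n - 1))

    random.seed(1)
    for n in (3, 4, 7):
        xs = [random.randint(0, 20) for _ in range(n)]
        R = max(xs) - min(xs)
        same = round(var_pairwise(xs), 9) == round(statistics.variance(xs), 9)
        within_bound = var_pairwise(xs) ** 0.5 <= R / 2 ** 0.5 + 1e-12   # each |x_i - x_j| <= R
        print(n, xs, same, within_bound)
    ```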

  11. Finite-error metrological bounds on multiparameter Hamiltonian estimation

    NASA Astrophysics Data System (ADS)

    Kura, Naoto; Ueda, Masahito

    2018-01-01

    Estimation of multiple parameters in an unknown Hamiltonian is investigated. We present upper and lower bounds on the time required to complete the estimation within a prescribed error tolerance δ . The lower bound is given on the basis of the Cramér-Rao inequality, where the quantum Fisher information is bounded by the squared evolution time. The upper bound is obtained by an explicit construction of estimation procedures. By comparing the cases with different numbers of Hamiltonian channels, we also find that the few-channel procedure with adaptive feedback and the many-channel procedure with entanglement are equivalent in the sense that they require the same amount of time resource up to a constant factor.

  12. Effective elastic moduli of triangular lattice material with defects

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyu; Liang, Naigang

    2012-10-01

    This paper presents an attempt to extend homogenization analysis for the effective elastic moduli of triangular lattice materials with microstructural defects. The proposed homogenization method adopts a process based on homogeneous strain boundary conditions, the micro-scale constitutive law and the micro-to-macro static operator to establish the relationship between the macroscopic properties of a given lattice material and its micro-discrete behaviors and structures. Further, the idea behind Eshelby's equivalent eigenstrain principle is introduced to replace a defect distribution by an imaginary displacement field (eigendisplacement) with the equivalent mechanical effect, and the triangular lattice Green's function technique is developed to solve the eigendisplacement field. The proposed method therefore allows handling of different types of microstructural defects as well as their arbitrary spatial distribution within a general and compact framework. Analytical closed-form estimations are derived, in the case of the dilute limit, for all the effective elastic moduli of stretch-dominated triangular lattices containing fractured cell walls and missing cells, respectively. Comparisons with numerical results, the Hashin-Shtrikman upper bounds, and uniform strain upper bounds are also presented to illustrate the predictive capability of the proposed method for lattice materials. Based on this work, we propose that not only the effective Young's and shear moduli but also the effective Poisson's ratio of triangular lattice materials depend on the number density of fractured cell walls and their spatial arrangements.

  13. Constraining the phantom braneworld model from cosmic structure sizes

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sourav; Kousvos, Stefanos R.

    2017-11-01

    We consider the phantom braneworld model in the context of the maximum turnaround radius, R_TA,max, of a stable, spherical cosmic structure with a given mass. The maximum turnaround radius is the point where the attraction due to the central inhomogeneity gets balanced with the repulsion of the ambient dark energy, beyond which a structure cannot hold any mass, thereby giving the maximum upper bound on the size of a stable structure. In this work we derive an analytical expression for R_TA,max in this model using cosmological scalar perturbation theory. Using this we numerically constrain the parameter space, including a bulk cosmological constant and the Weyl fluid, from the mass versus observed size data for some nearby, nonvirial cosmic structures. We use different values of the matter density parameter Ωm, both larger and smaller than that of Λ cold dark matter, as the input in our analysis. We show in particular that (a) with a vanishing bulk cosmological constant the predicted upper bound is always greater than what is actually observed, and a similar conclusion holds if the bulk cosmological constant is negative; (b) if it is positive, the predicted maximum size can go considerably below what is actually observed, and owing to the involved nature of the field equations, this leads to interesting constraints not only on the bulk cosmological constant itself but on the whole parameter space of the theory.
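
    For orientation, the analogous ΛCDM benchmark against which such predictions are compared has the closed form R_TA,max = (3GM/(Λc^2))^(1/3). The short sketch below evaluates it for a few structure masses; the numerical value of Λ and the constants are assumed inputs, and this is not the braneworld expression derived in the paper.

    ```python
    import math

    G = 6.674e-11        # m^3 kg^-1 s^-2
    c = 2.998e8          # m s^-1
    LAM = 1.1e-52        # m^-2, assumed value of the cosmological constant
    M_SUN = 1.989e30     # kg
    MPC = 3.086e22       # m

    def r_ta_max(mass_kg, lam=LAM):
        """LambdaCDM maximum turnaround radius: GM/R^2 balances (lam c^2 / 3) R."""
        return (3.0 * G * mass_kg / (lam * c ** 2)) ** (1.0 / 3.0)

    for m in (1e12, 1e14, 1e15):   # galaxy- to cluster-scale masses in solar masses
        print(f"M = {m:.0e} M_sun -> R_TA,max ~ {r_ta_max(m * M_SUN) / MPC:.1f} Mpc")
    ```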

  14. A search theory model of patch-to-patch forager movement with application to pollinator-mediated gene flow.

    PubMed

    Hoyle, Martin; Cresswell, James E

    2007-09-07

    We present a spatially implicit analytical model of forager movement, designed to address a simple scenario common in nature. We assume minimal depression of patch resources, and discrete foraging bouts, during which foragers fill to capacity. The model is particularly suitable for foragers that search systematically, foragers that deplete resources in a patch only incrementally, and for sit-and-wait foragers, where harvesting does not affect the rate of arrival of forage. Drawing on the theory of job search from microeconomics, we estimate the expected number of patches visited as a function of just two variables: the coefficient of variation of the rate of energy gain among patches, and the ratio of the expected time exploiting a randomly chosen patch and the expected time travelling between patches. We then consider the forager as a pollinator and apply our model to estimate gene flow. Under model assumptions, an upper bound for animal-mediated gene flow between natural plant populations is approximately proportional to the probability that the animal rejects a plant population. In addition, an upper bound for animal-mediated gene flow in any animal-pollinated agricultural crop from a genetically modified (GM) to a non-GM field is approximately proportional to the proportion of fields that are GM and the probability that the animal rejects a field.

  15. Spherically symmetric vacuum in covariant F(T) = T + (α/2)T^2 + O(T^γ) gravity theory

    NASA Astrophysics Data System (ADS)

    DeBenedictis, Andrew; Ilijić, Saša

    2016-12-01

    Recently, a fully covariant version of the theory of F(T) torsion gravity has been introduced by M. Krššák and E. Saridakis [Classical Quantum Gravity 33, 115009 (2016)]. In covariant F(T) gravity, the Schwarzschild solution is not a vacuum solution for F(T) ≠ T, and therefore determining the spherically symmetric vacuum is an important open problem. Within the covariant framework, we perturbatively solve the spherically symmetric vacuum gravitational equations around the Schwarzschild solution for the scenario with F(T) = T + (α/2)T^2, representing the dominant terms in theories governed by Lagrangians analytic in the torsion scalar. From this, we compute the perihelion shift correction to solar system planetary orbits as well as perturbative gravitational effects near neutron stars. This allows us to set an upper bound on the magnitude of the coupling constant, α, which governs deviations from general relativity. We find the bound on this nonlinear torsion coupling constant by specifically considering the uncertainty in the perihelion shift of Mercury. We also analyze a bound from a similar comparison with the periastron orbit of the binary pulsar PSR J0045-7319 as an independent check for consistency. Setting bounds on the dominant nonlinear coupling is important in determining if other effects in the Solar System or greater universe could be attributable to nonlinear torsion.

  16. Electric Dipole Moment of the Neutron from 2+1 Flavor Lattice QCD.

    PubMed

    Guo, F-K; Horsley, R; Meissner, U-G; Nakamura, Y; Perlt, H; Rakow, P E L; Schierholz, G; Schiller, A; Zanotti, J M

    2015-08-07

    We compute the electric dipole moment d_n of the neutron from a fully dynamical simulation of lattice QCD with 2+1 flavors of clover fermions and nonvanishing θ term. The latter is rotated into a pseudoscalar density in the fermionic action using the axial anomaly. To make the action real, the vacuum angle θ is taken to be purely imaginary. The physical value of d_n is obtained by analytic continuation. We find d_n = -3.9(2)(9)×10^-16 θ e cm, which, when combined with the experimental limit on d_n, leads to the upper bound |θ| ≲ 7.4×10^-11.

  17. Characterizing entanglement with global and marginal entropic measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adesso, Gerardo; Illuminati, Fabrizio; De Siena, Silvio

    2003-12-01

    We qualify the entanglement of arbitrary mixed states of bipartite quantum systems by comparing global and marginal mixednesses quantified by different entropic measures. For systems of two qubits we discriminate the class of maximally entangled states with fixed marginal mixednesses, and determine an analytical upper bound relating the entanglement of formation to the marginal linear entropies. This result partially generalizes to mixed states the quantification of entanglement with marginal mixednesses holding for pure states. We identify a class of entangled states that, for fixed marginals, are globally more mixed than product states when measured by the linear entropy. Such states cannot be discriminated by the majorization criterion.

  18. Analytical pricing of geometric Asian power options on an underlying driven by a mixed fractional Brownian motion

    NASA Astrophysics Data System (ADS)

    Zhang, Wei-Guo; Li, Zhe; Liu, Yong-Jun

    2018-01-01

    In this paper, we study the pricing problem of the continuously monitored fixed and floating strike geometric Asian power options in a mixed fractional Brownian motion environment. First, we derive both closed-form solutions and mixed fractional partial differential equations for fixed and floating strike geometric Asian power options based on delta-hedging strategy and partial differential equation method. Second, we present the lower and upper bounds of the prices of fixed and floating strike geometric Asian power options under the assumption that both risk-free interest rate and volatility are interval numbers. Finally, numerical studies are performed to illustrate the performance of our proposed pricing model.
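
    As a point of reference for the closed-form results mentioned above, the plain geometric-Brownian-motion case (ordinary Brownian motion, power 1) admits a well-known closed form for the continuously monitored fixed-strike geometric Asian call, since the log of the geometric average is normal with mean ln S0 + (r - σ²/2)T/2 and variance σ²T/3. The sketch below is that Black-Scholes baseline plus a Monte Carlo sanity check; it is not the mixed fractional Brownian motion model of the paper, and the parameter values are assumptions.

    ```python
    import numpy as np
    from math import exp, log, sqrt
    from scipy.stats import norm

    def geometric_asian_call(S0, K, r, sigma, T):
        """Continuously monitored fixed-strike geometric Asian call under plain GBM."""
        m = log(S0) + 0.5 * (r - 0.5 * sigma ** 2) * T   # mean of ln(geometric average)
        v = sigma ** 2 * T / 3.0                          # variance of ln(geometric average)
        d2 = (m - log(K)) / sqrt(v)
        d1 = d2 + sqrt(v)
        return exp(-r * T) * (exp(m + 0.5 * v) * norm.cdf(d1) - K * norm.cdf(d2))

    def mc_check(S0, K, r, sigma, T, n_steps=500, n_paths=20_000, seed=0):
        """Crude Monte Carlo on a discrete grid approximating the continuous average."""
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        z = rng.standard_normal((n_paths, n_steps))
        log_s = log(S0) + np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * sqrt(dt) * z, axis=1)
        G = np.exp(log_s.mean(axis=1))                    # geometric average = exp(mean of logs)
        return exp(-r * T) * np.maximum(G - K, 0.0).mean()

    print(geometric_asian_call(100, 100, 0.05, 0.2, 1.0))  # roughly 5.5
    print(mc_check(100, 100, 0.05, 0.2, 1.0))              # should land nearby
    ```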

  19. An Upper Bound on High Speed Satellite Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on high-speed satellite collision probability, Pc, have been investigated. Previous methods assume that an individual position error covariance matrix is available for each object, the two matrices being combined into a single relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum Pc. If error covariance information for only one of the two objects is available, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but potentially useful Pc upper bound.
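
    For context, the quantity being bounded is the usual short-encounter collision probability: the integral of a 2-D Gaussian for the relative position (projected onto the encounter plane) over a disc of the combined hard-body radius. The sketch below evaluates that integral by a simple polar-grid quadrature with assumed illustrative numbers; it is the standard Pc computation, not the one-covariance upper-bound construction of the report.

    ```python
    import numpy as np

    def collision_probability_2d(miss, cov, hbr, n_r=400, n_t=400):
        """Integrate the Gaussian N(miss, cov) over the disc of radius hbr at the origin."""
        miss = np.asarray(miss, float)
        cov = np.asarray(cov, float)
        cov_inv = np.linalg.inv(cov)
        norm_const = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
        r = (np.arange(n_r) + 0.5) * hbr / n_r              # midpoint rule in radius
        t = (np.arange(n_t) + 0.5) * 2.0 * np.pi / n_t      # and in angle
        R, T = np.meshgrid(r, t, indexing="ij")
        X = R * np.cos(T) - miss[0]
        Y = R * np.sin(T) - miss[1]
        quad = cov_inv[0, 0] * X**2 + 2 * cov_inv[0, 1] * X * Y + cov_inv[1, 1] * Y**2
        integrand = norm_const * np.exp(-0.5 * quad) * R    # R is the polar Jacobian
        return integrand.sum() * (hbr / n_r) * (2.0 * np.pi / n_t)

    # assumed illustrative numbers: 20 m combined radius, ~1 km miss, anisotropic covariance
    cov = [[200.0**2, 0.3 * 200 * 600], [0.3 * 200 * 600, 600.0**2]]   # metres^2
    print(collision_probability_2d([1000.0, 200.0], cov, 20.0))
    ```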

  20. Tri-critical behavior of the Blume-Emery-Griffiths model on a Kagomé lattice: Effective-field theory and Rigorous bounds

    NASA Astrophysics Data System (ADS)

    Santos, Jander P.; Sá Barreto, F. C.

    2016-01-01

    Spin correlation identities for the Blume-Emery-Griffiths model on the Kagomé lattice are derived and, combined with rigorous correlation inequalities, lead to upper bounds on the critical temperature. From the spin correlation identities, the mean-field approximation and the effective-field approximation results for the magnetization, the critical frontiers and the tricritical points are obtained. The rigorous upper bounds on the critical temperature improve over the results of those effective-field-type theories.

  1. Bounds for the Z-spectral radius of nonnegative tensors.

    PubMed

    He, Jun; Liu, Yan-Min; Ke, Hua; Tian, Jun-Kang; Li, Xiang

    2016-01-01

    In this paper, we have proposed some new upper bounds for the largest Z-eigenvalue of an irreducible weakly symmetric and nonnegative tensor, which improve the known upper bounds obtained in Chang et al. (Linear Algebra Appl 438:4166-4182, 2013), Song and Qi (SIAM J Matrix Anal Appl 34:1581-1595, 2013), He and Huang (Appl Math Lett 38:110-114, 2014), Li et al. (J Comput Anal Appl 483:182-199, 2015), He (J Comput Anal Appl 20:1290-1301, 2016).

  2. Morphological representation of order-statistics filters.

    PubMed

    Charif-Chefchaouni, M; Schonfeld, D

    1995-01-01

    We propose a comprehensive theory for the morphological bounds on order-statistics filters (and their repeated iterations). Conditions are derived for morphological openings and closings to serve as bounds (lower and upper, respectively) on order-statistics filters (and their repeated iterations). Under various assumptions, morphological open-closings and close-openings are also shown to serve as (tighter) bounds (lower and upper, respectively) on iterations of order-statistics filters. Simulations of the application of the results presented to image restoration are finally provided.

  3. The upper bound of Pier Scour defined by selected laboratory and field data

    USGS Publications Warehouse

    Benedict, Stephen; Caldwell, Andral W.

    2015-01-01

    The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted several field investigations of pier scour in South Carolina (Benedict and Caldwell, 2006; Benedict and Caldwell, 2009) and used that data to develop envelope curves defining the upper bound of pier scour. To expand upon this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with pier-scour data from other sources and evaluate the upper bound of pier scour with this larger data set. To facilitate this analysis, a literature review was made to identify potential sources of published pier-scour data, and selected data were compiled into a digital spreadsheet consisting of approximately 570 laboratory and 1,880 field measurements. These data encompass a wide range of laboratory and field conditions and represent field data from 24 states within the United States and six other countries. This extensive database was used to define the upper bound of pier-scour depth with respect to pier width encompassing the laboratory and field data. Pier width is a primary variable that influences pier-scour depth (Laursen and Toch, 1956; Melville and Coleman, 2000; Mueller and Wagner, 2005, Ettema et al. 2011, Arneson et al. 2012) and therefore, was used as the primary explanatory variable in developing the upper-bound envelope curve. The envelope curve provides a simple but useful tool for assessing the potential maximum pier-scour depth for pier widths of about 30 feet or less.

  4. Bounds on the information rate of quantum-secret-sharing schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarvepalli, Pradeep

    An important metric of the performance of a quantum-secret-sharing scheme is its information rate. Beyond the fact that the information rate is upper-bounded by one, very little is known in terms of bounds on the information rate of quantum-secret-sharing schemes. Furthermore, not every scheme can be realized with rate one. In this paper we derive upper bounds for the information rates of quantum-secret-sharing schemes. We show that there exist quantum access structures on n players for which the information rate cannot be better than O((log_2 n)/n). These results are the quantum analogues of the bounds for classical-secret-sharing schemes proved by Csirmaz.

  5. The upper bound to the Relative Reporting Ratio—a measure of the impact of the violation of hidden assumptions underlying some disproportionality methods used in signal detection

    PubMed Central

    Van Holle, Lionel; Bauchau, Vincent

    2014-01-01

    Purpose For disproportionality measures based on the Relative Reporting Ratio (RRR) such as the Information Component (IC) and the Empirical Bayesian Geometrical Mean (EBGM), each product and event is assumed to represent a negligible fraction of the spontaneous report database (SRD). Here, we provide the tools for allowing signal detection experts to assess the consequence of the violation of this assumption on their specific SRD. Methods For each product–event pair (P–E), a worst-case scenario associated all the reported events-of-interest with the product of interest. The values of the RRR under this scenario were measured for different sets of stratification factors using the GlaxoSmithKline vaccines SRD. These values represent the upper bound that the RRR cannot exceed whatever the true strength of association. Results Depending on the choice of stratification factors, the RRR could not exceed an upper bound of 2 for up to 2.4% of the P–Es. For Engerix™, which accounts for 23.4% of all reports in the SRD, the RRR could not exceed an upper bound of 2 for up to 13.8% of pairs. For the P–E Rotarix™-Intussusception, the choice of stratification factors impacted the upper bound to the RRR: from 52.5 for an unstratified RRR to 2.0 for a fully stratified RRR. Conclusions The quantification of the upper bound can indicate whether measures such as EBGM, IC, or RRR can be used for SRDs in which products or events represent a non-negligible fraction of the entire SRD. In addition, at the level of the product or P–E, it can also highlight the detrimental impact of overstratification. © 2014 The Authors. Pharmacoepidemiology and Drug Safety published by John Wiley & Sons, Ltd. PMID:24395594

  6. Bounds of memory strength for power-law series.

    PubMed

    Guo, Fangjian; Yang, Dan; Yang, Zimo; Zhao, Zhi-Dan; Zhou, Tao

    2017-05-01

    Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents α. By permuting the independently drawn samples from a power-law distribution, we present nontrivial bounds on the memory strength (first-order autocorrelation) as a function of α, which are markedly different from the ordinary ±1 bounds for Gaussian or uniform distributions. When 1<α≤3, as α grows bigger, the upper bound increases from 0 to +1 while the lower bound remains 0; when α>3, the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on the posts on Twitter, ratings of MovieLens, calling records of the mobile operator Orange, and the browsing behavior of Taobao, we find that empirical power-law-distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks and challenge the validity of measures such as autocorrelation and assortativity coefficient in heterogeneous systems.
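
    A quick numerical probe of the effect described above (not the authors' analytical bounds): draw i.i.d. samples with a power-law tail, then compare the lag-1 autocorrelation of a random shuffle, of the sorted arrangement (which pushes the autocorrelation towards its permutation maximum), and of a low-high zig-zag arrangement (which pushes it down). The mapping from alpha to numpy's Pareto parameter and the particular arrangements are assumptions of this sketch.

    ```python
    import numpy as np

    def lag1_autocorr(x):
        d = np.asarray(x, float) - np.mean(x)
        return np.dot(d[:-1], d[1:]) / np.dot(d, d)

    def arrangement_probe(samples, rng):
        """Return (zig-zag, shuffled, sorted) lag-1 autocorrelations of one sample multiset."""
        s = np.sort(samples)
        half = (len(s) + 1) // 2
        zigzag = np.empty_like(s)
        zigzag[0::2] = s[:half]            # small values on even slots
        zigzag[1::2] = s[half:][::-1]      # large values on odd slots
        return lag1_autocorr(zigzag), lag1_autocorr(rng.permutation(s)), lag1_autocorr(s)

    rng = np.random.default_rng(42)
    n = 20_000
    for alpha in (1.5, 2.5, 3.5, 5.0):     # pdf ~ x^(-alpha); numpy's Pareto shape = alpha - 1
        x = 1.0 + rng.pareto(alpha - 1.0, size=n)
        lo, mid, up = arrangement_probe(x, rng)
        print(f"alpha = {alpha}: zig-zag ~ {lo:+.3f}, shuffled ~ {mid:+.3f}, sorted ~ {up:+.3f}")
    ```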

  7. Bounds of memory strength for power-law series

    NASA Astrophysics Data System (ADS)

    Guo, Fangjian; Yang, Dan; Yang, Zimo; Zhao, Zhi-Dan; Zhou, Tao

    2017-05-01

    Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents α . By permuting the independently drawn samples from a power-law distribution, we present nontrivial bounds on the memory strength (first-order autocorrelation) as a function of α , which are markedly different from the ordinary ±1 bounds for Gaussian or uniform distributions. When 1 <α ≤3 , as α grows bigger, the upper bound increases from 0 to +1 while the lower bound remains 0; when α >3 , the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on the posts on Twitter, ratings of MovieLens, calling records of the mobile operator Orange, and the browsing behavior of Taobao, we find that empirical power-law-distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks and challenge the validity of measures such as autocorrelation and assortativity coefficient in heterogeneous systems.

  8. Bound of dissipation on a plane Couette dynamo

    NASA Astrophysics Data System (ADS)

    Alboussière, Thierry

    2009-06-01

    Variational turbulence is among the few approaches providing rigorous results in turbulence. In addition, it addresses a question of direct practical interest, namely, the rate of energy dissipation. Unfortunately, only an upper bound is obtained as a larger functional space than the space of solutions to the Navier-Stokes equations is searched. Yet, in some cases, this upper bound is in good agreement with experimental results in terms of order of magnitude and power law of the imposed Reynolds number. In this paper, the variational approach to turbulence is extended to the case of dynamo action and an upper bound is obtained for the global dissipation rate (viscous and Ohmic). A simple plane Couette flow is investigated. For low magnetic Prandtl number Pm fluids, the upper bound of energy dissipation is that of classical turbulence (i.e., proportional to the cubic power of the shear velocity) for magnetic Reynolds numbers below Pm^-1 and follows a steeper evolution for magnetic Reynolds numbers above Pm^-1 (i.e., proportional to the shear velocity to the power of 4) in the case of electrically insulating walls. However, the effect of wall conductance is crucial: for a given value of wall conductance, there is a value for the magnetic Reynolds number above which energy dissipation cannot be bounded. This limiting magnetic Reynolds number is inversely proportional to the square root of the conductance of the wall. Implications in terms of energy dissipation in experimental and natural dynamos are discussed.

  9. Limitations of the background field method applied to Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Nobili, Camilla; Otto, Felix

    2017-09-01

    We consider Rayleigh-Bénard convection as modeled by the Boussinesq equations, in the case of infinite Prandtl numbers and with no-slip boundary condition. There is a broad interest in bounds of the upwards heat flux, as given by the Nusselt number Nu, in terms of the forcing via the imposed temperature difference, as given by the Rayleigh number in the turbulent regime Ra ≫ 1. In several studies, the background field method applied to the temperature field has been used to provide upper bounds on Nu in terms of Ra. In these applications, the background field method comes in the form of a variational problem where one optimizes a stratified temperature profile subject to a certain stability condition; the method is believed to capture the marginal stability of the boundary layer. The best available upper bound via this method is Nu ≲ Ra^(1/3) (ln Ra)^(1/15); it proceeds via the construction of a stable temperature background profile that increases logarithmically in the bulk. In this paper, we show that the background temperature field method cannot provide a tighter upper bound in terms of the power of the logarithm. However, by another method, one does obtain the tighter upper bound Nu ≲ Ra^(1/3) (ln ln Ra)^(1/3), so that the result of this paper implies that the background temperature field method is unphysical in the sense that it cannot provide the optimal bound.

  10. Bulk diffusion in a kinetically constrained lattice gas

    NASA Astrophysics Data System (ADS)

    Arita, Chikashi; Krapivsky, P. L.; Mallick, Kirone

    2018-03-01

    In the hydrodynamic regime, the evolution of a stochastic lattice gas with symmetric hopping rules is described by a diffusion equation with density-dependent diffusion coefficient encapsulating all microscopic details of the dynamics. This diffusion coefficient is, in principle, determined by a Green-Kubo formula. In practice, even when the equilibrium properties of a lattice gas are analytically known, the diffusion coefficient cannot be computed except when a lattice gas additionally satisfies the gradient condition. We develop a procedure to systematically obtain analytical approximations for the diffusion coefficient for non-gradient lattice gases with known equilibrium. The method relies on a variational formula found by Varadhan and Spohn which is a version of the Green-Kubo formula particularly suitable for diffusive lattice gases. Restricting the variational formula to finite-dimensional sub-spaces allows one to perform the minimization and gives upper bounds for the diffusion coefficient. We apply this approach to a kinetically constrained non-gradient lattice gas in two dimensions, viz. to the Kob-Andersen model on the square lattice.

  11. Upper-Bound Estimates Of SEU in CMOS

    NASA Technical Reports Server (NTRS)

    Edmonds, Larry D.

    1990-01-01

    Theory of single-event upsets (SEU) (changes in logic state caused by energetic charged subatomic particles) in complementary metal oxide/semiconductor (CMOS) logic devices extended to provide upper-bound estimates of rates of SEU when limited experimental information available and configuration and dimensions of SEU-sensitive regions of devices unknown. Based partly on chord-length-distribution method.

  12. An upper bound on the second order asymptotic expansion for the quantum communication cost of state redistribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Datta, Nilanjana, E-mail: n.datta@statslab.cam.ac.uk; Hsieh, Min-Hsiu, E-mail: Min-Hsiu.Hsieh@uts.edu.au; Oppenheim, Jonathan, E-mail: j.oppenheim@ucl.ac.uk

    State redistribution is the protocol in which given an arbitrary tripartite quantum state, with two of the subsystems initially being with Alice and one being with Bob, the goal is for Alice to send one of her subsystems to Bob, possibly with the help of prior shared entanglement. We derive an upper bound on the second order asymptotic expansion for the quantum communication cost of achieving state redistribution with a given finite accuracy. In proving our result, we also obtain an upper bound on the quantum communication cost of this protocol in the one-shot setting, by using the protocol of coherent state merging as a primitive.

  13. Solving Open Job-Shop Scheduling Problems by SAT Encoding

    NASA Astrophysics Data System (ADS)

    Koshimura, Miyuki; Nabeshima, Hidetomo; Fujita, Hiroshi; Hasegawa, Ryuzo

    This paper tries to solve open Job-Shop Scheduling Problems (JSSP) by translating them into Boolean Satisfiability Testing Problems (SAT). The encoding method is essentially the same as the one proposed by Crawford and Baker. The open problems are ABZ8, ABZ9, YN1, YN2, YN3, and YN4. We proved that the best known upper bounds 678 of ABZ9 and 884 of YN1 are indeed optimal. We also improved the upper bound of YN2 and lower bounds of ABZ8, YN2, YN3 and YN4.

  14. Upper and lower bounds for semi-Markov reliability models of reconfigurable systems

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.

  15. Determining Normal-Distribution Tolerance Bounds Graphically

    NASA Technical Reports Server (NTRS)

    Mezzacappa, M. A.

    1983-01-01

    A graphical method requires calculations and a table lookup. The distribution is established from only three points: the upper and lower confidence bounds on the mean and the lower confidence bound on the standard deviation. The method requires only a few calculations with simple equations. The graphical procedure establishes a best-fit line for the measured data and bounds for the selected confidence level and any distribution percentile.

  16. An evaluation of risk estimation procedures for mixtures of carcinogens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, J.S.; Chen, J.J.

    1999-12-01

    The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on the combination of information from several available single-compound studies. The current practice of directly summing the upper-bound risk estimates of individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of individual carcinogens. The Gaylor-Chen procedure was derived based on an underlying assumption of normality for the distributions of individual risk estimates. In this paper the authors evaluated the Gaylor-Chen approach in terms of the coverages of the upper confidence limits on the true risks of individual carcinogens. In general, if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, then the Gaylor-Chen approach should perform well. However, the Gaylor-Chen approach can be conservative or anti-conservative if some or all individual upper confidence limit estimates are conservative or anti-conservative.
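
    As a worked illustration of the kind of combination being evaluated (under my reading of the Gaylor-Chen procedure: sum the central risk estimates and combine the individual margins in root-sum-square fashion), the sketch below contrasts that estimate with the conservative direct sum of upper confidence limits. The numerical inputs are assumed.

    ```python
    import math

    def combine_upper_bounds(central, ucl):
        """Upper bound on a mixture's total risk from per-chemical central estimates
        and upper confidence limits; returns (RSS-style combination, direct sum of UCLs)."""
        margins = [u - c for c, u in zip(central, ucl)]
        rss_style = sum(central) + math.sqrt(sum(m * m for m in margins))
        return rss_style, sum(ucl)

    central = [1.2e-6, 4.0e-7, 2.5e-6]   # assumed central (e.g. maximum-likelihood) risk estimates
    ucl = [3.0e-6, 1.5e-6, 6.0e-6]       # assumed 95% upper confidence limits
    combined, naive = combine_upper_bounds(central, ucl)
    print(f"combined upper bound ~ {combined:.2e}, direct sum of UCLs ~ {naive:.2e}")
    ```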

  17. The upper bound of abutment scour defined by selected laboratory and field data

    USGS Publications Warehouse

    Benedict, Stephen; Caldwell, Andral W.

    2015-01-01

    The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted a field investigation of abutment scour in South Carolina and used that data to develop envelope curves defining the upper bound of abutment scour. To expand upon this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with abutment-scour data from other sources and evaluate the upper bound of abutment scour with the larger data set. To facilitate this analysis, a literature review was made to identify potential sources of published abutment-scour data, and selected data, consisting of 446 laboratory and 331 field measurements, were compiled for the analysis. These data encompassed a wide range of laboratory and field conditions and represent field data from 6 states within the United States. The data set was used to evaluate the South Carolina abutment-scour envelope curves. Additionally, the data were used to evaluate a dimensionless abutment-scour envelope curve developed by Melville (1992), highlighting the distinct difference in the upper bound for laboratory and field data. The envelope curves evaluated in this investigation provide simple but useful tools for assessing the potential maximum abutment-scour depth in the field setting.

  18. Parametrization of local CR automorphisms by finite jets and applications

    NASA Astrophysics Data System (ADS)

    Lamel, Bernhard; Mir, Nordine

    2007-04-01

    For any real-analytic hypersurface M ⊂ C^N which does not contain any complex-analytic subvariety of positive dimension, we show that for every point p ∈ M the local real-analytic CR automorphisms of M fixing p can be parametrized real-analytically by their ℓ_p jets at p. As a direct application, we derive a Lie group structure for the topological group Aut(M, p). Furthermore, we also show that the order ℓ_p of the jet space in which the group Aut(M, p) embeds can be chosen to depend upper-semicontinuously on p. As a first consequence, it follows that given any compact real-analytic hypersurface M in C^N, there exists an integer k depending only on M such that for every point p ∈ M, germs at p of CR diffeomorphisms mapping M into another real-analytic hypersurface in C^N are uniquely determined by their k-jet at that point. Another consequence is the following boundary version of H. Cartan's uniqueness theorem: given any bounded domain Ω with smooth real-analytic boundary, there exists an integer k depending only on ∂Ω such that if H: Ω → Ω is a proper holomorphic mapping extending smoothly up to ∂Ω near some point p ∈ ∂Ω with the same k-jet at p as that of the identity mapping, then necessarily H = Id. Our parametrization theorem also holds for the stability group of any essentially finite minimal real-analytic CR manifold of arbitrary codimension. One of the new main tools developed in the paper, which may be of independent interest, is a parametrization theorem for invertible solutions of a certain kind of singular analytic equations, which roughly speaking consists of inverting certain families of parametrized maps with singularities.

  19. Quantitative Characterization of the Microstructure and Transport Properties of Biopolymer Networks

    PubMed Central

    Jiao, Yang; Torquato, Salvatore

    2012-01-01

    Biopolymer networks are of fundamental importance to many biological processes in normal and tumorous tissues. In this paper, we employ the panoply of theoretical and simulation techniques developed for characterizing heterogeneous materials to quantify the microstructure and effective diffusive transport properties (diffusion coefficient De and mean survival time τ) of collagen type I networks at various collagen concentrations. In particular, we compute the pore-size probability density function P(δ) for the networks and present a variety of analytical estimates of the effective diffusion coefficient De for finite-sized diffusing particles, including the low-density approximation, the Ogston approximation, and the Torquato approximation. The Hashin-Shtrikman upper bound on the effective diffusion coefficient De and the pore-size lower bound on the mean survival time τ are used as benchmarks to test our analytical approximations and numerical results. Moreover, we generalize the efficient first-passage-time techniques for Brownian-motion simulations in suspensions of spheres to the case of fiber networks and compute the associated effective diffusion coefficient De as well as the mean survival time τ, which is related to nuclear magnetic resonance (NMR) relaxation times. Our numerical results for De are in excellent agreement with analytical results for simple network microstructures, such as periodic arrays of parallel cylinders. Specifically, the Torquato approximation provides the most accurate estimates of De for all collagen concentrations among all of the analytical approximations we consider. We formulate a universal curve for τ for the networks at different collagen concentrations, extending the work of Yeong and Torquato [J. Chem. Phys. 106, 8814 (1997)]. We apply rigorous cross-property relations to estimate the effective bulk modulus of collagen networks from a knowledge of the effective diffusion coefficient computed here. The use of cross-property relations to link other physical properties to the transport properties of collagen networks is also discussed. PMID:22683739

  20. Energy and criticality in random Boolean networks

    NASA Astrophysics Data System (ADS)

    Andrecut, M.; Kauffman, S. A.

    2008-06-01

    The central issue of the research on the Random Boolean Networks (RBNs) model is the characterization of the critical transition between ordered and chaotic phases. Here, we discuss an approach based on the ‘energy’ associated with the unsatisfiability of the Boolean functions in the RBNs model, which provides an upper bound estimation for the energy used in computation. We show that in the ordered phase the RBNs are in a ‘dissipative’ regime, performing mostly ‘downhill’ moves on the ‘energy’ landscape. Also, we show that in the disordered phase the RBNs have to ‘hillclimb’ on the ‘energy’ landscape in order to perform computation. The analytical results, obtained using Derrida's approximation method, are in complete agreement with numerical simulations.

  1. Optimal digital dynamical decoupling for general decoherence via Walsh modulation

    NASA Astrophysics Data System (ADS)

    Qi, Haoyu; Dowling, Jonathan P.; Viola, Lorenza

    2017-11-01

    We provide a general framework for constructing digital dynamical decoupling sequences based on Walsh modulation—applicable to arbitrary qubit decoherence scenarios. By establishing equivalence between decoupling design based on Walsh functions and on concatenated projections, we identify a family of optimal Walsh sequences, which can be exponentially more efficient, in terms of the required total pulse number, for fixed cancellation order, than known digital sequences based on concatenated design. Optimal sequences for a given cancellation order are highly non-unique—their performance depending sensitively on the control path. We provide an analytic upper bound to the achievable decoupling error and show how sequences within the optimal Walsh family can substantially outperform concatenated decoupling in principle, while respecting realistic timing constraints.
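
    To make the "digital" aspect concrete, the sketch below generates Walsh functions as sequency-ordered rows of a Sylvester Hadamard matrix and reads off the slots after which the modulation changes sign, i.e. where a π pulse would be applied. This only illustrates Walsh modulation itself, not the optimal sequence family identified in the paper.

    ```python
    import numpy as np

    def walsh_functions(m):
        """Rows of the 2^m x 2^m Sylvester Hadamard matrix, reordered by sequency
        (number of sign changes); row k is the Walsh modulation on 2^m time slots."""
        H = np.array([[1]])
        for _ in range(m):
            H = np.block([[H, H], [H, -H]])
        sequency = (np.diff(H, axis=1) != 0).sum(axis=1)
        return H[np.argsort(sequency)]

    def pulse_slots(row):
        """Indices of the slot boundaries where the sign flips (pi-pulse locations)."""
        return (np.nonzero(np.diff(row) != 0)[0] + 1).tolist()

    W = walsh_functions(3)                    # 8 time slots
    for k, row in enumerate(W):
        print(k, row, "pulses after slots:", pulse_slots(row))
    # k = 1 reproduces a single central echo pulse; the highest-sequency row flips every slot.
    ```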

  2. Scalar field configurations supported by charged compact reflecting stars in a curved spacetime

    NASA Astrophysics Data System (ADS)

    Peng, Yan

    2018-05-01

    We study the system of static scalar fields coupled to charged compact reflecting stars through both analytical and numerical methods. We enclose the star in a box and our solutions are related to cases without box boundaries when putting the box far away from the star. We provide lower and upper bounds for the radius of the scalar hairy compact reflecting star. We obtain numerical scalar hairy star solutions satisfying boundary conditions and find that the radius of the hairy star in a box is continuous in a range, which is very different from cases without box boundaries where the radius is discrete in the range. We also examine effects of the star charge and mass on the largest radius.

  3. Symmetry Parameter Constraints from a Lower Bound on Neutron-matter Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tews, Ingo; Lattimer, James M.; Ohnishi, Akira

    We propose the existence of a lower bound on the energy of pure neutron matter (PNM) on the basis of unitary-gas considerations. We discuss its justification from experimental studies of cold atoms as well as from theoretical studies of neutron matter. We demonstrate that this bound results in limits to the density-dependent symmetry energy, which is the difference between the energies of symmetric nuclear matter and PNM. In particular, this bound leads to a lower limit to the volume symmetry energy parameter S_0. In addition, for assumed values of S_0 above this minimum, this bound implies both upper and lower limits to the symmetry energy slope parameter L, which describes the lowest-order density dependence of the symmetry energy. A lower bound on neutron-matter incompressibility is also obtained. These bounds are found to be consistent with both recent calculations of the energies of PNM and constraints from nuclear experiments. Our results are significant because several equations of state that are currently used in astrophysical simulations of supernovae and neutron star mergers, as well as in nuclear physics simulations of heavy-ion collisions, have symmetry energy parameters that violate these bounds. Furthermore, below the nuclear saturation density, the bound on neutron-matter energies leads to a lower limit to the density-dependent symmetry energy, which leads to upper limits to the nuclear surface symmetry parameter and the neutron-star crust–core boundary. We also obtain a lower limit to the neutron-skin thicknesses of neutron-rich nuclei. Above the nuclear saturation density, the bound on neutron-matter energies also leads to an upper limit to the symmetry energy, with implications for neutron-star cooling via the direct Urca process.

  4. Computational experience with a parallel algorithm for tetrangle inequality bound smoothing.

    PubMed

    Rajan, K; Deo, N

    1999-09-01

    Determining molecular structure from interatomic distances is an important and challenging problem. Given a molecule with n atoms, lower and upper bounds on interatomic distances can usually be obtained only for a small subset of the n(n-1)/2 atom pairs, using NMR. Given the bounds so obtained on the distances between some of the atom pairs, it is often useful to compute tighter bounds on all the n(n-1)/2 pairwise distances. This process is referred to as bound smoothing. The initial lower and upper bounds for the pairwise distances not measured are usually assumed to be 0 and infinity. One method for bound smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality--the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. For every quadruple of atoms, each pass of the tetrangle inequality bound smoothing procedure finds upper and lower limits on each of the six distances in the quadruple. Applying the tetrangle inequalities to each of the C(n,4) quadruples requires O(n^4) time. Here, we propose a parallel algorithm for bound smoothing employing the tetrangle inequality. Each pass of our algorithm requires O(n^3 log n) time on a CREW PRAM (Concurrent Read Exclusive Write Parallel Random Access Machine) with O(n/log n) processors. An implementation of this parallel algorithm on the Intel Paragon XP/S and its performance are also discussed.
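
    For reference, the simpler triangle-inequality pass mentioned above as the starting point for bound smoothing can be written as a Floyd-Warshall-style sweep over upper bounds, with the corresponding inverse-triangle update of the lower bounds. This sketch is that triangle pass only (iterate it until no bound changes), not the parallel tetrangle algorithm of the paper; the example bounds are assumed.

    ```python
    import numpy as np

    def triangle_bound_smoothing(L, U):
        """One pass of triangle-inequality smoothing on symmetric lower/upper bound
        matrices (zero diagonal, unmeasured pairs start at lower 0, upper +inf)."""
        L, U = L.copy(), U.copy()
        n = len(U)
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if U[i, j] > U[i, k] + U[k, j]:
                        U[i, j] = U[i, k] + U[k, j]                 # triangle upper bound
                    lo = max(L[i, k] - U[k, j], L[k, j] - U[i, k])  # inverse-triangle lower bound
                    if L[i, j] < lo:
                        L[i, j] = lo
        return L, U

    inf = np.inf
    U = np.array([[0, 1.6, inf, inf],
                  [1.6, 0, 1.6, inf],
                  [inf, 1.6, 0, 1.6],
                  [inf, inf, 1.6, 0]], float)
    L = np.array([[0, 1.4, 0, 0],
                  [1.4, 0, 1.4, 0],
                  [0, 1.4, 0, 1.4],
                  [0, 0, 1.4, 0]], float)
    Ls, Us = triangle_bound_smoothing(L, U)
    print(Us)   # e.g. the (1,4) upper bound tightens from inf to 3 * 1.6 = 4.8
    print(Ls)
    ```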

  5. Symmetry Parameter Constraints from a Lower Bound on Neutron-matter Energy

    NASA Astrophysics Data System (ADS)

    Tews, Ingo; Lattimer, James M.; Ohnishi, Akira; Kolomeitsev, Evgeni E.

    2017-10-01

    We propose the existence of a lower bound on the energy of pure neutron matter (PNM) on the basis of unitary-gas considerations. We discuss its justification from experimental studies of cold atoms as well as from theoretical studies of neutron matter. We demonstrate that this bound results in limits to the density-dependent symmetry energy, which is the difference between the energies of symmetric nuclear matter and PNM. In particular, this bound leads to a lower limit to the volume symmetry energy parameter S_0. In addition, for assumed values of S_0 above this minimum, this bound implies both upper and lower limits to the symmetry energy slope parameter L, which describes the lowest-order density dependence of the symmetry energy. A lower bound on neutron-matter incompressibility is also obtained. These bounds are found to be consistent with both recent calculations of the energies of PNM and constraints from nuclear experiments. Our results are significant because several equations of state that are currently used in astrophysical simulations of supernovae and neutron star mergers, as well as in nuclear physics simulations of heavy-ion collisions, have symmetry energy parameters that violate these bounds. Furthermore, below the nuclear saturation density, the bound on neutron-matter energies leads to a lower limit to the density-dependent symmetry energy, which leads to upper limits to the nuclear surface symmetry parameter and the neutron-star crust-core boundary. We also obtain a lower limit to the neutron-skin thicknesses of neutron-rich nuclei. Above the nuclear saturation density, the bound on neutron-matter energies also leads to an upper limit to the symmetry energy, with implications for neutron-star cooling via the direct Urca process.
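
    A minimal numerical sketch of the basic logic, with assumed inputs (Bertsch parameter ξ0 ≈ 0.37, saturation density 0.16 fm^-3, symmetric-matter energy -16 MeV per nucleon at saturation): the unitary-gas energy bounds the pure-neutron-matter energy from below, and subtracting the symmetric-matter energy turns this into a lower limit on S_0. The parameter treatment in the paper is more careful than this.

    ```python
    import math

    HBARC = 197.327   # MeV fm
    M_N = 939.565     # MeV, neutron mass
    XI_0 = 0.37       # assumed Bertsch parameter
    N_0 = 0.16        # fm^-3, assumed saturation density
    E_SNM = -16.0     # MeV, assumed symmetric-matter energy per nucleon at N_0

    def e_free_gas(n):
        """Energy per particle of a free neutron Fermi gas at number density n."""
        kf = (3.0 * math.pi ** 2 * n) ** (1.0 / 3.0)
        return 0.6 * (HBARC * kf) ** 2 / (2.0 * M_N)

    def e_unitary_gas(n):
        """Conjectured lower bound on the pure-neutron-matter energy per particle."""
        return XI_0 * e_free_gas(n)

    s0_min = e_unitary_gas(N_0) - E_SNM   # S(n_0) = E_PNM(n_0) - E_SNM(n_0) >= this
    print(f"E_UG(n_0) ~ {e_unitary_gas(N_0):.1f} MeV  ->  S_0 >= ~{s0_min:.1f} MeV")
    ```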

  6. Divergences and estimating tight bounds on Bayes error with applications to multivariate Gaussian copula and latent Gaussian copula

    NASA Astrophysics Data System (ADS)

    Thelen, Brian J.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.

    2017-04-01

    In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as the Chernoff, Bhattacharyya, and J-divergence bounds. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general what is desired is a methodology for generating relatively tight lower and upper bounds, and then an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has been around for a while but was recently made more prominent in research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data, and to help in this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on Bayes error. In this paper we present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, and often dramatically so. We also propose some simple but effective means for computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
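    For context, the following Python sketch (my own illustration, deliberately using the classical Bhattacharyya bound rather than the triangle divergence studied in the paper) shows the kind of Monte Carlo estimation of a divergence-based Bayes-error bound the abstract refers to: with equal priors, the Bayes error of a two-class problem is bounded above by one half of the Bhattacharyya coefficient, estimated here as an expectation under one of the densities. The two Gaussian classes are made-up example inputs.

      import numpy as np
      from scipy.stats import multivariate_normal

      # Two hypothetical Gaussian classes with equal priors.
      p = multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2))
      q = multivariate_normal(mean=[1.0, 0.5], cov=[[1.0, 0.3], [0.3, 1.0]])

      # Bhattacharyya coefficient BC = E_p[ sqrt(q(x)/p(x)) ], estimated by Monte Carlo.
      x = p.rvs(size=200_000, random_state=0)
      bc = np.mean(np.sqrt(q.pdf(x) / p.pdf(x)))

      # With priors 1/2 each, the Bayes error satisfies P_e <= 0.5 * BC.
      print("estimated Bhattacharyya coefficient:", bc)
      print("upper bound on Bayes error:", 0.5 * bc)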

  7. Upper and lower bounds of ground-motion variabilities: implication for source properties

    NASA Astrophysics Data System (ADS)

    Cotton, Fabrice; Reddy-Kotha, Sreeram; Bora, Sanjay; Bindi, Dino

    2017-04-01

    One of the key challenges of seismology is to analyse the physical factors that control earthquake and ground-motion variabilities. Such analysis is particularly important to calibrate physics-based simulations and seismic hazard estimations at high frequencies. Within the framework of ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-source records and modern GMPE analysis techniques make it possible to partition these residuals into between-event and within-event components. In particular, the between-event term quantifies all those repeatable source effects (e.g. related to stress-drop or kappa-source variability) which have not been accounted for by the magnitude-dependent term of the model. In this presentation, we first discuss the between-event variabilities computed both in the Fourier and response-spectra domains, using recent high-quality global accelerometric datasets (e.g. NGA-West2, RESORCE, KiK-net). These analyses lead to the assessment of upper bounds for the ground-motion variability. Then, we compare these upper bounds with lower bounds estimated by analysing seismic sequences which occurred on specific fault systems (e.g., located in Central Italy or in Japan). We show that the lower bounds of between-event variabilities are surprisingly large, which indicates a large variability of earthquake dynamic properties even within the same fault system. Finally, these upper and lower bounds of ground-shaking variability are discussed in terms of the variability of earthquake physical properties (e.g., stress drop and kappa-source).
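    A minimal sketch of the residual partition mentioned above (my own illustration, not the authors' code): given total GMPE residuals tagged with event identifiers, the between-event term for each event is taken here as the mean residual of that event, the within-event residuals are what remains, and their standard deviations give the usual tau and phi variability components. The event identifiers and residual values below are synthetic.

      import numpy as np
      import pandas as pd

      def partition_residuals(df):
          """Split total residuals into between-event and within-event parts.

          df: DataFrame with columns 'event_id' and 'residual' (ln units).
          Returns (tau, phi, df) where tau/phi are the between/within std devs.
          """
          # Between-event term: mean residual per event (a simple stand-in for
          # the random-effect estimate used in mixed-effects GMPE regressions).
          between = df.groupby("event_id")["residual"].transform("mean")
          within = df["residual"] - between
          df = df.assign(between_event=between, within_event=within)
          tau = df.groupby("event_id")["between_event"].first().std(ddof=1)
          phi = within.std(ddof=1)
          return tau, phi, df

      # Example with synthetic residuals for three hypothetical events.
      rng = np.random.default_rng(1)
      demo = pd.DataFrame({
          "event_id": np.repeat(["ev1", "ev2", "ev3"], 50),
          "residual": np.repeat(rng.normal(0.0, 0.3, 3), 50) + rng.normal(0.0, 0.5, 150),
      })
      tau, phi, _ = partition_residuals(demo)
      print(f"tau (between-event) = {tau:.2f}, phi (within-event) = {phi:.2f}")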

  8. Interrelationship between flexoelectricity and strain gradient elasticity in ferroelectric nanofilms: A phase field study

    NASA Astrophysics Data System (ADS)

    Jiang, Limei; Xu, Xiaofei; Zhou, Yichun

    2016-12-01

    With the development of integrated circuit technology and the decreasing device size, ferroelectric films used in nano ferroelectric devices become thinner and thinner. Along with the downscaling of the ferroelectric film, there is an increasing influence of two strain-gradient-related terms: one is the strain gradient elasticity and the other is flexoelectricity. To investigate the interrelationship between flexoelectricity and strain gradient elasticity and their combined effect on the domain structure in ferroelectric nanofilms, a phase field model that incorporates flexoelectricity and strain gradient elasticity into the ferroelectric domain evolution is developed based on Mindlin's theory of strain-gradient elasticity. The weak form is derived and implemented in a finite element formulation for numerically solving the model equations. The simulation results show that the upper bounds for the flexoelectric coefficients can be raised by increasing the strain gradient elasticity coefficients. While a large flexoelectricity that exceeds the upper bound can induce a transition from a ferroelectric state to a modulated/incommensurate state, a large enough strain gradient elasticity may lead to a conversion from an incommensurate state to a ferroelectric state. Strain gradient elasticity and flexoelectricity have entirely opposite effects on polarization. The observed interrelationship between the strain gradient elasticity and flexoelectricity is rationalized by an analytical solution of the proposed theoretical model. The model proposed in this paper could help us understand the mechanism of phenomena observed in ferroelectric nanofilms under complex electromechanical loads and provide some guidance for the practical application of ferroelectric nanofilms.

  9. Analyte detection using an active assay

    DOEpatents

    Morozov, Victor; Bailey, Charles L.; Evanskey, Melissa R.

    2010-11-02

    Analytes may be detected using an active assay by introducing an analyte solution containing a plurality of analytes to a lacquered membrane. The lacquered membrane may be a membrane having at least one surface treated with a layer of polymers. The lacquered membrane may be semi-permeable to nonanalytes. The layer of polymers may include cross-linked polymers. A plurality of probe molecules may be arrayed and immobilized on the lacquered membrane. An external force may be applied to the analyte solution to move the analytes towards the lacquered membrane. Movement may cause some or all of the analytes to bind to the lacquered membrane. In cases where probe molecules are present, some or all of the analytes may bind to probe molecules. The direction of the external force may be reversed to remove unbound or weakly bound analytes. Bound analytes may be detected using known detection methods.

  10. Complexity, Heuristic, and Search Analysis for the Games of Crossings and Epaminondas

    DTIC Science & Technology

    2014-03-27

    The report reviews research in Artificial Intelligence (Section 2.1) and why games are studied (Section 2.2); Section 2.3 discusses how games are played and solved. Abbreviations defined in the report include UCT (Upper Confidence Bounds applied to Trees), HUCT (Heuristic Guided UCT), LOA (Lines of Action), UCB (Upper Confidence Bound), and RAVE.

  11. The Estimation of the IRT Reliability Coefficient and Its Lower and Upper Bounds, with Comparisons to CTT Reliability Statistics

    ERIC Educational Resources Information Center

    Kim, Seonghoon; Feldt, Leonard S.

    2010-01-01

    The primary purpose of this study is to investigate the mathematical characteristics of the test reliability coefficient ρ_XX' as a function of item response theory (IRT) parameters and present the lower and upper bounds of the coefficient. Another purpose is to examine relative performances of the IRT reliability statistics and two…

  12. Stabilizing potentials in bound state analytic continuation methods for electronic resonances in polyatomic molecules

    DOE PAGES

    White, Alec F.; Head-Gordon, Martin; McCurdy, C. William

    2017-01-30

    The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the 2Πg shape resonance of N2−, which has been studied extensively by other methods. Both the choice of stabilizing potential and method of analytic continuation prove to be important to the accuracy of the results. We then conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.

  13. Stabilizing potentials in bound state analytic continuation methods for electronic resonances in polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Alec F.; Head-Gordon, Martin; McCurdy, C. William

    The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the 2Πg shape resonance of N2−, which has been studied extensively by other methods. Both the choice of stabilizing potential and method of analytic continuation prove to be important to the accuracy of the results. We then conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.

  14. Multivariate Lipschitz optimization: Survey and computational comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, P.; Gourdin, E.; Jaumard, B.

    1994-12-31

    Many methods have been proposed to minimize a multivariate Lipschitz function on a box. They pertain to three approaches: (i) reduction to the univariate case by projection (Pijavskii) or by using a space-filling curve (Strongin); (ii) construction and refinement of a single upper bounding function (Pijavskii, Mladineo, Mayne and Polak, Jaumard, Hermann and Ribault, Wood...); (iii) branch and bound with local upper bounding functions (Galperin, Pintér, Meewella and Mayne, the present authors). A survey is made, stressing similarities of algorithms, expressed when possible within a unified framework. Moreover, an extensive computational comparison is reported.
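    As a concrete illustration of the bounding-function idea surveyed above (a sketch of my own, restricted to the univariate Pijavskii construction rather than the multivariate methods compared in the paper): for a function f with a known Lipschitz constant L, every evaluated point x_i gives the bound f(x) >= f(x_i) - L|x - x_i|, and the pointwise maximum of these sawtooth functions is a global lower-bounding function whose minimizer suggests the next evaluation point. The test function and Lipschitz constant below are made-up examples.

      import numpy as np

      def pijavskii_minimize(f, a, b, lipschitz, n_iter=30):
          """Univariate Lipschitz global minimization with sawtooth lower bounds.

          f: objective on [a, b]; lipschitz: a valid Lipschitz constant for f.
          Returns (best_x, best_f, lower_bound_on_min).
          """
          xs = [a, b]
          fs = [f(a), f(b)]
          grid = np.linspace(a, b, 4001)          # coarse search grid for the sketch
          for _ in range(n_iter):
              # Lower-bounding sawtooth: max_i ( f(x_i) - L |x - x_i| ).
              bound = np.max([fi - lipschitz * np.abs(grid - xi)
                              for xi, fi in zip(xs, fs)], axis=0)
              x_next = float(grid[np.argmin(bound)])   # most promising point so far
              xs.append(x_next)
              fs.append(f(x_next))
          bound = np.max([fi - lipschitz * np.abs(grid - xi)
                          for xi, fi in zip(xs, fs)], axis=0)
          best = int(np.argmin(fs))
          return xs[best], fs[best], float(np.min(bound))

      # Example: a wiggly test function; 14 safely over-estimates its Lipschitz constant on [0, 4].
      x_star, f_star, lb = pijavskii_minimize(lambda x: np.sin(3 * x) + 0.5 * x, 0.0, 4.0, 14.0)
      print(x_star, f_star, lb)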

  15. Computational micromechanics of woven composites

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Saigal, Sunil; Zeng, Xiaogang

    1991-01-01

    The bounds on the equivalent elastic material properties of a composite are addressed by a unified energy approach which is valid for unidirectional as well as 2D and 3D woven composites. The unit cell considered is assumed to consist, first, of the actual composite arrangement of the fibers and matrix material, and then of an equivalent pseudohomogeneous material. Equating the strain energies due to the two arrangements yields an estimate of the upper bound for the equivalent material properties; successive increases in the order of the displacement field assumed in the composite arrangement successively produce improved upper bound estimates.

  16. Upper bounds on the photon mass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Accioly, Antonio; Group of Field Theory from First Principles, Sao Paulo State University; Instituto de Fisica Teorica

    2010-09-15

    The effects of a nonzero photon rest mass can be incorporated into electromagnetism in a simple way using the Proca equations. In this vein, two interesting implications regarding the possible existence of a massive photon in nature, i.e., tiny alterations in the known values of both the anomalous magnetic moment of the electron and the gravitational deflection of electromagnetic radiation, are utilized to set upper limits on its mass. The bounds obtained are not as stringent as those recently found; nonetheless, they are comparable to other existing bounds and bring new elements to the issue of restricting the photon mass.

  17. Estimating the Richness of a Population When the Maximum Number of Classes Is Fixed: A Nonparametric Solution to an Archaeological Problem

    PubMed Central

    Eren, Metin I.; Chao, Anne; Hwang, Wen-Han; Colwell, Robert K.

    2012-01-01

    Background Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded: assuming only a lower bound on the number of species or classes. Nevertheless, there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. Methodology/Principal Findings Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. Conclusions/Significance Application of the new method for constructing doubly-bounded richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race). PMID:22666316

  18. Upper bound on the Abelian gauge coupling from asymptotic safety

    NASA Astrophysics Data System (ADS)

    Eichhorn, Astrid; Versteegen, Fleur

    2018-01-01

    We explore the impact of asymptotically safe quantum gravity on the Abelian gauge coupling in a model including a charged scalar, confirming indications that asymptotically safe quantum fluctuations of gravity could trigger a power-law running towards a free fixed point for the gauge coupling above the Planck scale. Simultaneously, quantum gravity fluctuations balance against matter fluctuations to generate an interacting fixed point, which acts as a boundary of the basin of attraction of the free fixed point. This enforces an upper bound on the infrared value of the Abelian gauge coupling. In the regime of gravity couplings which in our approximation also allows for a prediction of the top quark and Higgs mass close to the experimental value [1], we obtain an upper bound approximately 35% above the infrared value of the hypercharge coupling in the Standard Model.

  19. Limits of Gaussian fluctuations in the cosmic microwave background at 19.2 GHz

    NASA Technical Reports Server (NTRS)

    Boughn, S. P.; Cheng, E. S.; Cottingham, D. A.; Fixsen, D. J.

    1992-01-01

    The Northern Hemisphere data from the 19.2 GHz full sky survey are analyzed to place limits on the magnitude of Gaussian fluctuations in the cosmic microwave background implied by a variety of correlation functions. Included among the models tested are the monochromatic and Gaussian-shaped families, and those with power-law spectra for n values between -2 and 1. An upper bound is placed on the quadrupole anisotropy of Delta T/T less than 3.2 × 10^-5 rms, and an upper bound on scale-invariant (n = 1) fluctuations of a2 less than 4.5 × 10^-5 (95 percent confidence level). There is significant contamination of these data from Galactic emission, and improvement of the modeling of the Galaxy could yield a significant reduction of these upper bounds.

  20. Limits on Gaussian fluctuations in the cosmic microwave background at 19.2 GHz

    NASA Technical Reports Server (NTRS)

    Boughn, S. P.; Cheng, E. S.; Cottingham, D. A.; Fixsen, D. J.

    1991-01-01

    The Northern Hemisphere data from the 19.2 GHz full sky survey are analyzed to place limits on the magnitude of Gaussian fluctuations in the cosmic microwave background implied by a variety of correlation functions. Included among the models tested are the monochromatic and Gaussian-shaped families, and those with power-law spectra for n from -2 to 1. We place an upper bound on the quadrupole anisotropy of DeltaT/T less than 3.2 × 10^-5 rms, and an upper bound on scale-invariant (n = 1) fluctuations of a2 less than 4.5 × 10^-5 (95 percent confidence level). There is significant contamination of these data from Galactic emission, and improvement of our modeling of the Galaxy could yield a significant reduction of these upper bounds.

  1. Complexity Bounds for Quantum Computation

    DTIC Science & Technology

    2007-06-22

    This project focused on upper and lower bounds for quantum computability using constant... classical computation models, particularly emphasizing new examples of where quantum circuits are more powerful than their classical counterparts. A second...

  2. Structure of the Balmer jump. The isolated hydrogen atom

    NASA Astrophysics Data System (ADS)

    Calvo, F.; Belluzzi, L.; Steiner, O.

    2018-06-01

    Context. The spectrum of the hydrogen atom was explained by Bohr more than one century ago. We revisit here some aspects of the underlying quantum structure, with a modern formalism, focusing on the limit of the Balmer series. Aims: We investigate the behaviour of the absorption coefficient of the isolated hydrogen atom in the neighbourhood of the Balmer limit. Methods: We analytically computed the total cross-section arising from bound-bound and bound-free transitions in the isolated hydrogen atom at the Balmer limit, and established a simplified semi-analytical model for the surroundings of that limit. We worked within the framework of the formalism of Landi Degl'Innocenti & Landolfi (2004, Astrophys. Space Sci. Lib., 307), which permits an almost straightforward generalization of our results to other atoms and molecules, and which is perfectly suitable for including polarization phenomena in the problem. Results: We analytically show that there is no discontinuity at the Balmer limit, even though the concept of a "Balmer jump" is still meaningful. Furthermore, we give a possible definition of the location of the Balmer jump, and we check that this location is dependent on the broadening mechanisms. At the Balmer limit, we compute the cross-section in a fully analytical way. Conclusions: The Balmer jump is produced by a rapid drop of the total Balmer cross-section, yet this variation is smooth and continuous when both bound-bound and bound-free processes are taken into account, and its shape and location depend on the broadening mechanisms.

  3. Upper Bound on Diffusivity

    NASA Astrophysics Data System (ADS)

    Hartman, Thomas; Hartnoll, Sean A.; Mahajan, Raghu

    2017-10-01

    The linear growth of operators in local quantum systems leads to an effective light cone even if the system is nonrelativistic. We show that the consistency of diffusive transport with this light cone places an upper bound on the diffusivity: D ≲ v^2 τ_eq. The operator growth velocity v defines the light cone, and τ_eq is the local equilibration time scale, beyond which the dynamics of conserved densities is diffusive. We verify that the bound is obeyed in various weakly and strongly interacting theories. In holographic models, this bound establishes a relation between the hydrodynamic and leading nonhydrodynamic quasinormal modes of planar black holes. Our bound relates transport data, including the electrical resistivity and the shear viscosity, to the local equilibration time, even in the absence of a quasiparticle description. In this way, the bound sheds light on the observed T-linear resistivity of many unconventional metals, the shear viscosity of the quark-gluon plasma, and the spin transport of unitary fermions.

  4. Intrinsic upper bound on two-qubit polarization entanglement predetermined by pump polarization correlations in parametric down-conversion

    NASA Astrophysics Data System (ADS)

    Kulkarni, Girish; Subrahmanyam, V.; Jha, Anand K.

    2016-06-01

    We study how one-particle correlations transfer to manifest as two-particle correlations in the context of parametric down-conversion (PDC), a process in which a pump photon is annihilated to produce two entangled photons. We work in the polarization degree of freedom and show that for any two-qubit generation process that is both trace-preserving and entropy-nondecreasing, the concurrence C(ρ) of the generated two-qubit state ρ follows an intrinsic upper bound with C(ρ) ≤ (1 + P)/2, where P is the degree of polarization of the pump photon. We also find that for the class of two-qubit states that is restricted to have only two nonzero diagonal elements, such that the effective dimensionality of the two-qubit state is the same as the dimensionality of the pump polarization state, the upper bound on concurrence is the degree of polarization itself, that is, C(ρ) ≤ P. Our work shows that the maximum manifestation of two-particle correlations as entanglement is dictated by one-particle correlations. The formalism developed in this work can be extended to include multiparticle systems and can thus have important implications towards deducing the upper bounds on multiparticle entanglement, for which no universally accepted measure exists.
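    For readers who want to evaluate the stated bound numerically, the short Python sketch below (my own illustration; the density matrix and the pump degree of polarization are made-up example inputs, not states from the paper) computes the Wootters concurrence of a two-qubit state and compares it against (1 + P)/2.

      import numpy as np

      def concurrence(rho):
          """Wootters concurrence of a two-qubit density matrix rho (4x4)."""
          sy = np.array([[0, -1j], [1j, 0]])
          spin_flip = np.kron(sy, sy)
          rho_tilde = spin_flip @ rho.conj() @ spin_flip
          # Square roots of the eigenvalues of rho * rho_tilde, sorted descending.
          eigs = np.linalg.eigvals(rho @ rho_tilde)
          lams = np.sort(np.sqrt(np.clip(eigs.real, 0.0, None)))[::-1]
          return max(0.0, lams[0] - lams[1] - lams[2] - lams[3])

      # Example: a Werner-like mixture of a Bell state with white noise.
      bell = np.zeros((4, 1)); bell[0, 0] = bell[3, 0] = 1 / np.sqrt(2)
      p_mix = 0.9
      rho = p_mix * (bell @ bell.T.conj()) + (1 - p_mix) * np.eye(4) / 4

      P_pump = 0.8   # hypothetical degree of polarization of the pump photon
      print("C(rho) =", concurrence(rho))
      print("bound (1 + P)/2 =", (1 + P_pump) / 2)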

  5. Backstepping Design of Adaptive Neural Fault-Tolerant Control for MIMO Nonlinear Systems.

    PubMed

    Gao, Hui; Song, Yongduan; Wen, Changyun

    In this paper, an adaptive controller is developed for a class of multi-input and multioutput nonlinear systems with neural networks (NNs) used as a modeling tool. It is shown that all the signals in the closed-loop system with the proposed adaptive neural controller are globally uniformly bounded for any external input in . In our control design, the upper bound of the NN modeling error and the gains of external disturbance are characterized by unknown upper bounds, which is more rational to establish the stability in the adaptive NN control. Filter-based modification terms are used in the update laws of unknown parameters to improve the transient performance. Finally, fault-tolerant control is developed to accommodate actuator failure. An illustrative example applying the adaptive controller to control a rigid robot arm shows the validation of the proposed controller.

  6. Length bounds for connecting discharges in triggered lightning subsequent strokes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idone, V.P.

    1990-11-20

    Highly time resolved streak recordings from nine subsequent strokes in four triggered flashes have been examined for evidence of the occurrence of upward connecting discharges. These photographic recordings were obtained with superior spatial and temporal resolution (0.3 m and 0.5 μs) and were examined with a video image analysis system to help delineate the separate leader and return stroke image tracks. Unfortunately, a definitive determination of the occurrence of connecting discharges in these strokes could not be made. The data did allow various determinations of an upper bound length for any possible connecting discharge in each stroke. Under the simplest analysis approach possible, an 'absolute' upper bound set of lengths was measured that ranged from 12 to 27 m with a mean of 19 m; two other more involved analyses yielded arguably better upper bound estimates of 8-18 m and 7-26 m with means of 12 and 13 m, respectively. An additional set of low time-resolution telephoto recordings of the lowest few meters of channel revealed six strokes in these flashes with one or more upward unconnected channels originating from the lightning rod tip. The maximum length of unconnected channel seen in each of these strokes ranged from 0.2 to 1.6 m with a mean of 0.7 m. This latter set of observations is interpreted as indirect evidence that connecting discharges did occur in these strokes and that the lower bound for their length is about 1 m.

  7. The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1981-01-01

    Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations, such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of the use of the Cramer-Rao bound are given. Engineering judgement, aided by such analytical tools, is the final arbiter of accuracy estimation.
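    The Cramer-Rao bound referred to above is the inverse of the Fisher information. The following short Python sketch (my own illustration on a simple linear output-error model, not the flight-data software described in the report) computes the bound on parameter standard deviations from the output sensitivity matrix and the measurement noise variance.

      import numpy as np

      def cramer_rao_std(sensitivities, noise_var):
          """Cramer-Rao lower bounds on parameter standard deviations.

          sensitivities: (N, p) matrix of d(output)/d(parameter) at each time sample.
          noise_var: scalar measurement noise variance.
          """
          fisher = sensitivities.T @ sensitivities / noise_var   # Fisher information matrix
          crb = np.linalg.inv(fisher)                            # Cramer-Rao covariance bound
          return np.sqrt(np.diag(crb))

      # Example: y(t) = a*u(t) + b, identified from noisy samples.
      t = np.linspace(0.0, 10.0, 200)
      u = np.sin(t)
      S = np.column_stack([u, np.ones_like(t)])   # sensitivities w.r.t. (a, b)
      print("CR bounds on std(a), std(b):", cramer_rao_std(S, noise_var=0.05**2))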

  8. Risk assessment and monitoring programme of nitrates through vegetables in the Region of Valencia (Spain).

    PubMed

    Quijano, Leyre; Yusà, Vicent; Font, Guillermina; McAllister, Claudia; Torres, Concepción; Pardo, Olga

    2017-02-01

    This study was carried out to determine current levels of nitrate in vegetables marketed in the Region of Valencia (Spain) and to estimate the toxicological risk associated with their intake. A total of 533 samples of seven vegetable species were studied. Nitrate levels were derived from the Valencia Region monitoring programme carried out from 2009 to 2013, and food consumption levels were taken from the first Valencia Food Consumption Survey, conducted in 2010. The exposure was estimated using a probabilistic approach, and two scenarios were assumed for left-censored data: the lower-bound scenario, in which unquantified results (below the limit of quantification) were set to zero, and the upper-bound scenario, in which unquantified results were set to the limit of quantification value. The exposure of the Valencia consumers to nitrate through the consumption of vegetable products appears to be relatively low. In the adult population (16-95 years) the P99.9 was 3.13 mg kg^-1 body weight day^-1 and 3.15 mg kg^-1 body weight day^-1 in the lower-bound and upper-bound scenarios, respectively. For young people (6-15 years) the P99.9 of the exposure was 4.20 mg kg^-1 body weight day^-1 and 4.40 mg kg^-1 body weight day^-1 in the lower-bound and upper-bound scenarios, respectively. The risk characterisation indicates that, under the upper-bound scenario, 0.79% of adults and 1.39% of young people can exceed the Acceptable Daily Intake of nitrate. This percentage could be higher among extreme consumers of vegetables (such as vegetarians). Overall, the estimated exposures to nitrate from vegetables are unlikely to result in appreciable health risks. Copyright © 2016 Elsevier Ltd. All rights reserved.
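    The lower-bound/upper-bound treatment of left-censored results described above is simple enough to show in a few lines. The sketch below (my own illustration with made-up concentration data, not the survey's code) substitutes zero or the limit of quantification (LOQ) for results reported below the LOQ and computes the resulting pair of mean-exposure estimates.

      import numpy as np

      def bounded_exposure(concentrations, loq, consumption, body_weight):
          """Lower- and upper-bound mean exposure for left-censored concentration data.

          concentrations: measured values in mg/kg, with np.nan for results < LOQ.
          loq: limit of quantification (mg/kg); consumption: kg/day; body_weight: kg.
          Returns (lower_bound, upper_bound) exposure in mg per kg body weight per day.
          """
          censored = np.isnan(concentrations)
          lb_conc = np.where(censored, 0.0, concentrations)    # <LOQ set to 0
          ub_conc = np.where(censored, loq, concentrations)    # <LOQ set to LOQ
          lb = lb_conc.mean() * consumption / body_weight
          ub = ub_conc.mean() * consumption / body_weight
          return lb, ub

      # Hypothetical nitrate concentrations (mg/kg); nan marks results below LOQ.
      conc = np.array([1200.0, 850.0, np.nan, 2300.0, np.nan, 540.0])
      print(bounded_exposure(conc, loq=50.0, consumption=0.15, body_weight=70.0))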

  9. On the upper bound in the Bohm sheath criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotelnikov, I. A., E-mail: I.A.Kotelnikov@inp.nsk.su; Skovorodin, D. I., E-mail: D.I.Skovorodin@inp.nsk.su

    2016-02-15

    The question is discussed about the existence of an upper bound in the Bohm sheath criterion, according to which the Debye sheath at the interface between plasma and a negatively charged electrode is stable only if the ion flow velocity in plasma exceeds the ion sound velocity. It is stated that, with an exception of some artificial ionization models, the Bohm sheath criterion is satisfied as an equality at the lower bound and the ion flow velocity is equal to the speed of sound. In the one-dimensional theory, a supersonic flow appears in an unrealistic model of a localized ion source the size of which is less than the Debye length; however, supersonic flows seem to be possible in the two- and three-dimensional cases. In the available numerical codes used to simulate charged particle sources with a plasma emitter, the presence of the upper bound in the Bohm sheath criterion is not supposed; however, the correspondence with experimental data is usually achieved if the ion flow velocity in plasma is close to the ion sound velocity.

  10. Exact synchronization bound for coupled time-delay systems.

    PubMed

    Senthilkumar, D V; Pesquera, Luis; Banerjee, Santo; Ortín, Silvia; Kurths, J

    2013-04-01

    We obtain an exact bound for synchronization in coupled time-delay systems using the generalized Halanay inequality for the general case of time-dependent delay, coupling, and coefficients. Furthermore, we show that the same analysis is applicable to both uni- and bidirectionally coupled time-delay systems with an appropriate evolution equation for their synchronization manifold, which can also be defined for different types of synchronization. The exact synchronization bound assures an exponential stabilization of the synchronization manifold which is crucial for applications. The analytical synchronization bound is independent of the nature of the modulation and can be applied to any time-delay system satisfying a Lipschitz condition. The analytical results are corroborated numerically using the Ikeda system.

  11. Communication complexity and information complexity

    NASA Astrophysics Data System (ADS)

    Pankratov, Denis

    Information complexity enables the use of information-theoretic tools in communication complexity theory. Prior to the results presented in this thesis, information complexity was mainly used for proving lower bounds and direct-sum theorems in the setting of communication complexity. We present three results that demonstrate new connections between information complexity and communication complexity. In the first contribution we thoroughly study the information complexity of the smallest nontrivial two-party function: the AND function. While computing the communication complexity of AND is trivial, computing its exact information complexity presents a major technical challenge. In overcoming this challenge, we reveal that information complexity gives rise to rich geometrical structures. Our analysis of information complexity relies on new analytic techniques and new characterizations of communication protocols. We also uncover a connection of information complexity to the theory of elliptic partial differential equations. Once we compute the exact information complexity of AND, we can compute the exact communication complexity of several related functions on n-bit inputs with some additional technical work. Previous combinatorial and algebraic techniques could only prove bounds of the form Θ(n). Interestingly, this level of precision is typical in the area of information theory, so our result demonstrates that this meta-property of precise bounds carries over to information complexity and in certain cases even to communication complexity. Our result does not only strengthen the lower bound on the communication complexity of disjointness by making it more exact, but it also shows that information complexity provides the exact upper bound on communication complexity. In fact, this result is more general and applies to a whole class of communication problems. In the second contribution, we use self-reduction methods to prove strong lower bounds on the information complexity of two of the most studied functions in the communication complexity literature: Gap Hamming Distance (GHD) and Inner Product mod 2 (IP). In our first result we affirm the conjecture that the information complexity of GHD is linear even under the uniform distribution. This strengthens the Ω(n) bound shown by Kerenidis et al. (2012) and answers an open problem by Chakrabarti et al. (2012). We also prove that the information complexity of IP is arbitrarily close to the trivial upper bound n as the permitted error tends to zero, again strengthening the Ω(n) lower bound proved by Braverman and Weinstein (2011). More importantly, our proofs demonstrate that self-reducibility makes the connection between information complexity and communication complexity lower bounds a two-way connection. Whereas numerous results in the past used information complexity techniques to derive new communication complexity lower bounds, we explore a generic way in which communication complexity lower bounds imply information complexity lower bounds in a black-box manner. In the third contribution we consider the roles that private and public randomness play in the definition of information complexity. In communication complexity, private randomness can be trivially simulated by public randomness. Moreover, the communication cost of simulating public randomness with private randomness is well understood due to Newman's theorem (1991).
In information complexity, the roles of public and private randomness are reversed: public randomness can be trivially simulated by private randomness. However, the information cost of simulating private randomness with public randomness is not understood. We show that protocols that use only public randomness admit a rather strong compression. In particular, efficient simulation of private randomness by public randomness would imply a version of a direct sum theorem in the setting of communication complexity. This establishes yet another connection between the two areas. (Abstract shortened by UMI.)

  12. The direct reaction field hamiltonian: Analysis of the dispersion term and application to the water dimer

    NASA Astrophysics Data System (ADS)

    Thole, B. T.; Van Duijnen, P. Th.

    1982-10-01

    The induction and dispersion terms obtained from quantum-mechanical calculations with a direct reaction field hamiltonian are compared to second order perturbation theory expressions. The dispersion term is shown to give an upper bound which is a generalization of Alexander's upper bound. The model is illustrated by a calculation on the interactions in the water dimer. The long range Coulomb, induction and dispersion interactions are reasonably reproduced.

  13. On the Kirchhoff Index of Graphs

    NASA Astrophysics Data System (ADS)

    Das, Kinkar C.

    2013-09-01

    Let G be a connected graph of order n with Laplacian eigenvalues μ1 ≥ μ2 ≥ ... ≥ μn-1 > μn = 0. The Kirchhoff index of G is defined as Kf(G) = n Σ_{i=1}^{n-1} 1/μi. In this paper, we give lower and upper bounds on Kf of graphs in terms of n, the number of edges, the maximum degree, and the number of spanning trees. Moreover, we present lower and upper bounds on the Nordhaus-Gaddum-type result for the Kirchhoff index.
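    As a quick illustration of the definition just given (my own sketch, using the standard spectral formula Kf(G) = n Σ 1/μi over the nonzero Laplacian eigenvalues), the following Python snippet computes the Kirchhoff index of a small graph from its Laplacian spectrum.

      import numpy as np

      def kirchhoff_index(adjacency):
          """Kirchhoff index Kf(G) = n * sum of reciprocals of nonzero Laplacian eigenvalues."""
          A = np.asarray(adjacency, dtype=float)
          n = A.shape[0]
          L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
          mu = np.linalg.eigvalsh(L)              # eigenvalues in ascending order
          nonzero = mu[mu > 1e-9]                 # drop the single zero eigenvalue (G connected)
          return n * np.sum(1.0 / nonzero)

      # Example: the 4-cycle C4; its Kirchhoff index should come out to 5.
      C4 = [[0, 1, 0, 1],
            [1, 0, 1, 0],
            [0, 1, 0, 1],
            [1, 0, 1, 0]]
      print(kirchhoff_index(C4))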

  14. Upper bound of pier scour in laboratory and field data

    USGS Publications Warehouse

    Benedict, Stephen; Caldwell, Andral W.

    2016-01-01

    The U.S. Geological Survey (USGS), in cooperation with the South Carolina Department of Transportation, conducted several field investigations of pier scour in South Carolina and used the data to develop envelope curves defining the upper bound of pier scour. To expand on this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with pier scour data from other sources and to evaluate upper-bound relations with this larger data set. To facilitate this analysis, 569 laboratory and 1,858 field measurements of pier scour were compiled to form the 2014 USGS Pier Scour Database. This extensive database was used to develop an envelope curve for the potential maximum pier scour depth encompassing the laboratory and field data. The envelope curve provides a simple but useful tool for assessing the potential maximum pier scour depth for effective pier widths of about 30 ft or less.

  15. Resilient filtering for time-varying stochastic coupling networks under the event-triggering scheduling

    NASA Astrophysics Data System (ADS)

    Wang, Fan; Liang, Jinling; Dobaie, Abdullah M.

    2018-07-01

    The resilient filtering problem is considered for a class of time-varying networks with stochastic coupling strengths. An event-triggered strategy is adopted to save network resources by scheduling the signal transmission from the sensors to the filters based on certain prescribed rules. Moreover, the filter parameters to be designed are subject to gain perturbations. The primary aim of the addressed problem is to determine a resilient filter that ensures an acceptable filtering performance for the considered network with event-triggering scheduling. To handle such an issue, an upper bound on the estimation error variance is established for each node according to the stochastic analysis. Subsequently, the resilient filter is designed by locally minimizing the derived upper bound at each iteration. Moreover, rigorous analysis shows the monotonicity of the minimal upper bound with respect to the triggering threshold. Finally, a simulation example is presented to show the effectiveness of the established filter scheme.

  16. Objects of Maximum Electromagnetic Chirality

    NASA Astrophysics Data System (ADS)

    Fernandez-Corbaton, Ivan; Fruhnert, Martin; Rockstuhl, Carsten

    2016-07-01

    We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. Reciprocal objects attain the upper bound if and only if they are transparent for all the fields of one polarization handedness (helicity). Additionally, electromagnetic duality symmetry, i.e., helicity preservation upon interaction, turns out to be a necessary condition for reciprocal objects to attain the upper bound. We use these results to provide requirements for the design of such extremal objects. The requirements can be formulated as constraints on the polarizability tensors for dipolar objects or on the material constitutive relations for continuous media. We also outline two applications for objects of maximum electromagnetic chirality: a twofold resonantly enhanced and background-free circular dichroism measurement setup, and angle-independent helicity filtering glasses. Finally, we use the theoretically obtained requirements to guide the design of a specific structure, which we then analyze numerically and discuss its performance with respect to maximal electromagnetic chirality.

  17. On similarity solutions of a boundary layer problem with an upstream moving wall

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Lakin, W. D.; Nachman, A.

    1986-01-01

    The problem of a boundary layer on a flat plate which has a constant velocity opposite in direction to that of the uniform mainstream is examined. It was previously shown that the solution of this boundary value problem depends crucially on the parameter which is the ratio of the velocity of the plate to the velocity of the free stream. In particular, it was proved that a solution exists only if this parameter does not exceed a certain critical value, and numerical evidence was adduced to show that this solution is nonunique. Using the Crocco formulation, the present work proves this nonuniqueness. Also considered are the analyticity of solutions and the derivation of upper bounds on the critical value of the wall velocity parameter.

  18. A critical examination of the validity of simplified models for radiant heat transfer analysis.

    NASA Technical Reports Server (NTRS)

    Toor, J. S.; Viskanta, R.

    1972-01-01

    Examination of the directional effects of the simplified models by comparing the experimental data with the predictions based on simple and more detailed models for the radiation characteristics of surfaces. Analytical results indicate that the constant property diffuse and specular models do not yield the upper and lower bounds on local radiant heat flux. In general, the constant property specular analysis yields higher values of irradiation than the constant property diffuse analysis. A diffuse surface in the enclosure appears to destroy the effect of specularity of the other surfaces. Semigray and gray analyses predict the irradiation reasonably well provided that the directional properties and the specularity of the surfaces are taken into account. The uniform and nonuniform radiosity diffuse models are in satisfactory agreement with each other.

  19. Search for deviations from the inverse square law of gravity at nm range using a pulsed neutron beam

    NASA Astrophysics Data System (ADS)

    Haddock, Christopher C.; Oi, Noriko; Hirota, Katsuya; Ino, Takashi; Kitaguchi, Masaaki; Matsumoto, Satoru; Mishima, Kenji; Shima, Tatsushi; Shimizu, Hirohiko M.; Snow, W. Michael; Yoshioka, Tamaki

    2018-03-01

    We describe an experimental search for deviations from the inverse-square law of gravity at the nanometer length scale using neutron scattering from noble gases on a pulsed slow neutron beam line. By measuring the neutron momentum transfer (q) dependence of the differential cross section for xenon and helium and comparing to their well-known analytical forms, we place an upper bound on the strength of a new interaction as a function of interaction length λ which improves upon previous results in the region λ < 0.1 nm, and remains competitive in the larger-λ region. A pseudoexperimental simulation is developed for this experiment and its role in the data analysis is described. We conclude with plans for improving sensitivity in the larger-λ region.

  20. Forecasting neutrino masses from combining KATRIN and the CMB observations: Frequentist and Bayesian analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Host, Ole; Lahav, Ofer; Abdalla, Filipe B.

    We present a showcase for deriving bounds on the neutrino masses from laboratory experiments and cosmological observations. We compare the frequentist and Bayesian bounds on the effective electron neutrino mass m_β which the KATRIN neutrino mass experiment is expected to obtain, using both an analytical likelihood function and Monte Carlo simulations of KATRIN. Assuming a uniform prior in m_β, we find that a null result yields an upper bound of about 0.17 eV at 90% confidence in the Bayesian analysis, to be compared with the frequentist KATRIN reference value of 0.20 eV. This is a significant difference when judged relative to the systematic and statistical uncertainties of the experiment. On the other hand, an input m_β = 0.35 eV, which is the KATRIN 5σ detection threshold, would be detected at virtually the same level. Finally, we combine the simulated KATRIN results with cosmological data in the form of present (post-WMAP) and future (simulated Planck) observations. If an input of m_β = 0.2 eV is assumed in our simulations, KATRIN alone excludes a zero neutrino mass at 2.2σ. Adding Planck data increases the probability of detection to a median 2.7σ. The analysis highlights the importance of combining cosmological and laboratory data on an equal footing.

  1. Micromechanical Modeling of Anisotropic Damage-Induced Permeability Variation in Crystalline Rocks

    NASA Astrophysics Data System (ADS)

    Chen, Yifeng; Hu, Shaohua; Zhou, Chuangbing; Jing, Lanru

    2014-09-01

    This paper presents a study on the initiation and progress of anisotropic damage and its impact on the permeability variation of crystalline rocks of low porosity. This work was based on an existing micromechanical model considering the frictional sliding and dilatancy behaviors of microcracks and the recovery of degraded stiffness when the microcracks are closed. By virtue of an analytical ellipsoidal inclusion solution, lower bound estimates were formulated through a rigorous homogenization procedure for the damage-induced effective permeability of the microcracks-matrix system, and their predictive limitations were discussed with superconducting penny-shaped microcracks, in which the greatest lower bounds were obtained for each homogenization scheme. On this basis, an empirical upper bound estimation model was suggested to account for the influences of anisotropic damage growth, connectivity, frictional sliding, dilatancy, and normal stiffness recovery of closed microcracks, as well as tensile stress-induced microcrack opening on the permeability variation, with a small number of material parameters. The developed model was calibrated and validated by a series of existing laboratory triaxial compression tests with permeability measurements on crystalline rocks, and applied for characterizing the excavation-induced damage zone and permeability variation in the surrounding granitic rock of the TSX tunnel at the Atomic Energy of Canada Limited's (AECL) Underground Research Laboratory (URL) in Canada, with an acceptable agreement between the predicted and measured data.

  2. Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination

    NASA Astrophysics Data System (ADS)

    Audenaert, Koenraad M. R.; Mosonyi, Milán

    2014-10-01

    We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ1, …, σr. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ1, …, σr), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, min_{j<k} C(σj, σk).

  3. Quantitative characterization of surface topography using spectral analysis

    NASA Astrophysics Data System (ADS)

    Jacobs, Tevis D. B.; Junge, Till; Pastewka, Lars

    2017-03-01

    Roughness determines many functional properties of surfaces, such as adhesion, friction, and (thermal and electrical) contact conductance. Recent analytical models and simulations enable quantitative prediction of these properties from knowledge of the power spectral density (PSD) of the surface topography. The utility of the PSD is that it contains statistical information that is unbiased by the particular scan size and pixel resolution chosen by the researcher. In this article, we first review the mathematical definition of the PSD, including the one- and two-dimensional cases, and common variations of each. We then discuss strategies for reconstructing an accurate PSD of a surface using topography measurements at different size scales. Finally, we discuss detecting and mitigating artifacts at the smallest scales, and computing upper/lower bounds on functional properties obtained from models. We accompany our discussion with virtual measurements on computer-generated surfaces. This discussion summarizes how to analyze topography measurements to reconstruct a reliable PSD. Analytical models demonstrate the potential for tuning functional properties by rationally tailoring surface topography—however, this potential can only be achieved through the accurate, quantitative reconstruction of the PSDs of real-world surfaces.
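    To make the definition concrete, here is a minimal Python sketch (my own, not the authors' software) that estimates the one-dimensional PSD of an equally spaced line scan with a discrete Fourier transform; window choice, detrending, and the two-dimensional and radially averaged variants discussed in the article are omitted. The synthetic profile at the end is a made-up example input.

      import numpy as np

      def psd_1d(heights, dx):
          """One-sided 1D power spectral density (periodogram estimate) of a line scan."""
          h = np.asarray(heights, dtype=float)
          h = h - h.mean()                              # remove the mean height
          n = h.size
          H = np.fft.rfft(h)
          q = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)    # wavevectors
          # Normalized so that summing C over the positive-q bins (bin width
          # 2*pi/(n*dx)) recovers the mean-square roughness of the profile.
          C = 2.0 * dx / (2.0 * np.pi * n) * np.abs(H) ** 2
          C[0] *= 0.5                                   # DC bin is not doubled
          if n % 2 == 0:
              C[-1] *= 0.5                              # Nyquist bin is not doubled either
          return q, C

      # Example: a randomly rough synthetic profile sampled every 1 nm.
      rng = np.random.default_rng(3)
      profile = np.cumsum(rng.normal(0.0, 0.1e-9, 4096))   # random-walk-like roughness
      q, C = psd_1d(profile, dx=1e-9)
      print(q[1], C[1])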

  4. A splay tree-based approach for efficient resource location in P2P networks.

    PubMed

    Zhou, Wei; Tan, Zilong; Yao, Shaowen; Wang, Shipu

    2014-01-01

    Resource location in a structured P2P system has a critical influence on system performance. Existing analytical studies of the Chord protocol have shown some potential improvements in performance. In this paper a splay tree-based new Chord structure called SChord is proposed to improve the efficiency of locating resources. We consider a novel implementation of the Chord finger table (routing table) based on the splay tree. This approach extends the Chord finger table with additional routing entries. An adaptive routing algorithm is proposed for the implementation, and it can be shown that the hop count is significantly reduced without introducing any other protocol overheads. We analyze the hop count of the adaptive routing algorithm, as compared to Chord variants, and demonstrate sharp upper and lower bounds for both worst-case and average-case settings. In addition, we theoretically analyze the hop reduction in SChord and derive the fact that SChord can significantly reduce the routing hops as compared to Chord. Several simulations are presented to evaluate the performance of the algorithm and support our analytical findings. The simulation results show the efficiency of SChord.

  5. Systems-Level Annotation of a Metabolomics Data Set Reduces 25 000 Features to Fewer than 1000 Unique Metabolites.

    PubMed

    Mahieu, Nathaniel G; Patti, Gary J

    2017-10-03

    When using liquid chromatography/mass spectrometry (LC/MS) to perform untargeted metabolomics, it is now routine to detect tens of thousands of features from biological samples. Poor understanding of the data, however, has complicated interpretation and masked the number of unique metabolites actually being measured in an experiment. Here we place an upper bound on the number of unique metabolites detected in Escherichia coli samples analyzed with one untargeted metabolomics method. We first group multiple features arising from the same analyte, which we call "degenerate features", using a context-driven annotation approach. Surprisingly, this analysis revealed thousands of previously unreported degeneracies that reduced the number of unique analytes to ∼2961. We then applied an orthogonal approach to remove nonbiological features from the data using the 13C-based credentialing technology. This further reduced the number of unique analytes to less than 1000. Our 90% reduction in data is 5-fold greater than previously published studies. On the basis of the results, we propose an alternative approach to untargeted metabolomics that relies on thoroughly annotated reference data sets. To this end, we introduce the creDBle database (http://creDBle.wustl.edu), which contains accurate mass, retention time, and MS/MS fragmentation data as well as annotations of all credentialed features.

  6. Solar System and stellar tests of a quantum-corrected gravity

    NASA Astrophysics Data System (ADS)

    Zhao, Shan-Shan; Xie, Yi

    2015-09-01

    The renormalization group running of the gravitational constant has a universal form and represents a possible extension of general relativity. These renormalization group effects on general relativity will cause the running of the gravitational constant, and there exists a scale of renormalization α_ν, which depends on the mass of an astronomical system and needs to be determined by observations. We test renormalization group effects on general relativity and obtain the upper bounds of α_ν in the low-mass scales: the Solar System and five systems of binary pulsars. Using the supplementary advances of the perihelia provided by the INPOP10a (IMCCE, France) and EPM2011 (IAA RAS, Russia) ephemerides, we obtain new upper bounds on α_ν in the Solar System when the Lense-Thirring effect due to the Sun's angular momentum and the uncertainty of the Sun's quadrupole moment are properly taken into account. These two factors were absent in the previous work. We find that INPOP10a yields the upper bound α_ν = (0.3 ± 2.8) × 10^-20 while EPM2011 gives α_ν = (-2.5 ± 8.3) × 10^-21. Both of them are tighter than the previous result by 4 orders of magnitude. Furthermore, based on the observational data sets of five systems of binary pulsars, PSR J0737-3039, PSR B1534+12, PSR J1756-2251, PSR B1913+16, and PSR B2127+11C, the upper bound is found to be α_ν = (-2.6 ± 5.1) × 10^-17. From the bounds of this work at a low-mass scale and the ones at the mass scale of galaxies, we might catch an updated glimpse of the mass dependence of α_ν, and it is found that our improvement of the upper bounds in the Solar System can significantly change the possible pattern of the relation between log|α_ν| and log m from a linear one to a power law, where m is the mass of an astronomical system. This suggests that |α_ν| needs to be suppressed more rapidly with the decrease of the mass of low-mass systems. It also predicts that |α_ν| might have an upper limit in high-mass astrophysical systems, which can be tested in the future.

  7. Differential Games of inf-sup Type and Isaacs Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaise, Hidehiro; Sheu, S.-J.

    2005-06-15

    Motivated by the work of Fleming, we provide a general framework to associate inf-sup type values with the Isaacs equations. We show that upper and lower bounds for the generators of inf-sup type are upper and lower Hamiltonians, respectively. In particular, the lower (resp. upper) bound corresponds to the progressive (resp. strictly progressive) strategy. By the Dynamic Programming Principle and identification of the generator, we can prove that the inf-sup type game is characterized as the unique viscosity solution of the Isaacs equation. We also discuss the Isaacs equation with a Hamiltonian given by a convex combination of the lower and upper Hamiltonians.

  8. Revisiting the time until fixation of a neutral mutant in a finite population - A coalescent theory approach.

    PubMed

    Greenbaum, Gili

    2015-09-07

    Evaluation of the time scale of the fixation of neutral mutations is crucial to the theoretical understanding of the role of neutral mutations in evolution. Diffusion approximations of the Wright-Fisher model are most often used to derive analytic formulations of genetic drift, as well as for the time scales of the fixation of neutral mutations. These approximations require a set of assumptions, most notably that genetic drift is a stochastic process in a continuous allele-frequency space, an assumption appropriate for large populations. Here equivalent approximations are derived using a coalescent theory approach which relies on a different set of assumptions than the diffusion approach, and adopts a discrete allele-frequency space. Solutions for the mean and variance of the time to fixation of a neutral mutation derived from the two approaches converge for large populations but slightly differ for small populations. A Markov chain analysis of the Wright-Fisher model for small populations is used to evaluate the solutions obtained, showing that both the mean and the variance are better approximated by the coalescent approach. The coalescence approximation represents a tighter upper-bound for the mean time to fixation than the diffusion approximation, while the diffusion approximation and coalescence approximation form an upper and lower bound, respectively, for the variance. The converging solutions and the small deviations of the two approaches strongly validate the use of diffusion approximations, but suggest that coalescent theory can provide more accurate approximations for small populations. Copyright © 2015 Elsevier Ltd. All rights reserved.
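
    The fixation-time comparison above can be reproduced in a toy setting. The sketch below is a minimal illustration, not the paper's coalescent derivation: it assumes a haploid Wright-Fisher population of N gene copies, for which the diffusion approximation of the conditional mean fixation time of a mutant at initial frequency p is -2N(1-p)ln(1-p)/p generations (about 2N for a new mutant); the population size and number of replicates are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fixation_time(N):
        """Run one haploid Wright-Fisher trajectory from a single neutral mutant.
        Returns the generation of fixation, or None if the mutant is lost."""
        count, t = 1, 0
        while 0 < count < N:
            count = rng.binomial(N, count / N)  # binomial resampling of the mutant lineage
            t += 1
        return t if count == N else None

    N, runs = 100, 20000
    times = [t for t in (fixation_time(N) for _ in range(runs)) if t is not None]

    p = 1.0 / N
    diffusion = -2 * N * (1 - p) * np.log(1 - p) / p  # about 2N generations for a new mutant
    print(f"simulated conditional mean fixation time: {np.mean(times):.1f} generations")
    print(f"diffusion approximation:                  {diffusion:.1f} generations")
    ```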

  9. Meta-food-chains as a many-layer epidemic process on networks

    NASA Astrophysics Data System (ADS)

    Barter, Edmund; Gross, Thilo

    2016-02-01

    Notable recent works have focused on the multilayer properties of coevolving diseases. We point out that very similar systems play an important role in population ecology. Specifically, we study a meta-food-web model that was recently proposed by Pillai et al. [Theor. Ecol. 3, 223 (2009), 10.1007/s12080-009-0065-1]. This model describes a network of species connected by feeding interactions, which spread over a network of spatial patches. Focusing on the essential case, where the network of feeding interactions is a chain, we develop an analytical approach for the computation of the degree distributions of colonized spatial patches for the different species in the chain. This framework allows us to address ecologically relevant questions. Considering configuration model ensembles of spatial networks, we find that there is an upper bound for the fraction of patches that a given species can occupy, which depends only on the network's mean degree. For a given mean degree there is then an optimal degree distribution that comes closest to the upper bound. Notably, scale-free degree distributions perform worse than more homogeneous degree distributions if the mean degree is sufficiently high. Because species experience the underlying network differently, the optimal degree distribution for one particular species is generally not the optimal distribution for the other species in the same food web. These results are of interest for conservation ecology, where, for instance, the task of selecting areas of old-growth forest to preserve in an agricultural landscape amounts to the design of a patch network.

  10. Coefficient of performance and its bounds with the figure of merit for a general refrigerator

    NASA Astrophysics Data System (ADS)

    Long, Rui; Liu, Wei

    2015-02-01

    A general refrigerator model with non-isothermal processes is studied. The coefficient of performance (COP) and its bounds at maximum χ figure of merit are obtained and analyzed. The model accounts for different heat capacities during the heat transfer processes, so different kinds of refrigerator cycles can be considered. Under the constant heat capacity condition, the upper bound of the COP is the Curzon-Ahlborn (CA) coefficient of performance and is independent of the time durations of the heat exchanging processes. Under the maximum χ criterion, for refrigerator cycles in which the heat capacity in the heat-absorbing process is not less than that in the heat-releasing process (such as the reversed Brayton, reversed Otto, and reversed Atkinson cycles), the COP is bounded by the CA coefficient of performance; otherwise (as for the reversed Diesel cycle), the COP can exceed the CA coefficient of performance. Furthermore, general refined upper and lower bounds are proposed.
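
    For orientation, the sketch below evaluates the reversible (Carnot) COP and the CA-type value usually quoted in finite-time thermodynamics, ε_CA = sqrt(1 + ε_C) - 1. Both the closed form and the example reservoir temperatures are assumptions made for illustration; they are not taken from the paper, which treats general non-isothermal processes.

    ```python
    import math

    def carnot_cop(t_cold, t_hot):
        """Reversible (Carnot) coefficient of performance of a refrigerator."""
        return t_cold / (t_hot - t_cold)

    def ca_cop(t_cold, t_hot):
        """CA-type COP at maximum chi figure of merit, sqrt(1 + eps_C) - 1
        (the form commonly quoted in finite-time thermodynamics; assumed here)."""
        return math.sqrt(1.0 + carnot_cop(t_cold, t_hot)) - 1.0

    t_c, t_h = 273.0, 300.0  # illustrative reservoir temperatures in kelvin
    print(f"Carnot COP: {carnot_cop(t_c, t_h):.2f}")
    print(f"CA-type COP bound at maximum chi: {ca_cop(t_c, t_h):.2f}")
    ```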

  11. Search for Chemically Bound Water in the Surface Layer of Mars Based on HEND/Mars Odyssey Data

    NASA Technical Reports Server (NTRS)

    Basilevsky, A. T.; Litvak, M. L.; Mitrofanov, I. G.; Boynton, W.; Saunders, R. S.

    2003-01-01

    This study focuses on the search for signatures of chemically bound water in the surface layer of Mars, based on data acquired by the High Energy Neutron Detector (HEND), which is part of the Mars Odyssey Gamma Ray Spectrometer (GRS) suite. Fluxes of epithermal neutrons (which probe the upper 1-2 m) and fast neutrons (the upper 20-30 cm), considered in this work, were measured from mid-February to mid-June 2002. A first analysis of this data set with emphasis on chemically bound water was performed. Early publications of the GRS results reported low neutron flux at high latitudes, interpreted as a signature of ground water ice, and in two low-latitude areas, Arabia and SW of Olympus Mons (SWOM), interpreted as 'geographic variations in the amount of chemically and/or physically bound H2O and or OH...'. It is clear that the surface materials of Mars do contain chemically bound water, but its amounts are poorly known and its geographic distribution had not been analyzed.

  12. Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; L'Esperance, A.

    2017-01-01

    A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.
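
    A rough sketch of the kind of error propagation the abstract describes is given below. Every number in it (reference area, dynamic pressure, angle of attack, balance partial derivatives) is a placeholder, and the worst-case summation of output contributions is only one simple way to turn an assumed output variation into a load error; it is not necessarily the estimator used in the paper.

    ```python
    import numpy as np

    # Hypothetical inputs: model reference area, tunnel condition, assumed angle of attack,
    # and balance sensitivities dAF/dR_i, dNF/dR_i expressed in newtons per microV/V.
    S = 0.05                              # reference area, m^2
    q = 20000.0                           # dynamic pressure, Pa
    alpha = np.radians(2.0)               # assumed angle of attack
    dAF_dR = np.array([0.8, 0.1, 0.05])   # axial-force partials w.r.t. balance outputs
    dNF_dR = np.array([0.1, 2.5, 0.2])    # normal-force partials w.r.t. balance outputs
    dR = 1.0                              # empirical output variation, microV/V

    # Conservative (upper-bound) load errors from summing absolute output contributions.
    dAF = np.sum(np.abs(dAF_dR)) * dR
    dNF = np.sum(np.abs(dNF_dR)) * dR

    # Drag is the wind-axis combination of axial and normal force.
    dDrag = abs(np.cos(alpha)) * dAF + abs(np.sin(alpha)) * dNF
    dCD_loads = dDrag / (q * S)
    print(f"upper bound on drag-coefficient precision error from the load measurements: {dCD_loads:.6f}")
    ```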

  13. Improved Lower Bounds on the Price of Stability of Undirected Network Design Games

    NASA Astrophysics Data System (ADS)

    Bilò, Vittorio; Caragiannis, Ioannis; Fanelli, Angelo; Monaco, Gianpiero

    Bounding the price of stability of undirected network design games with fair cost allocation is a challenging open problem in the Algorithmic Game Theory research agenda. Even though the generalization of such games in directed networks is well understood in terms of the price of stability (it is exactly H_n, the n-th harmonic number, for games with n players), far less is known for network design games in undirected networks. The upper bound carries over to this case as well, while the best known lower bound is 42/23 ≈ 1.826. For more restricted but interesting variants of such games, such as broadcast and multicast games, sublogarithmic upper bounds are known while the best known lower bound is 12/7 ≈ 1.714. In the current paper, we improve the lower bounds as follows. We break the psychological barrier of 2 by showing that the price of stability of undirected network design games is at least 348/155 ≈ 2.245. Our proof uses a recursive construction of a network design game with a simple gadget as the main building block. For broadcast and multicast games, we present new lower bounds of 20/11 ≈ 1.818 and 1.862, respectively.
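
    For reference, the directed-case value H_n and the undirected lower bounds quoted above are easy to tabulate; the short sketch below does so. The fractions come from the abstract, while the code itself is purely illustrative.

    ```python
    from fractions import Fraction

    def harmonic(n):
        """H_n, the n-th harmonic number: the exact price of stability for
        directed network design games with n players."""
        return sum(Fraction(1, k) for k in range(1, n + 1))

    for n in (2, 3, 10, 100):
        print(f"H_{n} = {float(harmonic(n)):.4f}")

    # Lower bounds for undirected games reported in the abstract:
    print(f"general games:  {348 / 155:.3f}")   # ~2.245
    print(f"broadcast:      {20 / 11:.3f}")     # ~1.818
    print(f"multicast:      {1.862:.3f}")
    ```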

  14. Nonequilibrium localization and the interplay between disorder and interactions.

    PubMed

    Mascarenhas, Eduardo; Bragança, Helena; Drumond, R; Aguiar, M C O; França Santos, M

    2016-05-18

    We study the nonequilibrium interplay between disorder and interactions in a closed quantum system. We base our analysis on the notion of dynamical state-space localization, calculated via the Loschmidt echo. Although real-space and state-space localization are independent concepts in general, we show that both perspectives may be directly connected through a specific choice of initial states, namely, maximally localized states (ML-states). We show numerically that in the noninteracting case the average echo is monotonically increasing with increasing disorder; these results are in agreement with an analytical evaluation in the single-particle case, in which the echo is found to be inversely proportional to the localization length. We also show that for interacting systems the length scale below which equilibration may occur is bounded from above, and that this bound decreases as the average echo of ML-states increases. When disorder and interactions, both being localization mechanisms, are simultaneously at play, the echo features a non-monotonic behaviour, indicating a non-trivial interplay of the two processes. This interplay induces delocalization of the dynamics, which is accompanied by delocalization in real space. This non-monotonic behaviour is also present in the effective integrability, which we show by evaluating the gap statistics.

  15. An analysis of the vertical structure equation for arbitrary thermal profiles

    NASA Technical Reports Server (NTRS)

    Cohn, Stephen E.; Dee, Dick P.

    1989-01-01

    The vertical structure equation is a singular Sturm-Liouville problem whose eigenfunctions describe the vertical dependence of the normal modes of the primitive equations linearized about a given thermal profile. The eigenvalues give the equivalent depths of the modes. The spectrum of the vertical structure equation and the appropriateness of various upper boundary conditions were studied for arbitrary thermal profiles. The results depend critically upon whether or not the thermal profile is such that the basic state atmosphere is bounded. In the case of a bounded atmosphere it is shown that the spectrum is always totally discrete, regardless of the details of the thermal profile. For the barotropic equivalent depth, which corresponds to the lowest eigenvalue, upper and lower bounds that depend only on the surface temperature and the atmosphere height were obtained. All eigenfunctions are bounded, but always have unbounded first derivatives. It was proved that the commonly invoked upper boundary condition that vertical velocity must vanish as pressure tends to zero, as well as a number of alternative conditions, is well posed. It was concluded that the vertical structure equation always has a totally discrete spectrum under the assumptions implicit in the primitive equations.

  17. Ultimate energy density of observable cold baryonic matter.

    PubMed

    Lattimer, James M; Prakash, Madappa

    2005-03-25

    We demonstrate that the largest measured mass of a neutron star establishes an upper bound to the energy density of observable cold baryonic matter. An equation-of-state-independent expression, satisfied by both normal neutron stars and self-bound quark matter stars, is derived for the largest energy density of matter inside stars as a function of their masses. The largest observed mass sets the lowest upper limit to the density. Implications from existing and future neutron star mass measurements are discussed.

  18. Semiannual Report, October 1, 1989 through March 31, 1990 (Institute for Computer Applications in Science and Engineering)

    DTIC Science & Technology

    1990-06-01

    synchronization. We consider the performance of various synchronization protocols by deriving upper and lower bounds on optimal performance, upper bounds on Time ... from universities and from industry, who have resident appointments for limited periods of time, and by consultants. Members of NASA's research staff ... convergence to steady state is also being studied together with D. Gottlieb. The idea is to generalize the concept of local-time stepping by minimizing the

  19. Generalized monogamy inequalities and upper bounds of negativity for multiqubit systems

    NASA Astrophysics Data System (ADS)

    Yang, Yanmin; Chen, Wei; Li, Gang; Zheng, Zhu-Jun

    2018-01-01

    In this paper, we present some generalized monogamy inequalities and upper bounds of negativity based on convex-roof extended negativity (CREN) and CREN of assistance (CRENOA). These monogamy relations are satisfied by the negativity of N-qubit quantum systems ABC_1⋯C_{N-2}, under the partitions AB|C_1⋯C_{N-2} and ABC_1|C_2⋯C_{N-2}. Furthermore, the W-class states are used to test these generalized monogamy inequalities.

  20. Computing an upper bound on contact stress with surrogate duality

    NASA Astrophysics Data System (ADS)

    Xuan, Zhaocheng; Papadopoulos, Panayiotis

    2016-07-01

    We present a method for computing an upper bound on the contact stress of elastic bodies. The continuum model of elastic bodies with contact is first modeled as a constrained optimization problem by using finite elements. An explicit formulation of the total contact force, a fraction function with a linear numerator and a quadratic convex denominator, is derived with only the normalized nodal contact forces as the constrained variables in a standard simplex. Then two bounds are obtained for the sum of the nodal contact forces. The first is an explicit formulation in terms of the matrices of the finite element model, derived by maximizing the fraction function under the constraint that the sum of the normalized nodal contact forces is one. The second bound is obtained by first maximizing the fraction function subject to the standard simplex and then using Dinkelbach's algorithm for fractional programming to find the maximum, since the fraction function is pseudoconcave in a neighborhood of the solution. These two bounds are solved with the problem dimension being only the number of contact nodes or node pairs, which is much smaller than the dimension of the original problem, namely, the number of degrees of freedom. Next, a scheme for constructing an upper bound on the contact stress is proposed that uses the bounds on the sum of the nodal contact forces obtained on a fine finite element mesh and the nodal contact forces obtained on a coarse finite element mesh, which are problems that can be solved at a lower computational cost. Finally, the proposed method is verified through examples concerning both frictionless and frictional contact to demonstrate the method's feasibility, efficiency, and robustness.
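
    Dinkelbach's algorithm, mentioned above, is a standard parametric method for fractional programming: at each step it maximizes N(x) - λ D(x) and updates λ to the current ratio. The sketch below applies it to an unconstrained toy fraction with a linear numerator and a convex quadratic denominator; it is not the simplex-constrained contact-force problem of the paper, and all function names and tolerances are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def dinkelbach(num, den, x0, tol=1e-8, max_iter=50):
        """Maximize num(x)/den(x) (den > 0) by Dinkelbach's parametric method:
        repeatedly solve max_x num(x) - lam*den(x), then set lam = num(x)/den(x)."""
        x = np.asarray(x0, dtype=float)
        lam = num(x) / den(x)
        for _ in range(max_iter):
            res = minimize(lambda z: -(num(z) - lam * den(z)), x)  # inner maximization
            x = res.x
            if abs(num(x) - lam * den(x)) < tol:   # F(lam) ~ 0 at the optimum
                break
            lam = num(x) / den(x)
        return x, num(x) / den(x)

    # Toy example (not the contact problem): maximize (1 + x1 + x2) / (1 + x1^2 + x2^2).
    num = lambda x: 1.0 + x[0] + x[1]
    den = lambda x: 1.0 + x[0] ** 2 + x[1] ** 2
    x_star, val = dinkelbach(num, den, x0=[0.0, 0.0])
    print(x_star, val)
    ```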

  1. Performance bounds on parallel self-initiating discrete-event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The use of massively parallel architectures to execute discrete-event simulations of what are termed self-initiating models is considered. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance of various synchronization protocols is considered by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.

  2. Bounds on the Coupling of the Majoron to Light Neutrinos from Supernova Cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farzan, Yasaman

    2002-12-02

    We explore the role of Majoron (J) emission in the supernova cooling process as a source of upper bounds on the neutrino-Majoron coupling. We show that the strongest upper bound on the coupling to ν_3 comes from the ν_e ν_e → J process in the core of a supernova. We also find bounds on diagonal couplings of the Majoron to ν_μ(τ) ν_μ(τ) and on off-diagonal ν_e ν_μ(τ) couplings in various regions of the parameter space. We discuss the evaluation of cross-sections for four-particle interactions (νν → JJ and νJ → νJ). We show that these are typically dominated by three-particle sub-processes and do not give new independent constraints.

  3. A linear programming approach to max-sum problem: a review.

    PubMed

    Werner, Tomás

    2007-07-01

    The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.

  4. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest to use kernel K-means sampling, which is shown in our works to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the K-means error of data points in kernel space plus a constant. Thus, the K-means centers of data in kernel space, or the kernel K-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
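
    The sampling idea can be illustrated with a few lines of standard tooling. The sketch below builds a Nyström approximation K ≈ C W⁺ Cᵀ with landmarks taken from k-means centers; note that it clusters in input space with ordinary k-means as a simpler stand-in for the kernel K-means of the paper, and the data, kernel width, and landmark count are arbitrary.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    gamma = 0.5

    # Landmarks from k-means centers (clustering-based sampling, cf. the abstract).
    m = 30
    centers = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_

    # Nystrom approximation: K ~ C W^+ C^T with C = k(X, centers), W = k(centers, centers).
    C = rbf_kernel(X, centers, gamma=gamma)
    W = rbf_kernel(centers, centers, gamma=gamma)
    K_approx = C @ np.linalg.pinv(W) @ C.T

    K_exact = rbf_kernel(X, X, gamma=gamma)
    err = np.linalg.norm(K_exact - K_approx, "fro") / np.linalg.norm(K_exact, "fro")
    print(f"relative Frobenius error of the Nystrom approximation: {err:.4f}")
    ```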

  5. ϕ^3 theory with F_4 flavor symmetry in 6 - 2ε dimensions: 3-loop renormalization and conformal bootstrap

    NASA Astrophysics Data System (ADS)

    Pang, Yi; Rong, Junchen; Su, Ning

    2016-12-01

    We consider ϕ^3 theory in 6 - 2ε dimensions with F_4 global symmetry. The beta function is calculated up to 3 loops, and a stable unitary IR fixed point is observed. The anomalous dimensions of operators quadratic or cubic in ϕ are also computed. We then employ the conformal bootstrap technique to study the fixed point predicted from the perturbative approach. For each putative scaling dimension of ϕ (Δ_ϕ), we obtain the corresponding upper bound on the scaling dimension of the second-lowest scalar primary in the 26 representation (Δ_26^(2nd)), which appears in the OPE of ϕ × ϕ. In D = 5.95, we observe a sharp peak on the upper bound curve located at Δ_ϕ equal to the value predicted by the 3-loop computation. In D = 5, we observe a weak kink on the upper bound curve at (Δ_ϕ, Δ_26^(2nd)) = (1.6, 4).

  6. Strong polygamy of quantum correlations in multi-party quantum systems

    NASA Astrophysics Data System (ADS)

    Kim, Jeong San

    2014-10-01

    We propose a new type of polygamy inequality for multi-party quantum entanglement. We first consider the possible amount of bipartite entanglement distributed between a fixed party and any subset of the remaining parties in a multi-party quantum system. By using the summation of these distributed entanglements, we provide an upper bound on the entanglement distributed between one party and the rest in multi-party quantum systems. We then show that this upper bound also serves as a lower bound for the usual polygamy inequality, which establishes the strong polygamy of multi-party quantum entanglement. For the case of multi-party pure states, we further show that the strong polygamy of entanglement implies the strong polygamy of quantum discord.

  7. Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems

    NASA Astrophysics Data System (ADS)

    Tobasco, Ian; Goluskin, David; Doering, Charles R.

    2018-02-01

    For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.

  8. Upper bounds on quantum uncertainty products and complexity measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrero, Angel; Sanchez-Moreno, Pablo; Dehesa, Jesus S.

    The position-momentum Shannon and Renyi uncertainty products of general quantum systems are shown to be bounded not only from below (through the known uncertainty relations), but also from above in terms of the Heisenberg-Kennard product. Moreover, the Cramer-Rao, Fisher-Shannon, and Lopez-Ruiz, Mancini, and Calbet shape measures of complexity (whose lower bounds have been recently found) are also bounded from above. The improvement of these bounds for systems subject to spherically symmetric potentials is also given explicitly. Finally, applications to hydrogenic and oscillator-like systems are presented.

  9. Minimum-error quantum distinguishability bounds from matrix monotone functions: A comment on 'Two-sided estimates of minimum-error distinguishability of mixed quantum states via generalized Holevo-Curlander bounds' [J. Math. Phys. 50, 032106 (2009)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyson, Jon

    2009-06-15

    Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.

  10. Bond additive modeling 10. Upper and lower bounds of bond incident degree indices of catacondensed fluoranthenes

    NASA Astrophysics Data System (ADS)

    Vukičević, Damir; Đurđević, Jelena

    2011-10-01

    Bond incident degree index is a descriptor that is calculated as the sum of the bond contributions such that each bond contribution depends solely on the degrees of its incident vertices (e.g. Randić index, Zagreb index, modified Zagreb index, variable Randić index, atom-bond connectivity index, augmented Zagreb index, sum-connectivity index, many Adriatic indices, and many variable Adriatic indices). In this Letter we find tight upper and lower bounds for bond incident degree index for catacondensed fluoranthenes with given number of hexagons.

  11. Beating the photon-number-splitting attack in practical quantum cryptography.

    PubMed

    Wang, Xiang-Bin

    2005-06-17

    We propose an efficient method to verify the upper bound of the fraction of counts caused by multiphoton pulses in practical quantum key distribution using weak coherent light, given whatever type of Eve's action. The protocol simply uses two coherent states for the signal pulses and vacuum for the decoy pulse. Our verified upper bound is sufficiently tight for quantum key distribution with a very lossy channel, in both the asymptotic and nonasymptotic case. So far our protocol is the only decoy-state protocol that works efficiently for currently existing setups.

  12. The local interstellar helium density - Corrected

    NASA Technical Reports Server (NTRS)

    Freeman, J.; Paresce, F.; Bowyer, S.

    1979-01-01

    An upper bound for the number density of neutral helium in the local interstellar medium of 0.004 ± 0.0022 per cu cm was previously reported, based on extreme-ultraviolet telescope observations at 584 A made during the 1975 Apollo-Soyuz Test Project. A variety of evidence is found which indicates that the 584-A sensitivity of the instrument declined by a factor of 2 between the last laboratory calibration and the time of the measurements. The upper bound on the helium density is therefore revised to 0.0089 ± 0.005 per cu cm.

  13. Upper bound on three-tangles of reduced states of four-qubit pure states

    NASA Astrophysics Data System (ADS)

    Sharma, S. Shelly; Sharma, N. K.

    2017-06-01

    Closed formulas for upper bounds on three-tangles of three-qubit reduced states in terms of three-qubit-invariant polynomials of pure four-qubit states are obtained. Our results offer tighter constraints on total three-way entanglement of a given qubit with the rest of the system than those used by Regula et al. [Phys. Rev. Lett. 113, 110501 (2014), 10.1103/PhysRevLett.113.110501; Phys. Rev. Lett. 116, 049902(E) (2016), 10.1103/PhysRevLett.116.049902] to verify monogamy of four-qubit quantum entanglement.

  14. Sputnik Planitia, Pluto Convection Cell Surface Velocities of ~10 Centimeters per Year Based on Sublimation Pit Distribution

    NASA Astrophysics Data System (ADS)

    Buhler, Peter Benjamin; Ingersoll, Andrew P.

    2017-10-01

    Sputnik Planitia, Pluto contains cellular landforms with areas on the order of a few 10^2-10^3 km^2 that are likely the surface manifestation of convective overturn in a vast basin of nitrogen ice. The cells have sublimation pits on them, with smaller pits near their centers and larger pits near their edges. We map over 12,000 pits on seven cells and find that the pit radii increase by between 2.1 ± 0.4 and 5.9 ± 0.8 × 10^-3 m per meter away from the cell center, depending on the cell. Due to finite data resolution, this is a lower bound on the size increase. Conservatively accounting for resolution effects yields upper bounds on the size vs. distance distribution of 4.2 ± 0.2 to 23.4 ± 1.5 × 10^-3 m m^-1. In order to convert the pit size vs. distance distribution into a pit age vs. distance distribution, we use an analytic model to calculate that pit radii grow via sublimation at a rate of 3.6 [+2.1,-0.6] × 10^-4 m yr^-1. Combined with the mapped distribution of pit radii, this yields surface velocities between 1.5 [+1.0,-0.2] and 6.2 [+3.4,-1.4] cm yr^-1 for the slowest cell and surface velocities between 8.1 [+5.5,-1.0] and 17.9 [+8.9,-5.1] cm yr^-1 for the fastest cell; the lower bound estimate for each cell accounts for resolution effects, while the upper bound estimate does not. These convection rates imply that the surface ages at the edge of cells reach approximately 4.2 to 8.9 × 10^5 yr, depending on the cell. The rates we find are comparable to rates of ~6 cm yr^-1 that were previously obtained from modeling of the convective overturn in Sputnik Planitia [McKinnon, W.B. et al., 2016, Nature, 534(7605), 82-85]. Finally, we find that the minimum viscosity at the surface of the convection cells is of order 10^16 to 10^17 Pa s; we find that pits would relax away before sublimating to their observed radii of several hundred meters if the viscosity were lower than this value.
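
    The velocity estimate is essentially a unit conversion: the modeled pit-growth rate (m of radius per year) divided by the mapped pit-size gradient (m of radius per m of distance from the cell center). The sketch below redoes that arithmetic with the abstract's central values; it only roughly reproduces the reported velocities because the published numbers carry asymmetric uncertainties that are ignored here.

    ```python
    growth_rate = 3.6e-4  # m/yr, modeled sublimation growth of pit radius

    # Pit-size gradients (m of radius per m of distance), central values from the abstract.
    gradients = {
        "slowest cell, resolution-corrected": 23.4e-3,
        "slowest cell, uncorrected":           5.9e-3,
        "fastest cell, resolution-corrected":  4.2e-3,
        "fastest cell, uncorrected":           2.1e-3,
    }

    for label, grad in gradients.items():
        v_cm_per_yr = 100.0 * growth_rate / grad   # convert m/yr to cm/yr
        print(f"{label}: ~{v_cm_per_yr:.1f} cm/yr")
    ```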

  15. Planck limits on non-canonical generalizations of large-field inflation models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Nina K.; Kinney, William H., E-mail: ninastei@buffalo.edu, E-mail: whkinney@buffalo.edu

    2017-04-01

    In this paper, we consider two case examples of Dirac-Born-Infeld (DBI) generalizations of canonical large-field inflation models, characterized by a reduced sound speed, c_S < 1. The reduced speed of sound lowers the tensor-scalar ratio, improving the fit of the models to the data, but increases the equilateral-mode non-Gaussianity, f_NL^equil., which the latest results from the Planck satellite constrain by a new upper bound. We examine constraints on these models in light of the most recent Planck and BICEP/Keck results, and find that they have a greatly decreased window of viability. The upper bound on f_NL^equil. corresponds to a lower bound on the sound speed and a corresponding lower bound on the tensor-scalar ratio of r ∼ 0.01, so that near-future Cosmic Microwave Background observations may be capable of ruling out entire classes of DBI inflation models. The result is, however, not universal: infrared-type DBI inflation models, where the speed of sound increases with time, are not subject to the bound.

  16. A fast algorithm for determining bounds and accurate approximate p-values of the rank product statistic for replicate experiments.

    PubMed

    Heskes, Tom; Eisinga, Rob; Breitling, Rainer

    2014-11-21

    The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Both exact calculation and permutation and gamma approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy than existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis on transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order of magnitude increase in throughput as compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip.
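
    For readers unfamiliar with the statistic itself: the rank product of a molecule is the geometric mean of its ranks across the replicates, and the baseline way to assess its significance is a permutation test, which is exactly the computationally burdensome route the bounds are meant to replace. The sketch below shows both on synthetic ranks; it does not implement the paper's bounds or its approximation, and all sizes are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def rank_product(ranks):
        """Rank product statistic: geometric mean of a molecule's ranks across k replicates."""
        return np.prod(ranks, axis=1) ** (1.0 / ranks.shape[1])

    # Toy data: ranks of n molecules in k replicates (1 = most differentially expressed).
    n, k = 1000, 5
    ranks = np.column_stack([rng.permutation(n) + 1 for _ in range(k)])
    rp = rank_product(ranks)

    # Permutation p-value for the best (smallest) rank product -- slow and coarse by design.
    observed = rp.min()
    perms = 2000
    null = np.array([rank_product(np.column_stack(
        [rng.permutation(n) + 1 for _ in range(k)])).min() for _ in range(perms)])
    p_value = (1 + np.sum(null <= observed)) / (perms + 1)
    print(f"permutation p-value for the smallest rank product: {p_value:.4f}")
    ```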

  17. Energy Bounds for a Compressed Elastic Film on a Substrate

    NASA Astrophysics Data System (ADS)

    Bourne, David P.; Conti, Sergio; Müller, Stefan

    2017-04-01

    We study pattern formation in a compressed elastic film which delaminates from a substrate. Our key tool is the determination of rigorous upper and lower bounds on the minimum value of a suitable energy functional. The energy consists of two parts, describing the two main physical effects. The first part represents the elastic energy of the film, which is approximated using the von Kármán plate theory. The second part represents the fracture or delamination energy, which is approximated using the Griffith model of fracture. A simpler model containing the first term alone was previously studied with similar methods by several authors, assuming that the delaminated region is fixed. We include the fracture term, transforming the elastic minimisation into a free boundary problem, and opening the way for patterns which result from the interplay of elasticity and delamination. After rescaling, the energy depends on only two parameters: the rescaled film thickness, σ, and a measure of the bonding strength between the film and substrate, γ. We prove upper bounds on the minimum energy of the form σ^a γ^b and find that there are four different parameter regimes corresponding to different values of a and b and to different folding patterns of the film. In some cases, the upper bounds are attained by self-similar folding patterns as observed in experiments. Moreover, for two of the four parameter regimes we prove matching, optimal lower bounds.

  18. Confidence intervals for the between-study variance in random-effects meta-analysis using generalised heterogeneity statistics: should we use unequal tails?

    PubMed

    Jackson, Dan; Bowden, Jack

    2016-09-07

    Confidence intervals for the between study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95 % confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5 %. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95 % confidence intervals for the between-study variance. We also show some further results for a real example that illustrates how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95 % confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4 % split', where greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.

  19. Value-of-information analysis within a stakeholder-driven research prioritization process in a US setting: an application in cancer genomics.

    PubMed

    Carlson, Josh J; Thariani, Rahber; Roth, Josh; Gralow, Julie; Henry, N Lynn; Esmail, Laura; Deverka, Pat; Ramsey, Scott D; Baker, Laurence; Veenstra, David L

    2013-05-01

    The objective of this study was to evaluate the feasibility and outcomes of incorporating value-of-information (VOI) analysis into a stakeholder-driven research prioritization process in a US-based setting. Within a program to prioritize comparative effectiveness research areas in cancer genomics, over a period of 7 months, we developed decision-analytic models and calculated upper-bound VOI estimates for 3 previously selected genomic tests. Thirteen stakeholders representing patient advocates, payers, test developers, regulators, policy makers, and community-based oncologists ranked the tests before and after receiving VOI results. The stakeholders were surveyed about the usefulness and impact of the VOI findings. The estimated upper-bound VOI ranged from $33 million to $2.8 billion for the 3 research areas. Seven stakeholders indicated the results modified their rankings, 9 stated VOI data were useful, and all indicated they would support its use in future prioritization processes. Some stakeholders indicated that expected value of sampled information might be the preferred choice when evaluating specific research areas. Limitations: Our study was limited by the size and the potential for selection bias in the composition of the external stakeholder group, lack of a randomized design to assess the effect of VOI data on rankings, and the use of expected value of perfect information vs. expected value of sample information methods. Value-of-information analyses may have a meaningful role in research topic prioritization for comparative effectiveness research in the United States, particularly when large differences in VOI across topic areas are identified. Additional research is needed to facilitate the use of more complex value-of-information analyses in this setting.

  20. A statistical study of gyro-averaging effects in a reduced model of drift-wave transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca, Julio; Del-Castillo-Negrete, Diego B.; Sokolov, Igor M.

    2016-08-25

    Here, a statistical study of finite Larmor radius (FLR) effects on transport driven by electrostatic drift waves is presented. The study is based on a reduced discrete Hamiltonian dynamical system known as the gyro-averaged standard map (GSM). In this system, FLR effects are incorporated through the gyro-averaging of a simplified weak-turbulence model of electrostatic fluctuations. Formally, the GSM is a modified version of the standard map in which the perturbation amplitude, K_0, becomes K_0 J_0(p̂), where J_0 is the zeroth-order Bessel function and p̂ is the Larmor radius. Assuming a Maxwellian probability density function (pdf) for p̂, we compute analytically and numerically the pdf and the cumulative distribution function of the effective drift-wave perturbation amplitude K_0 J_0(p̂). Using these results, we compute the probability of loss of confinement (i.e., global chaos), P_c, and the trapping probability, P_t. It is shown that P_c provides an upper bound for the escape rate and that P_t provides a good estimate of the particle trapping rate. Lastly, the analytical results are compared with direct numerical Monte-Carlo simulations of particle transport.
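
    A minimal implementation of the map itself is easy to write down. The sketch below assumes the usual standard-map form with the kick amplitude rescaled by the zeroth-order Bessel factor, as the abstract describes; the orbit parameters, the fixed Larmor radius values (rho stands for the p̂ of the abstract), and the use of the momentum spread as a crude chaos indicator are all illustrative choices.

    ```python
    import numpy as np
    from scipy.special import j0

    def gsm_orbit(x0, p0, K0, rho, n_steps=1000):
        """Iterate a gyro-averaged standard map: the kick K0 is replaced by K0*J0(rho)."""
        K_eff = K0 * j0(rho)               # gyro-averaged perturbation amplitude
        x, p = x0, p0
        xs, ps = [x], [p]
        for _ in range(n_steps):
            p = p + K_eff * np.sin(x)      # kick
            x = (x + p) % (2 * np.pi)      # free streaming, taken mod 2*pi
            xs.append(x)
            ps.append(p)
        return np.array(xs), np.array(ps)

    # Larger Larmor radii suppress the effective kick and hence the degree of chaos;
    # rho = 2.4 sits near the first zero of J0, where the map is almost integrable.
    for rho in (0.0, 1.0, 2.4):
        xs, ps = gsm_orbit(x0=0.5, p0=0.2, K0=1.5, rho=rho)
        print(f"rho = {rho}: effective K = {1.5 * j0(rho):+.3f}, spread in p = {ps.std():.3f}")
    ```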

  1. State-dependent anisotropy: Comparison of quasi-analytical solutions with stochastic results for steady gravity drainage

    USGS Publications Warehouse

    Green, Timothy R.; Freyberg, David L.

    1995-01-01

    Anisotropy in large-scale unsaturated hydraulic conductivity of layered soils changes with the moisture state. Here, state-dependent anisotropy is computed under conditions of large-scale gravity drainage. Soils represented by Gardner's exponential function are perfectly stratified, periodic, and inclined. Analytical integration of Darcy’s law across each layer results in a system of nonlinear equations that is solved iteratively for capillary suction at layer interfaces and for the Darcy flux normal to layering. Computed fluxes and suction profiles are used to determine both upscaled hydraulic conductivity in the principal directions and the corresponding “state-dependent” anisotropy ratio as functions of the mean suction. Three groups of layered soils are analyzed and compared with independent predictions from the stochastic results of Yeh et al. (1985b). The small-perturbation approach predicts appropriate behaviors for anisotropy under nonarid conditions. However, the stochastic results are limited to moderate values of mean suction; this limitation is linked to a Taylor series approximation in terms of a group of statistical and geometric parameters. Two alternative forms of the Taylor series provide upper and lower bounds for the state-dependent anisotropy of relatively dry soils.
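
    The flavor of the calculation can be conveyed with a much cruder model than the paper's. The sketch below assumes a uniform suction across thin Gardner-type layers, in which case the upscaled conductivity parallel to layering is the thickness-weighted arithmetic mean, the normal component is the harmonic mean, and their ratio is the anisotropy; the layer properties are invented, and the paper's quasi-analytical scheme instead solves for the suction at every layer interface under gravity drainage.

    ```python
    import numpy as np

    # Gardner conductivities K_i(psi) = Ks_i * exp(-alpha_i * psi); all values illustrative.
    Ks = np.array([1e-4, 1e-6, 5e-5])       # saturated conductivities, m/s
    alpha = np.array([5.0, 1.0, 3.0])       # Gardner parameters, 1/m
    thickness = np.array([0.2, 0.1, 0.3])   # layer thicknesses, m
    w = thickness / thickness.sum()

    for psi in (0.0, 0.5, 1.0, 2.0):        # mean suction head, m
        K = Ks * np.exp(-alpha * psi)
        K_parallel = np.sum(w * K)          # arithmetic mean: flow along the layers
        K_normal = 1.0 / np.sum(w / K)      # harmonic mean: flow across the layers
        print(f"psi = {psi:4.1f} m   anisotropy K_par/K_norm = {K_parallel / K_normal:9.2f}")
    ```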

  2. Saturn's very axisymmetric magnetic field: No detectable secular variation or tilt

    NASA Astrophysics Data System (ADS)

    Cao, Hao; Russell, Christopher T.; Christensen, Ulrich R.; Dougherty, Michele K.; Burton, Marcia E.

    2011-04-01

    Saturn is the only planet in the solar system whose observed magnetic field is highly axisymmetric. At least a small deviation from perfect symmetry is required for a dynamo-generated magnetic field. Analyzing more than six years of magnetometer data obtained by Cassini close to the planet, we show that Saturn's observed field is much more axisymmetric than previously thought. We invert the magnetometer observations that were obtained in the "current-free" inner magnetosphere for an internal model, varying the assumed unknown rotation rate of Saturn's deep interior. No unambiguous non-axially symmetric magnetic moment is detected, with a new upper bound on the dipole tilt of 0.06°. An axisymmetric internal model with Schmidt-normalized spherical harmonic coefficients g_1^0 = 21,191 ± 24 nT, g_2^0 = 1586 ± 7 nT, and g_3^0 = 2374 ± 47 nT is derived from these measurements; the upper bounds on the axial degree 4 and 5 terms are 720 nT and 3200 nT, respectively. The secular variation for the last 30 years is within the probable error of each term from degree 1 to 3, and the upper bounds are an order of magnitude smaller than the similar terrestrial terms for degrees 1 and 2. Differentially rotating conducting stable layers above Saturn's dynamo region have been proposed to symmetrize the magnetic field (Stevenson, 1982). The new upper bound on the dipole tilt implies that this stable layer must have a thickness L >= 4000 km, and this thickness is consistent with our weak secular variation observations.

  3. Towards full band colorless reception with coherent balanced receivers.

    PubMed

    Zhang, Bo; Malouin, Christian; Schmidt, Theodore J

    2012-04-23

    In addition to linear compensation of fiber channel impairments, coherent receivers also provide colorless selection of any desired data channel within multitude of incident wavelengths, without the need of a channel selecting filter. In this paper, we investigate the design requirements for colorless reception using a coherent balanced receiver, considering both the optical front end (OFE) and the transimpedance amplifier (TIA). We develop analytical models to predict the system performance as a function of receiver design parameters and show good agreement against numerical simulations. At low input signal power, an optimum local oscillator (LO) power is shown to exist where the thermal noise is balanced with the residual LO-RIN beat noise. At high input signal power, we show the dominant noise effect is the residual self-beat noise from the out of band (OOB) channels, which scales not only with the number of OOB channels and the common mode rejection ratio (CMRR) of the OFE, but also depends on the link residual chromatic dispersion (CD) and the orientation of the polarization tributaries relative to the receiver. This residual self-beat noise from OOB channels sets the lower bound for the LO power. We also investigate the limitations imposed by overload in the TIA, showing analytically that the DC current scales only with the number of OOB channels, while the differential AC current scales only with the link residual CD, which induces high peak-to-average power ratio (PAPR). Both DC and AC currents at the input to the TIA set the upper bounds for the LO power. Considering both the OFE noise limit and the TIA overload limit, we show that the receiver operating range is notably narrowed for dispersion unmanaged links, as compared to dispersion managed links. © 2012 Optical Society of America

  4. Biodegradation kinetics for pesticide exposure assessment.

    PubMed

    Wolt, J D; Nelson, H P; Cleveland, C B; van Wesenbeeck, I J

    2001-01-01

    Understanding pesticide risks requires characterizing pesticide exposure within the environment in a manner that can be broadly generalized across widely varied conditions of use. The coupled processes of sorption and soil degradation are especially important for understanding the potential environmental exposure of pesticides. The data obtained from degradation studies are inherently variable and, when limited in extent, lend uncertainty to exposure characterization and risk assessment. Pesticide decline in soils reflects dynamically coupled processes of sorption and degradation that add complexity to the treatment of soil biodegradation data from a kinetic perspective. Additional complexity arises from study design limitations that may not fully account for the decline in microbial activity of test systems, or that may be inadequate for considerations of all potential dissipation routes for a given pesticide. Accordingly, kinetic treatment of data must accommodate a variety of differing approaches starting with very simple assumptions as to reaction dynamics and extending to more involved treatments if warranted by the available experimental data. Selection of the appropriate kinetic model to describe pesticide degradation should rely on statistical evaluation of the data fit to ensure that the models used are not overparameterized. Recognizing the effects of experimental conditions and methods for kinetic treatment of degradation data is critical for making appropriate comparisons among pesticide biodegradation data sets. Assessment of variability in soil half-life among soils is uncertain because for many pesticides the data on soil degradation rate are limited to one or two soils. Reasonable upper-bound estimates of soil half-life are necessary in risk assessment so that estimated environmental concentrations can be developed from exposure models. Thus, an understanding of the variable and uncertain distribution of soil half-lives in the environment is necessary to estimate bounding values. Statistical evaluation of measures of central tendency for multisoil kinetic studies shows that geometric means better represent the distribution in soil half-lives than do the arithmetic or harmonic means. Estimates of upper-bound soil half-life values based on the upper 90% confidence bound on the geometric mean tend to accurately represent the upper bound when pesticide degradation rate is biologically driven but appear to overestimate the upper bound when there is extensive coupling of biodegradation with sorptive processes. The limited data available comparing distribution in pesticide soil half-lives between multisoil laboratory studies and multilocation field studies suggest that the probability density functions are similar. Thus, upper-bound estimates of pesticide half-life determined from laboratory studies conservatively represent pesticide biodegradation in the field environment for the purposes of exposure and risk assessment. International guidelines and approaches used for interpretations of soil biodegradation reflect many common elements, but differ in how the source and nature of variability in soil kinetic data are considered. Harmonization of approaches for the use of soil biodegradation data will improve the interpretative power of these data for the purposes of exposure and risk assessment.
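
    One common way to produce the upper-bound half-life described above is to work on the log scale: take the geometric mean of the measured half-lives and the one-sided upper 90% confidence bound on that mean, then back-transform. The sketch below does exactly that with invented half-lives; it is an illustration of the statistic, not a reproduction of the regulatory protocols discussed in the text.

    ```python
    import numpy as np
    from scipy import stats

    half_lives = np.array([12.0, 20.0, 35.0, 8.0, 15.0])   # days, illustrative values

    logs = np.log(half_lives)
    n = len(logs)
    geo_mean = np.exp(logs.mean())

    # One-sided upper 90% confidence bound on the mean log half-life, back-transformed.
    t_crit = stats.t.ppf(0.90, df=n - 1)
    upper = np.exp(logs.mean() + t_crit * logs.std(ddof=1) / np.sqrt(n))

    print(f"geometric mean half-life: {geo_mean:.1f} days")
    print(f"upper 90% confidence bound (upper-bound half-life for exposure modeling): {upper:.1f} days")
    ```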

  5. Jarzynski equality: connections to thermodynamics and the second law.

    PubMed

    Palmieri, Benoit; Ronis, David

    2007-01-01

    The one-dimensional expanding ideal gas model is used to compute the exact nonequilibrium distribution function. The state of the system during the expansion is defined in terms of local thermodynamics quantities. The final equilibrium free energy, obtained a long time after the expansion, is compared against the free energy that appears in the Jarzynski equality. Within this model, where the Jarzynski equality holds rigorously, the free energy change that appears in the equality does not equal the actual free energy change of the system at any time of the process. More generally, the work bound that is obtained from the Jarzynski equality is an upper bound to the upper bound that is obtained from the first and second laws of thermodynamics. The cancellation of the dissipative (nonequilibrium) terms that result in the Jarzynski equality is shown in the framework of response theory. This is used to show that the intuitive assumption that the Jarzynski work bound becomes equal to the average work done when the system evolves quasistatically is incorrect under some conditions.
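
    The work bound referred to above follows directly from the Jarzynski equality <exp(-βW)> = exp(-βΔF) together with Jensen's inequality, which gives <W> ≥ ΔF. The toy check below samples a Gaussian work distribution, for which the equality fixes ΔF = <W> - βσ²/2 exactly; it is only a numerical illustration and is unrelated to the paper's one-dimensional ideal-gas model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    beta = 1.0
    mean_W, std_W = 2.0, 1.0
    W = rng.normal(mean_W, std_W, size=1_000_000)   # Gaussian work distribution (toy model)

    dF_sampled = -np.log(np.mean(np.exp(-beta * W))) / beta   # Jarzynski estimate of dF
    dF_exact = mean_W - beta * std_W**2 / 2                   # exact for a Gaussian

    print(f"dF from the sampled Jarzynski average:     {dF_sampled:.3f}")
    print(f"dF exact for a Gaussian work distribution: {dF_exact:.3f}")
    print(f"<W> = {W.mean():.3f} >= dF, consistent with the second-law bound")
    ```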

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Toomey, Bridget

    Evolving power systems with increasing levels of stochasticity call for a need to solve optimal power flow problems with large quantities of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
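
    The role Boole's inequality plays here is easy to see in a toy Monte Carlo experiment: if each of m constraints is allowed to fail with probability at most ε/m, the union bound guarantees the joint constraint holds with probability at least 1 - ε, usually with room to spare. The sketch below demonstrates this on an invented correlated-Gaussian "voltage" model; it is not the AC optimal power flow formulation of the paper, nor its tightened bound.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    m, eps = 5, 0.05
    n_samples = 200_000

    # Correlated Gaussian voltage deviations at m buses (illustrative covariance).
    cov = 0.5 * np.ones((m, m)) + 0.5 * np.eye(m)
    x = rng.multivariate_normal(np.zeros(m), cov, size=n_samples)

    # Per-constraint limits chosen so that each individual violation probability is eps/m.
    limit = stats.norm.ppf(1 - eps / m)
    individual = (x > limit).mean(axis=0)            # each close to eps/m = 0.01
    joint_violation = (x > limit).any(axis=1).mean()

    print("individual violation probabilities:", np.round(individual, 4))
    print(f"joint violation probability: {joint_violation:.4f} <= eps = {eps} (Boole is conservative)")
    ```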

  7. Lorenz curves in a new science-funding model

    NASA Astrophysics Data System (ADS)

    Huang, Ding-wei

    2017-12-01

    We propose an agent-based model to theoretically and systematically explore the implications of a new approach to fund science, which has been suggested recently by J. Bollen et al. We introduce various parameters and examine their effects. The concentration of funding is shown by the Lorenz curve and the Gini coefficient. In this model, all scientists are treated equally and follow the well-intended regulations. All scientists give a fixed ratio of their funding to others. The fixed ratio becomes an upper bound for the Gini coefficient. We observe two distinct regimes in the parameter space: valley and plateau. In the valley regime, the fluidity of funding is significant. The Lorenz curve is smooth. The Gini coefficient is well below the upper bound. The funding distribution is the desired result. In the plateau regime, the cumulative advantage is significant. The Lorenz curve has a sharp turn. The Gini coefficient saturates to the upper bound. The undue concentration of funding happens swiftly. The funding distribution is the undesired result, where a minority of scientists take the majority of funding. Phase transitions between these two regimes are discussed.
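
    Since the Lorenz curve and the Gini coefficient are the model's main diagnostics, a small helper for computing them is shown below. The funding vectors are invented, not output of the agent-based model, and the discrete formula is the standard one based on sorted cumulative shares.

    ```python
    import numpy as np

    def gini(funding):
        """Gini coefficient of a funding vector (0 = perfectly equal, 1 = maximally concentrated)."""
        x = np.sort(np.asarray(funding, dtype=float))
        n = x.size
        cum = np.cumsum(x)                  # cumulative funding = unnormalized Lorenz curve
        return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

    # Illustrative distributions only.
    equal = np.ones(100)
    concentrated = np.r_[np.ones(90), 50 * np.ones(10)]
    print(f"Gini of equal funding:        {gini(equal):.3f}")
    print(f"Gini of concentrated funding: {gini(concentrated):.3f}")
    ```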

  8. Expected performance of m-solution backtracking

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.

    1986-01-01

    This paper derives upper bounds on the expected number of search tree nodes visited during an m-solution backtracking search, a search which terminates after some preselected number m problem solutions are found. The search behavior is assumed to have a general probabilistic structure. The results are stated in terms of node expansion and contraction. A visited search tree node is said to be expanding if the mean number of its children visited by the search exceeds 1 and is contracting otherwise. It is shown that if every node expands, or if every node contracts, then the number of search tree nodes visited by a search has an upper bound which is linear in the depth of the tree, in the mean number of children a node has, and in the number of solutions sought. Also derived are bounds linear in the depth of the tree in some situations where an upper portion of the tree contracts (expands), while the lower portion expands (contracts). While previous analyses of 1-solution backtracking have concluded that the expected performance is always linear in the tree depth, the model allows superlinear expected performance.

  9. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1990-01-01

    An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.

  10. Predictive classification of self-paced upper-limb analytical movements with EEG.

    PubMed

    Ibáñez, Jaime; Serrano, J I; del Castillo, M D; Minguez, J; Pons, J L

    2015-11-01

    The extent to which the electroencephalographic activity allows the characterization of movements with the upper limb is an open question. This paper describes the design and validation of a classifier of upper-limb analytical movements based on electroencephalographic activity extracted from intervals preceding self-initiated movement tasks. Features selected for the classification are subject specific and associated with the movement tasks. Further tests are performed to reject the hypothesis that other information different from the task-related cortical activity is being used by the classifiers. Six healthy subjects were measured performing self-initiated upper-limb analytical movements. A Bayesian classifier was used to classify among seven different kinds of movements. Features considered covered the alpha and beta bands. A genetic algorithm was used to optimally select a subset of features for the classification. An average accuracy of 62.9 ± 7.5% was reached, which was above the baseline level observed with the proposed methodology (30.2 ± 4.3%). The study shows how the electroencephalography carries information about the type of analytical movement performed with the upper limb and how it can be decoded before the movement begins. In neurorehabilitation environments, this information could be used for monitoring and assisting purposes.

  11. Entanglement and area law with a fractal boundary in a topologically ordered phase

    NASA Astrophysics Data System (ADS)

    Hamma, Alioscia; Lidar, Daniel A.; Severini, Simone

    2010-01-01

    Quantum systems with short-range interactions are known to respect an area law for the entanglement entropy: The von Neumann entropy S associated to a bipartition scales with the boundary p between the two parts. Here we study the case in which the boundary is a fractal. We consider the topologically ordered phase of the toric code with a magnetic field. When the field vanishes it is possible to analytically compute the entanglement entropy for both regular and fractal bipartitions (A,B) of the system and this yields an upper bound for the entire topological phase. When the A-B boundary is regular we have S/p=1 for large p. When the boundary is a fractal of the Hausdorff dimension D, we show that the entanglement between the two parts scales as S/p=γ⩽1/D, and γ depends on the fractal considered.

  12. Quantum coherence via skew information and its polygamy

    NASA Astrophysics Data System (ADS)

    Yu, Chang-shui

    2017-04-01

    Quantifying coherence is a key task in both quantum-mechanical theory and practical applications. Here, a reliable quantum coherence measure is presented by utilizing the quantum skew information of the state of interest subject to a certain broken observable. This coherence measure is proven to fulfill all the criteria (especially the strong monotonicity) recently introduced in the resource theories of quantum coherence. The coherence measure has an analytic expression and an obvious operational meaning related to quantum metrology. In terms of this coherence measure, the distribution of the quantum coherence, i.e., how the quantum coherence is distributed among the multiple parties, is studied and a corresponding polygamy relation is proposed. As a further application, it is found that the coherence measure forms the natural upper bounds for quantum correlations prepared by incoherent operations. The experimental measurements of our coherence measure as well as the relative-entropy coherence and lp-norm coherence are studied finally.

  13. Underwater wireless optical MIMO system with spatial modulation and adaptive power allocation

    NASA Astrophysics Data System (ADS)

    Huang, Aiping; Tao, Linwei; Niu, Yilong

    2018-04-01

    In this paper, we investigate the performance of an underwater wireless optical multiple-input multiple-output communication system combining spatial modulation (SM-UOMIMO) with flag dual amplitude pulse position modulation (FDAPPM). Channel impulse responses for coastal and harbor ocean water links are obtained by Monte Carlo (MC) simulation. Moreover, we obtain closed-form and upper-bound average bit error rate (BER) expressions for receiver diversity, including optical combining, equal gain combining, and selected combining. A novel adaptive power allocation algorithm (PAA) is also proposed to minimize the average BER of the SM-UOMIMO system. Our numerical results indicate an excellent match between the analytical results and numerical simulations, which confirms the accuracy of the derived expressions. Furthermore, the results show that the adaptive PAA clearly outperforms conventional fixed-factor PAA and equal PAA. A multiple-input single-output system with adaptive PAA achieves even better BER performance than the MIMO one, while effectively reducing receiver complexity.

  14. A note on the velocity derivative flatness factor in decaying HIT

    NASA Astrophysics Data System (ADS)

    Djenidi, L.; Danaila, L.; Antonia, R. A.; Tang, S.

    2017-05-01

    We develop an analytical expression for the velocity derivative flatness factor, F, in decaying homogeneous and isotropic turbulence (HIT), starting with the transport equation of the third-order moment of the velocity increment and assuming self-preservation. This expression, fully consistent with the Navier-Stokes equations, relates F to the product between the second-order pressure derivative ∂²p/∂x² and the second-order moment of the longitudinal velocity derivative (∂u/∂x)², highlighting the role the pressure plays in the scaling of the fourth-order moment of the longitudinal velocity derivative. It is also shown that F has an upper bound which follows the integral of k*⁴E_p*(k*), where E_p and k are the pressure spectrum and the wavenumber, respectively (the symbol * represents the Kolmogorov normalization). Direct numerical simulations of forced HIT suggest that this integral converges toward a constant as the Reynolds number increases.

  15. Quantized kernel least mean square algorithm.

    PubMed

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
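
    The core quantization step can be sketched in a few lines: an input within a quantization radius of an existing radial-basis-function centre only updates that centre's coefficient, otherwise it becomes a new centre. The kernel width, step size, and quantization radius below are illustrative choices, not the paper's settings.

```python
# Minimal sketch of the quantized kernel LMS idea: merge "redundant" inputs
# into the closest existing centre instead of growing the network.
import numpy as np

def gauss_kernel(x, y, width=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * width ** 2))

def qklms(inputs, desired, eta=0.5, q_size=0.3, width=1.0):
    centers, coeffs = [inputs[0]], [eta * desired[0]]
    for u, d in zip(inputs[1:], desired[1:]):
        y = sum(a * gauss_kernel(c, u, width) for c, a in zip(centers, coeffs))
        e = d - y                                      # prediction error
        dists = [np.linalg.norm(u - c) for c in centers]
        j = int(np.argmin(dists))
        if dists[j] <= q_size:
            coeffs[j] += eta * e                       # update closest centre
        else:
            centers.append(u)                          # grow the network
            coeffs.append(eta * e)
    return centers, coeffs

# Static function estimation toy example: learn y = sin(3x) from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * x[:, 0]) + 0.05 * rng.standard_normal(200)
centers, coeffs = qklms(x, y)
print(len(centers), "centres retained out of", len(x), "samples")
```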

  16. Analytic studies of local-severe-storm observables by satellites

    NASA Technical Reports Server (NTRS)

    Dergarabedian, P.; Fendell, F.

    1977-01-01

    Attention is concentrated on the exceptionally violent whirlwind, often characterized by a fairly vertical axis of rotation. For a cylindrical polar coordinate system with axis coincident with the axis of rotation, the secondary flow involves the radial and axial velocity components. The thesis advanced is, first, that a violent whirlwind is characterized by swirl speeds relative to the axis of rotation on the order of 90 m/s, with 100 m/s being close to an upper bound. This estimate is based on interpretation of funnel-cloud shape (which also suggests properties of the radial profile of swirl, as well as the maximum magnitude); an error assessment of the funnel-cloud interpretation procedure is developed. Second, computation of ground-level pressure deficits achievable from typical tornado-spawning ambients by idealized thermohydrostatic processes suggests that a two-cell structure is required to sustain such large speeds.

  17. Homoclinic accretion solutions in the Schwarzschild-anti-de Sitter space-time

    NASA Astrophysics Data System (ADS)

    Mach, Patryk

    2015-04-01

    The aim of this paper is to clarify the distinction between homoclinic and standard (global) Bondi-type accretion solutions in the Schwarzschild-anti-de Sitter space-time. The homoclinic solutions have recently been discovered numerically for polytropic equations of state. Here I show that they exist also for certain isothermal (linear) equations of state, and an analytic solution of this type is obtained. It is argued that the existence of such solutions is generic, although for sufficiently relativistic matter models (photon gas, ultrahard equation of state) there exist global solutions that can be continued to infinity, similar to the standard Michel solutions in the Schwarzschild space-time. In contrast, global solutions should not exist for matter models with a nonvanishing rest-mass component, and this is demonstrated for polytropes. For homoclinic isothermal solutions I derive an upper bound on the mass of the black hole for which stationary transonic accretion is allowed.

  18. Investigating equality: The Rényi spectrum

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2017-09-01

    An equality index is a score quantifying the socioeconomic egalitarianism of the distribution of wealth in human societies; the score takes values in the unit interval, with the unit upper bound characterizing purely communist societies. In this paper we explore the Rényi spectrum, a continuum of equality indices that: (i) is based on the moments of the societies' distributions of wealth; (ii) unifies various measures of socioeconomic inequality, including the Theil and Atkinson indices; (iii) displays a collection of amicable analytic properties; (iv) admits multiple Rényi-divergence representations; and (v) provides a high-resolution gauging of egalitarianism that goes well beyond what can be offered by the common-practice measures of socioeconomic inequality, the Gini and Pietra indices. At large, the Rényi spectrum is applicable in the context of any distribution of non-negative sizes with a positive mean, yielding a high-resolution gauging of the distribution's inherent statistical heterogeneity.

  19. The threshold laws for electron-atom and positron-atom impact ionization

    NASA Technical Reports Server (NTRS)

    Temkin, A.

    1983-01-01

    The Coulomb-dipole theory is employed to derive a threshold law for the lowest energy needed for the separation of three particles from one another. The study focuses on an electron impinging on a neutral atom, and the dipole is formed between an inner electron and the nucleus. The analytical dependence of the transition matrix element on energy is reduced to lowest order to obtain the threshold law, with the inner electron providing a shield for the nucleus. Experimental results using the LAMPF accelerator to produce a high energy beam of H- ions, which are then exposed to an optical laser beam to detach the negative H- ion, are discussed. The threshold level is found to be confined to the region defined by the upper bound of the inverse square of the Coulomb-dipole region. Difficulties in exact experimental confirmation of the threshold are considered.

  20. Inflation with a graceful exit in a random landscape

    NASA Astrophysics Data System (ADS)

    Pedro, F. G.; Westphal, A.

    2017-03-01

    We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≪ 10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.

  1. Variability in sinking fluxes and composition of particle-bound phosphorus in the Xisha area of the northern South China Sea

    NASA Astrophysics Data System (ADS)

    Dong, Yuan; Li, Qian P.; Wu, Zhengchao; Zhang, Jia-Zhong

    2016-12-01

    Export fluxes of phosphorus (P) by sinking particles are important in studying ocean biogeochemical dynamics, but their composition and temporal variability are still inadequately understood in the global oceans, including the northern South China Sea (NSCS). A time-series study of particle fluxes was conducted at a mooring station adjacent to the Xisha Trough in the NSCS from September 2012 to September 2014, with sinking particles collected every two weeks by two sediment traps deployed at 500 m and 1500 m depths. Five operationally defined particulate P classes of sinking particles, including loosely-bound P, Fe-bound P, CaCO3-bound P, detrital apatite P, and refractory organic P, were quantified by a sequential extraction method (SEDEX). Our results revealed substantial variability in sinking particulate P composition at the Xisha site over the two years of sampling. Particulate inorganic P was contributed largely by Fe-bound P in the upper trap, but by detrital P in the lower trap. Particulate organic P, including exchangeable organic P, CaCO3-bound organic P, and refractory organic P, contributed up to 50-55% of total sinking particulate P. The increase of CaCO3-bound P in the upper trap during 2014 could be related to a strong El Niño event with enhanced CaCO3 deposition. We also found sediment resuspension responsible for the unusually high particle fluxes at the lower trap, based on analyses of a two-component mixing model. There was on average a total mass flux of 78±50 mg m⁻² d⁻¹ at the upper trap during the study period. A significant correlation between integrated primary productivity in the region and particle fluxes at 500 m at this station suggested the important role of biological production in controlling the concentration, composition, and export fluxes of sinking particulate P in the NSCS.

  2. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.

    2010-08-10

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
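
    The recipe can be illustrated for the simplest Poisson-counts case: fix a count threshold from the allowed Type I error under the background-only model, then raise the source intensity until the detection probability at that threshold reaches the required power. The background rate and error levels below are illustrative numbers, not values from the paper.

```python
# Minimal sketch of an upper limit as a property of the detection procedure:
# threshold from the Type I error, minimum detectable intensity from the power.
from scipy.stats import poisson

def detection_threshold(background, alpha):
    """Smallest count k whose background-only exceedance probability <= alpha."""
    k = 0
    while poisson.sf(k, background) > alpha:
        k += 1
    return k

def upper_limit(background, alpha=0.001, beta=0.5, step=0.01):
    k = detection_threshold(background, alpha)
    s = 0.0
    # Increase the source intensity until P(counts > k) reaches 1 - beta.
    while poisson.sf(k, background + s) < 1.0 - beta:
        s += step
    return k, s

k, s_min = upper_limit(background=3.0)
print(f"threshold = {k} counts, upper limit ~ {s_min:.2f} source counts")
```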

  3. Performance analysis for minimally nonlinear irreversible refrigerators at finite cooling power

    NASA Astrophysics Data System (ADS)

    Long, Rui; Liu, Zhichun; Liu, Wei

    2018-04-01

    The coefficient of performance (COP) for general refrigerators at finite cooling power has been systematically studied through the minimally nonlinear irreversible model, and its lower and upper bounds in different operating regions have been proposed. Under the tight coupling conditions, we have calculated the universal COP bounds under the χ figure of merit in different operating regions. When the refrigerator operates in the region with lower external flux, we obtain the general bounds 0 < ε < (√(9 + 8ε_C) − 3)/2 under the χ figure of merit, where ε_C denotes the Carnot COP. We have also calculated the universal bounds for the maximum gain in COP under different operating regions to give further insight into the COP gain as the cooling power moves away from its maximum. When the refrigerator operates in the region located between maximum cooling power and maximum COP with lower external flux, the upper bound for COP and the lower bound for the relative gain in COP take large values, compared to a relatively small loss from the maximum cooling power. If the cooling power is the main objective, it is desirable to operate the refrigerator at a slightly lower cooling power than the maximum one, where a small loss in the cooling power induces a much larger COP enhancement.
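
    For reference, the quoted bound is easy to evaluate numerically; the sketch below tabulates it for a few Carnot COP values (reading ε_C as the Carnot COP, which is an interpretation of the abstract's notation rather than a statement from the paper).

```python
# Minimal sketch: the COP upper bound (sqrt(9 + 8*eps_C) - 3)/2 under the chi
# figure of merit, evaluated for a few illustrative Carnot COP values.
import math

def cop_upper_bound(eps_c):
    return (math.sqrt(9.0 + 8.0 * eps_c) - 3.0) / 2.0

for eps_c in (1.0, 5.0, 20.0):
    print(f"eps_C = {eps_c:5.1f}  ->  bound = {cop_upper_bound(eps_c):.3f}")
```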

  4. Bounds and inequalities relating h-index, g-index, e-index and generalized impact factor: an improvement over existing models.

    PubMed

    Abbas, Ash Mohammad

    2012-01-01

    In this paper, we describe some bounds and inequalities relating the h-index, g-index, e-index, and generalized impact factor. We derive the bounds and inequalities relating these indexing parameters from their basic definitions and without assuming any continuous model to be followed by any of them. We verify the theorems using citation data for five Price Medalists. We observe that the lower bound for the h-index given by Theorem 2, [formula: see text], g ≥ 1, comes out to be more accurate as compared to the Schubert-Glanzel relation h ∝ C^(2/3)P^(-1/3) for a proportionality constant of 1, where C is the number of citations and P is the number of papers referenced. Also, the values of the h-index obtained using Theorem 2 outperform those obtained using the Egghe-Liang-Rousseau power law model for the given citation data of Price Medalists. Further, we computed the values of the upper bound on the g-index given by Theorem 3, g ≤ (h + e), where e denotes the value of the e-index. We observe that the upper bound on the g-index given by Theorem 3 is reasonably tight for the given citation record of Price Medalists.
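
    The quantities entering the Theorem 3 bound g ≤ h + e can be computed directly from a citation list; the sketch below checks the inequality on a made-up citation record (not Price Medalist data), using the usual definitions of the h-, g-, and e-indices.

```python
# Minimal sketch: compute h, g and e for a toy citation record and check the
# upper bound g <= h + e quoted above (citation counts are illustrative).
import math

def h_index(citations):
    c = sorted(citations, reverse=True)
    return sum(1 for i, ci in enumerate(c, start=1) if ci >= i)

def g_index(citations):
    c = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, ci in enumerate(c, start=1):
        total += ci
        if total >= i * i:
            g = i
    return g

def e_index(citations):
    c = sorted(citations, reverse=True)
    h = h_index(citations)
    return math.sqrt(sum(c[:h]) - h * h)     # excess citations in the h-core

cites = [50, 30, 22, 15, 12, 8, 6, 5, 3, 1]
h, g, e = h_index(cites), g_index(cites), e_index(cites)
print(h, g, round(e, 2), g <= h + e)         # the bound should hold
```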

  5. Reverse preferential spread in complex networks

    NASA Astrophysics Data System (ADS)

    Toyoizumi, Hiroshi; Tani, Seiichi; Miyoshi, Naoto; Okamoto, Yoshio

    2012-08-01

    Large-degree nodes may have a larger influence on the network, but they can be bottlenecks for spreading information since spreading attempts tend to concentrate on these nodes and become redundant. We discuss that the reverse preferential spread (distributing information inversely proportional to the degree of the receiving node) has an advantage over other spread mechanisms. In large uncorrelated networks, we show that the mean number of nodes that receive information under the reverse preferential spread is an upper bound among any other weight-based spread mechanisms, and this upper bound is indeed a logistic growth independent of the degree distribution.
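
    The mechanism itself is simple to simulate: an informed node forwards the information to a neighbour chosen with probability inversely proportional to that neighbour's degree. The random graph, number of attempts, and seed below are illustrative; this sketch is a simulation of the mechanism, not the paper's analytical bound.

```python
# Minimal sketch of reverse preferential spread on a scale-free graph:
# receivers are weighted by the inverse of their degree.
import random
import networkx as nx

def reverse_preferential_spread(G, source, attempts=2000, seed=1):
    rng = random.Random(seed)
    informed = {source}
    for _ in range(attempts):
        sender = rng.choice(list(informed))
        nbrs = list(G.neighbors(sender))
        weights = [1.0 / G.degree(n) for n in nbrs]   # inverse-degree weights
        informed.add(rng.choices(nbrs, weights=weights, k=1)[0])
    return informed

G = nx.barabasi_albert_graph(500, 3, seed=0)
reached = reverse_preferential_spread(G, source=0)
print(len(reached), "of", G.number_of_nodes(), "nodes informed")
```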

  6. A note on the upper bound of the spectral radius for SOR iteration matrix

    NASA Astrophysics Data System (ADS)

    Chang, Da-Wei

    2004-05-01

    Recently, Wang and Huang (J. Comput. Appl. Math. 135 (2001) 325, Corollary 4.7) established the following estimate of the upper bound of the spectral radius for the successive overrelaxation (SOR) iteration matrix: ρ_SOR ≤ 1 − ω + ωρ_GS, under the condition that the coefficient matrix A is a nonsingular M-matrix and ω ≥ 1, where ρ_SOR and ρ_GS are the spectral radii of the SOR iteration matrix and the Gauss-Seidel iteration matrix, respectively. In this note, we would like to point out that the above estimate is not valid in general.

  7. Anti-cyclonic circulation driven by the estuarine circulation in a gulf type ROFI

    NASA Astrophysics Data System (ADS)

    Fujiwara, T.; Sanford, L. P.; Nakatsuji, K.; Sugiyama, Y.

    1997-08-01

    Baroclinic residual circulation processes are examined in gulf type Regions Of Freshwater Influence (ROFIs), which have large rivers discharging into a rounded head wider than the Rossby internal deformation radius. Theoretical and observational investigations concentrate on Ise Bay, Japan, with supporting data from Osaka Bay and Tokyo Bay. Simplified analytical solutions are derived to describe the primary features of the circulation. Three-dimensional residual current data collected using moored current meters and shipboard acoustic Doppler current profilers (ADCPs), satellite imagery, and density structure data observed using STDs are presented for comparison to the theoretical predictions. There are three key points to understanding the resulting circulation in gulf type ROFIs. First, there are likely to be three distinct water masses: the river plume, a brackish upper layer, and a higher salinity lower layer. Second, baroclinic processes in gulf type ROFIs are influenced by the Earth's rotation at first order. Residual currents are quasi-geostrophic and potential vorticity is approximately conserved. Third, the combined effects of a classical longitudinal estuarine circulation and the Earth's rotation are both necessary to produce the resulting circulation. Anti-cyclonic vorticity is generated in the upper layer by the horizontal divergence associated with upward entrainment, which is part of the estuarine circulation. The interaction between anti-cyclonic vorticity and horizontal divergence results in two regions of qualitatively different circulation, with gyre-like circulation near the bay head and uniformly seaward anti-cyclonically sheared flow further towards the mouth. The stagnation point separating the two regions is closer to (further away from) the bay head for stronger (weaker) horizontal divergence, respectively. The vorticity and spin-up time of this circulation are −(f − ω₁)/2 and h/(2w₀), respectively, where f is the Coriolis parameter, ω₁ is the vorticity of the lower layer, h is the depth of the upper layer, and w₀ is the upward entrainment velocity across the pycnocline. Under high discharge conditions the axis of the river plume proceeds in a right-bounded direction, describing an inertial circle clearly seen in satellite images. Under low discharge conditions the river plume is deflected in a left-bounded direction by the anti-cyclonic circulation of the upper layer.

  8. A combinatorial perspective of the protein inference problem.

    PubMed

    Yang, Chao; He, Zengyou; Yu, Weichuan

    2013-01-01

    In a shotgun proteomics experiment, proteins are the most biologically meaningful output. The success of proteomics studies depends on the ability to accurately and efficiently identify proteins. Many methods have been proposed to facilitate the identification of proteins from peptide identification results. However, the relationship between protein identification and peptide identification has not been thoroughly explained before. In this paper, we devote ourselves to a combinatorial perspective of the protein inference problem. We employ combinatorial mathematics to calculate the conditional protein probabilities (protein probability means the probability that a protein is correctly identified) under three assumptions, which lead to a lower bound, an upper bound, and an empirical estimation of protein probabilities, respectively. The combinatorial perspective enables us to obtain an analytical expression for protein inference. Our method achieves comparable results with ProteinProphet in a more efficient manner in experiments on two data sets of standard protein mixtures and two data sets of real samples. Based on our model, we study the impact of unique peptides and degenerate peptides (degenerate peptides are peptides shared by at least two proteins) on protein probabilities. Meanwhile, we also study the relationship between our model and ProteinProphet. We name our program ProteinInfer. Its Java source code, our supplementary document and experimental results are available at: http://bioinformatics.ust.hk/proteininfer

  9. Coherence and entanglement measures based on Rényi relative entropies

    NASA Astrophysics Data System (ADS)

    Zhu, Huangjun; Hayashi, Masahito; Chen, Lin

    2017-11-01

    We study systematically resource measures of coherence and entanglement based on Rényi relative entropies, which include the logarithmic robustness of coherence, geometric coherence, and conventional relative entropy of coherence together with their entanglement analogues. First, we show that each Rényi relative entropy of coherence is equal to the corresponding Rényi relative entropy of entanglement for any maximally correlated state. By virtue of this observation, we establish a simple operational connection between entanglement measures and coherence measures based on Rényi relative entropies. We then prove that all these coherence measures, including the logarithmic robustness of coherence, are additive. Accordingly, all these entanglement measures are additive for maximally correlated states. In addition, we derive analytical formulas for Rényi relative entropies of entanglement of maximally correlated states and bipartite pure states, which reproduce a number of classic results on the relative entropy of entanglement and logarithmic robustness of entanglement in a unified framework. Several nontrivial bounds for Rényi relative entropies of coherence (entanglement) are further derived, which improve over results known previously. Moreover, we determine all states whose relative entropy of coherence is equal to the logarithmic robustness of coherence. As an application, we provide an upper bound for the exact coherence distillation rate, which is saturated for pure states.

  10. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  11. Efficient Regressions via Optimally Combining Quantile Information*

    PubMed Central

    Zhao, Zhibiao; Xiao, Zhijie

    2014-01-01

    We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481

  12. Toward allocative efficiency in the prescription drug industry.

    PubMed

    Guell, R C; Fischbaum, M

    1995-01-01

    Traditionally, monopoly power in the pharmaceutical industry has been measured by profits. An alternative method estimates the deadweight loss of consumer surplus associated with the exercise of monopoly power. Although upper and lower bound estimates for this inefficiency are far apart, they at least suggest a dramatically greater welfare loss than measures of industry profitability would imply. A proposed system would have the U.S. government employing its power of eminent domain to "take" and distribute pharmaceutical patents, providing as "just compensation" the present value of the patent's expected future monopoly profits. Given the allocative inefficiency of raising taxes to pay for the program, the impact of the proposal on allocative efficiency would be at least as good at our lower bound estimate of monopoly costs while substantially improving efficiency at or near our upper bound estimate.

  13. Tight upper bound for the maximal quantum value of the Svetlichny operators

    NASA Astrophysics Data System (ADS)

    Li, Ming; Shen, Shuqian; Jing, Naihuan; Fei, Shao-Ming; Li-Jost, Xianqing

    2017-10-01

    It is a challenging task to detect genuine multipartite nonlocality (GMNL). In this paper, the problem is considered via computing the maximal quantum value of Svetlichny operators for three-qubit systems and a tight upper bound is obtained. The constraints on the quantum states for the tightness of the bound are also presented. The approach enables us to give the necessary and sufficient conditions of violating the Svetlichny inequality (SI) for several quantum states, including the white and color noised Greenberger-Horne-Zeilinger (GHZ) states. The relation between the genuine multipartite entanglement concurrence and the maximal quantum value of the Svetlichny operators for mixed GHZ class states is also discussed. As the SI is useful for the investigation of GMNL, our results give an effective and operational method to detect the GMNL for three-qubit mixed states.

  14. Vacuum stability in the U(1)χ extended model with vanishing scalar potential at the Planck scale

    NASA Astrophysics Data System (ADS)

    Haba, Naoyuki; Yamaguchi, Yuya

    2015-09-01

    We investigate the vacuum stability in a scale invariant local U(1)_χ model with vanishing scalar potential at the Planck scale. We find that it is impossible to realize the Higgs mass of 125 GeV while keeping the Higgs quartic coupling λ_H positive at all energy scales, that is, the same as in the standard model. Once one allows λ_H < 0, lower bounds on the Z' boson mass are obtained through the positive definiteness of the scalar mass squared eigenvalues, although these bounds are smaller than the LHC bounds. On the other hand, the upper bounds strongly depend on the number of relevant Majorana Yukawa couplings of the right-handed neutrinos N_ν. Considering decoupling effects of the Z' boson and the right-handed neutrinos, the condition of the singlet scalar quartic coupling λ_φ > 0 gives the upper bound in the N_ν = 1 case, while it does not constrain the N_ν = 2 and 3 cases. In particular, we find that the Z' boson mass is tightly restricted for the N_ν = 1 case as M_{Z'} ≲ 3.7 TeV.

  15. Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection

    NASA Astrophysics Data System (ADS)

    Denuit, Michel; Dhaene, Jan

    2007-06-01

    In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.

  16. Weak Frictional Healing as Controlled by Intergranular Pressure Solution

    NASA Astrophysics Data System (ADS)

    He, C.

    2017-12-01

    Unstable fault slip due to velocity weakening requires a frictional healing effect that is stronger than the instantaneous rate effect. Based on a previous analytical result regarding the healing effect at spherical contacts by intergranular pressure solution (He et al., 2013), we extend the analysis to incorporate the full range of dilatancy angles from π/6 to -π/6, covering uphill and downhill situations of many contacts with different dilatancy angles. Assuming that both the healing effect (parameter b) and the instantaneous rate effect (parameter a) are controlled by intergranular pressure solution, and averaging over the whole range of dilatancy angles, our analysis derives each of the two effects as a function of temperature. The result shows velocity weakening for friction coefficients > 0.274. As hydrothermal conditions are important for the deep portions of actual fault zones, the strength of velocity weakening is of interest when the related faulting behavior is concerned. As a measure of the strength of velocity weakening, the derived ratio b/a fully controlled by pressure solution shows an upper bound of 1.22. Data analyses in previous studies on plagioclase (He et al., 2013) and oceanic basalt (Zhang and He, 2017) show a range of b/a = 1.05-1.2, consistent with the analytical result. The values < 1.2 are considered here to be due to concurrent cataclasis that promotes the instantaneous rate effect, which reduces b/a to levels below the upper bound. These values are significantly less than in dry experiments on granite by Mitchell et al. (2016), where b/a ranges from 1.54 to 2.59 as inferred by reanalyzing their stick-slip data at temperatures of 20°C, 500°C and 600°C. Comparison between the two ranges of b/a helps understand the dominant mechanism of frictional healing at contacts, especially under hydrothermal conditions in fault zones. For comparable ratios of system stiffness to the critical value, numerical simulations with a single-degree-of-freedom system show that a smaller b/a significantly reduces the peak slip velocity as a result of the reduced period of free oscillation corresponding to the lower stiffness (Fig. 1). This is an effect similar to that of reduced effective normal stress due to overpressure of pore fluid, which lowers the stiffness suitable for unstable slips and thus weakens the peak slip velocity.

  17. Theoretical investigation of the upper and lower bounds of a generalized dimensionless bearing health indicator

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tsui, Kwok-Leung

    2018-01-01

    Bearing-supported shafts are widely used in various machines. Due to harsh working environments, bearing performance degrades over time. To prevent unexpected bearing failures and accidents, bearing performance degradation assessment has become an emerging topic in recent years. Bearing performance degradation assessment aims to evaluate the current health condition of a bearing through a bearing health indicator. In the past years, many signal processing and data mining based methods were proposed to construct bearing health indicators. However, the upper and lower bounds of these bearing health indicators were not theoretically calculated and they strongly depended on historical bearing data including normal and failure data. Besides, most health indicators are dimensional, which means that these health indicators are prone to be affected by varying operating conditions, such as varying speeds and loads. In this paper, based on the principle of squared envelope analysis, we focus on theoretical investigation of bearing performance degradation assessment in the case of additive Gaussian noises, including distribution establishment of the squared envelope, construction of a generalized dimensionless bearing health indicator, and mathematical calculation of the upper and lower bounds of the generalized dimensionless bearing health indicator. Then, analyses of simulated and real bearing run-to-failure data are used as two case studies to illustrate how the generalized dimensionless health indicator works and demonstrate its effectiveness in bearing performance degradation assessment. Results show that the squared envelope follows a noncentral chi-square distribution and the upper and lower bounds of the generalized dimensionless health indicator can be mathematically established. Moreover, the generalized dimensionless health indicator is sensitive to an incipient bearing defect in the process of bearing performance degradation.
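
    The squared-envelope step that the indicator builds on is straightforward to reproduce; the sketch below computes it from the analytic signal and forms one possible dimensionless (scale-free) statistic. The statistic shown and all signal parameters are illustrative stand-ins, not the paper's generalized indicator or its chi-square bounds.

```python
# Minimal sketch: squared envelope via the analytic signal, plus a simple
# dimensionless statistic that is unaffected by overall signal amplitude.
import numpy as np
from scipy.signal import hilbert

def squared_envelope(x):
    return np.abs(hilbert(x)) ** 2           # squared magnitude of analytic signal

def dimensionless_indicator(x):
    se = squared_envelope(x)
    return np.max(se) / np.mean(se)          # scale-free by construction

rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1.0 / 5000.0)
healthy = rng.standard_normal(t.size)        # additive Gaussian noise only
bursts = (np.sin(2 * np.pi * 30 * t) > 0.95).astype(float)
faulty = healthy + 2.0 * bursts * np.sin(2 * np.pi * 1000 * t)  # repetitive impacts
print(round(dimensionless_indicator(healthy), 1),
      round(dimensionless_indicator(faulty), 1))
```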

  18. ``Carbon Credits'' for Resource-Bounded Computations Using Amortised Analysis

    NASA Astrophysics Data System (ADS)

    Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin

    Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azunre, P.

    Here in this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring solving auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.

  20. Theoretical and computational studies of excitons in conjugated polymers

    NASA Astrophysics Data System (ADS)

    Barford, William; Bursill, Robert J.; Smith, Richard W.

    2002-09-01

    We present a theoretical and computational analysis of excitons in conjugated polymers. We use a tight-binding model of π-conjugated electrons, with 1/r interactions for large r. In both the weak-coupling limit (defined by W>>U) and the strong-coupling limit (defined by W<

  1. Effects of triplet Higgs bosons in long baseline neutrino experiments

    NASA Astrophysics Data System (ADS)

    Huitu, K.; Kärkkäinen, T. J.; Maalampi, J.; Vihonen, S.

    2018-05-01

    The triplet scalars (Δ = Δ++, Δ+, Δ0) utilized in the so-called type-II seesaw model to explain the lightness of neutrinos would generate nonstandard interactions (NSI) for a neutrino propagating in matter. We investigate the prospects to probe these interactions in long baseline neutrino oscillation experiments. We analyze the upper bounds that the proposed DUNE experiment might set on the nonstandard parameters and numerically derive upper bounds, as a function of the lightest neutrino mass, on the ratio of the mass M_Δ of the triplet scalars and the strength |λ_ϕ| of the coupling ϕϕΔ of the triplet Δ and the conventional Higgs doublet ϕ. We also discuss the possible misinterpretation of these effects as effects arising from a nonunitarity of the neutrino mixing matrix and compare the results with the bounds that arise from charged lepton flavor violating processes.

  2. Estimates on Functional Integrals of Quantum Mechanics and Non-relativistic Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Bley, Gonzalo A.; Thomas, Lawrence E.

    2017-01-01

    We provide a unified method for obtaining upper bounds for certain functional integrals appearing in quantum mechanics and non-relativistic quantum field theory, functionals of the form E[exp(A_T)], the (effective) action A_T being a function of particle trajectories up to time T. The estimates in turn yield rigorous lower bounds for ground state energies, via the Feynman-Kac formula. The upper bounds are obtained by writing the action for these functional integrals in terms of stochastic integrals. The method is illustrated in familiar quantum mechanical settings: for the hydrogen atom, for a Schrödinger operator with a 1/|x|² potential with small coupling, and, with a modest adaptation of the method, for the harmonic oscillator. We then present our principal applications of the method, in the settings of non-relativistic quantum field theories for particles moving in a quantized Bose field, including the optical polaron and Nelson models.

  3. Dwell time-based stabilisation of switched delay systems using free-weighting matrices

    NASA Astrophysics Data System (ADS)

    Koru, Ahmet Taha; Delibaşı, Akın; Özbay, Hitay

    2018-01-01

    In this paper, we present a quasi-convex optimisation method to minimise an upper bound of the dwell time for stability of switched delay systems. Piecewise Lyapunov-Krasovskii functionals are introduced, and the upper bound for the derivative of the Lyapunov functionals is estimated by the free-weighting matrices method to investigate the non-switching stability of each candidate subsystem. Then, a sufficient condition for the dwell time is derived to guarantee the asymptotic stability of the switched delay system. Once these conditions are represented by a set of linear matrix inequalities, the dwell time optimisation problem can be formulated as a standard quasi-convex optimisation problem. Numerical examples are given to illustrate the improvements over previously obtained dwell time bounds. Using the results obtained in the stability case, we present a nonlinear minimisation algorithm to synthesise the dwell time minimiser controllers. The algorithm solves the problem with successive linearisation of the nonlinear conditions.

  4. Decay of superconducting correlations for gauged electrons in dimensions D ≤ 4

    NASA Astrophysics Data System (ADS)

    Tada, Yasuhiro; Koma, Tohru

    2018-03-01

    We study lattice superconductors coupled to gauge fields, such as an attractive Hubbard model in electromagnetic fields, with a standard gauge fixing. We prove upper bounds for a two-point Cooper pair correlation at finite temperatures in spatial dimensions D ≤ 4. The upper bounds decay exponentially in three dimensions and by power law in four dimensions. These imply the absence of the superconducting long-range order for the Cooper pair amplitude as a consequence of fluctuations of the gauge fields. Since our results hold for the gauge fixing Hamiltonian, they cannot be obtained as a corollary of Elitzur's theorem.

  5. Calculations of reliability predictions for the Apollo spacecraft

    NASA Technical Reports Server (NTRS)

    Amstadter, B. L.

    1966-01-01

    A new method of reliability prediction for complex systems is defined. Calculation of both upper and lower bounds is involved, and a procedure for combining the two to yield an approximately true prediction value is presented. Both mission success and crew safety predictions can be calculated, and success probabilities can be obtained for individual mission phases or subsystems. Primary consideration is given to evaluating cases involving zero or one failure per subsystem, and the results of these evaluations are then used for analyzing multiple failure cases. Extensive development is provided for the overall mission success and crew safety equations for both the upper and lower bounds.

  6. A proof of the log-concavity conjecture related to the computation of the ergodic capacity of MIMO channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurvits, Leonid

    2009-01-01

    An upper bound on the ergodic capacity of MIMO channels was introduced recently in [1]. This upper bound amounts to the maximization on the simplex of some multilinear polynomial p(λ_1, ..., λ_n) with non-negative coefficients. In general, such maximization problems are NP-hard. But if, say, the functional log(p) is concave on the simplex and can be efficiently evaluated, then the maximization can also be done efficiently. Such log-concavity was conjectured in [1]. We give in this paper a self-contained proof of the conjecture, based on the theory of H-stable polynomials.

  7. Investigation of matter-antimatter interaction for possible propulsion applications

    NASA Technical Reports Server (NTRS)

    Morgan, D. L., Jr.

    1974-01-01

    Matter-antimatter annihilation is discussed as a means of rocket propulsion. The feasibility of different means of antimatter storage is shown to depend on how annihilation rates are affected by various circumstances. The annihilation processes are described, with emphasis on important features of atom-antiatom interatomic potential energies. A model is developed that allows approximate calculation of upper and lower bounds to the interatomic potential energy for any atom-antiatom pair. Formulae for the upper and lower bounds for atom-antiatom annihilation cross-sections are obtained and applied to the annihilation rates for each means of antimatter storage under consideration. Recommendations for further studies are presented.

  8. Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.

    PubMed

    Werner, Tomás

    2015-07-01

    Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point when the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. During that process, an upper bound on the partition function monotonically decreases. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
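
    For the ordinary sum-product semiring the iteration described above has a particularly transparent form. The sketch below applies one such update to two random factors sharing a variable: rescaling them by reciprocal factors leaves their product, and hence the partition function, unchanged while making their marginals over the shared variable equal. The factor sizes are arbitrary illustrative choices.

```python
# Minimal sketch of one marginal-equalising update for two factors f(x, y) and
# g(y, z) in the sum-product semiring: the product is preserved exactly, the
# overlapping marginals over y become equal.
import numpy as np

rng = np.random.default_rng(0)
f = rng.uniform(0.5, 2.0, size=(3, 4))       # factor f(x, y)
g = rng.uniform(0.5, 2.0, size=(4, 5))       # factor g(y, z)

def partition_function(f, g):
    return np.einsum("xy,yz->", f, g)        # sum over all joint states

Z_before = partition_function(f, g)
p = f.sum(axis=0)                            # marginal of f over the shared y
q = g.sum(axis=1)                            # marginal of g over the shared y
scale = np.sqrt(q / p)
f = f * scale[np.newaxis, :]                 # f picks up sqrt(q/p)
g = g / scale[:, np.newaxis]                 # g gives up the same factor

print(np.allclose(Z_before, partition_function(f, g)))   # product unchanged
print(np.allclose(f.sum(axis=0), g.sum(axis=1)))         # marginals now equal
```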

  9. On the Role of Entailment Patterns and Scalar Implicatures in the Processing of Numerals

    ERIC Educational Resources Information Center

    Panizza, Daniele; Chierchia, Gennaro; Clifton, Charles, Jr.

    2009-01-01

    There has been much debate, in both the linguistics and the psycholinguistics literature, concerning numbers and the interpretation of number denoting determiners ("numerals"). Such debate concerns, in particular, the nature and distribution of upper-bounded ("exact") interpretations vs. lower-bounded ("at-least") construals. In the present paper…

  10. Sublinear Upper Bounds for Stochastic Programs with Recourse. Revision.

    DTIC Science & Technology

    1987-06-01

    Approximation procedures for (1.1) generally rely on discretizations of E (Huang, Ziemba, and Ben-Tal (1977); Kall and Stoyan (1982); Birge and Wets).

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zachos, C. K.; High Energy Physics

    Following ref [1], a classical upper bound for quantum entropy is identified and illustrated, 0 ≤ S_q ≤ ln(eσ²/2ℏ), involving the variance σ² in phase space of the classical limit distribution of a given system. A fortiori, this further bounds the corresponding information-theoretical generalizations of the quantum entropy proposed by Rényi.

  12. Representing and Acquiring Geographic Knowledge.

    DTIC Science & Technology

    1984-01-01

    which is allowed if v is a knowledge bound of REG. The real vertices of a clump map into the boundary of the corresponding object ... For example, "What is the diameter of the pond?" can be answered, but the answer will, in general, be a range [lower-bound, upper-bound]. If the clump ... cases of others. They are included separately, because their procedures are either faster or more powerful than the general procedure.

  13. A Posteriori Finite Element Bounds for Sensitivity Derivatives of Partial-Differential-Equation Outputs. Revised

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume

    1998-01-01

    We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller-Hermes, Alexander, E-mail: muellerh@ma.tum.de; Wolf, Michael M., E-mail: m.wolf@tum.de; Reeb, David, E-mail: reeb.qit@gmail.com

    We investigate linear maps between matrix algebras that remain positive under tensor powers, i.e., under tensoring with n copies of themselves. Completely positive and completely co-positive maps are trivial examples of this kind. We show that for every n ∈ ℕ, there exist non-trivial maps with this property and that for two-dimensional Hilbert spaces there is no non-trivial map for which this holds for all n. For higher dimensions, we reduce the existence question of such non-trivial “tensor-stable positive maps” to a one-parameter family of maps and show that an affirmative answer would imply the existence of non-positive partial transpose bound entanglement. As an application, we show that any tensor-stable positive map that is not completely positive yields an upper bound on the quantum channel capacity, which for the transposition map gives the well-known cb-norm bound. We, furthermore, show that the latter is an upper bound even for the local operations and classical communications-assisted quantum capacity, and that moreover it is a strong converse rate for this task.

  15. Receive-Noise Analysis of Capacitive Micromachined Ultrasonic Transducers.

    PubMed

    Bozkurt, Ayhan; Yaralioglu, G Goksenin

    2016-11-01

    This paper presents an analysis of thermal (Johnson) noise received from the radiation medium by otherwise noiseless capacitive micromachined ultrasonic transducer (CMUT) membranes operating in their fundamental resonance mode. Determination of thermal noise received by multiple numbers of transducers or a transducer array requires the assessment of cross-coupling through the radiation medium, as well as the self-radiation impedance of the individual transducer. We show that the total thermal noise received by the cells of a CMUT has insignificant correlation, and is independent of the radiation impedance, but is only determined by the mass of each membrane and the electromechanical transformer ratio. The proof is based on the analytical derivations for a simple transducer with two cells, and extended to transducers with numerous cells using circuit simulators. We used a first-order model, which incorporates the fundamental resonance of the CMUT. Noise power is calculated by integrating over the entire spectrum; hence, the presented figures are an upper bound for the noise. The presented analyses are valid for a transimpedance amplifier in the receive path. We use the analysis results to calculate the minimum detectable pressure of a CMUT. We also provide an analysis based on the experimental data to show that output noise power is limited by and comparable to the theoretical upper limit.

  16. Randomized noninferiority trial of telephone versus in-person genetic counseling for hereditary breast and ovarian cancer.

    PubMed

    Schwartz, Marc D; Valdimarsdottir, Heiddis B; Peshkin, Beth N; Mandelblatt, Jeanne; Nusbaum, Rachel; Huang, An-Tsun; Chang, Yaojen; Graves, Kristi; Isaacs, Claudine; Wood, Marie; McKinnon, Wendy; Garber, Judy; McCormick, Shelley; Kinney, Anita Y; Luta, George; Kelleher, Sarah; Leventhal, Kara-Grace; Vegella, Patti; Tong, Angie; King, Lesley

    2014-03-01

    Although guidelines recommend in-person counseling before BRCA1/BRCA2 gene testing, genetic counseling is increasingly offered by telephone. As genomic testing becomes more common, evaluating alternative delivery approaches becomes increasingly salient. We tested whether telephone delivery of BRCA1/2 genetic counseling was noninferior to in-person delivery. Participants (women age 21 to 85 years who did not have newly diagnosed or metastatic cancer and lived within a study site catchment area) were randomly assigned to usual care (UC; n = 334) or telephone counseling (TC; n = 335). UC participants received in-person pre- and post-test counseling; TC participants completed all counseling by telephone. Primary outcomes were knowledge, satisfaction, decision conflict, distress, and quality of life; secondary outcomes were equivalence of BRCA1/2 test uptake and costs of delivering TC versus UC. TC was noninferior to UC on all primary outcomes. At 2 weeks after pretest counseling, knowledge (d = 0.03; lower bound of 97.5% CI, -0.61), perceived stress (d = -0.12; upper bound of 97.5% CI, 0.21), and satisfaction (d = -0.16; lower bound of 97.5% CI, -0.70) had group differences and confidence intervals that did not cross their 1-point noninferiority limits. Decision conflict (d = 1.1; upper bound of 97.5% CI, 3.3) and cancer distress (d = -1.6; upper bound of 97.5% CI, 0.27) did not cross their 4-point noninferiority limit. Results were comparable at 3 months. TC was not equivalent to UC on BRCA1/2 test uptake (UC, 90.1%; TC, 84.2%). TC yielded cost savings of $114 per patient. Genetic counseling can be effectively and efficiently delivered via telephone to increase access and decrease costs.

  17. Sign rank versus Vapnik-Chervonenkis dimension

    NASA Astrophysics Data System (ADS)

    Alon, N.; Moran, Sh; Yehudayoff, A.

    2017-12-01

    This work studies the maximum possible sign rank of sign (N × N)-matrices with a given Vapnik-Chervonenkis dimension d. For d = 1, this maximum is three. For d = 2, this maximum is Θ̃(N^{1/2}). For d > 2, similar but slightly less accurate statements hold. The lower bounds improve on previous ones by Ben-David et al., and the upper bounds are novel. The lower bounds are obtained by probabilistic constructions, using a theorem of Warren in real algebraic topology. The upper bounds are obtained using a result of Welzl about spanning trees with low stabbing number, and using the moment curve. The upper bound technique is also used to: (i) provide estimates on the number of classes of a given Vapnik-Chervonenkis dimension, and the number of maximum classes of a given Vapnik-Chervonenkis dimension, answering a question of Frankl from 1989, and (ii) design an efficient algorithm that provides an O(N/log(N)) multiplicative approximation for the sign rank. We also observe a general connection between sign rank and spectral gaps which is based on Forster's argument. Consider the adjacency (N × N)-matrix of a Δ-regular graph with a second eigenvalue of absolute value λ and Δ ≤ N/2. We show that the sign rank of the signed version of this matrix is at least Δ/λ. We use this connection to prove the existence of a maximum class C ⊆ {±1}^N with Vapnik-Chervonenkis dimension 2 and sign rank Θ̃(N^{1/2}). This answers a question of Ben-David et al. regarding the sign rank of large Vapnik-Chervonenkis classes. We also describe limitations of this approach, in the spirit of the Alon-Boppana theorem. We further describe connections to communication complexity, geometry, learning theory, and combinatorics. Bibliography: 69 titles.
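
    The Δ/λ statement can be evaluated numerically for a concrete regular graph. The sketch below is only an illustration of that lower bound (together with the trivial upper bound given by ordinary rank); the circulant graph and its size are arbitrary choices, not examples from the paper.

```python
# Sketch of the spectral-gap bound quoted above, sign-rank >= Delta/lambda,
# for the +/-1 signed adjacency matrix S = 2A - J of a Delta-regular graph
# with Delta <= N/2.  The circulant graph below is an arbitrary illustrative
# choice, not an example from the paper.
import numpy as np

N, half_deg = 64, 6                      # ring where each vertex sees 6 neighbours per side
A = np.zeros((N, N))
for i in range(N):
    for d in range(1, half_deg + 1):
        A[i, (i + d) % N] = A[i, (i - d) % N] = 1.0
Delta = int(A[0].sum())                  # regular degree (12 <= N/2 here)

lam = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1][1]   # 2nd-largest |eigenvalue|
S = 2 * A - np.ones((N, N))              # +/-1 signed version of the adjacency matrix
rank_upper = np.linalg.matrix_rank(S)    # ordinary rank is a trivial upper bound

print(f"Delta = {Delta}, lambda = {lam:.2f}")
print(f"{Delta / lam:.2f} <= sign-rank(S) <= {rank_upper}")
```

    This near-ring graph has a small spectral gap, so the resulting lower bound is weak; graphs with small λ, such as good expanders, are what make the Δ/λ bound powerful.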

  18. Analytical Dimensional Reduction of a Fuel Optimal Powered Descent Subproblem

    NASA Technical Reports Server (NTRS)

    Rea, Jeremy R.; Bishop, Robert H.

    2010-01-01

    Current renewed interest in exploration of the moon, Mars, and other planetary objects is driving technology development in many fields of space system design. In particular, there is a desire to land both robotic and human missions on the moon and elsewhere. The landing guidance system must be able to deliver the vehicle to a desired soft landing while meeting several constraints necessary for the safety of the vehicle. Due to performance limitations of current launch vehicles, it is desired to minimize the amount of fuel used. In addition, the landing site may change in real-time in order to avoid previously undetected hazards which become apparent during the landing maneuver. This complicated maneuver can be broken into simpler subproblems that bound the full problem. One such subproblem is to find a minimum-fuel landing solution that meets constraints on the initial state, final state, and bounded thrust acceleration magnitude. With the assumptions of constant gravity and negligible atmosphere, the form of the optimal steering law is known, and the equations of motion can be integrated analytically, resulting in a system of five equations in five unknowns. It is shown that this system of equations can be reduced analytically to two equations in two unknowns. With an additional assumption of constant thrust acceleration magnitude, this system can be reduced further to one equation in one unknown. It is shown that these unknowns can be bounded analytically. An algorithm is developed to quickly and reliably solve the resulting one-dimensional bounded search, and it is used as a real-time guidance applied to a lunar landing test case.
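
    The final one-dimensional bounded search is the kind of step that is easy to make concrete. The sketch below shows a bracketed root solve between analytic bounds; the residual function and the bound values are hypothetical placeholders, not the powered-descent equations.

```python
# Generic sketch of the final step described above: once the problem has been
# reduced to one equation in one unknown with analytic bounds, the unknown can
# be found by a bracketed 1-D search.  The residual below is a hypothetical
# stand-in, not the powered-descent equations.
import numpy as np
from scipy.optimize import brentq

def residual(tgo):
    """Hypothetical scalar residual g(t_go) = 0 defining the time-to-go."""
    return np.cos(tgo) - 0.3 * tgo      # placeholder transcendental equation

t_lo, t_hi = 0.1, 3.0                   # assumed analytic lower/upper bounds
t_go = brentq(residual, t_lo, t_hi, xtol=1e-10)
print(f"time-to-go within bounds: {t_go:.6f}")
```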

  19. Method development and validation for total haloxyfop analysis in infant formulas and related ingredient matrices using liquid chromatography-tandem mass spectrometry.

    PubMed

    Koesukwiwat, Urairat; Vaclavik, Lukas; Mastovska, Katerina

    2018-05-08

    According to the European Commission directive 2006/141/EC, haloxyfop residue levels should not exceed 0.003 mg/kg in ready-to-feed infant formula, and the residue definition includes the sum of haloxyfop, its esters, salts, and conjugates expressed as haloxyfop. A simple method for total haloxyfop analysis in infant formula and related ingredient matrices was developed and validated using liquid chromatography-tandem mass spectrometry (LC-MS/MS). The sample preparation consisted of an alkaline hydrolysis with methanolic sodium hydroxide to release haloxyfop (parent acid) from its bound forms prior to the extraction with acetonitrile. A mixture of magnesium sulfate (MgSO4) and sodium chloride (NaCl) (4:1, w/w) was added to the extract to induce phase separation and force the analyte into the upper acetonitrile-methanol layer; a 1-mL aliquot was then cleaned up by dispersive solid-phase extraction with 150 mg of MgSO4 and 50 mg of octadecyl (C18) sorbent. The analytical procedure was developed and carefully optimized to enable low-level, total haloxyfop analysis in a variety of challenging matrices, including infant formulas and their important high-carbohydrate, high-protein, high-fat, and emulsifier ingredients. The final method was validated in two different laboratories by fortifying samples with haloxyfop and haloxyfop-methyl, which was used as a model compound simulating bound forms of the analyte. Mean recoveries of haloxyfop across all fortification levels and evaluated matrices ranged between 92.2 and 114%, with repeatability, within-lab reproducibility, and reproducibility RSDs ≤ 14%. Based on the validation results, this method was capable of converting the haloxyfop ester into the parent acid in a wide range of sample types and of reliably identifying and quantifying total haloxyfop at the target 0.003 mg/kg level in infant formulas (both powdered and ready-to-feed liquid forms). Graphical abstract: LC-MS/MS-based workflow for the determination of total haloxyfop in infant formula and related ingredients.
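
    The recovery and precision figures quoted above follow from simple arithmetic on the fortified-sample results; the sketch below reproduces that arithmetic on made-up replicate values (they are not the published validation data).

```python
# Sketch of the validation arithmetic used above: percent recovery and
# repeatability RSD from spiked samples.  Values are illustrative, not the
# published validation data.
import numpy as np

spike_level = 0.003                      # mg/kg fortification level
measured = np.array([0.0029, 0.0031, 0.0030, 0.0028, 0.0032, 0.0030])

recovery = 100.0 * measured / spike_level
print(f"mean recovery = {recovery.mean():.1f} %")
print(f"repeatability RSD = {100.0 * measured.std(ddof=1) / measured.mean():.1f} %")
```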

  20. A formulation of a matrix sparsity approach for the quantum ordered search algorithm

    NASA Astrophysics Data System (ADS)

    Parmar, Jupinder; Rahman, Saarim; Thiara, Jaskaran

    One specific subset of quantum algorithms is Grover's Ordered Search Problem (OSP), the quantum counterpart of the classical binary search algorithm, which utilizes oracle functions to produce a specified value within an ordered database. Classically, the optimal algorithm is known to have log2(N) complexity; however, Grover's algorithm has been found to have an optimal complexity between the lower bound of ((ln N - 1)/π ≈ 0.221 log2(N)) and the upper bound of 0.433 log2(N). We sought to lower the known upper bound of the OSP. Following Farhi et al. [MIT-CTP 2815 (1999), arXiv:quant-ph/9901059], we see that the OSP can be resolved into a translationally invariant algorithm to create quantum query algorithm constraints. With these constraints, one can find Laurent polynomials for various numbers of queries k and database sizes N, thus finding larger recursive sets to solve the OSP and effectively reducing the upper bound. These polynomials are found to be convex functions, allowing one to make use of convex optimization to find an improvement on the known bounds. According to Childs et al. [Phys. Rev. A 75 (2007) 032335], semidefinite programming, a subset of convex optimization, can solve the particular problem represented by the constraints. We implemented a program following their formulation of a semidefinite program (SDP) and found that it takes an immense amount of storage and time to compute. To combat this setback, we then formulated an approach that improves the results of the SDP using matrix sparsity. Through the development of this approach, along with an implementation of a rudimentary solver, we demonstrate how matrix sparsity reduces the amount of time and storage required to compute the SDP, overall ensuring that further improvements will likely be made toward the theorized lower bound.
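
    The storage argument is the easiest part to make concrete. The sketch below only compares dense versus compressed-sparse-row storage for a randomly generated matrix of illustrative size and density; it is not the authors' SDP solver or their actual constraint set.

```python
# Sketch of the storage argument only (not the authors' SDP solver): a sparse
# constraint matrix is far cheaper to hold in CSR form than as a dense array.
# The size and density below are illustrative assumptions.
import scipy.sparse as sp

n, density = 4000, 0.002
A_sparse = sp.random(n, n, density=density, format="csr", random_state=0)
A_dense = A_sparse.toarray()

dense_bytes = A_dense.nbytes
sparse_bytes = A_sparse.data.nbytes + A_sparse.indices.nbytes + A_sparse.indptr.nbytes
print(f"dense : {dense_bytes / 1e6:.1f} MB")
print(f"sparse: {sparse_bytes / 1e6:.3f} MB  ({dense_bytes / sparse_bytes:.0f}x smaller)")
```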

  1. Sample Complexity Bounds for Differentially Private Learning

    PubMed Central

    Chaudhuri, Kamalika; Hsu, Daniel

    2013-01-01

    This work studies the problem of privacy-preserving classification – namely, learning a classifier from sensitive data while preserving the privacy of individuals in the training set. In particular, the learning algorithm is required in this problem to guarantee differential privacy, a very strong notion of privacy that has gained significant attention in recent years. A natural question to ask is: what is the sample requirement of a learning algorithm that guarantees a certain level of privacy and accuracy? We address this question in the context of learning with infinite hypothesis classes when the data is drawn from a continuous distribution. We first show that even for very simple hypothesis classes, any algorithm that uses a finite number of examples and guarantees differential privacy must fail to return an accurate classifier for at least some unlabeled data distributions. This result is unlike the case with either finite hypothesis classes or discrete data domains, in which distribution-free private learning is possible, as previously shown by Kasiviswanathan et al. (2008). We then consider two approaches to differentially private learning that get around this lower bound. The first approach is to use prior knowledge about the unlabeled data distribution in the form of a reference distribution chosen independently of the sensitive data. Given such a reference distribution, we provide an upper bound on the sample requirement that depends (among other things) on a measure of closeness between the reference distribution and the unlabeled data distribution. Our upper bound applies to the non-realizable as well as the realizable case. The second approach is to relax the privacy requirement, by requiring only label-privacy – namely, that only the labels (and not the unlabeled parts of the examples) be considered sensitive information. An upper bound on the sample requirement of learning with label privacy was shown by Chaudhuri et al. (2006); in this work, we show a lower bound. PMID:25285183

  2. Selective determination of aluminum bound with tannin in tea infusion.

    PubMed

    Erdemoğlu, Sema B; Güçer, Seref

    2005-08-01

    In this study, an analytical method for indirect measurement of Al bound with tannin in tea infusion was studied. This method utilizes the ability of the tannins to precipitate with protein. Separation conditions were investigated using model solutions. This method is uncomplicated, inexpensive and suitable for real samples. About 34% of the total Al in brew extracted from commercially available teas was bound to condensed and hydrolyzable tannins.

  3. Analytical transition-matrix treatment of electric multipole polarizabilities of hydrogen-like atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kharchenko, V.F., E-mail: vkharchenko@bitp.kiev.ua

    2015-04-15

    The direct transition-matrix approach to the description of the electric polarization of a quantum bound system of particles is used to determine the electric multipole polarizabilities of hydrogen-like atoms. It is shown that, in the case of a bound system formed by the Coulomb interaction, the corresponding inhomogeneous integral equation determining an off-shell scattering function, which consistently describes virtual multiple scattering, can be solved exactly analytically for all electric multipole polarizabilities. Our method reproduces the known Dalgarno-Lewis formula for the electric multipole polarizabilities of the hydrogen atom in the ground state and can also be applied to determine the polarizability of the atom in excited bound states. - Highlights: • A new description of the electric polarization of hydrogen-like atoms. • Expressions for multipole polarizabilities in terms of off-shell scattering functions. • Derivation of the integral equation determining the off-shell scattering function. • Rigorous analytic solution of the integral equations for both ground and excited states. • Study of the contributions of virtual multiple scattering to electric polarizabilities.

  4. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

    We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. The approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation, since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive the feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model with several normal distribution functions, such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that L1-norm minimization provides a more accurate and reliable solution than L2-norm minimization.
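
    To make the prune-and-branch logic concrete, the toy sketch below runs a branch-and-bound search for a one-parameter L1 fit, using an interval-style lower bound (the sum of per-point residual minima over the box). It is only an illustration of the strategy named above, not the Cook-Torrance BRDF estimator or the paper's linear-programming bound.

```python
# Toy sketch of the branch-and-bound idea above, on a 1-parameter L1 fit
# (not the Cook-Torrance BRDF model): boxes whose interval lower bound on the
# L1 error exceeds the best feasible value found so far are pruned.
import numpy as np

x = np.linspace(0.1, 1.0, 20)
y = 2.7 * x + 0.05 * np.sin(25 * x)          # synthetic data, true slope ~2.7

def l1(p):                                    # feasible value at a point (upper bound)
    return np.abs(p * x - y).sum()

def lower_bound(a, b):                        # valid lower bound of l1 over [a, b]
    ra, rb = a * x - y, b * x - y             # residuals are monotone in p
    per_point = np.where(ra * rb <= 0, 0.0, np.minimum(np.abs(ra), np.abs(rb)))
    return per_point.sum()                    # min of sum >= sum of per-point minima

boxes, best_p, best_val = [(0.0, 5.0)], None, np.inf
while boxes:
    a, b = boxes.pop()
    if lower_bound(a, b) >= best_val:         # prune this box
        continue
    mid = 0.5 * (a + b)
    if l1(mid) < best_val:                    # improve the incumbent
        best_p, best_val = mid, l1(mid)
    if b - a > 1e-4:                          # otherwise branch
        boxes += [(a, mid), (mid, b)]

print(f"globally optimal slope ~ {best_p:.4f}, L1 error = {best_val:.4f}")
```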

  5. Spread of entanglement and causality

    NASA Astrophysics Data System (ADS)

    Casini, Horacio; Liu, Hong; Mezei, Márk

    2016-07-01

    We investigate causality constraints on the time evolution of entanglement entropy after a global quench in relativistic theories. We first provide a general proof that the so-called tsunami velocity is bounded by the speed of light. We then generalize the free particle streaming model of [1] to general dimensions and to an arbitrary entanglement pattern of the initial state. In more than two spacetime dimensions the spread of entanglement in these models is highly sensitive to the initial entanglement pattern, but we are able to prove an upper bound on the normalized rate of growth of entanglement entropy, and hence the tsunami velocity. The bound is smaller than what one gets for quenches in holographic theories, which highlights the importance of interactions in the spread of entanglement in many-body systems. We propose an interacting model which we believe provides an upper bound on the spread of entanglement for interacting relativistic theories. In two spacetime dimensions with multiple intervals, this model and its variations are able to reproduce intricate results exhibited by holographic theories for a significant part of the parameter space. For higher dimensions, the model bounds the tsunami velocity at the speed of light. Finally, we construct a geometric model for entanglement propagation based on a tensor network construction for global quenches.

  6. Ion wake field effects on the dust-ion-acoustic surface mode in a semi-bounded Lorentzian dusty plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Myoung-Jae; Jung, Young-Dae, E-mail: ydjung@hanyang.ac.kr; Department of Physics, Applied Physics, and Astronomy, Rensselaer Polytechnic Institute, 110 8th Street, Troy, New York 12180-3590

    The dispersion relation for the dust ion-acoustic surface waves propagating at the interface of a semi-bounded Lorentzian dusty plasma with supersonic ion flow has been kinetically derived to investigate the nonthermal property and the ion wake field effect. We found that the supersonic ion flow creates an upper and a lower mode. The increase in the nonthermal particles decreases the wave frequency for the upper mode, whereas it increases the frequency for the lower mode. The increase in the supersonic ion flow velocity is found to enhance the wave frequency for both modes. The increase in nonthermal particles is also found to enhance the group velocity of the upper mode; however, the nonthermal particles suppress the group velocity of the lower mode. The nonthermal effects on the group velocity are reduced in the small- or large-wavelength limits.

  7. Current Collection in a Magnetic Field

    NASA Technical Reports Server (NTRS)

    Krivorutsky, E. N.

    1997-01-01

    It is found that the upper-bound limit for current collection in the case of a strong magnetic field from the current is close to that given by the Parker-Murphy formula. This conclusion is consistent with the results obtained in laboratory experiments. This limit depends only weakly on the shape of the wire. The adiabatic limit in this case will be easily surpassed due to strong magnetic field gradients near the separatrix. The calculations can be done using the kinetic equation in the drift approximation. Analytical results are obtained for the region where the Earth's magnetic field is dominant. The current collection can be calculated (neglecting scattering) using a particle simulation code. Dr. Singh has agreed to collaborate, allowing the use of his particle code. The code can be adapted for the case when the current magnetic field is strong. The time needed for these modifications is 3-4 months. The analytical description and the essential part of the program are prepared for the calculation of the current in the region where the adiabatic description can be used. This was completed with the collaboration of Drs. Khazanov and Liemohn. A scheme for measuring the end-body position is also proposed. The scheme was discussed in the laboratory (with Dr. Stone), and it was concluded that it can be proposed for engineering analysis.

  8. Bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations

    DOE PAGES

    Azunre, P.

    2016-09-21

    In this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring the solution of auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical example of bound construction and its use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.
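
    A toy version of the interval-bound idea can be written down for a scalar parametric ODE, where propagating the endpoint dynamics already gives a valid enclosure. The sketch below is only that toy illustration (differential inequalities for a single state with one uncertain parameter); it is not the paper's construction for parabolic PDEs, and the model and parameter range are assumptions.

```python
# Toy sketch of interval bounding for a *scalar* parametric ODE (not the
# paper's parabolic-PDE construction): for x' = 1 - p*x with p in [p_lo, p_hi]
# and x >= 0, the endpoint systems x_L' = 1 - p_hi*x_L and x_U' = 1 - p_lo*x_U
# enclose every parametric trajectory starting from the same initial state.
p_lo, p_hi = 0.5, 1.5
dt, T = 1e-3, 5.0
n = int(T / dt)

xL = xU = x_mid = 0.0                     # common initial condition
p_mid = 1.0                               # one sample trajectory to check
for _ in range(n):
    xL += dt * (1.0 - p_hi * xL)          # lower bounding trajectory
    xU += dt * (1.0 - p_lo * xU)          # upper bounding trajectory
    x_mid += dt * (1.0 - p_mid * x_mid)   # sample trajectory with p = 1.0

print(f"bounds at t = {T}: [{xL:.4f}, {xU:.4f}], sample x(T) = {x_mid:.4f}")
```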

  9. On the multiple zeros of a real analytic function with applications to the averaging theory of differential equations

    NASA Astrophysics Data System (ADS)

    García, Isaac A.; Llibre, Jaume; Maza, Susanna

    2018-06-01

    In this work we consider real analytic functions whose argument ranges over Ω, a bounded open subset of a Euclidean space, and over an interval containing the origin, and which depend on parameters λ and on a small parameter ε. We study the branching of the zero-set of such functions at multiple points when the parameter ε varies. We apply the obtained results to improve the classical averaging theory for computing T-periodic solutions of λ-families of analytic T-periodic ordinary differential equations, using the displacement functions defined by these equations. We call the coefficients in the Taylor expansion of the displacement function in powers of ε the averaged functions. The main contribution consists in analyzing the role played by the multiple zeros of the first non-zero averaged function. The outcome is that these multiple zeros can be of two different classes depending on whether or not the zeros belong to the analytic set defined by the real variety associated with the ideal generated by the averaged functions in the Noetherian ring of all the real analytic functions at that point. We bound the maximum number of branches of isolated zeros that can bifurcate from each multiple zero z_0. Sometimes these bounds depend on the cardinalities of minimal bases of the former ideal. Several examples illustrate our results, and they are compared with the classical theory, branching theory, and also in the light of singularity theory of smooth maps. The examples range from polynomial vector fields to Abel differential equations and perturbed linear centers.

  10. Bounds on quantum confinement effects in metal nanoparticles

    NASA Astrophysics Data System (ADS)

    Blackman, G. Neal; Genov, Dentcho A.

    2018-03-01

    Quantum size effects on the permittivity of metal nanoparticles are investigated using the quantum box model. Explicit upper and lower bounds are derived for the permittivity and relaxation rates due to quantum confinement effects. These bounds are verified numerically, and the size dependence and frequency dependence of the empirical Drude size parameter is extracted from the model. Results suggest that the common practice of empirically modifying the dielectric function can lead to inaccurate predictions for highly uniform distributions of finite-sized particles.

  11. Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m_0 binary memory cells and k_0 (> m_0) inputs, a state diagram of 2^(k_0) states was used for the transfer function bound. A reduced state diagram of (2^(m_0) + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.

  12. Highly excited bound-state resonances of short-range inverse power-law potentials

    NASA Astrophysics Data System (ADS)

    Hod, Shahar

    2017-11-01

    We study analytically the radial Schrödinger equation with long-range attractive potentials whose asymptotic behaviors are dominated by inverse power-law tails of the form V(r) = -β_n r^{-n} with n > 2. In particular, assuming that the effective radial potential is characterized by a short-range infinitely repulsive core of radius R, we derive a compact analytical formula for the threshold energy E_l^max = E_l^max(n, β_n, R), which characterizes the most weakly bound-state resonance (the most excited energy level) of the quantum system.

  13. Localization of the eigenvalues of linear integral equations with applications to linear ordinary differential equations.

    NASA Technical Reports Server (NTRS)

    Sloss, J. M.; Kranzler, S. K.

    1972-01-01

    The equivalence of a considered integral equation form with an infinite system of linear equations is proved, and the localization of the eigenvalues of the infinite system is expressed. Error estimates are derived, and the problems of finding upper bounds and lower bounds for the eigenvalues are solved simultaneously.

  14. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.

  15. When clusters collide: constraints on antimatter on the largest scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steigman, Gary, E-mail: steigman@mps.ohio-state.edu

    2008-10-15

    Observations have ruled out the presence of significant amounts of antimatter in the Universe on scales ranging from the solar system, to the Galaxy, to groups and clusters of galaxies, and even to distances comparable to the scale of the present horizon. Except for the model-dependent constraints on the largest scales, the most significant upper limits to diffuse antimatter in the Universe are those on the ~Mpc scale of clusters of galaxies provided by the EGRET upper bounds to annihilation gamma rays from galaxy clusters whose intracluster gas is revealed through its x-ray emission. On the scale of individual clusters of galaxies, the upper bounds to the fraction of mixed matter and antimatter for the 55 clusters from a flux-limited x-ray survey range from 5 × 10^-9 to < 1 × 10^-6, strongly suggesting that individual clusters of galaxies are made entirely of matter or of antimatter. X-ray and gamma-ray observations of colliding clusters of galaxies, such as the Bullet Cluster, permit these constraints to be extended to even larger scales. If the observations of the Bullet Cluster, where the upper bound to the antimatter fraction is found to be < 3 × 10^-6, can be generalized to other colliding clusters of galaxies, cosmologically significant amounts of antimatter will be excluded on scales of order ~20 Mpc (M ~ 5 × 10^15 M_sun).

  16. Multi-soliton interaction of a generalized Schrödinger-Boussinesq system in a magnetized plasma

    NASA Astrophysics Data System (ADS)

    Zhao, Xue-Hui; Tian, Bo; Chai, Jun; Wu, Xiao-Yu; Guo, Yong-Jiang

    2017-04-01

    Under investigation in this paper is a generalized Schrödinger-Boussinesq system, which describes the stationary propagation of coupled upper-hybrid waves and magnetoacoustic waves in a magnetized plasma. Bilinear forms and one-, two- and three-soliton solutions are derived by virtue of the Hirota method and symbolic computation. Propagation and interaction of the solitons are illustrated graphically: the coefficients β1 and β2 can affect the velocities and propagation directions of the solitary waves. Amplitude, velocity and shape of the one-soliton solution remain invariant during propagation, implying that the transport of energy is stable in the upper-hybrid and magnetoacoustic waves, and the amplitude of the upper-hybrid wave is larger than that of the magnetoacoustic wave. For the upper-hybrid and magnetoacoustic waves, head-on, overtaking and bound-state interactions between two solitary waves are asymptotically depicted, indicating that the interaction between the two solitary waves is elastic. Elastic interaction between a bound-state soliton and a single soliton is also displayed, and the interactions among the three solitary waves are all elastic.

  17. On the Coriolis effect in acoustic waveguides.

    PubMed

    Wegert, Henry; Reindl, Leonard M; Ruile, Werner; Mayer, Andreas P

    2012-05-01

    Rotation of an elastic medium gives rise to a shift of the frequencies of its acoustic modes, i.e., the time-periodic vibrations that exist in it. This frequency shift is investigated by applying perturbation theory in the regime of small ratios of the rotation rate to the frequency of the acoustic mode. In an expansion of the relative frequency shift in powers of this ratio, upper bounds are derived for the first-order and second-order terms. The derivation of the theoretical upper bounds for the first-order term is presented for linear vibration modes as well as for stable nonlinear vibrations with periodic time dependence that can be represented by a Fourier series.

  18. Asymptotics of the evolution semigroup associated with a scalar field in the presence of a non-linear electromagnetic field

    NASA Astrophysics Data System (ADS)

    Albeverio, Sergio; Tamura, Hiroshi

    2018-04-01

    We consider a model describing the coupling of a vector-valued and a scalar homogeneous Markovian random field over R4, interpreted as expressing the interaction between a charged scalar quantum field coupled with a nonlinear quantized electromagnetic field. Expectations of functionals of the random fields are expressed by Brownian bridges. Using this, together with Feynman-Kac-Itô type formulae and estimates on the small time and large time behaviour of Brownian functionals, we prove asymptotic upper and lower bounds on the kernel of the transition semigroup for our model. The upper bound gives faster than exponential decay for large distances of the corresponding resolvent (propagator).

  19. The upper bounds of reduced axial and shear moduli in cross-ply laminates with matrix cracks

    NASA Technical Reports Server (NTRS)

    Lee, Jong-Won; Allen, D. H.; Harris, C. E.

    1991-01-01

    The present study proposes a mathematical model utilizing the internal state variable concept for predicting the upper bounds of the reduced axial and shear stiffnesses in cross-ply laminates with matrix cracks. The displacement components at the matrix crack surfaces are explicitly expressed in terms of the observable axial and shear strains and the undamaged material properties. The reduced axial and shear stiffnesses are predicted for glass/epoxy and graphite/epoxy laminates. Comparison of the model with other theoretical and experimental studies is also presented to confirm direct applicability of the model to angle-ply laminates with matrix cracks subjected to general in-plane loading.

  20. SURE reliability analysis: Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; White, Allan L.

    1988-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  1. Orbital Motions and the Conservation-Law/Preferred-Frame α_3 Parameter

    NASA Astrophysics Data System (ADS)

    Iorio, Lorenzo

    2014-09-01

    We analytically calculate some orbital effects induced by the Lorentz-invariance/momentum-conservation parameterized post-Newtonian (PPN) parameter α_3 in a gravitationally bound binary system made of a primary orbited by a test particle. We neither restrict ourselves to any particular orbital configuration nor to specific orientations of the primary's spin axis ψ. We use our results to put preliminary upper bounds on α_3 in the weak-field regime by using the latest data from Solar System planetary dynamics. By linearly combining the supplementary perihelion precessions Δϖ of the Earth, Mars and Saturn, determined by astronomers with the Ephemerides of Planets and the Moon (EPM) 2011 ephemerides for the general relativistic values of the PPN parameters β = γ = 1, we infer |α_3| ≲ 6 × 10^-10. Our result is about three orders of magnitude better than the previous weak-field constraints existing in the literature and of the same order of magnitude as the constraint expected from the future BepiColombo mission to Mercury. It is, by construction, independent of the other preferred-frame PPN parameters α_1, α_2, both preliminarily constrained down to the ≈ 10^-6 level. Future analyses should be performed by explicitly including α_3 and a selection of other PPN parameters in the models fitted by astronomers to the observations and estimating them in dedicated covariance analyses.

  2. More N =4 superconformal bootstrap

    NASA Astrophysics Data System (ADS)

    Beem, Christopher; Rastelli, Leonardo; van Rees, Balt C.

    2017-08-01

    In this long overdue second installment, we continue to develop the conformal bootstrap program for N =4 superconformal field theories (SCFTs) in four dimensions via an analysis of the correlation function of four stress-tensor supermultiplets. We review analytic results for this correlator and make contact with the SCFT/chiral algebra correspondence of Beem et al. [Commun. Math. Phys. 336, 1359 (2015), 10.1007/s00220-014-2272-x]. We demonstrate that the constraints of unitarity and crossing symmetry require the central charge c to be greater than or equal to 3 /4 in any interacting N =4 SCFT. We apply numerical bootstrap methods to derive upper bounds on scaling dimensions and operator product expansion coefficients for several low-lying, unprotected operators as a function of the central charge. We interpret our bounds in the context of N =4 super Yang-Mills theories, formulating a series of conjectures regarding the embedding of the conformal manifold—parametrized by the complexified gauge coupling—into the space of scaling dimensions and operator product expansion coefficients. Our conjectures assign a distinguished role to points on the conformal manifold that are self-dual under a subgroup of the S -duality group. This paper contains a more detailed exposition of a number of results previously reported in Beem et al. [Phys. Rev. Lett. 111, 071601 (2013), 10.1103/PhysRevLett.111.071601] in addition to new results.

  3. Low energy theorems and the unitarity bounds in the extra U(1) superstring inspired E{sub 6} models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, N.K.; Saxena, Pranav; Nagawat, Ashok K.

    2005-11-01

    The conventional method using low energy theorems derived by Chanowitz et al. [Phys. Rev. Lett. 57, 2344 (1986)] does not seem to lead to an explicit unitarity limit in the scattering processes of longitudinally polarized gauge bosons for the high energy case in the extra U(1) superstring inspired models, commonly known as the η model, emanating from the E_6 group of superstring theory. We have made use of an alternative procedure given by Durand and Lopez [Phys. Lett. B 217, 463 (1989)], which is applicable to supersymmetric grand unified theories. Explicit unitarity bounds on the superpotential couplings (identified as Yukawa couplings) are obtained both from the unitarity constraints and from a renormalization group equation (RGE) analysis at the one-loop level utilizing critical-coupling concepts, implying divergence of the scalar coupling at M_G. These are found to be consistent with finiteness over the entire range M_Z ≤ √s ≤ M_G, i.e., from the grand unification scale to the weak scale. For completeness, the same approach has been applied to the other models emanating from E_6, i.e., the χ, ψ, and ν models, and it is found that at the weak scale the unitarity bounds on the Yukawa couplings do not differ significantly among the E_6 extra U(1) models, except for the case of the χ model in the 16 representation. For the E_6-η model (β_E ≅ 9.64), the analysis using the unitarity constraints leads to the following bounds on various parameters: λ_t(max)(M_Z) = 1.294, λ_b(max)(M_Z) = 1.278, λ_H(max)(M_Z) = 0.955, λ_D(max)(M_Z) = 1.312. The analytical analysis of the RGE at the one-loop level provides the following critical bounds on the superpotential couplings: λ_t,c(M_Z) ≅ 1.295, λ_b,c(M_Z) ≅ 1.279, λ_H,c(M_Z) ≅ 0.968, λ_D,c(M_Z) ≅ 1.315. Thus the superpotential coupling values obtained by the two approaches are in good agreement. Using the unitarity-constrained superpotential couplings, we have obtained theoretical bounds on physical mass parameters, as follows: (i) an absolute upper bound on the top quark mass m_t ≤ 225 GeV; (ii) an upper bound on the lightest neutral Higgs boson mass at tree level of m_{H_2^0}^{tree} ≤ 169 GeV, which becomes m_{H_2^0} ≤ 229 GeV after inclusion of the one-loop radiative correction when λ_t ≠ λ_b at the grand unified theory scale; these bounds are m_{H_2^0}^{tree} ≤ 159 GeV and m_{H_2^0} ≤ 222 GeV, respectively, when λ_t = λ_b at the grand unified theory scale. A plausible range for the D-quark mass as a function of the mass scale M_{Z_2} is m_D ≈ O(3 TeV) for M_{Z_2} ≈ O(1 TeV) for the favored values of tan β ≤ 1. The bounds on the aforesaid physical parameters in the case of the χ, ψ, and ν models in the 27 representation are almost identical to those of the η model and are consistent with present-day experimental precision measurements.

  4. Constraining the generalized uncertainty principle with the atomic weak-equivalence-principle test

    NASA Astrophysics Data System (ADS)

    Gao, Dongfeng; Wang, Jin; Zhan, Mingsheng

    2017-04-01

    Various models of quantum gravity imply Planck-scale modifications of Heisenberg's uncertainty principle into a so-called generalized uncertainty principle (GUP). The GUP effects on high-energy physics, cosmology, and astrophysics have been extensively studied. Here, we focus on the weak-equivalence-principle (WEP) violation induced by the GUP. Results from the WEP test with the 85Rb-87Rb dual-species atom interferometer are used to set upper bounds on parameters in two GUP proposals. A 10^45-level bound on the Kempf-Mangano-Mann proposal and a 10^27-level bound on Maggiore's proposal, which are consistent with bounds from other experiments, are obtained. All these bounds have huge room for improvement in the future.

  5. PubMed

    Trinker, Horst

    2011-10-28

    We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.

  6. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.

  7. Scalable L-infinite coding of meshes.

    PubMed

    Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter

    2010-01-01

    The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, it enables a fast real-time implementation of the rate allocation, and it preserves all the scalability features and animation capabilities of the employed scalable mesh codec.

  8. Adaptive nonsingular fast terminal sliding-mode control for the tracking problem of uncertain dynamical systems.

    PubMed

    Boukattaya, Mohamed; Mezghani, Neila; Damak, Tarak

    2018-06-01

    In this paper, robust and adaptive nonsingular fast terminal sliding-mode (NFTSM) control schemes for the trajectory tracking problem are proposed for the cases of known and unknown upper bounds of the system uncertainty and external disturbances. The developed controllers take advantage of the NFTSM theory to ensure a fast convergence rate, singularity avoidance, and robustness against uncertainties and external disturbances. First, a robust NFTSM controller is proposed which guarantees that the sliding surface and the equilibrium point can be reached in a short finite time from any initial state. Then, in order to cope with the unknown upper bound of the system uncertainty which may occur in practical applications, a new adaptive NFTSM algorithm is developed. One feature of the proposed control laws is their adaptation technique, which does not require prior knowledge of the parameter uncertainties and disturbances; instead, the adaptive tuning law estimates the upper bound of these uncertainties using only position and velocity measurements. Moreover, the proposed controller eliminates the chattering effect without losing robustness or precision. Stability analysis is performed using Lyapunov stability theory, and simulation studies are conducted to verify the effectiveness of the developed control schemes. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Effects of general relativity on glitch amplitudes and pulsar mass upper bounds

    NASA Astrophysics Data System (ADS)

    Antonelli, M.; Montoli, A.; Pizzochero, P. M.

    2018-04-01

    Pinning of vortex lines in the inner crust of a spinning neutron star may be the mechanism that enhances the differential rotation of the internal neutron superfluid, making it possible to freeze some amount of angular momentum which eventually can be released, thus causing a pulsar glitch. We investigate the general relativistic corrections to pulsar glitch amplitudes in the slow-rotation approximation, consistently with the stratified structure of the star. We thus provide a relativistic generalization of a previous Newtonian model that was recently used to estimate upper bounds on the masses of glitching pulsars. We find that the effect of general relativity on the glitch amplitudes obtained by emptying the whole angular momentum reservoir is less than 30 per cent. Moreover, we show that the Newtonian upper bounds on the masses of large glitchers obtained from observations of their maximum recorded event differ by less than a few percent from those calculated within the relativistic framework. This work can also serve as a basis to construct more sophisticated models of angular momentum reservoir in a relativistic context: in particular, we present two alternative scenarios for macroscopically rigid and slack pinned vortex lines, and we generalize the Feynman-Onsager relation to the case when both entrainment coupling between the fluids and a strong axisymmetric gravitational field are present.

  10. Upper bounds of deformation in the Upper Rhine Graben from GPS data - First results from GURN (GNSS Upper Rhine Graben Network)

    NASA Astrophysics Data System (ADS)

    Masson, Frederic; Knoepfler, Andreas; Mayer, Michael; Ulrich, Patrice; Heck, Bernhard

    2010-05-01

    In September 2008, the Institut de Physique du Globe de Strasbourg (Ecole et Observatoire des Sciences de la Terre, EOST) and the Geodetic Institute (GIK) of Karlsruhe University (TH) established a transnational cooperation called GURN (GNSS Upper Rhine Graben Network). Within the GURN initiative these institutions are cooperating in order to establish a highly precise and highly sensitive network of permanently operating GNSS sites for the detection of crustal movements in the Upper Rhine Graben region. At the beginning, the network consisted of the permanently operating GNSS sites of SAPOS®-Baden-Württemberg, different data providers in France (e.g. EOST, Teria, RGP) and some further sites (e.g. IGS). In July 2009, the network was extended to the south when swisstopo (Switzerland) joined GURN, and to the north when SAPOS®-Rheinland-Pfalz joined. The network therefore currently consists of approx. 80 permanently operating reference sites. The presentation will discuss the current status of GURN and its main research goals, and will present first results concerning data quality as well as time series from a first reprocessing of all available data since 2002 using GAMIT/GLOBK (EOST working group) and the Bernese GPS Software (GIK working group). Based on these time series, the velocity and strain fields will be calculated in the future. The GURN initiative also aims to estimate the upper bounds of deformation in the Upper Rhine Graben region.

  11. A Reduced Basis Method with Exact-Solution Certificates for Symmetric Coercive Equations

    DTIC Science & Technology

    2013-11-06

    the energy associated with the infinite-dimensional weak solution of parametrized symmetric coercive partial differential equations with piecewise... builds bounds with respect to the infinite-dimensional weak solution, aims to entirely remove the issue of the "truth" within the certified reduced basis... framework. We in particular introduce a reduced basis method that provides rigorous upper and lower bounds...

  12. The Economic Cost of Methamphetamine Use in the United States, 2005

    ERIC Educational Resources Information Center

    Nicosia, Nancy; Pacula, Rosalie Liccardo; Kilmer, Beau; Lundberg, Russell; Chiesa, James

    2009-01-01

    This first national estimate suggests that the economic cost of methamphetamine (meth) use in the United States reached $23.4 billion in 2005. Given the uncertainty in estimating the costs of meth use, this book provides a lower-bound estimate of $16.2 billion and an upper-bound estimate of $48.3 billion. The analysis considers a wide range of…

  13. Paramagnetic or diamagnetic persistent currents? A topological point of view

    NASA Astrophysics Data System (ADS)

    Waintal, Xavier

    2009-03-01

    A persistent current flows at low temperatures in small conducting rings when they are threaded by a magnetic flux. I will discuss the sign of this persistent current (diamagnetic or paramagnetic response) in the special case of N electrons in a one dimensional ring [1]. One dimension is very special in the sense that the sign of the persistent current is entirely controlled by the topology of the system. I will establish lower bounds for the free energy in the presence of arbitrary electron-electron interactions and external potentials. Those bounds are the counterparts of upper bounds derived by Leggett using another topological argument. Rings with odd (even) numbers of polarized electrons are always diamagnetic (paramagnetic). The situation is more interesting with unpolarized electrons where Leggett upper bound breaks down: rings with N=4n exhibit either paramagnetic behavior or a superconductor-like current-phase relation. The topological argument provides a rigorous justification for the phenomenological Huckel rule which states that cyclic molecules with 4n + 2 electrons like benzene are aromatic while those with 4n electrons are not. [4pt] [1] Xavier Waintal, Geneviève Fleury, Kyryl Kazymyrenko, Manuel Houzet, Peter Schmitteckert, and Dietmar Weinmann Phys. Rev. Lett.101, 106804 (2008).

  14. The accuracy of less: Natural bounds explain why quantity decreases are estimated more accurately than quantity increases.

    PubMed

    Chandon, Pierre; Ordabayeva, Nailya

    2017-02-01

    Five studies show that people, including experts such as professional chefs, estimate quantity decreases more accurately than quantity increases. We argue that this asymmetry occurs because physical quantities cannot be negative. Consequently, there is a natural lower bound (zero) when estimating decreasing quantities but no upper bound when estimating increasing quantities, which can theoretically grow to infinity. As a result, the "accuracy of less" disappears (a) when a numerical or a natural upper bound is present when estimating quantity increases, or (b) when people are asked to estimate the (unbounded) ratio of change from 1 size to another for both increasing and decreasing quantities. Ruling out explanations related to loss aversion, symbolic number mapping, and the visual arrangement of the stimuli, we show that the "accuracy of less" influences choice and demonstrate its robustness in a meta-analysis that includes previously published results. Finally, we discuss how the "accuracy of less" may explain asymmetric reactions to the supersizing and downsizing of food portions, some instances of the endowment effect, and asymmetries in the perception of increases and decreases in physical and psychological distance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Bounds on area and charge for marginally trapped surfaces with a cosmological constant

    NASA Astrophysics Data System (ADS)

    Simon, Walter

    2012-03-01

    We sharpen the known inequalities AΛ ⩽ 4π(1 - g) (Hayward et al 1994 Phys. Rev. D 49 5080; Woolgar 1999 Class. Quantum Grav. 16 3005) and A ⩾ 4πQ^2 (Dain et al 2012 Class. Quantum Grav. 29 035013) between the area A and the electric charge Q of a stable marginally outer-trapped surface (MOTS) of genus g in the presence of a cosmological constant Λ. In particular, instead of requiring stability we include the principal eigenvalue λ of the stability operator. For Λ* = Λ + λ > 0, we obtain a lower and an upper bound for Λ*A in terms of Λ*Q^2, as well as the upper bound Q ⩽ 1/(2√Λ*) for the charge, which reduces to Q ⩽ 1/(2√Λ) in the stable case λ ⩾ 0. For Λ* < 0, there only remains a lower bound on A. In the spherically symmetric, static, stable case, one of our area inequalities is saturated iff the surface gravity vanishes. We also discuss implications of our inequalities for ‘jumps’ and mergers of charged MOTS.

  16. Back pressure based multicast scheduling for fair bandwidth allocation.

    PubMed

    Sarkar, Saswati; Tassiulas, Leandros

    2005-09-01

    We study the fair allocation of bandwidth in multicast networks with multirate capabilities. In multirate transmission, each source encodes its signal in layers. The lowest layer contains the most important information and all receivers of a session should receive it. If a receiver's data path has additional bandwidth, it receives higher layers which leads to a better quality of reception. The bandwidth allocation objective is to distribute the layers fairly. We present a computationally simple, decentralized scheduling policy that attains the maxmin fair rates without using any knowledge of traffic statistics and layer bandwidths. This policy learns the congestion level from the queue lengths at the nodes, and adapts the packet transmissions accordingly. When the network is congested, packets are dropped from the higher layers; therefore, the more important lower layers suffer negligible packet loss. We present analytical and simulation results that guarantee the maxmin fairness of the resulting rate allocation, and upper bound the packet loss rates for different layers.
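
    The max-min fair objective itself can be illustrated with the classic progressive-filling computation, which raises all session rates together and freezes the sessions crossing a saturated link. The sketch below is only that reference computation on a made-up two-link topology; it is not the paper's decentralized back-pressure scheduler, and it ignores layering and queue dynamics.

```python
# Sketch of the max-min fair objective only (progressive filling over a set of
# capacitated links), not the paper's back-pressure scheduling policy.
# Sessions and capacities below are illustrative.
def maxmin_fair(sessions, capacity):
    """sessions: dict name -> list of links used; capacity: dict link -> Mb/s."""
    rate = {s: 0.0 for s in sessions}
    frozen = set()
    cap = dict(capacity)
    while len(frozen) < len(sessions):
        # links still carrying at least one unfrozen session
        active = {l: [s for s in sessions if l in sessions[s] and s not in frozen]
                  for l in cap}
        active = {l: ss for l, ss in active.items() if ss}
        # raise all unfrozen sessions equally until some link saturates
        l_star = min(active, key=lambda l: cap[l] / len(active[l]))
        inc = cap[l_star] / len(active[l_star])
        for s in sessions:
            if s not in frozen:
                rate[s] += inc
        for l in active:
            cap[l] -= inc * len(active[l])
        frozen |= set(active[l_star])          # sessions on the saturated link stop growing
    return rate

links = {"A": 10.0, "B": 6.0}
sessions = {"s1": ["A"], "s2": ["A", "B"], "s3": ["B"]}
print(maxmin_fair(sessions, links))            # expected: s2 = s3 = 3, s1 = 7
```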

  17. Monitoring the in-situ oxide growth on uranium by ultraviolet-visible reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Schweke, Danielle; Maimon, Chen; Chernia, Zelig; Livneh, Tsachi

    2012-11-01

    We demonstrate the in-situ monitoring of oxide growth on U-0.1 wt. % Cr by means of UV-visible reflectance spectroscopy in the thickness range of ~20-150 nm. Two different approaches are presented: in the "modeling approach," we employ a model for a metallic substrate covered by a dielectric layer, taking into account the buildup of an oxygen gradient and surface roughness, and then fit the simulated spectra to the experimental ones. In the "extrema analysis," we derive an approximate analytical expression which relates the oxide thickness to the positions of the extrema in the reflectance spectra, based on the condition for optical interference of the reflected light. Good agreement is found between the values extracted by the two procedures. An activation energy of ~21 kcal/mol was obtained by monitoring the oxide growth in the temperature range of 22-90 °C. The upper bound for the thickness determination is argued to be mostly dictated by cracking and detachment processes in the formed oxide.
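
    A stripped-down version of the "extrema analysis" can be written in a few lines: at normal incidence and for a wavelength-independent oxide refractive index n, two adjacent reflectance extrema at wavelengths λ1 and λ2 give a thickness estimate d = 1/(4n|1/λ1 - 1/λ2|). The sketch below uses an assumed index and assumed extremum positions purely for illustration; the paper's analysis additionally accounts for the oxygen gradient and surface roughness that this ignores.

```python
# Minimal version of the "extrema analysis" above: at normal incidence and for
# a wavelength-independent oxide index n, adjacent reflectance extrema at
# wavelengths lam1 and lam2 give the film thickness d = 1/(4*n*|1/lam1 - 1/lam2|).
# The index value and wavelengths below are assumptions for illustration.
n_oxide = 2.4                   # assumed refractive index of the oxide layer
lam1, lam2 = 400e-9, 700e-9     # assumed adjacent extremum wavelengths (m)

d = 1.0 / (4.0 * n_oxide * abs(1.0 / lam1 - 1.0 / lam2))
print(f"estimated oxide thickness ~ {d * 1e9:.0f} nm")
```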

  18. Simulations of nearly extremal binary black holes

    NASA Astrophysics Data System (ADS)

    Giesler, Matthew; Scheel, Mark; Hemberger, Daniel; Lovelace, Geoffrey; Kuper, Kevin; Boyle, Michael; Szilagyi, Bela; Kidder, Lawrence; SXS Collaboration

    2015-04-01

    Astrophysical black holes could have nearly extremal spins; therefore, nearly extremal black holes could be among the binaries that current and future gravitational-wave observatories will detect. Predicting the gravitational waves emitted by merging black holes requires numerical-relativity simulations, but these simulations are especially challenging when one or both holes have mass m and spin S exceeding the Bowen-York limit of S/m^2 = 0.93. Using improved methods we simulate an unequal-mass, precessing binary black hole coalescence, where the larger black hole has S/m^2 = 0.99. We also use these methods to simulate a nearly extremal non-precessing binary black hole coalescence, where both black holes have S/m^2 = 0.994, nearly reaching the Novikov-Thorne upper bound for holes spun up by thin accretion disks. We demonstrate numerical convergence and estimate the numerical errors of the waveforms; we compare numerical waveforms from our simulations with post-Newtonian and effective-one-body waveforms; and we compare the evolution of the black-hole masses and spins with analytic predictions.

  19. Melting Behavior of a Model Molecular Crystalline GeI4

    NASA Astrophysics Data System (ADS)

    Fuchizaki, Kazuhiro; Asano, Yuta

    2015-06-01

    A model molecular crystalline GeI4 was examined using molecular dynamics simulation. The model was constructed in such a way that rigid tetrahedral molecules interact with each other via Lennard-Jones potentials whose centers are located at the vertices of a tetrahedron. Because no other interaction that can "soften" the intermolecular interaction was introduced, the melting curve of the model crystalline material does not exhibit the anomaly that was found for the real substance. However, the current investigation is useful in that it can settle the upper bound of pressure below which the model can predict properties of the molecular liquid. Moreover, the singularity-free nature of the melting curve allowed us to treat the melting curve analytically in the light of the Kumari-Dass-Kechin equation. As a result, we could definitely conclude that the well-known Simon equation for the melting curve is merely an approximate expression. The condition for the validity of Simon's equation was identified.
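
    For reference, the Simon (Simon-Glatzel) form discussed above is usually written P = a[(Tm/T0)^c - 1] along the melting curve. The sketch below simply fits that form to synthetic melting points with scipy; the reference temperature, parameter values, and "data" are invented for illustration and are not the GeI4 simulation results.

```python
# Sketch: least-squares fit of the Simon(-Glatzel) melting relation
# P = a * ((Tm / T0)**c - 1) to synthetic melting points.  The "data" here are
# generated, not the GeI4 simulation results.
import numpy as np
from scipy.optimize import curve_fit

T0 = 350.0                                   # assumed zero-pressure melting T (K)

def simon(Tm, a, c):
    return a * ((Tm / T0) ** c - 1.0)

Tm = np.linspace(T0, 550.0, 12)
P_obs = simon(Tm, 1.2, 4.0) + 0.02 * np.random.default_rng(1).normal(size=Tm.size)

(a_fit, c_fit), _ = curve_fit(simon, Tm, P_obs, p0=(1.0, 3.0))
print(f"fitted Simon parameters: a = {a_fit:.3f} GPa, c = {c_fit:.3f}")
```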

  20. Thermodynamic efficiency of learning a rule in neural networks

    NASA Astrophysics Data System (ADS)

    Goldt, Sebastian; Seifert, Udo

    2017-11-01

    Biological systems have to build models from their sensory input data that allow them to efficiently process previously unseen inputs. Here, we study a neural network learning a binary classification rule for these inputs from examples provided by a teacher. We analyse the ability of the network to apply the rule to new inputs, that is, to generalise from past experience. Using stochastic thermodynamics, we show that the thermodynamic costs of the learning process provide an upper bound on the amount of information that the network is able to learn from its teacher, for both batch and online learning. This allows us to introduce a thermodynamic efficiency of learning. We analytically compute the dynamics and the efficiency of a noisy neural network performing online learning in the thermodynamic limit. In particular, we analyse three popular learning algorithms, namely Hebbian, Perceptron and AdaTron learning. Our work extends the methods of stochastic thermodynamics to a new type of learning problem and might form a suitable basis for investigating the thermodynamics of decision-making.
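
    A bare-bones teacher-student simulation of two of the learning rules named above (Hebbian and Perceptron), measuring the generalisation error from the teacher-student overlap. The input dimension, number of examples and update scaling are arbitrary illustrative choices, and none of the thermodynamic bookkeeping of the paper is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps = 1000, 20000

teacher = rng.standard_normal(N)
teacher /= np.linalg.norm(teacher)          # unit-norm teacher weight vector

def generalization_error(student):
    """For Gaussian inputs, eps = arccos(overlap)/pi with the teacher."""
    overlap = student @ teacher / (np.linalg.norm(student) + 1e-12)
    return np.arccos(np.clip(overlap, -1.0, 1.0)) / np.pi

J_hebb, J_perc = np.zeros(N), np.zeros(N)
for _ in range(steps):
    x = rng.standard_normal(N)
    y = np.sign(teacher @ x)                # teacher's label
    J_hebb += y * x / N                     # Hebbian: always update
    if np.sign(J_perc @ x) != y:            # Perceptron: update on mistakes only
        J_perc += y * x / N

print("Hebbian    eps =", round(generalization_error(J_hebb), 3))
print("Perceptron eps =", round(generalization_error(J_perc), 3))
```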

  1. Uncertainties in extracted parameters of a Gaussian emission line profile with continuum background.

    PubMed

    Minin, Serge; Kamalabadi, Farzad

    2009-12-20

    We derive analytical equations for the uncertainties in parameters extracted by nonlinear least-squares fitting of a Gaussian emission function with an unknown continuum background component in the presence of additive white Gaussian noise. The derivation is based on the inversion of the full curvature matrix (equivalent to the Fisher information matrix) of the least-squares error, χ^2, in a four-variable fitting parameter space. The derived uncertainty formulas (equivalent to Cramér-Rao error bounds) are found to be in good agreement with the numerically computed uncertainties from a large ensemble of simulated measurements. The derived formulas can be used for estimating minimum achievable errors for a given signal-to-noise ratio and for investigating some aspects of measurement setup trade-offs and optimization. While the intended application is Fabry-Perot spectroscopy for wind and temperature measurements in the upper atmosphere, the derivation is generic and applicable to other spectroscopy problems with a Gaussian line shape.
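
    A compact sketch of the recipe underlying such formulas (Fisher-matrix inversion for additive white Gaussian noise), here with a numerical Jacobian instead of the paper's closed-form derivatives; the model parameters and noise level are placeholders.

```python
import numpy as np

def model(x, A, x0, w, b):
    """Gaussian emission line (amplitude A, centre x0, width w) on a flat background b."""
    return A * np.exp(-0.5 * ((x - x0) / w) ** 2) + b

def cramer_rao_bounds(x, theta, noise_std, h=1e-6):
    """1-sigma Cramer-Rao bounds from the Fisher matrix for white Gaussian noise."""
    theta = np.asarray(theta, dtype=float)
    J = np.empty((x.size, theta.size))
    for j in range(theta.size):                 # numerical Jacobian of the model
        dp = np.zeros_like(theta); dp[j] = h
        J[:, j] = (model(x, *(theta + dp)) - model(x, *(theta - dp))) / (2 * h)
    fisher = J.T @ J / noise_std**2
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

x = np.linspace(-5.0, 5.0, 200)
bounds = cramer_rao_bounds(x, theta=[1.0, 0.0, 0.8, 0.2], noise_std=0.05)
print(dict(zip(["A", "x0", "w", "b"], bounds.round(4))))
```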

  2. Mothers’ Night Work and Children’s Behavior Problems

    PubMed Central

    Dunifon, Rachel; Kalil, Ariel; Crosby, Danielle; Su, Jessica Houston

    2013-01-01

    Many mothers work in jobs with nonstandard schedules (i.e., schedules that involve work outside of the traditional 9–5, Monday through Friday schedule); this is particularly true for economically disadvantaged mothers. The present paper uses longitudinal data from the Fragile Families and Child Wellbeing Survey (n = 2,367 mothers of children ages 3–5) to examine the associations between maternal nonstandard work and children’s behavior problems, with a particular focus on mothers’ night shift work. We employ three analytic strategies that take various approaches to adjusting for observed and unobserved selection factors; these approaches provide an upper and lower bound on the true relationship between night shift work and children’s behavior. Taken together, the results provide suggestive evidence for modest associations between exposure to maternal night shift work and higher levels of aggressive and anxious/depressed behavior in children, compared with children whose mothers are not working, those whose mothers work other types of nonstandard shifts, and, for aggressive behavior, those whose mothers work standard shifts. PMID:23294148

  3. Mothers' night work and children's behavior problems.

    PubMed

    Dunifon, Rachel; Kalil, Ariel; Crosby, Danielle A; Su, Jessica Houston

    2013-10-01

    Many mothers work in jobs with nonstandard schedules (i.e., schedules that involve work outside of the traditional 9-5, Monday through Friday schedule); this is particularly true for economically disadvantaged mothers. In the present article, we used longitudinal data from the Fragile Families and Child Wellbeing Survey (n = 2,367 mothers of children ages 3-5 years) to examine the associations between maternal nonstandard work and children's behavior problems, with a particular focus on mothers' night shift work. We employed 3 analytic strategies with various approaches to adjusting for observed and unobserved selection factors; these approaches provided an upper and lower bound on the true relationship between night shift work and children's behavior. Taken together, the results provide suggestive evidence for modest associations between exposure to maternal night shift work and higher levels of aggressive and anxious or depressed behavior in children compared with children whose mothers are not working, those whose mothers work other types of nonstandard shifts, and, for aggressive behavior, those whose mothers work standard shifts.

  4. Cascades on a stochastic pulse-coupled network

    NASA Astrophysics Data System (ADS)

    Wray, C. M.; Bishop, S. R.

    2014-09-01

    While much recent research has focused on understanding isolated cascades of networks, less attention has been given to dynamical processes on networks exhibiting repeated cascades of opposing influence. An example of this is the dynamic behaviour of financial markets where cascades of buying and selling can occur, even over short timescales. To model these phenomena, a stochastic pulse-coupled oscillator network with upper and lower thresholds is described and analysed. Numerical confirmation of asynchronous and synchronous regimes of the system is presented, along with analytical identification of the fixed point state vector of the asynchronous mean field system. A lower bound for the finite system mean field critical value of network coupling probability is found that separates the asynchronous and synchronous regimes. For the low-dimensional mean field system, a closed-form equation is found for cascade size, in terms of the network coupling probability. Finally, a description of how this model can be applied to interacting agents in a financial market is provided.

  5. 2D CFT partition functions at late times

    NASA Astrophysics Data System (ADS)

    Dyer, Ethan; Gur-Ari, Guy

    2017-08-01

    We consider the late time behavior of the analytically continued partition function Z(β + it) Z(β − it) in holographic 2d CFTs. This is a probe of information loss in such theories and in their holographic duals. We show that each Virasoro character decays in time, and so information is not restored at the level of individual characters. We identify a universal decaying contribution at late times, and conjecture that it describes the behavior of generic chaotic 2d CFTs out to times that are exponentially large in the central charge. It was recently suggested that at sufficiently late times one expects a crossover to random matrix behavior. We estimate an upper bound on the crossover time, which suggests that the decay is followed by a parametrically long period of late time growth. Finally, we discuss gravitationally-motivated integrable theories and show how information is restored at late times by a series of characters. This hints at a possible bulk mechanism, where information is restored by an infinite sum over non-perturbative saddles.

  6. Optimum analysis of a Brownian refrigerator.

    PubMed

    Luo, X G; Liu, N; He, J Z

    2013-02-01

    A Brownian refrigerator with the cold and hot reservoirs alternating along a space coordinate is established. The heat flux couples with the movement of the Brownian particles due to an external force in the spatially asymmetric but periodic potential. After using the Arrhenius factor to describe the behaviors of the forward and backward jumps of the particles, the expressions for the coefficient of performance (COP) and cooling rate are derived analytically. Then, through maximizing the product of conversion efficiency and heat flux flowing out, a new upper bound depending only on the temperature ratio of the cold and hot reservoirs is found numerically in the reversible situation, and it is a little larger than the so-called Curzon and Ahlborn COP ε_CA = 1/√(1 − τ) − 1, where τ is the cold-to-hot reservoir temperature ratio. After considering the irreversible factor owing to the kinetic energy change of the moving particles, we find the optimized COP is smaller than ε_CA and the external force even does negative work on the Brownian particles when they jump from the cold to the hot reservoir.
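
    For orientation, the quoted bound can be compared directly with the reversible (Carnot) refrigerator COP; the temperatures below are arbitrary example values.

```python
import math

def cop_curzon_ahlborn_like(T_cold, T_hot):
    """The bound quoted above: eps_CA = 1/sqrt(1 - tau) - 1, tau = T_cold/T_hot."""
    tau = T_cold / T_hot
    return 1.0 / math.sqrt(1.0 - tau) - 1.0

def cop_carnot(T_cold, T_hot):
    """Reversible refrigerator COP for comparison."""
    return T_cold / (T_hot - T_cold)

print(cop_curzon_ahlborn_like(270.0, 300.0), cop_carnot(270.0, 300.0))  # ~2.16 vs 9.0
```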

  7. Cascades on a stochastic pulse-coupled network

    PubMed Central

    Wray, C. M.; Bishop, S. R.

    2014-01-01

    While much recent research has focused on understanding isolated cascades of networks, less attention has been given to dynamical processes on networks exhibiting repeated cascades of opposing influence. An example of this is the dynamic behaviour of financial markets where cascades of buying and selling can occur, even over short timescales. To model these phenomena, a stochastic pulse-coupled oscillator network with upper and lower thresholds is described and analysed. Numerical confirmation of asynchronous and synchronous regimes of the system is presented, along with analytical identification of the fixed point state vector of the asynchronous mean field system. A lower bound for the finite system mean field critical value of network coupling probability is found that separates the asynchronous and synchronous regimes. For the low-dimensional mean field system, a closed-form equation is found for cascade size, in terms of the network coupling probability. Finally, a description of how this model can be applied to interacting agents in a financial market is provided. PMID:25213626

  8. Collective motion in prolate γ-rigid nuclei within minimal length concept via a quantum perturbation method

    NASA Astrophysics Data System (ADS)

    Chabab, M.; El Batoul, A.; Lahbas, A.; Oulne, M.

    2018-05-01

    Based on the minimal length concept, inspired by Heisenberg algebra, a closed analytical formula is derived for the energy spectrum of the prolate γ-rigid Bohr-Mottelson Hamiltonian of nuclei, within a quantum perturbation method (QPM), by considering a scaled Davidson potential in the β shape variable. In the resulting solution, called X(3)-D-ML, the ground state and the first β band are studied as functions of the free parameters. Introducing the minimal length concept together with a QPM makes the model very flexible and a powerful approach to describe nuclear collective excitations of a variety of vibrational-like nuclei. The introduction of scaling parameters in the Davidson potential enables us to obtain a physical minimum of the latter, in comparison with previous works. The analysis of the corrected wave function, as well as the probability density distribution, shows that the minimal length parameter has a physical upper bound limit.

  9. Structural Controllability and Controlling Centrality of Temporal Networks

    PubMed Central

    Pan, Yujian; Li, Xiang

    2014-01-01

    Temporal networks are networks where nodes and interactions may appear and disappear at various time scales. With the evidence of the ubiquity of temporal networks in our economy, nature and society, it is urgent and significant to focus on their structural controllability and the corresponding characteristics, which is still an untouched topic. We develop graphic tools to study the structural controllability and its characteristics, identifying the intrinsic mechanism of the ability of individuals to control a dynamic and large-scale temporal network. Classifying temporal trees of a temporal network into different types, we give (both upper and lower) analytical bounds of the controlling centrality, which are verified by numerical simulations of both artificial and empirical temporal networks. We find that the positive relationship between aggregated degree and controlling centrality, as well as the scale-free distribution of a node's controlling centrality, are virtually independent of the time scale and the type of dataset, indicating the inherent robustness and heterogeneity of the controlling centrality of nodes within temporal networks. PMID:24747676

  10. Self-resonance after inflation: Oscillons, transients, and radiation domination

    NASA Astrophysics Data System (ADS)

    Lozanov, Kaloian D.; Amin, Mustafa A.

    2018-01-01

    Homogeneous oscillations of the inflaton after inflation can be unstable to small spatial perturbations even without coupling to other fields. We show that for inflaton potentials ∝ |ϕ|^(2n) near |ϕ| = 0 and flatter beyond some |ϕ| = M, the inflaton condensate oscillations can lead to self-resonance, followed by its complete fragmentation. We find that for nonquadratic minima (n > 1), shortly after backreaction, the equation of state parameter, w → 1/3. If M ≪ m_Pl, radiation domination is established within less than an e-fold of expansion after the end of inflation. In this case self-resonance is efficient and the condensate fragments into transient, localised spherical objects which are unstable and decay, leaving behind them a virialized field with mean kinetic and gradient energies much greater than the potential energy. This end-state yields w = 1/3. When M ~ m_Pl we observe slow and steady self-resonance that can last many e-folds before backreaction eventually shuts it off, followed by fragmentation and w → 1/3. We provide analytical estimates for the duration to w → 1/3 after inflation, which can be used as an upper bound (under certain assumptions) on the duration of the transition between the inflationary and the radiation dominated states of expansion. This upper bound can reduce uncertainties in CMB observables such as the spectral tilt n_s and the tensor-to-scalar ratio r. For quadratic minima (n = 1), w → 0 regardless of the value of M. This is because when M ≪ m_Pl, long-lived oscillons form within an e-fold after inflation, and collectively behave as pressureless dust thereafter. For M ~ m_Pl, the self-resonance is inefficient and the condensate remains intact (ignoring long-term gravitational clustering) and keeps oscillating about the quadratic minimum, again implying w = 0.

  11. Analogue modelling of inclined, brittle-ductile transpression: Testing analytical models through natural shear zones (external Betics)

    NASA Astrophysics Data System (ADS)

    Barcos, L.; Díaz-Azpiroz, M.; Balanyá, J. C.; Expósito, I.; Jiménez-Bonilla, A.; Faccenna, C.

    2016-07-01

    The combination of analytical and analogue models gives new opportunities to better understand the kinematic parameters controlling the evolution of transpression zones. In this work, we carried out a set of analogue models using the kinematic parameters of transpressional deformation obtained by applying a general triclinic transpression analytical model to a tabular-shaped shear zone in the external Betic Chain (Torcal de Antequera massif). According to the results of the analytical model, we used two oblique convergence angles to reproduce the main structural and kinematic features of the structural domains observed within the Torcal de Antequera massif (α = 15° for the outer domains and α = 30° for the inner domain). Two parallel inclined backstops (one fixed and the other mobile) reproduce the geometry of the shear zone walls of the natural case. Additionally, we applied the digital particle image velocimetry (PIV) method to calculate the velocity field of the incremental deformation. Our results suggest that the spatial distribution of the main structures observed in the Torcal de Antequera massif reflects different modes of strain partitioning and strain localization between two domain types, which are related to the variation in the oblique convergence angle and the presence of steep planar velocity and rheological discontinuities (the shear zone walls in the natural case). In the 15° model, strain partitioning is simple and strain localization is high: a single narrow shear zone develops close and parallel to the fixed backstop, bounded by strike-slip faults and internally deformed by R and P shears. In the 30° model, strain partitioning is strong, generating regularly spaced oblique-to-the-backstop thrusts and strike-slip faults. At the final stages of the 30° experiment, deformation affects the entire model box. Our results show that applying analytical modelling to natural transpressive zones related to upper crustal deformation helps constrain the geometrical parameters of analogue models.

  12. Perturbative unitarity constraints on the NMSSM Higgs Sector

    DOE PAGES

    Betre, Kassahun; El Hedri, Sonia; Walker, Devin G. E.

    2017-11-11

    We place perturbative unitarity constraints on both the dimensionful and dimensionless parameters in the Next-to-Minimal Supersymmetric Standard Model (NMSSM) Higgs Sector. These constraints, plus the requirement that the singlino and/or Higgsino constitutes at least part of the observed dark matter relic abundance, generate upper bounds on the Higgs, neutralino and chargino mass spectrum. Requiring higher-order corrections to be no more than 41% of the tree-level value, we obtain an upper bound of 20 TeV for the heavy Higgses and 12 TeV for the charginos and neutralinos outside defined fine-tuned regions. If the corrections are no more than 20% of the tree-level value, the bounds are 7 TeV for the heavy Higgses and 5 TeV for the charginos and neutralinos. Finally, using the NMSSM as a template, we describe a method which replaces naturalness arguments with more rigorous perturbative unitarity arguments to get a better understanding of when new physics will appear.

  13. An upper bound on the radius of a highly electrically conducting lunar core

    NASA Technical Reports Server (NTRS)

    Hobbs, B. A.; Hood, L. L.; Herbert, F.; Sonett, C. P.

    1983-01-01

    Parker's (1980) nonlinear inverse theory for the electromagnetic sounding problem is converted to a form suitable for analysis of lunar day-side transfer function data by: (1) transforming the solution in plane geometry to that in spherical geometry; and (2) transforming the theoretical lunar transfer function in the dipole limit to an apparent resistivity function. The theory is applied to the revised lunar transfer function data set of Hood et al. (1982), which extends in frequency from 10 to the -5th to 10 to the -3rd Hz. On the assumption that an iron-rich lunar core, whether molten or solid, can be represented by a perfect conductor at the minimum sampled frequency, an upper bound of 435 km on the maximum radius of such a core is calculated. This bound is somewhat larger than values of 360-375 km previously estimated from the same data set via forward model calculations because the prior work did not consider all possible mantle conductivity functions.

  14. An upper bound on the particle-laden dependency of shear stresses at solid-fluid interfaces

    NASA Astrophysics Data System (ADS)

    Zohdi, T. I.

    2018-03-01

    In modern advanced manufacturing processes, such as three-dimensional printing of electronics, fine-scale particles are added to a base fluid yielding a modified fluid. For example, in three-dimensional printing, particle-functionalized inks are created by adding particles to freely flowing solvents forming a mixture, which is then deposited onto a surface, which upon curing yields desirable solid properties, such as thermal conductivity, electrical permittivity and magnetic permeability. However, wear at solid-fluid interfaces within the machinery walls that deliver such particle-laden fluids is typically attributed to the fluid-induced shear stresses, which increase with the volume fraction of added particles. The objective of this work is to develop a rigorous strict upper bound for the tolerable volume fraction of particles that can be added, while remaining below a given stress threshold at a fluid-solid interface. To illustrate the bound's utility, the expression is applied to a series of classical flow regimes.
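
    The abstract does not state the bound itself, so the following is only a hedged illustration of the kind of estimate involved, not Zohdi's result: assuming a Newtonian wall shear stress τ = η(φ)·γ̇ and the dilute Einstein viscosity correction η(φ) = η0(1 + 2.5φ), one can solve for the largest admissible particle volume fraction below a stress threshold. All numerical values are hypothetical.

```python
def max_particle_volume_fraction(tau_max, eta0, shear_rate):
    """Illustrative volume-fraction bound under the dilute Einstein viscosity
    model eta(phi) = eta0*(1 + 2.5*phi) and Newtonian wall shear stress
    tau = eta(phi)*shear_rate. Not the bound derived in the paper."""
    phi = (tau_max / (eta0 * shear_rate) - 1.0) / 2.5
    return max(0.0, min(phi, 1.0))

# Example: base-fluid viscosity 0.1 Pa*s, wall shear rate 1000 1/s, 150 Pa threshold
print(max_particle_volume_fraction(tau_max=150.0, eta0=0.1, shear_rate=1000.0))  # 0.2
```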

  15. Perturbative unitarity constraints on the NMSSM Higgs Sector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Betre, Kassahun; El Hedri, Sonia; Walker, Devin G. E.

    We place perturbative unitarity constraints on both the dimensionful and dimensionless parameters in the Next-to-Minimal Supersymmetric Standard Model (NMSSM) Higgs Sector. These constraints, plus the requirement that the singlino and/or Higgsino constitutes at least part of the observed dark matter relic abundance, generate upper bounds on the Higgs, neutralino and chargino mass spectrum. Requiring higher-order corrections to be no more than 41% of the tree-level value, we obtain an upper bound of 20 TeV for the heavy Higgses and 12 TeV for the charginos and neutralinos outside defined fine-tuned regions. If the corrections are no more than 20% of the tree-level value, the bounds are 7 TeV for the heavy Higgses and 5 TeV for the charginos and neutralinos. Finally, using the NMSSM as a template, we describe a method which replaces naturalness arguments with more rigorous perturbative unitarity arguments to get a better understanding of when new physics will appear.

  16. Analytical study of bound states in graphene nanoribbons and carbon nanotubes: The variable phase method and the relativistic Levinson theorem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miserev, D. S., E-mail: d.miserev@student.unsw.edu.au, E-mail: erazorheader@gmail.com

    2016-06-15

    The problem of localized states in 1D systems with a relativistic spectrum, namely, graphene stripes and carbon nanotubes, is studied analytically. The bound state as a superposition of two chiral states is completely described by their relative phase, which is the foundation of the variable phase method (VPM) developed herein. Based on our VPM, we formulate and prove the relativistic Levinson theorem. The problem of bound states can be reduced to the analysis of closed trajectories of some vector field. Remarkably, the Levinson theorem appears as the Poincaré index theorem for these closed trajectories. The VPM equation is also reduced to the nonrelativistic and semiclassical limits. The limit of a small momentum p_y of transverse quantization is applicable to an arbitrary integrable potential. In this case, a single confined mode is predicted.

  17. Spatial and temporal patterns in concentrations of perfluorinated compounds in bald eagle nestlings in the Upper Midwestern United States

    USGS Publications Warehouse

    Route, William T.; Russell, Robin E.; Lindstrom, Andrew B.; Strynor, Mark J.; Key, Rebecca L.

    2014-01-01

    Perfluorinated chemicals (PFCs) are of concern due to their widespread use, persistence in the environment, tendency to accumulate in animal tissues, and growing evidence of toxicity. Between 2006 and 2011 we collected blood plasma from 261 bald eagle nestlings in six study areas from the upper Midwestern United States. Samples were assessed for levels of 16 different PFCs. We used regression analysis in a Bayesian framework to evaluate spatial and temporal trends for these analytes. We found levels as high as 7370 ng/mL for the sum of all 16 PFCs (∑PFCs). Perfluorooctanesulfonate (PFOS) and perfluorodecanesulfonate (PFDS) were the most abundant analytes, making up 67% and 23% of the PFC burden, respectively. Levels of ∑PFC, PFOS, and PFDS were highest in more urban and industrial areas, moderate on Lake Superior, and low on the remote upper St. Croix River watershed. We found evidence of declines in ∑PFCs and seven analytes, including PFOS, PFDS, and perfluorooctanoic acid (PFOA); no trend in two analytes; and increases in two analytes. We argue that PFDS, a long-chained PFC with potential for high bioaccumulation and toxicity, should be considered for future animal and human studies.

  18. Determination of the Acid Dissociation Constant of a Phenolic Acid by High Performance Liquid Chromatography: An Experiment for the Upper Level Analytical Chemistry Laboratory

    ERIC Educational Resources Information Center

    Raboh, Ghada

    2018-01-01

    A high performance liquid chromatography (HPLC) experiment for the upper level analytical chemistry laboratory is described. The students consider the effect of mobile-phase composition and pH on the retention times of ionizable compounds in order to determine the acid dissociation constant, K[subscript a], of a phenolic acid. Results are analyzed…
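
    A sketch of the data reduction such an experiment typically ends with: fitting the standard sigmoidal retention-versus-pH model for a monoprotic acid to recover pKa. The model form is conventional, but the retention data below are hypothetical and the published laboratory procedure may differ in detail.

```python
import numpy as np
from scipy.optimize import curve_fit

def retention_vs_pH(pH, k_HA, k_A, pKa):
    """Retention factor as an ionization-weighted average of the neutral (k_HA)
    and ionized (k_A) forms of a monoprotic acid."""
    frac_ionized = 10.0 ** (pH - pKa) / (1.0 + 10.0 ** (pH - pKa))
    return k_HA * (1.0 - frac_ionized) + k_A * frac_ionized

# Hypothetical retention factors measured at several mobile-phase pH values
pH = np.array([2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0])
k = np.array([8.1, 7.9, 7.2, 5.6, 3.4, 1.9, 1.3, 1.1])

popt, _ = curve_fit(retention_vs_pH, pH, k, p0=[8.0, 1.0, 4.0])
print("estimated pKa = %.2f" % popt[2])
```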

  19. Quantum Dynamical Applications of Salem's Theorem

    NASA Astrophysics Data System (ADS)

    Damanik, David; Del Rio, Rafael

    2009-07-01

    We consider the survival probability of a state that evolves according to the Schrödinger dynamics generated by a self-adjoint operator H. We deduce from a classical result of Salem that upper bounds for the Hausdorff dimension of a set supporting the spectral measure associated with the initial state imply lower bounds on a subsequence of time scales for the survival probability. This general phenomenon is illustrated with applications to the Fibonacci operator and the critical almost Mathieu operator. In particular, this gives the first quantitative dynamical bound for the critical almost Mathieu operator.

  20. Volumes and intrinsic diameters of hypersurfaces

    NASA Astrophysics Data System (ADS)

    Paeng, Seong-Hun

    2015-09-01

    We estimate the volume and the intrinsic diameter of a hypersurface M with geometric information of a hypersurface which is parallel to M at distance T. It can be applied to the Riemannian Penrose inequality to obtain a lower bound of the total mass of a spacetime. Also it can be used to obtain upper bounds of the volume and the intrinsic diameter of the celestial r-sphere without a lower bound of the sectional curvature. We extend our results to metric-measure spaces by using the Bakry-Emery Ricci tensor.

  1. The SURE reliability analysis program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  2. The SURE Reliability Analysis Program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  3. The digital computer as a metaphor for the perfect laboratory experiment: Loophole-free Bell experiments

    NASA Astrophysics Data System (ADS)

    De Raedt, Hans; Michielsen, Kristel; Hess, Karl

    2016-12-01

    Using Einstein-Podolsky-Rosen-Bohm experiments as an example, we demonstrate that the combination of a digital computer and algorithms, as a metaphor for a perfect laboratory experiment, provides solutions to problems of the foundations of physics. Employing discrete-event simulation, we present a counterexample to John Bell's remarkable "proof" that any theory of physics, which is both Einstein-local and "realistic" (counterfactually definite), results in a strong upper bound to the correlations that are being measured in Einstein-Podolsky-Rosen-Bohm experiments. Our counterexample, which is free of the so-called detection-, coincidence-, memory-, and contextuality loophole, violates this upper bound and fully agrees with the predictions of quantum theory for Einstein-Podolsky-Rosen-Bohm experiments.

  4. Event-based recursive filtering for a class of nonlinear stochastic parameter systems over fading channels

    NASA Astrophysics Data System (ADS)

    Shen, Yuxuan; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.

    2018-07-01

    In this paper, the recursive filtering problem is studied for a class of time-varying nonlinear systems with stochastic parameter matrices. The measurement transmission between the sensor and the filter is conducted through a fading channel characterized by the Rice fading model. An event-based transmission mechanism is adopted to decide whether the sensor measurement should be transmitted to the filter. A recursive filter is designed such that, in the simultaneous presence of the stochastic parameter matrices and fading channels, the filtering error covariance is guaranteed to have an upper bound, and this upper bound is then minimized by appropriately choosing the filter gain matrix. Finally, a simulation example is presented to demonstrate the effectiveness of the proposed filtering scheme.
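
    A stripped-down sketch of the event-based idea in a plain Kalman-filter setting: the measurement is used only when the innovation exceeds a threshold, otherwise the filter simply predicts. The paper's filter additionally handles stochastic parameter matrices and Rice fading, none of which is modelled here, and all matrices below are toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])     # state transition
C = np.array([[1.0, 0.0]])                 # measurement matrix
Q = 0.01 * np.eye(2)                       # process noise covariance
R = np.array([[0.04]])                     # measurement noise covariance
delta = 0.5                                # event-triggering threshold

x, x_hat, P = np.zeros(2), np.zeros(2), np.eye(2)
for k in range(100):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x + rng.multivariate_normal(np.zeros(1), R)

    x_hat = A @ x_hat                      # prediction step
    P = A @ P @ A.T + Q

    innovation = y - C @ x_hat
    if abs(innovation[0]) > delta:         # event: the measurement is transmitted
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x_hat = x_hat + (K @ innovation).ravel()
        P = (np.eye(2) - K @ C) @ P

print("final estimation error:", np.round(x - x_hat, 3))
print("trace of filtering error covariance:", round(np.trace(P), 3))
```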

  5. Combinatorial complexity of pathway analysis in metabolic networks.

    PubMed

    Klamt, Steffen; Stelling, Jörg

    2002-01-01

    Elementary flux mode analysis is a promising approach for a pathway-oriented perspective of metabolic networks. However, in larger networks it is hampered by the combinatorial explosion of possible routes. In this work we give some estimations on the combinatorial complexity including theoretical upper bounds for the number of elementary flux modes in a network of a given size. In a case study, we computed the elementary modes in the central metabolism of Escherichia coli while utilizing four different substrates. Interestingly, although the number of modes occurring in this complex network can exceed half a million, it is still far below the upper bound. Hence, to a certain extent, pathway analysis of central catabolism is feasible to assess network properties such as flexibility and functionality.

  6. A one-dimensional model of solid-earth electrical resistivity beneath Florida

    USGS Publications Warehouse

    Blum, Cletus; Love, Jeffrey J.; Pedrie, Kolby; Bedrosian, Paul A.; Rigler, E. Joshua

    2015-11-19

    An estimated one-dimensional layered model of electrical resistivity beneath Florida was developed from published geological and geophysical information. The resistivity of each layer is represented by plausible upper and lower bounds as well as a geometric mean resistivity. Corresponding impedance transfer functions, Schmucker-Weidelt transfer functions, apparent resistivity, and phase responses are calculated for inducing geomagnetic frequencies ranging from 10^−5 to 10^0 hertz. The resulting one-dimensional model and response functions can be used to make general estimates of time-varying electric fields associated with geomagnetic storms such as might represent induction hazards for electric-power grid operation. The plausible upper- and lower-bound resistivity structures show the uncertainty, giving a wide range of plausible time-varying electric fields.
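
    A short sketch of how such a layered model is turned into impedance, apparent-resistivity and phase responses, using the standard 1-D plane-wave (magnetotelluric) impedance recursion; the three-layer resistivities and thicknesses below are illustrative and are not the Florida model of the report.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def mt_response_1d(rho, h, freq):
    """Apparent resistivity (ohm-m) and impedance phase (deg) at the surface of a
    1-D layered earth. rho: layer resistivities, last entry the terminating
    half-space; h: thicknesses (m) of the layers above the half-space."""
    omega = 2.0 * np.pi * freq
    k = np.sqrt(1j * omega * MU0 / np.asarray(rho, dtype=complex))  # sqrt(i*w*mu0*sigma)
    Z = 1j * omega * MU0 / k[-1]               # impedance of the basal half-space
    for j in range(len(h) - 1, -1, -1):        # recurse upward through the layers
        Z0 = 1j * omega * MU0 / k[j]
        t = np.tanh(k[j] * h[j])
        Z = Z0 * (Z + Z0 * t) / (Z0 + Z * t)
    return abs(Z) ** 2 / (omega * MU0), np.degrees(np.angle(Z))

# Illustrative three-layer column: 100 ohm-m over 10 ohm-m over a 1000 ohm-m half-space
for f in [1e-4, 1e-3, 1e-2, 1e-1]:
    rho_a, phase = mt_response_1d([100.0, 10.0, 1000.0], [2000.0, 5000.0], f)
    print(f, round(rho_a, 1), round(phase, 1))
```

    For a uniform half-space the recursion returns the half-space resistivity and a 45° phase, which is a convenient sanity check.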

  7. Rebuttal to "On the distribution of the modulus of Gabor wavelet coefficients and the upper bound of the dimensionless smoothness index in the case of additive Gaussian noises: Revisited" by Dong Wang, Qiang Zhou, and Kwok-Leung Tsui

    NASA Astrophysics Data System (ADS)

    Soltani Bozchalooi, Iman; Liang, Ming

    2018-04-01

    A discussion paper entitled "On the distribution of the modulus of Gabor wavelet coefficients and the upper bound of the dimensionless smoothness index in the case of additive Gaussian noises: revisited" by Dong Wang, Qiang Zhou, and Kwok-Leung Tsui has been brought to our attention recently. This discussion paper (hereafter called the Wang et al. paper) is based on arguments that are fundamentally incorrect, which we rebut within this commentary. However, as the flaws in the arguments proposed by Wang et al. are clear, we will keep this rebuttal as brief as possible.

  8. Two Upper Bounds for the Weighted Path Length of Binary Trees. Report No. UIUCDCS-R-73-565.

    ERIC Educational Resources Information Center

    Pradels, Jean Louis

    Rooted binary trees with weighted nodes are structures encountered in many areas, such as coding theory, searching and sorting, information storage and retrieval. The path length is a meaningful quantity which gives indications about the expected time of a search or the length of a code, for example. In this paper, two sharp bounds for the total…

  9. The Mystery of Io's Warm Polar Regions: Implications for Heat Flow

    NASA Technical Reports Server (NTRS)

    Matson, D. L.; Veeder, G. J.; Johnson, T. V.; Blaney, D. L.; Davies, A. G.

    2002-01-01

    Unexpectedly warm polar temperatures further support the idea that Io is covered virtually everywhere by cooling lava flows. This implies a new heat flow component. Io's heat flow remains constrained between a lower bound of approximately 2.5 W m^−2 and an upper bound of approximately 13 W m^−2. Additional information is contained in the original extended abstract.

  10. Risk aversion and risk seeking in multicriteria forest management: a Markov decision process approach

    Treesearch

    Joseph Buongiorno; Mo Zhou; Craig Johnston

    2017-01-01

    Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value.  The other method used the certainty...

  11. Verifying the error bound of numerical computation implemented in computer systems

    DOEpatents

    Sawada, Jun

    2013-03-12

    A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment and converts, for each segment, a polynomial of bounded functions for the segment to a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment and reports the segments that violate a bounding condition.

  12. Search for invisible decays of a Higgs boson using vector-boson fusion in pp collisions at √s = 8 TeV with the ATLAS detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.; Abbott, B.; Abdallah, J.

    2016-01-28

    A search for a Higgs boson produced via vector-boson fusion and decaying into invisible particles is presented, using 20.3 fb^−1 of proton-proton collision data at a centre-of-mass energy of 8 TeV recorded by the ATLAS detector at the LHC. For a Higgs boson with a mass of 125 GeV, assuming the Standard Model production cross section, an upper bound of 0.28 is set on the branching fraction of H → invisible at 95% confidence level, where the expected upper limit is 0.31. Furthermore, the results are interpreted in models of Higgs-portal dark matter where the branching fraction limit is converted into upper bounds on the dark-matter-nucleon scattering cross section as a function of the dark-matter particle mass, and compared to results from the direct dark-matter detection experiments.

  13. Statistical thermodynamics foundation for photovoltaic and photothermal conversion. II. Application to photovoltaic conversion

    NASA Astrophysics Data System (ADS)

    Badescu, Viorel; Landsberg, Peter T.

    1995-08-01

    The general theory developed in part I was applied to build up two models of photovoltaic conversion. To this end two different systems were analyzed. The first system consists of the whole absorber (converter), for which the balance equations for energy and entropy are written and then used to derive an upper bound for solar energy conversion. The second system covers a part of the absorber (converter), namely the valence and conduction electronic bands. The balance of energy is used in this case to derive, under additional assumptions, another upper limit for the conversion efficiency. This second system deals with the real location where the power is generated. Both models take into consideration the radiation polarization and reflection, and the effects of concentration. The second model yields a more accurate upper bound for the conversion efficiency. A generalized solar cell equation is derived. It is proved that other previous theories are particular cases of the present more general formalism.
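
    For orientation only: one classical upper bound of the same entropy-balance type is the Landsberg efficiency, reproduced below. The paper derives its own, more detailed bounds that account for polarization, reflection and concentration, which this snippet does not attempt.

```python
def landsberg_limit(T_ambient, T_sun):
    """Landsberg bound on solar energy conversion efficiency:
    eta = 1 - (4/3)*(Ta/Ts) + (1/3)*(Ta/Ts)**4."""
    r = T_ambient / T_sun
    return 1.0 - (4.0 / 3.0) * r + (1.0 / 3.0) * r ** 4

print(round(landsberg_limit(300.0, 6000.0), 4))  # about 0.9333 for Ta = 300 K, Ts = 6000 K
```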

  14. Limits on the fluctuating part of y-type distortion monopole from Planck and SPT results

    NASA Astrophysics Data System (ADS)

    Khatri, Rishi; Sunyaev, Rashid

    2015-08-01

    We use the published Planck and SPT cluster catalogs [1,2] and recently published y-distortion maps [3] to put strong observational limits on the contribution of the fluctuating part of the y-type distortions to the y-distortion monopole. Our bounds are 5.4×10^−8 < ⟨y⟩ < 2.2×10^−6. Our upper bound is a factor of 6.8 stronger than the currently best upper 95% confidence limit from COBE-FIRAS of ⟨y⟩ < 15×10^−6. In the standard cosmology, large scale structure is the only source of such distortions and our limits therefore constrain the baryonic physics involved in the formation of the large scale structure. Our lower limit, from the detected clusters in the Planck and SPT catalogs, also implies that a Pixie-like experiment should detect the y-distortion monopole at >27σ. The biggest sources of uncertainty in our upper limit are the monopole offsets between different HFI channel maps that we estimate to be <10^−6.

  15. On the realization of the bulk modulus bounds for two-phase viscoelastic composites

    NASA Astrophysics Data System (ADS)

    Andreasen, Casper Schousboe; Andreassen, Erik; Jensen, Jakob Søndergaard; Sigmund, Ole

    2014-02-01

    Materials with good vibration damping properties and high stiffness are of great industrial interest. In this paper the bounds for viscoelastic composites are investigated and material microstructures that realize the upper bound are obtained by topology optimization. These viscoelastic composites can be realized by additive manufacturing technologies followed by an infiltration process. Viscoelastic composites consisting of a relatively stiff elastic phase, e.g. steel, and a relatively lossy viscoelastic phase, e.g. silicone rubber, have non-connected stiff regions when optimized for maximum damping. In order to ensure manufacturability of such composites the connectivity of the matrix is ensured by imposing a conductivity constraint and the influence on the bounds is discussed.
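
    As a point of reference, the classical purely elastic (real-valued) Hashin-Shtrikman bounds on the bulk modulus of a two-phase isotropic composite are easy to evaluate; the viscoelastic bounds investigated in the paper generalize this kind of result to complex moduli. The phase properties below are rough illustrative values for a rubber-like and a steel-like phase, not the optimized microstructures of the paper.

```python
def hashin_shtrikman_bulk(K1, G1, f1, K2, G2, f2):
    """Hashin-Shtrikman bounds on the effective bulk modulus of a two-phase
    isotropic composite; phase 1 is the softer phase, phase 2 the stiffer."""
    lower = K1 + f2 / (1.0 / (K2 - K1) + 3.0 * f1 / (3.0 * K1 + 4.0 * G1))
    upper = K2 + f1 / (1.0 / (K1 - K2) + 3.0 * f2 / (3.0 * K2 + 4.0 * G2))
    return lower, upper

# Rubber-like phase (K ~ 1 GPa, G ~ 0.001 GPa) and steel-like phase
# (K ~ 160 GPa, G ~ 80 GPa), mixed 50/50 by volume.
print(hashin_shtrikman_bulk(1.0, 0.001, 0.5, 160.0, 80.0, 0.5))  # roughly (2.0, 46.8) GPa
```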

  16. 1-norm support vector novelty detection and its sparseness.

    PubMed

    Zhang, Li; Zhou, WeiDa

    2013-12-01

    This paper proposes a 1-norm support vector novelty detection (SVND) method and discusses its sparseness. 1-norm SVND is formulated as a linear programming problem and uses two techniques for inducing sparseness, namely 1-norm regularization and the hinge loss function. We also find two upper bounds on the sparseness of 1-norm SVND, namely the exact support vector (ESV) bound and the kernel Gram matrix rank bound. The ESV bound indicates that 1-norm SVND has a sparser representation model than SVND. The kernel Gram matrix rank bound can loosely estimate the sparseness of 1-norm SVND. Experimental results show that 1-norm SVND is feasible and effective. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. On the perturbation of the group generalized inverse for a class of bounded operators in Banach spaces

    NASA Astrophysics Data System (ADS)

    Castro-González, N.; Vélez-Cerrada, J. Y.

    2008-05-01

    Given a bounded operator A on a Banach space X with Drazin inverse A^D and index r, we study the class of group invertible bounded operators B such that I + A^D(B − A) is invertible and . We show that they can be written with respect to the decomposition as a matrix operator, , where B_1 and are invertible. Several characterizations of the perturbed operators are established, extending matrix results. We analyze the perturbation of the Drazin inverse and we provide explicit upper bounds of ||B^# − A^D|| and ||B B^# − A^D A||. We obtain a result on the continuity of the group inverse for operators on Banach spaces.

  18. Bounds on invisible Higgs boson decays extracted from LHC ttH production data.

    PubMed

    Zhou, Ning; Khechadoorian, Zepyoor; Whiteson, Daniel; Tait, Tim M P

    2014-10-10

    We present an upper bound on the branching fraction of the Higgs boson to invisible particles by recasting a CMS Collaboration search for stop quarks decaying to tt + E_T^miss. The observed (expected) bound, BF(H → inv.) < 0.40(0.65) at 95% C.L., is the strongest direct limit to date, benefiting from a downward fluctuation in the CMS data in that channel. In addition, we combine this new constraint with existing published constraints to give an observed (expected) bound of BF(H → inv.) < 0.40(0.40) at 95% C.L., and we show some of the implications for theories of dark matter which communicate through the Higgs portal.

  19. Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-07-01

    Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the single truncated case or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is largely reduced. Second, unlike the MCMC-based Bayesian approach, the marginal pdf, mean, variance or covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
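
    A minimal sketch of the MAP computation under a truncated-Gaussian prior: the whitened data and prior terms are stacked into one bound-constrained least-squares problem and solved with SciPy's lsq_linear. This mirrors the role of the non-negative / bounded-variable least-squares step described above, but it is not the author's implementation, and the synthetic problem below is deliberately tiny.

```python
import numpy as np
from scipy.optimize import lsq_linear

def map_slip(G, d, Cd, m0, Cm, upper=np.inf):
    """MAP slip under a (doubly) truncated Gaussian prior, 0 <= slip <= upper."""
    Wd = np.linalg.cholesky(np.linalg.inv(Cd)).T   # whitener: Wd.T @ Wd = Cd^-1
    Wm = np.linalg.cholesky(np.linalg.inv(Cm)).T   # whitener: Wm.T @ Wm = Cm^-1
    A = np.vstack([Wd @ G, Wm])                    # stacked data + prior system
    b = np.concatenate([Wd @ d, Wm @ m0])
    return lsq_linear(A, b, bounds=(0.0, upper)).x

# Tiny synthetic example: 3 observations, 4 fault patches
rng = np.random.default_rng(2)
G = rng.standard_normal((3, 4))
m_true = np.array([0.0, 0.5, 1.2, 0.0])
d = G @ m_true + 0.01 * rng.standard_normal(3)
m_map = map_slip(G, d, Cd=1e-4 * np.eye(3), m0=np.zeros(4), Cm=np.eye(4), upper=2.0)
print(np.round(m_map, 2))
```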

  20. Termination Proofs for String Rewriting Systems via Inverse Match-Bounds

    NASA Technical Reports Server (NTRS)

    Butler, Ricky (Technical Monitor); Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes

    2004-01-01

    Annotating a letter by a number, one can record information about its history during a reduction. A string rewriting system is called match-bounded if there is a global upper bound to these numbers. In earlier papers we established match-boundedness as a strong sufficient criterion for both termination and preservation of regular languages. We now show that string rewriting systems whose inverses (left- and right-hand sides exchanged) are match-bounded also have exceptional properties, but slightly different ones. Inverse match-bounded systems effectively preserve context-free languages; their sets of normalized strings and their sets of immortal strings are effectively regular. These sets of strings can be used to decide the normalization, the termination and the uniform termination problems of inverse match-bounded systems. We also show that the termination problem is decidable in linear time, and that a certain strong reachability problem is decidable, thus solving two open problems of McNaughton's.

  1. Tightening the entropic uncertainty bound in the presence of quantum memory

    NASA Astrophysics Data System (ADS)

    Adabi, F.; Salimi, S.; Haseli, S.

    2016-06-01

    The uncertainty principle is a fundamental principle in quantum physics. It implies that the measurement outcomes of two incompatible observables cannot be predicted simultaneously. In quantum information theory, this principle can be expressed in terms of entropic measures. M. Berta et al. [Nat. Phys. 6, 659 (2010), 10.1038/nphys1734] have shown that the uncertainty bound can be altered by considering a particle as a quantum memory correlating with the primary particle. In this article, we obtain a lower bound for the entropic uncertainty in the presence of a quantum memory by adding an additional term depending on the Holevo quantity and the mutual information. We conclude that our lower bound is tighter than that of Berta et al. when the accessible information about the measurement outcomes is less than the mutual information of the joint state. Some examples are investigated for which our lower bound is tighter than Berta et al.'s lower bound. Using our lower bound, a lower bound for the entanglement of formation of bipartite quantum states has been obtained, as well as an upper bound for the regularized distillable common randomness.

  2. Safe Upper-Bounds Inference of Energy Consumption for Java Bytecode Applications

    NASA Technical Reports Server (NTRS)

    Navas, Jorge; Mendez-Lojo, Mario; Hermenegildo, Manuel V.

    2008-01-01

    Many space applications such as sensor networks, on-board satellite-based platforms, on-board vehicle monitoring systems, etc. handle large amounts of data and analysis of such data is often critical for the scientific mission. Transmitting such large amounts of data to the remote control station for analysis is usually too expensive for time-critical applications. Instead, modern space applications are increasingly relying on autonomous on-board data analysis. All these applications face many resource constraints. A key requirement is to minimize energy consumption. Several approaches have been developed for estimating the energy consumption of such applications (e.g. [3, 1]) based on measuring actual consumption at run-time for large sets of random inputs. However, this approach has the limitation that it is in general not possible to cover all possible inputs. Using formal techniques offers the potential for inferring safe energy consumption bounds, thus being especially interesting for space exploration and safety-critical systems. We have proposed and implemented a general framework for resource usage analysis of Java bytecode [2]. The user defines a set of resource(s) of interest to be tracked and some annotations that describe the cost of some elementary elements of the program for those resources. These values can be constants or, more generally, functions of the input data sizes. The analysis then statically derives an upper bound on the amount of those resources that the program as a whole will consume or provide, also as functions of the input data sizes. This article develops a novel application of the analysis of [2] to inferring safe upper bounds on the energy consumption of Java bytecode applications. We first use a resource model that describes the cost of each bytecode instruction in terms of the joules it consumes. With this resource model, we then generate energy consumption cost relations, which are then used to infer safe upper bounds. How energy consumption for each bytecode instruction is measured is beyond the scope of this paper. Instead, this paper is about how to infer safe energy consumption estimations assuming that those energy consumption costs are provided. For concreteness, we use a simplified version of an existing resource model [1] in which an energy consumption cost for individual Java opcodes is defined.
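
    A toy sketch of the final accounting step only: once a per-opcode energy model and static upper bounds on opcode execution counts (as functions of the input size) are available, the safe energy bound is just their weighted sum. The opcode costs and count functions below are placeholders, not measured values and not the output of the analysis in [2].

```python
# Hypothetical per-opcode energy costs in nanojoules (placeholders, not measurements)
OPCODE_ENERGY_NJ = {"iload": 1.2, "iadd": 1.0, "imul": 3.5, "if_icmplt": 1.8, "iastore": 2.4}

def count_bounds(n):
    """Assumed static upper bounds on opcode execution counts for input size n,
    e.g. as a resource analysis might produce for a simple O(n^2) loop nest."""
    return {"iload": 4 * n * n, "iadd": n * n, "imul": n * n,
            "if_icmplt": n * n + n, "iastore": n * n}

def energy_upper_bound_joules(n):
    """Safe upper bound on energy: sum of per-opcode cost times count bound."""
    counts = count_bounds(n)
    return sum(OPCODE_ENERGY_NJ[op] * counts[op] for op in counts) * 1e-9

print(energy_upper_bound_joules(1000), "J upper bound for n = 1000")
```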

  3. Exploring L1 model space in search of conductivity bounds for the MT problem

    NASA Astrophysics Data System (ADS)

    Wheelock, B. D.; Parker, R. L.

    2013-12-01

    Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions), results in greater uncertainty (wider bounds). Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
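
    A conceptual sketch of the bounding step for a generic linear(ized) problem, cast here as a pair of linear programs with SciPy: minimize or maximize the average of the model cells in a chosen region subject to an elementwise data-fit tolerance and non-negativity. The paper formulates its 1-norm problems as NNLS instead, and the random matrix below is a stand-in, not an MT sensitivity matrix.

```python
import numpy as np
from scipy.optimize import linprog

def average_bounds(G, d, eps, region, n_cells):
    """Lower and upper bounds on the average model value over 'region',
    subject to |G m - d| <= eps (elementwise) and m >= 0."""
    c = np.zeros(n_cells)
    c[list(region)] = 1.0 / len(region)             # objective: region average
    A_ub = np.vstack([G, -G])
    b_ub = np.concatenate([d + eps, eps - d])
    lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None)).fun
    hi = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None)).fun
    return lo, hi

# Tiny synthetic example: 5 data, 8 model cells, bound the average of cells 2-4
rng = np.random.default_rng(3)
G = np.abs(rng.standard_normal((5, 8)))
m_true = np.array([0.01, 0.01, 0.1, 0.1, 0.1, 0.01, 0.01, 0.01])
d = G @ m_true
print(average_bounds(G, d, eps=0.02, region=[2, 3, 4], n_cells=8))
```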

  4. A constitutive law for continuous fiber reinforced brittle matrix composites with fiber fragmentation and stress recovery

    NASA Astrophysics Data System (ADS)

    Neumeister, Jonas M.

    1993-08-01

    The tensile behavior of a brittle matrix composite is studied for post matrix crack saturation conditions. Scatter of fiber strength following the Weibull distribution as well as the influence of the major microstructural variables is considered. The stress in a fiber is assumed to recover linearly around a failure due to a fiber-matrix interface behavior mainly ruled by friction. The constitutive behavior for such a composite is analysed. Results are given for a simplified and a refined approximate description and compared with an analysis resulting from the exact analytical theory of fiber fragmentation. It is shown that the stress-strain relation for the refined model excellently follows the exact solution and gives the location of the maximum to within 1% in both stress and strain; for most materials the agreement is even better. Also it is shown that all relations can be normalized to depend on only two variables: a stress reference and the Weibull exponent. For systems with low scatter in fiber strength the simplified model is sufficient to determine the stress maximum but not the postcritical behavior. In addition, the simplified model gives explicit analytical expressions for the maximum stress and corresponding strain. None of the models contain any volume dependence or statistical scatter, but the maximum stress given by the stress-strain relation constitutes an upper bound for the ultimate tensile strength of the composite.

  5. Magnetic shielding of 3-phase current by a composite material at low frequencies

    NASA Astrophysics Data System (ADS)

    Livesey, K. L.; Camley, R. E.; Celinski, Z.; Maat, S.

    2017-05-01

    Electromagnetic shielding at microwave frequencies (MHz and GHz) can be accomplished by attenuating the waves using ferromagnetic resonance and eddy currents in conductive materials. This method is not as effective at shielding the quasi-static magnetic fields produced by low-frequency (kHz) currents. We explore theoretically the use of composite materials - magnetic nanoparticles embedded in a polymer matrix - as a shielding material surrounding a 3-phase current source. We develop several methods to estimate the permeability of a single magnetic nanoparticle at low frequencies, several hundred kHz, and find that the relative permeability can be as high as 5,000-20,000. We then use two analytic effective medium theories to find the effective permeability of a collection of nanoparticles as a function of the volume filling fraction. The analytic calculations provide upper and lower bounds on the composite permeability, and we use a numerical solution to calculate the effective permeability for specific cases. The field-pattern for the 3-phase current is calculated using a magnetic scalar potential for each of the three wires surrounded by a cylinder with the effective permeability found above. For a cylinder with an inner radius of 1 cm and an outer radius of 1.5 cm and an effective permeability of 50, one finds a reduction factor of about 8 in the field strength outside the cylinder.
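
    The simplest rigorous bounds of the kind referred to above are the Wiener (parallel/series) mixing bounds; the two analytic effective-medium theories used in the paper give estimates that fall between these. The permeability values below are illustrative only.

```python
def wiener_bounds(mu_particle, mu_matrix, fill_fraction):
    """Wiener (series/parallel) bounds on the effective relative permeability
    of a particle-polymer composite with the given volume filling fraction."""
    f = fill_fraction
    upper = f * mu_particle + (1.0 - f) * mu_matrix           # parallel (arithmetic) mix
    lower = 1.0 / (f / mu_particle + (1.0 - f) / mu_matrix)   # series (harmonic) mix
    return lower, upper

# Example: nanoparticle relative permeability ~10000, polymer matrix ~1, 40% filling
print(wiener_bounds(10000.0, 1.0, 0.40))  # wide spread, hence the need for better theories
```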

  6. Establishing a direct connection between detrended fluctuation analysis and Fourier analysis

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken

    2015-10-01

    To understand methodological features of the detrended fluctuation analysis (DFA) using a higher-order polynomial fitting, we establish the direct connection between DFA and Fourier analysis. Based on an exact calculation of the single-frequency response of the DFA, the following facts are shown analytically: (1) in the analysis of stochastic processes exhibiting a power-law scaling of the power spectral density (PSD), S(f) ~ f^(-β), a higher-order detrending in the DFA has no adverse effect in the estimation of the DFA scaling exponent α, which satisfies the scaling relation α = (β + 1)/2; (2) the upper limit of the scaling exponents detectable by the DFA depends on the order of the polynomial fit used in the DFA, and is bounded by m + 1, where m is the order of the polynomial fit; (3) the relation between the time scale in the DFA and the corresponding frequency in the PSD is distorted depending on both the order of the DFA and the frequency dependence of the PSD. We can improve the scale distortion by introducing a corrected time scale in the DFA corresponding to the inverse of the frequency scale in the PSD. In addition, our analytical approach makes it possible to characterize variants of the DFA using different types of detrending. As an application, properties of the detrending moving average algorithm are discussed.
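
    Fact (1) can be checked numerically with a short script. The sketch below is illustrative only: it generates a surrogate signal with an assumed power-law PSD S(f) ~ f^(-β) by Fourier filtering white noise, applies first-order DFA (m = 1), and compares the fitted exponent with (β + 1)/2.

    ```python
    # Numerical check of the scaling relation alpha = (beta + 1)/2, under stated assumptions.
    import numpy as np

    def power_law_noise(n, beta, rng):
        """Surrogate with S(f) ~ f^(-beta), built by shaping white-noise Fourier amplitudes."""
        freqs = np.fft.rfftfreq(n, d=1.0)
        amp = np.zeros_like(freqs)
        amp[1:] = freqs[1:] ** (-beta / 2.0)
        phases = np.exp(2j * np.pi * rng.random(len(freqs)))
        return np.fft.irfft(amp * phases, n)

    def dfa(x, scales, order=1):
        """Detrended fluctuation analysis with polynomial detrending of the given order."""
        y = np.cumsum(x - np.mean(x))            # integrated profile
        F = []
        for s in scales:
            n_seg = len(y) // s
            segs = y[: n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            rms = []
            for seg in segs:
                coef = np.polyfit(t, seg, order)
                rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            F.append(np.sqrt(np.mean(rms)))
        return np.array(F)

    rng = np.random.default_rng(1)
    beta = 1.0
    x = power_law_noise(2 ** 16, beta, rng)
    scales = np.unique(np.logspace(1.2, 3.5, 20).astype(int))
    F = dfa(x, scales, order=1)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print(f"beta = {beta}, fitted alpha = {alpha:.2f}, (beta + 1)/2 = {(beta + 1) / 2:.2f}")
    ```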

  7. Pharmacokinetics and repolarization effects of intravenous and transdermal granisetron.

    PubMed

    Mason, Jay W; Selness, Daniel S; Moon, Thomas E; O'Mahony, Bridget; Donachie, Peter; Howell, Julian

    2012-05-15

    The need for greater clarity about the effects of 5-HT(3) receptor antagonists on cardiac repolarization is apparent in the changing product labeling across this therapeutic class. This study assessed the repolarization effects of granisetron, a 5-HT(3) receptor antagonist antiemetic, administered intravenously and by a granisetron transdermal system (GTDS). In a parallel four-arm study, healthy subjects were randomized to receive intravenous granisetron, GTDS, placebo, or oral moxifloxacin (active control). The primary endpoint was difference in change from baseline in mean Fridericia-corrected QT interval (QTcF) between GTDS and placebo (ddQTcF) on days 3 and 5. A total of 240 subjects were enrolled, 60 in each group. Adequate sensitivity for detection of QTc change was shown by a 5.75 ms lower bound of the 90% confidence interval (CI) for moxifloxacin versus placebo at 2 hours postdose on day 3. Day 3 ddQTcF values varied between 0.2 and 1.9 ms for GTDS (maximum upper bound of 90% CI, 6.88 ms), between -1.2 and 1.6 ms for i.v. granisetron (maximum upper bound of 90% CI, 5.86 ms), and between -3.4 and 4.7 ms for moxifloxacin (maximum upper bound of 90% CI, 13.45 ms). Day 5 findings were similar. Pharmacokinetic-ddQTcF modeling showed a minimally positive slope of 0.157 ms/(ng/mL), but a very low correlation (r = 0.090). GTDS was not associated with statistically or clinically significant effects on QTcF or other electrocardiographic variables. This study provides useful clarification on the effect of granisetron delivered by GTDS on cardiac repolarization. ©2012 AACR.

  8. Using a Water Balance Model to Bound Potential Irrigation Development in the Upper Blue Nile Basin

    NASA Astrophysics Data System (ADS)

    Jain Figueroa, A.; McLaughlin, D.

    2016-12-01

    The Grand Ethiopian Renaissance Dam (GERD), on the Blue Nile is an example of water resource management underpinning food, water and energy security. Downstream countries have long expressed concern about water projects in Ethiopia because of possible diversions to agricultural uses that could reduce flow in the Nile. Such diversions are attractive to Ethiopia as a partial solution to its food security problems but they could also conflict with hydropower revenue from GERD. This research estimates an upper bound on diversions above the GERD project by considering the potential for irrigated agriculture expansion and, in particular, the availability of water and land resources for crop production. Although many studies have aimed to simulate downstream flows for various Nile basin management plans, few have taken the perspective of bounding the likely impacts of upstream agricultural development. The approach is to construct an optimization model to establish a bound on Upper Blue Nile (UBN) agricultural development, paying particular attention to soil suitability and seasonal variability in climate. The results show that land and climate constraints impose significant limitations on crop production. Only 25% of the land area is suitable for irrigation due to the soil, slope and temperature constraints. When precipitation is also considered only 11% of current land area could be used in a way that increases water consumption. The results suggest that Ethiopia could consume an additional 3.75 billion cubic meters (bcm) of water per year, through changes in land use and storage capacity. By exploiting this irrigation potential, Ethiopia could potentially decrease the annual flow downstream of the UBN by 8 percent from the current 46 bcm/y to the modeled 42 bcm/y.

  9. Analytical Calculation of the Lower Bound on Timing Resolution for PET Scintillation Detectors Comprising High-Aspect-Ratio Crystal Elements

    PubMed Central

    Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.

    2015-01-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so do SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and to develop near-term solutions for advancing ToF-PET. Presented in this work is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB for high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3×3×20 mm³ LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162 ± 1 ps FWHM, approaching the analytically calculated lower bound within 6.5%. PMID:26083559

  10. Analytical calculation of the lower bound on timing resolution for PET scintillation detectors comprising high-aspect-ratio crystal elements

    NASA Astrophysics Data System (ADS)

    Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.

    2015-07-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so do SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and to develop near-term solutions for advancing ToF-PET. Presented in this work is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB for high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3×3×20 mm³ LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162 ± 1 ps FWHM, approaching the analytically calculated lower bound within 6.5%.
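
    A much-simplified version of such a Cramér-Rao calculation can be sketched as follows. The snippet assumes an idealized single-photon detection-time PDF built from a bi-exponential emission profile smeared by a Gaussian detector spread, with all parameter values hypothetical, and it deliberately omits the optical-transport PDF that is the focus of the paper; the bound is then 1/(N·I) for N independent photon timestamps with per-photon Fisher information I.

    ```python
    # Idealized CRLB-on-timing sketch; parameters are assumed, optical transport omitted.
    import numpy as np

    tau_r, tau_d = 0.09, 40.0        # ns, assumed rise/decay constants (LYSO-like)
    sigma_sptr = 0.12                # ns, assumed single-photon detector time spread
    N = 3000                         # assumed number of detected photons

    dt = 0.005
    t = np.arange(-2.0, 200.0, dt)
    emission = np.where(t > 0, np.exp(-t / tau_d) - np.exp(-t / tau_r), 0.0)
    kernel = np.exp(-0.5 * (np.arange(-5 * sigma_sptr, 5 * sigma_sptr, dt) / sigma_sptr) ** 2)
    p = np.convolve(emission, kernel, mode="same")
    p /= np.trapz(p, t)              # normalized single-photon detection-time PDF

    dp = np.gradient(p, dt)
    mask = p > 1e-12
    fisher_1 = np.trapz(dp[mask] ** 2 / p[mask], t[mask])   # Fisher information per photon
    sigma_t = 1.0 / np.sqrt(N * fisher_1)                   # CRLB standard deviation (ns)

    fwhm = 2.355 * sigma_t * 1e3                            # ps, single detector (Gaussian approx.)
    print(f"CRLB: {fwhm:.0f} ps FWHM single detector, {np.sqrt(2) * fwhm:.0f} ps FWHM coincidence")
    ```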

  11. FACTORING TO FIT OFF DIAGONALS.

    DTIC Science & Technology

    imply an upper bound on the number of factors. When applied to somatotype data, the method improved substantially on centroid solutions and indicated a reinterpretation of earlier factoring studies. (Author)

  12. Numerical studies of the Bethe-Salpeter equation for a two-fermion bound state

    NASA Astrophysics Data System (ADS)

    de Paula, W.; Frederico, T.; Salmè, G.; Viviani, M.

    2018-03-01

    Some recent advances on the solution of the Bethe-Salpeter equation (BSE) for a two-fermion bound system directly in Minkowski space are presented. The calculations are based on the expression of the Bethe-Salpeter amplitude in terms of the so-called Nakanishi integral representation and on the light-front projection (i.e. the integration over the light-front variable k^- = k^0 - k^3). The latter technique allows for the analytically exact treatment of the singularities plaguing the two-fermion BSE in Minkowski space. The good agreement observed between our results and those obtained using other existing numerical methods, based on both Minkowski and Euclidean space techniques, fully corroborates our analytical treatment.

  13. STATISTICAL ANALYSIS OF TANK 5 FLOOR SAMPLE RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E.

    2012-03-14

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, radionuclide, inorganic, and anion concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed in Appendix A, and the results of this analysis are reported in Appendix B. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.
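
    For an analyte whose replicate results all lie above their MDCs and pass a normality check, the UCL95 reduces to the standard normal-theory formula, sketched below with hypothetical placeholder concentrations rather than Tank 5 data.

    ```python
    # Standard normal-theory UCL95; the concentrations here are hypothetical placeholders.
    import numpy as np
    from scipy import stats

    results = np.array([1.8, 2.1, 1.9])          # hypothetical replicate concentrations
    n = len(results)
    mean = results.mean()
    sd = results.std(ddof=1)
    ucl95 = mean + stats.t.ppf(0.95, df=n - 1) * sd / np.sqrt(n)
    print(f"mean = {mean:.2f}, UCL95 = {ucl95:.2f}")
    ```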

  14. Statistical Analysis of Tank 5 Floor Sample Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E. P.

    2013-01-31

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed, and the results of this analysis are reported. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.

  15. Statistical Analysis Of Tank 5 Floor Sample Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E. P.

    2012-08-01

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed in Appendix A, and the results of this analysis are reported in Appendix B. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.

  16. Evolution of cosmic string networks

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas; Turok, Neil

    1989-01-01

    Results on cosmic strings are summarized, including: (1) the application of non-equilibrium statistical mechanics to cosmic string evolution; (2) a simple one-scale model for the long strings which has a great deal of predictive power; (3) results from large scale numerical simulations; and (4) a discussion of the observational consequences of our results. An upper bound on Gμ of approximately 10^-7 emerges from the millisecond pulsar gravity-wave bound. How numerical uncertainties affect this is discussed. Any changes which weaken the bound would probably also give the long strings the dominant role in producing observational consequences.

  17. Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials

    NASA Astrophysics Data System (ADS)

    Chen, Zhixiang; Fu, Bin

    This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degrees no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ε)/2) lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.

  18. Exact quasinormal modes for a special class of black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliva, Julio; Troncoso, Ricardo; Centro de Ingenieria de la Innovacion del CECS

    2010-07-15

    Analytic exact expressions for the quasinormal modes of scalar and electromagnetic perturbations around a special class of black holes are found in d ≥ 3 dimensions. It is shown that the size of the black hole provides a lower bound for the angular momentum of the perturbation. Quasinormal modes appear when this bound is fulfilled; otherwise the excitations become purely damped.

  19. On the validity of the Arrhenius equation for electron attachment rate coefficients.

    PubMed

    Fabrikant, Ilya I; Hotop, Hartmut

    2008-03-28

    The validity of the Arrhenius equation for dissociative electron attachment rate coefficients is investigated. A general analysis allows us to obtain estimates of the upper temperature bound for the range of validity of the Arrhenius equation in the endothermic case and both lower and upper bounds in the exothermic case with a reaction barrier. The results of the general discussion are illustrated by numerical examples whereby the rate coefficient, as a function of temperature for dissociative electron attachment, is calculated using the resonance R-matrix theory. In the endothermic case, the activation energy in the Arrhenius equation is close to the threshold energy, whereas in the case of exothermic reactions with an intermediate barrier, the activation energy is found to be substantially lower than the barrier height.
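
    Within its range of validity, the Arrhenius form k(T) = A exp(-Ea/(kB T)) is a straight line in ln k versus 1/T, so the activation energy follows from the slope; the short sketch below uses hypothetical parameter values and is not tied to the R-matrix calculations discussed above.

    ```python
    # Generic Arrhenius fit: recover Ea and A from ln k versus 1/T (hypothetical values).
    import numpy as np

    kB = 8.617e-5                      # eV/K
    A_true, Ea_true = 1e-9, 0.25       # hypothetical pre-factor (cm^3/s) and activation energy (eV)

    T = np.linspace(300.0, 1000.0, 15)
    k = A_true * np.exp(-Ea_true / (kB * T))

    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
    print(f"fitted Ea = {-slope * kB:.3f} eV, fitted A = {np.exp(intercept):.2e} cm^3/s")
    ```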

  20. On dynamic tumor eradication conditions under combined chemical/anti-angiogenic therapies

    NASA Astrophysics Data System (ADS)

    Starkov, Konstantin E.

    2018-02-01

    In this paper the ultimate dynamics of the five-dimensional cancer tumor growth model at the angiogenesis phase are studied. This model, elaborated by Pinho et al. in 2014, describes interactions between normal/cancer/endothelial cells under chemotherapy/anti-angiogenic agents in the tumor growth process. The author derives ultimate upper bounds for normal/tumor/endothelial cell concentrations and ultimate upper and lower bounds for chemical/anti-angiogenic concentrations. Global asymptotic tumor clearance conditions are obtained for two versions: the use of chemotherapy alone and the combined application of chemotherapy and anti-angiogenic therapy. These conditions are established as attraction conditions to the maximal invariant set in the tumor-free plane; furthermore, the case is examined in which this set consists only of tumor-free equilibrium points.

  1. Robust guaranteed cost tracking control of quadrotor UAV with uncertainties.

    PubMed

    Xu, Zhiwei; Nian, Xiaohong; Wang, Haibo; Chen, Yinsheng

    2017-07-01

    In this paper, a robust guaranteed cost controller (RGCC) is proposed for a quadrotor UAV system with uncertainties to address the set-point tracking problem. A sufficient condition for the existence of the RGCC is derived by the Lyapunov stability theorem. The designed RGCC not only guarantees that the whole closed-loop system is asymptotically stable but also ensures that the quadratic performance level built for the closed-loop system has an upper bound irrespective of all admissible parameter uncertainties. Then, an optimal robust guaranteed cost controller is developed to minimize the upper bound of the performance level. Simulation results verify that the presented control algorithms possess small overshoot and short settling time, with which the quadrotor is able to perform the set-point tracking task well. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Limits on cold dark matter cosmologies from new anisotropy bounds on the cosmic microwave background

    NASA Technical Reports Server (NTRS)

    Vittorio, Nicola; Meinhold, Peter; Lubin, Philip; Muciaccia, Pio Francesco; Silk, Joseph

    1991-01-01

    A self-consistent method is presented for comparing theoretical predictions of and observational upper limits on CMB anisotropy. New bounds on CDM cosmologies set by the UCSB South Pole experiment on the 1 deg angular scale are presented. An upper limit of 4.0 x 10 to the -5th is placed on the rms differential temperature anisotropy to a 95 percent confidence level and a power of the test beta = 55 percent. A lower limit of about 0.6/b is placed on the density parameter of cold dark matter universes with greater than about 3 percent baryon abundance and a Hubble constant of 50 km/s/Mpc, where b is the bias factor, equal to unity only if light traces mass.

  3. Thermal dark matter co-annihilating with a strongly interacting scalar

    NASA Astrophysics Data System (ADS)

    Biondini, S.; Laine, M.

    2018-04-01

    Recently, many investigations have considered Majorana dark matter co-annihilating with bound states formed by a strongly interacting scalar field. However, only the gluon radiation contribution to bound state formation and dissociation, which at high temperatures is subleading to soft 2 → 2 scatterings, has been included. Making use of a non-relativistic effective theory framework and solving a plasma-modified Schrödinger equation, we address the effect of soft 2 → 2 scatterings as well as the thermal dissociation of bound states. We argue that the mass splitting between the Majorana and scalar field has in general both a lower and an upper bound, and that the dark matter mass scale can be pushed at least up to 5…6 TeV.

  4. A Priori Bound on the Velocity in Axially Symmetric Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Lei, Zhen; Navas, Esteban A.; Zhang, Qi S.

    2016-01-01

    Let v be the velocity of Leray-Hopf solutions to the axially symmetric three-dimensional Navier-Stokes equations. Under suitable conditions for initial values, we prove the following a priori bound: |v(x, t)| ≤ C |ln r|^{1/2} / r^2 for 0 < r ≤ 1/2, where r is the distance from x to the z axis, and C is a constant depending only on the initial value. This provides a pointwise upper bound (worst case scenario) for possible singularities, while the recent papers (Chiun-Chuan et al., Commun PDE 34(1-3):203-232, 2009; Koch et al., Acta Math 203(1):83-105, 2009) gave a lower bound. The gap is polynomial order 1 modulo a half log term.

  5. An invariance property of generalized Pearson random walks in bounded geometries

    NASA Astrophysics Data System (ADS)

    Mazzolo, Alain

    2009-03-01

    Invariance properties of random walks in bounded domains are a topic of growing interest since they contribute to improving our understanding of diffusion in confined geometries. Recently, limited to Pearson random walks with exponentially distributed straight paths, it has been shown that under isotropic uniform incidence, the average length of the trajectories through the domain is independent of the characteristics of the random walk and depends only on the ratio of the domain's volume to its surface. In this paper, thanks to arguments of integral geometry, we generalize this property to any isotropic bounded stochastic process and we give the conditions of its validity for isotropic unbounded stochastic processes. The analytical form for the traveled distance from the boundary to the first scattering event that ensures the validity of the Cauchy formula is also derived. The generalization of the Cauchy formula is an analytical constraint that thus concerns a very wide range of stochastic processes, from the original Pearson random walk to a Rayleigh distribution of the displacements, covering many situations of physical importance.
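
    In the ballistic (straight-chord) limit, the invariance property reduces to the classical Cauchy mean-chord result, mean length = 4V/S, which a few lines of Monte Carlo can verify for a sphere; the cosine-weighted entry angles below correspond to isotropic uniform incidence.

    ```python
    # Monte Carlo check of the Cauchy mean-chord formula 4V/S = 4R/3 for a sphere.
    import numpy as np

    rng = np.random.default_rng(0)
    R = 1.0
    cos_theta = np.sqrt(rng.random(1_000_000))   # cosine-weighted incidence angle
    chords = 2.0 * R * cos_theta                 # chord length for entry angle theta
    print(f"mean chord = {chords.mean():.4f},  4V/S = {4.0 * R / 3.0:.4f}")
    ```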

  6. Comparison of RAGE Hydrocode Mars Impact Model Results to Scaling Law Predictions

    NASA Astrophysics Data System (ADS)

    Plesko, Catherine S.; Wohletz, K. H.; Coker, R. F.; Asphaug, E.; Gittings, M. L.

    2007-10-01

    Impact devolatilization has been proposed by Segura et al. (2002) and Carr (1996) as a mechanism for triggering sporadic, intense precipitation on Mars. We seek to examine this hypothesis, specifically to determine the lower bound on possible energy/size scales, and thus an upper bound on the frequency of such events. To do this, we employ various analytical and numerical modeling techniques including the RAGE hydrocode. RAGE (Baltrusaitis et al. 1996) is an Eulerian hydrocode that runs in up to three dimensions and incorporates a variety of detailed equations of state, including the temperature-based SESAME tables maintained by LANL. In order to validate RAGE hydrocode results at the scale of moderate to large asteroid impacts, we compare simplified models of vertical impacts of objects of diameter 10-100 km into homogeneous basalt targets under Martian conditions to pressure scaling law predictions (Holsapple 1993, e.g. Tables 3-4) for the same scenario. Peak pressures are important to the volatile mobilization question (Stewart and Ahrens, 2005), thus it is of primary importance for planned future modeling efforts to confirm that pressures in RAGE are well behaved. Knowledge of the final crater geometry and the fate of ejecta is not required to understand our main question: to what depth and radius subsurface volatiles are mobilized, for a given impact and target? This effort is supported by LANL/IGPP (CSP, RFC, KHW, MLG) and by NASA PG&G "Small Bodies and Planetary Collisions" (EA).

  7. Automated determination of arterial input function for DCE-MRI of the prostate

    NASA Astrophysics Data System (ADS)

    Zhu, Yingxuan; Chang, Ming-Ching; Gupta, Sandeep

    2011-03-01

    Prostate cancer is one of the commonest cancers in the world. Dynamic contrast enhanced MRI (DCE-MRI) provides an opportunity for non-invasive diagnosis, staging, and treatment monitoring. Quantitative analysis of DCE-MRI relies on determination of an accurate arterial input function (AIF). Although several methods for automated AIF detection have been proposed in the literature, none are optimized for use in prostate DCE-MRI, which is particularly challenging due to large spatial signal inhomogeneity. In this paper, we propose a fully automated method for determining the AIF from prostate DCE-MRI. Our method is based on modeling pixel uptake curves as gamma variate functions (GVF). First, we analytically compute bounds on GVF parameters for more robust fitting. Next, we approximate a GVF for each pixel based on local time domain information, and eliminate the pixels with falsely estimated AIFs using the deduced upper and lower bounds. This makes the algorithm robust to signal inhomogeneity. After that, according to spatial information such as similarity and distance between pixels, we formulate the global AIF selection as an energy minimization problem and solve it using a message passing algorithm to further rule out the weak pixels and optimize the detected AIF. Our method is fully automated without training or a priori setting of parameters. Experimental results on clinical data have shown that our method obtained promising detection accuracy (all detected pixels inside major arteries), and a very good match with an expert-traced manual AIF.
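
    A minimal sketch of the GVF-fitting step is given below, assuming a common four-parameter gamma variate form and synthetic data; the analytically derived parameter bounds of the paper are not reproduced here, only loose illustrative ones.

    ```python
    # Hedged sketch: fit a gamma variate function to a synthetic uptake curve.
    import numpy as np
    from scipy.optimize import curve_fit

    def gvf(t, A, t0, alpha, beta):
        """Gamma variate: zero before arrival time t0, then a rise-and-washout shape."""
        dt = np.clip(t - t0, 0.0, None)
        return A * dt ** alpha * np.exp(-dt / beta)

    rng = np.random.default_rng(0)
    t = np.arange(0.0, 90.0, 3.0)                         # s, acquisition time points
    true = gvf(t, A=5.0, t0=12.0, alpha=2.0, beta=6.0)    # hypothetical AIF-like curve
    signal = true + rng.normal(0.0, 0.3, t.size)          # add noise

    p0 = [1.0, 10.0, 1.5, 5.0]
    lower = [0.0, 0.0, 0.5, 1.0]                          # illustrative parameter bounds only
    upper = [50.0, 30.0, 10.0, 30.0]
    popt, _ = curve_fit(gvf, t, signal, p0=p0, bounds=(lower, upper))
    print("fitted [A, t0, alpha, beta]:", np.round(popt, 2))
    ```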

  8. Anisotropic transport of normal metal-barrier-normal metal junctions in monolayer phosphorene.

    PubMed

    De Sarkar, Sangita; Agarwal, Amit; Sengupta, K

    2017-07-19

    We study transport properties of a phosphorene monolayer in the presence of single and multiple potential barriers of height U0 and width d, using both continuum and microscopic lattice models, and show that the nature of electron transport along its armchair edge (x direction) is qualitatively different from its counterpart in both conventional two-dimensional electron gas with Schrödinger-like quasiparticles and graphene or surfaces of topological insulators hosting massless Dirac quasiparticles. We show that the transport, mediated by massive Dirac electrons, allows one to achieve collimated quasiparticle motion along x and thus makes monolayer phosphorene an ideal experimental platform for studying the Klein paradox in the context of gapped Dirac materials. We study the dependence of the tunneling conductance G as a function of d and U0, and demonstrate that for a given applied voltage V its behavior changes from an oscillatory to a decaying function of d for a range of U0 with finite non-zero upper and lower bounds, and provide analytical expressions for these bounds within which G decays with d. We contrast such behavior of G with that of massless Dirac electrons in graphene and also with that along the zigzag edge (y direction) in phosphorene, where the quasiparticles obey an effective Schrödinger equation at low energy. We also study transport through multiple barriers along x and demonstrate that these properties hold for transport through multiple barriers as well. Finally, we suggest concrete experiments which may verify our theoretical predictions.

  9. A novel magnet focusing plate for matrix-assisted laser desorption/ionization analysis of magnetic bead-bound analytes.

    PubMed

    Gode, David; Volmer, Dietrich A

    2013-05-15

    Magnetic beads are often used for serum profiling of peptide and protein biomarkers. In these assays, the bead-bound analytes are eluted from the beads prior to mass spectrometric analysis. This study describes a novel matrix-assisted laser desorption/ionization (MALDI) technique for direct application and focusing of magnetic beads to MALDI plates by means of dedicated micro-magnets as sample spots. Custom-made MALDI plates with magnetic focusing spots were made using small nickel-coated neodymium micro-magnets integrated into a stainless steel plate in a 16 × 24 (384) pattern. For demonstrating the proof-of-concept, commercial C-18 magnetic beads were used for the extraction of a test compound (reserpine) from aqueous solution. Experiments were conducted to study focusing abilities, the required laser energies, the influence of a matrix compound, dispensing techniques, solvent choice and the amount of magnetic beads. Dispensing the magnetic beads onto the micro-magnet sample spots resulted in immediate and strong binding to the magnetic surface. Light microscope images illustrated the homogeneous distribution of beads across the surfaces of the magnets, when the entire sample volume containing the beads was pipetted onto the surface. Subsequent MALDI analysis of the bead-bound analyte demonstrated excellent and reproducible ionization yields. The surface-assisted laser desorption/ionization (SALDI) properties of the strongly light-absorbing γ-Fe2O3-based beads resulted in similar ionization efficiencies to those obtained from experiments with an additional MALDI matrix compound. This feasibility study successfully demonstrated the magnetic focusing abilities for magnetic bead-bound analytes on a novel MALDI plate containing small micro-magnets as sample spots. One of the key advantages of this integrated approach is that no elution steps from magnetic beads were required during analyses compared with conventional bead experiments. Copyright © 2013 John Wiley & Sons, Ltd.

  10. Analytical results and sample locality maps of stream-sediment, heavy-mineral-concentrate, and rock samples from the Little Jacks Creek (ID-111-006), Big Jacks Creek (ID-111-007C), Duncan Creek (ID-111-0007B), and Upper Deep Creek (ID-111-044) Wilderness Study Areas, Owyhee County, Idaho

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, M.S.; Gent, C.A.; Bradley, L.A.

    1989-01-01

    A U.S. Geological Survey report detailing the analytical results and sample locality maps of stream-sediment, heavy-mineral-concentrate, and rock samples from the Little Jacks Creek, Big Jacks Creek, Duncan Creek, and Upper Deep Creek Wilderness Study Areas, Owyhee County, Idaho

  11. Detection of Pneumonia Associated Pathogens Using a Prototype Multiplexed Pneumonia Test in Hospitalized Patients with Severe Pneumonia

    PubMed Central

    Schulte, Berit; Eickmeyer, Holm; Heininger, Alexandra; Juretzek, Stephanie; Karrasch, Matthias; Denis, Olivier; Roisin, Sandrine; Pletz, Mathias W.; Klein, Matthias; Barth, Sandra; Lüdke, Gerd H.; Thews, Anne; Torres, Antoni; Cillóniz, Catia; Straube, Eberhard; Autenrieth, Ingo B.; Keller, Peter M.

    2014-01-01

    Severe pneumonia remains an important cause of morbidity and mortality. Polymerase chain reaction (PCR) has been shown to be more sensitive than current standard microbiological methods – particularly in patients with prior antibiotic treatment – and therefore may improve the accuracy of microbiological diagnosis for hospitalized patients with pneumonia. Conventional detection techniques and multiplex PCR for 14 typical bacterial pneumonia-associated pathogens were performed on respiratory samples collected from adult hospitalized patients enrolled in a prospective multi-center study. Patients were enrolled from March until September 2012. A total of 739 fresh, native samples were eligible for analysis, of which 75 were sputa, 421 aspirates, and 234 bronchial lavages. 276 pathogens were detected by microbiology for which a valid PCR result was generated (positive or negative detection result by the Curetis prototype system). Among these, 120 were identified by the prototype assay and 50 pathogens were not detected. Overall performance of the prototype for pathogen identification was 70.6% sensitivity (95% confidence interval (CI) lower bound: 63.3%, upper bound: 76.9%) and 95.2% specificity (95% CI lower bound: 94.6%, upper bound: 95.7%). Based on the study results, device cut-off settings were adjusted for future series production. The overall performance with the settings of the CE series production devices was 78.7% sensitivity (95% CI lower bound: 72.1%) and 96.6% specificity (95% CI lower bound: 96.1%). Time to result was 5.2 hours (median) for the prototype test and 43.5 h for standard-of-care. The Pneumonia Application provides a rapid and moderately sensitive assay for the detection of pneumonia-causing pathogens with minimal hands-on time. Trial Registration Deutsches Register Klinischer Studien (DRKS) DRKS00005684 PMID:25397673
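
    As a side note on the reported confidence intervals, an exact (Clopper-Pearson) interval for the quoted sensitivity can be computed from the stated counts (120 identified out of 170 pathogens with a valid PCR result); whether the study used this exact method is not stated in the abstract, so the sketch below is illustrative.

    ```python
    # Exact (Clopper-Pearson) 95% CI for a proportion; the study's CI method is not stated.
    from scipy import stats

    tp, n = 120, 170                     # identified pathogens / pathogens with a valid PCR result
    sens = tp / n
    lower = stats.beta.ppf(0.025, tp, n - tp + 1)
    upper = stats.beta.ppf(0.975, tp + 1, n - tp)
    print(f"sensitivity = {sens:.1%}, 95% CI ({lower:.1%}, {upper:.1%})")
    ```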

  12. Beyond Positivity Bounds and the Fate of Massive Gravity

    NASA Astrophysics Data System (ADS)

    Bellazzini, Brando; Riva, Francesco; Serra, Javi; Sgarlata, Francesco

    2018-04-01

    We constrain effective field theories by going beyond the familiar positivity bounds that follow from unitarity, analyticity, and crossing symmetry of the scattering amplitudes. As interesting examples, we discuss the implications of the bounds for the Galileon and ghost-free massive gravity. The combination of our theoretical bounds with the experimental constraints on the graviton mass implies that the latter is either ruled out or unable to describe gravitational phenomena, let alone to consistently implement the Vainshtein mechanism, down to the relevant scales of fifth-force experiments, where general relativity has been successfully tested. We also show that the Galileon theory must contain symmetry-breaking terms that are at most one-loop suppressed compared to the symmetry-preserving ones. We comment as well on other interesting applications of our bounds.

  13. Beyond Positivity Bounds and the Fate of Massive Gravity.

    PubMed

    Bellazzini, Brando; Riva, Francesco; Serra, Javi; Sgarlata, Francesco

    2018-04-20

    We constrain effective field theories by going beyond the familiar positivity bounds that follow from unitarity, analyticity, and crossing symmetry of the scattering amplitudes. As interesting examples, we discuss the implications of the bounds for the Galileon and ghost-free massive gravity. The combination of our theoretical bounds with the experimental constraints on the graviton mass implies that the latter is either ruled out or unable to describe gravitational phenomena, let alone to consistently implement the Vainshtein mechanism, down to the relevant scales of fifth-force experiments, where general relativity has been successfully tested. We also show that the Galileon theory must contain symmetry-breaking terms that are at most one-loop suppressed compared to the symmetry-preserving ones. We comment as well on other interesting applications of our bounds.

  14. Differential homogeneous immunosensor device

    DOEpatents

    Malmros, Mark K.; Gulbinski, III, Julian

    1990-04-10

    There is provided a novel method of testing for the presence of an analyte in a fluid suspected of containing the same. In this method, in the presence of the analyte, a substance capable of modifying certain characteristics of the substrate is bound to the substrate and the change in these qualities is measured. While the method may be modified for carrying out quantitative differential analyses, it eliminates the need for washing analyte from the substrate which is characteristic of prior art methods.

  15. (Bio)Sensing Using Nanoparticle Arrays: On the Effect of Analyte Transport on Sensitivity.

    PubMed

    Lynn, N Scott; Homola, Jiří

    2016-12-20

    There has recently been an extensive amount of work regarding the development of optical, electrical, and mechanical (bio)sensors employing planar arrays of surface-bound nanoparticles. The sensor output for these systems is dependent on the rate at which analyte is transported to, and interacts with, each nanoparticle in the array. There has so far been little discussion on the relationship between the design parameters of an array and the interplay of convection, diffusion, and reaction. Moreover, current methods providing such information require extensive computational simulation. Here we demonstrate that the rate of analyte transport to a nanoparticle array can be quantified analytically. We show that such rates are bound by both the rate to a single NP and that to a planar surface (having equivalent size as the array), with the specific rate determined by the fill fraction: the ratio between the total surface area used for biomolecular capture with respect to the entire sensing area. We characterize analyte transport to arrays with respect to changes in numerous parameters relevant to experiment, including variation of the nanoparticle shape and size, packing density, flow conditions, and analyte diffusivity. We also explore how analyte capture is dependent on the kinetic parameters related to an affinity-based biosensor, and furthermore, we classify the conditions under which the array might be diffusion- or reaction-limited. The results obtained herein are applicable toward the design and optimization of all (bio)sensors based on nanoparticle arrays.

  16. Pioneer Venus orbiter search for Venusian lightning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borucki, W.J.; Dyer, J.W.; Phillips, J.R.

    1991-07-01

    During 1988 and 1990, the star sensor aboard the Pioneer Venus orbiter (PVO) was used to search for optical pulses from lightning on the nightside of Venus. Useful data were obtained for 53 orbits in 1988 and 55 orbits in 1990. During this period, approximately 83 s of search time plus 7749 s of control data were obtained. The results again find no optical evidence for lightning activity. Within the region that was observed during 1988, the results imply that the upper bound to short-duration flashes is 4 × 10^-7 flashes/km^2/s for flashes that are at least 50% as bright as typical terrestrial lightning. During 1990, when the 2-Hz filter was used, the results imply an upper bound of 1 × 10^-7 flashes/km^2/s for long-duration flashes at least 1.6% as bright as typical terrestrial lightning flashes or 33% as bright as the pulses observed by Venera 9. The upper bounds to the flash rates for the 1988 and 1990 searches are twice and one half the global terrestrial rate, respectively. These two searches covered the region from 60°N latitude to 30°S latitude, 250° to 350° longitude, and the region from 45°N latitude to 55°S latitude, 155° to 300° longitude. Both searches sampled much of the nightside region from the dawn terminator to within 4 hours of the dusk terminator. These searches covered a much larger latitude range than any previous search. The results show that the Beta and Phoebe Regio areas previously identified by Russell et al. (1988) as areas with high rates of lightning activity were not active during the two seasons of the observations. When the authors assume that their upper bounds to the nightside flash rate are representative of the entire planet, the results imply that the global flash rate and energy dissipation rate derived by Krasnopol'sky (1983) from his observation of a single storm are too high.

  17. Follow-Up Care for Older Women With Breast Cancer

    DTIC Science & Technology

    2000-05-01

    The study examines comorbidity and its relation to a range of patient outcomes, including primary tumor therapy and all-cause mortality, as well as self-reported upper body function and overall physical function, and asks whether comorbidity is a better predictor of these outcomes than upper body function.

  18. Alder Establishment and Channel Dynamics in a Tributary of the South Fork Eel River, Mendocino County, California

    Treesearch

    William J. Trush; Edward C. Connor; Alan W. Knight

    1989-01-01

    Riparian communities established along Elder Creek, a tributary of the upper South Fork Eel River, are bounded by two frequencies of periodic flooding. The upper limit for the riparian zone occurs at bankfull stage. The lower riparian limit is associated with a more frequent stage height, called the active channel, having an exceedance probability of 11 percent on a...

  19. Variational bounds on the temperature distribution

    NASA Astrophysics Data System (ADS)

    Kalikstein, Kalman; Spruch, Larry; Baider, Alberto

    1984-02-01

    Upper and lower stationary or variational bounds are obtained for functions which satisfy parabolic linear differential equations. (The error in the bound, that is, the difference between the bound on the function and the function itself, is of second order in the error in the input function, and the error is of known sign.) The method is applicable to a range of functions associated with equalization processes, including heat conduction, mass diffusion, electric conduction, fluid friction, the slowing down of neutrons, and certain limiting forms of the random walk problem, under conditions which are not unduly restrictive: in heat conduction, for example, we do not allow the thermal coefficients or the boundary conditions to depend upon the temperature, but the thermal coefficients can be functions of space and time and the geometry is unrestricted. The variational bounds follow from a maximum principle obeyed by the solutions of these equations.

  20. Promoting Active Learning by Practicing the "Self-Assembly" of Model Analytical Instruments

    ERIC Educational Resources Information Center

    Algar, W. Russ; Krull, Ulrich J.

    2010-01-01

    In our upper-year instrumental analytical chemistry course, we have developed "cut-and-paste" exercises where students "build" models of analytical instruments from individual schematic images of components. These exercises encourage active learning by students. Instead of trying to memorize diagrams, students are required to think deeply about…

  1. Optimizing the learning rate for adaptive estimation of neural encoding models

    PubMed Central

    2018-01-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069

  2. Optimizing the learning rate for adaptive estimation of neural encoding models.

    PubMed

    Hsieh, Han-Lin; Shanechi, Maryam M

    2018-05-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
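
    The trade-off at the heart of the calibration problem can be illustrated with a toy example that is not the paper's algorithm: for a scalar LMS-style adaptive estimator of a constant parameter, a larger learning rate shortens convergence but inflates the steady-state error, and a smaller one does the opposite.

    ```python
    # Toy illustration of the learning-rate trade-off (not the paper's calibration algorithm).
    import numpy as np

    rng = np.random.default_rng(0)
    w_true, n_steps, noise = 2.0, 5000, 0.5

    for lr in (0.005, 0.05, 0.5):
        w, traj = 0.0, []
        for _ in range(n_steps):
            x = rng.normal()                        # input sample
            y = w_true * x + noise * rng.normal()   # noisy observation
            w += lr * x * (y - w * x)               # LMS update
            traj.append(w)
        traj = np.array(traj)
        conv = np.argmax(np.abs(traj - w_true) < 0.1 * w_true)    # first step within 10%
        steady_rmse = np.sqrt(np.mean((traj[n_steps // 2:] - w_true) ** 2))
        print(f"lr = {lr:5.3f}:  convergence step ~ {conv:4d},  steady-state RMSE = {steady_rmse:.3f}")
    ```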

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, E.B. Jr.

    Various methods for the calculation of lower bounds for eigenvalues are examined, including those of Weinstein, Temple, Bazley and Fox, Gay, and Miller. It is shown how all of these can be derived in a unified manner by the projection technique. The alternate forms obtained for the Gay formula show how a considerably improved method can be readily obtained. Applied to the ground state of the helium atom with a simple screened hydrogenic trial function, this new method gives a lower bound closer to the true energy than the best upper bound obtained with this form of trial function. Possible routes to further improved methods are suggested.
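
    One of the classical lower-bound methods mentioned above, Temple's inequality, is easy to state and check numerically: for a Hermitian H, a normalized trial vector ψ, and any ε with ⟨H⟩ < ε ≤ E1, one has E0 ≥ ⟨H⟩ - (⟨H²⟩ - ⟨H⟩²)/(ε - ⟨H⟩). The sketch below verifies this on a small toy matrix.

    ```python
    # Temple's lower bound checked on a small toy Hermitian matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(6, 6))
    H = (A + A.T) / 2                               # toy Hermitian "Hamiltonian"
    evals, evecs = np.linalg.eigh(H)
    E0, E1 = evals[0], evals[1]

    psi = evecs[:, 0] + 0.1 * rng.normal(size=6)    # deliberately imperfect trial vector
    psi /= np.linalg.norm(psi)
    h1 = psi @ H @ psi                              # <H>, the variational upper bound on E0
    h2 = psi @ H @ H @ psi                          # <H^2>

    eps = E1                                        # any rigorous lower bound on E1 suffices
    assert h1 < eps, "Temple's bound requires <H> < eps <= E1"
    temple = h1 - (h2 - h1 ** 2) / (eps - h1)
    print(f"E0 = {E0:.4f}   Temple lower bound = {temple:.4f}   upper bound <H> = {h1:.4f}")
    ```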

  4. Upper Bounds on the Expected Value of a Convex Function Using Gradient and Conjugate Function Information.

    DTIC Science & Technology

    1987-08-01

    of the absolute difference between the random variable and its mean. Gassmann and Ziemba [1986] provide a weaker bound that does not require... In comparing bounds, Gassmann and Ziemba [1986] extend an idea in which the bound GZ is obtained as the solution of a linear program (see Gassmann and Ziemba [1986], Theorem 1).

  5. Bounds on Block Error Probability for Multilevel Concatenated Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana

    1996-01-01

    Maximum likelihood decoding of long block codes is not feasible due to large complexity. Some classes of codes are shown to be decomposable into multilevel concatenated codes (MLCC). For these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC. We use this bound to evaluate the difference in performance for different decompositions of some codes. The examples given show that a significant reduction in complexity can be achieved by increasing the number of decoding stages. The resulting performance degradation varies for different decompositions. A guideline is given for finding good m-level decompositions.

  6. New Anomalous Lieb-Robinson Bounds in Quasiperiodic XY Chains

    NASA Astrophysics Data System (ADS)

    Damanik, David; Lemm, Marius; Lukic, Milivoje; Yessen, William

    2014-09-01

    We announce and sketch the rigorous proof of a new kind of anomalous (or sub-ballistic) Lieb-Robinson (LR) bound for an isotropic XY chain in a quasiperiodic transversal magnetic field. Instead of the usual effective light cone |x| ≤ v|t|, we obtain |x| ≤ v|t|^α for some 0 < α < 1. We can characterize the allowed values of α exactly as those exceeding the upper transport exponent α_u^+ of a one-body Schrödinger operator. To our knowledge, this is the first rigorous derivation of anomalous quantum many-body transport. We also discuss anomalous LR bounds with power-law tails for a random dimer field.

  7. Precision Measurement of the Electron's Electric Dipole Moment Using Trapped Molecular Ions

    NASA Astrophysics Data System (ADS)

    Cairncross, William B.; Gresh, Daniel N.; Grau, Matt; Cossel, Kevin C.; Roussy, Tanya S.; Ni, Yiqi; Zhou, Yan; Ye, Jun; Cornell, Eric A.

    2017-10-01

    We describe the first precision measurement of the electron's electric dipole moment (d_e) using trapped molecular ions, demonstrating the application of spin interrogation times over 700 ms to achieve high sensitivity and stringent rejection of systematic errors. Through electron spin resonance spectroscopy on ^180Hf^19F^+ in its metastable ^3Δ_1 electronic state, we obtain d_e = (0.9 ± 7.7_stat ± 1.7_syst) × 10^-29 e cm, resulting in an upper bound of |d_e| < 1.3 × 10^-28 e cm (90% confidence). Our result provides independent confirmation of the current upper bound of |d_e| < 9.4 × 10^-29 e cm [J. Baron et al., New J. Phys. 19, 073029 (2017), 10.1088/1367-2630/aa708e], and offers the potential to improve on this limit in the near future.

  8. Limit cycles via higher order perturbations for some piecewise differential systems

    NASA Astrophysics Data System (ADS)

    Buzzi, Claudio A.; Lima, Maurício Firmino Silva; Torregrosa, Joan

    2018-05-01

    A classical perturbation problem is the polynomial perturbation of the harmonic oscillator, (x', y') = (−y + εf(x, y, ε), x + εg(x, y, ε)). In this paper we study the limit cycles that bifurcate from the period annulus via piecewise polynomial perturbations in two zones separated by a straight line. We prove that, for polynomial perturbations of degree n, no more than Nn − 1 limit cycles appear up to a study of order N. We also show that this upper bound is reached for orders one and two. Moreover, we study this problem in some classes of piecewise Liénard differential systems, providing better upper bounds for higher order perturbation in ε, showing also when they are reached. The Poincaré-Pontryagin-Melnikov theory is the main technique used to prove all the results.

  9. Non-localization of eigenfunctions for Sturm-Liouville operators and applications

    NASA Astrophysics Data System (ADS)

    Liard, Thibault; Lissy, Pierre; Privat, Yannick

    2018-02-01

    In this article, we investigate a non-localization property of the eigenfunctions of Sturm-Liouville operators A_a = -∂_xx + a(·) Id with Dirichlet boundary conditions, where a(·) runs over the bounded nonnegative potential functions on the interval (0, L) with L > 0. More precisely, we address the extremal spectral problem of minimizing the L2-norm of a function e(·) on a measurable subset ω of (0, L), where e(·) runs over all eigenfunctions of A_a, at the same time with respect to all subsets ω having a prescribed measure and all L∞ potential functions a(·) having a prescribed essential upper bound. We provide some existence and qualitative properties of the minimizers, as well as precise lower and upper estimates on the optimal value. Several consequences in control and stabilization theory are then highlighted.

  10. Fisher information of a single qubit interacts with a spin-qubit in the presence of a magnetic field

    NASA Astrophysics Data System (ADS)

    Metwally, N.

    2018-06-01

    In this contribution, quantum Fisher information is utilized to estimate the parameters of a central qubit interacting with a single spin qubit. The effect of the longitudinal, transverse and rotating strengths of the magnetic field on the degree of estimation is discussed. It is shown that, in the resonance case, the number of peaks, and consequently the size of the estimation regions, increases as the rotating magnetic field strength increases. The precision of estimating the central qubit parameters depends on the initial state settings of the central and the spin qubit, namely whether they encode classical or quantum information. The upper bounds of the estimation degree are large if the two qubits encode classical information. In the non-resonance case, the estimation degree depends on whether the longitudinal or the transverse strength is larger. The coupling constant between the central qubit and the spin qubit affects the estimation of the weight and phase parameters differently: the possibility of estimating the weight parameter decreases as the coupling constant increases, while that of the phase parameter increases. For a large number of spin particles, i.e., a spin bath, the upper bound of the Fisher information with respect to the weight parameter of the central qubit decreases as the number of spin particles increases. As the interaction time increases, the upper bounds appear at different initial values of the weight parameter.

  11. Modeling of magnitude distributions by the generalized truncated exponential distribution

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-01-01

    The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
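    As an illustration of the weakness described above, the following sketch (not from the paper; the parameter values are assumed) evaluates a doubly truncated Gutenberg-Richter density and shows numerically that a 50/50 mixture of two TEDs differing only in the upper bound magnitude is discontinuous inside its support, so it cannot itself be a TED.

```python
import numpy as np

def ted_pdf(m, beta, m0, mx):
    """Doubly truncated exponential (Gutenberg-Richter) pdf on [m0, mx]."""
    norm = 1.0 - np.exp(-beta * (mx - m0))
    pdf = beta * np.exp(-beta * (m - m0)) / norm
    return np.where((m >= m0) & (m <= mx), pdf, 0.0)

beta, m0 = np.log(10.0), 4.0            # assumed b-value of 1 and lower cutoff magnitude
m = np.linspace(4.0, 8.0, 4001)

# Mix two TEDs with identical beta and m0 but different upper bound magnitudes (7 and 8).
mix = 0.5 * ted_pdf(m, beta, m0, 7.0) + 0.5 * ted_pdf(m, beta, m0, 8.0)

i = np.searchsorted(m, 7.0)
print("mixture pdf just below / above m = 7:", mix[i - 2], mix[i + 2])
# A single TED is a continuous exponential on the interior of its support,
# whereas the mixture jumps at m = 7, so the mixture is not a TED.
```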

  12. Simplest little Higgs model revisited: Hidden mass relation, unitarity, and naturalness

    NASA Astrophysics Data System (ADS)

    Cheung, Kingman; He, Shi-Ping; Mao, Ying-nan; Zhang, Chen; Zhou, Yang

    2018-06-01

    We analyze the scalar potential of the simplest little Higgs (SLH) model in an approach consistent with the spirit of continuum effective field theory (CEFT). By requiring correct electroweak symmetry breaking (EWSB) with the 125 GeV Higgs boson, we are able to derive a relation between the pseudoaxion mass m_η and the heavy top mass m_T, which serves as a crucial test of the SLH mechanism. By requiring m_η^2 > 0, an upper bound on m_T can be obtained for any fixed SLH global symmetry breaking scale f. We also point out that an absolute upper bound on f can be obtained by imposing the partial wave unitarity constraint, which in turn leads to absolute upper bounds of m_T ≲ 19 TeV, m_η ≲ 1.5 TeV, and m_Z' ≲ 48 TeV. We present the allowed region in the three-dimensional parameter space characterized by f, t_β, m_T, taking into account the requirement of valid EWSB and the constraint from perturbative unitarity. We also propose a strategy of analyzing the fine-tuning problem consistent with the spirit of CEFT and apply it to the SLH. We suggest that the scalar potential and fine-tuning analysis strategies adopted here should also be applicable to a wide class of little Higgs and twin Higgs models, which may reveal interesting relations as crucial tests of the related EWSB mechanism and provide a new perspective on assessing their degree of fine-tuning.

  13. Bounds on OPE coefficients from interference effects in the conformal collider

    NASA Astrophysics Data System (ADS)

    Córdova, Clay; Maldacena, Juan; Turiaci, Gustavo J.

    2017-11-01

    We apply the average null energy condition to obtain upper bounds on the three-point function coefficients of stress tensors and a scalar operator, ⟨TTO_i⟩, in general CFTs. We also constrain the gravitational anomaly of U(1) currents in four-dimensional CFTs, which are encoded in three-point functions of the form ⟨TTJ⟩. In theories with a large N AdS dual we translate these bounds into constraints on the coefficient of a higher derivative bulk term of the form ∫ φ W². We speculate that these bounds also apply in de Sitter space. In this case our results constrain inflationary observables, such as the amplitude for chiral gravity waves that originate from higher derivative terms in the Lagrangian of the form φ W W*.

  14. Reduced conservatism in stability robustness bounds by state transformation

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.; Liang, Z.

    1986-01-01

    This note addresses the issue of 'conservatism' in the time domain stability robustness bounds obtained by the Liapunov approach. A state transformation is employed to improve the upper bounds on the linear time-varying perturbation of an asymptotically stable linear time-invariant system for robust stability. This improvement is possible because the conservatism of the Liapunov approach varies with the basis of the vector space in which the Liapunov function is constructed. Improved bounds are obtained, using a transformation, on elemental and vector norms of perturbations (i.e., structured perturbations) as well as on a matrix norm of perturbations (i.e., unstructured perturbations). For the case of a diagonal transformation, an algorithm is proposed to find the 'optimal' transformation. Several examples are presented to illustrate the proposed analysis.

  15. Differential homogeneous immunosensor device

    DOEpatents

    Malmros, M.K.; Gulbinski, J. III.

    1990-04-10

    There is provided a novel method of testing for the presence of an analyte in a fluid suspected of containing the same. In this method, in the presence of the analyte, a substance capable of modifying certain characteristics of the substrate is bound to the substrate, and the change in these characteristics is measured. While the method may be modified for carrying out quantitative differential analyses, it eliminates the need for washing the analyte from the substrate, which is characteristic of prior art methods. 12 figs.

  16. Generalized Hofmann quantum process fidelity bounds for quantum filters

    NASA Astrophysics Data System (ADS)

    Sedlák, Michal; Fiurášek, Jaromír

    2016-04-01

    We propose and investigate bounds on the quantum process fidelity of quantum filters, i.e., probabilistic quantum operations represented by a single Kraus operator K . These bounds generalize the Hofmann bounds on the quantum process fidelity of unitary operations [H. F. Hofmann, Phys. Rev. Lett. 94, 160504 (2005), 10.1103/PhysRevLett.94.160504] and are based on probing the quantum filter with pure states forming two mutually unbiased bases. Determination of these bounds therefore requires far fewer measurements than full quantum process tomography. We find that it is particularly suitable to construct one of the probe bases from the right eigenstates of K , because in this case the bounds are tight in the sense that if the actual filter coincides with the ideal one, then both the lower and the upper bounds are equal to 1. We theoretically investigate the application of these bounds to a two-qubit optical quantum filter formed by the interference of two photons on a partially polarizing beam splitter. For an experimentally convenient choice of factorized input states and measurements we study the tightness of the bounds. We show that more stringent bounds can be obtained by more sophisticated processing of the data using convex optimization and we compare our methods for different choices of the input probe states.

  17. A Multi-Armed Bandit Approach to Following a Markov Chain

    DTIC Science & Technology

    2017-06-01

    focus on the House to Café transition (p1,4). We develop a Multi-Armed Bandit approach for efficiently following this target, where each state takes the...and longitude (each state corresponding to a physical location and a small set of activities). The searcher would then apply our approach on this...the target’s transition probability and the true probability over time. Further, we seek to provide upper bounds (i.e., worst case bounds) on the

  18. Hard and Soft Constraints in Reliability-Based Design Optimization

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for a sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows (i) to determine if a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds to the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed form expressions are derived, along with conditional sampling. In addition, an l_∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.

  19. Approximation method for a spherical bound system in the quantum plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehramiz, A.; Sobhanian, S.; Mahmoodi, J.

    2010-08-15

    A system of quantum hydrodynamic equations has been used for investigating the dielectric tensor and dispersion equation of a semiconductor as a quantum magnetized plasma. Dispersion relations and their modifications due to quantum effects are derived for both longitudinal and transverse waves. The number of states and energy levels are analytically estimated for a spherical bound system embedded in a semiconductor quantum plasma. The results show that longitudinal waves decay rapidly and do not interact with the spherical bound system. The energy shifts caused by the spin-orbit interaction and the Zeeman effect are calculated.

  20. A method for the detection of protein-bound mutagens in food.

    PubMed

    Ibe, F I; Blowers, S D; Anderson, D; Massey, R

    1994-01-01

    To investigate the possible presence of protein-bound mutagens in food an analytical procedure has been devised in which the sample is enzymically hydrolysed, fractionated by HPLC and examined by a modified liquid incubation Ames assay. To validate the method MeIQx was added, as a model compound, to beefburger and a recovery of 82% obtained. The limit of detection for protein-bound mutagens was 1 microgram/kg, expressed as equivalents of MeIQx. No detectable mutagenicity was observed when the procedure was applied to samples of well cooked beefburger, irradiated chicken or mycoprotein.

  1. On increasing stability in the two dimensional inverse source scattering problem with many frequencies

    NASA Astrophysics Data System (ADS)

    Entekhabi, Mozhgan Nora; Isakov, Victor

    2018-05-01

    In this paper, we will study the increasing stability in the inverse source problem for the Helmholtz equation in the plane when the source term is assumed to be compactly supported in a bounded domain Ω with a sufficiently smooth boundary. Using the Fourier transform in the frequency domain, bounds for the Hankel functions and for scattering solutions in the complex plane, improved bounds for the analytic continuation, and the exact observability for the wave equation, we arrive at our goals, namely a sharp uniqueness result and an increasing stability estimate as the wave number interval grows.

  2. Eradicating catastrophic collapse in interdependent networks via reinforced nodes

    PubMed Central

    Yuan, Xin; Hu, Yanqing; Havlin, Shlomo

    2017-01-01

    In interdependent networks, it is usually assumed, based on percolation theory, that nodes become nonfunctional if they lose connection to the network giant component. However, in reality, some nodes, equipped with alternative resources, together with their connected neighbors can still be functioning after being disconnected from the giant component. Here, we propose and study a generalized percolation model that introduces a fraction of reinforced nodes in the interdependent networks that can function and support their neighborhood. We analyze, both analytically and via simulations, the order parameter (the functioning component), comprising both the giant component and smaller components that include at least one reinforced node. Remarkably, it is found that, for interdependent networks, we need to reinforce only a small fraction of nodes to prevent abrupt catastrophic collapses. Moreover, we find that the universal upper bound of this fraction is 0.1756 for two interdependent Erdős–Rényi (ER) networks, regular random (RR) networks, and scale-free (SF) networks with large average degrees. We also generalize our theory to interdependent networks of networks (NONs). These findings might yield insight for designing resilient interdependent infrastructure networks. PMID:28289204

  3. Construction of Barrier in a Fishing Game With Point Capture.

    PubMed

    Zha, Wenzhong; Chen, Jie; Peng, Zhihong; Gu, Dongbing

    2017-06-01

    This paper addresses a particular pursuit-evasion game, called the "fishing game," in which a faster evader attempts to pass through the gap between two pursuers. We are concerned with the conditions under which the evader or the pursuers can win the game. This is a game of kind in which an essential object, the barrier, separates the state space into disjoint parts associated with each player's winning region. We present a method of explicit policy to construct the barrier. This method divides the fishing game into two subgames related to the included angle and the relative distances between the evader and the pursuers, respectively, and then analyzes the possibility of capture or escape for each subgame to ascertain the analytical forms of the barrier. Furthermore, we fuse the games of kind and degree by solving the optimal control strategies in minimum time for each player when the initial state lies in their winning regions. Along with the optimal strategies, the trajectories of the players are delineated and the upper bounds on their winning times are also derived.

  4. Design principles for high efficiency small-grain polysilicon solar cells, with supporting experimental studies

    NASA Technical Reports Server (NTRS)

    Lindholm, F. A.; Neugroschel, A.; Sah, C. T.

    1982-01-01

    Design principles suggested here aim toward high conversion efficiency (greater than 15 percent) in polysilicon cells. The principles seek to decrease the liabilities of both intragranular and grain-boundary-surface defects. The advantages of a phosphorus atom concentration gradient in a thin (less than 50 microns) base of a p(+)/n(x)/n(+) drift-field solar cell, which produces favorable gradients in chemical potential, minority-carrier mobility and diffusivity, and recombination lifetime (via phosphorus gettering), are suggested. The degrading effects of grain boundaries are reduced by these three gradients and by substituting atoms (P, H, F or Li) for vacancies on the grain-boundary surface. Recent experiments provide support for the benefits of P diffusion down grain boundaries and for quasi-grain-boundary-free and related structures. New analytic solutions for the n(x) base include the effect of a power-law dependence between P concentration and lifetime. These provide an upper-bound estimate on the open circuit voltage. Finite-difference numerical solutions of the six Shockley equations furnish complete information about all solar-cell parameters and add insight concerning design.

  5. Numerical approximations of the mean absorption cross-section of a variety of randomly oriented microalgal shapes.

    PubMed

    Baird, Mark E

    2003-10-01

    The size, shape, and absorption coefficient of a microalgal cell determines, to a first order approximation, the rate at which light is absorbed by the cell. The rate of absorption determines the maximum amount of energy available for photosynthesis, and can be used to calculate the attenuation of light through the water column, including the effect of packaging pigments within discrete particles. In this paper, numerical approximations are made of the mean absorption cross-section of randomly oriented cells, aA. The shapes investigated are spheroids, rectangular prisms with a square base, cylinders, cones and double cones with aspect ratios of 0.25, 0.5, 1, 2, and 4. The results of the numerical simulations are fitted to a modified sigmoid curve, and take advantage of three analytical solutions. The results are presented in a non-dimensionalised format and are independent of size. A simple approximation using a rectangular hyperbolic curve is also given, and an approach for obtaining the upper and lower bounds of aA for more complex shapes is outlined.
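    For reference, one of the classical analytical solutions in this area is the mean absorption cross-section of a homogeneous absorbing sphere. The sketch below is our own illustration: it is not claimed to be the exact expression or notation used in the paper, and the parameter values are hypothetical. It evaluates that closed form and checks the optically thin limit, where the cross-section approaches the absorption coefficient of the cell material times the cell volume.

```python
import numpy as np

def sphere_absorption_cross_section(a_material, d):
    """Absorption cross-section of a homogeneous sphere of diameter d whose
    material has absorption coefficient a_material (classical closed form)."""
    rho = a_material * d                      # optical thickness along a diameter
    Qa = 1.0 + 2.0 * np.exp(-rho) / rho + 2.0 * (np.exp(-rho) - 1.0) / rho**2
    return (np.pi * d**2 / 4.0) * Qa          # geometric cross-section times efficiency

a, d = 50.0, 5e-4                             # hypothetical values: 50 cm^-1, 5 micron cell
print(sphere_absorption_cross_section(a, d))  # close to a * (pi d^3 / 6) when a*d << 1
print(a * np.pi * d**3 / 6.0)
```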

  6. Subcarrier intensity modulation for MIMO visible light communications

    NASA Astrophysics Data System (ADS)

    Celik, Yasin; Akan, Aydin

    2018-04-01

    In this paper, subcarrier intensity modulation (SIM) is investigated for multiple-input multiple-output (MIMO) visible light communication (VLC) systems. A new modulation scheme called DC-aid SIM (DCA-SIM) is proposed for the spatial modulation (SM) transmission plan. Then, DCA-SIM is extended to the multiple subcarrier case, which is called DC-aid Multiple Subcarrier Modulation (DCA-MSM). Bit error rate (BER) performances of the considered system are analyzed for different MIMO schemes. The power efficiencies of DCA-SIM and DCA-MSM are shown in correlated MIMO VLC channels. The upper bound BER performances of the proposed models are obtained analytically for PSK and QAM modulation types in order to validate the simulation results. Additionally, the effect of the power imbalance method on the performance of SIM is studied and remarkable power gains are obtained compared to the non-power-imbalanced cases. In this work, pulse amplitude modulation (PAM) and MSM-Index are used as benchmarks for the single carrier and multiple carrier cases, respectively. The results show that the proposed schemes outperform PAM and MSM-Index for the considered single carrier and multiple carrier communication scenarios.

  7. Analytical solutions of the Klein-Gordon equation for Manning-Rosen potential with centrifugal term through Nikiforov-Uvarov method

    NASA Astrophysics Data System (ADS)

    Hatami, N.; Setare, M. R.

    2017-10-01

    We present approximate analytical solutions of the Klein-Gordon equation with arbitrary l state for the Manning-Rosen potential using the Nikiforov-Uvarov method and adopting the approximation scheme for the centrifugal term. We provide the bound state energy spectrum and the wave function in terms of the hypergeometric functions.

  8. Condition for a Bounded System of Klein-Gordon Particles in Electric and Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Kisoglu, Hasan Fatih; Sogut, Kenan

    2018-07-01

    We investigate the motion of relativistic spinless particles in an external electromagnetic field that is taken to consist of a constant magnetic field and a time-dependent electric field. For such a system, we obtain analytical eigenfunctions through the Asymptotic Iteration Method. We also obtain a condition on the choice of the external magnetic field for which the system is bounded, using the method within perturbation theory.

  9. Ionospheric Signatures in Radio Occultation Data

    NASA Technical Reports Server (NTRS)

    Mannucci, Anthony J.; Ao, Chi; Iijima, Byron A.; Kursinkski, E. Robert

    2012-01-01

    We can robustly extend the radio occultation data record by 6 years (+60%) by developing a single-frequency processing method for GPS/MET data. We will produce a calibrated data set with profile-by-profile data characterization to determine robust upper bounds on ionospheric bias. This is part of an effort to produce a calibrated RO data set addressing other key error sources such as upper boundary initialization. Planned: AIRS-GPS water vapor cross validation (water vapor climatology and trends).

  10. New bounding and decomposition approaches for MILP investment problems: Multi-area transmission and generation planning under policy constraints

    DOE PAGES

    Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.

    2016-02-01

    A novel two-phase bounding and decomposition approach to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems is proposed; it considers a large number of operating subproblems, each of which is a convex optimization. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen's inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders' algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen's inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable, but the decomposition phase is required to attain tight optimality gaps. Moreover, use of both phases performs better, in terms of convergence speed, than attempting to solve the problem using just the bounding phase or regular Benders decomposition separately.

  12. An analytical study of nitrogen oxides and carbon monoxide emissions in hydrocarbon combustion with added nitrogen, preliminary results

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1979-01-01

    The effect of combustor operating conditions on the conversion of fuel-bound nitrogen (FBN) to nitrogen oxides (NOx) was analytically determined. The effect of FBN and of operating conditions on carbon monoxide (CO) formation was also studied. For these computations, the combustor was assumed to be a two-stage, adiabatic, perfectly-stirred reactor. Propane-air was used as the combustible mixture and fuel-bound nitrogen was simulated by adding nitrogen atoms to the mixture. The oxidation of propane and formation of NOx and CO were modeled by a fifty-seven reaction chemical mechanism. The results for NOx and CO formation are given as functions of primary and secondary stage equivalence ratios and residence times.

  13. Fluorescence photon migration techniques for the on-farm measurement of somatic cell count in fresh cow's milk

    NASA Astrophysics Data System (ADS)

    Khoo, Geoffrey; Kuennemeyer, Rainer; Claycomb, Rod W.

    2005-04-01

    Currently, the state of the art in mastitis detection in dairy cows is the laboratory-based measurement of somatic cell count (SCC), which is time consuming and expensive. Alternative, rapid, and reliable on-farm measurement methods are required for effective farm management. We have investigated whether fluorescence lifetime measurements can determine SCC in fresh, unprocessed milk. The method is based on the change in fluorescence lifetime of ethidium bromide when it binds to DNA from the somatic cells. Milk samples were obtained from a Fullwood Merlin Automated Milking System and analysed within a twenty-four hour period, over which the SCC does not change appreciably. For reference, the milk samples were also sent to a testing laboratory where the SCC was determined by traditional methods. The results show that we can quantify SCC using the fluorescence photon migration method from a lower bound of 4 × 10^5 cells mL^-1 to an upper bound of 1 × 10^7 cells mL^-1. The upper bound is due to the reference method used, while the cause of the lower bound is not yet known.

  14. Record length requirement of long-range dependent teletraffic

    NASA Astrophysics Data System (ADS)

    Li, Ming

    2017-04-01

    The contributions of this article are mainly twofold. On the one hand, it presents a formula to compute the upper bound of the variance of the correlation periodogram measurement of teletraffic (traffic for short) with long-range dependence (LRD) for a given record length T and a given value of the Hurst parameter H (Theorems 1 and 2). On the other hand, it proposes two formulas for the computation of the variance upper bound of the correlation periodogram measurement of traffic of the fractional Gaussian noise (fGn) type and the generalized Cauchy (GC) type, respectively (Corollaries 1 and 2). They may constitute a reference guideline for the record length required for traffic with LRD. In addition, the record length requirement for the correlation periodogram measurement of traffic with either the Schuster or the Bartlett type of periodogram is studied, and the present results show that both types of periodograms may be used for the correlation measurement of traffic with a pre-desired variance bound on the correlation estimate. Moreover, real traffic from the Internet Archive of the Special Interest Group on Data Communication under the Association for Computing Machinery of the US (ACM SIGCOMM) is analyzed as a case study.

  15. Improving the efficiency of single and multiple teleportation protocols based on the direct use of partially entangled states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fortes, Raphael; Rigolin, Gustavo, E-mail: rigolin@ifi.unicamp.br

    We push the limits of the direct use of partially entangled pure states to perform quantum teleportation by presenting several protocols, in many different scenarios, that achieve the optimal efficiency possible. We review and put into a single formalism the three major strategies known to date that allow one to use partially entangled states for direct quantum teleportation (no distillation strategies permitted) and compare their efficiencies in real world implementations. We show how one can improve the efficiency of many direct teleportation protocols by combining these techniques. We then develop new teleportation protocols employing multipartite partially entangled states. The three techniques are also used here in order to achieve the highest efficiency possible. Finally, we prove the upper bound for the optimal success rate for protocols based on partially entangled Bell states and show that some of the protocols developed here achieve such a bound. -- Highlights: • Optimal direct teleportation protocols using partially entangled states directly. • We put in a single formalism all strategies of direct teleportation. • We extend these techniques to multipartite partially entangled states. • We give upper bounds for the optimal efficiency of these protocols.

  16. Performance analysis of optimal power allocation in wireless cooperative communication systems

    NASA Astrophysics Data System (ADS)

    Babikir Adam, Edriss E.; Samb, Doudou; Yu, Li

    2013-03-01

    Cooperative communication has recently been proposed in wireless communication systems for exploiting the inherent spatial diversity in relay channels. Amplify-and-Forward (AF) cooperation protocols with multiple relays have not been sufficiently investigated, even though they have low implementation complexity. In this work we consider a cooperative diversity system in which a source transmits information to a destination with the help of multiple relay nodes using AF protocols, and we investigate the optimal allocation of power at the source and the relays by optimizing the symbol error rate (SER) performance in an efficient way. First, we derive a closed-form SER formulation for MPSK signals using the concept of the moment generating function and some statistical approximations at high signal-to-noise ratio (SNR) for the system under study. We then find a tight corresponding lower bound, which converges to the same limit as the theoretical upper bound, and develop an optimal power allocation (OPA) technique with mean channel gains to minimize the SER. Simulation results show that our scheme outperforms the equal power allocation (EPA) scheme and is tight to the theoretical approximation based on the SER upper bound at high SNR for different numbers of relays.

  17. Simulating the effect of vegetation cover on the sediment yield of mediterranean catchments using SHETRAN

    NASA Astrophysics Data System (ADS)

    Lukey, B. T.; Sheffield, J.; Bathurst, J. C.; Lavabre, J.; Mathys, N.; Martin, C.

    1995-08-01

    The sediment yield of two catchments in southern France was modelled using the newly developed sediment code of SHETRAN. A fire in August 1990 denuded the Rimbaud catchment, providing an opportunity to study the effect of vegetation cover on sediment yield by running the model for both pre- and post-fire cases. Model output is in the form of upper and lower bounds on sediment discharge, reflecting the uncertainty in the erodibility of the soil. The results are encouraging since the measured sediment discharge falls largely between the predicted bounds, and the simulated sediment yield is dramatically lower for the catchment before the fire, which matches observation. SHETRAN is also applied to the Laval catchment, which is subject to badlands gully erosion. Again using the principle of generating upper and lower bounds on sediment discharge, the model is shown to be capable of predicting the bulk sediment discharge over periods of months. To simulate the effect of reforestation, the model is run with vegetation cover equivalent to that of a neighbouring fully forested basin. The results obtained indicate that SHETRAN provides a powerful tool for predicting the impact of environmental change and land management on sediment yield.

  18. Ochratoxin A Dietary Exposure of Ten Population Groups in the Czech Republic: Comparison with Data over the World.

    PubMed

    Ostry, Vladimir; Malir, Frantisek; Dofkova, Marcela; Skarkova, Jarmila; Pfohl-Leszkowicz, Annie; Ruprich, Jiri

    2015-09-10

    Ochratoxin A is a nephrotoxic and renal carcinogenic mycotoxin and a common contaminant of various food commodities. Eighty-six kinds of foodstuffs (1032 food samples) were collected in 2011-2013. High-performance liquid chromatography with fluorescence detection was used for ochratoxin A determination. The limit of quantification of the method varied between 0.01 and 0.2 μg/kg depending on the food matrix. The most exposed population is children aged 4-6 years. Globally for this group, the maximum ochratoxin A dietary exposure for the "average consumer" was estimated at 3.3 ng/kg bw/day (lower bound, considering analytical values below the limit of quantification as 0) and 3.9 ng/kg bw/day (middle bound, considering analytical values below the limit of quantification as 1/2 the limit of quantification). Important sources of exposure for this group include grain-based products, confectionery, meat products and fruit juice. The dietary intake for "high consumers" in the 4-6 years old group was estimated from grains and grain-based products at 19.8 ng/kg bw/day (middle bound), from tea at 12.0 ng/kg bw/day (middle bound) and from confectionery at 6.5 ng/kg bw/day (middle bound). For men aged 18-59 years, beer was the main contributor, with an intake of 2.60 ng/kg bw/day ("high consumers", middle bound). Tea and grain-based products were identified as the main contributors to dietary exposure in women aged 18-59 years. Coffee and wine were identified as higher contributors to the OTA intake in the population group of women aged 18-59 years compared to the other population groups.
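    A minimal sketch of the lower-bound/middle-bound convention described above is given below; the concentrations, consumption and body weight are hypothetical and are not the survey data. Exposure is returned in ng/kg bw/day.

```python
import numpy as np

loq = 0.1                                   # assumed LOQ in ug/kg for one food matrix
conc = np.array([0.35, np.nan, 0.12, np.nan, 0.48])   # nan marks samples "< LOQ"

conc_lb = np.where(np.isnan(conc), 0.0, conc)         # lower bound: < LOQ treated as 0
conc_mb = np.where(np.isnan(conc), loq / 2, conc)     # middle bound: < LOQ treated as LOQ/2

consumption_g_day = 120.0                   # assumed daily intake of this food (g)
body_weight_kg = 20.0                       # assumed body weight (e.g., a child)

def exposure_ng_per_kg_bw(mean_conc_ug_per_kg):
    # ug/kg food * kg food/day / kg bw, then * 1000 to convert ug to ng
    return mean_conc_ug_per_kg * (consumption_g_day / 1000.0) / body_weight_kg * 1000.0

print("LB exposure:", exposure_ng_per_kg_bw(conc_lb.mean()), "ng/kg bw/day")
print("MB exposure:", exposure_ng_per_kg_bw(conc_mb.mean()), "ng/kg bw/day")
```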

  19. Existence and amplitude bounds for irrotational water waves in finite depth

    NASA Astrophysics Data System (ADS)

    Kogelbauer, Florian

    2017-12-01

    We prove the existence of solutions to the irrotational water-wave problem in finite depth and derive an explicit upper bound on the amplitude of the nonlinear solutions in terms of the wavenumber, the total hydraulic head, the wave speed and the relative mass flux. Our approach relies upon a reformulation of the water-wave problem as a one-dimensional pseudo-differential equation and the Newton-Kantorovich iteration for Banach spaces. This article is part of the theme issue 'Nonlinear water waves'.

  20. Entanglement polygon inequality in qubit systems

    NASA Astrophysics Data System (ADS)

    Qian, Xiao-Feng; Alonso, Miguel A.; Eberly, J. H.

    2018-06-01

    We prove a set of tight entanglement inequalities for arbitrary N-qubit pure states. By focusing on all bi-partite marginal entanglements between each single qubit and its remaining partners, we show that the inequalities provide an upper bound for each marginal entanglement, while the known monogamy relation establishes the lower bound. The restrictions and sharing properties associated with the inequalities are further analyzed with a geometric polytope approach, and examples of three-qubit GHZ-class and W-class entangled states are presented to illustrate the results.
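    Schematically, writing E_k for the chosen bipartite entanglement measure between qubit k and the remaining N−1 qubits (the abstract does not fix the measure here, so this is only an indicative form), the polygon-type upper bound reads:

```latex
% Indicative statement for an N-qubit pure state, with E_k = E(k \,|\, \text{rest}):
E_k \;\le\; \sum_{j \ne k} E_j , \qquad k = 1, \dots, N,
% while a monogamy-type relation supplies the corresponding lower bound on E_k,
% as discussed in the abstract.
```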

  1. Quantum Speed Limits across the Quantum-to-Classical Transition

    NASA Astrophysics Data System (ADS)

    Shanahan, B.; Chenu, A.; Margolus, N.; del Campo, A.

    2018-02-01

    Quantum speed limits set an upper bound to the rate at which a quantum system can evolve. Adopting a phase-space approach, we explore quantum speed limits across the quantum-to-classical transition and identify equivalent bounds in the classical world. As a result, and contrary to common belief, we show that speed limits exist for both quantum and classical systems. As in the quantum domain, classical speed limits are set by a given norm of the generator of time evolution.

  2. Bounds on the cross-correlation functions of state m-sequences

    NASA Astrophysics Data System (ADS)

    Woodcock, C. F.; Davies, Phillip A.; Shaar, Ahmed A.

    1987-03-01

    Lower and upper bounds on the peaks of the periodic Hamming cross-correlation function for state m-sequences, which are often used in frequency-hopped spread-spectrum systems, are derived. The state position mapped (SPM) sequences of the state m-sequences are described. The use of SPM sequences for OR-channel code division multiplexing is studied. The relation between the Hamming cross-correlation function and the correlation function of SPM sequence is examined. Numerical results which support the theoretical data are presented.

  3. Hybrid Theory of Electron-Hydrogenic Systems Elastic Scattering

    NASA Technical Reports Server (NTRS)

    Bhatia, A. K.

    2007-01-01

    Accurate electron-hydrogen and electron-hydrogenic cross sections are required to interpret fusion experiments, laboratory plasma physics and properties of the solar and astrophysical plasmas. We have developed a method in which the short-range and long-range correlations can be included at the same time in the scattering equations. The phase shifts have rigorous lower bounds and the scattering lengths have rigorous upper bounds. The phase shifts in the resonance region can be used to calculate very accurately the resonance parameters.

  4. DD-bar production and their interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Yanrui; Oka, Makoto; Takizawa, Makoto

    2011-05-23

    We have explored the bound state problem and the scattering problem of the DD-bar pair in a meson exchange model. When considering their production in the e{sup +}e{sup -} process, we included the DD-bar rescattering effect. Although it is difficult to answer whether the S-wave DD-bar bound state exists or not from the binding energies and the phase shifts, one may get an upper limit of the binding energy from the production of the BB-bar, the bottom analog of DD-bar.

  5. Thin-wall approximation in vacuum decay: A lemma

    NASA Astrophysics Data System (ADS)

    Brown, Adam R.

    2018-05-01

    The "thin-wall approximation" gives a simple estimate of the decay rate of an unstable quantum field. Unfortunately, the approximation is uncontrolled. In this paper I show that there are actually two different thin-wall approximations and that they bracket the true decay rate: I prove that one is an upper bound and the other a lower bound. In the thin-wall limit, the two approximations converge. In the presence of gravity, a generalization of this lemma provides a simple sufficient condition for nonperturbative vacuum instability.

  6. A Note on the Kirchhoff and Additive Degree-Kirchhoff Indices of Graphs

    NASA Astrophysics Data System (ADS)

    Yang, Yujun; Klein, Douglas J.

    2015-06-01

    Two resistance-distance-based graph invariants, namely, the Kirchhoff index and the additive degree-Kirchhoff index, are studied. A relation between them is established, with inequalities for the additive degree-Kirchhoff index arising via the Kirchhoff index along with minimum, maximum, and average degrees. Bounds for the Kirchhoff and additive degree-Kirchhoff indices are also determined, and extremal graphs are characterised. In addition, an upper bound for the additive degree-Kirchhoff index is established to improve a previously known result.
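    Both indices can be computed directly from resistance distances. The following sketch is our own illustration, using the standard definitions Kf(G) = Σ_{i<j} r_ij and additive degree-Kirchhoff index Σ_{i<j} (d_i + d_j) r_ij, with the 4-cycle as an assumed example graph; r_ij is obtained from the Moore-Penrose pseudoinverse of the graph Laplacian.

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency matrix of the 4-cycle (example)
deg = A.sum(axis=1)
L = np.diag(deg) - A
Lp = np.linalg.pinv(L)                       # Laplacian pseudoinverse

n = A.shape[0]
kirchhoff = 0.0
additive = 0.0
for i in range(n):
    for j in range(i + 1, n):
        r_ij = Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]   # resistance distance between i and j
        kirchhoff += r_ij
        additive += (deg[i] + deg[j]) * r_ij

print(f"Kf(C4) = {kirchhoff:.3f}, additive degree-Kirchhoff = {additive:.3f}")
```

    For the 4-cycle this prints Kf = 5 and additive degree-Kirchhoff = 20, consistent with the closed form Kf(C_n) = (n^3 − n)/12.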

  7. Magnitude error bounds for sampled-data frequency response obtained from the truncation of an infinite series, and compensator improvement program

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.

    1972-01-01

    The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.

  8. On the boundedness and integration of non-oscillatory solutions of certain linear differential equations of second order.

    PubMed

    Tunç, Cemil; Tunç, Osman

    2016-01-01

    In this paper, a certain system of linear homogeneous differential equations of second order is considered. By using integral inequalities, some new criteria for bounded and [Formula: see text]-solutions, and upper bounds for the values of improper integrals of the solutions and their derivatives, are established for the considered system. The results obtained in this paper are an extension of those obtained by Kroopnick (2014) [1]. An example is given to illustrate the obtained results.

  9. Blow-up of solutions to a quasilinear wave equation for high initial energy

    NASA Astrophysics Data System (ADS)

    Li, Fang; Liu, Fang

    2018-05-01

    This paper deals with blow-up solutions to a nonlinear hyperbolic equation with variable exponent of nonlinearities. By constructing a new control function and using energy inequalities, the authors obtain the lower bound estimate of the L2 norm of the solution. Furthermore, the concavity arguments are used to prove the nonexistence of solutions; at the same time, an estimate of the upper bound of blow-up time is also obtained. This result extends and improves those of [1,2].

  10. Disease Localization in Multilayer Networks

    NASA Astrophysics Data System (ADS)

    de Arruda, Guilherme Ferraz; Cozzo, Emanuele; Peixoto, Tiago P.; Rodrigues, Francisco A.; Moreno, Yamir

    2017-01-01

    We present a continuous formulation of epidemic spreading on multilayer networks using a tensorial representation, extending the models of monoplex networks to this context. We derive analytical expressions for the epidemic threshold of the susceptible-infected-susceptible (SIS) and susceptible-infected-recovered dynamics, as well as upper and lower bounds for the disease prevalence in the steady state for the SIS scenario. Using the quasistationary state method, we numerically show the existence of disease localization and the emergence of two or more susceptibility peaks, which are characterized analytically and numerically through the inverse participation ratio. At variance with what is observed in single-layer networks, we show that disease localization takes place on the layers and not on the nodes of a given layer. Furthermore, when mapping the critical dynamics to an eigenvalue problem, we observe a characteristic transition in the eigenvalue spectra of the supra-contact tensor as a function of the ratio of two spreading rates: If the rate at which the disease spreads within a layer is comparable to the spreading rate across layers, the individual spectra of each layer merge with the coupling between layers. Finally, we report on an interesting phenomenon, the barrier effect; i.e., for a three-layer configuration, when the layer with the lowest eigenvalue is located at the center of the line, it can effectively act as a barrier to the disease. The formalism introduced here provides a unifying mathematical approach to disease contagion in multiplex systems, opening new possibilities for the study of spreading processes.
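    A minimal numerical sketch of the localization diagnostic mentioned above follows. It is entirely our own construction (a two-layer multiplex with assumed random intra-layer contact matrices and assumed spreading rates, not the paper's tensorial model) and computes the inverse participation ratio of the leading eigenvector of a supra-contact matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                          # nodes per layer
A1 = (rng.random((n, n)) < 0.03).astype(float)   # hypothetical layer-1 contacts
A2 = (rng.random((n, n)) < 0.03).astype(float)   # hypothetical layer-2 contacts
A1 = np.triu(A1, 1); A1 += A1.T                  # symmetrize, no self-loops
A2 = np.triu(A2, 1); A2 += A2.T

beta, eta = 1.0, 0.2                             # assumed intra-/inter-layer spreading rates
I = np.eye(n)
B = np.block([[beta * A1, eta * I],
              [eta * I,  beta * A2]])            # supra-contact matrix (2n x 2n)

w, V = np.linalg.eigh(B)
v = V[:, -1] / np.linalg.norm(V[:, -1])          # leading eigenvector
ipr = np.sum(v**4)                               # ~1/(2n): delocalized; O(1): localized
# In quenched mean-field theory the epidemic threshold scales as 1/largest eigenvalue.
print(f"largest eigenvalue {w[-1]:.3f}, IPR {ipr:.4f}")
```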

  11. Vertical structure of tropospheric winds on gas giants

    NASA Astrophysics Data System (ADS)

    Scott, R. K.; Dunkerton, T. J.

    2017-04-01

    Zonal mean zonal velocity profiles from cloud-tracking observations on Jupiter and Saturn are used to infer latitudinal variations of potential temperature consistent with a shear stable potential vorticity distribution. Immediately below the cloud tops, density stratification is weaker on the poleward and stronger on the equatorward flanks of midlatitude jets, while at greater depth the opposite relation holds. Thermal wind balance then yields the associated vertical shears of midlatitude jets in an altitude range bounded above by the cloud tops and bounded below by the level where the latitudinal gradient of static stability changes sign. The inferred vertical shear below the cloud tops is consistent with existing thermal profiling of the upper troposphere. The sense of the associated mean meridional circulation in the upper troposphere is discussed, and expected magnitudes are given based on existing estimates of the radiative timescale on each planet.

  12. Gravitating Q-balls in the Affleck-Dine mechanism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamaki, Takashi; Sakai, Nobuyuki; Department of Education, Yamagata University, Yamagata 990-8560

    2011-04-15

    We investigate how gravity affects "Q-balls" with the Affleck-Dine potential V_AD(φ) := (m²/2) φ² [1 + K ln((φ/M)²)]. Contrary to the flat case, in which equilibrium solutions exist only if K < 0, we find three types of gravitating solutions as follows. In the case that K < 0, ordinary Q-ball solutions exist; there is an upper bound of the charge due to gravity. In the case that K = 0, equilibrium solutions called (mini-)boson stars appear due to gravity; there is an upper bound of the charge, too. In the case that K > 0, equilibrium solutions appear, too. In this case, these solutions are not asymptotically flat but surrounded by Q-matter. These solutions might be important in considering a dark matter scenario in the Affleck-Dine mechanism.

  13. Precision Measurement of the Electron's Electric Dipole Moment Using Trapped Molecular Ions.

    PubMed

    Cairncross, William B; Gresh, Daniel N; Grau, Matt; Cossel, Kevin C; Roussy, Tanya S; Ni, Yiqi; Zhou, Yan; Ye, Jun; Cornell, Eric A

    2017-10-13

    We describe the first precision measurement of the electron's electric dipole moment (d_e) using trapped molecular ions, demonstrating the application of spin interrogation times over 700 ms to achieve high sensitivity and stringent rejection of systematic errors. Through electron spin resonance spectroscopy on ^180Hf^19F^+ in its metastable ^3Δ_1 electronic state, we obtain d_e = (0.9 ± 7.7_stat ± 1.7_syst) × 10^-29 e cm, resulting in an upper bound of |d_e| < 1.3 × 10^-28 e cm (90% confidence). Our result provides independent confirmation of the current upper bound of |d_e| < 9.4 × 10^-29 e cm [J. Baron et al., New J. Phys. 19, 073029 (2017), 10.1088/1367-2630/aa708e], and offers the potential to improve on this limit in the near future.

  14. Insights into the Earth System mass variability from CSR-RL05 GRACE gravity fields

    NASA Astrophysics Data System (ADS)

    Bettadpur, S.

    2012-04-01

    The next-generation Release-05 GRACE gravity field data products are the result of extensive effort applied to the improvements to the GRACE Level-1 (tracking) data products, and to improvements in the background gravity models and processing methodology. As a result, the squared-error upper-bound in RL05 fields is half or less than the squared-error upper-bound in RL04 fields. The CSR-RL05 field release consists of unconstrained gravity fields as well as a regularized gravity field time-series that can be used for several applications without any post-processing error reduction. This paper will describe the background and the nature of these improvements in the data products, and provide an error characterization. We will describe the insights these new series offer in measuring the mass flux due to diverse Hydrologic, Oceanographic and Cryospheric processes.

  15. Potential-field sounding using Euler's homogeneity equation and Zidarov bubbling

    USGS Publications Warehouse

    Cordell, Lindrith

    1994-01-01

    Potential-field (gravity) data are transformed into a physical-property (density) distribution in a lower half-space, constrained solely by assumed upper bounds on physical-property contrast and data error. A two-step process is involved. The data are first transformed to an equivalent set of line (2-D case) or point (3-D case) sources, using Euler's homogeneity equation evaluated iteratively on the largest residual data value. Then, mass is converted to a volume-density product, constrained to an upper density bound, by 'bubbling,' which exploits circular or radial expansion to redistribute density without changing the associated gravity field. The method can be developed for gravity or magnetic data in two or three dimensions. The results can provide a beginning for interpretation of potential-field data where few independent constraints exist, or more likely, can be used to develop models and confirm or extend interpretation of other geophysical data sets.

  16. Search for violations of quantum mechanics

    DOE PAGES

    Ellis, John; Hagelin, John S.; Nanopoulos, D. V.; ...

    1984-07-01

    The treatment of quantum effects in gravitational fields indicates that pure states may evolve into mixed states, and Hawking has proposed a modification of the axioms of field theory which incorporates the corresponding violation of quantum mechanics. In this study we propose a modified hamiltonian equation of motion for density matrices and use it to interpret upper bounds on the violation of quantum mechanics in different phenomenological situations. We apply our formalism to the K0-K̄0 system and to long baseline neutron interferometry experiments. In both cases we find upper bounds of about 2 × 10^-21 GeV on contributions to the single particle "hamiltonian" which violate quantum mechanical coherence. We discuss how these limits might be improved in the future, and consider the relative significance of other successful tests of quantum mechanics. Finally, an appendix contains model estimates of the magnitude of effects violating quantum mechanics.

  17. DD production and their interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Yanrui; Oka, Makoto; Takizawa, Makoto

    2010-07-01

    S- and P-wave DD scatterings are studied in a meson exchange model with the coupling constants obtained in the heavy quark effective theory. With the extracted P-wave phase shifts and the separable potential approximation, we include the DD rescattering effect and investigate the production process e+e- → DD. We find that it is difficult to explain the anomalous line shape observed by the BES Collaboration with this mechanism. Combining our model calculation and the experimental measurement, we estimate the upper limit of the nearly universal cutoff parameter to be around 2 GeV. With this number, the upper limits of the binding energies of the S-wave DD and BB bound states are obtained. Assuming that the S-wave and P-wave interactions rely on the same cutoff, our study provides a way of extracting the information about S-wave molecular bound states from the P-wave meson pair production.

  18. Estimation of the lower and upper bounds on the probability of failure using subset simulation and random set theory

    NASA Astrophysics Data System (ADS)

    Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.

    2018-02-01

    Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
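    The random-set bounds themselves can be illustrated with plain Monte Carlo in place of subset simulation (which the paper uses for efficiency). In the sketch below the focal elements are intervals around sampled points, the limit state is an assumed toy function, and its monotonicity is exploited to evaluate the extremes over each box; none of these choices are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 100_000
half_width = np.array([0.3, 0.3])            # assumed interval (epistemic) half-widths

def g(x):
    """Toy limit state: failure when g(x) <= 0 (monotonically decreasing in x)."""
    return 7.0 - x[..., 0] - x[..., 1]

centers = rng.normal(loc=[2.5, 2.5], scale=1.0, size=(n_samples, 2))
x_lo, x_hi = centers - half_width, centers + half_width

# For a monotonically decreasing g, the extremes over each box occur at its corners.
g_min = g(x_hi)                              # most unfavorable corner of the box
g_max = g(x_lo)                              # most favorable corner of the box

p_lower = np.mean(g_max <= 0)                # box lies entirely in the failure domain
p_upper = np.mean(g_min <= 0)                # box at least touches the failure domain
print(f"failure probability bounds: [{p_lower:.4f}, {p_upper:.4f}]")
```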

  19. Universal charge-radius relation for subatomic and astrophysical compact objects.

    PubMed

    Madsen, Jes

    2008-04-18

    Electron-positron pair creation in supercritical electric fields limits the net charge of any static, spherical object, such as superheavy nuclei, strangelets, and Q balls, or compact stars like neutron stars, quark stars, and black holes. For radii between 4 × 10^2 and 10^4 fm the upper bound on the net charge is given by the universal relation Z = 0.71 R_fm, and for larger radii (measured in femtometers or kilometers) Z = 7 × 10^-5 R_fm^2 = 7 × 10^31 R_km^2. For objects with nuclear density the relation corresponds to Z ≈ 0.7 A^(1/3) (10^8 ≲ A ≲ 10^12), where A is the baryon number. For some systems this universal upper bound improves existing charge limits in the literature.
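    A small convenience helper (our own, using only the numbers quoted in the abstract) evaluates the bound in the two radius regimes:

```python
def max_net_charge(radius_fm):
    """Upper bound on the net charge Z for a given radius in fm (as quoted above)."""
    if radius_fm < 4e2:
        raise ValueError("relation quoted for radii of about 4e2 fm and above")
    if radius_fm <= 1e4:
        return 0.71 * radius_fm             # Z = 0.71 R_fm
    return 7e-5 * radius_fm**2              # Z = 7e-5 R_fm^2  (= 7e31 R_km^2)

print(max_net_charge(1e3))                  # a 1000 fm object: Z <~ 710
print(max_net_charge(1e19))                 # a 10 km compact star: Z <~ 7e33
```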

  20. Crustal volumes of the continents and of oceanic and continental submarine plateaus

    NASA Technical Reports Server (NTRS)

    Schubert, G.; Sandwell, D.

    1989-01-01

    Using global topographic data and the assumption of Airy isostasy, it is estimated that the crustal volume of the continents is 7182 × 10⁶ cu km. The crustal volumes of the oceanic and continental submarine plateaus are calculated at 369 × 10⁶ cu km and 242 × 10⁶ cu km, respectively. The total continental crustal volume is found to be 7581 × 10⁶ cu km, 3.2 percent of which is comprised of continental submarine plateaus on the seafloor. An upper bound on the continental crust addition rate by the accretion of oceanic plateaus is set at 3.7 cu km/yr. Subduction of continental submarine plateaus with the oceanic lithosphere on a 100 Myr time scale yields an upper bound to the continental crustal subtraction rate of 2.4 cu km/yr.

  1. Comparison of various techniques for calibration of AIS data

    NASA Technical Reports Server (NTRS)

    Roberts, D. A.; Yamaguchi, Y.; Lyon, R. J. P.

    1986-01-01

    The Airborne Imaging Spectrometer (AIS) samples a region which is strongly influenced by decreasing solar irradiance at longer wavelengths and strong atmospheric absorptions. Four techniques, the Log Residual, the Least Upper Bound Residual, the Flat Field Correction, and calibration using field reflectance measurements, were investigated as means of removing these two features. Of the four techniques, field reflectance calibration proved to be superior in terms of noise and normalization. Of the other three techniques, the Log Residual was superior when applied to areas which did not contain one dominant cover type. In heavily vegetated areas, the Log Residual proved to be ineffective. After removing anomalously bright data values, the Least Upper Bound Residual proved to be almost as effective as the Log Residual in sparsely vegetated areas and much more effective in heavily vegetated areas. Of all the techniques, the Flat Field Correction was the noisiest.
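
    For readers unfamiliar with these normalizations, the sketch below gives minimal versions of two of them, the Flat Field Correction and the Log Residual, applied to a hypothetical reflectance cube indexed as cube[row, column, band]; it does not reproduce the AIS processing chain or the Least Upper Bound Residual, and all array names are assumptions.

```python
import numpy as np

def flat_field_correction(cube, region):
    """Divide every spectrum by the mean spectrum of a spectrally 'flat' target area."""
    rows, cols = region
    flat_spectrum = cube[rows, cols, :].reshape(-1, cube.shape[-1]).mean(axis=0)
    return cube / flat_spectrum

def log_residual(cube):
    """Remove multiplicative illumination/atmosphere terms in log space:
    residual = log(x) - per-pixel mean over bands - per-band mean over pixels + grand mean."""
    logc = np.log(np.clip(cube, 1e-6, None))
    pixel_mean = logc.mean(axis=2, keepdims=True)       # brightness of each pixel
    band_mean = logc.mean(axis=(0, 1), keepdims=True)   # scene average per band
    return np.exp(logc - pixel_mean - band_mean + logc.mean())

# toy cube: 10 x 10 pixels, 128 bands
cube = np.random.default_rng(1).uniform(0.05, 0.6, size=(10, 10, 128))
corrected = flat_field_correction(cube, (slice(0, 3), slice(0, 3)))
residual = log_residual(cube)
```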

  2. Isotope-abundance variations and atomic weights of selected elements: 2016 (IUPAC Technical Report)

    USGS Publications Warehouse

    Coplen, Tyler B.; Shrestha, Yesha

    2016-01-01

    There are 63 chemical elements that have two or more isotopes that are used to determine their standard atomic weights. The isotopic abundances and atomic weights of these elements can vary in normal materials due to physical and chemical fractionation processes (not due to radioactive decay). These variations are well known for 12 elements (hydrogen, lithium, boron, carbon, nitrogen, oxygen, magnesium, silicon, sulfur, chlorine, bromine, and thallium), and the standard atomic weight of each of these elements is given by IUPAC as an interval with lower and upper bounds. Graphical plots of selected materials and compounds of each of these elements have been published previously. Herein and at the URL http://dx.doi.org/10.5066/F7GF0RN2, we provide isotopic abundances, isotope-delta values, and atomic weights for each of the upper and lower bounds of these materials and compounds.

  3. Constructions for finite-state codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.

    1987-01-01

    A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a d sub free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.

  4. An upper-bound assessment of the benefits of reducing perchlorate in drinking water.

    PubMed

    Lutter, Randall

    2014-10-01

    The Environmental Protection Agency plans to issue new federal regulations to limit drinking water concentrations of perchlorate, which occurs naturally and results from the combustion of rocket fuel. This article presents an upper-bound estimate of the potential benefits of alternative maximum contaminant levels for perchlorate in drinking water. The results suggest that the economic benefits of reducing perchlorate concentrations in drinking water are likely to be low, i.e., under $2.9 million per year nationally, for several reasons. First, the prevalence of detectable perchlorate in public drinking water systems is low. Second, the population especially sensitive to effects of perchlorate, pregnant women who are moderately iodide deficient, represents a minority of all pregnant women. Third, and perhaps most importantly, reducing exposure to perchlorate in drinking water is a relatively ineffective way of increasing iodide uptake, a crucial step linking perchlorate to health effects of concern. © 2014 Society for Risk Analysis.

  5. Fault-tolerant clock synchronization validation methodology. [in computer systems

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.

  6. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  7. Gauge mediation at the LHC: status and prospects

    DOE PAGES

    Knapen, Simon; Redigolo, Diego

    2017-01-30

    We show that the predictivity of general gauge mediation (GGM) with TeV-scale stops is greatly increased once the Higgs mass constraint is imposed. The most notable results are a strong lower bound on the mass of the gluino and right-handed squarks, and an upper bound on the Higgsino mass. If the μ-parameter is positive, the wino mass is also bounded from above. These constraints relax significantly for high messenger scales, and as such long-lived NLSPs are favored in GGM. We identify a small set of most promising topologies for the neutralino/sneutrino NLSP scenarios and estimate the impact of the current bounds and the sensitivity of the high-luminosity LHC. The stau, stop and sbottom NLSP scenarios can be robustly excluded at the high-luminosity LHC.

  8. On the Inequalities of Babuška-Aziz, Friedrichs and Horgan-Payne

    NASA Astrophysics Data System (ADS)

    Costabel, Martin; Dauge, Monique

    2015-09-01

    The equivalence between the inequalities of Babuška-Aziz and Friedrichs for sufficiently smooth bounded domains in the plane was shown by Horgan and Payne 30 years ago. We prove that this equivalence, and the equality between the associated constants, is true without any regularity condition on the domain. For the Horgan-Payne inequality, which is an upper bound of the Friedrichs constant for plane star-shaped domains in terms of a geometric quantity known as the Horgan-Payne angle, we show that it is true for some classes of domains, but not for all bounded star-shaped domains. We prove a weaker inequality that is true in all cases.

  9. A simple method for assessing occupational exposure via the one-way random effects model.

    PubMed

    Krishnamoorthy, K; Mathew, Thomas; Peng, Jie

    2016-11-01

    A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
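
    The MOVER combination step itself is simple to state in code. The sketch below shows the generic MOVER interval for a sum of two parameters and applies it to the lognormal mean of i.i.d. log-exposures (θ = μ + σ²/2); it deliberately ignores the between-worker random effect of the paper's one-way model, so it illustrates the idea rather than the published procedure, and all data are simulated.

```python
import numpy as np
from scipy import stats

def mover_interval(est1, ci1, est2, ci2):
    """MOVER confidence interval for theta1 + theta2 built from the two individual intervals."""
    lo = est1 + est2 - np.sqrt((est1 - ci1[0])**2 + (est2 - ci2[0])**2)
    hi = est1 + est2 + np.sqrt((ci1[1] - est1)**2 + (ci2[1] - est2)**2)
    return lo, hi

# Toy use on i.i.d. log-normal exposures (no worker effect): theta = mu + sigma^2/2,
# so exp(theta) is the mean exposure.  y are log-exposures.
y = np.log(stats.lognorm(s=0.8, scale=1.5).rvs(size=25, random_state=0))
n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)

ci_mu = stats.t.interval(0.95, n - 1, loc=ybar, scale=np.sqrt(s2 / n))
ci_half_s2 = tuple(0.5 * (n - 1) * s2 / stats.chi2.ppf(q, n - 1) for q in (0.975, 0.025))
lo, hi = mover_interval(ybar, ci_mu, 0.5 * s2, ci_half_s2)
print("95% CI for the mean exposure:", np.exp(lo), np.exp(hi))
```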

  10. Wave height estimates from pressure and velocity data at an intermediate depth in the presence of uniform currents

    NASA Astrophysics Data System (ADS)

    Basu, Biswajit

    2017-12-01

    Bounds on estimates of wave heights (valid for large amplitudes) from pressure and flow measurements at an arbitrary intermediate depth have been provided. Two-dimensional irrotational steady water waves over a flat bed with a finite depth in the presence of underlying uniform currents have been considered in the analysis. Five different upper bounds based on a combination of pressure and velocity field measurements have been derived, though only one lower bound on the wave height is available, whether the current speed is greater or less than the wave speed. This article is part of the theme issue 'Nonlinear water waves'.

  11. A communication channel model of the software process

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1988-01-01

    Reported here is beginning research into a noisy communication channel analogy of software development process productivity, in order to establish quantifiable behavior and theoretical bounds. The analogy leads to a fundamental mathematical relationship between human productivity and the amount of information supplied by the developers, the capacity of the human channel for processing and transmitting information, the software product yield (object size), the work effort, requirements efficiency, tool and process efficiency, and programming environment advantage. Also derived is an upper bound to productivity that shows that software reuse is the only means that can lead to unbounded productivity growth; practical considerations of size and cost of reusable components may reduce this to a finite bound.

  12. A communication channel model of the software process

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1988-01-01

    Beginning research into a noisy communication channel analogy of software development process productivity, in order to establish quantifiable behavior and theoretical bounds, is discussed. The analogy leads to a fundamental mathematical relationship between human productivity and the amount of information supplied by the developers, the capacity of the human channel for processing and transmitting information, the software product yield (object size), the work effort, requirements efficiency, tool and process efficiency, and programming environment advantage. An upper bound to productivity is derived that shows that software reuse is the only means that can lead to unbounded productivity growth; practical considerations of size and cost of reusable components may reduce this to a finite bound.

  13. A passivity criterion for sampled-data bilateral teleoperation systems.

    PubMed

    Jazayeri, Ali; Tavakoli, Mahdi

    2013-01-01

    A teleoperation system consists of a teleoperator, a human operator, and a remote environment. Conditions involving system and controller parameters that ensure teleoperator passivity can serve as control design guidelines to attain maximum teleoperation transparency while maintaining system stability. In this paper, sufficient conditions for teleoperator passivity are derived for when position error-based controllers are implemented in discrete time. This new analysis is necessary because discretization causes energy leaks and does not necessarily preserve the passivity of the system. The proposed criterion for sampled-data teleoperator passivity imposes lower bounds on the damping of the teleoperator's robots, an upper bound on the sampling time, and bounds on the control gains. The criterion is verified through simulations and experiments.

  14. Reliability Estimating Procedures for Electric and Thermochemical Propulsion Systems. Volume 1

    DTIC Science & Technology

    1977-02-01

    [Garbled OCR fragment. Recoverable content: a list of contributing organizations (… Laboratories, The Marquardt Company, NASA Goddard Space Flight Center, RCA Astro Electronics, Rockwell International, Applied Physics Laboratory) and the header of a failure-rate table giving a 5% lower bound, median, mean, and 95% upper bound per cycle; the remaining text is unrecoverable.]

  15. Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-04-01

    Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). Posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Monte Carlo Markov Chain (MCMC) sampling is shown for a synthetic example and a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computer power is largely reduced. Second, unlike Bayesian MCMC-based approaches, the marginal pdf, mean, variance or covariance are obtained independently of one another. Third, the probability density and cumulative distribution functions can be obtained with any density of points. Finally, determining the Maximum Posterior (MAP) is extremely fast.
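
    The MAP step described above reduces to a bounded least-squares problem, which the following sketch sets up on a synthetic linear forward model; scipy's lsq_linear with method="bvls" stands in for the Stark & Parker BVLS solver, and dropping the upper bound recovers the non-negative (NNLS) case. All matrices and noise levels here are invented for illustration.

```python
import numpy as np
from scipy.linalg import sqrtm
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)

# Synthetic linear forward problem d = G m + noise (stand-in for elastic Green's functions)
n_obs, n_patch = 30, 12
G = rng.normal(size=(n_obs, n_patch))
m_true = np.clip(rng.normal(1.0, 0.8, n_patch), 0.0, 2.0)   # non-negative, bounded slip
d = G @ m_true + 0.05 * rng.normal(size=n_obs)

Cd_inv_half = np.eye(n_obs) / 0.05                 # data weights (sigma_d = 0.05)
m0 = np.zeros(n_patch)                             # prior mean slip
Cm = 0.5**2 * np.eye(n_patch)                      # prior covariance (no smoothing here)
Cm_inv_half = np.real(sqrtm(np.linalg.inv(Cm)))

# MAP of the truncated-Gaussian posterior = bounded least squares on the stacked system
A = np.vstack([Cd_inv_half @ G, Cm_inv_half])
b = np.concatenate([Cd_inv_half @ d, Cm_inv_half @ m0])
map_slip = lsq_linear(A, b, bounds=(0.0, 2.0), method="bvls").x   # (0, np.inf) gives the NNLS case
print(np.round(map_slip, 2))
```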

  16. Development of an analytical method for the simultaneous analysis of MCPD esters and glycidyl esters in oil-based foodstuffs.

    PubMed

    Ermacora, Alessia; Hrnčiřík, Karel

    2014-01-01

    Substantial progress has been recently made in the development and optimisation of analytical methods for the quantification of 2-MCPD, 3-MCPD and glycidyl esters in oils and fats, and there are a few methods currently available that allow a reliable quantification of these contaminants in bulk oils and fats. On the other hand, no standard method for the analysis of foodstuffs has yet been established. The aim of this study was the development and validation of a new method for the simultaneous quantification of 2-MCPD, 3-MCPD and glycidyl esters in oil-based food products. The developed protocol includes a first step of liquid-liquid extraction and purification of the lipophilic substances of the sample, followed by the application of a previously developed procedure based on acid transesterification, for the indirect quantification of these contaminants in oils and fats. The method validation was carried out on food products (fat-based spreads, creams, margarine, mayonnaise) manufactured in-house, in order to control the manufacturing process and account for any food matrix-analyte interactions (the sample spiking was carried out on the single components used for the formulations rather than the final products). The method showed good accuracy (the recoveries ranged from 97% to 106% for bound 3-MCPD and 2-MCPD and from 88% to 115% for bound glycidol) and sensitivity (the LOD was 0.04 and 0.05 mg kg⁻¹ for bound MCPD and glycidol, respectively). Repeatability and reproducibility were satisfactory (RSD below 2% and 5%, respectively) for all analytes. The levels of salts and surface-active compounds in the formulation were found to have no impact on the accuracy and the other parameters of the method.

  17. A lower bound on the solutions of Kapustin-Witten equations

    NASA Astrophysics Data System (ADS)

    Huang, Teng

    2016-11-01

    In this article, we consider the Kapustin-Witten equations on a closed four-manifold. We study certain analytic properties of solutions to the equations on a closed manifold. The main result is that there exists an L² lower bound on the extra fields over a closed four-manifold satisfying certain conditions if the connections are not ASD connections. Furthermore, we also obtain a similar result about the Vafa-Witten equations.

  18. A bound particle coupled to two thermostats

    NASA Astrophysics Data System (ADS)

    Fogedby, Hans C.; Imparato, Alberto

    2011-05-01

    We consider a harmonically bound Brownian particle coupled to two distinct heat reservoirs at different temperatures. We show that the presence of a harmonic trap does not change the large deviation function from the case of a free Brownian particle discussed by Derrida and Brunet and Visco. Likewise, the Gallavotti-Cohen fluctuation theorem related to the entropy production at the heat sources remains in force. We support the analytical results with numerical simulations.

  19. The analytical transfer matrix method for PT-symmetric complex potential

    NASA Astrophysics Data System (ADS)

    Naceri, Leila; Hammou, Amine B.

    2017-07-01

    We have extended the analytical transfer matrix (ATM) method to solve quantum mechanical bound state problems with complex PT-symmetric potentials. Our work focuses on a class of models studied by Bender and Jones; we calculate the energy eigenvalues, discuss the critical values of g, and compare the results with those obtained from other methods such as exact numerical computation and the WKB approximation method.

  20. Follow-Up Care for Older Women With Breast Cancer

    DTIC Science & Technology

    1999-08-01

    [Garbled OCR fragment. Recoverable content: the study examines a range of patient outcomes, including primary tumor therapy, all-cause mortality, self-reported upper body function, and overall physical function; an interview was among the data sources; the fragment also lists section headings for major analytic variables and dependent variables.]

  1. Ada (Trade Name)/SQL (Structured Query Language) Binding Specification

    DTIC Science & Technology

    1988-06-01

    [Garbled OCR fragment. Recoverable content: excerpts of an Ada package of SQL binding types (an employee-name string type of length 30, a boss-name type derived from it, and a salary type declared with digits 7 and a lower/upper bound range); a note that a real subtype specifies a minimum number of significant decimal digits and that all real numbers between the lower and upper bounds, inclusive, belong to the subtype; and BNF-style productions for <character>, <digit>, and <letter>.]

  2. Characterization of Seismic Noise at Selected Non-Urban Sites

    DTIC Science & Technology

    2010-03-01

    Seismic noise recordings were made at three non-urban field sites: a wind farm on a remote moor in Scotland; a roughly 13-acre field bounded by woods in a rural Enfield, NH, neighborhood; and a site transitional from developed land to farmland within 1 km of the six-lane M6 motorway near Keele, England.

  3. Constraints on the ωπ Form Factor from Analyticity and Unitarity

    NASA Astrophysics Data System (ADS)

    Ananthanarayan, B.; Caprini, Irinel; Kubis, Bastian

    Form factors are important low-energy quantities and an accurate knowledge of these sheds light on the strong interactions. A variety of methods based on general principles have been developed to use information known in different energy regimes to constrain them in regions where experimental information needs to be tested precisely. Here we review our recent work on the electromagnetic ωπ form factor in a model-independent framework known as the method of unitarity bounds, partly motivated by the discrepancies noted recently between the theoretical calculations of the form factor based on dispersion relations and certain experimental data measured from the decay ω → π0γ*. We have applied a modified dispersive formalism, which uses as input the discontinuity of the ωπ form factor calculated by unitarity below the ωπ threshold and an integral constraint on the square of its modulus above this threshold. The latter constraint was obtained by exploiting unitarity and the positivity of the spectral function of a QCD correlator, computed on the spacelike axis by operator product expansion and perturbative QCD. An alternative constraint is obtained by using data available at higher energies for evaluating an integral of the modulus squared with a suitable weight function. From these conditions we derived upper and lower bounds on the modulus of the ωπ form factor in the region below the ωπ threshold. The results confirm the existence of a disagreement between dispersion theory and experimental data on the ωπ form factor around 0.6 GeV, including those from NA60 published in 2016.

  4. Constraints on the ωπ form factor from analyticity and unitarity

    NASA Astrophysics Data System (ADS)

    Ananthanarayan, B.; Caprini, Irinel; Kubis, Bastian

    2016-05-01

    Form factors are important low-energy quantities and an accurate knowledge of these sheds light on the strong interactions. A variety of methods based on general principles have been developed to use information known in different energy regimes to constrain them in regions where experimental information needs to be tested precisely. Here we review our recent work on the electromagnetic ωπ form factor in a model-independent framework known as the method of unitarity bounds, partly motivated by the discrepancies noted recently between the theoretical calculations of the form factor based on dispersion relations and certain experimental data measured from the decay ω → π0γ∗. We have applied a modified dispersive formalism, which uses as input the discontinuity of the ωπ form factor calculated by unitarity below the ωπ threshold and an integral constraint on the square of its modulus above this threshold. The latter constraint was obtained by exploiting unitarity and the positivity of the spectral function of a QCD correlator, computed on the spacelike axis by operator product expansion and perturbative QCD. An alternative constraint is obtained by using data available at higher energies for evaluating an integral of the modulus squared with a suitable weight function. From these conditions we derived upper and lower bounds on the modulus of the ωπ form factor in the region below the ωπ threshold. The results confirm the existence of a disagreement between dispersion theory and experimental data on the ωπ form factor around 0.6 GeV, including those from NA60 published in 2016.

  5. Natural Constraints to Species Diversification.

    PubMed

    Lewitus, Eric; Morlon, Hélène

    2016-08-01

    Identifying modes of species diversification is fundamental to our understanding of how biodiversity changes over evolutionary time. Diversification modes are captured in species phylogenies, but characterizing the landscape of diversification has been limited by the analytical tools available for directly comparing phylogenetic trees of groups of organisms. Here, we use a novel, non-parametric approach and 214 family-level phylogenies of vertebrates representing over 500 million years of evolution to identify major diversification modes, to characterize phylogenetic space, and to evaluate the bounds and central tendencies of species diversification. We identify five principal patterns of diversification to which all vertebrate families hold. These patterns, mapped onto multidimensional space, constitute a phylogenetic space with distinct properties. Firstly, phylogenetic space occupies only a portion of all possible tree space, showing family-level phylogenies to be constrained to a limited range of diversification patterns. Secondly, the geometry of phylogenetic space is delimited by quantifiable trade-offs in tree size and the heterogeneity and stem-to-tip distribution of branching events. These trade-offs are indicative of the instability of certain diversification patterns and effectively bound speciation rates (for successful clades) within upper and lower limits. Finally, both the constrained range and geometry of phylogenetic space are established by the differential effects of macroevolutionary processes on patterns of diversification. Given these properties, we show that the average path through phylogenetic space over evolutionary time traverses several diversification stages, each of which is defined by a different principal pattern of diversification and directed by a different macroevolutionary process. The identification of universal patterns and natural constraints to diversification provides a foundation for understanding the deep-time evolution of biodiversity.

  6. Anisotropic transport of normal metal-barrier-normal metal junctions in monolayer phosphorene

    NASA Astrophysics Data System (ADS)

    De Sarkar, Sangita; Agarwal, Amit; Sengupta, K.

    2017-07-01

    We study transport properties of a phosphorene monolayer in the presence of single and multiple potential barriers of height U0 and width d, using both continuum and microscopic lattice models, and show that the nature of electron transport along its armchair edge (x direction) is qualitatively different from its counterpart in both conventional two-dimensional electron gas with Schrödinger-like quasiparticles and graphene or surfaces of topological insulators hosting massless Dirac quasiparticles. We show that the transport, mediated by massive Dirac electrons, allows one to achieve collimated quasiparticle motion along x and thus makes monolayer phosphorene an ideal experimental platform for studying the Klein paradox in the context of gapped Dirac materials. We study the dependence of the tunneling conductance G ≡ G_xx on d and U0, and demonstrate that for a given applied voltage V its behavior changes from an oscillatory to a decaying function of d for a range of U0 with finite non-zero upper and lower bounds, and provide analytical expressions for these bounds within which G decays with d. We contrast such behavior of G with that of massless Dirac electrons in graphene and also with that along the zigzag edge (y direction) in phosphorene, where the quasiparticles obey an effective Schrödinger equation at low energy. We also study transport through multiple barriers along x and demonstrate that these properties hold for transport through multiple barriers as well. Finally, we suggest concrete experiments which may verify our theoretical predictions.

  7. Exact method for numerically analyzing a model of local denaturation in superhelically stressed DNA

    NASA Astrophysics Data System (ADS)

    Fye, Richard M.; Benham, Craig J.

    1999-03-01

    Local denaturation, the separation at specific sites of the two strands comprising the DNA double helix, is one of the most fundamental processes in biology, required to allow the base sequence to be read both in DNA transcription and in replication. In living organisms this process can be mediated by enzymes which regulate the amount of superhelical stress imposed on the DNA. We present a numerically exact technique for analyzing a model of denaturation in superhelically stressed DNA. This approach is capable of predicting the locations and extents of transition in circular superhelical DNA molecules of kilobase lengths and specified base pair sequences. It can also be used for closed loops of DNA which are typically found in vivo to be kilobases long. The analytic method consists of an integration over the DNA twist degrees of freedom followed by the introduction of auxiliary variables to decouple the remaining degrees of freedom, which allows the use of the transfer matrix method. The algorithm implementing our technique requires O(N²) operations and O(N) memory to analyze a DNA domain containing N base pairs. However, to analyze kilobase length DNA molecules it must be implemented in high precision floating point arithmetic. An accelerated algorithm is constructed by imposing an upper bound M on the number of base pairs that can simultaneously denature in a state. This accelerated algorithm requires O(MN) operations, and has an analytically bounded error. Sample calculations show that it achieves high accuracy (greater than 15 decimal digits) with relatively small values of M (M<0.05N) for kilobase length molecules under physiologically relevant conditions. Calculations are performed on the superhelical pBR322 DNA sequence to test the accuracy of the method. With no free parameters in the model, the locations and extents of local denaturation predicted by this analysis are in quantitatively precise agreement with in vitro experimental measurements. Calculations performed on the fructose-1,6-bisphosphatase gene sequence from yeast show that this approach can also accurately treat in vivo denaturation.
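
    The capping idea behind the accelerated algorithm can be illustrated on a much simpler two-state melting model than the superhelical free-energy model of the paper: a dynamic program over base pairs whose state tracks how many pairs are open so far, so the work is O(MN) when at most M pairs may be open. The weights s and sigma below are arbitrary toy parameters, not fitted DNA energetics.

```python
import numpy as np

def capped_partition_function(N, M, s=0.6, sigma=1e-3):
    """Partition function of a toy two-state melting model in which at most M of the
    N base pairs are open at once.  's' is the statistical weight of one open pair and
    'sigma' the cooperativity penalty for each closed->open boundary.  O(M*N) work."""
    # dp[state, k] = weight of the first i bases with k bases open so far;
    # state 0 = base i closed, state 1 = base i open.
    dp = np.zeros((2, M + 1))
    dp[0, 0] = 1.0            # start with the first base closed
    dp[1, 1] = sigma * s      # or start with it open (one boundary, one open pair)
    for _ in range(1, N):
        new = np.zeros_like(dp)
        new[0] = dp[0] + dp[1]                              # close this base (no cost)
        new[1, 1:] = s * (sigma * dp[0, :-1] + dp[1, :-1])  # open it: shift the open count by 1
        dp = new
    return dp.sum()

# Larger caps change the result less and less once M exceeds the typical bubble size.
for M in (5, 20, 80):
    print(M, capped_partition_function(N=2000, M=M))
```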

  8. Length estimations of presumed upward connecting leaders in lightning flashes to flat water and flat ground

    NASA Astrophysics Data System (ADS)

    Stolzenburg, Maribeth; Marshall, Thomas C.; Karunarathne, Sumedhe; Orville, Richard E.

    2018-10-01

    Using video data recorded at 50,000 frames per second for nearby negative lightning flashes, estimates are derived for the length of positive upward connecting leaders (UCLs) that presumably formed prior to new ground attachments. Return strokes were 1.7 to 7.8 km distant, yielding image resolutions of 4.25 to 19.5 m. No UCLs are imaged in these data, indicating those features were too transient or too dim compared to other lightning processes that are imaged at these resolutions. Upper bound lengths for 17 presumed UCLs are determined from the height above flat ground or water of the successful stepped leader tip in the image immediately prior to (within 20 μs before) the return stroke. Better estimates of maximum UCL lengths are determined using the downward stepped leader tip's speed of advance and the estimated return stroke time within its first frame. For 17 strokes, the upper bound length of the possible UCL averages 31.6 m and ranges from 11.3 to 50.3 m. Among the close strokes (those with spatial resolution <8 m per pixel), the five which connected to water (salt water lagoon) have UCL upper bound estimates averaging significantly shorter (24.1 m) than the average for the three close strokes which connected to land (36.9 m). The better estimates of maximum UCL lengths for the eight close strokes average 20.2 m, with a slightly shorter average of 18.3 m for the five that connected to water. All the better estimates of UCL maximum lengths are <38 m in this dataset.
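
    The arithmetic behind the two estimates, as we read it from the abstract, is sketched below: the upper bound takes the whole remaining gap below the last imaged leader tip, while the better estimate subtracts the distance the downward leader still travels before the return stroke. The numbers are hypothetical.

```python
def ucl_upper_bound(tip_height_m):
    """Upper bound: the whole remaining gap below the last imaged leader tip."""
    return tip_height_m

def ucl_better_estimate(tip_height_m, leader_speed_m_per_s, time_to_return_stroke_s):
    """Subtract the distance the downward leader still covers before attachment."""
    return tip_height_m - leader_speed_m_per_s * time_to_return_stroke_s

# Hypothetical stroke: tip 35 m above water, leader at 4e5 m/s, return stroke ~40 us later
print(ucl_upper_bound(35.0))                      # 35 m
print(ucl_better_estimate(35.0, 4.0e5, 40e-6))    # 19 m
```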

  9. Modification of the activity of cell wall-bound peroxidase by hypergravity in relation to the stimulation of lignin formation in azuki bean epicotyls

    NASA Astrophysics Data System (ADS)

    Wakabayashi, Kazuyuki; Nakano, Saho; Soga, Kouichi; Hoson, Takayuki

    Lignin is a component of cell walls of terrestrial plants, which provides cell walls with the mechanical rigidity. Lignin is a phenolic polymer with high molecular mass and formed by the polymerization of phenolic substances on a cellulosic matrix. The polymerization is catalyzed by cell wall-bound peroxidase, and thus the activity of this enzyme regulates the rate of formation of lignin. In the present study, the changes in the lignin content and the activity of cell wall peroxidase were investigated along epicotyls of azuki bean seedlings grown under hypergravity conditions. The endogenous growth occurred primarily in the upper regions of the epicotyl and no growth was detected in the middle or basal regions. The amounts of acetyl bromide-soluble lignin increased from the upper to the basal regions of epicotyls. The lignin content per unit length in the basal region was three times higher than that in the upper region. Hypergravity treatment at 300 g for 6 h stimulated the increase in the lignin content in all regions of epicotyls, particularly in the basal regions. The peroxidase activity in the protein fraction extracted from the cell wall preparation with a high ionic strength buffer also increased gradually toward the basal region, and hypergravity treatment clearly increased the activity in all regions. There was a close correlation between the lignin content and the enzyme activity. These results suggest that gravity stimuli modulate the activity of cell wall-bound peroxidase, which, in turn, causes the stimulation of the lignin formation in stem organs.

  10. Thermalization Time Bounds for Pauli Stabilizer Hamiltonians

    NASA Astrophysics Data System (ADS)

    Temme, Kristan

    2017-03-01

    We prove a general lower bound to the spectral gap of the Davies generator for Hamiltonians that can be written as the sum of commuting Pauli operators. These Hamiltonians, defined on the Hilbert space of N qubits, serve as one of the most frequently considered candidates for a self-correcting quantum memory. A spectral gap bound on the Davies generator establishes an upper limit on the lifetime of such a quantum memory and can be used to estimate the time until the system relaxes to thermal equilibrium when brought into contact with a thermal heat bath. The bound can be shown to behave as λ ≥ O(N⁻¹ exp(−2β ε̄)), where ε̄ is a generalization of the well-known energy barrier for logical operators. Particularly in the low temperature regime we expect this bound to provide the correct asymptotic scaling of the gap with the system size up to a factor of N⁻¹. Furthermore, we discuss conditions and provide scenarios where this factor can be removed and a constant lower bound can be proven.

  11. A duality approach for solving bounded linear programming problems with fuzzy variables based on ranking functions and its application in bounded transportation problems

    NASA Astrophysics Data System (ADS)

    Ebrahimnejad, Ali

    2015-08-01

    There are several methods, in the literature, for solving fuzzy variable linear programming problems (fuzzy linear programming in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out and to overcome these shortcomings a new method based on the bounded dual simplex method is proposed to determine the fuzzy optimal solution of that kind of fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. Also, one application of this algorithm in solving bounded transportation problems with fuzzy supplies and demands is dealt with. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.
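
    Setting the fuzzy machinery aside, the crisp core of such problems is a linear program with explicit lower and upper bounds on the variables. The sketch below solves one such bounded LP with scipy; it illustrates the kind of subproblem a bounded (dual) simplex method handles, not the paper's ranking-function treatment of trapezoidal fuzzy numbers, and the coefficients are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Crisp bounded-variable LP of the kind a bounded dual simplex handles:
#   minimize  c^T x   subject to  A_ub x <= b_ub,  l <= x <= u
c = np.array([-3.0, -2.0, -4.0])          # minimize -profit, i.e. maximize 3x1 + 2x2 + 4x3
A_ub = np.array([[1.0, 1.0, 2.0],
                 [2.0, 1.0, 1.0]])
b_ub = np.array([10.0, 12.0])
bounds = [(0.0, 4.0), (1.0, 6.0), (0.0, 3.0)]   # explicit lower and upper bounds per variable

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)                      # expected optimum near x = (3, 1, 3)
```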

  12. Covariance Bell inequalities

    NASA Astrophysics Data System (ADS)

    Pozsgay, Victor; Hirsch, Flavien; Branciard, Cyril; Brunner, Nicolas

    2017-12-01

    We introduce Bell inequalities based on covariance, one of the most common measures of correlation. Explicit examples are discussed, and violations in quantum theory are demonstrated. A crucial feature of these covariance Bell inequalities is their nonlinearity; this has nontrivial consequences for the derivation of their local bound, which is not reached by deterministic local correlations. For our simplest inequality, we derive analytically tight bounds for both local and quantum correlations. An interesting application of covariance Bell inequalities is that they can act as "shared randomness witnesses": specifically, the value of the Bell expression gives device-independent lower bounds on both the dimension and the entropy of the shared random variable in a local model.

  13. The isolation limits of stochastic vibration

    NASA Technical Reports Server (NTRS)

    Knopse, C. R.; Allaire, P. E.

    1993-01-01

    The vibration isolation problem is formulated as a 1D kinematic problem. The geometry of the stochastic wall trajectories arising from the stroke constraint is defined in terms of their significant extrema. An optimal control solution for the minimum acceleration return path determines a lower bound on platform mean square acceleration. This bound is expressed in terms of the probability density function on the significant maxima and the conditional fourth moment of the first passage time inverse. The first of these is found analytically while the second is found using a Monte Carlo simulation. The rms acceleration lower bound as a function of available space is then determined through numerical quadrature.

  14. Stability of Nonlinear Systems with Unknown Time-varying Feedback Delay

    NASA Astrophysics Data System (ADS)

    Chunodkar, Apurva A.; Akella, Maruthi R.

    2013-12-01

    This paper considers the problem of stabilizing a class of nonlinear systems with unknown bounded delayed feedback wherein the time-varying delay is (1) piecewise constant or (2) continuous with a bounded rate. We also consider application of these results to the stabilization of rigid-body attitude dynamics. In the first case, the time-delay in feedback is modeled specifically as a switch among an arbitrarily large set of unknown constant values with a known strict upper bound. The feedback is a linear function of the delayed states. In the case of linear systems with switched delay feedback, a new sufficiency condition for average dwell time result is presented using a complete type Lyapunov-Krasovskii (L-K) functional approach. Further, the corresponding switched system with nonlinear perturbations is proven to be exponentially stable inside a well characterized region of attraction for an appropriately chosen average dwell time. In the second case, the concept of the complete type L-K functional is extended to a class of nonlinear time-delay systems with unknown time-varying time-delay. This extension ensures stability robustness to time-delay in the control design for all values of time-delay less than the known upper bound. Model-transformation is used in order to partition the nonlinear system into a nominal linear part that is exponentially stable with a bounded perturbation. We obtain sufficient conditions which ensure exponential stability inside a region of attraction estimate. A constructive method to evaluate the sufficient conditions is presented together with comparison with the corresponding constant and piecewise constant delay. Numerical simulations are performed to illustrate the theoretical results of this paper.

  15. Chang'e 3 lunar mission and upper limit on stochastic background of gravitational wave around the 0.01 Hz band

    NASA Astrophysics Data System (ADS)

    Tang, Wenlin; Xu, Peng; Hu, Songjie; Cao, Jianfeng; Dong, Peng; Bu, Yanlong; Chen, Lue; Han, Songtao; Gong, Xuefei; Li, Wenxiao; Ping, Jinsong; Lau, Yun-Kau; Tang, Geshi

    2017-09-01

    The Doppler tracking data of the Chang'e 3 lunar mission is used to constrain the stochastic background of gravitational wave in cosmology within the 1 mHz to 0.05 Hz frequency band. Our result improves on the upper bound on the energy density of the stochastic background of gravitational wave in the 0.02-0.05 Hz band obtained by the Apollo missions, with the improvement reaching almost one order of magnitude at around 0.05 Hz. Detailed noise analysis of the Doppler tracking data is also presented, with the prospect that these noise sources will be mitigated in future Chinese deep space missions. A feasibility study is also undertaken to understand the scientific capability of the Chang'e 4 mission, due to be launched in 2018, in relation to the stochastic gravitational wave background around 0.01 Hz. The study indicates that the upper bound on the energy density may be further improved by another order of magnitude from the Chang'e 3 mission, which will fill the gap in the frequency band from 0.02 Hz to 0.1 Hz in the foreseeable future.

  16. The generalized truncated exponential distribution as a model for earthquake magnitudes

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-04-01

    The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for the modelling of the magnitude distributions in the seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except the upper bound magnitude, are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of the statistical inference are also discussed and an example of empirical data is presented in the current contribution.
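
    To fix notation, the sketch below implements the standard doubly truncated exponential magnitude model (CDF and inverse-CDF sampling) that the abstract takes as its starting point; the GTED itself is not reproduced, and the b-value and magnitude limits are arbitrary.

```python
import numpy as np

def ted_cdf(m, beta, m_min, m_max):
    """CDF of the truncated exponential (doubly truncated Gutenberg-Richter) magnitude model."""
    num = 1.0 - np.exp(-beta * (m - m_min))
    den = 1.0 - np.exp(-beta * (m_max - m_min))
    return np.clip(num / den, 0.0, 1.0)

def ted_sample(n, beta, m_min, m_max, rng=np.random.default_rng(0)):
    """Inverse-CDF sampling of magnitudes from the TED."""
    u = rng.random(n)
    den = 1.0 - np.exp(-beta * (m_max - m_min))
    return m_min - np.log(1.0 - u * den) / beta

beta = 2.3 * 1.0   # beta = b * ln(10), here with b = 1
m = ted_sample(100000, beta, m_min=4.0, m_max=8.0)
print(m.max(), ted_cdf(np.array([5.0, 7.0]), beta, 4.0, 8.0))
```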

  17. Fundamental limitations of cavity-assisted atom interferometry

    NASA Astrophysics Data System (ADS)

    Dovale-Álvarez, M.; Brown, D. D.; Jones, A. W.; Mow-Lowry, C. M.; Miao, H.; Freise, A.

    2017-11-01

    Atom interferometers employing optical cavities to enhance the beam splitter pulses promise significant advances in science and technology, notably for future gravitational wave detectors. Long cavities, on the scale of hundreds of meters, have been proposed in experiments aiming to observe gravitational waves with frequencies below 1 Hz, where laser interferometers, such as LIGO, have poor sensitivity. Alternatively, short cavities have also been proposed for enhancing the sensitivity of more portable atom interferometers. We explore the fundamental limitations of two-mirror cavities for atomic beam splitting, and establish upper bounds on the temperature of the atomic ensemble as a function of cavity length and three design parameters: the cavity g factor, the bandwidth, and the optical suppression factor of the first and second order spatial modes. A lower bound to the cavity bandwidth is found which avoids elongation of the interaction time and maximizes power enhancement. An upper limit to cavity length is found for symmetric two-mirror cavities, restricting the practicality of long baseline detectors. For shorter cavities, an upper limit on the beam size was derived from the geometrical stability of the cavity. These findings aim to aid the design of current and future cavity-assisted atom interferometers.

  18. Limits on the fluctuating part of y-type distortion monopole from Planck and SPT results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khatri, Rishi; Sunyaev, Rashid, E-mail: khatri@mpa-garching.mpg.de, E-mail: sunyaev@mpa-garching.mpg.de

    2015-08-01

    We use the published Planck and SPT cluster catalogs [1,2] and recently published y-distortion maps [3] to put strong observational limits on the contribution of the fluctuating part of the y-type distortions to the y-distortion monopole. Our bounds are 5.4 × 10⁻⁸ < ⟨y⟩ < 2.2 × 10⁻⁶. Our upper bound is a factor of 6.8 stronger than the currently best upper 95% confidence limit from COBE-FIRAS of ⟨y⟩ < 15 × 10⁻⁶. In the standard cosmology, large scale structure is the only source of such distortions and our limits therefore constrain the baryonic physics involved in the formation of the large scale structure. Our lower limit, from the detected clusters in the Planck and SPT catalogs, also implies that a Pixie-like experiment should detect the y-distortion monopole at >27-σ. The biggest sources of uncertainty in our upper limit are the monopole offsets between different HFI channel maps, which we estimate to be <10⁻⁶.

  19. Approximation Set of the Interval Set in Pawlak's Space

    PubMed Central

    Wang, Jin; Wang, Guoyin

    2014-01-01

    The interval set is a special set, which describes the uncertainty of an uncertain concept or set Z with its two crisp boundaries named the upper-bound set and the lower-bound set. In this paper, the concept of similarity degree between two interval sets is defined first, and then the similarity degrees between an interval set and its two approximations (i.e., the upper approximation set R̄(Z) and the lower approximation set R_(Z)) are presented, respectively. The disadvantages of using the upper approximation set R̄(Z) or the lower approximation set R_(Z) as the approximation set of the uncertain set (uncertain concept) Z are analyzed, and a new method for finding a better approximation set of the interval set Z is proposed. The conclusion that the approximation set R_0.5(Z) is an optimal approximation set of the interval set Z is drawn and proved. The change rules of R_0.5(Z) with different binary relations are analyzed in detail. Finally, a kind of crisp approximation set of the interval set Z is constructed. We hope this research work will promote the development of both the interval set model and granular computing theory. PMID: 25177721

  20. Uncertainty, imprecision, and the precautionary principle in climate change assessment.

    PubMed

    Borsuk, M E; Tomassini, L

    2005-01-01

    Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
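
    A toy version of the decision rule is easy to write down: given a finite set of admissible probability vectors over climate scenarios (a simple stand-in for the classes of measures discussed) and a cost for each emissions option under each scenario, compute the lower and upper expected costs and pick the option with minimum upper expected cost. All numbers below are invented.

```python
import numpy as np

# Rows: emissions options; columns: climate-sensitivity scenarios (costs in arbitrary units)
costs = np.array([[ 2.0,  5.0, 20.0],    # low abatement: cheap now, costly if climate is sensitive
                  [ 6.0,  7.0, 10.0],    # moderate abatement
                  [ 9.0,  9.0,  9.5]])   # aggressive abatement

# A small credal set: several admissible probability vectors over the scenarios
credal_set = np.array([[0.6, 0.3, 0.1],
                       [0.4, 0.4, 0.2],
                       [0.2, 0.4, 0.4]])

expected = costs @ credal_set.T                      # expected cost of each option under each measure
lower, upper = expected.min(axis=1), expected.max(axis=1)
print("lower expectations:", lower)
print("upper expectations:", upper)
print("minimum upper expected cost option:", int(np.argmin(upper)))   # the precautionary choice
```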
