Morphological representation of order-statistics filters.
Charif-Chefchaouni, M; Schonfeld, D
1995-01-01
We propose a comprehensive theory for the morphological bounds on order-statistics filters (and their repeated iterations). Conditions are derived for morphological openings and closings to serve as bounds (lower and upper, respectively) on order-statistics filters (and their repeated iterations). Under various assumptions, morphological open-closings and close-openings are also shown to serve as (tighter) bounds (lower and upper, respectively) on iterations of order-statistics filters. Simulations of the application of the results presented to image restoration are finally provided.
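As a quick numerical illustration of the kind of ordering discussed above, SciPy's grayscale morphology can be used to check, on a test image, how often an opening and a closing with a flat window bracket the median filter (an order-statistics filter) computed over the same window; the window size and random image below are arbitrary choices for the sketch, not the paper's construction.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing, median_filter

# Illustrative check only: compare a 3x3 median filter against the opening and
# closing computed with the same flat 3x3 window on a random test image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128)).astype(float)

size = (3, 3)                        # flat structuring element / filter window
med = median_filter(img, size=size)
opened = grey_opening(img, size=size)
closed = grey_closing(img, size=size)

print(f"opening <= median on {np.mean(opened <= med):.1%} of pixels")
print(f"median <= closing on {np.mean(med <= closed):.1%} of pixels")
```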
Limitations of the background field method applied to Rayleigh-Bénard convection
NASA Astrophysics Data System (ADS)
Nobili, Camilla; Otto, Felix
2017-09-01
We consider Rayleigh-Bénard convection as modeled by the Boussinesq equations, in the case of infinite Prandtl numbers and with no-slip boundary condition. There is a broad interest in bounds of the upwards heat flux, as given by the Nusselt number Nu, in terms of the forcing via the imposed temperature difference, as given by the Rayleigh number in the turbulent regime Ra ≫ 1. In several studies, the background field method applied to the temperature field has been used to provide upper bounds on Nu in terms of Ra. In these applications, the background field method comes in the form of a variational problem where one optimizes a stratified temperature profile subject to a certain stability condition; the method is believed to capture the marginal stability of the boundary layer. The best available upper bound via this method is Nu ≲ Ra^{1/3} (ln Ra)^{1/15}; it proceeds via the construction of a stable temperature background profile that increases logarithmically in the bulk. In this paper, we show that the background temperature field method cannot provide a tighter upper bound in terms of the power of the logarithm. However, by another method, one does obtain the tighter upper bound Nu ≲ Ra^{1/3} (ln ln Ra)^{1/3}, so that the result of this paper implies that the background temperature field method is unphysical in the sense that it cannot provide the optimal bound.
Upper bound on three-tangles of reduced states of four-qubit pure states
NASA Astrophysics Data System (ADS)
Sharma, S. Shelly; Sharma, N. K.
2017-06-01
Closed formulas for upper bounds on three-tangles of three-qubit reduced states in terms of three-qubit-invariant polynomials of pure four-qubit states are obtained. Our results offer tighter constraints on total three-way entanglement of a given qubit with the rest of the system than those used by Regula et al. [Phys. Rev. Lett. 113, 110501 (2014), 10.1103/PhysRevLett.113.110501 and Phys. Rev. Lett. 116, 049902(E) (2016)], 10.1103/PhysRevLett.116.049902 to verify monogamy of four-qubit quantum entanglement.
Upper bounds on secret-key agreement over lossy thermal bosonic channels
NASA Astrophysics Data System (ADS)
Kaur, Eneet; Wilde, Mark M.
2017-12-01
Upper bounds on the secret-key-agreement capacity of a quantum channel serve as a way to assess the performance of practical quantum-key-distribution protocols conducted over that channel. In particular, if a protocol employs a quantum repeater, achieving secret-key rates exceeding these upper bounds is evidence of having a working quantum repeater. In this paper, we extend a recent advance [Liuzzo-Scorpo et al., Phys. Rev. Lett. 119, 120503 (2017), 10.1103/PhysRevLett.119.120503] in the theory of the teleportation simulation of single-mode phase-insensitive Gaussian channels such that it now applies to the relative entropy of entanglement measure. As a consequence of this extension, we find tighter upper bounds on the nonasymptotic secret-key-agreement capacity of the lossy thermal bosonic channel than were previously known. The lossy thermal bosonic channel serves as a more realistic model of communication than the pure-loss bosonic channel, because it can model the effects of eavesdropper tampering and imperfect detectors. An implication of our result is that the previously known upper bounds on the secret-key-agreement capacity of the thermal channel are too pessimistic for the practical finite-size regime in which the channel is used a finite number of times, and so it should now be somewhat easier to witness a working quantum repeater when using secret-key-agreement capacity upper bounds as a benchmark.
Energy efficient quantum machines
NASA Astrophysics Data System (ADS)
Abah, Obinna; Lutz, Eric
2017-05-01
We investigate the performance of a quantum thermal machine operating in finite time based on shortcut-to-adiabaticity techniques. We compute efficiency and power for a paradigmatic harmonic quantum Otto engine by taking the energetic cost of the shortcut driving explicitly into account. We demonstrate that shortcut-to-adiabaticity machines outperform conventional ones for fast cycles. We further derive generic upper bounds on both quantities, valid for any heat engine cycle, using the notion of quantum speed limit for driven systems. We establish that these quantum bounds are tighter than those stemming from the second law of thermodynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azunre, P.
2016-09-21
In this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring the solution of auxiliary systems two and four times larger than the original system, respectively. An illustrative numerical example of bound construction and its use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Toomey, Bridget
Evolving power systems with increasing levels of stochasticity call for a need to solve optimal power flow problems with large quantities of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
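For context, the baseline the abstract refers to is the union (Boole) bound: the probability that any of m constraints is violated is at most the sum of the individual violation probabilities, so splitting a joint risk budget ε across the constraints enforces the joint chance constraint conservatively. The sketch below only illustrates that accounting; all numbers are invented.

```python
# Boole's inequality: P(any constraint violated) <= sum_i P(constraint i violated),
# so enforcing each constraint at level eps_i with sum_i eps_i <= eps guarantees
# the joint chance constraint at level eps (conservatively).
eps = 0.05                     # allowed joint violation probability
m = 10                         # number of individual constraints
eps_i = eps / m                # naive uniform risk allocation

# Hypothetical per-constraint violation probabilities achieved by some design:
p_violation = [0.004, 0.003, 0.005, 0.002, 0.004, 0.005, 0.001, 0.003, 0.004, 0.002]

print(f"per-constraint budget: {eps_i:.4f}")
print(f"union-bound estimate of joint violation: {sum(p_violation):.3f} <= {eps}")
```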
NASA Technical Reports Server (NTRS)
Dembo, Amir
1989-01-01
Pinsker and Ebert (1970) proved that in channels with additive Gaussian noise, feedback at most doubles the capacity. Cover and Pombra (1989) proved that feedback at most adds half a bit per transmission. Following their approach, the author proves that in the limit as signal power approaches either zero (very low SNR) or infinity (very high SNR), feedback does not increase the finite block-length capacity (which for nonstationary Gaussian channels replaces the standard notion of capacity that may not exist). Tighter upper bounds on the capacity are obtained in the process. Specializing these results to stationary channels, the author recovers some of the bounds recently obtained by Ozarow.
Computational experience with a parallel algorithm for tetrangle inequality bound smoothing.
Rajan, K; Deo, N
1999-09-01
Determining molecular structure from interatomic distances is an important and challenging problem. Given a molecule with n atoms, lower and upper bounds on interatomic distances can usually be obtained only for a small subset of the n(n-1)/2 atom pairs, using NMR. Given the bounds so obtained on the distances between some of the atom pairs, it is often useful to compute tighter bounds on all the n(n-1)/2 pairwise distances. This process is referred to as bound smoothing. The initial lower and upper bounds for the pairwise distances not measured are usually assumed to be 0 and infinity. One method for bound smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality--the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. For every quadruple of atoms, each pass of the tetrangle inequality bound smoothing procedure finds upper and lower limits on each of the six distances in the quadruple. Applying the tetrangle inequalities to each of the C(n,4) quadruples requires O(n^4) time. Here, we propose a parallel algorithm for bound smoothing employing the tetrangle inequality. Each pass of our algorithm requires O(n^3 log n) time on a CREW PRAM (Concurrent Read Exclusive Write Parallel Random Access Machine) with O(n/log n) processors. An implementation of this parallel algorithm on the Intel Paragon XP/S and its performance are also discussed.
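For context on the triangle-inequality stage that the tetrangle pass refines, a minimal (and unoptimized) O(n^3) bound-smoothing sketch is given below. The update rules u_ij <= u_ik + u_kj and l_ij >= max(l_ik - u_kj, l_kj - u_ik) are the standard distance-geometry ones; the 4-atom bounds are made up for illustration.

```python
import numpy as np

def triangle_bound_smoothing(lower, upper):
    """One Floyd-Warshall-style pass of triangle-inequality bound smoothing.

    lower, upper: symmetric (n, n) arrays of lower/upper distance bounds with 0 on
    the diagonal; unmeasured pairs use 0 and a large value.
    """
    n = lower.shape[0]
    l, u = lower.copy(), upper.copy()
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Upper bounds shrink via the triangle inequality.
                u[i, j] = min(u[i, j], u[i, k] + u[k, j])
                # Lower bounds grow via the reverse triangle inequality.
                l[i, j] = max(l[i, j], l[i, k] - u[k, j], l[k, j] - u[i, k])
    return l, u

INF = 1e6
u0 = np.array([[0, 2, INF, INF],
               [2, 0, 3, INF],
               [INF, 3, 0, 2],
               [INF, INF, 2, 0]], float)
l0 = np.zeros((4, 4))
l, u = triangle_bound_smoothing(l0, u0)
print(u)   # e.g. the (0, 3) upper bound tightens to 2 + 3 + 2 = 7
```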
NASA Astrophysics Data System (ADS)
Thelen, Brian J.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.
2017-04-01
In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as the Chernoff, Bhattacharyya, and J-divergence bounds. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general what is desired is a methodology for generating relatively tight lower and upper bounds, and then an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has been around for a while but was recently made more prominent by research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data, and to help in this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on Bayes error. In this paper we present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models, the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, and often dramatically so. We also propose some simple but effective means for computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
Tightening the entropic uncertainty bound in the presence of quantum memory
NASA Astrophysics Data System (ADS)
Adabi, F.; Salimi, S.; Haseli, S.
2016-06-01
The uncertainty principle is a fundamental principle in quantum physics. It implies that the measurement outcomes of two incompatible observables cannot be predicted simultaneously. In quantum information theory, this principle can be expressed in terms of entropic measures. M. Berta et al. [Nat. Phys. 6, 659 (2010), 10.1038/nphys1734] have indicated that the uncertainty bound can be altered by considering a particle as a quantum memory correlating with the primary particle. In this article, we obtain a lower bound for entropic uncertainty in the presence of a quantum memory by adding an additional term depending on the Holevo quantity and mutual information. We conclude that our lower bound will be tightened with respect to that of Berta et al. when the accessible information about the measurement outcomes is less than the mutual information of the joint state. Some examples have been investigated for which our lower bound is tighter than Berta et al.'s lower bound. Using our lower bound, a lower bound for the entanglement of formation of bipartite quantum states has been obtained, as well as an upper bound for the regularized distillable common randomness.
Efficient traffic grooming in SONET/WDM BLSR Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Awwal, A S; Billah, A B; Wang, B
2004-04-02
In this paper, we study traffic grooming in SONET/WDM BLSR networks under the uniform all-to-all traffic model with the objective of reducing total network costs (wavelength and electronic multiplexing costs), in particular, minimizing the number of ADMs while using the optimal number of wavelengths. We derive a new, tighter lower bound for the number of wavelengths when the number of nodes is a multiple of 4. We show that this lower bound is achievable. All previous ADM lower bounds, except perhaps one, were derived under the assumption that the magnitude of the traffic streams (r) is one unit (r = 1) with respect to the wavelength capacity granularity g. We then derive new, more general and tighter lower bounds for the number of ADMs subject to the constraint that the optimal number of wavelengths is used, and propose heuristic algorithms (a circle construction algorithm and a circle grooming algorithm) that try to minimize the number of ADMs while using the optimal number of wavelengths in BLSR networks. Both the bounds and the algorithms are applicable to any value of r and for different wavelength granularities g. Performance evaluation shows that wherever applicable, our lower bounds are at least as good as existing bounds and are much tighter than existing ones in many cases. Our proposed heuristic grooming algorithms perform very well with traffic streams of larger magnitude. The resulting number of ADMs required is very close to the corresponding lower bounds derived in this paper.
Molecular recognition of pyr mRNA by the Bacillus subtilis attenuation regulatory protein PyrR
Bonner, Eric R.; D’Elia, John N.; Billips, Benjamin K.; Switzer, Robert L.
2001-01-01
The pyrimidine nucleotide biosynthesis (pyr) operon in Bacillus subtilis is regulated by transcriptional attenuation. The PyrR protein binds in a uridine nucleotide-dependent manner to three attenuation sites at the 5′-end of pyr mRNA. PyrR binds an RNA-binding loop, allowing a terminator hairpin to form and repressing the downstream genes. The binding of PyrR to defined RNA molecules was characterized by a gel mobility shift assay. Titration indicated that PyrR binds RNA in an equimolar ratio. PyrR bound more tightly to the binding loops from the second (BL2 RNA) and third (BL3 RNA) attenuation sites than to the binding loop from the first (BL1 RNA) attenuation site. PyrR bound BL2 RNA 4–5-fold tighter in the presence of saturating UMP or UDP and 150-fold tighter with saturating UTP, suggesting that UTP is the more important co-regulator. The minimal RNA that bound tightly to PyrR was 28 nt long. Thirty-one structural variants of BL2 RNA were tested for PyrR binding affinity. Two highly conserved regions of the RNA, the terminal loop and top of the upper stem and a purine-rich internal bulge and the base pairs below it, were crucial for tight binding. Conserved elements of RNA secondary structure were also required for tight binding. PyrR protected conserved areas of the binding loop in hydroxyl radical footprinting experiments. PyrR likely recognizes conserved RNA sequences, but only if they are properly positioned in the correct secondary structure. PMID:11726695
Solar System and stellar tests of a quantum-corrected gravity
NASA Astrophysics Data System (ADS)
Zhao, Shan-Shan; Xie, Yi
2015-09-01
The renormalization group running of the gravitational constant has a universal form and represents a possible extension of general relativity. These renormalization group effects on general relativity will cause the running of the gravitational constant, and there exists a scale of renormalization α_ν, which depends on the mass of an astronomical system and needs to be determined by observations. We test renormalization group effects on general relativity and obtain the upper bounds of α_ν in the low-mass scales: the Solar System and five systems of binary pulsars. Using the supplementary advances of the perihelia provided by the INPOP10a (IMCCE, France) and EPM2011 (IAA RAS, Russia) ephemerides, we obtain new upper bounds on α_ν in the Solar System when the Lense-Thirring effect due to the Sun's angular momentum and the uncertainty of the Sun's quadrupole moment are properly taken into account. These two factors were absent in the previous work. We find that INPOP10a yields the upper bound α_ν = (0.3 ± 2.8) × 10^-20, while EPM2011 gives α_ν = (-2.5 ± 8.3) × 10^-21. Both of them are tighter than the previous result by 4 orders of magnitude. Furthermore, based on the observational data sets of five systems of binary pulsars: PSR J0737-3039, PSR B1534+12, PSR J1756-2251, PSR B1913+16, and PSR B2127+11C, the upper bound is found to be α_ν = (-2.6 ± 5.1) × 10^-17. From the bounds of this work at a low-mass scale and the ones at the mass scale of galaxies, we might catch an updated glimpse of the mass dependence of α_ν, and it is found that our improvement of the upper bounds in the Solar System can significantly change the possible pattern of the relation between log|α_ν| and log m from a linear one to a power law, where m is the mass of an astronomical system. This suggests that |α_ν| needs to be suppressed more rapidly with the decrease of the mass of low-mass systems. It also predicts that |α_ν| might have an upper limit in high-mass astrophysical systems, which can be tested in the future.
A comparison of error bounds for a nonlinear tracking system with detection probability Pd < 1.
Tong, Huisi; Zhang, Hao; Meng, Huadong; Wang, Xiqin
2012-12-14
Error bounds for nonlinear filtering are very important for performance evaluation and sensor management. This paper presents a comparative study of three error bounds for tracking filtering, when the detection probability is less than unity. One of these bounds is the random finite set (RFS) bound, which is deduced within the framework of finite set statistics. The others, which are the information reduction factor (IRF) posterior Cramer-Rao lower bound (PCRLB) and enumeration method (ENUM) PCRLB are introduced within the framework of finite vector statistics. In this paper, we deduce two propositions and prove that the RFS bound is equal to the ENUM PCRLB, while it is tighter than the IRF PCRLB, when the target exists from the beginning to the end. Considering the disappearance of existing targets and the appearance of new targets, the RFS bound is tighter than both IRF PCRLB and ENUM PCRLB with time, by introducing the uncertainty of target existence. The theory is illustrated by two nonlinear tracking applications: ballistic object tracking and bearings-only tracking. The simulation studies confirm the theory and reveal the relationship among the three bounds.
THE COOL ACCRETION DISK IN ESO 243-49 HLX-1: FURTHER EVIDENCE OF AN INTERMEDIATE-MASS BLACK HOLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Shane W.; Narayan, Ramesh; Zhu Yucong
2011-06-20
With an inferred bolometric luminosity exceeding 10^42 erg s^-1, HLX-1 in ESO 243-49 is the most luminous of ultraluminous X-ray sources and provides one of the strongest cases for the existence of intermediate-mass black holes. We obtain good fits to disk-dominated observations of the source with BHSPEC, a fully relativistic black hole accretion disk spectral model. Due to degeneracies in the model arising from the lack of independent constraints on inclination and black hole spin, there is a factor of 100 uncertainty in the best-fit black hole mass M. Nevertheless, spectral fitting of XMM-Newton observations provides robust lower and upper limits with 3000 M_sun ≲ M ≲ 3 × 10^5 M_sun, at 90% confidence, placing HLX-1 firmly in the intermediate-mass regime. The lower bound on M is entirely determined by matching the shape and peak energy of the thermal component in the spectrum. This bound is consistent with (but independent of) arguments based solely on the Eddington limit. Joint spectral modeling of the XMM-Newton data with more luminous Swift and Chandra observations increases the lower bound to 6000 M_sun, but this tighter constraint is not independent of the Eddington limit. The upper bound on M is sensitive to the maximum allowed inclination i, and is reduced to M ≲ 10^5 M_sun if we limit i ≲ 75°.
An eigenvalue localization set for tensors and its applications.
Zhao, Jianxing; Sang, Caili
2017-01-01
A new eigenvalue localization set for tensors is given and proved to be tighter than those presented by Li et al. (Linear Algebra Appl. 481:36-53, 2015) and Huang et al. (J. Inequal. Appl. 2016:254, 2016). As an application of this set, new bounds for the minimum eigenvalue of [Formula: see text]-tensors are established and proved to be sharper than some known results. Compared with the results obtained by Huang et al., the advantage of our results is that, without considering the selection of nonempty proper subsets S of [Formula: see text], we can obtain a tighter eigenvalue localization set for tensors and sharper bounds for the minimum eigenvalue of [Formula: see text]-tensors. Finally, numerical examples are given to verify the theoretical results.
Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System
NASA Astrophysics Data System (ADS)
Goluskin, David
2018-04-01
We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) ↦ (-x, -y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r-1)^3 at the nonzero equilibria, and the mean of xy^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.
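As a quick point of comparison with such bounds, the corresponding infinite-time averages can be estimated numerically by integrating a long chaotic trajectory at the standard parameters; the sketch below reports rough finite-time estimates of a few low-order moments (these are empirical averages, not the rigorous SDP bounds of the paper).

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, r, beta = 10.0, 28.0, 8.0 / 3.0          # standard chaotic Lorenz parameters

def lorenz(t, s):
    x, y, z = s
    return [sigma * (y - x), x * (r - z) - y, x * y - beta * z]

# Long trajectory; the averaging window starts after an initial transient.
sol = solve_ivp(lorenz, (0.0, 1000.0), [1.0, 1.0, 1.0],
                t_eval=np.linspace(50.0, 1000.0, 500_000), rtol=1e-8, atol=1e-8)
x, y, z = sol.y

for name, vals in [("<z>", z), ("<z^2>", z**2), ("<x*y>", x * y), ("<z^3>", z**3)]:
    print(f"{name} ~ {vals.mean():.2f}")
```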
A fresh look into the interacting dark matter scenario
NASA Astrophysics Data System (ADS)
Escudero, Miguel; Lopez-Honorez, Laura; Mena, Olga; Palomares-Ruiz, Sergio; Villanueva-Domingo, Pablo
2018-06-01
The elastic scattering between dark matter particles and radiation represents an attractive possibility to solve a number of discrepancies between observations and standard cold dark matter predictions, as the induced collisional damping would imply a suppression of small-scale structures. We consider this scenario and confront it with measurements of the ionization history of the Universe at several redshifts and with recent estimates of the counts of Milky Way satellite galaxies. We derive a conservative upper bound on the dark matter-photon elastic scattering cross section of σ_γDM < 8 × 10^-10 σ_T (m_DM/GeV) at 95% CL, about one order of magnitude tighter than previous constraints from satellite number counts. Due to the strong degeneracies with astrophysical parameters, the bound on the dark matter-photon scattering cross section derived here is driven by the estimate of the number of Milky Way satellite galaxies. Finally, we also argue that future 21 cm probes could help in disentangling among possible non-cold dark matter candidates, such as interacting and warm dark matter scenarios. Let us emphasize that bounds of similar magnitude to the ones obtained here could be also derived for models with dark matter-neutrino interactions and would be as constraining as the tightest limits on such scenarios.
Parallel algorithms for the molecular conformation problem
NASA Astrophysics Data System (ADS)
Rajan, Kumar
Given a set of objects, and some of the pairwise distances between them, the problem of identifying the positions of the objects in the Euclidean space is referred to as the molecular conformation problem. This problem is known to be computationally difficult. One of the most important applications of this problem is the determination of the structure of molecules. In the case of molecular structure determination, usually only the lower and upper bounds on some of the interatomic distances are available. The process of obtaining a tighter set of bounds between all pairs of atoms, using the available interatomic distance bounds, is referred to as bound-smoothing. One method for bound-smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality---the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. The sequential tetrangle-inequality bound-smoothing algorithm considers a quadruple of atoms at a time, and tightens the bounds on each of its six distances. The sequential algorithm is computationally expensive, and its application is limited to molecules with up to a few hundred atoms. Here, we conduct an experimental study of tetrangle-inequality bound-smoothing and reduce the sequential time by identifying the most computationally expensive portions of the process. We also present a simple criterion to determine which of the quadruples of atoms are likely to be tightened the most by tetrangle-inequality bound-smoothing. This test could be used to enhance the applicability of this process to large molecules. We map the problem of parallelizing tetrangle-inequality bound-smoothing to that of generating disjoint packing designs of a certain kind. We map this, in turn, to a regular-graph coloring problem, and present a simple, parallel algorithm for tetrangle-inequality bound-smoothing. We implement the parallel algorithm on the Intel Paragon XP/S, and apply it to real-life molecules. Our results show that with this parallel algorithm, tetrangle inequality can be applied to large molecules in a reasonable amount of time. We extend the regular graph to represent more general packing designs, and present a coloring algorithm for this graph. This can be used to generate constant-weight binary codes in parallel. Once a tighter set of distance bounds is obtained, the molecular conformation problem is usually formulated as a non-linear optimization problem, and a global optimization algorithm is then used to solve the problem. Here we present a parallel, deterministic algorithm for the optimization problem based on Interval Analysis. We implement our algorithm, using dynamic load balancing, on a network of Sun Ultra-Sparc workstations. Our experience with this algorithm shows that its application is limited to small instances of the molecular conformation problem, where the number of measured, pairwise distances is close to the maximum value. However, since the interval method eliminates a substantial portion of the initial search space very quickly, it can be used to prune the search space before any of the more efficient, nondeterministic methods can be applied.
Utilization Bound of Non-preemptive Fixed Priority Schedulers
NASA Astrophysics Data System (ADS)
Park, Moonju; Chae, Jinseok
It is known that the schedulability of a non-preemptive task set with fixed priority can be determined in pseudo-polynomial time. However, since Rate Monotonic scheduling is not optimal for non-preemptive scheduling, the applicability of existing polynomial time tests that provide sufficient schedulability conditions, such as Liu and Layland's bound, is limited. This letter proposes a new sufficient condition for non-preemptive fixed priority scheduling that can be used for any fixed priority assignment scheme. It is also shown that the proposed schedulability test has a tighter utilization bound than existing test methods.
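For reference, the classic Liu and Layland test mentioned above accepts any preemptive rate-monotonic task set whose total utilization does not exceed n(2^(1/n) - 1); the sketch below evaluates that sufficient condition on a made-up task set (the letter's non-preemptive bound is a different expression and is not reproduced here).

```python
def liu_layland_bound(n: int) -> float:
    """Classic sufficient utilization bound for preemptive rate-monotonic scheduling."""
    return n * (2 ** (1.0 / n) - 1.0)

def passes_liu_layland(tasks):
    """tasks: list of (execution_time, period) pairs; returns (utilization, bound, ok)."""
    u = sum(c / t for c, t in tasks)
    bound = liu_layland_bound(len(tasks))
    return u, bound, u <= bound

tasks = [(1, 4), (1, 5), (2, 10)]        # illustrative (C_i, T_i) values
u, bound, ok = passes_liu_layland(tasks)
print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable by the LL test: {ok}")
```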
NASA Astrophysics Data System (ADS)
Zhang, Jun; Zhang, Yang; Yu, Chang-Shui
2015-06-01
The Heisenberg uncertainty principle shows that no one can specify the values of the non-commuting canonically conjugated variables simultaneously. However, the uncertainty relation is usually applied to two incompatible measurements. We present tighter bounds on both the entropic uncertainty relation and the information exclusion relation for multiple measurements in the presence of quantum memory. As applications, three incompatible measurements on the Werner state and Horodecki’s bound entangled state are investigated in detail.
Greenbaum, Gili
2015-09-07
Evaluation of the time scale of the fixation of neutral mutations is crucial to the theoretical understanding of the role of neutral mutations in evolution. Diffusion approximations of the Wright-Fisher model are most often used to derive analytic formulations of genetic drift, as well as for the time scales of the fixation of neutral mutations. These approximations require a set of assumptions, most notably that genetic drift is a stochastic process in a continuous allele-frequency space, an assumption appropriate for large populations. Here equivalent approximations are derived using a coalescent theory approach which relies on a different set of assumptions than the diffusion approach, and adopts a discrete allele-frequency space. Solutions for the mean and variance of the time to fixation of a neutral mutation derived from the two approaches converge for large populations but slightly differ for small populations. A Markov chain analysis of the Wright-Fisher model for small populations is used to evaluate the solutions obtained, showing that both the mean and the variance are better approximated by the coalescent approach. The coalescence approximation represents a tighter upper-bound for the mean time to fixation than the diffusion approximation, while the diffusion approximation and coalescence approximation form an upper and lower bound, respectively, for the variance. The converging solutions and the small deviations of the two approaches strongly validate the use of diffusion approximations, but suggest that coalescent theory can provide more accurate approximations for small populations. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cruz Jiménez, Miriam Guadalupe; Meyer Baese, Uwe; Jovanovic Dolecek, Gordana
2017-12-01
New theoretical lower bounds for the number of operators needed in fixed-point constant multiplication blocks are presented. The multipliers are constructed with the shift-and-add approach, where every arithmetic operation is pipelined, and with the generalization that n-input pipelined additions/subtractions are allowed, along with pure pipelining registers. These lower bounds, tighter than the state-of-the-art theoretical limits, are particularly useful in early design stages for a quick assessment in the hardware utilization of low-cost constant multiplication blocks implemented in the newest families of field programmable gate array (FPGA) integrated circuits.
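To make the cost model concrete, a common baseline for a single constant multiplier is the canonical signed digit (CSD) recoding, in which the number of nonzero digits minus one equals the adder/subtractor count of a naive, non-shared shift-and-add implementation; the sketch below computes that recoding (the bounds in the paper concern pipelined multi-input adder blocks and are tighter than this simple count). The example constants are arbitrary.

```python
def csd_digits(n: int):
    """Canonical signed digit (non-adjacent form) recoding of a positive integer.

    Returns digits in {-1, 0, +1}, least-significant digit first.
    """
    digits = []
    while n:
        if n & 1:
            d = 2 - (n % 4)        # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

def adder_count(constant: int) -> int:
    """Adders/subtractors for a naive shift-and-add multiplier by `constant`."""
    nonzero = sum(1 for d in csd_digits(constant) if d != 0)
    return max(nonzero - 1, 0)

for c in (7, 45, 105, 2017):
    print(c, csd_digits(c), "adders:", adder_count(c))
```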
Zhang, Xian-Ming; Han, Qing-Long; Ge, Xiaohua
2017-09-22
This paper is concerned with the problem of robust H∞ control of an uncertain discrete-time Takagi-Sugeno fuzzy system with an interval-like time-varying delay. A novel finite-sum inequality-based method is proposed to provide a tighter estimation on the forward difference of certain Lyapunov functional, leading to a less conservative result. First, an auxiliary vector function is used to establish two finite-sum inequalities, which can produce tighter bounds for the finite-sum terms appearing in the forward difference of the Lyapunov functional. Second, a matrix-based quadratic convex approach is employed to equivalently convert the original matrix inequality including a quadratic polynomial on the time-varying delay into two boundary matrix inequalities, which delivers a less conservative bounded real lemma (BRL) for the resultant closed-loop system. Third, based on the BRL, a novel sufficient condition on the existence of suitable robust H∞ fuzzy controllers is derived. Finally, two numerical examples and a computer-simulated truck-trailer system are provided to show the effectiveness of the obtained results.
Bounding the Resource Availability of Partially Ordered Events with Constant Resource Impact
NASA Technical Reports Server (NTRS)
Frank, Jeremy
2004-01-01
We compare existing techniques to bound the resource availability of partially ordered events. We first show that, contrary to intuition, two existing techniques, one due to Laborie and one due to Muscettola, are not strictly comparable in terms of the size of the search trees generated under chronological search with a fixed heuristic. We describe a generalization of these techniques called the Flow Balance Constraint to tightly bound the amount of available resource for a set of partially ordered events with piecewise constant resource impact. We prove that the new technique generates smaller proof trees under chronological search with a fixed heuristic, at little increase in computational expense. We then show how to construct tighter resource bounds but at increased computational cost.
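To fix ideas, one loose but valid envelope simply treats every event whose ordering relative to a given event is undecided optimistically (for the upper bound) or pessimistically (for the lower bound); the sketch below computes that naive envelope for a small partially ordered set of constant-impact events, with all event data invented. The techniques compared in the paper are designed to produce much tighter envelopes than this.

```python
# Events with constant resource impact; `before[e]` lists direct predecessors of e.
events = {"a": +3, "b": -2, "c": +1, "d": -4}
before = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
initial_level = 5

def naive_envelope(event):
    """Loose optimistic/pessimistic bounds on the resource level just after `event`.

    Only direct predecessors are treated as certain; ignoring transitive ordering
    information keeps the bounds valid, just looser.
    """
    must_precede = set(before[event]) | {event}
    undecided = [e for e in events
                 if e not in must_precede and event not in before.get(e, [])]
    committed = initial_level + sum(events[e] for e in must_precede)
    upper = committed + sum(max(0, events[e]) for e in undecided)   # producers first
    lower = committed + sum(min(0, events[e]) for e in undecided)   # consumers first
    return lower, upper

for e in events:
    print(e, naive_envelope(e))
```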
Multibody local approximation: Application to conformational entropy calculations on biomolecules
NASA Astrophysics Data System (ADS)
Suárez, Ernesto; Suárez, Dimas
2012-08-01
Multibody type expansions like mutual information expansions are widely used for computing or analyzing properties of large composite systems. The power of such expansions stems from their generality. Their weaknesses, however, are the large computational cost of including high order terms due to the combinatorial explosion and the fact that truncation errors do not decrease strictly with the expansion order. Herein, we take advantage of the redundancy of multibody expansions in order to derive an efficient reformulation that captures implicitly all-order correlation effects within a given cutoff, avoiding the combinatory explosion. This approach, which is cutoff dependent rather than order dependent, keeps the generality of the original expansions and simultaneously mitigates their limitations provided that a reasonable cutoff can be used. An application of particular interest can be the computation of the conformational entropy of flexible peptide molecules from molecular dynamics trajectories. By combining the multibody local estimations of conformational entropy with average values of the rigid-rotor and harmonic-oscillator entropic contributions, we obtain by far a tighter upper bound of the absolute entropy than the one obtained by the broadly used quasi-harmonic method.
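For orientation, the second-order truncation of a mutual information expansion over discretized degrees of freedom reads S ≈ Σ_i S_i − Σ_{i<j} I_ij with I_ij = S_i + S_j − S_ij; the sketch below estimates it with plug-in histogram entropies on synthetic rotamer-like data (a stand-in for a real trajectory). The paper's cutoff-based reformulation goes beyond this fixed-order truncation.

```python
import numpy as np
from itertools import combinations

def plugin_entropy(labels):
    """Plug-in (histogram) entropy, in nats, of a discrete sample (rows are states)."""
    _, counts = np.unique(labels, return_counts=True, axis=0)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def mie_second_order(states):
    """Second-order mutual information expansion for (n_frames, n_torsions) states."""
    n = states.shape[1]
    s1 = [plugin_entropy(states[:, i]) for i in range(n)]
    total = sum(s1)
    for i, j in combinations(range(n), 2):
        s_ij = plugin_entropy(states[:, [i, j]])
        total -= s1[i] + s1[j] - s_ij          # subtract pairwise mutual information
    return total

# Synthetic stand-in for discretized torsion states from an MD trajectory.
rng = np.random.default_rng(1)
base = rng.integers(0, 3, size=(5000, 1))
states = np.hstack([base,
                    (base + rng.integers(0, 2, size=(5000, 1))) % 3,
                    rng.integers(0, 3, size=(5000, 2))])
print("2nd-order MIE entropy (nats):", round(mie_second_order(states), 3))
```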
Efficiency and its bounds for thermal engines at maximum power using Newton's law of cooling.
Yan, H; Guo, Hao
2012-01-01
We study a thermal engine model for which Newton's cooling law is obeyed during heat transfer processes. The thermal efficiency and its bounds at maximum output power are derived and discussed. This model, though quite simple, can be applied not only to Carnot engines but also to four other types of engines. For the long thermal contact time limit, new bounds, tighter than what were known before, are obtained. In this case, this model can simulate Otto, Joule-Brayton, Diesel, and Atkinson engines. While in the short contact time limit, which corresponds to the Carnot cycle, the same efficiency bounds as that from Esposito et al. [Phys. Rev. Lett. 105, 150603 (2010)] are derived. In both cases, the thermal efficiency decreases as the ratio between the heat capacities of the working medium during heating and cooling stages increases. This might provide instructions for designing real engines. © 2012 American Physical Society
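For reference, the efficiency-at-maximum-power bounds of Esposito et al. recovered in the short-contact-time limit are usually quoted as

η_C / 2 ≤ η* ≤ η_C / (2 − η_C),  with η_C = 1 − T_c / T_h,

and the Curzon-Ahlborn value η_CA = 1 − √(T_c/T_h) lies between them; this is stated here from that reference for context, not re-derived.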
NASA Technical Reports Server (NTRS)
Jaggi, S.; Quattrochi, D.; Baskin, R.
1992-01-01
The effective flux incident upon the detectors of a thermal sensor, after it has been corrected for atmospheric effects, is a function of a non-linear combination of the emissivity of the target for that channel and the temperature of the target. The sensor system cannot separate the contribution from the emissivity and the temperature that constitute the flux value. A method that estimates the bounds on these temperatures and emissivities from thermal data is described. This method is then tested with remotely sensed data obtained from NASA's Thermal Infrared Multispectral Scanner (TIMS) - a 6 channel thermal sensor. Since this is an under-determined set of equations, i.e., there are 7 unknowns (6 emissivities and 1 temperature) and 6 equations (corresponding to the 6 channel fluxes), there exists theoretically an infinite number of combinations of emissivities and temperature that can satisfy these equations. Using some realistic bounds on the emissivities, bounds on the temperature are calculated. These bounds on the temperature are refined to estimate a tighter bound on the emissivity of the source. An error analysis is also carried out to quantitatively determine the extent of uncertainty introduced in the estimate of these parameters. This method is useful only when a realistic set of bounds can be obtained for the emissivities of the data. In the case of water the lower and upper bounds were set at 0.97 and 1.00, respectively. Five flights were flown in succession at altitudes of 2 km (low), 6 km (mid), 12 km (high), and then back again at 6 km and 2 km. The area selected was the Ross Barnett Reservoir near Jackson, Mississippi. The mission was flown during the predawn hours of 1 Feb. 1992. Radiosonde data was collected for that duration to profile the characteristics of the atmosphere. Ground truth temperatures using thermometers and radiometers were also obtained over an area of the reservoir. The results of two independent runs of the radiometer data averaged 7.03 plus or minus .70 for the first run and 7.31 plus or minus .88 for the second run. The results of the algorithm yield temperatures ranging from 7.68 for the low-altitude data to 8.73 for the high-altitude data.
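A minimal illustration of the temperature-bounding step is a channel-by-channel inversion of the Planck function under the assumed emissivity limits: for a measured spectral radiance L at wavelength λ with L = ε·B(λ, T), taking ε = 1.00 gives the coldest consistent temperature and ε = 0.97 the warmest. The radiance and wavelength in the sketch below are made up; the actual TIMS processing also involves the atmospheric correction and multi-channel refinement described above.

```python
import numpy as np

# Planck-law constants (SI, spectral radiance per unit wavelength).
C1 = 1.191042e-16    # 2*h*c^2  [W m^2 sr^-1]
C2 = 1.438777e-2     # h*c/k    [m K]

def brightness_temperature(radiance, wavelength):
    """Invert B(lambda, T) = radiance for T (radiance in W m^-3 sr^-1, wavelength in m)."""
    return C2 / (wavelength * np.log(1.0 + C1 / (wavelength**5 * radiance)))

def temperature_bounds(measured_radiance, wavelength, eps_lo=0.97, eps_hi=1.00):
    """Given L = eps * B(lambda, T) with eps in [eps_lo, eps_hi], bound T."""
    t_cold = brightness_temperature(measured_radiance / eps_hi, wavelength)   # eps = 1.00
    t_warm = brightness_temperature(measured_radiance / eps_lo, wavelength)   # eps = 0.97
    return t_cold, t_warm

lam = 11.0e-6        # hypothetical ~11 micron channel
L = 8.0e6            # W m^-3 sr^-1, roughly the right order for a ~280 K surface
print(temperature_bounds(L, lam))
```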
On entanglement-assisted quantum codes achieving the entanglement-assisted Griesmer bound
NASA Astrophysics Data System (ADS)
Li, Ruihu; Li, Xueliang; Guo, Luobin
2015-12-01
The theory of entanglement-assisted quantum error-correcting codes (EAQECCs) is a generalization of the standard stabilizer formalism. Any quaternary (or binary) linear code can be used to construct EAQECCs under the entanglement-assisted (EA) formalism. We derive an EA-Griesmer bound for linear EAQECCs, which is a quantum analog of the Griesmer bound for classical codes. This EA-Griesmer bound is tighter than known bounds for EAQECCs in the literature. For a given quaternary linear code {C}, we show that the parameters of the EAQECC that EA-stabilized by the dual of {C} can be determined by a zero radical quaternary code induced from {C}, and a necessary condition under which a linear EAQECC may achieve the EA-Griesmer bound is also presented. We construct four families of optimal EAQECCs and then show the necessary condition for existence of EAQECCs is also sufficient for some low-dimensional linear EAQECCs. The four families of optimal EAQECCs are degenerate codes and go beyond earlier constructions. What is more, except four codes, our [[n,k,d_{ea};c
Upper bounds on superpartner masses from upper bounds on the Higgs boson mass.
Cabrera, M E; Casas, J A; Delgado, A
2012-01-13
The LHC is putting bounds on the Higgs boson mass. In this Letter we use those bounds to constrain the minimal supersymmetric standard model (MSSM) parameter space using the fact that, in supersymmetry, the Higgs mass is a function of the masses of sparticles, and therefore an upper bound on the Higgs mass translates into an upper bound on the masses of the superpartners. We show that, although current bounds do not constrain the MSSM parameter space from above, once the Higgs mass bound improves, big regions of this parameter space will be excluded, putting upper bounds on supersymmetry (SUSY) masses. On the other hand, for the case of split-SUSY we show that, for moderate or large tanβ, the present bounds on the Higgs mass imply that the common mass for scalars cannot be greater than 10^11 GeV. We show how these bounds will evolve as the LHC continues to improve the limits on the Higgs mass.
Improved key-rate bounds for practical decoy-state quantum-key-distribution systems
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Zhao, Qi; Razavi, Mohsen; Ma, Xiongfeng
2017-01-01
The decoy-state scheme is the most widely implemented quantum-key-distribution protocol in practice. In order to account for the finite-size key effects on the achievable secret key generation rate, a rigorous statistical fluctuation analysis is required. Originally, a heuristic Gaussian-approximation technique was used for this purpose, which, despite its analytical convenience, was not sufficiently rigorous. The fluctuation analysis has recently been made rigorous by using the Chernoff bound. There is a considerable gap, however, between the key-rate bounds obtained from these techniques and that obtained from the Gaussian assumption. Here we develop a tighter bound for the decoy-state method, which yields a smaller failure probability. This improvement results in a higher key rate and increases the maximum distance over which secure key exchange is possible. By optimizing the system parameters, our simulation results show that our method almost closes the gap between the two previously proposed techniques and achieves a performance similar to that of conventional Gaussian approximations.
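To see why the choice of concentration inequality matters in the finite-key regime, one can compare the one-sided deviation allowed at a given failure probability by a Gaussian (normal-approximation) tail with that allowed by a multiplicative Chernoff bound; the sketch below does this generic comparison for hypothetical event counts and is not the paper's specific estimator.

```python
import numpy as np
from scipy.stats import norm

def gaussian_deviation(mean, failure_prob):
    """Deviation t with P(X - mean >= t) ~ failure_prob under a normal approximation
    (variance taken equal to `mean`, i.e. Poisson-like counting statistics)."""
    return norm.isf(failure_prob) * np.sqrt(mean)

def chernoff_deviation(mean, failure_prob):
    """Smallest t = delta*mean such that exp(-delta^2*mean/(2+delta)) <= failure_prob,
    using the standard multiplicative Chernoff upper-tail bound."""
    a = np.log(1.0 / failure_prob)
    # Solve mean*delta^2 - a*delta - 2a = 0 for delta > 0.
    delta = (a + np.sqrt(a * a + 8.0 * a * mean)) / (2.0 * mean)
    return delta * mean

for n_events in (1e3, 1e5, 1e7):
    g = gaussian_deviation(n_events, 1e-10)
    c = chernoff_deviation(n_events, 1e-10)
    print(f"mean = {n_events:.0e}: gaussian deviation = {g:.1f}, chernoff deviation = {c:.1f}")
```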
Upper and lower bounds for the speed of pulled fronts with a cut-off
NASA Astrophysics Data System (ADS)
Benguria, R. D.; Depassier, M. C.; Loss, M.
2008-02-01
We establish rigorous upper and lower bounds for the speed of pulled fronts with a cut-off. For all reaction terms of KPP type a simple analytic upper bound is given. The lower bounds however depend on details of the reaction term. For a small cut-off parameter the two leading order terms in the asymptotic expansion of the upper and lower bounds coincide and correspond to the Brunet-Derrida formula. For large cut-off parameters the bounds do not coincide and permit a simple estimation of the speed of the front.
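For reference, the Brunet-Derrida formula referred to above is usually quoted, for the classical FKPP reaction term with cut-off parameter ε → 0, as

v(ε) ≈ 2 − π² / (ln ε)²,

i.e. the two leading-order terms that the upper and lower bounds of the paper reproduce for small cut-off; the formula is stated here from the literature, not re-derived.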
Universal bounds on current fluctuations.
Pietzonka, Patrick; Barato, Andre C; Seifert, Udo
2016-05-01
For current fluctuations in nonequilibrium steady states of Markovian processes, we derive four different universal bounds valid beyond the Gaussian regime. Different variants of these bounds apply to either the entropy change or any individual current, e.g., the rate of substrate consumption in a chemical reaction or the electron current in an electronic device. The bounds vary with respect to their degree of universality and tightness. A universal parabolic bound on the generating function of an arbitrary current depends solely on the average entropy production. A second, stronger bound requires knowledge both of the thermodynamic forces that drive the system and of the topology of the network of states. These two bounds are conjectures based on extensive numerics. An exponential bound that depends only on the average entropy production and the average number of transitions per time is rigorously proved. This bound has no obvious relation to the parabolic bound but it is typically tighter further away from equilibrium. An asymptotic bound that depends on the specific transition rates and becomes tight for large fluctuations is also derived. This bound allows for the prediction of the asymptotic growth of the generating function. Even though our results are restricted to networks with a finite number of states, we show that the parabolic bound is also valid for three paradigmatic examples of driven diffusive systems for which the generating function can be calculated using the additivity principle. Our bounds provide a general class of constraints for nonequilibrium systems.
New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Rodemich, E. R.; Rumsey, H., Jr.; Welch, L. R.
1977-01-01
An upper bound on the rate of a binary code as a function of minimum code distance (using a Hamming code metric) is arrived at from Delsarte-MacWilliams inequalities. The upper bound so found is asymptotically less than Levenshtein's bound, and a fortiori less than Elias' bound. Appendices review properties of Krawtchouk polynomials and Q-polynomials utilized in the rigorous proofs.
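For reference, the resulting first linear-programming (McEliece-Rodemich-Rumsey-Welch) bound is usually quoted, for relative minimum distance δ ∈ (0, 1/2), as

R(δ) ≤ H₂(1/2 − √(δ(1 − δ))),

where H₂ is the binary entropy function; the formula is reproduced here from the literature for context.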
Unification of multiqubit polygamy inequalities
NASA Astrophysics Data System (ADS)
Kim, Jeong San
2012-03-01
I establish a unified view of polygamy of multiqubit entanglement. I first introduce a two-parameter generalization of the entanglement of assistance, namely, the unified entanglement of assistance for bipartite quantum states, and provide an analytic lower bound in two-qubit systems. I show a broad class of polygamy inequalities of multiqubit entanglement in terms of the unified entanglement of assistance that encapsulates all known multiqubit polygamy inequalities as special cases. I further show that this class of polygamy inequalities can be improved into tighter inequalities for three-qubit systems.
UPPER BOUND RISK ESTIMATES FOR MIXTURES OF CARCINOGENS
The excess cancer risk that might result from exposure to a mixture of chemical carcinogens usually is estimated with data from experiments conducted on individual chemicals. An upper bound on the total excess risk is estimated commonly by summing individual upper bound risk esti...
A new S-type eigenvalue inclusion set for tensors and its applications.
Huang, Zheng-Ge; Wang, Li-Gong; Xu, Zhong; Cui, Jing-Jing
2016-01-01
In this paper, a new S -type eigenvalue localization set for a tensor is derived by dividing [Formula: see text] into disjoint subsets S and its complement. It is proved that this new set is sharper than those presented by Qi (J. Symb. Comput. 40:1302-1324, 2005), Li et al. (Numer. Linear Algebra Appl. 21:39-50, 2014) and Li et al. (Linear Algebra Appl. 481:36-53, 2015). As applications of the results, new bounds for the spectral radius of nonnegative tensors and the minimum H -eigenvalue of strong M -tensors are established, and we prove that these bounds are tighter than those obtained by Li et al. (Numer. Linear Algebra Appl. 21:39-50, 2014) and He and Huang (J. Inequal. Appl. 2014:114, 2014).
Soccer players' fitting perception of different upper boot materials.
Olaso Melis, J C; Priego Quesada, J I; Lucas-Cuevas, A G; González García, J C; Puigcerver Palau, S
2016-07-01
The present study assessed the influence of upper boot materials on fitting perception. Twenty players tested three soccer boots only differing in the upper boot material (natural calf leather, natural kangaroo leather and synthetic leather). Players reported fitting perception and preference on specific foot areas using a perceived fitting scale. Ratings were averaged for every foot area. Repeated measures ANOVA was used to analyze the differences between boots. The kangaroo leather boots were perceived tighter and closer to the preferred fitting in general fitting, metatarsals area and instep area. The synthetic leather boots were perceived as the loosest and as the most distant boot from the preferred fitting in medial front area and instep area. In conclusion, the type of upper boot material influences the fitting perception of soccer players. The kangaroo leather was the material whose fitting was perceived closest to the players fitting preference. Copyright © 2016 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Solving Connected Subgraph Problems in Wildlife Conservation
NASA Astrophysics Data System (ADS)
Dilkina, Bistra; Gomes, Carla P.
We investigate mathematical formulations and solution techniques for a variant of the Connected Subgraph Problem. Given a connected graph with costs and profits associated with the nodes, the goal is to find a connected subgraph that contains a subset of distinguished vertices. In this work we focus on the budget-constrained version, where we maximize the total profit of the nodes in the subgraph subject to a budget constraint on the total cost. We propose several mixed-integer formulations for enforcing the subgraph connectivity requirement, which plays a key role in the combinatorial structure of the problem. We show that a new formulation based on subtour elimination constraints is more effective at capturing the combinatorial structure of the problem, providing significant advantages over the previously considered encoding which was based on a single commodity flow. We test our formulations on synthetic instances as well as on real-world instances of an important problem in environmental conservation concerning the design of wildlife corridors. Our encoding results in a much tighter LP relaxation, and more importantly, it results in finding better integer feasible solutions as well as much better upper bounds on the objective (often proving optimality or within less than 1% of optimality), both when considering the synthetic instances as well as the real-world wildlife corridor instances.
Cross support overview and operations concept for future space missions
NASA Technical Reports Server (NTRS)
Stallings, William; Kaufeler, Jean-Francois
1994-01-01
Ground networks must respond to the requirements of future missions, which include smaller sizes, tighter budgets, increased numbers, and shorter development schedules. The Consultative Committee for Space Data Systems (CCSDS) is meeting these challenges by developing a general cross support concept, reference model, and service specifications for Space Link Extension services for space missions involving cross support among Space Agencies. This paper identifies and bounds the problem, describes the need to extend Space Link services, gives an overview of the operations concept, and introduces complementary CCSDS work on standardizing Space Link Extension services.
Counterfactual Quantum Deterministic Key Distribution
NASA Astrophysics Data System (ADS)
Zhang, Sheng; Wang, Jian; Tang, Chao-Jing
2013-01-01
We propose a new counterfactual quantum cryptography protocol for distributing a deterministic key. By adding a controlled blocking operation module to the original protocol [T.G. Noh, Phys. Rev. Lett. 103 (2009) 230501], the correlation between the polarizations of the two parties, Alice and Bob, is extended; therefore, one can distribute both deterministic keys and random ones using our protocol. We also give a simple proof of the security of our protocol using the technique we previously applied to the original protocol. Most importantly, our analysis produces a bound tighter than the existing ones.
Upper bound on the slope of steady water waves with small adverse vorticity
NASA Astrophysics Data System (ADS)
So, Seung Wook; Strauss, Walter A.
2018-03-01
We consider the angle of inclination (with respect to the horizontal) of the profile of a steady 2D inviscid symmetric periodic or solitary water wave subject to gravity. There is an upper bound of 31.15° in the irrotational case [1] and an upper bound of 45° in the case of favorable vorticity [13]. On the other hand, if the vorticity is adverse, the profile can become vertical. We prove here that if the adverse vorticity is sufficiently small, then the angle still has an upper bound which is slightly larger than 45°.
Generalized Bezout's Theorem and its applications in coding theory
NASA Technical Reports Server (NTRS)
Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.
1996-01-01
This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound on the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determine a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes constructed by the new construction are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2^3) is also constructed.
Theoretical Bounds of Direct Binary Search Halftoning.
Liao, Jan-Ray
2015-11-01
Direct binary search (DBS) produces the best image quality among halftoning algorithms. The reason is that it minimizes the total squared perceived error instead of using heuristic approaches. The search for the optimal solution involves two operations: (1) toggle and (2) swap. Both operations seek the binary state of each pixel that minimizes the total squared perceived error. This error-energy minimization leads to a conjecture that the absolute value of the filtered error after DBS converges is bounded by half of the peak value of the autocorrelation filter. However, a proof of the bound's existence had not yet been found. In this paper, we present a proof that the bound exists as conjectured, under the condition that at least one swap occurs after toggle converges. The theoretical analysis also indicates that a swap with a pixel farther from the center of the autocorrelation filter results in a tighter bound. Therefore, we propose a new DBS algorithm which considers toggle and swap separately, with the swap operations considered in order from the edge to the center of the filter. Experimental results show that the new algorithm is more efficient than the previous algorithm and produces halftoned images of the same quality as the previous algorithm.
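A minimal toggle-only sketch of DBS is shown below. A Gaussian filter stands in for the human-visual-system (perceived-error) filter, the error is recomputed from scratch after each trial toggle for clarity rather than speed, and the swap operation and the edge-to-center swap ordering proposed in the paper are omitted; this is an illustration of the search principle, not the paper's algorithm.

```python
# Toggle-only direct binary search sketch with a Gaussian stand-in for the HVS filter.
import numpy as np
from scipy.ndimage import gaussian_filter

def dbs_toggle(continuous, sigma=1.5, max_passes=10):
    halftone = (continuous > 0.5).astype(float)            # initial binary image
    def perceived_sq_error(h):
        return np.sum(gaussian_filter(h - continuous, sigma) ** 2)
    err = perceived_sq_error(halftone)
    for _ in range(max_passes):
        changed = False
        for i in range(halftone.shape[0]):
            for j in range(halftone.shape[1]):
                halftone[i, j] = 1.0 - halftone[i, j]       # trial toggle
                new_err = perceived_sq_error(halftone)      # full recompute (slow but clear)
                if new_err < err:
                    err, changed = new_err, True            # keep the toggle
                else:
                    halftone[i, j] = 1.0 - halftone[i, j]   # revert
        if not changed:                                     # converged: no improving toggle
            break
    return halftone

rng = np.random.default_rng(0)
img = gaussian_filter(rng.random((16, 16)), 2.0)            # smooth toy grayscale image in [0, 1]
print(dbs_toggle(img).mean())
```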
Li, Zukui; Floudas, Christodoulos A.
2012-01-01
Probabilistic guarantees on constraint satisfaction for robust counterpart optimization are studied in this paper. The robust counterpart optimization formulations studied are derived from box, ellipsoidal, polyhedral, “interval+ellipsoidal” and “interval+polyhedral” uncertainty sets (Li, Z., Ding, R., and Floudas, C.A., A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear and Robust Mixed Integer Linear Optimization, Ind. Eng. Chem. Res, 2011, 50, 10567). For those robust counterpart optimization formulations, their corresponding probability bounds on constraint satisfaction are derived for different types of uncertainty characteristic (i.e., bounded or unbounded uncertainty, with or without detailed probability distribution information). The findings of this work extend the results in the literature and provide greater flexibility for robust optimization practitioners in choosing tighter probability bounds so as to find less conservative robust solutions. Extensive numerical studies are performed to compare the tightness of the different probability bounds and the conservatism of different robust counterpart optimization formulations. Guiding rules for the selection of robust counterpart optimization models and for the determination of the size of the uncertainty set are discussed. Applications in production planning and process scheduling problems are presented. PMID:23329868
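As background for the kind of guarantee discussed above, the sketch below evaluates one classical bound of this type: for independent, bounded, symmetric uncertainties and an ellipsoidal uncertainty set of size Ω, the constraint-violation probability is at most exp(-Ω²/2) (a Ben-Tal/Nemirovski-type bound). This is only an illustrative example of how such bounds guide the choice of the uncertainty-set size; the paper derives several tighter variants that are not reproduced here.

```python
# Classical probability bound for the ellipsoidal robust counterpart (illustrative only).
import math

def ellipsoidal_violation_bound(omega: float) -> float:
    # P(constraint violation) <= exp(-omega^2 / 2) under bounded symmetric uncertainty
    return math.exp(-omega ** 2 / 2.0)

for omega in (1.0, 2.0, 3.0):
    print(f"Omega = {omega}: P(violation) <= {ellipsoidal_violation_bound(omega):.4f}")
```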
Paul L. Patterson; Mark Finco
2011-01-01
This paper explores the information forest inventory data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977)....
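A hedged illustration of the kind of bound described above: an exact (Clopper-Pearson style) upper confidence bound for a Bernoulli proportion, which with zero observed occurrences reduces to 1 - alpha^(1/n). This is a standard result consistent with Cochran (1977), shown only as an example; it is not necessarily the authors' exact derivation.

```python
# Exact upper confidence bound for a Bernoulli proportion (e.g. a never-observed forest type).
from scipy.stats import beta

def upper_confidence_bound(successes: int, n: int, alpha: float = 0.05) -> float:
    if successes == n:
        return 1.0
    return beta.ppf(1.0 - alpha, successes + 1, n - successes)

# a forest type never observed in n = 100 sampled plots (hypothetical numbers)
print(upper_confidence_bound(0, 100))   # exact 95% upper bound
print(1 - 0.05 ** (1 / 100))            # same value via the zero-count formula
```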
Solar-System Tests of Gravitational Theories
NASA Technical Reports Server (NTRS)
Shapiro, Irwin I.
2005-01-01
This research is aimed at testing gravitational theory, primarily on an interplanetary scale and using mainly observations of objects in the solar system. Our goal is either to detect departures from the standard model (general relativity) - if any exist within the level of sensitivity of our data - or to support this model by placing tighter bounds on any departure from it. For this project, we have analyzed a combination of observational data with our model of the solar system, including planetary radar ranging, lunar laser ranging, and spacecraft tracking, as well as pulsar timing and pulsar VLBI measurements.
Tight Bell Inequalities and Nonlocality in Weak Measurement
NASA Astrophysics Data System (ADS)
Waegell, Mordecai
A general class of Bell inequalities is derived based on strict adherence to probabilistic entanglement correlations observed in nature. This derivation gives significantly tighter bounds on local hidden variable theories for the well-known Clauser-Horne-Shimony-Holt (CHSH) inequality, and also leads to new proofs of the Greenberger-Horne-Zeilinger (GHZ) theorem. This method is applied to weak measurements and reveals nonlocal correlations between the weak value and the post-selection, which rules out various classical models of weak measurement. Implications of these results are discussed. Fetzer-Franklin Fund of the John E. Fetzer Memorial Trust.
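For orientation only, the sketch below evaluates the standard CHSH quantity for the singlet state, whose correlation is E(a, b) = -cos(a - b): local hidden variable theories obey |S| ≤ 2, while the optimal quantum angles give |S| = 2√2. This is the textbook illustration, not the tighter bounds derived in the work above.

```python
# Standard CHSH check for the singlet state (illustrative, not the paper's derivation).
import numpy as np

def E(a, b):                                # quantum correlation for the singlet state
    return -np.cos(a - b)

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S), 2 * np.sqrt(2))               # ~2.828 vs the local-hidden-variable bound 2
```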
Paul L. Patterson; Mark Finco
2009-01-01
This paper explores the information FIA data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977). Examples are...
NASA Technical Reports Server (NTRS)
Chlouber, Dean; O'Neill, Pat; Pollock, Jim
1990-01-01
A technique of predicting an upper bound on the rate at which single-event upsets due to ionizing radiation occur in semiconducting memory cells is described. The upper bound on the upset rate, which depends on the high-energy particle environment in earth orbit and accelerator cross-section data, is given by the product of an upper-bound linear energy-transfer spectrum and the mean cross section of the memory cell. Plots of the spectrum are given for low-inclination and polar orbits. An alternative expression for the exact upset rate is also presented. Both methods rely only on experimentally obtained cross-section data and are valid for sensitive bit regions having arbitrary shape.
Postselection technique for quantum channels with applications to quantum cryptography.
Christandl, Matthias; König, Robert; Renner, Renato
2009-01-16
We propose a general method for studying properties of quantum channels acting on an n-partite system, whose action is invariant under permutations of the subsystems. Our main result is that, in order to prove that a certain property holds for an arbitrary input, it is sufficient to consider the case where the input is a particular de Finetti-type state, i.e., a state which consists of n identical and independent copies of an (unknown) state on a single subsystem. Our technique can be applied to the analysis of information-theoretic problems. For example, in quantum cryptography, we get a simple proof for the fact that security of a discrete-variable quantum key distribution protocol against collective attacks implies security of the protocol against the most general attacks. The resulting security bounds are tighter than previously known bounds obtained with help of the exponential de Finetti theorem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allahverdi, Rouzbeh; Gao, Yu; Knockel, Bradley
In this paper, we study indirect detection signals from solar annihilation of dark matter (DM) particles into light right-handed (RH) neutrinos with a mass in a 1–5 GeV range. These RH neutrinos can have a sufficiently long lifetime to allow them to decay outside the Sun, and their delayed decays can result in a signal in gamma rays from the otherwise “dark” solar direction, and also a neutrino signal that is not suppressed by the interactions with solar medium. We find that the latest Fermi-LAT and IceCube results place limits on the gamma ray and neutrino signals, respectively. Combined photon and neutrino bounds can constrain the spin-independent DM-nucleon elastic scattering cross section better than direct detection experiments for DM masses from 200 GeV up to several TeV. Finally, the bounds on spin-dependent scattering are also much tighter than the strongest limits from direct detection experiments.
Model-Based Control of an Aircraft Engine using an Optimal Tuner Approach
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Chicatelli, Amy; Garg, Sanjay
2012-01-01
This paper covers the development of a model-based engine control (MBEC) methodology applied to an aircraft turbofan engine. Here, a linear model extracted from the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) at a cruise operating point serves as the engine and the on-board model. The on-board model is updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. MBEC provides the ability for a tighter control bound of thrust over the entire life cycle of the engine that is not achievable using traditional control feedback, which uses engine pressure ratio or fan speed. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC tighter thrust control. In addition, investigations of using the MBEC to provide a surge limit for the controller limit logic are presented that could provide benefits over a simple acceleration schedule that is currently used in engine control architectures.
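The sketch below is a generic discrete Kalman filter predict/update step of the kind an on-board model tuner builds on; the matrices (A, B, C, Q, R) and numbers are illustrative stand-ins, not the CMAPSS40k engine model or the OTKF routine itself.

```python
# Generic Kalman filter step: predict the model state, then correct it with a sensed output.
import numpy as np

def kalman_step(x, P, u, y, A, B, C, Q, R):
    # predict
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # update with measurement y
    S = C @ P_pred @ C.T + R                      # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# toy 2-state, 1-input, 1-output system (hypothetical values)
A = np.array([[0.9, 0.1], [0.0, 0.95]]); B = np.array([[0.1], [0.05]])
C = np.array([[1.0, 0.0]]); Q = 1e-4 * np.eye(2); R = np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, u=np.array([1.0]), y=np.array([0.2]), A=A, B=B, C=C, Q=Q, R=R)
print(x)
```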
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk; Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent; Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ₁, …, σ_r. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ₁, …, σ_r), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences C(σ_j, σ_k) over all pairs j ≠ k.
Upper bounds on sequential decoding performance parameters
NASA Technical Reports Server (NTRS)
Jelinek, F.
1974-01-01
This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.
Isocyanides inhibit human heme oxygenases at the verdoheme stage.
Evans, John P; Kandel, Sylvie; Ortiz de Montellano, Paul R
2009-09-22
Heme oxygenases (HO) catalyze the oxidative cleavage of heme to generate biliverdin, CO, and free iron. In humans, heme oxygenase-1 (hHO-1) is overexpressed in tumor tissues, where it helps to protect cancer cells from anticancer agents, while HOs in fungal pathogens, such as Candida albicans, function as the primary means of iron acquisition. Thus, HO can be considered a potential therapeutic target for certain diseases. In this study, we have examined the equilibrium binding of three isocyanides, isopropyl, n-butyl, and benzyl, to the two major human HO isoforms (hHO-1 and hHO-2), Candida albicans HO (CaHmx1), and human cytochrome P450 CYP3A4 using electronic absorption spectroscopy. Isocyanides coordinate to both ferric and ferrous HO-bound heme, with tighter binding by the more hydrophobic isocyanides and 200-300-fold tighter binding to the ferrous form. Benzyl isocyanide was the strongest ligand to ferrous heme in all the enzymes. Because the dissociation constants (KD) of the ligands for ferrous heme-hHO-1 were below the limit of accuracy for equilibrium titrations, stopped-flow kinetic experiments were used to measure the binding parameters of the isocyanides to ferrous hHO-1. Steady-state activity assays showed that benzyl isocyanide was the most potent uncompetitive inhibitor with respect to heme with a KI = 0.15 microM for hHO-1. Importantly, single turnover assays revealed that the reaction was completely stopped by coordination of the isocyanide to the verdoheme intermediate rather than to the ferric heme complex. Much tighter binding of the inhibitor to the verdoheme intermediate differentiates it from inhibition of, for example, CYP3A4 and offers a possible route to more selective inhibitor design.
Isocyanides Inhibit Human Heme Oxygenases at the Verdoheme Stage†
Evans, John P.; Kandel, Sylvie; Ortiz de Montellano, Paul R.
2010-01-01
Heme oxygenases (HO) catalyze the oxidative cleavage of heme to generate biliverdin, CO, and free iron. In humans, heme oxygenase-1 (hHO-1) is overexpressed in tumor tissues, where it helps to protect cancer cells from anticancer agents, while HOs in fungal pathogens, such as Candida albicans, function as the primary means of iron acquisition. Thus, HO can be considered a potential therapeutic target for certain diseases. In this study, we have examined the equilibrium binding of three isocyanides; isopropyl, n-butyl, and benzyl, to the two major human HO isoforms (hHO-1 and hHO-2), Candida albicans HO (CaHmx1), and human cytochrome P450 CYP3A4 using electronic absorption spectroscopy. Isocyanides coordinate to both ferric and ferrous HO-bound heme, with tighter binding by the more hydrophobic isocyanides, and 200-300-fold tighter binding to the ferrous form. Benzyl isocyanide was the strongest ligand to ferrous heme in all the enzymes. Because the dissociation constants (KD) of the ligands for ferrous heme-hHO-1 were below the limit of accuracy for equilibrium titrations, stopped-flow kinetic experiments were used to measure the binding parameters of the isocyanides to ferrous hHO-1. Steady-state activity assays showed that benzyl isocyanide was the most potent uncompetitive inhibitor with respect to heme with a KI = 0.15 μM for hHO-1. Importantly, single turnover assays revealed that the reaction was completely stopped by coordination of the isocyanide to the verdoheme intermediate rather than to the ferric heme complex. Much tighter binding of the inhibitor to the verdoheme intermediate differentiates it from inhibition of, for example, CYP3A4 and offers a possible route to more selective inhibitor design. PMID:19694439
How entangled can a multi-party system possibly be?
NASA Astrophysics Data System (ADS)
Qi, Liqun; Zhang, Guofeng; Ni, Guyan
2018-06-01
The geometric measure of entanglement of a pure quantum state is defined to be its distance to the space of pure product (separable) states. Given an n-partite system composed of subsystems of dimensions d1 , … ,dn, an upper bound for maximally allowable entanglement is derived in terms of geometric measure of entanglement. This upper bound is characterized exclusively by the dimensions d1 , … ,dn of composite subsystems. Numerous examples demonstrate that the upper bound appears to be reasonably tight.
Faydasicok, Ozlem; Arik, Sabri
2013-08-01
The main problem with the analysis of robust stability of neural networks is to find the upper bound norm for the intervalized interconnection matrices of neural networks. In the previous literature, three major upper bound norms for the intervalized interconnection matrices have been reported, and they have been successfully applied to derive new sufficient conditions for robust stability of delayed neural networks. One of the main contributions of this paper is the derivation of a new upper bound for the norm of the intervalized interconnection matrices of neural networks. Then, by exploiting this new upper bound norm of interval matrices and using the stability theory of Lyapunov functionals and the theory of homeomorphic mapping, we obtain new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. The results obtained in this paper are shown to be new and can be considered alternative results to previously published corresponding results. We also give some illustrative and comparative numerical examples to demonstrate the effectiveness and applicability of the proposed robust stability condition. Copyright © 2013 Elsevier Ltd. All rights reserved.
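As background, the sketch below numerically checks one classical bound of the kind referred to above: for any matrix A with A_lower ≤ A ≤ A_upper elementwise, writing A_c = (A_upper + A_lower)/2 and A_r = (A_upper - A_lower)/2, the spectral norm satisfies ||A||₂ ≤ ||A_c||₂ + ||A_r||₂. This is a well-known bound from the robust-stability literature and is shown only as an illustration; the new, tighter bound derived in the paper is not reproduced here.

```python
# Numerical check of a classical spectral-norm bound for interval matrices.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A_lower = rng.normal(size=(n, n)) - 0.5
A_upper = A_lower + rng.uniform(0.0, 1.0, size=(n, n))

A_c = (A_upper + A_lower) / 2.0        # center matrix
A_r = (A_upper - A_lower) / 2.0        # radius matrix (nonnegative)
bound = np.linalg.norm(A_c, 2) + np.linalg.norm(A_r, 2)

# sample matrices from the interval and confirm each respects the bound
for _ in range(1000):
    A = A_lower + rng.uniform(size=(n, n)) * (A_upper - A_lower)
    assert np.linalg.norm(A, 2) <= bound + 1e-9
print("spectral-norm bound:", bound)
```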
Estimation variance bounds of importance sampling simulations in digital communication systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
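A minimal importance-sampling sketch is given below: it estimates a small Gaussian tail probability (a stand-in for a bit error rate) and the empirical variance of the IS estimator by biasing the sampling density with a mean shift. The biasing parameter is an illustrative choice, not the optimized IS parameter discussed in the paper.

```python
# Importance sampling of a rare event with an estimator-variance estimate.
import numpy as np

rng = np.random.default_rng(1)
threshold = 4.0                       # estimate P(X > 4) for X ~ N(0, 1), a rare event
n = 100_000

# importance density g = N(threshold, 1); weight w(x) = f(x) / g(x) with f = N(0, 1)
x = rng.normal(loc=threshold, scale=1.0, size=n)
log_w = (-0.5 * x**2) - (-0.5 * (x - threshold) ** 2)   # log f(x) - log g(x), constants cancel
w = np.exp(log_w)
indicator = (x > threshold).astype(float)

estimate = np.mean(indicator * w)
variance = np.var(indicator * w, ddof=1) / n            # variance of the IS estimator
print(estimate, variance)                               # true value is about 3.17e-5
```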
On the likelihood of single-peaked preferences.
Lackner, Marie-Louise; Lackner, Martin
2017-01-01
This paper contains an extensive combinatorial analysis of the single-peaked domain restriction and investigates the likelihood that an election is single-peaked. We provide a very general upper bound result for domain restrictions that can be defined by certain forbidden configurations. This upper bound implies that many domain restrictions (including the single-peaked restriction) are very unlikely to appear in a random election chosen according to the Impartial Culture assumption. For single-peaked elections, this upper bound can be refined and complemented by a lower bound that is asymptotically tight. In addition, we provide exact results for elections with few voters or candidates. Moreover, we consider the Pólya urn model and the Mallows model and obtain lower bounds showing that single-peakedness is considerably more likely to appear for certain parameterizations.
NASA Astrophysics Data System (ADS)
Bardeen, J. M.
The last several years have seen a tremendous ferment of activity in astrophysical cosmology. Much of the theoretical impetus has come from particle physics theories of the early universe and candidates for dark matter, but what promise to be even more significant are improved direct observations of high z galaxies and intergalactic matter, deeper and more comprehensive redshift surveys, and the increasing power of computer simulations of the dynamical evolution of large scale structure. Upper limits on the anisotropy of the microwave background radiation are gradually getting tighter and constraining more severely theoretical scenarios for the evolution of the universe.
A tight upper bound for quadratic knapsack problems in grid-based wind farm layout optimization
NASA Astrophysics Data System (ADS)
Quan, Ning; Kim, Harrison M.
2018-03-01
The 0-1 quadratic knapsack problem (QKP) in wind farm layout optimization models possible turbine locations as nodes, and power loss due to wake effects between pairs of turbines as edges in a complete graph. The goal is to select up to a certain number of turbine locations such that the sum of selected node and edge coefficients is maximized. Finding the optimal solution to the QKP is difficult in general, but it is possible to obtain a tight upper bound on the QKP's optimal value which facilitates the use of heuristics to solve QKPs by giving a good estimate of the optimality gap of any feasible solution. This article applies an upper bound method that is especially well-suited to QKPs in wind farm layout optimization due to certain features of the formulation that reduce the computational complexity of calculating the upper bound. The usefulness of the upper bound was demonstrated by assessing the performance of the greedy algorithm for solving QKPs in wind farm layout optimization. The results show that the greedy algorithm produces good solutions within 4% of the optimal value for small to medium sized problems considered in this article.
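The sketch below illustrates the greedy heuristic assessed in the article: repeatedly add the location whose marginal gain (its node coefficient plus edge coefficients to already selected locations) is largest, up to K locations. The coefficient values are made up, and the upper-bound computation used to certify the optimality gap is not reproduced here.

```python
# Greedy heuristic for a 0-1 quadratic knapsack problem with a cardinality budget.
import numpy as np

def greedy_qkp(node_coeff, edge_coeff, K):
    n = len(node_coeff)
    selected = []
    for _ in range(K):
        best_gain, best_v = -np.inf, None
        for v in range(n):
            if v in selected:
                continue
            gain = node_coeff[v] + sum(edge_coeff[v][u] for u in selected)
            if gain > best_gain:
                best_gain, best_v = gain, v
        if best_v is None or best_gain <= 0:             # "up to" K: stop if no positive gain
            break
        selected.append(best_v)
    value = sum(node_coeff[v] for v in selected) + \
            sum(edge_coeff[u][v] for i, u in enumerate(selected) for v in selected[i + 1:])
    return selected, value

rng = np.random.default_rng(0)
n = 8
node = rng.uniform(5, 10, size=n)                        # e.g. expected power per location
edge = -rng.uniform(0, 1, size=(n, n)); edge = (edge + edge.T) / 2   # wake-loss penalties
print(greedy_qkp(node, edge, K=3))
```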
Edge connectivity and the spectral gap of combinatorial and quantum graphs
NASA Astrophysics Data System (ADS)
Berkolaiko, Gregory; Kennedy, James B.; Kurasov, Pavel; Mugnolo, Delio
2017-09-01
We derive a number of upper and lower bounds for the first nontrivial eigenvalue of Laplacians on combinatorial and quantum graphs in terms of the edge connectivity, i.e. the minimal number of edges which need to be removed to make the graph disconnected. On combinatorial graphs, one of the bounds corresponds to a well-known inequality of Fiedler, of which we give a new variational proof. On quantum graphs, the corresponding bound generalizes a recent result of Band and Lévy. All proofs are general enough to yield corresponding estimates for the p-Laplacian and allow us to identify the minimizers. Based on the Betti number of the graph, we also derive upper and lower bounds on all eigenvalues which are ‘asymptotically correct’, i.e. agree with the Weyl asymptotics for the eigenvalues of the quantum graph. In particular, the lower bounds improve the bounds of Friedlander on any given graph for all but finitely many eigenvalues, while the upper bounds improve recent results of Ariturk. Our estimates are also used to derive bounds on the eigenvalues of the normalized Laplacian matrix that improve known bounds of spectral graph theory.
On the role of entailment patterns and scalar implicatures in the processing of numerals
Panizza, Daniele; Chierchia, Gennaro; Clifton, Charles
2009-01-01
There has been much debate, in both the linguistics and the psycholinguistics literature, concerning numbers and the interpretation of number denoting determiners ('numerals'). Such debate concerns, in particular, the nature and distribution of lower-bounded ('at-least') interpretations vs. upper-bounded ('exact') construals. In the present paper we show that the interpretation and processing of numerals are affected by the entailment properties of the context in which they occur. Experiment 1 established off-line preferences using a questionnaire. Experiment 2 investigated the processing issue through an eye tracking experiment using a silent reading task. Our results show that the upper-bounded interpretation of numerals occurs more often in an upward entailing context than in a downward entailing context. Reading times of the numeral itself were longer when it was embedded in an upward entailing context than when it was not, indicating that processing resources were required when the context triggered an upper-bounded interpretation. However, reading of a following context that required an upper-bounded interpretation triggered more regressions towards the numeral when it had occurred in a downward entailing context than in an upward entailing one. Such findings show that speakers' interpretation and processing of numerals is systematically affected by the polarity of the sentence in which they occur, and support the hypothesis that the upper-bounded interpretation of numerals is due to a scalar implicature. PMID:20161494
Wang, Xueyi; Davidson, Nicholas J.
2011-01-01
Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracies of ensemble methods for binary classification that have been missed or misinterpreted in the previous literature. First we show the upper and lower bounds of the prediction accuracies (i.e. the best and worst possible prediction accuracies) of ensemble methods. Next we show that an ensemble method can achieve > 0.5 prediction accuracy even when the individual classifiers have < 0.5 prediction accuracies. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify the results and show that it is hard to reach the upper- and lower-bound accuracies with random individual classifiers, so better algorithms need to be developed. PMID:21853162
Upper bound of abutment scour in laboratory and field data
Benedict, Stephen
2016-01-01
The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted a field investigation of abutment scour in South Carolina and used those data to develop envelope curves that define the upper bound of abutment scour. To expand on this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with abutment scour data from other sources and evaluate upper bound patterns with this larger data set. To facilitate this analysis, 446 laboratory and 331 field measurements of abutment scour were compiled into a digital database. This extensive database was used to evaluate the South Carolina abutment scour envelope curves and to develop additional envelope curves that reflected the upper bound of abutment scour depth for the laboratory and field data. The envelope curves provide simple but useful supplementary tools for assessing the potential maximum abutment scour depth in the field setting.
Physical Uncertainty Bounds (PUB)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaughan, Diane Elizabeth; Preston, Dean L.
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Perturbative unitarity constraints on gauge portals
NASA Astrophysics Data System (ADS)
El Hedri, Sonia; Shepherd, William; Walker, Devin G. E.
2017-12-01
Dark matter that was once in thermal equilibrium with the Standard Model is generally prohibited from obtaining all of its mass from the electroweak phase transition. This implies a new scale of physics and mediator particles to facilitate dark matter annihilation. In this work, we focus on dark matter that annihilates through a generic gauge boson portal. We show how partial wave unitarity places upper bounds on the dark gauge boson, dark Higgs and dark matter masses. Outside of well-defined fine-tuned regions, we find an upper bound of 9 TeV for the dark matter mass when the dark Higgs and dark gauge bosons both facilitate the dark matter annihilations. In this scenario, the upper bound on the dark Higgs and dark gauge boson masses are 10 TeV and 16 TeV, respectively. When only the dark gauge boson facilitates dark matter annihilations, we find an upper bound of 3 TeV and 6 TeV for the dark matter and dark gauge boson, respectively. Overall, using the gauge portal as a template, we describe a method to not only place upper bounds on the dark matter mass but also on the new particles with Standard Model quantum numbers. We briefly discuss the reach of future accelerator, direct and indirect detection experiments for this class of models.
Boundary causality versus hyperbolicity for spherical black holes in Gauss-Bonnet gravity
NASA Astrophysics Data System (ADS)
Andrade, Tomás; Cáceres, Elena; Keeler, Cynthia
2017-07-01
We explore the constraints boundary causality places on the allowable Gauss-Bonnet gravitational couplings in asymptotically AdS spaces, specifically considering spherical black hole solutions. We additionally consider the hyperbolicity properties of these solutions, positing that hyperbolicity-violating solutions are sick solutions whose causality properties provide no information about the theory they reside in. For both signs of the Gauss-Bonnet coupling, spherical black holes violate boundary causality at smaller absolute values of the coupling than planar black holes do. For negative coupling, as we tune the Gauss-Bonnet coupling away from zero, both spherical and planar black holes violate hyperbolicity before they violate boundary causality. For positive coupling, the only hyperbolicity-respecting spherical black holes which violate boundary causality do not do so appreciably far from the planar bound. Consequently, eliminating hyperbolicity-violating solutions means the bound on Gauss-Bonnet couplings from the boundary causality of spherical black holes is no tighter than that from planar black holes.
NASA Astrophysics Data System (ADS)
Alpha Collaboration; Amole, C.; Ashkezari, M. D.; Baquero-Ruiz, M.; Bertsche, W.; Butler, E.; Capra, A.; Cesar, C. L.; Charlton, M.; Eriksson, S.; Fajans, J.; Friesen, T.; Fujiwara, M. C.; Gill, D. R.; Gutierrez, A.; Hangst, J. S.; Hardy, W. N.; Hayden, M. E.; Isaac, C. A.; Jonsell, S.; Kurchaninov, L.; Little, A.; Madsen, N.; McKenna, J. T. K.; Menary, S.; Napoli, S. C.; Nolan, P.; Olin, A.; Pusa, P.; Rasmussen, C. Ø.; Robicheaux, F.; Sarid, E.; Silveira, D. M.; So, C.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.; Zhmoginov, A. I.; Charman, A. E.
2013-04-01
Physicists have long wondered whether the gravitational interactions between matter and antimatter might be different from those between matter and itself. Although there are many indirect indications that no such differences exist and that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. Here we describe a novel direct test methodology; we search for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap. In the absence of systematic errors, we can reject ratios of the gravitational to inertial mass of antihydrogen >75 at a statistical significance level of 5%; worst-case systematic errors increase the minimum rejection ratio to 110. A similar search places somewhat tighter bounds on a negative gravitational mass, that is, on antigravity. This methodology, coupled with ongoing experimental improvements, should allow us to bound the ratio within the more interesting near equivalence regime.
Amole, C.; Ashkezari, M. D.; Baquero-Ruiz, M.; Bertsche, W.; Butler, E.; Capra, A.; Cesar, C. L.; Charlton, M.; Eriksson, S.; Fajans, J.; Friesen, T.; Fujiwara, M. C.; Gill, D. R.; Gutierrez, A.; Hangst, J. S.; Hardy, W. N.; Hayden, M. E.; Isaac, C. A.; Jonsell, S.; Kurchaninov, L.; Little, A.; Madsen, N.; McKenna, J. T. K.; Menary, S.; Napoli, S. C.; Nolan, P.; Olin, A.; Pusa, P.; Rasmussen, C. Ø; Robicheaux, F.; Sarid, E.; Silveira, D. M.; So, C.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.; Zhmoginov, A. I.; Charman, A. E.
2013-01-01
Physicists have long wondered whether the gravitational interactions between matter and antimatter might be different from those between matter and itself. Although there are many indirect indications that no such differences exist and that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. Here we describe a novel direct test methodology; we search for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap. In the absence of systematic errors, we can reject ratios of the gravitational to inertial mass of antihydrogen >75 at a statistical significance level of 5%; worst-case systematic errors increase the minimum rejection ratio to 110. A similar search places somewhat tighter bounds on a negative gravitational mass, that is, on antigravity. This methodology, coupled with ongoing experimental improvements, should allow us to bound the ratio within the more interesting near equivalence regime. PMID:23653197
Indirect signals from solar dark matter annihilation to long-lived right-handed neutrinos
Allahverdi, Rouzbeh; Gao, Yu; Knockel, Bradley; ...
2017-04-04
In this paper, we study indirect detection signals from solar annihilation of dark matter (DM) particles into light right-handed (RH) neutrinos with a mass in a 1–5 GeV range. These RH neutrinos can have a sufficiently long lifetime to allow them to decay outside the Sun, and their delayed decays can result in a signal in gamma rays from the otherwise “dark” solar direction, and also a neutrino signal that is not suppressed by the interactions with solar medium. We find that the latest Fermi-LAT and IceCube results place limits on the gamma ray and neutrino signals, respectively. Combined photon and neutrino bounds can constrain the spin-independent DM-nucleon elastic scattering cross section better than direct detection experiments for DM masses from 200 GeV up to several TeV. Finally, the bounds on spin-dependent scattering are also much tighter than the strongest limits from direct detection experiments.
Charman, A E; Amole, C; Ashkezari, M D; Baquero-Ruiz, M; Bertsche, W; Butler, E; Capra, A; Cesar, C L; Charlton, M; Eriksson, S; Fajans, J; Friesen, T; Fujiwara, M C; Gill, D R; Gutierrez, A; Hangst, J S; Hardy, W N; Hayden, M E; Isaac, C A; Jonsell, S; Kurchaninov, L; Little, A; Madsen, N; McKenna, J T K; Menary, S; Napoli, S C; Nolan, P; Olin, A; Pusa, P; Rasmussen, C Ø; Robicheaux, F; Sarid, E; Silveira, D M; So, C; Thompson, R I; van der Werf, D P; Wurtele, J S; Zhmoginov, A I
2013-01-01
Physicists have long wondered whether the gravitational interactions between matter and antimatter might be different from those between matter and itself. Although there are many indirect indications that no such differences exist and that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. Here we describe a novel direct test methodology; we search for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap. In the absence of systematic errors, we can reject ratios of the gravitational to inertial mass of antihydrogen >75 at a statistical significance level of 5%; worst-case systematic errors increase the minimum rejection ratio to 110. A similar search places somewhat tighter bounds on a negative gravitational mass, that is, on antigravity. This methodology, coupled with ongoing experimental improvements, should allow us to bound the ratio within the more interesting near equivalence regime.
Eigenvalues of the Wentzell-Laplace operator and of the fourth order Steklov problems
NASA Astrophysics Data System (ADS)
Xia, Changyu; Wang, Qiaoling
2018-05-01
We prove a sharp upper bound and a lower bound for the first nonzero eigenvalue of the Wentzell-Laplace operator on compact manifolds with boundary, and an isoperimetric inequality for the same eigenvalue in the case where the manifold is a bounded domain in a Euclidean space. We study some fourth order Steklov problems and obtain an isoperimetric upper bound for their first eigenvalue. We also find all the eigenvalues and eigenfunctions for two kinds of fourth order Steklov problems on a Euclidean ball.
Sun, Wei; Chou, Chih-Ping; Stacy, Alan W; Ma, Huiyan; Unger, Jennifer; Gallaher, Peggy
2007-02-01
Cronbach's α is widely used in social science research to estimate the internal consistency reliability of a measurement scale. However, when items are not strictly parallel, the Cronbach's α coefficient provides a lower-bound estimate of true reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach's α for a scale with dichotomous items can be improved by using the upper bound of coefficient φ. SAS and SPSS macros have been developed in this article to obtain standardized Cronbach's α via this method. The simulation analysis showed that Cronbach's α from the upper-bound φ might be appropriate for estimating the real reliability when the standardized Cronbach's α is problematic.
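For reference, the sketch below computes the standard (raw) Cronbach's α for dichotomous items; the paper's improvement, which rescales the standardized α using an upper bound on coefficient φ, is not reproduced here, and the simulated data are purely illustrative.

```python
# Raw Cronbach's alpha for a respondents-by-items matrix of 0/1 scores.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=200)                                            # latent trait, 200 respondents
items = (ability[:, None] + rng.normal(size=(200, 6)) > 0).astype(int)    # 6 dichotomous items
print(cronbach_alpha(items))
```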
The Problem of Limited Inter-rater Agreement in Modelling Music Similarity
Flexer, Arthur; Grill, Thomas
2016-01-01
One of the central goals of Music Information Retrieval (MIR) is the quantification of similarity between or within pieces of music. These quantitative relations should mirror the human perception of music similarity, which is however highly subjective with low inter-rater agreement. Unfortunately this principal problem has been given little attention in MIR so far. Since it is not meaningful to have computational models that go beyond the level of human agreement, these levels of inter-rater agreement present a natural upper bound for any algorithmic approach. We will illustrate this fundamental problem in the evaluation of MIR systems using results from two typical application scenarios: (i) modelling of music similarity between pieces of music; (ii) music structure analysis within pieces of music. For both applications, we derive upper bounds of performance which are due to the limited inter-rater agreement. We compare these upper bounds to the performance of state-of-the-art MIR systems and show how the upper bounds prevent further progress in developing better MIR systems. PMID:28190932
Monogamy of αth power entanglement measurement in qubit systems
NASA Astrophysics Data System (ADS)
Luo, Yu; Li, Yongming
2015-11-01
In this paper, we study the αth power monogamy properties related to the entanglement measure in bipartite states. The monogamy relations related to the αth power of negativity and the Convex-Roof Extended Negativity are obtained for N-qubit states. We also give a tighter bound of hierarchical monogamy inequality for the entanglement of formation. We find that the GHZ state and W state can be used to distinguish both the αth power of the concurrence for 0 < α < 2 and the αth power of the entanglement of formation for 0 < α ≤ 1/2. Furthermore, we compare concurrence with negativity in terms of monogamy property and investigate the difference between them.
Evidence for a bound on the lifetime of de Sitter space
NASA Astrophysics Data System (ADS)
Freivogel, Ben; Lippert, Matthew
2008-12-01
Recent work has suggested a surprising new upper bound on the lifetime of de Sitter vacua in string theory. The bound is parametrically longer than the Hubble time but parametrically shorter than the recurrence time. We investigate whether the bound is satisfied in a particular class of de Sitter solutions, the KKLT vacua. Despite the freedom to make the supersymmetry breaking scale exponentially small, which naively would lead to extremely stable vacua, we find that the lifetime is always less than about exp(10^22) Hubble times, in agreement with the proposed bound. This result, however, is contingent on several estimates and assumptions; in particular, we rely on a conjectural upper bound on the Euler number of the Calabi-Yau fourfolds used in KKLT compactifications.
Stochastic information transfer from cochlear implant electrodes to auditory nerve fibers
NASA Astrophysics Data System (ADS)
Gao, Xiao; Grayden, David B.; McDonnell, Mark D.
2014-08-01
Cochlear implants, also called bionic ears, are implanted neural prostheses that can restore lost human hearing function by direct electrical stimulation of auditory nerve fibers. Previously, an information-theoretic framework for numerically estimating the optimal number of electrodes in cochlear implants has been devised. This approach relies on a model of stochastic action potential generation and a discrete memoryless channel model of the interface between the array of electrodes and the auditory nerve fibers. Using these models, the stochastic information transfer from cochlear implant electrodes to auditory nerve fibers is estimated from the mutual information between channel inputs (the locations of electrodes) and channel outputs (the set of electrode-activated nerve fibers). Here we describe a revised model of the channel output in the framework that avoids the side effects caused by an "ambiguity state" in the original model and also makes fewer assumptions about perceptual processing in the brain. A detailed comparison of how different assumptions on fibers and current spread modes impact on the information transfer in the original model and in the revised model is presented. We also mathematically derive an upper bound on the mutual information in the revised model, which becomes tighter as the number of electrodes increases. We found that the revised model leads to a significantly larger maximum mutual information and corresponding number of electrodes compared with the original model and conclude that the assumptions made in this part of the modeling framework are crucial to the model's overall utility.
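The central quantity in the framework above is the mutual information of a discrete memoryless channel between electrode inputs and activated-fiber outputs. The sketch below computes that quantity for a generic channel matrix; the two-input, three-output matrix is a toy example, not the electrode-to-nerve-fiber model of the paper.

```python
# Mutual information I(X;Y) in bits for a discrete memoryless channel.
import numpy as np

def mutual_information(p_x, p_y_given_x):
    p_xy = p_x[:, None] * p_y_given_x                 # joint distribution p(x, y)
    p_y = p_xy.sum(axis=0)                            # output marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(p_xy > 0, p_xy / (p_x[:, None] * p_y[None, :]), 1.0)
    return float(np.sum(p_xy * np.log2(ratio)))

p_x = np.array([0.5, 0.5])                            # two "electrode" inputs, equally used
channel = np.array([[0.8, 0.1, 0.1],                  # p(output | electrode), toy values
                    [0.1, 0.1, 0.8]])
print(mutual_information(p_x, channel))
```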
Upper bound on the efficiency of certain nonimaging concentrators in the physical-optics model
NASA Astrophysics Data System (ADS)
Welford, W. T.; Winston, R.
1982-09-01
Upper bounds on the performance of nonimaging concentrators are obtained within the framework of scalar-wave theory by using a simple approach to avoid complex calculations on multiple phase fronts. The approach consists in treating a theoretically perfect image-forming device and postulating that no non-image-forming concentrator can have a better performance than such an ideal image-forming system. The performance of such a system can be calculated according to wave theory, and this will provide, in accordance with the postulate, upper bounds on the performance of nonimaging systems. The method is demonstrated for a two-dimensional compound parabolic concentrator.
Lower and upper bounds for entanglement of Rényi-α entropy.
Song, Wei; Chen, Lin; Cao, Zhuo-Liang
2016-12-23
Entanglement Rényi-α entropy is an entanglement measure. It reduces to the standard entanglement of formation when α tends to 1. We derive analytical lower and upper bounds for the entanglement Rényi-α entropy of arbitrary dimensional bipartite quantum systems. We also demonstrate the application of our bound for some concrete examples. Moreover, we establish the relation between entanglement Rényi-α entropy and some other entanglement measures.
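For a pure bipartite state the entanglement Rényi-α entropy equals the Rényi entropy of either reduced density matrix, S_α(ρ) = log₂(Tr ρ^α)/(1 - α), which tends to the von Neumann entropy (and hence the entanglement of formation) as α → 1. The sketch below evaluates this for a pure two-qubit state; the paper's bounds for mixed states are not computed here.

```python
# Renyi-alpha entanglement entropy of a pure bipartite state via the reduced density matrix.
import numpy as np

def renyi_entanglement(psi, dims, alpha):
    psi = psi.reshape(dims)                               # bipartite cut A|B
    rho_a = psi @ psi.conj().T                            # reduced state of subsystem A
    eigs = np.clip(np.linalg.eigvalsh(rho_a), 0, 1)
    eigs = eigs[eigs > 1e-12]
    if abs(alpha - 1.0) < 1e-9:                           # von Neumann limit
        return float(-np.sum(eigs * np.log2(eigs)))
    return float(np.log2(np.sum(eigs ** alpha)) / (1.0 - alpha))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)                # maximally entangled two-qubit state
for a in (0.5, 1.0, 2.0):
    print(a, renyi_entanglement(bell, (2, 2), a))          # equals 1 bit for every alpha
```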
Global a priori estimates for the inhomogeneous Landau equation with moderately soft potentials
NASA Astrophysics Data System (ADS)
Cameron, Stephen; Silvestre, Luis; Snelson, Stanley
2018-05-01
We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.
Huang, C-C; Yang, Y-H; Chen, C-H; Chen, T-W; Lee, C-L; Wu, C-L; Chuang, S-H; Huang, M-H
2008-03-01
The aim of this study was to compare the flexibility of the upper extremities in collegiate students involved in Aikido (a kind of soft martial art attracting youth) training with those involved in other sports. Fifty freshmen with a similar frequency of exercise were divided into the Aikido group (n = 18), the upper-body sports group (n = 17), and the lower-body sports group (n = 15) according to the sports that they participated in. Eight classes of range of motion in the upper extremities were measured for all subjects by the same clinicians. The Aikido group had significantly better flexibility than the upper-body sports group except for range of motion in shoulder flexion (p = 0.22), shoulder lateral rotation (p > 0.99), and wrist extension (p > 0.99). The Aikido group also had significantly better flexibility than the lower-body sports group (p < 0.01) and the sedentary group (p < 0.01) in all classes of range of motion. The upper-body sports group was significantly more flexible in five classes of range of motion and significantly tighter in range of motion of wrist flexion (p < 0.01) compared to the lower-body sports group. It was concluded that the youths participating in soft martial arts had good upper-extremity flexibility that might not result from regular exercise alone.
Veeraraghavan, Srikant; Mazziotti, David A
2014-03-28
We present a density matrix approach for computing global solutions of restricted open-shell Hartree-Fock theory, based on semidefinite programming (SDP), that gives upper and lower bounds on the Hartree-Fock energy of quantum systems. While wave function approaches to Hartree-Fock theory yield an upper bound to the Hartree-Fock energy, we derive a semidefinite relaxation of Hartree-Fock theory that yields a rigorous lower bound on the Hartree-Fock energy. We also develop an upper-bound algorithm in which Hartree-Fock theory is cast as a SDP with a nonconvex constraint on the rank of the matrix variable. Equality of the upper- and lower-bound energies guarantees that the computed solution is the globally optimal solution of Hartree-Fock theory. The work extends a previously presented method for closed-shell systems [S. Veeraraghavan and D. A. Mazziotti, Phys. Rev. A 89, 010502-R (2014)]. For strongly correlated systems the SDP approach provides an alternative to the locally optimized Hartree-Fock energies and densities with a certificate of global optimality. Applications are made to the potential energy curves of C2, CN, Cr2, and NO2.
Bounding the moment deficit rate on crustal faults using geodetic data: Methods
Maurer, Jeremy; Segall, Paul; Bradley, Andrew Michael
2017-08-19
Here, the geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.
Bounding the moment deficit rate on crustal faults using geodetic data: Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maurer, Jeremy; Segall, Paul; Bradley, Andrew Michael
Here, the geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.
Perturbative unitarity constraints on gauge portals
El Hedri, Sonia; Shepherd, William; Walker, Devin G. E.
2017-10-03
Dark matter that was once in thermal equilibrium with the Standard Model is generally prohibited from obtaining all of its mass from the electroweak phase transition. This implies a new scale of physics and mediator particles to facilitate dark matter annihilation. In this work, we focus on dark matter that annihilates through a generic gauge boson portal. We show how partial wave unitarity places upper bounds on the dark gauge boson, dark Higgs and dark matter masses. Outside of well-defined fine-tuned regions, we find an upper bound of 9 TeV for the dark matter mass when the dark Higgs and dark gauge bosons both facilitate the dark matter annihilations. In this scenario, the upper bound on the dark Higgs and dark gauge boson masses are 10 TeV and 16 TeV, respectively. When only the dark gauge boson facilitates dark matter annihilations, we find an upper bound of 3 TeV and 6 TeV for the dark matter and dark gauge boson, respectively. Overall, using the gauge portal as a template, we describe a method to not only place upper bounds on the dark matter mass but also on the new particles with Standard Model quantum numbers. Here, we briefly discuss the reach of future accelerator, direct and indirect detection experiments for this class of models.
Perturbative unitarity constraints on gauge portals
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Hedri, Sonia; Shepherd, William; Walker, Devin G. E.
Dark matter that was once in thermal equilibrium with the Standard Model is generally prohibited from obtaining all of its mass from the electroweak phase transition. This implies a new scale of physics and mediator particles to facilitate dark matter annihilation. In this work, we focus on dark matter that annihilates through a generic gauge boson portal. We show how partial wave unitarity places upper bounds on the dark gauge boson, dark Higgs and dark matter masses. Outside of well-defined fine-tuned regions, we find an upper bound of 9 TeV for the dark matter mass when the dark Higgs and dark gauge bosons both facilitate the dark matter annihilations. In this scenario, the upper bound on the dark Higgs and dark gauge boson masses are 10 TeV and 16 TeV, respectively. When only the dark gauge boson facilitates dark matter annihilations, we find an upper bound of 3 TeV and 6 TeV for the dark matter and dark gauge boson, respectively. Overall, using the gauge portal as a template, we describe a method to not only place upper bounds on the dark matter mass but also on the new particles with Standard Model quantum numbers. Here, we briefly discuss the reach of future accelerator, direct and indirect detection experiments for this class of models.
Noisy metrology: a saturable lower bound on quantum Fisher information
NASA Astrophysics Data System (ADS)
Yousefjani, R.; Salimi, S.; Khorashad, A. S.
2017-06-01
In order to provide a guaranteed precision and a more accurate judgement about the true value of the Cramér-Rao bound and its scaling behavior, an upper bound (equivalently a lower bound on the quantum Fisher information) for precision of estimation is introduced. Unlike the bounds previously introduced in the literature, the upper bound is saturable and yields a practical instruction to estimate the parameter through preparing the optimal initial state and optimal measurement. The bound is based on the underlying dynamics, and its calculation is straightforward and requires only the matrix representation of the quantum maps responsible for encoding the parameter. This allows us to apply the bound to open quantum systems whose dynamics are described by either semigroup or non-semigroup maps. Reliability and efficiency of the method to predict the ultimate precision limit are demonstrated by three main examples.
Standard Deviation for Small Samples
ERIC Educational Resources Information Center
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
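A quick numerical companion (not taken from the article): for n observations spanning a range R = max − min, the sample standard deviation can never exceed (R/2)·sqrt(n/(n−1)), so an immediate upper bound is available from the extremes alone. The sketch below brute-force checks this standard range-based bound over small integer samples; the data and the bound shown are textbook facts and invented examples, not material from the paper.

```python
import itertools
import math
import statistics

# For n observations lying in a range R = max - min, the sample standard deviation s
# (divisor n - 1) satisfies s <= (R / 2) * sqrt(n / (n - 1)); this is a standard fact,
# not necessarily the specific representation derived in the article.
def range_bound(values):
    n, r = len(values), max(values) - min(values)
    return (r / 2) * math.sqrt(n / (n - 1))

worst_ratio = 0.0
for sample in itertools.product(range(0, 6), repeat=3):   # all small integer samples of size 3
    if len(set(sample)) == 1:
        continue                                           # zero variance, nothing to check
    worst_ratio = max(worst_ratio, statistics.stdev(sample) / range_bound(sample))

print(f"max observed s / bound over all n=3 integer samples: {worst_ratio:.4f}")  # stays <= 1
```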
Bounds for Asian basket options
NASA Astrophysics Data System (ADS)
Deelstra, Griselda; Diallo, Ibrahima; Vanmaele, Michèle
2008-09-01
In this paper we propose pricing bounds for European-style discrete arithmetic Asian basket options in a Black and Scholes framework. We start from methods used for basket options and Asian options. First, we use the general approach for deriving upper and lower bounds for stop-loss premia of sums of non-independent random variables as in Kaas et al. [Upper and lower bounds for sums of random variables, Insurance Math. Econom. 27 (2000) 151-168] or Dhaene et al. [The concept of comonotonicity in actuarial science and finance: theory, Insurance Math. Econom. 31(1) (2002) 3-33]. We generalize the methods in Deelstra et al. [Pricing of arithmetic basket options by conditioning, Insurance Math. Econom. 34 (2004) 55-57] and Vanmaele et al. [Bounds for the price of discrete sampled arithmetic Asian options, J. Comput. Appl. Math. 185(1) (2006) 51-90]. Afterwards we show how to derive an analytical closed-form expression for a lower bound in the non-comonotonic case. Finally, we derive upper bounds for Asian basket options by applying techniques as in Thompson [Fast narrow bounds on the value of Asian options, Working Paper, University of Cambridge, 1999] and Lord [Partially exact and bounded approximations for arithmetic Asian options, J. Comput. Finance 10 (2) (2006) 1-52]. Numerical results are included and on the basis of our numerical tests, we explain which method we recommend depending on moneyness and time-to-maturity.
Finite-error metrological bounds on multiparameter Hamiltonian estimation
NASA Astrophysics Data System (ADS)
Kura, Naoto; Ueda, Masahito
2018-01-01
Estimation of multiple parameters in an unknown Hamiltonian is investigated. We present upper and lower bounds on the time required to complete the estimation within a prescribed error tolerance δ . The lower bound is given on the basis of the Cramér-Rao inequality, where the quantum Fisher information is bounded by the squared evolution time. The upper bound is obtained by an explicit construction of estimation procedures. By comparing the cases with different numbers of Hamiltonian channels, we also find that the few-channel procedure with adaptive feedback and the many-channel procedure with entanglement are equivalent in the sense that they require the same amount of time resource up to a constant factor.
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
Upper bounds on high-speed satellite collision probability, Pc, have been investigated. Previous methods assume an individual position error covariance matrix is available for each object, the two matrices being combined into a single relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum Pc. If error covariance information for only one of the two objects is available, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but potentially useful Pc upper bound.
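A rough sketch of the idea in a simplified setting (this is not the report's actual formulation): work in a 2-D encounter plane, treat the unknown covariance of the second object as a scaled identity matrix, and scan the scale, keeping the largest resulting collision probability as an approximate upper bound. The miss vector, covariance values, hard-body radius, and the Monte Carlo estimator are all illustrative assumptions.

```python
import numpy as np

# Hypothetical 2-D encounter-plane sketch: object 1 has a known covariance C1; object 2's
# covariance is unknown and modeled as k^2 * I. Scanning k and keeping the largest
# collision probability gives an approximate, potentially useful upper bound on Pc.
rng = np.random.default_rng(0)
miss = np.array([120.0, 40.0])            # relative miss vector in the encounter plane [m]
C1 = np.array([[400.0, 50.0],
               [50.0, 900.0]])            # known covariance of object 1 [m^2]
hard_body_radius = 10.0                   # combined hard-body radius [m]

def collision_probability(C_combined, n=100_000):
    # Monte Carlo estimate: fraction of relative-position samples inside the hard-body circle.
    samples = rng.multivariate_normal(miss, C_combined, size=n)
    return np.mean(np.hypot(samples[:, 0], samples[:, 1]) <= hard_body_radius)

scales = np.linspace(1.0, 300.0, 100)     # candidate 1-sigma sizes for the missing covariance [m]
pc_max = max(collision_probability(C1 + (k ** 2) * np.eye(2)) for k in scales)
print(f"approximate Pc upper bound over the scanned scales: {pc_max:.3e}")
```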
NASA Astrophysics Data System (ADS)
Santos, Jander P.; Sá Barreto, F. C.
2016-01-01
Spin correlation identities for the Blume-Emery-Griffiths model on the Kagomé lattice are derived and, combined with rigorous correlation inequalities, lead to upper bounds on the critical temperature. From the spin correlation identities, the mean-field approximation and the effective-field approximation results for the magnetization, the critical frontiers, and the tricritical points are obtained. The rigorous upper bounds on the critical temperature improve on the results of those effective-field-type theories.
Bounds for the Z-spectral radius of nonnegative tensors.
He, Jun; Liu, Yan-Min; Ke, Hua; Tian, Jun-Kang; Li, Xiang
2016-01-01
In this paper, we have proposed some new upper bounds for the largest Z-eigenvalue of an irreducible weakly symmetric and nonnegative tensor, which improve the known upper bounds obtained in Chang et al. (Linear Algebra Appl 438:4166-4182, 2013), Song and Qi (SIAM J Matrix Anal Appl 34:1581-1595, 2013), He and Huang (Appl Math Lett 38:110-114, 2014), Li et al. (J Comput Anal Appl 483:182-199, 2015), He (J Comput Anal Appl 20:1290-1301, 2016).
The upper bound of Pier Scour defined by selected laboratory and field data
Benedict, Stephen; Caldwell, Andral W.
2015-01-01
The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted several field investigations of pier scour in South Carolina (Benedict and Caldwell, 2006; Benedict and Caldwell, 2009) and used that data to develop envelope curves defining the upper bound of pier scour. To expand upon this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with pier-scour data from other sources and evaluate the upper bound of pier scour with this larger data set. To facilitate this analysis, a literature review was made to identify potential sources of published pier-scour data, and selected data were compiled into a digital spreadsheet consisting of approximately 570 laboratory and 1,880 field measurements. These data encompass a wide range of laboratory and field conditions and represent field data from 24 states within the United States and six other countries. This extensive database was used to define the upper bound of pier-scour depth with respect to pier width encompassing the laboratory and field data. Pier width is a primary variable that influences pier-scour depth (Laursen and Toch, 1956; Melville and Coleman, 2000; Mueller and Wagner, 2005, Ettema et al. 2011, Arneson et al. 2012) and therefore, was used as the primary explanatory variable in developing the upper-bound envelope curve. The envelope curve provides a simple but useful tool for assessing the potential maximum pier-scour depth for pier widths of about 30 feet or less.
Bounds on the information rate of quantum-secret-sharing schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarvepalli, Pradeep
An important metric of the performance of a quantum-secret-sharing scheme is its information rate. Beyond the fact that the information rate is upper-bounded by one, very little is known in terms of bounds on the information rate of quantum-secret-sharing schemes. Furthermore, not every scheme can be realized with rate one. In this paper we derive upper bounds for the information rates of quantum-secret-sharing schemes. We show that there exist quantum access structures on n players for which the information rate cannot be better than O((log₂ n)/n). These results are the quantum analogues of the bounds for classical-secret-sharing schemes proved by Csirmaz.
Van Holle, Lionel; Bauchau, Vincent
2014-01-01
Purpose: For disproportionality measures based on the Relative Reporting Ratio (RRR), such as the Information Component (IC) and the Empirical Bayesian Geometrical Mean (EBGM), each product and event is assumed to represent a negligible fraction of the spontaneous report database (SRD). Here, we provide the tools for allowing signal detection experts to assess the consequence of the violation of this assumption on their specific SRD. Methods: For each product–event pair (P–E), a worst-case scenario associated all the reported events-of-interest with the product of interest. The values of the RRR under this scenario were measured for different sets of stratification factors using the GlaxoSmithKline vaccines SRD. These values represent an upper bound that the RRR cannot exceed, whatever the true strength of association. Results: Depending on the choice of stratification factors, the RRR could not exceed an upper bound of 2 for up to 2.4% of the P–Es. For Engerix™, which accounts for 23.4% of all reports in the SRD, the RRR could not exceed an upper bound of 2 for up to 13.8% of pairs. For the P–E Rotarix™-Intussusception, the choice of stratification factors impacted the upper bound to the RRR: from 52.5 for an unstratified RRR to 2.0 for a fully stratified RRR. Conclusions: The quantification of the upper bound can indicate whether measures such as EBGM, IC, or RRR can be used for SRDs in which products or events represent a non-negligible fraction of the entire SRD. In addition, at the level of the product or P–E, it can also highlight the detrimental impact of overstratification.
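A sketch of the worst-case ceiling described above, under one common form of the disproportionality statistic, RRR = observed / expected = (n_pe · N) / (n_p · n_e); this exact formula is my assumption, and the counts are invented rather than taken from the GSK database.

```python
# In the worst case every report of the event is attributed to the product, so the
# product-event count n_pe cannot exceed min(n_p, n_e); plugging that in gives a ceiling
# the true RRR can never exceed, whatever the real strength of association.
def rrr_upper_bound(n_product: int, n_event: int, n_total: int) -> float:
    worst_case_pair_count = min(n_product, n_event)
    return worst_case_pair_count * n_total / (n_product * n_event)

# A product holding ~23% of a hypothetical 1,000,000-report database, for two events:
print(rrr_upper_bound(n_product=234_000, n_event=150_000, n_total=1_000_000))  # ~4.3
print(rrr_upper_bound(n_product=234_000, n_event=600_000, n_total=1_000_000))  # ~1.7 (below 2)
```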
Observational constraints on the primordial curvature power spectrum
NASA Astrophysics Data System (ADS)
Emami, Razieh; Smoot, George F.
2018-01-01
CMB temperature fluctuation observations provide a precise measurement of the primordial power spectrum on large scales, corresponding to wavenumbers 10⁻³ Mpc⁻¹ ≲ k ≲ 0.1 Mpc⁻¹ [1-7, 11]. Luminous red galaxies and galaxy clusters probe the matter power spectrum on overlapping scales (0.02 Mpc⁻¹ ≲ k ≲ 0.7 Mpc⁻¹ [10, 12-20]), while the Lyman-alpha forest reaches slightly smaller scales (0.3 Mpc⁻¹ ≲ k ≲ 3 Mpc⁻¹ [22]). These observations indicate that the primordial power spectrum is nearly scale-invariant with an amplitude close to 2 × 10⁻⁹ [5, 23-28]. These observations strongly support Inflation and motivate us to obtain observations and constraints reaching to smaller scales on the primordial curvature power spectrum and, by implication, on Inflation. We are able to obtain limits to much higher values of k ≲ 10⁵ Mpc⁻¹, and with less sensitivity even higher k ≲ 10¹⁹-10²³ Mpc⁻¹, using limits from CMB spectral distortions and other limits on ultracompact minihalo objects (UCMHs) and Primordial Black Holes (PBHs). PBHs are one of the known candidates for the Dark Matter (DM). Due to their very early formation, they could give us valuable information about the primordial curvature perturbations. These are complementary to other cosmological bounds on the amplitude of the primordial fluctuations. In this paper, we revisit and collect all the published constraints on both PBHs and UCMHs. We show that unless one uses the CMB spectral distortion, PBHs give us very relaxed bounds on the primordial curvature perturbations. UCMHs, on the other hand, are very informative over a reasonable k range (3 ≲ k ≲ 10⁶ Mpc⁻¹) and lead to significant upper bounds on the curvature spectrum. We review the conditions under which the tighter constraints on the UCMHs could imply extremely strong bounds on the fraction of DM that could be PBHs in reasonable models. Failure to satisfy these conditions would lead to overproduction of the UCMHs, which is inconsistent with the observations. Therefore, we can almost rule out PBHs within their overlap scales with the UCMHs. We compare the UCMH bounds coming from those experiments which are sensitive to the nature of the DM, such as γ-rays, neutrinos and reionization, with those which are insensitive to the type of the DM, e.g. pulsar timing as well as CMB spectral distortion. We explicitly show that they lead to comparable results which are independent of the type of DM. These bounds, however, do depend on the required initial density perturbation, i.e. δ_min, which could be either a constant or a scale-dependent function. As we show, the constraints differ by three orders of magnitude depending on the choice of required initial perturbations.
Generalized Geometric Quantum Speed Limits
NASA Astrophysics Data System (ADS)
Pires, Diego Paiva; Cianciaruso, Marco; Céleri, Lucas C.; Adesso, Gerardo; Soares-Pinto, Diogo O.
2016-04-01
The attempt to gain a theoretical understanding of the concept of time in quantum mechanics has triggered significant progress towards the search for faster and more efficient quantum technologies. One such advance consists in the interpretation of the time-energy uncertainty relations as lower bounds for the minimal evolution time between two distinguishable states of a quantum system, also known as quantum speed limits. We investigate how the nonuniqueness of a bona fide measure of distinguishability defined on the quantum-state space affects the quantum speed limits and can be exploited in order to derive improved bounds. Specifically, we establish an infinite family of quantum speed limits valid for unitary and nonunitary evolutions, based on an elegant information geometric formalism. Our work unifies and generalizes existing results on quantum speed limits and provides instances of novel bounds that are tighter than any established one based on the conventional quantum Fisher information. We illustrate our findings with relevant examples, demonstrating the importance of choosing different information metrics for open system dynamics, as well as clarifying the roles of classical populations versus quantum coherences, in the determination and saturation of the speed limits. Our results can find applications in the optimization and control of quantum technologies such as quantum computation and metrology, and might provide new insights in fundamental investigations of quantum thermodynamics.
Bounds of memory strength for power-law series.
Guo, Fangjian; Yang, Dan; Yang, Zimo; Zhao, Zhi-Dan; Zhou, Tao
2017-05-01
Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents α. By permuting the independently drawn samples from a power-law distribution, we present nontrivial bounds on the memory strength (first-order autocorrelation) as a function of α, which are markedly different from the ordinary ±1 bounds for Gaussian or uniform distributions. When 1<α≤3, as α grows bigger, the upper bound increases from 0 to +1 while the lower bound remains 0; when α>3, the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on the posts on Twitter, ratings of MovieLens, calling records of the mobile operator Orange, and the browsing behavior of Taobao, we find that empirical power-law-distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks and challenge the validity of measures such as autocorrelation and assortativity coefficient in heterogeneous systems.
Bounds of memory strength for power-law series
NASA Astrophysics Data System (ADS)
Guo, Fangjian; Yang, Dan; Yang, Zimo; Zhao, Zhi-Dan; Zhou, Tao
2017-05-01
Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents α. By permuting the independently drawn samples from a power-law distribution, we present nontrivial bounds on the memory strength (first-order autocorrelation) as a function of α, which are markedly different from the ordinary ±1 bounds for Gaussian or uniform distributions. When 1 < α ≤ 3, as α grows bigger, the upper bound increases from 0 to +1 while the lower bound remains 0; when α > 3, the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on the posts on Twitter, ratings of MovieLens, calling records of the mobile operator Orange, and the browsing behavior of Taobao, we find that empirical power-law-distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks and challenge the validity of measures such as autocorrelation and assortativity coefficient in heterogeneous systems.
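A rough numerical companion to the result above (a sanity check, not the analytic bounds of the paper): draw i.i.d. samples from a power law with exponent α and see how far the lag-1 autocorrelation can be pushed by reordering them; the sample sizes, the sorted arrangement, and the use of random permutations are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def lag1_autocorrelation(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def empirical_memory_range(alpha, n=2000, trials=300):
    # Pareto-type samples with density p(x) ~ x^(-alpha) for x >= 1.
    samples = rng.pareto(alpha - 1.0, size=n) + 1.0
    values = [lag1_autocorrelation(rng.permutation(samples)) for _ in range(trials)]
    values.append(lag1_autocorrelation(np.sort(samples)))   # a strongly "ordered" arrangement
    return min(values), max(values)

for alpha in (1.5, 2.5, 3.5):
    lo, hi = empirical_memory_range(alpha)
    print(f"alpha = {alpha}: observed lag-1 autocorrelation in [{lo:+.3f}, {hi:+.3f}]")
```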
Bound of dissipation on a plane Couette dynamo
NASA Astrophysics Data System (ADS)
Alboussière, Thierry
2009-06-01
Variational turbulence is among the few approaches providing rigorous results in turbulence. In addition, it addresses a question of direct practical interest, namely, the rate of energy dissipation. Unfortunately, only an upper bound is obtained as a larger functional space than the space of solutions to the Navier-Stokes equations is searched. Yet, in some cases, this upper bound is in good agreement with experimental results in terms of order of magnitude and power law of the imposed Reynolds number. In this paper, the variational approach to turbulence is extended to the case of dynamo action and an upper bound is obtained for the global dissipation rate (viscous and Ohmic). A simple plane Couette flow is investigated. For low magnetic Prandtl number Pm fluids, the upper bound of energy dissipation is that of classical turbulence (i.e., proportional to the cubic power of the shear velocity) for magnetic Reynolds numbers below Pm⁻¹ and follows a steeper evolution for magnetic Reynolds numbers above Pm⁻¹ (i.e., proportional to the shear velocity to the power of 4) in the case of electrically insulating walls. However, the effect of wall conductance is crucial: for a given value of wall conductance, there is a value for the magnetic Reynolds number above which energy dissipation cannot be bounded. This limiting magnetic Reynolds number is inversely proportional to the square root of the conductance of the wall. Implications in terms of energy dissipation in experimental and natural dynamos are discussed.
Upper-Bound Estimates Of SEU in CMOS
NASA Technical Reports Server (NTRS)
Edmonds, Larry D.
1990-01-01
Theory of single-event upsets (SEU) (changes in logic state caused by energetic charged subatomic particles) in complementary metal oxide/semiconductor (CMOS) logic devices extended to provide upper-bound estimates of rates of SEU when limited experimental information available and configuration and dimensions of SEU-sensitive regions of devices unknown. Based partly on chord-length-distribution method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Datta, Nilanjana, E-mail: n.datta@statslab.cam.ac.uk; Hsieh, Min-Hsiu, E-mail: Min-Hsiu.Hsieh@uts.edu.au; Oppenheim, Jonathan, E-mail: j.oppenheim@ucl.ac.uk
State redistribution is the protocol in which given an arbitrary tripartite quantum state, with two of the subsystems initially being with Alice and one being with Bob, the goal is for Alice to send one of her subsystems to Bob, possibly with the help of prior shared entanglement. We derive an upper bound on the second order asymptotic expansion for the quantum communication cost of achieving state redistribution with a given finite accuracy. In proving our result, we also obtain an upper bound on the quantum communication cost of this protocol in the one-shot setting, by using the protocol of coherent state merging as a primitive.
Improved bounds on the energy-minimizing strains in martensitic polycrystals
NASA Astrophysics Data System (ADS)
Peigney, Michaël
2016-07-01
This paper is concerned with the theoretical prediction of the energy-minimizing (or recoverable) strains in martensitic polycrystals, considering a nonlinear elasticity model of phase transformation at finite strains. The main results are some rigorous upper bounds on the set of energy-minimizing strains. Those bounds depend on the polycrystalline texture through the volume fractions of the different orientations. The simplest form of the bounds presented is obtained by combining recent results for single crystals with a homogenization approach proposed previously for martensitic polycrystals. However, the polycrystalline bound delivered by that procedure may fail to recover the monocrystalline bound in the homogeneous limit, as is demonstrated in this paper by considering an example related to tetragonal martensite. This motivates the development of a more detailed analysis, leading to improved polycrystalline bounds that are notably consistent with results for single crystals in the homogeneous limit. A two-orientation polycrystal of tetragonal martensite is studied as an illustration. In that case, analytical expressions of the upper bounds are derived and the results are compared with lower bounds obtained by considering laminate textures.
Making almost commuting matrices commute
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hastings, Matthew B
Suppose two Hermitian matrices A, B almost commute (‖[A,B]‖ ≤ δ). Are they close to a commuting pair of Hermitian matrices, A', B', with ‖A-A'‖, ‖B-B'‖ ≤ ε? A theorem of H. Lin shows that this is uniformly true, in that for every ε > 0 there exists a δ > 0, independent of the size N of the matrices, for which almost commuting implies being close to a commuting pair. However, this theorem does not specify how δ depends on ε. We give uniform bounds relating δ and ε. The proof is constructive, giving an explicit algorithm to construct A' and B'. We provide tighter bounds in the case of block tridiagonal and tridiagonal matrices. Within the context of quantum measurement, this implies an algorithm to construct a basis in which we can make a projective measurement that approximately measures two approximately commuting operators simultaneously. Finally, we comment briefly on the case of approximately measuring three or more approximately commuting operators using POVMs (positive operator-valued measures) instead of projective measurements.
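For orientation, here is a naive baseline, emphatically not the constructive algorithm of the paper: diagonalize A and keep only the part of B that is diagonal in A's eigenbasis, which commutes with A exactly, and then compare δ with the perturbation ε actually incurred. The matrices are random and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_hermitian(n):
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

n = 40
A = random_hermitian(n)
B = A @ A + 0.01 * random_hermitian(n)          # B almost commutes with A by construction

delta = np.linalg.norm(A @ B - B @ A, 2)        # spectral norm of the commutator

eigvals, U = np.linalg.eigh(A)                  # A = U diag(eigvals) U^dagger
B_in_A_basis = U.conj().T @ B @ U
B_prime = U @ np.diag(np.diag(B_in_A_basis)) @ U.conj().T   # commutes with A exactly

epsilon = np.linalg.norm(B - B_prime, 2)
print(f"delta = ||[A,B]|| = {delta:.4f},  epsilon = ||B - B'|| = {epsilon:.4f}")
print("residual commutator of (A, B'):", np.linalg.norm(A @ B_prime - B_prime @ A, 2))
```

This naive projection can fail badly when A has nearly degenerate eigenvalues, which is exactly the difficulty that makes uniform δ-ε bounds and an explicit construction nontrivial.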
Solving Open Job-Shop Scheduling Problems by SAT Encoding
NASA Astrophysics Data System (ADS)
Koshimura, Miyuki; Nabeshima, Hidetomo; Fujita, Hiroshi; Hasegawa, Ryuzo
This paper tries to solve open Job-Shop Scheduling Problems (JSSP) by translating them into Boolean Satisfiability Testing Problems (SAT). The encoding method is essentially the same as the one proposed by Crawford and Baker. The open problems are ABZ8, ABZ9, YN1, YN2, YN3, and YN4. We proved that the best known upper bounds 678 of ABZ9 and 884 of YN1 are indeed optimal. We also improved the upper bound of YN2 and lower bounds of ABZ8, YN2, YN3 and YN4.
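To make the translation idea concrete, here is a minimal time-indexed SAT encoding sketched from scratch; it is not the Crawford and Baker encoding used in the paper, and the function names, variable layout, and the tiny instance are invented for illustration. Each Boolean variable asserts that a given operation starts at a given time; clauses force every operation to start exactly once, operations within a job to respect their order, and operations on the same machine not to overlap.

```python
from itertools import combinations, product

def encode_jssp(jobs, horizon):
    """Time-indexed SAT encoding of the question 'does a schedule of makespan <= horizon exist?'
    jobs: list of jobs, each a list of (machine, duration) operations.
    Returns (clauses, var): DIMACS-style clauses and a map where var[(j, o, t)] means
    'operation o of job j starts at time t'."""
    var = {}
    for j, ops in enumerate(jobs):
        for o, _ in enumerate(ops):
            for t in range(horizon):
                var[(j, o, t)] = len(var) + 1
    clauses = []
    for j, ops in enumerate(jobs):
        for o, (_, d) in enumerate(ops):
            starts = [var[(j, o, t)] for t in range(horizon)]
            clauses.append([var[(j, o, t)] for t in range(horizon - d + 1)])   # starts early enough
            clauses.extend([-a, -b] for a, b in combinations(starts, 2))       # at most one start
        for o in range(len(ops) - 1):                                          # precedence within a job
            d = ops[o][1]
            for t1 in range(horizon):
                for t2 in range(min(t1 + d, horizon)):
                    clauses.append([-var[(j, o, t1)], -var[(j, o + 1, t2)]])
    by_machine = {}
    for j, ops in enumerate(jobs):
        for o, (m, d) in enumerate(ops):
            by_machine.setdefault(m, []).append((j, o, d))
    for ops_on_m in by_machine.values():                                       # no overlap on a machine
        for (j1, o1, d1), (j2, o2, d2) in combinations(ops_on_m, 2):
            for t1, t2 in product(range(horizon), repeat=2):
                if t1 < t2 + d2 and t2 < t1 + d1:
                    clauses.append([-var[(j1, o1, t1)], -var[(j2, o2, t2)]])
    return clauses, var

# Tiny example: 2 jobs on 2 machines, horizon 6.
jobs = [[(0, 2), (1, 2)], [(1, 3), (0, 2)]]
clauses, var = encode_jssp(jobs, horizon=6)
print(len(var), "variables,", len(clauses), "clauses")
```

Feeding the clauses to any off-the-shelf SAT solver answers the feasibility question for one horizon; proving an optimal makespan such as 678 for ABZ9 amounts to showing satisfiability at the bound and unsatisfiability one time unit below it.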
Upper and lower bounds for semi-Markov reliability models of reconfigurable systems
NASA Technical Reports Server (NTRS)
White, A. L.
1984-01-01
This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.
The Laughlin liquid in an external potential
NASA Astrophysics Data System (ADS)
Rougerie, Nicolas; Yngvason, Jakob
2018-04-01
We study natural perturbations of the Laughlin state arising from the effects of trapping and disorder. These are N-particle wave functions that have the form of a product of Laughlin states and analytic functions of the N variables. We derive an upper bound to the ground state energy in a confining external potential, matching exactly a recently derived lower bound in the large N limit. Irrespective of the shape of the confining potential, this sharp upper bound can be achieved through a modification of the Laughlin function by suitably arranged quasi-holes.
Determining Normal-Distribution Tolerance Bounds Graphically
NASA Technical Reports Server (NTRS)
Mezzacappa, M. A.
1983-01-01
Graphical method requires calculations and table lookup. Distribution established from only three points: mean upper and lower confidence bounds and lower confidence bound of standard deviation. Method requires only few calculations with simple equations. Graphical procedure establishes best-fit line for measured data and bounds for selected confidence level and any distribution percentile.
An evaluation of risk estimation procedures for mixtures of carcinogens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, J.S.; Chen, J.J.
1999-12-01
The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on the combination of information from several available single compound studies. The current practice of directly summing the upper bound risk estimates of individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of individual carcinogens. The Gaylor-Chen procedure was derived based on an underlying assumption of normality for the distributions of individual risk estimates. In this paper the authors evaluated the Gaylor-Chen approach in terms of the coverages of the upper confidence limits on the true risks of individual carcinogens. In general, if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, then the Gaylor-Chen approach should perform well. However, the Gaylor-Chen approach can be conservative or anti-conservative if some or all of the individual upper confidence limit estimates are conservative or anti-conservative.
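A sketch of the combination rule as I read it (sum the central estimates and add the root-sum-square of the individual margins, rather than summing the upper confidence limits outright); the exact form and the numbers below are assumptions for illustration, not values from the paper.

```python
import math

central = [1.2e-6, 4.0e-7, 2.5e-6]        # central (e.g. MLE) risk estimates, invented
ucl = [3.0e-6, 1.1e-6, 6.0e-6]            # corresponding upper confidence limits, invented

# Direct summation of UCLs: the practice criticized as overly conservative.
naive_upper = sum(ucl)

# Gaylor-Chen style combination: central estimates plus root-sum-square of the margins.
combined_upper = sum(central) + math.sqrt(sum((u - c) ** 2 for u, c in zip(ucl, central)))

print(f"sum of individual UCLs:        {naive_upper:.2e}")
print(f"Gaylor-Chen style combination: {combined_upper:.2e}")   # smaller, less conservative
```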
The upper bound of abutment scour defined by selected laboratory and field data
Benedict, Stephen; Caldwell, Andral W.
2015-01-01
The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted a field investigation of abutment scour in South Carolina and used that data to develop envelope curves defining the upper bound of abutment scour. To expand upon this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with abutment-scour data from other sources and evaluate the upper bound of abutment scour with the larger data set. To facilitate this analysis, a literature review was made to identify potential sources of published abutment-scour data, and selected data, consisting of 446 laboratory and 331 field measurements, were compiled for the analysis. These data encompassed a wide range of laboratory and field conditions and represent field data from 6 states within the United States. The data set was used to evaluate the South Carolina abutment-scour envelope curves. Additionally, the data were used to evaluate a dimensionless abutment-scour envelope curve developed by Melville (1992), highlighting the distinct difference in the upper bound for laboratory and field data. The envelope curves evaluated in this investigation provide simple but useful tools for assessing the potential maximum abutment-scour depth in the field setting.
Symmetry Parameter Constraints from a Lower Bound on Neutron-matter Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tews, Ingo; Lattimer, James M.; Ohnishi, Akira
We propose the existence of a lower bound on the energy of pure neutron matter (PNM) on the basis of unitary-gas considerations. We discuss its justification from experimental studies of cold atoms as well as from theoretical studies of neutron matter. We demonstrate that this bound results in limits to the density-dependent symmetry energy, which is the difference between the energies of symmetric nuclear matter and PNM. In particular, this bound leads to a lower limit to the volume symmetry energy parameter S₀. In addition, for assumed values of S₀ above this minimum, this bound implies both upper and lower limits to the symmetry energy slope parameter L, which describes the lowest-order density dependence of the symmetry energy. A lower bound on neutron-matter incompressibility is also obtained. These bounds are found to be consistent with both recent calculations of the energies of PNM and constraints from nuclear experiments. Our results are significant because several equations of state that are currently used in astrophysical simulations of supernovae and neutron star mergers, as well as in nuclear physics simulations of heavy-ion collisions, have symmetry energy parameters that violate these bounds. Furthermore, below the nuclear saturation density, the bound on neutron-matter energies leads to a lower limit to the density-dependent symmetry energy, which leads to upper limits to the nuclear surface symmetry parameter and the neutron-star crust–core boundary. We also obtain a lower limit to the neutron-skin thicknesses of neutron-rich nuclei. Above the nuclear saturation density, the bound on neutron-matter energies also leads to an upper limit to the symmetry energy, with implications for neutron-star cooling via the direct Urca process.
Symmetry Parameter Constraints from a Lower Bound on Neutron-matter Energy
NASA Astrophysics Data System (ADS)
Tews, Ingo; Lattimer, James M.; Ohnishi, Akira; Kolomeitsev, Evgeni E.
2017-10-01
We propose the existence of a lower bound on the energy of pure neutron matter (PNM) on the basis of unitary-gas considerations. We discuss its justification from experimental studies of cold atoms as well as from theoretical studies of neutron matter. We demonstrate that this bound results in limits to the density-dependent symmetry energy, which is the difference between the energies of symmetric nuclear matter and PNM. In particular, this bound leads to a lower limit to the volume symmetry energy parameter S₀. In addition, for assumed values of S₀ above this minimum, this bound implies both upper and lower limits to the symmetry energy slope parameter L, which describes the lowest-order density dependence of the symmetry energy. A lower bound on neutron-matter incompressibility is also obtained. These bounds are found to be consistent with both recent calculations of the energies of PNM and constraints from nuclear experiments. Our results are significant because several equations of state that are currently used in astrophysical simulations of supernovae and neutron star mergers, as well as in nuclear physics simulations of heavy-ion collisions, have symmetry energy parameters that violate these bounds. Furthermore, below the nuclear saturation density, the bound on neutron-matter energies leads to a lower limit to the density-dependent symmetry energy, which leads to upper limits to the nuclear surface symmetry parameter and the neutron-star crust-core boundary. We also obtain a lower limit to the neutron-skin thicknesses of neutron-rich nuclei. Above the nuclear saturation density, the bound on neutron-matter energies also leads to an upper limit to the symmetry energy, with implications for neutron-star cooling via the direct Urca process.
Upper and lower bounds of ground-motion variabilities: implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, Fabrice; Reddy-Kotha, Sreeram; Bora, Sanjay; Bindi, Dino
2017-04-01
One of the key challenges of seismology is to be able to analyse the physical factors that control earthquakes and ground-motion variabilities. Such analysis is particularly important to calibrate physics-based simulations and seismic hazard estimations at high frequencies. Within the framework of ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-source records and modern GMPE analysis techniques allow these residuals to be partitioned into between-event and within-event components. In particular, the between-event term quantifies all those repeatable source effects (e.g. related to stress-drop or kappa-source variability) which have not been accounted for by the magnitude-dependent term of the model. In this presentation, we first discuss the between-event variabilities computed both in the Fourier and response-spectra domains, using recent high-quality global accelerometric datasets (e.g. NGA-West2, RESORCE, KiK-net). These analyses lead to the assessment of upper bounds for the ground-motion variability. Then, we compare these upper bounds with lower bounds estimated by analysing seismic sequences which occurred on specific fault systems (e.g., located in Central Italy or in Japan). We show that the lower bounds of between-event variabilities are surprisingly large, which indicates a large variability of earthquake dynamic properties even within the same fault system. Finally, these upper and lower bounds of ground-shaking variability are discussed in terms of the variability of earthquake physical properties (e.g., stress-drop and kappa-source).
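A minimal sketch of the residual partition described above, using plain group means on synthetic residuals instead of the mixed-effects regression normally used in GMPE work; the event counts, standard deviations, and variable names are all invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n_events, recs_per_event = 50, 20
true_between, true_within = 0.35, 0.55                     # ln-unit standard deviations, assumed

# Synthetic total residuals: one repeatable event term per earthquake plus record scatter.
event_terms = rng.normal(0.0, true_between, n_events)
residuals = event_terms[:, None] + rng.normal(0.0, true_within, (n_events, recs_per_event))

between_event = residuals.mean(axis=1)                      # estimated event terms
within_event = residuals - between_event[:, None]           # leftover record-to-record scatter

print(f"tau   (between-event std) ~ {between_event.std(ddof=1):.3f}")
print(f"phi   (within-event std)  ~ {within_event.std(ddof=1):.3f}")
print(f"sigma (total)             ~ {residuals.std(ddof=1):.3f}")
```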
Complexity, Heuristic, and Search Analysis for the Games of Crossings and Epaminondas
2014-03-27
The document reviews research in Artificial Intelligence (Section 2.1), why games are studied (Section 2.2), and how games are played and solved (Section 2.3). Abbreviations used include UCT (Upper Confidence Bounds applied to Trees), HUCT (Heuristic Guided UCT), LOA (Lines of Action), and UCB (Upper Confidence Bound).
ERIC Educational Resources Information Center
Kim, Seonghoon; Feldt, Leonard S.
2010-01-01
The primary purpose of this study is to investigate the mathematical characteristics of the test reliability coefficient ρ_XX′ as a function of item response theory (IRT) parameters and present the lower and upper bounds of the coefficient. Another purpose is to examine relative performances of the IRT reliability statistics and two…
Multivariate Lipschitz optimization: Survey and computational comparison
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, P.; Gourdin, E.; Jaumard, B.
1994-12-31
Many methods have been proposed to minimize a multivariate Lipschitz function on a box. They pertain to three approaches: (i) reduction to the univariate case by projection (Pijavskii) or by using a space-filling curve (Strongin); (ii) construction and refinement of a single upper bounding function (Pijavskii, Mladineo, Mayne and Polak, Jaumard Hermann and Ribault, Wood...); (iii) branch and bound with local upper bounding functions (Galperin, Pintér, Meewella and Mayne, the present authors). A survey is made, stressing similarities of algorithms, expressed when possible within a unified framework. Moreover, an extensive computational comparison is reported.
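As a concrete instance of the bounding-function idea that the multivariate methods build on, here is a one-dimensional Piyavskii-Shubert sketch for minimization: a Lipschitz constant L gives the sawtooth underestimator f(x_i) − L·|x − x_i|, and each iteration samples where that bounding function is lowest. The objective, the grid resolution, and the Lipschitz constant below are illustrative choices, not anything from the survey.

```python
import numpy as np

def piyavskii_minimize(f, a, b, lipschitz, iterations=40, grid=4001):
    xs = np.linspace(a, b, grid)
    pts, vals = [a, b], [f(a), f(b)]
    for _ in range(iterations):
        # Pointwise maximum of the cones f(p) - L|x - p| is a valid underestimator of f.
        lower_env = np.max([v - lipschitz * np.abs(xs - p) for p, v in zip(pts, vals)], axis=0)
        x_next = xs[np.argmin(lower_env)]          # sample where the bounding function is lowest
        pts.append(x_next)
        vals.append(f(x_next))
    lower_env = np.max([v - lipschitz * np.abs(xs - p) for p, v in zip(pts, vals)], axis=0)
    return min(vals), lower_env.min()              # best value and a lower bound (up to grid resolution)

f = lambda x: np.sin(3 * x) + 0.5 * np.cos(7 * x)  # toy objective; |f'| <= 6.5 on [0, 4]
best_value, lower_bound = piyavskii_minimize(f, 0.0, 4.0, lipschitz=6.5)
print(f"best sampled value: {best_value:.4f}, certified lower bound: {lower_bound:.4f}")
```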
Numerical and analytical bounds on threshold error rates for hypergraph-product codes
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.
2018-06-01
We study analytically and numerically decoding properties of finite-rate hypergraph-product quantum low density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7 % .
Computational micromechanics of woven composites
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Saigal, Sunil; Zeng, Xiaogang
1991-01-01
The bounds on the equivalent elastic material properties of a composite are presently addressed by a unified energy approach which is valid for both unidirectional and 2D and 3D woven composites. The unit cell considered is assumed to consist, first, of the actual composite arrangement of the fibers and matrix material, and then, of an equivalent pseudohomogeneous material. Equating the strain energies due to the two arrangements yields an estimate of the upper bound for the material equivalent properties; successive increases in the order of displacement field that is assumed in the composite arrangement will successively produce improved upper bound estimates.
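The simplest analogue of this energy-equating idea, shown below, is the classical Voigt/Reuss pair: assuming a uniform strain field over the unit cell gives a rule-of-mixtures upper bound on the equivalent modulus, and assuming a uniform stress field gives a lower bound. This is only a hand-calculation stand-in for the finite-element unit-cell analysis described above, and the material values are illustrative.

```python
def voigt_upper(E_fiber, E_matrix, v_fiber):
    # Uniform-strain (equal-strain-energy) assumption: arithmetic rule of mixtures.
    return v_fiber * E_fiber + (1.0 - v_fiber) * E_matrix

def reuss_lower(E_fiber, E_matrix, v_fiber):
    # Uniform-stress assumption: harmonic rule of mixtures.
    return 1.0 / (v_fiber / E_fiber + (1.0 - v_fiber) / E_matrix)

E_f, E_m, vf = 230.0, 3.5, 0.6          # GPa, GPa, fiber volume fraction (assumed values)
print(f"Voigt upper bound: {voigt_upper(E_f, E_m, vf):.1f} GPa")
print(f"Reuss lower bound: {reuss_lower(E_f, E_m, vf):.1f} GPa")
```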
Upper bounds on the photon mass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Accioly, Antonio; Group of Field Theory from First Principles, Sao Paulo State University; Instituto de Fisica Teorica
2010-09-15
The effects of a nonzero photon rest mass can be incorporated into electromagnetism in a simple way using the Proca equations. In this vein, two interesting implications regarding the possible existence of a massive photon in nature, i.e., tiny alterations in the known values of both the anomalous magnetic moment of the electron and the gravitational deflection of electromagnetic radiation, are utilized to set upper limits on its mass. The bounds obtained are not as stringent as those recently found; nonetheless, they are comparable to other existing bounds and bring new elements to the issue of restricting the photon mass.
Improved nearest codeword search scheme using a tighter kick-out condition
NASA Astrophysics Data System (ADS)
Hwang, Kuo-Feng; Chang, Chin-Chen
2001-09-01
A faster approach to nearest codeword search based on a tighter kick-out condition is proposed. The proposed scheme finds the nearest codeword that is identical to the one found using a full search. However, using our scheme, the search time is much shorter. Our scheme first establishes a tighter kick-out condition. Then, the temporal nearest codeword can be obtained from the codewords that survive the tighter condition. Finally, the temporal nearest codeword cooperates with the query vector to constitute a better kick-out condition. In other words, more codewords can be excluded without actually computing the distances between the bypassed codewords and the query vector. Comparisons to previous work are included to present the benefits of the proposed scheme in relation to search time.
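For context, the sketch below shows a simple kick-out condition in the same spirit (partial distance elimination, not the tighter condition of the paper): while accumulating the squared distance to a codeword, abandon it as soon as the partial sum exceeds the best distance found so far. The winner is identical to a full search; the codebook and query are random illustrative data.

```python
import numpy as np

def nearest_codeword(query, codebook):
    best_index, best_dist = 0, float("inf")
    for i, codeword in enumerate(codebook):
        partial = 0.0
        for q, c in zip(query, codeword):
            partial += (q - c) ** 2
            if partial >= best_dist:          # kick-out: this codeword can no longer win
                break
        else:                                  # no break: full distance computed and it is the new best
            best_index, best_dist = i, partial
    return best_index, best_dist

rng = np.random.default_rng(4)
codebook = rng.random((256, 16))
query = rng.random(16)
idx, dist = nearest_codeword(query, codebook)
assert idx == int(np.argmin(((codebook - query) ** 2).sum(axis=1)))   # matches a full search
print(idx, round(dist, 6))
```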
Eren, Metin I.; Chao, Anne; Hwang, Wen-Han; Colwell, Robert K.
2012-01-01
Background: Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded: assuming only a lower bound of species or classes. Nevertheless, there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. Methodology/Principal Findings: Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. Conclusions/Significance: Application of the new method for constructing doubly-bounded richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race).
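A simplified sketch of the idea (not the authors' exact interval construction): compute the bias-corrected Chao1 richness estimate from class counts, then clip the estimate and a crude upper limit at the known maximum number of classes. The counts and the illustrative upper limit are invented; only the Chao1 formula and the fact that observed richness is a lower bound are standard.

```python
from collections import Counter

def chao1_doubly_bounded(counts, max_classes):
    s_obs = len(counts)
    f1 = sum(1 for c in counts if c == 1)            # singleton classes
    f2 = sum(1 for c in counts if c == 2)            # doubleton classes
    chao1 = s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))   # bias-corrected Chao1 estimate
    naive_upper = chao1 + 2 * f1                     # crude illustrative upper limit, not the paper's CI
    return min(chao1, max_classes), s_obs, min(naive_upper, max_classes)

observations = ["A"] * 9 + ["B"] * 4 + ["C"] * 2 + ["D", "E", "F"]   # 6 observed classes
counts = list(Counter(observations).values())
estimate, lower, upper = chao1_doubly_bounded(counts, max_classes=8)
print(f"estimate = {estimate:.2f}, bounds = [{lower}, {upper:.2f}]  (cannot exceed 8 classes)")
```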
Entanglement criterion for tripartite systems based on local sum uncertainty relations
NASA Astrophysics Data System (ADS)
Akbari-Kourbolagh, Y.; Azhdargalam, M.
2018-04-01
We propose a sufficient criterion for the entanglement of tripartite systems based on local sum uncertainty relations for arbitrarily chosen observables of subsystems. This criterion generalizes the tighter criterion for bipartite systems introduced by Zhang et al. [C.-J. Zhang, H. Nha, Y.-S. Zhang, and G.-C. Guo, Phys. Rev. A 81, 012324 (2010), 10.1103/PhysRevA.81.012324] and can be used for both discrete- and continuous-variable systems. It enables us to detect the entanglement of quantum states without having a complete knowledge of them. Its utility is illustrated by some examples of three-qubit, qutrit-qutrit-qubit, and three-mode Gaussian states. It is found that, in comparison with other criteria, this criterion is able to detect some three-qubit bound entangled states more efficiently.
Forward and backward uncertainty propagation: an oxidation ditch modelling example.
Abusam, A; Keesman, K J; van Straten, G
2003-01-01
In the field of water technology, forward uncertainty propagation is frequently used, whereas backward uncertainty propagation is rarely used. In forward uncertainty analysis, one moves from a given (or assumed) parameter subspace towards the corresponding distribution of the output or objective function. However, in backward uncertainty propagation, one moves in the reverse direction, from the distribution function towards the parameter subspace. Backward uncertainty propagation, which is a generalisation of parameter estimation error analysis, gives information essential for designing experimental or monitoring programmes, and for tighter bounding of parameter uncertainty intervals. The procedure of carrying out backward uncertainty propagation is illustrated in this technical note by a working example for an oxidation ditch wastewater treatment plant. The results obtained demonstrate that essential information can be achieved by carrying out backward uncertainty propagation analysis.
Robust Tomography using Randomized Benchmarking
NASA Astrophysics Data System (ADS)
Silva, Marcus; Kimmel, Shelby; Johnson, Blake; Ryan, Colm; Ohki, Thomas
2013-03-01
Conventional randomized benchmarking (RB) can be used to estimate the fidelity of Clifford operations in a manner that is robust against preparation and measurement errors -- thus allowing for a more accurate and relevant characterization of the average error in Clifford gates compared to standard tomography protocols. Interleaved RB (IRB) extends this result to the extraction of error rates for individual Clifford gates. In this talk we will show how to combine multiple IRB experiments to extract all information about the unital part of any trace preserving quantum process. Consequently, one can compute the average fidelity to any unitary, not just the Clifford group, with tighter bounds than IRB. Moreover, the additional information can be used to design improvements in control. MS, BJ, CR and TO acknowledge support from IARPA under contract W911NF-10-1-0324.
Upper bound on the Abelian gauge coupling from asymptotic safety
NASA Astrophysics Data System (ADS)
Eichhorn, Astrid; Versteegen, Fleur
2018-01-01
We explore the impact of asymptotically safe quantum gravity on the Abelian gauge coupling in a model including a charged scalar, confirming indications that asymptotically safe quantum fluctuations of gravity could trigger a power-law running towards a free fixed point for the gauge coupling above the Planck scale. Simultaneously, quantum gravity fluctuations balance against matter fluctuations to generate an interacting fixed point, which acts as a boundary of the basin of attraction of the free fixed point. This enforces an upper bound on the infrared value of the Abelian gauge coupling. In the regime of gravity couplings which in our approximation also allows for a prediction of the top quark and Higgs mass close to the experimental value [1], we obtain an upper bound approximately 35% above the infrared value of the hypercharge coupling in the Standard Model.
Limits of Gaussian fluctuations in the cosmic microwave background at 19.2 GHz
NASA Technical Reports Server (NTRS)
Boughn, S. P.; Cheng, E. S.; Cottingham, D. A.; Fixsen, D. J.
1992-01-01
The Northern Hemisphere data from the 19.2 GHz full sky survey are analyzed to place limits on the magnitude of Gaussian fluctuations in the cosmic microwave background implied by a variety of correlation functions. Included among the models tested are the monochromatic and Gaussian-shaped families, and those with power-law spectra for n values between -2 and 1. An upper bound is placed on the quadrupole anisotropy of Delta T/T less than 3.2 x 10 exp -5 rms, and an upper bound on scale-invariant (n = 1) fluctuations of a2 less than 4.5 x 10 exp -5 (95 percent confidence level). There is significant contamination of these data from Galactic emission, and improvement of the modeling of the Galaxy could yield a significant reduction of these upper bounds.
Limits on Gaussian fluctuations in the cosmic microwave background at 19.2 GHz
NASA Technical Reports Server (NTRS)
Boughn, S. P.; Cheng, E. S.; Cottingham, D. A.; Fixsen, D. J.
1991-01-01
The Northern Hemisphere data from the 19.2 GHz full sky survey are analyzed to place limits on the magnitude of Gaussian fluctuations in the cosmic microwave background implied by a variety of correlation functions. Included among the models tested are the monochromatic and Gaussian-shaped families, and those with power law spectra for n from -2 to 1. We place an upper bound on the quadrupole anisotropy of DeltaT/T less than 3.2 x 10 exp -5 rms, and an upper bound on scale-invariant (n = 1) fluctuations of a2 less than 4.5 x 10 exp -5 (95 percent confidence level). There is significant contamination of these data from Galactic emission, and improvement of our modeling of the Galaxy could yield a significant reduction of these upper bounds.
Complexity Bounds for Quantum Computation
2007-06-22
This project focused on upper and lower bounds for quantum computability using constant… classical computation models, particularly emphasizing new examples of where quantum circuits are more powerful than their classical counterparts. A second…
NASA Astrophysics Data System (ADS)
High-Resolution Fly's Eye Collaboration; Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Belov, K.; Belz, J. W.; Benzvi, S.; Bergman, D. R.; Blake, S. A.; Cao, Z.; Connolly, B. M.; Deng, W.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Gray, R. C.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jones, B. F.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Loh, E. C.; Maestas, M. M.; Manago, N.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; Moore, S. A.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Rodriguez, D.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Scott, L. M.; Sinnis, G.; Smith, J. D.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.; Zhang, X.
2007-07-01
We report the results of a search for point-like deviations from isotropy in the arrival directions of ultra-high energy cosmic rays in the northern hemisphere. In the monocular data set collected by the High-Resolution Fly's Eye, consisting of 1525 events with energy exceeding 10^18.5 eV, we find no evidence for point-like excesses. We place a 90% c.l. upper limit of 0.8 hadronic cosmic rays/km^2 yr on the flux from such sources for the northern hemisphere and place tighter limits as a function of position in the sky.
NASA Astrophysics Data System (ADS)
Hartman, Thomas; Hartnoll, Sean A.; Mahajan, Raghu
2017-10-01
The linear growth of operators in local quantum systems leads to an effective light cone even if the system is nonrelativistic. We show that the consistency of diffusive transport with this light cone places an upper bound on the diffusivity: D ≲ v²τ_eq. The operator growth velocity v defines the light cone, and τ_eq is the local equilibration time scale, beyond which the dynamics of conserved densities is diffusive. We verify that the bound is obeyed in various weakly and strongly interacting theories. In holographic models, this bound establishes a relation between the hydrodynamic and leading nonhydrodynamic quasinormal modes of planar black holes. Our bound relates transport data—including the electrical resistivity and the shear viscosity—to the local equilibration time, even in the absence of a quasiparticle description. In this way, the bound sheds light on the observed T-linear resistivity of many unconventional metals, the shear viscosity of the quark-gluon plasma, and the spin transport of unitary fermions.
NASA Astrophysics Data System (ADS)
Kulkarni, Girish; Subrahmanyam, V.; Jha, Anand K.
2016-06-01
We study how one-particle correlations transfer to manifest as two-particle correlations in the context of parametric down-conversion (PDC), a process in which a pump photon is annihilated to produce two entangled photons. We work in the polarization degree of freedom and show that for any two-qubit generation process that is both trace-preserving and entropy-nondecreasing, the concurrence C(ρ) of the generated two-qubit state ρ follows an intrinsic upper bound C(ρ) ≤ (1+P)/2, where P is the degree of polarization of the pump photon. We also find that for the class of two-qubit states that is restricted to have only two nonzero diagonal elements, such that the effective dimensionality of the two-qubit state is the same as the dimensionality of the pump polarization state, the upper bound on concurrence is the degree of polarization itself, that is, C(ρ) ≤ P. Our work shows that the maximum manifestation of two-particle correlations as entanglement is dictated by one-particle correlations. The formalism developed in this work can be extended to include multiparticle systems and can thus have important implications towards deducing the upper bounds on multiparticle entanglement, for which no universally accepted measure exists.
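A numerical illustration of the bound C(ρ) ≤ (1+P)/2 using the standard Wootters concurrence on a simple one-parameter family of two-qubit states (a mixture of two Bell states weighted by (1 ± P)/2); this family is chosen purely for convenience and is not the down-converted state analysed in the paper.

```python
import numpy as np

def concurrence(rho):
    # Wootters concurrence for a two-qubit density matrix.
    sy = np.array([[0, -1j], [1j, 0]])
    rho_tilde = np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
    eigs = np.sort(np.real(np.linalg.eigvals(rho @ rho_tilde)))[::-1]
    lam = np.sqrt(np.clip(eigs, 0.0, None))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
psi_plus = np.array([0, 1, 1, 0]) / np.sqrt(2)     # (|01> + |10>)/sqrt(2)

for P in (0.0, 0.3, 0.7, 1.0):
    rho = ((1 + P) / 2) * np.outer(phi_plus, phi_plus) + ((1 - P) / 2) * np.outer(psi_plus, psi_plus)
    print(f"P = {P:.1f}: concurrence = {concurrence(rho):.3f}, bound (1+P)/2 = {(1 + P) / 2:.3f}")
```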
Backstepping Design of Adaptive Neural Fault-Tolerant Control for MIMO Nonlinear Systems.
Gao, Hui; Song, Yongduan; Wen, Changyun
In this paper, an adaptive controller is developed for a class of multi-input and multioutput nonlinear systems with neural networks (NNs) used as a modeling tool. It is shown that all the signals in the closed-loop system with the proposed adaptive neural controller are globally uniformly bounded for any external input in . In our control design, the upper bound of the NN modeling error and the gains of external disturbance are characterized by unknown upper bounds, which is more rational to establish the stability in the adaptive NN control. Filter-based modification terms are used in the update laws of unknown parameters to improve the transient performance. Finally, fault-tolerant control is developed to accommodate actuator failure. An illustrative example applying the adaptive controller to control a rigid robot arm shows the validation of the proposed controller.
Length bounds for connecting discharges in triggered lightning subsequent strokes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idone, V.P.
1990-11-20
Highly time resolved streak recordings from nine subsequent strokes in four triggered flashes have been examined for evidence of the occurrence of upward connecting discharges. These photographic recordings were obtained with superior spatial and temporal resolution (0.3 m and 0.5 μs) and were examined with a video image analysis system to help delineate the separate leader and return stroke image tracks. Unfortunately, a definitive determination of the occurrence of connecting discharges in these strokes could not be made. The data did allow various determinations of an upper bound length for any possible connecting discharge in each stroke. Under the simplest analysis approach possible, an 'absolute' upper bound set of lengths was measured that ranged from 12 to 27 m with a mean of 19 m; two other more involved analyses yielded arguably better upper bound estimates of 8-18 m and 7-26 m with means of 12 and 13 m, respectively. An additional set of low time-resolution telephoto recordings of the lowest few meters of channel revealed six strokes in these flashes with one or more upward unconnected channels originating from the lightning rod tip. The maximum length of unconnected channel seen in each of these strokes ranged from 0.2 to 1.6 m with a mean of 0.7 m. This latter set of observations is interpreted as indirect evidence that connecting discharges did occur in these strokes and that the lower bound for their length is about 1 m.
NASA Technical Reports Server (NTRS)
Spada, Giorgio; Sabadini, Roberto; Yuen, David A.
1991-01-01
A five-layer viscoelastic spherical model is used to calculate the transient displacements of postglacial rebound, the induced polar motions, and the temporal variations of the geopotential up to degree 8 of the zonal coefficients. Two models - one with two viscoelastic layers separated at 670 km, and the other with three layers in which a hard garnet layer lies between the upper and lower mantle - are compared. Forward modeling shows that it may be possible to discern the presence of a hard garnet layer with a viscosity of at least ten times greater than the upper mantle, on the basis of uplift data near the center of the former Laurentide ice-sheet and from polar wander and j2 data. Temporal variations of higher gravity harmonics, such as j6 and j8, can potentially place even tighter constraints on the rheological properties of the hard transition zone. A lower mantle viscosity between 2 and 4 x 10 to the 22nd Pa is generally preferred in models with a garnet layer which may be as large as 50 times more viscous than the upper mantle.
Quijano, Leyre; Yusà, Vicent; Font, Guillermina; McAllister, Claudia; Torres, Concepción; Pardo, Olga
2017-02-01
This study was carried out to determine current levels of nitrate in vegetables marketed in the Region of Valencia (Spain) and to estimate the toxicological risk associated with their intake. A total of 533 samples of seven vegetable species were studied. Nitrate levels were derived from the Valencia Region monitoring programme carried out from 2009 to 2013, and food consumption levels were taken from the first Valencia Food Consumption Survey, conducted in 2010. The exposure was estimated using a probabilistic approach, and two scenarios were assumed for left-censored data: the lower-bound scenario, in which unquantified results (below the limit of quantification) were set to zero, and the upper-bound scenario, in which unquantified results were set to the limit of quantification value. The exposure of the Valencia consumers to nitrate through the consumption of vegetable products appears to be relatively low. In the adult population (16-95 years) the P99.9 was 3.13 mg kg⁻¹ body weight day⁻¹ and 3.15 mg kg⁻¹ body weight day⁻¹ in the lower-bound and upper-bound scenarios, respectively. For young people (6-15 years), the P99.9 of the exposure was 4.20 mg kg⁻¹ body weight day⁻¹ and 4.40 mg kg⁻¹ body weight day⁻¹ in the lower-bound and upper-bound scenarios, respectively. The risk characterisation indicates that, under the upper-bound scenario, 0.79% of adults and 1.39% of young people can exceed the Acceptable Daily Intake of nitrate. This percentage could be higher among extreme consumers of vegetables (such as vegetarians). Overall, the estimated exposures to nitrate from vegetables are unlikely to result in appreciable health risks.
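A minimal sketch of the lower-bound/upper-bound treatment of left-censored data in a probabilistic exposure estimate: results below the limit of quantification (LOQ) are set to 0 in one scenario and to the LOQ in the other, and exposure is concentration × intake / body weight. All numbers, distributions, and the LOQ below are invented for illustration and are unrelated to the Valencia data.

```python
import numpy as np

rng = np.random.default_rng(5)
loq = 50.0                                            # mg nitrate per kg vegetable (assumed)
measured = rng.lognormal(mean=4.5, sigma=0.8, size=5000)
censored = measured < loq                             # samples that would be reported as "< LOQ"

def simulate_p999(concentrations, n=100_000):
    conc = rng.choice(concentrations, size=n)                      # mg/kg vegetable
    intake = rng.lognormal(mean=np.log(0.15), sigma=0.5, size=n)   # kg vegetables per day
    body_weight = rng.normal(70.0, 12.0, size=n).clip(40.0, None)  # kg
    exposure = conc * intake / body_weight                         # mg/kg bw/day
    return np.percentile(exposure, 99.9)

lower = simulate_p999(np.where(censored, 0.0, measured))   # lower-bound scenario
upper = simulate_p999(np.where(censored, loq, measured))   # upper-bound scenario
print(f"P99.9 exposure: lower-bound {lower:.2f}, upper-bound {upper:.2f} mg/kg bw/day")
```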
On the upper bound in the Bohm sheath criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotelnikov, I. A., E-mail: I.A.Kotelnikov@inp.nsk.su; Skovorodin, D. I., E-mail: D.I.Skovorodin@inp.nsk.su
2016-02-15
The question is discussed of the existence of an upper bound in the Bohm sheath criterion, according to which the Debye sheath at the interface between plasma and a negatively charged electrode is stable only if the ion flow velocity in plasma exceeds the ion sound velocity. It is stated that, with the exception of some artificial ionization models, the Bohm sheath criterion is satisfied as an equality at the lower bound and the ion flow velocity is equal to the speed of sound. In the one-dimensional theory, a supersonic flow appears in an unrealistic model of a localized ion source whose size is less than the Debye length; however, supersonic flows seem to be possible in the two- and three-dimensional cases. In the available numerical codes used to simulate charged particle sources with a plasma emitter, the presence of the upper bound in the Bohm sheath criterion is not assumed; however, the correspondence with experimental data is usually achieved if the ion flow velocity in plasma is close to the ion sound velocity.
Computer search for binary cyclic UEP codes of odd length up to 65
NASA Technical Reports Server (NTRS)
Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu
1990-01-01
Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.
NASA Astrophysics Data System (ADS)
Thole, B. T.; Van Duijnen, P. Th.
1982-10-01
The induction and dispersion terms obtained from quantum-mechanical calculations with a direct reaction field Hamiltonian are compared to second-order perturbation theory expressions. The dispersion term is shown to give an upper bound which is a generalization of Alexander's upper bound. The model is illustrated by a calculation on the interactions in the water dimer. The long-range Coulomb, induction and dispersion interactions are reasonably reproduced.
On the Kirchhoff Index of Graphs
NASA Astrophysics Data System (ADS)
Das, Kinkar C.
2013-09-01
Let G be a connected graph of order n with Laplacian eigenvalues μ1 ≥ μ2 ≥ ... ≥ μn-1 > μn = 0. The Kirchhoff index of G is defined as Kf(G) = n Σ_{i=1}^{n-1} 1/μi. In this paper, we give lower and upper bounds on Kf(G) in terms of n, the number of edges, the maximum degree, and the number of spanning trees. Moreover, we present lower and upper bounds on the Nordhaus-Gaddum-type result for the Kirchhoff index.
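A small numerical check of the definition as reconstructed above, using networkx to obtain the Laplacian spectrum of an example graph (this illustrates only the formula, not the paper's new bounds):

```python
import numpy as np
import networkx as nx

G = nx.petersen_graph()                     # any connected graph
n = G.number_of_nodes()
mu = np.sort(nx.laplacian_spectrum(G))      # ascending: mu_n = 0 comes first
kf = n * np.sum(1.0 / mu[1:])               # Kf(G) = n * sum over nonzero Laplacian eigenvalues of 1/mu_i
print(kf)                                   # 33.0 for the Petersen graph
```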
Upper bound of pier scour in laboratory and field data
Benedict, Stephen; Caldwell, Andral W.
2016-01-01
The U.S. Geological Survey (USGS), in cooperation with the South Carolina Department of Transportation, conducted several field investigations of pier scour in South Carolina and used the data to develop envelope curves defining the upper bound of pier scour. To expand on this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with pier scour data from other sources and to evaluate upper-bound relations with this larger data set. To facilitate this analysis, 569 laboratory and 1,858 field measurements of pier scour were compiled to form the 2014 USGS Pier Scour Database. This extensive database was used to develop an envelope curve for the potential maximum pier scour depth encompassing the laboratory and field data. The envelope curve provides a simple but useful tool for assessing the potential maximum pier scour depth for effective pier widths of about 30 ft or less.
NASA Astrophysics Data System (ADS)
Wang, Fan; Liang, Jinling; Dobaie, Abdullah M.
2018-07-01
The resilient filtering problem is considered for a class of time-varying networks with stochastic coupling strengths. An event-triggered strategy is adopted to save the network resources by scheduling the signal transmission from the sensors to the filters based on certain prescribed rules. Moreover, the filter parameters to be designed are subject to gain perturbations. The primary aim of the addressed problem is to determine a resilient filter that ensures an acceptable filtering performance for the considered network with event-triggering scheduling. To handle such an issue, an upper bound on the estimation error variance is established for each node according to the stochastic analysis. Subsequently, the resilient filter is designed by locally minimizing the derived upper bound at each iteration. Moreover, rigorous analysis shows the monotonicity of the minimal upper bound with respect to the triggering threshold. Finally, a simulation example is presented to show the effectiveness of the established filter scheme.
Objects of Maximum Electromagnetic Chirality
NASA Astrophysics Data System (ADS)
Fernandez-Corbaton, Ivan; Fruhnert, Martin; Rockstuhl, Carsten
2016-07-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. Reciprocal objects attain the upper bound if and only if they are transparent for all the fields of one polarization handedness (helicity). Additionally, electromagnetic duality symmetry, i.e., helicity preservation upon interaction, turns out to be a necessary condition for reciprocal objects to attain the upper bound. We use these results to provide requirements for the design of such extremal objects. The requirements can be formulated as constraints on the polarizability tensors for dipolar objects or on the material constitutive relations for continuous media. We also outline two applications for objects of maximum electromagnetic chirality: a twofold resonantly enhanced and background-free circular dichroism measurement setup, and angle-independent helicity filtering glasses. Finally, we use the theoretically obtained requirements to guide the design of a specific structure, which we then analyze numerically and discuss its performance with respect to maximal electromagnetic chirality.
Exact lower and upper bounds on stationary moments in stochastic biochemical systems
NASA Astrophysics Data System (ADS)
Ghusinga, Khem Raj; Vargas-Garcia, Cesar A.; Lamperski, Andrew; Singh, Abhyudai
2017-08-01
In the stochastic description of biochemical reaction systems, the time evolution of statistical moments for species population counts is described by a linear dynamical system. However, except for some ideal cases (such as zero- and first-order reaction kinetics), the moment dynamics is underdetermined as lower-order moments depend upon higher-order moments. Here, we propose a novel method to find exact lower and upper bounds on stationary moments for a given arbitrary system of biochemical reactions. The method exploits the fact that statistical moments of any positive-valued random variable must satisfy some constraints that are compactly represented through the positive semidefiniteness of moment matrices. Our analysis shows that solving moment equations at steady state in conjunction with constraints on moment matrices provides exact lower and upper bounds on the moments. These results are illustrated by three different examples—the commonly used logistic growth model, stochastic gene expression with auto-regulation and an activator-repressor gene network motif. Interestingly, in all cases the accuracy of the bounds is shown to improve as moment equations are expanded to include higher-order moments. Our results provide avenues for development of approximation methods that provide explicit bounds on moments for nonlinear stochastic systems that are otherwise analytically intractable.
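The core constraint exploited by the method can be written compactly. The following is a generic sketch for a scalar nonnegative count X with raw moments m_k = E[X^k] at a low truncation order; the paper works with multivariate moment matrices, so this is only schematic:

```latex
% Both Hankel moment matrices must be positive semidefinite for a nonnegative random variable:
\begin{pmatrix} 1 & m_1 & m_2 \\ m_1 & m_2 & m_3 \\ m_2 & m_3 & m_4 \end{pmatrix} \succeq 0,
\qquad
\begin{pmatrix} m_1 & m_2 & m_3 \\ m_2 & m_3 & m_4 \\ m_3 & m_4 & m_5 \end{pmatrix} \succeq 0 .
% Maximizing (minimizing) a target moment subject to these constraints together with the linear
% stationary moment equations  A m = 0  is a semidefinite program whose optimal value is an
% upper (lower) bound on that stationary moment.
```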
NASA Astrophysics Data System (ADS)
Audenaert, Koenraad M. R.; Mosonyi, Milán
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ1, …, σr. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ1, …, σr), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, min_{j<k} C(σj, σk).
Differential Games of inf-sup Type and Isaacs Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaise, Hidehiro; Sheu, S.-J.
2005-06-15
Motivated by the work of Fleming, we provide a general framework to associate inf-sup type values with the Isaacs equations. We show that upper and lower bounds for the generators of inf-sup type are upper and lower Hamiltonians, respectively. In particular, the lower (resp. upper) bound corresponds to the progressive (resp. strictly progressive) strategy. By the Dynamic Programming Principle and identification of the generator, we can prove that the inf-sup type game value is characterized as the unique viscosity solution of the Isaacs equation. We also discuss the Isaacs equation with a Hamiltonian given by a convex combination of the lower and upper Hamiltonians.
NASA Astrophysics Data System (ADS)
Xu, Hongmei; Ho, Steven Sai Hang; Cao, Junji; Guinot, Benjamin; Kan, Haidong; Shen, Zhenxing; Ho, Kin Fai; Liu, Suixin; Zhao, Zhuzi; Li, Jianjun; Zhang, Ningning; Zhu, Chongshu; Zhang, Qian; Huang, Rujin
2017-01-01
This study presents the first long-term (10-year, 2004-2013) dataset of PM2.5-bound nickel (Ni) concentrations obtained from daily samples in urban Xi'an, Northwestern China. The Ni concentration trend, pollution sources, and the potential health risks associated with Ni were investigated. The Ni concentrations increased from 2004 to 2008 but then decreased, owing to reduced coal consumption, restructuring of the energy supply, tighter emission rules, and improved industrial and motor-vehicle exhaust control techniques. By comparing the distributions between workday and non-workday periods, the effectiveness of local and regional air pollution control policies and the contributions of hypothesized Ni sources (industrial and automobile exhausts) were evaluated, demonstrating health benefits to the population during the ten years. The mean Ni cancer risk was higher than the threshold value of 10⁻⁶, suggesting that carcinogenic Ni remains a concern for residents. Our findings indicate that stricter strategies and guidelines for atmospheric Ni in our living areas are still needed, helping to balance the relationship between economic growth and environmental conservation in China.
Conditions for supersonic bent Marshak waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Qiang, E-mail: xuqiangxu@pku.edu.cn; Ren, Xiao-dong; Li, Jing
Supersonic radiation diffusion approximation is a useful method for studying radiation transport. Considering the 2-d Marshak theory and an invariable source temperature, conditions for supersonic radiation diffusion are proved to coincide with those for radiant-flux domination at early times, when √ε x_f/L ≪ 1. However, they are even tighter than the conditions for radiant-flux domination at late times, when √ε x_f/L ≫ 1, and can be expressed as M > 4(1 + ε/3)/3 and τ > 1. A large Mach number requires a high temperature, while a large optical depth requires a low temperature; only when the source temperature lies in a proper region can the supersonic diffusion conditions be satisfied. Assuming a power-law (in temperature and density) opacity and internal energy, the supersonic diffusion regions are given theoretically for a given density. The 2-d Marshak theory is shown to bound the supersonic diffusion conditions in both the high- and low-temperature regions, whereas the 1-d theory bounds them only in the low-temperature region. Taking SiO2 and Au as examples, these supersonic regions are shown numerically.
Tidal disruption of Periodic Comet Shoemaker-Levy 9 and a constraint on its mean density
NASA Technical Reports Server (NTRS)
Boss, Alan P.
1994-01-01
The apparent tidal disruption of Periodic Comet Shoemaker-Levy 9 (1993e) during a close encounter within approximately 1.62 planetary radii of Jupiter can be used along with theoretical models of tidal disruption to place an upper bound on the density of the predisruption body. Depending on the theoretical model used, these upper bounds range from ρ_c < 0.702 ± 0.080 g/cm³ for a simple analytical model calibrated by numerical smoothed particle hydrodynamics (SPH) simulations to ρ_c < 1.50 ± 0.17 g/cm³ for a detailed semianalytical model. The quoted uncertainties stem from an assumed uncertainty in the perijove radius. However, the uncertainty introduced by the different theoretical models is the major source of error; this uncertainty could be eliminated by future SPH simulations specialized to cometary disruptions, including the effects of initially prolate, spinning comets. If the SPH-based upper bound turns out to be most appropriate, it would be consistent with the predisruption body being a comet with a relatively low density and porous structure, as has been asserted previously based on observations of cometary outgassing. Regardless of which upper bound is preferable, the models all agree that the predisruption body could not have been a relatively high-density body, such as an asteroid with ρ ≈ 2 g/cm³.
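For orientation, a schematic Roche-type estimate (not one of the specific analytical or SPH-calibrated models used in the paper) already gives the right order of magnitude, assuming Jupiter's mean density ρ_J ≈ 1.33 g/cm³, the quoted perijove distance d ≈ 1.62 R_J, and the classical rigid-body prefactor of 2:

```latex
\rho_c \;\lesssim\; 2\,\rho_J\left(\frac{R_J}{d}\right)^{3}
\;\approx\; \frac{2\times 1.33\ \mathrm{g\,cm^{-3}}}{(1.62)^{3}}
\;\approx\; 0.6\ \mathrm{g\,cm^{-3}} ,
```

which is of the same order as the SPH-calibrated bound quoted above.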
Limit analysis of hollow spheres or spheroids with Hill orthotropic matrix
NASA Astrophysics Data System (ADS)
Pastor, Franck; Pastor, Joseph; Kondo, Djimedo
2012-03-01
Recent theoretical studies of the literature are concerned by the hollow sphere or spheroid (confocal) problems with orthotropic Hill type matrix. They have been developed in the framework of the limit analysis kinematical approach by using very simple trial velocity fields. The present Note provides, through numerical upper and lower bounds, a rigorous assessment of the approximate criteria derived in these theoretical works. To this end, existing static 3D codes for a von Mises matrix have been easily extended to the orthotropic case. Conversely, instead of the non-obvious extension of the existing kinematic codes, a new original mixed approach has been elaborated on the basis of the plane strain structure formulation earlier developed by F. Pastor (2007). Indeed, such a formulation does not need the expressions of the unit dissipated powers. Interestingly, it delivers a numerical code better conditioned and notably more rapid than the previous one, while preserving the rigorous upper bound character of the corresponding numerical results. The efficiency of the whole approach is first demonstrated through comparisons of the results to the analytical upper bounds of Benzerga and Besson (2001) or Monchiet et al. (2008) in the case of spherical voids in the Hill matrix. Moreover, we provide upper and lower bounds results for the hollow spheroid with the Hill matrix which are compared to those of Monchiet et al. (2008).
Bounds for the price of discrete arithmetic Asian options
NASA Astrophysics Data System (ADS)
Vanmaele, M.; Deelstra, G.; Liinev, J.; Dhaene, J.; Goovaerts, M. J.
2006-01-01
In this paper the pricing of European-style discrete arithmetic Asian options with fixed and floating strike is studied by deriving analytical lower and upper bounds. In our approach we use a general technique for deriving upper (and lower) bounds for stop-loss premiums of sums of dependent random variables, as explained in Kaas et al. (Ins. Math. Econom. 27 (2000) 151-168), and additionally, the ideas of Rogers and Shi (J. Appl. Probab. 32 (1995) 1077-1088) and of Nielsen and Sandmann (J. Financial Quant. Anal. 38(2) (2003) 449-473). We are able to create a unifying framework for European-style discrete arithmetic Asian options through these bounds, that generalizes several approaches in the literature as well as improves the existing results. We obtain analytical and easily computable bounds. The aim of the paper is to formulate an advice of the appropriate choice of the bounds given the parameters, investigate the effect of different conditioning variables and compare their efficiency numerically. Several sets of numerical results are included. We also discuss hedging using these bounds. Moreover, our methods are applicable to a wide range of (pricing) problems involving a sum of dependent random variables.
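The comonotonic (convex-order) upper bound of Kaas et al., mentioned above, is straightforward to evaluate once lognormal Black-Scholes marginals are assumed for the averaging dates. The following Python sketch computes that upper bound for a fixed-strike discrete arithmetic Asian call by one-dimensional quadrature over the common uniform driver; all parameter values are hypothetical, and the bounds derived in the paper are sharper than this single construction.

```python
import numpy as np
from scipy.stats import norm

def asian_call_comonotonic_ub(S0, K, r, sigma, times, n_grid=20000):
    """Comonotonic (convex-order) upper bound for a discrete arithmetic-average Asian call."""
    u = (np.arange(n_grid) + 0.5) / n_grid          # midpoint quadrature over U ~ Uniform(0, 1)
    z = norm.ppf(u)
    t = np.asarray(times, dtype=float)
    # every averaging date is driven by the same quantile -> comonotonic average S^c(U)
    paths = S0 * np.exp((r - 0.5 * sigma**2) * t[:, None] + sigma * np.sqrt(t)[:, None] * z[None, :])
    s_c = paths.mean(axis=0)
    return np.exp(-r * t[-1]) * np.mean(np.maximum(s_c - K, 0.0))

# hypothetical contract: quarterly averaging over one year
print(asian_call_comonotonic_ub(S0=100.0, K=100.0, r=0.05, sigma=0.2,
                                times=np.linspace(0.25, 1.0, 4)))
```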
Big bang nucleosynthesis: The strong nuclear force meets the weak anthropic principle
NASA Astrophysics Data System (ADS)
MacDonald, J.; Mullan, D. J.
2009-08-01
Contrary to a common argument that a small increase in the strength of the strong force would lead to destruction of all hydrogen in the big bang due to binding of the diproton and the dineutron, with a catastrophic impact on life as we know it, we show that provided the increase in the strong force coupling constant is less than about 50%, substantial amounts of hydrogen remain. The reason is that an increase in strong force strength leads to tighter binding of the deuteron, permitting nucleosynthesis to occur earlier in the big bang at higher temperature than in the standard big bang. Photodestruction of the less tightly bound diproton and dineutron delays their production until after the bulk of nucleosynthesis is complete. The decay of the diproton can, however, lead to relatively large abundances of deuterium.
Anthropics of aluminum-26 decay and biological homochirality
NASA Astrophysics Data System (ADS)
Sandora, McCullen
2017-11-01
Results of a recent experiment reinstate the feasibility of the hypothesis that biomolecular homochirality originates from beta decay. Coupled with hints that this process occurred extraterrestrially, this suggests aluminum-26 as the most likely source. If true, then its appropriateness is highly dependent on the half-life and energy of this decay. Demanding that this mechanism hold places new constraints on the anthropically allowed range for multiple parameters, including the electron mass, the difference between up and down quark masses, the fine structure constant, and the electroweak scale. These new constraints on particle masses are tighter than those previously found. However, one edge of the allowed region is nearly degenerate with an existing bound, which, using what is termed here 'the principle of noncoincident peril', is argued to be a strong indicator that the fine structure constant must be an environmental parameter in the multiverse.
Sensitivity to neutrino decay with atmospheric neutrinos at the INO-ICAL detector
NASA Astrophysics Data System (ADS)
Choubey, Sandhya; Goswami, Srubabati; Gupta, Chandan; Lakshmi, S. M.; Thakore, Tarak
2018-02-01
Sensitivity of the magnetized Iron Calorimeter (ICAL) detector at the proposed India-based Neutrino Observatory (INO) to invisible decay of the mass eigenstate ν3 using atmospheric neutrinos is explored. A full three-generation analysis including Earth matter effects is performed in a framework with both decay and oscillations. The wide energy range and baselines offered by atmospheric neutrinos are shown to be excellent for constraining the ν3 lifetime. We find that with an exposure of 500 kton-yr the ICAL atmospheric experiment could constrain the ν3 lifetime to τ3/m3 > 1.51 × 10⁻¹⁰ s/eV at the 90% C.L. This is 2 orders of magnitude tighter than the bound from MINOS. The effect of invisible decay on the precision measurement of θ23 and |Δm²32| is also studied.
Memory feedback PID control for exponential synchronisation of chaotic Lur'e systems
NASA Astrophysics Data System (ADS)
Zhang, Ruimei; Zeng, Deqiang; Zhong, Shouming; Shi, Kaibo
2017-09-01
This paper studies the problem of exponential synchronisation of chaotic Lur'e systems (CLSs) via memory feedback proportional-integral-derivative (PID) control scheme. First, a novel augmented Lyapunov-Krasovskii functional (LKF) is constructed, which can make full use of the information on time delay and activation function. Second, improved synchronisation criteria are obtained by using new integral inequalities, which can provide much tighter bounds than what the existing integral inequalities can produce. In comparison with existing results, in which only proportional control or proportional derivative (PD) control is used, less conservative results are derived for CLSs by PID control. Third, the desired memory feedback controllers are designed in terms of the solution to linear matrix inequalities. Finally, numerical simulations of Chua's circuit and neural network are provided to show the effectiveness and advantages of the proposed results.
Quantum optimization for training support vector machines.
Anguita, Davide; Ridella, Sandro; Rivieccio, Fabio; Zunino, Rodolfo
2003-01-01
Refined concepts, such as Rademacher estimates of model complexity and nonlinear criteria for weighting empirical classification errors, represent recent and promising approaches to characterize the generalization ability of Support Vector Machines (SVMs). The advantages of those techniques lie in both improving the SVM representation ability and yielding tighter generalization bounds. On the other hand, they often make Quadratic-Programming algorithms no longer applicable, and SVM training cannot benefit from efficient, specialized optimization techniques. The paper considers the application of Quantum Computing to solve the problem of effective SVM training, especially in the case of digital implementations. The presented research compares the behavioral aspects of conventional and enhanced SVMs; experiments on both synthetic and real-world problems support the theoretical analysis. At the same time, the related differences between Quadratic-Programming and Quantum-based optimization techniques are considered.
Coefficient of performance and its bounds with the figure of merit for a general refrigerator
NASA Astrophysics Data System (ADS)
Long, Rui; Liu, Wei
2015-02-01
A general refrigerator model with non-isothermal processes is studied. The coefficient of performance (COP) and its bounds at maximum χ figure of merit are obtained and analyzed. This model accounts for different heat capacities during the heat transfer processes. So, different kinds of refrigerator cycles can be considered. Under the constant heat capacity condition, the upper bound of the COP is the Curzon-Ahlborn (CA) coefficient of performance and is independent of the time durations of the heat exchanging processes. With the maximum χ criterion, in the refrigerator cycles, such as the reversed Brayton refrigerator cycle, the reversed Otto refrigerator cycle and the reversed Atkinson refrigerator cycle, where the heat capacity in the heat absorbing process is not less than that in the heat releasing process, their COPs are bounded by the CA coefficient of performance; otherwise, such as for the reversed Diesel refrigerator cycle, its COP can exceed the CA coefficient of performance. Furthermore, the general refined upper and lower bounds have been proposed.
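For reference, the Carnot coefficient of performance and the Curzon-Ahlborn (CA) coefficient of performance referred to above are commonly written as follows; this is a standard finite-time-thermodynamics expression for the COP at maximum χ, stated here as background rather than taken from the paper:

```latex
\varepsilon_C = \frac{T_c}{T_h - T_c}, \qquad
\varepsilon_{CA} = \sqrt{1 + \varepsilon_C\,} - 1 = \sqrt{\frac{T_h}{T_h - T_c}} - 1 ,
```

with T_c and T_h the cold and hot reservoir temperatures.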
Search for Chemically Bound Water in the Surface Layer of Mars Based on HEND/Mars Odyssey Data
NASA Technical Reports Server (NTRS)
Basilevsky, A. T.; Litvak, M. L.; Mitrofanov, I. G.; Boynton, W.; Saunders, R. S.
2003-01-01
This study emphasizes the search for signatures of chemically bound water in the surface layer of Mars based on data acquired by the High Energy Neutron Detector (HEND), which is part of the Mars Odyssey Gamma Ray Spectrometer (GRS). Fluxes of epithermal neutrons (probing the upper 1-2 m) and fast neutrons (the upper 20-30 cm), considered in this work, were measured from mid-February to mid-June 2002. A first analysis of this data set with emphasis on chemically bound water was made. Early publications of the GRS results reported low neutron flux at high latitudes, interpreted as a signature of ground water ice, and in two low-latitude areas, Arabia and SW of Olympus Mons (SWOM), interpreted as 'geographic variations in the amount of chemically and/or physically bound H2O and/or OH...'. It is clear that surface materials of Mars do contain chemically bound water, but its amounts are poorly known and its geographic distribution was not analyzed.
Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Ulbrich, N.; L'Esperance, A.
2017-01-01
A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes, as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.
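A minimal sketch of the kind of first-order error propagation described above is given below; the drag decomposition D = A cos(α) + N sin(α), the single-output sensitivities, and all numerical values are illustrative assumptions rather than the paper's actual multi-output balance analysis.

```python
import numpy as np

def cd_precision_upper_bound(q, S, alpha_deg, dA_dout, dN_dout, dq_over_q,
                             out_var=1.0e-6, CD_nom=0.05):
    """First-order propagation of an assumed balance-output variation (1 microV/V) and a
    relative dynamic-pressure error to the drag coefficient; all inputs are illustrative."""
    a = np.radians(alpha_deg)
    # drag from body-axis loads: D = A*cos(alpha) + N*sin(alpha)
    dD = np.hypot(dA_dout * out_var * np.cos(a), dN_dout * out_var * np.sin(a))
    dCD_loads = dD / (q * S)             # load-measurement contribution
    dCD_q = CD_nom * dq_over_q           # dynamic-pressure contribution
    return np.hypot(dCD_loads, dCD_q)

# hypothetical model/balance: q = 300 psf, S = 4 ft^2, alpha = 2 deg,
# axial/normal gage sensitivities of 2e5 and 1e6 lbf per unit output (V/V)
print(cd_precision_upper_bound(300.0, 4.0, 2.0, 2.0e5, 1.0e6, dq_over_q=1e-3))
```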
Improved Lower Bounds on the Price of Stability of Undirected Network Design Games
NASA Astrophysics Data System (ADS)
Bilò, Vittorio; Caragiannis, Ioannis; Fanelli, Angelo; Monaco, Gianpiero
Bounding the price of stability of undirected network design games with fair cost allocation is a challenging open problem in the Algorithmic Game Theory research agenda. Even though the generalization of such games in directed networks is well understood in terms of the price of stability (it is exactly H_n, the n-th harmonic number, for games with n players), far less is known for network design games in undirected networks. The upper bound carries over to this case as well, while the best known lower bound is 42/23 ≈ 1.826. For more restricted but interesting variants of such games, such as broadcast and multicast games, sublogarithmic upper bounds are known while the best known lower bound is 12/7 ≈ 1.714. In the current paper, we improve the lower bounds as follows. We break the psychological barrier of 2 by showing that the price of stability of undirected network design games is at least 348/155 ≈ 2.245. Our proof uses a recursive construction of a network design game with a simple gadget as the main building block. For broadcast and multicast games, we present new lower bounds of 20/11 ≈ 1.818 and 1.862, respectively.
An analysis of the vertical structure equation for arbitrary thermal profiles
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.; Dee, Dick P.
1989-01-01
The vertical structure equation is a singular Sturm-Liouville problem whose eigenfunctions describe the vertical dependence of the normal modes of the primitive equations linearized about a given thermal profile. The eigenvalues give the equivalent depths of the modes. The spectrum of the vertical structure equation and the appropriateness of various upper boundary conditions are studied, both for arbitrary thermal profiles. The results depend critically upon whether or not the thermal profile is such that the basic state atmosphere is bounded. In the case of a bounded atmosphere it is shown that the spectrum is always totally discrete, regardless of details of the thermal profile. For the barotropic equivalent depth, which corresponds to the lowest eigenvalue, upper and lower bounds which depend only on the surface temperature and the atmosphere height were obtained. All eigenfunctions are bounded, but always have unbounded first derivatives. It was proved that the commonly invoked upper boundary condition that vertical velocity must vanish as pressure tends to zero, as well as a number of alternative conditions, is well posed. It was concluded that the vertical structure equation always has a totally discrete spectrum under the assumptions implicit in the primitive equations.
Ultimate energy density of observable cold baryonic matter.
Lattimer, James M; Prakash, Madappa
2005-03-25
We demonstrate that the largest measured mass of a neutron star establishes an upper bound to the energy density of observable cold baryonic matter. An equation of state-independent expression satisfied by both normal neutron stars and self-bound quark matter stars is derived for the largest energy density of matter inside stars as a function of their masses. The largest observed mass sets the lowest upper limit to the density. Implications from existing and future neutron star mass measurements are discussed.
Generalized monogamy inequalities and upper bounds of negativity for multiqubit systems
NASA Astrophysics Data System (ADS)
Yang, Yanmin; Chen, Wei; Li, Gang; Zheng, Zhu-Jun
2018-01-01
In this paper, we present some generalized monogamy inequalities and upper bounds of negativity based on convex-roof extended negativity (CREN) and CREN of assistance (CRENOA). These monogamy relations are satisfied by the negativity of N-qubit quantum systems ABC1⋯CN-2, under the partitions AB|C1⋯CN-2 and ABC1|C2⋯CN-2. Furthermore, the W-class states are used to test these generalized monogamy inequalities.
Computing an upper bound on contact stress with surrogate duality
NASA Astrophysics Data System (ADS)
Xuan, Zhaocheng; Papadopoulos, Panayiotis
2016-07-01
We present a method for computing an upper bound on the contact stress of elastic bodies. The continuum model of elastic bodies with contact is first modeled as a constrained optimization problem by using finite elements. An explicit formulation of the total contact force, a fraction function with the numerator as a linear function and the denominator as a quadratic convex function, is derived with only the normalized nodal contact forces as the constrained variables in a standard simplex. Then two bounds are obtained for the sum of the nodal contact forces. The first is an explicit formulation of matrices of the finite element model, derived by maximizing the fraction function under the constraint that the sum of the normalized nodal contact forces is one. The second bound is solved by first maximizing the fraction function subject to the standard simplex and then using Dinkelbach's algorithm for fractional programming to find the maximum—since the fraction function is pseudo concave in a neighborhood of the solution. These two bounds are solved with the problem dimensions being only the number of contact nodes or node pairs, which are much smaller than the dimension for the original problem, namely, the number of degrees of freedom. Next, a scheme for constructing an upper bound on the contact stress is proposed that uses the bounds on the sum of the nodal contact forces obtained on a fine finite element mesh and the nodal contact forces obtained on a coarse finite element mesh, which are problems that can be solved at a lower computational cost. Finally, the proposed method is verified through some examples concerning both frictionless and frictional contact to demonstrate the method's feasibility, efficiency, and robustness.
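The following Python sketch illustrates the generic Dinkelbach iteration for maximizing a ratio with a linear numerator and a positive convex quadratic denominator over the standard simplex, the structure described above; the objective and dimensions are toy assumptions, not the finite element quantities of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def dinkelbach(f, g, x0, bounds, constraints, tol=1e-10, max_iter=50):
    """Maximize f(x)/g(x) with g > 0 via Dinkelbach's parametric iteration."""
    x = np.asarray(x0, dtype=float)
    lam = f(x) / g(x)
    for _ in range(max_iter):
        # parametric subproblem: maximize f(x) - lam * g(x)
        res = minimize(lambda y: -(f(y) - lam * g(y)), x,
                       method="SLSQP", bounds=bounds, constraints=constraints)
        x = res.x
        new_lam = f(x) / g(x)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return x, lam

# toy instance: linear numerator over a positive convex quadratic denominator on the simplex
f = lambda p: 2.0 * p[0] + 1.0 * p[1] + 0.5 * p[2]
g = lambda p: 1.0 + p @ np.diag([2.0, 1.0, 3.0]) @ p
simplex = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}]
x_opt, ratio = dinkelbach(f, g, np.ones(3) / 3.0, [(0.0, 1.0)] * 3, simplex)
print(x_opt, ratio)
```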
Performance bounds on parallel self-initiating discrete-event simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
The use is considered of massively parallel architectures to execute discrete-event simulations of what is termed self-initiating models. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance is considered of various synchronization protocols by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.
Bounds on the Coupling of the Majoron to Light Neutrinos from Supernova Cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farzan, Yasaman
2002-12-02
We explore the role of Majoron (J) emission in the supernova cooling process as a source of upper bounds on the neutrino-Majoron coupling. We show that the strongest upper bound on the coupling to ν3 comes from the νeνe → J process in the core of a supernova. We also find bounds on diagonal couplings of the Majoron to νμ(τ)νμ(τ) and on off-diagonal νeνμ(τ) couplings in various regions of the parameter space. We discuss the evaluation of cross-sections for four-particle interactions (νν → JJ and νJ → νJ). We show that these are typically dominated by three-particle sub-processes and do not give new independent constraints.
Multi-shell model of ion-induced nucleic acid condensation
NASA Astrophysics Data System (ADS)
Tolokh, Igor S.; Drozdetski, Aleksander V.; Pollack, Lois; Baker, Nathan A.; Onufriev, Alexey V.
2016-04-01
We present a semi-quantitative model of condensation of short nucleic acid (NA) duplexes induced by trivalent cobalt(III) hexammine (CoHex) ions. The model is based on partitioning of the bound counterion distribution around a single NA duplex into "external" and "internal" ion binding shells distinguished by the proximity to the duplex helical axis. In the aggregated phase the shells overlap, which leads to significantly increased attraction of CoHex ions in these overlaps with the neighboring duplexes. The duplex aggregation free energy is decomposed into attractive and repulsive components in such a way that they can be represented by simple analytical expressions with parameters derived from molecular dynamics simulations and numerical solutions of the Poisson equation. The attractive term depends on the fractions of bound ions in the overlapping shells and the affinity of CoHex to the "external" shell of the nearly neutralized duplex. The repulsive components of the free energy are the duplex configurational entropy loss upon aggregation and the electrostatic repulsion of the duplexes that remains after neutralization by bound CoHex ions. The estimates of the aggregation free energy are consistent with the experimental range of NA duplex condensation propensities, including the unusually poor condensation of RNA structures and subtle sequence effects upon DNA condensation. The model predicts that, in contrast to DNA, RNA duplexes may condense into tighter packed aggregates with a higher degree of duplex neutralization. An appreciable CoHex-mediated RNA-RNA attraction requires closer inter-duplex separation to engage CoHex ions (bound mostly in the "internal" shell of RNA) into short-range attractive interactions. The model also predicts that longer NA fragments will condense more readily than shorter ones. The ability of this model to explain experimentally observed trends in NA condensation lends support to the proposed NA condensation picture based on the multivalent "ion binding shells."
A linear programming approach to max-sum problem: a review.
Werner, Tomás
2007-07-01
The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
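For readers unfamiliar with it, the linear programming relaxation referred to above (the local-polytope, or Schlesinger, relaxation) can be stated compactly as:

```latex
% LP relaxation of the max-sum problem with unary terms \theta_u and pairwise terms \theta_{uv}:
\max_{\mu \ge 0} \;\sum_{u}\sum_{x_u} \theta_u(x_u)\,\mu_u(x_u)
              + \sum_{(u,v)}\sum_{x_u,x_v} \theta_{uv}(x_u,x_v)\,\mu_{uv}(x_u,x_v)
\quad \text{s.t.} \quad
\sum_{x_v} \mu_{uv}(x_u,x_v) = \mu_u(x_u), \qquad \sum_{x_u}\mu_u(x_u) = 1 .
```

Its dual corresponds to minimizing the upper bound over equivalent transformations (reparametrizations), which is the connection reviewed in the paper.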
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest to use kernel k-means sampling, which is shown in our work to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of data points in kernel space plus a constant. Thus, the k-means centers of data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both Gaussian kernel and polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
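The following Python sketch illustrates the Nyström construction with cluster centers as landmarks; ordinary input-space k-means is used here as a convenient stand-in for the kernel k-means sampling proposed in the paper, and the data and kernel parameters are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.random((500, 5))                                   # toy data
m, gamma = 20, 0.5

landmarks = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_
C = rbf_kernel(X, landmarks, gamma=gamma)                  # n x m cross-kernel
W = rbf_kernel(landmarks, landmarks, gamma=gamma)          # m x m landmark kernel
K_nystrom = C @ np.linalg.pinv(W) @ C.T                    # rank-m approximation of K

K_exact = rbf_kernel(X, X, gamma=gamma)
rel_err = np.linalg.norm(K_exact - K_nystrom, "fro") / np.linalg.norm(K_exact, "fro")
print(f"relative Frobenius error: {rel_err:.4f}")
```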
NASA Astrophysics Data System (ADS)
Pang, Yi; Rong, Junchen; Su, Ning
2016-12-01
We consider φ³ theory in 6 − 2ε dimensions with F4 global symmetry. The beta function is calculated up to 3 loops, and a stable unitary IR fixed point is observed. The anomalous dimensions of operators quadratic or cubic in φ are also computed. We then employ the conformal bootstrap technique to study the fixed point predicted from the perturbative approach. For each putative scaling dimension of φ (Δφ), we obtain the corresponding upper bound on the scaling dimension of the second-lowest scalar primary in the 26 representation (Δ26^(2nd)) which appears in the OPE of φ × φ. In D = 5.95, we observe a sharp peak on the upper bound curve located at Δφ equal to the value predicted by the 3-loop computation. In D = 5, we observe a weak kink on the upper bound curve at (Δφ, Δ26^(2nd)) = (1.6, 4).
Strong polygamy of quantum correlations in multi-party quantum systems
NASA Astrophysics Data System (ADS)
Kim, Jeong San
2014-10-01
We propose a new type of polygamy inequality for multi-party quantum entanglement. We first consider the possible amount of bipartite entanglement distributed between a fixed party and any subset of the rest parties in a multi-party quantum system. By using the summation of these distributed entanglements, we provide an upper bound of the distributed entanglement between a party and the rest in multi-party quantum systems. We then show that this upper bound also plays as a lower bound of the usual polygamy inequality, therefore the strong polygamy of multi-party quantum entanglement. For the case of multi-party pure states, we further show that the strong polygamy of entanglement implies the strong polygamy of quantum discord.
Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems
NASA Astrophysics Data System (ADS)
Tobasco, Ian; Goluskin, David; Doering, Charles R.
2018-02-01
For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
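The auxiliary-function bound referred to above can be stated in one line; this is the standard form of the argument, reproduced here for context:

```latex
% For \dot{x} = f(x) and a quantity of interest \Phi(x): if a differentiable auxiliary
% function V satisfies
\nabla V(x)\cdot f(x) + \Phi(x) \;\le\; U \quad \text{for all } x,
% then, integrating along any bounded trajectory and using (V(x(T)) - V(x(0)))/T \to 0,
\limsup_{T\to\infty}\; \frac{1}{T}\int_0^T \Phi\bigl(x(t)\bigr)\,dt \;\le\; U .
```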
Determination of the Cosmic Infrared Background from COBE/FIRAS and Planck HFI Data
NASA Astrophysics Data System (ADS)
Kogut, Alan
Current determinations of the cosmic infrared background (CIB) at far-infrared to millimeter wavelengths have large uncertainties, on the order of 30%. We propose to make new, more accurate determinations of the CIB at these wavelengths using COBE/FIRAS and Planck High Frequency Instrument (HFI) data. This work will enable a factor of two improvement in our understanding of the CIB. Planck was not designed to measure the monopole component of sky brightness, so the FIRAS data will be used to recalibrate the zero level of the HFI maps. Correlation of the recalibrated HFI maps with Galactic H I 21-cm line emission will be used to separate the Galactic foreground emission and determine the CIB in the HFI bands from 217 to 857 GHz, or 1380 to 350 microns. The high angular resolution and sensitivity of the HFI data will allow the correlations with H I to be established more accurately and to lower H I column density than is possible with the 7° resolution FIRAS data, resulting in significant improvement in the accuracy of the derived CIB. Correlations of the CIB-subtracted 857 GHz map with FIRAS maps averaged over broad frequency bins will then be used to determine CIB values at frequencies not observed by Planck. Uncertainties in the CIB results are expected to be as low as 14% for the HFI 857 GHz band. Our results will allow more accurate determination of the fraction of the CIB that is resolved by deep source surveys, and a tighter limit to be placed on the contribution to the CIB of any diffuse emission such as emission from intergalactic dust. Possible gray extinction by intergalactic dust may produce significant systematic error in determinations of dark energy parameters from type Ia supernova measurements, and our results will be important for placing a tighter upper limit on such extinction. Our CIB results will also provide tighter constraints on models of the evolution of star-forming galaxies, and will be important in constraining the evolution in density and luminosity of ultraluminous starburst galaxies at high redshift.
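A schematic of the HI-correlation step described above: the zero-level (monopole) term is recovered as the intercept of a linear fit of map brightness against HI column density. All numbers in the sketch below are synthetic and purely illustrative.

```python
import numpy as np

# synthetic illustration: per-pixel brightness = CIB monopole + dust emissivity * N(HI) + noise
rng = np.random.default_rng(1)
n_hi = rng.uniform(0.5e20, 4.0e20, 5000)       # HI column density, cm^-2 (hypothetical range)
cib_true, emissivity = 0.50, 1.2e-21           # MJy/sr and MJy/sr per cm^-2 (hypothetical)
sky = cib_true + emissivity * n_hi + rng.normal(0.0, 0.05, n_hi.size)

# linear correlation with HI; the intercept at N(HI) = 0 estimates the isotropic CIB level
slope, intercept = np.polyfit(n_hi, sky, 1)
print(f"recovered CIB monopole: {intercept:.3f} MJy/sr (input {cib_true})")
```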
NASA Technical Reports Server (NTRS)
Stothers, Richard B.
1991-01-01
This study presents the results of 14 tests for the presence of convective overshooting in large convecting stellar cores for stars with masses of 4-17 solar masses which are members of detached close binary systems and of open clusters in the Galaxy. A large body of theoretical and observational data is scrutinized and subjected to averaging in order to minimize accidental and systematic errors. A conservative upper limit of d/H_P < 0.4 is found from at least four tests, as well as a tighter upper limit of d/H_P < 0.2 from one good test that is subject to only mild restrictions and is based on the maximum observed effective temperature of evolved blue supergiants. It is concluded that any current uncertainty about the distance scale for these stars is unimportant in conducting the present tests for convective core overshooting. The correct effective temperature scale for the B0.5-B2 stars is almost certainly close to one of the proposed hot scales.
Upper bounds on quantum uncertainty products and complexity measures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrero, Angel; Sanchez-Moreno, Pablo; Dehesa, Jesus S.
The position-momentum Shannon and Renyi uncertainty products of general quantum systems are shown to be bounded not only from below (through the known uncertainty relations), but also from above in terms of the Heisenberg-Kennard product. Moreover, the Cramer-Rao, Fisher-Shannon, and Lopez-Ruiz, Mancini, and Calbet shape measures of complexity (whose lower bounds have been recently found) are also bounded from above. The improvement of these bounds for systems subject to spherically symmetric potentials is also explicitly given. Finally, applications to hydrogenic and oscillator-like systems are done.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tyson, Jon
2009-06-15
Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.
Structure of Mandelate Racemase with Bound Intermediate Analogues Benzohydroxamate and Cupferron†
Lietzan, Adam D.; Nagar, Mitesh; Pellmann, Elise A.; Bourque, Jennifer R.; Bearne, Stephen L.; St Maurice, Martin
2012-01-01
Mandelate racemase (MR, EC 5.1.2.2) from Pseudomonas putida catalyzes the Mg2+-dependent interconversion of the enantiomers of mandelate, stabilizing the altered substrate in the transition state by 26 kcal/mol relative to the substrate in the ground state. To understand the origins of this binding discrimination, we solved the X-ray crystal structures of wild-type MR complexed with two analogues of the putative aci-carboxylate intermediate, benzohydroxamate and cupferron, to 2.2-Å resolution. Benzohydroxamate is shown to be a reasonable mimic of the transition state/intermediate since its binding affinity to 21 MR variants correlates well with changes in the free energy of transition state stabilization afforded by these variants. Both benzohydroxamate and cupferron chelate the active site divalent metal ion and are bound in a conformation with the phenyl ring coplanar with the hydroxamate and diazeniumdiolate moieties, respectively. Structural overlays of MR complexed with benzohydroxamate, cupferron, and the ground state analogue (S)-atrolacatate reveal that the para-carbon of the substrate phenyl ring moves by 0.8–1.2 Å between the ground state and intermediate state, consistent with the proposal that the phenyl ring moves during MR catalysis while the polar groups remain relatively fixed. Although the overall protein structure of MR with bound intermediate analogues is very similar to MR with bound (S)-atrolactate, the intermediate-Mg2+ distance shortens, suggesting a tighter complex with the catalytic Mg2+. In addition, Tyr 54 moves nearer to the phenyl ring of the bound intermediate analogues, contributing to an overall constriction of the active site cavity. However, site-directed mutagenesis experiments revealed that the role of Tyr 54 in MR catalysis is relatively minor, suggesting that alterations in enzyme structure that contribute to discrimination between the altered substrate in the transition state and the ground state by this proficient enzyme are extremely subtle. PMID:22264153
NASA Astrophysics Data System (ADS)
Vukičević, Damir; Đurđević, Jelena
2011-10-01
Bond incident degree index is a descriptor that is calculated as the sum of bond contributions such that each bond contribution depends solely on the degrees of its incident vertices (e.g. Randić index, Zagreb index, modified Zagreb index, variable Randić index, atom-bond connectivity index, augmented Zagreb index, sum-connectivity index, many Adriatic indices, and many variable Adriatic indices). In this Letter we find tight upper and lower bounds on the bond incident degree index for catacondensed fluoranthenes with a given number of hexagons.
Beating the photon-number-splitting attack in practical quantum cryptography.
Wang, Xiang-Bin
2005-06-17
We propose an efficient method to verify the upper bound of the fraction of counts caused by multiphoton pulses in practical quantum key distribution using weak coherent light, given whatever type of Eve's action. The protocol simply uses two coherent states for the signal pulses and vacuum for the decoy pulse. Our verified upper bound is sufficiently tight for quantum key distribution with a very lossy channel, in both the asymptotic and nonasymptotic case. So far our protocol is the only decoy-state protocol that works efficiently for currently existing setups.
The local interstellar helium density - Corrected
NASA Technical Reports Server (NTRS)
Freeman, J.; Paresce, F.; Bowyer, S.
1979-01-01
An upper bound for the number density of neutral helium in the local interstellar medium of 0.004 ± 0.0022 cm⁻³ was previously reported, based on extreme-ultraviolet telescope observations at 584 Å made during the 1975 Apollo-Soyuz Test Project. A variety of evidence is found which indicates that the 584-Å sensitivity of the instrument declined by a factor of 2 between the last laboratory calibration and the time of the measurements. The upper bound on the helium density is therefore revised to 0.0089 ± 0.005 cm⁻³.
Planck limits on non-canonical generalizations of large-field inflation models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Nina K.; Kinney, William H., E-mail: ninastei@buffalo.edu, E-mail: whkinney@buffalo.edu
2017-04-01
In this paper, we consider two case examples of Dirac-Born-Infeld (DBI) generalizations of canonical large-field inflation models, characterized by a reduced sound speed, c_S < 1. The reduced speed of sound lowers the tensor-scalar ratio, improving the fit of the models to the data, but increases the equilateral-mode non-Gaussianity, f_NL^equil, which the latest results from the Planck satellite constrain by a new upper bound. We examine constraints on these models in light of the most recent Planck and BICEP/Keck results, and find that they have a greatly decreased window of viability. The upper bound on f_NL^equil corresponds to a lower bound on the sound speed and a corresponding lower bound on the tensor-scalar ratio of r ∼ 0.01, so that near-future Cosmic Microwave Background observations may be capable of ruling out entire classes of DBI inflation models. The result is, however, not universal: infrared-type DBI inflation models, where the speed of sound increases with time, are not subject to the bound.
Circuit bounds on stochastic transport in the Lorenz equations
NASA Astrophysics Data System (ADS)
Weady, Scott; Agarwal, Sahil; Wilen, Larry; Wettlaufer, J. S.
2018-07-01
In turbulent Rayleigh-Bénard convection one seeks the relationship between the heat transport, captured by the Nusselt number, and the temperature drop across the convecting layer, captured by the Rayleigh number. In experiments, one measures the Nusselt number for a given Rayleigh number, and the question of how close that value is to the maximal transport is a key prediction of variational fluid mechanics in the form of an upper bound. The Lorenz equations have traditionally been studied as a simplified model of turbulent Rayleigh-Bénard convection, and hence it is natural to investigate their upper bounds, which has previously been done numerically and analytically, but they are not as easily accessible in an experimental context. Here we describe a specially built circuit that is the experimental analogue of the Lorenz equations and compare its output to the recently determined upper bounds of the stochastic Lorenz equations [1]. The circuit is substantially more efficient than computational solutions, and hence we can more easily examine the system. Because of offsets that appear naturally in the circuit, we are motivated to study unique bifurcation phenomena that arise as a result. Namely, for a given Rayleigh number, we find a reentrant behavior of the transport on noise amplitude and this varies with Rayleigh number passing from the homoclinic to the Hopf bifurcation.
Heskes, Tom; Eisinga, Rob; Breitling, Rainer
2014-11-21
The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Both exact calculation and permutation and gamma approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy than existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis on transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order of magnitude increase in throughput as compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip.
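As a rough illustration of the statistic in question (not the exact bounds or the fast approximation derived in the paper), a minimal permutation-based estimate of rank product p-values can be sketched as follows:

```python
import numpy as np

def rank_product(data):
    """Geometric mean of within-replicate ranks (1 = most down-regulated); data is genes x replicates."""
    ranks = data.argsort(axis=0).argsort(axis=0) + 1.0
    return np.exp(np.log(ranks).mean(axis=1))

def rp_pvalues(data, n_perm=200, seed=0):
    """Permutation estimate of P(RP_null <= RP_obs) for each gene."""
    rng = np.random.default_rng(seed)
    obs = rank_product(data)
    null = np.concatenate([
        rank_product(np.apply_along_axis(rng.permutation, 0, data))
        for _ in range(n_perm)
    ])
    return np.searchsorted(np.sort(null), obs, side="right") / null.size

# toy example: 1000 genes, 4 replicates of log fold changes
data = np.random.default_rng(2).normal(size=(1000, 4))
print(rp_pvalues(data)[:5])
```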
Energy Bounds for a Compressed Elastic Film on a Substrate
NASA Astrophysics Data System (ADS)
Bourne, David P.; Conti, Sergio; Müller, Stefan
2017-04-01
We study pattern formation in a compressed elastic film which delaminates from a substrate. Our key tool is the determination of rigorous upper and lower bounds on the minimum value of a suitable energy functional. The energy consists of two parts, describing the two main physical effects. The first part represents the elastic energy of the film, which is approximated using the von Kármán plate theory. The second part represents the fracture or delamination energy, which is approximated using the Griffith model of fracture. A simpler model containing the first term alone was previously studied with similar methods by several authors, assuming that the delaminated region is fixed. We include the fracture term, transforming the elastic minimisation into a free boundary problem, and opening the way for patterns which result from the interplay of elasticity and delamination. After rescaling, the energy depends on only two parameters: the rescaled film thickness, σ, and a measure of the bonding strength between the film and substrate, γ. We prove upper bounds on the minimum energy of the form σ^a γ^b and find that there are four different parameter regimes corresponding to different values of a and b and to different folding patterns of the film. In some cases, the upper bounds are attained by self-similar folding patterns as observed in experiments. Moreover, for two of the four parameter regimes we prove matching, optimal lower bounds.
Influence of menhaden oil on mitochondrial respiration in BHE rats.
Kim, M J; Berdanier, C D
1989-11-01
The effects of corn or menhaden oil and thyroxine treatment on hepatic mitochondrial respiration were studied. BHE rats were fed a 64% sucrose diet containing 6% corn oil or menhaden oil until they were 60-70 days of age. Succinate-supported mitochondrial respiration was studied at 3 degrees C intervals from 4 to 40 degrees C. Upper and lower activation energies and transition temperatures were determined through calculation of Arrhenius plots. Menhaden oil plus daily thyroxine injection resulted in higher and lower activation energies than the other treatments. This combined treatment also resulted in lower state 3 and higher state 4 respiration rates and tighter coupling of respiration to ATP synthesis. These effects were thought to be due to the effect this treatment combination had on membrane fluidity.
Tighter monogamy relations of quantum entanglement for multiqubit W-class states
NASA Astrophysics Data System (ADS)
Jin, Zhi-Xiang; Fei, Shao-Ming
2018-01-01
Monogamy relations characterize the distributions of entanglement in multipartite systems. We investigate monogamy relations for multiqubit generalized W-class states. We present new analytical monogamy inequalities for the concurrence of assistance, which are shown to be tighter than the existing ones. Furthermore, analytical monogamy inequalities are obtained for the negativity of assistance.
Saturn's very axisymmetric magnetic field: No detectable secular variation or tilt
NASA Astrophysics Data System (ADS)
Cao, Hao; Russell, Christopher T.; Christensen, Ulrich R.; Dougherty, Michele K.; Burton, Marcia E.
2011-04-01
Saturn is the only planet in the solar system whose observed magnetic field is highly axisymmetric. At least a small deviation from perfect symmetry is required for a dynamo-generated magnetic field. Analyzing more than six years of magnetometer data obtained by Cassini close to the planet, we show that Saturn's observed field is much more axisymmetric than previously thought. We invert the magnetometer observations that were obtained in the "current-free" inner magnetosphere for an internal model, varying the assumed unknown rotation rate of Saturn's deep interior. No unambiguous non-axially symmetric magnetic moment is detected, with a new upper bound on the dipole tilt of 0.06°. An axisymmetric internal model with Schmidt-normalized spherical harmonic coefficients g10 = 21,191 ± 24 nT, g20 = 1586 ± 7 nT, and g30 = 2374 ± 47 nT is derived from these measurements; the upper bounds on the axial degree 4 and 5 terms are 720 nT and 3200 nT, respectively. The secular variation over the last 30 years is within the probable error of each term from degree 1 to 3, and the upper bounds are an order of magnitude smaller than the corresponding terrestrial terms for degrees 1 and 2. Differentially rotating conducting stable layers above Saturn's dynamo region have been proposed to symmetrize the magnetic field (Stevenson, 1982). The new upper bound on the dipole tilt implies that this stable layer must have a thickness L >= 4000 km, and this thickness is consistent with our weak secular variation observations.
Biodegradation kinetics for pesticide exposure assessment.
Wolt, J D; Nelson, H P; Cleveland, C B; van Wesenbeeck, I J
2001-01-01
Understanding pesticide risks requires characterizing pesticide exposure within the environment in a manner that can be broadly generalized across widely varied conditions of use. The coupled processes of sorption and soil degradation are especially important for understanding the potential environmental exposure of pesticides. The data obtained from degradation studies are inherently variable and, when limited in extent, lend uncertainty to exposure characterization and risk assessment. Pesticide decline in soils reflects dynamically coupled processes of sorption and degradation that add complexity to the treatment of soil biodegradation data from a kinetic perspective. Additional complexity arises from study design limitations that may not fully account for the decline in microbial activity of test systems, or that may be inadequate for considerations of all potential dissipation routes for a given pesticide. Accordingly, kinetic treatment of data must accommodate a variety of differing approaches starting with very simple assumptions as to reaction dynamics and extending to more involved treatments if warranted by the available experimental data. Selection of the appropriate kinetic model to describe pesticide degradation should rely on statistical evaluation of the data fit to ensure that the models used are not overparameterized. Recognizing the effects of experimental conditions and methods for kinetic treatment of degradation data is critical for making appropriate comparisons among pesticide biodegradation data sets. Assessment of variability in soil half-life among soils is uncertain because for many pesticides the data on soil degradation rate are limited to one or two soils. Reasonable upper-bound estimates of soil half-life are necessary in risk assessment so that estimated environmental concentrations can be developed from exposure models. Thus, an understanding of the variable and uncertain distribution of soil half-lives in the environment is necessary to estimate bounding values. Statistical evaluation of measures of central tendency for multisoil kinetic studies shows that geometric means better represent the distribution in soil half-lives than do the arithmetic or harmonic means. Estimates of upper-bound soil half-life values based on the upper 90% confidence bound on the geometric mean tend to accurately represent the upper bound when pesticide degradation rate is biologically driven but appear to overestimate the upper bound when there is extensive coupling of biodegradation with sorptive processes. The limited data available comparing distribution in pesticide soil half-lives between multisoil laboratory studies and multilocation field studies suggest that the probability density functions are similar. Thus, upper-bound estimates of pesticide half-life determined from laboratory studies conservatively represent pesticide biodegradation in the field environment for the purposes of exposure and risk assessment. International guidelines and approaches used for interpretations of soil biodegradation reflect many common elements, but differ in how the source and nature of variability in soil kinetic data are considered. Harmonization of approaches for the use of soil biodegradation data will improve the interpretative power of these data for the purposes of exposure and risk assessment.
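As an illustration of the kind of upper-bound estimate described here, the sketch below computes the geometric mean of a set of hypothetical multisoil half-lives and a one-sided upper 90% confidence bound on that geometric mean via a t-interval on the log-transformed data; the exact statistical procedure used in the assessments discussed above may differ.

```python
import numpy as np
from scipy import stats

def geometric_mean_upper_bound(half_lives, confidence=0.90):
    """Geometric mean of soil half-lives and a one-sided upper confidence
    bound on it, computed from a t-interval on the log-transformed data."""
    logs = np.log(np.asarray(half_lives, dtype=float))
    n = logs.size
    mean, se = logs.mean(), logs.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(confidence, df=n - 1)
    return np.exp(mean), np.exp(mean + t_crit * se)

if __name__ == "__main__":
    half_lives_days = [12.0, 25.0, 8.0, 40.0, 18.0]   # hypothetical multisoil study
    gm, upper = geometric_mean_upper_bound(half_lives_days)
    print(f"geometric mean = {gm:.1f} d, upper 90% bound = {upper:.1f} d")
```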
Jarzynski equality: connections to thermodynamics and the second law.
Palmieri, Benoit; Ronis, David
2007-01-01
The one-dimensional expanding ideal gas model is used to compute the exact nonequilibrium distribution function. The state of the system during the expansion is defined in terms of local thermodynamics quantities. The final equilibrium free energy, obtained a long time after the expansion, is compared against the free energy that appears in the Jarzynski equality. Within this model, where the Jarzynski equality holds rigorously, the free energy change that appears in the equality does not equal the actual free energy change of the system at any time of the process. More generally, the work bound that is obtained from the Jarzynski equality is an upper bound to the upper bound that is obtained from the first and second laws of thermodynamics. The cancellation of the dissipative (nonequilibrium) terms that result in the Jarzynski equality is shown in the framework of response theory. This is used to show that the intuitive assumption that the Jarzynski work bound becomes equal to the average work done when the system evolves quasistatically is incorrect under some conditions.
More on the decoder error probability for Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1987-01-01
The decoder error probability for Reed-Solomon codes (more generally, linear maximum distance separable codes) is examined. McEliece and Swanson offered an upper bound on P_E(u), the decoder error probability given that u symbol errors occur. This upper bound is slightly greater than Q, the probability that a completely random error pattern will cause decoder error. By using a combinatoric technique, the principle of inclusion and exclusion, an exact formula for P_E(u) is derived. The values of P_E(u) for the (255, 223) Reed-Solomon code used by NASA, and for the (31, 15) Reed-Solomon code (JTIDS code), are calculated using the exact formula, and P_E(u) is observed to approach Q rapidly as u grows. An upper bound for the expression is derived and is shown to decrease nearly exponentially as u increases. This proves analytically that P_E(u) indeed approaches Q as u becomes large, and some laws of large numbers come into play.
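The reference quantity Q can be estimated with a standard sphere-counting argument: a bounded-distance decoder correcting t = (n - k)/2 symbol errors accepts roughly a fraction V_q(n, t)/q^(n-k) of all random words, where V_q(n, t) is the volume of a Hamming ball of radius t. The sketch below computes this estimate for the two codes mentioned; it does not reproduce the exact inclusion-exclusion formula for P_E(u) derived in the report.

```python
from math import comb

def hamming_ball_volume(n, t, q):
    """Number of words within Hamming distance t of a fixed word over GF(q)."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))

def q_random_error(n, k, q):
    """Sphere-counting estimate of Q, the probability that a completely
    random received word is decoded (to a wrong codeword) by a bounded
    distance decoder correcting t = (n - k) // 2 symbol errors."""
    t = (n - k) // 2
    return hamming_ball_volume(n, t, q) / q ** (n - k)

if __name__ == "__main__":
    print("(255,223) RS code:", q_random_error(255, 223, 256))
    print("(31,15)  RS code:", q_random_error(31, 15, 32))
```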
Lorenz curves in a new science-funding model
NASA Astrophysics Data System (ADS)
Huang, Ding-wei
2017-12-01
We propose an agent-based model to theoretically and systematically explore the implications of a new approach to funding science, which has been suggested recently by J. Bollen et al. We introduce various parameters and examine their effects. The concentration of funding is shown by the Lorenz curve and the Gini coefficient. In this model, all scientists are treated equally and follow the well-intended regulations. All scientists give a fixed ratio of their funding to others. This fixed ratio becomes an upper bound for the Gini coefficient. We observe two distinct regimes in the parameter space: valley and plateau. In the valley regime, the fluidity of funding is significant. The Lorenz curve is smooth. The Gini coefficient is well below the upper bound. The funding distribution is the desired result. In the plateau regime, the cumulative advantage is significant. The Lorenz curve has a sharp turn. The Gini coefficient saturates to the upper bound. The undue concentration of funding happens swiftly. The funding distribution is the undesired result, with a minority of scientists taking the majority of funding. Phase transitions between these two regimes are discussed.
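The two concentration measures used in the model are straightforward to compute from a funding vector. The sketch below evaluates the Lorenz curve and the Gini coefficient for hypothetical funding distributions; it illustrates the measures only and is not the agent-based model itself.

```python
import numpy as np

def lorenz_curve(funding):
    """Cumulative share of total funding held by the poorest fraction
    of scientists (points of the Lorenz curve, starting at 0)."""
    f = np.sort(np.asarray(funding, dtype=float))
    cum = np.cumsum(f) / f.sum()
    return np.insert(cum, 0, 0.0)

def gini(funding):
    """Gini coefficient as twice the area between the Lorenz curve and
    the diagonal of perfect equality."""
    curve = lorenz_curve(funding)
    x = np.linspace(0.0, 1.0, curve.size)
    return 1.0 - 2.0 * np.trapz(curve, x)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    equal_world = np.full(1000, 100.0)
    skewed_world = rng.pareto(1.5, 1000) + 1.0   # heavy-tailed funding
    print(gini(equal_world), gini(skewed_world))
```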
Expected performance of m-solution backtracking
NASA Technical Reports Server (NTRS)
Nicol, D. M.
1986-01-01
This paper derives upper bounds on the expected number of search tree nodes visited during an m-solution backtracking search, a search which terminates after some preselected number m of problem solutions are found. The search behavior is assumed to have a general probabilistic structure. The results are stated in terms of node expansion and contraction. A visited search tree node is said to be expanding if the mean number of its children visited by the search exceeds 1 and is contracting otherwise. It is shown that if every node expands, or if every node contracts, then the number of search tree nodes visited by a search has an upper bound which is linear in the depth of the tree, in the mean number of children a node has, and in the number of solutions sought. Also derived are bounds linear in the depth of the tree in some situations where an upper portion of the tree contracts (expands), while the lower portion expands (contracts). While previous analyses of 1-solution backtracking have concluded that the expected performance is always linear in the tree depth, the present model allows for superlinear expected performance.
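A minimal sketch of the search procedure being analysed (not of the probabilistic analysis) is given below: a depth-first backtracking search that terminates after m solutions and reports the number of visited nodes. The toy problem and interfaces are assumptions for the example.

```python
def m_solution_backtrack(children, is_solution, root, m):
    """Depth-first backtracking that stops after m solutions are found.
    Returns (solutions, nodes_visited); children(node) enumerates the
    successors of a node and is_solution(node) tests for a solution."""
    solutions, visited = [], 0
    stack = [root]
    while stack and len(solutions) < m:
        node = stack.pop()
        visited += 1
        if is_solution(node):
            solutions.append(node)
            continue
        stack.extend(reversed(list(children(node))))
    return solutions, visited

if __name__ == "__main__":
    # Toy search tree: binary strings of length 6 with exactly three ones.
    children = lambda s: [s + "0", s + "1"] if len(s) < 6 else []
    is_solution = lambda s: len(s) == 6 and s.count("1") == 3
    sols, visited = m_solution_backtrack(children, is_solution, "", m=5)
    print(sols, visited)
```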
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1990-01-01
An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.
NASA Astrophysics Data System (ADS)
Dong, Yuan; Li, Qian P.; Wu, Zhengchao; Zhang, Jia-Zhong
2016-12-01
Export fluxes of phosphorus (P) by sinking particles are important in studying ocean biogeochemical dynamics, whereas their composition and temporal variability are still inadequately understood in the global oceans, including the northern South China Sea (NSCS). A time-series study of particle fluxes was conducted at a mooring station adjacent to the Xisha Trough in the NSCS from September 2012 to September 2014, with sinking particles collected every two weeks by two sediment traps deployed at 500 m and 1500 m depths. Five operationally defined particulate P classes of sinking particles, including loosely-bound P, Fe-bound P, CaCO3-bound P, detrital apatite P, and refractory organic P, were quantified by a sequential extraction method (SEDEX). Our results revealed substantial variability in sinking particulate P composition at the Xisha over two years of sampling. Particulate inorganic P was largely contributed from Fe-bound P in the upper trap, but detrital P in the lower trap. Particulate organic P, including exchangeable organic P, CaCO3-bound organic P, and refractory organic P, contributed up to 50-55% of total sinking particulate P. An increase of CaCO3-bound P in the upper trap during 2014 could be related to a strong El Niño event with enhanced CaCO3 deposition. We also found sediment resuspension to be responsible for the unusually high particle fluxes at the lower trap, based on analyses of a two-component mixing model. The average total mass flux at the upper trap was 78 ± 50 mg m^-2 d^-1 during the study period. A significant correlation between integrated primary productivity in the region and particle fluxes at 500 m of the station suggested the important role of biological production in controlling the concentration, composition, and export fluxes of sinking particulate P in the NSCS.
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
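A minimal sketch of this calibration, assuming a simple Poisson counting model with known background, is given below: the detection threshold is fixed by the acceptable Type I error, and the upper limit is the smallest source intensity detected with probability at least 1 - β at that threshold. The background level and the α and β values are illustrative.

```python
from scipy.stats import poisson

def detection_threshold(background, alpha=1e-3):
    """Smallest count threshold whose false-positive probability under the
    background-only Poisson model does not exceed alpha (Type I error)."""
    k = 0
    while poisson.sf(k - 1, background) > alpha:   # P(N >= k | background)
        k += 1
    return k

def upper_limit(background, alpha=1e-3, beta=0.5):
    """Minimum source intensity detected with probability >= 1 - beta at
    the threshold set by alpha. Simple grid search for illustration."""
    k = detection_threshold(background, alpha)
    s = 0.0
    while poisson.sf(k - 1, background + s) < 1.0 - beta:
        s += 0.01
    return s

if __name__ == "__main__":
    print(detection_threshold(3.0), upper_limit(3.0, beta=0.5))
```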
Performance analysis for minimally nonlinear irreversible refrigerators at finite cooling power
NASA Astrophysics Data System (ADS)
Long, Rui; Liu, Zhichun; Liu, Wei
2018-04-01
The coefficient of performance (COP) of general refrigerators at finite cooling power has been systematically studied through the minimally nonlinear irreversible model, and its lower and upper bounds in different operating regions have been proposed. Under the tight coupling conditions, we have calculated the universal COP bounds under the χ figure of merit in different operating regions. When the refrigerator operates in the region with lower external flux, we obtain the general bounds 0 < ε < (√(9 + 8ε_C) - 3)/2 under the χ figure of merit, where ε_C is the Carnot COP. We have also calculated the universal bounds for the maximum gain in COP under different operating regions to give further insight into the COP gain when the cooling power is moved away from its maximum. When the refrigerator operates in the region located between maximum cooling power and maximum COP with lower external flux, the upper bound for the COP and the lower bound for the relative gain in COP take large values, compared to a relatively small loss from the maximum cooling power. If the cooling power is the main objective, it is desirable to operate the refrigerator at a cooling power slightly below the maximum, where a small loss in cooling power induces a much larger COP enhancement.
Abbas, Ash Mohammad
2012-01-01
In this paper, we describe some bounds and inequalities relating the h-index, g-index, e-index, and generalized impact factor. We derive the bounds and inequalities relating these indexing parameters from their basic definitions and without assuming any continuous model to be followed by any of them. We verify the theorems using citation data for five Price Medalists. We observe that the lower bound for the h-index given by Theorem 2, [formula: see text], g ≥ 1, comes out to be more accurate as compared to the Schubert-Glanzel relation h ∝ C^(2/3) P^(-1/3) for a proportionality constant of 1, where C is the number of citations and P is the number of papers referenced. Also, the values of h-index obtained using Theorem 2 outperform those obtained using the Egghe-Liang-Rousseau power law model for the given citation data of Price Medalists. Further, we computed the values of the upper bound on the g-index given by Theorem 3, g ≤ (h + e), where e denotes the value of the e-index. We observe that the upper bound on the g-index given by Theorem 3 is reasonably tight for the given citation record of Price Medalists.
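A small sketch of the indices involved, using their standard definitions (largest h such that h papers are cited at least h times; largest g whose top g papers collect at least g^2 citations; Zhang's e-index for the excess citations of the h-core), is given below so the inequality g ≤ h + e can be checked on a hypothetical citation record; the theorem numbering above refers to the paper itself.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

def g_index(citations):
    """Largest g such that the g most-cited papers have >= g^2 citations in total
    (capped at the number of papers in this simple sketch)."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

def e_index(citations):
    """Zhang's e-index: e^2 equals the excess citations of the h-core beyond h^2."""
    h = h_index(citations)
    cites = sorted(citations, reverse=True)
    return (sum(cites[:h]) - h * h) ** 0.5

if __name__ == "__main__":
    record = [50, 40, 33, 20, 19, 8, 5, 3, 1, 0]   # hypothetical citation record
    h, g, e = h_index(record), g_index(record), e_index(record)
    print(h, g, e, "g <= h + e:", g <= h + e)
```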
Reverse preferential spread in complex networks
NASA Astrophysics Data System (ADS)
Toyoizumi, Hiroshi; Tani, Seiichi; Miyoshi, Naoto; Okamoto, Yoshio
2012-08-01
Large-degree nodes may have a larger influence on the network, but they can be bottlenecks for spreading information, since spreading attempts tend to concentrate on these nodes and become redundant. We argue that the reverse preferential spread (distributing information inversely proportional to the degree of the receiving node) has an advantage over other spread mechanisms. In large uncorrelated networks, we show that the mean number of nodes that receive information under the reverse preferential spread is an upper bound over all other weight-based spread mechanisms, and this upper bound is indeed a logistic growth independent of the degree distribution.
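The flavor of the comparison can be reproduced with a toy push-based simulation in which each informed node forwards the message to one neighbour chosen with a degree-dependent weight. The network model, number of rounds, and forwarding rule below are illustrative assumptions, not the mean-field analysis of the paper.

```python
import random
import networkx as nx

def spread(G, weight, steps=30, seed=0):
    """Push-based spread for `steps` rounds: every informed node forwards
    the message to one neighbour chosen with probability proportional to
    weight(degree of the neighbour). Returns the number of informed nodes."""
    rng = random.Random(seed)
    informed = {rng.choice(list(G.nodes))}
    for _ in range(steps):
        new = set()
        for u in informed:
            nbrs = list(G.neighbors(u))
            w = [weight(G.degree(v)) for v in nbrs]
            new.add(rng.choices(nbrs, weights=w, k=1)[0])
        informed |= new
    return len(informed)

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(2000, 3, seed=1)
    print("preferential        :", spread(G, weight=lambda d: d))
    print("uniform             :", spread(G, weight=lambda d: 1.0))
    print("reverse preferential:", spread(G, weight=lambda d: 1.0 / d))
```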
A note on the upper bound of the spectral radius for SOR iteration matrix
NASA Astrophysics Data System (ADS)
Chang, D.-W. Da-Wei
2004-05-01
Recently, Wang and Huang (J. Comput. Appl. Math. 135 (2001) 325, Corollary 4.7) established the following estimate on the upper bound of the spectral radius of the successive overrelaxation (SOR) iteration matrix: ρ_SOR ≤ 1 - ω + ω ρ_GS, under the condition that the coefficient matrix A is a nonsingular M-matrix and ω ≥ 1, where ρ_SOR and ρ_GS are the spectral radii of the SOR iteration matrix and the Gauss-Seidel iteration matrix, respectively. In this note, we would like to point out that the above estimate is not valid in general.
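The two spectral radii in the estimate are easy to compute numerically for a specific splitting A = D - L - U, which makes the claim simple to test on concrete matrices. The sketch below builds the Gauss-Seidel and SOR iteration matrices and compares ρ_SOR with 1 - ω + ω ρ_GS; the matrix and ω chosen here are illustrative and are not the counterexample of the note.

```python
import numpy as np

def iteration_matrices(A, omega):
    """Gauss-Seidel and SOR iteration matrices for the splitting A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    T_gs = np.linalg.solve(D - L, U)
    T_sor = np.linalg.solve(D - omega * L, (1.0 - omega) * D + omega * U)
    return T_gs, T_sor

def spectral_radius(T):
    return max(abs(np.linalg.eigvals(T)))

if __name__ == "__main__":
    # A nonsingular M-matrix: diagonally dominant with non-positive off-diagonals.
    A = np.array([[4.0, -1.0, -1.0],
                  [-1.0, 4.0, -1.0],
                  [-1.0, -1.0, 4.0]])
    omega = 1.2
    T_gs, T_sor = iteration_matrices(A, omega)
    rho_gs, rho_sor = spectral_radius(T_gs), spectral_radius(T_sor)
    print(rho_sor, 1.0 - omega + omega * rho_gs)   # compare with the claimed estimate
```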
A Novel Capacity Analysis for Wireless Backhaul Mesh Networks
NASA Astrophysics Data System (ADS)
Chung, Tein-Yaw; Lee, Kuan-Chun; Lee, Hsiao-Chih
This paper derives a closed-form expression for the inter-flow capacity of a backhaul wireless mesh network (WMN) with centralized scheduling by employing a ring-based approach. Through the definition of an interference area, we are able to accurately describe a bottleneck collision area for a WMN and calculate the upper bound of the inter-flow capacity. The closed-form expression shows that the upper bound is a function of the ratio between the transmission range and the network radius. Simulations and numerical analysis show that our analytic solution estimates the inter-flow capacity of WMNs better than the previous approach.
The Close Stellar Companions to Intermediate-mass Black Holes
NASA Astrophysics Data System (ADS)
MacLeod, Morgan; Trenti, Michele; Ramirez-Ruiz, Enrico
2016-03-01
When embedded in dense cluster cores, intermediate-mass black holes (IMBHs) acquire close stellar or stellar-remnant companions. These companions are not only gravitationally bound, but also tend to hierarchically isolate from other cluster stars through series of multibody encounters. In this paper we study the demographics of IMBH companions in compact star clusters through direct N-body simulations. We study clusters initially composed of 105 or 2 × 105 stars with IMBHs of 75 and 150 solar masses, and we follow their evolution for 6-10 Gyr. A tight, innermost binary pair of IMBH and stellar object rapidly forms. The IMBH has a companion with an orbital semimajor axis at least three times tighter than the second-most-bound object over 90% of the time. These companionships have typical periods on the order of years and are subject to cycles of exchange and destruction. The most frequently observed, long-lived pairings persist for ˜107 years. The demographics of IMBH companions in clusters are diverse: they include both main-sequence, giant stars and stellar remnants. Companion objects may reveal the presence of an IMBH in a cluster in one of several ways. The most-bound companion stars routinely suffer grazing tidal interactions with the IMBH, offering a dynamical mechanism to produce repeated flaring episodes like those seen in the IMBH candidate HLX-1. The stellar winds of companion stars provide a minimum quiescent accretion rate for IMBHs, with implications for radio searches for IMBH accretion in globular clusters. Finally, gravitational wave inspirals of compact objects occur with promising frequency.
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Efficient Regressions via Optimally Combining Quantile Information*
Zhao, Zhibiao; Xiao, Zhijie
2014-01-01
We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481
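A stripped-down illustration of the combining idea, assuming the statsmodels QuantReg implementation and plain equal weights rather than the optimal weights derived in the paper, is sketched below.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def combined_quantile_estimate(y, x, quantiles=(0.25, 0.5, 0.75)):
    """Fit separate quantile regressions and combine the slope estimates
    with equal weights; the paper derives optimal (efficiency-maximizing)
    weights, so this is only an illustration of the idea."""
    X = sm.add_constant(x)
    slopes = [QuantReg(y, X).fit(q=q).params[1] for q in quantiles]
    return np.mean(slopes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=400)
    y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=400)   # heavy-tailed noise
    print(combined_quantile_estimate(y, x))
```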
Toward allocative efficiency in the prescription drug industry.
Guell, R C; Fischbaum, M
1995-01-01
Traditionally, monopoly power in the pharmaceutical industry has been measured by profits. An alternative method estimates the deadweight loss of consumer surplus associated with the exercise of monopoly power. Although upper and lower bound estimates for this inefficiency are far apart, they at least suggest a dramatically greater welfare loss than measures of industry profitability would imply. A proposed system would have the U.S. government employing its power of eminent domain to "take" and distribute pharmaceutical patents, providing as "just compensation" the present value of the patent's expected future monopoly profits. Given the allocative inefficiency of raising taxes to pay for the program, the impact of the proposal on allocative efficiency would be at least as good as our lower bound estimate of monopoly costs, while substantially improving efficiency at or near our upper bound estimate.
Tight upper bound for the maximal quantum value of the Svetlichny operators
NASA Astrophysics Data System (ADS)
Li, Ming; Shen, Shuqian; Jing, Naihuan; Fei, Shao-Ming; Li-Jost, Xianqing
2017-10-01
It is a challenging task to detect genuine multipartite nonlocality (GMNL). In this paper, the problem is considered via computing the maximal quantum value of Svetlichny operators for three-qubit systems and a tight upper bound is obtained. The constraints on the quantum states for the tightness of the bound are also presented. The approach enables us to give the necessary and sufficient conditions of violating the Svetlichny inequality (SI) for several quantum states, including the white and color noised Greenberger-Horne-Zeilinger (GHZ) states. The relation between the genuine multipartite entanglement concurrence and the maximal quantum value of the Svetlichny operators for mixed GHZ class states is also discussed. As the SI is useful for the investigation of GMNL, our results give an effective and operational method to detect the GMNL for three-qubit mixed states.
Vacuum stability in the U(1)χ extended model with vanishing scalar potential at the Planck scale
NASA Astrophysics Data System (ADS)
Haba, Naoyuki; Yamaguchi, Yuya
2015-09-01
We investigate the vacuum stability in a scale-invariant local U(1)_χ model with vanishing scalar potential at the Planck scale. We find that it is impossible to realize the Higgs mass of 125 GeV while keeping the Higgs quartic coupling λ_H positive at all energy scales, that is, the same situation as in the standard model. Once one allows λ_H < 0, lower bounds on the Z' boson mass are obtained from the positive definiteness of the scalar mass squared eigenvalues, although these bounds are smaller than the LHC bounds. On the other hand, the upper bounds strongly depend on the number of relevant Majorana Yukawa couplings of the right-handed neutrinos, N_ν. Considering decoupling effects of the Z' boson and the right-handed neutrinos, the condition λ_φ > 0 on the singlet scalar quartic coupling gives the upper bound in the N_ν = 1 case, while it does not constrain the N_ν = 2 and 3 cases. In particular, we find that the Z' boson mass is tightly restricted in the N_ν = 1 case, with M_{Z'} ≲ 3.7 TeV.
NASA Astrophysics Data System (ADS)
Lee, Harry; Wen, Baole; Doering, Charles
2017-11-01
The rate of viscous energy dissipation ɛ in incompressible Newtonian planar Couette flow (a horizontal shear layer) subject to uniform boundary injection and suction is studied numerically. Specifically, fluid is steadily injected through the top plate at a constant rate and a constant angle of injection, and the same amount of fluid is sucked out vertically through the bottom plate at the same rate. This set-up leads to two control parameters, namely the angle of injection, θ, and the Reynolds number of the horizontal shear flow, Re. We numerically implement the 'background field' variational problem formulated by Constantin and Doering with a one-dimensional unidirectional background field ϕ(z), where z aligns with the distance between the plates. Computation is carried out at various levels of Re with θ = 0, 0.1°, 1° and 2°, respectively. The computed upper bounds on ɛ scale like Re^0 for Re > 20,000 at each fixed θ, which agrees with Kolmogorov's hypothesis on isotropic turbulence. The outcome provides new upper bounds on ɛ valid for any solution of the underlying Navier-Stokes equations, and they are sharper than the analytical bounds presented in Doering et al. (2000). This research was partially supported by the NSF Award DMS-1515161, and the University of Michigan's Rackham Graduate Student Research Grant.
$$ \mathcal{N} $$ = 4 superconformal bootstrap of the K3 CFT
Lin, Ying-Hsuan; Shao, Shu-Heng; Simmons-Duffin, David; ...
2017-05-23
We study two-dimensional (4, 4) superconformal field theories of central charge c = 6, corresponding to nonlinear sigma models on K3 surfaces, using the superconformal bootstrap. This is made possible through a surprising relation between the BPS N = 4 superconformal blocks with c = 6 and bosonic Virasoro conformal blocks with c = 28, and an exact result on the moduli dependence of a certain integrated BPS 4-point function. Nontrivial bounds on the non-BPS spectrum in the K3 CFT are obtained as functions of the CFT moduli, that interpolate between the free orbifold points and singular CFT points. We observe directly from the CFT perspective the signature of a continuous spectrum above a gap at the singular moduli, and find numerically an upper bound on this gap that is saturated by the A1 N = 4 cigar CFT. We also derive an analytic upper bound on the first nonzero eigenvalue of the scalar Laplacian on K3 in the large volume regime, that depends on the K3 moduli data. As two byproducts, we find an exact equivalence between a class of BPS N = 2 superconformal blocks and Virasoro conformal blocks in two dimensions, and an upper bound on the four-point functions of operators of sufficiently low scaling dimension in three and four dimensional CFTs.
Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection
NASA Astrophysics Data System (ADS)
Denuit, Michel; Dhaene, Jan
2007-06-01
In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.
Trends Among the States in Governance and Coordination of Higher Education.
ERIC Educational Resources Information Center
Chambers, M. M.
There has been a trend in state government toward tighter and tighter centralization that, though done in the name of greater economy and efficiency, is in large part a reach for political power. Not all services of the state can be performed well if integrated into a single monolithic administrative pyramid with all other state services and…
Tighter monogamy relations in multiqubit systems
NASA Astrophysics Data System (ADS)
Jin, Zhi-Xiang; Li, Jun; Li, Tao; Fei, Shao-Ming
2018-03-01
Monogamy relations characterize the distributions of entanglement in multipartite systems. We investigate monogamy relations related to the concurrence C, the entanglement of formation E, the negativity N_c, and the Tsallis-q entanglement T_q. Monogamy relations for the α-th power of entanglement have been derived, which are tighter than the existing entanglement monogamy relations for some classes of quantum states. Detailed examples are presented.
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2018-01-01
Bearing-supported shafts are widely used in various machines. Due to harsh working environments, bearing performance degrades over time. To prevent unexpected bearing failures and accidents, bearing performance degradation assessment has become an emerging topic in recent years. Bearing performance degradation assessment aims to evaluate the current health condition of a bearing through a bearing health indicator. In past years, many signal processing and data mining based methods were proposed to construct bearing health indicators. However, the upper and lower bounds of these bearing health indicators were not theoretically calculated, and they strongly depended on historical bearing data including normal and failure data. Besides, most health indicators are dimensional, which means that they are prone to being affected by varying operating conditions, such as varying speeds and loads. In this paper, based on the principle of squared envelope analysis, we focus on a theoretical investigation of bearing performance degradation assessment in the case of additive Gaussian noise, including distribution establishment of the squared envelope, construction of a generalized dimensionless bearing health indicator, and mathematical calculation of the upper and lower bounds of the generalized dimensionless bearing health indicator. Then, analyses of simulated and real bearing run-to-failure data are used as two case studies to illustrate how the generalized dimensionless health indicator works and to demonstrate its effectiveness in bearing performance degradation assessment. Results show that the squared envelope follows a noncentral chi-square distribution and that the upper and lower bounds of the generalized dimensionless health indicator can be mathematically established. Moreover, the generalized dimensionless health indicator is sensitive to an incipient bearing defect in the process of bearing performance degradation.
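A minimal sketch of the squared envelope and of a simple dimensionless indicator built from it is given below; the signal model, the normalisation by a known noise variance, and the fault waveform are illustrative assumptions and do not reproduce the generalized indicator or its chi-square based bounds.

```python
import numpy as np
from scipy.signal import hilbert

def squared_envelope(signal):
    """Squared envelope of a vibration signal via the analytic signal."""
    return np.abs(hilbert(signal)) ** 2

def dimensionless_indicator(signal, noise_variance):
    """Illustrative dimensionless health indicator: mean squared envelope
    normalised by an assumed-known noise variance."""
    return squared_envelope(signal).mean() / noise_variance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 20_000
    t = np.arange(0, 1.0, 1.0 / fs)
    noise = rng.normal(0.0, 1.0, t.size)
    # Hypothetical fault: short bursts of a 3 kHz resonance repeating at ~107 Hz.
    impulses = np.sin(2 * np.pi * 3000 * t) * (np.sin(2 * np.pi * 107 * t) > 0.999)
    for x in (noise, noise + 5.0 * impulses):
        print(dimensionless_indicator(x, noise_variance=1.0))
```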
Tsao, Mei-Fen; Chang, Hui-Wen; Chang, Chien-Hsi; Cheng, Chi-Hsuan; Lin, Hsiu-Chen
2017-05-01
Neonatal hypoglycemia may cause severe neurological damage; therefore, tight glycemic control is crucial to identify neonates at risk. Previous blood glucose monitoring systems (BGMS) failed to perform well in neonates, and there are calls for the tightening of accuracy requirements. There remains a need for an accurate BGMS for effective bedside diabetes management in neonatal care within a hospital population. A total of 300 neonates were recruited from local hospitals. The accuracy performance of a commercially available BGMS was evaluated against a reference instrument in screening for neonatal hypoglycemia, and the assessment was made based on ISO 15197:2013 and a tighter standard. At blood glucose levels < 47 mg/dl, the BGMS assessed met the minimal accuracy requirements of ISO 15197:2013 and of the tighter standard at 100% and 97.2%, respectively.
``Carbon Credits'' for Resource-Bounded Computations Using Amortised Analysis
NASA Astrophysics Data System (ADS)
Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin
Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.
Effects of triplet Higgs bosons in long baseline neutrino experiments
NASA Astrophysics Data System (ADS)
Huitu, K.; Kärkkäinen, T. J.; Maalampi, J.; Vihonen, S.
2018-05-01
The triplet scalars (Δ = Δ^{++}, Δ^{+}, Δ^{0}) utilized in the so-called type-II seesaw model to explain the lightness of neutrinos would generate nonstandard interactions (NSI) for a neutrino propagating in matter. We investigate the prospects to probe these interactions in long baseline neutrino oscillation experiments. We analyze the upper bounds that the proposed DUNE experiment might set on the nonstandard parameters and numerically derive upper bounds, as a function of the lightest neutrino mass, on the ratio of the mass M_Δ of the triplet scalars to the strength |λ_ϕ| of the coupling ϕϕΔ between the triplet Δ and the conventional Higgs doublet ϕ. We also discuss the possible misinterpretation of these effects as effects arising from a nonunitarity of the neutrino mixing matrix and compare the results with the bounds that arise from charged lepton flavor violating processes.
Estimates on Functional Integrals of Quantum Mechanics and Non-relativistic Quantum Field Theory
NASA Astrophysics Data System (ADS)
Bley, Gonzalo A.; Thomas, Lawrence E.
2017-01-01
We provide a unified method for obtaining upper bounds for certain functional integrals appearing in quantum mechanics and non-relativistic quantum field theory, functionals of the form E[exp(A_T)], the (effective) action A_T being a function of particle trajectories up to time T. The estimates in turn yield rigorous lower bounds for ground state energies, via the Feynman-Kac formula. The upper bounds are obtained by writing the action for these functional integrals in terms of stochastic integrals. The method is illustrated in familiar quantum mechanical settings: for the hydrogen atom, for a Schrödinger operator with 1/|x|^2 potential with small coupling, and, with a modest adaptation of the method, for the harmonic oscillator. We then present our principal applications of the method, in the settings of non-relativistic quantum field theories for particles moving in a quantized Bose field, including the optical polaron and Nelson models.
Dwell time-based stabilisation of switched delay systems using free-weighting matrices
NASA Astrophysics Data System (ADS)
Koru, Ahmet Taha; Delibaşı, Akın; Özbay, Hitay
2018-01-01
In this paper, we present a quasi-convex optimisation method to minimise an upper bound on the dwell time for stability of switched delay systems. Piecewise Lyapunov-Krasovskii functionals are introduced, and the upper bound for the derivative of the Lyapunov functionals is estimated by the free-weighting matrices method to investigate the non-switching stability of each candidate subsystem. Then, a sufficient condition for the dwell time is derived to guarantee the asymptotic stability of the switched delay system. Once these conditions are represented by a set of linear matrix inequalities, the dwell time optimisation problem can be formulated as a standard quasi-convex optimisation problem. Numerical examples are given to illustrate the improvements over previously obtained dwell time bounds. Using the results obtained in the stability case, we present a nonlinear minimisation algorithm to synthesise the dwell time minimiser controllers. The algorithm solves the problem with successive linearisation of the nonlinear conditions.
Decay of superconducting correlations for gauged electrons in dimensions D ≤ 4
NASA Astrophysics Data System (ADS)
Tada, Yasuhiro; Koma, Tohru
2018-03-01
We study lattice superconductors coupled to gauge fields, such as an attractive Hubbard model in electromagnetic fields, with a standard gauge fixing. We prove upper bounds for a two-point Cooper pair correlation at finite temperatures in spatial dimensions D ≤ 4. The upper bounds decay exponentially in three dimensions and by a power law in four dimensions. These imply the absence of superconducting long-range order for the Cooper pair amplitude as a consequence of fluctuations of the gauge fields. Since our results hold for the gauge fixing Hamiltonian, they cannot be obtained as a corollary of Elitzur's theorem.
Calculations of reliability predictions for the Apollo spacecraft
NASA Technical Reports Server (NTRS)
Amstadter, B. L.
1966-01-01
A new method of reliability prediction for complex systems is defined. Calculation of both upper and lower bounds is involved, and a procedure for combining the two to yield an approximately true prediction value is presented. Both mission success and crew safety predictions can be calculated, and success probabilities can be obtained for individual mission phases or subsystems. Primary consideration is given to evaluating cases involving zero or one failure per subsystem, and the results of these evaluations are then used for analyzing multiple failure cases. Extensive development is provided for the overall mission success and crew safety equations for both the upper and lower bounds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurvitis, Leonid
2009-01-01
An upper bound on the ergodic capacity of MIMO channels was introduced recently in [1]. This upper bound amounts to the maximization on the simplex of some multilinear polynomial p(λ_1, ..., λ_n) with non-negative coefficients. In general, such maximization problems are NP-hard. But if, say, the functional log(p) is concave on the simplex and can be efficiently evaluated, then the maximization can also be done efficiently. Such log-concavity was conjectured in [1]. We give in this paper a self-contained proof of the conjecture, based on the theory of H-stable polynomials.
Investigation of matter-antimatter interaction for possible propulsion applications
NASA Technical Reports Server (NTRS)
Morgan, D. L., Jr.
1974-01-01
Matter-antimatter annihilation is discussed as a means of rocket propulsion. The feasibility of different means of antimatter storage is shown to depend on how annihilation rates are affected by various circumstances. The annihilation processes are described, with emphasis on important features of atom-antiatom interatomic potential energies. A model is developed that allows approximate calculation of upper and lower bounds to the interatomic potential energy for any atom-antiatom pair. Formulae for the upper and lower bounds for atom-antiatom annihilation cross-sections are obtained and applied to the annihilation rates for each means of antimatter storage under consideration. Recommendations for further studies are presented.
Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.
Werner, Tomás
2015-07-01
Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point in which the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. During the iterations, an upper bound on the partition function monotonically decreases. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
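The basic iteration is easy to state concretely in the max-sum semiring, where the factor "product" is addition and the marginal is a maximum. The sketch below performs one such step for two factors sharing a single variable; the factor shapes are illustrative.

```python
import numpy as np

def equalize_max_marginals(f, g):
    """One diffusion step for two max-sum factors f(x, y) and g(y, z) that
    share the variable y: shift both so that their "product" (here: sum)
    is unchanged while their max-marginals over y become equal."""
    mu_f = f.max(axis=0)            # max over x, one value per y
    mu_g = g.max(axis=1)            # max over z, one value per y
    avg = 0.5 * (mu_f + mu_g)
    f_new = f - mu_f[None, :] + avg[None, :]
    g_new = g - mu_g[:, None] + avg[:, None]
    return f_new, g_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f, g = rng.normal(size=(3, 4)), rng.normal(size=(4, 5))
    f2, g2 = equalize_max_marginals(f, g)
    print(np.allclose(f2.max(axis=0), g2.max(axis=1)))    # marginals now equal
    total_before = f[:, :, None] + g[None, :, :]
    total_after = f2[:, :, None] + g2[None, :, :]
    print(np.allclose(total_before, total_after))         # "product" unchanged
```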
On the Role of Entailment Patterns and Scalar Implicatures in the Processing of Numerals
ERIC Educational Resources Information Center
Panizza, Daniele; Chierchia, Gennaro; Clifton, Charles, Jr.
2009-01-01
There has been much debate, in both the linguistics and the psycholinguistics literature, concerning numbers and the interpretation of number denoting determiners ("numerals"). Such debate concerns, in particular, the nature and distribution of upper-bounded ("exact") interpretations vs. lower-bounded ("at-least") construals. In the present paper…
Sublinear Upper Bounds for Stochastic Programs with Recourse. Revision.
1987-06-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zachos, C. K.; High Energy Physics
Following ref. [1], a classical upper bound for quantum entropy is identified and illustrated, 0 ≤ S_q ≤ ln(e σ^2 / 2ℏ), involving the variance σ^2 in phase space of the classical limit distribution of a given system. A fortiori, this further bounds the corresponding information-theoretical generalizations of the quantum entropy proposed by Renyi.
Representing and Acquiring Geographic Knowledge.
1984-01-01
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume
1998-01-01
We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller-Hermes, Alexander, E-mail: muellerh@ma.tum.de; Wolf, Michael M., E-mail: m.wolf@tum.de; Reeb, David, E-mail: reeb.qit@gmail.com
We investigate linear maps between matrix algebras that remain positive under tensor powers, i.e., under tensoring with n copies of themselves. Completely positive and completely co-positive maps are trivial examples of this kind. We show that for every n ∈ ℕ, there exist non-trivial maps with this property and that for two-dimensional Hilbert spaces there is no non-trivial map for which this holds for all n. For higher dimensions, we reduce the existence question of such non-trivial "tensor-stable positive maps" to a one-parameter family of maps and show that an affirmative answer would imply the existence of non-positive partial transpose bound entanglement. As an application, we show that any tensor-stable positive map that is not completely positive yields an upper bound on the quantum channel capacity, which for the transposition map gives the well-known cb-norm bound. We, furthermore, show that the latter is an upper bound even for the local operations and classical communications-assisted quantum capacity, and that moreover it is a strong converse rate for this task.
Measures and limits of models of fixation selection.
Wilming, Niklas; Betz, Torsten; Kietzmann, Tim C; König, Peter
2011-01-01
Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound of these measures, based on image-independent properties of fixation data and between-subject consistency, respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the required information that allows a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
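A minimal sketch of the two recommended measures, assuming standard scipy/scikit-learn routines and hypothetical saliency and fixation data, is given below; it omits the small-sample correction and the bound computations provided by the authors' own code.

```python
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import roc_auc_score

def auc_score(saliency_map, fixation_mask):
    """Area under the ROC curve: how well the model's saliency values
    separate fixated pixels from non-fixated ones."""
    return roc_auc_score(fixation_mask.ravel(), saliency_map.ravel())

def kl_divergence(fixation_density, model_density, eps=1e-12):
    """KL-divergence between the empirical fixation density and the
    model's predicted density (both normalised to sum to one)."""
    p = fixation_density.ravel() + eps
    q = model_density.ravel() + eps
    return entropy(p / p.sum(), q / q.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.random((64, 64))                  # hypothetical saliency map
    fixations = rng.random((64, 64)) < 0.02       # hypothetical fixation mask
    print(auc_score(model, fixations),
          kl_divergence(fixations.astype(float), model))
```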
Schwartz, Marc D; Valdimarsdottir, Heiddis B; Peshkin, Beth N; Mandelblatt, Jeanne; Nusbaum, Rachel; Huang, An-Tsun; Chang, Yaojen; Graves, Kristi; Isaacs, Claudine; Wood, Marie; McKinnon, Wendy; Garber, Judy; McCormick, Shelley; Kinney, Anita Y; Luta, George; Kelleher, Sarah; Leventhal, Kara-Grace; Vegella, Patti; Tong, Angie; King, Lesley
2014-03-01
Although guidelines recommend in-person counseling before BRCA1/BRCA2 gene testing, genetic counseling is increasingly offered by telephone. As genomic testing becomes more common, evaluating alternative delivery approaches becomes increasingly salient. We tested whether telephone delivery of BRCA1/2 genetic counseling was noninferior to in-person delivery. Participants (women age 21 to 85 years who did not have newly diagnosed or metastatic cancer and lived within a study site catchment area) were randomly assigned to usual care (UC; n = 334) or telephone counseling (TC; n = 335). UC participants received in-person pre- and post-test counseling; TC participants completed all counseling by telephone. Primary outcomes were knowledge, satisfaction, decision conflict, distress, and quality of life; secondary outcomes were equivalence of BRCA1/2 test uptake and costs of delivering TC versus UC. TC was noninferior to UC on all primary outcomes. At 2 weeks after pretest counseling, knowledge (d = 0.03; lower bound of 97.5% CI, -0.61), perceived stress (d = -0.12; upper bound of 97.5% CI, 0.21), and satisfaction (d = -0.16; lower bound of 97.5% CI, -0.70) had group differences and confidence intervals that did not cross their 1-point noninferiority limits. Decision conflict (d = 1.1; upper bound of 97.5% CI, 3.3) and cancer distress (d = -1.6; upper bound of 97.5% CI, 0.27) did not cross their 4-point noninferiority limit. Results were comparable at 3 months. TC was not equivalent to UC on BRCA1/2 test uptake (UC, 90.1%; TC, 84.2%). TC yielded cost savings of $114 per patient. Genetic counseling can be effectively and efficiently delivered via telephone to increase access and decrease costs.
Sign rank versus Vapnik-Chervonenkis dimension
NASA Astrophysics Data System (ADS)
Alon, N.; Moran, Sh; Yehudayoff, A.
2017-12-01
This work studies the maximum possible sign rank of (N × N) sign matrices with a given Vapnik-Chervonenkis dimension d. For d=1, this maximum is three. For d=2, this maximum is \widetilde{\Theta}(N^{1/2}). For d > 2, similar but slightly less accurate statements hold. The lower bounds improve on previous ones by Ben-David et al., and the upper bounds are novel. The lower bounds are obtained by probabilistic constructions, using a theorem of Warren in real algebraic topology. The upper bounds are obtained using a result of Welzl about spanning trees with low stabbing number, and using the moment curve. The upper bound technique is also used to: (i) provide estimates on the number of classes of a given Vapnik-Chervonenkis dimension, and the number of maximum classes of a given Vapnik-Chervonenkis dimension -- answering a question of Frankl from 1989, and (ii) design an efficient algorithm that provides an O(N/log(N)) multiplicative approximation for the sign rank. We also observe a general connection between sign rank and spectral gaps which is based on Forster's argument. Consider the adjacency (N × N)-matrix of a Δ-regular graph with a second eigenvalue of absolute value λ and Δ ≤ N/2. We show that the sign rank of the signed version of this matrix is at least Δ/λ. We use this connection to prove the existence of a maximum class C ⊆ {±1}^N with Vapnik-Chervonenkis dimension 2 and sign rank \widetilde{\Theta}(N^{1/2}). This answers a question of Ben-David et al. regarding the sign rank of large Vapnik-Chervonenkis classes. We also describe limitations of this approach, in the spirit of the Alon-Boppana theorem. We further describe connections to communication complexity, geometry, learning theory, and combinatorics. Bibliography: 69 titles.
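A small numerical illustration of the spectral-gap connection stated above could look like this in Python; the circulant graph is an arbitrary example chosen here, not a construction from the paper:

```python
# Numerical illustration of the Delta/lambda sign-rank lower bound for a small
# regular graph (the circulant graph below is an arbitrary example).
import numpy as np

def circulant_adjacency(n, offsets):
    """Adjacency matrix of a circulant graph on n vertices (regular by construction)."""
    A = np.zeros((n, n), dtype=float)
    for i in range(n):
        for d in offsets:
            A[i, (i + d) % n] = 1.0
            A[i, (i - d) % n] = 1.0
    return A

n = 64
A = circulant_adjacency(n, offsets=(1, 2, 3))            # 6-regular, so Delta <= N/2 holds
eigs = np.linalg.eigvalsh(A)
delta = int(A[0].sum())                                   # the degree, also the top eigenvalue
lam = np.max(np.abs(eigs[np.abs(eigs - delta) > 1e-9]))   # second-largest absolute eigenvalue
print(f"Delta = {delta}, lambda = {lam:.3f}, sign-rank lower bound Delta/lambda = {delta / lam:.2f}")
```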
A formulation of a matrix sparsity approach for the quantum ordered search algorithm
NASA Astrophysics Data System (ADS)
Parmar, Jupinder; Rahman, Saarim; Thiara, Jaskaran
One specific subset of quantum algorithms concerns Grover's Ordered Search Problem (OSP), the quantum counterpart of the classical binary search algorithm, which utilizes oracle functions to locate a specified value within an ordered database. Classically, the optimal algorithm is known to have log_2 N complexity; however, Grover's algorithm has been found to have an optimal complexity between the lower bound of (ln N - 1)/π ≈ 0.221 log_2 N and the upper bound of 0.433 log_2 N. We sought to lower the known upper bound of the OSP. Following Farhi et al. [MITCTP 2815 (1999), arXiv:quant-ph/9901059], we see that the OSP can be resolved into a translationally invariant algorithm, which yields constraints on quantum query algorithms. With these constraints, one can find Laurent polynomials for various numbers of queries k and database sizes N, thus finding larger recursive sets that solve the OSP and effectively reducing the upper bound. These polynomials are found to be convex functions, allowing one to make use of convex optimization to find an improvement on the known bounds. According to Childs et al. [Phys. Rev. A 75 (2007) 032335], semidefinite programming, a subset of convex optimization, can solve the particular problem represented by these constraints. We implemented a program following their formulation of a semidefinite program (SDP), leading us to find that it takes an immense amount of storage and time to compute. To combat this setback, we then formulated an approach to improve the results of the SDP using matrix sparsity. Through the development of this approach, along with an implementation of a rudimentary solver, we demonstrate how matrix sparsity reduces the amount of time and storage required to compute the SDP, ensuring that further improvements will likely be made toward the theorized lower bound.
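To make the semidefinite-programming step concrete, the following is a minimal, generic SDP sketch with sparsely stored data, assuming cvxpy and its bundled SDP solver are available; it is not the authors' OSP constraint set or their sparsity-exploiting solver:

```python
# Generic SDP sketch with sparsely stored data (assumes cvxpy and its bundled
# SDP solver are installed); this is not the authors' OSP constraint set.
import numpy as np
import scipy.sparse as sp
import cvxpy as cp

n = 12
C = sp.random(n, n, density=0.15, random_state=0)   # sparse cost data ...
C = ((C + C.T) * 0.5).toarray()                     # ... symmetrized (densified only for cvxpy)

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.trace(X) == 1]            # PSD cone plus one linear constraint
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()

# For this toy problem the optimum equals the smallest eigenvalue of C, which
# is a convenient sanity check; solvers benefit when the problem data are sparse.
print("SDP value:", prob.value, " min eigenvalue of C:", np.linalg.eigvalsh(C).min())
```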
Sample Complexity Bounds for Differentially Private Learning
Chaudhuri, Kamalika; Hsu, Daniel
2013-01-01
This work studies the problem of privacy-preserving classification – namely, learning a classifier from sensitive data while preserving the privacy of individuals in the training set. In particular, the learning algorithm is required in this problem to guarantee differential privacy, a very strong notion of privacy that has gained significant attention in recent years. A natural question to ask is: what is the sample requirement of a learning algorithm that guarantees a certain level of privacy and accuracy? We address this question in the context of learning with infinite hypothesis classes when the data is drawn from a continuous distribution. We first show that even for very simple hypothesis classes, any algorithm that uses a finite number of examples and guarantees differential privacy must fail to return an accurate classifier for at least some unlabeled data distributions. This result is unlike the case with either finite hypothesis classes or discrete data domains, in which distribution-free private learning is possible, as previously shown by Kasiviswanathan et al. (2008). We then consider two approaches to differentially private learning that get around this lower bound. The first approach is to use prior knowledge about the unlabeled data distribution in the form of a reference distribution chosen independently of the sensitive data. Given such a reference distribution, we provide an upper bound on the sample requirement that depends (among other things) on a measure of closeness between the reference distribution and the unlabeled data distribution. Our upper bound applies to the non-realizable as well as the realizable case. The second approach is to relax the privacy requirement, by requiring only label privacy – namely, that only the labels (and not the unlabeled parts of the examples) be considered sensitive information. An upper bound on the sample requirement of learning with label privacy was shown by Chaudhuri et al. (2006); in this work, we show a lower bound. PMID:25285183
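As a purely illustrative aside on the label-privacy notion used above, the sketch below applies the textbook randomized-response mechanism to the labels only; this is not the learning algorithm analyzed in the paper, and the epsilon value and data are made up:

```python
# Textbook randomized response on binary labels: each label is flipped with
# probability 1/(1+exp(eps)), which is eps-differentially private with respect
# to the labels alone.  The debiasing step shows that a usable signal survives.
import numpy as np

def randomize_labels(y, eps, rng):
    keep_prob = np.exp(eps) / (1.0 + np.exp(eps))
    flip = rng.random(len(y)) > keep_prob
    return np.where(flip, -y, y), keep_prob

rng = np.random.default_rng(1)
n = 2000
X = rng.standard_normal((n, 2))
y = np.sign(X[:, 0] + 0.3 * rng.standard_normal(n))          # true +/-1 labels
y_priv, keep_prob = randomize_labels(y, eps=1.0, rng=rng)

# Since E[y_priv] = (2*keep_prob - 1) * y, dividing by that factor debiases
# the empirical correlation between features and released labels.
w = X.T @ y_priv / (n * (2.0 * keep_prob - 1.0))
print("estimated separating direction:", np.round(w / np.linalg.norm(w), 3))
```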
Multi-shell model of ion-induced nucleic acid condensation
Tolokh, Igor S.; Drozdetski, Aleksander V.; Pollack, Lois; Onufriev, Alexey V.
2016-01-01
We present a semi-quantitative model of condensation of short nucleic acid (NA) duplexes induced by trivalent cobalt(iii) hexammine (CoHex) ions. The model is based on partitioning of bound counterion distribution around single NA duplex into “external” and “internal” ion binding shells distinguished by the proximity to duplex helical axis. In the aggregated phase the shells overlap, which leads to significantly increased attraction of CoHex ions in these overlaps with the neighboring duplexes. The duplex aggregation free energy is decomposed into attractive and repulsive components in such a way that they can be represented by simple analytical expressions with parameters derived from molecular dynamic simulations and numerical solutions of Poisson equation. The attractive term depends on the fractions of bound ions in the overlapping shells and affinity of CoHex to the “external” shell of nearly neutralized duplex. The repulsive components of the free energy are duplex configurational entropy loss upon the aggregation and the electrostatic repulsion of the duplexes that remains after neutralization by bound CoHex ions. The estimates of the aggregation free energy are consistent with the experimental range of NA duplex condensation propensities, including the unusually poor condensation of RNA structures and subtle sequence effects upon DNA condensation. The model predicts that, in contrast to DNA, RNA duplexes may condense into tighter packed aggregates with a higher degree of duplex neutralization. An appreciable CoHex mediated RNA-RNA attraction requires closer inter-duplex separation to engage CoHex ions (bound mostly in the “internal” shell of RNA) into short-range attractive interactions. The model also predicts that longer NA fragments will condense more readily than shorter ones. The ability of this model to explain experimentally observed trends in NA condensation lends support to proposed NA condensation picture based on the multivalent “ion binding shells.” PMID:27389241
Spread of entanglement and causality
NASA Astrophysics Data System (ADS)
Casini, Horacio; Liu, Hong; Mezei, Márk
2016-07-01
We investigate causality constraints on the time evolution of entanglement entropy after a global quench in relativistic theories. We first provide a general proof that the so-called tsunami velocity is bounded by the speed of light. We then generalize the free particle streaming model of [1] to general dimensions and to an arbitrary entanglement pattern of the initial state. In more than two spacetime dimensions the spread of entanglement in these models is highly sensitive to the initial entanglement pattern, but we are able to prove an upper bound on the normalized rate of growth of entanglement entropy, and hence the tsunami velocity. The bound is smaller than what one gets for quenches in holographic theories, which highlights the importance of interactions in the spread of entanglement in many-body systems. We propose an interacting model which we believe provides an upper bound on the spread of entanglement for interacting relativistic theories. In two spacetime dimensions with multiple intervals, this model and its variations are able to reproduce intricate results exhibited by holographic theories for a significant part of the parameter space. For higher dimensions, the model bounds the tsunami velocity at the speed of light. Finally, we construct a geometric model for entanglement propagation based on a tensor network construction for global quenches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Myoung-Jae; Jung, Young-Dae, E-mail: ydjung@hanyang.ac.kr; Department of Physics, Applied Physics, and Astronomy, Rensselaer Polytechnic Institute, 110 8th Street, Troy, New York 12180-3590
The dispersion relation for the dust ion-acoustic surface waves propagating at the interface of a semi-bounded Lorentzian dusty plasma with supersonic ion flow has been kinetically derived to investigate the nonthermal property and the ion wake field effect. We found that the supersonic ion flow creates the upper and the lower modes. The increase in the nonthermal particles decreases the wave frequency for the upper mode, whereas it increases the frequency for the lower mode. The increase in the supersonic ion flow velocity is found to enhance the wave frequency for both modes. We also found that the increase in nonthermal plasmas enhances the group velocity of the upper mode. However, the nonthermal particles suppress the lower mode group velocity. The nonthermal effects on the group velocity are reduced in the small or large wavelength limit.
Tighter entanglement monogamy relations of qubit systems
NASA Astrophysics Data System (ADS)
Jin, Zhi-Xiang; Fei, Shao-Ming
2017-03-01
Monogamy relations characterize the distributions of entanglement in multipartite systems. We investigate monogamy relations related to the concurrence C and the entanglement of formation E. We present new entanglement monogamy relations satisfied by the α-th power of the concurrence for all α ≥ 2, and by the α-th power of the entanglement of formation for all α ≥ √2. These monogamy relations are shown to be tighter than the existing ones.
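For the special case α = 2 (the CKW-type inequality contained in the family above), the relation can be checked numerically for the three-qubit W state; the sketch below uses Wootters' concurrence formula and is only a sanity check, not the proof technique of the paper:

```python
# Sanity check of the alpha = 2 monogamy relation for the three-qubit W state.
import numpy as np

def concurrence_2q(rho):
    """Wootters' concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    R = rho @ yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# |W> = (|100> + |010> + |001>)/sqrt(3), stored as a (2,2,2) tensor.
psi = np.zeros((2, 2, 2), dtype=complex)
psi[1, 0, 0] = psi[0, 1, 0] = psi[0, 0, 1] = 1.0 / np.sqrt(3.0)

rho_AB = np.einsum('abc,ijc->abij', psi, psi.conj()).reshape(4, 4)   # trace out qubit C
rho_AC = np.einsum('abc,ibk->acik', psi, psi.conj()).reshape(4, 4)   # trace out qubit B
rho_A  = np.einsum('abc,ibc->ai',   psi, psi.conj())                 # trace out B and C

C_AB, C_AC = concurrence_2q(rho_AB), concurrence_2q(rho_AC)
C_A_BC_sq = 2.0 * (1.0 - np.trace(rho_A @ rho_A).real)   # squared concurrence of the A|BC cut (pure state)

lhs, rhs = C_A_BC_sq, C_AB ** 2 + C_AC ** 2
print(f"C^2(A|BC) = {lhs:.4f}, C^2(AB) + C^2(AC) = {rhs:.4f}, monogamy holds: {lhs >= rhs - 1e-9}")
```

For the W state both sides equal 8/9, so the α = 2 relation is saturated; tighter relations of the kind discussed above sharpen the right-hand side for larger α.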
Archambeau, Cédric; Verleysen, Michel
2007-01-01
A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.
Bounds on quantum confinement effects in metal nanoparticles
NASA Astrophysics Data System (ADS)
Blackman, G. Neal; Genov, Dentcho A.
2018-03-01
Quantum size effects on the permittivity of metal nanoparticles are investigated using the quantum box model. Explicit upper and lower bounds are derived for the permittivity and relaxation rates due to quantum confinement effects. These bounds are verified numerically, and the size and frequency dependence of the empirical Drude size parameter are extracted from the model. The results suggest that the common practice of empirically modifying the dielectric function can lead to inaccurate predictions for highly uniform distributions of finite-sized particles.
NASA Technical Reports Server (NTRS)
Sloss, J. M.; Kranzler, S. K.
1972-01-01
The equivalence of a considered integral equation form with an infinite system of linear equations is proved, and the localization of the eigenvalues of the infinite system is expressed. Error estimates are derived, and the problems of finding upper bounds and lower bounds for the eigenvalues are solved simultaneously.
The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2008-01-01
We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
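For orientation, the simplest member of this family of weight-enumerator bounds is the plain union bound for ML decoding of BPSK over an AWGN channel; the sketch below applies it to the (7,4) Hamming code and is not the Poltyrev-type or bounded-angle bound developed in the paper:

```python
# Plain union bound on the word-error probability computed from a code's
# weight enumerator (BPSK over AWGN, ML decoding); illustrative only.
import math

# Weight distribution A_w of the (7,4) Hamming code: 1 + 7x^3 + 7x^4 + x^7.
weight_dist = {0: 1, 3: 7, 4: 7, 7: 1}
n, k = 7, 4
rate = k / n

def q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_word_error(ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10.0)
    return sum(a_w * q_func(math.sqrt(2.0 * w * rate * ebn0))
               for w, a_w in weight_dist.items() if w > 0)

for snr in (2, 4, 6, 8):
    print(f"Eb/N0 = {snr} dB: union bound on P(word error) <= {union_bound_word_error(snr):.3e}")
```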
Solar-System Tests of Gravitational Theories
NASA Technical Reports Server (NTRS)
Shapiro, Irwin I.
2001-01-01
We are engaged in testing gravitational theory, primarily using observations of objects in the solar system and primarily on that scale. Our goal is either to detect departures from the standard model (general relativity) - if any exist within the level of sensitivity of our data - or to place tighter bounds on such departures. For this project, we have analyzed a combination of observational data with our model of the solar system, including mostly planetary radar ranging, lunar laser ranging, and spacecraft tracking, but also including both pulsar timing and pulsar very long base interferometry (VLBI) measurements. This year, we have extended our model of Earth nutation with adjustable correction terms at the principal frequencies. We also refined our model of tidal drag on the Moon's orbit. We believe these changes will make no substantial changes in the results, but we are now repeating the analysis of the whole set of data to verify that belief. Additional information is contained in the original extended abstract.
Dynamical Constraints on Nontransiting Planets Orbiting TRAPPIST-1
NASA Astrophysics Data System (ADS)
Jontof-Hutter, Daniel; Truong, Vinh H.; Ford, Eric B.; Robertson, Paul; Terrien, Ryan C.
2018-06-01
We derive lower bounds on the orbital distance and inclination of a putative planet beyond the seven transiting planets of TRAPPIST-1, for masses ranging from 0.08 M_Jup to 3.5 M_Jup. While the outer architecture of this system will ultimately be constrained by radial velocity (RV) measurements over time, we present dynamical constraints from the remarkably coplanar configuration of the seven transiting planets, which is sensitive to modestly inclined perturbers. We find that the observed configuration is unlikely if a Jovian-mass planet inclined by ≥3° to the transiting planets exists within 0.53 au, exceeding any constraints from transit timing variations (TTVs) induced in the known planets by an undetected perturber. Our results will inform RV programs targeting TRAPPIST-1; for nearly coplanar outer planets, tighter constraints are anticipated for RV precisions of ≲140 m s⁻¹. At higher inclinations, putative planets are ruled out to greater orbital distances, with orbital periods up to a few years.
When clusters collide: constraints on antimatter on the largest scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steigman, Gary, E-mail: steigman@mps.ohio-state.edu
2008-10-15
Observations have ruled out the presence of significant amounts of antimatter in the Universe on scales ranging from the solar system, to the Galaxy, to groups and clusters of galaxies, and even to distances comparable to the scale of the present horizon. Except for the model-dependent constraints on the largest scales, the most significant upper limits to diffuse antimatter in the Universe are those on the ~Mpc scale of clusters of galaxies provided by the EGRET upper bounds to annihilation gamma rays from galaxy clusters whose intracluster gas is revealed through its x-ray emission. On the scale of individual clusters of galaxies, the upper bounds to the fraction of mixed matter and antimatter for the 55 clusters from a flux-limited x-ray survey range from 5 × 10⁻⁹ to <1 × 10⁻⁶, strongly suggesting that individual clusters of galaxies are made entirely of matter or of antimatter. X-ray and gamma-ray observations of colliding clusters of galaxies, such as the Bullet Cluster, permit these constraints to be extended to even larger scales. If the observations of the Bullet Cluster, where the upper bound to the antimatter fraction is found to be <3 × 10⁻⁶, can be generalized to other colliding clusters of galaxies, cosmologically significant amounts of antimatter will be excluded on scales of order ~20 Mpc (M ~ 5 × 10¹⁵ M_sun).
Multi-soliton interaction of a generalized Schrödinger-Boussinesq system in a magnetized plasma
NASA Astrophysics Data System (ADS)
Zhao, Xue-Hui; Tian, Bo; Chai, Jun; Wu, Xiao-Yu; Guo, Yong-Jiang
2017-04-01
Under investigation in this paper is a generalized Schrödinger-Boussinesq system, which describes the stationary propagation of coupled upper-hybrid waves and magnetoacoustic waves in a magnetized plasma. Bilinear forms and one-, two- and three-soliton solutions are derived by virtue of the Hirota method and symbolic computation. Propagation and interaction of the solitons are illustrated graphically: the coefficients β₁ and β₂ affect the velocities and propagation directions of the solitary waves. The amplitude, velocity and shape of a single solitary wave remain invariant during propagation, implying that the transport of energy is stable in the upper-hybrid and magnetoacoustic waves, and the amplitude of the upper-hybrid wave is larger than that of the magnetoacoustic wave. For the upper-hybrid and magnetoacoustic waves, head-on, overtaking and bound-state interactions between two solitary waves are asymptotically depicted, respectively, indicating that the interaction between the two solitary waves is elastic. Elastic interaction between a bound-state soliton and a single soliton is also displayed, and the interactions among the three solitary waves are all elastic.
On the Coriolis effect in acoustic waveguides.
Wegert, Henry; Reindl, Leonard M; Ruile, Werner; Mayer, Andreas P
2012-05-01
Rotation of an elastic medium gives rise to a shift of the frequencies of its acoustic modes, i.e., the time-periodic vibrations that exist in it. This frequency shift is investigated by applying perturbation theory in the regime of small ratios of the rotation velocity to the frequency of the acoustic mode. In an expansion of the relative frequency shift in powers of this ratio, upper bounds are derived for the first-order and second-order terms. The derivation of the theoretical upper bounds on the first-order term is presented for linear vibration modes as well as for stable nonlinear vibrations with periodic time dependence that can be represented by a Fourier series.
NASA Astrophysics Data System (ADS)
Albeverio, Sergio; Tamura, Hiroshi
2018-04-01
We consider a model describing the coupling of a vector-valued and a scalar homogeneous Markovian random field over R4, interpreted as expressing the interaction between a charged scalar quantum field coupled with a nonlinear quantized electromagnetic field. Expectations of functionals of the random fields are expressed by Brownian bridges. Using this, together with Feynman-Kac-Itô type formulae and estimates on the small time and large time behaviour of Brownian functionals, we prove asymptotic upper and lower bounds on the kernel of the transition semigroup for our model. The upper bound gives faster than exponential decay for large distances of the corresponding resolvent (propagator).
The upper bounds of reduced axial and shear moduli in cross-ply laminates with matrix cracks
NASA Technical Reports Server (NTRS)
Lee, Jong-Won; Allen, D. H.; Harris, C. E.
1991-01-01
The present study proposes a mathematical model utilizing the internal state variable concept for predicting the upper bounds of the reduced axial and shear stiffnesses in cross-ply laminates with matrix cracks. The displacement components at the matrix crack surfaces are explicitly expressed in terms of the observable axial and shear strains and the undamaged material properties. The reduced axial and shear stiffnesses are predicted for glass/epoxy and graphite/epoxy laminates. Comparison of the model with other theoretical and experimental studies is also presented to confirm direct applicability of the model to angle-ply laminates with matrix cracks subjected to general in-plane loading.
SURE reliability analysis: Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; White, Allan L.
1988-01-01
The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
Rare B Meson Decays With Omega Mesons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Lei; /Colorado U.
2006-04-24
Rare charmless hadronic B decays are particularly interesting because of their importance in understanding CP violation, which is essential to explain the matter-antimatter asymmetry in our universe, and because of their role in testing the "effective" theory of B physics. The study has been done with the BABAR experiment, which is mainly designed for the study of CP violation in the decays of neutral B mesons, and secondarily for rare processes that become accessible with the high luminosity of the PEP-II B Factory. In a sample of 89 million produced BB̄ pairs from the BABAR experiment, we observed the decays B⁰ → ωK⁰ and B⁺ → ωρ⁺ for the first time, made more precise measurements of B⁺ → ωh⁺, and reported tighter upper limits for B → ωK* and B⁰ → ωρ⁰.
NASA Astrophysics Data System (ADS)
Wang, Sai; Wang, Yi-Fan; Huang, Qing-Guo; Li, Tjonnie G. F.
2018-05-01
Advanced LIGO's discovery of gravitational-wave events is stimulating extensive studies on the origin of binary black holes. Assuming that the gravitational-wave events can be explained by binary primordial black hole mergers, we utilize the upper limits on the stochastic gravitational-wave background given by Advanced LIGO as a new observational window to independently constrain the abundance of primordial black holes in dark matter. We show that Advanced LIGO's first observation run gives the best constraint on the primordial black hole abundance in the mass range 1 M⊙ ≲ M_PBH ≲ 100 M⊙, pushing the previous microlensing and dwarf galaxy dynamics constraints tighter by 1 order of magnitude. Moreover, we discuss the possibility to detect the stochastic gravitational-wave background from primordial black holes, in particular from subsolar mass primordial black holes, by Advanced LIGO in the near future.
Constraining the generalized uncertainty principle with the atomic weak-equivalence-principle test
NASA Astrophysics Data System (ADS)
Gao, Dongfeng; Wang, Jin; Zhan, Mingsheng
2017-04-01
Various models of quantum gravity imply the Planck-scale modifications of Heisenberg's uncertainty principle into a so-called generalized uncertainty principle (GUP). The GUP effects on high-energy physics, cosmology, and astrophysics have been extensively studied. Here, we focus on the weak-equivalence-principle (WEP) violation induced by the GUP. Results from the WEP test with the ⁸⁵Rb-⁸⁷Rb dual-species atom interferometer are used to set upper bounds on parameters in two GUP proposals. A 10⁴⁵-level bound on the Kempf-Mangano-Mann proposal and a 10²⁷-level bound on Maggiore's proposal, which are consistent with bounds from other experiments, are obtained. All these bounds have huge room for improvement in the future.
Trinker, Horst
2011-10-28
We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.
Validation of the SURE Program, phase 1
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1987-01-01
Presented are the results of the first phase in the validation of the SURE (Semi-Markov Unreliability Range Evaluator) program. The SURE program gives lower and upper bounds on the death-state probabilities of a semi-Markov model. With these bounds, the reliability of a semi-Markov model of a fault-tolerant computer system can be analyzed. For the first phase in the validation, fifteen semi-Markov models were solved analytically for the exact death-state probabilities and these solutions compared to the corresponding bounds given by SURE. In every case, the SURE bounds covered the exact solution. The bounds, however, had a tendency to separate in cases where the recovery rate was slow or the fault arrival rate was fast.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
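For reference, the standard form of the random coding bound for a discrete memoryless channel P(y|x) at rate R (in nats) and block length N reads as follows; the notation (with Q the input distribution) is assumed here, not quoted from this abstract:

```latex
% Standard statement of the random coding bound for a discrete memoryless
% channel P(y|x), rate R (in nats) and block length N; Q is the input distribution.
\begin{align*}
  \overline{P}_e \;&\le\; \exp\bigl(-N\,E_r(R)\bigr),\\
  E_r(R) \;&=\; \max_{0\le\rho\le 1}\,\max_{Q}\;\bigl[E_0(\rho,Q)-\rho R\bigr],\\
  E_0(\rho,Q) \;&=\; -\ln \sum_{y}\Bigl[\sum_{x} Q(x)\,P(y\mid x)^{1/(1+\rho)}\Bigr]^{1+\rho}.
\end{align*}
```

The exponent is known to give the correct exponential behaviour of the ensemble average only above the critical rate, which is exactly the regime distinction discussed in the abstract.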
Scalable L-infinite coding of meshes.
Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter
2010-01-01
The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, it enables a fast real-time implementation of the rate allocation, and it preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
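The essence of an L-infinite guarantee can be illustrated with a much simpler device than the wavelet codec described above: a uniform quantizer whose step is chosen from the target maximum error. The sketch below conveys only that idea and is not MESHGRID or the proposed coder:

```python
# Minimal sketch of an L-infinite guarantee: quantize vertex coordinates with a
# uniform step chosen from the target bound, so every vertex's reconstruction
# error stays below it.  Not the wavelet-based MESHGRID codec itself.
import numpy as np

def encode(vertices, max_error):
    step = 2.0 * max_error            # |x - round(x/step)*step| <= step/2 = max_error
    return np.round(vertices / step).astype(np.int64), step

def decode(indices, step):
    return indices * step

rng = np.random.default_rng(0)
V = rng.uniform(-1.0, 1.0, size=(1000, 3))        # toy mesh geometry
idx, step = encode(V, max_error=1e-3)
V_hat = decode(idx, step)
print("max per-coordinate error:", np.abs(V - V_hat).max())   # guaranteed <= 1e-3
```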
Boukattaya, Mohamed; Mezghani, Neila; Damak, Tarak
2018-06-01
In this paper, robust and adaptive nonsingular fast terminal sliding-mode (NFTSM) control schemes for the trajectory tracking problem are proposed for the cases of known and unknown upper bounds of the system uncertainty and external disturbances. The developed controllers take advantage of NFTSM theory to ensure a fast convergence rate, singularity avoidance, and robustness against uncertainties and external disturbances. First, a robust NFTSM controller is proposed which guarantees that the sliding surface and the equilibrium point can be reached in a short finite time from any initial state. Then, in order to cope with the unknown upper bound of the system uncertainty, which may occur in practical applications, a new adaptive NFTSM algorithm is developed. One feature of the proposed control laws is their adaptation technique, in which prior knowledge of the parameter uncertainties and disturbances is not needed. The adaptive tuning law can estimate the upper bound of these uncertainties using only position and velocity measurements. Moreover, the proposed controller eliminates the chattering effect without losing robustness or precision. Stability analysis is performed using Lyapunov stability theory, and simulation studies are conducted to verify the effectiveness of the developed control schemes. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
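As a generic illustration of the adaptation idea described above (estimating an unknown disturbance bound online from |s|), the following first-order toy example uses a conventional sliding surface rather than the paper's NFTSM design, and all gains and the disturbance are arbitrary:

```python
# Generic adaptive sliding-mode toy example (not the paper's NFTSM controller):
#   s_dot = u + d(t),  u = -(k_hat + eta) * sign(s),  k_hat_dot = gamma * |s|,
# so the switching gain grows until it dominates the unknown disturbance bound.
import numpy as np

dt, T = 1e-3, 5.0
eta, gamma = 0.5, 2.0                  # arbitrary design gains
s, k_hat = 1.0, 0.0                    # initial sliding variable and bound estimate

for step in range(int(T / dt)):
    t = step * dt
    d = 1.5 * np.sin(3.0 * t)          # unknown bounded disturbance (|d| <= 1.5)
    u = -(k_hat + eta) * np.sign(s)    # control with the adapted switching gain
    s += (u + d) * dt                  # toy dynamics of the sliding variable
    k_hat += gamma * abs(s) * dt       # adaptation: grow the gain while |s| != 0

print(f"final |s| = {abs(s):.4f}, adapted gain k_hat = {k_hat:.3f}")
```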
Effects of general relativity on glitch amplitudes and pulsar mass upper bounds
NASA Astrophysics Data System (ADS)
Antonelli, M.; Montoli, A.; Pizzochero, P. M.
2018-04-01
Pinning of vortex lines in the inner crust of a spinning neutron star may be the mechanism that enhances the differential rotation of the internal neutron superfluid, making it possible to freeze some amount of angular momentum which eventually can be released, thus causing a pulsar glitch. We investigate the general relativistic corrections to pulsar glitch amplitudes in the slow-rotation approximation, consistently with the stratified structure of the star. We thus provide a relativistic generalization of a previous Newtonian model that was recently used to estimate upper bounds on the masses of glitching pulsars. We find that the effect of general relativity on the glitch amplitudes obtained by emptying the whole angular momentum reservoir is less than 30 per cent. Moreover, we show that the Newtonian upper bounds on the masses of large glitchers obtained from observations of their maximum recorded event differ by less than a few percent from those calculated within the relativistic framework. This work can also serve as a basis to construct more sophisticated models of angular momentum reservoir in a relativistic context: in particular, we present two alternative scenarios for macroscopically rigid and slack pinned vortex lines, and we generalize the Feynman-Onsager relation to the case when both entrainment coupling between the fluids and a strong axisymmetric gravitational field are present.
NASA Astrophysics Data System (ADS)
Masson, Frederic; Knoepfler, Andreas; Mayer, Michael; Ulrich, Patrice; Heck, Bernhard
2010-05-01
In September 2008, the Institut de Physique du Globe de Strasbourg (Ecole et Observatoire des Sciences de la Terre, EOST) and the Geodetic Institute (GIK) of Karlsruhe University (TH) established a transnational cooperation called GURN (GNSS Upper Rhine Graben Network). Within the GURN initiative, these institutions cooperate to establish a highly precise and highly sensitive network of permanently operating GNSS sites for the detection of crustal movements in the Upper Rhine Graben region. At the beginning, the network consisted of the permanently operating GNSS sites of SAPOS®-Baden-Württemberg, of different data providers in France (e.g. EOST, Teria, RGP), and of some further sites (e.g. IGS). In July 2009, the network was extended to the south when swisstopo (Switzerland) joined GURN, and to the north when SAPOS®-Rheinland-Pfalz joined. The network therefore currently consists of approximately 80 permanently operating reference sites. The presentation will discuss the current status of GURN and its main research goals, and will present first results concerning data quality as well as time series from a first reprocessing of all available data since 2002 using GAMIT/GLOBK (EOST working group) and the Bernese GPS Software (GIK working group). Based on these time series, velocity and strain fields will be calculated in the future. The GURN initiative also aims to estimate the upper bounds of deformation in the Upper Rhine Graben region.
A Reduced Basis Method with Exact-Solution Certificates for Symmetric Coercive Equations
2013-11-06
the energy associated with the infinite-dimensional weak solution of parametrized symmetric coercive partial differential equations with piecewise ... builds bounds with respect to the infinite-dimensional weak solution, aims to entirely remove the issue of the “truth” within the certified reduced basis ... framework. We in particular introduce a reduced basis method that provides rigorous upper and lower bounds
The Economic Cost of Methamphetamine Use in the United States, 2005
ERIC Educational Resources Information Center
Nicosia, Nancy; Pacula, Rosalie Liccardo; Kilmer, Beau; Lundberg, Russell; Chiesa, James
2009-01-01
This first national estimate suggests that the economic cost of methamphetamine (meth) use in the United States reached $23.4 billion in 2005. Given the uncertainty in estimating the costs of meth use, this book provides a lower-bound estimate of $16.2 billion and an upper-bound estimate of $48.3 billion. The analysis considers a wide range of…
Paramagnetic or diamagnetic persistent currents? A topological point of view
NASA Astrophysics Data System (ADS)
Waintal, Xavier
2009-03-01
A persistent current flows at low temperatures in small conducting rings when they are threaded by a magnetic flux. I will discuss the sign of this persistent current (diamagnetic or paramagnetic response) in the special case of N electrons in a one-dimensional ring [1]. One dimension is very special in the sense that the sign of the persistent current is entirely controlled by the topology of the system. I will establish lower bounds for the free energy in the presence of arbitrary electron-electron interactions and external potentials. Those bounds are the counterparts of upper bounds derived by Leggett using another topological argument. Rings with odd (even) numbers of polarized electrons are always diamagnetic (paramagnetic). The situation is more interesting with unpolarized electrons, where Leggett's upper bound breaks down: rings with N=4n exhibit either paramagnetic behavior or a superconductor-like current-phase relation. The topological argument provides a rigorous justification for the phenomenological Hückel rule, which states that cyclic molecules with 4n + 2 electrons, like benzene, are aromatic while those with 4n electrons are not. [1] Xavier Waintal, Geneviève Fleury, Kyryl Kazymyrenko, Manuel Houzet, Peter Schmitteckert, and Dietmar Weinmann, Phys. Rev. Lett. 101, 106804 (2008).
Curvature Continuous and Bounded Path Planning for Fixed-Wing UAVs.
Wang, Xiaoliang; Jiang, Peng; Li, Deshi; Sun, Tao
2017-09-19
Unmanned Aerial Vehicles (UAVs) play an important role in applications such as data collection and target reconnaissance. An accurate and optimal path can effectively increase the mission success rate in the case of small UAVs. Although path planning for UAVs is similar to that for traditional mobile robots, the special kinematic characteristics of UAVs (such as their minimum turning radius) have not been taken into account in previous studies. In this paper, we propose a locally-adjustable, continuous-curvature, bounded path-planning algorithm for fixed-wing UAVs. To deal with the curvature discontinuity problem, an optimal interpolation algorithm and a key-point shift algorithm are proposed based on the derivation of a curvature continuity condition. To meet the upper bound for curvature and to render the curvature extrema controllable, a local replanning scheme is designed by combining arcs and Bezier curves with monotonic curvature. In particular, a path transition mechanism is built for the replanning phase using minimum curvature circles for a planning philosophy. Numerical results demonstrate that the analytical planning algorithm can effectively generate continuous-curvature paths, while satisfying the curvature upper bound constraint and allowing UAVs to pass through all predefined waypoints in the desired mission region.
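A minimal way to picture the curvature-bound check described above is to sample the curvature of a single cubic Bezier segment and compare its maximum with the reciprocal of the minimum turning radius; the control points and turning radius below are arbitrary placeholders, not outputs of the proposed planner:

```python
# Illustrative curvature-bound check for one cubic Bezier segment (not the
# paper's planner): kappa(t) = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2).
import numpy as np

def cubic_bezier_derivs(P, t):
    """First and second derivatives of a cubic Bezier with control points P (4x2)."""
    t = np.asarray(t)[:, None]
    d1 = 3 * (1 - t) ** 2 * (P[1] - P[0]) + 6 * (1 - t) * t * (P[2] - P[1]) + 3 * t ** 2 * (P[3] - P[2])
    d2 = 6 * (1 - t) * (P[2] - 2 * P[1] + P[0]) + 6 * t * (P[3] - 2 * P[2] + P[1])
    return d1, d2

def max_curvature(P, samples=1000):
    t = np.linspace(0.0, 1.0, samples)
    d1, d2 = cubic_bezier_derivs(P, t)
    num = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
    den = (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5
    return (num / den).max()

P = np.array([[0.0, 0.0], [50.0, 0.0], [100.0, 40.0], [150.0, 40.0]])  # arbitrary control points (m)
kappa_max = 1.0 / 80.0          # e.g. a minimum turning radius of 80 m
kappa = max_curvature(P)
print(f"max curvature {kappa:.5f} 1/m, bound {kappa_max:.5f} 1/m, feasible: {kappa <= kappa_max}")
```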
Chandon, Pierre; Ordabayeva, Nailya
2017-02-01
Five studies show that people, including experts such as professional chefs, estimate quantity decreases more accurately than quantity increases. We argue that this asymmetry occurs because physical quantities cannot be negative. Consequently, there is a natural lower bound (zero) when estimating decreasing quantities but no upper bound when estimating increasing quantities, which can theoretically grow to infinity. As a result, the "accuracy of less" disappears (a) when a numerical or a natural upper bound is present when estimating quantity increases, or (b) when people are asked to estimate the (unbounded) ratio of change from 1 size to another for both increasing and decreasing quantities. Ruling out explanations related to loss aversion, symbolic number mapping, and the visual arrangement of the stimuli, we show that the "accuracy of less" influences choice and demonstrate its robustness in a meta-analysis that includes previously published results. Finally, we discuss how the "accuracy of less" may explain asymmetric reactions to the supersizing and downsizing of food portions, some instances of the endowment effect, and asymmetries in the perception of increases and decreases in physical and psychological distance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Bounds on area and charge for marginally trapped surfaces with a cosmological constant
NASA Astrophysics Data System (ADS)
Simon, Walter
2012-03-01
We sharpen the known inequalities AΛ ⩽ 4π(1 - g) (Hayward et al 1994 Phys. Rev. D 49 5080, Woolgar 1999 Class. Quantum Grav. 16 3005) and A ⩾ 4πQ² (Dain et al 2012 Class. Quantum Grav. 29 035013) between the area A and the electric charge Q of a stable marginally outer-trapped surface (MOTS) of genus g in the presence of a cosmological constant Λ. In particular, instead of requiring stability we include the principal eigenvalue λ of the stability operator. For Λ* = Λ + λ > 0, we obtain a lower and an upper bound for Λ*A in terms of Λ*Q², as well as the upper bound Q ⩽ 1/(2√Λ*) for the charge, which reduces to Q ⩽ 1/(2√Λ) in the stable case λ ⩾ 0. For Λ* < 0, there only remains a lower bound on A. In the spherically symmetric, static, stable case, one of our area inequalities is saturated iff the surface gravity vanishes. We also discuss implications of our inequalities for ‘jumps’ and mergers of charged MOTS.
Perturbative unitarity constraints on the NMSSM Higgs Sector
Betre, Kassahun; El Hedri, Sonia; Walker, Devin G. E.
2017-11-11
We place perturbative unitarity constraints on both the dimensionful and dimensionless parameters in the Next-to-Minimal Supersymmetric Standard Model (NMSSM) Higgs sector. These constraints, plus the requirement that the singlino and/or Higgsino constitutes at least part of the observed dark matter relic abundance, generate upper bounds on the Higgs, neutralino and chargino mass spectrum. Requiring higher-order corrections to be no more than 41% of the tree-level value, we obtain an upper bound of 20 TeV for the heavy Higgses and 12 TeV for the charginos and neutralinos outside defined fine-tuned regions. If the corrections are no more than 20% of the tree-level value, the bounds are 7 TeV for the heavy Higgses and 5 TeV for the charginos and neutralinos. Finally, using the NMSSM as a template, we describe a method which replaces naturalness arguments with more rigorous perturbative unitarity arguments to get a better understanding of when new physics will appear.
An upper bound on the radius of a highly electrically conducting lunar core
NASA Technical Reports Server (NTRS)
Hobbs, B. A.; Hood, L. L.; Herbert, F.; Sonett, C. P.
1983-01-01
Parker's (1980) nonlinear inverse theory for the electromagnetic sounding problem is converted to a form suitable for analysis of lunar day-side transfer function data by: (1) transforming the solution in plane geometry to that in spherical geometry; and (2) transforming the theoretical lunar transfer function in the dipole limit to an apparent resistivity function. The theory is applied to the revised lunar transfer function data set of Hood et al. (1982), which extends in frequency from 10 to the -5th to 10 to the -3rd Hz. On the assumption that an iron-rich lunar core, whether molten or solid, can be represented by a perfect conductor at the minimum sampled frequency, an upper bound of 435 km on the maximum radius of such a core is calculated. This bound is somewhat larger than values of 360-375 km previously estimated from the same data set via forward model calculations because the prior work did not consider all possible mantle conductivity functions.
An upper bound on the particle-laden dependency of shear stresses at solid-fluid interfaces
NASA Astrophysics Data System (ADS)
Zohdi, T. I.
2018-03-01
In modern advanced manufacturing processes, such as three-dimensional printing of electronics, fine-scale particles are added to a base fluid yielding a modified fluid. For example, in three-dimensional printing, particle-functionalized inks are created by adding particles to freely flowing solvents forming a mixture, which is then deposited onto a surface, which upon curing yields desirable solid properties, such as thermal conductivity, electrical permittivity and magnetic permeability. However, wear at solid-fluid interfaces within the machinery walls that deliver such particle-laden fluids is typically attributed to the fluid-induced shear stresses, which increase with the volume fraction of added particles. The objective of this work is to develop a rigorous strict upper bound for the tolerable volume fraction of particles that can be added, while remaining below a given stress threshold at a fluid-solid interface. To illustrate the bound's utility, the expression is applied to a series of classical flow regimes.
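The paper's rigorous bound is not reproduced here, but the flavor of the question can be conveyed with a back-of-the-envelope estimate based on the classical dilute-suspension (Einstein) viscosity relation; all numbers below are hypothetical:

```python
# Back-of-the-envelope illustration only (not the paper's bound): with the
# dilute Einstein relation mu_eff = mu0*(1 + 2.5*phi) and wall stress
# tau = mu_eff*gamma_dot, the condition tau <= tau_max gives a crude ceiling on
# the particle volume fraction phi (only meaningful while phi stays small).
mu0 = 5.0e-3        # Pa*s, base solvent viscosity (hypothetical ink)
gamma_dot = 2.0e3   # 1/s, characteristic wall shear rate (hypothetical)
tau_max = 11.0      # Pa, allowed wall shear stress (hypothetical threshold)

phi_max = (tau_max / (mu0 * gamma_dot) - 1.0) / 2.5
print(f"admissible particle volume fraction (dilute-limit estimate): phi <= {phi_max:.3f}")
```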
Quantum Dynamical Applications of Salem's Theorem
NASA Astrophysics Data System (ADS)
Damanik, David; Del Rio, Rafael
2009-07-01
We consider the survival probability of a state that evolves according to the Schrödinger dynamics generated by a self-adjoint operator H. We deduce from a classical result of Salem that upper bounds for the Hausdorff dimension of a set supporting the spectral measure associated with the initial state imply lower bounds on a subsequence of time scales for the survival probability. This general phenomenon is illustrated with applications to the Fibonacci operator and the critical almost Mathieu operator. In particular, this gives the first quantitative dynamical bound for the critical almost Mathieu operator.
Volumes and intrinsic diameters of hypersurfaces
NASA Astrophysics Data System (ADS)
Paeng, Seong-Hun
2015-09-01
We estimate the volume and the intrinsic diameter of a hypersurface M with geometric information of a hypersurface which is parallel to M at distance T. It can be applied to the Riemannian Penrose inequality to obtain a lower bound of the total mass of a spacetime. Also it can be used to obtain upper bounds of the volume and the intrinsic diameter of the celestial r-sphere without a lower bound of the sectional curvature. We extend our results to metric-measure spaces by using the Bakry-Emery Ricci tensor.
The SURE reliability analysis program
NASA Technical Reports Server (NTRS)
Butler, R. W.
1986-01-01
The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
Quantum State Tomography via Linear Regression Estimation
Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan
2013-01-01
A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d⁴), where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
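A single-qubit toy version of the linear-regression idea (not the general d-dimensional algorithm, and without the MSE upper bound) can be written in a few lines; the measurement counts are simulated:

```python
# Single-qubit toy version of linear-regression state estimation: write
# rho = (I + r.sigma)/2 and recover the Bloch vector r by least squares from
# simulated Pauli expectation values.
import numpy as np

rng = np.random.default_rng(0)
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]
r_true = np.array([0.3, -0.4, 0.5])                      # true Bloch vector (|r| < 1)

# Simulate N projective measurements per Pauli basis; the probability of
# outcome +1 is (1 + <P>)/2, and <P> equals the corresponding Bloch component.
N = 5000
y = np.array([(2.0 * rng.binomial(N, 0.5 * (1.0 + r)) - N) / N for r in r_true])

# Linear regression: the design matrix is the identity here because the three
# Pauli observables directly measure the Bloch components; in general one
# solves a least-squares problem determined by the chosen measurement bases.
r_hat, *_ = np.linalg.lstsq(np.eye(3), y, rcond=None)
rho_hat = 0.5 * (np.eye(2) + sum(c * P for c, P in zip(r_hat, paulis)))
print("estimated Bloch vector:", np.round(r_hat, 3))
print("reconstructed state:\n", np.round(rho_hat, 3))
```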
NASA Astrophysics Data System (ADS)
De Raedt, Hans; Michielsen, Kristel; Hess, Karl
2016-12-01
Using Einstein-Podolsky-Rosen-Bohm experiments as an example, we demonstrate that the combination of a digital computer and algorithms, as a metaphor for a perfect laboratory experiment, provides solutions to problems of the foundations of physics. Employing discrete-event simulation, we present a counterexample to John Bell's remarkable "proof" that any theory of physics, which is both Einstein-local and "realistic" (counterfactually definite), results in a strong upper bound to the correlations that are being measured in Einstein-Podolsky-Rosen-Bohm experiments. Our counterexample, which is free of the so-called detection-, coincidence-, memory-, and contextuality loophole, violates this upper bound and fully agrees with the predictions of quantum theory for Einstein-Podolsky-Rosen-Bohm experiments.
NASA Astrophysics Data System (ADS)
Shen, Yuxuan; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.
2018-07-01
In this paper, the recursive filtering problem is studied for a class of time-varying nonlinear systems with stochastic parameter matrices. The measurement transmission between the sensor and the filter is conducted through a fading channel characterized by the Rice fading model. An event-based transmission mechanism is adopted to decide whether the sensor measurement should be transmitted to the filter. A recursive filter is designed such that, in the simultaneous presence of the stochastic parameter matrices and fading channels, the filtering error covariance is guaranteed to have an upper bound and such an upper bound is then minimized by appropriately choosing filter gain matrix. Finally, a simulation example is presented to demonstrate the effectiveness of the proposed filtering scheme.
Combinatorial complexity of pathway analysis in metabolic networks.
Klamt, Steffen; Stelling, Jörg
2002-01-01
Elementary flux mode analysis is a promising approach for a pathway-oriented perspective of metabolic networks. However, in larger networks it is hampered by the combinatorial explosion of possible routes. In this work we give some estimates of the combinatorial complexity, including theoretical upper bounds for the number of elementary flux modes in a network of a given size. In a case study, we computed the elementary modes in the central metabolism of Escherichia coli while utilizing four different substrates. Interestingly, although the number of modes occurring in this complex network can exceed half a million, it is still far below the upper bound. Hence, to a certain extent, pathway analysis of central catabolism is feasible for assessing network properties such as flexibility and functionality.
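To illustrate the scale of such combinatorial ceilings, the toy calculation below assumes a bound of the binomial form C(q, m+1) for a network with q reactions and m internal metabolites; the precise expression used in the paper may differ, and the example sizes are hypothetical.

from math import comb

q = 100  # hypothetical number of reactions
m = 50   # hypothetical number of internal metabolites
upper_bound = comb(q, m + 1)  # assumed binomial form of the combinatorial ceiling
print(f"binom({q}, {m + 1}) = {upper_bound:.3e}")

Even for this modest hypothetical network the ceiling is astronomically larger than the half-million modes reported above, which is the point of the comparison.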
A one-dimensional model of solid-earth electrical resistivity beneath Florida
Blum, Cletus; Love, Jeffrey J.; Pedrie, Kolby; Bedrosian, Paul A.; Rigler, E. Joshua
2015-11-19
An estimated one-dimensional layered model of electrical resistivity beneath Florida was developed from published geological and geophysical information. The resistivity of each layer is represented by plausible upper and lower bounds as well as a geometric mean resistivity. Corresponding impedance transfer functions, Schmucker-Weidelt transfer functions, apparent resistivity, and phase responses are calculated for inducing geomagnetic frequencies ranging from 10^-5 to 10^0 hertz. The resulting one-dimensional model and response functions can be used to make general estimates of time-varying electric fields associated with geomagnetic storms such as might represent induction hazards for electric-power grid operation. The plausible upper- and lower-bound resistivity structures show the uncertainty, giving a wide range of plausible time-varying electric fields.
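As an illustration of how a layered resistivity model is turned into the apparent-resistivity and phase responses mentioned above, here is a minimal sketch of the standard one-dimensional impedance recursion (the three-layer model and frequencies are hypothetical, not the published Florida profile):

import numpy as np

mu0 = 4e-7 * np.pi

def mt_1d_response(rho, thick, freqs):
    """Apparent resistivity (ohm-m) and impedance phase (deg) of a layered half-space.

    rho: layer resistivities, top to bottom; thick: thicknesses of all but the bottom layer.
    """
    rho_a, phase = [], []
    for f in freqs:
        w = 2 * np.pi * f
        z = np.sqrt(1j * w * mu0 * rho[-1])          # impedance of the terminating half-space
        for j in range(len(thick) - 1, -1, -1):      # recurse upward through the layers
            k = np.sqrt(1j * w * mu0 / rho[j])
            z0 = 1j * w * mu0 / k
            t = np.tanh(k * thick[j])
            z = z0 * (z + z0 * t) / (z0 + z * t)
        rho_a.append(abs(z) ** 2 / (w * mu0))
        phase.append(np.degrees(np.angle(z)))
    return np.array(rho_a), np.array(phase)

# Hypothetical three-layer example evaluated across the frequency band mentioned above.
freqs = np.logspace(-5, 0, 6)
ra, ph = mt_1d_response(rho=[100.0, 10.0, 1000.0], thick=[2000.0, 20000.0], freqs=freqs)
print(np.round(ra, 1), np.round(ph, 1))

For a uniform half-space the recursion returns |Z|^2/(ω μ0) equal to the half-space resistivity and a 45° phase, which is a convenient sanity check.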
NASA Astrophysics Data System (ADS)
Soltani Bozchalooi, Iman; Liang, Ming
2018-04-01
A discussion paper entitled "On the distribution of the modulus of Gabor wavelet coefficients and the upper bound of the dimensionless smoothness index in the case of additive Gaussian noises: revisited" by Dong Wang, Qiang Zhou, Kwok-Leung Tsui has been brought to our attention recently. This discussion paper (hereafter called Wang et al. paper) is based on arguments that are fundamentally incorrect and which we rebut within this commentary. However, as the flaws in the arguments proposed by Wang et al. are clear, we will keep this rebuttal as brief as possible.
Two Upper Bounds for the Weighted Path Length of Binary Trees. Report No. UIUCDCS-R-73-565.
ERIC Educational Resources Information Center
Pradels, Jean Louis
Rooted binary trees with weighted nodes are structures encountered in many areas, such as coding theory, searching and sorting, information storage and retrieval. The path length is a meaningful quantity which gives indications about the expected time of a search or the length of a code, for example. In this paper, two sharp bounds for the total…
The Mystery of Io's Warm Polar Regions: Implications for Heat Flow
NASA Technical Reports Server (NTRS)
Matson, D. L.; Veeder, G. J.; Johnson, T. V.; Blaney, D. L.; Davies, A. G.
2002-01-01
Unexpectedly warm polar temperatures further support the idea that Io is covered virtually everywhere by cooling lava flows. This implies a new heat flow component. Io's heat flow remains constrained between a lower bound of approximately 2.5 W m^-2 and an upper bound of approximately 13 W m^-2. Additional information is contained in the original extended abstract.
Joseph Buongiorno; Mo Zhou; Craig Johnston
2017-01-01
Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value. The other method used the certainty...
Verifying the error bound of numerical computation implemented in computer systems
Sawada, Jun
2013-03-12
A verification tool receives a finite-precision definition for an approximation of an infinite-precision numerical function implemented in a processor, in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite-precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment, and converts, for each segment, the polynomial of bounded functions for that segment into a simplified formula comprising a polynomial, an inequality, and a constant for the selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment, and reports the segments that violate a bounding condition.
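As a toy stand-in for the segment-wise bounding step, the sketch below computes a tight upper bound of a univariate polynomial on an interval by checking the endpoints and the real stationary points; it only illustrates the idea and is not the verification procedure described above, and the polynomial and interval are hypothetical.

import numpy as np

def poly_upper_bound(coeffs, a, b):
    """Maximum of a polynomial (coeffs in numpy order, highest degree first) on [a, b]."""
    candidates = [a, b]
    stationary = np.roots(np.polyder(coeffs))            # roots of the derivative
    for r in stationary:
        if abs(r.imag) < 1e-12 and a <= r.real <= b:      # keep real stationary points inside the segment
            candidates.append(r.real)
    return max(np.polyval(coeffs, x) for x in candidates)

# Hypothetical segment check: p(x) = x^3 - 2x + 1 on [0, 1.5].
print(poly_upper_bound([1.0, 0.0, -2.0, 1.0], 0.0, 1.5))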
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aad, G.; Abbott, B.; Abdallah, J.
2016-01-28
A search for a Higgs boson produced via vector-boson fusion and decaying into invisible particles is presented, using 20.3 fb^-1 of proton-proton collision data at a centre-of-mass energy of 8 TeV recorded by the ATLAS detector at the LHC. For a Higgs boson with a mass of 125 GeV, assuming the Standard Model production cross section, an upper bound of 0.28 is set on the branching fraction of H → invisible at 95% confidence level, where the expected upper limit is 0.31. Furthermore, the results are interpreted in models of Higgs-portal dark matter where the branching fraction limit is converted into upper bounds on the dark-matter-nucleon scattering cross section as a function of the dark-matter particle mass, and compared to results from the direct dark-matter detection experiments.
NASA Astrophysics Data System (ADS)
Badescu, Viorel; Landsberg, Peter T.
1995-08-01
The general theory developed in part I was applied to build up two models of photovoltaic conversion. To this end two different systems were analyzed. The first system consists of the whole absorber (converter), for which the balance equations for energy and entropy are written and then used to derive an upper bound for solar energy conversion. The second system covers a part of the absorber (converter), namely the valence and conduction electronic bands. The balance of energy is used in this case to derive, under additional assumptions, another upper limit for the conversion efficiency. This second system deals with the real location where the power is generated. Both models take into consideration the radiation polarization and reflection, and the effects of concentration. The second model yields a more accurate upper bound for the conversion efficiency. A generalized solar cell equation is derived. It is proved that other previous theories are particular cases of the present more general formalism.
Limits on the fluctuating part of y-type distortion monopole from Planck and SPT results
NASA Astrophysics Data System (ADS)
Khatri, Rishi; Sunyaev, Rashid
2015-08-01
We use the published Planck and SPT cluster catalogs [1,2] and recently published y-distortion maps [3] to put strong observational limits on the contribution of the fluctuating part of the y-type distortions to the y-distortion monopole. Our bounds are 5.4 × 10^-8 < ⟨y⟩ < 2.2 × 10^-6. Our upper bound is a factor of 6.8 stronger than the currently best upper 95% confidence limit from COBE-FIRAS of ⟨y⟩ < 15 × 10^-6. In the standard cosmology, large scale structure is the only source of such distortions and our limits therefore constrain the baryonic physics involved in the formation of the large scale structure. Our lower limit, from the detected clusters in the Planck and SPT catalogs, also implies that a Pixie-like experiment should detect the y-distortion monopole at >27σ. The biggest sources of uncertainty in our upper limit are the monopole offsets between different HFI channel maps, which we estimate to be <10^-6.
On the realization of the bulk modulus bounds for two-phase viscoelastic composites
NASA Astrophysics Data System (ADS)
Andreasen, Casper Schousboe; Andreassen, Erik; Jensen, Jakob Søndergaard; Sigmund, Ole
2014-02-01
Materials with good vibration damping properties and high stiffness are of great industrial interest. In this paper the bounds for viscoelastic composites are investigated and material microstructures that realize the upper bound are obtained by topology optimization. These viscoelastic composites can be realized by additive manufacturing technologies followed by an infiltration process. Viscoelastic composites consisting of a relatively stiff elastic phase, e.g. steel, and a relatively lossy viscoelastic phase, e.g. silicone rubber, have non-connected stiff regions when optimized for maximum damping. In order to ensure manufacturability of such composites the connectivity of the matrix is ensured by imposing a conductivity constraint and the influence on the bounds is discussed.
1-norm support vector novelty detection and its sparseness.
Zhang, Li; Zhou, WeiDa
2013-12-01
This paper proposes a 1-norm support vector novelty detection (SVND) method and discusses its sparseness. 1-norm SVND is formulated as a linear programming problem and uses two techniques for inducing sparseness: the 1-norm regularization and the hinge loss function. We also find two upper bounds on the sparseness of 1-norm SVND: the exact support vector (ESV) bound and the kernel Gram matrix rank bound. The ESV bound indicates that 1-norm SVND has a sparser representation model than SVND. The kernel Gram matrix rank bound can loosely estimate the sparseness of 1-norm SVND. Experimental results show that 1-norm SVND is feasible and effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Castro-González, N.; Vélez-Cerrada, J. Y.
2008-05-01
Given a bounded operator A on a Banach space X with Drazin inverse A^D and index r, we study the class of group invertible bounded operators B such that I + A^D(B - A) is invertible and a further condition holds. We show that they can be written, with respect to the associated space decomposition, as a block matrix operator in which B1 and a second associated block are invertible. Several characterizations of the perturbed operators are established, extending matrix results. We analyze the perturbation of the Drazin inverse and we provide explicit upper bounds on ||B^# - A^D|| and ||BB^# - A^D A||. We obtain a result on the continuity of the group inverse for operators on Banach spaces.
Bounds on invisible Higgs boson decays extracted from LHC ttH production data.
Zhou, Ning; Khechadoorian, Zepyoor; Whiteson, Daniel; Tait, Tim M P
2014-10-10
We present an upper bound on the branching fraction of the Higgs boson to invisible particles by recasting a CMS Collaboration search for stop quarks decaying to tt + E_T^miss. The observed (expected) bound, BF(H → inv.) < 0.40(0.65) at 95% C.L., is the strongest direct limit to date, benefiting from a downward fluctuation in the CMS data in that channel. In addition, we combine this new constraint with existing published constraints to give an observed (expected) bound of BF(H → inv.) < 0.40(0.40) at 95% C.L., and we show some of the implications for theories of dark matter which communicate through the Higgs portal.
Money Gone Up in Smoke: The Tobacco Use and Malnutrition Nexus in Bangladesh
Husain, Muhammad Jami; Virk-Baker, Mandeep; Parascandola, Mark; Khondker, Bazlul Haque; Ahluwalia, Indu B.
2017-01-01
BACKGROUND The tobacco epidemic in Bangladesh is pervasive. Expenditures on tobacco may reduce money available for food in a country with a high malnutrition rate. OBJECTIVES The aims of the study are to quantify the opportunity costs of tobacco expenditure in terms of nutrition (ie, food energy) forgone and the potential improvements in the household level food-energy status if the money spent on tobacco were diverted for food consumption. METHOD We analyzed data from the 2010 Bangladesh Household Income and Expenditure Survey, a nationally representative survey conducted among 12,240 households. We present 2 analytical scenarios: (1) the lower-bound gain scenario entailing money spent on tobacco partially diverted to acquiring food according to households’ food consumption share in total expenditures; and (2) the upper-bound gain scenario entailing money spent on tobacco diverted to acquiring food only. Age- and gender-based energy norms were used to identify food-energy deficient households. Data were analyzed by mutually exclusive smoking-only, smokeless-only, and dual-tobacco user households. FINDINGS On average, a smoking-only household could gain 269–497 kilocalories (kcal) daily under the lower-bound and upper-bound scenarios, respectively. The potential energy gains for smokeless-only and dual-tobacco user households ranged from 148–268 kcal and 508–924 kcal, respectively. Under these lower- and upper-bound estimates, the percentage of smoking-only user households that are malnourished declined significantly from the baseline rate of 38% to 33% and 29%, respectively. For the smokeless-only and dual-tobacco user households, there were 2–3 and 6–9 percentage point drops in the malnutrition prevalence rates. The tobacco expenditure shift could translate to an additional 4.6–7.7 million food-energy malnourished persons meeting their caloric requirements. CONCLUSIONS The findings suggest that tobacco use reduction could facilitate concomitant improvements in population-level nutrition status and may inform the development and refinement of tobacco prevention and control efforts in Bangladesh. PMID:28283125
Termination Proofs for String Rewriting Systems via Inverse Match-Bounds
NASA Technical Reports Server (NTRS)
Butler, Ricky (Technical Monitor); Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes
2004-01-01
Annotating a letter by a number, one can record information about its history during a reduction. A string rewriting system is called match-bounded if there is a global upper bound to these numbers. In earlier papers we established match-boundedness as a strong sufficient criterion for both termination and preservation of regular languages. We now show that string rewriting systems whose inverse (left- and right-hand sides exchanged) is match-bounded also have exceptional properties, but slightly different ones. Inverse match-bounded systems effectively preserve context-free languages; their sets of normalized strings and their sets of immortal strings are effectively regular. These sets of strings can be used to decide the normalization, the termination and the uniform termination problems of inverse match-bounded systems. We also show that the termination problem is decidable in linear time, and that a certain strong reachability problem is decidable, thus solving two open problems of McNaughton's.
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinlvas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
Safe Upper-Bounds Inference of Energy Consumption for Java Bytecode Applications
NASA Technical Reports Server (NTRS)
Navas, Jorge; Mendez-Lojo, Mario; Hermenegildo, Manuel V.
2008-01-01
Many space applications such as sensor networks, on-board satellite-based platforms, on-board vehicle monitoring systems, etc. handle large amounts of data and analysis of such data is often critical for the scientific mission. Transmitting such large amounts of data to the remote control station for analysis is usually too expensive for time-critical applications. Instead, modern space applications are increasingly relying on autonomous on-board data analysis. All these applications face many resource constraints. A key requirement is to minimize energy consumption. Several approaches have been developed for estimating the energy consumption of such applications (e.g. [3, 1]) based on measuring actual consumption at run-time for large sets of random inputs. However, this approach has the limitation that it is in general not possible to cover all possible inputs. Using formal techniques offers the potential for inferring safe energy consumption bounds, thus being especially interesting for space exploration and safety-critical systems. We have proposed and implemented a general framework for resource usage analysis of Java bytecode [2]. The user defines a set of resource(s) of interest to be tracked and some annotations that describe the cost of some elementary elements of the program for those resources. These values can be constants or, more generally, functions of the input data sizes. The analysis then statically derives an upper bound on the amount of those resources that the program as a whole will consume or provide, also as functions of the input data sizes. This article develops a novel application of the analysis of [2] to inferring safe upper bounds on the energy consumption of Java bytecode applications. We first use a resource model that describes the cost of each bytecode instruction in terms of the joules it consumes. With this resource model, we then generate energy consumption cost relations, which are then used to infer safe upper bounds. How energy consumption for each bytecode instruction is measured is beyond the scope of this paper. Instead, this paper is about how to infer safe energy consumption estimations assuming that those energy consumption costs are provided. For concreteness, we use a simplified version of an existing resource model [1] in which an energy consumption cost for individual Java opcodes is defined.
Correction of spin diffusion during iterative automated NOE assignment
NASA Astrophysics Data System (ADS)
Linge, Jens P.; Habeck, Michael; Rieping, Wolfgang; Nilges, Michael
2004-04-01
Indirect magnetization transfer increases the observed nuclear Overhauser enhancement (NOE) between two protons in many cases, leading to an underestimation of target distances. Wider distance bounds are necessary to account for this error. However, this leads to a loss of information and may reduce the quality of the structures generated from the inter-proton distances. Although several methods for spin diffusion correction have been published, they are often not employed to derive distance restraints. This prompted us to write a user-friendly and CPU-efficient method to correct for spin diffusion that is fully integrated in our program ambiguous restraints for iterative assignment (ARIA). ARIA thus allows automated iterative NOE assignment and structure calculation with spin diffusion corrected distances. The method relies on numerical integration of the coupled differential equations which govern relaxation by matrix squaring and sparse matrix techniques. We derive a correction factor for the distance restraints from calculated NOE volumes and inter-proton distances. To evaluate the impact of our spin diffusion correction, we tested the new calibration process extensively with data from the Pleckstrin homology (PH) domain of Mus musculus β-spectrin. By comparing structures refined with and without spin diffusion correction, we show that spin diffusion corrected distance restraints give rise to structures of higher quality (notably fewer NOE violations and a more regular Ramachandran map). Furthermore, spin diffusion correction permits the use of tighter error bounds which improves the distinction between signal and noise in an automated NOE assignment scheme.
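The effect being corrected can be seen in a toy full-relaxation-matrix calculation (hypothetical three-spin relaxation matrix and mixing time, not ARIA's actual calibration): matrix exponentiation of the relaxation matrix predicts a larger 1-3 cross peak than the isolated two-spin estimate because of relayed transfer, and the ratio of the two plays the role of a correction factor.

import numpy as np
from scipy.linalg import expm

# Hypothetical relaxation matrix (s^-1) for spins 1-2-3 in a line: strong 1-2 and 2-3 cross-relaxation,
# weak direct 1-3 cross-relaxation, so magnetization can be relayed from 1 to 3 via 2.
R = np.array([[1.00, -0.30, -0.02],
              [-0.30, 1.00, -0.30],
              [-0.02, -0.30, 1.00]])
t_mix = 0.1  # mixing time in seconds

A = expm(-R * t_mix)                 # full-matrix NOE intensities
full_13 = A[0, 2]                    # 1-3 cross peak including spin diffusion
isolated_13 = -R[0, 2] * t_mix       # isolated two-spin (initial-rate) estimate
correction = isolated_13 / full_13   # factor by which relayed transfer inflates the apparent NOE

print(full_13, isolated_13, correction)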
Bermudo, Carolina; Sevilla, Lorenzo; Martín, Francisco; Trujillo, Francisco Javier
2017-01-01
The application of incremental processes in the manufacturing industry has undergone great development in recent years. The first stage of an Incremental Forming Process can be defined as an indentation. Because of this, the indentation process is starting to be widely studied, not only as a hardening test but also as a forming process. Thus, in this work, an analysis of the indentation process under the new Modular Upper Bound perspective has been performed. The modular implementation has several advantages, including the possibility of introducing different parameters to extend the study, such as the friction effect, the temperature or the hardening effect studied in this paper. The main objective of the present work is to analyze the three hardening models developed depending on the material characteristics. In order to support the validation of the hardening models, finite element analyses of diverse materials under indentation are carried out. Results obtained from the Modular Upper Bound are in concordance with the results obtained from the numerical analyses. In addition, the numerical and analytical methods are in concordance with the results previously obtained in the experimental indentation of annealed aluminum A92030. Due to the introduction of the hardening factor, the new modular distribution is a suitable option for the analysis of the indentation process. PMID:28772914
Exploring L1 model space in search of conductivity bounds for the MT problem
NASA Astrophysics Data System (ADS)
Wheelock, B. D.; Parker, R. L.
2013-12-01
Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions), results in greater uncertainty (wider bounds). Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
Pharmacokinetics and repolarization effects of intravenous and transdermal granisetron.
Mason, Jay W; Selness, Daniel S; Moon, Thomas E; O'Mahony, Bridget; Donachie, Peter; Howell, Julian
2012-05-15
The need for greater clarity about the effects of 5-HT(3) receptor antagonists on cardiac repolarization is apparent in the changing product labeling across this therapeutic class. This study assessed the repolarization effects of granisetron, a 5-HT(3) receptor antagonist antiemetic, administered intravenously and by a granisetron transdermal system (GTDS). In a parallel four-arm study, healthy subjects were randomized to receive intravenous granisetron, GTDS, placebo, or oral moxifloxacin (active control). The primary endpoint was difference in change from baseline in mean Fridericia-corrected QT interval (QTcF) between GTDS and placebo (ddQTcF) on days 3 and 5. A total of 240 subjects were enrolled, 60 in each group. Adequate sensitivity for detection of QTc change was shown by a 5.75 ms lower bound of the 90% confidence interval (CI) for moxifloxacin versus placebo at 2 hours postdose on day 3. Day 3 ddQTcF values varied between 0.2 and 1.9 ms for GTDS (maximum upper bound of 90% CI, 6.88 ms), between -1.2 and 1.6 ms for i.v. granisetron (maximum upper bound of 90% CI, 5.86 ms), and between -3.4 and 4.7 ms for moxifloxacin (maximum upper bound of 90% CI, 13.45 ms). Day 5 findings were similar. Pharmacokinetic-ddQTcF modeling showed a minimally positive slope of 0.157 ms/(ng/mL), but a very low correlation (r = 0.090). GTDS was not associated with statistically or clinically significant effects on QTcF or other electrocardiographic variables. This study provides useful clarification on the effect of granisetron delivered by GTDS on cardiac repolarization. ©2012 AACR.
Using a Water Balance Model to Bound Potential Irrigation Development in the Upper Blue Nile Basin
NASA Astrophysics Data System (ADS)
Jain Figueroa, A.; McLaughlin, D.
2016-12-01
The Grand Ethiopian Renaissance Dam (GERD), on the Blue Nile is an example of water resource management underpinning food, water and energy security. Downstream countries have long expressed concern about water projects in Ethiopia because of possible diversions to agricultural uses that could reduce flow in the Nile. Such diversions are attractive to Ethiopia as a partial solution to its food security problems but they could also conflict with hydropower revenue from GERD. This research estimates an upper bound on diversions above the GERD project by considering the potential for irrigated agriculture expansion and, in particular, the availability of water and land resources for crop production. Although many studies have aimed to simulate downstream flows for various Nile basin management plans, few have taken the perspective of bounding the likely impacts of upstream agricultural development. The approach is to construct an optimization model to establish a bound on Upper Blue Nile (UBN) agricultural development, paying particular attention to soil suitability and seasonal variability in climate. The results show that land and climate constraints impose significant limitations on crop production. Only 25% of the land area is suitable for irrigation due to the soil, slope and temperature constraints. When precipitation is also considered only 11% of current land area could be used in a way that increases water consumption. The results suggest that Ethiopia could consume an additional 3.75 billion cubic meters (bcm) of water per year, through changes in land use and storage capacity. By exploiting this irrigation potential, Ethiopia could potentially decrease the annual flow downstream of the UBN by 8 percent from the current 46 bcm/y to the modeled 42 bcm/y.
FACTORING TO FIT OFF DIAGONALS.
imply an upper bound on the number of factors. When applied to somatotype data, the method improved substantially on centroid solutions and indicated a reinterpretation of earlier factoring studies. (Author)
Evolution of cosmic string networks
NASA Technical Reports Server (NTRS)
Albrecht, Andreas; Turok, Neil
1989-01-01
Results on cosmic strings are summarized including: (1) the application of non-equilibrium statistical mechanics to cosmic string evolution; (2) a simple one scale model for the long strings which has a great deal of predictive power; (3) results from large scale numerical simulations; and (4) a discussion of the observational consequences of our results. An upper bound on Gμ of approximately 10^-7 emerges from the millisecond pulsar gravity wave bound. How numerical uncertainties affect this is discussed. Any changes which weaken the bound would probably also give the long strings the dominant role in producing observational consequences.
NASA Astrophysics Data System (ADS)
Chen, Zhixiang; Fu, Bin
This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) How to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ε)/2) lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained as ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP; and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.
Necessary and sufficient criterion for extremal quantum correlations in the simplest Bell scenario
NASA Astrophysics Data System (ADS)
Ishizaka, Satoshi
2018-05-01
In the study of quantum nonlocality, one obstacle is that the analytical criterion for identifying the boundaries between quantum and postquantum correlations has not yet been given, even in the simplest Bell scenario. We propose a plausible, analytical, necessary and sufficient condition ensuring that a nonlocal quantum correlation in the simplest scenario is an extremal boundary point. Our extremality condition amounts to certifying an information-theoretical quantity; the probability of guessing a measurement outcome of a distant party optimized using any quantum instrument. We show that this quantity can be upper and lower bounded from any correlation in a device-independent way, and we use numerical calculations to confirm that coincidence of the upper and lower bounds appears to be necessary and sufficient for the extremality.
On the validity of the Arrhenius equation for electron attachment rate coefficients.
Fabrikant, Ilya I; Hotop, Hartmut
2008-03-28
The validity of the Arrhenius equation for dissociative electron attachment rate coefficients is investigated. A general analysis allows us to obtain estimates of the upper temperature bound for the range of validity of the Arrhenius equation in the endothermic case and both lower and upper bounds in the exothermic case with a reaction barrier. The results of the general discussion are illustrated by numerical examples whereby the rate coefficient, as a function of temperature for dissociative electron attachment, is calculated using the resonance R-matrix theory. In the endothermic case, the activation energy in the Arrhenius equation is close to the threshold energy, whereas in the case of exothermic reactions with an intermediate barrier, the activation energy is found to be substantially lower than the barrier height.
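For reference, the functional form under discussion is the standard Arrhenius expression for the rate coefficient,

k(T) = A \exp\left(-\frac{E_a}{k_B T}\right),

and the conclusions above can be read as statements about how the fitted activation energy E_a relates to the threshold energy (endothermic case) or to the barrier height (exothermic case with an intermediate barrier).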
On dynamic tumor eradication conditions under combined chemical/anti-angiogenic therapies
NASA Astrophysics Data System (ADS)
Starkov, Konstantin E.
2018-02-01
In this paper, the ultimate dynamics of a five-dimensional cancer tumor growth model at the angiogenesis phase is studied. This model, elaborated by Pinho et al. in 2014, describes interactions between normal/cancer/endothelial cells under chemotherapy/anti-angiogenic agents in the tumor growth process. The author derives ultimate upper bounds for normal/tumor/endothelial cell concentrations and ultimate upper and lower bounds for chemical/anti-angiogenic agent concentrations. Global asymptotic tumor clearance conditions are obtained for two versions: the use of chemotherapy only and the combined application of chemotherapy and anti-angiogenic therapy. These conditions are established as attraction conditions to the maximum invariant set in the tumor-free plane, and, furthermore, the case is examined in which this set consists only of tumor-free equilibrium points.
Robust guaranteed cost tracking control of quadrotor UAV with uncertainties.
Xu, Zhiwei; Nian, Xiaohong; Wang, Haibo; Chen, Yinsheng
2017-07-01
In this paper, a robust guaranteed cost controller (RGCC) is proposed for a quadrotor UAV system with uncertainties to address the set-point tracking problem. A sufficient condition for the existence of an RGCC is derived by the Lyapunov stability theorem. The designed RGCC not only guarantees that the whole closed-loop system is asymptotically stable but also ensures that the quadratic performance level of the closed-loop system has an upper bound irrespective of all admissible parameter uncertainties. Then, an optimal robust guaranteed cost controller is developed to minimize the upper bound of the performance level. Simulation results verify that the presented control algorithm possesses small overshoot and short settling time, with which the quadrotor is able to perform the set-point tracking task well. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Limits on cold dark matter cosmologies from new anisotropy bounds on the cosmic microwave background
NASA Technical Reports Server (NTRS)
Vittorio, Nicola; Meinhold, Peter; Lubin, Philip; Muciaccia, Pio Francesco; Silk, Joseph
1991-01-01
A self-consistent method is presented for comparing theoretical predictions of and observational upper limits on CMB anisotropy. New bounds on CDM cosmologies set by the UCSB South Pole experiment on the 1 deg angular scale are presented. An upper limit of 4.0 × 10^-5 is placed on the rms differential temperature anisotropy to a 95 percent confidence level and a power of the test beta = 55 percent. A lower limit of about 0.6/b is placed on the density parameter of cold dark matter universes with greater than about 3 percent baryon abundance and a Hubble constant of 50 km/s/Mpc, where b is the bias factor, equal to unity only if light traces mass.
Thermal dark matter co-annihilating with a strongly interacting scalar
NASA Astrophysics Data System (ADS)
Biondini, S.; Laine, M.
2018-04-01
Recently, many investigations have considered Majorana dark matter co-annihilating with bound states formed by a strongly interacting scalar field. However, only the gluon radiation contribution to bound state formation and dissociation, which at high temperatures is subleading to soft 2 → 2 scatterings, has been included. Making use of a non-relativistic effective theory framework and solving a plasma-modified Schrödinger equation, we address the effect of soft 2 → 2 scatterings as well as the thermal dissociation of bound states. We argue that the mass splitting between the Majorana and scalar field has in general both a lower and an upper bound, and that the dark matter mass scale can be pushed at least up to 5…6 TeV.
A Priori Bound on the Velocity in Axially Symmetric Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Lei, Zhen; Navas, Esteban A.; Zhang, Qi S.
2016-01-01
Let v be the velocity of Leray-Hopf solutions to the axially symmetric three-dimensional Navier-Stokes equations. Under suitable conditions for initial values, we prove the a priori bound |v(x, t)| ≤ C |ln r|^{1/2} / r^2 for 0 < r ≤ 1/2, where r is the distance from x to the z axis, and C is a constant depending only on the initial value. This provides a pointwise upper bound (worst case scenario) for possible singularities, while the recent papers (Chiun-Chuan et al., Commun PDE 34(1-3):203-232, 2009; Koch et al., Acta Math 203(1):83-105, 2009) gave a lower bound. The gap is polynomial order 1 modulo a half log term.
Computing Bounds on Resource Levels for Flexible Plans
NASA Technical Reports Server (NTRS)
Muscvettola, Nicola; Rijsman, David
2009-01-01
A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan. Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource- level envelope can be computed efficiently, one could substitute looser bounds that are currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm, the measure of complexity of which is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The measure of asymptotic complexity of the algorithm is O(N · maxflow(N)), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x and maxflow(N) is the measure of complexity (and thus of cost) of a maximum-flow algorithm applied to an auxiliary flow network of 2N nodes. The algorithm is believed to be efficient in practice; experimental analysis shows the practical cost of maxflow to be as low as O(N^1.5). The algorithm could be enhanced following at least two approaches. In the first approach, incremental subalgorithms for the computation of the envelope could be developed. By use of temporal scanning of the events in the temporal network, it may be possible to significantly reduce the size of the networks on which it is necessary to run the maximum-flow subalgorithm, thereby significantly reducing the time required for envelope calculation. In the second approach, the practical effectiveness of resource envelopes in the inner loops of search algorithms could be tested for multi-capacity resource scheduling. This testing would include inner-loop backtracking and termination tests and variable and value-ordering heuristics that exploit the properties of resource envelopes more directly.
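The maximum-flow subroutine that the envelope computation repeatedly invokes is a standard building block; the sketch below shows only that generic step on a hypothetical auxiliary network (using networkx, which is an assumption), not the envelope construction itself.

import networkx as nx

# Hypothetical auxiliary flow network: node names and capacities are placeholders.
G = nx.DiGraph()
G.add_edge("source", "a", capacity=3.0)
G.add_edge("source", "b", capacity=2.0)
G.add_edge("a", "b", capacity=1.0)
G.add_edge("a", "sink", capacity=2.5)
G.add_edge("b", "sink", capacity=2.0)

flow_value, flow_dict = nx.maximum_flow(G, "source", "sink")
print(flow_value)   # 4.5 for this toy network
print(flow_dict)    # per-edge flows realizing the maximum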
Schulte, Berit; Eickmeyer, Holm; Heininger, Alexandra; Juretzek, Stephanie; Karrasch, Matthias; Denis, Olivier; Roisin, Sandrine; Pletz, Mathias W.; Klein, Matthias; Barth, Sandra; Lüdke, Gerd H.; Thews, Anne; Torres, Antoni; Cillóniz, Catia; Straube, Eberhard; Autenrieth, Ingo B.; Keller, Peter M.
2014-01-01
Severe pneumonia remains an important cause of morbidity and mortality. Polymerase chain reaction (PCR) has been shown to be more sensitive than current standard microbiological methods – particularly in patients with prior antibiotic treatment – and therefore, may improve the accuracy of microbiological diagnosis for hospitalized patients with pneumonia. Conventional detection techniques and multiplex PCR for 14 typical bacterial pneumonia-associated pathogens were performed on respiratory samples collected from adult hospitalized patients enrolled in a prospective multi-center study. Patients were enrolled from March until September 2012. A total of 739 fresh, native samples were eligible for analysis, of which 75 were sputa, 421 aspirates, and 234 bronchial lavages. 276 pathogens were detected by microbiology for which a valid PCR result was generated (positive or negative detection result by Curetis prototype system). Among these, 120 were identified by the prototype assay, 50 pathogens were not detected. Overall performance of the prototype for pathogen identification was 70.6% sensitivity (95% confidence interval (CI) lower bound: 63.3%, upper bound: 76.9%) and 95.2% specificity (95% CI lower bound: 94.6%, upper bound: 95.7%). Based on the study results, device cut-off settings were adjusted for future series production. The overall performance with the settings of the CE series production devices was 78.7% sensitivity (95% CI lower bound: 72.1%) and 96.6% specificity (95% CI lower bound: 96.1%). Time to result was 5.2 hours (median) for the prototype test and 43.5 h for standard-of-care. The Pneumonia Application provides a rapid and moderately sensitive assay for the detection of pneumonia-causing pathogens with minimal hands-on time. Trial Registration Deutsches Register Klinischer Studien (DRKS) DRKS00005684 PMID:25397673
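As a worked check of the headline sensitivity figure, the sketch below recomputes 120 detections out of 170 microbiologically confirmed pathogens together with a 95% Wilson score interval; the Wilson interval is an assumption, since the exact interval method is not stated in this record.

from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

detected, confirmed = 120, 120 + 50
sens = detected / confirmed
lo, hi = wilson_interval(detected, confirmed)
print(f"sensitivity = {sens:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")  # about 70.6%, roughly (63%, 77%)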
Pioneer Venus orbiter search for Venusian lightning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borucki, W.J.; Dyer, J.W.; Phillips, J.R.
1991-07-01
During 1988 and 1990, the star sensor aboard the Pioneer Venus orbiter (PVO) was used to search for optical pulses from lightning on the nightside of Venus. Useful data were obtained for 53 orbits in 1988 and 55 orbits in 1990. During this period, approximately 83 s of search time plus 7749 s of control data were obtained. The results again find no optical evidence for lightning activity. For the region observed during 1988, the results imply that the upper bound on short-duration flashes is 4 × 10^-7 flashes/km^2/s for flashes that are at least 50% as bright as typical terrestrial lightning. During 1990, when the 2-Hz filter was used, the results imply an upper bound of 1 × 10^-7 flashes/km^2/s for long-duration flashes at least 1.6% as bright as typical terrestrial lightning flashes, or 33% as bright as the pulses observed by Venera 9. The upper bounds on the flash rates for the 1988 and 1990 searches are twice and one half the global terrestrial rate, respectively. These two searches covered the region from 60°N to 30°S latitude, 250° to 350° longitude, and the region from 45°N to 55°S latitude, 155° to 300° longitude. Both searches sampled much of the nightside region from the dawn terminator to within 4 hours of the dusk terminator, and covered a much larger latitude range than any previous search. The results show that the Beta and Phoebe Regio areas previously identified by Russell et al. (1988) as areas with high rates of lightning activity were not active during the two seasons of observation. If these upper bounds on the nightside flash rate are representative of the entire planet, the results imply that the global flash rate and energy dissipation rate derived by Krasnopol'sky (1983) from his observation of a single storm are too high.
Four-State Continuous-Variable Quantum Key Distribution with Photon Subtraction
NASA Astrophysics Data System (ADS)
Li, Fei; Wang, Yijun; Liao, Qin; Guo, Ying
2018-06-01
Four-state continuous-variable quantum key distribution (CVQKD) is one of the discretely modulated CVQKD protocols: it generates four nonorthogonal coherent states and exploits the sign of the measured quadrature of each state to encode information, rather than using the quadrature \hat{x} or \hat{p} itself. It has been proven that four-state CVQKD is more suitable than Gaussian-modulated CVQKD in terms of transmission distance. In this paper, we propose an improved four-state CVQKD using a non-Gaussian operation, photon subtraction. A suitable photon-subtraction operation can be exploited to improve the maximal transmission of CVQKD in point-to-point quantum communication since it provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only lengthens the maximal transmission distance by increasing the signal-to-noise ratio but also can be easily implemented with existing technologies. Security analysis shows that the proposed scheme can lengthen the maximum transmission distance. Furthermore, by taking the finite-size effect into account we obtain a tighter bound on the secure distance, which is more practical than that obtained in the asymptotic limit.
Tomographic Constraints on High-Energy Neutrinos of Hadronuclear Origin
NASA Astrophysics Data System (ADS)
Ando, Shin'ichiro; Tamborra, Irene; Zandanel, Fabio
2015-11-01
Mounting evidence suggests that the TeV-PeV neutrino flux detected by the IceCube telescope has mainly an extragalactic origin. If such neutrinos are primarily produced by a single class of astrophysical sources via hadronuclear (pp) interactions, a similar flux of gamma-ray photons is expected. For the first time, we employ tomographic constraints to pinpoint the origin of the IceCube neutrino events by analyzing recent measurements of the cross correlation between the distribution of GeV gamma rays, detected by the Fermi satellite, and several galaxy catalogs in different redshift ranges. We find that the corresponding bounds on the neutrino luminosity density are up to 1 order of magnitude tighter than those obtained by using only the spectrum of the gamma-ray background, especially for sources with mild redshift evolution. In particular, our method excludes any hadronuclear source with a spectrum softer than E^-2.1 as a main component of the neutrino background, if its evolution is slower than (1+z)^3. Starburst galaxies, if able to accelerate and confine cosmic rays efficiently, satisfy both spectral and tomographic constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martins, C.J.A.P.; Pinho, A.M.M.; Alves, R.F.C.
2015-08-01
Astrophysical tests of the stability of fundamental couplings, such as the fine-structure constant α, are becoming an increasingly powerful probe of new physics. Here we discuss how these measurements, combined with local atomic clock tests and Type Ia supernova and Hubble parameter data, constrain the simplest class of dynamical dark energy models where the same degree of freedom is assumed to provide both the dark energy and (through a dimensionless coupling, ζ, to the electromagnetic sector) the α variation. Specifically, current data tightly constrains a combination of ζ and the present dark energy equation of state w_0. Moreover, in these models the new degree of freedom inevitably couples to nucleons (through the α dependence of their masses) and leads to violations of the Weak Equivalence Principle. We obtain indirect bounds on the Eötvös parameter η that are typically stronger than the current direct ones. We discuss the model-dependence of our results and briefly comment on how the forthcoming generation of high-resolution ultra-stable spectrographs will enable significantly tighter constraints.
Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi
2017-10-01
A log file-based method cannot detect dosimetric changes due to linac component miscalibration because log files are insensitive to miscalibration. Herein, clinical impacts of dosimetric changes on a log file-based method were determined. Five head-and-neck and five prostate plans were applied. Miscalibration-simulated log files were generated by inducing a linac component miscalibration into the log file. Miscalibration magnitudes for leaf, gantry, and collimator at the general tolerance level were ±0.5 mm, ±1°, and ±1°, respectively, and at a tighter tolerance level achievable on current linacs were ±0.3 mm, ±0.5°, and ±0.5°, respectively. Re-calculations were performed on patient anatomy using log file data. Changes in tumor control probability/normal tissue complication probability from the treatment planning system dose to the re-calculated dose at the general tolerance level were 1.8% on the planning target volume (PTV) and 2.4% on organs at risk (OARs) in both plans. These changes at the tighter tolerance level were improved to 1.0% on the PTV and to 1.5% on OARs, with a statistically significant difference. We determined the clinical impacts of dosimetric changes on a log file-based method using a general tolerance level and a tighter tolerance level for linac miscalibration and found that a tighter tolerance level significantly improved the accuracy of the log file-based method. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
William J. Trush; Edward C. Connor; Knight Alan W.
1989-01-01
Riparian communities established along Elder Creek, a tributary of the upper South Fork Eel River, are bounded by two frequencies of periodic flooding. The upper limit for the riparian zone occurs at bankfull stage. The lower riparian limit is associated with a more frequent stage height, called the active channel, having an exceedance probability of 11 percent on a...
Variational bounds on the temperature distribution
NASA Astrophysics Data System (ADS)
Kalikstein, Kalman; Spruch, Larry; Baider, Alberto
1984-02-01
Upper and lower stationary or variational bounds are obtained for functions which satisfy parabolic linear differential equations. (The error in the bound, that is, the difference between the bound on the function and the function itself, is of second order in the error in the input function, and the error is of known sign.) The method is applicable to a range of functions associated with equalization processes, including heat conduction, mass diffusion, electric conduction, fluid friction, the slowing down of neutrons, and certain limiting forms of the random walk problem, under conditions which are not unduly restrictive: in heat conduction, for example, we do not allow the thermal coefficients or the boundary conditions to depend upon the temperature, but the thermal coefficients can be functions of space and time and the geometry is unrestricted. The variational bounds follow from a maximum principle obeyed by the solutions of these equations.
Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B
2014-09-15
Measurement-device-independent quantum key distribution (MDI-QKD) with the decoy-state method is believed to be securely applicable against various hacking attacks in practical quantum key distribution systems. Recently, coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. Here, in this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound of the yield and the upper bound of the bit error rate. We exploit standard statistical analysis and the Chernoff bound to perform the parameter estimation. The Chernoff bound can provide good bounds in long-distance MDI-QKD. Our results show that with CSS, both the secure transmission distance and the secure key rate are significantly improved compared with those of weak coherent states in the finite-data case.
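For orientation, one standard multiplicative Chernoff bound of the kind used in such finite-data parameter estimation is, for a sum X of independent indicator variables with mean \mu and any 0 < \delta < 1,

\Pr[X \ge (1+\delta)\mu] \le e^{-\delta^2 \mu / 3}, \qquad \Pr[X \le (1-\delta)\mu] \le e^{-\delta^2 \mu / 2}.

Because the tails shrink exponentially in \mu, bounds of this type remain informative even when the number of detection events is modest, which is what makes them useful at long distances.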
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, E.B. Jr.
Various methods for the calculation of lower bounds for eigenvalues are examined, including those of Weinstein, Temple, Bazley and Fox, Gay, and Miller. It is shown how all of these can be derived in a unified manner by the projection technique. The alternate forms obtained for the Gay formula show how a considerably improved method can be readily obtained. Applied to the ground state of the helium atom with a simple screened hydrogenic trial function, this new method gives a lower bound closer to the true energy than the best upper bound obtained with this form of trial function. Possible routes to further improved methods are suggested.
1987-08-01
... of the absolute difference between the random variable and its mean. Gassmann and Ziemba (1986) provide a weaker bound that does not require ... COMPARISONS OF BOUNDS ... Gassmann and Ziemba (1986) extend an idea ... solution of the following linear program (see Gassmann and Ziemba (1986), Theorem 1) ...
Bounds on Block Error Probability for Multilevel Concatenated Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana
1996-01-01
Maximum likelihood decoding of long block codes is not feasible due to their large complexity. Some classes of codes are shown to be decomposable into multilevel concatenated codes (MLCC). For these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC. We use this bound to evaluate the difference in performance for different decompositions of some codes. The examples given show that a significant reduction in complexity can be achieved by increasing the number of decoding stages. The resulting performance degradation varies for different decompositions. A guideline is given for finding good m-level decompositions.
New Anomalous Lieb-Robinson Bounds in Quasiperiodic XY Chains
NASA Astrophysics Data System (ADS)
Damanik, David; Lemm, Marius; Lukic, Milivoje; Yessen, William
2014-09-01
We announce and sketch the rigorous proof of a new kind of anomalous (or sub-ballistic) Lieb-Robinson (LR) bound for an isotropic XY chain in a quasiperiodic transversal magnetic field. Instead of the usual effective light cone |x| ≤ v|t|, we obtain |x| ≤ v|t|^α for some 0 < α < 1. We can characterize the allowed values of α exactly as those exceeding the upper transport exponent α_u^+ of a one-body Schrödinger operator. To our knowledge, this is the first rigorous derivation of anomalous quantum many-body transport. We also discuss anomalous LR bounds with power-law tails for a random dimer field.
Estimating pore and cement volumes in thin section
Halley, R.B.
1978-01-01
Point count estimates of pore, grain and cement volumes from thin sections are inaccurate, often by more than 100 percent, even though they may be surprisingly precise (reproducibility ± 3 percent). Errors are produced by: 1) inclusion of submicroscopic pore space within solid volume and 2) edge effects caused by grain curvature within a 30-micron-thick thin section. Submicroscopic porosity may be measured by various physical tests or may be visually estimated from scanning electron micrographs. Edge error takes the form of an envelope around grains and increases with decreasing grain size and sorting, increasing grain irregularity and tighter grain packing. Cements are greatly involved in edge error because of their position at grain peripheries and their generally small grain size. Edge error is minimized by methods which reduce the thickness of the sample viewed during point counting. Methods which effectively reduce thickness include use of ultra-thin thin sections or acetate peels, point counting in reflected light, or carefully focusing and counting on the upper surface of the thin section.
Associations in the hominoid facial skeleton.
Moore, W J
1977-02-01
A comparative study has been made of the correlations between numerous linear and angular dimensions of the facial skeleton of man and the three great apes. The Varimax (rotated orthogonal) factor analysis was found to be an essential aid in analysing the very large correlation matrices obtained. It indicated that three groups of association can be identified in the hominoid skull. The first reflects co-ordinated variation in total skull size; the second, co-ordinated variation within common anatomical regions; the third, co-ordination between the jaws and dentition. A broadly similar pattern was found in each group for all four genera. The principal contrasts between man, on the one hand, and the apes, on the other, were found in groups 1 and 2. The most prominent of these was a generally much tighter degree of association between the size and position of the upper and lower jaws in the apes, and a consequently reduced tendency for disruption of the occlusal relationship of the teeth.
Precision Measurement of the Electron's Electric Dipole Moment Using Trapped Molecular Ions
NASA Astrophysics Data System (ADS)
Cairncross, William B.; Gresh, Daniel N.; Grau, Matt; Cossel, Kevin C.; Roussy, Tanya S.; Ni, Yiqi; Zhou, Yan; Ye, Jun; Cornell, Eric A.
2017-10-01
We describe the first precision measurement of the electron's electric dipole moment (d_e) using trapped molecular ions, demonstrating the application of spin interrogation times over 700 ms to achieve high sensitivity and stringent rejection of systematic errors. Through electron spin resonance spectroscopy on ^180Hf^19F^+ in its metastable ^3Δ_1 electronic state, we obtain d_e = (0.9 ± 7.7_stat ± 1.7_syst) × 10^-29 e cm, resulting in an upper bound of |d_e| < 1.3 × 10^-28 e cm (90% confidence). Our result provides independent confirmation of the current upper bound of |d_e| < 9.4 × 10^-29 e cm [J. Baron et al., New J. Phys. 19, 073029 (2017), 10.1088/1367-2630/aa708e], and offers the potential to improve on this limit in the near future.
Limit cycles via higher order perturbations for some piecewise differential systems
NASA Astrophysics Data System (ADS)
Buzzi, Claudio A.; Lima, Maurício Firmino Silva; Torregrosa, Joan
2018-05-01
A classical perturbation problem is the polynomial perturbation of the harmonic oscillator, (x', y') = (−y + εf(x, y, ε), x + εg(x, y, ε)). In this paper we study the limit cycles that bifurcate from the period annulus via piecewise polynomial perturbations in two zones separated by a straight line. We prove that, for polynomial perturbations of degree n, no more than Nn − 1 limit cycles appear up to a study of order N. We also show that this upper bound is reached for orders one and two. Moreover, we study this problem in some classes of piecewise Liénard differential systems, providing better upper bounds for higher order perturbations in ε and showing when they are reached. The Poincaré-Pontryagin-Melnikov theory is the main technique used to prove all the results.
Non-localization of eigenfunctions for Sturm-Liouville operators and applications
NASA Astrophysics Data System (ADS)
Liard, Thibault; Lissy, Pierre; Privat, Yannick
2018-02-01
In this article, we investigate a non-localization property of the eigenfunctions of Sturm-Liouville operators A_a = −∂_xx + a(·) Id with Dirichlet boundary conditions, where a(·) runs over the bounded nonnegative potential functions on the interval (0, L) with L > 0. More precisely, we address the extremal spectral problem of minimizing the L²-norm of a function e(·) on a measurable subset ω of (0, L), where e(·) runs over all eigenfunctions of A_a, at the same time with respect to all subsets ω having a prescribed measure and all L^∞ potential functions a(·) having a prescribed essential upper bound. We provide some existence and qualitative properties of the minimizers, as well as precise lower and upper estimates on the optimal value. Several consequences in control and stabilization theory are then highlighted.
Fisher information of a single qubit interacts with a spin-qubit in the presence of a magnetic field
NASA Astrophysics Data System (ADS)
Metwally, N.
2018-06-01
In this contribution, quantum Fisher information is utilized to estimate the parameters of a central qubit interacting with a single spin-qubit. The effect of the longitudinal, transverse and rotating strengths of the magnetic field on the degree of estimation is discussed. It is shown that, in the resonance case, the number of peaks, and consequently the size of the estimation regions, increases as the rotating magnetic field strength increases. The precision of estimating the central qubit parameters depends on the initial state settings of the central and spin qubits, i.e., whether they encode classical or quantum information. The upper bounds of the estimation degree are large if the two qubits encode classical information. In the non-resonance case, the estimation degree depends on which of the longitudinal/transverse strengths is larger. The coupling constant between the central qubit and the spin-qubit affects the estimation of the weight and phase parameters differently: the possibility of estimating the weight parameter decreases as the coupling constant increases, while it increases for the phase parameter. For a large number of spin particles, namely a spin bath, the upper bound of the Fisher information with respect to the weight parameter of the central qubit decreases as the number of spin particles increases. As the interaction time increases, the upper bounds appear at different initial values of the weight parameter.
Modeling of magnitude distributions by the generalized truncated exponential distribution
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-01-01
The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
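As a minimal illustration of the distributions being compared (assumed notation and a discrete stand-in, not the paper's exact parameterization), the TED density with Gutenberg-Richter slope beta on [m_min, m_max], and a mixture over cutoff points in the spirit of the GTED, can be sketched as:

import math

def ted_pdf(m, beta, m_min, m_max):
    # Truncated exponential (Gutenberg-Richter) magnitude density on [m_min, m_max].
    if m < m_min or m > m_max:
        return 0.0
    norm = 1.0 - math.exp(-beta * (m_max - m_min))
    return beta * math.exp(-beta * (m - m_min)) / norm

def gted_pdf_sketch(m, beta, m_min, cutoffs, weights):
    # Sketch of the GTED idea: mix identical exponential laws over a
    # distribution of cutoff (upper-bound) points. The paper uses a continuous
    # cutoff distribution; a discrete mixture is shown here for illustration.
    return sum(w * ted_pdf(m, beta, m_min, mc) for mc, w in zip(cutoffs, weights))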
Simplest little Higgs model revisited: Hidden mass relation, unitarity, and naturalness
NASA Astrophysics Data System (ADS)
Cheung, Kingman; He, Shi-Ping; Mao, Ying-nan; Zhang, Chen; Zhou, Yang
2018-06-01
We analyze the scalar potential of the simplest little Higgs (SLH) model in an approach consistent with the spirit of continuum effective field theory (CEFT). By requiring correct electroweak symmetry breaking (EWSB) with the 125 GeV Higgs boson, we are able to derive a relation between the pseudoaxion mass m_η and the heavy top mass m_T, which serves as a crucial test of the SLH mechanism. By requiring m_η² > 0, an upper bound on m_T can be obtained for any fixed SLH global symmetry breaking scale f. We also point out that an absolute upper bound on f can be obtained by imposing the partial wave unitarity constraint, which in turn leads to absolute upper bounds of m_T ≲ 19 TeV, m_η ≲ 1.5 TeV, and m_Z' ≲ 48 TeV. We present the allowed region in the three-dimensional parameter space characterized by f, t_β, m_T, taking into account the requirement of valid EWSB and the constraint from perturbative unitarity. We also propose a strategy of analyzing the fine-tuning problem consistent with the spirit of CEFT and apply it to the SLH. We suggest that the scalar potential and fine-tuning analysis strategies adopted here should also be applicable to a wide class of little Higgs and twin Higgs models, which may reveal interesting relations as crucial tests of the related EWSB mechanism and provide a new perspective on assessing their degree of fine-tuning.
Bounds on OPE coefficients from interference effects in the conformal collider
NASA Astrophysics Data System (ADS)
Córdova, Clay; Maldacena, Juan; Turiaci, Gustavo J.
2017-11-01
We apply the average null energy condition to obtain upper bounds on the three-point function coefficients of two stress tensors and a scalar operator, ⟨TTO_i⟩, in general CFTs. We also constrain the gravitational anomaly of U(1) currents in four-dimensional CFTs, which is encoded in three-point functions of the form ⟨TTJ⟩. In theories with a large N AdS dual we translate these bounds into constraints on the coefficient of a higher derivative bulk term of the form ∫ φ W². We speculate that these bounds also apply in de Sitter space. In this case our results constrain inflationary observables, such as the amplitude of chiral gravity waves that originate from higher derivative terms in the Lagrangian of the form φ W W*.
Reduced conservatism in stability robustness bounds by state transformation
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.; Liang, Z.
1986-01-01
This note addresses the issue of 'conservatism' in the time domain stability robustness bounds obtained by the Liapunov approach. A state transformation is employed to improve the upper bounds on the linear time-varying perturbation of an asymptotically stable linear time-invariant system for robust stability. This improvement is due to the variation of the conservatism of the Liapunov approach with respect to the basis of the vector space in which the Liapunov function is constructed. Improved bounds are obtained, using a transformation, on elemental and vector norms of perturbations (i.e., structured perturbations) as well as on a matrix norm of perturbations (i.e., unstructured perturbations). For the case of a diagonal transformation, an algorithm is proposed to find the 'optimal' transformation. Several examples are presented to illustrate the proposed analysis.
Generalized Hofmann quantum process fidelity bounds for quantum filters
NASA Astrophysics Data System (ADS)
Sedlák, Michal; Fiurášek, Jaromír
2016-04-01
We propose and investigate bounds on the quantum process fidelity of quantum filters, i.e., probabilistic quantum operations represented by a single Kraus operator K . These bounds generalize the Hofmann bounds on the quantum process fidelity of unitary operations [H. F. Hofmann, Phys. Rev. Lett. 94, 160504 (2005), 10.1103/PhysRevLett.94.160504] and are based on probing the quantum filter with pure states forming two mutually unbiased bases. Determination of these bounds therefore requires far fewer measurements than full quantum process tomography. We find that it is particularly suitable to construct one of the probe bases from the right eigenstates of K , because in this case the bounds are tight in the sense that if the actual filter coincides with the ideal one, then both the lower and the upper bounds are equal to 1. We theoretically investigate the application of these bounds to a two-qubit optical quantum filter formed by the interference of two photons on a partially polarizing beam splitter. For an experimentally convenient choice of factorized input states and measurements we study the tightness of the bounds. We show that more stringent bounds can be obtained by more sophisticated processing of the data using convex optimization and we compare our methods for different choices of the input probe states.
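For reference, the original Hofmann construction that these filter bounds generalize combines average output-state fidelities measured for probe states drawn from two mutually unbiased bases; a minimal sketch (the function name is ours, and the filter-specific generalization of the paper is not reproduced):

def hofmann_process_fidelity_bounds(f1, f2):
    # Hofmann-style bounds on the process fidelity F from the average state
    # fidelities f1 and f2 obtained with two mutually unbiased probe bases:
    # f1 + f2 - 1 <= F <= min(f1, f2).
    lower = max(f1 + f2 - 1.0, 0.0)
    upper = min(f1, f2)
    return lower, upper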
Lee, Hyunwook; Brendle, Sarah A.; Bywaters, Stephanie M.; Guan, Jian; Ashley, Robert E.; Yoder, Joshua D.; Makhov, Alexander M.; Conway, James F.; Christensen, Neil D.
2014-01-01
ABSTRACT Human papillomavirus 16 (HPV16) is a worldwide health threat and an etiologic agent of cervical cancer. To understand the antigenic properties of HPV16, we pursued a structural study to elucidate HPV capsids and antibody interactions. The cryo-electron microscopy (cryo-EM) structures of a mature HPV16 particle and an altered capsid particle were solved individually and as complexes with fragments of antibody (Fab) from the neutralizing antibody H16.V5. Fitted crystal structures provided a pseudoatomic model of the virus-Fab complex, which identified a precise footprint of H16.V5, including previously unrecognized residues. The altered-capsid–Fab complex map showed that binding of the Fab induced significant conformational changes that were not seen in the altered-capsid structure alone. These changes included more ordered surface loops, consolidated so-called “invading-arm” structures, and tighter intercapsomeric connections at the capsid floor. The H16.V5 Fab preferentially bound hexavalent capsomers, likely with a stabilizing effect that directly correlated with the number of bound Fabs. Additional cryo-EM reconstructions of the virus-Fab complex for different incubation times and structural analysis provide a model for hyperstabilization of the capsomer by the H16.V5 Fab and show that the Fab distinguishes subtle differences between antigenic sites. IMPORTANCE Our analysis of the cryo-EM reconstructions of the HPV16 capsids and virus-Fab complexes has identified the entire H16.V5 conformational epitope and demonstrated a detailed neutralization mechanism of this clinically important monoclonal antibody against HPV16. The Fab bound and ordered the apical loops of HPV16. This conformational change was transmitted to the lower region of the capsomer, resulting in enhanced intercapsomeric interactions evidenced by the more ordered capsid floor and “invading-arm” structures. This study advances the understanding of the neutralization mechanism used by H16.V5. PMID:25392224
A Multi-Armed Bandit Approach to Following a Markov Chain
2017-06-01
focus on the House to Café transition (p1,4). We develop a Multi-Armed Bandit approach for efficiently following this target, where each state takes the...and longitude (each state corresponding to a physical location and a small set of activities). The searcher would then apply our approach on this...the target’s transition probability and the true probability over time. Further, we seek to provide upper bounds (i.e., worst case bounds) on the
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for a sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows one (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, along with conditional sampling. In addition, an l_∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
Ionospheric Signatures in Radio Occultation Data
NASA Technical Reports Server (NTRS)
Mannucci, Anthony J.; Ao, Chi; Iijima, Byron A.; Kursinkski, E. Robert
2012-01-01
We can robustly extend the radio occultation data record by 6 years (+60%) by developing a single-frequency processing method for GPS/MET data. We will produce a calibrated data set with profile-by-profile data characterization to determine robust upper bounds on the ionospheric bias. This is part of an effort to produce a calibrated RO data set addressing other key error sources such as upper boundary initialization. Planned: AIRS-GPS water vapor cross validation (water vapor climatology and trends).
Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.
2016-02-01
A novel two-phase bounding and decomposition approach to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems is proposed; it considers a large number of operating subproblems, each of which is a convex optimization problem. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen's inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders' algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen's inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable, but the decomposition phase is required to attain tight optimality gaps. Moreover, using both phases performs better, in terms of convergence speed, than attempting to solve the problem using just the bounding phase or regular Benders decomposition separately.
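To make the bounding idea concrete, here is a toy sketch (the recourse function and all names are ours, not the paper's planning model): for a recourse cost that is convex in the uncertain parameter, Jensen's inequality evaluated at the scenario mean gives a lower bound on the expected cost, while a sub-sampled average gives a statistical upper estimate.

import random

def recourse_cost(x, xi):
    # Toy recourse cost, convex in the uncertain parameter xi for a fixed
    # first-stage decision x.
    return 10.0 * max(xi - x, 0.0) + 2.0 * x

def jensen_lower_bound(x, scenarios):
    # Jensen's inequality for a convex recourse: E[Q(x, xi)] >= Q(x, E[xi]).
    mean_xi = sum(scenarios) / len(scenarios)
    return recourse_cost(x, mean_xi)

def subsampled_upper_estimate(x, scenarios, n_samples=1000, seed=0):
    # Monte Carlo (sub-sampling) estimate of the expected recourse cost.
    rng = random.Random(seed)
    total = sum(recourse_cost(x, rng.choice(scenarios)) for _ in range(n_samples))
    return total / n_samples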
Topological quantum error correction in the Kitaev honeycomb model
NASA Astrophysics Data System (ADS)
Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.
2017-08-01
The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
NASA Astrophysics Data System (ADS)
Khoo, Geoffrey; Kuennemeyer, Rainer; Claycomb, Rod W.
2005-04-01
Currently, the state of the art of mastitis detection in dairy cows is the laboratory-based measurement of somatic cell count (SCC), which is time consuming and expensive. Alternative, rapid, and reliable on-farm measurement methods are required for effective farm management. We have investigated whether fluorescence lifetime measurements can determine SCC in fresh, unprocessed milk. The method is based on the change in fluorescence lifetime of ethidium bromide when it binds to DNA from the somatic cells. Milk samples were obtained from a Fullwood Merlin Automated Milking System and analysed within a twenty-four-hour period, over which the SCC does not change appreciably. For reference, the milk samples were also sent to a testing laboratory where the SCC was determined by traditional methods. The results show that we can quantify SCC using the fluorescence photon migration method from a lower bound of 4 × 10^5 cells mL^-1 to an upper bound of 1 × 10^7 cells mL^-1. The upper bound is due to the reference method used, while the cause of the lower bound is as yet unknown.
Record length requirement of long-range dependent teletraffic
NASA Astrophysics Data System (ADS)
Li, Ming
2017-04-01
This article contributes two main results. On the one hand, it presents a formula to compute the upper bound of the variance of the correlation periodogram measurement of teletraffic (traffic for short) with long-range dependence (LRD) for a given record length T and a given value of the Hurst parameter H (Theorems 1 and 2). On the other hand, it proposes two formulas for the computation of the variance upper bound of the correlation periodogram measurement of traffic of the fractional Gaussian noise (fGn) type and the generalized Cauchy (GC) type, respectively (Corollaries 1 and 2). These may constitute a reference guideline for the record length requirement of traffic with LRD. In addition, the record length requirement for the correlation periodogram measurement of traffic with either the Schuster-type or the Bartlett-type periodogram is studied, and the present results show that both types of periodograms may be used for the correlation measurement of traffic with a pre-desired variance bound of the correlation estimation. Moreover, real traffic in the Internet Archive by the Special Interest Group on Data Communication under the Association for Computing Machinery of the US (ACM SIGCOMM) is analyzed in the case study on this topic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fortes, Raphael; Rigolin, Gustavo, E-mail: rigolin@ifi.unicamp.br
We push the limits of the direct use of partially entangled pure states to perform quantum teleportation by presenting several protocols in many different scenarios that achieve the optimal efficiency possible. We review and put in a single formalism the three major strategies known to date that allow one to use partially entangled states for direct quantum teleportation (no distillation strategies permitted) and compare their efficiencies in real world implementations. We show how one can improve the efficiency of many direct teleportation protocols by combining these techniques. We then develop new teleportation protocols employing multipartite partially entangled states. The three techniques are also used here in order to achieve the highest efficiency possible. Finally, we prove the upper bound for the optimal success rate for protocols based on partially entangled Bell states and show that some of the protocols developed here achieve such a bound. -- Highlights: •Optimal protocols for direct teleportation using partially entangled states. •We put in a single formalism all strategies of direct teleportation. •We extend these techniques to multipartite partially entangled states. •We give upper bounds for the optimal efficiency of these protocols.
Performance analysis of optimal power allocation in wireless cooperative communication systems
NASA Astrophysics Data System (ADS)
Babikir Adam, Edriss E.; Samb, Doudou; Yu, Li
2013-03-01
Cooperative communication has recently been proposed in wireless communication systems for exploiting the inherent spatial diversity in relay channels. The Amplify-and-Forward (AF) cooperation protocols with multiple relays have not been sufficiently investigated, even though they have a low implementation complexity. In this work we consider a cooperative diversity system in which a source transmits information to a destination with the help of multiple relay nodes using AF protocols, and investigate the optimal allocation of power at both the source and the relays by optimizing the symbol error rate (SER) performance in an efficient way. We first derive a closed-form SER formulation for MPSK signals using the moment generating function and some statistical approximations at high signal-to-noise ratio (SNR) for the system under study. We then find a tight corresponding lower bound which converges to the same limit as the theoretical upper bound, and develop an optimal power allocation (OPA) technique with mean channel gains to minimize the SER. Simulation results show that our scheme outperforms the equal power allocation (EPA) scheme and is tight to the theoretical approximation based on the SER upper bound at high SNR for different numbers of relays.
NASA Astrophysics Data System (ADS)
Lukey, B. T.; Sheffield, J.; Bathurst, J. C.; Lavabre, J.; Mathys, N.; Martin, C.
1995-08-01
The sediment yield of two catchments in southern France was modelled using the newly developed sediment code of SHETRAN. A fire in August 1990 denuded the Rimbaud catchment, providing an opportunity to study the effect of vegetation cover on sediment yield by running the model for both pre- and post-fire cases. Model output is in the form of upper and lower bounds on sediment discharge, reflecting the uncertainty in the erodibility of the soil. The results are encouraging since the measured sediment discharge falls largely between the predicted bounds, and the simulated sediment yield is dramatically lower for the catchment before the fire, which matches observation. SHETRAN is also applied to the Laval catchment, which is subject to badlands gully erosion. Again using the principle of generating upper and lower bounds on sediment discharge, the model is shown to be capable of predicting the bulk sediment discharge over periods of months. To simulate the effect of reforestation, the model is run with vegetation cover equivalent to that of a neighbouring fully forested basin. The results obtained indicate that SHETRAN provides a powerful tool for predicting the impact of environmental change and land management on sediment yield.
Existence and amplitude bounds for irrotational water waves in finite depth
NASA Astrophysics Data System (ADS)
Kogelbauer, Florian
2017-12-01
We prove the existence of solutions to the irrotational water-wave problem in finite depth and derive an explicit upper bound on the amplitude of the nonlinear solutions in terms of the wavenumber, the total hydraulic head, the wave speed and the relative mass flux. Our approach relies upon a reformulation of the water-wave problem as a one-dimensional pseudo-differential equation and the Newton-Kantorovich iteration for Banach spaces. This article is part of the theme issue 'Nonlinear water waves'.
Entanglement polygon inequality in qubit systems
NASA Astrophysics Data System (ADS)
Qian, Xiao-Feng; Alonso, Miguel A.; Eberly, J. H.
2018-06-01
We prove a set of tight entanglement inequalities for arbitrary N-qubit pure states. By focusing on all bi-partite marginal entanglements between each single qubit and its remaining partners, we show that the inequalities provide an upper bound for each marginal entanglement, while the known monogamy relation establishes the lower bound. The restrictions and sharing properties associated with the inequalities are further analyzed with a geometric polytope approach, and examples of three-qubit GHZ-class and W-class entangled states are presented to illustrate the results.
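Schematically (our notation, not a verbatim statement of the paper's result), a polygon-type inequality of this kind bounds each single-qubit marginal entanglement by the sum of the others,

E_k \;\le\; \sum_{j \neq k} E_j, \qquad k = 1, \dots, N,

while monogamy-type relations supply the corresponding lower bounds.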
Quantum Speed Limits across the Quantum-to-Classical Transition
NASA Astrophysics Data System (ADS)
Shanahan, B.; Chenu, A.; Margolus, N.; del Campo, A.
2018-02-01
Quantum speed limits set an upper bound to the rate at which a quantum system can evolve. Adopting a phase-space approach, we explore quantum speed limits across the quantum-to-classical transition and identify equivalent bounds in the classical world. As a result, and contrary to common belief, we show that speed limits exist for both quantum and classical systems. As in the quantum domain, classical speed limits are set by a given norm of the generator of time evolution.
Bounds on the cross-correlation functions of state m-sequences
NASA Astrophysics Data System (ADS)
Woodcock, C. F.; Davies, Phillip A.; Shaar, Ahmed A.
1987-03-01
Lower and upper bounds on the peaks of the periodic Hamming cross-correlation function for state m-sequences, which are often used in frequency-hopped spread-spectrum systems, are derived. The state position mapped (SPM) sequences of the state m-sequences are described. The use of SPM sequences for OR-channel code division multiplexing is studied. The relation between the Hamming cross-correlation function and the correlation function of SPM sequence is examined. Numerical results which support the theoretical data are presented.
Hybrid Theory of Electron-Hydrogenic Systems Elastic Scattering
NASA Technical Reports Server (NTRS)
Bhatia, A. K.
2007-01-01
Accurate electron-hydrogen and electron-hydrogenic cross sections are required to interpret fusion experiments, laboratory plasma physics and properties of the solar and astrophysical plasmas. We have developed a method in which the short-range and long-range correlations can be included at the same time in the scattering equations. The phase shifts have rigorous lower bounds and the scattering lengths have rigorous upper bounds. The phase shifts in the resonance region can be used to calculate very accurately the resonance parameters.
DD-bar production and their interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Yanrui; Oka, Makoto; Takizawa, Makoto
2011-05-23
We have explored the bound state problem and the scattering problem of the DD-bar pair in a meson exchange model. When considering their production in the e^+ e^- process, we included the DD-bar rescattering effect. Although it is difficult to answer whether the S-wave DD-bar bound state exists or not from the binding energies and the phase shifts, one may get an upper limit of the binding energy from the production of the BB-bar, the bottom analog of DD-bar.
Thin-wall approximation in vacuum decay: A lemma
NASA Astrophysics Data System (ADS)
Brown, Adam R.
2018-05-01
The "thin-wall approximation" gives a simple estimate of the decay rate of an unstable quantum field. Unfortunately, the approximation is uncontrolled. In this paper I show that there are actually two different thin-wall approximations and that they bracket the true decay rate: I prove that one is an upper bound and the other a lower bound. In the thin-wall limit, the two approximations converge. In the presence of gravity, a generalization of this lemma provides a simple sufficient condition for nonperturbative vacuum instability.
A Note on the Kirchhoff and Additive Degree-Kirchhoff Indices of Graphs
NASA Astrophysics Data System (ADS)
Yang, Yujun; Klein, Douglas J.
2015-06-01
Two resistance-distance-based graph invariants, namely, the Kirchhoff index and the additive degree-Kirchhoff index, are studied. A relation between them is established, with inequalities for the additive degree-Kirchhoff index arising via the Kirchhoff index along with minimum, maximum, and average degrees. Bounds for the Kirchhoff and additive degree-Kirchhoff indices are also determined, and extremal graphs are characterised. In addition, an upper bound for the additive degree-Kirchhoff index is established to improve a previously known result.
NASA Technical Reports Server (NTRS)
Mitchell, J. R.
1972-01-01
The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.
Tunç, Cemil; Tunç, Osman
2016-01-01
In this paper, a certain system of second-order linear homogeneous differential equations is considered. By using integral inequalities, some new criteria for bounded and [Formula: see text]-solutions, and upper bounds for the values of improper integrals of the solutions and their derivatives, are established for the considered system. The results obtained in this paper extend those of Kroopnick (2014) [1]. An example is given to illustrate the obtained results.
Blow-up of solutions to a quasilinear wave equation for high initial energy
NASA Astrophysics Data System (ADS)
Li, Fang; Liu, Fang
2018-05-01
This paper deals with blow-up solutions to a nonlinear hyperbolic equation with variable exponent of nonlinearities. By constructing a new control function and using energy inequalities, the authors obtain a lower bound estimate of the L² norm of the solution. Furthermore, concavity arguments are used to prove the nonexistence of solutions, and at the same time an estimate of the upper bound of the blow-up time is obtained. This result extends and improves those of [1,2].
Vertical structure of tropospheric winds on gas giants
NASA Astrophysics Data System (ADS)
Scott, R. K.; Dunkerton, T. J.
2017-04-01
Zonal mean zonal velocity profiles from cloud-tracking observations on Jupiter and Saturn are used to infer latitudinal variations of potential temperature consistent with a shear stable potential vorticity distribution. Immediately below the cloud tops, density stratification is weaker on the poleward and stronger on the equatorward flanks of midlatitude jets, while at greater depth the opposite relation holds. Thermal wind balance then yields the associated vertical shears of midlatitude jets in an altitude range bounded above by the cloud tops and bounded below by the level where the latitudinal gradient of static stability changes sign. The inferred vertical shear below the cloud tops is consistent with existing thermal profiling of the upper troposphere. The sense of the associated mean meridional circulation in the upper troposphere is discussed, and expected magnitudes are given based on existing estimates of the radiative timescale on each planet.
Gravitating Q-balls in the Affleck-Dine mechanism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamaki, Takashi; Sakai, Nobuyuki; Department of Education, Yamagata University, Yamagata 990-8560
2011-04-15
We investigate how gravity affects "Q-balls" with the Affleck-Dine potential V_AD(φ) := (m²/2) φ² [1 + K ln(φ/M)²]. Contrary to the flat case, in which equilibrium solutions exist only if K < 0, we find three types of gravitating solutions as follows. In the case that K < 0, ordinary Q-ball solutions exist; there is an upper bound of the charge due to gravity. In the case that K = 0, equilibrium solutions called (mini-)boson stars appear due to gravity; there is an upper bound of the charge, too. In the case that K > 0, equilibrium solutions appear, too. In this case, these solutions are not asymptotically flat but surrounded by Q-matter. These solutions might be important in considering a dark matter scenario in the Affleck-Dine mechanism.
Precision Measurement of the Electron's Electric Dipole Moment Using Trapped Molecular Ions.
Cairncross, William B; Gresh, Daniel N; Grau, Matt; Cossel, Kevin C; Roussy, Tanya S; Ni, Yiqi; Zhou, Yan; Ye, Jun; Cornell, Eric A
2017-10-13
We describe the first precision measurement of the electron's electric dipole moment (d_e) using trapped molecular ions, demonstrating the application of spin interrogation times over 700 ms to achieve high sensitivity and stringent rejection of systematic errors. Through electron spin resonance spectroscopy on ^180Hf^19F^+ in its metastable ^3Δ_1 electronic state, we obtain d_e = (0.9 ± 7.7_stat ± 1.7_syst) × 10^-29 e cm, resulting in an upper bound of |d_e| < 1.3 × 10^-28 e cm (90% confidence). Our result provides independent confirmation of the current upper bound of |d_e| < 9.4 × 10^-29 e cm [J. Baron et al., New J. Phys. 19, 073029 (2017), 10.1088/1367-2630/aa708e], and offers the potential to improve on this limit in the near future.
Insights into the Earth System mass variability from CSR-RL05 GRACE gravity fields
NASA Astrophysics Data System (ADS)
Bettadpur, S.
2012-04-01
The next-generation Release-05 GRACE gravity field data products are the result of extensive effort applied to improvements to the GRACE Level-1 (tracking) data products and to improvements in the background gravity models and processing methodology. As a result, the squared-error upper bound of the RL05 fields is half or less of that of the RL04 fields. The CSR-RL05 field release consists of unconstrained gravity fields as well as a regularized gravity field time-series that can be used for several applications without any post-processing error reduction. This paper will describe the background and the nature of these improvements in the data products, and provide an error characterization. We will describe the insights these new series offer in measuring the mass flux due to diverse Hydrologic, Oceanographic and Cryospheric processes.
Potential-field sounding using Euler's homogeneity equation and Zidarov bubbling
Cordell, Lindrith
1994-01-01
Potential-field (gravity) data are transformed into a physical-property (density) distribution in a lower half-space, constrained solely by assumed upper bounds on physical-property contrast and data error. A two-step process is involved. The data are first transformed to an equivalent set of line (2-D case) or point (3-D case) sources, using Euler's homogeneity equation evaluated iteratively on the largest residual data value. Then, mass is converted to a volume-density product, constrained to an upper density bound, by 'bubbling,' which exploits circular or radial expansion to redistribute density without changing the associated gravity field. The method can be developed for gravity or magnetic data in two or three dimensions. The results can provide a beginning for interpretation of potential-field data where few independent constraints exist, or more likely, can be used to develop models and confirm or extend interpretation of other geophysical data sets.
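For context, Euler's homogeneity equation as it is commonly written in potential-field depth estimation (our notation; the paper's iterative largest-residual implementation is not reproduced here) is

(x - x_0)\,\frac{\partial T}{\partial x} + (y - y_0)\,\frac{\partial T}{\partial y} + (z - z_0)\,\frac{\partial T}{\partial z} \;=\; -N\,(T - B),

where (x_0, y_0, z_0) is the equivalent source location, N is the structural index (degree of homogeneity), and B is a regional background level.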
Search for violations of quantum mechanics
Ellis, John; Hagelin, John S.; Nanopoulos, D. V.; ...
1984-07-01
The treatment of quantum effects in gravitational fields indicates that pure states may evolve into mixed states, and Hawking has proposed a modification of the axioms of field theory which incorporates the corresponding violation of quantum mechanics. In this study we propose a modified hamiltonian equation of motion for density matrices and use it to interpret upper bounds on the violation of quantum mechanics in different phenomenological situations. We apply our formalism to the K⁰-K̄⁰ system and to long baseline neutron interferometry experiments. In both cases we find upper bounds of about 2 × 10^-21 GeV on contributions to the single particle "hamiltonian" which violate quantum mechanical coherence. We discuss how these limits might be improved in the future, and consider the relative significance of other successful tests of quantum mechanics. Finally, an appendix contains model estimates of the magnitude of effects violating quantum mechanics.
DD production and their interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Yanrui; Oka, Makoto; Takizawa, Makoto
2010-07-01
S- and P-wave DD scatterings are studied in a meson exchange model with the coupling constants obtained in the heavy quark effective theory. With the extracted P-wave phase shifts and the separable potential approximation, we include the DD rescattering effect and investigate the production process e^+ e^- → DD. We find that it is difficult to explain the anomalous line shape observed by the BES Collaboration with this mechanism. Combining our model calculation and the experimental measurement, we estimate the upper limit of the nearly universal cutoff parameter to be around 2 GeV. With this number, the upper limits of the binding energies of the S-wave DD and BB bound states are obtained. Assuming that the S-wave and P-wave interactions rely on the same cutoff, our study provides a way of extracting information about S-wave molecular bound states from P-wave meson pair production.
An Analytical Framework for Runtime of a Class of Continuous Evolutionary Algorithms.
Zhang, Yushan; Hu, Guiwu
2015-01-01
Although there have been many studies on the runtime of evolutionary algorithms in discrete optimization, relatively few theoretical results have been proposed for continuous optimization, such as evolutionary programming (EP). This paper proposes an analysis of the runtime of two EP algorithms based on Gaussian and Cauchy mutations, using an absorbing Markov chain. Given a constant variation, we calculate the runtime upper bound of special Gaussian mutation EP and Cauchy mutation EP. Our analysis reveals that the upper bounds are affected by the number of individuals, the problem dimension n, the search range, and the Lebesgue measure of the optimal neighborhood. Furthermore, we provide conditions under which the average runtime of the considered EP is no more than a polynomial in n. The condition is that the Lebesgue measure of the optimal neighborhood is larger than a combinatorial calculation of an exponential and the given polynomial of n.
NASA Astrophysics Data System (ADS)
Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.
2018-02-01
Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
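As a minimal illustration of how interval-valued (random set) samples yield lower and upper failure probabilities (a plain Monte Carlo sketch with our own names; the paper accelerates this step with subset simulation):

def failure_probability_bounds(focal_elements):
    # Each focal element is an interval (g_min, g_max) of limit-state values
    # obtained by propagating one random-set sample through the limit state
    # function g, with failure defined as g <= 0.
    n = len(focal_elements)
    lower = sum(1 for g_min, g_max in focal_elements if g_max <= 0.0) / n  # belief
    upper = sum(1 for g_min, g_max in focal_elements if g_min <= 0.0) / n  # plausibility
    return lower, upper

# Example with three interval samples of the limit-state value.
print(failure_probability_bounds([(-0.2, 0.1), (-0.5, -0.1), (0.3, 0.9)]))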
Universal charge-radius relation for subatomic and astrophysical compact objects.
Madsen, Jes
2008-04-18
Electron-positron pair creation in supercritical electric fields limits the net charge of any static, spherical object, such as superheavy nuclei, strangelets, and Q balls, or compact stars like neutron stars, quark stars, and black holes. For radii between 4 × 10² and 10⁴ fm the upper bound on the net charge is given by the universal relation Z = 0.71 R(fm), and for larger radii (measured in femtometers or kilometers) Z = 7 × 10⁻⁵ R²(fm) = 7 × 10³¹ R²(km). For objects with nuclear density the relation corresponds to Z ≈ 0.7 A^(1/3) (10⁸ ≲ A ≲ 10¹²), where A is the baryon number. For some systems this universal upper bound improves existing charge limits in the literature.
Crustal volumes of the continents and of oceanic and continental submarine plateaus
NASA Technical Reports Server (NTRS)
Schubert, G.; Sandwell, D.
1989-01-01
Using global topographic data and the assumption of Airy isostasy, it is estimated that the crustal volume of the continents is 7182 × 10⁶ km³. The crustal volumes of the oceanic and continental submarine plateaus are calculated at 369 × 10⁶ km³ and 242 × 10⁶ km³, respectively. The total continental crustal volume is found to be 7581 × 10⁶ km³, 3.2 percent of which is comprised of continental submarine plateaus on the seafloor. An upper bound on the continental crust addition rate by the accretion of oceanic plateaus is set at 3.7 km³/yr. Subduction of continental submarine plateaus with the oceanic lithosphere on a 100 Myr time scale yields an upper bound to the continental crustal subtraction rate of 2.4 km³/yr.
Comparison of various techniques for calibration of AIS data
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Yamaguchi, Y.; Lyon, R. J. P.
1986-01-01
The Airborne Imaging Spectrometer (AIS) samples a spectral region which is strongly influenced by decreasing solar irradiance at longer wavelengths and by strong atmospheric absorptions. Four techniques, the Log Residual, the Least Upper Bound Residual, the Flat Field Correction, and calibration using field reflectance measurements, were investigated as means for removing these two features. Of the four techniques, field reflectance calibration proved to be superior in terms of noise and normalization. Of the other three techniques, the Log Residual was superior when applied to areas which did not contain one dominant cover type. In heavily vegetated areas, the Log Residual proved to be ineffective. After removing anomalously bright data values, the Least Upper Bound Residual proved to be almost as effective as the Log Residual in sparsely vegetated areas and much more effective in heavily vegetated areas. Of all the techniques, the Flat Field Correction was the noisiest.
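For orientation, minimal sketches of two of the scene-based normalizations named above (our own array conventions and function names; the paper's exact formulations, including the Least Upper Bound Residual, are not reproduced):

import numpy as np

def log_residuals(dn):
    # dn: radiance/DN cube with shape (pixels, bands). A common log-residual
    # formulation: log(DN) minus the per-pixel spectral mean and the per-band
    # scene mean, plus the overall mean.
    logdn = np.log(np.clip(dn, 1e-6, None))
    return (logdn
            - logdn.mean(axis=1, keepdims=True)
            - logdn.mean(axis=0, keepdims=True)
            + logdn.mean())

def flat_field_correction(dn, flat_pixel_indices):
    # Divide every spectrum by the mean spectrum of a spectrally "flat" area.
    flat_spectrum = dn[flat_pixel_indices].mean(axis=0, keepdims=True)
    return dn / flat_spectrum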
Isotope-abundance variations and atomic weights of selected elements: 2016 (IUPAC Technical Report)
Coplen, Tyler B.; Shrestha, Yesha
2016-01-01
There are 63 chemical elements that have two or more isotopes that are used to determine their standard atomic weights. The isotopic abundances and atomic weights of these elements can vary in normal materials due to physical and chemical fractionation processes (not due to radioactive decay). These variations are well known for 12 elements (hydrogen, lithium, boron, carbon, nitrogen, oxygen, magnesium, silicon, sulfur, chlorine, bromine, and thallium), and the standard atomic weight of each of these elements is given by IUPAC as an interval with lower and upper bounds. Graphical plots of selected materials and compounds of each of these elements have been published previously. Herein and at the URL http://dx.doi.org/10.5066/F7GF0RN2, we provide isotopic abundances, isotope-delta values, and atomic weights for each of the upper and lower bounds of these materials and compounds.
Constructions for finite-state codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.
1987-01-01
A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a d_free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.
An upper-bound assessment of the benefits of reducing perchlorate in drinking water.
Lutter, Randall
2014-10-01
The Environmental Protection Agency plans to issue new federal regulations to limit drinking water concentrations of perchlorate, which occurs naturally and results from the combustion of rocket fuel. This article presents an upper-bound estimate of the potential benefits of alternative maximum contaminant levels for perchlorate in drinking water. The results suggest that the economic benefits of reducing perchlorate concentrations in drinking water are likely to be low, i.e., under $2.9 million per year nationally, for several reasons. First, the prevalence of detectable perchlorate in public drinking water systems is low. Second, the population especially sensitive to effects of perchlorate, pregnant women who are moderately iodide deficient, represents a minority of all pregnant women. Third, and perhaps most importantly, reducing exposure to perchlorate in drinking water is a relatively ineffective way of increasing iodide uptake, a crucial step linking perchlorate to health effects of concern. © 2014 Society for Risk Analysis.
Fault-tolerant clock synchronization validation methodology. [in computer systems
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.
1987-01-01
A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
Gauge mediation at the LHC: status and prospects
Knapen, Simon; Redigolo, Diego
2017-01-30
We show that the predictivity of general gauge mediation (GGM) with TeV-scale stops is greatly increased once the Higgs mass constraint is imposed. The most notable results are a strong lower bound on the mass of the gluino and right-handed squarks, and an upper bound on the Higgsino mass. If the μ-parameter is positive, the wino mass is also bounded from above. These constraints relax significantly for high messenger scales, and as such long-lived NLSPs are favored in GGM. We identify a small set of most promising topologies for the neutralino/sneutrino NLSP scenarios and estimate the impact of the current bounds and the sensitivity of the high luminosity LHC. The stau, stop and sbottom NLSP scenarios can be robustly excluded at the high luminosity LHC.
On the Inequalities of Babuška-Aziz, Friedrichs and Horgan-Payne
NASA Astrophysics Data System (ADS)
Costabel, Martin; Dauge, Monique
2015-09-01
The equivalence between the inequalities of Babuška-Aziz and Friedrichs for sufficiently smooth bounded domains in the plane was shown by Horgan and Payne 30 years ago. We prove that this equivalence, and the equality between the associated constants, is true without any regularity condition on the domain. For the Horgan-Payne inequality, which is an upper bound of the Friedrichs constant for plane star-shaped domains in terms of a geometric quantity known as the Horgan-Payne angle, we show that it is true for some classes of domains, but not for all bounded star-shaped domains. We prove a weaker inequality that is true in all cases.
A simple method for assessing occupational exposure via the one-way random effects model.
Krishnamoorthy, K; Mathew, Thomas; Peng, Jie
2016-11-01
A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
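The core of the MOVER idea is to combine separate confidence intervals for the components of a sum of parameters into an interval for that sum. The Python sketch below shows this combination step for the log-scale overall mean exposure written as mu + 0.5*sigma_b^2 + 0.5*sigma_w^2; the component estimates and their individual intervals are illustrative numbers, and the paper's exact closed-form component intervals are not reproduced here.

# Minimal sketch of the MOVER combination step (method of variance estimates
# recovery): turn confidence intervals for components theta_1, ..., theta_k
# into an interval for their sum. The exposure example (log-scale mean plus
# half the between- and within-worker variances) is an assumed setup for
# illustration, not the paper's closed-form procedure.
import math

def mover_sum_ci(estimates, lowers, uppers):
    total = sum(estimates)
    L = total - math.sqrt(sum((e - l) ** 2 for e, l in zip(estimates, lowers)))
    U = total + math.sqrt(sum((u - e) ** 2 for e, u in zip(estimates, uppers)))
    return L, U

# Components of log(overall mean exposure): mu, 0.5*sigma_b^2, 0.5*sigma_w^2
# (estimates and their individual 95% intervals are illustrative numbers).
est   = [1.20, 0.5 * 0.30, 0.5 * 0.50]
lower = [0.95, 0.5 * 0.12, 0.5 * 0.38]
upper = [1.45, 0.5 * 0.80, 0.5 * 0.68]

L, U = mover_sum_ci(est, lower, upper)
print("CI for overall mean exposure:", math.exp(L), math.exp(U))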
NASA Astrophysics Data System (ADS)
Basu, Biswajit
2017-12-01
Bounds on estimates of wave heights (valid for large amplitudes) from pressure and flow measurements at an arbitrary intermediate depth are provided. Two-dimensional irrotational steady water waves over a flat bed of finite depth, in the presence of underlying uniform currents, are considered in the analysis. Five different upper bounds based on a combination of pressure and velocity field measurements are derived, whereas only one lower bound on the wave height is available, for the case in which the current speed is either greater than or less than the wave speed. This article is part of the theme issue 'Nonlinear water waves'.
A communication channel model of the software process
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1988-01-01
Reported here is beginning research into a noisy communication channel analogy of software development process productivity, in order to establish quantifiable behavior and theoretical bounds. The analogy leads to a fundamental mathematical relationship between human productivity and the amount of information supplied by the developers, the capacity of the human channel for processing and transmitting information, the software product yield (object size), the work effort, requirements efficiency, tool and process efficiency, and programming environment advantage. Also derived is an upper bound on productivity showing that software reuse is the only means that can lead to unbounded productivity growth; practical considerations of size and cost of reusable components may reduce this to a finite bound.
A communication channel model of the software process
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1988-01-01
Beginning research into a noisy communication channel analogy of software development process productivity, in order to establish quantifiable behavior and theoretical bounds, is discussed. The analogy leads to a fundamental mathematical relationship between human productivity and the amount of information supplied by the developers, the capacity of the human channel for processing and transmitting information, the software product yield (object size), the work effort, requirements efficiency, tool and process efficiency, and programming environment advantage. An upper bound to productivity is derived that shows that software reuse is the only means that can lead to unbounded productivity growth; practical considerations of size and cost of reusable components may reduce this to a finite bound.
A passivity criterion for sampled-data bilateral teleoperation systems.
Jazayeri, Ali; Tavakoli, Mahdi
2013-01-01
A teleoperation system consists of a teleoperator, a human operator, and a remote environment. Conditions involving system and controller parameters that ensure teleoperator passivity can serve as control design guidelines to attain maximum teleoperation transparency while maintaining system stability. In this paper, sufficient conditions for teleoperator passivity are derived for the case in which position error-based controllers are implemented in discrete time. This new analysis is necessary because discretization causes energy leaks and does not necessarily preserve the passivity of the system. The proposed criterion for sampled-data teleoperator passivity imposes lower bounds on the damping of the teleoperator's robots, an upper bound on the sampling time, and bounds on the control gains. The criterion is verified through simulations and experiments.
Reliability Estimating Procedures for Electric and Thermochemical Propulsion Systems. Volume 1
1977-02-01
[Extraction fragment: contributing organizations include The Marquardt Company, NASA Goddard Space Flight Center, RCA Astro Electronics, Rockwell International, and the Applied Physics Laboratory; a table of failure-rate means and bounds (5% lower bound, median, mean, 95% upper bound) reports values on the order of 10^-6 per cycle.]
POLARBEAR constraints on cosmic birefringence and primordial magnetic fields
Ade, Peter A. R.; Arnold, Kam; Atlas, Matt; ...
2015-12-08
Here, we constrain anisotropic cosmic birefringence using four-point correlations of even-parity E-mode and odd-parity B-mode polarization in the cosmic microwave background measurements made by the POLARization of the Background Radiation (POLARBEAR) experiment in its first season of observations. We find that the anisotropic cosmic birefringence signal from any parity-violating processes is consistent with zero. The Faraday rotation from anisotropic cosmic birefringence can be compared with the equivalent quantity generated by primordial magnetic fields if they existed. The POLARBEAR nondetection translates into a 95% confidence level (C.L.) upper limit of 93 nanogauss (nG) on the amplitude of an equivalent primordial magnetic field, inclusive of systematic uncertainties. This four-point correlation constraint on Faraday rotation is about 15 times tighter than the upper limit of 1380 nG inferred from constraining the contribution of Faraday rotation to two-point correlations of B-modes measured by Planck in 2015. Metric perturbations sourced by primordial magnetic fields would also contribute to the B-mode power spectrum. Using the POLARBEAR measurements of the B-mode power spectrum (two-point correlation), we set a 95% C.L. upper limit of 3.9 nG on primordial magnetic fields assuming a flat prior on the field amplitude. This limit is comparable to what was found in the Planck 2015 two-point correlation analysis with both temperature and polarization. Finally, we perform a set of systematic error tests and find no evidence for contamination. This work marks the first time that anisotropic cosmic birefringence or primordial magnetic fields have been constrained from the ground at subdegree scales.
Ada (Trade Name)/SQL (Structured Query Language) Binding Specification
1988-06-01
[Extraction fragment: the binding's package ADA_SQL declares types such as type EMPLOYEE_NAME is new STRING (1 .. 30); type BOSS_NAME is new EMPLOYEE_NAME; type EMPLOYEE_SALARY is digits 7 range 0.00 .. ...; the digits clause specifies the minimum number of significant decimal digits, and all real numbers between the lower and upper bounds, inclusive, belong to the subtype. A grammar fragment defines <character> ::= <digit> | <letter> | <special character>, <digit> ::= 0|1|2|3|4|5|6|7|8|9, and <letter> ::= <upper case ...]
Characterization of Seismic Noise at Selected Non-Urban Sites
2010-03-01
[Figure: field sites for seismic recordings: Scottish moor (upper left), Enfield, NH (upper right), and vicinity of Keele, England (bottom).] The three sites are a wind farm on a remote moor in Scotland, a ~13-acre field bounded by woods in a rural Enfield, NH, neighborhood, and a site transitional from developed land to farmland within 1 km of the six-lane M6 motorway near Keele, England.
Standardization of carbon-phenolic composite test methodology
NASA Technical Reports Server (NTRS)
Hall, W. B.
1986-01-01
The objective of this study was to evaluate the residual volatiles, filler content, and resin flow test procedures for carbon-phenolic prepreg materials. The residual volatile test procedure was rewritten with tighter procedure control which was then evaluated by round robin testing by four laboratories on the same rolls of prepreg. Results indicated that the residual volatiles test was too operator and equipment dependent to be reliable, and it was recommended that the test be discontinued. The resin flow test procedures were rewritten with tighter procedure control, and it is now considered to be an acceptable test. It was recommended that the filler content determination be made prior to prepregging.
NASA Astrophysics Data System (ADS)
Stolzenburg, Maribeth; Marshall, Thomas C.; Karunarathne, Sumedhe; Orville, Richard E.
2018-10-01
Using video data recorded at 50,000 frames per second for nearby negative lightning flashes, estimates are derived for the length of positive upward connecting leaders (UCLs) that presumably formed prior to new ground attachments. Return strokes were 1.7 to 7.8 km distant, yielding image resolutions of 4.25 to 19.5 m. No UCLs are imaged in these data, indicating those features were too transient or too dim compared to other lightning processes that are imaged at these resolutions. Upper bound lengths for 17 presumed UCLs are determined from the height above flat ground or water of the successful stepped leader tip in the image immediately prior to (within 20 μs before) the return stroke. Better estimates of maximum UCL lengths are determined using the downward stepped leader tip's speed of advance and the estimated return stroke time within its first frame. For 17 strokes, the upper bound length of the possible UCL averages 31.6 m and ranges from 11.3 to 50.3 m. Among the close strokes (those with spatial resolution <8 m per pixel), the five which connected to water (salt water lagoon) have UCL upper bound estimates averaging significantly shorter (24.1 m) than the average for the three close strokes which connected to land (36.9 m). The better estimates of maximum UCL lengths for the eight close strokes average 20.2 m, with a slightly shorter average of 18.3 m for the five that connected to water. All the better estimates of UCL maximum lengths are <38 m in this dataset.
NASA Astrophysics Data System (ADS)
Wakabayashi, Kazuyuki; Nakano, Saho; Soga, Kouichi; Hoson, Takayuki
Lignin is a component of the cell walls of terrestrial plants that provides cell walls with mechanical rigidity. Lignin is a phenolic polymer with high molecular mass and is formed by the polymerization of phenolic substances on a cellulosic matrix. The polymerization is catalyzed by cell wall-bound peroxidase, and thus the activity of this enzyme regulates the rate of lignin formation. In the present study, the changes in the lignin content and the activity of cell wall peroxidase were investigated along epicotyls of azuki bean seedlings grown under hypergravity conditions. The endogenous growth occurred primarily in the upper regions of the epicotyl, and no growth was detected in the middle or basal regions. The amount of acetyl bromide-soluble lignin increased from the upper to the basal regions of epicotyls. The lignin content per unit length in the basal region was three times higher than that in the upper region. Hypergravity treatment at 300 g for 6 h stimulated the increase in the lignin content in all regions of epicotyls, particularly in the basal regions. The peroxidase activity in the protein fraction extracted from the cell wall preparation with a high ionic strength buffer also increased gradually toward the basal region, and hypergravity treatment clearly increased the activity in all regions. There was a close correlation between the lignin content and the enzyme activity. These results suggest that gravity stimuli modulate the activity of cell wall-bound peroxidase, which, in turn, causes the stimulation of lignin formation in stem organs.
Thermalization Time Bounds for Pauli Stabilizer Hamiltonians
NASA Astrophysics Data System (ADS)
Temme, Kristan
2017-03-01
We prove a general lower bound on the spectral gap of the Davies generator for Hamiltonians that can be written as the sum of commuting Pauli operators. These Hamiltonians, defined on the Hilbert space of N qubits, serve as one of the most frequently considered candidates for a self-correcting quantum memory. A spectral gap bound on the Davies generator establishes an upper limit on the lifetime of such a quantum memory and can be used to estimate the time until the system relaxes to thermal equilibrium when brought into contact with a thermal heat bath. The bound can be shown to behave as λ ≥ O(N⁻¹ exp(−2β ε̄)), where ε̄ is a generalization of the well-known energy barrier for logical operators. Particularly in the low temperature regime we expect this bound to provide the correct asymptotic scaling of the gap with the system size up to a factor of N⁻¹. Furthermore, we discuss conditions and provide scenarios where this factor can be removed and a constant lower bound can be proven.
NASA Astrophysics Data System (ADS)
Ebrahimnejad, Ali
2015-08-01
There are several methods in the literature for solving fuzzy variable linear programming problems (fuzzy linear programs in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out, and to overcome these shortcomings a new method based on the bounded dual simplex method is proposed to determine the fuzzy optimal solution of fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. Also, one application of this algorithm to solving bounded transportation problems with fuzzy supplies and demands is dealt with. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations. (See the crisp sketch below for the bounded-variable structure.)
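For readers unfamiliar with the bounded-variable structure underlying the method, the following crisp (non-fuzzy) sketch solves a small linear program with explicit lower and upper bounds on the variables using scipy; it is only an analogue of the setting, since the paper's algorithm additionally carries trapezoidal fuzzy right-hand sides and variables through a bounded dual simplex. All numbers are illustrative.

# Minimal crisp sketch of a bounded-variable LP (not the paper's fuzzy method).
from scipy.optimize import linprog

c = [-3.0, -2.0]                 # maximize 3*x1 + 2*x2  ->  minimize -(3*x1 + 2*x2)
A_ub = [[1.0, 1.0],
        [2.0, 1.0]]
b_ub = [10.0, 16.0]
bounds = [(1.0, 6.0),            # 1 <= x1 <= 6 (lower/upper bounds on the variable)
          (0.0, 8.0)]            # 0 <= x2 <= 8

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)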
Stability of Nonlinear Systems with Unknown Time-varying Feedback Delay
NASA Astrophysics Data System (ADS)
Chunodkar, Apurva A.; Akella, Maruthi R.
2013-12-01
This paper considers the problem of stabilizing a class of nonlinear systems with unknown bounded delayed feedback wherein the time-varying delay is (1) piecewise constant or (2) continuous with a bounded rate. We also consider the application of these results to the stabilization of rigid-body attitude dynamics. In the first case, the time-delay in the feedback is modeled as a switch among an arbitrarily large set of unknown constant values with a known strict upper bound. The feedback is a linear function of the delayed states. In the case of linear systems with switched delay feedback, a new sufficiency condition for the average dwell time result is presented using a complete-type Lyapunov-Krasovskii (L-K) functional approach. Further, the corresponding switched system with nonlinear perturbations is proven to be exponentially stable inside a well-characterized region of attraction for an appropriately chosen average dwell time. In the second case, the concept of the complete-type L-K functional is extended to a class of nonlinear time-delay systems with unknown time-varying delay. This extension ensures stability robustness to time-delay in the control design for all values of time-delay less than the known upper bound. Model transformation is used in order to partition the nonlinear system into a nominal linear part that is exponentially stable with a bounded perturbation. We obtain sufficient conditions which ensure exponential stability inside a region of attraction estimate. A constructive method to evaluate the sufficient conditions is presented, together with a comparison with the corresponding constant and piecewise constant delay cases. Numerical simulations are performed to illustrate the theoretical results of this paper.
NASA Astrophysics Data System (ADS)
Tang, Wenlin; Xu, Peng; Hu, Songjie; Cao, Jianfeng; Dong, Peng; Bu, Yanlong; Chen, Lue; Han, Songtao; Gong, Xuefei; Li, Wenxiao; Ping, Jinsong; Lau, Yun-Kau; Tang, Geshi
2017-09-01
The Doppler tracking data of the Chang'e 3 lunar mission are used to constrain the stochastic background of gravitational waves in cosmology within the 1 mHz to 0.05 Hz frequency band. Our result improves on the upper bound on the energy density of the stochastic background of gravitational waves in the 0.02-0.05 Hz band obtained by the Apollo missions, with the improvement reaching almost one order of magnitude at around 0.05 Hz. A detailed noise analysis of the Doppler tracking data is also presented, with the prospect that these noise sources will be mitigated in future Chinese deep space missions. A feasibility study is also undertaken to understand the scientific capability of the Chang'e 4 mission, due to be launched in 2018, in relation to the stochastic gravitational wave background around 0.01 Hz. The study indicates that the upper bound on the energy density may be further improved by another order of magnitude relative to the Chang'e 3 mission, which will fill the gap in the frequency band from 0.02 Hz to 0.1 Hz in the foreseeable future.
The generalized truncated exponential distribution as a model for earthquake magnitudes
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-04-01
The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED), according to the Gutenberg-Richter relation. But the magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for modelling magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except for the upper bound magnitude, are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except for the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed, and an example with empirical data is presented in the current contribution.
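A minimal sketch of the standard TED used in the Gutenberg-Richter setting (not the generalized GTED proposed in the paper) is given below: the distribution of magnitudes between a minimum magnitude m_min and an upper bound magnitude m_max, with rate parameter beta = b ln 10. The b-value and the magnitude range are illustrative.

# Minimal sketch of the truncated exponential distribution (TED) for
# magnitudes, the standard form used with the Gutenberg-Richter relation.
import math

def ted_cdf(m, beta, m_min, m_max):
    if m <= m_min:
        return 0.0
    if m >= m_max:
        return 1.0
    num = 1.0 - math.exp(-beta * (m - m_min))
    den = 1.0 - math.exp(-beta * (m_max - m_min))
    return num / den

# Illustrative parameters: b-value 1.0, magnitudes between 4.0 and 8.0.
beta = 1.0 * math.log(10.0)
for m in (4.5, 5.5, 6.5, 7.5):
    print(m, round(ted_cdf(m, beta, m_min=4.0, m_max=8.0), 4))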
Fundamental limitations of cavity-assisted atom interferometry
NASA Astrophysics Data System (ADS)
Dovale-Álvarez, M.; Brown, D. D.; Jones, A. W.; Mow-Lowry, C. M.; Miao, H.; Freise, A.
2017-11-01
Atom interferometers employing optical cavities to enhance the beam splitter pulses promise significant advances in science and technology, notably for future gravitational wave detectors. Long cavities, on the scale of hundreds of meters, have been proposed in experiments aiming to observe gravitational waves with frequencies below 1 Hz, where laser interferometers, such as LIGO, have poor sensitivity. Alternatively, short cavities have also been proposed for enhancing the sensitivity of more portable atom interferometers. We explore the fundamental limitations of two-mirror cavities for atomic beam splitting, and establish upper bounds on the temperature of the atomic ensemble as a function of cavity length and three design parameters: the cavity g factor, the bandwidth, and the optical suppression factor of the first and second order spatial modes. A lower bound to the cavity bandwidth is found which avoids elongation of the interaction time and maximizes power enhancement. An upper limit to cavity length is found for symmetric two-mirror cavities, restricting the practicality of long baseline detectors. For shorter cavities, an upper limit on the beam size was derived from the geometrical stability of the cavity. These findings aim to aid the design of current and future cavity-assisted atom interferometers.
Limits on the fluctuating part of y-type distortion monopole from Planck and SPT results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khatri, Rishi; Sunyaev, Rashid, E-mail: khatri@mpa-garching.mpg.de, E-mail: sunyaev@mpa-garching.mpg.de
2015-08-01
We use the published Planck and SPT cluster catalogs [1,2] and recently published y-distortion maps [3] to put strong observational limits on the contribution of the fluctuating part of the y-type distortions to the y-distortion monopole. Our bounds are 5.4×10⁻⁸ < ⟨y⟩ < 2.2×10⁻⁶. Our upper bound is a factor of 6.8 stronger than the currently best upper 95% confidence limit from COBE-FIRAS of ⟨y⟩ < 15×10⁻⁶. In the standard cosmology, large scale structure is the only source of such distortions and our limits therefore constrain the baryonic physics involved in the formation of the large scale structure. Our lower limit, from the detected clusters in the Planck and SPT catalogs, also implies that a Pixie-like experiment should detect the y-distortion monopole at >27σ. The biggest sources of uncertainty in our upper limit are the monopole offsets between different HFI channel maps, which we estimate to be <10⁻⁶.
Approximation Set of the Interval Set in Pawlak's Space
Wang, Jin; Wang, Guoyin
2014-01-01
The interval set is a special set which describes the uncertainty of an uncertain concept or set Z with its two crisp boundaries, named the upper-bound set and the lower-bound set. In this paper, the concept of similarity degree between two interval sets is defined first, and then the similarity degrees between an interval set and its two approximations (i.e., the upper approximation set R¯(Z) and the lower approximation set R_(Z)) are presented, respectively. The disadvantages of using the upper approximation set R¯(Z) or the lower approximation set R_(Z) as approximation sets of the uncertain set (uncertain concept) Z are analyzed, and a new method for finding a better approximation set of the interval set Z is proposed. The conclusion that the approximation set R0.5(Z) is an optimal approximation set of the interval set Z is drawn and proved. The change rules of R0.5(Z) with different binary relations are analyzed in detail. Finally, a kind of crisp approximation set of the interval set Z is constructed. We hope this research work will promote the development of both the interval set model and granular computing theory. PMID:25177721
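To make the approximation sets concrete, the Python sketch below computes the lower and upper approximations of a set Z under a partition into equivalence classes, together with a 0.5-majority approximation in the spirit of R0.5(Z), i.e. keeping a class when more than half of its elements lie in Z. The partition, the set Z, and the exact majority rule are assumptions made for illustration and may differ in detail from the paper's definitions.

# Minimal sketch (my reading of the setting, not the paper's definitions):
# lower/upper approximations of Z given a partition into equivalence classes,
# plus a 0.5-majority approximation in the spirit of R0.5(Z).
def approximations(blocks, Z):
    Z = set(Z)
    lower = set().union(*(set(b) for b in blocks if set(b) <= Z))
    upper = set().union(*(set(b) for b in blocks if set(b) & Z))
    # Majority rule: keep a block when more than half of its elements lie in Z.
    half = set().union(*(set(b) for b in blocks if len(set(b) & Z) * 2 > len(b)))
    return lower, upper, half

blocks = [{1, 2}, {3, 4, 5}, {6}, {7, 8}]     # illustrative partition
Z = {2, 3, 4, 6, 7}                           # illustrative uncertain set
lo, up, r_half = approximations(blocks, Z)
print("lower:", lo, "upper:", up, "R0.5:", r_half)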
Uncertainty, imprecision, and the precautionary principle in climate change assessment.
Borsuk, M E; Tomassini, L
2005-01-01
Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
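As a minimal illustration of the decision criterion discussed above, the Python sketch below represents imprecision by a finite set of candidate probability distributions, computes lower and upper expected costs for each decision, and selects the decision with minimum upper expected cost. The outcome probabilities and costs are illustrative placeholders, not outputs of the climate and economics models used in the paper.

# Minimal sketch (illustrative numbers, not the paper's models): lower/upper
# expected costs over a finite set of candidate distributions, and the
# precautionary "minimum upper expected cost" decision rule.
def expected(p, costs):
    return sum(pi * c for pi, c in zip(p, costs))

# Rows: candidate probability distributions over three climate outcomes.
credal_set = [
    [0.6, 0.3, 0.1],
    [0.4, 0.4, 0.2],
    [0.5, 0.2, 0.3],
]
# Costs of each outcome under two emissions decisions (illustrative numbers).
costs = {"low_emissions":  [3.0, 4.0, 5.0],
         "high_emissions": [1.0, 4.0, 9.0]}

for decision, c in costs.items():
    exps = [expected(p, c) for p in credal_set]
    print(decision, "lower:", min(exps), "upper:", max(exps))

best = min(costs, key=lambda d: max(expected(p, costs[d]) for p in credal_set))
print("minimum upper expected cost decision:", best)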
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Zachary; Neuert, Gregor; Department of Pharmacology, School of Medicine, Vanderbilt University, Nashville, Tennessee 37232
2016-08-21
Emerging techniques now allow for precise quantification of distributions of biological molecules in single cells. These rapidly advancing experimental methods have created a need for more rigorous and efficient modeling tools. Here, we derive new bounds on the likelihood that observations of single-cell, single-molecule responses come from a discrete stochastic model, posed in the form of the chemical master equation. These strict upper and lower bounds are based on a finite state projection approach, and they converge monotonically to the exact likelihood value. These bounds allow one to discriminate rigorously between models and with a minimum level of computational effort. In practice, these bounds can be incorporated into stochastic model identification and parameter inference routines, which improve the accuracy and efficiency of endeavors to analyze and predict single-cell behavior. We demonstrate the applicability of our approach using simulated data for three example models as well as for experimental measurements of a time-varying stochastic transcriptional response in yeast.
NASA Technical Reports Server (NTRS)
Glover, R. M.; Weinhold, F.
1977-01-01
Variational functionals of Braunn and Rebane (1972) for the imaginary-frequency polarizability (IFP) have been generalized by the method of Gramian inequalities to give rigorous upper and lower bounds, valid even when the true (but unknown) unperturbed wavefunction must be represented by a variational approximation. Using these formulas in conjunction with flexible variational trial functions, tight error bounds are computed for the IFP and the associated two- and three-body van der Waals interaction constants of the ground 1(1S) and metastable 2(1,3S) states of He and Li(+). These bounds generally establish the ground-state properties to within a fraction of a per cent and the metastable properties to within a few per cent, permitting a comparative assessment of competing theoretical methods at this level of accuracy. Unlike previous 'error bounds' for these properties, the present results have a completely a priori theoretical character, with no empirical input data.
Manipulations of Cartesian Graphs: A First Introduction to Analysis.
ERIC Educational Resources Information Center
Lowenthal, Francis; Vandeputte, Christiane
1989-01-01
Introduces an introductory module for analysis. Describes stock of basic functions and their graphs as part one and three methods as part two: transformations of simple graphs, the sum of stock functions, and upper and lower bounds. (YP)
2017-06-15
[Extraction fragment: the report describes reducing the online algorithm-selection problem to a contextual bandit problem, another form of interactive learning, and cites [KH2016a] Kuan-Hao Huang and Hsuan-Tien Lin, 'Linear upper confidence bound algorithm for contextual bandit problem with piled rewards,' in Proceedings ...]
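Since the cited reference concerns a linear upper confidence bound (LinUCB) algorithm, a generic LinUCB sketch is included below for orientation; it implements the standard disjoint-arm version, not the piled-rewards variant of Huang and Lin, and the arm count, feature dimension, exploration weight alpha, and toy environment are illustrative.

# Minimal sketch of a generic linear UCB (LinUCB) contextual-bandit learner.
import numpy as np

class LinUCB:
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]     # per-arm Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]   # per-arm reward vectors

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            # Point estimate plus an upper-confidence exploration bonus.
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy usage with a random linear environment.
rng = np.random.default_rng(0)
bandit = LinUCB(n_arms=3, dim=4, alpha=0.5)
true_theta = rng.normal(size=(3, 4))
for _ in range(200):
    x = rng.normal(size=4)
    arm = bandit.select(x)
    reward = true_theta[arm] @ x + 0.1 * rng.normal()
    bandit.update(arm, x, reward)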
Amortized entanglement of a quantum channel and approximately teleportation-simulable channels
NASA Astrophysics Data System (ADS)
Kaur, Eneet; Wilde, Mark M.
2018-01-01
This paper defines the amortized entanglement of a quantum channel as the largest difference in entanglement between the output and the input of the channel, where entanglement is quantified by an arbitrary entanglement measure. We prove that the amortized entanglement of a channel obeys several desirable properties, and we also consider special cases such as the amortized relative entropy of entanglement and the amortized Rains relative entropy. These latter quantities are shown to be single-letter upper bounds on the secret-key-agreement and PPT-assisted quantum capacities of a quantum channel, respectively. Of especial interest is a uniform continuity bound for these latter two special cases of amortized entanglement, in which the deviation between the amortized entanglement of two channels is bounded from above by a simple function of the diamond norm of their difference and the output dimension of the channels. We then define approximately teleportation- and positive-partial-transpose-simulable (PPT-simulable) channels as those that are close in diamond norm to a channel which is either exactly teleportation- or PPT-simulable, respectively. These results then lead to single-letter upper bounds on the secret-key-agreement and PPT-assisted quantum capacities of channels that are approximately teleportation- or PPT-simulable, respectively. Finally, we generalize many of the concepts in the paper to the setting of general resource theories, defining the amortized resourcefulness of a channel and the notion of ν-freely-simulable channels, connecting these concepts in an operational way as well.
Unveiling ν secrets with cosmological data: Neutrino masses and mass hierarchy
NASA Astrophysics Data System (ADS)
Vagnozzi, Sunny; Giusarma, Elena; Mena, Olga; Freese, Katherine; Gerbino, Martina; Ho, Shirley; Lattanzi, Massimiliano
2017-12-01
Using some of the latest cosmological data sets publicly available, we derive the strongest bounds in the literature on the sum of the three active neutrino masses, Mν, within the assumption of a background flat ΛCDM cosmology. In the most conservative scheme, combining Planck cosmic microwave background temperature anisotropies and baryon acoustic oscillations (BAO) data, as well as the up-to-date constraint on the optical depth to reionization (τ), the tightest 95% confidence level upper bound we find is Mν < 0.151 eV. The addition of Planck high-ℓ polarization data, which, however, might still be contaminated by systematics, further tightens the bound to Mν < 0.118 eV. A proper model comparison treatment shows that the two aforementioned combinations disfavor the inverted hierarchy at ∼64% C.L. and ∼71% C.L., respectively. In addition, we compare the constraining power of measurements of the full-shape galaxy power spectrum versus the BAO signature, from the BOSS survey. Even though the latest BOSS full-shape measurements cover a larger volume and benefit from smaller error bars compared to previous similar measurements, the analysis method commonly adopted results in their constraining power still being less powerful than that of the extracted BAO signal. Our work uses only cosmological data; imposing the constraint Mν > 0.06 eV from oscillations data would raise the quoted upper bounds by O(0.1σ) and would not affect our conclusions.
Future trends in computer waste generation in India.
Dwivedy, Maheshwar; Mittal, R K
2010-11-01
The objective of this paper is to estimate the future projection of computer waste in India and to subsequently analyze its flow at the end of the useful phase. For this purpose, the study utilizes the logistic model-based approach proposed by Yang and Williams to forecast future trends in computer waste. The model estimates the future projection of the computer penetration rate utilizing the first lifespan distribution and historical sales data. A bounding analysis on the future carrying capacity was simulated using the three-parameter logistic curve. The obsolete generation quantities observed from the extrapolated penetration rates are then used to model the disposal phase. The results of the bounding analysis indicate that in the year 2020, around 41-152 million units of computers will become obsolete. The obsolete computer generation quantities are then used to estimate the end-of-life outflows by utilizing a time-series multiple lifespan model. Even a conservative estimate of the future PC recycling capacity reaches upwards of 30 million units by 2025. Apparently, more than 150 million units could potentially be recycled in the upper bound case. However, considering significant future investment in the e-waste recycling sector from all stakeholders in India, we propose a logistic growth in the recycling rate and estimate the required recycling capacity at between 60 and 400 million units for the lower and upper bound cases in 2025. Finally, we compare the future obsolete PC generation amounts of the US and India. Copyright © 2010 Elsevier Ltd. All rights reserved.
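As a rough, simplified illustration of the modelling idea (a logistic curve for the installed base combined with a lifespan assumption), the Python sketch below projects yearly obsolete units as the growth-driven sales of an assumed average lifespan earlier; the carrying capacity, growth rate, midpoint year, and lifespan are illustrative, and the sketch ignores replacement sales, so it is not the paper's calibrated model.

# Minimal sketch (my simplification of the modelling idea, not the paper's
# calibrated model): logistic installed base plus an assumed average lifespan.
import math

def logistic(t, K, r, t0):
    """Installed base (million units) at year t."""
    return K / (1.0 + math.exp(-r * (t - t0)))

K, r, t0 = 400.0, 0.35, 2015.0       # carrying capacity, growth rate, midpoint
lifespan = 5                          # assumed average first lifespan in years

def sales(t):
    # New units needed to cover year-over-year growth of the installed base.
    return max(logistic(t, K, r, t0) - logistic(t - 1, K, r, t0), 0.0)

for year in range(2018, 2026):
    obsolete = sales(year - lifespan)    # units sold `lifespan` years ago retire
    print(year, round(obsolete, 1), "million units obsolete")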
McDonald, Douglas B.; Buchholz, Carol E.
1994-01-01
A shield for restricting molten corium from flowing into a water sump disposed in a floor of a containment vessel includes upper and lower walls which extend vertically upwardly and downwardly from the floor for laterally bounding the sump. The upper wall includes a plurality of laterally spaced apart flow channels extending horizontally therethrough, with each channel having a bottom disposed coextensively with the floor for channeling water therefrom into the sump. Each channel has a height and a length predeterminedly selected for allowing heat from the molten corium to dissipate through the upper and lower walls as it flows therethrough for solidifying the molten corium therein to prevent accumulation thereof in the sump.
Observed Volume Fluxes and Mixing in the Dardanelles Strait
2013-10-04
[Extraction fragment: ... et al., 2001; Kara et al., 2008]. It has been recognized for years that the upper-layer outflow from the Dardanelles Strait to the Aegean Sea ... than the interior of the sea, and manifests itself as a subsurface flow bounded by the upper layer of the Sea of Marmara. Using measurements at both ends of the Dardanelles Strait and assuming a steady-state mass budget, Ünlüata et al. [1990] estimated mean annual volume transports in the ...]
Canonical Probability Distributions for Model Building, Learning, and Inference
2006-07-14
[Extraction fragment: ... are for Ranked nodes set at Unobservable and Auxiliary nodes. The value of alpha is set in the diagnostic window by moving the slider in the upper right-hand side of the window; the upper bound of alpha can be modified by typing the new value in the small edit box to the right of the slider. Performing organization: University of Pittsburgh.]
Exact one-sided confidence limits for the difference between two correlated proportions.
Lloyd, Chris J; Moldovan, Max V
2007-08-15
We construct exact and optimal one-sided upper and lower confidence bounds for the difference between two probabilities based on matched binary pairs using well-established optimality theory of Buehler. Starting with five different approximate lower and upper limits, we adjust them to have coverage probability exactly equal to the desired nominal level and then compare the resulting exact limits by their mean size. Exact limits based on the signed root likelihood ratio statistic are preferred and recommended for practical use.
Scales of mass generation for quarks, leptons, and majorana neutrinos.
Dicus, Duane A; He, Hong-Jian
2005-06-10
We study 2 → n inelastic fermion-(anti)fermion scattering into multiple longitudinal weak gauge bosons and derive universal upper bounds on the scales of fermion mass generation by imposing unitarity of the S matrix. We place new upper limits on the scales of fermion mass generation, independent of the electroweak symmetry breaking scale. Strikingly, we find that the strongest 2 → n limits fall in a narrow range, 3-170 TeV (with n = 2-24), depending on the observed fermion masses.
NASA Astrophysics Data System (ADS)
Adam, J.; Adamová, D.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmed, I.; Ahn, S. U.; Aimo, I.; Aiola, S.; Ajaz, M.; Akindinov, A.; Alam, S. N.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; Andrei, C.; Andronic, A.; Anguelov, V.; Anielski, J.; Antičić, T.; Antinori, F.; Antonioli, P.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Armesto, N.; Arnaldi, R.; Aronsson, T.; Arsene, I. C.; Arslandok, M.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Bach, M.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Ball, M.; Baltasar Dos Santos Pedrosa, F.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Belmont, R.; Belmont-Moreno, E.; Belyaev, V.; Bencedi, G.; Beole, S.; Berceanu, I.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biswas, S.; Bjelogrlic, S.; Blanco, F.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Bøggild, H.; Boldizsár, L.; Bombara, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Bossú, F.; Botje, M.; Botta, E.; Böttger, S.; Braun-Munzinger, P.; Bregant, M.; Breitner, T.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buncic, P.; Busch, O.; Buthelezi, Z.; Buxton, J. T.; Caffarri, D.; Cai, X.; Caines, H.; Calero Diaz, L.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Carena, F.; Carena, W.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Cavicchioli, C.; Ceballos Sanchez, C.; Cepila, J.; Cerello, P.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Conesa Balbastre, G.; Conesa del Valle, Z.; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; Deisting, A.; Deloff, A.; Dénes, E.; D'Erasmo, G.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Diaz Corchero, M. A.; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Dobrowolski, T.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Engel, H.; Erazmus, B.; Erhardt, F.; Eschweiler, D.; Espagnon, B.; Estienne, M.; Esumi, S.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Felea, D.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernández Téllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Fleck, M. G.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Frankenfeld, U.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gallio, M.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Gargiulo, C.; Gasik, P.; Germain, M.; Gheata, A.; Gheata, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Gomez Ramirez, A.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Graczykowski, L. K.; Grelli, A.; Grigoras, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grinyov, B.; Grion, N.; Grosse-Oetringhaus, J. F.; Grossiord, J.-Y.; Grosso, R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gulkanyan, H.; Gunji, T.; Gupta, A.; Gupta, R.; Haake, R.; Haaland, Ø.; Hadjidakis, C.; Haiduc, M.; Hamagaki, H.; Hamar, G.; Hanratty, L. D.; Hansen, A.; Harris, J. W.; Hartmann, H.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Heide, M.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Hess, B. A.; Hetland, K. F.; Hilden, T. E.; Hillemanns, H.; Hippolyte, B.; Hristov, P.; Huang, M.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Ilkiv, I.; Inaba, M.; Ionita, C.; Ippolitov, M.; Irfan, M.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jachołkowski, A.; Jacobs, P. M.; Jahnke, C.; Jang, H. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jimenez Bustamante, R. T.; Jones, P. G.; Jung, H.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kamin, J.; Kang, J. H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Khan, K. H.; Khan, M. M.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Kileng, B.; Kim, B.; Kim, D. W.; Kim, D. J.; Kim, H.; Kim, J. S.; Kim, M.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobayashi, T.; Kobdaj, C.; Kofarago, M.; Köhler, M. K.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kouzinopoulos, C.; Kovalenko, V.; Kowalski, M.; Kox, S.; Koyithatta Meethaleveedu, G.; Kral, J.; Králik, I.; Kravčáková, A.; Krelina, M.; Kretz, M.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kucheriaev, Y.; Kugathasan, T.; Kuhn, C.; Kuijer, P. G.; Kulakov, I.; Kumar, J.; Kumar, L.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kushpil, S.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lea, R.; Leardini, L.; Lee, G. R.; Lee, S.; Legrand, I.; Lehnert, J.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; León Monzón, I.; Leoncino, M.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Ljunggren, H. M.; Lodato, D. F.; Loenne, P. I.; Loggins, V. 
R.; Loginov, V.; Loizides, C.; Lopez, X.; López Torres, E.; Lowe, A.; Lu, X.-G.; Luettig, P.; Lunardon, M.; Luparello, G.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manceau, L.; Manko, V.; Manso, F.; Manzari, V.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martashvili, I.; Martin, N. A.; Martin Blanco, J.; Martinengo, P.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Martynov, Y.; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Massacrier, L.; Mastroserio, A.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzoni, M. A.; Mcdonald, D.; Meddi, F.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Minervini, L. M.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montaño Zetina, L.; Montes, E.; Morando, M.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Müller, H.; Mulligan, J. D.; Munhoz, M. G.; Murray, S.; Musa, L.; Musinsky, J.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Nattrass, C.; Nayak, K.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Oh, S. K.; Ohlson, A.; Okatan, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira Da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pagano, P.; Paić, G.; Pajares, C.; Pal, S. K.; Pan, J.; Pandey, A. K.; Pant, D.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Paul, B.; Pawlak, T.; Peitzmann, T.; Pereira Da Costa, H.; Pereira De Oliveira Filho, E.; Peresunko, D.; Pérez Lara, C. E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Razazi, V.; Read, K. F.; Real, J. S.; Redlich, K.; Reed, R. J.; Rehman, A.; Reichelt, P.; Reicher, M.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Rettig, F.; Revol, J.-P.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rivetti, A.; Rocco, E.; Rodríguez Cahuantzi, M.; Rodriguez Manso, A.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Romita, R.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Sadovsky, S.; Šafařík, K.; Sahlmuller, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salgado, C. 
A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sanchez Castro, X.; Šándor, L.; Sandoval, A.; Sano, M.; Santagati, G.; Sarkar, D.; Scapparone, E.; Scarlassara, F.; Scharenberg, R. P.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schuchmann, S.; Schukraft, J.; Schulc, M.; Schuster, T.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Seeder, K. S.; Seger, J. E.; Sekiguchi, Y.; Selyuzhenkov, I.; Senosi, K.; Seo, J.; Serradilla, E.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shangaraev, A.; Sharma, A.; Sharma, N.; Shigaki, K.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singha, S.; Singhal, V.; Sinha, B. C.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Søgaard, C.; Soltz, R.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; Spacek, M.; Spiriti, E.; Sputowska, I.; Spyropoulou-Stassinaki, M.; Srivastava, B. K.; Stachel, J.; Stan, I.; Stefanek, G.; Steinpreis, M.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Sultanov, R.; Šumbera, M.; Symons, T. J. M.; Szabo, A.; Szanto de Toledo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Takahashi, J.; Tanaka, N.; Tangaro, M. A.; Tapia Takaki, J. D.; Tarantola Peloni, A.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thäder, J.; Thomas, D.; Tieulent, R.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vajzer, M.; Vala, M.; Valencia Palomo, L.; Vallero, S.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Venaruzzo, M.; Vercellin, E.; Vergara Limón, S.; Vernet, R.; Verweij, M.; Vickovic, L.; Viesti, G.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrláková, J.; Vulpescu, B.; Vyushin, A.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Wang, Y.; Watanabe, D.; Weber, M.; Weber, S. G.; Wessels, J. P.; Westerhoff, U.; Wiechula, J.; Wikne, J.; Wilde, M.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yaldo, C. G.; Yamaguchi, Y.; Yang, H.; Yang, P.; Yano, S.; Yasnopolskiy, S.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yurchenko, V.; Yushmanov, I.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. S.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zhu, X.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.
2016-01-01
We present results of a search for two hypothetical strange dibaryon states, i.e. the H-dibaryon and the possible Λn‾ bound state. The search is performed with the ALICE detector in central (0-10%) Pb-Pb collisions at √sNN = 2.76 TeV, by invariant mass analysis in the decay modes Λn‾ → d‾π+ and H-dibaryon → Λpπ-. No evidence for these bound states is observed. Upper limits are determined at 99% confidence level for a wide range of lifetimes and for the full range of branching ratios. The results are compared to thermal, coalescence and hybrid UrQMD model expectations, which describe correctly the production of other loosely bound states, like the deuteron and the hypertriton.
Extracting Loop Bounds for WCET Analysis Using the Instrumentation Point Graph
NASA Astrophysics Data System (ADS)
Betts, A.; Bernat, G.
2009-05-01
Every calculation engine proposed in the literature of Worst-Case Execution Time (WCET) analysis requires upper bounds on loop iterations. Existing mechanisms to procure this information are either error prone, because they are gathered from the end-user, or limited in scope, because automatic analyses target very specific loop structures. In this paper, we present a technique that obtains bounds completely automatically for arbitrary loop structures. In particular, we show how to employ the Instrumentation Point Graph (IPG) to parse traces of execution (generated by an instrumented program) in order to extract bounds relative to any loop-nesting level. With this technique, therefore, non-rectangular dependencies between loops can be captured, allowing more accurate WCET estimates to be calculated. We demonstrate the improvement in accuracy by comparing WCET estimates computed through our HMB framework against those computed with state-of-the-art techniques.
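A minimal sketch of the trace-parsing idea, independent of the IPG machinery in the paper, is shown below: scan a trace of instrumentation-point identifiers and record, for an inner loop header, the maximum iteration count observed in each entry of its enclosing outer loop, so that non-rectangular loop dependencies are preserved. The point names and the trace format are hypothetical.

# Minimal sketch (my own illustration, not the paper's IPG-based analysis):
# derive per-outer-iteration bounds on an inner loop from an execution trace.
from collections import defaultdict

def loop_bounds(trace, outer_entry, inner_header, inner_exit):
    bounds = defaultdict(int)     # outer-iteration index -> max inner iterations
    outer_iter, count = -1, 0
    for point in trace:
        if point == outer_entry:
            outer_iter += 1
            count = 0
        elif point == inner_header:
            count += 1
            bounds[outer_iter] = max(bounds[outer_iter], count)
        elif point == inner_exit:
            count = 0
    return dict(bounds)

# Hypothetical trace of instrumentation-point identifiers.
trace = ["outer_entry", "inner_header", "inner_header", "inner_exit",
         "outer_entry", "inner_header", "inner_header", "inner_header", "inner_exit"]
print(loop_bounds(trace, "outer_entry", "inner_header", "inner_exit"))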
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adam, J.; Adamová, D.; Aggarwal, M. M.
Here, we present results of a search for two hypothetical strange dibaryon states, i.e. the H-dibaryon and the possible Λn‾ bound state. The search is performed with the ALICE detector in central (0-10%) Pb-Pb collisions at √sNN = 2.76 TeV, by invariant mass analysis in the decay modes Λn‾ → d‾π+ and H-dibaryon → Λpπ-. No evidence for these bound states is observed. Upper limits are determined at 99% confidence level for a wide range of lifetimes and for the full range of branching ratios. The results are compared to thermal, coalescence and hybrid UrQMD model expectations, which describe correctly the production of other loosely bound states, like the deuteron and the hypertriton.
Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms
Rechner, Steffen; Berger, Annabell
2016-01-01
We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442
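For reference, the spectral bound mentioned above can be stated as t_mix(eps) <= ln(1/(eps * pi_min)) / (1 - lambda_star), with lambda_star the second-largest eigenvalue modulus of the transition matrix. The Python sketch below evaluates this bound for a small reversible chain; the 3-state chain is an illustrative example and the code is independent of the marathon library itself.

# Minimal sketch of the standard spectral bound on the total mixing time of a
# reversible, ergodic Markov chain (illustrative 3-state example).
import numpy as np

def spectral_mixing_bound(P, pi, eps=0.25):
    eigvals = np.linalg.eigvals(P)
    # The largest modulus is the trivial eigenvalue 1; take the next largest.
    moduli = sorted(abs(eigvals), reverse=True)
    lambda_star = moduli[1]
    return np.log(1.0 / (eps * pi.min())) / (1.0 - lambda_star)

P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
pi = np.array([1/3, 1/3, 1/3])     # stationary distribution (P is doubly stochastic)
print(spectral_mixing_bound(P, pi))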
Information models of software productivity - Limits on productivity growth
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1992-01-01
Research into generalized information-metric models of software process productivity establishes quantifiable behavior and theoretical bounds. The models establish a fundamental mathematical relationship between software productivity and the human capacity for information traffic, the software product yield (system size), information efficiency, and tool and process efficiencies. An upper bound is derived that quantifies average software productivity and the maximum rate at which it may grow. This bound reveals that ultimately, when tools, methodologies, and automated assistants have reached their maximum effective state, further improvement in productivity can only be achieved through increasing software reuse. The reuse advantage is shown not to increase faster than logarithmically in the number of reusable features available. The reuse bound is further shown to be somewhat dependent on the reuse policy: a general 'reuse everything' policy can lead to a somewhat slower productivity growth than a specialized reuse policy.
Wang, Yang; Li, Mingxing; Tu, Z C; Hernández, A Calvo; Roco, J M M
2012-07-01
The figure of merit for refrigerators performing finite-time Carnot-like cycles between two reservoirs at temperature T(h) and T(c) (
Adam, J.; Adamová, D.; Aggarwal, M. M.; ...
2016-11-28
Here, we present results of a search for two hypothetical strange dibaryon states, i.e. the H-dibaryon and the possible $\overline{\Lambda n}$ bound state. The search is performed with the ALICE detector in central (0-10%) Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, by invariant mass analysis in the decay modes $\overline{\Lambda n} \to \bar{d}\pi^{+}$ and H-dibaryon $\to \Lambda p\pi^{-}$. No evidence for these bound states is observed. Upper limits are determined at 99% confidence level for a wide range of lifetimes and for the full range of branching ratios. The results are compared to thermal, coalescence and hybrid UrQMD model expectations, which describe correctly the production of other loosely bound states, like the deuteron and the hypertriton.
Resistivity bound for hydrodynamic bad metals
Lucas, Andrew; Hartnoll, Sean A.
2017-01-01
We obtain a rigorous upper bound on the resistivity ρ of an electron fluid whose electronic mean free path is short compared with the scale of spatial inhomogeneities. When such a hydrodynamic electron fluid supports a nonthermal diffusion process—such as an imbalance mode between different bands—we show that the resistivity bound becomes ρ ≲ A Γ. The coefficient A is independent of temperature and inhomogeneity lengthscale, and Γ is a microscopic momentum-preserving scattering rate. In this way, we obtain a unified mechanism—without umklapp—for ρ ∼ T^2 in a Fermi liquid and the crossover to ρ ∼ T in quantum critical regimes. This behavior is widely observed in transition metal oxides, organic metals, pnictides, and heavy fermion compounds and has presented a long-standing challenge to transport theory. Our hydrodynamic bound allows phonon contributions to diffusion constants, including thermal diffusion, to directly affect the electrical resistivity. PMID:29073054
Interferometric tests of Planckian quantum geometry models
Kwon, Ohkyung; Hogan, Craig J.
2016-04-19
The effect of Planck scale quantum geometrical effects on measurements with interferometers is estimated with standard physics, and with a variety of proposed extensions. It is shown that effects are negligible in standard field theory with canonically quantized gravity. Statistical noise levels are estimated in a variety of proposals for nonstandard metric fluctuations, and these alternatives are constrained using upper bounds on stochastic metric fluctuations from LIGO. Idealized models of several interferometer system architectures are used to predict signal noise spectra in a quantum geometry that cannot be described by a fluctuating metric, in which position noise arises from holographic bounds on directional information. Lastly, predictions in this case are shown to be close to current and projected experimental bounds.
Integrability and chemical potential in the (3 + 1)-dimensional Skyrme model
NASA Astrophysics Data System (ADS)
Alvarez, P. D.; Canfora, F.; Dimakis, N.; Paliathanasis, A.
2017-10-01
Using a remarkable mapping from the original (3 + 1)-dimensional Skyrme model to the Sine-Gordon model, we construct the first analytic examples of Skyrmions as well as of Skyrmion-anti-Skyrmion bound states within a finite box in (3 + 1)-dimensional flat space-time. An analytic upper bound on the number of these Skyrmion-anti-Skyrmion bound states is derived. We compute the critical isospin chemical potential beyond which these Skyrmions cease to exist. With these tools, we also construct topologically protected time-crystals: time-periodic configurations whose time-dependence is protected by their non-trivial winding number. These are striking realizations of the ideas of Shapere and Wilczek. The critical isospin chemical potential for these time-crystals is determined.
Properties of Coulomb crystals: rigorous results.
Cioslowski, Jerzy
2008-04-28
Rigorous equalities and bounds for several properties of Coulomb crystals are presented. The energy e(N) per particle pair is shown to be a nondecreasing function of the particle number N for all clusters described by double-power-law pairwise-additive potentials epsilon(r) that are unbound at both r-->0 and r-->infinity. A lower bound for the ratio of the mean reciprocal crystal radius and e(N) is derived. The leading term in the asymptotic expression for the shell capacity that appears in the recently introduced approximate model of Coulomb crystals is obtained, providing in turn explicit large-N asymptotics for e(N) and the mean crystal radius. In addition, properties of the harmonic vibrational spectra are investigated, producing an upper bound for the zero-point energy.
Calculating Reuse Distance from Source Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayanan, Sri Hari Krishna; Hovland, Paul
The efficient use of a system is of paramount importance in high-performance computing. Applications need to be engineered for future systems even before the architecture of such a system is clearly known. Static performance analysis that generates performance bounds is one way to approach the task of understanding application behavior. Performance bounds provide an upper limit on the performance of an application on a given architecture. Predicting cache hierarchy behavior and accesses to main memory is a requirement for accurate performance bounds. This work presents our static reuse distance algorithm to generate reuse distance histograms. We then use these histograms to predict cache miss rates. Experimental results for kernels studied show that the approach is accurate.
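The link between a reuse-distance histogram and a predicted miss rate can be illustrated with a short sketch. The paper derives histograms statically from source code; the version below instead processes a hypothetical dynamic address trace, which is enough to show how the histogram yields a miss-rate prediction for a fully associative LRU cache.

```python
from collections import Counter

def reuse_distance_histogram(trace):
    """Histogram of reuse distances: distinct addresses touched between two
    consecutive accesses to the same address (infinite for cold accesses)."""
    hist = Counter()
    last_seen = {}                          # address -> index of most recent access
    for i, addr in enumerate(trace):
        if addr in last_seen:
            window = trace[last_seen[addr] + 1:i]
            hist[len(set(window))] += 1     # distinct addresses in between
        else:
            hist[float("inf")] += 1         # cold access
        last_seen[addr] = i
    return hist

def predicted_miss_rate(hist, cache_size):
    """A fully associative LRU cache of capacity C misses exactly when the
    reuse distance is >= C (cold accesses always miss)."""
    total = sum(hist.values())
    misses = sum(c for d, c in hist.items() if d >= cache_size)
    return misses / total

trace = ["a", "b", "c", "a", "b", "d", "a"]   # hypothetical address trace
h = reuse_distance_histogram(trace)
print(h, predicted_miss_rate(h, cache_size=3))
```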
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear programming is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved and results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
Structural basis for bifunctional zinc(II) macrocyclic complex recognition of thymine bulges in DNA.
del Mundo, Imee Marie A; Siters, Kevin E; Fountain, Matthew A; Morrow, Janet R
2012-05-07
The zinc(II) complex of 1-(4-quinoylyl)methyl-1,4,7,10-tetraazacyclododecane (cy4q) binds selectively to thymine bulges in DNA and to a uracil bulge in RNA. Binding constants are in the low-micromolar range for thymine bulges in the stems of hairpins, for a thymine bulge in a DNA duplex, and for a uracil bulge in an RNA hairpin. Binding studies of Zn(cy4q) to a series of hairpins containing thymine bulges with different flanking bases showed that the complex had a moderate selectivity for thymine bulges with neighboring purines. The dissociation constants of the most strongly bound Zn(cy4q)-DNA thymine bulge adducts were 100-fold tighter than similar sequences with fully complementary stems or than bulges containing cytosine, guanine, or adenine. In order to probe the role of the pendent group, three additional zinc(II) complexes containing 1,4,7,10-tetraazacyclododecane (cyclen) with aromatic pendent groups were studied for binding to DNA including 1-(2-quinolyl)methyl-1,4,7,10-tetraazacyclododecane (cy2q), 1-(4-biphenyl)methyl-1,4,7,10-tetraazacyclododecane (cybp), and 5-(1,4,7,10-tetraazacyclododecan-1-ylsulfonyl)-N,N-dimethylnaphthalen-1-amine (dsc). The Zn(cybp) complex binds with moderate affinity but little selectivity to DNA hairpins with thymine bulges and to DNA lacking bulges. Similarly, Zn(dsc) binds weakly both to thymine bulges and hairpins with fully complementary stems. The zinc(II) complex of cy2q has the 2-quinolyl moiety bound to the Zn(II) center, as shown by (1)H NMR spectroscopy and pH-potentiometric titrations. As a consequence, only weak (500 μM) binding is observed to DNA with no appreciable selectivity. An NMR structure of a thymine-bulge-containing hairpin shows that the thymine is extrahelical but rotated toward the major groove. NMR data for Zn(cy4q) bound to DNA containing a thymine bulge is consistent with binding of the zinc(II) complex to the thymine N3(-) and stacking of the quinoline on top of the thymine. The thymine-bulge bound zinc(II) complex is pointed into the major groove, and there are interactions with the guanine positioned 5' to the thymine bulge.
Bounds on strong field magneto-transport in three-dimensional composites
NASA Astrophysics Data System (ADS)
Briane, Marc; Milton, Graeme W.
2011-10-01
This paper deals with bounds satisfied by the effective non-symmetric conductivity of three-dimensional composites in the presence of a strong magnetic field. On the one hand, it is shown that for general composites the antisymmetric part of the effective conductivity cannot be bounded solely in terms of the antisymmetric part of the local conductivity, contrary to the columnar case studied by Briane and Milton [SIAM J. Appl. Math. 70(8), 3272-3286 (2010), 10.1137/100798090]. Thus a suitable rank-two laminate, the conductivity of which has a bounded antisymmetric part together with a high-contrast symmetric part, may generate an arbitrarily large antisymmetric part of the effective conductivity. On the other hand, bounds are provided which show that the antisymmetric part of the effective conductivity must go to zero if the upper bound on the antisymmetric part of the local conductivity goes to zero, and the symmetric part of the local conductivity remains bounded below and above. Elementary bounds on the effective moduli are derived assuming the local conductivity and the effective conductivity have transverse isotropy in the plane orthogonal to the magnetic field. New Hashin-Shtrikman type bounds for two-phase three-dimensional composites with a non-symmetric conductivity are provided under geometric isotropy of the microstructure. The derivation of the bounds is based on a particular variational principle symmetrizing the problem, and the use of Y-tensors involving the averages of the fields in each phase.
Static aeroelastic analysis and tailoring of missile control fins
NASA Technical Reports Server (NTRS)
Mcintosh, S. C., Jr.; Dillenius, M. F. E.
1989-01-01
A concept for enhancing the design of control fins for supersonic tactical missiles is described. The concept makes use of aeroelastic tailoring to create fin designs (for given planforms) that limit the variations in hinge moments that can occur during maneuvers involving high load factors and high angles of attack. It combines supersonic nonlinear aerodynamic load calculations with finite-element structural modeling, static and dynamic structural analysis, and optimization. The problem definition is illustrated. The fin is at least partly made up of a composite material. The layup is fixed, and the orientations of the material principal axes are allowed to vary; these are the design variables. The objective is the magnitude of the difference between the chordwise location of the center of pressure and its desired location, calculated for a given flight condition. Three types of constraints can be imposed: upper bounds on static displacements for a given set of load conditions, lower bounds on specified natural frequencies, and upper bounds on the critical flutter damping parameter at a given set of flight speeds and altitudes. The idea is to seek designs that reduce variations in hinge moments that would otherwise occur. The block diagram describes the operation of the computer program that accomplishes these tasks. There is an option for a single analysis in addition to the optimization.
Grosz, R; Stephanopoulos, G
1983-09-01
The need for the determination of the free energy of formation of biomass in bioreactor second law balances is well established. A statistical mechanical method for the calculation of the free energy of formation of E. coli biomass is introduced. In this method, biomass is modelled to consist of a system of biopolymer networks. The partition function of this system is proposed to consist of acoustic and optical modes of vibration. Acoustic modes are described by Tarasov's model, the parameters of which are evaluated with the aid of low-temperature calorimetric data for the crystalline protein bovine chymotrypsinogen A. The optical modes are described by considering the low-temperature thermodynamic properties of biological monomer crystals such as amino acid crystals. Upper and lower bounds are placed on the entropy to establish the maximum error associated with the statistical method. The upper bound is determined by endowing the monomers in biomass with ideal gas properties. The lower bound is obtained by limiting the monomers to complete immobility. On this basis, the free energy of formation is fixed to within 10%. Proposals are made with regard to experimental verification of the calculated value and extension of the calculation to other types of biomass.
Bounds on light gluinos from the BEBC beam dump experiment
NASA Astrophysics Data System (ADS)
Cooper-Sarkar, A. M.; Parker, M. A.; Sarkar, S.; Aderholz, M.; Bostock, P.; Clayton, E. F.; Faccini-Turluer, M. L.; Grässler, H.; Guy, J.; Hulth, P. O.; Hultqvist, K.; Idschok, U.; Klein, H.; Kreutzmann, H.; Krstic, J.; Mobayyen, M. M.; Morrison, D. R. O.; Nellen, B.; Schmid, P.; Schmitz, N.; Talebzadeh, M.; Venus, W.; Vignaud, D.; Walck, Ch.; Wachsmuth, H.; Wünsch, B.; WA66 Collaboration
1985-10-01
Observational upper limits on anomalous neutral-current events in a proton beam dump experiment are used to constrain the possible hadroproduction and decay of light gluinos. These results require m_g̃ ≳ 4 GeV for the range of squark masses m_q̃ considered.
5. Corridor A and Building No. 9962A (with white door). ...
5. Corridor A and Building No. 9962-A (with white door). In upper left is east side of Building No. 9952-B. - Madigan Hospital, Corridors & Ramps, Bounded by Wilson & McKinley Avenues & Garfield & Lincoln Streets, Tacoma, Pierce County, WA
Interpolation Inequalities and Spectral Estimates for Magnetic Operators
NASA Astrophysics Data System (ADS)
Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael
2018-05-01
We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.
Liouville type theorems of a nonlinear elliptic equation for the V-Laplacian
NASA Astrophysics Data System (ADS)
Huang, Guangyue; Li, Zhi
2018-03-01
In this paper, we consider Liouville type theorems for positive solutions to the following nonlinear elliptic equation: Δ_V u + au log u = 0, where a is a nonzero real constant. By using gradient estimates, we obtain upper bounds on the gradient of the solutions.
Evaluating the Potential Importance of Monoterpene Degradation for Global Acetone Production
NASA Astrophysics Data System (ADS)
Kelp, M. M.; Brewer, J.; Keller, C. A.; Fischer, E. V.
2015-12-01
Acetone is one of the most abundant volatile organic compounds (VOCs) in the atmosphere, but estimates of the global source of acetone vary widely. A better understanding of acetone sources is essential because acetone serves as a source of HOx in the upper troposphere and as a precursor to the NOx reservoir species peroxyacetyl nitrate (PAN). Although there are primary anthropogenic and pyrogenic sources of acetone, the dominant acetone sources are thought to be from direct biogenic emissions and photochemical production, particularly from the oxidation of iso-alkanes. Recent work suggests that the photochemical degradation of monoterpenes may also represent a significant contribution to global acetone production. We investigate that hypothesis using the GEOS-Chem chemical transport model. In this work, we calculate the emissions of eight terpene species (α-pinene, β-pinene, limonene, Δ3-carene, myrcene, sabinene, trans-β-ocimene, and an 'other monoterpenes' category which contains 34 other trace species) and couple these with upper and lower bound literature yields from species-specific chamber studies. We compare the simulated acetone distributions against in situ acetone measurements from a global suite of NASA aircraft campaigns. When simulating an upper bound on yields, the model-to-measurement comparison improves for North America at both the surface and in the upper troposphere. The inclusion of acetone production from monoterpene degradation also improves the ability of the model to reproduce observations of acetone in East Asian outflow. However, in general the addition of monoterpenes degrades the model comparison for the Southern Hemisphere.
Probing the size of extra dimensions with gravitational wave astronomy
NASA Astrophysics Data System (ADS)
Yagi, Kent; Tanahashi, Norihiro; Tanaka, Takahiro
2011-04-01
In the Randall-Sundrum II braneworld model, it has been conjectured, according to the AdS/CFT correspondence, that a brane-localized black hole (BH) larger than the bulk AdS curvature scale ℓ cannot be static, and it is dual to a four-dimensional BH emitting Hawking radiation through some quantum fields. In this scenario, the number of the quantum field species is so large that this radiation changes the orbital evolution of a BH binary. We derived the correction to the gravitational waveform phase due to this effect and estimated the upper bounds on ℓ by performing Fisher analyses. We found that the Deci-Hertz Interferometer Gravitational Wave Observatory and the Big Bang Observatory (DECIGO/BBO) can give a stronger constraint than the current tabletop result by detecting gravitational waves from small mass BH/BH and BH/neutron star (NS) binaries. Furthermore, DECIGO/BBO is expected to detect 10^5 BH/NS binaries per year. Taking this advantage, we find that DECIGO/BBO can actually measure ℓ down to ℓ = 0.33 μm for a 5 yr observation if we know that binaries are circular a priori. This is about 40 times smaller than the upper bound obtained from the tabletop experiment. On the other hand, when we include eccentricities in the binary parameters, the detection limit weakens to ℓ = 1.5 μm due to strong degeneracies between ℓ and the eccentricities. We also derived the upper bound on ℓ from the expected detection number of extreme mass ratio inspirals with LISA and BH/NS binaries with DECIGO/BBO, extending the discussion made recently by McWilliams [Phys. Rev. Lett. 104, 141601 (2010), 10.1103/PhysRevLett.104.141601]. We found that these less robust constraints are weaker than the ones from phase differences.
Stability results for multi-layer radial Hele-Shaw and porous media flows
NASA Astrophysics Data System (ADS)
Gin, Craig; Daripa, Prabir
2015-01-01
Motivated by stability problems arising in the context of chemical enhanced oil recovery, we perform linear stability analysis of Hele-Shaw and porous media flows in radial geometry involving an arbitrary number of immiscible fluids. Key stability results obtained and their relevance to the stabilization of fingering instability are discussed. Some of the key results, among many others, are (i) absolute upper bounds on the growth rate in terms of the problem data; (ii) validation of these upper bound results against exact computation for the case of three-layer flows; (iii) stability enhancing injection policies; (iv) asymptotic limits that reduce these radial flow results to similar results for rectilinear flows; and (v) the stabilizing effect of curvature of the interfaces. Multi-layer radial flows have been found to have the following additional distinguishing features in comparison to rectilinear flows: (i) very long waves, some of which can be physically meaningful, are stable; and (ii) eigenvalues can be complex for some waves depending on the problem data, implying that the dispersion curves for one or more waves can contact each other. Similar to the rectilinear case, these results can be useful in providing insight into the interfacial instability transfer mechanism as the problem data are varied. Moreover, these can be useful in devising smart injection policies as well as controlling the complexity of the long-term dynamics when drops of various immiscible fluids intersperse among each other. As an application of the upper bound results, we provide stabilization criteria and design an almost stable multi-layer system by adding many layers of fluid with small positive jumps in viscosity in the direction of the basic flow.
Mechanical metamaterials at the theoretical limit of isotropic elastic stiffness
NASA Astrophysics Data System (ADS)
Berger, J. B.; Wadley, H. N. G.; McMeeking, R. M.
2017-02-01
A wide variety of high-performance applications require materials for which shape control is maintained under substantial stress, and that have minimal density. Bio-inspired hexagonal and square honeycomb structures and lattice materials based on repeating unit cells composed of webs or trusses, when made from materials of high elastic stiffness and low density, represent some of the lightest, stiffest and strongest materials available today. Recent advances in 3D printing and automated assembly have enabled such complicated material geometries to be fabricated at low (and declining) cost. These mechanical metamaterials have properties that are a function of their mesoscale geometry as well as their constituents, leading to combinations of properties that are unobtainable in solid materials; however, a material geometry that achieves the theoretical upper bounds for isotropic elasticity and strain energy storage (the Hashin-Shtrikman upper bounds) has yet to be identified. Here we evaluate the manner in which strain energy distributes under load in a representative selection of material geometries, to identify the morphological features associated with high elastic performance. Using finite-element models, supported by analytical methods, and a heuristic optimization scheme, we identify a material geometry that achieves the Hashin-Shtrikman upper bounds on isotropic elastic stiffness. Previous work has focused on truss networks and anisotropic honeycombs, neither of which can achieve this theoretical limit. We find that stiff but well distributed networks of plates are required to transfer loads efficiently between neighbouring members. The resulting low-density mechanical metamaterials have many advantageous properties: their mesoscale geometry can facilitate large crushing strains with high energy absorption, optical bandgaps and mechanically tunable acoustic bandgaps, high thermal insulation, buoyancy, and fluid storage and transport. Our relatively simple design can be manufactured using origami-like sheet folding and bonding methods.
Mechanical metamaterials at the theoretical limit of isotropic elastic stiffness.
Berger, J B; Wadley, H N G; McMeeking, R M
2017-03-23
A wide variety of high-performance applications require materials for which shape control is maintained under substantial stress, and that have minimal density. Bio-inspired hexagonal and square honeycomb structures and lattice materials based on repeating unit cells composed of webs or trusses, when made from materials of high elastic stiffness and low density, represent some of the lightest, stiffest and strongest materials available today. Recent advances in 3D printing and automated assembly have enabled such complicated material geometries to be fabricated at low (and declining) cost. These mechanical metamaterials have properties that are a function of their mesoscale geometry as well as their constituents, leading to combinations of properties that are unobtainable in solid materials; however, a material geometry that achieves the theoretical upper bounds for isotropic elasticity and strain energy storage (the Hashin-Shtrikman upper bounds) has yet to be identified. Here we evaluate the manner in which strain energy distributes under load in a representative selection of material geometries, to identify the morphological features associated with high elastic performance. Using finite-element models, supported by analytical methods, and a heuristic optimization scheme, we identify a material geometry that achieves the Hashin-Shtrikman upper bounds on isotropic elastic stiffness. Previous work has focused on truss networks and anisotropic honeycombs, neither of which can achieve this theoretical limit. We find that stiff but well distributed networks of plates are required to transfer loads efficiently between neighbouring members. The resulting low-density mechanical metamaterials have many advantageous properties: their mesoscale geometry can facilitate large crushing strains with high energy absorption, optical bandgaps and mechanically tunable acoustic bandgaps, high thermal insulation, buoyancy, and fluid storage and transport. Our relatively simple design can be manufactured using origami-like sheet folding and bonding methods.
Galluzzi, Paolo; de Jong, Marcus C; Sirin, Selma; Maeder, Philippe; Piu, Pietro; Cerase, Alfonso; Monti, Lucia; Brisse, Hervé J; Castelijns, Jonas A; de Graaf, Pim; Goericke, Sophia L
2016-07-01
Differentiation between normal solid (non-cystic) pineal glands and pineal pathologies on brain MRI is difficult. The aim of this study was to assess the size of the solid pineal gland in children (0-5 years) and compare the findings with published pineoblastoma cases. We retrospectively analyzed the size (width, height, planimetric area) of solid pineal glands in 184 non-retinoblastoma patients (73 female, 111 male) aged 0-5 years on MRI. The effect of age and gender on gland size was evaluated. Linear regression analysis was performed to analyze the relation between size and age. Ninety-nine percent prediction intervals around the mean were added to construct a normal size range per age, with the upper bound of the predictive interval as the parameter of interest as a cutoff for normalcy. There was no significant interaction of gender and age for all the three pineal gland parameters (width, height, and area). Linear regression analysis gave 99 % upper prediction bounds of 7.9, 4.8, and 25.4 mm², respectively, for width, height, and area. The slopes (size increase per month) of each parameter were 0.046, 0.023, and 0.202, respectively. Ninety-three percent (95 % CI 66-100 %) of asymptomatic solid pineoblastomas were larger in size than the 99 % upper bound. This study establishes norms for solid pineal gland size in non-retinoblastoma children aged 0-5 years. Knowledge of the size of the normal pineal gland is helpful for detection of pineal gland abnormalities, particularly pineoblastoma.
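The statistical step described above, an upper 99% prediction bound from a simple linear regression of gland size on age, can be sketched as follows. The data generated here are synthetic placeholders (only the width slope is taken from the abstract), so the numbers are purely illustrative.

```python
import numpy as np
from scipy import stats

def upper_prediction_bound(x, y, x0, level=0.99):
    """Upper limit of the (level) prediction interval for a new observation at x0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    s2 = resid @ resid / (n - 2)                              # residual variance
    sxx = ((x - x.mean()) ** 2).sum()
    se = np.sqrt(s2 * (1 + 1 / n + (x0 - x.mean()) ** 2 / sxx))
    tcrit = stats.t.ppf(1 - (1 - level) / 2, df=n - 2)
    return intercept + slope * x0 + tcrit * se

ages = np.arange(0, 61)                                        # months, hypothetical
widths = 4.0 + 0.046 * ages + np.random.default_rng(0).normal(0, 0.5, ages.size)
print(upper_prediction_bound(ages, widths, x0=36))             # cutoff at 36 months
```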
Optimizing Retransmission Threshold in Wireless Sensor Networks
Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang
2016-01-01
The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold, and they simply set the same retransmission threshold for all sensor nodes in advance. The method did not take link quality and delay requirement into account, which decreases the probability of a packet passing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing retransmission thresholds is to maximize the summation of the probability of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ·max_{1≤i≤n} u_i), where u_i is the given upper bound of the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path and Δ is the given upper bound of the transmission delay of the delivery path. If Δ is greater than the polynomial, to reduce the time complexity, a linear programming-based (1+p_min)-approximation algorithm is proposed. Furthermore, when the ranges of the upper and lower bounds of retransmission thresholds are big enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092
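A much simplified variant of the allocation problem can be written as a small dynamic program: choose per-hop retransmission thresholds that sum to at most a delay budget so as to maximize the end-to-end on-time success probability. This is a hedged illustration, not the paper's distributed algorithm, and the per-attempt link probabilities below are assumed values.

```python
from functools import lru_cache

def optimal_thresholds(p, budget):
    """p[i]: per-attempt success probability of hop i; budget: total attempts allowed."""
    n = len(p)

    @lru_cache(maxsize=None)          # memoises the DP over (hop index, attempts left)
    def best(i, b):
        if i == n:
            return 1.0, ()
        best_val, best_u = 0.0, ()
        for u in range(1, b - (n - 1 - i) + 1):   # leave >= 1 attempt per later hop
            hop = 1.0 - (1.0 - p[i]) ** u         # hop succeeds within u attempts
            rest, tail = best(i + 1, b - u)
            if hop * rest > best_val:
                best_val, best_u = hop * rest, (u,) + tail
        return best_val, best_u

    return best(0, budget)

prob, thresholds = optimal_thresholds(p=[0.6, 0.8, 0.5], budget=8)
print(prob, thresholds)
```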
Formation of eyes in large-scale cyclonic vortices
NASA Astrophysics Data System (ADS)
Oruba, L.; Davidson, P. A.; Dormy, E.
2018-01-01
We present numerical simulations of steady, laminar, axisymmetric convection of a Boussinesq fluid in a shallow, rotating, cylindrical domain. The flow is driven by an imposed vertical heat flux and shaped by the background rotation of the domain. The geometry is inspired by that of tropical cyclones and the global flow pattern consists of a shallow swirling vortex combined with a poloidal flow in the r-z plane which is predominantly inward near the bottom boundary and outward along the upper surface. Our numerical experiments confirm that, as suggested in our recent work [L. Oruba et al., J. Fluid Mech. 812, 890 (2017), 10.1017/jfm.2016.846], an eye forms at the center of the vortex which is reminiscent of that seen in a tropical cyclone and is characterized by a local reversal in the direction of the poloidal flow. We establish scaling laws for the flow and map out the conditions under which an eye will, or will not, form. We show that, to leading order, the velocity scales with V = (αgβ)^{1/2} H, where g is gravity, α is the expansion coefficient, β is the background temperature gradient, and H is the depth of the domain. We also show that the two most important parameters controlling the flow are Re = VH/ν and Ro = V/(ΩH), where Ω is the background rotation rate and ν the viscosity. The Prandtl number and aspect ratio also play an important, if secondary, role. Finally, and most importantly, we establish the criteria required for eye formation. These consist of a lower bound on Re, upper and lower bounds on Ro, and an upper bound on the Ekman number.
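The quoted scalings are easy to evaluate numerically; the sketch below simply computes V, Re and Ro from assumed placeholder values of α, g, β, H, ν and Ω.

```python
def convection_parameters(alpha, g, beta, H, nu, Omega):
    """Velocity scale V = (alpha*g*beta)**0.5 * H and the control parameters
    Re = V*H/nu, Ro = V/(Omega*H)."""
    V = (alpha * g * beta) ** 0.5 * H
    return {"V": V, "Re": V * H / nu, "Ro": V / (Omega * H)}

# Placeholder laboratory-scale values, chosen only to show the arithmetic.
print(convection_parameters(alpha=2e-4, g=9.81, beta=1e-3, H=1.0,
                            nu=1e-6, Omega=1e-2))
```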
Koh, Junseock; Shkel, Irina; Saecker, Ruth M.; Record, M. Thomas
2011-01-01
Previous ITC and FRET studies demonstrated that Escherichia coli HUαβ binds nonspecifically to duplex DNA in three different binding modes: a tighter-binding 34 bp mode which interacts with DNA in large (>34 bp) gaps between bound proteins, reversibly bending it 140° and thereby increasing its flexibility, and two weaker, modestly cooperative small-site-size modes (10 bp, 6 bp) useful for filling gaps between bound proteins shorter than 34 bp. Here we use ITC to determine the thermodynamics of these binding modes as a function of salt concentration, and deduce that DNA in the 34 bp mode is bent around but not wrapped on the body of HU, in contrast to specific binding of IHF. Analyses of binding isotherms (8, 15, 34 bp DNA) and initial binding heats (34, 38, 160 bp DNA) reveal that all three modes have similar log-log salt concentration derivatives of the binding constants (Ski) even though their binding site sizes differ greatly; most probable values of Ski on 34 bp or larger DNA are − 7.5 ± 0.5. From the similarity of Ski values, we conclude that binding interfaces of all three modes involve the same region of the arms and saddle of HU. All modes are entropy-driven, as expected for nonspecific binding driven by the polyelectrolyte effect. The bent-DNA 34 bp mode is most endothermic, presumably because of the cost of HU-induced DNA bending, while the 6 bp mode is modestly exothermic at all salt concentrations examined. Structural models consistent with the observed Ski values are proposed. PMID:21513716
How Isotropic is the Universe?
Saadeh, Daniela; Feeney, Stephen M; Pontzen, Andrew; Peiris, Hiranya V; McEwen, Jason D
2016-09-23
A fundamental assumption in the standard model of cosmology is that the Universe is isotropic on large scales. Breaking this assumption leads to a set of solutions to Einstein's field equations, known as Bianchi cosmologies, only a subset of which have ever been tested against data. For the first time, we consider all degrees of freedom in these solutions to conduct a general test of isotropy using cosmic microwave background temperature and polarization data from Planck. For the vector mode (associated with vorticity), we obtain a limit on the anisotropic expansion of (σ_{V}/H)_{0}<4.7×10^{-11} (95% C.L.), which is an order of magnitude tighter than previous Planck results that used cosmic microwave background temperature only. We also place upper limits on other modes of anisotropic expansion, with the weakest limit arising from the regular tensor mode, (σ_{T,reg}/H)_{0}<1.0×10^{-6} (95% C.L.). Including all degrees of freedom simultaneously for the first time, anisotropic expansion of the Universe is strongly disfavored, with odds of 121 000:1 against.
Standard Model in multiscale theories and observational constraints
NASA Astrophysics Data System (ADS)
Calcagni, Gianluca; Nardelli, Giuseppe; Rodríguez-Fernández, David
2016-08-01
We construct and analyze the Standard Model of electroweak and strong interactions in multiscale spacetimes with (i) weighted derivatives and (ii) q-derivatives. Both theories can be formulated in two different frames, called fractional and integer picture. By definition, the fractional picture is where physical predictions should be made. (i) In the theory with weighted derivatives, it is shown that gauge invariance and the requirement of having constant masses in all reference frames make the Standard Model in the integer picture indistinguishable from the ordinary one. Experiments involving only weak and strong forces are insensitive to a change of spacetime dimensionality also in the fractional picture, and only the electromagnetic and gravitational sectors can break the degeneracy. For the simplest multiscale measures with only one characteristic time, length and energy scale t*, ℓ* and E*, we compute the Lamb shift in the hydrogen atom and constrain the multiscale correction to the ordinary result, getting the absolute upper bound t* < 10^{-23} s. For the natural choice α_0 = 1/2 of the fractional exponent in the measure, this bound is strengthened to t* < 10^{-29} s, corresponding to ℓ* < 10^{-20} m and E* > 28 TeV. Stronger bounds are obtained from the measurement of the fine-structure constant. (ii) In the theory with q-derivatives, considering the muon decay rate and the Lamb shift in light atoms, we obtain the independent absolute upper bounds t* < 10^{-13} s and E* > 35 MeV. For α_0 = 1/2, the Lamb shift alone yields t* < 10^{-27} s, ℓ* < 10^{-19} m and E* > 450 GeV.
A note on the WGC, effective field theory and clockwork within string theory
NASA Astrophysics Data System (ADS)
Ibáñez, Luis E.; Montero, Miguel
2018-02-01
It has been recently argued that Higgsing of theories with U(1)^n gauge interactions consistent with the Weak Gravity Conjecture (WGC) may lead to effective field theories parametrically violating WGC constraints. The minimal examples typically involve Higgs scalars with a large charge with respect to a U(1) (e.g. charges (Z, 1) in U(1)^2 with Z ≫ 1). This type of Higgs multiplet also plays a key role in clockwork U(1) theories. We study these issues in the context of heterotic string theory and find that, even if there is no new physics at the standard magnetic WGC scale Λ ∼ g_IR M_P, the string scale is just slightly above, at a scale ∼ √(k_IR) Λ. Here k_IR is the level of the IR U(1) worldsheet current. We show that, unlike the standard magnetic cutoff, this bound is insensitive to subsequent Higgsing. One may argue that this constraint gives rise to no bound at the effective field theory level since k_IR is model dependent and in general unknown. However there is an additional constraint to be taken into account, which is that the Higgsing scalars with large charge Z should be part of the string massless spectrum; this becomes an upper bound k_IR ≤ k_0^2, where k_0 is the level of the UV currents. Thus, for fixed k_0, Z cannot be made parametrically large. The upper bound on the charges Z leads to limitations on the size and structure of hierarchies in an iterated U(1) clockwork mechanism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, X; Cao, D; Housley, D
2014-06-01
Purpose: In this work, we have tested the performance of new respiratory gating solutions for Elekta linacs. These solutions include the Response gating kit and the C-RAD Catalyst surface mapping system. Verification measurements have been performed for a series of clinical cases. We also examined the beam-on latency of the system and its impact on delivery efficiency. Methods: To verify the benefits of tighter gating windows, a Quasar Respiratory Motion Platform was used. Its vertical-motion plate acted as a respiration surrogate and was tracked by the Catalyst system to generate gating signals. A MatriXX ion-chamber array was mounted on its longitudinal-moving platform. Clinical plans were delivered to a stationary and a moving MatriXX array at 100%, 50% and 30% gating windows, and gamma scores were calculated comparing the moving delivery results to the stationary result. It is important to note that as one moves to tighter gating windows, the delivery efficiency will be impacted by the linac's beam-on latency. Using a specialized software package, we generated beam-on signals of lengths 1000 ms, 600 ms, 450 ms, 400 ms, 350 ms and 300 ms. As the gating windows get tighter, one can expect to reach a point where the dose rate falls to nearly zero, indicating that the gating window is close to the beam-on latency. A clinically useful gating window needs to be significantly longer than the latency of the linac. Results: As expected, the use of tighter gating windows improved delivery accuracy. However, a lower limit on the gating window, largely defined by the linac beam-on latency, exists at around 300 ms. Conclusion: The Response gating kit, combined with the C-RAD Catalyst, provides an effective solution for respiratory-gated treatment delivery. Careful patient selection, gating window design, and even visual/audio coaching may be necessary to ensure both delivery quality and efficiency. This research project is funded by Elekta.
The Limb Infrared Monitor of the Stratosphere (LIMS) experiment
NASA Technical Reports Server (NTRS)
Russell, J. M.; Gille, J. C.
1978-01-01
The Limb Infrared Monitor of the Stratosphere is used to obtain vertical profiles and maps of temperature and the concentration of ozone, water vapor, nitrogen dioxide, and nitric acid for the region of the stratosphere bounded by the upper troposphere and the lower mesosphere.
RIEMANNIAN MANIFOLDS ADMITTING A CONFORMAL TRANSFORMATION GROUP
Yano, Kentaro
1969-01-01
Let M be a Riemannian manifold with constant scalar curvature K which admits an infinitesimal conformal transformation. A necessary and sufficient condition in order that it be isometric with a sphere is obtained. Inequalities giving upper and lower bounds for K are also derived. PMID:16578692
An Upper Bound for Population Exposure Variability (SOT)
Tools for the rapid assessment of exposure potential are needed in order to put the results of rapidly-applied tools for assessing biological activity, such as ToxCast® and other high throughput methodologies, into a quantitative exposure context. The ExpoCast models (Wambaugh et...
Aggregating quantum repeaters for the quantum internet
NASA Astrophysics Data System (ADS)
Azuma, Koji; Kato, Go
2017-09-01
The quantum internet holds promise for accomplishing quantum teleportation and unconditionally secure communication freely between arbitrary clients all over the globe, as well as the simulation of quantum many-body systems. For such a quantum internet protocol, a general fundamental upper bound on the obtainable entanglement or secret key has been derived [K. Azuma, A. Mizutani, and H.-K. Lo, Nat. Commun. 7, 13523 (2016), 10.1038/ncomms13523]. Here we consider its converse problem. In particular, we present a universal protocol constructible from any given quantum network, which is based on running quantum repeater schemes in parallel over the network. For arbitrary lossy optical channel networks, our protocol has no scaling gap with the upper bound, even based on existing quantum repeater schemes. In an asymptotic limit, our protocol works as an optimal entanglement or secret-key distribution over any quantum network composed of practical channels such as erasure channels, dephasing channels, bosonic quantum amplifier channels, and lossy optical channels.
Ferromagnetic Potts models with multisite interaction
NASA Astrophysics Data System (ADS)
Schreiber, Nir; Cohen, Reuven; Haber, Simi
2018-03-01
We study the q-state Potts model with four-site interaction on a square lattice. Based on the asymptotic behavior of lattice animals, it is argued that when q ≤ 4 the system exhibits a second-order phase transition and when q > 4 the transition is first order. The q = 4 model is borderline. We find 1/ln q to be an upper bound on T_c, the exact critical temperature. Using a low-temperature expansion, we show that 1/(θ ln q), where θ > 1 is a q-dependent geometrical term, is an improved upper bound on T_c. In fact, our findings support T_c = 1/(θ ln q). This expression is used to estimate the finite correlation length in first-order transition systems. These results can be extended to other lattices. Our theoretical predictions are confirmed numerically by an extensive study of the four-site interaction model using the Wang-Landau entropic sampling method for q = 3, 4, 5. In particular, the q = 4 model shows an ambiguous finite-size pseudocritical behavior.
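The two bounds can be tabulated directly; in the snippet below the geometrical factor θ is an assumed placeholder rather than the paper's q-dependent value.

```python
import math

def tc_bounds(q, theta):
    """Simple upper bound 1/ln(q) and the tighter estimate 1/(theta*ln(q))."""
    simple = 1.0 / math.log(q)
    improved = 1.0 / (theta * math.log(q))
    return simple, improved

for q in (3, 4, 5):
    print(q, tc_bounds(q, theta=1.1))   # theta = 1.1 is an arbitrary placeholder
```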
Extremal values on Zagreb indices of trees with given distance k-domination number.
Pei, Lidan; Pan, Xiangfeng
2018-01-01
Let G = (V, E) be a graph. A set D ⊆ V is a distance k-dominating set of G if for every vertex u ∈ V \ D, d(u, v) ≤ k for some vertex v ∈ D, where k is a positive integer. The distance k-domination number γ_k(G) of G is the minimum cardinality among all distance k-dominating sets of G. The first Zagreb index of G is defined as M_1(G) = Σ_{v∈V} deg(v)², and the second Zagreb index of G is M_2(G) = Σ_{uv∈E} deg(u)deg(v). In this paper, we obtain the upper bounds for the Zagreb indices of n-vertex trees with given distance k-domination number and characterize the extremal trees, which generalize the results of Borovićanin and Furtula (Appl. Math. Comput. 276:208-218, 2016). What is worth mentioning, for an n-vertex tree T, is that a sharp upper bound on the distance k-domination number γ_k(T) is determined.
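For a tree given as an adjacency list, the two Zagreb indices defined above can be computed in a few lines; the sketch below is generic (it works for any simple graph) and the 5-vertex path is just a convenient example.

```python
def zagreb_indices(adj):
    """First and second Zagreb indices of a simple graph given as an adjacency list."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    m1 = sum(d * d for d in deg.values())                       # sum of squared degrees
    m2 = sum(deg[u] * deg[v] for u in adj for v in adj[u] if u < v)  # each edge once
    return m1, m2

# A 5-vertex path (a tree): degrees 1,2,2,2,1 -> M1 = 14, M2 = 12.
path5 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(zagreb_indices(path5))
```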
Pinning down inelastic dark matter in the Sun and in direct detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blennow, Mattias; Clementz, Stefan; Herrero-Garcia, Juan, E-mail: emb@kth.se, E-mail: scl@kth.se, E-mail: juhg@kth.se
2016-04-01
We study the solar capture rate of inelastic dark matter with endothermic and/or exothermic interactions. By assuming that an inelastic dark matter signal will be observed in next generation direct detection experiments we can set a lower bound on the capture rate that is independent of the local dark matter density, the velocity distribution, the galactic escape velocity as well as the scattering cross section. In combination with upper limits from neutrino observatories we can place upper bounds on the annihilation channels leading to neutrinos. We find that, while endothermic scattering limits are weak in the isospin-conserving case, strong bounds may be set for exothermic interactions, in particular in the spin-dependent case. Furthermore, we study the implications of observing two direct detection signals, in which case one can halo-independently obtain the dark matter mass and the mass splitting, and disentangle the endothermic/exothermic nature of the scattering. Finally we discuss isospin violation.
Diffusion Influenced Adsorption Kinetics.
Miura, Toshiaki; Seki, Kazuhiko
2015-08-27
When the kinetics of adsorption is influenced by the diffusive flow of solutes, the solute concentration at the surface is influenced by the surface coverage of solutes, which is given by the Langmuir-Hinshelwood adsorption equation. The diffusion equation with the boundary condition given by the Langmuir-Hinshelwood adsorption equation leads to the nonlinear integro-differential equation for the surface coverage. In this paper, we solved the nonlinear integro-differential equation using the Grünwald-Letnikov formula developed to solve fractional kinetics. Guided by the numerical results, analytical expressions for the upper and lower bounds of the exact numerical results were obtained. The upper and lower bounds were close to the exact numerical results in the diffusion- and reaction-controlled limits, respectively. We examined the validity of the two simple analytical expressions obtained in the diffusion-controlled limit. The results were generalized to include the effect of dispersive diffusion. We also investigated the effect of molecular rearrangement of anisotropic molecules on surface coverage.
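The Grünwald-Letnikov building block used in the paper can be sketched independently of the adsorption problem: the recursively generated coefficients and the resulting grid approximation of a fractional derivative. The test function and order below are arbitrary choices for illustration.

```python
import numpy as np

def gl_coefficients(alpha, n):
    """Grünwald-Letnikov weights: w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_fractional_derivative(f, alpha, h):
    """GL approximation of D^alpha f at each grid point (f sampled from t = 0)."""
    n = len(f) - 1
    w = gl_coefficients(alpha, n)
    return np.array([np.dot(w[: j + 1], f[j::-1]) for j in range(n + 1)]) / h ** alpha

t = np.linspace(0.0, 1.0, 201)
f = t ** 2
# Exact D^0.5 of t^2 is Gamma(3)/Gamma(2.5) * t^1.5 ~ 1.5045 * t^1.5; compare at t = 1.
print(gl_fractional_derivative(f, alpha=0.5, h=t[1] - t[0])[-1])
```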
Pages, Gaël; Ramdani, Nacim; Fraisse, Philippe; Guiraud, David
2009-06-01
This paper presents a contribution for restoring standing in paraplegia while using functional electrical stimulation (FES). Movement generation induced by FES remains mostly open looped and stimulus intensities are tuned empirically. To design an efficient closed-loop control, a preliminary study has been carried out to investigate the relationship between body posture and voluntary upper body movements. A methodology is proposed to estimate body posture in the sagittal plane using force measurements exerted on supporting handles during standing. This is done by setting up constraints related to the geometric equations of a two-dimensional closed chain model and the hand-handle interactions. All measured quantities are subject to an uncertainty assumed unknown but bounded. The set membership estimation problem is solved via interval analysis. Guaranteed uncertainty bounds are computed for the estimated postures. In order to test the feasibility of our methodology, experiments were carried out with complete spinal cord injured patients.
Non-linear collisional Penrose process: How much energy can a black hole release?
NASA Astrophysics Data System (ADS)
Nakao, Ken-ichi; Okawa, Hirotada; Maeda, Kei-ichi
2018-01-01
Energy extraction from a rotating or charged black hole is one of the fascinating issues in general relativity. The collisional Penrose process is one such extraction mechanism and has been reconsidered intensively since Bañados, Silk, and West pointed out the physical importance of very high energy collisions around a maximally rotating black hole. In order to get results analytically, the test particle approximation has been adopted so far. Successive works based on this approximation scheme have not yet revealed the upper bound on the efficiency of the energy extraction because of the lack of backreaction. In the Reissner-Nordström spacetime, by fully taking into account the self-gravity of the shells, we find that there is an upper bound on the extracted energy that is consistent with the area law of a black hole. We also show one particular scenario in which almost the maximum energy extraction is achieved even without the Bañados-Silk-West collision.
Optimal Coordinated EV Charging with Reactive Power Support in Constrained Distribution Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paudyal, Sumit; Ceylan, Oğuzhan; Bhattarai, Bishnu P.
Electric vehicle (EV) charging/discharging can take place in any P-Q quadrant, which means EVs could support reactive power to the grid while charging the battery. In controlled charging schemes, the distribution system operator (DSO) coordinates the charging of EV fleets to ensure the grid’s operating constraints are not violated. In practice, this means the DSO sets upper bounds on the power limits for EV charging. In this work, we demonstrate that if EVs inject reactive power into the grid while charging, the DSO can issue higher upper bounds on the active power limits for the EVs for the same set of grid constraints. We demonstrate the concept on a 33-node test feeder with 1,500 EVs. Case studies show that, in constrained distribution grids with coordinated charging, the average cost of EV charging can be reduced if the charging takes place in the fourth P-Q quadrant compared to charging with unity power factor.
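A toy calculation illustrates why reactive-power support can relax the active-power bound: with the usual approximate voltage-drop expression ΔV ≈ (RP + XQ)/V for a single radial feeder segment, injecting reactive power (Q < 0, fourth P-Q quadrant while charging) offsets part of the drop, so a larger charging power fits the same ΔV limit. This is only a one-segment illustration with placeholder numbers, not the paper's optimization over the 33-node feeder.

```python
def max_charging_power(dV_max, V, R, X, Q):
    """Largest P (watts) satisfying (R*P + X*Q)/V <= dV_max for one segment."""
    return (dV_max * V - X * Q) / R

# Placeholder segment values: 400 V nominal, R = X = 0.05 ohm, 8 V allowed drop;
# Q is the EV's reactive power (negative = injection into the grid).
V, R, X, dV_max = 400.0, 0.05, 0.05, 8.0
print(max_charging_power(dV_max, V, R, X, Q=0.0))      # unity power factor
print(max_charging_power(dV_max, V, R, X, Q=-20e3))    # 20 kvar injected -> higher P bound
```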
Stochastic parameter estimation in nonlinear time-delayed vibratory systems with distributed delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-07-01
The stochastic estimation of parameters and states in linear and nonlinear time-delayed vibratory systems with distributed delay is explored. The approach consists of first employing a continuous time approximation to approximate the delayed integro-differential system with a large set of ordinary differential equations having stochastic excitations. Then the problem of state and parameter estimation in the resulting stochastic ordinary differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the augmented filtering problem, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states. Similarly, the upper bound of the distributed delay can also be estimated by the proposed technique. As an illustrative example to a practical problem in vibrations, the parameter, delay upper bound, and state estimation from noise-corrupted measurements in a distributed force model widely used for modeling machine tool vibrations in the turning operation is investigated.
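The state-augmentation idea can be shown on a scalar toy system rather than the delayed vibratory model: the unknown parameter is appended to the state vector and estimated jointly by an extended Kalman filter. All system values below are assumptions chosen only to make the sketch runnable.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, q, r, n = 0.95, 1e-2, 1e-2, 400

# Simulate the toy system x_{k+1} = a*x_k + w and noisy measurements y = x + v.
x = np.empty(n)
x[0] = 1.0
for k in range(1, n):
    x[k] = a_true * x[k - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), size=n)

# EKF on the augmented state z = [x, a]; the parameter a is modelled as constant.
z = np.array([y[0], 0.5])            # crude initial guesses for [x, a]
P = np.diag([1.0, 1.0])
Q = np.diag([q, 0.0])
R = np.array([[r]])
H = np.array([[1.0, 0.0]])

for k in range(1, n):
    F = np.array([[z[1], z[0]],      # Jacobian of the map [a*x, a]
                  [0.0, 1.0]])
    z = np.array([z[1] * z[0], z[1]])            # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                          # update with y[k]
    K = P @ H.T @ np.linalg.inv(S)
    z = z + K @ (np.array([y[k]]) - H @ z)
    P = (np.eye(2) - K @ H) @ P

print(z[1])                          # estimate of a, close to a_true = 0.95
```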
Universal Charge-Radius Relation for Subatomic and Astrophysical Compact Objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madsen, Jes
2008-04-18
Electron-positron pair creation in supercritical electric fields limits the net charge of any static, spherical object, such as superheavy nuclei, strangelets, and Q balls, or compact stars like neutron stars, quark stars, and black holes. For radii between 4×10² and 10⁴ fm the upper bound on the net charge is given by the universal relation Z = 0.71 R_fm, and for larger radii (measured in femtometers or kilometers) Z = 7×10⁻⁵ R_fm² = 7×10³¹ R_km². For objects with nuclear density the relation corresponds to Z ≈ 0.7 A^{1/3} (10⁸ ≲ A ≲ 10¹²), where A is the baryon number. For some systems this universal upper bound improves existing charge limits in the literature.
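The piecewise relation quoted above translates directly into a small helper; radii below the stated range are rejected, and the large-radius branch reproduces the equivalent kilometer form.

```python
def max_charge(R_fm):
    """Maximum net charge Z for a static spherical object of radius R_fm (fm)."""
    if R_fm < 4e2:
        raise ValueError("the relation quoted above applies for R >= ~4e2 fm")
    if R_fm <= 1e4:
        return 0.71 * R_fm
    return 7e-5 * R_fm ** 2        # equivalently 7e31 * (R in km)**2

print(max_charge(1e3))     # an object of radius 1000 fm
print(max_charge(1e18))    # a 1 km compact star: 7e31, matching the km form
```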
Van Nguyen, Binh; Kim, Kiseon
2016-09-11
In this paper, we consider amplify-and-forward (AnF) cooperative systems under correlated fading environments. We first present a brief overview of existing works on the effect of channel correlations on the system performance. We then focus on our main contribution which is analyzing the outage probability of a multi-AnF-relay system with the best relay selection (BRS) scheme under a condition that two channels of each relay, source-relay and relay-destination channels, are correlated. Using lower and upper bounds on the end-to-end received signal-to-noise ratio (SNR) at the destination, we derive corresponding upper and lower bounds on the system outage probability. We prove that the system can achieve a diversity order (DO) equal to the number of relays. In addition, and importantly, we show that the considered correlation form has a constructive effect on the system performance. In other words, the larger the correlation coefficient, the better system performance. Our analytic results are corroborated by extensive Monte-Carlo simulations.
Beamforming Based Full-Duplex for Millimeter-Wave Communication
Liu, Xiao; Xiao, Zhenyu; Bai, Lin; Choi, Jinho; Xia, Pengfei; Xia, Xiang-Gen
2016-01-01
In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors. PMID:27455256
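Two of the closed-form criteria mentioned above can be sketched for a single transmit array with a desired channel h and a self-interference channel g (both randomly generated placeholders): MRT matches h directly, while a ZF-style beamformer matches h within the null space of g so the self-interference term vanishes. This is a simplified illustration, not the paper's joint Tx/Rx design.

```python
import numpy as np

rng = np.random.default_rng(1)
h = rng.normal(size=4) + 1j * rng.normal(size=4)   # desired channel (hypothetical)
g = rng.normal(size=4) + 1j * rng.normal(size=4)   # self-interference channel

# MRT: maximise received signal power, ignore self-interference.
w_mrt = h.conj() / np.linalg.norm(h)

# ZF-style: project the MRT direction onto the null space of g, then normalise.
P_null = np.eye(4) - np.outer(g.conj(), g) / np.linalg.norm(g) ** 2
w_zf = P_null @ h.conj()
w_zf /= np.linalg.norm(w_zf)

print(abs(h @ w_mrt), abs(g @ w_mrt))   # MRT: max signal, residual self-interference
print(abs(h @ w_zf), abs(g @ w_zf))     # ZF: slightly less signal, ~zero self-interference
```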
White Light Stray Light Test of the SOHO UVCS
NASA Technical Reports Server (NTRS)
Gardner, L. N.; Gardner, L. N.; Fineschi, S.
1998-01-01
During the late stages of the integration phase of the Ultraviolet Coronagraph Spectrometer (UVCS) instrument for the Solar and Heliospheric Observatory (SOHO) at MATRA-Marconi in Toulouse, France, SOHO Project management at Goddard Space Flight Center (GSFC) became concerned that the elaborate stray light rejection system for the instrument had not been tested and might possibly be misaligned such that the instrument could not deliver promised scientific returns. A white light stray light test, which would place an upper bound on the value of UVCS's stray light rejection capability, was commissioned, conceived, and carried out. This upper bound value would be indicative of the weakest coronal features the spectrometer would be capable of discerning. The test was rapidly developed at GSFC in coordination with science team members from Harvard-Smithsonian Center for Astrophysics (CFA) and was carried out at MATRA in late February 1995. The outcome of this test helped to justify similar, much desired tests with visible and far ultraviolet light at CFA in a facility specifically designed to perform such testing.
Quantifying the tracking capability of space-based AIS systems
NASA Astrophysics Data System (ADS)
Skauen, Andreas Nordmo
2016-01-01
The Norwegian Defence Research Establishment (FFI) has operated three Automatic Identification System (AIS) receivers in space. Two are on dedicated nano-satellites, AISSat-1 and AISSat-2. The third, the NORAIS Receiver, was installed on the International Space Station. A general method for calculating the upper bound on the tracking capability of a space-based AIS system has been developed and the results from the algorithm are applied to AISSat-1 and the NORAIS Receiver individually. In addition, a constellation of AISSat-1 and AISSat-2 is presented. The tracking capability is defined as the probability of re-detecting ships as they move around the globe and is shown to represent an upper bound on space-based AIS system performance. AISSat-1 and AISSat-2 operate on the nominal AIS1 and AIS2 channels, while the NORAIS Receiver data used are from operations on the dedicated space AIS channels, AIS3 and AIS4. The improved tracking capability of operations on the space AIS channels is presented.
Flutter suppression and stability analysis for a variable-span wing via morphing technology
NASA Astrophysics Data System (ADS)
Li, Wencheng; Jin, Dongping
2018-01-01
A morphing wing can enhance aerodynamic characteristics and control authority as an alternative to using ailerons. To use morphing technology for flutter suppression, the dynamical behavior and stability of a variable-span wing subjected to supersonic aerodynamic loads are investigated numerically in this paper. An axially moving cantilever plate is employed to model the variable-span wing, in which the governing equations of motion are established via the Kane method and piston theory. A morphing strategy based on axially moving rates is proposed to suppress the flutter that occurs beyond the critical span length, and the flutter stability is verified by Floquet theory. Furthermore, the transient stability during the morphing motion is analyzed and the upper bound of the morphing rate is obtained. The simulation results indicate that the proposed morphing law, which varies periodically with a proper amplitude, can accomplish flutter suppression. Further, the upper bound of the morphing speed decreases rapidly once the span length approaches its critical value.
Activity of upper limb muscles during human walking.
Kuhtz-Buschbeck, Johann P; Jing, Bo
2012-04-01
The EMG activity of upper limb muscles during human gait has rarely been studied previously. It was examined in 20 normal volunteers in four conditions: walking on a treadmill (1) with unrestrained natural arm swing (Normal), (2) while volitionally holding the arms still (Held), (3) with the arms immobilized (Bound), and (4) with the arms swinging in phase with the ipsilateral legs, i.e. opposite-to-normal phasing (Anti-Normal). Normal arm swing involved weak rhythmical lengthening and shortening contractions of arm and shoulder muscles. Phasic muscle activity was needed to keep the unrestricted arms still during walking (Held), indicating a passive component of arm swing. An active component, possibly programmed centrally, existed as well, because some EMG signals persisted when the arms were immobilized during walking (Bound). Anti-Normal gait involved stronger EMG activity than Normal walking and was uneconomical. The present results indicate that normal arm swing has both passive and active components.
Time-optimal spinup maneuvers of flexible spacecraft
NASA Technical Reports Server (NTRS)
Singh, G.; Kabamba, P. T.; Mcclamroch, N. H.
1990-01-01
Attitude controllers for spacecraft have been based on the assumption that the bodies being controlled are rigid. Future spacecraft, however, may be quite flexible. Many applications require spinning up/down these vehicles. In this work the minimum time control of these maneuvers is considered. The time-optimal control is shown to possess an important symmetry property. Taking advantage of this property, the necessary and sufficient conditions for optimality are transformed into a system of nonlinear algebraic equations in the control switching times during one half of the maneuver, the maneuver time, and the costates at the mid-maneuver time. These equations can be solved using a homotopy approach. Control spillover measures are introduced and upper bounds on these measures are obtained. For a special case these upper bounds can be expressed in closed form for an infinite dimensional evaluation model. Rotational stiffening effects are ignored in the optimal control analysis. Based on a heuristic argument a simple condition is given which justifies the omission of these nonlinear effects. This condition is validated by numerical simulation.
Howard, H T; Tyler, G L; Esposito, P B; Anderson, J D; Reasenberg, R D; Shapiro, I I; Fjeldbo, G; Kliore, A J; Levy, G S; Brunn, D L; Dickinson, R; Edelson, R E; Martin, W L; Postal, R B; Seidel, B; Sesplaukis, T T; Shirley, D L; Stelzried, C T; Sweetnam, D N; Wood, G E; Zygielbaum, A I
1974-07-12
Analysis of the radio-tracking data from Mariner 10 yields 6,023,600 +/- 600 for the ratio of the mass of the sun to that of Mercury, in very good agreement with values determined earlier from radar data alone. Occultation measurements yielded values for the radius of Mercury of 2440 +/- 2 and 2438 +/- 2 kilometers at latitudes of 2 degrees N and 68 degrees N, respectively, again in close agreement with the average equatorial radius of 2439 +/- 1 kilometers determined from radar data. The mean density of 5.44 grams per cubic centimeter deduced for Mercury from Mariner 10 data thus virtually coincides with the prior determination. No evidence of either an ionosphere or an atmosphere was found, with the data yielding upper bounds on the electron density of about 1500 and 4000 electrons per cubic centimeter on the dayside and nightside, respectively, and an inferred upper bound on the surface pressure of 10^(-8) millibar.
Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke
2018-02-01
In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, both recurrent neural networks are proved to converge in finite time. In addition, by solving a differential equation, the upper bounds on the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed networks have a better convergence property (i.e., a lower upper bound on the convergence time), so that accurate solutions of general time-varying LMEs can be obtained in less time. Finally, a variety of situations are considered by setting different coefficient matrices of general time-varying LMEs, and extensive computer simulations (including an application to robot manipulators) are conducted to validate the faster finite-time convergence of the proposed networks.
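As a concrete illustration of the design idea summarized above (not the paper's exact networks or activation functions), the following minimal Python sketch drives the error E(t) = A(t)X(t) - B(t) of a time-varying linear matrix equation to zero with recurrent (zeroing) neural dynamics using a sign-power activation; the coefficient matrices, the gain gamma, the exponent p, and the step size are all illustrative assumptions.

# Minimal sketch (illustrative, not the paper's design): recurrent/zeroing
# neural dynamics for the time-varying linear matrix equation A(t) X(t) = B(t).
# The error E = A X - B is driven to zero via dE/dt = -gamma * phi(E), where
# phi is a sign-power ("finite-time") activation.
import numpy as np

def phi(E, p=0.5):
    # elementwise sign-power activation
    return np.sign(E) * np.abs(E) ** p

def A(t):    # example time-varying coefficient matrix
    return np.array([[3 + np.sin(t), 0.5], [0.5, 3 + np.cos(t)]])

def dA(t):   # its time derivative
    return np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])

def B(t):
    return np.array([[np.cos(2 * t)], [np.sin(2 * t)]])

def dB(t):
    return np.array([[-2 * np.sin(2 * t)], [2 * np.cos(2 * t)]])

gamma, dt, T = 10.0, 1e-3, 5.0
X = np.zeros((2, 1))                 # arbitrary initial state
for k in range(int(T / dt)):
    t = k * dt
    E = A(t) @ X - B(t)
    # dE/dt = dA X + A dX - dB = -gamma phi(E)  =>  solve for dX
    dX = np.linalg.solve(A(t), -gamma * phi(E) - dA(t) @ X + dB(t))
    X = X + dt * dX

print("residual ||A X - B|| at t = T:", np.linalg.norm(A(T) @ X - B(T)))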
Two approximations of the present value distribution of a disability annuity
NASA Astrophysics Data System (ADS)
Spreeuw, Jaap
2006-02-01
The distribution function of the present value of a cash flow can be approximated by means of a distribution function of a random variable, which is also the present value of a sequence of payments, but with a simpler structure. The corresponding random variable has the same expectation as the random variable corresponding to the original distribution function and is a stochastic upper bound of convex order. A sharper upper bound can be obtained if more information about the risk is available. In this paper, it will be shown that such an approach can be adopted for disability annuities (also known as income protection policies) in a three state model under Markov assumptions. Benefits are payable during any spell of disability whilst premiums are only due whenever the insured is healthy. The quality of the two approximations is investigated by comparing the distributions obtained with the one derived from the algorithm presented in the paper by Hesselager and Norberg [Insurance Math. Econom. 18 (1996) 35-42].
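The structure described above lends itself to a quick numerical check. The following Python sketch estimates the present value distribution of such a disability annuity by Monte Carlo simulation of a three-state Markov chain (healthy, disabled, dead), with a benefit paid while disabled and a premium paid while healthy; the transition matrix, benefit, premium, discount factor, and horizon are made-up illustrative values, not taken from the paper.

# Illustrative Monte Carlo sketch (not the paper's analytic approximations):
# present value of a disability annuity in a three-state Markov model
# (0 = healthy, 1 = disabled, 2 = dead).  All parameter values are made up.
import numpy as np

rng = np.random.default_rng(0)
# Annual transition probability matrix (rows: from-state, cols: to-state).
P = np.array([[0.93, 0.05, 0.02],
              [0.20, 0.72, 0.08],
              [0.00, 0.00, 1.00]])
benefit, premium, v, horizon = 1.0, 0.3, 1 / 1.04, 40   # v = discount factor

def simulate_pv():
    state, pv = 0, 0.0
    for year in range(horizon):
        if state == 1:                    # benefit while disabled
            pv += benefit * v ** (year + 1)
        elif state == 0:                  # premium while healthy
            pv -= premium * v ** (year + 1)
        state = rng.choice(3, p=P[state])
        if state == 2:                    # absorbing death state
            break
    return pv

samples = np.array([simulate_pv() for _ in range(20000)])
print("mean PV:", samples.mean(), " 95th percentile:", np.quantile(samples, 0.95))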
Heterogeneous upper-bound finite element limit analysis of masonry walls out-of-plane loaded
NASA Astrophysics Data System (ADS)
Milani, G.; Zuccarello, F. A.; Olivito, R. S.; Tralli, A.
2007-11-01
A heterogeneous approach for FE upper bound limit analyses of out-of-plane loaded masonry panels is presented. Under the assumption of associated plasticity for the constituent materials, mortar joints are reduced to interfaces with a Mohr Coulomb failure criterion with tension cut-off and cap in compression, whereas for bricks both limited and unlimited strength are taken into account. At each interface, plastic dissipation can occur as a combination of out-of-plane shear, bending and torsion. In order to test the reliability of the model proposed, several examples of dry-joint panels out-of-plane loaded tested at the University of Calabria (Italy) are discussed. Numerical results are compared with experimental data for three different series of walls at different values of the in-plane compressive vertical loads applied. The comparisons show that reliable predictions of both collapse loads and failure mechanisms can be obtained by means of the numerical procedure employed.
Upper bound for the span of pencil graph
NASA Astrophysics Data System (ADS)
Parvathi, N.; Vimala Rani, A.
2018-04-01
An L(2,1)-coloring (also called radio coloring or λ-coloring) of a graph is a function f from the vertex set V(G) to the set of all nonnegative integers such that |f(x) - f(y)| ≥ 2 if d(x,y) = 1 and |f(x) - f(y)| ≥ 1 if d(x,y) = 2, where d(x,y) denotes the distance between x and y in G. The L(2,1)-coloring number or span λ(G) of G is the smallest number k such that G has an L(2,1)-coloring with max{f(v) : v ∈ V(G)} = k. The minimum number of colors used in an L(2,1)-coloring is called the radio number rn(G) of G (a positive integer) [2]. Griggs and Yeh conjectured that λ(G) ≤ Δ² for any simple graph with maximum degree Δ > 2. In this article, we consider some special graphs, such as the n-sunlet graph and pencil graph families, and derive upper bounds on λ(G) and rn(G).
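For small graphs, the span defined above can be computed directly by brute force. The Python sketch below does so straight from the definition; the 3-sunlet example graph (a triangle with one pendant vertex attached to each cycle vertex) and the search strategy are illustrative only and are not the constructions used in the article.

# Brute-force computation of the L(2,1) span lambda(G) for a small graph.
from itertools import combinations, product
import networkx as nx

def is_L21(G, f, dist):
    # check the distance-1 and distance-2 conditions of an L(2,1)-coloring
    for x, y in combinations(G.nodes, 2):
        d = dist[x].get(y)
        if d == 1 and abs(f[x] - f[y]) < 2:
            return False
        if d == 2 and abs(f[x] - f[y]) < 1:
            return False
    return True

def span(G):
    # smallest k such that some labeling with values in {0, ..., k} is valid
    dist = dict(nx.all_pairs_shortest_path_length(G))
    nodes = sorted(G.nodes)
    k = 0
    while True:
        for labels in product(range(k + 1), repeat=len(nodes)):
            if is_L21(G, dict(zip(nodes, labels)), dist):
                return k
        k += 1

G = nx.cycle_graph(3)                       # 3-sunlet: triangle plus pendants
G.add_edges_from([(0, 3), (1, 4), (2, 5)])
print("lambda(G) =", span(G))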
Dynamic characteristics of two new vibration modes of the disk-shell shaped gear
NASA Astrophysics Data System (ADS)
Yan, Litang; Qiu, Shijung; Gao, Xiangqung
1992-10-01
Two new vibration modes of the disk-shell-shaped big medium gears placed on three separate medium shafts of a turboprop engine have been found. They have the same nodal diameters as the conventional modes, but their frequencies are higher. The tooth ring vibrates both radially and axially and has greater deflection than the gear hub. The resonance of these two new nodal-diameter modes is much more dangerous than that of the conventional nodal-diameter modes. Moreover, they occur near the upper and lower bounds of the gear operating speed range. A special detuning method is developed to move the resonance of these two new modes out of the upper and lower bounds, respectively, and the effectiveness of damping rings in this case is investigated. The vibration responses measured on the reductor casing were reduced to a low level after damping rings were applied to the three big medium gears.
An analysis of spectral envelope-reduction via quadratic assignment problems
NASA Technical Reports Server (NTRS)
George, Alan; Pothen, Alex
1994-01-01
A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size is described. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. In this paper, we provide an analysis of the spectral envelope reduction algorithm. We describe the related 1-sum and 2-sum problems; the former is related to the envelope size, while the latter is related to an upper bound on the work involved in an envelope Cholesky factorization scheme. We formulate these two problems as quadratic assignment problems and then study the 2-sum problem in more detail. We obtain lower bounds on the 2-sum by considering a projected quadratic assignment problem, and then show that finding a permutation matrix closest to an orthogonal matrix attaining one of the lower bounds justifies the spectral envelope reduction algorithm. The lower bound on the 2-sum is seen to be tight for reasonably 'uniform' finite element meshes. We also obtain asymptotically tight lower bounds for the envelope size for certain classes of meshes.
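A minimal Python sketch of the spectral reordering step described above is given below: it forms the graph Laplacian of the matrix's nonzero pattern and sorts the entries of the eigenvector associated with the second-smallest eigenvalue (the Fiedler vector). The test matrix is an illustrative scrambled path pattern, not an example from the paper.

# Spectral (Fiedler-vector) reordering sketch for a sparse symmetric pattern.
import numpy as np

def spectral_ordering(A):
    # symmetric adjacency pattern of the nonzeros, diagonal excluded
    S = (np.abs(A) + np.abs(A).T) > 0
    np.fill_diagonal(S, False)
    L = np.diag(S.sum(axis=1)) - S.astype(float)   # graph Laplacian
    w, V = np.linalg.eigh(L)                        # ascending eigenvalues
    fiedler = V[:, 1]                               # second-smallest eigenpair
    return np.argsort(fiedler)

# Small example: a path-like sparsity pattern given in scrambled order.
n = 8
A = np.eye(n)
for i, j in [(0, 5), (5, 2), (2, 7), (7, 1), (1, 6), (6, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

print("spectral ordering:", spectral_ordering(A))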
Sensitivity analysis of limit state functions for probability-based plastic design
NASA Technical Reports Server (NTRS)
Frangopol, D. M.
1984-01-01
The evaluation of the total probability of plastic collapse failure P_f for a highly redundant structure with random interdependent plastic moments acted on by random interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds on this probability requires the use of second-moment algebra, which involves many statistical parameters. A computer program which selects the best strategy for minimizing the interval between the upper and lower bounds of P_f is now in its final stage of development. The relative influence of the various uncertainties involved in the computational process on the resulting bounds of P_f is analyzed through a sensitivity analysis. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.
NASA Astrophysics Data System (ADS)
Page, Don N.
2018-01-01
In an asymptotically flat spacetime of dimension d > 3 and with the Newtonian gravitational constant G, a spherical black hole of initial horizon radius r_h and mass M ~ r_h^(d-3)/G has a total decay time to Hawking emission of t_d ~ r_h^(d-1)/G ~ G^(2/(d-3)) M^((d-1)/(d-3)), which grows without bound as the radius r_h and mass M are taken to infinity. However, in asymptotically anti-de Sitter spacetime with a length scale ℓ and with absorbing boundary conditions at infinity, the total Hawking decay time does not diverge as the mass and radius go to infinity but instead remains bounded by a time of the order of ℓ^(d-1)/G.
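For readers who want to see how the quoted scaling follows from the relations stated in the abstract, a short rederivation (assumed standard, not taken verbatim from the paper) in LaTeX is:

\[
  M \sim \frac{r_h^{\,d-3}}{G}
  \quad\Longrightarrow\quad
  r_h \sim (G M)^{\frac{1}{d-3}},
\]
\[
  t_d \sim \frac{r_h^{\,d-1}}{G}
      \sim \frac{(G M)^{\frac{d-1}{d-3}}}{G}
      = G^{\frac{2}{d-3}}\, M^{\frac{d-1}{d-3}},
\]
which indeed grows without bound as $r_h, M \to \infty$, in contrast with the anti-de Sitter bound of order $\ell^{\,d-1}/G$.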
Enhancing the science of the WFIRST coronagraph instrument with post-processing.
NASA Astrophysics Data System (ADS)
Pueyo, Laurent; WFIRST CGI data analysis and post-processing WG
2018-01-01
We summarize the results of a three-year effort investigating how to apply modern image analysis methods, now routinely used with ground-based coronagraphs, to the WFIRST coronagraph instrument (CGI). We quantify the gain associated with post-processing for WFIRST-CGI observing scenarios simulated between 2013 and 2017. We also show, based on simulations, that the spectrum of a planet can be confidently retrieved using these processing tools with an Integral Field Spectrograph. We then discuss our work using CGI experimental data and quantify the post-processing gains obtained on the coronagraph testbed. We finally introduce stability metrics that are simple to define and measure, and that place useful lower and upper bounds on the achievable RDI post-processing contrast gain. We show that our bounds hold in the case of the testbed data.
Turning Around along the Cosmic Web
NASA Astrophysics Data System (ADS)
Lee, Jounghun; Yepes, Gustavo
2016-12-01
A bound violation designates a case in which the turnaround radius of a bound object exceeds the upper limit imposed by the spherical collapse model based on the standard ΛCDM paradigm. Given that the turnaround radius of a bound object is a stochastic quantity and that the spherical model overly simplifies the true gravitational collapse, which actually proceeds anisotropically along the cosmic web, the rarity of the occurrence of a bound violation may depend on the web environment. Assuming a Planck cosmology, we numerically construct the bound-zone peculiar velocity profiles along the cosmic web (filaments and sheets) around the isolated groups with virial mass M_v ≥ 3×10^13 h^-1 M_⊙ identified in the Small MultiDark Planck simulations and determine the radial distances at which their peculiar velocities equal the Hubble expansion speed as the turnaround radii of the groups. It is found that although the average turnaround radii of the isolated groups are well below the spherical bound limit on all mass scales, the bound violations are not forbidden for individual groups, and the cosmic web has an effect of reducing the rarity of the occurrence of a bound violation. Explaining that the spherical bound limit on the turnaround radius in fact represents the threshold distance up to which the intervention of the external gravitational field in the bound-zone peculiar velocity profiles around the nonisolated groups stays negligible, we discuss the possibility of using the threshold distance scale to constrain locally the equation of state of dark energy.
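The turnaround criterion used above (the radius at which the bound-zone infall speed equals the Hubble expansion speed) can be illustrated with a toy root-finding exercise in Python; the velocity profile and all numbers below are made up for illustration and are not the simulation measurements.

# Toy example: turnaround radius as the root of v_pec(r) = H0 * r.
import numpy as np
from scipy.optimize import brentq

H0 = 67.7                         # Hubble constant in km/s/Mpc (Planck-like)

def v_pec(r):
    # made-up bound-zone infall speed profile in km/s, with r in Mpc
    return 350.0 * r ** -0.8

# turnaround radius: infall speed equals the Hubble expansion speed H0 * r
r_t = brentq(lambda r: v_pec(r) - H0 * r, 0.1, 20.0)
print("turnaround radius ~ %.2f Mpc" % r_t)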
2012-07-10
materials used, the complexity of the human anatomy, manufacturing limitations, and analysis capability prohibits exactly matching surrogate material...upper and lower bounds for possible loading behaviour. Although it is impossible to exactly match the human anatomy according to mechanical
ARES I-X USS Fracture Analysis Loads Spectra Development
NASA Technical Reports Server (NTRS)
Larsen, Curtis; Mackey, Alden
2008-01-01
This report describes the development of a set of bounding load spectra for the ARES I-X launch vehicle. These load spectra are used in the determination of the critical initial flaw size (CIFS) of the welds in the ARES I-X upper stage simulator (USS).
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2012 CFR
2012-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2014 CFR
2014-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
Polygamy of entanglement in multipartite quantum systems
NASA Astrophysics Data System (ADS)
Kim, Jeong San
2009-08-01
We show that bipartite entanglement distribution (or entanglement of assistance) in multipartite quantum systems is by nature polygamous. We first provide an analytical upper bound for the concurrence of assistance in bipartite quantum systems and derive a polygamy inequality of multipartite entanglement in arbitrary-dimensional quantum systems.
ERIC Educational Resources Information Center
Brilleslyper, Michael A.; Wolverton, Robert H.
2008-01-01
In this article we consider an example suitable for investigation in many mid- and upper-level undergraduate mathematics courses. Fourier series provide an excellent example of the differences between uniform and non-uniform convergence. We use Dirichlet's test to investigate the convergence of the Fourier series for a simple periodic saw tooth…
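For reference, the sawtooth expansion presumably at issue is the standard textbook one (this normalization is assumed here, not quoted from the article). In LaTeX, for the 2π-periodic extension of f(x) = x on (-π, π):

\[
  f(x) \;=\; 2 \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\,\sin(n x),
\]
which converges pointwise at every point of continuity but not uniformly on the whole line, since the partial sums overshoot near the jumps at $x = \pm\pi$ (Gibbs phenomenon); Dirichlet's test yields uniform convergence on closed intervals bounded away from the discontinuities.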
SIP Version 1.0 User's Guide for Pesticide Exposure of Birds and Mammals through Drinking Water
The model provides an upper-bound estimate of the exposure of birds and mammals to pesticides through drinking water alone. It is intended for use in problem formulation to determine whether or not drinking water exposure alone is a potential pathway of concern.
On Wiener polarity index of bicyclic networks.
Ma, Jing; Shi, Yongtang; Wang, Zhen; Yue, Jun
2016-01-11
Complex networks are ubiquitous in the biological, physical and social sciences. Network robustness research aims at finding a measure to quantify network robustness. A number of Wiener-type indices have recently been incorporated as distance-based descriptors of complex networks. Wiener-type indices are known to depend both on the network's number of nodes and on its topology. The Wiener polarity index is also related to the clustering coefficient of networks. In this paper, based on some graph transformations, we determine the sharp upper bound of the Wiener polarity index among all bicyclic networks. These bounds help to understand the underlying quantitative graph measures in depth.
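Using the usual definition of the Wiener polarity index (the number of unordered vertex pairs at distance exactly 3), the quantity bounded in the paper can be computed directly for a small example in Python; the bicyclic test graph below (two 5-cycles sharing a vertex) is illustrative only and is not the extremal graph from the paper.

# Wiener polarity index computed from its usual distance-3 definition.
from itertools import combinations
import networkx as nx

def wiener_polarity(G):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    return sum(1 for u, v in combinations(G.nodes, 2) if dist[u].get(v) == 3)

# Example bicyclic network: two 5-cycles sharing a single vertex.
G = nx.union(nx.cycle_graph(5), nx.cycle_graph(5), rename=("a-", "b-"))
G = nx.contracted_nodes(G, "a-0", "b-0", self_loops=False)

print("W_p(G) =", wiener_polarity(G))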
Efficiency bounds of molecular motors under a trade-off figure of merit
NASA Astrophysics Data System (ADS)
Zhang, Yanchao; Huang, Chuankun; Lin, Guoxing; Chen, Jincan
2017-05-01
On the basis of the theory of irreversible thermodynamics and an elementary model of molecular motors that convert chemical energy from ATP hydrolysis into mechanical work exerted against an external force, the efficiencies of the molecular motors at two different optimization configurations for a trade-off figure of merit, representing the best compromise between the useful energy and the lost energy, are calculated. The upper and lower bounds on the efficiency at the two optimization configurations are determined. It is found that the optimal efficiencies at the two optimization configurations are always larger than 1/2.