Wang, Xueyi; Davidson, Nicholas J.
2011-01-01
Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracy of ensemble methods for binary classification that were missed or misinterpreted in the previous literature. First, we show upper and lower bounds on the prediction accuracy (i.e., the best and worst possible prediction accuracy) of ensemble methods. Next, we show that an ensemble method can achieve prediction accuracy greater than 0.5 even when the individual classifiers each have prediction accuracy below 0.5. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify the results and show that it is hard to reach the upper- and lower-bound accuracies with randomly generated individual classifiers, so better algorithms need to be developed. PMID:21853162
Limitations of the background field method applied to Rayleigh-Bénard convection
NASA Astrophysics Data System (ADS)
Nobili, Camilla; Otto, Felix
2017-09-01
We consider Rayleigh-Bénard convection as modeled by the Boussinesq equations, in the case of infinite Prandtl number and with no-slip boundary conditions. There is a broad interest in bounds on the upward heat flux, as given by the Nusselt number Nu, in terms of the forcing via the imposed temperature difference, as given by the Rayleigh number, in the turbulent regime Ra ≫ 1. In several studies, the background field method applied to the temperature field has been used to provide upper bounds on Nu in terms of Ra. In these applications, the background field method comes in the form of a variational problem where one optimizes a stratified temperature profile subject to a certain stability condition; the method is believed to capture the marginal stability of the boundary layer. The best available upper bound via this method is Nu ≲ Ra^{1/3} (ln Ra)^{1/15}; it proceeds via the construction of a stable temperature background profile that increases logarithmically in the bulk. In this paper, we show that the background temperature field method cannot provide a tighter upper bound in terms of the power of the logarithm. However, by another method, one does obtain the tighter upper bound Nu ≲ Ra^{1/3} (ln ln Ra)^{1/3}, so the result of this paper implies that the background temperature field method is unphysical in the sense that it cannot provide the optimal bound.
Determining Normal-Distribution Tolerance Bounds Graphically
NASA Technical Reports Server (NTRS)
Mezzacappa, M. A.
1983-01-01
Graphical method reduces the calculations and table lookups otherwise required. Distribution established from only three points: upper and lower confidence bounds of the mean and lower confidence bound of the standard deviation. Method requires only a few calculations with simple equations. Graphical procedure establishes best-fit line for measured data and bounds for selected confidence level and any distribution percentile.
NASA Technical Reports Server (NTRS)
Chlouber, Dean; O'Neill, Pat; Pollock, Jim
1990-01-01
A technique of predicting an upper bound on the rate at which single-event upsets due to ionizing radiation occur in semiconducting memory cells is described. The upper bound on the upset rate, which depends on the high-energy particle environment in earth orbit and accelerator cross-section data, is given by the product of an upper-bound linear energy-transfer spectrum and the mean cross section of the memory cell. Plots of the spectrum are given for low-inclination and polar orbits. An alternative expression for the exact upset rate is also presented. Both methods rely only on experimentally obtained cross-section data and are valid for sensitive bit regions having arbitrary shape.
Eren, Metin I.; Chao, Anne; Hwang, Wen-Han; Colwell, Robert K.
2012-01-01
Background: Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded: assuming only a lower bound of species or classes. Nevertheless, there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. Methodology/Principal Findings: Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. Conclusions/Significance: Application of the new method for constructing doubly-bounded richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race). PMID:22666316
Bounds for Asian basket options
NASA Astrophysics Data System (ADS)
Deelstra, Griselda; Diallo, Ibrahima; Vanmaele, Michèle
2008-09-01
In this paper we propose pricing bounds for European-style discrete arithmetic Asian basket options in a Black and Scholes framework. We start from methods used for basket options and Asian options. First, we use the general approach for deriving upper and lower bounds for stop-loss premia of sums of non-independent random variables as in Kaas et al. [Upper and lower bounds for sums of random variables, Insurance Math. Econom. 27 (2000) 151-168] or Dhaene et al. [The concept of comonotonicity in actuarial science and finance: theory, Insurance Math. Econom. 31(1) (2002) 3-33]. We generalize the methods in Deelstra et al. [Pricing of arithmetic basket options by conditioning, Insurance Math. Econom. 34 (2004) 55-57] and Vanmaele et al. [Bounds for the price of discrete sampled arithmetic Asian options, J. Comput. Appl. Math. 185(1) (2006) 51-90]. Afterwards we show how to derive an analytical closed-form expression for a lower bound in the non-comonotonic case. Finally, we derive upper bounds for Asian basket options by applying techniques as in Thompson [Fast narrow bounds on the value of Asian options, Working Paper, University of Cambridge, 1999] and Lord [Partially exact and bounded approximations for arithmetic Asian options, J. Comput. Finance 10 (2) (2006) 1-52]. Numerical results are included and on the basis of our numerical tests, we explain which method we recommend depending on moneyness and time-to-maturity.
Physical Uncertainty Bounds (PUB)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaughan, Diane Elizabeth; Preston, Dean L.
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Upper-Bound Estimates Of SEU in CMOS
NASA Technical Reports Server (NTRS)
Edmonds, Larry D.
1990-01-01
Theory of single-event upsets (SEU) (changes in logic state caused by energetic charged subatomic particles) in complementary metal oxide/semiconductor (CMOS) logic devices extended to provide upper-bound estimates of rates of SEU when limited experimental information available and configuration and dimensions of SEU-sensitive regions of devices unknown. Based partly on chord-length-distribution method.
Upper bound on the efficiency of certain nonimaging concentrators in the physical-optics model
NASA Astrophysics Data System (ADS)
Welford, W. T.; Winston, R.
1982-09-01
Upper bounds on the performance of nonimaging concentrators are obtained within the framework of scalar-wave theory by using a simple approach to avoid complex calculations on multiple phase fronts. The approach consists in treating a theoretically perfect image-forming device and postulating that no non-image-forming concentrator can have a better performance than such an ideal image-forming system. The performance of such a system can be calculated according to wave theory, and this will provide, in accordance with the postulate, upper bounds on the performance of nonimaging systems. The method is demonstrated for a two-dimensional compound parabolic concentrator.
Sun, Wei; Chou, Chih-Ping; Stacy, Alan W; Ma, Huiyan; Unger, Jennifer; Gallaher, Peggy
2007-02-01
Cronbach's α is widely used in social science research to estimate the internal consistency reliability of a measurement scale. However, when items are not strictly parallel, the Cronbach's α coefficient provides a lower-bound estimate of true reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach's α for a scale with dichotomous items can be improved by using the upper bound of coefficient phi. SAS and SPSS macros have been developed in this article to obtain standardized Cronbach's α via this method. The simulation analysis showed that Cronbach's α from upper-bound phi might be appropriate for estimating the real reliability when standardized Cronbach's α is problematic.
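To make the correction concrete, here is a minimal Python sketch of the idea as we read it (not the authors' SAS/SPSS macros): divide each inter-item phi coefficient by its maximum attainable value given the item marginals, then compute standardized α from the corrected mean correlation. The data-generating step and all names are illustrative.

```python
import numpy as np

def phi_max(p_i, p_j):
    """Upper bound of the phi coefficient for two dichotomous items with
    endorsement proportions p_i <= p_j (standard psychometric result)."""
    p_i, p_j = min(p_i, p_j), max(p_i, p_j)
    return np.sqrt(p_i * (1 - p_j) / (p_j * (1 - p_i)))

def alpha_from_upper_bound_phi(X):
    """Standardized Cronbach's alpha from phi / phi-max corrected correlations.
    X: (n_subjects, k_items) binary 0/1 array."""
    k = X.shape[1]
    p = X.mean(axis=0)
    r = np.corrcoef(X, rowvar=False)          # Pearson r on 0/1 data = phi
    corrected = [r[i, j] / phi_max(p[i], p[j])
                 for i in range(k) for j in range(i + 1, k)]
    r_bar = np.mean(corrected)
    return k * r_bar / (1 + (k - 1) * r_bar)  # standardized alpha formula

# Illustrative dichotomous data driven by one latent trait.
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
X = (latent[:, None] + rng.normal(size=(500, 4)) > rng.normal(0.3, 0.2, size=4)).astype(int)
print(f"corrected standardized alpha: {alpha_from_upper_bound_phi(X):.3f}")
```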
A tight upper bound for quadratic knapsack problems in grid-based wind farm layout optimization
NASA Astrophysics Data System (ADS)
Quan, Ning; Kim, Harrison M.
2018-03-01
The 0-1 quadratic knapsack problem (QKP) in wind farm layout optimization models possible turbine locations as nodes, and power loss due to wake effects between pairs of turbines as edges in a complete graph. The goal is to select up to a certain number of turbine locations such that the sum of selected node and edge coefficients is maximized. Finding the optimal solution to the QKP is difficult in general, but it is possible to obtain a tight upper bound on the QKP's optimal value which facilitates the use of heuristics to solve QKPs by giving a good estimate of the optimality gap of any feasible solution. This article applies an upper bound method that is especially well-suited to QKPs in wind farm layout optimization due to certain features of the formulation that reduce the computational complexity of calculating the upper bound. The usefulness of the upper bound was demonstrated by assessing the performance of the greedy algorithm for solving QKPs in wind farm layout optimization. The results show that the greedy algorithm produces good solutions within 4% of the optimal value for small to medium sized problems considered in this article.
Exact lower and upper bounds on stationary moments in stochastic biochemical systems
NASA Astrophysics Data System (ADS)
Ghusinga, Khem Raj; Vargas-Garcia, Cesar A.; Lamperski, Andrew; Singh, Abhyudai
2017-08-01
In the stochastic description of biochemical reaction systems, the time evolution of statistical moments for species population counts is described by a linear dynamical system. However, except for some ideal cases (such as zero- and first-order reaction kinetics), the moment dynamics is underdetermined as lower-order moments depend upon higher-order moments. Here, we propose a novel method to find exact lower and upper bounds on stationary moments for a given arbitrary system of biochemical reactions. The method exploits the fact that statistical moments of any positive-valued random variable must satisfy some constraints that are compactly represented through the positive semidefiniteness of moment matrices. Our analysis shows that solving moment equations at steady state in conjunction with constraints on moment matrices provides exact lower and upper bounds on the moments. These results are illustrated by three different examples—the commonly used logistic growth model, stochastic gene expression with auto-regulation and an activator-repressor gene network motif. Interestingly, in all cases the accuracy of the bounds is shown to improve as moment equations are expanded to include higher-order moments. Our results provide avenues for development of approximation methods that provide explicit bounds on moments for nonlinear stochastic systems that are otherwise analytically intractable.
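A toy instance of the semidefinite idea, assuming the cvxpy package and an illustrative reaction system (birth 0 -> X at rate lam, pair annihilation X + X -> 0 at rate c) that is not from the paper: the stationary moment equation alone is underdetermined, and positive semidefiniteness of the 2x2 moment matrix (equivalent here to m2 >= m1^2) closes it into computable bounds. Higher-order relaxations would add larger PSD blocks.

```python
import cvxpy as cp

lam, c = 10.0, 0.5           # illustrative rates for 0 -> X and X + X -> 0
m1, m2 = cp.Variable(), cp.Variable()

constraints = [
    m2 - m1 == lam / (2 * c),  # stationary equation: d<X>/dt = lam - 2c<X(X-1)> = 0
    cp.square(m1) <= m2,       # PSD of the 2x2 moment matrix [[1, m1], [m1, m2]]
    m1 >= 0,
]

upper = cp.Problem(cp.Maximize(m1), constraints).solve()
lower = cp.Problem(cp.Minimize(m1), constraints).solve()
print(f"stationary mean is bounded in [{lower:.4f}, {upper:.4f}]")
```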
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
Upper bounds on high speed satellite collision probability, P_C, have been investigated. Previous methods assume that an individual position error covariance matrix is available for each object, the two matrices being combined into a single relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum P_C. If error covariance information is available for only one of the two objects, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but potentially useful P_C upper bound.
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest to use kernel k-means sampling, which is shown in our works to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both Gaussian and polynomial kernels, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
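A minimal sketch of the sampling comparison, using input-space k-means as a stand-in for kernel k-means (for the Gaussian kernel the two are closely related) and the standard Nyström formula K ~= C W^+ C^T; the data, gamma, and landmark count are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
K = rbf_kernel(X, gamma=0.5)                 # full Gaussian kernel matrix

def nystrom(X, landmarks, gamma=0.5):
    """Standard Nystrom approximation K ~= C W^+ C^T from landmark points."""
    C = rbf_kernel(X, landmarks, gamma=gamma)
    W = rbf_kernel(landmarks, landmarks, gamma=gamma)
    return C @ np.linalg.pinv(W) @ C.T

m = 20
centers = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_
uniform = X[rng.choice(len(X), m, replace=False)]

for name, L in [("k-means centers", centers), ("uniform sampling", uniform)]:
    err = np.linalg.norm(K - nystrom(X, L), "fro")
    print(f"{name}: Frobenius error {err:.3f}")
```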
Noisy metrology: a saturable lower bound on quantum Fisher information
NASA Astrophysics Data System (ADS)
Yousefjani, R.; Salimi, S.; Khorashad, A. S.
2017-06-01
In order to provide a guaranteed precision and a more accurate judgement about the true value of the Cramér-Rao bound and its scaling behavior, an upper bound (equivalently a lower bound on the quantum Fisher information) for precision of estimation is introduced. Unlike the bounds previously introduced in the literature, the upper bound is saturable and yields a practical instruction to estimate the parameter through preparing the optimal initial state and optimal measurement. The bound is based on the underlying dynamics, and its calculation is straightforward and requires only the matrix representation of the quantum maps responsible for encoding the parameter. This allows us to apply the bound to open quantum systems whose dynamics are described by either semigroup or non-semigroup maps. Reliability and efficiency of the method to predict the ultimate precision limit are demonstrated by three main examples.
Solving Open Job-Shop Scheduling Problems by SAT Encoding
NASA Astrophysics Data System (ADS)
Koshimura, Miyuki; Nabeshima, Hidetomo; Fujita, Hiroshi; Hasegawa, Ryuzo
This paper tries to solve open Job-Shop Scheduling Problems (JSSP) by translating them into Boolean Satisfiability Testing Problems (SAT). The encoding method is essentially the same as the one proposed by Crawford and Baker. The open problems are ABZ8, ABZ9, YN1, YN2, YN3, and YN4. We proved that the best known upper bounds 678 of ABZ9 and 884 of YN1 are indeed optimal. We also improved the upper bound of YN2 and lower bounds of ABZ8, YN2, YN3 and YN4.
Van Holle, Lionel; Bauchau, Vincent
2014-01-01
Purpose: For disproportionality measures based on the Relative Reporting Ratio (RRR), such as the Information Component (IC) and the Empirical Bayesian Geometrical Mean (EBGM), each product and event is assumed to represent a negligible fraction of the spontaneous report database (SRD). Here, we provide the tools for allowing signal detection experts to assess the consequence of the violation of this assumption on their specific SRD. Methods: For each product–event pair (P–E), a worst-case scenario associated all the reported events of interest with the product of interest. The values of the RRR under this scenario were measured for different sets of stratification factors using the GlaxoSmithKline vaccines SRD. These values represent an upper bound that the RRR cannot exceed, whatever the true strength of association. Results: Depending on the choice of stratification factors, the RRR could not exceed an upper bound of 2 for up to 2.4% of the P–Es. For Engerix™, which accounts for 23.4% of all reports in the SRD, the RRR could not exceed an upper bound of 2 for up to 13.8% of pairs. For the P–E Rotarix™–intussusception, the choice of stratification factors impacted the upper bound on the RRR: from 52.5 for an unstratified RRR to 2.0 for a fully stratified RRR. Conclusions: The quantification of the upper bound can indicate whether measures such as EBGM, IC, or RRR can be used for SRDs in which products or events represent a non-negligible fraction of the entire database. In addition, at the level of the product or P–E, it can also highlight the detrimental impact of overstratification. © 2014 The Authors. Pharmacoepidemiology and Drug Safety published by John Wiley & Sons, Ltd. PMID:24395594
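The worst-case construction reduces to one line: if every report of the event of interest is assigned to the product of interest, then n_PE = min(n_P, n_E), so RRR = n_PE * N / (n_P * n_E) <= N / max(n_P, n_E). A hedged sketch with illustrative counts (not the paper's data):

```python
def rrr_upper_bound(n_product, n_event, n_total):
    """Worst-case Relative Reporting Ratio for a product-event pair:
    with n_PE = min(n_P, n_E),
        RRR = n_PE * N / (n_P * n_E) <= N / max(n_P, n_E)."""
    return n_total / max(n_product, n_event)

# A product holding 23.4% of a 1,000,000-report database can never reach
# RRR above ~4.3, however strong the true association (numbers illustrative).
print(rrr_upper_bound(n_product=234_000, n_event=5_000, n_total=1_000_000))
```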
Perturbative unitarity constraints on gauge portals
NASA Astrophysics Data System (ADS)
El Hedri, Sonia; Shepherd, William; Walker, Devin G. E.
2017-12-01
Dark matter that was once in thermal equilibrium with the Standard Model is generally prohibited from obtaining all of its mass from the electroweak phase transition. This implies a new scale of physics and mediator particles to facilitate dark matter annihilation. In this work, we focus on dark matter that annihilates through a generic gauge boson portal. We show how partial wave unitarity places upper bounds on the dark gauge boson, dark Higgs and dark matter masses. Outside of well-defined fine-tuned regions, we find an upper bound of 9 TeV for the dark matter mass when the dark Higgs and dark gauge bosons both facilitate the dark matter annihilations. In this scenario, the upper bounds on the dark Higgs and dark gauge boson masses are 10 TeV and 16 TeV, respectively. When only the dark gauge boson facilitates dark matter annihilations, we find an upper bound of 3 TeV and 6 TeV for the dark matter and dark gauge boson, respectively. Overall, using the gauge portal as a template, we describe a method to not only place upper bounds on the dark matter mass but also on the new particles with Standard Model quantum numbers. We briefly discuss the reach of future accelerator, direct and indirect detection experiments for this class of models.
Heskes, Tom; Eisinga, Rob; Breitling, Rainer
2014-11-21
The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Both exact calculation and permutation and gamma approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks, as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value, along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy than existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis on transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order of magnitude increase in throughput as compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip .
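For orientation, here is the gamma approximation that the abstract contrasts with (not the authors' exact bounds): under the null, each rank divided by the number of molecules is approximately Uniform(0,1], so minus the log of the rank product is approximately Gamma(k, 1) across k replicates. A minimal sketch:

```python
import numpy as np
from scipy import stats

def rank_product_pvalue_gamma(ranks, n):
    """Gamma approximation to the rank product null distribution.
    ranks: the molecule's rank in each of k replicates (1 = best);
    n: number of molecules.  Under the null, -log(prod r_i/n) ~ Gamma(k, 1)."""
    ranks = np.asarray(ranks, dtype=float)
    k = len(ranks)
    log_rp = np.sum(np.log(ranks / n))        # log of the rank product
    return stats.gamma.sf(-log_rp, a=k)       # P(RP <= observed) under the null

print(rank_product_pvalue_gamma([3, 7, 2, 5], n=2000))
```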
NASA Astrophysics Data System (ADS)
Thelen, Brian J.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.
2017-04-01
In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as Chernoff, Bhattacharyya, and J-divergence. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general, what is desired is a methodology for generating relatively tight lower and upper bounds, and then an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has been around for a while but was recently made more prominent in research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data, and to help in this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on Bayes error. In this paper we present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, often dramatically so. We also propose some simple but effective means for computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
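A simple Monte Carlo estimator of the triangle divergence TD(p, q) = integral (p - q)^2 / (p + q) dx can be built by sampling from the equal-weight mixture m = (p + q)/2, under which TD = E_m[2 ((p - q)/(p + q))^2]. The sketch below uses univariate Gaussians for clarity; the paper works with multivariate Gaussian copula models.

```python
import numpy as np
from scipy import stats

def triangle_divergence_mc(p, q, n=200_000, rng=None):
    """Monte Carlo estimate of TD(p, q) = integral (p - q)^2 / (p + q) dx,
    sampling from the mixture m = (p + q)/2 so the integrand becomes
    2 * ((p - q)/(p + q))^2.  p, q: frozen scipy.stats distributions."""
    rng = rng or np.random.default_rng(0)
    n_p = rng.binomial(n, 0.5)                # mixture: half from p, half from q
    x = np.concatenate([p.rvs(n_p, random_state=rng),
                        q.rvs(n - n_p, random_state=rng)])
    fp, fq = p.pdf(x), q.pdf(x)
    return np.mean(2 * ((fp - fq) / (fp + fq)) ** 2)

td = triangle_divergence_mc(stats.norm(0, 1), stats.norm(1, 1))
print(f"estimated triangle divergence: {td:.4f}")
```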
Veeraraghavan, Srikant; Mazziotti, David A
2014-03-28
We present a density matrix approach for computing global solutions of restricted open-shell Hartree-Fock theory, based on semidefinite programming (SDP), that gives upper and lower bounds on the Hartree-Fock energy of quantum systems. While wave function approaches to Hartree-Fock theory yield an upper bound to the Hartree-Fock energy, we derive a semidefinite relaxation of Hartree-Fock theory that yields a rigorous lower bound on the Hartree-Fock energy. We also develop an upper-bound algorithm in which Hartree-Fock theory is cast as a SDP with a nonconvex constraint on the rank of the matrix variable. Equality of the upper- and lower-bound energies guarantees that the computed solution is the globally optimal solution of Hartree-Fock theory. The work extends a previously presented method for closed-shell systems [S. Veeraraghavan and D. A. Mazziotti, Phys. Rev. A 89, 010502(R) (2014)]. For strongly correlated systems the SDP approach provides an alternative to the locally optimized Hartree-Fock energies and densities with a certificate of global optimality. Applications are made to the potential energy curves of C2, CN, Cr2, and NO2.
Estimates on Functional Integrals of Quantum Mechanics and Non-relativistic Quantum Field Theory
NASA Astrophysics Data System (ADS)
Bley, Gonzalo A.; Thomas, Lawrence E.
2017-01-01
We provide a unified method for obtaining upper bounds for certain functional integrals appearing in quantum mechanics and non-relativistic quantum field theory, functionals of the form E[exp(A_T)], the (effective) action A_T being a function of particle trajectories up to time T. The estimates in turn yield rigorous lower bounds for ground state energies, via the Feynman-Kac formula. The upper bounds are obtained by writing the action for these functional integrals in terms of stochastic integrals. The method is illustrated in familiar quantum mechanical settings: for the hydrogen atom, for a Schrödinger operator with 1/|x|^2 potential with small coupling, and, with a modest adaptation of the method, for the harmonic oscillator. We then present our principal applications of the method, in the settings of non-relativistic quantum field theories for particles moving in a quantized Bose field, including the optical polaron and Nelson models.
Multivariate Lipschitz optimization: Survey and computational comparison
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, P.; Gourdin, E.; Jaumard, B.
1994-12-31
Many methods have been proposed to minimize a multivariate Lipschitz function on a box. They pertain to three approaches: (i) reduction to the univariate case by projection (Pijavskii) or by using a space-filling curve (Strongin); (ii) construction and refinement of a single upper bounding function (Pijavskii, Mladineo, Mayne and Polak, Jaumard, Hermann and Ribault, Wood...); (iii) branch and bound with local upper bounding functions (Galperin, Pintér, Meewella and Mayne, the present authors). A survey is made, stressing similarities of algorithms, expressed when possible within a unified framework. Moreover, an extensive computational comparison is reported.
Efficient Regressions via Optimally Combining Quantile Information
Zhao, Zhibiao; Xiao, Zhijie
2014-01-01
We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481
Computer search for binary cyclic UEP codes of odd length up to 65
NASA Technical Reports Server (NTRS)
Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu
1990-01-01
Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.
Computing an upper bound on contact stress with surrogate duality
NASA Astrophysics Data System (ADS)
Xuan, Zhaocheng; Papadopoulos, Panayiotis
2016-07-01
We present a method for computing an upper bound on the contact stress of elastic bodies. The continuum model of elastic bodies with contact is first modeled as a constrained optimization problem by using finite elements. An explicit formulation of the total contact force, a fraction function with the numerator as a linear function and the denominator as a quadratic convex function, is derived with only the normalized nodal contact forces as the constrained variables in a standard simplex. Then two bounds are obtained for the sum of the nodal contact forces. The first is an explicit formulation of matrices of the finite element model, derived by maximizing the fraction function under the constraint that the sum of the normalized nodal contact forces is one. The second bound is solved by first maximizing the fraction function subject to the standard simplex and then using Dinkelbach's algorithm for fractional programming to find the maximum—since the fraction function is pseudo concave in a neighborhood of the solution. These two bounds are solved with the problem dimensions being only the number of contact nodes or node pairs, which are much smaller than the dimension for the original problem, namely, the number of degrees of freedom. Next, a scheme for constructing an upper bound on the contact stress is proposed that uses the bounds on the sum of the nodal contact forces obtained on a fine finite element mesh and the nodal contact forces obtained on a coarse finite element mesh, which are problems that can be solved at a lower computational cost. Finally, the proposed method is verified through some examples concerning both frictionless and frictional contact to demonstrate the method's feasibility, efficiency, and robustness.
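Dinkelbach's algorithm, mentioned above for the fractional bound, alternates between solving a parametric problem and updating the ratio. A generic one-dimensional sketch (with an illustrative numerator and denominator echoing the linear-over-convex-quadratic structure, not the paper's finite element formulation):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dinkelbach(num, den, x0=0.0, tol=1e-10, max_iter=50):
    """Dinkelbach's algorithm for max_x num(x)/den(x) with den > 0:
    solve the parametric problem F(lam) = max_x [num(x) - lam*den(x)],
    update lam with the ratio at the maximizer, stop when F(lam) ~ 0."""
    lam = num(x0) / den(x0)
    x_star = x0
    for _ in range(max_iter):
        res = minimize_scalar(lambda x: -(num(x) - lam * den(x)))
        x_star = res.x
        if abs(num(x_star) - lam * den(x_star)) < tol:
            break
        lam = num(x_star) / den(x_star)
    return x_star, lam

# Toy fraction: linear numerator over a convex quadratic denominator.
num = lambda x: 2.0 * x + 1.0
den = lambda x: x**2 + x + 1.0
x_star, val = dinkelbach(num, den)
print(f"maximizer ~ {x_star:.4f}, maximum ratio ~ {val:.4f}")
```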
Upper bounds on superpartner masses from upper bounds on the Higgs boson mass.
Cabrera, M E; Casas, J A; Delgado, A
2012-01-13
The LHC is putting bounds on the Higgs boson mass. In this Letter we use those bounds to constrain the minimal supersymmetric standard model (MSSM) parameter space using the fact that, in supersymmetry, the Higgs mass is a function of the masses of sparticles, and therefore an upper bound on the Higgs mass translates into upper bounds on the masses of superpartners. We show that, although current bounds do not constrain the MSSM parameter space from above, once the Higgs mass bound improves, big regions of this parameter space will be excluded, putting upper bounds on supersymmetry (SUSY) masses. On the other hand, for the case of split-SUSY we show that, for moderate or large tanβ, the present bounds on the Higgs mass imply that the common mass for scalars cannot be greater than 10^11 GeV. We show how these bounds will evolve as the LHC continues to improve the limits on the Higgs mass.
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
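The matrix measure itself is elementary: for the 2-norm, mu_2(A) is the largest eigenvalue of (A + A^T)/2. The sketch below computes it and checks the classical delay-independent sufficient condition mu_2(A) + ||B||_2 < 0 for x'(t) = A x(t) + B x(t - tau); this is a cruder test than the paper's time delay margin estimate, and the matrices are illustrative.

```python
import numpy as np

def matrix_measure_2(A):
    """Matrix measure (logarithmic norm) induced by the 2-norm:
    mu_2(A) = largest eigenvalue of (A + A^T) / 2."""
    return np.linalg.eigvalsh((A + A.T) / 2).max()

# Delay-independent sufficient condition for x'(t) = A x(t) + B x(t - tau):
# if mu_2(A) + ||B||_2 < 0, the system is stable for every delay tau >= 0.
A = np.array([[-3.0, 1.0], [0.0, -2.0]])
B = np.array([[0.5, 0.0], [0.2, 0.5]])
mu, nB = matrix_measure_2(A), np.linalg.norm(B, 2)
print(f"mu_2(A) = {mu:.3f}, ||B|| = {nB:.3f}, stable for all delays: {mu + nB < 0}")
```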
Upper and lower bounds for the speed of pulled fronts with a cut-off
NASA Astrophysics Data System (ADS)
Benguria, R. D.; Depassier, M. C.; Loss, M.
2008-02-01
We establish rigorous upper and lower bounds for the speed of pulled fronts with a cut-off. For all reaction terms of KPP type a simple analytic upper bound is given. The lower bounds however depend on details of the reaction term. For a small cut-off parameter the two leading order terms in the asymptotic expansion of the upper and lower bounds coincide and correspond to the Brunet-Derrida formula. For large cut-off parameters the bounds do not coincide and permit a simple estimation of the speed of the front.
A Reduced Basis Method with Exact-Solution Certificates for Symmetric Coercive Equations
2013-11-06
...the energy associated with the infinite-dimensional weak solution of parametrized symmetric coercive partial differential equations with piecewise... builds bounds with respect to the infinite-dimensional weak solution, aims to entirely remove the issue of the "truth" within the certified reduced basis... framework. We in particular introduce a reduced basis method that provides rigorous upper and lower bounds.
New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Rodemich, E. R.; Rumsey, H., Jr.; Welch, L. R.
1977-01-01
An upper bound on the rate of a binary code as a function of minimum code distance (using a Hamming code metric) is arrived at from Delsarte-MacWilliams inequalities. The upper bound so found is asymptotically less than Levenshtein's bound, and a fortiori less than Elias' bound. Appendices review properties of Krawtchouk polynomials and Q-polynomials utilized in the rigorous proofs.
Dwell time-based stabilisation of switched delay systems using free-weighting matrices
NASA Astrophysics Data System (ADS)
Koru, Ahmet Taha; Delibaşı, Akın; Özbay, Hitay
2018-01-01
In this paper, we present a quasi-convex optimisation method to minimise an upper bound of the dwell time for stability of switched delay systems. Piecewise Lyapunov-Krasovskii functionals are introduced, and the upper bound for the derivative of the Lyapunov functionals is estimated by the free-weighting matrices method to investigate the non-switching stability of each candidate subsystem. Then, a sufficient condition on the dwell time is derived to guarantee the asymptotic stability of the switched delay system. Once these conditions are represented by a set of linear matrix inequalities, the dwell time optimisation problem can be formulated as a standard quasi-convex optimisation problem. Numerical examples are given to illustrate the improvements over previously obtained dwell time bounds. Using the results obtained in the stability case, we present a nonlinear minimisation algorithm to synthesise the dwell time minimiser controllers. The algorithm solves the problem with successive linearisation of the nonlinear conditions.
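For intuition, the classical Lyapunov-based dwell-time bound for a delay-free switched linear system (not the paper's free-weighting-matrix method for delay systems) can be computed directly; the matrices are illustrative:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def dwell_time_bound(As):
    """Classical dwell-time bound for x' = A_sigma x: solve
    A_i^T P_i + P_i A_i = -I, then any dwell time above ln(mu)/alpha
    guarantees asymptotic stability, where mu bounds the jump of the
    Lyapunov function at switches and alpha its decay between switches."""
    Ps = [solve_continuous_lyapunov(A.T, -np.eye(A.shape[0])) for A in As]
    lmax = [np.linalg.eigvalsh(P).max() for P in Ps]
    lmin = [np.linalg.eigvalsh(P).min() for P in Ps]
    alpha = min(1.0 / lx for lx in lmax)
    mu = max(lmax[j] / lmin[i]
             for i in range(len(As)) for j in range(len(As)) if i != j)
    return np.log(mu) / alpha

A1 = np.array([[-1.0, 2.0], [0.0, -1.0]])
A2 = np.array([[-1.0, 0.0], [2.0, -1.0]])
print(f"stability guaranteed for dwell times above ~{dwell_time_bound([A1, A2]):.2f}")
```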
Beating the photon-number-splitting attack in practical quantum cryptography.
Wang, Xiang-Bin
2005-06-17
We propose an efficient method to verify the upper bound of the fraction of counts caused by multiphoton pulses in practical quantum key distribution using weak coherent light, given whatever type of Eve's action. The protocol simply uses two coherent states for the signal pulses and vacuum for the decoy pulse. Our verified upper bound is sufficiently tight for quantum key distribution with a very lossy channel, in both the asymptotic and nonasymptotic case. So far our protocol is the only decoy-state protocol that works efficiently for currently existing setups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, E.B. Jr.
Various methods for the calculation of lower bounds for eigenvalues are examined, including those of Weinstein, Temple, Bazley and Fox, Gay, and Miller. It is shown how all of these can be derived in a unified manner by the projection technique. The alternate forms obtained for the Gay formula show how a considerably improved method can be readily obtained. Applied to the ground state of the helium atom with a simple screened hydrogenic trial function, this new method gives a lower bound closer to the true energy than the best upper bound obtained with this form of trial function. Possible routes to further improved methods are suggested.
UPPER BOUND RISK ESTIMATES FOR MIXTURES OF CARCINOGENS
The excess cancer risk that might result from exposure to a mixture of chemical carcinogens usually is estimated with data from experiments conducted on individual chemicals. An upper bound on the total excess risk is estimated commonly by summing individual upper bound risk esti...
NASA Astrophysics Data System (ADS)
Ebrahimnejad, Ali
2015-08-01
There are several methods, in the literature, for solving fuzzy variable linear programming problems (fuzzy linear programming in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out and to overcome these shortcomings a new method based on the bounded dual simplex method is proposed to determine the fuzzy optimal solution of that kind of fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. Also, one application of this algorithm in solving bounded transportation problems with fuzzy supplies and demands is dealt with. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.
Computational experience with a parallel algorithm for tetrangle inequality bound smoothing.
Rajan, K; Deo, N
1999-09-01
Determining molecular structure from interatomic distances is an important and challenging problem. Given a molecule with n atoms, lower and upper bounds on interatomic distances can usually be obtained only for a small subset of the n(n-1)/2 atom pairs, using NMR. Given the bounds so obtained on the distances between some of the atom pairs, it is often useful to compute tighter bounds on all the n(n-1)/2 pairwise distances. This process is referred to as bound smoothing. The initial lower and upper bounds for the pairwise distances not measured are usually assumed to be 0 and infinity. One method for bound smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality--the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. For every quadruple of atoms, each pass of the tetrangle inequality bound smoothing procedure finds upper and lower limits on each of the six distances in the quadruple. Applying the tetrangle inequalities to each of the C(n,4) quadruples requires O(n^4) time. Here, we propose a parallel algorithm for bound smoothing employing the tetrangle inequality. Each pass of our algorithm requires O(n^3 log n) time on a CREW PRAM (Concurrent Read Exclusive Write Parallel Random Access Machine) with O(n/log n) processors. An implementation of this parallel algorithm on the Intel Paragon XP/S and its performance are also discussed.
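Tetrangle smoothing is the O(n^4) refinement of the simpler triangle-inequality smoothing, which the sketch below implements in its standard serial form (one pass shown; iterating the lower-bound update to convergence tightens further):

```python
import numpy as np

def triangle_bound_smoothing(L, U):
    """Tighten lower/upper interatomic distance bounds with the triangle
    inequality (the O(n^3) precursor of tetrangle smoothing):
        U[i,j] <= U[i,k] + U[k,j]
        L[i,j] >= max(L[i,k] - U[k,j], L[k,j] - U[i,k])
    L, U: (n, n) symmetric arrays; unknown pairs start at 0 and +inf."""
    n = len(U)
    L, U = L.copy(), U.copy()
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if U[i, j] > U[i, k] + U[k, j]:
                    U[i, j] = U[i, k] + U[k, j]
                L[i, j] = max(L[i, j], L[i, k] - U[k, j], L[k, j] - U[i, k])
    return L, U

inf = np.inf
U = np.array([[0, 1.5, inf], [1.5, 0, 1.2], [inf, 1.2, 0]])
L = np.array([[0, 1.0, 0.0], [1.0, 0, 0.9], [0.0, 0.9, 0]])
Ls, Us = triangle_bound_smoothing(L, U)
print(Us)  # the unmeasured pair (0,2) is now bounded above by 1.5 + 1.2
print(Ls)
```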
Upper bound on the slope of steady water waves with small adverse vorticity
NASA Astrophysics Data System (ADS)
So, Seung Wook; Strauss, Walter A.
2018-03-01
We consider the angle of inclination (with respect to the horizontal) of the profile of a steady 2D inviscid symmetric periodic or solitary water wave subject to gravity. There is an upper bound of 31.15° in the irrotational case [1] and an upper bound of 45° in the case of favorable vorticity [13]. On the other hand, if the vorticity is adverse, the profile can become vertical. We prove here that if the adverse vorticity is sufficiently small, then the angle still has an upper bound which is slightly larger than 45°.
The SURE reliability analysis program
NASA Technical Reports Server (NTRS)
Butler, R. W.
1986-01-01
The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
Quantum State Tomography via Linear Regression Estimation
Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan
2013-01-01
A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d^4), where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
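A one-qubit specialization shows the regression idea (an illustration, not the paper's general d-dimensional algorithm): measured expectation values are linear in the unknown Bloch vector, so least squares recovers the state.

```python
import numpy as np

rng = np.random.default_rng(1)
r_true = np.array([0.3, -0.4, 0.5])           # Bloch vector: rho = (I + r.sigma)/2

# Measure spin along several unit directions n_k; each has <outcome> = n_k . r.
dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                 [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

N = 20_000                                    # shots per direction
p_up = (1 + dirs @ r_true) / 2                # P(+1) for each direction
y = 2 * rng.binomial(N, p_up) / N - 1         # empirical expectation values

# Linear regression model y ~ dirs @ r: least squares gives the LRE estimate.
r_hat, *_ = np.linalg.lstsq(dirs, y, rcond=None)
print("estimated Bloch vector:", np.round(r_hat, 3),
      "| physical:", np.linalg.norm(r_hat) <= 1)
```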
Bounds for the price of discrete arithmetic Asian options
NASA Astrophysics Data System (ADS)
Vanmaele, M.; Deelstra, G.; Liinev, J.; Dhaene, J.; Goovaerts, M. J.
2006-01-01
In this paper the pricing of European-style discrete arithmetic Asian options with fixed and floating strike is studied by deriving analytical lower and upper bounds. In our approach we use a general technique for deriving upper (and lower) bounds for stop-loss premiums of sums of dependent random variables, as explained in Kaas et al. (Ins. Math. Econom. 27 (2000) 151-168), and additionally, the ideas of Rogers and Shi (J. Appl. Probab. 32 (1995) 1077-1088) and of Nielsen and Sandmann (J. Financial Quant. Anal. 38(2) (2003) 449-473). We are able to create a unifying framework for European-style discrete arithmetic Asian options through these bounds, that generalizes several approaches in the literature as well as improves the existing results. We obtain analytical and easily computable bounds. The aim of the paper is to formulate an advice of the appropriate choice of the bounds given the parameters, investigate the effect of different conditioning variables and compare their efficiency numerically. Several sets of numerical results are included. We also discuss hedging using these bounds. Moreover, our methods are applicable to a wide range of (pricing) problems involving a sum of dependent random variables.
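The simplest member of this family of bounds follows from the arithmetic-geometric mean inequality: the geometric average never exceeds the arithmetic one, so the geometric Asian payoff prices a lower bound. A Monte Carlo sketch with illustrative parameters (the paper's comonotonic and conditioning bounds are sharper):

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T, n, paths = 100.0, 100.0, 0.05, 0.2, 1.0, 12, 200_000

# Simulate geometric Brownian motion at the n averaging dates.
dt = T / n
z = rng.standard_normal((paths, n))
logS = np.log(S0) + np.cumsum((r - sigma**2 / 2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1)
S = np.exp(logS)

arith = S.mean(axis=1)                        # arithmetic average
geom = np.exp(logS.mean(axis=1))              # geometric average (<= arithmetic)

disc = np.exp(-r * T)
price_arith = disc * np.maximum(arith - K, 0).mean()
price_geom = disc * np.maximum(geom - K, 0).mean()   # AM-GM lower bound
print(f"arithmetic Asian ~ {price_arith:.4f} >= geometric bound {price_geom:.4f}")
```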
NASA Astrophysics Data System (ADS)
Khoo, Geoffrey; Kuennemeyer, Rainer; Claycomb, Rod W.
2005-04-01
Currently, the state of the art of mastitis detection in dairy cows is the laboratory-based measurement of somatic cell count (SCC), which is time consuming and expensive. Alternative, rapid, and reliable on-farm measurement methods are required for effective farm management. We have investigated whether fluorescence lifetime measurements can determine SCC in fresh, unprocessed milk. The method is based on the change in fluorescence lifetime of ethidium bromide when it binds to DNA from the somatic cells. Milk samples were obtained from a Fullwood Merlin Automated Milking System and analysed within a twenty-four hour period, over which the SCC does not change appreciably. For reference, the milk samples were also sent to a testing laboratory where the SCC was determined by traditional methods. The results show that we can quantify SCC using the fluorescence photon migration method from a lower bound of 4×10^5 cells mL^-1 to an upper bound of 1×10^7 cells mL^-1. The upper bound is due to the reference method used, while the cause of the lower bound is not yet known.
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume
1998-01-01
We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.
Fault-tolerant clock synchronization validation methodology. [in computer systems
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.
1987-01-01
A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
A simple method for assessing occupational exposure via the one-way random effects model.
Krishnamoorthy, K; Mathew, Thomas; Peng, Jie
2016-11-01
A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
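The generic MOVER combination for a sum of parameters is short enough to state in code; the recovered-variance formula below is the standard one, and the numbers are illustrative rather than taken from the paper:

```python
import numpy as np

def mover_sum(theta, lower, upper):
    """MOVER (method of variance estimates recovery) confidence limits for
    theta_1 + ... + theta_k from individual limits (l_i, u_i):
        L = sum(theta) - sqrt(sum((theta_i - l_i)^2))
        U = sum(theta) + sqrt(sum((u_i - theta_i)^2))"""
    theta, lower, upper = map(np.asarray, (theta, lower, upper))
    c = theta.sum()
    L = c - np.sqrt(((theta - lower) ** 2).sum())
    U = c + np.sqrt(((upper - theta) ** 2).sum())
    return L, U

# Example: a limit for mu + z_p*sigma (an upper percentile of a lognormal
# exposure distribution) built from separate limits on mu and z_p*sigma.
print(mover_sum(theta=[1.2, 0.8], lower=[1.0, 0.6], upper=[1.4, 1.1]))
```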
FACTORING TO FIT OFF DIAGONALS.
imply an upper bound on the number of factors. When applied to somatotype data, the method improved substantially on centroid solutions and indicated a reinterpretation of earlier factoring studies. (Author)
Calculations of reliability predictions for the Apollo spacecraft
NASA Technical Reports Server (NTRS)
Amstadter, B. L.
1966-01-01
A new method of reliability prediction for complex systems is defined. Calculation of both upper and lower bounds is involved, and a procedure for combining the two to yield an approximately true prediction value is presented. Both mission success and crew safety predictions can be calculated, and success probabilities can be obtained for individual mission phases or subsystems. Primary consideration is given to evaluating cases involving zero or one failure per subsystem, and the results of these evaluations are then used for analyzing multiple-failure cases. Extensive development is provided for the overall mission success and crew safety equations for both the upper and lower bounds.
Toward allocative efficiency in the prescription drug industry.
Guell, R C; Fischbaum, M
1995-01-01
Traditionally, monopoly power in the pharmaceutical industry has been measured by profits. An alternative method estimates the deadweight loss of consumer surplus associated with the exercise of monopoly power. Although upper and lower bound estimates for this inefficiency are far apart, they at least suggest a dramatically greater welfare loss than measures of industry profitability would imply. A proposed system would have the U.S. government employing its power of eminent domain to "take" and distribute pharmaceutical patents, providing as "just compensation" the present value of the patent's expected future monopoly profits. Given the allocative inefficiency of raising taxes to pay for the program, the impact of the proposal on allocative efficiency would be at least as good at our lower bound estimate of monopoly costs while substantially improving efficiency at or near our upper bound estimate.
Tight upper bound for the maximal quantum value of the Svetlichny operators
NASA Astrophysics Data System (ADS)
Li, Ming; Shen, Shuqian; Jing, Naihuan; Fei, Shao-Ming; Li-Jost, Xianqing
2017-10-01
It is a challenging task to detect genuine multipartite nonlocality (GMNL). In this paper, the problem is considered via computing the maximal quantum value of Svetlichny operators for three-qubit systems and a tight upper bound is obtained. The constraints on the quantum states for the tightness of the bound are also presented. The approach enables us to give the necessary and sufficient conditions of violating the Svetlichny inequality (SI) for several quantum states, including the white and color noised Greenberger-Horne-Zeilinger (GHZ) states. The relation between the genuine multipartite entanglement concurrence and the maximal quantum value of the Svetlichny operators for mixed GHZ class states is also discussed. As the SI is useful for the investigation of GMNL, our results give an effective and operational method to detect the GMNL for three-qubit mixed states.
Paul L. Patterson; Mark Finco
2011-01-01
This paper explores the information forest inventory data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977)....
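The Bernoulli reduction has a closed form in the zero-detection case: if none of n plots fall in the forest type, the exact one-sided upper confidence bound solves (1 - p)^n = alpha. A sketch (consistent with Cochran-style exact binomial bounds; the paper's full equations may differ):

```python
def upper_bound_not_sampled(n, alpha=0.05):
    """Exact one-sided upper confidence bound for a Bernoulli proportion
    when 0 of n sampled plots fall in the forest type of interest:
    the largest p with (1 - p)^n >= alpha, i.e. p_u = 1 - alpha**(1/n)."""
    return 1 - alpha ** (1.0 / n)

# With 300 plots and no detections, the type occupies at most ~1% of the
# area at 95% confidence.
print(f"{upper_bound_not_sampled(300):.4f}")
```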
"Carbon Credits" for Resource-Bounded Computations Using Amortised Analysis
NASA Astrophysics Data System (ADS)
Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin
Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Toomey, Bridget
Evolving power systems with increasing levels of stochasticity call for a need to solve optimal power flow problems with large quantities of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
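Boole's inequality is the baseline that the paper tightens: the probability that any of several constraints is violated is at most the sum of the individual violation probabilities, so enforcing each constraint at level eps/n guarantees the joint level eps. A minimal Monte Carlo illustration, with assumed voltage limits and noise levels that are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: voltage magnitudes at 3 buses under random PV injections.
n_constraints, n_scenarios, eps = 3, 100_000, 0.05
v = 1.0 + 0.02 * rng.standard_normal((n_scenarios, n_constraints))  # p.u. voltages
v_max = 1.05

violations = v > v_max                   # per-bus overvoltage events
p_single = violations.mean(axis=0)       # marginal violation probabilities
p_joint = violations.any(axis=1).mean()  # empirical joint violation probability

# Boole's inequality: P(any violation) <= sum of the marginal probabilities.
print(p_joint, p_single.sum())           # the union bound is conservative
# Risk allocation: enforcing each bus at eps/n guarantees the joint level eps.
print(eps / n_constraints)
```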
Joseph Buongiorno; Mo Zhou; Craig Johnston
2017-01-01
Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value. The other method used the certainty...
Paul L. Patterson; Mark Finco
2009-01-01
This paper explores the information FIA data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977). Examples are...
Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Ulbrich, N.; L'Esperance, A.
2017-01-01
A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.
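The core of such an estimate is ordinary error propagation from balance output noise to the drag coefficient CD = D/(qS). The sketch below illustrates the idea with assumed sensitivities, tunnel conditions, and reference area; the paper's actual computation uses the full set of partial derivatives with respect to all balance outputs, so treat this as a toy version:

```python
import math

# Illustrative inputs (assumed values, not taken from the paper):
S = 4.0           # model reference area, ft^2
q = 150.0         # dynamic pressure, psf
alpha_deg = 2.0   # assumed angle of attack

# Assumed balance gage sensitivities, microV/V per lbf of applied load:
sens_axial, sens_normal = 5.0, 1.0
dV = 1.0                              # empirical output variation, microV/V

dA = dV / sens_axial                  # axial-force error, lbf
dN = dV / sens_normal                 # normal-force error, lbf

a = math.radians(alpha_deg)
# Drag from body-axis loads: D = A*cos(alpha) + N*sin(alpha);
# worst-case (upper-bound) combination of the two load errors:
dD = abs(math.cos(a)) * dA + abs(math.sin(a)) * dN
dCD = dD / (q * S)
print(f"upper bound on drag-coefficient precision error: {dCD:.6f}")
```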
Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B
2014-09-15
Measurement-device-independent quantum key distribution (MDI-QKD) with decoy-state method is believed to be securely applied to defeat various hacking attacks in practical quantum key distribution systems. Recently, the coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. Here, in this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound of yield and the upper bound of bit error rate. We exploit the standard statistical analysis and Chernoff bound to perform the parameter estimation. Chernoff bound can provide good bounds in the long-distance MDI-QKD. Our results show that with CSS, both the security transmission distance and secure key rate are significantly improved compared with those of the weak coherent states in the finite-data case.
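The Chernoff-bound step of such a parameter estimation can be sketched as follows: treat an observed count as a sum of independent Bernoulli trials and invert the standard multiplicative tail bounds to bracket its expectation. This is a textbook variant for illustration, not the Letter's exact formulas:

```python
import math

def _bisect(f, lo, hi, tol=1e-9):
    """Root of an increasing function f on [lo, hi] (f(lo) < 0 < f(hi))."""
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def chernoff_interval(x: float, eps: float = 1e-10):
    """Bounds [mu_low, mu_up] on the mean mu of a sum of independent Bernoulli
    trials from one observation x, using the multiplicative Chernoff tails
    P[X >= (1+d)mu] <= exp(-d^2 mu/(2+d)) and P[X <= (1-d)mu] <= exp(-d^2 mu/2)."""
    log_eps = math.log(eps)
    # Upper tail with x = (1+d)*mu gives exp(-(x-mu)^2/(x+mu)) = eps, mu < x.
    f_low = lambda mu: -((x - mu) ** 2) / (x + mu) - log_eps
    # Lower tail with x = (1-d)*mu gives exp(-(mu-x)^2/(2mu)) = eps, mu > x.
    f_up = lambda mu: ((mu - x) ** 2) / (2 * mu) + log_eps
    mu_low = _bisect(f_low, 1e-12, x) if x > 0 else 0.0
    mu_up = _bisect(f_up, x + 1e-12, 10 * x + 100 * abs(log_eps))
    return mu_low, mu_up

# e.g. yield counts observed in a finite-size MDI-QKD session:
print(chernoff_interval(10_000))
```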
Upper bounds on secret-key agreement over lossy thermal bosonic channels
NASA Astrophysics Data System (ADS)
Kaur, Eneet; Wilde, Mark M.
2017-12-01
Upper bounds on the secret-key-agreement capacity of a quantum channel serve as a way to assess the performance of practical quantum-key-distribution protocols conducted over that channel. In particular, if a protocol employs a quantum repeater, achieving secret-key rates exceeding these upper bounds is evidence of having a working quantum repeater. In this paper, we extend a recent advance [Liuzzo-Scorpo et al., Phys. Rev. Lett. 119, 120503 (2017), 10.1103/PhysRevLett.119.120503] in the theory of the teleportation simulation of single-mode phase-insensitive Gaussian channels such that it now applies to the relative entropy of entanglement measure. As a consequence of this extension, we find tighter upper bounds on the nonasymptotic secret-key-agreement capacity of the lossy thermal bosonic channel than were previously known. The lossy thermal bosonic channel serves as a more realistic model of communication than the pure-loss bosonic channel, because it can model the effects of eavesdropper tampering and imperfect detectors. An implication of our result is that the previously known upper bounds on the secret-key-agreement capacity of the thermal channel are too pessimistic for the practical finite-size regime in which the channel is used a finite number of times, and so it should now be somewhat easier to witness a working quantum repeater when using secret-key-agreement capacity upper bounds as a benchmark.
SURE reliability analysis: Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; White, Allan L.
1988-01-01
The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
Energy Bounds for a Compressed Elastic Film on a Substrate
NASA Astrophysics Data System (ADS)
Bourne, David P.; Conti, Sergio; Müller, Stefan
2017-04-01
We study pattern formation in a compressed elastic film which delaminates from a substrate. Our key tool is the determination of rigorous upper and lower bounds on the minimum value of a suitable energy functional. The energy consists of two parts, describing the two main physical effects. The first part represents the elastic energy of the film, which is approximated using the von Kármán plate theory. The second part represents the fracture or delamination energy, which is approximated using the Griffith model of fracture. A simpler model containing the first term alone was previously studied with similar methods by several authors, assuming that the delaminated region is fixed. We include the fracture term, transforming the elastic minimisation into a free boundary problem, and opening the way for patterns which result from the interplay of elasticity and delamination. After rescaling, the energy depends on only two parameters: the rescaled film thickness, σ, and a measure of the bonding strength between the film and substrate, γ. We prove upper bounds on the minimum energy of the form σ^a γ^b and find that there are four different parameter regimes corresponding to different values of a and b and to different folding patterns of the film. In some cases, the upper bounds are attained by self-similar folding patterns as observed in experiments. Moreover, for two of the four parameter regimes we prove matching, optimal lower bounds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk; Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent; Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ₁, …, σ_r. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ₁, …, σ_r), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences min_{j<k} C(σ_j, σ_k).
Upper bounds on sequential decoding performance parameters
NASA Technical Reports Server (NTRS)
Jelinek, F.
1974-01-01
This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.
How entangled can a multi-party system possibly be?
NASA Astrophysics Data System (ADS)
Qi, Liqun; Zhang, Guofeng; Ni, Guyan
2018-06-01
The geometric measure of entanglement of a pure quantum state is defined to be its distance to the space of pure product (separable) states. Given an n-partite system composed of subsystems of dimensions d1, …, dn, an upper bound for maximally allowable entanglement is derived in terms of geometric measure of entanglement. This upper bound is characterized exclusively by the dimensions d1, …, dn of the composite subsystems. Numerous examples demonstrate that the upper bound appears to be reasonably tight.
Bermudo, Carolina; Sevilla, Lorenzo; Martín, Francisco; Trujillo, Francisco Javier
2017-01-01
Incremental processes are seeing substantial development in the manufacturing industry in recent years. The first stage of an Incremental Forming Process can be defined as an indentation. Because of this, the indentation process is starting to be widely studied, not only as a hardening test but also as a forming process. Thus, in this work, an analysis of the indentation process under the new Modular Upper Bound perspective has been performed. The modular implementation has several advantages, including the possibility of introducing additional parameters to extend the study, such as the friction effect, the temperature, or the hardening effect studied in this paper. The main objective of the present work is to analyze the three hardening models developed according to the material characteristics. In order to support the validation of the hardening models, finite element analyses of diverse materials under indentation are carried out. Results obtained from the Modular Upper Bound are in concordance with the results obtained from the numerical analyses. In addition, the numerical and analytical methods are in concordance with the results previously obtained in the experimental indentation of annealed aluminum A92030. Due to the introduction of the hardening factor, the new modular distribution is a suitable option for the analysis of indentation processes. PMID:28772914
Faydasicok, Ozlem; Arik, Sabri
2013-08-01
The main problem with the analysis of robust stability of neural networks is to find the upper bound norm for the intervalized interconnection matrices of neural networks. In the previous literature, the major three upper bound norms for the intervalized interconnection matrices have been reported and they have been successfully applied to derive new sufficient conditions for robust stability of delayed neural networks. One of the main contributions of this paper will be the derivation of a new upper bound for the norm of the intervalized interconnection matrices of neural networks. Then, by exploiting this new upper bound norm of interval matrices and using stability theory of Lyapunov functionals and the theory of homomorphic mapping, we will obtain new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. The results obtained in this paper will be shown to be new and they can be considered alternative results to previously published corresponding results. We also give some illustrative and comparative numerical examples to demonstrate the effectiveness and applicability of the proposed robust stability condition. Copyright © 2013 Elsevier Ltd. All rights reserved.
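One classical bound of this family can be written down directly: split the interval matrix into its center and nonnegative radius; because the spectral norm is monotone on elementwise-nonnegative matrices, every member's norm is at most ||A_c||_2 + ||A_d||_2. A small numpy check of that bound (illustrative only; the paper derives a further, tighter bound):

```python
import numpy as np

def interval_matrix_norm_bound(A_min: np.ndarray, A_max: np.ndarray) -> float:
    """Upper bound on the spectral norm of every A with A_min <= A <= A_max
    (elementwise): ||A||_2 <= ||A_c||_2 + ||A_d||_2, where A_c is the center
    and A_d the nonnegative radius of the interval matrix."""
    A_c = 0.5 * (A_max + A_min)
    A_d = 0.5 * (A_max - A_min)
    return np.linalg.norm(A_c, 2) + np.linalg.norm(A_d, 2)

rng = np.random.default_rng(1)
A_min = rng.uniform(-1.0, 0.5, (4, 4))
A_max = A_min + rng.uniform(0.0, 0.5, (4, 4))
bound = interval_matrix_norm_bound(A_min, A_max)

# Spot-check against random members of the interval family:
worst = max(np.linalg.norm(A_min + (A_max - A_min) * rng.random((4, 4)), 2)
            for _ in range(1000))
print(worst <= bound, worst, bound)
```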
Estimation variance bounds of importance sampling simulations in digital communication systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
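The quantity being bounded here is the variance of the importance-sampling estimator itself, which can also be estimated empirically. A minimal sketch for a Gaussian tail probability with a mean-shift biasing density (an assumed toy setup, not the paper's communication-system model):

```python
import numpy as np

rng = np.random.default_rng(2)
t, n = 4.0, 100_000                      # tail threshold and sample size

# Direct Monte Carlo: indicator of the rare event {X > t}, X ~ N(0, 1).
x = rng.standard_normal(n)
p_mc = (x > t).mean()

# Importance sampling: draw from the shifted density N(theta, 1).
theta = t                                 # IS parameter (mean shift)
y = rng.standard_normal(n) + theta
w = np.exp(-theta * y + 0.5 * theta**2)   # likelihood ratio phi(y) / phi_theta(y)
h = (y > t) * w
p_is = h.mean()
var_is = h.var(ddof=1) / n                # empirical variance of the IS estimate

print(p_mc, p_is, var_is)                 # IS resolves p ~ 3e-5 where plain MC fails
```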
On the likelihood of single-peaked preferences.
Lackner, Marie-Louise; Lackner, Martin
2017-01-01
This paper contains an extensive combinatorial analysis of the single-peaked domain restriction and investigates the likelihood that an election is single-peaked. We provide a very general upper bound result for domain restrictions that can be defined by certain forbidden configurations. This upper bound implies that many domain restrictions (including the single-peaked restriction) are very unlikely to appear in a random election chosen according to the Impartial Culture assumption. For single-peaked elections, this upper bound can be refined and complemented by a lower bound that is asymptotically tight. In addition, we provide exact results for elections with few voters or candidates. Moreover, we consider the Pólya urn model and the Mallows model and obtain lower bounds showing that single-peakedness is considerably more likely to appear for certain parameterizations.
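Single-peakedness is easy to test directly, which makes the Impartial Culture probabilities straightforward to estimate by simulation for small elections. A brute-force sketch (the paper's results are exact and asymptotic; this is only a numerical illustration):

```python
import itertools
import random

def is_single_peaked(ranking, axis):
    """A ranking is single-peaked w.r.t. an axis iff, for every k, its k most
    preferred candidates occupy a contiguous interval of the axis."""
    pos = {c: i for i, c in enumerate(axis)}
    ps = [pos[c] for c in ranking]
    lo = hi = ps[0]
    for p in ps[1:]:
        if p == lo - 1:
            lo = p
        elif p == hi + 1:
            hi = p
        else:
            return False
    return True

def election_single_peaked(profile, candidates):
    """Brute-force search over all axes (fine for few candidates)."""
    return any(all(is_single_peaked(r, ax) for r in profile)
               for ax in itertools.permutations(candidates))

# Impartial Culture: every ranking drawn uniformly at random.
random.seed(0)
cands, n_voters, trials = list(range(5)), 10, 2000
hits = sum(
    election_single_peaked(
        [random.sample(cands, len(cands)) for _ in range(n_voters)], cands)
    for _ in range(trials))
print(hits / trials)   # single-peaked elections are rare, as the bounds predict
```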
NASA Astrophysics Data System (ADS)
Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.
2018-02-01
Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
1-norm support vector novelty detection and its sparseness.
Zhang, Li; Zhou, WeiDa
2013-12-01
This paper proposes a 1-norm support vector novelty detection (SVND) method and discusses its sparseness. 1-norm SVND is formulated as a linear programming problem and uses two techniques for inducing sparseness, namely the 1-norm regularization and the hinge loss function. We also find two upper bounds on the sparseness of 1-norm SVND, namely the exact support vector (ESV) bound and the kernel Gram matrix rank bound. The ESV bound indicates that 1-norm SVND has a sparser representation model than SVND. The kernel Gram matrix rank bound can loosely estimate the sparseness of 1-norm SVND. Experimental results show that 1-norm SVND is feasible and effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
The Lyapunov dimension and its estimation via the Leonov method
NASA Astrophysics Data System (ADS)
Kuznetsov, N. V.
2016-06-01
Along with widely used numerical methods for estimating and computing the Lyapunov dimension there is an effective analytical approach, proposed by G.A. Leonov in 1991. The Leonov method is based on the direct Lyapunov method with special Lyapunov-like functions. The advantage of the method is that it allows one to estimate the Lyapunov dimension of invariant sets without localization of the set in the phase space and, in many cases, to obtain an exact Lyapunov dimension formula effectively. In this work the invariance of the Lyapunov dimension with respect to diffeomorphisms and its connection with the Leonov method are discussed. For discrete-time dynamical systems an analog of the Leonov method is suggested. In a simple but rigorous way, the connection is presented here between the Leonov method and the key related works: Kaplan and Yorke (the concept of the Lyapunov dimension, 1979), Douady and Oesterlé (upper bounds of the Hausdorff dimension via the Lyapunov dimension of maps, 1980), Constantin, Eden, Foiaş, and Temam (upper bounds of the Hausdorff dimension via the Lyapunov exponents and Lyapunov dimension of dynamical systems, 1985-90), and the numerical calculation of the Lyapunov exponents and dimension.
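The Kaplan-Yorke construction referenced above is short enough to state in code: sort the Lyapunov exponents in decreasing order, find the largest j whose partial sum is still nonnegative, and interpolate into the next exponent. A sketch with the classical Lorenz spectrum as a sanity check:

```python
import numpy as np

def kaplan_yorke_dimension(lyapunov_exponents):
    """Kaplan-Yorke (Lyapunov) dimension from a spectrum of exponents:
    d_KY = j + (l_1 + ... + l_j) / |l_{j+1}|, where j is the largest index
    with a nonnegative partial sum of the sorted exponents."""
    l = np.sort(np.asarray(lyapunov_exponents, dtype=float))[::-1]
    sums = np.cumsum(l)
    j = np.max(np.where(sums >= 0)[0]) + 1 if sums[0] >= 0 else 0
    if j == 0:
        return 0.0
    if j == len(l):
        return float(len(l))           # no contracting direction left
    return j + sums[j - 1] / abs(l[j])

# Approximate Lorenz spectrum (sigma=10, rho=28, beta=8/3):
print(kaplan_yorke_dimension([0.906, 0.0, -14.572]))   # ~2.062
```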
Edge connectivity and the spectral gap of combinatorial and quantum graphs
NASA Astrophysics Data System (ADS)
Berkolaiko, Gregory; Kennedy, James B.; Kurasov, Pavel; Mugnolo, Delio
2017-09-01
We derive a number of upper and lower bounds for the first nontrivial eigenvalue of Laplacians on combinatorial and quantum graphs in terms of the edge connectivity, i.e. the minimal number of edges which need to be removed to make the graph disconnected. On combinatorial graphs, one of the bounds corresponds to a well-known inequality of Fiedler, of which we give a new variational proof. On quantum graphs, the corresponding bound generalizes a recent result of Band and Lévy. All proofs are general enough to yield corresponding estimates for the p-Laplacian and allow us to identify the minimizers. Based on the Betti number of the graph, we also derive upper and lower bounds on all eigenvalues which are ‘asymptotically correct’, i.e. agree with the Weyl asymptotics for the eigenvalues of the quantum graph. In particular, the lower bounds improve the bounds of Friedlander on any given graph for all but finitely many eigenvalues, while the upper bounds improve recent results of Ariturk. Our estimates are also used to derive bounds on the eigenvalues of the normalized Laplacian matrix that improve known bounds of spectral graph theory.
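Fiedler's inequality mentioned above is easy to check numerically for combinatorial graphs: the algebraic connectivity λ₂ is sandwiched between 2η(1 - cos(π/n)) and the vertex (hence edge) connectivity for non-complete graphs. A quick networkx check (illustrative only; the paper's results also cover quantum graphs and p-Laplacians):

```python
import math
import networkx as nx

G = nx.random_regular_graph(3, 12, seed=7)
n = G.number_of_nodes()

lam2 = nx.algebraic_connectivity(G)   # first nontrivial Laplacian eigenvalue
eta = nx.edge_connectivity(G)         # minimal number of edges to disconnect

# Fiedler: 2*eta*(1 - cos(pi/n)) <= lambda_2 <= vertex conn <= edge conn (eta),
# the upper half holding for non-complete graphs.
lower = 2.0 * eta * (1.0 - math.cos(math.pi / n))
print(lower <= lam2 <= eta, lower, lam2, eta)
```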
Manipulations of Cartesian Graphs: A First Introduction to Analysis.
ERIC Educational Resources Information Center
Lowenthal, Francis; Vandeputte, Christiane
1989-01-01
Introduces an introductory module for analysis. Describes stock of basic functions and their graphs as part one and three methods as part two: transformations of simple graphs, the sum of stock functions, and upper and lower bounds. (YP)
On the role of entailment patterns and scalar implicatures in the processing of numerals
Panizza, Daniele; Chierchia, Gennaro; Clifton, Charles
2009-01-01
There has been much debate, in both the linguistics and the psycholinguistics literature, concerning numbers and the interpretation of number denoting determiners ('numerals'). Such debate concerns, in particular, the nature and distribution of upper-bounded ('exact') interpretations vs. lower-bounded ('at-least') construals. In the present paper we show that the interpretation and processing of numerals are affected by the entailment properties of the context in which they occur. Experiment 1 established off-line preferences using a questionnaire. Experiment 2 investigated the processing issue through an eye tracking experiment using a silent reading task. Our results show that the upper-bounded interpretation of numerals occurs more often in an upward entailing context than in a downward entailing context. Reading times of the numeral itself were longer when it was embedded in an upward entailing context than when it was not, indicating that processing resources were required when the context triggered an upper-bounded interpretation. However, reading of a following context that required an upper-bounded interpretation triggered more regressions towards the numeral when it had occurred in a downward entailing context than in an upward entailing one. Such findings show that speakers' interpretation and processing of numerals are systematically affected by the polarity of the sentence in which they occur, and support the hypothesis that the upper-bounded interpretation of numerals is due to a scalar implicature. PMID:20161494
Grosz, R; Stephanopoulos, G
1983-09-01
The need for the determination of the free energy of formation of biomass in bioreactor second law balances is well established. A statistical mechanical method for the calculation of the free energy of formation of E. coli biomass is introduced. In this method, biomass is modelled to consist of a system of biopolymer networks. The partition function of this system is proposed to consist of acoustic and optical modes of vibration. Acoustic modes are described by Tarasov's model, the parameters of which are evaluated with the aid of low-temperature calorimetric data for the crystalline protein bovine chymotrypsinogen A. The optical modes are described by considering the low-temperature thermodynamic properties of biological monomer crystals such as amino acid crystals. Upper and lower bounds are placed on the entropy to establish the maximum error associated with the statistical method. The upper bound is determined by endowing the monomers in biomass with ideal gas properties. The lower bound is obtained by limiting the monomers to complete immobility. On this basis, the free energy of formation is fixed to within 10%. Proposals are made with regard to experimental verification of the calculated value and extension of the calculation to other types of biomass.
Upper bound of abutment scour in laboratory and field data
Benedict, Stephen
2016-01-01
The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted a field investigation of abutment scour in South Carolina and used those data to develop envelope curves that define the upper bound of abutment scour. To expand on this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with abutment scour data from other sources and evaluate upper bound patterns with this larger data set. To facilitate this analysis, 446 laboratory and 331 field measurements of abutment scour were compiled into a digital database. This extensive database was used to evaluate the South Carolina abutment scour envelope curves and to develop additional envelope curves that reflected the upper bound of abutment scour depth for the laboratory and field data. The envelope curves provide simple but useful supplementary tools for assessing the potential maximum abutment scour depth in the field setting.
Multi-soliton interaction of a generalized Schrödinger-Boussinesq system in a magnetized plasma
NASA Astrophysics Data System (ADS)
Zhao, Xue-Hui; Tian, Bo; Chai, Jun; Wu, Xiao-Yu; Guo, Yong-Jiang
2017-04-01
Under investigation in this paper is a generalized Schrödinger-Boussinesq system, which describes the stationary propagation of coupled upper-hybrid waves and magnetoacoustic waves in a magnetized plasma. Bilinear forms, one-, two- and three-soliton solutions are derived by virtue of the Hirota method and symbolic computation. Propagation and interaction for the solitons are illustrated graphically: Coefficients β1 and β2 can affect the velocities and propagation directions of the solitary waves. Amplitude, velocity and shape of the single solitary wave remain invariant during propagation, implying that the transport of energy is stable in the upper-hybrid and magnetoacoustic waves, and the amplitude of the upper-hybrid wave is bigger than that of the magnetoacoustic wave. For the upper-hybrid and magnetoacoustic waves, head-on, overtaking and bound-state interactions between the two solitary waves are asymptotically depicted, respectively, indicating that the interaction between the two solitary waves is elastic. Elastic interaction between the bound-state soliton and a single soliton is also displayed, and the interactions among the three solitary waves are all elastic.
NASA Technical Reports Server (NTRS)
Glover, R. M.; Weinhold, F.
1977-01-01
Variational functionals of Braunn and Rebane (1972) for the imaginary-frequency polarizability (IFP) have been generalized by the method of Gramian inequalities to give rigorous upper and lower bounds, valid even when the true (but unknown) unperturbed wavefunction must be represented by a variational approximation. Using these formulas in conjunction with flexible variational trial functions, tight error bounds are computed for the IFP and the associated two- and three-body van der Waals interaction constants of the ground 1(1S) and metastable 2(1,3S) states of He and Li(+). These bounds generally establish the ground-state properties to within a fraction of a per cent and metastable properties to within a few per cent, permitting a comparative assessment of competing theoretical methods at this level of accuracy. Unlike previous 'error bounds' for these properties, the present results have a completely a priori theoretical character, with no empirical input data.
Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms
Rechner, Steffen; Berger, Annabell
2016-01-01
We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442
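Both quantities compared here can be computed exactly for a tiny chain: the total mixing time by iterating the transition matrix, and the spectral bound from the second-largest eigenvalue modulus. A numpy sketch on a lazy random walk (a standard textbook bound; marathon's implementations are more general):

```python
import math
import numpy as np

# Lazy random walk on a cycle of 8 states (reversible and ergodic).
n = 8
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25
pi = np.full(n, 1.0 / n)          # uniform stationary distribution
eps = 0.25                        # target total variation distance

def worst_case_tv(Pt):
    """Max over starting states of the TV distance to stationarity."""
    return max(0.5 * np.abs(row - pi).sum() for row in Pt)

Pt, t = np.eye(n), 0
while worst_case_tv(Pt) > eps:    # total mixing time by direct iteration
    Pt, t = Pt @ P, t + 1

# Spectral upper bound: t_mix(eps) <= ln(1/(eps*pi_min)) / (1 - lambda*),
# with lambda* the second-largest eigenvalue modulus (reversible chains).
lam_star = np.sort(np.abs(np.linalg.eigvals(P)))[-2]
bound = math.log(1.0 / (eps * pi.min())) / (1.0 - lam_star)
print(t, bound)                   # exact value vs. conservative spectral bound
```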
NASA Astrophysics Data System (ADS)
Dong, Yuan; Li, Qian P.; Wu, Zhengchao; Zhang, Jia-Zhong
2016-12-01
Export fluxes of phosphorus (P) by sinking particles are important in studying ocean biogeochemical dynamics, whereas their composition and temporal variability are still inadequately understood in the global oceans, including the northern South China Sea (NSCS). A time-series study of particle fluxes was conducted at a mooring station adjacent to the Xisha Trough in the NSCS from September 2012 to September 2014, with sinking particles collected every two weeks by two sediment traps deployed at 500 m and 1500 m depths. Five operationally defined particulate P classes of sinking particles, including loosely-bound P, Fe-bound P, CaCO3-bound P, detrital apatite P, and refractory organic P, were quantified by a sequential extraction method (SEDEX). Our results revealed substantial variability in sinking particulate P composition at the Xisha Trough over two years of sampling. Particulate inorganic P was largely contributed by Fe-bound P in the upper trap, but by detrital P in the lower trap. Particulate organic P, including exchangeable organic P, CaCO3-bound organic P, and refractory organic P, contributed up to 50-55% of total sinking particulate P. The increase of CaCO3-bound P in the upper trap during 2014 could be related to a strong El Niño event with enhanced CaCO3 deposition. We also found sediment resuspension to be responsible for the unusually high particle fluxes at the lower trap, based on analyses of a two-component mixing model. There was on average a total mass flux of 78±50 mg m-2 d-1 at the upper trap during the study period. A significant correlation between integrated primary productivity in the region and particle fluxes at 500 m suggested the important role of biological production in controlling the concentration, composition, and export fluxes of sinking particulate P in the NSCS.
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for a sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows (i) to determine if a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds to the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed form expressions are derived, along with conditional sampling. In addition, an l∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
Perturbative unitarity constraints on the NMSSM Higgs Sector
Betre, Kassahun; El Hedri, Sonia; Walker, Devin G. E.
2017-11-11
We place perturbative unitarity constraints on both the dimensionful and dimensionless parameters in the Next-to-Minimal Supersymmetric Standard Model (NMSSM) Higgs Sector. These constraints, plus the requirement that the singlino and/or Higgsino constitutes at least part of the observed dark matter relic abundance, generate upper bounds on the Higgs, neutralino and chargino mass spectrum. Requiring higher-order corrections to be no more than 41% of the tree-level value, we obtain an upper bound of 20 TeV for the heavy Higgses and 12 TeV for the charginos and neutralinos outside defined fine-tuned regions. If the corrections are no more than 20% of the tree-level value, the bounds are 7 TeV for the heavy Higgses and 5 TeV for the charginos and neutralinos. Finally, using the NMSSM as a template, we describe a method that replaces naturalness arguments with more rigorous perturbative unitarity arguments to get a better understanding of when new physics will appear.
Lattice Cleaving: A Multimaterial Tetrahedral Meshing Algorithm with Guarantees
Bronson, Jonathan; Levine, Joshua A.; Whitaker, Ross
2014-01-01
We introduce a new algorithm for generating tetrahedral meshes that conform to physical boundaries in volumetric domains consisting of multiple materials. The proposed method allows for an arbitrary number of materials, produces high-quality tetrahedral meshes with upper and lower bounds on dihedral angles, and guarantees geometric fidelity. Moreover, the method is combinatoric so its implementation enables rapid mesh construction. These meshes are structured in a way that also allows grading, to reduce element counts in regions of homogeneity. Additionally, we provide proofs showing that both element quality and geometric fidelity are bounded using this approach. PMID:24356365
Ionospheric Signatures in Radio Occultation Data
NASA Technical Reports Server (NTRS)
Mannucci, Anthony J.; Ao, Chi; Iijima, Byron A.; Kursinkski, E. Robert
2012-01-01
We can robustly extend the radio occultation data record by 6 years (+60%) by developing a single-frequency processing method for GPS/MET data. We will produce a calibrated data set with profile-by-profile data characterization to determine robust upper bounds on ionospheric bias. This is part of an effort to produce a calibrated RO data set addressing other key error sources such as upper boundary initialization. Planned: AIRS-GPS water vapor cross validation (water vapor climatology and trends).
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2018-01-01
Bearing-supported shafts are widely used in various machines. Due to harsh working environments, bearing performance degrades over time. To prevent unexpected bearing failures and accidents, bearing performance degradation assessment has become an emerging topic in recent years. Bearing performance degradation assessment aims to evaluate the current health condition of a bearing through a bearing health indicator. In past years, many signal-processing and data-mining based methods were proposed to construct bearing health indicators. However, the upper and lower bounds of these bearing health indicators were not theoretically calculated, and they strongly depended on historical bearing data including normal and failure data. Besides, most health indicators are dimensional, which means that they are prone to be affected by varying operating conditions, such as varying speeds and loads. In this paper, based on the principle of squared envelope analysis, we focus on a theoretical investigation of bearing performance degradation assessment in the case of additive Gaussian noise, including establishing the distribution of the squared envelope, constructing a generalized dimensionless bearing health indicator, and mathematically calculating the upper and lower bounds of the generalized dimensionless bearing health indicator. Then, analyses of simulated and real bearing run-to-failure data are used as two case studies to illustrate how the generalized dimensionless health indicator works and to demonstrate its effectiveness in bearing performance degradation assessment. Results show that the squared envelope follows a noncentral chi-square distribution and that the upper and lower bounds of the generalized dimensionless health indicator can be mathematically established. Moreover, the generalized dimensionless health indicator is sensitive to an incipient bearing defect in the process of bearing performance degradation.
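The distributional claim can be illustrated directly: for a pure Gaussian noise signal, the squared envelope obtained from the analytic signal is, per sample, approximately a scaled chi-square variable with 2 degrees of freedom, which yields a control limit with no training data. A scipy sketch of that healthy-baseline case (the noncentral case with a periodic fault component follows the same pattern):

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import chi2

rng = np.random.default_rng(3)
x = rng.standard_normal(20_000)          # healthy-bearing surrogate: pure noise

se = np.abs(hilbert(x)) ** 2             # squared envelope of the analytic signal

# Under the Gaussian-noise model, 2*se/mean(se) is approximately chi-square
# with 2 degrees of freedom, giving a training-free upper control limit:
se_norm = 2.0 * se / se.mean()
upper = chi2.ppf(0.999, df=2)
print(upper, (se_norm > upper).mean())   # exceedance rate close to 0.001
```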
Limits on cold dark matter cosmologies from new anisotropy bounds on the cosmic microwave background
NASA Technical Reports Server (NTRS)
Vittorio, Nicola; Meinhold, Peter; Lubin, Philip; Muciaccia, Pio Francesco; Silk, Joseph
1991-01-01
A self-consistent method is presented for comparing theoretical predictions of and observational upper limits on CMB anisotropy. New bounds on CDM cosmologies set by the UCSB South Pole experiment on the 1 deg angular scale are presented. An upper limit of 4.0 × 10^-5 is placed on the rms differential temperature anisotropy at a 95 percent confidence level and a power of the test beta = 55 percent. A lower limit of about 0.6/b is placed on the density parameter of cold dark matter universes with greater than about 3 percent baryon abundance and a Hubble constant of 50 km/s/Mpc, where b is the bias factor, equal to unity only if light traces mass.
Interpolation Inequalities and Spectral Estimates for Magnetic Operators
NASA Astrophysics Data System (ADS)
Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael
2018-05-01
We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.
Eigenvalues of the Wentzell-Laplace operator and of the fourth order Steklov problems
NASA Astrophysics Data System (ADS)
Xia, Changyu; Wang, Qiaoling
2018-05-01
We prove a sharp upper bound and a lower bound for the first nonzero eigenvalue of the Wentzell-Laplace operator on compact manifolds with boundary and an isoperimetric inequality for the same eigenvalue in the case where the manifold is a bounded domain in a Euclidean space. We study some fourth order Steklov problems and obtain isoperimetric upper bound for the first eigenvalue of them. We also find all the eigenvalues and eigenfunctions for two kind of fourth order Steklov problems on a Euclidean ball.
Hybrid Theory of Electron-Hydrogenic Systems Elastic Scattering
NASA Technical Reports Server (NTRS)
Bhatia, A. K.
2007-01-01
Accurate electron-hydrogen and electron-hydrogenic cross sections are required to interpret fusion experiments, laboratory plasma physics and properties of the solar and astrophysical plasmas. We have developed a method in which the short-range and long-range correlations can be included at the same time in the scattering equations. The phase shifts have rigorous lower bounds and the scattering lengths have rigorous upper bounds. The phase shifts in the resonance region can be used to calculate very accurately the resonance parameters.
NASA Technical Reports Server (NTRS)
Mitchell, J. R.
1972-01-01
The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.
The Problem of Limited Inter-rater Agreement in Modelling Music Similarity
Flexer, Arthur; Grill, Thomas
2016-01-01
One of the central goals of Music Information Retrieval (MIR) is the quantification of similarity between or within pieces of music. These quantitative relations should mirror the human perception of music similarity, which is however highly subjective with low inter-rater agreement. Unfortunately this principal problem has been given little attention in MIR so far. Since it is not meaningful to have computational models that go beyond the level of human agreement, these levels of inter-rater agreement present a natural upper bound for any algorithmic approach. We will illustrate this fundamental problem in the evaluation of MIR systems using results from two typical application scenarios: (i) modelling of music similarity between pieces of music; (ii) music structure analysis within pieces of music. For both applications, we derive upper bounds of performance which are due to the limited inter-rater agreement. We compare these upper bounds to the performance of state-of-the-art MIR systems and show how the upper bounds prevent further progress in developing better MIR systems. PMID:28190932
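The upper bound described here is simply the level of agreement humans reach with one another, estimated from pairwise comparisons of their ratings. A toy computation with hypothetical ratings (the matrix and scale are invented for illustration):

```python
import itertools
import numpy as np

# Hypothetical similarity ratings: 4 raters label 6 song pairs on a 3-point scale.
ratings = np.array([
    [2, 1, 0, 2, 1, 2],
    [2, 1, 1, 2, 0, 2],
    [1, 1, 0, 2, 1, 2],
    [2, 0, 0, 1, 1, 2],
])

# Mean pairwise agreement between human raters: an algorithm predicting one
# held-out rater from the rest cannot exceed this on average, so it acts as
# an upper bound on meaningful system performance.
pairs = itertools.combinations(range(len(ratings)), 2)
agreement = np.mean([(ratings[i] == ratings[j]).mean() for i, j in pairs])
print(f"inter-rater agreement (upper bound for a model): {agreement:.3f}")
```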
Kamiura, Moto; Sano, Kohei
2017-10-01
The principle of optimism in the face of uncertainty is known as a heuristic in sequential decision-making problems. The Overtaking method, based on this principle, is an effective algorithm for solving multi-armed bandit problems. In the previous study it was defined by a set of heuristic patterns of formulation. The objective of the present paper is to redefine the value functions of the Overtaking method and to unify their formulation. The unified Overtaking method is associated with statistical upper bounds of confidence intervals for the expected rewards. The unification of the formulation enhances the universality of the Overtaking method. Consequently, we newly obtain the Overtaking method for exponentially distributed rewards, analyze it numerically, and show that it outperforms the UCB algorithm on average. The present study suggests that, in the context of multi-armed bandit problems, the principle of optimism in the face of uncertainty should be regarded as a statistics-based consequence of the law of large numbers for the sample mean of rewards and the estimation of upper bounds of expected rewards, rather than as a heuristic. Copyright © 2017 Elsevier B.V. All rights reserved.
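For comparison, the UCB baseline mentioned above is only a few lines: each arm's index is its empirical mean plus an upper confidence term, and the arm with the largest index is pulled. A standard UCB1 sketch for Bernoulli rewards (the paper's Overtaking value functions differ; this shows only the baseline):

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """UCB1: pull the arm maximizing empirical mean + sqrt(2 ln t / n_i),
    an upper confidence bound on the expected reward (Bernoulli arms here)."""
    rng = random.Random(seed)
    k = len(means)
    counts, sums, total = [0] * k, [0.0] * k, 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                  # play each arm once to initialize
        else:
            arm = max(range(k), key=lambda i:
                      sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total, counts

total, counts = ucb1([0.3, 0.5, 0.7], horizon=10_000)
print(total, counts)    # pulls concentrate on the 0.7 arm as t grows
```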
Evidence for a bound on the lifetime of de Sitter space
NASA Astrophysics Data System (ADS)
Freivogel, Ben; Lippert, Matthew
2008-12-01
Recent work has suggested a surprising new upper bound on the lifetime of de Sitter vacua in string theory. The bound is parametrically longer than the Hubble time but parametrically shorter than the recurrence time. We investigate whether the bound is satisfied in a particular class of de Sitter solutions, the KKLT vacua. Despite the freedom to make the supersymmetry breaking scale exponentially small, which naively would lead to extremely stable vacua, we find that the lifetime is always less than about exp(10^22) Hubble times, in agreement with the proposed bound. This result, however, is contingent on several estimates and assumptions; in particular, we rely on a conjectural upper bound on the Euler number of the Calabi-Yau fourfolds used in KKLT compactifications.
Lower and upper bounds for entanglement of Rényi-α entropy.
Song, Wei; Chen, Lin; Cao, Zhuo-Liang
2016-12-23
Entanglement Rényi-α entropy is an entanglement measure. It reduces to the standard entanglement of formation when α tends to 1. We derive analytical lower and upper bounds for the entanglement Rényi-α entropy of arbitrary dimensional bipartite quantum systems. We also demonstrate the application of our bounds with some concrete examples. Moreover, we establish the relation between the entanglement Rényi-α entropy and some other entanglement measures.
Global a priori estimates for the inhomogeneous Landau equation with moderately soft potentials
NASA Astrophysics Data System (ADS)
Cameron, Stephen; Silvestre, Luis; Snelson, Stanley
2018-05-01
We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.
Biodegradation kinetics for pesticide exposure assessment.
Wolt, J D; Nelson, H P; Cleveland, C B; van Wesenbeeck, I J
2001-01-01
Understanding pesticide risks requires characterizing pesticide exposure within the environment in a manner that can be broadly generalized across widely varied conditions of use. The coupled processes of sorption and soil degradation are especially important for understanding the potential environmental exposure of pesticides. The data obtained from degradation studies are inherently variable and, when limited in extent, lend uncertainty to exposure characterization and risk assessment. Pesticide decline in soils reflects dynamically coupled processes of sorption and degradation that add complexity to the treatment of soil biodegradation data from a kinetic perspective. Additional complexity arises from study design limitations that may not fully account for the decline in microbial activity of test systems, or that may be inadequate for considerations of all potential dissipation routes for a given pesticide. Accordingly, kinetic treatment of data must accommodate a variety of differing approaches starting with very simple assumptions as to reaction dynamics and extending to more involved treatments if warranted by the available experimental data. Selection of the appropriate kinetic model to describe pesticide degradation should rely on statistical evaluation of the data fit to ensure that the models used are not overparameterized. Recognizing the effects of experimental conditions and methods for kinetic treatment of degradation data is critical for making appropriate comparisons among pesticide biodegradation data sets. Assessment of variability in soil half-life among soils is uncertain because for many pesticides the data on soil degradation rate are limited to one or two soils. Reasonable upper-bound estimates of soil half-life are necessary in risk assessment so that estimated environmental concentrations can be developed from exposure models. Thus, an understanding of the variable and uncertain distribution of soil half-lives in the environment is necessary to estimate bounding values. Statistical evaluation of measures of central tendency for multisoil kinetic studies shows that geometric means better represent the distribution in soil half-lives than do the arithmetic or harmonic means. Estimates of upper-bound soil half-life values based on the upper 90% confidence bound on the geometric mean tend to accurately represent the upper bound when pesticide degradation rate is biologically driven but appear to overestimate the upper bound when there is extensive coupling of biodegradation with sorptive processes. The limited data available comparing distribution in pesticide soil half-lives between multisoil laboratory studies and multilocation field studies suggest that the probability density functions are similar. Thus, upper-bound estimates of pesticide half-life determined from laboratory studies conservatively represent pesticide biodegradation in the field environment for the purposes of exposure and risk assessment. International guidelines and approaches used for interpretations of soil biodegradation reflect many common elements, but differ in how the source and nature of variability in soil kinetic data are considered. Harmonization of approaches for the use of soil biodegradation data will improve the interpretative power of these data for the purposes of exposure and risk assessment.
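The recommended central-tendency and upper-bound calculation is concrete: work on the log scale, take the geometric mean, and form a one-sided upper confidence bound on it. A sketch with hypothetical half-life data; a normal quantile stands in for the t quantile to keep it dependency-free:

```python
import math
import statistics

def geomean_half_life_upper_bound(half_lives, confidence=0.90):
    """Geometric mean of soil half-lives and a one-sided upper confidence
    bound on it, computed on the log scale (lognormal assumption)."""
    logs = [math.log(t) for t in half_lives]
    n = len(logs)
    m = statistics.mean(logs)
    s = statistics.stdev(logs)
    # Normal quantile used as an approximation to the one-sided t quantile
    # (scipy.stats.t.ppf(confidence, n - 1) would be exact).
    z = statistics.NormalDist().inv_cdf(confidence)
    return math.exp(m), math.exp(m + z * s / math.sqrt(n))

# Hypothetical half-lives (days) for one pesticide measured in five soils:
gm, ub = geomean_half_life_upper_bound([12.0, 20.0, 35.0, 16.0, 48.0])
print(f"geometric mean: {gm:.1f} d, upper 90% bound: {ub:.1f} d")
```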
Standard Deviation for Small Samples
ERIC Educational Resources Information Center
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
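A sketch of the kind of representation the article describes: for small n the sample variance can be written as a sum of squared pairwise differences (divide by 6 when n = 3), and the range immediately bounds the standard deviation. The pairwise identity is standard; the exact representations in the article may differ in form:

```python
from itertools import combinations
import math
import statistics

def variance_pairwise(xs):
    """Sample variance via pairwise differences:
    s^2 = sum_{i<j} (x_i - x_j)^2 / (n(n-1)); no mean is needed, so small
    integer samples can be handled mentally (e.g. divide by 6 when n = 3)."""
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

xs = [3, 7, 8]
print(variance_pairwise(xs), statistics.variance(xs))   # both 7.0

# Immediate upper bound on s from the range R: s <= (R/2) * sqrt(n/(n-1)).
R, n = max(xs) - min(xs), len(xs)
print(math.sqrt(variance_pairwise(xs)) <= (R / 2) * math.sqrt(n / (n - 1)))
```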
Finite-error metrological bounds on multiparameter Hamiltonian estimation
NASA Astrophysics Data System (ADS)
Kura, Naoto; Ueda, Masahito
2018-01-01
Estimation of multiple parameters in an unknown Hamiltonian is investigated. We present upper and lower bounds on the time required to complete the estimation within a prescribed error tolerance δ . The lower bound is given on the basis of the Cramér-Rao inequality, where the quantum Fisher information is bounded by the squared evolution time. The upper bound is obtained by an explicit construction of estimation procedures. By comparing the cases with different numbers of Hamiltonian channels, we also find that the few-channel procedure with adaptive feedback and the many-channel procedure with entanglement are equivalent in the sense that they require the same amount of time resource up to a constant factor.
Exploring L1 model space in search of conductivity bounds for the MT problem
NASA Astrophysics Data System (ADS)
Wheelock, B. D.; Parker, R. L.
2013-12-01
Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions) results in greater uncertainty (wider bounds). Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the narrowest and most discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
Variational bounds on the temperature distribution
NASA Astrophysics Data System (ADS)
Kalikstein, Kalman; Spruch, Larry; Baider, Alberto
1984-02-01
Upper and lower stationary or variational bounds are obtained for functions which satisfy parabolic linear differential equations. (The error in the bound, that is, the difference between the bound on the function and the function itself, is of second order in the error in the input function, and the error is of known sign.) The method is applicable to a range of functions associated with equalization processes, including heat conduction, mass diffusion, electric conduction, fluid friction, the slowing down of neutrons, and certain limiting forms of the random walk problem, under conditions which are not unduly restrictive: in heat conduction, for example, we do not allow the thermal coefficients or the boundary conditions to depend upon the temperature, but the thermal coefficients can be functions of space and time and the geometry is unrestricted. The variational bounds follow from a maximum principle obeyed by the solutions of these equations.
NASA Astrophysics Data System (ADS)
Santos, Jander P.; Sá Barreto, F. C.
2016-01-01
Spin correlation identities for the Blume-Emery-Griffiths model on the Kagomé lattice are derived and, combined with rigorous correlation inequalities, lead to upper bounds on the critical temperature. From the spin correlation identities, the mean field approximation and the effective field approximation results for the magnetization, the critical frontiers and the tricritical points are obtained. The rigorous upper bounds on the critical temperature improve on the results of those effective-field-type theories.
Bounds for the Z-spectral radius of nonnegative tensors.
He, Jun; Liu, Yan-Min; Ke, Hua; Tian, Jun-Kang; Li, Xiang
2016-01-01
In this paper, we have proposed some new upper bounds for the largest Z-eigenvalue of an irreducible weakly symmetric and nonnegative tensor, which improve the known upper bounds obtained in Chang et al. (Linear Algebra Appl 438:4166-4182, 2013), Song and Qi (SIAM J Matrix Anal Appl 34:1581-1595, 2013), He and Huang (Appl Math Lett 38:110-114, 2014), Li et al. (J Comput Anal Appl 483:182-199, 2015), He (J Comput Anal Appl 20:1290-1301, 2016).
Morphological representation of order-statistics filters.
Charif-Chefchaouni, M; Schonfeld, D
1995-01-01
We propose a comprehensive theory for the morphological bounds on order-statistics filters (and their repeated iterations). Conditions are derived for morphological openings and closings to serve as bounds (lower and upper, respectively) on order-statistics filters (and their repeated iterations). Under various assumptions, morphological open-closings and close-openings are also shown to serve as (tighter) bounds (lower and upper, respectively) on iterations of order-statistics filters. Simulations of the application of the results presented to image restoration are finally provided.
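For the median filter (the most familiar order-statistics filter) with a flat window, the bracketing by opening and closing can be checked empirically in a few lines (a quick illustration of ours, not the paper's construction):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
x = rng.normal(size=500)

opened = ndimage.grey_opening(x, size=5)   # candidate lower bound
closed = ndimage.grey_closing(x, size=5)   # candidate upper bound
median = ndimage.median_filter(x, size=5)

# empirical check of opening <= median <= closing, pointwise
print(np.all(opened <= median), np.all(median <= closed))
```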
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem that is equivalent to a linear program is constructed using a new two-phase relaxation technique. In the algorithm, lower and upper bounds are obtained simultaneously by solving a sequence of linear relaxation problems. Global convergence is proved, and results on several sample examples and a small random experiment show that the proposed algorithm is both feasible and efficient.
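Not the paper's two-phase linear relaxation, but a compact illustration of the branch-and-bound pattern it relies on: interval bounds on a product of two affine forms give lower bounds, box midpoints give feasible upper bounds, and boxes whose lower bound cannot beat the incumbent are pruned (all names and the toy objective are ours):

```python
import heapq
import numpy as np

def lin_range(c, d, lo, hi):
    """Exact range of the affine form c.x + d over the box [lo, hi]."""
    return (d + np.minimum(c * lo, c * hi).sum(),
            d + np.maximum(c * lo, c * hi).sum())

def prod_lb(a, b):
    """Lower bound of the product of two intervals."""
    return min(a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])

def bb_min(c1, d1, c2, d2, lo, hi, tol=1e-6):
    f = lambda x: (c1 @ x + d1) * (c2 @ x + d2)
    xbest = (lo + hi) / 2.0
    fbest = f(xbest)                          # incumbent upper bound
    tie = 0                                   # heap tie-breaker
    heap = [(prod_lb(lin_range(c1, d1, lo, hi),
                     lin_range(c2, d2, lo, hi)), tie, lo, hi)]
    while heap:
        bound, _, l, u = heapq.heappop(heap)
        if bound >= fbest - tol:              # lower bound close enough: done
            break
        mid = (l + u) / 2.0
        if f(mid) < fbest:
            fbest, xbest = f(mid), mid        # new incumbent
        j = int(np.argmax(u - l))             # branch on the widest edge
        left_u = u.copy();  left_u[j] = mid[j]
        right_l = l.copy(); right_l[j] = mid[j]
        for l2, u2 in ((l, left_u), (right_l, u)):
            b = prod_lb(lin_range(c1, d1, l2, u2), lin_range(c2, d2, l2, u2))
            if b < fbest - tol:               # keep only promising boxes
                tie += 1
                heapq.heappush(heap, (b, tie, l2, u2))
    return fbest, xbest

c1, c2 = np.array([1.0, -2.0]), np.array([3.0, 1.0])
print(bb_min(c1, 0.5, c2, -1.0, np.zeros(2), np.ones(2)))
```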
The upper bound of pier scour defined by selected laboratory and field data
Benedict, Stephen; Caldwell, Andral W.
2015-01-01
The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted several field investigations of pier scour in South Carolina (Benedict and Caldwell, 2006; Benedict and Caldwell, 2009) and used those data to develop envelope curves defining the upper bound of pier scour. To expand upon this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with pier-scour data from other sources and evaluate the upper bound of pier scour with this larger data set. To facilitate this analysis, a literature review was made to identify potential sources of published pier-scour data, and selected data were compiled into a digital spreadsheet consisting of approximately 570 laboratory and 1,880 field measurements. These data encompass a wide range of laboratory and field conditions and represent field data from 24 states within the United States and six other countries. This extensive database was used to define the upper bound of pier-scour depth with respect to pier width, encompassing both the laboratory and field data. Pier width is a primary variable that influences pier-scour depth (Laursen and Toch, 1956; Melville and Coleman, 2000; Mueller and Wagner, 2005; Ettema et al., 2011; Arneson et al., 2012) and, therefore, was used as the primary explanatory variable in developing the upper-bound envelope curve. The envelope curve provides a simple but useful tool for assessing the potential maximum pier-scour depth for pier widths of about 30 feet or less.
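As an illustration of how such an envelope curve can be constructed (entirely synthetic numbers, not the study's data): bin the measurements by pier width and take the maximum observed scour per bin as the empirical upper bound.

```python
import numpy as np

rng = np.random.default_rng(0)
width = 10 ** rng.uniform(-0.5, 1.5, 500)      # pier widths, ft (synthetic)
scour = 1.2 * width ** 0.6 * rng.random(500)   # scour depths, ft (synthetic)

edges = np.logspace(-0.5, 1.5, 15)             # log-spaced width bins
which = np.digitize(width, edges)
envelope = [(width[which == k].mean(), scour[which == k].max())
            for k in range(1, len(edges)) if np.any(which == k)]
for w, s in envelope:
    print(f"width ~ {w:6.2f} ft -> max observed scour {s:5.2f} ft")
```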
Bounds on the information rate of quantum-secret-sharing schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarvepalli, Pradeep
An important metric of the performance of a quantum-secret-sharing scheme is its information rate. Beyond the fact that the information rate is upper-bounded by one, very little is known in terms of bounds on the information rate of quantum-secret-sharing schemes. Furthermore, not every scheme can be realized with rate one. In this paper we derive upper bounds for the information rates of quantum-secret-sharing schemes. We show that there exist quantum access structures on n players for which the information rate cannot be better than O((log_2 n)/n). These results are the quantum analogues of the bounds for classical-secret-sharing schemes proved by Csirmaz.
Bounds of memory strength for power-law series.
Guo, Fangjian; Yang, Dan; Yang, Zimo; Zhao, Zhi-Dan; Zhou, Tao
2017-05-01
Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents α. By permuting the independently drawn samples from a power-law distribution, we present nontrivial bounds on the memory strength (first-order autocorrelation) as a function of α, which are markedly different from the ordinary ±1 bounds for Gaussian or uniform distributions. When 1<α≤3, as α grows bigger, the upper bound increases from 0 to +1 while the lower bound remains 0; when α>3, the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on the posts on Twitter, ratings of MovieLens, calling records of the mobile operator Orange, and the browsing behavior of Taobao, we find that empirical power-law-distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks and challenge the validity of measures such as autocorrelation and assortativity coefficient in heterogeneous systems.
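A quick numerical illustration of the asymmetry described above (heuristic arrangements of ours, not the paper's extremal constructions): for heavy-tailed samples, the most anticorrelated ordering one can easily build stays near 0 rather than -1, while sorting pushes the lag-1 autocorrelation up.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.5
x = (1 - rng.random(20000)) ** (-1.0 / alpha)   # Pareto(alpha) samples

def lag1(v):
    v = v - v.mean()
    return (v[:-1] * v[1:]).sum() / (v * v).sum()

s = np.sort(x)
alt = np.empty_like(s)                          # small/large interleaving
alt[0::2], alt[1::2] = s[:10000], s[10000:]

print("sorted    :", lag1(s))                   # toward the upper bound
print("shuffled  :", lag1(rng.permutation(x)))  # near zero
print("alternated:", lag1(alt))                 # near 0, nowhere near -1
```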
Bounds of memory strength for power-law series
NASA Astrophysics Data System (ADS)
Guo, Fangjian; Yang, Dan; Yang, Zimo; Zhao, Zhi-Dan; Zhou, Tao
2017-05-01
Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents α. By permuting the independently drawn samples from a power-law distribution, we present nontrivial bounds on the memory strength (first-order autocorrelation) as a function of α, which are markedly different from the ordinary ±1 bounds for Gaussian or uniform distributions. When 1<α≤3, as α grows bigger, the upper bound increases from 0 to +1 while the lower bound remains 0; when α>3, the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on the posts on Twitter, ratings of MovieLens, calling records of the mobile operator Orange, and the browsing behavior of Taobao, we find that empirical power-law-distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks and challenge the validity of measures such as autocorrelation and assortativity coefficient in heterogeneous systems.
Bound of dissipation on a plane Couette dynamo
NASA Astrophysics Data System (ADS)
Alboussière, Thierry
2009-06-01
Variational turbulence is among the few approaches providing rigorous results in turbulence. In addition, it addresses a question of direct practical interest, namely, the rate of energy dissipation. Unfortunately, only an upper bound is obtained as a larger functional space than the space of solutions to the Navier-Stokes equations is searched. Yet, in some cases, this upper bound is in good agreement with experimental results in terms of order of magnitude and power law of the imposed Reynolds number. In this paper, the variational approach to turbulence is extended to the case of dynamo action and an upper bound is obtained for the global dissipation rate (viscous and Ohmic). A simple plane Couette flow is investigated. For low magnetic Prandtl number Pm fluids, the upper bound of energy dissipation is that of classical turbulence (i.e., proportional to the cubic power of the shear velocity) for magnetic Reynolds numbers below Pm-1 and follows a steeper evolution for magnetic Reynolds numbers above Pm-1 (i.e., proportional to the shear velocity to the power of 4) in the case of electrically insulating walls. However, the effect of wall conductance is crucial: for a given value of wall conductance, there is a value for the magnetic Reynolds number above which energy dissipation cannot be bounded. This limiting magnetic Reynolds number is inversely proportional to the square root of the conductance of the wall. Implications in terms of energy dissipation in experimental and natural dynamos are discussed.
LS Bound based gene selection for DNA microarray data.
Zhou, Xin; Mao, K Z
2005-04-15
One problem with discriminant analysis of DNA microarray data is that each sample is represented by quite a large number of genes, and many of them are irrelevant, insignificant or redundant to the discriminant problem at hand. Methods for selecting important genes are, therefore, of much significance in microarray data analysis. In the present study, a new criterion, called LS Bound measure, is proposed to address the gene selection problem. The LS Bound measure is derived from leave-one-out procedure of LS-SVMs (least squares support vector machines), and as the upper bound for leave-one-out classification results it reflects to some extent the generalization performance of gene subsets. We applied this LS Bound measure for gene selection on two benchmark microarray datasets: colon cancer and leukemia. We also compared the LS Bound measure with other evaluation criteria, including the well-known Fisher's ratio and Mahalanobis class separability measure, and other published gene selection algorithms, including Weighting factor and SVM Recursive Feature Elimination. The strength of the LS Bound measure is that it provides gene subsets leading to more accurate classification results than the filter method while its computational complexity is at the level of the filter method. A companion website can be accessed at http://www.ntu.edu.sg/home5/pg02776030/lsbound/. The website contains: (1) the source code of the gene selection algorithm; (2) the complete set of tables and figures regarding the experimental study; (3) proof of the inequality (9). ekzmao@ntu.edu.sg.
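The LS Bound is built from LS-SVM leave-one-out relations; LS-SVMs are closely related to kernel ridge regression, for which the leave-one-out residuals have the well-known closed form sketched below (generic stand-in data, not the paper's criterion itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 200                                # few samples, many "genes"
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

lam = 1.0
K = X @ X.T                                   # linear kernel matrix
H = K @ np.linalg.inv(K + lam * np.eye(n))    # smoother ("hat") matrix
loo = (y - H @ y) / (1.0 - np.diag(H))        # closed-form LOO residuals
print("LOO mean squared error:", np.mean(loo ** 2))
```

The point of such identities is computational: the full leave-one-out error is obtained from a single fit, which is what keeps a wrapper-quality criterion at filter-method cost.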
DOE Office of Scientific and Technical Information (OSTI.GOV)
Datta, Nilanjana, E-mail: n.datta@statslab.cam.ac.uk; Hsieh, Min-Hsiu, E-mail: Min-Hsiu.Hsieh@uts.edu.au; Oppenheim, Jonathan, E-mail: j.oppenheim@ucl.ac.uk
State redistribution is the protocol in which given an arbitrary tripartite quantum state, with two of the subsystems initially being with Alice and one being with Bob, the goal is for Alice to send one of her subsystems to Bob, possibly with the help of prior shared entanglement. We derive an upper bound on the second order asymptotic expansion for the quantum communication cost of achieving state redistribution with a given finite accuracy. In proving our result, we also obtain an upper bound on the quantum communication cost of this protocol in the one-shot setting, by using the protocol of coherent state merging as a primitive.
Entropy production of doubly stochastic quantum channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller-Hermes, Alexander, E-mail: muellerh@posteo.net; Department of Mathematical Sciences, University of Copenhagen, 2100 Copenhagen; Stilck França, Daniel, E-mail: dsfranca@mytum.de
2016-02-15
We study the entropy increase of quantum systems evolving under primitive, doubly stochastic Markovian noise and thus converging to the maximally mixed state. This entropy increase can be quantified by a logarithmic-Sobolev constant of the Liouvillian generating the noise. We prove a universal lower bound on this constant that stays invariant under taking tensor powers. Our methods involve a new comparison method to relate logarithmic-Sobolev constants of different Liouvillians and a technique to compute logarithmic-Sobolev inequalities of Liouvillians with eigenvectors forming a projective representation of a finite abelian group. Our bounds improve upon similar results established before, and as an application we prove an upper bound on continuous-time quantum capacities. In the last part of this work we study entropy production estimates of discrete-time doubly stochastic quantum channels by extending the framework of discrete-time logarithmic-Sobolev inequalities to the quantum case.
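For orientation, the role of the logarithmic-Sobolev constant α of a primitive, doubly stochastic Liouvillian can be summarized by the standard entropy-decay estimate (schematic, in our notation, with π = 1/d the maximally mixed state):

\[
S\!\left(e^{t\mathcal{L}}\rho \,\middle\|\, \pi\right) \;\le\; e^{-2\alpha t}\, S(\rho \,\|\, \pi),
\]

so a universal, tensor-stable lower bound on α translates directly into a universal rate of entropy production.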
Potential-field sounding using Euler's homogeneity equation and Zidarov bubbling
Cordell, Lindrith
1994-01-01
Potential-field (gravity) data are transformed into a physical-property (density) distribution in a lower half-space, constrained solely by assumed upper bounds on physical-property contrast and data error. A two-step process is involved. The data are first transformed to an equivalent set of line (2-D case) or point (3-D case) sources, using Euler's homogeneity equation evaluated iteratively on the largest residual data value. Then, mass is converted to a volume-density product, constrained to an upper density bound, by 'bubbling,' which exploits circular or radial expansion to redistribute density without changing the associated gravity field. The method can be developed for gravity or magnetic data in two or three dimensions. The results can provide a beginning for interpretation of potential-field data where few independent constraints exist, or more likely, can be used to develop models and confirm or extend interpretation of other geophysical data sets.
Mechanical metamaterials at the theoretical limit of isotropic elastic stiffness
NASA Astrophysics Data System (ADS)
Berger, J. B.; Wadley, H. N. G.; McMeeking, R. M.
2017-02-01
A wide variety of high-performance applications require materials for which shape control is maintained under substantial stress, and that have minimal density. Bio-inspired hexagonal and square honeycomb structures and lattice materials based on repeating unit cells composed of webs or trusses, when made from materials of high elastic stiffness and low density, represent some of the lightest, stiffest and strongest materials available today. Recent advances in 3D printing and automated assembly have enabled such complicated material geometries to be fabricated at low (and declining) cost. These mechanical metamaterials have properties that are a function of their mesoscale geometry as well as their constituents, leading to combinations of properties that are unobtainable in solid materials; however, a material geometry that achieves the theoretical upper bounds for isotropic elasticity and strain energy storage (the Hashin-Shtrikman upper bounds) has yet to be identified. Here we evaluate the manner in which strain energy distributes under load in a representative selection of material geometries, to identify the morphological features associated with high elastic performance. Using finite-element models, supported by analytical methods, and a heuristic optimization scheme, we identify a material geometry that achieves the Hashin-Shtrikman upper bounds on isotropic elastic stiffness. Previous work has focused on truss networks and anisotropic honeycombs, neither of which can achieve this theoretical limit. We find that stiff but well distributed networks of plates are required to transfer loads efficiently between neighbouring members. The resulting low-density mechanical metamaterials have many advantageous properties: their mesoscale geometry can facilitate large crushing strains with high energy absorption, optical bandgaps and mechanically tunable acoustic bandgaps, high thermal insulation, buoyancy, and fluid storage and transport. Our relatively simple design can be manufactured using origami-like sheet folding and bonding methods.
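For reference, the Hashin-Shtrikman bounds referred to here take, for an isotropic two-phase composite with moduli (K_1, G_1) and (K_2, G_2) and volume fractions f_1 + f_2 = 1 (phase 2 the stiffer), the standard form for the bulk modulus below (the shear-modulus bound is analogous, and for a solid-plus-void metamaterial one sets K_1 = G_1 = 0 in the upper bound):

\[
K^{HS-} = K_1 + \frac{f_2}{(K_2 - K_1)^{-1} + 3 f_1/(3K_1 + 4G_1)},
\qquad
K^{HS+} = K_2 + \frac{f_1}{(K_1 - K_2)^{-1} + 3 f_2/(3K_2 + 4G_2)}.
\]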
Mechanical metamaterials at the theoretical limit of isotropic elastic stiffness.
Berger, J B; Wadley, H N G; McMeeking, R M
2017-03-23
A wide variety of high-performance applications require materials for which shape control is maintained under substantial stress, and that have minimal density. Bio-inspired hexagonal and square honeycomb structures and lattice materials based on repeating unit cells composed of webs or trusses, when made from materials of high elastic stiffness and low density, represent some of the lightest, stiffest and strongest materials available today. Recent advances in 3D printing and automated assembly have enabled such complicated material geometries to be fabricated at low (and declining) cost. These mechanical metamaterials have properties that are a function of their mesoscale geometry as well as their constituents, leading to combinations of properties that are unobtainable in solid materials; however, a material geometry that achieves the theoretical upper bounds for isotropic elasticity and strain energy storage (the Hashin-Shtrikman upper bounds) has yet to be identified. Here we evaluate the manner in which strain energy distributes under load in a representative selection of material geometries, to identify the morphological features associated with high elastic performance. Using finite-element models, supported by analytical methods, and a heuristic optimization scheme, we identify a material geometry that achieves the Hashin-Shtrikman upper bounds on isotropic elastic stiffness. Previous work has focused on truss networks and anisotropic honeycombs, neither of which can achieve this theoretical limit. We find that stiff but well distributed networks of plates are required to transfer loads efficiently between neighbouring members. The resulting low-density mechanical metamaterials have many advantageous properties: their mesoscale geometry can facilitate large crushing strains with high energy absorption, optical bandgaps and mechanically tunable acoustic bandgaps, high thermal insulation, buoyancy, and fluid storage and transport. Our relatively simple design can be manufactured using origami-like sheet folding and bonding methods.
Improved bounds on the energy-minimizing strains in martensitic polycrystals
NASA Astrophysics Data System (ADS)
Peigney, Michaël
2016-07-01
This paper is concerned with the theoretical prediction of the energy-minimizing (or recoverable) strains in martensitic polycrystals, considering a nonlinear elasticity model of phase transformation at finite strains. The main results are some rigorous upper bounds on the set of energy-minimizing strains. Those bounds depend on the polycrystalline texture through the volume fractions of the different orientations. The simplest form of the bounds presented is obtained by combining recent results for single crystals with a homogenization approach proposed previously for martensitic polycrystals. However, the polycrystalline bound delivered by that procedure may fail to recover the monocrystalline bound in the homogeneous limit, as is demonstrated in this paper by considering an example related to tetragonal martensite. This motivates the development of a more detailed analysis, leading to improved polycrystalline bounds that are notably consistent with results for single crystals in the homogeneous limit. A two-orientation polycrystal of tetragonal martensite is studied as an illustration. In that case, analytical expressions of the upper bounds are derived and the results are compared with lower bounds obtained by considering laminate textures.
Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.
Eichstädt, S; Wilkens, V
2017-06-01
An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.
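To make the idea concrete, here is a minimal frequency-domain sketch (all signals and names are stand-ins of ours): a regularization filter L(f) is applied after inverting the system response H(f), and an assumed upper bound B(f) on the magnitude spectrum of the measurand converts the regularization error (L-1)X into a computable pointwise bound |1-L|·B.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 2048, 1e-9                       # 1 ns sampling (stand-in)
f = np.fft.rfftfreq(n, dt)
H = 1.0 / (1.0 + 1j * f / 40e6)          # stand-in hydrophone response
L = 1.0 / (1.0 + (f / 60e6) ** 8)        # low-pass regularization filter

y = rng.standard_normal(n)               # measured record (stand-in)
x_hat = np.fft.irfft(np.fft.rfft(y) * L / H, n)   # regularized estimate

B = np.where(f <= 100e6, 1.0, 0.0)       # assumed bound on |X(f)|
reg_err = np.abs(1.0 - L) * B            # pointwise regularization-error bound
print("worst-case regularization error (freq domain):", reg_err.max())
```

The bound on the regularization error is what can then be propagated, alongside noise and calibration uncertainty, into a GUM-style uncertainty budget.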
Upper and lower bounds for semi-Markov reliability models of reconfigurable systems
NASA Technical Reports Server (NTRS)
White, A. L.
1984-01-01
This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.
The Laughlin liquid in an external potential
NASA Astrophysics Data System (ADS)
Rougerie, Nicolas; Yngvason, Jakob
2018-04-01
We study natural perturbations of the Laughlin state arising from the effects of trapping and disorder. These are N-particle wave functions that have the form of a product of Laughlin states and analytic functions of the N variables. We derive an upper bound to the ground state energy in a confining external potential, matching exactly a recently derived lower bound in the large N limit. Irrespective of the shape of the confining potential, this sharp upper bound can be achieved through a modification of the Laughlin function by suitably arranged quasi-holes.
Removing cosmic spikes using a hyperspectral upper-bound spectrum method
Anthony, Stephen Michael; Timlin, Jerilyn A.
2016-11-04
Cosmic ray spikes are especially problematic for hyperspectral imaging because of the large number of spikes often present and their negative effects upon subsequent chemometric analysis. Fortunately, while the large number of spectra acquired in a hyperspectral imaging data set increases the probability and number of cosmic spikes observed, the multitude of spectra can also aid in the effective recognition and removal of the cosmic spikes. Zhang and Ben-Amotz were perhaps the first to leverage the additional spatial dimension of hyperspectral data matrices (DM). They integrated principal component analysis (PCA) into the upper bound spectrum method (UBS), resulting in a hybrid method (UBS-DM) for hyperspectral images. Here, we expand upon their use of PCA, recognizing that principal components primarily present in only a few pixels most likely correspond to cosmic spikes. Eliminating the contribution of those principal components in those pixels improves the cosmic spike removal. Both simulated and experimental hyperspectral Raman image data sets are used to test the newly developed UBS-DM-hyperspectral (UBS-DM-HS) method which extends the UBS-DM method by leveraging characteristics of hyperspectral data sets. As a result, a comparison is provided between the performance of the UBS-DM-HS method and other methods suitable for despiking hyperspectral images, evaluating both their ability to remove cosmic ray spikes and the extent to which they introduce spectral bias.
Removing Cosmic Spikes Using a Hyperspectral Upper-Bound Spectrum Method.
Anthony, Stephen M; Timlin, Jerilyn A
2017-03-01
Cosmic ray spikes are especially problematic for hyperspectral imaging because of the large number of spikes often present and their negative effects upon subsequent chemometric analysis. Fortunately, while the large number of spectra acquired in a hyperspectral imaging data set increases the probability and number of cosmic spikes observed, the multitude of spectra can also aid in the effective recognition and removal of the cosmic spikes. Zhang and Ben-Amotz were perhaps the first to leverage the additional spatial dimension of hyperspectral data matrices (DM). They integrated principal component analysis (PCA) into the upper bound spectrum method (UBS), resulting in a hybrid method (UBS-DM) for hyperspectral images. Here, we expand upon their use of PCA, recognizing that principal components primarily present in only a few pixels most likely correspond to cosmic spikes. Eliminating the contribution of those principal components in those pixels improves the cosmic spike removal. Both simulated and experimental hyperspectral Raman image data sets are used to test the newly developed UBS-DM-hyperspectral (UBS-DM-HS) method which extends the UBS-DM method by leveraging characteristics of hyperspectral data sets. A comparison is provided between the performance of the UBS-DM-HS method and other methods suitable for despiking hyperspectral images, evaluating both their ability to remove cosmic ray spikes and the extent to which they introduce spectral bias.
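A generic spatial-consistency despiking sketch in the same spirit (not the UBS-DM-HS algorithm itself; thresholds and sizes are arbitrary): a spike appears in one pixel's spectrum but not in its spatial neighbours, so a robust neighbour reference exposes it.

```python
import numpy as np

rng = np.random.default_rng(0)
cube = rng.normal(100, 5, (32, 32, 256))      # (ny, nx, nbands) stand-in
cube[10, 12, 80] += 900                       # inject a cosmic spike

pad = np.pad(cube, ((1, 1), (1, 1), (0, 0)), mode='edge')
neigh = np.stack([pad[dy:dy + 32, dx:dx + 32]     # 8 spatial neighbours
                  for dy in range(3) for dx in range(3)
                  if not (dy == 1 and dx == 1)])
ref = np.median(neigh, axis=0)                # per-pixel reference spectrum
mad = np.median(np.abs(neigh - ref), axis=0) + 1e-9
spikes = (cube - ref) > 10 * 1.4826 * mad     # robust one-sided threshold
cleaned = np.where(spikes, ref, cube)
print("spikes found at:", np.argwhere(spikes))
```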
Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System
NASA Astrophysics Data System (ADS)
Goluskin, David
2018-04-01
We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) ↦ (-x, -y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r-1)^3 at the nonzero equilibria, and the mean of xy^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.
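The mechanism behind such bounds can be stated in one line (schematic, our notation): for the flow ẋ = f(x), the time average of f·∇V vanishes along bounded trajectories for any continuously differentiable auxiliary function V, so

\[
\varphi(x) + f(x)\cdot\nabla V(x) \le U \ \ \forall x
\quad\Longrightarrow\quad
\overline{\varphi} \le U,
\]

and requiring U - φ - f·∇V to be a sum of squares turns the pointwise inequality into the semidefinite program described above.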
An Algorithm for Computing Matrix Square Roots with Application to Riccati Equation Implementation
1977-01-01
…pansion is compared to Euclid's method. The a priori upper and lower bounds are also calculated. The third part of this paper extends the scalar square root al… (By Aerospace Medical Research Laboratory, Aerospace Medical Division, Air Force Systems Command, Wright-Patterson Air Force Base, Ohio 45433.)
Exact Fundamental Limits of the First and Second Hyperpolarizabilities
NASA Astrophysics Data System (ADS)
Lytel, Rick; Mossman, Sean; Crowell, Ethan; Kuzyk, Mark G.
2017-08-01
Nonlinear optical interactions of light with materials originate in the microscopic response of the molecular constituents to excitation by an optical field, and are expressed by the first (β ) and second (γ ) hyperpolarizabilities. Upper bounds to these quantities were derived seventeen years ago using approximate, truncated state models that violated completeness and unitarity, and far exceed those achieved by potential optimization of analytical systems. This Letter determines the fundamental limits of the first and second hyperpolarizability tensors using Monte Carlo sampling of energy spectra and transition moments constrained by the diagonal Thomas-Reiche-Kuhn (TRK) sum rules and filtered by the off-diagonal TRK sum rules. The upper bounds of β and γ are determined from these quantities by applying error-refined extrapolation to perfect compliance with the sum rules. The method yields the largest diagonal component of the hyperpolarizabilities for an arbitrary number of interacting electrons in any number of dimensions. The new method provides design insight to the synthetic chemist and nanophysicist for approaching the limits. This analysis also reveals that the special cases which lead to divergent nonlinearities in the many-state catastrophe are not physically realizable.
NASA Astrophysics Data System (ADS)
Osterloh, Andreas
2016-12-01
Here I present a method by which the intersection of a certain rank-2 density matrix with the zero polytope can be calculated exactly. This is a purely geometrical procedure and is thereby applicable to obtaining the zeros of SL- and SU-invariant entanglement measures of arbitrary polynomial degree. I explain the method in detail for a recently unsolved problem. In particular, I show how a three-dimensional view, namely, in terms of the Bloch-sphere analogy, solves this problem immediately. To this end, I determine the zero polytope of the three-tangle, which is an exact result up to computer accuracy, and calculate upper bounds to its convex roof which lie below the linearized upper bound. The zeros of the three-tangle (in this case) induced by the zero polytope (zero simplex) are exact values. I apply this procedure to a superposition of the four-qubit Greenberger-Horne-Zeilinger and W states. It can, however, be applied to every case one has under consideration, including an arbitrary polynomial convex-roof measure of entanglement and arbitrary local dimension.
An evaluation of risk estimation procedures for mixtures of carcinogens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, J.S.; Chen, J.J.
1999-12-01
The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on the combination of information from several available single-compound studies. The current practice of directly summing the upper-bound risk estimates of the individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of the individual carcinogens. The Gaylor-Chen procedure was derived under an assumption of normality for the distributions of the individual risk estimates. In this paper the authors evaluate the Gaylor-Chen approach in terms of the coverage of the upper confidence limits on the true risks of the individual carcinogens. In general, if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, then the Gaylor-Chen approach should perform well. However, the Gaylor-Chen approach can be conservative or anti-conservative if some or all of the individual upper confidence limit estimates are conservative or anti-conservative.
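A minimal sketch of the combination rule as described above (all risk numbers are hypothetical, and approximate normality and independence of the individual estimates are assumed):

```python
import numpy as np

r = np.array([1e-6, 4e-7, 2e-6])   # central risk estimates (hypothetical)
u = np.array([3e-6, 1e-6, 5e-6])   # individual 95% UCLs (hypothetical)

naive_ucl = u.sum()                # direct summing: known to be conservative
gaylor_chen_ucl = r.sum() + np.sqrt(((u - r) ** 2).sum())
print(naive_ucl, gaylor_chen_ucl)  # the combined bound is tighter
```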
The upper bound of abutment scour defined by selected laboratory and field data
Benedict, Stephen; Caldwell, Andral W.
2015-01-01
The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted a field investigation of abutment scour in South Carolina and used that data to develop envelope curves defining the upper bound of abutment scour. To expand upon this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with abutment-scour data from other sources and evaluate the upper bound of abutment scour with the larger data set. To facilitate this analysis, a literature review was made to identify potential sources of published abutment-scour data, and selected data, consisting of 446 laboratory and 331 field measurements, were compiled for the analysis. These data encompassed a wide range of laboratory and field conditions and represent field data from 6 states within the United States. The data set was used to evaluate the South Carolina abutment-scour envelope curves. Additionally, the data were used to evaluate a dimensionless abutment-scour envelope curve developed by Melville (1992), highlighting the distinct difference in the upper bound for laboratory and field data. The envelope curves evaluated in this investigation provide simple but useful tools for assessing the potential maximum abutment-scour depth in the field setting.
Money Gone Up in Smoke: The Tobacco Use and Malnutrition Nexus in Bangladesh
Husain, Muhammad Jami; Virk-Baker, Mandeep; Parascandola, Mark; Khondker, Bazlul Haque; Ahluwalia, Indu B.
2017-01-01
BACKGROUND The tobacco epidemic in Bangladesh is pervasive. Expenditures on tobacco may reduce money available for food in a country with a high malnutrition rate. OBJECTIVES The aims of the study are to quantify the opportunity costs of tobacco expenditure in terms of nutrition (ie, food energy) forgone and the potential improvements in the household level food-energy status if the money spent on tobacco were diverted for food consumption. METHOD We analyzed data from the 2010 Bangladesh Household Income and Expenditure Survey, a nationally representative survey conducted among 12,240 households. We present 2 analytical scenarios: (1) the lower-bound gain scenario entailing money spent on tobacco partially diverted to acquiring food according to households’ food consumption share in total expenditures; and (2) the upper-bound gain scenario entailing money spent on tobacco diverted to acquiring food only. Age- and gender-based energy norms were used to identify food-energy deficient households. Data were analyzed by mutually exclusive smoking-only, smokeless-only, and dual-tobacco user households. FINDINGS On average, a smoking-only household could gain 269–497 kilocalories (kcal) daily under the lower-bound and upper-bound scenarios, respectively. The potential energy gains for smokeless-only and dual-tobacco user households ranged from 148–268 kcal and 508–924 kcal, respectively. Under these lower- and upper-bound estimates, the percentage of smoking-only user households that are malnourished declined significantly from the baseline rate of 38% to 33% and 29%, respectively. For the smokeless-only and dual-tobacco user households, there were 2–3 and 6–9 percentage point drops in the malnutrition prevalence rates. The tobacco expenditure shift could translate to an additional 4.6–7.7 million food-energy malnourished persons meeting their caloric requirements. CONCLUSIONS The findings suggest that tobacco use reduction could facilitate concomitant improvements in population-level nutrition status and may inform the development and refinement of tobacco prevention and control efforts in Bangladesh. PMID:28283125
Relaxation-optimized transfer of spin order in Ising spin chains
NASA Astrophysics Data System (ADS)
Stefanatos, Dionisis; Glaser, Steffen J.; Khaneja, Navin
2005-12-01
In this paper, we present relaxation-optimized methods for the transfer of bilinear spin correlations along Ising spin chains. These relaxation-optimized methods can be used as a building block for the transfer of polarization between distant spins on a spin chain, a problem that is ubiquitous in multidimensional nuclear magnetic resonance spectroscopy of proteins. Compared to standard techniques, significant reduction in relaxation losses is achieved by these optimized methods when transverse relaxation rates are much larger than the longitudinal relaxation rates and comparable to the couplings between spins. We derive an upper bound on the efficiency of the transfer of spin order along a chain of spins in the presence of relaxation and show that this bound can be approached by the relaxation-optimized pulse sequences presented in the paper.
Generalized Hofmann quantum process fidelity bounds for quantum filters
NASA Astrophysics Data System (ADS)
Sedlák, Michal; Fiurášek, Jaromír
2016-04-01
We propose and investigate bounds on the quantum process fidelity of quantum filters, i.e., probabilistic quantum operations represented by a single Kraus operator K . These bounds generalize the Hofmann bounds on the quantum process fidelity of unitary operations [H. F. Hofmann, Phys. Rev. Lett. 94, 160504 (2005), 10.1103/PhysRevLett.94.160504] and are based on probing the quantum filter with pure states forming two mutually unbiased bases. Determination of these bounds therefore requires far fewer measurements than full quantum process tomography. We find that it is particularly suitable to construct one of the probe bases from the right eigenstates of K , because in this case the bounds are tight in the sense that if the actual filter coincides with the ideal one, then both the lower and the upper bounds are equal to 1. We theoretically investigate the application of these bounds to a two-qubit optical quantum filter formed by the interference of two photons on a partially polarizing beam splitter. For an experimentally convenient choice of factorized input states and measurements we study the tightness of the bounds. We show that more stringent bounds can be obtained by more sophisticated processing of the data using convex optimization and we compare our methods for different choices of the input probe states.
Symmetry Parameter Constraints from a Lower Bound on Neutron-matter Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tews, Ingo; Lattimer, James M.; Ohnishi, Akira
We propose the existence of a lower bound on the energy of pure neutron matter (PNM) on the basis of unitary-gas considerations. We discuss its justification from experimental studies of cold atoms as well as from theoretical studies of neutron matter. We demonstrate that this bound results in limits to the density-dependent symmetry energy, which is the difference between the energies of symmetric nuclear matter and PNM. In particular, this bound leads to a lower limit to the volume symmetry energy parameter S_0. In addition, for assumed values of S_0 above this minimum, this bound implies both upper and lower limits to the symmetry energy slope parameter L, which describes the lowest-order density dependence of the symmetry energy. A lower bound on neutron-matter incompressibility is also obtained. These bounds are found to be consistent with both recent calculations of the energies of PNM and constraints from nuclear experiments. Our results are significant because several equations of state that are currently used in astrophysical simulations of supernovae and neutron star mergers, as well as in nuclear physics simulations of heavy-ion collisions, have symmetry energy parameters that violate these bounds. Furthermore, below the nuclear saturation density, the bound on neutron-matter energies leads to a lower limit to the density-dependent symmetry energy, which leads to upper limits to the nuclear surface symmetry parameter and the neutron-star crust–core boundary. We also obtain a lower limit to the neutron-skin thicknesses of neutron-rich nuclei. Above the nuclear saturation density, the bound on neutron-matter energies also leads to an upper limit to the symmetry energy, with implications for neutron-star cooling via the direct Urca process.
Symmetry Parameter Constraints from a Lower Bound on Neutron-matter Energy
NASA Astrophysics Data System (ADS)
Tews, Ingo; Lattimer, James M.; Ohnishi, Akira; Kolomeitsev, Evgeni E.
2017-10-01
We propose the existence of a lower bound on the energy of pure neutron matter (PNM) on the basis of unitary-gas considerations. We discuss its justification from experimental studies of cold atoms as well as from theoretical studies of neutron matter. We demonstrate that this bound results in limits to the density-dependent symmetry energy, which is the difference between the energies of symmetric nuclear matter and PNM. In particular, this bound leads to a lower limit to the volume symmetry energy parameter S_0. In addition, for assumed values of S_0 above this minimum, this bound implies both upper and lower limits to the symmetry energy slope parameter L, which describes the lowest-order density dependence of the symmetry energy. A lower bound on neutron-matter incompressibility is also obtained. These bounds are found to be consistent with both recent calculations of the energies of PNM and constraints from nuclear experiments. Our results are significant because several equations of state that are currently used in astrophysical simulations of supernovae and neutron star mergers, as well as in nuclear physics simulations of heavy-ion collisions, have symmetry energy parameters that violate these bounds. Furthermore, below the nuclear saturation density, the bound on neutron-matter energies leads to a lower limit to the density-dependent symmetry energy, which leads to upper limits to the nuclear surface symmetry parameter and the neutron-star crust-core boundary. We also obtain a lower limit to the neutron-skin thicknesses of neutron-rich nuclei. Above the nuclear saturation density, the bound on neutron-matter energies also leads to an upper limit to the symmetry energy, with implications for neutron-star cooling via the direct Urca process.
Upper and lower bounds of ground-motion variabilities: implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, Fabrice; Reddy-Kotha, Sreeram; Bora, Sanjay; Bindi, Dino
2017-04-01
One of the key challenges of seismology is to analyse the physical factors that control earthquake and ground-motion variabilities. Such analysis is particularly important for calibrating physics-based simulations and seismic hazard estimations at high frequencies. Within the framework of ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-source records and modern GMPE analysis techniques allow these residuals to be partitioned into between-event and within-event components. In particular, the between-event term quantifies all those repeatable source effects (e.g. related to stress-drop or kappa-source variability) that have not been accounted for by the magnitude-dependent term of the model. In this presentation, we first discuss the between-event variabilities computed both in the Fourier and response spectra domains, using recent high-quality global accelerometric datasets (e.g. NGA-West2, RESORCE, KiK-net). These analyses lead to the assessment of upper bounds for the ground-motion variability. Then, we compare these upper bounds with lower bounds estimated by analysing seismic sequences that occurred on specific fault systems (e.g., located in central Italy or in Japan). We show that the lower bounds of between-event variabilities are surprisingly large, which indicates a large variability of earthquake dynamic properties even within the same fault system. Finally, these upper and lower bounds of ground-shaking variability are discussed in terms of the variability of earthquake physical properties (e.g., stress-drop and kappa-source).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Zachary; Neuert, Gregor; Department of Pharmacology, School of Medicine, Vanderbilt University, Nashville, Tennessee 37232
2016-08-21
Emerging techniques now allow for precise quantification of distributions of biological molecules in single cells. These rapidly advancing experimental methods have created a need for more rigorous and efficient modeling tools. Here, we derive new bounds on the likelihood that observations of single-cell, single-molecule responses come from a discrete stochastic model, posed in the form of the chemical master equation. These strict upper and lower bounds are based on a finite state projection approach, and they converge monotonically to the exact likelihood value. These bounds allow one to discriminate rigorously between models and with a minimum level of computational effort. In practice, these bounds can be incorporated into stochastic model identification and parameter inference routines, which improve the accuracy and efficiency of endeavors to analyze and predict single-cell behavior. We demonstrate the applicability of our approach using simulated data for three example models as well as for experimental measurements of a time-varying stochastic transcriptional response in yeast.
Trace of totally positive algebraic integers and integer transfinite diameter
NASA Astrophysics Data System (ADS)
Flammang, V.
2009-06-01
Explicit auxiliary functions can be used in the "Schur-Siegel-Smyth trace problem". In previous works, these functions were constructed only with polynomials having all their roots positive. Here, we use several polynomials with complex roots, which are found with Wu's algorithm, and we improve the known lower bounds for the absolute trace of totally positive algebraic integers. This improvement has a consequence for the search for Salem numbers with negative trace. The same method also gives a small improvement of the upper bound for the integer transfinite diameter of [0,1].
Complexity, Heuristic, and Search Analysis for the Games of Crossings and Epaminondas
2014-03-27
research in Artificial Intelligence (Section 2.1) and why games are studied (Section 2.2). Section 2.3 discusses how games are played and solved. An… [Abbreviations from the front matter: UCT, Upper Confidence Bounds applied to Trees; HUCT, Heuristic Guided UCT; LOA, Lines of Action; UCB, Upper Confidence Bound; RAVE, Rapid…]
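The UCB rule behind UCT is short enough to show in full; a minimal multi-armed-bandit sketch (arm payoffs are hypothetical), where each arm's score is its empirical mean plus an upper-confidence term:

```python
import math
import random

def ucb1_select(wins, plays, total, c=math.sqrt(2)):
    """Pick the arm maximizing mean reward + exploration bonus (UCB1)."""
    return max(range(len(plays)),
               key=lambda a: wins[a] / plays[a]
                             + c * math.sqrt(math.log(total) / plays[a]))

random.seed(0)
p_true = [0.4, 0.55, 0.6]                 # hypothetical arm win rates
wins, plays = [0.0] * 3, [0] * 3
for a in range(3):                        # play each arm once to start
    wins[a] += random.random() < p_true[a]
    plays[a] += 1
for t in range(4, 5000):
    a = ucb1_select(wins, plays, t)
    wins[a] += random.random() < p_true[a]
    plays[a] += 1
print(plays)                              # the best arm collects most plays
```

In UCT the same bound is applied at every node of the game tree, which is what turns a bandit rule into a tree-search policy.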
ERIC Educational Resources Information Center
Kim, Seonghoon; Feldt, Leonard S.
2010-01-01
The primary purpose of this study is to investigate the mathematical characteristics of the test reliability coefficient rho_XX' as a function of item response theory (IRT) parameters and present the lower and upper bounds of the coefficient. Another purpose is to examine relative performances of the IRT reliability statistics and two…
Schulte, Berit; Eickmeyer, Holm; Heininger, Alexandra; Juretzek, Stephanie; Karrasch, Matthias; Denis, Olivier; Roisin, Sandrine; Pletz, Mathias W.; Klein, Matthias; Barth, Sandra; Lüdke, Gerd H.; Thews, Anne; Torres, Antoni; Cillóniz, Catia; Straube, Eberhard; Autenrieth, Ingo B.; Keller, Peter M.
2014-01-01
Severe pneumonia remains an important cause of morbidity and mortality. Polymerase chain reaction (PCR) has been shown to be more sensitive than current standard microbiological methods – particularly in patients with prior antibiotic treatment – and therefore, may improve the accuracy of microbiological diagnosis for hospitalized patients with pneumonia. Conventional detection techniques and multiplex PCR for 14 typical bacterial pneumonia-associated pathogens were performed on respiratory samples collected from adult hospitalized patients enrolled in a prospective multi-center study. Patients were enrolled from March until September 2012. A total of 739 fresh, native samples were eligible for analysis, of which 75 were sputa, 421 aspirates, and 234 bronchial lavages. 276 pathogens were detected by microbiology for which a valid PCR result was generated (positive or negative detection result by Curetis prototype system). Among these, 120 were identified by the prototype assay, 50 pathogens were not detected. Overall performance of the prototype for pathogen identification was 70.6% sensitivity (95% confidence interval (CI) lower bound: 63.3%, upper bound: 76.9%) and 95.2% specificity (95% CI lower bound: 94.6%, upper bound: 95.7%). Based on the study results, device cut-off settings were adjusted for future series production. The overall performance with the settings of the CE series production devices was 78.7% sensitivity (95% CI lower bound: 72.1%) and 96.6% specificity (95% CI lower bound: 96.1%). Time to result was 5.2 hours (median) for the prototype test and 43.5 h for standard-of-care. The Pneumonia Application provides a rapid and moderately sensitive assay for the detection of pneumonia-causing pathogens with minimal hands-on time. Trial Registration Deutsches Register Klinischer Studien (DRKS) DRKS00005684 PMID:25397673
Modeling of magnitude distributions by the generalized truncated exponential distribution
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-01-01
The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
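A small numerical illustration of the weak point described above (parameter values are arbitrary): mixing two TEDs with the same β and lower bound m0 but different upper bounds produces a density with a jump at the smaller upper bound, which no single TED can reproduce.

```python
import numpy as np

def ted_pdf(m, beta, m0, mmax):
    """Truncated exponential (Gutenberg-Richter) magnitude density."""
    z = 1.0 - np.exp(-beta * (mmax - m0))
    pdf = beta * np.exp(-beta * (m - m0)) / z
    return np.where((m >= m0) & (m <= mmax), pdf, 0.0)

m = np.array([6.90, 6.99, 7.01, 7.10])
mix = 0.5 * ted_pdf(m, 2.3, 4.0, 7.0) + 0.5 * ted_pdf(m, 2.3, 4.0, 8.0)
print(mix)   # visible discontinuity across m = 7: the mixture is not a TED
```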
Numerical and analytical bounds on threshold error rates for hypergraph-product codes
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.
2018-06-01
We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low-density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific-heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.
Computational micromechanics of woven composites
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Saigal, Sunil; Zeng, Xiaogang
1991-01-01
The bounds on the equivalent elastic material properties of a composite are presently addressed by a unified energy approach which is valid for both unidirectional and 2D and 3D woven composites. The unit cell considered is assumed to consist, first, of the actual composite arrangement of the fibers and matrix material, and then, of an equivalent pseudohomogeneous material. Equating the strain energies due to the two arrangements yields an estimate of the upper bound for the material equivalent properties; successive increases in the order of displacement field that is assumed in the composite arrangement will successively produce improved upper bound estimates.
Upper bounds on the photon mass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Accioly, Antonio; Group of Field Theory from First Principles, Sao Paulo State University; Instituto de Fisica Teorica
2010-09-15
The effects of a nonzero photon rest mass can be incorporated into electromagnetism in a simple way using the Proca equations. In this vein, two interesting implications regarding the possible existence of a massive photon in nature, i.e., tiny alterations in the known values of both the anomalous magnetic moment of the electron and the gravitational deflection of electromagnetic radiation, are utilized to set upper limits on its mass. The bounds obtained are not as stringent as those recently found; nonetheless, they are comparable to other existing bounds and bring new elements to the issue of restricting the photon mass.
Upper bound on the Abelian gauge coupling from asymptotic safety
NASA Astrophysics Data System (ADS)
Eichhorn, Astrid; Versteegen, Fleur
2018-01-01
We explore the impact of asymptotically safe quantum gravity on the Abelian gauge coupling in a model including a charged scalar, confirming indications that asymptotically safe quantum fluctuations of gravity could trigger a power-law running towards a free fixed point for the gauge coupling above the Planck scale. Simultaneously, quantum gravity fluctuations balance against matter fluctuations to generate an interacting fixed point, which acts as a boundary of the basin of attraction of the free fixed point. This enforces an upper bound on the infrared value of the Abelian gauge coupling. In the regime of gravity couplings which in our approximation also allows for a prediction of the top quark and Higgs mass close to the experimental value [1], we obtain an upper bound approximately 35% above the infrared value of the hypercharge coupling in the Standard Model.
Limits of Gaussian fluctuations in the cosmic microwave background at 19.2 GHz
NASA Technical Reports Server (NTRS)
Boughn, S. P.; Cheng, E. S.; Cottingham, D. A.; Fixsen, D. J.
1992-01-01
The Northern Hemisphere data from the 19.2 GHz full sky survey are analyzed to place limits on the magnitude of Gaussian fluctuations in the cosmic microwave background implied by a variety of correlation functions. Included among the models tested are the monochromatic and Gaussian-shaped families, and those with power-law spectra for n values between -2 and 1. An upper bound is placed on the quadrupole anisotropy of Delta T/T less than 3.2 x 10 exp -5 rms, and an upper bound on scale-invariant (n = 1) fluctuations of a2 less than 4.5 x 10 exp -5 (95 percent confidence level). There is significant contamination of these data from Galactic emission, and improvement of the modeling of the Galaxy could yield a significant reduction of these upper bounds.
Limits on Gaussian fluctuations in the cosmic microwave background at 19.2 GHz
NASA Technical Reports Server (NTRS)
Boughn, S. P.; Cheng, E. S.; Cottingham, D. A.; Fixsen, D. J.
1991-01-01
The Northern Hemisphere data from the 19.2 GHz full sky survey are analyzed to place limits on the magnitude of Gaussian fluctuations in the cosmic microwave background implied by a variety of correlation functions. Included among the models tested are the monochromatic and Gaussian-shaped families, and those with power-law spectra for n from -2 to 1. We place an upper bound on the quadrupole anisotropy of ΔT/T < 3.2 × 10^-5 rms, and an upper bound on scale-invariant (n = 1) fluctuations of a_2 < 4.5 × 10^-5 (95 percent confidence level). There is significant contamination of these data from Galactic emission, and improvement of our modeling of the Galaxy could yield a significant reduction of these upper bounds.
Complexity Bounds for Quantum Computation
2007-06-22
This project focused on upper and lower bounds for quantum computability using constant... classical computation models, particularly emphasizing new examples of where quantum circuits are more powerful than their classical counterparts.
Enhancing the science of the WFIRST coronagraph instrument with post-processing.
NASA Astrophysics Data System (ADS)
Pueyo, Laurent; WFIRST CGI data analysis and post-processing WG
2018-01-01
We summarize the results of a three-year effort investigating how to apply modern image analysis methods, now routinely used with ground-based coronagraphs, to the WFIRST coronagraph instrument (CGI). Here we quantify the gain associated with post-processing for WFIRST-CGI observing scenarios simulated between 2013 and 2017. We also show, based on simulations, that the spectrum of a planet can be confidently retrieved using these processing tools with an Integral Field Spectrograph. We then discuss our work using CGI experimental data and quantify coronagraph post-processing testbed gains. We finally introduce stability metrics that are simple to define and measure, and place useful lower and upper bounds on the achievable RDI post-processing contrast gain. We show that our bounds hold in the case of the testbed data.
Approximation Set of the Interval Set in Pawlak's Space
Wang, Jin; Wang, Guoyin
2014-01-01
The interval set is a special set, which describes the uncertainty of an uncertain concept or set Z with its two crisp boundaries named the upper-bound set and the lower-bound set. In this paper, the concept of similarity degree between two interval sets is defined at first, and then the similarity degrees between an interval set and its two approximations (i.e., upper approximation set R¯(Z) and lower approximation set R_(Z)) are presented, respectively. The disadvantages of using upper-approximation set R¯(Z) or lower-approximation set R_(Z) as approximation sets of the uncertain set (uncertain concept) Z are analyzed, and a new method for finding a better approximation set of the interval set Z is proposed. The conclusion that the approximation set R_0.5(Z) is an optimal approximation set of interval set Z is drawn and proved successfully. The change rules of R_0.5(Z) with different binary relations are analyzed in detail. Finally, a kind of crisp approximation set of the interval set Z is constructed. We hope this research work will promote the development of both the interval set model and granular computing theory. PMID:25177721
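A minimal sketch of the approximation sets involved, assuming the standard rough-set definitions and reading R_0.5(Z) as the majority-rule (rough membership >= 0.5) cut; the partition and Z below are illustrative, not from the paper:

```python
def approximations(classes, Z):
    """Lower/upper approximation of Z and the 0.5-cut set R_0.5(Z).
    `classes` is a partition of the universe into equivalence classes of R."""
    lower, upper, r_half = set(), set(), set()
    for c in classes:
        c = set(c)
        membership = len(c & Z) / len(c)   # rough membership of the class in Z
        if membership == 1.0:
            lower |= c                     # classes wholly contained in Z
        if membership > 0.0:
            upper |= c                     # classes that meet Z at all
        if membership >= 0.5:
            r_half |= c                    # majority rule: R_0.5(Z)
    return lower, upper, r_half

classes = [{0, 1}, {2, 3, 4}, {5, 6}, {7, 8, 9}]
Z = {0, 1, 2, 3, 6, 7}
print(approximations(classes, Z))
# lower {0,1}  <=  R_0.5 {0,...,6}  <=  upper {0,...,9}
```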
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamio, Y; Bouchard, H
2014-06-15
Purpose: Discrepancies in the verification of the absorbed dose to water from an IMRT plan using a radiation dosimeter can be caused either by (1) detector-specific nonstandard field correction factors, as described by the formalism of Alfonso et al., or (2) inaccurate delivery of the DQA plan. The aim of this work is to develop a simple and fast method to determine an upper limit on the contribution of composite field correction factors to these discrepancies. Methods: Indices that characterize the non-flatness of the symmetrised collapsed delivery (VSC) of IMRT fields over detector-specific regions of interest were shown to be correlated with IMRT field correction factors. The indices introduced are the uniformity index (UI) and the mean fluctuation index (MF). Each of these correlation plots contains 10 000 fields generated with a stochastic model. A total of eight radiation detectors were investigated in the radial orientation. An upper bound on the correction factors was evaluated by fitting values of high correction factors for a given index value. Results: These fitted curves can be used to compare the performance of radiation dosimeters in composite IMRT fields. Highly water-equivalent dosimeters like the scintillating detector (Exradin W1) and a generic alanine detector were found to have corrections under 1% over a broad range of field modulations (0-0.12 for MF and 0-0.5 for UI). Other detectors were shown to have corrections of a few percent over this range. Finally, full Monte Carlo simulations of 18 clinical and nonclinical IMRT fields showed good agreement with the fitted curve for the A12 ionization chamber. Conclusion: This work proposes a rapid method to evaluate an upper bound on the contribution of correction factors to discrepancies found in the verification of DQA plans.
NASA Astrophysics Data System (ADS)
Hartman, Thomas; Hartnoll, Sean A.; Mahajan, Raghu
2017-10-01
The linear growth of operators in local quantum systems leads to an effective light cone even if the system is nonrelativistic. We show that the consistency of diffusive transport with this light cone places an upper bound on the diffusivity: D ≲ v²τ_eq. The operator growth velocity v defines the light cone, and τ_eq is the local equilibration time scale, beyond which the dynamics of conserved densities is diffusive. We verify that the bound is obeyed in various weakly and strongly interacting theories. In holographic models, this bound establishes a relation between the hydrodynamic and leading nonhydrodynamic quasinormal modes of planar black holes. Our bound relates transport data—including the electrical resistivity and the shear viscosity—to the local equilibration time, even in the absence of a quasiparticle description. In this way, the bound sheds light on the observed T-linear resistivity of many unconventional metals, the shear viscosity of the quark-gluon plasma, and the spin transport of unitary fermions.
NASA Astrophysics Data System (ADS)
Kulkarni, Girish; Subrahmanyam, V.; Jha, Anand K.
2016-06-01
We study how one-particle correlations transfer to manifest as two-particle correlations in the context of parametric down-conversion (PDC), a process in which a pump photon is annihilated to produce two entangled photons. We work in the polarization degree of freedom and show that for any two-qubit generation process that is both trace-preserving and entropy-nondecreasing, the concurrence C(ρ) of the generated two-qubit state ρ follows an intrinsic upper bound with C(ρ) ≤ (1+P)/2, where P is the degree of polarization of the pump photon. We also find that for the class of two-qubit states that is restricted to have only two nonzero diagonal elements, such that the effective dimensionality of the two-qubit state is the same as the dimensionality of the pump polarization state, the upper bound on concurrence is the degree of polarization itself, that is, C(ρ) ≤ P. Our work shows that the maximum manifestation of two-particle correlations as entanglement is dictated by one-particle correlations. The formalism developed in this work can be extended to include multiparticle systems and can thus have important implications towards deducing the upper bounds on multiparticle entanglement, for which no universally accepted measure exists.
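A small numerical sketch of the bounded quantity, using the standard Wootters formula for two-qubit concurrence (checking C(ρ) ≤ (1+P)/2 additionally needs the pump's degree of polarization P, an experimental input that is not a function of ρ alone):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix (4x4)."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    yy = np.kron(sy, sy)                      # sigma_y (x) sigma_y
    rho_tilde = yy @ rho.conj() @ yy          # spin-flipped state
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Werner state p|Phi+><Phi+| + (1-p) I/4 has concurrence max(0, (3p-1)/2).
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
p = 0.8
rho = p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4.0
print(concurrence(rho))   # ~0.7, i.e. (3*0.8 - 1)/2
```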
Backstepping Design of Adaptive Neural Fault-Tolerant Control for MIMO Nonlinear Systems.
Gao, Hui; Song, Yongduan; Wen, Changyun
In this paper, an adaptive controller is developed for a class of multi-input and multioutput nonlinear systems with neural networks (NNs) used as a modeling tool. It is shown that all the signals in the closed-loop system with the proposed adaptive neural controller are globally uniformly bounded for any external input in . In our control design, the upper bound of the NN modeling error and the gains of external disturbance are characterized by unknown upper bounds, which is more rational to establish the stability in the adaptive NN control. Filter-based modification terms are used in the update laws of unknown parameters to improve the transient performance. Finally, fault-tolerant control is developed to accommodate actuator failure. An illustrative example applying the adaptive controller to control a rigid robot arm shows the validation of the proposed controller.
Length bounds for connecting discharges in triggered lightning subsequent strokes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idone, V.P.
1990-11-20
Highly time resolved streak recordings from nine subsequent strokes in four triggered flashes have been examined for evidence of the occurrence of upward connecting discharges. These photographic recordings were obtained with superior spatial and temporal resolution (0.3 m and 0.5 μs) and were examined with a video image analysis system to help delineate the separate leader and return stroke image tracks. Unfortunately, a definitive determination of the occurrence of connecting discharges in these strokes could not be made. The data did allow various determinations of an upper bound length for any possible connecting discharge in each stroke. Under the simplest analysis approach possible, an 'absolute' upper bound set of lengths was measured that ranged from 12 to 27 m with a mean of 19 m; two other more involved analyses yielded arguably better upper bound estimates of 8-18 m and 7-26 m with means of 12 and 13 m, respectively. An additional set of low time-resolution telephoto recordings of the lowest few meters of channel revealed six strokes in these flashes with one or more upward unconnected channels originating from the lightning rod tip. The maximum length of unconnected channel seen in each of these strokes ranged from 0.2 to 1.6 m with a mean of 0.7 m. This latter set of observations is interpreted as indirect evidence that connecting discharges did occur in these strokes and that the lower bound for their length is about 1 m.
Quijano, Leyre; Yusà, Vicent; Font, Guillermina; McAllister, Claudia; Torres, Concepción; Pardo, Olga
2017-02-01
This study was carried out to determine current levels of nitrate in vegetables marketed in the Region of Valencia (Spain) and to estimate the toxicological risk associated with their intake. A total of 533 samples of seven vegetable species were studied. Nitrate levels were derived from the Valencia Region monitoring programme carried out from 2009 to 2013, and food consumption levels were taken from the first Valencia Food Consumption Survey, conducted in 2010. The exposure was estimated using a probabilistic approach, and two scenarios were assumed for left-censored data: the lower-bound scenario, in which unquantified results (below the limit of quantification) were set to zero, and the upper-bound scenario, in which unquantified results were set to the limit of quantification value. The exposure of Valencia consumers to nitrate through the consumption of vegetable products appears to be relatively low. In the adult population (16-95 years) the P99.9 was 3.13 mg kg-1 body weight day-1 and 3.15 mg kg-1 body weight day-1 in the lower-bound and upper-bound scenarios, respectively. For young people (6-15 years) the P99.9 of the exposure was 4.20 mg kg-1 body weight day-1 and 4.40 mg kg-1 body weight day-1 in the lower-bound and upper-bound scenarios, respectively. The risk characterisation indicates that, under the upper-bound scenario, 0.79% of adults and 1.39% of young people can exceed the Acceptable Daily Intake of nitrate; this percentage could be higher among extreme consumers of vegetables (such as vegetarians). Overall, the estimated exposures to nitrate from vegetables are unlikely to result in appreciable health risks.
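A minimal sketch of the two substitution scenarios for left-censored data (function name and numbers are illustrative, not the study's pipeline):

```python
import numpy as np

def censored_bounds(measurements, loq):
    """Lower-bound / upper-bound substitution for left-censored results.
    `measurements` uses np.nan for values below the limit of quantification."""
    censored = np.isnan(measurements)
    lb = np.where(censored, 0.0, measurements)   # lower bound: <LOQ -> 0
    ub = np.where(censored, loq, measurements)   # upper bound: <LOQ -> LOQ
    return lb.mean(), ub.mean()

nitrate = np.array([820.0, np.nan, 1150.0, np.nan, 430.0])  # mg/kg, illustrative
print(censored_bounds(nitrate, loq=10.0))        # (480.0, 484.0)
```

The spread between the two scenario means quantifies how much the unquantified samples could matter; here it is small because the LOQ is low relative to the quantified levels.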
On the upper bound in the Bohm sheath criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotelnikov, I. A., E-mail: I.A.Kotelnikov@inp.nsk.su; Skovorodin, D. I., E-mail: D.I.Skovorodin@inp.nsk.su
2016-02-15
The existence of an upper bound in the Bohm sheath criterion is discussed, according to which the Debye sheath at the interface between a plasma and a negatively charged electrode is stable only if the ion flow velocity in the plasma exceeds the ion sound velocity. It is stated that, with the exception of some artificial ionization models, the Bohm sheath criterion is satisfied as an equality at the lower bound and the ion flow velocity is equal to the speed of sound. In the one-dimensional theory, a supersonic flow appears only in an unrealistic model of a localized ion source whose size is less than the Debye length; however, supersonic flows seem to be possible in the two- and three-dimensional cases. In the available numerical codes used to simulate charged particle sources with a plasma emitter, the presence of the upper bound in the Bohm sheath criterion is not supposed; however, correspondence with experimental data is usually achieved if the ion flow velocity in the plasma is close to the ion sound velocity.
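For context, the criterion in its standard textbook form (cold-ion limit; the paper's question is whether the inequality can ever be strict):

```latex
% Bohm sheath criterion: ions must enter the sheath at least at the
% ion sound speed,
u_i \;\ge\; c_s = \sqrt{\frac{k_B T_e}{m_i}},
% with the abstract arguing that the lower bound is marginally
% attained (u_i = c_s) except in artificial ionization models.
```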
Jahan, K Luhluh; Boda, A; Shankar, I V; Raju, Ch Narasimha; Chatterjee, Ashok
2018-03-22
The problem of an exciton trapped in a Gaussian quantum dot (QD) of GaAs is studied in both two and three dimensions in the presence of an external magnetic field using the Ritz variational method, the 1/N expansion method and the shifted 1/N expansion method. The ground state energy and the binding energy of the exciton are obtained as a function of the quantum dot size, confinement strength and the magnetic field and compared with those available in the literature. While the variational method gives the upper bound to the ground state energy, the 1/N expansion method gives the lower bound. The results obtained from the shifted 1/N expansion method are shown to match very well with those obtained from the exact diagonalization technique. The variation of the exciton size and the oscillator strength of the exciton are also studied as a function of the size of the quantum dot. The excited states of the exciton are computed using the shifted 1/N expansion method and it is suggested that a given number of stable excitonic bound states can be realized in a quantum dot by tuning the quantum dot parameters. This can open up the possibility of having quantum dot lasers using excitonic states.
Ferromagnetic Potts models with multisite interaction
NASA Astrophysics Data System (ADS)
Schreiber, Nir; Cohen, Reuven; Haber, Simi
2018-03-01
We study the q-state Potts model with four-site interaction on a square lattice. Based on the asymptotic behavior of lattice animals, it is argued that when q ≤ 4 the system exhibits a second-order phase transition and when q > 4 the transition is first order. The q = 4 model is borderline. We find 1/ln q to be an upper bound on T_c, the exact critical temperature. Using a low-temperature expansion, we show that 1/(θ ln q), where θ > 1 is a q-dependent geometrical term, is an improved upper bound on T_c. In fact, our findings support T_c = 1/(θ ln q). This expression is used to estimate the finite correlation length in first-order transition systems. These results can be extended to other lattices. Our theoretical predictions are confirmed numerically by an extensive study of the four-site interaction model using the Wang-Landau entropic sampling method for q = 3, 4, 5. In particular, the q = 4 model shows an ambiguous finite-size pseudocritical behavior.
Pages, Gaël; Ramdani, Nacim; Fraisse, Philippe; Guiraud, David
2009-06-01
This paper presents a contribution to restoring standing in paraplegia using functional electrical stimulation (FES). Movement generation induced by FES remains mostly open-loop, and stimulus intensities are tuned empirically. To design an efficient closed-loop control, a preliminary study has been carried out to investigate the relationship between body posture and voluntary upper body movements. A methodology is proposed to estimate body posture in the sagittal plane using force measurements exerted on supporting handles during standing. This is done by setting up constraints related to the geometric equations of a two-dimensional closed-chain model and the hand-handle interactions. All measured quantities are subject to uncertainty assumed unknown but bounded. The set-membership estimation problem is solved via interval analysis. Guaranteed uncertainty bounds are computed for the estimated postures. To test the feasibility of the methodology, experiments were carried out with complete spinal cord injured patients.
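A minimal sketch of the set-membership (unknown-but-bounded) idea using naive interval arithmetic; the Interval class and the force/lever numbers are illustrative assumptions, not the authors' estimator:

```python
class Interval:
    """Closed interval [lo, hi]; operations return guaranteed enclosures."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo:.3f}, {self.hi:.3f}]"

# Handle force measured as 50 N +/- 2 N, lever arm 0.30 m +/- 0.01 m:
F = Interval(48.0, 52.0)
arm = Interval(0.29, 0.31)
print(F * arm)   # enclosure of the torque: every consistent value lies inside
```

Propagating such enclosures through the closed-chain geometric constraints is what yields guaranteed bounds on the estimated posture.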
Quantifying the tracking capability of space-based AIS systems
NASA Astrophysics Data System (ADS)
Skauen, Andreas Nordmo
2016-01-01
The Norwegian Defence Research Establishment (FFI) has operated three Automatic Identification System (AIS) receivers in space. Two are on dedicated nano-satellites, AISSat-1 and AISSat-2. The third, the NORAIS Receiver, was installed on the International Space Station. A general method for calculating the upper bound on the tracking capability of a space-based AIS system has been developed, and the algorithm has been applied to AISSat-1 and the NORAIS Receiver individually. In addition, a constellation of AISSat-1 and AISSat-2 is presented. The tracking capability is defined as the probability of re-detecting ships as they move around the globe and represents an upper bound on space-based AIS system performance. AISSat-1 and AISSat-2 operate on the nominal AIS1 and AIS2 channels, while the NORAIS Receiver data used are from operations on the dedicated space AIS channels, AIS3 and AIS4. The improved tracking capability of operations on the space AIS channels is presented.
Flutter suppression and stability analysis for a variable-span wing via morphing technology
NASA Astrophysics Data System (ADS)
Li, Wencheng; Jin, Dongping
2018-01-01
A morphing wing can enhance aerodynamic characteristics and control authority as an alternative to using ailerons. To use morphing technology for flutter suppression, the dynamical behavior and stability of a variable-span wing subjected to supersonic aerodynamic loads are investigated numerically in this paper. An axially moving cantilever plate is employed to model the variable-span wing, in which the governing equations of motion are established via the Kane method and piston theory. A morphing strategy based on axially moving rates is proposed to suppress the flutter that occurs beyond the critical span length, and the flutter stability is verified by Floquet theory. Furthermore, the transient stability during the morphing motion is analyzed and the upper bound of the morphing rate is obtained. The simulation results indicate that the proposed morphing law, varying periodically with an appropriate amplitude, can accomplish flutter suppression. Further, the upper bound of the morphing speed decreases rapidly as the span length approaches its critical value.
Dynamic characteristics of two new vibration modes of the disk-shell shaped gear
NASA Astrophysics Data System (ADS)
Yan, Litang; Qiu, Shijung; Gao, Xiangqung
1992-10-01
Two new vibration modes of the large disk-shell-shaped medium gears placed on three separate medium shafts of a turboprop engine have been found. They have the same nodal diameters as the conventional modes, but their frequencies are higher. The tooth ring vibrates both radially and axially and has greater deflection than the gear hub. The resonance of these two new nodal-diameter modes is much more dangerous than that of the conventional nodal-diameter modes. Moreover, they occur near the upper and lower bounds of the gear operating speed range. A special detuning method is developed for moving the resonances of these two new modes out of the upper and lower bounds, respectively, and the effectiveness of damping rings in this case has been investigated. The vibration responses measured on the reductor casing were then reduced to a quite low level after the damping rings were applied to the three large medium gears.
NASA Astrophysics Data System (ADS)
Thole, B. T.; Van Duijnen, P. Th.
1982-10-01
The induction and dispersion terms obtained from quantum-mechanical calculations with a direct reaction field Hamiltonian are compared to second-order perturbation theory expressions. The dispersion term is shown to give an upper bound which is a generalization of Alexander's upper bound. The model is illustrated by a calculation of the interactions in the water dimer. The long-range Coulomb, induction, and dispersion interactions are reasonably reproduced.
On the Kirchhoff Index of Graphs
NASA Astrophysics Data System (ADS)
Das, Kinkar C.
2013-09-01
Let G be a connected graph of order n with Laplacian eigenvalues μ_1 ≥ μ_2 ≥ ... ≥ μ_{n-1} > μ_n = 0. The Kirchhoff index of G is defined as Kf(G) = n Σ_{i=1}^{n-1} 1/μ_i. In this paper, we give lower and upper bounds on Kf of graphs in terms of n, the number of edges, the maximum degree, and the number of spanning trees. Moreover, we present lower and upper bounds on the Nordhaus-Gaddum-type result for the Kirchhoff index.
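A short sketch computing Kf from the Laplacian spectrum (networkx used for convenience; illustrative, not the paper's code):

```python
import numpy as np
import networkx as nx

def kirchhoff_index(G):
    """Kf(G) = n * sum of reciprocals of the nonzero Laplacian eigenvalues."""
    n = G.number_of_nodes()
    mu = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
    nonzero = mu[mu > 1e-9]   # a connected graph has exactly one zero eigenvalue
    return n * np.sum(1.0 / nonzero)

print(kirchhoff_index(nx.path_graph(4)))   # 10.0: sum of resistance distances in P4
```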
Upper bound of pier scour in laboratory and field data
Benedict, Stephen; Caldwell, Andral W.
2016-01-01
The U.S. Geological Survey (USGS), in cooperation with the South Carolina Department of Transportation, conducted several field investigations of pier scour in South Carolina and used the data to develop envelope curves defining the upper bound of pier scour. To expand on this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with pier scour data from other sources and to evaluate upper-bound relations with this larger data set. To facilitate this analysis, 569 laboratory and 1,858 field measurements of pier scour were compiled to form the 2014 USGS Pier Scour Database. This extensive database was used to develop an envelope curve for the potential maximum pier scour depth encompassing the laboratory and field data. The envelope curve provides a simple but useful tool for assessing the potential maximum pier scour depth for effective pier widths of about 30 ft or less.
NASA Astrophysics Data System (ADS)
Wang, Fan; Liang, Jinling; Dobaie, Abdullah M.
2018-07-01
The resilient filtering problem is considered for a class of time-varying networks with stochastic coupling strengths. An event-triggered strategy is adopted to save network resources by scheduling the signal transmission from the sensors to the filters based on certain prescribed rules. Moreover, the filter parameters to be designed are subject to gain perturbations. The primary aim of the addressed problem is to determine a resilient filter that ensures an acceptable filtering performance for the considered network with event-triggered scheduling. To handle this issue, an upper bound on the estimation error variance is established for each node according to stochastic analysis. Subsequently, the resilient filter is designed by locally minimizing the derived upper bound at each iteration. Moreover, rigorous analysis shows the monotonicity of the minimal upper bound with respect to the triggering threshold. Finally, a simulation example is presented to show the effectiveness of the established filter scheme.
Objects of Maximum Electromagnetic Chirality
NASA Astrophysics Data System (ADS)
Fernandez-Corbaton, Ivan; Fruhnert, Martin; Rockstuhl, Carsten
2016-07-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. Reciprocal objects attain the upper bound if and only if they are transparent for all the fields of one polarization handedness (helicity). Additionally, electromagnetic duality symmetry, i.e., helicity preservation upon interaction, turns out to be a necessary condition for reciprocal objects to attain the upper bound. We use these results to provide requirements for the design of such extremal objects. The requirements can be formulated as constraints on the polarizability tensors for dipolar objects or on the material constitutive relations for continuous media. We also outline two applications for objects of maximum electromagnetic chirality: a twofold resonantly enhanced and background-free circular dichroism measurement setup, and angle-independent helicity filtering glasses. Finally, we use the theoretically obtained requirements to guide the design of a specific structure, which we then analyze numerically and discuss its performance with respect to maximal electromagnetic chirality.
CENTER CONDITIONS AND CYCLICITY FOR A FAMILY OF CUBIC SYSTEMS: COMPUTER ALGEBRA APPROACH.
Ferčec, Brigita; Mahdi, Adam
2013-01-01
Using methods of computational algebra we obtain an upper bound for the cyclicity of a family of cubic systems. We overcame the problem of nonradicality of the associated Bautin ideal by moving from the ring of polynomials to a coordinate ring. Finally, we determine the number of limit cycles bifurcating from each component of the center variety.
Roentgen stereophotogrammetric analysis of metal-backed hemispherical cups without attached markers.
Valstar, E R; Spoor, C W; Nelissen, R G; Rozing, P M
1997-11-01
A method for the detection of micromotion of a metal-backed hemispherical acetabular cup is presented and tested. Unlike in conventional roentgen stereophotogrammetric analysis, the cup does not have to be marked with tantalum markers; the micromotion is calculated from the contours of the hemispherical part and the base circle of the cup. In this way, two rotations (tilt and anteversion) and the translations along the three cardinal axes are obtained. In a phantom study, the maximum error in the position of the cup's centre was 0.04 mm. The mean error in the orientation of the cup was 0.41 degree, with a 95% confidence interval of 0.28-0.54 degree. The in vivo accuracy was tested by repeated measurement of 21 radiographs from seven patients. The upper bound of the 95% tolerance interval for the translations along the transversal, longitudinal, and sagittal axes was 0.09, 0.07, and 0.34 mm, respectively: for the rotation, this upper bound was 0.39 degree. These results show that the new method, in which the position and orientation of metal-backed hemispherical cup is calculated from its projected contours, is a simple and accurate alternative to attaching markers to the cup.
NASA Astrophysics Data System (ADS)
Bagherinejad, Jafar; Niknam, Azar
2018-03-01
In this paper, a leader-follower competitive facility location problem considering the reactions of the competitors is studied. A model for locating new facilities and determining levels of quality for the facilities of the leader firm is proposed. Moreover, changes in the location and quality of existing facilities in a competitive market where a competitor offers the same goods or services are taken into account. The competitor could react by opening new facilities, closing existing ones, and adjusting the quality levels of its existing facilities. The market share captured by each facility depends on its distance to customers and its quality, which is calculated based on the probabilistic Huff model. Each firm aims to maximize its profit subject to constraints on quality levels and the budget for setting up new facilities. This problem is formulated as a bi-level mixed integer non-linear model. The model is solved using a combination of Tabu Search and an exact method. The performance of the proposed algorithm is compared with an upper bound that is achieved by applying Karush-Kuhn-Tucker conditions. Computational results show that our algorithm finds solutions near the upper bound in reasonable time.
The cost-constrained traveling salesman problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokkappa, P.R.
1990-10-01
The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the 'neighborhood' of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
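A hedged sketch of a knapsack-style upper bound of the kind described: relax the subtour to a fractional knapsack over (value, cost) pairs, where each node's cost is some lower estimate of the travel cost its inclusion forces (the particular estimate and the numbers below are illustrative assumptions):

```python
def knapsack_upper_bound(nodes, budget):
    """Fractional-knapsack relaxation: upper-bounds the value of any subtour
    whose travel cost fits within `budget`. `nodes` holds (value, cost) pairs,
    with `cost` a lower estimate of each node's incremental travel cost."""
    total = 0.0
    for value, cost in sorted(nodes, key=lambda vc: vc[0] / vc[1], reverse=True):
        if budget <= 0:
            break
        take = min(1.0, budget / cost)   # the marginal node may enter fractionally
        total += take * value
        budget -= take * cost
    return total

print(knapsack_upper_bound([(8, 4), (6, 3), (5, 5), (3, 2)], budget=10))  # 18.0
```

Bounds of this type can prune a branch-and-bound search: any partial subtour whose best-case completion falls below the incumbent value is discarded.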
NASA Astrophysics Data System (ADS)
Audenaert, Koenraad M. R.; Mosonyi, Milán
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ_1, …, σ_r. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ_1, …, σ_r), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, min_{j<k} C(σ_j, σ_k).
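For reference, the quantities named in the abstract, in standard notation:

```latex
% Binary quantum Chernoff divergence and its multi-hypothesis extension.
C(\sigma,\tau) = -\log \min_{0 \le s \le 1} \operatorname{Tr}\,\sigma^{s}\tau^{1-s},
\qquad
C(\sigma_1,\dots,\sigma_r) = \min_{j<k} C(\sigma_j,\sigma_k).
```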
Unveiling ν secrets with cosmological data: Neutrino masses and mass hierarchy
NASA Astrophysics Data System (ADS)
Vagnozzi, Sunny; Giusarma, Elena; Mena, Olga; Freese, Katherine; Gerbino, Martina; Ho, Shirley; Lattanzi, Massimiliano
2017-12-01
Using some of the latest cosmological data sets publicly available, we derive the strongest bounds in the literature on the sum of the three active neutrino masses, M_ν, within the assumption of a background flat ΛCDM cosmology. In the most conservative scheme, combining Planck cosmic microwave background temperature anisotropies and baryon acoustic oscillation (BAO) data, as well as the up-to-date constraint on the optical depth to reionization (τ), the tightest 95% confidence level upper bound we find is M_ν < 0.151 eV. The addition of Planck high-ℓ polarization data, which, however, might still be contaminated by systematics, further tightens the bound to M_ν < 0.118 eV. A proper model comparison treatment shows that the two aforementioned combinations disfavor the inverted hierarchy at ~64% C.L. and ~71% C.L., respectively. In addition, we compare the constraining power of measurements of the full-shape galaxy power spectrum versus the BAO signature, from the BOSS survey. Even though the latest BOSS full-shape measurements cover a larger volume and benefit from smaller error bars compared to previous similar measurements, the analysis method commonly adopted results in their constraining power still being less powerful than that of the extracted BAO signal. Our work uses only cosmological data; imposing the constraint M_ν > 0.06 eV from oscillation data would raise the quoted upper bounds by O(0.1σ) and would not affect our conclusions.
Distribution of free and antibody-bound peptide hormones in two-phase aqueous polymer systems
Desbuquois, Bernard; Aurbach, G. D.
1972-01-01
Peptide hormones labelled with radioactive iodine were partitioned into the aqueous two-phase polymer systems developed by Albertsson (1960) and the conditions required for separation of free from antibody-bound hormone have been worked out. Hormones studied included insulin, growth hormone, parathyroid hormone and [arginine]-vasopressin. Free and antibody-bound hormones show different distribution coefficients in a number of systems tested; two systems, the dextran–polyethylene glycol and dextran sulphate–polyethylene glycol system, give optimum separation. Free hormones distribute readily into the upper phase of these systems, whereas hormone–antibody complexes, as well as uncombined antibody, are found almost completely in the lower phase. Various factors including the polymer concentration, the ionic composition of the system, the nature of the hormone and the nature of added serum protein differentially affect the distribution coefficients for free and antibody-bound hormone. These factors can be adequately controlled so as to improve separation. The two-phase partition method has been successfully applied to measure binding of labelled hormone to antibody under standard radioimmunoassay conditions. It exhibits several advantages over the method of equilibration dialysis and can be applied to the study of non-immunological interactions. PMID:4672674
Solar System and stellar tests of a quantum-corrected gravity
NASA Astrophysics Data System (ADS)
Zhao, Shan-Shan; Xie, Yi
2015-09-01
The renormalization group running of the gravitational constant has a universal form and represents a possible extension of general relativity. These renormalization group effects will cause the running of the gravitational constant, and there exists a scale of renormalization α_ν, which depends on the mass of an astronomical system and needs to be determined by observations. We test renormalization group effects on general relativity and obtain upper bounds on α_ν at low mass scales: the Solar System and five binary-pulsar systems. Using the supplementary advances of the perihelia provided by the INPOP10a (IMCCE, France) and EPM2011 (IAA RAS, Russia) ephemerides, we obtain new upper bounds on α_ν in the Solar System when the Lense-Thirring effect due to the Sun's angular momentum and the uncertainty of the Sun's quadrupole moment are properly taken into account. These two factors were absent in previous work. We find that INPOP10a yields the upper bound α_ν = (0.3 ± 2.8) × 10^-20, while EPM2011 gives α_ν = (-2.5 ± 8.3) × 10^-21. Both are tighter than the previous result by 4 orders of magnitude. Furthermore, based on the observational data sets of five binary-pulsar systems (PSR J0737-3039, PSR B1534+12, PSR J1756-2251, PSR B1913+16, and PSR B2127+11C), the upper bound is found to be α_ν = (-2.6 ± 5.1) × 10^-17. From the bounds of this work at low mass scales and the ones at the mass scale of galaxies, we might catch an updated glimpse of the mass dependence of α_ν; our improvement of the upper bounds in the Solar System can significantly change the possible pattern of the relation between log|α_ν| and log m from a linear one to a power law, where m is the mass of an astronomical system. This suggests that |α_ν| needs to be suppressed more rapidly with decreasing mass in low-mass systems. It also predicts that |α_ν| might have an upper limit in high-mass astrophysical systems, which can be tested in the future.
Differential Games of inf-sup Type and Isaacs Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaise, Hidehiro; Sheu, S.-J.
2005-06-15
Motivated by the work of Fleming, we provide a general framework to associate inf-sup type values with the Isaacs equations. We show that upper and lower bounds for the generators of inf-sup type are upper and lower Hamiltonians, respectively. In particular, the lower (resp. upper) bound corresponds to the progressive (resp. strictly progressive) strategy. By the Dynamic Programming Principle and identification of the generator, we can prove that the inf-sup type game is characterized as the unique viscosity solution of the Isaacs equation. We also discuss the Isaacs equation with a Hamiltonian given by a convex combination of the lower and upper Hamiltonians.
Tidal disruption of Periodic Comet Shoemaker-Levy 9 and a constraint on its mean density
NASA Technical Reports Server (NTRS)
Boss, Alan P.
1994-01-01
The apparent tidal disruption of Periodic Comet Shoemaker-Levy 9 (1993e) during a close encounter within approximately 1.62 planetary radii of Jupiter can be used along with theoretical models of tidal disruption to place an upper bound on the density of the predisruption body. Depending on the theoretical model used, these upper bounds range from ρ_c < 0.702 ± 0.080 g/cm^3 for a simple analytical model calibrated by numerical smoothed particle hydrodynamics (SPH) simulations to ρ_c < 1.50 ± 0.17 g/cm^3 for a detailed semianalytical model. The quoted uncertainties stem from an assumed uncertainty in the perijove radius. However, the uncertainty introduced by the different theoretical models is the major source of error; this uncertainty could be eliminated by future SPH simulations specialized to cometary disruptions, including the effects of initially prolate, spinning comets. If the SPH-based upper bound turns out to be most appropriate, it would be consistent with the predisruption body being a comet with a relatively low density and porous structure, as has been asserted previously based on observations of cometary outgassing. Regardless of which upper bound is preferable, the models all agree that the predisruption body could not have been a relatively high-density body, such as an asteroid with ρ ≈ 2 g/cm^3.
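As a rough, model-dependent consistency check (the disruption coefficient k is precisely what varies between the cited models), a classical Roche-type criterion gives:

```latex
% Disruption inside d = k R_J (\rho_J/\rho_c)^{1/3} implies, at the observed
% perijove d \approx 1.62 R_J with \rho_J \approx 1.33\ \mathrm{g\,cm^{-3}},
\rho_c \;\lesssim\; \rho_J \left(\frac{k R_J}{d}\right)^{3};
% e.g. the rigid-body value k \approx 1.26 gives
% \rho_c \lesssim 0.6\ \mathrm{g\,cm^{-3}}, the same order as the
% SPH-calibrated bound quoted above.
```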
Limit analysis of hollow spheres or spheroids with Hill orthotropic matrix
NASA Astrophysics Data System (ADS)
Pastor, Franck; Pastor, Joseph; Kondo, Djimedo
2012-03-01
Recent theoretical studies of the literature are concerned by the hollow sphere or spheroid (confocal) problems with orthotropic Hill type matrix. They have been developed in the framework of the limit analysis kinematical approach by using very simple trial velocity fields. The present Note provides, through numerical upper and lower bounds, a rigorous assessment of the approximate criteria derived in these theoretical works. To this end, existing static 3D codes for a von Mises matrix have been easily extended to the orthotropic case. Conversely, instead of the non-obvious extension of the existing kinematic codes, a new original mixed approach has been elaborated on the basis of the plane strain structure formulation earlier developed by F. Pastor (2007). Indeed, such a formulation does not need the expressions of the unit dissipated powers. Interestingly, it delivers a numerical code better conditioned and notably more rapid than the previous one, while preserving the rigorous upper bound character of the corresponding numerical results. The efficiency of the whole approach is first demonstrated through comparisons of the results to the analytical upper bounds of Benzerga and Besson (2001) or Monchiet et al. (2008) in the case of spherical voids in the Hill matrix. Moreover, we provide upper and lower bounds results for the hollow spheroid with the Hill matrix which are compared to those of Monchiet et al. (2008).
Software reliability: Additional investigations into modeling with replicated experiments
NASA Technical Reports Server (NTRS)
Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.
1984-01-01
The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.
Stability of Nonlinear Systems with Unknown Time-varying Feedback Delay
NASA Astrophysics Data System (ADS)
Chunodkar, Apurva A.; Akella, Maruthi R.
2013-12-01
This paper considers the problem of stabilizing a class of nonlinear systems with unknown bounded delayed feedback wherein the time-varying delay is 1) piecewise constant 2) continuous with a bounded rate. We also consider application of these results to the stabilization of rigid-body attitude dynamics. In the first case, the time-delay in feedback is modeled specifically as a switch among an arbitrarily large set of unknown constant values with a known strict upper bound. The feedback is a linear function of the delayed states. In the case of linear systems with switched delay feedback, a new sufficiency condition for average dwell time result is presented using a complete type Lyapunov-Krasovskii (L-K) functional approach. Further, the corresponding switched system with nonlinear perturbations is proven to be exponentially stable inside a well characterized region of attraction for an appropriately chosen average dwell time. In the second case, the concept of the complete type L-K functional is extended to a class of nonlinear time-delay systems with unknown time-varying time-delay. This extension ensures stability robustness to time-delay in the control design for all values of time-delay less than the known upper bound. Model-transformation is used in order to partition the nonlinear system into a nominal linear part that is exponentially stable with a bounded perturbation. We obtain sufficient conditions which ensure exponential stability inside a region of attraction estimate. A constructive method to evaluate the sufficient conditions is presented together with comparison with the corresponding constant and piecewise constant delay. Numerical simulations are performed to illustrate the theoretical results of this paper.
Coefficient of performance and its bounds with the figure of merit for a general refrigerator
NASA Astrophysics Data System (ADS)
Long, Rui; Liu, Wei
2015-02-01
A general refrigerator model with non-isothermal processes is studied. The coefficient of performance (COP) and its bounds at maximum χ figure of merit are obtained and analyzed. This model accounts for different heat capacities during the heat transfer processes. So, different kinds of refrigerator cycles can be considered. Under the constant heat capacity condition, the upper bound of the COP is the Curzon-Ahlborn (CA) coefficient of performance and is independent of the time durations of the heat exchanging processes. With the maximum χ criterion, in the refrigerator cycles, such as the reversed Brayton refrigerator cycle, the reversed Otto refrigerator cycle and the reversed Atkinson refrigerator cycle, where the heat capacity in the heat absorbing process is not less than that in the heat releasing process, their COPs are bounded by the CA coefficient of performance; otherwise, such as for the reversed Diesel refrigerator cycle, its COP can exceed the CA coefficient of performance. Furthermore, the general refined upper and lower bounds have been proposed.
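For orientation, the benchmark expressions involved (standard forms; the CA-type COP at maximum χ is the Yan-Chen result):

```latex
% Carnot COP and the CA-type COP at the maximum of the figure of merit
% \chi = \varepsilon Q_c / t_{\mathrm{cycle}}:
\varepsilon_C = \frac{T_c}{T_h - T_c},
\qquad
\varepsilon_{CA} = \sqrt{1 + \varepsilon_C} - 1.
```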
Search for Chemically Bound Water in the Surface Layer of Mars Based on HEND/Mars Odyssey Data
NASA Technical Reports Server (NTRS)
Basilevsky, A. T.; Litvak, M. L.; Mitrofanov, I. G.; Boynton, W.; Saunders, R. S.
2003-01-01
This study emphasizes the search for signatures of chemically bound water in the surface layer of Mars based on data acquired by the High Energy Neutron Detector (HEND), which is part of the Mars Odyssey Gamma Ray Spectrometer (GRS). Fluxes of epithermal neutrons (probing the upper 1-2 m) and fast neutrons (the upper 20-30 cm), considered in this work, were measured from mid-February to mid-June 2002. A first analysis of this data set, with emphasis on chemically bound water, was made. Early publications of the GRS results reported low neutron flux at high latitudes, interpreted as a signature of ground water ice, and in two low-latitude areas, Arabia and SW of Olympus Mons (SWOM), interpreted as 'geographic variations in the amount of chemically and/or physically bound H2O and/or OH...'. It is clear that the surface materials of Mars do contain chemically bound water, but its amounts are poorly known and its geographic distribution was not analyzed.
Tunable architecture for aircraft fault detection
NASA Technical Reports Server (NTRS)
Ganguli, Subhabrata (Inventor); Papageorgiou, George (Inventor); Glavaski-Radovanovic, Sonja (Inventor)
2012-01-01
A method for detecting faults in an aircraft is disclosed. The method involves predicting at least one state of the aircraft and tuning at least one threshold value to tightly upper bound the size of a mismatch between the at least one predicted state and a corresponding actual state of the non-faulted aircraft. If the mismatch between the at least one predicted state and the corresponding actual state is greater than or equal to the at least one threshold value, the method indicates that at least one fault has been detected.
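A minimal sketch of the residual test described (array contents and threshold values are illustrative assumptions, not flight parameters):

```python
import numpy as np

def detect_fault(x_pred, x_meas, thresholds):
    """Flag a fault when any state mismatch meets or exceeds its threshold.
    `thresholds` is tuned to tightly upper-bound the no-fault mismatch."""
    mismatch = np.abs(x_pred - x_meas)
    return bool(np.any(mismatch >= thresholds)), mismatch

x_pred = np.array([102.0, 5.1])      # predicted airspeed (m/s), pitch (deg)
x_meas = np.array([101.4, 7.9])      # measured values
thresholds = np.array([2.0, 1.5])    # tuned no-fault bounds
print(detect_fault(x_pred, x_meas, thresholds))   # (True, ...): pitch channel trips
```

Tightening the thresholds raises sensitivity to genuine faults at the cost of more false alarms, which is why the tuning step aims at the tightest bound the non-faulted aircraft still satisfies.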
Improved Lower Bounds on the Price of Stability of Undirected Network Design Games
NASA Astrophysics Data System (ADS)
Bilò, Vittorio; Caragiannis, Ioannis; Fanelli, Angelo; Monaco, Gianpiero
Bounding the price of stability of undirected network design games with fair cost allocation is a challenging open problem in the Algorithmic Game Theory research agenda. Even though the generalization of such games to directed networks is well understood in terms of the price of stability (it is exactly H_n, the n-th harmonic number, for games with n players), far less is known for network design games in undirected networks. The upper bound carries over to this case as well, while the best known lower bound is 42/23 ≈ 1.826. For more restricted but interesting variants of such games, such as broadcast and multicast games, sublogarithmic upper bounds are known, while the best known lower bound is 12/7 ≈ 1.714. In the current paper, we improve the lower bounds as follows. We break the psychological barrier of 2 by showing that the price of stability of undirected network design games is at least 348/155 ≈ 2.245. Our proof uses a recursive construction of a network design game with a simple gadget as the main building block. For broadcast and multicast games, we present new lower bounds of 20/11 ≈ 1.818 and 1.862, respectively.
An analysis of the vertical structure equation for arbitrary thermal profiles
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.; Dee, Dick P.
1989-01-01
The vertical structure equation is a singular Sturm-Liouville problem whose eigenfunctions describe the vertical dependence of the normal modes of the primitive equations linearized about a given thermal profile. The eigenvalues give the equivalent depths of the modes. The spectrum of the vertical structure equation and the appropriateness of various upper boundary conditions were studied for arbitrary thermal profiles. The results depend critically upon whether or not the thermal profile is such that the basic state atmosphere is bounded. In the case of a bounded atmosphere it is shown that the spectrum is always totally discrete, regardless of details of the thermal profile. For the barotropic equivalent depth, which corresponds to the lowest eigenvalue, upper and lower bounds which depend only on the surface temperature and the atmosphere height were obtained. All eigenfunctions are bounded, but always have unbounded first derivatives. It was proved that the commonly invoked upper boundary condition that vertical velocity must vanish as pressure tends to zero, as well as a number of alternative conditions, is well posed. It was concluded that the vertical structure equation always has a totally discrete spectrum under the assumptions implicit in the primitive equations.
An analysis of the vertical structure equation for arbitrary thermal profiles
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.; Dee, Dick P.
1987-01-01
The vertical structure equation is a singular Sturm-Liouville problem whose eigenfunctions describe the vertical dependence of the normal modes of the primitive equations linearized about a given thermal profile. The eigenvalues give the equivalent depths of the modes. The spectrum of the vertical structure equation and the appropriateness of various upper boundary conditions were studied for arbitrary thermal profiles. The results depend critically upon whether or not the thermal profile is such that the basic state atmosphere is bounded. In the case of a bounded atmosphere it is shown that the spectrum is always totally discrete, regardless of details of the thermal profile. For the barotropic equivalent depth, which corresponds to the lowest eigenvalue, upper and lower bounds which depend only on the surface temperature and the atmosphere height were obtained. All eigenfunctions are bounded, but always have unbounded first derivatives. It was proved that the commonly invoked upper boundary condition that vertical velocity must vanish as pressure tends to zero, as well as a number of alternative conditions, is well posed. It was concluded that the vertical structure equation always has a totally discrete spectrum under the assumptions implicit in the primitive equations.
Ultimate energy density of observable cold baryonic matter.
Lattimer, James M; Prakash, Madappa
2005-03-25
We demonstrate that the largest measured mass of a neutron star establishes an upper bound to the energy density of observable cold baryonic matter. An equation-of-state-independent expression satisfied by both normal neutron stars and self-bound quark matter stars is derived for the largest energy density of matter inside stars as a function of their masses. The largest observed mass sets the lowest upper limit to the density. Implications from existing and future neutron star mass measurements are discussed.
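Schematically, general relativity fixes the scaling of such a bound; the dimensionless constant below stands in for the stellar-structure analysis and is an assumption of this sketch, not the paper's value:

```latex
% Dimensional scaling: the maximum observed mass bounds the central
% energy density of cold matter from above,
\varepsilon_{\max} \;\le\; \mathcal{C}\,\frac{c^{8}}{G^{3} M_{\max}^{2}},
% so a larger measured M_max pushes the upper limit down.
```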
Generalized monogamy inequalities and upper bounds of negativity for multiqubit systems
NASA Astrophysics Data System (ADS)
Yang, Yanmin; Chen, Wei; Li, Gang; Zheng, Zhu-Jun
2018-01-01
In this paper, we present some generalized monogamy inequalities and upper bounds on negativity based on convex-roof extended negativity (CREN) and CREN of assistance (CRENOA). These monogamy relations are satisfied by the negativity of N-qubit quantum systems ABC_1⋯C_{N-2} under the partitions AB|C_1⋯C_{N-2} and ABC_1|C_2⋯C_{N-2}. Furthermore, the W-class states are used to test these generalized monogamy inequalities.
An approach to optimal semi-active control of vibration energy harvesting based on MEMS
NASA Astrophysics Data System (ADS)
Rojas, Rafael A.; Carcaterra, Antonio
2018-07-01
In this paper the energy harvesting problem involving typical MEMS technology is reduced to an optimal control problem, where the objective function is the absorption of the maximum amount of energy in a given time interval from a vibrating environment. The interest here is to identify a physical upper bound for this energy storage. The mathematical tool is a new optimal control technique, Krotov's method, which has not yet been applied to engineering problems except in quantum dynamics. This approach leads to the identification of new maximum bounds on energy harvesting performance. Novel MEMS-based device control configurations for vibration energy harvesting are proposed, with particular emphasis on piezoelectric, electromagnetic and capacitive circuits.
Performance bounds on parallel self-initiating discrete-event simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
The use of massively parallel architectures to execute discrete-event simulations of what are termed self-initiating models is considered. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance of various synchronization protocols is considered by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.
Bounds on the Coupling of the Majoron to Light Neutrinos from Supernova Cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farzan, Yasaman
2002-12-02
We explore the role of Majoron (J) emission in the supernova cooling process as a source of upper bounds on the neutrino-Majoron coupling. We show that the strongest upper bound on the coupling to ν_3 comes from the ν_e ν_e → J process in the core of a supernova. We also find bounds on diagonal couplings of the Majoron to ν_μ(τ) ν_μ(τ) and on off-diagonal ν_e ν_μ(τ) couplings in various regions of the parameter space. We discuss the evaluation of cross sections for four-particle interactions (νν → JJ and νJ → νJ). We show that these are typically dominated by three-particle sub-processes and do not give new independent constraints.
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
Upper bounds on the high speed satellite collision probability, P_c, have been investigated. Previous methods assume that an individual position error covariance matrix is available for each object, and the two matrices are combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum P_c. If error covariance information for only one of the two objects was available, either some default shape was used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful upper bound on P_c. There are various avenues along which an upper bound on the high speed satellite collision probability has been pursued. Typically, for the collision plane representation of the high speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of the ellipse), the size (scaling of the standard deviations) or the orientation (rotation of the ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum P_c. Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and that the two are combined into a single, relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum P_c. But what if error covariance information for one of the two objects is not available? In that case the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known, without any corresponding state error covariance information. The usual methods of finding a maximum P_c do no good because the analyst defaults to no knowledge of the combined, relative position error covariance matrix. It is reasonable to think that, even with no covariance information, an analyst might still attempt to determine the error covariance matrix that results in an upper bound on P_c. Without some guidance on limits to the shape, size and orientation of the unknown covariance matrix, the limiting case is a degenerate ellipse lying along the relative miss vector in the collision plane. Unless the miss position is exceptionally large or the at-risk object is exceptionally small, this method results in a maximum P_c too large to be of practical use. For example, assume that the miss distance equals the current ISS alert volume along-track distance of 25 kilometers and that the at-risk area has a 70 meter radius; the maximum (degenerate ellipse) P_c is about 0.00136. At 40 kilometers, the maximum P_c would be 0.00085, which is still almost an order of magnitude larger than the ISS maneuver threshold of 0.0001. In fact, a miss distance of almost 340 kilometers is necessary to reduce the maximum P_c associated with this degenerate ellipse to the ISS maneuver threshold value. Such a result is frequently of no practical value to the analyst.
Some improvement may be made by realizing that, while the position error covariance matrix of one of the objects (usually the debris object) may not be known, the position error covariance matrix of the other object (usually the asset) is almost always available. Making use of the position error covariance information for the one object provides an improvement in finding a maximum P_c which, in some cases, may offer real utility. The equations to be used are presented and their use discussed.
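As a numeric check of the degenerate-ellipse figures quoted above, the following sketch treats the combined error as a one-dimensional Gaussian along the miss vector and maximizes P_c over the unknown standard deviation (a minimal model assumed here, in which the maximum occurs at a standard deviation equal to the miss distance):

```python
import math

def max_degenerate_pc(miss_m, radius_m):
    """Upper-bound P_c for a degenerate (1D) error ellipse along the miss vector.

    Treats the at-risk area as a short interval of half-width `radius_m` at
    distance `miss_m` from the mean of a 1D Gaussian; maximizing over the
    unknown standard deviation sigma gives sigma = miss_m, so
    P_c <= 2 R / (d * sqrt(2 pi e)).
    """
    return 2.0 * radius_m / (miss_m * math.sqrt(2.0 * math.pi * math.e))

print(max_degenerate_pc(25_000.0, 70.0))   # ~0.00136, the 25 km figure above
print(max_degenerate_pc(40_000.0, 70.0))   # ~0.00085, the 40 km figure above
# miss distance needed to pull the bound down to the ISS threshold of 1e-4:
print(2.0 * 70.0 / (1e-4 * math.sqrt(2.0 * math.pi * math.e)))  # ~3.4e5 m
```

The three printed values reproduce the 0.00136, 0.00085 and "almost 340 kilometers" figures quoted in the abstract.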
A linear programming approach to max-sum problem: a review.
Werner, Tomás
2007-07-01
The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
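To make the relaxation concrete, here is a minimal sketch of the LP upper bound on a two-variable, two-label max-sum instance; the potentials and the use of scipy are illustrative assumptions, and this is the plain LP relaxation rather than Schlesinger et al.'s equivalent-transformation scheme. Since this instance is a tree, the bound is tight:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny max-sum instance: two variables with two labels each, one pairwise term.
theta1 = np.array([0.0, 1.0])            # unary potentials of variable 1
theta2 = np.array([0.5, 0.0])            # unary potentials of variable 2
theta12 = np.array([[2.0, 0.0],
                    [0.0, 1.5]])         # pairwise potentials

# LP variables: mu1(0), mu1(1), mu2(0), mu2(1),
#               mu12(0,0), mu12(0,1), mu12(1,0), mu12(1,1)
c = -np.concatenate([theta1, theta2, theta12.ravel()])   # linprog minimizes

A_eq = np.array([
    [1, 1, 0, 0, 0, 0, 0, 0],    # mu1 normalizes to 1
    [0, 0, 1, 1, 0, 0, 0, 0],    # mu2 normalizes to 1
    [-1, 0, 0, 0, 1, 1, 0, 0],   # sum_x2 mu12(0, x2) = mu1(0)
    [0, -1, 0, 0, 0, 0, 1, 1],   # sum_x2 mu12(1, x2) = mu1(1)
    [0, 0, -1, 0, 1, 0, 1, 0],   # sum_x1 mu12(x1, 0) = mu2(0)
    [0, 0, 0, -1, 0, 1, 0, 1],   # sum_x1 mu12(x1, 1) = mu2(1)
], dtype=float)
b_eq = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print("LP upper bound on the max-sum value:", -res.fun)   # 2.5 here
```

Enumerating the four labelings confirms that the exact maximum is also 2.5 (attained at (0, 0) and (1, 1)), as expected for a tree-structured instance.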
NASA Astrophysics Data System (ADS)
Pang, Yi; Rong, Junchen; Su, Ning
2016-12-01
We consider ϕ³ theory in 6 − 2ε dimensions with F₄ global symmetry. The beta function is calculated up to 3 loops, and a stable unitary IR fixed point is observed. The anomalous dimensions of operators quadratic or cubic in ϕ are also computed. We then employ the conformal bootstrap technique to study the fixed point predicted by the perturbative approach. For each putative scaling dimension of ϕ (Δ_ϕ), we obtain the corresponding upper bound on the scaling dimension of the second-lowest scalar primary in the 26 representation (Δ_26^2nd) which appears in the OPE of ϕ × ϕ. In D = 5.95, we observe a sharp peak on the upper-bound curve located at Δ_ϕ equal to the value predicted by the 3-loop computation. In D = 5, we observe a weak kink on the upper-bound curve at (Δ_ϕ, Δ_26^2nd) = (1.6, 4).
Effective elastic moduli of triangular lattice material with defects
NASA Astrophysics Data System (ADS)
Liu, Xiaoyu; Liang, Naigang
2012-10-01
This paper presents an attempt to extend homogenization analysis for the effective elastic moduli of triangular lattice materials with microstructural defects. The proposed homogenization method adopts a process based on homogeneous strain boundary conditions, the micro-scale constitutive law and the micro-to-macro static operator to establish the relationship between the macroscopic properties of a given lattice material and its micro-discrete behaviors and structures. Further, the idea behind Eshelby's equivalent eigenstrain principle is introduced to replace a defect distribution by an imagined displacement field (eigendisplacement) with the equivalent mechanical effect, and the triangular lattice Green's function technique is developed to solve for the eigendisplacement field. The proposed method therefore allows handling of different types of microstructural defects, as well as their arbitrary spatial distribution, within a general and compact framework. Analytical closed-form estimates are derived, in the dilute limit, for all the effective elastic moduli of stretch-dominated triangular lattices containing fractured cell walls and missing cells, respectively. Comparisons with numerical results, the Hashin-Shtrikman upper bounds and uniform-strain upper bounds are also presented to illustrate the predictive capability of the proposed method for lattice materials. Based on this work, we propose that not only the effective Young's and shear moduli but also the effective Poisson's ratio of triangular lattice materials depend on the number density of fractured cell walls and their spatial arrangements.
Control design for robust stability in linear regulators: Application to aerospace flight control
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1986-01-01
Time domain stability robustness analysis and design for linear multivariable uncertain systems with bounded uncertainties is the central theme of the research. After reviewing the recently developed upper bounds on the linear elemental (structured), time-varying perturbation of an asymptotically stable linear time-invariant regulator, it is shown that it is possible to further improve these bounds by employing state transformations. Then, introducing a quantitative measure called the stability robustness index, a state feedback control design algorithm is presented for a general linear regulator problem and then specialized to the case of modal systems as well as matched systems. The extension of the algorithm to stochastic systems with a Kalman filter as the state estimator is presented. Finally, an algorithm for robust dynamic compensator design is presented using a Parameter Optimization (PO) procedure. Applications in aircraft control and flexible structure control are presented along with a comparison with other existing methods.
Global stability and tumor clearance conditions for a cancer chemotherapy system
NASA Astrophysics Data System (ADS)
Valle, Paul A.; Starkov, Konstantin E.; Coria, Luis N.
2016-11-01
In this paper we study the global dynamics of a cancer chemotherapy system presented by de Pillis et al. (2007). This mathematical model describes the interaction between tumor cells, effector-immune cells, circulating lymphocytes and chemotherapy treatment. By applying the localization method of compact invariant sets, we find lower and upper bounds for these three cell populations. Further, we define a bounded domain in R^4_{+,0} where all compact invariant sets of the system are located and provide conditions under which this domain is positively invariant. We apply LaSalle's invariance principle and a result concerning two-dimensional competitive systems in order to derive sufficient conditions for tumor clearance and global asymptotic stability of the tumor-free equilibrium point. These conditions are computed by using bounds of the localization domain and they are given in terms of the chemotherapy treatment. Finally, we perform numerical simulations in order to illustrate our results.
An extended GS method for dense linear systems
NASA Astrophysics Data System (ADS)
Niki, Hiroshi; Kohno, Toshiyuki; Abe, Kuniyoshi
2009-09-01
Davey and Rosindale [K. Davey, I. Rosindale, An iterative solution scheme for systems of boundary element equations, Internat. J. Numer. Methods Engrg. 37 (1994) 1399-1411] derived the GSOR method, which uses an upper triangular matrix Ω in order to solve dense linear systems. By applying functional analysis, the authors presented an expression for the optimum Ω. Moreover, Davey and Bounds [K. Davey, S. Bounds, A generalized SOR method for dense linear systems of boundary element equations, SIAM J. Comput. 19 (1998) 953-967] also introduced further interesting results. In this note, we employ a matrix analysis approach to investigate these schemes, and derive theorems that compare these schemes with existing preconditioners for dense linear systems. We show that the convergence rate of the Gauss-Seidel method with preconditioner P_G is superior to that of the GSOR method. Moreover, we define some splittings associated with the iterative schemes. Some numerical examples are reported to confirm the theoretical analysis. We show that the EGS method with preconditioner produces an extremely small spectral radius in comparison with the other schemes considered.
Strong polygamy of quantum correlations in multi-party quantum systems
NASA Astrophysics Data System (ADS)
Kim, Jeong San
2014-10-01
We propose a new type of polygamy inequality for multi-party quantum entanglement. We first consider the possible amount of bipartite entanglement distributed between a fixed party and any subset of the remaining parties in a multi-party quantum system. Using the summation of these distributed entanglements, we provide an upper bound on the entanglement distributed between one party and the rest in multi-party quantum systems. We then show that this upper bound also serves as a lower bound of the usual polygamy inequality, establishing the strong polygamy of multi-party quantum entanglement. For multi-party pure states, we further show that the strong polygamy of entanglement implies the strong polygamy of quantum discord.
Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems
NASA Astrophysics Data System (ADS)
Tobasco, Ian; Goluskin, David; Doering, Charles R.
2018-02-01
For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
Optimizing Retransmission Threshold in Wireless Sensor Networks
Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang
2016-01-01
The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold; they simply set the same retransmission threshold for all sensor nodes in advance, without taking link quality and delay requirements into account, which decreases the probability of a packet traversing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective in optimizing retransmission thresholds is to maximize the probability of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path is proposed, with time complexity O(nΔ·max_{1≤i≤n} u_i), where u_i is the given upper bound on the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path and Δ is the given upper bound on the transmission delay of the delivery path. If Δ is not polynomially bounded, a linear programming-based (1+p_min)-approximation algorithm is proposed to reduce the time complexity. Furthermore, when the ranges of the upper and lower bounds of retransmission thresholds are big enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092
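A sketch of the dynamic-programming idea follows, under a deliberately simplified objective (sum of per-hop in-time delivery probabilities, with the deadline modeled as a budget of Δ transmission slots); the per-hop success probabilities and all parameter values are illustrative assumptions, not the paper's exact formulation:

```python
# Simplified DP sketch: choose per-hop retransmission thresholds r_i <= u_i
# with sum r_i <= Delta to maximize sum_i [1 - (1 - p_i)**r_i]. This mirrors
# the O(n * Delta * max u_i) flavor of the algorithm described above.

def optimal_thresholds(p, u, delta):
    n = len(p)
    NEG = float("-inf")
    best = [[NEG] * (delta + 1) for _ in range(n + 1)]   # best[i][slots used]
    best[0][0] = 0.0
    choice = [[0] * (delta + 1) for _ in range(n)]
    for i in range(n):
        for used in range(delta + 1):
            if best[i][used] == NEG:
                continue
            for r in range(1, u[i] + 1):
                if used + r > delta:
                    break
                val = best[i][used] + 1.0 - (1.0 - p[i]) ** r
                if val > best[i + 1][used + r]:
                    best[i + 1][used + r] = val
                    choice[i][used + r] = r
    used = max(range(delta + 1), key=lambda j: best[n][j])
    thresholds = []                      # backtrack the chosen thresholds
    for i in reversed(range(n)):
        r = choice[i][used]
        thresholds.append(r)
        used -= r
    return list(reversed(thresholds))

# Three-hop path: the lossiest hop (p = 0.5) should receive the most retries.
print(optimal_thresholds(p=[0.9, 0.5, 0.7], u=[4, 4, 4], delta=8))
```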
Mapping Computation with No Memory
NASA Astrophysics Data System (ADS)
Burckel, Serge; Gioan, Emeric; Thomé, Emmanuel
We investigate the computation of mappings from a set S^n to itself with in situ programs, that is, using no variables other than the input and performing modifications of one component at a time. We consider several types of mappings and obtain effective computation and decomposition methods, together with upper bounds on the program length (number of assignments). Our technique is combinatorial and algebraic (graph coloring, partition ordering, modular arithmetic).
Upper bounds on quantum uncertainty products and complexity measures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrero, Angel; Sanchez-Moreno, Pablo; Dehesa, Jesus S.
The position-momentum Shannon and Rényi uncertainty products of general quantum systems are shown to be bounded not only from below (through the known uncertainty relations) but also from above, in terms of the Heisenberg-Kennard product. Moreover, the Cramér-Rao, Fisher-Shannon, and López-Ruiz-Mancini-Calbet shape measures of complexity (whose lower bounds have been found recently) are also bounded from above. The improvement of these bounds for systems subject to spherically symmetric potentials is also given explicitly. Finally, applications to hydrogenic and oscillator-like systems are carried out.
Inclusion-Based Effective Medium Models for the Permeability of a 3D Fractured Rock Mass
NASA Astrophysics Data System (ADS)
Ebigbo, A.; Lang, P. S.; Paluszny, A.; Zimmerman, R. W.
2015-12-01
Following the work of Saevik et al. (Transp. Porous Media, 2013; Geophys. Prosp., 2014), we investigate the ability of classical inclusion-based effective medium theories to predict the macroscopic permeability of a fractured rock mass. The fractures are assumed to be thin, oblate spheroids, are treated as porous media in their own right with permeability k_f, and are embedded in a homogeneous matrix having permeability k_m. At very low fracture densities, the effective permeability is given exactly by a well-known expression that goes back at least as far as Fricke (Phys. Rev., 1924). For non-trivial fracture densities, an effective medium approximation must be employed. We have investigated several such approximations: Maxwell's method, the differential method, and the symmetric and asymmetric versions of the self-consistent approximation. The predictions of the various approximate models are tested against the results of explicit numerical simulations, averaged over numerous statistical realizations for each set of parameters. Each of the various effective medium approximations satisfies the Hashin-Shtrikman (H-S) bounds. Unfortunately, these bounds are much too far apart to provide quantitatively useful estimates of k_eff. For the case of zero matrix permeability, the well-known approximation of Snow, which is based on network considerations rather than a continuum approach, is shown to essentially coincide with the upper H-S bound, thereby proving that the commonly made assumption that Snow's equation is an "upper bound" is indeed correct. This problem is actually characterized by two small parameters: the aspect ratio of the spheroidal fractures, α, and the permeability ratio, κ = k_m/k_f. Two different regimes can be identified, corresponding to α < κ and κ < α, and expressions for each of the effective medium approximations are developed in both regimes. In both regimes, the symmetric version of the self-consistent approximation is the most accurate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tyson, Jon
2009-06-15
Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.
NASA Technical Reports Server (NTRS)
Patera, Anthony T.; Paraschivoiu, Marius
1998-01-01
We present a finite element technique for the efficient generation of lower and upper bounds to outputs which are linear functionals of the solutions to the incompressible Stokes equations in two space dimensions; the finite element discretization is effected by Crouzeix-Raviart elements, the discontinuous pressure approximation of which is central to our approach. The bounds are based upon the construction of an augmented Lagrangian: the objective is a quadratic "energy" reformulation of the desired output; the constraints are the finite element equilibrium equations (including the incompressibility constraint), and the intersubdomain continuity conditions on velocity. Appeal to the dual max-min problem for appropriately chosen candidate Lagrange multipliers then yields inexpensive bounds for the output associated with a fine-mesh discretization; the Lagrange multipliers are generated by exploiting an associated coarse-mesh approximation. In addition to the requisite coarse-mesh calculations, the bound technique requires solution only of local subdomain Stokes problems on the fine-mesh. The method is illustrated for the Stokes equations, in which the outputs of interest are the flowrate past, and the lift force on, a body immersed in a channel.
Levenback, Charles F.; Ali, Shamshad; Coleman, Robert L.; Gold, Michael A.; Fowler, Jeffrey M.; Judson, Patricia L.; Bell, Maria C.; De Geest, Koen; Spirtos, Nick M.; Potkul, Ronald K.; Leitao, Mario M.; Bakkum-Gamez, Jamie N.; Rossi, Emma C.; Lentz, Samuel S.; Burke, James J.; Van Le, Linda; Trimble, Cornelia L.
2012-01-01
Purpose To determine the safety of sentinel lymph node biopsy as a replacement for inguinal femoral lymphadenectomy in selected women with vulvar cancer. Patients and Methods Eligible women had squamous cell carcinoma, at least 1-mm invasion, and tumor size ≥ 2 cm and ≤ 6 cm. The primary tumor was limited to the vulva, and there were no groin lymph nodes that were clinically suggestive of cancer. All women underwent intraoperative lymphatic mapping, sentinel lymph node biopsy, and inguinal femoral lymphadenectomy. Histologic ultrastaging of the sentinel lymph node was prescribed. Results In all, 452 women underwent the planned procedures, and 418 had at least one sentinel lymph node identified. There were 132 node-positive women, including 11 (8.3%) with false-negative nodes. Twenty-three percent of the true-positive patients were detected by immunohistochemical analysis of the sentinel lymph node. The sensitivity was 91.7% (90% lower confidence bound, 86.7%) and the false-negative predictive value (1-negative predictive value) was 3.7% (90% upper confidence bound, 6.1%). In women with tumor less than 4 cm, the false-negative predictive value was 2.0% (90% upper confidence bound, 4.5%). Conclusion Sentinel lymph node biopsy is a reasonable alternative to inguinal femoral lymphadenectomy in selected women with squamous cell carcinoma of the vulva. PMID:22753905
NASA Astrophysics Data System (ADS)
Vukičević, Damir; Đurđević, Jelena
2011-10-01
Bond incident degree index is a descriptor that is calculated as the sum of bond contributions such that each bond contribution depends solely on the degrees of its incident vertices (e.g. the Randić index, Zagreb index, modified Zagreb index, variable Randić index, atom-bond connectivity index, augmented Zagreb index, sum-connectivity index, many Adriatic indices, and many variable Adriatic indices). In this Letter we find tight upper and lower bounds on the bond incident degree index for catacondensed fluoranthenes with a given number of hexagons.
The local interstellar helium density - Corrected
NASA Technical Reports Server (NTRS)
Freeman, J.; Paresce, F.; Bowyer, S.
1979-01-01
An upper bound for the number density of neutral helium in the local interstellar medium of 0.004 ± 0.0022 cm⁻³ was previously reported, based on extreme-ultraviolet telescope observations at 584 Å made during the 1975 Apollo-Soyuz Test Project. A variety of evidence indicates that the 584 Å sensitivity of the instrument declined by a factor of 2 between the last laboratory calibration and the time of the measurements. The upper bound on the helium density is therefore revised to 0.0089 ± 0.005 cm⁻³.
Upper bound on three-tangles of reduced states of four-qubit pure states
NASA Astrophysics Data System (ADS)
Sharma, S. Shelly; Sharma, N. K.
2017-06-01
Closed formulas for upper bounds on three-tangles of three-qubit reduced states in terms of three-qubit-invariant polynomials of pure four-qubit states are obtained. Our results offer tighter constraints on total three-way entanglement of a given qubit with the rest of the system than those used by Regula et al. [Phys. Rev. Lett. 113, 110501 (2014), 10.1103/PhysRevLett.113.110501 and Phys. Rev. Lett. 116, 049902(E) (2016)], 10.1103/PhysRevLett.116.049902 to verify monogamy of four-qubit quantum entanglement.
Planck limits on non-canonical generalizations of large-field inflation models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Nina K.; Kinney, William H., E-mail: ninastei@buffalo.edu, E-mail: whkinney@buffalo.edu
2017-04-01
In this paper, we consider two case examples of Dirac-Born-Infeld (DBI) generalizations of canonical large-field inflation models, characterized by a reduced sound speed, c_S < 1. The reduced speed of sound lowers the tensor-scalar ratio, improving the fit of the models to the data, but increases the equilateral-mode non-Gaussianity, f^equil_NL, which the latest results from the Planck satellite constrain by a new upper bound. We examine constraints on these models in light of the most recent Planck and BICEP/Keck results, and find that they have a greatly decreased window of viability. The upper bound on f^equil_NL corresponds to a lower bound on the sound speed and a corresponding lower bound on the tensor-scalar ratio of r ∼ 0.01, so that near-future Cosmic Microwave Background observations may be capable of ruling out entire classes of DBI inflation models. The result is, however, not universal: infrared-type DBI inflation models, where the speed of sound increases with time, are not subject to the bound.
Circuit bounds on stochastic transport in the Lorenz equations
NASA Astrophysics Data System (ADS)
Weady, Scott; Agarwal, Sahil; Wilen, Larry; Wettlaufer, J. S.
2018-07-01
In turbulent Rayleigh-Bénard convection one seeks the relationship between the heat transport, captured by the Nusselt number, and the temperature drop across the convecting layer, captured by the Rayleigh number. In experiments, one measures the Nusselt number for a given Rayleigh number, and the question of how close that value is to the maximal transport is a key prediction of variational fluid mechanics in the form of an upper bound. The Lorenz equations have traditionally been studied as a simplified model of turbulent Rayleigh-Bénard convection, and hence it is natural to investigate their upper bounds, which has previously been done numerically and analytically, but they are not as easily accessible in an experimental context. Here we describe a specially built circuit that is the experimental analogue of the Lorenz equations and compare its output to the recently determined upper bounds of the stochastic Lorenz equations [1]. The circuit is substantially more efficient than computational solutions, and hence we can more easily examine the system. Because of offsets that appear naturally in the circuit, we are motivated to study unique bifurcation phenomena that arise as a result. Namely, for a given Rayleigh number, we find a reentrant dependence of the transport on noise amplitude, and this varies with Rayleigh number in passing from the homoclinic to the Hopf bifurcation.
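For readers who want a purely computational analogue, a minimal Euler-Maruyama sketch of the stochastically forced Lorenz equations follows; the noise model, the parameter values and the transport proxy ⟨xy⟩ are assumptions for illustration, not the circuit's exact observables:

```python
import numpy as np

# Euler-Maruyama integration of the Lorenz equations with additive noise,
# accumulating the time average of x*y as a simple transport proxy.
rng = np.random.default_rng(0)
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # classical Lorenz parameters
eps = 0.5                                  # noise amplitude (illustrative)
dt, nsteps = 1e-3, 500_000                 # 500 time units
x, y, z = 1.0, 1.0, 1.0
acc = 0.0
for _ in range(nsteps):
    dW = rng.normal(0.0, np.sqrt(dt), size=3)
    x, y, z = (x + sigma * (y - x) * dt + eps * dW[0],
               y + (x * (rho - z) - y) * dt + eps * dW[1],
               z + (x * y - beta * z) * dt + eps * dW[2])
    acc += x * y
print("time-averaged transport proxy <xy>:", acc / nsteps)
```

Sweeping `eps` at fixed `rho` is the computational counterpart of the noise-amplitude scan described above; the pure-Python loop is slow compared to the circuit, which is the abstract's point.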
Constrained signal reconstruction from wavelet transform coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1991-12-31
A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this generality is that it includes the inverse DWT. 20 refs.
Evaluation of Ares-I Control System Robustness to Uncertain Aerodynamics and Flex Dynamics
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; VanTassel, Chris; Bedrossian, Nazareth; Hall, Charles; Spanos, Pol
2008-01-01
This paper discusses the application of robust control theory to evaluate robustness of the Ares-I control systems. Three techniques for estimating upper and lower bounds of uncertain parameters which yield stable closed-loop response are used here: (1) Monte Carlo analysis, (2) mu analysis, and (3) characteristic frequency response analysis. All three methods are used to evaluate stability envelopes of the Ares-I control systems with uncertain aerodynamics and flex dynamics. The results show that characteristic frequency response analysis is the most effective of these methods for assessing robustness.
Sampling rare fluctuations of discrete-time Markov chains
NASA Astrophysics Data System (ADS)
Whitelam, Stephen
2018-03-01
We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
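A minimal sketch of the standard tilted-matrix route to such rate functions for a two-state chain follows; it computes the scaled cumulant generating function from the spectral radius of the tilted transition matrix and Legendre-transforms it on a grid. This illustrates the large-deviation objects discussed above rather than the paper's sampling scheme:

```python
import numpy as np

# Rate function for the fraction of time a two-state Markov chain spends in
# state 1, via the tilted-matrix construction (standard large-deviation theory).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])      # row-stochastic transition matrix
f = np.array([0.0, 1.0])        # observable: indicator of state 1

def scgf(s):
    """Scaled cumulant generating function: log spectral radius of tilted P."""
    tilted = P * np.exp(s * f)[None, :]   # multiply column j by exp(s * f_j)
    return np.log(np.max(np.abs(np.linalg.eigvals(tilted))))

# Legendre transform I(a) = sup_s [ s*a - scgf(s) ] on a grid of s values.
svals = np.linspace(-20, 20, 4001)
lam = np.array([scgf(s) for s in svals])
for a in (0.1, 1.0 / 3.0, 0.6):
    print(a, np.max(svals * a - lam))
```

As a sanity check, the stationary occupation of state 1 for this chain is 1/3, so the printed rate at a = 1/3 is essentially zero, while atypical occupations cost a strictly positive rate.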
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mezzacappa, Anthony; Endeve, Eirik; Hauck, Cory D.
We extend the positivity-preserving method of Zhang & Shu [49] to simulate the advection of neutral particles in phase space using curvilinear coordinates. The ability to use these coordinates is important for non-equilibrium transport problems in general relativity and also in science and engineering applications with specific geometries. The method achieves high-order accuracy using Discontinuous Galerkin (DG) discretization of phase space and strong stability-preserving Runge-Kutta (SSP-RK) time integration. Special care is taken to ensure that the method preserves strict bounds for the phase space distribution function f, i.e., f ∈ [0, 1]. The combination of suitable CFL conditions and the use of the high-order limiter proposed in [49] is sufficient to ensure positivity of the distribution function. However, to ensure that the distribution function satisfies the upper bound, the discretization must, in addition, preserve the divergence-free property of the phase space flow. Proofs that highlight the necessary conditions are presented for general curvilinear coordinates, and the details of these conditions are worked out for some commonly used coordinate systems (i.e., spherical polar spatial coordinates in spherical symmetry and cylindrical spatial coordinates in axial symmetry, both with spherical momentum coordinates). Results from numerical experiments - including one example in spherical symmetry adopting the Schwarzschild metric - demonstrate that the method achieves high-order accuracy and that the distribution function satisfies the maximum principle.
1981-10-01
Numerical predictions used in the comparisons were obtained from the energy-based, finite-difference computer program CLAPP. Test specimens were clamped along their edges and included longitudinal stiffeners; the analysis covers the stiffener strain energy, the stiffener energy in matrix form, and displacement continuity. It is shown that theoretical bifurcation loads predicted by the energy method represent upper bounds to the classical bifurcation loads associated with the test.
Shadow-Based Vehicle Detection in Urban Traffic
Ibarra-Arenado, Manuel; Tjahjadi, Tardi; Pérez-Oria, Juan; Robla-Gómez, Sandra; Jiménez-Avello, Agustín
2017-01-01
Vehicle detection is a fundamental task in Forward Collision Avoiding Systems (FACS). Generally, vision-based vehicle detection methods consist of two stages: hypothesis generation and hypothesis verification. In this paper, we focus on the former, presenting a feature-based method for on-road vehicle detection in urban traffic. Hypotheses for vehicle candidates are generated according to the shadow under the vehicles by comparing pixel properties across the vertical intensity gradients caused by shadows on the road, followed by intensity thresholding and morphological discrimination. Unlike methods that identify the shadow under a vehicle as a road region with intensity smaller than a coarse lower bound of the road intensity, the thresholding strategy we propose determines a coarse upper bound of the shadow intensity, which reduces false positive rates. The experimental results are promising in terms of detection performance and robustness in daytime under different weather conditions and cluttered scenarios, enabling validation for the first stage of a complete FACS. PMID:28448465
Cosmology and the neutrino mass ordering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hannestad, Steen; Schwetz, Thomas, E-mail: sth@phys.au.dk, E-mail: schwetz@kit.edu
We propose a simple method to quantify a possible exclusion of the inverted neutrino mass ordering from cosmological bounds on the sum of the neutrino masses. The method is based on Bayesian inference and allows for a calculation of the posterior odds of normal versus inverted ordering. We apply the method to a specific set of current data from Planck CMB data and large-scale structure surveys, providing an upper bound on the sum of neutrino masses of 0.14 eV at 95% CL. With this analysis we obtain posterior odds for normal versus inverted ordering of about 2:1. If cosmological data is combined with data from oscillation experiments, the odds reduce to about 3:2. For an exclusion of the inverted ordering from cosmology at more than 95% CL, an accuracy of better than 0.02 eV is needed for the sum. We demonstrate that such a value could be reached with planned observations of large-scale structure by analysing artificial mock data for a EUCLID-like survey.
Saturn's very axisymmetric magnetic field: No detectable secular variation or tilt
NASA Astrophysics Data System (ADS)
Cao, Hao; Russell, Christopher T.; Christensen, Ulrich R.; Dougherty, Michele K.; Burton, Marcia E.
2011-04-01
Saturn is the only planet in the solar system whose observed magnetic field is highly axisymmetric. At least a small deviation from perfect symmetry is required for a dynamo-generated magnetic field. Analyzing more than six years of magnetometer data obtained by Cassini close to the planet, we show that Saturn's observed field is much more axisymmetric than previously thought. We invert the magnetometer observations that were obtained in the "current-free" inner magnetosphere for an internal model, varying the assumed unknown rotation rate of Saturn's deep interior. No unambiguous non-axially symmetric magnetic moment is detected, with a new upper bound on the dipole tilt of 0.06°. An axisymmetric internal model with Schmidt-normalized spherical harmonic coefficients g_1^0 = 21,191 ± 24 nT, g_2^0 = 1586 ± 7 nT, g_3^0 = 2374 ± 47 nT is derived from these measurements; the upper bounds on the axial degree 4 and 5 terms are 720 nT and 3200 nT, respectively. The secular variation over the last 30 years is within the probable error of each term from degree 1 to 3, and the upper bounds are an order of magnitude smaller than those of the similar terrestrial terms for degrees 1 and 2. Differentially rotating conducting stable layers above Saturn's dynamo region have been proposed to symmetrize the magnetic field (Stevenson, 1982). The new upper bound on the dipole tilt implies that this stable layer must have a thickness L ≥ 4000 km, and this thickness is consistent with our weak secular variation observations.
A simple test for spacetime symmetry
NASA Astrophysics Data System (ADS)
Houri, Tsuyoshi; Yasui, Yukinori
2015-03-01
This paper presents a simple method for investigating spacetime symmetry for a given metric. The method makes use of the curvature conditions that are obtained from the Killing equations. We use the solutions of the curvature conditions to compute an upper bound on the number of Killing vector fields, as well as Killing-Yano (KY) tensors and closed conformal KY tensors. We also use them in the integration of the Killing equations. By means of the method, we thoroughly investigate KY symmetry of type D vacuum solutions such as the Kerr metric in four dimensions. The method is also applied to a large variety of physical metrics in four and five dimensions.
Jarzynski equality: connections to thermodynamics and the second law.
Palmieri, Benoit; Ronis, David
2007-01-01
The one-dimensional expanding ideal gas model is used to compute the exact nonequilibrium distribution function. The state of the system during the expansion is defined in terms of local thermodynamics quantities. The final equilibrium free energy, obtained a long time after the expansion, is compared against the free energy that appears in the Jarzynski equality. Within this model, where the Jarzynski equality holds rigorously, the free energy change that appears in the equality does not equal the actual free energy change of the system at any time of the process. More generally, the work bound that is obtained from the Jarzynski equality is an upper bound to the upper bound that is obtained from the first and second laws of thermodynamics. The cancellation of the dissipative (nonequilibrium) terms that result in the Jarzynski equality is shown in the framework of response theory. This is used to show that the intuitive assumption that the Jarzynski work bound becomes equal to the average work done when the system evolves quasistatically is incorrect under some conditions.
NASA Technical Reports Server (NTRS)
Wade, T. O.
1984-01-01
Reduction techniques for traffic matrices are explored in some detail. These matrices arise in satellite-switched time-division multiple access (SS/TDMA) techniques, whereby switching of uplink and downlink beams is required to facilitate interconnectivity of beam zones. A traffic matrix represents the traffic to be transmitted from n uplink beams to n downlink beams within a TDMA frame, typically of 1 ms duration. The frame is divided into segments of time, and during each segment a portion of the traffic is represented by a switching mode. This time slot assignment is characterized by a mode matrix in which there is not more than a single non-zero entry on each line (row or column) of the matrix. Investigation is confined to decomposition of an n x n traffic matrix by mode matrices with a requirement that the decomposition be 100 percent efficient or, equivalently, that the line(s) in the original traffic matrix whose sum is maximal (called critical line(s)) remain maximal as mode matrices are subtracted throughout the decomposition process. A method of decomposition of an n x n traffic matrix by mode matrices results in a number of steps that is bounded by n² − 2n + 2. It is shown that this upper bound is tight for an n x n matrix wherein all the lines are maximal (called a quasi doubly stochastic (QDS) matrix) and for an n x n matrix that is completely arbitrary. That is, the fact that no method can exist with a lower upper bound is shown for both QDS and arbitrary matrices, in an elementary and straightforward manner.
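The following greedy sketch illustrates what a decomposition into mode matrices looks like, using a maximum-weight assignment to pick each mode; it is illustrative only and does not guarantee the n² − 2n + 2 step bound of the method discussed above:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def decompose(traffic):
    """Greedy decomposition of a traffic matrix into weighted mode matrices.

    Each mode is a permutation pattern (at most one nonzero per row/column).
    Each step picks a maximum-weight assignment and subtracts the smallest
    matched positive entry, so at least one entry is zeroed per step.
    """
    T = traffic.astype(float).copy()
    modes = []
    while T.sum() > 1e-12:
        rows, cols = linear_sum_assignment(-T)            # heaviest matching
        w = min(T[r, c] for r, c in zip(rows, cols) if T[r, c] > 0)
        mode = np.zeros_like(T)
        for r, c in zip(rows, cols):
            if T[r, c] > 0:
                mode[r, c] = 1.0
        T -= w * mode
        modes.append((w, mode))
    return modes

demo = np.array([[3, 1, 0],
                 [0, 2, 2],
                 [1, 1, 2]])
for w, m in decompose(demo):
    print("weight", w, "mode pattern:", m.astype(int).tolist())
```

For this demo matrix the greedy finds three modes with weights 2, 1 and 1, and the weighted modes sum back to the original traffic matrix.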
More on the decoder error probability for Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1987-01-01
The decoder error probability for Reed-Solomon codes (more generally, linear maximum distance separable codes) is examined. McEliece and Swanson offered an upper bound on P_E(u), the decoder error probability given that u symbol errors occur. This upper bound is slightly greater than Q, the probability that a completely random error pattern will cause decoder error. By using a combinatoric technique, the principle of inclusion and exclusion, an exact formula for P_E(u) is derived. The P_E(u) for the (255, 223) Reed-Solomon code used by NASA, and for the (31, 15) Reed-Solomon code (JTIDS code), are calculated using the exact formula, and the P_E(u) values are observed to approach the Q of each code rapidly as u gets larger. An upper bound for the expression is derived and shown to decrease nearly exponentially as u increases. This proves analytically that P_E(u) indeed approaches Q as u becomes large, and some laws of large numbers come into play.
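As an illustration, the following sketch evaluates one commonly quoted closed form for Q (stated here as an assumption): Q = q^{-(n-k)} · Σ_{i=0}^{t} C(n, i)(q-1)^i with t = ⌊(n-k)/2⌋, i.e. the probability that a uniformly random word falls inside the decoding sphere of some codeword, for the two codes mentioned above:

```python
from math import comb

def rs_Q(n, k, q):
    # assumed closed form: number of decodable words (codewords times the
    # volume of a Hamming ball of radius t) over the total q**n, which
    # simplifies to volume / q**(n - k)
    t = (n - k) // 2
    volume = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
    return volume / q ** (n - k)

print(rs_Q(255, 223, 256))  # NASA (255, 223) code over GF(256)
print(rs_Q(31, 15, 32))     # JTIDS (31, 15) code over GF(32)
```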
Lorenz curves in a new science-funding model
NASA Astrophysics Data System (ADS)
Huang, Ding-wei
2017-12-01
We propose an agent-based model to theoretically and systematically explore the implications of a new approach to funding science, suggested recently by J. Bollen et al. We introduce various parameters and examine their effects. The concentration of funding is shown by the Lorenz curve and the Gini coefficient. In this model, all scientists are treated equally and follow the well-intended regulations. All scientists give a fixed ratio of their funding to others. This fixed ratio becomes an upper bound for the Gini coefficient. We observe two distinct regimes in the parameter space: valley and plateau. In the valley regime, the fluidity of funding is significant. The Lorenz curve is smooth. The Gini coefficient is well below the upper bound. The funding distribution is the desired result. In the plateau regime, the cumulative advantage is significant. The Lorenz curve has a sharp turn. The Gini coefficient saturates to the upper bound. The undue concentration of funding happens swiftly. The funding distribution is an undesired result, in which a minority of scientists take the majority of funding. Phase transitions between these two regimes are discussed.
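A minimal agent-based sketch of such redistribution dynamics follows; the preferential (rich-get-richer) recipient rule and all parameter values are illustrative assumptions chosen to probe cumulative advantage, not the paper's exact model, and the abstract's Gini upper-bound claim can be probed by varying `ratio`:

```python
import numpy as np

rng = np.random.default_rng(1)
n, ratio, steps = 500, 0.5, 5000     # scientists, fixed give-away ratio, rounds
funds = np.ones(n)

def gini(x):
    """Gini coefficient of a nonnegative array."""
    xs = np.sort(x)
    idx = np.arange(1, len(xs) + 1)
    return 2 * np.sum(idx * xs) / (len(xs) * np.sum(xs)) - (len(xs) + 1) / len(xs)

for _ in range(steps):
    gifts = ratio * funds            # everyone gives away the fixed ratio
    funds = funds - gifts
    # preferential reallocation: each scientist directs their contribution to
    # a recipient chosen with probability proportional to current funding
    w = funds / funds.sum()
    recipients = rng.choice(n, size=n, p=w)
    np.add.at(funds, recipients, gifts)

print("Gini coefficient after", steps, "rounds:", round(gini(funds), 3))
```

Replacing the preferential weights `w` with a uniform choice is the natural toggle between plateau-like and valley-like behavior in this toy version.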
Expected performance of m-solution backtracking
NASA Technical Reports Server (NTRS)
Nicol, D. M.
1986-01-01
This paper derives upper bounds on the expected number of search tree nodes visited during an m-solution backtracking search, a search which terminates after some preselected number m of problem solutions is found. The search behavior is assumed to have a general probabilistic structure. The results are stated in terms of node expansion and contraction. A visited search tree node is said to be expanding if the mean number of its children visited by the search exceeds 1, and contracting otherwise. It is shown that if every node expands, or if every node contracts, then the number of search tree nodes visited by a search has an upper bound which is linear in the depth of the tree, in the mean number of children a node has, and in the number of solutions sought. Also derived are bounds linear in the depth of the tree in some situations where an upper portion of the tree contracts (expands) while the lower portion expands (contracts). While previous analyses of 1-solution backtracking have concluded that the expected performance is always linear in the tree depth, the present model allows superlinear expected performance.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1990-01-01
An expurgated upper bound on the event error probability of trellis-coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
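The two-step recipe (fix a threshold from the Type I error, then find the intensity meeting the Type II error) can be sketched for a Poisson counting experiment with known background; the numbers below are illustrative assumptions:

```python
from scipy.stats import poisson

b = 5.0        # expected background counts (illustrative)
alpha = 0.01   # acceptable Type I error (false positive) probability
beta = 0.5     # acceptable Type II error probability (power = 1 - beta)

# Step 1: detection threshold = smallest count n with P(N >= n | b) <= alpha.
n_thr = 0
while poisson.sf(n_thr - 1, b) > alpha:   # sf(n-1, mu) = P(N >= n)
    n_thr += 1

# Step 2: upper limit = smallest source intensity s whose detection
# probability at that threshold reaches the required power.
s = 0.0
while poisson.sf(n_thr - 1, b + s) < 1 - beta:
    s += 0.01
print("detection threshold (counts):", n_thr)
print("upper limit on source intensity:", round(s, 2))
```

The printed intensity is a property of the detection procedure (threshold, background, required power), exactly as the abstract emphasizes, not an estimate for any particular source.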
Performance analysis for minimally nonlinear irreversible refrigerators at finite cooling power
NASA Astrophysics Data System (ADS)
Long, Rui; Liu, Zhichun; Liu, Wei
2018-04-01
The coefficient of performance (COP) for general refrigerators at finite cooling power has been systematically studied through the minimally nonlinear irreversible model, and its lower and upper bounds in different operating regions have been proposed. Under the tight coupling condition, we have calculated the universal COP bounds under the χ figure of merit in different operating regions. When the refrigerator operates in the region with lower external flux, we obtain the general bounds 0 < ε < (√(9 + 8ε_C) − 3)/2 under the χ figure of merit, where ε_C is the Carnot COP. We have also calculated the universal bounds for the maximum gain in COP under different operating regions, to give further insight into the COP gain as the cooling power moves away from its maximum. When the refrigerator operates in the region located between maximum cooling power and maximum COP with lower external flux, the upper bound for the COP and the lower bound for the relative gain in COP are large compared to a relatively small loss from the maximum cooling power. If cooling power is the main objective, it is desirable to operate the refrigerator at a slightly lower cooling power than the maximum, where a small loss in cooling power brings a much larger COP enhancement.
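For reference, the quoted low-external-flux bound is easy to evaluate for a few Carnot COP values:

```python
import math

def cop_upper_bound(eps_carnot):
    # (sqrt(9 + 8*eps_C) - 3) / 2, the low-external-flux bound quoted above
    return (math.sqrt(9.0 + 8.0 * eps_carnot) - 3.0) / 2.0

for eps_c in (1.0, 5.0, 10.0):
    print(eps_c, round(cop_upper_bound(eps_c), 4))
```

For large ε_C the bound grows like √(2ε_C), so the attainable COP at this figure of merit is far below the Carnot value.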
Abbas, Ash Mohammad
2012-01-01
In this paper, we describe some bounds and inequalities relating the h-index, g-index, e-index, and generalized impact factor. We derive the bounds and inequalities relating these indexing parameters from their basic definitions, without assuming any continuous model to be followed by any of them. We verify the theorems using citation data for five Price Medalists. We observe that the lower bound for the h-index given by Theorem 2, [formula: see text], g ≥ 1, comes out to be more accurate than the Schubert-Glanzel relation h ∝ C^{2/3}P^{−1/3} for a proportionality constant of 1, where C is the number of citations and P is the number of papers referenced. Also, the values of the h-index obtained using Theorem 2 outperform those obtained using the Egghe-Liang-Rousseau power law model for the given citation data of Price Medalists. Further, we compute the values of the upper bound on the g-index given by Theorem 3, g ≤ h + e, where e denotes the value of the e-index. We observe that the upper bound on the g-index given by Theorem 3 is reasonably tight for the given citation records of Price Medalists.
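The Theorem 3 bound g ≤ h + e can be checked directly from a citation record using the standard definitions of the three indices; the sample record below is an illustrative assumption:

```python
def h_index(c):
    """Largest h such that at least h papers have >= h citations."""
    c = sorted(c, reverse=True)
    return max([0] + [i + 1 for i, x in enumerate(c) if x >= i + 1])

def g_index(c):
    """Largest g such that the top g papers have at least g^2 citations."""
    c = sorted(c, reverse=True)
    total, g = 0, 0
    for i, x in enumerate(c):
        total += x
        if total >= (i + 1) ** 2:
            g = i + 1
    return g

def e_index(c):
    """e^2 = excess citations of the h-core beyond h^2."""
    c = sorted(c, reverse=True)
    h = h_index(c)
    return (sum(c[:h]) - h * h) ** 0.5

cites = [50, 40, 22, 18, 15, 9, 7, 5, 3, 1]   # illustrative citation record
h, g, e = h_index(cites), g_index(cites), e_index(cites)
print("h =", h, " g =", g, " e =", round(e, 2),
      " Theorem 3 check (g <= h + e):", g <= h + e)
```

For this record h = 7, g = 10 and e ≈ 10.58, so the bound g ≤ h + e holds with room to spare.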
Reverse preferential spread in complex networks
NASA Astrophysics Data System (ADS)
Toyoizumi, Hiroshi; Tani, Seiichi; Miyoshi, Naoto; Okamoto, Yoshio
2012-08-01
Large-degree nodes may have a larger influence on the network, but they can be bottlenecks for spreading information since spreading attempts tend to concentrate on these nodes and become redundant. We discuss that the reverse preferential spread (distributing information inversely proportional to the degree of the receiving node) has an advantage over other spread mechanisms. In large uncorrelated networks, we show that the mean number of nodes that receive information under the reverse preferential spread is an upper bound among any other weight-based spread mechanisms, and this upper bound is indeed a logistic growth independent of the degree distribution.
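A small simulation sketch comparing recipient-selection rules follows; the Barabási-Albert network, the forwarding protocol and the attempt budget are illustrative assumptions rather than the paper's analytic setting:

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(2)
G = nx.barabasi_albert_graph(2000, 4, seed=2)
deg = np.array([G.degree(v) for v in G.nodes])

def spread(weighting, attempts=4000):
    """Count distinct informed nodes after a fixed number of forwarding
    attempts, where each informed node forwards to a neighbour chosen with
    probability proportional to weighting(degree of neighbour)."""
    informed = {0}
    frontier = [0]
    for _ in range(attempts):
        src = frontier[rng.integers(len(frontier))]
        nbrs = list(G.neighbors(src))
        w = np.array([weighting(deg[v]) for v in nbrs], dtype=float)
        dst = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
        if dst not in informed:
            informed.add(dst)
            frontier.append(dst)
    return len(informed)

print("uniform choice:           ", spread(lambda d: 1.0))
print("preferential (prop. to d):", spread(lambda d: float(d)))
print("reverse preferential:     ", spread(lambda d: 1.0 / d))
```

With the same attempt budget, the reverse-preferential rule wastes fewer attempts on already-informed hubs, which is the redundancy effect the abstract describes.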
A note on the upper bound of the spectral radius for SOR iteration matrix
NASA Astrophysics Data System (ADS)
Chang, D.-W. Da-Wei
2004-05-01
Recently, Wang and Huang (J. Comput. Appl. Math. 135 (2001) 325, Corollary 4.7) established the following estimate on the upper bound of the spectral radius of the successive overrelaxation (SOR) iteration matrix: ρ_SOR ≤ 1 − ω + ωρ_GS, under the condition that the coefficient matrix A is a nonsingular M-matrix and ω ≥ 1, where ρ_SOR and ρ_GS are the spectral radii of the SOR iteration matrix and the Gauss-Seidel iteration matrix, respectively. In this note, we point out that the above estimate is not valid in general.
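The claim can be probed numerically: the sketch below builds the Gauss-Seidel and SOR iteration matrices for a sample M-matrix and prints the spectral radius next to the claimed estimate, so counterexamples can be searched for by varying A and ω (the matrix here is an illustrative assumption):

```python
import numpy as np

def spectral_radius(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

# Sample nonsingular M-matrix, split as A = D - L - U.
A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -2.0,  4.0]])
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

rho_gs = spectral_radius(np.linalg.inv(D - L) @ U)
for omega in (1.0, 1.2, 1.5):
    # SOR iteration matrix: (D - omega*L)^{-1} ((1 - omega) D + omega U)
    T_sor = np.linalg.inv(D - omega * L) @ ((1 - omega) * D + omega * U)
    print(f"omega = {omega}: rho_SOR = {spectral_radius(T_sor):.4f}, "
          f"claimed bound = {1 - omega + omega * rho_gs:.4f}")
```

At ω = 1 SOR reduces to Gauss-Seidel and the two numbers coincide; for ω > 1 the printed pairs show whether the claimed estimate holds for the chosen matrix.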
A Novel Capacity Analysis for Wireless Backhaul Mesh Networks
NASA Astrophysics Data System (ADS)
Chung, Tein-Yaw; Lee, Kuan-Chun; Lee, Hsiao-Chih
This paper derives a closed-form expression for the inter-flow capacity of a backhaul wireless mesh network (WMN) with centralized scheduling by employing a ring-based approach. Through the definition of an interference area, we are able to accurately describe a bottleneck collision area for a WMN and calculate the upper bound of inter-flow capacity. The closed-form expression shows that the upper bound is a function of the ratio between the transmission range and the network radius. Simulations and numerical analysis show that our analytic solution estimates the inter-flow capacity of WMNs better than the previous approach.
Solving the chemical master equation using sliding windows
2010-01-01
Background The chemical master equation (CME) is a system of ordinary differential equations that describes the evolution of a network of chemical reactions as a stochastic process. Its solution yields the probability density vector of the system at each point in time. Solving the CME numerically is in many cases computationally expensive or even infeasible as the number of reachable states can be very large or infinite. We introduce the sliding window method, which computes an approximate solution of the CME by performing a sequence of local analysis steps. In each step, only a manageable subset of states is considered, representing a "window" into the state space. In subsequent steps, the window follows the direction in which the probability mass moves, until the time period of interest has elapsed. We construct the window based on a deterministic approximation of the future behavior of the system by estimating upper and lower bounds on the populations of the chemical species. Results In order to show the effectiveness of our approach, we apply it to several examples previously described in the literature. The experimental results show that the proposed method speeds up the analysis considerably, compared to a global analysis, while still providing high accuracy. Conclusions The sliding window method is a novel approach to address the performance problems of numerical algorithms for the solution of the chemical master equation. The method efficiently approximates the probability distributions at the time points of interest for a variety of chemically reacting systems, including systems for which no upper bound on the population sizes of the chemical species is known a priori. PMID:20377904
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Vacuum stability in the U(1)χ extended model with vanishing scalar potential at the Planck scale
NASA Astrophysics Data System (ADS)
Haba, Naoyuki; Yamaguchi, Yuya
2015-09-01
We investigate the vacuum stability in a scale-invariant local U(1)_χ model with vanishing scalar potential at the Planck scale. We find that it is impossible to realize the Higgs mass of 125 GeV while keeping the Higgs quartic coupling λ_H positive at all energy scales, the same as in the standard model. Once one allows λ_H < 0, lower bounds on the Z' boson mass are obtained through the positive definiteness of the scalar mass-squared eigenvalues, although these bounds are smaller than the LHC bounds. On the other hand, the upper bounds strongly depend on the number of relevant Majorana Yukawa couplings of the right-handed neutrinos, N_ν. Considering decoupling effects of the Z' boson and the right-handed neutrinos, the condition λ_φ > 0 on the singlet scalar quartic coupling gives the upper bound in the N_ν = 1 case, while it does not constrain the N_ν = 2 and 3 cases. In particular, we find that the Z' boson mass is tightly restricted in the N_ν = 1 case: M_Z' ≲ 3.7 TeV.
NASA Astrophysics Data System (ADS)
Lee, Harry; Wen, Baole; Doering, Charles
2017-11-01
The rate of viscous energy dissipation ɛ in incompressible Newtonian planar Couette flow (a horizontal shear layer) with uniform boundary injection and suction is studied numerically. Specifically, fluid is steadily injected through the top plate at a constant rate and a constant angle of injection, and the same amount of fluid is sucked out vertically through the bottom plate at the same rate. This set-up leads to two control parameters, namely the angle of injection, θ, and the Reynolds number of the horizontal shear flow, Re. We numerically implement the 'background field' variational problem formulated by Constantin and Doering with a one-dimensional unidirectional background field ϕ(z), where z is the coordinate normal to the plates. Computation is carried out at various levels of Re with θ = 0°, 0.1°, 1° and 2°, respectively. The computed upper bounds on ɛ scale like Re⁰ for Re > 20,000 at each fixed θ, in agreement with Kolmogorov's hypothesis on isotropic turbulence. The outcome provides new upper bounds on ɛ among all solutions of the underlying Navier-Stokes equations, and they are sharper than the analytical bounds presented in Doering et al. (2000). This research was partially supported by the NSF Award DMS-1515161, and the University of Michigan's Rackham Graduate Student Research Grant.
$\mathcal{N} = 4$ superconformal bootstrap of the K3 CFT
Lin, Ying-Hsuan; Shao, Shu-Heng; Simmons-Duffin, David; ...
2017-05-23
We study two-dimensional (4,4) superconformal field theories of central charge c = 6, corresponding to nonlinear sigma models on K3 surfaces, using the superconformal bootstrap. This is made possible through a surprising relation between the BPS N = 4 superconformal blocks with c = 6 and bosonic Virasoro conformal blocks with c = 28, and an exact result on the moduli dependence of a certain integrated BPS 4-point function. Nontrivial bounds on the non-BPS spectrum in the K3 CFT are obtained as functions of the CFT moduli, that interpolate between the free orbifold points and singular CFT points. We observe directly from the CFT perspective the signature of a continuous spectrum above a gap at the singular moduli, and find numerically an upper bound on this gap that is saturated by the A_1 N = 4 cigar CFT. We also derive an analytic upper bound on the first nonzero eigenvalue of the scalar Laplacian on K3 in the large volume regime, that depends on the K3 moduli data. As two byproducts, we find an exact equivalence between a class of BPS N = 2 superconformal blocks and Virasoro conformal blocks in two dimensions, and an upper bound on the four-point functions of operators of sufficiently low scaling dimension in three and four dimensional CFTs.
Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection
NASA Astrophysics Data System (ADS)
Denuit, Michel; Dhaene, Jan
2007-06-01
In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.
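The key computational convenience of comonotonic bounds is that the quantile of a comonotonic sum is the sum of the marginal quantiles. A minimal sketch with made-up lognormal marginals (not the Lee-Carter specification) compares that bound with a simulated independent sum:

    import numpy as np
    from scipy import stats

    mu = np.array([0.0, 0.1, 0.2])          # hypothetical marginal parameters
    sigma = np.array([0.30, 0.25, 0.20])
    p = 0.995                                # probability level of interest

    # Comonotonic upper bound: F_{S^c}^{-1}(p) = sum_i F_{X_i}^{-1}(p)
    q_comonotonic = sum(stats.lognorm.ppf(p, s=s, scale=np.exp(m))
                        for m, s in zip(mu, sigma))

    # Monte Carlo quantile of the independent sum, for contrast
    rng = np.random.default_rng(0)
    s_indep = sum(rng.lognormal(m, s, 10 ** 6) for m, s in zip(mu, sigma))
    print(q_comonotonic, np.quantile(s_indep, p))   # the comonotonic quantile dominates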
On the global dynamics of a chronic myelogenous leukemia model
NASA Astrophysics Data System (ADS)
Krishchenko, Alexander P.; Starkov, Konstantin E.
2016-04-01
In this paper we analyze some features of the global dynamics of a three-dimensional chronic myelogenous leukemia (CML) model with the help of stability analysis and the localization method of compact invariant sets. The behavior of the CML model is defined by the concentrations of three cell populations circulating in the blood: naive T cells, effector T cells specific to CML, and CML cancer cells. We prove that the tumor-free equilibrium point of the CML system is unstable. Further, we compute ultimate upper bounds for all three cell populations and provide existence conditions for a positively invariant polytope. One ultimate lower bound is obtained as well. Moreover, we describe an iterative localization procedure for refining localization bounds; this procedure is based on the cyclic use of localizing functions. Applying this procedure, we obtain conditions under which the internal tumor equilibrium point is globally asymptotically stable. Our theoretical analyses are supported by numerical simulation results.
Chaotification of complex networks with impulsive control.
Guan, Zhi-Hong; Liu, Feng; Li, Juan; Wang, Yan-Wu
2012-06-01
This paper investigates the chaotification problem of complex dynamical networks (CDN) with impulsive control. Both the discrete and continuous cases are studied. A method is presented to drive all states of every node in the CDN to chaos. The proposed impulsive control strategy is effective for both originally stable and originally unstable CDNs. The upper bound of the impulse intervals for originally stable networks is derived. Finally, the effectiveness of the theoretical results is verified by numerical examples.
NASA Astrophysics Data System (ADS)
Liu, Bingchen; Dong, Mengzhen; Li, Fengjie
2018-04-01
This paper deals with a reaction-diffusion problem with coupled nonlinear inner sources and nonlocal boundary flux. First, we obtain critical exponents for nonsimultaneous blow-up under some conditions on the initial data. Second, we combine the scaling technique with Green's identity method to determine four kinds of simultaneous blow-up rates. Third, lower and upper bounds on the blow-up time are derived using Sobolev-type differential inequalities.
New matrix bounds and iterative algorithms for the discrete coupled algebraic Riccati equation
NASA Astrophysics Data System (ADS)
Liu, Jianzhou; Wang, Li; Zhang, Juan
2017-11-01
The discrete coupled algebraic Riccati equation (DCARE) has wide applications in control theory and linear systems. In general, existing treatments of the DCARE handle each term of the coupled sum separately. In this paper we consider the coupled term as a whole, in contrast to recent results. When eigenvalue inequalities are applied to the coupled term, our method introduces less error. Using the properties of special matrices and eigenvalue inequalities, we propose several upper and lower matrix bounds for the solution of the DCARE. Further, we discuss iterative algorithms for the solution of the DCARE; in the fixed-point iterative algorithms, the admissible range of the Lipschitz factor is wider than in recent results. Finally, we offer corresponding numerical examples to illustrate the effectiveness of the derived results.
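For intuition only, the sketch below runs a plain fixed-point iteration on a single, uncoupled discrete algebraic Riccati equation; the coupled equation treated in the paper adds coupling terms that are not reproduced here, and the matrices are made up:

    import numpy as np

    def dare_fixed_point(A, B, Q, R, iters=500, tol=1e-12):
        """Iterate X <- A'XA - A'XB (R + B'XB)^{-1} B'XA + Q."""
        X = Q.copy()
        for _ in range(iters):
            G = B.T @ X @ B + R
            Xn = A.T @ X @ A - A.T @ X @ B @ np.linalg.solve(G, B.T @ X @ A) + Q
            if np.linalg.norm(Xn - X) < tol:
                return Xn
            X = Xn
        return X

    A = np.array([[0.9, 0.1], [0.0, 0.8]])   # toy stable system
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2); R = np.array([[1.0]])
    print(np.round(dare_fixed_point(A, B, Q, R), 4))   # stabilizing DARE solution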
Implicit Block ACK Scheme for IEEE 802.11 WLANs
Sthapit, Pranesh; Pyun, Jae-Young
2016-01-01
The throughput of the IEEE 802.11 standard is significantly limited by the associated Medium Access Control (MAC) overhead. Because of this overhead, an upper limit on throughput exists even when data rates become extremely high. Therefore, an overhead reduction is necessary to achieve higher throughput. The IEEE 802.11e amendment introduced the block ACK mechanism to reduce the number of control messages in the MAC. Although the block ACK scheme greatly reduces overhead, further improvements are possible. In this letter, we propose an implicit block ACK method that further reduces the overhead associated with IEEE 802.11e’s block ACK scheme. Mathematical analysis results are presented for both the original protocol and the proposed scheme. A performance improvement of greater than 10% was achieved with the proposed implementation.
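The throughput ceiling is easy to see in a toy timing model: with a fixed per-exchange MAC overhead, throughput saturates at payload/overhead as the PHY rate grows without bound (the constants below are illustrative placeholders, not 802.11 timing values):

    payload_bits = 12000          # one 1500-byte frame
    t_overhead_s = 100e-6         # bundled DIFS/SIFS/backoff/preamble/ACK time (placeholder)

    def throughput_bps(phy_rate_bps):
        t_data = payload_bits / phy_rate_bps
        return payload_bits / (t_data + t_overhead_s)

    for rate in [54e6, 300e6, 1200e6, float("inf")]:
        print(f"PHY rate {rate:.0f} b/s -> MAC throughput {throughput_bps(rate) / 1e6:.1f} Mb/s")
    # throughput approaches payload_bits / t_overhead_s = 120 Mb/s as the rate diverges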
Einstein-Podolsky-Rosen steering: Its geometric quantification and witness
NASA Astrophysics Data System (ADS)
Ku, Huan-Yu; Chen, Shin-Liang; Budroni, Costantino; Miranowicz, Adam; Chen, Yueh-Nan; Nori, Franco
2018-02-01
We propose a measure of quantum steerability, namely, a convex steering monotone, based on the trace distance between a given assemblage and its corresponding closest assemblage admitting a local-hidden-state (LHS) model. We provide methods to estimate such a quantity, via lower and upper bounds, based on semidefinite programming. One of these upper bounds has a clear geometrical interpretation as a linear function of rescaled Euclidean distances in the Bloch sphere between the normalized quantum states of (i) a given assemblage and (ii) an LHS assemblage. For a qubit-qubit quantum state, these ideas also allow us to visualize various steerability properties of the state in the Bloch sphere via the so-called LHS surface. In particular, some steerability properties can be obtained by comparing such an LHS surface with a corresponding quantum steering ellipsoid. Thus, we propose a witness of steerability corresponding to the difference of the volumes enclosed by these two surfaces. This witness (which reveals the steerability of a quantum state) enables one to find an optimal measurement basis, which can then be used to determine the proposed steering monotone (which describes the steerability of an assemblage) optimized over all mutually unbiased bases.
Effects of triplet Higgs bosons in long baseline neutrino experiments
NASA Astrophysics Data System (ADS)
Huitu, K.; Kärkkäinen, T. J.; Maalampi, J.; Vihonen, S.
2018-05-01
The triplet scalars (Δ = Δ^{++}, Δ^{+}, Δ^{0}) utilized in the so-called type-II seesaw model to explain the lightness of neutrinos would generate nonstandard interactions (NSI) for a neutrino propagating in matter. We investigate the prospects to probe these interactions in long-baseline neutrino oscillation experiments. We analyze the upper bounds that the proposed DUNE experiment might set on the nonstandard parameters and numerically derive upper bounds, as a function of the lightest neutrino mass, on the ratio of the mass M_Δ of the triplet scalars to the strength |λ_φ| of the coupling φφΔ between the triplet Δ and the conventional Higgs doublet φ. We also discuss the possible misinterpretation of these effects as effects arising from a nonunitarity of the neutrino mixing matrix, and compare the results with the bounds that arise from charged-lepton flavor-violating processes.
Decay of superconducting correlations for gauged electrons in dimensions D ≤ 4
NASA Astrophysics Data System (ADS)
Tada, Yasuhiro; Koma, Tohru
2018-03-01
We study lattice superconductors coupled to gauge fields, such as an attractive Hubbard model in electromagnetic fields, with a standard gauge fixing. We prove upper bounds for a two-point Cooper pair correlation at finite temperatures in spatial dimensions D ≤ 4. The upper bounds decay exponentially in three dimensions and by a power law in four dimensions. These imply the absence of superconducting long-range order for the Cooper pair amplitude as a consequence of fluctuations of the gauge fields. Since our results hold for the gauge-fixing Hamiltonian, they cannot be obtained as a corollary of Elitzur's theorem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurvits, Leonid
2009-01-01
An upper bound on the ergodic capacity of MIMO channels was introduced recently in [1]. This upper bound amounts to the maximization on the simplex of some multilinear polynomial p(λ_1, ..., λ_n) with non-negative coefficients. In general, such maximization problems are NP-hard. But if, say, the functional log(p) is concave on the simplex and can be efficiently evaluated, then the maximization can also be done efficiently. Such log-concavity was conjectured in [1]. We give in this paper a self-contained proof of the conjecture, based on the theory of H-stable polynomials.
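As a small illustration of the computational point (efficient maximization once log p is concave), the sketch below maximizes a made-up multilinear polynomial with non-negative coefficients over the probability simplex using scipy; the polynomial is hypothetical, not the capacity bound of [1]:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical multilinear polynomial with non-negative coefficients
    def p(lam):
        l1, l2, l3 = lam
        return 2.0 * l1 * l2 + 1.5 * l2 * l3 + 0.5 * l1 * l3 + l1

    res = minimize(lambda lam: -np.log(p(lam) + 1e-300),   # maximize log p
                   x0=np.ones(3) / 3,
                   bounds=[(0.0, 1.0)] * 3,
                   constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}],
                   method="SLSQP")
    print(res.x, p(res.x))   # maximizer on the simplex and the maximum value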
Investigation of matter-antimatter interaction for possible propulsion applications
NASA Technical Reports Server (NTRS)
Morgan, D. L., Jr.
1974-01-01
Matter-antimatter annihilation is discussed as a means of rocket propulsion. The feasibility of different means of antimatter storage is shown to depend on how annihilation rates are affected by various circumstances. The annihilation processes are described, with emphasis on important features of atom-antiatom interatomic potential energies. A model is developed that allows approximate calculation of upper and lower bounds to the interatomic potential energy for any atom-antiatom pair. Formulae for the upper and lower bounds for atom-antiatom annihilation cross-sections are obtained and applied to the annihilation rates for each means of antimatter storage under consideration. Recommendations for further studies are presented.
Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.
Werner, Tomás
2015-07-01
Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message-passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point where the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. During this process, an upper bound on the partition function decreases monotonically. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
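A minimal sketch of the basic iteration in the sum-product semiring, for two made-up nonnegative factors sharing one variable: rescaling by the square-root ratio of the overlapping marginals leaves the product f·g unchanged and makes the two marginals coincide (with many factors, this update is applied cyclically over pairs):

    import numpy as np

    rng = np.random.default_rng(1)
    F = rng.random((4, 3))        # factor f(x, y); shared variable y has 3 states
    G = rng.random((3, 5))        # factor g(y, z)

    mf = F.sum(axis=0)            # marginal of f over y
    mg = G.sum(axis=1)            # marginal of g over y
    r = np.sqrt(mg / mf)          # rescaling that equalizes the marginals
    F = F * r                     # f(x, y) <- f(x, y) * r(y)
    G = G / r[:, None]            # g(y, z) <- g(y, z) / r(y); product f*g unchanged

    print(np.allclose(F.sum(axis=0), G.sum(axis=1)))   # True: marginals now coincide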
On the Role of Entailment Patterns and Scalar Implicatures in the Processing of Numerals
ERIC Educational Resources Information Center
Panizza, Daniele; Chierchia, Gennaro; Clifton, Charles, Jr.
2009-01-01
There has been much debate, in both the linguistics and the psycholinguistics literature, concerning numbers and the interpretation of number denoting determiners ("numerals"). Such debate concerns, in particular, the nature and distribution of upper-bounded ("exact") interpretations vs. lower-bounded ("at-least") construals. In the present paper…
Sublinear Upper Bounds for Stochastic Programs with Recourse. Revision.
1987-06-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zachos, C. K.; High Energy Physics
Following ref. [1], a classical upper bound for quantum entropy is identified and illustrated, 0 ≤ S_q ≤ ln(eσ²/2ħ), involving the variance σ² in phase space of the classical limit distribution of a given system. A fortiori, this further bounds the corresponding information-theoretical generalizations of the quantum entropy proposed by Rényi.
Representing and Acquiring Geographic Knowledge.
1984-01-01
Queries such as "What is the diameter of the pond?" can be answered, but the answer will, in general, be a range [lower-bound, upper-bound].
Diamond, Sarah E
2017-02-01
How will organisms respond to climate change? The rapid changes in global climate are expected to impose strong directional selection on fitness-related traits. A major open question then is the potential for adaptive evolutionary change under these shifting climates. At the most basic level, evolutionary change requires the presence of heritable variation and natural selection. Because organismal tolerances of high temperature place an upper bound on responding to temperature change, there has been a surge of research effort on the evolutionary potential of upper thermal tolerance traits. Here, I review the available evidence on heritable variation in upper thermal tolerance traits, adopting a biogeographic perspective to understand how heritability of tolerance varies across space. Specifically, I use meta-analytical models to explore the relationship between upper thermal tolerance heritability and environmental variability in temperature. I also explore how variation in the methods used to obtain these thermal tolerance heritabilities influences the estimation of heritable variation in tolerance. I conclude by discussing the implications of a positive relationship between thermal tolerance heritability and environmental variability in temperature and how this might influence responses to future changes in climate. © 2016 New York Academy of Sciences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller-Hermes, Alexander, E-mail: muellerh@ma.tum.de; Wolf, Michael M., E-mail: m.wolf@tum.de; Reeb, David, E-mail: reeb.qit@gmail.com
We investigate linear maps between matrix algebras that remain positive under tensor powers, i.e., under tensoring with n copies of themselves. Completely positive and completely co-positive maps are trivial examples of this kind. We show that for every n ∈ ℕ, there exist non-trivial maps with this property, and that for two-dimensional Hilbert spaces there is no non-trivial map for which this holds for all n. For higher dimensions, we reduce the existence question of such non-trivial "tensor-stable positive maps" to a one-parameter family of maps and show that an affirmative answer would imply the existence of non-positive-partial-transpose bound entanglement. As an application, we show that any tensor-stable positive map that is not completely positive yields an upper bound on the quantum channel capacity, which for the transposition map gives the well-known cb-norm bound. We furthermore show that the latter is an upper bound even for the local operations and classical communication (LOCC)-assisted quantum capacity, and that moreover it is a strong converse rate for this task.
Measures and limits of models of fixation selection.
Wilming, Niklas; Betz, Torsten; Kietzmann, Tim C; König, Peter
2011-01-01
Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research. First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure for probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound for these measures, based on image-independent properties of fixation data and on between-subject consistency, respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source Python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions that surpass that upper bound. Taken together, these findings lay out the required information that allows a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
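A minimal sketch of the AUC evaluation scheme for a saliency-style model; the map, the fixations, and the uniform sampling of control locations are all made up for illustration, not the paper's exact protocol:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    saliency = rng.random((64, 64))                    # model's fixation-probability map
    fix_r, fix_c = rng.integers(0, 64, 50), rng.integers(0, 64, 50)   # observed fixations

    pos = saliency[fix_r, fix_c]                       # saliency at fixated pixels
    neg_r, neg_c = rng.integers(0, 64, 500), rng.integers(0, 64, 500)
    neg = saliency[neg_r, neg_c]                       # saliency at random control pixels

    y_true = np.concatenate([np.ones_like(pos), np.zeros_like(neg)])
    y_score = np.concatenate([pos, neg])
    print(roc_auc_score(y_true, y_score))              # ~0.5 for this random "model"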
Schwartz, Marc D; Valdimarsdottir, Heiddis B; Peshkin, Beth N; Mandelblatt, Jeanne; Nusbaum, Rachel; Huang, An-Tsun; Chang, Yaojen; Graves, Kristi; Isaacs, Claudine; Wood, Marie; McKinnon, Wendy; Garber, Judy; McCormick, Shelley; Kinney, Anita Y; Luta, George; Kelleher, Sarah; Leventhal, Kara-Grace; Vegella, Patti; Tong, Angie; King, Lesley
2014-03-01
Although guidelines recommend in-person counseling before BRCA1/BRCA2 gene testing, genetic counseling is increasingly offered by telephone. As genomic testing becomes more common, evaluating alternative delivery approaches becomes increasingly salient. We tested whether telephone delivery of BRCA1/2 genetic counseling was noninferior to in-person delivery. Participants (women age 21 to 85 years who did not have newly diagnosed or metastatic cancer and lived within a study site catchment area) were randomly assigned to usual care (UC; n = 334) or telephone counseling (TC; n = 335). UC participants received in-person pre- and post-test counseling; TC participants completed all counseling by telephone. Primary outcomes were knowledge, satisfaction, decision conflict, distress, and quality of life; secondary outcomes were equivalence of BRCA1/2 test uptake and costs of delivering TC versus UC. TC was noninferior to UC on all primary outcomes. At 2 weeks after pretest counseling, knowledge (d = 0.03; lower bound of 97.5% CI, -0.61), perceived stress (d = -0.12; upper bound of 97.5% CI, 0.21), and satisfaction (d = -0.16; lower bound of 97.5% CI, -0.70) had group differences and confidence intervals that did not cross their 1-point noninferiority limits. Decision conflict (d = 1.1; upper bound of 97.5% CI, 3.3) and cancer distress (d = -1.6; upper bound of 97.5% CI, 0.27) did not cross their 4-point noninferiority limit. Results were comparable at 3 months. TC was not equivalent to UC on BRCA1/2 test uptake (UC, 90.1%; TC, 84.2%). TC yielded cost savings of $114 per patient. Genetic counseling can be effectively and efficiently delivered via telephone to increase access and decrease costs.
Sign rank versus Vapnik-Chervonenkis dimension
NASA Astrophysics Data System (ADS)
Alon, N.; Moran, Sh; Yehudayoff, A.
2017-12-01
This work studies the maximum possible sign rank of N × N sign matrices with a given Vapnik-Chervonenkis dimension d. For d = 1, this maximum is three. For d = 2, this maximum is \widetilde{\Theta}(N^{1/2}). For d > 2, similar but slightly less accurate statements hold. The lower bounds improve on previous ones by Ben-David et al., and the upper bounds are novel. The lower bounds are obtained by probabilistic constructions, using a theorem of Warren in real algebraic topology. The upper bounds are obtained using a result of Welzl about spanning trees with low stabbing number, and using the moment curve. The upper bound technique is also used to: (i) provide estimates on the number of classes of a given Vapnik-Chervonenkis dimension, and the number of maximum classes of a given Vapnik-Chervonenkis dimension, answering a question of Frankl from 1989, and (ii) design an efficient algorithm that provides an O(N/log(N)) multiplicative approximation for the sign rank. We also observe a general connection between sign rank and spectral gaps which is based on Forster's argument. Consider the adjacency (N × N)-matrix of a Δ-regular graph with a second eigenvalue of absolute value λ and Δ ≤ N/2. We show that the sign rank of the signed version of this matrix is at least Δ/λ. We use this connection to prove the existence of a maximum class C ⊆ {±1}^N with Vapnik-Chervonenkis dimension 2 and sign rank \widetilde{\Theta}(N^{1/2}). This answers a question of Ben-David et al. regarding the sign rank of large Vapnik-Chervonenkis classes. We also describe limitations of this approach, in the spirit of the Alon-Boppana theorem. We further describe connections to communication complexity, geometry, learning theory, and combinatorics. Bibliography: 69 titles.
The impact of missing trauma data on predicting massive transfusion
Trickey, Amber W.; Fox, Erin E.; del Junco, Deborah J.; Ning, Jing; Holcomb, John B.; Brasel, Karen J.; Cohen, Mitchell J.; Schreiber, Martin A.; Bulger, Eileen M.; Phelan, Herb A.; Alarcon, Louis H.; Myers, John G.; Muskat, Peter; Cotton, Bryan A.; Wade, Charles E.; Rahbar, Mohammad H.
2013-01-01
INTRODUCTION Missing data are inherent in clinical research and may be especially problematic for trauma studies. This study describes a sensitivity analysis to evaluate the impact of missing data on clinical risk prediction algorithms. Three blood transfusion prediction models were evaluated using an observational trauma dataset with valid missing data. METHODS The PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study included patients requiring ≥ 1 unit of red blood cells (RBC) at 10 participating U.S. Level I trauma centers from July 2009 to October 2010. Physiologic, laboratory, and treatment data were collected prospectively up to 24 h after hospital admission. Subjects who received ≥ 10 RBC units within 24 h of admission were classified as massive transfusion (MT) patients. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation. A sensitivity analysis for missing data was conducted to determine the upper and lower bounds for correct classification percentages. RESULTS PROMMTT enrolled 1,245 subjects. MT was received by 297 patients (24%). Missing percentages ranged from 2.2% (heart rate) to 45% (respiratory rate). Proportions of complete cases utilized in the MT prediction models ranged from 41% to 88%. All models demonstrated similar correct classification percentages using complete case analysis and multiple imputation. In the sensitivity analysis, the upper-lower bound ranges for correct classification were 4%, 10%, and 12% per model. Predictive accuracy for all models using PROMMTT data was lower than reported in the original datasets. CONCLUSIONS Evaluating the accuracy of clinical prediction models in the presence of missing data can be misleading, especially with many predictor variables and moderate levels of missingness per variable. The proposed sensitivity analysis describes the influence of missing data on risk prediction algorithms. Reporting upper and lower bounds for percent correct classification may be more informative than multiple imputation, which provided results similar to complete case analysis in this study. PMID:23778514
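A toy version of the described sensitivity analysis (hypothetical data and a trivial single-variable prediction rule, not the PROMMTT models): recompute the classification after filling the missing predictor with its most and least favorable values to bracket the correct classification percentage:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    sbp = rng.normal(110, 25, n)                 # systolic BP, a toy predictor
    y = (sbp < 90).astype(int)                   # toy massive-transfusion label
    sbp_obs = sbp.copy()
    sbp_obs[rng.random(n) < 0.3] = np.nan        # 30% missingness

    def accuracy(filled):
        pred = (filled < 90).astype(int)         # toy prediction rule
        return (pred == y).mean()

    m = np.isnan(sbp_obs)
    best = sbp_obs.copy();  best[m] = np.where(y[m] == 1, 80.0, 120.0)   # most favorable fill
    worst = sbp_obs.copy(); worst[m] = np.where(y[m] == 1, 120.0, 80.0)  # least favorable fill
    print(f"correct classification bounds: [{accuracy(worst):.3f}, {accuracy(best):.3f}]")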
Song, Yoon S; Koontz, John L; Juskelis, Rima O; Zhao, Yang
2013-01-01
The migration of low molecular weight organic compounds through polyethylene terephthalate (PET) films was determined by using a custom permeation cell assembly. Fatty food simulant (Miglyol 812) was added to the receptor chamber, while the donor chamber was filled with 1% and 10% (v/v) migrant compounds spiked into simulant. The permeation cell was maintained at 40°C, 66°C, 100°C or 121°C for up to 25 days of polymer film exposure time. Migrants in Miglyol were directly quantified, without a liquid-liquid extraction step, by headspace GC-MS analysis. Experimental diffusion coefficients (D_P) of toluene, benzyl alcohol, ethyl butyrate and methyl salicylate through PET film were determined. Results from Limm's diffusion model showed that the predicted D_P values for PET were all greater than the experimental values. D_P values predicted by Piringer's diffusion model were also greater than those determined experimentally at 66°C, 100°C and 121°C. However, Piringer's model led to the underestimation of benzyl alcohol (A_P′ = 3.7) and methyl salicylate (A_P′ = 4.0) diffusion at 40°C with its revised "upper-bound" A_P′ value of 3.1 at temperatures below the glass transition temperature (T_g) of PET (<70°C). This implies that the input parameters of Piringer's model may need to be revised to ensure a margin of safety for consumers. On the other hand, at temperatures greater than the T_g, both models appear too conservative and unrealistic. The highest estimated A_P′ value from Piringer's model was 2.6 for methyl salicylate, which was much lower than the "upper-bound" A_P′ value of 6.4 for PET. Therefore, it may be necessary to further refine the "upper-bound" A_P′ values for PET so that Piringer's model does not significantly underestimate or overestimate the migration of organic compounds, depending on the temperature condition of the food contact material.
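For reference, a sketch of the Piringer-type worst-case estimate of D_P as commonly stated in the food-contact migration literature; the functional form, the constants 0.1351, 0.003 and 10454 K, and the PET parameters A_P′ = 3.1 and τ = 1577 K are quoted from that literature and should be checked against the paper's exact parametrization:

    import math

    def piringer_dp(ap_prime, tau, m_r, temp_k):
        """Worst-case polymer diffusion coefficient, cm^2/s.
        ap_prime: polymer-specific 'upper-bound' parameter A_P'
        tau: polymer-specific activation-energy correction, K
        m_r: migrant molecular weight, g/mol; temp_k: temperature, K."""
        a_p = ap_prime - tau / temp_k
        return 1e4 * math.exp(a_p - 0.1351 * m_r ** (2 / 3) + 0.003 * m_r
                              - 10454.0 / temp_k)

    # PET below Tg: A_P' = 3.1, tau = 1577 K (literature values)
    print(piringer_dp(3.1, 1577.0, 152.15, 313.15))   # methyl salicylate at 40 C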
Inner core boundary topography explored with reflected and diffracted P waves
NASA Astrophysics Data System (ADS)
deSilva, Susini; Cormier, Vernon F.; Zheng, Yingcai
2018-03-01
The existence of topography of the inner core boundary (ICB) can affect the amplitude, phase, and coda of body waves incident on the inner core. By applying pseudospectral and boundary element methods to synthesize compressional waves interacting with the ICB, these effects are predicted and compared with waveform observations in the pre-critical, critical, post-critical, and diffraction ranges of the PKiKP wave reflected from the ICB. These data sample overlapping regions of the inner core beneath the circum-Pacific belt and the Eurasian, North American, and Australian continents, but exclude large areas beneath the Pacific and Indian Oceans and the poles. In the pre-critical range, PKiKP waveforms require an upper bound of 2 km at 1-20 km wavelength for any ICB topography. Higher topography sharply reduces PKiKP amplitude and produces time-extended coda not observed in PKiKP waveforms. The existence of topography of this scale smooths over minima and zeros in the pre-critical ICB reflection coefficient predicted from standard earth models. In the range surrounding critical incidence (108°-130°), this upper bound of topography does not strongly affect the amplitude and waveform behavior of PKIKP + PKiKP at 1.5 Hz, which is relatively insensitive to 10-20 km wavelength topography height approaching 5 km. These data, however, have a strong overlap in the regions of the ICB sampled by pre-critical PKiKP that require a 2 km upper bound to topography height. In the diffracted range (>152°), topography as high as 5 km attenuates the peak amplitudes of PKIKP and PKPCdiff by similar amounts, leaving the PKPCdiff/PKIKP amplitude ratio unchanged from that predicted by a smooth ICB. The observed decay of PKPCdiff into the inner core shadow and the PKIKP-PKPCdiff differential travel time are consistent with a flattening of the outer core P velocity gradient near the ICB and iron enrichment at the bottom of the outer core.
A formulation of a matrix sparsity approach for the quantum ordered search algorithm
NASA Astrophysics Data System (ADS)
Parmar, Jupinder; Rahman, Saarim; Thiara, Jaskaran
One specific subset of quantum algorithms is Grover's Ordered Search Problem (OSP), the quantum counterpart of the classical binary search algorithm, which utilizes oracle functions to produce a specified value within an ordered database. Classically, the optimal algorithm is known to have log₂ N complexity; however, Grover's algorithm has been found to have an optimal complexity between the lower bound of (ln N − 1)/π ≈ 0.221 log₂ N and the upper bound of 0.433 log₂ N. We sought to lower the known upper bound of the OSP. Following Farhi et al. [MIT-CTP 2815 (1999), arXiv:quant-ph/9901059], we see that the OSP can be resolved into a translationally invariant algorithm to create quantum query algorithm constraints. With these constraints, one can find Laurent polynomials for various k (queries) and N (database sizes), thus finding larger recursive sets that solve the OSP and effectively reducing the upper bound. These polynomials are found to be convex functions, allowing one to make use of convex optimization to find an improvement on the known bounds. According to Childs et al. [Phys. Rev. A 75, 032335 (2007)], semidefinite programming, a subset of convex optimization, can solve the particular problem represented by the constraints. We implemented a program abiding by their formulation of a semidefinite program (SDP), leading us to find that it takes an immense amount of storage and time to compute. To combat this setback, we formulated an approach to improve the results of the SDP using matrix sparsity. Through the development of this approach, along with an implementation of a rudimentary solver, we demonstrate how matrix sparsity reduces the amount of time and storage required to compute the SDP, ensuring that further improvements can likely be made toward the theorized lower bound.
Sample Complexity Bounds for Differentially Private Learning
Chaudhuri, Kamalika; Hsu, Daniel
2013-01-01
This work studies the problem of privacy-preserving classification, namely, learning a classifier from sensitive data while preserving the privacy of individuals in the training set. In particular, the learning algorithm is required in this problem to guarantee differential privacy, a very strong notion of privacy that has gained significant attention in recent years. A natural question to ask is: what is the sample requirement of a learning algorithm that guarantees a certain level of privacy and accuracy? We address this question in the context of learning with infinite hypothesis classes when the data is drawn from a continuous distribution. We first show that even for very simple hypothesis classes, any algorithm that uses a finite number of examples and guarantees differential privacy must fail to return an accurate classifier for at least some unlabeled data distributions. This result is unlike the case with either finite hypothesis classes or discrete data domains, in which distribution-free private learning is possible, as previously shown by Kasiviswanathan et al. (2008). We then consider two approaches to differentially private learning that get around this lower bound. The first approach is to use prior knowledge about the unlabeled data distribution in the form of a reference distribution chosen independently of the sensitive data. Given such a reference distribution, we provide an upper bound on the sample requirement that depends (among other things) on a measure of closeness between the reference and the unlabeled data distribution. Our upper bound applies to the non-realizable as well as the realizable case. The second approach is to relax the privacy requirement, by requiring only label-privacy, namely, that only the labels (and not the unlabeled parts of the examples) be considered sensitive information. An upper bound on the sample requirement of learning with label privacy was shown by Chaudhuri et al. (2006); in this work, we show a lower bound. PMID:25285183
NASA Astrophysics Data System (ADS)
Abdullah, Dahlan; Suwilo, Saib; Tulus; Mawengkang, Herman; Efendi, Syahril
2017-09-01
The higher education system in Indonesia can be considered not only an important source of knowledge development for the country, but also a creator of positive living conditions. It is therefore not surprising that enrollments in higher education continue to expand. The implication of this situation, however, is that the Indonesian government must necessarily provide more funds. In the interest of accountability, it is essential to measure the efficiency of these institutions. Data envelopment analysis (DEA) is a method for evaluating the technical efficiency of production units that have multiple inputs and outputs. The higher learning institution considered in this paper is Malikussaleh University, located in Lhokseumawe, a city in the Aceh province of Indonesia. This paper develops a method to evaluate efficiency for all departments of Malikussaleh University using DEA with bounded outputs. Accordingly, we present some important differences in the efficiency of those departments. Finally, we discuss the efforts these departments should undertake in order to become efficient.
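A minimal sketch of an input-oriented CCR DEA efficiency score in envelopment form, solved as a linear program (the input/output data are made up, and the paper's bounded-output variant adds constraints not shown here):

    import numpy as np
    from scipy.optimize import linprog

    def dea_ccr_efficiency(X, Y, j0):
        """Input-oriented CCR score of DMU j0. X: (m, n) inputs; Y: (s, n) outputs."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]                  # minimize theta
        A1 = np.hstack([-X[:, [j0]], X])             # sum_j lam_j x_ij <= theta * x_i,j0
        A2 = np.hstack([np.zeros((s, 1)), -Y])       # sum_j lam_j y_rj >= y_r,j0
        res = linprog(c, A_ub=np.vstack([A1, A2]),
                      b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                      bounds=[(0, None)] * (n + 1), method="highs")
        return res.fun                               # efficiency score in (0, 1]

    X = np.array([[4.0, 2.0, 5.0], [3.0, 1.0, 4.0]])   # 2 inputs, 3 departments (toy)
    Y = np.array([[60.0, 45.0, 50.0]])                 # 1 output
    print([round(dea_ccr_efficiency(X, Y, j), 3) for j in range(3)])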
Dominating Scale-Free Networks Using Generalized Probabilistic Methods
Molnár, F.; Derzsy, N.; Czabarka, É.; Székely, L.; Szymanski, B. K.; Korniss, G.
2014-01-01
We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds of dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. In addition, we also find the precise limit where selecting high-degree nodes exclusively becomes inefficient for network domination. We validate our results on several real-world networks, and provide highly accurate analytical estimates for our methods. PMID:25200937
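A minimal sketch of a degree-based probabilistic selection with a repair pass; the inclusion probability below is a hypothetical choice for illustration, not the paper's optimized rule:

    import random
    import networkx as nx

    def probabilistic_dominating_set(G, include_prob):
        D = {v for v in G if random.random() < include_prob(G.degree(v))}
        for v in G:                                   # repair pass: cover missed nodes
            if v not in D and not any(u in D for u in G.neighbors(v)):
                nbrs = list(G.neighbors(v))
                D.add(max(nbrs, key=G.degree) if nbrs else v)
        return D

    G = nx.barabasi_albert_graph(10 ** 4, 3, seed=42)   # toy scale-free network
    D = probabilistic_dominating_set(G, lambda d: min(1.0, 2.5 / (d + 1)))
    print(len(D), "nodes dominate", G.number_of_nodes())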
Wiener-Hammerstein system identification - an evolutionary approach
NASA Astrophysics Data System (ADS)
Naitali, Abdessamad; Giri, Fouad
2016-01-01
The problem of identifying parametric Wiener-Hammerstein (WH) systems is addressed within the evolutionary optimisation context. Specifically, a hybrid culture identification method is developed that involves model structure adaptation using genetic recombination and model parameter learning using particle swarm optimisation. The method enjoys three interesting features: (1) the risk of premature convergence of model parameter estimates to local optima is significantly reduced, owing to the constantly maintained diversity of model candidates; (2) no prior knowledge is needed except for upper bounds on the system structure indices; (3) the method is fully autonomous, as no interaction with the user is needed during the optimum search process. The performance of the proposed method is illustrated and compared to alternative methods using a well-established WH benchmark.
Incorporating Alternative Care Site Characteristics Into Estimates of Substitutable ED Visits.
Trueger, Nathan Seth; Chua, Kao-Ping; Hussain, Aamir; Liferidge, Aisha T; Pitts, Stephen R; Pines, Jesse M
2017-07-01
Several recent efforts to improve health care value have focused on reducing emergency department (ED) visits that potentially could be treated in alternative care sites (ie, primary care offices, retail clinics, and urgent care centers). Estimates of the number of these visits may depend on assumptions regarding the operating hours and functional capabilities of alternative care sites. However, methods to account for the variability in these characteristics have not been developed. To develop methods to incorporate the variability in alternative care site characteristics into estimates of ED visit "substitutability." Our approach uses the range of hours and capabilities among alternative care sites to estimate lower and upper bounds of ED visit substitutability. We constructed "basic" and "extended" criteria that captured the plausible degree of variation in each site's hours and capabilities. To illustrate our approach, we analyzed data from 22,697 ED visits by adults in the 2011 National Hospital Ambulatory Medical Care Survey, defining a visit as substitutable if it was treat-and-release and met both the operating hours and functional capabilities criteria. Use of the combined basic hours/basic capabilities criteria and extended hours/extended capabilities generated lower and upper bounds of estimates. Our criteria classified 5.5%-27.1%, 7.6%-20.4%, and 10.6%-46.0% of visits as substitutable in primary care offices, retail clinics, and urgent care centers, respectively. Alternative care sites vary widely in operating hours and functional capabilities. Methods such as ours may help incorporate this variability into estimates of ED visit substitutability.
Spread of entanglement and causality
NASA Astrophysics Data System (ADS)
Casini, Horacio; Liu, Hong; Mezei, Márk
2016-07-01
We investigate causality constraints on the time evolution of entanglement entropy after a global quench in relativistic theories. We first provide a general proof that the so-called tsunami velocity is bounded by the speed of light. We then generalize the free particle streaming model of [1] to general dimensions and to an arbitrary entanglement pattern of the initial state. In more than two spacetime dimensions the spread of entanglement in these models is highly sensitive to the initial entanglement pattern, but we are able to prove an upper bound on the normalized rate of growth of entanglement entropy, and hence the tsunami velocity. The bound is smaller than what one gets for quenches in holographic theories, which highlights the importance of interactions in the spread of entanglement in many-body systems. We propose an interacting model which we believe provides an upper bound on the spread of entanglement for interacting relativistic theories. In two spacetime dimensions with multiple intervals, this model and its variations are able to reproduce intricate results exhibited by holographic theories for a significant part of the parameter space. For higher dimensions, the model bounds the tsunami velocity at the speed of light. Finally, we construct a geometric model for entanglement propagation based on a tensor network construction for global quenches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Myoung-Jae; Jung, Young-Dae, E-mail: ydjung@hanyang.ac.kr; Department of Physics, Applied Physics, and Astronomy, Rensselaer Polytechnic Institute, 110 8th Street, Troy, New York 12180-3590
The dispersion relation for dust ion-acoustic surface waves propagating at the interface of a semi-bounded Lorentzian dusty plasma with supersonic ion flow has been kinetically derived to investigate the nonthermal property and the ion wake field effect. We found that the supersonic ion flow creates an upper and a lower mode. An increase in the nonthermal particle population decreases the wave frequency for the upper mode, whereas it increases the frequency for the lower mode. An increase in the supersonic ion flow velocity is found to enhance the wave frequency for both modes. We also found that an increase in the nonthermal plasma population enhances the group velocity of the upper mode; however, the nonthermal particles suppress the group velocity of the lower mode. The nonthermal effects on the group velocity are reduced in the small and large wavelength limits.
Jackson, Dan; Bowden, Jack
2016-09-07
Confidence intervals for the between-study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random-effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95% confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5%. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95% confidence intervals for the between-study variance. We also show some further results for a real example that illustrate how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95% confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4% split', where the greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.
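A sketch of how such an interval can be computed by inverting the generalised Q statistic at chi-squared quantiles with an unequal tail split (here 1% lower, 4% upper, following the suggested '1-4% split'; the data and the search bracket are illustrative):

    import numpy as np
    from scipy import stats, optimize

    def q_gen(tau2, y, v):
        # Generalised heterogeneity statistic at between-study variance tau2
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2)

    def tau2_ci(y, v, a_lower=0.01, a_upper=0.04, bracket=100.0):
        k = len(y)
        targets = {"lo": stats.chi2.ppf(1 - a_lower, k - 1),   # for the lower limit
                   "hi": stats.chi2.ppf(a_upper, k - 1)}       # for the upper limit
        out = {}
        for key, t in targets.items():
            f = lambda tau2: q_gen(tau2, y, v) - t   # Q is decreasing in tau2
            out[key] = 0.0 if f(0.0) < 0 else optimize.brentq(f, 0.0, bracket)
        return out["lo"], out["hi"]

    y = np.array([0.10, 0.35, -0.05, 0.42, 0.22])    # toy study effect estimates
    v = np.array([0.02, 0.03, 0.015, 0.05, 0.025])   # their within-study variances
    print(tau2_ci(y, v))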
Azunre, P.
2016-09-21
In this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring the solution of auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.
Bounds on quantum confinement effects in metal nanoparticles
NASA Astrophysics Data System (ADS)
Blackman, G. Neal; Genov, Dentcho A.
2018-03-01
Quantum size effects on the permittivity of metal nanoparticles are investigated using the quantum box model. Explicit upper and lower bounds are derived for the permittivity and relaxation rates due to quantum confinement effects. These bounds are verified numerically, and the size dependence and frequency dependence of the empirical Drude size parameter is extracted from the model. Results suggest that the common practice of empirically modifying the dielectric function can lead to inaccurate predictions for highly uniform distributions of finite-sized particles.
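A sketch of the empirical modification the abstract refers to: the standard size-corrected Drude model, in which the bulk relaxation rate is augmented by a surface-scattering term γ = γ_bulk + A·v_F/R (the parameter values below are placeholders for a generic noble metal, not the paper's bounds):

    import numpy as np

    hbar = 6.582e-16             # eV·s
    wp, gamma_bulk = 9.0, 0.07   # plasma frequency and bulk damping, eV (placeholders)
    vf = 1.4e6                   # Fermi velocity, m/s
    A = 1.0                      # empirical Drude size parameter (order unity)

    def drude_eps(omega_ev, radius_m, eps_inf=1.0):
        gamma = gamma_bulk + A * hbar * vf / radius_m   # size-corrected damping, eV
        return eps_inf - wp ** 2 / (omega_ev ** 2 + 1j * omega_ev * gamma)

    for R in [2e-9, 5e-9, 20e-9]:
        print(R, drude_eps(2.0, R))   # the damping (imaginary part) grows as R shrinks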
NASA Technical Reports Server (NTRS)
Sloss, J. M.; Kranzler, S. K.
1972-01-01
The equivalence of the considered integral equation with an infinite system of linear equations is proved, and localization results for the eigenvalues of the infinite system are established. Error estimates are derived, and the problems of finding upper bounds and lower bounds for the eigenvalues are solved simultaneously.
The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2008-01-01
We develop a remarkably tight upper bound on the performance of a parameterized family of bounded-angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble-average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
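For context, a sketch of the classical weight-enumerator union bound for a complete ML decoder on the AWGN channel with BPSK; the Poltyrev-style bounds in the paper tighten this, and the enumerator below is the (7,4) Hamming code's, used purely as an example:

    import math

    def q_func(x):
        return 0.5 * math.erfc(x / math.sqrt(2))

    # Weight enumerator of the (7,4) Hamming code: A_3 = 7, A_4 = 7, A_7 = 1
    A = {3: 7, 4: 7, 7: 1}
    R = 4 / 7
    ebno_db = 6.0
    ebno = 10 ** (ebno_db / 10)

    # Union bound for ML decoding: P_e <= sum_d A_d * Q(sqrt(2 * d * R * Eb/N0))
    pe = sum(a_d * q_func(math.sqrt(2 * d * R * ebno)) for d, a_d in A.items())
    print(f"union bound on P_e at {ebno_db} dB: {pe:.3e}")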
When clusters collide: constraints on antimatter on the largest scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steigman, Gary, E-mail: steigman@mps.ohio-state.edu
2008-10-15
Observations have ruled out the presence of significant amounts of antimatter in the Universe on scales ranging from the solar system, to the Galaxy, to groups and clusters of galaxies, and even to distances comparable to the scale of the present horizon. Except for the model-dependent constraints on the largest scales, the most significant upper limits to diffuse antimatter in the Universe are those on the ~Mpc scale of clusters of galaxies provided by the EGRET upper bounds to annihilation gamma rays from galaxy clusters whose intracluster gas is revealed through its x-ray emission. On the scale of individual clusters of galaxies, the upper bounds to the fraction of mixed matter and antimatter for the 55 clusters from a flux-limited x-ray survey range from 5 × 10^{-9} to < 1 × 10^{-6}, strongly suggesting that individual clusters of galaxies are made entirely of matter or of antimatter. X-ray and gamma-ray observations of colliding clusters of galaxies, such as the Bullet Cluster, permit these constraints to be extended to even larger scales. If the observations of the Bullet Cluster, where the upper bound to the antimatter fraction is found to be < 3 × 10^{-6}, can be generalized to other colliding clusters of galaxies, cosmologically significant amounts of antimatter will be excluded on scales of order ~20 Mpc (M ~ 5 × 10^{15} M_⊙).
Degteva, M O; Shagina, N B; Shishkina, E A; Vozilova, A V; Volchkova, A Y; Vorobiova, M I; Wieser, A; Fattibene, P; Della Monaca, S; Ainsbury, E; Moquet, J; Anspaugh, L R; Napier, B A
2015-11-01
Waterborne radioactive releases into the Techa River from the Mayak Production Association in Russia during 1949-1956 resulted in significant doses to about 30,000 persons who lived in downstream settlements. The residents were exposed to internal and external radiation. Two methods for reconstruction of the external dose are considered in this paper: electron paramagnetic resonance (EPR) measurements of teeth, and fluorescence in situ hybridization (FISH) measurements of chromosome translocations in circulating lymphocytes. The main issue in the application of the EPR and FISH methods for reconstruction of the external dose for the Techa Riverside residents was strontium radioisotopes incorporated in teeth and bones that act as a source of confounding local exposures. In order to estimate and subtract doses from incorporated (89,90)Sr, the EPR and FISH assays were supported by measurements of (90)Sr-body burdens and estimates of (90)Sr concentrations in dental tissues by the luminescence method. The resulting dose estimates derived from EPR and FISH measurements for residents of the upper Techa River were found to be consistent: the mean values vary from 510-550 mGy for the villages located close to the site of radioactive release to 130-160 mGy for the more distant villages. The upper bound of individual estimates for both methods is equal to 2.2-2.3 Gy. The EPR- and FISH-based dose estimates were compared with the doses calculated for the donors using the most recent Techa River Dosimetry System (TRDS). The TRDS external dose assessments are based on data on contamination of the Techa River floodplain, simulation of air kerma above the contaminated soil, age-dependent lifestyles and individual residence histories. For correct comparison, TRDS-based doses were calculated from two sources: external exposure from the contaminated environment and internal exposure from (137)Cs incorporated in donors' soft tissues. It is shown here that the TRDS-based absorbed doses in tooth enamel and muscle are in agreement with the EPR- and FISH-based estimates within uncertainty bounds. Basically, this agreement between the estimates has confirmed the validity of external doses calculated with the TRDS.
Low-dimensional Representation of Error Covariance
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
NASA Technical Reports Server (NTRS)
Jaggi, S.; Quattrochi, D.; Baskin, R.
1992-01-01
The effective flux incident upon the detectors of a thermal sensor, after it has been corrected for atmospheric effects, is a function of a non-linear combination of the emissivity of the target for that channel and the temperature of the target. The sensor system cannot separate the contributions of the emissivity and the temperature that constitute the flux value. A method that estimates the bounds on these temperatures and emissivities from thermal data is described. This method is then tested with remotely sensed data obtained from NASA's Thermal Infrared Multispectral Scanner (TIMS), a 6-channel thermal sensor. Since this is an under-determined set of equations, i.e., there are 7 unknowns (6 emissivities and 1 temperature) and 6 equations (corresponding to the 6 channel fluxes), there exists theoretically an infinite number of combinations of emissivities and temperature that can satisfy these equations. Using some realistic bounds on the emissivities, bounds on the temperature are calculated. These bounds on the temperature are refined to estimate a tighter bound on the emissivity of the source. An error analysis is also carried out to quantitatively determine the extent of uncertainty introduced in the estimates of these parameters. This method is useful only when a realistic set of bounds can be obtained for the emissivities of the data. In the case of water, the lower and upper bounds were set at 0.97 and 1.00, respectively. Five flights were flown in succession at altitudes of 2 km (low), 6 km (mid), 12 km (high), and then back again at 6 km and 2 km. The area selected was the Ross Barnett reservoir near Jackson, Mississippi. The mission was flown during the predawn hours of 1 Feb. 1992. Radiosonde data were collected for that duration to profile the characteristics of the atmosphere. Ground truth temperatures using thermometers and radiometers were also obtained over an area of the reservoir. The results of two independent runs of the radiometer data averaged 7.03 ± 0.70 for the first run and 7.31 ± 0.88 for the second run. The results of the algorithm yield temperatures ranging from 7.68 for the low-altitude data to 8.73 for the high-altitude data.
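A sketch of the bounding step in a hypothetical single-channel version (the TIMS processing uses six channels and atmospheric correction): given an at-surface radiance, inverting the Planck function at the two emissivity extremes brackets the surface temperature:

    import math

    C1 = 1.191042e8    # W um^4 m^-2 sr^-1, first radiation constant (radiance form)
    C2 = 1.4387752e4   # um K, second radiation constant

    def planck_radiance(wl_um, temp_k):
        return C1 / (wl_um ** 5 * (math.exp(C2 / (wl_um * temp_k)) - 1.0))

    def invert_temperature(wl_um, radiance, emissivity):
        # Solve L = eps * B(wl, T) for T
        return C2 / (wl_um * math.log(emissivity * C1 / (wl_um ** 5 * radiance) + 1.0))

    wl = 10.5                                     # um, a typical thermal channel center
    L = 0.97 * planck_radiance(wl, 281.0)         # synthetic radiance from a 281 K surface
    t_low = invert_temperature(wl, L, 1.00)       # emissivity upper bound -> lower T bound
    t_high = invert_temperature(wl, L, 0.97)      # emissivity lower bound -> upper T bound
    print(f"temperature bounded in [{t_low:.2f}, {t_high:.2f}] K")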
On the Coriolis effect in acoustic waveguides.
Wegert, Henry; Reindl, Leonard M; Ruile, Werner; Mayer, Andreas P
2012-05-01
Rotation of an elastic medium gives rise to a shift of the frequencies of its acoustic modes, i.e., the time-periodic vibrations that exist in it. This frequency shift is investigated by applying perturbation theory in the regime where the ratio of the rotation rate to the frequency of the acoustic mode is small. In an expansion of the relative frequency shift in powers of this ratio, upper bounds are derived for the first-order and second-order terms. The derivation of the theoretical upper bounds on the first-order term is presented for linear vibration modes as well as for stable nonlinear vibrations with periodic time dependence that can be represented by a Fourier series.
NASA Astrophysics Data System (ADS)
Albeverio, Sergio; Tamura, Hiroshi
2018-04-01
We consider a model describing the coupling of a vector-valued and a scalar homogeneous Markovian random field over R4, interpreted as expressing the interaction between a charged scalar quantum field and a nonlinear quantized electromagnetic field. Expectations of functionals of the random fields are expressed by Brownian bridges. Using this, together with Feynman-Kac-Itô type formulae and estimates on the small-time and large-time behaviour of Brownian functionals, we prove asymptotic upper and lower bounds on the kernel of the transition semigroup for our model. The upper bound gives faster-than-exponential decay of the corresponding resolvent (propagator) at large distances.
The upper bounds of reduced axial and shear moduli in cross-ply laminates with matrix cracks
NASA Technical Reports Server (NTRS)
Lee, Jong-Won; Allen, D. H.; Harris, C. E.
1991-01-01
The present study proposes a mathematical model utilizing the internal state variable concept for predicting the upper bounds of the reduced axial and shear stiffnesses in cross-ply laminates with matrix cracks. The displacement components at the matrix crack surfaces are explicitly expressed in terms of the observable axial and shear strains and the undamaged material properties. The reduced axial and shear stiffnesses are predicted for glass/epoxy and graphite/epoxy laminates. Comparison of the model with other theoretical and experimental studies is also presented to confirm direct applicability of the model to angle-ply laminates with matrix cracks subjected to general in-plane loading.
Entanglement verification with detection efficiency mismatch
NASA Astrophysics Data System (ADS)
Zhang, Yanbao; Lütkenhaus, Norbert
Entanglement is a necessary condition for secure quantum key distribution (QKD). When there is an efficiency mismatch between various detectors used in the QKD system, it is still an open problem how to verify entanglement. Here we present a method to address this problem, given that the detection efficiency mismatch is characterized and known. The method works without assuming an upper bound on the number of photons going to each threshold detector. Our results suggest that the efficiency mismatch affects the ability to verify entanglement: the larger the efficiency mismatch is, the smaller the set of entangled states that can be verified becomes. When there is no mismatch, our method can verify entanglement even if the method based on squashing maps [PRL 101, 093601 (2008)] fails.
Constraining the generalized uncertainty principle with the atomic weak-equivalence-principle test
NASA Astrophysics Data System (ADS)
Gao, Dongfeng; Wang, Jin; Zhan, Mingsheng
2017-04-01
Various models of quantum gravity imply Planck-scale modifications of Heisenberg's uncertainty principle into a so-called generalized uncertainty principle (GUP). The GUP effects on high-energy physics, cosmology, and astrophysics have been extensively studied. Here, we focus on the weak-equivalence-principle (WEP) violation induced by the GUP. Results from the WEP test with the 85Rb-87Rb dual-species atom interferometer are used to set upper bounds on parameters in two GUP proposals. A 10^45-level bound on the Kempf-Mangano-Mann proposal and a 10^27-level bound on Maggiore's proposal, both consistent with bounds from other experiments, are obtained. All these bounds have considerable room for improvement in the future.
Trinker, Horst
2011-10-28
We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.
Validation of the SURE Program, phase 1
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1987-01-01
Presented are the results of the first phase in the validation of the SURE (Semi-Markov Unreliability Range Evaluator) program. The SURE program gives lower and upper bounds on the death-state probabilities of a semi-Markov model. With these bounds, the reliability of a semi-Markov model of a fault-tolerant computer system can be analyzed. For the first phase in the validation, fifteen semi-Markov models were solved analytically for the exact death-state probabilities, and these solutions were compared to the corresponding bounds given by SURE. In every case, the SURE bounds covered the exact solution. The bounds, however, had a tendency to separate in cases where the recovery rate was slow or the fault arrival rate was fast.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upper-bounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second, lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upper-bounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
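For reference, the bound in question has the standard Gallager form for block length n and rate R on a discrete memoryless channel with transition probabilities P(j|k) and input distribution p:

```latex
\bar{P}_e \le e^{-n E_r(R)}, \qquad
E_r(R) = \max_{0 \le \rho \le 1} \left[ E_0(\rho) - \rho R \right], \qquad
E_0(\rho) = -\ln \sum_j \Bigl( \sum_k p_k \, P(j \mid k)^{1/(1+\rho)} \Bigr)^{1+\rho}.
```

The critical rate mentioned above is the rate below which the maximizing value is ρ = 1, so that E_r(R) = E_0(1) − R becomes linear in R.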
Scalable L-infinite coding of meshes.
Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter
2010-01-01
The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in the L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, enables a fast real-time implementation of the rate allocation, and preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
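To make the metric distinction concrete, here is a small numpy sketch (independent of MESHGRID or any particular codec) contrasting the L-infinite and mean-square distortions between original and decoded vertex positions; the toy data are hypothetical.

```python
import numpy as np

def mesh_distortions(v_orig, v_dec):
    """Distortion metrics between original and decoded vertex positions
    (N x 3 arrays): the L-infinite metric bounds the worst-case vertex
    displacement, which a mean-square (L2) metric can hide."""
    d = np.linalg.norm(v_orig - v_dec, axis=1)   # per-vertex displacement
    return d.max(), np.sqrt((d ** 2).mean())     # (L-infinite, RMS)

# Toy example: a decoded mesh where one vertex is badly displaced.
rng = np.random.default_rng(1)
v = rng.standard_normal((1000, 3))
v_hat = v + 1e-4 * rng.standard_normal((1000, 3))
v_hat[0] += 0.5                                  # single large local error
linf, rms = mesh_distortions(v, v_hat)
print(f"L-inf: {linf:.4f}  RMS: {rms:.6f}")      # RMS stays small, L-inf does not
```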
Boukattaya, Mohamed; Mezghani, Neila; Damak, Tarak
2018-06-01
In this paper, robust and adaptive nonsingular fast terminal sliding-mode (NFTSM) control schemes for the trajectory tracking problem are proposed for the cases of known and unknown upper bounds on the system uncertainty and external disturbances. The developed controllers take advantage of NFTSM theory to ensure a fast convergence rate, singularity avoidance, and robustness against uncertainties and external disturbances. First, a robust NFTSM controller is proposed which guarantees that the sliding surface and the equilibrium point can be reached in a short finite time from any initial state. Then, to cope with the unknown upper bound of the system uncertainty that may occur in practical applications, a new adaptive NFTSM algorithm is developed. One feature of the proposed control law is its adaptation technique, in which prior knowledge of parameter uncertainties and disturbances is not needed; instead, the adaptive tuning law estimates the upper bound of these uncertainties using only position and velocity measurements. Moreover, the proposed controller eliminates the chattering effect without losing robustness or precision. Stability analysis is performed using Lyapunov stability theory, and simulation studies are conducted to verify the effectiveness of the developed control schemes.
Effects of general relativity on glitch amplitudes and pulsar mass upper bounds
NASA Astrophysics Data System (ADS)
Antonelli, M.; Montoli, A.; Pizzochero, P. M.
2018-04-01
Pinning of vortex lines in the inner crust of a spinning neutron star may be the mechanism that enhances the differential rotation of the internal neutron superfluid, making it possible to freeze some amount of angular momentum which eventually can be released, thus causing a pulsar glitch. We investigate the general relativistic corrections to pulsar glitch amplitudes in the slow-rotation approximation, consistently with the stratified structure of the star. We thus provide a relativistic generalization of a previous Newtonian model that was recently used to estimate upper bounds on the masses of glitching pulsars. We find that the effect of general relativity on the glitch amplitudes obtained by emptying the whole angular momentum reservoir is less than 30 per cent. Moreover, we show that the Newtonian upper bounds on the masses of large glitchers obtained from observations of their maximum recorded event differ by less than a few percent from those calculated within the relativistic framework. This work can also serve as a basis to construct more sophisticated models of angular momentum reservoir in a relativistic context: in particular, we present two alternative scenarios for macroscopically rigid and slack pinned vortex lines, and we generalize the Feynman-Onsager relation to the case when both entrainment coupling between the fluids and a strong axisymmetric gravitational field are present.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, Sarah; Bowman, Daniel; Rodgers, Arthur
2018-04-23
Here, this research uses the acoustic coda phase delay method to estimate relative changes in air temperature between explosions with varying event masses and heights of burst. It also places a bound on source–receiver distance for the method. Previous studies used events with different shapes, heights of burst, and masses and recorded the acoustic codas at source–receiver distances less than 1 km. This research further explores the method using explosions that differ in mass (by up to an order of magnitude) and are placed at varying heights. Source–receiver distances also cover an area out to 7 km. Relative air temperature change estimates are compared to complementary meteorological observations. Results show that two explosions that differ by an order of magnitude cannot be used with this method because their propagation times in the near field and their fundamental frequencies are different. These differences are expressed as inaccuracies in the relative air temperature change estimates. An order of magnitude difference in mass is also shown to bias estimates higher. Small differences in height of burst do not affect the accuracy of the method. Finally, an upper bound of 1 km on source–receiver distance is provided based on the standard deviation characteristics of the estimates.
NASA Astrophysics Data System (ADS)
Bressers, C. A.; Nyblade, A.; Tugume, F.
2017-12-01
Data from a newly installed temporary seismic array in northeastern Uganda are incorporated into an existing body wave tomography model of eastern Africa to improve imaging of the upper mantle beneath the northern part of the East African Plateau. Nine temporary broadband stations were installed in January 2017 and will be operated through 2018 to obtain data for resolving structure under the northern part of the plateau as well as the East African rift in northern Kenya. Preliminary tomography models incorporate several months of data from stations in NE Uganda, plus many years of data from over 200 seismic stations throughout eastern Africa used in previously published body wave tomography models. The data come from teleseismic earthquakes with mb ≥ 5.5 at a distance from each station of 30° to 90°. P and S wave travel time residuals have been obtained using a multichannel cross-correlation method and inverted using VanDecar's method to produce 3D tomographic images of the upper mantle. The preliminary results exhibit better resolved structure under the northern part of the East African Plateau than previous models and suggest that the fast-wave-speed anomaly in the upper mantle associated with the Tanzanian Craton—which is bounded by the Western and Eastern branches of the rift system—extends across most of northern Uganda.
NASA Astrophysics Data System (ADS)
Masson, Frederic; Knoepfler, Andreas; Mayer, Michael; Ulrich, Patrice; Heck, Bernhard
2010-05-01
In September 2008, the Institut de Physique du Globe de Strasbourg (Ecole et Observatoire des Sciences de la Terre, EOST) and the Geodetic Institute (GIK) of Karlsruhe University (TH) established a transnational cooperation called GURN (GNSS Upper Rhine Graben Network). Within the GURN initiative these institutions cooperate to establish a highly precise and highly sensitive network of permanently operating GNSS sites for the detection of crustal movements in the Upper Rhine Graben region. Initially, the network consisted of the permanently operating GNSS sites of SAPOS®-Baden-Württemberg, of different data providers in France (e.g. EOST, Teria, RGP), and of some further sites (e.g. IGS). In July 2009, the network was extended to the south when swisstopo (Switzerland) joined GURN, and to the north when SAPOS®-Rheinland-Pfalz did. The network therefore now consists of approximately 80 permanently operating reference sites. The presentation will discuss the current status of GURN and its main research goals, and will present first results concerning data quality as well as time series from a first reprocessing of all available data since 2002 using GAMIT/GLOBK (EOST working group) and the Bernese GPS Software (GIK working group). Based on these time series, velocity and strain fields will be calculated in the future. The GURN initiative also aims at estimating the upper bounds of deformation in the Upper Rhine Graben region.
The Economic Cost of Methamphetamine Use in the United States, 2005
ERIC Educational Resources Information Center
Nicosia, Nancy; Pacula, Rosalie Liccardo; Kilmer, Beau; Lundberg, Russell; Chiesa, James
2009-01-01
This first national estimate suggests that the economic cost of methamphetamine (meth) use in the United States reached $23.4 billion in 2005. Given the uncertainty in estimating the costs of meth use, this book provides a lower-bound estimate of $16.2 billion and an upper-bound estimate of $48.3 billion. The analysis considers a wide range of…
Curvature Continuous and Bounded Path Planning for Fixed-Wing UAVs
Wang, Xiaoliang; Jiang, Peng; Li, Deshi; Sun, Tao
2017-01-01
Unmanned Aerial Vehicles (UAVs) play an important role in applications such as data collection and target reconnaissance. An accurate and optimal path can effectively increase the mission success rate in the case of small UAVs. Although path planning for UAVs is similar to that for traditional mobile robots, the special kinematic characteristics of UAVs (such as their minimum turning radius) have not been taken into account in previous studies. In this paper, we propose a locally-adjustable, continuous-curvature, bounded path-planning algorithm for fixed-wing UAVs. To deal with the curvature discontinuity problem, an optimal interpolation algorithm and a key-point shift algorithm are proposed based on the derivation of a curvature continuity condition. To meet the upper bound for curvature and to render the curvature extrema controllable, a local replanning scheme is designed by combining arcs and Bezier curves with monotonic curvature. In particular, a path transition mechanism is built for the replanning phase using minimum curvature circles for a planning philosophy. Numerical results demonstrate that the analytical planning algorithm can effectively generate continuous-curvature paths, while satisfying the curvature upper bound constraint and allowing UAVs to pass through all predefined waypoints in the desired mission region. PMID:28925960
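The local replanning scheme itself is not reproduced here; the sketch below only illustrates the feasibility test it relies on, checking the sampled maximum curvature of a cubic Bezier segment against the bound kappa_max = 1/r_min for a made-up minimum turning radius.

```python
import numpy as np

def bezier_max_curvature(P, samples=2001):
    """Sampled maximum curvature of a cubic Bezier with control points
    P (4 x 2), using kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)."""
    t = np.linspace(0.0, 1.0, samples)[:, None]
    P0, P1, P2, P3 = P
    d1 = 3 * ((1 - t)**2 * (P1 - P0) + 2 * (1 - t) * t * (P2 - P1) + t**2 * (P3 - P2))
    d2 = 6 * ((1 - t) * (P2 - 2 * P1 + P0) + t * (P3 - 2 * P2 + P1))
    cross = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]
    speed = np.linalg.norm(d1, axis=1)
    return np.max(np.abs(cross) / speed**3)

# Feasibility check against a hypothetical minimum turning radius of 40 m.
r_min = 40.0
P = np.array([[0.0, 0.0], [60.0, 0.0], [120.0, 40.0], [180.0, 40.0]])
kappa_max = bezier_max_curvature(P)
print(kappa_max <= 1.0 / r_min, kappa_max)
```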
Paramagnetic or diamagnetic persistent currents? A topological point of view
NASA Astrophysics Data System (ADS)
Waintal, Xavier
2009-03-01
A persistent current flows at low temperatures in small conducting rings when they are threaded by a magnetic flux. I will discuss the sign of this persistent current (diamagnetic or paramagnetic response) in the special case of N electrons in a one-dimensional ring [1]. One dimension is very special in the sense that the sign of the persistent current is entirely controlled by the topology of the system. I will establish lower bounds for the free energy in the presence of arbitrary electron-electron interactions and external potentials. These bounds are the counterparts of upper bounds derived by Leggett using another topological argument. Rings with odd (even) numbers of polarized electrons are always diamagnetic (paramagnetic). The situation is more interesting with unpolarized electrons, where Leggett's upper bound breaks down: rings with N=4n exhibit either paramagnetic behavior or a superconductor-like current-phase relation. The topological argument provides a rigorous justification for the phenomenological Hückel rule, which states that cyclic molecules with 4n + 2 electrons, like benzene, are aromatic while those with 4n electrons are not. [1] Xavier Waintal, Geneviève Fleury, Kyryl Kazymyrenko, Manuel Houzet, Peter Schmitteckert, and Dietmar Weinmann, Phys. Rev. Lett. 101, 106804 (2008).
Chandon, Pierre; Ordabayeva, Nailya
2017-02-01
Five studies show that people, including experts such as professional chefs, estimate quantity decreases more accurately than quantity increases. We argue that this asymmetry occurs because physical quantities cannot be negative. Consequently, there is a natural lower bound (zero) when estimating decreasing quantities but no upper bound when estimating increasing quantities, which can theoretically grow to infinity. As a result, the "accuracy of less" disappears (a) when a numerical or a natural upper bound is present when estimating quantity increases, or (b) when people are asked to estimate the (unbounded) ratio of change from 1 size to another for both increasing and decreasing quantities. Ruling out explanations related to loss aversion, symbolic number mapping, and the visual arrangement of the stimuli, we show that the "accuracy of less" influences choice and demonstrate its robustness in a meta-analysis that includes previously published results. Finally, we discuss how the "accuracy of less" may explain asymmetric reactions to the supersizing and downsizing of food portions, some instances of the endowment effect, and asymmetries in the perception of increases and decreases in physical and psychological distance.
Bounds on area and charge for marginally trapped surfaces with a cosmological constant
NASA Astrophysics Data System (ADS)
Simon, Walter
2012-03-01
We sharpen the known inequalities AΛ ⩽ 4π(1 − g) (Hayward et al 1994 Phys. Rev. D 49 5080, Woolgar 1999 Class. Quantum Grav. 16 3005) and A ⩾ 4πQ² (Dain et al 2012 Class. Quantum Grav. 29 035013) between the area A and the electric charge Q of a stable marginally outer-trapped surface (MOTS) of genus g in the presence of a cosmological constant Λ. In particular, instead of requiring stability we include the principal eigenvalue λ of the stability operator. For Λ* = Λ + λ > 0, we obtain a lower and an upper bound for Λ*A in terms of Λ*Q², as well as the upper bound Q ⩽ 1/(2√Λ*) for the charge, which reduces to Q ⩽ 1/(2√Λ) in the stable case λ ⩾ 0. For Λ* < 0, there only remains a lower bound on A. In the spherically symmetric, static, stable case, one of our area inequalities is saturated iff the surface gravity vanishes. We also discuss implications of our inequalities for 'jumps' and mergers of charged MOTS.
On the sparseness of 1-norm support vector machines.
Zhang, Li; Zhou, Weida
2010-04-01
There is some empirical evidence showing that 1-norm Support Vector Machines (1-norm SVMs) have good sparseness; however, it has been unclear both how sparse a representation 1-norm SVMs can achieve and whether that representation is sparser than that of standard SVMs. In this paper we examine the sparseness of 1-norm SVMs. Two upper bounds on the number of nonzero coefficients in the decision function of 1-norm SVMs are presented. First, the number of nonzero coefficients is at most the number of exact support vectors lying on the +1 and -1 discriminating surfaces, whereas in standard SVMs it equals the number of support vectors, which implies that 1-norm SVMs have better sparseness than standard SVMs. Second, the number of nonzero coefficients is at most the rank of the sample matrix. A brief review of the geometry of linear programming and the primal steepest edge pricing simplex method is given, which allows us to prove the two upper bounds and evaluate their tightness by experiments. Experimental results on toy data sets and the UCI data sets illustrate our analysis.
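A rough numerical illustration of the rank bound, not the paper's construction: treating the columns of a kernel matrix as features and fitting an l1-penalized linear classifier approximates a 1-norm SVM (scikit-learn's LinearSVC uses the squared hinge rather than the hinge loss, so this is only a proxy). The nonzero expansion coefficients are then counted against the rank of the sample matrix.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import LinearSVC

# Hypothetical toy data and kernel.
X, y = make_classification(n_samples=150, n_features=10, random_state=0)
K = rbf_kernel(X, X, gamma=0.1)          # sample (kernel) matrix

# l1 penalty on the per-sample expansion coefficients alpha in
# f(x) = sum_j alpha_j K(x_j, x) + b  (squared hinge instead of hinge).
clf = LinearSVC(penalty="l1", dual=False, C=1.0, max_iter=50000).fit(K, y)
alpha = clf.coef_.ravel()

# Second bound: nonzero coefficients never exceed rank of the sample matrix.
print(np.count_nonzero(np.abs(alpha) > 1e-8), "<=", np.linalg.matrix_rank(K))
```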
Ostapczuk, Martin; Musch, Jochen
2011-01-01
Attitudes towards people with disabilities are traditionally assessed via self-report, even though self-reports are susceptible to social desirability bias. We investigated two methods presumed to provide more valid prevalence estimates of sensitive attitudes than direct questioning (DQ). Most people projective questioning (MPPQ) attempts to reduce bias by asking interviewees to estimate the number of other people holding a sensitive attribute, rather than confirming or denying the attribute for themselves. The randomised-response technique (RRT) tries to reduce bias by assuring confidentiality through a random scrambling of the respondent's answers. We assessed negative attitudes towards people with physical and mental disability via MPPQ, RRT and DQ and compared the resulting estimates. The MPPQ estimates exceeded the DQ estimates. Employing a cheating-detection extension of the RRT, we determined the proportion of respondents disregarding the RRT instructions and computed an upper bound for the prevalence of negative attitudes. MPPQ estimates exceeded this upper bound and were thus shown to overestimate the prevalence. Furthermore, we found more negative attitudes towards people with mental disabilities than towards those with physical disabilities in all three questioning conditions. We recommend employing the cheating-detection variant of the RRT to gain additional insight in future studies on attitudes towards people with disabilities.
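The cheating-detection extension requires two randomization groups and is not sketched here; the following toy simulation shows only the basic forced-response RRT estimator on which it builds, with hypothetical design parameters.

```python
import numpy as np

def rrt_estimate(yes_rate, p_truth, p_forced_yes):
    """Forced-response RRT: with prob. p_truth the respondent answers
    truthfully; otherwise a randomizer forces 'yes' with prob. p_forced_yes.
    Observed rate: lambda = p_truth*pi + (1 - p_truth)*p_forced_yes."""
    return (yes_rate - (1 - p_truth) * p_forced_yes) / p_truth

# Simulated survey with a true sensitive-attitude prevalence of 20%.
rng = np.random.default_rng(2)
n, pi, p_truth, p_fy = 2000, 0.20, 0.75, 0.5
truthful = rng.random(n) < p_truth
answers = np.where(truthful, rng.random(n) < pi, rng.random(n) < p_fy)
print(rrt_estimate(answers.mean(), p_truth, p_fy))   # close to 0.20
```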
Ochi, Kento; Kamiura, Moto
2015-09-01
A multi-armed bandit problem is a search problem in which a learning agent must select the optimal arm among multiple slot machines generating random rewards. The UCB algorithm is one of the most popular methods for solving multi-armed bandit problems; it achieves logarithmic regret by balancing exploration and exploitation. Since the introduction of UCB algorithms, researchers have known empirically that optimistic value functions perform well in multi-armed bandit problems. The terms optimistic or optimism suggest that the value function is sufficiently larger than the sample mean of the rewards. The original definition of the UCB algorithm focuses on the optimization of regret and is not directly based on the optimism of a value function, so we need to understand why optimism leads to good performance in multi-armed bandit problems. In the present article, we propose a new method, called the Overtaking method, for solving multi-armed bandit problems. The value function of the proposed method is defined as the upper bound of a confidence interval for an estimator of the expected reward: the value function asymptotically approaches the expected reward from above. If the value function is larger than the expected value under this asymptote, then the learning agent is almost sure to obtain the optimal arm. This structure is called the sand-sifter mechanism; it entails no regrowth of the value functions of suboptimal arms, so the learning agent plays only the current best arm at each time step. Consequently, the proposed method achieves a high accuracy rate and low regret, and some of its value functions can outperform UCB algorithms. This study suggests the advantage of optimism of agents in uncertain environments within one of the simplest frameworks.
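The Overtaking value function itself is not specified in the abstract; for orientation, this sketch implements the standard UCB1 index it is compared against, whose value function is the sample mean plus the confidence radius sqrt(2 ln t / n).

```python
import numpy as np

def ucb1(means, horizon, rng):
    """UCB1 index policy on Bernoulli arms: play the arm whose sample
    mean plus confidence radius sqrt(2 ln t / n) is largest."""
    k = len(means)
    counts, sums = np.zeros(k), np.zeros(k)
    for t in range(1, horizon + 1):
        if t <= k:                          # play each arm once to initialize
            arm = t - 1
        else:
            index = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(index))
        counts[arm] += 1
        sums[arm] += rng.random() < means[arm]
    return counts

rng = np.random.default_rng(3)
pulls = ucb1(np.array([0.4, 0.5, 0.6]), horizon=10000, rng=rng)
print(pulls)   # pulls should concentrate on the 0.6 arm
```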
A prevalence-based approach to societal costs occurring in consequence of child abuse and neglect
2012-01-01
Background Traumatization in childhood can result in lifelong health impairment and may also have a negative impact on other areas of life, such as education, social contacts and employment. Despite the frequent occurrence of traumatization, reflected in a 14.5 percent prevalence rate of severe child abuse and neglect, the economic burden of its consequences is hardly known. The objective of this prevalence-based cost-of-illness study is to show how impairment of the individual is reflected in economic trauma follow-up costs borne by society as a whole in Germany, and to compare the results with other countries' costs. Methods From a societal perspective, trauma follow-up costs were estimated using a bottom-up approach. The literature-based prevalence rate includes emotional, physical and sexual abuse as well as physical and emotional neglect in Germany. Costs are derived from individual case scenarios of child endangerment presented in a German cost-benefit analysis. A comparison with trauma follow-up costs in Australia, Canada and the USA is based on purchasing power parity. Results The annual trauma follow-up costs total EUR 11.1 billion at the lower bound and EUR 29.8 billion at the upper bound. This equals EUR 134.84 and EUR 363.58 per capita, respectively, for the German population. These results are in line with those obtained from cost studies conducted in Australia (lower bound) and Canada (upper bound), whereas the result for the United States is much lower. Conclusion Child abuse and neglect result in trauma follow-up costs of economically relevant magnitude for German society. Although the result is well in line with other countries' costs, the general lack of data should be addressed in order to enable more detailed future studies. Creating a reliable cost data basis in the first place can pave the way for long-term cost savings. PMID:23158382
A case study to quantify prediction bounds caused by model-form uncertainty of a portal frame
NASA Astrophysics Data System (ADS)
Van Buren, Kendra L.; Hall, Thomas M.; Gonzales, Lindsey M.; Hemez, François M.; Anton, Steven R.
2015-01-01
Numerical simulations, irrespective of the discipline or application, are often plagued by arbitrary numerical and modeling choices. Arbitrary choices can originate from kinematic assumptions, for example the use of 1D beam, 2D shell, or 3D continuum elements, mesh discretization choices, boundary condition models, and the representation of contact and friction in the simulation. This work takes a step toward understanding the effect of arbitrary choices and model-form assumptions on the accuracy of numerical predictions. The application is the simulation of the first four resonant frequencies of a one-story aluminum portal frame structure under free-free boundary conditions. The main challenge of the portal frame structure resides in modeling the joint connections, for which different modeling assumptions are available. To study this model-form uncertainty, and compare it to other types of uncertainty, two finite element models are developed using solid elements, and with differing representations of the beam-to-column and column-to-base plate connections: (i) contact stiffness coefficients or (ii) tied nodes. Test-analysis correlation is performed to compare the lower and upper bounds of numerical predictions obtained from parametric studies of the joint modeling strategies to the range of experimentally obtained natural frequencies. The approach proposed is, first, to characterize the experimental variability of the joints by varying the bolt torque, method of bolt tightening, and the sequence in which the bolts are tightened. The second step is to convert what is learned from these experimental studies to models that "envelope" the range of observed bolt behavior. We show that this approach, that combines small-scale experiments, sensitivity analysis studies, and bounding-case models, successfully produces lower and upper bounds of resonant frequency predictions that match those measured experimentally on the frame structure. (Approved for unlimited, public release, LA-UR-13-27561).
An upper bound on the radius of a highly electrically conducting lunar core
NASA Technical Reports Server (NTRS)
Hobbs, B. A.; Hood, L. L.; Herbert, F.; Sonett, C. P.
1983-01-01
Parker's (1980) nonlinear inverse theory for the electromagnetic sounding problem is converted to a form suitable for analysis of lunar day-side transfer function data by: (1) transforming the solution in plane geometry to that in spherical geometry; and (2) transforming the theoretical lunar transfer function in the dipole limit to an apparent resistivity function. The theory is applied to the revised lunar transfer function data set of Hood et al. (1982), which extends in frequency from 10⁻⁵ to 10⁻³ Hz. On the assumption that an iron-rich lunar core, whether molten or solid, can be represented by a perfect conductor at the minimum sampled frequency, an upper bound of 435 km on the maximum radius of such a core is calculated. This bound is somewhat larger than values of 360-375 km previously estimated from the same data set via forward model calculations because the prior work did not consider all possible mantle conductivity functions.
An upper bound on the particle-laden dependency of shear stresses at solid-fluid interfaces
NASA Astrophysics Data System (ADS)
Zohdi, T. I.
2018-03-01
In modern advanced manufacturing processes, such as three-dimensional printing of electronics, fine-scale particles are added to a base fluid yielding a modified fluid. For example, in three-dimensional printing, particle-functionalized inks are created by adding particles to freely flowing solvents forming a mixture, which is then deposited onto a surface, which upon curing yields desirable solid properties, such as thermal conductivity, electrical permittivity and magnetic permeability. However, wear at solid-fluid interfaces within the machinery walls that deliver such particle-laden fluids is typically attributed to the fluid-induced shear stresses, which increase with the volume fraction of added particles. The objective of this work is to develop a rigorous strict upper bound for the tolerable volume fraction of particles that can be added, while remaining below a given stress threshold at a fluid-solid interface. To illustrate the bound's utility, the expression is applied to a series of classical flow regimes.
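As a hedged illustration of how such a bound can look (this uses Einstein's classical dilute-suspension viscosity relation, not the paper's derivation): if the mixture viscosity is modeled as μ(φ) = μ₀(1 + 5φ/2) for particle volume fraction φ, then requiring the wall shear stress τ = μ(φ)γ̇ to stay below a threshold τ* at shear rate γ̇ gives the upper bound

```latex
\phi \;\le\; \frac{2}{5}\left( \frac{\tau^{*}}{\mu_0 \dot{\gamma}} - 1 \right),
```

valid only in the dilute regime where the linear viscosity law applies.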
Quantum Dynamical Applications of Salem's Theorem
NASA Astrophysics Data System (ADS)
Damanik, David; Del Rio, Rafael
2009-07-01
We consider the survival probability of a state that evolves according to the Schrödinger dynamics generated by a self-adjoint operator H. We deduce from a classical result of Salem that upper bounds for the Hausdorff dimension of a set supporting the spectral measure associated with the initial state imply lower bounds on a subsequence of time scales for the survival probability. This general phenomenon is illustrated with applications to the Fibonacci operator and the critical almost Mathieu operator. In particular, this gives the first quantitative dynamical bound for the critical almost Mathieu operator.
Volumes and intrinsic diameters of hypersurfaces
NASA Astrophysics Data System (ADS)
Paeng, Seong-Hun
2015-09-01
We estimate the volume and the intrinsic diameter of a hypersurface M with geometric information of a hypersurface which is parallel to M at distance T. It can be applied to the Riemannian Penrose inequality to obtain a lower bound of the total mass of a spacetime. Also it can be used to obtain upper bounds of the volume and the intrinsic diameter of the celestial r-sphere without a lower bound of the sectional curvature. We extend our results to metric-measure spaces by using the Bakry-Émery Ricci tensor.
Shock spectra applications to a class of multiple degree-of-freedom structures system
NASA Technical Reports Server (NTRS)
Hwang, Shoi Y.
1988-01-01
The demands on the safety performance of launch structures and equipment systems subjected to impulsive excitations necessitate a study that predicts the maximum response of the system as well as the maximum stresses in the system. A method to extract higher modes and frequencies for a class of multiple degree-of-freedom (MDOF) structural systems is proposed. Along with the shock spectra derived from a linear oscillator model, a procedure to obtain upper-bound solutions for the maximum displacement and maximum stresses in the MDOF system is presented.
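The combination rule is not spelled out in the abstract; one standard way to obtain an upper bound of this kind is the absolute-sum modal combination, under which the peak response cannot exceed the sum of the modal peaks read from the oscillator shock spectrum:

```latex
|u(t)|_{\max} \;\le\; \sum_{i} \left| \Gamma_i \, \phi_i \right| \, S_d(\omega_i, \zeta_i),
```

where Γ_i are the modal participation factors, φ_i the mode shapes, and S_d the displacement shock spectrum evaluated at modal frequency ω_i and damping ratio ζ_i. Whether the paper uses exactly this combination is an assumption.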
Halo independent comparison of direct dark matter detection data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gondolo, Paolo; Gelmini, Graciela B., E-mail: paolo@physics.utah.edu, E-mail: gelmini@physics.ucla.edu
We extend the halo-independent method of Fox, Liu, and Weiner to include energy resolution and efficiency with arbitrary energy dependence, making it more suitable for experiments to use in presenting their results. Then we compare measurements and upper limits on the direct detection of low-mass (∼10 GeV) weakly interacting massive particles with spin-independent interactions, including the upper limit on the annual modulation amplitude from the CDMS collaboration. We find that isospin-symmetric couplings are severely constrained by both XENON100 and CDMS bounds, and that isospin-violating couplings are still possible at the lowest energies, while the tension of the higher energy CoGeNT bins with the CDMS modulation constraint remains. We find the CRESST-II signal is not compatible with the modulation signals of DAMA and CoGeNT.
Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies
Rukhin, Andrew L.
2011-01-01
A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when methods variances are considered to be known an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen
2011-08-16
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: 1) the computational expense is tolerable; 2) the correlations are, by construction, guaranteed to be consistent with each other; and 3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
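The paper's cosine row-wise formula for the angles is not reproduced in the abstract; the sketch below shows only the underlying spherical/Cholesky construction, which already guarantees a mutually consistent (unit-diagonal, positive semidefinite) correlation matrix for any angle values. The angles are hypothetical.

```python
import numpy as np

def corr_from_angles(theta):
    """Spherical parameterization of a correlation matrix (Pinheiro & Bates
    style): build a lower-triangular Cholesky factor whose rows have unit
    norm from angles in (0, pi), so C = L L^T is a valid correlation matrix
    by construction. theta is a list of angle arrays, row i having i angles."""
    n = len(theta) + 1
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    for i in range(1, n):
        ang = theta[i - 1]
        sin_prod = 1.0
        for j in range(i):
            L[i, j] = np.cos(ang[j]) * sin_prod
            sin_prod *= np.sin(ang[j])
        L[i, i] = sin_prod          # rows have unit norm by trigonometry
    return L @ L.T

# Three hydrometeor species with made-up angles.
C = corr_from_angles([np.array([0.4]), np.array([0.9, 1.2])])
print(np.diag(C), np.linalg.eigvalsh(C))  # unit diagonal, nonnegative spectrum
```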
NASA Astrophysics Data System (ADS)
De Raedt, Hans; Michielsen, Kristel; Hess, Karl
2016-12-01
Using Einstein-Podolsky-Rosen-Bohm experiments as an example, we demonstrate that the combination of a digital computer and algorithms, as a metaphor for a perfect laboratory experiment, provides solutions to problems of the foundations of physics. Employing discrete-event simulation, we present a counterexample to John Bell's remarkable "proof" that any theory of physics, which is both Einstein-local and "realistic" (counterfactually definite), results in a strong upper bound to the correlations that are being measured in Einstein-Podolsky-Rosen-Bohm experiments. Our counterexample, which is free of the so-called detection-, coincidence-, memory-, and contextuality loophole, violates this upper bound and fully agrees with the predictions of quantum theory for Einstein-Podolsky-Rosen-Bohm experiments.
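The "strong upper bound" referred to is the Bell-CHSH inequality: any Einstein-local, counterfactually definite theory constrains the correlations E(a, b) measured for settings pairs (a, b) by

```latex
|E(a,b) + E(a,b') + E(a',b) - E(a',b')| \;\le\; 2,
```

whereas quantum theory predicts values up to 2√2 (Tsirelson's bound), which is what Einstein-Podolsky-Rosen-Bohm experiments probe.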
NASA Astrophysics Data System (ADS)
Shen, Yuxuan; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.
2018-07-01
In this paper, the recursive filtering problem is studied for a class of time-varying nonlinear systems with stochastic parameter matrices. The measurement transmission between the sensor and the filter is conducted through a fading channel characterized by the Rice fading model. An event-based transmission mechanism is adopted to decide whether the sensor measurement should be transmitted to the filter. A recursive filter is designed such that, in the simultaneous presence of the stochastic parameter matrices and fading channels, the filtering error covariance is guaranteed to have an upper bound, and such an upper bound is then minimized by appropriately choosing the filter gain matrix. Finally, a simulation example is presented to demonstrate the effectiveness of the proposed filtering scheme.
Combinatorial complexity of pathway analysis in metabolic networks.
Klamt, Steffen; Stelling, Jörg
2002-01-01
Elementary flux mode analysis is a promising approach for a pathway-oriented perspective on metabolic networks. However, in larger networks it is hampered by the combinatorial explosion of possible routes. In this work we give some estimates of the combinatorial complexity, including theoretical upper bounds on the number of elementary flux modes in a network of a given size. In a case study, we computed the elementary modes in the central metabolism of Escherichia coli utilizing four different substrates. Interestingly, although the number of modes occurring in this complex network can exceed half a million, it is still far below the upper bound. Hence, to a certain extent, pathway analysis of central catabolism is feasible for assessing network properties such as flexibility and functionality.
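For orientation, a bound of the general form derived in such counting arguments (stated here under the assumption of a full-rank stoichiometric matrix; the paper's exact expression may differ): in a network with q reactions and m internal metabolites, an elementary mode is fixed, up to scaling, by a support of at most m + 1 reactions, so the count obeys a binomial bound of the type

```latex
\#\,\mathrm{EFMs} \;\le\; \binom{q}{m+1}.
```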
A one-dimensional model of solid-earth electrical resistivity beneath Florida
Blum, Cletus; Love, Jeffrey J.; Pedrie, Kolby; Bedrosian, Paul A.; Rigler, E. Joshua
2015-11-19
An estimated one-dimensional layered model of electrical resistivity beneath Florida was developed from published geological and geophysical information. The resistivity of each layer is represented by plausible upper and lower bounds as well as a geometric mean resistivity. Corresponding impedance transfer functions, Schmucker-Weidelt transfer functions, apparent resistivity, and phase responses are calculated for inducing geomagnetic frequencies ranging from 10⁻⁵ to 10⁰ hertz. The resulting one-dimensional model and response functions can be used to make general estimates of time-varying electric fields associated with geomagnetic storms such as might represent induction hazards for electric-power grid operation. The plausible upper- and lower-bound resistivity structures show the uncertainty, giving a wide range of plausible time-varying electric fields.
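A minimal sketch of how such response functions are computed from a layered resistivity model, using the standard 1-D magnetotelluric (Wait) impedance recursion; the three-layer column below is hypothetical, not the published Florida model.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def mt_response(rho, h, freq):
    """Surface impedance of a 1-D layered earth via the Wait recursion.
    rho: layer resistivities (ohm-m), last entry is the basal halfspace;
    h: thicknesses (m) of the upper len(rho)-1 layers; freq in Hz.
    Returns apparent resistivity (ohm-m) and phase (degrees)."""
    w = 2 * np.pi * freq
    Z = np.sqrt(1j * w * MU0 * rho[-1])          # halfspace impedance
    for n in range(len(h) - 1, -1, -1):          # propagate upward
        Zn = np.sqrt(1j * w * MU0 * rho[n])      # intrinsic impedance
        kn = np.sqrt(1j * w * MU0 / rho[n])      # propagation constant
        t = np.tanh(kn * h[n])
        Z = Zn * (Z + Zn * t) / (Zn + Z * t)
    rho_a = np.abs(Z) ** 2 / (w * MU0)
    phase = np.degrees(np.angle(Z))
    return rho_a, phase

# Hypothetical three-layer column, swept over the quoted frequency band.
rho = np.array([100.0, 1000.0, 10.0])            # ohm-m
h = np.array([2000.0, 20000.0])                  # m
for f in np.logspace(-5, 0, 6):
    print(f"{f:9.5f} Hz  rho_a = {mt_response(rho, h, f)[0]:10.1f} ohm-m")
```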
NASA Astrophysics Data System (ADS)
Soltani Bozchalooi, Iman; Liang, Ming
2018-04-01
A discussion paper entitled "On the distribution of the modulus of Gabor wavelet coefficients and the upper bound of the dimensionless smoothness index in the case of additive Gaussian noises: revisited" by Dong Wang, Qiang Zhou, and Kwok-Leung Tsui has recently been brought to our attention. This discussion paper (hereafter the Wang et al. paper) is based on arguments that are fundamentally incorrect, which we rebut in this commentary. As the flaws in the arguments proposed by Wang et al. are clear, we keep this rebuttal as brief as possible.
Bounds on geologically current rates of motion of groups of hot spots
NASA Astrophysics Data System (ADS)
Wang, Chengzu; Gordon, Richard G.; Zhang, Tuo
2017-06-01
It is widely believed that groups of hot spots in different regions of the world are in relative motion at rates of 10 to 30 mm a⁻¹ or more. Here we present a new method for analyzing geologically current motion between groups of hot spots beneath different plates. In an inversion of 56 globally distributed, equally weighted trends of hot spot tracks, the dispersion is dominated by differences in trend between different plates rather than differences within plates. Nonetheless the rate of hot spot motion perpendicular to the direction of absolute plate motion, v_perp, differs significantly from zero for only 3 of 10 plates, and then by merely 0.3 to 1.4 mm a⁻¹. The global mean upper bound on |v_perp| is 3.2 ± 2.7 mm a⁻¹. Therefore, hot spots move slowly and can be used to define a global reference frame for plate motions.
Constraints on the ωπ form factor from analyticity and unitarity.
Ananthanarayan, B; Caprini, I; Kubis, B
Motivated by the discrepancies noted recently between theoretical calculations of the electromagnetic ωπ form factor and certain experimental data, we investigate this form factor using analyticity and unitarity in a framework known as the method of unitarity bounds. We use a QCD correlator computed on the spacelike axis by operator product expansion and perturbative QCD as input, and exploit unitarity and the positivity of its spectral function, including the two-pion contribution that can be reliably calculated using high-precision data on the pion form factor. From this information, we derive upper and lower bounds on the modulus of the ωπ form factor in the elastic region. The results provide a significant check on those obtained with standard dispersion relations, confirming the existence of a disagreement with experimental data in the region around [Formula: see text].
C-14 content of ten meteorites measured by tandem accelerator mass spectrometry
NASA Technical Reports Server (NTRS)
Brown, R. M.; Andrews, H. R.; Ball, G. C.; Burn, N.; Imahori, Y.; Milton, J. C. D.; Fireman, E. L.
1984-01-01
Measurements of C-14 in three North American and seven Antarctic meteorites show in most cases that this cosmogenic isotope, which is tightly bound, was separated from absorbed atmospheric radiocarbon by stepwise heating extractions. The present upper limit to age determination by the accelerator method varies from 50,000 to 70,000 years, depending on the mass and carbon content of the sample. The natural limit caused by cosmic-ray production of C-14 in silicate rocks at 2000 m elevation is estimated to be 55,000 ± 5000 years. An estimate is also made of the 'weathering ages' of the Antarctic meteorites from the specific activity of loosely bound CO2, which is thought to be absorbed from the terrestrial atmosphere. Accelerator measurements are found to agree with previous low-level counting measurements, but are more sensitive and precise.
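For orientation, the quoted age ceiling follows from the radioactive decay law: with mean life τ = t₁/₂/ln 2 ≈ 8267 yr for the 5730-yr half-life,

```latex
t \;=\; \tau \, \ln\!\frac{A_0}{A},
```

so a smallest measurable activity ratio A/A₀ of roughly 10⁻³ corresponds to t ≈ 57,000 yr, consistent with the 50,000-70,000-year limit above (the exact ceiling depends on sample mass and carbon content).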
Two Upper Bounds for the Weighted Path Length of Binary Trees. Report No. UIUCDCS-R-73-565.
ERIC Educational Resources Information Center
Pradels, Jean Louis
Rooted binary trees with weighted nodes are structures encountered in many areas, such as coding theory, searching and sorting, information storage and retrieval. The path length is a meaningful quantity which gives indications about the expected time of a search or the length of a code, for example. In this paper, two sharp bounds for the total…
The Mystery of Io's Warm Polar Regions: Implications for Heat Flow
NASA Technical Reports Server (NTRS)
Matson, D. L.; Veeder, G. J.; Johnson, T. V.; Blaney, D. L.; Davies, A. G.
2002-01-01
Unexpectedly warm polar temperatures further support the idea that Io is covered virtually everywhere by cooling lava flows. This implies a new heat flow component. Io's heat flow remains constrained between a lower bound of ≈2.5 W m⁻² and an upper bound of ≈13 W m⁻². Additional information is contained in the original extended abstract.
Verifying the error bound of numerical computation implemented in computer systems
Sawada, Jun
2013-03-12
A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment and converts, for each segment, a polynomial of bounded functions for the segment to a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment and reports the segments that violate a bounding condition.
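A toy stand-in for the procedure (a real tool bounds polynomials of bounded functions and uses tighter segment-wise reasoning; the function names and the error polynomial here are hypothetical): split the domain, bound the polynomial on each segment by crude interval arithmetic, and report the segments that violate the threshold.

```python
def poly_upper_bound(coeffs, lo, hi):
    """Crude interval-arithmetic upper bound of sum(c_k * x^k) over
    [lo, hi]; correct but not tight. Assumes lo >= 0 for simplicity."""
    ub = 0.0
    for k, c in enumerate(coeffs):
        p_lo, p_hi = lo ** k, hi ** k          # bounds of x^k on [lo, hi]
        ub += c * (p_hi if c >= 0 else p_lo)   # worst case per term
    return ub

def verify(coeffs, domain, threshold, n_segments):
    """Split the domain, bound the polynomial on each segment, and return
    the segments whose upper bound violates the error threshold."""
    a, b = domain
    step = (b - a) / n_segments
    bad = []
    for i in range(n_segments):
        lo, hi = a + i * step, a + (i + 1) * step
        if poly_upper_bound(coeffs, lo, hi) > threshold:
            bad.append((lo, hi))
    return bad

# Hypothetical error polynomial for an approximation on [0, 1].
print(verify([1e-9, 0.0, 3e-7, 2e-6], (0.0, 1.0), threshold=1e-6, n_segments=8))
```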
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aad, G.; Abbott, B.; Abdallah, J.
2016-01-28
A search for a Higgs boson produced via vector-boson fusion and decaying into invisible particles is presented, using 20.3 fb⁻¹ of proton-proton collision data at a centre-of-mass energy of 8 TeV recorded by the ATLAS detector at the LHC. For a Higgs boson with a mass of 125 GeV, assuming the Standard Model production cross section, an upper bound of 0.28 is set on the branching fraction of H → invisible at 95% confidence level, where the expected upper limit is 0.31. Furthermore, the results are interpreted in models of Higgs-portal dark matter, where the branching-fraction limit is converted into upper bounds on the dark-matter-nucleon scattering cross section as a function of the dark-matter particle mass and compared to results from direct dark-matter detection experiments.
NASA Astrophysics Data System (ADS)
Badescu, Viorel; Landsberg, Peter T.
1995-08-01
The general theory developed in part I was applied to build up two models of photovoltaic conversion. To this end two different systems were analyzed. The first system consists of the whole absorber (converter), for which the balance equations for energy and entropy are written and then used to derive an upper bound for solar energy conversion. The second system covers a part of the absorber (converter), namely the valence and conduction electronic bands. The balance of energy is used in this case to derive, under additional assumptions, another upper limit for the conversion efficiency. This second system deals with the real location where the power is generated. Both models take into consideration the radiation polarization and reflection, and the effects of concentration. The second model yields a more accurate upper bound for the conversion efficiency. A generalized solar cell equation is derived. It is proved that other previous theories are particular cases of the present more general formalism.
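For orientation, one classical whole-absorber bound obtained from exactly this kind of energy and entropy balance is the Landsberg efficiency for ambient temperature T_a and solar temperature T_s:

```latex
\eta \;\le\; 1 - \frac{4}{3}\frac{T_a}{T_s} + \frac{1}{3}\left(\frac{T_a}{T_s}\right)^{4},
```

about 0.93 for T_a = 300 K and T_s = 5760 K. Whether this specific expression coincides with the paper's first model is an assumption; it is quoted only to indicate the scale of such limits.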
Limits on the fluctuating part of y-type distortion monopole from Planck and SPT results
NASA Astrophysics Data System (ADS)
Khatri, Rishi; Sunyaev, Rashid
2015-08-01
We use the published Planck and SPT cluster catalogs [1,2] and recently published y-distortion maps [3] to put strong observational limits on the contribution of the fluctuating part of the y-type distortions to the y-distortion monopole. Our bounds are 5.4×10⁻⁸ < ⟨y⟩ < 2.2×10⁻⁶. Our upper bound is a factor of 6.8 stronger than the currently best upper 95% confidence limit from COBE-FIRAS of ⟨y⟩ < 15×10⁻⁶. In the standard cosmology, large-scale structure is the only source of such distortions, and our limits therefore constrain the baryonic physics involved in the formation of the large-scale structure. Our lower limit, from the detected clusters in the Planck and SPT catalogs, also implies that a PIXIE-like experiment should detect the y-distortion monopole at >27σ. The biggest sources of uncertainty in our upper limit are the monopole offsets between different HFI channel maps, which we estimate to be <10⁻⁶.
On the realization of the bulk modulus bounds for two-phase viscoelastic composites
NASA Astrophysics Data System (ADS)
Andreasen, Casper Schousboe; Andreassen, Erik; Jensen, Jakob Søndergaard; Sigmund, Ole
2014-02-01
Materials with good vibration damping properties and high stiffness are of great industrial interest. In this paper the bounds for viscoelastic composites are investigated and material microstructures that realize the upper bound are obtained by topology optimization. These viscoelastic composites can be realized by additive manufacturing technologies followed by an infiltration process. Viscoelastic composites consisting of a relatively stiff elastic phase, e.g. steel, and a relatively lossy viscoelastic phase, e.g. silicone rubber, have non-connected stiff regions when optimized for maximum damping. In order to ensure manufacturability of such composites the connectivity of the matrix is ensured by imposing a conductivity constraint and the influence on the bounds is discussed.
NASA Astrophysics Data System (ADS)
Castro-González, N.; Vélez-Cerrada, J. Y.
2008-05-01
Given a bounded operator A on a Banach space X with Drazin inverse A^D and index r, we study the class of group invertible bounded operators B such that I + A^D(B − A) is invertible and . We show that they can be written, with respect to the decomposition, as a matrix operator, , where B_1 and are invertible. Several characterizations of the perturbed operators are established, extending matrix results. We analyze the perturbation of the Drazin inverse and provide explicit upper bounds for ‖B^# − A^D‖ and ‖BB^# − A^D A‖. We obtain a result on the continuity of the group inverse for operators on Banach spaces.
Bounds on invisible Higgs boson decays extracted from LHC ttH production data.
Zhou, Ning; Khechadoorian, Zepyoor; Whiteson, Daniel; Tait, Tim M P
2014-10-10
We present an upper bound on the branching fraction of the Higgs boson to invisible particles by recasting a CMS Collaboration search for top squarks decaying to tt̄ + E_T^miss. The observed (expected) bound, BF(H → inv.) < 0.40 (0.65) at 95% C.L., is the strongest direct limit to date, benefiting from a downward fluctuation in the CMS data in that channel. In addition, we combine this new constraint with existing published constraints to give an observed (expected) bound of BF(H → inv.) < 0.40 (0.40) at 95% C.L., and we show some of the implications for theories of dark matter which communicate through the Higgs portal.
A regularization method for extrapolation of solar potential magnetic fields
NASA Technical Reports Server (NTRS)
Gary, G. A.; Musielak, Z. E.
1992-01-01
The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
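A minimal sketch of the idea in one horizontal dimension: the Cauchy continuation amplifies each Fourier mode by exp(|k|z), which makes raw extrapolation unstable, so a Gaussian low-pass smoothing of the initial data (standing in for the Tikhonov-style regularization described above) is applied before continuation. The grid, widths, and field shape are hypothetical.

```python
import numpy as np

# Minimal 1D sketch of regularized Fourier extrapolation of a potential
# field. The Cauchy problem admits exponentially growing modes exp(+|k| z),
# so extrapolating noisy data directly is ill-posed; smoothing the initial
# data (width `s` plays the role of the regularization parameter) damps the
# high-wavenumber modes before they are amplified.

def extrapolate(b0: np.ndarray, dx: float, z: float, s: float) -> np.ndarray:
    k = 2 * np.pi * np.fft.fftfreq(b0.size, d=dx)   # horizontal wavenumbers
    modes = np.fft.fft(b0)
    modes *= np.exp(-0.5 * (s * k) ** 2)            # regularizing smoother
    modes *= np.exp(np.abs(k) * z)                  # ill-posed continuation
    return np.fft.ifft(modes).real

x = np.linspace(0.0, 100.0, 256)                    # horizontal grid (Mm, say)
b_surface = np.exp(-((x - 50.0) / 10.0) ** 2)       # synthetic magnetogram
b_up = extrapolate(b_surface, dx=x[1] - x[0], z=5.0, s=2.0)
print(f"peak extrapolated field: {b_up.max():.4f}")
```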
Money Gone Up in Smoke: The Tobacco Use and Malnutrition Nexus in Bangladesh.
Husain, Muhammad Jami; Virk-Baker, Mandeep; Parascandola, Mark; Khondker, Bazlul Haque; Ahluwalia, Indu B
The tobacco epidemic in Bangladesh is pervasive. Expenditures on tobacco may reduce money available for food in a country with a high malnutrition rate. The aims of the study are to quantify the opportunity costs of tobacco expenditure in terms of nutrition (ie, food energy) forgone and the potential improvements in the household level food-energy status if the money spent on tobacco were diverted for food consumption. We analyzed data from the 2010 Bangladesh Household Income and Expenditure Survey, a nationally representative survey conducted among 12,240 households. We present 2 analytical scenarios: (1) the lower-bound gain scenario entailing money spent on tobacco partially diverted to acquiring food according to households' food consumption share in total expenditures; and (2) the upper-bound gain scenario entailing money spent on tobacco diverted to acquiring food only. Age- and gender-based energy norms were used to identify food-energy deficient households. Data were analyzed by mutually exclusive smoking-only, smokeless-only, and dual-tobacco user households. On average, a smoking-only household could gain 269-497 kilocalories (kcal) daily under the lower-bound and upper-bound scenarios, respectively. The potential energy gains for smokeless-only and dual-tobacco user households ranged from 148-268 kcal and 508-924 kcal, respectively. Under these lower- and upper-bound estimates, the percentage of smoking-only user households that are malnourished declined significantly from the baseline rate of 38% to 33% and 29%, respectively. For the smokeless-only and dual-tobacco user households, there were 2-3 and 6-9 percentage point drops in the malnutrition prevalence rates. The tobacco expenditure shift could translate to an additional 4.6-7.7 million food-energy malnourished persons meeting their caloric requirements. The findings suggest that tobacco use reduction could facilitate concomitant improvements in population-level nutrition status and may inform the development and refinement of tobacco prevention and control efforts in Bangladesh. Copyright © 2016. Published by Elsevier Inc.
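A hedged sketch of the two analytical scenarios described above; all monetary and price inputs are hypothetical placeholders, not values from the survey.

```python
# Sketch of the lower-bound and upper-bound energy-gain scenarios. All
# numbers (daily tobacco spend, food expenditure share, price per kcal)
# are made-up placeholders for illustration only.

tobacco_spend = 20.0      # currency units/day spent on tobacco (hypothetical)
food_share = 0.60         # food's share of total household expenditure
price_per_kcal = 0.02     # currency units per kilocalorie of food

# Scenario 1 (lower bound): only the food share of tobacco money buys food.
kcal_lower = tobacco_spend * food_share / price_per_kcal
# Scenario 2 (upper bound): all tobacco money is diverted to food.
kcal_upper = tobacco_spend / price_per_kcal

print(f"daily energy gain: {kcal_lower:.0f}-{kcal_upper:.0f} kcal")
```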
Recent Development on O(+) - O Collision Frequency and Ionosphere-Thermosphere Coupling
NASA Technical Reports Server (NTRS)
Omidvar, K.; Menard, R.
1999-01-01
The collision frequency between an oxygen atom and its singly charged ion controls the momentum transfer between the ionosphere and the thermosphere. There has been a long-standing discrepancy, extending over a decade, between the theoretical and empirical determination of this frequency: the empirical value of this frequency exceeded the theoretical value by a factor of 1.7. Recent improvements in theory were obtained by using accurate oxygen ion-oxygen atom potential energy curves, and partial wave quantum mechanical calculations. We have now applied three independent statistical methods to the observational data, obtained at the MIT/Millstone Hill Observatory, consisting of two sets A and B. These methods give results consistent with each other and, together with the recent theoretical improvements, bring the ratio close to unity, as it should be. The three statistical methods lead to an average for the ratio of the empirical to the theoretical values equal to 0.98, with an uncertainty of +/-8%, resolving the old discrepancy between theory and observation. The Hines statistics and the lognormal distribution statistics both give lower and upper bounds for Set A equal to 0.89 and 1.02, respectively. The related bounds for Set B are 1.06 and 1.17. The average values of these bounds thus bracket the ideal value of the ratio, which should be equal to unity. The main source of uncertainty is the error in the profile of the oxygen atom density, which is of the order of 11%. An alternative method to find the oxygen atom density is suggested.
Termination Proofs for String Rewriting Systems via Inverse Match-Bounds
NASA Technical Reports Server (NTRS)
Butler, Ricky (Technical Monitor); Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes
2004-01-01
Annotating a letter by a number, one can record information about its history during a reduction. A string rewriting system is called match-bounded if there is a global upper bound on these numbers. In earlier papers we established match-boundedness as a strong sufficient criterion for both termination and preservation of regular languages. We now show that string rewriting systems whose inverse (left- and right-hand sides exchanged) is match-bounded also have exceptional properties, but slightly different ones. Inverse match-bounded systems effectively preserve context-free languages; their sets of normalized strings and their sets of immortal strings are effectively regular. These sets of strings can be used to decide the normalization, termination, and uniform termination problems of inverse match-bounded systems. We also show that the termination problem is decidable in linear time, and that a certain strong reachability problem is decidable, thus solving two open problems of McNaughton's.
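A small sketch of the annotation bookkeeping that match-bounds formalize, under our own reading of the definition: letters carry heights, and applying a rule inserts the right-hand side's letters at one plus the minimum height in the matched block.

```python
# Sketch of match-height annotation: each letter carries a number recording
# its rewrite history. Applying rule lhs -> rhs at a matched position inserts
# the letters of rhs annotated with 1 + the minimum annotation in the match.
# A system is match-bounded if these numbers stay globally bounded.

Annotated = list[tuple[str, int]]  # (letter, height) pairs

def apply_rule(s: Annotated, lhs: str, rhs: str, pos: int) -> Annotated:
    window = s[pos:pos + len(lhs)]
    assert "".join(c for c, _ in window) == lhs, "rule does not match here"
    h = 1 + min(n for _, n in window)            # height of the new letters
    return s[:pos] + [(c, h) for c in rhs] + s[pos + len(lhs):]

start: Annotated = [(c, 0) for c in "aabb"]      # initial letters at height 0
step1 = apply_rule(start, lhs="ab", rhs="ba", pos=1)
print(step1)  # [('a', 0), ('b', 1), ('a', 1), ('b', 0)]
```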
Tightening the entropic uncertainty bound in the presence of quantum memory
NASA Astrophysics Data System (ADS)
Adabi, F.; Salimi, S.; Haseli, S.
2016-06-01
The uncertainty principle is a fundamental principle in quantum physics. It implies that the measurement outcomes of two incompatible observables cannot be predicted simultaneously. In quantum information theory, this principle can be expressed in terms of entropic measures. M. Berta et al. [Nat. Phys. 6, 659 (2010), 10.1038/nphys1734] showed that the uncertainty bound can be altered by considering a particle as a quantum memory correlated with the primary particle. In this article, we obtain a lower bound for the entropic uncertainty in the presence of a quantum memory by adding an additional term depending on the Holevo quantity and the mutual information. We conclude that our lower bound is tighter than that of Berta et al. when the accessible information about the measurement outcomes is less than the mutual information of the joint state. Some examples for which our lower bound is tighter than Berta et al.'s have been investigated. Using our lower bound, a lower bound for the entanglement of formation of bipartite quantum states has been obtained, as well as an upper bound for the regularized distillable common randomness.
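For orientation, the sketch below numerically evaluates the right-hand side of the Berta et al. bound (with the complementarity term q_MU = 1 for qubit X/Z measurements) on a family of Werner states; the tightened bound discussed above would add a Holevo/mutual-information correction on top of this.

```python
import numpy as np

# Numerical illustration of the Berta et al. entropic uncertainty bound
# S(X|B) + S(Z|B) >= q_MU + S(A|B) for qubit X/Z measurements (q_MU = 1),
# evaluated on Werner states. This evaluates only the Berta right-hand side,
# not the paper's improved bound.

def entropy(rho: np.ndarray) -> float:
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def partial_trace_A(rho: np.ndarray) -> np.ndarray:
    r = rho.reshape(2, 2, 2, 2)      # indices (a, b, a', b')
    return np.einsum("abac->bc", r)  # trace over subsystem A

phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)        # |Phi+>
bell = np.outer(phi, phi)
for p in (1.0, 0.75, 0.5):
    rho = p * bell + (1 - p) * np.eye(4) / 4               # Werner state
    cond = entropy(rho) - entropy(partial_trace_A(rho))    # S(A|B)
    print(f"p={p:.2f}: Berta bound = {1 + cond:+.3f}")     # 0 when p = 1
```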
NASA Astrophysics Data System (ADS)
Wang, Liwei; Liu, Xinggao; Zhang, Zeyin
2017-02-01
An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
Entropy Methods For Univariate Distributions in Decision Analysis
NASA Astrophysics Data System (ADS)
Abbas, Ali E.
2003-03-01
One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the limitations of the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution if, in addition to its fractiles, we also know it is continuous, and work through full examples to illustrate the approach.
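A minimal sketch of the FMED's piecewise-uniform shape: with only fractile constraints, the maximum entropy density is flat on each interval, which is exactly the discontinuous behaviour discussed above. The fractile values here are hypothetical.

```python
import numpy as np

# Sketch of the maximum entropy distribution subject to fractile constraints
# (FMED): between assessed fractiles the density is constant, so the density
# is piecewise uniform and generally discontinuous at the fractiles.

fractiles = np.array([0.0, 2.0, 5.0, 10.0])   # assessed values (hypothetical)
cum_probs = np.array([0.0, 0.25, 0.75, 1.0])  # cumulative probabilities

widths = np.diff(fractiles)
masses = np.diff(cum_probs)
density = masses / widths                      # constant over each interval

for a, b, d in zip(fractiles[:-1], fractiles[1:], density):
    print(f"f(x) = {d:.4f} for {a} <= x < {b}")
```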
Communication complexity and information complexity
NASA Astrophysics Data System (ADS)
Pankratov, Denis
Information complexity enables the use of information-theoretic tools in communication complexity theory. Prior to the results presented in this thesis, information complexity was mainly used for proving lower bounds and direct-sum theorems in the setting of communication complexity. We present three results that demonstrate new connections between information complexity and communication complexity. In the first contribution we thoroughly study the information complexity of the smallest nontrivial two-party function: the AND function. While computing the communication complexity of AND is trivial, computing its exact information complexity presents a major technical challenge. In overcoming this challenge, we reveal that information complexity gives rise to rich geometrical structures. Our analysis of information complexity relies on new analytic techniques and new characterizations of communication protocols. We also uncover a connection of information complexity to the theory of elliptic partial differential equations. Once we compute the exact information complexity of AND, we can compute the exact communication complexity of several related functions on n-bit inputs with some additional technical work. Previous combinatorial and algebraic techniques could only prove bounds of the form Θ(n). Interestingly, this level of precision is typical in the area of information theory, so our result demonstrates that this meta-property of precise bounds carries over to information complexity and in certain cases even to communication complexity. Our result not only strengthens the lower bound on the communication complexity of disjointness by making it more exact, but it also shows that information complexity provides the exact upper bound on communication complexity. In fact, this result is more general and applies to a whole class of communication problems. In the second contribution, we use self-reduction methods to prove strong lower bounds on the information complexity of two of the most studied functions in the communication complexity literature: Gap Hamming Distance (GHD) and Inner Product mod 2 (IP). In our first result we affirm the conjecture that the information complexity of GHD is linear even under the uniform distribution. This strengthens the Ω(n) bound shown by Kerenidis et al. (2012) and answers an open problem of Chakrabarti et al. (2012). We also prove that the information complexity of IP is arbitrarily close to the trivial upper bound n as the permitted error tends to zero, again strengthening the Ω(n) lower bound proved by Braverman and Weinstein (2011). More importantly, our proofs demonstrate that self-reducibility makes the connection between information complexity and communication complexity lower bounds a two-way connection. Whereas numerous results in the past used information complexity techniques to derive new communication complexity lower bounds, we explore a generic way in which communication complexity lower bounds imply information complexity lower bounds in a black-box manner. In the third contribution we consider the roles that private and public randomness play in the definition of information complexity. In communication complexity, private randomness can be trivially simulated by public randomness. Moreover, the communication cost of simulating public randomness with private randomness is well understood due to Newman's theorem (1991).
In information complexity, the roles of public and private randomness are reversed: public randomness can be trivially simulated by private randomness. However, the information cost of simulating private randomness with public randomness is not understood. We show that protocols that use only public randomness admit a rather strong compression. In particular, efficient simulation of private randomness by public randomness would imply a version of a direct sum theorem in the setting of communication complexity. This establishes yet another connection between the two areas.
Safe Upper-Bounds Inference of Energy Consumption for Java Bytecode Applications
NASA Technical Reports Server (NTRS)
Navas, Jorge; Mendez-Lojo, Mario; Hermenegildo, Manuel V.
2008-01-01
Many space applications such as sensor networks, on-board satellite-based platforms, on-board vehicle monitoring systems, etc. handle large amounts of data, and analysis of such data is often critical for the scientific mission. Transmitting such large amounts of data to the remote control station for analysis is usually too expensive for time-critical applications. Instead, modern space applications are increasingly relying on autonomous on-board data analysis. All these applications face many resource constraints. A key requirement is to minimize energy consumption. Several approaches have been developed for estimating the energy consumption of such applications (e.g. [3, 1]) based on measuring actual consumption at run-time for large sets of random inputs. However, this approach has the limitation that it is in general not possible to cover all possible inputs. Using formal techniques offers the potential for inferring safe energy consumption bounds, making them especially interesting for space exploration and safety-critical systems. We have proposed and implemented a general framework for resource usage analysis of Java bytecode [2]. The user defines a set of resource(s) of interest to be tracked and some annotations that describe the cost of some elementary elements of the program for those resources. These values can be constants or, more generally, functions of the input data sizes. The analysis then statically derives an upper bound on the amount of those resources that the program as a whole will consume or provide, also as functions of the input data sizes. This article develops a novel application of the analysis of [2] to inferring safe upper bounds on the energy consumption of Java bytecode applications. We first use a resource model that describes the cost of each bytecode instruction in terms of the joules it consumes. With this resource model, we then generate energy consumption cost relations, which are then used to infer safe upper bounds. How the energy consumption of each bytecode instruction is measured is beyond the scope of this paper; instead, this paper is about how to infer safe energy consumption estimations assuming that those energy consumption costs are provided. For concreteness, we use a simplified version of an existing resource model [1] in which an energy consumption cost for individual Java opcodes is defined.
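A toy sketch of the cost-relation idea, with made-up per-opcode energies (the paper assumes such costs are supplied by a resource model): the bound for a counted loop becomes a closed-form function of the input size n.

```python
# Hedged sketch of an energy cost relation: assign each bytecode instruction
# an energy cost in joules (the numbers below are invented placeholders, not
# measured values) and derive an upper bound on a loop's consumption as a
# function of the input size n.

OPCODE_COST_J = {"iload": 2e-9, "iadd": 1e-9, "if_icmplt": 3e-9}  # hypothetical

def loop_energy_bound(n: int) -> float:
    """Upper bound for a counted loop: header check + n body iterations."""
    body = OPCODE_COST_J["iload"] + OPCODE_COST_J["iadd"]
    check = OPCODE_COST_J["if_icmplt"]
    return check + n * (body + check)   # closed-form cost relation in n

print(f"E(n=1000) <= {loop_energy_bound(1000):.2e} J")
```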
Energy-constrained two-way assisted private and quantum capacities of quantum channels
NASA Astrophysics Data System (ADS)
Davis, Noah; Shirokov, Maksim E.; Wilde, Mark M.
2018-06-01
With the rapid growth of quantum technologies, knowing the fundamental characteristics of quantum systems and protocols is essential for their effective implementation. A particular communication setting that has received increased focus is related to quantum key distribution and distributed quantum computation. In this setting, a quantum channel connects a sender to a receiver, and their goal is to distill either a secret key or entanglement, along with the help of arbitrary local operations and classical communication (LOCC). In this work, we establish a general theory of energy-constrained, LOCC-assisted private and quantum capacities of quantum channels, which are the maximum rates at which an LOCC-assisted quantum channel can reliably establish a secret key or entanglement, respectively, subject to an energy constraint on the channel input states. We prove that the energy-constrained squashed entanglement of a channel is an upper bound on these capacities. We also explicitly prove that a thermal state maximizes a relaxation of the squashed entanglement of all phase-insensitive, single-mode input bosonic Gaussian channels, generalizing results from prior work. After doing so, we prove that a variation of the method introduced by Goodenough et al. [New J. Phys. 18, 063005 (2016), 10.1088/1367-2630/18/6/063005] leads to improved upper bounds on the energy-constrained secret-key-agreement capacity of a bosonic thermal channel. We then consider a multipartite setting and prove that two known multipartite generalizations of the squashed entanglement are in fact equal. We finally show that the energy-constrained, multipartite squashed entanglement plays a role in bounding the energy-constrained LOCC-assisted private and quantum capacity regions of quantum broadcast channels.
Vanishing spin stiffness in the spin-1/2 Heisenberg chain for any nonzero temperature
NASA Astrophysics Data System (ADS)
Carmelo, J. M. P.; Prosen, T.; Campbell, D. K.
2015-10-01
Whether at zero spin density m = 0 and finite temperatures T > 0 the spin stiffness of the spin-1/2 XXX chain is finite or vanishes remains an unsolved and controversial issue, as different approaches yield contradictory results. Here we explicitly compute the stiffness at m = 0 and find strong evidence that it vanishes. In particular, we derive an upper bound on the stiffness within a canonical ensemble at any fixed value of spin density m that is proportional to m²L in the thermodynamic limit of chain length L → ∞, for any finite, nonzero temperature, which implies the absence of ballistic transport for T > 0 at m = 0. Although our method relies in part on the thermodynamic Bethe ansatz (TBA), it does not evaluate the stiffness through the second derivative of the TBA energy eigenvalues relative to a uniform vector potential. Moreover, we provide strong evidence that in the thermodynamic limit the upper bounds on the spin current and stiffness used in our derivation remain valid under string deviations. Our results also provide strong evidence that in the thermodynamic limit the TBA method used by X. Zotos [Phys. Rev. Lett. 82, 1764 (1999), 10.1103/PhysRevLett.82.1764] leads to the exact stiffness values at finite temperature T > 0 for models whose stiffness is finite at T = 0, similar to the spin stiffness of the spin-1/2 Heisenberg chain but unlike the charge stiffness of the half-filled 1D Hubbard model.
Semiclassical analysis of spectral singularities and their applications in optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mostafazadeh, Ali
2011-08-15
Motivated by possible applications of spectral singularities in optics, we develop a semiclassical method of computing spectral singularities. We use this method to examine the spectral singularities of a planar slab gain medium whose gain coefficient varies due to the exponential decay of the intensity of the pumping beam inside the medium. For both singly and doubly pumped samples, we obtain universal upper bounds on the decay constant beyond which no lasing occurs. Furthermore, we show that the dependence of the wavelength of the spectral singularities on the value of the decay constant is extremely mild. This is an indication of the stability of optical spectral singularities.
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
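A minimal sketch of the multiscale-kernel idea (not the paper's exact algorithm): sum Gaussian kernels at several widths and fit a regularized least-squares scorer whose outputs induce a ranking. All data are synthetic.

```python
import numpy as np

# Sketch: a multiscale kernel as a sum of Gaussian kernels at several widths,
# used in a kernel ridge scorer whose predicted scores induce a ranking.
# This only illustrates the multiscale construction, not the paper's method.

def multiscale_kernel(X, Y, scales=(0.5, 1.0, 2.0)):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return sum(np.exp(-d2 / (2 * s**2)) for s in scales)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))               # items described by 4 features
y = X[:, 0] - 0.5 * X[:, 1]                # synthetic relevance scores

K = multiscale_kernel(X, X)
alpha = np.linalg.solve(K + 0.1 * np.eye(len(X)), y)   # kernel ridge fit
scores = K @ alpha
ranking = np.argsort(-scores)              # items ordered by predicted score
print(ranking[:5])
```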
Pharmacokinetics and repolarization effects of intravenous and transdermal granisetron.
Mason, Jay W; Selness, Daniel S; Moon, Thomas E; O'Mahony, Bridget; Donachie, Peter; Howell, Julian
2012-05-15
The need for greater clarity about the effects of 5-HT(3) receptor antagonists on cardiac repolarization is apparent in the changing product labeling across this therapeutic class. This study assessed the repolarization effects of granisetron, a 5-HT(3) receptor antagonist antiemetic, administered intravenously and by a granisetron transdermal system (GTDS). In a parallel four-arm study, healthy subjects were randomized to receive intravenous granisetron, GTDS, placebo, or oral moxifloxacin (active control). The primary endpoint was difference in change from baseline in mean Fridericia-corrected QT interval (QTcF) between GTDS and placebo (ddQTcF) on days 3 and 5. A total of 240 subjects were enrolled, 60 in each group. Adequate sensitivity for detection of QTc change was shown by a 5.75 ms lower bound of the 90% confidence interval (CI) for moxifloxacin versus placebo at 2 hours postdose on day 3. Day 3 ddQTcF values varied between 0.2 and 1.9 ms for GTDS (maximum upper bound of 90% CI, 6.88 ms), between -1.2 and 1.6 ms for i.v. granisetron (maximum upper bound of 90% CI, 5.86 ms), and between -3.4 and 4.7 ms for moxifloxacin (maximum upper bound of 90% CI, 13.45 ms). Day 5 findings were similar. Pharmacokinetic-ddQTcF modeling showed a minimally positive slope of 0.157 ms/(ng/mL), but a very low correlation (r = 0.090). GTDS was not associated with statistically or clinically significant effects on QTcF or other electrocardiographic variables. This study provides useful clarification on the effect of granisetron delivered by GTDS on cardiac repolarization. ©2012 AACR.
Using a Water Balance Model to Bound Potential Irrigation Development in the Upper Blue Nile Basin
NASA Astrophysics Data System (ADS)
Jain Figueroa, A.; McLaughlin, D.
2016-12-01
The Grand Ethiopian Renaissance Dam (GERD) on the Blue Nile is an example of water resource management underpinning food, water and energy security. Downstream countries have long expressed concern about water projects in Ethiopia because of possible diversions to agricultural uses that could reduce flow in the Nile. Such diversions are attractive to Ethiopia as a partial solution to its food security problems, but they could also conflict with hydropower revenue from the GERD. This research estimates an upper bound on diversions above the GERD project by considering the potential for irrigated agriculture expansion and, in particular, the availability of water and land resources for crop production. Although many studies have aimed to simulate downstream flows for various Nile basin management plans, few have taken the perspective of bounding the likely impacts of upstream agricultural development. The approach is to construct an optimization model to establish a bound on Upper Blue Nile (UBN) agricultural development, paying particular attention to soil suitability and seasonal variability in climate. The results show that land and climate constraints impose significant limitations on crop production. Only 25% of the land area is suitable for irrigation due to soil, slope and temperature constraints. When precipitation is also considered, only 11% of current land area could be used in a way that increases water consumption. The results suggest that Ethiopia could consume an additional 3.75 billion cubic meters (bcm) of water per year, through changes in land use and storage capacity. By exploiting this irrigation potential, Ethiopia could potentially decrease the annual flow downstream of the UBN by 8 percent, from the current 46 bcm/y to the modeled 42 bcm/y.
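A hedged sketch of the bounding approach as a linear program; crops, coefficients and limits below are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import linprog

# Hedged sketch: bound extra consumptive water use by maximizing it over
# crop/land choices subject to suitable-land and seasonal constraints.
# All coefficients are invented for illustration.

water_use = np.array([6000.0, 4500.0, 3000.0])   # m^3 per ha per year
c = -water_use                                    # linprog minimizes
A_ub = [[1.0, 1.0, 1.0],                          # total suitable land (ha)
        [0.8, 0.2, 0.5]]                          # dry-season land demand
b_ub = [500e3, 200e3]                             # hypothetical limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(f"upper bound on extra consumption: {-res.fun / 1e9:.2f} bcm/yr")
```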
Evolution of cosmic string networks
NASA Technical Reports Server (NTRS)
Albrecht, Andreas; Turok, Neil
1989-01-01
Results on cosmic strings are summarized, including: (1) the application of non-equilibrium statistical mechanics to cosmic string evolution; (2) a simple one-scale model for the long strings which has a great deal of predictive power; (3) results from large scale numerical simulations; and (4) a discussion of the observational consequences of our results. An upper bound on Gμ of approximately 10⁻⁷ emerges from the millisecond pulsar gravity wave bound. How numerical uncertainties affect this is discussed. Any changes which weaken the bound would probably also give the long strings the dominant role in producing observational consequences.
NASA Astrophysics Data System (ADS)
Chen, Zhixiang; Fu, Bin
This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^{(1-ε)/2} lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.
Necessary and sufficient criterion for extremal quantum correlations in the simplest Bell scenario
NASA Astrophysics Data System (ADS)
Ishizaka, Satoshi
2018-05-01
In the study of quantum nonlocality, one obstacle is that an analytical criterion for identifying the boundaries between quantum and postquantum correlations has not yet been given, even in the simplest Bell scenario. We propose a plausible, analytical, necessary and sufficient condition ensuring that a nonlocal quantum correlation in the simplest scenario is an extremal boundary point. Our extremality condition amounts to certifying an information-theoretic quantity: the probability of guessing a measurement outcome of a distant party, optimized over quantum instruments. We show that this quantity can be upper and lower bounded from any correlation in a device-independent way, and we use numerical calculations to confirm that coincidence of the upper and lower bounds appears to be necessary and sufficient for extremality.
On the validity of the Arrhenius equation for electron attachment rate coefficients.
Fabrikant, Ilya I; Hotop, Hartmut
2008-03-28
The validity of the Arrhenius equation for dissociative electron attachment rate coefficients is investigated. A general analysis allows us to obtain estimates of the upper temperature bound for the range of validity of the Arrhenius equation in the endothermic case, and both lower and upper bounds in the exothermic case with a reaction barrier. The results of the general discussion are illustrated by numerical examples whereby the rate coefficient for dissociative electron attachment is calculated as a function of temperature using resonance R-matrix theory. In the endothermic case, the activation energy in the Arrhenius equation is close to the threshold energy, whereas in the case of exothermic reactions with an intermediate barrier, the activation energy is found to be substantially lower than the barrier height.
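A small illustration of the regime where the Arrhenius form k = A exp(-Ea/kB T) holds: fitting ln k against 1/T recovers the activation energy. The synthetic rate data below assume Ea = 0.3 eV.

```python
import numpy as np

# Sketch: recover an activation energy from rate coefficients by fitting
# ln k against 1/T, valid inside the temperature bounds where the Arrhenius
# form applies. The data are synthetic with an assumed Ea of 0.3 eV.

kB = 8.617e-5                              # Boltzmann constant, eV/K
T = np.linspace(300.0, 600.0, 8)           # temperatures inside the bounds
k_rate = 1e-9 * np.exp(-0.3 / (kB * T))    # synthetic attachment rates

slope, intercept = np.polyfit(1.0 / T, np.log(k_rate), 1)
print(f"fitted Ea = {-slope * kB:.3f} eV")  # ~0.300
```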
On dynamic tumor eradication conditions under combined chemical/anti-angiogenic therapies
NASA Astrophysics Data System (ADS)
Starkov, Konstantin E.
2018-02-01
In this paper the ultimate dynamics of a five-dimensional cancer tumor growth model at the angiogenesis phase is studied. This model, elaborated by Pinho et al. in 2014, describes interactions between normal/cancer/endothelial cells under chemotherapy and anti-angiogenic agents in the tumor growth process. The author derives ultimate upper bounds for the normal/tumor/endothelial cell concentrations, and ultimate upper and lower bounds for the chemotherapy/anti-angiogenic agent concentrations. Global asymptotic tumor clearance conditions are obtained for two regimes: the use of chemotherapy alone, and the combined application of chemotherapy and anti-angiogenic therapy. These conditions are established as attraction conditions to the maximal invariant set in the tumor-free plane; furthermore, the case is examined in which this set consists only of tumor-free equilibrium points.
Robust guaranteed cost tracking control of quadrotor UAV with uncertainties.
Xu, Zhiwei; Nian, Xiaohong; Wang, Haibo; Chen, Yinsheng
2017-07-01
In this paper, a robust guaranteed cost controller (RGCC) is proposed for a quadrotor UAV system with uncertainties to address the set-point tracking problem. A sufficient condition for the existence of an RGCC is derived via the Lyapunov stability theorem. The designed RGCC not only guarantees that the whole closed-loop system is asymptotically stable but also gives the quadratic performance level built for the closed-loop system an upper bound irrespective of all admissible parameter uncertainties. Then, an optimal robust guaranteed cost controller is developed to minimize the upper bound of the performance level. Simulation results verify that the presented control algorithms possess small overshoot and short settling time, with which the quadrotor is able to perform the set-point tracking task well. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Thermal dark matter co-annihilating with a strongly interacting scalar
NASA Astrophysics Data System (ADS)
Biondini, S.; Laine, M.
2018-04-01
Recently, many investigations have considered Majorana dark matter co-annihilating with bound states formed by a strongly interacting scalar field. However, only the gluon radiation contribution to bound state formation and dissociation, which at high temperatures is subleading to soft 2 → 2 scatterings, has been included. Making use of a non-relativistic effective theory framework and solving a plasma-modified Schrödinger equation, we address the effect of soft 2 → 2 scatterings as well as the thermal dissociation of bound states. We argue that the mass splitting between the Majorana and scalar field has in general both a lower and an upper bound, and that the dark matter mass scale can be pushed at least up to 5…6 TeV.
A Priori Bound on the Velocity in Axially Symmetric Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Lei, Zhen; Navas, Esteban A.; Zhang, Qi S.
2016-01-01
Let v be the velocity of Leray-Hopf solutions to the axially symmetric three-dimensional Navier-Stokes equations. Under suitable conditions on the initial values, we prove the following a priori bound: |v(x, t)| ≤ C |ln r|^{1/2} / r², for 0 < r ≤ 1/2, where r is the distance from x to the z axis, and C is a constant depending only on the initial value. This provides a pointwise upper bound (worst-case scenario) for possible singularities, while the recent papers (Chen et al., Commun PDE 34(1-3):203-232, 2009; Koch et al., Acta Math 203(1):83-105, 2009) gave a lower bound. The gap is of polynomial order 1, modulo a half-log term.
Parallel algorithms for the molecular conformation problem
NASA Astrophysics Data System (ADS)
Rajan, Kumar
Given a set of objects, and some of the pairwise distances between them, the problem of identifying the positions of the objects in Euclidean space is referred to as the molecular conformation problem. This problem is known to be computationally difficult. One of the most important applications of this problem is the determination of the structure of molecules. In the case of molecular structure determination, usually only the lower and upper bounds on some of the interatomic distances are available. The process of obtaining a tighter set of bounds between all pairs of atoms, using the available interatomic distance bounds, is referred to as bound-smoothing. One method for bound-smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality---the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. The sequential tetrangle-inequality bound-smoothing algorithm considers a quadruple of atoms at a time, and tightens the bounds on each of its six distances. The sequential algorithm is computationally expensive, and its application is limited to molecules with up to a few hundred atoms. Here, we conduct an experimental study of tetrangle-inequality bound-smoothing and reduce the sequential time by identifying the most computationally expensive portions of the process. We also present a simple criterion to determine which of the quadruples of atoms are likely to be tightened the most by tetrangle-inequality bound-smoothing. This test could be used to enhance the applicability of this process to large molecules. We map the problem of parallelizing tetrangle-inequality bound-smoothing to that of generating disjoint packing designs of a certain kind. We map this, in turn, to a regular-graph coloring problem, and present a simple, parallel algorithm for tetrangle-inequality bound-smoothing. We implement the parallel algorithm on the Intel Paragon XP/S, and apply it to real-life molecules. Our results show that with this parallel algorithm, the tetrangle inequality can be applied to large molecules in a reasonable amount of time. We extend the regular graph to represent more general packing designs, and present a coloring algorithm for this graph. This can be used to generate constant-weight binary codes in parallel. Once a tighter set of distance bounds is obtained, the molecular conformation problem is usually formulated as a non-linear optimization problem, and a global optimization algorithm is then used to solve the problem. Here we present a parallel, deterministic algorithm for the optimization problem based on interval analysis. We implement our algorithm, using dynamic load balancing, on a network of Sun Ultra-Sparc workstations. Our experience with this algorithm shows that its application is limited to small instances of the molecular conformation problem, where the number of measured pairwise distances is close to the maximum value. However, since the interval method eliminates a substantial portion of the initial search space very quickly, it can be used to prune the search space before any of the more efficient, nondeterministic methods are applied.
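As a runnable illustration of the simpler precursor step, the sketch below performs triangle-inequality bound-smoothing on a tiny distance-bound matrix; the tetrangle step described above tightens these bounds further using Cayley-Menger determinants. The bound values are hypothetical.

```python
import numpy as np

# Sketch of triangle-inequality bound-smoothing: Floyd-Warshall tightens the
# upper bounds, and lower bounds are raised using l_ij >= l_ik - u_kj.

def triangle_smooth(L: np.ndarray, U: np.ndarray):
    n = len(U)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                U[i, j] = min(U[i, j], U[i, k] + U[k, j])
                L[i, j] = max(L[i, j], L[i, k] - U[k, j], L[j, k] - U[k, i])
    return L, U

U = np.array([[0.0, 1.5, 9.0], [1.5, 0.0, 1.5], [9.0, 1.5, 0.0]])
L = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
L, U = triangle_smooth(L, U)
print(U[0, 2])   # 3.0: tightened via the path through atom 1
```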
Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi
2016-05-23
A compressive sensing joint sparse representation direction-of-arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound on the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources that ensures the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator, which quantifies the theoretical DoA estimation performance, is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.
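A minimal sketch of the coherence quantity whose upper bound drives the sparsity condition, evaluated for an assumed uniform linear array and DoA grid (both our own illustrative choices).

```python
import numpy as np

# Sketch: mutual coherence of a steering/sensing matrix, the quantity whose
# upper bound underlies the angular-separation condition. The array layout
# (uniform linear, half-wavelength spacing) and DoA grid are assumptions.

def coherence(A: np.ndarray) -> float:
    A = A / np.linalg.norm(A, axis=0)            # normalize columns
    G = np.abs(A.conj().T @ A)                   # Gram matrix magnitudes
    np.fill_diagonal(G, 0.0)
    return float(G.max())

m, d = 8, 0.5                                    # sensors, spacing (wavelengths)
angles = np.deg2rad(np.arange(-90, 91, 2))       # candidate DoA grid
n = np.arange(m)[:, None]
A = np.exp(2j * np.pi * d * n * np.sin(angles))  # steering matrix
print(f"mutual coherence: {coherence(A):.3f}")
```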
The construction, fouling and enzymatic cleaning of a textile dye surface.
Onaizi, Sagheer A; He, Lizhong; Middelberg, Anton P J
2010-11-01
The enzymatic cleaning of a rubisco protein stain bound onto Surface Plasmon Resonance (SPR) biosensor chips having a dye-bound upper layer is investigated. This novel method allowed, for the first time, a detailed kinetic study of rubisco cleanability (defined as fraction of adsorbed protein removed from a surface) from dyed surfaces (mimicking fabrics) at different enzyme concentrations. Analysis of kinetic data using an established mathematical model able to decouple enzyme transfer and reaction processes [Onaizi, He, Middelberg, Chem. Eng. Sci. 64 (2008) 3868] revealed a striking effect of dyeing on enzymatic cleaning performance. Specifically, the absolute rate constants for enzyme transfer to and from a dye-bound rubisco stain were significantly higher than reported previously for un-dyed surfaces. These increased transfer rates resulted in higher surface cleanability. Higher enzyme mobility (i.e., higher enzyme adsorption and desorption rates) at the liquid-dye interface was observed, consistent with previous suggestions that enzyme surface mobility is likely correlated with overall enzyme cleaning performance. Our results show that reaction engineering models of enzymatic action at surfaces may provide insight able to guide the design of better stain-resistant surfaces, and may also guide efforts to improve cleaning formulations. Copyright 2010 Elsevier Inc. All rights reserved.
Yan, H; Sun, G A; Peng, S M; Zhang, Y; Fu, C; Guo, H; Liu, B Q
2015-10-30
We have constrained possible new interactions which produce nonrelativistic potentials between polarized neutrons and unpolarized matter proportional to α σ·v, where σ is the neutron spin and v is the relative velocity. We use existing data from laboratory measurements of the very long T₁ and T₂ spin relaxation times of polarized ³He gas in glass cells. Using the best available measured T₂ of polarized ³He gas atoms as the polarized source and the Earth as an unpolarized source, we obtain constraints on two new interactions. We present a new experimental upper bound on possible vector-axial-vector (V_VA) type interactions for ranges between 1 and 10⁸ m. In combination with previous results, we set the most stringent experimental limits on g_V g_A for ranges from ~μm to ~10⁸ m. We also report what is, to our knowledge, the first experimental upper limit on the possible torsion fields induced by the Earth on its surface. Dedicated experiments could further improve these bounds by a factor of ~100. Our method of analysis also makes it possible to probe many velocity-dependent interactions which depend on the spins of both neutrons and other particles and which have never been searched for before experimentally.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degteva, M. O.; Shagina, N. B.; Shishkina, Elena A.
Waterborne radioactive releases into the Techa River from the Mayak Production Association in Russia during 1949-1956 resulted in significant doses to about 30,000 persons who lived in downstream settlements. The residents were exposed to internal and external radiation. Two methods for reconstruction of the external dose are considered in this paper: electron paramagnetic resonance (EPR) measurements of teeth and fluorescence in situ hybridization (FISH) measurements of chromosome translocations in circulating lymphocytes. The main issue in the application of the EPR and FISH methods for reconstruction of the external dose for the Techa Riverside residents was strontium radioisotopes incorporated in teeth and bones, which served as a source of confounding local exposures. In order to estimate and subtract doses from incorporated 89,90Sr, the EPR and FISH assays were supported by measurements of 90Sr body burdens and estimates of 90Sr concentrations in dental tissues by the luminescence method. The resulting dose estimates derived from EPR and FISH measurements for residents of the upper Techa River were found to be consistent: the mean values vary from 510-550 mGy for the villages located close to the site of radioactive release to 130-160 mGy for the more distant villages. The upper bound of individual estimates for both methods is equal to 2.2-2.3 Gy. The EPR- and FISH-based dose estimates were compared with the doses calculated for the donors using the Techa River Dosimetry System (TRDS). The TRDS external dose assessments were based on data on contamination of the Techa River floodplain, simulation of air kerma above the contaminated soil, age-dependent lifestyles, and individual residence histories. For correct comparison, TRDS-based doses were calculated from two sources: external exposure from the contaminated environment and internal exposure from 137Cs incorporated in donors' soft tissues. The TRDS-based absorbed doses in tooth enamel and muscle were in agreement with the EPR- and FISH-based estimates within uncertainty bounds. Basically, the agreement between the estimates has confirmed the validity of external doses calculated with the Techa River Dosimetry System.
Bounds on graviton mass using weak lensing and SZ effect in galaxy clusters
NASA Astrophysics Data System (ADS)
Rana, Akshay; Jain, Deepak; Mahajan, Shobhit; Mukherjee, Amitabha
2018-06-01
In General Relativity (GR), the graviton is massless. However, a common feature of several theoretical alternatives to GR is a non-zero mass for the graviton. These theories can be described as massive gravity theories. Despite many theoretical complexities in these theories, on phenomenological grounds the implications of massive gravity have been widely used to put bounds on the graviton mass. One of the generic implications of giving a mass to the graviton is that the gravitational potential will follow a Yukawa-like fall-off. We use this feature of massive gravity theories to probe the mass of the graviton by using the largest gravitationally bound objects, namely galaxy clusters. In this work, we use the mass estimates of galaxy clusters measured at various cosmologically defined radial distances via weak lensing (WL) and the Sunyaev-Zel'dovich (SZ) effect. We also use model-independent values of the Hubble parameter H(z) smoothed by a non-parametric method, Gaussian process. Within the 1σ confidence region, we obtain a graviton mass m_g < 5.9 × 10⁻³⁰ eV with the corresponding Compton length scale λ_g > 6.82 Mpc from weak lensing, and m_g < 8.31 × 10⁻³⁰ eV with λ_g > 5.012 Mpc from the SZ effect. This analysis improves the upper bound on the graviton mass obtained earlier from galaxy clusters.
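A consistency check of the quoted numbers, assuming the Compton wavelength convention λ_g = h/(m_g c) (which reproduces the 6.82 Mpc figure from the weak-lensing mass bound).

```python
# Consistency check: Compton wavelength lambda_g = h / (m_g c) implied by
# the weak-lensing mass bound quoted in the abstract.

h = 4.1357e-15          # Planck constant, eV s
c = 2.9979e8            # speed of light, m/s
Mpc = 3.0857e22         # meters per megaparsec

m_g = 5.9e-30           # eV, weak-lensing bound from the abstract
lambda_g = h * c / m_g / Mpc
print(f"lambda_g > {lambda_g:.2f} Mpc")   # ~6.8, matching the quoted value
```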
Exact results for the finite time thermodynamic uncertainty relation
NASA Astrophysics Data System (ADS)
Manikandan, Sreekanth K.; Krishnamurthy, Supriya
2018-03-01
We obtain exact results for the recently discovered finite-time thermodynamic uncertainty relation for the dissipated work W_d in a stochastically driven system with non-Gaussian work statistics, both in the steady-state and transient regimes, by obtaining exact expressions for any moment of W_d at arbitrary times. The uncertainty function (the Fano factor of W_d) is bounded from below by 2k_B T as expected, for all times τ, in both the steady-state and transient regimes. The lower bound is reached at τ = 0 as well as when certain system parameters vanish (corresponding to an equilibrium state). Surprisingly, we find that the uncertainty function also reaches a constant value at large τ for all the cases we have looked at. For a system starting and remaining in the steady state, the uncertainty function increases monotonically as a function of τ as well as of other system parameters, implying that the large-τ value is also an upper bound. For the same system in the transient regime, however, we find that the uncertainty function can have a local minimum at an accessible time τ_m for a range of parameter values. The large-τ value of the uncertainty function is hence not a bound in this case. The non-monotonicity suggests, rather counter-intuitively, that there might be an optimal time for the working of microscopic machines, as well as an optimal configuration in the phase space of parameter values. Our solutions show that the ratios of higher moments of the dissipated work are also bounded from below by 2k_B T. For another model, also solvable by our methods, which never reaches a steady state, the uncertainty function is, in some cases, bounded from below by a value less than 2k_B T.
Pioneer Venus orbiter search for Venusian lightning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borucki, W.J.; Dyer, J.W.; Phillips, J.R.
1991-07-01
During 1988 and 1990, the star sensor aboard the Pioneer Venus orbiter (PVO) was used to search for optical pulses from lightning on the nightside of Venus. Useful data were obtained for 53 orbits in 1988 and 55 orbits in 1990. During this period, approximately 83 s of search time plus 7749 s of control data were obtained. The results again find no optical evidence for lightning activity. For the region that was observed during 1988, the results imply that the upper bound to short-duration flashes is 4 × 10⁻⁷ flashes/km²/s for flashes that are at least 50% as bright as typical terrestrial lightning. During 1990, when the 2-Hz filter was used, the results imply an upper bound of 1 × 10⁻⁷ flashes/km²/s for long-duration flashes at least 1.6% as bright as typical terrestrial lightning flashes or 33% as bright as the pulses observed by Venera 9. The upper bounds to the flash rates for the 1988 and 1990 searches are twice and one half the global terrestrial rate, respectively. These two searches covered the region from 60°N latitude to 30°S latitude, 250° to 350° longitude, and the region from 45°N latitude to 55°S latitude, 155° to 300° longitude. Both searches sampled much of the nightside region from the dawn terminator to within 4 hours of the dusk terminator. These searches covered a much larger latitude range than any previous search. The results show that the Beta and Phoebe Regio areas previously identified by Russell et al. (1988) as areas with high rates of lightning activity were not active during the two seasons of the observations. When the authors assume that their upper bounds to the nightside flash rate are representative of the entire planet, the results imply that the global flash rate and energy dissipation rate derived by Krasnopol'sky (1983) from his observation of a single storm are too high.
Prediction Interval Development for Wind-Tunnel Balance Check-Loading
NASA Technical Reports Server (NTRS)
Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.
2014-01-01
Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval method of calculation and a case study demonstrating its use is provided. Validation of the methods is demonstrated for the case study based on the probability of capture of confirmation points.
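A minimal sketch of a classical regression prediction interval of the kind such a procedure builds on; the calibration data below are synthetic, and the formula is the standard OLS one, not necessarily the project's exact accounting of calibration and check-load variability.

```python
import numpy as np
from scipy import stats

# Sketch: two-sided prediction interval for a new check-load response from a
# linear calibration fit, yhat +/- t * s * sqrt(1 + x0' (X'X)^-1 x0).
# The calibration data are synthetic placeholders.

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(20), rng.uniform(0, 100, 20)])  # [1, load]
beta_true = np.array([0.5, 2.0])
y = X @ beta_true + rng.normal(0, 1.0, 20)                   # responses

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = len(y) - X.shape[1]
s = np.sqrt(resid @ resid / dof)                # residual standard error
XtX_inv = np.linalg.inv(X.T @ X)

x0 = np.array([1.0, 50.0])                      # new check-load point
half = stats.t.ppf(0.975, dof) * s * np.sqrt(1 + x0 @ XtX_inv @ x0)
print(f"95% PI: {x0 @ beta:.2f} +/- {half:.2f}")
```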
3D magnetic sources' framework estimation using Genetic Algorithm (GA)
NASA Astrophysics Data System (ADS)
Ponte-Neto, C. F.; Barbosa, V. C.
2008-05-01
We present a method for inverting total-field anomalies to determine the frameworks of simple 3D magnetic sources such as batholiths, dikes, sills, geological contacts, and kimberlite and lamproite pipes. We use a GA to obtain the magnetic sources' frameworks and their magnetic features simultaneously. Specifically, we estimate the magnetization direction (inclination and declination) and the total dipole moment intensity, and the horizontal and vertical positions, in Cartesian coordinates, of a finite set of elementary magnetic dipoles. The spatial distribution of these magnetic dipoles composes the skeletal outlines of the geologic sources. We assume that the geologic sources have a homogeneous magnetization distribution, and thus all dipoles have the same magnetization direction and dipole moment intensity. To implement the GA, we use real-valued encoding with crossover, mutation, and elitism. To obtain a unique and stable solution, we set upper and lower bounds on declination and inclination of [0°, 360°] and [-90°, 90°], respectively. We also set the criterion of minimum scattering of the dipole-position coordinates, to guarantee that the spatial distribution of the dipoles (defining the source skeleton) be as close as possible to a continuous distribution. To this end, we fix the upper and lower bounds of the dipole moment intensity and we evaluate the dipole-position estimates. If the dipole scattering is greater than the value expected by the interpreter, the upper bound of the dipole moment intensity is reduced by 10%. We repeat this procedure until the dipole scattering and the data fitting are acceptable. We apply our method to noise-corrupted magnetic data from simulated 3D magnetic sources with simple geometries located at different depths. In tests simulating sources such as a sphere and a cube, all estimates of the dipole coordinates agree with the centers of mass of these sources. For elongated prismatic sources in an arbitrary direction, we estimate dipole-position coordinates coincident with the principal axes of the sources. In tests with synthetic data simulating the magnetic anomaly yielded by intrusive 2D structures such as dikes and sills, the estimates of the dipole coordinates coincide with the principal planes of these 2D sources. We also inverted the aeromagnetic data from Serra do Cabral, in southeastern Brazil, and estimated dipoles distributed on a horizontal plane at a depth of 30 km, with inclination and declination of 59.1° and -48.0°, respectively. The results showed close agreement with previous interpretations.
William J. Trush; Edward C. Connor; Alan W. Knight
1989-01-01
Riparian communities established along Elder Creek, a tributary of the upper South Fork Eel River, are bounded by two frequencies of periodic flooding. The upper limit for the riparian zone occurs at bankfull stage. The lower riparian limit is associated with a more frequent stage height, called the active channel, having an exceedance probability of 11 percent on a...
1987-08-01
of the absolute difference between the random variable and its mean. Gassmann and Ziemba [1986] provide a weaker bound that does not require... [a worked numerical comparison, garbled in the source, is omitted]. COMPARISONS OF BOUNDS. Gassmann and Ziemba [1986] extend an idea whereby the bound GZ is obtained as the optimal value of the following linear program (see Gassmann and Ziemba [1986], Theorem 1): GZ = max Σᵢ₌₁ᵐ λᵢ f(vᵢ) subject to Σᵢ₌₁ᵐ λᵢ vᵢ = x̄₀, Σᵢ₌₁ᵐ λᵢ = 1, λᵢ ≥ 0.
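Hedged sketch: a Gassmann-Ziemba-type upper bound on E[f(X)] for a convex f is the value of a small linear program over distributions with fixed support points and prescribed mean. The support points, function, and mean below are illustrative, not taken from the (garbled) source.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of a Gassmann-Ziemba-type bound: the worst-case expectation of a
# convex f over distributions supported on points v_i with a prescribed mean
# is the optimal value of a linear program in the weights lambda_i.

v = np.array([-2.0, 0.0, 1.0, 3.0])     # support points v_i (illustrative)
f = v**2                                 # convex function evaluated at v_i
mean = 0.5                               # prescribed mean E[X]

res = linprog(-f,                        # maximize sum lambda_i f(v_i)
              A_eq=np.vstack([v, np.ones_like(v)]),
              b_eq=[mean, 1.0],
              bounds=[(0, None)] * len(v))
print(f"upper bound on E[f(X)]: {-res.fun:.3f}")   # 6.500 here
```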
Bounds on Block Error Probability for Multilevel Concatenated Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana
1996-01-01
Maximum likelihood decoding of long block codes is not feasible due to its large complexity. Some classes of codes are shown to be decomposable into multilevel concatenated codes (MLCC). For these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC. We use this bound to evaluate the difference in performance between different decompositions of some codes. The examples given show that a significant reduction in complexity can be achieved by increasing the number of decoding stages. The resulting performance degradation varies for different decompositions. A guideline is given for finding good m-level decompositions.
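For context, the snippet below evaluates the classical single-level union bound on block error probability for soft-decision ML decoding on an AWGN channel; the multilevel bound derived in the paper refines this idea per decoding stage. The Hamming(7,4) weight enumerator is just an example input.

```python
import numpy as np
from scipy.special import erfc

# Classical union upper bound on block error probability for soft-decision
# ML decoding on AWGN: P_e <= sum_d A_d Q(sqrt(2 d R Eb/N0)). This is the
# single-level form, not the paper's multilevel concatenated-code bound.

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

A = {3: 7, 4: 7, 7: 1}          # weight enumerator of Hamming(7,4)
R = 4 / 7                        # code rate
for ebn0_db in (4.0, 6.0, 8.0):
    g = 10 ** (ebn0_db / 10)
    pe = sum(Ad * Q(np.sqrt(2 * d * R * g)) for d, Ad in A.items())
    print(f"Eb/N0 = {ebn0_db} dB: P_e <= {pe:.2e}")
```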
New Anomalous Lieb-Robinson Bounds in Quasiperiodic XY Chains
NASA Astrophysics Data System (ADS)
Damanik, David; Lemm, Marius; Lukic, Milivoje; Yessen, William
2014-09-01
We announce and sketch the rigorous proof of a new kind of anomalous (or sub-ballistic) Lieb-Robinson (LR) bound for an isotropic XY chain in a quasiperiodic transversal magnetic field. Instead of the usual effective light cone |x| ≤ v|t|, we obtain |x| ≤ v|t|^α for some 0 < α < 1. We can characterize the allowed values of α exactly as those exceeding the upper transport exponent α_u^+ of a one-body Schrödinger operator. To our knowledge, this is the first rigorous derivation of anomalous quantum many-body transport. We also discuss anomalous LR bounds with power-law tails for a random dimer field.
Precision Measurement of the Electron's Electric Dipole Moment Using Trapped Molecular Ions
NASA Astrophysics Data System (ADS)
Cairncross, William B.; Gresh, Daniel N.; Grau, Matt; Cossel, Kevin C.; Roussy, Tanya S.; Ni, Yiqi; Zhou, Yan; Ye, Jun; Cornell, Eric A.
2017-10-01
We describe the first precision measurement of the electron's electric dipole moment (d_e) using trapped molecular ions, demonstrating the application of spin interrogation times over 700 ms to achieve high sensitivity and stringent rejection of systematic errors. Through electron spin resonance spectroscopy on ¹⁸⁰Hf¹⁹F⁺ in its metastable ³Δ₁ electronic state, we obtain d_e = (0.9 ± 7.7_stat ± 1.7_syst) × 10⁻²⁹ e cm, resulting in an upper bound of |d_e| < 1.3 × 10⁻²⁸ e cm (90% confidence). Our result provides independent confirmation of the current upper bound of |d_e| < 9.4 × 10⁻²⁹ e cm [J. Baron et al., New J. Phys. 19, 073029 (2017), 10.1088/1367-2630/aa708e], and offers the potential to improve on this limit in the near future.
Limit cycles via higher order perturbations for some piecewise differential systems
NASA Astrophysics Data System (ADS)
Buzzi, Claudio A.; Lima, Maurício Firmino Silva; Torregrosa, Joan
2018-05-01
A classical perturbation problem is the polynomial perturbation of the harmonic oscillator, (x′, y′) = (−y + εf(x, y, ε), x + εg(x, y, ε)). In this paper we study the limit cycles that bifurcate from the period annulus via piecewise polynomial perturbations in two zones separated by a straight line. We prove that, for polynomial perturbations of degree n, no more than Nn − 1 limit cycles appear up to a study of order N. We also show that this upper bound is reached for orders one and two. Moreover, we study this problem in some classes of piecewise Liénard differential systems, providing better upper bounds for higher order perturbation in ε and showing also when they are reached. The Poincaré-Pontryagin-Melnikov theory is the main technique used to prove all the results.
Non-localization of eigenfunctions for Sturm-Liouville operators and applications
NASA Astrophysics Data System (ADS)
Liard, Thibault; Lissy, Pierre; Privat, Yannick
2018-02-01
In this article, we investigate a non-localization property of the eigenfunctions of Sturm-Liouville operators A_a = −∂_xx + a(·) Id with Dirichlet boundary conditions, where a(·) runs over the bounded nonnegative potential functions on the interval (0, L) with L > 0. More precisely, we address the extremal spectral problem of minimizing the L²-norm of a function e(·) on a measurable subset ω of (0, L), where e(·) runs over all eigenfunctions of A_a, at the same time with respect to all subsets ω having a prescribed measure and all L^∞ potential functions a(·) having a prescribed essential upper bound. We provide some existence and qualitative properties of the minimizers, as well as precise lower and upper estimates on the optimal value. Several consequences in control and stabilization theory are then highlighted.
Fisher information of a single qubit interacts with a spin-qubit in the presence of a magnetic field
NASA Astrophysics Data System (ADS)
Metwally, N.
2018-06-01
In this contribution, quantum Fisher information is utilized to estimate the parameters of a central qubit interacting with a single spin-qubit. The effect of the longitudinal, transverse and rotating strengths of the magnetic field on the degree of estimation is discussed. It is shown that, in the resonance case, the number of peaks and consequently the size of the estimation regions increase as the rotating magnetic field strength increases. The precision of estimating the central qubit parameters depends on the initial state settings of the central and spin qubits, namely whether they encode classical or quantum information. The upper bounds of the estimation degree are large if the two qubits encode classical information. In the non-resonance case, the estimation degree depends on whether the longitudinal or the transverse strength is larger. The coupling constant between the central qubit and the spin-qubit affects the estimation of the weight and phase parameters differently: the possibility of estimating the weight parameter decreases as the coupling constant increases, while it increases for the phase parameter. For a large number of spin particles, i.e., a spin bath, the upper bounds of the Fisher information with respect to the weight parameter of the central qubit decrease as the number of spin particles increases. As the interaction time increases, the upper bounds appear at different initial values of the weight parameter.
Simplest little Higgs model revisited: Hidden mass relation, unitarity, and naturalness
NASA Astrophysics Data System (ADS)
Cheung, Kingman; He, Shi-Ping; Mao, Ying-nan; Zhang, Chen; Zhou, Yang
2018-06-01
We analyze the scalar potential of the simplest little Higgs (SLH) model in an approach consistent with the spirit of continuum effective field theory (CEFT). By requiring correct electroweak symmetry breaking (EWSB) with the 125 GeV Higgs boson, we are able to derive a relation between the pseudoaxion mass m_η and the heavy top mass m_T, which serves as a crucial test of the SLH mechanism. By requiring m_η² > 0, an upper bound on m_T can be obtained for any fixed SLH global symmetry breaking scale f. We also point out that an absolute upper bound on f can be obtained by imposing the partial wave unitarity constraint, which in turn leads to absolute upper bounds of m_T ≲ 19 TeV, m_η ≲ 1.5 TeV, and m_Z′ ≲ 48 TeV. We present the allowed region in the three-dimensional parameter space characterized by f, t_β, m_T, taking into account the requirement of valid EWSB and the constraint from perturbative unitarity. We also propose a strategy for analyzing the fine-tuning problem consistent with the spirit of CEFT and apply it to the SLH. We suggest that the scalar potential and fine-tuning analysis strategies adopted here should also be applicable to a wide class of little Higgs and twin Higgs models, which may reveal interesting relations as crucial tests of the related EWSB mechanism and provide a new perspective on assessing their degree of fine-tuning.
Bounds on OPE coefficients from interference effects in the conformal collider
NASA Astrophysics Data System (ADS)
Córdova, Clay; Maldacena, Juan; Turiaci, Gustavo J.
2017-11-01
We apply the average null energy condition to obtain upper bounds on the three-point function coefficients of two stress tensors and a scalar operator, ⟨TTO_i⟩, in general CFTs. We also constrain the gravitational anomaly of U(1) currents in four-dimensional CFTs, which is encoded in three-point functions of the form ⟨TTJ⟩. In theories with a large N AdS dual we translate these bounds into constraints on the coefficient of a higher derivative bulk term of the form ∫φW². We speculate that these bounds also apply in de Sitter space. In this case our results constrain inflationary observables, such as the amplitude for chiral gravity waves that originate from higher derivative terms in the Lagrangian of the form φWW*.
Reduced conservatism in stability robustness bounds by state transformation
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.; Liang, Z.
1986-01-01
This note addresses the issue of 'conservatism' in the time domain stability robustness bounds obtained by the Liapunov approach. A state transformation is employed to improve the upper bounds on the linear time-varying perturbation of an asymptotically stable linear time-invariant system for robust stability. This improvement is due to the variance of the conservatism of the Liapunov approach with respect to the basis of the vector space in which the Liapunov function is constructed. Improved bounds are obtained, using a transformation, on elemental and vector norms of perturbations (i.e., structured perturbations) as well as on a matrix norm of perturbations (i.e., unstructured perturbations). For the case of a diagonal transformation, an algorithm is proposed to find the 'optimal' transformation. Several examples are presented to illustrate the proposed analysis.
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm= d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
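A hedged, self-contained sketch of an MRE-flavored solve (not the paper's MATLAB code): it recovers m in Gm = d by minimizing the relative entropy of m to a prior guess s subject to elementwise bounds, the same three ingredients (prior expectation, bounds, data) that constrain the MRE density. All names and parameters below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def mre_like_solve(G, d, s, lower, upper, lam=1e3):
    """Minimize relative entropy of m to the prior s plus a data-misfit
    penalty, subject to elementwise bounds on m."""
    def objective(m):
        return np.sum(m * np.log(m / s)) + lam * np.sum((G @ m - d) ** 2)
    res = minimize(objective, x0=s, bounds=list(zip(lower, upper)),
                   method="L-BFGS-B")
    return res.x

rng = np.random.default_rng(1)
G = np.abs(rng.standard_normal((20, 50)))                  # toy forward kernel
m_true = np.exp(-0.5 * ((np.arange(50) - 25) / 5.0) ** 2)  # "source history"
m_hat = mre_like_solve(G, G @ m_true, s=np.full(50, 0.5),
                       lower=np.full(50, 1e-6), upper=np.full(50, 2.0))
print(float(np.abs(m_hat - m_true).max()))                 # recovery error
```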
Decoy-state quantum key distribution with biased basis choice
Wei, Zhengchao; Wang, Weilong; Zhang, Zhen; Gao, Ming; Ma, Zhi; Ma, Xiongfeng
2013-01-01
We propose a quantum key distribution scheme that combines a biased basis choice with the decoy-state method. In this scheme, Alice sends all signal states in the Z basis and decoy states in the X and Z bases with certain probabilities, and Bob measures received pulses with an optimal basis choice. This scheme simplifies the system and reduces the random number consumption. From simulation results taking statistical fluctuations into account, we find that in a typical experimental setup, the proposed scheme can increase the key rate by at least 45% compared to the standard decoy-state scheme. In the postprocessing, we also apply a rigorous method to upper bound the phase error rate of the single-photon components of signal states. PMID:23948999
Proof of a Dain inequality with charge
NASA Astrophysics Data System (ADS)
Lopes Costa, João
2010-07-01
We prove an upper bound for angular momentum and charge in terms of the mass for electro-vacuum asymptotically flat axisymmetric initial data sets with simply connected orbit space. This completes the work started in (Chruściel and Costa 2009 Class. Quantum Grav. 26 235013 (arXiv:gr-qc/0909.5625)) where this charged Dain inequality was first presented but where the proof of the main result, based on the methods of Chruściel et al (Ann. Phys. 2008 323 2591-613 (arXiv:gr-qc/0712.4064v2)), was only sketched. Here we present a complete proof while simplifying the methods suggested by Chruściel and Costa (2009 Class. Quantum Grav. 26 235013 (arXiv:gr-qc/0909.5625)).
Zavou, Christina; Kkoushi, Antria; Koutsou, Achilleas; Christodoulou, Chris
2017-11-01
The aim of the current work is twofold: firstly to adapt an existing method measuring the input synchrony of a neuron driven only by excitatory inputs so as to account for inhibitory inputs as well, and secondly to further adapt this measure so that it can be correctly utilised on experimentally-recorded data. The existing method uses the normalized pre-spike slope (NPSS) of the membrane potential, obtained by observing the slope of depolarization of the membrane potential of a neuron prior to the moment of crossing the threshold within a short period of time, to identify the response-relevant input synchrony and through it to infer the operational mode of a neuron. The first adaptation of the NPSS makes its upper-bound calculation accommodate the higher possible slope values caused by the lower average and minimum membrane potential values due to inhibitory inputs. Results indicate that when the input spike trains arrive randomly, the modified NPSS works as expected, inferring that the neuron is operating as a temporal integrator. When the input spike trains arrive in perfect synchrony, though, the modified NPSS works as expected only when the level of inhibition is much higher than the level of excitation. This suggests that the calculation of the upper bound of the NPSS should be a function of the ratio between excitatory and inhibitory inputs in order to correctly capture perfect synchrony at a neuron's input. In addition, we demonstrate a process to be followed when aiming to use the NPSS on real neuron recordings. This process, which relies on empirical observations of the slope of depolarisation for estimating the bounds for the range of observed interspike interval lengths, is successfully applied to experimentally-recorded data, showing that through it both a real neuron's operational mode and the amount of input synchrony that caused its firing can be inferred.
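A minimal sketch of an NPSS-style computation under stated assumptions (the paper's exact bound formulas are more involved, and its key point is that the upper bound should further depend on the excitation/inhibition ratio): the pre-spike slope is normalized between an integrator-like lower bound and an upper bound that grows when inhibition lowers the minimum membrane potential.

```python
import numpy as np

def npss(v, dt, v_th, v_reset, window=2e-3):
    """Pre-spike slope of each ISI, normalized between an integrator-like
    lower bound and a coincidence-like upper bound; values near 0 suggest
    temporal integration, values near 1 coincidence detection."""
    w = max(1, int(round(window / dt)))
    spikes = np.where((v[1:] >= v_th) & (v[:-1] < v_th))[0] + 1
    scores = []
    for prev, t in zip(spikes[:-1], spikes[1:]):
        if t - prev <= w:
            continue
        isi = (t - prev) * dt
        slope = (v[t] - v[t - w]) / window       # observed pre-spike slope
        lo = (v_th - v_reset) / isi              # slow climb over the whole ISI
        hi = (v_th - v[prev:t].min()) / window   # jump from the ISI's minimum:
                                                 # inhibition lowers the minimum
                                                 # and so raises this bound
        scores.append((slope - lo) / (hi - lo))
    return np.array(scores)

# Toy trace: noisy ramp-and-fire dynamics, purely illustrative
dt, v_th, v_reset = 1e-4, -50.0, -70.0
rng = np.random.default_rng(0)
v, trace = v_reset, []
for _ in range(100_000):
    v = v_reset if v >= v_th else v + 0.002 + 0.002 * rng.standard_normal()
    trace.append(min(v, v_th))
print(npss(np.array(trace), dt, v_th, v_reset).mean())
```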
A Multi-Armed Bandit Approach to Following a Markov Chain
2017-06-01
focus on the House to Café transition (p1,4). We develop a Multi-Armed Bandit approach for efficiently following this target, where each state takes the...and longitude (each state corresponding to a physical location and a small set of activities). The searcher would then apply our approach on this...the target’s transition probability and the true probability over time. Further, we seek to provide upper bounds (i.e., worst case bounds) on the
Uncertainty analysis for absorbed dose from a brain receptor imaging agent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aydogan, B.; Miller, L.F.; Sparks, R.B.
Absorbed dose estimates are known to contain uncertainties. A recent literature search indicates that prior to this study no rigorous investigation of uncertainty associated with absorbed dose had been undertaken. A method of uncertainty analysis for absorbed dose calculations has been developed and implemented for the brain receptor imaging agent ¹²³I-IPT. The two major sources of uncertainty considered were the uncertainty associated with the determination of residence time and that associated with the determination of the S values. There are many sources of uncertainty in the determination of the S values, but only the inter-patient organ mass variation was considered in this work. The absorbed dose uncertainties were determined for lung, liver, heart and brain. Ninety-five percent confidence intervals of the organ absorbed dose distributions for each patient and for a seven-patient population group were determined by the Latin Hypercube Sampling method. For an individual patient, the upper bound of the 95% confidence interval of the absorbed dose was found to be about 2.5 times larger than the estimated mean absorbed dose. For the seven-patient population, the upper bound of the 95% confidence interval of the absorbed dose distribution was around 45% more than the estimated population mean. For example, the 95% confidence interval of the population liver dose distribution was found to be between 1.49E+07 Gy/MBq and 4.65E+07 Gy/MBq, with a mean of 2.52E+07 Gy/MBq. This study concluded that patients in a population receiving ¹²³I-IPT could receive absorbed doses as much as twice as large as the standard estimated absorbed dose due to these uncertainties.
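A hedged sketch of the sampling scheme named above: Latin Hypercube draws propagate residence-time and S-value uncertainty into absorbed dose (dose = residence time × S value), from which a 95% confidence interval is read off. The distribution shapes and parameters below are illustrative, not the paper's fitted values.

```python
import numpy as np
from scipy.stats import qmc, lognorm

n = 10_000
u = qmc.LatinHypercube(d=2, seed=42).random(n)    # stratified uniforms in [0,1)^2
tau = lognorm(s=0.3, scale=4.0).ppf(u[:, 0])      # residence time, toy parameters
S = lognorm(s=0.4, scale=1e-5).ppf(u[:, 1])       # S value, toy parameters
dose = tau * S                                    # absorbed dose per unit activity
lo, hi = np.percentile(dose, [2.5, 97.5])
print(f"mean {dose.mean():.2e}, 95% CI [{lo:.2e}, {hi:.2e}]")
```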
Alternate methods for FAAT S-curve generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaufman, A.M.
The FAAT (Foreign Asset Assessment Team) assessment methodology attempts to derive a probability of effect as a function of incident field strength. The probability of effect is the likelihood that the stress put on a system exceeds its strength. In the FAAT methodology, both the stress and the strength are random variables whose statistical properties are estimated by experts. Each random variable has two components of uncertainty: systematic and random. The systematic uncertainty drives the confidence bounds in the FAAT assessment; its variance can be reduced by improved information. The variance of the random uncertainty is not reducible. The FAAT methodology uses an assessment code called ARES to generate probability-of-effect curves (S-curves) at various confidence levels. ARES assumes log-normal distributions for all random variables. The S-curves themselves are log-normal cumulants associated with the random portion of the uncertainty. The placement of the S-curves depends on the confidence bounds. The systematic uncertainty in both stress and strength is usually described by a mode and an upper and lower variance. Such a description is not consistent with the log-normal assumption of ARES, and an unsatisfactory workaround is used to obtain the required placement of the S-curves at each confidence level. We have looked into this situation and have found that significant errors are introduced by this workaround. These errors are at least several dB-W/cm² at all confidence levels, but they are especially bad in the estimate of the median. In this paper, we suggest two alternate solutions for the placement of S-curves. To compare these calculational methods, we have tabulated the common combinations of upper and lower variances and generated the relevant S-curve offsets from the mode difference of stress and strength.
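A minimal stress-strength sketch of how such an S-curve can be generated, assuming the log-normal model: in dB units both variables are normal, so the probability of effect has a closed form. All numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

def s_curve(field_db, coupling_db, strength_db, sig_stress_db, sig_strength_db):
    """P(effect) = P(stress > strength); in dB a log-normal becomes normal,
    so the classic stress-strength closed form applies.  Here the stress
    median is modeled as incident field plus a coupling offset."""
    margin = field_db + coupling_db - strength_db
    return norm.cdf(margin / np.hypot(sig_stress_db, sig_strength_db))

fields = np.linspace(-20.0, 40.0, 7)              # incident field, dB-W/cm^2
print(s_curve(fields, 0.0, 10.0, 3.0, 4.0))       # toy numbers
```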
Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.
Hu, Liang; Wang, Zidong; Liu, Xiaohui
2016-08-01
In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with such kind of introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
Thermal diffusivity and butterfly velocity in anisotropic Q-lattice models
NASA Astrophysics Data System (ADS)
Jeong, Hyun-Sik; Ahn, Yongjun; Ahn, Dujin; Niu, Chao; Li, Wei-Jia; Kim, Keun-Young
2018-01-01
We study a relation between the thermal diffusivity (D_T) and two quantum chaotic properties, the Lyapunov time (τ_L) and the butterfly velocity (v_B), in strongly correlated systems by using a holographic method. Recently, it was shown that E_i := D_{T,i}/(v_{B,i}² τ_L) (i = x, y) is universal in the sense that it is determined only by some scaling exponents of the IR metric in the low temperature limit, regardless of the matter fields and ultraviolet data. Inspired by this observation, by analyzing the anisotropic IR scaling geometry carefully, we find concrete expressions for E_i in terms of the critical dynamical exponents z_i in each direction: E_i = z_i/(2(z_i − 1)). Furthermore, we find the lower bound of E_i is always 1/2, which is not affected by anisotropy, contrary to the η/s case. However, there may be an upper bound determined by given fixed anisotropy.
Bounds on geologically current rates of motion of groups of hotspots.
NASA Astrophysics Data System (ADS)
Wang, C.; Gordon, R. G.; Zhang, T.
2017-12-01
It is widely believed that groups of hotspots in different regions of the world are in relative motion at rates of 10 to 30 mm a⁻¹ or more. Here we present a new method for analyzing geologically current motion between groups of hotspots beneath different plates. In an inversion of 56 globally distributed, equally weighted trends of hotspot tracks, the dispersion is dominated by differences in trend between different plates rather than differences within plates. Nonetheless the rate of hotspot motion perpendicular to the direction of absolute plate motion, v_perp, differs significantly from zero for only three of ten plates, and then by merely 0.3 to 1.4 mm a⁻¹. The global mean upper bound on |v_perp| is 3.2 ± 2.7 mm a⁻¹. Therefore, groups of hotspots move slowly and can be used to define a global reference frame for plate motions. Further implications for uncertainties in hotspot trends and current plate motion relative to hotspots will be discussed.
Rapid innovation diffusion in social networks
Kreindler, Gabriel E.; Young, H. Peyton
2014-01-01
Social and technological innovations often spread through social networks as people respond to what their neighbors are doing. Previous research has identified specific network structures, such as local clustering, that promote rapid diffusion. Here we derive bounds that are independent of network structure and size, such that diffusion is fast whenever the payoff gain from the innovation is sufficiently high and the agents’ responses are sufficiently noisy. We also provide a simple method for computing an upper bound on the expected time it takes for the innovation to become established in any finite network. For example, if agents choose log-linear responses to what their neighbors are doing, it takes on average less than 80 revision periods for the innovation to diffuse widely in any network, provided that the error rate is at least 5% and the payoff gain (relative to the status quo) is at least 150%. Qualitatively similar results hold for other smoothed best-response functions and populations that experience heterogeneous payoff shocks. PMID:25024191
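A hedged simulation sketch of the dynamics behind the stated bound: agents on a random network revise asynchronously with a logit (log-linear) response to the payoff difference implied by their neighbors' choices. The gain and noise values below are illustrative.

```python
import numpy as np

def diffuse(adj, gain=1.5, beta=3.0, periods=80, seed=0):
    """Asynchronous logit-response dynamics for a binary coordination game:
    the innovation pays (1 + gain) against adopters, the status quo pays 1
    against non-adopters; beta is the inverse noise level."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    x = np.zeros(n)                                  # all start at the status quo
    for _ in range(periods):
        for i in rng.permutation(n):
            nbr = adj[i]
            share = x[nbr].mean() if nbr.any() else 0.0
            dpay = (1 + gain) * share - (1 - share)  # payoff edge of adopting
            x[i] = rng.random() < 1.0 / (1.0 + np.exp(-beta * dpay))
    return x.mean()

rng0 = np.random.default_rng(1)
adj = rng0.random((200, 200)) < 0.03
adj = np.triu(adj, 1); adj = adj | adj.T             # random undirected graph
print("adoption share after 80 periods:", diffuse(adj))
```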
Computing row and column counts for sparse QR and LU factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, John R.; Li, Xiaoye S.; Ng, Esmond G.
2001-01-01
We present algorithms to determine the number of nonzeros in each row and column of the factors of a sparse matrix, for both the QR factorization and the LU factorization with partial pivoting. The algorithms use only the nonzero structure of the input matrix, and run in time nearly linear in the number of nonzeros in that matrix. They may be used to set up data structures or schedule parallel operations in advance of the numerical factorization. The row and column counts we compute are upper bounds on the actual counts. If the input matrix is strong Hall and there is no coincidental numerical cancellation, the counts are exact for QR factorization and are the tightest bounds possible for LU factorization. These algorithms are based on our earlier work on computing row and column counts for sparse Cholesky factorization, plus an efficient method to compute the column elimination tree of a sparse matrix without explicitly forming the product of the matrix and its transpose.
Optimal correction and design parameter search by modern methods of rigorous global optimization
NASA Astrophysics Data System (ADS)
Makino, K.; Berz, M.
2011-07-01
Frequently the design of schemes for correction of aberrations, or the determination of possible operating ranges for beamlines and cells in synchrotrons, exhibits multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, an abundance of optimization runs is frequently carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners of adjusting nonlinear parameters to achieve correction of high-order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions and the use of these underestimators to rigorously and iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle optics for the computation of aberrations allow the determination of particularly sharp underestimators for large regions. As a consequence, the subsequent progressive pruning of the allowed search space as the optimization progresses is carried out particularly effectively. The end result is the rigorous determination of the single or multiple optimal solutions of the parameter optimization, regardless of their location, their number, and the starting values of the optimization. The methods are particularly powerful if executed in interplay with genetic optimizers generating their new populations within the currently active unpruned space. Their current best guess provides rigorous upper bounds of the minima, which can then beneficially be used for better pruning. Examples of the method and its performance will be presented, including the determination of all operating points of desired tunes or chromaticities, etc., in storage ring lattices.
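A toy sketch of the branch-and-bound loop described above, with naive interval arithmetic standing in for the far sharper Taylor-model/differential-algebraic underestimators: boxes whose rigorous lower bound exceeds the best known upper bound are pruned, and the incumbent plays the role of the genetic optimizer's current best guess.

```python
import numpy as np

def f(x):  # toy 1-D objective with several local minima
    return np.sin(3 * x) + 0.3 * x * x

def f_lower(a, b):
    """Crude but valid interval lower bound on [a, b]:
    sin(.) >= -1 and x^2 >= its minimum over the box."""
    xx_min = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return -1.0 + 0.3 * xx_min

def branch_and_bound(a, b, tol=1e-4):
    best_x, best_f = a, f(a)                   # incumbent: rigorous upper bound
    boxes = [(a, b)]
    while boxes:
        a, b = boxes.pop()
        if f_lower(a, b) > best_f:             # box cannot contain the minimum
            continue                           # -> prune it
        m = 0.5 * (a + b)
        if f(m) < best_f:
            best_x, best_f = m, f(m)           # tighten the upper bound
        if b - a > tol:
            boxes += [(a, m), (m, b)]          # branch
    return best_x, best_f

print(branch_and_bound(-3.0, 3.0))
```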
Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.
2016-02-01
A novel two-phase bounding and decomposition approach is proposed to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems that consider a large number of operating subproblems, each of which is a convex optimization. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen's inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders' algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen's inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable, but the decomposition phase is required to attain tight optimality gaps. Moreover, using both phases performs better, in terms of convergence speed, than attempting to solve the problem using just the bounding phase or regular Benders decomposition separately.
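A hedged sketch of the two bounds on a toy capacity-planning problem (not the paper's model): solving with the expected scenario gives a Jensen lower bound because the second-stage cost is convex in the random demand, while evaluating a subsample-optimized plan on the full sample estimates an upper bound.

```python
import numpy as np

rng = np.random.default_rng(0)
build_cost, ship_cost, penalty = 10.0, 1.0, 50.0
demands = rng.gamma(5.0, 2.0, size=1000)              # scenario data

def second_stage(cap, d):
    # convex (piecewise-linear) recourse cost in the random demand d
    served = min(cap, d)
    return ship_cost * served + penalty * (d - served)

def total_cost(cap, ds):
    return build_cost * cap + np.mean([second_stage(cap, d) for d in ds])

caps = np.linspace(0.0, 30.0, 301)
# Jensen lower bound: replace all scenarios by their expectation.
lb = min(total_cost(c, [demands.mean()]) for c in caps)
# Statistical upper bound: optimize on a subsample, evaluate on the full set.
c_hat = min(caps, key=lambda c: total_cost(c, demands[:50]))
ub = total_cost(c_hat, demands)
print(f"Jensen lower bound {lb:.2f} <= optimum <= sampled upper bound {ub:.2f}")
```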
On the statistical properties and tail risk of violent conflicts
NASA Astrophysics Data System (ADS)
Cirillo, Pasquale; Taleb, Nassim Nicholas
2016-06-01
We examine statistical pictures of violent conflicts over the last 2000 years, providing techniques for dealing with the unreliability of historical data. We make use of a novel approach to deal with fat-tailed random variables with a remote but nonetheless finite upper bound, by defining a corresponding unbounded dual distribution (given that potential war casualties are bounded by the world population). This approach can also be applied to other fields of science where power laws play a role in modeling, like geology, hydrology, statistical physics and finance. We apply methods from extreme value theory on the dual distribution and derive its tail properties. The dual method allows us to calculate the real tail mean of war casualties, which proves to be considerably larger than the corresponding sample mean for large thresholds, meaning severe underestimation of the tail risks of conflicts from naive observation. We analyze the robustness of our results to errors in historical reports. We study inter-arrival times between tail events and find that no particular trend can be asserted. All the statistical pictures obtained are at variance with the prevailing claims about a 'long peace', namely that violence has been declining over time.
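A hedged numerical sketch of the dual-distribution device (the transform below follows our reading of the construction; all data and parameters are synthetic): casualties are bounded by H, the smooth map sends [L, H) onto [L, ∞), extreme-value tools are applied to the dual variable, and the tail ("shadow") mean is carried back.

```python
import numpy as np
from scipy.stats import genpareto

L, H = 3_000.0, 7.2e9                      # data lower bound, world population

def to_dual(x):                            # maps [L, H) onto [L, infinity)
    return L - H * np.log((H - x) / (H - L))

def from_dual(y):                          # inverse map back into [L, H)
    return H - (H - L) * np.exp((L - y) / H)

rng = np.random.default_rng(0)
x = np.minimum(L * (1 + rng.pareto(0.6, 100_000)), 0.999 * H)  # toy casualty data
y = to_dual(x)

# Fit a generalized Pareto tail to the (unbounded) dual exceedances, simulate
# from it, and map back: the "shadow" tail mean stays finite because X < H.
yu = np.quantile(y, 0.99)
xi, _, sig = genpareto.fit(y[y > yu] - yu, floc=0.0)
ysim = yu + genpareto.rvs(xi, scale=sig, size=200_000, random_state=1)
print("naive sample tail mean :", x[x > np.quantile(x, 0.99)].mean())
print("dual-implied tail mean :", from_dual(ysim).mean())
```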
Record length requirement of long-range dependent teletraffic
NASA Astrophysics Data System (ADS)
Li, Ming
2017-04-01
This article makes two main contributions. On the one hand, it presents a formula to compute the upper bound of the variance of the correlation periodogram measurement of teletraffic (traffic for short) with long-range dependence (LRD) for a given record length T and a given value of the Hurst parameter H (Theorems 1 and 2). On the other hand, it proposes two formulas for the computation of the variance upper bound of the correlation periodogram measurement of traffic of the fractional Gaussian noise (fGn) type and the generalized Cauchy (GC) type, respectively (Corollaries 1 and 2). These may constitute a reference guideline for the record length requirement of traffic with LRD. In addition, the record length requirement for the correlation periodogram measurement of traffic with either the Schuster type or the Bartlett one is studied, and the present results show that both types of periodograms may be used for the correlation measurement of traffic with a pre-desired variance bound of correlation estimation. Moreover, real traffic in the Internet Archive by the Special Interest Group on Data Communication under the Association for Computing Machinery of US (ACM SIGCOMM) is analyzed as a case study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fortes, Raphael; Rigolin, Gustavo, E-mail: rigolin@ifi.unicamp.br
We push the limits of the direct use of partially entangled pure states to perform quantum teleportation by presenting several protocols in many different scenarios that achieve the optimal efficiency possible. We review and put in a single formalism the three major strategies known to date that allow one to use partially entangled states for direct quantum teleportation (no distillation strategies permitted) and compare their efficiencies in real world implementations. We show how one can improve the efficiency of many direct teleportation protocols by combining these techniques. We then develop new teleportation protocols employing multipartite partially entangled states. The three techniques are also used here in order to achieve the highest efficiency possible. Finally, we prove the upper bound for the optimal success rate for protocols based on partially entangled Bell states and show that some of the protocols developed here achieve such a bound. Highlights: • Optimal direct teleportation protocols using partially entangled states directly. • We put in a single formalism all strategies of direct teleportation. • We extend these techniques to multipartite partially entangled states. • We give upper bounds for the optimal efficiency of these protocols.
Performance analysis of optimal power allocation in wireless cooperative communication systems
NASA Astrophysics Data System (ADS)
Babikir Adam, Edriss E.; Samb, Doudou; Yu, Li
2013-03-01
Cooperative communication has recently been proposed in wireless communication systems for exploiting the inherent spatial diversity in relay channels. Amplify-and-Forward (AF) cooperation protocols with multiple relays have not been sufficiently investigated, even though they have low implementation complexity. In this work we consider a cooperative diversity system in which a source transmits information to a destination with the help of multiple relay nodes using AF protocols, and we investigate the optimal allocation of power between the source and the relays by optimizing the symbol error rate (SER) performance in an efficient way. First, we derive a closed-form SER formulation for MPSK signals using the concept of the moment generating function and some statistical approximations at high signal-to-noise ratio (SNR) for the system under study. We then find a tight corresponding lower bound which converges to the same limit as the theoretical upper bound, and develop an optimal power allocation (OPA) technique based on mean channel gains to minimize the SER. Simulation results show that our scheme outperforms the equal power allocation (EPA) scheme and is tight to the theoretical approximation based on the SER upper bound at high SNR for different numbers of relays.
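A Monte Carlo sketch comparing equal and optimized source/relay power splits for one AF relay plus a direct link, using the standard harmonic-mean approximation for the relayed SNR and BPSK; this is a toy stand-in for the paper's closed-form SER optimization over multiple relays.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)
N = 200_000
h_sd2, h_sr2, h_rd2 = rng.exponential(1.0, (3, N))   # Rayleigh fading powers

def ser(p_src, p_rel):
    """Average BPSK SER with MRC of the direct link and one AF relay,
    using the harmonic-mean approximation for the relayed SNR."""
    g_sd = p_src * h_sd2                              # direct-link SNR
    g1, g2 = p_src * h_sr2, p_rel * h_rd2             # hop SNRs
    g_tot = g_sd + g1 * g2 / (g1 + g2 + 1.0)          # combined SNR
    return 0.5 * np.mean(erfc(np.sqrt(g_tot)))        # SER = E[Q(sqrt(2 g))]

P = 20.0                                              # total power (unit noise)
fracs = np.linspace(0.05, 0.95, 19)
sers = [ser(f * P, (1 - f) * P) for f in fracs]
best = fracs[int(np.argmin(sers))]
print(f"EPA: {ser(P/2, P/2):.3e}   OPA (source frac {best:.2f}): {min(sers):.3e}")
```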
NASA Astrophysics Data System (ADS)
Lukey, B. T.; Sheffield, J.; Bathurst, J. C.; Lavabre, J.; Mathys, N.; Martin, C.
1995-08-01
The sediment yield of two catchments in southern France was modelled using the newly developed sediment code of SHETRAN. A fire in August 1990 denuded the Rimbaud catchment, providing an opportunity to study the effect of vegetation cover on sediment yield by running the model for both pre- and post-fire cases. Model output is in the form of upper and lower bounds on sediment discharge, reflecting the uncertainty in the erodibility of the soil. The results are encouraging since measured sediment discharge falls largely between the predicted bounds, and simulated sediment yield is dramatically lower for the catchment before the fire, which matches observation. SHETRAN is also applied to the Laval catchment, which is subject to badlands gully erosion. Again using the principle of generating upper and lower bounds on sediment discharge, the model is shown to be capable of predicting the bulk sediment discharge over periods of months. To simulate the effect of reforestation, the model is run with vegetation cover equivalent to a neighbouring fully forested basin. The results obtained indicate that SHETRAN provides a powerful tool for predicting the impact of environmental change and land management on sediment yield.
Existence and amplitude bounds for irrotational water waves in finite depth
NASA Astrophysics Data System (ADS)
Kogelbauer, Florian
2017-12-01
We prove the existence of solutions to the irrotational water-wave problem in finite depth and derive an explicit upper bound on the amplitude of the nonlinear solutions in terms of the wavenumber, the total hydraulic head, the wave speed and the relative mass flux. Our approach relies upon a reformulation of the water-wave problem as a one-dimensional pseudo-differential equation and the Newton-Kantorovich iteration for Banach spaces. This article is part of the theme issue 'Nonlinear water waves'.
Entanglement polygon inequality in qubit systems
NASA Astrophysics Data System (ADS)
Qian, Xiao-Feng; Alonso, Miguel A.; Eberly, J. H.
2018-06-01
We prove a set of tight entanglement inequalities for arbitrary N-qubit pure states. By focusing on all bi-partite marginal entanglements between each single qubit and its remaining partners, we show that the inequalities provide an upper bound for each marginal entanglement, while the known monogamy relation establishes the lower bound. The restrictions and sharing properties associated with the inequalities are further analyzed with a geometric polytope approach, and examples of three-qubit GHZ-class and W-class entangled states are presented to illustrate the results.
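A hedged numerical check of the inequality for random pure three-qubit states (not the paper's code), measuring each one-vs-rest entanglement by the concurrence C_k = sqrt(2(1 − Tr ρ_k²)):

```python
import numpy as np

rng = np.random.default_rng(0)

def concurrence_one_vs_rest(psi, k):
    """C_k = sqrt(2 (1 - Tr rho_k^2)) for a pure 3-qubit state psi (dim 8)."""
    t = np.moveaxis(psi.reshape(2, 2, 2), k, 0).reshape(2, 4)
    rho = t @ t.conj().T                       # reduced state of qubit k
    return np.sqrt(max(0.0, 2.0 * (1.0 - float(np.real(np.trace(rho @ rho))))))

violations = 0
for _ in range(10_000):
    psi = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    psi /= np.linalg.norm(psi)                 # random pure state
    E = [concurrence_one_vs_rest(psi, k) for k in range(3)]
    # polygon inequality: no single-qubit entanglement exceeds the others' sum
    violations += any(E[k] > sum(E) - E[k] + 1e-9 for k in range(3))
print("violations found:", violations)         # expected: 0
```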
Quantum Speed Limits across the Quantum-to-Classical Transition
NASA Astrophysics Data System (ADS)
Shanahan, B.; Chenu, A.; Margolus, N.; del Campo, A.
2018-02-01
Quantum speed limits set an upper bound to the rate at which a quantum system can evolve. Adopting a phase-space approach, we explore quantum speed limits across the quantum-to-classical transition and identify equivalent bounds in the classical world. As a result, and contrary to common belief, we show that speed limits exist for both quantum and classical systems. As in the quantum domain, classical speed limits are set by a given norm of the generator of time evolution.
Bounds on the cross-correlation functions of state m-sequences
NASA Astrophysics Data System (ADS)
Woodcock, C. F.; Davies, Phillip A.; Shaar, Ahmed A.
1987-03-01
Lower and upper bounds on the peaks of the periodic Hamming cross-correlation function for state m-sequences, which are often used in frequency-hopped spread-spectrum systems, are derived. The state position mapped (SPM) sequences of the state m-sequences are described. The use of SPM sequences for OR-channel code division multiplexing is studied. The relation between the Hamming cross-correlation function and the correlation function of SPM sequence is examined. Numerical results which support the theoretical data are presented.
DD-bar production and their interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Yanrui; Oka, Makoto; Takizawa, Makoto
2011-05-23
We have explored the bound state problem and the scattering problem of the DD-bar pair in a meson exchange model. When considering their production in the e⁺e⁻ process, we included the DD-bar rescattering effect. Although it is difficult to answer whether the S-wave DD-bar bound state exists or not from the binding energies and the phase shifts, one may get an upper limit of the binding energy from the production of the BB-bar, the bottom analog of DD-bar.
Thin-wall approximation in vacuum decay: A lemma
NASA Astrophysics Data System (ADS)
Brown, Adam R.
2018-05-01
The "thin-wall approximation" gives a simple estimate of the decay rate of an unstable quantum field. Unfortunately, the approximation is uncontrolled. In this paper I show that there are actually two different thin-wall approximations and that they bracket the true decay rate: I prove that one is an upper bound and the other a lower bound. In the thin-wall limit, the two approximations converge. In the presence of gravity, a generalization of this lemma provides a simple sufficient condition for nonperturbative vacuum instability.
A Note on the Kirchhoff and Additive Degree-Kirchhoff Indices of Graphs
NASA Astrophysics Data System (ADS)
Yang, Yujun; Klein, Douglas J.
2015-06-01
Two resistance-distance-based graph invariants, namely, the Kirchhoff index and the additive degree-Kirchhoff index, are studied. A relation between them is established, with inequalities for the additive degree-Kirchhoff index arising via the Kirchhoff index along with minimum, maximum, and average degrees. Bounds for the Kirchhoff and additive degree-Kirchhoff indices are also determined, and extremal graphs are characterised. In addition, an upper bound for the additive degree-Kirchhoff index is established to improve a previously known result.
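A short sketch computing both invariants from resistance distances (via the Laplacian pseudoinverse) and numerically checking the elementary degree-based sandwich 2δ·Kf ≤ S ≤ 2Δ·Kf, which is the flavor of relation studied above; the 5-cycle example is illustrative.

```python
import numpy as np

def kirchhoff_indices(A):
    """Kirchhoff index Kf = sum_{i<j} r_ij and additive degree-Kirchhoff
    index S = sum_{i<j} (d_i + d_j) r_ij from the Laplacian pseudoinverse."""
    d = A.sum(axis=1)
    Li = np.linalg.pinv(np.diag(d) - A)
    R = np.add.outer(np.diag(Li), np.diag(Li)) - 2.0 * Li   # resistance distances
    iu = np.triu_indices_from(R, k=1)
    return R[iu].sum(), (np.add.outer(d, d)[iu] * R[iu]).sum()

# Toy check on the 5-cycle: S is sandwiched by 2*min_deg*Kf and 2*max_deg*Kf
A = np.zeros((5, 5)); i = np.arange(5)
A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0
kf, s = kirchhoff_indices(A)
print(kf, s, 2 * A.sum(1).min() * kf <= s <= 2 * A.sum(1).max() * kf)
```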
Tunç, Cemil; Tunç, Osman
2016-01-01
In this paper, a certain system of second-order linear homogeneous differential equations is considered. By using integral inequalities, some new criteria for bounded and [Formula: see text]-solutions, and upper bounds for the values of improper integrals of the solutions and their derivatives, are established for the considered system. The results obtained in this paper extend those obtained by Kroopnick (2014) [1]. An example is given to illustrate the obtained results.
Blow-up of solutions to a quasilinear wave equation for high initial energy
NASA Astrophysics Data System (ADS)
Li, Fang; Liu, Fang
2018-05-01
This paper deals with blow-up solutions to a nonlinear hyperbolic equation with variable exponents of nonlinearities. By constructing a new control function and using energy inequalities, the authors obtain a lower bound estimate of the L² norm of the solution. Furthermore, concavity arguments are used to prove the nonexistence of solutions; at the same time, an estimate of the upper bound of the blow-up time is also obtained. This result extends and improves those of [1,2].
NASA Astrophysics Data System (ADS)
van Driel, J.; Reiss, A. S.; Thomas, C.
2016-12-01
The topography of upper mantle seismic discontinuities can be used to constrain regional variations in the composition and temperature of the Earth's mantle. The 410 km discontinuity is caused by the solid-solid phase transition from olivine to wadsleyite. Due to its positive Clapeyron slope, the discontinuity is depressed in hot regimes. The phase transition from ringwoodite to bridgmanite and magnesiowüstite, in contrast, has a negative Clapeyron slope and therefore is elevated when hot material is present. Cold material is expected to yield an opposing topographic signature, culminating in an elevated 410 km and a depressed 660 km discontinuity. As part of the RHUM-RUM project (Réunion Hotspot and Upper Mantle - Réunions Unterer Mantel) we extract relevant geophysical parameters by investigating the properties of upper mantle seismic discontinuities beneath the Indian Ocean. The topography of the 410 and 660 km discontinuities, which define the upper and lower bounds of the mantle transition zone, has been mapped using PP and SS underside reflections. This study has utilised over 8500 events with Mw ≥ 5.8, distributed over the entire Indian Ocean. Our robust data set yields a dense coverage of points, which are defined by consistently crossing ray paths. Array seismology methods, such as vespagrams and slowness-backazimuth analysis, are used to enhance the signal-to-noise ratio and to detect and identify weak precursor signals. The differential travel times are corrected for crustal features and converted into depth values of the discontinuities by comparing the measured travel times with theoretical ones derived from ray tracing through the 1D reference Earth model ak135. A 'travel-time' stacking method has also been applied within 4° radius bins around each of the bounce points. The addition of a secondary method provides greater stability in our results and allows an enhanced error analysis procedure. In order to better constrain the mineralogical processes taking place within the mantle transition zone, amplitude ratios, polarities and velocity gradients have also been investigated.
Vertical structure of tropospheric winds on gas giants
NASA Astrophysics Data System (ADS)
Scott, R. K.; Dunkerton, T. J.
2017-04-01
Zonal mean zonal velocity profiles from cloud-tracking observations on Jupiter and Saturn are used to infer latitudinal variations of potential temperature consistent with a shear stable potential vorticity distribution. Immediately below the cloud tops, density stratification is weaker on the poleward and stronger on the equatorward flanks of midlatitude jets, while at greater depth the opposite relation holds. Thermal wind balance then yields the associated vertical shears of midlatitude jets in an altitude range bounded above by the cloud tops and bounded below by the level where the latitudinal gradient of static stability changes sign. The inferred vertical shear below the cloud tops is consistent with existing thermal profiling of the upper troposphere. The sense of the associated mean meridional circulation in the upper troposphere is discussed, and expected magnitudes are given based on existing estimates of the radiative timescale on each planet.
Gravitating Q-balls in the Affleck-Dine mechanism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamaki, Takashi; Sakai, Nobuyuki; Department of Education, Yamagata University, Yamagata 990-8560
2011-04-15
We investigate how gravity affects "Q-balls" with the Affleck-Dine potential V_AD(φ) := (m²/2)φ²[1 + K ln((φ/M)²)]. Contrary to the flat case, in which equilibrium solutions exist only if K < 0, we find three types of gravitating solutions, as follows. In the case that K < 0, ordinary Q-ball solutions exist; there is an upper bound on the charge due to gravity. In the case that K = 0, equilibrium solutions called (mini-)boson stars appear due to gravity; there is an upper bound on the charge, too. In the case that K > 0, equilibrium solutions appear, too. In this case, these solutions are not asymptotically flat but surrounded by Q-matter. These solutions might be important in considering a dark matter scenario in the Affleck-Dine mechanism.
Insights into the Earth System mass variability from CSR-RL05 GRACE gravity fields
NASA Astrophysics Data System (ADS)
Bettadpur, S.
2012-04-01
The next-generation Release-05 GRACE gravity field data products are the result of extensive effort applied to improvements to the GRACE Level-1 (tracking) data products, and to improvements in the background gravity models and processing methodology. As a result, the squared-error upper bound in the RL05 fields is half or less of the squared-error upper bound in the RL04 fields. The CSR-RL05 field release consists of unconstrained gravity fields as well as a regularized gravity field time-series that can be used for several applications without any post-processing error reduction. This paper will describe the background and the nature of these improvements in the data products, and provide an error characterization. We will describe the insights these new series offer in measuring the mass flux due to diverse hydrologic, oceanographic and cryospheric processes.
Search for violations of quantum mechanics
Ellis, John; Hagelin, John S.; Nanopoulos, D. V.; ...
1984-07-01
The treatment of quantum effects in gravitational fields indicates that pure states may evolve into mixed states, and Hawking has proposed a modification of the axioms of field theory which incorporates the corresponding violation of quantum mechanics. In this study we propose a modified hamiltonian equation of motion for density matrices and use it to interpret upper bounds on the violation of quantum mechanics in different phenomenological situations. We apply our formalism to the K⁰-K̄⁰ system and to long baseline neutron interferometry experiments. In both cases we find upper bounds of about 2 × 10⁻²¹ GeV on contributions to the single particle "hamiltonian" which violate quantum mechanical coherence. We discuss how these limits might be improved in the future, and consider the relative significance of other successful tests of quantum mechanics. Finally, an appendix contains model estimates of the magnitude of effects violating quantum mechanics.
DD-bar production and their interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Yanrui; Oka, Makoto; Takizawa, Makoto
2010-07-01
S- and P-wave DD-bar scatterings are studied in a meson exchange model with the coupling constants obtained in the heavy quark effective theory. With the extracted P-wave phase shifts and the separable potential approximation, we include the DD-bar rescattering effect and investigate the production process e⁺e⁻ → DD-bar. We find that it is difficult to explain the anomalous line shape observed by the BES Collaboration with this mechanism. Combining our model calculation and the experimental measurement, we estimate the upper limit of the nearly universal cutoff parameter to be around 2 GeV. With this number, the upper limits of the binding energies of the S-wave DD-bar and BB-bar bound states are obtained. Assuming that the S-wave and P-wave interactions rely on the same cutoff, our study provides a way of extracting information about S-wave molecular bound states from P-wave meson pair production.
An Analytical Framework for Runtime of a Class of Continuous Evolutionary Algorithms.
Zhang, Yushan; Hu, Guiwu
2015-01-01
Although there have been many studies on the runtime of evolutionary algorithms in discrete optimization, relatively few theoretical results have been proposed for continuous optimization, such as evolutionary programming (EP). This paper proposes an analysis of the runtime of two EP algorithms based on Gaussian and Cauchy mutations, using an absorbing Markov chain. Given a constant variation, we calculate the runtime upper bound of special Gaussian mutation EP and Cauchy mutation EP. Our analysis reveals that the upper bounds are impacted by the number of individuals, the problem dimension n, the search range, and the Lebesgue measure of the optimal neighborhood. Furthermore, we provide conditions whereby the average runtime of the considered EP can be no more than a polynomial of n. The condition is that the Lebesgue measure of the optimal neighborhood is larger than a combinatorial calculation of an exponential and the given polynomial of n.
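A hedged sketch of the two EP variants analyzed above, run on a toy sphere function with a constant variation; it empirically measures the number of generations needed to hit an ε-neighborhood of the optimum, the quantity the runtime bounds control. All settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ep_runtime(mutation, n=5, mu=20, sigma=0.5, eps=0.5, max_gen=10_000):
    """Generations a (mu+mu) EP needs to reach the eps-neighborhood of the
    optimum of the sphere function, with constant-variation mutation."""
    f = lambda x: (x ** 2).sum(-1)
    X = rng.uniform(-30.0, 30.0, (mu, n))            # initial population
    for gen in range(max_gen):
        if (f(X) <= eps).any():                      # reached optimal neighborhood
            return gen
        noise = (rng.standard_normal((mu, n)) if mutation == "gauss"
                 else rng.standard_cauchy((mu, n)))
        pool = np.vstack([X, X + sigma * noise])     # parents + mutated offspring
        X = pool[np.argsort(f(pool))[:mu]]           # elitist (mu+mu) selection
    return max_gen

print("Gaussian EP generations:", ep_runtime("gauss"))
print("Cauchy EP generations:  ", ep_runtime("cauchy"))
```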
Universal charge-radius relation for subatomic and astrophysical compact objects.
Madsen, Jes
2008-04-18
Electron-positron pair creation in supercritical electric fields limits the net charge of any static, spherical object, such as superheavy nuclei, strangelets, and Q balls, or compact stars like neutron stars, quark stars, and black holes. For radii between 4 × 10² and 10⁴ fm the upper bound on the net charge is given by the universal relation Z = 0.71 R(fm), and for larger radii (measured in femtometers or kilometers) Z = 7 × 10⁻⁵ R²(fm) = 7 × 10³¹ R²(km). For objects with nuclear density the relation corresponds to Z ≈ 0.7 A^(1/3) (10⁸ ≲ A ≲ 10¹²), where A is the baryon number. For some systems this universal upper bound improves existing charge limits in the literature.
Crustal volumes of the continents and of oceanic and continental submarine plateaus
NASA Technical Reports Server (NTRS)
Schubert, G.; Sandwell, D.
1989-01-01
Using global topographic data and the assumption of Airy isostasy, it is estimated that the crustal volume of the continents is 7182 × 10⁶ km³. The crustal volumes of the oceanic and continental submarine plateaus are calculated at 369 × 10⁶ km³ and 242 × 10⁶ km³, respectively. The total continental crustal volume is found to be 7581 × 10⁶ km³, 3.2 percent of which is comprised of continental submarine plateaus on the seafloor. An upper bound on the continental crust addition rate by the accretion of oceanic plateaus is set at 3.7 km³/yr. Subduction of continental submarine plateaus with the oceanic lithosphere on a 100 Myr time scale yields an upper bound on the continental crustal subtraction rate of 2.4 km³/yr.
Comparison of various techniques for calibration of AIS data
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Yamaguchi, Y.; Lyon, R. J. P.
1986-01-01
The Airborne Imaging Spectrometer (AIS) samples a region which is strongly influenced by decreasing solar irradiance at longer wavelengths and strong atmospheric absorptions. Four techniques, the Log Residual, the Least Upper Bound Residual, the Flat Field Correction, and calibration using field reflectance measurements, were investigated as means for removing these two features. Of the four techniques, field reflectance calibration proved superior in terms of noise and normalization. Of the other three techniques, the Log Residual was superior when applied to areas which did not contain one dominant cover type. In heavily vegetated areas, the Log Residual proved to be ineffective. After removing anomalously bright data values, the Least Upper Bound Residual proved to be almost as effective as the Log Residual in sparsely vegetated areas and much more effective in heavily vegetated areas. Of all the techniques, the Flat Field Correction was the noisiest.
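A minimal sketch of the Log Residual normalization as we understand the classic recipe: subtracting per-pixel and per-band log means cancels illumination and atmospheric factors common to the scene. The input shape and values below are illustrative.

```python
import numpy as np

def log_residual(dn):
    """Double-center the log radiance cube (pixels x bands): the per-pixel
    mean removes topographic/illumination brightness, the per-band mean
    removes solar irradiance and atmospheric transmission common to the
    scene; zeros must be masked or offset in real data."""
    logd = np.log(dn)
    out = logd - logd.mean(axis=1, keepdims=True)    # remove pixel brightness
    out -= logd.mean(axis=0, keepdims=True)          # remove band (scene) mean
    return np.exp(out + logd.mean())                 # restore overall level

dn = np.random.default_rng(0).uniform(50.0, 500.0, (1000, 128))  # toy cube
rr = log_residual(dn)
print(rr.shape, float(rr.mean()))
```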
Isotope-abundance variations and atomic weights of selected elements: 2016 (IUPAC Technical Report)
Coplen, Tyler B.; Shrestha, Yesha
2016-01-01
There are 63 chemical elements that have two or more isotopes that are used to determine their standard atomic weights. The isotopic abundances and atomic weights of these elements can vary in normal materials due to physical and chemical fractionation processes (not due to radioactive decay). These variations are well known for 12 elements (hydrogen, lithium, boron, carbon, nitrogen, oxygen, magnesium, silicon, sulfur, chlorine, bromine, and thallium), and the standard atomic weight of each of these elements is given by IUPAC as an interval with lower and upper bounds. Graphical plots of selected materials and compounds of each of these elements have been published previously. Herein and at the URL http://dx.doi.org/10.5066/F7GF0RN2, we provide isotopic abundances, isotope-delta values, and atomic weights for each of the upper and lower bounds of these materials and compounds.
Constructions for finite-state codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.
1987-01-01
A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a d_free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.
An upper-bound assessment of the benefits of reducing perchlorate in drinking water.
Lutter, Randall
2014-10-01
The Environmental Protection Agency plans to issue new federal regulations to limit drinking water concentrations of perchlorate, which occurs naturally and results from the combustion of rocket fuel. This article presents an upper-bound estimate of the potential benefits of alternative maximum contaminant levels for perchlorate in drinking water. The results suggest that the economic benefits of reducing perchlorate concentrations in drinking water are likely to be low, i.e., under $2.9 million per year nationally, for several reasons. First, the prevalence of detectable perchlorate in public drinking water systems is low. Second, the population especially sensitive to effects of perchlorate, pregnant women who are moderately iodide deficient, represents a minority of all pregnant women. Third, and perhaps most importantly, reducing exposure to perchlorate in drinking water is a relatively ineffective way of increasing iodide uptake, a crucial step linking perchlorate to health effects of concern. © 2014 Society for Risk Analysis.
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
Gauge mediation at the LHC: status and prospects
Knapen, Simon; Redigolo, Diego
2017-01-30
We show that the predictivity of general gauge mediation (GGM) with TeV-scale stops is greatly increased once the Higgs mass constraint is imposed. The most notable results are a strong lower bound on the mass of the gluino and right-handed squarks, and an upper bound on the Higgsino mass. If the μ-parameter is positive, the wino mass is also bounded from above. These constraints relax significantly for high messenger scales and as such long-lived NLSPs are favored in GGM. We identify a small set of most promising topologies for the neutralino/sneutrino NLSP scenarios and estimate the impact of the current bounds and the sensitivity of the high luminosity LHC. The stau, stop and sbottom NLSP scenarios can be robustly excluded at the high luminosity LHC.
On the Inequalities of Babuška-Aziz, Friedrichs and Horgan-Payne
NASA Astrophysics Data System (ADS)
Costabel, Martin; Dauge, Monique
2015-09-01
The equivalence between the inequalities of Babuška-Aziz and Friedrichs for sufficiently smooth bounded domains in the plane was shown by Horgan and Payne 30 years ago. We prove that this equivalence, and the equality between the associated constants, is true without any regularity condition on the domain. For the Horgan-Payne inequality, which is an upper bound of the Friedrichs constant for plane star-shaped domains in terms of a geometric quantity known as the Horgan-Payne angle, we show that it is true for some classes of domains, but not for all bounded star-shaped domains. We prove a weaker inequality that is true in all cases.
NASA Astrophysics Data System (ADS)
Basu, Biswajit
2017-12-01
Bounds on estimates of wave heights (valid for large amplitudes) from pressure and flow measurements at an arbitrary intermediate depth have been provided. Two-dimensional irrotational steady water waves over a flat bed with a finite depth in the presence of underlying uniform currents have been considered in the analysis. Five different upper bounds based on a combination of pressure and velocity field measurements have been derived, though only one lower bound on the wave height is available, whether the current speed is greater than or less than the wave speed. This article is part of the theme issue 'Nonlinear water waves'.
A communication channel model of the software process
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1988-01-01
Reported here is beginning research into a noisy communication channel analogy of software development process productivity, in order to establish quantifiable behavior and theoretical bounds. The analogy leads to a fundamental mathematical relationship between human productivity and the amount of information supplied by the developers, the capacity of the human channel for processing and transmitting information, the software product yield (object size), the work effort, requirements efficiency, tool and process efficiency, and programming environment advantage. Also derived is an upper bound to productivity that shows that software reuse is the only means that can lead to unbounded productivity growth; practical considerations of size and cost of reusable components may reduce this to a finite bound.
A passivity criterion for sampled-data bilateral teleoperation systems.
Jazayeri, Ali; Tavakoli, Mahdi
2013-01-01
A teleoperation system consists of a teleoperator, a human operator, and a remote environment. Conditions involving system and controller parameters that ensure teleoperator passivity can serve as control design guidelines to attain maximum teleoperation transparency while maintaining system stability. In this paper, sufficient conditions for teleoperator passivity are derived for when position error-based controllers are implemented in discrete time. This new analysis is necessary because discretization causes energy leaks and does not necessarily preserve the passivity of the system. The proposed criterion for sampled-data teleoperator passivity imposes lower bounds on the damping of the teleoperator's robots, an upper bound on the sampling time, and bounds on the control gains. The criterion is verified through simulations and experiments.
Reliability Estimating Procedures for Electric and Thermochemical Propulsion Systems. Volume 1
1977-02-01
[OCR fragment: a contributor list (The Marquardt Company, NASA Goddard Space Flight Center, RCA Astro Electronics, Rockwell International, Applied Physics Laboratory, among others) and a failure-rate table giving, for each failure rate, the 5% lower bound, median, mean, and 95% upper bound in units of 10^-6 per cycle.]
Comparison of electromyography and force as interfaces for prosthetic control.
Corbett, Elaine A; Perreault, Eric J; Kuiken, Todd A
2011-01-01
The ease with which persons with upper-limb amputations can control their powered prostheses is largely determined by the efficacy of the user command interface. One needs to understand the abilities of the human operator regarding the different available options. Electromyography (EMG) is widely used to control powered upper-limb prostheses. It is an indirect estimator of muscle force and may be expected to limit the control capabilities of the prosthesis user. This study compared EMG control with force control, an interface that is used in everyday interactions with the environment. We used both methods to perform a position-tracking task. Direct-position control of the wrist provided an upper bound for human-operator capabilities. The results demonstrated that an EMG control interface is as effective as force control for the position-tracking task. We also examined the effects of gain and tracking frequency on EMG control to explore the limits of this control interface. We found that information transmission rates for myoelectric control were best at higher tracking frequencies than at the frequencies previously reported for position control. The results may be useful for the design of prostheses and prosthetic controllers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alemgadmi, Khaled I. K., E-mail: azozkied@yahoo.com; Suparmi; Cari
2015-09-30
The approximate analytical solution of the Schrodinger equation for the q-deformed Rosen-Morse potential was investigated using the supersymmetric quantum mechanics (SUSY QM) method. The approximate bound-state energy is given in closed form, and the corresponding approximate wave function for an arbitrary l-state is given for the ground state. The first excited state is obtained using the raising operator and the ground-state wave function. The special case of the ground state is given for various values of q. The presence of the Rosen-Morse potential reduces the energy spectrum of the system: the larger the value of q, the smaller the energy spectrum.
Skoraczyński, G; Dittwald, P; Miasojedow, B; Szymkuć, S; Gajewska, E P; Grzybowski, B A; Gambin, A
2017-06-15
As machine learning/artificial intelligence algorithms are defeating chess masters and, most recently, GO champions, there is interest - and hope - that they will prove equally useful in assisting chemists in predicting outcomes of organic reactions. This paper demonstrates, however, that the applicability of machine learning to the problems of chemical reactivity over diverse types of chemistries remains limited - in particular, with the currently available chemical descriptors, fundamental mathematical theorems impose upper bounds on the accuracy with which reaction yields and times can be predicted. Improving the performance of machine-learning methods calls for the development of fundamentally new chemical descriptors.
Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.
Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J
2012-09-01
Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples. Copyright © 2012 Elsevier Inc. All rights reserved.
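A toy illustration of the proposed step, using sympy's Gröbner basis routine on a one-compartment model (the model and symbol names are illustrative; the paper's algorithm targets general multi-output systems):

    from sympy import symbols, groebner

    # Toy model: xdot = -a*x + u, y = x.  Derivatives become fresh symbols.
    x, xdot, y, ydot, u, a = symbols('x xdot y ydot u a')
    system = [xdot + a*x - u,   # state equation moved to one side
              y - x,            # output equation
              ydot - xdot]      # formal derivative of the output equation

    # Lex order with the state variables ranked first eliminates them.
    G = groebner(system, x, xdot, ydot, y, u, a, order='lex')
    io_eqs = [g for g in G.exprs if not g.has(x, xdot)]
    print(io_eqs)   # -> [a*y + ydot - u] : the input-output equation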
Breaking Megrelishvili protocol using matrix diagonalization
NASA Astrophysics Data System (ADS)
Arzaki, Muhammad; Triantoro Murdiansyah, Danang; Adi Prabowo, Satrio
2018-03-01
In this article we conduct a theoretical security analysis of the Megrelishvili protocol, a linear algebra-based key agreement between two participants. We study the computational complexity of the Megrelishvili vector-matrix problem (MVMP) as a mathematical problem that strongly relates to the security of the Megrelishvili protocol. In particular, we investigate the asymptotic upper bounds for the running time and memory requirement of the MVMP when the public matrix is diagonalizable. Specifically, we devise a diagonalization method for solving the MVMP that is asymptotically faster than all previously existing algorithms. We also find an important counterintuitive result: using a primitive matrix in the Megrelishvili protocol makes the protocol more vulnerable to attacks.
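The role of diagonalizability can be sketched in a few lines of numpy (real arithmetic for illustration only; the protocol itself operates over a finite field, and the names below are not from the paper):

    import numpy as np

    # If the public matrix M is diagonalizable, M = P D P^-1, then M**t
    # needs only elementwise powers of the eigenvalues, so v @ M**t can
    # be evaluated cheaply even for very large exponents t.
    rng = np.random.default_rng(0)
    M = rng.normal(size=(4, 4))
    w, P = np.linalg.eig(M)            # M = P diag(w) P^-1
    Pinv = np.linalg.inv(P)

    def fast_vMt(v, t):
        return (v @ P) * w**t @ Pinv   # v P D^t P^-1

    v, t = rng.normal(size=4), 12
    assert np.allclose(fast_vMt(v, t), v @ np.linalg.matrix_power(M, t))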
Robust Inference of Risks of Large Portfolios
Fan, Jianqing; Han, Fang; Liu, Han; Vickers, Byron
2016-01-01
We propose a bootstrap-based robust high-confidence level upper bound (Robust H-CLUB) for assessing the risks of large portfolios. The proposed approach exploits rank-based and quantile-based estimators, and can be viewed as a robust extension of the H-CLUB procedure (Fan et al., 2015). Such an extension allows us to handle possibly misspecified models and heavy-tailed data, which are stylized features in financial returns. Under mixing conditions, we analyze the proposed approach and demonstrate its advantage over H-CLUB. We further provide thorough numerical results to back up the developed theory, and also apply the proposed method to analyze a stock market dataset. PMID:27818569
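A minimal sketch of the flavor of such a procedure, assuming a Kendall-tau-based scatter estimate and a simple bootstrap quantile (an illustration of rank-based robustification plus bootstrapping, not the paper's estimator):

    import numpy as np
    from scipy.stats import kendalltau

    def robust_cov(X):
        """Rank-based scatter: sin(pi*tau/2) correlation, MAD scales."""
        p = X.shape[1]
        scale = 1.4826 * np.median(np.abs(X - np.median(X, 0)), 0)
        C = np.eye(p)
        for i in range(p):
            for j in range(i + 1, p):
                tau = kendalltau(X[:, i], X[:, j])[0]
                C[i, j] = C[j, i] = np.sin(np.pi * tau / 2)
        return C * np.outer(scale, scale)

    def risk_upper_bound(X, w, level=0.95, B=500, seed=0):
        """Bootstrap upper confidence bound on portfolio risk w' Sigma w."""
        rng = np.random.default_rng(seed)
        n = len(X)
        boot = [w @ robust_cov(X[rng.integers(0, n, n)]) @ w
                for _ in range(B)]
        return np.quantile(boot, level)

    # Usage: X = returns array (n_days, n_assets); w = portfolio weights.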
Global existence and finite time blow-up for a class of thin-film equation
NASA Astrophysics Data System (ADS)
Dong, Zhihua; Zhou, Jun
2017-08-01
This paper deals with a class of thin-film equation considered in Li et al. (Nonlinear Anal Theory Methods Appl 147:96-109, 2016), where the case of low initial energy (J(u_0) ≤ d, with d a positive constant) was discussed and conditions for global existence or blow-up were given. We extend the results of that paper in two respects: first, we consider the upper and lower bounds of the blow-up time and the asymptotic behavior when J(u_0)
Tsai, Ching-Wei; Tsai, Chieh; Ruaan, Ruoh-Chyu; Hu, Chien-Chieh; Lee, Kueir-Rarn
2013-06-26
Interfacial polymerization of four aqueous-phase monomers, diethylenetriamine (DETA), m-phenylenediamine (mPD), melamine (Mela), and piperazine (PIP), with two organic-phase monomers, trimesoyl chloride (TMC) and cyanuric chloride (CC), produces a thin-film composite membrane with a polymerized polyamide layer capable of O2/N2 separation. To achieve maximum efficiency in gas permeance and O2/N2 permselectivity, the concentrations of monomers, the time of interfacial polymerization, the number of reactive groups in the monomers, and the structure of the monomers need to be optimized. By controlling the aqueous/organic monomer ratio between 1.9 and 2.7, we were able to obtain a uniform interfacially polymerized layer. To achieve a highly cross-linked layer, three reactive groups in both the aqueous- and organic-phase monomers are required; however, if the monomers were arranged in a planar structure, the likelihood of structural defects also increased. On the contrary, linear polymers are less likely to result in structural defects, and can also produce polymer layers with moderate O2/N2 selectivity. To minimize structural defects while maximizing O2/N2 selectivity, the planar monomer TMC, containing 3 reactive groups, was reacted with the semirigid monomer PIP, containing 2 reactive groups, to produce a membrane with an adequate gas permeance of 7.72 × 10^-6 cm^3 (STP) s^-1 cm^-2 cmHg^-1 and a high O2/N2 selectivity of 10.43, allowing us to exceed the upper-bound limit of conventional thin-film composite membranes.
Contamination of U.S. Butter with Polybrominated Diphenyl Ethers from Wrapping Paper
Schecter, Arnold; Smith, Sarah; Colacino, Justin; Malik, Noor; Opel, Matthias; Paepke, Olaf; Birnbaum, Linda
2011-01-01
Objectives Our aim was to report the first known incidence of U.S. butter contamination with extremely high levels of polybrominated diphenyl ethers (PBDEs). Methods Ten butter samples were individually analyzed for PBDEs. One of the samples and its paper wrapper contained very high levels of higher-brominated PBDEs. Dietary estimates were calculated using the 2007 U.S. Department of Agriculture Loss-Adjusted Food Availability data, excluding the elevated sample. Results The highly contaminated butter sample had a total upper bound PBDE level of 42,252 pg/g wet weight (ww). Levels of brominated diphenyl ether (BDE)-206, -207, and -209 were 2,000, 2,290, and 37,600 pg/g ww, respectively. Its wrapping paper contained a total upper-bound PBDE concentration of 804,751 pg/g ww, with levels of BDE-206, -207, and -209 of 51,000, 11,700, and 614,000 pg/g, respectively. Total PBDE levels in the remaining nine butter samples ranged from 180 to 1,212 pg/g, with geometric mean of 483 and median of 284 pg/g. Excluding the outlier, total PBDE daily intake from all food was 22,764 pg/day, lower than some previous U.S. dietary intake estimates. Conclusion Higher-brominated PBDE congeners were likely transferred from contaminated wrapping paper to butter. A larger representative survey may help determine how frequently PBDE contamination occurs. Sampling at various stages in food production may identify contamination sources and reduce risk. PMID:21138809
Tobi, Dror
2017-08-01
A new algorithm for comparison of protein dynamics is presented. Compared protein structures are superposed and their modes of motion are calculated using the anisotropic network model. The obtained modes are aligned using the dynamic programming algorithm of Needleman and Wunsch, commonly used for sequence alignment. Dynamical comparison of hemoglobin in the T and R2 states reveals that the dynamics of the allosteric effector 2,3-bisphosphoglycerate binding site is different in the two states. These differences can contribute to the selectivity of the effector to the T state. Similar comparison of the ionotropic glutamate receptor in the kainate+(R,R)-2b and ZK bound states reveals that the slow modes of the kainate+(R,R)-2b bound state describe upward motions of the ligand binding domain and the transmembrane domain regions. Such motions may lead to the opening of the receptor. The upper lobes of the LBDs of the ZK bound state have a smaller interface with the amino terminal domains above them and have a better ability to move together. The present study exemplifies the use of dynamics comparison as a tool to study protein function. Proteins 2017; 85:1507-1517. © 2017 Wiley Periodicals, Inc.
Ada (Trade Name)/SQL (Structured Query Language) Binding Specification
1988-06-01
[OCR fragment of the specification. The recoverable Ada declarations are: type EMPLOYEE_NAME is new STRING (1 .. 30); type BOSS_NAME is new EMPLOYEE_NAME; type EMPLOYEE_SALARY is digits 7 range 0.00 .. (upper bound illegible). The surrounding text specifies the minimum number of significant decimal digits (all real numbers between the lower and upper bounds, inclusive, belong to the subtype) and gives BNF productions such as <character> ::= <digit> | <letter> | <special character> and <digit> ::= 0|1|2|3|4|5|6|7|8|9.]
Characterization of Seismic Noise at Selected Non-Urban Sites
2010-03-01
Field sites for seismic recordings: a Scottish moor (upper left), Enfield, NH (upper right), and the vicinity of Keele, England (bottom). The three sites are: a wind farm on a remote moor in Scotland; a ~13 acre field bounded by woods in a rural Enfield, NH, neighborhood; and a site transitional from developed land to farmland within 1 km of the six-lane M6 motorway near Keele.
NASA Astrophysics Data System (ADS)
Stolzenburg, Maribeth; Marshall, Thomas C.; Karunarathne, Sumedhe; Orville, Richard E.
2018-10-01
Using video data recorded at 50,000 frames per second for nearby negative lightning flashes, estimates are derived for the length of positive upward connecting leaders (UCLs) that presumably formed prior to new ground attachments. Return strokes were 1.7 to 7.8 km distant, yielding image resolutions of 4.25 to 19.5 m. No UCLs are imaged in these data, indicating those features were too transient or too dim compared to other lightning processes that are imaged at these resolutions. Upper bound lengths for 17 presumed UCLs are determined from the height above flat ground or water of the successful stepped leader tip in the image immediately prior to (within 20 μs before) the return stroke. Better estimates of maximum UCL lengths are determined using the downward stepped leader tip's speed of advance and the estimated return stroke time within its first frame. For 17 strokes, the upper bound length of the possible UCL averages 31.6 m and ranges from 11.3 to 50.3 m. Among the close strokes (those with spatial resolution <8 m per pixel), the five which connected to water (salt water lagoon) have UCL upper bound estimates averaging significantly shorter (24.1 m) than the average for the three close strokes which connected to land (36.9 m). The better estimates of maximum UCL lengths for the eight close strokes average 20.2 m, with slightly shorter average of 18.3 m for the five that connected to water. All the better estimates of UCL maximum lengths are <38 m in this dataset.
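A worked example of the two estimates, under one plausible reading of the procedure described above (all numbers are made up for illustration, not taken from a specific stroke):

    # Upper bound: height of the last imaged leader tip above ground.
    pixel_m = 6.0          # image resolution, m per pixel (a "close" stroke)
    tip_height_px = 5      # leader-tip height in the final pre-stroke frame
    ucl_upper = tip_height_px * pixel_m                 # 30 m

    # Better estimate: subtract how far the downward leader still advanced
    # inside the final 20-us frame before the return stroke occurred.
    leader_speed = 4.0e5   # m/s, assumed stepped-leader advance speed
    time_to_rs = 15e-6     # s, estimated return-stroke time within frame
    ucl_better = ucl_upper - leader_speed * time_to_rs  # 30 - 6 = 24 m
    print(ucl_upper, ucl_better)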
NASA Astrophysics Data System (ADS)
Wakabayashi, Kazuyuki; Nakano, Saho; Soga, Kouichi; Hoson, Takayuki
Lignin is a component of cell walls of terrestrial plants, which provides cell walls with the mechanical rigidity. Lignin is a phenolic polymer with high molecular mass and formed by the polymerization of phenolic substances on a cellulosic matrix. The polymerization is catalyzed by cell wall-bound peroxidase, and thus the activity of this enzyme regulates the rate of formation of lignin. In the present study, the changes in the lignin content and the activity of cell wall peroxidase were investigated along epicotyls of azuki bean seedlings grown under hypergravity conditions. The endogenous growth occurred primarily in the upper regions of the epicotyl and no growth was detected in the middle or basal regions. The amounts of acetyl bromide-soluble lignin increased from the upper to the basal regions of epicotyls. The lignin content per unit length in the basal region was three times higher than that in the upper region. Hypergravity treatment at 300 g for 6 h stimulated the increase in the lignin content in all regions of epicotyls, particularly in the basal regions. The peroxidase activity in the protein fraction extracted from the cell wall preparation with a high ionic strength buffer also increased gradually toward the basal region, and hypergravity treatment clearly increased the activity in all regions. There was a close correlation between the lignin content and the enzyme activity. These results suggest that gravity stimuli modulate the activity of cell wall-bound peroxidase, which, in turn, causes the stimulation of the lignin formation in stem organs.
Thermalization Time Bounds for Pauli Stabilizer Hamiltonians
NASA Astrophysics Data System (ADS)
Temme, Kristan
2017-03-01
We prove a general lower bound to the spectral gap of the Davies generator for Hamiltonians that can be written as the sum of commuting Pauli operators. These Hamiltonians, defined on the Hilbert space of N-qubits, serve as one of the most frequently considered candidates for a self-correcting quantum memory. A spectral gap bound on the Davies generator establishes an upper limit on the life time of such a quantum memory and can be used to estimate the time until the system relaxes to thermal equilibrium when brought into contact with a thermal heat bath. The bound can be shown to behave as λ ≥ O(N^-1 exp(-2β ε̄)), where ε̄ is a generalization of the well-known energy barrier for logical operators. Particularly in the low temperature regime we expect this bound to provide the correct asymptotic scaling of the gap with the system size up to a factor of N^-1. Furthermore, we discuss conditions and provide scenarios where this factor can be removed and a constant lower bound can be proven.
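A numerical reading of the quoted bound (the constant c and the parameter values below are illustrative):

    import numpy as np

    # lambda >= c * N**-1 * exp(-2*beta*eps_bar): the memory lifetime,
    # which scales like 1/lambda, grows at most exponentially in
    # beta*eps_bar and is degraded linearly by the system size N.
    def gap_lower_bound(N, beta, eps_bar, c=1.0):
        return c * np.exp(-2.0 * beta * eps_bar) / N

    for beta in (0.5, 1.0, 2.0):        # inverse temperatures
        print(beta, gap_lower_bound(N=100, beta=beta, eps_bar=2.0))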
Chen, Xiaoyuan; Wai, Chien M.; Fisher, Darrell R.
2000-01-01
The invention pertains to compounds for binding lanthanide ions and actinide ions. The invention further pertains to compounds for binding radionuclides, and to methods of making radionuclide complexes. Also, the invention pertains to methods of extracting radionuclides. Additionally, the invention pertains to methods of delivering radionuclides to target locations. In one aspect, the invention includes a compound comprising: a) a calix[n]arene group, wherein n is an integer greater than 3, the calix[n]arene group comprising an upper rim and a lower rim; b) at least one ionizable group attached to the lower rim; and c) an ion selected from the group consisting of lanthanide and actinide elements bound to the ionizable group. In another aspect, the invention includes a method of extracting a radionuclide, comprising: a) providing a sample comprising a radionuclide; b) providing a calix[n]arene compound in contact with the sample, wherein n is an integer greater than 3; and c) extracting radionuclide from the sample into the calix[n]arene compound. In yet another aspect, the invention includes a method of delivering a radionuclide to a target location, comprising: a) providing a calix[n]arene compound, wherein n is an integer greater than 3, the calix[n]arene compound comprising at least one ionizable group; b) providing a radionuclide bound to the calix[n]arene compound; and c) providing an antibody attached to the calix[n]arene compound, the antibody being specific for a material found at the target location.
NASA Astrophysics Data System (ADS)
Tang, Wenlin; Xu, Peng; Hu, Songjie; Cao, Jianfeng; Dong, Peng; Bu, Yanlong; Chen, Lue; Han, Songtao; Gong, Xuefei; Li, Wenxiao; Ping, Jinsong; Lau, Yun-Kau; Tang, Geshi
2017-09-01
The Doppler tracking data of the Chang'e 3 lunar mission is used to constrain the stochastic background of gravitational wave in cosmology within the 1 mHz to 0.05 Hz frequency band. Our result improves on the upper bound on the energy density of the stochastic background of gravitational wave in the 0.02-0.05 Hz band obtained by the Apollo missions, with the improvement reaching almost one order of magnitude at around 0.05 Hz. Detailed noise analysis of the Doppler tracking data is also presented, with the prospect that these noise sources will be mitigated in future Chinese deep space missions. A feasibility study is also undertaken to understand the scientific capability of the Chang'e 4 mission, due to be launched in 2018, in relation to the stochastic gravitational wave background around 0.01 Hz. The study indicates that the upper bound on the energy density may be further improved by another order of magnitude from the Chang'e 3 mission, which will fill the gap in the frequency band from 0.02 Hz to 0.1 Hz in the foreseeable future.
The generalized truncated exponential distribution as a model for earthquake magnitudes
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-04-01
The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for the modelling of magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except the upper bound magnitude, are mixed, the resulting distribution is not a TED. Conversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of the statistical inference are also discussed and an example of empirical data is presented in the current contribution.
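The weak point is easy to exhibit numerically. A short Python sketch using the standard TED density (parameter values illustrative; b-value 1 corresponds to β = ln 10):

    import numpy as np

    # TED density on [m0, mmax]: f(m) = beta*exp(-beta*(m-m0)) / normalizer
    def ted_pdf(m, beta, m0, mmax):
        m = np.asarray(m, float)
        c = 1.0 - np.exp(-beta * (mmax - m0))
        f = beta * np.exp(-beta * (m - m0)) / c
        return np.where((m >= m0) & (m <= mmax), f, 0.0)

    beta, m0 = np.log(10), 4.0
    m = np.array([5.0, 6.5, 7.5])
    # Equal-weight mixture of two TEDs differing only in mmax:
    mix = 0.5 * ted_pdf(m, beta, m0, 7.0) + 0.5 * ted_pdf(m, beta, m0, 8.0)
    # A TED density is proportional to exp(-beta*m) on its whole support,
    # so f(m)*exp(beta*m) is constant; for the mixture it jumps at m = 7:
    print(mix * np.exp(beta * m))   # not constant -> the mixture is no TED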
Fundamental limitations of cavity-assisted atom interferometry
NASA Astrophysics Data System (ADS)
Dovale-Álvarez, M.; Brown, D. D.; Jones, A. W.; Mow-Lowry, C. M.; Miao, H.; Freise, A.
2017-11-01
Atom interferometers employing optical cavities to enhance the beam splitter pulses promise significant advances in science and technology, notably for future gravitational wave detectors. Long cavities, on the scale of hundreds of meters, have been proposed in experiments aiming to observe gravitational waves with frequencies below 1 Hz, where laser interferometers, such as LIGO, have poor sensitivity. Alternatively, short cavities have also been proposed for enhancing the sensitivity of more portable atom interferometers. We explore the fundamental limitations of two-mirror cavities for atomic beam splitting, and establish upper bounds on the temperature of the atomic ensemble as a function of cavity length and three design parameters: the cavity g factor, the bandwidth, and the optical suppression factor of the first and second order spatial modes. A lower bound to the cavity bandwidth is found which avoids elongation of the interaction time and maximizes power enhancement. An upper limit to cavity length is found for symmetric two-mirror cavities, restricting the practicality of long baseline detectors. For shorter cavities, an upper limit on the beam size was derived from the geometrical stability of the cavity. These findings aim to aid the design of current and future cavity-assisted atom interferometers.
Limits on the fluctuating part of y-type distortion monopole from Planck and SPT results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khatri, Rishi; Sunyaev, Rashid, E-mail: khatri@mpa-garching.mpg.de, E-mail: sunyaev@mpa-garching.mpg.de
2015-08-01
We use the published Planck and SPT cluster catalogs [1,2] and recently published y-distortion maps [3] to put strong observational limits on the contribution of the fluctuating part of the y-type distortions to the y-distortion monopole. Our bounds are 5.4 × 10^-8 < ⟨y⟩ < 2.2 × 10^-6. Our upper bound is a factor of 6.8 stronger than the currently best upper 95% confidence limit from COBE-FIRAS of ⟨y⟩ < 15 × 10^-6. In the standard cosmology, large scale structure is the only source of such distortions and our limits therefore constrain the baryonic physics involved in the formation of the large scale structure. Our lower limit, from the detected clusters in the Planck and SPT catalogs, also implies that a Pixie-like experiment should detect the y-distortion monopole at >27-σ. The biggest sources of uncertainty in our upper limit are the monopole offsets between different HFI channel maps, which we estimate to be < 10^-6.
Uncertainty, imprecision, and the precautionary principle in climate change assessment.
Borsuk, M E; Tomassini, L
2005-01-01
Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
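A minimal sketch of the decision rule for one standard class, the ε-contamination neighborhood {(1-ε)P0 + εQ : Q arbitrary}, whose upper expectation has a closed form; the scenario probabilities and costs below are made up for illustration:

    import numpy as np

    # Upper expectation over the eps-contamination class:
    #   sup_Q E[(1-eps)P0 + eps*Q][c] = (1-eps)*E_P0[c] + eps*max(c).
    def upper_expected_cost(costs, p0, eps):
        return (1 - eps) * costs @ p0 + eps * costs.max()

    p0 = np.array([0.7, 0.2, 0.1])            # baseline scenario pmf
    actions = {"low emissions":  np.array([4.0, 5.0, 6.0]),
               "high emissions": np.array([1.0, 6.0, 20.0])}
    eps = 0.1
    ub = {a: upper_expected_cost(c, p0, eps) for a, c in actions.items()}
    # Precautionary criterion: minimize the upper expected cost.
    print(min(ub, key=ub.get), ub)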
2017-06-15
the methodology of reducing the online algorithm-selection problem to a contextual bandit problem, which is yet another interactive learning... [KH2016a] Kuan-Hao Huang and Hsuan-Tien Lin. Linear upper confidence bound algorithm for contextual bandit problem with piled rewards. In Proceedings...
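The cited reference builds on the classic linear upper confidence bound (LinUCB) scheme; a minimal sketch of that baseline (not the piled-rewards variant) for d-dimensional contexts with one ridge-regression model per arm:

    import numpy as np

    class LinUCB:
        def __init__(self, n_arms, d, alpha=1.0):
            self.alpha = alpha
            self.A = [np.eye(d) for _ in range(n_arms)]    # X'X + I per arm
            self.b = [np.zeros(d) for _ in range(n_arms)]  # X'y per arm

        def choose(self, x):
            """Pick the arm with the highest optimistic reward estimate."""
            scores = []
            for A, b in zip(self.A, self.b):
                Ainv = np.linalg.inv(A)
                theta = Ainv @ b
                scores.append(theta @ x + self.alpha * np.sqrt(x @ Ainv @ x))
            return int(np.argmax(scores))

        def update(self, arm, x, reward):
            self.A[arm] += np.outer(x, x)
            self.b[arm] += reward * x

    # Usage: bandit = LinUCB(3, 5); a = bandit.choose(x); bandit.update(a, x, r)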
Amortized entanglement of a quantum channel and approximately teleportation-simulable channels
NASA Astrophysics Data System (ADS)
Kaur, Eneet; Wilde, Mark M.
2018-01-01
This paper defines the amortized entanglement of a quantum channel as the largest difference in entanglement between the output and the input of the channel, where entanglement is quantified by an arbitrary entanglement measure. We prove that the amortized entanglement of a channel obeys several desirable properties, and we also consider special cases such as the amortized relative entropy of entanglement and the amortized Rains relative entropy. These latter quantities are shown to be single-letter upper bounds on the secret-key-agreement and PPT-assisted quantum capacities of a quantum channel, respectively. Of special interest is a uniform continuity bound for these latter two special cases of amortized entanglement, in which the deviation between the amortized entanglement of two channels is bounded from above by a simple function of the diamond norm of their difference and the output dimension of the channels. We then define approximately teleportation- and positive-partial-transpose-simulable (PPT-simulable) channels as those that are close in diamond norm to a channel which is either exactly teleportation- or PPT-simulable, respectively. These results then lead to single-letter upper bounds on the secret-key-agreement and PPT-assisted quantum capacities of channels that are approximately teleportation- or PPT-simulable, respectively. Finally, we generalize many of the concepts in the paper to the setting of general resource theories, defining the amortized resourcefulness of a channel and the notion of ν-freely-simulable channels, connecting these concepts in an operational way as well.
Future trends in computer waste generation in India.
Dwivedy, Maheshwar; Mittal, R K
2010-11-01
The objective of this paper is to estimate the future projection of computer waste in India and to subsequently analyze their flow at the end of their useful phase. For this purpose, the study utilizes the logistic model-based approach proposed by Yang and Williams to forecast future trends in computer waste. The model estimates future projection of computer penetration rate utilizing their first lifespan distribution and historical sales data. A bounding analysis on the future carrying capacity was simulated using the three parameter logistic curve. The observed obsolete generation quantities from the extrapolated penetration rates are then used to model the disposal phase. The results of the bounding analysis indicate that in the year 2020, around 41-152 million units of computers will become obsolete. The obsolete computer generation quantities are then used to estimate the End-of-Life outflows by utilizing a time-series multiple lifespan model. Even a conservative estimate of the future recycling capacity of PCs will reach upwards of 30 million units during 2025. Apparently, more than 150 million units could be potentially recycled in the upper bound case. However, considering significant future investment in the e-waste recycling sector from all stakeholders in India, we propose a logistic growth in the recycling rate and estimate the requirement of recycling capacity between 60 and 400 million units for the lower and upper bound case during 2025. Finally, we compare the future obsolete PC generation amount of the US and India. Copyright © 2010 Elsevier Ltd. All rights reserved.
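A minimal sketch of the two-stage structure described above, assuming a logistic installed base and a fixed first-lifespan distribution (all numbers are illustrative, not the paper's estimates):

    import numpy as np

    # Stage 1: logistic penetration curve for the installed base.
    years = np.arange(1995, 2026)
    K, r, t0 = 150e6, 0.35, 2012.0   # carrying capacity, growth, midpoint
    stock = K / (1 + np.exp(-r * (years - t0)))
    # New units approximated by stock growth (replacement sales ignored).
    sales = np.diff(stock, prepend=0.0)

    # Stage 2: obsolete units = past sales reaching end of first life,
    # via convolution with a first-lifespan pmf over 0..6 years.
    lifespan = np.array([0, 0, 0.1, 0.2, 0.3, 0.25, 0.15])
    obsolete = np.convolve(sales, lifespan)[:len(years)]
    print(int(obsolete[years == 2020][0]))  # obsolete units in 2020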
McDonald, Douglas B.; Buchholz, Carol E.
1994-01-01
A shield for restricting molten corium from flowing into a water sump disposed in a floor of a containment vessel includes upper and lower walls which extend vertically upwardly and downwardly from the floor for laterally bounding the sump. The upper wall includes a plurality of laterally spaced apart flow channels extending horizontally therethrough, with each channel having a bottom disposed coextensively with the floor for channeling water therefrom into the sump. Each channel has a height and a length predeterminedly selected for allowing heat from the molten corium to dissipate through the upper and lower walls as it flows therethrough for solidifying the molten corium therein to prevent accumulation thereof in the sump.
Automated determination of arterial input function for DCE-MRI of the prostate
NASA Astrophysics Data System (ADS)
Zhu, Yingxuan; Chang, Ming-Ching; Gupta, Sandeep
2011-03-01
Prostate cancer is one of the commonest cancers in the world. Dynamic contrast enhanced MRI (DCE-MRI) provides an opportunity for non-invasive diagnosis, staging, and treatment monitoring. Quantitative analysis of DCE-MRI relies on determination of an accurate arterial input function (AIF). Although several methods for automated AIF detection have been proposed in literature, none are optimized for use in prostate DCE-MRI, which is particularly challenging due to large spatial signal inhomogeneity. In this paper, we propose a fully automated method for determining the AIF from prostate DCE-MRI. Our method is based on modeling pixel uptake curves as gamma variate functions (GVF). First, we analytically compute bounds on GVF parameters for more robust fitting. Next, we approximate a GVF for each pixel based on local time domain information, and eliminate the pixels with false estimated AIFs using the deduced upper and lower bounds. This makes the algorithm robust to signal inhomogeneity. After that, according to spatial information such as similarity and distance between pixels, we formulate the global AIF selection as an energy minimization problem and solve it using a message passing algorithm to further rule out the weak pixels and optimize the detected AIF. Our method is fully automated without training or a priori setting of parameters. Experimental results on clinical data have shown that our method obtained promising detection accuracy (all detected pixels inside major arteries), and a very good match with expert traced manual AIF.
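A sketch of the per-pixel fitting step, assuming the usual gamma variate form and placeholder box bounds (the paper derives its bounds analytically; all values below are illustrative):

    import numpy as np
    from scipy.optimize import curve_fit

    # Gamma variate function: C(t) = A*(t-t0)^alpha * exp(-(t-t0)/beta).
    def gvf(t, A, t0, alpha, beta):
        dt = np.clip(t - t0, 0.0, None)
        return A * dt**alpha * np.exp(-dt / beta)

    t = np.linspace(0, 60, 61)                    # time points, s
    curve = gvf(t, 5.0, 8.0, 2.0, 6.0)            # synthetic uptake curve
    curve += np.random.default_rng(1).normal(0, 0.1, t.size)

    lo = [0.0,  0.0, 0.5,  1.0]   # lower bounds on (A, t0, alpha, beta)
    hi = [50.0, 30.0, 5.0, 20.0]  # upper bounds
    popt, _ = curve_fit(gvf, t, curve, p0=[1.0, 5.0, 1.0, 5.0],
                        bounds=(lo, hi), maxfev=10000)
    print(popt)   # pixels whose fits violate the bounds would be rejected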
Fast calculation of the `ILC norm' in iterative learning control
NASA Astrophysics Data System (ADS)
Rice, Justin K.; van Wingerden, Jan-Willem
2013-06-01
In this paper, we discuss and demonstrate a method for the exploitation of matrix structure in computations for iterative learning control (ILC). In Barton, Bristow, and Alleyne [International Journal of Control, 83(2), 1-8 (2010)], a special insight into the structure of the lifted convolution matrices involved in ILC is used along with a modified Lanczos method to achieve very fast computational bounds on the learning convergence, by calculating the 'ILC norm' in ? computational complexity. In this paper, we show how their method is equivalent to a special instance of the sequentially semi-separable (SSS) matrix arithmetic, and thus can be extended to many other computations in ILC, and specialised in some cases to even faster methods. Our SSS-based methodology will be demonstrated on two examples: a linear time-varying example resulting in the same ? complexity as in Barton et al., and a linear time-invariant example where our approach reduces the computational complexity to ?, thus decreasing the computation time, for one example from the literature, by a factor of almost 100. This improvement is achieved by transforming the norm computation via a linear matrix inequality into a check of positive definiteness, which allows us to further exploit the almost-Toeplitz properties of the matrix, and additionally provides explicit upper and lower bounds on the norm of the matrix, instead of the indirect Ritz estimate. These methods are now implemented in a MATLAB toolbox, freely available on the Internet.
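The matrix-free idea for the LTI case can be sketched with a linear operator whose products are FFT convolutions (the impulse response is illustrative, and this stands in for, rather than reproduces, the paper's SSS machinery):

    import numpy as np
    from scipy.signal import fftconvolve
    from scipy.sparse.linalg import LinearOperator, svds

    # For LTI ILC the lifted system matrix P is lower-triangular Toeplitz,
    # so P @ v is a causal convolution costing O(N log N) via FFT; the
    # norm then comes from an iterative SVD without ever forming P.
    N = 4096
    h = 0.9 ** np.arange(N)          # impulse response (first column of P)

    def matvec(v):
        return fftconvolve(h, np.ravel(v))[:N]          # P v

    def rmatvec(v):
        return fftconvolve(h[::-1], np.ravel(v))[N - 1:]  # P' v

    P = LinearOperator((N, N), matvec=matvec, rmatvec=rmatvec, dtype=float)
    sigma_max = svds(P, k=1, return_singular_vectors=False)[0]
    print(sigma_max)                 # the norm ||P||_2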
Conservative Analytical Collision Probabilities for Orbital Formation Flying
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2004-01-01
The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.
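A simple conservative device in the same spirit, though not the paper's approximation: the probability mass over the hard-body disk is bounded by the disk area times the peak density, which for a Gaussian is independent of the miss distance (hence conservative):

    import numpy as np

    # Pc = integral of f over the disk of combined radius R
    #    <= pi * R**2 * max(f),  with max(f) = 1/(2*pi*sqrt(det C))
    # for a 2-D Gaussian relative-position density with covariance C.
    def pc_upper_bound(R, cov):
        return min(1.0, np.pi * R**2
                   / (2 * np.pi * np.sqrt(np.linalg.det(cov))))

    cov = np.diag([200.0**2, 100.0**2])   # encounter-plane covariance, m^2
    print(pc_upper_bound(20.0, cov))      # combined radius 20 m -> 0.01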
Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian
2012-09-01
This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristic of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, and thus the master systems synchronize with the slave systems. The desired sampled-data controller can be obtained by solving a set of linear matrix inequalities, which depend upon the maximum sampling interval and the decay rate. The obtained conditions not only have less conservatism but also involve fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.
Predictive inference for best linear combination of biomarkers subject to limits of detection.
Coolen-Maturi, Tahani
2017-08-15
Measuring the accuracy of diagnostic tests is crucial in many application areas including medicine, machine learning and credit scoring. The receiver operating characteristic (ROC) curve is a useful tool to assess the ability of a diagnostic test to discriminate between two classes or groups. In practice, multiple diagnostic tests or biomarkers are combined to improve diagnostic accuracy. Often, biomarker measurements are undetectable either below or above the so-called limits of detection (LoD). In this paper, nonparametric predictive inference (NPI) for best linear combination of two or more biomarkers subject to limits of detection is presented. NPI is a frequentist statistical method that is explicitly aimed at using few modelling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. The NPI lower and upper bounds for the ROC curve subject to limits of detection are derived, where the objective function to maximize is the area under the ROC curve. In addition, the paper discusses the effect of restriction on the linear combination's coefficients on the analysis. Examples are provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
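A sketch of the objective being bracketed, with LoD handled by clipping to the detection limits and a grid search standing in for the NPI machinery (all data simulated; two biomarkers for simplicity):

    import numpy as np

    # Empirical AUC of scores via pairwise comparisons (ties count 1/2).
    def auc(scores_pos, scores_neg):
        diff = scores_pos[:, None] - scores_neg[None, :]
        return (diff > 0).mean() + 0.5 * (diff == 0).mean()

    rng = np.random.default_rng(2)
    x_pos = rng.normal([1.0, 1.5], 1.0, size=(50, 2))   # diseased group
    x_neg = rng.normal([0.0, 0.0], 1.0, size=(50, 2))   # healthy group
    lod_low, lod_high = -0.5, 2.5                        # detection limits
    x_pos = np.clip(x_pos, lod_low, lod_high)            # censoring
    x_neg = np.clip(x_neg, lod_low, lod_high)

    # Search directions w = (cos t, sin t) for the best linear combination.
    best = max((auc(x_pos @ np.array([np.cos(t), np.sin(t)]),
                    x_neg @ np.array([np.cos(t), np.sin(t)])), t)
               for t in np.linspace(0, np.pi, 181))
    print(best)   # (best empirical AUC, angle of best combination)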
Observed Volume Fluxes and Mixing in the Dardanelles Strait
2013-10-04
[...] et al., 2001; Kara et al., 2008]. It has been recognized for years that the upper-layer outflow from the Dardanelles Strait to the Aegean Sea [...] than the interior of the sea and manifests itself as a subsurface flow bounded by the upper layer of the Sea of Marmara. [...] both ends of the Dardanelles Strait, and assuming a steady-state mass budget, Ünlüata et al. [1990] estimated mean annual volume transports in the
Canonical Probability Distributions for Model Building, Learning, and Inference
2006-07-14
[OCR fragment: ...] are for Ranked nodes set at Unobservable and Auxiliary nodes. The value of alpha is set in the diagnostic window by moving the slider in the upper right-hand side of the window. The upper bound of alpha can be modified by typing the new value in the small edit box to the right of the slider.
Exact one-sided confidence limits for the difference between two correlated proportions.
Lloyd, Chris J; Moldovan, Max V
2007-08-15
We construct exact and optimal one-sided upper and lower confidence bounds for the difference between two probabilities based on matched binary pairs using well-established optimality theory of Buehler. Starting with five different approximate lower and upper limits, we adjust them to have coverage probability exactly equal to the desired nominal level and then compare the resulting exact limits by their mean size. Exact limits based on the signed root likelihood ratio statistic are preferred and recommended for practical use.
Scales of mass generation for quarks, leptons, and majorana neutrinos.
Dicus, Duane A; He, Hong-Jian
2005-06-10
We study 2 → n inelastic fermion-(anti)fermion scattering into multiple longitudinal weak gauge bosons and derive universal upper bounds on the scales of fermion mass generation by imposing unitarity of the S matrix. We place new upper limits on the scales of fermion mass generation, independent of the electroweak symmetry breaking scale. Strikingly, we find that the strongest 2 → n limits fall in a narrow range, 3-170 TeV (with n = 2-24), depending on the observed fermion masses.
NASA Astrophysics Data System (ADS)
Adam, J.; Adamová, D.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmed, I.; Ahn, S. U.; Aimo, I.; Aiola, S.; Ajaz, M.; Akindinov, A.; Alam, S. N.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; Andrei, C.; Andronic, A.; Anguelov, V.; Anielski, J.; Antičić, T.; Antinori, F.; Antonioli, P.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Armesto, N.; Arnaldi, R.; Aronsson, T.; Arsene, I. C.; Arslandok, M.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Bach, M.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Ball, M.; Baltasar Dos Santos Pedrosa, F.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Belmont, R.; Belmont-Moreno, E.; Belyaev, V.; Bencedi, G.; Beole, S.; Berceanu, I.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biswas, S.; Bjelogrlic, S.; Blanco, F.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Bøggild, H.; Boldizsár, L.; Bombara, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Bossú, F.; Botje, M.; Botta, E.; Böttger, S.; Braun-Munzinger, P.; Bregant, M.; Breitner, T.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buncic, P.; Busch, O.; Buthelezi, Z.; Buxton, J. T.; Caffarri, D.; Cai, X.; Caines, H.; Calero Diaz, L.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Carena, F.; Carena, W.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Cavicchioli, C.; Ceballos Sanchez, C.; Cepila, J.; Cerello, P.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Conesa Balbastre, G.; Conesa del Valle, Z.; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; Deisting, A.; Deloff, A.; Dénes, E.; D'Erasmo, G.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Diaz Corchero, M. A.; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Dobrowolski, T.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Engel, H.; Erazmus, B.; Erhardt, F.; Eschweiler, D.; Espagnon, B.; Estienne, M.; Esumi, S.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Felea, D.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernández Téllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Fleck, M. G.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Frankenfeld, U.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gallio, M.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Gargiulo, C.; Gasik, P.; Germain, M.; Gheata, A.; Gheata, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Gomez Ramirez, A.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Graczykowski, L. K.; Grelli, A.; Grigoras, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grinyov, B.; Grion, N.; Grosse-Oetringhaus, J. F.; Grossiord, J.-Y.; Grosso, R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gulkanyan, H.; Gunji, T.; Gupta, A.; Gupta, R.; Haake, R.; Haaland, Ø.; Hadjidakis, C.; Haiduc, M.; Hamagaki, H.; Hamar, G.; Hanratty, L. D.; Hansen, A.; Harris, J. W.; Hartmann, H.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Heide, M.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Hess, B. A.; Hetland, K. F.; Hilden, T. E.; Hillemanns, H.; Hippolyte, B.; Hristov, P.; Huang, M.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Ilkiv, I.; Inaba, M.; Ionita, C.; Ippolitov, M.; Irfan, M.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jachołkowski, A.; Jacobs, P. M.; Jahnke, C.; Jang, H. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jimenez Bustamante, R. T.; Jones, P. G.; Jung, H.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kamin, J.; Kang, J. H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Khan, K. H.; Khan, M. M.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Kileng, B.; Kim, B.; Kim, D. W.; Kim, D. J.; Kim, H.; Kim, J. S.; Kim, M.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobayashi, T.; Kobdaj, C.; Kofarago, M.; Köhler, M. K.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kouzinopoulos, C.; Kovalenko, V.; Kowalski, M.; Kox, S.; Koyithatta Meethaleveedu, G.; Kral, J.; Králik, I.; Kravčáková, A.; Krelina, M.; Kretz, M.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kucheriaev, Y.; Kugathasan, T.; Kuhn, C.; Kuijer, P. G.; Kulakov, I.; Kumar, J.; Kumar, L.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kushpil, S.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lea, R.; Leardini, L.; Lee, G. R.; Lee, S.; Legrand, I.; Lehnert, J.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; León Monzón, I.; Leoncino, M.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Ljunggren, H. M.; Lodato, D. F.; Loenne, P. I.; Loggins, V. 
R.; Loginov, V.; Loizides, C.; Lopez, X.; López Torres, E.; Lowe, A.; Lu, X.-G.; Luettig, P.; Lunardon, M.; Luparello, G.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manceau, L.; Manko, V.; Manso, F.; Manzari, V.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martashvili, I.; Martin, N. A.; Martin Blanco, J.; Martinengo, P.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Martynov, Y.; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Massacrier, L.; Mastroserio, A.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzoni, M. A.; Mcdonald, D.; Meddi, F.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Minervini, L. M.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montaño Zetina, L.; Montes, E.; Morando, M.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Müller, H.; Mulligan, J. D.; Munhoz, M. G.; Murray, S.; Musa, L.; Musinsky, J.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Nattrass, C.; Nayak, K.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Oh, S. K.; Ohlson, A.; Okatan, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira Da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pagano, P.; Paić, G.; Pajares, C.; Pal, S. K.; Pan, J.; Pandey, A. K.; Pant, D.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Paul, B.; Pawlak, T.; Peitzmann, T.; Pereira Da Costa, H.; Pereira De Oliveira Filho, E.; Peresunko, D.; Pérez Lara, C. E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Razazi, V.; Read, K. F.; Real, J. S.; Redlich, K.; Reed, R. J.; Rehman, A.; Reichelt, P.; Reicher, M.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Rettig, F.; Revol, J.-P.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rivetti, A.; Rocco, E.; Rodríguez Cahuantzi, M.; Rodriguez Manso, A.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Romita, R.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Sadovsky, S.; Šafařík, K.; Sahlmuller, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salgado, C. 
A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sanchez Castro, X.; Šándor, L.; Sandoval, A.; Sano, M.; Santagati, G.; Sarkar, D.; Scapparone, E.; Scarlassara, F.; Scharenberg, R. P.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schuchmann, S.; Schukraft, J.; Schulc, M.; Schuster, T.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Seeder, K. S.; Seger, J. E.; Sekiguchi, Y.; Selyuzhenkov, I.; Senosi, K.; Seo, J.; Serradilla, E.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shangaraev, A.; Sharma, A.; Sharma, N.; Shigaki, K.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singha, S.; Singhal, V.; Sinha, B. C.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Søgaard, C.; Soltz, R.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; Spacek, M.; Spiriti, E.; Sputowska, I.; Spyropoulou-Stassinaki, M.; Srivastava, B. K.; Stachel, J.; Stan, I.; Stefanek, G.; Steinpreis, M.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Sultanov, R.; Šumbera, M.; Symons, T. J. M.; Szabo, A.; Szanto de Toledo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Takahashi, J.; Tanaka, N.; Tangaro, M. A.; Tapia Takaki, J. D.; Tarantola Peloni, A.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thäder, J.; Thomas, D.; Tieulent, R.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vajzer, M.; Vala, M.; Valencia Palomo, L.; Vallero, S.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Venaruzzo, M.; Vercellin, E.; Vergara Limón, S.; Vernet, R.; Verweij, M.; Vickovic, L.; Viesti, G.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrláková, J.; Vulpescu, B.; Vyushin, A.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Wang, Y.; Watanabe, D.; Weber, M.; Weber, S. G.; Wessels, J. P.; Westerhoff, U.; Wiechula, J.; Wikne, J.; Wilde, M.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yaldo, C. G.; Yamaguchi, Y.; Yang, H.; Yang, P.; Yano, S.; Yasnopolskiy, S.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yurchenko, V.; Yushmanov, I.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. S.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zhu, X.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.
2016-01-01
We present results of a search for two hypothetical strange dibaryon states, i.e. the H-dibaryon and the possible $\overline{\Lambda n}$ bound state. The search is performed with the ALICE detector in central (0-10%) Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, by invariant mass analysis in the decay modes $\overline{\Lambda n} \rightarrow \overline{d}\pi^+$ and H-dibaryon $\rightarrow \Lambda p \pi^-$. No evidence for these bound states is observed. Upper limits are determined at 99% confidence level for a wide range of lifetimes and for the full range of branching ratios. The results are compared to thermal, coalescence and hybrid UrQMD model expectations, which correctly describe the production of other loosely bound states, like the deuteron and the hypertriton.
Extracting Loop Bounds for WCET Analysis Using the Instrumentation Point Graph
NASA Astrophysics Data System (ADS)
Betts, A.; Bernat, G.
2009-05-01
Every calculation engine proposed in the literature of Worst-Case Execution Time (WCET) analysis requires upper bounds on loop iterations. Existing mechanisms to procure this information are either error prone, because they are gathered from the end-user, or limited in scope, because automatic analyses target very specific loop structures. In this paper, we present a technique that obtains bounds completely automatically for arbitrary loop structures. In particular, we show how to employ the Instrumentation Point Graph (IPG) to parse traces of execution (generated by an instrumented program) in order to extract bounds relative to any loop-nesting level. With this technique, therefore, non-rectangular dependencies between loops can be captured, allowing more accurate WCET estimates to be calculated. We demonstrate the improvement in accuracy by comparing WCET estimates computed through our HMB framework against those computed with state-of-the-art techniques.
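A minimal sketch of the trace-parsing idea, assuming a simplified event format (loop-header IDs plus an 'END' marker closing each invocation); the paper's actual technique operates on the Instrumentation Point Graph and handles arbitrary nesting:

```python
# Hedged illustration, not the authors' IPG implementation: extract
# per-invocation loop bounds from a trace of instrumentation points.
from collections import defaultdict

def loop_bounds_from_trace(trace, headers):
    """Max iterations observed per loop header, taken over all
    invocations; 'END' closes the current invocation."""
    current = defaultdict(int)   # iterations in the current invocation
    bounds = defaultdict(int)    # max over all invocations seen so far
    for event in trace:
        if event in headers:
            current[event] += 1
        elif event == 'END':
            for h, n in current.items():
                bounds[h] = max(bounds[h], n)
            current.clear()
    return dict(bounds)

# loop L1 iterates 3 times in one invocation and 5 in another:
trace = ['L1'] * 3 + ['END'] + ['L1'] * 5 + ['END']
print(loop_bounds_from_trace(trace, {'L1'}))   # {'L1': 5}
```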
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adam, J.; Adamová, D.; Aggarwal, M. M.
Here, we present results of a search for two hypothetical strange dibaryon states, i.e. the H-dibaryon and the possible $\overline{\Lambda n}$ bound state. The search is performed with the ALICE detector in central (0-10%) Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, by invariant mass analysis in the decay modes $\overline{\Lambda n} \rightarrow \bar{d}\pi^+$ and H-dibaryon $\rightarrow \Lambda p \pi^-$. No evidence for these bound states is observed. Upper limits are determined at 99% confidence level for a wide range of lifetimes and for the full range of branching ratios. The results are compared to thermal, coalescence and hybrid UrQMD model expectations, which describe correctly the production of other loosely bound states, like the deuteron and the hypertriton.
Information models of software productivity - Limits on productivity growth
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1992-01-01
Research into generalized information-metric models of software process productivity establishes quantifiable behavior and theoretical bounds. The models establish a fundamental mathematical relationship between software productivity and the human capacity for information traffic, the software product yield (system size), information efficiency, and tool and process efficiencies. An upper bound is derived that quantifies average software productivity and the maximum rate at which it may grow. This bound reveals that ultimately, when tools, methodologies, and automated assistants have reached their maximum effective state, further improvement in productivity can only be achieved through increasing software reuse. The reuse advantage is shown not to increase faster than logarithmically in the number of reusable features available. The reuse bound is further shown to be somewhat dependent on the reuse policy: a general 'reuse everything' policy can lead to a somewhat slower productivity growth than a specialized reuse policy.
Wang, Yang; Li, Mingxing; Tu, Z C; Hernández, A Calvo; Roco, J M M
2012-07-01
The figure of merit for refrigerators performing finite-time Carnot-like cycles between two reservoirs at temperatures T(h) and T(c) (< T(h))
Resistivity bound for hydrodynamic bad metals
Lucas, Andrew; Hartnoll, Sean A.
2017-01-01
We obtain a rigorous upper bound on the resistivity ρ of an electron fluid whose electronic mean free path is short compared with the scale of spatial inhomogeneities. When such a hydrodynamic electron fluid supports a nonthermal diffusion process—such as an imbalance mode between different bands—we show that the resistivity bound becomes ρ ≲ AΓ. The coefficient A is independent of temperature and inhomogeneity lengthscale, and Γ is a microscopic momentum-preserving scattering rate. In this way, we obtain a unified mechanism—without umklapp—for ρ ∼ T² in a Fermi liquid and the crossover to ρ ∼ T in quantum critical regimes. This behavior is widely observed in transition metal oxides, organic metals, pnictides, and heavy fermion compounds and has presented a long-standing challenge to transport theory. Our hydrodynamic bound allows phonon contributions to diffusion constants, including thermal diffusion, to directly affect the electrical resistivity. PMID:29073054
NASA Astrophysics Data System (ADS)
Wang, C.; Gordon, R. G.; Zheng, L.
2016-12-01
Hotspot tracks are widely used to estimate the absolute velocities of plates, i.e., relative to the lower mantle. Knowledge of current motion between hotspots is important for both plate kinematics and mantle dynamics and informs the discussion on the origin of the Hawaiian-Emperor Bend. Following Morgan & Morgan (2007), we focus only on the trends of young hotspot tracks and omit volcanic propagation rates. The dispersion of the trends can be partitioned into between-plate and within-plate dispersion. Applying the method of Gripp & Gordon (2002) to the hotspot trend data set of Morgan & Morgan (2007) constrained to the MORVEL relative plate angular velocities (DeMets et al., 2010) results in a standard deviation of the 56 hotspot trends of 22°. The largest angular misfits tend to occur on the slowest moving plates. Alternatively, estimation of best-fitting poles to hotspot tracks on the nine individual plates results in a standard deviation of trends of only 13°, a statistically significant reduction even with the introduction of 15 additional adjustable parameters. If all of the between-plate misfit is due to motion of groups of hotspots (beneath different plates), nominal velocities relative to the mean hotspot reference frame range from 1 to 4 mm/yr with the lower bounds ranging from 1 to 3 mm/yr and the greatest upper bound being 8 mm/yr. These are consistent with bounds on motion between Pacific and Indo-Atlantic hotspots over the past ≈50 Ma, which range from zero (lower bound) to 8 to 13 mm/yr (upper bounds) (Koivisto et al., 2014). We also determine HS4-MORVEL, a new global set of plate angular velocities relative to the hotspots constrained to consistency with the MORVEL relative plate angular velocities, using a two-tier analysis similar to that used by Zheng et al. (2014) to estimate the SKS-MORVEL global set of absolute plate velocities fit to the orientation of seismic anisotropy. We find that the 95% confidence limits of HS4-MORVEL and SKS-MORVEL overlap substantially and that the two sets of angular velocities differ insignificantly. Thus we combine the two sets of angular velocities to estimate ABS-MORVEL, an optimal set of global angular velocities consistent with both hotspot tracks and seismic anisotropy. ABS-MORVEL has more compact confidence limits than either SKS-MORVEL or HS4-MORVEL.
Interferometric tests of Planckian quantum geometry models
Kwon, Ohkyung; Hogan, Craig J.
2016-04-19
The effect of Planck scale quantum geometrical effects on measurements with interferometers is estimated with standard physics, and with a variety of proposed extensions. It is shown that effects are negligible in standard field theory with canonically quantized gravity. Statistical noise levels are estimated in a variety of proposals for nonstandard metric fluctuations, and these alternatives are constrained using upper bounds on stochastic metric fluctuations from LIGO. Idealized models of several interferometer system architectures are used to predict signal noise spectra in a quantum geometry that cannot be described by a fluctuating metric, in which position noise arises from holographic bounds on directional information. Lastly, predictions in this case are shown to be close to current and projected experimental bounds.
Integrability and chemical potential in the (3 + 1)-dimensional Skyrme model
NASA Astrophysics Data System (ADS)
Alvarez, P. D.; Canfora, F.; Dimakis, N.; Paliathanasis, A.
2017-10-01
Using a remarkable mapping from the original (3 + 1)-dimensional Skyrme model to the Sine-Gordon model, we construct the first analytic examples of Skyrmions as well as of Skyrmion-anti-Skyrmion bound states within a finite box in (3 + 1)-dimensional flat space-time. An analytic upper bound on the number of these Skyrmion-anti-Skyrmion bound states is derived. We compute the critical isospin chemical potential beyond which these Skyrmions cease to exist. With these tools, we also construct topologically protected time-crystals: time-periodic configurations whose time-dependence is protected by their non-trivial winding number. These are striking realizations of the ideas of Shapere and Wilczek. The critical isospin chemical potential for these time-crystals is determined.
Properties of Coulomb crystals: rigorous results.
Cioslowski, Jerzy
2008-04-28
Rigorous equalities and bounds for several properties of Coulomb crystals are presented. The energy e(N) per particle pair is shown to be a nondecreasing function of the particle number N for all clusters described by double-power-law pairwise-additive potentials ε(r) that are unbound at both r → 0 and r → ∞. A lower bound for the ratio of the mean reciprocal crystal radius and e(N) is derived. The leading term in the asymptotic expression for the shell capacity that appears in the recently introduced approximate model of Coulomb crystals is obtained, providing in turn explicit large-N asymptotics for e(N) and the mean crystal radius. In addition, properties of the harmonic vibrational spectra are investigated, producing an upper bound for the zero-point energy.
Calculating Reuse Distance from Source Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayanan, Sri Hari Krishna; Hovland, Paul
The efficient use of a system is of paramount importance in high-performance computing. Applications need to be engineered for future systems even before the architecture of such a system is clearly known. Static performance analysis that generates performance bounds is one way to approach the task of understanding application behavior. Performance bounds provide an upper limit on the performance of an application on a given architecture. Predicting cache hierarchy behavior and accesses to main memory is a requirement for accurate performance bounds. This work presents our static reuse distance algorithm to generate reuse distance histograms. We then use these histograms to predict cache miss rates. Experimental results for kernels studied show that the approach is accurate.
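The static algorithm itself is not reproduced in the abstract, but the underlying notion is easy to illustrate dynamically. Below is a hedged sketch that measures reuse distances from an address trace with an LRU stack and predicts the miss rate of a fully associative LRU cache; all names are illustrative:

```python
# Dynamic LRU-stack simulation (not the report's *static* algorithm):
# histogram reuse distances, then predict cache miss rates.
from collections import Counter

INF = float('inf')

def reuse_histogram(trace):
    """Reuse distance = number of distinct addresses touched between
    two uses of the same address (INF for a first use)."""
    stack, hist = [], Counter()
    for addr in trace:
        if addr in stack:
            i = stack.index(addr)
            hist[len(stack) - 1 - i] += 1
            stack.pop(i)
        else:
            hist[INF] += 1
        stack.append(addr)          # most recently used at the end
    return hist

def miss_rate(hist, capacity):
    """An access hits in an LRU cache of `capacity` lines iff its
    reuse distance is < capacity."""
    total = sum(hist.values())
    return sum(n for d, n in hist.items() if d >= capacity) / total

trace = [0, 0, 1, 0, 1]
h = reuse_histogram(trace)
print(h, miss_rate(h, capacity=2))  # only the two cold misses miss: 0.4
```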
Energy efficient quantum machines
NASA Astrophysics Data System (ADS)
Abah, Obinna; Lutz, Eric
2017-05-01
We investigate the performance of a quantum thermal machine operating in finite time based on shortcut-to-adiabaticity techniques. We compute efficiency and power for a paradigmatic harmonic quantum Otto engine by taking the energetic cost of the shortcut driving explicitly into account. We demonstrate that shortcut-to-adiabaticity machines outperform conventional ones for fast cycles. We further derive generic upper bounds on both quantities, valid for any heat engine cycle, using the notion of quantum speed limit for driven systems. We establish that these quantum bounds are tighter than those stemming from the second law of thermodynamics.
New operational matrices for solving fractional differential equations on the half-line.
Bhrawy, Ali H; Taha, Taha M; Alzahrani, Ebraheem O; Baleanu, Dumitru; Alzahrani, Abdulrahim A
2015-01-01
In this paper, the fractional-order generalized Laguerre operational matrices (FGLOM) of fractional derivatives and fractional integration are derived. These operational matrices are used together with spectral tau method for solving linear fractional differential equations (FDEs) of order ν (0 < ν < 1) on the half line. An upper bound of the absolute errors is obtained for the approximate and exact solutions. Fractional-order generalized Laguerre pseudo-spectral approximation is investigated for solving nonlinear initial value problem of fractional order ν. The extension of the fractional-order generalized Laguerre pseudo-spectral method is given to solve systems of FDEs. We present the advantages of using the spectral schemes based on fractional-order generalized Laguerre functions and compare them with other methods. Several numerical examples are implemented for FDEs and systems of FDEs including linear and nonlinear terms. We demonstrate the high accuracy and the efficiency of the proposed techniques.
Total Variation Denoising and Support Localization of the Gradient
NASA Astrophysics Data System (ADS)
Chambolle, A.; Duval, V.; Peyré, G.; Poon, C.
2016-10-01
This paper describes the geometrical properties of the solutions to the total variation denoising method. A folklore statement is that this method is able to restore sharp edges, but at the same time might introduce some staircasing (i.e. "fake" edges) in flat areas. Quite surprisingly, numerical evidence aside, almost no theoretical results are available to back up these claims. The first contribution of this paper is a precise mathematical definition of the "extended support" (associated to the noise-free image) of TV denoising. This is intuitively the region which is unstable and will suffer from the staircasing effect. Our main result shows that the TV denoising method indeed restores a piecewise constant image outside a small tube surrounding the extended support. Furthermore, the radius of this tube shrinks toward zero as the noise level vanishes and, in some cases, an upper bound on the convergence rate is given.
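The paper is theoretical, but the TV denoising model it studies can be run directly, for example with scikit-image's implementation of Chambolle's algorithm (a rough illustration; the weight value is arbitrary):

```python
# Minimal TV denoising demo; larger `weight` means stronger smoothing
# (and, per the paper, a wider "extended support" where staircasing
# may appear).
import numpy as np
from skimage import data
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = data.camera() / 255.0                       # test image in [0, 1]
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

denoised = denoise_tv_chambolle(noisy, weight=0.1)
print(float(np.abs(denoised - clean).mean()))       # mean absolute error
```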
The end of the MACHO era, revisited: New limits on MACHO masses from halo wide binaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monroy-Rodríguez, Miguel A.; Allen, Christine, E-mail: chris@astro.unam.mx
2014-08-01
In order to determine an upper bound for the mass of the massive compact halo objects (MACHOs), we use the halo binaries contained in a recent catalog by Allen and Monroy-Rodríguez. To dynamically model their interactions with massive perturbers, a Monte Carlo simulation is conducted, using an impulsive approximation method and assuming a galactic halo constituted by massive particles of a characteristic mass. The results of such simulations are compared with several subsamples of our improved catalog of candidate halo wide binaries. In accordance with Quinn et al., we also find our results to be very sensitive to the widest binaries. However, our larger sample, together with the fact that we can obtain galactic orbits for 150 of our systems, allows a more reliable estimate of the maximum MACHO mass than that obtained previously. If we employ the entire sample of 211 candidate halo wide binaries, we obtain an upper limit of 112 M☉. However, using the 150 binaries in our catalog with computed galactic orbits, we are able to refine our fitting criteria. Thus, for the 100 most halo-like binaries we obtain a maximum MACHO mass of 21-68 M☉. Furthermore, we can estimate the dynamical effects of the galactic disk using binary samples that spend progressively shorter times within the disk. By extrapolating the limits obtained for our most reliable—albeit smallest—sample, we find that as the time spent within the disk tends to zero, the upper bound of the MACHO mass tends to less than 5 M☉. The non-uniform density of the halo has also been taken into account, but the limit obtained, less than 5 M☉, does not differ much from the previous one. Together with microlensing studies that provide lower limits on the MACHO mass, our results essentially exclude the existence of such objects in the galactic halo.
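One ingredient of the method, the impulsive approximation, is easy to sketch: a perturber of mass M passing a star at impact parameter b with relative velocity V imparts a velocity kick of roughly 2GM/(bV). The numbers below are illustrative, not the paper's simulation parameters:

```python
# Standard impulse-approximation kick from a single MACHO flyby;
# repeated kicks like this are what the paper's Monte Carlo
# accumulates to test wide-binary survival.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
PC = 3.086e16          # m

def impulse_kick(M_macho, b, V):
    """Velocity change (m/s) of one star from a single passage."""
    return 2 * G * M_macho / (b * V)

dv = impulse_kick(M_macho=50 * M_SUN, b=0.1 * PC, V=2.0e5)
print(dv)  # ~20 m/s; compare with a wide binary's ~100 m/s orbital speed
```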
NASA Astrophysics Data System (ADS)
Li, Hongwei; Yuan, Ye; Xu, Zhiguo; Wang, Zongchen; Wang, Juncheng; Wang, Peitao; Gao, Yi; Hou, Jingming; Shan, Di
2017-06-01
The South China Sea (SCS) and its adjacent small basins, including the Sulu Sea and Celebes Sea, are commonly identified as a tsunami-prone region by their historical records of seismicity and tsunamis. However, quantification of tsunami hazard in the SCS region has remained an intractable issue due to the highly complex tectonic setting and multiple seismic sources within and surrounding this area. Probabilistic Tsunami Hazard Assessment (PTHA) is performed in the present study to evaluate tsunami hazard in the SCS region based on a brief review of seismological and tsunami records. Five regional and local potential tsunami sources are tentatively identified, and earthquake catalogs are generated using Monte Carlo simulation following the Tapered Gutenberg-Richter relationship for each zone. Considering the lack of consensus on the magnitude upper bound for each seismic source, as well as its critical role in PTHA, the major concern of the present study is to define the upper and lower limits of tsunami hazard in the SCS region comprehensively by adopting different corner magnitudes derived by multiple principles and approaches, including TGR regression of the historical catalog, fault-length scaling, tectonic and seismic moment balance, and repetition of the historical largest event. The results show that tsunami hazard in the SCS and adjoining basins is subject to large variations when adopting different corner magnitudes, with the upper bounds 2-6 times the lower. The probabilistic tsunami hazard maps for specified return periods reveal a much higher threat from the Cotabato Trench and Sulawesi Trench in the Celebes Sea, whereas the tsunami hazard received by the coasts of the SCS and Sulu Sea is relatively moderate, yet non-negligible. By combining an empirical method with numerical study of historical tsunami events, the present PTHA results are tentatively validated. The correspondence lends confidence to our study. Considering the proximity of major sources to population-laden cities around the SCS region, tsunami hazard and risk should be further highlighted in the future.
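One ingredient above, Monte Carlo sampling of magnitudes from a Tapered Gutenberg-Richter law, can be sketched as follows. It uses the fact that the tapered-Pareto survivor function in seismic moment factorizes into a Pareto term and an exponential taper, so a draw is the minimum of the two; parameter values are illustrative, not those of the study:

```python
# Exact sampling from a tapered Gutenberg-Richter (tapered Pareto)
# distribution of seismic moment, converted to moment magnitude via
# Mw = (log10(M0) - 9.1) / 1.5 with M0 in N*m.
import numpy as np

def tapered_gr_magnitudes(n, beta=0.66, m_min=5.0, m_corner=8.0, seed=0):
    rng = np.random.default_rng(seed)
    mom_t = 10 ** (1.5 * m_min + 9.1)     # threshold moment
    mom_c = 10 ** (1.5 * m_corner + 9.1)  # corner moment
    u1, u2 = rng.random(n), rng.random(n)
    pareto = mom_t * u1 ** (-1.0 / beta)        # survivor (mom_t/m)^beta
    taper = mom_t - mom_c * np.log(u2)          # survivor exp((mom_t-m)/mom_c)
    moment = np.minimum(pareto, taper)          # survivors multiply
    return (np.log10(moment) - 9.1) / 1.5

mags = tapered_gr_magnitudes(100_000)
print(mags.max(), (mags > 8.0).mean())  # the corner magnitude tapers the tail
```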
Multi-dimensional Rankings, Program Termination, and Complexity Bounds of Flowchart Programs
NASA Astrophysics Data System (ADS)
Alias, Christophe; Darte, Alain; Feautrier, Paul; Gonnord, Laure
Proving the termination of a flowchart program can be done by exhibiting a ranking function, i.e., a function from the program states to a well-founded set, which strictly decreases at each program step. A standard method to automatically generate such a function is to compute invariants for each program point and to search for a ranking in a restricted class of functions that can be handled with linear programming techniques. Previous algorithms based on affine rankings either are applicable only to simple loops (i.e., single-node flowcharts) and rely on enumeration, or are not complete in the sense that they are not guaranteed to find a ranking in the class of functions they consider, if one exists. Our first contribution is to propose an efficient algorithm to compute ranking functions: It can handle flowcharts of arbitrary structure, the class of candidate rankings it explores is larger, and our method, although greedy, is provably complete. Our second contribution is to show how to use the ranking functions we generate to get upper bounds for the computational complexity (number of transitions) of the source program. This estimate is a polynomial, which means that we can handle programs with more than linear complexity. We applied the method on a collection of test cases from the literature. We also show the links and differences with previous techniques based on the insertion of counters.
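A toy instance of the linear-programming view described above, assuming the simplest possible loop; the paper's algorithm derives such constraints from computed invariants for flowcharts of arbitrary structure:

```python
# For    while x >= 1: x = x - 1
# we seek an affine ranking f(x) = a*x + b with f >= 0 on the guard
# and f decreasing by at least 1 per step; both are linear in (a, b).
from scipy.optimize import linprog

# decrease:   f(x) - f(x-1) = a >= 1            ->  -a     <= -1
# nonnegative on {x >= 1}: a >= 0 and a+b >= 0  ->  -a - b <=  0
res = linprog(c=[0, 0],                  # pure feasibility problem
              A_ub=[[-1, 0], [-1, -1]],
              b_ub=[-1, 0],
              bounds=[(None, None), (None, None)])
a, b = res.x
print("ranking certificate: f(x) = %g*x + %g" % (a, b))
# any feasible (a, b) certifies termination of the loop
```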
Bounds on strong field magneto-transport in three-dimensional composites
NASA Astrophysics Data System (ADS)
Briane, Marc; Milton, Graeme W.
2011-10-01
This paper deals with bounds satisfied by the effective non-symmetric conductivity of three-dimensional composites in the presence of a strong magnetic field. On the one hand, it is shown that for general composites the antisymmetric part of the effective conductivity cannot be bounded solely in terms of the antisymmetric part of the local conductivity, contrary to the columnar case studied by Briane and Milton [SIAM J. Appl. Math. 70(8), 3272-3286 (2010), 10.1137/100798090]. Thus a suitable rank-two laminate, the conductivity of which has a bounded antisymmetric part together with a high-contrast symmetric part, may generate an arbitrarily large antisymmetric part of the effective conductivity. On the other hand, bounds are provided which show that the antisymmetric part of the effective conductivity must go to zero if the upper bound on the antisymmetric part of the local conductivity goes to zero, and the symmetric part of the local conductivity remains bounded below and above. Elementary bounds on the effective moduli are derived assuming the local conductivity and the effective conductivity have transverse isotropy in the plane orthogonal to the magnetic field. New Hashin-Shtrikman type bounds for two-phase three-dimensional composites with a non-symmetric conductivity are provided under geometric isotropy of the microstructure. The derivation of the bounds is based on a particular variational principle symmetrizing the problem, and the use of Y-tensors involving the averages of the fields in each phase.
Static aeroelastic analysis and tailoring of missile control fins
NASA Technical Reports Server (NTRS)
Mcintosh, S. C., Jr.; Dillenius, M. F. E.
1989-01-01
A concept for enhancing the design of control fins for supersonic tactical missiles is described. The concept makes use of aeroelastic tailoring to create fin designs (for given planforms) that limit the variations in hinge moments that can occur during maneuvers involving high load factors and high angles of attack. It combines supersonic nonlinear aerodynamic load calculations with finite-element structural modeling, static and dynamic structural analysis, and optimization. The problem definition is illustrated. The fin is at least partly made up of a composite material. The layup is fixed, and the orientations of the material principal axes are allowed to vary; these are the design variables. The objective is the magnitude of the difference between the chordwise location of the center of pressure and its desired location, calculated for a given flight condition. Three types of constraints can be imposed: upper bounds on static displacements for a given set of load conditions, lower bounds on specified natural frequencies, and upper bounds on the critical flutter damping parameter at a given set of flight speeds and altitudes. The idea is to seek designs that reduce variations in hinge moments that would otherwise occur. The block diagram describes the operation of the computer program that accomplishes these tasks. There is an option for a single analysis in addition to the optimization.
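A hedged sketch of the optimization structure just described, with made-up surrogate functions standing in for the nonlinear aerodynamic and finite-element analyses (every function and number below is hypothetical):

```python
# Objective: |x_cp(theta) - x_cp_target| over ply-orientation design
# variables theta, subject to a displacement upper bound, mirroring
# the constraint types listed in the abstract.
import numpy as np
from scipy.optimize import minimize

def center_of_pressure(theta):      # surrogate for the aerodynamic code
    return 0.45 + 0.05 * np.mean(np.cos(theta))

def tip_displacement(theta):        # surrogate for the structural code
    return 0.020 + 0.005 * np.mean(np.sin(2 * theta))

x_cp_target = 0.42                  # desired chordwise center of pressure

res = minimize(
    lambda th: abs(center_of_pressure(th) - x_cp_target),
    x0=0.3 * np.ones(4),            # material principal-axis angles (rad)
    method='SLSQP',
    constraints=[{'type': 'ineq',   # displacement bound: g(theta) >= 0
                  'fun': lambda th: 0.022 - tip_displacement(th)}],
)
print(res.x, res.fun)
```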
Bounds on light gluinos from the BEBC beam dump experiment
NASA Astrophysics Data System (ADS)
Cooper-Sarkar, A. M.; Parker, M. A.; Sarkar, S.; Aderholz, M.; Bostock, P.; Clayton, E. F.; Faccini-Turluer, M. L.; Grässler, H.; Guy, J.; Hulth, P. O.; Hultqvist, K.; Idschok, U.; Klein, H.; Kreutzmann, H.; Krstic, J.; Mobayyen, M. M.; Morrison, D. R. O.; Nellen, B.; Schmid, P.; Schmitz, N.; Talebzadeh, M.; Venus, W.; Vignaud, D.; Walck, Ch.; Wachsmuth, H.; Wünsch, B.; WA66 Collaboration
1985-10-01
Observational upper limits on anomalous neutral-current events in a proton beam dump experiment are used to constrain the possible hadroproduction and decay of light gluinos. These results require m(g̃) ≳ 4 GeV for m(q̃) ≲ m(W).
5. Corridor A and Building No. 9962A (with white door). ...
5. Corridor A and Building No. 9962-A (with white door). In upper left is east side of Building No. 9952-B. - Madigan Hospital, Corridors & Ramps, Bounded by Wilson & McKinley Avenues & Garfield & Lincoln Streets, Tacoma, Pierce County, WA
Liouville type theorems of a nonlinear elliptic equation for the V-Laplacian
NASA Astrophysics Data System (ADS)
Huang, Guangyue; Li, Zhi
2018-03-01
In this paper, we consider Liouville type theorems for positive solutions to the following nonlinear elliptic equation: Δ_V u + a u log u = 0, where a is a nonzero real constant. By using gradient estimates, we obtain upper bounds of |∇u|.
NASA Astrophysics Data System (ADS)
Watanabe, Norihiro; Kolditz, Olaf
2015-07-01
This work reports numerical stability conditions in two-dimensional solute transport simulations including discrete fractures surrounded by an impermeable rock matrix. We use an advective-dispersive problem described in Tang et al. (1981) and examine the stability of the Crank-Nicolson Galerkin finite element method (CN-GFEM). The stability conditions are analyzed in terms of the spatial discretization length perpendicular to the fracture, the flow velocity, the diffusion coefficient, the matrix porosity, the fracture aperture, and the fracture longitudinal dispersivity. In addition, we verify the applicability of the recently developed finite element method-flux corrected transport (FEM-FCT) method of Kuzmin to suppress oscillations in the hybrid system, with a comparison to the commonly utilized Streamline Upwinding/Petrov-Galerkin (SUPG) method. Major findings of this study are (1) the mesh von Neumann number (Fo) ≥ 0.373 must be satisfied to avoid undershooting in the matrix, (2) in addition to an upper bound, the Courant number also has a lower bound in the fracture in cases of low dispersivity, and (3) the FEM-FCT method can effectively suppress the oscillations in both the fracture and the matrix. The results imply that, in cases of low dispersivity, prerefinement of a numerical mesh is not sufficient to avoid the instability in the hybrid system if a problem involves evolutionary flow fields and dynamic material parameters. Applying the FEM-FCT method to such problems is recommended if negative concentrations cannot be tolerated and computing time is not a strong issue.
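The two dimensionless numbers in findings (1) and (2) are straightforward to check for a candidate discretization; a small convenience script, with illustrative parameter values:

```python
# Fo is the mesh von Neumann number in the matrix, Cr the Courant
# number in the fracture, as used in the abstract above.
def mesh_von_neumann(D, dt, dy):
    return D * dt / dy**2          # Fo = D*dt/dy^2

def courant(v, dt, dx):
    return v * dt / dx             # Cr = v*dt/dx

D, v = 1.0e-9, 1.0e-5              # matrix diffusion (m^2/s), velocity (m/s)
dt, dy, dx = 1.0e4, 5.0e-3, 1.0e-1 # time step (s), mesh sizes (m)

Fo = mesh_von_neumann(D, dt, dy)
print(f"Fo = {Fo:.3f} ->", "ok" if Fo >= 0.373 else "risk of matrix undershoot")
print(f"Cr = {courant(v, dt, dx):.3f} (needs both lower and upper bounds "
      "in the fracture when dispersivity is low)")
```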
Momentum distributions for ²H(e, e′p)
Ford, William P.; Jeschonnek, Sabine; Van Orden, J. W.
2014-12-29
[Background] A primary goal of deuteron electrodisintegration is the possibility of extracting the deuteron momentum distribution. This extraction is inherently fraught with difficulty, as the momentum distribution is not an observable and the extraction relies on theoretical models dependent on other models as input. [Purpose] We present a new method for extracting the momentum distribution which takes into account a wide variety of model inputs, thus providing a theoretical uncertainty due to the various model constituents. [Method] The calculations presented here use a Bethe-Salpeter-like formalism with a wide variety of bound state wave functions, form factors, and final state interactions. We present a method to extract the momentum distributions from experimental cross sections which takes into account the theoretical uncertainty from the various model constituents entering the calculation. [Results] In order to test the extraction, pseudo-data were generated, and the extracted "experimental" distribution, which has theoretical uncertainty from the various model inputs, was compared with the theoretical distribution used to generate the pseudo-data. [Conclusions] In the examples we compared, the original distribution was typically within the error band of the extracted distribution. The input wave functions do contain some outliers, which are discussed in the text, but at least this process can provide an upper bound on the deuteron momentum distribution. Due to the reliance on the theoretical calculation to obtain this quantity, any extraction method should account for the theoretical error inherent in these calculations due to model inputs.
Evaluating the Potential Importance of Monoterpene Degradation for Global Acetone Production
NASA Astrophysics Data System (ADS)
Kelp, M. M.; Brewer, J.; Keller, C. A.; Fischer, E. V.
2015-12-01
Acetone is one of the most abundant volatile organic compounds (VOCs) in the atmosphere, but estimates of the global source of acetone vary widely. A better understanding of acetone sources is essential because acetone serves as a source of HOx in the upper troposphere and as a precursor to the NOx reservoir species peroxyacetyl nitrate (PAN). Although there are primary anthropogenic and pyrogenic sources of acetone, the dominant acetone sources are thought to be from direct biogenic emissions and photochemical production, particularly from the oxidation of iso-alkanes. Recent work suggests that the photochemical degradation of monoterpenes may also represent a significant contribution to global acetone production. We investigate that hypothesis using the GEOS-Chem chemical transport model. In this work, we calculate the emissions of eight terpene species (α-pinene, β-pinene, limonene, Δ3-carene, myrcene, sabinene, trans-β-ocimene, and an 'other monoterpenes' category which contains 34 other trace species) and couple these with upper and lower bound literature yields from species-specific chamber studies. We compare the simulated acetone distributions against in situ acetone measurements from a global suite of NASA aircraft campaigns. When simulating an upper bound on yields, the model-to-measurement comparison improves for North America at both the surface and in the upper troposphere. The inclusion of acetone production from monoterpene degradation also improves the ability of the model to reproduce observations of acetone in East Asian outflow. However, in general the addition of monoterpenes degrades the model comparison for the Southern Hemisphere.
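The bounding exercise reduces to summing emission-weighted yields; a back-of-envelope sketch with placeholder numbers (not the study's emissions or chamber yields):

```python
# Global monoterpene-derived acetone = sum over species of
# emission x acetone yield, evaluated at lower- and upper-bound
# yields. All values below are hypothetical placeholders.
emissions_tg = {   # Tg/yr
    'a-pinene': 66.0, 'b-pinene': 19.0, 'limonene': 11.0, 'carene': 7.0,
}
yield_bounds = {   # (lower, upper) acetone mass yields
    'a-pinene': (0.02, 0.11), 'b-pinene': (0.03, 0.13),
    'limonene': (0.01, 0.05), 'carene': (0.05, 0.20),
}

lo = sum(emissions_tg[k] * yield_bounds[k][0] for k in emissions_tg)
hi = sum(emissions_tg[k] * yield_bounds[k][1] for k in emissions_tg)
print(f"monoterpene-derived acetone: {lo:.1f}-{hi:.1f} Tg/yr (illustrative)")
```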
Probing the size of extra dimensions with gravitational wave astronomy
NASA Astrophysics Data System (ADS)
Yagi, Kent; Tanahashi, Norihiro; Tanaka, Takahiro
2011-04-01
In the Randall-Sundrum II braneworld model, it has been conjectured, according to the AdS/CFT correspondence, that a brane-localized black hole (BH) larger than the bulk AdS curvature scale ℓ cannot be static, and it is dual to a four-dimensional BH emitting Hawking radiation through some quantum fields. In this scenario, the number of the quantum field species is so large that this radiation changes the orbital evolution of a BH binary. We derived the correction to the gravitational waveform phase due to this effect and estimated the upper bounds on ℓ by performing Fisher analyses. We found that the Deci-Hertz Interferometer Gravitational Wave Observatory and the Big Bang Observatory (DECIGO/BBO) can give a stronger constraint than the current tabletop result by detecting gravitational waves from small mass BH/BH and BH/neutron star (NS) binaries. Furthermore, DECIGO/BBO is expected to detect 10⁵ BH/NS binaries per year. Taking this advantage, we find that DECIGO/BBO can actually measure ℓ down to ℓ = 0.33 μm for a 5 yr observation if we know that binaries are circular a priori. This is about 40 times smaller than the upper bound obtained from the tabletop experiment. On the other hand, when we include eccentricities among the binary parameters, the detection limit weakens to ℓ = 1.5 μm due to strong degeneracies between ℓ and the eccentricities. We also derived the upper bound on ℓ from the expected detection number of extreme mass ratio inspirals with LISA and BH/NS binaries with DECIGO/BBO, extending the discussion made recently by McWilliams [Phys. Rev. Lett. 104, 141601 (2010)]. We found that these less robust constraints are weaker than the ones from phase differences.
Stability results for multi-layer radial Hele-Shaw and porous media flows
NASA Astrophysics Data System (ADS)
Gin, Craig; Daripa, Prabir
2015-01-01
Motivated by stability problems arising in the context of chemical enhanced oil recovery, we perform linear stability analysis of Hele-Shaw and porous media flows in radial geometry involving an arbitrary number of immiscible fluids. Key stability results obtained and their relevance to the stabilization of fingering instability are discussed. Some of the key results, among many others, are (i) absolute upper bounds on the growth rate in terms of the problem data; (ii) validation of these upper bound results against exact computation for the case of three-layer flows; (iii) stability enhancing injection policies; (iv) asymptotic limits that reduce these radial flow results to similar results for rectilinear flows; and (v) the stabilizing effect of curvature of the interfaces. Multi-layer radial flows have been found to have the following additional distinguishing features in comparison to rectilinear flows: (i) very long waves, some of which can be physically meaningful, are stable; and (ii) eigenvalues can be complex for some waves depending on the problem data, implying that the dispersion curves for one or more waves can contact each other. Similar to the rectilinear case, these results can be useful in providing insight into the interfacial instability transfer mechanism as the problem data are varied. Moreover, these can be useful in devising smart injection policies as well as controlling the complexity of the long-term dynamics when drops of various immiscible fluids intersperse among each other. As an application of the upper bound results, we provide stabilization criteria and design an almost stable multi-layer system by adding many layers of fluid with small positive jumps in viscosity in the direction of the basic flow.
Galluzzi, Paolo; de Jong, Marcus C; Sirin, Selma; Maeder, Philippe; Piu, Pietro; Cerase, Alfonso; Monti, Lucia; Brisse, Hervé J; Castelijns, Jonas A; de Graaf, Pim; Goericke, Sophia L
2016-07-01
Differentiation between normal solid (non-cystic) pineal glands and pineal pathologies on brain MRI is difficult. The aim of this study was to assess the size of the solid pineal gland in children (0-5 years) and compare the findings with published pineoblastoma cases. We retrospectively analyzed the size (width, height, planimetric area) of solid pineal glands in 184 non-retinoblastoma patients (73 female, 111 male) aged 0-5 years on MRI. The effect of age and gender on gland size was evaluated. Linear regression analysis was performed to analyze the relation between size and age. Ninety-nine percent prediction intervals around the mean were added to construct a normal size range per age, with the upper bound of the prediction interval as the parameter of interest, serving as a cutoff for normalcy. There was no significant interaction of gender and age for any of the three pineal gland parameters (width, height, and area). Linear regression analysis gave 99% upper prediction bounds of 7.9 mm, 4.8 mm, and 25.4 mm², respectively, for width, height, and area. The slopes (size increase per month) of each parameter were 0.046 mm, 0.023 mm, and 0.202 mm², respectively. Ninety-three percent (95% CI 66-100%) of asymptomatic solid pineoblastomas were larger in size than the 99% upper bound. This study establishes norms for solid pineal gland size in non-retinoblastoma children aged 0-5 years. Knowledge of the size of the normal pineal gland is helpful for detection of pineal gland abnormalities, particularly pineoblastoma.
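The normal-range construction described above (linear regression plus a 99% prediction interval) can be reproduced with standard formulas; a sketch on synthetic stand-in data:

```python
# Fit size ~ age, then report the 99% upper prediction bound at a
# query age. The data here are synthetic, not the study's cohort.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
age = rng.uniform(0, 60, 184)                              # months
width = 4.0 + 0.046 * age + rng.normal(0, 0.9, age.size)   # mm, synthetic

n = age.size
slope, intercept, *_ = stats.linregress(age, width)
resid = width - (intercept + slope * age)
s = np.sqrt(resid @ resid / (n - 2))           # residual standard error

x0 = 36.0                                      # query age, months
sxx = ((age - age.mean())**2).sum()
se_pred = s * np.sqrt(1 + 1/n + (x0 - age.mean())**2 / sxx)
t99 = stats.t.ppf(0.995, df=n - 2)             # two-sided 99% interval
upper = intercept + slope * x0 + t99 * se_pred
print(f"99% upper prediction bound for width at {x0:.0f} months: {upper:.1f} mm")
```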
Formation of eyes in large-scale cyclonic vortices
NASA Astrophysics Data System (ADS)
Oruba, L.; Davidson, P. A.; Dormy, E.
2018-01-01
We present numerical simulations of steady, laminar, axisymmetric convection of a Boussinesq fluid in a shallow, rotating, cylindrical domain. The flow is driven by an imposed vertical heat flux and shaped by the background rotation of the domain. The geometry is inspired by that of tropical cyclones and the global flow pattern consists of a shallow swirling vortex combined with a poloidal flow in the r-z plane which is predominantly inward near the bottom boundary and outward along the upper surface. Our numerical experiments confirm that, as suggested in our recent work [L. Oruba et al., J. Fluid Mech. 812, 890 (2017), 10.1017/jfm.2016.846], an eye forms at the center of the vortex which is reminiscent of that seen in a tropical cyclone and is characterized by a local reversal in the direction of the poloidal flow. We establish scaling laws for the flow and map out the conditions under which an eye will, or will not, form. We show that, to leading order, the velocity scales with V = (αgβ)^{1/2} H, where g is gravity, α is the expansion coefficient, β is the background temperature gradient, and H is the depth of the domain. We also show that the two most important parameters controlling the flow are Re = VH/ν and Ro = V/(ΩH), where Ω is the background rotation rate and ν the viscosity. The Prandtl number and aspect ratio also play an important, if secondary, role. Finally, and most importantly, we establish the criteria required for eye formation. These consist of a lower bound on Re, upper and lower bounds on Ro, and an upper bound on the Ekman number.
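The quoted scalings are simple to evaluate; a hedged helper with illustrative parameter values (not those of the simulations):

```python
# Evaluate the scaling laws stated in the abstract: V = (alpha*g*beta)^(1/2)*H,
# Re = V*H/nu, Ro = V/(Omega*H), plus the Ekman number Ek = nu/(Omega*H^2).
import math

def eye_formation_numbers(alpha, g, beta, H, nu, Omega):
    V = math.sqrt(alpha * g * beta) * H   # velocity scale
    Re = V * H / nu                       # Reynolds number
    Ro = V / (Omega * H)                  # Rossby number
    Ek = nu / (Omega * H**2)              # Ekman number
    return V, Re, Ro, Ek

# e.g. a shallow rotating layer (all values illustrative)
V, Re, Ro, Ek = eye_formation_numbers(alpha=2e-4, g=9.8, beta=0.1,
                                      H=1.0, nu=1e-4, Omega=0.5)
print(f"V={V:.3e} m/s, Re={Re:.1f}, Ro={Ro:.3f}, Ek={Ek:.2e}")
```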