UPPER BOUND RISK ESTIMATES FOR MIXTURES OF CARCINOGENS
The excess cancer risk that might result from exposure to a mixture of chemical carcinogens is usually estimated with data from experiments conducted on individual chemicals. An upper bound on the total excess risk is commonly estimated by summing individual upper bound risk esti...
Estimation variance bounds of importance sampling simulations in digital communication systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
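The variance-reduction payoff that motivates such bounds can be illustrated with a toy tail-probability example (a minimal sketch, not the authors' derivation): direct Monte Carlo versus importance sampling with a mean-shifted proposal, where the shift plays the role of the IS parameter one would tune against an upper bound on the estimator variance.

```python
import numpy as np

# Toy illustration (not the paper's bounds): estimate P(X > t) for X ~ N(0, 1)
# by direct Monte Carlo and by importance sampling with a mean-shifted proposal.
rng = np.random.default_rng(0)
t, n, theta = 4.0, 100_000, 4.0    # threshold, sample size, IS parameter (proposal mean)

# Direct Monte Carlo
x = rng.normal(0.0, 1.0, n)
direct = (x > t).astype(float)

# Importance sampling: sample from N(theta, 1), reweight by the likelihood ratio
y = rng.normal(theta, 1.0, n)
w = np.exp(-theta * y + 0.5 * theta**2)        # phi(y) / phi_theta(y)
importance = (y > t) * w

for name, est in [("direct MC", direct), ("importance sampling", importance)]:
    print(f"{name:20s} mean = {est.mean():.3e}   variance of estimator = {est.var() / n:.3e}")
# The IS estimator's variance is orders of magnitude smaller; sweeping theta and
# minimizing an upper bound on this variance is the spirit of the cited approach.
```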
Sun, Wei; Chou, Chih-Ping; Stacy, Alan W; Ma, Huiyan; Unger, Jennifer; Gallaher, Peggy
2007-02-01
Cronbach's α is widely used in social science research to estimate the internal consistency reliability of a measurement scale. However, when items are not strictly parallel, the Cronbach's α coefficient provides a lower-bound estimate of true reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach's α for a scale with dichotomous items can be improved by using the upper bound of the coefficient phi. SAS and SPSS macros have been developed in this article to obtain standardized Cronbach's α via this method. The simulation analysis showed that Cronbach's α from upper-bound phi might be appropriate for estimating the real reliability when standardized Cronbach's α is problematic.
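A minimal sketch of the general idea as we read it (not the article's SAS/SPSS macros): correct each pairwise phi coefficient by the maximum phi attainable given the item difficulties, then plug the corrected mean inter-item correlation into the standardized alpha formula. The simulated data and thresholds below are hypothetical.

```python
import numpy as np

def standardized_alpha_phi_upper(items):
    """items: (n_subjects, k_items) binary array. Sketch of alpha from phi / phi_max."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    p = items.mean(axis=0)                      # item difficulties (proportion endorsing)
    r = np.corrcoef(items, rowvar=False)        # Pearson r on 0/1 items equals phi
    corrected = []
    for i in range(k):
        for j in range(i + 1, k):
            lo, hi = min(p[i], p[j]), max(p[i], p[j])
            phi_max = np.sqrt(lo * (1 - hi) / (hi * (1 - lo)))   # max attainable phi
            corrected.append(r[i, j] / phi_max)
    r_bar = np.mean(corrected)
    return k * r_bar / (1 + (k - 1) * r_bar)    # standardized Cronbach's alpha

# Hypothetical demo: six dichotomous items driven by one common trait
rng = np.random.default_rng(1)
trait = rng.normal(size=(500, 1))
noise = rng.normal(size=(500, 6))
thresholds = np.array([-0.5, -0.2, 0.0, 0.2, 0.5, 0.8])
demo = ((trait + noise) > thresholds).astype(int)
print(f"standardized alpha via upper-bound phi: {standardized_alpha_phi_upper(demo):.3f}")
```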
Finite-error metrological bounds on multiparameter Hamiltonian estimation
NASA Astrophysics Data System (ADS)
Kura, Naoto; Ueda, Masahito
2018-01-01
Estimation of multiple parameters in an unknown Hamiltonian is investigated. We present upper and lower bounds on the time required to complete the estimation within a prescribed error tolerance δ . The lower bound is given on the basis of the Cramér-Rao inequality, where the quantum Fisher information is bounded by the squared evolution time. The upper bound is obtained by an explicit construction of estimation procedures. By comparing the cases with different numbers of Hamiltonian channels, we also find that the few-channel procedure with adaptive feedback and the many-channel procedure with entanglement are equivalent in the sense that they require the same amount of time resource up to a constant factor.
Upper and lower bounds for the speed of pulled fronts with a cut-off
NASA Astrophysics Data System (ADS)
Benguria, R. D.; Depassier, M. C.; Loss, M.
2008-02-01
We establish rigorous upper and lower bounds for the speed of pulled fronts with a cut-off. For all reaction terms of KPP type a simple analytic upper bound is given. The lower bounds however depend on details of the reaction term. For a small cut-off parameter the two leading order terms in the asymptotic expansion of the upper and lower bounds coincide and correspond to the Brunet-Derrida formula. For large cut-off parameters the bounds do not coincide and permit a simple estimation of the speed of the front.
Eren, Metin I.; Chao, Anne; Hwang, Wen-Han; Colwell, Robert K.
2012-01-01
Background Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded: assuming only a lower bound of species or classes. Nevertheless, there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. Methodology/Principal Findings Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. Conclusions/Significance Application of the new method for constructing doubly-bounded richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race). PMID:22666316
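A small numerical sketch of the doubly-bounded idea (the paper's confidence-interval construction is more involved): compute the bias-corrected Chao1 point estimate from singleton and doubleton counts, then impose the known maximum number of classes as a hard upper bound. The sample counts are hypothetical.

```python
from collections import Counter

def chao1_doubly_bounded(abundances, max_classes):
    """Bias-corrected Chao1 estimate, truncated at a known upper bound on class richness.
    abundances: per-class counts observed in the sample (zeros omitted)."""
    counts = Counter(abundances)
    s_obs = len(abundances)
    f1, f2 = counts.get(1, 0), counts.get(2, 0)      # singletons and doubletons
    chao1 = s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))   # bias-corrected Chao1 estimator
    return min(chao1, max_classes)                   # impose the known upper bound

# Hypothetical sample: 12 observed classes, many singletons, known maximum of 15 classes
sample = [9, 7, 5, 3, 2, 2, 1, 1, 1, 1, 1, 1]
print(chao1_doubly_bounded(sample, max_classes=15))  # unbounded Chao1 would exceed 15
```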
An evaluation of risk estimation procedures for mixtures of carcinogens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, J.S.; Chen, J.J.
1999-12-01
The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on the combination of information from several available single compound studies. The current practice of directly summing the upper bound risk estimates of individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of individual carcinogens. The Gaylor-Chen procedure was derived based on an underlying assumption of normality for the distributions of individual risk estimates. In this paper the authors evaluated the Gaylor-Chen approach in terms of the coverage of the upper confidence limits on the true risks of individual carcinogens. In general, if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, then the Gaylor-Chen approach should perform well. However, the Gaylor-Chen approach can be conservative or anti-conservative if some or all individual upper confidence limit estimates are conservative or anti-conservative.
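Under the normality assumption just described, the Gaylor-Chen-style combination can be sketched as follows (our reconstruction, hedged): back out each compound's margin from its upper confidence limit, combine the margins in quadrature, and add the result to the summed central estimates, which is less conservative than summing the individual upper limits outright. All numbers are hypothetical.

```python
import numpy as np

def gaylor_chen_upper_bound(central, ucl):
    """Sketch of a Gaylor-Chen-style upper bound on total mixture risk.
    central: central (point) risk estimates of the individual carcinogens
    ucl:     corresponding upper confidence limits at the same confidence level"""
    central, ucl = np.asarray(central, float), np.asarray(ucl, float)
    margins = ucl - central                        # z * SE_i under the normality assumption
    combined_margin = np.sqrt(np.sum(margins**2))  # z * sqrt(sum of SE_i^2)
    return central.sum() + combined_margin

central = [1e-6, 2e-6, 5e-7]   # hypothetical individual central risk estimates
ucl     = [3e-6, 5e-6, 2e-6]   # hypothetical individual upper confidence limits
print("sum of individual UCLs :", sum(ucl))                       # conventional, conservative
print("Gaylor-Chen-style bound:", gaylor_chen_upper_bound(central, ucl))
```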
Global a priori estimates for the inhomogeneous Landau equation with moderately soft potentials
NASA Astrophysics Data System (ADS)
Cameron, Stephen; Silvestre, Luis; Snelson, Stanley
2018-05-01
We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.
Upper-Bound Estimates Of SEU in CMOS
NASA Technical Reports Server (NTRS)
Edmonds, Larry D.
1990-01-01
Theory of single-event upsets (SEU) (changes in logic state caused by energetic charged subatomic particles) in complementary metal oxide/semiconductor (CMOS) logic devices extended to provide upper-bound estimates of rates of SEU when limited experimental information available and configuration and dimensions of SEU-sensitive regions of devices unknown. Based partly on chord-length-distribution method.
NASA Astrophysics Data System (ADS)
Thelen, Brian J.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.
2017-04-01
In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as Chernoff, Bhattacharyya, and J-divergence. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general what is desired is a methodology for generating relatively tight lower and upper bounds, and then an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has been around for a while but was recently made more prominent in research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data, and to help in this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on the Bayes error. In this paper we present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models, the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, often dramatically so. We also propose some simple but effective means for computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
Edge connectivity and the spectral gap of combinatorial and quantum graphs
NASA Astrophysics Data System (ADS)
Berkolaiko, Gregory; Kennedy, James B.; Kurasov, Pavel; Mugnolo, Delio
2017-09-01
We derive a number of upper and lower bounds for the first nontrivial eigenvalue of Laplacians on combinatorial and quantum graphs in terms of the edge connectivity, i.e. the minimal number of edges which need to be removed to make the graph disconnected. On combinatorial graphs, one of the bounds corresponds to a well-known inequality of Fiedler, of which we give a new variational proof. On quantum graphs, the corresponding bound generalizes a recent result of Band and Lévy. All proofs are general enough to yield corresponding estimates for the p-Laplacian and allow us to identify the minimizers. Based on the Betti number of the graph, we also derive upper and lower bounds on all eigenvalues which are ‘asymptotically correct’, i.e. agree with the Weyl asymptotics for the eigenvalues of the quantum graph. In particular, the lower bounds improve the bounds of Friedlander on any given graph for all but finitely many eigenvalues, while the upper bounds improve recent results of Ariturk. Our estimates are also used to derive bounds on the eigenvalues of the normalized Laplacian matrix that improve known bounds of spectral graph theory.
Noisy metrology: a saturable lower bound on quantum Fisher information
NASA Astrophysics Data System (ADS)
Yousefjani, R.; Salimi, S.; Khorashad, A. S.
2017-06-01
In order to provide a guaranteed precision and a more accurate judgement about the true value of the Cramér-Rao bound and its scaling behavior, an upper bound (equivalently a lower bound on the quantum Fisher information) for the precision of estimation is introduced. Unlike the bounds previously introduced in the literature, the upper bound is saturable and yields a practical instruction to estimate the parameter through preparing the optimal initial state and optimal measurement. The bound is based on the underlying dynamics, and its calculation is straightforward and requires only the matrix representation of the quantum maps responsible for encoding the parameter. This allows us to apply the bound to open quantum systems whose dynamics are described by either semigroup or non-semigroup maps. Reliability and efficiency of the method to predict the ultimate precision limit are demonstrated by three main examples.
Efficient Regressions via Optimally Combining Quantile Information*
Zhao, Zhibiao; Xiao, Zhijie
2014-01-01
We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481
Computational micromechanics of woven composites
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Saigal, Sunil; Zeng, Xiaogang
1991-01-01
The bounds on the equivalent elastic material properties of a composite are presently addressed by a unified energy approach which is valid for both unidirectional and 2D and 3D woven composites. The unit cell considered is assumed to consist, first, of the actual composite arrangement of the fibers and matrix material, and then, of an equivalent pseudohomogeneous material. Equating the strain energies due to the two arrangements yields an estimate of the upper bound for the material equivalent properties; successive increases in the order of displacement field that is assumed in the composite arrangement will successively produce improved upper bound estimates.
Toward allocative efficiency in the prescription drug industry.
Guell, R C; Fischbaum, M
1995-01-01
Traditionally, monopoly power in the pharmaceutical industry has been measured by profits. An alternative method estimates the deadweight loss of consumer surplus associated with the exercise of monopoly power. Although upper and lower bound estimates for this inefficiency are far apart, they at least suggest a dramatically greater welfare loss than measures of industry profitability would imply. A proposed system would have the U.S. government employing its power of eminent domain to "take" and distribute pharmaceutical patents, providing as "just compensation" the present value of the patent's expected future monopoly profits. Given the allocative inefficiency of raising taxes to pay for the program, the impact of the proposal on allocative efficiency would be at least as good as our lower bound estimate of monopoly costs while substantially improving efficiency at or near our upper bound estimate.
The Economic Cost of Methamphetamine Use in the United States, 2005
ERIC Educational Resources Information Center
Nicosia, Nancy; Pacula, Rosalie Liccardo; Kilmer, Beau; Lundberg, Russell; Chiesa, James
2009-01-01
This first national estimate suggests that the economic cost of methamphetamine (meth) use in the United States reached $23.4 billion in 2005. Given the uncertainty in estimating the costs of meth use, this book provides a lower-bound estimate of $16.2 billion and an upper-bound estimate of $48.3 billion. The analysis considers a wide range of…
Quantum State Tomography via Linear Regression Estimation
Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan
2013-01-01
A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d^4), where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
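For a single qubit, the LRE idea reduces to ordinary least squares on the Bloch vector; the sketch below uses hypothetical measurement directions and shot counts and omits the MSE-bound and positivity-projection steps treated in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
# True single-qubit state given by its Bloch vector r_true (|r| <= 1)
r_true = np.array([0.3, -0.5, 0.6])

# Informationally over-complete measurement directions: +/- x, y, z axes
dirs = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
N = 2000                                     # shots per direction (hypothetical)
p_true = 0.5 * (1 + dirs @ r_true)           # Born-rule outcome probabilities
freqs = rng.binomial(N, p_true) / N          # simulated measured frequencies

# Linear regression model: freqs ~ 0.5 + 0.5 * dirs @ r, solved by least squares
A = 0.5 * dirs
r_hat, *_ = np.linalg.lstsq(A, freqs - 0.5, rcond=None)
print("true Bloch vector      :", r_true)
print("LRE (least-squares) fit:", np.round(r_hat, 3))
```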
Chandon, Pierre; Ordabayeva, Nailya
2017-02-01
Five studies show that people, including experts such as professional chefs, estimate quantity decreases more accurately than quantity increases. We argue that this asymmetry occurs because physical quantities cannot be negative. Consequently, there is a natural lower bound (zero) when estimating decreasing quantities but no upper bound when estimating increasing quantities, which can theoretically grow to infinity. As a result, the "accuracy of less" disappears (a) when a numerical or a natural upper bound is present when estimating quantity increases, or (b) when people are asked to estimate the (unbounded) ratio of change from 1 size to another for both increasing and decreasing quantities. Ruling out explanations related to loss aversion, symbolic number mapping, and the visual arrangement of the stimuli, we show that the "accuracy of less" influences choice and demonstrate its robustness in a meta-analysis that includes previously published results. Finally, we discuss how the "accuracy of less" may explain asymmetric reactions to the supersizing and downsizing of food portions, some instances of the endowment effect, and asymmetries in the perception of increases and decreases in physical and psychological distance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
ERIC Educational Resources Information Center
Kim, Seonghoon; Feldt, Leonard S.
2010-01-01
The primary purpose of this study is to investigate the mathematical characteristics of the test reliability coefficient ρ_XX' as a function of item response theory (IRT) parameters and present the lower and upper bounds of the coefficient. Another purpose is to examine relative performances of the IRT reliability statistics and two…
Evidence for a bound on the lifetime of de Sitter space
NASA Astrophysics Data System (ADS)
Freivogel, Ben; Lippert, Matthew
2008-12-01
Recent work has suggested a surprising new upper bound on the lifetime of de Sitter vacua in string theory. The bound is parametrically longer than the Hubble time but parametrically shorter than the recurrence time. We investigate whether the bound is satisfied in a particular class of de Sitter solutions, the KKLT vacua. Despite the freedom to make the supersymmetry breaking scale exponentially small, which naively would lead to extremely stable vacua, we find that the lifetime is always less than about exp(10^22) Hubble times, in agreement with the proposed bound. This result, however, is contingent on several estimates and assumptions; in particular, we rely on a conjectural upper bound on the Euler number of the Calabi-Yau fourfolds used in KKLT compactifications.
Quijano, Leyre; Yusà, Vicent; Font, Guillermina; McAllister, Claudia; Torres, Concepción; Pardo, Olga
2017-02-01
This study was carried out to determine current levels of nitrate in vegetables marketed in the Region of Valencia (Spain) and to estimate the toxicological risk associated with their intake. A total of 533 samples of seven vegetable species were studied. Nitrate levels were derived from the Valencia Region monitoring programme carried out from 2009 to 2013 and food consumption levels were taken from the first Valencia Food Consumption Survey, conducted in 2010. The exposure was estimated using a probabilistic approach and two scenarios were assumed for left-censored data: the lower-bound scenario, in which unquantified results (below the limit of quantification) were set to zero, and the upper-bound scenario, in which unquantified results were set to the limit of quantification value. The exposure of the Valencia consumers to nitrate through the consumption of vegetable products appears to be relatively low. In the adult population (16-95 years) the P99.9 was 3.13 mg kg⁻¹ body weight day⁻¹ and 3.15 mg kg⁻¹ body weight day⁻¹ in the lower-bound and upper-bound scenarios, respectively. On the other hand, for young people (6-15 years) the P99.9 of the exposure was 4.20 mg kg⁻¹ body weight day⁻¹ and 4.40 mg kg⁻¹ body weight day⁻¹ in the lower-bound and upper-bound scenarios, respectively. The risk characterisation indicates that, under the upper-bound scenario, 0.79% of adults and 1.39% of young people can exceed the Acceptable Daily Intake of nitrate. This percentage could be higher among extreme consumers of vegetables (such as vegetarians). Overall, the estimated exposures to nitrate from vegetables are unlikely to result in appreciable health risks. Copyright © 2016 Elsevier Ltd. All rights reserved.
A note on the upper bound of the spectral radius for SOR iteration matrix
NASA Astrophysics Data System (ADS)
Chang, D.-W.
2004-05-01
Recently, Wang and Huang (J. Comput. Appl. Math. 135 (2001) 325, Corollary 4.7) established the following estimation on the upper bound of the spectral radius for the successive overrelaxation (SOR) iteration matrix: ρ_SOR ≤ 1 − ω + ω·ρ_GS, under the condition that the coefficient matrix A is a nonsingular M-matrix and ω ≥ 1, where ρ_SOR and ρ_GS are the spectral radii of the SOR iteration matrix and the Gauss-Seidel iteration matrix, respectively. In this note, we would like to point out that the above estimation is not valid in general.
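The comparison can be probed numerically (a sketch for a particular matrix, not the note's analytic counterexample): build the Gauss-Seidel and SOR iteration matrices for a given A and ω ≥ 1 and compare ρ_SOR against 1 − ω + ω·ρ_GS.

```python
import numpy as np

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

def gs_sor_radii(A, omega):
    """Spectral radii of the Gauss-Seidel and SOR iteration matrices for A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)          # strictly lower part, negated
    U = -np.triu(A, 1)           # strictly upper part, negated
    T_gs = np.linalg.solve(D - L, U)
    T_sor = np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)
    return spectral_radius(T_gs), spectral_radius(T_sor)

# Example: a nonsingular M-matrix and an over-relaxation parameter omega >= 1
A = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  4.0, -1.0],
              [-1.0, -1.0,  4.0]])
omega = 1.2
rho_gs, rho_sor = gs_sor_radii(A, omega)
print(f"rho_SOR = {rho_sor:.4f},  claimed bound 1 - w + w*rho_GS = {1 - omega + omega * rho_gs:.4f}")
```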
A tight upper bound for quadratic knapsack problems in grid-based wind farm layout optimization
NASA Astrophysics Data System (ADS)
Quan, Ning; Kim, Harrison M.
2018-03-01
The 0-1 quadratic knapsack problem (QKP) in wind farm layout optimization models possible turbine locations as nodes, and power loss due to wake effects between pairs of turbines as edges in a complete graph. The goal is to select up to a certain number of turbine locations such that the sum of selected node and edge coefficients is maximized. Finding the optimal solution to the QKP is difficult in general, but it is possible to obtain a tight upper bound on the QKP's optimal value which facilitates the use of heuristics to solve QKPs by giving a good estimate of the optimality gap of any feasible solution. This article applies an upper bound method that is especially well-suited to QKPs in wind farm layout optimization due to certain features of the formulation that reduce the computational complexity of calculating the upper bound. The usefulness of the upper bound was demonstrated by assessing the performance of the greedy algorithm for solving QKPs in wind farm layout optimization. The results show that the greedy algorithm produces good solutions within 4% of the optimal value for small to medium sized problems considered in this article.
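A minimal sketch of the greedy heuristic whose optimality gap the upper bound is used to assess (illustrative data; the bound computation itself is not reproduced here): repeatedly add the location with the largest marginal gain, counting its node coefficient plus edge coefficients to already-selected locations, until the cardinality limit is reached.

```python
import numpy as np

def greedy_qkp(node_value, edge_value, k):
    """Greedy heuristic for a 0-1 quadratic knapsack with a cardinality constraint.
    node_value: (n,) node coefficients; edge_value: (n, n) symmetric pairwise coefficients."""
    n = len(node_value)
    selected, remaining = [], set(range(n))
    while len(selected) < k and remaining:
        gains = {i: node_value[i] + sum(edge_value[i, j] for j in selected) for i in remaining}
        best = max(gains, key=gains.get)          # location with the largest marginal gain
        selected.append(best)
        remaining.remove(best)
    total = sum(node_value[i] for i in selected) + \
            sum(edge_value[i, j] for idx, i in enumerate(selected) for j in selected[idx + 1:])
    return selected, total

# Hypothetical instance: 10 candidate turbine locations, negative edge terms for wake losses
rng = np.random.default_rng(3)
values = rng.uniform(1.0, 2.0, 10)
wake = -rng.uniform(0.0, 0.3, (10, 10))
wake = (wake + wake.T) / 2
print(greedy_qkp(values, wake, k=4))
```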
Upper and lower bounds of ground-motion variabilities: implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, Fabrice; Reddy-Kotha, Sreeram; Bora, Sanjay; Bindi, Dino
2017-04-01
One of the key challenges of seismology is to be able to analyse the physical factors that control earthquakes and ground-motion variabilities. Such analysis is particularly important to calibrate physics-based simulations and seismic hazard estimations at high frequencies. Within the framework of ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-source records and modern GMPE analysis techniques allow these residuals to be partitioned into between-event and within-event components. In particular, the between-event term quantifies all those repeatable source effects (e.g. related to stress-drop or kappa-source variability) which have not been accounted for by the magnitude-dependent term of the model. In this presentation, we first discuss the between-event variabilities computed both in the Fourier and response-spectra domains, using recent high-quality global accelerometric datasets (e.g. NGA-West2, RESORCE, KiK-net). These analyses lead to the assessment of upper bounds for the ground-motion variability. Then, we compare these upper bounds with lower bounds estimated by analysing seismic sequences which occurred on specific fault systems (e.g., located in Central Italy or in Japan). We show that the lower bounds of between-event variabilities are surprisingly large, which indicates a large variability of earthquake dynamic properties even within the same fault system. Finally, these upper and lower bounds of ground-shaking variability are discussed in terms of the variability of earthquake physical properties (e.g., stress-drop and kappa-source).
Fisher information of a single qubit interacts with a spin-qubit in the presence of a magnetic field
NASA Astrophysics Data System (ADS)
Metwally, N.
2018-06-01
In this contribution, quantum Fisher information is utilized to estimate the parameters of a central qubit interacting with a single-spin qubit. The effect of the longitudinal, transverse and rotating strengths of the magnetic field on the estimation degree is discussed. It is shown that, in the resonance case, the number of peaks and consequently the size of the estimation regions increase as the rotating magnetic field strength increases. The precision of estimation of the central qubit parameters depends on the initial state settings of the central and the spin qubit, namely whether they encode classical or quantum information. It is shown that the upper bounds of the estimation degree are large if the two qubits encode classical information. In the non-resonance case, the estimation degree depends on which of the longitudinal/transverse strengths is larger. The coupling constant between the central qubit and the spin qubit has a different effect on the estimation degree of the weight and phase parameters: the possibility of estimating the weight parameter decreases as the coupling constant increases, while it increases for the phase parameter. For a large number of spin particles, namely a spin bath, the upper bounds of the Fisher information with respect to the weight parameter of the central qubit decrease as the number of spin particles increases. As the interaction time increases, the upper bounds appear at different initial values of the weight parameter.
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Ulbrich, N.; L'Esperance, A.
2017-01-01
A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.
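As a generic illustration of why the load-measurement term can dominate (a toy error-propagation sketch with hypothetical numbers, not the paper's balance-output analysis): for C_D = D/(q·S), independent relative errors in the measured drag and the dynamic pressure combine in quadrature.

```python
import math

def drag_coefficient_precision(CD, dD_over_D, dq_over_q):
    """Toy propagation: precision error of CD = D / (q * S) when the drag measurement
    and the dynamic-pressure measurement carry independent relative errors."""
    return CD * math.sqrt(dD_over_D**2 + dq_over_q**2)

# Hypothetical numbers: CD = 0.025, 0.4 % load-measurement error, 0.05 % dynamic-pressure error
print(f"upper-bound precision error: {drag_coefficient_precision(0.025, 0.004, 0.0005):.2e}")
# The load term dominates, consistent with the qualitative conclusion of the abstract.
```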
Biodegradation kinetics for pesticide exposure assessment.
Wolt, J D; Nelson, H P; Cleveland, C B; van Wesenbeeck, I J
2001-01-01
Understanding pesticide risks requires characterizing pesticide exposure within the environment in a manner that can be broadly generalized across widely varied conditions of use. The coupled processes of sorption and soil degradation are especially important for understanding the potential environmental exposure of pesticides. The data obtained from degradation studies are inherently variable and, when limited in extent, lend uncertainty to exposure characterization and risk assessment. Pesticide decline in soils reflects dynamically coupled processes of sorption and degradation that add complexity to the treatment of soil biodegradation data from a kinetic perspective. Additional complexity arises from study design limitations that may not fully account for the decline in microbial activity of test systems, or that may be inadequate for considerations of all potential dissipation routes for a given pesticide. Accordingly, kinetic treatment of data must accommodate a variety of differing approaches starting with very simple assumptions as to reaction dynamics and extending to more involved treatments if warranted by the available experimental data. Selection of the appropriate kinetic model to describe pesticide degradation should rely on statistical evaluation of the data fit to ensure that the models used are not overparameterized. Recognizing the effects of experimental conditions and methods for kinetic treatment of degradation data is critical for making appropriate comparisons among pesticide biodegradation data sets. Assessment of variability in soil half-life among soils is uncertain because for many pesticides the data on soil degradation rate are limited to one or two soils. Reasonable upper-bound estimates of soil half-life are necessary in risk assessment so that estimated environmental concentrations can be developed from exposure models. Thus, an understanding of the variable and uncertain distribution of soil half-lives in the environment is necessary to estimate bounding values. Statistical evaluation of measures of central tendency for multisoil kinetic studies shows that geometric means better represent the distribution in soil half-lives than do the arithmetic or harmonic means. Estimates of upper-bound soil half-life values based on the upper 90% confidence bound on the geometric mean tend to accurately represent the upper bound when pesticide degradation rate is biologically driven but appear to overestimate the upper bound when there is extensive coupling of biodegradation with sorptive processes. The limited data available comparing distribution in pesticide soil half-lives between multisoil laboratory studies and multilocation field studies suggest that the probability density functions are similar. Thus, upper-bound estimates of pesticide half-life determined from laboratory studies conservatively represent pesticide biodegradation in the field environment for the purposes of exposure and risk assessment. International guidelines and approaches used for interpretations of soil biodegradation reflect many common elements, but differ in how the source and nature of variability in soil kinetic data are considered. Harmonization of approaches for the use of soil biodegradation data will improve the interpretative power of these data for the purposes of exposure and risk assessment.
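The recommended summary statistics are straightforward to compute (a minimal sketch under the log-normal treatment implied by the use of geometric means; the half-life values below are hypothetical): the geometric mean and a one-sided upper 90% confidence bound on it.

```python
import numpy as np
from scipy import stats

def half_life_summary(half_lives, confidence=0.90):
    """Geometric mean of soil half-lives and a one-sided upper confidence bound on it."""
    logs = np.log(np.asarray(half_lives, float))
    n = len(logs)
    gm = np.exp(logs.mean())
    t_crit = stats.t.ppf(confidence, df=n - 1)
    upper = np.exp(logs.mean() + t_crit * logs.std(ddof=1) / np.sqrt(n))
    return gm, upper

half_lives_days = [12.0, 20.0, 35.0, 8.0, 50.0]   # hypothetical multi-soil study
gm, upper = half_life_summary(half_lives_days)
print(f"geometric mean = {gm:.1f} d, upper 90% bound on the geometric mean = {upper:.1f} d")
```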
Length bounds for connecting discharges in triggered lightning subsequent strokes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idone, V.P.
1990-11-20
Highly time resolved streak recordings from nine subsequent strokes in four triggered flashes have been examined for evidence of the occurrence of upward connecting discharges. These photographic recordings were obtained with superior spatial and temporal resolution (0.3 m and 0.5 μs) and were examined with a video image analysis system to help delineate the separate leader and return stroke image tracks. Unfortunately, a definitive determination of the occurrence of connecting discharges in these strokes could not be made. The data did allow various determinations of an upper bound length for any possible connecting discharge in each stroke. Under the simplest analysis approach possible, an 'absolute' upper bound set of lengths was measured that ranged from 12 to 27 m with a mean of 19 m; two other more involved analyses yielded arguably better upper bound estimates of 8-18 m and 7-26 m with means of 12 and 13 m, respectively. An additional set of low time-resolution telephoto recordings of the lowest few meters of channel revealed six strokes in these flashes with one or more upward unconnected channels originating from the lightning rod tip. The maximum length of unconnected channel seen in each of these strokes ranged from 0.2 to 1.6 m with a mean of 0.7 m. This latter set of observations is interpreted as indirect evidence that connecting discharges did occur in these strokes and that the lower bound for their length is about 1 m.
NASA Astrophysics Data System (ADS)
Stolzenburg, Maribeth; Marshall, Thomas C.; Karunarathne, Sumedhe; Orville, Richard E.
2018-10-01
Using video data recorded at 50,000 frames per second for nearby negative lightning flashes, estimates are derived for the length of positive upward connecting leaders (UCLs) that presumably formed prior to new ground attachments. Return strokes were 1.7 to 7.8 km distant, yielding image resolutions of 4.25 to 19.5 m. No UCLs are imaged in these data, indicating those features were too transient or too dim compared to other lightning processes that are imaged at these resolutions. Upper bound lengths for 17 presumed UCLs are determined from the height above flat ground or water of the successful stepped leader tip in the image immediately prior to (within 20 μs before) the return stroke. Better estimates of maximum UCL lengths are determined using the downward stepped leader tip's speed of advance and the estimated return stroke time within its first frame. For 17 strokes, the upper bound length of the possible UCL averages 31.6 m and ranges from 11.3 to 50.3 m. Among the close strokes (those with spatial resolution <8 m per pixel), the five which connected to water (salt water lagoon) have UCL upper bound estimates averaging significantly shorter (24.1 m) than the average for the three close strokes which connected to land (36.9 m). The better estimates of maximum UCL lengths for the eight close strokes average 20.2 m, with a slightly shorter average of 18.3 m for the five that connected to water. All the better estimates of UCL maximum lengths are <38 m in this dataset.
Interpolation Inequalities and Spectral Estimates for Magnetic Operators
NASA Astrophysics Data System (ADS)
Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael
2018-05-01
We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.
Estimates on Functional Integrals of Quantum Mechanics and Non-relativistic Quantum Field Theory
NASA Astrophysics Data System (ADS)
Bley, Gonzalo A.; Thomas, Lawrence E.
2017-01-01
We provide a unified method for obtaining upper bounds for certain functional integrals appearing in quantum mechanics and non-relativistic quantum field theory, functionals of the form E[exp(A_T)], the (effective) action A_T being a function of particle trajectories up to time T. The estimates in turn yield rigorous lower bounds for ground state energies, via the Feynman-Kac formula. The upper bounds are obtained by writing the action for these functional integrals in terms of stochastic integrals. The method is illustrated in familiar quantum mechanical settings: for the hydrogen atom, for a Schrödinger operator with 1/|x|^2 potential with small coupling, and, with a modest adaptation of the method, for the harmonic oscillator. We then present our principal applications of the method, in the settings of non-relativistic quantum field theories for particles moving in a quantized Bose field, including the optical polaron and Nelson models.
Stochastic parameter estimation in nonlinear time-delayed vibratory systems with distributed delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-07-01
The stochastic estimation of parameters and states in linear and nonlinear time-delayed vibratory systems with distributed delay is explored. The approach consists of first employing a continuous time approximation to approximate the delayed integro-differential system with a large set of ordinary differential equations having stochastic excitations. Then the problem of state and parameter estimation in the resulting stochastic ordinary differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the augmented filtering problem, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states. Similarly, the upper bound of the distributed delay can also be estimated by the proposed technique. As an illustrative example to a practical problem in vibrations, the parameter, delay upper bound, and state estimation from noise-corrupted measurements in a distributed force model widely used for modeling machine tool vibrations in the turning operation is investigated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tyson, Jon
2009-06-15
Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.
A one-dimensional model of solid-earth electrical resistivity beneath Florida
Blum, Cletus; Love, Jeffrey J.; Pedrie, Kolby; Bedrosian, Paul A.; Rigler, E. Joshua
2015-11-19
An estimated one-dimensional layered model of electrical resistivity beneath Florida was developed from published geological and geophysical information. The resistivity of each layer is represented by plausible upper and lower bounds as well as a geometric mean resistivity. Corresponding impedance transfer functions, Schmucker-Weidelt transfer functions, apparent resistivity, and phase responses are calculated for inducing geomagnetic frequencies ranging from 10⁻⁵ to 10⁰ hertz. The resulting one-dimensional model and response functions can be used to make general estimates of time-varying electric fields associated with geomagnetic storms such as might represent induction hazards for electric-power grid operation. The plausible upper- and lower-bound resistivity structures show the uncertainty, giving a wide range of plausible time-varying electric fields.
NASA Astrophysics Data System (ADS)
Wang, Fan; Liang, Jinling; Dobaie, Abdullah M.
2018-07-01
The resilient filtering problem is considered for a class of time-varying networks with stochastic coupling strengths. An event-triggered strategy is adopted to save the network resources by scheduling the signal transmission from the sensors to the filters based on certain prescribed rules. Moreover, the filter parameters to be designed are subject to gain perturbations. The primary aim of the addressed problem is to determine a resilient filter that ensures an acceptable filtering performance for the considered network with event-triggering scheduling. To handle such an issue, an upper bound on the estimation error variance is established for each node according to the stochastic analysis. Subsequently, the resilient filter is designed by locally minimizing the derived upper bound at each iteration. Moreover, rigorous analysis shows the monotonicity of the minimal upper bound regarding the triggering threshold. Finally, a simulation example is presented to show effectiveness of the established filter scheme.
Future trends in computer waste generation in India.
Dwivedy, Maheshwar; Mittal, R K
2010-11-01
The objective of this paper is to estimate the future projection of computer waste in India and to subsequently analyze their flow at the end of their useful phase. For this purpose, the study utilizes the logistic model-based approach proposed by Yang and Williams to forecast future trends in computer waste. The model estimates future projection of computer penetration rate utilizing their first lifespan distribution and historical sales data. A bounding analysis on the future carrying capacity was simulated using the three parameter logistic curve. The observed obsolete generation quantities from the extrapolated penetration rates are then used to model the disposal phase. The results of the bounding analysis indicate that in the year 2020, around 41-152 million units of computers will become obsolete. The obsolete computer generation quantities are then used to estimate the End-of-Life outflows by utilizing a time-series multiple lifespan model. Even a conservative estimate of the future recycling capacity of PCs will reach upwards of 30 million units during 2025. Apparently, more than 150 million units could be potentially recycled in the upper bound case. However, considering significant future investment in the e-waste recycling sector from all stakeholders in India, we propose a logistic growth in the recycling rate and estimate the requirement of recycling capacity between 60 and 400 million units for the lower and upper bound case during 2025. Finally, we compare the future obsolete PC generation amount of the US and India. Copyright © 2010 Elsevier Ltd. All rights reserved.
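A compact sketch of the general modeling chain described above (illustrative parameters only, not the fitted model for India): a logistic curve for units in use, yearly additions taken as a crude proxy for sales, and obsolete units obtained by delaying those flows through a first-lifespan distribution.

```python
import numpy as np

years = np.arange(1995, 2031)
K, r, t0 = 400e6, 0.25, 2018                          # hypothetical capacity, growth rate, midpoint
in_use = K / (1 + np.exp(-r * (years - t0)))          # logistic units-in-use (penetration) curve
sales = np.diff(in_use, prepend=in_use[0])            # crude proxy: net additions per year

# Hypothetical first-lifespan distribution: 3 to 7 years, peaking at 5 years
lifespan = np.array([0.0, 0.0, 0.0, 0.1, 0.25, 0.3, 0.25, 0.1])
obsolete = np.convolve(sales, lifespan)[:len(years)]  # units retired each year

for y in (2010, 2020, 2025):
    i = int(np.where(years == y)[0][0])
    print(f"{y}: ~{obsolete[i] / 1e6:.0f} million units become obsolete")
```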
NASA Technical Reports Server (NTRS)
Sloss, J. M.; Kranzler, S. K.
1972-01-01
The equivalence of a considered integral equation form with an infinite system of linear equations is proved, and the localization of the eigenvalues of the infinite system is expressed. Error estimates are derived, and the problems of finding upper bounds and lower bounds for the eigenvalues are solved simultaneously.
Blow-up of solutions to a quasilinear wave equation for high initial energy
NASA Astrophysics Data System (ADS)
Li, Fang; Liu, Fang
2018-05-01
This paper deals with blow-up solutions to a nonlinear hyperbolic equation with variable exponent of nonlinearities. By constructing a new control function and using energy inequalities, the authors obtain the lower bound estimate of the L2 norm of the solution. Furthermore, the concavity arguments are used to prove the nonexistence of solutions; at the same time, an estimate of the upper bound of blow-up time is also obtained. This result extends and improves those of [1,2].
Pages, Gaël; Ramdani, Nacim; Fraisse, Philippe; Guiraud, David
2009-06-01
This paper presents a contribution for restoring standing in paraplegia while using functional electrical stimulation (FES). Movement generation induced by FES remains mostly open looped and stimulus intensities are tuned empirically. To design an efficient closed-loop control, a preliminary study has been carried out to investigate the relationship between body posture and voluntary upper body movements. A methodology is proposed to estimate body posture in the sagittal plane using force measurements exerted on supporting handles during standing. This is done by setting up constraints related to the geometric equations of a two-dimensional closed chain model and the hand-handle interactions. All measured quantities are subject to an uncertainty assumed unknown but bounded. The set membership estimation problem is solved via interval analysis. Guaranteed uncertainty bounds are computed for the estimated postures. In order to test the feasibility of our methodology, experiments were carried out with complete spinal cord injured patients.
A Novel Capacity Analysis for Wireless Backhaul Mesh Networks
NASA Astrophysics Data System (ADS)
Chung, Tein-Yaw; Lee, Kuan-Chun; Lee, Hsiao-Chih
This paper derived a closed-form expression for inter-flow capacity of a backhaul wireless mesh network (WMN) with centralized scheduling by employing a ring-based approach. Through the definition of an interference area, we are able to accurately describe a bottleneck collision area for a WMN and calculate the upper bound of inter-flow capacity. The closed-form expression shows that the upper bound is a function of the ratio between transmission range and network radius. Simulations and numerical analysis show that our analytic solution can better estimate the inter-flow capacity of WMNs than that of previous approach.
NASA Astrophysics Data System (ADS)
Basu, Biswajit
2017-12-01
Bounds on estimates of wave heights (valid for large amplitudes) from pressure and flow measurements at an arbitrary intermediate depth have been provided. Two-dimensional irrotational steady water waves over a flat bed with a finite depth in the presence of underlying uniform currents have been considered in the analysis. Five different upper bounds based on a combination of pressure and velocity field measurements have been derived, though there is only one available lower bound on the wave height in the case of the speed of current greater than or less than the wave speed. This article is part of the theme issue 'Nonlinear water waves'.
Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B
2014-09-15
Measurement-device-independent quantum key distribution (MDI-QKD) with the decoy-state method is believed to be securely applicable to defeat various hacking attacks in practical quantum key distribution systems. Recently, coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. Here, in this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound of the yield and the upper bound of the bit error rate. We exploit the standard statistical analysis and the Chernoff bound to perform the parameter estimation. The Chernoff bound can provide good bounds in long-distance MDI-QKD. Our results show that with CSS, both the secure transmission distance and the secure key rate are significantly improved compared with those of weak coherent states in the finite-data case.
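The flavor of the finite-data step can be sketched with the generic multiplicative Chernoff bounds (not the tightened formulas of the Letter): given an observed count and a target failure probability, numerically invert the bounds to bracket the underlying expected count.

```python
import numpy as np
from scipy.optimize import brentq

def chernoff_interval(x_obs, eps=1e-10):
    """Crude Chernoff-bound interval for the expected count underlying an observed count.
    Uses the standard multiplicative bounds, not the paper's tightened formulas."""
    log_term = np.log(1.0 / eps)

    # Upper bound mu_U: observing as few as x_obs would have probability <= eps,
    # via P(X <= (1 - d) mu) <= exp(-d^2 mu / 2) with d = (mu - x_obs) / mu.
    upper = brentq(lambda mu: (mu - x_obs) ** 2 / (2 * mu) - log_term,
                   x_obs + 1e-9, 10 * x_obs + 100)

    # Lower bound mu_L: observing as many as x_obs would have probability <= eps,
    # via P(X >= (1 + d) mu) <= exp(-d^2 mu / (2 + d)) with d = (x_obs - mu) / mu.
    def lower_eq(mu):
        d = (x_obs - mu) / mu
        return d * d * mu / (2 + d) - log_term
    lower = brentq(lower_eq, 1e-9, x_obs - 1e-9)

    return lower, upper

print(chernoff_interval(x_obs=5000))   # e.g. 5000 detections observed in a finite-size session
```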
Volumes and intrinsic diameters of hypersurfaces
NASA Astrophysics Data System (ADS)
Paeng, Seong-Hun
2015-09-01
We estimate the volume and the intrinsic diameter of a hypersurface M with geometric information of a hypersurface which is parallel to M at distance T. It can be applied to the Riemannian Penrose inequality to obtain a lower bound of the total mass of a spacetime. Also it can be used to obtain upper bounds of the volume and the intrinsic diameter of the celestial r-sphere without a lower bound of the sectional curvature. We extend our results to metric-measure spaces by using the Bakry-Emery Ricci tensor.
Interferometric tests of Planckian quantum geometry models
Kwon, Ohkyung; Hogan, Craig J.
2016-04-19
The effect of Planck scale quantum geometrical effects on measurements with interferometers is estimated with standard physics, and with a variety of proposed extensions. It is shown that effects are negligible in standard field theory with canonically quantized gravity. Statistical noise levels are estimated in a variety of proposals for nonstandard metric fluctuations, and these alternatives are constrained using upper bounds on stochastic metric fluctuations from LIGO. Idealized models of several interferometer system architectures are used to predict signal noise spectra in a quantum geometry that cannot be described by a fluctuating metric, in which position noise arises from holographic bounds on directional information. Lastly, predictions in this case are shown to be close to current and projected experimental bounds.
Fan, Jianqing; Liao, Yuan; Shi, Xiaofeng
2014-01-01
The risk of a large portfolio is often estimated by substituting a good estimator of the volatility matrix. However, the accuracy of such a risk estimator is largely unknown. We study factor-based risk estimators under a large amount of assets, and introduce a high-confidence level upper bound (H-CLUB) to assess the estimation. The H-CLUB is constructed using the confidence interval of risk estimators with either known or unknown factors. We derive the limiting distribution of the estimated risks in high dimensionality. We find that when the dimension is large, the factor-based risk estimators have the same asymptotic variance no matter whether the factors are known or not, which is slightly smaller than that of the sample covariance-based estimator. Numerically, H-CLUB outperforms the traditional crude bounds, and provides an insightful risk assessment. In addition, our simulated results quantify the relative error in the risk estimation, which is usually negligible using 3-month daily data. PMID:26195851
Ostapczuk, Martin; Musch, Jochen
2011-01-01
Despite being susceptible to social desirability bias, attitudes towards people with disabilities are traditionally assessed via self-report. We investigated two methods presumably providing more valid prevalence estimates of sensitive attitudes than direct questioning (DQ). Most people projective questioning (MPPQ) attempts to reduce bias by asking interviewees to estimate the number of other people holding a sensitive attribute, rather than confirming or denying the attribute for themselves. The randomised-response technique (RRT) tries to reduce bias by assuring confidentiality through a random scrambling of the respondent's answers. We assessed negative attitudes towards people with physical and mental disability via MPPQ, RRT and DQ to compare the resulting estimates. The MPPQ estimates exceeded the DQ estimates. Employing a cheating-detection extension of the RRT, we determined the proportion of respondents disregarding the RRT instructions and computed an upper bound for the prevalence of negative attitudes. MPPQ estimates exceeded this upper bound and were thus shown to overestimate the prevalence. Furthermore, we found more negative attitudes towards people with mental disabilities than those with physical disabilities in all three questioning conditions. We recommend employing the cheating-detection variant of the RRT to gain additional insight in future studies on attitudes towards people with disabilities.
NASA Astrophysics Data System (ADS)
Albeverio, Sergio; Tamura, Hiroshi
2018-04-01
We consider a model describing the coupling of a vector-valued and a scalar homogeneous Markovian random field over R^4, interpreted as expressing the interaction between a charged scalar quantum field coupled with a nonlinear quantized electromagnetic field. Expectations of functionals of the random fields are expressed by Brownian bridges. Using this, together with Feynman-Kac-Itô type formulae and estimates on the small time and large time behaviour of Brownian functionals, we prove asymptotic upper and lower bounds on the kernel of the transition semigroup for our model. The upper bound gives faster than exponential decay for large distances of the corresponding resolvent (propagator).
Upper bounds on superpartner masses from upper bounds on the Higgs boson mass.
Cabrera, M E; Casas, J A; Delgado, A
2012-01-13
The LHC is putting bounds on the Higgs boson mass. In this Letter we use those bounds to constrain the minimal supersymmetric standard model (MSSM) parameter space using the fact that, in supersymmetry, the Higgs mass is a function of the masses of sparticles, and therefore an upper bound on the Higgs mass translates into an upper bound for the masses of superpartners. We show that, although current bounds do not constrain the MSSM parameter space from above, once the Higgs mass bound improves big regions of this parameter space will be excluded, putting upper bounds on supersymmetry (SUSY) masses. On the other hand, for the case of split-SUSY we show that, for moderate or large tanβ, the present bounds on the Higgs mass imply that the common mass for scalars cannot be greater than 10^11 GeV. We show how these bounds will evolve as LHC continues to improve the limits on the Higgs mass.
Extracting Loop Bounds for WCET Analysis Using the Instrumentation Point Graph
NASA Astrophysics Data System (ADS)
Betts, A.; Bernat, G.
2009-05-01
Every calculation engine proposed in the literature of Worst-Case Execution Time (WCET) analysis requires upper bounds on loop iterations. Existing mechanisms to procure this information are either error prone, because they are gathered from the end-user, or limited in scope, because automatic analyses target very specific loop structures. In this paper, we present a technique that obtains bounds completely automatically for arbitrary loop structures. In particular, we show how to employ the Instrumentation Point Graph (IPG) to parse traces of execution (generated by an instrumented program) in order to extract bounds relative to any loop-nesting level. With this technique, therefore, non-rectangular dependencies between loops can be captured, allowing more accurate WCET estimates to be calculated. We demonstrate the improvement in accuracy by comparing WCET estimates computed through our HMB framework against those computed with state-of-the-art techniques.
Dwell time-based stabilisation of switched delay systems using free-weighting matrices
NASA Astrophysics Data System (ADS)
Koru, Ahmet Taha; Delibaşı, Akın; Özbay, Hitay
2018-01-01
In this paper, we present a quasi-convex optimisation method to minimise an upper bound of the dwell time for stability of switched delay systems. Piecewise Lyapunov-Krasovskii functionals are introduced and the upper bound for the derivative of the Lyapunov functionals is estimated by the free-weighting matrices method to investigate the non-switching stability of each candidate subsystem. Then, a sufficient condition for the dwell time is derived to guarantee the asymptotic stability of the switched delay system. Once these conditions are represented by a set of linear matrix inequalities, the dwell time optimisation problem can be formulated as a standard quasi-convex optimisation problem. Numerical examples are given to illustrate the improvements over previously obtained dwell time bounds. Using the results obtained in the stability case, we present a nonlinear minimisation algorithm to synthesise the dwell time minimiser controllers. The algorithm solves the problem with successive linearisation of the nonlinear conditions.
Uncertainty estimates in broadband seismometer sensitivities using microseisms
Ringler, Adam T.; Storm, Tyler L.; Gee, Lind S.; Hutt, Charles R.; Wilson, David C.
2015-01-01
The midband sensitivity of a seismic instrument is one of the fundamental parameters used in published station metadata. Any errors in this value can compromise amplitude estimates in otherwise high-quality data. To estimate an upper bound in the uncertainty of the midband sensitivity for modern broadband instruments, we compare daily microseism (4- to 8-s period) amplitude ratios between the vertical components of colocated broadband sensors across the IRIS/USGS (network code IU) seismic network. We find that the mean of the 145,972 daily ratios used between 2002 and 2013 is 0.9895 with a standard deviation of 0.0231. This suggests that the ratio between instruments shows a small bias and considerable scatter. We also find that these ratios follow a standard normal distribution (R² = 0.95442), which suggests that the midband sensitivity of an instrument has an error of no greater than ±6% with a 99% confidence interval. This gives an upper bound on the precision to which we know the sensitivity of a fielded instrument.
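The ratio-based uncertainty bound described above can be reproduced in outline with a few lines of numpy; the sketch below uses synthetic amplitude ratios and assumes that the ±6% figure corresponds to roughly 2.6 standard deviations of a normal distribution of daily ratios, an illustrative assumption rather than the authors' exact procedure.

```python
# Minimal sketch: daily amplitude-ratio statistics between colocated sensors.
# Assumptions: synthetic ratios; normal model for the scatter of daily ratios.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical daily microseism (4-8 s band) amplitude ratios between the
# vertical channels of two colocated broadband sensors.
ratios = rng.normal(loc=0.99, scale=0.023, size=5000)

mean_ratio = ratios.mean()
std_ratio = ratios.std(ddof=1)

# Under a normal model, a two-sided 99% interval spans about +/-2.58 standard
# deviations, which is how a ~+/-6% bound on sensitivity error can be read off.
z99 = stats.norm.ppf(0.995)
sensitivity_bound = z99 * std_ratio

print(f"mean ratio        : {mean_ratio:.4f}")
print(f"std of ratios     : {std_ratio:.4f}")
print(f"99% bound (+/- %) : {100 * sensitivity_bound:.1f}")
```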
Liouville type theorems of a nonlinear elliptic equation for the V-Laplacian
NASA Astrophysics Data System (ADS)
Huang, Guangyue; Li, Zhi
2018-03-01
In this paper, we consider Liouville type theorems for positive solutions to the following nonlinear elliptic equation: Δ_V u + a u log u = 0, where a is a nonzero real constant. By using gradient estimates, we obtain upper bounds on the gradient of positive solutions.
1-norm support vector novelty detection and its sparseness.
Zhang, Li; Zhou, WeiDa
2013-12-01
This paper proposes a 1-norm support vector novelty detection (SVND) method and discusses its sparseness. 1-norm SVND is formulated as a linear programming problem and uses two techniques for inducing sparseness: the 1-norm regularization and the hinge loss function. We also find two upper bounds on the sparseness of 1-norm SVND: the exact support vector (ESV) bound and the kernel Gram matrix rank bound. The ESV bound indicates that 1-norm SVND has a sparser representation model than SVND. The kernel Gram matrix rank bound can loosely estimate the sparseness of 1-norm SVND. Experimental results show that 1-norm SVND is feasible and effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
Constrained signal reconstruction from wavelet transform coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1991-12-31
A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT. 20 refs.
Combinatorial complexity of pathway analysis in metabolic networks.
Klamt, Steffen; Stelling, Jörg
2002-01-01
Elementary flux mode analysis is a promising approach for a pathway-oriented perspective of metabolic networks. However, in larger networks it is hampered by the combinatorial explosion of possible routes. In this work we give some estimates of the combinatorial complexity, including theoretical upper bounds for the number of elementary flux modes in a network of a given size. In a case study, we computed the elementary modes in the central metabolism of Escherichia coli while utilizing four different substrates. Interestingly, although the number of modes occurring in this complex network can exceed half a million, it is still far below the upper bound. Hence, to a certain extent, pathway analysis of central catabolism is feasible to assess network properties such as flexibility and functionality.
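As a rough illustration of the combinatorial ceilings mentioned above, the sketch below evaluates a binomial-coefficient bound of the form C(q, m+1) for a network with q reactions and m internal metabolites; both the functional form and the example network size are assumptions for illustration and are not taken from the paper.

```python
# Illustrative combinatorial ceiling on the number of elementary flux modes.
# Assumption: a binomial-type bound C(q, m + 1); the paper's exact bound may differ.
from math import comb

def efm_upper_bound(n_reactions: int, n_metabolites: int) -> int:
    """Binomial-coefficient ceiling on the number of elementary modes."""
    return comb(n_reactions, n_metabolites + 1)

# Hypothetical network roughly the size of a central-metabolism model.
q, m = 110, 89
print(f"combinatorial ceiling: {float(efm_upper_bound(q, m)):.3e}")
```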
Heskes, Tom; Eisinga, Rob; Breitling, Rainer
2014-11-21
The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Both exact calculation and permutation and gamma approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks, as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy over existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis on transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order of magnitude increase in throughput as compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip.
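For orientation, the sketch below computes the rank product statistic on synthetic data together with a plain permutation p-value, i.e. one of the approximate reference approaches that the exact bounds above are meant to improve upon; all data and sizes are hypothetical.

```python
# Rank product statistic with a simple permutation reference distribution.
import numpy as np

rng = np.random.default_rng(1)
n_mol, n_rep = 1000, 4

# Synthetic expression data: rows = molecules, columns = replicates.
data = rng.normal(size=(n_mol, n_rep))
data[:10] += 2.0                         # ten hypothetical up-regulated molecules

# Rank within each replicate (1 = strongest signal), then geometric mean of ranks.
ranks = n_mol - data.argsort(axis=0).argsort(axis=0)
rank_product = np.exp(np.log(ranks).mean(axis=1))

# Permutation reference: independently permuted rank columns.
n_perm = 200
null = np.empty((n_perm, n_mol))
for b in range(n_perm):
    perm = np.stack([rng.permutation(n_mol) + 1 for _ in range(n_rep)], axis=1)
    null[b] = np.exp(np.log(perm).mean(axis=1))

# Empirical p-value: fraction of null rank products at least as small.
null_sorted = np.sort(null.ravel())
p_perm = np.searchsorted(null_sorted, rank_product, side="right") / null_sorted.size
print("smallest permutation p-values:", np.sort(p_perm)[:5])
```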
New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Rodemich, E. R.; Rumsey, H., Jr.; Welch, L. R.
1977-01-01
An upper bound on the rate of a binary code as a function of minimum code distance (using a Hamming code metric) is arrived at from Delsarte-MacWilliams inequalities. The upper bound so found is asymptotically less than Levenshtein's bound, and a fortiori less than Elias' bound. Appendices review properties of Krawtchouk polynomials and Q-polynomials utilized in the rigorous proofs.
Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.
2017-01-01
Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of the variance in the fledgling counts as climate, parent age class, and landscape habitat predictors. Our logistic quantile regression model can be used for any discrete response variables with fixed upper and lower bounds.
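The jitter, logit-transform, quantile-regression, and back-transform recipe described above can be sketched with statsmodels' quantile regression; the data, the single covariate, and the chosen quantile below are hypothetical, and the sketch omits the full covariate set used in the study.

```python
# Logistic quantile regression for a count bounded in [0, 3] (minimal sketch).
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(2)
n = 500
precip = rng.normal(size=n)                                  # hypothetical covariate
counts = np.clip(np.round(1.5 - 0.5 * precip
                          + rng.normal(scale=0.8, size=n)), 0, 3)

lower, upper, eps = 0.0, 3.0, 1e-3
X = sm.add_constant(precip)
tau = 0.9                                                    # quantile of interest

def fit_once():
    y = counts + rng.uniform(0, 1, size=n)                   # jitter to continuous
    z = np.log((y - lower + eps) / (upper + 1 - y + eps))    # logit to (lower, upper+1)
    return QuantReg(z, X).fit(q=tau).params

# Repeat the jittering and average the coefficient estimates.
params = np.mean([fit_once() for _ in range(20)], axis=0)

# Back-transform fitted quantiles to the bounded count scale
# (quantiles are equivariant under monotone transformations).
z_hat = X @ params
y_hat = (np.exp(z_hat) * (upper + 1 + eps) + lower - eps) / (1 + np.exp(z_hat))
print("fitted 0.9-quantile range:", y_hat.min().round(2), "to", y_hat.max().round(2))
```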
Thin-wall approximation in vacuum decay: A lemma
NASA Astrophysics Data System (ADS)
Brown, Adam R.
2018-05-01
The "thin-wall approximation" gives a simple estimate of the decay rate of an unstable quantum field. Unfortunately, the approximation is uncontrolled. In this paper I show that there are actually two different thin-wall approximations and that they bracket the true decay rate: I prove that one is an upper bound and the other a lower bound. In the thin-wall limit, the two approximations converge. In the presence of gravity, a generalization of this lemma provides a simple sufficient condition for nonperturbative vacuum instability.
NASA Technical Reports Server (NTRS)
Mitchell, J. R.
1972-01-01
The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2012 CFR
2012-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2014 CFR
2014-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
SIP Version 1.0 User's Guide for Pesticide Exposure of Birds and Mammals through Drinking Water
The model provides an upper-bound estimate of the exposure of birds and mammals to pesticides through drinking water alone. It is intended for use in problem formulation to determine whether or not drinking water exposure alone is a potential pathway of concern.
Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi
2016-05-23
A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.
An upper bound on the radius of a highly electrically conducting lunar core
NASA Technical Reports Server (NTRS)
Hobbs, B. A.; Hood, L. L.; Herbert, F.; Sonett, C. P.
1983-01-01
Parker's (1980) nonlinear inverse theory for the electromagnetic sounding problem is converted to a form suitable for analysis of lunar day-side transfer function data by: (1) transforming the solution in plane geometry to that in spherical geometry; and (2) transforming the theoretical lunar transfer function in the dipole limit to an apparent resistivity function. The theory is applied to the revised lunar transfer function data set of Hood et al. (1982), which extends in frequency from 10⁻⁵ to 10⁻³ Hz. On the assumption that an iron-rich lunar core, whether molten or solid, can be represented by a perfect conductor at the minimum sampled frequency, an upper bound of 435 km on the maximum radius of such a core is calculated. This bound is somewhat larger than values of 360-375 km previously estimated from the same data set via forward model calculations because the prior work did not consider all possible mantle conductivity functions.
On the validity of the Arrhenius equation for electron attachment rate coefficients.
Fabrikant, Ilya I; Hotop, Hartmut
2008-03-28
The validity of the Arrhenius equation for dissociative electron attachment rate coefficients is investigated. A general analysis allows us to obtain estimates of the upper temperature bound for the range of validity of the Arrhenius equation in the endothermic case and both lower and upper bounds in the exothermic case with a reaction barrier. The results of the general discussion are illustrated by numerical examples whereby the rate coefficient, as a function of temperature for dissociative electron attachment, is calculated using the resonance R-matrix theory. In the endothermic case, the activation energy in the Arrhenius equation is close to the threshold energy, whereas in the case of exothermic reactions with an intermediate barrier, the activation energy is found to be substantially lower than the barrier height.
The number of chemicals with limited toxicological information for chemical safety decision-making has accelerated alternative model development, which often are evaluated via referencing animal toxicology studies. In vivo studies are generally considered the standard for hazard ...
Doppler-based motion compensation algorithm for focusing the signature of a rotorcraft.
Goldman, Geoffrey H
2013-02-01
A computationally efficient algorithm was developed and tested to compensate for the effects of motion on the acoustic signature of a rotorcraft. For target signatures with large spectral peaks that vary slowly in amplitude and have near constant frequency, the time-varying Doppler shift can be tracked and then removed from the data. The algorithm can be used to preprocess data for classification, tracking, and nulling algorithms. The algorithm was tested on rotorcraft data. The average instantaneous frequency of the first harmonic of a rotorcraft was tracked with a fixed-lag smoother. Then, state space estimates of the frequency were used to calculate a time warping that removed the effect of a time-varying Doppler shift from the data. The algorithm was evaluated by analyzing the increase in the amplitude of the harmonics in the spectrum of a rotorcraft. The results depended upon the frequency of the harmonics and the processing interval duration. Under good conditions, the results for the fundamental frequency of the target (~11 Hz) almost achieved an estimated upper bound. The results for higher frequency harmonics had larger increases in the amplitude of the peaks, but significantly lower than the estimated upper bounds.
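The time-warping step can be sketched as follows; here a synthetic Doppler-shifted tone stands in for the rotorcraft data, and a perfect instantaneous-frequency track of the first harmonic is assumed to be available (in the paper this track comes from a fixed-lag smoother over state-space estimates).

```python
# Removing a slowly varying Doppler shift by resampling in warped time.
import numpy as np

fs = 1024.0                                          # sample rate, Hz (hypothetical)
t = np.arange(0, 20.0, 1.0 / fs)
f0 = 11.0                                            # nominal fundamental, Hz
doppler = 1.0 + 0.01 * np.sin(2 * np.pi * 0.1 * t)   # synthetic Doppler factor

# Synthetic received signal: fundamental plus one harmonic, both Doppler shifted.
phase = 2 * np.pi * f0 * np.cumsum(doppler) / fs
x = np.cos(phase) + 0.5 * np.cos(2 * phase)

# Assume a track of the instantaneous frequency of the first harmonic.
f_track = f0 * doppler

# Warp time so the tracked frequency becomes constant: tau(t) = integral of f/f0 dt.
tau = np.cumsum(f_track / f0) / fs

# Resample the signal on a uniform grid in warped time; after warping, spectral
# peaks near 11 Hz and 22 Hz should sharpen (larger harmonic amplitudes).
tau_uniform = np.arange(tau[0], tau[-1], 1.0 / fs)
x_warped = np.interp(tau_uniform, tau, x)
print(len(x), "raw samples ->", len(x_warped), "warped samples")
```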
The Lyapunov dimension and its estimation via the Leonov method
NASA Astrophysics Data System (ADS)
Kuznetsov, N. V.
2016-06-01
Along with widely used numerical methods for estimating and computing the Lyapunov dimension there is an effective analytical approach, proposed by G.A. Leonov in 1991. The Leonov method is based on the direct Lyapunov method with special Lyapunov-like functions. The advantage of the method is that it allows one to estimate the Lyapunov dimension of invariant sets without localization of the set in the phase space and, in many cases, to obtain an exact Lyapunov dimension formula effectively. In this work the invariance of the Lyapunov dimension with respect to diffeomorphisms and its connection with the Leonov method are discussed. For discrete-time dynamical systems an analog of the Leonov method is suggested. In a simple but rigorous way, the connection is presented between the Leonov method and the key related works: Kaplan and Yorke (the concept of the Lyapunov dimension, 1979), Douady and Oesterlé (upper bounds of the Hausdorff dimension via the Lyapunov dimension of maps, 1980), Constantin, Eden, Foiaş, and Temam (upper bounds of the Hausdorff dimension via the Lyapunov exponents and Lyapunov dimension of dynamical systems, 1985-90), and the numerical calculation of the Lyapunov exponents and dimension.
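For the numerical side referred to above, the Kaplan-Yorke construction turns a spectrum of Lyapunov exponents into a Lyapunov dimension; the sketch below implements that formula only (not the Leonov analytical estimates), and the example exponents are commonly quoted values for the Lorenz system, used here as an assumption.

```python
# Kaplan-Yorke (Lyapunov) dimension from a spectrum of Lyapunov exponents.
import numpy as np

def kaplan_yorke_dimension(exponents):
    """D_KY = j + (sum of the j largest exponents) / |lambda_{j+1}|."""
    lam = np.sort(np.asarray(exponents, dtype=float))[::-1]
    cum = np.cumsum(lam)
    nonneg = np.flatnonzero(cum >= 0)
    if nonneg.size == 0:
        return 0.0
    j = nonneg[-1]                    # largest index with non-negative partial sum
    if j == lam.size - 1:
        return float(lam.size)        # all partial sums non-negative
    return (j + 1) + cum[j] / abs(lam[j + 1])

# Commonly quoted Lyapunov exponents for the Lorenz system (assumed values).
print(kaplan_yorke_dimension([0.906, 0.0, -14.572]))   # ~2.06
```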
Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.
Eichstädt, S; Wilkens, V
2017-06-01
An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.
Observed Volume Fluxes and Mixing in the Dardanelles Strait
2013-10-04
et al., 2001; Kara et al., 2008]. [3] It has been recognized for years that the upper-layer outflow from the Dardanelles Strait to the Aegean Sea ... than the interior of the sea and manifests itself as a subsurface flow bounded by the upper layer of the Sea of Marmara. ... both ends of the Dardanelles Strait, and assuming a steady-state mass budget, Ünlüata et al. [1990] estimated mean annual volume transports in the
Safe Upper-Bounds Inference of Energy Consumption for Java Bytecode Applications
NASA Technical Reports Server (NTRS)
Navas, Jorge; Mendez-Lojo, Mario; Hermenegildo, Manuel V.
2008-01-01
Many space applications such as sensor networks, on-board satellite-based platforms, on-board vehicle monitoring systems, etc. handle large amounts of data and analysis of such data is often critical for the scientific mission. Transmitting such large amounts of data to the remote control station for analysis is usually too expensive for time-critical applications. Instead, modern space applications are increasingly relying on autonomous on-board data analysis. All these applications face many resource constraints. A key requirement is to minimize energy consumption. Several approaches have been developed for estimating the energy consumption of such applications (e.g. [3, 1]) based on measuring actual consumption at run-time for large sets of random inputs. However, this approach has the limitation that it is in general not possible to cover all possible inputs. Using formal techniques offers the potential for inferring safe energy consumption bounds, thus being specially interesting for space exploration and safety-critical systems. We have proposed and implemented a general framework for resource usage analysis of Java bytecode [2]. The user defines a set of resource(s) of interest to be tracked and some annotations that describe the cost of some elementary elements of the program for those resources. These values can be constants or, more generally, functions of the input data sizes. The analysis then statically derives an upper bound on the amount of those resources that the program as a whole will consume or provide, also as functions of the input data sizes. This article develops a novel application of the analysis of [2] to inferring safe upper bounds on the energy consumption of Java bytecode applications. We first use a resource model that describes the cost of each bytecode instruction in terms of the joules it consumes. With this resource model, we then generate energy consumption cost relations, which are then used to infer safe upper bounds. How energy consumption for each bytecode instruction is measured is beyond the scope of this paper. Instead, this paper is about how to infer safe energy consumption estimations assuming that those energy consumption costs are provided. For concreteness, we use a simplified version of an existing resource model [1] in which an energy consumption cost for individual Java opcodes is defined.
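The final bound computation has the flavor of the toy sketch below: per-opcode energy costs are combined with upper bounds on how often each opcode can execute, expressed as functions of the input size. All costs and count bounds here are hypothetical placeholders, not the resource model of the cited work.

```python
# Toy static energy bound: sum over opcodes of cost x bound on execution count.
def energy_upper_bound(costs, exec_bounds, n):
    """Upper bound (joules) for input size n, given per-opcode costs and count bounds."""
    return sum(costs[op] * exec_bounds[op](n) for op in costs)

costs = {                      # joules per executed instruction (hypothetical)
    "iload": 2e-9,
    "iadd": 1e-9,
    "getfield": 8e-9,
    "invokevirtual": 4e-8,
}
exec_bounds = {                # upper bounds on execution counts vs. input size n
    "iload": lambda n: 3 * n + 2,
    "iadd": lambda n: n,
    "getfield": lambda n: n,
    "invokevirtual": lambda n: n + 1,
}

print(f"energy bound for n = 10000: {energy_upper_bound(costs, exec_bounds, 10_000):.3e} J")
```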
Limits on the fluctuating part of y-type distortion monopole from Planck and SPT results
NASA Astrophysics Data System (ADS)
Khatri, Rishi; Sunyaev, Rashid
2015-08-01
We use the published Planck and SPT cluster catalogs [1,2] and recently published y-distortion maps [3] to put strong observational limits on the contribution of the fluctuating part of the y-type distortions to the y-distortion monopole. Our bounds are 5.4 × 10⁻⁸ < ⟨y⟩ < 2.2 × 10⁻⁶. Our upper bound is a factor of 6.8 stronger than the currently best upper 95% confidence limit from COBE-FIRAS of ⟨y⟩ < 15 × 10⁻⁶. In the standard cosmology, large scale structure is the only source of such distortions and our limits therefore constrain the baryonic physics involved in the formation of the large scale structure. Our lower limit, from the detected clusters in the Planck and SPT catalogs, also implies that a Pixie-like experiment should detect the y-distortion monopole at >27σ. The biggest sources of uncertainty in our upper limit are the monopole offsets between different HFI channel maps, which we estimate to be <10⁻⁶.
Data-based fault-tolerant control for affine nonlinear systems with actuator faults.
Xie, Chun-Hua; Yang, Guang-Hong
2016-09-01
This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults including stuck, outage, bias and loss of effectiveness. The upper bounds of stuck faults, bias faults and loss of effectiveness faults are unknown. A new data-based FTC scheme is proposed. It consists of the online estimations of the bounds and a state-dependent function. The estimations are adjusted online to compensate automatically the actuator faults. The state-dependent function solved by using real system data helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with the existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.
Hu, Liang; Wang, Zidong; Liu, Xiaohui
2016-08-01
In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with such kind of introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
Upper bound on the slope of steady water waves with small adverse vorticity
NASA Astrophysics Data System (ADS)
So, Seung Wook; Strauss, Walter A.
2018-03-01
We consider the angle of inclination (with respect to the horizontal) of the profile of a steady 2D inviscid symmetric periodic or solitary water wave subject to gravity. There is an upper bound of 31.15° in the irrotational case [1] and an upper bound of 45° in the case of favorable vorticity [13]. On the other hand, if the vorticity is adverse, the profile can become vertical. We prove here that if the adverse vorticity is sufficiently small, then the angle still has an upper bound which is slightly larger than 45°.
A simple method for assessing occupational exposure via the one-way random effects model.
Krishnamoorthy, K; Mathew, Thomas; Peng, Jie
2016-11-01
A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
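As a simplified illustration of the MOVER idea, the sketch below builds a confidence interval for the mean of a log-normal exposure distribution from a single random sample, combining a t-interval for the log-scale mean with a chi-square interval for half the log-scale variance; the worker random effect of the one-way model is deliberately omitted, and the data are hypothetical.

```python
# MOVER interval for a log-normal mean (no random worker effect; minimal sketch).
import numpy as np
from scipy import stats

x = np.array([0.21, 0.34, 0.09, 0.51, 0.27, 0.18, 0.44, 0.12, 0.30, 0.25])  # mg/m^3
y = np.log(x)
n, ybar, s2 = y.size, y.mean(), y.var(ddof=1)
alpha, df = 0.05, y.size - 1

theta = ybar + s2 / 2                      # log of the log-normal mean

# Component intervals: t-interval for mu, chi-square interval for sigma^2 / 2.
tcrit = stats.t.ppf(1 - alpha / 2, df)
l1, u1 = ybar - tcrit * np.sqrt(s2 / n), ybar + tcrit * np.sqrt(s2 / n)
l2 = df * s2 / (2 * stats.chi2.ppf(1 - alpha / 2, df))
u2 = df * s2 / (2 * stats.chi2.ppf(alpha / 2, df))

# MOVER combination (method of variance estimates recovery).
L = theta - np.sqrt((ybar - l1) ** 2 + (s2 / 2 - l2) ** 2)
U = theta + np.sqrt((u1 - ybar) ** 2 + (u2 - s2 / 2) ** 2)

print(f"mean exposure estimate: {np.exp(theta):.3f}")
print(f"95% MOVER interval    : ({np.exp(L):.3f}, {np.exp(U):.3f})")
```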
Reliability Estimating Procedures for Electric and Thermochemical Propulsion Systems. Volume 1
1977-02-01
Laboratories, The Marquardt Company, NASA Goddard Space Flight Center, RCA Astro Electronics, Rockwell International, Applied Physics Laboratory ... [Table fragment: Failure Rate Means and Bounds: 5% Lower Bound, Median, Mean, 95% Upper Bound; remaining scanned content not recoverable]
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
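The recipe can be made concrete for a simple Poisson counting observation: fix the detection threshold from the acceptable Type I error given the expected background, then increase the source intensity until the detection probability reaches the required power. The background level, error rates, and search step below are illustrative assumptions.

```python
# Upper limit as the faintest source detectable with a given power (Poisson case).
from scipy import stats

def detection_threshold(background, alpha):
    """Smallest count n* with P(N >= n* | background) <= alpha (Type I error)."""
    n = 0
    while stats.poisson.sf(n - 1, background) > alpha:
        n += 1
    return n

def upper_limit(background, alpha=0.01, beta=0.5, step=0.01):
    """Smallest source intensity detected with probability >= 1 - beta (Type II error beta)."""
    n_star = detection_threshold(background, alpha)
    s = 0.0
    while stats.poisson.sf(n_star - 1, background + s) < 1 - beta:
        s += step
    return n_star, s

n_star, s_min = upper_limit(background=3.0, alpha=0.01, beta=0.5)
print(f"detection threshold: {n_star} counts; upper limit: {s_min:.2f} (source counts)")
```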
Long-Term Follow Up of CSRP: Understanding Students' Academic Achievement Post-Treatment
ERIC Educational Resources Information Center
Lennon, Jaclyn M.; Li-Grining, Christine; Raver, C. Cybele; Pess, Rachel A.
2011-01-01
In this poster presentation, the authors examine the impact of Chicago School Readiness Project (CSRP) on students' academic achievement in elementary school. First, they provide upper- and lower-bound estimates of the impact of CSRP on students' academic achievement, taking into account their subsequent nonrandom selection into higher versus…
Non-localization of eigenfunctions for Sturm-Liouville operators and applications
NASA Astrophysics Data System (ADS)
Liard, Thibault; Lissy, Pierre; Privat, Yannick
2018-02-01
In this article, we investigate a non-localization property of the eigenfunctions of Sturm-Liouville operators A_a = -∂_xx + a(·) Id with Dirichlet boundary conditions, where a(·) runs over the bounded nonnegative potential functions on the interval (0, L) with L > 0. More precisely, we address the extremal spectral problem of minimizing the L²-norm of a function e(·) on a measurable subset ω of (0, L), where e(·) runs over all eigenfunctions of A_a, at the same time with respect to all subsets ω having a prescribed measure and all L^∞ potential functions a(·) having a prescribed essential upper bound. We provide some existence and qualitative properties of the minimizers, as well as precise lower and upper estimates on the optimal value. Several consequences in control and stabilization theory are then highlighted.
Boukattaya, Mohamed; Mezghani, Neila; Damak, Tarak
2018-06-01
In this paper, robust and adaptive nonsingular fast terminal sliding-mode (NFTSM) control schemes for the trajectory tracking problem are proposed with known or unknown upper bound of the system uncertainty and external disturbances. The developed controllers take advantage of the NFTSM theory to ensure a fast convergence rate, singularity avoidance, and robustness against uncertainties and external disturbances. First, a robust NFTSM controller is proposed which guarantees that the sliding surface and equilibrium point can be reached in a short finite time from any initial state. Then, in order to cope with the unknown upper bound of the system uncertainty which may occur in practical applications, a new adaptive NFTSM algorithm is developed. One feature of the proposed control laws is their adaptation technique, in which prior knowledge of the parameter uncertainties and disturbances is not needed. However, the adaptive tuning law can estimate the upper bound of these uncertainties using only position and velocity measurements. Moreover, the proposed controller eliminates the chattering effect without losing the robustness property and the precision. Stability analysis is performed using the Lyapunov stability theory, and simulation studies are conducted to verify the effectiveness of the developed control schemes. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Effects of general relativity on glitch amplitudes and pulsar mass upper bounds
NASA Astrophysics Data System (ADS)
Antonelli, M.; Montoli, A.; Pizzochero, P. M.
2018-04-01
Pinning of vortex lines in the inner crust of a spinning neutron star may be the mechanism that enhances the differential rotation of the internal neutron superfluid, making it possible to freeze some amount of angular momentum which eventually can be released, thus causing a pulsar glitch. We investigate the general relativistic corrections to pulsar glitch amplitudes in the slow-rotation approximation, consistently with the stratified structure of the star. We thus provide a relativistic generalization of a previous Newtonian model that was recently used to estimate upper bounds on the masses of glitching pulsars. We find that the effect of general relativity on the glitch amplitudes obtained by emptying the whole angular momentum reservoir is less than 30 per cent. Moreover, we show that the Newtonian upper bounds on the masses of large glitchers obtained from observations of their maximum recorded event differ by less than a few percent from those calculated within the relativistic framework. This work can also serve as a basis to construct more sophisticated models of angular momentum reservoir in a relativistic context: in particular, we present two alternative scenarios for macroscopically rigid and slack pinned vortex lines, and we generalize the Feynman-Onsager relation to the case when both entrainment coupling between the fluids and a strong axisymmetric gravitational field are present.
Long-Time Behavior and Critical Limit of Subcritical SQG Equations in Scale-Invariant Sobolev Spaces
NASA Astrophysics Data System (ADS)
Coti Zelati, Michele
2018-02-01
We consider the subcritical SQG equation in its natural scale-invariant Sobolev space and prove the existence of a global attractor of optimal regularity. The proof is based on a new energy estimate in Sobolev spaces to bootstrap the regularity to the optimal level, derived by means of nonlinear lower bounds on the fractional Laplacian. This estimate appears to be new in the literature and allows a sharp use of the subcritical nature of the L^∞ bounds for this problem. As a by-product, we obtain attractors for weak solutions as well. Moreover, we study the critical limit of the attractors and prove their stability and upper semicontinuity with respect to the strength of the diffusion.
Skeletal Correlates for Body Mass Estimation in Modern and Fossil Flying Birds
Field, Daniel J.; Lynner, Colton; Brown, Christian; Darroch, Simon A. F.
2013-01-01
Scaling relationships between skeletal dimensions and body mass in extant birds are often used to estimate body mass in fossil crown-group birds, as well as in stem-group avialans. However, useful statistical measurements for constraining the precision and accuracy of fossil mass estimates are rarely provided, which prevents the quantification of robust upper and lower bound body mass estimates for fossils. Here, we generate thirteen body mass correlations and associated measures of statistical robustness using a sample of 863 extant flying birds. By providing robust body mass regressions with upper- and lower-bound prediction intervals for individual skeletal elements, we address the longstanding problem of body mass estimation for highly fragmentary fossil birds. We demonstrate that the most precise proxy for estimating body mass in the overall dataset, measured both as the coefficient of determination of ordinary least squares regression and as percent prediction error, is the maximum diameter of the coracoid's humeral articulation facet (the glenoid). We further demonstrate that this result is consistent among the majority of investigated avian orders (10 out of 18). As a result, we suggest that, in the majority of cases, this proxy may provide the most accurate estimates of body mass for volant fossil birds. Additionally, by presenting statistical measurements of body mass prediction error for thirteen different body mass regressions, this study provides a much-needed quantitative framework for the accurate estimation of body mass and associated ecological correlates in fossil birds. The application of these regressions will enhance the precision and robustness of many mass-based inferences in future paleornithological studies. PMID:24312392
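A single proxy regression of the kind described above reduces to an ordinary least squares fit on log-transformed data with prediction intervals; the sketch below uses synthetic measurements, and the 9 mm "fossil" glenoid diameter and regression coefficients are hypothetical.

```python
# Log-log body-mass regression with upper- and lower-bound prediction intervals.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
log_glenoid = rng.normal(np.log(6.0), 0.5, size=n)              # mm, hypothetical
log_mass = 2.3 * log_glenoid + 1.1 + rng.normal(0, 0.15, n)      # g, hypothetical

X = sm.add_constant(log_glenoid)
fit = sm.OLS(log_mass, X).fit()

# Predict body mass for a fossil element with a 9 mm glenoid diameter.
new = sm.add_constant(np.array([np.log(9.0)]), has_constant="add")
pred = fit.get_prediction(new).summary_frame(alpha=0.05)
mass = np.exp(pred["mean"][0])
lo, hi = np.exp(pred["obs_ci_lower"][0]), np.exp(pred["obs_ci_upper"][0])

print(f"R^2 = {fit.rsquared:.3f}")
print(f"estimated mass: {mass:.0f} g  (95% prediction interval: {lo:.0f}-{hi:.0f} g)")
```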
Low-dimensional Representation of Error Covariance
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
Sign rank versus Vapnik-Chervonenkis dimension
NASA Astrophysics Data System (ADS)
Alon, N.; Moran, Sh; Yehudayoff, A.
2017-12-01
This work studies the maximum possible sign rank of sign (N × N)-matrices with a given Vapnik-Chervonenkis dimension d. For d = 1, this maximum is three. For d = 2, this maximum is \widetilde{\Theta}(N^{1/2}). For d > 2, similar but slightly less accurate statements hold. The lower bounds improve on previous ones by Ben-David et al., and the upper bounds are novel. The lower bounds are obtained by probabilistic constructions, using a theorem of Warren in real algebraic topology. The upper bounds are obtained using a result of Welzl about spanning trees with low stabbing number, and using the moment curve. The upper bound technique is also used to: (i) provide estimates on the number of classes of a given Vapnik-Chervonenkis dimension, and the number of maximum classes of a given Vapnik-Chervonenkis dimension--answering a question of Frankl from 1989, and (ii) design an efficient algorithm that provides an O(N/log(N)) multiplicative approximation for the sign rank. We also observe a general connection between sign rank and spectral gaps which is based on Forster's argument. Consider the adjacency (N × N)-matrix of a Δ-regular graph with a second eigenvalue of absolute value λ and Δ ≤ N/2. We show that the sign rank of the signed version of this matrix is at least Δ/λ. We use this connection to prove the existence of a maximum class C ⊆ {±1}^N with Vapnik-Chervonenkis dimension 2 and sign rank \widetilde{\Theta}(N^{1/2}). This answers a question of Ben-David et al. regarding the sign rank of large Vapnik-Chervonenkis classes. We also describe limitations of this approach, in the spirit of the Alon-Boppana theorem. We further describe connections to communication complexity, geometry, learning theory, and combinatorics. Bibliography: 69 titles.
An estimator for the standard deviation of a natural frequency. I.
NASA Technical Reports Server (NTRS)
Schiff, A. J.; Bogdanoff, J. L.
1971-01-01
A brief review of mean-square approximate systems is given. The case in which the masses are deterministic is considered first in the derivation of an estimator for the upper bound of the standard deviation of a natural frequency. Two examples presented include a two-degree-of-freedom system and a case in which the disorder in the springs is perfectly correlated. For purposes of comparison, a Monte Carlo simulation was done on a digital computer.
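The Monte Carlo comparison mentioned above can be sketched for a two-degree-of-freedom spring-mass chain with disordered springs; the masses, nominal stiffness, and 5% disorder level are hypothetical, and the paper's analytical upper-bound estimator itself is not reproduced here.

```python
# Monte Carlo estimate of the standard deviation of the first natural frequency
# of a 2-DOF spring-mass chain with random (disordered) spring stiffnesses.
import numpy as np

rng = np.random.default_rng(4)
m1 = m2 = 1.0                 # kg (hypothetical)
k_nom = 1000.0                # N/m nominal stiffness (hypothetical)
n_samples = 20_000

freqs = np.empty(n_samples)
for i in range(n_samples):
    k1, k2 = k_nom * (1.0 + 0.05 * rng.standard_normal(2))    # 5% stiffness disorder
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    M = np.diag([m1, m2])
    omega2 = np.linalg.eigvals(np.linalg.solve(M, K)).real     # squared angular freqs
    freqs[i] = np.sqrt(omega2.min()) / (2 * np.pi)             # first natural frequency, Hz

print(f"mean of first natural frequency: {freqs.mean():.3f} Hz")
print(f"std  of first natural frequency: {freqs.std(ddof=1):.4f} Hz")
```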
Plate Motions, Regional Deformation, and Time-Variation of Plate Motions
NASA Technical Reports Server (NTRS)
Gordon, R. G.
1998-01-01
The significant results obtained with support of this grant include the following: (1) Using VLBI data in combination with other geodetic, geophysical, and geological data to bound the present rotation of the Colorado Plateau, and to evaluate its implications for the kinematics and seismogenic potential of the western half of the conterminous U.S. (2) Determining realistic estimates of uncertainties for VLBI data and then applying the data and uncertainties to obtain an upper bound on the integral of deformation within the "stable interior" of the North American and other plates and thus to place an upper bound on the seismogenic potential within these regions. (3) Combining VLBI data with other geodetic, geophysical, and geologic data to estimate the motion of coastal California in a frame of reference attached to the Sierra Nevada-Great Valley microplate. This analysis has provided new insights into the kinematic boundary conditions that may control or at least strongly influence the locations of asperities that rupture in great earthquakes along the San Andreas transform system. (4) Determining a global tectonic model from VLBI geodetic data that combines the estimation of plate angular velocities with individual site linear velocities where tectonically appropriate. (5) Investigation of some of the outstanding problems defined by the work leading to global plate motion model NUVEL-1. These problems, such as the motion between the Pacific and North American plates and between west Africa and east Africa, are focused on regions where the seismogenic potential may be greater than implied by published plate tectonic models.
Martin, Julien; Edwards, Holly H.; Bled, Florent; Fonnesbeck, Christopher J.; Dupuis, Jérôme A.; Gardner, Beth; Koslovsky, Stacie M.; Aven, Allen M.; Ward-Geiger, Leslie I.; Carmichael, Ruth H.; Fagan, Daniel E.; Ross, Monica A.; Reinert, Thomas R.
2014-01-01
The explosion of the Deepwater Horizon drilling platform created the largest marine oil spill in U.S. history. As part of the Natural Resource Damage Assessment process, we applied an innovative modeling approach to obtain upper estimates for occupancy and for number of manatees in areas potentially affected by the oil spill. Our data consisted of aerial survey counts in waters of the Florida Panhandle, Alabama and Mississippi. Our method, which uses a Bayesian approach, allows for the propagation of uncertainty associated with estimates from empirical data and from the published literature. We illustrate that it is possible to derive estimates of occupancy rate and upper estimates of the number of manatees present at the time of sampling, even when no manatees were observed in our sampled plots during surveys. We estimated that fewer than 2.4% of potentially affected manatee habitat in our Florida study area may have been occupied by manatees. The upper estimate for the number of manatees present in potentially impacted areas (within our study area) was estimated with our model to be 74 (95%CI 46 to 107). This upper estimate for the number of manatees was conditioned on the upper 95%CI value of the occupancy rate. In other words, based on our estimates, it is highly probable that there were 107 or fewer manatees in our study area during the time of our surveys. Because our analyses apply to habitats considered likely manatee habitats, our inference is restricted to these sites and to the time frame of our surveys. Given that manatees may be hard to see during aerial surveys, it was important to account for imperfect detection. The approach that we described can be useful for determining the best allocation of resources for monitoring and conservation. PMID:24670971
Gauge mediation at the LHC: status and prospects
Knapen, Simon; Redigolo, Diego
2017-01-30
We show that the predictivity of general gauge mediation (GGM) with TeV-scale stops is greatly increased once the Higgs mass constraint is imposed. The most notable results are a strong lower bound on the mass of the gluino and right-handed squarks, and an upper bound on the Higgsino mass. If the μ-parameter is positive, the wino mass is also bounded from above. These constraints relax significantly for high messenger scales and as such long-lived NLSPs are favored in GGM. We identify a small set of most promising topologies for the neutralino/sneutrino NLSP scenarios and estimate the impact of the current bounds and the sensitivity of the high luminosity LHC. The stau, stop and sbottom NLSP scenarios can be robustly excluded at the high luminosity LHC.
NASA Astrophysics Data System (ADS)
Masson, Frederic; Knoepfler, Andreas; Mayer, Michael; Ulrich, Patrice; Heck, Bernhard
2010-05-01
In September 2008, the Institut de Physique du Globe de Strasbourg (Ecole et Observatoire des Sciences de la Terre, EOST) and the Geodetic Institute (GIK) of Karlsruhe University (TH) established a transnational cooperation called GURN (GNSS Upper Rhine Graben Network). Within the GURN initiative these institutions are cooperating in order to establish a highly precise and highly sensitive network of permanently operating GNSS sites for the detection of crustal movements in the Upper Rhine Graben region. At the beginning, the network consisted of the permanently operating GNSS sites of SAPOS®-Baden-Württemberg, different data providers in France (e.g. EOST, Teria, RGP) and some further sites (e.g. IGS). In July 2009, the network was extended to the South when swisstopo (Switzerland) joined GURN, and to the North when SAPOS®-Rheinland-Pfalz joined. The network therefore currently consists of approximately 80 permanently operating reference sites. The presentation will discuss the current status of GURN and its main research goals, and will present first results concerning data quality as well as time series from a first reprocessing of all available data since 2002 using GAMIT/GLOBK (EOST working group) and the Bernese GPS Software (GIK working group). Based on these time series, the velocity and strain fields will be calculated in the future. The GURN initiative also aims at estimating the upper bounds of deformation in the Upper Rhine Graben region.
WINDOWS: a program for the analysis of spectral data foil activation measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stallmann, F.W.; Eastham, J.F.; Kam, F.B.K.
The computer program WINDOWS, together with its subroutines, is described for the analysis of neutron spectral data from foil activation measurements. In particular, the program treats the unfolding of the neutron differential spectrum, estimated windows and detector contributions, upper and lower bounds for an integral response, and group fluxes obtained from neutron transport calculations. 116 references. (JFP)
Channel Simulation in Quantum Metrology
NASA Astrophysics Data System (ADS)
Laurenza, Riccardo; Lupo, Cosmo; Spedalieri, Gaetana; Braunstein, Samuel L.; Pirandola, Stefano
2018-04-01
In this review we discuss how channel simulation can be used to simplify the most general protocols of quantum parameter estimation, where unlimited entanglement and adaptive joint operations may be employed. Whenever the unknown parameter encoded in a quantum channel is completely transferred in an environmental program state simulating the channel, the optimal adaptive estimation cannot beat the standard quantum limit. In this setting, we elucidate the crucial role of quantum teleportation as a primitive operation which allows one to completely reduce adaptive protocols over suitable teleportation-covariant channels and derive matching upper and lower bounds for parameter estimation. For these channels, we may express the quantum Cramér-Rao bound directly in terms of their Choi matrices. Our review considers both discrete- and continuous-variable systems, also presenting some new results for bosonic Gaussian channels using an alternative sub-optimal simulation. It is an open problem to design simulations for quantum channels that achieve the Heisenberg limit.
Vertical structure of tropospheric winds on gas giants
NASA Astrophysics Data System (ADS)
Scott, R. K.; Dunkerton, T. J.
2017-04-01
Zonal mean zonal velocity profiles from cloud-tracking observations on Jupiter and Saturn are used to infer latitudinal variations of potential temperature consistent with a shear stable potential vorticity distribution. Immediately below the cloud tops, density stratification is weaker on the poleward and stronger on the equatorward flanks of midlatitude jets, while at greater depth the opposite relation holds. Thermal wind balance then yields the associated vertical shears of midlatitude jets in an altitude range bounded above by the cloud tops and bounded below by the level where the latitudinal gradient of static stability changes sign. The inferred vertical shear below the cloud tops is consistent with existing thermal profiling of the upper troposphere. The sense of the associated mean meridional circulation in the upper troposphere is discussed, and expected magnitudes are given based on existing estimates of the radiative timescale on each planet.
Search for violations of quantum mechanics
Ellis, John; Hagelin, John S.; Nanopoulos, D. V.; ...
1984-07-01
The treatment of quantum effects in gravitational fields indicates that pure states may evolve into mixed states, and Hawking has proposed modification of the axioms of field theory which incorporate the corresponding violation of quantum mechanics. In this study we propose a modified hamiltonian equation of motion for density matrices and use it to interpret upper bounds on the violation of quantum mechanics in different phenomenological situations. We apply our formalism to the K⁰-K̄⁰ system and to long baseline neutron interferometry experiments. In both cases we find upper bounds of about 2 × 10⁻²¹ GeV on contributions to the single particle "hamiltonian" which violate quantum mechanical coherence. We discuss how these limits might be improved in the future, and consider the relative significance of other successful tests of quantum mechanics. Finally, an appendix contains model estimates of the magnitude of effects violating quantum mechanics.
DD production and their interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Yanrui; Oka, Makoto; Takizawa, Makoto
2010-07-01
S- and P-wave DD scatterings are studied in a meson exchange model with the coupling constants obtained in the heavy quark effective theory. With the extracted P-wave phase shifts and the separable potential approximation, we include the DD rescattering effect and investigate the production process e⁺e⁻ → DD. We find that it is difficult to explain the anomalous line shape observed by the BES Collaboration with this mechanism. Combining our model calculation and the experimental measurement, we estimate the upper limit of the nearly universal cutoff parameter to be around 2 GeV. With this number, the upper limits of the binding energies of the S-wave DD and BB bound states are obtained. Assuming that the S-wave and P-wave interactions rely on the same cutoff, our study provides a way of extracting the information about S-wave molecular bound states from the P-wave meson pair production.
NASA Astrophysics Data System (ADS)
Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.
2018-02-01
Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
Crustal volumes of the continents and of oceanic and continental submarine plateaus
NASA Technical Reports Server (NTRS)
Schubert, G.; Sandwell, D.
1989-01-01
Using global topographic data and the assumption of Airy isostasy, it is estimated that the crustal volume of the continents is 7182 × 10⁶ km³. The crustal volumes of the oceanic and continental submarine plateaus are calculated at 369 × 10⁶ km³ and 242 × 10⁶ km³, respectively. The total continental crustal volume is found to be 7581 × 10⁶ km³, 3.2 percent of which is comprised of continental submarine plateaus on the seafloor. An upper bound on the continental crust addition rate by the accretion of oceanic plateaus is set at 3.7 km³/yr. Subduction of continental submarine plateaus with the oceanic lithosphere on a 100 Myr time scale yields an upper bound on the continental crustal subtraction rate of 2.4 km³/yr.
An upper-bound assessment of the benefits of reducing perchlorate in drinking water.
Lutter, Randall
2014-10-01
The Environmental Protection Agency plans to issue new federal regulations to limit drinking water concentrations of perchlorate, which occurs naturally and results from the combustion of rocket fuel. This article presents an upper-bound estimate of the potential benefits of alternative maximum contaminant levels for perchlorate in drinking water. The results suggest that the economic benefits of reducing perchlorate concentrations in drinking water are likely to be low, i.e., under $2.9 million per year nationally, for several reasons. First, the prevalence of detectable perchlorate in public drinking water systems is low. Second, the population especially sensitive to effects of perchlorate, pregnant women who are moderately iodide deficient, represents a minority of all pregnant women. Third, and perhaps most importantly, reducing exposure to perchlorate in drinking water is a relatively ineffective way of increasing iodide uptake, a crucial step linking perchlorate to health effects of concern. © 2014 Society for Risk Analysis.
Paul L. Patterson; Mark Finco
2011-01-01
This paper explores the information forest inventory data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977)....
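The Bernoulli reduction mentioned above leads to a closed-form upper confidence bound when a forest type is never observed: with zero occurrences in n sampled plots, the exact one-sided bound on the occurrence proportion is 1 - alpha^(1/n), which the "rule of three" approximates as 3/n at 95% confidence. The plot count below is hypothetical.

```python
# Upper confidence bound on the proportion of a forest type observed in 0 of n plots.
def upper_bound_zero_successes(n_plots: int, alpha: float = 0.05) -> float:
    """Exact one-sided upper bound on p when 0 of n_plots sampled plots show the type."""
    return 1.0 - alpha ** (1.0 / n_plots)

n = 1200
print(f"95% upper bound on proportion: {upper_bound_zero_successes(n):.5f}")
print(f"rule-of-three approximation  : {3 / n:.5f}")
```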
Assessments of fluid friction factors for use in leak rate calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chivers, T.C.
1997-04-01
Leak-before-break procedures require estimates of leakage, and these in turn need fluid friction to be assessed. In this paper, available data on flow rates through idealized and real crack geometries are reviewed in terms of a single friction factor λ. It is shown that for λ < 1, flow rates can be bounded using correlations in terms of surface R_a values. For λ > 1 the database is less precise, but λ ≈ 4 is an upper bound; hence in this region flow calculations can be assessed using 1 < λ < 4.
Heterogeneous losses of externally generated I atoms for OIL
NASA Astrophysics Data System (ADS)
Torbin, A. P.; Mikheyev, P. A.; Ufimtsev, N. I.; Voronov, A. I.; Azyazov, V. N.
2012-01-01
Usage of an external iodine atom generator can improve the energy efficiency of the oxygen-iodine laser (OIL) and expand its range of operation parameters. However, a noticeable part of the iodine atoms may recombine or undergo chemical bonding during transportation from the generator to the injection point. Experimental results reported in this paper showed that uncoated aluminum surfaces readily bound iodine atoms, while nickel, stainless steel, Teflon, and Plexiglas did not. Estimates based on the experimental results show that the upper bound on the probability of surface iodine atom recombination for Teflon, Plexiglas, nickel, and stainless steel is γ_rec ≤ 10⁻⁵.
Age and disability: explaining the wage differential.
Gannon, Brenda; Munley, Margaret
2009-07-01
This paper estimates the level of explained and unexplained factors that contribute to the wage gap between workers with and without disabilities, providing benchmark estimates for Ireland. It separates out the confounding impact of productivity differences between disabled and non-disabled workers by comparing wage differentials across three groups: disabled with limitations, disabled without limitations, and non-disabled. Furthermore, data are analysed for the years 1995-2001, and two sub-samples (pre- and post-1998) allow us to decompose wage differentials before and after the Employment Equality Act 1998. Results are comparable to those for the UK, and the unexplained component (an upper bound on discrimination) is lower once we control for productivity differences. The lower-bound level depends on the contribution of unobserved effects and the validity of the selection component in the decomposition model.
Brief Report: Quantifying the Impact of Autism Coverage on Private Insurance Premiums
Bouder, James N.; Spielman, Stuart
2010-01-01
Many states are considering legislation requiring private insurance companies to pay for autism-related services. Arguments against mandates include that they will result in higher premiums. Using Pennsylvania legislation as an example, which proposed covering services up to $36,000 per year for individuals less than 21 years of age, this paper estimates potential premium increases. The estimate relies on autism treated prevalence, the number of individuals insured by affected plans, mean annual autism expenditures, administrative costs, medical loss ratio, and total insurer revenue. Current treated prevalence and expenditures suggest that premium increases would approximate 1%, with a lower bound of 0.19% and an upper bound of 2.31%. Policy makers can use these results to assess the cost-effectiveness of similar legislation. PMID:19214727
Brief report: Quantifying the impact of autism coverage on private insurance premiums.
Bouder, James N; Spielman, Stuart; Mandell, David S
2009-06-01
Many states are considering legislation requiring private insurance companies to pay for autism-related services. Arguments against mandates include that they will result in higher premiums. Using Pennsylvania legislation as an example, which proposed covering services up to $36,000 per year for individuals less than 21 years of age, this paper estimates potential premium increases. The estimate relies on autism treated prevalence, the number of individuals insured by affected plans, mean annual autism expenditures, administrative costs, medical loss ratio, and total insurer revenue. Current treated prevalence and expenditures suggest that premium increases would approximate 1%, with a lower bound of 0.19% and an upper bound of 2.31%. Policy makers can use these results to assess the cost-effectiveness of similar legislation.
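The abstract lists the inputs of the premium calculation (treated prevalence, insured lives, mean expenditures, administrative load via the medical loss ratio, and insurer revenue) without stating the formula. A back-of-envelope combination of those inputs might look like the sketch below; all numeric values are placeholders, not figures from the paper.

```python
def premium_increase(prevalence, insured_lives, mean_annual_cost,
                     medical_loss_ratio, total_premium_revenue):
    """Back-of-envelope premium impact of a coverage mandate.

    New claims cost is grossed up by the medical loss ratio to cover
    administrative load, then expressed as a share of premium revenue.
    All argument values used below are placeholders, not figures from the paper.
    """
    new_claims = prevalence * insured_lives * mean_annual_cost
    return new_claims / medical_loss_ratio / total_premium_revenue

# illustrative placeholder inputs only
print(f"{premium_increase(0.003, 5_000_000, 15_000, 0.85, 25_000_000_000):.2%}")
```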
Retrospective Assessment of Cost Savings From Prevention
Grosse, Scott D.; Berry, Robert J.; Tilford, J. Mick; Kucik, James E.; Waitzman, Norman J.
2016-01-01
Introduction: Although fortification of food with folic acid has been calculated to be cost saving in the U.S., updated estimates are needed. This analysis calculates new estimates from the societal perspective of net cost savings per year associated with mandatory folic acid fortification of enriched cereal grain products in the U.S. that was implemented during 1997–1998. Methods: Estimates of annual numbers of live-born spina bifida cases in 1995–1996 relative to 1999–2011 based on birth defects surveillance data were combined during 2015 with published estimates of the present value of lifetime direct costs updated in 2014 U.S. dollars for a live-born infant with spina bifida to estimate avoided direct costs and net cost savings. Results: The fortification mandate is estimated to have reduced the annual number of U.S. live-born spina bifida cases by 767, with a lower-bound estimate of 614. The present value of mean direct lifetime cost per infant with spina bifida is estimated to be $791,900, or $577,000 excluding caregiving costs. Using a best estimate of numbers of avoided live-born spina bifida cases, fortification is estimated to reduce the present value of total direct costs for each year's birth cohort by $603 million more than the cost of fortification. A lower-bound estimate of cost savings using conservative assumptions, including the upper-bound estimate of fortification cost, is $299 million. Conclusions: The estimates of cost savings are larger than previously reported, even using conservative assumptions. The analysis can also inform assessments of folic acid fortification in other countries. PMID:26790341
Incorporating Alternative Care Site Characteristics Into Estimates of Substitutable ED Visits.
Trueger, Nathan Seth; Chua, Kao-Ping; Hussain, Aamir; Liferidge, Aisha T; Pitts, Stephen R; Pines, Jesse M
2017-07-01
Several recent efforts to improve health care value have focused on reducing emergency department (ED) visits that potentially could be treated in alternative care sites (ie, primary care offices, retail clinics, and urgent care centers). Estimates of the number of these visits may depend on assumptions regarding the operating hours and functional capabilities of alternative care sites. However, methods to account for the variability in these characteristics have not been developed. Our objective was to develop methods to incorporate the variability in alternative care site characteristics into estimates of ED visit "substitutability." Our approach uses the range of hours and capabilities among alternative care sites to estimate lower and upper bounds of ED visit substitutability. We constructed "basic" and "extended" criteria that captured the plausible degree of variation in each site's hours and capabilities. To illustrate our approach, we analyzed data from 22,697 ED visits by adults in the 2011 National Hospital Ambulatory Medical Care Survey, defining a visit as substitutable if it was treat-and-release and met both the operating hours and functional capabilities criteria. Use of the combined basic hours/basic capabilities criteria and the extended hours/extended capabilities criteria generated lower and upper bounds for these estimates. Our criteria classified 5.5%-27.1%, 7.6%-20.4%, and 10.6%-46.0% of visits as substitutable in primary care offices, retail clinics, and urgent care centers, respectively. Alternative care sites vary widely in operating hours and functional capabilities. Methods such as ours may help incorporate this variability into estimates of ED visit substitutability.
3D magnetic sources' framework estimation using Genetic Algorithm (GA)
NASA Astrophysics Data System (ADS)
Ponte-Neto, C. F.; Barbosa, V. C.
2008-05-01
We present a method for inverting total-field anomalies to determine the frameworks of simple 3D magnetic sources such as batholiths, dikes, sills, geological contacts, and kimberlite and lamproite pipes. We use a GA to obtain the magnetic sources' frameworks and their magnetic features simultaneously. Specifically, we estimate the magnetization direction (inclination and declination) and the total dipole moment intensity, and the horizontal and vertical positions, in Cartesian coordinates, of a finite set of elementary magnetic dipoles. The spatial distribution of these magnetic dipoles composes the skeletal outline of the geologic sources. We assume that the geologic sources have a homogeneous magnetization distribution, and thus all dipoles have the same magnetization direction and dipole moment intensity. To implement the GA, we use real-valued encoding with crossover, mutation, and elitism. To obtain a unique and stable solution, we set lower and upper bounds on declination and inclination of [0°, 360°] and [-90°, 90°], respectively. We also impose a criterion of minimum scattering of the dipole-position coordinates, to guarantee that the spatial distribution of the dipoles (defining the source skeleton) is as close as possible to a continuous distribution. To this end, we fix the upper and lower bounds of the dipole moment intensity and evaluate the dipole-position estimates. If the dipole scattering is greater than a value expected by the interpreter, the upper bound of the dipole moment intensity is reduced by 10%. We repeat this procedure until the dipole scattering and the data fit are acceptable. We apply our method to noise-corrupted magnetic data from simulated 3D magnetic sources with simple geometries located at different depths. In tests simulating sources such as a sphere and a cube, all estimates of the dipole coordinates agree with the centers of mass of these sources. For prismatic sources elongated in an arbitrary direction, the estimated dipole-position coordinates coincide with the principal axis of the source. In tests with synthetic data simulating the magnetic anomaly produced by intrusive 2D structures such as dikes and sills, the estimates of the dipole coordinates coincide with the principal plane of these 2D sources. We also inverted aeromagnetic data from Serra do Cabral, in southeastern Brazil, and estimated dipoles distributed on a horizontal plane at a depth of 30 km, with inclination and declination of 59.1° and -48.0°, respectively. The results showed close agreement with a previous interpretation.
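As a rough companion to the description above, here is a minimal real-coded GA sketch with the ingredients the abstract names (real-valued encoding, crossover, mutation, elitism, and box bounds on the parameters). The specific operators and the toy two-parameter misfit are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def real_coded_ga(misfit, lower, upper, pop_size=60, generations=200,
                  crossover_rate=0.9, mutation_rate=0.1, n_elite=2, seed=0):
    """Minimal real-coded GA with crossover, mutation, elitism, and box bounds.

    `misfit` maps a parameter vector (e.g. dipole positions, moment intensity,
    inclination, declination) to a scalar to be minimized; the bounds play the
    role of the [0, 360] deg / [-90, 90] deg limits described in the abstract.
    """
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pop = rng.uniform(lower, upper, size=(pop_size, lower.size))

    for _ in range(generations):
        fit = np.array([misfit(p) for p in pop])
        order = np.argsort(fit)
        elite = pop[order[:n_elite]]                      # elitism

        # binary tournament selection
        a, b = rng.integers(pop_size, size=(2, pop_size - n_elite))
        parents = np.where((fit[a] < fit[b])[:, None], pop[a], pop[b])

        # blend-type crossover between shuffled mates
        mates = parents[rng.permutation(len(parents))]
        w = rng.uniform(-0.25, 1.25, size=parents.shape)
        children = np.where(rng.random((len(parents), 1)) < crossover_rate,
                            parents + w * (mates - parents), parents)

        # Gaussian mutation, then clip back into the box bounds
        step = 0.05 * (upper - lower)
        mask = rng.random(children.shape) < mutation_rate
        children = children + mask * rng.normal(0.0, step, size=children.shape)
        pop = np.vstack([elite, np.clip(children, lower, upper)])

    fit = np.array([misfit(p) for p in pop])
    return pop[np.argmin(fit)]

# toy usage: recover a 2-parameter "source" (declination, inclination) from synthetic data
truth = np.array([120.0, -30.0])
best = real_coded_ga(lambda p: np.sum((p - truth) ** 2),
                     lower=[0.0, -90.0], upper=[360.0, 90.0])
print(best)
```

In the real problem the parameter vector would hold the dipole positions, the dipole moment intensity, and the magnetization direction, with the moment-intensity upper bound tightened iteratively as described above.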
Paul L. Patterson; Mark Finco
2009-01-01
This paper explores the information FIA data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977). Examples are...
NASA Technical Reports Server (NTRS)
Chlouber, Dean; O'Neill, Pat; Pollock, Jim
1990-01-01
A technique of predicting an upper bound on the rate at which single-event upsets due to ionizing radiation occur in semiconducting memory cells is described. The upper bound on the upset rate, which depends on the high-energy particle environment in earth orbit and accelerator cross-section data, is given by the product of an upper-bound linear energy-transfer spectrum and the mean cross section of the memory cell. Plots of the spectrum are given for low-inclination and polar orbits. An alternative expression for the exact upset rate is also presented. Both methods rely only on experimentally obtained cross-section data and are valid for sensitive bit regions having arbitrary shape.
Limitation of Ground-based Estimates of Solar Irradiance Due to Atmospheric Variations
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Cahalan, Robert F.; Holben, Brent N.
2003-01-01
The uncertainty in ground-based estimates of solar irradiance is quantitatively related to the temporal variability of the atmosphere's optical thickness. The upper and lower bounds of the accuracy of estimates using the Langley Plot technique are proportional to the standard deviation of aerosol optical thickness (approx. +/- 13 sigma(delta tau)). The estimates of spectral solar irradiance (SSI) in two Cimel sun photometer channels from the Mauna Loa site of AERONET are compared with satellite observations from SOLSTICE (Solar Stellar Irradiance Comparison Experiment) on UARS (Upper Atmospheric Research Satellite) for almost two years of data. The true solar variations related to the 27-day solar rotation cycle observed from SOLSTICE are about 0.15% at the two sun photometer channels. The variability in ground-based estimates is statistically one order of magnitude larger. Even though about 30% of these estimates from all Level 2.0 Cimel data fall within the 0.4 to approx. 0.5% variation level, ground-based estimates are not able to capture the 27-day solar variation observed from SOLSTICE.
A procedure for estimating upper bound lifetime human cancer risk from air levels of 6 common carcinogenic PAHs, termed "PAHs of concern", is proposed. These PAHs are benzo(a)pyrene, benz(a)anthracene, benzo(k)fluoranthene, indeno(1,2,3-c,d)pyrene, and chrysene. In application,...
Wang, Yang; Li, Mingxing; Tu, Z C; Hernández, A Calvo; Roco, J M M
2012-07-01
The figure of merit for refrigerators performing finite-time Carnot-like cycles between two reservoirs at temperatures T_h and T_c ...
Upper bounds on secret-key agreement over lossy thermal bosonic channels
NASA Astrophysics Data System (ADS)
Kaur, Eneet; Wilde, Mark M.
2017-12-01
Upper bounds on the secret-key-agreement capacity of a quantum channel serve as a way to assess the performance of practical quantum-key-distribution protocols conducted over that channel. In particular, if a protocol employs a quantum repeater, achieving secret-key rates exceeding these upper bounds is evidence of having a working quantum repeater. In this paper, we extend a recent advance [Liuzzo-Scorpo et al., Phys. Rev. Lett. 119, 120503 (2017), 10.1103/PhysRevLett.119.120503] in the theory of the teleportation simulation of single-mode phase-insensitive Gaussian channels such that it now applies to the relative entropy of entanglement measure. As a consequence of this extension, we find tighter upper bounds on the nonasymptotic secret-key-agreement capacity of the lossy thermal bosonic channel than were previously known. The lossy thermal bosonic channel serves as a more realistic model of communication than the pure-loss bosonic channel, because it can model the effects of eavesdropper tampering and imperfect detectors. An implication of our result is that the previously known upper bounds on the secret-key-agreement capacity of the thermal channel are too pessimistic for the practical finite-size regime in which the channel is used a finite number of times, and so it should now be somewhat easier to witness a working quantum repeater when using secret-key-agreement capacity upper bounds as a benchmark.
NASA Technical Reports Server (NTRS)
Li, Xiaoyuan; Jeanloz, Raymond
1987-01-01
Electrical conductivity measurements of Perovskite and a Perovskite-dominated assemblage synthesized from pyroxene and olivine demonstrate that these high-pressure phases are insulating to pressures of 82 GPa and temperatures of 4500 K. Assuming an anhydrous upper mantle composition, the result provides an upper bound of 0.01 S/m for the electrical conductivity of the lower mantle between depths of 700 and 1900 km. This is 2 to 4 orders of magnitude lower than previous estimates of lower-mantle conductivity derived from studies of geomagnetic secular variations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk; Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent; Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ₁, …, σ_r. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ₁, …, σ_r), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences.
Upper bounds on sequential decoding performance parameters
NASA Technical Reports Server (NTRS)
Jelinek, F.
1974-01-01
This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.
How entangled can a multi-party system possibly be?
NASA Astrophysics Data System (ADS)
Qi, Liqun; Zhang, Guofeng; Ni, Guyan
2018-06-01
The geometric measure of entanglement of a pure quantum state is defined to be its distance to the space of pure product (separable) states. Given an n-partite system composed of subsystems of dimensions d1 , … ,dn, an upper bound for maximally allowable entanglement is derived in terms of geometric measure of entanglement. This upper bound is characterized exclusively by the dimensions d1 , … ,dn of composite subsystems. Numerous examples demonstrate that the upper bound appears to be reasonably tight.
Thermalization Time Bounds for Pauli Stabilizer Hamiltonians
NASA Astrophysics Data System (ADS)
Temme, Kristan
2017-03-01
We prove a general lower bound to the spectral gap of the Davies generator for Hamiltonians that can be written as the sum of commuting Pauli operators. These Hamiltonians, defined on the Hilbert space of N-qubits, serve as one of the most frequently considered candidates for a self-correcting quantum memory. A spectral gap bound on the Davies generator establishes an upper limit on the lifetime of such a quantum memory and can be used to estimate the time until the system relaxes to thermal equilibrium when brought into contact with a thermal heat bath. The bound can be shown to behave as λ ≥ O(N⁻¹ exp(−2β ε̄)), where ε̄ is a generalization of the well-known energy barrier for logical operators. Particularly in the low temperature regime we expect this bound to provide the correct asymptotic scaling of the gap with the system size up to a factor of N⁻¹. Furthermore, we discuss conditions and provide scenarios where this factor can be removed and a constant lower bound can be proven.
Degteva, M O; Shagina, N B; Shishkina, E A; Vozilova, A V; Volchkova, A Y; Vorobiova, M I; Wieser, A; Fattibene, P; Della Monaca, S; Ainsbury, E; Moquet, J; Anspaugh, L R; Napier, B A
2015-11-01
Waterborne radioactive releases into the Techa River from the Mayak Production Association in Russia during 1949-1956 resulted in significant doses to about 30,000 persons who lived in downstream settlements. The residents were exposed to internal and external radiation. Two methods for reconstruction of the external dose are considered in this paper: electron paramagnetic resonance (EPR) measurements of teeth and fluorescence in situ hybridization (FISH) measurements of chromosome translocations in circulating lymphocytes. The main issue in the application of the EPR and FISH methods for reconstruction of the external dose for the Techa Riverside residents was strontium radioisotopes incorporated in teeth and bones that act as a source of confounding local exposures. In order to estimate and subtract doses from incorporated (89,90)Sr, the EPR and FISH assays were supported by measurements of (90)Sr-body burdens and estimates of (90)Sr concentrations in dental tissues by the luminescence method. The resulting dose estimates derived from EPR and FISH measurements for residents of the upper Techa River were found to be consistent: the mean values range from 510-550 mGy for the villages located close to the site of radioactive release down to 130-160 mGy for the more distant villages. The upper bound of individual estimates for both methods is equal to 2.2-2.3 Gy. The EPR- and FISH-based dose estimates were compared with the doses calculated for the donors using the most recent Techa River Dosimetry System (TRDS). The TRDS external dose assessments are based on the data on contamination of the Techa River floodplain, simulation of air kerma above the contaminated soil, age-dependent lifestyles and individual residence histories. For correct comparison, TRDS-based doses were calculated from two sources: external exposure from the contaminated environment and internal exposure from (137)Cs incorporated in donors' soft tissues. It is shown here that the TRDS-based absorbed doses in tooth enamel and muscle are in agreement with EPR- and FISH-based estimates within uncertainty bounds. Basically, this agreement between the estimates has confirmed the validity of external doses calculated with the TRDS.
Record length requirement of long-range dependent teletraffic
NASA Astrophysics Data System (ADS)
Li, Ming
2017-04-01
The contributions of this article are mainly twofold. On the one hand, it presents a formula to compute the upper bound of the variance of the correlation periodogram measurement of teletraffic (traffic for short) with long-range dependence (LRD) for a given record length T and a given value of the Hurst parameter H (Theorems 1 and 2). On the other hand, it proposes two formulas for the computation of the variance upper bound of the correlation periodogram measurement of traffic of fractional Gaussian noise (fGn) type and the generalized Cauchy (GC) type, respectively (Corollaries 1 and 2). They may constitute a reference guideline for the record length requirement of traffic with LRD. In addition, the record length requirement for the correlation periodogram measurement of traffic with either the Schuster-type or the Bartlett-type periodogram is studied, and the results show that both types of periodograms may be used for the correlation measurement of traffic with a pre-desired variance bound of correlation estimation. Moreover, real traffic in the Internet Archive by the Special Interest Group on Data Communication under the Association for Computing Machinery of US (ACM SIGCOMM) is analyzed in a case study.
Faydasicok, Ozlem; Arik, Sabri
2013-08-01
The main problem with the analysis of robust stability of neural networks is to find the upper bound norm for the intervalized interconnection matrices of neural networks. In the previous literature, the major three upper bound norms for the intervalized interconnection matrices have been reported and they have been successfully applied to derive new sufficient conditions for robust stability of delayed neural networks. One of the main contributions of this paper will be the derivation of a new upper bound for the norm of the intervalized interconnection matrices of neural networks. Then, by exploiting this new upper bound norm of interval matrices and using stability theory of Lyapunov functionals and the theory of homomorphic mapping, we will obtain new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. The results obtained in this paper will be shown to be new and they can be considered alternative results to previously published corresponding results. We also give some illustrative and comparative numerical examples to demonstrate the effectiveness and applicability of the proposed robust stability condition. Copyright © 2013 Elsevier Ltd. All rights reserved.
Grosse, Scott D; Berry, Robert J; Mick Tilford, J; Kucik, James E; Waitzman, Norman J
2016-05-01
Although fortification of food with folic acid has been calculated to be cost saving in the U.S., updated estimates are needed. This analysis calculates new estimates from the societal perspective of net cost savings per year associated with mandatory folic acid fortification of enriched cereal grain products in the U.S. that was implemented during 1997-1998. Estimates of annual numbers of live-born spina bifida cases in 1995-1996 relative to 1999-2011 based on birth defects surveillance data were combined during 2015 with published estimates of the present value of lifetime direct costs updated in 2014 U.S. dollars for a live-born infant with spina bifida to estimate avoided direct costs and net cost savings. The fortification mandate is estimated to have reduced the annual number of U.S. live-born spina bifida cases by 767, with a lower-bound estimate of 614. The present value of mean direct lifetime cost per infant with spina bifida is estimated to be $791,900, or $577,000 excluding caregiving costs. Using a best estimate of numbers of avoided live-born spina bifida cases, fortification is estimated to reduce the present value of total direct costs for each year's birth cohort by $603 million more than the cost of fortification. A lower-bound estimate of cost savings using conservative assumptions, including the upper-bound estimate of fortification cost, is $299 million. The estimates of cost savings are larger than previously reported, even using conservative assumptions. The analysis can also inform assessments of folic acid fortification in other countries. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Benguria, Rafael D.; Depassier, M. Cristina; Loss, Michael
2012-12-01
We study the effect of a cutoff on the speed of pulled fronts of the one-dimensional reaction diffusion equation. To accomplish this, we first use variational techniques to prove the existence of a heteroclinic orbit in phase space for traveling wave solutions of the corresponding reaction diffusion equation under conditions that include discontinuous reaction profiles. This existence result allows us to prove rigorous upper and lower bounds on the minimal speed of monotonic fronts in terms of the cut-off parameter ɛ. From these bounds we estimate the range of validity of the Brunet-Derrida formula for a general class of reaction terms.
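For reference, the Brunet-Derrida correction referred to above, written for the classical FKPP case with a cut-off ε on the reaction term (a standard statement of the formula, not quoted from this paper):

```latex
% Leading-order speed of a pulled FKPP front (f'(0) = 1, uncut speed 2) with cut-off \varepsilon
\[
  v(\varepsilon) \simeq 2 - \frac{\pi^{2}}{(\ln \varepsilon)^{2}},
  \qquad \varepsilon \to 0^{+}.
\]
```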
The role of viscosity in TATB hot spot ignition
NASA Astrophysics Data System (ADS)
Fried, Laurence E.; Zepeda-Ruis, Luis; Howard, W. Michael; Najjar, Fady; Reaugh, John E.
2012-03-01
The role of dissipative effects, such as viscosity, in the ignition of high explosive pores is investigated using a coupled chemical, thermal, and hydrodynamic model. Chemical reactions are tracked with the Cheetah thermochemical code coupled to the ALE3D hydrodynamic code. We perform molecular dynamics simulations to determine the viscosity of liquid TATB. We also analyze shock wave experiments to obtain an estimate for the shock viscosity of TATB. Using the lower bound liquid-like viscosities, we find that the pore collapse is hydrodynamic in nature. Using the upper bound viscosity from shock wave experiments, we find that the pore collapse is closest to the viscous limit.
Using a Water Balance Model to Bound Potential Irrigation Development in the Upper Blue Nile Basin
NASA Astrophysics Data System (ADS)
Jain Figueroa, A.; McLaughlin, D.
2016-12-01
The Grand Ethiopian Renaissance Dam (GERD), on the Blue Nile is an example of water resource management underpinning food, water and energy security. Downstream countries have long expressed concern about water projects in Ethiopia because of possible diversions to agricultural uses that could reduce flow in the Nile. Such diversions are attractive to Ethiopia as a partial solution to its food security problems but they could also conflict with hydropower revenue from GERD. This research estimates an upper bound on diversions above the GERD project by considering the potential for irrigated agriculture expansion and, in particular, the availability of water and land resources for crop production. Although many studies have aimed to simulate downstream flows for various Nile basin management plans, few have taken the perspective of bounding the likely impacts of upstream agricultural development. The approach is to construct an optimization model to establish a bound on Upper Blue Nile (UBN) agricultural development, paying particular attention to soil suitability and seasonal variability in climate. The results show that land and climate constraints impose significant limitations on crop production. Only 25% of the land area is suitable for irrigation due to the soil, slope and temperature constraints. When precipitation is also considered only 11% of current land area could be used in a way that increases water consumption. The results suggest that Ethiopia could consume an additional 3.75 billion cubic meters (bcm) of water per year, through changes in land use and storage capacity. By exploiting this irrigation potential, Ethiopia could potentially decrease the annual flow downstream of the UBN by 8 percent from the current 46 bcm/y to the modeled 42 bcm/y.
On the likelihood of single-peaked preferences.
Lackner, Marie-Louise; Lackner, Martin
2017-01-01
This paper contains an extensive combinatorial analysis of the single-peaked domain restriction and investigates the likelihood that an election is single-peaked. We provide a very general upper bound result for domain restrictions that can be defined by certain forbidden configurations. This upper bound implies that many domain restrictions (including the single-peaked restriction) are very unlikely to appear in a random election chosen according to the Impartial Culture assumption. For single-peaked elections, this upper bound can be refined and complemented by a lower bound that is asymptotically tight. In addition, we provide exact results for elections with few voters or candidates. Moreover, we consider the Pólya urn model and the Mallows model and obtain lower bounds showing that single-peakedness is considerably more likely to appear for certain parameterizations.
NASA Technical Reports Server (NTRS)
Elsaesser, Greg; Del Genio, Anthony
2015-01-01
The CMIP5 configurations of the GISS Model-E2 GCM simulated a mid- and high-latitude ice IWP that decreased by ~50% relative to that simulated for CMIP3 (Jiang et al. 2012; JGR). Tropical IWP increased by ~15% in CMIP5. While the tropical IWP was still within the published upper-bounds of IWP uncertainty derived using NASA A-Train satellite observations, it was found that the upper troposphere (~200 mb) ice water content (IWC) exceeded the published upper-bound by a factor of ~2. This was largely driven by IWC in deep-convecting regions of the tropics. Recent advances in the model-E2 convective parameterization have been found to have a substantial impact on tropical IWC. These advances include the development of both a cold pool parameterization (Del Genio et al. 2015) and new convective ice parameterization. In this presentation, we focus on the new parameterization of convective cloud ice that was developed using data from the NASA TC4 Mission. Ice particle terminal velocity formulations now include information from a number of NASA field campaigns. The new parameterization predicts both an ice water mass weighted-average particle diameter and a particle cross sectional area weighted-average size diameter as a function of temperature and ice water content. By assuming a gamma-distribution functional form for the particle size distribution, these two diameter estimates are all that are needed to explicitly predict the distribution of ice particles as a function of particle diameter. GCM simulations with the improved convective parameterization yield a ~50% decrease in upper tropospheric IWC, bringing the tropical and global mean IWP climatologies into even closer agreement with the A-Train satellite observation best estimates.
NASA Astrophysics Data System (ADS)
Elsaesser, G.; Del Genio, A. D.
2015-12-01
The CMIP5 configurations of the GISS Model-E2 GCM simulated a mid- and high-latitude ice IWP that decreased by ~50% relative to that simulated for CMIP3 (Jiang et al. 2012; JGR). Tropical IWP increased by ~15% in CMIP5. While the tropical IWP was still within the published upper-bounds of IWP uncertainty derived using NASA A-Train satellite observations, it was found that the upper troposphere (~200 mb) ice water content (IWC) exceeded the published upper-bound by a factor of ~2. This was largely driven by IWC in deep-convecting regions of the tropics. Recent advances in the model-E2 convective parameterization have been found to have a substantial impact on tropical IWC. These advances include the development of both a cold pool parameterization (Del Genio et al. 2015) and new convective ice parameterization. In this presentation, we focus on the new parameterization of convective cloud ice that was developed using data from the NASA TC4 Mission. Ice particle terminal velocity formulations now include information from a number of NASA field campaigns. The new parameterization predicts both an ice water mass weighted-average particle diameter and a particle cross sectional area weighted-average size diameter as a function of temperature and ice water content. By assuming a gamma-distribution functional form for the particle size distribution, these two diameter estimates are all that are needed to explicitly predict the distribution of ice particles as a function of particle diameter. GCM simulations with the improved convective parameterization yield a ~50% decrease in upper tropospheric IWC, bringing the tropical and global mean IWP climatologies into even closer agreement with the A-Train satellite observation best estimates.
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for a sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows (i) to determine if a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds to the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed form expressions are derived, along with conditional sampling. In addition, an l∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
Money Gone Up in Smoke: The Tobacco Use and Malnutrition Nexus in Bangladesh
Husain, Muhammad Jami; Virk-Baker, Mandeep; Parascandola, Mark; Khondker, Bazlul Haque; Ahluwalia, Indu B.
2017-01-01
BACKGROUND: The tobacco epidemic in Bangladesh is pervasive. Expenditures on tobacco may reduce money available for food in a country with a high malnutrition rate. OBJECTIVES: The aims of the study are to quantify the opportunity costs of tobacco expenditure in terms of nutrition (ie, food energy) forgone and the potential improvements in the household level food-energy status if the money spent on tobacco were diverted for food consumption. METHOD: We analyzed data from the 2010 Bangladesh Household Income and Expenditure Survey, a nationally representative survey conducted among 12,240 households. We present 2 analytical scenarios: (1) the lower-bound gain scenario entailing money spent on tobacco partially diverted to acquiring food according to households’ food consumption share in total expenditures; and (2) the upper-bound gain scenario entailing money spent on tobacco diverted to acquiring food only. Age- and gender-based energy norms were used to identify food-energy deficient households. Data were analyzed by mutually exclusive smoking-only, smokeless-only, and dual-tobacco user households. FINDINGS: On average, a smoking-only household could gain 269–497 kilocalories (kcal) daily under the lower-bound and upper-bound scenarios, respectively. The potential energy gains for smokeless-only and dual-tobacco user households ranged from 148–268 kcal and 508–924 kcal, respectively. Under these lower- and upper-bound estimates, the percentage of smoking-only user households that are malnourished declined significantly from the baseline rate of 38% to 33% and 29%, respectively. For the smokeless-only and dual-tobacco user households, there were 2–3 and 6–9 percentage point drops in the malnutrition prevalence rates. The tobacco expenditure shift could translate to an additional 4.6–7.7 million food-energy malnourished persons meeting their caloric requirements. CONCLUSIONS: The findings suggest that tobacco use reduction could facilitate concomitant improvements in population-level nutrition status and may inform the development and refinement of tobacco prevention and control efforts in Bangladesh. PMID:28283125
Money Gone Up in Smoke: The Tobacco Use and Malnutrition Nexus in Bangladesh.
Husain, Muhammad Jami; Virk-Baker, Mandeep; Parascandola, Mark; Khondker, Bazlul Haque; Ahluwalia, Indu B
The tobacco epidemic in Bangladesh is pervasive. Expenditures on tobacco may reduce money available for food in a country with a high malnutrition rate. The aims of the study are to quantify the opportunity costs of tobacco expenditure in terms of nutrition (ie, food energy) forgone and the potential improvements in the household level food-energy status if the money spent on tobacco were diverted for food consumption. We analyzed data from the 2010 Bangladesh Household Income and Expenditure Survey, a nationally representative survey conducted among 12,240 households. We present 2 analytical scenarios: (1) the lower-bound gain scenario entailing money spent on tobacco partially diverted to acquiring food according to households' food consumption share in total expenditures; and (2) the upper-bound gain scenario entailing money spent on tobacco diverted to acquiring food only. Age- and gender-based energy norms were used to identify food-energy deficient households. Data were analyzed by mutually exclusive smoking-only, smokeless-only, and dual-tobacco user households. On average, a smoking-only household could gain 269-497 kilocalories (kcal) daily under the lower-bound and upper-bound scenarios, respectively. The potential energy gains for smokeless-only and dual-tobacco user households ranged from 148-268 kcal and 508-924 kcal, respectively. Under these lower- and upper-bound estimates, the percentage of smoking-only user households that are malnourished declined significantly from the baseline rate of 38% to 33% and 29%, respectively. For the smokeless-only and dual-tobacco user households, there were 2-3 and 6-9 percentage point drops in the malnutrition prevalence rates. The tobacco expenditure shift could translate to an additional 4.6-7.7 million food-energy malnourished persons meeting their caloric requirements. The findings suggest that tobacco use reduction could facilitate concomitant improvements in population-level nutrition status and may inform the development and refinement of tobacco prevention and control efforts in Bangladesh. Copyright © 2016. Published by Elsevier Inc.
Limits on the fluctuating part of y-type distortion monopole from Planck and SPT results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khatri, Rishi; Sunyaev, Rashid, E-mail: khatri@mpa-garching.mpg.de, E-mail: sunyaev@mpa-garching.mpg.de
2015-08-01
We use the published Planck and SPT cluster catalogs [1,2] and recently published y-distortion maps [3] to put strong observational limits on the contribution of the fluctuating part of the y-type distortions to the y-distortion monopole. Our bounds are 5.4×10⁻⁸ < ⟨y⟩ < 2.2×10⁻⁶. Our upper bound is a factor of 6.8 stronger than the currently best upper 95% confidence limit from COBE-FIRAS of ⟨y⟩ < 15×10⁻⁶. In the standard cosmology, large scale structure is the only source of such distortions and our limits therefore constrain the baryonic physics involved in the formation of the large scale structure. Our lower limit, from the detected clusters in the Planck and SPT catalogs, also implies that a Pixie-like experiment should detect the y-distortion monopole at >27-σ. The biggest sources of uncertainty in our upper limit are the monopole offsets between different HFI channel maps that we estimate to be <10⁻⁶.
Modeling of magnitude distributions by the generalized truncated exponential distribution
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-01-01
The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
Ferromagnetic Potts models with multisite interaction
NASA Astrophysics Data System (ADS)
Schreiber, Nir; Cohen, Reuven; Haber, Simi
2018-03-01
We study the q-state Potts model with four-site interaction on a square lattice. Based on the asymptotic behavior of lattice animals, it is argued that when q ≤ 4 the system exhibits a second-order phase transition and when q > 4 the transition is first order. The q = 4 model is borderline. We find 1/ln q to be an upper bound on Tc, the exact critical temperature. Using a low-temperature expansion, we show that 1/(θ ln q), where θ > 1 is a q-dependent geometrical term, is an improved upper bound on Tc. In fact, our findings support Tc = 1/(θ ln q). This expression is used to estimate the finite correlation length in first-order transition systems. These results can be extended to other lattices. Our theoretical predictions are confirmed numerically by an extensive study of the four-site interaction model using the Wang-Landau entropic sampling method for q = 3, 4, 5. In particular, the q = 4 model shows an ambiguous finite-size pseudocritical behavior.
Beamforming Based Full-Duplex for Millimeter-Wave Communication
Liu, Xiao; Xiao, Zhenyu; Bai, Lin; Choi, Jinho; Xia, Pengfei; Xia, Xiang-Gen
2016-01-01
In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors. PMID:27455256
On the role of entailment patterns and scalar implicatures in the processing of numerals
Panizza, Daniele; Chierchia, Gennaro; Clifton, Charles
2009-01-01
There has been much debate, in both the linguistics and the psycholinguistics literature, concerning numbers and the interpretation of number denoting determiners ('numerals'). Such debate concerns, in particular, the nature and distribution of upper-bounded ('exact') interpretations vs. lower-bounded ('at-least') construals. In the present paper we show that the interpretation and processing of numerals are affected by the entailment properties of the context in which they occur. Experiment 1 established off-line preferences using a questionnaire. Experiment 2 investigated the processing issue through an eye tracking experiment using a silent reading task. Our results show that the upper-bounded interpretation of numerals occurs more often in an upward entailing context than in a downward entailing context. Reading times of the numeral itself were longer when it was embedded in an upward entailing context than when it was not, indicating that processing resources were required when the context triggered an upper-bounded interpretation. However, reading of a following context that required an upper-bounded interpretation triggered more regressions towards the numeral when it had occurred in a downward entailing context than in an upward entailing one. Such findings show that speakers' interpretation and processing of numerals is systematically affected by the polarity of the sentence in which they occur, and support the hypothesis that the upper-bounded interpretation of numerals is due to a scalar implicature. PMID:20161494
Sun, Zhijian; Zhang, Guoqing; Lu, Yu; Zhang, Weidong
2018-01-01
This paper studies the leader-follower formation control of underactuated surface vehicles with model uncertainties and environmental disturbances. A parameter estimation and upper bound estimation based sliding mode control scheme is proposed to solve the problem of the unknown plant parameters and environmental disturbances. For each of these leader-follower formation systems, the dynamic equations of position and attitude are analyzed using coordinate transformation with the aid of the backstepping technique. All the variables are guaranteed to be uniformly ultimately bounded in the closed-loop system, which is proven by Lyapunov function synthesis. The main advantages of this approach are that: first, parameter-estimation-based sliding mode control can enhance the robustness of the closed-loop system in the presence of model uncertainties and environmental disturbances; second, a continuous function is developed to replace the signum function in the design of the sliding mode scheme, which serves to reduce the chattering of the control system. Finally, numerical simulations are given to demonstrate the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
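The chattering-reduction device mentioned in the abstract (a continuous function in place of the signum) is easy to illustrate on a scalar toy plant. The sketch below combines a sliding variable, a tanh switching term, and a simple adaptation of the disturbance upper bound; the plant, gains, and adaptation law are assumptions for illustration, not the paper's vessel-formation controller.

```python
import numpy as np

def smc_step(e, e_dot, d_hat, dt, lam=2.0, k0=0.5, gamma=1.0, phi=0.05):
    """One step of a sliding-mode law with an adapted disturbance upper bound.

    s = e_dot + lam*e is the sliding variable; d_hat is the running estimate of
    the disturbance upper bound, adapted with rate gamma; tanh(s/phi) replaces
    sign(s) to reduce chattering.
    """
    s = e_dot + lam * e
    u = -lam * e_dot - (k0 + d_hat) * np.tanh(s / phi)   # continuous switching term
    d_hat = d_hat + gamma * abs(s) * dt                  # upper-bound adaptation
    return u, d_hat

# simulate a double integrator e_ddot = u + d with a bounded unknown disturbance
e, e_dot, d_hat, dt = 1.0, 0.0, 0.0, 1e-3
for k in range(20000):
    u, d_hat = smc_step(e, e_dot, d_hat, dt)
    d = 0.8 * np.sin(0.005 * k)                          # unknown disturbance
    e_dot += (u + d) * dt
    e += e_dot * dt
print(round(e, 4), round(d_hat, 3))                      # e settles close to zero
```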
Wang, Xueyi; Davidson, Nicholas J.
2011-01-01
Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracies of ensemble methods for binary classification that have been missed or misinterpreted in the previous literature. First we show the upper and lower bounds of the prediction accuracies (i.e. the best and worst possible prediction accuracies) of ensemble methods. Next we show that an ensemble method can achieve > 0.5 prediction accuracy, while individual classifiers have < 0.5 prediction accuracies. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify the results and show that it is hard to achieve the upper and lower bounds accuracies by random individual classifiers and better algorithms need to be developed. PMID:21853162
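A concrete toy instance of the phenomenon described above (an ensemble above 0.5 accuracy built from individual classifiers below 0.5), constructed by hand rather than taken from the paper:

```python
import numpy as np

# Three classifiers, ten items: each row marks the items a classifier gets right.
# Every individual accuracy is 0.4 (< 0.5), yet majority voting is right on 6/10.
correct = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0, 0, 0],   # classifier A
    [1, 1, 0, 0, 1, 1, 0, 0, 0, 0],   # classifier B
    [0, 0, 1, 1, 1, 1, 0, 0, 0, 0],   # classifier C
], dtype=bool)

individual_acc = correct.mean(axis=1)              # [0.4, 0.4, 0.4]
majority_acc = (correct.sum(axis=0) >= 2).mean()   # 0.6
print(individual_acc, majority_acc)
```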
Upper bound of abutment scour in laboratory and field data
Benedict, Stephen
2016-01-01
The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted a field investigation of abutment scour in South Carolina and used those data to develop envelope curves that define the upper bound of abutment scour. To expand on this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with abutment scour data from other sources and evaluate upper bound patterns with this larger data set. To facilitate this analysis, 446 laboratory and 331 field measurements of abutment scour were compiled into a digital database. This extensive database was used to evaluate the South Carolina abutment scour envelope curves and to develop additional envelope curves that reflected the upper bound of abutment scour depth for the laboratory and field data. The envelope curves provide simple but useful supplementary tools for assessing the potential maximum abutment scour depth in the field setting.
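The envelope-curve idea is simple to sketch: bin the measurements on an explanatory variable and take the maximum observed scour in each bin as the empirical upper bound. The binning scheme and the synthetic data below are illustrative assumptions, not the USGS curves themselves.

```python
import numpy as np

def envelope_curve(x, scour_depth, n_bins=10):
    """Empirical upper bound (envelope) of observed scour depths, binned on x.

    x could be an abutment-length or flow variable; returns bin centers and the
    maximum observed scour in each bin.
    """
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    upper = np.array([scour_depth[idx == i].max() if np.any(idx == i) else np.nan
                      for i in range(n_bins)])
    return centers, upper

# synthetic demo data standing in for the 777 laboratory and field measurements
rng = np.random.default_rng(1)
x = rng.uniform(0.1, 10.0, 500)
depth = 0.5 * np.sqrt(x) * rng.uniform(0.2, 1.0, 500)
print(envelope_curve(x, depth))
```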
C-14 content of ten meteorites measured by tandem accelerator mass spectrometry
NASA Technical Reports Server (NTRS)
Brown, R. M.; Andrews, H. R.; Ball, G. C.; Burn, N.; Imahori, Y.; Milton, J. C. D.; Fireman, E. L.
1984-01-01
Measurements of C-14 in three North American and seven Antarctic meteorites show in most cases that this cosmogenic isotope, which is tightly bound, was separated from absorbed atmospheric radiocarbon by stepwise heating extractions. The present upper limit to age determination by the accelerator method varies from 50,000 to 70,000 years, depending on the mass and carbon content of the sample. The natural limit caused by cosmic ray production of C-14 in silicate rocks at 2000 m elevation is estimated to be 55,000 ± 5,000 years. An estimation is also made of the 'weathering ages' of the Antarctic meteorites from the specific activity of loosely bound CO2 which is thought to be absorbed from the terrestrial atmosphere. Accelerator measurements are found to agree with previous low level counting measurements, but are more sensitive and precise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stephens, T. S.; Gonder, Jeff; Chen, Yuche
This report details a study of the potential effects of connected and automated vehicle (CAV) technologies on vehicle miles traveled (VMT), vehicle fuel efficiency, and consumer costs. Related analyses focused on a range of light-duty CAV technologies in conventional powertrain vehicles -- from partial automation to full automation, with and without ridesharing -- compared to today's base-case scenario. Analysis results revealed widely disparate upper- and lower-bound estimates for fuel use and VMT, ranging from a tripling of fuel use to decreasing light-duty fuel use to below 40% of today's level. This wide range reflects uncertainties in the ways that CAV technologies can influence vehicle efficiency and use through changes in vehicle designs, driving habits, and travel behavior. The report further identifies the most significant potential impacting factors, the largest areas of uncertainty, and where further research is particularly needed.
ERIC Educational Resources Information Center
Jackson, C. Kirabo
2011-01-01
Existing studies on single-sex schooling suffer from biases due to student selection to schools and single-sex schools being better in unmeasured ways. In Trinidad and Tobago students are assigned to secondary schools based on an algorithm allowing one to address self-selection bias and cleanly estimate an upper-bound single-sex school effect. The…
Garcia, C. Amanda; Huntington, Jena M; Buto, Susan G.; Moreo, Michael T.; Smith, J. LaRue; Andraski, Brian J.
2014-01-01
Mean annual basin-scale ETg totaled about 28 million cubic meters (Mm3) (23,000 acre-feet [acre-ft]), and represents the sum of ETg from all ET units. Annual groundwater ET from vegetated areas totaled about 26 Mm3 (21,000 acre-ft), and was dominated by the moderate-to-dense shrubland ET unit (54 percent), followed by sparse shrubland (37 percent) and grassland (9 percent) ET units. Senesced grasses observed in the northern most areas of the moderate-to-dense ET unit likely confounded the vegetation index and led to an overestimate of ETg for this ET unit. Therefore, mean annual ETg for moderate-to-dense shrubland presented here is likely an upper bound. Annual groundwater ET from the playa ET unit was 2.2 Mm3 (1,800 acre-ft), whereas groundwater ET from the playa lake ET unit was 0–0.1 Mm3 (0–100 acre-ft). Oxygen-18 and deuterium data indicate discharge from the playa center predominantly represents removal of local precipitation-derived recharge. The playa lake estimate, therefore, is considered an upper bound. Mean annual ETg estimates for Dixie Valley are assumed to represent the pre‑development, long-term ETg rates within the study area.
NASA Astrophysics Data System (ADS)
Wang, C.; Gordon, R. G.; Zheng, L.
2016-12-01
Hotspot tracks are widely used to estimate the absolute velocities of plates, i.e., relative to the lower mantle. Knowledge of current motion between hotspots is important for both plate kinematics and mantle dynamics and informs the discussion on the origin of the Hawaiian-Emperor Bend. Following Morgan & Morgan (2007), we focus only on the trends of young hotspot tracks and omit volcanic propagation rates. The dispersion of the trends can be partitioned into between-plate and within-plate dispersion. Applying the method of Gripp & Gordon (2002) to the hotspot trend data set of Morgan & Morgan (2007) constrained to the MORVEL relative plate angular velocities (DeMets et al., 2010) results in a standard deviation of the 56 hotspot trends of 22°. The largest angular misfits tend to occur on the slowest moving plates. Alternatively, estimation of best-fitting poles to hotspot tracks on the nine individual plates, results in a standard deviation of trends of only 13°, a statistically significant reduction from the introduction of 15 additional adjustable parameters. If all of the between-plate misfit is due to motion of groups of hotspots (beneath different plates), nominal velocities relative to the mean hotspot reference frame range from 1 to 4 mm/yr with the lower bounds ranging from 1 to 3 mm/yr and the greatest upper bound being 8 mm/yr. These are consistent with bounds on motion between Pacific and Indo-Atlantic hotspots over the past ≈50 Ma, which range from zero (lower bound) to 8 to 13 mm/yr (upper bounds) (Koivisto et al., 2014). We also determine HS4-MORVEL, a new global set of plate angular velocities relative to the hotspots constrained to consistency with the MORVEL relative plate angular velocities, using a two-tier analysis similar to that used by Zheng et al. (2014) to estimate the SKS-MORVEL global set of absolute plate velocities fit to the orientation of seismic anisotropy. We find that the 95% confidence limits of HS4-MORVEL and SKS-MORVEL overlap substantially and that the two sets of angular velocities differ insignificantly. Thus we combine the two sets of angular velocities to estimate ABS-MORVEL, an optimal set of global angular velocities consistent with both hotspot tracks and seismic anisotropy. ABS-MORVEL has more compact confidence limits than either SKS-MORVEL or HS4-MORVEL.
Physical Uncertainty Bounds (PUB)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaughan, Diane Elizabeth; Preston, Dean L.
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
The Role of Viscosity in TATB Hot Spot Ignition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, L E; Zepeda-Ruis, L; Howard, W M
2011-08-02
The role of dissipative effects, such as viscosity, in the ignition of high explosive pores is investigated using a coupled chemical, thermal, and hydrodynamic model. Chemical reactions are tracked with the Cheetah thermochemical code coupled to the ALE3D hydrodynamic code. We perform molecular dynamics simulations to determine the viscosity of liquid TATB. We also analyze shock wave experiments to obtain an estimate for the shock viscosity of TATB. Using the lower bound liquid-like viscosities, we find that the pore collapse is hydrodynamic in nature. Using the upper bound viscosity from shock wave experiments, we find that the pore collapse is closest to the viscous limit.
Statistical speed of quantum states: Generalized quantum Fisher information and Schatten speed
NASA Astrophysics Data System (ADS)
Gessner, Manuel; Smerzi, Augusto
2018-02-01
We analyze families of measures for the quantum statistical speed which include as special cases the quantum Fisher information, the trace speed, i.e., the quantum statistical speed obtained from the trace distance, and more general quantifiers obtained from the family of Schatten norms. These measures quantify the statistical speed under generic quantum evolutions and are obtained by maximizing classical measures over all possible quantum measurements. We discuss general properties, optimal measurements, and upper bounds on the speed of separable states. We further provide a physical interpretation for the trace speed by linking it to an analog of the quantum Cramér-Rao bound for median-unbiased quantum phase estimation.
Taulbee, Timothy D; Glover, Samuel E; Macievic, Gregory V; Hunacek, Mickey; Smith, Cheryl; DeBord, Gary W; Morris, Donald; Fix, Jack
2010-07-01
Neutron and photon radiation survey records have been used to evaluate and develop a neutron to photon (NP) ratio to reconstruct neutron doses to workers around Hanford's single pass reactors that operated from 1945 to 1972. A total of 5,773 paired neutron and photon measurements extracted from 57 boxes of survey records were used in the development of the NP ratio. The NP ratio enables the recorded dose from an individual's photon dosimeter badge to be used to estimate the unmonitored neutron dose. The Pearson rank correlation between the neutron and photon measurements was 0.71. The NP ratio best fit a lognormal distribution with a geometric mean (GM) of 0.8, a geometric standard deviation (GSD) of 2.95, and the upper 95th percentile of this distribution was 4.75. An estimate of the neutron dose based on this NP ratio is considered bounding due to evidence that up to 70% of the total photon exposure received by workers around the single pass reactors occurs during shutdown maintenance and refueling activities when there is no significant neutron exposure. Thus, when this NP ratio is applied to the total measured photon dose from an individual film badge dosimeter, the resulting neutron dose is considered bounded.
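A quick check of the lognormal arithmetic: with GM 0.8 and GSD 2.95, the 95th percentile works out to about 4.74, matching the quoted 4.75, and applying that percentile to a recorded photon dose gives the bounding neutron dose. The badge reading below is hypothetical.

```python
import math

gm, gsd = 0.8, 2.95                      # geometric mean and geometric SD of the NP ratio
z95 = 1.645                              # standard-normal 95th percentile
np_ratio_95 = math.exp(math.log(gm) + z95 * math.log(gsd))
print(round(np_ratio_95, 2))             # ~4.74, consistent with the quoted upper 95th percentile of 4.75

# Bounding neutron-dose reconstruction from a recorded photon dose
# (hypothetical badge reading; the bounding claim rests on the shutdown-exposure argument above).
photon_dose_mSv = 10.0
neutron_dose_bounding = photon_dose_mSv * np_ratio_95
print(round(neutron_dose_bounding, 1))   # ~47.4 mSv
```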
Perturbative unitarity constraints on gauge portals
NASA Astrophysics Data System (ADS)
El Hedri, Sonia; Shepherd, William; Walker, Devin G. E.
2017-12-01
Dark matter that was once in thermal equilibrium with the Standard Model is generally prohibited from obtaining all of its mass from the electroweak phase transition. This implies a new scale of physics and mediator particles to facilitate dark matter annihilation. In this work, we focus on dark matter that annihilates through a generic gauge boson portal. We show how partial wave unitarity places upper bounds on the dark gauge boson, dark Higgs and dark matter masses. Outside of well-defined fine-tuned regions, we find an upper bound of 9 TeV for the dark matter mass when the dark Higgs and dark gauge bosons both facilitate the dark matter annihilations. In this scenario, the upper bound on the dark Higgs and dark gauge boson masses are 10 TeV and 16 TeV, respectively. When only the dark gauge boson facilitates dark matter annihilations, we find an upper bound of 3 TeV and 6 TeV for the dark matter and dark gauge boson, respectively. Overall, using the gauge portal as a template, we describe a method to not only place upper bounds on the dark matter mass but also on the new particles with Standard Model quantum numbers. We briefly discuss the reach of future accelerator, direct and indirect detection experiments for this class of models.
Grosz, R; Stephanopoulos, G
1983-09-01
The need for the determination of the free energy of formation of biomass in bioreactor second law balances is well established. A statistical mechanical method for the calculation of the free energy of formation of E. coli biomass is introduced. In this method, biomass is modelled to consist of a system of biopolymer networks. The partition function of this system is proposed to consist of acoustic and optical modes of vibration. Acoustic modes are described by Tarasov's model, the parameters of which are evaluated with the aid of low-temperature calorimetric data for the crystalline protein bovine chymotrypsinogen A. The optical modes are described by considering the low-temperature thermodynamic properties of biological monomer crystals such as amino acid crystals. Upper and lower bounds are placed on the entropy to establish the maximum error associated with the statistical method. The upper bound is determined by endowing the monomers in biomass with ideal gas properties. The lower bound is obtained by limiting the monomers to complete immobility. On this basis, the free energy of formation is fixed to within 10%. Proposals are made with regard to experimental verification of the calculated value and extension of the calculation to other types of biomass.
Eigenvalues of the Wentzell-Laplace operator and of the fourth order Steklov problems
NASA Astrophysics Data System (ADS)
Xia, Changyu; Wang, Qiaoling
2018-05-01
We prove a sharp upper bound and a lower bound for the first nonzero eigenvalue of the Wentzell-Laplace operator on compact manifolds with boundary and an isoperimetric inequality for the same eigenvalue in the case where the manifold is a bounded domain in a Euclidean space. We study some fourth order Steklov problems and obtain an isoperimetric upper bound for their first eigenvalue. We also find all the eigenvalues and eigenfunctions for two kinds of fourth order Steklov problems on a Euclidean ball.
Control design for robust stability in linear regulators: Application to aerospace flight control
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1986-01-01
Time domain stability robustness analysis and design for linear multivariable uncertain systems with bounded uncertainties is the central theme of the research. After reviewing the recently developed upper bounds on the linear elemental (structured), time varying perturbation of an asymptotically stable linear time invariant regulator, it is shown that it is possible to further improve these bounds by employing state transformations. Then, introducing a quantitative measure called the stability robustness index, a state feedback control design algorithm is presented for a general linear regulator problem and then specialized to the case of modal systems as well as matched systems. The extension of the algorithm to stochastic systems with a Kalman filter as the state estimator is presented. Finally, an algorithm for robust dynamic compensator design is presented using a Parameter Optimization (PO) procedure. Applications in aircraft control and flexible structure control are presented along with a comparison with other existing methods.
Thermodynamics in variable speed of light theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Racker, Juan; Facultad de Ciencias Astronomicas y Geofisicas, Universidad Nacional de La Plata, Paseo del Bosque S/N; Sisterna, Pablo
2009-10-15
The perfect fluid in the context of a covariant variable speed of light theory proposed by J. Magueijo is studied. On the one hand the modified first law of thermodynamics together with a recipe to obtain equations of state are obtained. On the other hand the Newtonian limit is performed to obtain the nonrelativistic hydrostatic equilibrium equation for the theory. The results obtained are used to determine the time variation of the radius of Mercury induced by the variability of the speed of light (c), and the scalar contribution to the luminosity of white dwarfs. Using a bound for the change of that radius and combining it with an upper limit for the variation of the fine structure constant, a bound on the time variation of c is set. An independent bound is obtained from luminosity estimates for Stein 2015B.
Entropy production of doubly stochastic quantum channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller-Hermes, Alexander, E-mail: muellerh@posteo.net; Department of Mathematical Sciences, University of Copenhagen, 2100 Copenhagen; Stilck França, Daniel, E-mail: dsfranca@mytum.de
2016-02-15
We study the entropy increase of quantum systems evolving under primitive, doubly stochastic Markovian noise and thus converging to the maximally mixed state. This entropy increase can be quantified by a logarithmic-Sobolev constant of the Liouvillian generating the noise. We prove a universal lower bound on this constant that stays invariant under taking tensor-powers. Our methods involve a new comparison method to relate logarithmic-Sobolev constants of different Liouvillians and a technique to compute logarithmic-Sobolev inequalities of Liouvillians with eigenvectors forming a projective representation of a finite abelian group. Our bounds improve upon similar results established before, and as an application we prove an upper bound on continuous-time quantum capacities. In the last part of this work we study entropy production estimates of discrete-time doubly stochastic quantum channels by extending the framework of discrete-time logarithmic-Sobolev inequalities to the quantum case.
NASA Technical Reports Server (NTRS)
Shahshahani, Behzad M.; Landgrebe, David A.
1992-01-01
The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes (supervised, unsupervised, and combined supervised-unsupervised) are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, the combined supervised-unsupervised learning is always superior to the supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.
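A minimal one-dimensional illustration of the combined learning idea (not the asymptotic covariance analysis of the paper): class means are estimated from a small labeled set alone, and then re-estimated with a large unlabeled set folded in through a few EM iterations under the assumed normal mixture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-class normal mixture (known here only to generate synthetic data)
mu_true, sigma, prior = np.array([0.0, 2.0]), 1.0, np.array([0.5, 0.5])

def simulate(n):
    z = (rng.random(n) < prior[1]).astype(int)
    return rng.normal(mu_true[z], sigma), z

x_lab, y_lab = simulate(30)     # small labeled set
x_unl, _     = simulate(500)    # large unlabeled set

# Supervised estimate: class means from labeled samples only
mu_sup = np.array([x_lab[y_lab == k].mean() for k in (0, 1)])

# Combined estimate: EM over the unlabeled data with labeled samples held at
# fixed (0/1) responsibilities; equal priors and shared sigma are assumed.
mu = mu_sup.copy()
for _ in range(50):
    d = -(x_unl[:, None] - mu[None, :])**2 / (2 * sigma**2)     # E-step
    r = np.exp(d - d.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    for k in (0, 1):                                            # M-step
        w_lab = (y_lab == k).astype(float)
        mu[k] = (np.sum(w_lab * x_lab) + np.sum(r[:, k] * x_unl)) / (w_lab.sum() + r[:, k].sum())

print("supervised-only means:", np.round(mu_sup, 3))
print("combined estimate:    ", np.round(mu, 3))
```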
Parameter Transient Behavior Analysis on Fault Tolerant Control System
NASA Technical Reports Server (NTRS)
Belcastro, Christine (Technical Monitor); Shin, Jong-Yeob
2003-01-01
In a fault tolerant control (FTC) system, a parameter varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. This paper illustrates analysis of a FTC system based on estimated fault parameter transient behavior which may include false fault detections during a short time interval. Using Lyapunov function analysis, the upper bound of an induced-L2 norm of the FTC system performance is calculated as a function of a fault detection time and the exponential decay rate of the Lyapunov function.
The Problem of Limited Inter-rater Agreement in Modelling Music Similarity
Flexer, Arthur; Grill, Thomas
2016-01-01
One of the central goals of Music Information Retrieval (MIR) is the quantification of similarity between or within pieces of music. These quantitative relations should mirror the human perception of music similarity, which is however highly subjective with low inter-rater agreement. Unfortunately this principal problem has been given little attention in MIR so far. Since it is not meaningful to have computational models that go beyond the level of human agreement, these levels of inter-rater agreement present a natural upper bound for any algorithmic approach. We will illustrate this fundamental problem in the evaluation of MIR systems using results from two typical application scenarios: (i) modelling of music similarity between pieces of music; (ii) music structure analysis within pieces of music. For both applications, we derive upper bounds of performance which are due to the limited inter-rater agreement. We compare these upper bounds to the performance of state-of-the-art MIR systems and show how the upper bounds prevent further progress in developing better MIR systems. PMID:28190932
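A toy version of the ceiling argument: mean pairwise agreement among raters bounds how well any algorithm can agree with a single rater's judgments. The ratings below are hypothetical, and the statistic is deliberately crude (plain percent agreement rather than a chance-corrected coefficient).

```python
import numpy as np
from itertools import combinations

# Hypothetical ratings: 4 listeners each judge 8 pairs of pieces as similar (1) or not (0).
ratings = np.array([
    [1, 0, 1, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 0, 1, 0, 1],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 1, 0, 0],
])

# Mean pairwise agreement between raters: a crude ceiling for any algorithm
# evaluated against any single rater's judgments.
pairwise = [np.mean(a == b) for a, b in combinations(ratings, 2)]
human_ceiling = float(np.mean(pairwise))

# An algorithm's agreement with one rater cannot meaningfully exceed this ceiling.
algorithm_predictions = np.array([1, 0, 1, 1, 0, 1, 0, 1])
algorithm_agreement = float(np.mean(algorithm_predictions == ratings[0]))
print(round(human_ceiling, 3), round(algorithm_agreement, 3))
```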
The linearized multistage model and the future of quantitative risk assessment.
Crump, K S
1996-10-01
The linearized multistage (LMS) model has for over 15 years been the default dose-response model used by the U.S. Environmental Protection Agency (USEPA) and other federal and state regulatory agencies in the United States for calculating quantitative estimates of low-dose carcinogenic risks from animal data. The LMS model is in essence a flexible statistical model that can describe both linear and non-linear dose-response patterns, and that produces an upper confidence bound on the linear low-dose slope of the dose-response curve. Unlike its namesake, the Armitage-Doll multistage model, the parameters of the LMS do not correspond to actual physiological phenomena. Thus the LMS is 'biological' only to the extent that the true biological dose response is linear at low dose and that low-dose slope is reflected in the experimental data. If the true dose response is non-linear the LMS upper bound may overestimate the true risk by many orders of magnitude. However, competing low-dose extrapolation models, including those derived from 'biologically-based models' that are capable of incorporating additional biological information, have not shown evidence to date of being able to produce quantitative estimates of low-dose risks that are any more accurate than those obtained from the LMS model. Further, even if these attempts were successful, the extent to which more accurate estimates of low-dose risks in a test animal species would translate into improved estimates of human risk is questionable. Thus, it does not appear possible at present to develop a quantitative approach that would be generally applicable and that would offer significant improvements upon the crude bounding estimates of the type provided by the LMS model. Draft USEPA guidelines for cancer risk assessment incorporate an approach similar to the LMS for carcinogens having a linear mode of action. However, under these guidelines quantitative estimates of low-dose risks would not be developed for carcinogens having a non-linear mode of action; instead dose-response modelling would be used in the experimental range to calculate an LED10* (a statistical lower bound on the dose corresponding to a 10% increase in risk), and safety factors would be applied to the LED10* to determine acceptable exposure levels for humans. This approach is very similar to the one presently used by USEPA for non-carcinogens. Rather than using one approach for carcinogens believed to have a linear mode of action and a different approach for all other health effects, it is suggested herein that it would be more appropriate to use an approach conceptually similar to the 'LED10*-safety factor' approach for all health effects, and not to routinely develop quantitative risk estimates from animal data.
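For readers unfamiliar with the model's form, a minimal sketch of the multistage dose-response function and the LMS-style linear bound at low dose, with illustrative (not fitted) coefficients; q1_star below stands in for the upper confidence bound on the linear term that the LMS procedure actually reports.

```python
import numpy as np

def multistage_prob(d, q):
    """Multistage model: P(d) = 1 - exp(-(q0 + q1*d + q2*d^2 + ...)), with q >= 0."""
    d = np.asarray(d, dtype=float)
    poly = sum(qk * d**k for k, qk in enumerate(q))
    return 1.0 - np.exp(-poly)

def extra_risk(d, q):
    """Extra risk over background: [P(d) - P(0)] / [1 - P(0)]."""
    p0 = multistage_prob(0.0, q)
    return (multistage_prob(d, q) - p0) / (1.0 - p0)

# Illustrative coefficients (per unit dose); q1_star is a hypothetical 95% upper bound on q1.
q = [0.02, 0.010, 0.5]
q1_star = 0.025

low_dose = 1e-4
print(extra_risk(low_dose, q))    # ~q1 * d = 1.0e-6 at low dose
print(q1_star * low_dose)         # LMS-style linear upper bound on extra risk: 2.5e-6
```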
Cost and benefit estimates of partially-automated vehicle collision avoidance technologies.
Harper, Corey D; Hendrickson, Chris T; Samaras, Constantine
2016-10-01
Many light-duty vehicle crashes occur due to human error and distracted driving. Partially-automated crash avoidance features offer the potential to reduce the frequency and severity of vehicle crashes that occur due to distracted driving and/or human error by assisting in maintaining control of the vehicle or issuing alerts if a potentially dangerous situation is detected. This paper evaluates the benefits and costs of fleet-wide deployment of blind spot monitoring, lane departure warning, and forward collision warning crash avoidance systems within the US light-duty vehicle fleet. The three crash avoidance technologies could collectively prevent or reduce the severity of as many as 1.3 million U.S. crashes a year including 133,000 injury crashes and 10,100 fatal crashes. For this paper we made two estimates of potential benefits in the United States: (1) the upper bound fleet-wide technology diffusion benefits by assuming all relevant crashes are avoided and (2) the lower bound fleet-wide benefits of the three technologies based on observed insurance data. The latter represents a lower bound because effectiveness should improve over time and costs should fall with scale economies and technology improvement. All three technologies could collectively provide a lower bound annual benefit of about $18 billion if equipped on all light-duty vehicles. With 2015 pricing of safety options, the total annual costs to equip all light-duty vehicles with the three technologies would be about $13 billion, resulting in an annual net benefit of about $4 billion or a $20 per vehicle net benefit. By assuming all relevant crashes are avoided, the total upper bound annual net benefit from all three technologies combined is about $202 billion or an $861 per vehicle net benefit, at current technology costs. The technologies we are exploring in this paper represent an early form of vehicle automation, and a positive net benefit suggests the fleet-wide adoption of these technologies would be beneficial from an economic and social perspective. Copyright © 2016 Elsevier Ltd. All rights reserved.
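The dollar figures above combine by simple arithmetic; a quick recomputation (with an approximate, inferred fleet size) shows how the per-vehicle numbers follow. Because the published totals are rounded, the recomputed lower-bound net benefit lands near $5 billion rather than the quoted ~$4 billion.

```python
# Back-of-the-envelope recomputation of the net-benefit figures quoted above.
# The fleet size is not given in the abstract; ~235 million vehicles is an
# approximation inferred from the per-vehicle numbers.

fleet_size = 235e6            # U.S. light-duty vehicles (approximate)
annual_cost = 13e9            # $/yr to equip the whole fleet at 2015 option pricing
lower_benefit = 18e9          # $/yr, lower-bound benefit from observed insurance data
upper_net = 202e9             # $/yr, quoted upper-bound net benefit (all relevant crashes avoided)

lower_net = lower_benefit - annual_cost
print(lower_net / 1e9, round(lower_net / fleet_size))   # ~5 $B/yr and ~$21/vehicle (quoted: ~$4B, ~$20)
print(round(upper_net / fleet_size))                    # ~$860/vehicle (quoted: $861)
```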
Estimating the epidemic threshold on networks by deterministic connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu
2014-12-15
For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than the stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
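A generic mean-field illustration of why spectral quantities bracket the threshold (this is a textbook-style sketch, not the specific inequalities derived in the paper): for SIS-type dynamics the threshold scales as the inverse of the largest eigenvalue of the contact matrix, and adding the expected stochastic links to the deterministic backbone can only raise that eigenvalue.

```python
import numpy as np

# Deterministic connections (hypothetical small network)
A_det = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

# Hypothetical connection probabilities for the remaining (stochastic) links
P_rand = 0.2 * (1 - np.eye(4)) * (A_det == 0)

lam_det = np.max(np.linalg.eigvalsh(A_det))
lam_full = np.max(np.linalg.eigvalsh(A_det + P_rand))

# For nonnegative matrices the spectral radius is monotone, so adding expected
# random links can only lower the threshold: 1/lam_full <= tau_c <= 1/lam_det.
print(round(1 / lam_full, 3), round(1 / lam_det, 3))
```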
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degteva, M. O.; Shagina, N. B.; Shishkina, Elena A.
Waterborne radioactive releases into the Techa River from the Mayak Production Association in Russia during 1949–1956 resulted in significant doses to about 30,000 persons who lived in downstream settlements. The residents were exposed to internal and external radiation. Two methods for reconstruction of the external dose are considered in this paper, electron paramagnetic resonance (EPR) measurements of teeth and fluorescence in situ hybridization (FISH) measurements of chromosome translocations in circulating lymphocytes. The main issue in the application of the EPR and FISH methods for reconstruction of the external dose for the Techa Riverside residents was strontium radioisotopes incorporated in teeth and bones that served as a source of confounding local exposures. In order to estimate and subtract doses from incorporated 89,90Sr, the EPR and FISH assays were supported by measurements of 90Sr-body burdens and estimates of 90Sr concentrations in dental tissues by the luminescence method. The resulting dose estimates derived from EPR and FISH measurements for residents of the upper Techa River were found to be consistent: the mean values vary from 510 – 550 mGy for the villages located close to the site of radioactive release to 130 – 160 mGy for the more distant villages. The upper bound of individual estimates for both methods is equal to 2.2 – 2.3 Gy. The EPR- and FISH-based dose estimates were compared with the doses calculated for the donors using the Techa River Dosimetry System (TRDS). The TRDS external dose assessments were based on the data on contamination of the Techa River floodplain, simulation of air kerma above the contaminated soil, age-dependent life-styles and individual residence histories. For correct comparison TRDS-based doses were calculated from two sources: external exposure from the contaminated environment and internal exposure from 137Cs incorporated in donors' soft tissues. The TRDS-based absorbed doses in tooth enamel and muscle were in agreement with EPR- and FISH-based estimates within uncertainty bounds. Basically, the agreement between the estimates has confirmed the validity of external doses calculated with the Techa River Dosimetry System.
Upper bound on the efficiency of certain nonimaging concentrators in the physical-optics model
NASA Astrophysics Data System (ADS)
Welford, W. T.; Winston, R.
1982-09-01
Upper bounds on the performance of nonimaging concentrators are obtained within the framework of scalar-wave theory by using a simple approach to avoid complex calculations on multiple phase fronts. The approach consists in treating a theoretically perfect image-forming device and postulating that no non-image-forming concentrator can have a better performance than such an ideal image-forming system. The performance of such a system can be calculated according to wave theory, and this will provide, in accordance with the postulate, upper bounds on the performance of nonimaging systems. The method is demonstrated for a two-dimensional compound parabolic concentrator.
NASA Astrophysics Data System (ADS)
Liu, X.; Bassis, J. N.
2015-12-01
With observations showing accelerated mass loss from the Greenland Ice Sheet due to surface melt, the Greenland Ice Sheet is becoming one of the most significant contributors to sea level rise. The contribution of the Greenland Ice Sheet to sea level rise is likely to accelerate in the coming decades and centuries as atmospheric temperatures continue to rise, potentially triggering ever larger surface melt rates. However, at present considerable uncertainty remains in projecting the contribution to sea level of the Greenland Ice Sheet, both due to uncertainty in atmospheric forcing and the ice sheet response to climate forcing. Here we seek an upper bound on the contribution of surface melt from the Greenland Ice Sheet to sea level rise in the coming century using a surface energy balance model coupled to an englacial model. We use IPCC Representative Concentration Pathways (RCP8.5, RCP6, RCP4.5, RCP2.6) climate scenarios from an ensemble of global climate models in our simulations to project the maximum rate of ice volume loss and related sea-level rise associated with surface melting. To estimate the upper bound, we assume the Greenland Ice Sheet is perpetually covered in thick clouds, which maximize longwave radiation to the ice sheet. We further assume that deposition of black carbon darkens the ice, turning it nearly black and substantially reducing its albedo. Although assuming that all melt water not stored in the snow/firn is instantaneously transported off the ice sheet increases mass loss in the short term, refreezing of retained water warms the ice and may lead to more melt in the long term. Hence we examine both assumptions and use the scenario that leads to the most surface melt by 2100. Preliminary model results suggest that under the most aggressive climate forcing, surface melt from the Greenland Ice Sheet contributes ~1 m to sea level by the year 2100. This is a significant contribution and ignores dynamic effects. We also examined a lower bound, assuming negligible longwave radiation and albedo near the maximum observed for freshly fallen snow. Even under this scenario, preliminary estimates suggest tens of centimeters of sea level rise by 2100.
Nosyk, Bohdan; Zang, Xiao; Min, Jeong E; Krebs, Emanuel; Lima, Viviane D; Milloy, M-J; Shoveller, Jean; Barrios, Rolando; Harrigan, P Richard; Kerr, Thomas; Wood, Evan; Montaner, Julio S G
2017-07-01
Antiretroviral therapy (ART) and harm reduction services have been cited as key contributors to control of HIV epidemics; however, the specific contribution of ART has been questioned due to uncertainty of its true efficacy on HIV transmission through needle sharing. We aimed to isolate the independent effects of harm reduction services (opioid agonist treatment uptake and needle distribution volumes) and ART on HIV transmission via needle sharing in British Columbia, Canada, from 1996 to 2013. We used comprehensive linked individual health administrative and registry data for the population of diagnosed people living with HIV in British Columbia to populate a dynamic, compartmental transmission model to simulate the HIV/AIDS epidemic in British Columbia from 1996 to 2013. We estimated HIV incidence, mortality, and quality-adjusted life-years (QALYs). We also estimated scenarios designed to isolate the independent effects of harm reduction services and ART, assuming 50% (10-90%) efficacy, in reducing HIV incidence through needle sharing, and we investigated structural and parameter uncertainty. We estimate that 3204 (upper bound-lower bound 2402-4589) incident HIV cases were averted between 1996 and 2013 as a result of the combined effect of the expansion of harm reduction services and ART coverage on HIV transmission via needle sharing. In a hypothetical scenario assuming ART had zero effect on transmission through needle sharing, we estimated harm reduction services alone would have accounted for 77% (upper bound-lower bound 62-95%) of averted HIV incidence. In a separate hypothetical scenario where harm reduction services remained at 1996 levels, we estimated ART alone would have accounted for 44% (10-67%) of averted HIV incidence. As a result of high distribution volumes, needle distribution predominantly accounted for incidence reductions attributable to harm reduction but opioid agonist treatment provided substantially greater QALY gains. If the true efficacy of ART in preventing HIV transmission through needle sharing is closer to its efficacy in sexual transmission, ART's effect on incident cases averted could be greater than that of harm reduction. Nonetheless, harm reduction services had a vital role in reducing HIV incidence in British Columbia, and should be viewed as essential and cost-effective tools in combination implementation strategies to reduce the public health and economic burden of HIV/AIDS. BC Ministry of Health; National Institutes of Health (R01DA041747); Genome Canada (142HIV). Copyright © 2017 Elsevier Ltd. All rights reserved.
Lower and upper bounds for entanglement of Rényi-α entropy.
Song, Wei; Chen, Lin; Cao, Zhuo-Liang
2016-12-23
Entanglement Rényi-α entropy is an entanglement measure. It reduces to the standard entanglement of formation when α tends to 1. We derive analytical lower and upper bounds for the entanglement Rényi-α entropy of arbitrary dimensional bipartite quantum systems. We also demonstrate the application of our bounds with some concrete examples. Moreover, we establish the relation between entanglement Rényi-α entropy and some other entanglement measures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldes, Iason; Petraki, Kalliopi, E-mail: iason.baldes@desy.de, E-mail: kpetraki@lpthe.jussieu.fr
Dark matter that possesses a particle-antiparticle asymmetry and has thermalised in the early universe requires a larger annihilation cross-section compared to symmetric dark matter, in order to deplete the dark antiparticles and account for the observed dark matter density. The annihilation cross-section determines the residual symmetric component of dark matter, which may give rise to annihilation signals during CMB and inside haloes today. We consider dark matter with long-range interactions, in particular dark matter coupled to a light vector or scalar force mediator. We compute the couplings required to attain a final antiparticle-to-particle ratio after the thermal freeze-out of the annihilation processes in the early universe, and then estimate the late-time annihilation signals. We show that, due to the Sommerfeld enhancement, highly asymmetric dark matter with long-range interactions can have a significant annihilation rate, potentially larger than symmetric dark matter of the same mass with contact interactions. We discuss caveats in this estimation, relating to the formation of stable bound states. Finally, we consider the non-relativistic partial-wave unitarity bound on the inelastic cross-section, we discuss why it can be realised only by long-range interactions, and showcase the importance of higher partial waves in this regime of large inelasticity. We derive upper bounds on the mass of symmetric and asymmetric thermal-relic dark matter for s-wave and p-wave annihilation, and exhibit how these bounds strengthen as the dark asymmetry increases.
Veeraraghavan, Srikant; Mazziotti, David A
2014-03-28
We present a density matrix approach for computing global solutions of restricted open-shell Hartree-Fock theory, based on semidefinite programming (SDP), that gives upper and lower bounds on the Hartree-Fock energy of quantum systems. While wave function approaches to Hartree-Fock theory yield an upper bound to the Hartree-Fock energy, we derive a semidefinite relaxation of Hartree-Fock theory that yields a rigorous lower bound on the Hartree-Fock energy. We also develop an upper-bound algorithm in which Hartree-Fock theory is cast as a SDP with a nonconvex constraint on the rank of the matrix variable. Equality of the upper- and lower-bound energies guarantees that the computed solution is the globally optimal solution of Hartree-Fock theory. The work extends a previously presented method for closed-shell systems [S. Veeraraghavan and D. A. Mazziotti, Phys. Rev. A 89, 010502-R (2014)]. For strongly correlated systems the SDP approach provides an alternative to the locally optimized Hartree-Fock energies and densities with a certificate of global optimality. Applications are made to the potential energy curves of C2, CN, Cr2, and NO2.
NASA Technical Reports Server (NTRS)
Jaggi, S.; Quattrochi, D.; Baskin, R.
1992-01-01
The effective flux incident upon the detectors of a thermal sensor, after it has been corrected for atmospheric effects, is a function of a non-linear combination of the emissivity of the target for that channel and the temperature of the target. The sensor system cannot separate the contribution from the emissivity and the temperature that constitute the flux value. A method that estimates the bounds on these temperatures and emissivities from thermal data is described. This method is then tested with remotely sensed data obtained from NASA's Thermal Infrared Multispectral Scanner (TIMS) - a 6 channel thermal sensor. Since this is an under-determined set of equations, i.e., there are 7 unknowns (6 emissivities and 1 temperature) and 6 equations (corresponding to the 6 channel fluxes), there exists theoretically an infinite number of combinations of emissivities and temperature that can satisfy these equations. Using some realistic bounds on the emissivities, bounds on the temperature are calculated. These bounds on the temperature are refined to estimate a tighter bound on the emissivity of the source. An error analysis is also carried out to quantitatively determine the extent of uncertainty introduced in the estimate of these parameters. This method is useful only when a realistic set of bounds can be obtained for the emissivities of the data. In the case of water the lower and upper bounds were set at 0.97 and 1.00 respectively. Five flights were flown in succession at altitudes of 2 km (low), 6 km (mid), 12 km (high), and then back again at 6 km and 2 km. The area selected was the Ross Barnett reservoir near Jackson, Mississippi. The mission was flown during the predawn hours of 1 Feb. 1992. Radiosonde data was collected for that duration to profile the characteristics of the atmosphere. Ground truth temperatures using thermometers and radiometers were also obtained over an area of the reservoir. The results of two independent runs of the radiometer data averaged 7.03 plus or minus .70 for the first run and 7.31 plus or minus .88 for the second run. The results of the algorithm yield a temperature of 7.68 for the low altitude data to 8.73 for the high altitude data.
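To make the emissivity-temperature trade-off concrete, the sketch below inverts the Planck function for a single thermal channel under the water emissivity bounds of 0.97 to 1.00 quoted above. Atmospheric correction, reflected downwelling radiance, and the joint use of all six TIMS channels are ignored, and the surface values are hypothetical.

```python
import numpy as np

C1 = 1.191042e-16   # 2*h*c**2, W m^2 sr^-1
C2 = 1.438777e-2    # h*c/k_B, m K

def planck_radiance(wavelength_m, temp_k):
    return C1 / (wavelength_m**5 * (np.exp(C2 / (wavelength_m * temp_k)) - 1.0))

def brightness_temperature(wavelength_m, radiance):
    return C2 / (wavelength_m * np.log(1.0 + C1 / (wavelength_m**5 * radiance)))

wavelength = 10.0e-6                          # ~10 micrometre thermal channel
true_temp, true_emis = 280.8, 0.985           # hypothetical water-surface temperature (K) and emissivity
measured = true_emis * planck_radiance(wavelength, true_temp)

# Lower emissivity bound -> larger inferred blackbody radiance -> upper temperature bound
t_lower = brightness_temperature(wavelength, measured / 1.00)
t_upper = brightness_temperature(wavelength, measured / 0.97)
print(round(t_lower - 273.15, 2), round(t_upper - 273.15, 2))   # brackets the assumed ~7.65 deg C surface
```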
NASA Astrophysics Data System (ADS)
Wen, Huanyao; Zhu, Limei
2018-02-01
In this paper, we consider the Cauchy problem for a two-phase model with magnetic field in three dimensions. The global existence and uniqueness of the strong solution, as well as time decay estimates in H²(ℝ³), are obtained by introducing a new linearized system with respect to (n^γ − ñ^γ, n − ñ, P − P̃, u, H) for constants ñ ≥ 0 and P̃ > 0, and by deriving some new a priori estimates in Sobolev spaces to obtain a uniform upper bound on (n − ñ, n^γ − ñ^γ) in the H²(ℝ³) norm.
Stability of Nonlinear Systems with Unknown Time-varying Feedback Delay
NASA Astrophysics Data System (ADS)
Chunodkar, Apurva A.; Akella, Maruthi R.
2013-12-01
This paper considers the problem of stabilizing a class of nonlinear systems with unknown bounded delayed feedback wherein the time-varying delay is either 1) piecewise constant or 2) continuous with a bounded rate. We also consider application of these results to the stabilization of rigid-body attitude dynamics. In the first case, the time-delay in feedback is modeled specifically as a switch among an arbitrarily large set of unknown constant values with a known strict upper bound. The feedback is a linear function of the delayed states. In the case of linear systems with switched delay feedback, a new sufficiency condition for average dwell time result is presented using a complete type Lyapunov-Krasovskii (L-K) functional approach. Further, the corresponding switched system with nonlinear perturbations is proven to be exponentially stable inside a well characterized region of attraction for an appropriately chosen average dwell time. In the second case, the concept of the complete type L-K functional is extended to a class of nonlinear time-delay systems with unknown time-varying time-delay. This extension ensures stability robustness to time-delay in the control design for all values of time-delay less than the known upper bound. Model-transformation is used in order to partition the nonlinear system into a nominal linear part that is exponentially stable with a bounded perturbation. We obtain sufficient conditions which ensure exponential stability inside a region of attraction estimate. A constructive method to evaluate the sufficient conditions is presented together with comparison with the corresponding constant and piecewise constant delay. Numerical simulations are performed to illustrate the theoretical results of this paper.
NASA Technical Reports Server (NTRS)
Menga, G.
1975-01-01
An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high dimension realization. Classes of fixed order models are defined, having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, in one of those classes, a measure of the approximation between the model and the process evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound of the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.
Standard Deviation for Small Samples
ERIC Educational Resources Information Center
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
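The article's exact expressions are truncated above, but a range-based bound of the same flavor is easy to state and check numerically; the bound in the sketch below is a standard one and is offered only as an assumption-labeled stand-in, not necessarily the form given in the article.

```python
import numpy as np

def sd_range_bound(data):
    """Range-based upper bound: s <= (max - min)/2 * sqrt(n/(n-1))."""
    data = np.asarray(data, dtype=float)
    n = data.size
    return (data.max() - data.min()) / 2.0 * np.sqrt(n / (n - 1))

# Brute-force check for small integer samples of size 3 and 4
rng = np.random.default_rng(42)
for n in (3, 4):
    for _ in range(1000):
        x = rng.integers(0, 10, size=n)
        assert np.std(x, ddof=1) <= sd_range_bound(x) + 1e-12

x = [2, 5, 9]
print(round(float(np.std(x, ddof=1)), 3), round(float(sd_range_bound(x)), 3))   # ~3.512 vs bound ~4.287
```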
Bounds for Asian basket options
NASA Astrophysics Data System (ADS)
Deelstra, Griselda; Diallo, Ibrahima; Vanmaele, Michèle
2008-09-01
In this paper we propose pricing bounds for European-style discrete arithmetic Asian basket options in a Black and Scholes framework. We start from methods used for basket options and Asian options. First, we use the general approach for deriving upper and lower bounds for stop-loss premia of sums of non-independent random variables as in Kaas et al. [Upper and lower bounds for sums of random variables, Insurance Math. Econom. 27 (2000) 151-168] or Dhaene et al. [The concept of comonotonicity in actuarial science and finance: theory, Insurance Math. Econom. 31(1) (2002) 3-33]. We generalize the methods in Deelstra et al. [Pricing of arithmetic basket options by conditioning, Insurance Math. Econom. 34 (2004) 55-57] and Vanmaele et al. [Bounds for the price of discrete sampled arithmetic Asian options, J. Comput. Appl. Math. 185(1) (2006) 51-90]. Afterwards we show how to derive an analytical closed-form expression for a lower bound in the non-comonotonic case. Finally, we derive upper bounds for Asian basket options by applying techniques as in Thompson [Fast narrow bounds on the value of Asian options, Working Paper, University of Cambridge, 1999] and Lord [Partially exact and bounded approximations for arithmetic Asian options, J. Comput. Finance 10 (2) (2006) 1-52]. Numerical results are included and on the basis of our numerical tests, we explain which method we recommend depending on moneyness and time-to-maturity.
Uncertainty analysis for absorbed dose from a brain receptor imaging agent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aydogan, B.; Miller, L.F.; Sparks, R.B.
Absorbed dose estimates are known to contain uncertainties. A recent literature search indicates that prior to this study no rigorous investigation of uncertainty associated with absorbed dose has been undertaken. A method of uncertainty analysis for absorbed dose calculations has been developed and implemented for the brain receptor imaging agent 123I-IPT. The two major sources of uncertainty considered were the uncertainty associated with the determination of residence time and that associated with the determination of the S values. There are many sources of uncertainty in the determination of the S values, but only the inter-patient organ mass variation was considered in this work. The absorbed dose uncertainties were determined for lung, liver, heart and brain. Ninety-five percent confidence intervals of the organ absorbed dose distributions for each patient and for a seven-patient population group were determined by the "Latin Hypercube Sampling" method. For an individual patient, the upper bound of the 95% confidence interval of the absorbed dose was found to be about 2.5 times larger than the estimated mean absorbed dose. For the seven-patient population the upper bound of the 95% confidence interval of the absorbed dose distribution was around 45% more than the estimated population mean. For example, the 95% confidence interval of the population liver dose distribution was found to be between 1.49E+07 Gy/MBq and 4.65E+07 Gy/MBq with a mean of 2.52E+07 Gy/MBq. This study concluded that patients in a population receiving 123I-IPT could receive absorbed doses as much as twice as large as the standard estimated absorbed dose due to these uncertainties.
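A minimal sketch of the sampling step, assuming hypothetical lognormal distributions for residence time and S value (the study's actual distributions and 123I-IPT parameters are not reproduced); SciPy's qmc module supplies the Latin Hypercube design.

```python
import numpy as np
from scipy.stats import qmc, norm

# Propagate residence-time and S-value uncertainty to absorbed dose via D = residence_time * S.
sampler = qmc.LatinHypercube(d=2, seed=7)
u = sampler.random(n=5000)                     # uniform LHS design on [0, 1)^2

# Hypothetical lognormal residence time (h) and S value (mGy per MBq-h)
residence = np.exp(norm.ppf(u[:, 0], loc=np.log(2.0), scale=0.3))
s_value   = np.exp(norm.ppf(u[:, 1], loc=np.log(1.5e-2), scale=0.4))

dose = residence * s_value                     # absorbed dose per unit administered activity
lo, hi = np.percentile(dose, [2.5, 97.5])
print(round(dose.mean(), 4), round(lo, 4), round(hi, 4))
print(round(hi / dose.mean(), 2))              # ratio of the upper 95% bound to the mean
```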
Information transmission over an amplitude damping channel with an arbitrary degree of memory
NASA Astrophysics Data System (ADS)
D'Arrigo, Antonio; Benenti, Giuliano; Falci, Giuseppe; Macchiavello, Chiara
2015-12-01
We study the performance of a partially correlated amplitude damping channel acting on two qubits. We derive lower bounds for the single-shot classical capacity by studying two kinds of quantum ensembles, one which allows us to maximize the Holevo quantity for the memoryless channel and the other allowing the same task but for the full-memory channel. In these two cases we also show the amount of entanglement which is involved in achieving the maximum of the Holevo quantity. For the single-shot quantum capacity we discuss both a lower and an upper bound, achieving a good estimate for high values of the channel transmissivity. We finally compute the entanglement-assisted classical channel capacity.
Evaluating the Potential Importance of Monoterpene Degradation for Global Acetone Production
NASA Astrophysics Data System (ADS)
Kelp, M. M.; Brewer, J.; Keller, C. A.; Fischer, E. V.
2015-12-01
Acetone is one of the most abundant volatile organic compounds (VOCs) in the atmosphere, but estimates of the global source of acetone vary widely. A better understanding of acetone sources is essential because acetone serves as a source of HOx in the upper troposphere and as a precursor to the NOx reservoir species peroxyacetyl nitrate (PAN). Although there are primary anthropogenic and pyrogenic sources of acetone, the dominant acetone sources are thought to be from direct biogenic emissions and photochemical production, particularly from the oxidation of iso-alkanes. Recent work suggests that the photochemical degradation of monoterpenes may also represent a significant contribution to global acetone production. We investigate that hypothesis using the GEOS-Chem chemical transport model. In this work, we calculate the emissions of eight terpene species (α-pinene, β-pinene, limonene, Δ3-carene, myrcene, sabinene, trans-β-ocimene, and an 'other monoterpenes' category which contains 34 other trace species) and couple these with upper and lower bound literature yields from species-specific chamber studies. We compare the simulated acetone distributions against in situ acetone measurements from a global suite of NASA aircraft campaigns. When simulating an upper bound on yields, the model-to-measurement comparison improves for North America at both the surface and in the upper troposphere. The inclusion of acetone production from monoterpene degradation also improves the ability of the model to reproduce observations of acetone in East Asian outflow. However, in general the addition of monoterpenes degrades the model comparison for the Southern Hemisphere.
Exploring L1 model space in search of conductivity bounds for the MT problem
NASA Astrophysics Data System (ADS)
Wheelock, B. D.; Parker, R. L.
2013-12-01
Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions) results in greater uncertainty (wider bounds). Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
Booze, Thomas F; Reinhardt, Timothy E; Quiring, Sharon J; Ottmar, Roger D
2004-05-01
A screening health risk assessment was performed to assess the upper-bound risks of cancer and noncancer adverse health effects among wildland firefighters performing wildfire suppression and prescribed burn management. Of the hundreds of chemicals in wildland fire smoke, we identified 15 substances of potential concern from the standpoints of concentration and toxicology; these included aldehydes, polycyclic aromatic hydrocarbons, carbon monoxide, benzene, and respirable particulate matter. Data defining daily exposures to smoke at prescribed burns and wildfires, potential days of exposure in a year, and career lengths were used to estimate average and reasonable maximum career inhalation exposures to these substances. Of the 15 substances in smoke that were evaluated, only benzene and formaldehyde posed a cancer risk greater than 1 per million, while only acrolein and respirable particulate matter exposures resulted in hazard indices greater than 1.0. The estimated upper-bound cancer risks ranged from 1.4 to 220 excess cancers per million, and noncancer hazard indices ranged from 9 to 360, depending on the exposure group. These values only indicate the likelihood of adverse health effects, not whether they will or will not occur. The risk assessment process narrows the field of substances that deserve further assessment, and the hazards identified by risk assessment generally agree with those identified as a concern in occupational exposure assessments.
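The screening calculations behind numbers like these reduce to a few multiplications and ratios. A hedged sketch with placeholder inputs (the concentrations, unit risk, exposure fraction, and reference level below are illustrative, not the study's values):

```python
# Screening-level arithmetic of the type used above: an inhalation cancer risk
# estimate and a noncancer hazard quotient. All inputs are placeholders.

benzene_conc = 0.05        # mg/m3, career-average workplace concentration (hypothetical)
unit_risk = 7.8e-6         # excess risk per ug/m3 of lifetime continuous exposure (illustrative magnitude)
exposure_fraction = 0.05   # fraction of a lifetime actually spent exposed (hypothetical)

cancer_risk = benzene_conc * 1000.0 * unit_risk * exposure_fraction
print(f"{cancer_risk:.1e}")          # excess lifetime cancer risk, ~2e-5 (about 20 per million)

acrolein_conc = 0.01       # mg/m3, career-average concentration (hypothetical)
reference_conc = 0.002     # mg/m3, noncancer reference level (hypothetical)
hazard_quotient = acrolein_conc / reference_conc
print(hazard_quotient)               # a quotient above 1 flags a potential noncancer concern
```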
Hoyle, Martin; Cresswell, James E
2007-09-07
We present a spatially implicit analytical model of forager movement, designed to address a simple scenario common in nature. We assume minimal depression of patch resources, and discrete foraging bouts, during which foragers fill to capacity. The model is particularly suitable for foragers that search systematically, foragers that deplete resources in a patch only incrementally, and for sit-and-wait foragers, where harvesting does not affect the rate of arrival of forage. Drawing on the theory of job search from microeconomics, we estimate the expected number of patches visited as a function of just two variables: the coefficient of variation of the rate of energy gain among patches, and the ratio of the expected time exploiting a randomly chosen patch and the expected time travelling between patches. We then consider the forager as a pollinator and apply our model to estimate gene flow. Under model assumptions, an upper bound for animal-mediated gene flow between natural plant populations is approximately proportional to the probability that the animal rejects a plant population. In addition, an upper bound for animal-mediated gene flow in any animal-pollinated agricultural crop from a genetically modified (GM) to a non-GM field is approximately proportional to the proportion of fields that are GM and the probability that the animal rejects a field.
NASA Astrophysics Data System (ADS)
Costa, Veber; Fernandes, Wilson
2017-11-01
Extreme flood estimation has been a key research topic in hydrological sciences. Reliable estimates of such events are necessary as structures for flood conveyance are continuously evolving in size and complexity and, as a result, their failure-associated hazards become more and more pronounced. Due to this fact, several estimation techniques intended to improve flood frequency analysis and reducing uncertainty in extreme quantile estimation have been addressed in the literature in the last decades. In this paper, we develop a Bayesian framework for the indirect estimation of extreme flood quantiles from rainfall-runoff models. In the proposed approach, an ensemble of long daily rainfall series is simulated with a stochastic generator, which models extreme rainfall amounts with an upper-bounded distribution function, namely, the 4-parameter lognormal model. The rationale behind the generation model is that physical limits for rainfall amounts, and consequently for floods, exist and, by imposing an appropriate upper bound for the probabilistic model, more plausible estimates can be obtained for those rainfall quantiles with very low exceedance probabilities. Daily rainfall time series are converted into streamflows by routing each realization of the synthetic ensemble through a conceptual hydrologic model, the Rio Grande rainfall-runoff model. Calibration of parameters is performed through a nonlinear regression model, by means of the specification of a statistical model for the residuals that is able to accommodate autocorrelation, heteroscedasticity and nonnormality. By combining the outlined steps in a Bayesian structure of analysis, one is able to properly summarize the resulting uncertainty and estimating more accurate credible intervals for a set of flood quantiles of interest. The method for extreme flood indirect estimation was applied to the American river catchment, at the Folsom dam, in the state of California, USA. Results show that most floods, including exceptionally large non-systematic events, were reasonably estimated with the proposed approach. In addition, by accounting for uncertainties in each modeling step, one is able to obtain a better understanding of the influential factors in large flood formation dynamics.
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
Upper bounds on high speed satellite collision probability, Pc, have been investigated. Previous methods assume an individual position error covariance matrix is available for each object, and the two matrices are combined into a single relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum Pc. If error covariance information for only one of the two objects was available, either some default shape had to be assumed or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but potentially useful Pc upper bound.
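For orientation, a generic encounter-plane Monte Carlo estimate of Pc for an assumed combined covariance is sketched below. This illustrates how Pc depends on the covariance shape and scale, which is why a search over the unknown component is needed to find a maximum; it is not the upper-bounding construction described above, and the miss distance, covariance, and hard-body radius are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

miss = np.array([120.0, 80.0])                  # projected miss distance components, metres (hypothetical)
cov  = np.array([[90.0**2, 0.3 * 90 * 60],      # combined in-plane covariance, m^2 (hypothetical)
                 [0.3 * 90 * 60, 60.0**2]])
hard_body_radius = 20.0                         # metres

samples = rng.multivariate_normal(mean=miss, cov=cov, size=2_000_000)
pc = np.mean(np.hypot(samples[:, 0], samples[:, 1]) <= hard_body_radius)
print(f"Pc ~ {pc:.2e}")

# Scaling the covariance up or down changes Pc non-monotonically, hence the need
# to vary the unknown covariance components when seeking a maximum.
for scale in (0.25, 1.0, 4.0):
    s = rng.multivariate_normal(mean=miss, cov=scale * cov, size=500_000)
    print(scale, float(np.mean(np.hypot(s[:, 0], s[:, 1]) <= hard_body_radius)))
```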
Estimating formation properties from early-time oscillatory water levels in a pumped well
Shapiro, A.M.; Oki, D.S.
2000-01-01
Hydrologists often attempt to estimate formation properties from aquifer tests for which only the hydraulic responses in a pumped well are available. Borehole storage, turbulent head losses, and borehole skin, however, can mask the hydraulic behavior of the formation inferred from the water level in the pumped well. Also, in highly permeable formations or in formations at significant depth below land surface, where there is a long column of water in the well casing, oscillatory water levels may arise during the onset of pumping to further mask formation responses in the pumped well. Usually borehole phenomena are confined to the early stages of pumping or recovery, and late-time hydraulic data can be used to estimate formation properties. In many instances, however, early-time hydraulic data provide valuable information about the formation, especially if there are interferences in the late-time data. A mathematical model and its Laplace transform solution that account for inertial influences and turbulent head losses during pumping are developed for the coupled response between the pumped borehole and the formation. The formation is assumed to be homogeneous, isotropic, of infinite areal extent, and uniform thickness, with leakage from an overlying aquifer, and the screened or open interval of the pumped well is assumed to fully penetrate the pumped aquifer. Other mathematical models of aquifer flow can also be coupled with the equations describing turbulent head losses and the inertial effects on the water column in the pumped well. The mathematical model developed in this paper is sufficiently general to consider both underdamped conditions for which oscillations arise, and overdamped conditions for which there are no oscillations. Through numerical inversion of the Laplace transform solution, type curves from the mathematical model are developed to estimate formation properties through comparison with the measured hydraulic response in the pumped well. The mathematical model is applied to estimate formation properties from a single-well test conducted near Waialua, Oahu, Hawaii. At this site, both the drawdown and recovery showed oscillatory water levels in the pumped well, and a step-drawdown test showed that approximately 86% of the drawdown is attributed to turbulent head losses. Analyses at this site using late-time drawdown data were confounded by the noise present in the measured water levels due primarily to nearby irrigation wells and ocean tides. By analyzing the early-time oscillatory recovery data at the Waialua site, upper and lower bounds were placed on the transmissivity, T, storage coefficient, S, and the leakance of the confining unit, K′/B′. The upper and lower bounds on T differ by a factor of 2. Upper and lower bounds on S and K′/B′ are much larger, because drawdown stabilized relatively quickly after the onset of pumping.
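Type curves of this kind are produced by numerically inverting a Laplace-domain solution. The sketch below implements the Gaver-Stehfest inversion algorithm, a common choice for well-hydraulics problems, and checks it against a transform with a known inverse; the paper does not specify that this particular scheme was used, so treat it as a generic stand-in.

```python
import math
import numpy as np

def stehfest_weights(N=12):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    V = np.zeros(N)
    half = N // 2
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j**half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V[k - 1] = (-1) ** (k + half) * s
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    V = stehfest_weights(N)
    ln2_t = math.log(2.0) / t
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

# Sanity check on a transform with a known inverse: F(s) = 1/(s+1)  <->  exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, stehfest_invert(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))
```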
NASA Astrophysics Data System (ADS)
Santos, Jander P.; Sá Barreto, F. C.
2016-01-01
Spin correlation identities for the Blume-Emery-Griffiths model on the Kagomé lattice are derived and, combined with rigorous correlation inequalities, lead to upper bounds on the critical temperature. From the spin correlation identities, the mean-field approximation and the effective-field approximation results for the magnetization, the critical frontiers, and the tricritical points are obtained. The rigorous upper bounds on the critical temperature improve on the results of effective-field-type theories.
Bounds for the Z-spectral radius of nonnegative tensors.
He, Jun; Liu, Yan-Min; Ke, Hua; Tian, Jun-Kang; Li, Xiang
2016-01-01
In this paper, we propose some new upper bounds for the largest Z-eigenvalue of an irreducible weakly symmetric and nonnegative tensor, which improve the known upper bounds obtained in Chang et al. (Linear Algebra Appl 438:4166-4182, 2013), Song and Qi (SIAM J Matrix Anal Appl 34:1581-1595, 2013), He and Huang (Appl Math Lett 38:110-114, 2014), Li et al. (J Comput Anal Appl 483:182-199, 2015), and He (J Comput Anal Appl 20:1290-1301, 2016).
Morphological representation of order-statistics filters.
Charif-Chefchaouni, M; Schonfeld, D
1995-01-01
We propose a comprehensive theory for the morphological bounds on order-statistics filters (and their repeated iterations). Conditions are derived for morphological openings and closings to serve as bounds (lower and upper, respectively) on order-statistics filters (and their repeated iterations). Under various assumptions, morphological open-closings and close-openings are also shown to serve as (tighter) bounds (lower and upper, respectively) on iterations of order-statistics filters. Simulations of the application of the results presented to image restoration are finally provided.
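For the classical one-dimensional case with a flat window shared by all filters, the opening and closing are guaranteed lower and upper bounds on the median filter; the paper's contribution is to derive conditions in far more general settings (arbitrary order statistics, structuring elements, and iterated filters). A quick numerical check of the simple case, assuming scipy is available:

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening, median_filter

rng = np.random.default_rng(1)
signal = rng.integers(0, 256, size=1000).astype(float)
size = 5  # flat 1-D window shared by all three filters

med = median_filter(signal, size=size, mode="nearest")
low = grey_opening(signal, size=size, mode="nearest")   # candidate lower bound
high = grey_closing(signal, size=size, mode="nearest")  # candidate upper bound

print("opening <= median everywhere:", bool(np.all(low <= med)))
print("median <= closing everywhere:", bool(np.all(med <= high)))
```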
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, Sarah; Bowman, Daniel; Rodgers, Arthur
2018-04-23
Here, this research uses the acoustic coda phase delay method to estimate relative changes in air temperature between explosions with varying event masses and heights of burst. It also places a bound on source–receiver distance for the method. Previous studies used events with different shapes, height of bursts, and masses and recorded the acoustic codas at source–receiver distances less than 1 km. This research further explores the method using explosions that differ in mass (by up to an order of magnitude) and are placed at varying heights. Source–receiver distances also cover an area out to 7 km. Relative air temperature change estimates are compared to complementary meteorological observations. Results show that two explosions that differ by an order of magnitude cannot be used with this method because their propagation times in the near field and their fundamental frequencies are different. These differences are expressed as inaccuracies in the relative air temperature change estimates. An order of magnitude difference in mass is also shown to bias estimates higher. Small differences in height of burst do not affect the accuracy of the method. Finally, an upper bound of 1 km on source–receiver distance is provided based on the standard deviation characteristics of the estimates.
Bootstrapping the (A1, A2) Argyres-Douglas theory
NASA Astrophysics Data System (ADS)
Cornagliotto, Martina; Lemos, Madalena; Liendo, Pedro
2018-03-01
We apply bootstrap techniques in order to constrain the CFT data of the ( A 1 , A 2) Argyres-Douglas theory, which is arguably the simplest of the Argyres-Douglas models. We study the four-point function of its single Coulomb branch chiral ring generator and put numerical bounds on the low-lying spectrum of the theory. Of particular interest is an infinite family of semi-short multiplets labeled by the spin ℓ. Although the conformal dimensions of these multiplets are protected, their three-point functions are not. Using the numerical bootstrap we impose rigorous upper and lower bounds on their values for spins up to ℓ = 20. Through a recently obtained inversion formula, we also estimate them for sufficiently large ℓ, and the comparison of both approaches shows consistent results. We also give a rigorous numerical range for the OPE coefficient of the next operator in the chiral ring, and estimates for the dimension of the first R-symmetry neutral non-protected multiplet for small spin.
The upper bound of Pier Scour defined by selected laboratory and field data
Benedict, Stephen; Caldwell, Andral W.
2015-01-01
The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted several field investigations of pier scour in South Carolina (Benedict and Caldwell, 2006; Benedict and Caldwell, 2009) and used that data to develop envelope curves defining the upper bound of pier scour. To expand upon this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with pier-scour data from other sources and evaluate the upper bound of pier scour with this larger data set. To facilitate this analysis, a literature review was made to identify potential sources of published pier-scour data, and selected data were compiled into a digital spreadsheet consisting of approximately 570 laboratory and 1,880 field measurements. These data encompass a wide range of laboratory and field conditions and represent field data from 24 states within the United States and six other countries. This extensive database was used to define the upper bound of pier-scour depth with respect to pier width, encompassing the laboratory and field data. Pier width is a primary variable that influences pier-scour depth (Laursen and Toch, 1956; Melville and Coleman, 2000; Mueller and Wagner, 2005; Ettema et al., 2011; Arneson et al., 2012) and, therefore, was used as the primary explanatory variable in developing the upper-bound envelope curve. The envelope curve provides a simple but useful tool for assessing the potential maximum pier-scour depth for pier widths of about 30 feet or less.
Bounds on the information rate of quantum-secret-sharing schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarvepalli, Pradeep
An important metric of the performance of a quantum-secret-sharing scheme is its information rate. Beyond the fact that the information rate is upper-bounded by one, very little is known in terms of bounds on the information rate of quantum-secret-sharing schemes. Furthermore, not every scheme can be realized with rate one. In this paper we derive upper bounds for the information rates of quantum-secret-sharing schemes. We show that there exist quantum access structures on n players for which the information rate cannot be better than O((log_2 n)/n). These results are the quantum analogues of the bounds for classical-secret-sharing schemes proved by Csirmaz.
Van Holle, Lionel; Bauchau, Vincent
2014-01-01
Purpose: For disproportionality measures based on the Relative Reporting Ratio (RRR), such as the Information Component (IC) and the Empirical Bayesian Geometrical Mean (EBGM), each product and event is assumed to represent a negligible fraction of the spontaneous report database (SRD). Here, we provide the tools for allowing signal detection experts to assess the consequence of the violation of this assumption on their specific SRD. Methods: For each product–event pair (P–E), a worst-case scenario associated all the reported events-of-interest with the product of interest. The values of the RRR under this scenario were measured for different sets of stratification factors using the GlaxoSmithKline vaccines SRD. These values represent the upper bound that the RRR cannot exceed, whatever the true strength of association. Results: Depending on the choice of stratification factors, the RRR could not exceed an upper bound of 2 for up to 2.4% of the P–Es. For Engerix™, 23.4% of all reports in the SRD, the RRR could not exceed an upper bound of 2 for up to 13.8% of pairs. For the P–E Rotarix™-Intussusception, the choice of stratification factors impacted the upper bound on the RRR: from 52.5 for an unstratified RRR to 2.0 for a fully stratified RRR. Conclusions: The quantification of the upper bound can indicate whether measures such as EBGM, IC, or RRR can be used for an SRD in which products or events represent a non-negligible fraction of the entire database. In addition, at the level of the product or P–E, it can also highlight the detrimental impact of overstratification. PMID:24395594
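Assuming the usual observed-to-expected definition of the RRR (not restated in the abstract), the worst-case scenario described, in which every report of the event is attributed to the product, gives an upper bound that depends only on the product's share of the database. A small sketch with hypothetical counts:

```python
def rrr(n_product_event, n_product, n_event, n_total):
    """Relative Reporting Ratio: observed / expected-under-independence
    (the usual definition; assumed here, not taken from the paper)."""
    expected = n_product * n_event / n_total
    return n_product_event / expected

def rrr_upper_bound(n_product, n_event, n_total):
    """Worst case: every report of the event also mentions the product."""
    worst_n11 = min(n_product, n_event)
    return rrr(worst_n11, n_product, n_event, n_total)

# Hypothetical database: a product making up ~23% of all reports
n_total, n_product, n_event = 100_000, 23_400, 150
print("observed RRR   :", round(rrr(40, n_product, n_event, n_total), 2))
print("RRR upper bound:", round(rrr_upper_bound(n_product, n_event, n_total), 2))
# For a product this prevalent the bound is ~ n_total / n_product ~ 4.3,
# so disproportionality scores can never look strongly elevated.
```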
Probing the size of extra dimensions with gravitational wave astronomy
NASA Astrophysics Data System (ADS)
Yagi, Kent; Tanahashi, Norihiro; Tanaka, Takahiro
2011-04-01
In the Randall-Sundrum II braneworld model, it has been conjectured, according to the AdS/CFT correspondence, that a brane-localized black hole (BH) larger than the bulk AdS curvature scale ℓ cannot be static, and it is dual to a four-dimensional BH emitting Hawking radiation through some quantum fields. In this scenario, the number of quantum field species is so large that this radiation changes the orbital evolution of a BH binary. We derived the correction to the gravitational waveform phase due to this effect and estimated the upper bounds on ℓ by performing Fisher analyses. We found that the Deci-Hertz Interferometer Gravitational Wave Observatory and the Big Bang Observatory (DECIGO/BBO) can give a stronger constraint than the current tabletop result by detecting gravitational waves from small mass BH/BH and BH/neutron star (NS) binaries. Furthermore, DECIGO/BBO is expected to detect 10^5 BH/NS binaries per year. Taking advantage of this, we find that DECIGO/BBO can actually measure ℓ down to ℓ=0.33μm for a 5 yr observation if we know that binaries are circular a priori. This is about 40 times smaller than the upper bound obtained from the tabletop experiment. On the other hand, when we include eccentricities among the binary parameters, the detection limit weakens to ℓ=1.5μm due to strong degeneracies between ℓ and eccentricities. We also derived the upper bound on ℓ from the expected detection number of extreme mass ratio inspirals with LISA and BH/NS binaries with DECIGO/BBO, extending the discussion made recently by McWilliams [Phys. Rev. Lett. 104, 141601 (2010)]. We found that these less robust constraints are weaker than the ones from phase differences.
Bounds of memory strength for power-law series.
Guo, Fangjian; Yang, Dan; Yang, Zimo; Zhao, Zhi-Dan; Zhou, Tao
2017-05-01
Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents α. By permuting the independently drawn samples from a power-law distribution, we present nontrivial bounds on the memory strength (first-order autocorrelation) as a function of α, which are markedly different from the ordinary ±1 bounds for Gaussian or uniform distributions. When 1<α≤3, as α grows bigger, the upper bound increases from 0 to +1 while the lower bound remains 0; when α>3, the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on the posts on Twitter, ratings of MovieLens, calling records of the mobile operator Orange, and the browsing behavior of Taobao, we find that empirical power-law-distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks and challenge the validity of measures such as autocorrelation and assortativity coefficient in heterogeneous systems.
Bound of dissipation on a plane Couette dynamo
NASA Astrophysics Data System (ADS)
Alboussière, Thierry
2009-06-01
Variational turbulence is among the few approaches providing rigorous results in turbulence. In addition, it addresses a question of direct practical interest, namely, the rate of energy dissipation. Unfortunately, only an upper bound is obtained as a larger functional space than the space of solutions to the Navier-Stokes equations is searched. Yet, in some cases, this upper bound is in good agreement with experimental results in terms of order of magnitude and power law of the imposed Reynolds number. In this paper, the variational approach to turbulence is extended to the case of dynamo action and an upper bound is obtained for the global dissipation rate (viscous and Ohmic). A simple plane Couette flow is investigated. For low magnetic Prandtl number Pm fluids, the upper bound of energy dissipation is that of classical turbulence (i.e., proportional to the cubic power of the shear velocity) for magnetic Reynolds numbers below Pm^-1 and follows a steeper evolution for magnetic Reynolds numbers above Pm^-1 (i.e., proportional to the shear velocity to the power of 4) in the case of electrically insulating walls. However, the effect of wall conductance is crucial: for a given value of wall conductance, there is a value for the magnetic Reynolds number above which energy dissipation cannot be bounded. This limiting magnetic Reynolds number is inversely proportional to the square root of the conductance of the wall. Implications in terms of energy dissipation in experimental and natural dynamos are discussed.
Limitations of the background field method applied to Rayleigh-Bénard convection
NASA Astrophysics Data System (ADS)
Nobili, Camilla; Otto, Felix
2017-09-01
We consider Rayleigh-Bénard convection as modeled by the Boussinesq equations, in the case of infinite Prandtl number and with no-slip boundary conditions. There is a broad interest in bounds of the upwards heat flux, as given by the Nusselt number Nu, in terms of the forcing via the imposed temperature difference, as given by the Rayleigh number, in the turbulent regime Ra ≫ 1. In several studies, the background field method applied to the temperature field has been used to provide upper bounds on Nu in terms of Ra. In these applications, the background field method comes in the form of a variational problem where one optimizes a stratified temperature profile subject to a certain stability condition; the method is believed to capture the marginal stability of the boundary layer. The best available upper bound via this method is Nu ≲ Ra^{1/3} (ln Ra)^{1/15}; it proceeds via the construction of a stable temperature background profile that increases logarithmically in the bulk. In this paper, we show that the background temperature field method cannot provide a tighter upper bound in terms of the power of the logarithm. However, by another method, one does obtain the tighter upper bound Nu ≲ Ra^{1/3} (ln ln Ra)^{1/3}, so that the result of this paper implies that the background temperature field method is unphysical in the sense that it cannot provide the optimal bound.
Investigation of geomagnetic field forecasting and fluid dynamics of the core
NASA Technical Reports Server (NTRS)
Benton, E. R. (Principal Investigator)
1981-01-01
The magnetic determination of the depth of the core-mantle boundary using MAGSAT data is discussed. Refinements to the approach of using the pole-strength of Earth to evaluate the radius of the Earth's core-mantle boundary are reported. The downward extrapolation through the electrically conducting mantle was reviewed. Estimates of an upper bound for the time required for Earth's liquid core to overturn completely are presented. High order analytic approximations to the unsigned magnetic flux crossing the Earth's surface are also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Datta, Nilanjana, E-mail: n.datta@statslab.cam.ac.uk; Hsieh, Min-Hsiu, E-mail: Min-Hsiu.Hsieh@uts.edu.au; Oppenheim, Jonathan, E-mail: j.oppenheim@ucl.ac.uk
State redistribution is the protocol in which, given an arbitrary tripartite quantum state, with two of the subsystems initially being with Alice and one being with Bob, the goal is for Alice to send one of her subsystems to Bob, possibly with the help of prior shared entanglement. We derive an upper bound on the second order asymptotic expansion for the quantum communication cost of achieving state redistribution with a given finite accuracy. In proving our result, we also obtain an upper bound on the quantum communication cost of this protocol in the one-shot setting, by using the protocol of coherent state merging as a primitive.
Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.
2014-01-01
This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Four axioms alone guarantee the optimality in terms of the error estimators. Compared to the state of the art in the contemporary literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution, and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson 2007. Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390
NASA Astrophysics Data System (ADS)
Baker, David M. H.; Head, James W.
2015-11-01
The mid-latitudes of Mars are host to a record of recent episodes of accumulation of ice-rich materials. The record includes debris aprons, interpreted to be debris-covered glaciers, that may represent the preserved remnants of a much more extensive ice sheet. We assessed the possibility of former glacial extents by examining debris aprons and the surrounding plains in Deuteronilus Mensae. Geomorphic units and stratigraphic relationships were mapped and documented from Mars Reconnaissance Orbiter (MRO) Context (CTX) and High Resolution Imaging Science Experiment (HiRISE) camera images, and crater retention ages were estimated from crater size-frequency distributions. Three major units are observed within the study area: debris aprons, lower plains, and upper plains. Debris aprons exhibit characteristics typical for these features documented elsewhere and in previous studies, including integrated flow lineations and patterns, convex-upward profiles, and knobby and brain terrain surface textures. A lower bound on the age for debris aprons is estimated to be 0.9 Ga. Debris aprons are superposed on a lower plains unit having a lower bound age of 3.3-3.5 Ga. A 50-100 m thick upper plains unit superposes both debris apron landforms and lower plains units and has a best-fit minimum age of 0.6 Ga. The upper plains unit exhibits characteristics of atmospherically-emplaced mantle material, including fine-grained nature, sublimation textures, cyclic layering, draping character, and widespread spatial distribution. Fracturing and subsequent sublimation/erosion of upper plains on debris aprons has contributed to many of the surface textures on debris aprons. The upper plains unit has also been eroded from the lower plains and plateaus, evidenced by isolated blocks of upper plains in the interiors of craters and on the walls and tops of plateaus. While no conclusive evidence diagnostic of former cold-based ice sheets is observed in the plains within the study region, such landforms and units may have been poorly developed or absent, as is often the case on Earth, and would have been covered and reworked by later mantling episodes. These observations suggest that emplacement of thick ice-rich mantle deposits extended at least to near the Early/Middle Amazonian boundary and overlapped with the waning stages of glaciation in Deuteronilus Mensae.
NASA Astrophysics Data System (ADS)
Røising, Henrik Schou; Simon, Steven H.
2018-03-01
Topological insulator surfaces in proximity to superconductors have been proposed as a way to produce Majorana fermions in condensed matter physics. One of the simplest proposed experiments with such a system is Majorana interferometry. Here we consider two possibly conflicting constraints on the size of such an interferometer. Coupling of a Majorana mode from the edge (the arms) of the interferometer to vortices in the center of the device sets a lower bound on the size of the device. On the other hand, scattering to the usually imperfectly insulating bulk sets an upper bound. From estimates of experimental parameters, we find that typical samples may have no size window in which the Majorana interferometer can operate, implying that a new generation of more highly insulating samples must be explored.
Multiprocessor speed-up, Amdahl's Law, and the Activity Set Model of parallel program behavior
NASA Technical Reports Server (NTRS)
Gelenbe, Erol
1988-01-01
An important issue in the effective use of parallel processing is the estimation of the speed-up one may expect as a function of the number of processors used. Amdahl's Law has traditionally provided a guideline to this issue, although it appears excessively pessimistic in the light of recent experimental results. In this note, Amdahl's Law is amended by giving greater importance to the capacity of a program to make effective use of parallel processing, while also recognizing that imbalance in the workload across processors is bound to occur. An activity set model of parallel program behavior is then introduced, along with the corresponding parallelism index of a program, leading to upper and lower bounds on the speed-up.
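For reference, the classical Amdahl speed-up formula that the note takes as its starting point, with a purely hypothetical imbalance multiplier standing in for the idea that processor workloads are never perfectly balanced (the note's activity-set model is more detailed than this):

```python
def amdahl_speedup(parallel_fraction, n_processors, imbalance=1.0):
    """Classical Amdahl's-Law speed-up.

    `imbalance` >= 1 is a hypothetical multiplier on the parallel part,
    crudely standing in for workload imbalance across processors
    (illustrative only; the note's activity-set model is more detailed).
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + imbalance * parallel_fraction / n_processors)

for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2),
          round(amdahl_speedup(0.95, n, imbalance=1.2), 2))
```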
Improved bounds on the energy-minimizing strains in martensitic polycrystals
NASA Astrophysics Data System (ADS)
Peigney, Michaël
2016-07-01
This paper is concerned with the theoretical prediction of the energy-minimizing (or recoverable) strains in martensitic polycrystals, considering a nonlinear elasticity model of phase transformation at finite strains. The main results are some rigorous upper bounds on the set of energy-minimizing strains. Those bounds depend on the polycrystalline texture through the volume fractions of the different orientations. The simplest form of the bounds presented is obtained by combining recent results for single crystals with a homogenization approach proposed previously for martensitic polycrystals. However, the polycrystalline bound delivered by that procedure may fail to recover the monocrystalline bound in the homogeneous limit, as is demonstrated in this paper by considering an example related to tetragonal martensite. This motivates the development of a more detailed analysis, leading to improved polycrystalline bounds that are notably consistent with results for single crystals in the homogeneous limit. A two-orientation polycrystal of tetragonal martensite is studied as an illustration. In that case, analytical expressions of the upper bounds are derived and the results are compared with lower bounds obtained by considering laminate textures.
Updated estimates of the climate response to emissions and their policy implications (Invited)
NASA Astrophysics Data System (ADS)
Allen, M. R.; Otto, A.; Stocker, T. F.; Frame, D. J.
2013-12-01
We review the implications of observations of the global energy budget over recent decades, particularly the 'warming hiatus' period over the 2000s, for key climate system properties including equilibrium climate sensitivity (ECS), transient climate response (TCR), and transient climate response to cumulative carbon emissions (TCRE). We show how estimates of the upper bound of ECS remain, as ever, sensitive to prior assumptions and also how ECS, even if it were better constrained, would provide much less information about the social cost of carbon than TCR or TCRE. Hence the excitement over recent, apparently conflicting, estimates of ECS is almost entirely misplaced. Of greater potential policy significance is the fact that recent observations imply a modest (of order 25%) downward revision in the upper bound and most likely values of TCR and TCRE, as compared to some, but not all, of the estimates published in the mid-2000s. This is partly due to the recent reduced rate of warming, and partly due to revisions in estimates of total anthropogenic forcing to date. Both of these developments may turn out to be short-lived, so the policy implications of this modest revision in TCR/TCRE should not be over-sold; nevertheless, it is interesting to explore what they are. The implications for climate change adaptation of a 25% downward revision in TCR and TCRE are minimal, being overshadowed by uncertainty due to internal variability and non-CO2 climate forcings over typical timescales for adaptation planning. We introduce a simple framework for assessing the implications for mitigation in terms of the timing of peak emissions and the average rates of emission reduction required to avoid specific levels of peak warming. We show that, as long as emissions continue to increase approximately exponentially, the implications for mitigation of any revisions in the climate response are surprisingly small.
Solving Open Job-Shop Scheduling Problems by SAT Encoding
NASA Astrophysics Data System (ADS)
Koshimura, Miyuki; Nabeshima, Hidetomo; Fujita, Hiroshi; Hasegawa, Ryuzo
This paper tries to solve open Job-Shop Scheduling Problems (JSSP) by translating them into Boolean Satisfiability Testing Problems (SAT). The encoding method is essentially the same as the one proposed by Crawford and Baker. The open problems are ABZ8, ABZ9, YN1, YN2, YN3, and YN4. We proved that the best known upper bounds, 678 for ABZ9 and 884 for YN1, are indeed optimal. We also improved the upper bound of YN2 and the lower bounds of ABZ8, YN2, YN3, and YN4.
Upper and lower bounds for semi-Markov reliability models of reconfigurable systems
NASA Technical Reports Server (NTRS)
White, A. L.
1984-01-01
This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.
The Laughlin liquid in an external potential
NASA Astrophysics Data System (ADS)
Rougerie, Nicolas; Yngvason, Jakob
2018-04-01
We study natural perturbations of the Laughlin state arising from the effects of trapping and disorder. These are N-particle wave functions that have the form of a product of Laughlin states and analytic functions of the N variables. We derive an upper bound to the ground state energy in a confining external potential, matching exactly a recently derived lower bound in the large N limit. Irrespective of the shape of the confining potential, this sharp upper bound can be achieved through a modification of the Laughlin function by suitably arranged quasi-holes.
Determining Normal-Distribution Tolerance Bounds Graphically
NASA Technical Reports Server (NTRS)
Mezzacappa, M. A.
1983-01-01
Graphical method requires calculations and table lookup. Distribution is established from only three points: the upper and lower confidence bounds on the mean and the lower confidence bound on the standard deviation. Method requires only a few calculations with simple equations. Graphical procedure establishes best-fit line for measured data and bounds for selected confidence level and any distribution percentile.
Butler, Troy; Wildey, Timothy
2018-01-01
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
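A minimal sketch of the reliability idea as described: a sample is treated as reliable when the surrogate's error estimate cannot change which side of the limit state it falls on, and the high-fidelity model is called only for the remaining samples. The two models, the error estimate, and the threshold below are hypothetical stand-ins, not the authors' test problems.

```python
import numpy as np

rng = np.random.default_rng(2)

def high_fidelity(x):          # expensive "truth" model (hypothetical)
    return np.sin(3 * x) + 0.3 * x

def surrogate(x):              # cheap approximation (hypothetical)
    return np.sin(3 * x) + 0.3 * x + 0.05 * np.cos(20 * x)

def error_estimate(x):         # assumed bound on |surrogate - high_fidelity|
    return np.full_like(x, 0.05)

threshold = 0.8                # event of interest: q(x) > threshold
x = rng.uniform(0, 2 * np.pi, 10_000)

q_s, err = surrogate(x), error_estimate(x)
reliable = np.abs(q_s - threshold) > err       # error cannot flip the outcome
q = np.where(reliable, q_s, np.nan)
q[~reliable] = high_fidelity(x[~reliable])     # re-evaluate only unreliable samples

p_event = np.mean(q > threshold)
print(f"P(event) = {p_event:.4f}, "
      f"high-fidelity calls: {np.count_nonzero(~reliable)} of {x.size}")
```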
The upper bound of abutment scour defined by selected laboratory and field data
Benedict, Stephen; Caldwell, Andral W.
2015-01-01
The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted a field investigation of abutment scour in South Carolina and used that data to develop envelope curves defining the upper bound of abutment scour. To expand upon this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with abutment-scour data from other sources and evaluate the upper bound of abutment scour with the larger data set. To facilitate this analysis, a literature review was made to identify potential sources of published abutment-scour data, and selected data, consisting of 446 laboratory and 331 field measurements, were compiled for the analysis. These data encompassed a wide range of laboratory and field conditions and represent field data from 6 states within the United States. The data set was used to evaluate the South Carolina abutment-scour envelope curves. Additionally, the data were used to evaluate a dimensionless abutment-scour envelope curve developed by Melville (1992), highlighting the distinct difference in the upper bound for laboratory and field data. The envelope curves evaluated in this investigation provide simple but useful tools for assessing the potential maximum abutment-scour depth in the field setting.
A fresh look into the interacting dark matter scenario
NASA Astrophysics Data System (ADS)
Escudero, Miguel; Lopez-Honorez, Laura; Mena, Olga; Palomares-Ruiz, Sergio; Villanueva-Domingo, Pablo
2018-06-01
The elastic scattering between dark matter particles and radiation represents an attractive possibility to solve a number of discrepancies between observations and standard cold dark matter predictions, as the induced collisional damping would imply a suppression of small-scale structures. We consider this scenario and confront it with measurements of the ionization history of the Universe at several redshifts and with recent estimates of the counts of Milky Way satellite galaxies. We derive a conservative upper bound on the dark matter-photon elastic scattering cross section of σ_γDM < 8 × 10^-10 σ_T (m_DM/GeV) at 95% CL, about one order of magnitude tighter than previous constraints from satellite number counts. Due to the strong degeneracies with astrophysical parameters, the bound on the dark matter-photon scattering cross section derived here is driven by the estimate of the number of Milky Way satellite galaxies. Finally, we also argue that future 21 cm probes could help in disentangling among possible non-cold dark matter candidates, such as interacting and warm dark matter scenarios. Let us emphasize that bounds of similar magnitude to the ones obtained here could be also derived for models with dark matter-neutrino interactions and would be as constraining as the tightest limits on such scenarios.
Upper limit set by causality on the tidal deformability of a neutron star
NASA Astrophysics Data System (ADS)
Van Oeveren, Eric D.; Friedman, John L.
2017-04-01
A principal goal of gravitational-wave astronomy is to constrain the neutron star equation of state (EOS) by measuring the tidal deformability of neutron stars. The tidally induced departure of the waveform from that of a point particle [or a spinless binary black hole (BBH)] increases with the stiffness of the EOS. We show that causality (the requirement that the speed of sound be less than the speed of light for a perfect fluid satisfying a one-parameter equation of state) places an upper bound on tidal deformability as a function of mass. Like the upper mass limit, the limit on deformability is obtained by using an EOS with v_sound = c for high densities and matching to a low density (candidate) EOS at a matching density of order nuclear saturation density. We use these results and those of Lackey et al. [Phys. Rev. D 89, 043009 (2014), 10.1103/PhysRevD.89.043009] to estimate the resulting upper limit on the gravitational-wave phase shift of a black hole-neutron star (BHNS) binary relative to a BBH. Even for assumptions weak enough to allow a maximum mass of 4 M⊙ (a match at nuclear saturation density to an unusually stiff low-density candidate EOS), the upper limit on dimensionless tidal deformability is stringent. It leads to a still more stringent estimated upper limit on the maximum tidally induced phase shift prior to merger. We comment in an appendix on the relation among causality, the condition v_sound
Computational experience with a parallel algorithm for tetrangle inequality bound smoothing.
Rajan, K; Deo, N
1999-09-01
Determining molecular structure from interatomic distances is an important and challenging problem. Given a molecule with n atoms, lower and upper bounds on interatomic distances can usually be obtained only for a small subset of the n(n-1)/2 atom pairs, using NMR. Given the bounds so obtained on the distances between some of the atom pairs, it is often useful to compute tighter bounds on all the n(n-1)/2 pairwise distances. This process is referred to as bound smoothing. The initial lower and upper bounds for the pairwise distances not measured are usually assumed to be 0 and infinity. One method for bound smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality--the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. For every quadruple of atoms, each pass of the tetrangle inequality bound smoothing procedure finds upper and lower limits on each of the six distances in the quadruple. Applying the tetrangle inequalities to each of the C(n,4) quadruples requires O(n^4) time. Here, we propose a parallel algorithm for bound smoothing employing the tetrangle inequality. Each pass of our algorithm requires O(n^3 log n) time on a CREW PRAM (Concurrent Read Exclusive Write Parallel Random Access Machine) with O(n/log n) processors. An implementation of this parallel algorithm on the Intel Paragon XP/S and its performance are also discussed.
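The tetrangle procedure builds on the simpler triangle-inequality bound smoothing mentioned above. A sketch of that baseline pass (Floyd-Warshall style) is given below; the Cayley-Menger/tetrangle stage itself is not reproduced here, and the toy bounds are made up for illustration.

```python
import numpy as np

def triangle_bound_smoothing(U, L):
    """Tighten upper bounds U and lower bounds L on pairwise distances
    using the triangle inequality (the baseline for the tetrangle procedure).

    U, L are symmetric (n, n) arrays; unmeasured pairs start at
    U = inf, L = 0. Bounds are tightened in place, Floyd-Warshall style.
    """
    n = U.shape[0]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # upper bound: d(i,j) <= d(i,k) + d(k,j)
                U[i, j] = min(U[i, j], U[i, k] + U[k, j])
                # lower bound: d(i,j) >= max(L(i,k) - U(k,j), L(k,j) - U(i,k))
                L[i, j] = max(L[i, j], L[i, k] - U[k, j], L[k, j] - U[i, k])
    return U, L

# Three measured pairs out of a 4-atom toy system; the rest unmeasured.
inf = np.inf
U = np.array([[0, 2.0, inf, 5.0],
              [2.0, 0, 3.0, inf],
              [inf, 3.0, 0, inf],
              [5.0, inf, inf, 0]])
L = np.array([[0, 1.5, 0, 4.0],
              [1.5, 0, 2.5, 0],
              [0, 2.5, 0, 0],
              [4.0, 0, 0, 0]])
U_s, L_s = triangle_bound_smoothing(U.copy(), L.copy())
print(U_s)
print(L_s)
```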
Symmetry Parameter Constraints from a Lower Bound on Neutron-matter Energy
NASA Astrophysics Data System (ADS)
Tews, Ingo; Lattimer, James M.; Ohnishi, Akira; Kolomeitsev, Evgeni E.
2017-10-01
We propose the existence of a lower bound on the energy of pure neutron matter (PNM) on the basis of unitary-gas considerations. We discuss its justification from experimental studies of cold atoms as well as from theoretical studies of neutron matter. We demonstrate that this bound results in limits to the density-dependent symmetry energy, which is the difference between the energies of symmetric nuclear matter and PNM. In particular, this bound leads to a lower limit to the volume symmetry energy parameter S_0. In addition, for assumed values of S_0 above this minimum, this bound implies both upper and lower limits to the symmetry energy slope parameter L, which describes the lowest-order density dependence of the symmetry energy. A lower bound on neutron-matter incompressibility is also obtained. These bounds are found to be consistent with both recent calculations of the energies of PNM and constraints from nuclear experiments. Our results are significant because several equations of state that are currently used in astrophysical simulations of supernovae and neutron star mergers, as well as in nuclear physics simulations of heavy-ion collisions, have symmetry energy parameters that violate these bounds. Furthermore, below the nuclear saturation density, the bound on neutron-matter energies leads to a lower limit to the density-dependent symmetry energy, which leads to upper limits to the nuclear surface symmetry parameter and the neutron-star crust-core boundary. We also obtain a lower limit to the neutron-skin thicknesses of neutron-rich nuclei. Above the nuclear saturation density, the bound on neutron-matter energies also leads to an upper limit to the symmetry energy, with implications for neutron-star cooling via the direct Urca process.
Contamination of U.S. Butter with Polybrominated Diphenyl Ethers from Wrapping Paper
Schecter, Arnold; Smith, Sarah; Colacino, Justin; Malik, Noor; Opel, Matthias; Paepke, Olaf; Birnbaum, Linda
2011-01-01
Objectives: Our aim was to report the first known incidence of U.S. butter contamination with extremely high levels of polybrominated diphenyl ethers (PBDEs). Methods: Ten butter samples were individually analyzed for PBDEs. One of the samples and its paper wrapper contained very high levels of higher-brominated PBDEs. Dietary estimates were calculated using the 2007 U.S. Department of Agriculture Loss-Adjusted Food Availability data, excluding the elevated sample. Results: The highly contaminated butter sample had a total upper bound PBDE level of 42,252 pg/g wet weight (ww). Levels of brominated diphenyl ether (BDE)-206, -207, and -209 were 2,000, 2,290, and 37,600 pg/g ww, respectively. Its wrapping paper contained a total upper-bound PBDE concentration of 804,751 pg/g ww, with levels of BDE-206, -207, and -209 of 51,000, 11,700, and 614,000 pg/g, respectively. Total PBDE levels in the remaining nine butter samples ranged from 180 to 1,212 pg/g, with geometric mean of 483 and median of 284 pg/g. Excluding the outlier, total PBDE daily intake from all food was 22,764 pg/day, lower than some previous U.S. dietary intake estimates. Conclusion: Higher-brominated PBDE congeners were likely transferred from contaminated wrapping paper to butter. A larger representative survey may help determine how frequently PBDE contamination occurs. Sampling at various stages in food production may identify contamination sources and reduce risk. PMID:21138809
Lu, Chunling; Chu, Annie; Li, Zhihui; Shen, Jian; Subramanian, S V; Hill, Kenneth
2017-01-01
The majority of Countdown countries did not reach the fourth Millennium Development Goal (MDG 4) on reducing child mortality, despite the fact that donor funding to the health sector has drastically increased. When tracking aid invested in child survival, previous studies have exclusively focused on aid targeting reproductive, maternal, newborn, and child health (RMNCH). We take a multi-sectoral approach and extend the estimation to the four sectors that determine child survival: health (RMNCH and non-RMNCH), education, water and sanitation, and food and humanitarian assistance (Food/HA). Using donor reported data, obtained mainly from the OECD Creditor Reporting System and Development Assistance Committee, we tracked the level and trends of aid (in grants or loans) disbursed to each of the four sectors at the global, regional, and country levels. We performed detailed analyses on missing data and conducted imputation with various methods. To identify aid projects for RMNCH, we developed an identification strategy that combined keyword searches and manual coding. To quantify aid for RMNCH in projects with multiple purposes, we adopted an integrated approach and produced the lower and upper bounds of estimates for RMNCH, so as to avoid making assumptions or using weak evidence for allocation. We checked the sensitivity of trends to the estimation methods and compared our estimates to that produced by other studies. Our study yielded time-series and recipient-specific annual estimates of aid disbursed to each sector, as well as their lower- and upper-bounds in 134 countries between 2000 and 2014, with a specific focus on Countdown countries. We found that the upper-bound estimates of total aid disbursed to the four sectors in 134 countries rose from US$ 22.62 billion in 2000 to US$ 59.29 billion in 2014, with the increase occurring in all income groups and regions with sub-Saharan Africa receiving the largest sum. Aid to RMNCH has experienced the fastest growth (12.4%), followed by aid to Food/HA (9.4%), education (5.1%), and water and sanitation (5.0%). With the exception of RMNCH, the average per capita aid disbursed to each sector in the 74 Countdown countries was smaller than in non-Countdown countries. While countries with a large number of child deaths tend to receive the largest amount of disbursements, non-Countdown countries with small populations usually received the highest level of per capita aid for child survival among all 134 countries. Compared to other Countdown countries, those that met MDG 4 with a high reliance on health aid received much higher per capita aid across all sectors. These findings are robust to estimation methods. The study suggests that to improve child survival, better targeted investments should be made in the four sectors, and aid to non-health sectors could be a possible contributor to child mortality reduction. We recommend that future studies on tracking aid for child survival go beyond the health sector and include other sectors that directly affect child survival. Investigation should also be made about the link between aid to each of the four sectors and child mortality reduction.
Einstein-Podolsky-Rosen steering: Its geometric quantification and witness
NASA Astrophysics Data System (ADS)
Ku, Huan-Yu; Chen, Shin-Liang; Budroni, Costantino; Miranowicz, Adam; Chen, Yueh-Nan; Nori, Franco
2018-02-01
We propose a measure of quantum steerability, namely, a convex steering monotone, based on the trace distance between a given assemblage and its corresponding closest assemblage admitting a local-hidden-state (LHS) model. We provide methods to estimate such a quantity, via lower and upper bounds, based on semidefinite programming. One of these upper bounds has a clear geometrical interpretation as a linear function of rescaled Euclidean distances in the Bloch sphere between the normalized quantum states of (i) a given assemblage and (ii) an LHS assemblage. For a qubit-qubit quantum state, these ideas also allow us to visualize various steerability properties of the state in the Bloch sphere via the so-called LHS surface. In particular, some steerability properties can be obtained by comparing such an LHS surface with a corresponding quantum steering ellipsoid. Thus, we propose a witness of steerability corresponding to the difference of the volumes enclosed by these two surfaces. This witness (which reveals the steerability of a quantum state) enables one to find an optimal measurement basis, which can then be used to determine the proposed steering monotone (which describes the steerability of an assemblage) optimized over all mutually unbiased bases.
Kamiura, Moto; Sano, Kohei
2017-10-01
The principle of optimism in the face of uncertainty is known as a heuristic in sequential decision-making problems. The Overtaking method, based on this principle, is an effective algorithm for solving multi-armed bandit problems. In a previous study it was defined by a set of heuristic formulation patterns. The objective of the present paper is to redefine the value functions of the Overtaking method and to unify their formulation. The unified Overtaking method is associated with statistical upper bounds of confidence intervals of expected rewards. The unification of the formulation enhances the universality of the Overtaking method. Consequently, we obtain a new Overtaking method for exponentially distributed rewards, analyze it numerically, and show that it outperforms the UCB algorithm on average. The present study suggests that, in the context of multi-armed bandit problems, the principle of optimism in the face of uncertainty should be regarded not as a heuristic but as the statistics-based consequence of the law of large numbers for the sample mean of rewards and the estimation of upper bounds of expected rewards.
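The UCB algorithm used as the comparison baseline ranks arms by a sample mean plus an upper confidence term. A sketch of standard UCB1 with exponentially distributed rewards (the reward model studied in the paper) follows; it illustrates the baseline only, not the Overtaking method itself, and the reward means are arbitrary.

```python
import math
import random

def ucb1(pull, n_arms, horizon, seed=0):
    """Standard UCB1: play the arm with the largest
    mean + sqrt(2 ln t / n_i) upper confidence index."""
    random.seed(seed)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:                       # play each arm once first
            arm = t - 1
        else:
            arm = max(range(n_arms),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total

# Exponentially distributed rewards with illustrative means.
means = [1.0, 1.5, 0.7]
horizon = 10_000
total = ucb1(lambda i: random.expovariate(1.0 / means[i]), len(means), horizon)
print("mean reward per pull:", total / horizon)
```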
Complexity, Heuristic, and Search Analysis for the Games of Crossings and Epaminondas
2014-03-27
research in Artificial Intelligence (Section 2.1) and why games are studied (Section 2.2). Section 2.3 discusses how games are played and solved. Abbreviations used include UCT (Upper Confidence Bounds applied to Trees), HUCT (Heuristic Guided UCT), LOA (Lines of Action), UCB (Upper Confidence Bound), and RAVE.
Grootendorst, Paul; Matteo, Livio Di
2007-01-01
While pharmaceutical patent terms have increased in Canada, increases in patented drug spending have been mitigated by price controls and retrenchment of public prescription drug subsidy programs. We estimate the net effects of these offsetting policies on domestic pharmaceutical R&D expenditures and also provide an upper-bound estimate on the effects of these policies on Canadian pharmaceutical spending over the period 1988–2002. We estimate that R&D spending increased by $4.4 billion (1997 dollars). Drug spending increased by $3.9 billion at most and, quite likely, by much less. Cutbacks to public drug subsidies and the introduction of price controls likely mitigated drug spending growth. In cost–benefit terms, we suspect that the patent extension policies have been beneficial to Canada. PMID:19305720
On relating apparent stress to the stress causing earthquake fault slip
McGarr, A.
1999-01-01
Apparent stress τ_a is defined as τ_a = η·τ̄, where τ̄ is the average shear stress loading the fault plane to cause slip and η is the seismic efficiency, defined as E_a/W, where E_a is the energy radiated seismically and W is the total energy released by the earthquake. The results of a recent study in which apparent stresses of mining-induced earthquakes were compared to those measured for laboratory stick-slip friction events led to the hypothesis that τ_a/τ̄ ≲ 0.06. This hypothesis is tested here against a substantially augmented data set of earthquakes for which τ̄ can be estimated, mostly from in situ stress measurements, for comparison with τ_a. The expanded data set, which includes earthquakes artificially triggered at a depth of 9 km in the German Kontinentales Tiefbohrprogramm der Bundesrepublik Deutschland (KTB) borehole and natural tectonic earthquakes, covers a broad range of hypocentral depths, rock types, pore pressures, and tectonic settings. Nonetheless, over ~14 orders of magnitude in seismic moment, apparent stresses exhibit distinct upper bounds defined by a maximum seismic efficiency of ~0.06, consistent with the hypothesis proposed before. This behavior of τ_a and η can be expressed in terms of two parameters measured for stick-slip friction events in the laboratory: the ratio of the static to the dynamic coefficient of friction and the fault slip overshoot. Typical values for these two parameters yield seismic efficiencies of ~0.06. In contrast to efficiencies for laboratory events, for which η is always near 0.06, those for earthquakes tend to be less than this bounding value because E_a for earthquakes is usually underestimated due to factors such as band-limited recording. Thus upper bounds on τ_a/τ̄ appear to be controlled by just a few fundamental aspects of frictional stick-slip behavior that are common to shallow earthquakes everywhere. Estimates of τ̄ from measurements of τ_a for suites of earthquakes, using τ_a/τ̄ ≈ 0.06, are found to be comparable in magnitude to estimates of shear stress on the basis of extrapolating in situ stress data to seismogenic depths.
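A minimal numeric sketch of the last step, using the standard operational definition τ_a = μ E_a / M_0 (an assumption here, not stated in the abstract) together with the bounding efficiency of about 0.06 to back out a lower bound on the loading shear stress; all input values are illustrative:

```python
MU = 3.0e10        # crustal shear modulus in Pa (assumed)
ETA_MAX = 0.06     # bounding seismic efficiency from stick-slip friction

def apparent_stress(radiated_energy, seismic_moment, mu=MU):
    """tau_a = mu * E_a / M_0 (standard operational definition, assumed here)."""
    return mu * radiated_energy / seismic_moment

def loading_stress_lower_bound(tau_a, eta_max=ETA_MAX):
    """tau_a = eta * tau_bar with eta <= eta_max implies tau_bar >= tau_a / eta_max."""
    return tau_a / eta_max

tau_a = apparent_stress(radiated_energy=3.0e12, seismic_moment=1.0e17)  # J, N*m
print(f"apparent stress      ~ {tau_a / 1e6:.2f} MPa")
print(f"loading shear stress >= {loading_stress_lower_bound(tau_a) / 1e6:.1f} MPa")
```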
Multivariate Lipschitz optimization: Survey and computational comparison
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, P.; Gourdin, E.; Jaumard, B.
1994-12-31
Many methods have been proposed to minimize a multivariate Lipschitz function on a box. They pertain to three approaches: (i) reduction to the univariate case by projection (Pijavskii) or by using a space-filling curve (Strongin); (ii) construction and refinement of a single upper bounding function (Pijavskii, Mladineo, Mayne and Polak, Jaumard, Hermann and Ribault, Wood...); (iii) branch and bound with local upper bounding functions (Galperin, Pintér, Meewella and Mayne, the present authors). A survey is made, stressing similarities of algorithms, expressed when possible within a unified framework. Moreover, an extensive computational comparison is reported.
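A rough sketch of approach (ii) in one dimension: a Piyavskii-type sawtooth bound is built from the Lipschitz constant and refined where the bound is most promising. It is written here for minimisation, so the sawtooth is a minorant of f; the grid-based evaluation, test function, and Lipschitz constant are simplifications for illustration, not any of the surveyed codes:

```python
import numpy as np

def lipschitz_minimize(f, a, b, lip, n_iter=40, grid=4001):
    """Simplified Piyavskii-type minimisation of a Lipschitz function on [a, b]."""
    xs = np.linspace(a, b, grid)
    sample_x = [a, b]
    sample_f = [f(a), f(b)]
    for _ in range(n_iter):
        # Sawtooth minorant: f(x) >= max_i  f(x_i) - lip * |x - x_i|.
        bound = np.max(
            np.array(sample_f)[:, None] - lip * np.abs(xs - np.array(sample_x)[:, None]),
            axis=0,
        )
        x_next = xs[np.argmin(bound)]      # refine where the lower bound is smallest
        sample_x.append(x_next)
        sample_f.append(f(x_next))
    best = int(np.argmin(sample_f))
    # Incumbent value and a certified global lower bound on the minimum.
    return sample_x[best], sample_f[best], float(bound.min())

print(lipschitz_minimize(lambda x: np.sin(3 * x) + 0.5 * x, 0.0, 4.0, lip=3.5))
```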
Numerical and analytical bounds on threshold error rates for hypergraph-product codes
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.
2018-06-01
We study analytically and numerically decoding properties of finite-rate hypergraph-product quantum low density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.
Upper bounds on the photon mass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Accioly, Antonio; Group of Field Theory from First Principles, Sao Paulo State University; Instituto de Fisica Teorica
2010-09-15
The effects of a nonzero photon rest mass can be incorporated into electromagnetism in a simple way using the Proca equations. In this vein, two interesting implications regarding the possible existence of a massive photon in nature, i.e., tiny alterations in the known values of both the anomalous magnetic moment of the electron and the gravitational deflection of electromagnetic radiation, are utilized to set upper limits on its mass. The bounds obtained are not as stringent as those recently found; nonetheless, they are comparable to other existing bounds and bring new elements to the issue of restricting the photon mass.
Diamond, Sarah E
2017-02-01
How will organisms respond to climate change? The rapid changes in global climate are expected to impose strong directional selection on fitness-related traits. A major open question then is the potential for adaptive evolutionary change under these shifting climates. At the most basic level, evolutionary change requires the presence of heritable variation and natural selection. Because organismal tolerances of high temperature place an upper bound on responding to temperature change, there has been a surge of research effort on the evolutionary potential of upper thermal tolerance traits. Here, I review the available evidence on heritable variation in upper thermal tolerance traits, adopting a biogeographic perspective to understand how heritability of tolerance varies across space. Specifically, I use meta-analytical models to explore the relationship between upper thermal tolerance heritability and environmental variability in temperature. I also explore how variation in the methods used to obtain these thermal tolerance heritabilities influences the estimation of heritable variation in tolerance. I conclude by discussing the implications of a positive relationship between thermal tolerance heritability and environmental variability in temperature and how this might influence responses to future changes in climate. © 2016 New York Academy of Sciences.
Upper bound on the Abelian gauge coupling from asymptotic safety
NASA Astrophysics Data System (ADS)
Eichhorn, Astrid; Versteegen, Fleur
2018-01-01
We explore the impact of asymptotically safe quantum gravity on the Abelian gauge coupling in a model including a charged scalar, confirming indications that asymptotically safe quantum fluctuations of gravity could trigger a power-law running towards a free fixed point for the gauge coupling above the Planck scale. Simultaneously, quantum gravity fluctuations balance against matter fluctuations to generate an interacting fixed point, which acts as a boundary of the basin of attraction of the free fixed point. This enforces an upper bound on the infrared value of the Abelian gauge coupling. In the regime of gravity couplings which in our approximation also allows for a prediction of the top quark and Higgs mass close to the experimental value [1], we obtain an upper bound approximately 35% above the infrared value of the hypercharge coupling in the Standard Model.
Limits of Gaussian fluctuations in the cosmic microwave background at 19.2 GHz
NASA Technical Reports Server (NTRS)
Boughn, S. P.; Cheng, E. S.; Cottingham, D. A.; Fixsen, D. J.
1992-01-01
The Northern Hemisphere data from the 19.2 GHz full sky survey are analyzed to place limits on the magnitude of Gaussian fluctuations in the cosmic microwave background implied by a variety of correlation functions. Included among the models tested are the monochromatic and Gaussian-shaped families, and those with power-law spectra for n values between -2 and 1. An upper bound is placed on the quadrupole anisotropy of ΔT/T < 3.2 × 10^-5 rms, and an upper bound on scale-invariant (n = 1) fluctuations of a_2 < 4.5 × 10^-5 (95 percent confidence level). There is significant contamination of these data from Galactic emission, and improvement of the modeling of the Galaxy could yield a significant reduction of these upper bounds.
Limits on Gaussian fluctuations in the cosmic microwave background at 19.2 GHz
NASA Technical Reports Server (NTRS)
Boughn, S. P.; Cheng, E. S.; Cottingham, D. A.; Fixsen, D. J.
1991-01-01
The Northern Hemisphere data from the 19.2 GHz full sky survey are analyzed to place limits on the magnitude of Gaussian fluctuations in the cosmic microwave background implied by a variety of correlation functions. Included among the models tested are the monochromatic and Gaussian-shaped families, and those with power-law spectra for n from -2 to 1. We place an upper bound on the quadrupole anisotropy of ΔT/T < 3.2 × 10^-5 rms, and an upper bound on scale-invariant (n = 1) fluctuations of a_2 < 4.5 × 10^-5 (95 percent confidence level). There is significant contamination of these data from Galactic emission, and improvement of our modeling of the Galaxy could yield a significant reduction of these upper bounds.
Complexity Bounds for Quantum Computation
2007-06-22
This project focused on upper and lower bounds for quantum computability using constant ... classical computation models, particularly emphasizing new examples of where quantum circuits are more powerful than their classical counterparts. A second ...
Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration
NASA Astrophysics Data System (ADS)
Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola
In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques. Concretely, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results prove that the consistency checking method provides an upper bound of the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field to estimate the SRT fields. A classification between regional normal and dysfunctional contraction patterns, as compared with experts' diagnoses, points out that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.
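Only the last step, forming the tensor from an already-estimated motion field, is easy to sketch generically: the strain rate tensor is the symmetrized spatial gradient of the velocity field. The toy field, grid spacing, and 2-D restriction below are assumptions for illustration, not the authors' registration pipeline:

```python
import numpy as np

def strain_rate_tensor(vx, vy, dx=1.0, dy=1.0):
    """SRT = 0.5 * (grad v + grad v^T) for a 2-D velocity field, per pixel."""
    dvx_dy, dvx_dx = np.gradient(vx, dy, dx)   # d(vx)/dy, d(vx)/dx
    dvy_dy, dvy_dx = np.gradient(vy, dy, dx)   # d(vy)/dy, d(vy)/dx
    srt = np.empty(vx.shape + (2, 2))
    srt[..., 0, 0] = dvx_dx
    srt[..., 1, 1] = dvy_dy
    srt[..., 0, 1] = srt[..., 1, 0] = 0.5 * (dvx_dy + dvy_dx)
    return srt

# Toy motion field standing in for a registration-derived myocardial velocity field.
y, x = np.mgrid[0:64, 0:64].astype(float)
vx, vy = 0.01 * x, -0.01 * y
print(strain_rate_tensor(vx, vy)[32, 32])
```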
Dioxins, Furans and PCBs in Recycled Water for Indirect Potable Reuse
Rodriguez, Clemencia; Cook, Angus; Devine, Brian; Van Buynder, Paul; Lugg, Richard; Linge, Kathryn; Weinstein, Philip
2008-01-01
An assessment of potential health impacts of dioxin and dioxin-like compounds in recycled water for indirect potable reuse was conducted. Toxic equivalency factors (TEFs) for 2,3,7,8-substituted polychlorinated dibenzo-p-dioxins (PCDD) and dibenzofurans (PCDFs) and dioxin-like polychlorinated biphenyls (PCBs) congeners have been developed by the World Health Organization to simplify the risk assessment of complex mixtures. Samples of secondary treated wastewater in Perth, Australia were examined pre- and post-tertiary treatment in one full-scale and one pilot water reclamation plant. Risk quotients (RQs) were estimated by expressing the middle-bound toxic equivalent (TEQ) and the upper-bound TEQ concentration at each sampling point as a function of the estimated health target value. The results indicate that reverse osmosis (RO) is able to reduce the concentration of PCDD, PCDF and dioxin-like PCBs and produce water of high quality (RQ after RO = 0.15). No increased human health risk from dioxin and dioxin-like compounds is anticipated if highly treated recycled water is used to augment drinking water supplies in Perth. Recommendations for a verification monitoring program are offered. PMID:19151430
Sensor selection cost optimisation for tracking structurally cyclic systems: a P-order solution
NASA Astrophysics Data System (ADS)
Doostmohammadian, M.; Zarrabi, H.; Rabiee, H. R.
2017-08-01
Measurements and sensing implementations impose certain costs in sensor networks. The sensor selection cost optimisation is the problem of minimising the sensing cost of monitoring a physical (or cyber-physical) system. Consider a given set of sensors tracking states of a dynamical system for estimation purposes. For each sensor, assume different costs to measure different (realisable) states. The idea is to assign sensors to measure states such that the global cost is minimised. The number and selection of sensor measurements need to ensure the observability to track the dynamic state of the system with bounded estimation error. The main question we address is how to select the state measurements to minimise the cost while satisfying the observability conditions. Relaxing the observability condition for structurally cyclic systems, the main contribution is to propose a graph-theoretic approach to solve the problem in polynomial time. Note that polynomial time algorithms are suitable for large-scale systems as their running time is upper-bounded by a polynomial expression in the size of the input for the algorithm. We frame the problem as a linear sum assignment with solution complexity of ?.
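The final reduction is to a linear sum assignment, which an off-the-shelf solver handles directly. The cost matrix below is fabricated, the infeasible-pair handling is a simplification, and the Hungarian-type solver is generic tooling rather than the paper's own graph-theoretic construction:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = cost for sensor i to measure state j (illustrative values);
# np.inf marks a state that sensor i cannot realise.
cost = np.array([
    [4.0, 1.0, np.inf],
    [2.0, 3.0, 5.0],
    [np.inf, 2.0, 1.0],
])

# Replace infeasible pairs by a large finite penalty so the solver stays well-defined.
finite_cost = np.where(np.isfinite(cost), cost, 1e6)

sensors, states = linear_sum_assignment(finite_cost)
print(list(zip(sensors.tolist(), states.tolist())), finite_cost[sensors, states].sum())
```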
NASA Astrophysics Data System (ADS)
Hartman, Thomas; Hartnoll, Sean A.; Mahajan, Raghu
2017-10-01
The linear growth of operators in local quantum systems leads to an effective light cone even if the system is nonrelativistic. We show that the consistency of diffusive transport with this light cone places an upper bound on the diffusivity: D ≲ v^2 τ_eq. The operator growth velocity v defines the light cone, and τ_eq is the local equilibration time scale, beyond which the dynamics of conserved densities is diffusive. We verify that the bound is obeyed in various weakly and strongly interacting theories. In holographic models, this bound establishes a relation between the hydrodynamic and leading nonhydrodynamic quasinormal modes of planar black holes. Our bound relates transport data—including the electrical resistivity and the shear viscosity—to the local equilibration time, even in the absence of a quasiparticle description. In this way, the bound sheds light on the observed T-linear resistivity of many unconventional metals, the shear viscosity of the quark-gluon plasma, and the spin transport of unitary fermions.
NASA Astrophysics Data System (ADS)
Kulkarni, Girish; Subrahmanyam, V.; Jha, Anand K.
2016-06-01
We study how one-particle correlations transfer to manifest as two-particle correlations in the context of parametric down-conversion (PDC), a process in which a pump photon is annihilated to produce two entangled photons. We work in the polarization degree of freedom and show that for any two-qubit generation process that is both trace-preserving and entropy-nondecreasing, the concurrence C(ρ) of the generated two-qubit state ρ follows an intrinsic upper bound with C(ρ) ≤ (1+P)/2, where P is the degree of polarization of the pump photon. We also find that for the class of two-qubit states that is restricted to have only two nonzero diagonal elements such that the effective dimensionality of the two-qubit state is the same as the dimensionality of the pump polarization state, the upper bound on concurrence is the degree of polarization itself, that is, C(ρ) ≤ P. Our work shows that the maximum manifestation of two-particle correlations as entanglement is dictated by one-particle correlations. The formalism developed in this work can be extended to include multiparticle systems and can thus have important implications towards deducing the upper bounds on multiparticle entanglement, for which no universally accepted measure exists.
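The bound can be checked numerically with the Wootters concurrence formula; the example state and pump degree of polarization below are invented, so this only illustrates how one would compare C(ρ) against (1+P)/2, not the PDC generation process itself:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    spin_flip = np.kron(sy, sy)
    rho_tilde = spin_flip @ rho.conj() @ spin_flip
    evals = np.linalg.eigvals(rho @ rho_tilde).real
    lam = np.sort(np.sqrt(np.clip(evals, 0.0, None)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Example: a Bell state mixed with white noise (Werner-like state).
bell = np.zeros((4, 1), dtype=complex)
bell[0, 0] = bell[3, 0] = 1 / np.sqrt(2)
p = 0.9
rho = p * (bell @ bell.conj().T) + (1 - p) * np.eye(4) / 4

P_pump = 0.95   # assumed degree of polarization of the pump
print(concurrence(rho), "<=", (1 + P_pump) / 2)
```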
Near-Earth water sources: Ethics and fairness
NASA Astrophysics Data System (ADS)
Schwartz, James S. J.
2016-08-01
There is a small finite upper bound on the amount of easily accessible water in near-Earth space, including water from C-type NEAs and permanently shadowed lunar craters. Recent estimates put this total at about 3.7 × 10^12 kg. Given the non-renewable nature of this resource, we should begin thinking carefully about the regulation of near-Earth water sources (NEWS). This paper discusses this issue from an ethical vantage point, and argues that for the foreseeable future, the scientific use of NEWS should be prioritized over other potential uses of NEWS.
Evaluation of Ares-I Control System Robustness to Uncertain Aerodynamics and Flex Dynamics
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; VanTassel, Chris; Bedrossian, Nazareth; Hall, Charles; Spanos, Pol
2008-01-01
This paper discusses the application of robust control theory to evaluate robustness of the Ares-I control systems. Three techniques for estimating upper and lower bounds of uncertain parameters which yield stable closed-loop response are used here: (1) Monte Carlo analysis, (2) mu analysis, and (3) characteristic frequency response analysis. All three methods are used to evaluate stability envelopes of the Ares-I control systems with uncertain aerodynamics and flex dynamics. The results show that characteristic frequency response analysis is the most effective of these methods for assessing robustness.
Wang, Leimin; Shen, Yi; Sheng, Yin
2016-04-01
This paper is concerned with the finite-time robust stabilization of delayed neural networks (DNNs) in the presence of discontinuous activations and parameter uncertainties. By using the nonsmooth analysis and control theory, a delayed controller is designed to realize the finite-time robust stabilization of DNNs with discontinuous activations and parameter uncertainties, and the upper bound of the settling time functional for stabilization is estimated. Finally, two examples are provided to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Johnston, P. H.
2008-01-01
This activity seeks to estimate a theoretical upper bound of detectability for a layer of oxide embedded in a friction stir weld in aluminum. The oxide is theoretically modeled as an ideal planar layer of aluminum oxide, oriented normal to an interrogating ultrasound beam. Experimentally-measured grain scattering level is used to represent the practical noise floor. Echoes from naturally-occurring oxides will necessarily fall below this theoretical limit, and must be above the measurement noise to be potentially detectable.
Some conservative estimates in quantum cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2006-08-15
A relationship is established between the security of the BB84 quantum key distribution protocol and the forward and converse coding theorems for quantum communication channels. The upper bound Q_c ≈ 11% on the bit error rate compatible with secure key distribution is determined by solving the transcendental equation H(Q_c) = C̄(ρ)/2, where ρ is the density matrix of the input ensemble, C̄(ρ) is the classical capacity of a noiseless quantum channel, and H(Q) is the capacity of a classical binary symmetric channel with error rate Q.
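A short sketch of solving the transcendental equation numerically: interpreting H as the binary entropy function and taking C̄(ρ) = 1 for a maximally mixed input ensemble (both are assumptions made here only for illustration), bisection recovers the quoted ≈11% threshold:

```python
import math

def binary_entropy(q):
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def critical_qber(c_bar=1.0, tol=1e-12):
    """Solve H(Q_c) = c_bar / 2 for Q_c in (0, 1/2) by bisection (H is increasing there)."""
    target = c_bar / 2
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if binary_entropy(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(critical_qber())   # ~0.110, i.e. the ~11% bit error rate quoted above
```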
Estimation of periodic solutions number of first-order differential equations
NASA Astrophysics Data System (ADS)
Ivanov, Gennady; Alferov, Gennady; Gorovenko, Polina; Sharlay, Artem
2018-05-01
The paper deals with first-order differential equations under the assumption that the right-hand side is a periodic function of time and continuous in the set of arguments. V. A. Pliss obtained the first results for a particular class of equations and showed that a number of theorems cannot be continued. In this paper, it was possible to reduce the restrictions on the degree of smoothness of the right-hand side of the equation and obtain upper and lower bounds on the number of possible periodic solutions.
Backstepping Design of Adaptive Neural Fault-Tolerant Control for MIMO Nonlinear Systems.
Gao, Hui; Song, Yongduan; Wen, Changyun
In this paper, an adaptive controller is developed for a class of multi-input and multioutput nonlinear systems with neural networks (NNs) used as a modeling tool. It is shown that all the signals in the closed-loop system with the proposed adaptive neural controller are globally uniformly bounded for any external input in . In our control design, the upper bound of the NN modeling error and the gains of external disturbance are characterized by unknown upper bounds, which is more rational to establish the stability in the adaptive NN control. Filter-based modification terms are used in the update laws of unknown parameters to improve the transient performance. Finally, fault-tolerant control is developed to accommodate actuator failure. An illustrative example applying the adaptive controller to control a rigid robot arm shows the validation of the proposed controller.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, L.C.; Crouch, E.A.C.; Lester, R.R.
1996-12-31
The authors analyze here the dose-response data generated from the seminal bioassay of 2,3,7,8-tetrachlorodibenzo-p-dioxin (2,3,7,8-TCDD) in Sprague-Dawley rats, reported by Kociba and coworkers. That chronic toxicity and oncogenicity study showed 2,3,7,8-TCDD to increase the incidence of certain tumors, while decreasing the incidence of others. Further, results in female rats were markedly different from those in male rats--a result ascribed to the dependence of dioxin on estrogen for some of its toxic effects. For each sex, the authors analyze each tumor type on which 2,3,7,8-TCDD has, or might have, an effect, whether positive, negative, or neutral. After generating dose-response relationships for each tumor type, the authors combine them. The combination involves simply adding the slopes of each tumor-specific dose-response relationship. They perform separate analyses for each set of dose-ranges. They also calculate upper (and lower) bounds on the maximum likelihood estimates, using the upper 95th percentile estimates for the slopes of the net dose-response relationships as conservative estimates of carcinogenic potency.
Examination of shipping package 9975-04985
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daugherty, W. L.
Package 9975-04985 was examined following the identification of several unexpected conditions during surveillance activities. A heavy layer of corrosion product on the shield and the shield outer diameter being larger than allowed by drawing tolerances contributed to a very tight fit between the upper fiberboard assembly and shield. The average corrosion rate for the shield is estimated to be 0.0018 inch/year or less, which falls within the bounding rate of 0.002 inch/year that has been previously recommended for these packages. Several apparent foreign objects were noted within the package. One object observed on the air shield was identified as tape. The other objects consisted mostly of fine fibers from the cane fiberboard. It is postulated that the upper and lower fiberboard assemblies were able to rub against each other due to the upper fiberboard assembly being held tight to the shield, and a few stray cane chips became frayed under vibratory motions.
Aqil, Muhammad; Jeong, Myung Yung
2018-04-24
The robust characterization of real-time brain activity carries potential for many applications. However, the contamination of measured signals by various instrumental, environmental, and physiological sources of noise introduces a substantial amount of signal variance and, consequently, challenges real-time estimation of contributions from underlying neuronal sources. Functional near-infrared spectroscopy (fNIRS) is an emerging imaging modality whose real-time potential is yet to be fully explored. The objectives of the current study are to (i) validate a time-dependent linear model of hemodynamic responses in fNIRS, and (ii) test the robustness of this approach against measurement noise (instrumental and physiological) and mis-specification of the hemodynamic response basis functions (amplitude, latency, and duration). We propose a linear hemodynamic model with time-varying parameters, which are estimated (adapted and tracked) using a dynamic recursive least squares algorithm. Owing to the linear nature of the activation model, the problem of achieving robust convergence to an accurate estimation of the model parameters is recast as a problem of parameter error stability around the origin. We show that robust convergence of the proposed method is guaranteed in the presence of an acceptable degree of model misspecification, and we derive an upper bound on noise under which reliable parameters can still be inferred. We also derive a lower bound on the signal-to-noise ratio over which reliable parameters can still be inferred from a channel/voxel. Whilst here applied to fNIRS, the proposed methodology is applicable to other hemodynamic-based imaging technologies such as functional magnetic resonance imaging. Copyright © 2018 Elsevier Inc. All rights reserved.
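The estimator described is, at its core, recursive least squares with tracking of time-varying parameters. Below is a generic sketch with an exponential forgetting factor; the regressor, drift, noise level, and forgetting factor are placeholders, not the authors' exact design:

```python
import numpy as np

class RecursiveLeastSquares:
    """Tracks time-varying parameters theta in y_t = x_t . theta + noise."""

    def __init__(self, n_params, forgetting=0.98, p0=1e3):
        self.theta = np.zeros(n_params)
        self.P = p0 * np.eye(n_params)   # parameter covariance estimate
        self.lam = forgetting            # < 1 discounts old samples, enabling tracking

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)
        self.theta = self.theta + gain * (y - x @ self.theta)
        self.P = (self.P - np.outer(gain, Px)) / self.lam
        return self.theta

rng = np.random.default_rng(1)
rls = RecursiveLeastSquares(n_params=2)
for t in range(500):
    regressor = np.sin(2 * np.pi * t / 100)        # stand-in for a hemodynamic regressor
    x = np.array([regressor, 1.0])                 # [response regressor, baseline]
    true_theta = np.array([1.0 + 0.001 * t, 0.2])  # slowly drifting amplitude
    y = x @ true_theta + 0.05 * rng.standard_normal()
    rls.update(x, y)
print(rls.theta)
```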
Quantitative Compactness Estimates for Hamilton-Jacobi Equations
NASA Astrophysics Data System (ADS)
Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.
2016-02-01
We study quantitative compactness estimates in $W^{1,1}_{\mathrm{loc}}$ for the map $S_t$, $t > 0$, that is associated with the given initial data $u_0 \in \mathrm{Lip}(\mathbb{R}^N)$ for the corresponding solution $S_t u_0$ of a Hamilton-Jacobi equation $u_t + H(\nabla_x u) = 0$, $t \ge 0$, $x \in \mathbb{R}^N$, with a uniformly convex Hamiltonian $H = H(p)$. We provide upper and lower estimates of order $1/\varepsilon^N$ on the Kolmogorov $\varepsilon$-entropy in $W^{1,1}$ of the image through the map $S_t$ of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws. XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.
On the upper bound in the Bohm sheath criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotelnikov, I. A., E-mail: I.A.Kotelnikov@inp.nsk.su; Skovorodin, D. I., E-mail: D.I.Skovorodin@inp.nsk.su
2016-02-15
The question is discussed about the existence of an upper bound in the Bohm sheath criterion, according to which the Debye sheath at the interface between plasma and a negatively charged electrode is stable only if the ion flow velocity in plasma exceeds the ion sound velocity. It is stated that, with an exception of some artificial ionization models, the Bohm sheath criterion is satisfied as an equality at the lower bound and the ion flow velocity is equal to the speed of sound. In the one-dimensional theory, a supersonic flow appears in an unrealistic model of a localized ion source the size of which is less than the Debye length; however, supersonic flows seem to be possible in the two- and three-dimensional cases. In the available numerical codes used to simulate charged particle sources with a plasma emitter, the presence of the upper bound in the Bohm sheath criterion is not supposed; however, the correspondence with experimental data is usually achieved if the ion flow velocity in plasma is close to the ion sound velocity.
Lower Bounds to the Reliabilities of Factor Score Estimators.
Hessen, David J
2016-10-06
Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone's factor score estimators, Bartlett's factor score estimators, and McDonald's factor score estimators are derived and conditions are given under which these lower bounds are equal. The relative performance of the derived lower bounds is studied using classic example data sets. The results show that estimates of the lower bounds to the reliabilities of Thurstone's factor score estimators are greater than or equal to the estimates of the lower bounds to the reliabilities of Bartlett's and McDonald's factor score estimators.
Computer search for binary cyclic UEP codes of odd length up to 65
NASA Technical Reports Server (NTRS)
Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu
1990-01-01
Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.
NASA Astrophysics Data System (ADS)
Thole, B. T.; Van Duijnen, P. Th.
1982-10-01
The induction and dispersion terms obtained from quantum-mechanical calculations with a direct reaction field Hamiltonian are compared to second-order perturbation theory expressions. The dispersion term is shown to give an upper bound which is a generalization of Alexander's upper bound. The model is illustrated by a calculation on the interactions in the water dimer. The long-range Coulomb, induction and dispersion interactions are reasonably reproduced.
On the Kirchhoff Index of Graphs
NASA Astrophysics Data System (ADS)
Das, Kinkar C.
2013-09-01
Let G be a connected graph of order n with Laplacian eigenvalues μ_1 ≥ μ_2 ≥ ... ≥ μ_{n-1} > μ_n = 0. The Kirchhoff index of G is defined as Kf(G) = n Σ_{i=1}^{n-1} 1/μ_i. In this paper, we give lower and upper bounds on Kf of graphs in terms of n, the number of edges, the maximum degree, and the number of spanning trees. Moreover, we present lower and upper bounds on the Nordhaus-Gaddum-type result for the Kirchhoff index.
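Numerically, Kf follows directly from the Laplacian spectrum; a small sketch using networkx/numpy as assumed tooling (the cycle graph is just a test case):

```python
import numpy as np
import networkx as nx

def kirchhoff_index(G):
    """Kf(G) = n * sum of reciprocals of the nonzero Laplacian eigenvalues (G connected)."""
    n = G.number_of_nodes()
    L = nx.laplacian_matrix(G).toarray().astype(float)
    mu = np.linalg.eigvalsh(L)
    nonzero = mu[mu > 1e-9]          # drop the single zero eigenvalue
    return n * float(np.sum(1.0 / nonzero))

# Equals the sum of effective resistances over all vertex pairs (10 for the 5-cycle).
print(kirchhoff_index(nx.cycle_graph(5)))
```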
Upper bound of pier scour in laboratory and field data
Benedict, Stephen; Caldwell, Andral W.
2016-01-01
The U.S. Geological Survey (USGS), in cooperation with the South Carolina Department of Transportation, conducted several field investigations of pier scour in South Carolina and used the data to develop envelope curves defining the upper bound of pier scour. To expand on this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with pier scour data from other sources and to evaluate upper-bound relations with this larger data set. To facilitate this analysis, 569 laboratory and 1,858 field measurements of pier scour were compiled to form the 2014 USGS Pier Scour Database. This extensive database was used to develop an envelope curve for the potential maximum pier scour depth encompassing the laboratory and field data. The envelope curve provides a simple but useful tool for assessing the potential maximum pier scour depth for effective pier widths of about 30 ft or less.
Objects of Maximum Electromagnetic Chirality
NASA Astrophysics Data System (ADS)
Fernandez-Corbaton, Ivan; Fruhnert, Martin; Rockstuhl, Carsten
2016-07-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. Reciprocal objects attain the upper bound if and only if they are transparent for all the fields of one polarization handedness (helicity). Additionally, electromagnetic duality symmetry, i.e., helicity preservation upon interaction, turns out to be a necessary condition for reciprocal objects to attain the upper bound. We use these results to provide requirements for the design of such extremal objects. The requirements can be formulated as constraints on the polarizability tensors for dipolar objects or on the material constitutive relations for continuous media. We also outline two applications for objects of maximum electromagnetic chirality: a twofold resonantly enhanced and background-free circular dichroism measurement setup, and angle-independent helicity filtering glasses. Finally, we use the theoretically obtained requirements to guide the design of a specific structure, which we then analyze numerically and discuss its performance with respect to maximal electromagnetic chirality.
Exact lower and upper bounds on stationary moments in stochastic biochemical systems
NASA Astrophysics Data System (ADS)
Ghusinga, Khem Raj; Vargas-Garcia, Cesar A.; Lamperski, Andrew; Singh, Abhyudai
2017-08-01
In the stochastic description of biochemical reaction systems, the time evolution of statistical moments for species population counts is described by a linear dynamical system. However, except for some ideal cases (such as zero- and first-order reaction kinetics), the moment dynamics is underdetermined as lower-order moments depend upon higher-order moments. Here, we propose a novel method to find exact lower and upper bounds on stationary moments for a given arbitrary system of biochemical reactions. The method exploits the fact that statistical moments of any positive-valued random variable must satisfy some constraints that are compactly represented through the positive semidefiniteness of moment matrices. Our analysis shows that solving moment equations at steady state in conjunction with constraints on moment matrices provides exact lower and upper bounds on the moments. These results are illustrated by three different examples—the commonly used logistic growth model, stochastic gene expression with auto-regulation and an activator-repressor gene network motif. Interestingly, in all cases the accuracy of the bounds is shown to improve as moment equations are expanded to include higher-order moments. Our results provide avenues for development of approximation methods that provide explicit bounds on moments for nonlinear stochastic systems that are otherwise analytically intractable.
The end of the MACHO era, revisited: New limits on MACHO masses from halo wide binaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monroy-Rodríguez, Miguel A.; Allen, Christine, E-mail: chris@astro.unam.mx
2014-08-01
In order to determine an upper bound for the mass of the massive compact halo objects (MACHOs), we use the halo binaries contained in a recent catalog by Allen and Monroy-Rodríguez. To dynamically model their interactions with massive perturbers, a Monte Carlo simulation is conducted, using an impulsive approximation method and assuming a galactic halo constituted by massive particles of a characteristic mass. The results of such simulations are compared with several subsamples of our improved catalog of candidate halo wide binaries. In accordance with Quinn et al., we also find our results to be very sensitive to the widest binaries. However, our larger sample, together with the fact that we can obtain galactic orbits for 150 of our systems, allows a more reliable estimate of the maximum MACHO mass than that obtained previously. If we employ the entire sample of 211 candidate halo stars, we obtain an upper limit of 112 M_⊙. However, using the 150 binaries in our catalog with computed galactic orbits, we are able to refine our fitting criteria. Thus, for the 100 most halo-like binaries we obtain a maximum MACHO mass of 21-68 M_⊙. Furthermore, we can estimate the dynamical effects of the galactic disk using binary samples that spend progressively shorter times within the disk. By extrapolating the limits obtained for our most reliable—albeit smallest—sample, we find that as the time spent within the disk tends to zero, the upper bound of the MACHO mass tends to less than 5 M_⊙. The non-uniform density of the halo has also been taken into account, but the limit obtained, less than 5 M_⊙, does not differ much from the previous one. Together with microlensing studies that provide lower limits on the MACHO mass, our results essentially exclude the existence of such objects in the galactic halo.
NASA Astrophysics Data System (ADS)
Audenaert, Koenraad M. R.; Mosonyi, Milán
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ_1, …, σ_r. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ_1, …, σ_r), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, $\min_{j<k} C(\sigma_j, \sigma_k)$.
Micromechanical Modeling of Anisotropic Damage-Induced Permeability Variation in Crystalline Rocks
NASA Astrophysics Data System (ADS)
Chen, Yifeng; Hu, Shaohua; Zhou, Chuangbing; Jing, Lanru
2014-09-01
This paper presents a study on the initiation and progress of anisotropic damage and its impact on the permeability variation of crystalline rocks of low porosity. This work was based on an existing micromechanical model considering the frictional sliding and dilatancy behaviors of microcracks and the recovery of degraded stiffness when the microcracks are closed. By virtue of an analytical ellipsoidal inclusion solution, lower bound estimates were formulated through a rigorous homogenization procedure for the damage-induced effective permeability of the microcracks-matrix system, and their predictive limitations were discussed with superconducting penny-shaped microcracks, in which the greatest lower bounds were obtained for each homogenization scheme. On this basis, an empirical upper bound estimation model was suggested to account for the influences of anisotropic damage growth, connectivity, frictional sliding, dilatancy, and normal stiffness recovery of closed microcracks, as well as tensile stress-induced microcrack opening on the permeability variation, with a small number of material parameters. The developed model was calibrated and validated by a series of existing laboratory triaxial compression tests with permeability measurements on crystalline rocks, and applied for characterizing the excavation-induced damage zone and permeability variation in the surrounding granitic rock of the TSX tunnel at the Atomic Energy of Canada Limited's (AECL) Underground Research Laboratory (URL) in Canada, with an acceptable agreement between the predicted and measured data.
NASA Technical Reports Server (NTRS)
Goldman, Marvin; Hoover, Mark D.; Nelson, Robert C.; Templeton, William; Bollinger, Lance; Anspaugh, Lynn
1991-01-01
Potential radiation impacts from launch of the Ulysses solar exploration experiment were evaluated using eight postulated accident scenarios. Lifetime individual dose estimates rarely exceeded 1 mrem. Most of the potential health effects would come from inhalation exposures immediately after an accident, rather than from ingestion of contaminated food or water, or from inhalation of resuspended plutonium from contaminated ground. For local Florida accidents (that is, during the first minute after launch), an average source term accident was estimated to cause a total added cancer risk of up to 0.2 deaths. For accidents at later time after launch, a worldwide cancer risk of up to three cases was calculated (with a four in a million probability). Upper bound estimates were calculated to be about 10 times higher.
A sharp lower bound for the sum of a sine series with convex coefficients
NASA Astrophysics Data System (ADS)
Solodov, A. P.
2016-12-01
The sum of a sine series $g(\mathbf{b},x)=\sum_{k=1}^{\infty} b_k\sin kx$ with coefficients forming a convex sequence $\mathbf{b}$ is known to be positive on the interval $(0,\pi)$. Its values near zero are conventionally evaluated using the Salem function $v(\mathbf{b},x)=x\sum_{k=1}^{m(x)} kb_k$, $m(x)=[\pi/x]$. In this paper it is proved that $2\pi^{-2}v(\mathbf{b},x)$ is not a minorant for $g(\mathbf{b},x)$. The modified Salem function $v_0(\mathbf{b},x)=x\bigl(\sum_{k=1}^{m(x)-1} kb_k+\tfrac{1}{2}m(x)b_{m(x)}\bigr)$ is shown to satisfy the lower bound $g(\mathbf{b},x)>2\pi^{-2}v_0(\mathbf{b},x)$ in some right neighbourhood of zero. This estimate is shown to be sharp on the class of convex sequences $\mathbf{b}$. Moreover, the upper bound for $g(\mathbf{b},x)$ is refined on the class of monotone sequences $\mathbf{b}$. Bibliography: 11 titles.
Measurements of electron detection efficiencies in solid state detectors.
NASA Technical Reports Server (NTRS)
Lupton, J. E.; Stone, E. C.
1972-01-01
Detailed laboratory measurement of the electron response of solid state detectors as a function of incident electron energy, detector depletion depth, and energy-loss discriminator threshold. These response functions were determined by exposing totally depleted silicon surface barrier detectors with depletion depths between 50 and 1000 microns to the beam from a magnetic beta-ray spectrometer. The data were extended to 5000 microns depletion depth using the results of previously published Monte Carlo electron calculations. When the electron counting efficiency of a given detector is plotted as a function of energy-loss threshold for various incident energies, the efficiency curves are bounded by a smooth envelope which represents the upper limit to the detection efficiency. These upper limit curves, which scale in a simple way, make it possible to easily estimate the electron sensitivity of solid-state detector systems.
Gas production in the Barnett Shale obeys a simple scaling theory
Patzek, Tad W.; Male, Frank; Marder, Michael
2013-01-01
Natural gas from tight shale formations will provide the United States with a major source of energy over the next several decades. Estimates of gas production from these formations have mainly relied on formulas designed for wells with a different geometry. We consider the simplest model of gas production consistent with the basic physics and geometry of the extraction process. In principle, solutions of the model depend upon many parameters, but in practice and within a given gas field, all but two can be fixed at typical values, leading to a nonlinear diffusion problem we solve exactly with a scaling curve. The scaling curve production rate declines as 1 over the square root of time early on, and it later declines exponentially. This simple model provides a surprisingly accurate description of gas extraction from 8,294 wells in the United States’ oldest shale play, the Barnett Shale. There is good agreement with the scaling theory for 2,057 horizontal wells in which production started to decline exponentially in less than 10 y. The remaining 6,237 horizontal wells in our analysis are too young for us to predict when exponential decline will set in, but the model can nevertheless be used to establish lower and upper bounds on well lifetime. Finally, we obtain upper and lower bounds on the gas that will be produced by the wells in our sample, individually and in total. The estimated ultimate recovery from our sample of 8,294 wells is between 10 and 20 trillion standard cubic feet. PMID:24248376
Quijano, Leyre; Marín, Silvia; Millan, Encarnación; Yusà, Vicent; Font, Guillermina; Pardo, Olga
2018-04-01
Dietary exposure of the Valencia Region population to polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs) and PCBs was assessed in the Region of Valencia in 2010-2011. A total of 7700 food samples were collected. Occurrence data were combined with consumption data to estimate dietary exposure in adults (>15 years of age) and young people (6-15 years of age). The estimated intake was calculated by a probabilistic approach. Average intake levels (upper-bound scenario) were 1.58 and 2.76 pg toxic equivalent (TEQ) kg^-1 body weight (bw) day^-1 for adults and young people, respectively. These average intakes are within range of the tolerable daily intake of 1-4 pg WHO-TEQ kg^-1 bw day^-1 recommended by WHO, and slightly above the tolerable weekly intake (TWI) of 14 pg TEQ kg^-1 bw week^-1 and the provisional tolerable monthly intake of 70 pg TEQ kg^-1 bw month^-1 set by the Scientific Committee on Food and the Joint FAO/WHO Expert Committee on Food, respectively. These results show that the contamination levels in food and therefore the exposure of the general population to PCDD/Fs and PCBs have declined in this region and therefore show the efficiency of the European risk-management measures. In terms of risk characterisation, the results showed that, under the upper-bound scenario, 22% of the adult and 58% of the young people population could exceed the TWI.
Fast Inbound Top-K Query for Random Walk with Restart.
Zhang, Chao; Jiang, Shan; Chen, Yucheng; Sun, Yidan; Han, Jiawei
2015-09-01
Random walk with restart (RWR) is widely recognized as one of the most important node proximity measures for graphs, as it captures the holistic graph structure and is robust to noise in the graph. In this paper, we study a novel query based on the RWR measure, called the inbound top-k (Ink) query. Given a query node q and a number k, the Ink query aims at retrieving k nodes in the graph that have the largest weighted RWR scores to q. Ink queries can be highly useful for various applications such as traffic scheduling, disease treatment, and targeted advertising. Nevertheless, none of the existing RWR computation techniques can accurately and efficiently process the Ink query in large graphs. We propose two algorithms, namely Squeeze and Ripple, both of which can accurately answer the Ink query in a fast and incremental manner. To identify the top-k nodes, Squeeze iteratively performs matrix-vector multiplication and estimates the lower and upper bounds for all the nodes in the graph. Ripple employs a more aggressive strategy by only estimating the RWR scores for the nodes falling in the vicinity of q; the nodes outside the vicinity do not need to be evaluated because their RWR scores are propagated from the boundary of the vicinity and thus upper bounded. Ripple incrementally expands the vicinity until the top-k result set can be obtained. Our extensive experiments on real-life graph data sets show that Ink queries can retrieve interesting results, and the proposed algorithms are orders of magnitude faster than the state-of-the-art method.
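For orientation, the brute-force baseline that Squeeze and Ripple are designed to beat can be sketched directly: run an RWR from every candidate node and read off the mass it sends to q. The graph, restart probability, and k below are illustrative, and the sketch ignores the paper's per-node weights:

```python
import numpy as np

def rwr(P, restart_node, c=0.15, tol=1e-10, max_iter=1000):
    """RWR scores for a column-stochastic transition matrix P, restart probability c."""
    n = P.shape[0]
    e = np.zeros(n)
    e[restart_node] = 1.0
    r = e.copy()
    for _ in range(max_iter):
        r_new = (1 - c) * P @ r + c * e
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
    return r

def inbound_topk(P, q, k):
    """Brute force: the inbound score of node v is the RWR mass v sends to q."""
    scores = np.array([rwr(P, v)[q] for v in range(P.shape[0])])
    scores[q] = -np.inf                     # exclude the query node itself
    return np.argsort(scores)[::-1][:k]

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=0, keepdims=True)        # column-normalised adjacency
print(inbound_topk(P, q=0, k=2))
```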
Elimination of Onchocerciasis from Mexico.
Rodríguez-Pérez, Mario A; Fernández-Santos, Nadia A; Orozco-Algarra, María E; Rodríguez-Atanacio, José A; Domínguez-Vázquez, Alfredo; Rodríguez-Morales, Kristel B; Real-Najarro, Olga; Prado-Velasco, Francisco G; Cupp, Eddie W; Richards, Frank O; Hassan, Hassan K; González-Roldán, Jesús F; Kuri-Morales, Pablo A; Unnasch, Thomas R
2015-01-01
Mexico is one of the six countries formerly endemic for onchocerciasis in Latin America. Transmission has been interrupted in the three endemic foci of that country and mass drug distribution has ceased. Three years after mass drug distribution ended, post-treatment surveillance (PTS) surveys were undertaken which employed entomological indicators to check for transmission recrudescence. In-depth entomologic assessments were performed in 18 communities in the three endemic foci of Mexico. None of the 108,212 Simulium ochraceum s.l. collected from the three foci were found to contain parasite DNA when tested by polymerase chain reaction-enzyme-linked immunosorbent assay (PCR-ELISA), resulting in a maximum upper bound of the 95% confidence interval (95%-ULCI) of the infective rate in the vectors of 0.035/2,000 flies examined. This is an order of magnitude below the threshold of a 95%-ULCI of less than one infective fly per 2,000 flies tested, the current entomological criterion for interruption of transmission developed by the international community. The point estimate of seasonal transmission potential (STP) was zero, and the upper bound of the 95% confidence interval for the STP ranged from 1.2 to 1.7 L3/person/season in the different foci. This value is below all previous estimates for the minimum transmission potential required to maintain the parasite population. The results from the in-depth entomological post treatment surveillance surveys strongly suggest that transmission has not resumed in the three foci of Mexico during the three years since the last distribution of ivermectin occurred; it was concluded that transmission remains undetectable without intervention, and Onchocerca volvulus has been eliminated from Mexico.
Song, Yoon S; Koontz, John L; Juskelis, Rima O; Zhao, Yang
2013-01-01
The migration of low molecular weight organic compounds through polyethylene terephthalate (PET) films was determined by using a custom permeation cell assembly. Fatty food simulant (Miglyol 812) was added to the receptor chamber, while the donor chamber was filled with 1% and 10% (v/v) migrant compounds spiked in simulant. The permeation cell was maintained at 40°C, 66°C, 100°C or 121°C for up to 25 days of polymer film exposure time. Migrants in Miglyol were directly quantified without a liquid-liquid extraction step by headspace-GC-MS analysis. Experimental diffusion coefficients (DP) of toluene, benzyl alcohol, ethyl butyrate and methyl salicylate through PET film were determined. Results from Limm's diffusion model showed that the predicted DP values for PET were all greater than the experimental values. DP values predicted by Piringer's diffusion model were also greater than those determined experimentally at 66°C, 100°C and 121°C. However, Piringer's model led to the underestimation of benzyl alcohol (A_P' = 3.7) and methyl salicylate (A_P' = 4.0) diffusion at 40°C with its revised "upper-bound" A_P' value of 3.1 at temperatures below the glass transition temperature (Tg) of PET (<70°C). This implies that input parameters of Piringer's model may need to be revised to ensure a margin of safety for consumers. On the other hand, at temperatures greater than the Tg, both models appear too conservative and unrealistic. The highest estimated A_P' value from Piringer's model was 2.6 for methyl salicylate, which was much lower than the "upper-bound" A_P' value of 6.4 for PET. Therefore, it may be necessary further to refine "upper-bound" A_P' values for PET such that Piringer's model does not significantly underestimate or overestimate the migration of organic compounds dependent upon the temperature condition of the food contact material.
Solar System and stellar tests of a quantum-corrected gravity
NASA Astrophysics Data System (ADS)
Zhao, Shan-Shan; Xie, Yi
2015-09-01
The renormalization group running of the gravitational constant has a universal form and represents a possible extension of general relativity. These renormalization group effects on general relativity will cause the running of the gravitational constant, and there exists a scale of renormalization α_ν, which depends on the mass of an astronomical system and needs to be determined by observations. We test renormalization group effects on general relativity and obtain the upper bounds of α_ν in the low-mass scales: the Solar System and five systems of binary pulsars. Using the supplementary advances of the perihelia provided by INPOP10a (IMCCE, France) and EPM2011 (IAA RAS, Russia) ephemerides, we obtain new upper bounds on α_ν in the Solar System when the Lense-Thirring effect due to the Sun's angular momentum and the uncertainty of the Sun's quadrupole moment are properly taken into account. These two factors were absent in the previous work. We find that INPOP10a yields the upper bound as α_ν = (0.3 ± 2.8) × 10^-20 while EPM2011 gives α_ν = (-2.5 ± 8.3) × 10^-21. Both of them are tighter than the previous result by 4 orders of magnitude. Furthermore, based on the observational data sets of five systems of binary pulsars: PSR J0737-3039, PSR B1534+12, PSR J1756-2251, PSR B1913+16, and PSR B2127+11C, the upper bound is found as α_ν = (-2.6 ± 5.1) × 10^-17. From the bounds of this work at a low-mass scale and the ones at the mass scale of galaxies, we might catch an updated glimpse of the mass dependence of α_ν, and it is found that our improvement of the upper bounds in the Solar System can significantly change the possible pattern of the relation between log|α_ν| and log m from a linear one to a power law, where m is the mass of an astronomical system. This suggests that |α_ν| needs to be suppressed more rapidly with the decrease of the mass of low-mass systems. It also predicts that |α_ν| might have an upper limit in high-mass astrophysical systems, which can be tested in the future.
Differential Games of inf-sup Type and Isaacs Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaise, Hidehiro; Sheu, S.-J.
2005-06-15
Motivated by the work of Fleming, we provide a general framework to associate inf-sup type values with the Isaacs equations. We show that upper and lower bounds for the generators of inf-sup type are upper and lower Hamiltonians, respectively. In particular, the lower (resp. upper) bound corresponds to the progressive (resp. strictly progressive) strategy. By the Dynamic Programming Principle and identification of the generator, we can prove that the inf-sup type game is characterized as the unique viscosity solution of the Isaacs equation. We also discuss the Isaacs equation with a Hamiltonian of a convex combination between the lower and upper Hamiltonians.
Integer aperture ambiguity resolution based on difference test
NASA Astrophysics Data System (ADS)
Zhang, Jingyu; Wu, Meiping; Li, Tao; Zhang, Kaidong
2015-07-01
Carrier-phase integer ambiguity resolution (IAR) is the key to highly precise, fast positioning and attitude determination with Global Navigation Satellite System (GNSS). It can be seen as the process of estimating the unknown cycle ambiguities of the carrier-phase observations as integers. Once the ambiguities are fixed, carrier phase data will act as very precise range data. Integer aperture (IA) ambiguity resolution is the combination of acceptance testing and integer ambiguity resolution, which can realize better quality control of IAR. The difference test (DT) is one of the most popular acceptance tests. This contribution gives a detailed analysis of the following properties of IA ambiguity resolution based on DT: 1.
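As a rough illustration of the acceptance rule itself, the difference test compares the best and second-best integer candidates in the metric of the float solution's covariance and accepts the fix only when the gap exceeds an aperture parameter. The sketch below assumes the standard integer least-squares notation; the threshold mu, the candidate list, and all variable names are placeholders rather than quantities from this paper.

```python
import numpy as np

def difference_test(a_float, cands, Q_ahat, mu):
    """Integer-aperture acceptance via the difference test (sketch).

    a_float : float ambiguity solution, shape (n,)
    cands   : integer candidates sorted by fit, cands[0] is the best, shape (2, n)
    Q_ahat  : variance-covariance matrix of the float solution, shape (n, n)
    mu      : aperture (threshold) parameter chosen by the user

    Accepts the best candidate only if the second-best candidate fits worse
    by at least mu, measured in the metric of Q_ahat^{-1}.
    """
    Qinv = np.linalg.inv(Q_ahat)

    def sq_norm(z):
        r = a_float - z
        return float(r @ Qinv @ r)

    r1, r2 = sq_norm(cands[0]), sq_norm(cands[1])
    return (r2 - r1 >= mu), np.asarray(cands[0])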
Beryllium and boron constraints on an early Galactic bright phase
NASA Technical Reports Server (NTRS)
Fields, Brian D.; Schramm, David N.; Truran, James W.
1993-01-01
The recent observations of Be and B in metal-deficient halo dwarfs are used to constrain a 'bright phase' of enhanced cosmic-ray flux in the early Galaxy. Assuming that this Be and B arises from cosmic-ray spallation in the early Galaxy, limits are placed on the intensity of the early (Population II) cosmic-ray flux relative to the present (Population I) flux. A simple estimate of bounds on the flux ratio is 1 - 40. This upper bound would restrict galaxies like our own from producing neutrino fluxes that would be detectable in any currently proposed detectors. It is found that the relative enhancement of the early flux varies inversely with the relative time of enhancement. It is noted that associated gamma-ray production via pp → π^0 pp may be a significant contribution to the gamma-ray background above 100 MeV.
Software reliability: Additional investigations into modeling with replicated experiments
NASA Technical Reports Server (NTRS)
Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.
1984-01-01
The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a reasonable degree what is being observed.
Tidal disruption of Periodic Comet Shoemaker-Levy 9 and a constraint on its mean density
NASA Technical Reports Server (NTRS)
Boss, Alan P.
1994-01-01
The apparent tidal disruption of Periodic Comet Shoemaker-Levy 9 (1993e) during a close encounter within approximately 1.62 planetary radii of Jupiter can be used along with theoretical models of tidal disruption to place an upper bound on the density of the predisruption body. Depending on the theoretical model used, these upper bounds range from ρ_c < 0.702 +/- 0.080 g/cu cm for a simple analytical model calibrated by numerical smoothed particle hydrodynamics (SPH) simulations to ρ_c < 1.50 +/- 0.17 g/cu cm for a detailed semianalytical model. The quoted uncertainties stem from an assumed uncertainty in the perijove radius. However, the uncertainty introduced by the different theoretical models is the major source of error; this uncertainty could be eliminated by future SPH simulations specialized to cometary disruptions, including the effects of initially prolate, spinning comets. If the SPH-based upper bound turns out to be most appropriate, it would be consistent with the predisruption body being a comet with a relatively low density and porous structure, as has been asserted previously based on observations of cometary outgassing. Regardless of which upper bound is preferable, the models all agree that the predisruption body could not have been a relatively high-density body, such as an asteroid with ρ ≈ 2 g/cu cm.
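All of the bounds quoted above share the same Roche-type scaling: the critical density falls off as the cube of the perijove distance measured in planetary radii, ρ_c ≲ C·ρ_planet/q^3, with the prefactor C carrying the model dependence that the abstract emphasizes. The sketch below only illustrates that scaling; the calibration constants are placeholders chosen to span roughly the range of the published bounds and are not values taken from the paper.

```python
# Roche-type scaling for the tidal-disruption density bound (illustrative only).
RHO_JUPITER = 1.33   # g/cm^3, mean density of Jupiter

def density_upper_bound(perijove_in_planet_radii, calib_const):
    """rho_c < calib_const * rho_planet / q^3. The calibration constant is
    model dependent (assumed here), which is exactly why the abstract's two
    published bounds differ by roughly a factor of two."""
    return calib_const * RHO_JUPITER / perijove_in_planet_radii ** 3

for C in (2.2, 4.8):   # illustrative calibration constants, not from the paper
    print(f"C = {C}: rho_c < {density_upper_bound(1.62, C):.2f} g/cm^3")
```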
Limit analysis of hollow spheres or spheroids with Hill orthotropic matrix
NASA Astrophysics Data System (ADS)
Pastor, Franck; Pastor, Joseph; Kondo, Djimedo
2012-03-01
Recent theoretical studies in the literature are concerned with the hollow sphere or spheroid (confocal) problem with an orthotropic Hill-type matrix. They have been developed in the framework of the limit analysis kinematical approach by using very simple trial velocity fields. The present Note provides, through numerical upper and lower bounds, a rigorous assessment of the approximate criteria derived in these theoretical works. To this end, existing static 3D codes for a von Mises matrix have been easily extended to the orthotropic case. Conversely, instead of the non-obvious extension of the existing kinematic codes, a new original mixed approach has been elaborated on the basis of the plane strain structure formulation earlier developed by F. Pastor (2007). Indeed, such a formulation does not need the expressions of the unit dissipated powers. Interestingly, it delivers a numerical code that is better conditioned and notably more rapid than the previous one, while preserving the rigorous upper bound character of the corresponding numerical results. The efficiency of the whole approach is first demonstrated through comparisons of the results to the analytical upper bounds of Benzerga and Besson (2001) or Monchiet et al. (2008) in the case of spherical voids in the Hill matrix. Moreover, we provide upper and lower bound results for the hollow spheroid with the Hill matrix, which are compared to those of Monchiet et al. (2008).
Stimuli Reduce the Dimensionality of Cortical Activity
Mazzucato, Luca; Fontanini, Alfredo; La Camera, Giancarlo
2016-01-01
The activity of ensembles of simultaneously recorded neurons can be represented as a set of points in the space of firing rates. Even though the dimension of this space is equal to the ensemble size, neural activity can be effectively localized on smaller subspaces. The dimensionality of the neural space is an important determinant of the computational tasks supported by the neural activity. Here, we investigate the dimensionality of neural ensembles from the sensory cortex of alert rats during periods of ongoing (inter-trial) and stimulus-evoked activity. We find that dimensionality grows linearly with ensemble size, and grows significantly faster during ongoing activity compared to evoked activity. We explain these results using a spiking network model based on a clustered architecture. The model captures the difference in growth rate between ongoing and evoked activity and predicts a characteristic scaling with ensemble size that could be tested in high-density multi-electrode recordings. Moreover, we present a simple theory that predicts the existence of an upper bound on dimensionality. This upper bound is inversely proportional to the amount of pair-wise correlations and, compared to a homogeneous network without clusters, it is larger by a factor equal to the number of clusters. The empirical estimation of such bounds depends on the number and duration of trials and is well predicted by the theory. Together, these results provide a framework to analyze neural dimensionality in alert animals, its behavior under stimulus presentation, and its theoretical dependence on ensemble size, number of clusters, and correlations in spiking network models. PMID:26924968
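A common way to quantify the kind of dimensionality discussed here is the participation ratio of the eigenvalues of the firing-rate covariance matrix, d = (Σλ_i)^2 / Σλ_i^2, which is small when a few correlated modes dominate and approaches the ensemble size when activity is uncorrelated. The sketch below assumes that notion of dimensionality (the paper's exact estimator may differ) and uses synthetic rates purely for illustration.

```python
import numpy as np

def participation_ratio_dim(rates):
    """Effective dimensionality of binned firing rates (samples x neurons)
    via the participation ratio of the covariance eigenvalues."""
    lam = np.linalg.eigvalsh(np.cov(rates, rowvar=False))
    lam = np.clip(lam, 0.0, None)          # guard against tiny negative values
    return lam.sum() ** 2 / (lam ** 2).sum()

# Illustration: a weakly correlated ensemble has higher dimensionality than a
# strongly correlated ensemble of the same size, echoing the pairwise-correlation
# dependence of the upper bound described in the abstract.
rng = np.random.default_rng(0)
shared = rng.normal(size=(1000, 1))                      # one common mode
weak = rng.normal(size=(1000, 20))                       # independent neurons
strong = 0.9 * shared + 0.1 * rng.normal(size=(1000, 20))
print(participation_ratio_dim(weak), participation_ratio_dim(strong))
```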
Upper bound dose values for meson radiation in heavy-ion therapy.
Rabin, C; Gonçalves, M; Duarte, S B; González-Sprinberg, G A
2018-06-01
Radiation treatment of cancer has evolved to include massive particle beams, instead of traditional irradiation procedures. Thus, patient doses and worker radiological protection have become issues of constant concern in the use of these new technologies, especially for proton and heavy-ion therapy. At the beam energies of interest in heavy-ion therapy, secondary particle radiation comes from protons, neutrons, and neutral and charged pions produced in the nuclear collisions of the beam with human tissue atoms. This work, for the first time, offers an upper bound on the dose in organic tissues due to secondary meson radiation in heavy-ion therapy. A model based on intranuclear collisions has been used to follow the nuclear reaction in time and to determine the secondary radiation due to the meson yield produced in the beam interaction with nuclei in tissue-equivalent media and water. The multiplicity, energy spectrum, and angular distribution of these pions, as well as their decay products, have been calculated in different scenarios for the nuclear reaction mechanism. The results for the produced secondary mesons have been used to estimate the energy deposited in tissue in a cylindrical phantom by a transport Monte Carlo simulation, and we conclude that these mesons contribute at most 0.1% of the total prescribed dose.
NASA Astrophysics Data System (ADS)
Fishman, M. M.
1985-01-01
The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
Bounds for the price of discrete arithmetic Asian options
NASA Astrophysics Data System (ADS)
Vanmaele, M.; Deelstra, G.; Liinev, J.; Dhaene, J.; Goovaerts, M. J.
2006-01-01
In this paper the pricing of European-style discrete arithmetic Asian options with fixed and floating strike is studied by deriving analytical lower and upper bounds. In our approach we use a general technique for deriving upper (and lower) bounds for stop-loss premiums of sums of dependent random variables, as explained in Kaas et al. (Ins. Math. Econom. 27 (2000) 151-168), and additionally, the ideas of Rogers and Shi (J. Appl. Probab. 32 (1995) 1077-1088) and of Nielsen and Sandmann (J. Financial Quant. Anal. 38(2) (2003) 449-473). Through these bounds we are able to create a unifying framework for European-style discrete arithmetic Asian options that generalizes several approaches in the literature as well as improves the existing results. We obtain analytical and easily computable bounds. The aim of the paper is to formulate advice on the appropriate choice of the bounds given the parameters, to investigate the effect of different conditioning variables, and to compare their efficiency numerically. Several sets of numerical results are included. We also discuss hedging using these bounds. Moreover, our methods are applicable to a wide range of (pricing) problems involving a sum of dependent random variables.
Coefficient of performance and its bounds with the figure of merit for a general refrigerator
NASA Astrophysics Data System (ADS)
Long, Rui; Liu, Wei
2015-02-01
A general refrigerator model with non-isothermal processes is studied. The coefficient of performance (COP) and its bounds at maximum χ figure of merit are obtained and analyzed. This model accounts for different heat capacities during the heat transfer processes. So, different kinds of refrigerator cycles can be considered. Under the constant heat capacity condition, the upper bound of the COP is the Curzon-Ahlborn (CA) coefficient of performance and is independent of the time durations of the heat exchanging processes. With the maximum χ criterion, in the refrigerator cycles, such as the reversed Brayton refrigerator cycle, the reversed Otto refrigerator cycle and the reversed Atkinson refrigerator cycle, where the heat capacity in the heat absorbing process is not less than that in the heat releasing process, their COPs are bounded by the CA coefficient of performance; otherwise, such as for the reversed Diesel refrigerator cycle, its COP can exceed the CA coefficient of performance. Furthermore, the general refined upper and lower bounds have been proposed.
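For reference, the Curzon-Ahlborn-type coefficient of performance used as the benchmark at maximum χ in this literature is usually quoted as ε_CA = sqrt(1 + ε_C) − 1, where ε_C = T_c/(T_h − T_c) is the Carnot COP. The snippet below simply evaluates both quantities under that assumed convention; the temperatures are arbitrary illustrative values, not taken from the paper.

```python
import math

def carnot_cop(t_cold, t_hot):
    """Carnot coefficient of performance of a refrigerator."""
    return t_cold / (t_hot - t_cold)

def ca_cop(t_cold, t_hot):
    """'CA' coefficient of performance at maximum chi, assuming the commonly
    quoted form sqrt(1 + eps_C) - 1 (an assumption of this sketch)."""
    return math.sqrt(1.0 + carnot_cop(t_cold, t_hot)) - 1.0

print(carnot_cop(273.0, 300.0), ca_cop(273.0, 300.0))
```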
Search for Chemically Bound Water in the Surface Layer of Mars Based on HEND/Mars Odyssey Data
NASA Technical Reports Server (NTRS)
Basilevsky, A. T.; Litvak, M. L.; Mitrofanov, I. G.; Boynton, W.; Saunders, R. S.
2003-01-01
This study emphasizes the search for signatures of chemically bound water in the surface layer of Mars based on data acquired by the High Energy Neutron Detector (HEND), which is part of the Mars Odyssey Gamma Ray Spectrometer (GRS). Fluxes of epithermal (probing the upper 1-2 m) and fast (the upper 20-30 cm) neutrons, considered in this work, were measured from mid-February to mid-June 2002. A first analysis of this data set, with emphasis on chemically bound water, was made. Early publications of the GRS results reported low neutron flux at high latitudes, interpreted as a signature of ground water ice, and in two low-latitude areas, Arabia and SW of Olympus Mons (SWOM), interpreted as 'geographic variations in the amount of chemically and/or physically bound H2O and or OH...'. It is clear that surface materials of Mars do contain chemically bound water, but its amounts are poorly known and its geographic distribution was not analyzed.
Huang, Jian; Zhang, Cun-Hui
2013-01-01
The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
Power spectrum and non-Gaussianities in anisotropic inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dey, Anindya; Kovetz, Ely D.; Paban, Sonia, E-mail: anindya@physics.utexas.edu, E-mail: elykovetz@gmail.com, E-mail: paban@physics.utexas.edu
2014-06-01
We study the planar regime of curvature perturbations for single field inflationary models in an axially symmetric Bianchi I background. In a theory with standard scalar field action, the power spectrum for such modes has a pole as the planarity parameter goes to zero. We show that constraints from back reaction lead to a strong lower bound on the planarity parameter for high-momentum planar modes and use this bound to calculate the signal-to-noise ratio of the anisotropic power spectrum in the CMB, which in turn places an upper bound on the Hubble scale during inflation allowed in our model. We find that non-Gaussianities for these planar modes are enhanced for the flattened triangle and the squeezed triangle configurations, but show that the estimated values of the f_NL parameters remain well below the experimental bounds from the CMB for generic planar modes (other, more promising signatures are also discussed). For a standard action, f_NL from the squeezed configuration turns out to be larger compared to that from the flattened triangle configuration in the planar regime. However, in a theory with higher derivative operators, non-Gaussianities from the flattened triangle can become larger than the squeezed configuration in a certain limit of the planarity parameter.
Wilson, Timothy P.; Bonin, Jennifer L.
2008-01-01
Samples of surface water and suspended sediment were collected from the Passaic and Elizabeth Rivers and their tributaries in New Jersey from July 2003 to February 2004 to determine the concentrations of selected chlorinated organic and inorganic constituents. This sampling and analysis was conducted as Phase II of the New York-New Jersey Harbor Estuary Workplan – Contaminant Assessment and Reduction Program (CARP), which is overseen by the New Jersey Department of Environmental Protection. Phase II of the New Jersey Workplan was conducted to define upstream tributary and point sources of contaminants in those rivers sampled during Phase I work, with special emphasis on the Passaic and Elizabeth Rivers. Samples were collected from three groups of tributaries: (1) the Second, Third, and Saddle Rivers; (2) the Pompton and upper Passaic Rivers; and (3) the West Branch and main stem of the Elizabeth River. The Second, Third, and Saddle Rivers were sampled near their confluence with the tidal Passaic River, but at locations not affected by tidal flooding. The Pompton and upper Passaic Rivers were sampled immediately upstream from their confluence at Two Bridges, N.J. The West Branch and the main stem of the Elizabeth River were sampled just upstream from their confluence at Hillside, N.J. All tributaries were sampled during low-flow discharge conditions using the protocols and analytical methods for organic constituents used in low-flow sampling in Phase I. Grab samples of streamflow also were collected at each site and were analyzed for trace elements (mercury, methylmercury, cadmium, and lead) and for suspended sediment, particulate organic carbon, and dissolved organic carbon. The measured concentrations and available historical suspended-sediment and stream-discharge data (where available) were used to estimate average annual loads of suspended sediment and organic compounds in these rivers. Total suspended-sediment loads for 1975-2000 were estimated using rating curves developed from historical U.S. Geological Survey (USGS) suspended-sediment and discharge data, where available. Average annual loads of suspended sediment, in millions of kilograms per year (Mkg/yr), were estimated to be 0.190 for the Second River, 0.23 for the Third River, 1.00 for the Saddle River, 1.76 for the Pompton River, and 7.40 for the upper Passaic River. On the basis of the available discharge records, the upper Passaic River was estimated to provide approximately 60 percent of the water and 80 percent of the total suspended-sediment load at the Passaic River head-of-tide, whereas the Pompton River provided roughly 20 percent of the total suspended-sediment load estimated at the head-of-tide. The combined suspended-sediment loads of the upper Passaic and Pompton Rivers (9.2 Mkg/yr), however, represent only 40 percent of the average annual suspended-sediment load estimated for the head-of-tide (23 Mkg/yr) at Little Falls, N.J. The difference between the combined suspended-sediment loads of the tributaries and the estimated load at Little Falls represents either sediment trapped upriver from the dam at Little Falls, additional inputs of suspended sediment downstream from the tributary confluence, or uncertainty in the suspended-sediment and discharge data that were used.
The concentrations of total suspended sediment-bound polychlorinated biphenyls (PCBs) in the tributaries to the Passaic River were 194 ng/g (nanograms per gram) in the Second River, 575 ng/g in the Third River, 2,320 ng/g in the Saddle River, 200 ng/g in the Pompton River, and 87 ng/g in the upper Passaic River. The dissolved PCB concentrations in the tributaries were 563 pg/L (picograms per liter) in the Second River, 2,510 pg/L in the Third River, 2,270 pg/L in the Saddle River, 887 pg/L in the Pompton River, and 1,000 pg/L in the upper Passaic River. Combined with the sediment loads and discharge, these concentrations resulted in annual loads of suspended sediment-bound PCBs, i
Improved Lower Bounds on the Price of Stability of Undirected Network Design Games
NASA Astrophysics Data System (ADS)
Bilò, Vittorio; Caragiannis, Ioannis; Fanelli, Angelo; Monaco, Gianpiero
Bounding the price of stability of undirected network design games with fair cost allocation is a challenging open problem in the Algorithmic Game Theory research agenda. Even though the generalization of such games in directed networks is well understood in terms of the price of stability (it is exactly H_n, the n-th harmonic number, for games with n players), far less is known for network design games in undirected networks. The upper bound carries over to this case as well, while the best known lower bound is 42/23 ≈ 1.826. For more restricted but interesting variants of such games such as broadcast and multicast games, sublogarithmic upper bounds are known while the best known lower bound is 12/7 ≈ 1.714. In the current paper, we improve the lower bounds as follows. We break the psychological barrier of 2 by showing that the price of stability of undirected network design games is at least 348/155 ≈ 2.245. Our proof uses a recursive construction of a network design game with a simple gadget as the main building block. For broadcast and multicast games, we present new lower bounds of 20/11 ≈ 1.818 and 1.862, respectively.
An analysis of the vertical structure equation for arbitrary thermal profiles
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.; Dee, Dick P.
1989-01-01
The vertical structure equation is a singular Sturm-Liouville problem whose eigenfunctions describe the vertical dependence of the normal modes of the primitive equations linearized about a given thermal profile. The eigenvalues give the equivalent depths of the modes. The spectrum of the vertical structure equation and the appropriateness of various upper boundary conditions were studied, both for arbitrary thermal profiles. The results depend critically upon whether or not the thermal profile is such that the basic state atmosphere is bounded. In the case of a bounded atmosphere it is shown that the spectrum is always totally discrete, regardless of the details of the thermal profile. For the barotropic equivalent depth, which corresponds to the lowest eigenvalue, upper and lower bounds which depend only on the surface temperature and the atmosphere height were obtained. All eigenfunctions are bounded, but always have unbounded first derivatives. It was proved that the commonly invoked upper boundary condition that vertical velocity must vanish as pressure tends to zero, as well as a number of alternative conditions, is well posed. It was concluded that the vertical structure equation always has a totally discrete spectrum under the assumptions implicit in the primitive equations.
An analysis of the vertical structure equation for arbitrary thermal profiles
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.; Dee, Dick P.
1987-01-01
The vertical structure equation is a singular Sturm-Liouville problem whose eigenfunctions describe the vertical dependence of the normal modes of the primitive equations linearized about a given thermal profile. The eigenvalues give the equivalent depths of the modes. The spectrum of the vertical structure equation and the appropriateness of various upper boundary conditions were studied, both for arbitrary thermal profiles. The results depend critically upon whether or not the thermal profile is such that the basic state atmosphere is bounded. In the case of a bounded atmosphere it is shown that the spectrum is always totally discrete, regardless of the details of the thermal profile. For the barotropic equivalent depth, which corresponds to the lowest eigenvalue, upper and lower bounds which depend only on the surface temperature and the atmosphere height were obtained. All eigenfunctions are bounded, but always have unbounded first derivatives. It was proved that the commonly invoked upper boundary condition that vertical velocity must vanish as pressure tends to zero, as well as a number of alternative conditions, is well posed. It was concluded that the vertical structure equation always has a totally discrete spectrum under the assumptions implicit in the primitive equations.
Ultimate energy density of observable cold baryonic matter.
Lattimer, James M; Prakash, Madappa
2005-03-25
We demonstrate that the largest measured mass of a neutron star establishes an upper bound to the energy density of observable cold baryonic matter. An equation-of-state-independent expression satisfied by both normal neutron stars and self-bound quark matter stars is derived for the largest energy density of matter inside stars as a function of their masses. The largest observed mass sets the lowest upper limit to the density. Implications from existing and future neutron star mass measurements are discussed.
1990-06-01
synchronization. We consider the performance of various synchronization protocols by deriving upper and lower bounds on optimal performance, upper bounds on Time ...
Generalized monogamy inequalities and upper bounds of negativity for multiqubit systems
NASA Astrophysics Data System (ADS)
Yang, Yanmin; Chen, Wei; Li, Gang; Zheng, Zhu-Jun
2018-01-01
In this paper, we present some generalized monogamy inequalities and upper bounds of negativity based on convex-roof extended negativity (CREN) and CREN of assistance (CRENOA). These monogamy relations are satisfied by the negativity of N-qubit quantum systems ABC_1⋯C_(N-2), under the partitions AB|C_1⋯C_(N-2) and ABC_1|C_2⋯C_(N-2). Furthermore, the W-class states are used to test these generalized monogamy inequalities.
In Search of the Largest Possible Tsunami: An Example Following the 2011 Japan Tsunami
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2012-12-01
Many tsunami hazard assessments focus on estimating the largest possible tsunami: i.e., the worst-case scenario. This is typically performed by examining historic and prehistoric tsunami data or by estimating the largest source that can produce a tsunami. We demonstrate that worst-case assessments derived from tsunami and tsunami-source catalogs are greatly affected by sampling bias. Both tsunami and tsunami sources are well represented by a Pareto distribution. It is intuitive to assume that there is some limiting size (i.e., runup or seismic moment) for which a Pareto distribution is truncated or tapered. Likelihood methods are used to determine whether a limiting size can be determined from existing catalogs. Results from synthetic catalogs indicate that several observations near the limiting size are needed for accurate parameter estimation. Accordingly, the catalog length needed to empirically determine the limiting size is dependent on the difference between the limiting size and the observation threshold, with larger catalog lengths needed for larger limiting-threshold size differences. Most, if not all, tsunami catalogs and regional tsunami source catalogs are of insufficient length to determine the upper bound on tsunami runup. As an example, estimates of the empirical tsunami runup distribution are obtained from the Miyako tide gauge station in Japan, which recorded the 2011 Tohoku-oki tsunami as the largest tsunami among 51 other events. Parameter estimation using a tapered Pareto distribution is made both with and without the Tohoku-oki event. The catalog without the 2011 event appears to have a low limiting tsunami runup. However, this is an artifact of undersampling. Including the 2011 event, the catalog conforms more to a pure Pareto distribution with no confidence in estimating a limiting runup. Estimating the size distribution of regional tsunami sources is subject to the same sampling bias. Physical attenuation mechanisms such as wave breaking likely limit the maximum tsunami runup at a particular site. However, historic and prehistoric data alone cannot determine the upper bound on tsunami runup. Because of problems endemic to sampling Pareto distributions of tsunamis and their sources, we recommend that tsunami hazard assessment be based on a specific design probability of exceedance following a pure Pareto distribution, rather than attempting to determine the worst-case scenario.
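To make the sampling-bias argument concrete, the tapered Pareto distribution mentioned above has survival function S(x) = (x_t/x)^β · exp((x_t − x)/θ) for x ≥ x_t, so the taper scale θ plays the role of the limiting size. The sketch below is a generic maximum-likelihood fit of that form to a synthetic catalog (not the Miyako data); with short, purely Pareto-like catalogs the fitted θ comes out large and poorly constrained, which is the behavior the abstract describes.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x, x_t):
    """Negative log-likelihood of the tapered Pareto distribution with
    survival S(x) = (x_t/x)^beta * exp((x_t - x)/theta), x >= x_t."""
    beta, theta = params
    if beta <= 0 or theta <= 0:
        return np.inf
    return -np.sum(np.log(beta / x + 1.0 / theta)
                   + beta * np.log(x_t / x)
                   + (x_t - x) / theta)

# Synthetic "catalog" for illustration only: pure (untapered) Pareto samples.
rng = np.random.default_rng(1)
x_t = 0.5                                            # observation threshold (m)
runups = x_t / rng.uniform(size=60) ** (1.0 / 1.2)   # Pareto with beta = 1.2

fit = minimize(neg_loglik, x0=[1.0, 10.0], args=(runups, x_t),
               method="Nelder-Mead")
beta_hat, theta_hat = fit.x
print(beta_hat, theta_hat)   # a large theta_hat means no resolvable upper taper
```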
NASA Astrophysics Data System (ADS)
Buhler, Peter Benjamin; Ingersoll, Andrew P.
2017-10-01
Sputnik Planitia, Pluto contains cellular landforms with areas on the order of a few 10^2-10^3 km^2 that are likely the surface manifestation of convective overturn in a vast basin of nitrogen ice. The cells have sublimation pits on them, with smaller pits near their centers and larger pits near their edges. We map over 12,000 pits on seven cells and find that the pit radii increase by between 2.1 ± 0.4 and 5.9 ± 0.8 × 10^-3 m per meter away from the cell center, depending on the cell. Due to finite data resolution, this is a lower bound on the size increase. Conservatively accounting for resolution effects yields upper bounds on the size vs. distance distribution of 4.2 ± 0.2 to 23.4 ± 1.5 × 10^-3 m m^-1. In order to convert the pit size vs. distance distribution into a pit age vs. distance distribution, we use an analytic model to calculate that pit radii grow via sublimation at a rate of 3.6 [+2.1,-0.6] × 10^-4 m yr^-1. Combined with the mapped distribution of pit radii, this yields surface velocities between 1.5 [+1.0,-0.2] and 6.2 [+3.4,-1.4] cm yr^-1 for the slowest cell and surface velocities between 8.1 [+5.5,-1.0] and 17.9 [+8.9,-5.1] cm yr^-1 for the fastest cell; the lower bound estimate for each cell accounts for resolution effects, while the upper bound estimate does not. These convection rates imply that the surface ages at the edge of cells reach approximately 4.2 to 8.9 × 10^5 yr, depending on the cell. The rates we find are comparable to rates of ~6 cm yr^-1 that were previously obtained from modeling of the convective overturn in Sputnik Planitia [McKinnon, W.B. et al., 2016, Nature, 534(7605), 82-85]. Finally, we find that the minimum viscosity at the surface of the convection cells is of order 10^16 to 10^17 Pa s; we find that pits would relax away before sublimating to their observed radii of several hundred meters if the viscosity were lower than this value.
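The conversion from pit statistics to surface velocity is a simple ratio: the modeled pit-growth rate divided by the mapped radius-versus-distance gradient gives the rate at which the surface moves outward, and dividing a cell's half-width by that velocity gives the age at its edge. The sketch below just reproduces that arithmetic with the numbers quoted in the abstract (the 15 km half-width is an assumed illustrative value); small differences from the quoted velocities reflect rounding and the asymmetric uncertainties.

```python
growth_rate = 3.6e-4          # m/yr, modeled pit-radius growth by sublimation
gradients = {                 # pit-radius increase per meter from cell center
    "slowest cell, resolution-corrected": 23.4e-3,
    "slowest cell, as mapped": 5.9e-3,
    "fastest cell, resolution-corrected": 4.2e-3,
    "fastest cell, as mapped": 2.1e-3,
}
for label, g in gradients.items():
    v = growth_rate / g                        # surface velocity, m/yr
    print(f"{label}: {100 * v:.1f} cm/yr")

# Age at a cell edge ~ half-width / velocity, e.g. an assumed 15 km at 3 cm/yr:
print(15e3 / 0.03, "yr")   # a few 10^5 yr, consistent with the quoted 4.2-8.9e5 yr
```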
NASA Astrophysics Data System (ADS)
Quader, Khandker F.; Salamon, M. B.
1988-06-01
Ginzburg-Landau theory is used to explore the thermodynamic and electrodynamic properties of YBa2Cu3O7-δ, and to determine γ, m*/m and the exchange enhancement. This material is found to be in a moderately strong coupling regime, intermediate between the dirty and clean limits; strong coupling corrections are estimated. It is shown that, irrespective of the choice of the carrier density, spin fluctuations are unable to give a sufficiently large Tc. An upper bound is given for the Tc due to spin-fluctuation-mediated pairing.
Astrophysics: Is a doomsday catastrophe likely?
NASA Astrophysics Data System (ADS)
Tegmark, Max; Bostrom, Nick
2005-12-01
The risk of a doomsday scenario in which high-energy physics experiments trigger the destruction of the Earth has been estimated to be minuscule. But this may give a false sense of security: the fact that the Earth has survived for so long does not necessarily mean that such disasters are unlikely, because observers are, by definition, in places that have avoided destruction. Here we derive a new upper bound of one per billion years (99.9% confidence level) for the exogenous terminal-catastrophe rate that is free of such selection bias, using calculations based on the relatively late formation time of Earth.
An exact solution of a simplified two-phase plume model. [for solid propellant rocket
NASA Technical Reports Server (NTRS)
Wang, S.-Y.; Roberts, B. B.
1974-01-01
An exact solution of a simplified two-phase, gas-particle, rocket exhaust plume model is presented. It may be used to make an upper-bound estimate of the heat flux and pressure loads due to particle impingement on objects situated in the rocket exhaust plume. By including correction factors to be determined experimentally, the present technique will provide realistic data concerning the heat and aerodynamic loads on these objects for design purposes. Excellent agreement in trend between the best available computer solution and the present exact solution is shown.
Astrophysics: is a doomsday catastrophe likely?
Tegmark, Max; Bostrom, Nick
2005-12-08
The risk of a doomsday scenario in which high-energy physics experiments trigger the destruction of the Earth has been estimated to be minuscule. But this may give a false sense of security: the fact that the Earth has survived for so long does not necessarily mean that such disasters are unlikely, because observers are, by definition, in places that have avoided destruction. Here we derive a new upper bound of one per billion years (99.9% confidence level) for the exogenous terminal-catastrophe rate that is free of such selection bias, using calculations based on the relatively late formation time of Earth.
Impulsive control of a financial model [rapid communication]
NASA Astrophysics Data System (ADS)
Sun, Jitao; Qiao, Fei; Wu, Qidi
2005-02-01
In this Letter, several new theorems on the stability of impulsive control systems are presented. These theorems are then used to find the conditions under which an advertising strategy can be asymptotically controlled to the equilibrium point by using impulsive control. Given the parameters of the financial model and the impulsive control law, an estimate of the upper bound of the impulse interval is given; that is, the number of advertisements can be decreased (i.e., cost can be reduced) while still obtaining an equivalent advertising effect. The result is illustrated to be efficient through a numerical example.
Computing an upper bound on contact stress with surrogate duality
NASA Astrophysics Data System (ADS)
Xuan, Zhaocheng; Papadopoulos, Panayiotis
2016-07-01
We present a method for computing an upper bound on the contact stress of elastic bodies. The continuum model of elastic bodies with contact is first modeled as a constrained optimization problem by using finite elements. An explicit formulation of the total contact force, a fraction function with the numerator as a linear function and the denominator as a quadratic convex function, is derived with only the normalized nodal contact forces as the constrained variables in a standard simplex. Then two bounds are obtained for the sum of the nodal contact forces. The first is an explicit formulation of matrices of the finite element model, derived by maximizing the fraction function under the constraint that the sum of the normalized nodal contact forces is one. The second bound is solved by first maximizing the fraction function subject to the standard simplex and then using Dinkelbach's algorithm for fractional programming to find the maximum—since the fraction function is pseudo concave in a neighborhood of the solution. These two bounds are solved with the problem dimensions being only the number of contact nodes or node pairs, which are much smaller than the dimension for the original problem, namely, the number of degrees of freedom. Next, a scheme for constructing an upper bound on the contact stress is proposed that uses the bounds on the sum of the nodal contact forces obtained on a fine finite element mesh and the nodal contact forces obtained on a coarse finite element mesh, which are problems that can be solved at a lower computational cost. Finally, the proposed method is verified through some examples concerning both frictionless and frictional contact to demonstrate the method's feasibility, efficiency, and robustness.
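Dinkelbach's method, which the authors use for the second bound, turns the maximization of a ratio f(x)/g(x) (with g > 0) into a sequence of ordinary maximizations of f(x) − λ·g(x), updating λ to the current ratio until it stops changing. The sketch below is a generic illustration of that iteration on a toy linear-over-quadratic fraction constrained to a simplex, standing in for the normalized nodal contact forces; none of the problem data come from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def dinkelbach_max_ratio(f, g, x0, constraints, bounds, tol=1e-9, max_iter=50):
    """Maximize f(x)/g(x) (g > 0) by Dinkelbach's parametric method:
    repeatedly maximize f(x) - lam*g(x) and set lam = f(x*)/g(x*)."""
    x = np.asarray(x0, dtype=float)
    lam = f(x) / g(x)
    for _ in range(max_iter):
        res = minimize(lambda z: -(f(z) - lam * g(z)), x,
                       constraints=constraints, bounds=bounds)
        x = res.x
        new_lam = f(x) / g(x)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return lam, x

# Toy stand-in: linear numerator, convex quadratic denominator, on a simplex.
n = 4
c = np.array([1.0, 2.0, 3.0, 4.0])
Q = np.eye(n) + 0.1                      # positive definite
f = lambda x: c @ x
g = lambda x: 1.0 + x @ Q @ x
simplex = [{"type": "eq", "fun": lambda x: x.sum() - 1.0}]
box = [(0.0, 1.0)] * n
print(dinkelbach_max_ratio(f, g, np.full(n, 0.25), simplex, box))
```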
Performance bounds on parallel self-initiating discrete-event
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
The use of massively parallel architectures to execute discrete-event simulations of what are termed self-initiating models is considered. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance of various synchronization protocols is considered by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.
Bounds on the Coupling of the Majoron to Light Neutrinos from Supernova Cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farzan, Yasaman
2002-12-02
We explore the role of Majoron (J) emission in the supernova cooling process, as a source of an upper bound on the neutrino-Majoron coupling. We show that the strongest upper bound on the coupling to ν_3 comes from the ν_e ν_e → J process in the core of a supernova. We also find bounds on diagonal couplings of the Majoron to ν_μ(τ) ν_μ(τ) and on off-diagonal ν_e ν_μ(τ) couplings in various regions of the parameter space. We discuss the evaluation of cross-sections for four-particle interactions (νν → JJ and νJ → νJ). We show that these are typically dominated by three-particle sub-processes and do not give new independent constraints.
The contribution of glacier melt to streamflow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaner, Neil; Voisin, Nathalie; Nijssen, Bart
2012-09-13
Ongoing and projected future changes in glacier extent and water storage globally have led to concerns about the implications for water supplies. However, the current magnitude of glacier contributions to river runoff is not well known, nor is the population at risk from future glacier changes. We estimate an upper bound on the glacier melt contribution to seasonal streamflow by computing the energy balance of glaciers globally. Melt water quantities are computed as a fraction of total streamflow simulated using a hydrology model, and the melt fraction is tracked down the stream network. In general, our estimates of the glacier melt contribution to streamflow are lower than previously published values. Nonetheless, we find that globally an estimated 225 (36) million people live in river basins where maximum seasonal glacier melt contributes at least 10% (25%) of streamflow, mostly in the High Asia region.
Turbulent vertical diffusivity in the sub-tropical stratosphere
NASA Astrophysics Data System (ADS)
Pisso, I.; Legras, B.
2008-02-01
Vertical (cross-isentropic) mixing is produced by small-scale turbulent processes which are still poorly understood and parameterized in numerical models. In this work we provide estimates of local equivalent diffusion in the lower stratosphere by comparing balloon-borne high-resolution measurements of chemical tracers with reconstructed mixing ratios from large ensembles of random Lagrangian backward trajectories using European Centre for Medium-range Weather Forecasts analysed winds and a chemistry-transport model (REPROBUS). We focus on a case study in subtropical latitudes using data from the HIBISCUS campaign. An upper bound on the vertical diffusivity is found in this case study to be of the order of 0.5 m^2 s^-1 in the subtropical region, which is larger than the estimates at higher latitudes. The relation between diffusion and dispersion is studied by estimating Lyapunov exponents and studying their variation according to the presence of active dynamical structures.
A linear programming approach to max-sum problem: a review.
Werner, Tomás
2007-07-01
The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
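The upper bound reviewed here is simply the sum of per-factor maxima of the (reparameterized) unary and pairwise functions, and an "equivalent transformation" shifts value between a pairwise function and a unary one without changing the score of any labeling, so it can only tighten or preserve that bound. The toy two-variable example below, with made-up numbers, illustrates both facts; it is not the minimization algorithm of the paper.

```python
import numpy as np

def upper_bound(unary, pair):
    """Schlesinger-style upper bound: sum of per-factor maxima."""
    return sum(u.max() for u in unary) + pair.max()

# Two variables with 2 labels each and one pairwise term pair[s, t].
unary = [np.array([0.0, 3.0]), np.array([0.0, 3.0])]
pair = np.array([[4.0, 0.0],
                 [0.0, 0.0]])

best = max(unary[0][s] + unary[1][t] + pair[s, t]
           for s in range(2) for t in range(2))        # true max-sum value = 6
print(best, upper_bound(unary, pair))                   # bound 10 >= 6

# Equivalent transformation: move phi(s) from the pairwise term into node 0's
# unary term; every labeling keeps the same score, but the bound tightens.
phi = pair.max(axis=1)                                  # here [4, 0]
pair2 = pair - phi[:, None]
unary2 = [unary[0] + phi, unary[1]]
print(upper_bound(unary2, pair2))                       # tighter bound (here 7)
```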
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our works to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of data points in kernel space plus a constant. Thus, the k-means centers of data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both Gaussian kernel and polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
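The sketch below illustrates the general idea under two simplifications that are my own, not the paper's: a Gaussian kernel, and ordinary k-means centers in input space used as a practical stand-in for kernel k-means centers (for the Gaussian kernel the two are closely related). The landmark kernel blocks C and W then give the standard Nyström reconstruction K ≈ C W⁺ Cᵀ, whose relative Frobenius error is printed at the end.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def gaussian_kernel(X, Y, gamma=0.5):
    return np.exp(-gamma * cdist(X, Y, "sqeuclidean"))

def nystrom_with_kmeans_landmarks(X, m, gamma=0.5, seed=0):
    """Nystrom approximation K ~= C W^+ C^T with landmarks taken as k-means
    centers (a proxy for kernel k-means centers under a Gaussian kernel)."""
    centers = KMeans(n_clusters=m, n_init=10,
                     random_state=seed).fit(X).cluster_centers_
    C = gaussian_kernel(X, centers, gamma)           # n x m cross-kernel block
    W = gaussian_kernel(centers, centers, gamma)     # m x m landmark block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
K = gaussian_kernel(X, X)
K_approx = nystrom_with_kmeans_landmarks(X, m=30)
print(np.linalg.norm(K - K_approx, "fro") / np.linalg.norm(K, "fro"))
```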
NASA Astrophysics Data System (ADS)
Pang, Yi; Rong, Junchen; Su, Ning
2016-12-01
We consider ϕ^3 theory in 6 - 2ε dimensions with F4 global symmetry. The beta function is calculated up to 3 loops, and a stable unitary IR fixed point is observed. The anomalous dimensions of operators quadratic or cubic in ϕ are also computed. We then employ the conformal bootstrap technique to study the fixed point predicted from the perturbative approach. For each putative scaling dimension of ϕ (Δ_ϕ), we obtain the corresponding upper bound on the scaling dimension of the second-lowest scalar primary in the 26 representation (Δ_26^(2nd)) which appears in the OPE of ϕ × ϕ. In D = 5.95, we observe a sharp peak on the upper bound curve located at Δ_ϕ equal to the value predicted by the 3-loop computation. In D = 5, we observe a weak kink on the upper bound curve at (Δ_ϕ, Δ_26^(2nd)) = (1.6, 4).
A prevalence-based approach to societal costs occurring in consequence of child abuse and neglect
2012-01-01
Background Traumatization in childhood can result in lifelong health impairment and may have a negative impact on other areas of life such as education, social contacts and employment as well. Despite the frequent occurrence of traumatization, which is reflected in a 14.5 percent prevalence rate of severe child abuse and neglect, the economic burden of the consequences is hardly known. The objective of this prevalence-based cost-of-illness study is to show how impairment of the individual is reflected in economic trauma follow-up costs borne by society as a whole in Germany and to compare the results with other countries’ costs. Methods From a societal perspective trauma follow-up costs were estimated using a bottom-up approach. The literature-based prevalence rate includes emotional, physical and sexual abuse as well as physical and emotional neglect in Germany. Costs are derived from individual case scenarios of child endangerment presented in a German cost-benefit-analysis. A comparison with trauma follow-up costs in Australia, Canada and the USA is based on purchasing power parity. Results The annual trauma follow-up costs total to a margin of EUR 11.1 billion for the lower bound and to EUR 29.8 billion for the upper bound. This equals EUR 134.84 and EUR 363.58, respectively, per capita for the German population. These results conform to the ones obtained from cost studies conducted in Australia (lower bound) and Canada (upper bound), whereas the result for the United States is much lower. Conclusion Child abuse and neglect result in trauma follow-up costs of economically relevant magnitude for the German society. Although the result is well in line with other countries’ costs, the general lack of data should be fought in order to enable more detailed future studies. Creating a reliable cost data basis in the first place can pave the way for long-term cost savings. PMID:23158382
A prevalence-based approach to societal costs occurring in consequence of child abuse and neglect.
Habetha, Susanne; Bleich, Sabrina; Weidenhammer, Jörg; Fegert, Jörg M
2012-11-16
Traumatization in childhood can result in lifelong health impairment and may have a negative impact on other areas of life such as education, social contacts and employment as well. Despite the frequent occurrence of traumatization, which is reflected in a 14.5 percent prevalence rate of severe child abuse and neglect, the economic burden of the consequences is hardly known. The objective of this prevalence-based cost-of-illness study is to show how impairment of the individual is reflected in economic trauma follow-up costs borne by society as a whole in Germany and to compare the results with other countries' costs. From a societal perspective trauma follow-up costs were estimated using a bottom-up approach. The literature-based prevalence rate includes emotional, physical and sexual abuse as well as physical and emotional neglect in Germany. Costs are derived from individual case scenarios of child endangerment presented in a German cost-benefit-analysis. A comparison with trauma follow-up costs in Australia, Canada and the USA is based on purchasing power parity. The annual trauma follow-up costs total to a margin of EUR 11.1 billion for the lower bound and to EUR 29.8 billion for the upper bound. This equals EUR 134.84 and EUR 363.58, respectively, per capita for the German population. These results conform to the ones obtained from cost studies conducted in Australia (lower bound) and Canada (upper bound), whereas the result for the United States is much lower. Child abuse and neglect result in trauma follow-up costs of economically relevant magnitude for the German society. Although the result is well in line with other countries' costs, the general lack of data should be fought in order to enable more detailed future studies. Creating a reliable cost data basis in the first place can pave the way for long-term cost savings.
Inclusion-Based Effective Medium Models for the Permeability of a 3D Fractured Rock Mass
NASA Astrophysics Data System (ADS)
Ebigbo, A.; Lang, P. S.; Paluszny, A.; Zimmerman, R. W.
2015-12-01
Following the work of Saevik et al. (Transp. Porous Media, 2013; Geophys. Prosp., 2014), we investigate the ability of classical inclusion-based effective medium theories to predict the macroscopic permeability of a fractured rock mass. The fractures are assumed to be thin, oblate spheroids, and are treated as porous media in their own right, with permeability kf, and are embedded in a homogeneous matrix having permeability km. At very low fracture densities, the effective permeability is given exactly by a well-known expression that goes back at least as far as Fricke (Phys. Rev., 1924). For non-trivial fracture densities, an effective medium approximation must be employed. We have investigated several such approximations: Maxwell's method, the differential method, and the symmetric and asymmetric versions of the self-consistent approximation. The predictions of the various approximate models are tested against the results of explicit numerical simulations, averaged over numerous statistical realizations for each set of parameters. Each of the various effective medium approximations satisfies the Hashin-Shtrikman (H-S) bounds. Unfortunately, these bounds are much too far apart to provide quantitatively useful estimates of keff. For the case of zero matrix permeability, the well-known approximation of Snow, which is based on network considerations rather than a continuum approach, is shown to essentially coincide with the upper H-S bound, thereby proving that the commonly made assumption that Snow's equation is an "upper bound" is indeed correct. This problem is actually characterized by two small parameters, the aspect ratio of the spheroidal fractures, α, and the permeability ratio, κ = km/kf. Two different regimes can be identified, corresponding to α < κ and κ < α, and expressions for each of the effective medium approximations are developed in both regimes. In both regimes, the symmetric version of the self-consistent approximation is the most accurate.
Temperature of Earth's core constrained from melting of Fe and Fe0.9Ni0.1 at high pressures
NASA Astrophysics Data System (ADS)
Zhang, Dongzhou; Jackson, Jennifer M.; Zhao, Jiyong; Sturhahn, Wolfgang; Alp, E. Ercan; Hu, Michael Y.; Toellner, Thomas S.; Murphy, Caitlin A.; Prakapenka, Vitali B.
2016-08-01
The melting points of fcc- and hcp-structured Fe0.9Ni0.1 and Fe are measured up to 125 GPa using laser heated diamond anvil cells, synchrotron Mössbauer spectroscopy, and a recently developed fast temperature readout spectrometer. The onset of melting is detected by a characteristic drop in the time-integrated synchrotron Mössbauer signal which is sensitive to atomic motion. The thermal pressure experienced by the samples is constrained by X-ray diffraction measurements under high pressures and temperatures. The obtained best-fit melting curves of fcc-structured Fe and Fe0.9Ni0.1 fall within the wide region bounded by previous studies. We are able to derive the γ-ɛ-l triple point of Fe and the quasi triple point of Fe0.9Ni0.1 to be 110 ± 5GPa, 3345 ± 120K and 116 ± 5GPa, 3260 ± 120K, respectively. The measured melting temperatures of Fe at similar pressure are slightly higher than those of Fe0.9Ni0.1 while their one sigma uncertainties overlap. Using previously measured phonon density of states of hcp-Fe, we calculate melting curves of hcp-structured Fe and Fe0.9Ni0.1 using our (quasi) triple points as anchors. The extrapolated Fe0.9Ni0.1 melting curve provides an estimate for the upper bound of Earth's inner core-outer core boundary temperature of 5500 ± 200K. The temperature within the liquid outer core is then approximated with an adiabatic model, which constrains the upper bound of the temperature at the core side of the core-mantle boundary to be 4000 ± 200K. We discuss a potential melting point depression caused by light elements and the implications of the presented core-mantle boundary temperature bounds on phase relations in the lowermost part of the mantle.
Temperature of Earth's core constrained from melting of Fe and Fe 0.9Ni 0.1 at high pressures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Dongzhou; Jackson, Jennifer M.; Zhao, Jiyong
The melting points of fcc- and hcp-structured Fe0.9Ni0.1 and Fe are measured up to 125 GPa using laser heated diamond anvil cells, synchrotron Mössbauer spectroscopy, and a recently developed fast temperature readout spectrometer. The onset of melting is detected by a characteristic drop in the time-integrated synchrotron Mössbauer signal which is sensitive to atomic motion. The thermal pressure experienced by the samples is constrained by X-ray diffraction measurements under high pressures and temperatures. The obtained best-fit melting curves of fcc-structured Fe and Fe0.9Ni0.1 fall within the wide region bounded by previous studies. We are able to derive the γ-ε-l triple point of Fe and the quasi triple point of Fe0.9Ni0.1 to be 110 ± 5 GPa, 3345 ± 120 K and 116 ± 5 GPa, 3260 ± 120 K, respectively. The measured melting temperatures of Fe at similar pressure are slightly higher than those of Fe0.9Ni0.1 while their one sigma uncertainties overlap. Using previously measured phonon density of states of hcp-Fe, we calculate melting curves of hcp-structured Fe and Fe0.9Ni0.1 using our (quasi) triple points as anchors. The extrapolated Fe0.9Ni0.1 melting curve provides an estimate for the upper bound of Earth's inner core-outer core boundary temperature of 5500 ± 200 K. The temperature within the liquid outer core is then approximated with an adiabatic model, which constrains the upper bound of the temperature at the core side of the core-mantle boundary to be 4000 ± 200 K. We discuss a potential melting point depression caused by light elements and the implications of the presented core-mantle boundary temperature bounds on phase relations in the lowermost part of the mantle.
Strong polygamy of quantum correlations in multi-party quantum systems
NASA Astrophysics Data System (ADS)
Kim, Jeong San
2014-10-01
We propose a new type of polygamy inequality for multi-party quantum entanglement. We first consider the possible amount of bipartite entanglement distributed between a fixed party and any subset of the remaining parties in a multi-party quantum system. By summing these distributed entanglements, we provide an upper bound on the entanglement distributed between one party and the rest in multi-party quantum systems. We then show that this upper bound also serves as a lower bound of the usual polygamy inequality, thereby establishing the strong polygamy of multi-party quantum entanglement. For multi-party pure states, we further show that the strong polygamy of entanglement implies the strong polygamy of quantum discord.
Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems
NASA Astrophysics Data System (ADS)
Tobasco, Ian; Goluskin, David; Doering, Charles R.
2018-02-01
For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
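As a small illustration of the quantity being bounded (not the auxiliary-function/semidefinite-programming machinery itself), the following Python sketch estimates the long-time average of one observable, Φ = z, along a single chaotic trajectory of the Lorenz system; the standard parameter values and the integration settings are assumptions for illustration only.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def time_average_z(T=200.0, dt=1e-3, s0=(1.0, 1.0, 1.0)):
    """Long-time average of the observable Phi = z along one trajectory (RK4 integration)."""
    s = np.array(s0, dtype=float)
    acc = 0.0
    for _ in range(int(T / dt)):
        k1 = lorenz_rhs(s)
        k2 = lorenz_rhs(s + 0.5 * dt * k1)
        k3 = lorenz_rhs(s + 0.5 * dt * k2)
        k4 = lorenz_rhs(s + dt * k3)
        s = s + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        acc += s[2] * dt
    return acc / T

print(time_average_z())   # roughly 23-24 at these standard parameter values
```

An a priori upper bound proved with an auxiliary function would dominate this empirical average for every trajectory, which is what makes the sharpness result of the paper notable.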
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cartas-Fuentevilla, Roberto; Escalante, Alberto; Germán, Gabriel
Following recent studies which show that it is possible to localize gravity as well as scalar and gauge vector fields in a tachyonic de Sitter thick braneworld, we investigate the solution of the gauge hierarchy problem, the localization of fermion fields in this model, the recovering of the Coulomb law in the non-relativistic limit of the Yukawa interaction between bulk fermions and gauge bosons localized on the brane, and confront the predicted 5D corrections to the photon mass with its upper experimental/observational bounds, finding the model physically viable since it passes these tests. In order to achieve the latter aims we first consider the Yukawa interaction term between the fermionic and the tachyonic scalar fields MF(T)Ψ̄Ψ in the action and analyze four distinct tachyonic functions F(T) that lead to four different structures of the respective fermionic mass spectra with different physics. In particular, localization of the massless left-chiral fermion zero mode is possible for three of these cases. We further analyze the phenomenology of these Yukawa interactions among fermion fields and gauge bosons localized on the brane and obtain the crucial and necessary information to compute the corrections to Coulomb's law coming from massive KK vector modes in the non-relativistic limit. These corrections are exponentially suppressed due to the presence of the mass gap in the mass spectrum of the bulk gauge vector field. From our results we conclude that corrections to Coulomb's law in the thin brane limit have the same form (up to a numerical factor) as long as the left-chiral massless fermion field is localized on the brane. Finally, we compute the corrections to Coulomb's law for an arbitrarily thick brane scenario, which can be interpreted as 5D corrections to the photon mass. By performing consistent estimations with brane phenomenology, we find that the predicted corrections to the photon mass are well bounded by the experimentally observed or astrophysically inferred photon mass, lying far below its upper bound and positively testing the viability of our tachyonic braneworld. Moreover, the 5D parameters that define these corrections are of the same order, providing naturalness to our model; however, a fine-tuning between them is needed in order to fit the corresponding upper bound on the photon mass.
NASA Technical Reports Server (NTRS)
Hallock, Ashley K.; Polzin, Kurt A.; Bonds, Kevin W.; Emsellem, Gregory D.
2011-01-01
Results are presented demonstrating the effect of inductive coil geometry and current sheet trajectory on the exhaust velocity of propellant in conical theta pinch pulsed inductive plasma accelerators. The electromagnetic coupling between the inductive coil of the accelerator and a plasma current sheet is simulated, substituting a conical copper frustum for the plasma. The variation of system inductance as a function of plasma position is obtained by displacing the simulated current sheet from the coil while measuring the total inductance of the coil. Four coils of differing geometries were employed, and the total inductance of each coil was measured as a function of the axial displacement of two separate copper frusta, both having the same cone angle and length as the coil but with one compressed to a smaller size relative to the coil. The measured relationship between total coil inductance and current sheet position closes a dynamical circuit model that is used to calculate the resulting current sheet velocity for various coil and current sheet configurations. The results of this model, which neglects the pinching contribution to thrust, radial propellant confinement, and plume divergence, indicate that in a conical theta pinch geometry current sheet pinching is detrimental to thruster performance, reducing the kinetic energy of the exhausting propellant by up to 50% (at the upper bound for the parameter range of the study). The decrease in exhaust velocity was larger for coils and simulated current sheets of smaller half cone angles. An upper bound for the pinching contribution to thrust is estimated for typical operating parameters. Measurements of coil inductance for three different current sheet pinching conditions are used to estimate the magnetic pressure as a function of current sheet radial compression. The gas-dynamic contribution to axial acceleration is also estimated and shown to not compensate for the decrease in axial electromagnetic acceleration that accompanies the radial compression of the plasma in conical theta pinches.
Estimation of inflation parameters for Perturbed Power Law model using recent CMB measurements
NASA Astrophysics Data System (ADS)
Mukherjee, Suvodip; Das, Santanu; Joy, Minu; Souradeep, Tarun
2015-01-01
The Cosmic Microwave Background (CMB) is an important probe for understanding the inflationary era of the Universe. We consider the Perturbed Power Law (PPL) model of inflation, which is a soft deviation from the Power Law (PL) inflationary model. This model captures the effect of higher order derivatives of the Hubble parameter during inflation, which in turn lead to a non-zero effective mass m_eff for the inflaton field. At leading order, the higher order derivatives of the Hubble parameter source a constant difference between the spectral indices of scalar and tensor perturbations, going beyond the PL model of inflation. The PPL model has two independent observable parameters, namely the spectral index for tensor perturbations ν_t and the change in the spectral index for scalar perturbations ν_st, to explain the observed features in the scalar and tensor power spectra of perturbations. From the recent measurements of CMB power spectra by WMAP, Planck and BICEP-2 for temperature and polarization, we estimate the feasibility of the PPL model against the standard ΛCDM model. Although BICEP-2 claimed a detection of r = 0.2, estimates of dust contamination provided by Planck have left open the possibility that only an upper bound on r will emerge from a joint analysis. As a result, we consider different upper bounds on the value of r and show that the PPL model can explain a lower value of the tensor-to-scalar ratio (r < 0.1 or r < 0.01) for a scalar spectral index of n_s = 0.96 by having a non-zero value of the effective mass squared of the inflaton field, m_eff²/H². The analysis with the WP + Planck likelihood shows a non-zero detection of m_eff²/H² at 5.7σ and 8.1σ for r < 0.1 and r < 0.01, respectively. With the BICEP-2 likelihood, however, m_eff²/H² = -0.0237 ± 0.0135, which is consistent with zero.
Elimination of Onchocerciasis from Mexico
Rodríguez-Pérez, Mario A.; Fernández-Santos, Nadia A.; Orozco-Algarra, María E.; Rodríguez-Atanacio, José A.; Domínguez-Vázquez, Alfredo; Rodríguez-Morales, Kristel B.; Real-Najarro, Olga; Prado-Velasco, Francisco G.; Cupp, Eddie W.; Richards, Frank O.; Hassan, Hassan K.; González-Roldán, Jesús F.; Kuri-Morales, Pablo A.; Unnasch, Thomas R.
2015-01-01
Background Mexico is one of the six countries formerly endemic for onchocerciasis in Latin America. Transmission has been interrupted in the three endemic foci of that country and mass drug distribution has ceased. Three years after mass drug distribution ended, post-treatment surveillance (PTS) surveys were undertaken which employed entomological indicators to check for transmission recrudescence. Methodology/Principal findings In-depth entomologic assessments were performed in 18 communities in the three endemic foci of Mexico. None of the 108,212 Simulium ochraceum s.l. collected from the three foci were found to contain parasite DNA when tested by polymerase chain reaction-enzyme-linked immunosorbent assay (PCR-ELISA), resulting in a maximum upper bound of the 95% confidence interval (95%-ULCI) of the infective rate in the vectors of 0.035/2,000 flies examined. This is an order of magnitude below the threshold of a 95%-ULCI of less than one infective fly per 2,000 flies tested, the current entomological criterion for interruption of transmission developed by the international community. The point estimate of seasonal transmission potential (STP) was zero, and the upper bound of the 95% confidence interval for the STP ranged from 1.2 to 1.7 L3/person/season in the different foci. This value is below all previous estimates for the minimum transmission potential required to maintain the parasite population. Conclusions/Significance The results from the in-depth entomological post treatment surveillance surveys strongly suggest that transmission has not resumed in the three foci of Mexico during the three years since the last distribution of ivermectin occurred; it was concluded that transmission remains undetectable without intervention, and Onchocerca volvulus has been eliminated from Mexico. PMID:26161558
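As a rough, back-of-the-envelope illustration only (not the pool-screening analysis used in the study, which stratifies by focus and accounts for pooled testing, and therefore yields the smaller 0.035 figure), the scale of a 95% upper confidence limit when zero infective flies are found among N tested flies can be sketched as follows; the function name and workflow are assumptions for illustration.

```python
# Exact one-sided 95% upper bound on a per-fly rate when 0 of N flies test positive
# (Clopper-Pearson for zero successes), expressed per 2,000 flies. This simple
# calculation ignores pooled testing and per-focus stratification, so it does not
# reproduce the published 0.035/2,000 value.
def upper_95_per_2000(n_flies, alpha=0.05):
    p_upper = 1.0 - alpha ** (1.0 / n_flies)   # approximately 3 / n_flies ("rule of three")
    return 2000.0 * p_upper

print(upper_95_per_2000(108_212))   # roughly 0.055 infective flies per 2,000 examined
```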
Alternate methods for FAAT S-curve generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaufman, A.M.
The FAAT (Foreign Asset Assessment Team) assessment methodology attempts to derive a probability of effect as a function of incident field strength. The probability of effect is the likelihood that the stress put on a system exceeds its strength. In the FAAT methodology, both the stress and strength are random variables whose statistical properties are estimated by experts. Each random variable has two components of uncertainty: systematic and random. The systematic uncertainty drives the confidence bounds in the FAAT assessment. Its variance can be reduced by improved information. The variance of the random uncertainty is not reducible. The FAAT methodology uses an assessment code called ARES to generate probability of effect curves (S-curves) at various confidence levels. ARES assumes log normal distributions for all random variables. The S-curves themselves are log normal cumulants associated with the random portion of the uncertainty. The placement of the S-curves depends on the confidence bounds. The systematic uncertainty in both stress and strength is usually described by a mode and an upper and lower variance. Such a description is not consistent with the log normal assumption of ARES, and an unsatisfactory work-around is used to obtain the required placement of the S-curves at each confidence level. We have looked into this situation and have found that significant errors are introduced by this work-around. These errors are at least several dB-W/cm^2 at all confidence levels, but they are especially bad in the estimate of the median. In this paper, we suggest two alternate solutions for the placement of S-curves. To compare these calculational methods, we have tabulated the common combinations of upper and lower variances and generated the relevant S-curve offsets from the mode difference of stress and strength.
Carlson, Josh J; Thariani, Rahber; Roth, Josh; Gralow, Julie; Henry, N Lynn; Esmail, Laura; Deverka, Pat; Ramsey, Scott D; Baker, Laurence; Veenstra, David L
2013-05-01
The objective of this study was to evaluate the feasibility and outcomes of incorporating value-of-information (VOI) analysis into a stakeholder-driven research prioritization process in a US-based setting. Within a program to prioritize comparative effectiveness research areas in cancer genomics, over a period of 7 months, we developed decision-analytic models and calculated upper-bound VOI estimates for 3 previously selected genomic tests. Thirteen stakeholders representing patient advocates, payers, test developers, regulators, policy makers, and community-based oncologists ranked the tests before and after receiving VOI results. The stakeholders were surveyed about the usefulness and impact of the VOI findings. The estimated upper-bound VOI ranged from $33 million to $2.8 billion for the 3 research areas. Seven stakeholders indicated the results modified their rankings, 9 stated VOI data were useful, and all indicated they would support its use in future prioritization processes. Some stakeholders indicated expected value of sampled information might be the preferred choice when evaluating specific topics. Limitations: Our study was limited by the size and the potential for selection bias in the composition of the external stakeholder group, the lack of a randomized design to assess the effect of VOI data on rankings, and the use of expected value of perfect information v. expected value of sample information methods. Value-of-information analyses may have a meaningful role in research topic prioritization for comparative effectiveness research in the United States, particularly when large differences in VOI across topic areas are identified. Additional research is needed to facilitate the use of more complex value-of-information analyses in this setting.
Upper bounds on quantum uncertainty products and complexity measures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrero, Angel; Sanchez-Moreno, Pablo; Dehesa, Jesus S.
The position-momentum Shannon and Rényi uncertainty products of general quantum systems are shown to be bounded not only from below (through the known uncertainty relations), but also from above in terms of the Heisenberg-Kennard product. Moreover, the Cramér-Rao, Fisher-Shannon, and López-Ruiz, Mancini, and Calbet shape measures of complexity (whose lower bounds have been recently found) are also bounded from above. The improvement of these bounds for systems subject to spherically symmetric potentials is also given explicitly. Finally, applications to hydrogenic and oscillator-like systems are presented.
The Nazca-South American convergence rate and the recurrence of the great 1960 Chilean earthquake
NASA Technical Reports Server (NTRS)
Stein, S.; Engeln, J. F.; Demets, C.; Gordon, R. G.; Woods, D.
1986-01-01
The seismic slip rate along the Chile Trench estimated from the slip in the great 1960 earthquake and the recurrence history of major earthquakes has been interpreted as consistent with the subduction rate of the Nazca plate beneath South America. The convergence rate, estimated from global relative plate motion models, depends significantly on closure of the Nazca - Antarctica - South America circuit. NUVEL-1, a new plate motion model which incorporates recently determined spreading rates on the Chile Rise, shows that the average convergence rate over the last three million years is slower than previously estimated. If this time-averaged convergence rate provides an appropriate upper bound for the seismic slip rate, either the characteristic Chilean subduction earthquake is smaller than the 1960 event, the average recurrence interval is greater than observed in the last 400 years, or both. These observations bear out the nonuniformity of plate motions on various time scales, the variability in characteristic subduction zone earthquake size, and the limitations of recurrence time estimates.
Degree-of-Freedom Strengthened Cascade Array for DOD-DOA Estimation in MIMO Array Systems.
Yao, Bobin; Dong, Zhi; Zhang, Weile; Wang, Wei; Wu, Qisheng
2018-05-14
In spatial spectrum estimation, the difference co-array can provide extra degrees-of-freedom (DOFs) for promoting parameter identifiability and parameter estimation accuracy. For the sake of acquiring as many DOFs as possible with a given number of physical sensors, we herein design a novel sensor array geometry named the cascade array. This structure is generated by systematically connecting a uniform linear array (ULA) and a non-uniform linear array, and can provide more DOFs than some existing array structures, although fewer than the upper bound indicated by the minimum redundant array (MRA). We further apply this cascade array to multiple input multiple output (MIMO) array systems, and propose a novel joint direction of departure (DOD) and direction of arrival (DOA) estimation algorithm, which is based on a reduced-dimensional weighted subspace fitting technique. The algorithm is angle auto-paired and computationally efficient. Theoretical analysis and numerical simulations prove the advantages and effectiveness of the proposed array structure and the related algorithm.
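To illustrate the co-array idea referred to above (not the paper's specific cascade geometry, which is not reproduced here), the following minimal Python sketch counts the distinct lags of the difference co-array for a uniform and a non-uniform array; the sensor positions are illustrative assumptions.

```python
import numpy as np

def difference_coarray(positions):
    """Distinct pairwise differences (lags) generated by a set of sensor positions."""
    positions = np.asarray(positions, dtype=int)
    lags = {int(a - b) for a in positions for b in positions}
    return np.array(sorted(lags))

# Illustrative 6-sensor arrays (positions in half-wavelength units, assumed for this sketch)
ula = [0, 1, 2, 3, 4, 5]
nonuniform = [0, 1, 2, 6, 10, 14]

for name, arr in [("ULA", ula), ("non-uniform", nonuniform)]:
    lags = difference_coarray(arr)
    print(f"{name}: {len(arr)} sensors -> {len(lags)} distinct co-array lags")
```

The non-uniform layout generates more distinct lags (virtual sensors) than the ULA with the same number of physical elements, which is the mechanism by which such geometries gain DOFs.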
Fitzpatrick, Colin; Olivetti, Elsa; Miller, Reed; Roth, Richard; Kirchain, Randolph
2015-01-20
Recent legislation has focused attention on the supply chains of tin, tungsten, tantalum, and gold (3TG), specifically those originating from the eastern part of the Democratic Republic of Congo. The unique properties of these so-called “conflict minerals” lead to their use in many products, ranging from medical devices to industrial cutting tools. This paper calculates per product use of 3TG in several information, communication, and technology (ICT) products such as desktops, servers, laptops, smart phones, and tablets. By scaling up individual product estimates to global shipment figures, this work estimates the influence of the ICT sector on 3TG mining in covered countries. The model estimates the upper bound of tin, tungsten, tantalum, and gold use within ICT products to be 2%, 0.1%, 15%, and 3% of the 2013 market share, respectively. This result is projected into the future (2018) based on the anticipated increase in ICT device production.
A posteriori error estimates in voice source recovery
NASA Astrophysics Data System (ADS)
Leonov, A. S.; Sorokin, V. N.
2017-12-01
The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model is used for the solution that relates these quantities. A variational method of solving inverse problem of voice source recovery for a new parametric class of sources, that is for piecewise-linear sources (PWL-sources), is proposed. Also, a technique for a posteriori numerical error estimation for obtained solutions is presented. A computer study of the adequacy of adopted speech production model with PWL-sources is performed in solving the inverse problems for various types of voice signals, as well as corresponding study of a posteriori error estimates. Numerical experiments for speech signals show satisfactory properties of proposed a posteriori error estimates, which represent the upper bounds of possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that a posteriori error estimates can be used as a criterion of the quality for obtained voice source pulses in application to speaker recognition.
Single-shot quantum state estimation via a continuous measurement in the strong backaction regime
NASA Astrophysics Data System (ADS)
Cook, Robert L.; Riofrío, Carlos A.; Deutsch, Ivan H.
2014-09-01
We study quantum tomography based on a stochastic continuous-time measurement record obtained from a probe field collectively interacting with an ensemble of identically prepared systems. In comparison to previous studies, we consider here the case in which the measurement-induced backaction has a non-negligible effect on the dynamical evolution of the ensemble. We formulate a maximum likelihood estimate for the initial quantum state given only a single instance of the continuous diffusive measurement record. We apply our estimator to the simplest problem: state tomography of a single pure qubit, which, during the course of the measurement, is also subjected to dynamical control. We identify a regime where the many-body system is well approximated at all times by a separable pure spin coherent state, whose Bloch vector undergoes a conditional stochastic evolution. We simulate the results of our estimator and show that we can achieve close to the upper bound of fidelity set by the optimal generalized measurement. This estimate is compared to, and significantly outperforms, an equivalent estimator that ignores measurement backaction.
NASA Astrophysics Data System (ADS)
Vukičević, Damir; Đurđević, Jelena
2011-10-01
Bond incident degree index is a descriptor that is calculated as the sum of the bond contributions such that each bond contribution depends solely on the degrees of its incident vertices (e.g. Randić index, Zagreb index, modified Zagreb index, variable Randić index, atom-bond connectivity index, augmented Zagreb index, sum-connectivity index, many Adriatic indices, and many variable Adriatic indices). In this Letter we find tight upper and lower bounds for bond incident degree index for catacondensed fluoranthenes with given number of hexagons.
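As a small illustration of how a generic bond incident degree (BID) index is evaluated (a sketch only, not the fluoranthene-specific bounds of the Letter), the following Python snippet sums an edge contribution f(deg(u), deg(v)) over all edges; the toy graph and the particular choices of f are assumptions for illustration.

```python
import networkx as nx

def bid_index(G, f):
    """Generic bond incident degree index: sum of f(deg(u), deg(v)) over all edges."""
    return sum(f(G.degree(u), G.degree(v)) for u, v in G.edges())

G = nx.cycle_graph(6)  # toy hexagon; every vertex has degree 2

randic = bid_index(G, lambda du, dv: (du * dv) ** -0.5)   # Randic index contribution
second_zagreb = bid_index(G, lambda du, dv: du * dv)      # second Zagreb index contribution

print(randic, second_zagreb)   # 3.0 and 24 for the hexagon
```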
Beating the photon-number-splitting attack in practical quantum cryptography.
Wang, Xiang-Bin
2005-06-17
We propose an efficient method to verify the upper bound of the fraction of counts caused by multiphoton pulses in practical quantum key distribution using weak coherent light, given whatever type of Eve's action. The protocol simply uses two coherent states for the signal pulses and vacuum for the decoy pulse. Our verified upper bound is sufficiently tight for quantum key distribution with a very lossy channel, in both the asymptotic and nonasymptotic case. So far our protocol is the only decoy-state protocol that works efficiently for currently existing setups.
The local interstellar helium density - Corrected
NASA Technical Reports Server (NTRS)
Freeman, J.; Paresce, F.; Bowyer, S.
1979-01-01
An upper bound for the number density of neutral helium in the local interstellar medium of 0.004 ± 0.0022 per cu cm was previously reported, based on extreme-ultraviolet telescope observations at 584 Å made during the 1975 Apollo-Soyuz Test Project. A variety of evidence is found which indicates that the 584-Å sensitivity of the instrument declined by a factor of 2 between the last laboratory calibration and the time of the measurements. The upper bound on the helium density is therefore revised to 0.0089 ± 0.005 per cu cm.
Upper bound on three-tangles of reduced states of four-qubit pure states
NASA Astrophysics Data System (ADS)
Sharma, S. Shelly; Sharma, N. K.
2017-06-01
Closed formulas for upper bounds on three-tangles of three-qubit reduced states in terms of three-qubit-invariant polynomials of pure four-qubit states are obtained. Our results offer tighter constraints on the total three-way entanglement of a given qubit with the rest of the system than those used by Regula et al. [Phys. Rev. Lett. 113, 110501 (2014), 10.1103/PhysRevLett.113.110501; Phys. Rev. Lett. 116, 049902(E) (2016), 10.1103/PhysRevLett.116.049902] to verify monogamy of four-qubit quantum entanglement.
Planck limits on non-canonical generalizations of large-field inflation models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Nina K.; Kinney, William H., E-mail: ninastei@buffalo.edu, E-mail: whkinney@buffalo.edu
2017-04-01
In this paper, we consider two case examples of Dirac-Born-Infeld (DBI) generalizations of canonical large-field inflation models, characterized by a reduced sound speed, c_S < 1. The reduced speed of sound lowers the tensor-scalar ratio, improving the fit of the models to the data, but increases the equilateral-mode non-Gaussianity, f_NL^equil, which the latest results from the Planck satellite constrain by a new upper bound. We examine constraints on these models in light of the most recent Planck and BICEP/Keck results, and find that they have a greatly decreased window of viability. The upper bound on f_NL^equil corresponds to a lower bound on the sound speed and a corresponding lower bound on the tensor-scalar ratio of r ∼ 0.01, so that near-future Cosmic Microwave Background observations may be capable of ruling out entire classes of DBI inflation models. The result is, however, not universal: infrared-type DBI inflation models, where the speed of sound increases with time, are not subject to the bound.
Derivative-free generation and interpolation of convex Pareto optimal IMRT plans
NASA Astrophysics Data System (ADS)
Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk
2006-12-01
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
Circuit bounds on stochastic transport in the Lorenz equations
NASA Astrophysics Data System (ADS)
Weady, Scott; Agarwal, Sahil; Wilen, Larry; Wettlaufer, J. S.
2018-07-01
In turbulent Rayleigh-Bénard convection one seeks the relationship between the heat transport, captured by the Nusselt number, and the temperature drop across the convecting layer, captured by the Rayleigh number. In experiments, one measures the Nusselt number for a given Rayleigh number, and the question of how close that value is to the maximal transport is a key prediction of variational fluid mechanics in the form of an upper bound. The Lorenz equations have traditionally been studied as a simplified model of turbulent Rayleigh-Bénard convection, and hence it is natural to investigate their upper bounds, which has previously been done numerically and analytically, but they are not as easily accessible in an experimental context. Here we describe a specially built circuit that is the experimental analogue of the Lorenz equations and compare its output to the recently determined upper bounds of the stochastic Lorenz equations [1]. The circuit is substantially more efficient than computational solutions, and hence we can more easily examine the system. Because of offsets that appear naturally in the circuit, we are motivated to study unique bifurcation phenomena that arise as a result. Namely, for a given Rayleigh number, we find a reentrant behavior of the transport on noise amplitude and this varies with Rayleigh number passing from the homoclinic to the Hopf bifurcation.
Energy Bounds for a Compressed Elastic Film on a Substrate
NASA Astrophysics Data System (ADS)
Bourne, David P.; Conti, Sergio; Müller, Stefan
2017-04-01
We study pattern formation in a compressed elastic film which delaminates from a substrate. Our key tool is the determination of rigorous upper and lower bounds on the minimum value of a suitable energy functional. The energy consists of two parts, describing the two main physical effects. The first part represents the elastic energy of the film, which is approximated using the von Kármán plate theory. The second part represents the fracture or delamination energy, which is approximated using the Griffith model of fracture. A simpler model containing the first term alone was previously studied with similar methods by several authors, assuming that the delaminated region is fixed. We include the fracture term, transforming the elastic minimisation into a free boundary problem, and opening the way for patterns which result from the interplay of elasticity and delamination. After rescaling, the energy depends on only two parameters: the rescaled film thickness, σ, and a measure of the bonding strength between the film and substrate, γ. We prove upper bounds on the minimum energy of the form σ^a γ^b and find that there are four different parameter regimes corresponding to different values of a and b and to different folding patterns of the film. In some cases, the upper bounds are attained by self-similar folding patterns as observed in experiments. Moreover, for two of the four parameter regimes we prove matching, optimal lower bounds.
Kok, H P; de Greef, M; Bel, A; Crezee, J
2009-08-01
In regional hyperthermia, optimization is useful to obtain adequate applicator settings. A speed-up of the previously published method for high resolution temperature based optimization is proposed. Element grouping as described in literature uses selected voxel sets instead of single voxels to reduce computation time. Elements which achieve their maximum heating potential for approximately the same phase/amplitude setting are grouped. To form groups, eigenvalues and eigenvectors of precomputed temperature matrices are used. At high resolution temperature matrices are unknown and temperatures are estimated using low resolution (1 cm) computations and the high resolution (2 mm) temperature distribution computed for low resolution optimized settings using zooming. This technique can be applied to estimate an upper bound for high resolution eigenvalues. The heating potential of elements was estimated using these upper bounds. Correlations between elements were estimated with low resolution eigenvalues and eigenvectors, since high resolution eigenvectors remain unknown. Four different grouping criteria were applied. Constraints were set to the average group temperatures. Element grouping was applied for five patients and optimal settings for the AMC-8 system were determined. Without element grouping the average computation times for five and ten runs were 7.1 and 14.4 h, respectively. Strict grouping criteria were necessary to prevent an unacceptable exceeding of the normal tissue constraints (up to approximately 2 degrees C), caused by constraining average instead of maximum temperatures. When strict criteria were applied, speed-up factors of 1.8-2.1 and 2.6-3.5 were achieved for five and ten runs, respectively, depending on the grouping criteria. When many runs are performed, the speed-up factor will converge to 4.3-8.5, which is the average reduction factor of the constraints and depends on the grouping criteria. Tumor temperatures were comparable. Maximum exceeding of the constraint in a hot spot was 0.24-0.34 degree C; average maximum exceeding over all five patients was 0.09-0.21 degree C, which is acceptable. High resolution temperature based optimization using element grouping can achieve a speed-up factor of 4-8, without large deviations from the conventional method.
Parameter estimation of qubit states with unknown phase parameter
NASA Astrophysics Data System (ADS)
Suzuki, Jun
2015-02-01
We discuss a problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for the mean square errors (MSEs) when estimating the relevant parameters with separable measurements, based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement which attains the HGM bound and discuss its properties. We show that the HGM bound for the relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.
Risk of Death in Infants Who Have Experienced a Brief Resolved Unexplained Event: A Meta-Analysis.
Brand, Donald A; Fazzari, Melissa J
2018-06-01
To estimate an upper bound on the risk of death after a brief resolved unexplained event (BRUE), a sudden alteration in an infant's breathing, color, tone, or responsiveness, previously labeled "apparent life-threatening event" (ALTE). The meta-analysis incorporated observational studies of patients with ALTE that included data on in-hospital and post-discharge deaths with at least 1 week of follow-up after hospital discharge. Pertinent studies were identified from a published review of the literature from 1970 through 2014 and a supplementary PubMed query through February 2017. The 12 included studies (n = 3005) reported 12 deaths, of which 8 occurred within 4 months of the event. Applying a Poisson-normal random effects model to the 8 proximate deaths using a 4-month time horizon yielded a post-ALTE mortality rate of about 1 in 800, which constitutes an upper bound on the risk of death after a BRUE. This risk is about the same as the baseline risk of death during the first year of life. The meta-analysis therefore supports the return-home approach advocated in a recently published clinical practice guideline-not routine hospitalization-for BRUE patients who have been evaluated in the emergency department and determined to be at lower risk.
Effective elastic moduli of triangular lattice material with defects
NASA Astrophysics Data System (ADS)
Liu, Xiaoyu; Liang, Naigang
2012-10-01
This paper presents an attempt to extend homogenization analysis for the effective elastic moduli of triangular lattice materials with microstructural defects. The proposed homogenization method adopts a process based on homogeneous strain boundary conditions, the micro-scale constitutive law and the micro-to-macro static operator to establish the relationship between the macroscopic properties of a given lattice material to its micro-discrete behaviors and structures. Further, the idea behind Eshelby's equivalent eigenstrain principle is introduced to replace a defect distribution by an imagining displacement field (eigendisplacement) with the equivalent mechanical effect, and the triangular lattice Green's function technique is developed to solve the eigendisplacement field. The proposed method therefore allows handling of different types of microstructural defects as well as its arbitrary spatial distribution within a general and compact framework. Analytical closed-form estimations are derived, in the case of the dilute limit, for all the effective elastic moduli of stretch-dominated triangular lattices containing fractured cell walls and missing cells, respectively. Comparison with numerical results, the Hashin-Shtrikman upper bounds and uniform strain upper bounds are also presented to illustrate the predictive capability of the proposed method for lattice materials. Based on this work, we propose that not only the effective Young's and shear moduli but also the effective Poisson's ratio of triangular lattice materials depend on the number density of fractured cell walls and their spatial arrangements.
Bounds on negativity for the success of quantum teleportation of qutrit-qubit system
NASA Astrophysics Data System (ADS)
K G, Paulson; Satyanarayana, S. V. M.
In the original protocol, Bennett et al. used maximally entangled pure states as the quantum channel to teleport unknown states between distant observers with maximum fidelity. A noisy quantum channel can be used for imperfect teleportation. Both the degree of entanglement and the mixedness decide the success of teleportation in the case of a mixed entangled quantum channel. In one of our previous works, we discussed the existence of a lower bound on the entanglement, for a fixed value of fidelity, below which a state is useless for quantum teleportation, and showed that this lower bound decreases as rank increases for a two-qubit system. We use negativity as the measure of entanglement. In this work, we consider a qutrit-qubit system as the quantum channel for teleportation, and study how the negativity and rank affect the teleportation fidelity for a class of states. We construct a new class of mixed entangled qutrit-qubit states as the quantum channel, which is a convex sum of orthonormal maximally entangled and separable pure states. The classical limit of fidelity below which a state is useless for quantum teleportation is fixed at 2/3. We numerically generate 30,000 states and estimate the value of negativity below which a mixed state of each rank is useless for quantum teleportation. We also construct rank-dependent boundary states by choosing appropriate eigenvalues, which act as upper bounds for states of the respective rank.
Bounds on graviton mass using weak lensing and SZ effect in galaxy clusters
NASA Astrophysics Data System (ADS)
Rana, Akshay; Jain, Deepak; Mahajan, Shobhit; Mukherjee, Amitabha
2018-06-01
In General Relativity (GR), the graviton is massless. However, a common feature in several theoretical alternatives to GR is a non-zero mass for the graviton. These theories can be described as massive gravity theories. Despite many theoretical complexities in these theories, on phenomenological grounds the implications of massive gravity have been widely used to put bounds on the graviton mass. One of the generic implications of giving a mass to the graviton is that the gravitational potential will follow a Yukawa-like fall off. We use this feature of massive gravity theories to probe the mass of the graviton by using the largest gravitationally bound objects, namely galaxy clusters. In this work, we use the mass estimates of galaxy clusters measured at various cosmologically defined radial distances via weak lensing (WL) and the Sunyaev-Zel'dovich (SZ) effect. We also use model independent values of the Hubble parameter H(z) smoothed by a non-parametric method, Gaussian process. Within the 1σ confidence region, we obtain a graviton mass m_g < 5.9 × 10^-30 eV with the corresponding Compton length scale λ_g > 6.82 Mpc from weak lensing, and m_g < 8.31 × 10^-30 eV with λ_g > 5.012 Mpc from the SZ effect. This analysis improves the upper bound on the graviton mass obtained earlier from galaxy clusters.
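As a quick cross-check of the mass-to-length conversion quoted above, a sketch using λ_g = hc/(m_g c²); the constants and rounding below are assumptions, and the SZ case comes out slightly below the quoted 5.012 Mpc, presumably because of rounding of the mass bound.

```python
# Convert a graviton-mass upper bound (in eV) into a lower bound on the Compton
# wavelength lambda_g = h*c / (m_g*c^2), expressed in Mpc.
HC_EV_M = 1.23984193e-6     # h*c in eV*m
MPC_IN_M = 3.0857e22        # metres per megaparsec

def compton_wavelength_mpc(m_ev):
    return HC_EV_M / m_ev / MPC_IN_M

print(compton_wavelength_mpc(5.9e-30))    # ~6.8 Mpc (weak-lensing bound)
print(compton_wavelength_mpc(8.31e-30))   # ~4.8 Mpc (SZ bound)
```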
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berryman, J. G.
While the well-known Voigt and Reuss (VR) bounds, and the Voigt-Reuss-Hill (VRH) elastic constant estimators for random polycrystals, are all straightforwardly calculated once the elastic constants of the anisotropic crystals are known, the Hashin-Shtrikman (HS) bounds and related self-consistent (SC) estimators for the same constants are, by comparison, more difficult to compute. Recent work has shown how to simplify (to some extent) these harder-to-compute HS bounds and SC estimators. An overview and analysis of a subsampling of these results is presented here, with the main point being to show whether or not this extra work (i.e., in calculating both the HS bounds and the SC estimates) provides added value since, in particular, the VRH estimators often do not fall within the HS bounds, while the SC estimators (for good reasons) have always been found to do so. The quantitative differences between the SC and the VRH estimators in the eight cases considered are often quite small, however, being on the order of ±1%. These quantitative results hold true even though these polycrystal Voigt-Reuss-Hill estimators more typically (but not always) fall outside the Hashin-Shtrikman bounds, while the self-consistent estimators always fall inside (or on the boundaries of) these same bounds.
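For concreteness, a minimal sketch of the "easy" part described above: the Voigt (upper), Reuss (lower) and Voigt-Reuss-Hill estimates for a random polycrystal of a cubic crystal, using the standard closed-form expressions; the input elastic constants below are illustrative assumptions, not values from the report.

```python
def vrh_cubic(C11, C12, C44):
    """Voigt, Reuss and Hill shear-modulus estimates (and bulk modulus) for cubic symmetry."""
    K = (C11 + 2.0 * C12) / 3.0                                       # bulk modulus (Voigt = Reuss for cubic)
    Gv = (C11 - C12 + 3.0 * C44) / 5.0                                # Voigt (upper) shear estimate
    Gr = 5.0 * (C11 - C12) * C44 / (4.0 * C44 + 3.0 * (C11 - C12))    # Reuss (lower) shear estimate
    Gh = 0.5 * (Gv + Gr)                                              # Hill average
    return K, Gv, Gr, Gh

# Illustrative single-crystal constants in GPa (hypothetical values)
print(vrh_cubic(C11=230.0, C12=135.0, C44=117.0))
```

The HS bounds and SC estimators discussed in the report require solving additional implicit equations and are not sketched here.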
Saturn's very axisymmetric magnetic field: No detectable secular variation or tilt
NASA Astrophysics Data System (ADS)
Cao, Hao; Russell, Christopher T.; Christensen, Ulrich R.; Dougherty, Michele K.; Burton, Marcia E.
2011-04-01
Saturn is the only planet in the solar system whose observed magnetic field is highly axisymmetric. At least a small deviation from perfect symmetry is required for a dynamo-generated magnetic field. Analyzing more than six years of magnetometer data obtained by Cassini close to the planet, we show that Saturn's observed field is much more axisymmetric than previously thought. We invert the magnetometer observations that were obtained in the "current-free" inner magnetosphere for an internal model, varying the assumed unknown rotation rate of Saturn's deep interior. No unambiguous non-axially symmetric magnetic moment is detected, with a new upper bound on the dipole tilt of 0.06°. An axisymmetric internal model with Schmidt-normalized spherical harmonic coefficients g_1^0 = 21,191 ± 24 nT, g_2^0 = 1586 ± 7 nT, g_3^0 = 2374 ± 47 nT is derived from these measurements; the upper bounds on the axial degree 4 and 5 terms are 720 nT and 3200 nT, respectively. The secular variation for the last 30 years is within the probable error of each term from degree 1 to 3, and the upper bounds are an order of magnitude smaller than the similar terrestrial terms for degrees 1 and 2. Differentially rotating conducting stable layers above Saturn's dynamo region have been proposed to symmetrize the magnetic field (Stevenson, 1982). The new upper bound on the dipole tilt implies that this stable layer must have a thickness L ≥ 4000 km, and this thickness is consistent with our weak secular variation observations.
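As a back-of-the-envelope sketch of how a bound on the non-axisymmetric (equatorial) dipole coefficients translates into a dipole-tilt bound, tilt = arctan(sqrt(g_1^1² + h_1^1²)/g_1^0); the 22 nT figure below is an illustrative assumption consistent with a 0.06° tilt, not a value quoted in the record.

```python
import numpy as np

g10 = 21191.0                      # axial dipole coefficient, nT (from the record)
eq_dipole_bound = 22.0             # assumed bound on sqrt(g11^2 + h11^2), nT (illustrative)
tilt_deg = np.degrees(np.arctan2(eq_dipole_bound, g10))
print(f"dipole tilt bound ~ {tilt_deg:.3f} degrees")   # ~0.06 degrees
```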
Permanent uplift in magmatic systems with application to the Tharsis region of Mars
NASA Astrophysics Data System (ADS)
Phillips, R. J.; Sleep, N. H.; Banerdt, W. B.
1990-04-01
A model is derived for predicting both crustal displacement (leading to permanent uplift) and topographic elevation in regional large-scale magmatic systems associated with partial melting of mantle rocks. The model is then applied to the Tharsis region of Mars to test the uplift versus construction hypotheses. It was found that a lower bound estimate of the fraction of intrusives necessary for any uplift at all is about 85 percent of the total magmatic products at Tharsis. Thus, it is proposed that most of the magmas associated with Tharsis evolution ended up as intrusive bodies in the crust and upper mantle.
Quantum Kronecker sum-product low-density parity-check codes with finite rate
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Pryadko, Leonid P.
2013-07-01
We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.
Association of Many Regions of the Bacillus subtilis Chromosome with the Cell Membrane
Ivarie, Robert D.; Pène, Jacques J.
1973-01-01
Unsheared lysates of Bacillus subtilis 168T− containing uniformly labeled deoxyribonucleic acid (DNA) were exposed to varying doses of gamma rays to introduce double-strand scissions in the chromosome. From an estimate of the number-average molecular weight and the amount of DNA bound to membrane after irradiation, about 70 to 90 regions of the bacterial chromosome were detected in membrane fractions. Since this number was independent of the molecular weight of the DNA (i.e., the extent of fragmentation of the chromosome), it is thought to represent an upper limit in the number of membrane-binding sites per chromosome. PMID:4196245
The dynamic behaviour of data-driven Δ-M and ΔΣ-M in sliding mode control
NASA Astrophysics Data System (ADS)
Almakhles, Dhafer; Swain, Akshya K.; Nasiri, Alireza
2017-11-01
In recent years, delta modulators (Δ-M) and delta-sigma modulators (ΔΣ-M) have increasingly been used as efficient data converters due to the numerous advantages they offer. This paper investigates various dynamical features of these modulators/systems (in both the continuous- and discrete-time domains) and derives their stability conditions using the theory of sliding mode. An upper bound on the hitting time (step) is estimated. The equivalent mode conditions, i.e. the conditions under which the outputs of the modulators are equivalent to their inputs, are established. The results of the analysis are validated through simulations of a numerical example.
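A minimal, illustrative sketch of a first-order discrete-time ΔΣ modulator operating in its tracking (equivalent) mode, not the sliding-mode formulation of the paper; the parameter choices are assumptions.

```python
import numpy as np

def delta_sigma(u):
    """First-order discrete-time delta-sigma modulator with a 1-bit quantizer."""
    state, out = 0.0, []
    for x in u:
        y = 1.0 if state >= 0.0 else -1.0   # 1-bit quantizer output
        state += x - y                       # integrator accumulates the tracking error
        out.append(y)
    return np.array(out)

u = np.full(5000, 0.3)                       # constant input with |u| < 1
y = delta_sigma(u)
print("average of output:", y.mean())        # approaches the input value 0.3
```

The bounded integrator state is what makes the local average of the binary output equal the input, which is the equivalent-mode behaviour the abstract refers to.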
Robust Inference of Risks of Large Portfolios
Fan, Jianqing; Han, Fang; Liu, Han; Vickers, Byron
2016-01-01
We propose a bootstrap-based robust high-confidence level upper bound (Robust H-CLUB) for assessing the risks of large portfolios. The proposed approach exploits rank-based and quantile-based estimators, and can be viewed as a robust extension of the H-CLUB procedure (Fan et al., 2015). Such an extension allows us to handle possibly misspecified models and heavy-tailed data, which are stylized features in financial returns. Under mixing conditions, we analyze the proposed approach and demonstrate its advantage over H-CLUB. We further provide thorough numerical results to back up the developed theory, and also apply the proposed method to analyze a stock market dataset. PMID:27818569
Mercury's helium exosphere after Mariner 10's third encounter
NASA Technical Reports Server (NTRS)
Curtis, S. A.; Hartle, R. E.
1977-01-01
From Mariner 10 third encounter UV data, a value of 0.00045 was calculated as the fraction of the solar wind He++ flux intercepted and captured by Mercury's magnetosphere if the observed He atmosphere is maintained by the solar wind. If an internal source for He prevails, the corresponding upper bound for the global outgassing rate is estimated to be 4.5 × 10^22 per second. A surface temperature distribution was used which satisfies the heat equation over Mercury's entire surface, using Mariner 10 determined mean surface thermal characteristics. The mean standoff distance of Mercury's magnetopause averaged over Mercury's orbit was also used.
Permanent uplift in magmatic systems with application to the Tharsis region of Mars
NASA Technical Reports Server (NTRS)
Phillips, Roger J.; Sleep, Norman H.; Banerdt, W. Bruce
1990-01-01
A model is derived for predicting both crustal displacement (leading to permanent uplift) and topographic elevation in regional large-scale magmatic systems associated with partial melting of mantle rocks. The model is then applied to the Tharsis region of Mars to test the uplift versus construction hypotheses. It was found that a lower bound estimate of the fraction of intrusives necessary for any uplift at all is about 85 percent of the total magmatic products at Tharsis. Thus, it is proposed that most of the magmas associated with Tharsis evolution ended up as intrusive bodies in the crust and upper mantle.
NASA Astrophysics Data System (ADS)
Daniell, James; Pomonis, Antonios; Gunasekera, Rashmin; Ishizawa, Oscar; Gaspari, Maria; Lu, Xijie; Aubrecht, Christoph; Ungar, Joachim
2017-04-01
In order to quantify disaster risk, there is a demand and need for determining consistent and reliable economic values of built assets exposed to natural hazards at the national or sub-national level. The value of the built stock in the context of a city or a country is critical for risk modelling applications, as it allows the upper bound on potential losses to be established. Under the World Bank probabilistic disaster risk assessment - Country Disaster Risk Profiles (CDRP) Program and rapid post-disaster loss analyses in CATDAT, key methodologies have been developed that quantify the asset exposure of a country. In this study, we assess two complementary methods: determining the value of the building stock through capital investment data versus aggregated ground-up values based on built area and unit cost of construction analyses. Different approaches to modelling exposure around the world have resulted in estimated values of built assets of some countries differing by order(s) of magnitude. Using the aforementioned methodology of comparing investment-based capital stock and bottom-up unit-cost-of-construction values per square meter of assets, a suitable range of capital stock estimates for built assets has been created. A blind test format was undertaken to compare the two types of approaches, top-down (investment) and bottom-up (construction cost per unit). In many cases, census, demographic, engineering and construction cost data are key for the bottom-up calculations from previous years. Similarly, for the top-down investment approach, distributed GFCF (Gross Fixed Capital Formation) data is also required. Over the past few years, numerous studies have been undertaken through the World Bank Caribbean and Central America disaster risk assessment program adopting this methodology, initially developed by Gunasekera et al. (2015). The range of values of the building stock is tested for around 15 countries. In addition, three types of costs - Reconstruction cost (building back to the standard required by building codes), Replacement cost (gross capital stock) and Book value (net capital stock - the depreciated value of assets) - are discussed and the differences in methodologies assessed. We then examine historical costs (reconstruction and replacement) and losses (book value) of natural disasters versus this upper bound of capital stock in various locations to examine the impact of a reasonable capital stock estimate. It is found that some historic loss estimates in publications are not reasonable given the value of assets at the time of the event. This has applications for quantitative disaster risk assessment and the development of country disaster risk profiles, economic analyses and benchmarking upper loss limits of built assets damaged due to natural hazards.
Jarzynski equality: connections to thermodynamics and the second law.
Palmieri, Benoit; Ronis, David
2007-01-01
The one-dimensional expanding ideal gas model is used to compute the exact nonequilibrium distribution function. The state of the system during the expansion is defined in terms of local thermodynamics quantities. The final equilibrium free energy, obtained a long time after the expansion, is compared against the free energy that appears in the Jarzynski equality. Within this model, where the Jarzynski equality holds rigorously, the free energy change that appears in the equality does not equal the actual free energy change of the system at any time of the process. More generally, the work bound that is obtained from the Jarzynski equality is an upper bound to the upper bound that is obtained from the first and second laws of thermodynamics. The cancellation of the dissipative (nonequilibrium) terms that result in the Jarzynski equality is shown in the framework of response theory. This is used to show that the intuitive assumption that the Jarzynski work bound becomes equal to the average work done when the system evolves quasistatically is incorrect under some conditions.
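For reference, the equality discussed above and the work bound it implies (via Jensen's inequality) can be written as follows, with β = 1/k_B T and ΔF the free energy difference between the final and initial equilibrium states:

```latex
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}
\qquad\Longrightarrow\qquad
\langle W \rangle \ \ge\ \Delta F .
```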
More on the decoder error probability for Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1987-01-01
The decoder error probability for Reed-Solomon codes (more generally, linear maximum distance separable codes) is examined. McEliece and Swanson offered an upper bound on P_E(u), the decoder error probability given that u symbol errors occur. This upper bound is slightly greater than Q, the probability that a completely random error pattern will cause decoder error. By using a combinatoric technique, the principle of inclusion and exclusion, an exact formula for P_E(u) is derived. The P_E(u) for the (255, 223) Reed-Solomon code used by NASA, and for the (31, 15) Reed-Solomon code (JTIDS code), are calculated using the exact formula, and the P_E(u)'s are observed to approach the Q's of the codes rapidly as u gets larger. An upper bound for the expression is derived and is shown to decrease nearly exponentially as u increases. This proves analytically that P_E(u) indeed approaches Q as u becomes large, and some laws of large numbers come into play.
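As a small illustration of the quantity Q referred to above, a sketch assuming the standard sphere-counting formula for an (n, k) MDS code over GF(q) with packing radius t = (n-k)/2; this is not the exact P_E(u) formula of the report.

```python
from math import comb

def q_mds(n, k, q):
    """Probability that a uniformly random word lies within distance t of some codeword."""
    t = (n - k) // 2
    sphere_volume = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
    return sphere_volume / q ** (n - k)

print(q_mds(255, 223, 256))   # the NASA (255, 223) Reed-Solomon code
print(q_mds(31, 15, 32))      # the (31, 15) Reed-Solomon (JTIDS) code
```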
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Toomey, Bridget
Evolving power systems with increasing levels of stochasticity call for a need to solve optimal power flow problems with large quantities of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
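For context, the classical union-bound (Boole) argument that the paper tightens can be summarized as follows; here g_i are the constraint functions, ξ the random variables, x the decision variables, and ε the joint violation budget (notation assumed for this sketch, not taken from the paper):

```latex
P\!\left(\bigcup_{i=1}^{m}\{g_i(x,\xi)>0\}\right)\;\le\;\sum_{i=1}^{m} P\big(g_i(x,\xi)>0\big),
\qquad\text{so enforcing}\quad
P\big(g_i(x,\xi)>0\big)\le\varepsilon_i,\quad \sum_{i=1}^{m}\varepsilon_i\le\varepsilon
\quad\text{guarantees}\quad
P\!\left(\bigcap_{i=1}^{m}\{g_i(x,\xi)\le 0\}\right)\ge 1-\varepsilon .
```

Because the union bound ignores correlations between constraint violations, it is conservative, which is what motivates the tighter bound developed in the paper.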
Lorenz curves in a new science-funding model
NASA Astrophysics Data System (ADS)
Huang, Ding-wei
2017-12-01
We propose an agent-based model to theoretically and systematically explore the implications of a new approach to funding science, which has been suggested recently by J. Bollen et al. We introduce various parameters and examine their effects. The concentration of funding is shown by the Lorenz curve and the Gini coefficient. In this model, all scientists are treated equally and follow the well-intended regulations. All scientists give a fixed ratio of their funding to others. The fixed ratio becomes an upper bound for the Gini coefficient. We observe two distinct regimes in the parameter space: valley and plateau. In the valley regime, the fluidity of funding is significant. The Lorenz curve is smooth. The Gini coefficient is well below the upper bound. The funding distribution is the desired result. In the plateau regime, cumulative advantage is significant. The Lorenz curve has a sharp turn. The Gini coefficient saturates to the upper bound. The undue concentration of funding happens swiftly. The funding distribution is the undesired result, where a minority of scientists take the majority of funding. Phase transitions between these two regimes are discussed.
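As a small, self-contained illustration of the Gini coefficient used above (the standard discrete formula; the lognormal funding distribution is an assumption for the example, not an output of the paper's model):

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative sample, via the standard sorted-rank formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

rng = np.random.default_rng(0)
funding = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # illustrative skewed distribution
print(gini(funding))   # roughly 0.52 for a lognormal with sigma = 1
```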
Expected performance of m-solution backtracking
NASA Technical Reports Server (NTRS)
Nicol, D. M.
1986-01-01
This paper derives upper bounds on the expected number of search tree nodes visited during an m-solution backtracking search, a search which terminates after some preselected number m of problem solutions has been found. The search behavior is assumed to have a general probabilistic structure. The results are stated in terms of node expansion and contraction. A visited search tree node is said to be expanding if the mean number of its children visited by the search exceeds 1, and contracting otherwise. It is shown that if every node expands, or if every node contracts, then the number of search tree nodes visited by a search has an upper bound which is linear in the depth of the tree, in the mean number of children a node has, and in the number of solutions sought. Also derived are bounds linear in the depth of the tree in some situations where an upper portion of the tree contracts (expands), while the lower portion expands (contracts). While previous analyses of 1-solution backtracking have concluded that the expected performance is always linear in the tree depth, the present model allows superlinear expected performance.
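The following toy simulation, with hypothetical branching and solution probabilities, makes the quantity being bounded concrete: a depth-first search over a random tree that stops as soon as m solutions have been found, counting the nodes visited. It is only an illustration and does not reproduce the paper's probabilistic model.

```python
import random

def m_solution_backtrack(depth, branching, p_child, p_solution, m, rng):
    """Depth-first search of a random tree; stop after m solutions.
    Each potential child is present independently with probability p_child;
    a node at maximum depth is a solution with probability p_solution.
    Returns (nodes_visited, solutions_found)."""
    visited = 0
    found = 0

    def dfs(d):
        nonlocal visited, found
        if found >= m:
            return
        visited += 1
        if d == depth:
            if rng.random() < p_solution:
                found += 1
            return
        for _ in range(branching):
            if found >= m:
                return
            if rng.random() < p_child:
                dfs(d + 1)

    dfs(0)
    return visited, found

rng = random.Random(1)
trials = [m_solution_backtrack(depth=12, branching=3, p_child=0.6,
                               p_solution=0.2, m=5, rng=rng) for _ in range(1000)]
mean_visited = sum(v for v, _ in trials) / len(trials)
print("mean nodes visited over 1000 random trees:", mean_visited)
```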
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1990-01-01
An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d sub (free) of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d sub (free) growth rate, respectively, can be obtained from the cutoff rate R sub 0 of the transmission channel by a simple geometric construction, making R sub 0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.
Attractors of three-dimensional fast-rotating Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Trahe, Markus
The three-dimensional (3-D) rotating Navier-Stokes equations describe the dynamics of rotating, incompressible, viscous fluids. In this work, they are considered with smooth, time-independent forces, and the original statements implied by the classical "Taylor-Proudman Theorem" of geophysics are rigorously proved. It is shown that fully developed turbulence of 3-D fast-rotating fluids is essentially characterized by turbulence of two-dimensional (2-D) fluids in terms of numbers of degrees of freedom. In this context, the 3-D nonlinear "resonant limit equations", which arise in a non-linear averaging process as the rotation frequency Ω → infinity, are studied, and optimal (2-D-type) upper bounds for fractal box and Hausdorff dimensions of the global attractor as well as upper bounds for box dimensions of exponential attractors are determined. Then, the convergence of exponential attractors for the full 3-D rotating Navier-Stokes equations to exponential attractors for the resonant limit equations as Ω → infinity in the sense of full Hausdorff-metric distances is established. This provides upper and lower semi-continuity of exponential attractors with respect to the rotation frequency and implies that the number of degrees of freedom (attractor dimension) of 3-D fast-rotating fluids is close to that of 2-D fluids. Finally, the algebraic-geometric structure of the Poincaré curves, which control the resonances and small divisor estimates for partial differential equations, is further investigated; the 3-D nonlinear limit resonant operators are characterized by three-wave interactions governed by these curves. A new canonical transformation between those curves is constructed, with far-reaching consequences for the density of the latter.
Neutron Electric Dipole Moment and Tensor Charges from Lattice QCD.
Bhattacharya, Tanmoy; Cirigliano, Vincenzo; Gupta, Rajan; Lin, Huey-Wen; Yoon, Boram
2015-11-20
We present lattice QCD results on the neutron tensor charges including, for the first time, a simultaneous extrapolation in the lattice spacing, volume, and light quark masses to the physical point in the continuum limit. We find that the "disconnected" contribution is smaller than the statistical error in the "connected" contribution. Our estimates in the modified minimal subtraction scheme at 2 GeV, including all systematics, are g_{T}^{d-u}=1.020(76), g_{T}^{d}=0.774(66), g_{T}^{u}=-0.233(28), and g_{T}^{s}=0.008(9). The flavor diagonal charges determine the size of the neutron electric dipole moment (EDM) induced by quark EDMs that are generated in many new scenarios of CP violation beyond the standard model. We use our results to derive model-independent bounds on the EDMs of light quarks and update the EDM phenomenology in split supersymmetry with gaugino mass unification, finding a stringent upper bound of d_{n}<4×10^{-28} e cm for the neutron EDM in this scenario.
The Energy Measure for the Euler and Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Leslie, Trevor M.; Shvydkoy, Roman
2018-04-01
The potential failure of energy equality for a solution u of the Euler or Navier-Stokes equations can be quantified using a so-called `energy measure': the weak-* limit of the measures |u(t)|^2 dx as t approaches the first possible blowup time. We show that membership of u in certain (weak or strong) L^q L^p classes gives a uniform lower bound on the lower local dimension of E; more precisely, it implies uniform boundedness of a certain upper s-density of E. We also define and give lower bounds on the `concentration dimension' associated to E, which is the Hausdorff dimension of the smallest set on which energy can concentrate. Both the lower local dimension and the concentration dimension of E measure the departure from energy equality. As an application of our estimates, we prove that any solution to the 3-dimensional Navier-Stokes equations which is Type I in time must satisfy the energy equality at the first blowup time.
Hurricane destructive power predictions based on historical storm and sea surface temperature data.
Bogen, Kenneth T; Jones, Edwin D; Fischer, Larry E
2007-12-01
Forecasting destructive hurricane potential is complicated by substantial, unexplained intraannual variation in storm-specific power dissipation index (PDI, or integrated third power of wind speed), and interannual variation in annual accumulated PDI (APDI). A growing controversy concerns the recent hypothesis that the clearly positive trend in North Atlantic Ocean (NAO) sea surface temperature (SST) since 1970 explains increased hurricane intensities over this period, and so implies ominous PDI and APDI growth as global warming continues. To test this "SST hypothesis" and examine its quantitative implications, a combination of statistical and probabilistic methods was applied to National Hurricane Center HURDAT best-track data on NAO hurricanes during 1880-2002, and corresponding National Oceanic and Atmospheric Administration Extended Reconstruction SST estimates. Notably, hurricane behavior was compared to corresponding hurricane-specific (i.e., spatiotemporally linked) SST; previous similar comparisons considered only SST averaged over large NAO regions. Contrary to the SST hypothesis, SST was found to vary in a monthly pattern inconsistent with that of corresponding PDI, and to be at best weakly associated with PDI or APDI despite strong correlation with corresponding mean latitude (R^2 = 0.55) or with combined mean location and an approximately 90-year periodic trend (R^2 = 0.70). Over the last century, the lower 75% of APDIs appear randomly sampled from a nearly uniform distribution, and the upper 25% of APDIs from a nearly lognormal distribution. From the latter distribution, a baseline (SST-independent) stochastic model was derived predicting that over the next half century, APDI will not likely exceed its maximum value over the last half century by more than a factor of 1.5. This factor increased to 2 using a baseline model modified to assume SST-dependence conditioned on an upper bound of the increasing NAO SST trend observed since 1970. An additional model was developed that predicts PDI statistics conditional on APDI. These PDI and APDI models can be used to estimate upper bounds on indices of hurricane power likely to be realized over the next century, under divergent assumptions regarding SST influence.
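For concreteness, a storm's PDI is commonly computed as the time integral of the cube of the maximum sustained wind speed over the storm's lifetime, with best-track winds sampled every 6 hours; APDI is then the sum of storm PDIs over a season. The sketch below assumes that convention and uses a made-up wind record.

```python
import numpy as np

def power_dissipation_index(max_wind_ms, dt_hours: float = 6.0) -> float:
    """Storm PDI: integral of the cube of the maximum sustained wind speed (m/s)
    over the storm's lifetime, using best-track samples spaced dt_hours apart."""
    dt_seconds = dt_hours * 3600.0
    return float(np.sum(np.asarray(max_wind_ms, dtype=float) ** 3) * dt_seconds)

# Hypothetical 6-hourly best-track wind speeds (m/s) for one storm
winds = np.array([18, 23, 28, 33, 41, 49, 54, 51, 44, 36, 28, 21], dtype=float)
pdi = power_dissipation_index(winds)
print(f"storm PDI ≈ {pdi:.3e} m^3 s^-2")
# Annual APDI would be the sum of such storm PDIs over all storms in a season.
```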
RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection
Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.
2015-01-01
Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.
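A heavily simplified sketch of the idea behind a fully randomized space tree: recursive random splits within the current attribute ranges, a piecewise constant density on each leaf (training count divided by N times leaf volume), and scores averaged over trees. It omits the streaming update, attribute-range estimation, and dual node profiles of RS-Forest, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_tree(lo, hi, idx, X, depth, max_depth):
    """One fully randomized space tree: split a random attribute at a random
    point inside the node's current range until max_depth or the node is empty."""
    if depth == max_depth or len(idx) == 0:
        volume = float(np.prod(hi - lo))
        return {"leaf": True, "density": len(idx) / (len(X) * max(volume, 1e-12))}
    d = rng.integers(len(lo))
    cut = rng.uniform(lo[d], hi[d])
    left, right = idx[X[idx, d] <= cut], idx[X[idx, d] > cut]
    lo_l, hi_l = lo.copy(), hi.copy(); hi_l[d] = cut
    lo_r, hi_r = lo.copy(), hi.copy(); lo_r[d] = cut
    return {"leaf": False, "dim": d, "cut": cut,
            "l": build_tree(lo_l, hi_l, left, X, depth + 1, max_depth),
            "r": build_tree(lo_r, hi_r, right, X, depth + 1, max_depth)}

def tree_density(node, x):
    while not node["leaf"]:
        node = node["l"] if x[node["dim"]] <= node["cut"] else node["r"]
    return node["density"]

# Train a small forest on 2-D "normal" data, then score a typical and an outlying point.
X = rng.normal(0.0, 1.0, size=(2000, 2))
lo, hi = X.min(axis=0), X.max(axis=0)
forest = [build_tree(lo.copy(), hi.copy(), np.arange(len(X)), X, 0, max_depth=8)
          for _ in range(25)]
for point in (np.array([0.1, -0.2]), np.array([4.0, 4.0])):
    score = np.mean([tree_density(t, point) for t in forest])
    print(point, "average density score:", score)   # low score suggests an anomaly
```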
Reconstruction of financial networks for robust estimation of systemic risk
NASA Astrophysics Data System (ADS)
Mastromatteo, Iacopo; Zarinelli, Elia; Marsili, Matteo
2012-03-01
In this paper we estimate the propagation of liquidity shocks through interbank markets when the information about the underlying credit network is incomplete. We show that techniques such as maximum entropy currently used to reconstruct credit networks severely underestimate the risk of contagion by assuming a trivial (fully connected) topology, a type of network structure which can be very different from the one empirically observed. We propose an efficient message-passing algorithm to explore the space of possible network structures and show that a correct estimation of the network degree of connectedness leads to more reliable estimations for systemic risk. Such an algorithm is also able to produce maximally fragile structures, providing a practical upper bound for the risk of contagion when the actual network structure is unknown. We test our algorithm on ensembles of synthetic data encoding some features of real financial networks (sparsity and heterogeneity), finding that more accurate estimations of risk can be achieved. Finally we find that this algorithm can be used to control the amount of information that regulators need to require from banks in order to sufficiently constrain the reconstruction of financial networks.
Feischl, Michael; Gantner, Gregor; Praetorius, Dirk
2015-01-01
We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence.
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in substantially more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
Age determination by spheno-occipital synchondrosis fusion in Central Indian population.
Pate, Rajeshwar Sambhaji; Tingne, Chaitanya Vidyadhar; Dixit, Pradeep Gangadhar
2018-02-01
The spheno-occipital synchondrosis is a vital contributor to adolescent and adult age estimation in that it can provide an upper or lower age bound depending on its state of fusion. The present study evaluates the utility of spheno-occipital suture fusion in age estimation of the Central Indian population. The sample includes 198 cadavers (117 males and 81 females) aged between 8 and 26 years. Grading was done using the Mitra-Akhlaghi scale as open, semi-closed, or closed. Our study demonstrates that a significant linear correlation exists between the age of an individual and spheno-occipital suture closure for both sexes, and that observation of the degree of fusion of this single suture allows the prediction of age in mature individuals. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
NASA Astrophysics Data System (ADS)
Dong, Yuan; Li, Qian P.; Wu, Zhengchao; Zhang, Jia-Zhong
2016-12-01
Export fluxes of phosphorus (P) by sinking particles are important in studying ocean biogeochemical dynamics, whereas their composition and temporal variability are still inadequately understood in the global oceans, including the northern South China Sea (NSCS). A time-series study of particle fluxes was conducted at a mooring station adjacent to the Xisha Trough in the NSCS from September 2012 to September 2014, with sinking particles collected every two weeks by two sediment traps deployed at 500 m and 1500 m depths. Five operationally defined particulate P classes of sinking particles, including loosely-bound P, Fe-bound P, CaCO3-bound P, detrital apatite P, and refractory organic P, were quantified by a sequential extraction method (SEDEX). Our results revealed substantial variability in sinking particulate P composition at the Xisha over two years of sampling. Particulate inorganic P was largely contributed by Fe-bound P in the upper trap, but by detrital P in the lower trap. Particulate organic P, including exchangeable organic P, CaCO3-bound organic P, and refractory organic P, contributed up to 50-55% of total sinking particulate P. The increase of CaCO3-bound P in the upper trap during 2014 could be related to a strong El Niño event with enhanced CaCO3 deposition. We also found sediment resuspension responsible for the unusually high particle fluxes at the lower trap based on analyses of a two-component mixing model. The average total mass flux at the upper trap during the study period was 78±50 mg m^-2 d^-1. A significant correlation between integrated primary productivity in the region and particle fluxes at 500 m of the station suggested the important role of biological production in controlling the concentration, composition, and export fluxes of sinking particulate P in the NSCS.
Performance analysis for minimally nonlinear irreversible refrigerators at finite cooling power
NASA Astrophysics Data System (ADS)
Long, Rui; Liu, Zhichun; Liu, Wei
2018-04-01
The coefficient of performance (COP) for general refrigerators at finite cooling power has been systematically investigated through the minimally nonlinear irreversible model, and its lower and upper bounds in different operating regions have been proposed. Under the tight coupling conditions, we have calculated the universal COP bounds under the χ figure of merit in different operating regions. When the refrigerator operates in the region with lower external flux, we obtained the general bound 0 < ε < (√(9 + 8ε_C) − 3)/2 under the χ figure of merit. We have also calculated the universal bounds for the maximum gain in COP under different operating regions to give further insight into the COP gain when the cooling power is away from its maximum. When the refrigerator operates in the region located between maximum cooling power and maximum COP with lower external flux, the upper bound for COP and the lower bound for the relative gain in COP present large values, compared to a relatively small loss from the maximum cooling power. If the cooling power is the main objective, it is desirable to operate the refrigerator at a cooling power slightly below the maximum, where a small loss in cooling power induces a much larger COP enhancement.
Abbas, Ash Mohammad
2012-01-01
In this paper, we describe some bounds and inequalities relating h-index, g-index, e-index, and generalized impact factor. We derive the bounds and inequalities relating these indexing parameters from their basic definitions and without assuming any continuous model to be followed by any of them. We verify the theorems using citation data for five Price Medalists. We observe that the lower bound for the h-index given by Theorem 2, [formula: see text], g ≥ 1, comes out to be more accurate as compared to the Schubert-Glanzel relation h ∝ C^(2/3)P^(-1/3) for a proportionality constant of 1, where C is the number of citations and P is the number of papers referenced. Also, the values of h-index obtained using Theorem 2 outperform those obtained using the Egghe-Liang-Rousseau power law model for the given citation data of Price Medalists. Further, we computed the values of the upper bound on g-index given by Theorem 3, g ≤ (h + e), where e denotes the value of the e-index. We observe that the upper bound on g-index given by Theorem 3 is reasonably tight for the given citation record of Price Medalists.
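For reference, the sketch below computes the three indices from a hypothetical citation record and checks the g ≤ h + e bound, using the common definitions (h: Hirsch index; e² = citations in the h-core beyond h²; g: largest g with at least g² citations in the top g papers). These definitions are assumptions on my part and may differ in detail from the paper's formulations.

```python
import math

def h_index(citations):
    c = sorted(citations, reverse=True)
    return sum(1 for i, ci in enumerate(c, start=1) if ci >= i)

def e_index(citations):
    c = sorted(citations, reverse=True)
    h = h_index(citations)
    excess = sum(c[:h]) - h * h          # citations in the h-core beyond h^2
    return math.sqrt(max(excess, 0))

def g_index(citations):
    c = sorted(citations, reverse=True)
    cum, g = 0, 0
    for i, ci in enumerate(c, start=1):
        cum += ci
        if cum >= i * i:
            g = i
    return g

# Hypothetical citation record
record = [120, 80, 45, 33, 20, 18, 12, 9, 6, 4, 3, 1, 0]
h, e, g = h_index(record), e_index(record), g_index(record)
print(f"h = {h}, e = {e:.2f}, g = {g}, upper bound h + e = {h + e:.2f}")
```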
Reverse preferential spread in complex networks
NASA Astrophysics Data System (ADS)
Toyoizumi, Hiroshi; Tani, Seiichi; Miyoshi, Naoto; Okamoto, Yoshio
2012-08-01
Large-degree nodes may have a larger influence on the network, but they can be bottlenecks for spreading information since spreading attempts tend to concentrate on these nodes and become redundant. We argue that the reverse preferential spread (distributing information inversely proportional to the degree of the receiving node) has an advantage over other spread mechanisms. In large uncorrelated networks, we show that the mean number of nodes that receive information under the reverse preferential spread is an upper bound among all weight-based spread mechanisms, and this upper bound is indeed a logistic growth independent of the degree distribution.
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Tight upper bound for the maximal quantum value of the Svetlichny operators
NASA Astrophysics Data System (ADS)
Li, Ming; Shen, Shuqian; Jing, Naihuan; Fei, Shao-Ming; Li-Jost, Xianqing
2017-10-01
It is a challenging task to detect genuine multipartite nonlocality (GMNL). In this paper, the problem is considered via computing the maximal quantum value of Svetlichny operators for three-qubit systems and a tight upper bound is obtained. The constraints on the quantum states for the tightness of the bound are also presented. The approach enables us to give the necessary and sufficient conditions of violating the Svetlichny inequality (SI) for several quantum states, including the white and color noised Greenberger-Horne-Zeilinger (GHZ) states. The relation between the genuine multipartite entanglement concurrence and the maximal quantum value of the Svetlichny operators for mixed GHZ class states is also discussed. As the SI is useful for the investigation of GMNL, our results give an effective and operational method to detect the GMNL for three-qubit mixed states.
Vacuum stability in the U(1)χ extended model with vanishing scalar potential at the Planck scale
NASA Astrophysics Data System (ADS)
Haba, Naoyuki; Yamaguchi, Yuya
2015-09-01
We investigate the vacuum stability in a scale-invariant local U(1)_χ model with vanishing scalar potential at the Planck scale. We find that it is impossible to realize the Higgs mass of 125 GeV while keeping the Higgs quartic coupling λ_H positive at all energy scales, that is, the same as in the standard model. Once one allows λ_H < 0, the lower bounds on the Z' boson mass are obtained through the positive definiteness of the scalar mass squared eigenvalues, while these bounds are smaller than the LHC bounds. On the other hand, the upper bounds strongly depend on the number of relevant Majorana Yukawa couplings of the right-handed neutrinos N_ν. Considering decoupling effects of the Z' boson and the right-handed neutrinos, the condition of the singlet scalar quartic coupling λ_φ > 0 gives the upper bound in the N_ν = 1 case, while it does not constrain the N_ν = 2 and 3 cases. In particular, we find that the Z' boson mass is tightly restricted for the N_ν = 1 case as M_{Z'} ≲ 3.7 TeV.
NASA Astrophysics Data System (ADS)
Lee, Harry; Wen, Baole; Doering, Charles
2017-11-01
The rate of viscous energy dissipation ɛ in incompressible Newtonian planar Couette flow (a horizontal shear layer) subject to uniform boundary injection and suction is studied numerically. Specifically, fluid is steadily injected through the top plate at a constant rate and a constant angle of injection, and the same amount of fluid is sucked out vertically through the bottom plate at the same rate. This set-up leads to two control parameters, namely the angle of injection, θ, and the Reynolds number of the horizontal shear flow, Re. We numerically implement the `background field' variational problem formulated by Constantin and Doering with a one-dimensional unidirectional background field ϕ(z), where z is the coordinate normal to the plates. Computation is carried out at various levels of Re with θ = 0, 0.1°, 1°, and 2°, respectively. The computed upper bounds on ɛ scale like Re^0 for Re > 20,000 at each fixed θ, which agrees with Kolmogorov's hypothesis on isotropic turbulence. The outcome provides new upper bounds on ɛ valid for any solution of the underlying Navier-Stokes equations, and they are sharper than the analytical bounds presented in Doering et al. (2000). This research was partially supported by the NSF Award DMS-1515161, and the University of Michigan's Rackham Graduate Student Research Grant.
N = 4 superconformal bootstrap of the K3 CFT
Lin, Ying-Hsuan; Shao, Shu-Heng; Simmons-Duffin, David; ...
2017-05-23
We study two-dimensional (4, 4) superconformal field theories of central charge c = 6, corresponding to nonlinear sigma models on K3 surfaces, using the superconformal bootstrap. This is made possible through a surprising relation between the BPS N = 4 superconformal blocks with c = 6 and bosonic Virasoro conformal blocks with c = 28, and an exact result on the moduli dependence of a certain integrated BPS 4-point function. Nontrivial bounds on the non-BPS spectrum in the K3 CFT are obtained as functions of the CFT moduli, that interpolate between the free orbifold points and singular CFT points. We observe directly from the CFT perspective the signature of a continuous spectrum above a gap at the singular moduli, and find numerically an upper bound on this gap that is saturated by the A1 N = 4 cigar CFT. We also derive an analytic upper bound on the first nonzero eigenvalue of the scalar Laplacian on K3 in the large volume regime, that depends on the K3 moduli data. As two byproducts, we find an exact equivalence between a class of BPS N = 2 superconformal blocks and Virasoro conformal blocks in two dimensions, and an upper bound on the four-point functions of operators of sufficiently low scaling dimension in three and four dimensional CFTs.
Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection
NASA Astrophysics Data System (ADS)
Denuit, Michel; Dhaene, Jan
2007-06-01
In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.
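A minimal simulation sketch of the comonotonic upper bound in the convex (stop-loss) order: replacing dependent terms by their quantile functions evaluated at a single common uniform (here, lognormal terms driven by one shared normal draw) can only increase stop-loss premiums E[(S − d)+]. The lognormal marginals and retention level are hypothetical and unrelated to the Lee-Carter application.

```python
import numpy as np

rng = np.random.default_rng(42)
n_terms, n_sims, retention = 10, 200_000, 12.0
mus = np.linspace(-0.1, 0.3, n_terms)
sigmas = np.full(n_terms, 0.4)

# Sum of independent lognormal terms
indep = sum(rng.lognormal(mu, s, n_sims) for mu, s in zip(mus, sigmas))

# Comonotonic counterpart: every term is an increasing function of the SAME
# standard normal draw, i.e. the sum of marginal quantile functions at one U.
z = rng.standard_normal(n_sims)
comon = sum(np.exp(mu + s * z) for mu, s in zip(mus, sigmas))

def stop_loss(samples, d):
    """Empirical stop-loss premium E[(S - d)+]."""
    return float(np.mean(np.maximum(samples - d, 0.0)))

print("independent  E[(S - d)+] ≈", stop_loss(indep, retention))
print("comonotonic  E[(S - d)+] ≈", stop_loss(comon, retention))  # larger: upper bound
```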
Space radiation risks for astronauts on multiple International Space Station missions.
Cucinotta, Francis A
2014-01-01
Mortality and morbidity risks from space radiation exposure are an important concern for astronauts participating in International Space Station (ISS) missions. NASA's radiation limits set a 3% cancer fatality probability as the upper bound of acceptable risk and considers uncertainties in risk predictions using the upper 95% confidence level (CL) of the assessment. In addition to risk limitation, an important question arises as to the likelihood of a causal association between a crew-members' radiation exposure in the past and a diagnosis of cancer. For the first time, we report on predictions of age and sex specific cancer risks, expected years of life-loss for specific diseases, and probability of causation (PC) at different post-mission times for participants in 1-year or multiple ISS missions. Risk projections with uncertainty estimates are within NASA acceptable radiation standards for mission lengths of 1-year or less for likely crew demographics. However, for solar minimum conditions upper 95% CL exceed 3% risk of exposure induced death (REID) by 18 months or 24 months for females and males, respectively. Median PC and upper 95%-confidence intervals are found to exceed 50% for several cancers for participation in two or more ISS missions of 18 months or longer total duration near solar minimum, or for longer ISS missions at other phases of the solar cycle. However, current risk models only consider estimates of quantitative differences between high and low linear energy transfer (LET) radiation. We also make predictions of risk and uncertainties that would result from an increase in tumor lethality for highly ionizing radiation reported in animal studies, and the additional risks from circulatory diseases. These additional concerns could further reduce the maximum duration of ISS missions within acceptable risk levels, and will require new knowledge to properly evaluate.
Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds
NASA Astrophysics Data System (ADS)
Abdo, Mohammad Gamal Mohammad Mostafa
This thesis develops robust reduced order modeling (ROM) techniques to achieve the needed efficiency to render feasible the use of high fidelity tools for routine engineering analyses. Markedly different from the state-of-the-art ROM techniques, our work focuses only on techniques which can quantify the credibility of the reduction which can be measured with the reduction errors upper-bounded for the envisaged range of ROM model application. Our objective is two-fold. First, further developments of ROM techniques are proposed when conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the original model and ROM model predictions over the full range of model application conditions are upper-bounded in a probabilistic sense with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction, because it offers a rigorous approach by which reduction errors can be quantified via upper-bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that the surrogate model predictions at points not included in the training process will be bound by the error estimated from the fitting residual. Dimensionality reduction techniques however employ a different philosophy to render the reduction, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower dimensional subspaces, referred to as the "active subspaces", which are selected to capture a user-defined portion of the snapshots variations. Once determined, the ROM model application involves constraining the variables to the active subspaces. In doing so, the contribution from the variables discarded components can be estimated using a fundamental theorem from random matrix theory which has its roots in Dixon's theory, developed in 1983. This theory was initially presented for linear matrix operators. The thesis extends this theorem's results to allow reduction of general smooth nonlinear operators. The result is an approach by which the adequacy of a given active subspace determined using a given set of snapshots, generated either using the full high fidelity model, or other models with lower fidelity, can be assessed, which provides insight to the analyst on the type of snapshots required to reach a reduction that can satisfy user-defined preset tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus will be on reducing the effective dimensionality of the various data streams such as the cross-section data and the neutron flux. 
The developed methods will be applied to representative assembly-level calculations, where the sizes of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schafer, Annette L.; Brown, Lloyd C.; Carathers, David C.
2014-02-01
This document contains the analysis details and summary of analyses conducted to evaluate the environmental impacts for the Resumption of Transient Fuel and Materials Testing Program. It provides an assessment of the impacts for the two action alternatives being evaluated in the environmental assessment. These alternatives are (1) resumption of transient testing using the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory (INL) and (2) conducting transient testing using the Annular Core Research Reactor (ACRR) at Sandia National Laboratory in New Mexico (SNL/NM). Analyses are provided for radiologic emissions, other air emissions, soil contamination, and groundwater contamination that could occur (1) during normal operations, (2) as a result of accidents in one of the facilities, and (3) during transport. It does not include an assessment of the biotic, cultural resources, waste generation, or other impacts that could result from the resumption of transient testing. Analyses were conducted by technical professionals at INL and SNL/NM as noted throughout this report. The analyses are based on bounding radionuclide inventories, with the same inventories used for test materials by both alternatives and different inventories for the TREAT Reactor and ACRR. An upper value on the number of tests was assumed, with a test frequency determined by the realistic turn-around times required between experiments. The estimates provided for impacts during normal operations are based on historical emission rates and projected usage rates; therefore, they are bounding. Estimated doses for members of the public, collocated workers, and facility workers that could be incurred as a result of an accident are very conservative. They do not credit safety systems or administrative procedures (such as evacuation plans or use of personal protective equipment) that could be used to limit worker doses. Doses estimated for transportation are conservative and are based on transport of the bounding radiologic inventory that will be contained in any given test. The transportation analysis assumes all transports will contain the bounding inventory.
Chiu, Chun-Huo; Wang, Yi-Ting; Walther, Bruno A; Chao, Anne
2014-09-01
It is difficult to accurately estimate species richness if there are many almost undetectable species in a hyper-diverse community. Practically, an accurate lower bound for species richness is preferable to an inaccurate point estimator. The traditional nonparametric lower bound developed by Chao (1984, Scandinavian Journal of Statistics 11, 265-270) for individual-based abundance data uses only the information on the rarest species (the numbers of singletons and doubletons) to estimate the number of undetected species in samples. Applying a modified Good-Turing frequency formula, we derive an approximate formula for the first-order bias of this traditional lower bound. The approximate bias is estimated by using additional information (namely, the numbers of tripletons and quadrupletons). This approximate bias can be corrected, and an improved lower bound is thus obtained. The proposed lower bound is nonparametric in the sense that it is universally valid for any species abundance distribution. A similar type of improved lower bound can be derived for incidence data. We test our proposed lower bounds on simulated data sets generated from various species abundance models. Simulation results show that the proposed lower bounds always reduce bias over the traditional lower bounds and improve accuracy (as measured by mean squared error) when the heterogeneity of species abundances is relatively high. We also apply the proposed new lower bounds to real data for illustration and for comparisons with previously developed estimators. © 2014, The International Biometric Society.
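A sketch in the spirit of the improved lower bound: start from the classic Chao1 estimator (singletons and doubletons) and add a correction term built from tripletons and quadrupletons. The exact published estimator may differ in its bias-corrected special cases; the frequency counts below are hypothetical.

```python
def chao1(f1: int, f2: int, s_obs: int) -> float:
    """Classic Chao1 lower bound from singletons (f1) and doubletons (f2)."""
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0          # common bias-corrected fallback

def improved_chao1(f1: int, f2: int, f3: int, f4: int, s_obs: int) -> float:
    """Chao1 plus a first-order bias correction that also uses tripletons (f3)
    and quadrupletons (f4), following the modified Good-Turing argument."""
    base = chao1(f1, f2, s_obs)
    if f4 == 0:
        return base
    correction = (f3 / (4.0 * f4)) * max(f1 - (f2 * f3) / (2.0 * f4), 0.0)
    return base + correction

# Hypothetical abundance frequency counts from one sample
f1, f2, f3, f4, s_obs = 48, 21, 11, 6, 230
print("Chao1 lower bound:           ", chao1(f1, f2, s_obs))
print("Improved (iChao1-type) bound:", improved_chao1(f1, f2, f3, f4, s_obs))
```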
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2018-01-01
Bearing-supported shafts are widely used in various machines. Due to harsh working environments, bearing performance degrades over time. To prevent unexpected bearing failures and accidents, bearing performance degradation assessment has become an emerging topic in recent years. Bearing performance degradation assessment aims to evaluate the current health condition of a bearing through a bearing health indicator. In past years, many signal processing and data mining based methods were proposed to construct bearing health indicators. However, the upper and lower bounds of these bearing health indicators were not theoretically calculated, and they strongly depended on historical bearing data including normal and failure data. Besides, most health indicators are dimensional, which means that these health indicators are prone to be affected by varying operating conditions, such as varying speeds and loads. In this paper, based on the principle of squared envelope analysis, we focus on theoretical investigation of bearing performance degradation assessment in the case of additive Gaussian noises, including distribution establishment of the squared envelope, construction of a generalized dimensionless bearing health indicator, and mathematical calculation of the upper and lower bounds of the generalized dimensionless bearing health indicator. Then, analyses of simulated and real bearing run-to-failure data are used as two case studies to illustrate how the generalized dimensionless health indicator works and to demonstrate its effectiveness in bearing performance degradation assessment. Results show that the squared envelope follows a noncentral chi-square distribution and that the upper and lower bounds of the generalized dimensionless health indicator can be mathematically established. Moreover, the generalized dimensionless health indicator is sensitive to an incipient bearing defect in the process of bearing performance degradation.
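A minimal sketch of squared envelope analysis: the squared magnitude of the analytic signal obtained with the Hilbert transform, followed by a simple scale-invariant (dimensionless) statistic. The indicator below is a generic illustration, not the paper's generalized dimensionless health indicator, and the signals are synthetic.

```python
import numpy as np
from scipy.signal import hilbert

def squared_envelope(x):
    """Squared envelope of a (band-passed) vibration signal via the analytic signal."""
    return np.abs(hilbert(x)) ** 2

def dimensionless_indicator(x):
    """Ratio of the second moment of the squared envelope to its squared mean:
    unit-free, hence insensitive to overall signal scaling."""
    se = squared_envelope(x)
    return float(np.mean(se ** 2) / np.mean(se) ** 2)

rng = np.random.default_rng(0)
fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
healthy = rng.normal(0, 1, t.size)                      # Gaussian noise only
impulses = (np.sin(2 * np.pi * 3000 * t)
            * (np.sin(2 * np.pi * 97 * t) > 0.995))     # sparse resonant bursts
faulty = healthy + 5 * impulses                         # defect-like signal

print("healthy indicator:", dimensionless_indicator(healthy))
print("faulty  indicator:", dimensionless_indicator(faulty))
```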
NASA Astrophysics Data System (ADS)
Malinowski, Arkadiusz; Takeuchi, Takuya; Chen, Shang; Suzuki, Toshiya; Ishikawa, Kenji; Sekine, Makoto; Hori, Masaru; Lukasiak, Lidia; Jakubowski, Andrzej
2013-07-01
This paper describes a new, fast, and case-independent technique for sticking coefficient (SC) estimation based on the pallet for plasma evaluation (PAPE) structure and numerical analysis. Our approach does not require a complicated structure, apparatus, or time-consuming measurements, but offers high reliability of data and high flexibility. Thermal analysis is also possible. This technique has been successfully applied to the estimation of the very low SC of hydrogen radicals on chemically amplified ArF 193 nm photoresist (the main goal of this study). The upper bound of our technique has been determined by investigating the SC of fluorine radicals on polysilicon (at elevated temperature). Sources of estimation error and ways of reducing it have also been discussed. Results of this study give an insight into the process kinetics; not only are they helpful for better process understanding, but they may additionally serve as parameters in phenomenological model development for predictive modelling of etching for ultimate CMOS topography simulation.
Comparison of electromyography and force as interfaces for prosthetic control.
Corbett, Elaine A; Perreault, Eric J; Kuiken, Todd A
2011-01-01
The ease with which persons with upper-limb amputations can control their powered prostheses is largely determined by the efficacy of the user command interface. One needs to understand the abilities of the human operator regarding the different available options. Electromyography (EMG) is widely used to control powered upper-limb prostheses. It is an indirect estimator of muscle force and may be expected to limit the control capabilities of the prosthesis user. This study compared EMG control with force control, an interface that is used in everyday interactions with the environment. We used both methods to perform a position-tracking task. Direct-position control of the wrist provided an upper bound for human-operator capabilities. The results demonstrated that an EMG control interface is as effective as force control for the position-tracking task. We also examined the effects of gain and tracking frequency on EMG control to explore the limits of this control interface. We found that information transmission rates for myoelectric control were best at higher tracking frequencies than at the frequencies previously reported for position control. The results may be useful for the design of prostheses and prosthetic controllers.
Robust adaptive sliding mode control for uncertain systems with unknown time-varying delay input.
Benamor, Anouar; Messaoud, Hassani
2018-05-02
This article focuses on a robust adaptive sliding mode control law for uncertain discrete systems with unknown time-varying delay input, where the uncertainty is assumed unknown. The main results of this paper are divided into three phases. In the first phase, we propose a new sliding surface derived within the framework of linear matrix inequalities (LMIs). In the second phase, using the new sliding surface, a novel robust sliding mode control (RSMC) is proposed, where the upper bound of the uncertainty is assumed known. Finally, a novel robust adaptive sliding mode control (RASMC) approach is defined for this type of system, where the upper bound of the uncertainty is assumed unknown. In this new approach, we estimate the upper bound of the uncertainties and determine the control law based on a sliding surface that converges to zero. These novel control laws have been validated in simulation on an uncertain numerical system, with good results and a comparative study. Their efficiency is emphasized through application of the new controllers to two physical systems: the PT326 process trainer and a two-tank hydraulic system. Published by Elsevier Ltd.
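As a generic illustration of the adaptive idea (estimating an unknown uncertainty bound online), here is a continuous-time, first-order sketch of adaptive sliding mode control with the classic adaptation law k̂' = γ|s|; the paper's discrete-time, delayed, LMI-based design is not reproduced, and all gains and the disturbance are hypothetical.

```python
import numpy as np

# Minimal sketch: first-order plant  x' = a*x + u + d(t),  with |d| <= d_max unknown.
# Sliding variable s = x (drive the state to zero); the adaptive gain k_hat
# estimates the unknown uncertainty bound online:  k_hat' = gamma * |s|.
a, gamma, eta, dt, T = 1.0, 5.0, 0.1, 1e-3, 5.0

x, k_hat = 2.0, 0.0
for step in range(int(T / dt)):
    t = step * dt
    d = 0.8 * np.sin(3 * t)                  # unknown bounded disturbance
    s = x                                    # sliding surface s = x
    u = -a * x - (k_hat + eta) * np.sign(s)  # equivalent control + switching term
    k_hat += gamma * abs(s) * dt             # adaptation of the uncertainty bound
    x += (a * x + u + d) * dt                # Euler integration of the plant

print(f"final |x| = {abs(x):.4f}, adapted gain k_hat = {k_hat:.3f}")
```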
Information geometry of Gaussian channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monras, Alex; CNR-INFM Coherentia, Napoli; CNISM Unita di Salerno
2010-06-15
We define a local Riemannian metric tensor in the manifold of Gaussian channels and the distance that it induces. We adopt an information-geometric approach and define a metric derived from the Bures-Fisher metric for quantum states. The resulting metric inherits several desirable properties from the Bures-Fisher metric and is operationally motivated by distinguishability considerations: It serves as an upper bound to the attainable quantum Fisher information for the channel parameters using Gaussian states, under generic constraints on the physically available resources. Our approach naturally includes the use of entangled Gaussian probe states. We prove that the metric enjoys some desirable properties like stability and covariance. As a by-product, we also obtain some general results in Gaussian channel estimation that are the continuous-variable analogs of previously known results in finite dimensions. We prove that optimal probe states are always pure and bounded in the number of ancillary modes, even in the presence of constraints on the reduced state input in the channel. This has experimental and computational implications. It limits the complexity of optimal experimental setups for channel estimation and reduces the computational requirements for the evaluation of the metric: Indeed, we construct a converging algorithm for its computation. We provide explicit formulas for computing the multiparametric quantum Fisher information for dissipative channels probed with arbitrary Gaussian states and provide the optimal observables for the estimation of the channel parameters (e.g., bath couplings, squeezing, and temperature).
"Carbon Credits" for Resource-Bounded Computations Using Amortised Analysis
NASA Astrophysics Data System (ADS)
Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin
Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azunre, P.
Here in this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring solving auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.
Variational principle for the Navier-Stokes equations.
Kerswell, R R
1999-05-01
A variational principle is presented for the Navier-Stokes equations in the case of a contained boundary-driven, homogeneous, incompressible, viscous fluid. Based upon making the fluid's total viscous dissipation over a given time interval stationary subject to the constraint of the Navier-Stokes equations, the variational problem looks overconstrained and intractable. However, introducing a nonunique velocity decomposition, u(x,t)=phi(x,t) + nu(x,t), "opens up" the variational problem so that what is presumed a single allowable point over the velocity domain u corresponding to the unique solution of the Navier-Stokes equations becomes a surface with a saddle point over the extended domain (phi,nu). Complementary or dual variational problems can then be constructed to estimate this saddle point value strictly from above as part of a minimization process or below via a maximization procedure. One of these reduced variational principles is the natural and ultimate generalization of the upper bounding problem developed by Doering and Constantin. The other corresponds to the ultimate Busse problem which now acts to lower bound the true dissipation. Crucially, these reduced variational problems require only the solution of a series of linear problems to produce bounds even though their unique intersection is conjectured to correspond to a solution of the nonlinear Navier-Stokes equations.
Event-Based Sensing and Control for Remote Robot Guidance: An Experimental Case
Santos, Carlos; Martínez-Rey, Miguel; Santiso, Enrique
2017-01-01
This paper describes the theoretical and practical foundations for remote control of a mobile robot for nonlinear trajectory tracking using an external localisation sensor. It constitutes a classical networked control system, whereby event-based techniques for both control and state estimation contribute to efficient use of communications and reduce sensor activity. Measurement requests are dictated by an event-based state estimator by setting an upper bound to the estimation error covariance matrix. The rest of the time, state prediction is carried out with the Unscented transformation. This prediction method makes it possible to select the appropriate instants at which to perform actuations on the robot so that guidance performance does not degrade below a certain threshold. Ultimately, we obtained a combined event-based control and estimation solution that drastically reduces communication accesses. The magnitude of this reduction is set according to the tracking error margin of a P3-DX robot following a nonlinear trajectory, remotely controlled with a mini PC and whose pose is detected by a camera sensor.
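The triggering idea can be sketched with a one-dimensional Kalman-style estimator that always predicts but requests a measurement from the external sensor only when the predicted error variance exceeds an upper bound; the actual system uses an Unscented filter and a camera, so the model and numbers below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
q, r, p_max = 0.05, 0.2, 0.5        # process noise, measurement noise, covariance bound
x_true, x_hat, p = 0.0, 0.0, 0.1
requests, steps = 0, 200

for _ in range(steps):
    x_true += rng.normal(0.0, np.sqrt(q))      # robot drifts (random walk)
    p += q                                     # prediction step (no communication)
    if p > p_max:                              # event: request the external measurement
        requests += 1
        z = x_true + rng.normal(0.0, np.sqrt(r))
        k = p / (p + r)
        x_hat += k * (z - x_hat)
        p *= (1.0 - k)

print(f"measurement requests: {requests}/{steps} steps, final variance {p:.3f}")
```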
Effects of triplet Higgs bosons in long baseline neutrino experiments
NASA Astrophysics Data System (ADS)
Huitu, K.; Kärkkäinen, T. J.; Maalampi, J.; Vihonen, S.
2018-05-01
The triplet scalars (Δ = Δ++, Δ+, Δ0) utilized in the so-called type-II seesaw model to explain the lightness of neutrinos would generate nonstandard interactions (NSI) for a neutrino propagating in matter. We investigate the prospects to probe these interactions in long baseline neutrino oscillation experiments. We analyze the upper bounds that the proposed DUNE experiment might set on the nonstandard parameters and numerically derive upper bounds, as a function of the lightest neutrino mass, on the ratio of the mass M_Δ of the triplet scalars to the strength |λ_φ| of the coupling φφΔ of the triplet Δ and the conventional Higgs doublet φ. We also discuss the possible misinterpretation of these effects as effects arising from a nonunitarity of the neutrino mixing matrix and compare the results with the bounds that arise from charged lepton flavor violating processes.
Decay of superconducting correlations for gauged electrons in dimensions D ≤ 4
NASA Astrophysics Data System (ADS)
Tada, Yasuhiro; Koma, Tohru
2018-03-01
We study lattice superconductors coupled to gauge fields, such as an attractive Hubbard model in electromagnetic fields, with a standard gauge fixing. We prove upper bounds for a two-point Cooper pair correlation at finite temperatures in spatial dimensions D ≤ 4. The upper bounds decay exponentially in three dimensions and by a power law in four dimensions. These imply the absence of superconducting long-range order for the Cooper pair amplitude as a consequence of fluctuations of the gauge fields. Since our results hold for the gauge fixing Hamiltonian, they cannot be obtained as a corollary of Elitzur's theorem.
Calculations of reliability predictions for the Apollo spacecraft
NASA Technical Reports Server (NTRS)
Amstadter, B. L.
1966-01-01
A new method of reliability prediction for complex systems is defined. The calculation of both upper and lower bounds is involved, and a procedure for combining the two to yield an approximately true prediction value is presented. Both mission success and crew safety predictions can be calculated, and success probabilities can be obtained for individual mission phases or subsystems. Primary consideration is given to evaluating cases involving zero or one failure per subsystem, and the results of these evaluations are then used for analyzing multiple failure cases. Extensive development is provided for the overall mission success and crew safety equations for both the upper and lower bounds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurvitis, Leonid
2009-01-01
An upper bound on the ergodic capacity of MIMO channels was introduced recently in [1]. This upper bound amounts to the maximization on the simplex of some multilinear polynomial p(λ_1, ..., λ_n) with non-negative coefficients. In general, such maximization problems are NP-hard. But if, say, the functional log(p) is concave on the simplex and can be efficiently evaluated, then the maximization can also be done efficiently. Such log-concavity was conjectured in [1]. In this paper we give a self-contained proof of the conjecture, based on the theory of H-stable polynomials.
Investigation of matter-antimatter interaction for possible propulsion applications
NASA Technical Reports Server (NTRS)
Morgan, D. L., Jr.
1974-01-01
Matter-antimatter annihilation is discussed as a means of rocket propulsion. The feasibility of different means of antimatter storage is shown to depend on how annihilation rates are affected by various circumstances. The annihilation processes are described, with emphasis on important features of atom-antiatom interatomic potential energies. A model is developed that allows approximate calculation of upper and lower bounds to the interatomic potential energy for any atom-antiatom pair. Formulae for the upper and lower bounds for atom-antiatom annihilation cross-sections are obtained and applied to the annihilation rates for each means of antimatter storage under consideration. Recommendations for further studies are presented.
Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.
Werner, Tomás
2015-07-01
Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message-passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point in which the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. Along the way, an upper bound on the partition function monotonically decreases. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
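In the ordinary sum-product semiring, one iteration of the update described above has a simple closed form: rescale the two factors by reciprocal amounts so that their product is preserved while their marginals over the shared variable are equalized. A sketch under that reading (the factor shapes and the geometric-mean rescaling are illustrative; the paper states the algorithm abstractly for any commutative semiring):

```python
import numpy as np

def equalize_marginals(f, g):
    """One sum-product-semiring iteration: rescale factors f(a, b) and g(b, c)
    sharing variable b so that their product is unchanged while their
    marginals over b become equal (both end up at sqrt(mf * mg))."""
    mf = f.sum(axis=0)                     # marginal of f over shared variable b
    mg = g.sum(axis=1)                     # marginal of g over b
    scale = np.sqrt(mg / mf)
    f_new = f * scale[np.newaxis, :]       # multiply each b-column by sqrt(mg/mf)
    g_new = g / scale[:, np.newaxis]       # divide each b-row by the same factor
    return f_new, g_new

# Repeating this for all overlapping pairs drives the factors toward marginal
# consistency while an upper bound on the partition function decreases.
```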
On the Role of Entailment Patterns and Scalar Implicatures in the Processing of Numerals
ERIC Educational Resources Information Center
Panizza, Daniele; Chierchia, Gennaro; Clifton, Charles, Jr.
2009-01-01
There has been much debate, in both the linguistics and the psycholinguistics literature, concerning numbers and the interpretation of number denoting determiners ("numerals"). Such debate concerns, in particular, the nature and distribution of upper-bounded ("exact") interpretations vs. lower-bounded ("at-least") construals. In the present paper…
Sublinear Upper Bounds for Stochastic Programs with Recourse. Revision.
1987-06-01
approximation procedures for (1.1) generally rely on discretizations of E (Huang, Ziemba, and Ben-Tal (1977), Kall and Stoyan (1982), Birge and Wets…) … Wright, Practical optimization (Academic Press, London and New York, 1981). C.C. Huang, W. Ziemba, and A. Ben-Tal, "Bounds on the expectation of a con…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zachos, C. K.; High Energy Physics
Following ref [1], a classical upper bound for quantum entropy is identified and illustrated, 0 ≤ S_q ≤ ln(eσ²/2ℏ), involving the variance σ² in phase space of the classical limit distribution of a given system. A fortiori, this further bounds the corresponding information-theoretical generalizations of the quantum entropy proposed by Renyi.
Representing and Acquiring Geographic Knowledge.
1984-01-01
…which is allowed if v is a knowledge bound of REG. … The real vertices of a clump map into the boundary of the corresponding object … For example, "What is the diameter of the pond?" can be answered, but the answer will, in general, be a range [lower-bound, upper-bound]. If the clump for … cases of others. They are included separately, because their procedures are either faster or more powerful than the general procedure. I will not…
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume
1998-01-01
We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller-Hermes, Alexander, E-mail: muellerh@ma.tum.de; Wolf, Michael M., E-mail: m.wolf@tum.de; Reeb, David, E-mail: reeb.qit@gmail.com
We investigate linear maps between matrix algebras that remain positive under tensor powers, i.e., under tensoring with n copies of themselves. Completely positive and completely co-positive maps are trivial examples of this kind. We show that for every n ∈ ℕ, there exist non-trivial maps with this property and that for two-dimensional Hilbert spaces there is no non-trivial map for which this holds for all n. For higher dimensions, we reduce the existence question of such non-trivial “tensor-stable positive maps” to a one-parameter family of maps and show that an affirmative answer would imply the existence of non-positive partial transpose bound entanglement. As an application, we show that any tensor-stable positive map that is not completely positive yields an upper bound on the quantum channel capacity, which for the transposition map gives the well-known cb-norm bound. We, furthermore, show that the latter is an upper bound even for the local operations and classical communications-assisted quantum capacity, and that moreover it is a strong converse rate for this task.
Estimation of two ordered mean residual lifetime functions.
Ebrahimi, N
1993-06-01
In many statistical studies involving failure data, biometric mortality data, and actuarial data, mean residual lifetime (MRL) function is of prime importance. In this paper we introduce the problem of nonparametric estimation of a MRL function on an interval when this function is bounded from below by another such function (known or unknown) on that interval, and derive the corresponding two functional estimators. The first is to be used when there is a known bound, and the second when the bound is another MRL function to be estimated independently. Both estimators are obtained by truncating the empirical estimator discussed by Yang (1978, Annals of Statistics 6, 112-117). In the first case, it is truncated at a known bound; in the second, at a point somewhere between the two empirical estimates. Consistency of both estimators is proved, and a pointwise large-sample distribution theory of the first estimator is derived.
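A minimal sketch of the first estimator as described, assuming it amounts to clipping the empirical MRL at the known lower bound; the function names are ours, and the exact constructions (including the second estimator) should be taken from the paper and from Yang (1978):

```python
import numpy as np

def empirical_mrl(data, t):
    """Empirical mean residual lifetime at t: average remaining life among
    observations that survive past t (Yang-style plug-in estimator)."""
    tail = data[data > t]
    return np.nan if tail.size == 0 else tail.mean() - t

def bounded_mrl(data, t, lower_bound_fn):
    """Sketch of the 'known bound' estimator: truncate the empirical MRL at a
    known lower-bound function so the order constraint is respected."""
    e = empirical_mrl(data, t)
    return lower_bound_fn(t) if np.isnan(e) else max(e, lower_bound_fn(t))
```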
NASA Astrophysics Data System (ADS)
He, Shaoming; Wang, Jiang; Wang, Wei
2017-12-01
This paper proposes a new composite guidance law to intercept manoeuvring targets without line-of-sight (LOS) angular rate information in the presence of autopilot lag. The presented formulation is obtained via a combination of homogeneous theory and sliding mode control approach. Different from some existing observers, the proposed homogeneous observer can estimate the lumped uncertainty and the LOS angular rate in an integrated manner. To reject the mismatched lumped uncertainty in the integrated guidance and autopilot system, a sliding surface, which consists of the system states and the estimated states, is proposed and a robust guidance law is then synthesised. Stability analysis shows that the LOS angular rate can be stabilised in a small region around zero asymptotically and the upper bound can be lowered by appropriate parameter choice. Numerical simulations with some comparisons are carried out to demonstrate the superiority of the proposed method.
Singular spectrum analysis in nonlinear dynamics, with applications to paleoclimatic time series
NASA Technical Reports Server (NTRS)
Vautard, R.; Ghil, M.
1989-01-01
Two dimensions of a dynamical system given by experimental time series are distinguished. Statistical dimension gives a theoretical upper bound for the minimal number of degrees of freedom required to describe the attractor up to the accuracy of the data, taking into account sampling and noise problems. The dynamical dimension is the intrinsic dimension of the attractor and does not depend on the quality of the data. Singular Spectrum Analysis (SSA) provides estimates of the statistical dimension. SSA also describes the main physical phenomena reflected by the data. It gives adaptive spectral filters associated with the dominant oscillations of the system and clarifies the noise characteristics of the data. SSA is applied to four paleoclimatic records. The principal climatic oscillations and the regime changes in their amplitude are detected. About 10 degrees of freedom are statistically significant in the data. Large noise and insufficient sample length do not allow reliable estimates of the dynamical dimension.
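A rough sketch of how SSA yields a statistical-dimension estimate: embed the series in a lag-trajectory matrix, take its singular spectrum, and count components above a noise floor. The window length, threshold, and synthetic series below are illustrative choices, not those of the study:

```python
import numpy as np

def ssa_spectrum(x, window):
    """Singular spectrum analysis sketch: build the trajectory (Hankel) matrix
    for lag-window `window` and return its normalized singular values.  The
    number of values above an estimated noise floor bounds the degrees of
    freedom needed to describe the signal at the accuracy of the data."""
    n = len(x) - window + 1
    X = np.column_stack([x[i:i + n] for i in range(window)])   # trajectory matrix
    s = np.linalg.svd(X, compute_uv=False)
    return s / s.sum()

# Example with a noisy oscillation: most variance falls on a few components.
t = np.arange(500)
series = np.sin(2 * np.pi * t / 40) + 0.3 * np.random.randn(t.size)
spectrum = ssa_spectrum(series, window=60)
significant = int((spectrum > 0.01).sum())   # crude noise-floor threshold
```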
Discriminating leptonic Yukawa interactions with doubly charged scalar at the ILC
NASA Astrophysics Data System (ADS)
Nomura, Takaaki; Okada, Hiroshi; Yokoya, Hiroshi
2018-04-01
We explore discrimination of two types of leptonic Yukawa interactions associated with the Higgs triplet, L̄_L^c Δ L_L, and with the SU(2)-singlet doubly charged scalar, ē_R^c k⁺⁺ e_R. These interactions can be distinguished by measuring the effects of doubly charged scalar boson exchange in the e⁺e⁻ → ℓ⁺ℓ⁻ processes at polarized electron-positron colliders. We study a forward-backward asymmetry of the scattering angular distribution to estimate the sensitivity for these effects at the ILC. In addition, we investigate prospects of upper bounds on the Yukawa couplings by combining the constraints of lepton flavor violation processes and the e⁺e⁻ → ℓ⁺ℓ⁻ processes at the LEP and the ILC.
A new class of finite-time nonlinear consensus protocols for multi-agent systems
NASA Astrophysics Data System (ADS)
Zuo, Zongyu; Tie, Lin
2014-02-01
This paper is devoted to investigating the finite-time consensus problem for a multi-agent system in networks with undirected topology. A new class of global continuous time-invariant consensus protocols is constructed for single-integrator agent dynamics with the aid of Lyapunov functions. In particular, it is shown that the settling time of the proposed new class of finite-time consensus protocols is upper bounded for arbitrary initial conditions. This makes it possible, in network consensus problems, to design and estimate the convergence time offline for a given undirected information flow and group size. Finally, a numerical simulation example is presented as a proof of concept.
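For context, a common building block for finite-time consensus of single-integrator agents is the fractional-power ("sig") protocol sketched below; the paper's protocol class and its settling-time bound for arbitrary initial conditions are established via Lyapunov arguments, so this is only a generic illustration with an assumed topology, gain, and exponent:

```python
import numpy as np

def finite_time_consensus_step(x, A, alpha=0.5, dt=0.01):
    """One Euler step of a generic finite-time consensus law for
    single-integrator agents: u_i = sum_j a_ij * sign(x_j - x_i)*|x_j - x_i|^alpha,
    with 0 < alpha < 1.  Illustrative protocol, not the paper's exact law."""
    diff = x[np.newaxis, :] - x[:, np.newaxis]            # diff[i, j] = x_j - x_i
    u = (A * np.sign(diff) * np.abs(diff) ** alpha).sum(axis=1)
    return x + dt * u

# Undirected ring of four agents; states converge to the average in finite time
# (up to the integration step size).
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
x = np.array([3.0, -1.0, 0.5, 2.0])
for _ in range(2000):
    x = finite_time_consensus_step(x, A)
```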
Distributed robust finite-time nonlinear consensus protocols for multi-agent systems
NASA Astrophysics Data System (ADS)
Zuo, Zongyu; Tie, Lin
2016-04-01
This paper investigates the robust finite-time consensus problem of multi-agent systems in networks with undirected topology. Global nonlinear consensus protocols augmented with a variable structure are constructed with the aid of Lyapunov functions for single-integrator agent dynamics in the presence of external disturbances. In particular, it is shown that the finite settling time of the proposed general framework for robust consensus design is upper bounded for any initial condition. This makes it possible, in network consensus problems, to design and estimate the convergence time offline for a multi-agent team with a given undirected information flow. Finally, simulation results are presented to demonstrate the performance and effectiveness of our finite-time protocols.
The Wrinkling of a Twisted Ribbon
NASA Astrophysics Data System (ADS)
Kohn, Robert V.; O'Brien, Ethan
2018-02-01
Recent experiments by Chopin and Kudrolli (Phys Rev Lett 111:174302, 2013) showed that a thin elastic ribbon, when twisted into a helicoid, may wrinkle in the center. We study this from the perspective of elastic energy minimization, building on recent work by Chopin et al. (J Elast 119(1-2):137-189, 2015) in which they derive a modified von Kármán functional and solve the relaxed problem. Our main contribution is to show matching upper and lower bounds for the minimum energy in the small-thickness limit. Along the way, we show that the displacements must be small where we expect that the ribbon is helicoidal, and we estimate the wavelength of the wrinkles.
Highly correlated configuration interaction calculations on water with large orbital bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almora-Díaz, César X., E-mail: xalmora@fisica.unam.mx
2014-05-14
A priori selected configuration interaction (SCI) with truncation energy error [C. F. Bunge, J. Chem. Phys. 125, 014107 (2006)] and CI by parts [C. F. Bunge and R. Carbó-Dorca, J. Chem. Phys. 125, 014108 (2006)] are used to approximate the total nonrelativistic electronic ground-state energy of water at fixed experimental geometry with CI up to sextuple excitations. Correlation-consistent polarized core-valence basis sets (cc-pCVnZ) up to sextuple-zeta and augmented correlation-consistent polarized core-valence basis sets (aug-cc-pCVnZ) up to quintuple-zeta quality are employed. Truncation energy errors range between less than 1 μhartree and 100 μhartree for the largest orbital set. Coupled-cluster CCSD and CCSD(T) calculations are also obtained for comparison. Our best upper bound, −76.4343 hartree, obtained by SCI with up to sextuple excitations with a cc-pCV6Z basis, recovers more than 98.8% of the correlation energy of the system, and it is only about 3 kcal/mol above the “experimental” value. Although the present energy upper bounds are far below all previous ones, comparatively large dispersion errors in the extrapolation of the energies to the complete basis set do not allow a reliable estimate of the full CI energy with an accuracy better than 0.6 mhartree (0.4 kcal/mol).
NASA Astrophysics Data System (ADS)
Sahni, V.; Ma, C. Q.
1980-12-01
The inhomogeneous electron gas at a jellium metal surface is studied in the Hartree-Fock approximation by Kohn-Sham density functional theory. Rigorous upper bounds to the surface energy are derived by application of the Rayleigh-Ritz variational principle for the energy, the surface kinetic, electrostatic, and nonlocal exchange energy functionals being determined exactly for the accurate linear-potential model electronic wave functions. The densities obtained by the energy minimization constraint are then employed to determine work-function results via the variationally accurate "displaced-profile change-in-self-consistent-field" expression. The theoretical basis of this non-self-consistent procedure and its demonstrated accuracy for the fully correlated system (as treated within the local-density approximation for exchange and correlation) lead us to conclude that these results for the surface energies and work functions are essentially exact. Work-function values are also determined by the Koopmans'-theorem expression, both for these densities and for those obtained by satisfaction of the constraint set on the electrostatic potential by the Budd-Vannimenus theorem. The use of the Hartree-Fock results in the accurate estimation of correlation-effect contributions to these surface properties of the nonuniform electron gas is also indicated. In addition, the original work and approximations made by Bardeen in this attempt at a solution of the Hartree-Fock problem are briefly reviewed in order to contrast with the present work.
Coleman, B K; Wells, J R; Nazaroff, W W
2010-02-01
The reaction of ozone with permethrin can potentially form phosgene. Published evidence on ozone levels and permethrin surface concentrations in aircraft cabins indicated that significant phosgene formation might occur in this setting. A derivatization technique was developed to detect phosgene with a lower limit of detection of 2 ppb. Chamber experiments were conducted with permethrin-coated materials (glass, carpet, seat fabric, and plastic) exposed to ozone under cabin-relevant conditions (150 ppb O₃, 4.5/h air exchange rate, <1% relative humidity, 1700 ng/cm² of permethrin). Phosgene was not detected in these experiments. Reaction of ozone with permethrin appears to be hindered by the electron-withdrawing chlorine atoms adjacent to the double bond in permethrin. Experimental results indicate that the upper limit on the reaction probability of ozone with surface-bound permethrin is approximately 10⁻⁷. Extrapolation by means of material-balance modeling indicates that the upper limit on the phosgene level in aircraft cabins resulting from this chemistry is approximately 1 μg/m³ or approximately 0.3 ppb. It was thus determined that phosgene formation, if it occurs in aircraft cabins, is not likely to exceed relevant, health-based phosgene exposure guidelines. Phosgene formation from ozone-initiated oxidation of permethrin in the aircraft cabin environment, if it occurs, is estimated to generate levels below the California Office of Environmental Health Hazard Assessment acute reference exposure level of 4 μg/m³ or approximately 1 ppb.
Measures and limits of models of fixation selection.
Wilming, Niklas; Betz, Torsten; Kietzmann, Tim C; König, Peter
2011-01-01
Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research. First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure between probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound for these measures, based on image-independent properties of fixation data and between-subject consistency, respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source Python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the required information that allows a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
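The ROC measure discussed here scores how well a model's map separates fixated locations from the rest of the image. A bare-bones sketch of one common rank-based variant (a generic formulation, not the authors' released code):

```python
import numpy as np

def fixation_auc(saliency_map, fixations):
    """Area under the ROC curve for a fixation-prediction map: how well the
    map's values at fixated pixels rank above the full set of pixel values
    (one common variant; the paper reviews several measures and corrections)."""
    pos = saliency_map[fixations[:, 0], fixations[:, 1]]   # values at fixations
    neg = saliency_map.ravel()                             # all pixel values
    combined = np.concatenate([pos, neg])
    ranks = combined.argsort().argsort() + 1               # Mann-Whitney rank-sum AUC
    auc = (ranks[:len(pos)].sum() - len(pos) * (len(pos) + 1) / 2) / (len(pos) * len(neg))
    return auc
```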
Schwartz, Marc D; Valdimarsdottir, Heiddis B; Peshkin, Beth N; Mandelblatt, Jeanne; Nusbaum, Rachel; Huang, An-Tsun; Chang, Yaojen; Graves, Kristi; Isaacs, Claudine; Wood, Marie; McKinnon, Wendy; Garber, Judy; McCormick, Shelley; Kinney, Anita Y; Luta, George; Kelleher, Sarah; Leventhal, Kara-Grace; Vegella, Patti; Tong, Angie; King, Lesley
2014-03-01
Although guidelines recommend in-person counseling before BRCA1/BRCA2 gene testing, genetic counseling is increasingly offered by telephone. As genomic testing becomes more common, evaluating alternative delivery approaches becomes increasingly salient. We tested whether telephone delivery of BRCA1/2 genetic counseling was noninferior to in-person delivery. Participants (women age 21 to 85 years who did not have newly diagnosed or metastatic cancer and lived within a study site catchment area) were randomly assigned to usual care (UC; n = 334) or telephone counseling (TC; n = 335). UC participants received in-person pre- and post-test counseling; TC participants completed all counseling by telephone. Primary outcomes were knowledge, satisfaction, decision conflict, distress, and quality of life; secondary outcomes were equivalence of BRCA1/2 test uptake and costs of delivering TC versus UC. TC was noninferior to UC on all primary outcomes. At 2 weeks after pretest counseling, knowledge (d = 0.03; lower bound of 97.5% CI, -0.61), perceived stress (d = -0.12; upper bound of 97.5% CI, 0.21), and satisfaction (d = -0.16; lower bound of 97.5% CI, -0.70) had group differences and confidence intervals that did not cross their 1-point noninferiority limits. Decision conflict (d = 1.1; upper bound of 97.5% CI, 3.3) and cancer distress (d = -1.6; upper bound of 97.5% CI, 0.27) did not cross their 4-point noninferiority limit. Results were comparable at 3 months. TC was not equivalent to UC on BRCA1/2 test uptake (UC, 90.1%; TC, 84.2%). TC yielded cost savings of $114 per patient. Genetic counseling can be effectively and efficiently delivered via telephone to increase access and decrease costs.
A formulation of a matrix sparsity approach for the quantum ordered search algorithm
NASA Astrophysics Data System (ADS)
Parmar, Jupinder; Rahman, Saarim; Thiara, Jaskaran
One specific subset of quantum algorithms is Grover's Ordered Search Problem (OSP), the quantum counterpart of the classical binary search algorithm, which utilizes oracle functions to produce a specified value within an ordered database. Classically, the optimal algorithm is known to have log₂N complexity; however, Grover's algorithm has been found to have an optimal complexity between the lower bound of (ln N − 1)/π ≈ 0.221 log₂N and the upper bound of 0.433 log₂N. We sought to lower the known upper bound of the OSP. Following Farhi et al. [MITCTP 2815 (1999), arXiv:quant-ph/9901059], we see that the OSP can be resolved into a translationally invariant algorithm to create quantum query algorithm restraints. With these restraints, one can find Laurent polynomials for various k (queries) and N (database sizes), thus finding larger recursive sets to solve the OSP and effectively reducing the upper bound. These polynomials are found to be convex functions, allowing one to make use of convex optimization to find an improvement on the known bounds. According to Childs et al. [Phys. Rev. A 75 (2007) 032335], semidefinite programming, a subset of convex optimization, can solve the particular problem represented by the constraints. We were able to implement a program abiding by their formulation of a semidefinite program (SDP), leading us to find that it takes an immense amount of storage and time to compute. To combat this setback, we then formulated an approach to improve results of the SDP using matrix sparsity. Through the development of this approach, along with an implementation of a rudimentary solver, we demonstrate how matrix sparsity reduces the amount of time and storage required to compute the SDP, overall ensuring that further improvements will likely be made toward the theorized lower bound.
Sample Complexity Bounds for Differentially Private Learning
Chaudhuri, Kamalika; Hsu, Daniel
2013-01-01
This work studies the problem of privacy-preserving classification – namely, learning a classifier from sensitive data while preserving the privacy of individuals in the training set. In particular, the learning algorithm is required in this problem to guarantee differential privacy, a very strong notion of privacy that has gained significant attention in recent years. A natural question to ask is: what is the sample requirement of a learning algorithm that guarantees a certain level of privacy and accuracy? We address this question in the context of learning with infinite hypothesis classes when the data is drawn from a continuous distribution. We first show that even for very simple hypothesis classes, any algorithm that uses a finite number of examples and guarantees differential privacy must fail to return an accurate classifier for at least some unlabeled data distributions. This result is unlike the case with either finite hypothesis classes or discrete data domains, in which distribution-free private learning is possible, as previously shown by Kasiviswanathan et al. (2008). We then consider two approaches to differentially private learning that get around this lower bound. The first approach is to use prior knowledge about the unlabeled data distribution in the form of a reference distribution chosen independently of the sensitive data. Given such a reference distribution, we provide an upper bound on the sample requirement that depends (among other things) on a measure of closeness between the reference and the unlabeled data distribution. Our upper bound applies to the non-realizable as well as the realizable case. The second approach is to relax the privacy requirement, by requiring only label privacy – namely, that only the labels (and not the unlabeled parts of the examples) be considered sensitive information. An upper bound on the sample requirement of learning with label privacy was shown by Chaudhuri et al. (2006); in this work, we show a lower bound. PMID:25285183
Tillman, Fred; Wiele, Stephen M.; Pool, Donald R.
2015-01-01
Population growth in the Verde Valley in Arizona has led to efforts to better understand water availability in the watershed. Evapotranspiration (ET) is a substantial component of the water budget and a critical factor in estimating groundwater recharge in the area. In this study, four estimates of ET are compared and discussed with applications to the Verde Valley. Higher potential ET (PET) rates from the soil-water balance (SWB) recharge model resulted in an average annual ET volume about 17% greater than for ET from the basin characteristics (BCM) recharge model. Annual BCM PET volume, however, was greater by about a factor of 2 or more than SWB actual ET (AET) estimates, which are used in the SWB model to estimate groundwater recharge. ET also was estimated using a method that combines MODIS-EVI remote sensing data and geospatial information, and by the MODFLOW-EVT ET package as part of a regional groundwater-flow model that includes the study area. Annual ET volumes were about the same for upper-bound MODIS-EVI ET for perennial streams as for the MODFLOW ET estimates, with the small differences between the two methods having minimal impact on annual or longer groundwater budgets for the study area.
Estimation of inflation parameters for Perturbed Power Law model using recent CMB measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, Suvodip; Das, Santanu; Souradeep, Tarun
2015-01-01
Cosmic Microwave Background (CMB) is an important probe for understanding the inflationary era of the Universe. We consider the Perturbed Power Law (PPL) model of inflation, which is a soft deviation from the Power Law (PL) inflationary model. This model captures the effect of higher-order derivatives of the Hubble parameter during inflation, which in turn lead to a non-zero effective mass m_eff for the inflaton field. The higher-order derivatives of the Hubble parameter at leading order source a constant difference in the spectral indices for scalar and tensor perturbations, going beyond the PL model of inflation. The PPL model has two independent observable parameters, namely the spectral index for tensor perturbations ν_t and the change in the spectral index for scalar perturbations ν_st, to explain the observed features in the scalar and tensor perturbation power spectra. From the recent measurements of CMB power spectra by WMAP, Planck and BICEP-2 for temperature and polarization, we estimate the feasibility of the PPL model alongside the standard ΛCDM model. Although BICEP-2 claimed a detection of r = 0.2, estimates of dust contamination provided by Planck have left open the possibility that only an upper bound on r will be expected in a joint analysis. As a result we consider different upper bounds on the value of r and show that the PPL model can explain a lower value of the tensor-to-scalar ratio (r < 0.1 or r < 0.01) for a scalar spectral index of n_s = 0.96 by having a non-zero value of the effective mass of the inflaton field m²_eff/H². The analysis with the WP + Planck likelihood shows a non-zero detection of m²_eff/H² at 5.7σ and 8.1σ for r < 0.1 and r < 0.01, respectively, whereas with the BICEP-2 likelihood m²_eff/H² = −0.0237 ± 0.0135, which is consistent with zero.
Spread of entanglement and causality
NASA Astrophysics Data System (ADS)
Casini, Horacio; Liu, Hong; Mezei, Márk
2016-07-01
We investigate causality constraints on the time evolution of entanglement entropy after a global quench in relativistic theories. We first provide a general proof that the so-called tsunami velocity is bounded by the speed of light. We then generalize the free particle streaming model of [1] to general dimensions and to an arbitrary entanglement pattern of the initial state. In more than two spacetime dimensions the spread of entanglement in these models is highly sensitive to the initial entanglement pattern, but we are able to prove an upper bound on the normalized rate of growth of entanglement entropy, and hence the tsunami velocity. The bound is smaller than what one gets for quenches in holographic theories, which highlights the importance of interactions in the spread of entanglement in many-body systems. We propose an interacting model which we believe provides an upper bound on the spread of entanglement for interacting relativistic theories. In two spacetime dimensions with multiple intervals, this model and its variations are able to reproduce intricate results exhibited by holographic theories for a significant part of the parameter space. For higher dimensions, the model bounds the tsunami velocity at the speed of light. Finally, we construct a geometric model for entanglement propagation based on a tensor network construction for global quenches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Myoung-Jae; Jung, Young-Dae, E-mail: ydjung@hanyang.ac.kr; Department of Physics, Applied Physics, and Astronomy, Rensselaer Polytechnic Institute, 110 8th Street, Troy, New York 12180-3590
The dispersion relation for the dust ion-acoustic surface waves propagating at the interface of a semi-bounded Lorentzian dusty plasma with supersonic ion flow has been kinetically derived to investigate the nonthermal property and the ion wake-field effect. We found that the supersonic ion flow creates the upper and the lower modes. The increase in the nonthermal particles decreases the wave frequency for the upper mode whereas it increases the frequency for the lower mode. The increase in the supersonic ion flow velocity is found to enhance the wave frequency for both modes. We also found that the increase in nonthermal plasmas enhances the group velocity of the upper mode. However, the nonthermal particles suppress the lower-mode group velocity. The nonthermal effects on the group velocity are reduced in the limit of small or large wavelengths.
Azunre, P.
2016-09-21
In this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds that are convex, and upper bounds that are concave, in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring the solution of auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical example of bound construction, and of its use for deterministic global optimization within a simple serial branch-and-bound algorithm implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.
Bounds on quantum confinement effects in metal nanoparticles
NASA Astrophysics Data System (ADS)
Blackman, G. Neal; Genov, Dentcho A.
2018-03-01
Quantum size effects on the permittivity of metal nanoparticles are investigated using the quantum box model. Explicit upper and lower bounds are derived for the permittivity and relaxation rates due to quantum confinement effects. These bounds are verified numerically, and the size dependence and frequency dependence of the empirical Drude size parameter are extracted from the model. Results suggest that the common practice of empirically modifying the dielectric function can lead to inaccurate predictions for highly uniform distributions of finite-sized particles.
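For reference, the empirical modification in question is usually applied to the classical Drude dielectric function by letting the relaxation rate grow with inverse particle radius through a size parameter A. The sketch below is that textbook form only, not the paper's quantum-box bounds:

```python
import numpy as np

def drude_permittivity(omega, radius, eps_inf, omega_p, gamma_bulk, v_fermi, A=1.0):
    """Size-corrected Drude permittivity commonly used empirically:
    the relaxation rate is gamma_bulk + A * v_F / R for a particle of radius R.
    All parameter values are user-supplied; A is the empirical size parameter
    whose size and frequency dependence the paper bounds."""
    gamma = gamma_bulk + A * v_fermi / radius
    return eps_inf - omega_p**2 / (omega**2 + 1j * omega * gamma)
```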
Some Metric Properties of Planar Gaussian Free Field
NASA Astrophysics Data System (ADS)
Goswami, Subhajit
In this thesis we study the properties of some metrics arising from the two-dimensional Gaussian free field (GFF), namely the Liouville first-passage percolation (Liouville FPP), the Liouville graph distance and an effective resistance metric. In Chapter 1, we define these metrics as well as discuss the motivations for studying them. Roughly speaking, Liouville FPP is the shortest path metric in a planar domain D where the length of a path P is given by ∫_P e^{γh(z)} |dz|, where h is the GFF on D and γ > 0. In Chapter 2, we present an upper bound on the expected Liouville FPP distance between two typical points for small values of γ (the near-Euclidean regime). A similar upper bound is derived in Chapter 3 for the Liouville graph distance which is, roughly, the minimal number of Euclidean balls with comparable Liouville quantum gravity (LQG) measure whose union contains a continuous path between two endpoints. Our bounds seem to be in disagreement with Watabiki's prediction (1993) on the random metric of Liouville quantum gravity in this regime. The contents of these two chapters are based on a joint work with Jian Ding. In Chapter 4, we derive some asymptotic estimates for effective resistances on a random network which is defined as follows. Given any γ > 0 and for η = {η_v}_{v∈Z²} denoting a sample of the two-dimensional discrete Gaussian free field on Z² pinned at the origin, we equip the edge (u, v) with conductance e^{γ(η_u + η_v)}. The metric structure of effective resistance plays a crucial role in our proof of the main result in Chapter 4. The primary motivation behind this metric is to understand the random walk on Z² where the edge (u, v) has weight e^{γ(η_u + η_v)}. Using the estimates from Chapter 4 we show in Chapter 5 that for almost every η, this random walk is recurrent and that, with probability tending to 1 as T → ∞, the return probability at time 2T decays as T^{−1+o(1)}. In addition, we prove a version of subdiffusive behavior by showing that the expected exit time from a ball of radius N scales as N^{ψ(γ)+o(1)} with ψ(γ) > 2 for all γ > 0. The contents of these chapters are based on a joint work with Marek Biskup and Jian Ding.
Ratio-based estimators for a change point in persistence.
Halunga, Andreea G; Osborn, Denise R
2012-11-01
We study estimation of the date of change in persistence, from [Formula: see text] to [Formula: see text] or vice versa. Contrary to statements in the original papers, our analytical results establish that the ratio-based break point estimators of Kim [Kim, J.Y., 2000. Detection of change in persistence of a linear time series. Journal of Econometrics 95, 97-116], Kim et al. [Kim, J.Y., Belaire-Franch, J., Badillo Amador, R., 2002. Corrigendum to "Detection of change in persistence of a linear time series". Journal of Econometrics 109, 389-392] and Busetti and Taylor [Busetti, F., Taylor, A.M.R., 2004. Tests of stationarity against a change in persistence. Journal of Econometrics 123, 33-66] are inconsistent when a mean (or other deterministic component) is estimated for the process. In such cases, the estimators converge to random variables with upper bound given by the true break date when persistence changes from [Formula: see text] to [Formula: see text]. A Monte Carlo study confirms the large-sample downward bias and also finds substantial biases in moderate-sized samples, partly due to properties at the end points of the search interval.
Estimating the benefits of public health policies that reduce harmful consumption.
Ashley, Elizabeth M; Nardinelli, Clark; Lavaty, Rosemarie A
2015-05-01
For products such as tobacco and junk food, where policy interventions are often designed to decrease consumption, affected consumers gain utility from improvements in lifetime health and longevity but also lose utility associated with the activity of consuming the product. In the case of anti-smoking policies, even though published estimates of gross health and longevity benefits are up to 900 times higher than the net consumer benefits suggested by a more direct willingness-to-pay estimation approach, there is little recognition in the cost-benefit and cost-effectiveness literature that gross estimates will overstate intrapersonal welfare improvements when utility losses are not netted out. This paper presents a general framework for analyzing policies that are designed to reduce inefficiently high consumption and provides a rule of thumb for the relationship between net and gross consumer welfare effects: where there exists a plausible estimate of the tax that would allow consumers to fully internalize health costs, the ratio of the tax to the per-unit long-term cost can provide an upper bound on the ratio of net to gross benefits. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
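The rule of thumb in the final sentence is simple arithmetic; the sketch below uses purely hypothetical numbers and names to illustrate it:

```python
def net_to_gross_upper_bound(internalizing_tax, long_term_cost_per_unit):
    """Rule of thumb from the abstract (sketch): the ratio of the tax that would
    let consumers fully internalize health costs to the per-unit long-term cost
    gives an upper bound on the ratio of net to gross consumer benefits."""
    return internalizing_tax / long_term_cost_per_unit

# Hypothetical figures: a $1.00 internalizing tax against a $20.00 per-unit
# long-term health cost caps net benefits at 5% of the gross estimate.
bound = net_to_gross_upper_bound(1.00, 20.00)   # 0.05
```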
A revised timescale for human evolution based on ancient mitochondrial genomes
Johnson, Philip L.F.; Bos, Kirsten; Lari, Martina; Bollongino, Ruth; Sun, Chengkai; Giemsch, Liane; Schmitz, Ralf; Burger, Joachim; Ronchitelli, Anna Maria; Martini, Fabio; Cremonesi, Renata G.; Svoboda, Jiří; Bauer, Peter; Caramelli, David; Castellano, Sergi; Reich, David; Pääbo, Svante; Krause, Johannes
2016-01-01
Background: Recent analyses of de novo DNA mutations in modern humans have suggested a nuclear substitution rate that is approximately half that of previous estimates based on fossil calibration. This result has led to suggestions that major events in human evolution occurred far earlier than previously thought. Results: Here we use mitochondrial genome sequences from 10 securely dated ancient modern humans spanning 40,000 years as calibration points for the mitochondrial clock, thus yielding a direct estimate of the mitochondrial substitution rate. Our clock yields mitochondrial divergence times that are in agreement with earlier estimates based on calibration points derived from either fossils or archaeological material. In particular, our results imply a separation of non-Africans from the most closely related sub-Saharan African mitochondrial DNAs (haplogroup L3) of less than 62,000-95,000 years ago. Conclusions: Though single loci like mitochondrial DNA (mtDNA) can only provide biased estimates of population split times, they can provide valid upper bounds; our results exclude most of the older dates for African and non-African split times recently suggested by de novo mutation rate estimates in the nuclear genome. PMID:23523248
A revised timescale for human evolution based on ancient mitochondrial genomes.
Fu, Qiaomei; Mittnik, Alissa; Johnson, Philip L F; Bos, Kirsten; Lari, Martina; Bollongino, Ruth; Sun, Chengkai; Giemsch, Liane; Schmitz, Ralf; Burger, Joachim; Ronchitelli, Anna Maria; Martini, Fabio; Cremonesi, Renata G; Svoboda, Jiří; Bauer, Peter; Caramelli, David; Castellano, Sergi; Reich, David; Pääbo, Svante; Krause, Johannes
2013-04-08
Recent analyses of de novo DNA mutations in modern humans have suggested a nuclear substitution rate that is approximately half that of previous estimates based on fossil calibration. This result has led to suggestions that major events in human evolution occurred far earlier than previously thought. Here, we use mitochondrial genome sequences from ten securely dated ancient modern humans spanning 40,000 years as calibration points for the mitochondrial clock, thus yielding a direct estimate of the mitochondrial substitution rate. Our clock yields mitochondrial divergence times that are in agreement with earlier estimates based on calibration points derived from either fossils or archaeological material. In particular, our results imply a separation of non-Africans from the most closely related sub-Saharan African mitochondrial DNAs (haplogroup L3) that occurred less than 62-95 kya. Though single loci like mitochondrial DNA (mtDNA) can only provide biased estimates of population divergence times, they can provide valid upper bounds. Our results exclude most of the older dates for African and non-African population divergences recently suggested by de novo mutation rate estimates in the nuclear genome. Copyright © 2013 Elsevier Ltd. All rights reserved.
The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2008-01-01
We develop a remarkably tight upper bound on the performance of a parameterized family of bounded-angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble-average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, which minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
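A BA-ML incomplete decoder of the kind parameterized here can be pictured as ML decoding followed by an angle test: if the received vector falls outside a cone of half-angle θ around the best codeword, a detected error is declared instead of a decision. The sketch below captures only that generic picture (the interface and threshold handling are illustrative, not the paper's construction):

```python
import numpy as np

def ba_ml_decode(received, codewords, max_angle_rad):
    """Sketch of a bounded-angle ML (BA-ML) incomplete decoder: pick the
    closest codeword (maximum correlation), but declare a detected error if
    the angle to it exceeds the threshold."""
    r = received / np.linalg.norm(received)
    C = codewords / np.linalg.norm(codewords, axis=1, keepdims=True)
    cosines = C @ r
    best = int(np.argmax(cosines))
    if np.arccos(np.clip(cosines[best], -1.0, 1.0)) > max_angle_rad:
        return None, best          # detected (declared) error
    return codewords[best], best
```

Shrinking the angle trades more detected errors for fewer undetected ones, which is exactly the trade-off the optimal family in the abstract is defined to balance.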
When clusters collide: constraints on antimatter on the largest scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steigman, Gary, E-mail: steigman@mps.ohio-state.edu
2008-10-15
Observations have ruled out the presence of significant amounts of antimatter in the Universe on scales ranging from the solar system, to the Galaxy, to groups and clusters of galaxies, and even to distances comparable to the scale of the present horizon. Except for the model-dependent constraints on the largest scales, the most significant upper limits to diffuse antimatter in the Universe are those on the ∼Mpc scale of clusters of galaxies provided by the EGRET upper bounds to annihilation gamma rays from galaxy clusters whose intracluster gas is revealed through its x-ray emission. On the scale of individual clusters of galaxies, the upper bounds to the fraction of mixed matter and antimatter for the 55 clusters from a flux-limited x-ray survey range from 5 × 10⁻⁹ to < 1 × 10⁻⁶, strongly suggesting that individual clusters of galaxies are made entirely of matter or of antimatter. X-ray and gamma-ray observations of colliding clusters of galaxies, such as the Bullet Cluster, permit these constraints to be extended to even larger scales. If the observations of the Bullet Cluster, where the upper bound to the antimatter fraction is found to be < 3 × 10⁻⁶, can be generalized to other colliding clusters of galaxies, cosmologically significant amounts of antimatter will be excluded on scales of order ∼20 Mpc (M ∼ 5 × 10¹⁵ M_⊙).
High exposure to inorganic arsenic by food: the need for risk reduction.
Gundert-Remy, Ursula; Damm, Georg; Foth, Heidi; Freyberger, Alexius; Gebel, Thomas; Golka, Klaus; Röhl, Claudia; Schupp, Thomas; Wollin, Klaus-Michael; Hengstler, Jan Georg
2015-12-01
Arsenic is a human carcinogen that occurs ubiquitously in soil and water. Based on epidemiological studies, a benchmark dose (lower/higher bound estimate) between 0.3 and 8 μg/kg bw/day was estimated to cause a 1 % increased risk of lung, skin and bladder cancer. A recently published study by EFSA on dietary exposure to inorganic arsenic in the European population reported 95th percentiles (lower bound min to upper bound max) for different age groups in the same range as the benchmark dose. For toddlers, a highly exposed group, the highest values ranged between 0.61 and 2.09 µg arsenic/kg bw/day. For all other age classes, the margin of exposure is also small. This scenario calls for regulatory action to reduce arsenic exposure. One priority measure should be to reduce arsenic in food categories that contribute most to exposure. In the EFSA study the food categories 'milk and dairy products,' 'drinking water' and 'food for infants' represent major sources of inorganic arsenic for infants and also rice is an important source. Long-term strategies are required to reduce inorganic arsenic in these food groups. The reduced consumption of rice and rice products which has been recommended may be helpful for a minority of individuals consuming unusually high amounts of rice. However, it is only of limited value for the general European population, because the food categories 'grain-based processed products (non rice-based)' or 'milk and dairy products' contribute more to the exposure with inorganic arsenic than the food category 'rice.' A balanced regulatory activity focusing on the most relevant food categories is required. In conclusion, exposure to inorganic arsenic represents a risk to the health of the European population, particularly to young children. Regulatory measures to reduce exposure are urgently required.
Uncertainty Quantification for Ice Sheet Science and Sea Level Projections
NASA Astrophysics Data System (ADS)
Boening, C.; Schlegel, N.; Limonadi, D.; Schodlok, M.; Seroussi, H. L.; Larour, E. Y.; Watkins, M. M.
2017-12-01
In order to better quantify uncertainties in global mean sea level rise projections and in particular upper bounds, we aim at systematically evaluating the contributions from ice sheets and potential for extreme sea level rise due to sudden ice mass loss. Here, we take advantage of established uncertainty quantification tools embedded within the Ice Sheet System Model (ISSM) as well as sensitivities to ice/ocean interactions using melt rates and melt potential derived from MITgcm/ECCO2. With the use of these tools, we conduct Monte-Carlo style sampling experiments on forward simulations of the Antarctic ice sheet, by varying internal parameters and boundary conditions of the system over both extreme and credible worst-case ranges. Uncertainty bounds for climate forcing are informed by CMIP5 ensemble precipitation and ice melt estimates for year 2100, and uncertainty bounds for ocean melt rates are derived from a suite of regional sensitivity experiments using MITgcm. Resulting statistics allow us to assess how regional uncertainty in various parameters affect model estimates of century-scale sea level rise projections. The results inform efforts to a) isolate the processes and inputs that are most responsible for determining ice sheet contribution to sea level; b) redefine uncertainty brackets for century-scale projections; and c) provide a prioritized list of measurements, along with quantitative information on spatial and temporal resolution, required for reducing uncertainty in future sea level rise projections. Results indicate that ice sheet mass loss is dependent on the spatial resolution of key boundary conditions - such as bedrock topography and melt rates at the ice-ocean interface. This work is performed at and supported by the California Institute of Technology's Jet Propulsion Laboratory. Supercomputing time is also supported through a contract with the National Aeronautics and Space Administration's Cryosphere program.
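The Monte-Carlo-style experiments described here amount to drawing forcing and model parameters from credible ranges, running a forward model, and summarizing the spread of projected contributions. The sketch below shows only that generic workflow; the forward model, parameter names, and ranges are placeholders, and ISSM itself is not invoked:

```python
import numpy as np

def sample_projection(forward_model, param_ranges, n_samples=1000, seed=0):
    """Monte-Carlo-style sampling sketch: draw parameters uniformly from
    credible ranges, run the forward model, and report percentiles of the
    projected sea-level contribution."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_samples):
        params = {k: rng.uniform(*param_ranges[k]) for k in param_ranges}
        results.append(forward_model(**params))
    return np.percentile(results, [5, 50, 95])

# Hypothetical toy forward model and parameter ranges, for illustration only:
stats = sample_projection(
    lambda melt_rate, precip_scale: 0.3 * melt_rate - 0.1 * precip_scale,
    {"melt_rate": (0.5, 2.0), "precip_scale": (0.8, 1.2)})
```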
Multi-soliton interaction of a generalized Schrödinger-Boussinesq system in a magnetized plasma
NASA Astrophysics Data System (ADS)
Zhao, Xue-Hui; Tian, Bo; Chai, Jun; Wu, Xiao-Yu; Guo, Yong-Jiang
2017-04-01
Under investigation in this paper is a generalized Schrödinger-Boussinesq system, which describes the stationary propagation of coupled upper-hybrid waves and magnetoacoustic waves in a magnetized plasma. Bilinear forms and one-, two- and three-soliton solutions are derived by virtue of the Hirota method and symbolic computation. Propagation and interaction of the solitons are illustrated graphically: coefficients β₁ and β₂ can affect the velocities and propagation directions of the solitary waves. The amplitude, velocity and shape of the one solitary wave remain invariant during the propagation, implying that the transport of the energy is stable in the upper-hybrid and magnetoacoustic waves, and the amplitude of the upper-hybrid wave is bigger than that of the magnetoacoustic wave. For the upper-hybrid and magnetoacoustic waves, head-on, overtaking and bound-state interactions between the two solitary waves are asymptotically depicted, respectively, indicating that the interaction between the two solitary waves is elastic. Elastic interaction between the bound-state soliton and a single one soliton is also displayed, and the interaction among the three solitary waves is likewise elastic.
How Well Will MODIS Measure Top of Atmosphere Aerosol Direct Radiative Forcing?
NASA Technical Reports Server (NTRS)
Remer, Lorraine A.; Kaufman, Yoram J.; Levin, Zev; Ghan, Stephen; Einaudi, Franco (Technical Monitor)
2000-01-01
The new generation of satellite sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS) will be able to detect and characterize global aerosols with an unprecedented accuracy. The question remains whether this accuracy will be sufficient to narrow the uncertainties in our estimates of aerosol radiative forcing at the top of the atmosphere. Satellite remote sensing detects aerosol optical thickness with the least amount of relative error when aerosol loading is high. Satellites are less effective when aerosol loading is low. We use the monthly mean results of two global aerosol transport models to simulate the spatial distribution of smoke aerosol in the Southern Hemisphere during the tropical biomass burning season. This spatial distribution allows us to determine that 87-94% of the smoke aerosol forcing at the top of the atmosphere occurs in grid squares with sufficient signal-to-noise ratio to be detectable from space. The uncertainty in quantifying the smoke aerosol forcing in the Southern Hemisphere depends on the uncertainty introduced by errors in estimating the background aerosol, errors resulting from uncertainties in surface properties, and errors resulting from uncertainties in assumptions about aerosol properties. These three errors combine to give overall uncertainties of 1.5 to 2.2 W m⁻² (21-56%) in determining the Southern Hemisphere smoke aerosol forcing at the top of the atmosphere. The range of values depends on which estimate of MODIS retrieval uncertainty is used, either the theoretical calculation (upper bound) or the empirical estimate (lower bound). Strategies that use the satellite data to derive flux directly or use the data in conjunction with ground-based remote sensing and aerosol transport models can reduce these uncertainties.
Ni, Hong-Gang; Cao, Shan-Ping; Chang, Wen-Jing; Zeng, Hui
2011-07-01
This study examined polybrominated diphenyl ethers (PBDEs) in central air conditioner filter (CACF) dust from a new office building in Shenzhen, China. Human exposure to PBDEs via dust inhalation and ingestion was also estimated. PBDE levels in CACF dust were lower than those in other countries and regions. Approximately 0.671 pg/kg bw/day of PM₂.₅ (particulate matter up to 2.5 μm in size)-bound Σ₁₅PBDEs can be inhaled deep into the lungs, and 4.123 pg/kg bw/day of PM₁₀ (particulate matter up to 10 μm in size)-bound Σ₁₅PBDEs tend to be deposited in the upper parts of the respiratory system. The average total intake of Σ₁₅PBDEs via dust inhalation and ingestion for adults reached ∼141 pg/kg bw/day in this building. This value was far below the reference dose (RfD) recommended by the United States Environmental Protection Agency. Human exposure to PBDEs via dust inhalation and ingestion in the new building is less than in old ones. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Parisi, M. G.; Brunini, A.
1996-01-01
By means of a simplified dynamical model, we have computed the eccentricity change in the orbit of each giant planet caused by a single, large impact at the end of the accretion process. In order to set an upper bound on this eccentricity change, we have considered the giant planets' present eccentricities as primordial ones. By means of this procedure, we were able to obtain an implicit relation for the impactor masses and maximum velocities. We have estimated by this method the maximum allowed mass to impact Jupiter to be approximately 1.136 × 10⁻¹, and in the case of Neptune approximately 3.99 × 10⁻² (expressed in units of each planet's final mass). Due to the similar present eccentricities of Saturn, Uranus and Jupiter, the constraints on the masses and velocities of the bodies to impact them (in units of each planet's final mass and velocity, respectively) are almost the same for the three planets. These results are in good agreement with those obtained by Lissauer and Safronov. These bounds might be used to derive the mass distribution of planetesimals in the early solar system.
Graviton mass bounds from an analysis of bright star trajectories at the Galactic Center
NASA Astrophysics Data System (ADS)
Zakharov, Alexander; Jovanović, Predrag; Borka, Dusko; Jovanović, Vesna Borka
2017-03-01
In February 2016 the LIGO & VIRGO collaboration reported the discovery of gravitational waves from merging black holes, thereby confirming GR predictions about the existence of black holes and gravitational waves in the strong-field limit. Moreover, in their papers the joint LIGO & VIRGO team presented an upper limit on the graviton mass of m_g < 1.2 × 10⁻²² eV (Abbott et al. 2016). The authors thus concluded that their observational data do not show any violation of classical general relativity. We show that an analysis of bright-star trajectories could constrain the graviton mass with an accuracy comparable to that reached with gravitational-wave interferometers, and the estimate is consistent with the one obtained by the LIGO & VIRGO collaboration. This analysis gives an opportunity to treat observations of bright stars near the Galactic Center as a useful tool to obtain constraints on the fundamental gravity law, such as modifications of the Newtonian gravity law in the weak-field approximation. In that way, based on a potential reconstruction at the Galactic Center, we obtain bounds on the graviton mass.
Neutron Electric Dipole Moment and Tensor Charges from Lattice QCD
Bhattacharya, Tanmoy; Cirigliano, Vincenzo; Gupta, Rajan; ...
2015-11-17
In this paper, we present lattice QCD results on the neutron tensor charges including, for the first time, a simultaneous extrapolation in the lattice spacing, volume, and light quark masses to the physical point in the continuum limit. We find that the “disconnected” contribution is smaller than the statistical error in the “connected” contribution. Our estimates in the modified minimal subtraction scheme at 2 GeV, including all systematics, are g_T^{d−u} = 1.020(76), g_T^{d} = 0.774(66), g_T^{u} = −0.233(28), and g_T^{s} = 0.008(9). The flavor-diagonal charges determine the size of the neutron electric dipole moment (EDM) induced by quark EDMs that are generated in many new scenarios of CP violation beyond the standard model. Finally, we use our results to derive model-independent bounds on the EDMs of light quarks and update the EDM phenomenology in split supersymmetry with gaugino mass unification, finding a stringent upper bound of d_n < 4 × 10⁻²⁸ e·cm for the neutron EDM in this scenario.
Sarkodie, Samuel Asumadu
2018-05-24
This study examined the drivers of environmental degradation and pollution in 17 African countries from 1971 to 2013. The empirical analysis employed the Westerlund error-correction model and panel cointegration tests with 1000 bootstrapping samples, a U-shape test, fixed- and random-effect estimators, and a panel causality test. The investigation of the nexus between environmental pollution and economic growth in Africa confirms the validity of the EKC hypothesis at a turning point of US$ 5702 GDP per capita. The nexus between environmental degradation and economic growth, however, reveals a U shape with a lower-bound GDP of US$ 101/capita and an upper-bound GDP of US$ 8050/capita, at a turning point of US$ 7958 GDP per capita, confirming the scale-effect hypothesis. The empirical findings revealed that energy consumption, food production, economic growth, permanent crop, agricultural land, birth rate, and fertility rate play a major role in environmental degradation and pollution in Africa, thus supporting the global indicators for achieving the sustainable development goals by 2030.
Optimal Attack Strategies Subject to Detection Constraints Against Cyber-Physical Systems
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2017-03-31
This paper studies an attacker against a cyber-physical system (CPS) whose goal is to move the state of the CPS to a target state while ensuring that his or her probability of being detected does not exceed a given bound. The attacker’s probability of being detected is related to the nonnegative bias induced by his or her attack on the CPS’s detection statistic. We formulate a linear quadratic cost function that captures the attacker’s control goal and establish constraints on the induced bias that reflect the attacker’s detection-avoidance objectives. When the attacker is constrained to be detected at the false-alarm rate of the detector, we show that the optimal attack strategy reduces to a linear feedback of the attacker’s state estimate. In the case that the attacker’s bias is upper bounded by a positive constant, we provide two algorithms – an optimal algorithm and a sub-optimal, less computationally intensive algorithm – to find suitable attack sequences. Lastly, we illustrate our attack strategies in numerical examples based on a remotely-controlled helicopter under attack.
On the Coriolis effect in acoustic waveguides.
Wegert, Henry; Reindl, Leonard M; Ruile, Werner; Mayer, Andreas P
2012-05-01
Rotation of an elastic medium gives rise to a shift of the frequency of its acoustic modes, i.e., the time-periodic vibrations that exist in it. This frequency shift is investigated by applying perturbation theory in the regime of small ratios of the rotation velocity to the frequency of the acoustic mode. In an expansion of the relative frequency shift in powers of this ratio, upper bounds are derived for the first-order and second-order terms. The derivation of the theoretical upper bounds on the first-order term is presented for linear vibration modes as well as for stable nonlinear vibrations with periodic time dependence that can be represented by a Fourier series.
The upper bounds of reduced axial and shear moduli in cross-ply laminates with matrix cracks
NASA Technical Reports Server (NTRS)
Lee, Jong-Won; Allen, D. H.; Harris, C. E.
1991-01-01
The present study proposes a mathematical model utilizing the internal state variable concept for predicting the upper bounds of the reduced axial and shear stiffnesses in cross-ply laminates with matrix cracks. The displacement components at the matrix crack surfaces are explicitly expressed in terms of the observable axial and shear strains and the undamaged material properties. The reduced axial and shear stiffnesses are predicted for glass/epoxy and graphite/epoxy laminates. Comparison of the model with other theoretical and experimental studies is also presented to confirm direct applicability of the model to angle-ply laminates with matrix cracks subjected to general in-plane loading.
SURE reliability analysis: Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; White, Allan L.
1988-01-01
The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
NASA Astrophysics Data System (ADS)
Traforti, Anna; Bistacchi, Andrea; Massironi, Matteo; Zampieri, Dario; Di Toro, Giulio
2017-04-01
Intracontinental deformation within the upper crust is accommodated by nucleation of new faults (generally satisfying Anderson's theory of faulting) or by brittle reactivation of pre-existing anisotropies when certain conditions are met. How prone to reactivation an existing mechanical anisotropy or discontinuity is depends on its mechanical strength compared to that of the intact rock and on its orientation with respect to the regional stress field. In this study, we consider how different rock types (i.e. anisotropic vs. isotropic) are deformed during a well-constrained brittle polyphase tectonic evolution to derive the mechanical strength of pre-existing anisotropies and discontinuities (i.e. metamorphic foliations and inherited faults/fractures). The analysis has been carried out in the Eastern Sierras Pampeanas of Central Argentina. These are a series of basement ranges of the Andean foreland, which show compelling evidence of a long-lasting brittle deformation history from the Early Carboniferous to Present time, with three main deformational events (Early Triassic to Early Jurassic NE-SW extension, Early Cretaceous NW-SE extension and Miocene to Present ENE-WNW compression). The study area includes both isotropic granitic bodies and anisotropic phyllosilicate-bearing rocks (gneisses and phyllites). In this environment, each deformation phase causes significant reactivation of the inherited structures and rheological anisotropies, or alternatively formation of new Andersonian faults, thus providing a multidirectional probing of the mechanical properties of these rocks. A meso- and micro-structural analysis of brittle reactivation of metamorphic foliation or inherited faults/fractures revealed that different rock types present remarkable differences in the style of deformation (i.e., phyllite foliation is reactivated during the last compressional phase and cut by newly-formed Andersonian faults/fractures during the first two extensional regimes; instead, gneiss foliation is pervasively reactivated during all the tectonic phases). Considering these observations, we applied a Slip Tendency analysis to estimate the upper and lower bounds on the friction coefficient for slip along the foliations (μs) and along pre-existing faults/fractures (μf). If a hypothetical condition with simultaneous failure on the inherited mechanical discontinuity (foliation or pre-existing fault/fracture) and on new Andersonian faults is assumed, the ratio between μs or μf and μ0 (the average friction coefficient for intact isotropic rocks) can be calculated as μs (or μf) = NTs · μ0, where NTs represents the normalized slip tendency of the analyzed discontinuity. When only reactivation of foliations/faults/fractures is observed (i.e. no newly-formed Andersonian faults are recognised), an upper bound on μs and μf can be estimated as μs (or μf) < NTs · μ0. By contrast, a lower bound on μs and μf can be obtained as μs (or μf) > NTs · μ0 when the mechanical anisotropies are not reactivated and new Andersonian faults nucleate. Applying the above analysis to multiple deformation phases and rock types, we were able to approximately estimate μs < 0.4 (gneisses), 0.1 < μs < 0.2 (phyllites), μf ≈ 0.4 (phyllites) and μf ≈ 0.3 (gneisses).
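As a concrete illustration of the slip-tendency relation quoted above, the following minimal sketch encodes the three cases (equality, upper bound, lower bound) for the friction coefficient of a discontinuity; the NTs and μ0 values in the example are hypothetical placeholders, not values from the study:

    # Minimal sketch of the slip-tendency bookkeeping described above.
    # nts (normalized slip tendency) and mu0 (friction of intact isotropic rock)
    # are illustrative placeholders, not values from the study.

    def friction_bound(nts, mu0, observation):
        """Return an estimate or bound on the discontinuity friction coefficient.

        observation: 'both'        -> reactivation and new Andersonian faults (equality)
                     'reactivated' -> only reactivation observed (upper bound)
                     'new_faults'  -> only new Andersonian faults (lower bound)
        """
        product = nts * mu0
        if observation == 'both':
            return ('mu =', product)
        if observation == 'reactivated':
            return ('mu <', product)
        if observation == 'new_faults':
            return ('mu >', product)
        raise ValueError('unknown observation type')

    # Example: foliation with NTs = 0.55 under mu0 = 0.6 (hypothetical numbers)
    print(friction_bound(0.55, 0.6, 'reactivated'))   # ('mu <', 0.33)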
NASA Astrophysics Data System (ADS)
Mahmood, A.; Hossain, F.
2016-12-01
Low-lying deltas in Asia are usually densely populated and located in developing countries at the downstream end of major rivers. Extensive dam construction by upstream countries has caused water scarcity in large portions of these low-lying deltas. Most inhabitants depend on shallow tube wells for drinking water, which tend to suffer from water quality issues (e.g. arsenic contamination). In addition, people are also infected by water-borne diseases such as cholera and typhoid due to the lack of safe drinking water. Developing a centralized piped water supply network is often not feasible in rural regions. Owing to its social acceptability, environmental friendliness, and lower capital and maintenance costs, rainwater harvesting can be the most sustainable option for supplying safe drinking water in rural areas. In this study, we first estimate the monthly rainfall variability using a long precipitation climatology from satellite precipitation data. The upper and lower bounds of monthly harvestable rainwater were estimated for each satellite precipitation grid. Taking the lower bound of monthly harvestable rainwater as input, we use a quantitative water management concept to determine the percentage of the year during which potable water demand can be fulfilled. The analysis indicates that a 6 m³ reservoir tank can fulfill the potable water demand of a six-person family throughout the year in almost all parts of this region.
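A minimal sketch of the kind of monthly tank water-balance check implied by this analysis is given below; the roof area, runoff coefficient, per-capita demand and the monthly lower-bound rainfall series are hypothetical placeholders, not values from the study:

    # Illustrative monthly water-balance check for a rainwater tank, in the
    # spirit of the analysis described above. Roof area, runoff coefficient,
    # per-capita demand and the monthly lower-bound rainfall are assumed inputs.

    TANK_M3 = 6.0                 # reservoir tank volume (m^3)
    ROOF_M2 = 20.0                # catchment (roof) area (m^2), assumed
    RUNOFF = 0.8                  # runoff coefficient, assumed
    DEMAND_M3 = 6 * 0.003 * 30    # 6 people x 3 L/day x ~30 days, assumed potable demand

    # lower bound on monthly rainfall (m) for one grid cell, hypothetical values
    rain_lower_m = [0.01, 0.02, 0.05, 0.12, 0.25, 0.35,
                    0.40, 0.35, 0.25, 0.15, 0.05, 0.01]

    storage, months_met = 0.0, 0
    for rain in rain_lower_m:
        storage = min(TANK_M3, storage + rain * ROOF_M2 * RUNOFF)
        if storage >= DEMAND_M3:
            storage -= DEMAND_M3
            months_met += 1

    print(f"potable demand met in {months_met} of 12 months")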
Weakly dynamic dark energy via metric-scalar couplings with torsion
NASA Astrophysics Data System (ADS)
Sur, Sourav; Singh Bhatia, Arshdeep
2017-07-01
We study the dynamical aspects of dark energy in the context of a non-minimally coupled scalar field with curvature and torsion. Whereas the scalar field acts as the source of the trace mode of torsion, a suitable constraint on the torsion pseudo-trace provides a mass term for the scalar field in the effective action. In the equivalent scalar-tensor framework, we find explicit cosmological solutions representing dark energy in both Einstein and Jordan frames. We demand the dynamical evolution of the dark energy to be weak enough, so that the present-day values of the cosmological parameters could be estimated keeping them within the confidence limits set for the standard LCDM model from recent observations. For such estimates, we examine the variations of the effective matter density and the dark energy equation of state parameters over different redshift ranges. In spite of being weakly dynamic, the dark energy component differs significantly from the cosmological constant, both in characteristics and features; e.g. it interacts with the cosmological (dust) fluid in the Einstein frame, and crosses the phantom barrier in the Jordan frame. We also obtain the upper bounds on the torsion mode parameters and the lower bound on the effective Brans-Dicke parameter. The latter turns out to be fairly large, and in agreement with the local gravity constraints, which therefore come in support of our analysis.
Improved quantum backtracking algorithms using effective resistance estimates
NASA Astrophysics Data System (ADS)
Jarret, Michael; Wan, Kianna
2018-02-01
We investigate quantum backtracking algorithms of the type introduced by Montanaro (Montanaro, arXiv:1509.02374). These algorithms explore trees of unknown structure and in certain settings exponentially outperform their classical counterparts. Some of the previous work focused on obtaining a quantum advantage for trees in which a unique marked vertex is promised to exist. We remove this restriction by recharacterizing the problem in terms of the effective resistance of the search space. In this paper, we present a generalization of one of Montanaro's algorithms to trees containing k marked vertices, where k is not necessarily known a priori. Our approach involves using amplitude estimation to determine a near-optimal weighting of a diffusion operator, which can then be applied to prepare a superposition state with support only on marked vertices and ancestors thereof. By repeatedly sampling this state and updating the input vertex, a marked vertex is reached in a logarithmic number of steps. The algorithm thereby achieves the conjectured bound of Õ(√(T Rmax)) for finding a single marked vertex and Õ(k√(T Rmax)) for finding all k marked vertices, where T is an upper bound on the tree size and Rmax is the maximum effective resistance encountered by the algorithm. This constitutes a speedup over Montanaro's original procedure in both the case of finding one and the case of finding multiple marked vertices in an arbitrary tree.
Constraining the generalized uncertainty principle with the atomic weak-equivalence-principle test
NASA Astrophysics Data System (ADS)
Gao, Dongfeng; Wang, Jin; Zhan, Mingsheng
2017-04-01
Various models of quantum gravity imply the Planck-scale modifications of Heisenberg's uncertainty principle into a so-called generalized uncertainty principle (GUP). The GUP effects on high-energy physics, cosmology, and astrophysics have been extensively studied. Here, we focus on the weak-equivalence-principle (WEP) violation induced by the GUP. Results from the WEP test with the 85Rb-87Rb dual-species atom interferometer are used to set upper bounds on parameters in two GUP proposals. A 10^45-level bound on the Kempf-Mangano-Mann proposal and a 10^27-level bound on Maggiore's proposal, which are consistent with bounds from other experiments, are obtained. All these bounds have huge room for improvement in the future.
Trinker, Horst
2011-10-28
We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.
Validation of the SURE Program, phase 1
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1987-01-01
Presented are the results of the first phase in the validation of the SURE (Semi-Markov Unreliability Range Evaluator) program. The SURE program gives lower and upper bounds on the death-state probabilities of a semi-Markov model. With these bounds, the reliability of a semi-Markov model of a fault-tolerant computer system can be analyzed. For the first phase in the validation, fifteen semi-Markov models were solved analytically for the exact death-state probabilities and these solutions compared to the corresponding bounds given by SURE. In every case, the SURE bounds covered the exact solution. The bounds, however, had a tendency to separate in cases where the recovery rate was slow or the fault arrival rate was fast.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upper-bounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upper-bounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
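For reference, the ensemble-average random coding bound discussed here is usually written, in Gallager's standard notation (block length N, rate R, input distribution Q, channel transition probabilities P, none of which are defined in this abstract), as:

    \bar{P}_e \le \exp\bigl[-N\,E_r(R)\bigr], \qquad
    E_r(R) = \max_{0 \le \rho \le 1}\bigl[E_0(\rho) - \rho R\bigr], \qquad
    E_0(\rho) = -\ln \sum_{j}\Bigl[\sum_{k} Q(k)\,P(j \mid k)^{1/(1+\rho)}\Bigr]^{1+\rho}.

For rates above the critical rate this exponent coincides with the true error exponent, which is the regime the abstract refers to.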
Scalable L-infinite coding of meshes.
Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter
2010-01-01
The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in the L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, it enables a fast real-time implementation of the rate allocation, and it preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
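To make the distinction concrete, the short sketch below (an illustration only, not the MESHGRID codec itself) computes the L-infinite distortion, i.e. the maximum per-vertex position error that such a coder upper-bounds, alongside the mean-square distortion targeted by L-2-oriented coding; the vertex arrays and the bound value are made-up test data:

    # Sketch: L-infinite vs mean-square distortion between original and decoded
    # vertex positions (arrays of shape [n_vertices, 3]). Illustrative only;
    # not the MESHGRID codec itself.
    import numpy as np

    rng = np.random.default_rng(0)
    original = rng.random((1000, 3))
    decoded = original + rng.normal(scale=1e-3, size=original.shape)  # toy decoding error

    per_vertex_err = np.linalg.norm(original - decoded, axis=1)
    l_inf = per_vertex_err.max()          # quantity an L-infinite codec upper-bounds
    mse = np.mean(per_vertex_err ** 2)    # quantity an L-2-oriented codec targets

    upper_bound = 5e-3                    # assumed decoder-side guarantee
    print(f"L-infinite distortion: {l_inf:.4e} (bound satisfied: {l_inf <= upper_bound})")
    print(f"mean-square distortion: {mse:.4e}")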
A Reduced Basis Method with Exact-Solution Certificates for Symmetric Coercive Equations
2013-11-06
We in particular introduce a reduced basis method that provides rigorous upper and lower bounds of the energy associated with the infinite-dimensional weak solution of parametrized symmetric coercive partial differential equations with piecewise... The method, which builds bounds with respect to the infinite-dimensional weak solution, aims to entirely remove the issue of the "truth" within the certified reduced basis framework.
Paramagnetic or diamagnetic persistent currents? A topological point of view
NASA Astrophysics Data System (ADS)
Waintal, Xavier
2009-03-01
A persistent current flows at low temperatures in small conducting rings when they are threaded by a magnetic flux. I will discuss the sign of this persistent current (diamagnetic or paramagnetic response) in the special case of N electrons in a one-dimensional ring [1]. One dimension is very special in the sense that the sign of the persistent current is entirely controlled by the topology of the system. I will establish lower bounds for the free energy in the presence of arbitrary electron-electron interactions and external potentials. These bounds are the counterparts of upper bounds derived by Leggett using another topological argument. Rings with odd (even) numbers of polarized electrons are always diamagnetic (paramagnetic). The situation is more interesting with unpolarized electrons, where Leggett's upper bound breaks down: rings with N=4n exhibit either paramagnetic behavior or a superconductor-like current-phase relation. The topological argument provides a rigorous justification for the phenomenological Hückel rule, which states that cyclic molecules with 4n + 2 electrons, like benzene, are aromatic while those with 4n electrons are not. [1] Xavier Waintal, Geneviève Fleury, Kyryl Kazymyrenko, Manuel Houzet, Peter Schmitteckert, and Dietmar Weinmann, Phys. Rev. Lett. 101, 106804 (2008).
Curvature Continuous and Bounded Path Planning for Fixed-Wing UAVs.
Wang, Xiaoliang; Jiang, Peng; Li, Deshi; Sun, Tao
2017-09-19
Unmanned Aerial Vehicles (UAVs) play an important role in applications such as data collection and target reconnaissance. An accurate and optimal path can effectively increase the mission success rate in the case of small UAVs. Although path planning for UAVs is similar to that for traditional mobile robots, the special kinematic characteristics of UAVs (such as their minimum turning radius) have not been taken into account in previous studies. In this paper, we propose a locally-adjustable, continuous-curvature, bounded path-planning algorithm for fixed-wing UAVs. To deal with the curvature discontinuity problem, an optimal interpolation algorithm and a key-point shift algorithm are proposed based on the derivation of a curvature continuity condition. To meet the upper bound for curvature and to render the curvature extrema controllable, a local replanning scheme is designed by combining arcs and Bezier curves with monotonic curvature. In particular, a path transition mechanism is built for the replanning phase using minimum curvature circles for a planning philosophy. Numerical results demonstrate that the analytical planning algorithm can effectively generate continuous-curvature paths, while satisfying the curvature upper bound constraint and allowing UAVs to pass through all predefined waypoints in the desired mission region.
Bounds on area and charge for marginally trapped surfaces with a cosmological constant
NASA Astrophysics Data System (ADS)
Simon, Walter
2012-03-01
We sharpen the known inequalities AΛ ⩽ 4π(1 - g) (Hayward et al 1994 Phys. Rev. D 49 5080; Woolgar 1999 Class. Quantum Grav. 16 3005) and A ⩾ 4πQ² (Dain et al 2012 Class. Quantum Grav. 29 035013) between the area A and the electric charge Q of a stable marginally outer-trapped surface (MOTS) of genus g in the presence of a cosmological constant Λ. In particular, instead of requiring stability we include the principal eigenvalue λ of the stability operator. For Λ* = Λ + λ > 0, we obtain a lower and an upper bound for Λ*A in terms of Λ*Q², as well as the upper bound Q ⩽ 1/(2√Λ*) for the charge, which reduces to Q ⩽ 1/(2√Λ) in the stable case λ ⩾ 0. For Λ* < 0, there only remains a lower bound on A. In the spherically symmetric, static, stable case, one of our area inequalities is saturated iff the surface gravity vanishes. We also discuss implications of our inequalities for ‘jumps’ and mergers of charged MOTS.
Guatteri, Mariagiovanna; Spudich, P.; Beroza, G.C.
2001-01-01
We consider the applicability of laboratory-derived rate- and state-variable friction laws to the dynamic rupture of the 1995 Kobe earthquake. We analyze the shear stress and slip evolution of Ide and Takeo's [1997] dislocation model, fitting the inferred stress change time histories by calculating the dynamic load and the instantaneous friction at a series of points within the rupture area. For points exhibiting a fast-weakening behavior, the Dieterich-Ruina friction law, with values of dc = 0.01-0.05 m for critical slip, fits the stress change time series well. This range of dc is 10-20 times smaller than the slip distance over which the stress is released, Dc, which previous studies have equated with the slip-weakening distance. The limited resolution and low-pass character of the strong motion inversion degrades the resolution of the frictional parameters and suggests that the actual dc is less than this value. Stress time series at points characterized by a slow-weakening behavior are well fitted by the Dieterich-Ruina friction law with values of dc ≈ 0.01-0.05 m. The apparent fracture energy Gc can be estimated from waveform inversions more stably than the other friction parameters. We obtain a Gc = 1.5 × 10^6 J m^-2 for the 1995 Kobe earthquake, in agreement with estimates for previous earthquakes. From this estimate and a plausible upper bound for the local rock strength we infer a lower bound for Dc of about 0.008 m. Copyright 2001 by the American Geophysical Union.
The leaching of atmospherically deposited nitrogen from forested watersheds may acidify lakes and streams. The Nitrogen Bounding Study evaluates the potential range of such adverse effects. The study estimates bounds on changes in regional-scale surface water acidification that might...
Perturbative unitarity constraints on the NMSSM Higgs Sector
Betre, Kassahun; El Hedri, Sonia; Walker, Devin G. E.
2017-11-11
We place perturbative unitarity constraints on both the dimensionful and dimensionless parameters in the Next-to-Minimal Supersymmetric Standard Model (NMSSM) Higgs sector. These constraints, plus the requirement that the singlino and/or Higgsino constitutes at least part of the observed dark matter relic abundance, generate upper bounds on the Higgs, neutralino and chargino mass spectrum. Requiring higher-order corrections to be no more than 41% of the tree-level value, we obtain an upper bound of 20 TeV for the heavy Higgses and 12 TeV for the charginos and neutralinos outside defined fine-tuned regions. If the corrections are no more than 20% of the tree-level value, the bounds are 7 TeV for the heavy Higgses and 5 TeV for the charginos and neutralinos. Finally, using the NMSSM as a template, we describe a method which replaces naturalness arguments with more rigorous perturbative unitarity arguments to get a better understanding of when new physics will appear.
An upper bound on the particle-laden dependency of shear stresses at solid-fluid interfaces
NASA Astrophysics Data System (ADS)
Zohdi, T. I.
2018-03-01
In modern advanced manufacturing processes, such as three-dimensional printing of electronics, fine-scale particles are added to a base fluid yielding a modified fluid. For example, in three-dimensional printing, particle-functionalized inks are created by adding particles to freely flowing solvents forming a mixture, which is then deposited onto a surface, which upon curing yields desirable solid properties, such as thermal conductivity, electrical permittivity and magnetic permeability. However, wear at solid-fluid interfaces within the machinery walls that deliver such particle-laden fluids is typically attributed to the fluid-induced shear stresses, which increase with the volume fraction of added particles. The objective of this work is to develop a rigorous strict upper bound for the tolerable volume fraction of particles that can be added, while remaining below a given stress threshold at a fluid-solid interface. To illustrate the bound's utility, the expression is applied to a series of classical flow regimes.
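The abstract does not reproduce the bound itself, so the sketch below only illustrates the kind of calculation involved: it assumes the classical dilute-suspension Einstein viscosity relation (an assumption, not the paper's model) and solves for the largest particle volume fraction that keeps an assumed wall shear stress below a threshold; all numbers are placeholders:

    # Illustrative estimate (not the paper's bound): take the dilute-suspension
    # Einstein relation eta_eff = eta0 * (1 + 2.5*phi) and a wall shear stress
    # tau = eta_eff * shear_rate, then solve for the largest particle volume
    # fraction phi keeping tau below a given threshold. All numbers are assumed.

    eta0 = 1.0e-3        # base-fluid viscosity (Pa s), assumed
    shear_rate = 5.0e4   # wall shear rate (1/s), assumed
    tau_max = 80.0       # tolerable wall shear stress (Pa), assumed

    # tau(phi) = eta0*(1 + 2.5*phi)*shear_rate <= tau_max  =>  phi <= phi_max
    phi_max = (tau_max / (eta0 * shear_rate) - 1.0) / 2.5
    print(f"maximum tolerable volume fraction (dilute-limit estimate): {phi_max:.3f}")

Note that the Einstein relation holds only for dilute suspensions, so this is a rough stand-in for the rigorous bound developed in the paper.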
Energy and criticality in random Boolean networks
NASA Astrophysics Data System (ADS)
Andrecut, M.; Kauffman, S. A.
2008-06-01
The central issue in research on the Random Boolean Networks (RBNs) model is the characterization of the critical transition between ordered and chaotic phases. Here, we discuss an approach based on the ‘energy’ associated with the unsatisfiability of the Boolean functions in the RBNs model, which provides an upper-bound estimate for the energy used in computation. We show that in the ordered phase the RBNs are in a ‘dissipative’ regime, performing mostly ‘downhill’ moves on the ‘energy’ landscape. We also show that in the disordered phase the RBNs have to ‘hill-climb’ on the ‘energy’ landscape in order to perform computation. The analytical results, obtained using Derrida's approximation method, are in complete agreement with numerical simulations.
Dominating Scale-Free Networks Using Generalized Probabilistic Methods
Molnár, F.; Derzsy, N.; Czabarka, É.; Székely, L.; Szymanski, B. K.; Korniss, G.
2014-01-01
We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds of dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. In addition, we also find the precise limit where selecting high-degree nodes exclusively becomes inefficient for network domination. We validate our results on several real-world networks, and provide highly accurate analytical estimates for our methods. PMID:25200937
Wiener-Hammerstein system identification - an evolutionary approach
NASA Astrophysics Data System (ADS)
Naitali, Abdessamad; Giri, Fouad
2016-01-01
The problem of identifying parametric Wiener-Hammerstein (WH) systems is addressed within the evolutionary optimisation context. Specifically, a hybrid culture identification method is developed that involves model structure adaptation using genetic recombination and model parameter learning using particle swarm optimisation. The method enjoys three interesting features: (1) the risk of premature convergence of model parameter estimates to local optima is significantly reduced, due to the constantly maintained diversity of model candidates; (2) no prior knowledge is needed except for upper bounds on the system structure indices; (3) the method is fully autonomous as no interaction is needed with the user during the optimum search process. The performance of the proposed method is illustrated and compared with that of alternative methods using a well-established WH benchmark.
Real time estimation and prediction of ship motions using Kalman filtering techniques
NASA Technical Reports Server (NTRS)
Triantafyllou, M. A.; Bodson, M.; Athans, M.
1982-01-01
A scheme for landing V/STOL aircraft on rolling ships was sought using computerized simulations. The equations of motion as derived from hydrodynamics, their form, the physical mechanisms involved, and the general form of the approximation are discussed, as is the modeling of the sea. The derivation of the state-space equations for the DD-963 destroyer is described. Kalman filter studies are presented and the influence of the various parameters is assessed. The effect of various modeling parameters on the rms error is assessed and simplifying conclusions are drawn. An upper bound for the prediction time of about five seconds is established, with the exception of roll, which can be predicted up to ten seconds ahead.
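As a generic illustration of the prediction concept (not the hydrodynamic DD-963 model used in the report), the sketch below runs a one-dimensional constant-velocity Kalman filter on a toy motion signal and then propagates the state forward for a five-second open-loop prediction; the noise covariances and the signal are assumed values:

    # Minimal 1-D constant-velocity Kalman filter with k-step-ahead prediction.
    # All matrices and the toy "ship motion" signal are illustrative assumptions.
    import numpy as np

    dt = 0.25
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])              # position-only measurements
    Q = 1e-3 * np.eye(2)                    # process noise covariance, assumed
    R = np.array([[1e-2]])                  # measurement noise covariance, assumed

    x = np.zeros((2, 1))
    P = np.eye(2)

    def kf_step(x, P, z):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x, P

    def predict_ahead(x, k):
        # open-loop k-step prediction (20 steps of 0.25 s = 5 s ahead)
        return np.linalg.matrix_power(F, k) @ x

    t = np.arange(0, 20, dt)
    truth = np.sin(0.5 * t)                 # toy motion signal
    for zi in truth + 0.1 * np.random.default_rng(1).normal(size=t.size):
        x, P = kf_step(x, P, np.array([[zi]]))

    print("5-second-ahead position prediction:", float(predict_ahead(x, 20)[0, 0]))

In practice the prediction horizon is limited by how quickly the open-loop covariance grows, which is the mechanism behind the roughly five-second bound quoted above.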
Magnetostrophic balance in planetary dynamos - Predictions for Neptune's magnetosphere
NASA Technical Reports Server (NTRS)
Curtis, S. A.; Ness, N. F.
1986-01-01
With the purpose of estimating Neptune's magnetic field and its implications for nonthermal Neptune radio emissions, a new scaling law for planetary magnetic fields was developed in terms of externally observable parameters (the planet's mean density, radius, mass, rotation rate, and internal heat source luminosity). From a comparison of theory and observations by Voyager it was concluded that planetary dynamos are two-state systems with either zero intrinsic magnetic field (for planets with low internal heat source) or (for planets with the internal heat source sufficiently strong to drive convection) a magnetic field near the upper bound determined from magnetostrophic balance. It is noted that mass loading of the Neptune magnetosphere by Triton may play an important role in the generation of nonthermal radio emissions.
Optimal joint detection and estimation that maximizes ROC-type curves
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.
2017-01-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544
Quantum Dynamical Applications of Salem's Theorem
NASA Astrophysics Data System (ADS)
Damanik, David; Del Rio, Rafael
2009-07-01
We consider the survival probability of a state that evolves according to the Schrödinger dynamics generated by a self-adjoint operator H. We deduce from a classical result of Salem that upper bounds for the Hausdorff dimension of a set supporting the spectral measure associated with the initial state imply lower bounds on a subsequence of time scales for the survival probability. This general phenomenon is illustrated with applications to the Fibonacci operator and the critical almost Mathieu operator. In particular, this gives the first quantitative dynamical bound for the critical almost Mathieu operator.
VizieR Online Data Catalog: Tracers of the Milky Way mass (Bratek+, 2014)
NASA Astrophysics Data System (ADS)
Bratek, L.; Sikora, S.; Jalocha, J.; Kutschera, M.
2013-11-01
We model the phase-space distribution of the kinematic tracers using general, smooth distribution functions to derive a conservative lower bound on the total mass within ~150-200 kpc. By approximating the potential as Keplerian, the phase-space distribution can be simplified to that of a smooth distribution of energies and eccentricities. Our approach naturally allows for calculating moments of the distribution function, such as the radial profile of the orbital anisotropy. We systematically construct a family of phase-space functions with the resulting radial velocity dispersion overlapping with the one obtained using data on radial motions of distant kinematic tracers, while making no assumptions about the density of the tracers and the velocity anisotropy parameter β regarded as a function of the radial variable. While there is no apparent upper bound for the Milky Way mass, at least as long as only the radial motions are concerned, we find a sharp lower bound for the mass that is small. In particular, a mass value of 2.4 × 10^11 M⊙, obtained in the past for lower and intermediate radii, is still consistent with the dispersion profile at larger radii. Compared with much greater mass values in the literature, this result shows that determining the Milky Way mass is strongly model-dependent. We expect a similar reduction of mass estimates in models assuming more realistic mass profiles. (1 data file).
Natural gas fugitive emissions rates constrained by global atmospheric methane and ethane.
Schwietzke, Stefan; Griffin, W Michael; Matthews, H Scott; Bruhwiler, Lori M P
2014-07-15
The amount of methane emissions released by the natural gas (NG) industry is a critical and uncertain value for various industry and policy decisions, such as for determining the climate implications of using NG over coal. Previous studies have estimated fugitive emissions rates (FER)--the fraction of produced NG (mainly methane and ethane) escaped to the atmosphere--between 1 and 9%. Most of these studies rely on few and outdated measurements, and some may represent only temporal/regional NG industry snapshots. This study estimates NG industry representative FER using global atmospheric methane and ethane measurements over three decades, and literature ranges of (i) tracer gas atmospheric lifetimes, (ii) non-NG source estimates, and (iii) fossil fuel fugitive gas hydrocarbon compositions. The modeling suggests an upper bound global average FER of 5% during 2006-2011, and a most likely FER of 2-4% since 2000, trending downward. These results do not account for highly uncertain natural hydrocarbon seepage, which could lower the FER. Further emissions reductions by the NG industry may be needed to ensure climate benefits over coal during the next few decades.
NASA Astrophysics Data System (ADS)
Chin, Siu A.
2014-03-01
The sign problem in PIMC simulations of non-relativistic fermions increases in severity with the number of fermions and the number of beads (or time slices) of the simulation. A large number of beads is usually needed, because the conventional primitive propagator is only second-order and the usual thermodynamic energy estimator converges very slowly from below with the total imaginary time. The Hamiltonian energy estimator, while more complicated to evaluate, is a variational upper bound and converges much faster with the total imaginary time, thereby requiring fewer beads. This work shows that when the Hamiltonian estimator is used in conjunction with fourth-order propagators with optimizable parameters, the ground-state energies of 2D parabolic quantum dots with approximately 10 completely polarized electrons can be obtained with only 3-5 beads, before the onset of severe sign problems. This work was made possible by NPRP GRANT #5-674-1-114 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the author.
Solving the chemical master equation using sliding windows
2010-01-01
Background The chemical master equation (CME) is a system of ordinary differential equations that describes the evolution of a network of chemical reactions as a stochastic process. Its solution yields the probability density vector of the system at each point in time. Solving the CME numerically is in many cases computationally expensive or even infeasible as the number of reachable states can be very large or infinite. We introduce the sliding window method, which computes an approximate solution of the CME by performing a sequence of local analysis steps. In each step, only a manageable subset of states is considered, representing a "window" into the state space. In subsequent steps, the window follows the direction in which the probability mass moves, until the time period of interest has elapsed. We construct the window based on a deterministic approximation of the future behavior of the system by estimating upper and lower bounds on the populations of the chemical species. Results In order to show the effectiveness of our approach, we apply it to several examples previously described in the literature. The experimental results show that the proposed method speeds up the analysis considerably, compared to a global analysis, while still providing high accuracy. Conclusions The sliding window method is a novel approach to address the performance problems of numerical algorithms for the solution of the chemical master equation. The method efficiently approximates the probability distributions at the time points of interest for a variety of chemically reacting systems, including systems for which no upper bound on the population sizes of the chemical species is known a priori. PMID:20377904
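The toy sketch below illustrates only the basic windowing idea, namely solving the CME restricted to a bounded set of states, for a single-species birth-death process; it is not the authors' adaptive sliding-window algorithm, and the rates and window bounds are assumed:

    # Toy illustration of solving a chemical master equation on a truncated
    # "window" of states: a single-species birth-death process restricted to
    # populations [lo, hi]. This sketches the windowing idea only, not the
    # authors' adaptive algorithm; rates and bounds are assumed.
    import numpy as np
    from scipy.linalg import expm

    birth, death = 2.0, 0.1   # production and degradation rates, assumed
    lo, hi = 0, 60            # window bounds on the population, assumed
    n = hi - lo + 1

    A = np.zeros((n, n))      # CME generator restricted to the window
    for i, pop in enumerate(range(lo, hi + 1)):
        # birth: pop -> pop + 1 (mass leaks out if the target lies outside the window)
        A[i, i] -= birth
        if i + 1 < n:
            A[i + 1, i] += birth
        # death: pop -> pop - 1
        if pop > 0:
            A[i, i] -= death * pop
            if i - 1 >= 0:
                A[i - 1, i] += death * pop

    p0 = np.zeros(n)
    p0[0] = 1.0               # start with zero molecules
    p_t = expm(A * 10.0) @ p0 # probability vector over the window at t = 10

    print("probability mass retained by the window:", p_t.sum())
    print("most likely population at t = 10:", lo + int(np.argmax(p_t)))

The retained probability mass is close to 1 when the window is chosen wide enough, which is exactly the property the population bounds in the sliding-window method are meant to guarantee.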
The SURE reliability analysis program
NASA Technical Reports Server (NTRS)
Butler, R. W.
1986-01-01
The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
NASA Astrophysics Data System (ADS)
De Raedt, Hans; Michielsen, Kristel; Hess, Karl
2016-12-01
Using Einstein-Podolsky-Rosen-Bohm experiments as an example, we demonstrate that the combination of a digital computer and algorithms, as a metaphor for a perfect laboratory experiment, provides solutions to problems of the foundations of physics. Employing discrete-event simulation, we present a counterexample to John Bell's remarkable "proof" that any theory of physics, which is both Einstein-local and "realistic" (counterfactually definite), results in a strong upper bound to the correlations that are being measured in Einstein-Podolsky-Rosen-Bohm experiments. Our counterexample, which is free of the so-called detection-, coincidence-, memory-, and contextuality loophole, violates this upper bound and fully agrees with the predictions of quantum theory for Einstein-Podolsky-Rosen-Bohm experiments.
NASA Astrophysics Data System (ADS)
Shen, Yuxuan; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.
2018-07-01
In this paper, the recursive filtering problem is studied for a class of time-varying nonlinear systems with stochastic parameter matrices. The measurement transmission between the sensor and the filter is conducted through a fading channel characterized by the Rice fading model. An event-based transmission mechanism is adopted to decide whether the sensor measurement should be transmitted to the filter. A recursive filter is designed such that, in the simultaneous presence of the stochastic parameter matrices and fading channels, the filtering error covariance is guaranteed to have an upper bound, and such an upper bound is then minimized by appropriately choosing the filter gain matrix. Finally, a simulation example is presented to demonstrate the effectiveness of the proposed filtering scheme.
NASA Astrophysics Data System (ADS)
Soltani Bozchalooi, Iman; Liang, Ming
2018-04-01
A discussion paper entitled "On the distribution of the modulus of Gabor wavelet coefficients and the upper bound of the dimensionless smoothness index in the case of additive Gaussian noises: revisited" by Dong Wang, Qiang Zhou, Kwok-Leung Tsui has been brought to our attention recently. This discussion paper (hereafter called the Wang et al. paper) is based on arguments that are fundamentally incorrect and which we rebut within this commentary. However, as the flaws in the arguments proposed by Wang et al. are clear, we will keep this rebuttal as brief as possible.
On the estimation of the correlation dimension and its application to radar reflector discrimination
NASA Technical Reports Server (NTRS)
Barnett, Kevin D.
1993-01-01
Recently, system theorists have recognized that low-order systems of nonlinear differential equations can give rise to solutions which are neither periodic, constant, nor predictable in steady state, but which are nonetheless bounded and deterministic. This behavior, which was first described in the study of weather systems, has been termed 'chaotic.' Much study of chaotic systems has concentrated on analysis of the systems' phase-space attractors. It has been recognized that invariant measures of the attractor possess inherent information about the system. One such measure is the dimension of the attractor. The dimension of a chaotic attractor has been shown to be noninteger, leading to the term 'strange attractor'; the attractor is said to have a fractal structure. The correlation dimension has become one of the most popular measures of dimension. However, many problems have been identified in correlation dimension estimation from time sequences. The most common methods for obtaining the correlation dimension have been least-squares curve fitting to find the slope of the correlation integral and the Takens estimator. However, these estimates show unacceptable sensitivity to the upper limit on the distance chosen. Here, a new method is proposed which is shown to be rather insensitive to the upper limit and to perform in a very stable manner, at least in the absence of noise. The correlation dimension is also shown to be an effective discriminant in distinguishing between radar returns resulting from weather and those from the ground. The weather returns are shown to have a correlation dimension generally between 2.0 and 3.0, while ground returns have a correlation dimension exceeding 3.0.
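For orientation, the sketch below implements the standard least-squares correlation-sum approach that the report contrasts its new estimator with: embed the series, compute the correlation integral C(r) over a set of radii, and take the slope of log C(r) versus log r. The embedding dimension, lag, radius range and the logistic-map test series are all assumed choices, not those of the report:

    # Textbook-style Grassberger-Procaccia correlation-dimension estimate for a
    # delay-embedded time series. This is the standard least-squares slope
    # method, not the new upper-limit-insensitive estimator proposed above.
    import numpy as np

    def correlation_dimension(series, dim=3, lag=1, radii=None):
        # delay embedding
        n = len(series) - (dim - 1) * lag
        emb = np.column_stack([series[i * lag: i * lag + n] for i in range(dim)])
        # pairwise distances between embedded points
        d = np.sqrt(((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1))
        dists = d[np.triu_indices(n, k=1)]
        if radii is None:
            radii = np.logspace(np.log10(np.percentile(dists, 1)),
                                np.log10(np.percentile(dists, 50)), 12)
        c = np.array([(dists < r).mean() for r in radii])    # correlation integral C(r)
        slope, _ = np.polyfit(np.log(radii), np.log(c), 1)   # dimension = slope of log C vs log r
        return slope

    # toy chaotic-looking test series (logistic map)
    x = np.empty(1000); x[0] = 0.4
    for i in range(999):
        x[i + 1] = 3.99 * x[i] * (1 - x[i])
    print("estimated correlation dimension:", round(correlation_dimension(x), 2))

The sensitivity discussed in the abstract enters through the choice of the largest radius used in the fit, here fixed at the median pairwise distance purely for illustration.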
Quantum algorithms for Gibbs sampling and hitting-time estimation
Chowdhury, Anirban Narayan; Somma, Rolando D.
2017-02-01
In this paper, we present quantum algorithms for solving two problems regarding stochastic processes. The first algorithm prepares the thermal Gibbs state of a quantum system and runs in time almost linear in √(Nβ/Z) and polynomial in log(1/ϵ), where N is the Hilbert space dimension, β is the inverse temperature, Z is the partition function, and ϵ is the desired precision of the output state. Our quantum algorithm exponentially improves the dependence on 1/ϵ and quadratically improves the dependence on β of known quantum algorithms for this problem. The second algorithm estimates the hitting time of a Markov chain. For a sparse stochastic matrix P, it runs in time almost linear in 1/(ϵΔ^(3/2)), where ϵ is the absolute precision in the estimation and Δ is a parameter determined by P, and whose inverse is an upper bound of the hitting time. Our quantum algorithm quadratically improves the dependence on 1/ϵ and 1/Δ of the analog classical algorithm for hitting-time estimation. Finally, both algorithms use tools recently developed in the context of Hamiltonian simulation, spectral gap amplification, and solving linear systems of equations.
On the behavior of the leading eigenvalue of Eigen's evolutionary matrices.
Semenov, Yuri S; Bratus, Alexander S; Novozhilov, Artem S
2014-12-01
We study general properties of the leading eigenvalue w̄(q) of Eigen's evolutionary matrices depending on the replication fidelity q. This is a linear algebra problem that has various applications in theoretical biology, including such diverse fields as the origin of life, evolution of cancer progression, and virus evolution. We present the exact expressions for w̄(q), w̄'(q), and w̄''(q) at q = 0, 0.5, 1 and prove that the absolute minimum of w̄(q), which always exists, belongs to the interval (0, 0.5]. For the specific case of a single-peaked landscape we also find lower and upper bounds on w̄(q), which are used to estimate the critical mutation rate, after which the distribution of the types of individuals in the population becomes almost uniform. This estimate is used as a starting point to conjecture another estimate, valid for any fitness landscape, which is checked by numerical calculations. The last estimate stresses that the inverse dependence of the critical mutation rate on the sequence length is not generally valid. Copyright © 2014 Elsevier Inc. All rights reserved.
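The quantity studied above can be reproduced numerically for a toy case. The sketch below builds Eigen's matrix W = Q·diag(f) for binary sequences of a small length L with a single-peaked fitness landscape and per-site copying fidelity q, and returns its leading eigenvalue w̄(q); the parameters are illustrative and the code does not reproduce the paper's analytical bounds:

    # Numerical sketch of the leading eigenvalue of Eigen's matrix for a
    # single-peaked fitness landscape on binary sequences of length L.
    # Small, illustrative parameters; not the paper's analytical bounds.
    import numpy as np
    from itertools import product

    L = 6                          # sequence length, kept small for a dense matrix
    f_master, f_other = 10.0, 1.0  # single-peaked fitness landscape, assumed values
    seqs = np.array(list(product([0, 1], repeat=L)))
    fitness = np.full(len(seqs), f_other)
    fitness[0] = f_master          # peak at the all-zeros sequence

    def leading_eigenvalue(q):
        # mutation matrix: probability of copying sequence j into sequence i
        ham = (seqs[:, None, :] != seqs[None, :, :]).sum(-1)
        Q = q ** (L - ham) * (1 - q) ** ham
        W = Q * fitness[None, :]   # Eigen's value matrix W = Q diag(f)
        return np.max(np.real(np.linalg.eigvals(W)))

    for q in (0.80, 0.90, 0.99, 1.00):
        print(q, round(leading_eigenvalue(q), 3))

Scanning q over a finer grid exposes the minimum of w̄(q) in (0, 0.5] and the error-threshold behavior that the bounds in the paper quantify.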
A Temperature-Dependent Battery Model for Wireless Sensor Networks.
Rodrigues, Leonardo M; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco
2017-02-22
Energy consumption is a major issue in Wireless Sensor Networks (WSNs), as nodes are powered by chemical batteries with an upper bounded lifetime. Estimating the lifetime of batteries is a difficult task, as it depends on several factors, such as operating temperatures and discharge rates. Analytical battery models can be used for estimating both the battery lifetime and the voltage behavior over time. Still, available models usually do not consider the impact of operating temperatures on the battery behavior. The target of this work is to extend the widely-used Kinetic Battery Model (KiBaM) to include the effect of temperature on the battery behavior. The proposed Temperature-Dependent KiBaM (T-KiBaM) is able to handle operating temperatures, providing better estimates for the battery lifetime and voltage behavior. The performed experimental validation shows that T-KiBaM achieves an average accuracy error smaller than 0.33%, when estimating the lifetime of Ni-MH batteries for different temperature conditions. In addition, T-KiBaM significantly improves the original KiBaM voltage model. The proposed model can be easily adapted to handle other battery technologies, enabling the consideration of different WSN deployments.
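For context, the sketch below integrates the standard temperature-independent Kinetic Battery Model that T-KiBaM extends: charge is split between an available well and a bound well that exchange charge at rate k, and the battery is considered empty when the available well is drained. The parameters, load and time step are placeholders, and the temperature dependence introduced by T-KiBaM is not modeled here:

    # Minimal Euler-integration sketch of the standard (temperature-independent)
    # KiBaM: available charge y1 (capacity fraction c) and bound charge y2
    # exchange charge at rate k. Parameters and load are placeholders.

    c, k = 0.6, 1e-4          # capacity fraction and rate constant, assumed
    capacity = 2500.0 * 3.6   # 2500 mAh expressed in coulombs, assumed
    y1, y2 = c * capacity, (1 - c) * capacity
    load_a = 0.05             # constant discharge current (A), assumed
    dt, t = 1.0, 0.0          # 1-second time step

    while y1 > 0.0:
        h1, h2 = y1 / c, y2 / (1 - c)
        flow = k * (h2 - h1)              # diffusion from bound to available charge
        y1 += (-load_a + flow) * dt
        y2 += (-flow) * dt
        t += dt

    print(f"predicted lifetime under a {load_a} A load: {t / 3600:.1f} h")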
A Novel Method for Age Estimation in Solar-Type Stars Through GALEX FUV Magnitudes
NASA Astrophysics Data System (ADS)
Ho, Kelly; Subramonian, Arjun; Smith, Graeme; Shieh, Shouru
2018-01-01
Utilizing an inverse association known to exist between Galaxy Evolution Explorer (GALEX) far ultraviolet (FUV) magnitudes and the chromospheric activity of F, G, and K dwarfs, we explored a method of age estimation in solar-type stars through GALEX FUV magnitudes. Sample solar-type star data were collected from refereed publications and filtered by B-V and absolute visual magnitude to ensure similarities in temperature and luminosity to the Sun. We determined FUV-B and calculated a residual index Q for all the stars, using the temperature-induced upper bound on FUV-B as the fiducial. Plotting current age estimates for the stars against Q, we discovered a strong and significant association between the variables. By applying a log-linear transformation to the data to produce a strong correlation between Q and log_e(Age), we confirmed the association between Q and age to be exponential. Thus, least-squares regression was used to generate an exponential model relating Q to age in solar-type stars, which astronomers can apply directly. The Q-method of stellar age estimation is simple and more efficient than existing spectroscopic methods and has applications to galactic archaeology and stellar chemical composition analysis.
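A minimal sketch of the fitting step described above: least-squares regression of log_e(age) on Q, exponentiated to give an exponential age-Q relation. The sample values and resulting coefficients below are placeholders, not the published calibration.

```python
import numpy as np

# Placeholder (Q, age in Gyr) pairs; the actual calibration sample comes
# from the refereed publications cited in the abstract.
Q   = np.array([0.1, 0.3, 0.5, 0.8, 1.1, 1.4])
age = np.array([0.6, 1.0, 1.8, 3.2, 5.5, 9.0])

# Least-squares fit of log_e(age) = a + b*Q, i.e. age = exp(a) * exp(b*Q).
b, a = np.polyfit(Q, np.log(age), 1)
print(f"age(Q) ~ {np.exp(a):.2f} * exp({b:.2f} * Q)  [Gyr]")

# Age estimate for a new star with residual index Q = 0.9 (illustrative).
print(np.exp(a + b * 0.9))
```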
Donne, D.D.; Piccardi, L.; Odum, J.K.; Stephenson, W.J.; Williams, R.A.
2007-01-01
Shallow seismic reflection prospecting has been carried out in order to investigate the faults that bound the Quaternary Upper Tiber Basin (Northern Apennines, Italy) to the southwest and northeast. On the northeastern margin of the basin, a ~1 km-long seismic reflection profile images a fault segment and the associated sediment wedge, up to 100 meters thick. Across the southwestern margin, a 0.5 km-long seismic profile images a 50-55°-dipping extensional fault that projects to the scarp at the base of the range front and against which a 100 m thick syn-tectonic sediment wedge has formed. The integration of surface and sub-surface data allows us to estimate at least 190 meters of vertical displacement along the fault and a slip rate of around 0.25 m/kyr. The southwestern fault might also be interpreted as the main splay of the regional Alto Tiberina extensional fault. Finally, the 1917 Monterchi earthquake (Imax = X; Boschi et al., 2000) can be correlated with activation of the southwestern fault, suggesting the seismogenic character of the latter.
Two Upper Bounds for the Weighted Path Length of Binary Trees. Report No. UIUCDCS-R-73-565.
ERIC Educational Resources Information Center
Pradels, Jean Louis
Rooted binary trees with weighted nodes are structures encountered in many areas, such as coding theory, searching and sorting, information storage and retrieval. The path length is a meaningful quantity which gives indications about the expected time of a search or the length of a code, for example. In this paper, two sharp bounds for the total…
The Mystery of Io's Warm Polar Regions: Implications for Heat Flow
NASA Technical Reports Server (NTRS)
Matson, D. L.; Veeder, G. J.; Johnson, T. V.; Blaney, D. L.; Davies, A. G.
2002-01-01
Unexpectedly warm polar temperatures further support the idea that Io is covered virtually everywhere by cooling lava flows. This implies a new heat flow component. Io's heat flow remains constrained between a lower bound of approximately 2.5 W m^-2 and an upper bound of approximately 13 W m^-2. Additional information is contained in the original extended abstract.
Joseph Buongiorno; Mo Zhou; Craig Johnston
2017-01-01
Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value. The other method used the certainty...
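As a toy illustration of the first approach (maximizing expected value subject to a variance cap), the sketch below enumerates the stationary deterministic policies of a tiny two-state stand-management MDP, estimates the mean and variance of the discounted return by simulation, and keeps the best policy that satisfies the variance constraint; all dynamics, rewards, and thresholds are invented for illustration and are not the models of the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_return(policy, T=50, episodes=2000, discount=0.97):
    """Monte Carlo estimate of the mean and variance of the discounted return
    of a stationary policy in a tiny two-state MDP (states: 0 = young stand,
    1 = mature stand; actions: 0 = wait, 1 = harvest). Illustrative only."""
    returns = np.empty(episodes)
    for e in range(episodes):
        s, g, disc = 0, 0.0, 1.0
        for _ in range(T):
            a = policy[s]
            if s == 1 and a == 1:            # harvest a mature stand
                r, s = 10.0, 0
            else:                            # wait: stand may mature
                r = 0.0
                s = 1 if (s == 0 and rng.random() < 0.3) else s
                if rng.random() < 0.05:      # random disturbance resets the stand
                    s = 0
            g += disc * r
            disc *= discount
        returns[e] = g
    return returns.mean(), returns.var()

variance_cap = 300.0                          # risk constraint (illustrative)
policies = [(0, 0), (0, 1), (1, 0), (1, 1)]
evaluated = [(p, *simulate_return(p)) for p in policies]
best = max((x for x in evaluated if x[2] <= variance_cap),
           key=lambda x: x[1], default=None)
print(best)
```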
Limits on estimating the width of thin tubular structures in 3D images.
Wörz, Stefan; Rohr, Karl
2006-01-01
This work studies limits on estimating the width of thin tubular structures in 3D images. Based on nonlinear estimation theory we analyze the minimal stochastic error of estimating the width. Given a 3D analytic model of the image intensities of tubular structures, we derive a closed-form expression for the Cramér-Rao bound of the width estimate under image noise. We use the derived lower bound as a benchmark and compare it with three previously proposed accuracy limits for vessel width estimation. Moreover, by experimental investigations we demonstrate that the derived lower bound can be achieved by fitting a 3D parametric intensity model directly to the image data.
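For reference, the scalar Cramér-Rao inequality that such a lower bound instantiates is written out below; the paper's closed-form expression specializes the Fisher information I(w) to its 3D tubular intensity model, and the Gaussian-noise form given here is the standard specialization, stated as an assumption rather than a quotation from the paper.

```latex
% Scalar Cramer-Rao inequality for an unbiased width estimate \hat{w}
% given image data g:
\operatorname{Var}(\hat{w}) \;\ge\; \frac{1}{I(w)}, \qquad
I(w) \;=\; \mathbb{E}\!\left[ \left( \frac{\partial}{\partial w}
        \ln p(g \mid w) \right)^{\!2} \right].

% Standard specialization to additive Gaussian noise of variance \sigma_n^2,
% with g_M(\mathbf{x}; w) the noise-free model intensity:
I(w) \;=\; \frac{1}{\sigma_n^{2}} \int
      \left( \frac{\partial g_M(\mathbf{x}; w)}{\partial w} \right)^{\!2}
      \mathrm{d}\mathbf{x}.
```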
Verifying the error bound of numerical computation implemented in computer systems
Sawada, Jun
2013-03-12
A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment and converts, for each segment, a polynomial of bounded functions for the segment to a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment and reports the segments that violate a bounding condition.
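The split-and-bound idea can be illustrated with a toy check that is far simpler than the tool described above: split a domain into non-overlapping segments and, on each segment, compare an easily computed upper bound on the approximation error of a Taylor polynomial for exp against a tolerance, reporting any violating segments. Everything here (the target function, the Lagrange remainder bound, the parameters) is an illustrative assumption, not the cited verification tool.

```python
import math

def verify_taylor_exp_bound(degree, domain, n_segments, tol):
    """Split `domain` into non-overlapping segments and, on each one, check
    that the Lagrange remainder bound of the degree-`degree` Taylor polynomial
    of exp about 0 stays below `tol`. Segments violating the bound are reported."""
    a, b = domain
    edges = [a + (b - a) * i / n_segments for i in range(n_segments + 1)]
    violations = []
    for lo, hi in zip(edges, edges[1:]):
        x_max = max(abs(lo), abs(hi))
        # |exp(x) - T_n(x)| <= exp(max(hi, 0)) * |x|^(n+1) / (n+1)!  on [lo, hi]
        remainder_bound = (math.exp(max(hi, 0.0))
                           * x_max ** (degree + 1) / math.factorial(degree + 1))
        if remainder_bound > tol:
            violations.append((lo, hi, remainder_bound))
    return violations

print(verify_taylor_exp_bound(degree=5, domain=(-1.0, 1.0), n_segments=8, tol=1e-4))
```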
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aad, G.; Abbott, B.; Abdallah, J.
2016-01-28
A search for a Higgs boson produced via vector-boson fusion and decaying into invisible particles is presented, using 20.3 fb^-1 of proton-proton collision data at a centre-of-mass energy of 8 TeV recorded by the ATLAS detector at the LHC. For a Higgs boson with a mass of 125 GeV, assuming the Standard Model production cross section, an upper bound of 0.28 is set on the branching fraction of H → invisible at 95% confidence level, where the expected upper limit is 0.31. Furthermore, the results are interpreted in models of Higgs-portal dark matter, where the branching fraction limit is converted into upper bounds on the dark-matter-nucleon scattering cross section as a function of the dark-matter particle mass, and compared to results from direct dark-matter detection experiments.