Sample records for upper bound analysis

  1. A new upper bound for the norm of interval matrices with application to robust stability analysis of delayed neural networks.

    PubMed

    Faydasicok, Ozlem; Arik, Sabri

    2013-08-01

    The main problem with the analysis of robust stability of neural networks is to find the upper bound norm for the intervalized interconnection matrices of neural networks. In the previous literature, the three major upper bound norms for the intervalized interconnection matrices have been reported and successfully applied to derive new sufficient conditions for robust stability of delayed neural networks. One of the main contributions of this paper is the derivation of a new upper bound for the norm of the intervalized interconnection matrices of neural networks. Then, by exploiting this new upper bound norm of interval matrices and using the stability theory of Lyapunov functionals and the theory of homeomorphic mapping, we obtain new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. The results obtained in this paper are new and can be considered alternatives to previously published corresponding results. We also give some illustrative and comparative numerical examples to demonstrate the effectiveness and applicability of the proposed robust stability condition. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. On the likelihood of single-peaked preferences.

    PubMed

    Lackner, Marie-Louise; Lackner, Martin

    2017-01-01

    This paper contains an extensive combinatorial analysis of the single-peaked domain restriction and investigates the likelihood that an election is single-peaked. We provide a very general upper bound result for domain restrictions that can be defined by certain forbidden configurations. This upper bound implies that many domain restrictions (including the single-peaked restriction) are very unlikely to appear in a random election chosen according to the Impartial Culture assumption. For single-peaked elections, this upper bound can be refined and complemented by a lower bound that is asymptotically tight. In addition, we provide exact results for elections with few voters or candidates. Moreover, we consider the Pólya urn model and the Mallows model and obtain lower bounds showing that single-peakedness is considerably more likely to appear for certain parameterizations.

  3. Upper bound of abutment scour in laboratory and field data

    USGS Publications Warehouse

    Benedict, Stephen

    2016-01-01

    The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted a field investigation of abutment scour in South Carolina and used those data to develop envelope curves that define the upper bound of abutment scour. To expand on this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with abutment scour data from other sources and evaluate upper bound patterns with this larger data set. To facilitate this analysis, 446 laboratory and 331 field measurements of abutment scour were compiled into a digital database. This extensive database was used to evaluate the South Carolina abutment scour envelope curves and to develop additional envelope curves that reflected the upper bound of abutment scour depth for the laboratory and field data. The envelope curves provide simple but useful supplementary tools for assessing the potential maximum abutment scour depth in the field setting.

  4. Upper and lower bounds of ground-motion variabilities: implication for source properties

    NASA Astrophysics Data System (ADS)

    Cotton, Fabrice; Reddy-Kotha, Sreeram; Bora, Sanjay; Bindi, Dino

    2017-04-01

    One of the key challenges of seismology is to analyse the physical factors that control earthquake and ground-motion variabilities. Such analysis is particularly important to calibrate physics-based simulations and seismic hazard estimations at high frequencies. Within the framework of ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-source records and modern GMPE analysis techniques allow these residuals to be partitioned into between-event and within-event components. In particular, the between-event term quantifies all those repeatable source effects (e.g. related to stress-drop or kappa-source variability) which have not been accounted for by the magnitude-dependent term of the model. In this presentation, we first discuss the between-event variabilities computed both in the Fourier and response-spectra domains, using recent high-quality global accelerometric datasets (e.g. NGA-West2, RESORCE, KiK-net). These analyses lead to the assessment of upper bounds for the ground-motion variability. Then, we compare these upper bounds with lower bounds estimated by analysing seismic sequences which occurred on specific fault systems (e.g., located in Central Italy or in Japan). We show that the lower bounds of between-event variabilities are surprisingly large, which indicates a large variability of earthquake dynamic properties even within the same fault system. Finally, these upper and lower bounds of ground-shaking variability are discussed in terms of the variability of earthquake physical properties (e.g., stress-drop and kappa-source).

  5. SAS and SPSS macros to calculate standardized Cronbach's alpha using the upper bound of the phi coefficient for dichotomous items.

    PubMed

    Sun, Wei; Chou, Chih-Ping; Stacy, Alan W; Ma, Huiyan; Unger, Jennifer; Gallaher, Peggy

    2007-02-01

    Cronbach's α is widely used in social science research to estimate the internal consistency reliability of a measurement scale. However, when items are not strictly parallel, the Cronbach's α coefficient provides a lower-bound estimate of true reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach's α for a scale with dichotomous items can be improved by using the upper bound of the coefficient phi. SAS and SPSS macros have been developed in this article to obtain standardized Cronbach's α via this method. The simulation analysis showed that Cronbach's α from upper-bound phi might be appropriate for estimating the real reliability when standardized Cronbach's α is problematic.
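
    A minimal Python sketch of the idea described above (not the authors' SAS/SPSS macros): divide each pairwise phi coefficient by its maximum attainable value given the two item proportions, then plug the mean adjusted correlation into the standardized-alpha formula. The function names and the simulated data are illustrative assumptions only.

```python
import numpy as np

def phi_max(p, q):
    """Upper bound of the phi coefficient for two dichotomous items
    with endorsement proportions p and q (assumes 0 < p, q < 1)."""
    p, q = min(p, q), max(p, q)
    return np.sqrt(p * (1 - q) / (q * (1 - p)))

def standardized_alpha_phimax(items):
    """items: (n_subjects, k_items) array of 0/1 responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    props = items.mean(axis=0)
    corr = np.corrcoef(items, rowvar=False)          # pairwise phi coefficients
    adjusted = [corr[i, j] / phi_max(props[i], props[j])
                for i in range(k) for j in range(i + 1, k)]
    r_bar = float(np.mean(adjusted))                 # mean phi / phi_max correlation
    return k * r_bar / (1 + (k - 1) * r_bar)         # standardized Cronbach's alpha

# Illustrative data: five dichotomous items driven by a single latent trait.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = (latent + rng.normal(size=(200, 5)) > 0.3).astype(int)
print(round(standardized_alpha_phimax(items), 3))
```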

  6. The upper bound of abutment scour defined by selected laboratory and field data

    USGS Publications Warehouse

    Benedict, Stephen; Caldwell, Andral W.

    2015-01-01

    The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted a field investigation of abutment scour in South Carolina and used that data to develop envelope curves defining the upper bound of abutment scour. To expand upon this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with abutment-scour data from other sources and evaluate the upper bound of abutment scour with the larger data set. To facilitate this analysis, a literature review was made to identify potential sources of published abutment-scour data, and selected data, consisting of 446 laboratory and 331 field measurements, were compiled for the analysis. These data encompassed a wide range of laboratory and field conditions and represent field data from 6 states within the United States. The data set was used to evaluate the South Carolina abutment-scour envelope curves. Additionally, the data were used to evaluate a dimensionless abutment-scour envelope curve developed by Melville (1992), highlighting the distinct difference in the upper bound for laboratory and field data. The envelope curves evaluated in this investigation provide simple but useful tools for assessing the potential maximum abutment-scour depth in the field setting.

  7. Performance bounds on parallel self-initiating discrete-event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The use of massively parallel architectures to execute discrete-event simulations of what are termed self-initiating models is considered. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance of various synchronization protocols is considered by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.

  8. The Problem of Limited Inter-rater Agreement in Modelling Music Similarity

    PubMed Central

    Flexer, Arthur; Grill, Thomas

    2016-01-01

    One of the central goals of Music Information Retrieval (MIR) is the quantification of similarity between or within pieces of music. These quantitative relations should mirror the human perception of music similarity, which is however highly subjective with low inter-rater agreement. Unfortunately this principal problem has been given little attention in MIR so far. Since it is not meaningful to have computational models that go beyond the level of human agreement, these levels of inter-rater agreement present a natural upper bound for any algorithmic approach. We will illustrate this fundamental problem in the evaluation of MIR systems using results from two typical application scenarios: (i) modelling of music similarity between pieces of music; (ii) music structure analysis within pieces of music. For both applications, we derive upper bounds of performance which are due to the limited inter-rater agreement. We compare these upper bounds to the performance of state-of-the-art MIR systems and show how the upper bounds prevent further progress in developing better MIR systems. PMID:28190932

  9. Resilient filtering for time-varying stochastic coupling networks under the event-triggering scheduling

    NASA Astrophysics Data System (ADS)

    Wang, Fan; Liang, Jinling; Dobaie, Abdullah M.

    2018-07-01

    The resilient filtering problem is considered for a class of time-varying networks with stochastic coupling strengths. An event-triggered strategy is adopted to save network resources by scheduling the signal transmission from the sensors to the filters based on certain prescribed rules. Moreover, the filter parameters to be designed are subject to gain perturbations. The primary aim of the addressed problem is to determine a resilient filter that ensures an acceptable filtering performance for the considered network with event-triggering scheduling. To handle this issue, an upper bound on the estimation error variance is established for each node according to the stochastic analysis. Subsequently, the resilient filter is designed by locally minimizing the derived upper bound at each iteration. Moreover, rigorous analysis shows the monotonicity of the minimal upper bound with respect to the triggering threshold. Finally, a simulation example is presented to show the effectiveness of the established filter scheme.

  10. Complexity, Heuristic, and Search Analysis for the Games of Crossings and Epaminondas

    DTIC Science & Technology

    2014-03-27

    [Excerpt from the report's front matter; the full abstract is not included.] The background chapter reviews research in Artificial Intelligence (Section 2.1), why games are studied (Section 2.2), and how games are played and solved (Section 2.3). Abbreviations defined include UCT (Upper Confidence Bounds applied to Trees), HUCT (Heuristic Guided UCT), LOA (Lines of Action), UCB (Upper Confidence Bound), and RAVE (Rapid Action Value Estimation).

  11. The direct reaction field hamiltonian: Analysis of the dispersion term and application to the water dimer

    NASA Astrophysics Data System (ADS)

    Thole, B. T.; Van Duijnen, P. Th.

    1982-10-01

    The induction and dispersion terms obtained from quantum-mechanical calculations with a direct reaction field hamiltonian are compared to second order perturbation theory expressions. The dispersion term is shown to give an upper bound which is a generalization of Alexander's upper bound. The model is illustrated by a calculation on the interactions in the water dimer. The long range Coulomb, induction and dispersion interactions are reasonably reproduced.

  12. Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; L'Esperance, A.

    2017-01-01

    A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.

  13. Upper bounds on superpartner masses from upper bounds on the Higgs boson mass.

    PubMed

    Cabrera, M E; Casas, J A; Delgado, A

    2012-01-13

    The LHC is putting bounds on the Higgs boson mass. In this Letter we use those bounds to constrain the minimal supersymmetric standard model (MSSM) parameter space using the fact that, in supersymmetry, the Higgs mass is a function of the masses of sparticles, and therefore an upper bound on the Higgs mass translates into an upper bound for the masses of superpartners. We show that, although current bounds do not constrain the MSSM parameter space from above, once the Higgs mass bound improves, big regions of this parameter space will be excluded, putting upper bounds on supersymmetry (SUSY) masses. On the other hand, for the case of split-SUSY we show that, for moderate or large tanβ, the present bounds on the Higgs mass imply that the common mass for scalars cannot be greater than 10^11 GeV. We show how these bounds will evolve as the LHC continues to improve the limits on the Higgs mass.

  14. "Carbon Credits" for Resource-Bounded Computations Using Amortised Analysis

    NASA Astrophysics Data System (ADS)

    Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin

    Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.

  15. Length bounds for connecting discharges in triggered lightning subsequent strokes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idone, V.P.

    1990-11-20

    Highly time resolved streak recordings from nine subsequent strokes in four triggered flashes have been examined for evidence of the occurrence of upward connecting discharges. These photographic recordings were obtained with superior spatial and temporal resolution (0.3 m and 0.5 μs) and were examined with a video image analysis system to help delineate the separate leader and return stroke image tracks. Unfortunately, a definitive determination of the occurrence of connecting discharges in these strokes could not be made. The data did allow various determinations of an upper bound length for any possible connecting discharge in each stroke. Under the simplest analysis approach possible, an 'absolute' upper bound set of lengths was measured that ranged from 12 to 27 m with a mean of 19 m; two other more involved analyses yielded arguably better upper bound estimates of 8-18 m and 7-26 m with means of 12 and 13 m, respectively. An additional set of low time-resolution telephoto recordings of the lowest few meters of channel revealed six strokes in these flashes with one or more upward unconnected channels originating from the lightning rod tip. The maximum length of unconnected channel seen in each of these strokes ranged from 0.2 to 1.6 m with a mean of 0.7 m. This latter set of observations is interpreted as indirect evidence that connecting discharges did occur in these strokes and that the lower bound for their length is about 1 m.

  16. A Novel Capacity Analysis for Wireless Backhaul Mesh Networks

    NASA Astrophysics Data System (ADS)

    Chung, Tein-Yaw; Lee, Kuan-Chun; Lee, Hsiao-Chih

    This paper derives a closed-form expression for the inter-flow capacity of a backhaul wireless mesh network (WMN) with centralized scheduling by employing a ring-based approach. Through the definition of an interference area, we are able to accurately describe a bottleneck collision area for a WMN and calculate the upper bound of inter-flow capacity. The closed-form expression shows that the upper bound is a function of the ratio between transmission range and network radius. Simulations and numerical analysis show that our analytic solution estimates the inter-flow capacity of WMNs better than the previous approach.

  17. Limit analysis of hollow spheres or spheroids with Hill orthotropic matrix

    NASA Astrophysics Data System (ADS)

    Pastor, Franck; Pastor, Joseph; Kondo, Djimedo

    2012-03-01

    Recent theoretical studies in the literature are concerned with the hollow sphere or spheroid (confocal) problems with an orthotropic Hill-type matrix. They have been developed in the framework of the limit analysis kinematical approach by using very simple trial velocity fields. The present Note provides, through numerical upper and lower bounds, a rigorous assessment of the approximate criteria derived in these theoretical works. To this end, existing static 3D codes for a von Mises matrix have been easily extended to the orthotropic case. Conversely, instead of the non-obvious extension of the existing kinematic codes, a new original mixed approach has been elaborated on the basis of the plane strain structure formulation earlier developed by F. Pastor (2007). Indeed, such a formulation does not need the expressions of the unit dissipated powers. Interestingly, it delivers a numerical code better conditioned and notably more rapid than the previous one, while preserving the rigorous upper bound character of the corresponding numerical results. The efficiency of the whole approach is first demonstrated through comparisons of the results to the analytical upper bounds of Benzerga and Besson (2001) or Monchiet et al. (2008) in the case of spherical voids in the Hill matrix. Moreover, we provide upper and lower bound results for the hollow spheroid with the Hill matrix, which are compared to those of Monchiet et al. (2008).

  18. Improved bounds on the energy-minimizing strains in martensitic polycrystals

    NASA Astrophysics Data System (ADS)

    Peigney, Michaël

    2016-07-01

    This paper is concerned with the theoretical prediction of the energy-minimizing (or recoverable) strains in martensitic polycrystals, considering a nonlinear elasticity model of phase transformation at finite strains. The main results are some rigorous upper bounds on the set of energy-minimizing strains. Those bounds depend on the polycrystalline texture through the volume fractions of the different orientations. The simplest form of the bounds presented is obtained by combining recent results for single crystals with a homogenization approach proposed previously for martensitic polycrystals. However, the polycrystalline bound delivered by that procedure may fail to recover the monocrystalline bound in the homogeneous limit, as is demonstrated in this paper by considering an example related to tetragonal martensite. This motivates the development of a more detailed analysis, leading to improved polycrystalline bounds that are notably consistent with results for single crystals in the homogeneous limit. A two-orientation polycrystal of tetragonal martensite is studied as an illustration. In that case, analytical expressions of the upper bounds are derived and the results are compared with lower bounds obtained by considering laminate textures.

  19. Upper and lower bounds for the speed of pulled fronts with a cut-off

    NASA Astrophysics Data System (ADS)

    Benguria, R. D.; Depassier, M. C.; Loss, M.

    2008-02-01

    We establish rigorous upper and lower bounds for the speed of pulled fronts with a cut-off. For all reaction terms of KPP type a simple analytic upper bound is given. The lower bounds however depend on details of the reaction term. For a small cut-off parameter the two leading order terms in the asymptotic expansion of the upper and lower bounds coincide and correspond to the Brunet-Derrida formula. For large cut-off parameters the bounds do not coincide and permit a simple estimation of the speed of the front.
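
    For reference, the Brunet-Derrida formula mentioned above is commonly quoted, for the standard FKPP normalization u_t = u_xx + u(1 - u) with a cut-off of size ε, as the following small-ε asymptotic (the normalization is an assumption here; the paper's bounds cover general KPP-type reaction terms):

```latex
v(\varepsilon) \;\simeq\; 2 \;-\; \frac{\pi^{2}}{(\ln \varepsilon)^{2}},
\qquad \varepsilon \to 0^{+}.
```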

  20. SURE reliability analysis: Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; White, Allan L.

    1988-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  1. The upper bound of Pier Scour defined by selected laboratory and field data

    USGS Publications Warehouse

    Benedict, Stephen; Caldwell, Andral W.

    2015-01-01

    The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted several field investigations of pier scour in South Carolina (Benedict and Caldwell, 2006; Benedict and Caldwell, 2009) and used those data to develop envelope curves defining the upper bound of pier scour. To expand upon this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with pier-scour data from other sources and evaluate the upper bound of pier scour with this larger data set. To facilitate this analysis, a literature review was made to identify potential sources of published pier-scour data, and selected data were compiled into a digital spreadsheet consisting of approximately 570 laboratory and 1,880 field measurements. These data encompass a wide range of laboratory and field conditions and represent field data from 24 states within the United States and six other countries. This extensive database was used to define the upper bound of pier-scour depth with respect to pier width encompassing the laboratory and field data. Pier width is a primary variable that influences pier-scour depth (Laursen and Toch, 1956; Melville and Coleman, 2000; Mueller and Wagner, 2005; Ettema et al., 2011; Arneson et al., 2012) and therefore was used as the primary explanatory variable in developing the upper-bound envelope curve. The envelope curve provides a simple but useful tool for assessing the potential maximum pier-scour depth for pier widths of about 30 feet or less.

  2. An evaluation of risk estimation procedures for mixtures of carcinogens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, J.S.; Chen, J.J.

    1999-12-01

    The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on the combination of information from several available single compound studies. The current practice of directly summing the upper bound risk estimates of individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of individual carcinogens. The Gaylor-Chen procedure was derived based on an underlying assumption of normality for the distributions of individual risk estimates. In this paper the authors evaluated the Gaylor-Chen approach in terms of the coverage of the upper confidence limits on the true risks of individual carcinogens. In general, if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, then the Gaylor-Chen approach should perform well. However, the Gaylor-Chen approach can be conservative or anti-conservative if some or all individual upper confidence limit estimates are conservative or anti-conservative.
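
    A short numerical sketch of the style of combination described above, assuming the commonly cited form of the Gaylor-Chen bound (summed central estimates plus the root-sum-square of the individual upper-confidence margins); the risk values are made-up placeholders and the exact formula should be taken from the 1996 paper.

```python
import math

central = [1.2e-6, 4.0e-7, 2.5e-6]   # central (e.g., maximum-likelihood) risk estimates
ucl     = [3.0e-6, 1.1e-6, 6.0e-6]   # individual upper confidence limits

naive_upper = sum(ucl)               # conventional practice: sum the upper limits
margins = [u - c for u, c in zip(ucl, central)]
combined_upper = sum(central) + math.sqrt(sum(m * m for m in margins))

print(f"sum of individual UCLs : {naive_upper:.3e}")
print(f"combined upper bound   : {combined_upper:.3e}")   # tighter than the naive sum
```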

  3. New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Rodemich, E. R.; Rumsey, H., Jr.; Welch, L. R.

    1977-01-01

    An upper bound on the rate of a binary code as a function of minimum code distance (using a Hamming code metric) is arrived at from Delsarte-MacWilliams inequalities. The upper bound so found is asymptotically less than Levenshtein's bound, and a fortiori less than Elias' bound. Appendices review properties of Krawtchouk polynomials and Q-polynomials utilized in the rigorous proofs.

  4. Upper bound of pier scour in laboratory and field data

    USGS Publications Warehouse

    Benedict, Stephen; Caldwell, Andral W.

    2016-01-01

    The U.S. Geological Survey (USGS), in cooperation with the South Carolina Department of Transportation, conducted several field investigations of pier scour in South Carolina and used the data to develop envelope curves defining the upper bound of pier scour. To expand on this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with pier scour data from other sources and to evaluate upper-bound relations with this larger data set. To facilitate this analysis, 569 laboratory and 1,858 field measurements of pier scour were compiled to form the 2014 USGS Pier Scour Database. This extensive database was used to develop an envelope curve for the potential maximum pier scour depth encompassing the laboratory and field data. The envelope curve provides a simple but useful tool for assessing the potential maximum pier scour depth for effective pier widths of about 30 ft or less.

  5. The SURE Reliability Analysis Program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  6. Combinatorial complexity of pathway analysis in metabolic networks.

    PubMed

    Klamt, Steffen; Stelling, Jörg

    2002-01-01

    Elementary flux mode analysis is a promising approach for a pathway-oriented perspective of metabolic networks. However, in larger networks it is hampered by the combinatorial explosion of possible routes. In this work we give some estimates of the combinatorial complexity, including theoretical upper bounds for the number of elementary flux modes in a network of a given size. In a case study, we computed the elementary modes in the central metabolism of Escherichia coli utilizing four different substrates. Interestingly, although the number of modes occurring in this complex network can exceed half a million, it is still far below the upper bound. Hence, to a certain extent, pathway analysis of central catabolism is feasible for assessing network properties such as flexibility and functionality.

  7. UPPER BOUND RISK ESTIMATES FOR MIXTURES OF CARCINOGENS

    EPA Science Inventory

    The excess cancer risk that might result from exposure to a mixture of chemical carcinogens usually is estimated with data from experiments conducted on individual chemicals. An upper bound on the total excess risk is estimated commonly by summing individual upper bound risk estimates.

  8. Safe Upper-Bounds Inference of Energy Consumption for Java Bytecode Applications

    NASA Technical Reports Server (NTRS)

    Navas, Jorge; Mendez-Lojo, Mario; Hermenegildo, Manuel V.

    2008-01-01

    Many space applications such as sensor networks, on-board satellite-based platforms, on-board vehicle monitoring systems, etc. handle large amounts of data and analysis of such data is often critical for the scientific mission. Transmitting such large amounts of data to the remote control station for analysis is usually too expensive for time-critical applications. Instead, modern space applications are increasingly relying on autonomous on-board data analysis. All these applications face many resource constraints. A key requirement is to minimize energy consumption. Several approaches have been developed for estimating the energy consumption of such applications (e.g. [3, 1]) based on measuring actual consumption at run-time for large sets of random inputs. However, this approach has the limitation that it is in general not possible to cover all possible inputs. Using formal techniques offers the potential for inferring safe energy consumption bounds, thus being especially interesting for space exploration and safety-critical systems. We have proposed and implemented a general framework for resource usage analysis of Java bytecode [2]. The user defines a set of resource(s) of interest to be tracked and some annotations that describe the cost of some elementary elements of the program for those resources. These values can be constants or, more generally, functions of the input data sizes. The analysis then statically derives an upper bound on the amount of those resources that the program as a whole will consume or provide, also as functions of the input data sizes. This article develops a novel application of the analysis of [2] to inferring safe upper bounds on the energy consumption of Java bytecode applications. We first use a resource model that describes the cost of each bytecode instruction in terms of the joules it consumes. With this resource model, we then generate energy consumption cost relations, which are then used to infer safe upper bounds. How energy consumption for each bytecode instruction is measured is beyond the scope of this paper. Instead, this paper is about how to infer safe energy consumption estimations assuming that those energy consumption costs are provided. For concreteness, we use a simplified version of an existing resource model [1] in which an energy consumption cost for individual Java opcodes is defined.
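
    A toy sketch of the kind of cost relation described above, not the authors' Java-bytecode framework: with an assumed per-instruction energy model and a loop that executes at most n times, the inferred upper bound is a simple function of the input size n. The opcode names and costs below are invented for illustration.

```python
# Assumed per-opcode energy costs in nanojoules (illustrative values only).
ENERGY_NJ = {"load": 1.2, "store": 1.5, "add": 0.8, "cmp": 0.7, "branch": 1.0}

def loop_energy_upper_bound_nj(n):
    """Upper bound on energy (nJ) for a toy loop whose body executes
    load, add, store, cmp, branch at most n times, plus a fixed setup cost."""
    body = sum(ENERGY_NJ[op] for op in ("load", "add", "store", "cmp", "branch"))
    setup = ENERGY_NJ["load"] + ENERGY_NJ["store"]
    return setup + n * body          # cost relation linear in the input size n

print(loop_energy_upper_bound_nj(1000))
```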

  9. Search for Chemically Bound Water in the Surface Layer of Mars Based on HEND/Mars Odyssey Data

    NASA Technical Reports Server (NTRS)

    Basilevsky, A. T.; Litvak, M. L.; Mitrofanov, I. G.; Boynton, W.; Saunders, R. S.

    2003-01-01

    This study emphasizes the search for signatures of chemically bound water in the surface layer of Mars based on data acquired by the High Energy Neutron Detector (HEND), which is part of the Mars Odyssey Gamma Ray Spectrometer (GRS). Fluxes of epithermal (probing the upper 1-2 m) and fast (the upper 20-30 cm) neutrons, considered in this work, were measured from mid-February to mid-June 2002. A first analysis of this data set with emphasis on chemically bound water was made. Early publications of the GRS results reported low neutron flux at high latitudes, interpreted as a signature of ground water ice, and in two low latitude areas, Arabia and SW of Olympus Mons (SWOM), interpreted as 'geographic variations in the amount of chemically and/or physically bound H2O and/or OH...'. It is clear that surface materials of Mars do contain chemically bound water, but its amounts are poorly known and its geographic distribution was not analyzed.

  10. Exact lower and upper bounds on stationary moments in stochastic biochemical systems

    NASA Astrophysics Data System (ADS)

    Ghusinga, Khem Raj; Vargas-Garcia, Cesar A.; Lamperski, Andrew; Singh, Abhyudai

    2017-08-01

    In the stochastic description of biochemical reaction systems, the time evolution of statistical moments for species population counts is described by a linear dynamical system. However, except for some ideal cases (such as zero- and first-order reaction kinetics), the moment dynamics is underdetermined as lower-order moments depend upon higher-order moments. Here, we propose a novel method to find exact lower and upper bounds on stationary moments for a given arbitrary system of biochemical reactions. The method exploits the fact that statistical moments of any positive-valued random variable must satisfy some constraints that are compactly represented through the positive semidefiniteness of moment matrices. Our analysis shows that solving moment equations at steady state in conjunction with constraints on moment matrices provides exact lower and upper bounds on the moments. These results are illustrated by three different examples—the commonly used logistic growth model, stochastic gene expression with auto-regulation and an activator-repressor gene network motif. Interestingly, in all cases the accuracy of the bounds is shown to improve as moment equations are expanded to include higher-order moments. Our results provide avenues for development of approximation methods that provide explicit bounds on moments for nonlinear stochastic systems that are otherwise analytically intractable.

  11. Upper bound on the slope of steady water waves with small adverse vorticity

    NASA Astrophysics Data System (ADS)

    So, Seung Wook; Strauss, Walter A.

    2018-03-01

    We consider the angle of inclination (with respect to the horizontal) of the profile of a steady 2D inviscid symmetric periodic or solitary water wave subject to gravity. There is an upper bound of 31.15° in the irrotational case [1] and an upper bound of 45° in the case of favorable vorticity [13]. On the other hand, if the vorticity is adverse, the profile can become vertical. We prove here that if the adverse vorticity is sufficiently small, then the angle still has an upper bound which is slightly larger than 45°.

  12. A linear programming approach to max-sum problem: a review.

    PubMed

    Werner, Tomás

    2007-07-01

    The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.

  13. The SURE reliability analysis program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  14. An analysis of the vertical structure equation for arbitrary thermal profiles

    NASA Technical Reports Server (NTRS)

    Cohn, Stephen E.; Dee, Dick P.

    1989-01-01

    The vertical structure equation is a singular Sturm-Liouville problem whose eigenfunctions describe the vertical dependence of the normal modes of the primitive equations linearized about a given thermal profile. The eigenvalues give the equivalent depths of the modes. The spectrum of the vertical structure equation and the appropriateness of various upper boundary conditions were studied, both for arbitrary thermal profiles. The results depend critically upon whether or not the thermal profile is such that the basic state atmosphere is bounded. In the case of a bounded atmosphere it is shown that the spectrum is always totally discrete, regardless of details of the thermal profile. For the barotropic equivalent depth, which corresponds to the lowest eigenvalue, upper and lower bounds which depend only on the surface temperature and the atmosphere height were obtained. All eigenfunctions are bounded, but always have unbounded first derivatives. It was proved that the commonly invoked upper boundary condition that vertical velocity must vanish as pressure tends to zero, as well as a number of alternative conditions, is well posed. It was concluded that the vertical structure equation always has a totally discrete spectrum under the assumptions implicit in the primitive equations.

  15. An analysis of the vertical structure equation for arbitrary thermal profiles

    NASA Technical Reports Server (NTRS)

    Cohn, Stephen E.; Dee, Dick P.

    1987-01-01

    The vertical structure equation is a singular Sturm-Liouville problem whose eigenfunctions describe the vertical dependence of the normal modes of the primitive equations linearized about a given thermal profile. The eigenvalues give the equivalent depths of the modes. The spectrum of the vertical structure equation and the appropriateness of various upper boundary conditions were studied, both for arbitrary thermal profiles. The results depend critically upon whether or not the thermal profile is such that the basic state atmosphere is bounded. In the case of a bounded atmosphere it is shown that the spectrum is always totally discrete, regardless of details of the thermal profile. For the barotropic equivalent depth, which corresponds to the lowest eigenvalue, upper and lower bounds which depend only on the surface temperature and the atmosphere height were obtained. All eigenfunctions are bounded, but always have unbounded first derivatives. It was proved that the commonly invoked upper boundary condition that vertical velocity must vanish as pressure tends to zero, as well as a number of alternative conditions, is well posed. It was concluded that the vertical structure equation always has a totally discrete spectrum under the assumptions implicit in the primitive equations.

  16. Hardening Effect Analysis by Modular Upper Bound and Finite Element Methods in Indentation of Aluminum, Steel, Titanium and Superalloys

    PubMed Central

    Bermudo, Carolina; Sevilla, Lorenzo; Martín, Francisco; Trujillo, Francisco Javier

    2017-01-01

    The application of incremental processes in the manufacturing industry has seen considerable development in recent years. The first stage of an Incremental Forming Process can be defined as an indentation. Because of this, the indentation process is starting to be widely studied, not only as a hardening test but also as a forming process. Thus, in this work, an analysis of the indentation process under the new Modular Upper Bound perspective has been performed. The modular implementation has several advantages, including the possibility of introducing different parameters to extend the study, such as the friction effect, the temperature or the hardening effect studied in this paper. The main objective of the present work is to analyze the three hardening models developed depending on the material characteristics. In order to support the validation of the hardening models, finite element analyses of diverse materials under indentation are carried out. Results obtained from the Modular Upper Bound are in concordance with the results obtained from the numerical analyses. In addition, the numerical and analytical methods are in concordance with the results previously obtained in the experimental indentation of annealed aluminum A92030. Due to the introduction of the hardening factor, the new modular distribution is a suitable option for the analysis of the indentation process. PMID:28772914

  17. Manipulations of Cartesian Graphs: A First Introduction to Analysis.

    ERIC Educational Resources Information Center

    Lowenthal, Francis; Vandeputte, Christiane

    1989-01-01

    Introduces an introductory module for analysis. Describes a stock of basic functions and their graphs as part one, and three methods as part two: transformations of simple graphs, the sum of stock functions, and upper and lower bounds. (YP)

  18. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.

  19. The Economic Cost of Methamphetamine Use in the United States, 2005

    ERIC Educational Resources Information Center

    Nicosia, Nancy; Pacula, Rosalie Liccardo; Kilmer, Beau; Lundberg, Russell; Chiesa, James

    2009-01-01

    This first national estimate suggests that the economic cost of methamphetamine (meth) use in the United States reached $23.4 billion in 2005. Given the uncertainty in estimating the costs of meth use, this book provides a lower-bound estimate of $16.2 billion and an upper-bound estimate of $48.3 billion. The analysis considers a wide range of…

  20. Calculation of upper confidence bounds on proportion of area containing not-sampled vegetation types: An application to map unit definition for existing vegetation maps

    Treesearch

    Paul L. Patterson; Mark Finco

    2011-01-01

    This paper explores the information forest inventory data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977).
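
    A minimal sketch of the Bernoulli reduction described above: if a forest type is observed in none of n sample plots, inverting (1 - p)^n = alpha gives the exact one-sided upper 100(1 - alpha)% confidence bound on the proportion of area it could occupy. (The cited report handles the full map-unit and systematic-grid details; this shows only the basic calculation.)

```python
def upper_bound_not_sampled(n_plots, alpha=0.05):
    """Upper 100*(1-alpha)% confidence bound on the areal proportion of a
    vegetation type observed in 0 of n_plots independent sample plots."""
    return 1.0 - alpha ** (1.0 / n_plots)

for n in (50, 100, 500, 1000):
    print(n, round(upper_bound_not_sampled(n), 4))
# For large n this approaches the familiar "rule of three": roughly 3/n at 95% confidence.
```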

  1. Calculation of upper confidence bounds on not-sampled vegetation types using a systematic grid sample: An application to map unit definition for existing vegetation maps

    Treesearch

    Paul L. Patterson; Mark Finco

    2009-01-01

    This paper explores the information FIA data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977). Examples are...

  2. General upper bound on single-event upset rate [due to ionizing radiation in orbiting vehicle avionics]

    NASA Technical Reports Server (NTRS)

    Chlouber, Dean; O'Neill, Pat; Pollock, Jim

    1990-01-01

    A technique of predicting an upper bound on the rate at which single-event upsets due to ionizing radiation occur in semiconducting memory cells is described. The upper bound on the upset rate, which depends on the high-energy particle environment in earth orbit and accelerator cross-section data, is given by the product of an upper-bound linear energy-transfer spectrum and the mean cross section of the memory cell. Plots of the spectrum are given for low-inclination and polar orbits. An alternative expression for the exact upset rate is also presented. Both methods rely only on experimentally obtained cross-section data and are valid for sensitive bit regions having arbitrary shape.

  3. Upper bounds on secret-key agreement over lossy thermal bosonic channels

    NASA Astrophysics Data System (ADS)

    Kaur, Eneet; Wilde, Mark M.

    2017-12-01

    Upper bounds on the secret-key-agreement capacity of a quantum channel serve as a way to assess the performance of practical quantum-key-distribution protocols conducted over that channel. In particular, if a protocol employs a quantum repeater, achieving secret-key rates exceeding these upper bounds is evidence of having a working quantum repeater. In this paper, we extend a recent advance [Liuzzo-Scorpo et al., Phys. Rev. Lett. 119, 120503 (2017), 10.1103/PhysRevLett.119.120503] in the theory of the teleportation simulation of single-mode phase-insensitive Gaussian channels such that it now applies to the relative entropy of entanglement measure. As a consequence of this extension, we find tighter upper bounds on the nonasymptotic secret-key-agreement capacity of the lossy thermal bosonic channel than were previously known. The lossy thermal bosonic channel serves as a more realistic model of communication than the pure-loss bosonic channel, because it can model the effects of eavesdropper tampering and imperfect detectors. An implication of our result is that the previously known upper bounds on the secret-key-agreement capacity of the thermal channel are too pessimistic for the practical finite-size regime in which the channel is used a finite number of times, and so it should now be somewhat easier to witness a working quantum repeater when using secret-key-agreement capacity upper bounds as a benchmark.

  4. Bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azunre, P.

    In this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring the solution of auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.

  5. Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R.; Mosonyi, Milán

    2014-10-01

    We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ_1, …, σ_r. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ_1, …, σ_r), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, min_{j<k} C(σ_j, σ_k).

  6. Upper bounds on sequential decoding performance parameters

    NASA Technical Reports Server (NTRS)

    Jelinek, F.

    1974-01-01

    This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.

  7. A fast algorithm for determining bounds and accurate approximate p-values of the rank product statistic for replicate experiments.

    PubMed

    Heskes, Tom; Eisinga, Rob; Breitling, Rainer

    2014-11-21

    The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Both exact calculation and permutation and gamma approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds on the exact p-value along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy over existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis on transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order of magnitude increase in throughput as compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip .
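
    The gamma approximation mentioned above can be sketched in a few lines (this is the older continuous approximation, not the exact bounds derived in the paper): under the null hypothesis each rank divided by the number of molecules behaves roughly like a Uniform(0,1) variable, so minus the sum of the log-ratios across k replicates is approximately Gamma(k, 1). The function name and numbers are illustrative only.

```python
import math
from scipy import stats

def rank_product_pvalue_gamma(ranks, n_molecules):
    """ranks: the k per-replicate ranks of one molecule (1 = most differentially expressed).
    Returns the approximate upper-tail p-value for a small rank product."""
    k = len(ranks)
    stat = -sum(math.log(r / n_molecules) for r in ranks)
    return stats.gamma.sf(stat, a=k)   # survival function of Gamma(shape=k, scale=1)

print(rank_product_pvalue_gamma([3, 7, 5], n_molecules=2000))
```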

  8. How entangled can a multi-party system possibly be?

    NASA Astrophysics Data System (ADS)

    Qi, Liqun; Zhang, Guofeng; Ni, Guyan

    2018-06-01

    The geometric measure of entanglement of a pure quantum state is defined to be its distance to the space of pure product (separable) states. Given an n-partite system composed of subsystems of dimensions d1, …, dn, an upper bound for the maximally allowable entanglement is derived in terms of the geometric measure of entanglement. This upper bound is characterized exclusively by the dimensions d1, …, dn of the composite subsystems. Numerous examples demonstrate that the upper bound appears to be reasonably tight.

  9. Performance analysis for minimally nonlinear irreversible refrigerators at finite cooling power

    NASA Astrophysics Data System (ADS)

    Long, Rui; Liu, Zhichun; Liu, Wei

    2018-04-01

    The coefficient of performance (COP) for general refrigerators at finite cooling power has been systematically researched through the minimally nonlinear irreversible model, and its lower and upper bounds in different operating regions have been proposed. Under the tight coupling conditions, we have calculated the universal COP bounds under the χ figure of merit in different operating regions. When the refrigerator operates in the region with lower external flux, we obtained the general bounds 0 < ε < (√(9 + 8ε_C) − 3)/2 under the χ figure of merit. We have also calculated the universal bounds for the maximum gain in COP under different operating regions to give further insight into the COP gain with the cooling power away from the maximum one. When the refrigerator operates in the region located between maximum cooling power and maximum COP with lower external flux, the upper bound for the COP and the lower bound for the relative gain in COP present large values, compared to a relatively small loss from the maximum cooling power. If the cooling power is the main objective, it is desirable to operate the refrigerator at a slightly lower cooling power than the maximum one, where a small loss in the cooling power induces a much larger COP enhancement.
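
    A quick numerical illustration of the bound quoted above, taking ε_C as the Carnot COP Tc/(Th − Tc); the reservoir temperatures are arbitrary placeholder values.

```python
import math

def cop_upper_bound(eps_carnot):
    """Upper bound on the COP under the chi figure of merit, as quoted above."""
    return (math.sqrt(9.0 + 8.0 * eps_carnot) - 3.0) / 2.0

for tc, th in [(260.0, 300.0), (280.0, 300.0), (295.0, 300.0)]:
    eps_c = tc / (th - tc)                      # Carnot COP for the two reservoirs
    print(f"Tc={tc:.0f} K, Th={th:.0f} K: Carnot COP={eps_c:.2f}, "
          f"upper bound={cop_upper_bound(eps_c):.2f}")
```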

  10. 49 CFR Appendix B to Part 236 - Risk Assessment Criteria

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...

  11. 49 CFR Appendix B to Part 236 - Risk Assessment Criteria

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...

  12. Estimation variance bounds of importance sampling simulations in digital communication systems

    NASA Technical Reports Server (NTRS)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
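    A minimal sketch of the underlying idea, assuming a Gaussian tail probability as a stand-in for a bit error rate: an importance sampling estimator with a shifted sampling density has a far smaller estimator variance than direct Monte Carlo, which is the quantity the bounds above characterize analytically. The shift choice and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
threshold, n = 4.0, 100_000          # "error" event: standard normal noise exceeds 4

# Direct Monte Carlo
x = rng.normal(size=n)
direct = (x > threshold).astype(float)

# Importance sampling: sample from N(threshold, 1) and reweight by the likelihood ratio
y = rng.normal(loc=threshold, size=n)
weights = np.exp(-threshold * y + 0.5 * threshold**2)    # N(0,1) / N(threshold,1)
is_est = (y > threshold) * weights

print(f"direct MC : p ≈ {direct.mean():.3e}, estimator variance ≈ {direct.var() / n:.3e}")
print(f"importance: p ≈ {is_est.mean():.3e}, estimator variance ≈ {is_est.var() / n:.3e}")
```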

  13. A tight upper bound for quadratic knapsack problems in grid-based wind farm layout optimization

    NASA Astrophysics Data System (ADS)

    Quan, Ning; Kim, Harrison M.

    2018-03-01

    The 0-1 quadratic knapsack problem (QKP) in wind farm layout optimization models possible turbine locations as nodes, and power loss due to wake effects between pairs of turbines as edges in a complete graph. The goal is to select up to a certain number of turbine locations such that the sum of selected node and edge coefficients is maximized. Finding the optimal solution to the QKP is difficult in general, but it is possible to obtain a tight upper bound on the QKP's optimal value which facilitates the use of heuristics to solve QKPs by giving a good estimate of the optimality gap of any feasible solution. This article applies an upper bound method that is especially well-suited to QKPs in wind farm layout optimization due to certain features of the formulation that reduce the computational complexity of calculating the upper bound. The usefulness of the upper bound was demonstrated by assessing the performance of the greedy algorithm for solving QKPs in wind farm layout optimization. The results show that the greedy algorithm produces good solutions within 4% of the optimal value for small to medium sized problems considered in this article.
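    A sketch of the greedy heuristic for this kind of QKP, with a deliberately crude upper bound used only to illustrate how an optimality gap would be reported. The coefficients are made-up stand-ins for turbine yields and wake losses, and the bound shown is far weaker than the specialized one developed in the article.

```python
import numpy as np

def greedy_qkp(node, edge, k):
    """Greedily pick up to k locations maximizing selected node plus edge weights."""
    n, chosen, value = len(node), [], 0.0
    for _ in range(k):
        best_gain, best_j = -np.inf, None
        for j in range(n):
            if j in chosen:
                continue
            gain = node[j] + sum(edge[j, i] for i in chosen)
            if gain > best_gain:
                best_gain, best_j = gain, j
        if best_gain <= 0 and chosen:          # no improving location left
            break
        chosen.append(best_j)
        value += best_gain
    return chosen, value

def crude_upper_bound(node, edge, k):
    """Loose bound: the k best node terms plus the k*(k-1)/2 best nonnegative edge terms."""
    top_nodes = np.sort(node)[::-1][:k].sum()
    pairs = edge[np.triu_indices_from(edge, k=1)]
    top_edges = np.sort(np.maximum(pairs, 0.0))[::-1][: k * (k - 1) // 2].sum()
    return top_nodes + top_edges

rng = np.random.default_rng(0)
n, k = 30, 8
node = rng.uniform(2.0, 3.0, size=n)           # hypothetical stand-alone turbine yields
edge = -rng.uniform(0.0, 0.2, size=(n, n))     # hypothetical pairwise wake-loss penalties
edge = (edge + edge.T) / 2

chosen, value = greedy_qkp(node, edge, k)
ub = crude_upper_bound(node, edge, k)
print(f"greedy value {value:.2f}, crude upper bound {ub:.2f}, gap ≤ {(ub - value) / ub:.1%}")
```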

  14. Bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations

    DOE PAGES

    Azunre, P.

    2016-09-21

    In this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring the solution of auxiliary systems two and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.

  15. An Analytical Framework for Runtime of a Class of Continuous Evolutionary Algorithms.

    PubMed

    Zhang, Yushan; Hu, Guiwu

    2015-01-01

    Although there have been many studies on the runtime of evolutionary algorithms in discrete optimization, relatively few theoretical results have been proposed for continuous optimization, such as evolutionary programming (EP). This paper proposes an analysis of the runtime of two EP algorithms based on Gaussian and Cauchy mutations, using an absorbing Markov chain. Given a constant variation, we calculate the runtime upper bound of special Gaussian mutation EP and Cauchy mutation EP. Our analysis reveals that the upper bounds depend on the number of individuals, the problem dimension n, the search range, and the Lebesgue measure of the optimal neighborhood. Furthermore, we provide conditions under which the average runtime of the considered EP is no more than a polynomial in n: the Lebesgue measure of the optimal neighborhood must be larger than a combination of an exponential term and the given polynomial in n.

  16. Edge connectivity and the spectral gap of combinatorial and quantum graphs

    NASA Astrophysics Data System (ADS)

    Berkolaiko, Gregory; Kennedy, James B.; Kurasov, Pavel; Mugnolo, Delio

    2017-09-01

    We derive a number of upper and lower bounds for the first nontrivial eigenvalue of Laplacians on combinatorial and quantum graphs in terms of the edge connectivity, i.e. the minimal number of edges which need to be removed to make the graph disconnected. On combinatorial graphs, one of the bounds corresponds to a well-known inequality of Fiedler, of which we give a new variational proof. On quantum graphs, the corresponding bound generalizes a recent result of Band and Lévy. All proofs are general enough to yield corresponding estimates for the p-Laplacian and allow us to identify the minimizers. Based on the Betti number of the graph, we also derive upper and lower bounds on all eigenvalues which are ‘asymptotically correct’, i.e. agree with the Weyl asymptotics for the eigenvalues of the quantum graph. In particular, the lower bounds improve the bounds of Friedlander on any given graph for all but finitely many eigenvalues, while the upper bounds improve recent results of Ariturk. Our estimates are also used to derive bounds on the eigenvalues of the normalized Laplacian matrix that improve known bounds of spectral graph theory.
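    A quick numerical check, on a small combinatorial graph, of the classical chain "algebraic connectivity ≤ vertex connectivity ≤ edge connectivity" that the Fiedler-type bound refers to; networkx is an assumed dependency.

```python
import networkx as nx

# Two 5-cliques joined by two bridging edges: edge connectivity 2, small spectral gap
G = nx.disjoint_union(nx.complete_graph(5), nx.complete_graph(5))
G.add_edges_from([(0, 5), (1, 6)])

lam2 = nx.algebraic_connectivity(G)    # second-smallest eigenvalue of the Laplacian
edge_conn = nx.edge_connectivity(G)    # minimal number of edges whose removal disconnects G

print(f"algebraic connectivity λ2 ≈ {lam2:.3f}")
print(f"edge connectivity         = {edge_conn}")
assert lam2 <= edge_conn + 1e-9
```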

  17. On the role of entailment patterns and scalar implicatures in the processing of numerals

    PubMed Central

    Panizza, Daniele; Chierchia, Gennaro; Clifton, Charles

    2009-01-01

    There has been much debate, in both the linguistics and the psycholinguistics literature, concerning numbers and the interpretation of number-denoting determiners ('numerals'). Such debate concerns, in particular, the nature and distribution of upper-bounded ('exact') interpretations vs. lower-bounded ('at-least') construals. In the present paper we show that the interpretation and processing of numerals are affected by the entailment properties of the context in which they occur. Experiment 1 established off-line preferences using a questionnaire. Experiment 2 investigated the processing issue through an eye tracking experiment using a silent reading task. Our results show that the upper-bounded interpretation of numerals occurs more often in an upward entailing context than in a downward entailing context. Reading times of the numeral itself were longer when it was embedded in an upward entailing context than when it was not, indicating that processing resources were required when the context triggered an upper-bounded interpretation. However, reading of a following context that required an upper-bounded interpretation triggered more regressions towards the numeral when it had occurred in a downward entailing context than in an upward entailing one. Such findings show that speakers' interpretation and processing of numerals is systematically affected by the polarity of the sentence in which they occur, and support the hypothesis that the upper-bounded interpretation of numerals is due to a scalar implicature. PMID:20161494

  18. The Upper and Lower Bounds of the Prediction Accuracies of Ensemble Methods for Binary Classification

    PubMed Central

    Wang, Xueyi; Davidson, Nicholas J.

    2011-01-01

    Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracies of ensemble methods for binary classification that have been missed or misinterpreted in the previous literature. First we show the upper and lower bounds of the prediction accuracies (i.e. the best and worst possible prediction accuracies) of ensemble methods. Next we show that an ensemble method can achieve > 0.5 prediction accuracy while its individual classifiers have < 0.5 prediction accuracies. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify the results and show that it is hard to reach the upper and lower bound accuracies with random individual classifiers, so better algorithms need to be developed. PMID:21853162

  19. Static aeroelastic analysis and tailoring of missile control fins

    NASA Technical Reports Server (NTRS)

    Mcintosh, S. C., Jr.; Dillenius, M. F. E.

    1989-01-01

    A concept for enhancing the design of control fins for supersonic tactical missiles is described. The concept makes use of aeroelastic tailoring to create fin designs (for given planforms) that limit the variations in hinge moments that can occur during maneuvers involving high load factors and high angles of attack. It combines supersonic nonlinear aerodynamic load calculations with finite-element structural modeling, static and dynamic structural analysis, and optimization. The problem definition is illustrated. The fin is at least partly made up of a composite material. The layup is fixed, and the orientations of the material principal axes are allowed to vary; these are the design variables. The objective is the magnitude of the difference between the chordwise location of the center of pressure and its desired location, calculated for a given flight condition. Three types of constraints can be imposed: upper bounds on static displacements for a given set of load conditions, lower bounds on specified natural frequencies, and upper bounds on the critical flutter damping parameter at a given set of flight speeds and altitudes. The idea is to seek designs that reduce variations in hinge moments that would otherwise occur. The block diagram describes the operation of the computer program that accomplishes these tasks. There is an option for a single analysis in addition to the optimization.

  20. The impact of missing trauma data on predicting massive transfusion

    PubMed Central

    Trickey, Amber W.; Fox, Erin E.; del Junco, Deborah J.; Ning, Jing; Holcomb, John B.; Brasel, Karen J.; Cohen, Mitchell J.; Schreiber, Martin A.; Bulger, Eileen M.; Phelan, Herb A.; Alarcon, Louis H.; Myers, John G.; Muskat, Peter; Cotton, Bryan A.; Wade, Charles E.; Rahbar, Mohammad H.

    2013-01-01

    INTRODUCTION Missing data are inherent in clinical research and may be especially problematic for trauma studies. This study describes a sensitivity analysis to evaluate the impact of missing data on clinical risk prediction algorithms. Three blood transfusion prediction models were evaluated utilizing an observational trauma dataset with valid missing data. METHODS The PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study included patients requiring ≥ 1 unit of red blood cells (RBC) at 10 participating U.S. Level I trauma centers from July 2009 – October 2010. Physiologic, laboratory, and treatment data were collected prospectively up to 24h after hospital admission. Subjects who received ≥ 10 RBC units within 24h of admission were classified as massive transfusion (MT) patients. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation. A sensitivity analysis for missing data was conducted to determine the upper and lower bounds for correct classification percentages. RESULTS PROMMTT enrolled 1,245 subjects. MT was received by 297 patients (24%). Missing data percentages ranged from 2.2% (heart rate) to 45% (respiratory rate). Proportions of complete cases utilized in the MT prediction models ranged from 41% to 88%. All models demonstrated similar correct classification percentages using complete case analysis and multiple imputation. In the sensitivity analysis, correct classification upper-lower bound ranges per model were 4%, 10%, and 12%. Predictive accuracy for all models using PROMMTT data was lower than reported in the original datasets. CONCLUSIONS Evaluating the accuracy of clinical prediction models with missing data can be misleading, especially with many predictor variables and moderate levels of missingness per variable. The proposed sensitivity analysis describes the influence of missing data on risk prediction algorithms. Reporting upper/lower bounds for percent correct classification may be more informative than multiple imputation, which provided results similar to complete case analysis in this study. PMID:23778514

  1. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.

    2010-08-10

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
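    A hedged sketch in the spirit of the recipe described: fix a count threshold from the background alone (Type I error α), then find the smallest source intensity whose detection probability reaches 1 − β. Poisson counts and the specific α, β values are illustrative assumptions, not those of the paper.

```python
from scipy.stats import poisson

def detection_threshold(background, alpha=0.0013):
    """Smallest count threshold with false-positive probability <= alpha."""
    k = 0
    while poisson.sf(k, background) > alpha:   # P(N > k | background only)
        k += 1
    return k

def upper_limit(background, alpha=0.0013, beta=0.5, step=0.01):
    """Smallest source intensity detected with probability >= 1 - beta."""
    k = detection_threshold(background, alpha)
    s = 0.0
    while poisson.sf(k, background + s) < 1.0 - beta:   # detection power
        s += step
    return k, s

k, s = upper_limit(background=3.0)
print(f"detection threshold: > {k} counts; upper limit on source intensity ≈ {s:.2f} counts")
```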

  2. Physical Uncertainty Bounds (PUB)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughan, Diane Elizabeth; Preston, Dean L.

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  3. Extracting Loop Bounds for WCET Analysis Using the Instrumentation Point Graph

    NASA Astrophysics Data System (ADS)

    Betts, A.; Bernat, G.

    2009-05-01

    Every calculation engine proposed in the literature on Worst-Case Execution Time (WCET) analysis requires upper bounds on loop iterations. Existing mechanisms to procure this information are either error prone, because the bounds are supplied by the end-user, or limited in scope, because automatic analyses target very specific loop structures. In this paper, we present a technique that obtains bounds completely automatically for arbitrary loop structures. In particular, we show how to employ the Instrumentation Point Graph (IPG) to parse traces of execution (generated by an instrumented program) in order to extract bounds relative to any loop-nesting level. With this technique, therefore, non-rectangular dependencies between loops can be captured, allowing more accurate WCET estimates to be calculated. We demonstrate the improvement in accuracy by comparing WCET estimates computed through our HMB framework against those computed with state-of-the-art techniques.

  4. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms

    PubMed Central

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442
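    marathon itself is a C++ library; the numpy-only sketch below merely illustrates the kind of spectral bound it reports, t_mix(ε) ≤ ln(1/(ε·π_min))/(1 − λ*), against a directly computed total mixing time for a small lazy random walk. The chain and ε are arbitrary choices.

```python
import numpy as np

# Lazy random walk on a path with 6 states (symmetric, so the stationary law is uniform)
n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, max(i - 1, 0)] += 0.25
    P[i, min(i + 1, n - 1)] += 0.25
pi = np.full(n, 1.0 / n)
eps = 0.25

# Spectral bound: t_mix(eps) <= ln(1/(eps * pi_min)) / (1 - lambda*)
lam = np.sort(np.abs(np.linalg.eigvalsh(P)))
lam_star = lam[-2]                      # second-largest eigenvalue in absolute value
spectral_bound = np.log(1.0 / (eps * pi.min())) / (1.0 - lam_star)

# Direct total mixing time: worst-case total variation distance over starting states
dist, t = np.eye(n), 0
while max(0.5 * np.abs(dist[i] - pi).sum() for i in range(n)) > eps:
    dist = dist @ P
    t += 1

print(f"total mixing time t_mix({eps}) = {t}, spectral upper bound ≈ {spectral_bound:.1f}")
```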

  5. An upper bound on the radius of a highly electrically conducting lunar core

    NASA Technical Reports Server (NTRS)

    Hobbs, B. A.; Hood, L. L.; Herbert, F.; Sonett, C. P.

    1983-01-01

    Parker's (1980) nonlinear inverse theory for the electromagnetic sounding problem is converted to a form suitable for analysis of lunar day-side transfer function data by: (1) transforming the solution in plane geometry to that in spherical geometry; and (2) transforming the theoretical lunar transfer function in the dipole limit to an apparent resistivity function. The theory is applied to the revised lunar transfer function data set of Hood et al. (1982), which extends in frequency from 10 to the -5th to 10 to the -3rd Hz. On the assumption that an iron-rich lunar core, whether molten or solid, can be represented by a perfect conductor at the minimum sampled frequency, an upper bound of 435 km on the maximum radius of such a core is calculated. This bound is somewhat larger than values of 360-375 km previously estimated from the same data set via forward model calculations because the prior work did not consider all possible mantle conductivity functions.

  6. Perturbative unitarity constraints on gauge portals

    NASA Astrophysics Data System (ADS)

    El Hedri, Sonia; Shepherd, William; Walker, Devin G. E.

    2017-12-01

    Dark matter that was once in thermal equilibrium with the Standard Model is generally prohibited from obtaining all of its mass from the electroweak phase transition. This implies a new scale of physics and mediator particles to facilitate dark matter annihilation. In this work, we focus on dark matter that annihilates through a generic gauge boson portal. We show how partial wave unitarity places upper bounds on the dark gauge boson, dark Higgs and dark matter masses. Outside of well-defined fine-tuned regions, we find an upper bound of 9 TeV for the dark matter mass when the dark Higgs and dark gauge bosons both facilitate the dark matter annihilations. In this scenario, the upper bounds on the dark Higgs and dark gauge boson masses are 10 TeV and 16 TeV, respectively. When only the dark gauge boson facilitates dark matter annihilations, we find upper bounds of 3 TeV and 6 TeV for the dark matter and dark gauge boson, respectively. Overall, using the gauge portal as a template, we describe a method to not only place upper bounds on the dark matter mass but also on the new particles with Standard Model quantum numbers. We briefly discuss the reach of future accelerator, direct and indirect detection experiments for this class of models.

  7. Theoretical investigation of the upper and lower bounds of a generalized dimensionless bearing health indicator

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tsui, Kwok-Leung

    2018-01-01

    Bearing-supported shafts are widely used in various machines. Due to harsh working environments, bearing performance degrades over time. To prevent unexpected bearing failures and accidents, bearing performance degradation assessment has become an emerging topic in recent years. Bearing performance degradation assessment aims to evaluate the current health condition of a bearing through a bearing health indicator. In past years, many signal processing and data mining based methods were proposed to construct bearing health indicators. However, the upper and lower bounds of these bearing health indicators were not theoretically calculated and they strongly depended on historical bearing data including normal and failure data. Besides, most health indicators are dimensional, which means that these health indicators are prone to be affected by varying operating conditions, such as varying speeds and loads. In this paper, based on the principle of squared envelope analysis, we focus on theoretical investigation of bearing performance degradation assessment in the case of additive Gaussian noises, including distribution establishment of the squared envelope, construction of a generalized dimensionless bearing health indicator, and mathematical calculation of the upper and lower bounds of the generalized dimensionless bearing health indicator. Then, analyses of simulated and real bearing run-to-failure data are used as two case studies to illustrate how the generalized dimensionless health indicator works and demonstrate its effectiveness in bearing performance degradation assessment. Results show that the squared envelope follows a noncentral chi-square distribution and the upper and lower bounds of the generalized dimensionless health indicator can be mathematically established. Moreover, the generalized dimensionless health indicator is sensitive to an incipient bearing defect in the process of bearing performance degradation.
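    A minimal sketch of the squared envelope computation itself (band-pass around a resonance, analytic signal via the Hilbert transform, then the envelope spectrum), applied to a simulated defect signal. The sampling rate, frequencies and filter band are arbitrary assumptions; the paper's dimensionless indicator and its chi-square-based bounds are not reproduced.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 20_000                                   # sampling rate (Hz), simulated
t = np.arange(0, 1.0, 1.0 / fs)
fault_freq, resonance = 110.0, 3_000.0        # hypothetical defect and resonance frequencies

# Simulated signal: periodic impacts exciting a resonance, buried in Gaussian noise
impacts = (np.sin(2 * np.pi * fault_freq * t) > 0.995).astype(float)
x = np.convolve(impacts, np.exp(-np.arange(200) / 30.0), mode="same")
x *= np.sin(2 * np.pi * resonance * t)
x += 0.5 * np.random.default_rng(0).standard_normal(len(t))

# Band-pass around the resonance, then squared envelope via the analytic signal
b, a = butter(4, [2_000.0, 4_000.0], btype="bandpass", fs=fs)
xb = filtfilt(b, a, x)
squared_envelope = np.abs(hilbert(xb)) ** 2

# The defect frequency should stand out in the squared envelope spectrum
spec = np.abs(np.fft.rfft(squared_envelope - squared_envelope.mean()))
freqs = np.fft.rfftfreq(len(squared_envelope), d=1.0 / fs)
band = (freqs >= 20) & (freqs <= 500)
peak = freqs[band][np.argmax(spec[band])]
print(f"dominant envelope-spectrum line ≈ {peak:.0f} Hz (simulated defect at {fault_freq:.0f} Hz)")
```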

  8. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.

  9. Eigenvalues of the Wentzell-Laplace operator and of the fourth order Steklov problems

    NASA Astrophysics Data System (ADS)

    Xia, Changyu; Wang, Qiaoling

    2018-05-01

    We prove a sharp upper bound and a lower bound for the first nonzero eigenvalue of the Wentzell-Laplace operator on compact manifolds with boundary, and an isoperimetric inequality for the same eigenvalue in the case where the manifold is a bounded domain in a Euclidean space. We study some fourth order Steklov problems and obtain an isoperimetric upper bound for their first eigenvalue. We also find all the eigenvalues and eigenfunctions for two kinds of fourth order Steklov problems on a Euclidean ball.

  10. On the validity of the Arrhenius equation for electron attachment rate coefficients.

    PubMed

    Fabrikant, Ilya I; Hotop, Hartmut

    2008-03-28

    The validity of the Arrhenius equation for dissociative electron attachment rate coefficients is investigated. A general analysis allows us to obtain estimates of the upper temperature bound for the range of validity of the Arrhenius equation in the endothermic case and both lower and upper bounds in the exothermic case with a reaction barrier. The results of the general discussion are illustrated by numerical examples whereby the rate coefficient, as a function of temperature for dissociative electron attachment, is calculated using the resonance R-matrix theory. In the endothermic case, the activation energy in the Arrhenius equation is close to the threshold energy, whereas in the case of exothermic reactions with an intermediate barrier, the activation energy is found to be substantially lower than the barrier height.

  11. The accuracy of less: Natural bounds explain why quantity decreases are estimated more accurately than quantity increases.

    PubMed

    Chandon, Pierre; Ordabayeva, Nailya

    2017-02-01

    Five studies show that people, including experts such as professional chefs, estimate quantity decreases more accurately than quantity increases. We argue that this asymmetry occurs because physical quantities cannot be negative. Consequently, there is a natural lower bound (zero) when estimating decreasing quantities but no upper bound when estimating increasing quantities, which can theoretically grow to infinity. As a result, the "accuracy of less" disappears (a) when a numerical or a natural upper bound is present when estimating quantity increases, or (b) when people are asked to estimate the (unbounded) ratio of change from 1 size to another for both increasing and decreasing quantities. Ruling out explanations related to loss aversion, symbolic number mapping, and the visual arrangement of the stimuli, we show that the "accuracy of less" influences choice and demonstrate its robustness in a meta-analysis that includes previously published results. Finally, we discuss how the "accuracy of less" may explain asymmetric reactions to the supersizing and downsizing of food portions, some instances of the endowment effect, and asymmetries in the perception of increases and decreases in physical and psychological distance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Evidence for a bound on the lifetime of de Sitter space

    NASA Astrophysics Data System (ADS)

    Freivogel, Ben; Lippert, Matthew

    2008-12-01

    Recent work has suggested a surprising new upper bound on the lifetime of de Sitter vacua in string theory. The bound is parametrically longer than the Hubble time but parametrically shorter than the recurrence time. We investigate whether the bound is satisfied in a particular class of de Sitter solutions, the KKLT vacua. Despite the freedom to make the supersymmetry breaking scale exponentially small, which naively would lead to extremely stable vacua, we find that the lifetime is always less than about exp(10^22) Hubble times, in agreement with the proposed bound. This result, however, is contingent on several estimates and assumptions; in particular, we rely on a conjectural upper bound on the Euler number of the Calabi-Yau fourfolds used in KKLT compactifications.

  13. Adaptive nonsingular fast terminal sliding-mode control for the tracking problem of uncertain dynamical systems.

    PubMed

    Boukattaya, Mohamed; Mezghani, Neila; Damak, Tarak

    2018-06-01

    In this paper, robust and adaptive nonsingular fast terminal sliding-mode (NFTSM) control schemes for the trajectory tracking problem are proposed with known or unknown upper bound of the system uncertainty and external disturbances. The developed controllers take advantage of the NFTSM theory to ensure fast convergence rate, singularity avoidance, and robustness against uncertainties and external disturbances. First, a robust NFTSM controller is proposed which guarantees that the sliding surface and the equilibrium point can be reached in a short finite time from any initial state. Then, in order to cope with the unknown upper bound of the system uncertainty which may occur in practical applications, a new adaptive NFTSM algorithm is developed. One feature of the proposed control laws is their adaptation technique, in which prior knowledge of the parameter uncertainties and disturbances is not needed. However, the adaptive tuning law can estimate the upper bound of these uncertainties using only position and velocity measurements. Moreover, the proposed controller eliminates the chattering effect without losing the robustness property and the precision. Stability analysis is performed using the Lyapunov stability theory, and simulation studies are conducted to verify the effectiveness of the developed control schemes. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Upper bound on the efficiency of certain nonimaging concentrators in the physical-optics model

    NASA Astrophysics Data System (ADS)

    Welford, W. T.; Winston, R.

    1982-09-01

    Upper bounds on the performance of nonimaging concentrators are obtained within the framework of scalar-wave theory by using a simple approach to avoid complex calculations on multiple phase fronts. The approach consists in treating a theoretically perfect image-forming device and postulating that no non-image-forming concentrator can have a better performance than such an ideal image-forming system. The performance of such a system can be calculated according to wave theory, and this will provide, in accordance with the postulate, upper bounds on the performance of nonimaging systems. The method is demonstrated for a two-dimensional compound parabolic concentrator.

  15. Lower and upper bounds for entanglement of Rényi-α entropy.

    PubMed

    Song, Wei; Chen, Lin; Cao, Zhuo-Liang

    2016-12-23

    Entanglement Rényi-α entropy is an entanglement measure. It reduces to the standard entanglement of formation when α tends to 1. We derive analytical lower and upper bounds for the entanglement Rényi-α entropy of arbitrary dimensional bipartite quantum systems. We also demonstrate the application of our bounds to some concrete examples. Moreover, we establish the relation between the entanglement Rényi-α entropy and some other entanglement measures.
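    For a bipartite pure state, the entanglement Rényi-α entropy is the Rényi-α entropy of the reduced density matrix. A small sketch computing it for a partially entangled two-qubit state and checking the α → 1 limit; the bounds derived in the paper are not reproduced.

```python
import numpy as np

def renyi_entropy(rho, alpha):
    """Rényi-α entropy of a density matrix (base-2 logarithms)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    if abs(alpha - 1.0) < 1e-9:                  # von Neumann (α -> 1) limit
        return float(-(evals * np.log2(evals)).sum())
    return float(np.log2((evals ** alpha).sum()) / (1.0 - alpha))

def reduced_state(psi, dim_a, dim_b):
    """Partial trace over subsystem B of a bipartite pure state vector."""
    m = psi.reshape(dim_a, dim_b)
    return m @ m.conj().T

theta = 0.3
psi = np.array([np.cos(theta), 0.0, 0.0, np.sin(theta)])   # cosθ|00> + sinθ|11>
rho_a = reduced_state(psi, 2, 2)

for alpha in (0.5, 0.999999, 2.0):
    print(f"alpha = {alpha:>8}: E_alpha ≈ {renyi_entropy(rho_a, alpha):.4f}")
```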

  16. Global a priori estimates for the inhomogeneous Landau equation with moderately soft potentials

    NASA Astrophysics Data System (ADS)

    Cameron, Stephen; Silvestre, Luis; Snelson, Stanley

    2018-05-01

    We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.

  17. WINDOWS: a program for the analysis of spectral data foil activation measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stallmann, F.W.; Eastham, J.F.; Kam, F.B.K.

    The computer program WINDOWS, together with its subroutines, is described for the analysis of neutron spectral data from foil activation measurements. In particular, it covers the unfolding of the neutron differential spectrum, estimated window and detector contributions, upper and lower bounds for an integral response, and group fluxes obtained from neutron transport calculations. 116 references. (JFP)

  18. Global solutions of restricted open-shell Hartree-Fock theory from semidefinite programming with applications to strongly correlated quantum systems.

    PubMed

    Veeraraghavan, Srikant; Mazziotti, David A

    2014-03-28

    We present a density matrix approach for computing global solutions of restricted open-shell Hartree-Fock theory, based on semidefinite programming (SDP), that gives upper and lower bounds on the Hartree-Fock energy of quantum systems. While wave function approaches to Hartree-Fock theory yield an upper bound to the Hartree-Fock energy, we derive a semidefinite relaxation of Hartree-Fock theory that yields a rigorous lower bound on the Hartree-Fock energy. We also develop an upper-bound algorithm in which Hartree-Fock theory is cast as a SDP with a nonconvex constraint on the rank of the matrix variable. Equality of the upper- and lower-bound energies guarantees that the computed solution is the globally optimal solution of Hartree-Fock theory. The work extends a previously presented method for closed-shell systems [S. Veeraraghavan and D. A. Mazziotti, Phys. Rev. A 89, 010502-R (2014)]. For strongly correlated systems the SDP approach provides an alternative to the locally optimized Hartree-Fock energies and densities with a certificate of global optimality. Applications are made to the potential energy curves of C2, CN, Cr2, and NO2.

  19. Perturbative unitarity constraints on gauge portals

    DOE PAGES

    El Hedri, Sonia; Shepherd, William; Walker, Devin G. E.

    2017-10-03

    Dark matter that was once in thermal equilibrium with the Standard Model is generally prohibited from obtaining all of its mass from the electroweak phase transition. This implies a new scale of physics and mediator particles to facilitate dark matter annihilation. In this work, we focus on dark matter that annihilates through a generic gauge boson portal. We show how partial wave unitarity places upper bounds on the dark gauge boson, dark Higgs and dark matter masses. Outside of well-defined fine-tuned regions, we find an upper bound of 9 TeV for the dark matter mass when the dark Higgs and dark gauge bosons both facilitate the dark matter annihilations. In this scenario, the upper bounds on the dark Higgs and dark gauge boson masses are 10 TeV and 16 TeV, respectively. When only the dark gauge boson facilitates dark matter annihilations, we find upper bounds of 3 TeV and 6 TeV for the dark matter and dark gauge boson, respectively. Overall, using the gauge portal as a template, we describe a method to not only place upper bounds on the dark matter mass but also on the new particles with Standard Model quantum numbers. Here, we briefly discuss the reach of future accelerator, direct and indirect detection experiments for this class of models.

  20. Perturbative unitarity constraints on gauge portals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Hedri, Sonia; Shepherd, William; Walker, Devin G. E.

    Dark matter that was once in thermal equilibrium with the Standard Model is generally prohibited from obtaining all of its mass from the electroweak phase transition. This implies a new scale of physics and mediator particles to facilitate dark matter annihilation. In this work, we focus on dark matter that annihilates through a generic gauge boson portal. We show how partial wave unitarity places upper bounds on the dark gauge boson, dark Higgs and dark matter masses. Outside of well-defined fine-tuned regions, we find an upper bound of 9 TeV for the dark matter mass when the dark Higgs and dark gauge bosons both facilitate the dark matter annihilations. In this scenario, the upper bounds on the dark Higgs and dark gauge boson masses are 10 TeV and 16 TeV, respectively. When only the dark gauge boson facilitates dark matter annihilations, we find upper bounds of 3 TeV and 6 TeV for the dark matter and dark gauge boson, respectively. Overall, using the gauge portal as a template, we describe a method to not only place upper bounds on the dark matter mass but also on the new particles with Standard Model quantum numbers. Here, we briefly discuss the reach of future accelerator, direct and indirect detection experiments for this class of models.

  1. Noisy metrology: a saturable lower bound on quantum Fisher information

    NASA Astrophysics Data System (ADS)

    Yousefjani, R.; Salimi, S.; Khorashad, A. S.

    2017-06-01

    In order to provide a guaranteed precision and a more accurate judgement about the true value of the Cramér-Rao bound and its scaling behavior, an upper bound (equivalently a lower bound on the quantum Fisher information) for precision of estimation is introduced. Unlike the bounds previously introduced in the literature, the upper bound is saturable and yields a practical instruction to estimate the parameter through preparing the optimal initial state and optimal measurement. The bound is based on the underlying dynamics, and its calculation is straightforward and requires only the matrix representation of the quantum maps responsible for encoding the parameter. This allows us to apply the bound to open quantum systems whose dynamics are described by either semigroup or non-semigroup maps. Reliability and efficiency of the method to predict the ultimate precision limit are demonstrated by three main examples.

  2. Standard Deviation for Small Samples

    ERIC Educational Resources Information Center

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
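    The ERIC abstract is truncated, so the exact bound is not quoted. One commonly cited bound of this type expresses the sample standard deviation (n − 1 divisor) in terms of the range R as s ≤ (R/2)·√(n/(n − 1)); a brute-force check over small integer samples, shown below, confirms it. This is an illustration, not necessarily the bound in the article.

```python
import itertools
import math
import statistics

def range_bound(sample):
    """Range-based upper bound s <= (R/2) * sqrt(n/(n-1)) on the sample SD."""
    n = len(sample)
    r = max(sample) - min(sample)
    return 0.5 * r * math.sqrt(n / (n - 1))

# Brute-force check over all small integer samples of sizes 3 and 4
for n in (3, 4):
    worst = 0.0
    for sample in itertools.product(range(0, 6), repeat=n):
        if len(set(sample)) == 1:
            continue                       # SD and bound are both zero
        ratio = statistics.stdev(sample) / range_bound(sample)
        worst = max(worst, ratio)
    print(f"n={n}: max observed s / bound = {worst:.4f}  (never exceeds 1)")
```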

  3. Bounds for Asian basket options

    NASA Astrophysics Data System (ADS)

    Deelstra, Griselda; Diallo, Ibrahima; Vanmaele, Michèle

    2008-09-01

    In this paper we propose pricing bounds for European-style discrete arithmetic Asian basket options in a Black and Scholes framework. We start from methods used for basket options and Asian options. First, we use the general approach for deriving upper and lower bounds for stop-loss premia of sums of non-independent random variables as in Kaas et al. [Upper and lower bounds for sums of random variables, Insurance Math. Econom. 27 (2000) 151-168] or Dhaene et al. [The concept of comonotonicity in actuarial science and finance: theory, Insurance Math. Econom. 31(1) (2002) 3-33]. We generalize the methods in Deelstra et al. [Pricing of arithmetic basket options by conditioning, Insurance Math. Econom. 34 (2004) 55-57] and Vanmaele et al. [Bounds for the price of discrete sampled arithmetic Asian options, J. Comput. Appl. Math. 185(1) (2006) 51-90]. Afterwards we show how to derive an analytical closed-form expression for a lower bound in the non-comonotonic case. Finally, we derive upper bounds for Asian basket options by applying techniques as in Thompson [Fast narrow bounds on the value of Asian options, Working Paper, University of Cambridge, 1999] and Lord [Partially exact and bounded approximations for arithmetic Asian options, J. Comput. Finance 10 (2) (2006) 1-52]. Numerical results are included and on the basis of our numerical tests, we explain which method we recommend depending on moneyness and time-to-maturity.

  4. Finite-error metrological bounds on multiparameter Hamiltonian estimation

    NASA Astrophysics Data System (ADS)

    Kura, Naoto; Ueda, Masahito

    2018-01-01

    Estimation of multiple parameters in an unknown Hamiltonian is investigated. We present upper and lower bounds on the time required to complete the estimation within a prescribed error tolerance δ . The lower bound is given on the basis of the Cramér-Rao inequality, where the quantum Fisher information is bounded by the squared evolution time. The upper bound is obtained by an explicit construction of estimation procedures. By comparing the cases with different numbers of Hamiltonian channels, we also find that the few-channel procedure with adaptive feedback and the many-channel procedure with entanglement are equivalent in the sense that they require the same amount of time resource up to a constant factor.

  5. Future trends in computer waste generation in India.

    PubMed

    Dwivedy, Maheshwar; Mittal, R K

    2010-11-01

    The objective of this paper is to estimate the future generation of computer waste in India and to subsequently analyze its flow at the end of the useful phase. For this purpose, the study utilizes the logistic model-based approach proposed by Yang and Williams to forecast future trends in computer waste. The model estimates the future projection of the computer penetration rate utilizing the first lifespan distribution and historical sales data. A bounding analysis on the future carrying capacity was simulated using the three-parameter logistic curve. The observed obsolete generation quantities from the extrapolated penetration rates are then used to model the disposal phase. The results of the bounding analysis indicate that in the year 2020, around 41-152 million units of computers will become obsolete. The obsolete computer generation quantities are then used to estimate the End-of-Life outflows by utilizing a time-series multiple lifespan model. Even a conservative estimate of the future recycling capacity of PCs will reach upwards of 30 million units during 2025. Apparently, more than 150 million units could potentially be recycled in the upper bound case. However, considering significant future investment in the e-waste recycling sector from all stakeholders in India, we propose a logistic growth in the recycling rate and estimate the required recycling capacity at between 60 and 400 million units for the lower and upper bound cases during 2025. Finally, we compare the future obsolete PC generation amounts of the US and India. Copyright © 2010 Elsevier Ltd. All rights reserved.
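    A toy numpy sketch of the general approach described: a logistic penetration curve for units in use combined with a crude lifespan shift to project obsolete units, run for a low and a high carrying-capacity assumption. All parameter values are invented for illustration and are not the paper's estimates.

```python
import numpy as np

years = np.arange(1995, 2031)

def logistic(t, carrying_capacity, growth_rate, midpoint_year):
    """Units in use (millions) following a logistic diffusion curve."""
    return carrying_capacity / (1.0 + np.exp(-growth_rate * (t - midpoint_year)))

def obsolete_units(in_use, mean_lifespan=5):
    """Crude obsolescence model: units entering use become obsolete one mean
    lifespan later (a stand-in for a full lifespan distribution)."""
    new_units = np.diff(in_use, prepend=in_use[0])
    obsolete = np.zeros_like(in_use)
    obsolete[mean_lifespan:] = new_units[:-mean_lifespan]
    return obsolete

# Bounding analysis: a low and a high assumption on the carrying capacity
scenarios = {
    "lower bound": logistic(years, carrying_capacity=200.0, growth_rate=0.25, midpoint_year=2015),
    "upper bound": logistic(years, carrying_capacity=750.0, growth_rate=0.25, midpoint_year=2018),
}
for label, in_use in scenarios.items():
    obs = obsolete_units(in_use)
    i2020 = np.where(years == 2020)[0][0]
    print(f"{label}: ≈ {obs[i2020]:.0f} million units become obsolete in 2020")
```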

  6. Reduced conservatism in stability robustness bounds by state transformation

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.; Liang, Z.

    1986-01-01

    This note addresses the issue of 'conservatism' in the time domain stability robustness bounds obtained by the Liapunov approach. A state transformation is employed to improve the upper bounds on the linear time-varying perturbation of an asymptotically stable linear time-invariant system for robust stability. This improvement is due to the variance of the conservatism of the Liapunov approach with respect to the basis of the vector space in which the Liapunov function is constructed. Improved bounds are obtained, using a transformation, on elemental and vector norms of perturbations (i.e., structured perturbations) as well as on a matrix norm of perturbations (i.e., unstructured perturbations). For the case of a diagonal transformation, an algorithm is proposed to find the 'optimal' transformation. Several examples are presented to illustrate the proposed analysis.

  7. An analysis of spectral envelope-reduction via quadratic assignment problems

    NASA Technical Reports Server (NTRS)

    George, Alan; Pothen, Alex

    1994-01-01

    A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size is described. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. In this paper, we provide an analysis of the spectral envelope reduction algorithm. We describe related 1- and 2-sum problems; the former is related to the envelope size, while the latter is related to an upper bound on the work involved in an envelope Cholesky factorization scheme. We formulate these two problems as quadratic assignment problems, and then study the 2-sum problem in more detail. We obtain lower bounds on the 2-sum by considering a projected quadratic assignment problem, and then show that finding a permutation matrix closest to an orthogonal matrix attaining one of the lower bounds justifies the spectral envelope reduction algorithm. The lower bound on the 2-sum is seen to be tight for reasonably 'uniform' finite element meshes. We also obtain asymptotically tight lower bounds on the envelope size for certain classes of meshes.
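    A sketch of the spectral reordering step itself, under the simplification of a unit-weight Laplacian: sort vertices by the Fiedler vector and compare the envelope size before and after. The 1-/2-sum and quadratic assignment analysis of the paper is not reproduced.

```python
import numpy as np

def envelope_size(A):
    """Sum over rows of the distance from the first nonzero to the diagonal."""
    total = 0
    for i in range(A.shape[0]):
        nz = np.nonzero(A[i, : i + 1])[0]
        if nz.size:
            total += i - nz[0]
    return total

def spectral_order(A):
    """Order vertices by the Fiedler vector of the graph Laplacian of A."""
    adj = (A != 0).astype(float)
    np.fill_diagonal(adj, 0.0)
    lap = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(lap)
    return np.argsort(vecs[:, 1])          # second-smallest eigenvector (Fiedler vector)

# A sparse symmetric matrix: a path (banded) structure scrambled by a random permutation
rng = np.random.default_rng(3)
n = 40
perm = rng.permutation(n)
A = np.eye(n)
for i in range(n - 1):
    A[perm[i], perm[i + 1]] = A[perm[i + 1], perm[i]] = 1.0

order = spectral_order(A)
A_reordered = A[np.ix_(order, order)]
print("envelope before:", envelope_size(A))
print("envelope after :", envelope_size(A_reordered))
```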

  8. Investigations of Tissue-Level Mechanisms of Primary Blast Injury Through Modeling, Simulation, Neuroimaging and Neuropathological Studies

    DTIC Science & Technology

    2012-07-10

    ... materials used, the complexity of the human anatomy, manufacturing limitations, and analysis capability prohibit exactly matching surrogate material ... upper and lower bounds for possible loading behaviour. Although it is impossible to exactly match the human anatomy according to mechanical ...

  9. ARES I-X USS Fracture Analysis Loads Spectra Development

    NASA Technical Reports Server (NTRS)

    Larsen, Curtis; Mackey, Alden

    2008-01-01

    This report describes the development of a set of bounding load spectra for the ARES I-X launch vehicle. These load spectra are used in the determination of the critical initial flaw size (CIFS) of the welds in the ARES I-X upper stage simulator (USS).

  10. An Upper Bound on High Speed Satellite Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on the high speed satellite collision probability, Pc, have been investigated. Previous methods assume that an individual position error covariance matrix is available for each object, the two matrices being combined into a single relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum Pc. If error covariance information is available for only one of the two objects, either some default shape must be used or nothing can be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but potentially useful Pc upper bound.
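    A rough sketch of the underlying calculation: Pc is the integral of the relative-position Gaussian over the combined hard-body disc in the encounter plane, and when one covariance is unknown a family of assumed covariances can be scanned and the maximum taken. The spherical family, grid integration, and all numbers below are illustrative assumptions, not the method of the report.

```python
import numpy as np

def collision_probability(miss, cov, radius, ngrid=201):
    """Integrate a 2D Gaussian (mean `miss`, covariance `cov`) over the disc of
    radius `radius` centred at the origin of the encounter plane."""
    xs = np.linspace(-radius, radius, ngrid)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= radius**2
    d = np.stack([X - miss[0], Y - miss[1]], axis=-1)
    quad = np.einsum("...i,ij,...j->...", d, np.linalg.inv(cov), d)
    density = np.exp(-0.5 * quad) / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    cell = (xs[1] - xs[0]) ** 2
    return float((density * inside).sum() * cell)

miss = np.array([300.0, 100.0])              # relative miss vector (m), assumed
known_cov = np.diag([200.0**2, 80.0**2])     # covariance of the object we do know (m^2)
radius = 20.0                                # combined hard-body radius (m)

# Scan a scale factor on an assumed spherical covariance for the unknown object
best = max(
    collision_probability(miss, known_cov + (sigma**2) * np.eye(2), radius)
    for sigma in np.linspace(1.0, 500.0, 100)
)
print(f"approximate Pc upper bound over the scanned family ≈ {best:.2e}")
```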

  11. Long-distance measurement-device-independent quantum key distribution with coherent-state superpositions.

    PubMed

    Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B

    2014-09-15

    Measurement-device-independent quantum key distribution (MDI-QKD) with the decoy-state method is believed to securely defeat various hacking attacks in practical quantum key distribution systems. Recently, coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. In this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound on the yield and the upper bound on the bit error rate. We exploit standard statistical analysis and the Chernoff bound to perform the parameter estimation; the Chernoff bound provides good bounds in long-distance MDI-QKD. Our results show that with CSS, both the secure transmission distance and the secure key rate are significantly improved compared with those of weak coherent states in the finite-data case.

  12. Interstage Flammability Analysis Approach

    NASA Technical Reports Server (NTRS)

    Little, Jeffrey K.; Eppard, William M.

    2011-01-01

    The Interstage of the Ares I launch platform houses several key components which are on standby during First Stage operation: the Reaction Control System (ReCS), the Upper Stage (US) Thrust Vector Control (TVC) and the J-2X with the Main Propulsion System (MPS) propellant feed system. Therefore potentially dangerous leaks of propellants could develop. The Interstage leaks analysis addresses the concerns of localized mixing of hydrogen and oxygen gases to produce deflagration zones in the Interstage of the Ares I launch vehicle during First Stage operation. This report details the approach taken to accomplish the analysis. Specified leakage profiles and actual flammability results are not presented due to proprietary and security restrictions. The interior volume formed by the Interstage walls, bounding interfaces with the Upper and First Stages, and surrounding the J2-X engine was modeled using Loci-CHEM to assess the potential for flammable gas mixtures to develop during First Stage operations. The transient analysis included a derived flammability indicator based on mixture ratios to maintain achievable simulation times. Validation of results was based on a comparison to Interstage pressure profiles outlined in prior NASA studies. The approach proved useful in the bounding of flammability risk in supporting program hazard reviews.

  13. Tri-critical behavior of the Blume-Emery-Griffiths model on a Kagomé lattice: Effective-field theory and Rigorous bounds

    NASA Astrophysics Data System (ADS)

    Santos, Jander P.; Sá Barreto, F. C.

    2016-01-01

    Spin correlation identities for the Blume-Emery-Griffiths model on the Kagomé lattice are derived and, combined with rigorous correlation inequalities, lead to upper bounds on the critical temperature. From the spin correlation identities, the mean-field approximation and the effective-field approximation results for the magnetization, the critical frontiers and the tricritical points are obtained. The rigorous upper bounds on the critical temperature improve on the results of those effective-field-type theories.

  14. Bounds for the Z-spectral radius of nonnegative tensors.

    PubMed

    He, Jun; Liu, Yan-Min; Ke, Hua; Tian, Jun-Kang; Li, Xiang

    2016-01-01

    In this paper, we have proposed some new upper bounds for the largest Z-eigenvalue of an irreducible weakly symmetric and nonnegative tensor, which improve the known upper bounds obtained in Chang et al. (Linear Algebra Appl 438:4166-4182, 2013), Song and Qi (SIAM J Matrix Anal Appl 34:1581-1595, 2013), He and Huang (Appl Math Lett 38:110-114, 2014), Li et al. (J Comput Anal Appl 483:182-199, 2015), He (J Comput Anal Appl 20:1290-1301, 2016).

  15. Morphological representation of order-statistics filters.

    PubMed

    Charif-Chefchaouni, M; Schonfeld, D

    1995-01-01

    We propose a comprehensive theory for the morphological bounds on order-statistics filters (and their repeated iterations). Conditions are derived for morphological openings and closings to serve as bounds (lower and upper, respectively) on order-statistics filters (and their repeated iterations). Under various assumptions, morphological open-closings and close-openings are also shown to serve as (tighter) bounds (lower and upper, respectively) on iterations of order-statistics filters. Simulations of the application of the results presented to image restoration are finally provided.
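    A numerical illustration with scipy.ndimage: for a flat window, erosion (local minimum) and dilation (local maximum) always bracket an order-statistics filter such as the median, while the sharper opening/closing bounds of the paper hold only under its stated conditions, so here they are simply tabulated for comparison on a random image.

```python
import numpy as np
from scipy.ndimage import (grey_erosion, grey_dilation, grey_opening,
                           grey_closing, median_filter)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(float)
win = 3                                   # flat 3x3 window / structuring element

ero = grey_erosion(image, size=win)       # local minimum
dil = grey_dilation(image, size=win)      # local maximum
med = median_filter(image, size=win)      # an order-statistics filter
opened = grey_opening(image, size=win)
closed = grey_closing(image, size=win)

# Erosion and dilation always bracket any order-statistics filter on the same window
assert np.all(ero <= med) and np.all(med <= dil)

# The opening/closing bounds hold only under the paper's conditions; just tabulate here
print(f"opening <= median on {np.mean(opened <= med):.1%} of pixels")
print(f"median <= closing on {np.mean(med <= closed):.1%} of pixels")
```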

  16. Wave height estimates from pressure and velocity data at an intermediate depth in the presence of uniform currents

    NASA Astrophysics Data System (ADS)

    Basu, Biswajit

    2017-12-01

    Bounds on estimates of wave heights (valid for large amplitudes) from pressure and flow measurements at an arbitrary intermediate depth have been provided. Two-dimensional irrotational steady water waves over a flat bed with a finite depth in the presence of underlying uniform currents have been considered in the analysis. Five different upper bounds based on a combination of pressure and velocity field measurements have been derived, though there is only one available lower bound on the wave height in the case of the speed of current greater than or less than the wave speed. This article is part of the theme issue 'Nonlinear water waves'.

  17. A passivity criterion for sampled-data bilateral teleoperation systems.

    PubMed

    Jazayeri, Ali; Tavakoli, Mahdi

    2013-01-01

    A teleoperation system consists of a teleoperator, a human operator, and a remote environment. Conditions involving system and controller parameters that ensure the teleoperator passivity can serve as control design guidelines to attain maximum teleoperation transparency while maintaining system stability. In this paper, sufficient conditions for teleoperator passivity are derived for when position error-based controllers are implemented in discrete-time. This new analysis is necessary because discretization causes energy leaks and does not necessarily preserve the passivity of the system. The proposed criterion for sampled-data teleoperator passivity imposes lower bounds on the teleoperator's robots dampings, an upper bound on the sampling time, and bounds on the control gains. The criterion is verified through simulations and experiments.

  18. Bounds on the information rate of quantum-secret-sharing schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarvepalli, Pradeep

    An important metric of the performance of a quantum-secret-sharing scheme is its information rate. Beyond the fact that the information rate is upper-bounded by one, very little is known in terms of bounds on the information rate of quantum-secret-sharing schemes. Furthermore, not every scheme can be realized with rate one. In this paper we derive upper bounds for the information rates of quantum-secret-sharing schemes. We show that there exist quantum access structures on n players for which the information rate cannot be better than O((log_2 n)/n). These results are the quantum analogues of the bounds for classical-secret-sharing schemes proved by Csirmaz.

  19. The upper bound to the Relative Reporting Ratio—a measure of the impact of the violation of hidden assumptions underlying some disproportionality methods used in signal detection

    PubMed Central

    Van Holle, Lionel; Bauchau, Vincent

    2014-01-01

    Purpose For disproportionality measures based on the Relative Reporting Ratio (RRR), such as the Information Component (IC) and the Empirical Bayesian Geometrical Mean (EBGM), each product and event is assumed to represent a negligible fraction of the spontaneous report database (SRD). Here, we provide tools that allow signal detection experts to assess the consequence of the violation of this assumption on their specific SRD. Methods For each product–event pair (P–E), a worst-case scenario associated all the reported events of interest with the product of interest. The values of the RRR under this scenario were measured for different sets of stratification factors using the GlaxoSmithKline vaccines SRD. These values represent the upper bound that the RRR cannot exceed, whatever the true strength of association. Results Depending on the choice of stratification factors, the RRR could not exceed an upper bound of 2 for up to 2.4% of the P–Es. For Engerix™, 23.4% of all reports in the SRD, the RRR could not exceed an upper bound of 2 for up to 13.8% of pairs. For the P–E Rotarix™-Intussusception, the choice of stratification factors affected the upper bound on the RRR: from 52.5 for an unstratified RRR to 2.0 for a fully stratified RRR. Conclusions The quantification of the upper bound can indicate whether measures such as EBGM, IC, or RRR can be used for SRDs in which products or events represent a non-negligible fraction of the entire SRD. In addition, at the level of the product or P–E, it can also highlight the detrimental impact of overstratification. © 2014 The Authors. Pharmacoepidemiology and Drug Safety published by John Wiley & Sons, Ltd. PMID:24395594

  20. Bounds of memory strength for power-law series.

    PubMed

    Guo, Fangjian; Yang, Dan; Yang, Zimo; Zhao, Zhi-Dan; Zhou, Tao

    2017-05-01

    Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents α. By permuting the independently drawn samples from a power-law distribution, we present nontrivial bounds on the memory strength (first-order autocorrelation) as a function of α, which are markedly different from the ordinary ±1 bounds for Gaussian or uniform distributions. When 1<α≤3, as α grows bigger, the upper bound increases from 0 to +1 while the lower bound remains 0; when α>3, the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on the posts on Twitter, ratings of MovieLens, calling records of the mobile operator Orange, and the browsing behavior of Taobao, we find that empirical power-law-distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks and challenge the validity of measures such as autocorrelation and assortativity coefficient in heterogeneous systems.
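
    The constraint is easy to probe numerically. The sketch below, a rough illustration rather than the paper's derivation, draws i.i.d. samples whose density falls off as x^(-α) and compares the lag-1 autocorrelation of a random permutation (memory near 0) with that of the fully sorted arrangement, a simple proxy for a maximally trend-like ordering; the exponent and sample size are arbitrary.

```python
# Numerical illustration: for i.i.d. power-law samples, the memory strength
# (lag-1 autocorrelation) attainable by permuting the samples is limited.
# We compare a random permutation with the sorted arrangement, a simple
# proxy for a strongly correlated ordering. alpha and n are arbitrary.
import numpy as np

def lag1_autocorr(x):
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

rng = np.random.default_rng(0)
alpha = 2.5                                       # pdf exponent, 1 < alpha <= 3 regime
n = 100_000
samples = rng.pareto(alpha - 1.0, size=n) + 1.0   # pdf ~ x**(-alpha) for x >= 1

print("shuffled ordering:", round(lag1_autocorr(rng.permutation(samples)), 3))
print("sorted ordering  :", round(lag1_autocorr(np.sort(samples)), 3))
# For heavy tails (alpha not much above 1) even the sorted value typically
# sits visibly below +1, in line with the nontrivial upper bound above.
```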

  1. Bounds of memory strength for power-law series

    NASA Astrophysics Data System (ADS)

    Guo, Fangjian; Yang, Dan; Yang, Zimo; Zhao, Zhi-Dan; Zhou, Tao

    2017-05-01

    Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents α. By permuting the independently drawn samples from a power-law distribution, we present nontrivial bounds on the memory strength (first-order autocorrelation) as a function of α, which are markedly different from the ordinary ±1 bounds for Gaussian or uniform distributions. When 1<α≤3, as α grows bigger, the upper bound increases from 0 to +1 while the lower bound remains 0; when α>3, the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on the posts on Twitter, ratings of MovieLens, calling records of the mobile operator Orange, and the browsing behavior of Taobao, we find that empirical power-law-distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks and challenge the validity of measures such as autocorrelation and assortativity coefficient in heterogeneous systems.

  2. Bound of dissipation on a plane Couette dynamo

    NASA Astrophysics Data System (ADS)

    Alboussière, Thierry

    2009-06-01

    Variational turbulence is among the few approaches providing rigorous results in turbulence. In addition, it addresses a question of direct practical interest, namely, the rate of energy dissipation. Unfortunately, only an upper bound is obtained as a larger functional space than the space of solutions to the Navier-Stokes equations is searched. Yet, in some cases, this upper bound is in good agreement with experimental results in terms of order of magnitude and power law of the imposed Reynolds number. In this paper, the variational approach to turbulence is extended to the case of dynamo action and an upper bound is obtained for the global dissipation rate (viscous and Ohmic). A simple plane Couette flow is investigated. For low magnetic Prandtl number Pm fluids, the upper bound of energy dissipation is that of classical turbulence (i.e., proportional to the cubic power of the shear velocity) for magnetic Reynolds numbers below Pm⁻¹ and follows a steeper evolution for magnetic Reynolds numbers above Pm⁻¹ (i.e., proportional to the shear velocity to the power of 4) in the case of electrically insulating walls. However, the effect of wall conductance is crucial: for a given value of wall conductance, there is a value for the magnetic Reynolds number above which energy dissipation cannot be bounded. This limiting magnetic Reynolds number is inversely proportional to the square root of the conductance of the wall. Implications in terms of energy dissipation in experimental and natural dynamos are discussed.

  3. Limitations of the background field method applied to Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Nobili, Camilla; Otto, Felix

    2017-09-01

    We consider Rayleigh-Bénard convection as modeled by the Boussinesq equations, in the case of infinite Prandtl numbers and with no-slip boundary condition. There is a broad interest in bounds of the upwards heat flux, as given by the Nusselt number Nu, in terms of the forcing via the imposed temperature difference, as given by the Rayleigh number in the turbulent regime Ra ≫ 1. In several studies, the background field method applied to the temperature field has been used to provide upper bounds on Nu in terms of Ra. In these applications, the background field method comes in the form of a variational problem where one optimizes a stratified temperature profile subject to a certain stability condition; the method is believed to capture the marginal stability of the boundary layer. The best available upper bound via this method is Nu ≲ Ra^(1/3)(ln Ra)^(1/15); it proceeds via the construction of a stable temperature background profile that increases logarithmically in the bulk. In this paper, we show that the background temperature field method cannot provide a tighter upper bound in terms of the power of the logarithm. However, by another method, one does obtain the tighter upper bound Nu ≲ Ra^(1/3)(ln ln Ra)^(1/3), so that the result of this paper implies that the background temperature field method is unphysical in the sense that it cannot provide the optimal bound.

  4. Upper-Bound Estimates Of SEU in CMOS

    NASA Technical Reports Server (NTRS)

    Edmonds, Larry D.

    1990-01-01

    The theory of single-event upsets (SEU) (changes in logic state caused by energetic charged subatomic particles) in complementary metal oxide/semiconductor (CMOS) logic devices is extended to provide upper-bound estimates of SEU rates when only limited experimental information is available and the configuration and dimensions of the SEU-sensitive regions of the devices are unknown. The approach is based partly on the chord-length-distribution method.

  5. An upper bound on the second order asymptotic expansion for the quantum communication cost of state redistribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Datta, Nilanjana, E-mail: n.datta@statslab.cam.ac.uk; Hsieh, Min-Hsiu, E-mail: Min-Hsiu.Hsieh@uts.edu.au; Oppenheim, Jonathan, E-mail: j.oppenheim@ucl.ac.uk

    State redistribution is the protocol in which given an arbitrary tripartite quantum state, with two of the subsystems initially being with Alice and one being with Bob, the goal is for Alice to send one of her subsystems to Bob, possibly with the help of prior shared entanglement. We derive an upper bound on the second order asymptotic expansion for the quantum communication cost of achieving state redistribution with a given finite accuracy. In proving our result, we also obtain an upper bound on the quantum communication cost of this protocol in the one-shot setting, by using the protocol of coherent state merging as a primitive.

  6. An upper-bound assessment of the benefits of reducing perchlorate in drinking water.

    PubMed

    Lutter, Randall

    2014-10-01

    The Environmental Protection Agency plans to issue new federal regulations to limit drinking water concentrations of perchlorate, which occurs naturally and results from the combustion of rocket fuel. This article presents an upper-bound estimate of the potential benefits of alternative maximum contaminant levels for perchlorate in drinking water. The results suggest that the economic benefits of reducing perchlorate concentrations in drinking water are likely to be low, i.e., under $2.9 million per year nationally, for several reasons. First, the prevalence of detectable perchlorate in public drinking water systems is low. Second, the population especially sensitive to effects of perchlorate, pregnant women who are moderately iodide deficient, represents a minority of all pregnant women. Third, and perhaps most importantly, reducing exposure to perchlorate in drinking water is a relatively ineffective way of increasing iodide uptake, a crucial step linking perchlorate to health effects of concern. © 2014 Society for Risk Analysis.

  7. Fault-tolerant clock synchronization validation methodology. [in computer systems

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
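
    The stochastic step described above, estimating the probability that the upper bound on the clock read error is exceeded, can be sketched as a simple exceedance count with an exact binomial confidence limit. The measured samples, the bound value, and the confidence level below are all hypothetical.

```python
# Sketch: estimate the probability that the assumed upper bound on the clock
# read error is exceeded, from measured samples, with a Clopper-Pearson 95%
# upper confidence limit. Samples and bound are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
read_errors = np.abs(rng.normal(0.0, 5e-6, size=20_000))   # measured read errors [s]
bound = 20e-6                                               # assumed upper bound [s]

k = int(np.sum(read_errors > bound))   # observed exceedances
n = read_errors.size
p_hat = k / n

# Exact one-sided 95% upper confidence limit on the exceedance probability
p_upper = stats.beta.ppf(0.95, k + 1, n - k) if k < n else 1.0

print(f"observed exceedances: {k}/{n}  (p_hat = {p_hat:.2e})")
print(f"95% upper confidence limit: {p_upper:.2e}")
# It is this exceedance probability that gets propagated into the detailed
# reliability analysis of the synchronization subsystem.
```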

  8. Solving Open Job-Shop Scheduling Problems by SAT Encoding

    NASA Astrophysics Data System (ADS)

    Koshimura, Miyuki; Nabeshima, Hidetomo; Fujita, Hiroshi; Hasegawa, Ryuzo

    This paper tries to solve open Job-Shop Scheduling Problems (JSSP) by translating them into Boolean Satisfiability Testing Problems (SAT). The encoding method is essentially the same as the one proposed by Crawford and Baker. The open problems are ABZ8, ABZ9, YN1, YN2, YN3, and YN4. We proved that the best known upper bounds 678 of ABZ9 and 884 of YN1 are indeed optimal. We also improved the upper bound of YN2 and lower bounds of ABZ8, YN2, YN3 and YN4.
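
    The logic behind proving an upper bound optimal can be shown on a toy instance without any SAT machinery: a candidate makespan UB is optimal exactly when a feasible schedule of makespan at most UB exists but none exists for UB - 1. The brute-force check below is only meant to illustrate that certification step; it is not the Crawford-Baker encoding, which expresses the same feasibility question as a propositional formula handed to a SAT solver.

```python
# Brute-force feasibility check on a toy job-shop instance, illustrating how
# a makespan upper bound UB is certified optimal (feasible at UB, infeasible
# at UB - 1). In the paper this feasibility question is translated to SAT.
from itertools import product

# jobs[j] is the ordered list of (machine, duration) operations of job j
jobs = [[(0, 3), (1, 2)],
        [(1, 4), (0, 2)]]

def feasible(ub):
    ops = [(j, k, m, d) for j, job in enumerate(jobs) for k, (m, d) in enumerate(job)]
    ranges = [range(0, ub - d + 1) for (_, _, _, d) in ops]
    for starts in product(*ranges):
        sched = {(j, k): (s, m, d) for (j, k, m, d), s in zip(ops, starts)}
        # precedence: operation k of a job must finish before operation k+1 starts
        if any(sched[(j, k)][0] + sched[(j, k)][2] > sched[(j, k + 1)][0]
               for j, job in enumerate(jobs) for k in range(len(job) - 1)):
            continue
        # no two operations may overlap on the same machine
        clash = False
        for a in range(len(ops)):
            for b in range(a + 1, len(ops)):
                s1, m1, d1 = sched[(ops[a][0], ops[a][1])]
                s2, m2, d2 = sched[(ops[b][0], ops[b][1])]
                if m1 == m2 and s1 < s2 + d2 and s2 < s1 + d1:
                    clash = True
        if not clash:
            return True
    return False

ub = 6   # candidate upper bound on the makespan for this toy instance
print(f"UB={ub} feasible: {feasible(ub)}, UB-1 feasible: {feasible(ub - 1)}")
# True followed by False certifies that UB is the optimal makespan.
```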

  9. Upper and lower bounds for semi-Markov reliability models of reconfigurable systems

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.

  10. The Laughlin liquid in an external potential

    NASA Astrophysics Data System (ADS)

    Rougerie, Nicolas; Yngvason, Jakob

    2018-04-01

    We study natural perturbations of the Laughlin state arising from the effects of trapping and disorder. These are N-particle wave functions that have the form of a product of Laughlin states and analytic functions of the N variables. We derive an upper bound to the ground state energy in a confining external potential, matching exactly a recently derived lower bound in the large N limit. Irrespective of the shape of the confining potential, this sharp upper bound can be achieved through a modification of the Laughlin function by suitably arranged quasi-holes.

  11. Determining Normal-Distribution Tolerance Bounds Graphically

    NASA Technical Reports Server (NTRS)

    Mezzacappa, M. A.

    1983-01-01

    The graphical method requires calculations and a table lookup. The distribution is established from only three points: the upper and lower confidence bounds of the mean and the lower confidence bound of the standard deviation. The method requires only a few calculations with simple equations. The graphical procedure establishes a best-fit line for the measured data and bounds for the selected confidence level and any distribution percentile.

  12. MRI-based assessment of the pineal gland in a large population of children aged 0-5 years and comparison with pineoblastoma: part I, the solid gland.

    PubMed

    Galluzzi, Paolo; de Jong, Marcus C; Sirin, Selma; Maeder, Philippe; Piu, Pietro; Cerase, Alfonso; Monti, Lucia; Brisse, Hervé J; Castelijns, Jonas A; de Graaf, Pim; Goericke, Sophia L

    2016-07-01

    Differentiation between normal solid (non-cystic) pineal glands and pineal pathologies on brain MRI is difficult. The aim of this study was to assess the size of the solid pineal gland in children (0-5 years) and compare the findings with published pineoblastoma cases. We retrospectively analyzed the size (width, height, planimetric area) of solid pineal glands in 184 non-retinoblastoma patients (73 female, 111 male) aged 0-5 years on MRI. The effect of age and gender on gland size was evaluated. Linear regression analysis was performed to analyze the relation between size and age. Ninety-nine percent prediction intervals around the mean were added to construct a normal size range per age, with the upper bound of the prediction interval used as the cutoff for normalcy. There was no significant interaction of gender and age for any of the three pineal gland parameters (width, height, and area). Linear regression analysis gave 99% upper prediction bounds of 7.9 mm, 4.8 mm, and 25.4 mm², respectively, for width, height, and area. The slopes (size increase per month) of each parameter were 0.046, 0.023, and 0.202, respectively. Ninety-three percent (95% CI 66-100%) of asymptomatic solid pineoblastomas were larger in size than the 99% upper bound. This study establishes norms for solid pineal gland size in non-retinoblastoma children aged 0-5 years. Knowledge of the size of the normal pineal gland is helpful for detection of pineal gland abnormalities, particularly pineoblastoma.
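
    The construction of the normalcy cutoff can be sketched as an upper prediction bound from a simple linear regression of gland size on age. The data below are synthetic, the one-sided 99% bound is a simplification of the study's two-sided 99% prediction interval, and the slope and intercept are illustrative only.

```python
# Sketch: 99% upper prediction bound from a linear regression of pineal
# gland width on age, the kind of curve used as a cutoff for normalcy.
# The data are synthetic and the coefficients are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
age = rng.uniform(0, 60, size=184)                    # age in months
width = 4.0 + 0.046 * age + rng.normal(0, 1.0, 184)   # synthetic gland width [mm]

n = age.size
xbar = age.mean()
Sxx = np.sum((age - xbar) ** 2)
slope = np.sum((age - xbar) * (width - width.mean())) / Sxx
intercept = width.mean() - slope * xbar
s = np.sqrt(np.sum((width - (intercept + slope * age)) ** 2) / (n - 2))   # residual std. error

def upper_prediction_bound(x0, level=0.99):
    """One-sided upper prediction bound for a new observation at age x0."""
    t = stats.t.ppf(level, df=n - 2)
    se_pred = s * np.sqrt(1.0 + 1.0 / n + (x0 - xbar) ** 2 / Sxx)
    return intercept + slope * x0 + t * se_pred

for months in (0, 12, 36, 60):
    print(f"age {months:2d} mo: 99% upper bound on width = {upper_prediction_bound(months):.1f} mm")
# A measured gland lying above this curve would be flagged as outside the
# normal range for its age.
```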

  13. Calculating Reuse Distance from Source Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayanan, Sri Hari Krishna; Hovland, Paul

    The efficient use of a system is of paramount importance in high-performance computing. Applications need to be engineered for future systems even before the architecture of such a system is clearly known. Static performance analysis that generates performance bounds is one way to approach the task of understanding application behavior. Performance bounds provide an upper limit on the performance of an application on a given architecture. Predicting cache hierarchy behavior and accesses to main memory is a requirement for accurate performance bounds. This work presents our static reuse distance algorithm to generate reuse distance histograms. We then use these histograms to predict cache miss rates. Experimental results for kernels studied show that the approach is accurate.
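
    A reuse distance histogram maps directly onto cache miss rates: an access to a fully associative LRU cache misses exactly when its reuse (stack) distance is at least the number of cache lines, or when the address is touched for the first time. The sketch below computes the histogram dynamically from a synthetic trace purely to show how the histogram is consumed; the paper derives the histogram statically from source code.

```python
# Reuse-distance histogram from an address trace and the resulting miss-rate
# prediction for a fully associative LRU cache. The trace is synthetic; the
# paper obtains the histogram statically rather than from a trace.
from collections import Counter
import random

def reuse_distances(trace):
    """Stack distance: number of distinct addresses touched since the
    previous access to the same address (infinite on first access)."""
    stack = []            # addresses in most-recently-used order
    dists = []
    for addr in trace:
        if addr in stack:
            d = stack.index(addr)     # 0 means reused immediately
            stack.remove(addr)
        else:
            d = float("inf")
        stack.insert(0, addr)
        dists.append(d)
    return dists

def predicted_miss_rate(dists, cache_lines):
    # Miss iff cold (inf) or reuse distance >= number of cache lines.
    misses = sum(1 for d in dists if d == float("inf") or d >= cache_lines)
    return misses / len(dists)

random.seed(0)
trace = [random.randrange(64) for _ in range(10_000)]   # synthetic address stream
dists = reuse_distances(trace)
print("most common reuse distances:", Counter(d for d in dists if d != float("inf")).most_common(3))
for lines in (8, 16, 32, 64):
    print(f"{lines:3d} cache lines -> predicted miss rate {predicted_miss_rate(dists, lines):.2f}")
```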

  14. Symmetry Parameter Constraints from a Lower Bound on Neutron-matter Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tews, Ingo; Lattimer, James M.; Ohnishi, Akira

    We propose the existence of a lower bound on the energy of pure neutron matter (PNM) on the basis of unitary-gas considerations. We discuss its justification from experimental studies of cold atoms as well as from theoretical studies of neutron matter. We demonstrate that this bound results in limits to the density-dependent symmetry energy, which is the difference between the energies of symmetric nuclear matter and PNM. In particular, this bound leads to a lower limit to the volume symmetry energy parameter S₀. In addition, for assumed values of S₀ above this minimum, this bound implies both upper and lower limits to the symmetry energy slope parameter L, which describes the lowest-order density dependence of the symmetry energy. A lower bound on neutron-matter incompressibility is also obtained. These bounds are found to be consistent with both recent calculations of the energies of PNM and constraints from nuclear experiments. Our results are significant because several equations of state that are currently used in astrophysical simulations of supernovae and neutron star mergers, as well as in nuclear physics simulations of heavy-ion collisions, have symmetry energy parameters that violate these bounds. Furthermore, below the nuclear saturation density, the bound on neutron-matter energies leads to a lower limit to the density-dependent symmetry energy, which leads to upper limits to the nuclear surface symmetry parameter and the neutron-star crust–core boundary. We also obtain a lower limit to the neutron-skin thicknesses of neutron-rich nuclei. Above the nuclear saturation density, the bound on neutron-matter energies also leads to an upper limit to the symmetry energy, with implications for neutron-star cooling via the direct Urca process.

  15. Computational experience with a parallel algorithm for tetrangle inequality bound smoothing.

    PubMed

    Rajan, K; Deo, N

    1999-09-01

    Determining molecular structure from interatomic distances is an important and challenging problem. Given a molecule with n atoms, lower and upper bounds on interatomic distances can usually be obtained only for a small subset of the n(n-1)/2 atom pairs, using NMR. Given the bounds so obtained on the distances between some of the atom pairs, it is often useful to compute tighter bounds on all the n(n-1)/2 pairwise distances. This process is referred to as bound smoothing. The initial lower and upper bounds for the pairwise distances not measured are usually assumed to be 0 and infinity. One method for bound smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality--the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. For every quadruple of atoms, each pass of the tetrangle inequality bound smoothing procedure finds upper and lower limits on each of the six distances in the quadruple. Applying the tetrangle inequalities to each of the C(n,4) quadruples requires O(n⁴) time. Here, we propose a parallel algorithm for bound smoothing employing the tetrangle inequality. Each pass of our algorithm requires O(n³ log n) time on a CREW PRAM (Concurrent Read Exclusive Write Parallel Random Access Machine) with O(n/log n) processors. An implementation of this parallel algorithm on the Intel Paragon XP/S and its performance are also discussed.
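
    The simpler triangle-inequality stage mentioned above is easy to sketch; the tetrangle stage via Cayley-Menger determinants is more involved and is not reproduced here. The pass below tightens upper bounds with a Floyd-Warshall-style update and lower bounds with the standard companion rule; the measured bounds are hypothetical.

```python
# Triangle-inequality bound smoothing (the tetrangle/Cayley-Menger stage of
# the paper is not reproduced). Unmeasured pairs start at [0, inf).
import numpy as np

def triangle_smooth(lower, upper):
    """One O(n^3) smoothing pass; in practice the pass is repeated until no
    bound changes.

    upper[i, j] <= upper[i, k] + upper[k, j]
    lower[i, j] >= max(lower[i, k] - upper[k, j], lower[j, k] - upper[k, i])
    """
    n = len(upper)
    L, U = lower.copy(), upper.copy()
    for k in range(n):
        for i in range(n):
            for j in range(n):
                U[i, j] = min(U[i, j], U[i, k] + U[k, j])
                L[i, j] = max(L[i, j], L[i, k] - U[k, j], L[j, k] - U[k, i])
    return L, U

n = 4
U = np.full((n, n), np.inf); np.fill_diagonal(U, 0.0)
L = np.zeros((n, n))
# Measured bounds (e.g. from NMR) for a few atom pairs; values are made up.
for i, j, lo, hi in [(0, 1, 2.0, 2.5), (1, 2, 2.0, 2.5), (2, 3, 2.0, 2.5)]:
    L[i, j] = L[j, i] = lo
    U[i, j] = U[j, i] = hi

L, U = triangle_smooth(L, U)
print("smoothed bounds for the unmeasured pair (0, 3): [%.1f, %.1f]" % (L[0, 3], U[0, 3]))
# The upper bound on the 0-3 distance drops from infinity to 7.5.
```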

  16. Symmetry Parameter Constraints from a Lower Bound on Neutron-matter Energy

    NASA Astrophysics Data System (ADS)

    Tews, Ingo; Lattimer, James M.; Ohnishi, Akira; Kolomeitsev, Evgeni E.

    2017-10-01

    We propose the existence of a lower bound on the energy of pure neutron matter (PNM) on the basis of unitary-gas considerations. We discuss its justification from experimental studies of cold atoms as well as from theoretical studies of neutron matter. We demonstrate that this bound results in limits to the density-dependent symmetry energy, which is the difference between the energies of symmetric nuclear matter and PNM. In particular, this bound leads to a lower limit to the volume symmetry energy parameter S₀. In addition, for assumed values of S₀ above this minimum, this bound implies both upper and lower limits to the symmetry energy slope parameter L, which describes the lowest-order density dependence of the symmetry energy. A lower bound on neutron-matter incompressibility is also obtained. These bounds are found to be consistent with both recent calculations of the energies of PNM and constraints from nuclear experiments. Our results are significant because several equations of state that are currently used in astrophysical simulations of supernovae and neutron star mergers, as well as in nuclear physics simulations of heavy-ion collisions, have symmetry energy parameters that violate these bounds. Furthermore, below the nuclear saturation density, the bound on neutron-matter energies leads to a lower limit to the density-dependent symmetry energy, which leads to upper limits to the nuclear surface symmetry parameter and the neutron-star crust-core boundary. We also obtain a lower limit to the neutron-skin thicknesses of neutron-rich nuclei. Above the nuclear saturation density, the bound on neutron-matter energies also leads to an upper limit to the symmetry energy, with implications for neutron-star cooling via the direct Urca process.

  17. Divergences and estimating tight bounds on Bayes error with applications to multivariate Gaussian copula and latent Gaussian copula

    NASA Astrophysics Data System (ADS)

    Thelen, Brian J.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.

    2017-04-01

    In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as Chernoff, Bhattacharyya, and J-divergence. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general, what is desired is a methodology for generating relatively tight lower and upper bounds, and then an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has been around for a while but was recently made more prominent in research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data, and to help in this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on Bayes error. In this paper we present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, and oftentimes dramatically so. We also propose some simple but effective means for computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
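
    As a point of reference for the bounds being discussed, the sketch below estimates the Bhattacharyya upper bound on the Bayes error for two multivariate Gaussian classes by Monte Carlo and compares it with a direct Monte Carlo estimate of the Bayes error itself. The class parameters are arbitrary, and the triangle divergence itself is not implemented here; it would be estimated in the same importance-sampling fashion.

```python
# Monte Carlo estimate of the Bhattacharyya upper bound on the Bayes error
# for two Gaussian classes with equal priors, compared with a direct Monte
# Carlo estimate of the Bayes error. Class parameters are arbitrary.
import numpy as np
from scipy.stats import multivariate_normal

d = 3
p1 = multivariate_normal(mean=np.zeros(d), cov=np.eye(d))
p2 = multivariate_normal(mean=0.8 * np.ones(d), cov=1.5 * np.eye(d))
prior1 = prior2 = 0.5
n = 200_000

# Bhattacharyya coefficient BC = E_{x~p1}[ sqrt(p2(x) / p1(x)) ]
x1 = p1.rvs(n, random_state=1)
bc = np.mean(np.sqrt(p2.pdf(x1) / p1.pdf(x1)))
upper_bound = np.sqrt(prior1 * prior2) * bc        # Bayes error <= sqrt(pi1*pi2) * BC

# Direct Monte Carlo estimate of the Bayes error for comparison
x2 = p2.rvs(n, random_state=2)
err1 = np.mean(prior2 * p2.pdf(x1) > prior1 * p1.pdf(x1))   # class-1 samples misclassified
err2 = np.mean(prior1 * p1.pdf(x2) > prior2 * p2.pdf(x2))   # class-2 samples misclassified
bayes_error = prior1 * err1 + prior2 * err2

print(f"Bhattacharyya upper bound: {upper_bound:.3f}")
print(f"Monte Carlo Bayes error  : {bayes_error:.3f}")
```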

  18. Search for weakly decaying Λn̄ and ΛΛ exotic bound states in central Pb-Pb collisions at √s_NN = 2.76 TeV

    NASA Astrophysics Data System (ADS)

    Adam, J.; Adamová, D.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmed, I.; Ahn, S. U.; Aimo, I.; Aiola, S.; Ajaz, M.; Akindinov, A.; Alam, S. N.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; Andrei, C.; Andronic, A.; Anguelov, V.; Anielski, J.; Antičić, T.; Antinori, F.; Antonioli, P.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Armesto, N.; Arnaldi, R.; Aronsson, T.; Arsene, I. C.; Arslandok, M.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Bach, M.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Ball, M.; Baltasar Dos Santos Pedrosa, F.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Belmont, R.; Belmont-Moreno, E.; Belyaev, V.; Bencedi, G.; Beole, S.; Berceanu, I.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biswas, S.; Bjelogrlic, S.; Blanco, F.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Bøggild, H.; Boldizsár, L.; Bombara, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Bossú, F.; Botje, M.; Botta, E.; Böttger, S.; Braun-Munzinger, P.; Bregant, M.; Breitner, T.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buncic, P.; Busch, O.; Buthelezi, Z.; Buxton, J. T.; Caffarri, D.; Cai, X.; Caines, H.; Calero Diaz, L.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Carena, F.; Carena, W.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Cavicchioli, C.; Ceballos Sanchez, C.; Cepila, J.; Cerello, P.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Conesa Balbastre, G.; Conesa del Valle, Z.; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; Deisting, A.; Deloff, A.; Dénes, E.; D'Erasmo, G.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Diaz Corchero, M. A.; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Dobrowolski, T.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Engel, H.; Erazmus, B.; Erhardt, F.; Eschweiler, D.; Espagnon, B.; Estienne, M.; Esumi, S.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Felea, D.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernández Téllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Fleck, M. G.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Frankenfeld, U.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gallio, M.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Gargiulo, C.; Gasik, P.; Germain, M.; Gheata, A.; Gheata, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Gomez Ramirez, A.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Graczykowski, L. K.; Grelli, A.; Grigoras, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grinyov, B.; Grion, N.; Grosse-Oetringhaus, J. F.; Grossiord, J.-Y.; Grosso, R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gulkanyan, H.; Gunji, T.; Gupta, A.; Gupta, R.; Haake, R.; Haaland, Ø.; Hadjidakis, C.; Haiduc, M.; Hamagaki, H.; Hamar, G.; Hanratty, L. D.; Hansen, A.; Harris, J. W.; Hartmann, H.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Heide, M.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Hess, B. A.; Hetland, K. F.; Hilden, T. E.; Hillemanns, H.; Hippolyte, B.; Hristov, P.; Huang, M.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Ilkiv, I.; Inaba, M.; Ionita, C.; Ippolitov, M.; Irfan, M.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jachołkowski, A.; Jacobs, P. M.; Jahnke, C.; Jang, H. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jimenez Bustamante, R. T.; Jones, P. G.; Jung, H.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kamin, J.; Kang, J. H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Khan, K. H.; Khan, M. M.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Kileng, B.; Kim, B.; Kim, D. W.; Kim, D. J.; Kim, H.; Kim, J. S.; Kim, M.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobayashi, T.; Kobdaj, C.; Kofarago, M.; Köhler, M. K.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kouzinopoulos, C.; Kovalenko, V.; Kowalski, M.; Kox, S.; Koyithatta Meethaleveedu, G.; Kral, J.; Králik, I.; Kravčáková, A.; Krelina, M.; Kretz, M.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kucheriaev, Y.; Kugathasan, T.; Kuhn, C.; Kuijer, P. G.; Kulakov, I.; Kumar, J.; Kumar, L.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kushpil, S.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lea, R.; Leardini, L.; Lee, G. R.; Lee, S.; Legrand, I.; Lehnert, J.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; León Monzón, I.; Leoncino, M.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Ljunggren, H. M.; Lodato, D. F.; Loenne, P. I.; Loggins, V. 
R.; Loginov, V.; Loizides, C.; Lopez, X.; López Torres, E.; Lowe, A.; Lu, X.-G.; Luettig, P.; Lunardon, M.; Luparello, G.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manceau, L.; Manko, V.; Manso, F.; Manzari, V.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martashvili, I.; Martin, N. A.; Martin Blanco, J.; Martinengo, P.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Martynov, Y.; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Massacrier, L.; Mastroserio, A.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzoni, M. A.; Mcdonald, D.; Meddi, F.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Minervini, L. M.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montaño Zetina, L.; Montes, E.; Morando, M.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Müller, H.; Mulligan, J. D.; Munhoz, M. G.; Murray, S.; Musa, L.; Musinsky, J.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Nattrass, C.; Nayak, K.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Oh, S. K.; Ohlson, A.; Okatan, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira Da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pagano, P.; Paić, G.; Pajares, C.; Pal, S. K.; Pan, J.; Pandey, A. K.; Pant, D.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Paul, B.; Pawlak, T.; Peitzmann, T.; Pereira Da Costa, H.; Pereira De Oliveira Filho, E.; Peresunko, D.; Pérez Lara, C. E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Razazi, V.; Read, K. F.; Real, J. S.; Redlich, K.; Reed, R. J.; Rehman, A.; Reichelt, P.; Reicher, M.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Rettig, F.; Revol, J.-P.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rivetti, A.; Rocco, E.; Rodríguez Cahuantzi, M.; Rodriguez Manso, A.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Romita, R.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Sadovsky, S.; Šafařík, K.; Sahlmuller, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salgado, C. 
A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sanchez Castro, X.; Šándor, L.; Sandoval, A.; Sano, M.; Santagati, G.; Sarkar, D.; Scapparone, E.; Scarlassara, F.; Scharenberg, R. P.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schuchmann, S.; Schukraft, J.; Schulc, M.; Schuster, T.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Seeder, K. S.; Seger, J. E.; Sekiguchi, Y.; Selyuzhenkov, I.; Senosi, K.; Seo, J.; Serradilla, E.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shangaraev, A.; Sharma, A.; Sharma, N.; Shigaki, K.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singha, S.; Singhal, V.; Sinha, B. C.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Søgaard, C.; Soltz, R.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; Spacek, M.; Spiriti, E.; Sputowska, I.; Spyropoulou-Stassinaki, M.; Srivastava, B. K.; Stachel, J.; Stan, I.; Stefanek, G.; Steinpreis, M.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Sultanov, R.; Šumbera, M.; Symons, T. J. M.; Szabo, A.; Szanto de Toledo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Takahashi, J.; Tanaka, N.; Tangaro, M. A.; Tapia Takaki, J. D.; Tarantola Peloni, A.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thäder, J.; Thomas, D.; Tieulent, R.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vajzer, M.; Vala, M.; Valencia Palomo, L.; Vallero, S.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Venaruzzo, M.; Vercellin, E.; Vergara Limón, S.; Vernet, R.; Verweij, M.; Vickovic, L.; Viesti, G.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrláková, J.; Vulpescu, B.; Vyushin, A.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Wang, Y.; Watanabe, D.; Weber, M.; Weber, S. G.; Wessels, J. P.; Westerhoff, U.; Wiechula, J.; Wikne, J.; Wilde, M.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yaldo, C. G.; Yamaguchi, Y.; Yang, H.; Yang, P.; Yano, S.; Yasnopolskiy, S.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yurchenko, V.; Yushmanov, I.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. S.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zhu, X.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.

    2016-01-01

    We present results of a search for two hypothetical strange dibaryon states, i.e. the H-dibaryon and the possible Λn̄ bound state. The search is performed with the ALICE detector in central (0-10%) Pb-Pb collisions at √s_NN = 2.76 TeV, by invariant mass analysis in the decay modes Λn̄ → d̄π⁺ and H-dibaryon → Λpπ⁻. No evidence for these bound states is observed. Upper limits are determined at 99% confidence level for a wide range of lifetimes and for the full range of branching ratios. The results are compared to thermal, coalescence and hybrid UrQMD model expectations, which describe correctly the production of other loosely bound states, like the deuteron and the hypertriton.

  19. Search for weakly decaying Λn̄ and ΛΛ exotic bound states in central Pb–Pb collisions at √s_NN = 2.76 TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adam, J.; Adamová, D.; Aggarwal, M. M.

    Here, we present results of a search for two hypothetical strange dibaryon states, i.e. the H-dibaryon and the possible Λn̄ bound state. The search is performed with the ALICE detector in central (0-10%) Pb-Pb collisions at √s_NN = 2.76 TeV, by invariant mass analysis in the decay modes Λn̄ → d̄π⁺ and H-dibaryon → Λpπ⁻. No evidence for these bound states is observed. Upper limits are determined at 99% confidence level for a wide range of lifetimes and for the full range of branching ratios. The results are compared to thermal, coalescence and hybrid UrQMD model expectations, which describe correctly the production of other loosely bound states, like the deuteron and the hypertriton.

  20. Search for weakly decaying Λn̄ and ΛΛ exotic bound states in central Pb–Pb collisions at √s_NN = 2.76 TeV

    DOE PAGES

    Adam, J.; Adamová, D.; Aggarwal, M. M.; ...

    2016-11-28

    Here, we present results of a search for two hypothetical strange dibaryon states, i.e. the H-dibaryon and the possible Λn̄ bound state. The search is performed with the ALICE detector in central (0-10%) Pb-Pb collisions at √s_NN = 2.76 TeV, by invariant mass analysis in the decay modes Λn̄ → d̄π⁺ and H-dibaryon → Λpπ⁻. No evidence for these bound states is observed. Upper limits are determined at 99% confidence level for a wide range of lifetimes and for the full range of branching ratios. The results are compared to thermal, coalescence and hybrid UrQMD model expectations, which describe correctly the production of other loosely bound states, like the deuteron and the hypertriton.

  1. The Estimation of the IRT Reliability Coefficient and Its Lower and Upper Bounds, with Comparisons to CTT Reliability Statistics

    ERIC Educational Resources Information Center

    Kim, Seonghoon; Feldt, Leonard S.

    2010-01-01

    The primary purpose of this study is to investigate the mathematical characteristics of the test reliability coefficient rho[subscript XX'] as a function of item response theory (IRT) parameters and present the lower and upper bounds of the coefficient. Another purpose is to examine relative performances of the IRT reliability statistics and two…

  2. Performance analysis of optimal power allocation in wireless cooperative communication systems

    NASA Astrophysics Data System (ADS)

    Babikir Adam, Edriss E.; Samb, Doudou; Yu, Li

    2013-03-01

    Cooperative communication has recently been proposed in wireless communication systems for exploiting the inherent spatial diversity of relay channels. The Amplify-and-Forward (AF) cooperation protocol with multiple relays has not been sufficiently investigated, even though it has low implementation complexity. In this work we consider a cooperative diversity system in which a source transmits information to a destination with the help of multiple AF relay nodes, and we investigate the optimal allocation of power between the source and the relays by optimizing the symbol error rate (SER) performance in an efficient way. First, we derive a closed-form SER formulation for MPSK signals using the moment generating function and some statistical approximations at high signal-to-noise ratio (SNR) for the system under study. We then find a tight corresponding lower bound, which converges to the same limit as the theoretical upper bound, and develop an optimal power allocation (OPA) technique based on mean channel gains to minimize the SER. Simulation results show that our scheme outperforms the equal power allocation (EPA) scheme and is tight to the theoretical approximation based on the SER upper bound at high SNR for different numbers of relays.
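
    The power-allocation step itself can be illustrated numerically: minimize an SER upper bound over the source and relay powers subject to a total power budget and compare with equal allocation. The bound used below is a generic placeholder with the usual high-SNR product form (one factor for the direct link and one per AF relay) and made-up mean channel gains; it is not the closed-form expression derived in the paper.

```python
# Optimal power allocation by numerically minimizing a placeholder high-SNR
# SER upper bound under a total power constraint; compared against equal
# power allocation (EPA). Channel gains and the bound's form are assumptions,
# not the paper's closed-form result.
import numpy as np
from scipy.optimize import minimize

g_sd = 1.0                    # mean channel gain, source -> destination (hypothetical)
g_sr = np.array([1.5, 0.8])   # source -> relay gains
g_rd = np.array([0.9, 1.2])   # relay -> destination gains
P_total = 10.0

def ser_bound(powers):
    p0, pr = powers[0], powers[1:]
    return (1.0 / (g_sd * p0)) * np.prod(1.0 / (g_sr * p0) + 1.0 / (g_rd * pr))

n_nodes = len(g_sr) + 1
epa = np.full(n_nodes, P_total / n_nodes)            # equal power allocation
res = minimize(ser_bound, epa,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - P_total}],
               bounds=[(1e-3, None)] * n_nodes)

print("EPA bound value:", ser_bound(epa))
print("OPA bound value:", res.fun)
print("OPA powers (source first, then relays):", np.round(res.x, 2))
# The optimized allocation gives the smaller bound value, mirroring the
# simulation finding that OPA outperforms EPA.
```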

  3. Hard and Soft Constraints in Reliability-Based Design Optimization

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows (i) to determine if a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds to the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, along with conditional sampling. In addition, an l∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
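
    For the soft-constraint setting, the quantity being bounded, namely the probability of constraint violation under a componentwise bounded uncertainty model, can be sketched with plain Monte Carlo sampling; the paper's contribution is closed-form upper bounds plus conditional sampling, which are not reproduced here. The constraint function, nominal values, and half-widths below are hypothetical.

```python
# Monte Carlo estimate of the probability of constraint violation for a
# componentwise bounded uncertainty model. The paper derives closed-form
# upper bounds and a hybrid conditional-sampling estimator; this sketch only
# shows the quantity those bounds refer to. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(4)

p_nominal = np.array([1.0, 2.0, 0.5])      # nominal parameter value
half_width = np.array([0.2, 0.5, 0.1])     # componentwise bounds on the perturbation

def constraint(p):
    """Design requirement g(p) <= 0 (hypothetical inequality constraint)."""
    return p[:, 0] * p[:, 1] - 3.0 * p[:, 2] - 1.2

n = 200_000
p_samples = p_nominal + rng.uniform(-1.0, 1.0, size=(n, 3)) * half_width

p_fail = np.mean(constraint(p_samples) > 0.0)
# Normal-approximation 95% upper confidence limit on the failure probability
p_fail_upper = p_fail + 1.645 * np.sqrt(p_fail * (1.0 - p_fail) / n)

print(f"estimated P(violation) = {p_fail:.4f}  (95% upper limit {p_fail_upper:.4f})")
# A hard constraint would instead require g(p) <= 0 for every p in the box;
# since this g is multilinear in p, that could be checked at the box vertices.
```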

  4. Multivariate Lipschitz optimization: Survey and computational comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, P.; Gourdin, E.; Jaumard, B.

    1994-12-31

    Many methods have been proposed to minimize a multivariate Lipschitz function on a box. They pertain to three approaches: (i) reduction to the univariate case by projection (Pijavskii) or by using a space-filling curve (Strongin); (ii) construction and refinement of a single upper bounding function (Pijavskii, Mladineo, Mayne and Polak, Jaumard, Hermann and Ribault, Wood, ...); (iii) branch and bound with local upper bounding functions (Galperin, Pintér, Meewella and Mayne, the present authors). A survey is made, stressing similarities of algorithms, expressed when possible within a unified framework. Moreover, an extensive computational comparison is reported.
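
    The common ingredient of these approaches is a Lipschitz bounding function built from sampled points. The sketch below implements the one-dimensional Piyavskii-Shubert scheme: each pair of neighbouring samples defines two cones whose intersection gives a valid lower bound on that sub-interval, and the interval with the smallest bound is refined next. The test function and Lipschitz constant are arbitrary choices.

```python
# One-dimensional Piyavskii-Shubert minimization of a Lipschitz function by
# refining a piecewise-linear ("saw-tooth") lower bounding function. The
# test function and Lipschitz constant are arbitrary choices.
import math

def piyavskii(f, a, b, lip, n_iter=200, tol=1e-6):
    xs, fs = [a, b], [f(a), f(b)]
    best_x, best_f = (a, fs[0]) if fs[0] <= fs[1] else (b, fs[1])
    lower_bound = -math.inf
    for _ in range(n_iter):
        pts = sorted(zip(xs, fs))
        best_gap, split = None, None
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            # Lower bound on f inside [x1, x2] from the two Lipschitz cones,
            # attained where the cones intersect.
            lo = 0.5 * (f1 + f2) - 0.5 * lip * (x2 - x1)
            x_new = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * lip)
            if best_gap is None or lo < best_gap:
                best_gap, split = lo, x_new
        lower_bound = best_gap                 # valid global lower bound
        fx = f(split)
        xs.append(split); fs.append(fx)
        if fx < best_f:
            best_x, best_f = split, fx
        if best_f - lower_bound < tol:
            break
    return best_x, best_f, lower_bound

f = lambda x: math.sin(3.0 * x) + 0.5 * x       # |f'| <= 3.5 on [0, 4]
x_star, f_upper, f_lower = piyavskii(f, 0.0, 4.0, lip=3.5)
print(f"minimizer ~ {x_star:.4f}, minimum bracketed in [{f_lower:.4f}, {f_upper:.4f}]")
```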

  5. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low-density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.

  6. Computational micromechanics of woven composites

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Saigal, Sunil; Zeng, Xiaogang

    1991-01-01

    The bounds on the equivalent elastic material properties of a composite are presently addressed by a unified energy approach which is valid for both unidirectional and 2D and 3D woven composites. The unit cell considered is assumed to consist, first, of the actual composite arrangement of the fibers and matrix material, and then, of an equivalent pseudohomogeneous material. Equating the strain energies due to the two arrangements yields an estimate of the upper bound for the material equivalent properties; successive increases in the order of displacement field that is assumed in the composite arrangement will successively produce improved upper bound estimates.

  7. Upper bounds on the photon mass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Accioly, Antonio; Group of Field Theory from First Principles, Sao Paulo State University; Instituto de Fisica Teorica

    2010-09-15

    The effects of a nonzero photon rest mass can be incorporated into electromagnetism in a simple way using the Proca equations. In this vein, two interesting implications regarding the possible existence of a massive photon in nature, i.e., tiny alterations in the known values of both the anomalous magnetic moment of the electron and the gravitational deflection of electromagnetic radiation, are utilized to set upper limits on its mass. The bounds obtained are not as stringent as those recently found; nonetheless, they are comparable to other existing bounds and bring new elements to the issue of restricting the photon mass.

  8. Detection of Pneumonia Associated Pathogens Using a Prototype Multiplexed Pneumonia Test in Hospitalized Patients with Severe Pneumonia

    PubMed Central

    Schulte, Berit; Eickmeyer, Holm; Heininger, Alexandra; Juretzek, Stephanie; Karrasch, Matthias; Denis, Olivier; Roisin, Sandrine; Pletz, Mathias W.; Klein, Matthias; Barth, Sandra; Lüdke, Gerd H.; Thews, Anne; Torres, Antoni; Cillóniz, Catia; Straube, Eberhard; Autenrieth, Ingo B.; Keller, Peter M.

    2014-01-01

    Severe pneumonia remains an important cause of morbidity and mortality. Polymerase chain reaction (PCR) has been shown to be more sensitive than current standard microbiological methods – particularly in patients with prior antibiotic treatment – and therefore, may improve the accuracy of microbiological diagnosis for hospitalized patients with pneumonia. Conventional detection techniques and multiplex PCR for 14 typical bacterial pneumonia-associated pathogens were performed on respiratory samples collected from adult hospitalized patients enrolled in a prospective multi-center study. Patients were enrolled from March until September 2012. A total of 739 fresh, native samples were eligible for analysis, of which 75 were sputa, 421 aspirates, and 234 bronchial lavages. 276 pathogens were detected by microbiology for which a valid PCR result was generated (positive or negative detection result by Curetis prototype system). Among these, 120 were identified by the prototype assay, 50 pathogens were not detected. Overall performance of the prototype for pathogen identification was 70.6% sensitivity (95% confidence interval (CI) lower bound: 63.3%, upper bound: 76.9%) and 95.2% specificity (95% CI lower bound: 94.6%, upper bound: 95.7%). Based on the study results, device cut-off settings were adjusted for future series production. The overall performance with the settings of the CE series production devices was 78.7% sensitivity (95% CI lower bound: 72.1%) and 96.6% specificity (95% CI lower bound: 96.1%). Time to result was 5.2 hours (median) for the prototype test and 43.5 h for standard-of-care. The Pneumonia Application provides a rapid and moderately sensitive assay for the detection of pneumonia-causing pathogens with minimal hands-on time. Trial Registration Deutsches Register Klinischer Studien (DRKS) DRKS00005684 PMID:25397673

  9. Modeling of magnitude distributions by the generalized truncated exponential distribution

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-01-01

    The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
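
    A compact way to see both the TED and the mixing idea behind the GTED is to sample from them. The sketch below implements the TED by inverse-CDF sampling and builds a GTED-style sample by giving each event its own upper cutoff drawn from a cutoff distribution; the β value, magnitude range, and cutoff distribution are illustrative, not fitted to any catalogue.

```python
# Truncated exponential distribution (TED) for magnitudes and a GTED-style
# mixture over upper cutoff points. Parameters are illustrative only.
import numpy as np

def ted_pdf(m, beta, m_min, m_max):
    """Density of the TED on [m_min, m_max] (Gutenberg-Richter with cutoff)."""
    z = 1.0 - np.exp(-beta * (m_max - m_min))
    pdf = beta * np.exp(-beta * (m - m_min)) / z
    return np.where((m >= m_min) & (m <= m_max), pdf, 0.0)

def ted_sample(n, beta, m_min, m_max, rng):
    """Inverse-CDF sampling of the TED (m_max may be an array of cutoffs)."""
    u = rng.random(n)
    z = 1.0 - np.exp(-beta * (m_max - m_min))
    return m_min - np.log(1.0 - u * z) / beta

rng = np.random.default_rng(5)
beta, m_min = np.log(10), 4.0        # beta = b*ln(10) with Gutenberg-Richter b-value 1

print("TED density at M6 for m_max = 7.5:", round(float(ted_pdf(6.0, beta, m_min, 7.5)), 4))

# GTED-style sample: each event gets its own cutoff from a (here uniform)
# distribution of cutoff points between 7.0 and 7.6.
cutoffs = rng.uniform(7.0, 7.6, size=100_000)
magnitudes = ted_sample(cutoffs.size, beta, m_min, cutoffs, rng)

print("largest simulated magnitude:", round(float(magnitudes.max()), 2))
print("fraction of events at or above M6:", round(float(np.mean(magnitudes >= 6.0)), 4))
```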

  10. Simplest little Higgs model revisited: Hidden mass relation, unitarity, and naturalness

    NASA Astrophysics Data System (ADS)

    Cheung, Kingman; He, Shi-Ping; Mao, Ying-nan; Zhang, Chen; Zhou, Yang

    2018-06-01

    We analyze the scalar potential of the simplest little Higgs (SLH) model in an approach consistent with the spirit of continuum effective field theory (CEFT). By requiring correct electroweak symmetry breaking (EWSB) with the 125 GeV Higgs boson, we are able to derive a relation between the pseudoaxion mass mη and the heavy top mass mT, which serves as a crucial test of the SLH mechanism. By requiring mη² > 0, an upper bound on mT can be obtained for any fixed SLH global symmetry breaking scale f. We also point out that an absolute upper bound on f can be obtained by imposing the partial wave unitarity constraint, which in turn leads to absolute upper bounds of mT ≲ 19 TeV, mη ≲ 1.5 TeV, and mZ' ≲ 48 TeV. We present the allowed region in the three-dimensional parameter space characterized by f, tβ, and mT, taking into account the requirement of valid EWSB and the constraint from perturbative unitarity. We also propose a strategy of analyzing the fine-tuning problem consistent with the spirit of CEFT and apply it to the SLH. We suggest that the scalar potential and fine-tuning analysis strategies adopted here should also be applicable to a wide class of little Higgs and twin Higgs models, which may reveal interesting relations as crucial tests of the related EWSB mechanism and provide a new perspective on assessing their degree of fine-tuning.

  11. Dynamic Analysis of the Melanoma Model: From Cancer Persistence to Its Eradication

    NASA Astrophysics Data System (ADS)

    Starkov, Konstantin E.; Jimenez Beristain, Laura

    In this paper, we study the global dynamics of the five-dimensional melanoma model developed by Kronik et al. This model describes interactions of tumor cells with cytotoxic T cells and respective cytokines under cellular immunotherapy. We get the ultimate upper and lower bounds for variables of this model, provide formulas for equilibrium points and present local asymptotic stability/hyperbolic instability conditions. Next, we prove the existence of the attracting set. Based on these results we come to global asymptotic melanoma eradication conditions via global stability analysis. Finally, we provide bounds for a locus of the melanoma persistence equilibrium point, study the case of melanoma persistence and describe conditions under which we observe global attractivity to the unique melanoma persistence equilibrium point.

  12. Estimating the Richness of a Population When the Maximum Number of Classes Is Fixed: A Nonparametric Solution to an Archaeological Problem

    PubMed Central

    Eren, Metin I.; Chao, Anne; Hwang, Wen-Han; Colwell, Robert K.

    2012-01-01

    Background Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded: assuming only a lower bound of species or classes. Nevertheless there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. Methodology/Principal Findings Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. Conclusions/Significance Application of the new method for constructing doubly-bounded richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race). PMID:22666316
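
    A minimal numerical sketch of the doubly-bounded idea, assuming only abundance data and a known maximum number of classes: compute the Chao1 estimate and truncate it at the fixed maximum. The paper's method adjusts the confidence-interval construction itself (omitted here), and the abundance counts below are made up.

```python
# Doubly-bounded richness sketch: bias-corrected Chao1 point estimate capped
# at the known maximum number of classes. Abundances are hypothetical, and
# the confidence-interval machinery of the paper is omitted.
from collections import Counter

def chao1(abundances):
    """Bias-corrected Chao1 estimate from per-class abundance counts."""
    freq = Counter(abundances)
    s_obs = len(abundances)                     # classes actually observed
    f1, f2 = freq.get(1, 0), freq.get(2, 0)     # singletons and doubletons
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# Abundance of each observed class (e.g. counts of each projectile-point
# type recovered in a region); values are made up.
abundances = [12, 9, 7, 4, 2, 2, 1, 1, 1, 1, 1]
max_classes = 13            # fixed upper bound on the number of possible classes

estimate = chao1(abundances)
bounded_estimate = min(estimate, max_classes)

print(f"observed classes      : {len(abundances)}")
print(f"Chao1 estimate        : {estimate:.1f}")
print(f"doubly-bounded result : {bounded_estimate:.1f} (capped at {max_classes})")
```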

  13. Upper bound on the Abelian gauge coupling from asymptotic safety

    NASA Astrophysics Data System (ADS)

    Eichhorn, Astrid; Versteegen, Fleur

    2018-01-01

    We explore the impact of asymptotically safe quantum gravity on the Abelian gauge coupling in a model including a charged scalar, confirming indications that asymptotically safe quantum fluctuations of gravity could trigger a power-law running towards a free fixed point for the gauge coupling above the Planck scale. Simultaneously, quantum gravity fluctuations balance against matter fluctuations to generate an interacting fixed point, which acts as a boundary of the basin of attraction of the free fixed point. This enforces an upper bound on the infrared value of the Abelian gauge coupling. In the regime of gravity couplings which in our approximation also allows for a prediction of the top quark and Higgs mass close to the experimental value [1], we obtain an upper bound approximately 35% above the infrared value of the hypercharge coupling in the Standard Model.

  14. Limits of Gaussian fluctuations in the cosmic microwave background at 19.2 GHz

    NASA Technical Reports Server (NTRS)

    Boughn, S. P.; Cheng, E. S.; Cottingham, D. A.; Fixsen, D. J.

    1992-01-01

    The Northern Hemisphere data from the 19.2 GHz full sky survey are analyzed to place limits on the magnitude of Gaussian fluctuations in the cosmic microwave background implied by a variety of correlation functions. Included among the models tested are the monochromatic and Gaussian-shaped families, and those with power-law spectra for n values between -2 and 1. An upper bound is placed on the quadrupole anisotropy of ΔT/T < 3.2 × 10⁻⁵ rms, and an upper bound on scale-invariant (n = 1) fluctuations of a2 < 4.5 × 10⁻⁵ (95 percent confidence level). There is significant contamination of these data from Galactic emission, and improvement of the modeling of the Galaxy could yield a significant reduction of these upper bounds.

  15. Limits on Gaussian fluctuations in the cosmic microwave background at 19.2 GHz

    NASA Technical Reports Server (NTRS)

    Boughn, S. P.; Cheng, E. S.; Cottingham, D. A.; Fixsen, D. J.

    1991-01-01

    The Northern Hemisphere data from the 19.2 GHz full sky survey are analyzed to place limits on the magnitude of Gaussian fluctuations in the cosmic microwave background implied by a variety of correlation functions. Included among the models tested are the monochromatic and Gaussian-shaped families, and those with power law spectra for n from -2 to 1. We place an upper bound on the quadrupole anisotropy of ΔT/T < 3.2 × 10⁻⁵ rms, and an upper bound on scale-invariant (n = 1) fluctuations of a2 < 4.5 × 10⁻⁵ (95 percent confidence level). There is significant contamination of these data from Galactic emission, and improvement of our modeling of the Galaxy could yield a significant reduction of these upper bounds.

  16. Complexity Bounds for Quantum Computation

    DTIC Science & Technology

    2007-06-22

    This project focused on upper and lower bounds for quantum computability using constant...classical computation models, particularly emphasizing new examples of where quantum circuits are more powerful than their classical counterparts. A second...

  17. Upper Bound on Diffusivity

    NASA Astrophysics Data System (ADS)

    Hartman, Thomas; Hartnoll, Sean A.; Mahajan, Raghu

    2017-10-01

    The linear growth of operators in local quantum systems leads to an effective light cone even if the system is nonrelativistic. We show that the consistency of diffusive transport with this light cone places an upper bound on the diffusivity: D ≲ v²τ_eq. The operator growth velocity v defines the light cone, and τ_eq is the local equilibration time scale, beyond which the dynamics of conserved densities is diffusive. We verify that the bound is obeyed in various weakly and strongly interacting theories. In holographic models, this bound establishes a relation between the hydrodynamic and leading nonhydrodynamic quasinormal modes of planar black holes. Our bound relates transport data, including the electrical resistivity and the shear viscosity, to the local equilibration time, even in the absence of a quasiparticle description. In this way, the bound sheds light on the observed T-linear resistivity of many unconventional metals, the shear viscosity of the quark-gluon plasma, and the spin transport of unitary fermions.

  18. Intrinsic upper bound on two-qubit polarization entanglement predetermined by pump polarization correlations in parametric down-conversion

    NASA Astrophysics Data System (ADS)

    Kulkarni, Girish; Subrahmanyam, V.; Jha, Anand K.

    2016-06-01

    We study how one-particle correlations transfer to manifest as two-particle correlations in the context of parametric down-conversion (PDC), a process in which a pump photon is annihilated to produce two entangled photons. We work in the polarization degree of freedom and show that for any two-qubit generation process that is both trace-preserving and entropy-nondecreasing, the concurrence C(ρ) of the generated two-qubit state ρ follows an intrinsic upper bound with C(ρ) ≤ (1 + P)/2, where P is the degree of polarization of the pump photon. We also find that for the class of two-qubit states that is restricted to have only two nonzero diagonal elements such that the effective dimensionality of the two-qubit state is the same as the dimensionality of the pump polarization state, the upper bound on concurrence is the degree of polarization itself, that is, C(ρ) ≤ P. Our work shows that the maximum manifestation of two-particle correlations as entanglement is dictated by one-particle correlations. The formalism developed in this work can be extended to include multiparticle systems and can thus have important implications towards deducing the upper bounds on multiparticle entanglement, for which no universally accepted measure exists.
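
    For reference, the quantity bounded here is the standard Wootters concurrence. The sketch below (a generic illustration, not code from the paper) computes C(ρ) for a two-qubit density matrix and compares it with the quoted bound for a pump degree of polarization P supplied by hand; P is an assumed input, not derived from any down-conversion model.

    ```python
    # Minimal sketch: Wootters concurrence of a two-qubit state, compared against
    # the bound C <= (1 + P)/2 quoted above; P is supplied by hand here, it is not
    # derived from any down-conversion model.
    import numpy as np

    def concurrence(rho):
        """Wootters concurrence of a 4x4 two-qubit density matrix."""
        sy = np.array([[0, -1j], [1j, 0]])
        yy = np.kron(sy, sy)
        r = rho @ yy @ rho.conj() @ yy               # rho * rho_tilde
        lam = np.sqrt(np.abs(np.linalg.eigvals(r)))  # square roots of eigenvalues
        lam = np.sort(lam)[::-1]
        return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

    # Example state: a Werner state p|Phi+><Phi+| + (1-p) I/4
    p = 0.9
    phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
    rho = p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4

    P = 1.0                                          # assumed pump polarization
    print(concurrence(rho), "<=", (1 + P) / 2)
    ```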

  19. Backstepping Design of Adaptive Neural Fault-Tolerant Control for MIMO Nonlinear Systems.

    PubMed

    Gao, Hui; Song, Yongduan; Wen, Changyun

    In this paper, an adaptive controller is developed for a class of multi-input and multioutput nonlinear systems with neural networks (NNs) used as a modeling tool. It is shown that all the signals in the closed-loop system with the proposed adaptive neural controller are globally uniformly bounded for any external input in . In our control design, the upper bound of the NN modeling error and the gains of the external disturbance are characterized by unknown upper bounds, which is more rational for establishing stability in adaptive NN control. Filter-based modification terms are used in the update laws of the unknown parameters to improve the transient performance. Finally, fault-tolerant control is developed to accommodate actuator failure. An illustrative example applying the adaptive controller to control a rigid robot arm shows the validation of the proposed controller.

  20. Sensitivity analysis of limit state functions for probability-based plastic design

    NASA Technical Reports Server (NTRS)

    Frangopol, D. M.

    1984-01-01

    The evaluation of the total probability of a plastic collapse failure P_f for a highly redundant structure of random interdependent plastic moments acted on by random interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds on this probability requires the use of second moment algebra which involves many statistical parameters. A computer program which selects the best strategy for minimizing the interval between upper and lower bounds of P_f is now in its final stage of development. The sensitivity of the resulting bounds of P_f to the various uncertainties involved in the computational process is analyzed. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.

  1. Enhancing the science of the WFIRST coronagraph instrument with post-processing.

    NASA Astrophysics Data System (ADS)

    Pueyo, Laurent; WFIRST CGI data analysis and post-processing WG

    2018-01-01

    We summarize the results of a three-year effort investigating how to apply to the WFIRST coronagraph instrument (CGI) modern image analysis methods now routinely used with ground-based coronagraphs. In this post we quantify the gain associated with post-processing for WFIRST-CGI observing scenarios simulated between 2013 and 2017. We also show, based on simulations, that the spectrum of a planet can be confidently retrieved using these processing tools with an Integral Field Spectrograph. We then discuss our work using CGI experimental data and quantify coronagraph post-processing testbed gains. We finally introduce stability metrics that are simple to define and measure, and place useful lower and upper bounds on the achievable RDI post-processing contrast gain. We show that our bounds hold in the case of the testbed data.

  2. Chang'e 3 lunar mission and upper limit on stochastic background of gravitational wave around the 0.01 Hz band

    NASA Astrophysics Data System (ADS)

    Tang, Wenlin; Xu, Peng; Hu, Songjie; Cao, Jianfeng; Dong, Peng; Bu, Yanlong; Chen, Lue; Han, Songtao; Gong, Xuefei; Li, Wenxiao; Ping, Jinsong; Lau, Yun-Kau; Tang, Geshi

    2017-09-01

    The Doppler tracking data of the Chang'e 3 lunar mission are used to constrain the stochastic background of gravitational waves in cosmology within the 1 mHz to 0.05 Hz frequency band. Our result improves on the upper bound on the energy density of the stochastic background of gravitational waves in the 0.02-0.05 Hz band obtained by the Apollo missions, with the improvement reaching almost one order of magnitude at around 0.05 Hz. Detailed noise analysis of the Doppler tracking data is also presented, with the prospect that these noise sources will be mitigated in future Chinese deep space missions. A feasibility study is also undertaken to understand the scientific capability of the Chang'e 4 mission, due to be launched in 2018, in relation to the stochastic gravitational wave background around 0.01 Hz. The study indicates that the upper bound on the energy density may be further improved by another order of magnitude relative to the Chang'e 3 mission, which will fill the gap in the frequency band from 0.02 Hz to 0.1 Hz in the foreseeable future.

  3. The generalized truncated exponential distribution as a model for earthquake magnitudes

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-04-01

    The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for modelling magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except the upper bound magnitude, are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed, and an example with empirical data is presented in the current contribution.
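
    A small numerical sketch makes the closure problem concrete. Assuming the standard doubly truncated Gutenberg-Richter form for the TED (the GTED itself is not reproduced in the abstract), the fragment below evaluates the TED distribution function and checks numerically that an equal-weight mixture of two TEDs differing only in the upper bound magnitude cannot be matched by a single TED.

    ```python
    # Illustrative sketch (not from the paper): the doubly truncated Gutenberg-Richter
    # (TED) distribution function, and a check that mixing two TEDs which differ only
    # in the upper bound magnitude cannot be reproduced by a single TED.
    import numpy as np
    from scipy.optimize import curve_fit

    def ted_cdf(m, beta, m_min, m_max):
        """CDF of the truncated exponential (Gutenberg-Richter) distribution."""
        m = np.clip(m, m_min, m_max)
        return (1 - np.exp(-beta * (m - m_min))) / (1 - np.exp(-beta * (m_max - m_min)))

    beta, m_min = np.log(10) * 1.0, 4.0          # b-value of 1, lower cutoff M4
    m = np.linspace(m_min, 8.0, 400)

    # Equal-weight mixture of two TEDs with upper bounds M7 and M8
    mix = 0.5 * ted_cdf(m, beta, m_min, 7.0) + 0.5 * ted_cdf(m, beta, m_min, 8.0)

    # Try to fit a single TED (free beta and m_max) to the mixture
    popt, _ = curve_fit(lambda x, b, mx: ted_cdf(x, b, m_min, mx), m, mix,
                        p0=[beta, 7.5], bounds=([0.5, 6.5], [10.0, 12.0]))
    fit = ted_cdf(m, popt[0], m_min, popt[1])
    print("best single-TED fit, max CDF error:", np.max(np.abs(mix - fit)))
    ```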

  4. Static Program Analysis for Reliable, Trusted Apps

    DTIC Science & Technology

    2017-02-01

    flexibility to system design. However, it is challenging for a static analysis to compute or verify properties about a system that uses implicit control...sources might affect the variable’s value. The type qualifier @Sink indicates where (information computed from) the value might be output. These...upper bound on the set of sensitive sources that were actually used to compute the value. If the type of x is qualified by @Source({INTERNET, LOCATION

  5. Risk assessment and monitoring programme of nitrates through vegetables in the Region of Valencia (Spain).

    PubMed

    Quijano, Leyre; Yusà, Vicent; Font, Guillermina; McAllister, Claudia; Torres, Concepción; Pardo, Olga

    2017-02-01

    This study was carried out to determine current levels of nitrate in vegetables marketed in the Region of Valencia (Spain) and to estimate the toxicological risk associated with their intake. A total of 533 samples of seven vegetable species were studied. Nitrate levels were derived from the Valencia Region monitoring programme carried out from 2009 to 2013 and food consumption levels were taken from the first Valencia Food Consumption Survey, conducted in 2010. The exposure was estimated using a probabilistic approach and two scenarios were assumed for left-censored data: the lower-bound scenario, in which unquantified results (below the limit of quantification) were set to zero, and the upper-bound scenario, in which unquantified results were set to the limit of quantification value. The exposure of the Valencia consumers to nitrate through the consumption of vegetable products appears to be relatively low. In the adult population (16-95 years) the P99.9 was 3.13 mg kg⁻¹ body weight day⁻¹ and 3.15 mg kg⁻¹ body weight day⁻¹ in the lower-bound and upper-bound scenario, respectively. On the other hand, for young people (6-15 years) the P99.9 of the exposure was 4.20 mg kg⁻¹ body weight day⁻¹ and 4.40 mg kg⁻¹ body weight day⁻¹ in the lower-bound and upper-bound scenario, respectively. The risk characterisation indicates that, under the upper-bound scenario, 0.79% of adults and 1.39% of young people can exceed the Acceptable Daily Intake of nitrate. This percentage could be higher among extreme consumers of vegetables (such as vegetarians). Overall, the estimated exposures to nitrate from vegetables are unlikely to result in appreciable health risks. Copyright © 2016 Elsevier Ltd. All rights reserved.
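
    The lower-bound/upper-bound treatment of left-censored results is straightforward to express in code. The short sketch below (a generic illustration with hypothetical concentrations, intakes and LOQ, not the study's data pipeline) substitutes zero or the limit of quantification for unquantified results and propagates both scenarios through a simple intake calculation.

    ```python
    # Illustration of the lower-bound (LB) / upper-bound (UB) scenarios for
    # left-censored concentration data; values and LOQ are hypothetical.
    import numpy as np

    loq = 10.0                                   # limit of quantification, mg/kg
    measured = np.array([250.0, np.nan, 80.0, np.nan, 1200.0, 35.0])  # nan = <LOQ

    conc_lb = np.where(np.isnan(measured), 0.0, measured)    # <LOQ -> 0
    conc_ub = np.where(np.isnan(measured), loq, measured)    # <LOQ -> LOQ

    # Toy exposure: daily vegetable intake (kg/day) per sample type, body weight (kg)
    intake_kg_day = np.array([0.05, 0.02, 0.10, 0.03, 0.01, 0.08])
    body_weight = 70.0

    exposure_lb = np.sum(conc_lb * intake_kg_day) / body_weight   # mg/kg bw/day
    exposure_ub = np.sum(conc_ub * intake_kg_day) / body_weight
    print(exposure_lb, exposure_ub)
    ```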

  6. On the upper bound in the Bohm sheath criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotelnikov, I. A., E-mail: I.A.Kotelnikov@inp.nsk.su; Skovorodin, D. I., E-mail: D.I.Skovorodin@inp.nsk.su

    2016-02-15

    The question is discussed about the existence of an upper bound in the Bohm sheath criterion, according to which the Debye sheath at the interface between plasma and a negatively charged electrode is stable only if the ion flow velocity in plasma exceeds the ion sound velocity. It is stated that, with the exception of some artificial ionization models, the Bohm sheath criterion is satisfied as an equality at the lower bound and the ion flow velocity is equal to the speed of sound. In the one-dimensional theory, a supersonic flow appears in an unrealistic model of a localized ion source the size of which is less than the Debye length; however, supersonic flows seem to be possible in the two- and three-dimensional cases. In the available numerical codes used to simulate charged particle sources with a plasma emitter, the existence of the upper bound in the Bohm sheath criterion is not assumed; however, the correspondence with experimental data is usually achieved if the ion flow velocity in plasma is close to the ion sound velocity.

  7. LS Bound based gene selection for DNA microarray data.

    PubMed

    Zhou, Xin; Mao, K Z

    2005-04-15

    One problem with discriminant analysis of DNA microarray data is that each sample is represented by quite a large number of genes, and many of them are irrelevant, insignificant or redundant to the discriminant problem at hand. Methods for selecting important genes are, therefore, of much significance in microarray data analysis. In the present study, a new criterion, called LS Bound measure, is proposed to address the gene selection problem. The LS Bound measure is derived from leave-one-out procedure of LS-SVMs (least squares support vector machines), and as the upper bound for leave-one-out classification results it reflects to some extent the generalization performance of gene subsets. We applied this LS Bound measure for gene selection on two benchmark microarray datasets: colon cancer and leukemia. We also compared the LS Bound measure with other evaluation criteria, including the well-known Fisher's ratio and Mahalanobis class separability measure, and other published gene selection algorithms, including Weighting factor and SVM Recursive Feature Elimination. The strength of the LS Bound measure is that it provides gene subsets leading to more accurate classification results than the filter method while its computational complexity is at the level of the filter method. A companion website can be accessed at http://www.ntu.edu.sg/home5/pg02776030/lsbound/. The website contains: (1) the source code of the gene selection algorithm; (2) the complete set of tables and figures regarding the experimental study; (3) proof of the inequality (9). ekzmao@ntu.edu.sg.

  8. Computer search for binary cyclic UEP codes of odd length up to 65

    NASA Technical Reports Server (NTRS)

    Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu

    1990-01-01

    Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.

  9. On the Kirchhoff Index of Graphs

    NASA Astrophysics Data System (ADS)

    Das, Kinkar C.

    2013-09-01

    Let G be a connected graph of order n with Laplacian eigenvalues μ1 ≥ μ2 ≥ ... ≥ μn-1 > μn = 0. The Kirchhoff index of G is defined as Kf(G) = n ∑_{i=1}^{n-1} 1/μi. In this paper, we give lower and upper bounds on Kf of graphs in terms of n, the number of edges, the maximum degree, and the number of spanning trees. Moreover, we present lower and upper bounds on the Nordhaus-Gaddum-type result for the Kirchhoff index.
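
    Since the definition is spectral, a direct computation is immediate; the fragment below (an illustration on an example graph, not the paper's code) evaluates Kf(G) = n ∑ 1/μi over the nonzero Laplacian eigenvalues.

    ```python
    # Minimal sketch: Kirchhoff index from the nonzero Laplacian eigenvalues,
    # Kf(G) = n * sum_i 1/mu_i, for a small example graph.
    import networkx as nx
    import numpy as np

    G = nx.cycle_graph(5)                         # example: the 5-cycle
    n = G.number_of_nodes()
    mu = np.sort(nx.laplacian_spectrum(G))[::-1]  # mu_1 >= ... >= mu_n = 0
    kf = n * np.sum(1.0 / mu[:-1])                # drop the zero eigenvalue
    print(kf)
    ```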

  10. Objects of Maximum Electromagnetic Chirality

    NASA Astrophysics Data System (ADS)

    Fernandez-Corbaton, Ivan; Fruhnert, Martin; Rockstuhl, Carsten

    2016-07-01

    We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. Reciprocal objects attain the upper bound if and only if they are transparent for all the fields of one polarization handedness (helicity). Additionally, electromagnetic duality symmetry, i.e., helicity preservation upon interaction, turns out to be a necessary condition for reciprocal objects to attain the upper bound. We use these results to provide requirements for the design of such extremal objects. The requirements can be formulated as constraints on the polarizability tensors for dipolar objects or on the material constitutive relations for continuous media. We also outline two applications for objects of maximum electromagnetic chirality: a twofold resonantly enhanced and background-free circular dichroism measurement setup, and angle-independent helicity filtering glasses. Finally, we use the theoretically obtained requirements to guide the design of a specific structure, which we then analyze numerically and discuss its performance with respect to maximal electromagnetic chirality.

  11. A method for paraplegic upper-body posture estimation during standing: a pilot study for rehabilitation purposes.

    PubMed

    Pages, Gaël; Ramdani, Nacim; Fraisse, Philippe; Guiraud, David

    2009-06-01

    This paper presents a contribution for restoring standing in paraplegia while using functional electrical stimulation (FES). Movement generation induced by FES remains mostly open-loop and stimulus intensities are tuned empirically. To design an efficient closed-loop control, a preliminary study has been carried out to investigate the relationship between body posture and voluntary upper body movements. A methodology is proposed to estimate body posture in the sagittal plane using force measurements exerted on supporting handles during standing. This is done by setting up constraints related to the geometric equations of a two-dimensional closed chain model and the hand-handle interactions. All measured quantities are subject to an uncertainty assumed unknown but bounded. The set membership estimation problem is solved via interval analysis. Guaranteed uncertainty bounds are computed for the estimated postures. In order to test the feasibility of our methodology, experiments were carried out with complete spinal cord injured patients.

  12. Performance Analysis of Amplify-and-Forward Systems with Single Relay Selection in Correlated Environments.

    PubMed

    Van Nguyen, Binh; Kim, Kiseon

    2016-09-11

    In this paper, we consider amplify-and-forward (AnF) cooperative systems under correlated fading environments. We first present a brief overview of existing works on the effect of channel correlations on the system performance. We then focus on our main contribution, which is analyzing the outage probability of a multi-AnF-relay system with the best relay selection (BRS) scheme under the condition that the two channels of each relay, the source-relay and relay-destination channels, are correlated. Using lower and upper bounds on the end-to-end received signal-to-noise ratio (SNR) at the destination, we derive corresponding upper and lower bounds on the system outage probability. We prove that the system can achieve a diversity order (DO) equal to the number of relays. In addition, and importantly, we show that the considered correlation form has a constructive effect on the system performance. In other words, the larger the correlation coefficient, the better the system performance. Our analytic results are corroborated by extensive Monte-Carlo simulations.
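
    The effect described here can be explored with a quick Monte Carlo sketch. The code below (an illustration of the setting with made-up parameters, not the authors' derivation) draws correlated Rayleigh source-relay and relay-destination channels for each relay, applies best relay selection on the exact amplify-and-forward end-to-end SNR, and estimates the outage probability for a few correlation coefficients.

    ```python
    # Monte Carlo sketch: outage probability of AnF best relay selection when the
    # source-relay and relay-destination channels of each relay are correlated.
    # Parameters are illustrative, not taken from the paper.
    import numpy as np

    def outage(rho, n_relays=3, snr_db=10.0, gamma_th=1.0, n_trials=200_000, seed=0):
        rng = np.random.default_rng(seed)
        snr = 10 ** (snr_db / 10)
        shape = (n_trials, n_relays)
        # Correlated complex Gaussian channel pairs (correlation coefficient rho)
        x = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
        y = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
        h_sr = x
        h_rd = rho * x + np.sqrt(1 - rho ** 2) * y
        g1, g2 = snr * np.abs(h_sr) ** 2, snr * np.abs(h_rd) ** 2
        gamma_e2e = g1 * g2 / (g1 + g2 + 1)       # exact AnF end-to-end SNR
        best = gamma_e2e.max(axis=1)              # best relay selection
        return np.mean(best < gamma_th)

    for rho in (0.0, 0.5, 0.9):
        print(rho, outage(rho))
    ```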

  13. Flutter suppression and stability analysis for a variable-span wing via morphing technology

    NASA Astrophysics Data System (ADS)

    Li, Wencheng; Jin, Dongping

    2018-01-01

    A morphing wing can enhance aerodynamic characteristics and control authority as an alternative to using ailerons. To use morphing technology for flutter suppression, the dynamical behavior and stability of a variable-span wing subjected to the supersonic aerodynamic loads are investigated numerically in this paper. An axially moving cantilever plate is employed to model the variable-span wing, in which the governing equations of motion are established via the Kane method and piston theory. A morphing strategy based on axially moving rates is proposed to suppress the flutter that occurs beyond the critical span length, and the flutter stability is verified by Floquet theory. Furthermore, the transient stability during the morphing motion is analyzed and the upper bound of the morphing rate is obtained. The simulation results indicate that the proposed morphing law, which is varying periodically with a proper amplitude, could accomplish the flutter suppression. Further, the upper bound of the morphing speed decreases rapidly once the span length is close to its critical span length.

  14. Time-optimal spinup maneuvers of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Singh, G.; Kabamba, P. T.; Mcclamroch, N. H.

    1990-01-01

    Attitude controllers for spacecraft have been based on the assumption that the bodies being controlled are rigid. Future spacecraft, however, may be quite flexible. Many applications require spinning up/down these vehicles. In this work the minimum time control of these maneuvers is considered. The time-optimal control is shown to possess an important symmetry property. Taking advantage of this property, the necessary and sufficient conditions for optimality are transformed into a system of nonlinear algebraic equations in the control switching times during one half of the maneuver, the maneuver time, and the costates at the mid-maneuver time. These equations can be solved using a homotopy approach. Control spillover measures are introduced and upper bounds on these measures are obtained. For a special case these upper bounds can be expressed in closed form for an infinite dimensional evaluation model. Rotational stiffening effects are ignored in the optimal control analysis. Based on a heuristic argument a simple condition is given which justifies the omission of these nonlinear effects. This condition is validated by numerical simulation.

  15. Mercury: results on mass, radius, ionosphere, and atmosphere from mariner 10 dual-frequency radio signals.

    PubMed

    Howard, H T; Tyler, G L; Esposito, P B; Anderson, J D; Reasenberg, R D; Shapiro, I I; Fjeldbo, G; Kliore, A J; Levy, G S; Brunn, D L; Dickinson, R; Edelson, R E; Martin, W L; Postal, R B; Seidel, B; Sesplaukis, T T; Shirley, D L; Stelzried, C T; Sweetnam, D N; Wood, G E; Zygielbaum, A I

    1974-07-12

    Analysis of the radio-tracking data from Mariner 10 yields 6,023,600 +/- 600 for the ratio of the mass of the sun to that of Mercury, in very good agreement with values determined earlier from radar data alone. Occultation measurements yielded values for the radius of Mercury of 2440 +/- 2 and 2438 +/- 2 kilometers at latitudes of 2 degrees N and 68 degrees N, respectively, again in close agreement with the average equatorial radius of 2439 +/- 1 kilometers determined from radar data. The mean density of 5.44 grams per cubic centimeter deduced for Mercury from Mariner 10 data thus virtually coincides with the prior determination. No evidence of either an ionosphere or an atmosphere was found, with the data yielding upper bounds on the electron density of about 1500 and 4000 electrons per cubic centimeter on the dayside and nightside, respectively, and an inferred upper bound on the surface pressure of 10⁻⁸ millibar.

  16. Heterogeneous upper-bound finite element limit analysis of masonry walls out-of-plane loaded

    NASA Astrophysics Data System (ADS)

    Milani, G.; Zuccarello, F. A.; Olivito, R. S.; Tralli, A.

    2007-11-01

    A heterogeneous approach for FE upper bound limit analyses of out-of-plane loaded masonry panels is presented. Under the assumption of associated plasticity for the constituent materials, mortar joints are reduced to interfaces with a Mohr-Coulomb failure criterion with tension cut-off and a cap in compression, whereas for bricks both limited and unlimited strength are taken into account. At each interface, plastic dissipation can occur as a combination of out-of-plane shear, bending and torsion. In order to test the reliability of the proposed model, several examples of out-of-plane loaded dry-joint panels tested at the University of Calabria (Italy) are discussed. Numerical results are compared with experimental data for three different series of walls at different values of the applied in-plane compressive vertical loads. The comparisons show that reliable predictions of both collapse loads and failure mechanisms can be obtained by means of the numerical procedure employed.

  17. Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination

    NASA Astrophysics Data System (ADS)

    Audenaert, Koenraad M. R.; Mosonyi, Milán

    2014-10-01

    We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ1, …, σr. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ1, …, σr), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences min _{j

  18. On subgame perfect equilibria in quantum Stackelberg duopoly

    NASA Astrophysics Data System (ADS)

    Frąckiewicz, Piotr; Pykacz, Jarosław

    2018-02-01

    Our purpose is to study the Stackelberg duopoly with the use of the Li-Du-Massar quantum duopoly scheme. The result of Lo and Kiang has shown that the correlation of players' quantities caused by quantum entanglement enlarges the first-mover advantage in the quantum Stackelberg duopoly. However, the interval of entanglement parameters for which this result is valid is bounded from above. It has been an open question what the equilibrium result is above the upper bound, in particular when the entanglement parameter goes to infinity. Our work provides a complete analysis of the subgame perfect equilibria of the game for all values of the entanglement parameter.

  19. Soft-decision decoding techniques for linear block codes and their error performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1996-01-01

    The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns itself with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.

  20. Analysis of synchronous digital-modulation schemes for satellite communication

    NASA Technical Reports Server (NTRS)

    Takhar, G. S.; Gupta, S. C.

    1975-01-01

    The multipath communication channel for space communications is modeled as a multiplicative channel. This paper discusses the effects of multiplicative channel processes on the symbol error rate for quadrature modulation (QM) digital modulation schemes. An expression for the upper bound on the probability of error is derived and numerically evaluated. The results are compared with those obtained for additive channels.

  1. Evaluation of Ares-I Control System Robustness to Uncertain Aerodynamics and Flex Dynamics

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; VanTassel, Chris; Bedrossian, Nazareth; Hall, Charles; Spanos, Pol

    2008-01-01

    This paper discusses the application of robust control theory to evaluate robustness of the Ares-I control systems. Three techniques for estimating upper and lower bounds of uncertain parameters which yield stable closed-loop response are used here: (1) Monte Carlo analysis, (2) mu analysis, and (3) characteristic frequency response analysis. All three methods are used to evaluate stability envelopes of the Ares-I control systems with uncertain aerodynamics and flex dynamics. The results show that characteristic frequency response analysis is the most effective of these methods for assessing robustness.

  2. Roentgen stereophotogrammetric analysis of metal-backed hemispherical cups without attached markers.

    PubMed

    Valstar, E R; Spoor, C W; Nelissen, R G; Rozing, P M

    1997-11-01

    A method for the detection of micromotion of a metal-backed hemispherical acetabular cup is presented and tested. Unlike in conventional roentgen stereophotogrammetric analysis, the cup does not have to be marked with tantalum markers; the micromotion is calculated from the contours of the hemispherical part and the base circle of the cup. In this way, two rotations (tilt and anteversion) and the translations along the three cardinal axes are obtained. In a phantom study, the maximum error in the position of the cup's centre was 0.04 mm. The mean error in the orientation of the cup was 0.41 degree, with a 95% confidence interval of 0.28-0.54 degree. The in vivo accuracy was tested by repeated measurement of 21 radiographs from seven patients. The upper bound of the 95% tolerance interval for the translations along the transversal, longitudinal, and sagittal axes was 0.09, 0.07, and 0.34 mm, respectively: for the rotation, this upper bound was 0.39 degree. These results show that the new method, in which the position and orientation of metal-backed hemispherical cup is calculated from its projected contours, is a simple and accurate alternative to attaching markers to the cup.

  3. Solar System and stellar tests of a quantum-corrected gravity

    NASA Astrophysics Data System (ADS)

    Zhao, Shan-Shan; Xie, Yi

    2015-09-01

    The renormalization group running of the gravitational constant has a universal form and represents a possible extension of general relativity. These renormalization group effects on general relativity will cause the running of the gravitational constant, and there exists a scale of renormalization α_ν, which depends on the mass of an astronomical system and needs to be determined by observations. We test renormalization group effects on general relativity and obtain the upper bounds of α_ν in the low-mass scales: the Solar System and five systems of binary pulsars. Using the supplementary advances of the perihelia provided by INPOP10a (IMCCE, France) and EPM2011 (IAA RAS, Russia) ephemerides, we obtain new upper bounds on α_ν in the Solar System when the Lense-Thirring effect due to the Sun's angular momentum and the uncertainty of the Sun's quadrupole moment are properly taken into account. These two factors were absent in the previous work. We find that INPOP10a yields the upper bound as α_ν = (0.3 ± 2.8) × 10⁻²⁰ while EPM2011 gives α_ν = (-2.5 ± 8.3) × 10⁻²¹. Both of them are tighter than the previous result by 4 orders of magnitude. Furthermore, based on the observational data sets of five systems of binary pulsars: PSR J0737-3039, PSR B1534+12, PSR J1756-2251, PSR B1913+16, and PSR B2127+11C, the upper bound is found as α_ν = (-2.6 ± 5.1) × 10⁻¹⁷. From the bounds of this work at a low-mass scale and the ones at the mass scale of galaxies, we might catch an updated glimpse of the mass dependence of α_ν, and it is found that our improvement of the upper bounds in the Solar System can significantly change the possible pattern of the relation between log|α_ν| and log m from a linear one to a power law, where m is the mass of an astronomical system. This suggests that |α_ν| needs to be suppressed more rapidly with the decrease of the mass of low-mass systems. It also predicts that |α_ν| might have an upper limit in high-mass astrophysical systems, which can be tested in the future.

  4. Differential Games of inf-sup Type and Isaacs Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaise, Hidehiro; Sheu, S.-J.

    2005-06-15

    Motivated by the work of Fleming, we provide a general framework to associate inf-sup type values with the Isaacs equations. We show that upper and lower bounds for the generators of inf-sup type are upper and lower Hamiltonians, respectively. In particular, the lower (resp. upper) bound corresponds to the progressive (resp. strictly progressive) strategy. By the Dynamic Programming Principle and identification of the generator, we can prove that the inf-sup type game is characterized as the unique viscosity solution of the Isaacs equation. We also discuss the Isaacs equation with a Hamiltonian of a convex combination between the lower and upper Hamiltonians.

  5. Unveiling ν secrets with cosmological data: Neutrino masses and mass hierarchy

    NASA Astrophysics Data System (ADS)

    Vagnozzi, Sunny; Giusarma, Elena; Mena, Olga; Freese, Katherine; Gerbino, Martina; Ho, Shirley; Lattanzi, Massimiliano

    2017-12-01

    Using some of the latest cosmological data sets publicly available, we derive the strongest bounds in the literature on the sum of the three active neutrino masses, Mν, within the assumption of a background flat ΛCDM cosmology. In the most conservative scheme, combining Planck cosmic microwave background temperature anisotropies and baryon acoustic oscillations (BAO) data, as well as the up-to-date constraint on the optical depth to reionization (τ), the tightest 95% confidence level upper bound we find is Mν < 0.151 eV. The addition of Planck high-ℓ polarization data, which, however, might still be contaminated by systematics, further tightens the bound to Mν < 0.118 eV. A proper model comparison treatment shows that the two aforementioned combinations disfavor the inverted hierarchy at ∼64% C.L. and ∼71% C.L., respectively. In addition, we compare the constraining power of measurements of the full-shape galaxy power spectrum versus the BAO signature, from the BOSS survey. Even though the latest BOSS full-shape measurements cover a larger volume and benefit from smaller error bars compared to previous similar measurements, the analysis method commonly adopted results in their constraining power still being less powerful than that of the extracted BAO signal. Our work uses only cosmological data; imposing the constraint Mν > 0.06 eV from oscillations data would raise the quoted upper bounds by O(0.1σ) and would not affect our conclusions.

  6. Tidal disruption of Periodic Comet Shoemaker-Levy 9 and a constraint on its mean density

    NASA Technical Reports Server (NTRS)

    Boss, Alan P.

    1994-01-01

    The apparent tidal disruption of Periodic Comet Shoemaker-Levy 9 (1993e) during a close encounter within approximately 1.62 planetary radii of Jupiter can be used along with theoretical models of tidal disruption to place an upper bound on the density of the predisruption body. Depending on the theoretical model used, these upper bounds range from ρ_c < 0.702 ± 0.080 g/cm³ for a simple analytical model calibrated by numerical smoothed particle hydrodynamics (SPH) simulations to ρ_c < 1.50 ± 0.17 g/cm³ for a detailed semianalytical model. The quoted uncertainties stem from an assumed uncertainty in the perijove radius. However, the uncertainty introduced by the different theoretical models is the major source of error; this uncertainty could be eliminated by future SPH simulations specialized to cometary disruptions, including the effects of initially prolate, spinning comets. If the SPH-based upper bound turns out to be most appropriate, it would be consistent with the predisruption body being a comet with a relatively low density and porous structure, as has been asserted previously based on observations of cometary outgassing. Regardless of which upper bound is preferable, the models all agree that the predisruption body could not have been a relatively high-density body, such as an asteroid with ρ ≈ 2 g/cm³.

  7. Bounds for the price of discrete arithmetic Asian options

    NASA Astrophysics Data System (ADS)

    Vanmaele, M.; Deelstra, G.; Liinev, J.; Dhaene, J.; Goovaerts, M. J.

    2006-01-01

    In this paper the pricing of European-style discrete arithmetic Asian options with fixed and floating strike is studied by deriving analytical lower and upper bounds. In our approach we use a general technique for deriving upper (and lower) bounds for stop-loss premiums of sums of dependent random variables, as explained in Kaas et al. (Ins. Math. Econom. 27 (2000) 151-168), and additionally, the ideas of Rogers and Shi (J. Appl. Probab. 32 (1995) 1077-1088) and of Nielsen and Sandmann (J. Financial Quant. Anal. 38(2) (2003) 449-473). We are able to create a unifying framework for European-style discrete arithmetic Asian options through these bounds, which generalizes several approaches in the literature as well as improves the existing results. We obtain analytical and easily computable bounds. The aim of the paper is to formulate advice on the appropriate choice of the bounds given the parameters, to investigate the effect of different conditioning variables and to compare their efficiency numerically. Several sets of numerical results are included. We also discuss hedging using these bounds. Moreover, our methods are applicable to a wide range of (pricing) problems involving a sum of dependent random variables.
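
    The flavour of these bounds can be reproduced with a short simulation. The sketch below (a simplified Black-Scholes illustration with hypothetical parameters, not the authors' implementation) prices a fixed-strike discrete arithmetic Asian call by Monte Carlo, computes a conditioning lower bound obtained by projecting the average onto the terminal Brownian motion, and computes the comonotonic upper bound built from the marginal lognormal quantiles.

    ```python
    # Illustrative bounds for a fixed-strike discrete arithmetic Asian call under
    # Black-Scholes dynamics (parameters are hypothetical). The lower bound uses
    # Jensen's inequality after conditioning on W_T; the upper bound uses the
    # comonotonic sum of the lognormal marginals (convex-order domination).
    import numpy as np
    from scipy.stats import norm

    s0, k, r, sigma, T, n = 100.0, 100.0, 0.05, 0.2, 1.0, 12
    t = T * np.arange(1, n + 1) / n
    rng = np.random.default_rng(1)
    n_paths = 200_000
    disc = np.exp(-r * T)

    # Plain Monte Carlo price of the arithmetic Asian call
    dw = rng.standard_normal((n_paths, n)) * np.sqrt(T / n)
    w = np.cumsum(dw, axis=1)
    s = s0 * np.exp((r - 0.5 * sigma ** 2) * t + sigma * w)
    price = disc * np.mean(np.maximum(s.mean(axis=1) - k, 0.0))

    # Lower bound: condition on W_T, then apply the payoff to E[S_ti | W_T]
    wT = rng.standard_normal(n_paths) * np.sqrt(T)
    cond = s0 * np.exp((r - 0.5 * sigma ** 2) * t
                       + sigma * np.outer(wT, t / T)
                       + 0.5 * sigma ** 2 * t * (T - t) / T)
    lower = disc * np.mean(np.maximum(cond.mean(axis=1) - k, 0.0))

    # Upper bound: comonotonic sum, all marginals driven by one uniform U
    z = norm.ppf(rng.random(n_paths))
    como = s0 * np.exp((r - 0.5 * sigma ** 2) * t + sigma * np.sqrt(t) * z[:, None])
    upper = disc * np.mean(np.maximum(como.mean(axis=1) - k, 0.0))

    print(lower, price, upper)   # lower <= price <= upper (up to MC noise)
    ```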

  8. Impact of jammer side information on the performance of anti-jam systems

    NASA Astrophysics Data System (ADS)

    Lim, Samuel

    1992-03-01

    The Chernoff bound parameter, D, provides a performance measure for all coded communication systems. D can be used to determine upper-bounds on bit error probabilities (BEPs) of Viterbi decoded convolutional codes. The impact on BEP bounds of channel measurements that provide additional side information can also be evaluated with D. This memo documents the results of a Chernoff bound parameter evaluation in optimum partial-band noise jamming (OPBNJ) for both BPSK and DPSK modulation schemes. Hard and soft quantized receivers, with and without jammer side information (JSI), were examined. The results of this analysis indicate that JSI does improve decoding performance. However, a knowledge of jammer presence alone achieves a performance level comparable to soft decision decoding with perfect JSI. Furthermore, performance degradation due to the lack of JSI can be compensated for by increasing the number of levels of quantization. Therefore, an anti-jam system without JSI can be made to perform almost as well as a system with JSI.

  9. Coefficient of performance and its bounds with the figure of merit for a general refrigerator

    NASA Astrophysics Data System (ADS)

    Long, Rui; Liu, Wei

    2015-02-01

    A general refrigerator model with non-isothermal processes is studied. The coefficient of performance (COP) and its bounds at maximum χ figure of merit are obtained and analyzed. This model accounts for different heat capacities during the heat transfer processes. So, different kinds of refrigerator cycles can be considered. Under the constant heat capacity condition, the upper bound of the COP is the Curzon-Ahlborn (CA) coefficient of performance and is independent of the time durations of the heat exchanging processes. With the maximum χ criterion, in the refrigerator cycles, such as the reversed Brayton refrigerator cycle, the reversed Otto refrigerator cycle and the reversed Atkinson refrigerator cycle, where the heat capacity in the heat absorbing process is not less than that in the heat releasing process, their COPs are bounded by the CA coefficient of performance; otherwise, such as for the reversed Diesel refrigerator cycle, its COP can exceed the CA coefficient of performance. Furthermore, the general refined upper and lower bounds have been proposed.

  10. Improved Lower Bounds on the Price of Stability of Undirected Network Design Games

    NASA Astrophysics Data System (ADS)

    Bilò, Vittorio; Caragiannis, Ioannis; Fanelli, Angelo; Monaco, Gianpiero

    Bounding the price of stability of undirected network design games with fair cost allocation is a challenging open problem in the Algorithmic Game Theory research agenda. Even though the generalization of such games in directed networks is well understood in terms of the price of stability (it is exactly H_n, the n-th harmonic number, for games with n players), far less is known for network design games in undirected networks. The upper bound carries over to this case as well while the best known lower bound is 42/23 ≈ 1.826. For more restricted but interesting variants of such games such as broadcast and multicast games, sublogarithmic upper bounds are known while the best known lower bound is 12/7 ≈ 1.714. In the current paper, we improve the lower bounds as follows. We break the psychological barrier of 2 by showing that the price of stability of undirected network design games is at least 348/155 ≈ 2.245. Our proof uses a recursive construction of a network design game with a simple gadget as the main building block. For broadcast and multicast games, we present new lower bounds of 20/11 ≈ 1.818 and 1.862, respectively.

  11. Ultimate energy density of observable cold baryonic matter.

    PubMed

    Lattimer, James M; Prakash, Madappa

    2005-03-25

    We demonstrate that the largest measured mass of a neutron star establishes an upper bound to the energy density of observable cold baryonic matter. An equation of state-independent expression satisfied by both normal neutron stars and self-bound quark matter stars is derived for the largest energy density of matter inside stars as a function of their masses. The largest observed mass sets the lowest upper limit to the density. Implications from existing and future neutron star mass measurements are discussed.

  12. Semiannual Report, October 1, 1989 through March 31, 1990 (Institute for Computer Applications in Science and Engineering)

    DTIC Science & Technology

    1990-06-01

    synchronization. We consider the performance of various synchronization protocols by deriving upper and lower bounds on optimal performance, upper bounds on Time ...from universities and from industry, who have resident appointments for limited periods of time, and by consultants. Members of NASA's research staff...convergence to steady state is also being studied together with D. Gottlieb. The idea is to generalize the concept of local-time stepping by minimizing the

  13. Generalized monogamy inequalities and upper bounds of negativity for multiqubit systems

    NASA Astrophysics Data System (ADS)

    Yang, Yanmin; Chen, Wei; Li, Gang; Zheng, Zhu-Jun

    2018-01-01

    In this paper, we present some generalized monogamy inequalities and upper bounds of negativity based on convex-roof extended negativity (CREN) and CREN of assistance (CRENOA). These monogamy relations are satisfied by the negativity of N-qubit quantum systems ABC1⋯CN-2, under the partitions AB|C1⋯CN-2 and ABC1|C2⋯CN-2. Furthermore, the W-class states are used to test these generalized monogamy inequalities.

  14. Efficiency and formalism of quantum games

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.F.; Johnson, Neil F.

    We show that quantum games are more efficient than classical games and provide a saturated upper bound for this efficiency. We also demonstrate that the set of finite classical games is a strict subset of the set of finite quantum games. Our analysis is based on a rigorous formulation of quantum games, from which quantum versions of the minimax theorem and the Nash equilibrium theorem can be deduced.

  15. Computing an upper bound on contact stress with surrogate duality

    NASA Astrophysics Data System (ADS)

    Xuan, Zhaocheng; Papadopoulos, Panayiotis

    2016-07-01

    We present a method for computing an upper bound on the contact stress of elastic bodies. The continuum model of elastic bodies with contact is first modeled as a constrained optimization problem by using finite elements. An explicit formulation of the total contact force, a fraction function with the numerator as a linear function and the denominator as a quadratic convex function, is derived with only the normalized nodal contact forces as the constrained variables in a standard simplex. Then two bounds are obtained for the sum of the nodal contact forces. The first is an explicit formulation of matrices of the finite element model, derived by maximizing the fraction function under the constraint that the sum of the normalized nodal contact forces is one. The second bound is solved by first maximizing the fraction function subject to the standard simplex and then using Dinkelbach's algorithm for fractional programming to find the maximum—since the fraction function is pseudo concave in a neighborhood of the solution. These two bounds are solved with the problem dimensions being only the number of contact nodes or node pairs, which are much smaller than the dimension for the original problem, namely, the number of degrees of freedom. Next, a scheme for constructing an upper bound on the contact stress is proposed that uses the bounds on the sum of the nodal contact forces obtained on a fine finite element mesh and the nodal contact forces obtained on a coarse finite element mesh, which are problems that can be solved at a lower computational cost. Finally, the proposed method is verified through some examples concerning both frictionless and frictional contact to demonstrate the method's feasibility, efficiency, and robustness.
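
    Dinkelbach's algorithm, named above, has a compact generic form. The sketch below (a toy instance on the standard simplex, not the paper's finite element formulation) maximizes a linear-over-quadratic fraction by repeatedly solving the parametric problem max f(x) - λ g(x) and updating λ = f(x)/g(x) until the parametric optimum reaches zero.

    ```python
    # Generic Dinkelbach iteration for maximizing f(x)/g(x) over a convex set,
    # illustrated on a toy linear-over-quadratic fraction on the standard simplex.
    # (A sketch of the algorithm named in the abstract, not the paper's
    # finite-element formulation.)
    import numpy as np
    from scipy.optimize import minimize

    c = np.array([1.0, 2.0, 3.0])                 # numerator: f(x) = c.x + 1
    Q = np.diag([1.0, 2.0, 4.0])                  # denominator: g(x) = x.Q.x + 1
    f = lambda x: c @ x + 1.0
    g = lambda x: x @ Q @ x + 1.0

    def solve_parametric(lam, x0):
        """max_x f(x) - lam*g(x) on the simplex (concave for lam >= 0)."""
        obj = lambda x: -(f(x) - lam * g(x))
        cons = [{"type": "eq", "fun": lambda x: np.sum(x) - 1.0}]
        res = minimize(obj, x0, bounds=[(0, 1)] * len(x0), constraints=cons,
                       method="SLSQP")
        return res.x, -res.fun

    x = np.full(3, 1.0 / 3.0)
    lam = f(x) / g(x)
    for _ in range(50):                           # Dinkelbach iterations
        x, val = solve_parametric(lam, x)
        if abs(val) < 1e-8:                       # F(lam) = 0  =>  lam is optimal
            break
        lam = f(x) / g(x)

    print("maximum of f/g on the simplex:", lam, "attained at", x)
    ```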

  16. Antiferromagnetic Potts Model on the Erdős-Rényi Random Graph

    NASA Astrophysics Data System (ADS)

    Contucci, Pierluigi; Dommers, Sander; Giardinà, Cristian; Starr, Shannon

    2013-10-01

    We study the antiferromagnetic Potts model on the Poissonian Erdős-Rényi random graph. By identifying a suitable interpolation structure and an extended variational principle, together with a positive-temperature second-moment analysis, we prove the existence of a phase transition at a positive critical temperature. Upper and lower bounds on the critical temperature are obtained from the stability analysis of the replica symmetric solution (recovered in the framework of Derrida-Ruelle probability cascades) and from an entropy positivity argument.

  17. Porous medium convection at large Rayleigh number: Studies of coherent structure, transport, and reduced dynamics

    NASA Astrophysics Data System (ADS)

    Wen, Baole

    Buoyancy-driven convection in fluid-saturated porous media is a key environmental and technological process, with applications ranging from carbon dioxide storage in terrestrial aquifers to the design of compact heat exchangers. Porous medium convection is also a paradigm for forced-dissipative infinite-dimensional dynamical systems, exhibiting spatiotemporally chaotic dynamics if not "true" turbulence. The objective of this dissertation research is to quantitatively characterize the dynamics and heat transport in two-dimensional horizontal and inclined porous medium convection between isothermal plane parallel boundaries at asymptotically large values of the Rayleigh number Ra by investigating the emergent, quasi-coherent flow. This investigation employs a complement of direct numerical simulations (DNS), secondary stability and dynamical systems theory, and variational analysis. The DNS confirm the remarkable tendency for the interior flow to self-organize into closely-spaced columnar plumes at sufficiently large Ra (up to Ra ≃ 10⁵), with more complex spatiotemporal features being confined to boundary layers near the heated and cooled walls. The relatively simple form of the interior flow motivates investigation of unstable steady and time-periodic convective states at large Ra as a function of the domain aspect ratio L. To gain insight into the development of spatiotemporally chaotic convection, the (secondary) stability of these fully nonlinear states to small-amplitude disturbances is investigated using a spatial Floquet analysis. The results indicate that there exist two distinct modes of instability at large Ra: a bulk instability mode and a wall instability mode. The former usually is excited by long-wavelength disturbances and is generally much weaker than the latter. DNS, strategically initialized to investigate the fully nonlinear evolution of the most dangerous secondary instability modes, suggest that the (long time) mean inter-plume spacing in statistically-steady porous medium convection results from an interplay between the competing effects of these two types of instability. Upper bound analysis is then employed to investigate the dependence of the heat transport enhancement factor, i.e. the Nusselt number Nu, on Ra and L. To solve the optimization problems arising from the "background field" upper-bound variational analysis, a novel two-step algorithm in which time is introduced into the formulation is developed. The new algorithm obviates the need for numerical continuation, thereby enabling the best available bounds to be computed up to Ra ≈ 2.65 × 10⁴. A mathematical proof is given to demonstrate that the only steady state to which this numerical algorithm can converge is the required global optimum of the variational problem. Using this algorithm, the dependence of the bounds on L(Ra) is explored, and a "minimal flow unit" is identified. Finally, the upper bound variational methodology is also shown to yield quantitatively useful predictions of Nu and to furnish a functional basis that is naturally adapted to the boundary layer dynamics at large Ra.

  18. Bounds on the Coupling of the Majoron to Light Neutrinos from Supernova Cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farzan, Yasaman

    2002-12-02

    We explore the role of Majoron (J) emission in the supernova cooling process, as a source of upper bound on the neutrino-Majoron coupling. We show that the strongest upper bound on the coupling to ν_3 comes from the ν_e ν_e → J process in the core of a supernova. We also find bounds on diagonal couplings of the Majoron to ν_μ(τ) ν_μ(τ) and on off-diagonal ν_e ν_μ(τ) couplings in various regions of the parameter space. We discuss the evaluation of cross-section for four-particle interactions (νν → JJ and νJ → νJ). We show that these are typically dominated by three-particle sub-processes and do not give new independent constraints.

  19. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our work to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
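
    A minimal version of the landmark-based construction is sketched below. It uses ordinary k-means centers in input space as a simple stand-in for the kernel k-means centers studied in the paper, builds the Nyström approximation K ≈ C W⁺ Cᵀ for a Gaussian kernel, and compares the Frobenius-norm error against uniformly sampled landmarks; the data and parameters are synthetic.

    ```python
    # Sketch of Nystrom approximation K ~ C W^+ C^T with k-means centers as
    # landmark points (ordinary k-means in input space is used here as a simple
    # stand-in for the kernel k-means sampling studied in the paper).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 10))
    gamma, m = 0.1, 20                               # kernel width, number of landmarks

    K = rbf_kernel(X, X, gamma=gamma)                # exact kernel matrix

    def nystrom_error(landmarks):
        C = rbf_kernel(X, landmarks, gamma=gamma)
        W = rbf_kernel(landmarks, landmarks, gamma=gamma)
        K_hat = C @ np.linalg.pinv(W) @ C.T
        return np.linalg.norm(K - K_hat, "fro")

    centers = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_
    uniform = X[rng.choice(len(X), size=m, replace=False)]

    print("k-means landmarks :", nystrom_error(centers))
    print("uniform landmarks :", nystrom_error(uniform))
    ```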

  20. ϕ³ theory with F4 flavor symmetry in 6 - 2ɛ dimensions: 3-loop renormalization and conformal bootstrap

    NASA Astrophysics Data System (ADS)

    Pang, Yi; Rong, Junchen; Su, Ning

    2016-12-01

    We consider ϕ³ theory in 6 - 2ɛ with F4 global symmetry. The beta function is calculated up to 3 loops, and a stable unitary IR fixed point is observed. The anomalous dimensions of operators quadratic or cubic in ϕ are also computed. We then employ conformal bootstrap technique to study the fixed point predicted from the perturbative approach. For each putative scaling dimension of ϕ (Δ_ϕ), we obtain the corresponding upper bound on the scaling dimension of the second lowest scalar primary in the 26 representation (Δ_26^(2nd)) which appears in the OPE of ϕ × ϕ. In D = 5.95, we observe a sharp peak on the upper bound curve located at Δ_ϕ equal to the value predicted by the 3-loop computation. In D = 5, we observe a weak kink on the upper bound curve at (Δ_ϕ, Δ_26^(2nd)) = (1.6, 4).

  1. Control design for robust stability in linear regulators: Application to aerospace flight control

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1986-01-01

    Time domain stability robustness analysis and design for linear multivariable uncertain systems with bounded uncertainties is the central theme of the research. After reviewing the recently developed upper bounds on the linear elemental (structured), time varying perturbation of an asymptotically stable linear time invariant regulator, it is shown that it is possible to further improve these bounds by employing state transformations. Then, introducing a quantitative measure called the stability robustness index, a state feedback control design algorithm is presented for a general linear regulator problem and then specialized to the case of modal systems as well as matched systems. The extension of the algorithm to stochastic systems with a Kalman filter as the state estimator is presented. Finally, an algorithm for robust dynamic compensator design is presented using a Parameter Optimization (PO) procedure. Applications in aircraft control and flexible structure control are presented along with a comparison with other existing methods.

  2. A Computational Framework to Control Verification and Robustness Analysis

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2010-01-01

    This paper presents a methodology for evaluating the robustness of a controller based on its ability to satisfy the design requirements. The framework proposed is generic since it allows for high-fidelity models, arbitrary control structures and arbitrary functional dependencies between the requirements and the uncertain parameters. The cornerstone of this contribution is the ability to bound the region of the uncertain parameter space where the degradation in closed-loop performance remains acceptable. The size of this bounding set, whose geometry can be prescribed according to deterministic or probabilistic uncertainty models, is a measure of robustness. The robustness metrics proposed herein are the parametric safety margin, the reliability index, the failure probability and upper bounds to this probability. The performance observed at the control verification setting, where the assumptions and approximations used for control design may no longer hold, will fully determine the proposed control assessment.

  3. Strong polygamy of quantum correlations in multi-party quantum systems

    NASA Astrophysics Data System (ADS)

    Kim, Jeong San

    2014-10-01

    We propose a new type of polygamy inequality for multi-party quantum entanglement. We first consider the possible amount of bipartite entanglement distributed between a fixed party and any subset of the remaining parties in a multi-party quantum system. By using the summation of these distributed entanglements, we provide an upper bound on the distributed entanglement between one party and the rest in multi-party quantum systems. We then show that this upper bound also serves as a lower bound of the usual polygamy inequality, thereby establishing the strong polygamy of multi-party quantum entanglement. For the case of multi-party pure states, we further show that the strong polygamy of entanglement implies the strong polygamy of quantum discord.

  4. Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems

    NASA Astrophysics Data System (ADS)

    Tobasco, Ian; Goluskin, David; Doering, Charles R.

    2018-02-01

    For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
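
    The bounding mechanism sketched above can be stated compactly. For a system ẋ = f(x) and a quantity of interest Φ(x), any differentiable auxiliary function V gives (in generic notation chosen here, not the paper's)

        \limsup_{T \to \infty} \frac{1}{T} \int_0^T \Phi(x(t))\, dt \;\le\; \sup_x \Big[ \Phi(x) + f(x) \cdot \nabla V(x) \Big],

    because the long-time average of f·∇V = d/dt V(x(t)) vanishes along bounded trajectories. Minimizing the right-hand side over V is the convex problem mentioned above, and for polynomial f and Φ it can be relaxed to a semidefinite program by restricting V to polynomials and replacing the supremum constraint with a sum-of-squares condition.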

  5. Risk of Death in Infants Who Have Experienced a Brief Resolved Unexplained Event: A Meta-Analysis.

    PubMed

    Brand, Donald A; Fazzari, Melissa J

    2018-06-01

    To estimate an upper bound on the risk of death after a brief resolved unexplained event (BRUE), a sudden alteration in an infant's breathing, color, tone, or responsiveness previously labeled "apparent life-threatening event" (ALTE). The meta-analysis incorporated observational studies of patients with ALTE that included data on in-hospital and post-discharge deaths with at least 1 week of follow-up after hospital discharge. Pertinent studies were identified from a published review of the literature from 1970 through 2014 and a supplementary PubMed query through February 2017. The 12 included studies (n = 3005) reported 12 deaths, of which 8 occurred within 4 months of the event. Applying a Poisson-normal random effects model to the 8 proximate deaths using a 4-month time horizon yielded a post-ALTE mortality rate of about 1 in 800, which constitutes an upper bound on the risk of death after a BRUE. This risk is about the same as the baseline risk of death during the first year of life. The meta-analysis therefore supports the return-home approach, rather than routine hospitalization, advocated in a recently published clinical practice guideline for BRUE patients who have been evaluated in the emergency department and determined to be at lower risk. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Stability results for multi-layer radial Hele-Shaw and porous media flows

    NASA Astrophysics Data System (ADS)

    Gin, Craig; Daripa, Prabir

    2015-01-01

    Motivated by stability problems arising in the context of chemical enhanced oil recovery, we perform linear stability analysis of Hele-Shaw and porous media flows in radial geometry involving an arbitrary number of immiscible fluids. Key stability results obtained and their relevance to the stabilization of fingering instability are discussed. Some of the key results, among many others, are (i) absolute upper bounds on the growth rate in terms of the problem data; (ii) validation of these upper bound results against exact computation for the case of three-layer flows; (iii) stability enhancing injection policies; (iv) asymptotic limits that reduce these radial flow results to similar results for rectilinear flows; and (v) the stabilizing effect of curvature of the interfaces. Multi-layer radial flows have been found to have the following additional distinguishing features in comparison to rectilinear flows: (i) very long waves, some of which can be physically meaningful, are stable; and (ii) eigenvalues can be complex for some waves depending on the problem data, implying that the dispersion curves for one or more waves can contact each other. Similar to the rectilinear case, these results can be useful in providing insight into the interfacial instability transfer mechanism as the problem data are varied. Moreover, these can be useful in devising smart injection policies as well as controlling the complexity of the long-term dynamics when drops of various immiscible fluids intersperse among each other. As an application of the upper bound results, we provide stabilization criteria and design an almost stable multi-layer system by adding many layers of fluid with small positive jumps in viscosity in the direction of the basic flow.

  7. Upper bounds on quantum uncertainty products and complexity measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrero, Angel; Sanchez-Moreno, Pablo; Dehesa, Jesus S.

    The position-momentum Shannon and Renyi uncertainty products of general quantum systems are shown to be bounded not only from below (through the known uncertainty relations), but also from above in terms of the Heisenberg-Kennard product. Moreover, the Cramer-Rao, Fisher-Shannon, and Lopez-Ruiz-Mancini-Calbet shape measures of complexity (whose lower bounds have been recently found) are also bounded from above. The improvement of these bounds for systems subject to spherically symmetric potentials is also explicitly given. Finally, applications to hydrogenic and oscillator-like systems are given.

  8. Minimum-error quantum distinguishability bounds from matrix monotone functions: A comment on 'Two-sided estimates of minimum-error distinguishability of mixed quantum states via generalized Holevo-Curlander bounds' [J. Math. Phys. 50, 032106 (2009)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyson, Jon

    2009-06-15

    Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.

  9. Exact Fundamental Limits of the First and Second Hyperpolarizabilities

    NASA Astrophysics Data System (ADS)

    Lytel, Rick; Mossman, Sean; Crowell, Ethan; Kuzyk, Mark G.

    2017-08-01

    Nonlinear optical interactions of light with materials originate in the microscopic response of the molecular constituents to excitation by an optical field, and are expressed by the first (β ) and second (γ ) hyperpolarizabilities. Upper bounds to these quantities were derived seventeen years ago using approximate, truncated state models that violated completeness and unitarity, and far exceed those achieved by potential optimization of analytical systems. This Letter determines the fundamental limits of the first and second hyperpolarizability tensors using Monte Carlo sampling of energy spectra and transition moments constrained by the diagonal Thomas-Reiche-Kuhn (TRK) sum rules and filtered by the off-diagonal TRK sum rules. The upper bounds of β and γ are determined from these quantities by applying error-refined extrapolation to perfect compliance with the sum rules. The method yields the largest diagonal component of the hyperpolarizabilities for an arbitrary number of interacting electrons in any number of dimensions. The new method provides design insight to the synthetic chemist and nanophysicist for approaching the limits. This analysis also reveals that the special cases which lead to divergent nonlinearities in the many-state catastrophe are not physically realizable.

  10. Performance Analysis of Amplify-and-Forward Systems with Single Relay Selection in Correlated Environments

    PubMed Central

    Nguyen, Binh Van; Kim, Kiseon

    2016-01-01

    In this paper, we consider amplify-and-forward (AnF) cooperative systems under correlated fading environments. We first present a brief overview of existing works on the effect of channel correlations on system performance. We then focus on our main contribution, which is the analysis of the outage probability of a multi-AnF-relay system with the best relay selection (BRS) scheme under the condition that the two channels of each relay, the source-relay and relay-destination channels, are correlated. Using lower and upper bounds on the end-to-end received signal-to-noise ratio (SNR) at the destination, we derive corresponding upper and lower bounds on the system outage probability. We prove that the system can achieve a diversity order (DO) equal to the number of relays. In addition, and importantly, we show that the considered correlation form has a constructive effect on the system performance; in other words, the larger the correlation coefficient, the better the system performance. Our analytic results are corroborated by extensive Monte Carlo simulations. PMID:27626426

  11. Bond additive modeling 10. Upper and lower bounds of bond incident degree indices of catacondensed fluoranthenes

    NASA Astrophysics Data System (ADS)

    Vukičević, Damir; Đurđević, Jelena

    2011-10-01

    Bond incident degree index is a descriptor that is calculated as the sum of the bond contributions such that each bond contribution depends solely on the degrees of its incident vertices (e.g. Randić index, Zagreb index, modified Zagreb index, variable Randić index, atom-bond connectivity index, augmented Zagreb index, sum-connectivity index, many Adriatic indices, and many variable Adriatic indices). In this Letter we find tight upper and lower bounds for bond incident degree index for catacondensed fluoranthenes with given number of hexagons.

  12. Beating the photon-number-splitting attack in practical quantum cryptography.

    PubMed

    Wang, Xiang-Bin

    2005-06-17

    We propose an efficient method to verify the upper bound of the fraction of counts caused by multiphoton pulses in practical quantum key distribution using weak coherent light, whatever the type of Eve's action. The protocol simply uses two coherent states for the signal pulses and vacuum for the decoy pulse. Our verified upper bound is sufficiently tight for quantum key distribution with a very lossy channel, in both the asymptotic and nonasymptotic cases. So far, our protocol is the only decoy-state protocol that works efficiently for currently existing setups.

  13. The local interstellar helium density - Corrected

    NASA Technical Reports Server (NTRS)

    Freeman, J.; Paresce, F.; Bowyer, S.

    1979-01-01

    An upper bound for the number density of neutral helium in the local interstellar medium of 0.004 ± 0.0022 per cubic centimeter was previously reported, based on extreme-ultraviolet telescope observations at 584 Å made during the 1975 Apollo-Soyuz Test Project. A variety of evidence is found which indicates that the 584-Å sensitivity of the instrument declined by a factor of 2 between the last laboratory calibration and the time of the measurements. The upper bound on the helium density is therefore revised to 0.0089 ± 0.005 per cubic centimeter.

  14. Upper bound on three-tangles of reduced states of four-qubit pure states

    NASA Astrophysics Data System (ADS)

    Sharma, S. Shelly; Sharma, N. K.

    2017-06-01

    Closed formulas for upper bounds on three-tangles of three-qubit reduced states in terms of three-qubit-invariant polynomials of pure four-qubit states are obtained. Our results offer tighter constraints on total three-way entanglement of a given qubit with the rest of the system than those used by Regula et al. [Phys. Rev. Lett. 113, 110501 (2014), 10.1103/PhysRevLett.113.110501 and Phys. Rev. Lett. 116, 049902(E) (2016)], 10.1103/PhysRevLett.116.049902 to verify monogamy of four-qubit quantum entanglement.

  15. A case study to quantify prediction bounds caused by model-form uncertainty of a portal frame

    NASA Astrophysics Data System (ADS)

    Van Buren, Kendra L.; Hall, Thomas M.; Gonzales, Lindsey M.; Hemez, François M.; Anton, Steven R.

    2015-01-01

    Numerical simulations, irrespective of the discipline or application, are often plagued by arbitrary numerical and modeling choices. Arbitrary choices can originate from kinematic assumptions, for example the use of 1D beam, 2D shell, or 3D continuum elements, mesh discretization choices, boundary condition models, and the representation of contact and friction in the simulation. This work takes a step toward understanding the effect of arbitrary choices and model-form assumptions on the accuracy of numerical predictions. The application is the simulation of the first four resonant frequencies of a one-story aluminum portal frame structure under free-free boundary conditions. The main challenge of the portal frame structure resides in modeling the joint connections, for which different modeling assumptions are available. To study this model-form uncertainty, and compare it to other types of uncertainty, two finite element models are developed using solid elements, and with differing representations of the beam-to-column and column-to-base plate connections: (i) contact stiffness coefficients or (ii) tied nodes. Test-analysis correlation is performed to compare the lower and upper bounds of numerical predictions obtained from parametric studies of the joint modeling strategies to the range of experimentally obtained natural frequencies. The approach proposed is, first, to characterize the experimental variability of the joints by varying the bolt torque, method of bolt tightening, and the sequence in which the bolts are tightened. The second step is to convert what is learned from these experimental studies to models that "envelope" the range of observed bolt behavior. We show that this approach, that combines small-scale experiments, sensitivity analysis studies, and bounding-case models, successfully produces lower and upper bounds of resonant frequency predictions that match those measured experimentally on the frame structure. (Approved for unlimited, public release, LA-UR-13-27561).

  16. Planck limits on non-canonical generalizations of large-field inflation models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Nina K.; Kinney, William H., E-mail: ninastei@buffalo.edu, E-mail: whkinney@buffalo.edu

    2017-04-01

    In this paper, we consider two case examples of Dirac-Born-Infeld (DBI) generalizations of canonical large-field inflation models, characterized by a reduced sound speed, c_S < 1. The reduced speed of sound lowers the tensor-scalar ratio, improving the fit of the models to the data, but increases the equilateral-mode non-Gaussianity, f^equil_NL, which the latest results from the Planck satellite constrain by a new upper bound. We examine constraints on these models in light of the most recent Planck and BICEP/Keck results, and find that they have a greatly decreased window of viability. The upper bound on f^equil_NL corresponds to a lower bound on the sound speed and a corresponding lower bound on the tensor-scalar ratio of r ∼ 0.01, so that near-future Cosmic Microwave Background observations may be capable of ruling out entire classes of DBI inflation models. The result is, however, not universal: infrared-type DBI inflation models, where the speed of sound increases with time, are not subject to the bound.

  17. Rapidly assessing the probability of exceptionally high natural hazard losses

    NASA Astrophysics Data System (ADS)

    Gollini, Isabella; Rougier, Jonathan

    2014-05-01

    One of the objectives in catastrophe modeling is to assess the probability distribution of losses for a specified period, such as a year. From the point of view of an insurance company, the whole of the loss distribution is interesting, and valuable in determining insurance premiums. But the shape of the right-hand tail is critical, because it impinges on the solvency of the company. A simple measure of the risk of insolvency is the probability that the annual loss will exceed the company's current operating capital. Imposing an upper limit on this probability is one of the objectives of the EU Solvency II directive. If a probabilistic model is supplied for the loss process, then this tail probability can be computed, either directly, or by simulation. This can be a lengthy calculation for complex losses. Given the inevitably subjective nature of quantifying loss distributions, computational resources might be better used in a sensitivity analysis. This requires either a quick approximation to the tail probability or an upper bound on the probability, ideally a tight one. We present several different bounds, all of which can be computed nearly instantly from a very general event loss table. We provide a numerical illustration, and discuss the conditions under which the bound is tight. Although we consider the perspective of insurance and reinsurance companies, exactly the same issues concern the risk manager, who is typically very sensitive to large losses.
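
    One very simple bound of this kind, computable instantly from an event loss table, is Markov's inequality applied to the annual loss. It is shown here only as an illustration of the idea; the table entries, names, and the compound-Poisson assumption below are hypothetical rather than taken from the paper, and the authors' bounds are tighter.

        import numpy as np

        # Hypothetical event loss table: annual occurrence rate and loss per event.
        rates  = np.array([0.02, 0.01, 0.005, 0.001])   # events per year
        losses = np.array([5e6, 2e7, 8e7, 3e8])          # loss if the event occurs

        capital = 1e8   # current operating capital

        # Under a compound-Poisson model the mean annual loss is sum(rate * loss).
        mean_annual_loss = np.sum(rates * losses)

        # Markov's inequality: P(annual loss > capital) <= E[annual loss] / capital.
        markov_bound = mean_annual_loss / capital
        print(f"Markov upper bound on the insolvency probability: {markov_bound:.3f}")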

  18. Lymphatic Mapping and Sentinel Lymph Node Biopsy in Women With Squamous Cell Carcinoma of the Vulva: A Gynecologic Oncology Group Study

    PubMed Central

    Levenback, Charles F.; Ali, Shamshad; Coleman, Robert L.; Gold, Michael A.; Fowler, Jeffrey M.; Judson, Patricia L.; Bell, Maria C.; De Geest, Koen; Spirtos, Nick M.; Potkul, Ronald K.; Leitao, Mario M.; Bakkum-Gamez, Jamie N.; Rossi, Emma C.; Lentz, Samuel S.; Burke, James J.; Van Le, Linda; Trimble, Cornelia L.

    2012-01-01

    Purpose To determine the safety of sentinel lymph node biopsy as a replacement for inguinal femoral lymphadenectomy in selected women with vulvar cancer. Patients and Methods Eligible women had squamous cell carcinoma, at least 1-mm invasion, and tumor size ≥ 2 cm and ≤ 6 cm. The primary tumor was limited to the vulva, and there were no groin lymph nodes that were clinically suggestive of cancer. All women underwent intraoperative lymphatic mapping, sentinel lymph node biopsy, and inguinal femoral lymphadenectomy. Histologic ultrastaging of the sentinel lymph node was prescribed. Results In all, 452 women underwent the planned procedures, and 418 had at least one sentinel lymph node identified. There were 132 node-positive women, including 11 (8.3%) with false-negative nodes. Twenty-three percent of the true-positive patients were detected by immunohistochemical analysis of the sentinel lymph node. The sensitivity was 91.7% (90% lower confidence bound, 86.7%) and the false-negative predictive value (1-negative predictive value) was 3.7% (90% upper confidence bound, 6.1%). In women with tumor less than 4 cm, the false-negative predictive value was 2.0% (90% upper confidence bound, 4.5%). Conclusion Sentinel lymph node biopsy is a reasonable alternative to inguinal femoral lymphadenectomy in selected women with squamous cell carcinoma of the vulva. PMID:22753905
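
    As a quick consistency check, the reported operating characteristics follow from the counts given above, taking the 418 women with an identified sentinel node as the analysis population:

        \text{sensitivity} = \frac{132 - 11}{132} = \frac{121}{132} \approx 91.7\%, \qquad
        \text{false-negative predictive value} = \frac{11}{11 + (418 - 132)} = \frac{11}{297} \approx 3.7\%.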

  19. Circuit bounds on stochastic transport in the Lorenz equations

    NASA Astrophysics Data System (ADS)

    Weady, Scott; Agarwal, Sahil; Wilen, Larry; Wettlaufer, J. S.

    2018-07-01

    In turbulent Rayleigh-Bénard convection one seeks the relationship between the heat transport, captured by the Nusselt number, and the temperature drop across the convecting layer, captured by the Rayleigh number. In experiments, one measures the Nusselt number for a given Rayleigh number, and the question of how close that value is to the maximal transport is a key prediction of variational fluid mechanics in the form of an upper bound. The Lorenz equations have traditionally been studied as a simplified model of turbulent Rayleigh-Bénard convection, and hence it is natural to investigate their upper bounds, which has previously been done numerically and analytically, but they are not as easily accessible in an experimental context. Here we describe a specially built circuit that is the experimental analogue of the Lorenz equations and compare its output to the recently determined upper bounds of the stochastic Lorenz equations [1]. The circuit is substantially more efficient than computational solutions, and hence we can more easily examine the system. Because of offsets that appear naturally in the circuit, we are motivated to study unique bifurcation phenomena that arise as a result. Namely, for a given Rayleigh number, we find a reentrant behavior of the transport on noise amplitude and this varies with Rayleigh number passing from the homoclinic to the Hopf bifurcation.

  20. Energy Bounds for a Compressed Elastic Film on a Substrate

    NASA Astrophysics Data System (ADS)

    Bourne, David P.; Conti, Sergio; Müller, Stefan

    2017-04-01

    We study pattern formation in a compressed elastic film which delaminates from a substrate. Our key tool is the determination of rigorous upper and lower bounds on the minimum value of a suitable energy functional. The energy consists of two parts, describing the two main physical effects. The first part represents the elastic energy of the film, which is approximated using the von Kármán plate theory. The second part represents the fracture or delamination energy, which is approximated using the Griffith model of fracture. A simpler model containing the first term alone was previously studied with similar methods by several authors, assuming that the delaminated region is fixed. We include the fracture term, transforming the elastic minimisation into a free boundary problem, and opening the way for patterns which result from the interplay of elasticity and delamination. After rescaling, the energy depends on only two parameters: the rescaled film thickness, σ, and a measure of the bonding strength between the film and substrate, γ. We prove upper bounds on the minimum energy of the form σ^a γ^b and find that there are four different parameter regimes corresponding to different values of a and b and to different folding patterns of the film. In some cases, the upper bounds are attained by self-similar folding patterns as observed in experiments. Moreover, for two of the four parameter regimes we prove matching, optimal lower bounds.

  1. Parameter Transient Behavior Analysis on Fault Tolerant Control System

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine (Technical Monitor); Shin, Jong-Yeob

    2003-01-01

    In a fault tolerant control (FTC) system, a parameter varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. This paper illustrates analysis of a FTC system based on estimated fault parameter transient behavior which may include false fault detections during a short time interval. Using Lyapunov function analysis, the upper bound of an induced-L2 norm of the FTC system performance is calculated as a function of a fault detection time and the exponential decay rate of the Lyapunov function.

  2. Saturn's very axisymmetric magnetic field: No detectable secular variation or tilt

    NASA Astrophysics Data System (ADS)

    Cao, Hao; Russell, Christopher T.; Christensen, Ulrich R.; Dougherty, Michele K.; Burton, Marcia E.

    2011-04-01

    Saturn is the only planet in the solar system whose observed magnetic field is highly axisymmetric. At least a small deviation from perfect symmetry is required for a dynamo-generated magnetic field. Analyzing more than six years of magnetometer data obtained by Cassini close to the planet, we show that Saturn's observed field is much more axisymmetric than previously thought. We invert the magnetometer observations that were obtained in the "current-free" inner magnetosphere for an internal model, varying the assumed unknown rotation rate of Saturn's deep interior. No unambiguous non-axially symmetric magnetic moment is detected, with a new upper bound on the dipole tilt of 0.06°. An axisymmetric internal model with Schmidt-normalized spherical harmonic coefficients g10 = 21,191 ± 24 nT, g20 = 1586 ± 7 nT, and g30 = 2374 ± 47 nT is derived from these measurements; the upper bounds on the axial degree 4 and 5 terms are 720 nT and 3200 nT, respectively. The secular variation for the last 30 years is within the probable error of each term from degree 1 to 3, and the upper bounds are an order of magnitude smaller than in similar terrestrial terms for degrees 1 and 2. Differentially rotating conducting stable layers above Saturn's dynamo region have been proposed to symmetrize the magnetic field (Stevenson, 1982). The new upper bound on the dipole tilt implies that this stable layer must have a thickness L ≥ 4000 km, and this thickness is consistent with our weak secular variation observations.

  3. Direct detection of light ''Ge-phobic'' exothermic dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelmini, Graciela B.; Georgescu, Andreea; Huh, Ji-Haeng, E-mail: gelmini@physics.ucla.edu, E-mail: a.georgescu@physics.ucla.edu, E-mail: jhhuh@physics.ucla.edu

    2014-07-01

    We present comparisons of direct dark matter (DM) detection data for light WIMPs with exothermic scattering off nuclei (exoDM), both assuming the Standard Halo Model (SHM) and in a halo-model-independent manner. Exothermic interactions favor light targets, thus reducing the importance of upper limits derived from xenon targets, the most restrictive of which is at present the LUX limit. In our SHM analysis the CDMS-II-Si and CoGeNT regions become allowed by these bounds; however, the recent SuperCDMS limit rejects both regions for exoDM with isospin-conserving couplings. An isospin-violating coupling of the exoDM, in particular one with a neutron-to-proton coupling ratio of -0.8 (which we call "Ge-phobic"), maximally reduces the DM coupling to germanium and allows the CDMS-II-Si region to become compatible with all bounds. This is also clearly shown in our halo-independent analysis.

  4. Biodegradation kinetics for pesticide exposure assessment.

    PubMed

    Wolt, J D; Nelson, H P; Cleveland, C B; van Wesenbeeck, I J

    2001-01-01

    Understanding pesticide risks requires characterizing pesticide exposure within the environment in a manner that can be broadly generalized across widely varied conditions of use. The coupled processes of sorption and soil degradation are especially important for understanding the potential environmental exposure of pesticides. The data obtained from degradation studies are inherently variable and, when limited in extent, lend uncertainty to exposure characterization and risk assessment. Pesticide decline in soils reflects dynamically coupled processes of sorption and degradation that add complexity to the treatment of soil biodegradation data from a kinetic perspective. Additional complexity arises from study design limitations that may not fully account for the decline in microbial activity of test systems, or that may be inadequate for considerations of all potential dissipation routes for a given pesticide. Accordingly, kinetic treatment of data must accommodate a variety of differing approaches starting with very simple assumptions as to reaction dynamics and extending to more involved treatments if warranted by the available experimental data. Selection of the appropriate kinetic model to describe pesticide degradation should rely on statistical evaluation of the data fit to ensure that the models used are not overparameterized. Recognizing the effects of experimental conditions and methods for kinetic treatment of degradation data is critical for making appropriate comparisons among pesticide biodegradation data sets. Assessment of variability in soil half-life among soils is uncertain because for many pesticides the data on soil degradation rate are limited to one or two soils. Reasonable upper-bound estimates of soil half-life are necessary in risk assessment so that estimated environmental concentrations can be developed from exposure models. Thus, an understanding of the variable and uncertain distribution of soil half-lives in the environment is necessary to estimate bounding values. Statistical evaluation of measures of central tendency for multisoil kinetic studies shows that geometric means better represent the distribution in soil half-lives than do the arithmetic or harmonic means. Estimates of upper-bound soil half-life values based on the upper 90% confidence bound on the geometric mean tend to accurately represent the upper bound when pesticide degradation rate is biologically driven but appear to overestimate the upper bound when there is extensive coupling of biodegradation with sorptive processes. The limited data available comparing distribution in pesticide soil half-lives between multisoil laboratory studies and multilocation field studies suggest that the probability density functions are similar. Thus, upper-bound estimates of pesticide half-life determined from laboratory studies conservatively represent pesticide biodegradation in the field environment for the purposes of exposure and risk assessment. International guidelines and approaches used for interpretations of soil biodegradation reflect many common elements, but differ in how the source and nature of variability in soil kinetic data are considered. Harmonization of approaches for the use of soil biodegradation data will improve the interpretative power of these data for the purposes of exposure and risk assessment.
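
    A minimal sketch of the upper-bound estimate described above, assuming half-lives are treated as log-normally distributed so that the confidence bound is computed on the log scale; the data values are hypothetical and the exact bound definition used in regulatory practice may differ.

        import numpy as np
        from scipy import stats

        # Hypothetical soil half-lives (days) measured in several soils.
        half_lives = np.array([12.0, 20.0, 35.0, 18.0, 27.0])

        logs = np.log(half_lives)
        n = len(logs)

        geo_mean = np.exp(logs.mean())

        # One-sided upper 90% confidence bound on the geometric mean,
        # using Student's t on the log scale.
        t90 = stats.t.ppf(0.90, df=n - 1)
        upper90 = np.exp(logs.mean() + t90 * logs.std(ddof=1) / np.sqrt(n))

        print(f"geometric mean half-life:   {geo_mean:.1f} days")
        print(f"upper 90% confidence bound: {upper90:.1f} days")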

  5. Jarzynski equality: connections to thermodynamics and the second law.

    PubMed

    Palmieri, Benoit; Ronis, David

    2007-01-01

    The one-dimensional expanding ideal gas model is used to compute the exact nonequilibrium distribution function. The state of the system during the expansion is defined in terms of local thermodynamics quantities. The final equilibrium free energy, obtained a long time after the expansion, is compared against the free energy that appears in the Jarzynski equality. Within this model, where the Jarzynski equality holds rigorously, the free energy change that appears in the equality does not equal the actual free energy change of the system at any time of the process. More generally, the work bound that is obtained from the Jarzynski equality is an upper bound to the upper bound that is obtained from the first and second laws of thermodynamics. The cancellation of the dissipative (nonequilibrium) terms that result in the Jarzynski equality is shown in the framework of response theory. This is used to show that the intuitive assumption that the Jarzynski work bound becomes equal to the average work done when the system evolves quasistatically is incorrect under some conditions.
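
    For reference, the Jarzynski equality and the work bound it implies via Jensen's inequality are, in standard notation,

        \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}
        \quad\Longrightarrow\quad
        \langle W \rangle \ge \Delta F ,

    since e^{-β⟨W⟩} ≤ ⟨e^{-βW}⟩. The point made above is that, in this model, the ΔF appearing in the equality need not coincide with the actual free-energy change of the system at any time during the process.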

  6. More on the decoder error probability for Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1987-01-01

    The decoder error probability for Reed-Solomon codes (more generally, linear maximum distance separable codes) is examined. McEliece and Swanson offered an upper bound on P_E(u), the decoder error probability given that u symbol errors occur. This upper bound is slightly greater than Q, the probability that a completely random error pattern will cause decoder error. By using a combinatoric technique, the principle of inclusion and exclusion, an exact formula for P_E(u) is derived. The P_E(u) for the (255, 223) Reed-Solomon code used by NASA, and for the (31, 15) Reed-Solomon code (JTIDS code), are calculated using the exact formula, and the P_E(u)'s are observed to approach the Q's of the codes rapidly as u gets larger. An upper bound for the expression is derived and is shown to decrease nearly exponentially as u increases. This proves analytically that P_E(u) indeed approaches Q as u becomes large, and some laws of large numbers come into play.
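
    Q itself is easy to evaluate. For a t-error-correcting (n, k) code over GF(q) with minimum distance d = n - k + 1 (so t = ⌊(d - 1)/2⌋), the decoding spheres of radius t are disjoint, and the probability that a uniformly random word falls inside some sphere is Q = q^-(n-k) Σ_{s=0}^{t} C(n, s)(q - 1)^s, up to the negligible contribution of the transmitted codeword's own sphere. A short sketch evaluating this for the two codes mentioned above (the formula is the standard sphere-counting expression, not reproduced from the paper):

        from math import comb

        def Q(n, k, q):
            # Q = q^-(n-k) * sum_{s=0}^{t} C(n, s) * (q-1)^s, with t = (n - k) // 2.
            t = (n - k) // 2
            vol = sum(comb(n, s) * (q - 1) ** s for s in range(t + 1))
            return vol / q ** (n - k)

        print("Q for the (255, 223) Reed-Solomon code over GF(256):", Q(255, 223, 256))
        print("Q for the (31, 15) Reed-Solomon code over GF(32):  ", Q(31, 15, 32))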

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Toomey, Bridget

    Evolving power systems with increasing levels of stochasticity call for methods to solve optimal power flow problems with large numbers of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve efficiently. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
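
    The baseline Boole (union) bound that the tighter result improves upon can be written, in generic notation not specific to this paper, as

        P\Big( \bigcup_{i=1}^{m} \{ g_i(x,\xi) > 0 \} \Big) \;\le\; \sum_{i=1}^{m} P\big( g_i(x,\xi) > 0 \big),

    so enforcing each single chance constraint P(g_i(x, ξ) > 0) ≤ ε_i with Σ_i ε_i = ε guarantees the joint chance constraint at level ε, generally at the cost of conservatism.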

  8. Lorenz curves in a new science-funding model

    NASA Astrophysics Data System (ADS)

    Huang, Ding-wei

    2017-12-01

    We propose an agent-based model to theoretically and systematically explore the implications of a new approach to fund science, which has been suggested recently by J. Bollen et al. We introduce various parameters and examine their effects. The concentration of funding is shown by the Lorenz curve and the Gini coefficient. In this model, all scientists are treated equally and follow the well-intended regulations. All scientists give a fixed ratio of their funding to others. The fixed ratio becomes an upper bound for the Gini coefficient. We observe two distinct regimes in the parameter space: valley and plateau. In the valley regime, the fluidity of funding is significant. The Lorenz curve is smooth. The Gini coefficient is well below the upper bound. The funding distribution is the desired result. In the plateau regime, the cumulative advantage is significant. The Lorenz curve has a sharp turn. The Gini coefficient saturates to the upper bound. The undue concentration of funding happens swiftly. The funding distribution is the undesired result, where a minority of scientists take the majority of funding. Phase transitions between these two regimes are discussed.
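
    A minimal sketch of the two summary statistics used above, applied to an arbitrary funding vector; the funding data below are hypothetical and this is not the authors' model code.

        import numpy as np

        def lorenz_curve(funding):
            # Cumulative share of total funding held by the poorest fractions of scientists.
            x = np.sort(np.asarray(funding, dtype=float))
            cum = np.cumsum(x) / x.sum()
            return np.insert(cum, 0, 0.0)   # curve starts at (0, 0)

        def gini(funding):
            # Standard formula: G = 2 * sum_i i * x_(i) / (n * sum_i x_i) - (n + 1) / n,
            # with x_(i) sorted in ascending order and i = 1..n.
            x = np.sort(np.asarray(funding, dtype=float))
            n = len(x)
            i = np.arange(1, n + 1)
            return 2.0 * np.sum(i * x) / (n * np.sum(x)) - (n + 1) / n

        funding = np.random.default_rng(1).pareto(3.0, size=1000)  # hypothetical distribution
        print(f"Gini coefficient: {gini(funding):.3f}")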

  9. Expected performance of m-solution backtracking

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.

    1986-01-01

    This paper derives upper bounds on the expected number of search tree nodes visited during an m-solution backtracking search, a search which terminates after some preselected number m of problem solutions is found. The search behavior is assumed to have a general probabilistic structure. The results are stated in terms of node expansion and contraction. A visited search tree node is said to be expanding if the mean number of its children visited by the search exceeds 1 and contracting otherwise. It is shown that if every node expands, or if every node contracts, then the number of search tree nodes visited by a search has an upper bound which is linear in the depth of the tree, in the mean number of children a node has, and in the number of solutions sought. Also derived are bounds linear in the depth of the tree in some situations where an upper portion of the tree contracts (expands), while the lower portion expands (contracts). While previous analyses of 1-solution backtracking have concluded that the expected performance is always linear in the tree depth, the present model allows superlinear expected performance.

  10. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1990-01-01

    An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.

  11. Geochemical Fractionations and Mobility of Arsenic, Lead and Cadmium in Sediments of the Kanto Plain, Japan.

    NASA Astrophysics Data System (ADS)

    Hossain, Sushmita; Oguchi, Chiaki T.; Hachinohe, Shoichi; Ishiyama, Takashi; Hamamoto, Hideki

    2014-05-01

    Lowland alluvial and floodplain sediments play a major role in transferring heavy metals and other elements to groundwater through sediment-water interaction under changing environmental conditions. Identification of the geochemical forms of toxic elements such as arsenic (As), lead (Pb), and cadmium (Cd) is therefore required for risk assessment of sediment and subsequent groundwater pollution. A four-step sequential extraction procedure was applied to characterize the geochemical fractionations of As, Pb, and Cd for 44 sediment samples, including one peat sample, from the middle basin area of the Nakagawa River in the central Kanto plain. The studied sediment profile extended from the bottom of the river to 44 m depth; sediment samples were collected at 1 m intervals from a bored core. The sedimentary facies in the vertical profile are continental, transitional, and marine. There are two aquifers in the vertical profile; the upper aquifer (15-20 m) contains fine to medium sand, whereas the lower aquifer (37-44 m) contains medium to coarse sand and gravelly sand. The total As and Pb contents, measured by X-ray fluorescence analysis, ranged from 4 to 23 mg/kg for As and 10 to 27 mg/kg for Pb in the sediment profile. The three trace elements and major heavy metals were determined by ICP/MS and ICP/AES, and major ions were measured by ion chromatography. The marine sediment is mainly of Ca-SO4 type. The geochemical analysis showed the order of mobility to be As > Pb > Cd for all extraction steps. The geochemical fractionation order was Fe-Mn oxide bound > carbonate bound > ion exchangeable > water soluble for As and Pb, whereas the order for Cd was carbonate bound > Fe-Mn oxide bound > ion exchangeable > water soluble. The mobility of Pb and Cd was higher in fine silty sediments of the marine environment than in those from continental and transitional environments. In the case of As, the potential mobility is very high (>60%) in the riverbed sediments and in the clayey silt sediment at 13 m depth, which lies just above the upper aquifer. This potential mobility may pose a threat to the upper aquifer and the riverbed aquatic system. The overall geochemical analysis revealed that dissolution of Fe-Mn oxides is the most effective mobilization mechanism for As and Pb in groundwater, whereas the mobility of Cd is mainly carbonate bound. In the present study, the pollution level is well below the leaching environmental standard (0.01 mg/L) for all three elements, and the total content is within the natural abundance of As, Pb, and Cd in sediment. The potential mobility of these elements in oxidized fine silty sediment and the possible further effect on the aquifer suggest that shallow groundwater abstraction should be restricted to limit seasonal groundwater fluctuation. Moreover, marine sediments containing high total toxic element contents and showing high mobility under changing oxidation-reduction conditions require proper management when excavated for construction purposes.

  12. Experimental and Numerical Analysis of Axially Compressed Circular Cylindrical Fiber-Reinforced Panels with Various Boundary Conditions.

    DTIC Science & Technology

    1981-10-01

    Numerical predictions used in the comparisons were obtained from the energy-based, finite-difference computer program CLAPP. Test specimens were clamped along the edges. ... It is shown that theoretical bifurcation loads predicted by the energy method represent upper bounds to the classical bifurcation loads associated with the test.

  13. Variability in sinking fluxes and composition of particle-bound phosphorus in the Xisha area of the northern South China Sea

    NASA Astrophysics Data System (ADS)

    Dong, Yuan; Li, Qian P.; Wu, Zhengchao; Zhang, Jia-Zhong

    2016-12-01

    Export fluxes of phosphorus (P) by sinking particles are important in studying ocean biogeochemical dynamics, whereas their composition and temporal variability are still inadequately understood in the global oceans, including the northern South China Sea (NSCS). A time-series study of particle fluxes was conducted at a mooring station adjacent to the Xisha Trough in the NSCS from September 2012 to September 2014, with sinking particles collected every two weeks by two sediment traps deployed at 500 m and 1500 m depths. Five operationally defined particulate P classes of sinking particles, including loosely-bound P, Fe-bound P, CaCO3-bound P, detrital apatite P, and refractory organic P, were quantified by a sequential extraction method (SEDEX). Our results revealed substantial variability in sinking particulate P composition at the Xisha site over two years of sampling. Particulate inorganic P was largely contributed by Fe-bound P in the upper trap, but by detrital P in the lower trap. Particulate organic P, including exchangeable organic P, CaCO3-bound organic P, and refractory organic P, contributed up to 50-55% of total sinking particulate P. The increase of CaCO3-bound P in the upper trap during 2014 could be related to a strong El Niño event with enhanced CaCO3 deposition. We also found sediment resuspension responsible for the unusually high particle fluxes at the lower trap, based on analyses with a two-component mixing model. There was on average a total mass flux of 78±50 mg m^-2 d^-1 at the upper trap during the study period. A significant correlation between integrated primary productivity in the region and particle fluxes at 500 m of the station suggested the important role of biological production in controlling the concentration, composition, and export fluxes of sinking particulate P in the NSCS.

  14. Bounds and inequalities relating h-index, g-index, e-index and generalized impact factor: an improvement over existing models.

    PubMed

    Abbas, Ash Mohammad

    2012-01-01

    In this paper, we describe some bounds and inequalities relating the h-index, g-index, e-index, and generalized impact factor. We derive the bounds and inequalities relating these indexing parameters from their basic definitions and without assuming any continuous model to be followed by any of them. We verify the theorems using citation data for five Price Medalists. We observe that the lower bound for the h-index given by Theorem 2, [formula: see text], g ≥ 1, comes out to be more accurate as compared to the Schubert-Glanzel relation h ∝ C^(2/3) P^(-1/3) for a proportionality constant of 1, where C is the number of citations and P is the number of papers referenced. Also, the values of the h-index obtained using Theorem 2 outperform those obtained using the Egghe-Liang-Rousseau power law model for the given citation data of Price Medalists. Further, we computed the values of the upper bound on the g-index given by Theorem 3, g ≤ (h + e), where e denotes the value of the e-index. We observe that the upper bound on the g-index given by Theorem 3 is reasonably tight for the given citation record of Price Medalists.
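
    A short sketch that computes the three indices for a citation record and checks the inequality g ≤ h + e stated as Theorem 3 above; the citation counts are hypothetical and the index definitions are the usual ones rather than anything specific to the paper.

        import math

        def h_index(c):
            # Largest h such that h papers have at least h citations each.
            c = sorted(c, reverse=True)
            return sum(1 for i, ci in enumerate(c, start=1) if ci >= i)

        def g_index(c):
            # Largest g such that the g most cited papers together have >= g^2 citations
            # (the list is padded with zero-cited papers, so g may exceed its length).
            c = sorted(c, reverse=True)
            total, g = 0, 0
            for i in range(1, len(c) + int(math.isqrt(sum(c))) + 2):
                total += c[i - 1] if i <= len(c) else 0
                if total >= i * i:
                    g = i
            return g

        def e_index(c):
            # e^2 = citations in the h-core in excess of the h^2 accounted for by h.
            c = sorted(c, reverse=True)
            h = h_index(c)
            return math.sqrt(sum(c[:h]) - h * h)

        citations = [45, 33, 30, 12, 9, 8, 4, 2, 1, 0]   # hypothetical record
        h, g, e = h_index(citations), g_index(citations), e_index(citations)
        print(f"h = {h}, g = {g}, e = {e:.2f}, g <= h + e: {g <= h + e}")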

  15. Reverse preferential spread in complex networks

    NASA Astrophysics Data System (ADS)

    Toyoizumi, Hiroshi; Tani, Seiichi; Miyoshi, Naoto; Okamoto, Yoshio

    2012-08-01

    Large-degree nodes may have a larger influence on the network, but they can be bottlenecks for spreading information, since spreading attempts tend to concentrate on these nodes and become redundant. We argue that the reverse preferential spread (distributing information inversely proportional to the degree of the receiving node) has an advantage over other spread mechanisms. In large uncorrelated networks, we show that the mean number of nodes that receive information under the reverse preferential spread is an upper bound over all weight-based spread mechanisms, and this upper bound is indeed a logistic growth independent of the degree distribution.
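
    A minimal sketch of the selection rule described above: each spreading attempt from an informed node targets a neighbor with probability inversely proportional to that neighbor's degree. The graph, the number of attempts per node, and all names are hypothetical illustrations, not the authors' simulation.

        import random
        import networkx as nx

        def reverse_preferential_targets(G, source, attempts, rng):
            # Pick neighbors with probability proportional to 1/degree.
            nbrs = list(G.neighbors(source))
            weights = [1.0 / G.degree(v) for v in nbrs]
            return rng.choices(nbrs, weights=weights, k=attempts)

        rng = random.Random(0)
        G = nx.barabasi_albert_graph(1000, 3, seed=0)   # hypothetical scale-free network

        informed = {0}
        frontier = [0]
        while frontier:
            node = frontier.pop()
            for target in reverse_preferential_targets(G, node, attempts=2, rng=rng):
                if target not in informed:
                    informed.add(target)
                    frontier.append(target)

        print("nodes reached by reverse preferential spread:", len(informed))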

  16. A note on the upper bound of the spectral radius for SOR iteration matrix

    NASA Astrophysics Data System (ADS)

    Chang, Da-Wei

    2004-05-01

    Recently, Wang and Huang (J. Comput. Appl. Math. 135 (2001) 325, Corollary 4.7) established the following estimate of the upper bound of the spectral radius for the successive overrelaxation (SOR) iteration matrix: ρ_SOR ≤ 1 - ω + ω ρ_GS, under the condition that the coefficient matrix A is a nonsingular M-matrix and ω ≥ 1, where ρ_SOR and ρ_GS are the spectral radii of the SOR iteration matrix and the Gauss-Seidel iteration matrix, respectively. In this note, we would like to point out that the above estimate is not valid in general.
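
    The quantities in the estimate are easy to compute for a particular matrix, so the two sides can be compared directly. The splitting formulas below are the standard definitions of the Gauss-Seidel and SOR iteration matrices, and the test matrix is an arbitrary nonsingular M-matrix chosen here for illustration, not an example from the note.

        import numpy as np

        def iteration_matrices(A, omega):
            # Splitting A = D - L - U (D diagonal, L strictly lower, U strictly upper).
            D = np.diag(np.diag(A))
            L = -np.tril(A, -1)
            U = -np.triu(A, 1)
            GS  = np.linalg.solve(D - L, U)                                   # Gauss-Seidel
            SOR = np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)
            return GS, SOR

        def spectral_radius(M):
            return max(abs(np.linalg.eigvals(M)))

        A = np.array([[ 4.0, -1.0,  0.0],
                      [-2.0,  5.0, -1.0],
                      [ 0.0, -3.0,  6.0]])   # strictly diagonally dominant Z-matrix
        omega = 1.2

        GS, SOR = iteration_matrices(A, omega)
        rho_gs, rho_sor = spectral_radius(GS), spectral_radius(SOR)
        print(f"rho_GS  = {rho_gs:.4f}")
        print(f"rho_SOR = {rho_sor:.4f}   vs claimed bound 1 - w + w*rho_GS = {1 - omega + omega * rho_gs:.4f}")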

  17. Measuring Integrated Information from the Decoding Perspective

    PubMed Central

    Oizumi, Masafumi; Amari, Shun-ichi; Yanagawa, Toru; Fujii, Naotaka; Tsuchiya, Naotsugu

    2016-01-01

    Accumulating evidence indicates that the capacity to integrate information in the brain is a prerequisite for consciousness. Integrated Information Theory (IIT) of consciousness provides a mathematical approach to quantifying the information integrated in a system, called integrated information, Φ. Integrated information is defined theoretically as the amount of information a system generates as a whole, above and beyond the amount of information its parts independently generate. IIT predicts that the amount of integrated information in the brain should reflect levels of consciousness. Empirical evaluation of this theory requires computing integrated information from neural data acquired from experiments, although difficulties with using the original measure Φ precludes such computations. Although some practical measures have been previously proposed, we found that these measures fail to satisfy the theoretical requirements as a measure of integrated information. Measures of integrated information should satisfy the lower and upper bounds as follows: The lower bound of integrated information should be 0 and is equal to 0 when the system does not generate information (no information) or when the system comprises independent parts (no integration). The upper bound of integrated information is the amount of information generated by the whole system. Here we derive the novel practical measure Φ* by introducing a concept of mismatched decoding developed from information theory. We show that Φ* is properly bounded from below and above, as required, as a measure of integrated information. We derive the analytical expression of Φ* under the Gaussian assumption, which makes it readily applicable to experimental data. Our novel measure Φ* can generally be used as a measure of integrated information in research on consciousness, and also as a tool for network analysis on diverse areas of biology. PMID:26796119

  18. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  19. Efficient Regressions via Optimally Combining Quantile Information*

    PubMed Central

    Zhao, Zhibiao; Xiao, Zhijie

    2014-01-01

    We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481

  20. Toward allocative efficiency in the prescription drug industry.

    PubMed

    Guell, R C; Fischbaum, M

    1995-01-01

    Traditionally, monopoly power in the pharmaceutical industry has been measured by profits. An alternative method estimates the deadweight loss of consumer surplus associated with the exercise of monopoly power. Although upper and lower bound estimates of this inefficiency are far apart, they at least suggest a dramatically greater welfare loss than measures of industry profitability would imply. A proposed system would have the U.S. government employing its power of eminent domain to "take" and distribute pharmaceutical patents, providing as "just compensation" the present value of the patent's expected future monopoly profits. Given the allocative inefficiency of raising taxes to pay for the program, the impact of the proposal on allocative efficiency would be at least as good as our lower bound estimate of monopoly costs, while substantially improving efficiency at or near our upper bound estimate.

  1. Tight upper bound for the maximal quantum value of the Svetlichny operators

    NASA Astrophysics Data System (ADS)

    Li, Ming; Shen, Shuqian; Jing, Naihuan; Fei, Shao-Ming; Li-Jost, Xianqing

    2017-10-01

    It is a challenging task to detect genuine multipartite nonlocality (GMNL). In this paper, the problem is considered via computing the maximal quantum value of Svetlichny operators for three-qubit systems and a tight upper bound is obtained. The constraints on the quantum states for the tightness of the bound are also presented. The approach enables us to give the necessary and sufficient conditions of violating the Svetlichny inequality (SI) for several quantum states, including the white and color noised Greenberger-Horne-Zeilinger (GHZ) states. The relation between the genuine multipartite entanglement concurrence and the maximal quantum value of the Svetlichny operators for mixed GHZ class states is also discussed. As the SI is useful for the investigation of GMNL, our results give an effective and operational method to detect the GMNL for three-qubit mixed states.

  2. Vacuum stability in the U(1)χ extended model with vanishing scalar potential at the Planck scale

    NASA Astrophysics Data System (ADS)

    Haba, Naoyuki; Yamaguchi, Yuya

    2015-09-01

    We investigate the vacuum stability in a scale-invariant local U(1)_χ model with vanishing scalar potential at the Planck scale. We find that it is impossible to realize the Higgs mass of 125 GeV while keeping the Higgs quartic coupling λ_H positive at all energy scales, that is, the same as in the standard model. Once one allows λ_H < 0, lower bounds on the Z' boson mass are obtained through the positive definiteness of the scalar mass squared eigenvalues, although these bounds are smaller than the LHC bounds. On the other hand, the upper bounds strongly depend on the number of relevant Majorana Yukawa couplings of the right-handed neutrinos, N_ν. Considering decoupling effects of the Z' boson and the right-handed neutrinos, the condition on the singlet scalar quartic coupling λ_φ > 0 gives the upper bound in the N_ν = 1 case, while it does not constrain the N_ν = 2 and 3 cases. In particular, we find that the Z' boson mass is tightly restricted in the N_ν = 1 case: M_{Z'} ≲ 3.7 TeV.

  3. Improved upper bounds on energy dissipation rates in plane Couette flow with boundary injection and suction

    NASA Astrophysics Data System (ADS)

    Lee, Harry; Wen, Baole; Doering, Charles

    2017-11-01

    The rate of viscous energy dissipation ε in incompressible Newtonian planar Couette flow (a horizontal shear layer) subject to uniform boundary injection and suction is studied numerically. Specifically, fluid is steadily injected through the top plate at a constant rate and a constant angle of injection, and the same amount of fluid is sucked out vertically through the bottom plate at the same rate. This set-up leads to two control parameters, namely the angle of injection, θ, and the Reynolds number of the horizontal shear flow, Re. We numerically implement the `background field' variational problem formulated by Constantin and Doering with a one-dimensional unidirectional background field ϕ(z), where z aligns with the distance between the plates. Computation is carried out at various levels of Re with θ = 0, 0.1°, 1°, and 2°, respectively. The computed upper bounds on ε scale like Re^0 for Re > 20,000 at each fixed θ, in agreement with Kolmogorov's hypothesis on isotropic turbulence. The outcome provides new upper bounds on ε valid for any solution of the underlying Navier-Stokes equations, and they are sharper than the analytical bounds presented in Doering et al. (2000). This research was partially supported by the NSF Award DMS-1515161, and the University of Michigan's Rackham Graduate Student Research Grant.

  4. N = 4 superconformal bootstrap of the K3 CFT

    DOE PAGES

    Lin, Ying-Hsuan; Shao, Shu-Heng; Simmons-Duffin, David; ...

    2017-05-23

    We study two-dimensional (4, 4) superconformal field theories of central charge c = 6, corresponding to nonlinear sigma models on K3 surfaces, using the superconformal bootstrap. This is made possible through a surprising relation between the BPS N = 4 superconformal blocks with c = 6 and bosonic Virasoro conformal blocks with c = 28, and an exact result on the moduli dependence of a certain integrated BPS 4-point function. Nontrivial bounds on the non-BPS spectrum in the K3 CFT are obtained as functions of the CFT moduli, that interpolate between the free orbifold points and singular CFT points. We observe directly from the CFT perspective the signature of a continuous spectrum above a gap at the singular moduli, and find numerically an upper bound on this gap that is saturated by the A1 N = 4 cigar CFT. We also derive an analytic upper bound on the first nonzero eigenvalue of the scalar Laplacian on K3 in the large volume regime, that depends on the K3 moduli data. As two byproducts, we find an exact equivalence between a class of BPS N = 2 superconformal blocks and Virasoro conformal blocks in two dimensions, and an upper bound on the four-point functions of operators of sufficiently low scaling dimension in three and four dimensional CFTs.

  5. $\mathcal{N}$ = 4 superconformal bootstrap of the K3 CFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Ying-Hsuan; Shao, Shu-Heng; Simmons-Duffin, David

    We study two-dimensional (4, 4) superconformal field theories of central charge c = 6, corresponding to nonlinear sigma models on K3 surfaces, using the superconformal bootstrap. This is made possible through a surprising relation between the BPS N = 4 superconformal blocks with c = 6 and bosonic Virasoro conformal blocks with c = 28, and an exact result on the moduli dependence of a certain integrated BPS 4-point function. Nontrivial bounds on the non-BPS spectrum in the K3 CFT are obtained as functions of the CFT moduli, that interpolate between the free orbifold points and singular CFT points. We observe directly from the CFT perspective the signature of a continuous spectrum above a gap at the singular moduli, and find numerically an upper bound on this gap that is saturated by the A1 N = 4 cigar CFT. We also derive an analytic upper bound on the first nonzero eigenvalue of the scalar Laplacian on K3 in the large volume regime, that depends on the K3 moduli data. As two byproducts, we find an exact equivalence between a class of BPS N = 2 superconformal blocks and Virasoro conformal blocks in two dimensions, and an upper bound on the four-point functions of operators of sufficiently low scaling dimension in three and four dimensional CFTs.

  6. Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection

    NASA Astrophysics Data System (ADS)

    Denuit, Michel; Dhaene, Jan

    2007-06-01

    In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.
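
    The comonotonic upper bound referred to above can be illustrated numerically. The sketch below uses made-up lognormal margins and a Gaussian copula standing in for the dependent quantities, and checks that stop-loss premiums of the comonotonic sum dominate those of the dependent sum; it illustrates the general bound, not the Lee-Carter computation of the paper.

```python
# Sketch (not the paper's model): comonotonic upper bound on a sum of dependent
# random variables, checked via stop-loss premiums E[(S - d)_+]. The lognormal
# margins and the Gaussian copula are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sim, n_terms = 200_000, 5
margins = [stats.lognorm(s=0.3, scale=np.exp(-0.05 * k)) for k in range(n_terms)]

# A dependent (but not comonotonic) sum: Gaussian copula with correlation 0.5.
corr = 0.5 * np.ones((n_terms, n_terms)) + 0.5 * np.eye(n_terms)
z = rng.multivariate_normal(np.zeros(n_terms), corr, size=n_sim)
u = stats.norm.cdf(z)
S = sum(m.ppf(u[:, k]) for k, m in enumerate(margins))

# Comonotonic upper bound: every term driven by the same uniform U.
u_common = rng.uniform(size=n_sim)
S_c = sum(m.ppf(u_common) for m in margins)

for d in (4.0, 5.0, 6.0):  # stop-loss premiums at a few retentions
    sl, sl_c = np.mean(np.maximum(S - d, 0)), np.mean(np.maximum(S_c - d, 0))
    print(f"d={d}: E[(S-d)+]={sl:.4f} <= E[(S^c-d)+]={sl_c:.4f}")
```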

  7. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.

  8. Stochastic analysis of three-dimensional flow in a bounded domain

    USGS Publications Warehouse

    Naff, R.L.; Vecchia, A.V.

    1986-01-01

    A commonly accepted first-order approximation of the equation for steady state flow in a fully saturated spatially random medium has the form of Poisson's equation. This form allows for the advantageous use of Green's functions to solve for the random output (hydraulic heads) in terms of a convolution over the random input (the logarithm of hydraulic conductivity). A solution for steady state three- dimensional flow in an aquifer bounded above and below is presented; consideration of these boundaries is made possible by use of Green's functions to solve Poisson's equation. Within the bounded domain the medium hydraulic conductivity is assumed to be a second-order stationary random process as represented by a simple three-dimensional covariance function. Upper and lower boundaries are taken to be no-flow boundaries; the mean flow vector lies entirely in the horizontal dimensions. The resulting hydraulic head covariance function exhibits nonstationary effects resulting from the imposition of boundary conditions. Comparisons are made with existing infinite domain solutions.

  9. Unified trade-off optimization for general heat devices with nonisothermal processes.

    PubMed

    Long, Rui; Liu, Wei

    2015-04-01

    An analysis of the efficiency and coefficient of performance (COP) for general heat engines and refrigerators with nonisothermal processes is conducted under the trade-off criterion. The specific heat of the working medium has significant impacts on the optimal configurations of heat devices. For cycles with constant specific heat, the bounds of the efficiency and COP are found to be the same as those obtained through the endoreversible Carnot ones. However, they are independent of the cycle time durations. For cycles with nonconstant specific heat, whose dimensionless contact time approaches infinity, the general alternative upper and lower bounds of the efficiency and COP under the trade-off criteria have been proposed under the asymmetric limits. Furthermore, when the dimensionless contact time approaches zero, the endoreversible Carnot model is recovered. In addition, the efficiency and COP bounds of different kinds of actual heat engines and refrigerators have also been analyzed. This paper may provide practical insight for designing and operating actual heat engines and refrigerators.

  10. Performances of One-Round Walks in Linear Congestion Games

    NASA Astrophysics Data System (ADS)

    Bilò, Vittorio; Fanelli, Angelo; Flammini, Michele; Moscardelli, Luca

    We investigate the approximation ratio of the solutions achieved after a one-round walk in linear congestion games. We consider the social functions Sum, defined as the sum of the players' costs, and Max, defined as the maximum cost per player, as a measure of the quality of a given solution. For the social function Sum and one-round walks starting from the empty strategy profile, we close the gap between the upper bound of 2+√5 ≈ 4.24 given in [8] and the lower bound of 4 derived in [4] by providing a matching lower bound whose construction and analysis require non-trivial arguments. For the social function Max, for which, to the best of our knowledge, no results were known prior to this work, we show an approximation ratio of Θ(n^{3/4}) (resp. Θ(n√n)), where n is the number of players, for one-round walks starting from the empty (resp. an arbitrary) strategy profile.

  11. On the global dynamics of a chronic myelogenous leukemia model

    NASA Astrophysics Data System (ADS)

    Krishchenko, Alexander P.; Starkov, Konstantin E.

    2016-04-01

    In this paper we analyze some features of the global dynamics of a three-dimensional chronic myelogenous leukemia (CML) model with the help of stability analysis and the localization method of compact invariant sets. The behavior of the CML model is defined by the concentrations of three cell populations circulating in the blood: naive T cells, effector T cells specific to CML, and CML cancer cells. We prove that the dynamics of the CML system around the tumor-free equilibrium point is unstable. Further, we compute ultimate upper bounds for all three cell populations and provide existence conditions for the positively invariant polytope. One ultimate lower bound is obtained as well. Moreover, we describe an iterative localization procedure for refining localization bounds; this procedure is based on the cyclic use of localizing functions. Applying this procedure we obtain conditions under which the internal tumor equilibrium point is globally asymptotically stable. Our theoretical analysis is supported by results of the numerical simulation.

  12. Uncoordinated MAC for Adaptive Multi-Beam Directional Networks: Analysis and Evaluation

    DTIC Science & Technology

    2016-04-10

    transmission times, hence traditional CSMA approaches are not appropriate. We first present our model of these multi-beamforming capabilities and the... resulting wireless interference. We then derive an upper bound on multi-access performance for an idealized version of this physical layer. We then present... transmissions and receptions in a mobile ad-hoc network has in practice led to very constrained topologies. As mentioned, one approach for system design is to de

  13. Removing cosmic spikes using a hyperspectral upper-bound spectrum method

    DOE PAGES

    Anthony, Stephen Michael; Timlin, Jerilyn A.

    2016-11-04

    Cosmic ray spikes are especially problematic for hyperspectral imaging because of the large number of spikes often present and their negative effects upon subsequent chemometric analysis. Fortunately, while the large number of spectra acquired in a hyperspectral imaging data set increases the probability and number of cosmic spikes observed, the multitude of spectra can also aid in the effective recognition and removal of the cosmic spikes. Zhang and Ben-Amotz were perhaps the first to leverage the additional spatial dimension of hyperspectral data matrices (DM). They integrated principal component analysis (PCA) into the upper bound spectrum method (UBS), resulting in a hybrid method (UBS-DM) for hyperspectral images. Here, we expand upon their use of PCA, recognizing that principal components primarily present in only a few pixels most likely correspond to cosmic spikes. Eliminating the contribution of those principal components in those pixels improves the cosmic spike removal. Both simulated and experimental hyperspectral Raman image data sets are used to test the newly developed UBS-DM-hyperspectral (UBS-DM-HS) method which extends the UBS-DM method by leveraging characteristics of hyperspectral data sets. As a result, a comparison is provided between the performance of the UBS-DM-HS method and other methods suitable for despiking hyperspectral images, evaluating both their ability to remove cosmic ray spikes and the extent to which they introduce spectral bias.

  14. Removing cosmic spikes using a hyperspectral upper-bound spectrum method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony, Stephen Michael; Timlin, Jerilyn A.

    Cosmic ray spikes are especially problematic for hyperspectral imaging because of the large number of spikes often present and their negative effects upon subsequent chemometric analysis. Fortunately, while the large number of spectra acquired in a hyperspectral imaging data set increases the probability and number of cosmic spikes observed, the multitude of spectra can also aid in the effective recognition and removal of the cosmic spikes. Zhang and Ben-Amotz were perhaps the first to leverage the additional spatial dimension of hyperspectral data matrices (DM). They integrated principal component analysis (PCA) into the upper bound spectrum method (UBS), resulting in a hybrid method (UBS-DM) for hyperspectral images. Here, we expand upon their use of PCA, recognizing that principal components primarily present in only a few pixels most likely correspond to cosmic spikes. Eliminating the contribution of those principal components in those pixels improves the cosmic spike removal. Both simulated and experimental hyperspectral Raman image data sets are used to test the newly developed UBS-DM-hyperspectral (UBS-DM-HS) method which extends the UBS-DM method by leveraging characteristics of hyperspectral data sets. As a result, a comparison is provided between the performance of the UBS-DM-HS method and other methods suitable for despiking hyperspectral images, evaluating both their ability to remove cosmic ray spikes and the extent to which they introduce spectral bias.

  15. Removing Cosmic Spikes Using a Hyperspectral Upper-Bound Spectrum Method.

    PubMed

    Anthony, Stephen M; Timlin, Jerilyn A

    2017-03-01

    Cosmic ray spikes are especially problematic for hyperspectral imaging because of the large number of spikes often present and their negative effects upon subsequent chemometric analysis. Fortunately, while the large number of spectra acquired in a hyperspectral imaging data set increases the probability and number of cosmic spikes observed, the multitude of spectra can also aid in the effective recognition and removal of the cosmic spikes. Zhang and Ben-Amotz were perhaps the first to leverage the additional spatial dimension of hyperspectral data matrices (DM). They integrated principal component analysis (PCA) into the upper bound spectrum method (UBS), resulting in a hybrid method (UBS-DM) for hyperspectral images. Here, we expand upon their use of PCA, recognizing that principal components primarily present in only a few pixels most likely correspond to cosmic spikes. Eliminating the contribution of those principal components in those pixels improves the cosmic spike removal. Both simulated and experimental hyperspectral Raman image data sets are used to test the newly developed UBS-DM-hyperspectral (UBS-DM-HS) method which extends the UBS-DM method by leveraging characteristics of hyperspectral data sets. A comparison is provided between the performance of the UBS-DM-HS method and other methods suitable for despiking hyperspectral images, evaluating both their ability to remove cosmic ray spikes and the extent to which they introduce spectral bias.
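
    The idea described in these three records can be sketched in a few lines: run PCA on a (pixels × wavelengths) matrix, flag components whose scores are concentrated in only a few pixels, and remove their contribution at those pixels. The code below is a minimal, hedged illustration of that idea with made-up data and thresholds; it is not the UBS-DM-HS algorithm itself.

```python
# Minimal sketch of the idea described above (not the UBS-DM-HS algorithm):
# flag principal components whose scores live in only a few pixels as
# cosmic-spike-like and subtract their contribution at those pixels only.
# The thresholds and the toy data are assumptions of this sketch.
import numpy as np

def despike_pca(data, n_components=10, score_z=8.0, max_pixels=3):
    """data: (n_pixels, n_wavelengths) array; returns a despiked copy."""
    mean = data.mean(axis=0)
    centered = data - mean
    # SVD-based PCA: per-pixel scores = U * S, spectral components = rows of Vt.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    cleaned = centered.copy()
    for k in range(n_components):
        s = scores[:, k]
        mad = 1.4826 * np.median(np.abs(s - np.median(s))) + 1e-12
        z = np.abs(s - np.median(s)) / mad
        hot = np.where(z > score_z)[0]
        if 0 < hot.size <= max_pixels:               # component lives in very few pixels
            cleaned[hot] -= np.outer(s[hot], Vt[k])  # remove its contribution there
    return cleaned + mean

# Toy usage: smooth spectra plus one cosmic spike in one pixel.
rng = np.random.default_rng(1)
wl = np.linspace(0, 1, 200)
cube = np.exp(-((wl - 0.5) / 0.1) ** 2) + 0.01 * rng.normal(size=(50, 200))
cube[17, 120] += 25.0                                # cosmic ray spike
print(cube[17, 120], despike_pca(cube)[17, 120])     # spiked value vs. despiked value
```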

  16. On the security of compressed encryption with partial unitary sensing matrices embedding a secret keystream

    NASA Astrophysics Data System (ADS)

    Yu, Nam Yul

    2017-12-01

    The principle of compressed sensing (CS) can be applied in a cryptosystem by providing the notion of security. In this paper, we study the computational security of a CS-based cryptosystem that encrypts a plaintext with a partial unitary sensing matrix embedding a secret keystream. The keystream is obtained by a keystream generator of stream ciphers, where the initial seed becomes the secret key of the CS-based cryptosystem. For security analysis, the total variation distance, bounded by the relative entropy and the Hellinger distance, is examined as a security measure for the indistinguishability. By developing upper bounds on the distance measures, we show that the CS-based cryptosystem can be computationally secure in terms of the indistinguishability, as long as the keystream length for each encryption is sufficiently large with low compression and sparsity ratios. In addition, we consider a potential chosen plaintext attack (CPA) from an adversary, which attempts to recover the key of the CS-based cryptosystem. Associated with the key recovery attack, we show that the computational security of our CS-based cryptosystem is brought by the mathematical intractability of a constrained integer least-squares (ILS) problem. For a sub-optimal, but feasible key recovery attack, we consider a successive approximate maximum-likelihood detection (SAMD) and investigate the performance by developing an upper bound on the success probability. Through theoretical and numerical analyses, we demonstrate that our CS-based cryptosystem can be secure against the key recovery attack through the SAMD.

  17. Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.

    PubMed

    Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J

    2012-09-01

    Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples. Copyright © 2012 Elsevier Inc. All rights reserved.
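
    As a rough illustration of the Gröbner-basis elimination step described above (not the authors' algorithm), the sketch below eliminates the state variable of a toy one-compartment model with a lexicographic order, leaving an input-output-style relation. The model, symbol names, and monomial order are assumptions of the sketch.

```python
# Hedged sketch: eliminate the state x1 from a toy model dx1/dt = -a*x1 + u,
# y = c*x1 via a lexicographic Groebner basis, recovering an input-output
# relation. Model and symbols are illustrative assumptions, not the paper's.
from sympy import symbols, groebner

x1, y, y1, u, a, c = symbols('x1 y y1 u a c')

# Polynomial relations: output equation and its formal time derivative
# (y1 stands for dy/dt, with dx1/dt replaced by -a*x1 + u).
f1 = y - c * x1
f2 = y1 - c * (-a * x1 + u)

# Lex order with x1 ranked highest eliminates x1 from part of the basis.
G = groebner([f1, f2], x1, y, y1, u, a, c, order='lex')
io_candidates = [g for g in G.exprs if not g.has(x1)]
print(io_candidates)   # expect something equivalent to a*y + y1 - c*u
```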

  18. Implicit Block ACK Scheme for IEEE 802.11 WLANs

    PubMed Central

    Sthapit, Pranesh; Pyun, Jae-Young

    2016-01-01

    The throughput of the IEEE 802.11 standard is significantly limited by the associated Medium Access Control (MAC) overhead. Because of this overhead, an upper limit on throughput exists even in situations where data rates are extremely high. Therefore, an overhead reduction is necessary to achieve higher throughput. The IEEE 802.11e amendment introduced the block ACK mechanism to reduce the number of control messages in the MAC. Although the block ACK scheme greatly reduces overhead, further improvements are possible. In this letter, we propose an implicit block ACK method that further reduces the overhead associated with IEEE 802.11e’s block ACK scheme. Mathematical analysis results are presented for both the original protocol and the proposed scheme. A performance improvement of greater than 10% was achieved with the proposed implementation.
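
    A back-of-the-envelope calculation shows why amortizing acknowledgement overhead over a block of frames raises throughput. The timing constants below are illustrative assumptions, not IEEE 802.11e parameters, and the sketch is not the letter's analytical model.

```python
# Why reducing per-frame ACK overhead raises throughput. All timing constants
# are illustrative assumptions, not IEEE 802.11e spec values.
def throughput_mbps(n_frames, payload_bits, rate_mbps, per_frame_overhead_us,
                    per_block_ack_us):
    payload_us = n_frames * payload_bits / rate_mbps          # bits / (Mbit/s) = us
    overhead_us = n_frames * per_frame_overhead_us + per_block_ack_us
    return n_frames * payload_bits / (payload_us + overhead_us)

frames, bits, rate = 16, 12000, 300.0
print("per-frame ACK :", throughput_mbps(frames, bits, rate, 80.0, 0.0))
print("block ACK     :", throughput_mbps(frames, bits, rate, 30.0, 60.0))
print("implicit BACK :", throughput_mbps(frames, bits, rate, 30.0, 0.0))
```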

  19. Effects of triplet Higgs bosons in long baseline neutrino experiments

    NASA Astrophysics Data System (ADS)

    Huitu, K.; Kärkkäinen, T. J.; Maalampi, J.; Vihonen, S.

    2018-05-01

    The triplet scalars (Δ = Δ++, Δ+, Δ0) utilized in the so-called type-II seesaw model to explain the lightness of neutrinos would generate nonstandard interactions (NSI) for a neutrino propagating in matter. We investigate the prospects to probe these interactions in long baseline neutrino oscillation experiments. We analyze the upper bounds that the proposed DUNE experiment might set on the nonstandard parameters and numerically derive upper bounds, as a function of the lightest neutrino mass, on the ratio of the mass MΔ of the triplet scalars to the strength |λϕ| of the coupling ϕϕΔ between the triplet Δ and the conventional Higgs doublet ϕ. We also discuss the possible misinterpretation of these effects as effects arising from a nonunitarity of the neutrino mixing matrix and compare the results with the bounds that arise from charged lepton flavor violating processes.

  20. Estimates on Functional Integrals of Quantum Mechanics and Non-relativistic Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Bley, Gonzalo A.; Thomas, Lawrence E.

    2017-01-01

    We provide a unified method for obtaining upper bounds for certain functional integrals appearing in quantum mechanics and non-relativistic quantum field theory, functionals of the form E[exp(A_T)], the (effective) action A_T being a function of particle trajectories up to time T. The estimates in turn yield rigorous lower bounds for ground state energies, via the Feynman-Kac formula. The upper bounds are obtained by writing the action for these functional integrals in terms of stochastic integrals. The method is illustrated in familiar quantum mechanical settings: for the hydrogen atom, for a Schrödinger operator with 1/|x|² potential with small coupling, and, with a modest adaptation of the method, for the harmonic oscillator. We then present our principal applications of the method, in the settings of non-relativistic quantum field theories for particles moving in a quantized Bose field, including the optical polaron and Nelson models.

  1. Dwell time-based stabilisation of switched delay systems using free-weighting matrices

    NASA Astrophysics Data System (ADS)

    Koru, Ahmet Taha; Delibaşı, Akın; Özbay, Hitay

    2018-01-01

    In this paper, we present a quasi-convex optimisation method to minimise an upper bound on the dwell time for stability of switched delay systems. Piecewise Lyapunov-Krasovskii functionals are introduced, and the upper bound for the derivative of the Lyapunov functionals is estimated by the free-weighting matrices method to investigate the non-switching stability of each candidate subsystem. Then, a sufficient condition for the dwell time is derived to guarantee the asymptotic stability of the switched delay system. Once these conditions are represented by a set of linear matrix inequalities, the dwell time optimisation problem can be formulated as a standard quasi-convex optimisation problem. Numerical examples are given to illustrate the improvements over previously obtained dwell time bounds. Using the results obtained in the stability case, we present a nonlinear minimisation algorithm to synthesise the dwell time minimiser controllers. The algorithm solves the problem with successive linearisation of the nonlinear conditions.

  2. Decay of superconducting correlations for gauged electrons in dimensions D ≤ 4

    NASA Astrophysics Data System (ADS)

    Tada, Yasuhiro; Koma, Tohru

    2018-03-01

    We study lattice superconductors coupled to gauge fields, such as an attractive Hubbard model in electromagnetic fields, with a standard gauge fixing. We prove upper bounds for a two-point Cooper pair correlation at finite temperatures in spatial dimensions D ≤ 4. The upper bounds decay exponentially in three dimensions and by power law in four dimensions. These imply the absence of the superconducting long-range order for the Cooper pair amplitude as a consequence of fluctuations of the gauge fields. Since our results hold for the gauge fixing Hamiltonian, they cannot be obtained as a corollary of Elitzur's theorem.

  3. Calculations of reliability predictions for the Apollo spacecraft

    NASA Technical Reports Server (NTRS)

    Amstadter, B. L.

    1966-01-01

    A new method of reliability prediction for complex systems is defined. Calculation of both upper and lower bounds are involved, and a procedure for combining the two to yield an approximately true prediction value is presented. Both mission success and crew safety predictions can be calculated, and success probabilities can be obtained for individual mission phases or subsystems. Primary consideration is given to evaluating cases involving zero or one failure per subsystem, and the results of these evaluations are then used for analyzing multiple failure cases. Extensive development is provided for the overall mission success and crew safety equations for both the upper and lower bounds.

  4. A proof of the log-concavity conjecture related to the computation of the ergodic capacity of MIMO channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurvitis, Leonid

    2009-01-01

    An upper bound on the ergodic capacity of MIMO channels was introduced recently in [1]. This upper bound amounts to the maximization on the simplex of some multilinear polynomial p(λ₁, ..., λₙ) with non-negative coefficients. In general, such maximization problems are NP-hard. But if, say, the functional log(p) is concave on the simplex and can be efficiently evaluated, then the maximization can also be done efficiently. Such log-concavity was conjectured in [1]. We give in this paper a self-contained proof of the conjecture, based on the theory of H-stable polynomials.
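
    The maximization step that the log-concavity makes tractable can be sketched as a generic constrained optimization over the simplex; the toy polynomial and the solver choice below are assumptions, not the construction in [1].

```python
# Sketch of the maximization step described above: maximize log p(lambda) over
# the probability simplex for a toy multilinear polynomial with non-negative
# coefficients. The polynomial and solver are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

# p(l1, l2, l3) = 2*l1*l2 + 3*l2*l3 + l1*l3 + 0.5*l1*l2*l3
def p(lam):
    l1, l2, l3 = lam
    return 2*l1*l2 + 3*l2*l3 + l1*l3 + 0.5*l1*l2*l3

neg_log_p = lambda lam: -np.log(p(lam) + 1e-300)
res = minimize(neg_log_p, x0=np.ones(3) / 3, method='SLSQP',
               bounds=[(0.0, 1.0)] * 3,
               constraints=[{'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0}])
print("argmax on simplex:", res.x, " max p:", p(res.x))
```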

  5. Investigation of matter-antimatter interaction for possible propulsion applications

    NASA Technical Reports Server (NTRS)

    Morgan, D. L., Jr.

    1974-01-01

    Matter-antimatter annihilation is discussed as a means of rocket propulsion. The feasibility of different means of antimatter storage is shown to depend on how annihilation rates are affected by various circumstances. The annihilation processes are described, with emphasis on important features of atom-antiatom interatomic potential energies. A model is developed that allows approximate calculation of upper and lower bounds to the interatomic potential energy for any atom-antiatom pair. Formulae for the upper and lower bounds for atom-antiatom annihilation cross-sections are obtained and applied to the annihilation rates for each means of antimatter storage under consideration. Recommendations for further studies are presented.

  6. Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.

    PubMed

    Werner, Tomás

    2015-07-01

    Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message-passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point when the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. During the iterations, an upper bound on the partition function monotonically decreases. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
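
    The single iteration described above (equalize overlapping marginals while preserving the product of the two factors) is easy to write down for the sum-product semiring; the sketch below uses two random factors sharing one variable and only verifies the stated invariants, without reproducing the paper's bound computation.

```python
# Minimal sketch of the iteration described above, in the sum-product semiring,
# for two factors f(x, y) and g(y, z) overlapping in y: rescale them so their
# pointwise product is unchanged while their marginals on y become equal.
# Factor sizes and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
f = rng.uniform(0.5, 2.0, size=(3, 4))   # f(x, y)
g = rng.uniform(0.5, 2.0, size=(4, 5))   # g(y, z)
joint_before = f[:, :, None] * g[None, :, :]   # f(x,y) * g(y,z)

for _ in range(50):
    mu_f = f.sum(axis=0)        # marginal of f on y
    mu_g = g.sum(axis=1)        # marginal of g on y
    scale = np.sqrt(mu_g / mu_f)
    f *= scale[None, :]         # the product f*g is preserved because f is
    g /= scale[:, None]         # multiplied and g divided by the same scale(y)

joint_after = f[:, :, None] * g[None, :, :]
print("product preserved:", np.allclose(joint_before, joint_after))
print("marginals on y equal:", np.allclose(f.sum(axis=0), g.sum(axis=1)))
```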

  7. On the Role of Entailment Patterns and Scalar Implicatures in the Processing of Numerals

    ERIC Educational Resources Information Center

    Panizza, Daniele; Chierchia, Gennaro; Clifton, Charles, Jr.

    2009-01-01

    There has been much debate, in both the linguistics and the psycholinguistics literature, concerning numbers and the interpretation of number denoting determiners ("numerals"). Such debate concerns, in particular, the nature and distribution of upper-bounded ("exact") interpretations vs. lower-bounded ("at-least") construals. In the present paper…

  8. Sublinear Upper Bounds for Stochastic Programs with Recourse. Revision.

    DTIC Science & Technology

    1987-06-01

    approximation procedures for (1.1) generally rely on discretizations of E (Huang, Ziemba, and Ben-Tal (1977), Kall and Stoyan (1982), Birge and Wets... Wright, Practical optimization (Academic Press, London and New York, 1981). C.C. Huang, W. Ziemba, and A. Ben-Tal, "Bounds on the expectation of a con

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zachos, C. K.; High Energy Physics

    Following ref [1], a classical upper bound for quantum entropy is identified and illustrated, 0 ≤ S_q ≤ ln(eσ²/2ℏ), involving the variance σ² in phase space of the classical limit distribution of a given system. A fortiori, this further bounds the corresponding information-theoretical generalizations of the quantum entropy proposed by Renyi.

  10. Representing and Acquiring Geographic Knowledge.

    DTIC Science & Technology

    1984-01-01

    which is allowed if v is a knowledge bound of REG. e3. The real vertices of a clump map into the boundary of the corresponding object so *, 21... example, "What is the diameter of the pond?" can be answered, but the answer will, in general, be a range [lower-bound, upper-bound]. If the clump for... cases of others. They are included separately, because their procedures are either faster or more powerful than the general procedure. I will not

  11. A Posteriori Finite Element Bounds for Sensitivity Derivatives of Partial-Differential-Equation Outputs. Revised

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume

    1998-01-01

    We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller-Hermes, Alexander, E-mail: muellerh@ma.tum.de; Wolf, Michael M., E-mail: m.wolf@tum.de; Reeb, David, E-mail: reeb.qit@gmail.com

    We investigate linear maps between matrix algebras that remain positive under tensor powers, i.e., under tensoring with n copies of themselves. Completely positive and completely co-positive maps are trivial examples of this kind. We show that for every n ∈ ℕ, there exist non-trivial maps with this property and that for two-dimensional Hilbert spaces there is no non-trivial map for which this holds for all n. For higher dimensions, we reduce the existence question of such non-trivial “tensor-stable positive maps” to a one-parameter family of maps and show that an affirmative answer would imply the existence of non-positive partial transpose bound entanglement. As an application, we show that any tensor-stable positive map that is not completely positive yields an upper bound on the quantum channel capacity, which for the transposition map gives the well-known cb-norm bound. We, furthermore, show that the latter is an upper bound even for the local operations and classical communications-assisted quantum capacity, and that moreover it is a strong converse rate for this task.

  13. Measures and limits of models of fixation selection.

    PubMed

    Wilming, Niklas; Betz, Torsten; Kietzmann, Tim C; König, Peter

    2011-01-01

    Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound of these measures, based on image-independent properties of fixation data and between subject consistency respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source python code to compute the reference frame. Third, we show that the upper, between subject consistency bound holds only for models that predict averages of subject populations. Departing from this we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the required information that allow a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
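
    The two measures singled out above can be computed for toy data as follows; this sketch is not the authors' open-source toolbox, it omits their small-sample correction for the KL-divergence, and all inputs are synthetic assumptions.

```python
# Hedged sketch of the two recommended measures: AUC treats model saliency at
# fixated vs. non-fixated pixels as a two-class ranking problem; KL compares
# two discretized fixation densities. All inputs here are synthetic assumptions.
import numpy as np
from scipy.stats import rankdata

def roc_auc(pos, neg):
    """Mann-Whitney formulation of the area under the ROC curve."""
    ranks = rankdata(np.concatenate([pos, neg]))
    return (ranks[:len(pos)].sum() - len(pos) * (len(pos) + 1) / 2) / (len(pos) * len(neg))

def kl_divergence(p, q, eps=1e-12):
    p, q = p / p.sum(), q / q.sum()
    return np.sum(p * np.log((p + eps) / (q + eps)))

rng = np.random.default_rng(0)
saliency = rng.random((64, 64))                                  # toy saliency map
fix_y, fix_x = rng.integers(0, 64, 40), rng.integers(0, 64, 40)  # toy fixations
pos = saliency[fix_y, fix_x]
neg = np.delete(saliency.ravel(), fix_y * 64 + fix_x)
print("AUC:", roc_auc(pos, neg))

hist_a, _ = np.histogramdd(np.c_[fix_y, fix_x], bins=(8, 8), range=((0, 64), (0, 64)))
hist_b = np.ones((8, 8))                                         # uniform reference density
print("KL(fixations || uniform):", kl_divergence(hist_a.ravel(), hist_b.ravel()))
```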

  14. Randomized noninferiority trial of telephone versus in-person genetic counseling for hereditary breast and ovarian cancer.

    PubMed

    Schwartz, Marc D; Valdimarsdottir, Heiddis B; Peshkin, Beth N; Mandelblatt, Jeanne; Nusbaum, Rachel; Huang, An-Tsun; Chang, Yaojen; Graves, Kristi; Isaacs, Claudine; Wood, Marie; McKinnon, Wendy; Garber, Judy; McCormick, Shelley; Kinney, Anita Y; Luta, George; Kelleher, Sarah; Leventhal, Kara-Grace; Vegella, Patti; Tong, Angie; King, Lesley

    2014-03-01

    Although guidelines recommend in-person counseling before BRCA1/BRCA2 gene testing, genetic counseling is increasingly offered by telephone. As genomic testing becomes more common, evaluating alternative delivery approaches becomes increasingly salient. We tested whether telephone delivery of BRCA1/2 genetic counseling was noninferior to in-person delivery. Participants (women age 21 to 85 years who did not have newly diagnosed or metastatic cancer and lived within a study site catchment area) were randomly assigned to usual care (UC; n = 334) or telephone counseling (TC; n = 335). UC participants received in-person pre- and post-test counseling; TC participants completed all counseling by telephone. Primary outcomes were knowledge, satisfaction, decision conflict, distress, and quality of life; secondary outcomes were equivalence of BRCA1/2 test uptake and costs of delivering TC versus UC. TC was noninferior to UC on all primary outcomes. At 2 weeks after pretest counseling, knowledge (d = 0.03; lower bound of 97.5% CI, -0.61), perceived stress (d = -0.12; upper bound of 97.5% CI, 0.21), and satisfaction (d = -0.16; lower bound of 97.5% CI, -0.70) had group differences and confidence intervals that did not cross their 1-point noninferiority limits. Decision conflict (d = 1.1; upper bound of 97.5% CI, 3.3) and cancer distress (d = -1.6; upper bound of 97.5% CI, 0.27) did not cross their 4-point noninferiority limit. Results were comparable at 3 months. TC was not equivalent to UC on BRCA1/2 test uptake (UC, 90.1%; TC, 84.2%). TC yielded cost savings of $114 per patient. Genetic counseling can be effectively and efficiently delivered via telephone to increase access and decrease costs.
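
    The noninferiority logic reported in the abstract reduces to checking whether the relevant one-sided 97.5% confidence bound crosses the prespecified margin; the helper below encodes that check for the values quoted above (the function itself is an illustrative assumption, not the trial's analysis code).

```python
# Sketch of the noninferiority rule: the relevant 97.5% confidence bound must
# not cross the prespecified margin. Values are copied from the abstract; the
# helper itself is an illustrative assumption.
def noninferior(ci_bound, margin, higher_is_worse):
    """True if the CI bound stays on the acceptable side of the margin."""
    return ci_bound < margin if higher_is_worse else ci_bound > -margin

# (outcome, CI bound, margin, does a higher score mean a worse outcome?)
outcomes = [
    ("knowledge",        -0.61, 1.0, False),  # lower bound of 97.5% CI
    ("perceived stress",  0.21, 1.0, True),   # upper bound of 97.5% CI
    ("satisfaction",     -0.70, 1.0, False),
    ("decision conflict", 3.3,  4.0, True),
    ("cancer distress",   0.27, 4.0, True),
]
for name, bound, margin, worse in outcomes:
    print(f"{name:>17s}: noninferior = {noninferior(bound, margin, worse)}")
```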

  15. Sign rank versus Vapnik-Chervonenkis dimension

    NASA Astrophysics Data System (ADS)

    Alon, N.; Moran, Sh; Yehudayoff, A.

    2017-12-01

    This work studies the maximum possible sign rank of sign (N × N)-matrices with a given Vapnik-Chervonenkis dimension d. For d = 1, this maximum is three. For d = 2, this maximum is \widetilde{\Theta}(N^{1/2}). For d > 2, similar but slightly less accurate statements hold. The lower bounds improve on previous ones by Ben-David et al., and the upper bounds are novel. The lower bounds are obtained by probabilistic constructions, using a theorem of Warren in real algebraic topology. The upper bounds are obtained using a result of Welzl about spanning trees with low stabbing number, and using the moment curve. The upper bound technique is also used to: (i) provide estimates on the number of classes of a given Vapnik-Chervonenkis dimension, and the number of maximum classes of a given Vapnik-Chervonenkis dimension--answering a question of Frankl from 1989, and (ii) design an efficient algorithm that provides an O(N/log(N)) multiplicative approximation for the sign rank. We also observe a general connection between sign rank and spectral gaps which is based on Forster's argument. Consider the adjacency (N × N)-matrix of a Δ-regular graph with a second eigenvalue of absolute value λ and Δ ≤ N/2. We show that the sign rank of the signed version of this matrix is at least Δ/λ. We use this connection to prove the existence of a maximum class C ⊆ {±1}^N with Vapnik-Chervonenkis dimension 2 and sign rank \widetilde{\Theta}(N^{1/2}). This answers a question of Ben-David et al. regarding the sign rank of large Vapnik-Chervonenkis classes. We also describe limitations of this approach, in the spirit of the Alon-Boppana theorem. We further describe connections to communication complexity, geometry, learning theory, and combinatorics. Bibliography: 69 titles.
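
    The spectral lower bound quoted above (sign rank at least Δ/λ) can be evaluated on a small regular graph; the Petersen graph and the particular ±1 signing below are assumptions of this sketch, not the paper's construction.

```python
# Sketch of the spectral bound quoted above: for a Delta-regular graph with
# second-largest eigenvalue magnitude lambda and Delta <= N/2, the sign rank of
# a signed version of the adjacency matrix is at least Delta/lambda. The
# Petersen graph and the +1/-1 signing used here are illustrative assumptions.
import numpy as np
import networkx as nx

G = nx.petersen_graph()                      # 3-regular graph on 10 vertices
A = nx.to_numpy_array(G)
N, Delta = A.shape[0], int(A.sum(axis=1)[0])

abs_eigs = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
lam = abs_eigs[1]                            # second-largest |eigenvalue|
print(f"Delta = {Delta}, lambda = {lam:.2f}, sign-rank lower bound = {Delta / lam:.2f}")

S = 2 * A - 1                                # signed version: +1 on edges, -1 elsewhere
print("entries of the signed matrix:", np.unique(S))
```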

  16. A formulation of a matrix sparsity approach for the quantum ordered search algorithm

    NASA Astrophysics Data System (ADS)

    Parmar, Jupinder; Rahman, Saarim; Thiara, Jaskaran

    One specific subset of quantum algorithms is Grover's Ordered Search Problem (OSP), the quantum counterpart of the classical binary search algorithm, which utilizes oracle functions to produce a specified value within an ordered database. Classically, the optimal algorithm is known to have a log₂ N complexity; however, Grover's algorithm has been found to have an optimal complexity between the lower bound of (ln N − 1)/π ≈ 0.221 log₂ N and the upper bound of 0.433 log₂ N. We sought to lower the known upper bound of the OSP. With Farhi et al. [MIT-CTP 2815 (1999), arXiv:quant-ph/9901059], we see that the OSP can be resolved into a translationally invariant algorithm to create quantum query algorithm constraints. With these constraints, one can find Laurent polynomials for various k — queries — and N — database sizes — thus finding larger recursive sets to solve the OSP and effectively reducing the upper bound. These polynomials are found to be convex functions, allowing one to make use of convex optimization to find an improvement on the known bounds. According to Childs et al. [Phys. Rev. A 75 (2007) 032335], semidefinite programming, a subset of convex optimization, can solve the particular problem represented by the constraints. We were able to implement a program abiding by their formulation of a semidefinite program (SDP), leading us to find that it takes an immense amount of storage and time to compute. To combat this setback, we then formulated an approach to improve results of the SDP using matrix sparsity. Through the development of this approach, along with an implementation of a rudimentary solver, we demonstrate how matrix sparsity reduces the amount of time and storage required to compute the SDP — overall ensuring further improvements will likely be made to reach the theorized lower bound.

  17. Sample Complexity Bounds for Differentially Private Learning

    PubMed Central

    Chaudhuri, Kamalika; Hsu, Daniel

    2013-01-01

    This work studies the problem of privacy-preserving classification – namely, learning a classifier from sensitive data while preserving the privacy of individuals in the training set. In particular, the learning algorithm is required in this problem to guarantee differential privacy, a very strong notion of privacy that has gained significant attention in recent years. A natural question to ask is: what is the sample requirement of a learning algorithm that guarantees a certain level of privacy and accuracy? We address this question in the context of learning with infinite hypothesis classes when the data is drawn from a continuous distribution. We first show that even for very simple hypothesis classes, any algorithm that uses a finite number of examples and guarantees differential privacy must fail to return an accurate classifier for at least some unlabeled data distributions. This result is unlike the case with either finite hypothesis classes or discrete data domains, in which distribution-free private learning is possible, as previously shown by Kasiviswanathan et al. (2008). We then consider two approaches to differentially private learning that get around this lower bound. The first approach is to use prior knowledge about the unlabeled data distribution in the form of a reference distribution chosen independently of the sensitive data. Given such a reference distribution, we provide an upper bound on the sample requirement that depends (among other things) on a measure of closeness between the reference distribution and the unlabeled data distribution. Our upper bound applies to the non-realizable as well as the realizable case. The second approach is to relax the privacy requirement, by requiring only label-privacy – namely, that only the labels (and not the unlabeled parts of the examples) be considered sensitive information. An upper bound on the sample requirement of learning with label privacy was shown by Chaudhuri et al. (2006); in this work, we show a lower bound. PMID:25285183

  18. Static liquid permeation cell method for determining the migration parameters of low molecular weight organic compounds in polyethylene terephthalate.

    PubMed

    Song, Yoon S; Koontz, John L; Juskelis, Rima O; Zhao, Yang

    2013-01-01

    The migration of low molecular weight organic compounds through polyethylene terephthalate (PET) films was determined by using a custom permeation cell assembly. Fatty food simulant (Miglyol 812) was added to the receptor chamber, while the donor chamber was filled with 1% and 10% (v/v) migrant compounds spiked in simulant. The permeation cell was maintained at 40°C, 66°C, 100°C or 121°C for up to 25 days of polymer film exposure time. Migrants in Miglyol were directly quantified without a liquid-liquid extraction step by headspace-GC-MS analysis. Experimental diffusion coefficients (DP) of toluene, benzyl alcohol, ethyl butyrate and methyl salicylate through PET film were determined. Results from Limm's diffusion model showed that the predicted DP values for PET were all greater than the experimental values. DP values predicted by Piringer's diffusion model were also greater than those determined experimentally at 66°C, 100°C and 121°C. However, Piringer's model led to the underestimation of benzyl alcohol (A′P = 3.7) and methyl salicylate (A′P = 4.0) diffusion at 40°C with its revised "upper-bound" A′P value of 3.1 at temperatures below the glass transition temperature (Tg) of PET (<70°C). This implies that input parameters of Piringer's model may need to be revised to ensure a margin of safety for consumers. On the other hand, at temperatures greater than the Tg, both models appear too conservative and unrealistic. The highest estimated A′P value from Piringer's model was 2.6 for methyl salicylate, which was much lower than the "upper-bound" A′P value of 6.4 for PET. Therefore, it may be necessary further to refine "upper-bound" A′P values for PET such that Piringer's model does not significantly underestimate or overestimate the migration of organic compounds dependent upon the temperature condition of the food contact material.
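
    For orientation, the commonly cited form of Piringer's worst-case diffusion model is evaluated below; the functional form and the PET parameters (A_P = 3.1, τ = 1577 K) are taken from the general migration-modelling literature as assumptions and may not match the exact parameterization used in this study.

```python
# Commonly cited form of Piringer's worst-case diffusion estimate,
# D_P = 1e4 * exp(A'_P - 0.1351*M**(2/3) + 0.003*M - 10454/T) cm^2/s,
# with A'_P = A_P - tau/T. All numeric coefficients and the PET parameters
# below are assumptions of this sketch, not values taken from the paper.
import math

def piringer_dp(a_p, tau, molar_mass, temp_k):
    """Worst-case polymer diffusion coefficient estimate, cm^2/s."""
    a_p_prime = a_p - tau / temp_k              # temperature-adjusted parameter
    return 1e4 * math.exp(a_p_prime - 0.1351 * molar_mass ** (2.0 / 3.0)
                          + 0.003 * molar_mass - 10454.0 / temp_k)

# Benzyl alcohol (M ~ 108 g/mol) in PET at 40 C and 100 C, using the commonly
# tabulated PET parameters A_P = 3.1, tau = 1577 K (assumptions of this sketch).
for temp_c in (40.0, 100.0):
    dp = piringer_dp(3.1, 1577.0, 108.1, temp_c + 273.15)
    print(f"T = {temp_c:5.1f} C: D_P ~ {dp:.2e} cm^2/s")
```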

  19. Spread of entanglement and causality

    NASA Astrophysics Data System (ADS)

    Casini, Horacio; Liu, Hong; Mezei, Márk

    2016-07-01

    We investigate causality constraints on the time evolution of entanglement entropy after a global quench in relativistic theories. We first provide a general proof that the so-called tsunami velocity is bounded by the speed of light. We then generalize the free particle streaming model of [1] to general dimensions and to an arbitrary entanglement pattern of the initial state. In more than two spacetime dimensions the spread of entanglement in these models is highly sensitive to the initial entanglement pattern, but we are able to prove an upper bound on the normalized rate of growth of entanglement entropy, and hence the tsunami velocity. The bound is smaller than what one gets for quenches in holographic theories, which highlights the importance of interactions in the spread of entanglement in many-body systems. We propose an interacting model which we believe provides an upper bound on the spread of entanglement for interacting relativistic theories. In two spacetime dimensions with multiple intervals, this model and its variations are able to reproduce intricate results exhibited by holographic theories for a significant part of the parameter space. For higher dimensions, the model bounds the tsunami velocity at the speed of light. Finally, we construct a geometric model for entanglement propagation based on a tensor network construction for global quenches.

  20. Ion wake field effects on the dust-ion-acoustic surface mode in a semi-bounded Lorentzian dusty plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Myoung-Jae; Jung, Young-Dae, E-mail: ydjung@hanyang.ac.kr; Department of Physics, Applied Physics, and Astronomy, Rensselaer Polytechnic Institute, 110 8th Street, Troy, New York 12180-3590

    The dispersion relation for dust-ion-acoustic surface waves propagating at the interface of a semi-bounded Lorentzian dusty plasma with supersonic ion flow has been kinetically derived to investigate the nonthermal property and the ion wake field effect. We found that the supersonic ion flow creates the upper and the lower modes. The increase in the nonthermal particles decreases the wave frequency for the upper mode, whereas it increases the frequency for the lower mode. The increase in the supersonic ion flow velocity is found to enhance the wave frequency for both modes. We also found that the increase in nonthermal plasmas enhances the group velocity of the upper mode. However, the nonthermal particles suppress the lower mode group velocity. The nonthermal effects on the group velocity will be reduced in the small or large wavelength limit.

  1. LHC phenomenology of SO(10) models with Yukawa unification

    NASA Astrophysics Data System (ADS)

    Anandakrishnan, Archana; Bryant, B. Charles; Raby, Stuart; Wingerter, Akın

    2013-10-01

    In this paper we study an SO(10) SUSY GUT with Yukawa unification for the third generation. We perform a global χ² analysis to obtain the GUT boundary conditions consistent with 11 low-energy observables, including the top, bottom and tau masses. We assume a universal mass, m16, for squarks and sleptons and a universal gaugino mass, M1/2. We then analyze the phenomenological consequences for the LHC for 15 benchmark models with fixed m16 = 20 TeV and with varying values of the gluino mass. The goal of the present work is (i) to evaluate the lower bound on the gluino mass in our model coming from the most recent published data of CMS and (ii) to compare this bound with similar bounds obtained by CMS using simplified models. The bottom line is that the bounds coming from the same-sign dilepton analysis are comparable for our model and the simplified model studied assuming B(g˜→tt¯χ˜10) = 100%. However, the bounds coming from the purely hadronic analyses for our model are 10%-20% lower than those obtained for the simplified models. This is due to the fact that for our models the branching ratio for the decay g˜→gχ˜1,20 is significant. Thus there are significantly fewer b-jets. We find a lower bound on the gluino mass in our models of Mg˜ ≳ 1000 GeV. Finally, there is a theoretical upper bound on the gluino mass which increases with the value of m16. For m16 ≤ 30 TeV, the gluino mass satisfies Mg˜ ≤ 2.8 TeV at 90% C.L. Thus, unless we further increase the amount of fine-tuning, we expect gluinos to be discovered at LHC 14.

  2. Bounds on quantum confinement effects in metal nanoparticles

    NASA Astrophysics Data System (ADS)

    Blackman, G. Neal; Genov, Dentcho A.

    2018-03-01

    Quantum size effects on the permittivity of metal nanoparticles are investigated using the quantum box model. Explicit upper and lower bounds are derived for the permittivity and relaxation rates due to quantum confinement effects. These bounds are verified numerically, and the size dependence and frequency dependence of the empirical Drude size parameter is extracted from the model. Results suggest that the common practice of empirically modifying the dielectric function can lead to inaccurate predictions for highly uniform distributions of finite-sized particles.

  3. Data envelopment analysis with upper bound on output to measure efficiency performance of departments in Malikussaleh University

    NASA Astrophysics Data System (ADS)

    Abdullah, Dahlan; Suwilo, Saib; Tulus; Mawengkang, Herman; Efendi, Syahril

    2017-09-01

    The higher education system in Indonesia can be considered not only an important source of knowledge development in the country, but also a means of creating positive living conditions for the country. It is therefore not surprising that enrollments in higher education continue to expand. The implication of this situation, however, is that the Indonesian government has to provide more funds. In the interest of accountability, it is essential to measure the efficiency of such institutions. Data envelopment analysis (DEA) is a method to evaluate the technical efficiency of production units which have multiple inputs and outputs. The higher learning institution considered in this paper is Malikussaleh University, located in Lhokseumawe, a city in the Aceh province of Indonesia. This paper develops a method to evaluate efficiency for all departments in Malikussaleh University using DEA with bounded output. Accordingly, we present some important differences in the efficiency of those departments. Finally, we discuss the efforts these departments should make in order to become efficient.
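
    A minimal input-oriented CCR DEA model can be written as a small linear program; the department data below are invented, and the optional output cap only gestures at the bounded-output variant used in the paper rather than reproducing it.

```python
# Minimal sketch of an input-oriented CCR DEA efficiency score via linear
# programming. The department data are made up; the optional output cap is an
# illustrative stand-in for the paper's bounded-output variant.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o, y_cap=None):
    """X: (m inputs, n units), Y: (s outputs, n units); returns theta for unit o."""
    m, n = X.shape
    s, _ = Y.shape
    c = np.zeros(1 + n); c[0] = 1.0                       # minimize theta
    A_in = np.hstack([-X[:, [o]], X])                     # sum_j l_j x_ij <= theta x_io
    b_in = np.zeros(m)
    A_out = np.hstack([np.zeros((s, 1)), -Y])             # sum_j l_j y_rj >= y_ro
    b_out = -Y[:, o]
    A_ub, b_ub = np.vstack([A_in, A_out]), np.concatenate([b_in, b_out])
    if y_cap is not None:                                 # optional upper bound on outputs
        A_ub = np.vstack([A_ub, np.hstack([np.zeros((s, 1)), Y])])
        b_ub = np.concatenate([b_ub, y_cap])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n), method='highs')
    return res.fun

# Toy data: 2 inputs (staff, budget) and 1 output (graduates) for 4 departments.
X = np.array([[20.0, 30.0, 40.0, 25.0],
              [5.0,  8.0,  9.0,  6.0]])
Y = np.array([[100.0, 120.0, 150.0, 90.0]])
for o in range(X.shape[1]):
    print(f"department {o}: efficiency = {dea_efficiency(X, Y, o):.3f}")
```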

  4. Interacting dark energy: Dynamical system analysis

    NASA Astrophysics Data System (ADS)

    Golchin, Hanif; Jamali, Sara; Ebrahimi, Esmaeil

    We investigate the impacts of interaction between dark matter (DM) and dark energy (DE) in the context of two DE models, holographic (HDE) and ghost dark energy (GDE). In fact, using dynamical system analysis, we obtain the cosmological consequences of several interactions, considering all relevant components of the universe, i.e. matter (dark and luminous), radiation and DE. Studying the phase space for all interactions in detail, we show the existence of unstable matter-dominated and stable DE-dominated phases. We also show that linear interactions suffer from the absence of a standard radiation-dominated epoch. Interestingly, this failure is resolved by adding nonlinear interactions to the models. We find an upper bound for the value of the coupling constant of the interaction between DM and DE, b < 0.57 in the case of the holographic model and b < 0.61 in the case of the GDE model, in order to obtain a cosmologically viable matter-dominated epoch. More specifically, this bound is vital to satisfy the instability and deceleration of the matter-dominated epoch.

  5. Direct detection of light “Ge-phobic” exothermic dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelmini, Graciela B.; Georgescu, Andreea; Huh, Ji-Haeng

    2014-07-15

    We present comparisons of direct dark matter (DM) detection data for light WIMPs with exothermic scattering with nuclei (exoDM), both assuming the Standard Halo Model (SHM) and in a halo-model-independent manner. Exothermic interactions favor light targets, thus reducing the importance of upper limits derived from xenon targets, the most restrictive of which is at present the LUX limit. In our SHM analysis the CDMS-II-Si and CoGeNT regions become allowed by these bounds, however the recent SuperCDMS limit rejects both regions for exoDM with isospin-conserving couplings. An isospin-violating coupling of the exoDM, in particular one with a neutron to proton coupling ratio of −0.8 (which we call “Ge-phobic”), maximally reduces the DM coupling to germanium and allows the CDMS-II-Si region to become compatible with all bounds. This is also clearly shown in our halo-independent analysis.

  6. Solving the chemical master equation using sliding windows

    PubMed Central

    2010-01-01

    Background The chemical master equation (CME) is a system of ordinary differential equations that describes the evolution of a network of chemical reactions as a stochastic process. Its solution yields the probability density vector of the system at each point in time. Solving the CME numerically is in many cases computationally expensive or even infeasible as the number of reachable states can be very large or infinite. We introduce the sliding window method, which computes an approximate solution of the CME by performing a sequence of local analysis steps. In each step, only a manageable subset of states is considered, representing a "window" into the state space. In subsequent steps, the window follows the direction in which the probability mass moves, until the time period of interest has elapsed. We construct the window based on a deterministic approximation of the future behavior of the system by estimating upper and lower bounds on the populations of the chemical species. Results In order to show the effectiveness of our approach, we apply it to several examples previously described in the literature. The experimental results show that the proposed method speeds up the analysis considerably, compared to a global analysis, while still providing high accuracy. Conclusions The sliding window method is a novel approach to address the performance problems of numerical algorithms for the solution of the chemical master equation. The method efficiently approximates the probability distributions at the time points of interest for a variety of chemically reacting systems, including systems for which no upper bound on the population sizes of the chemical species is known a priori. PMID:20377904
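
    The windowing idea can be illustrated on a one-species birth-death process: truncate the CME generator to a window of states, integrate over a short horizon, then recentre the window on the bulk of the probability mass. The rate constants and the window-placement heuristic below are assumptions of this sketch, not the paper's algorithm.

```python
# Illustrative sketch of the sliding-window idea for a simple birth-death
# process (production at rate k, degradation at rate gamma * n). The window
# placement heuristic and all rate constants are assumptions of this sketch.
import numpy as np
from scipy.linalg import expm

k, gamma = 50.0, 1.0                 # propensities: birth k, death gamma * n
width, dt, steps = 60, 0.2, 20

def truncated_generator(lo, hi):
    """CME generator restricted to states lo..hi (probability may leak out)."""
    n = hi - lo + 1
    Q = np.zeros((n, n))
    for i, state in enumerate(range(lo, hi + 1)):
        birth, death = k, gamma * state
        if i + 1 < n:
            Q[i + 1, i] += birth
        if i - 1 >= 0:
            Q[i - 1, i] += death
        Q[i, i] -= birth + death     # total outflow (including leakage past the window)
    return Q

lo = 0
p = np.zeros(width); p[0] = 1.0      # start in state 0, window = [0, width-1]
for _ in range(steps):
    p = expm(truncated_generator(lo, lo + width - 1) * dt) @ p
    # slide the window so it is centred on the current mean copy number
    mean = int(round(np.dot(np.arange(lo, lo + width), p)))
    new_lo = max(0, mean - width // 2)
    shifted = np.zeros(width)
    for i, state in enumerate(range(lo, lo + width)):
        j = state - new_lo
        if 0 <= j < width:
            shifted[j] = p[i]
    lo, p = new_lo, shifted

print(f"window = [{lo}, {lo + width - 1}], captured mass = {p.sum():.4f}")
print(f"mean copy number ~ {np.dot(np.arange(lo, lo + width), p):.1f} (stationary mean = {k / gamma})")
```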

  7. Localization of the eigenvalues of linear integral equations with applications to linear ordinary differential equations.

    NASA Technical Reports Server (NTRS)

    Sloss, J. M.; Kranzler, S. K.

    1972-01-01

    The equivalence of a considered integral equation form with an infinite system of linear equations is proved, and the localization of the eigenvalues of the infinite system is expressed. Error estimates are derived, and the problems of finding upper bounds and lower bounds for the eigenvalues are solved simultaneously.

  8. Graviton mass bounds from an analysis of bright star trajectories at the Galactic Center

    NASA Astrophysics Data System (ADS)

    Zakharov, Alexander; Jovanović, Predrag; Borka, Dusko; Jovanović, Vesna Borka

    2017-03-01

    In February 2016 the LIGO & VIRGO collaboration reported the discovery of gravitational waves from merging black holes, thereby confirming GR predictions about the existence of black holes and gravitational waves in the strong gravitational field limit. Moreover, in their papers the joint LIGO & VIRGO team presented an upper limit on the graviton mass, mg < 1.2 × 10^-22 eV (Abbott et al. 2016). The authors thus concluded that their observational data do not show any violation of classical general relativity. We show that an analysis of bright star trajectories can constrain the graviton mass with an accuracy comparable to that reached with gravitational wave interferometers, and the estimate is consistent with the one obtained by the LIGO & VIRGO collaboration. This analysis provides an opportunity to treat observations of bright stars near the Galactic Center as a useful tool to obtain constraints on the fundamental gravity law, such as modifications of the Newtonian gravity law in the weak-field approximation. In that way, based on a potential reconstruction at the Galactic Center, we obtain bounds on the graviton mass.

  9. An extended GS method for dense linear systems

    NASA Astrophysics Data System (ADS)

    Niki, Hiroshi; Kohno, Toshiyuki; Abe, Kuniyoshi

    2009-09-01

Davey and Rosindale [K. Davey, I. Rosindale, An iterative solution scheme for systems of boundary element equations, Internat. J. Numer. Methods Engrg. 37 (1994) 1399-1411] derived the GSOR method, which uses an upper triangular matrix Ω in order to solve dense linear systems. By applying functional analysis, the authors presented an expression for the optimum Ω. Moreover, Davey and Bounds [K. Davey, S. Bounds, A generalized SOR method for dense linear systems of boundary element equations, SIAM J. Comput. 19 (1998) 953-967] also introduced further interesting results. In this note, we employ a matrix analysis approach to investigate these schemes, and derive theorems that compare these schemes with existing preconditioners for dense linear systems. We show that the convergence rate of the Gauss-Seidel method with preconditioner PG is superior to that of the GSOR method. Moreover, we define some splittings associated with the iterative schemes. Some numerical examples are reported to confirm the theoretical analysis. We show that the EGS method with preconditioner produces an extremely small spectral radius in comparison with the other schemes considered.
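
    A minimal numerical sketch of the kind of comparison reported above: the spectral radius of the Gauss-Seidel iteration matrix is computed for a dense test matrix and for the same system after left preconditioning. The diagonally dominant test matrix and the simple preconditioner P = I + S (with S built from the first superdiagonal) are assumptions for illustration only and are not the PG preconditioner or the EGS scheme of the note.

      import numpy as np

      def gs_iteration_matrix(A):
          """Iteration matrix M = (D + L)^{-1} U for Gauss-Seidel on A = D + L + U."""
          D_plus_L = np.tril(A)
          U = -np.triu(A, 1)
          return np.linalg.solve(D_plus_L, U)

      def spectral_radius(M):
          return max(abs(np.linalg.eigvals(M)))

      rng = np.random.default_rng(0)
      n = 50
      # Dense, diagonally dominant test matrix (so Gauss-Seidel converges).
      A = rng.uniform(-1.0, 0.0, (n, n))
      np.fill_diagonal(A, n + 1.0)

      # Illustrative preconditioner P = I + S, with S the negated, scaled superdiagonal.
      S = np.zeros_like(A)
      idx = np.arange(n - 1)
      S[idx, idx + 1] = -A[idx, idx + 1] / A[idx, idx]
      P = np.eye(n) + S

      rho_gs = spectral_radius(gs_iteration_matrix(A))
      rho_pgs = spectral_radius(gs_iteration_matrix(P @ A))
      print(f"rho(GS) = {rho_gs:.4f},  rho(preconditioned GS) = {rho_pgs:.4f}")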

  10. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.

  11. When clusters collide: constraints on antimatter on the largest scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steigman, Gary, E-mail: steigman@mps.ohio-state.edu

    2008-10-15

Observations have ruled out the presence of significant amounts of antimatter in the Universe on scales ranging from the solar system, to the Galaxy, to groups and clusters of galaxies, and even to distances comparable to the scale of the present horizon. Except for the model-dependent constraints on the largest scales, the most significant upper limits to diffuse antimatter in the Universe are those on the ~Mpc scale of clusters of galaxies provided by the EGRET upper bounds to annihilation gamma rays from galaxy clusters whose intracluster gas is revealed through its x-ray emission. On the scale of individual clusters of galaxies the upper bounds to the fraction of mixed matter and antimatter for the 55 clusters from a flux-limited x-ray survey range from 5 × 10⁻⁹ to <1 × 10⁻⁶, strongly suggesting that individual clusters of galaxies are made entirely of matter or of antimatter. X-ray and gamma-ray observations of colliding clusters of galaxies, such as the Bullet Cluster, permit these constraints to be extended to even larger scales. If the observations of the Bullet Cluster, where the upper bound to the antimatter fraction is found to be <3 × 10⁻⁶, can be generalized to other colliding clusters of galaxies, cosmologically significant amounts of antimatter will be excluded on scales of order ~20 Mpc (M ~ 5 × 10¹⁵ M_sun).

  12. Multi-soliton interaction of a generalized Schrödinger-Boussinesq system in a magnetized plasma

    NASA Astrophysics Data System (ADS)

    Zhao, Xue-Hui; Tian, Bo; Chai, Jun; Wu, Xiao-Yu; Guo, Yong-Jiang

    2017-04-01

Under investigation in this paper is a generalized Schrödinger-Boussinesq system, which describes the stationary propagation of coupled upper-hybrid waves and magnetoacoustic waves in a magnetized plasma. Bilinear forms, one-, two- and three-soliton solutions are derived by virtue of the Hirota method and symbolic computation. Propagation and interaction of the solitons are illustrated graphically: the coefficients β₁ and β₂ affect the velocities and propagation directions of the solitary waves. The amplitude, velocity and shape of the one-soliton solution remain invariant during propagation, implying that the energy transport is stable in the upper-hybrid and magnetoacoustic waves, and the amplitude of the upper-hybrid wave is larger than that of the magnetoacoustic wave. For the upper-hybrid and magnetoacoustic waves, head-on, overtaking and bound-state interactions between two solitary waves are depicted asymptotically, indicating that the interaction between the two solitary waves is elastic. Elastic interaction between a bound-state soliton and a single soliton is also displayed, and the interactions among the three solitary waves are all elastic.

  13. On the Coriolis effect in acoustic waveguides.

    PubMed

    Wegert, Henry; Reindl, Leonard M; Ruile, Werner; Mayer, Andreas P

    2012-05-01

Rotation of an elastic medium gives rise to a shift of the frequency of its acoustic modes, i.e., the time-periodic vibrations that exist in it. This frequency shift is investigated by applying perturbation theory in the regime of small ratios of the rotation rate to the frequency of the acoustic mode. In an expansion of the relative frequency shift in powers of this ratio, upper bounds are derived for the first-order and second-order terms. The derivation of the theoretical upper bounds on the first-order term is presented for linear vibration modes as well as for stable nonlinear vibrations with periodic time dependence that can be represented by a Fourier series.

  14. Asymptotics of the evolution semigroup associated with a scalar field in the presence of a non-linear electromagnetic field

    NASA Astrophysics Data System (ADS)

    Albeverio, Sergio; Tamura, Hiroshi

    2018-04-01

We consider a model describing the coupling of a vector-valued and a scalar homogeneous Markovian random field over R⁴, interpreted as expressing the interaction between a charged scalar quantum field coupled with a nonlinear quantized electromagnetic field. Expectations of functionals of the random fields are expressed by Brownian bridges. Using this, together with Feynman-Kac-Itô type formulae and estimates on the small time and large time behaviour of Brownian functionals, we prove asymptotic upper and lower bounds on the kernel of the transition semigroup for our model. The upper bound gives faster than exponential decay for large distances of the corresponding resolvent (propagator).

  15. The upper bounds of reduced axial and shear moduli in cross-ply laminates with matrix cracks

    NASA Technical Reports Server (NTRS)

    Lee, Jong-Won; Allen, D. H.; Harris, C. E.

    1991-01-01

    The present study proposes a mathematical model utilizing the internal state variable concept for predicting the upper bounds of the reduced axial and shear stiffnesses in cross-ply laminates with matrix cracks. The displacement components at the matrix crack surfaces are explicitly expressed in terms of the observable axial and shear strains and the undamaged material properties. The reduced axial and shear stiffnesses are predicted for glass/epoxy and graphite/epoxy laminates. Comparison of the model with other theoretical and experimental studies is also presented to confirm direct applicability of the model to angle-ply laminates with matrix cracks subjected to general in-plane loading.

  16. Low-temperature overpressurization protection system setpoint analysis using RETRAN-02/MOD5 for Salem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dodson, R.J.; Feltus, M.A.

The low-temperature overpressurization protection system (LTOPS) is designed to protect the reactor pressure vessel (RPV) from brittle failure during startup and cooldown maneuvers in Westinghouse pressurized water reactors. For the Salem power plants, the power-operated relief valves (PORVs) mitigate pressure increases above a setpoint where an operational startup transient may put the RPV in the embrittlement fracture zone. The Title 10, Part 50, Code of Federal Regulations Appendix G limit, given by plant technical specifications, conservatively bounds the maximum pressure allowed during those transients where the RPV can suffer brittle fracture (usually below 350°F). The Appendix G limit is a pressure versus temperature curve that is more restrictive at lower RPV temperatures and allows for higher pressures as the temperature approaches the upper bounding fracture temperature.

  17. Geologic and geophysical investigations of the Zuni-Bandera volcanic field, New Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ander, M.E.; Heiken, G.; Eichelberger, J.

    1981-05-01

A positive, northeast-trending gravity anomaly, 90 km long and 30 km wide, extends southwest from the Zuni uplift, New Mexico. The Zuni-Bandera volcanic field, an alignment of 74 basaltic vents, is parallel to the eastern edge of the anomaly. Lavas display a bimodal distribution of tholeiitic and alkalic compositions, and were erupted over a period from 4 Myr to present. A residual gravity profile taken perpendicular to the major axis of the anomaly was analyzed using linear programming and ideal body theory to obtain bounds on the density contrast, depth, and minimum thickness of the gravity body. Two-dimensionality was assumed. The limiting case where the anomalous body reaches the surface gives 0.1 g/cm³ as the greatest lower bound on the maximum density contrast. If 0.4 g/cm³ is taken as the geologically reasonable upper limit on the maximum density contrast, the least upper bound on the depth of burial is 3.5 km and the minimum thickness is 2 km. A shallow mafic intrusion, emplaced sometime before Laramide deformation, is proposed to account for the positive gravity anomaly. Analysis of a magnetotelluric survey suggests that the intrusion is not due to recent basaltic magma associated with the Zuni-Bandera volcanic field. This large basement structure has controlled the development of the volcanic field; vent orientations have changed somewhat through time, but the trend of the volcanic chain followed the edge of the basement structure. It has also exhibited some control on deformation of the sedimentary section.

  18. Plate Motions, Regional Deformation, and Time-Variation of Plate Motions

    NASA Technical Reports Server (NTRS)

    Gordon, R. G.

    1998-01-01

The significant results obtained with support of this grant include the following: (1) using VLBI data in combination with other geodetic, geophysical, and geological data to bound the present rotation of the Colorado Plateau, and to evaluate its implications for the kinematics and seismogenic potential of the western half of the conterminous U.S.; (2) determining realistic estimates of uncertainties for VLBI data and then applying the data and uncertainties to obtain an upper bound on the integral of deformation within the "stable interior" of the North American and other plates and thus to place an upper bound on the seismogenic potential within these regions; (3) combining VLBI data with other geodetic, geophysical, and geologic data to estimate the motion of coastal California in a frame of reference attached to the Sierra Nevada-Great Valley microplate. This analysis has provided new insights into the kinematic boundary conditions that may control or at least strongly influence the locations of asperities that rupture in great earthquakes along the San Andreas transform system; (4) determining a global tectonic model from VLBI geodetic data that combines the estimation of plate angular velocities with individual site linear velocities where tectonically appropriate; and (5) investigation of some of the outstanding problems defined by the work leading to the global plate motion model NUVEL-1. These problems, such as the motion between the Pacific and North American plates and between west Africa and east Africa, are focused on regions where the seismogenic potential may be greater than implied by published plate tectonic models.

  19. Survival analysis of the high energy channel of BATSE

    NASA Astrophysics Data System (ADS)

    Balázs, L. G.; Bagoly, Z.; Horváth, I.; Mészáros, A.

    2004-06-01

We used Kaplan-Meier (KM) survival analysis to study the true distribution of high-energy (F4) fluences on BATSE. The measured values were divided into two classes: (A) if F4 exceeded 3σ of the noise level, we accepted the measured value as a 'true event'; (B) if F4 did not exceed it, we treated 3σ as an upper bound and identified those data as 'censored'. KM analyses were performed separately for short (t90 < 2 s) and long (t90 > 2 s) bursts. Comparison of the calculated probability distribution functions of the two groups indicated roughly an order-of-magnitude difference in the >300 keV part of the released energies.

  20. Constraining the generalized uncertainty principle with the atomic weak-equivalence-principle test

    NASA Astrophysics Data System (ADS)

    Gao, Dongfeng; Wang, Jin; Zhan, Mingsheng

    2017-04-01

Various models of quantum gravity imply the Planck-scale modifications of Heisenberg's uncertainty principle into a so-called generalized uncertainty principle (GUP). The GUP effects on high-energy physics, cosmology, and astrophysics have been extensively studied. Here, we focus on the weak-equivalence-principle (WEP) violation induced by the GUP. Results from the WEP test with the 85Rb-87Rb dual-species atom interferometer are used to set upper bounds on parameters in two GUP proposals. A 10⁴⁵-level bound on the Kempf-Mangano-Mann proposal and a 10²⁷-level bound on Maggiore's proposal, which are consistent with bounds from other experiments, are obtained. All these bounds have huge room for improvement in the future.

  1. PubMed

    Trinker, Horst

    2011-10-28

    We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.

  2. Validation of the SURE Program, phase 1

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1987-01-01

    Presented are the results of the first phase in the validation of the SURE (Semi-Markov Unreliability Range Evaluator) program. The SURE program gives lower and upper bounds on the death-state probabilities of a semi-Markov model. With these bounds, the reliability of a semi-Markov model of a fault-tolerant computer system can be analyzed. For the first phase in the validation, fifteen semi-Markov models were solved analytically for the exact death-state probabilities and these solutions compared to the corresponding bounds given by SURE. In every case, the SURE bounds covered the exact solution. The bounds, however, had a tendency to separate in cases where the recovery rate was slow or the fault arrival rate was fast.

  3. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upper-bounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upper-bounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.

  4. Sedimentation of Jurassic fan-delta wedges in the Xiahuayuan basin reflecting thrust-fault movements of the western Yanshan fold-and-thrust belt, China

    NASA Astrophysics Data System (ADS)

    Lin, Chengfa; Liu, Shaofeng; Zhuang, Qitian; Steel, Ronald J.

    2018-06-01

Mesozoic thrusting within the Yanshan fold-and-thrust belt of North China resulted in a series of fault-bounded intramontane basins whose infill and evolution remain poorly understood. In particular, the bounding faults and adjacent sediment accumulations along the western segments of the belt are almost unstudied. A sedimentological and provenance analysis of the Lower Jurassic Xiahuayuan Formation and the Upper Jurassic Jiulongshan Formation reveals two distinctive clastic wedges: an early Jurassic wedge representing a mass-flow-dominated, Gilbert-type fan delta with a classic tripartite architecture, and a late Jurassic shoal-water fan delta without steeply inclined strata. The basinward migration of the fan-delta wedges, together with the analysis of their conglomerate clast compositions, paleocurrent data and detrital zircon U-Pb age spectra, strongly suggests that the northern-bounding Xuanhua thrust fault controlled their growth during accumulation of the Jiulongshan Formation. Previous studies have suggested that the fan-delta wedge of the Xiahuayuan Formation was also syntectonic, related to movement on the Xuanhua thrust fault. Two stages of thrusting therefore exerted an influence on the formation and evolution of the Xiahuayuan basin during the early-late Jurassic.

  5. Scalable L-infinite coding of meshes.

    PubMed

    Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter

    2010-01-01

    The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, it enables a fast real-time implementation of the rate allocation, and it preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
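
    A small sketch of the guarantee discussed above, assuming the original and decoded meshes are available as matching arrays of vertex positions; the random vertices, the noise level, and the tolerance are made-up values, and the check stands in for, rather than reproduces, the MESHGRID codec.

      import numpy as np

      def l_infinite_distortion(original, decoded):
          """Maximum per-vertex position error (L-infinity sense) between two meshes."""
          original = np.asarray(original, dtype=float)
          decoded = np.asarray(decoded, dtype=float)
          # Per-vertex Euclidean error, then the worst case over all vertices.
          return float(np.max(np.linalg.norm(original - decoded, axis=1)))

      # Toy example: 1000 random vertices and a "decoded" copy with bounded noise.
      rng = np.random.default_rng(1)
      verts = rng.uniform(-1.0, 1.0, (1000, 3))
      decoded = verts + rng.uniform(-1e-3, 1e-3, verts.shape)

      bound = 2e-3                                  # illustrative L-infinity target
      d = l_infinite_distortion(verts, decoded)
      print(f"L-infinity distortion = {d:.2e}, within bound: {d <= bound}")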

  6. Effects of general relativity on glitch amplitudes and pulsar mass upper bounds

    NASA Astrophysics Data System (ADS)

    Antonelli, M.; Montoli, A.; Pizzochero, P. M.

    2018-04-01

    Pinning of vortex lines in the inner crust of a spinning neutron star may be the mechanism that enhances the differential rotation of the internal neutron superfluid, making it possible to freeze some amount of angular momentum which eventually can be released, thus causing a pulsar glitch. We investigate the general relativistic corrections to pulsar glitch amplitudes in the slow-rotation approximation, consistently with the stratified structure of the star. We thus provide a relativistic generalization of a previous Newtonian model that was recently used to estimate upper bounds on the masses of glitching pulsars. We find that the effect of general relativity on the glitch amplitudes obtained by emptying the whole angular momentum reservoir is less than 30 per cent. Moreover, we show that the Newtonian upper bounds on the masses of large glitchers obtained from observations of their maximum recorded event differ by less than a few percent from those calculated within the relativistic framework. This work can also serve as a basis to construct more sophisticated models of angular momentum reservoir in a relativistic context: in particular, we present two alternative scenarios for macroscopically rigid and slack pinned vortex lines, and we generalize the Feynman-Onsager relation to the case when both entrainment coupling between the fluids and a strong axisymmetric gravitational field are present.

  7. Upper bounds of deformation in the Upper Rhine Graben from GPS data - First results from GURN (GNSS Upper Rhine Graben Network)

    NASA Astrophysics Data System (ADS)

    Masson, Frederic; Knoepfler, Andreas; Mayer, Michael; Ulrich, Patrice; Heck, Bernhard

    2010-05-01

In September 2008, the Institut de Physique du Globe de Strasbourg (Ecole et Observatoire des Sciences de la Terre, EOST) and the Geodetic Institute (GIK) of Karlsruhe University (TH) established a transnational cooperation called GURN (GNSS Upper Rhine Graben Network). Within the GURN initiative these institutions are cooperating in order to establish a highly precise and highly sensitive network of permanently operating GNSS sites for the detection of crustal movements in the Upper Rhine Graben region. At the beginning, the network consisted of the permanently operating GNSS sites of SAPOS®-Baden-Württemberg, different data providers in France (e.g. EOST, Teria, RGP) and some further sites (e.g. IGS). In July 2009, the network was extended to the south when swisstopo (Switzerland) joined GURN and to the north when SAPOS®-Rheinland-Pfalz joined. The GNSS network therefore currently consists of approximately 80 permanently operating reference sites. The presentation will discuss the current status of GURN and its main research goals, and will present first results concerning data quality as well as time series from a first reprocessing of all available data since 2002 using GAMIT/GLOBK (EOST working group) and the Bernese GPS Software (GIK working group). Based on these time series, velocity and strain fields will be calculated in the future. The GURN initiative also aims at estimating the upper bounds of deformation in the Upper Rhine Graben region.

  8. A Reduced Basis Method with Exact-Solution Certificates for Symmetric Coercive Equations

    DTIC Science & Technology

    2013-11-06

the energy associated with the infinite-dimensional weak solution of parametrized symmetric coercive partial differential equations with piecewise... builds bounds with respect to the infinite-dimensional weak solution, aims to entirely remove the issue of the "truth" within the certified reduced basis... framework. We in particular introduce a reduced basis method that provides rigorous upper and lower bounds

  9. Curvature Continuous and Bounded Path Planning for Fixed-Wing UAVs

    PubMed Central

    Jiang, Peng; Li, Deshi; Sun, Tao

    2017-01-01

    Unmanned Aerial Vehicles (UAVs) play an important role in applications such as data collection and target reconnaissance. An accurate and optimal path can effectively increase the mission success rate in the case of small UAVs. Although path planning for UAVs is similar to that for traditional mobile robots, the special kinematic characteristics of UAVs (such as their minimum turning radius) have not been taken into account in previous studies. In this paper, we propose a locally-adjustable, continuous-curvature, bounded path-planning algorithm for fixed-wing UAVs. To deal with the curvature discontinuity problem, an optimal interpolation algorithm and a key-point shift algorithm are proposed based on the derivation of a curvature continuity condition. To meet the upper bound for curvature and to render the curvature extrema controllable, a local replanning scheme is designed by combining arcs and Bezier curves with monotonic curvature. In particular, a path transition mechanism is built for the replanning phase using minimum curvature circles for a planning philosophy. Numerical results demonstrate that the analytical planning algorithm can effectively generate continuous-curvature paths, while satisfying the curvature upper bound constraint and allowing UAVs to pass through all predefined waypoints in the desired mission region. PMID:28925960
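
    The sketch below illustrates the curvature check that such a planner has to satisfy on a single cubic Bezier segment; the control points and the curvature ceiling derived from a hypothetical minimum turning radius are invented values, and this is not the replanning algorithm of the paper.

      import numpy as np

      def cubic_bezier_curvature(P, t):
          """Curvature magnitude of a planar cubic Bezier with control points P (4x2) at parameter t."""
          P0, P1, P2, P3 = P
          d1 = 3*(1-t)**2*(P1-P0) + 6*(1-t)*t*(P2-P1) + 3*t**2*(P3-P2)   # first derivative
          d2 = 6*(1-t)*(P2 - 2*P1 + P0) + 6*t*(P3 - 2*P2 + P1)           # second derivative
          cross = d1[0]*d2[1] - d1[1]*d2[0]
          return abs(cross) / (np.hypot(d1[0], d1[1])**3 + 1e-12)

      # Illustrative waypoint geometry and turning-radius limit (assumed values).
      ctrl = np.array([[0.0, 0.0], [40.0, 0.0], [60.0, 30.0], [100.0, 30.0]])
      r_min = 25.0                     # hypothetical minimum turning radius [m]
      kappa_max_allowed = 1.0 / r_min

      ts = np.linspace(0.0, 1.0, 201)
      kappa_max = max(cubic_bezier_curvature(ctrl, t) for t in ts)
      print(f"max curvature = {kappa_max:.4f} 1/m, bound = {kappa_max_allowed:.4f} 1/m, "
            f"feasible: {kappa_max <= kappa_max_allowed}")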

  10. Paramagnetic or diamagnetic persistent currents? A topological point of view

    NASA Astrophysics Data System (ADS)

    Waintal, Xavier

    2009-03-01

A persistent current flows at low temperatures in small conducting rings when they are threaded by a magnetic flux. I will discuss the sign of this persistent current (diamagnetic or paramagnetic response) in the special case of N electrons in a one-dimensional ring [1]. One dimension is very special in the sense that the sign of the persistent current is entirely controlled by the topology of the system. I will establish lower bounds for the free energy in the presence of arbitrary electron-electron interactions and external potentials. Those bounds are the counterparts of upper bounds derived by Leggett using another topological argument. Rings with odd (even) numbers of polarized electrons are always diamagnetic (paramagnetic). The situation is more interesting with unpolarized electrons, where Leggett's upper bound breaks down: rings with N=4n exhibit either paramagnetic behavior or a superconductor-like current-phase relation. The topological argument provides a rigorous justification for the phenomenological Hückel rule, which states that cyclic molecules with 4n + 2 electrons like benzene are aromatic while those with 4n electrons are not. [1] Xavier Waintal, Geneviève Fleury, Kyryl Kazymyrenko, Manuel Houzet, Peter Schmitteckert, and Dietmar Weinmann, Phys. Rev. Lett. 101, 106804 (2008).

  11. Curvature Continuous and Bounded Path Planning for Fixed-Wing UAVs.

    PubMed

    Wang, Xiaoliang; Jiang, Peng; Li, Deshi; Sun, Tao

    2017-09-19

    Unmanned Aerial Vehicles (UAVs) play an important role in applications such as data collection and target reconnaissance. An accurate and optimal path can effectively increase the mission success rate in the case of small UAVs. Although path planning for UAVs is similar to that for traditional mobile robots, the special kinematic characteristics of UAVs (such as their minimum turning radius) have not been taken into account in previous studies. In this paper, we propose a locally-adjustable, continuous-curvature, bounded path-planning algorithm for fixed-wing UAVs. To deal with the curvature discontinuity problem, an optimal interpolation algorithm and a key-point shift algorithm are proposed based on the derivation of a curvature continuity condition. To meet the upper bound for curvature and to render the curvature extrema controllable, a local replanning scheme is designed by combining arcs and Bezier curves with monotonic curvature. In particular, a path transition mechanism is built for the replanning phase using minimum curvature circles for a planning philosophy. Numerical results demonstrate that the analytical planning algorithm can effectively generate continuous-curvature paths, while satisfying the curvature upper bound constraint and allowing UAVs to pass through all predefined waypoints in the desired mission region.

  12. Bounds on area and charge for marginally trapped surfaces with a cosmological constant

    NASA Astrophysics Data System (ADS)

    Simon, Walter

    2012-03-01

We sharpen the known inequalities AΛ ⩽ 4π(1 - g) (Hayward et al 1994 Phys. Rev. D 49 5080, Woolgar 1999 Class. Quantum Grav. 16 3005) and A ⩾ 4πQ² (Dain et al 2012 Class. Quantum Grav. 29 035013) between the area A and the electric charge Q of a stable marginally outer-trapped surface (MOTS) of genus g in the presence of a cosmological constant Λ. In particular, instead of requiring stability we include the principal eigenvalue λ of the stability operator. For Λ* = Λ + λ > 0, we obtain a lower and an upper bound for Λ*A in terms of Λ*Q², as well as the upper bound Q ⩽ 1/(2√Λ*) for the charge, which reduces to Q ⩽ 1/(2√Λ) in the stable case λ ⩾ 0. For Λ* < 0, there only remains a lower bound on A. In the spherically symmetric, static, stable case, one of our area inequalities is saturated iff the surface gravity vanishes. We also discuss implications of our inequalities for ‘jumps’ and mergers of charged MOTS.

  13. Perturbative unitarity constraints on the NMSSM Higgs Sector

    DOE PAGES

    Betre, Kassahun; El Hedri, Sonia; Walker, Devin G. E.

    2017-11-11

We place perturbative unitarity constraints on both the dimensionful and dimensionless parameters in the Next-to-Minimal Supersymmetric Standard Model (NMSSM) Higgs Sector. These constraints, plus the requirement that the singlino and/or Higgsino constitutes at least part of the observed dark matter relic abundance, generate upper bounds on the Higgs, neutralino and chargino mass spectrum. Requiring higher-order corrections to be no more than 41% of the tree-level value, we obtain an upper bound of 20 TeV for the heavy Higgses and 12 TeV for the charginos and neutralinos outside defined fine-tuned regions. If the corrections are no more than 20% of the tree-level value, the bounds are 7 TeV for the heavy Higgses and 5 TeV for the charginos and neutralinos. Finally, by using the NMSSM as a template, we describe a method which replaces naturalness arguments with more rigorous perturbative unitarity arguments to get a better understanding of when new physics will appear.

  14. An upper bound on the particle-laden dependency of shear stresses at solid-fluid interfaces

    NASA Astrophysics Data System (ADS)

    Zohdi, T. I.

    2018-03-01

    In modern advanced manufacturing processes, such as three-dimensional printing of electronics, fine-scale particles are added to a base fluid yielding a modified fluid. For example, in three-dimensional printing, particle-functionalized inks are created by adding particles to freely flowing solvents forming a mixture, which is then deposited onto a surface, which upon curing yields desirable solid properties, such as thermal conductivity, electrical permittivity and magnetic permeability. However, wear at solid-fluid interfaces within the machinery walls that deliver such particle-laden fluids is typically attributed to the fluid-induced shear stresses, which increase with the volume fraction of added particles. The objective of this work is to develop a rigorous strict upper bound for the tolerable volume fraction of particles that can be added, while remaining below a given stress threshold at a fluid-solid interface. To illustrate the bound's utility, the expression is applied to a series of classical flow regimes.

  15. Perturbative unitarity constraints on the NMSSM Higgs Sector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Betre, Kassahun; El Hedri, Sonia; Walker, Devin G. E.

We place perturbative unitarity constraints on both the dimensionful and dimensionless parameters in the Next-to-Minimal Supersymmetric Standard Model (NMSSM) Higgs Sector. These constraints, plus the requirement that the singlino and/or Higgsino constitutes at least part of the observed dark matter relic abundance, generate upper bounds on the Higgs, neutralino and chargino mass spectrum. Requiring higher-order corrections to be no more than 41% of the tree-level value, we obtain an upper bound of 20 TeV for the heavy Higgses and 12 TeV for the charginos and neutralinos outside defined fine-tuned regions. If the corrections are no more than 20% of the tree-level value, the bounds are 7 TeV for the heavy Higgses and 5 TeV for the charginos and neutralinos. Finally, by using the NMSSM as a template, we describe a method which replaces naturalness arguments with more rigorous perturbative unitarity arguments to get a better understanding of when new physics will appear.

  16. On the sparseness of 1-norm support vector machines.

    PubMed

    Zhang, Li; Zhou, Weida

    2010-04-01

There is some empirical evidence available showing that 1-norm Support Vector Machines (1-norm SVMs) have good sparseness; however, neither how much sparseness 1-norm SVMs can achieve nor whether they have a sparser representation than standard SVMs is clear. In this paper we investigate the sparseness of 1-norm SVMs. Two upper bounds on the number of nonzero coefficients in the decision function of 1-norm SVMs are presented. First, the number of nonzero coefficients in 1-norm SVMs is at most equal to the number of exact support vectors lying on the +1 and -1 discriminating surfaces, while that in standard SVMs is equal to the number of support vectors, which implies that 1-norm SVMs have better sparseness than standard SVMs. Second, the number of nonzero coefficients is at most equal to the rank of the sample matrix. A brief review of the geometry of linear programming and the primal steepest edge pricing simplex method is given, which allows us to provide the proof of the two upper bounds and evaluate their tightness by experiments. Experimental results on toy data sets and the UCI data sets illustrate our analysis. Copyright 2009 Elsevier Ltd. All rights reserved.
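
    To make the sparseness bounds concrete, the following sketch solves a 1-norm SVM as a linear program on toy data and counts the nonzero weights against the rank of the sample matrix. The data set, the regularization constant C, and the zero threshold are arbitrary assumptions, not the experimental setup of the paper.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(0)

      # Toy 2-class data in d dimensions; only the first two features are informative.
      n, d, C = 80, 10, 1.0
      X = rng.normal(size=(n, d))
      y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)

      # Variables: [u (d), v (d), b, xi (n)], with w = u - v and u, v, xi >= 0, b free.
      c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])
      A_ub = np.hstack([
          -y[:, None] * X,          # -y_i x_i  (coefficients of u)
          y[:, None] * X,           # +y_i x_i  (coefficients of v)
          -y[:, None],              # -y_i      (coefficient of b)
          -np.eye(n),               # -xi_i
      ])
      b_ub = -np.ones(n)            # encodes y_i (w.x_i + b) + xi_i >= 1
      bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
      w = res.x[:d] - res.x[d:2 * d]
      print("nonzero weights:", int(np.sum(np.abs(w) > 1e-8)),
            "  rank of sample matrix:", np.linalg.matrix_rank(X))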

  17. Quantum Dynamical Applications of Salem's Theorem

    NASA Astrophysics Data System (ADS)

    Damanik, David; Del Rio, Rafael

    2009-07-01

    We consider the survival probability of a state that evolves according to the Schrödinger dynamics generated by a self-adjoint operator H. We deduce from a classical result of Salem that upper bounds for the Hausdorff dimension of a set supporting the spectral measure associated with the initial state imply lower bounds on a subsequence of time scales for the survival probability. This general phenomenon is illustrated with applications to the Fibonacci operator and the critical almost Mathieu operator. In particular, this gives the first quantitative dynamical bound for the critical almost Mathieu operator.

  18. Volumes and intrinsic diameters of hypersurfaces

    NASA Astrophysics Data System (ADS)

    Paeng, Seong-Hun

    2015-09-01

    We estimate the volume and the intrinsic diameter of a hypersurface M with geometric information of a hypersurface which is parallel to M at distance T. It can be applied to the Riemannian Penrose inequality to obtain a lower bound of the total mass of a spacetime. Also it can be used to obtain upper bounds of the volume and the intrinsic diameter of the celestial r-sphere without a lower bound of the sectional curvature. We extend our results to metric-measure spaces by using the Bakry-Emery Ricci tensor.

  19. A prevalence-based approach to societal costs occurring in consequence of child abuse and neglect

    PubMed Central

    2012-01-01

Background Traumatization in childhood can result in lifelong health impairment and may have a negative impact on other areas of life such as education, social contacts and employment as well. Despite the frequent occurrence of traumatization, which is reflected in a 14.5 percent prevalence rate of severe child abuse and neglect, the economic burden of the consequences is hardly known. The objective of this prevalence-based cost-of-illness study is to show how impairment of the individual is reflected in economic trauma follow-up costs borne by society as a whole in Germany and to compare the results with other countries’ costs. Methods From a societal perspective trauma follow-up costs were estimated using a bottom-up approach. The literature-based prevalence rate includes emotional, physical and sexual abuse as well as physical and emotional neglect in Germany. Costs are derived from individual case scenarios of child endangerment presented in a German cost-benefit analysis. A comparison with trauma follow-up costs in Australia, Canada and the USA is based on purchasing power parity. Results The annual trauma follow-up costs range from EUR 11.1 billion (lower bound) to EUR 29.8 billion (upper bound). This equals EUR 134.84 and EUR 363.58, respectively, per capita for the German population. These results conform to the ones obtained from cost studies conducted in Australia (lower bound) and Canada (upper bound), whereas the result for the United States is much lower. Conclusion Child abuse and neglect result in trauma follow-up costs of economically relevant magnitude for German society. Although the result is well in line with other countries’ costs, the general lack of data should be remedied in order to enable more detailed future studies. Creating a reliable cost data basis in the first place can pave the way for long-term cost savings. PMID:23158382
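
    As a quick arithmetic check of the per-capita figures quoted above, dividing the total cost bounds by a German reference population of roughly 82 million (an assumed value, not stated in the abstract) approximately reproduces the reported per-capita amounts.

      # Assumed German reference population (illustrative, not taken from the study).
      population = 82.2e6
      for label, total in [("lower bound", 11.1e9), ("upper bound", 29.8e9)]:
          print(f"{label}: EUR {total / population:.2f} per capita")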

  20. A prevalence-based approach to societal costs occurring in consequence of child abuse and neglect.

    PubMed

    Habetha, Susanne; Bleich, Sabrina; Weidenhammer, Jörg; Fegert, Jörg M

    2012-11-16

Traumatization in childhood can result in lifelong health impairment and may have a negative impact on other areas of life such as education, social contacts and employment as well. Despite the frequent occurrence of traumatization, which is reflected in a 14.5 percent prevalence rate of severe child abuse and neglect, the economic burden of the consequences is hardly known. The objective of this prevalence-based cost-of-illness study is to show how impairment of the individual is reflected in economic trauma follow-up costs borne by society as a whole in Germany and to compare the results with other countries' costs. From a societal perspective trauma follow-up costs were estimated using a bottom-up approach. The literature-based prevalence rate includes emotional, physical and sexual abuse as well as physical and emotional neglect in Germany. Costs are derived from individual case scenarios of child endangerment presented in a German cost-benefit analysis. A comparison with trauma follow-up costs in Australia, Canada and the USA is based on purchasing power parity. The annual trauma follow-up costs range from EUR 11.1 billion (lower bound) to EUR 29.8 billion (upper bound). This equals EUR 134.84 and EUR 363.58, respectively, per capita for the German population. These results conform to the ones obtained from cost studies conducted in Australia (lower bound) and Canada (upper bound), whereas the result for the United States is much lower. Child abuse and neglect result in trauma follow-up costs of economically relevant magnitude for German society. Although the result is well in line with other countries' costs, the general lack of data should be remedied in order to enable more detailed future studies. Creating a reliable cost data basis in the first place can pave the way for long-term cost savings.

  1. Ares I-X Upper Stage Simulator Structural Analyses Supporting the NESC Critical Initial Flaw Size Assessment

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.

    2008-01-01

The structural analyses described in the present report were performed in support of the NASA Engineering and Safety Center (NESC) Critical Initial Flaw Size (CIFS) assessment for the Ares I-X Upper Stage Simulator (USS) common shell segment. The structural analysis effort for the NESC assessment had three thrusts: shell buckling analyses; detailed stress analyses of the single-bolt joint test; and stress analyses of two-segment 10-degree-wedge models for the peak axial tensile running load. Elasto-plastic, large-deformation simulations were performed. Stress analysis results indicated that the stress levels were well below the material yield stress for the bounding axial tensile design load. This report also summarizes the analyses and results from parametric studies on modeling the shell-to-gusset weld, flange-surface mismatch, bolt preload, and washer-bearing-surface modeling. These analysis models were used to generate the stress levels specified for the fatigue crack growth assessment using the design load with a factor of safety.

  2. Quantum State Tomography via Linear Regression Estimation

    PubMed Central

    Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan

    2013-01-01

A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d⁴), where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
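
    A minimal single-qubit sketch of the linear-regression idea, assuming spin measurements along six fixed directions with simulated shot noise; the true Bloch vector, the shot count, and the plain least-squares step are illustrative choices and do not reproduce the paper's measurement bases or error analysis.

      import numpy as np

      rng = np.random.default_rng(0)

      # True qubit state in Bloch form: rho = (I + r . sigma) / 2.
      r_true = np.array([0.3, -0.5, 0.6])
      shots = 2000

      # Overdetermined measurement set: spin measurements along six directions n_i,
      # each giving a "+1" outcome with probability (1 + n_i . r) / 2.
      dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                       [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
      dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

      p_exact = 0.5 + 0.5 * dirs @ r_true
      p_hat = rng.binomial(shots, p_exact) / shots        # simulated frequencies

      # Linear regression estimate of the Bloch vector (least squares).
      r_est, *_ = np.linalg.lstsq(0.5 * dirs, p_hat - 0.5, rcond=None)

      sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
               np.array([[0, -1j], [1j, 0]]),
               np.array([[1, 0], [0, -1]], dtype=complex)]
      rho_est = 0.5 * (np.eye(2) + sum(r_est[k] * sigma[k] for k in range(3)))
      print("estimated Bloch vector:", np.round(r_est, 3))
      print("estimated density matrix:\n", np.round(rho_est, 3))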

  3. The digital computer as a metaphor for the perfect laboratory experiment: Loophole-free Bell experiments

    NASA Astrophysics Data System (ADS)

    De Raedt, Hans; Michielsen, Kristel; Hess, Karl

    2016-12-01

    Using Einstein-Podolsky-Rosen-Bohm experiments as an example, we demonstrate that the combination of a digital computer and algorithms, as a metaphor for a perfect laboratory experiment, provides solutions to problems of the foundations of physics. Employing discrete-event simulation, we present a counterexample to John Bell's remarkable "proof" that any theory of physics, which is both Einstein-local and "realistic" (counterfactually definite), results in a strong upper bound to the correlations that are being measured in Einstein-Podolsky-Rosen-Bohm experiments. Our counterexample, which is free of the so-called detection-, coincidence-, memory-, and contextuality loophole, violates this upper bound and fully agrees with the predictions of quantum theory for Einstein-Podolsky-Rosen-Bohm experiments.

  4. Event-based recursive filtering for a class of nonlinear stochastic parameter systems over fading channels

    NASA Astrophysics Data System (ADS)

    Shen, Yuxuan; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.

    2018-07-01

    In this paper, the recursive filtering problem is studied for a class of time-varying nonlinear systems with stochastic parameter matrices. The measurement transmission between the sensor and the filter is conducted through a fading channel characterized by the Rice fading model. An event-based transmission mechanism is adopted to decide whether the sensor measurement should be transmitted to the filter. A recursive filter is designed such that, in the simultaneous presence of the stochastic parameter matrices and fading channels, the filtering error covariance is guaranteed to have an upper bound and such an upper bound is then minimized by appropriately choosing filter gain matrix. Finally, a simulation example is presented to demonstrate the effectiveness of the proposed filtering scheme.
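
    A bare-bones sketch of the event-based transmission idea for a scalar time-varying filter: a measurement is sent only when the innovation exceeds a threshold, and the error covariance is crudely inflated otherwise. The system, the send-on-delta trigger, and the inflation term are assumptions for illustration and do not model the Rice fading channel or the bound-minimizing gain of the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      # Scalar time-varying system: x_{k+1} = a_k x_k + w_k,  y_k = x_k + v_k.
      T, q, r, delta = 50, 0.05, 0.2, 0.6          # horizon, noise variances, trigger level
      x, x_hat, P = 1.0, 0.0, 1.0
      sent = 0

      for k in range(T):
          a_k = 0.95 + 0.04 * np.sin(0.2 * k)      # time-varying dynamics
          # True system and measurement.
          x = a_k * x + rng.normal(scale=np.sqrt(q))
          y = x + rng.normal(scale=np.sqrt(r))

          # Filter prediction.
          x_hat = a_k * x_hat
          P = a_k * P * a_k + q

          # Event-based transmission: send only if the innovation is large enough.
          if abs(y - x_hat) > delta:
              K = P / (P + r)                      # standard gain when data arrive
              x_hat += K * (y - x_hat)
              P = (1 - K) * P
              sent += 1
          else:
              P += delta**2                        # crude inflation covering the unsent case

      print(f"transmissions: {sent}/{T}, final estimate {x_hat:.3f} vs truth {x:.3f}, P = {P:.3f}")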

  5. A one-dimensional model of solid-earth electrical resistivity beneath Florida

    USGS Publications Warehouse

    Blum, Cletus; Love, Jeffrey J.; Pedrie, Kolby; Bedrosian, Paul A.; Rigler, E. Joshua

    2015-11-19

An estimated one-dimensional layered model of electrical resistivity beneath Florida was developed from published geological and geophysical information. The resistivity of each layer is represented by plausible upper and lower bounds as well as a geometric mean resistivity. Corresponding impedance transfer functions, Schmucker-Weidelt transfer functions, apparent resistivity, and phase responses are calculated for inducing geomagnetic frequencies ranging from 10⁻⁵ to 10⁰ hertz. The resulting one-dimensional model and response functions can be used to make general estimates of time-varying electric fields associated with geomagnetic storms such as might represent induction hazards for electric-power grid operation. The plausible upper- and lower-bound resistivity structures show the uncertainty, giving a wide range of plausible time-varying electric fields.
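
    The response functions mentioned above can be illustrated with the standard one-dimensional layered-earth impedance recursion sketched below; the three-layer resistivity model, layer thicknesses, and frequency list are made-up values, not the Florida model of the report.

      import numpy as np

      MU0 = 4e-7 * np.pi

      def impedance_1d(frequencies_hz, resistivities, thicknesses):
          """Surface impedance of a 1-D layered half-space (bottom layer infinite)."""
          Z = np.empty(len(frequencies_hz), dtype=complex)
          for i, f in enumerate(frequencies_hz):
              omega = 2 * np.pi * f
              # Start from the basal half-space and recurse upward through the layers.
              z = np.sqrt(1j * omega * MU0 * resistivities[-1])
              for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
                  k = np.sqrt(1j * omega * MU0 / rho)
                  z0 = np.sqrt(1j * omega * MU0 * rho)
                  t = np.tanh(k * h)
                  z = z0 * (z + z0 * t) / (z0 + z * t)
              Z[i] = z
          return Z

      # Illustrative three-layer model (ohm-m) over a resistive basement.
      rho_layers = [100.0, 10.0, 1000.0]      # resistivities, top to bottom
      h_layers = [2000.0, 5000.0]             # thicknesses of the two upper layers [m]
      freqs = np.logspace(-4, 0, 5)           # inducing frequencies [Hz]

      Z = impedance_1d(freqs, rho_layers, h_layers)
      rho_app = np.abs(Z) ** 2 / (2 * np.pi * freqs * MU0)
      phase = np.degrees(np.angle(Z))
      for f, ra, ph in zip(freqs, rho_app, phase):
          print(f"f = {f:8.1e} Hz   apparent resistivity = {ra:8.1f} ohm-m   phase = {ph:5.1f} deg")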

  6. Rebuttal to "On the distribution of the modulus of Gabor wavelet coefficients and the upper bound of the dimensionless smoothness index in the case of additive Gaussian noises: Revisited" by Dong Wang, Qiang Zhou, and Kwok-Leung Tsui

    NASA Astrophysics Data System (ADS)

    Soltani Bozchalooi, Iman; Liang, Ming

    2018-04-01

    A discussion paper entitled "On the distribution of the modulus of Gabor wavelet coefficients and the upper bound of the dimensionless smoothness index in the case of additive Gaussian noises: revisited" by Dong Wang, Qiang Zhou, Kwok-Leung Tsui has been brought to our attention recently. This discussion paper (hereafter called Wang et al. paper) is based on arguments that are fundamentally incorrect and which we rebut within this commentary. However, as the flaws in the arguments proposed by Wang et al. are clear, we will keep this rebuttal as brief as possible.

  7. A New Finite-Time Observer for Nonlinear Systems: Applications to Synchronization of Lorenz-Like Systems.

    PubMed

    Aguilar-López, Ricardo; Mata-Machuca, Juan L

    2016-01-01

This paper proposes a synchronization methodology for two chaotic oscillators under the framework of identical synchronization and master-slave configuration. The proposed methodology is based on state observer design within the framework of control theory; the observer structure provides finite-time synchronization convergence by cancelling the upper bounds of the main nonlinearities of the chaotic oscillator. This is shown via an analysis of the dynamics of the so-called synchronization error. Numerical experiments corroborate the satisfactory results of the proposed scheme.

  8. A New Finite-Time Observer for Nonlinear Systems: Applications to Synchronization of Lorenz-Like Systems

    PubMed Central

    Aguilar-López, Ricardo

    2016-01-01

This paper proposes a synchronization methodology for two chaotic oscillators under the framework of identical synchronization and master-slave configuration. The proposed methodology is based on state observer design within the framework of control theory; the observer structure provides finite-time synchronization convergence by cancelling the upper bounds of the main nonlinearities of the chaotic oscillator. This is shown via an analysis of the dynamics of the so-called synchronization error. Numerical experiments corroborate the satisfactory results of the proposed scheme. PMID:27738651

  9. Finite-time robust stabilization of uncertain delayed neural networks with discontinuous activations via delayed feedback control.

    PubMed

    Wang, Leimin; Shen, Yi; Sheng, Yin

    2016-04-01

    This paper is concerned with the finite-time robust stabilization of delayed neural networks (DNNs) in the presence of discontinuous activations and parameter uncertainties. By using the nonsmooth analysis and control theory, a delayed controller is designed to realize the finite-time robust stabilization of DNNs with discontinuous activations and parameter uncertainties, and the upper bound of the settling time functional for stabilization is estimated. Finally, two examples are provided to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Two Upper Bounds for the Weighted Path Length of Binary Trees. Report No. UIUCDCS-R-73-565.

    ERIC Educational Resources Information Center

    Pradels, Jean Louis

    Rooted binary trees with weighted nodes are structures encountered in many areas, such as coding theory, searching and sorting, information storage and retrieval. The path length is a meaningful quantity which gives indications about the expected time of a search or the length of a code, for example. In this paper, two sharp bounds for the total…

  11. The Mystery of Io's Warm Polar Regions: Implications for Heat Flow

    NASA Technical Reports Server (NTRS)

    Matson, D. L.; Veeder, G. J.; Johnson, T. V.; Blaney, D. L.; Davies, A. G.

    2002-01-01

Unexpectedly warm polar temperatures further support the idea that Io is covered virtually everywhere by cooling lava flows. This implies a new heat flow component. Io's heat flow remains constrained between a lower bound of ~2.5 W m⁻² and an upper bound of ~13 W m⁻². Additional information is contained in the original extended abstract.

  12. Risk aversion and risk seeking in multicriteria forest management: a Markov decision process approach

    Treesearch

    Joseph Buongiorno; Mo Zhou; Craig Johnston

    2017-01-01

    Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value.  The other method used the certainty...

  13. Low energy theorems and the unitarity bounds in the extra U(1) superstring inspired E{sub 6} models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, N.K.; Saxena, Pranav; Nagawat, Ashok K.

    2005-11-01

The conventional method using low energy theorems derived by Chanowitz et al. [Phys. Rev. Lett. 57, 2344 (1986)] does not seem to lead to an explicit unitarity limit in the scattering processes of longitudinally polarized gauge bosons for the high energy case in the extra U(1) superstring inspired models, commonly known as the η model, emanating from the E₆ group of superstring theory. We have made use of an alternative procedure given by Durand and Lopez [Phys. Lett. B 217, 463 (1989)], which is applicable to supersymmetric grand unified theories. Explicit unitarity bounds on the superpotential couplings (identified as Yukawa couplings) are obtained both from unitarity constraints and from a renormalization group equation (RGE) analysis at one-loop level utilizing critical-coupling concepts implying divergence of the scalar coupling at M_G. These are found to be consistent with finiteness over the entire range M_Z ≤ √s ≤ M_G, i.e. from the grand unification scale to the weak scale. For completeness, a similar approach has been applied to the other models, i.e., the χ, ψ, and ν models emanating from E₆, and it has been noticed that at the weak scale the unitarity bounds on Yukawa couplings do not differ significantly among the E₆ extra U(1) models, except for the case of the χ model in the 16 representation. For the case of the E₆-η model (β_E ≅ 9.64), the analysis using the unitarity constraints leads to the following bounds on various parameters: λ_t(max)(M_Z) = 1.294, λ_b(max)(M_Z) = 1.278, λ_H(max)(M_Z) = 0.955, λ_D(max)(M_Z) = 1.312. The analytical analysis of the RGE at the one-loop level provides the following critical bounds on the superpotential couplings: λ_t,c(M_Z) ≅ 1.295, λ_b,c(M_Z) ≅ 1.279, λ_H,c(M_Z) ≅ 0.968, λ_D,c(M_Z) ≅ 1.315. Thus the superpotential coupling values obtained by both approaches are in good agreement. Theoretically we have obtained bounds on physical mass parameters using the unitarity-constrained superpotential couplings. The bounds are as follows: (i) an absolute upper bound on the top quark mass m_t ≤ 225 GeV; (ii) an upper bound on the lightest neutral Higgs boson mass at tree level of m_{H₂⁰}^tree ≤ 169 GeV, and after inclusion of the one-loop radiative correction m_{H₂⁰} ≤ 229 GeV, when λ_t ≠ λ_b at the grand unified theory scale. On the other hand, these are m_{H₂⁰}^tree ≤ 159 GeV and m_{H₂⁰} ≤ 222 GeV, respectively, when λ_t = λ_b at the grand unified theory scale. A plausible range on the D-quark mass as a function of the mass scale M_{Z₂} is m_D ≈ O(3 TeV) for M_{Z₂} ≈ O(1 TeV) for the favored values of tan β ≤ 1. The bounds on the aforesaid physical parameters in the case of the χ, ψ, and ν models in the 27 representation are almost identical to those of the η model and are consistent with present-day experimental precision measurements.

  14. Verifying the error bound of numerical computation implemented in computer systems

    DOEpatents

    Sawada, Jun

    2013-03-12

A verification tool receives a finite-precision definition for an approximation of an infinite-precision numerical function implemented in a processor, in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite-precision numerical function. The verification tool splits the domain into at least two non-overlapping segments and converts, for each segment, the polynomial of bounded functions for that segment into a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment, and reports the segments that violate a bounding condition.
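
    In the spirit of the verification flow described above, the toy sketch below splits a domain into segments and checks a simple interval-arithmetic enclosure of a polynomial against a stated error bound on each segment; the polynomial, the segment count, and the threshold are arbitrary, and this is not the patented tool.

      import numpy as np

      def interval_mul(a, b):
          """Product of two intervals a = (alo, ahi), b = (blo, bhi)."""
          products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
          return (min(products), max(products))

      def interval_poly_bound(coeffs, lo, hi):
          """Interval-arithmetic enclosure of a polynomial (highest degree first) on [lo, hi]."""
          acc = (coeffs[0], coeffs[0])
          for c in coeffs[1:]:                 # Horner evaluation with interval arithmetic
              acc = interval_mul(acc, (lo, hi))
              acc = (acc[0] + c, acc[1] + c)
          return acc

      # Polynomial error model and a required error bound (made-up values).
      coeffs = [0.02, -0.01, 0.003, 0.0]       # 0.02 x^3 - 0.01 x^2 + 0.003 x
      error_bound = 0.05
      domain, n_segments = (-1.0, 1.0), 8

      edges = np.linspace(*domain, n_segments + 1)
      for lo, hi in zip(edges[:-1], edges[1:]):
          enc_lo, enc_hi = interval_poly_bound(coeffs, lo, hi)
          worst = max(abs(enc_lo), abs(enc_hi))
          status = "ok" if worst <= error_bound else "VIOLATES bound"
          print(f"segment [{lo:+.2f}, {hi:+.2f}]: |p| <= {worst:.4f}  ->  {status}")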

  15. Search for invisible decays of a Higgs boson using vector-boson fusion in pp collisions at √s = 8 TeV with the ATLAS detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.; Abbott, B.; Abdallah, J.

    2016-01-28

A search for a Higgs boson produced via vector-boson fusion and decaying into invisible particles is presented, using 20.3 fb⁻¹ of proton-proton collision data at a centre-of-mass energy of 8 TeV recorded by the ATLAS detector at the LHC. For a Higgs boson with a mass of 125 GeV, assuming the Standard Model production cross section, an upper bound of 0.28 is set on the branching fraction of H → invisible at 95% confidence level, where the expected upper limit is 0.31. Furthermore, the results are interpreted in models of Higgs-portal dark matter where the branching fraction limit is converted into upper bounds on the dark-matter-nucleon scattering cross section as a function of the dark-matter particle mass, and compared to results from the direct dark-matter detection experiments.

  16. Statistical thermodynamics foundation for photovoltaic and photothermal conversion. II. Application to photovoltaic conversion

    NASA Astrophysics Data System (ADS)

    Badescu, Viorel; Landsberg, Peter T.

    1995-08-01

    The general theory developed in part I was applied to build up two models of photovoltaic conversion. To this end two different systems were analyzed. The first system consists of the whole absorber (converter), for which the balance equations for energy and entropy are written and then used to derive an upper bound for solar energy conversion. The second system covers a part of the absorber (converter), namely the valence and conduction electronic bands. The balance of energy is used in this case to derive, under additional assumptions, another upper limit for the conversion efficiency. This second system deals with the real location where the power is generated. Both models take into consideration the radiation polarization and reflection, and the effects of concentration. The second model yields a more accurate upper bound for the conversion efficiency. A generalized solar cell equation is derived. It is proved that other previous theories are particular cases of the present more general formalism.

  17. Limits on the fluctuating part of y-type distortion monopole from Planck and SPT results

    NASA Astrophysics Data System (ADS)

    Khatri, Rishi; Sunyaev, Rashid

    2015-08-01

    We use the published Planck and SPT cluster catalogs [1,2] and recently published y-distortion maps [3] to put strong observational limits on the contribution of the fluctuating part of the y-type distortions to the y-distortion monopole. Our bounds are 5.4 × 10^-8 < ⟨y⟩ < 2.2 × 10^-6. Our upper bound is a factor of 6.8 stronger than the currently best upper 95% confidence limit from COBE-FIRAS of ⟨y⟩ < 15 × 10^-6. In the standard cosmology, large scale structure is the only source of such distortions and our limits therefore constrain the baryonic physics involved in the formation of the large scale structure. Our lower limit, from the detected clusters in the Planck and SPT catalogs, also implies that a Pixie-like experiment should detect the y-distortion monopole at >27σ. The biggest sources of uncertainty in our upper limit are the monopole offsets between different HFI channel maps, which we estimate to be <10^-6.

  18. On the realization of the bulk modulus bounds for two-phase viscoelastic composites

    NASA Astrophysics Data System (ADS)

    Andreasen, Casper Schousboe; Andreassen, Erik; Jensen, Jakob Søndergaard; Sigmund, Ole

    2014-02-01

    Materials with good vibration damping properties and high stiffness are of great industrial interest. In this paper the bounds for viscoelastic composites are investigated and material microstructures that realize the upper bound are obtained by topology optimization. These viscoelastic composites can be realized by additive manufacturing technologies followed by an infiltration process. Viscoelastic composites consisting of a relatively stiff elastic phase, e.g. steel, and a relatively lossy viscoelastic phase, e.g. silicone rubber, have non-connected stiff regions when optimized for maximum damping. In order to ensure manufacturability of such composites the connectivity of the matrix is ensured by imposing a conductivity constraint and the influence on the bounds is discussed.
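    As a point of reference for the bounds discussed above, the sketch below evaluates the classical Hashin-Shtrikman bound on the bulk modulus of an isotropic, two-phase elastic composite; the viscoelastic bounds the paper actually works toward are more involved, and the phase moduli used here are only illustrative.

```python
# Reference sketch: the classical (elastic, isotropic, 3D) Hashin-Shtrikman
# bound on the bulk modulus of a two-phase composite; phase moduli below are
# illustrative values only, not data from the paper.
def hs_bulk_bound(k1, g1, k2, f1):
    """Hashin-Shtrikman bound with phase 1 as the 'coating' phase.

    Using the stiffer phase as phase 1 yields the upper bound,
    the softer phase yields the lower bound.
    """
    f2 = 1.0 - f1
    return k1 + f2 / (1.0 / (k2 - k1) + f1 / (k1 + 4.0 * g1 / 3.0))

# 70% steel-like phase (K ~ 160 GPa, G ~ 80 GPa) with 30% rubber-like phase
# (K ~ 1 GPa, G ~ 0.0003 GPa)
k_upper = hs_bulk_bound(160.0, 80.0, 1.0, 0.7)
k_lower = hs_bulk_bound(1.0, 0.0003, 160.0, 0.3)
print(f"HS bounds on bulk modulus: {k_lower:.2f} .. {k_upper:.2f} GPa")
```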

  19. 1-norm support vector novelty detection and its sparseness.

    PubMed

    Zhang, Li; Zhou, WeiDa

    2013-12-01

    This paper proposes a 1-norm support vector novelty detection (SVND) method and discusses its sparseness. 1-norm SVND is formulated as a linear programming problem and uses two techniques for inducing sparseness, namely 1-norm regularization and the hinge loss function. We also find two upper bounds on the sparseness of 1-norm SVND, namely the exact support vector (ESV) bound and the kernel Gram matrix rank bound. The ESV bound indicates that 1-norm SVND has a sparser representation model than SVND. The kernel Gram matrix rank bound can loosely estimate the sparseness of 1-norm SVND. Experimental results show that 1-norm SVND is feasible and effective. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. On the perturbation of the group generalized inverse for a class of bounded operators in Banach spaces

    NASA Astrophysics Data System (ADS)

    Castro-González, N.; Vélez-Cerrada, J. Y.

    2008-05-01

    Given a bounded operator A on a Banach space X with Drazin inverse A^D and index r, we study the class of group invertible bounded operators B such that I + A^D(B - A) is invertible and . We show that they can be written with respect to the decomposition as a matrix operator, , where B1 and are invertible. Several characterizations of the perturbed operators are established, extending matrix results. We analyze the perturbation of the Drazin inverse and we provide explicit upper bounds of ||B^# - A^D|| and ||BB^# - A^D A||. We obtain a result on the continuity of the group inverse for operators on Banach spaces.

  1. Bounds on invisible Higgs boson decays extracted from LHC ttH production data.

    PubMed

    Zhou, Ning; Khechadoorian, Zepyoor; Whiteson, Daniel; Tait, Tim M P

    2014-10-10

    We present an upper bound on the branching fraction of the Higgs boson to invisible particles by recasting a CMS Collaboration search for stop quarks decaying to tt̄ + E_T^miss. The observed (expected) bound, BF(H → inv.) < 0.40 (0.65) at 95% C.L., is the strongest direct limit to date, benefiting from a downward fluctuation in the CMS data in that channel. In addition, we combine this new constraint with existing published constraints to give an observed (expected) bound of BF(H → inv.) < 0.40 (0.40) at 95% C.L., and we show some of the implications for theories of dark matter which communicate through the Higgs portal.

  2. Money Gone Up in Smoke: The Tobacco Use and Malnutrition Nexus in Bangladesh

    PubMed Central

    Husain, Muhammad Jami; Virk-Baker, Mandeep; Parascandola, Mark; Khondker, Bazlul Haque; Ahluwalia, Indu B.

    2017-01-01

    BACKGROUND The tobacco epidemic in Bangladesh is pervasive. Expenditures on tobacco may reduce money available for food in a country with a high malnutrition rate. OBJECTIVES The aims of the study are to quantify the opportunity costs of tobacco expenditure in terms of nutrition (ie, food energy) forgone and the potential improvements in the household level food-energy status if the money spent on tobacco were diverted for food consumption. METHOD We analyzed data from the 2010 Bangladesh Household Income and Expenditure Survey, a nationally representative survey conducted among 12,240 households. We present 2 analytical scenarios: (1) the lower-bound gain scenario entailing money spent on tobacco partially diverted to acquiring food according to households' food consumption share in total expenditures; and (2) the upper-bound gain scenario entailing money spent on tobacco diverted to acquiring food only. Age- and gender-based energy norms were used to identify food-energy deficient households. Data were analyzed by mutually exclusive smoking-only, smokeless-only, and dual-tobacco user households. FINDINGS On average, a smoking-only household could gain 269–497 kilocalories (kcal) daily under the lower-bound and upper-bound scenarios, respectively. The potential energy gains for smokeless-only and dual-tobacco user households ranged from 148–268 kcal and 508–924 kcal, respectively. Under these lower- and upper-bound estimates, the percentage of smoking-only user households that are malnourished declined significantly from the baseline rate of 38% to 33% and 29%, respectively. For the smokeless-only and dual-tobacco user households, there were 2–3 and 6–9 percentage point drops in the malnutrition prevalence rates. The tobacco expenditure shift could translate to an additional 4.6–7.7 million food-energy malnourished persons meeting their caloric requirements. CONCLUSIONS The findings suggest that tobacco use reduction could facilitate concomitant improvements in population-level nutrition status and may inform the development and refinement of tobacco prevention and control efforts in Bangladesh. PMID:28283125
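    A minimal sketch of the arithmetic behind the two scenarios, using entirely hypothetical household figures rather than the HIES microdata analyzed in the paper:

```python
# Hedged sketch of the two diversion scenarios with hypothetical household
# figures (the paper's actual estimates come from the 2010 HIES microdata).
def energy_gain_kcal(tobacco_spend, food_share, price_per_1000_kcal):
    """Daily kcal gained under the lower- and upper-bound scenarios.

    lower bound: tobacco money diverted to food in proportion to the
                 household's food share of total expenditure;
    upper bound: all tobacco money diverted to food.
    """
    lower = tobacco_spend * food_share / price_per_1000_kcal * 1000.0
    upper = tobacco_spend / price_per_1000_kcal * 1000.0
    return lower, upper

# Hypothetical: 20 taka/day on tobacco, 60% food share, 45 taka per 1000 kcal
low, high = energy_gain_kcal(20.0, 0.60, 45.0)
print(f"potential gain: {low:.0f}-{high:.0f} kcal/day")
```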

  3. Money Gone Up in Smoke: The Tobacco Use and Malnutrition Nexus in Bangladesh.

    PubMed

    Husain, Muhammad Jami; Virk-Baker, Mandeep; Parascandola, Mark; Khondker, Bazlul Haque; Ahluwalia, Indu B

    The tobacco epidemic in Bangladesh is pervasive. Expenditures on tobacco may reduce money available for food in a country with a high malnutrition rate. The aims of the study are to quantify the opportunity costs of tobacco expenditure in terms of nutrition (ie, food energy) forgone and the potential improvements in the household level food-energy status if the money spent on tobacco were diverted for food consumption. We analyzed data from the 2010 Bangladesh Household Income and Expenditure Survey, a nationally representative survey conducted among 12,240 households. We present 2 analytical scenarios: (1) the lower-bound gain scenario entailing money spent on tobacco partially diverted to acquiring food according to households' food consumption share in total expenditures; and (2) the upper-bound gain scenario entailing money spent on tobacco diverted to acquiring food only. Age- and gender-based energy norms were used to identify food-energy deficient households. Data were analyzed by mutually exclusive smoking-only, smokeless-only, and dual-tobacco user households. On average, a smoking-only household could gain 269-497 kilocalories (kcal) daily under the lower-bound and upper-bound scenarios, respectively. The potential energy gains for smokeless-only and dual-tobacco user households ranged from 148-268 kcal and 508-924 kcal, respectively. Under these lower- and upper-bound estimates, the percentage of smoking-only user households that are malnourished declined significantly from the baseline rate of 38% to 33% and 29%, respectively. For the smokeless-only and dual-tobacco user households, there were 2-3 and 6-9 percentage point drops in the malnutrition prevalence rates. The tobacco expenditure shift could translate to an additional 4.6-7.7 million food-energy malnourished persons meeting their caloric requirements. The findings suggest that tobacco use reduction could facilitate concomitant improvements in population-level nutrition status and may inform the development and refinement of tobacco prevention and control efforts in Bangladesh. Copyright © 2016. Published by Elsevier Inc.

  4. Inflow-weighted pulmonary perfusion: comparison between dynamic contrast-enhanced MRI versus perfusion scintigraphy in complex pulmonary circulation

    PubMed Central

    2013-01-01

    Background Due to the different properties of the contrast agents, the lung perfusion maps as measured by 99mTc-labeled macroaggregated albumin perfusion scintigraphy (PS) are not uncommonly discrepant from those measured by dynamic contrast-enhanced MRI (DCE-MRI) using indicator-dilution analysis in complex pulmonary circulation. Since PS offers the pre-capillary perfusion of the first-pass transit, we hypothesized that an inflow-weighted perfusion model of DCE-MRI could simulate the result by PS. Methods 22 patients underwent DCE-MRI at 1.5T and also PS. Relative perfusion contributed by the left lung was calculated by PS (PS_L%), by DCE-MRI using conventional indicator dilution theory for pulmonary blood volume (PBV_L%) and pulmonary blood flow (PBF_L%), and using our proposed inflow-weighted pulmonary blood volume (PBViw_L%). For PBViw_L%, the optimal upper bound of the inflow-weighted integration range was determined by correlation coefficient analysis. Results The time-to-peak of the normal lung parenchyma was the optimal upper bound in the inflow-weighted perfusion model. Using PS_L% as a reference, PBV_L% showed error of 49.24% to −40.37% (intraclass correlation coefficient R_I = 0.55) and PBF_L% had error of 34.87% to −27.76% (R_I = 0.80). With the inflow-weighted model, PBViw_L% had much less error of 12.28% to −11.20% (R_I = 0.98) from PS_L%. Conclusions The inflow-weighted DCE-MRI provides relative perfusion maps similar to those by PS. The discrepancy between conventional indicator-dilution and inflow-weighted analysis represents a mixed-flow component in which pathological flow such as shunting or collaterals might have participated. PMID:23448679

  5. Inflow-weighted pulmonary perfusion: comparison between dynamic contrast-enhanced MRI versus perfusion scintigraphy in complex pulmonary circulation.

    PubMed

    Lin, Yi-Ru; Tsai, Shang-Yueh; Huang, Teng-Yi; Chung, Hsiao-Wen; Huang, Yi-Luan; Wu, Fu-Zong; Lin, Chu-Chuan; Peng, Nan-Jing; Wu, Ming-Ting

    2013-02-28

    Due to the different properties of the contrast agents, the lung perfusion maps as measured by 99mTc-labeled macroaggregated albumin perfusion scintigraphy (PS) are not uncommonly discrepant from those measured by dynamic contrast-enhanced MRI (DCE-MRI) using indicator-dilution analysis in complex pulmonary circulation. Since PS offers the pre-capillary perfusion of the first-pass transit, we hypothesized that an inflow-weighted perfusion model of DCE-MRI could simulate the result by PS. 22 patients underwent DCE-MRI at 1.5T and also PS. Relative perfusion contributed by the left lung was calculated by PS (PS_L%), by DCE-MRI using conventional indicator dilution theory for pulmonary blood volume (PBV_L%) and pulmonary blood flow (PBF_L%), and using our proposed inflow-weighted pulmonary blood volume (PBViw_L%). For PBViw_L%, the optimal upper bound of the inflow-weighted integration range was determined by correlation coefficient analysis. The time-to-peak of the normal lung parenchyma was the optimal upper bound in the inflow-weighted perfusion model. Using PS_L% as a reference, PBV_L% showed error of 49.24% to -40.37% (intraclass correlation coefficient R_I = 0.55) and PBF_L% had error of 34.87% to -27.76% (R_I = 0.80). With the inflow-weighted model, PBViw_L% had much less error of 12.28% to -11.20% (R_I = 0.98) from PS_L%. The inflow-weighted DCE-MRI provides relative perfusion maps similar to those by PS. The discrepancy between conventional indicator-dilution and inflow-weighted analysis represents a mixed-flow component in which pathological flow such as shunting or collaterals might have participated.
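    A minimal sketch of the inflow-weighted index described above, assuming synthetic signal-time curves; the integration is cut off at the time-to-peak of the reference (normal-parenchyma) curve, the optimal upper bound reported by the authors.

```python
# Hedged sketch of the inflow-weighted perfusion index: integrate each lung's
# contrast signal only up to the time-to-peak of the reference curve, then
# take the left-lung share. Curves here are synthetic, not patient data.
import numpy as np

def inflow_weighted_share(t, left_curve, right_curve, reference_curve):
    t_peak = t[np.argmax(reference_curve)]        # integration upper bound
    mask = t <= t_peak
    pbv_left = float(np.sum(left_curve[mask]))    # uniform sampling: sum ~ integral
    pbv_right = float(np.sum(right_curve[mask]))
    return 100.0 * pbv_left / (pbv_left + pbv_right)

# Synthetic gamma-variate-like curves sampled every 0.5 s
t = np.arange(0.0, 30.0, 0.5)
normal = t**2 * np.exp(-t / 3.0)        # reference (normal parenchyma)
left = 0.6 * t**2 * np.exp(-t / 3.5)    # slightly delayed, reduced inflow
right = t**2 * np.exp(-t / 3.0)
print(f"PBViw_L% = {inflow_weighted_share(t, left, right, normal):.1f}%")
```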

  6. Search for a gamma-ray line feature from a group of nearby galaxy clusters with Fermi LAT Pass 8 data

    NASA Astrophysics Data System (ADS)

    Liang, Yun-Feng; Shen, Zhao-Qiang; Li, Xiang; Fan, Yi-Zhong; Huang, Xiaoyuan; Lei, Shi-Jun; Feng, Lei; Liang, En-Wei; Chang, Jin

    2016-05-01

    Galaxy clusters are the largest gravitationally bound objects in the Universe and may be suitable targets for indirect dark matter searches. With 85 months of Fermi LAT Pass 8 publicly available data, we analyze the gamma-ray emission in the direction of 16 nearby galaxy clusters with an unbinned likelihood analysis. No statistically or globally significant γ-ray line feature is identified, though a tentative line signal may be present at ~43 GeV. The 95% confidence level upper limits on the velocity-averaged cross section of dark matter particles annihilating into double γ rays (i.e., ⟨σv⟩_χχ→γγ) are derived. Unless very optimistic boost factors of dark matter annihilation in these galaxy clusters are assumed, such constraints are much weaker than the bounds set by the Galactic γ-ray data.

  7. Integer aperture ambiguity resolution based on difference test

    NASA Astrophysics Data System (ADS)

    Zhang, Jingyu; Wu, Meiping; Li, Tao; Zhang, Kaidong

    2015-07-01

    Carrier-phase integer ambiguity resolution (IAR) is the key to highly precise, fast positioning and attitude determination with Global Navigation Satellite System (GNSS). It can be seen as the process of estimating the unknown cycle ambiguities of the carrier-phase observations as integers. Once the ambiguities are fixed, carrier phase data will act as the very precise range data. Integer aperture (IA) ambiguity resolution is the combination of acceptance testing and integer ambiguity resolution, which can realize better quality control of IAR. Difference test (DT) is one of the most popular acceptance tests. This contribution will give a detailed analysis about the following properties of IA ambiguity resolution based on DT: 1. The sharpest and loose upper bounds of DT are derived from the perspective of geometry. These bounds are very simple and easy to be computed, which give the range for the critical values of DT.
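    A minimal sketch of the difference-test acceptance step referred to above, with hypothetical values; the critical value delta is exactly the quantity the derived upper bounds are meant to constrain.

```python
# Hedged sketch of the difference-test acceptance step: the float ambiguity
# estimate is fixed to the best integer candidate only when the gap between
# the second-best and best weighted squared residuals exceeds a critical
# value delta (choosing delta is where the bounds discussed above come in).
import numpy as np

def difference_test(a_float, Q, candidates, delta):
    """Return the accepted integer vector, or None if the test fails."""
    Q_inv = np.linalg.inv(Q)
    def sq_norm(a_int):
        r = a_float - a_int
        return float(r @ Q_inv @ r)
    norms = sorted(sq_norm(c) for c in candidates)
    best = min(candidates, key=sq_norm)
    return best if (norms[1] - norms[0]) >= delta else None

# Hypothetical two-ambiguity example
a_float = np.array([2.1, -0.9])
Q = np.array([[0.04, 0.01], [0.01, 0.09]])
candidates = [np.array([2, -1]), np.array([2, 0]), np.array([3, -1])]
print(difference_test(a_float, Q, candidates, delta=5.0))
```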

  8. Bounding the marginal cost of producing potable water including the use of seawater desalinization as a backstop potable water production technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dooley, James J.

    2014-04-01

    The analysis presented in this technical report should allow for the creation of high, medium, and low cost potable water prices for GCAM. Seawater reverse osmosis (SWRO) based desalinization should act as a backstop for the cost of producing potable water (i.e., the literature seems clear that SWRO should establish an upper bound for the plant gate cost of producing potable water). Transporting water over significant distances and having to lift water to higher elevations to reach end-users can also have a significant impact on the cost of producing water. The three potable fresh water scenarios described in this technical report are: a low cost water scenario ($0.10/m3); a medium water cost scenario ($1.00/m3); and a high water cost scenario ($2.50/m3).

  9. Estimated Bounds and Important Factors for Fuel Use and Consumer Costs of Connected and Automated Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephens, T. S.; Gonder, Jeff; Chen, Yuche

    This report details a study of the potential effects of connected and automated vehicle (CAV) technologies on vehicle miles traveled (VMT), vehicle fuel efficiency, and consumer costs. Related analyses focused on a range of light-duty CAV technologies in conventional powertrain vehicles -- from partial automation to full automation, with and without ridesharing -- compared to today's base-case scenario. Analysis results revealed widely disparate upper- and lower-bound estimates for fuel use and VMT, ranging from a tripling of fuel use to decreasing light-duty fuel use to below 40% of today's level. This wide range reflects uncertainties in the ways that CAV technologies can influence vehicle efficiency and use through changes in vehicle designs, driving habits, and travel behavior. The report further identifies the most significant potential impacting factors, the largest areas of uncertainty, and where further research is particularly needed.

  10. Termination Proofs for String Rewriting Systems via Inverse Match-Bounds

    NASA Technical Reports Server (NTRS)

    Butler, Ricky (Technical Monitor); Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes

    2004-01-01

    Annotating a letter by a number, one can record information about its history during a reduction. A string rewriting system is called match-bounded if there is a global upper bound to these numbers. In earlier papers we established match-boundedness as a strong sufficient criterion for both termination and preservation of regular languages. We now show that string rewriting systems whose inverse (left- and right-hand sides exchanged) is match-bounded also have exceptional properties, but slightly different ones. Inverse match-bounded systems effectively preserve context-free languages; their sets of normalized strings and their sets of immortal strings are effectively regular. These sets of strings can be used to decide the normalization, the termination and the uniform termination problems of inverse match-bounded systems. We also show that the termination problem is decidable in linear time, and that a certain strong reachability problem is decidable, thus solving two open problems of McNaughton's.

  11. Tightening the entropic uncertainty bound in the presence of quantum memory

    NASA Astrophysics Data System (ADS)

    Adabi, F.; Salimi, S.; Haseli, S.

    2016-06-01

    The uncertainty principle is a fundamental principle in quantum physics. It implies that the measurement outcomes of two incompatible observables cannot be predicted simultaneously. In quantum information theory, this principle can be expressed in terms of entropic measures. M. Berta et al. [Nat. Phys. 6, 659 (2010), 10.1038/nphys1734] have indicated that the uncertainty bound can be altered by considering a particle as a quantum memory correlating with the primary particle. In this article, we obtain a lower bound for the entropic uncertainty in the presence of a quantum memory by adding an additional term depending on the Holevo quantity and the mutual information. We conclude that our lower bound is tighter than that of Berta et al. whenever the accessible information about the measurement outcomes is less than the mutual information of the joint state. Some examples have been investigated for which our lower bound is tighter than Berta et al.'s lower bound. Using our lower bound, a lower bound for the entanglement of formation of bipartite quantum states has been obtained, as well as an upper bound for the regularized distillable common randomness.

  12. Communication complexity and information complexity

    NASA Astrophysics Data System (ADS)

    Pankratov, Denis

    Information complexity enables the use of information-theoretic tools in communication complexity theory. Prior to the results presented in this thesis, information complexity was mainly used for proving lower bounds and direct-sum theorems in the setting of communication complexity. We present three results that demonstrate new connections between information complexity and communication complexity. In the first contribution we thoroughly study the information complexity of the smallest nontrivial two-party function: the AND function. While computing the communication complexity of AND is trivial, computing its exact information complexity presents a major technical challenge. In overcoming this challenge, we reveal that information complexity gives rise to rich geometrical structures. Our analysis of information complexity relies on new analytic techniques and new characterizations of communication protocols. We also uncover a connection of information complexity to the theory of elliptic partial differential equations. Once we compute the exact information complexity of AND, we can compute the exact communication complexity of several related functions on n-bit inputs with some additional technical work. Previous combinatorial and algebraic techniques could only prove bounds of the form Θ(n). Interestingly, this level of precision is typical in the area of information theory, so our result demonstrates that this meta-property of precise bounds carries over to information complexity and in certain cases even to communication complexity. Our result not only strengthens the lower bound on the communication complexity of disjointness by making it more exact, but it also shows that information complexity provides the exact upper bound on communication complexity. In fact, this result is more general and applies to a whole class of communication problems. In the second contribution, we use self-reduction methods to prove strong lower bounds on the information complexity of two of the most studied functions in the communication complexity literature: Gap Hamming Distance (GHD) and Inner Product mod 2 (IP). In our first result we affirm the conjecture that the information complexity of GHD is linear even under the uniform distribution. This strengthens the Ω(n) bound shown by Kerenidis et al. (2012) and answers an open problem by Chakrabarti et al. (2012). We also prove that the information complexity of IP is arbitrarily close to the trivial upper bound n as the permitted error tends to zero, again strengthening the Ω(n) lower bound proved by Braverman and Weinstein (2011). More importantly, our proofs demonstrate that self-reducibility makes the connection between information complexity and communication complexity lower bounds a two-way connection. Whereas numerous results in the past used information complexity techniques to derive new communication complexity lower bounds, we explore a generic way in which communication complexity lower bounds imply information complexity lower bounds in a black-box manner. In the third contribution we consider the roles that private and public randomness play in the definition of information complexity. In communication complexity, private randomness can be trivially simulated by public randomness. Moreover, the communication cost of simulating public randomness with private randomness is well understood due to Newman's theorem (1991).
In information complexity, the roles of public and private randomness are reversed: public randomness can be trivially simulated by private randomness. However, the information cost of simulating private randomness with public randomness is not understood. We show that protocols that use only public randomness admit a rather strong compression. In particular, efficient simulation of private randomness by public randomness would imply a version of a direct sum theorem in the setting of communication complexity. This establishes yet another connection between the two areas. (Abstract shortened by UMI.)

  13. Exploring L1 model space in search of conductivity bounds for the MT problem

    NASA Astrophysics Data System (ADS)

    Wheelock, B. D.; Parker, R. L.

    2013-12-01

    Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions), results in greater uncertainty (wider bounds). Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
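    A toy, linearized sketch of the bounding step described above (not the authors' MT code): the lower bound minimizes the summed conductivity inside the chosen region subject to a data-fit tolerance, while minimizing the 1-norm everywhere outside concentrates conductance into the region and yields the upper bound. The operator, data, and tolerances are synthetic.

```python
# Toy illustration (linearized, non-negative model) of the bounding step:
# lower bound  -> minimize the parameters inside the region, fit the data;
# upper bound  -> minimize the 1-norm outside the region, which pushes
#                 conductance into it. Everything here is synthetic.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_data, n_model = 8, 20
G = rng.random((n_data, n_model))            # linearized forward operator
m_true = np.full(n_model, 0.1)
m_true[8:12] = 1.0                           # conductive block
d = G @ m_true
eps = 0.05 * np.abs(d)                       # data tolerance

region = np.zeros(n_model)
region[8:12] = 1.0
A_ub = np.vstack([G, -G])                    # encodes |G m - d| <= eps
b_ub = np.concatenate([d + eps, eps - d])

def region_average(cost_vector):
    res = linprog(cost_vector, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * n_model, method="highs")
    return float(region @ res.x) / region.sum()

lower = region_average(region)               # minimize inside the region
upper = region_average(1.0 - region)         # minimize everywhere outside
print(f"average conductivity in region: {lower:.2f} .. {upper:.2f}")
```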

  14. Pharmacokinetics and repolarization effects of intravenous and transdermal granisetron.

    PubMed

    Mason, Jay W; Selness, Daniel S; Moon, Thomas E; O'Mahony, Bridget; Donachie, Peter; Howell, Julian

    2012-05-15

    The need for greater clarity about the effects of 5-HT(3) receptor antagonists on cardiac repolarization is apparent in the changing product labeling across this therapeutic class. This study assessed the repolarization effects of granisetron, a 5-HT(3) receptor antagonist antiemetic, administered intravenously and by a granisetron transdermal system (GTDS). In a parallel four-arm study, healthy subjects were randomized to receive intravenous granisetron, GTDS, placebo, or oral moxifloxacin (active control). The primary endpoint was difference in change from baseline in mean Fridericia-corrected QT interval (QTcF) between GTDS and placebo (ddQTcF) on days 3 and 5. A total of 240 subjects were enrolled, 60 in each group. Adequate sensitivity for detection of QTc change was shown by a 5.75 ms lower bound of the 90% confidence interval (CI) for moxifloxacin versus placebo at 2 hours postdose on day 3. Day 3 ddQTcF values varied between 0.2 and 1.9 ms for GTDS (maximum upper bound of 90% CI, 6.88 ms), between -1.2 and 1.6 ms for i.v. granisetron (maximum upper bound of 90% CI, 5.86 ms), and between -3.4 and 4.7 ms for moxifloxacin (maximum upper bound of 90% CI, 13.45 ms). Day 5 findings were similar. Pharmacokinetic-ddQTcF modeling showed a minimally positive slope of 0.157 ms/(ng/mL), but a very low correlation (r = 0.090). GTDS was not associated with statistically or clinically significant effects on QTcF or other electrocardiographic variables. This study provides useful clarification on the effect of granisetron delivered by GTDS on cardiac repolarization. ©2012 AACR.

  15. Using a Water Balance Model to Bound Potential Irrigation Development in the Upper Blue Nile Basin

    NASA Astrophysics Data System (ADS)

    Jain Figueroa, A.; McLaughlin, D.

    2016-12-01

    The Grand Ethiopian Renaissance Dam (GERD), on the Blue Nile is an example of water resource management underpinning food, water and energy security. Downstream countries have long expressed concern about water projects in Ethiopia because of possible diversions to agricultural uses that could reduce flow in the Nile. Such diversions are attractive to Ethiopia as a partial solution to its food security problems but they could also conflict with hydropower revenue from GERD. This research estimates an upper bound on diversions above the GERD project by considering the potential for irrigated agriculture expansion and, in particular, the availability of water and land resources for crop production. Although many studies have aimed to simulate downstream flows for various Nile basin management plans, few have taken the perspective of bounding the likely impacts of upstream agricultural development. The approach is to construct an optimization model to establish a bound on Upper Blue Nile (UBN) agricultural development, paying particular attention to soil suitability and seasonal variability in climate. The results show that land and climate constraints impose significant limitations on crop production. Only 25% of the land area is suitable for irrigation due to the soil, slope and temperature constraints. When precipitation is also considered only 11% of current land area could be used in a way that increases water consumption. The results suggest that Ethiopia could consume an additional 3.75 billion cubic meters (bcm) of water per year, through changes in land use and storage capacity. By exploiting this irrigation potential, Ethiopia could potentially decrease the annual flow downstream of the UBN by 8 percent from the current 46 bcm/y to the modeled 42 bcm/y.

  16. Quantifying Behavior Driven Energy Savings for Hotels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Bing; Wang, Na; Hooks, Edward

    2016-08-12

    Hotel facilities present abundant opportunities for energy savings. In the United States, there are around 25,000 hotels that spend an average of $2,196 on energy costs per room each year. This amounts to about 6% of the total annual hotel operating cost. However, unlike offices, there are limited studies on establishing appropriate baselines and quantifying hotel energy savings given the variety of services and amenities, unpredictable customer behaviors, and the around-the-clock operation hours. In this study, we investigate behavior driven energy savings for three medium-size (around 90,000 sq ft) hotels that offer similar services in different climate zones. We first used the Department of Energy Asset Scoring Tool to establish baseline models. We then conducted energy saving analysis in EnergyPlus based on a behavior model that defines the upper bound and lower bound of customer and hotel staff behavior. Lastly, we presented a probabilistic energy savings outlook for each hotel. The analysis shows behavior driven energy savings up to 25%. We believe this is the first study to incorporate behavioral factors into energy analysis for hotels. It also demonstrates a procedure to quickly create tailored baselines and identify improvement opportunities for hotels.

  17. FACTORING TO FIT OFF DIAGONALS.

    DTIC Science & Technology

    imply an upper bound on the number of factors. When applied to somatotype data, the method improved substantially on centroid solutions and indicated a reinterpretation of earlier factoring studies. (Author)

  18. Neutrino oscillations: what do we know about θ13

    NASA Astrophysics Data System (ADS)

    Ernst, David

    2008-10-01

    The phenomenon of neutrino oscillations is reviewed. A new analysis tool for the recent, more finely binned Super-K atmospheric data is outlined. This analysis incorporates the full three-neutrino oscillation probabilities, including the mixing angle θ13 to all orders, and a full three-neutrino treatment of the Earth's MSW effect. Combined with the K2K, MINOS, and CHOOZ data, the upper bound on θ13 is found to arise from the Super-K atmospheric data, while the lower bound arises from CHOOZ. This is caused by the terms linear in θ13, which are of particular importance in the region L/E > 10^4 m/MeV where the sub-dominant expansion is not convergent. In addition, the enhancement of θ12 by the Earth MSW effect is found to be important for this result. The best fit value of θ13 is found to be (statistically insignificantly) negative and given by θ13 = -0.07 (+0.18, -0.11). In collaboration with Jesus Escamilla, Vanderbilt University and David Latimer, University of Kentucky.

  19. Evolution of cosmic string networks

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas; Turok, Neil

    1989-01-01

    Results on cosmic strings are summarized including: (1) the application of non-equilibrium statistical mechanics to cosmic string evolution; (2) a simple one scale model for the long strings which has a great deal of predictive power; (3) results from large scale numerical simulations; and (4) a discussion of the observational consequences of our results. An upper bound on Gμ of approximately 10^-7 emerges from the millisecond pulsar gravity wave bound. How numerical uncertainties affect this is discussed. Any changes which weaken the bound would probably also give the long strings the dominant role in producing observational consequences.

  20. Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials

    NASA Astrophysics Data System (ADS)

    Chen, Zhixiang; Fu, Bin

    This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ε)/2) lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.

  1. Analysis on a diffusive SIS epidemic model with logistic source

    NASA Astrophysics Data System (ADS)

    Li, Bo; Li, Huicong; Tong, Yachun

    2017-08-01

    In this paper, we are concerned with an SIS epidemic reaction-diffusion model with logistic source in spatially heterogeneous environment. We first discuss some basic properties of the parabolic system, including the uniform upper bound of solutions and global stability of the endemic equilibrium when spatial environment is homogeneous. Our primary focus is to determine the asymptotic profile of endemic equilibria (when exist) if the diffusion (migration) rate of the susceptible or infected population is small or large. Combined with the results of Li et al. (J Differ Equ 262:885-913, 2017) where the case of linear source is studied, our analysis suggests that varying total population enhances persistence of infectious disease.

  2. Necessary and sufficient criterion for extremal quantum correlations in the simplest Bell scenario

    NASA Astrophysics Data System (ADS)

    Ishizaka, Satoshi

    2018-05-01

    In the study of quantum nonlocality, one obstacle is that the analytical criterion for identifying the boundaries between quantum and postquantum correlations has not yet been given, even in the simplest Bell scenario. We propose a plausible, analytical, necessary and sufficient condition ensuring that a nonlocal quantum correlation in the simplest scenario is an extremal boundary point. Our extremality condition amounts to certifying an information-theoretical quantity; the probability of guessing a measurement outcome of a distant party optimized using any quantum instrument. We show that this quantity can be upper and lower bounded from any correlation in a device-independent way, and we use numerical calculations to confirm that coincidence of the upper and lower bounds appears to be necessary and sufficient for the extremality.

  3. On dynamic tumor eradication conditions under combined chemical/anti-angiogenic therapies

    NASA Astrophysics Data System (ADS)

    Starkov, Konstantin E.

    2018-02-01

    In this paper the ultimate dynamics of the five-dimensional cancer tumor growth model at the angiogenesis phase is studied. This model, elaborated by Pinho et al. in 2014, describes interactions between normal/cancer/endothelial cells under chemotherapy/anti-angiogenic agents in the tumor growth process. The author derives ultimate upper bounds for normal/tumor/endothelial cell concentrations and ultimate upper and lower bounds for chemical/anti-angiogenic concentrations. Global asymptotic tumor clearance conditions are obtained for two versions: the use of chemotherapy only and the combined application of chemotherapy and anti-angiogenic therapy. These conditions are established as attraction conditions to the maximum invariant set in the tumor-free plane, and furthermore, the case is examined when this set consists only of tumor-free equilibrium points.

  4. Robust guaranteed cost tracking control of quadrotor UAV with uncertainties.

    PubMed

    Xu, Zhiwei; Nian, Xiaohong; Wang, Haibo; Chen, Yinsheng

    2017-07-01

    In this paper, a robust guaranteed cost controller (RGCC) is proposed for a quadrotor UAV system with uncertainties to address the set-point tracking problem. A sufficient condition for the existence of the RGCC is derived by the Lyapunov stability theorem. The designed RGCC not only guarantees that the whole closed-loop system is asymptotically stable but also makes the quadratic performance level built for the closed-loop system have an upper bound irrespective of all admissible parameter uncertainties. Then, an optimal robust guaranteed cost controller is developed to minimize the upper bound of the performance level. Simulation results verify that the presented control algorithms possess small overshoot and short settling time, with which the quadrotor is able to perform the set-point tracking task well. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Limits on cold dark matter cosmologies from new anisotropy bounds on the cosmic microwave background

    NASA Technical Reports Server (NTRS)

    Vittorio, Nicola; Meinhold, Peter; Lubin, Philip; Muciaccia, Pio Francesco; Silk, Joseph

    1991-01-01

    A self-consistent method is presented for comparing theoretical predictions of and observational upper limits on CMB anisotropy. New bounds on CDM cosmologies set by the UCSB South Pole experiment on the 1 deg angular scale are presented. An upper limit of 4.0 × 10^-5 is placed on the rms differential temperature anisotropy to a 95 percent confidence level and a power of the test β = 55 percent. A lower limit of about 0.6/b is placed on the density parameter of cold dark matter universes with greater than about 3 percent baryon abundance and a Hubble constant of 50 km/s/Mpc, where b is the bias factor, equal to unity only if light traces mass.

  6. Forecasting neutrino masses from combining KATRIN and the CMB observations: Frequentist and Bayesian analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Host, Ole; Lahav, Ofer; Abdalla, Filipe B.

    We present a showcase for deriving bounds on the neutrino masses from laboratory experiments and cosmological observations. We compare the frequentist and Bayesian bounds on the effective electron neutrino mass m_β which the KATRIN neutrino mass experiment is expected to obtain, using both an analytical likelihood function and Monte Carlo simulations of KATRIN. Assuming a uniform prior in m_β, we find that a null result yields an upper bound of about 0.17 eV at 90% confidence in the Bayesian analysis, to be compared with the frequentist KATRIN reference value of 0.20 eV. This is a significant difference when judged relative to the systematic and statistical uncertainties of the experiment. On the other hand, an input m_β = 0.35 eV, which is the KATRIN 5σ detection threshold, would be detected at virtually the same level. Finally, we combine the simulated KATRIN results with cosmological data in the form of present (post-WMAP) and future (simulated Planck) observations. If an input of m_β = 0.2 eV is assumed in our simulations, KATRIN alone excludes a zero neutrino mass at 2.2σ. Adding Planck data increases the probability of detection to a median 2.7σ. The analysis highlights the importance of combining cosmological and laboratory data on an equal footing.
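    A toy numerical sketch of the statistical point being made, with hypothetical sensitivity numbers rather than the KATRIN simulation: the observable is m_β^2, and a null result is converted into a Bayesian 90% upper limit on m_β under a prior uniform in m_β.

```python
# Hedged toy of the statistical point (hypothetical numbers, not the KATRIN
# simulation): the observable is m_beta^2 with Gaussian uncertainty; a null
# result is turned into a 90% Bayesian upper limit on m_beta under a prior
# uniform in m_beta, restricted to m_beta >= 0.
import numpy as np

def bayesian_upper_limit(m2_obs, sigma_m2, cl=0.90):
    m = np.linspace(0.0, 2.0, 20001)                    # eV grid
    dm = m[1] - m[0]
    likelihood = np.exp(-0.5 * ((m**2 - m2_obs) / sigma_m2) ** 2)
    posterior = likelihood / (likelihood.sum() * dm)    # prior uniform in m
    cdf = np.cumsum(posterior) * dm
    return m[np.searchsorted(cdf, cl)]

# Null result with sigma(m^2) ~ 0.025 eV^2 (hypothetical sensitivity)
print(f"90% Bayesian upper limit: {bayesian_upper_limit(0.0, 0.025):.3f} eV")
```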

  7. Constraining the range of Yukawa gravity interaction from S2 star orbits II: bounds on graviton mass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakharov, A.F.; Jovanović, P.; Borka, D.

    2016-05-01

    Recently the LIGO collaboration discovered gravitational waves [1] predicted 100 years ago by A. Einstein. Moreover, in the key paper reporting the discovery, the joint LIGO and VIRGO team presented an upper limit on the graviton mass of m_g < 1.2 × 10^-22 eV [2] (see also more details in another LIGO paper [3] dedicated to a data analysis to obtain such a small constraint on the graviton mass). Since the graviton mass limit is so small, the authors concluded that their observational data do not show violations of classical general relativity. We consider another opportunity to evaluate a graviton mass from phenomenological consequences of massive gravity and show that an analysis of bright star trajectories could bound the graviton mass with an accuracy comparable to that reached with gravitational wave interferometers and expected with forthcoming pulsar timing observations for gravitational wave detection. It gives an opportunity to treat observations of bright stars near the Galactic Center as a wonderful tool not only for evaluating specific parameters of the black hole but also for obtaining constraints on the fundamental gravity law, such as modifications of the Newton gravity law in the weak-field approximation. In particular, we obtain bounds on the graviton mass based on a potential reconstruction at the Galactic Center.

  8. Thermal dark matter co-annihilating with a strongly interacting scalar

    NASA Astrophysics Data System (ADS)

    Biondini, S.; Laine, M.

    2018-04-01

    Recently many investigations have considered Majorana dark matter co-annihilating with bound states formed by a strongly interacting scalar field. However only the gluon radiation contribution to bound state formation and dissociation, which at high temperatures is subleading to soft 2 → 2 scatterings, has been included. Making use of a non-relativistic effective theory framework and solving a plasma-modified Schrödinger equation, we address the effect of soft 2 → 2 scatterings as well as the thermal dissociation of bound states. We argue that the mass splitting between the Majorana and scalar field has in general both a lower and an upper bound, and that the dark matter mass scale can be pushed at least up to 5-6 TeV.

  9. A Priori Bound on the Velocity in Axially Symmetric Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Lei, Zhen; Navas, Esteban A.; Zhang, Qi S.

    2016-01-01

    Let v be the velocity of Leray-Hopf solutions to the axially symmetric three-dimensional Navier-Stokes equations. Under suitable conditions for initial values, we prove the following a priori bound: |v(x, t)| ≤ C |ln r|^{1/2} / r^2 for 0 < r ≤ 1/2, where r is the distance from x to the z axis, and C is a constant depending only on the initial value. This provides a pointwise upper bound (worst case scenario) for possible singularities, while the recent papers (Chiun-Chuan et al., Commun PDE 34(1-3):203-232, 2009; Koch et al., Acta Math 203(1):83-105, 2009) gave a lower bound. The gap is polynomial order 1 modulo a half log term.

  10. The construction, fouling and enzymatic cleaning of a textile dye surface.

    PubMed

    Onaizi, Sagheer A; He, Lizhong; Middelberg, Anton P J

    2010-11-01

    The enzymatic cleaning of a rubisco protein stain bound onto Surface Plasmon Resonance (SPR) biosensor chips having a dye-bound upper layer is investigated. This novel method allowed, for the first time, a detailed kinetic study of rubisco cleanability (defined as fraction of adsorbed protein removed from a surface) from dyed surfaces (mimicking fabrics) at different enzyme concentrations. Analysis of kinetic data using an established mathematical model able to decouple enzyme transfer and reaction processes [Onaizi, He, Middelberg, Chem. Eng. Sci. 64 (2008) 3868] revealed a striking effect of dyeing on enzymatic cleaning performance. Specifically, the absolute rate constants for enzyme transfer to and from a dye-bound rubisco stain were significantly higher than reported previously for un-dyed surfaces. These increased transfer rates resulted in higher surface cleanability. Higher enzyme mobility (i.e., higher enzyme adsorption and desorption rates) at the liquid-dye interface was observed, consistent with previous suggestions that enzyme surface mobility is likely correlated with overall enzyme cleaning performance. Our results show that reaction engineering models of enzymatic action at surfaces may provide insight able to guide the design of better stain-resistant surfaces, and may also guide efforts to improve cleaning formulations. Copyright 2010 Elsevier Inc. All rights reserved.

  11. Constraints on sea to air emissions from methane clathrates in the vicinity of Svalbard

    NASA Astrophysics Data System (ADS)

    Pisso, Ignacio; Vadakkepuliyambatta, Sunil; Platt, Stephen Matthew; Eckhardt, Sabine; Allen, Grant; Pitt, Joseph; Silyakova, Anna; Hermansen, Ove; Schmidbauer, Norbert; Mienert, Jurgen; Myhre, Cathrine Lund; Stohl, Andreas

    2016-04-01

    Methane stored in the seabed in the form of clathrates has the potential to be released into the atmosphere due to ongoing ocean warming. The Methane Emissions from Arctic Ocean to Atmosphere (MOCA, http://moca.nilu.no/) project conducted measurement campaigns in the vicinity of Svalbard during the summers of 2014 and 2015 in collaboration with the Centre for Arctic Gas Hydrate, Environment and Climate (CAGE, https://cage.uit.no/) and the MAMM (https://arcticmethane.wordpress.com) project. The extensive set of measurements includes airborne (BAe 146) and shipborne (RV Helmer Hansen) methane concentrations, complemented by the nearby monitoring site at Zeppelin mountain. In order to assess the atmospheric impact of emissions from seabed methane hydrates, we characterised the local and long range atmospheric transport during the aircraft campaign and different scenarios for the emission sources. We present a range of upper bounds for the CH4 emissions during the campaign period as well as the methodologies used to obtain them. The methodologies include a box model, Lagrangian transport and elementary inverse modelling. We emphasise the analysis of the aircraft data. We discuss in detail the different methodologies used for determining the upper flux bounds as well as their uncertainties and limitations. The additional information provided by the ship and station observations will be briefly mentioned.
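    A minimal sketch of the simplest of the methodologies mentioned (a mass-balance box model) with entirely hypothetical inputs; the Lagrangian and inverse-modelling estimates used in the study are not reproduced here.

```python
# Hedged sketch of a mass-balance box model (hypothetical numbers): an upper
# bound on the area-averaged CH4 flux from an observed enhancement, the mean
# wind, the boundary-layer depth, and the along-wind fetch over the seep area.
def box_model_flux(delta_ch4_ppb, wind_ms, pbl_height_m, fetch_m,
                   air_molar_density=41.6):       # mol m^-3 near the surface
    """Return the implied CH4 flux in mol m^-2 s^-1."""
    delta_x = delta_ch4_ppb * 1e-9                # mole fraction enhancement
    # column enhancement advected out of the box, divided by the fetch
    return delta_x * air_molar_density * wind_ms * pbl_height_m / fetch_m

# 5 ppb enhancement, 8 m/s wind, 500 m boundary layer, 50 km fetch
flux = box_model_flux(5.0, 8.0, 500.0, 50e3)
print(f"upper-bound flux ~ {flux:.2e} mol m^-2 s^-1 "
      f"(~{flux * 16.04 * 1e3 * 86400:.1f} mg CH4 m^-2 d^-1)")
```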

  12. When, not if: the inescapability of an uncertain climate future.

    PubMed

    Ballard, Timothy; Lewandowsky, Stephan

    2015-11-28

    Climate change projections necessarily involve uncertainty. Analysis of the physics and mathematics of the climate system reveals that greater uncertainty about future temperature increases is nearly always associated with greater expected damages from climate change. In contrast to those normative constraints, uncertainty is frequently cited in public discourse as a reason to delay mitigative action. This failure to understand the actual implications of uncertainty may incur notable future costs. It is therefore important to communicate uncertainty in a way that improves people's understanding of climate change risks. We examined whether responses to projections were influenced by whether the projection emphasized uncertainty in the outcome or in its time of arrival. We presented participants with statements and graphs indicating projected increases in temperature, sea levels, ocean acidification and a decrease in arctic sea ice. In the uncertain-outcome condition, statements reported the upper and lower confidence bounds of the projected outcome at a fixed time point. In the uncertain time-of-arrival condition, statements reported the upper and lower confidence bounds of the projected time of arrival for a fixed outcome. Results suggested that people perceived the threat as more serious and were more likely to encourage mitigative action in the time-uncertain condition than in the outcome-uncertain condition. This finding has implications for effectively communicating the climate change risks to policy-makers and the general public. © 2015 The Author(s).

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony, Stephen

    The Sandia hyperspectral upper-bound spectrum algorithm (hyper-UBS) is a cosmic ray despiking algorithm for hyperspectral data sets. When naturally-occurring, high-energy (gigaelectronvolt) cosmic rays impact the earth's atmosphere, they create an avalanche of secondary particles which will register as a large, positive spike on any spectroscopic detector they hit. Cosmic ray spikes are therefore an unavoidable spectroscopic contaminant which can interfere with subsequent analysis. A variety of cosmic ray despiking algorithms already exist and can potentially be applied to hyperspectral data matrices, most notably the upper-bound spectrum data matrices (UBS-DM) algorithm by Dongmao Zhang and Dor Ben-Amotz which served as the basis for the hyper-UBS algorithm. However, the existing algorithms either cannot be applied to hyperspectral data, require information that is not always available, introduce undesired spectral bias, or have otherwise limited effectiveness for some experimentally relevant conditions. Hyper-UBS is more effective at removing a wider variety of cosmic ray spikes from hyperspectral data without introducing undesired spectral bias. In addition to the core algorithm the Sandia hyper-UBS software package includes additional source code useful in evaluating the effectiveness of the hyper-UBS algorithm. The accompanying source code includes code to generate simulated hyperspectral data contaminated by cosmic ray spikes, several existing despiking algorithms, and code to evaluate the performance of the despiking algorithms on simulated data.
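    A simplified illustration of the upper-bound-spectrum idea (not the Sandia hyper-UBS algorithm itself): because cosmic-ray spikes are large, positive, and uncorrelated between replicate acquisitions, the channel-wise minimum of the replicates bounds the true spectrum from above, and samples exceeding that bound by much more than the noise can be flagged and replaced.

```python
# Simplified illustration of the upper-bound-spectrum idea, not the Sandia
# hyper-UBS algorithm: the channel-wise minimum of replicate spectra is an
# upper bound on the true spectrum, and samples far above it are spikes.
import numpy as np

def despike(replicates, n_sigma=5.0):
    """replicates: array of shape (n_replicates, n_channels)."""
    ubs = replicates.min(axis=0)                      # upper-bound spectrum
    dev = replicates - ubs
    noise = 1.4826 * np.median(np.abs(dev)) + 1e-12   # robust global noise scale
    spikes = dev > n_sigma * noise
    cleaned = np.where(spikes, ubs, replicates)       # replace spiked samples
    return cleaned, spikes

rng = np.random.default_rng(1)
true = 100.0 + 50.0 * np.exp(-0.5 * ((np.arange(256) - 128) / 10.0) ** 2)
reps = true + rng.normal(0.0, 3.0, size=(4, 256))
reps[2, 40] += 800.0                                  # inject a cosmic ray
cleaned, spikes = despike(reps)
print("spikes found at (replicate, channel):", np.argwhere(spikes))
```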

  14. Effective elastic moduli of triangular lattice material with defects

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyu; Liang, Naigang

    2012-10-01

    This paper presents an attempt to extend homogenization analysis to the effective elastic moduli of triangular lattice materials with microstructural defects. The proposed homogenization method adopts a process based on homogeneous strain boundary conditions, the micro-scale constitutive law and the micro-to-macro static operator to establish the relationship between the macroscopic properties of a given lattice material and its micro-discrete behaviors and structures. Further, the idea behind Eshelby's equivalent eigenstrain principle is introduced to replace a defect distribution by an imagined displacement field (eigendisplacement) with the equivalent mechanical effect, and the triangular lattice Green's function technique is developed to solve the eigendisplacement field. The proposed method therefore allows handling of different types of microstructural defects as well as their arbitrary spatial distribution within a general and compact framework. Analytical closed-form estimations are derived, in the case of the dilute limit, for all the effective elastic moduli of stretch-dominated triangular lattices containing fractured cell walls and missing cells, respectively. Comparisons with numerical results, the Hashin-Shtrikman upper bounds and uniform-strain upper bounds are also presented to illustrate the predictive capability of the proposed method for lattice materials. Based on this work, we propose that not only the effective Young's and shear moduli but also the effective Poisson's ratio of triangular lattice materials depend on the number density of fractured cell walls and their spatial arrangements.

  15. Constraining the phantom braneworld model from cosmic structure sizes

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sourav; Kousvos, Stefanos R.

    2017-11-01

    We consider the phantom braneworld model in the context of the maximum turnaround radius, R_TA,max, of a stable, spherical cosmic structure with a given mass. The maximum turnaround radius is the point where the attraction due to the central inhomogeneity gets balanced by the repulsion of the ambient dark energy, beyond which a structure cannot hold any mass, thereby giving the maximum upper bound on the size of a stable structure. In this work we derive an analytical expression for R_TA,max in this model using cosmological scalar perturbation theory. Using this we numerically constrain the parameter space, including a bulk cosmological constant and the Weyl fluid, from the mass versus observed size data for some nearby, nonvirial cosmic structures. We use different values of the matter density parameter Ωm, both larger and smaller than that of Λ cold dark matter, as the input in our analysis. We show in particular that (a) with a vanishing bulk cosmological constant the predicted upper bound is always greater than what is actually observed, and a similar conclusion holds if the bulk cosmological constant is negative; (b) if it is positive, the predicted maximum size can go considerably below what is actually observed and, owing to the involved nature of the field equations, this leads to interesting constraints not only on the bulk cosmological constant itself but on the whole parameter space of the theory.
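    For orientation only (this is the plain ΛCDM baseline, not the phantom-braneworld expression derived in the paper), balancing the Newtonian attraction GM/R^2 against the dark-energy repulsion (Λc^2/3)R gives R_TA,max = (3GM/(Λc^2))^(1/3), which the sketch below evaluates for group-to-cluster masses.

```python
# Hedged baseline, not the braneworld expression derived in the paper: in
# plain LambdaCDM, balancing GM/R^2 against (Lambda c^2 / 3) R gives
# R_TA,max = (3 G M / (Lambda c^2))^(1/3).
G = 6.674e-11            # m^3 kg^-1 s^-2
MPC = 3.086e22           # m
M_SUN = 1.989e30         # kg

def r_turnaround_max_mpc(mass_msun, h0_km_s_mpc=70.0, omega_lambda=0.7):
    h0 = h0_km_s_mpc * 1e3 / MPC                 # s^-1
    lam_c2 = 3.0 * omega_lambda * h0**2          # Lambda * c^2
    r = (3.0 * G * mass_msun * M_SUN / lam_c2) ** (1.0 / 3.0)
    return r / MPC

for m in (1e13, 1e14, 1e15):                     # group to cluster scales
    print(f"M = {m:.0e} Msun -> R_TA,max ~ {r_turnaround_max_mpc(m):.1f} Mpc")
```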

  16. Searching for New Spin- and Velocity-Dependent Interactions by Spin Relaxation of Polarized ^{3}He Gas.

    PubMed

    Yan, H; Sun, G A; Peng, S M; Zhang, Y; Fu, C; Guo, H; Liu, B Q

    2015-10-30

    We have constrained possible new interactions which produce nonrelativistic potentials between polarized neutrons and unpolarized matter proportional to α σ·v, where σ is the neutron spin vector and v is the relative velocity. We use existing data from laboratory measurements on the very long T_1 and T_2 spin relaxation times of polarized ^3He gas in glass cells. Using the best available measured T_2 of polarized ^3He gas atoms as the polarized source and the Earth as an unpolarized source, we obtain constraints on two new interactions. We present a new experimental upper bound on possible vector-axial-vector (V_VA) type interactions for ranges between 1 and 10^8 m. In combination with previous results, we set the most stringent experimental limits on g_V g_A ranging from ~μm to ~10^8 m. We also report what is to our knowledge the first experimental upper limit on the possible torsion fields induced by the Earth on its surface. Dedicated experiments could further improve these bounds by a factor of ~100. Our method of analysis also makes it possible to probe many velocity dependent interactions which depend on the spins of both neutrons and other particles which have never been searched for before experimentally.

  17. Apparent dynamic contact angle of an advancing gas--liquid meniscus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalliadasis, S.; Chang, H.

    1994-01-01

    The steady motion of an advancing meniscus in a gas-filled capillary tube involves a delicate balance of capillary, viscous, and intermolecular forces. The limit of small capillary numbers Ca (dimensionless speeds) is analyzed here with a matched asymptotic analysis that links the outer capillary region to the precursor film in front of the meniscus through a lubricating film. The meniscus shape in the outer region is constructed and the apparent dynamic contact angle Θ that the meniscus forms with the solid surface is derived as a function of the capillary number, the capillary radius, and the Hamaker constant for intermolecular forces, under conditions of weak gas-solid interaction, which lead to fast spreading of the precursor film and weak intermolecular forces relative to viscous forces within the lubricating film. The dependence on intermolecular forces is very weak and the contact angle expression has a tight upper bound tan Θ = 7.48 Ca^{1/3} for thick films, which is independent of the Hamaker constant. This upper bound is in very good agreement with existing experimental data for wetting fluids in any capillary and for partially wetting fluids in a prewetted capillary. Significant correction to the Ca^{1/3} dependence occurs only at very low Ca, where the intermolecular forces become more important and tan Θ diverges slightly from the above asymptotic behavior toward lower values.
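
    The tan Θ = 7.48 Ca^{1/3} upper bound quoted above is straightforward to evaluate numerically. A minimal sketch (the sample capillary numbers are arbitrary illustrations, not values from the paper):

```python
# Hedged sketch: evaluate the upper-bound relation tan(Theta) = 7.48 * Ca**(1/3)
# quoted above. The sample capillary numbers are arbitrary illustrations.
import math

def apparent_contact_angle_deg(capillary_number: float) -> float:
    """Upper bound on the apparent dynamic contact angle (degrees) at a given Ca."""
    return math.degrees(math.atan(7.48 * capillary_number ** (1.0 / 3.0)))

for ca in (1e-6, 1e-4, 1e-2):
    print(f"Ca = {ca:.0e}  ->  Theta <= {apparent_contact_angle_deg(ca):.1f} deg")
```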

  18. Pioneer Venus orbiter search for Venusian lightning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borucki, W.J.; Dyer, J.W.; Phillips, J.R.

    1991-07-01

    During 1988 and 1990, the star sensor aboard the Pioneer Venus orbiter (PVO) was used to search for optical pulses from lightning on the nightside of Venus. Useful data were obtained for 53 orbits in 1988 and 55 orbits in 1990. During this period, approximately 83 s of search time plus 7749 s of control data were obtained. The results again find no optical evidence for lightning activity. For the region that was observed during 1988, the results imply that the upper bound on short-duration flashes is 4 × 10⁻⁷ flashes/km²/s for flashes that are at least 50% as bright as typical terrestrial lightning. During 1990, when the 2-Hz filter was used, the results imply an upper bound of 1 × 10⁻⁷ flashes/km²/s for long-duration flashes at least 1.6% as bright as typical terrestrial lightning flashes, or 33% as bright as the pulses observed by Venera 9. The upper bounds on the flash rates for the 1988 and 1990 searches are twice and one half the global terrestrial rate, respectively. These two searches covered the region from 60°N latitude to 30°S latitude, 250° to 350° longitude, and the region from 45°N latitude to 55°S latitude, 155° to 300° longitude. Both searches sampled much of the nightside region from the dawn terminator to within 4 hours of the dusk terminator. These searches covered a much larger latitude range than any previous search. The results show that the Beta and Phoebe Regio areas previously identified by Russell et al. (1988) as areas with high rates of lightning activity were not active during the two seasons of the observations. If the authors' upper bounds on the nightside flash rate are representative of the entire planet, the results imply that the global flash rate and energy dissipation rate derived by Krasnopol'sky (1983) from his observation of a single storm are too high.

  19. Bounds on graviton mass using weak lensing and SZ effect in galaxy clusters

    NASA Astrophysics Data System (ADS)

    Rana, Akshay; Jain, Deepak; Mahajan, Shobhit; Mukherjee, Amitabha

    2018-06-01

    In General Relativity (GR), the graviton is massless. However, a common feature of several theoretical alternatives to GR is a non-zero mass for the graviton. These theories can be described as massive gravity theories. Despite many theoretical complexities in these theories, on phenomenological grounds the implications of massive gravity have been widely used to put bounds on the graviton mass. One of the generic implications of giving a mass to the graviton is that the gravitational potential follows a Yukawa-like fall-off. We use this feature of massive gravity theories to probe the mass of the graviton using the largest gravitationally bound objects, namely galaxy clusters. In this work, we use the mass estimates of galaxy clusters measured at various cosmologically defined radial distances via weak lensing (WL) and the Sunyaev-Zel'dovich (SZ) effect. We also use model-independent values of the Hubble parameter H(z) smoothed by a non-parametric method, the Gaussian process. Within the 1σ confidence region, we obtain a graviton mass m_g < 5.9 × 10⁻³⁰ eV with the corresponding Compton length scale λ_g > 6.82 Mpc from weak lensing, and m_g < 8.31 × 10⁻³⁰ eV with λ_g > 5.012 Mpc from the SZ effect. This analysis improves the upper bound on the graviton mass obtained earlier from galaxy clusters.
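
    The quoted Compton length scales follow from the mass bounds under the convention λ_g = h/(m_g c). A minimal sketch of that conversion; the convention itself is an assumption inferred from the weak-lensing numbers above, not stated explicitly in the record:

```python
# Hedged sketch: convert a graviton rest-energy bound (eV) into a Compton
# wavelength bound (Mpc), assuming lambda_g = h / (m_g c) = hc / (m_g c^2).
HC_EV_M = 1.23984e-6    # h*c in eV*m
MPC_IN_M = 3.0857e22    # one megaparsec in metres

def compton_length_mpc(m_g_ev: float) -> float:
    """Compton wavelength (Mpc) of a graviton with rest energy m_g_ev (eV)."""
    return HC_EV_M / m_g_ev / MPC_IN_M

print(compton_length_mpc(5.9e-30))   # ~6.8 Mpc, matching the quoted weak-lensing bound
print(compton_length_mpc(8.31e-30))  # ~4.8 Mpc, close to the quoted SZ bound
```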

  20. Alder Establishment and Channel Dynamics in a Tributary of the South Fork Eel River, Mendocino County, California

    Treesearch

    William J. Trush; Edward C. Connor; Knight Alan W.

    1989-01-01

    Riparian communities established along Elder Creek, a tributary of the upper South Fork Eel River, are bounded by two frequencies of periodic flooding. The upper limit for the riparian zone occurs at bankfull stage. The lower riparian limit is associated with a more frequent stage height, called the active channel, having an exceedance probability of 11 percent on a...

  1. Variational bounds on the temperature distribution

    NASA Astrophysics Data System (ADS)

    Kalikstein, Kalman; Spruch, Larry; Baider, Alberto

    1984-02-01

    Upper and lower stationary or variational bounds are obtained for functions which satisfy parabolic linear differential equations. (The error in the bound, that is, the difference between the bound on the function and the function itself, is of second order in the error in the input function, and the error is of known sign.) The method is applicable to a range of functions associated with equalization processes, including heat conduction, mass diffusion, electric conduction, fluid friction, the slowing down of neutrons, and certain limiting forms of the random walk problem, under conditions which are not unduly restrictive: in heat conduction, for example, we do not allow the thermal coefficients or the boundary conditions to depend upon the temperature, but the thermal coefficients can be functions of space and time and the geometry is unrestricted. The variational bounds follow from a maximum principle obeyed by the solutions of these equations.

  2. A critical examination of the validity of simplified models for radiant heat transfer analysis.

    NASA Technical Reports Server (NTRS)

    Toor, J. S.; Viskanta, R.

    1972-01-01

    Examination of the directional effects of the simplified models by comparing the experimental data with the predictions based on simple and more detailed models for the radiation characteristics of surfaces. Analytical results indicate that the constant property diffuse and specular models do not yield the upper and lower bounds on local radiant heat flux. In general, the constant property specular analysis yields higher values of irradiation than the constant property diffuse analysis. A diffuse surface in the enclosure appears to destroy the effect of specularity of the other surfaces. Semigray and gray analyses predict the irradiation reasonably well provided that the directional properties and the specularity of the surfaces are taken into account. The uniform and nonuniform radiosity diffuse models are in satisfactory agreement with each other.

  3. Swarming behaviors in multi-agent systems with nonlinear dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Wenwu, E-mail: wenwuyu@gmail.com; School of Electrical and Computer Engineering, RMIT University, Melbourne VIC 3001; Chen, Guanrong

    2013-12-15

    The dynamic analysis of a continuous-time multi-agent swarm model with nonlinear profiles is investigated in this paper. It is shown that, under mild conditions, all agents in a swarm can reach cohesion within a finite time, where the upper bounds of the cohesion are derived in terms of the parameters of the swarm model. The results are then generalized by considering stochastic noise and switching between nonlinear profiles. Furthermore, swarm models with limited sensing range inducing changing communication topologies and unbounded repulsive interactions between agents are studied by switching system and nonsmooth analysis. Here, the sensing range of each agent is limited and the possibility of collision among nearby agents is high. Finally, simulation results are presented to demonstrate the validity of the theoretical analysis.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, E.B. Jr.

    Various methods for the calculation of lower bounds for eigenvalues are examined, including those of Weinstein, Temple, Bazley and Fox, Gay, and Miller. It is shown how all of these can be derived in a unified manner by the projection technique. The alternate forms obtained for the Gay formula show how a considerably improved method can be readily obtained. Applied to the ground state of the helium atom with a simple screened hydrogenic trial function, this new method gives a lower bound closer to the true energy than the best upper bound obtained with this form of trial function. Possible routes to further improved methods are suggested.

  5. Upper Bounds on the Expected Value of a Convex Function Using Gradient and Conjugate Function Information.

    DTIC Science & Technology

    1987-08-01

    ...of the absolute difference between the random variable and its mean. Gassmann and Ziemba (1986) provide a weaker bound that does not require... Comparisons of bounds: Gassmann and Ziemba (1986) extend an idea... solution of the following linear program (see Gassmann and Ziemba (1986), Theorem 1)...

  6. Bounds on Block Error Probability for Multilevel Concatenated Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana

    1996-01-01

    Maximum likelihood decoding of long block codes is not feasible due to its large complexity. Some classes of codes are shown to be decomposable into multilevel concatenated codes (MLCC). For these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC. We use this bound to evaluate the difference in performance between different decompositions of some codes. The examples given show that a significant reduction in complexity can be achieved by increasing the number of decoding stages. The resulting performance degradation varies for different decompositions. A guideline is given for finding good m-level decompositions.
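
    The bound derived in the paper is specific to multilevel concatenated codes, but the general flavor of such estimates can be illustrated with the standard union upper bound on block error probability for soft-decision ML decoding over a BPSK/AWGN channel. A minimal sketch, using the (7,4) Hamming code's weight enumerator purely as a stand-in example, not the codes analyzed in the paper:

```python
# Hedged sketch: classical union upper bound on block-error probability for a
# linear block code with weight enumerator A_w on a BPSK/AWGN channel,
#   P_B <= sum_w A_w * Q( sqrt(2 * w * R * Eb/N0) ).
import math

def q_function(x: float) -> float:
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_block_error(weight_enum: dict, n: int, k: int, ebno_db: float) -> float:
    rate = k / n
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(a_w * q_function(math.sqrt(2.0 * w * rate * ebno))
               for w, a_w in weight_enum.items())

# Weight enumerator of the (7,4) Hamming code: A_3 = A_4 = 7, A_7 = 1.
hamming74 = {3: 7, 4: 7, 7: 1}
for snr_db in (2.0, 4.0, 6.0, 8.0):
    print(f"Eb/N0 = {snr_db} dB  ->  P_B <= {union_bound_block_error(hamming74, 7, 4, snr_db):.3e}")
```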

  7. New Anomalous Lieb-Robinson Bounds in Quasiperiodic XY Chains

    NASA Astrophysics Data System (ADS)

    Damanik, David; Lemm, Marius; Lukic, Milivoje; Yessen, William

    2014-09-01

    We announce and sketch the rigorous proof of a new kind of anomalous (or sub-ballistic) Lieb-Robinson (LR) bound for an isotropic XY chain in a quasiperiodic transversal magnetic field. Instead of the usual effective light cone |x| ≤ v|t|, we obtain |x| ≤ v|t|^α for some 0 < α < 1. We can characterize the allowed values of α exactly as those exceeding the upper transport exponent α_u^+ of a one-body Schrödinger operator. To our knowledge, this is the first rigorous derivation of anomalous quantum many-body transport. We also discuss anomalous LR bounds with power-law tails for a random dimer field.

  8. An upper limit on Pluto's ionosphere from radio occultation measurements with New Horizons

    NASA Astrophysics Data System (ADS)

    Hinson, D. P.; Linscott, I. R.; Strobel, D. F.; Tyler, G. L.; Bird, M. K.; Pätzold, M.; Summers, M. E.; Stern, S. A.; Ennico, K.; Gladstone, G. R.; Olkin, C. B.; Weaver, H. A.; Woods, W. W.; Young, L. A.; New Horizons Science Team

    2018-06-01

    On 14 July 2015 New Horizons performed a radio occultation (RO) that sounded Pluto's neutral atmosphere and ionosphere. The solar zenith angle was 90.2° (sunset) at entry and 89.8° (sunrise) at exit. We examined the data for evidence of an ionosphere, using the same method of analysis as in a previous investigation of the neutral atmosphere (Hinson et al., 2017). No ionosphere was detected. The measurements are more accurate at occultation exit, where the 1-sigma sensitivity in integrated electron content (IEC) is 2.3 × 10¹¹ cm⁻². The corresponding upper bound on the peak electron density at the terminator is about 1000 cm⁻³. We constructed a model for the ionosphere and used it to guide the analysis and interpretation of the RO data. Owing to the large abundance of CH4 at ionospheric heights, the dominant ions are molecular and the electron densities are relatively small. The model predicts a peak IEC of 1.8 × 10¹¹ cm⁻² for an occultation at the terminator, slightly smaller than the threshold of detection by New Horizons.

  9. Precision Measurement of the Electron's Electric Dipole Moment Using Trapped Molecular Ions

    NASA Astrophysics Data System (ADS)

    Cairncross, William B.; Gresh, Daniel N.; Grau, Matt; Cossel, Kevin C.; Roussy, Tanya S.; Ni, Yiqi; Zhou, Yan; Ye, Jun; Cornell, Eric A.

    2017-10-01

    We describe the first precision measurement of the electron's electric dipole moment (d_e) using trapped molecular ions, demonstrating the application of spin interrogation times over 700 ms to achieve high sensitivity and stringent rejection of systematic errors. Through electron spin resonance spectroscopy on ¹⁸⁰Hf¹⁹F⁺ in its metastable ³Δ₁ electronic state, we obtain d_e = (0.9 ± 7.7_stat ± 1.7_syst) × 10⁻²⁹ e cm, resulting in an upper bound of |d_e| < 1.3 × 10⁻²⁸ e cm (90% confidence). Our result provides independent confirmation of the current upper bound of |d_e| < 9.4 × 10⁻²⁹ e cm [J. Baron et al., New J. Phys. 19, 073029 (2017), 10.1088/1367-2630/aa708e], and offers the potential to improve on this limit in the near future.

  10. Limit cycles via higher order perturbations for some piecewise differential systems

    NASA Astrophysics Data System (ADS)

    Buzzi, Claudio A.; Lima, Maurício Firmino Silva; Torregrosa, Joan

    2018-05-01

    A classical perturbation problem is the polynomial perturbation of the harmonic oscillator, (x', y') = (−y + εf(x, y, ε), x + εg(x, y, ε)). In this paper we study the limit cycles that bifurcate from the period annulus via piecewise polynomial perturbations in two zones separated by a straight line. We prove that, for polynomial perturbations of degree n, no more than Nn − 1 limit cycles appear up to a study of order N. We also show that this upper bound is reached for orders one and two. Moreover, we study this problem in some classes of piecewise Liénard differential systems, providing better upper bounds for higher order perturbations in ε and showing also when they are reached. The Poincaré-Pontryagin-Melnikov theory is the main technique used to prove all the results.

  11. Non-localization of eigenfunctions for Sturm-Liouville operators and applications

    NASA Astrophysics Data System (ADS)

    Liard, Thibault; Lissy, Pierre; Privat, Yannick

    2018-02-01

    In this article, we investigate a non-localization property of the eigenfunctions of Sturm-Liouville operators A_a = −∂_xx + a(·) Id with Dirichlet boundary conditions, where a(·) runs over the bounded nonnegative potential functions on the interval (0, L) with L > 0. More precisely, we address the extremal spectral problem of minimizing the L²-norm of a function e(·) on a measurable subset ω of (0, L), where e(·) runs over all eigenfunctions of A_a, at the same time with respect to all subsets ω having a prescribed measure and all L∞ potential functions a(·) having a prescribed essential upper bound. We provide some existence and qualitative properties of the minimizers, as well as precise lower and upper estimates on the optimal value. Several consequences in control and stabilization theory are then highlighted.

  12. Fisher information of a single qubit interacts with a spin-qubit in the presence of a magnetic field

    NASA Astrophysics Data System (ADS)

    Metwally, N.

    2018-06-01

    In this contribution, quantum Fisher information is used to estimate the parameters of a central qubit interacting with a single spin-qubit. The effect of the longitudinal, transverse and rotating strengths of the magnetic field on the degree of estimation is discussed. It is shown that, in the resonance case, the number of peaks, and consequently the size of the estimation regions, increases as the rotating magnetic field strength increases. The precision of estimation of the central-qubit parameters depends on the initial state settings of the central qubit and the spin-qubit, namely whether they encode classical or quantum information. The upper bounds of the estimation degree are large if the two qubits encode classical information. In the non-resonance case, the estimation degree depends on which of the longitudinal and transverse strengths is larger. The coupling constant between the central qubit and the spin-qubit has different effects on the estimation degree of the weight and the phase parameters: the possibility of estimating the weight parameter decreases as the coupling constant increases, while it increases for the phase parameter. For a large number of spin particles, i.e., a spin bath, the upper bounds of the Fisher information with respect to the weight parameter of the central qubit decrease as the number of spin particles increases. As the interaction time increases, the upper bounds appear at different initial values of the weight parameter.

  13. Bounds on OPE coefficients from interference effects in the conformal collider

    NASA Astrophysics Data System (ADS)

    Córdova, Clay; Maldacena, Juan; Turiaci, Gustavo J.

    2017-11-01

    We apply the average null energy condition to obtain upper bounds on the three-point function coefficients of two stress tensors and a scalar operator, ⟨TTO_i⟩, in general CFTs. We also constrain the gravitational anomaly of U(1) currents in four-dimensional CFTs, which is encoded in three-point functions of the form ⟨TTJ⟩. In theories with a large-N AdS dual we translate these bounds into constraints on the coefficient of a higher derivative bulk term of the form ∫ φ W². We speculate that these bounds also apply in de Sitter space. In this case our results constrain inflationary observables, such as the amplitude of chiral gravity waves that originate from higher derivative terms in the Lagrangian of the form φ W W*.

  14. Generalized Hofmann quantum process fidelity bounds for quantum filters

    NASA Astrophysics Data System (ADS)

    Sedlák, Michal; Fiurášek, Jaromír

    2016-04-01

    We propose and investigate bounds on the quantum process fidelity of quantum filters, i.e., probabilistic quantum operations represented by a single Kraus operator K . These bounds generalize the Hofmann bounds on the quantum process fidelity of unitary operations [H. F. Hofmann, Phys. Rev. Lett. 94, 160504 (2005), 10.1103/PhysRevLett.94.160504] and are based on probing the quantum filter with pure states forming two mutually unbiased bases. Determination of these bounds therefore requires far fewer measurements than full quantum process tomography. We find that it is particularly suitable to construct one of the probe bases from the right eigenstates of K , because in this case the bounds are tight in the sense that if the actual filter coincides with the ideal one, then both the lower and the upper bounds are equal to 1. We theoretically investigate the application of these bounds to a two-qubit optical quantum filter formed by the interference of two photons on a partially polarizing beam splitter. For an experimentally convenient choice of factorized input states and measurements we study the tightness of the bounds. We show that more stringent bounds can be obtained by more sophisticated processing of the data using convex optimization and we compare our methods for different choices of the input probe states.

  15. A Multi-Armed Bandit Approach to Following a Markov Chain

    DTIC Science & Technology

    2017-06-01

    focus on the House to Café transition (p1,4). We develop a Multi-Armed Bandit approach for efficiently following this target, where each state takes the...and longitude (each state corresponding to a physical location and a small set of activities). The searcher would then apply our approach on this...the target’s transition probability and the true probability over time. Further, we seek to provide upper bounds (i.e., worst case bounds) on the

  16. Higher order terms in the inflation potential and the lower bound on the tensor to scalar ratio r

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Destri, C., E-mail: Claudio.Destri@mib.infn.it; Vega, H.J. de, E-mail: devega@lpthe.jussieu.fr; Observatoire de Paris, LERMA, Laboratoire Associe au CNRS UMR 8112, 61, Avenue de l'Observatoire, 75014 Paris

    Research Highlights: > In the Ginsburg-Landau (G-L) approach the data favor new inflation over chaotic inflation. > n_s and r fall inside a universal banana-shaped region in G-L new inflation. > The banana region for the observed value n_s = 0.964 implies 0.021 < r < 0.053. > The fermion condensate inflaton potential is a double well in the G-L class. - Abstract: The MCMC analysis of the CMB + LSS data in the context of the Ginsburg-Landau approach to inflation indicated that the fourth degree double-well inflaton potential in new inflation gives an excellent fit of the present CMB and LSS data. This provided a lower bound for the ratio r of the tensor to scalar fluctuations and a most probable value r ≈ 0.05, within reach of the forthcoming CMB observations. In this paper we systematically analyze the effects of arbitrarily higher order terms in the inflaton potential on the CMB observables: the spectral index n_s and the ratio r. Furthermore, we compute in closed form the inflaton potential dynamically generated when the inflaton field is a fermion condensate in the inflationary universe. This inflaton potential turns out to belong to the Ginsburg-Landau class too. The theoretical values in the (n_s, r) plane for all double-well inflaton potentials in the Ginsburg-Landau approach (including the potential generated by fermions) fall inside a universal banana-shaped region B. The upper border of the banana-shaped region B is given by the fourth order double-well potential and provides an upper bound for the ratio r. The lower border of B is defined by the quadratic plus an infinite barrier inflaton potential and provides a lower bound for the ratio r. For example, the current best value of the spectral index n_s = 0.964 implies that r lies in the interval 0.021 < r < 0.053. Interestingly enough, this range is within reach of forthcoming CMB observations.

  17. Lepton-flavor violating B decays in generic Z' models

    NASA Astrophysics Data System (ADS)

    Crivellin, Andreas; Hofer, Lars; Matias, Joaquim; Nierste, Ulrich; Pokorski, Stefan; Rosiek, Janusz

    2015-09-01

    LHCb has reported deviations from the Standard Model in b → s μ⁺μ⁻ transitions, for which a new neutral gauge boson is a prime candidate for an explanation. As this gauge boson has to couple in a flavor nonuniversal way to muons and electrons in order to explain R_K, it is interesting to examine the possibility that lepton flavor is also violated, especially in the light of the CMS excess in h → τ±μ∓. In this article, we investigate the prospects to discover the lepton-flavor violating modes B → K(*)τ±μ∓, B_s → τ±μ∓ and B → K(*)μ±e∓, B_s → μ±e∓. For this purpose we consider a simplified model in which new-physics effects originate from an additional neutral gauge boson (Z') with generic couplings to quarks and leptons. The constraints from τ → 3μ, τ → μνν̄, μ → eγ, g_μ−2, semileptonic b → s μ⁺μ⁻ decays, B → K(*)νν̄ and B_s−B̄_s mixing are examined. From these decays, we determine upper bounds on the decay rates of lepton-flavor violating B decays. Br(B → Kνν̄) limits the branching ratios of lepton-flavor violating B decays to be smaller than 8 × 10⁻⁵ (2 × 10⁻⁵) for vectorial (left-handed) lepton couplings. However, much stronger bounds can be obtained by a combined analysis of B_s−B̄_s mixing, τ → 3μ, τ → μνν̄ and other rare decays. The bounds depend on the amount of fine-tuning among the contributions to B_s−B̄_s mixing. Allowing for fine-tuning at the percent level we find upper bounds of the order of 10⁻⁶ for branching ratios into τμ final states, while B_s → μ±e∓ is strongly suppressed and only B → K(*)μ±e∓ can be experimentally accessible (with a branching ratio of order 10⁻⁷).

  18. Receive-Noise Analysis of Capacitive Micromachined Ultrasonic Transducers.

    PubMed

    Bozkurt, Ayhan; Yaralioglu, G Goksenin

    2016-11-01

    This paper presents an analysis of thermal (Johnson) noise received from the radiation medium by otherwise noiseless capacitive micromachined ultrasonic transducer (CMUT) membranes operating in their fundamental resonance mode. Determination of thermal noise received by multiple numbers of transducers or a transducer array requires the assessment of cross-coupling through the radiation medium, as well as the self-radiation impedance of the individual transducer. We show that the total thermal noise received by the cells of a CMUT has insignificant correlation, and is independent of the radiation impedance, but is only determined by the mass of each membrane and the electromechanical transformer ratio. The proof is based on the analytical derivations for a simple transducer with two cells, and extended to transducers with numerous cells using circuit simulators. We used a first-order model, which incorporates the fundamental resonance of the CMUT. Noise power is calculated by integrating over the entire spectrum; hence, the presented figures are an upper bound for the noise. The presented analyses are valid for a transimpedance amplifier in the receive path. We use the analysis results to calculate the minimum detectable pressure of a CMUT. We also provide an analysis based on the experimental data to show that output noise power is limited by and comparable to the theoretical upper limit.

  19. Ionospheric Signatures in Radio Occultation Data

    NASA Technical Reports Server (NTRS)

    Mannucci, Anthony J.; Ao, Chi; Iijima, Byron A.; Kursinkski, E. Robert

    2012-01-01

    We can robustly extend the radio occultation data record by 6 years (+60%) by developing a single-frequency processing method for GPS/MET data. We will produce a calibrated data set with profile-by-profile data characterization to determine robust upper bounds on ionospheric bias. This is part of an effort to produce a calibrated RO data set addressing other key error sources such as upper boundary initialization. Planned: AIRS-GPS water vapor cross validation (water vapor climatology and trends).

  20. Uncertainty analysis for absorbed dose from a brain receptor imaging agent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aydogan, B.; Miller, L.F.; Sparks, R.B.

    Absorbed dose estimates are known to contain uncertainties. A recent literature search indicates that prior to this study no rigorous investigation of the uncertainty associated with absorbed dose had been undertaken. A method of uncertainty analysis for absorbed dose calculations has been developed and implemented for the brain receptor imaging agent ¹²³I-IPT. The two major sources of uncertainty considered were the uncertainty associated with the determination of residence time and that associated with the determination of the S values. There are many sources of uncertainty in the determination of the S values, but only the inter-patient organ mass variation was considered in this work. The absorbed dose uncertainties were determined for lung, liver, heart and brain. Ninety-five percent confidence intervals of the organ absorbed dose distributions for each patient and for a seven-patient population group were determined by the Latin Hypercube Sampling method. For an individual patient, the upper bound of the 95% confidence interval of the absorbed dose was found to be about 2.5 times larger than the estimated mean absorbed dose. For the seven-patient population the upper bound of the 95% confidence interval of the absorbed dose distribution was around 45% more than the estimated population mean. For example, the 95% confidence interval of the population liver dose distribution was found to be between 1.49E+07 Gy/MBq and 4.65E+07 Gy/MBq with a mean of 2.52E+07 Gy/MBq. This study concluded that patients in a population receiving ¹²³I-IPT could receive absorbed doses as much as twice as large as the standard estimated absorbed dose due to these uncertainties.
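
    A minimal sketch of the kind of Latin Hypercube propagation described above, with absorbed dose modelled MIRD-style as residence time times S value; the distributions and parameter values are illustrative assumptions, not the study's data:

```python
# Hedged sketch: Latin Hypercube propagation of input uncertainties into an
# absorbed-dose estimate, dose = residence_time * S_value (MIRD-style).
# Lognormal spreads and nominal values are illustrative assumptions only.
import numpy as np
from scipy.stats import qmc, lognorm

sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=10_000)                                         # stratified uniforms in [0, 1)^2

residence_time = lognorm.ppf(u[:, 0], s=np.log(1.3), scale=2.0)      # h (hypothetical)
s_value = lognorm.ppf(u[:, 1], s=np.log(1.3), scale=1.0e-5)          # Gy/(MBq*h) (hypothetical)

dose = residence_time * s_value                                      # Gy/MBq
lo, hi = np.percentile(dose, [2.5, 97.5])
print(f"mean = {dose.mean():.2e} Gy/MBq, 95% CI = [{lo:.2e}, {hi:.2e}]")
```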

  1. Value-of-information analysis within a stakeholder-driven research prioritization process in a US setting: an application in cancer genomics.

    PubMed

    Carlson, Josh J; Thariani, Rahber; Roth, Josh; Gralow, Julie; Henry, N Lynn; Esmail, Laura; Deverka, Pat; Ramsey, Scott D; Baker, Laurence; Veenstra, David L

    2013-05-01

    The objective of this study was to evaluate the feasibility and outcomes of incorporating value-of-information (VOI) analysis into a stakeholder-driven research prioritization process in a US-based setting. Within a program to prioritize comparative effectiveness research areas in cancer genomics, over a period of 7 months, we developed decision-analytic models and calculated upper-bound VOI estimates for 3 previously selected genomic tests. Thirteen stakeholders representing patient advocates, payers, test developers, regulators, policy makers, and community-based oncologists ranked the tests before and after receiving VOI results. The stakeholders were surveyed about the usefulness and impact of the VOI findings. The estimated upper-bound VOI ranged from $33 million to $2.8 billion for the 3 research areas. Seven stakeholders indicated the results modified their rankings, 9 stated VOI data were useful, and all indicated they would support its use in future prioritization processes. Some stakeholders indicated that the expected value of sampled information might be the preferred choice when evaluating specific research areas. Our study was limited by the size of, and the potential for selection bias in, the external stakeholder group; the lack of a randomized design to assess the effect of VOI data on rankings; and the use of expected value of perfect information vs. expected value of sample information methods. Value-of-information analyses may have a meaningful role in research topic prioritization for comparative effectiveness research in the United States, particularly when large differences in VOI across topic areas are identified. Additional research is needed to facilitate the use of more complex value-of-information analyses in this setting.
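
    For orientation, upper-bound VOI figures of this kind are typically expected values of perfect information (EVPI). A minimal Monte-Carlo sketch of EVPI for a toy adopt-versus-status-quo decision; the payoffs and distribution are illustrative assumptions, not the study's decision-analytic models:

```python
# Hedged sketch: expected value of perfect information (EVPI) by Monte Carlo,
#   EVPI = E_theta[ max_a U(a, theta) ] - max_a E_theta[ U(a, theta) ].
# The two strategies and the uncertain net-benefit parameter are stand-ins.
import numpy as np

rng = np.random.default_rng(1)
theta = rng.normal(loc=500.0, scale=2000.0, size=100_000)   # uncertain net benefit ($/patient)

def utility(action: str, theta: np.ndarray) -> np.ndarray:
    if action == "adopt_test":
        return theta                      # gain theta per patient if the test is adopted
    return np.zeros_like(theta)           # status quo

actions = ["adopt_test", "status_quo"]
payoffs = np.stack([utility(a, theta) for a in actions])    # shape (2, N)

value_with_current_info = payoffs.mean(axis=1).max()        # commit to the best action now
value_with_perfect_info = payoffs.max(axis=0).mean()        # pick the best action per theta
evpi_per_patient = value_with_perfect_info - value_with_current_info
print(f"EVPI ~ ${evpi_per_patient:,.0f} per affected patient")
```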

  2. Formation of the Aerosol of Space Origin in Earth's Atmosphere

    NASA Technical Reports Server (NTRS)

    Kozak, P. M.; Kruchynenko, V. G.

    2011-01-01

    The problem of the formation of aerosol of space origin in Earth's atmosphere is examined. Meteoroids in the mass range of 10⁻¹⁸–10⁻⁸ g are considered as a source of its origin. The lower bound of the mass range is chosen according to the data presented in the literature; the upper bound is determined in accordance with the theory of Whipple's micrometeorites. Based on the classical equations of deceleration and heating for small meteor bodies we have determined the maximal temperatures of the particles, and the altitudes at which they reach critically low velocities, which can be called velocities of stopping. As a condition for the transformation of a space particle into an aerosol one we have used the condition of not reaching the melting temperature of the meteoroid. The simplified equation of deceleration without Earth gravity and the barometric formula for the atmosphere density are used. In the equation of heat balance the energy loss for heating is neglected. The analytical solution of the simplified equations is used for the analysis.

  3. Cosmology and the neutrino mass ordering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hannestad, Steen; Schwetz, Thomas, E-mail: sth@phys.au.dk, E-mail: schwetz@kit.edu

    We propose a simple method to quantify a possible exclusion of the inverted neutrino mass ordering from cosmological bounds on the sum of the neutrino masses. The method is based on Bayesian inference and allows for a calculation of the posterior odds of normal versus inverted ordering. We apply the method for a specific set of current data from Planck CMB data and large-scale structure surveys, providing an upper bound on the sum of neutrino masses of 0.14 eV at 95% CL. With this analysis we obtain posterior odds for normal versus inverted ordering of about 2:1. If cosmological data is combined with data from oscillation experiments the odds reduce to about 3:2. For an exclusion of the inverted ordering from cosmology at more than 95% CL, an accuracy of better than 0.02 eV is needed for the sum. We demonstrate that such a value could be reached with planned observations of large scale structure by analysing artificial mock data for a EUCLID-like survey.
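
    A minimal sketch of the Bayesian comparison described above: integrate a likelihood for the mass sum over a prior on the lightest neutrino mass for each ordering and take the ratio. The half-Gaussian likelihood width, the flat prior and the rounded splittings are illustrative assumptions, not the paper's actual inputs:

```python
# Hedged sketch: posterior odds of normal (NO) vs inverted (IO) ordering given a
# cosmological constraint on the neutrino mass sum.
import numpy as np

DM21 = 7.5e-5    # eV^2, approximate solar mass-squared splitting
DM31 = 2.5e-3    # eV^2, approximate atmospheric splitting (magnitude)

def mass_sum(m_light, ordering):
    if ordering == "NO":                      # m1 < m2 < m3
        m1 = m_light
        m2 = np.sqrt(m1**2 + DM21)
        m3 = np.sqrt(m1**2 + DM31)
    else:                                     # IO: m3 < m1 < m2
        m3 = m_light
        m1 = np.sqrt(m3**2 + DM31)
        m2 = np.sqrt(m1**2 + DM21)
    return m1 + m2 + m3

sigma = 0.14 / 1.96                           # eV, so 0.14 eV acts as a ~95% bound
m_light = np.linspace(0.0, 0.3, 3001)         # flat prior on the lightest mass (eV)
dm = m_light[1] - m_light[0]

def evidence(ordering):
    s = mass_sum(m_light, ordering)
    return np.sum(np.exp(-0.5 * (s / sigma) ** 2)) * dm   # Riemann sum over the prior

odds = evidence("NO") / evidence("IO")
print(f"posterior odds NO:IO ~ {odds:.1f} : 1")           # roughly 2:1 with these choices
```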

  4. Moderate-magnitude earthquakes induced by magma reservoir inflation at Kīlauea Volcano, Hawai‘i

    USGS Publications Warehouse

    Wauthier, Christelle; Roman, Diana C.; Poland, Michael P.

    2013-01-01

    Although volcano-tectonic (VT) earthquakes often occur in response to magma intrusion, it is rare for them to have magnitudes larger than ~M4. On 24 May 2007, two shallow M4+ earthquakes occurred beneath the upper part of the east rift zone of Kīlauea Volcano, Hawai‘i. An integrated analysis of geodetic, seismic, and field data, together with Coulomb stress modeling, demonstrates that the earthquakes occurred due to strike-slip motion on pre-existing faults that bound Kīlauea Caldera to the southeast and that the pressurization of Kīlauea's summit magma system may have been sufficient to promote faulting. For the first time, we infer a plausible origin to generate rare moderate-magnitude VTs at Kīlauea by reactivation of suitably oriented pre-existing caldera-bounding faults. Rare moderate- to large-magnitude VTs at Kīlauea and other volcanoes can therefore result from reactivation of existing fault planes due to stresses induced by magmatic processes.

  5. New bounding and decomposition approaches for MILP investment problems: Multi-area transmission and generation planning under policy constraints

    DOE PAGES

    Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.

    2016-02-01

    A novel two-phase bounding and decomposition approach to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems is proposed; it considers a large number of operating subproblems, each of which is a convex optimization. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen's inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders' algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen's inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable, but the decomposition phase is required to attain small optimality gaps. Moreover, use of both phases performs better, in terms of convergence speed, than attempting to solve the problem using just the bounding phase or regular Benders decomposition separately.
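
    The two-phase bounding idea can be illustrated on a toy one-dimensional capacity-planning problem with convex recourse: Jensen's inequality applied to the expected-value problem yields a lower bound on the optimal expected cost, and Monte-Carlo evaluation of the resulting candidate plan yields an upper bound. A minimal sketch with purely illustrative costs and demand distribution, not the paper's transmission-planning model:

```python
# Hedged sketch of the bounding idea on a toy problem: choose capacity x at unit
# cost c, pay penalty q per unit of unmet random demand D.
#   f(x) = c*x + E[ q*max(D - x, 0) ]          (recourse is convex in D)
# Jensen: c*x + q*max(E[D] - x, 0) <= f(x), so minimizing the left-hand side
# (the expected-value problem) lower-bounds the true optimum; evaluating the
# resulting plan on sampled scenarios upper-bounds it.
import numpy as np

rng = np.random.default_rng(0)
c, q = 1.0, 4.0
demand = rng.lognormal(mean=3.0, sigma=0.5, size=200_000)   # illustrative demand scenarios

def expected_cost(x: float, d: np.ndarray) -> float:
    return c * x + q * np.maximum(d - x, 0.0).mean()

# Phase 1 (lower bound): solve the deterministic expected-value problem on a grid.
grid = np.linspace(0.0, demand.max(), 2000)
lb_costs = c * grid + q * np.maximum(demand.mean() - grid, 0.0)
x_ev = grid[np.argmin(lb_costs)]
lower_bound = lb_costs.min()

# Phase 2 (upper bound): evaluate that candidate plan on the sampled scenarios.
upper_bound = expected_cost(x_ev, demand)

print(f"x_EV = {x_ev:.2f}, lower bound = {lower_bound:.2f}, upper bound = {upper_bound:.2f}")
```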

  7. Fluorescence photon migration techniques for the on-farm measurement of somatic cell count in fresh cow's milk

    NASA Astrophysics Data System (ADS)

    Khoo, Geoffrey; Kuennemeyer, Rainer; Claycomb, Rod W.

    2005-04-01

    Currently, the state of the art of mastitis detection in dairy cows is the laboratory-based measurement of somatic cell count (SCC), which is time consuming and expensive. Alternative, rapid, and reliable on-farm measurement methods are required for effective farm management. We have investigated whether fluorescence lifetime measurements can determine SCC in fresh, unprocessed milk. The method is based on the change in fluorescence lifetime of ethidium bromide when it binds to DNA from the somatic cells. Milk samples were obtained from a Fullwood Merlin Automated Milking System and analysed within a twenty-four hour period, over which the SCC does not change appreciably. For reference, the milk samples were also sent to a testing laboratory where the SCC was determined by traditional methods. The results show that we can quantify SCC using the fluorescence photon migration method from a lower bound of 4 × 10⁵ cells mL⁻¹ to an upper bound of 1 × 10⁷ cells mL⁻¹. The upper bound is due to the reference method used, while the cause of the lower bound is not yet known.

  8. Record length requirement of long-range dependent teletraffic

    NASA Astrophysics Data System (ADS)

    Li, Ming

    2017-04-01

    This article contributes two main highlights. On the one hand, it presents a formula to compute the upper bound of the variance of the correlation periodogram measurement of teletraffic (traffic for short) with long-range dependence (LRD) for a given record length T and a given value of the Hurst parameter H (Theorems 1 and 2). On the other hand, it proposes two formulas for the computation of the variance upper bound of the correlation periodogram measurement of traffic of the fractional Gaussian noise (fGn) type and the generalized Cauchy (GC) type, respectively (Corollaries 1 and 2). These may constitute a reference guideline for the record length required for traffic with LRD. In addition, the record length requirement for the correlation periodogram measurement of traffic with either the Schuster-type or the Bartlett-type periodogram is studied, and the present results show that both types of periodograms may be used for the correlation measurement of traffic with a pre-specified variance bound on the correlation estimation. Moreover, real traffic in the Internet Archive of the Special Interest Group on Data Communication under the Association for Computing Machinery of the US (ACM SIGCOMM) is analyzed in the case study on this topic.

  9. Improving the efficiency of single and multiple teleportation protocols based on the direct use of partially entangled states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fortes, Raphael; Rigolin, Gustavo, E-mail: rigolin@ifi.unicamp.br

    We push the limits of the direct use of partially pure entangled states to perform quantum teleportation by presenting several protocols in many different scenarios that achieve the optimal efficiency possible. We review and put in a single formalism the three major strategies known to date that allow one to use partially entangled states for direct quantum teleportation (no distillation strategies permitted) and compare their efficiencies in real world implementations. We show how one can improve the efficiency of many direct teleportation protocols by combining these techniques. We then develop new teleportation protocols employing multipartite partially entangled states. The three techniques are also used here in order to achieve the highest efficiency possible. Finally, we prove the upper bound for the optimal success rate for protocols based on partially entangled Bell states and show that some of the protocols here developed achieve such a bound. -- Highlights: •Optimal direct teleportation protocols using directly partially entangled states. •We put in a single formalism all strategies of direct teleportation. •We extend these techniques to multipartite partially entangled states. •We give upper bounds for the optimal efficiency of these protocols.

  10. Simulating the effect of vegetation cover on the sediment yield of mediterranean catchments using SHETRAN

    NASA Astrophysics Data System (ADS)

    Lukey, B. T.; Sheffield, J.; Bathurst, J. C.; Lavabre, J.; Mathys, N.; Martin, C.

    1995-08-01

    The sediment yield of two catchments in southern France was modelled using the newly developed sediment code of SHETRAN. A fire in August 1990 denuded the Rimbaud catchment, providing an opportunity to study the effect of vegetation cover on sediment yield by running the model for both pre- and post-fire cases. Model output is in the form of upper and lower bounds on sediment discharge, reflecting the uncertainty in the erodibility of the soil. The results are encouraging since measured sediment discharge falls largely between the predicted bounds, and simulated sediment yield is dramatically lower for the catchment before the fire, which matches observation. SHETRAN is also applied to the Laval catchment, which is subject to badland gully erosion. Again using the principle of generating upper and lower bounds on sediment discharge, the model is shown to be capable of predicting the bulk sediment discharge over periods of months. To simulate the effect of reforestation, the model is run with vegetation cover equivalent to a neighbouring fully forested basin. The results obtained indicate that SHETRAN provides a powerful tool for predicting the impact of environmental change and land management on sediment yield.

  11. Existence and amplitude bounds for irrotational water waves in finite depth

    NASA Astrophysics Data System (ADS)

    Kogelbauer, Florian

    2017-12-01

    We prove the existence of solutions to the irrotational water-wave problem in finite depth and derive an explicit upper bound on the amplitude of the nonlinear solutions in terms of the wavenumber, the total hydraulic head, the wave speed and the relative mass flux. Our approach relies upon a reformulation of the water-wave problem as a one-dimensional pseudo-differential equation and the Newton-Kantorovich iteration for Banach spaces. This article is part of the theme issue 'Nonlinear water waves'.

  12. Entanglement polygon inequality in qubit systems

    NASA Astrophysics Data System (ADS)

    Qian, Xiao-Feng; Alonso, Miguel A.; Eberly, J. H.

    2018-06-01

    We prove a set of tight entanglement inequalities for arbitrary N-qubit pure states. By focusing on all bi-partite marginal entanglements between each single qubit and its remaining partners, we show that the inequalities provide an upper bound for each marginal entanglement, while the known monogamy relation establishes the lower bound. The restrictions and sharing properties associated with the inequalities are further analyzed with a geometric polytope approach, and examples of three-qubit GHZ-class and W-class entangled states are presented to illustrate the results.
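
    A quick numerical check of a polygon-type inequality can be done for the GHZ and W states mentioned above. The sketch below uses sqrt(2(1 − Tr ρ_k²)) as the marginal entanglement of qubit k with the rest of the pure state; this choice of measure is an illustrative assumption, not necessarily the one used in the paper:

```python
# Hedged sketch: check E_k <= sum_{j != k} E_j for 3-qubit GHZ and W states,
# using sqrt(2(1 - Tr rho_k^2)) as the bipartite marginal entanglement of
# qubit k with the rest (an illustrative choice for pure states).
import numpy as np

def marginal_entanglement(state: np.ndarray, k: int, n: int) -> float:
    psi = state.reshape([2] * n)
    psi_k = np.moveaxis(psi, k, 0).reshape(2, -1)          # qubit k vs. the rest
    rho_k = psi_k @ psi_k.conj().T                         # reduced density matrix
    return float(np.sqrt(max(0.0, 2.0 * (1.0 - np.real(np.trace(rho_k @ rho_k))))))

def check_polygon(state: np.ndarray, n: int = 3) -> None:
    e = [marginal_entanglement(state, k, n) for k in range(n)]
    for k in range(n):
        others = sum(e) - e[k]
        ok = "ok" if e[k] <= others + 1e-12 else "violated"
        print(f"  E_{k} = {e[k]:.3f}  <=  sum of others = {others:.3f}  ({ok})")

ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)       # (|000> + |111>)/sqrt(2)
w = np.zeros(8)
w[[1, 2, 4]] = 1 / np.sqrt(3)          # (|001> + |010> + |100>)/sqrt(3)

print("GHZ state:"); check_polygon(ghz)
print("W state:");   check_polygon(w)
```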

  13. Quantum Speed Limits across the Quantum-to-Classical Transition

    NASA Astrophysics Data System (ADS)

    Shanahan, B.; Chenu, A.; Margolus, N.; del Campo, A.

    2018-02-01

    Quantum speed limits set an upper bound to the rate at which a quantum system can evolve. Adopting a phase-space approach, we explore quantum speed limits across the quantum-to-classical transition and identify equivalent bounds in the classical world. As a result, and contrary to common belief, we show that speed limits exist for both quantum and classical systems. As in the quantum domain, classical speed limits are set by a given norm of the generator of time evolution.

  14. Bounds on the cross-correlation functions of state m-sequences

    NASA Astrophysics Data System (ADS)

    Woodcock, C. F.; Davies, Phillip A.; Shaar, Ahmed A.

    1987-03-01

    Lower and upper bounds on the peaks of the periodic Hamming cross-correlation function for state m-sequences, which are often used in frequency-hopped spread-spectrum systems, are derived. The state position mapped (SPM) sequences of the state m-sequences are described. The use of SPM sequences for OR-channel code division multiplexing is studied. The relation between the Hamming cross-correlation function and the correlation function of SPM sequence is examined. Numerical results which support the theoretical data are presented.
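
    For reference, the quantity being bounded is the periodic Hamming cross-correlation, i.e. the number of symbol coincidences at each cyclic shift. A minimal sketch; the short hop patterns are arbitrary examples, not the SPM sequences analyzed in the paper:

```python
# Hedged sketch: periodic Hamming cross-correlation H_xy(tau) of two
# frequency-hopping sequences of equal period L, defined as the number of
# positions t with x[t] == y[(t + tau) mod L].
from typing import List, Sequence

def hamming_cross_correlation(x: Sequence[int], y: Sequence[int]) -> List[int]:
    assert len(x) == len(y), "sequences must have the same period"
    L = len(x)
    return [sum(1 for t in range(L) if x[t] == y[(t + tau) % L]) for tau in range(L)]

x = [0, 1, 2, 3, 1, 0, 2]     # arbitrary hop patterns over 4 frequencies
y = [2, 0, 1, 3, 0, 2, 1]
h = hamming_cross_correlation(x, y)
print("H_xy(tau) for tau = 0..L-1:", h)
print("peak cross-correlation:", max(h))
```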

  15. Hybrid Theory of Electron-Hydrogenic Systems Elastic Scattering

    NASA Technical Reports Server (NTRS)

    Bhatia, A. K.

    2007-01-01

    Accurate electron-hydrogen and electron-hydrogenic cross sections are required to interpret fusion experiments, laboratory plasma physics and properties of the solar and astrophysical plasmas. We have developed a method in which the short-range and long-range correlations can be included at the same time in the scattering equations. The phase shifts have rigorous lower bounds and the scattering lengths have rigorous upper bounds. The phase shifts in the resonance region can be used to calculate very accurately the resonance parameters.

  16. DD-bar production and their interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Yanrui; Oka, Makoto; Takizawa, Makoto

    2011-05-23

    We have explored the bound state problem and the scattering problem of the DD-bar pair in a meson exchange model. When considering their production in the e⁺e⁻ process, we included the DD-bar rescattering effect. Although it is difficult to answer whether the S-wave DD-bar bound state exists or not from the binding energies and the phase shifts, one may get an upper limit of the binding energy from the production of the BB-bar, the bottom analog of the DD-bar.

  17. Thin-wall approximation in vacuum decay: A lemma

    NASA Astrophysics Data System (ADS)

    Brown, Adam R.

    2018-05-01

    The "thin-wall approximation" gives a simple estimate of the decay rate of an unstable quantum field. Unfortunately, the approximation is uncontrolled. In this paper I show that there are actually two different thin-wall approximations and that they bracket the true decay rate: I prove that one is an upper bound and the other a lower bound. In the thin-wall limit, the two approximations converge. In the presence of gravity, a generalization of this lemma provides a simple sufficient condition for nonperturbative vacuum instability.

  18. A Note on the Kirchhoff and Additive Degree-Kirchhoff Indices of Graphs

    NASA Astrophysics Data System (ADS)

    Yang, Yujun; Klein, Douglas J.

    2015-06-01

    Two resistance-distance-based graph invariants, namely, the Kirchhoff index and the additive degree-Kirchhoff index, are studied. A relation between them is established, with inequalities for the additive degree-Kirchhoff index arising via the Kirchhoff index along with minimum, maximum, and average degrees. Bounds for the Kirchhoff and additive degree-Kirchhoff indices are also determined, and extremal graphs are characterised. In addition, an upper bound for the additive degree-Kirchhoff index is established to improve a previously known result.
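
    Both invariants can be computed directly from the Moore-Penrose pseudoinverse of the graph Laplacian, since r_ij = L⁺_ii + L⁺_jj − 2L⁺_ij. A minimal sketch, using the 4-cycle C4 purely as an example graph:

```python
# Hedged sketch: Kirchhoff index Kf(G) = sum_{i<j} r_ij and additive
# degree-Kirchhoff index sum_{i<j} (d_i + d_j) r_ij, with resistance distances
# obtained from the pseudoinverse of the Laplacian. C4 is just an example.
import numpy as np

def kirchhoff_indices(adj: np.ndarray):
    deg = adj.sum(axis=1)
    lplus = np.linalg.pinv(np.diag(deg) - adj)      # pseudoinverse of the Laplacian
    n = adj.shape[0]
    kf, add_kf = 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r_ij = lplus[i, i] + lplus[j, j] - 2.0 * lplus[i, j]
            kf += r_ij
            add_kf += (deg[i] + deg[j]) * r_ij
    return kf, add_kf

# Cycle graph C4: every vertex has degree 2.
c4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
kf, add_kf = kirchhoff_indices(c4)
print(f"Kf(C4) = {kf:.3f}, additive degree-Kirchhoff = {add_kf:.3f}")   # 5.0 and 20.0
```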

  19. Magnitude error bounds for sampled-data frequency response obtained from the truncation of an infinite series, and compensator improvement program

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.

    1972-01-01

    The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.

  20. On the boundedness and integration of non-oscillatory solutions of certain linear differential equations of second order.

    PubMed

    Tunç, Cemil; Tunç, Osman

    2016-01-01

    In this paper, a certain system of linear homogeneous differential equations of second order is considered. By using integral inequalities, some new criteria for bounded and [Formula: see text]-solutions, and upper bounds for the values of improper integrals of the solutions and their derivatives, are established for the considered system. The results obtained in this paper extend the results obtained by Kroopnick (2014) [1]. An example is given to illustrate the obtained results.

  1. Blow-up of solutions to a quasilinear wave equation for high initial energy

    NASA Astrophysics Data System (ADS)

    Li, Fang; Liu, Fang

    2018-05-01

    This paper deals with blow-up solutions to a nonlinear hyperbolic equation with variable exponent of nonlinearities. By constructing a new control function and using energy inequalities, the authors obtain a lower bound estimate of the L² norm of the solution. Furthermore, concavity arguments are used to prove the nonexistence of solutions; at the same time, an estimate of the upper bound of the blow-up time is also obtained. This result extends and improves those of [1,2].

  2. Trajectories of bright stars at the Galactic Center as a tool to evaluate a graviton mass

    NASA Astrophysics Data System (ADS)

    Zakharov, Alexander; Jovanović, Predrag; Borka, Dusko; Jovanović, Vesna Borka

    2016-10-01

    Scientists who worked in Saint Petersburg (Petrograd, Leningrad) played an extremely important role in the creation of the scientific school and the development of general relativity in Russia. Very recently the LIGO collaboration discovered gravitational waves [1], predicted 100 years ago by A. Einstein. In the papers reporting this discovery, the joint LIGO & VIRGO team presented an upper limit on the graviton mass of m_g < 1.2 × 10⁻²² eV [1, 2]. The authors concluded that their observational data do not show violations of classical general relativity because the graviton mass limit is very small. We show that an analysis of bright star trajectories could bound the graviton mass with an accuracy comparable to the accuracies reached with gravitational wave interferometers and expected with forthcoming pulsar timing observations for gravitational wave detection. This analysis provides an opportunity to treat observations of bright stars near the Galactic Center as a tool for evaluating specific parameters of the black hole and also for obtaining constraints on the fundamental law of gravity, such as modifications of the Newtonian gravity law in the weak-field approximation. In this way, based on a potential reconstruction at the Galactic Center, we give a bound on the graviton mass.

  3. Entropy Methods For Univariate Distributions in Decision Analysis

    NASA Astrophysics Data System (ADS)

    Abbas, Ali E.

    2003-03-01

    One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the problems with the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution if, in addition to its fractiles, we also know it is continuous, and we work through full examples to illustrate the approach.
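
    The "flat over each fractile interval" behaviour discussed above can be made concrete: between assessed fractiles the maximum entropy density is piecewise uniform. A minimal sketch that builds such an FMED-like density from a hypothetical set of elicited fractiles:

```python
# Hedged sketch: between assessed fractiles, the maximum-entropy density is
# piecewise uniform (flat over each fractile interval). The fractile
# assessments below are hypothetical illustrations.
import numpy as np

p = np.array([0.0, 0.10, 0.50, 0.90, 1.0])    # cumulative probabilities (hypothetical)
x = np.array([0.0, 2.0, 5.0, 9.0, 15.0])      # corresponding fractiles (hypothetical)

def fmed_pdf(t: np.ndarray) -> np.ndarray:
    """Piecewise-uniform maximum-entropy density matching the fractile constraints."""
    heights = np.diff(p) / np.diff(x)          # constant density on each interval
    idx = np.clip(np.searchsorted(x, t, side="right") - 1, 0, len(heights) - 1)
    return np.where((t >= x[0]) & (t <= x[-1]), heights[idx], 0.0)

print(np.round(fmed_pdf(np.linspace(-1.0, 16.0, 9)), 4))

# Sanity check: the piecewise density integrates to ~1 over its support.
grid = np.linspace(x[0], x[-1], 100_001)
mid = 0.5 * (grid[:-1] + grid[1:])
print("integral ~", float(np.sum(fmed_pdf(mid) * np.diff(grid))))
```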

  4. Implications of the Super-K atmospheric, long baseline, and reactor data for the mixing angles θ13 and θ23

    NASA Astrophysics Data System (ADS)

    Escamilla-Roa, J.; Latimer, D. C.; Ernst, D. J.

    2010-01-01

    A three-neutrino analysis of oscillation data is performed using the recent, more finely binned Super-K oscillation data, together with the CHOOZ, K2K, and MINOS data. The solar parameters Δ21 and θ12 are fixed from a recent analysis and Δ32, θ13, and θ23 are varied. We utilize the full three-neutrino oscillation probability and an exact treatment of Earth's Mikheyev-Smirnov-Wolfenstein (MSW) effect with a castle-wall density. By including terms linear in θ13 and ε := θ23 − π/4, we find asymmetric errors for these parameters: θ13 = −0.07^{+0.18}_{−0.11} and ε = 0.03^{+0.09}_{−0.15}. For θ13, we see that the lower bound is primarily set by the CHOOZ experiment while the upper bound is determined by the low energy e-like events in the Super-K atmospheric data. We find that the parameters θ13 and ε are correlated: the preferred negative value of θ13 permits the preferred value of θ23 to be in the second octant, and the true value of θ13 affects the allowed region for θ23.

  5. Reactivation of pre-existing mechanical anisotropies during polyphase tectonic evolution: slip tendency analysis as a tool to constrain mechanical properties of rocks

    NASA Astrophysics Data System (ADS)

    Traforti, Anna; Bistacchi, Andrea; Massironi, Matteo; Zampieri, Dario; Di Toro, Giulio

    2017-04-01

    Intracontinental deformation within the upper crust is accommodated by the nucleation of new faults (generally satisfying Anderson's theory of faulting) or by brittle reactivation of pre-existing anisotropies when certain conditions are met. How prone to reactivation an existing mechanical anisotropy or discontinuity is depends on its mechanical strength compared to that of the intact rock and on its orientation with respect to the regional stress field. In this study, we consider how different rock types (i.e. anisotropic vs. isotropic) are deformed during a well-constrained brittle polyphase tectonic evolution to derive the mechanical strength of pre-existing anisotropies and discontinuities (i.e. metamorphic foliations and inherited faults/fractures). The analysis has been carried out in the Eastern Sierras Pampeanas of Central Argentina. These are a series of basement ranges of the Andean foreland, which show compelling evidence of a long-lasting brittle deformation history from the Early Carboniferous to Present time, with three main deformational events (Early Triassic to Early Jurassic NE-SW extension, Early Cretaceous NW-SE extension and Miocene to Present ENE-WNW compression). The study area includes both isotropic granitic bodies and anisotropic phyllosilicate-bearing rocks (gneisses and phyllites). In this environment, each deformation phase causes significant reactivation of the inherited structures and rheological anisotropies, or alternatively the formation of new Andersonian faults, thus providing a multidirectional probing of the mechanical properties of these rocks. A meso- and micro-structural analysis of brittle reactivation of metamorphic foliation or inherited faults/fractures revealed that different rock types present remarkable differences in the style of deformation (i.e., phyllite foliation is reactivated during the last compressional phase and cut by newly-formed Andersonian faults/fractures during the first two extensional regimes; instead, gneiss foliation is pervasively reactivated during all the tectonic phases). Considering these observations, we applied a Slip Tendency analysis to estimate the upper and lower bounds on the friction coefficient for slip along the foliations (μs) and along pre-existing faults/fractures (μf). If a hypothetical condition with simultaneous failure on the inherited mechanical discontinuity (foliation or pre-existing fault/fracture) and on new Andersonian faults is assumed, the ratio between μs or μf and μ0 (the average friction coefficient for intact isotropic rocks) can be calculated as μs (or μf) = NTs · μ0, where NTs represents the normalized slip tendency of the analyzed discontinuity. When only reactivation of foliations/faults/fractures is observed (i.e. no newly-formed Andersonian faults are recognised), an upper bound on μs and μf can be estimated as μs (or μf) < NTs · μ0. By contrast, the lower bound on μs and μf can be obtained as μs (or μf) > NTs · μ0, when the mechanical anisotropies are not reactivated and new Andersonian faults nucleate. Applying the above analysis to multiple deformation phases and rock types, we were able to approximately estimate μs < 0.4 (gneisses) and 0.1 < μs < 0.2 (phyllites), and μf ≈ 0.4 (phyllites) and ≈ 0.3 (gneisses).
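
    A minimal numerical sketch of the slip tendency calculation behind these bounds: resolve a stress tensor onto a plane, form T_s = τ/σ_n, normalize by the maximum over orientations, and scale an assumed intact-rock friction coefficient μ0. The stress magnitudes, plane orientation and μ0 = 0.6 below are illustrative assumptions, not values from the study:

```python
# Hedged sketch: slip tendency Ts = tau / sigma_n on a plane with unit normal n
# under a stress tensor S, the normalized slip tendency NTs = Ts / max(Ts), and
# the resulting friction estimate mu ~ NTs * mu0. All inputs are illustrative.
import numpy as np

def slip_tendency(stress: np.ndarray, normal: np.ndarray) -> float:
    n = normal / np.linalg.norm(normal)
    traction = stress @ n
    sigma_n = float(n @ traction)                           # normal stress on the plane
    tau = float(np.linalg.norm(traction - sigma_n * n))     # shear stress on the plane
    return tau / sigma_n

# Illustrative compressive principal stresses (MPa), sigma1 > sigma2 > sigma3.
stress = np.diag([100.0, 60.0, 40.0])

# Scan plane orientations (planes containing the sigma2 axis) to normalize.
angles = np.linspace(0.0, np.pi, 721)
ts_values = np.array([slip_tendency(stress, np.array([np.cos(a), 0.0, np.sin(a)]))
                      for a in angles])
ts_max = ts_values.max()

mu0 = 0.6                                                   # assumed intact-rock friction
foliation_normal = np.array([np.cos(np.radians(30)), 0.0, np.sin(np.radians(30))])
nts = slip_tendency(stress, foliation_normal) / ts_max
print(f"NTs = {nts:.2f}  ->  friction bound mu ~ NTs * mu0 = {nts * mu0:.2f}")
```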

  6. Vertical structure of tropospheric winds on gas giants

    NASA Astrophysics Data System (ADS)

    Scott, R. K.; Dunkerton, T. J.

    2017-04-01

    Zonal mean zonal velocity profiles from cloud-tracking observations on Jupiter and Saturn are used to infer latitudinal variations of potential temperature consistent with a shear stable potential vorticity distribution. Immediately below the cloud tops, density stratification is weaker on the poleward and stronger on the equatorward flanks of midlatitude jets, while at greater depth the opposite relation holds. Thermal wind balance then yields the associated vertical shears of midlatitude jets in an altitude range bounded above by the cloud tops and bounded below by the level where the latitudinal gradient of static stability changes sign. The inferred vertical shear below the cloud tops is consistent with existing thermal profiling of the upper troposphere. The sense of the associated mean meridional circulation in the upper troposphere is discussed, and expected magnitudes are given based on existing estimates of the radiative timescale on each planet.
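
    The vertical shear referred to above follows from the standard zonal-mean thermal wind balance; in a Boussinesq, log-pressure form (the exact formulation used by the authors, for example in spherical geometry, may differ) it reads

      f \, \frac{\partial \bar{u}}{\partial z} \;=\; -\,\frac{g}{\theta_0} \, \frac{\partial \bar{\theta}}{\partial y}

    where f is the Coriolis parameter, ū the zonal-mean zonal wind, θ̄ the zonal-mean potential temperature, θ0 a reference value, g gravity, y the meridional coordinate and z height. A latitudinal gradient of potential temperature on the flanks of a jet therefore fixes the sign and magnitude of the jet's vertical shear in the layer between the cloud tops and the level where the latitudinal gradient of static stability changes sign.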

  7. Gravitating Q-balls in the Affleck-Dine mechanism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamaki, Takashi; Sakai, Nobuyuki; Department of Education, Yamagata University, Yamagata 990-8560

    2011-04-15

    We investigate how gravity affects Q-balls with the Affleck-Dine potential V_AD(φ) := (m²/2)φ²[1 + K ln((φ/M)²)]. Contrary to the flat case, in which equilibrium solutions exist only if K < 0, we find three types of gravitating solutions as follows. In the case that K < 0, ordinary Q-ball solutions exist; there is an upper bound of the charge due to gravity. In the case that K = 0, equilibrium solutions called (mini-)boson stars appear due to gravity; there is an upper bound of the charge, too. In the case that K > 0, equilibrium solutions appear, too. In this case, these solutions are not asymptotically flat but surrounded by Q-matter. These solutions might be important in considering a dark matter scenario in the Affleck-Dine mechanism.
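
    For orientation, the following sketch (Python, with illustrative parameter values m = M = 1 in arbitrary units) simply evaluates the Affleck-Dine potential quoted above for K < 0, K = 0 and K > 0, the three cases that separate the gravitating solution types discussed in the abstract.

      import numpy as np

      def v_ad(phi, m=1.0, M=1.0, K=0.0):
          """Affleck-Dine potential V_AD(phi) = (m^2/2) phi^2 [1 + K ln((phi/M)^2)]."""
          phi = np.asarray(phi, dtype=float)
          return 0.5 * m**2 * phi**2 * (1.0 + K * np.log((phi / M) ** 2))

      phi = np.linspace(0.5, 4.0, 4)            # sample field values (phi > 0)
      for K in (-0.1, 0.0, 0.1):                # flat-spacetime Q-balls require K < 0
          print(f"K = {K:+.1f}:", np.round(v_ad(phi, K=K), 3))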

  8. Precision Measurement of the Electron's Electric Dipole Moment Using Trapped Molecular Ions.

    PubMed

    Cairncross, William B; Gresh, Daniel N; Grau, Matt; Cossel, Kevin C; Roussy, Tanya S; Ni, Yiqi; Zhou, Yan; Ye, Jun; Cornell, Eric A

    2017-10-13

    We describe the first precision measurement of the electron's electric dipole moment (d_e) using trapped molecular ions, demonstrating the application of spin interrogation times over 700 ms to achieve high sensitivity and stringent rejection of systematic errors. Through electron spin resonance spectroscopy on ¹⁸⁰Hf¹⁹F⁺ in its metastable ³Δ₁ electronic state, we obtain d_e = (0.9 ± 7.7_stat ± 1.7_syst) × 10⁻²⁹ e cm, resulting in an upper bound of |d_e| < 1.3 × 10⁻²⁸ e cm (90% confidence). Our result provides independent confirmation of the current upper bound of |d_e| < 9.4 × 10⁻²⁹ e cm [J. Baron et al., New J. Phys. 19, 073029 (2017), doi:10.1088/1367-2630/aa708e], and offers the potential to improve on this limit in the near future.

  9. Insights into the Earth System mass variability from CSR-RL05 GRACE gravity fields

    NASA Astrophysics Data System (ADS)

    Bettadpur, S.

    2012-04-01

    The next-generation Release-05 (RL05) GRACE gravity field data products are the result of extensive improvements to the GRACE Level-1 (tracking) data products, to the background gravity models, and to the processing methodology. As a result, the squared-error upper bound in RL05 fields is half or less of the squared-error upper bound in RL04 fields. The CSR-RL05 release consists of unconstrained gravity fields as well as a regularized gravity field time series that can be used for several applications without any post-processing error reduction. This paper will describe the background and nature of these improvements in the data products and provide an error characterization. We will describe the insights these new series offer in measuring the mass flux due to diverse hydrologic, oceanographic, and cryospheric processes.

  10. Potential-field sounding using Euler's homogeneity equation and Zidarov bubbling

    USGS Publications Warehouse

    Cordell, Lindrith

    1994-01-01

    Potential-field (gravity) data are transformed into a physical-property (density) distribution in a lower half-space, constrained solely by assumed upper bounds on physical-property contrast and data error. A two-step process is involved. The data are first transformed to an equivalent set of line (2-D case) or point (3-D case) sources, using Euler's homogeneity equation evaluated iteratively on the largest residual data value. Then, mass is converted to a volume-density product, constrained to an upper density bound, by 'bubbling,' which exploits circular or radial expansion to redistribute density without changing the associated gravity field. The method can be developed for gravity or magnetic data in two or three dimensions. The results can provide a beginning for interpretation of potential-field data where few independent constraints exist, or more likely, can be used to develop models and confirm or extend interpretation of other geophysical data sets.

  11. Search for violations of quantum mechanics

    DOE PAGES

    Ellis, John; Hagelin, John S.; Nanopoulos, D. V.; ...

    1984-07-01

    The treatment of quantum effects in gravitational fields indicates that pure states may evolve into mixed states, and Hawking has proposed modification of the axioms of field theory which incorporate the corresponding violation of quantum mechanics. In this study we propose a modified Hamiltonian equation of motion for density matrices and use it to interpret upper bounds on the violation of quantum mechanics in different phenomenological situations. We apply our formalism to the K⁰-K̄⁰ system and to long baseline neutron interferometry experiments. In both cases we find upper bounds of about 2 × 10⁻²¹ GeV on contributions to the single particle "Hamiltonian" which violate quantum mechanical coherence. We discuss how these limits might be improved in the future, and consider the relative significance of other successful tests of quantum mechanics. Finally, an appendix contains model estimates of the magnitude of effects violating quantum mechanics.

  12. DD production and their interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Yanrui; Oka, Makoto; Takizawa, Makoto

    2010-07-01

    S- and P-wave DD scatterings are studied in a meson exchange model with the coupling constants obtained in the heavy quark effective theory. With the extracted P-wave phase shifts and the separable potential approximation, we include the DD rescattering effect and investigate the production process e⁺e⁻ → DD. We find that it is difficult to explain the anomalous line shape observed by the BES Collaboration with this mechanism. Combining our model calculation and the experimental measurement, we estimate the upper limit of the nearly universal cutoff parameter to be around 2 GeV. With this number, the upper limits of the binding energies of the S-wave DD and BB bound states are obtained. Assuming that the S-wave and P-wave interactions rely on the same cutoff, our study provides a way of extracting the information about S-wave molecular bound states from the P-wave meson pair production.

  13. Estimation of the lower and upper bounds on the probability of failure using subset simulation and random set theory

    NASA Astrophysics Data System (ADS)

    Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.

    2018-02-01

    Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
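
    As a toy illustration of how interval-valued (random set) inputs produce lower and upper bounds on the probability of failure, the sketch below uses plain Monte Carlo on a made-up limit state with one precise random variable and one interval-valued variable; it is not the authors' subset simulation implementation.

      import numpy as np

      rng = np.random.default_rng(0)

      def g(r, s):
          """Limit state: failure when resistance r minus load s is <= 0."""
          return r - s

      # Aleatory input: load S ~ Normal(5, 1). Epistemic input: resistance R known only
      # as the interval [8, 10] (a one-focal-element random set; values are hypothetical).
      n = 100_000
      s = rng.normal(5.0, 1.0, n)
      r_lo, r_hi = 8.0, 10.0

      # For each sample the limit state ranges over [g(r_lo, s), g(r_hi, s)].
      pf_upper = np.mean(g(r_lo, s) <= 0.0)   # plausibility: failure for some r in the interval
      pf_lower = np.mean(g(r_hi, s) <= 0.0)   # belief: failure for every r in the interval

      print(f"lower bound Pf = {pf_lower:.2e}, upper bound Pf = {pf_upper:.2e}")

    With plain Monte Carlo the lower bound is essentially unresolved at this sample size, which is the motivation for replacing the sampler with subset simulation as in the paper.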

  14. Universal charge-radius relation for subatomic and astrophysical compact objects.

    PubMed

    Madsen, Jes

    2008-04-18

    Electron-positron pair creation in supercritical electric fields limits the net charge of any static, spherical object, such as superheavy nuclei, strangelets, and Q balls, or compact stars like neutron stars, quark stars, and black holes. For radii between 4 × 10² and 10⁴ fm the upper bound on the net charge is given by the universal relation Z = 0.71 R(fm), and for larger radii (measured in femtometers or kilometers) Z = 7 × 10⁻⁵ R²(fm) = 7 × 10³¹ R²(km). For objects with nuclear density the relation corresponds to Z ≈ 0.7 A^(1/3) (10⁸ ≲ A ≲ 10¹²), where A is the baryon number. For some systems this universal upper bound improves existing charge limits in the literature.
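
    The quoted relations are easy to evaluate directly; the sketch below (Python) just encodes the two branches from the abstract, with the crossover taken at the upper end of the quoted radius range, where the linear and quadratic expressions approximately agree.

      def max_charge(radius_fm):
          """Upper bound on the net charge Z from the abstract's universal relations.

          Z = 0.71 * R_fm for radii up to ~1e4 fm (the abstract quotes validity from
          4e2 fm), and Z = 7e-5 * R_fm**2 for larger radii; the branches roughly match
          at the 1e4 fm crossover.
          """
          if radius_fm <= 1.0e4:
              return 0.71 * radius_fm
          return 7.0e-5 * radius_fm**2

      print(max_charge(1.0e3))    # strangelet-sized object (1e3 fm): Z <~ 7.1e2
      print(max_charge(1.0e4))    # crossover radius: Z <~ 7e3 from either branch
      print(max_charge(1.0e19))   # a 10 km compact star (1e19 fm): Z <~ 7e33, i.e. 7e31 * R_km^2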

  15. Crustal volumes of the continents and of oceanic and continental submarine plateaus

    NASA Technical Reports Server (NTRS)

    Schubert, G.; Sandwell, D.

    1989-01-01

    Using global topographic data and the assumption of Airy isostasy, it is estimated that the crustal volume of the continents is 7182 × 10⁶ km³. The crustal volumes of the oceanic and continental submarine plateaus are calculated at 369 × 10⁶ km³ and 242 × 10⁶ km³, respectively. The total continental crustal volume is found to be 7581 × 10⁶ km³, 3.2 percent of which consists of continental submarine plateaus on the seafloor. An upper bound on the continental crust addition rate by the accretion of oceanic plateaus is set at 3.7 km³/yr. Subduction of continental submarine plateaus with the oceanic lithosphere on a 100 Myr time scale yields an upper bound to the continental crustal subtraction rate of 2.4 km³/yr.

  16. Comparison of various techniques for calibration of AIS data

    NASA Technical Reports Server (NTRS)

    Roberts, D. A.; Yamaguchi, Y.; Lyon, R. J. P.

    1986-01-01

    The Airborne Imaging Spectrometer (AIS) samples a region which is strongly influenced by decreasing solar irradiance at longer wavelengths and strong atmospheric absorptions. Four techniques, the Log Residual, the Least Upper Bound Residual, the Flat Field Correction and calibration using field reflectance measurements, were investigated as a means for removing these two features. Of the four techniques, field reflectance calibration proved to be superior in terms of noise and normalization. Of the other three techniques, the Log Residual was superior when applied to areas which did not contain one dominant cover type. In heavily vegetated areas, the Log Residual proved to be ineffective. After removing anomalously bright data values, the Least Upper Bound Residual proved to be almost as effective as the Log Residual in sparsely vegetated areas and much more effective in heavily vegetated areas. Of all the techniques, the Flat Field Correction was the noisiest.

  17. Isotope-abundance variations and atomic weights of selected elements: 2016 (IUPAC Technical Report)

    USGS Publications Warehouse

    Coplen, Tyler B.; Shrestha, Yesha

    2016-01-01

    There are 63 chemical elements that have two or more isotopes that are used to determine their standard atomic weights. The isotopic abundances and atomic weights of these elements can vary in normal materials due to physical and chemical fractionation processes (not due to radioactive decay). These variations are well known for 12 elements (hydrogen, lithium, boron, carbon, nitrogen, oxygen, magnesium, silicon, sulfur, chlorine, bromine, and thallium), and the standard atomic weight of each of these elements is given by IUPAC as an interval with lower and upper bounds. Graphical plots of selected materials and compounds of each of these elements have been published previously. Herein and at the URL http://dx.doi.org/10.5066/F7GF0RN2, we provide isotopic abundances, isotope-delta values, and atomic weights for each of the upper and lower bounds of these materials and compounds.

  18. Constructions for finite-state codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.

    1987-01-01

    A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a d sub free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.

  19. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  20. Gauge mediation at the LHC: status and prospects

    DOE PAGES

    Knapen, Simon; Redigolo, Diego

    2017-01-30

    We show that the predictivity of general gauge mediation (GGM) with TeV-scale stops is greatly increased once the Higgs mass constraint is imposed. The most notable results are a strong lower bound on the mass of the gluino and right-handed squarks, and an upper bound on the Higgsino mass. If the μ-parameter is positive, the wino mass is also bounded from above. These constraints relax significantly for high messenger scales and as such long-lived NLSPs are favored in GGM. We identify a small set of most promising topologies for the neutralino/sneutrino NLSP scenarios and estimate the impact of the current bounds and the sensitivity of the high luminosity LHC. The stau, stop and sbottom NLSP scenarios can be robustly excluded at the high luminosity LHC.

  1. On the Inequalities of Babuška-Aziz, Friedrichs and Horgan-Payne

    NASA Astrophysics Data System (ADS)

    Costabel, Martin; Dauge, Monique

    2015-09-01

    The equivalence between the inequalities of Babuška-Aziz and Friedrichs for sufficiently smooth bounded domains in the plane was shown by Horgan and Payne 30 years ago. We prove that this equivalence, and the equality between the associated constants, is true without any regularity condition on the domain. For the Horgan-Payne inequality, which is an upper bound of the Friedrichs constant for plane star-shaped domains in terms of a geometric quantity known as the Horgan-Payne angle, we show that it is true for some classes of domains, but not for all bounded star-shaped domains. We prove a weaker inequality that is true in all cases.

  2. A simple method for assessing occupational exposure via the one-way random effects model.

    PubMed

    Krishnamoorthy, K; Mathew, Thomas; Peng, Jie

    2016-11-01

    A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
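
    The MOVER idea is to recover a confidence interval for a sum of parameters directly from the individual confidence limits. The sketch below (Python) shows the generic Zou-Donner form of that recovery for a log-scale mean plus a z-multiple of the total standard deviation, which is how an upper percentile of a lognormal exposure distribution is typically expressed; all numerical values are hypothetical and the paper's exact variance-component limits are not reproduced here.

      import numpy as np

      def mover_sum(est1, ci1, est2, ci2):
          """MOVER confidence limits for theta1 + theta2 from the individual limits."""
          (l1, u1), (l2, u2) = ci1, ci2
          point = est1 + est2
          lower = point - np.sqrt((est1 - l1) ** 2 + (est2 - l2) ** 2)
          upper = point + np.sqrt((u1 - est1) ** 2 + (u2 - est2) ** 2)
          return lower, upper

      # Hypothetical log-scale estimates: overall mean exposure mu and z_0.95 times the
      # total (between- plus within-worker) standard deviation, each with its own 95% CI.
      mu_hat, ci_mu = 0.30, (0.10, 0.50)
      zp_sd_hat, ci_zp_sd = 1.15, (0.90, 1.55)

      lo, hi = mover_sum(mu_hat, ci_mu, zp_sd_hat, ci_zp_sd)
      print(f"log-scale 95th percentile CI: ({lo:.2f}, {hi:.2f})")
      print(f"back-transformed exposure percentile CI: ({np.exp(lo):.2f}, {np.exp(hi):.2f})")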

  3. A communication channel model of the software process

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1988-01-01

    Reported here is beginning research into a noisy communication channel analogy of software development process productivity, in order to establish quantifiable behavior and theoretical bounds. The analogy leads to a fundamental mathematical relationship between human productivity and the amount of information supplied by the developers, the capacity of the human channel for processing and transmitting information, the software product yield (object size), the work effort, requirements efficiency, tool and process efficiency, and programming environment advantage. Also derived is an upper bound to productivity that shows that software reuse is the only means that can lead to unbounded productivity growth; practical considerations of size and cost of reusable components may reduce this to a finite bound.

  4. A communication channel model of the software process

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1988-01-01

    Beginning research into a noisy communication channel analogy of software development process productivity, in order to establish quantifiable behavior and theoretical bounds, is discussed. The analogy leads to a fundamental mathematical relationship between human productivity and the amount of information supplied by the developers, the capacity of the human channel for processing and transmitting information, the software product yield (object size), the work effort, requirements efficiency, tool and process efficiency, and programming environment advantage. An upper bound to productivity is derived that shows that software reuse is the only means that can lead to unbounded productivity growth; practical considerations of size and cost of reusable components may reduce this to a finite bound.

  5. Reliability Estimating Procedures for Electric and Thermochemical Propulsion Systems. Volume 1

    DTIC Science & Technology

    1977-02-01

    Laboratories, The Marquardt Company, NASA Goddard Space Flight Center, RCA Astro Electronics, Rockwell International, Applied Physics Laboratory... 2.3 Failure Rate Means and Bounds (5% lower bound, median, mean, 95% upper bound): 0.00025, 0.0024, 0.06, 0.022 × 10⁻⁶ per cycle.

  6. Ada (Trade Name)/SQL (Structured Query Language) Binding Specification

    DTIC Science & Technology

    1988-06-01

    TYPES is package ADA_SQL is type EMPLOYEE_NAME is new STRING (1 .. 30); type BOSS_NAME is new EMPLOYEE_NAME; type EMPLOYEE_SALARY is digits 7 range 0.00...minimum number of significant decimal digits. All real numbers between the lower and upper bounds, inclusive, belong to the subtype, and are...and the elements of strings. Format: <character> ::= <digit> | <letter> | <special character>; <digit> ::= 0|1|2|3|4|5|6|7|8|9; <letter> ::= <upper case

  7. Characterization of Seismic Noise at Selected Non-Urban Sites

    DTIC Science & Technology

    2010-03-01

    Field sites for seismic recordings: Scottish moor (upper left), Enfield, NH (upper right), and vicinity of Keele, England (bottom). The three sites are: a wind farm on a remote moor in Scotland, a ~13 acre field bounded by woods in a rural Enfield, NH neighborhood, and a site transitional from developed land to farmland within 1 km of the six-lane M6 motorway near Keele.

  8. Modal cost analysis for simple continua

    NASA Technical Reports Server (NTRS)

    Hu, A.; Skelton, R. E.; Yang, T. Y.

    1988-01-01

    The most popular finite element codes are based upon appealing theories of convergence of modal frequencies. For example, the popularity of cubic elements for beam-like structures is due to the rapid convergence of modal frequencies and stiffness properties. However, for those problems in which the primary consideration is the accuracy of response of the structure at specified locations, it is more important to obtain accuracy in the modal costs than in the modal frequencies. The modal cost represents the contribution of a mode in the norm of the response vector. This paper provides a complete modal cost analysis for simple continua such as beam-like structures. Upper bounds are developed for mode truncation errors in the model reduction process and modal cost analysis dictates which modes to retain in order to reduce the model for control design purposes.

  9. FBI fingerprint identification automation study: AIDS 3 evaluation report. Volume 6: Environmental analysis

    NASA Technical Reports Server (NTRS)

    Mulhall, B. D. L.

    1980-01-01

    The results of the analysis of the external environment of the FBI Fingerprint Identification Division are presented. Possible trends in the future environment of the Division that may have an effect on the work load were projected to determine if future work load will lie within the capability range of the proposed new system, AIDS 3. Two working models of the environment were developed, the internal and external model, and from these scenarios the projection of possible future work load volume and mixture was developed. Possible drivers of work load change were identified and assessed for upper and lower bounds of effects. Data used for the study were derived from historical information, analysis of the current situation and from interviews with various agencies who are users of or stakeholders in the present system.

  10. A Data Envelopment Analysis Model for Selecting Material Handling System Designs

    NASA Astrophysics Data System (ADS)

    Liu, Fuh-Hwa Franklin; Kuo, Wan-Ting

    The material handling system under design is an unmanned job shop with an automated guided vehicle that transports loads among the processing machines. The engineering task is to select the design alternatives that are combinations of the four design factors: the ratio of production time to transportation time, the mean job arrival rate to the system, the input/output buffer capacities at each processing machine, and the vehicle control strategies. Each of the design alternatives is simulated to collect the upper and lower bounds of the five performance indices. We develop a Data Envelopment Analysis (DEA) model to assess the 180 designs with imprecise data on the five indices. A three-way factorial experiment analysis of the assessment results indicates that the buffer capacity and the interaction of job arrival rate and buffer capacity significantly affect performance.

  11. The permeability of fault zones in the upper continental crust: statistical analysis from 460 datasets, updated depth-trends, and permeability contrasts between fault damage zones and protoliths.

    NASA Astrophysics Data System (ADS)

    Scibek, J.; Gleeson, T. P.; Ingebritsen, S.; McKenzie, J. M.

    2017-12-01

    Fault zones are an important part of the hydraulic structure of the Earth's crust and influence a wide range of Earth processes, and a large amount of test data has been collected over the years. We conducted a global meta-analysis of fault zone permeabilities in the upper brittle continental crust, using about 10,000 published research items from a variety of geoscience and engineering disciplines. Using 460 datasets at 340 localities, the in-situ bulk permeabilities (>10s of meters scale, including macro-fractures) and matrix permeabilities (drilled core samples or outcrop spot tests) are separated, analyzed, and compared. The values have log-normal distributions and we analyze the log-permeability values. In the fault damage zones of plutonic and metamorphic rocks the mean bulk permeability was 1 × 10⁻¹⁴ m², compared to a matrix mean of 1 × 10⁻¹⁶ m². In sedimentary siliciclastic rocks the mean value was the same for bulk and matrix permeability (4 × 10⁻¹⁴ m²). More useful insights were determined from the regression analysis of paired permeability data at all sites (fault damage zone vs. protolith). Much of the variation in fault permeability is explained by the permeability of the protolith: in relatively weak volcaniclastic and clay-rich rocks up to 70 to 88% of the variation is explained, but only 20-30% in plutonic and metamorphic rocks. We propose a revision at shallow depths for previously published upper-bound curves for the "fault-damaged crust" and the geothermal-metamorphic rock assemblage outside of major fault zones. Although the bounding curves describe the "fault-damaged crust" permeability parameter space adequately, the only statistically significant permeability-depth trend is for plutonic and metamorphic rocks (50% of variation explained). We find a depth-dependent systematic variation of the permeability ratio (fault damage zone / protolith) from the in-situ bulk permeability global data. A moving average of the log-permeability ratio is 2 to 2.5 (the global mean is 2.2). Although the data are unevenly distributed with depth, the present evidence is that the permeability ratio is at a maximum at depths of 1 to 2 kilometers, decreases with depth below 2 km, and is also lower near the ground surface.

  12. New limit on possible long-range parity-odd interactions of the neutron from neutron-spin rotation in liquid 4He.

    PubMed

    Yan, H; Snow, W M

    2013-02-22

    Various theories beyond the standard model predict new particles with masses in the sub-eV range with very weak couplings to ordinary matter. A parity-odd interaction between polarized nucleons and unpolarized matter proportional to g_V g_A s⃗ · p⃗ is one such possibility, where s⃗ and p⃗ are the spin and the momentum of the polarized nucleon, and g_V and g_A are the vector and axial vector couplings of an interaction induced by the exchange of a new light vector boson. We report a new experimental upper bound on such possible long-range parity-odd interactions of the neutron with nucleons and electrons from a recent search for parity violation in neutron spin rotation in liquid ⁴He. Our constraint on the product of vector and axial vector couplings of a possible new light vector boson is g_V g_A^n ≤ 10⁻³² for an interaction range of 1 m. This upper bound is more than 7 orders of magnitude more stringent than the existing laboratory constraints for interaction ranges below 1 m, corresponding to a broad range of vector boson masses above 10⁻⁶ eV. More sensitive searches for a g_V g_A^n coupling could be performed using neutron spin rotation measurements in heavy nuclei or through analysis of experiments conducted to search for nucleon-nucleon weak interactions and nuclear anapole moments.

  13. Length estimations of presumed upward connecting leaders in lightning flashes to flat water and flat ground

    NASA Astrophysics Data System (ADS)

    Stolzenburg, Maribeth; Marshall, Thomas C.; Karunarathne, Sumedhe; Orville, Richard E.

    2018-10-01

    Using video data recorded at 50,000 frames per second for nearby negative lightning flashes, estimates are derived for the length of positive upward connecting leaders (UCLs) that presumably formed prior to new ground attachments. Return strokes were 1.7 to 7.8 km distant, yielding image resolutions of 4.25 to 19.5 m. No UCLs are imaged in these data, indicating those features were too transient or too dim compared to other lightning processes that are imaged at these resolutions. Upper bound lengths for 17 presumed UCLs are determined from the height above flat ground or water of the successful stepped leader tip in the image immediately prior to (within 20 μs before) the return stroke. Better estimates of maximum UCL lengths are determined using the downward stepped leader tip's speed of advance and the estimated return stroke time within its first frame. For 17 strokes, the upper bound length of the possible UCL averages 31.6 m and ranges from 11.3 to 50.3 m. Among the close strokes (those with spatial resolution <8 m per pixel), the five which connected to water (salt water lagoon) have UCL upper bound estimates averaging significantly shorter (24.1 m) than the average for the three close strokes which connected to land (36.9 m). The better estimates of maximum UCL lengths for the eight close strokes average 20.2 m, with a slightly shorter average of 18.3 m for the five that connected to water. All the better estimates of UCL maximum lengths are <38 m in this dataset.
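
    One plausible reading of the two estimates described above is simple arithmetic on the video geometry: the upper bound takes the full height of the last imaged downward-leader tip, while the refined maximum subtracts the distance the downward leader still travels before attachment. The sketch below (Python) uses the 20 μs frame interval from the abstract but otherwise hypothetical numbers, and is not the authors' procedure.

      # Lengths in meters, times in seconds; only the frame interval comes from the abstract.
      tip_height_last_frame = 30.0                   # leader tip height in the final pre-return-stroke image
      frame_interval = 20e-6                         # 50,000 frames per second
      leader_speed = 5.0e5                           # assumed final stepped-leader descent speed (m/s)
      time_to_return_stroke = 0.5 * frame_interval   # estimated return-stroke time within the next frame

      ucl_upper_bound = tip_height_last_frame
      ucl_refined_max = tip_height_last_frame - leader_speed * time_to_return_stroke

      print(f"upper bound: {ucl_upper_bound:.1f} m, refined maximum: {ucl_refined_max:.1f} m")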

  14. Modification of the activity of cell wall-bound peroxidase by hypergravity in relation to the stimulation of lignin formation in azuki bean epicotyls

    NASA Astrophysics Data System (ADS)

    Wakabayashi, Kazuyuki; Nakano, Saho; Soga, Kouichi; Hoson, Takayuki

    Lignin is a component of the cell walls of terrestrial plants, which provides cell walls with mechanical rigidity. Lignin is a phenolic polymer with high molecular mass and is formed by the polymerization of phenolic substances on a cellulosic matrix. The polymerization is catalyzed by cell wall-bound peroxidase, and thus the activity of this enzyme regulates the rate of formation of lignin. In the present study, the changes in the lignin content and the activity of cell wall peroxidase were investigated along epicotyls of azuki bean seedlings grown under hypergravity conditions. The endogenous growth occurred primarily in the upper regions of the epicotyl and no growth was detected in the middle or basal regions. The amount of acetyl bromide-soluble lignin increased from the upper to the basal regions of epicotyls. The lignin content per unit length in the basal region was three times higher than that in the upper region. Hypergravity treatment at 300 g for 6 h stimulated the increase in the lignin content in all regions of epicotyls, particularly in the basal regions. The peroxidase activity in the protein fraction extracted from the cell wall preparation with a high ionic strength buffer also increased gradually toward the basal region, and hypergravity treatment clearly increased the activity in all regions. There was a close correlation between the lignin content and the enzyme activity. These results suggest that gravity stimuli modulate the activity of cell wall-bound peroxidase, which, in turn, causes the stimulation of lignin formation in stem organs.

  15. Radiation protection for manned space activities

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1983-01-01

    The Earth's natural radiation environment poses a hazard to manned space activities, directly through biological effects and indirectly through effects on materials and electronics. The following standard practices are indicated that address: (1) environment models for all radiation species, including uncertainties and temporal variations; (2) upper bound and nominal quality factors for biological radiation effects that include dose, dose rate, critical organ, and linear energy transfer variations; (3) particle transport and shielding methodology, including system and man modeling and uncertainty analysis; and (4) mission planning that includes active dosimetry, minimizes exposure during extravehicular activities, subjects every mission to a radiation review, and specifies operational procedures for forecasting, recognizing, and dealing with large solar flares.

  16. Breaking Megrelishvili protocol using matrix diagonalization

    NASA Astrophysics Data System (ADS)

    Arzaki, Muhammad; Triantoro Murdiansyah, Danang; Adi Prabowo, Satrio

    2018-03-01

    In this article we conduct a theoretical security analysis of the Megrelishvili protocol, a linear algebra-based key agreement between two participants. We study the computational complexity of the Megrelishvili vector-matrix problem (MVMP) as a mathematical problem that strongly relates to the security of the Megrelishvili protocol. In particular, we investigate the asymptotic upper bounds on the running time and memory requirement of the MVMP when it involves a diagonalizable public matrix. Specifically, we devise a diagonalization method for solving the MVMP that is asymptotically faster than all previously existing algorithms. We also find an important counterintuitive result: the use of a primitive matrix in the Megrelishvili protocol makes the protocol more vulnerable to attacks.

  17. Determination and error analysis of emittance and spectral emittance measurements by remote sensing

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Kumar, R.

    1977-01-01

    The author has identified the following significant results. From the theory of remote sensing of surface temperatures, an equation for the upper bound of the absolute error of emittance was determined. It showed that the absolute error decreased with an increase in contact temperature, whereas it increased with an increase in environmental integrated radiant flux density. Change in emittance had little effect on the absolute error. A plot of the difference between temperature and band radiance temperature vs. emittance was provided for the wavelength intervals: 4.5 to 5.5 microns, 8 to 13.5 microns, and 10.2 to 12.5 microns.

  18. The dynamic behaviour of data-driven Δ-M and ΔΣ-M in sliding mode control

    NASA Astrophysics Data System (ADS)

    Almakhles, Dhafer; Swain, Akshya K.; Nasiri, Alireza

    2017-11-01

    In recent years, delta (Δ-M) and delta-sigma modulators (ΔΣ-M) are increasingly being used as efficient data converters due to numerous advantages they offer. This paper investigates various dynamical features of these modulators/systems (both in continuous and discrete time domain) and derives their stability conditions using the theory of sliding mode. The upper bound of the hitting time (step) has been estimated. The equivalent mode conditions, i.e. where the outputs of the modulators are equivalent to the inputs, are established. The results of the analysis are validated through simulations considering a numerical example.

  19. Monotone Boolean approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hulme, B.L.

    1982-12-01

    This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
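
    The best possible monotone bounds mentioned above can be constructed explicitly on a small truth table: the tightest monotone increasing upper bound of f at x is the maximum of f over all inputs dominated by x, and the tightest monotone lower bound is the minimum of f over all inputs that dominate x. The sketch below (Python) is a generic illustration of that construction, not the report's algorithm, using the exclusive OR as a simple noncoherent example.

      from itertools import product

      def monotone_bounds(f, n):
          """Best monotone increasing lower and upper bounds of a Boolean function on n variables."""
          points = list(product((0, 1), repeat=n))
          upper = {x: max(f(*y) for y in points if all(a <= b for a, b in zip(y, x)))
                   for x in points}                      # least monotone g with g >= f
          lower = {x: min(f(*y) for y in points if all(a >= b for a, b in zip(y, x)))
                   for x in points}                      # greatest monotone g with g <= f
          return lower, upper

      f = lambda a, b: int(a != b)                       # exclusive OR: a simple noncoherent function
      lower, upper = monotone_bounds(f, 2)
      for x in sorted(lower):
          print(x, lower[x], f(*x), upper[x])            # lower <= f <= upper at every point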

  20. Thermalization Time Bounds for Pauli Stabilizer Hamiltonians

    NASA Astrophysics Data System (ADS)

    Temme, Kristan

    2017-03-01

    We prove a general lower bound to the spectral gap of the Davies generator for Hamiltonians that can be written as the sum of commuting Pauli operators. These Hamiltonians, defined on the Hilbert space of N qubits, serve as one of the most frequently considered candidates for a self-correcting quantum memory. A spectral gap bound on the Davies generator establishes an upper limit on the lifetime of such a quantum memory and can be used to estimate the time until the system relaxes to thermal equilibrium when brought into contact with a thermal heat bath. The bound can be shown to behave as λ ≥ O(N⁻¹ exp(−2β ε̄)), where ε̄ is a generalization of the well-known energy barrier for logical operators. Particularly in the low temperature regime we expect this bound to provide the correct asymptotic scaling of the gap with the system size up to a factor of N⁻¹. Furthermore, we discuss conditions and provide scenarios where this factor can be removed and a constant lower bound can be proven.

  1. A duality approach for solving bounded linear programming problems with fuzzy variables based on ranking functions and its application in bounded transportation problems

    NASA Astrophysics Data System (ADS)

    Ebrahimnejad, Ali

    2015-08-01

    There are several methods, in the literature, for solving fuzzy variable linear programming problems (fuzzy linear programming in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out and to overcome these shortcomings a new method based on the bounded dual simplex method is proposed to determine the fuzzy optimal solution of that kind of fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. Also, one application of this algorithm in solving bounded transportation problems with fuzzy supplies and demands is dealt with. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.

  2. Bounds on neutrino mass in viscous cosmology

    NASA Astrophysics Data System (ADS)

    Anand, Sampurn; Chaubal, Prakrut; Mazumdar, Arindam; Mohanty, Subhendra; Parashari, Priyank

    2018-05-01

    Effective field theoretic description of the dark matter fluid on large scales predicts a viscosity of order 10⁻⁶ H₀ M_P². Recently, it has been shown that the same magnitude of viscosity can resolve the discordance between large scale structure observations and Planck CMB data in the σ₈-Ω_m0 and H₀-Ω_m0 parameter spaces. On the other hand, massive neutrinos suppress the matter power spectrum on small length scales in a way similar to the viscosity. Therefore, it is expected that the viscous dark matter setup along with massive neutrinos can provide a stringent constraint on the neutrino mass. In this article, we show that the inclusion of effective viscosity, which arises from summing over nonlinear perturbations at small length scales, indeed severely tightens the cosmological bound on neutrino masses. Under a joint analysis of Planck CMB and different large scale observation data, we find that the upper bound on the sum of the neutrino masses, at the 2σ level, decreases respectively from ∑mν ≤ 0.396 eV (normal hierarchy) and ∑mν ≤ 0.378 eV (inverted hierarchy) to ∑mν ≤ 0.267 eV (normal hierarchy) and ∑mν ≤ 0.146 eV (inverted hierarchy).

  3. Stability of Nonlinear Systems with Unknown Time-varying Feedback Delay

    NASA Astrophysics Data System (ADS)

    Chunodkar, Apurva A.; Akella, Maruthi R.

    2013-12-01

    This paper considers the problem of stabilizing a class of nonlinear systems with unknown bounded delayed feedback wherein the time-varying delay is 1) piecewise constant 2) continuous with a bounded rate. We also consider application of these results to the stabilization of rigid-body attitude dynamics. In the first case, the time-delay in feedback is modeled specifically as a switch among an arbitrarily large set of unknown constant values with a known strict upper bound. The feedback is a linear function of the delayed states. In the case of linear systems with switched delay feedback, a new sufficiency condition for average dwell time result is presented using a complete type Lyapunov-Krasovskii (L-K) functional approach. Further, the corresponding switched system with nonlinear perturbations is proven to be exponentially stable inside a well characterized region of attraction for an appropriately chosen average dwell time. In the second case, the concept of the complete type L-K functional is extended to a class of nonlinear time-delay systems with unknown time-varying time-delay. This extension ensures stability robustness to time-delay in the control design for all values of time-delay less than the known upper bound. Model-transformation is used in order to partition the nonlinear system into a nominal linear part that is exponentially stable with a bounded perturbation. We obtain sufficient conditions which ensure exponential stability inside a region of attraction estimate. A constructive method to evaluate the sufficient conditions is presented together with comparison with the corresponding constant and piecewise constant delay. Numerical simulations are performed to illustrate the theoretical results of this paper.

  4. Fundamental limitations of cavity-assisted atom interferometry

    NASA Astrophysics Data System (ADS)

    Dovale-Álvarez, M.; Brown, D. D.; Jones, A. W.; Mow-Lowry, C. M.; Miao, H.; Freise, A.

    2017-11-01

    Atom interferometers employing optical cavities to enhance the beam splitter pulses promise significant advances in science and technology, notably for future gravitational wave detectors. Long cavities, on the scale of hundreds of meters, have been proposed in experiments aiming to observe gravitational waves with frequencies below 1 Hz, where laser interferometers, such as LIGO, have poor sensitivity. Alternatively, short cavities have also been proposed for enhancing the sensitivity of more portable atom interferometers. We explore the fundamental limitations of two-mirror cavities for atomic beam splitting, and establish upper bounds on the temperature of the atomic ensemble as a function of cavity length and three design parameters: the cavity g factor, the bandwidth, and the optical suppression factor of the first and second order spatial modes. A lower bound to the cavity bandwidth is found which avoids elongation of the interaction time and maximizes power enhancement. An upper limit to cavity length is found for symmetric two-mirror cavities, restricting the practicality of long baseline detectors. For shorter cavities, an upper limit on the beam size was derived from the geometrical stability of the cavity. These findings aim to aid the design of current and future cavity-assisted atom interferometers.

  5. Limits on the fluctuating part of y-type distortion monopole from Planck and SPT results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khatri, Rishi; Sunyaev, Rashid, E-mail: khatri@mpa-garching.mpg.de, E-mail: sunyaev@mpa-garching.mpg.de

    2015-08-01

    We use the published Planck and SPT cluster catalogs [1,2] and recently published y-distortion maps [3] to put strong observational limits on the contribution of the fluctuating part of the y-type distortions to the y-distortion monopole. Our bounds are 5.4 × 10⁻⁸ < ⟨y⟩ < 2.2 × 10⁻⁶. Our upper bound is a factor of 6.8 stronger than the currently best upper 95% confidence limit from COBE-FIRAS of ⟨y⟩ < 15 × 10⁻⁶. In the standard cosmology, large scale structure is the only source of such distortions and our limits therefore constrain the baryonic physics involved in the formation of the large scale structure. Our lower limit, from the detected clusters in the Planck and SPT catalogs, also implies that a Pixie-like experiment should detect the y-distortion monopole at >27σ. The biggest sources of uncertainty in our upper limit are the monopole offsets between different HFI channel maps, which we estimate to be <10⁻⁶.

  6. Approximation Set of the Interval Set in Pawlak's Space

    PubMed Central

    Wang, Jin; Wang, Guoyin

    2014-01-01

    The interval set is a special set, which describes uncertainty of an uncertain concept or set Z with its two crisp boundaries named upper-bound set and lower-bound set. In this paper, the concept of similarity degree between two interval sets is defined at first, and then the similarity degrees between an interval set and its two approximations (i.e., upper approximation set R¯(Z) and lower approximation set R_(Z)) are presented, respectively. The disadvantages of using the upper-approximation set R¯(Z) or the lower-approximation set R_(Z) as approximation sets of the uncertain set (uncertain concept) Z are analyzed, and a new method for looking for a better approximation set of the interval set Z is proposed. The conclusion that the approximation set R_0.5(Z) is an optimal approximation set of the interval set Z is drawn and proved successfully. The change rules of R_0.5(Z) with different binary relations are analyzed in detail. Finally, a kind of crisp approximation set of the interval set Z is constructed. We hope this research work will promote the development of both the interval set model and granular computing theory. PMID:25177721

  7. Uncertainty, imprecision, and the precautionary principle in climate change assessment.

    PubMed

    Borsuk, M E; Tomassini, L

    2005-01-01

    Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
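
    The decision rule described above (choose the action that minimizes the upper expected cost over the whole class of probability measures) can be written in a few lines. The example below is a deliberately toy problem with made-up costs and a small finite class of measures, not the climate-economics model of the paper; real classes (probability boxes, density-ratio classes, etc.) would replace the explicit list of measures.

      import numpy as np

      # Hypothetical costs (rows: emissions policies, columns: climate-sensitivity states).
      costs = np.array([[2.0, 4.0, 9.0],    # weak mitigation: cheap now, costly if sensitivity is high
                        [5.0, 5.5, 6.0]])   # strong mitigation: higher up-front cost, smaller downside

      # Imprecise belief: a finite set of plausible probability measures over the states.
      measures = np.array([[0.5, 0.3, 0.2],
                           [0.3, 0.4, 0.3],
                           [0.2, 0.3, 0.5]])

      expected = costs @ measures.T                      # expected cost of each action under each measure
      lower, upper = expected.min(axis=1), expected.max(axis=1)

      actions = ["weak mitigation", "strong mitigation"]
      for name, lo, hi in zip(actions, lower, upper):
          print(f"{name}: expected cost in [{lo:.2f}, {hi:.2f}]")

      # Precautionary (minimum upper expected cost) choice.
      print("chosen action:", actions[int(np.argmin(upper))])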

  8. Finite state projection based bounds to compare chemical master equation models using single-cell data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Zachary; Neuert, Gregor; Department of Pharmacology, School of Medicine, Vanderbilt University, Nashville, Tennessee 37232

    2016-08-21

    Emerging techniques now allow for precise quantification of distributions of biological molecules in single cells. These rapidly advancing experimental methods have created a need for more rigorous and efficient modeling tools. Here, we derive new bounds on the likelihood that observations of single-cell, single-molecule responses come from a discrete stochastic model, posed in the form of the chemical master equation. These strict upper and lower bounds are based on a finite state projection approach, and they converge monotonically to the exact likelihood value. These bounds allow one to discriminate rigorously between models and with a minimum level of computational effort. In practice, these bounds can be incorporated into stochastic model identification and parameter inference routines, which improve the accuracy and efficiency of endeavors to analyze and predict single-cell behavior. We demonstrate the applicability of our approach using simulated data for three example models as well as for experimental measurements of a time-varying stochastic transcriptional response in yeast.

  9. Imaginary-frequency polarizability and van der Waals force constants of two-electron atoms, with rigorous bounds

    NASA Technical Reports Server (NTRS)

    Glover, R. M.; Weinhold, F.

    1977-01-01

    Variational functionals of Braunn and Rebane (1972) for the imaginary-frequency polarizability (IFP) have been generalized by the method of Gramian inequalities to give rigorous upper and lower bounds, valid even when the true (but unknown) unperturbed wavefunction must be represented by a variational approximation. Using these formulas in conjunction with flexible variational trial functions, tight error bounds are computed for the IFP and the associated two- and three-body van der Waals interaction constants of the ground 1¹S and metastable 2¹,³S states of He and Li⁺. These bounds generally establish the ground-state properties to within a fraction of a per cent and metastable properties to within a few per cent, permitting a comparative assessment of competing theoretical methods at this level of accuracy. Unlike previous 'error bounds' for these properties, the present results have a completely a priori theoretical character, with no empirical input data.

  10. Budgeted Interactive Learning

    DTIC Science & Technology

    2017-06-15

    the methodology of reducing the online-algorithm-selection problem to a contextual bandit problem, which is yet another interactive learning... [KH2016a] Kuan-Hao Huang and Hsuan-Tien Lin. Linear upper confidence bound algorithm for contextual bandit problem with piled rewards. In Proceedings
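
    The linear upper confidence bound (LinUCB) algorithm named in the cited reference selects, at each round, the arm whose estimated reward plus an uncertainty bonus is largest. Below is a minimal generic sketch (Python) of the standard disjoint, ridge-regression form of LinUCB with an arbitrary exploration parameter alpha and synthetic data; it does not implement the piled-rewards variant of the cited paper.

      import numpy as np

      class LinUCB:
          """Standard disjoint LinUCB: one ridge-regression reward model per arm."""

          def __init__(self, n_arms, dim, alpha=1.0):
              self.alpha = alpha
              self.A = [np.eye(dim) for _ in range(n_arms)]     # X^T X + I for each arm
              self.b = [np.zeros(dim) for _ in range(n_arms)]   # X^T y for each arm

          def select(self, x):
              scores = []
              for A, b in zip(self.A, self.b):
                  A_inv = np.linalg.inv(A)
                  theta = A_inv @ b                             # ridge estimate of the arm's weights
                  scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
              return int(np.argmax(scores))                     # largest upper confidence bound wins

          def update(self, arm, x, reward):
              self.A[arm] += np.outer(x, x)
              self.b[arm] += reward * x

      # Tiny synthetic run: rewards are linear in the context with per-arm weights plus noise.
      rng = np.random.default_rng(1)
      true_w = rng.normal(size=(3, 5))
      bandit = LinUCB(n_arms=3, dim=5, alpha=1.0)
      for _ in range(2000):
          x = rng.normal(size=5)
          arm = bandit.select(x)
          bandit.update(arm, x, true_w[arm] @ x + 0.1 * rng.normal())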

  11. Amortized entanglement of a quantum channel and approximately teleportation-simulable channels

    NASA Astrophysics Data System (ADS)

    Kaur, Eneet; Wilde, Mark M.

    2018-01-01

    This paper defines the amortized entanglement of a quantum channel as the largest difference in entanglement between the output and the input of the channel, where entanglement is quantified by an arbitrary entanglement measure. We prove that the amortized entanglement of a channel obeys several desirable properties, and we also consider special cases such as the amortized relative entropy of entanglement and the amortized Rains relative entropy. These latter quantities are shown to be single-letter upper bounds on the secret-key-agreement and PPT-assisted quantum capacities of a quantum channel, respectively. Of especial interest is a uniform continuity bound for these latter two special cases of amortized entanglement, in which the deviation between the amortized entanglement of two channels is bounded from above by a simple function of the diamond norm of their difference and the output dimension of the channels. We then define approximately teleportation- and positive-partial-transpose-simulable (PPT-simulable) channels as those that are close in diamond norm to a channel which is either exactly teleportation- or PPT-simulable, respectively. These results then lead to single-letter upper bounds on the secret-key-agreement and PPT-assisted quantum capacities of channels that are approximately teleportation- or PPT-simulable, respectively. Finally, we generalize many of the concepts in the paper to the setting of general resource theories, defining the amortized resourcefulness of a channel and the notion of ν-freely-simulable channels, connecting these concepts in an operational way as well.

  12. Corium shield

    DOEpatents

    McDonald, Douglas B.; Buchholz, Carol E.

    1994-01-01

    A shield for restricting molten corium from flowing into a water sump disposed in a floor of a containment vessel includes upper and lower walls which extend vertically upwardly and downwardly from the floor for laterally bounding the sump. The upper wall includes a plurality of laterally spaced apart flow channels extending horizontally therethrough, with each channel having a bottom disposed coextensively with the floor for channeling water therefrom into the sump. Each channel has a height and a length predeterminedly selected for allowing heat from the molten corium to dissipate through the upper and lower walls as it flows therethrough for solidifying the molten corium therein to prevent accumulation thereof in the sump.

  13. BICEP2 / Keck Array IX: New bounds on anisotropies of CMB polarization rotation and implications for axionlike particles and primordial magnetic fields

    NASA Astrophysics Data System (ADS)

    BICEP2 Collaboration; Keck Array Collaboration; Ade, P. A. R.; Ahmed, Z.; Aikin, R. W.; Alexander, K. D.; Barkats, D.; Benton, S. J.; Bischoff, C. A.; Bock, J. J.; Bowens-Rubin, R.; Brevik, J. A.; Buder, I.; Bullock, E.; Buza, V.; Connors, J.; Crill, B. P.; Duband, L.; Dvorkin, C.; Filippini, J. P.; Fliescher, S.; Germaine, T. St.; Ghosh, T.; Grayson, J.; Harrison, S.; Hildebrandt, S. R.; Hilton, G. C.; Hui, H.; Irwin, K. D.; Kang, J.; Karkare, K. S.; Karpel, E.; Kaufman, J. P.; Keating, B. G.; Kefeli, S.; Kernasovskiy, S. A.; Kovac, J. M.; Kuo, C. L.; Larson, N.; Leitch, E. M.; Megerian, K. G.; Moncelsi, L.; Namikawa, T.; Netterfield, C. B.; Nguyen, H. T.; O'Brient, R.; Ogburn, R. W.; Pryke, C.; Richter, S.; Schillaci, A.; Schwarz, R.; Sheehy, C. D.; Staniszewski, Z. K.; Steinbach, B.; Sudiwala, R. V.; Teply, G. P.; Thompson, K. L.; Tolan, J. E.; Tucker, C.; Turner, A. D.; Vieregg, A. G.; Weber, A. C.; Wiebe, D. V.; Willmert, J.; Wong, C. L.; Wu, W. L. K.; Yoon, K. W.

    2017-11-01

    We present the strongest constraints to date on anisotropies of cosmic microwave background (CMB) polarization rotation derived from 150 GHz data taken by the BICEP2 & Keck Array CMB experiments up to and including the 2014 observing season (BK14). The definition of the polarization angle in BK14 maps has gone through self-calibration in which the overall angle is adjusted to minimize the observed TB and EB power spectra. After this procedure, the Q/U maps lose sensitivity to a uniform polarization rotation but are still sensitive to anisotropies of polarization rotation. This analysis places constraints on the anisotropies of polarization rotation, which could be generated by CMB photons interacting with axionlike pseudoscalar fields or by Faraday rotation induced by primordial magnetic fields. The sensitivity of the BK14 maps (~3 μK-arcmin) makes it possible to reconstruct anisotropies of the polarization rotation angle and measure their angular power spectrum much more precisely than previous attempts. Our data are found to be consistent with no polarization rotation anisotropies, improving the upper bound on the amplitude of the rotation angle spectrum by roughly an order of magnitude compared to the previous best constraints. Our results lead to an order of magnitude better constraint on the coupling constant of the Chern-Simons electromagnetic term, g_aγ ≤ 7.2 × 10⁻² / H_I (95% confidence), than the constraint derived from the B-mode spectrum, where H_I is the inflationary Hubble scale. This constraint leads to a limit on the decay constant of 10⁻⁶ ≲ f_a/M_pl in the mass range 10⁻³³ ≤ m_a ≤ 10⁻²⁸ eV for r = 0.01, assuming g_aγ ~ α/(2π f_a) with α denoting the fine structure constant. The upper bound on the amplitude of the primordial magnetic fields is 30 nG (95% confidence) from the polarization rotation anisotropies.

  14. Modeling and control of beam-like structures

    NASA Technical Reports Server (NTRS)

    Hu, A.; Skelton, R. E.; Yang, T. Y.

    1987-01-01

    The most popular finite element codes are based upon appealing theories of convergence of modal frequencies. For example, the popularity of cubic elements for beam-like structures is due to the rapid convergence of modal frequencies and stiffness properties. However, for those problems in which the primary consideration is the accuracy of the response of the structure at specified locations, it is more important to obtain accuracy in the modal costs than in the modal frequencies. The modal cost represents the contribution of a mode to the norm of the response vector. This paper provides a complete modal cost analysis for beam-like continua. Upper bounds are developed for mode truncation errors in the model reduction process, and modal cost analysis dictates which modes to retain in order to reduce the model for control design purposes.
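
    The ranking step described here can be illustrated with a generic sketch: for a block-diagonal (modal) state-space model, each mode's contribution to the output variance under white-noise excitation follows from a small per-mode Lyapunov equation, and the modes with the largest contributions are the ones to retain. This is a minimal sketch of modal-cost ranking in general, not the paper's beam-specific derivation; the frequencies, damping, and input/output gains below are hypothetical placeholders.

        # Sketch: rank modes of a lightly damped structure by modal cost, i.e. each
        # mode's (approximate) contribution to E[y^2] under white-noise forcing.
        # All numerical values are illustrative placeholders.
        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        omega = np.array([10.0, 25.0, 60.0, 110.0])  # modal frequencies (rad/s), hypothetical
        zeta = 0.02                                  # uniform damping ratio, hypothetical
        b = np.array([0.8, 0.5, 0.3, 0.2])           # modal input gains, hypothetical
        c = np.array([1.0, 0.7, 0.4, 0.1])           # modal output gains (sensor location), hypothetical

        modal_costs = []
        for wi, bi, ci in zip(omega, b, c):
            # One second-order mode in state-space form, state = [displacement, velocity]
            A = np.array([[0.0, 1.0], [-wi**2, -2.0 * zeta * wi]])
            B = np.array([[0.0], [bi]])
            C = np.array([[ci, 0.0]])
            # Controllability Gramian P solves A P + P A^T + B B^T = 0
            P = solve_continuous_lyapunov(A, -B @ B.T)
            # Per-mode output variance; cross-mode terms are neglected, which is
            # reasonable for light damping and well-separated frequencies.
            modal_costs.append((C @ P @ C.T).item())

        for rank, i in enumerate(np.argsort(modal_costs)[::-1], 1):
            print(f"rank {rank}: mode at {omega[i]:.0f} rad/s, modal cost {modal_costs[i]:.3e}")
        # Retain the top-ranked modes; the discarded modal costs indicate how little
        # the truncated modes contribute to the response at the chosen output.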

  15. Observed Volume Fluxes and Mixing in the Dardanelles Strait

    DTIC Science & Technology

    2013-10-04

    et al., 2001; Kara et al., 2008]. [3] It has been recognized for years that the upper-layer outflow from the Dardanelles Strait to the Aegean Sea ... than the interior of the sea and manifests itself as a subsurface flow bounded by the upper layer of the Sea of Marmara. ... both ends of the Dardanelles Strait, and assuming a steady-state mass budget, Ünlüata et al. [1990] estimated mean annual volume transports in the

  16. Canonical Probability Distributions for Model Building, Learning, and Inference

    DTIC Science & Technology

    2006-07-14

    hand, are for Ranked nodes set at Unobservable and Auxiliary nodes. The value of alpha is set in the diagnostic window by moving the slider in the upper ... right hand side of the window. The upper bound of alpha can be modified by typing the new value in the small edit box to the right of the slider.

  17. Exact one-sided confidence limits for the difference between two correlated proportions.

    PubMed

    Lloyd, Chris J; Moldovan, Max V

    2007-08-15

    We construct exact and optimal one-sided upper and lower confidence bounds for the difference between two probabilities based on matched binary pairs using well-established optimality theory of Buehler. Starting with five different approximate lower and upper limits, we adjust them to have coverage probability exactly equal to the desired nominal level and then compare the resulting exact limits by their mean size. Exact limits based on the signed root likelihood ratio statistic are preferred and recommended for practical use.

  18. Scales of mass generation for quarks, leptons, and majorana neutrinos.

    PubMed

    Dicus, Duane A; He, Hong-Jian

    2005-06-10

    We study 2 → n inelastic fermion-(anti)fermion scattering into multiple longitudinal weak gauge bosons and derive universal upper bounds on the scales of fermion mass generation by imposing unitarity of the S matrix. We place new upper limits on the scales of fermion mass generation, independent of the electroweak symmetry breaking scale. Strikingly, we find that the strongest 2 → n limits fall in a narrow range, 3-170 TeV (with n = 2-24), depending on the observed fermion masses.

  19. Information models of software productivity - Limits on productivity growth

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1992-01-01

    Research into generalized information-metric models of software process productivity establishes quantifiable behavior and theoretical bounds. The models establish a fundamental mathematical relationship between software productivity and the human capacity for information traffic, the software product yield (system size), information efficiency, and tool and process efficiencies. An upper bound is derived that quantifies average software productivity and the maximum rate at which it may grow. This bound reveals that ultimately, when tools, methodologies, and automated assistants have reached their maximum effective state, further improvement in productivity can only be achieved through increasing software reuse. The reuse advantage is shown not to increase faster than logarithmically in the number of reusable features available. The reuse bound is further shown to be somewhat dependent on the reuse policy: a general 'reuse everything' policy can lead to a somewhat slower productivity growth than a specialized reuse policy.

  20. Coefficient of performance at maximum figure of merit and its bounds for low-dissipation Carnot-like refrigerators.

    PubMed

    Wang, Yang; Li, Mingxing; Tu, Z C; Hernández, A Calvo; Roco, J M M

    2012-07-01

    The figure of merit for refrigerators performing finite-time Carnot-like cycles between two reservoirs at temperatures T_h and T_c (<T_h) …

  1. Resistivity bound for hydrodynamic bad metals

    PubMed Central

    Lucas, Andrew; Hartnoll, Sean A.

    2017-01-01

    We obtain a rigorous upper bound on the resistivity ρ of an electron fluid whose electronic mean free path is short compared with the scale of spatial inhomogeneities. When such a hydrodynamic electron fluid supports a nonthermal diffusion process—such as an imbalance mode between different bands—we show that the resistivity bound becomes ρ ≲ AΓ. The coefficient A is independent of temperature and inhomogeneity lengthscale, and Γ is a microscopic momentum-preserving scattering rate. In this way, we obtain a unified mechanism—without umklapp—for ρ ∼ T^2 in a Fermi liquid and the crossover to ρ ∼ T in quantum critical regimes. This behavior is widely observed in transition metal oxides, organic metals, pnictides, and heavy fermion compounds and has presented a long-standing challenge to transport theory. Our hydrodynamic bound allows phonon contributions to diffusion constants, including thermal diffusion, to directly affect the electrical resistivity. PMID:29073054

  2. Interferometric tests of Planckian quantum geometry models

    DOE PAGES

    Kwon, Ohkyung; Hogan, Craig J.

    2016-04-19

    The effect of Planck scale quantum geometrical effects on measurements with interferometers is estimated with standard physics, and with a variety of proposed extensions. It is shown that effects are negligible in standard field theory with canonically quantized gravity. Statistical noise levels are estimated in a variety of proposals for nonstandard metric fluctuations, and these alternatives are constrained using upper bounds on stochastic metric fluctuations from LIGO. Idealized models of several interferometer system architectures are used to predict signal noise spectra in a quantum geometry that cannot be described by a fluctuating metric, in which position noise arises from holographic bounds on directional information. Lastly, predictions in this case are shown to be close to current and projected experimental bounds.

  3. Integrability and chemical potential in the (3 + 1)-dimensional Skyrme model

    NASA Astrophysics Data System (ADS)

    Alvarez, P. D.; Canfora, F.; Dimakis, N.; Paliathanasis, A.

    2017-10-01

    Using a remarkable mapping from the original (3 + 1)-dimensional Skyrme model to the sine-Gordon model, we construct the first analytic examples of Skyrmions as well as of Skyrmion-anti-Skyrmion bound states within a finite box in (3 + 1)-dimensional flat space-time. An analytic upper bound on the number of these Skyrmion-anti-Skyrmion bound states is derived. We compute the critical isospin chemical potential beyond which these Skyrmions cease to exist. With these tools, we also construct topologically protected time-crystals: time-periodic configurations whose time-dependence is protected by their non-trivial winding number. These are striking realizations of the ideas of Shapere and Wilczek. The critical isospin chemical potential for these time-crystals is determined.

  4. Properties of Coulomb crystals: rigorous results.

    PubMed

    Cioslowski, Jerzy

    2008-04-28

    Rigorous equalities and bounds for several properties of Coulomb crystals are presented. The energy e(N) per particle pair is shown to be a nondecreasing function of the particle number N for all clusters described by double-power-law pairwise-additive potentials ε(r) that are unbounded at both r → 0 and r → ∞. A lower bound for the ratio of the mean reciprocal crystal radius and e(N) is derived. The leading term in the asymptotic expression for the shell capacity that appears in the recently introduced approximate model of Coulomb crystals is obtained, providing in turn explicit large-N asymptotics for e(N) and the mean crystal radius. In addition, properties of the harmonic vibrational spectra are investigated, producing an upper bound for the zero-point energy.

  5. Direct dark matter search by annual modulation in XMASS-I

    NASA Astrophysics Data System (ADS)

    Abe, K.; Hiraide, K.; Ichimura, K.; Kishimoto, Y.; Kobayashi, K.; Kobayashi, M.; Moriyama, S.; Nakahata, M.; Norita, T.; Ogawa, H.; Sekiya, H.; Takachio, O.; Takeda, A.; Yamashita, M.; Yang, B. S.; Kim, N. Y.; Kim, Y. D.; Tasaka, S.; Fushimi, K.; Liu, J.; Martens, K.; Suzuki, Y.; Xu, B. D.; Fujita, R.; Hosokawa, K.; Miuchi, K.; Onishi, Y.; Oka, N.; Takeuchi, Y.; Kim, Y. H.; Lee, J. S.; Lee, K. B.; Lee, M. K.; Fukuda, Y.; Itow, Y.; Kegasa, R.; Kobayashi, K.; Masuda, K.; Takiya, H.; Nishijima, K.; Nakamura, S.; Xmass Collaboration

    2016-08-01

    A search for dark matter was conducted by looking for an annual modulation signal due to the Earth's orbital motion around the Sun using XMASS, a single-phase liquid xenon detector. The data used for this analysis were 359.2 live days times 832 kg of exposure accumulated between November 2013 and March 2015. Assuming Weakly Interacting Massive Particle (WIMP) dark matter elastically scattering on the target nuclei, an exclusion upper limit on the WIMP-nucleon cross section of 4.3 × 10^-41 cm^2 at 8 GeV/c^2 was obtained, and we exclude almost all the DAMA/LIBRA allowed region in the 6 to 16 GeV/c^2 range at ~10^-40 cm^2. The result of a simple modulation analysis, without assuming any specific dark matter model but including electron/γ events, showed a slight negative amplitude. The p-values obtained with two independent analyses are 0.014 and 0.068 for the null hypothesis, respectively. We obtained 90% C.L. upper bounds that can be used to test various models. This is the first extensive annual modulation search probing this region with an exposure comparable to that of DAMA/LIBRA.

  6. Current Global Absolute Plate Velocities Inferred from the Trends of Hotspot Tracks: Implications for Motion between Groups of Hotspots and Comparison and Combination with Absolute Velocities Inferred from the Orientation of Seismic Anisotropy

    NASA Astrophysics Data System (ADS)

    Wang, C.; Gordon, R. G.; Zheng, L.

    2016-12-01

    Hotspot tracks are widely used to estimate the absolute velocities of plates, i.e., relative to the lower mantle. Knowledge of current motion between hotspots is important for both plate kinematics and mantle dynamics and informs the discussion on the origin of the Hawaiian-Emperor Bend. Following Morgan & Morgan (2007), we focus only on the trends of young hotspot tracks and omit volcanic propagation rates. The dispersion of the trends can be partitioned into between-plate and within-plate dispersion. Applying the method of Gripp & Gordon (2002) to the hotspot trend data set of Morgan & Morgan (2007) constrained to the MORVEL relative plate angular velocities (DeMets et al., 2010) results in a standard deviation of the 56 hotspot trends of 22°. The largest angular misfits tend to occur on the slowest moving plates. Alternatively, estimation of best-fitting poles to hotspot tracks on the nine individual plates, results in a standard deviation of trends of only 13°, a statistically significant reduction from the introduction of 15 additional adjustable parameters. If all of the between-plate misfit is due to motion of groups of hotspots (beneath different plates), nominal velocities relative to the mean hotspot reference frame range from 1 to 4 mm/yr with the lower bounds ranging from 1 to 3 mm/yr and the greatest upper bound being 8 mm/yr. These are consistent with bounds on motion between Pacific and Indo-Atlantic hotspots over the past ≈50 Ma, which range from zero (lower bound) to 8 to 13 mm/yr (upper bounds) (Koivisto et al., 2014). We also determine HS4-MORVEL, a new global set of plate angular velocities relative to the hotspots constrained to consistency with the MORVEL relative plate angular velocities, using a two-tier analysis similar to that used by Zheng et al. (2014) to estimate the SKS-MORVEL global set of absolute plate velocities fit to the orientation of seismic anisotropy. We find that the 95% confidence limits of HS4-MORVEL and SKS-MORVEL overlap substantially and that the two sets of angular velocities differ insignificantly. Thus we combine the two sets of angular velocities to estimate ABS-MORVEL, an optimal set of global angular velocities consistent with both hotspot tracks and seismic anisotropy. ABS-MORVEL has more compact confidence limits than either SKS-MORVEL or HS4-MORVEL.

  7. Energy efficient quantum machines

    NASA Astrophysics Data System (ADS)

    Abah, Obinna; Lutz, Eric

    2017-05-01

    We investigate the performance of a quantum thermal machine operating in finite time based on shortcut-to-adiabaticity techniques. We compute efficiency and power for a paradigmatic harmonic quantum Otto engine by taking the energetic cost of the shortcut driving explicitly into account. We demonstrate that shortcut-to-adiabaticity machines outperform conventional ones for fast cycles. We further derive generic upper bounds on both quantities, valid for any heat engine cycle, using the notion of quantum speed limit for driven systems. We establish that these quantum bounds are tighter than those stemming from the second law of thermodynamics.

  8. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
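
    As a generic illustration of the simultaneous lower/upper bounding loop described in this record, the sketch below minimizes a product of two affine functions over a box: interval bounds on each factor give a valid lower bound per box, the box midpoint updates the incumbent upper bound, and boxes are bisected until the gap closes. It is a didactic stand-in under made-up problem data, not the paper's two-phase linear relaxation.

        # Didactic branch-and-bound sketch for a linear multiplicative objective:
        #   minimize f(x) = (c1.x + d1) * (c2.x + d2)  subject to  lo <= x <= hi.
        # Interval bounds on the two affine factors give a valid lower bound per box;
        # evaluating the box midpoint updates the incumbent (upper bound).
        import heapq
        import numpy as np

        c1, d1 = np.array([1.0, -2.0]), 3.0          # made-up problem data
        c2, d2 = np.array([2.0, 1.0]), -1.0
        box_lo, box_hi = np.array([-1.0, -1.0]), np.array([2.0, 2.0])

        def affine_range(c, d, lo, hi):
            # exact range of c.x + d over the box [lo, hi]
            return (d + np.sum(np.where(c > 0, c * lo, c * hi)),
                    d + np.sum(np.where(c > 0, c * hi, c * lo)))

        def lower_bound(lo, hi):
            a, b = affine_range(c1, d1, lo, hi)
            p, q = affine_range(c2, d2, lo, hi)
            return min(a * p, a * q, b * p, b * q)

        def f(x):
            return (c1 @ x + d1) * (c2 @ x + d2)

        best_x = 0.5 * (box_lo + box_hi)
        best_val = f(best_x)                         # incumbent upper bound
        heap, tick = [(lower_bound(box_lo, box_hi), 0, box_lo, box_hi)], 1
        for _ in range(20000):                       # node budget keeps the sketch finite
            if not heap:
                break
            lb, _, lo, hi = heapq.heappop(heap)
            if lb >= best_val - 1e-6:
                continue                             # prune: cannot beat the incumbent
            mid = 0.5 * (lo + hi)
            if f(mid) < best_val:
                best_val, best_x = f(mid), mid       # tighter upper bound
            j = int(np.argmax(hi - lo))              # bisect the widest edge
            left_hi, right_lo = hi.copy(), lo.copy()
            left_hi[j] = right_lo[j] = mid[j]
            for nlo, nhi in ((lo, left_hi), (right_lo, hi)):
                nlb = lower_bound(nlo, nhi)
                if nlb < best_val - 1e-6:
                    heapq.heappush(heap, (nlb, tick, nlo, nhi))
                    tick += 1

        print("approximate minimum", round(best_val, 4), "at", best_x)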

  9. Bounds on strong field magneto-transport in three-dimensional composites

    NASA Astrophysics Data System (ADS)

    Briane, Marc; Milton, Graeme W.

    2011-10-01

    This paper deals with bounds satisfied by the effective non-symmetric conductivity of three-dimensional composites in the presence of a strong magnetic field. On the one hand, it is shown that for general composites the antisymmetric part of the effective conductivity cannot be bounded solely in terms of the antisymmetric part of the local conductivity, contrary to the columnar case studied by Briane and Milton [SIAM J. Appl. Math. 70(8), 3272-3286 (2010), 10.1137/100798090]. Thus a suitable rank-two laminate, the conductivity of which has a bounded antisymmetric part together with a high-contrast symmetric part, may generate an arbitrarily large antisymmetric part of the effective conductivity. On the other hand, bounds are provided which show that the antisymmetric part of the effective conductivity must go to zero if the upper bound on the antisymmetric part of the local conductivity goes to zero, and the symmetric part of the local conductivity remains bounded below and above. Elementary bounds on the effective moduli are derived assuming the local conductivity and the effective conductivity have transverse isotropy in the plane orthogonal to the magnetic field. New Hashin-Shtrikman type bounds for two-phase three-dimensional composites with a non-symmetric conductivity are provided under geometric isotropy of the microstructure. The derivation of the bounds is based on a particular variational principle symmetrizing the problem, and the use of Y-tensors involving the averages of the fields in each phase.

  10. Statistical mechanical estimation of the free energy of formation of E. coli biomass for use with macroscopic bioreactor balances.

    PubMed

    Grosz, R; Stephanopoulos, G

    1983-09-01

    The need for the determination of the free energy of formation of biomass in bioreactor second law balances is well established. A statistical mechanical method for the calculation of the free energy of formation of E. coli biomass is introduced. In this method, biomass is modelled to consist of a system of biopolymer networks. The partition function of this system is proposed to consist of acoustic and optical modes of vibration. Acoustic modes are described by Tarasov's model, the parameters of which are evaluated with the aid of low-temperature calorimetric data for the crystalline protein bovine chymotrypsinogen A. The optical modes are described by considering the low-temperature thermodynamic properties of biological monomer crystals such as amino acid crystals. Upper and lower bounds are placed on the entropy to establish the maximum error associated with the statistical method. The upper bound is determined by endowing the monomers in biomass with ideal gas properties. The lower bound is obtained by limiting the monomers to complete immobility. On this basis, the free energy of formation is fixed to within 10%. Proposals are made with regard to experimental verification of the calculated value and extension of the calculation to other types of biomass.

  11. Retrospective Assessment of Cost Savings From Prevention

    PubMed Central

    Grosse, Scott D.; Berry, Robert J.; Tilford, J. Mick; Kucik, James E.; Waitzman, Norman J.

    2016-01-01

    Introduction Although fortification of food with folic acid has been calculated to be cost saving in the U.S., updated estimates are needed. This analysis calculates new estimates from the societal perspective of net cost savings per year associated with mandatory folic acid fortification of enriched cereal grain products in the U.S. that was implemented during 1997–1998. Methods Estimates of annual numbers of live-born spina bifida cases in 1995–1996 relative to 1999–2011 based on birth defects surveillance data were combined during 2015 with published estimates of the present value of lifetime direct costs updated in 2014 U.S. dollars for a live-born infant with spina bifida to estimate avoided direct costs and net cost savings. Results The fortification mandate is estimated to have reduced the annual number of U.S. live-born spina bifida cases by 767, with a lower-bound estimate of 614. The present value of mean direct lifetime cost per infant with spina bifida is estimated to be $791,900, or $577,000 excluding caregiving costs. Using a best estimate of numbers of avoided live-born spina bifida cases, fortification is estimated to reduce the present value of total direct costs for each year's birth cohort by $603 million more than the cost of fortification. A lower-bound estimate of cost savings using conservative assumptions, including the upper-bound estimate of fortification cost, is $299 million. Conclusions The estimates of cost savings are larger than previously reported, even using conservative assumptions. The analysis can also inform assessments of folic acid fortification in other countries. PMID:26790341
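
    The net-savings arithmetic reported here is easy to reproduce. The avoided-case count and per-case lifetime cost are quoted from the abstract; the annual fortification cost below is a hypothetical placeholder, since the abstract reports only the net figure.

        # Back-of-the-envelope reproduction of the net-savings calculation.
        # Case counts and per-case lifetime direct costs are quoted from the abstract;
        # FORTIFICATION_COST_PER_YEAR is a hypothetical placeholder (the abstract
        # reports only net savings, i.e., avoided costs minus fortification cost).
        CASES_AVOIDED_BEST = 767                  # best estimate of avoided spina bifida births/year
        COST_PER_CASE = 791_900                   # present value of lifetime direct cost (2014 USD)
        FORTIFICATION_COST_PER_YEAR = 4_000_000   # hypothetical; substitute the source's figure

        gross_savings = CASES_AVOIDED_BEST * COST_PER_CASE
        net_savings = gross_savings - FORTIFICATION_COST_PER_YEAR
        print(f"gross avoided direct costs: ${gross_savings/1e6:.0f} million")  # about $607 million
        print(f"net savings:                ${net_savings/1e6:.0f} million")    # about $603 million with this placeholder cost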

  12. Bounds on light gluinos from the BEBC beam dump experiment

    NASA Astrophysics Data System (ADS)

    Cooper-Sarkar, A. M.; Parker, M. A.; Sarkar, S.; Aderholz, M.; Bostock, P.; Clayton, E. F.; Faccini-Turluer, M. L.; Grässler, H.; Guy, J.; Hulth, P. O.; Hultqvist, K.; Idschok, U.; Klein, H.; Kreutzmann, H.; Krstic, J.; Mobayyen, M. M.; Morrison, D. R. O.; Nellen, B.; Schmid, P.; Schmitz, N.; Talebzadeh, M.; Venus, W.; Vignaud, D.; Walck, Ch.; Wachsmuth, H.; Wünsch, B.; WA66 Collaboration

    1985-10-01

    Observational upper limits on anomalous neutral-current events in a proton beam dump experiment are used to constrain the possible hadroproduction and decay of light gluinos. These results require m(g̃) ≳ 4 GeV for the relevant range of squark masses m(q̃).

  13. 5. Corridor A and Building No. 9962A (with white door). ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. Corridor A and Building No. 9962-A (with white door). In upper left is east side of Building No. 9952-B. - Madigan Hospital, Corridors & Ramps, Bounded by Wilson & McKinley Avenues & Garfield & Lincoln Streets, Tacoma, Pierce County, WA

  14. Interpolation Inequalities and Spectral Estimates for Magnetic Operators

    NASA Astrophysics Data System (ADS)

    Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael

    2018-05-01

    We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.

  15. Liouville type theorems of a nonlinear elliptic equation for the V-Laplacian

    NASA Astrophysics Data System (ADS)

    Huang, Guangyue; Li, Zhi

    2018-03-01

    In this paper, we consider Liouville type theorems for positive solutions to the following nonlinear elliptic equation: Δ_V u + a u log u = 0, where a is a nonzero real constant. By using gradient estimates, we obtain upper bounds on |∇u|.

  16. Mechanical properties of silicate glasses exposed to a low-Earth orbit

    NASA Technical Reports Server (NTRS)

    Wiedlocher, David E.; Tucker, Dennis S.; Nichols, Ron; Kinser, Donald L.

    1992-01-01

    The effects of a 5.8-year exposure to the low-Earth-orbit environment on the mechanical properties of commercial optical fused silica, low-iron soda-lime-silica, Pyrex 7740, Vycor 7913, BK-7, and the glass ceramic Zerodur were examined. Mechanical testing employed the ASTM F-394 piston-on-three-ball method in a liquid nitrogen environment. Samples were exposed on the Long Duration Exposure Facility (LDEF) in two locations. Impacts were observed on all specimens except Vycor. Weibull analysis as well as a standard statistical evaluation were conducted. The Weibull analysis revealed no differences between the control samples and the two exposed sample groups. We thus concluded that radiation components of the Earth-orbital environment did not degrade the mechanical strength of the samples examined, within the limits of experimental error. The upper bound of strength degradation for meteorite-impacted samples, based upon statistical analysis and observation, was 50 percent.

  17. Testing non-minimally coupled inflation with CMB data: a Bayesian analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campista, Marcela; Benetti, Micol; Alcaniz, Jailson, E-mail: campista@on.br, E-mail: micolbenetti@on.br, E-mail: alcaniz@on.br

    2017-09-01

    We use the most recent cosmic microwave background (CMB) data to perform a Bayesian statistical analysis and discuss the observational viability of inflationary models with a non-minimal coupling, ξ, between the inflaton field and the Ricci scalar. We particularize our analysis to two examples of small and large field inflationary models, namely, the Coleman-Weinberg and the chaotic quartic potentials. We find that (i) the ξ parameter is closely correlated with the primordial amplitude; (ii) although improving the agreement with the CMB data in the r–n_s plane, where r is the tensor-to-scalar ratio and n_s the primordial spectral index, a non-null coupling is strongly disfavoured with respect to the minimally coupled standard ΛCDM model, since the upper bounds of the Bayes factor (odds) for the ξ parameter are greater than 150:1.

  18. Axioms of adaptivity

    PubMed Central

    Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.

    2014-01-01

    This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Only four axioms guarantee the optimality in terms of the error estimators. Compared to the state of the art in the current literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson (2007). Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390

  19. Evaluating the Potential Importance of Monoterpene Degradation for Global Acetone Production

    NASA Astrophysics Data System (ADS)

    Kelp, M. M.; Brewer, J.; Keller, C. A.; Fischer, E. V.

    2015-12-01

    Acetone is one of the most abundant volatile organic compounds (VOCs) in the atmosphere, but estimates of the global source of acetone vary widely. A better understanding of acetone sources is essential because acetone serves as a source of HOx in the upper troposphere and as a precursor to the NOx reservoir species peroxyacetyl nitrate (PAN). Although there are primary anthropogenic and pyrogenic sources of acetone, the dominant acetone sources are thought to be from direct biogenic emissions and photochemical production, particularly from the oxidation of iso-alkanes. Recent work suggests that the photochemical degradation of monoterpenes may also represent a significant contribution to global acetone production. We investigate that hypothesis using the GEOS-Chem chemical transport model. In this work, we calculate the emissions of eight terpene species (α-pinene, β-pinene, limonene, Δ3-carene, myrcene, sabinene, trans-β-ocimene, and an 'other monoterpenes' category which contains 34 other trace species) and couple these with upper and lower bound literature yields from species-specific chamber studies. We compare the simulated acetone distributions against in situ acetone measurements from a global suite of NASA aircraft campaigns. When simulating an upper bound on yields, the model-to-measurement comparison improves for North America at both the surface and in the upper troposphere. The inclusion of acetone production from monoterpene degradation also improves the ability of the model to reproduce observations of acetone in East Asian outflow. However, in general the addition of monoterpenes degrades the model comparison for the Southern Hemisphere.

  20. Probing the size of extra dimensions with gravitational wave astronomy

    NASA Astrophysics Data System (ADS)

    Yagi, Kent; Tanahashi, Norihiro; Tanaka, Takahiro

    2011-04-01

    In the Randall-Sundrum II braneworld model, it has been conjectured, according to the AdS/CFT correspondence, that a brane-localized black hole (BH) larger than the bulk AdS curvature scale ℓ cannot be static, and it is dual to a four-dimensional BH emitting Hawking radiation through some quantum fields. In this scenario, the number of the quantum field species is so large that this radiation changes the orbital evolution of a BH binary. We derived the correction to the gravitational waveform phase due to this effect and estimated the upper bounds on ℓ by performing Fisher analyses. We found that the Deci-Hertz Interferometer Gravitational Wave Observatory and the Big Bang Observatory (DECIGO/BBO) can give a stronger constraint than the current tabletop result by detecting gravitational waves from small-mass BH/BH and BH/neutron star (NS) binaries. Furthermore, DECIGO/BBO is expected to detect 10^5 BH/NS binaries per year. Taking this advantage, we find that DECIGO/BBO can actually measure ℓ down to ℓ = 0.33 μm for a 5 yr observation if we know that binaries are circular a priori. This is about 40 times smaller than the upper bound obtained from the tabletop experiment. On the other hand, when we include eccentricities among the binary parameters, the detection limit weakens to ℓ = 1.5 μm due to strong degeneracies between ℓ and the eccentricities. We also derived the upper bound on ℓ from the expected detection number of extreme mass ratio inspirals with LISA and BH/NS binaries with DECIGO/BBO, extending the discussion made recently by McWilliams [Phys. Rev. Lett. 104, 141601 (2010)]. We found that these less robust constraints are weaker than the ones from phase differences.

  1. Mechanical metamaterials at the theoretical limit of isotropic elastic stiffness

    NASA Astrophysics Data System (ADS)

    Berger, J. B.; Wadley, H. N. G.; McMeeking, R. M.

    2017-02-01

    A wide variety of high-performance applications require materials for which shape control is maintained under substantial stress, and that have minimal density. Bio-inspired hexagonal and square honeycomb structures and lattice materials based on repeating unit cells composed of webs or trusses, when made from materials of high elastic stiffness and low density, represent some of the lightest, stiffest and strongest materials available today. Recent advances in 3D printing and automated assembly have enabled such complicated material geometries to be fabricated at low (and declining) cost. These mechanical metamaterials have properties that are a function of their mesoscale geometry as well as their constituents, leading to combinations of properties that are unobtainable in solid materials; however, a material geometry that achieves the theoretical upper bounds for isotropic elasticity and strain energy storage (the Hashin-Shtrikman upper bounds) has yet to be identified. Here we evaluate the manner in which strain energy distributes under load in a representative selection of material geometries, to identify the morphological features associated with high elastic performance. Using finite-element models, supported by analytical methods, and a heuristic optimization scheme, we identify a material geometry that achieves the Hashin-Shtrikman upper bounds on isotropic elastic stiffness. Previous work has focused on truss networks and anisotropic honeycombs, neither of which can achieve this theoretical limit. We find that stiff but well distributed networks of plates are required to transfer loads efficiently between neighbouring members. The resulting low-density mechanical metamaterials have many advantageous properties: their mesoscale geometry can facilitate large crushing strains with high energy absorption, optical bandgaps and mechanically tunable acoustic bandgaps, high thermal insulation, buoyancy, and fluid storage and transport. Our relatively simple design can be manufactured using origami-like sheet folding and bonding methods.
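
    For context, the Hashin-Shtrikman upper bounds that this geometry is reported to attain can be evaluated from the standard two-phase formulas, with the second phase taken as void for a solid/air metamaterial. The sketch uses the textbook Hashin-Shtrikman (1963) expressions rather than anything from the paper, and the parent-material properties are illustrative.

        # Hashin-Shtrikman (1963) upper bounds on the bulk and shear moduli of an
        # isotropic two-phase composite, evaluated for a solid/void material as a
        # function of relative density. Standard textbook formulas (not from the
        # paper); the solid properties below are illustrative.
        def hs_upper_bounds(K1, G1, f1, K2=0.0, G2=0.0):
            """Phase 1 = stiffer phase with volume fraction f1; phase 2 fraction 1 - f1."""
            f2 = 1.0 - f1
            K_up = K1 + f2 / (1.0 / (K2 - K1) + f1 / (K1 + 4.0 * G1 / 3.0))
            G_up = G1 + f2 / (1.0 / (G2 - G1)
                              + 2.0 * f1 * (K1 + 2.0 * G1) / (5.0 * G1 * (K1 + 4.0 * G1 / 3.0)))
            return K_up, G_up

        # Illustrative parent solid: E = 70 GPa, nu = 0.3 (aluminium-like)
        E, nu = 70.0, 0.3
        K1 = E / (3.0 * (1.0 - 2.0 * nu))
        G1 = E / (2.0 * (1.0 + nu))

        for rel_density in (0.1, 0.2, 0.5):
            K_up, G_up = hs_upper_bounds(K1, G1, rel_density)
            print(f"relative density {rel_density:.1f}: K+ = {K_up:.2f} GPa, G+ = {G_up:.2f} GPa")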

  2. Mechanical metamaterials at the theoretical limit of isotropic elastic stiffness.

    PubMed

    Berger, J B; Wadley, H N G; McMeeking, R M

    2017-03-23

    A wide variety of high-performance applications require materials for which shape control is maintained under substantial stress, and that have minimal density. Bio-inspired hexagonal and square honeycomb structures and lattice materials based on repeating unit cells composed of webs or trusses, when made from materials of high elastic stiffness and low density, represent some of the lightest, stiffest and strongest materials available today. Recent advances in 3D printing and automated assembly have enabled such complicated material geometries to be fabricated at low (and declining) cost. These mechanical metamaterials have properties that are a function of their mesoscale geometry as well as their constituents, leading to combinations of properties that are unobtainable in solid materials; however, a material geometry that achieves the theoretical upper bounds for isotropic elasticity and strain energy storage (the Hashin-Shtrikman upper bounds) has yet to be identified. Here we evaluate the manner in which strain energy distributes under load in a representative selection of material geometries, to identify the morphological features associated with high elastic performance. Using finite-element models, supported by analytical methods, and a heuristic optimization scheme, we identify a material geometry that achieves the Hashin-Shtrikman upper bounds on isotropic elastic stiffness. Previous work has focused on truss networks and anisotropic honeycombs, neither of which can achieve this theoretical limit. We find that stiff but well distributed networks of plates are required to transfer loads efficiently between neighbouring members. The resulting low-density mechanical metamaterials have many advantageous properties: their mesoscale geometry can facilitate large crushing strains with high energy absorption, optical bandgaps and mechanically tunable acoustic bandgaps, high thermal insulation, buoyancy, and fluid storage and transport. Our relatively simple design can be manufactured using origami-like sheet folding and bonding methods.

  3. Optimizing Retransmission Threshold in Wireless Sensor Networks

    PubMed Central

    Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang

    2016-01-01

    The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold, and they simply set the same retransmission threshold for all sensor nodes in advance. The method did not take link quality and delay requirement into account, which decreases the probability of a packet passing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing retransmission thresholds is to maximize the summation of the probability of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ·max_{1≤i≤n} u_i), where u_i is the given upper bound of the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path and Δ is the given upper bound of the transmission delay of the delivery path. If Δ is greater than the polynomial, to reduce the time complexity, a linear programming-based (1 + p_min)-approximation algorithm is proposed. Furthermore, when the ranges of the upper and lower bounds of retransmission thresholds are big enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092
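
    The dynamic program described in this record can be sketched directly from the stated objective: choose per-hop thresholds t_i ≤ u_i under a total delay budget Δ to maximize the sum of per-hop in-time delivery probabilities. The per-attempt link success probabilities and the one-slot-per-attempt assumption below are illustrative modelling choices, not parameters from the paper.

        # Sketch of the threshold-selection dynamic program suggested by the abstract:
        # maximize  sum_i P(hop i succeeds within t_i attempts)  subject to
        # sum_i t_i <= Delta and 1 <= t_i <= u_i.  Each attempt is assumed to take one
        # time slot and hop i to succeed independently with probability p[i] per
        # attempt -- modelling assumptions for this sketch, not the paper's exact setup.
        from functools import lru_cache

        p = [0.6, 0.8, 0.5, 0.9]      # per-attempt link success probabilities (assumed)
        u = [4, 3, 5, 2]              # per-hop upper bounds on retransmission thresholds
        DELTA = 9                     # total delay budget in attempts/slots
        n = len(p)

        def hop_success(i, t):
            """Probability hop i is delivered within t attempts."""
            return 1.0 - (1.0 - p[i]) ** t

        @lru_cache(maxsize=None)
        def best(i, budget):
            """Max total in-time success probability for hops i..n-1 with 'budget' slots left."""
            if i == n:
                return 0.0
            if budget < n - i:                       # not enough slots for one attempt per hop
                return float("-inf")
            return max(hop_success(i, t) + best(i + 1, budget - t)
                       for t in range(1, min(u[i], budget - (n - i - 1)) + 1))

        # Recover one optimal threshold assignment.
        thresholds, budget = [], DELTA
        for i in range(n):
            t_star = max(range(1, min(u[i], budget - (n - i - 1)) + 1),
                         key=lambda t: hop_success(i, t) + best(i + 1, budget - t))
            thresholds.append(t_star)
            budget -= t_star

        print("optimal thresholds:", thresholds, "objective:", round(best(0, DELTA), 4))
        # The table best(i, b) has O(n * Delta) entries, each scanned over at most
        # max_i u_i choices, matching the stated O(n * Delta * max_i u_i) complexity.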

  4. Formation of eyes in large-scale cyclonic vortices

    NASA Astrophysics Data System (ADS)

    Oruba, L.; Davidson, P. A.; Dormy, E.

    2018-01-01

    We present numerical simulations of steady, laminar, axisymmetric convection of a Boussinesq fluid in a shallow, rotating, cylindrical domain. The flow is driven by an imposed vertical heat flux and shaped by the background rotation of the domain. The geometry is inspired by that of tropical cyclones and the global flow pattern consists of a shallow swirling vortex combined with a poloidal flow in the r-z plane which is predominantly inward near the bottom boundary and outward along the upper surface. Our numerical experiments confirm that, as suggested in our recent work [L. Oruba et al., J. Fluid Mech. 812, 890 (2017), 10.1017/jfm.2016.846], an eye forms at the center of the vortex which is reminiscent of that seen in a tropical cyclone and is characterized by a local reversal in the direction of the poloidal flow. We establish scaling laws for the flow and map out the conditions under which an eye will, or will not, form. We show that, to leading order, the velocity scales with V = (αgβ)^(1/2) H, where g is gravity, α is the expansion coefficient, β is the background temperature gradient, and H is the depth of the domain. We also show that the two most important parameters controlling the flow are Re = VH/ν and Ro = V/(ΩH), where Ω is the background rotation rate and ν the viscosity. The Prandtl number and aspect ratio also play an important, if secondary, role. Finally, and most importantly, we establish the criteria required for eye formation. These consist of a lower bound on Re, upper and lower bounds on Ro, and an upper bound on the Ekman number.
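
    The control parameters quoted here are straightforward to evaluate for a given configuration. The sketch below implements V = (αgβ)^(1/2) H, Re = VH/ν, Ro = V/(ΩH), and the Ekman number in its usual form E = ν/(ΩH²); the numerical inputs are arbitrary placeholders, not values from the paper.

        # Evaluate the scaling quantities quoted in the abstract:
        #   V  = sqrt(alpha * g * beta) * H        characteristic velocity
        #   Re = V * H / nu                        Reynolds number
        #   Ro = V / (Omega * H)                   Rossby number
        #   Ek = nu / (Omega * H**2)               Ekman number (usual definition)
        # The input values are arbitrary placeholders for illustration only.
        import math

        alpha = 3.0e-3      # thermal expansion coefficient (1/K)
        g = 9.81            # gravity (m/s^2)
        beta = 5.0e-3       # background vertical temperature gradient (K/m)
        H = 1.0e3           # layer depth (m)
        nu = 1.0e-2         # (eddy) viscosity (m^2/s)
        Omega = 5.0e-5      # background rotation rate (1/s)

        V = math.sqrt(alpha * g * beta) * H
        Re = V * H / nu
        Ro = V / (Omega * H)
        Ek = nu / (Omega * H**2)
        print(f"V = {V:.3f} m/s, Re = {Re:.3e}, Ro = {Ro:.3f}, Ek = {Ek:.3e}")
        # Eye formation is reported to require Re above a lower bound, Ro within
        # upper and lower bounds, and Ek below an upper bound.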

  5. Thick deltaic sedimentation and detachment faulting delay the onset of continental rupture in the Northern Gulf of California: Analysis of seismic reflection profiles

    NASA Astrophysics Data System (ADS)

    Martín-Barajas, Arturo; González-Escobar, Mario; Fletcher, John M.; Pacheco, Martín.; Oskin, Michael; Dorsey, Rebecca

    2013-09-01

    The transition from distributed continental extension to the rupture of continental lithosphere is imaged in the northern Gulf of California across the obliquely conjugate Tiburón-Upper Delfin basin segment. Structural mapping on a 5-20 km grid of seismic reflection lines of Petroleos Mexicanos demonstrates that ~1000% extension is accommodated on a series of NNE-striking listric-normal faults that merge at depth into a detachment fault. The detachment juxtaposes a late-Neogene marine sequence over thinned continental crust and contains an intrabasinal divide due to footwall uplift. Two northwest-striking, dextral-oblique faults bound both ends of the detachment and shear the continental crust parallel to the tectonic transport. A regional unconformity in the upper 0.5 s (two-way travel time) and crest erosion of rollover anticlines above the detachment indicate inversion and footwall uplift during the lithospheric rupture in the Upper Delfin and Lower Delfin basins. The maximum length of new crust in both Delfin basins is less than 40 km based on the lack of an acoustic basement and the absence of a lower sedimentary sequence beneath a wedge-shaped upper sequence that reaches >5 km in thickness. A fundamental difference exists between the Tiburón-Delfin segment and the Guaymas segment to the south in terms of the presence of low-angle normal faults and the amount of new oceanic lithosphere, which we attribute to thermal insulation, diffuse upper-plate extension, and slip on low-angle normal faults engendered by a thick sedimentary lid.

  6. Thick deltaic sedimentation and detachment faulting delay the onset of continental rupture in the Northern Gulf of California: Analysis of seismic reflection profiles

    NASA Astrophysics Data System (ADS)

    Martin, A.; González-Escobar, M.; Fletcher, J. M.; Pacheco, M.; Oskin, M. E.; Dorsey, R. J.

    2013-12-01

    The transition from distributed continental extension to the rupture of continental lithosphere is imaged in the northern Gulf of California across the obliquely conjugate Tiburón-Upper Delfín basin segment. Structural mapping on a 5-20 km grid of seismic reflection lines of Petroleos Mexicanos (PEMEX) demonstrates that ~1000% extension is accommodated on a series of NNE-striking listric-normal faults that merge at depth into a detachment fault. The detachment juxtaposes a late-Neogene marine sequence over thinned continental crust and contains an intrabasinal divide due to footwall uplift. Two northwest striking, dextral-oblique faults bound both ends of the detachment and shear the continental crust parallel to the tectonic transport. A regional unconformity in the upper 0.5 seconds (TWTT) and crest erosion of rollover anticlines above the detachment indicates inversion and footwall uplift during the lithospheric rupture in the Upper Delfin and Lower Delfin basins. The maximum length of new crust in both Delfin basins is less than 40 km based on the lack of an acoustic basement and the absence of a lower sedimentary sequence beneath a wedge shaped upper sequence that reaches >5 km in thickness. A fundamental difference exists between the Tiburón-Delfin segment and the Guaymas segment to the south in terms of presence of low angle normal faults and amount of new oceanic lithosphere, which we attribute to thermal insulation, diffuse upper-plate extension, and slip on low angle normal faults engendered by a thick sedimentary lid.

  7. Architectural elements and bounding surfaces in fluvial deposits: anatomy of the Kayenta formation (lower jurassic), Southwest Colorado

    NASA Astrophysics Data System (ADS)

    Miall, Andrew D.

    1988-03-01

    Three well-exposed outcrops in the Kayenta Formation (Lower Jurassic), near Dove Creek in southwestern Colorado, were studied using lateral profiles, in order to test recent ideas regarding architectural-element analysis and the classification and interpretation of internal bounding surfaces. Examination of bounding surfaces within and between elements in the Kayenta outcrops raises problems in applying the three-fold classification of Allen (1983). Enlarging this classification to a six-fold hierarchy permits the discrimination of surfaces intermediate between Allen's second- and third-order types, corresponding to the upper bounding surfaces of macroforms, and internal erosional "reactivation" surfaces within the macroforms. Examples of the first five types of surface occur in the Kayenta outcrops at Dove Creek. The new classification is offered as a general solution to the problem of description of complex, three-dimensional fluvial sandstone bodies. The Kayenta Formation at Dove Creek consists of a multistorey sandstone body, including the deposits of lateral- and downstream-accreted macroforms. The storeys show no internal cyclicity, neither within individual elements nor through the overall vertical thickness of the formation. Low paleocurrent variance indicates low-sinuosity flow, whereas macroform geometry and orientation suggest low to moderate sinuosity. The many internal minor erosion surfaces draped with mud and followed by intraclast breccias imply frequent rapid stage fluctuation, consistent with variable (seasonal? monsoonal? ephemeral?) flow. The results suggest a fluvial architecture similar to that of the South Saskatchewan River, though with a three-dimensional geometry unlike that interpreted from surface studies of that river.

  8. Standard Model in multiscale theories and observational constraints

    NASA Astrophysics Data System (ADS)

    Calcagni, Gianluca; Nardelli, Giuseppe; Rodríguez-Fernández, David

    2016-08-01

    We construct and analyze the Standard Model of electroweak and strong interactions in multiscale spacetimes with (i) weighted derivatives and (ii) q-derivatives. Both theories can be formulated in two different frames, called fractional and integer picture. By definition, the fractional picture is where physical predictions should be made. (i) In the theory with weighted derivatives, it is shown that gauge invariance and the requirement of having constant masses in all reference frames make the Standard Model in the integer picture indistinguishable from the ordinary one. Experiments involving only weak and strong forces are insensitive to a change of spacetime dimensionality also in the fractional picture, and only the electromagnetic and gravitational sectors can break the degeneracy. For the simplest multiscale measures with only one characteristic time, length and energy scale t_*, ℓ_* and E_*, we compute the Lamb shift in the hydrogen atom and constrain the multiscale correction to the ordinary result, getting the absolute upper bound t_* < 10^-23 s. For the natural choice α_0 = 1/2 of the fractional exponent in the measure, this bound is strengthened to t_* < 10^-29 s, corresponding to ℓ_* < 10^-20 m and E_* > 28 TeV. Stronger bounds are obtained from the measurement of the fine-structure constant. (ii) In the theory with q-derivatives, considering the muon decay rate and the Lamb shift in light atoms, we obtain the independent absolute upper bounds t_* < 10^-13 s and E_* > 35 MeV. For α_0 = 1/2, the Lamb shift alone yields t_* < 10^-27 s, ℓ_* < 10^-19 m and E_* > 450 GeV.

  9. A note on the WGC, effective field theory and clockwork within string theory

    NASA Astrophysics Data System (ADS)

    Ibáñez, Luis E.; Montero, Miguel

    2018-02-01

    It has been recently argued that Higgsing of theories with U(1)^n gauge interactions consistent with the Weak Gravity Conjecture (WGC) may lead to effective field theories parametrically violating WGC constraints. The minimal examples typically involve Higgs scalars with a large charge with respect to a U(1) (e.g. charges (Z, 1) in U(1)^2 with Z ≫ 1). This type of Higgs multiplets also plays a key role in clockwork U(1) theories. We study these issues in the context of heterotic string theory and find that, even if there is no new physics at the standard magnetic WGC scale Λ ~ g_IR M_P, the string scale is just slightly above, at a scale ~ √(k_IR) Λ. Here k_IR is the level of the IR U(1) worldsheet current. We show that, unlike the standard magnetic cutoff, this bound is insensitive to subsequent Higgsing. One may argue that this constraint gives rise to no bound at the effective field theory level since k_IR is model dependent and in general unknown. However, there is an additional constraint to be taken into account, which is that the Higgsing scalars with large charge Z should be part of the string massless spectrum; this becomes an upper bound k_IR ≤ k_0^2, where k_0 is the level of the UV currents. Thus, for fixed k_0, Z cannot be made parametrically large. The upper bound on the charges Z leads to limitations on the size and structure of hierarchies in an iterated U(1) clockwork mechanism.

  10. THE COOL ACCRETION DISK IN ESO 243-49 HLX-1: FURTHER EVIDENCE OF AN INTERMEDIATE-MASS BLACK HOLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Shane W.; Narayan, Ramesh; Zhu Yucong

    2011-06-20

    With an inferred bolometric luminosity exceeding 10^42 erg s^-1, HLX-1 in ESO 243-49 is the most luminous of ultraluminous X-ray sources and provides one of the strongest cases for the existence of intermediate-mass black holes. We obtain good fits to disk-dominated observations of the source with BHSPEC, a fully relativistic black hole accretion disk spectral model. Due to degeneracies in the model arising from the lack of independent constraints on inclination and black hole spin, there is a factor of 100 uncertainty in the best-fit black hole mass M. Nevertheless, spectral fitting of XMM-Newton observations provides robust lower and upper limits with 3000 M_sun ≲ M ≲ 3 × 10^5 M_sun, at 90% confidence, placing HLX-1 firmly in the intermediate-mass regime. The lower bound on M is entirely determined by matching the shape and peak energy of the thermal component in the spectrum. This bound is consistent with (but independent of) arguments based solely on the Eddington limit. Joint spectral modeling of the XMM-Newton data with more luminous Swift and Chandra observations increases the lower bound to 6000 M_sun, but this tighter constraint is not independent of the Eddington limit. The upper bound on M is sensitive to the maximum allowed inclination i, and is reduced to M ≲ 10^5 M_sun if we limit i ≲ 75°.

  11. The Limb Infrared Monitor of the Stratosphere (LIMS) experiment

    NASA Technical Reports Server (NTRS)

    Russell, J. M.; Gille, J. C.

    1978-01-01

    The Limb Infrared Monitor of the Stratosphere is used to obtain vertical profiles and maps of temperature and the concentration of ozone, water vapor, nitrogen dioxide, and nitric acid for the region of the stratosphere bounded by the upper troposphere and the lower mesosphere.

  12. RIEMANNIAN MANIFOLDS ADMITTING A CONFORMAL TRANSFORMATION GROUP

    PubMed Central

    Yano, Kentaro

    1969-01-01

    Let M be a Riemannian manifold with constant scalar curvature K which admits an infinitesimal conformal transformation. A necessary and sufficient condition in order that it be isometric with a sphere is obtained. Inequalities giving upper and lower bounds for K are also derived. PMID:16578692

  13. An Upper Bound for Population Exposure Variability (SOT)

    EPA Science Inventory

    Tools for the rapid assessment of exposure potential are needed in order to put the results of rapidly-applied tools for assessing biological activity, such as ToxCast® and other high throughput methodologies, into a quantitative exposure context. The ExpoCast models (Wambaugh et...

  14. Aggregating quantum repeaters for the quantum internet

    NASA Astrophysics Data System (ADS)

    Azuma, Koji; Kato, Go

    2017-09-01

    The quantum internet holds promise for accomplishing quantum teleportation and unconditionally secure communication freely between arbitrary clients all over the globe, as well as the simulation of quantum many-body systems. For such a quantum internet protocol, a general fundamental upper bound on the obtainable entanglement or secret key has been derived [K. Azuma, A. Mizutani, and H.-K. Lo, Nat. Commun. 7, 13523 (2016), 10.1038/ncomms13523]. Here we consider its converse problem. In particular, we present a universal protocol constructible from any given quantum network, which is based on running quantum repeater schemes in parallel over the network. For arbitrary lossy optical channel networks, our protocol has no scaling gap with the upper bound, even based on existing quantum repeater schemes. In an asymptotic limit, our protocol works as an optimal entanglement or secret-key distribution over any quantum network composed of practical channels such as erasure channels, dephasing channels, bosonic quantum amplifier channels, and lossy optical channels.

  15. Ferromagnetic Potts models with multisite interaction

    NASA Astrophysics Data System (ADS)

    Schreiber, Nir; Cohen, Reuven; Haber, Simi

    2018-03-01

    We study the q-state Potts model with four-site interaction on a square lattice. Based on the asymptotic behavior of lattice animals, it is argued that when q ≤ 4 the system exhibits a second-order phase transition and when q > 4 the transition is first order. The q = 4 model is borderline. We find 1/ln q to be an upper bound on T_c, the exact critical temperature. Using a low-temperature expansion, we show that 1/(θ ln q), where θ > 1 is a q-dependent geometrical term, is an improved upper bound on T_c. In fact, our findings support T_c = 1/(θ ln q). This expression is used to estimate the finite correlation length in first-order transition systems. These results can be extended to other lattices. Our theoretical predictions are confirmed numerically by an extensive study of the four-site interaction model using the Wang-Landau entropic sampling method for q = 3, 4, 5. In particular, the q = 4 model shows an ambiguous finite-size pseudocritical behavior.

  16. Extremal values on Zagreb indices of trees with given distance k-domination number.

    PubMed

    Pei, Lidan; Pan, Xiangfeng

    2018-01-01

    Let [Formula: see text] be a graph. A set [Formula: see text] is a distance k-dominating set of G if for every vertex [Formula: see text], [Formula: see text] for some vertex [Formula: see text], where k is a positive integer. The distance k-domination number [Formula: see text] of G is the minimum cardinality among all distance k-dominating sets of G. The first Zagreb index of G is defined as [Formula: see text] and the second Zagreb index of G is [Formula: see text]. In this paper, we obtain the upper bounds for the Zagreb indices of n-vertex trees with given distance k-domination number and characterize the extremal trees, which generalize the results of Borovićanin and Furtula (Appl. Math. Comput. 276:208-218, 2016). It is worth mentioning that, for an n-vertex tree T, a sharp upper bound on the distance k-domination number [Formula: see text] is also determined.
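
    Assuming the standard definitions behind the elided formulas (first Zagreb index: sum of squared vertex degrees; second Zagreb index: sum of degree products over edges), both indices can be computed for a tree given as an edge list, as in the short sketch below.

        # First and second Zagreb indices of a graph given as an edge list, using the
        # standard definitions: M1 = sum of squared vertex degrees, M2 = sum over edges
        # of the product of endpoint degrees (assumed here; the record elides them).
        from collections import Counter

        def zagreb_indices(edges):
            deg = Counter()
            for u, v in edges:
                deg[u] += 1
                deg[v] += 1
            m1 = sum(d * d for d in deg.values())
            m2 = sum(deg[u] * deg[v] for u, v in edges)
            return m1, m2

        # Example: a small tree on 7 vertices (illustrative only).
        tree_edges = [(0, 1), (1, 2), (1, 3), (3, 4), (3, 5), (5, 6)]
        print(zagreb_indices(tree_edges))   # -> (26, 26)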

  17. Pinning down inelastic dark matter in the Sun and in direct detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blennow, Mattias; Clementz, Stefan; Herrero-Garcia, Juan, E-mail: emb@kth.se, E-mail: scl@kth.se, E-mail: juhg@kth.se

    2016-04-01

    We study the solar capture rate of inelastic dark matter with endothermic and/or exothermic interactions. By assuming that an inelastic dark matter signal will be observed in next generation direct detection experiments we can set a lower bound on the capture rate that is independent of the local dark matter density, the velocity distribution, the galactic escape velocity as well as the scattering cross section. In combination with upper limits from neutrino observatories we can place upper bounds on the annihilation channels leading to neutrinos. We find that, while endothermic scattering limits are weak in the isospin-conserving case, strong bounds may be set for exothermic interactions, in particular in the spin-dependent case. Furthermore, we study the implications of observing two direct detection signals, in which case one can halo-independently obtain the dark matter mass and the mass splitting, and disentangle the endothermic/exothermic nature of the scattering. Finally we discuss isospin violation.

  18. Diffusion Influenced Adsorption Kinetics.

    PubMed

    Miura, Toshiaki; Seki, Kazuhiko

    2015-08-27

    When the kinetics of adsorption is influenced by the diffusive flow of solutes, the solute concentration at the surface is influenced by the surface coverage of solutes, which is given by the Langmuir-Hinshelwood adsorption equation. The diffusion equation with the boundary condition given by the Langmuir-Hinshelwood adsorption equation leads to the nonlinear integro-differential equation for the surface coverage. In this paper, we solved the nonlinear integro-differential equation using the Grünwald-Letnikov formula developed to solve fractional kinetics. Guided by the numerical results, analytical expressions for the upper and lower bounds of the exact numerical results were obtained. The upper and lower bounds were close to the exact numerical results in the diffusion- and reaction-controlled limits, respectively. We examined the validity of the two simple analytical expressions obtained in the diffusion-controlled limit. The results were generalized to include the effect of dispersive diffusion. We also investigated the effect of molecular rearrangement of anisotropic molecules on surface coverage.
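
    The Grünwald-Letnikov discretization mentioned here approximates a fractional derivative of order α by a weighted sum over the function's history, with weights generated by a simple recursion. The sketch below shows that recursion and checks it against the known fractional derivative of f(t) = t; it illustrates the formula only, not the paper's coupled adsorption solver.

        # Grünwald-Letnikov approximation of the fractional derivative of order alpha:
        #   D^alpha f(t_n) ~= h**(-alpha) * sum_{k=0}^{n} w_k * f(t_{n-k}),
        # with w_0 = 1 and w_k = w_{k-1} * (1 - (alpha + 1)/k), i.e. (-1)^k * C(alpha, k).
        # Checked against the exact result D^alpha t = t**(1-alpha) / Gamma(2-alpha).
        import math

        def gl_weights(alpha, n):
            w = [1.0]
            for k in range(1, n + 1):
                w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
            return w

        def gl_derivative(f, alpha, t, h):
            n = int(round(t / h))
            w = gl_weights(alpha, n)
            return sum(w[k] * f(t - k * h) for k in range(n + 1)) / h**alpha

        alpha, t, h = 0.5, 1.0, 1e-3
        approx = gl_derivative(lambda s: s, alpha, t, h)
        exact = t**(1.0 - alpha) / math.gamma(2.0 - alpha)
        print(f"GL approximation {approx:.6f} vs exact {exact:.6f}")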

  19. Non-linear collisional Penrose process: How much energy can a black hole release?

    NASA Astrophysics Data System (ADS)

    Nakao, Ken-ichi; Okawa, Hirotada; Maeda, Kei-ichi

    2018-01-01

    Energy extraction from a rotating or charged black hole is one of the fascinating issues in general relativity. The collisional Penrose process is one such extraction mechanism and has been reconsidered intensively since Bañados, Silk, and West pointed out the physical importance of very high energy collisions around a maximally rotating black hole. In order to get results analytically, the test particle approximation has been adopted so far. Successive works based on this approximation scheme have not yet revealed the upper bound on the efficiency of the energy extraction because of the lack of backreaction. In the Reissner-Nordström spacetime, by fully taking into account the self-gravity of the shells, we find that there is an upper bound on the extracted energy that is consistent with the area law of a black hole. We also show one particular scenario in which almost the maximum energy extraction is achieved even without the Bañados-Silk-West collision.

  20. Optimal Coordinated EV Charging with Reactive Power Support in Constrained Distribution Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paudyal, Sumit; Ceylan, Oğuzhan; Bhattarai, Bishnu P.

    Electric vehicle (EV) charging/discharging can take place in any P-Q quadrant, which means EVs could supply reactive power to the grid while charging the battery. In controlled charging schemes, the distribution system operator (DSO) coordinates the charging of EV fleets to ensure the grid's operating constraints are not violated. In practice, this amounts to the DSO setting upper bounds on the power limits for EV charging. In this work, we demonstrate that if EVs inject reactive power into the grid while charging, the DSO can issue higher upper bounds on the active power limits for the EVs under the same set of grid constraints. We demonstrate the concept on a 33-node test feeder with 1,500 EVs. Case studies show that, in constrained distribution grids with coordinated charging, the average cost of EV charging can be reduced if the charging takes place in the fourth P-Q quadrant compared to charging at unity power factor.
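
    The headroom effect described here can be illustrated with a one-line, LinDistFlow-style linearized voltage model: on a voltage-constrained feeder, reactive injection at the charger reduces the voltage drop and therefore raises the active-power limit the DSO can issue, until the charger's apparent-power rating binds instead. The feeder and charger numbers below are hypothetical, and this single-node sketch is not the paper's 33-node optimization.

        # Single-line, LinDistFlow-style illustration of why reactive-power support can
        # raise the DSO's active-power limit on EV charging.  Squared-voltage drop:
        #   v_node ~= v_sub - 2*(r*P + x*Q),  with P, Q the net load at the node
        # (Q < 0 means the charger injects reactive power).  All numbers hypothetical.
        import math

        v_sub = 1.0**2        # substation squared voltage (p.u.)
        v_min = 0.95**2       # lower voltage limit (squared, p.u.)
        r, x = 0.06, 0.04     # aggregate feeder resistance/reactance (p.u.), weak feeder
        s_rating = 1.2        # charger apparent-power rating (p.u.)

        def p_limit(q):
            """Largest P respecting the voltage limit and the charger rating for a given Q."""
            p_voltage = (v_sub - v_min - 2.0 * x * q) / (2.0 * r)   # from v_node >= v_min
            p_rating = math.sqrt(max(s_rating**2 - q**2, 0.0))      # from P^2 + Q^2 <= S^2
            return min(p_voltage, p_rating)

        for q in (0.0, -0.2, -0.4):   # 0 = unity power factor; negative = injecting vars
            print(f"Q = {q:+.1f} p.u. -> P limit = {p_limit(q):.3f} p.u.")
        # The limit rises as the charger injects reactive power, until the kVA rating binds.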
