Sample records for minimum variance bound

  1. CMB bispectrum, trispectrum, non-Gaussianity, and the Cramer-Rao bound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamionkowski, Marc; Smith, Tristan L.; Heavens, Alan

    Minimum-variance estimators for the parameter f_nl that quantifies local-model non-Gaussianity can be constructed from the cosmic microwave background (CMB) bispectrum (three-point function) and also from the trispectrum (four-point function). Some have suggested that a comparison between the estimates for the values of f_nl from the bispectrum and trispectrum allows a consistency test for the model. But others argue that the saturation of the Cramer-Rao bound--which gives a lower limit to the variance of an estimator--by the bispectrum estimator implies that no further information on f_nl can be obtained from the trispectrum. Here, we elaborate the nature of the correlation between the bispectrum and trispectrum estimators for f_nl. We show that the two estimators become statistically independent in the limit of a large number of CMB pixels, and thus that the trispectrum estimator does indeed provide additional information on f_nl beyond that obtained from the bispectrum. We explain how this conclusion is consistent with the Cramer-Rao bound. Our discussion of the Cramer-Rao bound may be of interest to those doing Fisher-matrix parameter-estimation forecasts or data analysis in other areas of physics as well.
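
    The Cramer-Rao logic invoked here is easy to check numerically. Below is a minimal, illustrative sketch (a Gaussian-mean toy problem, not the paper's f_nl estimator):

```python
# Illustrative sketch only: Monte Carlo check that an unbiased estimator's
# variance cannot fall below 1/I(mu), the Cramer-Rao bound, where the Fisher
# information for the mean of n Gaussian samples is I(mu) = n / sigma^2.
import numpy as np

rng = np.random.default_rng(0)
n, sigma, mu, trials = 50, 2.0, 1.0, 20_000

# The sample mean is the minimum-variance unbiased estimator in this toy model.
estimates = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
crb = sigma**2 / n
print(f"empirical variance: {estimates.var(ddof=1):.5f}  CRB: {crb:.5f}")
```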

  2. Logistic quantile regression provides improved estimates for bounded avian counts: A case study of California Spotted Owl fledgling production

    USGS Publications Warehouse

    Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of the variance in the fledgling counts as climate, parent age class, and landscape habitat predictors. Our logistic quantile regression model can be used for any discrete response variables with fixed upper and lower bounds.
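
    The three-step procedure described above lends itself to a compact sketch. The following illustration assumes bounds of 0-3 (as in the owl data) with a simulated covariate; the model and variable names are hypothetical, not the study's:

```python
# A minimal sketch of the jitter / logit-transform / linear-quantile-regression
# procedure on simulated bounded counts; not the study's fitted model.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(1)
lower, upper, n = 0, 3, 400
x = rng.normal(size=n)                                  # hypothetical covariate
y = np.clip(rng.poisson(np.exp(0.2 + 0.4 * x)), lower, upper)  # bounded counts
X = sm.add_constant(x)

def fit_logistic_quantreg(y, X, tau, reps=20):
    """Jitter counts to continuous, logit-transform to the bounds, fit linear
    quantile regression, and average coefficients over jitter repetitions."""
    coefs = []
    for _ in range(reps):
        u = rng.uniform(size=len(y))                    # step 1: jitter
        t = (y + u - lower) / (upper + 1 - lower)
        t = np.clip(t, 1e-9, 1 - 1e-9)
        z = np.log(t / (1 - t))                         # step 2: logit
        coefs.append(QuantReg(z, X).fit(q=tau).params)  # step 3: linear QR
    return np.mean(coefs, axis=0)

beta = fit_logistic_quantreg(y, X, tau=0.5)
# Back-transform to the bounded scale (quantiles are equivariant to
# monotonic transformations).
q_hat = lower + (upper + 1 - lower) / (1 + np.exp(-(X @ beta)))
print(beta, q_hat[:5])
```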

  3. Predicting minimum uncertainties in the inversion of ocean color geophysical parameters based on Cramer-Rao bounds.

    PubMed

    Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique

    2018-01-22

    We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can possibly be attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, respectively, and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, respectively. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring any inversion to be performed. CRBs are also used to investigate to what extent perfect a priori knowledge of one or several geophysical parameters can improve the estimation of the remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool for characterizing minimum uncertainties in inverted ocean color geophysical parameters.
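
    The computational pattern behind such CRB calculations is short for a generic model. The sketch below uses a toy two-parameter exponential "reflectance" model with i.i.d. Gaussian noise; the paper's bio-optical and noise models are more elaborate:

```python
# Generic CRB sketch: Fisher matrix F = J^T J / sigma^2 from the Jacobian of a
# toy forward model r(lam; a, b) = a * exp(-b * lam); CRB = inverse of F.
import numpy as np

wavelengths = np.linspace(0.4, 0.7, 30)     # hypothetical sensor bands (um)
a, b, sigma = 0.05, 2.0, 0.002              # true parameters and noise SD

J = np.column_stack([np.exp(-b * wavelengths),
                     -a * wavelengths * np.exp(-b * wavelengths)])
fisher = J.T @ J / sigma**2                 # Fisher information matrix
crb = np.linalg.inv(fisher)                 # lower bound on estimator covariance
print("minimum SDs of (a, b):", np.sqrt(np.diag(crb)))

# "Perfect a priori knowledge" of one parameter amounts to deleting its
# row/column of F before inversion, which can only shrink the remaining bound.
print("SD of b with a known:", np.sqrt(1.0 / fisher[1, 1]))
```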

  4. Estimation variance bounds of importance sampling simulations in digital communication systems

    NASA Technical Reports Server (NTRS)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
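
    The role of the IS parameter is easy to see in a toy sketch: below, a Gaussian tail probability (standing in for a bit-error-rate integral) is estimated directly and with a mean-shifted IS density, where the shift location is the "IS parameter" being tuned:

```python
# Illustrative comparison of direct Monte Carlo and importance sampling for a
# rare-event probability p = P(X > t), X ~ N(0,1); not the paper's bounds.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
t, n = 4.0, 100_000
p_true = stats.norm.sf(t)

x = rng.normal(size=n)                             # direct Monte Carlo
direct = (x > t).astype(float)

y = rng.normal(loc=t, size=n)                      # IS: shift sampling mean to t
w = stats.norm.pdf(y) / stats.norm.pdf(y, loc=t)   # likelihood-ratio weights
weighted = (y > t) * w

print(f"true     {p_true:.3e}")
print(f"direct   mean {direct.mean():.3e}  est. var {direct.var(ddof=1)/n:.3e}")
print(f"IS       mean {weighted.mean():.3e}  est. var {weighted.var(ddof=1)/n:.3e}")
# The IS estimator's variance is orders of magnitude smaller at the same n.
```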

  5. Are stock prices too volatile to be justified by the dividend discount model?

    NASA Astrophysics Data System (ADS)

    Akdeniz, Levent; Salih, Aslıhan Altay; Ok, Süleyman Tuluğ

    2007-03-01

    This study investigates excess stock price volatility using the variance bound framework of LeRoy and Porter [The present-value relation: tests based on implied variance bounds, Econometrica 49 (1981) 555-574] and of Shiller [Do stock prices move too much to be justified by subsequent changes in dividends? Am. Econ. Rev. 71 (1981) 421-436]. The conditional variance bound relationship is examined using cross-sectional data simulated from the general equilibrium asset pricing model of Brock [Asset prices in a production economy, in: J.J. McCall (Ed.), The Economics of Information and Uncertainty, University of Chicago Press, Chicago (for N.B.E.R.), 1982]. Results show that the conditional variance bounds hold; hence, our hypothesis of the validity of the dividend discount model cannot be rejected. Moreover, in our setting, markets are efficient and stock prices are neither affected by herd psychology nor by the outcome of noise trading by naive investors; thus, we are able to control for market efficiency. Consequently, we show that one cannot infer any conclusions about market efficiency from the unconditional variance bounds tests.

  6. Minimum-error quantum distinguishability bounds from matrix monotone functions: A comment on 'Two-sided estimates of minimum-error distinguishability of mixed quantum states via generalized Holevo-Curlander bounds' [J. Math. Phys. 50, 032106 (2009)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyson, Jon

    2009-06-15

    Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.

  7. Estimating contaminant loads in rivers: An application of adjusted maximum likelihood to type 1 censored data

    USGS Publications Warehouse

    Cohn, Timothy A.

    2005-01-01

    This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored‐data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE the AMLE comes close to achieving the theoretical Frechet‐Cramér‐Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real‐time water quality monitoring.
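
    For concreteness, a plain censored-data MLE (the Tobit-style baseline that the AMLE adjusts, not the AMLE itself) can be sketched in a few lines; the lognormal model and detection limit below are illustrative:

```python
# Sketch of type 1 censored maximum likelihood: concentrations below a
# detection limit contribute the log-CDF at the limit instead of the log-PDF.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
mu, sigma, dl = 0.0, 1.0, 0.8                    # true log-scale params, detection limit
conc = rng.lognormal(mu, sigma, size=300)
observed = np.where(conc >= dl, conc, np.nan)    # censored values known only as "< dl"
cens = np.isnan(observed)

def neg_loglik(theta):
    m, s = theta[0], np.exp(theta[1])            # parametrize s > 0
    ll_obs = stats.norm.logpdf(np.log(observed[~cens]), m, s).sum()
    ll_cen = cens.sum() * stats.norm.logcdf((np.log(dl) - m) / s)
    return -(ll_obs + ll_cen)

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("MLE of (mu, sigma):", res.x[0], np.exp(res.x[1]))
```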

  8. Reliability Stress-Strength Models for Dependent Observations with Applications in Clinical Trials

    NASA Technical Reports Server (NTRS)

    Kushary, Debashis; Kulkarni, Pandurang M.

    1995-01-01

    We consider the applications of stress-strength models in studies involving clinical trials. When studying the effects and side effects of certain procedures (treatments), it is often the case that observations are correlated due to subject effect, repeated measurements and observing many characteristics simultaneously. We develop maximum likelihood estimator (MLE) and uniform minimum variance unbiased estimator (UMVUE) of the reliability which in clinical trial studies could be considered as the chances of increased side effects due to a particular procedure compared to another. The results developed apply to both univariate and multivariate situations. Also, for the univariate situations we develop simple to use lower confidence bounds for the reliability. Further, we consider the cases when both stress and strength constitute time dependent processes. We define the future reliability and obtain methods of constructing lower confidence bounds for this reliability. Finally, we conduct simulation studies to evaluate all the procedures developed and also to compare the MLE and the UMVUE.

  9. Standard Deviation for Small Samples

    ERIC Educational Resources Information Center

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
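
    One identity of this kind, exact for n = 3 (the article's own representations may differ in form): the sample variance is the sum of the three squared pairwise differences divided by 6, so small integer data can be handled without a calculator. A quick check:

```python
# Verify: for observations a, b, c, the sample variance (n-1 denominator)
# equals ((a-b)^2 + (b-c)^2 + (c-a)^2) / 6.
import statistics

a, b, c = 1, 2, 4
s2_pairwise = ((a - b)**2 + (b - c)**2 + (c - a)**2) / 6
assert abs(s2_pairwise - statistics.variance([a, b, c])) < 1e-12
print(s2_pairwise)   # 7/3 = 2.333...
```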

  10. 75 FR 40797 - Upper Peninsula Power Company; Notice of Application for Temporary Amendment of License and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-14

    ... for drought-based temporary variance of the reservoir elevations and minimum flow releases at the Dead... temporary variance to the reservoir elevation and minimum flow requirements at the Hoist Development. The...: (1) Releasing a minimum flow of 75 cubic feet per second (cfs) from the Hoist Reservoir, instead of...

  11. The wave function and minimum uncertainty function of the bound quadratic Hamiltonian system

    NASA Technical Reports Server (NTRS)

    Yeon, Kyu Hwang; Um, Chung In; George, T. F.

    1994-01-01

    The bound quadratic Hamiltonian system is analyzed explicitly on the basis of quantum mechanics. We derive the invariant quantity with an auxiliary equation as the classical equation of motion. With the use of this invariant, it can be determined whether or not the system is bound. For the bound system, we evaluate the exact eigenfunction and minimum uncertainty function through a unitary transformation.

  12. The Variance of Solar Wind Magnetic Fluctuations: Solutions and Further Puzzles

    NASA Technical Reports Server (NTRS)

    Roberts, D. A.; Goldstein, M. L.

    2006-01-01

    We study the dependence of the variance directions of the magnetic field in the solar wind as a function of scale, radial distance, and Alfvenicity. The study resolves the question of why different studies have arrived at widely differing values for the maximum to minimum power (from approximately 3:1 up to approximately 20:1). This is due to the decreasing anisotropy with increasing time interval chosen for the variance, and is a direct result of the "spherical polarization" of the waves which follows from the near constancy of |B|. The reason for the magnitude-preserving evolution is still unresolved. Moreover, while the long-known tendency for the minimum variance to lie along the mean field also follows from this view (as shown by Barnes many years ago), there is no theory for why the minimum variance follows the field direction as the Parker angle changes. We show that this turning is quite generally true in Alfvenic regions over a wide range of heliocentric distances. The fact that non-Alfvenic regions, while still showing strong power anisotropies, tend to have a much broader range of angles between the minimum variance and the mean field makes it unlikely that the cause of the variance turning is to be found in a turbulence mechanism. There are no obvious alternative mechanisms, leaving us with another intriguing puzzle.
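
    The minimum variance analysis underlying such studies is a small eigenproblem: the variance directions are the eigenvectors of the field covariance matrix, and the minimum variance direction belongs to the smallest eigenvalue. A sketch on synthetic field data (illustrative, not spacecraft measurements):

```python
# Minimum variance analysis (MVA) sketch on a synthetic magnetic field time
# series with large fluctuations in x/y and small fluctuations along z.
import numpy as np

rng = np.random.default_rng(4)
B = rng.normal(size=(1000, 3)) * np.array([3.0, 2.0, 0.3])

cov = np.cov(B, rowvar=False)              # 3x3 covariance of field components
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
min_var_dir = eigvecs[:, 0]                # direction of minimum variance
print("minimum variance direction:", np.round(min_var_dir, 3))
print("max/min power ratio:", round(eigvals[2] / eigvals[0], 1))
```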

  13. Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.

    PubMed

    Bouhrara, Mustapha; Spencer, Richard G

    2018-06-01

    The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  14. Minimum variance geographic sampling

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.

  15. A Model-Free No-arbitrage Price Bound for Variance Options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonnans, J. Frederic, E-mail: frederic.bonnans@inria.fr; Tan Xiaolu, E-mail: xiaolu.tan@polytechnique.edu

    2013-08-01

    We suggest a numerical approximation for an optimization problem, motivated by its applications in finance to find the model-free no-arbitrage bound of variance options given the marginal distributions of the underlying asset. A first approximation restricts the computation to a bounded domain. Then we propose a gradient projection algorithm together with the finite difference scheme to solve the optimization problem. We prove the general convergence, and derive some convergence rate estimates. Finally, we give some numerical examples to test the efficiency of the algorithm.

  16. Automatic quantification of mammary glands on non-contrast x-ray CT by using a novel segmentation approach

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Kano, Takuya; Cai, Yunliang; Li, Shuo; Zhou, Xinxin; Hara, Takeshi; Yokoyama, Ryujiro; Fujita, Hiroshi

    2016-03-01

    This paper describes a new automatic segmentation method for quantifying the volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps: (1) breast region localization, and (2) breast region decomposition, to accomplish a robust mammary gland segmentation task on CT images. The first step detects the two minimum bounding boxes of the left and right breast regions, respectively, based on a machine-learning approach that adapts to the large variance of breast appearances across age levels. The second step divides the whole breast region on each side into mammary gland, fat tissue, and other regions by using a spectral clustering technique that focuses on intra-region similarities of each patient and aims to overcome the image variance caused by different scan parameters. The whole approach is designed as a simple structure with a minimal number of parameters to gain superior robustness and computational efficiency in a real clinical setting. We applied this approach to a dataset of 300 CT scans, sampled in equal numbers from women aged 30 to 50 years. Compared with human annotations, the proposed approach can successfully measure the volume and quantify the distributions of the CT numbers of mammary gland regions. The experimental results demonstrated that the proposed approach achieves results consistent with manual annotations. Through our proposed framework, an efficient and effective low-cost clinical screening scheme may be easily implemented to predict breast cancer risk, especially from already acquired scans.

  17. Risk aversion and risk seeking in multicriteria forest management: a Markov decision process approach

    Treesearch

    Joseph Buongiorno; Mo Zhou; Craig Johnston

    2017-01-01

    Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value.  The other method used the certainty...

  18. Metastable Features of Economic Networks and Responses to Exogenous Shocks

    PubMed Central

    Hosseiny, Ali; Bahrami, Mohammad; Palestrini, Antonio; Gallegati, Mauro

    2016-01-01

    It is well known that network structure plays an important role in addressing collective behavior. In this paper we study a network of firms and corporations to address metastable features in an Ising-based model. In our model we observe that if, in a recession, the government imposes a demand shock to stimulate the network, metastable features shape its response. We find that there exists a minimum bound: any demand shock with a size below it is unable to pull the market out of recession. We then investigate the impact of network characteristics on this minimum bound. Surprisingly, we observe that in a Watts-Strogatz network, although the minimum bound depends on the average of the degrees, when translated into the language of economics, such a bound is independent of the average degrees. This bound is about 0.44ΔGDP, where ΔGDP is the gap of GDP between recession and expansion. We examine our suggestions for the cases of the United States and the European Union in the recent recession, and compare them with the imposed stimulations. While the stimulation in the US has been above our threshold, in the EU it has been far below our threshold. Besides providing a minimum bound for a successful stimulation, our study of the metastable features suggests that in a time of crisis there is a "golden time passage" in which the minimum bound for successful stimulation can be much lower. Hence, our study strongly suggests that stimulations arise within this time passage. PMID:27706166

  19. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investments. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal portfolio composition differs across the component stocks. Moreover, investors can attain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
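
    A compact sketch of the mean-variance optimization just described, using simulated returns in place of the FBMKLCI data:

```python
# Minimize portfolio variance w' Sigma w subject to a target mean return and
# fully invested, long-only weights (simulated data, 8 hypothetical stocks).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
returns = rng.normal(0.002, 0.02, size=(260, 8))   # 5 years of weekly returns
mu, Sigma = returns.mean(axis=0), np.cov(returns, rowvar=False)
target = np.quantile(mu, 0.75)                     # a feasible target return

n = len(mu)
res = minimize(
    lambda w: w @ Sigma @ w,                       # objective: portfolio variance
    x0=np.full(n, 1 / n),
    bounds=[(0, 1)] * n,                           # long-only
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1},
                 {"type": "eq", "fun": lambda w: w @ mu - target}],
)
print("optimal weights:", np.round(res.x, 3))
print("portfolio risk (SD):", np.sqrt(res.x @ Sigma @ res.x))
```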

  20. A linear programming approach to characterizing norm bounded uncertainty from experimental data

    NASA Technical Reports Server (NTRS)

    Scheid, R. E.; Bayard, D. S.; Yam, Y.

    1991-01-01

    The linear programming spectral overbounding and factorization (LPSOF) algorithm, an algorithm for finding a minimum phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency, is introduced. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into parametric representations required for modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).

  1. Noise and analyzer-crystal angular position analysis for analyzer-based phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.

    2014-04-01

    The analyzer-based phase-contrast x-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile of the x-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), the angular positions of the measurements, object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this paper is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. Angular measurements beyond these only spread the total dose across the measurements without improving or worsening the CRLB, but the added measurements may improve parametric images by reducing estimation bias. Next, using the CRLB we evaluate the multiple-image radiography, diffraction enhanced imaging and scatter diffraction enhanced imaging estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique.

  2. Noise and Analyzer-Crystal Angular Position Analysis for Analyzer-Based Phase-Contrast Imaging

    PubMed Central

    Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.

    2014-01-01

    The analyzer-based phase-contrast X-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile (AIP) of the X-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), the angular positions of the measurements, object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this manuscript is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. Angular measurements beyond these only spread the total dose across the measurements without improving or worsening the CRLB, but the added measurements may improve parametric images by reducing estimation bias. Next, using the CRLB we evaluate the Multiple-Image Radiography (MIR), Diffraction Enhanced Imaging (DEI) and Scatter Diffraction Enhanced Imaging (S-DEI) estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique. PMID:24651402

  3. Variance-Stable R-Estimators.

    DTIC Science & Technology

    1984-05-01

    By means of the concept of the change-of-variance function we investigate the stability properties of the asymptotic variance of R-estimators. This allows us to construct the optimal V-robust R-estimator that minimizes the asymptotic variance at the model, under the side condition of a bounded change-of-variance function. Finally, we discuss the connection between this function and an influence function for two-sample rank tests introduced by Eplett (1980). (Author)

  4. On bound-states of the Gross Neveu model with massive fundamental fermions

    NASA Astrophysics Data System (ADS)

    Frishman, Yitzhak; Sonnenschein, Jacob

    2018-01-01

    In the search for QFTs that admit bound states, we reinvestigate the two-dimensional Gross-Neveu model, but with massive fermions. By computing the self-energy for the auxiliary bound-state field and the effective potential, we show that there are no bound states around the lowest minimum, but there is a metastable bound state around the other, local minimum. The latter decays by tunneling. We determine the dependence of its lifetime on the fermion mass and coupling constant.

  5. Bounds of cavitation inception in a creeping flow between eccentric cylinders rotating with a small minimum gap

    NASA Astrophysics Data System (ADS)

    Monakhov, A. A.; Chernyavski, V. M.; Shtemler, Yu.

    2013-09-01

    Bounds of cavitation inception are experimentally determined in a creeping flow between eccentric cylinders, the inner one being static and the outer rotating at a constant angular velocity, Ω. The geometric configuration is additionally specified by a small minimum gap between cylinders, H, as compared with the radii of the inner and outer cylinders. For some values of H and Ω, cavitation bubbles are observed, which are collected on the surface of the inner cylinder and equally distributed over the line parallel to its axis near the downstream minimum gap position. Cavitation occurs for the parameters {H,Ω} within a region bounded on the right by the cavitation inception curve that passes through the plane origin and cannot exceed the asymptotic threshold value of the minimum gap, Ha, in whose vicinity cavitation may occur at H < Ha only for high angular rotation velocities.

  6. 76 FR 1145 - Alabama Power Company; Notice of Application for Amendment of License and Soliciting Comments...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-07

    ... drought-based temporary variance of the Martin Project rule curve and minimum flow releases at the Yates... requesting a drought- based temporary variance to the Martin Project rule curve. The rule curve variance...

  7. Dependence of the quantum speed limit on system size and control complexity

    NASA Astrophysics Data System (ADS)

    Lee, Juneseo; Arenz, Christian; Rabitz, Herschel; Russell, Benjamin

    2018-06-01

    We extend the work in 2017 New J. Phys. 19 103015 by deriving a lower bound for the minimum time necessary to implement a unitary transformation on a generic, closed quantum system with an arbitrary number of classical control fields. This bound is explicitly analyzed for a specific N-level system similar to those used to represent simple models of an atom, or the first excitation sector of a Heisenberg spin chain, both of which are of interest in quantum control for quantum computation. Specifically, it is shown that the resultant bound depends on the dimension of the system and on the number of controls used to implement a specific target unitary operation. The value of the bound, determined numerically, and an estimate of the true minimum gate time are systematically compared for a range of system dimensions and numbers of controls; special attention is drawn to the relationship between these two variables. It is seen that the bound captures the scaling of the minimum time well for the systems studied, and is quantitatively correct in order of magnitude.

  8. Impact of Damping Uncertainty on SEA Model Response Variance

    NASA Technical Reports Server (NTRS)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  9. How does variance in fertility change over the demographic transition?

    PubMed Central

    Hruschka, Daniel J.; Burger, Oskar

    2016-01-01

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45–49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. PMID:27022082

  10. Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.

    2013-08-01

    In this paper, we present an extrapolation approach that uses a minimum weighted-norm constraint and minimum variance spectrum estimation to improve synthetic aperture radar (SAR) resolution. The minimum variance method (MVM) is a robust high-resolution spectrum estimator. Based on the theory of SAR imaging, we analyze the signal model of SAR imagery and show that data extrapolation methods can be used to improve the resolution of SAR images. The method extrapolates the effective bandwidth in the phase-history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method, using both simulated data and actual measured data.
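
    The minimum variance spectrum estimator named here is the classical Capon estimator, P(f) = 1 / (a(f)^H R^{-1} a(f)); the weighted-norm extrapolation step of the paper is not shown. A compact sketch on a synthetic two-tone signal:

```python
# Capon / minimum variance spectrum estimation sketch: R is the sample
# covariance of sliding snapshots, a(f) the steering vector at frequency f.
import numpy as np

rng = np.random.default_rng(6)
N, m = 512, 24                                   # samples, snapshot length
t = np.arange(N)
x = (np.exp(2j * np.pi * 0.12 * t) + 0.7 * np.exp(2j * np.pi * 0.17 * t)
     + 0.5 * (rng.normal(size=N) + 1j * rng.normal(size=N)))

snaps = np.lib.stride_tricks.sliding_window_view(x, m)   # (N-m+1, m)
R = snaps.conj().T @ snaps / snaps.shape[0]              # sample covariance
Rinv = np.linalg.inv(R + 1e-6 * np.eye(m))               # diagonal loading

freqs = np.linspace(0, 0.5, 501)
A = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))   # steering vectors
P = 1.0 / np.real(np.sum(A.conj() * (Rinv @ A), axis=0)) # Capon pseudo-spectrum
print("strongest peak near f =", freqs[np.argmax(P)])    # ~0.12; plotting P shows both tones
```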

  11. Bounds on the performance of a class of digital communication systems

    NASA Technical Reports Server (NTRS)

    Polk, D. R.; Gupta, S. C.; Cohn, D. L.

    1973-01-01

    Bounds on the capacity of a class of digital communication channels are derived. Equating the bounds on capacity to rate-distortion functions of (typical) sources in turn produces bounds on the performance of a class of digital communication systems. For ratios of squared quantization level to noise variance much less than one, the power requirements for this class of digital communication systems are shown to be within approximately 3 dB of the theoretical optimum.

  12. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Kotob, S.

    1975-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

  13. Synthesis of correlation filters: a generalized space-domain approach for improved filter characteristics

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.

    1990-12-01

    Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted and is shown to compare favorably with the well known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.

  14. New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Rodemich, E. R.; Rumsey, H., Jr.; Welch, L. R.

    1977-01-01

    An upper bound on the rate of a binary code as a function of minimum code distance (using a Hamming code metric) is arrived at from Delsarte-MacWilliams inequalities. The upper bound so found is asymptotically less than Levenshtein's bound, and a fortiori less than Elias' bound. Appendices review properties of Krawtchouk polynomials and Q-polynomials utilized in the rigorous proofs.

  15. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
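
    The "nearly minimum variance" combination referred to here has a closed form worth recording: for correlated unbiased estimates of a common quantity with known covariance S, the optimal weights are w = S^{-1}1 / (1' S^{-1} 1). A small sketch with made-up numbers (not the SAM-CE/VIM results):

```python
# Minimum variance unbiased linear combination of correlated estimates;
# the ML approach approximates S with sample variances and correlations.
import numpy as np

x = np.array([1.002, 0.998, 1.005])      # hypothetical correlated eigenvalue estimates
S = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.2, 0.5],
              [0.3, 0.5, 0.9]]) * 1e-5   # their (assumed known) covariance

ones = np.ones(len(x))
w = np.linalg.solve(S, ones)
w /= ones @ w                            # weights sum to 1 (unbiasedness)
print("weights:", np.round(w, 3))
print("combined estimate:", w @ x)
print("variance:", 1.0 / (ones @ np.linalg.solve(S, ones)))
# This variance is never larger than that of the simple average or any single estimate.
```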

  16. Applications of active adaptive noise control to jet engines

    NASA Technical Reports Server (NTRS)

    Shoureshi, Rahmat; Brackney, Larry

    1993-01-01

    During phase 2 research on the application of active noise control to jet engines, the development of multiple-input/multiple-output (MIMO) active adaptive noise control algorithms and acoustic/controls models for turbofan engines were considered. Specific goals for this research phase included: (1) implementation of a MIMO adaptive minimum variance active noise controller; and (2) turbofan engine model development. A minimum variance control law for adaptive active noise control has been developed, simulated, and implemented for single-input/single-output (SISO) systems. Since acoustic systems tend to be distributed, multiple sensors, and actuators are more appropriate. As such, the SISO minimum variance controller was extended to the MIMO case. Simulation and experimental results are presented. A state-space model of a simplified gas turbine engine is developed using the bond graph technique. The model retains important system behavior, yet is of low enough order to be useful for controller design. Expansion of the model to include multiple stages and spools is also discussed.

  17. Large amplitude MHD waves upstream of the Jovian bow shock

    NASA Technical Reports Server (NTRS)

    Goldstein, M. L.; Smith, C. W.; Matthaeus, W. H.

    1983-01-01

    Observations of large amplitude magnetohydrodynamics (MHD) waves upstream of Jupiter's bow shock are analyzed. The waves are found to be right circularly polarized in the solar wind frame which suggests that they are propagating in the fast magnetosonic mode. A complete spectral and minimum variance eigenvalue analysis of the data was performed. The power spectrum of the magnetic fluctuations contains several peaks. The fluctuations at 2.3 mHz have a direction of minimum variance along the direction of the average magnetic field. The direction of minimum variance of these fluctuations lies at approximately 40 deg. to the magnetic field and is parallel to the radial direction. We argue that these fluctuations are waves excited by protons reflected off the Jovian bow shock. The inferred speed of the reflected protons is about two times the solar wind speed in the plasma rest frame. A linear instability analysis is presented which suggests an explanation for many of the observed features of the observations.

  18. Minimum Error Bounded Efficient L1 Tracker with Occlusion Detection (PREPRINT)

    DTIC Science & Technology

    2011-01-01

    Xue Mei, Haibin Ling, Yi Wu, Erik Blasch, Li Bai. The proposed BPR-L1 tracker is tested on several challenging benchmark sequences involving challenges such as occlusion and illumination changes. The cost of the interior-point method depends on the value of the regularization parameter λ; in the experiments, the total number of PCG iterations was found to be a few hundred.

  19. Modeling Multiplicative Error Variance: An Example Predicting Tree Diameter from Stump Dimensions in Baldcypress

    Treesearch

    Bernard R. Parresol

    1993-01-01

    In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
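
    The remedy described here is weighted least squares: if the error standard deviation grows as a power x^k of the regressor, weights of 1/x^(2k) restore minimum variance estimates. A brief sketch with simulated data (the model and k below are illustrative, not the baldcypress fit):

```python
# OLS vs WLS under multiplicative heteroscedasticity: SD of the error
# proportional to x (k = 1), so WLS weights are 1 / x^2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.uniform(10, 80, size=200)                 # e.g., stump diameter
y = 1.2 * x + x * rng.normal(0, 0.15, size=200)   # multiplicative error

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                          # unbiased but not minimum variance
wls = sm.WLS(y, X, weights=1.0 / x**2).fit()      # weights = 1 / x^(2k)
print("OLS slope SE:", ols.bse[1], " WLS slope SE:", wls.bse[1])
```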

  20. Noncommuting observables in quantum detection and estimation theory

    NASA Technical Reports Server (NTRS)

    Helstrom, C. W.

    1972-01-01

    Basing decisions and estimates on simultaneous approximate measurements of noncommuting observables in a quantum receiver is shown to be equivalent to measuring commuting projection operators on a larger Hilbert space than that of the receiver itself. The quantum-mechanical Cramer-Rao inequalities derived from right logarithmic derivatives and symmetrized logarithmic derivatives of the density operator are compared, and it is shown that the latter give superior lower bounds on the error variances of individual unbiased estimates of arrival time and carrier frequency of a coherent signal. For a suitably weighted sum of the error variances of simultaneous estimates of these, the former yield the superior lower bound under some conditions.

  1. Record length requirement of long-range dependent teletraffic

    NASA Astrophysics Data System (ADS)

    Li, Ming

    2017-04-01

    This article makes two main contributions. First, it presents a formula to compute the upper bound of the variance of the correlation periodogram measurement of teletraffic (traffic for short) with long-range dependence (LRD) for a given record length T and a given value of the Hurst parameter H (Theorems 1 and 2). Second, it proposes two formulas for the computation of the variance upper bound of the correlation periodogram measurement of traffic of the fractional Gaussian noise (fGn) type and the generalized Cauchy (GC) type, respectively (Corollaries 1 and 2). These may constitute a reference guideline for the record length requirement of traffic with LRD. In addition, the record length requirement for the correlation periodogram measurement of traffic with either the Schuster or the Bartlett type of periodogram is studied, and the results show that both types of periodograms may be used for the correlation measurement of traffic with a pre-desired variance bound of correlation estimation. Moreover, real traffic in the Internet Archive by the Special Interest Group on Data Communication under the Association for Computing Machinery of US (ACM SIGCOMM) is analyzed in the case study in this topic.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guha, Saikat; Shapiro, Jeffrey H.; Erkmen, Baris I.

    Previous work on the classical information capacities of bosonic channels has established the capacity of the single-user pure-loss channel, bounded the capacity of the single-user thermal-noise channel, and bounded the capacity region of the multiple-access channel. The latter is a multiple-user scenario in which several transmitters seek to simultaneously and independently communicate to a single receiver. We study the capacity region of the bosonic broadcast channel, in which a single transmitter seeks to simultaneously and independently communicate to two different receivers. It is known that the tightest available lower bound on the capacity of the single-user thermal-noise channel is that channel's capacity if, as conjectured, the minimum von Neumann entropy at the output of a bosonic channel with additive thermal noise occurs for coherent-state inputs. Evidence in support of this minimum output entropy conjecture has been accumulated, but a rigorous proof has not been obtained. We propose a minimum output entropy conjecture that, if proved to be correct, will establish that the capacity region of the bosonic broadcast channel equals the inner bound achieved using a coherent-state encoding and optimum detection. We provide some evidence that supports this conjecture, but again a full proof is not available.

  3. Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.

    PubMed

    Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L

    2017-05-31

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
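
    A hedged sketch of the ANOVA-based variant only (the PCA-based procedures the study compares are not shown): estimate the repeatability coefficient r from one-way ANOVA mean squares on simulated balanced data, then obtain the number of measurements m needed for a target reliability R via the Spearman-Brown step:

```python
# Repeatability from one-way ANOVA: r = (MSB - MSW) / (MSB + (k-1) * MSW),
# then m = R (1 - r) / (r (1 - R)) measurements for target reliability R.
import numpy as np

rng = np.random.default_rng(8)
g, k = 71, 16                                    # genotypes, yearly measurements
effects = rng.normal(0, 1.0, size=(g, 1))        # genotype effects (simulated)
data = effects + rng.normal(0, 1.5, size=(g, k)) # yield = genotype + residual

msb = k * data.mean(axis=1).var(ddof=1)          # between-genotype mean square
msw = data.var(axis=1, ddof=1).mean()            # within-genotype mean square
r = (msb - msw) / (msb + (k - 1) * msw)          # repeatability coefficient

R_target = 0.90
m = np.ceil(R_target * (1 - r) / (r * (1 - R_target)))
print(f"r = {r:.3f}, measurements needed for R = {R_target}: {int(m)}")
```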

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zachos, C. K.; High Energy Physics

    Following ref. [1], a classical upper bound for quantum entropy is identified and illustrated, 0 ≤ S_q ≤ ln(eσ²/2ℏ), involving the variance σ² in phase space of the classical limit distribution of a given system. A fortiori, this further bounds the corresponding information-theoretical generalizations of the quantum entropy proposed by Rényi.

  5. Maximum and minimum entropy states yielding local continuity bounds

    NASA Astrophysics Data System (ADS)

    Hanson, Eric P.; Datta, Nilanjana

    2018-04-01

    Given an arbitrary quantum state σ, we obtain an explicit construction of a state ρ*_ε(σ) [respectively, ρ_{*,ε}(σ)] which has the maximum (respectively, minimum) entropy among all states which lie in a specified neighborhood (ε-ball) of σ. Computing the entropy of these states leads to a local strengthening of the continuity bound of the von Neumann entropy, i.e., the Audenaert-Fannes inequality. Our bound is local in the sense that it depends on the spectrum of σ. The states ρ*_ε(σ) and ρ_{*,ε}(σ) depend only on the geometry of the ε-ball and are in fact optimizers for a larger class of entropies. These include the Rényi entropy and the minimum- and maximum-entropies, providing explicit formulas for certain smoothed quantities. This allows us to obtain local continuity bounds for these quantities as well. In obtaining this bound, we first derive a more general result which may be of independent interest, namely, a necessary and sufficient condition under which a state maximizes a concave and Gâteaux-differentiable function in an ε-ball around a given state σ. Examples of such a function include the von Neumann entropy and the conditional entropy of bipartite states. Our proofs employ tools from the theory of convex optimization under non-differentiable constraints, in particular Fermat's rule, and majorization theory.

  6. Nanometric depth resolution from multi-focal images in microscopy.

    PubMed

    Dalgarno, Heather I C; Dalgarno, Paul A; Dada, Adetunmise C; Towers, Catherine E; Gibson, Gavin J; Parton, Richard M; Davis, Ilan; Warburton, Richard J; Greenaway, Alan H

    2011-07-06

    We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels.

  7. Nanometric depth resolution from multi-focal images in microscopy

    PubMed Central

    Dalgarno, Heather I. C.; Dalgarno, Paul A.; Dada, Adetunmise C.; Towers, Catherine E.; Gibson, Gavin J.; Parton, Richard M.; Davis, Ilan; Warburton, Richard J.; Greenaway, Alan H.

    2011-01-01

    We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels. PMID:21247948

  8. Analysis of 20 magnetic clouds at 1 AU during a solar minimum

    NASA Astrophysics Data System (ADS)

    Gulisano, A. M.; Dasso, S.; Mandrini, C. H.; Démoulin, P.

    We study 20 magnetic clouds, observed in situ by the spacecraft Wind at the Lagrangian point L1 from 22 August 1995 to 7 November 1997. In previous works, assuming a cylindrical symmetry for the local magnetic configuration and a satellite trajectory crossing the axis of the cloud, we obtained their orientations using a minimum variance analysis. In this work we compute the orientations and magnetic configurations using a non-linear simultaneous fit of the geometric and physical parameters for a linear force-free model, including the possibility of a non-null impact parameter. We quantify global magnitudes such as the relative magnetic helicity per unit length and compare the values found with both methods (minimum variance and the simultaneous fit). FULL TEXT IN SPANISH

  9. Distribution of lod scores in oligogenic linkage analysis.

    PubMed

    Williams, J T; North, K E; Martin, L J; Comuzzie, A G; Göring, H H; Blangero, J

    2001-01-01

    In variance component oligogenic linkage analysis it can happen that the residual additive genetic variance is estimated at its lower bound of zero when estimating the effect of the ith quantitative trait locus. Using quantitative trait Q1 from the Genetic Analysis Workshop 12 simulated general population data, we compare the observed lod scores from oligogenic linkage analysis with the empirical lod score distribution under a null model of no linkage. We find that zero residual additive genetic variance in the null model alters the usual distribution of the likelihood-ratio statistic.

  10. Estimation of transformation parameters for microarray data.

    PubMed

    Durbin, Blythe; Rocke, David M

    2003-07-22

    Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
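
    For reference, the generalized-log transformation itself is one line; the sketch below, with an assumed two-component error model and an illustrative choice of the parameter c, shows the variance-stabilizing effect (the authors' contribution is estimating c jointly with a linear model):

```python
# glog(y; c) = ln(y + sqrt(y^2 + c)) stabilizes variance when the SD of the
# data grows with the mean; c near (additive SD / multiplicative SD)^2 works.
import numpy as np

def glog(y, c):
    return np.log(y + np.sqrt(y**2 + c))   # defined even for negative y

rng = np.random.default_rng(9)
mu = np.array([50.0, 500.0, 5000.0])       # low / medium / high expression
# Two-component error model: additive + multiplicative noise.
y = mu + rng.normal(0, 20.0, size=(5000, 3)) + mu * rng.normal(0, 0.1, size=(5000, 3))

for c in [1.0, 4e4]:
    print(f"c={c:g}:", np.round(glog(y, c).std(axis=0), 3))
# With a well-chosen c the three groups' SDs become nearly equal (first order).
```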

  11. Constrained signal reconstruction from wavelet transform coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1991-12-31

    A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT. 20 refs.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luis, Alfredo

    The use of Renyi entropy as an uncertainty measure alternative to variance leads to the study of states with quantum fluctuations below the levels established by Gaussian states, which are the position-momentum minimum uncertainty states according to variance. We examine the quantum properties of states with exponential wave functions, which combine reduced fluctuations with practical feasibility.
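    As a worked illustration of the kind of computation involved (our example, with an assumed normalization, not an excerpt from the paper), the Renyi entropy of an exponential probability density p(x) = e^{-|x|/a}/(2a) follows from a one-line integral:

    ```latex
    \[
      R_\alpha[p] = \frac{1}{1-\alpha}\ln\!\int_{-\infty}^{\infty} p(x)^{\alpha}\,dx,
      \qquad
      \int_{-\infty}^{\infty}\frac{e^{-\alpha|x|/a}}{(2a)^{\alpha}}\,dx
        = \frac{(2a)^{1-\alpha}}{\alpha}
      \;\Longrightarrow\;
      R_\alpha[p] = \ln(2a) + \frac{\ln\alpha}{\alpha-1}.
    \]
    ```

    Comparing such a value with the corresponding Gaussian entropy at equal variance is the type of check used to exhibit fluctuations below the Gaussian level for suitable orders α.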

  13. Performance bounds for matched field processing in subsurface object detection applications

    NASA Astrophysics Data System (ADS)

    Sahin, Adnan; Miller, Eric L.

    1998-09-01

    In recent years there has been considerable interest in the use of ground penetrating radar (GPR) for the non-invasive detection and localization of buried objects. In previous work, we considered the use of high-resolution array processing methods for solving these problems for measurement geometries in which an array of electromagnetic receivers observes the fields scattered by the subsurface targets in response to plane wave illumination. Our approach uses the MUSIC algorithm in a matched field processing (MFP) scheme to determine both the range and the bearing of the objects. In this paper we derive the Cramer-Rao bounds (CRB) for this MUSIC-based approach analytically. Analysis of the theoretical CRB shows that there exists an optimum inter-element spacing of the array for which the CRB is minimized; moreover, this optimum spacing is smaller than the conventional half-wavelength criterion. The theoretical bounds are then verified for two estimators using Monte-Carlo simulations: the MUSIC-based MFP and a maximum-likelihood-based MFP, which differ in the cost functions they optimize. We observe that the Monte-Carlo simulated error variances always lie above the values established by the CRB. Finally, we evaluate the performance of our MUSIC-based algorithm in the presence of model mismatches. Since the detection algorithm depends strongly on the model used, we test its performance when the object radius used in the model differs from the true radius. This analysis reveals that the algorithm is still capable of localizing the objects, with a bias depending on the degree of mismatch.
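    The observation that Monte-Carlo error variances lie on or above the CRB can be reproduced in miniature with a scalar toy problem (estimating a Gaussian mean), rather than the full MUSIC/MFP setup; a hedged sketch:

    ```python
    import numpy as np

    # Toy check of the Cramer-Rao bound (not the GPR/MUSIC setup itself):
    # estimate the mean of N(theta, sigma^2) from n samples. The CRB is
    # sigma^2 / n, and the sample mean attains it.
    rng = np.random.default_rng(2)
    theta, sigma, n, trials = 1.0, 2.0, 50, 20000
    estimates = rng.normal(theta, sigma, size=(trials, n)).mean(axis=1)
    print("Monte-Carlo variance:", estimates.var())
    print("CRB sigma^2/n       :", sigma**2 / n)
    ```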

  14. Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.

    ERIC Educational Resources Information Center

    Wang, Yuh-Yin Wu; Schafer, William D.

    This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…

  15. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.

  16. Rigorous Statistical Bounds in Uncertainty Quantification for One-Layer Turbulent Geophysical Flows

    NASA Astrophysics Data System (ADS)

    Qi, Di; Majda, Andrew J.

    2018-04-01

    Statistical bounds controlling the total fluctuations in mean and variance about a basic steady-state solution are developed for the truncated barotropic flow over topography. Statistical ensemble prediction is an important topic in weather and climate research. Here, the evolution of an ensemble of trajectories is considered using statistical instability analysis and is compared and contrasted with the classical deterministic instability for the growth of perturbations in one pointwise trajectory. The maximum growth of the total statistics in fluctuations is derived relying on the statistical conservation principle of the pseudo-energy. The saturation bound of the statistical mean fluctuation and variance in the unstable regimes with non-positive-definite pseudo-energy is achieved by linking with a class of stable reference states and minimizing the stable statistical energy. Two cases with dependence on initial statistical uncertainty and on external forcing and dissipation are compared and unified under a consistent statistical stability framework. The flow structures and statistical stability bounds are illustrated and verified by numerical simulations among a wide range of dynamical regimes, where subtle transient statistical instability exists in general with positive short-time exponential growth in the covariance even when the pseudo-energy is positive-definite. Among the various scenarios in this paper, there exist strong forward and backward energy exchanges between different scales which are estimated by the rigorous statistical bounds.

  17. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1990-01-01

    An expurgated upper bound on the event error probability of trellis-coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.

  18. A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Louis A; Mason, John J.

    We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint with a specified noise level, just as the TOA measurements are handled with their respective noise levels. With four or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the four unknowns: the three-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm; it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases, where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for the baseline (iterative) and proposed approaches is tabulated.
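    The TAQMV solver itself is algebraic; the iterative baseline it is compared against is essentially Gauss-Newton weighted least squares over position and emission time. A minimal sketch of that baseline (our illustration; the station geometry and noise levels are invented, and the soft altitude constraint, which would add one more weighted residual row, is omitted for brevity):

    ```python
    import numpy as np

    C = 299792458.0  # speed of light, m/s

    def toa_wls(stations, toas, sigmas, x0, iters=25):
        """Gauss-Newton weighted least squares for TOA geolocation.
        Unknowns x = [x, y, z, t0]; predicted TOA_i = t0 + |x - s_i| / C."""
        x = np.asarray(x0, dtype=float)
        W = np.diag(1.0 / np.asarray(sigmas))          # whitening weights
        for _ in range(iters):
            diff = x[:3] - stations                    # (n, 3)
            d = np.linalg.norm(diff, axis=1)           # ranges
            r = toas - (x[3] + d / C)                  # residuals (s)
            J = np.hstack([diff / (C * d[:, None]), np.ones((len(toas), 1))])
            dx, *_ = np.linalg.lstsq(W @ J, W @ r, rcond=None)
            x += dx
        return x

    rng = np.random.default_rng(3)
    stations = np.array([[0, 0, 0], [40e3, 0, 0], [0, 40e3, 0], [40e3, 40e3, 10e3]])
    truth = np.array([12e3, 9e3, 1e3, 0.0])
    toas = truth[3] + np.linalg.norm(stations - truth[:3], axis=1) / C \
           + 5e-9 * rng.standard_normal(4)
    print(toa_wls(stations, toas, sigmas=[5e-9] * 4, x0=[10e3, 10e3, 0.0, 0.0]))
    ```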

  19. Upper and lower bounds for semi-Markov reliability models of reconfigurable systems

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.

  20. Numerical optimization techniques for bound circulation distribution for minimum induced drag of Nonplanar wings: Computer program documentation

    NASA Technical Reports Server (NTRS)

    Kuhlman, J. M.; Ku, T. J.

    1981-01-01

    A two-dimensional, advanced-panel, far-field potential flow model of the undistorted, interacting wakes of multiple lifting surfaces was developed which allows the determination of the spanwise bound circulation distribution required for minimum induced drag. This model was implemented in a FORTRAN computer program, the use of which is documented in this report. The nonplanar wakes are broken up into variable-sized flat panels, as chosen by the user. The wake vortex sheet strength is assumed to vary linearly over each of these panels, resulting in a quadratic variation of bound circulation. Panels are infinite in the streamwise direction. The theory is briefly summarized herein; sample results are given for multiple, nonplanar lifting surfaces, and the use of the computer program is detailed in the appendixes.

  1. Synthesis of feedback systems with large plant ignorance for prescribed time domain tolerances

    NASA Technical Reports Server (NTRS)

    Horowitz, I. M.; Sidi, M.

    1971-01-01

    A minimum-phase plant transfer function is given, with prescribed bounds on its parameter values. The plant is embedded in a two-degree-of-freedom feedback system, which is to be designed such that the system time response to a deterministic input lies within specified boundaries. Subject to the above, the design should minimize the effect of sensor noise at the input to the plant. This report presents a design procedure for this purpose, based on frequency response concepts. The time-domain tolerances are translated into equivalent frequency response tolerances. The latter lead to bounds on the loop transmission function in the form of continuous curves on the Nichols chart. The properties of the loop transmission function which satisfy these bounds with minimum effect of sensor noise are derived.

  2. A method for minimum risk portfolio optimization under hybrid uncertainty

    NASA Astrophysics Data System (ADS)

    Egorova, Yu E.; Yazenin, A. V.

    2018-03-01

    In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.

  3. Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium

    Treesearch

    Raymond L. Czaplewski

    1991-01-01

    The Kalman filter is a generalization of the composite estimator. The univariate composite estimate combines two prior estimates of a population parameter with a weighted average, where the scalar weight is inversely proportional to the variance. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
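    As described above, the scalar composite estimator is just an inverse-variance weighted average; a minimal sketch:

    ```python
    import numpy as np

    def composite(est1, var1, est2, var2):
        """Univariate composite (inverse-variance weighted) estimate, the
        scalar special case of the Kalman filter update described above."""
        w = var2 / (var1 + var2)                # weight on estimate 1
        est = w * est1 + (1.0 - w) * est2
        var = 1.0 / (1.0 / var1 + 1.0 / var2)   # combined variance <= min(var1, var2)
        return est, var

    print(composite(10.0, 4.0, 12.0, 1.0))  # -> (11.6, 0.8)
    ```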

  4. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    NASA Astrophysics Data System (ADS)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection conventionally means 'minimizing the risk, given a certain level of return' from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in practice because each asset usually has a minimum transaction lot. Classical approaches that consider minimum transaction lots were developed based on the linear mean absolute deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate portfolio selection with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses, which is preferable when working with non-symmetric return distributions. Solutions of this model can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.

  5. A binary linear programming formulation of the graph edit distance.

    PubMed

    Justice, Derek; Hero, Alfred

    2006-08-01

    A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
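    The assignment-problem bound mentioned above can be sketched in a few lines; building the node edit-cost matrix (substitution costs, padded with insertion/deletion costs) is problem-specific and assumed here:

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def ged_assignment_bound(cost_matrix):
        """Polynomial-time bound on graph edit distance via the assignment
        problem, in the spirit of the bounds described above. cost_matrix[i, j]
        holds the cost of editing node i of graph A into node j of graph B
        (padded with insertion/deletion costs); the optimal assignment cost
        bounds the exact, NP-hard edit distance when edge costs are relaxed."""
        rows, cols = linear_sum_assignment(cost_matrix)
        return cost_matrix[rows, cols].sum()
    ```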

  6. Movement trajectory smoothness is not associated with the endpoint accuracy of rapid multi-joint arm movements in young and older adults

    PubMed Central

    Poston, Brach; Van Gemmert, Arend W.A.; Sharma, Siddharth; Chakrabarti, Somesh; Zavaremi, Shahrzad H.; Stelmach, George

    2013-01-01

    The minimum variance theory proposes that motor commands are corrupted by signal-dependent noise and smooth trajectories with low noise levels are selected to minimize endpoint error and endpoint variability. The purpose of the study was to determine the contribution of trajectory smoothness to the endpoint accuracy and endpoint variability of rapid multi-joint arm movements. Young and older adults performed arm movements (4 blocks of 25 trials) as fast and as accurately as possible to a target with the right (dominant) arm. Endpoint accuracy and endpoint variability along with trajectory smoothness and error were quantified for each block of trials. Endpoint error and endpoint variance were greater in older adults compared with young adults, but decreased at a similar rate with practice for the two age groups. The greater endpoint error and endpoint variance exhibited by older adults were primarily due to impairments in movement extent control and not movement direction control. The normalized jerk was similar for the two age groups, but was not strongly associated with endpoint error or endpoint variance for either group. However, endpoint variance was strongly associated with endpoint error for both the young and older adults. Finally, trajectory error was similar for both groups and was weakly associated with endpoint error for the older adults. The findings are not consistent with the predictions of the minimum variance theory, but support and extend previous observations that movement trajectories and endpoints are planned independently. PMID:23584101

  7. Exotic equilibria of Harary graphs and a new minimum degree lower bound for synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canale, Eduardo A.; Monzón, Pablo

    2015-02-15

    This work is concerned with the stability of equilibria in the homogeneous (equal frequencies) Kuramoto model of weakly coupled oscillators. In 2012 [R. Taylor, J. Phys. A: Math. Theor. 45, 1–15 (2012)], a sufficient condition for almost global synchronization was found in terms of the minimum degree–order ratio of the graph. In this work, a new lower bound for this ratio is given. The improvement is achieved by means of a concrete infinite sequence of regular graphs. In addition, nonstandard unstable equilibria of the graphs studied in Wiley et al. [Chaos 16, 015103 (2006)] are shown to exist, as conjectured in that work.

  8. Effects of important parameters variations on computing eigenspace-based minimum variance weights for ultrasound tissue harmonic imaging

    NASA Astrophysics Data System (ADS)

    Haji Heidari, Mehdi; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza

    2018-02-01

    In recent years, minimum variance (MV) beamforming has been widely studied due to its high resolution and contrast in B-mode ultrasound imaging (USI). However, the performance of the MV beamformer is degraded in the presence of noise, as a result of inaccurate covariance matrix estimation, which leads to a low-quality image. Second harmonic imaging (SHI) provides many advantages over conventional pulse-echo USI, such as enhanced axial and lateral resolution. However, low signal-to-noise ratio (SNR) is a major problem in SHI. In this paper, the eigenspace-based minimum variance (EIBMV) beamformer is employed for second harmonic USI. Tissue harmonic imaging (THI) is achieved by the pulse inversion (PI) technique. Using the EIBMV weights, instead of the MV ones, leads to reduced sidelobes and improved contrast without compromising the high resolution of the MV beamformer (even in the presence of strong noise). In addition, we investigate the effects of variations of the important parameters in computing the EIBMV weights, i.e., K, L, and δ, on the resolution and contrast obtained in SHI. The results are evaluated using numerical data (point target and cyst phantoms), and proper EIBMV parameters are indicated for THI.
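    A minimal sketch of the EIBMV idea, assuming a pre-estimated covariance matrix R and steering vector a: compute the usual minimum variance (Capon) weights, then project them onto the dominant eigen-subspace of R. The subspace size and loading factor below are placeholder choices standing in for parameters like the paper's L and δ:

    ```python
    import numpy as np

    def eibmv_weights(R, a, n_signal, diag_load=1e-2):
        """Eigenspace-based minimum variance (EIBMV) weights, sketched:
        Capon/MV weights w = R^-1 a / (a^H R^-1 a), projected onto the
        subspace of the n_signal largest eigenvectors of the diagonally
        loaded covariance R."""
        n = R.shape[0]
        Rl = R + diag_load * (np.trace(R).real / n) * np.eye(n)
        Ri_a = np.linalg.solve(Rl, a)
        w_mv = Ri_a / (a.conj() @ Ri_a)      # minimum variance weights
        lam, E = np.linalg.eigh(Rl)          # eigenvalues in ascending order
        Es = E[:, -n_signal:]                # dominant (signal) subspace
        return Es @ (Es.conj().T @ w_mv)     # projected (EIBMV) weights
    ```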

  9. Hydraulic geometry of river cross sections; theory of minimum variance

    USGS Publications Warehouse

    Williams, Garnett P.

    1978-01-01

    This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
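    In its simplest form, the minimum-variance hypothesis can be sketched as a constrained minimization: continuity (Q = wdv) forces the width, depth, and velocity exponents to sum to one, and the theory selects the exponent set with least total variance. The idealized objective below (sum of squared exponents only, ignoring the shear, friction, and slope terms the study includes) is our illustration, not the paper's full calculation:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # w ~ Q^b, d ~ Q^f, v ~ Q^m with continuity constraint b + f + m = 1;
    # minimize an idealized total variance b^2 + f^2 + m^2.
    res = minimize(lambda e: np.sum(e**2),
                   x0=np.array([0.3, 0.3, 0.4]),
                   constraints={"type": "eq", "fun": lambda e: e.sum() - 1.0})
    print("b, f, m =", res.x)   # -> ~(1/3, 1/3, 1/3) in this idealization
    ```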

  10. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm

    NASA Astrophysics Data System (ADS)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative simulation results show that MVB-DMAS improves the full-width-half-maximum by about 96%, 94%, and 45% and the signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.
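    The combinatorial core of DMAS, for channel samples already delayed to a common pixel, is the sum of sign-preserving square-root pair products; a compact identity avoids the explicit double loop. A minimal sketch (in MVB-DMAS the DAS-like inner sums appearing in the expanded expression would be replaced by minimum variance weighted sums):

    ```python
    import numpy as np

    def dmas(delayed):
        """Delay-multiply-and-sum over pre-delayed channel samples:
        y = sum_{i<j} sign(x_i x_j) * sqrt(|x_i x_j|). With
        x_hat_i = sign(x_i) * sqrt(|x_i|), this equals sum_{i<j} x_hat_i x_hat_j,
        computed via ((sum x_hat)^2 - sum x_hat^2) / 2."""
        x = np.sign(delayed) * np.sqrt(np.abs(delayed))
        s = x.sum()
        return 0.5 * (s**2 - (x**2).sum())

    print(dmas(np.array([1.0, 0.8, 1.2, 0.9])))
    ```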

  11. Lithology-dependent minimum horizontal stress and in-situ stress estimate

    NASA Astrophysics Data System (ADS)

    Zhang, Yushuai; Zhang, Jincai

    2017-04-01

    Based on the generalized Hooke's law with coupled stresses and pore pressure, the minimum horizontal stress is solved under the assumption that the vertical, minimum, and maximum horizontal stresses are in equilibrium in the subsurface formations. From this derivation, we find that the uniaxial strain method gives the minimum value, or lower bound, of the minimum horizontal stress. Using Anderson's faulting theory and this lower bound, the coefficient of friction of the fault is derived. It is shown that the coefficient of friction may be much smaller than is commonly assumed (e.g., μf = 0.6-0.7) for in-situ stress estimation. Using the derived coefficient of friction, an improved stress polygon is drawn, which can reduce the uncertainty of in-situ stress calculation by narrowing the area of the conventional stress polygon. The coefficient of friction of the fault is also shown to depend on lithology. For example, if the formation in the fault is composed of weak shales, then the coefficient of friction of the fault may be small (as low as μf = 0.2). This implies that such a fault is weaker and more likely to fail in shear than a fault composed of sandstones. To keep a weak fault from sliding in shear, it needs to maintain a higher minimum stress and a lower shear stress. That is, the critically stressed weak fault maintains a higher minimum stress, which explains why a low shear stress is observed in frictionally weak faults.
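    Two formulas from this line of argument are compact enough to sketch directly: the uniaxial-strain lower bound on the minimum horizontal stress, and the friction coefficient implied by a critically stressed Andersonian fault. Poisson's ratio and the Biot coefficient below are assumed placeholder values, not the paper's calibrations:

    ```python
    import numpy as np

    def sigma_h_min(sigma_v, pore_p, nu=0.25, biot=1.0):
        """Uniaxial-strain lower bound on the minimum horizontal stress:
        sigma_h = nu/(1-nu) * (sigma_v - biot*p) + biot*p.
        nu (Poisson's ratio) and biot are lithology-dependent."""
        return nu / (1.0 - nu) * (sigma_v - biot * pore_p) + biot * pore_p

    def friction_from_stress_ratio(s1_eff, s3_eff):
        """Friction coefficient for a critically stressed fault from
        Anderson faulting theory: s1'/s3' = (sqrt(mu^2 + 1) + mu)^2,
        inverted as mu = (q^2 - 1) / (2q) with q = sqrt(s1'/s3')."""
        q = np.sqrt(s1_eff / s3_eff)
        return (q**2 - 1.0) / (2.0 * q)
    ```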

  12. Mesoscale Gravity Wave Variances from AMSU-A Radiances

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.

    2004-01-01

    A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with a minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show that GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
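    The noise-removal step can be sketched generically: if the instrument noise is white, the variance of first differences of adjacent samples is twice the noise variance, which can then be subtracted from the total. A hedged sketch under that assumption, not the paper's exact AMSU-A procedure:

    ```python
    import numpy as np

    def noise_corrected_variance(radiances):
        """Along-track variance minus an instrument-noise variance estimated
        from first differences of adjacent samples. Assumes white noise and
        a signal that varies slowly between samples, so that
        var(x[n+1] - x[n]) ~ 2 * var_noise."""
        var_total = np.var(radiances)
        var_noise = 0.5 * np.var(np.diff(radiances))
        return max(var_total - var_noise, 0.0)
    ```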

  13. Analysis of conditional genetic effects and variance components in developmental genetics.

    PubMed

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  14. Analysis of Conditional Genetic Effects and Variance Components in Developmental Genetics

    PubMed Central

    Zhu, J.

    1995-01-01

    A genetic model with additive-dominance effects and genotype X environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t - 1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects. PMID:8601500

  15. Some refinements on the comparison of areal sampling methods via simulation

    Treesearch

    Jeffrey Gove

    2017-01-01

    The design of forest inventories and development of new sampling methods useful in such inventories normally have a two-fold target of design unbiasedness and minimum variance in mind. Many considerations such as costs go into the choices of sampling method for operational and other levels of inventory. However, the variance in terms of meeting a specified level of...

  16. A comparison of coronal and interplanetary current sheet inclinations

    NASA Technical Reports Server (NTRS)

    Behannon, K. W.; Burlaga, L. F.; Hundhausen, A. J.

    1983-01-01

    The HAO white-light K-coronameter observations show that the inclination of the heliospheric current sheet at the base of the corona can be either large (nearly vertical with respect to the solar equator) or small during Carrington rotations 1660-1666, and can vary even within a single solar rotation. Voyager 1 and 2 magnetic field observations show crossings of the heliospheric current sheet at distances from the Sun of 1.4 and 2.8 AU. Two cases are considered: one in which the corresponding coronameter data indicate a nearly vertical (north-south) current sheet, and another in which a nearly horizontal, near-equatorial current sheet is indicated. For the crossings of the vertical current sheet, a variance analysis based on hour averages of the magnetic field data gave a minimum variance direction consistent with a steep inclination. The horizontal current sheet was observed by Voyager as a region of mixed polarity and low speeds lasting several days, consistent with multiple crossings of a horizontal but irregular and fluctuating current sheet at 1.4 AU. However, variance analysis of individual current sheet crossings in this interval, using 1.92 s averages, did not give minimum variance directions consistent with a horizontal current sheet.

  17. Confidence intervals for the between-study variance in random-effects meta-analysis using generalised heterogeneity statistics: should we use unequal tails?

    PubMed

    Jackson, Dan; Bowden, Jack

    2016-09-07

    Confidence intervals for the between study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95 % confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5 %. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95 % confidence intervals for the between-study variance. We also show some further results for a real example that illustrates how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95 % confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4 % split', where greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.

  18. Assumption-free estimation of the genetic contribution to refractive error across childhood.

    PubMed

    Guggenheim, Jeremy A; St Pourcain, Beate; McMahon, George; Timpson, Nicholas J; Evans, David M; Williams, Cathy

    2015-01-01

    Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75-90%, families 15-70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age=3,404). The variance in refractive error explained by the SNPs ("SNP heritability") was stable over childhood: Across age 7-15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001) demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8-9 years old. Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects is expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population.

  19. A Model for Determining Optimal Governance Structure in DoD Acquisition Projects in a Performance-Based Environment

    DTIC Science & Technology

    2010-04-30

    ...combating market dynamism (Aldrich, 1979; Child, 1972), which is a result of evolving technology, shifting prices, or variance in product availability... principles: (1) human beings are boundedly rational, and (2) as a result of being rationally bounded, will always choose to further their own self... principles to govern the relationship among the buyers and suppliers. Our conceptual model aligns the alternative governance structures derived...

  20. Minimum Variance Distortionless Response Beamformer with Enhanced Nulling Level Control via Dynamic Mutated Artificial Immune System

    PubMed Central

    Kiong, Tiong Sieh; Salem, S. Balasem; Paw, Johnny Koh Siaw; Sankar, K. Prajindra

    2014-01-01

    In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered as an optimization problem, such that optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and increase the signal to interference noise ratio (SINR) for wanted signals. PMID:25003136

  1. Minimum variance distortionless response beamformer with enhanced nulling level control via dynamic mutated artificial immune system.

    PubMed

    Kiong, Tiong Sieh; Salem, S Balasem; Paw, Johnny Koh Siaw; Sankar, K Prajindra; Darzi, Soodabeh

    2014-01-01

    In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered as an optimization problem, such that optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and increase the signal to interference noise ratio (SINR) for wanted signals.

  2. Bounds on strain in large Tertiary shear zones of SE Asia from boudinage restoration

    NASA Astrophysics Data System (ADS)

    Lacassin, R.; Leloup, P. H.; Tapponnier, P.

    1993-06-01

    We have used surface-balanced restoration of stretched, boudinaged layers to estimate minimum amounts of finite strain in the mylonitic gneisses of the Oligo-Miocene Red River-Ailao Shan shear zone (Yunnan, China) and of the Wang Chao shear zone (Thailand). The layer-parallel extension values thus obtained range between 250 and 870%. We discuss how to use such extension values to place bounds on amounts of finite shear strain in these large crustal shear zones. Assuming simple shear, these values imply minimum total and late shear strains of, respectively, 33 ± 6 and 7 ± 3 at several sites along the Red River-Ailao Shan shear zone. For the Wang Chao shear zone a minimum shear strain of 7 ± 4 is deduced. Assuming homogeneous shear would imply that minimum strike-slip displacements along these two left-lateral shear zones, which have been interpreted to result from the India-Asia collision, have been of the order of 330 ± 60 km (Red River-Ailao Shan) and 35 ± 20 km (Wang Chao).

  3. 25 CFR 542.18 - How does a gaming operation apply for a variance from the standards of the part?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false How does a gaming operation apply for a variance from the standards of the part? 542.18 Section 542.18 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.18 How does a gaming operation apply for a...

  4. Achieving the Holevo bound via a bisection decoding protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosati, Matteo; Giovannetti, Vittorio

    2016-06-15

    We present a new decoding protocol to realize transmission of classical information through a quantum channel at asymptotically maximum capacity, achieving the Holevo bound and thus the optimal communication rate. At variance with previous proposals, our scheme recovers the message bit by bit, making use of a series of "yes-no" measurements, organized in bisection fashion, thus determining which codeword was sent in log₂N steps, N being the number of codewords.

  5. Cramer-Rao Bound, MUSIC, and Maximum Likelihood. Effects of Temporal Phase Difference

    DTIC Science & Technology

    1990-11-01

    Technical Report 1373, November 1990. The report compares the Cramer-Rao bound with the MUSIC and maximum likelihood (ML) asymptotic variances for two-source direction-of-arrival estimation, examining the effect of the temporal phase difference between the sources (e.g., |p| = 1.00 and |p| = 0.50 at SNR = 20 dB for two equipowered signals impinging on a 5-element ULA).

  6. A test of source-surface model predictions of heliospheric current sheet inclination

    NASA Technical Reports Server (NTRS)

    Burton, M. E.; Crooker, N. U.; Siscoe, G. L.; Smith, E. J.

    1994-01-01

    The orientation of the heliospheric current sheet predicted from a source surface model is compared with the orientation determined from minimum-variance analysis of International Sun-Earth Explorer (ISEE) 3 magnetic field data at 1 AU near solar maximum. Of the 37 cases analyzed, 28 have minimum variance normals that lie orthogonal to the predicted Parker spiral direction. For these cases, the correlation coefficient between the predicted and measured inclinations is 0.6. However, for the subset of 14 cases for which transient signatures (either interplanetary shocks or bidirectional electrons) are absent, the agreement in inclinations improves dramatically, with a correlation coefficient of 0.96. These results validate not only the use of the source surface model as a predictor but also the previously questioned usefulness of minimum variance analysis across complex sector boundaries. In addition, the results imply that interplanetary dynamics have little effect on current sheet inclination at 1 AU. The dependence of the correlation on transient occurrence suggests that the leading edge of a coronal mass ejection (CME), where transient signatures are detected, disrupts the heliospheric current sheet but that the sheet re-forms between the trailing legs of the CME. In this way the global structure of the heliosphere, reflected both in the source surface maps and in the interplanetary sector structure, can be maintained even when the CME occurrence rate is high.

  7. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    PubMed

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative simulation results show that MVB-DMAS improves the full-width-half-maximum by about 96%, 94%, and 45% and the signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  8. The Bayesian Cramér-Rao lower bound in Astrometry

    NASA Astrophysics Data System (ADS)

    Mendez, R. A.; Echeverria, A.; Silva, J.; Orchard, M.

    2018-01-01

    A determination of the highest precision that can be achieved in the measurement of the location of a stellar-like object has been a topic of permanent interest by the astrometric community. The so-called (parametric, or non-Bayesian) Cramér-Rao (CR hereafter) bound provides a lower bound for the variance with which one could estimate the position of a point source. This has been studied recently by Mendez et al. (2013, 2014, 2015). In this work we present a different approach to the same problem (Echeverria et al. 2016), using a Bayesian CR setting which has a number of advantages over the parametric scenario.

  9. The Bayesian Cramér-Rao lower bound in Astrometry

    NASA Astrophysics Data System (ADS)

    Mendez, R. A.; Echeverria, A.; Silva, J.; Orchard, M.

    2017-07-01

    A determination of the highest precision that can be achieved in the measurement of the location of a stellar-like object has been a topic of permanent interest by the astrometric community. The so-called (parametric, or non-Bayesian) Cramér-Rao (CR hereafter) bound provides a lower bound for the variance with which one could estimate the position of a point source. This has been studied recently by Mendez and collaborators (2014, 2015). In this work we present a different approach to the same problem (Echeverria et al. 2016), using a Bayesian CR setting which has a number of advantages over the parametric scenario.

  10. Vegetation greenness impacts on maximum and minimum temperatures in northeast Colorado

    USGS Publications Warehouse

    Hanamean, J. R.; Pielke, R.A.; Castro, C. L.; Ojima, D.S.; Reed, Bradley C.; Gao, Z.

    2003-01-01

    The impact of vegetation on the microclimate has not been adequately considered in the analysis of temperature forecasting and modelling. To fill part of this gap, the following study was undertaken. A daily 850–700 mb layer mean temperature, computed from the National Center for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis, and satellite-derived greenness values, as defined by NDVI (Normalised Difference Vegetation Index), were correlated with surface maximum and minimum temperatures at six sites in northeast Colorado for the years 1989–98. The NDVI values, representing landscape greenness, act as a proxy for latent heat partitioning via transpiration. These sites encompass a wide array of environments, from irrigated-urban to short-grass prairie. The explained variance (r² value) of surface maximum and minimum temperature by only the 850–700 mb layer mean temperature was subtracted from the corresponding explained variance by the 850–700 mb layer mean temperature and NDVI values. The subtraction shows that by including NDVI values in the analysis, the r² values, and thus the degree of explanation of the surface temperatures, increase by a mean of 6% for the maxima and 8% for the minima over the period March–October. At most sites, there is a seasonal dependence in the explained variance of the maximum temperatures because of the seasonal cycle of plant growth and senescence. Between individual sites, the highest increase in explained variance occurred at the site with the least amount of anthropogenic influence. This work suggests the vegetation state needs to be included as a factor in surface temperature forecasting, numerical modeling, and climate change assessments.

  11. Change in mean temperature as a predictor of extreme temperature change in the Asia-Pacific region

    NASA Astrophysics Data System (ADS)

    Griffiths, G. M.; Chambers, L. E.; Haylock, M. R.; Manton, M. J.; Nicholls, N.; Baek, H.-J.; Choi, Y.; della-Marta, P. M.; Gosai, A.; Iga, N.; Lata, R.; Laurent, V.; Maitrepierre, L.; Nakamigawa, H.; Ouprasitwong, N.; Solofa, D.; Tahani, L.; Thuy, D. T.; Tibig, L.; Trewin, B.; Vediapan, K.; Zhai, P.

    2005-08-01

    Trends (1961-2003) in daily maximum and minimum temperatures, extremes and variance were found to be spatially coherent across the Asia-Pacific region. The majority of stations exhibited significant trends: increases in mean maximum and mean minimum temperature, decreases in cold nights and cool days, and increases in warm nights. No station showed a significant increase in cold days or cold nights, but a few sites showed significant decreases in hot days and warm nights. Significant decreases were observed in both maximum and minimum temperature standard deviation in China, Korea and some stations in Japan (probably reflecting urbanization effects), but also for some Thailand and coastal Australian sites. The South Pacific convergence zone (SPCZ) region between Fiji and the Solomon Islands showed a significant increase in maximum temperature variability. Correlations between mean temperature and the frequency of extreme temperatures were strongest in the tropical Pacific Ocean from French Polynesia to Papua New Guinea, Malaysia, the Philippines, Thailand and southern Japan. Correlations were weaker at continental or higher latitude locations, which may partly reflect urbanization. For non-urban stations, the dominant distribution change for both maximum and minimum temperature involved a change in the mean, impacting on one or both extremes, with no change in standard deviation. This occurred from French Polynesia to Papua New Guinea (except for maximum temperature changes near the SPCZ), in Malaysia, the Philippines, and several outlying Japanese islands. For urbanized stations the dominant change was a change in the mean and variance, impacting on one or both extremes. This result was particularly evident for minimum temperature. The results presented here, for non-urban tropical and maritime locations in the Asia-Pacific region, support the hypothesis that changes in mean temperature may be used to predict changes in extreme temperatures. At urbanized or higher latitude locations, changes in variance should be incorporated.

  12. Curvature Continuous and Bounded Path Planning for Fixed-Wing UAVs

    PubMed Central

    Jiang, Peng; Li, Deshi; Sun, Tao

    2017-01-01

    Unmanned Aerial Vehicles (UAVs) play an important role in applications such as data collection and target reconnaissance. An accurate and optimal path can effectively increase the mission success rate in the case of small UAVs. Although path planning for UAVs is similar to that for traditional mobile robots, the special kinematic characteristics of UAVs (such as their minimum turning radius) have not been taken into account in previous studies. In this paper, we propose a locally-adjustable, continuous-curvature, bounded path-planning algorithm for fixed-wing UAVs. To deal with the curvature discontinuity problem, an optimal interpolation algorithm and a key-point shift algorithm are proposed based on the derivation of a curvature continuity condition. To meet the upper bound for curvature and to render the curvature extrema controllable, a local replanning scheme is designed by combining arcs and Bezier curves with monotonic curvature. In particular, a path transition mechanism is built for the replanning phase using minimum curvature circles for a planning philosophy. Numerical results demonstrate that the analytical planning algorithm can effectively generate continuous-curvature paths, while satisfying the curvature upper bound constraint and allowing UAVs to pass through all predefined waypoints in the desired mission region. PMID:28925960

  13. Curvature Continuous and Bounded Path Planning for Fixed-Wing UAVs.

    PubMed

    Wang, Xiaoliang; Jiang, Peng; Li, Deshi; Sun, Tao

    2017-09-19

    Unmanned Aerial Vehicles (UAVs) play an important role in applications such as data collection and target reconnaissance. An accurate and optimal path can effectively increase the mission success rate in the case of small UAVs. Although path planning for UAVs is similar to that for traditional mobile robots, the special kinematic characteristics of UAVs (such as their minimum turning radius) have not been taken into account in previous studies. In this paper, we propose a locally-adjustable, continuous-curvature, bounded path-planning algorithm for fixed-wing UAVs. To deal with the curvature discontinuity problem, an optimal interpolation algorithm and a key-point shift algorithm are proposed based on the derivation of a curvature continuity condition. To meet the upper bound for curvature and to render the curvature extrema controllable, a local replanning scheme is designed by combining arcs and Bezier curves with monotonic curvature. In particular, a path transition mechanism is built for the replanning phase using minimum curvature circles for a planning philosophy. Numerical results demonstrate that the analytical planning algorithm can effectively generate continuous-curvature paths, while satisfying the curvature upper bound constraint and allowing UAVs to pass through all predefined waypoints in the desired mission region.
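    The curvature-bound check at the heart of such replanning is straightforward for a planar cubic Bezier segment: evaluate κ(t) = |x'y'' − y'x''| / (x'² + y'²)^(3/2) densely over the segment and compare the maximum against 1/r_min for the UAV's minimum turning radius r_min. A minimal sketch (the control points are illustrative, not from the paper):

    ```python
    import numpy as np

    def cubic_bezier_curvature(P, t):
        """Curvature of a planar cubic Bezier with control points P (4x2),
        evaluated at parameter values t via de Casteljau on the
        derivative control polygons."""
        P = np.asarray(P, dtype=float)
        D1 = 3.0 * (P[1:] - P[:-1])       # control points of B'(t), degree 2
        D2 = 2.0 * (D1[1:] - D1[:-1])     # control points of B''(t), degree 1

        def de_casteljau(pts, t):
            pts = np.repeat(pts[None, :, :], len(t), axis=0)
            while pts.shape[1] > 1:
                pts = (1 - t)[:, None, None] * pts[:, :-1] \
                      + t[:, None, None] * pts[:, 1:]
            return pts[:, 0, :]

        d1 = de_casteljau(D1, t)
        d2 = de_casteljau(D2, t)
        num = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
        return num / (d1[:, 0]**2 + d1[:, 1]**2) ** 1.5

    t = np.linspace(0.0, 1.0, 201)
    P = [[0, 0], [1, 0], [1, 1], [2, 1]]
    kappa = cubic_bezier_curvature(P, t)
    r_min = 0.5                                # assumed minimum turning radius
    print("max curvature:", kappa.max(), "bound ok:", kappa.max() <= 1.0 / r_min)
    ```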

  14. Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement

    NASA Technical Reports Server (NTRS)

    Weimer, Daniel R.

    2001-01-01

    The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed, for submission to the Journal of Geophysical Research. In the preparation of this manuscript all data and analysis programs had been updated to the highest temporal resolution possible, at 16 seconds or better. The program which computes the "measured" IMF propagation time delays from these data has also undergone another improvement. In another significant development, a technique has been developed in order to predict IMF phase plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.

  15. Reduced conservatism in stability robustness bounds by state transformation

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.; Liang, Z.

    1986-01-01

    This note addresses the issue of 'conservatism' in the time domain stability robustness bounds obtained by the Liapunov approach. A state transformation is employed to improve the upper bounds on the linear time-varying perturbation of an asymptotically stable linear time-invariant system for robust stability. This improvement is due to the variation of the conservatism of the Liapunov approach with respect to the basis of the vector space in which the Liapunov function is constructed. Improved bounds are obtained, using a transformation, on elemental and vector norms of perturbations (i.e., structured perturbations) as well as on a matrix norm of perturbations (i.e., unstructured perturbations). For the case of a diagonal transformation, an algorithm is proposed to find the 'optimal' transformation. Several examples are presented to illustrate the proposed analysis.

  16. The influence of SO4 and NO3 to the acidity (pH) of rainwater using minimum variance quadratic unbiased estimation (MIVQUE) and maximum likelihood methods

    NASA Astrophysics Data System (ADS)

    Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where the sulfuric acid is derived from SO2 and the nitric acid from NOx (x = 1, 2). The purpose of the research is to find out the influence of the SO4 and NO3 levels contained in the rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations on individuals observed repeatedly over time; the panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance components of the errors, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 − 0.00107302 X₁ + 0.00215470 X₂.
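    MIVQUE itself is rarely packaged in general statistics libraries; as a sketch of fitting a comparable random-effects panel model in Python, REML via statsmodels can stand in for the variance-component step. The file, column, and grouping names below are hypothetical:

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical panel file with columns pH, SO4, NO3, site; REML is used
    # here as a stand-in for MIVQUE when estimating variance components.
    df = pd.read_csv("rain_panel.csv")
    model = smf.mixedlm("pH ~ SO4 + NO3", df, groups=df["site"])
    fit = model.fit(reml=True)
    print(fit.summary())   # fixed effects and variance components
    ```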

  17. Consistent use of the standard model effective potential.

    PubMed

    Andreassen, Anders; Frost, William; Schwartz, Matthew D

    2014-12-12

    The stability of the standard model is determined by the true minimum of the effective Higgs potential. We show that the potential at its minimum, when computed by the traditional method, is strongly dependent on the gauge parameter. It moreover depends on the scale at which the potential is calculated. We provide a consistent method for determining absolute stability, independent of both gauge and calculation scale, order by order in perturbation theory. This leads to revised stability bounds m_h(pole) > (129.4 ± 2.3) GeV and m_t(pole) < (171.2 ± 0.3) GeV. We also show how to evaluate the effect of new physics on the stability bound without resorting to unphysical field values.

  18. Minimum Total-Squared-Correlation Quaternary Signature Sets: New Bounds and Optimal Designs

    DTIC Science & Technology

    2009-12-01

    50, pp. 2433-2440, Oct. 2004. [11] G. S. Rajappan and M. L. Honig, "Signature sequence adaptation for DS-CDMA with multipath," IEEE J. Sel. Areas... [14] C. Ding, M. Golin, and T. Kløve, "Meeting the Welch and Karystinos-Pados bounds on DS-CDMA binary signature sets," Des. Codes Cryptogr., vol. 30, pp. 73-84, Aug. 2003. [15] V. P. Ipatov, "On the Karystinos-Pados bounds and optimal binary DS-CDMA signature ensembles," IEEE

  19. Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk; Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent; Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com

    2014-10-01

    We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ₁, …, σᵣ. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ₁, …, σᵣ), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, min_{j<k} C(σⱼ, σₖ).

  20. Heating Capacity of ReBound Shortwave Diathermy and Moist Hot Packs at Superficial Depths

    PubMed Central

    Hawkes, Amanda R.; Draper, David O.; Johnson, A. Wayne; Diede, Mike T.; Rigby, Justin H.

    2013-01-01

    Context: The effectiveness of a new continuous diathermy unit, ReBound, as a heating modality is unknown. Objective: To compare the effects of ReBound diathermy with silicate-gel moist hot packs on tissue temperature in the human triceps surae muscle. Design: Crossover study. Setting: University research laboratory. Patients or Other Participants: A total of 12 healthy, college-aged volunteers (4 men, 8 women; age = 22.2 ± 2.25 years, calf subcutaneous fat thickness = 7.2 ± 1.9 mm). Intervention(s): On 2 different days, 1 of 2 modalities (ReBound diathermy, silicate-gel moist hot pack) was applied to the triceps surae muscle of each participant for 30 minutes. After 30 minutes, the modality was removed, and temperature decay was recorded for 20 minutes. Main Outcome Measure(s): Medial triceps surae intramuscular tissue temperature at a depth of 1 cm was measured using an implantable thermocouple inserted horizontally into the muscle. Measurements were taken every 5 minutes during the 30-minute treatment and every minute during the 20-minute temperature decay, for a total of 50 minutes. Treatment was analyzed through a 2 × 7 mixed-model analysis of variance with repeated measures. Temperature decay was analyzed through a 2 × 21 mixed-model analysis of variance with repeated measures. Results: During the 30-minute application, tissue temperatures at a depth of 1 cm increased more with the ReBound diathermy than with the moist hot pack (F(6,66) = 7.14, P < .001). ReBound diathermy and moist hot packs increased tissue temperatures 3.69°C ± 1.50°C and 2.82°C ± 0.90°C, respectively, from baseline. Throughout the temperature decay, ReBound diathermy produced a greater rate of heat dissipation than the moist hot pack (F(20,222) = 4.42, P < .001). Conclusions: During a 30-minute treatment at a superficial depth, the ReBound diathermy increased tissue temperature to moderate levels, which were greater than the levels reached with moist hot packs. PMID:23855362

  1. A simple method for assessing occupational exposure via the one-way random effects model.

    PubMed

    Krishnamoorthy, K; Mathew, Thomas; Peng, Jie

    2016-11-01

    A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
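
    As a sketch of the MOVER idea for a sum of two parameters (recover limits for θ1 + θ2 from the point estimates and the individual confidence limits, in the Zou-Donner form), the following is a generic illustration rather than the paper's exposure-specific formulas; the example numbers are hypothetical:

```python
import numpy as np

def mover_sum_ci(est1, ci1, est2, ci2):
    """MOVER limits for theta1 + theta2 from the point estimates and the
    individual confidence intervals (l_i, u_i) of the two parameters."""
    (l1, u1), (l2, u2) = ci1, ci2
    point = est1 + est2
    lower = point - np.sqrt((est1 - l1)**2 + (est2 - l2)**2)
    upper = point + np.sqrt((u1 - est1)**2 + (u2 - est2)**2)
    return lower, upper

# E.g. a CI for the lognormal mean exponent mu + sigma^2/2, built from
# separate intervals for mu and for sigma^2/2 (hypothetical values).
print(mover_sum_ci(1.2, (0.9, 1.5), 0.4, (0.25, 0.65)))
```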

  2. Identification of multiple leaks in pipeline: Linearized model, maximum likelihood, and super-resolution localization

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Ghidaoui, Mohamed S.

    2018-07-01

    This paper considers the problem of identifying multiple leaks in a water-filled pipeline based on inverse transient wave theory. The analytical solution to this problem involves nonlinear interaction terms between the various leaks. This paper shows analytically and numerically that these nonlinear terms are of second order in the leak sizes and thus negligible. As a result of this simplification, a maximum likelihood (ML) scheme that identifies leak locations and leak sizes separately is formulated and tested. The ML estimation scheme is found to be highly efficient and robust with respect to noise. In addition, the ML method is a super-resolution leak localization scheme because its resolvable leak distance (approximately 0.15λmin, where λmin is the minimum wavelength) is below the Nyquist-Shannon sampling limit (0.5λmin). Moreover, the Cramér-Rao lower bound (CRLB) is derived and used to show the efficiency of the ML estimates. The variance of the ML estimator approaches the CRLB, proving that the ML scheme belongs to the class of best unbiased leak localization estimators.

  3. Measurement of damping and temperature: Precision bounds in Gaussian dissipative channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monras, Alex; Illuminati, Fabrizio

    2011-01-15

    We present a comprehensive analysis of the performance of different classes of Gaussian states in the estimation of Gaussian phase-insensitive dissipative channels. In particular, we investigate the optimal estimation of the damping constant and reservoir temperature. We show that, for two-mode squeezed vacuum probe states, the quantum-limited accuracy of both parameters can be achieved simultaneously. Moreover, we show that for both parameters two-mode squeezed vacuum states are more efficient than coherent, thermal, or single-mode squeezed states. This suggests that at high-energy regimes, two-mode squeezed vacuum states are optimal within the Gaussian setup. This optimality result indicates a stronger form of compatibility for the estimation of the two parameters. Indeed, not only can the minimum variance be achieved with a fixed probe state, but the optimal state is common to both parameters. Additionally, we explore numerically the performance of non-Gaussian states for particular parameter values and find that maximally entangled states within d-dimensional cutoff subspaces (d ≤ 6) perform better than any randomly sampled states with similar energy. However, we also find that states with very similar performance and energy exist with much less entanglement than the maximally entangled ones.

  4. Sensor networks for optimal target localization with bearings-only measurements in constrained three-dimensional scenarios.

    PubMed

    Moreno-Salinas, David; Pascoal, Antonio; Aranda, Joaquin

    2013-08-12

    In this paper, we address the problem of determining the optimal geometric configuration of an acoustic sensor network that will maximize the angle-related information available for underwater target positioning. In the set-up adopted, a set of autonomous vehicles carries a network of acoustic units that measure the elevation and azimuth angles between a target and each of the receivers on board the vehicles. It is assumed that the angle measurements are corrupted by white Gaussian noise, the variance of which is distance-dependent. Using tools from estimation theory, the problem is converted into that of minimizing, by proper choice of the sensor positions, the trace of the inverse of the Fisher Information Matrix (also called the Cramer-Rao Bound matrix) to determine the sensor configuration that yields the minimum possible covariance of any unbiased target estimator. It is shown that the optimal configuration of the sensors depends explicitly on the intensity of the measurement noise, the constraints imposed on the sensor configuration, the target depth and the probabilistic distribution that defines the prior uncertainty in the target position. Simulation examples illustrate the key results derived.

  5. Near optimal pentamodes as a tool for guiding stress while minimizing compliance in 3d-printed materials: A complete solution to the weak G-closure problem for 3d-printed materials

    NASA Astrophysics Data System (ADS)

    Milton, Graeme W.; Camar-Eddine, Mohamed

    2018-05-01

    For a composite containing one isotropic elastic material, with positive Lame moduli, and void, with the elastic material occupying a prescribed volume fraction f, and with the composite being subject to an average stress σ₀, Gibiansky, Cherkaev, and Allaire provided a sharp lower bound W_f(σ₀) on the minimum compliance energy σ₀:ε₀, in which ε₀ is the average strain. Here we show these bounds also provide sharp bounds on the possible (σ₀, ε₀)-pairs that can coexist in such composites, and thus solve the weak G-closure problem for 3d-printed materials. The materials we use to achieve the extremal (σ₀, ε₀)-pairs are denoted as near optimal pentamodes. We also consider two-phase composites containing this isotropic elastic material and a rigid phase, with the elastic material occupying a prescribed volume fraction f, and with the composite being subject to an average strain ε₀. For such composites, Allaire and Kohn provided a sharp lower bound W̃_f(ε₀) on the minimum elastic energy σ₀:ε₀. We show that these bounds also provide sharp bounds on the possible (σ₀, ε₀)-pairs that can coexist in such composites of the elastic and rigid phases, and thus solve the weak G-closure problem in this case too. The materials we use to achieve these extremal (σ₀, ε₀)-pairs are denoted as near optimal unimodes.

  6. Diallel analysis for sex-linked and maternal effects.

    PubMed

    Zhu, J; Weir, B S

    1996-01-01

    Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.

  7. Mixed model approaches for diallel analysis based on a bio-model.

    PubMed

    Zhu, J; Weir, B S

    1996-12-01

    A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which uses parameter values for the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.

  8. Minimum number of measurements for evaluating Bertholletia excelsa.

    PubMed

    Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E

    2017-09-27

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.

  9. Improving Conceptual Models Using AEM Data and Probability Distributions

    NASA Astrophysics Data System (ADS)

    Davis, A. C.; Munday, T. J.; Christensen, N. B.

    2012-12-01

    With emphasis being placed on uncertainty in groundwater modelling and prediction, coupled with questions concerning the value of geophysical methods in hydrogeology, it is important to ask meaningful questions of hydrogeophysical data and inversion results. For example, to characterise aquifers using electromagnetic (EM) data, we ask questions such as "Given that the electrical conductivity of aquifer 'A' is less than x, where is that aquifer elsewhere in the survey area?" The answer may be given by examining inversion models, selecting locations and layers that satisfy the condition 'conductivity <= x', and labelling them as aquifer 'A'. One difficulty with this approach is that the inversion model result is often considered to be the only model for the data. In reality it is just one image of the subsurface that, given the method and the regularisation imposed in the inversion, agrees with the measured data within a given error bound. We have no idea whether the final model realised by the inversion attains the global minimum error, or whether it sits in a local minimum. There is a distribution of inversion models that satisfy the error tolerance condition: the final model is not the only one, nor is it necessarily the correct one. AEM inversions are often linearised in the calculation of the parameter sensitivity: we rely on the second derivatives in the Taylor expansion, so the minimum model has all layer parameters distributed about their mean parameter values with well-defined variance. We investigate the validity of the minimum model, and its uncertainty, by examining the full posterior covariance matrix. We ask questions of the minimum model, and answer them probabilistically. The simplest question we can pose is "What is the probability that all layer resistivity values are <= a cut-off value?", which we can calculate through use of the erf or erfc functions. The covariance values of the inversion are marginalised in the integration: only the main diagonal is used. Complications arise when we ask more specific questions, such as "What is the probability that the resistivity of layer 2 is <= x, given that layer 1 is <= y?" The probability then becomes conditional, the calculation includes covariance terms, the integration is taken over many dimensions, and the cross-correlation of parameters becomes important. To illustrate, we examine the inversion results of a Tempest AEM survey over the Uley Basin aquifers in the Eyre Peninsula, South Australia. Key aquifers include the unconfined Bridgewater Formation, which overlies the Uley and Wanilla Formations containing Tertiary clays and Tertiary sandstone. These formations overlie weathered basement, which defines the lower bound of the Uley Basin aquifer systems. By correlating the conductivity of the sub-surface formation types, we pose questions such as: "What is the probability-depth of the Bridgewater Formation in the Uley South Basin?", "What is the thickness of the Uley Formation?" and "What is the most probable depth to basement?" We use these questions to generate improved conceptual hydrogeological models of the Uley Basin in order to develop better estimates of aquifer extent and the available groundwater resource.
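
    The simplest probability query described above has a direct implementation: with the posterior marginals treated as independent Gaussians (only the main diagonal of the posterior covariance used), P(all layers <= cutoff) is a product of normal CDFs. A minimal sketch, with hypothetical layer statistics:

```python
import numpy as np
from scipy.stats import norm

def prob_all_below(means, variances, cutoff):
    """P(all layer values <= cutoff) with independent Gaussian marginals,
    i.e. using only the main diagonal of the posterior covariance."""
    z = (cutoff - np.asarray(means)) / np.sqrt(np.asarray(variances))
    return float(np.prod(norm.cdf(z)))   # norm.cdf is erf/erfc based

# Hypothetical 3-layer posterior of log10 resistivity (means, variances).
print(prob_all_below([1.1, 1.8, 0.9], [0.04, 0.09, 0.01], cutoff=2.0))
```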

  10. On the design of classifiers for crop inventories

    NASA Technical Reports Server (NTRS)

    Heydorn, R. P.; Takacs, H. C.

    1986-01-01

    Crop proportion estimators that use classifications of satellite data to correct, in an additive way, a given estimate acquired from ground observations are discussed. A linear version of these estimators is optimal, in terms of minimum variance, when the regression of the ground observations onto the satellite observations is linear. When this regression is not linear, but the reverse regression (satellite observations onto ground observations) is linear, the estimator is suboptimal but still has certain appealing variance properties. In this paper expressions are derived for those regressions which relate the intercepts and slopes to conditional classification probabilities. These expressions are then used to discuss the question of classifier designs that can lead to low-variance crop proportion estimates. Variance expressions for these estimates in terms of classifier omission and commission errors are also derived.

  11. Bounding species distribution models

    USGS Publications Warehouse

    Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. © 2011 Current Zoology.

  12. Bounding Species Distribution Models

    NASA Technical Reports Server (NTRS)

    Stohlgren, Thomas J.; Jarnevich, Cahterine S.; Morisette, Jeffrey T.; Esaias, Wayne E.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].

  13. An eigenvalue localization set for tensors and its applications.

    PubMed

    Zhao, Jianxing; Sang, Caili

    2017-01-01

    A new eigenvalue localization set for tensors is given and proved to be tighter than those presented by Li et al. (Linear Algebra Appl. 481:36-53, 2015) and Huang et al. (J. Inequal. Appl. 2016:254, 2016). As an application of this set, new bounds for the minimum eigenvalue of [Formula: see text]-tensors are established and proved to be sharper than some known results. Compared with the results obtained by Huang et al., the advantage of our results is that, without considering the selection of nonempty proper subsets S of [Formula: see text], we can obtain a tighter eigenvalue localization set for tensors and sharper bounds for the minimum eigenvalue of [Formula: see text]-tensors. Finally, numerical examples are given to verify the theoretical results.

  14. Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure

    PubMed Central

    Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas

    2015-01-01

    Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014

  15. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low-density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models, and a minimum-weight decoding threshold of approximately 7%.

  16. Energy Bounds for a Compressed Elastic Film on a Substrate

    NASA Astrophysics Data System (ADS)

    Bourne, David P.; Conti, Sergio; Müller, Stefan

    2017-04-01

    We study pattern formation in a compressed elastic film which delaminates from a substrate. Our key tool is the determination of rigorous upper and lower bounds on the minimum value of a suitable energy functional. The energy consists of two parts, describing the two main physical effects. The first part represents the elastic energy of the film, which is approximated using the von Kármán plate theory. The second part represents the fracture or delamination energy, which is approximated using the Griffith model of fracture. A simpler model containing the first term alone was previously studied with similar methods by several authors, assuming that the delaminated region is fixed. We include the fracture term, transforming the elastic minimisation into a free boundary problem, and opening the way for patterns which result from the interplay of elasticity and delamination. After rescaling, the energy depends on only two parameters: the rescaled film thickness, σ, and a measure of the bonding strength between the film and substrate, γ. We prove upper bounds on the minimum energy of the form σ^a γ^b and find that there are four different parameter regimes corresponding to different values of a and b and to different folding patterns of the film. In some cases, the upper bounds are attained by self-similar folding patterns as observed in experiments. Moreover, for two of the four parameter regimes we prove matching, optimal lower bounds.

  17. Laboratory Estimates of Heritabilities and Genetic Correlations in Nature

    PubMed Central

    Riska, B.; Prout, T.; Turelli, M.

    1989-01-01

    A lower bound on heritability in a natural environment can be determined from the regression of offspring raised in the laboratory on parents raised in nature. An estimate of additive genetic variance in the laboratory is also required. The estimated lower bounds on heritabilities can sometimes be used to demonstrate a significant genetic correlation between two traits in nature, if their genetic and phenotypic correlations in nature have the same sign, and if sample sizes are large, and heritabilities and phenotypic and genetic correlations are high. PMID:2515111

  18. Minimum-variance Brownian motion control of an optically trapped probe.

    PubMed

    Huang, Yanan; Zhang, Zhipeng; Menq, Chia-Hsiang

    2009-10-20

    This paper presents a theoretical and experimental investigation of the Brownian motion control of an optically trapped probe. The Langevin equation is employed to describe the motion of the probe experiencing random thermal force and optical trapping force. Since active feedback control is applied to suppress the probe's Brownian motion, actuator dynamics and measurement delay are included in the equation. The equation of motion is simplified to a first-order linear differential equation and transformed to a discrete model for the purpose of controller design and data analysis. The derived model is experimentally verified by comparing the model prediction to the measured response of a trapped 1.87 μm probe subject to proportional control. It is then employed to design the optimal controller that minimizes the variance of the probe's Brownian motion. Theoretical analysis is derived to evaluate the control performance of a specific optical trap. Both experiment and simulation are used to validate the design as well as the theoretical analysis, and to illustrate the performance envelope of the active control. Moreover, adaptive minimum variance control is implemented to maintain optimal performance when the system is time varying, as when operating the actively controlled optical trap in a complex environment.
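
    The effect of minimum variance feedback on such a discrete first-order model can be illustrated in a few lines. Assuming the idealized model x[k+1] = a·x[k] + b·u[k] + w[k] with white thermal noise w (no actuator dynamics or measurement delay, unlike the paper's full treatment), the control u[k] = -(a/b)·x[k] cancels the predictable part of the motion and drives the position variance down from σ_w²/(1 - a²) to σ_w²:

```python
import numpy as np

# Idealized discrete probe model x[k+1] = a*x[k] + b*u[k] + w[k]:
# compare open-loop Brownian motion with minimum variance feedback.
rng = np.random.default_rng(1)
a, b, sigma_w, n = 0.95, 0.2, 1.0, 50_000
x_open = np.zeros(n)
x_closed = np.zeros(n)
for k in range(n - 1):
    w = rng.normal(scale=sigma_w)
    x_open[k+1] = a * x_open[k] + w
    u = -(a / b) * x_closed[k]            # cancels the predictable motion
    x_closed[k+1] = a * x_closed[k] + b * u + w
print(x_open.var())     # ~ sigma_w**2 / (1 - a**2) ≈ 10.3
print(x_closed.var())   # ~ sigma_w**2 = 1.0
```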

  19. Cost effective stream-gaging strategies for the Lower Colorado River basin; the Blythe field office operations

    USGS Publications Warehouse

    Moss, Marshall E.; Gilroy, Edward J.

    1980-01-01

    This report describes the theoretical developments and illustrates the applications of techniques that recently have been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost effectiveness of 19 stream gages in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gage route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget of 10% less than the current one. Other strategies are also explored. (USGS)

  20. River meanders - Theory of minimum variance

    USGS Publications Warehouse

    Langbein, Walter Basil; Leopold, Luna Bergere

    1966-01-01

    Meanders are the result of erosion-deposition processes tending toward the most stable form in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve over that in an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and thus is a more stable geometry than a straight or nonmeandering alinement.

  1. Point focusing using loudspeaker arrays from the perspective of optimal beamforming.

    PubMed

    Bai, Mingsian R; Hsieh, Yu-Hao

    2015-06-01

    Sound focusing aims to create a concentrated acoustic field in a region surrounded by a loudspeaker array. This problem was tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directivity index, and control effort. Objective tests are conducted to assess the audio quality by using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive in light of the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance.
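
    For reference, the MVDR weights themselves have a closed form: minimizing w^H R w subject to w^H a = 1 gives w = R⁻¹a / (a^H R⁻¹a). The sketch below applies this to a toy sensor-array analogy (uniform linear array, hypothetical steering vectors and covariance), not to the paper's loudspeaker source model:

```python
import numpy as np

def mvdr_weights(R, a):
    """Capon/MVDR filter: minimize w^H R w subject to w^H a = 1,
    giving w = R^{-1} a / (a^H R^{-1} a)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Toy case: 8-element half-wavelength array, focus at broadside,
# one interferer at 0.5 rad plus unit white noise.
m = np.arange(8)
a = np.exp(1j * np.pi * m * np.sin(0.0))       # steering to focus point
ai = np.exp(1j * np.pi * m * np.sin(0.5))      # steering to interferer
R = np.eye(8) + 0.5 * np.outer(ai, ai.conj())  # covariance model
w = mvdr_weights(R, a)
print(abs(w.conj() @ a))    # 1.0: distortionless at the focus point
print(abs(w.conj() @ ai))   # << 1: interferer direction suppressed
```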

  2. RFI in hybrid loops - Simulation and experimental results.

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.

    1972-01-01

    A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte Carlo simulation is used to show that the HPLL can be superior to conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.

  3. Eigenspace-based minimum variance adaptive beamformer combined with delay multiply and sum: experimental study

    NASA Astrophysics Data System (ADS)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-02-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was introduced to address the shortcomings of DAS, providing higher image quality, but its resolution improvement still falls short of eigenspace-based minimum variance (EIBMV) beamforming. In this paper, the EIBMV beamformer is combined with DMAS algebra, called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is experimentally evaluated, and the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.
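
    The DMAS combination rule is easy to state: after time-aligning the channel signals, sum the signed square roots of all pairwise products, which keeps the output on the dimensionality of the input signal. A minimal sketch on already-delayed synthetic channels (the delays, apodization, and the EIBMV stage of the paper are omitted):

```python
import numpy as np

def das(delayed):
    """Delay-and-sum: plain sum over the time-aligned channels."""
    return delayed.sum(axis=0)

def dmas(delayed):
    """Delay-multiply-and-sum: sum of signed square roots of all pairwise
    channel products (signed roots preserve sign and dimensionality)."""
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))
    m, n = delayed.shape
    out = np.zeros(n)
    for i in range(m - 1):
        for j in range(i + 1, m):
            out += s[i] * s[j]
    return out

# 16 time-aligned channels carrying the same echo plus independent noise.
rng = np.random.default_rng(2)
echo = np.sin(2 * np.pi * 5 * np.linspace(0.0, 1.0, 500))
channels = echo + 0.5 * rng.normal(size=(16, 500))
print(das(channels).shape, dmas(channels).shape)   # both (500,)
```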

  4. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    PubMed

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum variance filter for Mars entry navigation. The filter solves this problem by estimating the state and the unknown measurement biases simultaneously, in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability for high-precision Mars entry navigation. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Charged particle tracking at Titan, and further applications

    NASA Astrophysics Data System (ADS)

    Bebesi, Zsofia; Erdos, Geza; Szego, Karoly

    2016-04-01

    We use the CAPS ion data of Cassini to investigate the dynamics and origin of Titan's atmospheric ions. We developed a 4th order Runge-Kutta method to calculate particle trajectories in a time reversed scenario. The test particle magnetic field environment imitates the curved magnetic environment in the vicinity of Titan. The minimum variance directions along the S/C trajectory have been calculated for all available Titan flybys, and we assumed a homogeneous field that is perpendicular to the minimum variance direction. Using this method the magnetic field lines have been calculated along the flyby orbits so we could select those observational intervals when Cassini and the upper atmosphere of Titan were magnetically connected. We have also taken the Kronian magnetodisc into consideration, and used different upstream magnetic field approximations depending on whether Titan was located inside of the magnetodisc current sheet, or in the lobe regions. We also discuss the code's applicability to comets.

  6. Microstructure of the IMF turbulences at 2.5 AU

    NASA Technical Reports Server (NTRS)

    Mavromichalaki, H.; Vassilaki, A.; Marmatsouri, L.; Moussas, X.; Quenby, J. J.; Smith, E. J.

    1995-01-01

    A detailed analysis of small-period (15-900 sec) magnetohydrodynamic (MHD) turbulence of the interplanetary magnetic field (IMF) has been made using Pioneer-11 high time resolution data (0.75 sec) inside a Corotating Interaction Region (CIR) at a heliocentric distance of 2.5 AU in 1973. The methods used are hodogram analysis, minimum variance matrix analysis and coherence analysis. The minimum variance analysis gives evidence of linearly polarized wave modes. Coherence analysis has shown that the field fluctuations are dominated by fast magnetosonic modes with periods of 15 sec to 15 min. However, it is also shown that some small-amplitude Alfvén waves are present in the trailing edge of this region with characteristic periods of 15-200 sec. The observed wave modes are locally generated and possibly attributed to the scattering of Alfvén wave energy into random magnetosonic waves.

  7. Optical tomographic detection of rheumatoid arthritis with computer-aided classification schemes

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.; Klose, Alexander D.; Netz, Uwe; Beuthan, Jürgen; Hielscher, Andreas H.

    2009-02-01

    A recent research study has shown that combining multiple parameters drawn from optical tomographic images leads to better classification results for identifying human finger joints that are or are not affected by rheumatoid arthritis (RA). Building on the findings of the previous study, this article presents an advanced computer-aided classification approach for interpreting optical image data to detect RA in finger joints. Additional data are used, including, for example, maximum and minimum values of the absorption coefficient as well as their ratios and image variances. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index and area under the curve (AUC). Results were compared to different benchmarks ("gold standards"): magnetic resonance, ultrasound and clinical evaluation. Maximum accuracies (AUC = 0.88) were reached when combining minimum/maximum ratios and image variances and using ultrasound as the gold standard.

  8. Does space-time torsion determine the minimum mass of gravitating particles?

    NASA Astrophysics Data System (ADS)

    Böhmer, Christian G.; Burikham, Piyabut; Harko, Tiberiu; Lake, Matthew J.

    2018-03-01

    We derive upper and lower limits for the mass-radius ratio of spin-fluid spheres in Einstein-Cartan theory, with matter satisfying a linear barotropic equation of state, and in the presence of a cosmological constant. Adopting a spherically symmetric interior geometry, we obtain the generalized continuity and Tolman-Oppenheimer-Volkoff equations for a Weyssenhoff spin fluid in hydrostatic equilibrium, expressed in terms of the effective mass, density and pressure, all of which contain additional contributions from the spin. The generalized Buchdahl inequality, which remains valid at any point in the interior, is obtained, and general theoretical limits for the maximum and minimum mass-radius ratios are derived. As an application of our results we obtain gravitational red shift bounds for compact spin-fluid objects, which may (in principle) be used for observational tests of Einstein-Cartan theory in an astrophysical context. We also briefly consider applications of the torsion-induced minimum mass to the spin-generalized strong gravity model for baryons/mesons, and show that the existence of quantum spin imposes a lower bound for spinning particles, which almost exactly reproduces the electron mass.

  9. Does space-time torsion determine the minimum mass of gravitating particles?

    PubMed

    Böhmer, Christian G; Burikham, Piyabut; Harko, Tiberiu; Lake, Matthew J

    2018-01-01

    We derive upper and lower limits for the mass-radius ratio of spin-fluid spheres in Einstein-Cartan theory, with matter satisfying a linear barotropic equation of state, and in the presence of a cosmological constant. Adopting a spherically symmetric interior geometry, we obtain the generalized continuity and Tolman-Oppenheimer-Volkoff equations for a Weyssenhoff spin fluid in hydrostatic equilibrium, expressed in terms of the effective mass, density and pressure, all of which contain additional contributions from the spin. The generalized Buchdahl inequality, which remains valid at any point in the interior, is obtained, and general theoretical limits for the maximum and minimum mass-radius ratios are derived. As an application of our results we obtain gravitational red shift bounds for compact spin-fluid objects, which may (in principle) be used for observational tests of Einstein-Cartan theory in an astrophysical context. We also briefly consider applications of the torsion-induced minimum mass to the spin-generalized strong gravity model for baryons/mesons, and show that the existence of quantum spin imposes a lower bound for spinning particles, which almost exactly reproduces the electron mass.

  10. Numerical bias in bounded and unbounded number line tasks.

    PubMed

    Cohen, Dale J; Blanc-Goldhammer, Daryn

    2011-04-01

    The number line task is often used to assess children's and adults' underlying representations of integers. Traditional bounded number line tasks, however, have limitations that can lead to misinterpretation. Here we present a new task, an unbounded number line task, that overcomes these limitations. In Experiment 1, we show that adults use a biased proportion estimation strategy to complete the traditional bounded number line task. In Experiment 2, we show that adults use a dead-reckoning integer estimation strategy in our unbounded number line task. Participants revealed a positively accelerating numerical bias in both tasks, but showed scalar variance only in the unbounded number line task. We conclude that the unbounded number line task is a more pure measure of integer representation than the bounded number line task, and using these results, we present a preliminary description of adults' underlying representation of integers.

  11. Computer search for binary cyclic UEP codes of odd length up to 65

    NASA Technical Reports Server (NTRS)

    Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu

    1990-01-01

    Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.

  12. Schmidt-number witnesses and bound entanglement

    NASA Astrophysics Data System (ADS)

    Sanpera, Anna; Bruß, Dagmar; Lewenstein, Maciej

    2001-05-01

    The Schmidt number of a mixed state characterizes the minimum Schmidt rank of the pure states needed to construct it. We investigate the Schmidt number of an arbitrary mixed state by studying Schmidt-number witnesses that detect it. We present a canonical form of such witnesses and provide constructive methods for their optimization. Finally, we present strong evidence that all bound entangled states with positive partial transpose in C3⊗C3 have Schmidt number 2.

  13. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W., Jr.

    2003-01-01

    A simple power law model consisting of a single spectral index, σ₁, is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy, E_k, to a steeper spectral index σ₂ > σ₁ above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter σ₁ of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectral information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRBs for both the simple and broken power law energy spectra are derived herein, and the conditions under which they are attained in practice are investigated.
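
    For the simple power law case the ML estimator and its CRB are classical results: for f(E) ∝ E^(-γ) with E ≥ E_min, the MLE is γ̂ = 1 + n / Σ ln(E_i/E_min) and the CRB standard deviation is (γ - 1)/√n. A sketch that checks this on simulated data (the broken power law case of the paper is not covered):

```python
import numpy as np

def power_law_mle(E, E_min):
    """MLE of gamma for f(E) ~ E**(-gamma), E >= E_min, and the
    Cramer-Rao standard deviation (gamma - 1)/sqrt(n)."""
    E = np.asarray(E)
    n = E.size
    gamma_hat = 1.0 + n / np.log(E / E_min).sum()
    return gamma_hat, (gamma_hat - 1.0) / np.sqrt(n)

# Inverse-CDF sampling from a pure power law: E = E_min * U**(-1/(gamma-1)).
rng = np.random.default_rng(3)
gamma_true, E_min, n = 2.7, 1.0, 100_000
E = E_min * rng.uniform(size=n) ** (-1.0 / (gamma_true - 1.0))
print(power_law_mle(E, E_min))   # ~ (2.7, 0.0054): consistent, CRB-limited
```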

  14. Theoretical study of network design methodologies for the aerial relay system. [energy consumption and air traffic control

    NASA Technical Reports Server (NTRS)

    Rivera, J. M.; Simpson, R. W.

    1980-01-01

    The aerial relay system network design problem is discussed. A generalized branch-and-bound based algorithm is developed which can consider a variety of optimization criteria, such as minimum passenger travel time and minimum liner and feeder operating costs. The algorithm, although efficient, is practical only for small networks, since its computation time grows exponentially with the number of variables.

  15. Some Conceptual Deficiencies in "Developmental" Behavior Genetics.

    ERIC Educational Resources Information Center

    Gottlieb, Gilbert

    1995-01-01

    Criticizes the application of the statistical procedures of the population-genetic approach within evolutionary biology to the study of psychological development. Argues that the application of the statistical methods of population genetics--primarily the analysis of variance--to the causes of psychological development is bound to result in a…

  16. The isolation limits of stochastic vibration

    NASA Technical Reports Server (NTRS)

    Knopse, C. R.; Allaire, P. E.

    1993-01-01

    The vibration isolation problem is formulated as a 1D kinematic problem. The geometry of the stochastic wall trajectories arising from the stroke constraint is defined in terms of their significant extrema. An optimal control solution for the minimum acceleration return path determines a lower bound on platform mean square acceleration. This bound is expressed in terms of the probability density function on the significant maxima and the conditional fourth moment of the first passage time inverse. The first of these is found analytically while the second is found using a Monte Carlo simulation. The rms acceleration lower bound as a function of available space is then determined through numerical quadrature.

  17. Work extraction from quantum systems with bounded fluctuations in work.

    PubMed

    Richens, Jonathan G; Masanes, Lluis

    2016-11-25

    In the standard framework of thermodynamics, work is a random variable whose average is bounded by the change in free energy of the system. This average work is calculated without regard for the size of its fluctuations. Here we show that for some processes, such as reversible cooling, the fluctuations in work diverge. Realistic thermal machines may be unable to cope with arbitrarily large fluctuations. Hence, it is important to understand how thermodynamic efficiency rates are modified by bounding fluctuations. We quantify the work content and work of formation of arbitrary finite dimensional quantum states when the fluctuations in work are bounded by a given amount c. By varying c we interpolate between the standard and minimum free energies. We derive fundamental trade-offs between the magnitude of work and its fluctuations. As one application of these results, we derive the corrected Carnot efficiency of a qubit heat engine with bounded fluctuations.

  18. Work extraction from quantum systems with bounded fluctuations in work

    PubMed Central

    Richens, Jonathan G.; Masanes, Lluis

    2016-01-01

    In the standard framework of thermodynamics, work is a random variable whose average is bounded by the change in free energy of the system. This average work is calculated without regard for the size of its fluctuations. Here we show that for some processes, such as reversible cooling, the fluctuations in work diverge. Realistic thermal machines may be unable to cope with arbitrarily large fluctuations. Hence, it is important to understand how thermodynamic efficiency rates are modified by bounding fluctuations. We quantify the work content and work of formation of arbitrary finite dimensional quantum states when the fluctuations in work are bounded by a given amount c. By varying c we interpolate between the standard and minimum free energies. We derive fundamental trade-offs between the magnitude of work and its fluctuations. As one application of these results, we derive the corrected Carnot efficiency of a qubit heat engine with bounded fluctuations. PMID:27886177

  19. Work extraction from quantum systems with bounded fluctuations in work

    NASA Astrophysics Data System (ADS)

    Richens, Jonathan G.; Masanes, Lluis

    2016-11-01

    In the standard framework of thermodynamics, work is a random variable whose average is bounded by the change in free energy of the system. This average work is calculated without regard for the size of its fluctuations. Here we show that for some processes, such as reversible cooling, the fluctuations in work diverge. Realistic thermal machines may be unable to cope with arbitrarily large fluctuations. Hence, it is important to understand how thermodynamic efficiency rates are modified by bounding fluctuations. We quantify the work content and work of formation of arbitrary finite dimensional quantum states when the fluctuations in work are bounded by a given amount c. By varying c we interpolate between the standard and minimum free energies. We derive fundamental trade-offs between the magnitude of work and its fluctuations. As one application of these results, we derive the corrected Carnot efficiency of a qubit heat engine with bounded fluctuations.

  20. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.

  1. Energy bounds in designer gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amsel, Aaron J.; Marolf, Donald

    We consider asymptotically anti-de Sitter gravity coupled to tachyonic scalar fields with mass at or slightly above the Breitenlohner-Freedman bound in d ≥ 4 spacetime dimensions. The boundary conditions in these "designer gravity" theories are defined in terms of an arbitrary function W. We give a general argument that the Hamiltonian generators of asymptotic symmetries for such systems will be finite, and proceed to construct these generators using the covariant phase space method. The direct calculation confirms that the generators are finite and shows that they take the form of the pure gravity result plus additional contributions from the scalar fields. By comparing the generators to the spinor charge, we derive a lower bound on the gravitational energy when W has a global minimum and the Breitenlohner-Freedman bound is not saturated.

  2. Recurrence quantification analysis of extremes of maximum and minimum temperature patterns for different climate scenarios in the Mesochora catchment in Central-Western Greece

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Vlahogianni, Eleni I.

    2018-06-01

    A methodological framework based on nonlinear recurrence analysis is proposed to examine the evolution of extremes of maximum and minimum daily mean areal temperature patterns over time under different climate scenarios. The methodology is based on both historical data and climate scenarios produced by an atmospheric General Circulation Model (GCM) for the periods 1961-2000 and 2061-2100, which correspond to 1 × CO2 and 2 × CO2 scenarios. Historical data were derived from the actual daily observations coupled with atmospheric circulation patterns (CPs). The dynamics of the temperature was reconstructed in phase space from the time series of temperatures. The statistical comparison of different temperature patterns was based on discriminating statistics obtained by Recurrence Quantification Analysis (RQA). Moreover, the bootstrap method of Schinkel et al. (2009) was adopted to calculate the confidence bounds of the RQA parameters based on structure-preserving resampling. The overall methodology was applied to the mountainous Mesochora catchment in Central-Western Greece. The results reveal substantial similarities between the historical maximum and minimum daily mean areal temperature statistical patterns and their confidence bounds, as well as between the maximum and minimum temperature patterns in evolution under the 2 × CO2 scenario. Significant variability and non-stationary behaviour characterize all the climate series analyzed. Fundamental differences are found between the historical and maximum 1 × CO2 scenarios, the maximum 1 × CO2 and minimum 1 × CO2 scenarios, as well as the confidence bounds for the two CO2 scenarios. The 2 × CO2 scenario reflects the strongest shifts in the intensity, duration and frequency of temperature patterns. Such transitions can help scientists and policy makers understand the effects of extreme temperature changes on water resources, economic development, and the health of ecosystems, and hence proceed to effective proactive management of extreme phenomena. The implications of the findings for the predictability of extreme daily mean areal temperature patterns are also discussed.
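
    Two of the standard RQA discriminating statistics, recurrence rate (RR) and determinism (DET), can be computed directly from a thresholded distance matrix of the embedded series. A minimal sketch, with hypothetical embedding and threshold parameters and with the main diagonal retained for brevity (production RQA implementations usually exclude it):

```python
import numpy as np

def rqa_measures(x, dim=3, delay=1, eps=0.5, lmin=2):
    """Recurrence rate and determinism from a time-delay embedding of x.
    The main diagonal is kept for brevity, which inflates DET slightly."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i*delay : i*delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    R = (dists <= eps).astype(int)
    rr = R.mean()                       # recurrence rate
    det_points, all_points = 0, R.sum()
    for off in range(-(n - 1), n):      # scan every diagonal of R
        d = np.diagonal(R, offset=off)
        # run lengths of consecutive recurrent points on this diagonal
        runs = np.diff(np.flatnonzero(np.diff(np.r_[0, d, 0])))[::2]
        det_points += runs[runs >= lmin].sum()
    return rr, det_points / all_points  # (RR, DET)

x = np.sin(np.linspace(0.0, 20*np.pi, 400))   # periodic: DET close to 1
print(rqa_measures(x))
```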

  3. Combining the Hanning windowed interpolated FFT in both directions

    NASA Astrophysics Data System (ADS)

    Chen, Kui Fu; Li, Yan Feng

    2008-06-01

    The interpolated fast Fourier transform (IFFT) has been proposed as a way to eliminate the picket fence effect (PFE) of the fast Fourier transform. The modulus-based IFFT, cited in most relevant references, makes use of only the 1st and 2nd highest spectral lines. An approach using three principal spectral lines is proposed. This new approach combines both directions of the complex-spectrum-based IFFT with the Hanning window. The optimal weight to minimize the estimation variance is established from a first-order Taylor series expansion of the noise interference. A numerical simulation is carried out, and the results are compared with the Cramer-Rao bound. It is demonstrated that the proposed approach has a lower estimation variance than the two-spectral-line approach. The improvement depends on how far the sampling deviates from the coherent condition; at best, the variance is reduced by 2/7. However, it is also shown that the estimation variance of the Hanning-windowed IFFT is significantly higher than that without windowing.
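
    A numpy sketch of Hanning-windowed FFT interpolation, assuming the standard two-line formula δ = (2α − 1)/(α + 1) with α the ratio of the two highest spectral lines; the paper's three-line, two-direction weighting is not reproduced here.

```python
import numpy as np

fs, n = 1000.0, 1024
f_true = 123.4                      # deliberately off-bin, so the PFE appears
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_true * t)

X = np.abs(np.fft.rfft(x * np.hanning(n)))
k = int(np.argmax(X[1:-1])) + 1     # index of the highest spectral line

# Interpolate toward the larger of the two neighbouring lines.
side = 1 if X[k + 1] >= X[k - 1] else -1
alpha = X[k + side] / X[k]
delta = side * (2 * alpha - 1) / (alpha + 1)   # fractional bin offset

f_est = (k + delta) * fs / n
print(f"peak bin: {k * fs / n:.2f} Hz   interpolated: {f_est:.3f} Hz")
```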

  4. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

    Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.

  5. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable in cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test achieves statistical power generally superior to that of the approximate t test. A numerical example is provided.
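
    For concreteness, a sketch of Yuen's statistic itself in Python (the textbook form with 20% trimming, winsorized variances, and Welch-type degrees of freedom); the paper's sample-size formulas build on these quantities but are not reproduced here.

```python
import numpy as np
from scipy import stats

def yuen_test(x, y, trim=0.2):
    """Yuen's two-sample trimmed-mean test (textbook form)."""
    parts = []
    for a in (np.sort(np.asarray(x)), np.sort(np.asarray(y))):
        n = len(a)
        g = int(np.floor(trim * n))
        h = n - 2 * g                         # effective size after trimming
        tmean = a[g:n - g].mean()             # trimmed mean
        w = a.copy()
        w[:g], w[n - g:] = a[g], a[n - g - 1] # winsorize the tails
        d = (n - 1) * w.var(ddof=1) / (h * (h - 1))
        parts.append((tmean, d, h))
    (m1, d1, h1), (m2, d2, h2) = parts
    t = (m1 - m2) / np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1**2 / (h1 - 1) + d2**2 / (h2 - 1))
    return t, df, 2 * stats.t.sf(abs(t), df)

rng = np.random.default_rng(1)
print(yuen_test(rng.normal(0, 1, 30), rng.normal(0.8, 3, 20)))
```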

  6. Software for the grouped optimal aggregation technique

    NASA Technical Reports Server (NTRS)

    Brown, P. M.; Shaw, G. W. (Principal Investigator)

    1982-01-01

    The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
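
    The statistical core of such aggregation is the classical minimum-variance combination of unbiased estimates, with weights inversely proportional to their variances. A toy numpy illustration with hypothetical stratum-group values, not the software's actual weighting matrix:

```python
import numpy as np

# Hypothetical unbiased acreage estimates for three stratum groups and
# their sampling variances.
est = np.array([10.2, 9.5, 11.0])
var = np.array([0.9, 0.4, 1.6])

# Inverse-variance weights give the minimum-variance unbiased combination.
w = (1.0 / var) / (1.0 / var).sum()
combined = w @ est
combined_var = 1.0 / (1.0 / var).sum()
print(f"combined estimate: {combined:.2f}, variance: {combined_var:.3f}")
```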

  7. Alternate methods for FAAT S-curve generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaufman, A.M.

    The FAAT (Foreign Asset Assessment Team) assessment methodology attempts to derive a probability of effect as a function of incident field strength. The probability of effect is the likelihood that the stress put on a system exceeds its strength. In the FAAT methodology, both the stress and strength are random variables whose statistical properties are estimated by experts. Each random variable has two components of uncertainty: systematic and random. The systematic uncertainty drives the confidence bounds in the FAAT assessment. Its variance can be reduced by improved information. The variance of the random uncertainty is not reducible. The FAAT methodologymore » uses an assessment code called ARES to generate probability of effect curves (S-curves) at various confidence levels. ARES assumes log normal distributions for all random variables. The S-curves themselves are log normal cumulants associated with the random portion of the uncertainty. The placement of the S-curves depends on confidence bounds. The systematic uncertainty in both stress and strength is usually described by a mode and an upper and lower variance. Such a description is not consistent with the log normal assumption of ARES and an unsatisfactory work around solution is used to obtain the required placement of the S-curves at each confidence level. We have looked into this situation and have found that significant errors are introduced by this work around. These errors are at least several dB-W/cm{sup 2} at all confidence levels, but they are especially bad in the estimate of the median. In this paper, we suggest two alternate solutions for the placement of S-curves. To compare these calculational methods, we have tabulated the common combinations of upper and lower variances and generated the relevant S-curves offsets from the mode difference of stress and strength.« less

  8. The control gain region for synchronization in non-diffusively coupled complex networks

    NASA Astrophysics Data System (ADS)

    Gequn, Liu; Wenhui, Li; Huijie, Yang; Knowles, Gareth

    2014-07-01

    The control gain region for synchronization of non-diffusively coupled networks was studied with respect to three conditions: synchronization, synchronization in finite time, and synchronization in the minimum time. Based on cancellation control methodology and master stability function formalism, we found that a complete feasible control gain region may be bounded, unbounded, empty or a union of several bounded and unbounded regions, with a similar shape to the synchronized region. An interesting possibility emerged that a network could be synchronized by both negative and positive feedback control simultaneously. By bridging synchronizability and synchronizing response speeds with a settling time index, we have developed timed synchronized region (TSR) as a substitute for the classical synchronized region to study finite time synchronization. As for the last condition, a graphical method was developed to estimate control gain with the minimum synchronization time (CGMST). Each condition has examples provided for illustration and verification.

  9. A Note on the Kirchhoff and Additive Degree-Kirchhoff Indices of Graphs

    NASA Astrophysics Data System (ADS)

    Yang, Yujun; Klein, Douglas J.

    2015-06-01

    Two resistance-distance-based graph invariants, namely, the Kirchhoff index and the additive degree-Kirchhoff index, are studied. A relation between them is established, with inequalities for the additive degree-Kirchhoff index arising via the Kirchhoff index along with minimum, maximum, and average degrees. Bounds for the Kirchhoff and additive degree-Kirchhoff indices are also determined, and extremal graphs are characterised. In addition, an upper bound for the additive degree-Kirchhoff index is established to improve a previously known result.
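
    The Kirchhoff index is easy to compute from the Laplacian spectrum via the standard identity Kf(G) = n Σ 1/μ_i over the nonzero eigenvalues, a convenient way to sanity-check bounds of the kind derived here. A small numpy sketch:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) = n * sum(1/mu_i) over nonzero Laplacian eigenvalues
    (valid for connected graphs)."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    mu = np.linalg.eigvalsh(lap)
    return adj.shape[0] * np.sum(1.0 / mu[mu > 1e-9])

# 4-cycle C4: adjacent pairs have effective resistance 3/4, opposite pairs 1,
# so Kf = 4*(3/4) + 2*1 = 5.
c4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(kirchhoff_index(c4))   # 5.0
```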
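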

  10. Transferred nuclear Overhauser enhancement experiments show that the monoclonal antibody strep 9 selects a local minimum conformation of a Streptococcus group A trisaccharide-hapten.

    PubMed

    Weimar, T; Harris, S L; Pitner, J B; Bock, K; Pinto, B M

    1995-10-17

    Transferred nuclear Overhauser enhancement (TRNOE) experiments have been performed to investigate the bound conformation of the trisaccharide repeating unit of the Streptococcus Group A cell-wall polysaccharide. Thus, the conformations of propyl 3-O-(2-acetamido-2-deoxy-beta-D-glucopyranosyl)-2-O-(alpha-L-rhamnopyranosyl)-alpha-L-rhamnopyranoside [C(A')B] (1) as a free ligand and when complexed to the monoclonal antibody Strep 9 were examined. Improved insights into the conformational preferences of the glycosidic linkages of the trisaccharide ligand showed that the free ligand populates various conformations in aqueous solution, thus displaying relatively flexible behavior. The NOE HNAc-H2A', which was not detected in previous work, accounts for a conformation at the beta-(1-->3) linkage with a phi angle of approximately 180 degrees. Observed TRNOEs for the complex are weak, and their analysis was further complicated by spin diffusion. With the use of transferred rotating-frame Overhauser enhancement (TRROE) experiments, the amount of spin diffusion was assessed experimentally, proving that all of the observed long-range TRNOEs arose through spin diffusion. Four interglycosidic distances, derived from the remaining TRNOEs and TRROEs, together with repulsive constraints, derived from the absence of TRROE effects, were used as input parameters in simulated annealing and molecular mechanics calculations to determine the bound conformation of the trisaccharide. Complexation by the antibody results in the selection of one defined conformation of the carbohydrate hapten. This bound conformation, which is a local energy minimum on the energy maps calculated for the trisaccharide ligand, shows only a change from a +gauche to a -gauche orientation at the psi angle of the alpha-(1-->2) linkage when compared to the global minimum conformation. The results imply that the bound conformation of the Streptococcus Group A cell-wall polysaccharide is different from its previously proposed solution structure (Kreis et al., 1995).

  11. Turbulence Variance Characteristics in the Unstable Atmospheric Boundary Layer above Flat Pine Forest

    NASA Astrophysics Data System (ADS)

    Asanuma, Jun

    Variances of the velocity components and scalars are important as indicators of turbulence intensity. They can also be utilized to estimate surface fluxes in several types of "variance methods", and the estimated fluxes can be regional values if the variances from which they are calculated are regionally representative measurements. With these motivations, variances measured by an aircraft in the unstable ABL over a flat pine forest during HAPEX-Mobilhy were analyzed within the context of similarity scaling arguments. The variances of temperature and vertical velocity within the atmospheric surface layer were found to follow the Monin-Obukhov similarity (MOS) theory closely, and to yield reasonable estimates of the surface sensible heat fluxes when used in variance methods. This validates the variance methods with aircraft measurements. On the other hand, the specific humidity variances were influenced by the surface heterogeneity and clearly failed to obey MOS. A simple analysis based on the similarity law for free convection produced a comprehensible and quantitative picture of the effect of surface flux heterogeneity on the statistical moments, and revealed that variances of active and passive scalars become dissimilar because of their different roles in turbulence. The analysis also indicated that the mean quantities are affected by the heterogeneity as well, but to a lesser extent than the variances. The temperature variances in the mixed layer (ML) were examined by using a generalized top-down bottom-up diffusion model with several combinations of velocity scales and inversion flux models. The results showed that the surface shear stress exerts considerable influence on the lower ML. ML variance methods were also tested with the temperature and vertical velocity variances, and their feasibility was investigated. Finally, the variances in the ML were analyzed in terms of the local similarity concept; the results confirmed the original hypothesis of Panofsky and McCormick that local scaling in terms of the local buoyancy flux defines the lower bound of the moments.

  12. A Random Forest Approach to Predict the Spatial Distribution ...

    EPA Pesticide Factsheets

    Modeling the magnitude and distribution of sediment-bound pollutants in estuaries is often limited by incomplete knowledge of the site and inadequate sample density. To address these modeling limitations, a decision-support tool framework was conceived that predicts sediment contamination from the sub-estuary to broader estuary extent. For this study, a Random Forest (RF) model was implemented to predict the distribution of a model contaminant, triclosan (5-chloro-2-(2,4-dichlorophenoxy)phenol) (TCS), in Narragansett Bay, Rhode Island, USA. TCS is an unregulated contaminant used in many personal care products. The RF explanatory variables were associated with TCS transport and fate (proxies) and direct and indirect environmental entry. The continuous RF TCS concentration predictions were discretized into three levels of contamination (low, medium, and high) for three different quantile thresholds. The RF model explained 63% of the variance with a minimum number of variables. Total organic carbon (TOC) (transport and fate proxy) was a strong predictor of TCS contamination causing a mean squared error increase of 59% when compared to permutations of randomized values of TOC. Additionally, combined sewer overflow discharge (environmental entry) and sand (transport and fate proxy) were strong predictors. The discretization models identified a TCS area of greatest concern in the northern reach of Narragansett Bay (Providence River sub-estuary), which was validated wi
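
    A hedged scikit-learn sketch of the pipeline this entry describes (Random Forest regression, permutation importance, quantile discretization) on synthetic data; the feature names are stand-ins for the study's proxies, not its actual variables:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
names = ["TOC", "sand", "CSO_dist", "depth"]        # hypothetical proxies
X = rng.random((n, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.3 * rng.standard_normal(n)  # synthetic TCS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(rf.score(X_te, y_te), 3))

# Permutation importance: the score drop when one predictor is shuffled,
# the same idea behind the reported MSE increase for TOC.
imp = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)
for name, m in zip(names, imp.importances_mean):
    print(name, round(m, 3))

# Discretize continuous predictions into low/medium/high by tercile.
levels = np.digitize(rf.predict(X_te), np.quantile(y_tr, [1/3, 2/3]))
```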

  13. Dynamic burstiness of word-occurrence and network modularity in textbook systems

    NASA Astrophysics Data System (ADS)

    Cui, Xue-Mei; Yoon, Chang No; Youn, Hyejin; Lee, Sang Hoon; Jung, Jean S.; Han, Seung Kee

    2017-12-01

    We show that the dynamic burstiness of word occurrence in textbook systems is attributable to the modularity of the word association networks. First, a measure of dynamic burstiness is introduced to quantify the burstiness of word occurrence in a textbook. The advantage of this measure is that the dynamic burstiness is decomposable into two contributions: one coming from the inter-event variance and the other from memory effects. Comparing the network structures of physics textbook systems with those of surrogate random textbooks in which the memory or variance effects are absent, we show that the network modularity increases systematically with the dynamic burstiness. The intra-connectivity of an individual word, representing the strength of the tie with which a node is bound to a module, accordingly increases with the dynamic burstiness, suggesting that individual words with high burstiness are strongly bound to one module. Based on frequency and dynamic burstiness, physics terminology is classified into four categories: fundamental words, topical words, special words, and common words. In addition, we test the correlation between the dynamic burstiness of word occurrence and network modularity using a two-state model of burst generation.

  14. Source-space ICA for MEG source imaging.

    PubMed

    Jonmohamadi, Yaqub; Jones, Richard D

    2016-02-01

    One of the most widely used approaches in electroencephalography/magnetoencephalography (EEG/MEG) source imaging is the application of an inverse technique (such as dipole modelling or sLORETA) on the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer high spatial resolution. However, sensor-space ICA + beamformer is not an ideal combination for obtaining both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA both in simulation and in real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in the spatial reconstruction of source maps, even though both techniques performed equally well from a temporal perspective. Real MEG data from two healthy subjects with visual stimuli were also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called the weight-normalized linearly-constrained minimum-variance beamformer with orthonormal lead field. As sensor-space-ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
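
    For reference, a numpy sketch of textbook unit-gain LCMV beamformer weights, the building block that source-space ICA applies before SVD + ICA; the weight-normalized, orthonormal-lead-field variant proposed in the paper is not reproduced here:

```python
import numpy as np

def lcmv_weights(R, L):
    """Unit-gain LCMV beamformer weights for one source orientation:
    w = R^{-1} L / (L^T R^{-1} L), with sensor covariance R and
    lead-field column L."""
    Ri_L = np.linalg.solve(R, L)
    return Ri_L / (L.T @ Ri_L)

rng = np.random.default_rng(0)
n_sensors, n_samples = 32, 5000
L = rng.standard_normal((n_sensors, 1))              # hypothetical lead field
s = np.sin(np.linspace(0, 40 * np.pi, n_samples))    # source time course
data = L * s + 0.5 * rng.standard_normal((n_sensors, n_samples))

R = np.cov(data)
w = lcmv_weights(R, L)
print("gain at source:", float(w.T @ L))             # ~1 by construction
recovered = (w.T @ data).ravel()                     # estimated source activity
```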

  15. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

    PubMed Central

    Wang, Di; Kleinberg, Robert D.

    2009-01-01

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}^n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596

  16. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.

    PubMed

    Wang, Di; Kleinberg, Robert D

    2009-11-28

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}^n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.

  17. Uncertainty, imprecision, and the precautionary principle in climate change assessment.

    PubMed

    Borsuk, M E; Tomassini, L

    2005-01-01

    Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.

  18. Pseudo paths towards minimum energy states in network dynamics

    NASA Astrophysics Data System (ADS)

    Hedayatifar, L.; Hassanibesheli, F.; Shirazi, A. H.; Vasheghani Farahani, S.; Jafari, G. R.

    2017-10-01

    The dynamics of networks forming under Heider balance theory moves towards lower-tension states. The condition derived from this theory forces agents to reevaluate and modify their interactions to achieve equilibrium. These possible changes in a network's topology can be considered as various paths that guide systems to minimum energy states. Based on this theory, the final destination of a system could reside at a local minimum energy, a "jammed state", or at the global minimum energy, a balanced state. The question we would like to address is whether jammed states appear just by chance, or whether there exist pseudo paths that bind a system towards a jammed state. We introduce an indicator for suspecting the location of a jammed state based on the Inverse Participation Ratio (IPR) method. We identify a margin before a local minimum where the number of possible paths drastically decreases; this condition proves adequate for ending up in a jammed state.

  19. Design of a compensation for an ARMA model of a discrete time system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Mainemer, C. I.

    1978-01-01

    The design of an optimal dynamic compensator for a multivariable discrete time system is studied. Also the design of compensators to achieve minimum variance control strategies for single input single output systems is analyzed. In the first problem the initial conditions of the plant are random variables with known first and second order moments, and the cost is the expected value of the standard cost, quadratic in the states and controls. The compensator is based on the minimum order Luenberger observer and it is found optimally by minimizing a performance index. Necessary and sufficient conditions for optimality of the compensator are derived. The second problem is solved in three different ways; two of them working directly in the frequency domain and one working in the time domain. The first and second order moments of the initial conditions are irrelevant to the solution. Necessary and sufficient conditions are derived for the compensator to minimize the variance of the output.

  20. Precision of proportion estimation with binary compressed Raman spectrum.

    PubMed

    Réfrégier, Philippe; Scotté, Camille; de Aguiar, Hilton B; Rigneault, Hervé; Galland, Frédéric

    2018-01-01

    The precision of proportion estimation with binary filtering of a Raman spectrum mixture is analyzed when the number of binary filters equals the number of species present and the measurements are corrupted by Poisson photon noise. It is shown that the Cramer-Rao bound provides a useful methodology for analyzing the performance of such an approach, in particular when the binary filters are orthogonal. It is demonstrated that a simple linear mean square error estimation method is efficient (i.e., has a variance equal to the Cramer-Rao bound). The evolution of the Cramer-Rao bound is analyzed when the measuring times are optimized or when the proportion assumed for binary filter synthesis is not optimal. Two strategies for the appropriate choice of this assumed proportion in binary filter synthesis are also analyzed.

  1. Efficient Regressions via Optimally Combining Quantile Information*

    PubMed Central

    Zhao, Zhibiao; Xiao, Zhijie

    2014-01-01

    We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481

  2. Thermospheric mass density model error variance as a function of time scale

    NASA Astrophysics Data System (ADS)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).

  3. Finite state projection based bounds to compare chemical master equation models using single-cell data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Zachary; Neuert, Gregor; Department of Pharmacology, School of Medicine, Vanderbilt University, Nashville, Tennessee 37232

    2016-08-21

    Emerging techniques now allow for precise quantification of distributions of biological molecules in single cells. These rapidly advancing experimental methods have created a need for more rigorous and efficient modeling tools. Here, we derive new bounds on the likelihood that observations of single-cell, single-molecule responses come from a discrete stochastic model, posed in the form of the chemical master equation. These strict upper and lower bounds are based on a finite state projection approach, and they converge monotonically to the exact likelihood value. These bounds allow one to discriminate rigorously between models and with a minimum level of computational effort. In practice, these bounds can be incorporated into stochastic model identification and parameter inference routines, which improve the accuracy and efficiency of endeavors to analyze and predict single-cell behavior. We demonstrate the applicability of our approach using simulated data for three example models as well as for experimental measurements of a time-varying stochastic transcriptional response in yeast.

  4. 25 CFR 543.18 - What are the minimum internal control standards for the cage, vault, kiosk, cash and cash...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... available upon demand for each day, shift, and drop cycle (this is not required if the system does not track..., beverage containers, etc., into and out of the cage. (j) Variances. The operation must establish, as...

  5. 25 CFR 543.18 - What are the minimum internal control standards for the cage, vault, kiosk, cash and cash...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... available upon demand for each day, shift, and drop cycle (this is not required if the system does not track..., beverage containers, etc., into and out of the cage. (j) Variances. The operation must establish, as...

  6. Ratio of shear viscosity to entropy density in multifragmentation of Au + Au

    NASA Astrophysics Data System (ADS)

    Zhou, C. L.; Ma, Y. G.; Fang, D. Q.; Li, S. X.; Zhang, G. Q.

    2012-06-01

    The ratio of the shear viscosity (η) to entropy density (s) for intermediate energy heavy-ion collisions has been calculated by using the Green-Kubo method in the framework of the quantum molecular dynamics model. The theoretical curve of η/s as a function of incident energy for head-on Au + Au collisions shows that a minimum region of η/s is approached at higher incident energies, where the minimum η/s value is about 7 times the Kovtun-Son-Starinets (KSS) bound (1/4π). We argue that the onset of the minimum η/s region at higher incident energies corresponds to the nuclear liquid-gas phase transition in nuclear multifragmentation.

  7. Binary weight distributions of some Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Arnold, S.

    1992-01-01

    The binary weight distributions of the (7,5) and (15,9) Reed-Solomon (RS) codes and their duals are computed using the MacWilliams identities. Several mappings of symbols to bits are considered and those offering the largest binary minimum distance are found. These results are then used to compute bounds on the soft-decoding performance of these codes in the presence of additive Gaussian noise. These bounds are useful for finding large binary block codes with good performance and for verifying the performance obtained by specific soft-coding algorithms presently under development.

  8. Minimum Dimension of a Hilbert Space Needed to Generate a Quantum Correlation.

    PubMed

    Sikora, Jamie; Varvitsiotis, Antonios; Wei, Zhaohui

    2016-08-05

    Consider a two-party correlation that can be generated by performing local measurements on a bipartite quantum system. A question of fundamental importance is to understand how many resources, which we quantify by the dimension of the underlying quantum system, are needed to reproduce this correlation. In this Letter, we identify an easy-to-compute lower bound on the smallest Hilbert space dimension needed to generate a given two-party quantum correlation. We show that our bound is tight on many well-known correlations and discuss how it can rule out correlations from having a finite-dimensional quantum representation. We show that our bound is multiplicative under product correlations and also that it can witness the nonconvexity of certain restricted-dimensional quantum correlations.

  9. Resource Constrained Planning of Multiple Projects with Separable Activities

    NASA Astrophysics Data System (ADS)

    Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya

    In this study we consider a resource-constrained planning problem for multiple projects with separable activities. This problem provides a plan to process the activities considering resource availability within a time window. We propose a solution algorithm based on the branch and bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods for improving computational efficiency: obtaining an initial solution with the minimum-slack-time rule, estimating a lower bound that considers both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples. In particular, as the number of planned projects increases, the average computational time and the number of searched nodes are reduced.

  10. Patterns and Prevalence of Core Profile Types in the WPPSI Standardization Sample.

    ERIC Educational Resources Information Center

    Glutting, Joseph J.; McDermott, Paul A.

    1990-01-01

    Found most representative subtest profiles for 1,200 children comprising standardization sample of Wechsler Preschool and Primary Scale of Intelligence (WPPSI). Grouped scaled scores from WPPSI subtests according to similar level and shape using sequential minimum-variance cluster analysis with independent replications. Obtained final solution of…

  11. A Review on Sensor, Signal, and Information Processing Algorithms (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    processing [214], ambiguity surface averaging [215], optimum uncertain field tracking, and optimal minimum variance track-before-detect [216]. In [217, 218] … (2) (2001) 739–746. [216] S. L. Tantum, L. W. Nolte, J. L. Krolik, K. Harmanci, The performance of matched-field track-before-detect methods using …

  12. A Comparison of Item Selection Techniques for Testlets

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.

    2010-01-01

    This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…

  13. Low genetic variance in the duration of the incubation period in a collared flycatcher (Ficedula albicollis) population.

    PubMed

    Husby, Arild; Gustafsson, Lars; Qvarnström, Anna

    2012-01-01

    The avian incubation period is associated with high energetic costs and mortality risks suggesting that there should be strong selection to reduce the duration to the minimum required for normal offspring development. Although there is much variation in the duration of the incubation period across species, there is also variation within species. It is necessary to estimate to what extent this variation is genetically determined if we want to predict the evolutionary potential of this trait. Here we use a long-term study of collared flycatchers to examine the genetic basis of variation in incubation duration. We demonstrate limited genetic variance as reflected in the low and nonsignificant additive genetic variance, with a corresponding heritability of 0.04 and coefficient of additive genetic variance of 2.16. Any selection acting on incubation duration will therefore be inefficient. To our knowledge, this is the first time heritability of incubation duration has been estimated in a natural bird population. © 2011 by The University of Chicago.

  14. Overlap between treatment and control distributions as an effect size measure in experiments.

    PubMed

    Hedges, Larry V; Olkin, Ingram

    2016-03-01

    The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small-sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large-sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved.
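
    A minimal sketch of the simple plug-in estimator this entry studies: with normal outcomes and a common variance, π = Φ(δ), so substituting the sample standardized mean difference d gives an estimate. This is the simple (small-sample biased) estimator, not the minimum variance unbiased one:

```python
import numpy as np
from scipy.stats import norm

t = np.array([5.1, 6.3, 4.8, 7.0, 6.1, 5.5])   # hypothetical treatment group
c = np.array([4.2, 5.0, 3.9, 4.6, 4.4, 5.2])   # hypothetical control group

nt, nc = len(t), len(c)
sp = np.sqrt(((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1))
             / (nt + nc - 2))                  # pooled standard deviation
d = (t.mean() - c.mean()) / sp                 # standardized mean difference
pi_hat = norm.cdf(d)                           # P(treatment obs > control mean)
print(f"d = {d:.3f}, pi_hat = {pi_hat:.3f}")
```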

  15. Wavelet-based multiscale analysis of minimum toe clearance variability in the young and elderly during walking.

    PubMed

    Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu

    2007-01-01

    As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of wavelet-based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and for screening balance impairments in the elderly. MTC during treadmill walking was analyzed for 30 healthy young subjects, 27 healthy elderly subjects, and 10 falls-risk elderly subjects with a history of tripping falls. The MTC signal from each subject was decomposed into eight detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of the detailed signals at scales 8 to 1 were calculated. The multiscale exponent (β) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (p<0.01) different between the young and healthy elderly groups. Results also suggest that the β between scales 1 and 2 is effective for recognizing falls-risk gait patterns. The results have implications for quantifying gait dynamics in normal, ageing, and pathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity to initiate preemptive measures to avoid injurious falls.
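
    A short PyWavelets sketch of the multiscale variance computation described here: decompose a series to 8 detail levels, compute the variance at each scale, and estimate the multiscale exponent β from the slope across scales. The synthetic series, wavelet choice, and fitting range are illustrative assumptions:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
mtc = 15 + 0.05 * np.cumsum(rng.standard_normal(4096))  # synthetic MTC series

coeffs = pywt.wavedec(mtc, "db4", level=8)
details = coeffs[1:]                     # detail signals, scale 8 first
scales = np.arange(8, 0, -1)
variances = np.array([d.var() for d in details])

# Multiscale exponent: slope of log2(variance) against scale.
beta = np.polyfit(scales, np.log2(variances), 1)[0]
for s, v in zip(scales, variances):
    print(f"scale {s}: variance {v:.4g}")
print("multiscale exponent beta ~", round(beta, 2))
```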

  16. Bounds on stochastic chemical kinetic systems at steady state

    NASA Astrophysics Data System (ADS)

    Dowdy, Garrett R.; Barton, Paul I.

    2018-02-01

    The method of moments has been proposed as a potential means to reduce the dimensionality of the chemical master equation (CME) appearing in stochastic chemical kinetics. However, attempts to apply the method of moments to the CME usually result in the so-called closure problem. Several authors have proposed moment closure schemes, which allow them to obtain approximations of quantities of interest, such as the mean molecular count for each species. However, these approximations have the dissatisfying feature that they come with no error bounds. This paper presents a fundamentally different approach to the closure problem in stochastic chemical kinetics. Instead of making an approximation to compute a single number for the quantity of interest, we calculate mathematically rigorous bounds on this quantity by solving semidefinite programs. These bounds provide a check on the validity of the moment closure approximations and are in some cases so tight that they effectively provide the desired quantity. In this paper, the bounded quantities of interest are the mean molecular count for each species, the variance in this count, and the probability that the count lies in an arbitrary interval. At present, we consider only steady-state probability distributions, intending to discuss the dynamic problem in a future publication.

  17. Determining size and dispersion of minimum viable populations for land management planning and species conservation

    NASA Astrophysics Data System (ADS)

    Lehmkuhl, John F.

    1984-03-01

    The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on maintenance of genetic heterozygosity and reduction of inbreeding. A minimum effective population size (N_e) of 50 breeding animals is taken from the literature as the minimum short-term size to keep inbreeding below 1% per generation. Steps in the procedure adjust N_e to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and the period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population size of 50-500 individuals. This population range defines the time range of short- to long-term population fitness and evolutionary potential. The length of the term is a relative function of the species' generation time. Two population dispersion strategies are proposed: core population and dispersed population.

  18. Quantum-enhanced metrology for multiple phase estimation with noise

    PubMed Central

    Yue, Jie-Dong; Zhang, Yu-Ran; Fan, Heng

    2014-01-01

    We present a general quantum metrology framework to study the simultaneous estimation of multiple phases in the presence of noise as a discretized model for phase imaging. This approach can lead to nontrivial bounds on the precision of multiphase estimation. Our results show that simultaneous estimation (SE) of multiple phases is always better than individual estimation (IE) of each phase, even in a noisy environment. The utility of the bounds for multiple phase estimation under photon loss channels is exemplified explicitly. When noise is low, these bounds possess the Heisenberg scale, showing quantum-enhanced precision with an O(d) advantage for SE, where d is the number of phases. However, this O(d) advantage of the SE scheme in the variance of the estimation may disappear asymptotically when photon loss becomes significant, and then only a constant advantage over the IE scheme remains. A potential application of these results is presented. PMID:25090445

  19. Statistical analysis of effective singular values in matrix rank determination

    NASA Technical Reports Server (NTRS)

    Konstantinides, Konstantinos; Yao, Kung

    1988-01-01

    A major problem in using SVD (singular-value decomposition) as a tool in determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theories of perturbations of singular values and statistical significance testing. Threshold bounds for perturbation due to finite-precision and i.i.d. random models are evaluated. In random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds and comparisons to other previously known approaches are given.
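
    A numpy sketch of the basic idea: compare the singular values of a noisy matrix against a noise-dependent threshold to estimate the effective rank. The α·σ·√max(m, n) threshold below is a common rule of thumb standing in for the paper's confidence-region bounds:

```python
import numpy as np

def effective_rank(A, noise_std, alpha=3.0):
    """Count singular values above a noise-level threshold."""
    s = np.linalg.svd(A, compute_uv=False)
    tau = alpha * noise_std * np.sqrt(max(A.shape))
    return int(np.sum(s > tau)), s, tau

rng = np.random.default_rng(0)
m, n, true_rank, sigma = 60, 40, 3, 0.05
A = rng.standard_normal((m, true_rank)) @ rng.standard_normal((true_rank, n))
r, s, tau = effective_rank(A + sigma * rng.standard_normal((m, n)), sigma)
print("estimated rank:", r, "  threshold:", round(tau, 3),
      "  leading singular values:", s[:5].round(2))
```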

  20. Resilient filtering for time-varying stochastic coupling networks under the event-triggering scheduling

    NASA Astrophysics Data System (ADS)

    Wang, Fan; Liang, Jinling; Dobaie, Abdullah M.

    2018-07-01

    The resilient filtering problem is considered for a class of time-varying networks with stochastic coupling strengths. An event-triggered strategy is adopted to save network resources by scheduling the signal transmission from the sensors to the filters according to certain prescribed rules. Moreover, the filter parameters to be designed are subject to gain perturbations. The primary aim of the addressed problem is to determine a resilient filter that ensures an acceptable filtering performance for the considered network under event-triggered scheduling. To handle this issue, an upper bound on the estimation error variance is established for each node by stochastic analysis. Subsequently, the resilient filter is designed by locally minimizing the derived upper bound at each iteration. Rigorous analysis shows the monotonicity of the minimal upper bound with respect to the triggering threshold. Finally, a simulation example is presented to show the effectiveness of the established filter scheme.

  1. The Cost of Becoming an Outdoor Instructor.

    ERIC Educational Resources Information Center

    Cashel, Chris

    This article describes instructor criteria in three outdoor organizations: Outward Bound (OB), the National Outdoor Leadership School (NOLS), and the Wilderness Education Association (WEA). Common requirements for outdoor leadership programs are outdoor experience and skills, advanced first aid, CPR, and a minimum age requirement. Traditionally…

  2. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process

    PubMed Central

    Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.

    2013-01-01

    Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest the use of calculating minimum detectable differences to improve confidence when delisting endangered animals and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531

  3. Sampling intraspecific variability in leaf functional traits: Practical suggestions to maximize collected information.

    PubMed

    Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni

    2017-12-01

    The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about trait variability and minimizing sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITV_BI) and among populations (ITV_POP), relatively few studies have analyzed intraspecific variability within individuals (ITV_WI). Here, we provide an analysis of ITV_WI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITV_WI level of variation between the two traits and provided the minimum and optimal sampling sizes needed to take ITV_WI into account, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance in the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy can significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analyses involving different traits.

  4. REML/BLUP and sequential path analysis in estimating genotypic values and interrelationships among simple maize grain yield-related traits.

    PubMed

    Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q

    2017-03-22

    Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions considering genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the cause-and-effect relationships among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype × environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which makes direct selection for this trait difficult. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with an elevated fit (R^2 > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the average (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high-yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.

  5. Exponential bound in the quest for absolute zero

    NASA Astrophysics Data System (ADS)

    Stefanatos, Dionisis

    2017-10-01

    In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.

  6. Exponential bound in the quest for absolute zero.

    PubMed

    Stefanatos, Dionisis

    2017-10-01

    In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.

  7. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    Precise and efficient noise variance estimation is very important when the wavelet transform is used to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by the fluctuation of noise values, this study puts forward the strategy of using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the finest scale, which takes both efficiency and accuracy into account. Building on this noise variance estimation, a novel improved wavelet threshold function is proposed by combining the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, a novel wavelet threshold de-noising method is put forward. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on the noise variance estimation performs well in processing the test signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals, including voltage, current, and oil pressure, and maintain the dynamic characteristics of the signals favorably.
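
    A PyWavelets sketch of the overall recipe: estimate the noise standard deviation from the finest-scale detail coefficients (robust median estimate), form a threshold, shrink the detail coefficients, and reconstruct. Soft thresholding and the universal threshold below stand in for the paper's Gaussian-mixture classification and improved threshold function:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 11 * t))
noisy = clean + 0.3 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "sym8", level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise std from finest scale
thresh = sigma * np.sqrt(2 * np.log(noisy.size)) # universal threshold

den_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
denoised = pywt.waverec(den_coeffs, "sym8")[:noisy.size]

rmse = lambda a: np.sqrt(np.mean((a - clean) ** 2))
print(f"RMSE noisy: {rmse(noisy):.3f}   denoised: {rmse(denoised):.3f}")
```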

  8. Efficient design of cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances.

    PubMed

    van Breukelen, Gerard J P; Candel, Math J J M

    2018-06-10

    Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It is still highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  9. Comparison of Turbulent Thermal Diffusivity and Scalar Variance Models

    NASA Technical Reports Server (NTRS)

    Yoder, Dennis A.

    2016-01-01

    In this study, several variable turbulent Prandtl number formulations are examined for boundary layers, pipe flow, and axisymmetric jets. The model formulations include simple algebraic relations between the thermal diffusivity and turbulent viscosity as well as more complex models that solve transport equations for the thermal variance and its dissipation rate. Results are compared with available data for wall heat transfer and profile measurements of mean temperature, the root-mean-square (RMS) fluctuating temperature, turbulent heat flux and turbulent Prandtl number. For wall-bounded problems, the algebraic models are found to best predict the rise in turbulent Prandtl number near the wall as well as the log-layer temperature profile, while the thermal variance models provide a good representation of the RMS temperature fluctuations. In jet flows, the algebraic models provide no benefit over a constant turbulent Prandtl number approach. Application of the thermal variance models finds that some significantly overpredict the temperature variance in the plume and most underpredict the thermal growth rate of the jet. The models yield very similar fluctuating temperature intensities in jets from straight pipes and smooth contraction nozzles, in contrast to data that indicate the latter should have noticeably higher values. For the particular low subsonic heated jet cases examined, changes in the turbulent Prandtl number had no effect on the centerline velocity decay.

  10. Quantum speed limit constraints on a nanoscale autonomous refrigerator

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Chiranjib; Misra, Avijit; Bhattacharya, Samyadeb; Pati, Arun Kumar

    2018-06-01

    The quantum speed limit, furnishing a lower bound on the time required for a quantum system to evolve through the state space, imposes an ultimate natural limitation on the dynamics of physical devices. Quantum absorption refrigerators, meanwhile, have attracted a great deal of attention in the past few years. In this paper, we discuss the effects of the quantum speed limit on the performance of a quantum absorption refrigerator. In particular, we show that there exists a tradeoff relation between the steady cooling rate of the refrigerator and the minimum time taken to reach the steady state. Based on this, we define a figure of merit called the "bounding second-order cooling rate" and show that it scales linearly with the unitary interaction strength among the constituent qubits. We also study the increase of the bounding second-order cooling rate with the thermalization strength. We subsequently demonstrate that coherence in the initial three-qubit system can significantly increase the bounding second-order cooling rate. We study the efficiency of the refrigerator at the maximum bounding second-order cooling rate and, in a limiting case, we show that this efficiency is given by a simple formula resembling the Curzon-Ahlborn relation.
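
    For reference, the Curzon-Ahlborn efficiency that the limiting formula is said to resemble is the standard finite-time result for cold and hot reservoirs at temperatures T_c < T_h (a textbook relation, not taken from this record):

    ```latex
    \eta_{\mathrm{CA}} \;=\; 1 - \sqrt{\frac{T_c}{T_h}}
    ```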

  11. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Treesearch

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  12. Multiple Signal Classification for Determining Direction of Arrival of Frequency Hopping Spread Spectrum Signals

    DTIC Science & Technology

    2014-03-27

    Only table-of-contents and acronym-list fragments of this record survive; the recoverable acronyms are MTM (multiple taper method), MUSIC (multiple signal classification), MVDR (minimum variance distortionless response), PSK (phase shift keying), and QAM (quadrature amplitude modulation).

  13. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter

    PubMed Central

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan

    2018-01-01

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
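
    In the simplest case of uncorrelated local estimation errors, fusion "according to the principle of linear minimum variance" reduces to inverse-covariance weighting of the local estimates. The numpy sketch below shows that special case only; the paper's unscented-transformation-based rule also accounts for cross-correlations between local filters, which this omits.

    ```python
    import numpy as np

    def lmv_fuse(estimates, covariances):
        """Linear minimum-variance fusion of N local state estimates,
        assuming uncorrelated local errors: weight each estimate by the
        inverse of its covariance, then renormalize."""
        infos = [np.linalg.inv(P) for P in covariances]
        P_fused = np.linalg.inv(sum(infos))
        x_fused = P_fused @ sum(I @ x for I, x in zip(infos, estimates))
        return x_fused, P_fused

    # Two local filters estimating the same 2-dimensional state.
    x1, P1 = np.array([1.0, 0.5]), np.diag([0.04, 0.09])
    x2, P2 = np.array([1.1, 0.4]), np.diag([0.09, 0.04])
    x, P = lmv_fuse([x1, x2], [P1, P2])   # x lies between x1 and x2,
                                          # weighted toward the smaller variances
    ```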

  14. Fast computation of an optimal controller for large-scale adaptive optics.

    PubMed

    Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc

    2011-11-01

    The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the method for both off-line and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
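
    The standard route that the article describes as too slow at extreme scales can be written down directly for a toy problem: solve the filtering algebraic Riccati equation for the steady-state covariance, then form the asymptotic Kalman gain. The sketch below uses SciPy's DARE solver; the matrices are small placeholders, not an adaptive-optics turbulence model.

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    # x_{k+1} = A x_k + w_k,  y_k = C x_k + v_k,  w ~ (0, Q), v ~ (0, R).
    A = np.array([[0.99, 0.00],
                  [0.00, 0.95]])
    C = np.array([[1.0, 1.0]])
    Q = 0.01 * np.eye(2)
    R = np.array([[0.1]])

    # Steady-state predicted error covariance from the filtering DARE
    # (duality: pass A^T and C^T to the control-form solver).
    P = solve_discrete_are(A.T, C.T, Q, R)

    # Asymptotic Kalman gain; solving the DARE is the expensive step whose
    # cost motivates the paper's cropped-screen approximation.
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    ```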

  15. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.

    PubMed

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan

    2018-02-06

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.

  16. An algorithm for minimum-cost set-point ordering in a cryogenic wind tunnel

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1981-01-01

    An algorithm for minimum-cost ordering of set points in a cryogenic wind tunnel is developed. The procedure generates a matrix of dynamic state-transition costs, which is evaluated by means of a single-volume lumped model of the cryogenic wind tunnel and the use of some idealized minimum-cost state-transition control strategies. A branch and bound algorithm is employed to determine the least costly sequence of state transitions from the transition-cost matrix. Some numerical results based on data for the National Transonic Facility are presented which show a strong preference for state transitions that conserve coolant. The results also show that the choice of the terminal set point in an open ordering can produce a wide variation in total cost.
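
    The report's branch-and-bound details are not given in this record; the sketch below shows one plausible depth-first branch and bound over a transition-cost matrix for the least costly open ordering, pruning with a simple admissible bound (every unvisited set point must still be entered via at least its cheapest incoming transition). The random cost matrix stands in for the lumped-model transition costs.

    ```python
    import numpy as np

    def min_cost_order(cost, start=0):
        """Cheapest open ordering of all set points, starting at `start`,
        for a transition-cost matrix cost[i, j] (cost of moving i -> j)."""
        n = cost.shape[0]
        best = {"cost": np.inf, "order": None}
        # Admissible bound: cheapest way to enter each set point.
        min_in = [min(cost[j, i] for j in range(n) if j != i) for i in range(n)]

        def search(node, visited, path, acc):
            if len(path) == n:
                if acc < best["cost"]:
                    best["cost"], best["order"] = acc, path[:]
                return
            bound = acc + sum(min_in[i] for i in range(n) if i not in visited)
            if bound >= best["cost"]:
                return                        # prune this branch
            for nxt in range(n):
                if nxt not in visited:
                    visited.add(nxt)
                    path.append(nxt)
                    search(nxt, visited, path, acc + cost[node, nxt])
                    path.pop()
                    visited.remove(nxt)

        search(start, {start}, [start], 0.0)
        return best["order"], best["cost"]

    rng = np.random.default_rng(1)
    C = rng.uniform(1.0, 10.0, size=(7, 7))
    np.fill_diagonal(C, 0.0)
    order, total = min_cost_order(C)
    ```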

  17. Noncoplanar minimum delta V two-impulse and three-impulse orbital transfer from a regressing oblate earth assembly parking ellipse onto a flyby trans-Mars asymptotic velocity vector.

    NASA Technical Reports Server (NTRS)

    Bean, W. C.

    1971-01-01

    Comparison of two-impulse and three-impulse orbital transfer, using data from a 63-case numerical study. For each case investigated for which coplanarity of the regressing assembly parking ellipse was attained with the target asymptotic velocity vector, a two-impulse maneuver (or a one-impulse equivalent) was found for which the velocity expenditure was within 1% of a reference absolute minimum lower bound. Therefore, for the coplanar cases, use of a minimum delta-V three-impulse maneuver afforded scant improvement in velocity penalty. However, as the noncoplanarity of the parking ellipse and the target asymptotic velocity vector increased, there was a significant increase in the superiority of minimum delta-V three-impulse maneuvers for slowing the growth of velocity expenditure. It is concluded that a multiple-impulse maneuver should be contemplated if nonnominal launch conditions could occur.

  18. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    NASA Astrophysics Data System (ADS)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita

    2014-06-01

    Traditional portfolio optimization methods in the likes of Markowitz' mean-variance model and semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality due to the fact that maximum and minimum values from the data may largely influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides a more stable information ratio.
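
    For context, the static Markowitz-style baseline has a closed form in its simplest fully invested, unconstrained version: the global minimum-variance weights are w = S^{-1}1 / (1'S^{-1}1), where S is the covariance matrix of returns. A sketch with an illustrative covariance matrix (not FTSE Bursa Malaysia data):

    ```python
    import numpy as np

    # Illustrative covariance matrix of three sector-index returns.
    S = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.025, 0.005],
                  [0.004, 0.005, 0.060]])

    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)   # S^{-1} 1 without forming the inverse
    w /= ones @ w                  # normalize so the weights sum to one
    print("global minimum-variance weights:", np.round(w, 3))
    ```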

  19. Revisiting the time until fixation of a neutral mutant in a finite population - A coalescent theory approach.

    PubMed

    Greenbaum, Gili

    2015-09-07

    Evaluation of the time scale of the fixation of neutral mutations is crucial to the theoretical understanding of the role of neutral mutations in evolution. Diffusion approximations of the Wright-Fisher model are most often used to derive analytic formulations of genetic drift, as well as for the time scales of the fixation of neutral mutations. These approximations require a set of assumptions, most notably that genetic drift is a stochastic process in a continuous allele-frequency space, an assumption appropriate for large populations. Here equivalent approximations are derived using a coalescent theory approach which relies on a different set of assumptions than the diffusion approach, and adopts a discrete allele-frequency space. Solutions for the mean and variance of the time to fixation of a neutral mutation derived from the two approaches converge for large populations but slightly differ for small populations. A Markov chain analysis of the Wright-Fisher model for small populations is used to evaluate the solutions obtained, showing that both the mean and the variance are better approximated by the coalescent approach. The coalescence approximation represents a tighter upper-bound for the mean time to fixation than the diffusion approximation, while the diffusion approximation and coalescence approximation form an upper and lower bound, respectively, for the variance. The converging solutions and the small deviations of the two approaches strongly validate the use of diffusion approximations, but suggest that coalescent theory can provide more accurate approximations for small populations. Copyright © 2015 Elsevier Ltd. All rights reserved.
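
    The Markov chain analysis referred to above can be mimicked by direct simulation. The sketch below estimates the mean and variance of the conditional fixation time of a single neutral mutant in a haploid Wright-Fisher population by binomial resampling; the population size, replicate count, and seed are arbitrary choices.

    ```python
    import numpy as np

    def fixation_times(N=100, reps=20000, seed=0):
        """Simulate a haploid Wright-Fisher population of size N starting
        from one neutral mutant copy; return the durations of the runs
        that ended in fixation (i.e., conditioning on fixation)."""
        rng = np.random.default_rng(seed)
        times = []
        for _ in range(reps):
            k, t = 1, 0
            while 0 < k < N:
                k = rng.binomial(N, k / N)   # binomial resampling = drift
                t += 1
            if k == N:
                times.append(t)
        return np.array(times)

    t = fixation_times()
    print(f"mean fixation time: {t.mean():.1f} generations, "
          f"variance: {t.var():.1f} (from {t.size} fixation events)")
    ```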

  20. A tale of two superpotentials: Stability and instability in designer gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amsel, Aaron J.; Marolf, Donald; Hertog, Thomas

    We investigate the stability of asymptotically anti-de Sitter gravity coupled to tachyonic scalar fields with mass at or slightly above the Breitenlohner-Freedman bound. The boundary conditions in these 'designer gravity' theories are defined in terms of an arbitrary function W. Previous work had suggested that the energy in designer gravity is bounded below if (i) W has a global minimum and (ii) the scalar potential admits a superpotential P. More recently, however, certain solutions were found (numerically) to violate the proposed energy bound. We resolve the discrepancy by observing that a given scalar potential can admit two possible branches of the corresponding superpotential, P{sub ±}. When there is a P{sub -} branch, we rigorously prove a lower bound on the energy; the P{sub +} branch alone is not sufficient. Our numerical investigations (i) confirm this picture, (ii) confirm other critical aspects of the (complicated) proofs, and (iii) suggest that the existence of P{sub -} may in fact be necessary (as well as sufficient) for the energy of a designer gravity theory to be bounded below.

  1. SU-F-T-18: The Importance of Immobilization Devices in Brachytherapy Treatments of Vaginal Cuff

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shojaei, M; Dumitru, N; Pella, S

    2016-06-15

    Purpose: High dose rate brachytherapy is a highly localized radiation therapy with a very steep dose gradient, so one of the most important parts of the treatment is immobilization. The smallest movement of the patient or applicator can result in dose variation to the surrounding tissues as well as to the tumor being treated. We review ML Cylinder treatments and their localization challenges. Methods: A retrospective study of 25 patients with 5 treatments each, examining the applicator's placement with respect to the organs at risk. Motion possibilities for each applicator, intra- and inter-fraction, were covered and measured together with their dosimetric implications. The localization and immobilization devices used were assessed for their capability to prevent motion before and during treatment delivery. Results: We focused on the 100% isodose on the central axis and a 15-degree displacement due to possible rotation, analyzing the dose variations to the bladder and rectum walls. The average dose variation for the bladder was 15% of the accepted tolerance, with a minimum variance of 11.1% and a maximum of 23.14% on the central axis. For the off-axis measurements we found an average variation of 16.84% of the accepted tolerance, with a minimum variance of 11.47% and a maximum of 27.69%. For the rectum we focused on the rectum wall closest to the 120% isodose line. The average dose variation was 19.4%, with a minimum of 11.3% and a maximum of 34.02% of the accepted tolerance values. Conclusion: Improved immobilization devices are recommended. For inter-fraction treatment, localization devices are recommended in place, with planning consistent with the initial fraction. Many of the present immobilization devices produced for external radiotherapy can be used to improve the localization of HDR applicators during transportation of the patient and during treatment.

  2. Stoichiometry of the Cre recombinase bound to the lox recombining site.

    PubMed Central

    Mack, A; Sauer, B; Abremski, K; Hoess, R

    1992-01-01

    The site-specific recombinase Cre from bacteriophage P1 binds and carries out recombination at a 34 bp lox site. The lox site consists of two 13 bp inverted repeats, separated by an 8 bp spacer region. Both the palindromic nature of the site and the results of footprinting and band shift experiments suggest that a minimum of two Cre molecules bind to a lox site. We report here experiments that demonstrate the absolute stoichiometry of the Cre-lox complex to be one molecule of Cre bound per inverted repeat, or two molecules per lox site. PMID:1408747

  3. Quadrupolar, Triple [Delta]-Function Potential in One Dimension

    ERIC Educational Resources Information Center

    Patil, S. H.

    2009-01-01

    The energy and parity eigenstates for quadrupolar, triple [delta]-function potential are analysed. Using the analytical solutions in specific domains, simple expressions are obtained for even- and odd-parity bound-state energies. The Heisenberg uncertainty product is observed to have a minimum for a specific strength of the potential. The…

  4. Upward Bound. Program Objectives, Summer 1971.

    ERIC Educational Resources Information Center

    Wesleyan Univ., Middletown, CT.

    The primary program objectives were as follows: (1) The students will achieve passing grade in the college preparation program; (2) The students will achieve one year academic growth each year as measured by the SCAT and other standardized measurements; (3) The students will achieve the minimum PSAT percentile rank as anticipated for college…

  5. Prediction of episodic acidification in North-eastern USA: An empirical/mechanistic approach

    USGS Publications Warehouse

    Davies, T.D.; Tranter, M.; Wigington, P.J.; Eshleman, K.N.; Peters, N.E.; Van Sickle, J.; DeWalle, David R.; Murdoch, Peter S.

    1999-01-01

    Observations from the US Environmental Protection Agency's Episodic Response Project (ERP) in the North-eastern United States are used to develop an empirical/mechanistic scheme for prediction of the minimum values of acid neutralizing capacity (ANC) during episodes. An acidification episode is defined as a hydrological event during which ANC decreases. The pre-episode ANC is used to index the antecedent condition, and the stream flow increase reflects how much the relative contributions of sources of waters change during the episode. As much as 92% of the total variation in the minimum ANC in individual catchments can be explained (with levels of explanation >70% for nine of the 13 streams) by a multiple linear regression model that includes pre-episode ANC and change in discharge as independent variables. The predictive scheme is demonstrated to be regionally robust, with the regional variance explained ranging from 77 to 83%. The scheme is not successful for each ERP stream, and reasons are suggested for the individual failures. The potential for applying the predictive scheme to other watersheds is demonstrated by testing the model with data from the Panola Mountain Research Watershed in the South-eastern United States, where the variance explained by the model was 74%. The model can also be utilized to assess 'chemically new' and 'chemically old' water sources during acidification episodes.

  6. Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy

    NASA Astrophysics Data System (ADS)

    Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.

    2016-08-01

    We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare in a quantitative manner the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.

  7. Experimental study on an FBG strain sensor

    NASA Astrophysics Data System (ADS)

    Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng

    2018-01-01

    Landslides and other geological disasters occur frequently and often cause high financial and humanitarian costs, so real-time, early-warning monitoring of landslides is of great significance for reducing casualties and property losses. In this paper, taking advantage of the high initial precision and high sensitivity of fiber Bragg gratings (FBGs), an FBG strain sensor is designed by combining FBGs with an inclinometer. The sensor is treated as a cantilever beam with one end fixed. Based on the anisotropic material properties of the inclinometer, a theoretical formula relating the FBG wavelength to the deflection of the sensor is established using the principles of elastic mechanics. The accuracy of the formula is verified through laboratory calibration tests and model slope monitoring experiments. The displacement of a landslide can then be calculated from the established formula using the changes in FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, with a corresponding variance of 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, with a corresponding variance of 0.50. The error between the theoretical and the measured displacements, and the variance of that error, decrease gradually, indicating that the theoretical results are increasingly reliable and that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision, early-warning monitoring of slopes.

  8. On computations of variance, covariance and correlation for interval data

    NASA Astrophysics Data System (ADS)

    Kishida, Masako

    2017-02-01

    In many practical situations, the data on which statistical analysis is to be performed is only known with interval uncertainty. Different combinations of values from the interval data usually lead to different values of variance, covariance, and correlation. Hence, it is desirable to compute the endpoints of possible values of these statistics. This problem is, however, NP-hard in general. This paper shows that the problem of computing the endpoints of possible values of these statistics can be rewritten as the problem of computing skewed structured singular values ν, for which there exist feasible (polynomial-time) algorithms that compute reasonably tight bounds in most practical cases. This allows one to find tight intervals of the aforementioned statistics for interval data.
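
    On a small instance the endpoints can be found directly, which makes the difficulty concrete. The sample variance is convex in the data vector, so its maximum over the interval box is attained at a vertex; enumerating vertices is exponential in n, which is one way to see why the general problem is hard. The minimum is a convex minimization over box constraints. A sketch with made-up intervals:

    ```python
    import numpy as np
    from itertools import product
    from scipy.optimize import minimize

    # Interval data: observation i is only known to lie in [lo[i], hi[i]].
    lo = np.array([1.0, 2.5, 3.0, 4.2])
    hi = np.array([1.5, 3.5, 3.2, 5.0])

    var = lambda x: np.var(x)   # population variance of a data vector

    # Upper endpoint: variance is convex, so the maximum over the box is
    # attained at one of the 2^n vertices -- enumerate them.
    v_max = max(var(np.array(v)) for v in product(*zip(lo, hi)))

    # Lower endpoint: minimize the convex function over the box directly.
    res = minimize(var, 0.5 * (lo + hi), bounds=list(zip(lo, hi)))
    print(f"variance range: [{res.fun:.4f}, {v_max:.4f}]")
    ```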

  9. Robust cooperation of connected vehicle systems with eigenvalue-bounded interaction topologies in the presence of uncertain dynamics

    NASA Astrophysics Data System (ADS)

    Li, Keqiang; Gao, Feng; Li, Shengbo Eben; Zheng, Yang; Gao, Hongbo

    2017-12-01

    This study presents a distributed H-infinity control method for uncertain platoons with dimensionally and structurally unknown interaction topologies, provided that the associated topological eigenvalues are bounded by a predesigned range. With an inverse model to compensate for nonlinear powertrain dynamics, vehicles in a platoon are modeled by third-order uncertain systems with bounded disturbances. On the basis of the eigenvalue decomposition of topological matrices, we convert the platoon system to a norm-bounded uncertain part and a diagonally structured certain part by applying a linear transformation. We then use a common Lyapunov method to design a distributed H-infinity controller. Numerically, two linear matrix inequalities corresponding to the minimum and maximum eigenvalues should be solved. The resulting controller can tolerate interaction topologies with eigenvalues located in a certain range. The proposed method also ensures robust performance and disturbance attenuation for the closed-loop platoon system. Hardware-in-the-loop tests are performed to validate the effectiveness of our method.

  10. Selfbound quantum droplets

    NASA Astrophysics Data System (ADS)

    Langen, Tim; Wenzel, Matthias; Schmitt, Matthias; Boettcher, Fabian; Buehner, Carl; Ferrier-Barbut, Igor; Pfau, Tilman

    2017-04-01

    Self-bound many-body systems are formed through a balance of attractive and repulsive forces and occur in many physical scenarios. Liquid droplets are an example of a self-bound system, formed by a balance of the mutual attractive and repulsive forces that derive from different components of the inter-particle potential. On the basis of the recent finding that an unstable bosonic dipolar gas can be stabilized by a repulsive many-body term, it was predicted that three-dimensional self-bound quantum droplets of magnetic atoms should exist. Here we report on the observation of such droplets using dysprosium atoms, with densities 10^8 times lower than that of a helium droplet, in a trap-free levitation field. We find that this dilute magnetic quantum liquid requires a minimum, critical number of atoms, below which the liquid evaporates into an expanding gas as a result of the quantum pressure of the individual constituents. Consequently, around this critical atom number we observe an interaction-driven phase transition between a gas and a self-bound liquid in the quantum degenerate regime with ultracold atoms.

  11. Global synchronization of complex dynamical networks through digital communication with limited data rate.

    PubMed

    Wang, Yan-Wu; Bian, Tao; Xiao, Jiang-Wen; Wen, Changyun

    2015-10-01

    This paper studies the global synchronization of complex dynamical network (CDN) under digital communication with limited bandwidth. To realize the digital communication, the so-called uniform-quantizer-sets are introduced to quantize the states of nodes, which are then encoded and decoded by newly designed encoders and decoders. To meet the requirement of the bandwidth constraint, a scaling function is utilized to guarantee the quantizers having bounded inputs and thus achieving bounded real-time quantization levels. Moreover, a new type of vector norm is introduced to simplify the expression of the bandwidth limit. Through mathematical induction, a sufficient condition is derived to ensure global synchronization of the CDNs. The lower bound on the sum of the real-time quantization levels is analyzed for different cases. Optimization method is employed to relax the requirements on the network topology and to determine the minimum of such lower bound for each case, respectively. Simulation examples are also presented to illustrate the established results.

  12. Impact of the Fano Factor on Position and Energy Estimation in Scintillation Detectors.

    PubMed

    Bora, Vaibhav; Barrett, Harrison H; Jha, Abhinav K; Clarkson, Eric

    2015-02-01

    The Fano factor for an integer-valued random variable is defined as the ratio of its variance to its mean. Light from various scintillation crystals have been reported to have Fano factors from sub-Poisson (Fano factor < 1) to super-Poisson (Fano factor > 1). For a given mean, a smaller Fano factor implies a smaller variance and thus less noise. We investigated if lower noise in the scintillation light will result in better spatial and energy resolutions. The impact of Fano factor on the estimation of position of interaction and energy deposited in simple gamma-camera geometries is estimated by two methods - calculating the Cramér-Rao bound and estimating the variance of a maximum likelihood estimator. The methods are consistent with each other and indicate that when estimating the position of interaction and energy deposited by a gamma-ray photon, the Fano factor of a scintillator does not affect the spatial resolution. A smaller Fano factor results in a better energy resolution.
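
    For reference, the definition used above can be stated compactly: for an integer-valued random variable N,

    ```latex
    F \;=\; \frac{\operatorname{Var}(N)}{\mathbb{E}[N]},
    \qquad F < 1 \ \text{(sub-Poisson)}, \quad
    F = 1 \ \text{(Poisson)}, \quad
    F > 1 \ \text{(super-Poisson)}.
    ```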

  13. Antimicrobial flavonoids from Tridax procumbens.

    PubMed

    Jindal, Alka; Kumar, Padma

    2012-01-01

    Callus culture of Tridax procumbens has been established on Murashige and Skoog's medium supplemented with NAA and BAP from nodal segments. Free and bound flavonoids were extracted from 2, 4, 6 and 8 week old calli by a well-established method. The free flavonoids were then screened against Staphylococcus aureus (bacteria) and Candida albicans (yeast) for their antimicrobial potential. Minimum inhibitory concentration, minimum bactericidal/fungicidal concentrations and total activity were also evaluated. Apigenin, quercetin and kaempferol were identified from the free flavonoids of 4 week old callus (the most active) through thin layer chromatography (TLC), preparative TLC, MP and IR spectral studies.

  14. Entanglement witnesses in spin models

    NASA Astrophysics Data System (ADS)

    Tóth, Géza

    2005-01-01

    We construct entanglement witnesses using fundamental quantum operators of spin models which contain two-particle interactions and have a certain symmetry. By choosing the Hamiltonian as such an operator, our method can be used for detecting entanglement by energy measurement. We apply this method to the Heisenberg model in a cubic lattice with a magnetic field, the XY model, and other familiar spin systems. Our method provides a temperature bound for separable states for systems in thermal equilibrium. We also study the Bose-Hubbard model and relate its energy minimum for separable states to the minimum obtained from the Gutzwiller ansatz.

  15. Post-stratified estimation: with-in strata and total sample size recommendations

    Treesearch

    James A. Westfall; Paul L. Patterson; John W. Coulston

    2011-01-01

    Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...

  16. Maximum Relative Entropy of Coherence: An Operational Coherence Measure.

    PubMed

    Bu, Kaifeng; Singh, Uttam; Fei, Shao-Ming; Pati, Arun Kumar; Wu, Junde

    2017-10-13

    The operational characterization of quantum coherence is the cornerstone in the development of the resource theory of coherence. We introduce a new coherence quantifier based on maximum relative entropy. We prove that the maximum relative entropy of coherence is directly related to the maximum overlap with maximally coherent states under a particular class of operations, which provides an operational interpretation of the maximum relative entropy of coherence. Moreover, we show that, for any coherent state, there are examples of subchannel discrimination problems such that this coherent state allows for a higher probability of successfully discriminating subchannels than that of all incoherent states. This advantage of coherent states in subchannel discrimination can be exactly characterized by the maximum relative entropy of coherence. By introducing a suitable smooth maximum relative entropy of coherence, we prove that the smooth maximum relative entropy of coherence provides a lower bound of one-shot coherence cost, and the maximum relative entropy of coherence is equivalent to the relative entropy of coherence in the asymptotic limit. Similar to the maximum relative entropy of coherence, the minimum relative entropy of coherence has also been investigated. We show that the minimum relative entropy of coherence provides an upper bound of one-shot coherence distillation, and in the asymptotic limit the minimum relative entropy of coherence is equivalent to the relative entropy of coherence.

  17. Diameter-Constrained Steiner Tree

    NASA Astrophysics Data System (ADS)

    Ding, Wei; Lin, Guohui; Xue, Guoliang

    Given an edge-weighted undirected graph G = (V,E,c,w), where each edge e ∈ E has a cost c(e) and a weight w(e), a set S ⊆ V of terminals, and a positive constant D0, we seek a minimum cost Steiner tree in which all terminals appear as leaves and whose diameter is bounded by D0. Note that the diameter of a tree is the maximum weight of a path connecting two different leaves in the tree. This problem is called the minimum cost diameter-constrained Steiner tree problem. The problem is NP-hard even when the topology of the Steiner tree is fixed. In the present paper we focus on this restricted version and present a fully polynomial time approximation scheme (FPTAS) for computing a minimum cost diameter-constrained Steiner tree under a fixed topology.

  18. Molecular mechanism of direct proflavine-DNA intercalation: evidence for drug-induced minimum base-stacking penalty pathway.

    PubMed

    Sasikala, Wilbee D; Mukherjee, Arnab

    2012-10-11

    DNA intercalation, a biophysical process of enormous clinical significance, has surprisingly eluded molecular understanding for several decades. With appropriate configurational restraint (to prevent dissociation) in all-atom metadynamics simulations, we capture the free energy surface of direct intercalation from minor groove-bound state for the first time using an anticancer agent proflavine. Mechanism along the minimum free energy path reveals that intercalation happens through a minimum base stacking penalty pathway where nonstacking parameters (Twist→Slide/Shift) change first, followed by base stacking parameters (Buckle/Roll→Rise). This mechanism defies the natural fluctuation hypothesis and provides molecular evidence for the drug-induced cavity formation hypothesis. The thermodynamic origin of the barrier is found to be a combination of entropy and desolvation energy.

  19. Optimization of fringe-type laser anemometers for turbine engine component testing

    NASA Technical Reports Server (NTRS)

    Seasholtz, R. G.; Oberle, L. G.; Weikle, D. H.

    1984-01-01

    The fringe-type laser anemometer is analyzed using the Cramer-Rao bound for the variance of the estimate of the Doppler frequency as a figure of merit. Mie scattering theory is used to calculate the Doppler signal, with both the amplitude and phase of the scattered light taken into account. The noise from wall scatter is calculated using the wall bidirectional reflectivity and the irradiance of the incident beams. A procedure is described to determine the optimum aperture mask for a probe volume located a given distance from a wall. The expected performance of counter-type processors is also discussed in relation to the Cramer-Rao bound. Numerical examples are presented for a coaxial backscatter anemometer.

  20. Petawatt laser absorption bounded

    PubMed Central

    Levy, Matthew C.; Wilks, Scott C.; Tabak, Max; Libby, Stephen B.; Baring, Matthew G.

    2014-01-01

    The interaction of petawatt (10^15 W) lasers with solid matter forms the basis for advanced scientific applications such as table-top particle accelerators, ultrafast imaging systems and laser fusion. Key metrics for these applications relate to absorption, yet conditions in this regime are so nonlinear that it is often impossible to know the fraction of absorbed light f, and even the range of f is unknown. Here, using a relativistic Rankine-Hugoniot-like analysis, we show for the first time that f exhibits a theoretical maximum and minimum. These bounds constrain nonlinear absorption mechanisms across the petawatt regime, forbidding high absorption values at low laser power and low absorption values at high laser power. For applications needing to circumvent the absorption bounds, these results will accelerate a shift from solid targets towards structured and multilayer targets, and will drive the development of new materials. PMID:24938656

  1. Development and Evaluation of a Percutaneous Technique for Repairing Proximal Femora With Metastatic Lesions

    DTIC Science & Technology

    2006-05-01

    Only fragments of this record survive. Recoverable text: 27 finite-element models were created and analyzed; the minimally invasive procedure is a viable option for, at a minimum, situations with low cyclic loading, such as for patients who are bedridden, who are wheelchair bound, or who have short life expectancies.

  2. Limited variance control in statistical low thrust guidance analysis. [stochastic algorithm for SEP comet Encke flyby mission

    NASA Technical Reports Server (NTRS)

    Jacobson, R. A.

    1975-01-01

    Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.

  3. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita

    2014-06-19

    Traditional portfolio optimization methods in the likes of Markowitz' mean-variance model and semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality due to the fact that maximum and minimum values from the data may largely influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides a more stable information ratio.

  4. Subset Selection Procedures: A Review and an Assessment

    DTIC Science & Technology

    1984-02-01

    Only fragments of this record survive: "...distance function (Alam and Rizvi, 1966; Gupta, 1966; Gupta and Studden, 1970), generalized variance (Gnanadesikan and Gupta, 1970), and multiple... Gnanadesikan (1966) considered a location type procedure based on sample component means. Except in the case of bivariate normal, only a lower bound of the... (Frischtak, 1973; Gnanadesikan, 1966) for ranking multivariate normal populations, but the results in these cases are very limited in scope or are asymptotic."

  5. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks

    PubMed Central

    Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and accurately estimates the sensitivities of the remaining potentially sensitive parameters. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the number of the sensitive parameters. PMID:26161544

  6. Convergence of Mayer and Virial expansions and the Penrose tree-graph identity

    NASA Astrophysics Data System (ADS)

    Procacci, Aldo; Yuhjtman, Sergio A.

    2017-01-01

    We establish new lower bounds for the convergence radius of the Mayer series and the Virial series of a continuous particle system interacting via a stable and tempered pair potential. Our bounds considerably improve those given by Penrose (J Math Phys 4:1312, 1963) and Ruelle (Ann Phys 5:109-120, 1963) for the Mayer series and by Lebowitz and Penrose (J Math Phys 7:841-847, 1964) for the Virial series. To get our results, we exploit the tree-graph identity given by Penrose (Statistical mechanics: foundations and applications. Benjamin, New York, 1967) using a new partition scheme based on minimum spanning trees.

  7. Soft-decision decoding techniques for linear block codes and their error performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1996-01-01

    The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability of maximum likelihood decoding of binary linear codes. The fourth and final paper concerns the construction of multilevel concatenated block modulation codes, using a multilevel concatenation scheme, for the frequency non-selective Rayleigh fading channel.

  8. Estimating earthquake location and magnitude from seismic intensity data

    USGS Publications Warehouse

    Bakun, W.H.; Wentworth, C.M.

    1997-01-01

    Analysis of Modified Mercalli intensity (MMI) observations for a training set of 22 California earthquakes suggests a strategy for bounding the epicentral region and moment magnitude M from MMI observations only. We define an intensity magnitude MI that is calibrated to be equal in the mean to M: MI = mean(Mi), where Mi = (MMIi + 3.29 + 0.0206 * Δi)/1.68 and Δi is the epicentral distance (km) of observation MMIi. The epicentral region is bounded by contours of rms[MI] = rms(MI - Mi) - rms0(MI - Mi), where rms is the root mean square and rms0(MI - Mi) is the minimum rms over a grid of assumed epicenters; empirical site corrections and a distance weighting function are used. Empirical contour values for bounding the epicenter location, and empirical bounds for M estimated from MI, appropriate for different levels of confidence and different quantities of intensity observations, are tabulated. The epicentral region bounds and MI obtained for an independent test set of western California earthquakes are consistent with the instrumental epicenters and moment magnitudes of these earthquakes. The analysis strategy is particularly appropriate for the evaluation of pre-1900 earthquakes for which the only available data are a sparse set of intensity observations.
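
    The calibration constants and the contour recipe above are enough to sketch the procedure. The following is a minimal illustration; the paper's empirical site corrections and distance weighting are omitted, and the observation layout is a placeholder.

    ```python
    import numpy as np

    def intensity_magnitude(mmi, dist_km):
        """Per-observation magnitudes from the text's regression,
        Mi = (MMIi + 3.29 + 0.0206 * di) / 1.68, and their mean MI."""
        m_i = (np.asarray(mmi) + 3.29 + 0.0206 * np.asarray(dist_km)) / 1.68
        return m_i.mean(), m_i

    def rms_surface(mmi, xy_obs, grid_points):
        """rms[MI] over trial epicenters: rms(MI - Mi) at each grid point,
        minus the minimum over the grid (site corrections and distance
        weighting omitted in this sketch)."""
        rms = []
        for gx, gy in grid_points:
            d = np.hypot(xy_obs[:, 0] - gx, xy_obs[:, 1] - gy)
            MI, m_i = intensity_magnitude(mmi, d)
            rms.append(np.sqrt(np.mean((MI - m_i) ** 2)))
        rms = np.array(rms)
        return rms - rms.min()   # contour this to bound the epicenter

    # Toy example: five MMI readings at known (x, y) positions in km.
    mmi = [7, 6, 5, 5, 4]
    xy = np.array([[0, 5], [10, 0], [25, 10], [-15, 20], [30, -25]], float)
    grid = [(x, y) for x in range(-40, 41, 5) for y in range(-40, 41, 5)]
    surface = rms_surface(mmi, xy, grid)
    ```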

  9. Uncertainty relations as Hilbert space geometry

    NASA Technical Reports Server (NTRS)

    Braunstein, Samuel L.

    1994-01-01

    Precision measurements involve the accurate determination of parameters through repeated measurements of identically prepared experimental setups. For many parameters there is a 'natural' choice for the quantum observable which is expected to give optimal information, and from this observable one can construct a Heisenberg uncertainty principle (HUP) bound on the precision attainable for the parameter. However, the classical statistics of multiple sampling directly gives us tools to construct bounds on the precision available for the parameters of interest (even when no obvious natural quantum observable exists, such as for phase or time); it is found that these direct bounds are more restrictive than those of the HUP. The implication is that the natural quantum observables typically do not encode the optimal information (even for observables such as position and momentum); we show how this can be understood simply in terms of the Hilbert space geometry. Another striking feature of these bounds on parameter uncertainty is that for a large enough number of repetitions of the measurements all quantum states are 'minimum uncertainty' states - not just Gaussian wave-packets. Thus, these bounds tell us what precision is achievable as well as merely what is allowed.

  10. Symmetry Parameter Constraints from a Lower Bound on Neutron-matter Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tews, Ingo; Lattimer, James M.; Ohnishi, Akira

    We propose the existence of a lower bound on the energy of pure neutron matter (PNM) on the basis of unitary-gas considerations. We discuss its justification from experimental studies of cold atoms as well as from theoretical studies of neutron matter. We demonstrate that this bound results in limits to the density-dependent symmetry energy, which is the difference between the energies of symmetric nuclear matter and PNM. In particular, this bound leads to a lower limit to the volume symmetry energy parameter S{sub 0}. In addition, for assumed values of S{sub 0} above this minimum, this bound implies both upper and lower limits to the symmetry energy slope parameter L, which describes the lowest-order density dependence of the symmetry energy. A lower bound on neutron-matter incompressibility is also obtained. These bounds are found to be consistent with both recent calculations of the energies of PNM and constraints from nuclear experiments. Our results are significant because several equations of state that are currently used in astrophysical simulations of supernovae and neutron star mergers, as well as in nuclear physics simulations of heavy-ion collisions, have symmetry energy parameters that violate these bounds. Furthermore, below the nuclear saturation density, the bound on neutron-matter energies leads to a lower limit to the density-dependent symmetry energy, which leads to upper limits to the nuclear surface symmetry parameter and the neutron-star crust-core boundary. We also obtain a lower limit to the neutron-skin thicknesses of neutron-rich nuclei. Above the nuclear saturation density, the bound on neutron-matter energies also leads to an upper limit to the symmetry energy, with implications for neutron-star cooling via the direct Urca process.

  11. Symmetry Parameter Constraints from a Lower Bound on Neutron-matter Energy

    NASA Astrophysics Data System (ADS)

    Tews, Ingo; Lattimer, James M.; Ohnishi, Akira; Kolomeitsev, Evgeni E.

    2017-10-01

    We propose the existence of a lower bound on the energy of pure neutron matter (PNM) on the basis of unitary-gas considerations. We discuss its justification from experimental studies of cold atoms as well as from theoretical studies of neutron matter. We demonstrate that this bound results in limits to the density-dependent symmetry energy, which is the difference between the energies of symmetric nuclear matter and PNM. In particular, this bound leads to a lower limit to the volume symmetry energy parameter S0. In addition, for assumed values of S0 above this minimum, this bound implies both upper and lower limits to the symmetry energy slope parameter L, which describes the lowest-order density dependence of the symmetry energy. A lower bound on neutron-matter incompressibility is also obtained. These bounds are found to be consistent with both recent calculations of the energies of PNM and constraints from nuclear experiments. Our results are significant because several equations of state that are currently used in astrophysical simulations of supernovae and neutron star mergers, as well as in nuclear physics simulations of heavy-ion collisions, have symmetry energy parameters that violate these bounds. Furthermore, below the nuclear saturation density, the bound on neutron-matter energies leads to a lower limit to the density-dependent symmetry energy, which leads to upper limits to the nuclear surface symmetry parameter and the neutron-star crust-core boundary. We also obtain a lower limit to the neutron-skin thicknesses of neutron-rich nuclei. Above the nuclear saturation density, the bound on neutron-matter energies also leads to an upper limit to the symmetry energy, with implications for neutron-star cooling via the direct Urca process.

  12. Intelligent ensemble T-S fuzzy neural networks with RCDPSO_DM optimization for effective handling of complex clinical pathway variances.

    PubMed

    Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang

    2013-07-01

    Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, there are many drawbacks, such as slow training rate, propensity to become trapped in a local minimum and poor ability to perform a global search. In order to improve overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize antecedent and consequent parameters of constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensemble T-S FNNs with RCDPSO_DM optimization to further improve stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The result demonstrates that intelligent ensemble T-S FNNs based on the RCDPSO_DM achieves superior performances, in terms of stability, efficiency, precision and generalizability, over PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling. Therefore, it makes CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Assessing the Minimum Number of Synchronization Triggers Necessary for Temporal Variance Compensation in Commercial Electroencephalography (EEG) Systems

    DTIC Science & Technology

    2012-09-01

    Only fragments of this record survive. Recoverable text: the assessment was performed by the ARL Translational Neuroscience Branch and covers the Emotiv EPOC, Advanced Brain Monitoring (ABM) B-Alert X10, and QUASAR DSI helmet-based EEG systems.

  14. Foreign Language Training in U.S. Undergraduate IB Programs: Are We Providing Students What They Need to Be Successful?

    ERIC Educational Resources Information Center

    Johnson, Jim

    2017-01-01

    A growing number of U.S. business schools now offer an undergraduate degree in international business (IB), for which training in a foreign language is a requirement. However, there appears to be considerable variance in the minimum requirements for foreign language training across U.S. business schools, including the provision of…

  15. Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.

    PubMed

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh

    2018-06-04

    The minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits the beamformer's applicability in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. In medical ultrasound imaging, the received signals from two imaging points close together do not vary much; therefore, using the previously optimized weight vector for one point as the initial weight vector for a neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L^3) to O(L^2). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
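
    The record does not spell out the iteration, so the sketch below shows one O(L^2)-per-step scheme in the same spirit: projected gradient descent on w^H R w under the distortionless constraint a^H w = 1, warm-started from a neighboring point's weights. It is an illustrative stand-in, not the authors' exact algorithm.

    ```python
    import numpy as np

    def iterative_mvdr(R, a, w0=None, mu=0.05, iters=300):
        """Minimize w^H R w subject to a^H w = 1 by projected gradient
        steps. Each iteration costs one matrix-vector product, O(L^2),
        versus the O(L^3) inversion of the closed-form solution. Passing
        w0 from a nearby imaging point speeds up convergence."""
        w = a / (a.conj() @ a) if w0 is None else w0.copy()
        for _ in range(iters):
            w = w - mu * (R @ w)                              # gradient step
            w = w + a * (1 - a.conj() @ w) / (a.conj() @ a)   # restore a^H w = 1
        return w

    # Sanity check against the closed-form MVDR weights.
    rng = np.random.default_rng(0)
    X = (rng.normal(size=(8, 200)) + 1j * rng.normal(size=(8, 200))) / np.sqrt(2)
    R = X @ X.conj().T / 200 + 0.1 * np.eye(8)
    a = np.ones(8, dtype=complex)
    w_iter = iterative_mvdr(R, a)
    w_closed = np.linalg.solve(R, a)
    w_closed /= a.conj() @ w_closed   # w_iter approaches w_closed
    ```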

  16. GIS-based niche modeling for mapping species' habitats

    USGS Publications Warehouse

    Rotenberry, J.T.; Preston, K.L.; Knick, S.

    2006-01-01

    Ecological "niche modeling" using presence-only locality data and large-scale environmental variables provides a powerful tool for identifying and mapping suitable habitat for species over large spatial extents. We describe a niche modeling approach that identifies a minimum (rather than an optimum) set of basic habitat requirements for a species, based on the assumption that constant environmental relationships in a species' distribution (i.e., variables that maintain a consistent value where the species occurs) are most likely to be associated with limiting factors. Environmental variables that take on a wide range of values where a species occurs are less informative because they do not limit a species' distribution, at least over the range of variation sampled. This approach is operationalized by partitioning Mahalanobis D² (the standardized difference between values of a set of environmental variables for any point and mean values for those same variables calculated from all points at which a species was detected) into independent components. The smallest of these components represents the linear combination of variables with minimum variance; increasingly larger components represent larger variances and are increasingly less limiting. We illustrate this approach using the California Gnatcatcher (Polioptila californica Brewster) and provide SAS code to implement it.
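
    The partitioning described above has a compact linear-algebra form: the independent components of Mahalanobis D² are the principal axes of the presence-site covariance matrix, and the smallest-eigenvalue component is the minimum-variance (most limiting) combination. The paper supplies SAS code; the following Python sketch is an illustrative analogue, with hypothetical variable names.

```python
import numpy as np

def partition_d2(X_presence, x_site):
    """Partition Mahalanobis D^2 for a candidate site into independent
    components, ordered from the minimum-variance (most limiting)
    combination of habitat variables to the least limiting one.
    X_presence: (n, p) environmental values at species detections;
    x_site: (p,) values at the site to be scored."""
    mu = X_presence.mean(axis=0)
    S = np.cov(X_presence, rowvar=False)
    lam, E = np.linalg.eigh(S)          # eigenvalues in ascending order
    z = E.T @ (x_site - mu)             # scores on each principal axis
    return z**2 / lam                   # components; their sum is full D^2
```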

  17. Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods

    NASA Astrophysics Data System (ADS)

    Garbanzo-Salas, Marcial; Hocking, Wayne. K.

    2015-09-01

    In recent years, adaptive (data-dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly in cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple-frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of Gaussian spectral approximation, the MVM will always underestimate the width, and can misplace the location of a spectral line in some circumstances. Large filters can be used to improve results with multiple-frequency signals, but are computationally inefficient. Significant biases can occur when using the MVM to study spectral information or echo power from the atmosphere; artifacts and artificial narrowing of turbulent layers are among such impacts.
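
    For readers who want to reproduce such comparisons, the sketch below computes the minimum variance (Capon) estimate P(f) = 1 / (aᴴR⁻¹a) alongside a plain periodogram. It is a bare-bones illustration; the filter order, diagonal loading, and covariance estimator are assumptions, and the order must exceed the number of spectral lines for the MVM to resolve them, as the abstract notes.

```python
import numpy as np

def capon_spectrum(x, order, freqs):
    """Minimum variance (Capon) spectral estimate from a 1-D signal x,
    using sliding windows of length `order` to form the covariance."""
    N = len(x)
    segs = np.array([x[i:i + order] for i in range(N - order + 1)])
    R = (segs.conj().T @ segs) / len(segs)
    R += 1e-6 * (np.trace(R).real / order) * np.eye(order)  # diagonal loading
    Rinv = np.linalg.inv(R)
    n = np.arange(order)
    P = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        a = np.exp(-2j * np.pi * f * n)          # Fourier steering vector
        P[k] = 1.0 / np.real(a.conj() @ Rinv @ a)
    return P

def periodogram(x, freqs):
    """Plain Fourier benchmark on the same frequency grid."""
    n = np.arange(len(x))
    return np.array([abs(np.exp(-2j * np.pi * f * n) @ x)**2 / len(x)
                     for f in freqs])
```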

  18. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum.

    PubMed

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiences and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents' positions in the searching process. In this way, the optimal found trajectories are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with those of some well-known heuristic methods, verifying the proposed method in terms of both convergence to optimal solutions and robustness.
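
    For context, the MVDR weights that ECGSA refines have a closed-form solution when the covariance matrix is well conditioned. The sketch below shows only this textbook baseline, as a reference point for what the heuristic search improves under array imperfections; it is not the paper's algorithm.

```python
import numpy as np

def mvdr_weights(R, a):
    """Textbook MVDR beamformer: w = R^-1 a / (a^H R^-1 a) minimizes the
    output power w^H R w subject to a distortionless response w^H a = 1
    toward the steering vector a."""
    Ri_a = np.linalg.solve(R, a)        # R^-1 a without explicit inversion
    return Ri_a / (a.conj() @ Ri_a)
```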

  19. Significant improvements of electrical discharge machining performance by step-by-step updated adaptive control laws

    NASA Astrophysics Data System (ADS)

    Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping

    2018-02-01

    In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of these disturbances on machining, we theoretically developed three control laws, from a minimum variance (MV) control law to a coupled minimum variance and pole placement (MVPPC) control law, and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of the EDM process model parameters and the measured ratio of arcing pulses, also called the gap state, the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. We thus not only provide three proven control laws for a developed EDM adaptive control system, but also show in practice that the TP control law is the best in dealing with machining instability and machining efficiency, though the MVPPC control law provided much better EDM performance than the MV control law. It was also shown that the TP control law provided burn-free machining.
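
    As an illustration of the simplest member of that family, the sketch below pairs a one-step-ahead minimum variance law with recursive least-squares (RLS) parameter estimation for a first-order ARX model. It is a generic MV-control skeleton under an assumed model structure, not the authors' EDM-specific laws; gap-state measurement and pole placement are omitted.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """Recursive least-squares update of the model parameters theta,
    given regressor phi = [y(t-1), u(t-1)] and newly measured output y
    (forgetting factor lam)."""
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    return theta, P

def mv_control(theta, y_t, setpoint):
    """Minimum variance law for y(t+1) = a*y(t) + b*u(t) + e(t+1):
    choose u(t) so the one-step-ahead prediction equals the setpoint,
    leaving only the unpredictable noise e(t+1) in the output."""
    a_hat, b_hat = theta
    return (setpoint - a_hat * y_t) / b_hat
```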

  20. The performance of matched-field track-before-detect methods using shallow-water Pacific data.

    PubMed

    Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem

    2002-07-01

    Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.

  1. Number-phase minimum-uncertainty state with reduced number uncertainty in a Kerr nonlinear interferometer

    NASA Astrophysics Data System (ADS)

    Kitagawa, M.; Yamamoto, Y.

    1987-11-01

    An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.

  2. Hydrograph variances over different timescales in hydropower production networks

    NASA Astrophysics Data System (ADS)

    Zmijewski, Nicholas; Wörman, Anders

    2016-08-01

    The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression for the variance in the outflow response was derived as a function of the trends of hydraulic and geomorphologic dispersion and the management of production and reservoirs. We show that the power spectra of the involved time series follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds on future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in the River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance over periods of <1 week, depending on the Peclet number (Pe) of the stream reach. This implies that flow variance becomes more erratic (closer to white noise) as a result of current production objectives.

  3. Quantum tachyons

    NASA Astrophysics Data System (ADS)

    Tomaschitz, R.

    2005-02-01

    The interaction of superluminal radiation with matter in atomic bound-bound and bound-free transitions is investigated. We study transitions in the relativistic hydrogen atom effected by superluminal quanta. The superluminal radiation field is coupled by minimal substitution to the Dirac equation in a Coulomb potential. We quantize the interaction to obtain the transition matrix for induced and spontaneous superluminal radiation in hydrogen-like ions. The tachyonic photoelectric effect is scrutinized, and the cross-sections for ground-state ionization by transversal and longitudinal tachyons are derived. We examine the relativistic regime of high electronic ejection energies, as well as the first-order correction to the non-relativistic cross-sections. In the ultra-relativistic limit, both the longitudinal and transversal cross-sections are peaked at small but noticeably different scattering angles. In the non-relativistic limit, the longitudinal cross-section has two maxima, and its minimum is located at the transversal maximum. Ionization cross-sections can thus be used to discriminate longitudinal radiation from transversal tachyons and photons.

  4. Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control

    NASA Technical Reports Server (NTRS)

    Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.

  5. One-time pad, complexity of verification of keys, and practical security of quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N., E-mail: sergei.molotkov@gmail.com

    2016-11-15

    A direct relation between the complexity of the complete verification of keys, which is one of the main criteria of security in classical systems, and a trace distance used in quantum cryptography is demonstrated. Bounds for the minimum and maximum numbers of verification steps required to determine the actual key are obtained.

  6. Quantum speedup of Monte Carlo methods.

    PubMed

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
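
    The classical baseline that this quadratic speedup is measured against is easy to see empirically: for a subroutine with bounded variance σ², the root-mean-square error of the sample-mean estimator shrinks as σ/√n, whereas the quantum algorithm attains roughly σ/n. A small illustrative check (the subroutine here is an arbitrary stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
subroutine = lambda: rng.normal(loc=2.0, scale=1.0)   # bounded variance, mean 2.0

for n in (10**2, 10**3, 10**4):
    estimates = [np.mean([subroutine() for _ in range(n)]) for _ in range(50)]
    rmse = np.sqrt(np.mean((np.array(estimates) - 2.0) ** 2))
    print(n, rmse)    # decreases ~ 1/sqrt(n); quantumly this would be ~ 1/n
```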

  7. State-independent uncertainty relations and entanglement detection

    NASA Astrophysics Data System (ADS)

    Qian, Chen; Li, Jun-Li; Qiao, Cong-Feng

    2018-04-01

    The uncertainty relation is one of the key ingredients of quantum theory. Despite the great efforts devoted to this subject, most of the variance-based uncertainty relations are state-dependent and suffer from the triviality problem of zero lower bounds. Here we develop a method to obtain uncertainty relations with state-independent lower bounds. The method works by exploring the eigenvalues of a Hermitian matrix composed of the Bloch vectors of incompatible observables and is applicable to both pure and mixed states and to an arbitrary number of N-dimensional observables. The uncertainty relation for the incompatible observables can be explained by geometric relations related to the parallel postulate and the inequalities in Horn's conjecture on Hermitian matrix sums. Practical entanglement criteria are also presented based on the derived uncertainty relations.

  8. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  9. Truncated Gaussians as tolerance sets

    NASA Technical Reports Server (NTRS)

    Cozman, Fabio; Krotkov, Eric

    1994-01-01

    This work focuses on the use of truncated Gaussian distributions as models for bounded data measurements that are constrained to appear between fixed limits. The authors prove that the truncated Gaussian can be viewed as a maximum entropy distribution for truncated bounded data, when mean and covariance are given. The characteristic function for the truncated Gaussian is presented; from this, algorithms are derived for calculation of mean, variance, summation, application of Bayes rule, and filtering with truncated Gaussians. As an example of the power of their methods, a derivation of the disparity constraint (used in computer vision) from their models is described. The authors' approach complements results in statistics, but their proposal is not only to use the truncated Gaussian as a model for selected data; they propose to model measurements fundamentally in terms of truncated Gaussians.
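
    The moments the authors derive from the characteristic function can be cross-checked numerically; SciPy ships a truncated normal whose parametrization expresses the truncation limits in standard-deviation units. A small sketch (parameter values are arbitrary):

```python
from scipy.stats import truncnorm

def truncated_gaussian(mu, sigma, lo, hi):
    """Gaussian N(mu, sigma^2) truncated to [lo, hi]; scipy expects the
    bounds rescaled as (bound - mu) / sigma."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    return truncnorm(a, b, loc=mu, scale=sigma)

d = truncated_gaussian(mu=0.0, sigma=2.0, lo=-1.0, hi=3.0)
print(d.mean(), d.var())   # both shrink relative to the untruncated N(0, 4)
```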

  10. Using Fisher Information Criteria for Chemical Sensor Selection via Convex Optimization Methods

    DTIC Science & Technology

    2016-11-16

    …proceeds from the determinant of the inverse Fisher information matrix, which is proportional to the global error volume. If a practitioner has a suitable… design of statistical estimators (i.e., sensors), as their respective inverses act as lower bounds to the (co)variances of the subject estimator, a property…
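
    The quantities in this snippet are concrete for the linear Gaussian model y = Hθ + e: the Fisher information is HᵀH/σ², its inverse is the Cramér-Rao covariance bound, and D-optimal sensor selection maximizes det(FIM), i.e., minimizes the error volume. The brute-force sketch below is illustrative only; the report's point is that convex relaxations replace this combinatorial search.

```python
import numpy as np
from itertools import combinations

def fim(H, noise_var=1.0):
    """Fisher information matrix of the linear Gaussian model y = H theta + e;
    its inverse lower-bounds the covariance of any unbiased estimator."""
    return H.T @ H / noise_var

def d_optimal_rows(H, k):
    """Choose k sensor rows of H maximizing det(FIM), equivalently
    minimizing the determinant of the inverse FIM (the error volume)."""
    best, best_det = None, -np.inf
    for rows in combinations(range(H.shape[0]), k):
        d = np.linalg.det(fim(H[list(rows)]))
        if d > best_det:
            best, best_det = rows, d
    return best
```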

  11. Relation between Pressure Balance Structures and Polar Plumes from Ulysses High Latitude Observations

    NASA Technical Reports Server (NTRS)

    Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi

    2002-01-01

    Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to magnetic discontinuities in PBSs. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.
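
    Minimum variance analysis itself is a short eigen-decomposition; the sketch below shows the standard procedure as applied to a magnetometer time series crossing a discontinuity (array shapes and the classification comment are illustrative, not the authors' processing pipeline).

```python
import numpy as np

def minimum_variance_analysis(B):
    """Classic MVA: eigen-decompose the covariance of an (N, 3) magnetic
    field series B; the eigenvector of the smallest eigenvalue estimates
    the discontinuity normal n."""
    M = np.cov(B, rowvar=False)
    lam, V = np.linalg.eigh(M)      # eigenvalues in ascending order
    n = V[:, 0]                     # minimum-variance direction = normal
    Bn = B @ n                      # field component along the normal
    # <Bn> ~ 0 with a rotating tangential field indicates a tangential
    # discontinuity; a significant <Bn> indicates a rotational one.
    return n, lam, Bn.mean()
```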

  12. Relation Between Pressure Balance Structures and Polar Plumes from Ulysses High Latitude Observations

    NASA Technical Reports Server (NTRS)

    Yamauchi, Y.; Suess, Steven T.; Sakurai, T.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to discontinuities. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.

  13. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit/miss and signal-amplitude testing, where signal amplitudes are reduced to hit/miss data by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are executed sequentially in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture-critical inspection are established.
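
    The binomial arithmetic behind the 90/95 criterion fits in a few lines: the one-sided Clopper-Pearson lower bound on POD follows from the beta distribution, and 29 hits in 29 trials is the smallest all-hit sample that demonstrates 90/95. A sketch (not the DOEPOD code itself):

```python
from scipy.stats import beta

def pod_lower_bound(hits, n, conf=0.95):
    """One-sided Clopper-Pearson lower confidence bound on the POD
    from hit/miss data at a single flaw size."""
    if hits == 0:
        return 0.0
    return beta.ppf(1 - conf, hits, n - hits + 1)

print(pod_lower_bound(29, 29))   # ~0.902 >= 0.90, i.e. 90/95 demonstrated
print(pod_lower_bound(28, 29))   # one miss drops the bound below 0.90
```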

  14. Perspectives for laboratory implementation of the Duan-Lukin-Cirac-Zoller protocol for quantum repeaters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendes, Milrian S.; Felinto, Daniel

    2011-12-15

    We analyze the efficiency and scalability of the Duan-Lukin-Cirac-Zoller (DLCZ) protocol for quantum repeaters focusing on the behavior of the experimentally accessible measures of entanglement for the system, taking into account crucial imperfections of the stored entangled states. We then calculate the degradation of the final state of the quantum-repeater linear chain for increasing sizes of the chain, and characterize it by a lower bound on its concurrence and the ability to violate the Clauser-Horne-Shimony-Holt inequality. The states are calculated up to an arbitrary number of stored excitations, as this number is not fundamentally bounded for experiments involving large atomic ensembles. The measurement by avalanche photodetectors is modeled by "ON/OFF" positive operator-valued measure operators. As a result, we are able to consistently test the approximation of the real fields by fields with a finite number of excitations, determining the minimum number of excitations required to achieve a desired precision in the prediction of the various measured quantities. This analysis finally determines the minimum purity of the initial state that is required to succeed in the protocol as the size of the chain increases. We also provide a more accurate estimate for the average time required to succeed in each step of the protocol. The minimum purity analysis and the new time estimates are then combined to trace the perspectives for implementation of the DLCZ protocol in present-day laboratory setups.

  15. Perspectives for laboratory implementation of the Duan-Lukin-Cirac-Zoller protocol for quantum repeaters

    NASA Astrophysics Data System (ADS)

    Mendes, Milrian S.; Felinto, Daniel

    2011-12-01

    We analyze the efficiency and scalability of the Duan-Lukin-Cirac-Zoller (DLCZ) protocol for quantum repeaters focusing on the behavior of the experimentally accessible measures of entanglement for the system, taking into account crucial imperfections of the stored entangled states. We then calculate the degradation of the final state of the quantum-repeater linear chain for increasing sizes of the chain, and characterize it by a lower bound on its concurrence and the ability to violate the Clauser-Horne-Shimony-Holt inequality. The states are calculated up to an arbitrary number of stored excitations, as this number is not fundamentally bounded for experiments involving large atomic ensembles. The measurement by avalanche photodetectors is modeled by “ON/OFF” positive operator-valued measure operators. As a result, we are able to consistently test the approximation of the real fields by fields with a finite number of excitations, determining the minimum number of excitations required to achieve a desired precision in the prediction of the various measured quantities. This analysis finally determines the minimum purity of the initial state that is required to succeed in the protocol as the size of the chain increases. We also provide a more accurate estimate for the average time required to succeed in each step of the protocol. The minimum purity analysis and the new time estimates are then combined to trace the perspectives for implementation of the DLCZ protocol in present-day laboratory setups.

  16. Dispersion and waves in bounded plasmas with subwavelength inhomogeneities: Genesis of MEFIB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharjee, Sudeep

    Bounded plasmas exhibit many interesting behaviors that are not found in plasmas of 'infinite' extent such as space and astrophysical plasmas. Our studies have revealed that the dispersion properties of waves in a bounded magnetoplasma deviate considerably from the predictions of the Clemmow-Mullaly-Allis (CMA) model, giving rise to new regimes of wave propagation and absorption. The anisotropy of the medium, dictated by the length scales of plasma nonuniformity and magnetostatic field inhomogeneity, leads to rotation of the polarization axis, an effect similar to the Cotton-Mouton effect in a magneto-optic medium but with distinct differences due to wave-induced resonances. This article highlights some of these interesting effects observed experimentally and corroborated with Monte Carlo simulations. One of the principal outcomes of this research is the genesis of a novel multielement focused ion beam (MEFIB) system that utilizes compact bounded plasmas in a minimum-B field to provide intense focused ion beams of a variety of elements for new research in nanoscience and technology.

  17. Continuous hierarchical slope-aspect color display for parametric surfaces

    NASA Technical Reports Server (NTRS)

    Moellering, Harold J. (Inventor); Kimerling, A. Jon (Inventor)

    1994-01-01

    A method for generating an image of a parametric surface, such as the aspect of terrain, that maximizes color contrast to permit easy discrimination of the magnitude, ranges, intervals, or classes of a surface parameter while making it easy for the user to visualize the form of the surface, such as a landscape. The four pole colors of the opponent process color theory are utilized to represent intervals or classes at 90-degree angles. The color perceived as having maximum measured luminance is selected to portray the azimuth of an assumed light source, and the color showing minimum measured luminance portrays the diametrically opposite azimuth. The intermediate azimuths, at 90-degree spacing, are portrayed by unique colors of intermediate measured luminance, such as red and green. Colors between these four pole colors are perceived as mixtures or combinations of their bounding colors and are arranged progressively so that the perceived mixture is proportional to the interval's angular distance from its bounding colors.

  18. Sucrose lyophiles: a semi-quantitative study of residual water content by total X-ray diffraction analysis.

    PubMed

    Bates, S; Jonaitis, D; Nail, S

    2013-10-01

    Total X-ray Powder Diffraction Analysis (TXRPD) using transmission geometry was able to observe significant variance in measured powder patterns for sucrose lyophilizates with differing residual water contents. Integrated diffraction intensity corresponding to the observed variances was found to be linearly correlated with residual water content as measured by an independent technique. The observed variance was concentrated in two distinct regions of the lyophilizate powder pattern, corresponding to the characteristic sucrose matrix double halo and the high-angle diffuse region normally associated with free water. Full pattern fitting of the lyophilizate powder patterns suggested that the high-angle variance was better described by the characteristic diffraction profile of a concentrated sucrose/water system than by the free-water diffraction profile. This suggests that the residual water in the sucrose lyophilizates is intimately mixed at the molecular level with sucrose molecules, forming a liquid/solid solution. The bound nature of the residual water and its impact on the sucrose matrix give an enhanced diffraction response between 3.0 and 3.5 beyond that expected for free water. The enhanced diffraction response allows semi-quantitative analysis of residual water contents within the studied sucrose lyophilizates to levels below 1% by weight. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Attempts to Simulate Anisotropies of Solar Wind Fluctuations Using MHD with a Turning Magnetic Field

    NASA Technical Reports Server (NTRS)

    Ghosh, Sanjoy; Roberts, D. Aaron

    2010-01-01

    We examine a "two-component" model of the solar wind to see if any of the observed anisotropies of the fields can be explained in light of the need for various quantities, such as the magnetic minimum variance direction, to turn along with the Parker spiral. Previous results used a 3-D MHD spectral code to show that neither Q2D nor slab-wave components will turn their wave vectors in a turning Parker-like field, and that nonlinear interactions between the components are required to reproduce observations. In these new simulations we use higher resolution in both decaying and driven cases, and with and without a turning background field, to see what, if any, conditions lead to variance anisotropies similar to observations. We focus especially on the middle spectral range, and not the energy-containing scales, of the simulation for comparison with the solar wind. Preliminary results have shown that it is very difficult to produce the required variances with a turbulent cascade.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my

    Recently, many rainfall network design techniques have been developed, discussed, and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific understanding. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia, in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature, and wind speed data collected during the monsoon season (November-February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.
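
    The optimization loop itself is compact. The sketch below anneals k gauge positions over a candidate grid against a toy objective standing in for the geostatistical estimated variance (for common variogram models, estimation variance grows with distance to the nearest gauge); the actual study minimizes the kriging variance of the multivariate data set.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_min_dist2(gauges, grid):
    """Toy surrogate for the estimation variance: mean squared distance
    from each grid point to its nearest gauge. gauges: (k, 2); grid: (G, 2)."""
    d2 = ((grid[:, None, :] - gauges[None, :, :])**2).sum(-1)
    return d2.min(axis=1).mean()

def anneal_network(candidates, grid, k, n_iter=5000, T0=1.0):
    """Simulated annealing over k-subsets of candidate gauge sites."""
    idx = rng.choice(len(candidates), k, replace=False)
    cost = mean_min_dist2(candidates[idx], grid)
    for t in range(n_iter):
        T = T0 * (1 - t / n_iter) + 1e-9          # linear cooling schedule
        new = idx.copy()
        new[rng.integers(k)] = rng.integers(len(candidates))  # move one gauge
        if len(set(new)) < k:                     # reject duplicate sites
            continue
        c = mean_min_dist2(candidates[new], grid)
        if c < cost or rng.random() < np.exp((cost - c) / T):
            idx, cost = new, c                    # accept (Metropolis rule)
    return idx, cost
```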

  1. Bounds on the minimum number of recombination events in a sample history.

    PubMed Central

    Myers, Simon R; Griffiths, Robert C

    2003-01-01

    Recombination is an important evolutionary factor in many organisms, including humans, and understanding its effects is an important task facing geneticists. Detecting past recombination events is thus important; this article introduces statistics that give a lower bound on the number of recombination events in the history of a sample, on the basis of the patterns of variation in the sample DNA. Such lower bounds are appropriate, since many recombination events in the history are typically undetectable, so the true number of historical recombinations is unobtainable. The statistics can be calculated quickly by computer and improve upon the earlier bound of Hudson and Kaplan (1985). A method is developed to combine bounds on local regions in the data to produce more powerful improved bounds. The method is flexible to different models of recombination occurrence. The approach gives recombination event bounds between all pairs of sites, to help identify regions with more detectable recombinations, and these bounds can be viewed graphically. Under coalescent simulations, there is a substantial improvement over the earlier method (of up to a factor of 2) in the expected number of recombination events detected by one of the new minima, across a wide range of parameter values. The method is applied to data from a region within the lipoprotein lipase gene and the amount of detected recombination is substantially increased. Further, there is strong clustering of detected recombination events in an area near the center of the region. A program implementing these statistics, which was used for this article, is available from http://www.stats.ox.ac.uk/mathgen/programs.html. PMID:12586723
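
    The Hudson-Kaplan bound Rm that these statistics improve upon is itself short to compute: flag every pair of sites failing the four-gamete test, then count disjoint flagged intervals. A minimal sketch under the infinite-sites assumption (0/1 haplotype matrix; names are illustrative):

```python
def four_gamete(col_i, col_j):
    """True if all four gametes 00, 01, 10, 11 occur at the two sites,
    implying at least one recombination between them (infinite sites)."""
    return len({(a, b) for a, b in zip(col_i, col_j)}) == 4

def hudson_kaplan_rm(H):
    """Hudson & Kaplan (1985) lower bound Rm from a matrix H whose rows
    are 0/1 haplotypes and whose columns are segregating sites."""
    n_sites = len(H[0])
    cols = list(zip(*H))
    incompatible = [(i, j) for i in range(n_sites)
                    for j in range(i + 1, n_sites)
                    if four_gamete(cols[i], cols[j])]
    incompatible.sort(key=lambda ij: ij[1])   # greedy interval scheduling
    rm, last_end = 0, -1
    for i, j in incompatible:
        if i >= last_end:     # shares no inter-site gap with the last pick
            rm += 1
            last_end = j
    return rm

# toy data: 4 haplotypes, 3 sites, all site pairs incompatible,
# so one event is needed in each of the two inter-site gaps:
print(hudson_kaplan_rm([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]))  # 2
```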

  2. Self-bound droplets of a dilute magnetic quantum liquid

    NASA Astrophysics Data System (ADS)

    Schmitt, Matthias; Wenzel, Matthias; Böttcher, Fabian; Ferrier-Barbut, Igor; Pfau, Tilman

    2016-11-01

    Self-bound many-body systems are formed through a balance of attractive and repulsive forces and occur in many physical scenarios. Liquid droplets are an example of a self-bound system, formed by a balance of the mutual attractive and repulsive forces that derive from different components of the inter-particle potential. It has been suggested that self-bound ensembles of ultracold atoms should exist for atom number densities that are 10⁸ times lower than in a helium droplet, which is formed from a dense quantum liquid. However, such ensembles have been elusive up to now because they require forces other than the usual zero-range contact interaction, which is either attractive or repulsive but never both. On the basis of the recent finding that an unstable bosonic dipolar gas can be stabilized by a repulsive many-body term, it was predicted that three-dimensional self-bound quantum droplets of magnetic atoms should exist. Here we report the observation of such droplets in a trap-free levitation field. We find that this dilute magnetic quantum liquid requires a minimum, critical number of atoms, below which the liquid evaporates into an expanding gas as a result of the quantum pressure of the individual constituents. Consequently, around this critical atom number we observe an interaction-driven phase transition between a gas and a self-bound liquid in the quantum degenerate regime with ultracold atoms. These droplets are the dilute counterpart of strongly correlated self-bound systems such as atomic nuclei and helium droplets.

  3. Self-bound droplets of a dilute magnetic quantum liquid.

    PubMed

    Schmitt, Matthias; Wenzel, Matthias; Böttcher, Fabian; Ferrier-Barbut, Igor; Pfau, Tilman

    2016-11-10

    Self-bound many-body systems are formed through a balance of attractive and repulsive forces and occur in many physical scenarios. Liquid droplets are an example of a self-bound system, formed by a balance of the mutual attractive and repulsive forces that derive from different components of the inter-particle potential. It has been suggested that self-bound ensembles of ultracold atoms should exist for atom number densities that are 10⁸ times lower than in a helium droplet, which is formed from a dense quantum liquid. However, such ensembles have been elusive up to now because they require forces other than the usual zero-range contact interaction, which is either attractive or repulsive but never both. On the basis of the recent finding that an unstable bosonic dipolar gas can be stabilized by a repulsive many-body term, it was predicted that three-dimensional self-bound quantum droplets of magnetic atoms should exist. Here we report the observation of such droplets in a trap-free levitation field. We find that this dilute magnetic quantum liquid requires a minimum, critical number of atoms, below which the liquid evaporates into an expanding gas as a result of the quantum pressure of the individual constituents. Consequently, around this critical atom number we observe an interaction-driven phase transition between a gas and a self-bound liquid in the quantum degenerate regime with ultracold atoms. These droplets are the dilute counterpart of strongly correlated self-bound systems such as atomic nuclei and helium droplets.

  4. Experimental demonstration of quantum teleportation of a squeezed state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takei, Nobuyuki; Aoki, Takao; Yonezawa, Hidehiro

    2005-10-15

    Quantum teleportation of a squeezed state is demonstrated experimentally. Due to some inevitable losses in experiments, a squeezed vacuum necessarily becomes a mixed state which is no longer a minimum uncertainty state. We establish an operational method of evaluation for quantum teleportation of such a state using fidelity and discuss the classical limit for the state. The measured fidelity for the input state is 0.85 ± 0.05, which is higher than the classical case of 0.73 ± 0.04. We also verify that the teleportation process operates properly for the nonclassical state input and that its squeezed variance is certainly transferred through the process. We observe a smaller variance of the teleported squeezed state than that for the vacuum state input.

  5. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
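
    The white-sequence quantizer model referenced above has a simple quantitative core: for a uniform quantizer with step q, the error is roughly uniform on [-q/2, q/2] with variance q²/12 when the input exercises many levels, which is what makes an equivalent spectral density and effective SNR meaningful. A quick numerical check (illustrative only):

```python
import numpy as np

def quantize(x, n_bits, full_scale=1.0):
    """Uniform mid-tread quantizer covering [-full_scale, full_scale]."""
    q = 2 * full_scale / 2**n_bits
    return q * np.round(x / q)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100_000)          # input exercising many levels
for b in (4, 8, 12):
    q = 2 / 2**b
    err = quantize(x, b) - x
    print(b, err.var(), q**2 / 12)       # measured variance ~ q^2 / 12
```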

  6. In vivo tear film thickness measurement and tear film dynamics visualization using spectral domain OCT and an efficient delay estimator

    NASA Astrophysics Data System (ADS)

    Aranha dos Santos, Valentin; Schmetterer, Leopold; Gröschl, Martin; Garhofer, Gerhard; Werkmeister, René M.

    2016-03-01

    Dry eye syndrome is a highly prevalent disease of the ocular surface characterized by an instability of the tear film. Traditional methods used for the evaluation of tear film stability are invasive or show limited repeatability. Here we propose a new noninvasive approach to measure tear film thickness using an efficient delay estimator and ultrahigh-resolution spectral domain OCT. Silicon wafer phantoms with layers of known thickness and group index were used to validate the estimator-based thickness measurement. A theoretical analysis of the fundamental limit of the precision of the estimator is presented, and the analytical expression of the Cramér-Rao lower bound (CRLB), which is the minimum variance that may be achieved by any unbiased estimator, is derived. The performance of the estimator against noise was investigated using simulations. We found that the proposed estimator reaches the CRLB associated with the OCT amplitude signal. The technique was applied in vivo in healthy subjects and dry eye patients. Series of tear film thickness maps were generated, allowing for the visualization of tear film dynamics. Our results show that the central tear film thickness can be precisely measured in vivo, with a coefficient of variation of about 0.65%, and that repeatable tear film dynamics can be observed. The presented method has the potential of being an alternative to breakup time (BUT) measurements and could be used in a clinical setting to study patients with dry eye disease and monitor their treatments.

  7. Statistical procedures for determination and verification of minimum reporting levels for drinking water methods.

    PubMed

    Winslow, Stephen D; Pepich, Barry V; Martin, John J; Hallberg, George R; Munch, David J; Frebis, Christopher P; Hedrick, Elizabeth J; Krop, Richard A

    2006-01-01

    The United States Environmental Protection Agency's Office of Ground Water and Drinking Water has developed a single-laboratory quantitation procedure: the lowest concentration minimum reporting level (LCMRL). The LCMRL is the lowest true concentration for which future recovery is predicted to fall, with high confidence (99%), between 50% and 150%. The procedure takes into account precision and accuracy. Multiple concentration replicates are processed through the entire analytical method and the data are plotted as measured sample concentration (y-axis) versus true concentration (x-axis). If the data support an assumption of constant variance over the concentration range, an ordinary least-squares regression line is drawn; otherwise, a variance-weighted least-squares regression is used. Prediction interval lines of 99% confidence are drawn about the regression. At the points where the prediction interval lines intersect with data quality objective lines of 50% and 150% recovery, lines are dropped to the x-axis. The higher of the two values is the LCMRL. The LCMRL procedure is flexible because the data quality objectives (50-150%) and the prediction interval confidence (99%) can be varied to suit program needs. The LCMRL determination is performed during method development only. A simpler procedure for verification of data quality objectives at a given minimum reporting level (MRL) is also presented. The verification procedure requires a single set of seven samples taken through the entire method procedure. If the calculated prediction interval is contained within data quality recovery limits (50-150%), the laboratory performance at the MRL is verified.
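
    The constant-variance branch of the procedure reduces to ordinary least squares plus two-sided prediction intervals. The sketch below computes the 99% prediction band; the LCMRL is then read off as the lowest true concentration at which the whole band lies within the 50-150% recovery lines. This is an illustrative reconstruction, not the Agency's implementation, and it omits the variance-weighted branch.

```python
import numpy as np
from scipy import stats

def prediction_band(x, y, x_grid, conf=0.99):
    """OLS fit of measured (y) vs. true (x) concentration, with two-sided
    prediction intervals at confidence `conf` over x_grid."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)
    s = np.sqrt(np.sum((y - (b0 + b1 * x))**2) / (n - 2))
    t = stats.t.ppf(1 - (1 - conf) / 2, n - 2)
    se = s * np.sqrt(1 + 1/n + (x_grid - x.mean())**2 / np.sum((x - x.mean())**2))
    yhat = b0 + b1 * x_grid
    return yhat - t * se, yhat + t * se

# LCMRL read-off: the lowest x in x_grid where the lower band exceeds
# 0.5 * x_grid and the upper band stays below 1.5 * x_grid.
```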

  8. The effectiveness of texture analysis for mapping forest land using the panchromatic bands of Landsat 7, SPOT, and IRS imagery

    Treesearch

    Michael L. Hoppus; Rachel I. Riemann; Andrew J. Lister; Mark V. Finco

    2002-01-01

    The panchromatic bands of Landsat 7, SPOT, and IRS satellite imagery provide an opportunity to evaluate the effectiveness of texture analysis of satellite imagery for mapping of land use/cover, especially forest cover. A variety of texture algorithms, including standard deviation, Ryherd-Woodcock minimum variance adaptive window, low pass etc., were applied to moving...

  9. Solution Methods for Certain Evolution Equations

    NASA Astrophysics Data System (ADS)

    Vega-Guzman, Jose Manuel

    Solution methods for certain linear and nonlinear evolution equations are presented in this dissertation. Emphasis is placed mainly on the analytical treatment of nonautonomous differential equations, which are challenging to solve despite the existing numerical and symbolic computational software. Ideas from transformation theory are adopted, allowing one to solve the problems under consideration from a non-traditional perspective. First, the Cauchy initial value problem is considered for a class of nonautonomous and inhomogeneous linear diffusion-type equations on the entire real line. Explicit transformations are used to reduce the equations under study to their corresponding standard forms, emphasizing natural relations with certain Riccati (and/or Ermakov)-type systems. These relations give solvability results for the Cauchy problem of the parabolic equation considered. The superposition principle allows one to solve this problem formally from an unconventional point of view. An eigenfunction expansion approach is also considered for this general evolution equation. Examples considered to corroborate the efficacy of the proposed solution methods include the Fokker-Planck equation, the Black-Scholes model, and the one-factor Gaussian Hull-White model. The results obtained in the first part are used to solve the Cauchy initial value problem for certain inhomogeneous Burgers-type equations. The connection between linear (diffusion-type) and nonlinear (Burgers-type) parabolic equations is stressed in order to establish a strong commutative relation. Traveling wave solutions of a nonautonomous Burgers equation are also investigated. Finally, the minimum-uncertainty squeezed states for quantum harmonic oscillators are constructed explicitly. They are derived by the action of the corresponding maximal kinematical invariance group on the standard ground state solution. It is shown that the product of the variances attains the required minimum value only at the instants when one variance is a minimum and the other is a maximum, that is, when the squeezing of one of the variances occurs. Such explicit construction is possible due to the relation between the diffusion-type equation studied in the first part and the time-dependent Schrodinger equation. A modification of the radiation field operators for squeezed photons in a perfect cavity is also suggested, with the help of a nonstandard solution of Heisenberg's equation of motion.

  10. Eigenspace-based minimum variance beamformer combined with Wiener postfilter for medical ultrasound imaging.

    PubMed

    Zeng, Xing; Chen, Cheng; Wang, Yuanyuan

    2012-12-01

    In this paper, a new beamformer which combines the eigenspace-based minimum variance (ESBMV) beamformer with a Wiener postfilter is proposed for medical ultrasound imaging. The primary goal of this work is to further improve medical ultrasound imaging quality on the basis of the ESBMV beamformer. In this method, we optimize the ESBMV weights with a Wiener postfilter. With this optimization, the output power of the new beamformer becomes closer to the actual signal power at the imaging point than that of the ESBMV beamformer. Unlike the ordinary Wiener postfilter, the output signal and noise powers needed in calculating the Wiener postfilter are estimated, respectively, from the orthogonal signal subspace and noise subspace constructed from the eigenstructure of the sample covariance matrix. We demonstrate the performance of the new beamformer when resolving point scatterers and a cyst phantom using both simulated and experimental data, and compare it with the delay-and-sum (DAS), minimum variance (MV), and ESBMV beamformers. We use the full width at half maximum (FWHM) and the peak side lobe level (PSL) to quantify imaging resolution, and the contrast ratio (CR) to quantify imaging contrast. The FWHM of the new beamformer is only 15%, 50%, and 50% of those of the DAS, MV, and ESBMV beamformers, while the PSL is 127.2 dB, 115 dB, and 60 dB lower. What is more, an improvement of 239.8%, 232.5%, and 32.9% in CR using simulated data and an improvement of 814%, 1410.7%, and 86.7% in CR using experimental data are achieved compared to the DAS, MV, and ESBMV beamformers, respectively. In addition, the effect of sound speed error is investigated by artificially overestimating the speed used in calculating the propagation delay, and the results show that the new beamformer provides better robustness against sound speed errors. Therefore, the proposed beamformer offers better performance than the DAS, MV, and ESBMV beamformers, showing its potential in medical ultrasound imaging. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Longitudinal design considerations to optimize power to detect variances and covariances among rates of change: Simulation results based on actual longitudinal studies

    PubMed Central

    Rast, Philippe; Hofer, Scott M.

    2014-01-01

    We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among the number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to detect power ≥ .80 was non-linear, with rapidly decreasing sample sizes needed as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, of error variance with GRR, and parameter values that are largely out of bounds of actual study values. Power to detect change is generally low in the early phases (i.e., first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. PMID:24219544

  12. On the Directional Dependence and Null Space Freedom in Uncertainty Bound Identification

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    1997-01-01

    In previous work, the determination of uncertainty models via minimum norm model validation was based on a single set of input and output measurement data. Since uncertainty bounds at each frequency are directionally dependent for multivariable systems, this leads to optimistic uncertainty levels. In addition, the design freedom in the uncertainty model had not been utilized to further reduce uncertainty levels. The above issues are addressed by formulating a min-max problem. An analytical solution to the min-max problem is given to within a generalized eigenvalue problem, thus avoiding a direct numerical approach. This result leads to less conservative and more realistic uncertainty models for use in robust control.

  13. An elastic band exercise program for older adults using wheelchairs in Taiwan nursing homes: a cluster randomized trial.

    PubMed

    Chen, Kuei-Min; Li, Chun-Huw; Chang, Ya-Hui; Huang, Hsin-Ting; Cheng, Yin-Yin

    2015-01-01

    The number of older adults using wheelchairs in nursing homes is over 50% of that population, and many of them use wheelchairs due to muscle weakness in the lower extremities. Muscles of older adults are trainable, and progressive resistance exercises using elastic bands can increase muscle strength in older adults. To test the effectiveness of six months of Wheelchair-bound Senior Elastic Band exercises on the functional fitness of older adults in nursing homes. Cluster randomized trial. Ten nursing homes, southern Taiwan. 127 participants were recruited, and 114 of them completed the study. Inclusion criteria were: (1) aged 65 and over, (2) using wheelchairs for mobility, (3) living in the facility for at least three months, (4) cognitively intact, and (5) heavy or moderate dependency in activities of daily living. The mean age of the participants was 79.15 (7.03) years, and 98.20% of them had chronic illnesses. Participants were randomly assigned to the experimental (five nursing homes, n=59) or the control (five nursing homes, n=55) group based on the nursing homes where they stayed. A 40-min Wheelchair-bound Senior Elastic Band exercise program was implemented three times per week for six months for the experimental group participants. The functional fitness (activities of daily living, lung capacity, body flexibility, muscle power, and endurance) of the participants was examined at baseline, after three months, and at the end of the six-month study. A mixed-design, two-way analysis of variance was used to detect interaction effects, and one-way repeated measures analysis of variance and analysis of covariance were performed to analyze within-group and between-group differences. At the end of the six-month study, the Wheelchair-bound Senior Elastic Band group had better performance in all of the functional fitness indicators than the control group (all p<0.05). The Wheelchair-bound Senior Elastic Band exercises significantly improved the functional fitness of the older adults in wheelchairs. It is suggested that the program be incorporated as a part of daily activities for nursing home older adults in wheelchairs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems

    DOE PAGES

    Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...

    2015-10-30

    In some application cases, the solutions of combinatorial optimization problems on graphs should satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem. We prove that they are NP-complete; thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c. We also present two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n − 1)/(k − 1), where n is the number of vertices in the input graph. The second approximation algorithm for DamkS has an approximation ratio of O(n^δ) for some δ < 1/3.

  15. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model.

    PubMed

    Jurczyk, Jan; Eckrot, Alexander; Morgenstern, Ingo

    2016-01-01

    The world is still recovering from the financial crisis that peaked in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground-states of the mean-variance model along the efficient frontier bound to real-world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign.
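
    The "ground state" of the unconstrained mean-variance model has a closed form that anchors the constrained, real-world frontier the authors track. A minimal sketch of that baseline (the paper's version adds the constraints):

```python
import numpy as np

def min_variance_weights(Sigma):
    """Fully invested global minimum-variance portfolio,
    w = Sigma^-1 1 / (1^T Sigma^-1 1); no short-sale or other
    real-world constraints are imposed here."""
    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)
    return w / w.sum()

# example: estimate Sigma from a (T, N) matrix of index returns R, then
# Sigma = np.cov(R, rowvar=False); w = min_variance_weights(Sigma)
```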

  16. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies

    PubMed Central

    Rukhin, Andrew L.

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving the likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583

  17. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    PubMed

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered to be known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  18. Quantum Theory of Three-Dimensional Superresolution Using Rotating-PSF Imagery

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Yu, Z.

    The inverse of the quantum Fisher information (QFI) matrix (and extensions thereof) provides the ultimate lower bound on the variance of any unbiased estimation of a parameter from statistical data, whether of intrinsically quantum mechanical or classical character. We calculate the QFI for Poisson-shot-noise-limited imagery using the rotating PSF that can localize and resolve point sources fully in all three dimensions. We also propose an experimental approach based on the use of a computer-generated hologram and projective measurements to realize the QFI-limited variance for the problem of super-resolving a closely spaced pair of point sources at a highly reduced photon cost. The paper presents a preliminary analysis of the quantum-limited three-dimensional (3D) pair optical super-resolution (OSR) problem with potential applications to astronomical imaging and 3D space-debris localization.

  19. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model

    PubMed Central

    Morgenstern, Ingo

    2016-01-01

    The world is still recovering from the financial crisis peaking in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor’s behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground-states of the mean-variance model along the efficient frontier, subject to real-world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign. PMID:27351482

  20. Realization of the Einstein-Podolsky-Rosen paradox using momentum- and position-entangled photons from spontaneous parametric down conversion.

    PubMed

    Howell, John C; Bennink, Ryan S; Bentley, Sean J; Boyd, R W

    2004-05-28

    We report on a momentum-position realization of the EPR paradox using direct detection in the near and far fields of the photons emitted by collinear type-II phase-matched parametric down conversion. Using this approach we achieved a measured two-photon momentum-position variance product of 0.01ℏ², which dramatically violates the bounds for the EPR and separability criteria.

  1. Radar Imaging for Urban Sensing

    DTIC Science & Technology

    2010-04-01

    waveforms, we fix the input SNR to the matched filter in all cases. The noise variance may be obtained as in Eq. (20), where Pmax is the highest...maximum likelihood, and Cramer-Rao bound," IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, no. 5, pp. 720-741, May 1989. [16] L. Frazier...it may fail to work when directly applied to extended targets or target returns of low SNR. The Beamspace MUSIC (BS-MUSIC), in which the MUSIC

  2. Sex versus asex: An analysis of the role of variance conversion.

    PubMed

    Lewis-Pye, Andrew E M; Montalbán, Antonio

    2017-04-01

    The question as to why most complex organisms reproduce sexually remains a very active research area in evolutionary biology. Theories dating back to Weismann have suggested that the key may lie in the creation of increased variability in offspring, causing enhanced response to selection. Under appropriate conditions, selection is known to result in the generation of negative linkage disequilibrium, with the effect of recombination then being to increase genetic variance by reducing these negative associations between alleles. It has therefore been a matter of significant interest to understand precisely those conditions resulting in negative linkage disequilibrium, and to recognise also the conditions in which the corresponding increase in genetic variation will be advantageous. Here, we prove rigorous results for the multi-locus case, detailing the build up of negative linkage disequilibrium, and describing the long term effect on population fitness for models with and without bounds on fitness contributions from individual alleles. Under the assumption of large but finite bounds on fitness contributions from alleles, the non-linear nature of the effect of recombination on a population presents serious obstacles in finding the genetic composition of populations at equilibrium, and in establishing convergence to those equilibria. We describe techniques for analysing the long term behaviour of sexual and asexual populations for such models, and use these techniques to establish conditions resulting in higher fitnesses for sexually reproducing populations. Copyright © 2017 Elsevier Inc. All rights reserved.
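    The mechanism described above can be illustrated deterministically in a two-locus haploid toy model: negative linkage disequilibrium D depresses the additive genetic variance, and recombination erodes D toward zero, restoring variance. The starting frequencies and recombination rate below are illustrative, not the paper's model.

    ```python
    # Two-locus haploid sketch: negative linkage disequilibrium D lowers the
    # additive genetic variance, and recombination (rate r) erodes D and
    # restores it. Starting frequencies are illustrative, not the paper's.
    import numpy as np

    x = np.array([0.15, 0.35, 0.35, 0.15])  # freqs of AB, Ab, aB, ab
    r = 0.2                                  # recombination rate

    for gen in range(6):
        D = x[0] * x[3] - x[1] * x[2]                # linkage disequilibrium
        pA, pB = x[0] + x[1], x[0] + x[2]            # allele frequencies
        var = pA * (1 - pA) + pB * (1 - pB) + 2 * D  # variance of allele count
        print(f"gen {gen}: D = {D:+.4f}, genetic variance = {var:.4f}")
        x = x + r * D * np.array([-1.0, 1.0, 1.0, -1.0])  # recombination step
    ```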

  3. Monogamy inequalities for certifiers of continuous-variable Einstein-Podolsky-Rosen entanglement without the assumption of Gaussianity

    NASA Astrophysics Data System (ADS)

    Rosales-Zárate, L.; Teh, R. Y.; Opanchuk, B.; Reid, M. D.

    2017-08-01

    We consider three modes A, B, and C and derive monogamy inequalities that constrain the distribution of bipartite continuous variable Einstein-Podolsky-Rosen entanglement amongst the three modes. The inequalities hold without the assumption of Gaussian states, and are based on measurements of the quadrature phase amplitudes X_i and P_i at each mode i = A, B, C. The first monogamy inequality involves the well-known quantity D_IJ defined by Duan-Giedke-Cirac-Zoller as the sum of the variances of (X_I − X_J)/2 and (P_I + P_J)/2, where [X_I, P_J] = δ_IJ. Entanglement between I and J is certified if D_IJ < 1. A second monogamy inequality involves the more general entanglement certifier Ent_IJ, defined as the normalized product of the variances of X_I − g X_J and P_I + g P_J, where g is a real constant. The monogamy inequalities give a lower bound on the values of D_BC and Ent_BC for one pair, given the values D_BA and Ent_BA for the first pair. This lower bound changes in the absence of two-mode Gaussian steering of B. We illustrate for a range of tripartite entangled states, identifying regimes of saturation of the inequalities. The monogamy relations explain without the assumption of Gaussianity the experimentally observed saturation at D_AB = 0.5, where there is symmetry between modes A and C.
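    A numerical sanity check of the Duan-Giedke-Cirac-Zoller certifier defined above, sampling Gaussian quadrature statistics for a two-mode squeezed state in a convention where the vacuum gives D_AB = 1 (so D_AB < 1 certifies entanglement); the squeezing parameter is an assumption.

    ```python
    # Numerical check of the certifier D_AB described above for a two-mode
    # squeezed Gaussian state (convention: vacuum quadrature variance = 1,
    # so D_AB = 1 at the separability boundary). rsq is an assumed parameter.
    import numpy as np

    rng = np.random.default_rng(3)
    rsq = 1.0
    c, s = np.cosh(2 * rsq), np.sinh(2 * rsq)
    cov_x = [[c, s], [s, c]]    # Cov of (X_A, X_B); Var(X_A - X_B) = 2e^{-2r}
    cov_p = [[c, -s], [-s, c]]  # Cov of (P_A, P_B); Var(P_A + P_B) = 2e^{-2r}

    XA, XB = rng.multivariate_normal([0, 0], cov_x, 200_000).T
    PA, PB = rng.multivariate_normal([0, 0], cov_p, 200_000).T

    D_AB = np.var((XA - XB) / 2) + np.var((PA + PB) / 2)
    print(D_AB, "vs e^{-2r} =", np.exp(-2 * rsq))  # < 1: entangled
    ```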

  4. On a theorem of existence for scaling problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osmolovskii, V.G.

    1995-12-05

    The authors study the question of the existence of the global minimum of the functional over the set of functions, where Ω ⊂ R^n is a bounded domain, and a fixed function K(x,y) = K(y,x) belongs to L_2(Ω × Ω). Such functionals arise in some mathematical models of economics and sociology.

  5. Language structures used by kindergartners with cochlear implants: relationship to phonological awareness, lexical knowledge and hearing loss.

    PubMed

    Nittrouer, Susan; Sansom, Emily; Low, Keri; Rice, Caitlin; Caldwell-Tarr, Amanda

    2014-01-01

    Listeners use their knowledge of how language is structured to aid speech recognition in everyday communication. When it comes to children with congenital hearing loss severe enough to warrant cochlear implants (CIs), the question arises of whether these children can acquire the language knowledge needed to aid speech recognition, in spite of only having spectrally degraded signals available to them. That question was addressed in the present study. Specifically, there were three goals: (1) to compare the language structures used by children with CIs to those of children with normal hearing (NH); (2) to assess the amount of variance in the language measures explained by phonological awareness and lexical knowledge; and (3) to assess the amount of variance in the language measures explained by factors related to the hearing loss itself and subsequent treatment. Language samples were obtained and transcribed for 40 children who had just completed kindergarten: 19 with NH and 21 with CIs. Five measures were derived from Systematic Analysis of Language Transcripts: (1) mean length of utterance in morphemes, (2) number of conjunctions, excluding and, (3) number of personal pronouns, (4) number of bound morphemes, and (5) number of different words. Measures were also collected on phonological awareness and lexical knowledge. Statistics examined group differences, as well as the amount of variance in the language measures explained by phonological awareness, lexical knowledge, and factors related to hearing loss and its treatment for children with CIs. Mean scores of children with CIs were roughly one standard deviation below those of children with NH on all language measures, including lexical knowledge, matching outcomes of other studies. Mean scores of children with CIs were closer to two standard deviations below those of children with NH on two out of three measures of phonological awareness (specifically those related to phonemic structure). Lexical knowledge explained significant amounts of variance on three language measures, but only one measure of phonological awareness (sensitivity to word-final phonemic structure) explained any significant amount of unique variance beyond that, and on only one language measure (number of bound morphemes). Age at first implant, but no other factors related to hearing loss or its treatment, explained significant amounts of variance on the language measures, as well. In spite of early intervention and advances in implant technology, children with CIs are still delayed in learning language, but grammatical knowledge is less affected than phonological awareness. Because there was little contribution to language development measured for phonological awareness independent of lexical knowledge, it was concluded that children with CIs could benefit from intervention focused specifically on helping them learn language structures, in spite of the likely phonological deficits they experience as a consequence of having degraded inputs.

  6. Computing Bounds on Resource Levels for Flexible Plans

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan. Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations: one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, one could substitute the looser bounds that are currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm whose measure of complexity is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The measure of asymptotic complexity of the algorithm is O(N · maxflow(N)), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x, and maxflow(N) is the measure of complexity (and thus of cost) of a maximum-flow algorithm applied to an auxiliary flow network of 2N nodes. The algorithm is believed to be efficient in practice; experimental analysis shows the practical cost of maxflow to be as low as O(N^1.5). The algorithm could be enhanced following at least two approaches. In the first approach, incremental subalgorithms for the computation of the envelope could be developed. By use of temporal scanning of the events in the temporal network, it may be possible to significantly reduce the size of the networks on which it is necessary to run the maximum-flow subalgorithm, thereby significantly reducing the time required for envelope calculation. In the second approach, the practical effectiveness of resource envelopes in the inner loops of search algorithms could be tested for multi-capacity resource scheduling. This testing would include inner-loop backtracking and termination tests and variable- and value-ordering heuristics that exploit the properties of resource envelopes more directly.

  7. GPZ: non-stationary sparse Gaussian processes for heteroscedastic uncertainty estimation in photometric redshifts

    NASA Astrophysics Data System (ADS)

    Almosallam, Ibrahim A.; Jarvis, Matt J.; Roberts, Stephen J.

    2016-10-01

    The next generation of cosmology experiments will be required to use photometric redshifts rather than spectroscopic redshifts. Obtaining accurate and well-characterized photometric redshift distributions is therefore critical for Euclid, the Large Synoptic Survey Telescope and the Square Kilometre Array. However, determining accurate variance predictions alongside single point estimates is crucial, as they can be used to optimize the sample of galaxies for the specific experiment (e.g. weak lensing, baryon acoustic oscillations, supernovae), trading off between completeness and reliability in the galaxy sample. The various sources of uncertainty in measurements of the photometry and redshifts put a lower bound on the accuracy that any model can hope to achieve. The intrinsic uncertainty associated with estimates is often non-uniform and input-dependent, commonly known in statistics as heteroscedastic noise. However, existing approaches are susceptible to outliers and do not take into account variance induced by non-uniform data density and in most cases require manual tuning of many parameters. In this paper, we present a Bayesian machine learning approach that jointly optimizes the model with respect to both the predictive mean and variance, which we refer to as Gaussian processes for photometric redshifts (GPZ). The predictive variance of the model takes into account both the variance due to data density and photometric noise. Using the Sloan Digital Sky Survey (SDSS) DR12 data, we show that our approach substantially outperforms other machine learning methods for photo-z estimation and their associated variance, such as TPZ and ANNZ2. We provide MATLAB and Python implementations that are available for download at https://github.com/OxfordML/GPz.
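    The following is not the GPz algorithm (see the linked repository for the authors' implementations) but a common two-stage heuristic sketch of heteroscedastic uncertainty with scikit-learn: one Gaussian process models the predictive mean, and a second models the input-dependent noise from log squared residuals.

    ```python
    # Two-stage heteroscedastic sketch (not GPz; see the linked repository):
    # GP #1 fits the mean, GP #2 fits log squared residuals to recover an
    # input-dependent noise profile, separate from the model variance.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(4)
    X = np.sort(rng.uniform(0, 10, 200))[:, None]
    noise_sd = 0.05 + 0.2 * (X.ravel() / 10)       # input-dependent noise
    y = np.sin(X).ravel() + rng.normal(0, noise_sd)

    gp_mean = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.1)).fit(X, y)
    resid2 = (y - gp_mean.predict(X)) ** 2
    gp_noise = GaussianProcessRegressor(RBF(2.0)).fit(X, np.log(resid2 + 1e-9))

    mu, sd_model = gp_mean.predict(X, return_std=True)
    sd_noise = np.exp(0.5 * gp_noise.predict(X))   # recovered noise profile
    print(np.c_[sd_model[:3], sd_noise[:3]])
    ```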

  8. Solar-cycle dependence of a model turbulence spectrum using IMP and ACE observations over 38 years

    NASA Astrophysics Data System (ADS)

    Burger, R. A.; Nel, A. E.; Engelbrecht, N. E.

    2014-12-01

    Ab initio modulation models require a number of turbulence quantities as input for any reasonable diffusion tensor. While turbulence transport models describe the radial evolution of such quantities, they in turn require observations in the inner heliosphere as input values. So far we have concentrated on solar minimum conditions (e.g. Engelbrecht and Burger 2013, ApJ), but are now looking at long-term modulation, which requires turbulence data over at least a solar magnetic cycle. As a start we analyzed 1-minute resolution data for the N-component of the magnetic field, from 1974 to 2012, covering about two solar magnetic cycles (initially using IMP and then ACE data). We assume a very simple three-stage power-law frequency spectrum, calculate the integral from the highest to the lowest frequency, and fit it to variances calculated with lags from 5 minutes to 80 hours. From the fit we then obtain not only the asymptotic variance at large lags, but also the spectral indices of the inertial and energy ranges, as well as the breakpoints between the inertial and energy ranges (bendover scale) and between the energy and cutoff ranges (cutoff scale). All values given here are preliminary. The cutoff range is a constraint imposed in order to ensure a finite energy density; the spectrum is forced to be either flat or to decrease with decreasing frequency in this range. Given that cosmic rays sample magnetic fluctuations over long periods in their transport through the heliosphere, we average the spectra over at least 27 days. We find that the variance of the N-component has a clear solar cycle dependence, with smaller values (~6 nT²) during solar minimum and larger values during solar maximum periods (~17 nT²), well correlated with the magnetic field magnitude (e.g. Smith et al. 2006, ApJ). Whereas the inertial range spectral index (-1.65 ± 0.06) does not show a significant solar cycle variation, the energy range index (-1.1 ± 0.3) seems to be anti-correlated with the variance (Bieber et al. 1993, JGR); both indices show close to normal distributions. In contrast, the variance (e.g. Burlaga and Ness, 1998, JGR), and both the bendover scale (see Ruiz et al. 2014, Solar Physics) and cutoff scale, appear to be log-normally distributed.

  9. Theoretical characterization of the minimum energy path for hydrogen atom addition to N2 - Implications for the unimolecular lifetime of HN2

    NASA Technical Reports Server (NTRS)

    Walch, Stephen P.; Duchovic, Ronald J.; Rohlfing, Celeste Mcmichael

    1989-01-01

    Results are reported from CASSCF externally contracted CI ab initio computations of the minimum-energy path for the addition of H to N2. The theoretical basis and numerical implementation of the computations are outlined, and the results are presented in extensive tables and graphs and characterized in detail. The zero-point-corrected barrier for HN2 dissociation is estimated as 8.5 kcal/mol, and the lifetime of the lowest-lying quasi-bound vibrational state of HN2 is found to be between 88 psec and 5.8 nsec (making experimental observation of this species very difficult).

  10. A new Method for Determining the Interplanetary Current-Sheet Local Orientation

    NASA Astrophysics Data System (ADS)

    Blanco, J. J.; Rodríguez-pacheco, J.; Sequeiros, J.

    2003-03-01

    In this work we have developed a new method for determining the interplanetary current sheet local parameters. The method, called 'HYTARO' (from Hyperbolic Tangent Rotation), is based on a modified Harris magnetic field. This method has been applied to a pool of 57 events, all of them recorded during solar minimum conditions. The model performance has been tested by comparing both its outputs and noise response with those of the 'classic' MVM (Minimum Variance Method). The results suggest that, despite the fact that in many cases they behave in a similar way, there are specific crossing conditions that produce an erroneous MVM response. Moreover, our method shows a lower noise level sensitivity than that of MVM.
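    The "classic MVM" referenced above can be sketched in a few lines: eigendecompose the covariance matrix of the measured field, and take the eigenvector with the smallest eigenvalue as the estimated current-sheet normal. The synthetic Harris-like crossing below is illustrative.

    ```python
    # Classic minimum variance analysis: the eigenvector of the field covariance
    # matrix with the smallest eigenvalue estimates the current sheet normal.
    # The synthetic crossing below is illustrative, not spacecraft data.
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.linspace(-1, 1, 400)
    # Harris-like rotation: Bx reverses across the sheet, By varies,
    # Bz (the normal component) stays roughly constant.
    B = np.c_[np.tanh(t / 0.2), 0.5 / np.cosh(t / 0.2), 0.1 * np.ones_like(t)]
    B += rng.normal(0, 0.05, B.shape)

    M = np.cov(B.T)                      # 3x3 magnetic variance matrix
    eigval, eigvec = np.linalg.eigh(M)   # eigenvalues in ascending order
    normal = eigvec[:, 0]                # minimum variance direction
    print(eigval, normal)                # normal ~ +/- z for this geometry
    ```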

  11. Determining Metacarpophalangeal Flexion Angle Tolerance for Reliable Volumetric Joint Space Measurements by High-resolution Peripheral Quantitative Computed Tomography.

    PubMed

    Tom, Stephanie; Frayne, Mark; Manske, Sarah L; Burghardt, Andrew J; Stok, Kathryn S; Boyd, Steven K; Barnabe, Cheryl

    2016-10-01

    The position-dependence of a method to measure the joint space of metacarpophalangeal (MCP) joints using high-resolution peripheral quantitative computed tomography (HR-pQCT) was studied. Cadaveric MCP joints were imaged at 7 flexion angles between 0 and 30 degrees. The variability in reproducibility for mean, minimum, and maximum joint space widths and volume measurements was calculated for increasing degrees of flexion. Root mean square coefficient of variation values were < 5% under 20 degrees of flexion for mean, maximum, and volumetric joint spaces. Values for minimum joint space width were optimized under 10 degrees of flexion. MCP joint space measurements should be acquired at < 10 degrees of flexion in longitudinal studies.

  12. Comparison of reproducibility of natural head position using two methods.

    PubMed

    Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik

    2012-01-01

    Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy and evaluation of final treatment outcome. The purpose of this study was to evaluate and compare the maximum reproducibility with minimum variation of natural head position using two methods, i.e. the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using two methods of obtaining natural head position, (1) the mirror method and (2) the fluid level device method, with a time interval of 2 months. Inclusion criteria: subjects were randomly selected, aged between 18 and 26 years. Exclusion criteria: history of orthodontic treatment; any history of respiratory tract problems or chronic mouth breathing; any congenital deformity; history of traumatically-induced deformity; history of myofascial pain syndrome; any previous history of head and neck surgery. The results showed that both methods for obtaining natural head position, the mirror method and the fluid level device method, were comparable, but maximum reproducibility was higher with the fluid level device, as shown by Dahlberg's coefficient and the Bland-Altman plot. The minimum variance was seen with the fluid level device method, as shown by precision and Pearson correlation. The two methods were comparable without any significant difference, and the fluid level device method was more reproducible and showed less variance than the mirror method for obtaining natural head position.

  13. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

    Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via analysis of variance, which greatly reduced the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's sensitivity analysis method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used for investigating the performance of DHSVM. Results show that a high value of an efficiency criterion did not necessarily indicate excellent performance on the hydrological signatures. For most samples from the Sobol sensitivity analysis, water yield was simulated very well. However, minimum and maximum annual daily runoffs were underestimated, and most of the seven-day minimum runoffs were overestimated. Nevertheless, good performance on the three signatures above still exists in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. The work in this study helps further the multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.
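    The variance-based second step can be sketched with the SALib package, with a toy function standing in for DHSVM (whose expense is why the authors parallelize); parameter names and bounds below are placeholders.

    ```python
    # Variance-based (Sobol) sensitivity analysis with SALib, using a toy
    # function in place of DHSVM. Names and bounds are placeholders only.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["lateral_conductivity", "porosity", "field_capacity"],
        "bounds": [[0.001, 0.1], [0.3, 0.6], [0.1, 0.4]],
    }
    X = saltelli.sample(problem, 1024)           # Saltelli design
    Y = X[:, 0] ** 0.5 + 2 * X[:, 1] * X[:, 2]   # toy "hydrological" output
    Si = sobol.analyze(problem, Y)
    print(Si["S1"])   # first-order indices
    print(Si["ST"])   # total-order indices (interactions included)
    ```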

  14. Dynamic and Geometric Analyses of Nudaurelia capensis ωVirus Maturation Reveal the Energy Landscape of Particle Transitions

    PubMed Central

    Tang, Jinghua; Kearney, Bradley M.; Wang, Qiu; Doerschuk, Peter C.; Baker, Timothy S.; Johnson, John E.

    2014-01-01

    Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T=4, eukaryotic, ssRNA virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diam. = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed Maximum Likelihood Variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e. uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly 2-4 times the variance of the first two particles. Without maturation cleavage the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3Å while the mature particle had an RMSD of 11Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. PMID:24591180

  15. Dynamic and geometric analyses of Nudaurelia capensis ω virus maturation reveal the energy landscape of particle transitions.

    PubMed

    Tang, Jinghua; Kearney, Bradley M; Wang, Qiu; Doerschuk, Peter C; Baker, Timothy S; Johnson, John E

    2014-04-01

    Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T = 4, eukaryotic, single-stranded ribonucleic acid virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diameter = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed maximum likelihood variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e., uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly two to four times the variance of the first two particles. Without maturation cleavage, the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3 Å while the mature particle had an RMSD of 11 Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Geologic and geophysical investigations of the Zuni-Bandera volcanic field, New Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ander, M.E.; Heiken, G.; Eichelberger, J.

    1981-05-01

    A positive, northeast-trending gravity anomaly, 90 km long and 30 km wide, extends southwest from the Zuni uplift, New Mexico. The Zuni-Bandera volcanic field, an alignment of 74 basaltic vents, is parallel to the eastern edge of the anomaly. Lavas display a bimodal distribution of tholeiitic and alkalic compositions, and were erupted over a period from 4 Myr to present. A residual gravity profile taken perpendicular to the major axis of the anomaly was analyzed using linear programming and ideal body theory to obtain bounds on the density contrast, depth, and minimum thickness of the gravity body. Two-dimensionality was assumed.more » The limiting case where the anomalous body reaches the surface gives 0.1 g/cm/sup 3/ as the greatest lower bound on the maximum density contrast. If 0.4 g/cm/sup 3/ is taken as the geologically reasonable upper limit on the maximum density contrast, the least upper bound on the depth of burial is 3.5 km and minimum thickness is 2 km. A shallow mafic intrusion, emplaced sometime before Laramide deformation, is proposed to account for the positive gravity anomaly. Analysis of a magnetotelluric survey suggests that the intrusion is not due to recent basaltic magma associated with the Zuni-Bandera volcanic field. This large basement structure has controlled the development of the volcanic field; vent orientations have changed somewhat through time, but the trend of the volcanic chain followed the edge of the basement structure. It has also exhibited some control on deformation of the sedimentary section.« less

  17. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum

    PubMed Central

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiments and using of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents’ positions in searching process. In this way, the optimal found trajectories are retained and the search starts from these trajectories, which allow the algorithm to avoid the local optimums. Also, the agents can move faster in search space to obtain better exploration during the first stage of the searching process and they can converge rapidly to the optimal solution at the final stage of the search process by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to adaptive beamforming problem as a practical issue to improve the weight vectors computed by minimum variance distortionless response (MVDR) beamforming technique. The results of implementation of the proposed algorithm are compared with some well-known heuristic methods and verified the proposed method in both reaching to optimal solutions and robustness. PMID:27399904
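    The MVDR weights that ECGSA refines have the closed form w = R⁻¹a / (aᴴR⁻¹a) for steering vector a and covariance R. A minimal numpy sketch for a uniform linear array with an assumed interference scenario:

    ```python
    # Closed-form MVDR weights, w = R^{-1} a / (a^H R^{-1} a), for a uniform
    # linear array. The geometry and interference scenario are assumptions.
    import numpy as np

    n, d = 8, 0.5                      # elements, spacing in wavelengths
    steer = lambda deg: np.exp(2j * np.pi * d * np.arange(n)
                               * np.sin(np.deg2rad(deg)))

    rng = np.random.default_rng(6)
    snaps = (np.outer(steer(40), rng.normal(size=500) * 3)   # interferer at 40 deg
             + rng.normal(size=(n, 500)) + 1j * rng.normal(size=(n, 500)))
    R = snaps @ snaps.conj().T / 500   # sample covariance

    a = steer(0)                       # look direction: broadside
    w = np.linalg.solve(R, a)
    w /= a.conj() @ w                  # distortionless: w^H a = 1
    print(abs(w.conj() @ steer(0)), abs(w.conj() @ steer(40)))  # ~1 vs ~0
    ```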

  18. A New Look at Some Solar Wind Turbulence Puzzles

    NASA Technical Reports Server (NTRS)

    Roberts, Aaron

    2006-01-01

    Some aspects of solar wind turbulence have defied explanation. While it seems likely that the evolution of Alfvenicity and power spectra are largely explained by the shearing of an initial population of solar-generated Alfvenic fluctuations, the evolution of the anisotropies of the turbulence does not fit into the model so far. A two-component model, consisting of slab waves and quasi-two-dimensional fluctuations, offers some ideas, but does not account for the turning of both wave-vector-space power anisotropies and minimum variance directions in the fluctuating vectors as the Parker spiral turns. We will show observations that indicate that the minimum variance evolution is likely not due to traditional turbulence mechanisms, and offer arguments that the idea of two-component turbulence is at best a local approximation that is of little help in explaining the evolution of the fluctuations. Finally, time permitting, we will discuss some observations that suggest that the low Alfvenicity of many regions of the solar wind in the inner heliosphere is not due to turbulent evolution, but rather to the existence of convected structures, including mini-clouds and other twisted flux tubes, that were formed with low Alfvenicity. There is still a role for turbulence in the above picture, but it is highly modified from the traditional views.

  19. Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination

    NASA Astrophysics Data System (ADS)

    Audenaert, Koenraad M. R.; Mosonyi, Milán

    2014-10-01

    We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ1, …, σr. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ1, …, σr), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, min_{j<k} C(σj, σk).

  20. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W.

    2002-01-01

    A simple power law model consisting of a single spectral index, α₁, is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy, E_k, to a steeper spectral index α₂ > α₁ above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter α₁ of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated. The ML technique is then extended to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral parameter estimates based on the combination of data sets.
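    Properties (P2)-(P3) can be checked in miniature for the simple power law, where the ML estimate α̂ = 1 + n/Σ ln(x_i/x_min) and the CRB (α − 1)²/n are available in closed form; detector response effects, central to the paper, are ignored in this sketch.

    ```python
    # Miniature check of (P2)-(P3) for the simple power law
    # p(x) = ((alpha-1)/x_min) (x/x_min)^(-alpha): the ML estimator
    # alpha_hat = 1 + n / sum(ln(x/x_min)) should be near-unbiased with
    # variance close to the CRB (alpha-1)^2 / n. No detector response here.
    import numpy as np

    rng = np.random.default_rng(7)
    alpha, x_min, n, trials = 2.7, 1.0, 2000, 500

    estimates = []
    for _ in range(trials):
        u = rng.uniform(size=n)
        x = x_min * u ** (-1.0 / (alpha - 1.0))  # inverse-CDF sampling
        estimates.append(1.0 + n / np.log(x / x_min).sum())

    crb = (alpha - 1.0) ** 2 / n
    print(np.mean(estimates))       # ~ alpha (near-unbiased)
    print(np.var(estimates), crb)   # sample variance ~ CRB
    ```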

  1. A Theory of Cramer-Rao Bounds for Constrained Parametric Models

    DTIC Science & Technology

    2010-01-01

    ...overly optimistic. This occurs frequently in communications when the signal-to-noise ratio (SNR) or data transmission size decreases. ...space of U^T H^T, the LSE is BLUE and is given by d^T θ̂_CLS(x) = d^T θ_1 + d^T U (U^T Q U)† U^T H^T C^{-1} (x − H θ_1) (3.28), similar to (3.27), with variance d^T U

  2. M-Estimation for Discrete Data: Asymptotic Distribution Theory and Implications.

    DTIC Science & Technology

    1985-11-01

    the influence function of an M-estimator is proportional to its score function; see Hampel (1974) or Huber (1981) for details. Surprisingly, M...consistently estimates θ when the model is correct. Suppose now that θ ∈ R...The influence function at F_θ of an M-estimator for θ has the form a(x,θ)...variance and the bound on the influence function at F_θ. This is assuming, of course, that the estimator is asymptotically normal at F_θ. ...The truncation

  3. Wave-Based Algorithms and Bounds for Target Support Estimation

    DTIC Science & Technology

    2015-05-15

    vector electromagnetic formalism in [5]. This theory leads to three main variants of the optical theorem detector, in particular, three alternative...further expands the applicability for transient pulse change detection of arbitrary nonlinear media and time-varying targets [9]. This report... electromagnetic methods a new methodology to estimate the minimum convex source region and the (possibly nonconvex) support of a scattering target from knowledge of

  4. Practical scheme for optimal measurement in quantum interferometric devices

    NASA Astrophysics Data System (ADS)

    Takeoka, Masahiro; Ban, Masashi; Sasaki, Masahide

    2003-06-01

    We apply a Kennedy-type detection scheme, which was originally proposed for a binary communications system, to interferometric sensing devices. We show that the minimum detectable perturbation of the proposed system reaches the ultimate precision bound which is predicted by quantum Neyman-Pearson hypothesis testing. To provide concrete examples, we apply our interferometric scheme to phase shift detection by using coherent and squeezed probe fields.

  5. Fano Bounds for Compact Antennas. Phase I

    DTIC Science & Technology

    2007-10-01


  6. On the existence and uniqueness of minima and maxima on spheres of the integral functional of the calculus of variations

    NASA Astrophysics Data System (ADS)

    Ricceri, Biagio

    2006-12-01

    Given a bounded domain Ω ⊂ R^n, we prove that if f is a C^1 function whose gradient is Lipschitzian in R^{n+1} and non-zero at 0, then, for each r > 0 small enough, the restriction of the integral functional to the sphere has a unique global minimum and a unique global maximum.

  7. Bounds on quantum communication via Newtonian gravity

    NASA Astrophysics Data System (ADS)

    Kafri, D.; Milburn, G. J.; Taylor, J. M.

    2015-01-01

    Newtonian gravity yields specific observable consequences, the most striking of which is the emergence of a 1/r² force. In so far as communication can arise via such interactions between distant particles, we can ask what would be expected for a theory of gravity that only allows classical communication. Many heuristic suggestions for gravity-induced decoherence have this restriction implicitly or explicitly in their construction. Here we show that communication via a 1/r² force has a minimum noise induced in the system when the communication cannot convey quantum information, in a continuous time analogue to Bell's inequalities. Our derived noise bounds provide tight constraints from current experimental results on any theory of gravity that does not allow quantum communication.

  8. Response of discrete linear systems to forcing functions with inequality constraints.

    NASA Technical Reports Server (NTRS)

    Michalopoulos, C. D.; Riley, T. A.

    1972-01-01

    An analysis is made of the maximum response of discrete, linear mechanical systems to arbitrary forcing functions which lie within specified bounds. Primary attention is focused on the complete determination of the forcing function which will engender maximum displacement to any particular mass element of a multi-degree-of-freedom system. In general, the desired forcing function is found to be a bang-bang type function, i.e., a function which switches from the maximum to the minimum bound and vice-versa at certain instants of time. Examples of two-degree-of-freedom systems, with and without damping, are presented in detail. Conclusions are drawn concerning the effect of damping on the switching times and the general procedure for finding these times is discussed.

  9. An upper bound on the radius of a highly electrically conducting lunar core

    NASA Technical Reports Server (NTRS)

    Hobbs, B. A.; Hood, L. L.; Herbert, F.; Sonett, C. P.

    1983-01-01

    Parker's (1980) nonlinear inverse theory for the electromagnetic sounding problem is converted to a form suitable for analysis of lunar day-side transfer function data by: (1) transforming the solution in plane geometry to that in spherical geometry; and (2) transforming the theoretical lunar transfer function in the dipole limit to an apparent resistivity function. The theory is applied to the revised lunar transfer function data set of Hood et al. (1982), which extends in frequency from 10⁻⁵ to 10⁻³ Hz. On the assumption that an iron-rich lunar core, whether molten or solid, can be represented by a perfect conductor at the minimum sampled frequency, an upper bound of 435 km on the maximum radius of such a core is calculated. This bound is somewhat larger than values of 360-375 km previously estimated from the same data set via forward model calculations because the prior work did not consider all possible mantle conductivity functions.

  10. Mathematical properties and bounds on haplotyping populations by pure parsimony.

    PubMed

    Wang, I-Lin; Chang, Chia-Yuan

    2011-06-01

    Although the haplotype data can be used to analyze the function of DNA, due to the significant efforts required in collecting the haplotype data, usually the genotype data is collected and then the population haplotype inference (PHI) problem is solved to infer haplotype data from genotype data for a population. This paper investigates the PHI problem based on the pure parsimony criterion (HIPP), which seeks the minimum number of distinct haplotypes to infer a given genotype data. We analyze the mathematical structure and properties for the HIPP problem, propose techniques to reduce the given genotype data into an equivalent one of much smaller size, and analyze the relations of genotype data using a compatible graph. Based on the mathematical properties in the compatible graph, we propose a maximal clique heuristic to obtain an upper bound, and a new polynomial-sized integer linear programming formulation to obtain a lower bound for the HIPP problem. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Stream-temperature patterns of the Muddy Creek basin, Anne Arundel County, Maryland

    USGS Publications Warehouse

    Pluhowski, E.J.

    1981-01-01

    Using a water-balance equation based on a 4.25-year gaging-station record on North Fork Muddy Creek, the following mean annual values were obtained for the Muddy Creek basin: precipitation, 49.0 inches; evapotranspiration, 28.0 inches; runoff, 18.5 inches; and underflow, 2.5 inches. Average freshwater outflow from the Muddy Creek basin to the Rhode River estuary was 12.2 cfs during the period October 1, 1971, to December 31, 1975. Harmonic equations were used to describe seasonal maximum and minimum stream-temperature patterns at 12 sites in the basin. These equations were fitted to continuous water-temperature data obtained periodically at each site between November 1970 and June 1978. The harmonic equations explain at least 78 percent of the variance in maximum stream temperatures and 81 percent of the variance in minimum temperatures. Standard errors of estimate averaged 2.3°C (Celsius) for daily maximum water temperatures and 2.1°C for daily minimum temperatures. Mean annual water temperatures developed for a 5.4-year base period ranged from 11.9°C at Muddy Creek to 13.1°C at Many Fork Branch. The largest variations in stream temperatures were detected at thermograph sites below ponded reaches and where forest coverage was sparse or missing. At most sites the largest variations in daily water temperatures were recorded in April whereas the smallest were in September and October. The low thermal inertia of streams in the Muddy Creek basin tends to amplify the impact of surface energy-exchange processes on short-period stream-temperature patterns. Thus, in response to meteorologic events, wide ranging stream-temperature perturbations of as much as 6°C have been documented in the basin. (USGS)
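    Harmonic regression of the kind used above reduces to linear least squares once the annual frequency is fixed. A sketch on synthetic daily temperatures:

    ```python
    # Harmonic (annual sinusoid) regression like that used for the seasonal
    # stream-temperature patterns above: T(t) = a0 + a1*cos(wt) + a2*sin(wt),
    # fit by linear least squares. The daily series here is synthetic.
    import numpy as np

    rng = np.random.default_rng(8)
    t = np.arange(365 * 4)                                 # four years, daily
    true = 12.5 + 9.0 * np.sin(2 * np.pi * (t - 110) / 365)
    temp = true + rng.normal(0, 2.2, t.size)

    w = 2 * np.pi / 365
    A = np.c_[np.ones_like(t), np.cos(w * t), np.sin(w * t)]
    coef, *_ = np.linalg.lstsq(A, temp, rcond=None)
    fitted = A @ coef
    r2 = 1 - np.var(temp - fitted) / np.var(temp)
    print(coef, r2)   # mean level ~12.5 C; R^2 ~ fraction of variance explained
    ```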

  12. Beating Landauer's Bound: Tradeoff between Accuracy and Heat Dissipation

    NASA Astrophysics Data System (ADS)

    Talukdar, Saurav; Bhaban, Shreyas; Salapaka, Murti

    Landauer's principle states that erasing one bit of stored information is necessarily accompanied by heat dissipation of at least k_B T ln 2 per bit. However, this is true only if the erasure process is always successful. We demonstrate that if the erasure process has a success probability p, the minimum heat dissipation per bit is given by k_B T (p ln p + (1 − p) ln(1 − p) + ln 2), referred to as the Generalized Landauer Bound, which is k_B T ln 2 if the erasure process is always successful and decreases to zero as p reduces to 0.5. We present a model for a one-bit memory based on a Brownian particle in a double-well potential motivated by optical tweezers and achieve erasure by manipulation of the optical fields. The method uniquely provides a handle on the success proportion of the erasure. The thermodynamics framework for Langevin dynamics developed by Sekimoto is used for computation of the heat dissipation in each realization of the erasure process. Using extensive Monte Carlo simulations, we demonstrate that the Landauer Bound of k_B T ln 2 is violated by compromising on the success of the erasure process, while validating the existence of the Generalized Landauer Bound.
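    The quoted bound is easy to evaluate directly; the helper below expresses it in units of k_B T and recovers the two limits stated above (ln 2 for an always-successful erasure, 0 as p approaches 0.5).

    ```python
    # The Generalized Landauer Bound quoted above, as a function of the erasure
    # success probability p, in units of k_B * T.
    import numpy as np

    def min_heat_per_bit(p):
        """Minimum dissipated heat per erased bit, in units of k_B * T."""
        q = 1.0 - p
        return p * np.log(p) + q * np.log(q) + np.log(2.0)

    # p slightly inside (0.5, 1) to avoid log(0) at the endpoints:
    for p in (1.0 - 1e-12, 0.9, 0.75, 0.5 + 1e-12):
        print(f"p = {p:.2f}: {min_heat_per_bit(p):.4f} k_B T")
    ```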

  13. Slope-aspect color shading for parametric surfaces

    NASA Technical Reports Server (NTRS)

    Moellering, Harold J. (Inventor); Kimerling, A. Jon (Inventor)

    1991-01-01

    The invention is a method for generating an image of a parametric surface, such as the compass direction toward which each surface element of terrain faces, commonly called the slope-aspect azimuth of the surface element. The method maximizes color contrast to permit easy discrimination of the magnitude, ranges, intervals or classes of a surface parameter while making it easy for the user to visualize the form of the surface, such as a landscape. The four pole colors of the opponent process color theory are utilized to represent intervals or classes at 90 degree angles. The color perceived as having maximum measured luminance is selected to portray the color having an azimuth of an assumed light source and the color showing minimum measured luminance portrays the diametrically opposite azimuth. The 90 degree intermediate azimuths are portrayed by unique colors of intermediate measured luminance, such as red and green. Colors between these four pole colors are used which are perceived as mixtures or combinations of their bounding colors and are arranged progressively between their bounding colors to have perceived proportional mixtures of the bounding colors which are proportional to the interval's angular distance from its bounding colors.

  14. Two-sided estimates of minimum-error distinguishability of mixed quantum states via generalized Holevo-Curlander bounds

    NASA Astrophysics Data System (ADS)

    Tyson, Jon

    2009-03-01

    We prove a concise factor-of-2 estimate for the failure rate of optimally distinguishing an arbitrary ensemble of mixed quantum states, generalizing work of Holevo [Theor. Probab. Appl. 23, 411 (1978)] and Curlander [Ph.D. Thesis, MIT, 1979]. A modification to the minimal principle of Concha and Poor [Proceedings of the 6th International Conference on Quantum Communication, Measurement, and Computing (Rinton, Princeton, NJ, 2003)] is used to derive a suboptimal measurement which has an error rate within a factor of 2 of the optimal by construction. This measurement is quadratically weighted and has appeared as the first iterate of a sequence of measurements proposed by Ježek et al. [Phys. Rev. A 65, 060301 (2002)]. Unlike the so-called pretty good measurement, it coincides with Holevo's asymptotically optimal measurement in the case of nonequiprobable pure states. A quadratically weighted version of the measurement bound by Barnum and Knill [J. Math. Phys. 43, 2097 (2002)] is proven. Bounds on the distinguishability of syndromes in the sense of Schumacher and Westmoreland [Phys. Rev. A 56, 131 (1997)] appear as a corollary. An appendix relates our bounds to the trace-Jensen inequality.

  15. Seasonal trends in vegetation and atmospheric concentrations of PAHs and PBDEs near a sanitary landfill

    NASA Astrophysics Data System (ADS)

    St-Amand, Annick D.; Mayer, Paul M.; Blais, Jules M.

    Spruce needle and atmospheric (gaseous and particulate-bound) concentrations were surveyed near a sanitary landfill from February 2004 to June 2005. The influence of several parameters such as temperature, relative humidity, wind speed and direction, as well as increased domestic heating during the winter was assessed. In general, polybrominated diphenyl ethers (PBDE) and polycyclic aromatic hydrocarbons (PAH) concentrations in spruce needles increased over time and decreased following snowmelt with a minimum coinciding with bud burst of deciduous trees. Atmospheric concentrations of PBDE and PAH, both particulate-bound and gaseous phase, were linked to daily weather events and thus showed more variability than those in spruce needles. Highest PAH concentrations were encountered during the winter, likely reflecting increased emission from heating homes. Pseudo Clausius-Clapeyron plots revealed higher PBDE gaseous concentrations with increasing temperature, but showed no correlation between PAH gaseous concentrations and temperature as this effect was obscured by seasonal emission patterns. Finally, air mass back trajectories and local wind directions revealed that particulate-bound PBDEs, along with both gaseous and particulate-bound PAHs were from local sources, whereas gaseous PBDEs were likely from distant sources.

  16. A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality

    NASA Astrophysics Data System (ADS)

    Cheung, KW; So, HC; Ma, W.-K.; Chan, YT

    2006-12-01

    The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above described measurement cases. The advantages of CWLS include performance optimality and capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and approximately attain the Cramér-Rao lower bound when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
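    A simpler stand-in for the paper's CWLS construction: time-of-arrival positioning by least squares (unweighted here, since the error variances are taken equal), solved with Gauss-Newton iterations. Anchor geometry and noise level are assumptions.

    ```python
    # TOA positioning by least squares with Gauss-Newton iterations -- a simple
    # stand-in for the paper's CWLS estimators. Anchors and noise are assumed.
    import numpy as np

    rng = np.random.default_rng(9)
    anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
    mobile = np.array([62.0, 37.0])
    sigma = 1.0
    ranges = np.linalg.norm(anchors - mobile, axis=1) + rng.normal(0, sigma, 4)

    pos = np.array([50.0, 50.0])            # initial guess
    for _ in range(10):
        diff = pos - anchors
        dist = np.linalg.norm(diff, axis=1)
        J = diff / dist[:, None]            # Jacobian of the range model
        residual = ranges - dist
        pos += np.linalg.lstsq(J, residual, rcond=None)[0]
    print(pos)  # close to the true position for small measurement noise
    ```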

  17. The causal link among militarization, economic growth, CO2 emission, and energy consumption.

    PubMed

    Bildirici, Melike E

    2017-02-01

    This paper examines the long-run and the causal relationship among CO2 emissions, militarization, economic growth, and energy consumption for the USA for the period 1960-2013. Using the bounds test approach to cointegration, a short-run as well as a long-run relationship among the variables was found, with a positive and statistically significant relationship between CO2 emissions and militarization. To determine the causal link, MWALD and Rao's F tests were applied. According to Rao's F tests, evidence of a unidirectional causality running from militarization to CO2 emissions, from energy consumption to CO2 emissions, and from militarization to energy consumption, all without feedback, was found. Further, the results determined that 26% of the forecast-error variance of CO2 emissions was explained by the forecast-error variance of militarization and 60% by energy consumption.

  18. Influential input classification in probabilistic multimedia models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.

    1999-05-01

    Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.

  19. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.

  20. Response to selection while maximizing genetic variance in small populations.

    PubMed

    Cervantes, Isabel; Gutiérrez, Juan Pablo; Meuwissen, Theo H E

    2016-09-20

    Rare breeds represent a valuable resource for future market demands. These populations are usually well-adapted, but their low census compromises the genetic diversity and future of these breeds. Since improvement of a breed for commercial traits may also confer higher probabilities of survival for the breed, it is important to achieve good responses to artificial selection. Therefore, efficient genetic management of these populations is essential to ensure that they respond adequately to genetic selection in possible future artificial selection scenarios. Scenarios that maximize the genetic variance in a single population could be a valuable option. The aim of this work was to study the effect of the maximization of genetic variance on selection response and on the capacity of a population to adapt to a new environment/production system. We simulated a random scenario (A), a full-sib scenario (B), a scenario applying the maximum variance total (MVT) method (C), a MVT scenario with a restriction on increases in average inbreeding (D), a MVT scenario with a restriction on average individual increases in inbreeding (E), and a minimum coancestry scenario (F). Twenty replicates of each scenario were simulated for 100 generations, followed by 10 generations of selection. Effective population size was used to monitor the outcomes of these scenarios. Although the best response to selection was achieved in scenarios B and C, they were discarded because they are impractical. Scenario A was also discarded because of its low response to selection. Scenario D yielded a lower response to selection and a smaller effective population size than scenario E, for which response to selection was higher during early generations because of the moderately structured population. In scenario F, response to selection was slightly higher than in scenario E in the last generations. Application of MVT with a restriction on individual increases in inbreeding resulted in the largest response to selection during early generations, but if inbreeding depression is a concern, a minimum coancestry scenario is then a valuable alternative, in particular for a long-term response to selection.

  1. A high-resolution speleothem record of western equatorial Pacific rainfall: Implications for Holocene ENSO evolution

    NASA Astrophysics Data System (ADS)

    Chen, Sang; Hoffmann, Sharon S.; Lund, David C.; Cobb, Kim M.; Emile-Geay, Julien; Adkins, Jess F.

    2016-05-01

    The El Niño-Southern Oscillation (ENSO) is the primary driver of interannual climate variability in the tropics and subtropics. Despite substantial progress in understanding ocean-atmosphere feedbacks that drive ENSO today, relatively little is known about its behavior on centennial and longer timescales. Paleoclimate records from lakes, corals, molluscs and deep-sea sediments generally suggest that ENSO variability was weaker during the mid-Holocene (4-6 kyr BP) than the late Holocene (0-4 kyr BP). However, discrepancies amongst the records preclude a clear timeline of Holocene ENSO evolution and therefore the attribution of ENSO variability to specific climate forcing mechanisms. Here we present δ18O results from a U-Th dated speleothem in Malaysian Borneo sampled at sub-annual resolution. The δ18O of Borneo rainfall is a robust proxy of regional convective intensity and precipitation amount, both of which are directly influenced by ENSO activity. Our estimates of stalagmite δ18O variance at ENSO periods (2-7 yr) show a significant reduction in interannual variability during the mid-Holocene (3240-3380 and 5160-5230 yr BP) relative to both the late Holocene (2390-2590 yr BP) and early Holocene (6590-6730 yr BP). The Borneo results are therefore inconsistent with lacustrine records of ENSO from the eastern equatorial Pacific that show little or no ENSO variance during the early Holocene. Instead, our results support coral, mollusc and foraminiferal records from the central and eastern equatorial Pacific that show a mid-Holocene minimum in ENSO variance. Reduced mid-Holocene interannual δ18O variability in Borneo coincides with an overall minimum in mean δ18O from 3.5 to 5.5 kyr BP. Persistent warm pool convection would tend to enhance the Walker circulation during the mid-Holocene, which likely contributed to reduced ENSO variance during this period. This finding implies that both convective intensity and interannual variability in Borneo are driven by coupled air-sea dynamics that are sensitive to precessional insolation forcing. Isolating the exact mechanisms that drive long-term ENSO evolution will require additional high-resolution paleoclimatic reconstructions and further investigation of Holocene tropical climate evolution using coupled climate models.
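
    The band-limited variance computation behind such an analysis is straightforward to sketch in Python: sum the one-sided power spectrum over periods of 2-7 yr, by Parseval's theorem. The synthetic record below is illustrative only (a 4-yr sine plus noise), not the stalagmite data.

      import numpy as np

      def band_variance(x, dt_years, tmin=2.0, tmax=7.0):
          """Variance of x contributed by periods between tmin and tmax (years)."""
          x = np.asarray(x, float) - np.mean(x)
          n = len(x)
          X = np.fft.rfft(x)
          freq = np.fft.rfftfreq(n, d=dt_years)       # cycles per year
          power = 2.0 * np.abs(X) ** 2 / n ** 2       # one-sided spectrum (approx.)
          band = (freq >= 1.0 / tmax) & (freq <= 1.0 / tmin)
          return power[band].sum()

      rng = np.random.default_rng(2)
      t = np.arange(0.0, 200.0, 0.25)                 # 200 yr at quarterly resolution
      x = np.sin(2 * np.pi * t / 4.0) + 0.5 * rng.normal(size=t.size)
      print(band_variance(x, dt_years=0.25))          # ~0.5 (sine) plus a little noise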

  2. The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2016-01-01

    Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.

  3. [Determination and principal component analysis of mineral elements based on ICP-OES in Nitraria roborowskii fruits from different regions].

    PubMed

    Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng

    2017-06-01

    The contents of mineral elements in Nitraria roborowskii samples from fifteen different regions were determined by inductively coupled plasma-optical emission spectrometry (ICP-OES), and the elemental characteristics were analyzed by principal component analysis. The results indicated that 18 mineral elements were detected in N. roborowskii, while V could not be detected. Na, K, and Ca were present at high concentrations. Ti showed the largest variance in content, while K showed the smallest. Four principal components were extracted from the original data. The cumulative variance contribution rate was 81.542%, and the variance contribution of the first principal component was 44.997%, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method is simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics among N. roborowskii fruits are related to geographical origin, as clearly revealed by PCA. All these results will provide a good basis for the comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.
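
    The PCA recipe used here is standard and reproduces in a few lines of Python on any regions-by-elements concentration matrix; the matrix below is randomly generated as a stand-in, not the paper's measurements.

      import numpy as np

      rng = np.random.default_rng(3)
      X = rng.lognormal(size=(15, 18))             # 15 regions x 18 element contents

      Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize columns
      U, s, Vt = np.linalg.svd(Z, full_matrices=False)
      explained = s ** 2 / np.sum(s ** 2)          # variance contribution per PC
      print("first four PCs:", np.round(explained[:4], 3))
      print("cumulative:", round(explained[:4].sum(), 3))
      # Rows of Vt are the loadings that identify each PC's characteristic elements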

  4. Enhanced PM10 bounded PAHs from shipping emissions

    NASA Astrophysics Data System (ADS)

    Pongpiachan, S.; Hattayanone, M.; Choochuay, C.; Mekmok, R.; Wuttijak, N.; Ketratanakul, A.

    2015-05-01

    Earlier studies have highlighted the importance of maritime transport as a main contributor of air pollutants in port areas. The authors investigated the effects of shipping emissions on the enhancement of PM10-bound polycyclic aromatic hydrocarbons (PAHs) and mutagenic substances in an industrial area of Rayong province, Thailand. Daily PM10 speciation data from two air quality observatory sites in Thailand during 2010-2013 were collected. Diagnostic binary ratios of PAH congeners, analysis of variance (ANOVA), and principal component analysis (PCA) were employed to evaluate the enhanced genotoxicity of PM10 during the docking period. Significant increases in PAHs and in the mutagenic index (MI) of PM10 were observed during the docking period at both sampling sites. Although stationary sources such as coal combustion in power plants and vehicular exhaust from motorways can play a great role in enhancing PAH concentrations, regulating shipping emissions from diesel engines in port areas like Rayong is particularly crucial.

  5. Risks of Large Portfolios

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Shi, Xiaofeng

    2014-01-01

    The risk of a large portfolio is often estimated by substituting a good estimator of the volatility matrix. However, the accuracy of such a risk estimator is largely unknown. We study factor-based risk estimators with a large number of assets, and introduce a high-confidence level upper bound (H-CLUB) to assess the estimation. The H-CLUB is constructed using the confidence interval of risk estimators with either known or unknown factors. We derive the limiting distribution of the estimated risks in high dimensionality. We find that when the dimension is large, the factor-based risk estimators have the same asymptotic variance whether or not the factors are known, and this variance is slightly smaller than that of the sample covariance-based estimator. Numerically, H-CLUB outperforms the traditional crude bounds, and provides an insightful risk assessment. In addition, our simulated results quantify the relative error in the risk estimation, which is usually negligible using 3-month daily data. PMID:26195851
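
    A rough Python illustration of a factor-based plug-in risk estimate of the kind studied here (the H-CLUB construction itself is not reproduced, and all dimensions are invented) compares factor-based and sample-covariance risk for an equal-weight portfolio:

      import numpy as np

      rng = np.random.default_rng(4)
      p, T, K = 200, 500, 3                        # assets, observations, factors
      F = rng.normal(size=(T, K))                  # observable factor returns
      B = rng.normal(size=(p, K))                  # true loadings
      R = F @ B.T + rng.normal(scale=0.5, size=(T, p))

      # Factor-based covariance: Sigma = B Cov(F) B' + diag(residual variances)
      Bhat = np.linalg.lstsq(F, R, rcond=None)[0].T
      resid = R - F @ Bhat.T
      Sigma = Bhat @ np.cov(F, rowvar=False) @ Bhat.T + np.diag(resid.var(axis=0, ddof=K))

      w = np.full(p, 1.0 / p)                      # equal-weight portfolio
      print("factor-based risk:", np.sqrt(w @ Sigma @ w))
      print("sample-cov risk:  ", np.sqrt(w @ np.cov(R, rowvar=False) @ w))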

  6. Joint estimation of phase and phase diffusion for quantum metrology.

    PubMed

    Vidrighin, Mihai D; Donati, Gaia; Genoni, Marco G; Jin, Xian-Min; Kolthammer, W Steven; Kim, M S; Datta, Animesh; Barbieri, Marco; Walmsley, Ian A

    2014-04-14

    Phase estimation, at the heart of many quantum metrology and communication schemes, can be strongly affected by noise, whose amplitude may not be known, or might be subject to drift. Here we investigate the joint estimation of a phase shift and the amplitude of phase diffusion at the quantum limit. For several relevant instances, this multiparameter estimation problem can be effectively reshaped as a two-dimensional Hilbert space model, encompassing the description of an interferometer phase probed with relevant quantum states--split single-photons, coherent states or N00N states. For these cases, we obtain a trade-off bound on the statistical variances for the joint estimation of phase and phase diffusion, as well as optimum measurement schemes. We use this bound to quantify the effectiveness of an actual experimental set-up for joint parameter estimation for polarimetry. We conclude by discussing the form of the trade-off relations for more general states and measurements.

  7. Plasma dynamics on current-carrying magnetic flux tubes

    NASA Technical Reports Server (NTRS)

    Swift, Daniel W.

    1992-01-01

    A 1D numerical simulation is used to investigate the evolution of a plasma in a current-carrying magnetic flux tube of variable cross section. A large potential difference, parallel to the magnetic field, is applied across the domain. The result is that the density minimum tends to deepen, primarily at the cathode end, and the entire potential drop becomes concentrated across the region of the density minimum. The evolution of the simulation shows some sensitivity to particle boundary conditions, but the simulations inevitably evolve into a final state with a nearly stationary double layer near the cathode end. The simulation results are at sufficient variance with observations that it appears unlikely that auroral electrons can be explained by a simple process of acceleration through a field-aligned potential drop.

  8. Highly Entangled, Non-random Subspaces of Tensor Products from Quantum Groups

    NASA Astrophysics Data System (ADS)

    Brannan, Michael; Collins, Benoît

    2018-03-01

    In this paper we describe a class of highly entangled subspaces of a tensor product of finite-dimensional Hilbert spaces arising from the representation theory of free orthogonal quantum groups. We determine their largest singular values and obtain lower bounds for the minimum output entropy of the corresponding quantum channels. An application to the construction of d-positive maps on matrix algebras is also presented.

  9. Network Design for Reliability and Resilience to Attack

    DTIC Science & Technology

    2014-03-01

    attacker can destroy n arcs in the network; SPNI: Shortest-Path Network-Interdiction problem; TSP: Traveling Salesman Problem; UB: upper bound; UKR: Ukraine...elimination from the traveling salesman problem (TSP). The literature calls a walk that does not contain a cycle a path [19]. The objective function in...arc lengths as random variables with known probability distributions. The m-median problem seeks to design a network with minimum average travel cost

  10. Discrete Fluctuations in Memory Erasure without Energy Cost

    NASA Astrophysics Data System (ADS)

    Croucher, Toshio; Bedkihal, Salil; Vaccaro, Joan A.

    2017-02-01

    According to Landauer's principle, erasing one bit of information incurs a minimum energy cost. Recently, Vaccaro and Barnett (VB) explored information erasure within the context of generalized Gibbs ensembles and demonstrated that for energy-degenerate spin reservoirs the cost of erasure can be solely in terms of a minimum amount of spin angular momentum and no energy. As opposed to the Landauer case, the cost of erasure in this case is associated with an intrinsically discrete degree of freedom. Here we study the discrete fluctuations in this cost and the probability of violation of the VB bound. We also obtain a Jarzynski-like equality for the VB erasure protocol. We find that the fluctuations below the VB bound are exponentially suppressed at a far greater rate and more tightly than for an equivalent Jarzynski expression for VB erasure. We expose a trade-off between the size of the fluctuations and the cost of erasure. We find that the discrete nature of the fluctuations is pronounced in the regime where reservoir spins are maximally polarized. We also state the first laws of thermodynamics corresponding to the conservation of spin angular momentum for this particular erasure protocol. Our work will be important for novel heat engines based on information erasure schemes that do not incur an energy cost.

  11. Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods

    NASA Astrophysics Data System (ADS)

    Koreň, Milan; Mokroš, Martin; Bucha, Tomáš

    2017-12-01

    This study compares the accuracies of diameter at breast height (DBH) estimation by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in the single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method only in single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
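
    The algebraic (Kasa) least-squares circle fit is a common baseline for this kind of DBH estimation. A minimal Python version on a simulated single-scan arc follows; the stem radius and scanner noise are invented.

      import numpy as np

      def kasa_circle_fit(pts):
          """Least-squares circle through (N, 2) cross-section points."""
          x, y = pts[:, 0], pts[:, 1]
          A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
          sol, *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
          cx, cy, c = sol
          return (cx, cy), np.sqrt(c + cx ** 2 + cy ** 2)

      rng = np.random.default_rng(5)
      theta = rng.uniform(0.0, np.pi, 200)             # half-visible stem (single scan)
      pts = np.column_stack([0.30 + 0.21 * np.cos(theta),
                             -0.10 + 0.21 * np.sin(theta)])
      pts += rng.normal(scale=0.003, size=pts.shape)   # scanner noise (m)
      center, r = kasa_circle_fit(pts)
      print("DBH estimate:", 2 * r)                    # ~0.42 m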

  12. A Random Forest approach to predict the spatial distribution of sediment pollution in an estuarine system

    PubMed Central

    Kreakie, Betty J.; Cantwell, Mark G.; Nacci, Diane

    2017-01-01

    Modeling the magnitude and distribution of sediment-bound pollutants in estuaries is often limited by incomplete knowledge of the site and inadequate sample density. To address these modeling limitations, a decision-support tool framework was conceived that predicts sediment contamination from the sub-estuary to broader estuary extent. For this study, a Random Forest (RF) model was implemented to predict the distribution of a model contaminant, triclosan (5-chloro-2-(2,4-dichlorophenoxy)phenol) (TCS), in Narragansett Bay, Rhode Island, USA. TCS is an unregulated contaminant used in many personal care products. The RF explanatory variables were associated with TCS transport and fate (proxies) and direct and indirect environmental entry. The continuous RF TCS concentration predictions were discretized into three levels of contamination (low, medium, and high) for three different quantile thresholds. The RF model explained 63% of the variance with a minimum number of variables. Total organic carbon (TOC) (transport and fate proxy) was a strong predictor of TCS contamination causing a mean squared error increase of 59% when compared to permutations of randomized values of TOC. Additionally, combined sewer overflow discharge (environmental entry) and sand (transport and fate proxy) were strong predictors. The discretization models identified a TCS area of greatest concern in the northern reach of Narragansett Bay (Providence River sub-estuary), which was validated with independent test samples. This decision-support tool performed well at the sub-estuary extent and provided the means to identify areas of concern and prioritize bay-wide sampling. PMID:28738089
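
    The modeling pattern translates directly to scikit-learn; the sketch below fits a RandomForestRegressor to synthetic stand-ins for TOC, CSO discharge, and sand, then ranks them by permutation importance. The data-generating relationship is hypothetical, not the Narragansett Bay data.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.inspection import permutation_importance
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(6)
      n = 500
      toc = rng.uniform(0, 5, n)                   # total organic carbon (proxy)
      cso = rng.uniform(0, 1, n)                   # combined sewer overflow discharge
      sand = rng.uniform(0, 100, n)                # percent sand
      tcs = 2.0 * toc + 1.0 * cso - 0.02 * sand + rng.normal(0, 0.5, n)

      X = np.column_stack([toc, cso, sand])
      Xtr, Xte, ytr, yte = train_test_split(X, tcs, random_state=0)
      rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xtr, ytr)
      print("R^2 on held-out data:", rf.score(Xte, yte))

      imp = permutation_importance(rf, Xte, yte, n_repeats=20, random_state=0)
      for name, m in zip(["TOC", "CSO", "sand"], imp.importances_mean):
          print(name, round(m, 3))                 # TOC should dominate here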

  13. A New Method for Estimating the Effective Population Size from Allele Frequency Changes

    PubMed Central

    Pollak, Edward

    1983-01-01

    A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147
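
    For orientation, a temporal-method estimator of this general family (a Waples-style variant, not necessarily the statistic proposed in this paper) can be coded directly from allele frequencies observed in two samples t generations apart; all numbers below are invented.

      import numpy as np

      def temporal_ne(p0, pt, t, S0, St):
          """Effective size from allele-frequency change over t generations."""
          fc = np.mean((p0 - pt) ** 2 / ((p0 + pt) / 2 - p0 * pt))
          # Subtract the sampling-noise contribution of both finite samples
          return t / (2 * (fc - 1 / (2 * S0) - 1 / (2 * St)))

      p0 = np.array([0.42, 0.65, 0.18, 0.55])   # frequencies, first sample
      pt = np.array([0.35, 0.71, 0.25, 0.47])   # frequencies, t generations later
      print(temporal_ne(p0, pt, t=10, S0=60, St=60))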

  14. Measurement Uncertainty Relations for Discrete Observables: Relative Entropy Formulation

    NASA Astrophysics Data System (ADS)

    Barchielli, Alberto; Gregoratti, Matteo; Toigo, Alessandro

    2018-02-01

    We introduce a new information-theoretic formulation of quantum measurement uncertainty relations, based on the notion of relative entropy between measurement probabilities. In the case of a finite-dimensional system and for any approximate joint measurement of two target discrete observables, we define the entropic divergence as the maximal total loss of information occurring in the approximation at hand. For fixed target observables, we study the joint measurements minimizing the entropic divergence, and we prove the general properties of its minimum value. Such a minimum is our uncertainty lower bound: the total information lost by replacing the target observables with their optimal approximations, evaluated at the worst possible state. The bound turns out to be also an entropic incompatibility degree, that is, a good information-theoretic measure of incompatibility: indeed, it vanishes if and only if the target observables are compatible, it is state-independent, and it enjoys all the invariance properties which are desirable for such a measure. In this context, we point out the difference between general approximate joint measurements and sequential approximate joint measurements; to do this, we introduce a separate index for the tradeoff between the error of the first measurement and the disturbance of the second one. By exploiting the symmetry properties of the target observables, exact values, lower bounds and optimal approximations are evaluated in two different concrete examples: (1) a couple of spin-1/2 components (not necessarily orthogonal); (2) two Fourier conjugate mutually unbiased bases in prime power dimension. Finally, the entropic incompatibility degree straightforwardly generalizes to the case of many observables, still maintaining all its relevant properties; we explicitly compute it for three orthogonal spin-1/2 components.

  15. Throughput analysis for the National Airspace System

    NASA Astrophysics Data System (ADS)

    Sureshkumar, Chandrasekar

    The United States National Airspace System (NAS) network performance is currently measured using a variety of metrics based on delay. Developments in the fields of wireless communication, manufacturing, and other modes of transportation such as road and freight have explored various metrics that complement the delay metric. In this work, we develop a throughput concept for both the terminal and en-route phases of flight, inspired by studies in the above areas, and explore the applications of throughput metrics for the en-route airspace of the NAS. These metrics can be applied to NAS performance at each hierarchical level (sector, center, regional, and national) and consist of multiple layers of networks, with the bottom level comprising the traffic pattern modelled as a network of individual sectors acting as nodes. This hierarchical approach is especially suited to executive-level decision making, as it gives an overall picture not just of the inefficiencies but also of the aspects in which the NAS has performed well in a given situation, from which specific information about the effects of a policy change on NAS performance at each level can be determined. These metrics are further validated with real traffic data using the Future Air Traffic Management Concepts Evaluation Tool (FACET) for three en-route sectors and an Air Route Traffic Control Center (ARTCC). Further, this work proposes a framework to compute the minimum makespan and the capacity of a runway system in any configuration. Towards this, an algorithm for optimal arrival and departure flight sequencing is proposed. The proposed algorithm is based on a branch-and-bound technique and allows for the efficient computation of the best runway assignment and sequencing of arrival and departure operations that minimize the makespan at a given airport. The lower and upper bounds of the cost of each branch for the best-first search in the branch-and-bound algorithm are computed based on the minimum separation standards between arrival and departure operations set by the Federal Aviation Administration. The optimal objective value is mathematically proved to lie between these bounds, and the algorithm uses the bounds to efficiently find promising branches, discard all others, and terminate with at least one sequence attaining the minimal makespan. The proposed algorithm is analyzed and validated with real traffic operations data from the Hartsfield-Jackson Atlanta International Airport.
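
    The branch-and-bound pattern described above can be illustrated with a toy single-runway sequencer in Python that prunes on a running makespan bound; the separation matrix below is hypothetical, not the FAA standard.

      # ops: 0 = arrival, 1 = departure
      # sep[i][j]: minimum seconds after an operation of type i before type j
      sep = [[90, 60], [75, 60]]                   # hypothetical separations
      ops = [0, 0, 1, 0, 1, 1]                     # operations to be sequenced

      best = {"makespan": float("inf"), "seq": None}

      def branch(remaining, last, time, seq):
          if time >= best["makespan"]:
              return                               # bound: prune dominated branches
          if not remaining:
              best["makespan"], best["seq"] = time, seq
              return
          for op in sorted(set(remaining)):        # branch on the next operation type
              rest = list(remaining)
              rest.remove(op)
              dt = 0 if last is None else sep[last][op]
              branch(rest, op, time + dt, seq + [op])

      branch(ops, None, 0, [])
      print(best)                                  # minimal-makespan sequence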

  16. Evaluating climate change impacts on streamflow variability based on a multisite multivariate GCM downscaling method in the Jing River of China

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Jin, Jiming

    2017-11-01

    Projected hydrological variability is important for future resource and hazard management of water supplies because changes in hydrological variability can cause more disasters than changes in the mean state. However, climate change scenarios downscaled from Earth System Models (ESMs) at single sites cannot meet the requirements of distributed hydrologic models for simulating hydrological variability. This study developed multisite multivariate climate change scenarios via three steps: (i) spatial downscaling of ESMs using a transfer function method, (ii) temporal downscaling of ESMs using a single-site weather generator, and (iii) reconstruction of spatiotemporal correlations using a distribution-free shuffle procedure. Multisite precipitation and temperature change scenarios for 2011-2040 were generated from five ESMs under four representative concentration pathways to project changes in streamflow variability using the Soil and Water Assessment Tool (SWAT) for the Jing River, China. The correlation reconstruction method performed realistically for intersite and intervariable correlation reproduction and hydrological modeling. The SWAT model was found to be well calibrated with monthly streamflow with a model efficiency coefficient of 0.78. It was projected that the annual mean precipitation would not change, while the mean maximum and minimum temperatures would increase significantly by 1.6 ± 0.3 and 1.3 ± 0.2 °C; the variance ratios of 2011-2040 to 1961-2005 were 1.15 ± 0.13 for precipitation, 1.15 ± 0.14 for mean maximum temperature, and 1.04 ± 0.10 for mean minimum temperature. A warmer climate was predicted for the flood season, while the dry season was projected to become wetter and warmer; the findings indicated that the intra-annual and interannual variations in the future climate would be greater than in the current climate. The total annual streamflow was found to change insignificantly but its variance ratios of 2011-2040 to 1961-2005 increased by 1.25 ± 0.55. Streamflow variability was predicted to become greater over most months on the seasonal scale because of the increased monthly maximum streamflow and decreased monthly minimum streamflow. The increase in streamflow variability was attributed mainly to larger positive contributions from increased precipitation variances rather than negative contributions from increased mean temperatures.

  17. Windowed and Wavelet Analysis of Marine Stratocumulus Cloud Inhomogeneity

    NASA Technical Reports Server (NTRS)

    Gollmer, Steven M.; Harshvardhan; Cahalan, Robert F.; Snider, Jack B.

    1995-01-01

    To improve radiative transfer calculations for inhomogeneous clouds, a consistent means of modeling inhomogeneity is needed. One current method of modeling cloud inhomogeneity is through the use of fractal parameters. This method is based on the supposition that cloud inhomogeneity over a large range of scales is related. An analysis technique named wavelet analysis provides a means of studying the multiscale nature of cloud inhomogeneity. In this paper, the authors discuss the analysis and modeling of cloud inhomogeneity through the use of wavelet analysis. Wavelet analysis as well as other windowed analysis techniques are used to study liquid water path (LWP) measurements obtained during the marine stratocumulus phase of the First ISCCP (International Satellite Cloud Climatology Project) Regional Experiment. Statistics obtained using analysis windows, which are translated to span the LWP dataset, are used to study the local (small scale) properties of the cloud field as well as their time dependence. The LWP data are transformed onto an orthogonal wavelet basis that represents the data as a number of time series. Each of these time series lies within a frequency band and has a mean frequency that is half the frequency of the previous band. Wavelet analysis combined with translated analysis windows reveals that the local standard deviation of each frequency band is correlated with the local standard deviation of the other frequency bands. The ratio between the standard deviation of adjacent frequency bands is 0.9 and remains constant with respect to time. This ratio, defined as the variance coupling parameter, is applicable to all of the frequency bands studied and appears to be related to the slope of the data's power spectrum. Similar analyses are performed on two cloud inhomogeneity models, which use fractal-based concepts to introduce inhomogeneity into a uniform cloud field. The bounded cascade model does this by iteratively redistributing LWP at each scale using the value of the local mean. This model is reformulated into a wavelet multiresolution framework, thereby presenting a number of variants of the bounded cascade model. One variant introduced in this paper is the 'variance coupled model,' which redistributes LWP using the local standard deviation and the variance coupling parameter. While the bounded cascade model provides an elegant two-parameter model for generating cloud inhomogeneity, the multiresolution framework provides more flexibility at the expense of model complexity. Comparisons are made with the results from the LWP data analysis to demonstrate both the strengths and weaknesses of these models.
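
    The variance coupling parameter can be estimated from any series as the ratio of standard deviations of wavelet detail coefficients in adjacent bands. The Python sketch below does this on a synthetic power-law series and assumes the PyWavelets (pywt) package is available; it is not the authors' code.

      import numpy as np
      import pywt

      rng = np.random.default_rng(7)
      n = 4096
      freqs = np.fft.rfftfreq(n, d=1.0)
      amp = np.zeros_like(freqs)
      amp[1:] = freqs[1:] ** (-5.0 / 6.0)          # power-spectrum slope ~ -5/3
      phases = np.exp(2j * np.pi * rng.uniform(size=freqs.size))
      lwp = np.fft.irfft(amp * phases, n)          # surrogate LWP series

      coeffs = pywt.wavedec(lwp, "haar")           # [cA_n, cD_n, ..., cD_1]
      stds = [c.std() for c in coeffs[:0:-1]]      # detail stds, fine -> coarse
      ratios = [stds[i] / stds[i + 1] for i in range(len(stds) - 1)]
      print(np.round(ratios, 2))                   # roughly constant coupling ratio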

  18. The minimum or natural rate of flow and droplet size ejected by Taylor cone-jets: physical symmetries and scaling laws

    NASA Astrophysics Data System (ADS)

    Gañán-Calvo, A. M.; Rebollo-Muñoz, N.; Montanero, J. M.

    2013-03-01

    We aim to establish the scaling laws for both the minimum rate of flow attainable in the steady cone-jet mode of electrospray, and the size of the resulting droplets in that limit. Use is made of a small body of literature on Taylor cone-jets reporting precise measurements of the transported electric current and droplet size as a function of the liquid properties and flow rate. The projection of the data onto an appropriate non-dimensional parameter space maps a region bounded by the minimum rate of flow attainable in the steady state. To explain these experimental results, we propose a theoretical model based on the generalized concept of physical symmetry, stemming from the system time invariance (steadiness). A group of symmetries rising at the cone-to-jet geometrical transition determines the scaling for the minimum flow rate and related variables. If the flow rate is decreased below that minimum value, those symmetries break down, which leads to dripping. We find that the system exhibits two instability mechanisms depending on the nature of the forces arising against the flow: one dominated by viscosity and the other by the liquid polarity. In the former case, full charge relaxation is guaranteed down to the minimum flow rate, while in the latter the instability condition becomes equivalent to the symmetry breakdown by charge relaxation or separation. When cone-jets are formed without artificially imposing a flow rate, a microjet is issued quasi-steadily. The flow rate naturally ejected this way coincides with the minimum flow rate studied here. This natural flow rate determines the minimum droplet size that can be steadily produced by any electrohydrodynamic means for a given set of liquid properties.

  19. Ground state of high-density matter

    NASA Technical Reports Server (NTRS)

    Copeland, ED; Kolb, Edward W.; Lee, Kimyeong

    1988-01-01

    It is shown that if an upper bound to the false vacuum energy of the electroweak Higgs potential is satisfied, the true ground state of high-density matter is not nuclear matter, or even strange-quark matter, but rather a non-topological soliton where the electroweak symmetry is exact and the fermions are massless. This possibility is examined in the standard SU(3)_C × SU(2)_L × U(1)_Y model. The bound to the false vacuum energy is satisfied only for a narrow range of the Higgs boson masses in the minimal electroweak model (within about 10 eV of its minimum allowed value of 6.6 GeV) and a somewhat wider range for electroweak models with a non-minimal Higgs sector.

  20. Mathematics of gravitational lensing: multiple imaging and magnification

    NASA Astrophysics Data System (ADS)

    Petters, A. O.; Werner, M. C.

    2010-09-01

    The mathematical theory of gravitational lensing has revealed many generic and global properties. Beginning with multiple imaging, we review Morse-theoretic image counting formulas and lower bound results, and complex-algebraic upper bounds in the case of single and multiple lens planes. We discuss recent advances in the mathematics of stochastic lensing, discussing a general formula for the global expected number of minimum lensed images as well as asymptotic formulas for the probability densities of the microlensing random time delay functions, random lensing maps, and random shear, and an asymptotic expression for the global expected number of micro-minima. Multiple imaging in optical geometry and a spacetime setting are treated. We review global magnification relation results for model-dependent scenarios and cover recent developments on universal local magnification relations for higher order caustics.

  1. Universality of quantum gravity corrections.

    PubMed

    Das, Saurya; Vagenas, Elias C

    2008-11-28

    We show that the existence of a minimum measurable length and the related generalized uncertainty principle (GUP), predicted by theories of quantum gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb shift, the Landau levels, and the tunneling current in a scanning tunneling microscope. We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future should either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale between the electroweak and the Planck scale.

  2. A new S-type eigenvalue inclusion set for tensors and its applications.

    PubMed

    Huang, Zheng-Ge; Wang, Li-Gong; Xu, Zhong; Cui, Jing-Jing

    2016-01-01

    In this paper, a new S-type eigenvalue localization set for a tensor is derived by dividing [Formula: see text] into disjoint subsets S and its complement. It is proved that this new set is sharper than those presented by Qi (J. Symb. Comput. 40:1302-1324, 2005), Li et al. (Numer. Linear Algebra Appl. 21:39-50, 2014) and Li et al. (Linear Algebra Appl. 481:36-53, 2015). As applications of the results, new bounds for the spectral radius of nonnegative tensors and the minimum H-eigenvalue of strong M-tensors are established, and we prove that these bounds are tighter than those obtained by Li et al. (Numer. Linear Algebra Appl. 21:39-50, 2014) and He and Huang (J. Inequal. Appl. 2014:114, 2014).

  3. Identification, Characterization, and Utilization of Adult Meniscal Progenitor Cells

    DTIC Science & Technology

    2017-11-01

    approach including row scaling and Ward's minimum variance method was chosen. This analysis revealed two groups of four samples each. For the selected...

  4. An Analysis Of The Benefits And Application Of Earned Value Management (EVM) Project Management Techniques For Dod Programs That Do Not Meet Dod Policy Thresholds

    DTIC Science & Technology

    2017-12-01

    carefully to ensure only the minimum information needed for effective management control is requested. Requires cost-benefit analysis and PM...baseline offers metrics that highlight performance trends and program variances. This information provides Program Managers and higher levels of...The existing training philosophy is effective only if the managers using the information have well-trained and experienced personnel that can

  5. Ways to improve your correlation functions

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    1993-01-01

    This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
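
    Hamilton's estimator, xi = DD*RR/DR^2 - 1, is easy to reproduce on toy catalogs with brute-force pair counts. In the Python sketch below both catalogs are unclustered, so xi should scatter around zero; the box size and point counts are arbitrary.

      import numpy as np
      from scipy.spatial.distance import cdist, pdist

      def pair_fraction(d, edges):
          return np.histogram(d, bins=edges)[0].astype(float)

      rng = np.random.default_rng(8)
      D = rng.uniform(0, 100, size=(500, 3))       # "galaxies" (toy, unclustered)
      R = rng.uniform(0, 100, size=(2000, 3))      # random comparison catalog
      edges = np.linspace(1, 30, 8)

      dd = pair_fraction(pdist(D), edges) / (len(D) * (len(D) - 1) / 2)
      rr = pair_fraction(pdist(R), edges) / (len(R) * (len(R) - 1) / 2)
      dr = pair_fraction(cdist(D, R).ravel(), edges) / (len(D) * len(R))

      xi = dd * rr / dr ** 2 - 1                   # Hamilton (1993) estimator
      print(np.round(xi, 3))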

  6. The Three-Dimensional Power Spectrum Of Galaxies from the Sloan Digital Sky Survey

    DTIC Science & Technology

    2004-05-10

    aspects of the three-dimensional clustering of a much larger data set involving over 200,000 galaxies with redshifts. This paper is focused on measuring...papers, we will constrain galaxy bias empirically by using clustering measurements on smaller scales (e.g., I. Zehavi et al. 2004, in preparation...minimum-variance measurements in 22 k-bands of both the clustering power and its anisotropy due to redshift-space distortions, with narrow and well

  7. Waveform-based spaceborne GNSS-R wind speed observation: Demonstration and analysis using UK TechDemoSat-1 data

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Yang, Dongkai; Zhang, Bo; Li, Weiqiang

    2018-03-01

    This paper explores two types of mathematical functions to fit the single- and full-frequency waveforms, respectively, of spaceborne Global Navigation Satellite System-Reflectometry (GNSS-R). The metrics of the waveforms, such as the noise floor, peak magnitude, mid-point position of the leading edge, leading edge slope, and trailing edge slope, can be derived from the parameters of the proposed models. Because the quality of the UK TDS-1 data is not at the level required by a remote sensing mission, waveforms buried in noise or reflected from ice/land are removed, using the peak-to-mean ratio and the cosine similarity of the waveform, before wind speed is retrieved. The single-parameter retrieval models are developed by comparing the peak magnitude, leading edge slope, and trailing edge slope derived from the parameters of the proposed models with in situ wind speed from the ASCAT scatterometer. To improve the retrieval accuracy, three types of multi-parameter observations based on principal component analysis (PCA), a minimum variance (MV) estimator, and a Back Propagation (BP) network are implemented. The results indicate that, compared to the best results of the single-parameter observation, the approaches based on principal component analysis and minimum variance could not significantly improve the retrieval accuracy; however, the BP networks achieve an improvement, with RMSEs of 2.55 m/s and 2.53 m/s for the single- and full-frequency waveforms, respectively.

  8. Fast Minimum Variance Beamforming Based on Legendre Polynomials.

    PubMed

    Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae

    2016-09-01

    Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer, in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity since the inverse spatial covariance matrix must be calculated. Some noteworthy methods among various attempts to solve this problem include beam space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating the matrix only with important components of the matrix, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those methods when the dimensionality of the covariance matrices is reduced to the same dimension.
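
    The beamspace idea these methods share (project the covariance onto a small orthonormal basis so that only a K x K rather than an M x M matrix is inverted) can be sketched generically in Python with a Legendre basis. This is an illustration under synthetic noise-only data, not the authors' exact algorithm.

      import numpy as np
      from numpy.polynomial import legendre

      M, K = 64, 6                                 # array elements, basis size
      x = np.linspace(-1, 1, M)
      B = np.column_stack([legendre.legval(x, np.eye(K)[k]) for k in range(K)])
      B, _ = np.linalg.qr(B)                       # orthonormalize the basis columns

      rng = np.random.default_rng(9)
      snap = rng.normal(size=(M, 200)) + 1j * rng.normal(size=(M, 200))
      R = snap @ snap.conj().T / 200 + 1e-3 * np.eye(M)
      a = np.ones(M, complex)                      # broadside steering vector

      Rb = B.conj().T @ R @ B                      # K x K beamspace covariance
      ab = B.conj().T @ a
      wb = np.linalg.solve(Rb, ab)
      wb /= ab.conj() @ wb                         # distortionless normalization
      w = B @ wb                                   # back to element space
      print(abs(w.conj() @ a))                     # = 1: unit gain preserved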

  9. Null steering of adaptive beamforming using linear constraint minimum variance assisted by particle swarm optimization, dynamic mutated artificial immune system, and gravitational search algorithm.

    PubMed

    Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem

    2014-01-01

    Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques that is commonly applied to cancel interfering signals and steer or produce a strong beam to the desired signal through its computed weight vectors. However, weights computed by LCMV usually are not able to form the radiation beam towards the target user precisely and not good enough to reduce the interference by placing null at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) technique is explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation result demonstrates that received signal to interference and noise ratio (SINR) of target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired direction. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization as compared to the PSO technique. The algorithms were implemented using Matlab program.
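
    The closed-form LCMV weights that these AI techniques start from, w = R^{-1} C (C^H R^{-1} C)^{-1} f, can be computed directly; the array geometry and jammer directions in this Python sketch are hypothetical.

      import numpy as np

      def steering(m, theta_deg, d=0.5):
          """ULA steering vector; element spacing d in wavelengths."""
          k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
          return np.exp(1j * k * np.arange(m))

      M = 16
      rng = np.random.default_rng(10)
      A = np.column_stack([steering(M, -40), steering(M, 25)])   # two jammers
      S = 3 * (rng.normal(size=(2, 500)) + 1j * rng.normal(size=(2, 500)))
      N = (rng.normal(size=(M, 500)) + 1j * rng.normal(size=(M, 500))) / np.sqrt(2)
      X = A @ S + N
      R = X @ X.conj().T / 500                     # interference-plus-noise covariance

      # Unit gain toward 10 degrees, hard nulls on the two jammers
      C = np.column_stack([steering(M, 10), steering(M, -40), steering(M, 25)])
      f = np.array([1.0, 0.0, 0.0])
      RinvC = np.linalg.solve(R, C)
      w = RinvC @ np.linalg.solve(C.conj().T @ RinvC, f)
      print(np.round(abs(w.conj() @ C), 3))        # ~[1, 0, 0]: constraints met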

  11. Demographics of an ornate box turtle population experiencing minimal human-induced disturbances

    USGS Publications Warehouse

    Converse, S.J.; Iverson, J.B.; Savidge, J.A.

    2005-01-01

    Human-induced disturbances may threaten the viability of many turtle populations, including populations of North American box turtles. Evaluation of the potential impacts of these disturbances can be aided by long-term studies of populations subject to minimal human activity. In such a population of ornate box turtles (Terrapene ornata ornata) in western Nebraska, we examined survival rates and population growth rates from 1981-2000 based on mark-recapture data. The average annual apparent survival rate of adult males was 0.883 (SE = 0.021) and of adult females was 0.932 (SE = 0.014). Minimum winter temperature was the best of five climate variables as a predictor of adult survival. Survival rates were highest in years with low minimum winter temperatures, suggesting that global warming may result in declining survival. We estimated an average adult population growth rate (λ) of 1.006 (SE = 0.065), with an estimated temporal process variance (σ²) of 0.029 (95% CI = 0.005-0.176). Stochastic simulations suggest that this mean and temporal process variance would result in a 58% probability of a population decrease over a 20-year period. This research provides evidence that, unless unknown density-dependent mechanisms are operating in the adult age class, significant human disturbances, such as commercial harvest or turtle mortality on roads, represent a potential risk to box turtle populations. © 2005 by the Ecological Society of America.
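
    The reported decline probability can be checked with a quick Monte Carlo, assuming (as one plausible reading) lognormally distributed annual growth rates with the stated mean and temporal process variance:

      import numpy as np

      rng = np.random.default_rng(11)
      mean_lam, var_lam, years, reps = 1.006, 0.029, 20, 100_000

      sigma2 = np.log(1 + var_lam / mean_lam ** 2)     # lognormal matching moments
      mu = np.log(mean_lam) - sigma2 / 2
      lam = rng.lognormal(mu, np.sqrt(sigma2), size=(reps, years))
      growth = lam.prod(axis=1)                        # 20-yr population multiplier
      print("P(decline) =", (growth < 1).mean())       # ~0.58, matching the paper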

  12. Theoretical and simulated performance for a novel frequency estimation technique

    NASA Technical Reports Server (NTRS)

    Crozier, Stewart N.

    1993-01-01

    A low complexity, open-loop, discrete-time, delay-multiply-average (DMA) technique for estimating the frequency offset for digitally modulated MPSK signals is investigated. A nonlinearity is used to remove the MPSK modulation and generate the carrier component to be extracted. Theoretical and simulated performance results are presented and compared to the Cramer-Rao lower bound (CRLB) for the variance of the frequency estimation error. For all signal-to-noise ratios (SNR's) above threshold, it is shown that the CRLB can essentially be achieved with linear complexity.
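
    A minimal Python version of a delay-multiply-average frequency estimator of this general kind (a generic sketch, not necessarily the paper's exact formulation) for QPSK with invented parameters:

      import numpy as np

      def dma_freq(sym, M, T):
          """Frequency-offset estimate from MPSK symbols sampled at rate 1/T."""
          z = sym ** M                             # M-th power strips the modulation
          acc = np.mean(z[1:] * np.conj(z[:-1]))   # average one-symbol phase step
          return np.angle(acc) / (2 * np.pi * M * T)

      rng = np.random.default_rng(12)
      M, T, f0, n = 4, 1e-4, 150.0, 2000           # QPSK, 10 ksym/s, 150 Hz offset
      phases = 2 * np.pi * rng.integers(0, M, n) / M
      t = np.arange(n) * T
      noise = 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
      sym = np.exp(1j * (phases + 2 * np.pi * f0 * t)) + noise
      print(dma_freq(sym, M, T))                   # ~150 Hz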

  13. Wavelet Analysis of Nonstationary Fluctuations of Monte Carlo-Simulated Excitatory Postsynaptic Currents

    PubMed Central

    Aristizabal, F.; Glavinovic, M. I.

    2003-01-01

    Tracking spectral changes of rapidly varying signals is a demanding task. In this study, we explore on Monte Carlo-simulated glutamate-activated AMPA patch and synaptic currents whether a wavelet analysis offers such a possibility. Unlike Fourier methods that determine only the frequency content of a signal, the wavelet analysis determines both the frequency and the time. This is owing to the nature of the basis functions, which are infinite for Fourier transforms (sines and cosines are infinite), but are finite for wavelet analysis (wavelets are localized waves). In agreement with previous reports, the frequency of the stationary patch current fluctuations is higher for larger currents, whereas the mean-variance plots are parabolic. The spectra of the current fluctuations and mean-variance plots are close to the theoretically predicted values. The median frequency of the synaptic and nonstationary patch currents is, however, time dependent, though at the peak of synaptic currents, the median frequency is insensitive to the number of glutamate molecules released. Such time dependence demonstrates that the “composite spectra” of the current fluctuations gathered over the whole duration of synaptic currents cannot be used to assess the mean open time or effective mean open time of AMPA channels. The current (patch or synaptic) versus median frequency plots show hysteresis. The median frequency is thus not a simple reflection of the overall receptor saturation levels and is greater during the rise phase for the same saturation level. The hysteresis is due to the higher occupancy of the doubly bound state during the rise phase and not due to the spatial spread of the saturation disk, which remains remarkably constant. Albeit time dependent, the variance of the synaptic and nonstationary patch currents can be accurately determined. Nevertheless the evaluation of the number of AMPA channels and their single current from the mean-variance plots of patch or synaptic currents is not highly accurate owing to the varying number of the activatable AMPA channels caused by desensitization. The spatial nonuniformity of open, bound, and desensitized AMPA channels, and the time dependence and spatial nonuniformity of the glutamate concentration in the synaptic cleft, further reduce the accuracy of estimates of the number of AMPA channels from synaptic currents. In conclusion, wavelet analysis of nonstationary fluctuations of patch and synaptic currents expands our ability to determine accurately the variance and frequency of current fluctuations, demonstrates the limits of applicability of techniques currently used to evaluate the single channel current and number of AMPA channels, and offers new insights into the mechanisms involved in the generation of unitary quantal events at excitatory central synapses. PMID:14507683

  15. Molecular Engineering of Surfaces for Sensing and Detection

    DTIC Science & Technology

    2005-08-01

    solution was flowed in both chambers at a concentration of 0.05 mg/mL. Biotinylated single-stranded oligonucleotides (bDNA) were immobilized on the layer...correspondence between surface-bound bDNA and conjugate, a theoretical minimum coverage of 1.18 × 10^12 molecules/cm^2 of bDNA is necessary to...immobilize a monolayer of antibody. Above this bDNA coverage a monolayer of immobilized antibody should be observed. These theoretical values are

  16. Discrete Methods and their Applications

    DTIC Science & Technology

    1993-02-03

    problem of finding all near-optimal solutions to a linear program. In paper [18], we give a brief and elementary proof of a result of Hoffman [1952] about...relies only on linear programming duality; second, we obtain geometric and algebraic representations of the bounds that are determined explicitly in...same. We have studied the problem of finding the minimum n such that a given unit interval graph is an n-graph. A linear time algorithm to compute

  17. Cost of remembering a bit of information

    NASA Astrophysics Data System (ADS)

    Chiuchiù, D.; López-Suárez, M.; Neri, I.; Diamantini, M. C.; Gammaitoni, L.

    2018-05-01

    In 1961, Landauer [R. Landauer, IBM J. Res. Develop. 5, 183 (1961), 10.1147/rd.53.0183] pointed out that resetting a binary memory requires a minimum energy of k_B T ln(2). However, once written, any memory is doomed to lose its content if no action is taken. To avoid memory losses, a refresh procedure is periodically performed. We present a theoretical model and an experiment on a microelectromechanical system to evaluate the minimum energy required to preserve one bit of information over time. Two main conclusions are drawn: (i) in principle, the energetic cost to preserve information for a fixed time duration with a given error probability can be arbitrarily reduced if the refresh procedure is performed often enough, and (ii) the Heisenberg uncertainty principle sets an upper bound on the memory lifetime.
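
    The k_B T ln(2) figure quoted here is quick to evaluate numerically:

      import numpy as np

      kB, T = 1.380649e-23, 300.0        # Boltzmann constant (J/K), room temperature
      print(kB * T * np.log(2))          # Landauer reset bound: ~2.87e-21 J per bit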

  18. Scores on Riley's stuttering severity instrument versions three and four for samples of different length and for different types of speech material.

    PubMed

    Todd, Helena; Mirawdeli, Avin; Costelloe, Sarah; Cavenagh, Penny; Davis, Stephen; Howell, Peter

    2014-12-01

    Riley stated that the minimum speech sample length necessary to compute his stuttering severity estimates was 200 syllables; this claim was investigated. The procedures supplied for the assessment of readers and non-readers were also examined to see whether they give equivalent scores. Recordings of spontaneous speech samples from 23 young children (aged between 2 years 8 months and 6 years 3 months) and 31 older children (aged between 10 years 0 months and 14 years 7 months) were made. Riley's severity estimates were scored on extracts of different lengths. The older children provided spontaneous and read samples, which were scored for severity according to reader and non-reader procedures. Analysis of variance supported the use of 200-syllable-long samples as the minimum necessary for obtaining severity scores. There was no significant difference in SSI-3 scores for the older children when the reader and non-reader procedures were used. Samples 200 syllables long are thus the minimum appropriate for obtaining stable Riley severity scores. The procedural variants provide similar severity scores.

  19. Optimization strategy for and structural properties of traffic efficiency under bounded information accessibility

    NASA Astrophysics Data System (ADS)

    Ahn, Sanghyun; Ha, Seungwoong; Kim, Soo Yong

    2016-06-01

    A vital challenge for many socioeconomic systems is determining the optimum use of limited information. Traffic systems, wherein the range of resources is limited, are a particularly good example of this challenge. Given bounded information accessibility, arising from, for example, high costs or technical limitations, we develop a new optimization strategy to improve the efficiency of a traffic system with signals and intersections. Numerous studies, including the study by Chowdhury and Schadschneider (whose method we denote by ChSch), have attempted to achieve the maximum vehicle speed or the minimum wait time for a given traffic condition. In this paper, we introduce a modified version of ChSch with an independently functioning, decentralized control system. With the new model, we determine the optimization strategy under bounded information accessibility, demonstrating the existence of an optimal point for phase transitions in the system. The paper also provides insight that can be applied by traffic engineers to create more efficient traffic systems by analyzing the area and symmetry of local sites. We support our results with a statistical analysis using empirical traffic data from Seoul, Korea.

  20. Future mission studies: Preliminary comparisons of solar flux models

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.

    1991-01-01

    The results of comparisons of solar flux models are presented. (The wavelength lambda = 10.7 cm radio flux is the best indicator of the strength of ionizing radiations, such as solar ultraviolet and x-ray emissions, that directly affect the atmospheric density, thereby changing the orbit lifetime of satellites. Accurate forecasting of the solar flux F sub 10.7 is thus crucial for orbit determination of spacecraft.) The measured solar flux recorded by the National Oceanic and Atmospheric Administration (NOAA) is compared against the forecasts made by Schatten, MSFC, and NOAA itself. The possibility of a linear, unbiased minimum-variance estimator that properly combines all three models into a single forecast of minimum variance is also discussed; such a combination retains the physics inherent in each model. This is considered to be the dead-end statistical approach to solar flux forecasting before any nonlinear chaotic approach.
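
    The combined estimator mentioned above admits a minimal sketch: assuming the three model forecasts are unbiased and independent with known error variances, the linear unbiased minimum-variance combination weights each forecast by its inverse error variance. The forecast values and variances below are hypothetical, purely for illustration.

    ```python
    import numpy as np

    def blue_combine(forecasts, variances):
        """Best linear unbiased (minimum-variance) combination of independent,
        unbiased forecasts: weights proportional to inverse error variances."""
        forecasts = np.asarray(forecasts, dtype=float)
        variances = np.asarray(variances, dtype=float)
        w = 1.0 / variances
        combined = (w @ forecasts) / w.sum()     # weights sum to 1 -> unbiased
        combined_var = 1.0 / w.sum()             # always <= smallest input variance
        return combined, combined_var

    # Hypothetical F10.7 forecasts (solar flux units) from three models.
    f107, var = blue_combine([165.0, 172.0, 158.0], [25.0, 49.0, 36.0])
    ```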

  1. Optimal portfolio strategy with cross-correlation matrix composed by DCCA coefficients: Evidence from the Chinese stock market

    NASA Astrophysics Data System (ADS)

    Sun, Xuelian; Liu, Zixian

    2016-02-01

    In this paper, a new estimator of the correlation matrix is proposed, composed of detrended cross-correlation coefficients (DCCA coefficients), to improve portfolio optimization. In contrast to Pearson's correlation coefficients (PCC), DCCA coefficients acquired by the detrended cross-correlation analysis (DCCA) method can describe the nonlinear correlation between assets and can be decomposed into different time scales. These properties make DCCA useful for improving the investment outcome and for investigating the scale behavior of portfolios. The minimum variance portfolio (MVP) model and the Mean-Variance (MV) model are used to evaluate the effectiveness of this improvement. A stability analysis shows the effect of the two kinds of correlation matrices on the estimation error of the portfolio weights. The observed scale behaviors are significant for risk management and could be used to optimize portfolio selection.
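
    For reference, the MVP weights follow in closed form from the covariance matrix, however that matrix is estimated (PCC- or DCCA-based). A minimal sketch, with an illustrative covariance in place of one estimated from real returns:

    ```python
    import numpy as np

    def min_variance_weights(cov):
        """Global minimum-variance portfolio: w = C^{-1} 1 / (1' C^{-1} 1)."""
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    # Toy covariance built from volatilities and a correlation matrix; in the
    # paper's setting the correlation entries would be DCCA coefficients at a
    # chosen time scale instead of Pearson coefficients.
    vol = np.array([0.20, 0.25, 0.15])
    corr = np.array([[1.0, 0.3, 0.1],
                     [0.3, 1.0, 0.2],
                     [0.1, 0.2, 1.0]])
    cov = np.outer(vol, vol) * corr
    w = min_variance_weights(cov)       # weights sum to 1
    ```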

  2. Demodulation of messages received with low signal to noise ratio

    NASA Astrophysics Data System (ADS)

    Marguinaud, A.; Quignon, T.; Romann, B.

    The implementation of this all-digital demodulator is derived from maximum-likelihood considerations applied to an analytical representation of the received signal. Traditional adapted filters and phase-lock loops are replaced by minimum variance estimators and hypothesis tests. These statistical tests become very simple when working on the phase signal. These methods, combined with rigorous control of the data representation, allow significant computation savings compared to conventional realizations. Nominal operation has been verified on a QPSK demodulator down to a signal-to-noise ratio of -3 dB.

  3. An adaptive technique for estimating the atmospheric density profile during the AE mission

    NASA Technical Reports Server (NTRS)

    Argentiero, P.

    1973-01-01

    A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum variance, adaptive filter is utilized. The trajectory of the probe and the probe parameters are treated in a consider mode: their estimates are not improved, but their associated uncertainties are allowed to influence filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percentage points.

  4. Real-time performance assessment and adaptive control for a water chiller unit in an HVAC system

    NASA Astrophysics Data System (ADS)

    Bai, Jianbo; Li, Yang; Chen, Jianhao

    2018-02-01

    The paper proposes an adaptive control method for a water chiller unit in an HVAC system. Based on minimum variance evaluation, the adaptive control method was used to achieve better control of the water chiller unit. To verify its performance, the proposed method was compared with a conventional PID controller; the simulation results showed that the adaptive control method had superior control performance to that of the conventional PID controller.
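
    The abstract does not spell out its minimum variance evaluation; a standard formulation from control-loop performance assessment is the Harris index, which compares the minimum achievable (minimum-variance) output variance with the observed variance. The sketch below is that standard technique, an assumption rather than the paper's method: it fits an AR model to the measured output and estimates the benchmark from the first impulse-response coefficients up to the process delay.

    ```python
    import numpy as np

    def harris_index(y, delay, p=10):
        """Minimum-variance (Harris) control performance index in (0, 1];
        values near 1 indicate near-minimum-variance control."""
        y = np.asarray(y, float) - np.mean(y)
        N = len(y)
        # Least-squares AR(p) fit: y[t] = a1*y[t-1] + ... + ap*y[t-p] + e[t]
        X = np.column_stack([y[p - k - 1:N - k - 1] for k in range(p)])
        a, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
        sigma_e2 = np.var(y[p:] - X @ a)
        # Impulse response of 1 / (1 - a1 q^-1 - ... - ap q^-p), first `delay` terms
        h = np.zeros(delay)
        h[0] = 1.0
        for i in range(1, delay):
            h[i] = sum(a[k] * h[i - k - 1] for k in range(min(p, i)))
        sigma_mv2 = sigma_e2 * np.sum(h ** 2)   # minimum achievable variance
        return sigma_mv2 / np.var(y)

    # Usage: y = sampled chilled-water temperature, delay = dead time in samples
    # eta = harris_index(y, delay=3)
    ```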

  5. Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.

    PubMed

    Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V

    2016-10-01

    An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in the bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand with varying concentrations of aluminum were made for testing the system prior to the application to human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5 μg Al/g Ca) compared to the inverse-variance weighted mean (5.2 μg Al/g Ca), rendering both equally applicable to human measurements. Copyright © 2016 Elsevier Ltd. All rights reserved.
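
    For reference, the inverse-variance weighted mean used in the photopeak fitting model has a simple closed form; the sketch below (with hypothetical readings) also returns the standard error of the combined value.

    ```python
    import numpy as np

    def ivw_mean(values, sigmas):
        """Inverse-variance weighted mean and its standard error."""
        w = 1.0 / np.asarray(sigmas, float) ** 2
        mean = np.sum(w * np.asarray(values, float)) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        return mean, se

    # Hypothetical repeated aluminum readings (ug Al / g Ca) with 1-sigma errors.
    mean, se = ivw_mean([6.1, 4.8, 5.5], [1.2, 0.9, 1.5])
    ```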

  6. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.

    PubMed

    Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L

    2013-08-13

    United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.
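
    A minimum detectable difference of the kind suggested here can be computed with the usual normal-approximation formula; the sketch below is a generic two-sample version with illustrative numbers, not a procedure taken from the study.

    ```python
    from math import sqrt
    from scipy.stats import norm

    def minimum_detectable_difference(sd1, n1, sd2, n2, alpha=0.05, power=0.8):
        """Smallest true difference between two population means detectable
        at the given significance level and power (normal approximation)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return z * sqrt(sd1**2 / n1 + sd2**2 / n2)

    # e.g. two surveys, each with SD 40 animals from n = 30 counts:
    mdd = minimum_detectable_difference(40, 30, 40, 30)   # ~28.9 animals
    ```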

  7. Lekking without a paradox in the buff-breasted sandpiper

    USGS Publications Warehouse

    Lanctot, Richard B.; Scribner, Kim T.; Kempenaers, Bart; Weatherhead, Patrick J.

    1997-01-01

    Females in lek-breeding species appear to copulate with a small subset of the available males. Such strong directional selection is predicted to decrease additive genetic variance in the preferred male traits, yet females continue to mate selectively, thus generating the lek paradox. In a study of buff-breasted sandpipers (Tryngites subruficollis), we combine detailed behavioral observations with paternity analyses using single-locus minisatellite DNA probes to provide the first evidence from a lek-breeding species that the variance in male reproductive success is much lower than expected. In 17 and 30 broods sampled in two consecutive years, a minimum of 20 and 39 males, respectively, sired offspring. This low variance in male reproductive success resulted from effective use of alternative reproductive tactics by males, females mating with solitary males off leks, and multiple mating by females. Thus, the results of this study suggest that sexual selection through female choice is weak in buff-breasted sandpipers. The behavior of other lek-breeding birds is sufficiently similar to that of buff-breasted sandpipers that paternity studies of those species should be conducted to determine whether leks generally are less paradoxical than they appear.

  8. Kriging analysis of mean annual precipitation, Powder River Basin, Montana and Wyoming

    USGS Publications Warehouse

    Karlinger, M.R.; Skrivan, James A.

    1981-01-01

    Kriging is a statistical estimation technique for regionalized variables which exhibit an autocorrelation structure. Such structure can be described by a semi-variogram of the observed data. The kriging estimate at any point is a weighted average of the data, where the weights are determined using the semi-variogram and an assumed drift, or lack of drift, in the data. Block, or areal, estimates can also be calculated. The kriging algorithm, based on unbiased and minimum-variance estimates, involves a linear system of equations to calculate the weights. Kriging variances can then be used to give confidence intervals of the resulting estimates. Mean annual precipitation in the Powder River basin, Montana and Wyoming, is an important variable when considering restoration of coal-strip-mining lands of the region. Two kriging analyses involving data at 60 stations were made: one assuming no drift in precipitation, and one assuming a partial quadratic drift simulating orographic effects. Contour maps of estimates of mean annual precipitation were similar for both analyses, as were the corresponding contours of kriging variances. Block estimates of mean annual precipitation were made for two subbasins. Runoff estimates were 1-2 percent of the kriged block estimates. (USGS)
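
    As a compact illustration of the kriging system described above, the sketch below solves the ordinary-kriging equations (weights plus a Lagrange multiplier enforcing that the weights sum to 1) for a point estimate and its kriging variance. The spherical semivariogram and the station data are hypothetical stand-ins for a model fitted to real observations.

    ```python
    import numpy as np

    def spherical_gamma(h, sill=1.0, a=50.0, nugget=0.0):
        """Spherical semivariogram model with range a."""
        h = np.asarray(h, float)
        g = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
        return np.where(h < a, g, nugget + sill)

    def ordinary_kriging(xy, z, x0, gamma=spherical_gamma):
        """Ordinary-kriging estimate and kriging variance at point x0."""
        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.empty((n + 1, n + 1))
        A[:n, :n] = gamma(d)
        A[:n, n] = 1.0               # unbiasedness constraint
        A[n, :n] = 1.0
        A[n, n] = 0.0
        b = np.append(gamma(np.linalg.norm(xy - x0, axis=1)), 1.0)
        sol = np.linalg.solve(A, b)
        w, mu = sol[:n], sol[n]
        return w @ z, w @ b[:n] + mu   # estimate, kriging variance

    # Hypothetical precipitation stations (coordinates in km, values in mm/yr).
    xy = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 40.0], [45.0, 35.0]])
    z = np.array([320.0, 360.0, 300.0, 410.0])
    est, var = ordinary_kriging(xy, z, np.array([20.0, 20.0]))
    ```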

  9. Aircrew coordination and decisionmaking: Peer ratings of video tapes made during a full mission simulation

    NASA Technical Reports Server (NTRS)

    Murphy, M. R.; Awe, C. A.

    1986-01-01

    Six professionally active, retired captains rated the coordination and decisionmaking performances of sixteen aircrews while viewing videotapes of a simulated commercial air transport operation. The scenario featured a required diversion and a probable minimum fuel situation. Seven-point Likert-type scales were used in rating variables on the basis of a model of crew coordination and decisionmaking. The variables were based on concepts of, for example, decision difficulty, efficiency, and outcome quality, and on leader-subordinate concepts such as person- and task-oriented leader behavior and competency motivation of subordinate crewmembers. Five front-end variables of the model were in turn dependent variables for a hierarchical regression procedure. Decision efficiency, command reversal, and decision quality explained 46% of the variance in safety performance. Decision efficiency and the captain's quality of within-crew communications explained 60% of the variance in decision quality, an alternative substantive dependent variable to safety performance. The variances of decision efficiency, crew coordination, and command reversal were in turn explained 78%, 80%, and 60%, respectively, by small numbers of preceding independent variables. A principal-component varimax factor analysis supported the model structure suggested by the regression analyses.

  10. Signal-dependent noise determines motor planning

    NASA Astrophysics Data System (ADS)

    Harris, Christopher M.; Wolpert, Daniel M.

    1998-08-01

    When we make saccadic eye movements or goal-directed arm movements, there is an infinite number of possible trajectories that the eye or arm could take to reach the target. However, humans show highly stereotyped trajectories in which velocity profiles of both the eye and hand are smooth and symmetric for brief movements. Here we present a unifying theory of eye and arm movements based on the single physiological assumption that the neural control signals are corrupted by noise whose variance increases with the size of the control signal. We propose that in the presence of such signal-dependent noise, the shape of a trajectory is selected to minimize the variance of the final eye or arm position. This minimum-variance theory accurately predicts the trajectories of both saccades and arm movements and the speed-accuracy trade-off described by Fitts' law. These profiles are robust to changes in the dynamics of the eye or arm, as found empirically. Moreover, the relation between path curvature and hand velocity during drawing movements reproduces the empirical 'two-thirds power law'. This theory provides a simple and powerful unifying perspective for both eye and arm movement control.
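
    The core of the minimum-variance theory admits a compact numerical sketch: for a discrete-time point mass whose control signal carries noise of standard deviation proportional to |u_t|, the final-position variance is a quadratic form in the control sequence, and minimizing it subject to reaching the target at rest is an equality-constrained quadratic program with a closed-form solution. The toy double integrator below is an illustration under these assumptions, not the authors' model.

    ```python
    import numpy as np

    def min_variance_trajectory(n_steps, dt, target, k=1.0):
        """Control sequence reaching `target` with zero final velocity while
        minimizing final-position variance under control noise of standard
        deviation k*|u_t|. Solves: min u' W u  s.t.  A u = b, with
        u* = W^-1 A' (A W^-1 A')^-1 b."""
        F = np.array([[1.0, dt], [0.0, 1.0]])      # double-integrator dynamics
        B = np.array([0.5 * dt**2, dt])
        # Column t: influence of u_t on the final (position, velocity) state.
        A = np.column_stack([np.linalg.matrix_power(F, n_steps - 1 - t) @ B
                             for t in range(n_steps)])
        b = np.array([target, 0.0])                # end at target, at rest
        W_inv = np.diag(1.0 / (k**2 * A[0] ** 2))  # inverse variance weights
        u = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, b)
        final_var = float(u @ np.diag(k**2 * A[0] ** 2) @ u)
        return u, final_var

    u, var = min_variance_trajectory(n_steps=50, dt=0.01, target=0.1)
    ```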

  11. Evaluating detection and estimation capabilities of magnetometer-based vehicle sensors

    NASA Astrophysics Data System (ADS)

    Slater, David M.; Jacyna, Garry M.

    2013-05-01

    In an effort to secure the northern and southern United States borders, MITRE has been tasked with developing Modeling and Simulation (M&S) tools that accurately capture the mapping between algorithm-level Measures of Performance (MOP) and system-level Measures of Effectiveness (MOE) for current/future surveillance systems deployed by the Customs and Border Protection Office of Technology Innovations and Acquisitions (OTIA). This analysis is part of a larger M&S undertaking. The focus is on two MOPs for magnetometer-based Unattended Ground Sensors (UGS). UGS are placed near roads to detect passing vehicles and estimate properties of the vehicle's trajectory such as bearing and speed. The first MOP considered is the probability of detection. We derive probabilities of detection for a network of sensors over an arbitrary number of observation periods and explore how the probability of detection changes when multiple sensors are employed. The performance of UGS is also evaluated based on the level of variance in the estimation of trajectory parameters. We derive the Cramer-Rao bounds for the variances of the estimated parameters in two cases: when no a priori information is known and when the parameters are assumed to be Gaussian with known variances. Sample results show that UGS perform significantly better in the latter case.
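
    The first MOP has an elementary closed form in the special case of independent, identical sensors: the network detection probability over n sensors. A one-line sketch with illustrative numbers (the independence assumption is ours, not necessarily the report's):

    ```python
    def network_detection_probability(p_single, n_sensors):
        """Probability that at least one of n independent sensors detects
        the vehicle, given a per-sensor detection probability."""
        return 1.0 - (1.0 - p_single) ** n_sensors

    p = network_detection_probability(0.8, 3)   # three sensors -> 0.992
    ```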

  12. Groundwater management under uncertainty using a stochastic multi-cell model

    NASA Astrophysics Data System (ADS)

    Joodavi, Ata; Zare, Mohammad; Ziaei, Ali Naghi; Ferré, Ty P. A.

    2017-08-01

    The optimization of spatially complex groundwater management models over long time horizons requires the use of computationally efficient groundwater flow models. This paper presents a new stochastic multi-cell lumped-parameter aquifer model that explicitly considers uncertainty in groundwater recharge. To achieve this, the multi-cell model is combined with the constrained-state formulation method. In this method, the lower and upper bounds of groundwater heads are incorporated into the mass balance equation using indicator functions. This provides expressions for the means, variances and covariances of the groundwater heads, which can be included in the constraint set in an optimization model. This method was used to formulate two separate stochastic models: (i) groundwater flow in a two-cell aquifer model with normal and non-normal distributions of groundwater recharge; and (ii) groundwater management in a multiple cell aquifer in which the differences between groundwater abstractions and water demands are minimized. The comparison between the results obtained from the proposed modeling technique with those from Monte Carlo simulation demonstrates the capability of the proposed models to approximate the means, variances and covariances. Significantly, considering covariances between the heads of adjacent cells allows a more accurate estimate of the variances of the groundwater heads. Moreover, this modeling technique requires no discretization of state variables, thus offering an efficient alternative to computationally demanding methods.

  13. Empirical scaling of the length of the longest increasing subsequences of random walks

    NASA Astrophysics Data System (ADS)

    Mendonça, J. Ricardo G.

    2017-02-01

    We provide Monte Carlo estimates of the scaling of the length L_n of the longest increasing subsequences of n-step random walks for several different distributions of step lengths, short and heavy-tailed. Our simulations indicate that, barring possible logarithmic corrections, L_n ~ n^θ, with the leading scaling exponent 0.60 ≲ θ ≲ 0.69 for the heavy-tailed distributions of step lengths examined (values increasing as the distribution becomes more heavy-tailed) and θ ≃ 0.57 for distributions of finite variance, irrespective of the particular distribution. The results are consistent with existing rigorous bounds for θ, although in a somewhat surprising manner. For random walks with step lengths of finite variance, we conjecture that the correct asymptotic behavior of L_n is given by √n ln n, and we also propose the form of the subleading asymptotics. The distribution of L_n was found to follow a simple scaling form with scaling functions that vary with θ; accordingly, when the step lengths are of finite variance, the scaling functions seem to be universal. The nature of this scaling remains unclear, since we lack a working model, microscopic or hydrodynamic, for the behavior of the length of the longest increasing subsequences of random walks.
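
    The quantity simulated here is easy to reproduce: the LIS length of a sequence can be computed in O(n log n) with patience sorting. A minimal sketch, using Gaussian steps as one finite-variance example (step distribution and sample sizes are illustrative):

    ```python
    import bisect
    import numpy as np

    def lis_length(seq):
        """Length of the longest strictly increasing subsequence, O(n log n)."""
        tails = []              # tails[k] = smallest tail of an LIS of length k+1
        for v in seq:
            i = bisect.bisect_left(tails, v)
            if i == len(tails):
                tails.append(v)
            else:
                tails[i] = v
        return len(tails)

    # Empirical scaling check for finite-variance (Gaussian) random walks.
    rng = np.random.default_rng(1)
    for n in (10_000, 100_000):
        walk = np.cumsum(rng.standard_normal(n))
        print(n, lis_length(walk))   # grows roughly like sqrt(n) * ln(n)
    ```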

  14. Generalized Bezout's Theorem and its applications in coding theory

    NASA Technical Reports Server (NTRS)

    Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.

    1996-01-01

    This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound of the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determine a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes constructed by the new construction are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2(sup 3)) is also constructed.

  15. Corrigendum: Bounding sea level projections within the framework of the possibility theory Environ. Res. Lett. (2017 12 014012)

    NASA Astrophysics Data System (ADS)

    Le Cozannet, Gonéri; Manceau, Jean-Charles; Rohmer, Jeremy

    2017-10-01

    Figures 3 and 4 of the article ‘Bounding probabilistic sea-level projections within the framework of the possibility theory’ display a minimum value for sea level rise of 15 cm by 2100 with respect to the 1986-2005 mean for the RCP 8.5. The value of 15 cm is consistent with sea level rise rates dropping back to velocities observed during the 20th century according to recent studies, but not to the current sea level rise velocity of 3.4 mm yr-1, as incorrectly stated in the article. This error has no impact on the rest of the article, including its arguments and conclusions, but it is potentially confusing for scientists willing to reproduce the left side of figures 3 and 4. We apologise for any inconvenience caused.

  16. Exact results for the finite time thermodynamic uncertainty relation

    NASA Astrophysics Data System (ADS)

    Manikandan, Sreekanth K.; Krishnamurthy, Supriya

    2018-03-01

    We obtain exact results for the recently discovered finite-time thermodynamic uncertainty relation for the dissipated work W_d, in a stochastically driven system with non-Gaussian work statistics, both in the steady-state and transient regimes, by obtaining exact expressions for any moment of W_d at arbitrary times. The uncertainty function (the Fano factor of W_d) is bounded from below by 2 k_B T as expected, for all times τ, in both steady-state and transient regimes. The lower bound is reached at τ = 0 as well as when certain system parameters vanish (corresponding to an equilibrium state). Surprisingly, we find that the uncertainty function also reaches a constant value at large τ for all the cases we have looked at. For a system starting and remaining in steady state, the uncertainty function increases monotonically, as a function of τ as well as other system parameters, implying that the large-τ value is also an upper bound. For the same system in the transient regime, however, we find that the uncertainty function can have a local minimum at an accessible time τ_m, for a range of parameter values. The large-τ value for the uncertainty function is hence not a bound in this case. The non-monotonicity suggests, rather counter-intuitively, that there might be an optimal time for the working of microscopic machines, as well as an optimal configuration in the phase space of parameter values. Our solutions show that the ratios of higher moments of the dissipated work are also bounded from below by 2 k_B T. For another model, also solvable by our methods, which never reaches a steady state, the uncertainty function is, in some cases, bounded from below by a value less than 2 k_B T.
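
    Written out, the uncertainty function and bound discussed in the abstract are:

    ```latex
    % Uncertainty function: the Fano factor of the dissipated work W_d,
    % bounded from below by 2 k_B T at all times \tau:
    \[
      Q(\tau) \;=\; \frac{\langle W_d^2(\tau)\rangle - \langle W_d(\tau)\rangle^2}
                         {\langle W_d(\tau)\rangle}
      \;\ge\; 2 k_B T .
    \]
    ```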

  17. Effects of nonmagnetic disorder on the energy of Yu-Shiba-Rusinov states

    NASA Astrophysics Data System (ADS)

    Kiendl, Thomas; von Oppen, Felix; Brouwer, Piet W.

    2017-10-01

    We study the sensitivity of Yu-Shiba-Rusinov states, bound states that form around magnetic scatterers in superconductors, to the presence of nonmagnetic disorder in both two and three dimensional systems. We formulate a scattering approach to this problem and reduce the effects of disorder to two contributions: disorder-induced normal reflection and a random phase of the amplitude for Andreev reflection. We find that both of these are small even for moderate amounts of disorder. In the dirty limit in which the disorder-induced mean free path is smaller than the superconducting coherence length, the variance of the energy of the Yu-Shiba-Rusinov state remains small in the ratio of the Fermi wavelength and the mean free path. This effect is more pronounced in three dimensions, where only impurities within a few Fermi wavelengths of the magnetic scatterer contribute. In two dimensions the energy variance is larger by a logarithmic factor because impurities contribute up to a distance of the order of the superconducting coherence length.

  18. Ada (Trade Name)/SQL (Structured Query Language) Binding Specification

    DTIC Science & Technology

    1988-06-01

    TYPES is package ADA_SQL is type EMPLOYEE_NAME is new STRING (1 .. 30); type BOSS_NAME is new EMPLOYEE_NAME; type EMPLOYEE_SALARY is digits 7 range 0.00...minimum number of significant decimal digits. All real numbers between the lower and upper bounds, inclusive, belong to the subtype, and are...and the elements of strings. Format: <character> ::= <digit> | <letter> | <special character>; <digit> ::= 0|1|2|3|4|5|6|7|8|9; <letter> ::= <upper case

  19. A study of polaritonic transparency in couplers made from excitonic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Mahi R.; Racknor, Chris

    2015-03-14

    We have studied light-matter interaction in quantum dot and exciton-polaritonic coupler hybrid systems. The coupler is made by embedding two slabs of an excitonic material (CdS) into a host excitonic material (ZnO). An ensemble of non-interacting quantum dots is doped into the coupler. The bound exciton-polariton states are calculated in the coupler using the transfer matrix method in the presence of the coupling between the external light (photons) and excitons. These bound exciton-polaritons interact with the excitons present in the quantum dots, with the coupler acting as a reservoir. The Schrödinger equation method has been used to calculate the absorption coefficient in the quantum dots. It is found that when the distance between the two slabs (CdS) is greater than the decay length of the evanescent waves, the absorption spectrum has two peaks and one minimum. The minimum corresponds to a transparent state in the system. However, when the distance between the slabs is smaller than the decay length of the evanescent waves, the absorption spectrum has three peaks and two transparent states. In other words, one transparent state can be switched to two transparent states when the distance between the two layers is modified. This could be achieved by applying stress and strain fields. It is also found that transparent states can be switched on and off by applying an external control laser field.

  20. Investigations of interference between electromagnetic transponders and wireless MOSFET dosimeters: a phantom study.

    PubMed

    Su, Zhong; Zhang, Lisha; Ramakrishnan, V; Hagan, Michael; Anscher, Mitchell

    2011-05-01

    To evaluate both the Calypso System's (Calypso Medical Technologies, Inc., Seattle, WA) localization accuracy in the presence of wireless metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters of the dose verification system (DVS, Sicel Technologies, Inc., Morrisville, NC) and the dosimeters' reading accuracy in the presence of wireless electromagnetic transponders inside a phantom. A custom-made, solid-water phantom was fabricated with space for transponders and dosimeters. Two inserts were machined with positioning grooves precisely matching the dimensions of the transponders and dosimeters and were arranged in orthogonal and parallel orientations, respectively. To test the transponder localization accuracy with/without the presence of dosimeters (hypothesis 1), multivariate analyses were performed on transponder-derived localization data with and without dosimeters at each preset distance to detect statistically significant localization differences between the control and test sets. To test dosimeter dose-reading accuracy with/without the presence of transponders (hypothesis 2), an approach of alternating the transponder presence in seven identical fraction dose (100 cGy) deliveries and measurements was implemented. Two-way analysis of variance was performed to examine statistically significant dose-reading differences between the two groups and the different fractions. A relative-dose analysis method was also used to evaluate the transponder impact on dose-reading accuracy after the dose-fading effect was removed by a second-order polynomial fit. Multivariate analysis indicated that hypothesis 1 was false; there was a statistically significant difference between the localization data from the control and test sets. However, the upper and lower bounds of the 95% confidence intervals of the localized positional differences between the control and test sets were less than 0.1 mm, which was significantly smaller than the minimum clinical localization resolution of 0.5 mm. For hypothesis 2, analysis of variance indicated that there was no statistically significant difference between the dosimeter readings with and without the presence of transponders. Both orthogonal and parallel configurations had differences between polynomial-fit and measured doses within 1.75%. The phantom study indicated that the Calypso System's localization accuracy was not clinically affected by the presence of DVS wireless MOSFET dosimeters, and the dosimeter-measured doses were not affected by the presence of transponders. Thus, the same patients could be implanted with both transponders and dosimeters to benefit from the improved accuracy of radiotherapy treatments offered by conjunctional use of the two systems.

  1. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    NASA Technical Reports Server (NTRS)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding, which aims to reduce the computational cost of assessing the failure probability. Next, a variance-based sensitivity analysis was studied for its ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
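
    Of the three methods, the variance-based sensitivity analysis has a standard Monte Carlo form (the Saltelli first-order estimator), sketched below on a toy linear model; the model and sample sizes are illustrative, not those of the EDL simulation.

    ```python
    import numpy as np

    def sobol_first_order(f, n_dim, n_samples, rng):
        """Monte Carlo estimate of first-order Sobol sensitivity indices
        (Saltelli estimator) for a model f acting on rows of inputs in [0,1)."""
        A = rng.random((n_samples, n_dim))
        B = rng.random((n_samples, n_dim))
        fA, fB = f(A), f(B)
        var = np.var(np.concatenate([fA, fB]))
        S = np.empty(n_dim)
        for i in range(n_dim):
            ABi = A.copy()
            ABi[:, i] = B[:, i]            # replace one input column at a time
            S[i] = np.mean(fB * (f(ABi) - fA)) / var
        return S

    # Toy "simulation": output dominated by the first uncertain input.
    def model(X):
        return 5.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2]

    rng = np.random.default_rng(0)
    S = sobol_first_order(model, 3, 100_000, rng)   # approx. variance shares
    ```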

  2. 3D facial landmarks: Inter-operator variability of manual annotation

    PubMed Central

    2014-01-01

    Background Manual annotation of landmarks is a known source of variance, which exists in all fields of medical imaging, influencing the accuracy and interpretation of the results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, as opposed to, e.g., the research fields of orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize the variance. Method Facial scans from 36 voluntary unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability, using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with a minimum point variance. Results The anatomical landmarks of the eye were associated with the lowest variance, particularly the center of the pupils, whereas points of the jaw and eyebrows had the highest variation. We saw marginal variability with regard to intra-operator effects and portraits. Using a sparse set of landmarks (n=14) that captures the whole face, the dense point mean variance was reduced from 1.92 to 0.54 mm. Conclusion The inter-operator variability was primarily associated with particular landmarks, where landmarks with more lenient definitions had the highest variability. The variables embedded in the portrait and the reliability of a trained operator had only marginal influence on the variability. Further, using 14 of the annotated landmarks we were able to reduce the variability and create a dense correspondence mesh capturing all facial features. PMID:25306436

  3. How good is crude MDL for solving the bias-variance dilemma? An empirical investigation based on Bayesian networks.

    PubMed

    Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli

    2014-01-01

    The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike's Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which need not necessarily be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we should also not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size.

  4. How Good Is Crude MDL for Solving the Bias-Variance Dilemma? An Empirical Investigation Based on Bayesian Networks

    PubMed Central

    Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli

    2014-01-01

    The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike’s Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which need not necessarily be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we should also not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size. PMID:24671204

  5. Estimation of stable boundary-layer height using variance processing of backscatter lidar data

    NASA Astrophysics Data System (ADS)

    Saeed, Umar; Rocadenbosch, Francesc

    2017-04-01

    The stable boundary layer (SBL) is one of the most complex and least understood topics in atmospheric science. The type and height of the SBL are important parameters for several applications, such as understanding the formation of haze and fog and assessing the accuracy of chemical and pollutant dispersion models [1]. This work addresses nocturnal Stable Boundary-Layer Height (SBLH) estimation by using variance processing of attenuated backscatter lidar measurements, its principles and limitations. It is shown that temporal and spatial variance profiles of the attenuated backscatter signal are related to the stratification of aerosols in the SBL. A minimum-variance SBLH estimator using local minima in the variance profiles of backscatter lidar signals is introduced. The method is validated using data from the HD(CP)2 Observational Prototype Experiment (HOPE) campaign at Jülich, Germany [2], under different atmospheric conditions. This work has received funding from the European Union Seventh Framework Programme, FP7 People, ITN Marie Curie Actions Programme (2012-2016) in the frame of ITaRS project (GA 289923), H2020 programme under ACTRIS-2 project (GA 654109), the Spanish Ministry of Economy and Competitiveness - European Regional Development Funds under TEC2015-63832-P project, and from the Generalitat de Catalunya (Grup de Recerca Consolidat) 2014-SGR-583. [1] R. B. Stull, An Introduction to Boundary Layer Meteorology, chapter 12, Stable Boundary Layer, pp. 499-543, Springer, Netherlands, 1988. [2] U. Löhnert, J. H. Schween, C. Acquistapace, K. Ebell, M. Maahn, M. Barrera-Verdejo, A. Hirsikko, B. Bohn, A. Knaps, E. O'Connor, C. Simmer, A. Wahner, and S. Crewell, "JOYCE: Jülich Observatory for Cloud Evolution," Bull. Amer. Meteor. Soc., vol. 96, no. 7, pp. 1157-1174, 2015.
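
    A minimal version of the variance-based estimator reads directly from the description: compute the temporal variance profile of the attenuated backscatter and take the first local minimum above the surface. The smoothing window below is an assumed detail, not a parameter from the paper.

    ```python
    import numpy as np

    def sblh_from_variance(backscatter, heights, window=10):
        """Estimate the stable boundary-layer height as the height of the
        first local minimum in the temporal variance profile of attenuated
        backscatter (profiles stacked as time x height)."""
        var_profile = np.var(backscatter, axis=0)       # variance vs. height
        kernel = np.ones(window) / window               # smooth out noise minima
        smooth = np.convolve(var_profile, kernel, mode="same")
        for i in range(1, len(smooth) - 1):
            if smooth[i] < smooth[i - 1] and smooth[i] < smooth[i + 1]:
                return heights[i]
        return None
    ```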

  6. Measuring the Power Spectrum with Peculiar Velocities

    NASA Astrophysics Data System (ADS)

    Macaulay, Edward; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.

    2012-01-01

    The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large scale excess in the matter power spectrum, and can appear to be in some tension with the LCDM model. We use a composite catalogue of 4,537 peculiar velocity measurements with a characteristic depth of 33 h⁻¹ Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results from Macaulay et al. (2011), studying minimum variance moments of the velocity field, as calculated by Watkins, Feldman & Hudson (2009) and Feldman, Watkins & Hudson (2010). We find good agreement with the LCDM model on scales of k > 0.01 h Mpc⁻¹. We find an excess of power on scales of k < 0.01 h Mpc⁻¹, although with a 1σ uncertainty which includes the LCDM model. We find that the uncertainty in the excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and nonlinear clustering in simulated peculiar velocity catalogues, and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.

  7. Power spectrum estimation from peculiar velocity catalogues

    NASA Astrophysics Data System (ADS)

    Macaulay, E.; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.

    2012-09-01

    The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large-scale excess in the matter power spectrum and can appear to be in some tension with the Λ cold dark matter (ΛCDM) model. We use a composite catalogue of 4537 peculiar velocity measurements with a characteristic depth of 33 h⁻¹ Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results by Macaulay et al., studying minimum variance moments of the velocity field, as calculated by Feldman, Watkins & Hudson. We find good agreement with the ΛCDM model on scales of k > 0.01 h Mpc⁻¹. We find an excess of power on scales of k < 0.01 h Mpc⁻¹ with a 1σ uncertainty which includes the ΛCDM model. We find that the uncertainty in the excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and non-linear clustering in simulated peculiar velocity catalogues and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.

  8. Energy Balance for a Sonoluminescence Bubble Yields a Measure of Ionization Potential Lowering

    NASA Astrophysics Data System (ADS)

    Kappus, B.; Bataller, A.; Putterman, S. J.

    2013-12-01

    Application of energy conservation between input sound and the microplasma which forms at the moment of sonoluminescence places bounds on the process whereby the gas is ionized. Detailed pulsed Mie scattering measurements of the radius versus time for a xenon bubble in sulfuric acid provide a complete characterization of the hydrodynamics and minimum radius. For a range of emission intensities, the blackbody spectrum emitted during collapse matches the minimum bubble radius, implying opaque conditions are attained. This requires a degree of ionization > 36%. Analysis reveals only 2.1 ± 0.6 eV/atom of energy available during light emission. In order to unbind enough charge, collective processes must therefore reduce the ionization potential by at least 75%. We interpret this as evidence that a phase transition to a highly ionized plasma is occurring during sonoluminescence.

  9. Energy balance for a sonoluminescence bubble yields a measure of ionization potential lowering.

    PubMed

    Kappus, B; Bataller, A; Putterman, S J

    2013-12-06

    Application of energy conservation between input sound and the microplasma which forms at the moment of sonoluminescence places bounds on the process whereby the gas is ionized. Detailed pulsed Mie scattering measurements of the radius versus time for a xenon bubble in sulfuric acid provide a complete characterization of the hydrodynamics and minimum radius. For a range of emission intensities, the blackbody spectrum emitted during collapse matches the minimum bubble radius, implying opaque conditions are attained. This requires a degree of ionization > 36%. Analysis reveals only 2.1 ± 0.6 eV/atom of energy available during light emission. In order to unbind enough charge, collective processes must therefore reduce the ionization potential by at least 75%. We interpret this as evidence that a phase transition to a highly ionized plasma is occurring during sonoluminescence.

  10. Ground state energy of electrons in a static point-ion lattice

    NASA Technical Reports Server (NTRS)

    Styer, D. F.; Ashcroft, N. W.

    1983-01-01

    The ground state energy of a neutral collection of protons and electrons was investigated under the assumption that in the ground state configuration, static protons occupy the sites of a rigid Bravais lattice. The Wigner-Seitz method was used in conjunction with three postulated potentials: bare Coulomb, Thomas-Fermi screening, and screening by a uniform bare background charge. Within these approximations, the exact band-minimum energy and wave functions are derived. For each of the three potentials, the approximate minimum ground state energy per proton (relative to isolated electrons and protons) is, respectively, -1.078 Ry, -1.038 Ry, and -1.052 Ry. These three minima all fall at a density of about 0.60 gm/cu cm, which is thus an approximate lower bound on the density of metallic hydrogen at its transition pressure.

  11. Analysis of the Bayesian Cramér-Rao lower bound in astrometry. Studying the impact of prior information in the location of an object

    NASA Astrophysics Data System (ADS)

    Echeverria, Alex; Silva, Jorge F.; Mendez, Rene A.; Orchard, Marcos

    2016-10-01

    Context. The best precision that can be achieved to estimate the location of a stellar-like object is a topic of permanent interest in the astrometric community. Aims: We analyze bounds for the best position estimation of a stellar-like object on a CCD detector array in a Bayesian setting where the position is unknown, but where we have access to a prior distribution. In contrast to a parametric setting where we estimate a parameter from observations, the Bayesian approach estimates a random object (i.e., the position is a random variable) from observations that are statistically dependent on the position. Methods: We characterize the Bayesian Cramér-Rao (CR) bound that limits the minimum mean square error (MMSE) of the best estimator of the position of a point source on a linear CCD-like detector, as a function of the properties of the detector, the source, and the background. Results: We quantify and analyze the increase in astrometric performance from the use of a prior distribution of the object position, which is not available in the classical parametric setting. This gain is shown to be significant for various observational regimes, in particular in the case of faint objects or when the observations are taken under poor conditions. Furthermore, we present numerical evidence that the MMSE estimator of this problem tightly achieves the Bayesian CR bound. This is a remarkable result, demonstrating that all the performance gains presented in our analysis can be achieved with the MMSE estimator. Conclusions: The Bayesian CR bound can be used as a benchmark indicator of the expected maximum positional precision of a set of astrometric measurements in which prior information can be incorporated. This bound can be achieved through the conditional mean estimator, in contrast to the parametric case where no unbiased estimator precisely reaches the CR bound.
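
    In scalar form, the gain from the prior can be seen directly in the Van Trees (Bayesian Cramér-Rao) inequality, where data and prior information add; this is the generic form of the bound, stated here under the assumption of a Gaussian prior:

    ```latex
    % Scalar Van Trees inequality: for a random position x with Gaussian
    % prior of variance \sigma_p^2 and data Fisher information I_D(x),
    % the prior contributes 1/\sigma_p^2 of additional information:
    \[
      \mathbb{E}\!\left[(\hat{x} - x)^2\right] \;\ge\;
      \left( \mathbb{E}_x\!\left[ I_D(x) \right] + \frac{1}{\sigma_p^2} \right)^{-1},
    \]
    % recovering the parametric bound 1/I_D as \sigma_p \to \infty.
    ```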

  12. Influence of Layer Thickness, Raster Angle, Deformation Temperature and Recovery Temperature on the Shape-Memory Effect of 3D-Printed Polylactic Acid Samples

    PubMed Central

    Wu, Wenzheng; Ye, Wenli; Wu, Zichao; Geng, Peng; Wang, Yulei; Zhao, Ji

    2017-01-01

    The success of the 3D-printing process depends upon the proper selection of process parameters. However, the majority of current related studies focus on the influence of process parameters on the mechanical properties of the parts. The influence of process parameters on the shape-memory effect has been little studied. This study used the orthogonal experimental design method to evaluate the influence of the layer thickness H, raster angle θ, deformation temperature Td and recovery temperature Tr on the shape-recovery ratio Rr and maximum shape-recovery rate Vm of 3D-printed polylactic acid (PLA). The order and contribution of every experimental factor on the target index were determined by range analysis and ANOVA, respectively. The experimental results indicated that the recovery temperature exerted the greatest effect with a variance ratio of 416.10, whereas the layer thickness exerted the smallest effect on the shape-recovery ratio with a variance ratio of 4.902. The recovery temperature exerted the most significant effect on the maximum shape-recovery rate with the highest variance ratio of 1049.50, whereas the raster angle exerted the minimum effect with a variance ratio of 27.163. The results showed that the shape-memory effect of 3D-printed PLA parts depended strongly on recovery temperature, and depended more weakly on the deformation temperature and 3D-printing parameters. PMID:28825617

  13. Genetic parameters of legendre polynomials for first parity lactation curves.

    PubMed

    Pool, M H; Janss, L L; Meuwissen, T H

    2000-11-01

    Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 to 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment part. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
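
    Building the covariance function from the estimated coefficient matrix is mechanical; the sketch below evaluates a Legendre basis over days in milk and forms Cov(t1, t2) = φ(t1)' K φ(t2). The coefficient matrix K here is hypothetical, for illustration only.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def legendre_basis(dim, order):
        """Legendre polynomials P_0..P_order evaluated at days in milk,
        rescaled to the interval [-1, 1]."""
        t = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0
        return np.column_stack([legendre.legval(t, np.eye(order + 1)[k])
                                for k in range(order + 1)])

    dim = np.arange(5, 306, 30, dtype=float)       # test days 5..305
    Phi = legendre_basis(dim, order=3)             # 3rd-order basis
    K = np.array([[ 8.0,  1.0, -0.5,  0.1],        # hypothetical coefficient
                  [ 1.0,  2.0,  0.3, -0.2],        # (co)variance matrix
                  [-0.5,  0.3,  0.9,  0.0],
                  [ 0.1, -0.2,  0.0,  0.4]])
    G = Phi @ K @ Phi.T                            # covariances over DIM
    ```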

  14. The Form, and Some Robustness Properties of Integrated Distance Estimators for Linear Models, Applied to Some Published Data Sets.

    DTIC Science & Technology

    1982-06-01

    observation in our framework is the pair (y, x) with x considered given. The influence function for σ̂² at the Gaussian distribution with mean xβ and variance σ²... [equation (3-9) garbled in the source] This influence function is bounded in the residual y − xβ, and redescends to an asymptote greater than... A version of the influence function for β̂ at the Gaussian distribution, given the x_j and x, is defined as the normalized difference (see Barnett and

  15. Powered Descent Guidance with General Thrust-Pointing Constraints

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Acikmese, Behcet; Blackmore, Lars

    2013-01-01

    The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.

  16. Rényi-Fisher entropy product as a marker of topological phase transitions

    NASA Astrophysics Data System (ADS)

    Bolívar, J. C.; Nagy, Ágnes; Romera, Elvira

    2018-05-01

    The combined Rényi-Fisher entropy product of electrons plus holes displays a minimum at the charge neutrality points. The Stam-Rényi difference and the Stam-Rényi uncertainty product of the electrons plus holes, show maxima at the charge neutrality points. Topological quantum numbers capable of detecting the topological insulator and the band insulator phases, are defined. Upper and lower bounds for the position and momentum space Rényi-Fisher entropy products are derived.

  17. Stabilisation of perturbed chains of integrators using Lyapunov-based homogeneous controllers

    NASA Astrophysics Data System (ADS)

    Harmouche, Mohamed; Laghrouche, Salah; Chitour, Yacine; Hamerlain, Mustapha

    2017-12-01

    In this paper, we present a Lyapunov-based homogeneous controller for the stabilisation of a perturbed chain of integrators of arbitrary order r ≥ 1. The proposed controller is based on a homogeneous controller for the stabilisation of a pure chain of integrators. Control of the homogeneity degree is also introduced, and various controllers are designed using this concept, namely a bounded controller with minimum amplitude of discontinuous control and a controller with globally fixed-time convergence. The performance of the controllers is validated through simulations.

  18. Soldering to a single atomic layer

    NASA Astrophysics Data System (ADS)

    Girit, Çağlar Ö.; Zettl, A.

    2007-11-01

    The standard technique to make electrical contact to nanostructures is electron beam lithography. This method has several drawbacks including complexity, cost, and sample contamination. We present a simple technique to cleanly solder submicron sized, Ohmic contacts to nanostructures. To demonstrate, we contact graphene, a single atomic layer of carbon, and investigate low- and high-bias electronic transport. We set lower bounds on the current carrying capacity of graphene. A simple model allows us to obtain device characteristics such as mobility, minimum conductance, and contact resistance.

  19. Soldering to a single atomic layer

    NASA Astrophysics Data System (ADS)

    Girit, Caglar; Zettl, Alex

    2008-03-01

    The standard technique to make electrical contact to nanostructures is electron beam lithography. This method has several drawbacks including complexity, cost, and sample contamination. We present a simple technique to cleanly solder submicron sized, Ohmic contacts to nanostructures. To demonstrate, we contact graphene, a single atomic layer of carbon, and investigate low- and high-bias electronic transport. We set lower bounds on the current carrying capacity of graphene. A simple model allows us to obtain device characteristics such as mobility, minimum conductance, and contact resistance.

  20. Elevation-relief ratio, hypsometric integral, and geomorphic area-altitude analysis.

    NASA Technical Reports Server (NTRS)

    Pike, R. J.; Wilson, S. E.

    1971-01-01

    Mathematical proof establishes identity of hypsometric integral and elevation-relief ratio, two quantitative topographic descriptors developed independently of one another for entirely different purposes. Operationally, values of both measures are in excellent agreement for arbitrarily bounded topographic samples, as well as for low-order fluvial watersheds. By using a point-sampling technique rather than planimetry, elevation-relief ratio (defined as mean elevation minus minimum elevation divided by relief) is calculated manually in about a third of the time required for the hypsometric integral.
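
    The identity is easy to check numerically on a point sample (our illustration, not the paper's data): the elevation-relief ratio E = (mean - min)/(max - min) agrees with the hypsometric integral computed from the same elevations.

      import numpy as np

      rng = np.random.default_rng(0)
      z = rng.gamma(shape=2.0, scale=300.0, size=2000)   # synthetic elevations (m)

      E = (z.mean() - z.min()) / (z.max() - z.min())     # elevation-relief ratio

      # Hypsometric integral: integrate the relative area a(h) lying at or
      # above relative height h, for h in [0, 1].
      h = np.linspace(0.0, 1.0, 201)
      levels = z.min() + h * (z.max() - z.min())
      a = [(z >= L).mean() for L in levels]
      HI = np.trapz(a, h)

      print(f"elevation-relief ratio = {E:.4f}, hypsometric integral = {HI:.4f}")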

  1. Open Smart Energy Gateway (OpenSEG)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Open Smart Energy Gateway (OpenSEG) aims to provide near-real time smart meter data to consumers without the delays or latencies associated with it being transported to the utility data center and then back to the consumer's application. To do this, the gateway queries the local Smart Meter to which it is bound to get energy consumption information at pre-defined intervals (minimum interval is 4 seconds). OpenSEG then stores the resulting data internally for retrieval by an external application.

  2. “Stringy” coherent states inspired by generalized uncertainty principle

    NASA Astrophysics Data System (ADS)

    Ghosh, Subir; Roy, Pinaki

    2012-05-01

    Coherent States with Fractional Revival property, that explicitly satisfy the Generalized Uncertainty Principle (GUP), have been constructed in the context of Generalized Harmonic Oscillator. The existence of such states is essential in motivating the GUP based phenomenological results present in the literature which otherwise would be of purely academic interest. The effective phase space is Non-Canonical (or Non-Commutative in popular terminology). Our results have a smooth commutative limit, equivalent to Heisenberg Uncertainty Principle. The Fractional Revival time analysis yields an independent bound on the GUP parameter. Using this and similar bounds obtained here, we derive the largest possible value of the (GUP induced) minimum length scale. Mandel parameter analysis shows that the statistics is Sub-Poissonian. Correspondence Principle is deformed in an interesting way. Our computational scheme is very simple as it requires only first order corrected energy values and undeformed basis states.

  3. A first-principles study of methyl lactate adsorption on the chiral Cu (643) surface

    NASA Astrophysics Data System (ADS)

    Yuk, Simuck F.; Asthagiri, Aravind

    2014-11-01

    We used dispersion-corrected density functional theory (DFT) to investigate the enantiospecific adsorption of R- and S-methyl lactate on the chiral Cu (643)R surface. An initial study of methyl lactate adsorbed on the Cu (111) surface revealed that the most strongly bound states are associated with interaction of the hydroxyl and alkoxide group with the surface. Using dispersion-corrected DFT-derived pre-factors and desorption energies within the Redhead analysis predicts peak temperatures that are in relatively good agreement with experimental values for molecular methyl lactate desorption from both the Cu (111) and Cu (643)R surfaces. The global minimum of S-methyl lactate is more strongly bound, by 9.5 kJ/mol, than its enantiomer on the Cu (643)R surface, with a peak temperature difference of 25 K versus an experimental value of 12 K.

  4. Experimental Potential Energy Curve for the 43 Π Electronic State of NaCs

    NASA Astrophysics Data System (ADS)

    Steely, Andrew; Cooper, Hannah; Zain, Hareem; Whipp, Ciara; Faust, Carl; Kortyna, Andrew; Huennekens, John

    2017-04-01

    We present results from experimental studies of the 43 Π electronic state of the NaCs molecule. This electronic state is interesting in that its potential energy curve likely exhibits a double minimum. As a result, interference effects are observed in the resolved bound-free fluorescence spectra. The optical-optical double resonance method was used to obtain Doppler-free excitation spectra for the 43 Π state. This dataset of measured level energies was expanded largely by observing fluorescence from levels populated by collisions. To aid in level assignments, simulations of resolved bound-free fluorescence spectra were calculated using the BCONT program (R. J. Le Roy, University of Waterloo). Spectroscopic constants were determined to summarize data belonging to inner well, outer well, and above barrier regions of the electronic state. Current work focuses on using the IPA method to construct an experimental potential energy curve. Work supported by NSF and Susquehanna University.

  5. Data parallel sorting for particle simulation

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1992-01-01

    Sorting on a parallel architecture is a communications intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O(N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
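
    The O(N) sequential bound rests on the fact that small integer keys (cell indices) can be bucketed rather than compared. A serial counting-sort sketch of that idea (our illustration; the paper's contribution is the data parallel merging algorithm, not this):

      import numpy as np

      def sort_particles_by_cell(cell_ids, num_cells):
          """Return a permutation placing particles in cell order, O(N + num_cells)."""
          counts = np.zeros(num_cells + 1, dtype=np.int64)
          for c in cell_ids:                 # histogram of cell occupancy
              counts[c + 1] += 1
          starts = np.cumsum(counts)         # prefix sums: first slot of each cell
          order = np.empty(len(cell_ids), dtype=np.int64)
          offsets = starts[:-1].copy()
          for i, c in enumerate(cell_ids):   # scatter each particle to its slot
              order[offsets[c]] = i
              offsets[c] += 1
          return order

      ids = np.array([3, 1, 0, 3, 2, 1, 0])
      print(sort_particles_by_cell(ids, 4))  # stable: [2, 6, 1, 5, 4, 0, 3]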

  6. All the adiabatic bound states of NO{sub 2}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salzgeber, R.F.; Mandelshtam, V.; Schlier, C.

    1998-07-01

    We calculated all 2967 even and odd bound states of the adiabatic ground state of NO2, using a modification of the ab initio potential energy surface of Leonardi et al. [J. Chem. Phys. 105, 9051 (1996)]. The calculation was performed by harmonic inversion of the Chebyshev correlation function generated by a DVR Hamiltonian in Radau coordinates. The relative error for the computed eigenenergies (measured from the potential minimum) is 10^-4 or better, corresponding to an absolute error of less than about 2.5 cm^-1. Near the dissociation threshold the average density of states is about 0.2/cm^-1 for each symmetry. Statistical analysis of the states shows some interesting structure of the rigidity parameter Δ3 as a function of energy. © 1998 American Institute of Physics.

  7. Constructions for finite-state codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.

    1987-01-01

    A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a d_free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.

  8. Tectonic evolution of the outer Izu-Bonin-Mariana fore arc system: initial results from IODP Expedition 352

    NASA Astrophysics Data System (ADS)

    Kurz, W.; Ferre, E. C.; Robertson, A. H. F.; Avery, A. J.; Kutterolf, S.

    2015-12-01

    During International Ocean Discovery Program (IODP) Expedition 352, a section through the volcanic stratigraphy of the outer fore arc of the Izu-Bonin-Mariana (IBM) system was drilled to trace magmatism, tectonics, and crustal accretion associated with subduction initiation. Structures within drill cores, borehole and site survey seismic data indicate that tectonic deformation in the outer IBM fore arc is mainly post-magmatic. Extension generated asymmetric sediment basins such as half-grabens at sites 352-U1439 and 352-U1442 on the upper trench slope. Along their eastern margins the basins are bounded by west-dipping normal faults. Deformation was localized along multiple sets of faults, accompanied by syn-tectonic pelagic and volcaniclastic sedimentation. The lowermost sedimentary units were tilted eastward by ~20°. Tilted beds were covered by sub-horizontal beds. Biostratigraphic constraints reveal a minimum age of the oldest sediments at ~35 Ma; the timing of the sedimentary unconformities is between ~27 and 32 Ma. At sites 352-U1440 and 352-U1441 on the outer fore arc, strike-slip faults bound the sediment basins. Sediments were not significantly affected by tectonic tilting. Biostratigraphy gives a minimum age of the basement-cover contact between ~29.5 and 32 Ma. The post-magmatic structures reveal a multiphase tectonic evolution of the outer IBM fore arc. At sites 352-U1439 and 352-U1442, shear with dominant reverse to oblique-reverse displacement was localized along subhorizontal fault zones, steep slickensides and shear fractures. These were either re-activated as, or cut by, normal faults and strike-slip faults. Extension was also accommodated by steep to subvertical mineralized veins and extensional fractures. Faults at sites 352-U1440 and 352-U1441 show mainly strike-slip kinematics. Sediments overlying the igneous basement (maximum Late Eocene to Recent age) document ash and aeolian input, together with mass wasting of the fault-bounded sediment ponds.

  9. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.

    2010-08-10

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
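
    The two-step recipe reads concretely in the simplest Poisson setting (a hedged sketch with invented numbers, not the paper's general treatment): fix the threshold from the Type I error alpha under the background alone, then scan for the smallest source intensity detected with probability 1 - beta.

      from scipy.stats import poisson

      def threshold_and_limit(background, alpha=0.05, beta=0.5):
          # Step 1: threshold n* = smallest n with P(N >= n | b) <= alpha.
          n_star = 0
          while poisson.sf(n_star - 1, background) > alpha:   # sf(k) = P(N > k)
              n_star += 1
          # Step 2: upper limit = smallest s with P(N >= n* | b + s) >= 1 - beta.
          s = 0.0
          while poisson.sf(n_star - 1, background + s) < 1 - beta:
              s += 0.01
          return n_star, s

      n_star, s_min = threshold_and_limit(background=3.0)
      print(f"threshold = {n_star} counts, upper limit = {s_min:.2f} counts")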

  10. VizieR Online Data Catalog: AGNs in submm-selected Lockman Hole galaxies (Serjeant+, 2010)

    NASA Astrophysics Data System (ADS)

    Serjeant, S.; Negrello, M.; Pearson, C.; Mortier, A.; Austermann, J.; Aretxaga, I.; Clements, D.; Chapman, S.; Dye, S.; Dunlop, J.; Dunne, L.; Farrah, D.; Hughes, D.; Lee, H. M.; Matsuhara, H.; Ibar, E.; Im, M.; Jeong, W.-S.; Kim, S.; Oyabu, S.; Takagi, T.; Wada, T.; Wilson, G.; Vaccari, M.; Yun, M.

    2013-11-01

    We present a comparison of the SCUBA half degree extragalactic survey (SHADES) at 450μm, 850μm and 1100μm with deep guaranteed time 15μm AKARI FU-HYU survey data and Spitzer guaranteed time data at 3.6-24μm in the Lockman hole east. The AKARI data was analysed using bespoke software based in part on the drizzling and minimum-variance matched filtering developed for SHADES, and was cross-calibrated against ISO fluxes. (2 data files).

  11. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
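
    The gamma-prior case is easy to reproduce in outline (an assumed setup, not the paper's exact Monte Carlo design): for X_1, ..., X_n ~ Poisson(lambda) with lambda ~ Gamma(a, b) in the rate parameterization, the posterior is Gamma(a + sum x_i, b + n) and the Bayes estimator is its mean.

      import numpy as np

      rng = np.random.default_rng(1)
      lam_true, n, a, b = 2.0, 10, 2.0, 1.0
      reps = 20000

      mse_mle, mse_bayes = 0.0, 0.0
      for _ in range(reps):
          x = rng.poisson(lam_true, size=n)
          mle = x.mean()                       # also the minimum variance unbiased estimator
          bayes = (a + x.sum()) / (b + n)      # posterior mean under Gamma(a, b) prior
          mse_mle += (mle - lam_true) ** 2
          mse_bayes += (bayes - lam_true) ** 2

      print(f"MSE  MLE/MVUE: {mse_mle / reps:.4f}   Bayes: {mse_bayes / reps:.4f}")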

  12. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
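
    The bias/variance bookkeeping used in the study can be sketched generically (a stand-in shrinkage "reconstruction" below, not the authors' diffusion-model solver): repeat the reconstruction over noise realizations, then average squared bias and variance over the image.

      import numpy as np

      rng = np.random.default_rng(2)
      truth = np.zeros((16, 16)); truth[6:10, 6:10] = 1.0   # toy test image

      def reconstruct(noisy, reg):
          """Stand-in regularized 'reconstruction': shrink toward zero."""
          return noisy / (1.0 + reg)

      for reg in (0.0, 0.5, 2.0):
          recs = np.stack([reconstruct(truth + 0.3 * rng.standard_normal(truth.shape), reg)
                           for _ in range(100)])
          bias2 = ((recs.mean(axis=0) - truth) ** 2).mean()   # squared image bias
          var = recs.var(axis=0).mean()                       # image variance
          print(f"reg={reg:3.1f}  bias^2={bias2:.4f}  var={var:.4f}  MSE={bias2 + var:.4f}")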

  13. Architectural elements and bounding surfaces in fluvial deposits: anatomy of the Kayenta formation (lower jurassic), Southwest Colorado

    NASA Astrophysics Data System (ADS)

    Miall, Andrew D.

    1988-03-01

    Three well-exposed outcrops in the Kayenta Formation (Lower Jurassic), near Dove Creek in southwestern Colorado, were studied using lateral profiles, in order to test recent ideas regarding architectural-element analysis and the classification and interpretation of internal bounding surfaces. Examination of bounding surfaces within and between elements in the Kayenta outcrops raises problems in applying the three-fold classification of Allen (1983). Enlarging this classification to a six-fold hierarchy permits the discrimination of surfaces intermediate between Allen's second- and third-order types, corresponding to the upper bounding surfaces of macroforms, and internal erosional "reactivation" surfaces within the macroforms. Examples of the first five types of surface occur in the Kayenta outcrops at Dove Creek. The new classification is offered as a general solution to the problem of description of complex, three-dimensional fluvial sandstone bodies. The Kayenta Formation at Dove Creek consists of a multistorey sandstone body, including the deposits of lateral- and downstream-accreted macroforms. The storeys show no internal cyclicity, neither within individual elements nor through the overall vertical thickness of the formation. Low paleocurrent variance indicates low-sinuosity flow, whereas macroform geometry and orientation suggest low to moderate sinuosity. The many internal minor erosion surfaces draped with mud and followed by intraclast breccias imply frequent rapid stage fluctuation, consistent with variable (seasonal? monsoonal? ephemeral?) flow. The results suggest a fluvial architecture similar to that of the South Saskatchewan River, though with a three-dimensional geometry unlike that interpreted from surface studies of that river.

  14. Characteristic and Source of Atmospheric PM10- and PM2.5-bound PAHs in a Typical Metallurgic City Near Yangtze River in China.

    PubMed

    Zhang, Hong; Wang, Ruwei; Xue, Huaqin; Hu, Ruoyu; Liu, Guijian

    2018-02-01

    The characteristics of atmospheric PM10- and PM2.5-bound polycyclic aromatic hydrocarbons (PAHs) were investigated in Tongling city, China. Results showed that the total concentrations of PM10- and PM2.5-bound PAHs exhibited distinct seasonal and spatial variability. The metallurgic sites showed the highest PAH concentrations, mainly attributed to metallurgic activities (mainly copper ore smelting) and the combustion of coal as the smelting fuel. The rural area showed the lowest concentrations, but exhibited a significant increase from summer to autumn. This seasonal fluctuation is mainly caused by biomass burning at the sites in the harvest season. The diagnostic ratios indicated that the main PAH sources were vehicle exhaust, coal combustion and biomass burning. The total BaP equivalent concentration (BaP-TEQ) was found to be maximum at the DGS site in winter and minimum at the BGC site in summer. Risk assessment indicates that residential exposure to PAHs in the industrial area, especially in the winter season, may pose a greater inhalation cancer risk than in the residential and rural areas.

  15. All the nonadiabatic (J=0) bound states of NO{sub 2}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salzgeber, R.F.; Mandelshtam, V.A.; Schlier, C.

    1999-02-01

    We calculated all 3170 A1 and B2 (J=0) vibronic bound states of the coupled electronic ground (X̃ ²A₁) and first excited (Ã ²B₂) surfaces of NO2, using a modification of the ab initio potentials of Leonardi et al. [J. Chem. Phys. 105, 9051 (1996)]. The calculation was performed by harmonic inversion of the Chebyshev correlation function generated from a DVR Hamiltonian in Radau coordinates. The rms error of the eigenenergies is about 2.5 cm^-1, corresponding to a relative error of 10^-4 near the dissociation energy. The results are compared with the adiabatic and diabatic levels calculated from the same surfaces, with experimental data, and with some approximations for the number-of-states function N(E). The experimental levels are reproduced fairly well up to an energy of 12,000 cm^-1 above the potential minimum, while the total number of bound levels agrees to within 2% with that calculated from the phase space volume. © 1999 American Institute of Physics.

  16. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    NASA Astrophysics Data System (ADS)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying the mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-second (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km^2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.

  17. Law of the Minimum paradoxes.

    PubMed

    Gorban, Alexander N; Pokidysheva, Lyudmila I; Smirnova, Elena V; Tyukina, Tatiana A

    2011-09-01

    The "Law of the Minimum" states that growth is controlled by the scarcest resource (limiting factor). This concept was originally applied to plant or crop growth (Justus von Liebig, 1840, Salisbury, Plant physiology, 4th edn., Wadsworth, Belmont, 1992) and quantitatively supported by many experiments. Some generalizations based on more complicated "dose-response" curves were proposed. Violations of this law in natural and experimental ecosystems were also reported. We study models of adaptation in ensembles of similar organisms under load of environmental factors and prove that violation of Liebig's law follows from adaptation effects. If the fitness of an organism in a fixed environment satisfies the Law of the Minimum then adaptation equalizes the pressure of essential factors and, therefore, acts against the Liebig's law. This is the the Law of the Minimum paradox: if for a randomly chosen pair "organism-environment" the Law of the Minimum typically holds, then in a well-adapted system, we have to expect violations of this law.For the opposite interaction of factors (a synergistic system of factors which amplify each other), adaptation leads from factor equivalence to limitations by a smaller number of factors.For analysis of adaptation, we develop a system of models based on Selye's idea of the universal adaptation resource (adaptation energy). These models predict that under the load of an environmental factor a population separates into two groups (phases): a less correlated, well adapted group and a highly correlated group with a larger variance of attributes, which experiences problems with adaptation. Some empirical data are presented and evidences of interdisciplinary applications to econometrics are discussed. © Society for Mathematical Biology 2010

  18. Target intersection probabilities for parallel-line and continuous-grid types of search

    USGS Publications Warehouse

    McCammon, R.B.

    1977-01-01

    The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and along a continuous square grid. On this basis, an upper and lower bound for the probability of intersection of an elliptically shaped target for a continuous rectangular grid can be calculated. Charts have been constructed that permit the values for these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets. © 1977 Plenum Publishing Corp.
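
    Generalization (1) can be checked by simulation for a thin elongate target on parallel lines (our sketch; for a line target of length L <= s the classical Buffon value is 2L/(pi s)):

      import numpy as np

      rng = np.random.default_rng(3)
      s = 1.0                                      # line spacing
      for L in (0.2, 0.5, 0.8):
          x = rng.uniform(0.0, s, 200000)          # target center between two lines
          theta = rng.uniform(0.0, np.pi, 200000)  # unknown target orientation
          half = 0.5 * L * np.abs(np.cos(theta))   # half-extent across the lines
          hit = (x < half) | (x > s - half)
          print(f"L/s={L/s:.1f}  P(hit)={hit.mean():.4f}  2L/(pi s)={2*L/(np.pi*s):.4f}")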

  19. MOnthly TEmperature DAtabase of Spain 1951-2010: MOTEDAS (2): The Correlation Decay Distance (CDD) and the spatial variability of maximum and minimum monthly temperature in Spain during (1981-2010).

    NASA Astrophysics Data System (ADS)

    Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos

    2014-05-01

    One of the key points in the development of the MOTEDAS dataset (see Poster 1 MOTEDAS) in the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP), using the Correlation Decay Distance function (CDD), with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a correlation matrix at monthly scale for 1981-2010 among monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the AEMET archives (National Spanish Meteorological Agency). Monthly anomalies (differences between data and the 1981-2010 mean) were used to prevent the dominant effect of the annual cycle in the annual CDD estimation. For each station and time scale, the common variance r^2 (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r^2 and distance was modelled according to the following equation (1): log(r^2_ij) = b * d_ij (1), where log(r^2_ij) is the common variance between the target (i) and neighbouring (j) series, d_ij is the distance between them, and b is the slope of the ordinary least-squares linear regression model, fitted taking into account only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using Ordinary Kriging with a spherical variogram over the conterminous land of Spain, and converted onto a regular 10 km^2 grid (resolution similar to the mean distance between stations) to map the results. In the conterminous land of Spain the distance at which pairs of stations have a common variance in temperature (both maximum, Tmax, and minimum, Tmin) above the selected threshold (50%, Pearson r ~0.70) on average does not exceed 400 km, with relevant spatial and temporal differences. The spatial distribution of the CDD shows a clear coastland-to-inland gradient at annual, seasonal and monthly scales, with the highest spatial variability along the coastal areas and lower variability inland. The highest spatial variability coincides particularly with coastal areas surrounded by mountain chains, suggesting that orography is one of the main driving factors behind higher interstation variability. Moreover, there are some differences between the behaviour of Tmax and Tmin: Tmin is spatially more homogeneous than Tmax, but its lower CDD values indicate that night-time temperature is more variable than the diurnal one. The results suggest that, in general, local factors affect the spatial variability of monthly Tmin more than that of Tmax, so a higher network density would be necessary to capture the higher spatial variability of Tmin with respect to Tmax. A conservative distance for reference series could be evaluated at 200 km, which we propose for the continental land of Spain and use in the development of MOTEDAS.
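
    A sketch of fitting equation (1) and reading off the CDD at the 50% common-variance threshold (synthetic numbers, not the AEMET data):

      import numpy as np

      rng = np.random.default_rng(4)
      d = rng.uniform(5.0, 400.0, 200)                  # pairwise distances (km)
      r2 = np.exp(-d / 350.0) * np.exp(0.1 * rng.standard_normal(200))  # synthetic

      b = np.sum(d * np.log(r2)) / np.sum(d * d)        # OLS slope through origin
      cdd_50 = np.log(0.50) / b                         # distance where r2 = 0.5
      print(f"slope b = {b:.5f} per km, CDD(r2=0.5) = {cdd_50:.0f} km")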

  20. Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.

    PubMed

    Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano

    2008-07-01

    Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion also along time, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.
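
    The closed-form flavor of the variance-minimizing allocation can be illustrated with deliberately simplified linear rate-distortion models D_i(R_i) = a_i - b_i*R_i (not the paper's rho-domain model): zero distortion variance means all D_i equal a common D*, and the budget constraint fixes D* in closed form.

      import numpy as np

      a = np.array([40.0, 36.0, 44.0])   # distortion at zero rate (illustrative)
      b = np.array([4.0, 3.0, 5.0])      # distortion reduction per bit
      R = 9.0                            # total bit budget

      # D_i = a_i - b_i*R_i = D*  =>  R_i = (a_i - D*)/b_i; sum R_i = R fixes D*.
      D_star = (np.sum(a / b) - R) / np.sum(1.0 / b)   # common distortion level
      R_i = (a - D_star) / b                           # per-sequence rates
      print(f"D* = {D_star:.3f}, rates = {np.round(R_i, 3)}, sum = {R_i.sum():.3f}")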

  1. Geographic variability in the association between socioeconomic status and BMI in the USA and Canada.

    PubMed

    Lebel, Alexandre; Kestens, Yan; Clary, Christelle; Bisset, Sherri; Subramanian, S V

    2014-01-01

    Reported associations between socioeconomic status (SES) and obesity are inconsistent depending on gender and geographic location. Globally, these inconsistent observations may hide a variation in the contextual effect on individuals' risk of obesity for subgroups of the population. This study explored the regional variability in the association between SES and BMI in the USA and in Canada, and described the geographical variance patterns by SES category. The 2009-2010 samples of the Behavioral Risk Factor Surveillance System (BRFSS) and the Canadian Community Health Survey (CCHS) were used for this comparison study. Three-level random-intercept and differential-variance multilevel models were built separately for women and men to assess region-specific BMI by SES category and their variance bounds. Associations between individual SES and BMI differed importantly by gender and country. At the regional level, the mean BMI variation was significantly different between SES categories in the USA, but not in Canada. In the USA, whereas the county-specific mean BMI of higher-SES individuals remained close to the mean, its variation grew as SES decreased. At the county level, variation of mean BMI around the regional mean was 5 kg/m2 in the high-SES group, and reached 8.8 kg/m2 in the low-SES group. This study underlines how BMI varies by country, region, gender and SES. Lower socioeconomic groups within some regions show a much higher variation in BMI than in other regions. Above the BMI regional mean, important variation patterns of BMI by SES and place of residence were found in the USA. No such pattern was found in Canada. This study suggests that a change in the mean does not necessarily reflect a change in the variance. Analyzing the variance by SES may be a good way to detect subtle influences of social forces underlying social inequalities.

  2. Designing a compact high performance brain PET scanner—simulation study

    NASA Astrophysics Data System (ADS)

    Gong, Kuang; Majewski, Stan; Kinahan, Paul E.; Harrison, Robert L.; Elston, Brian F.; Manjeshwar, Ravindra; Dolinsky, Sergei; Stolin, Alexander V.; Brefczynski-Lewis, Julie A.; Qi, Jinyi

    2016-05-01

    The desire to understand normal and disordered human brain function of upright, moving persons in natural environments motivates the development of the ambulatory micro-dose brain PET imager (AMPET). An ideal system would be lightweight but with high sensitivity and spatial resolution, although these requirements are often in conflict with each other. One potential approach to meet the design goals is a compact brain-only imaging device with a head-sized aperture. However, a compact geometry increases parallax error in peripheral lines of response, which increases bias and variance in region of interest (ROI) quantification. Therefore, we performed simulation studies to search for the optimal system configuration and to evaluate the potential improvement in quantification performance over existing scanners. We used the Cramér-Rao variance bound to compare the performance for ROI quantification using different scanner geometries. The results show that while a smaller ring diameter can increase photon detection sensitivity and hence reduce the variance at the center of the field of view, it can also result in higher variance in peripheral regions when the length of detector crystal is 15 mm or more. This variance can be substantially reduced by adding depth-of-interaction (DOI) measurement capability to the detector modules. Our simulation study also shows that the relative performance depends on the size of the ROI, and a large ROI favors a compact geometry even without DOI information. Based on these results, we propose a compact ‘helmet’ design using detectors with DOI capability. Monte Carlo simulations show the helmet design can achieve four-fold higher sensitivity and resolve smaller features than existing cylindrical brain PET scanners. The simulations also suggest that improving TOF timing resolution from 400 ps to 200 ps also results in noticeable improvement in image quality, indicating better timing resolution is desirable for brain imaging.
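
    A toy version of the Cramér-Rao comparison (our sketch, with assumed Poisson bin statistics rather than the full PET system model): for bins y_i ~ Poisson(a_i*lam + b_i), the Fisher information for the ROI uptake lam is sum a_i^2/(a_i*lam + b_i), and no unbiased estimator can have variance below its inverse. Higher sensitivity (e.g., a smaller ring) raises the a_i and thus the information.

      import numpy as np

      rng = np.random.default_rng(5)
      lam = 4.0                                   # true ROI activity (arbitrary units)
      a = rng.uniform(0.5, 2.0, 50)               # sensitivities along lines of response
      bkg = rng.uniform(0.1, 1.0, 50)             # background (scatter/randoms) means

      fisher = np.sum(a ** 2 / (a * lam + bkg))   # Fisher information for lam
      crb = 1.0 / fisher                          # Cramér-Rao lower bound on variance
      print(f"CRB on var(lam_hat) = {crb:.5f}")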

  3. Designing a compact high performance brain PET scanner—simulation study

    PubMed Central

    Gong, Kuang; Majewski, Stan; Kinahan, Paul E; Harrison, Robert L; Elston, Brian F; Manjeshwar, Ravindra; Dolinsky, Sergei; Stolin, Alexander V; Brefczynski-Lewis, Julie A; Qi, Jinyi

    2016-01-01

    The desire to understand normal and disordered human brain function of upright, moving persons in natural environments motivates the development of the ambulatory micro-dose brain PET imager (AMPET). An ideal system would be lightweight but with high sensitivity and spatial resolution, although these requirements are often in conflict with each other. One potential approach to meet the design goals is a compact brain-only imaging device with a head-sized aperture. However, a compact geometry increases parallax error in peripheral lines of response, which increases bias and variance in region of interest (ROI) quantification. Therefore, we performed simulation studies to search for the optimal system configuration and to evaluate the potential improvement in quantification performance over existing scanners. We used the Cramér–Rao variance bound to compare the performance for ROI quantification using different scanner geometries. The results show that while a smaller ring diameter can increase photon detection sensitivity and hence reduce the variance at the center of the field of view, it can also result in higher variance in peripheral regions when the length of detector crystal is 15 mm or more. This variance can be substantially reduced by adding depth-of-interaction (DOI) measurement capability to the detector modules. Our simulation study also shows that the relative performance depends on the size of the ROI, and a large ROI favors a compact geometry even without DOI information. Based on these results, we propose a compact ‘helmet’ design using detectors with DOI capability. Monte Carlo simulations show the helmet design can achieve four-fold higher sensitivity and resolve smaller features than existing cylindrical brain PET scanners. The simulations also suggest that improving TOF timing resolution from 400 ps to 200 ps also results in noticeable improvement in image quality, indicating better timing resolution is desirable for brain imaging. PMID:27081753

  4. Case Study of the Minimum Provable Risk Considering the Variation in Background Risk: Effect of Residual Risk on Epidemiological Studies and a Comparative Assessment of Fatal Disease Risk Due to Radiation Exposure.

    PubMed

    Sasaki, Michiya; Ogino, Haruyuki; Hattori, Takatoshi

    2018-06-08

    In order to prove a small increment in a risk of concern in an epidemiological study, a large sample of a population is generally required. Since the background risk of an end point of interest, such as cancer mortality, is affected by various factors, such as lifestyle (diet, smoking, etc.), adjustment for such factors is necessary. However, it is impossible to inclusively and completely adjust for such factors; therefore, uncertainty in the background risk remains for the control and exposed populations, indicating that there is a minimum limit to the lower bound for the provable risk regardless of the sample size. In this case study, we developed and discussed the minimum provable risk considering the uncertainty in background risk for hypothetical populations, referring to recent Japanese statistical information to grasp the extent of the minimum provable risk. The risk of fatal diseases due to radiation exposure, which has recently been the focus of radiological protection, was also examined by comparative assessment of the minimum provable risk for cancer and circulatory diseases. It was estimated that the minimum provable risk for circulatory disease mortality was much greater than that for cancer mortality, approximately five to seven times larger; circulatory disease mortality is more difficult to prove as a radiation risk than cancer mortality under the conditions used in this case study. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND).

  5. Null-space and statistical significance of first-arrival traveltime inversion

    NASA Astrophysics Data System (ADS)

    Morozov, Igor B.

    2004-03-01

    The strong uncertainty inherent in the traveltime inversion of first arrivals from surface sources is usually removed by using a priori constraints or regularization. This leads to the null-space (data-independent model variability) being inadequately sampled, and consequently, model uncertainties may be underestimated in traditional (such as checkerboard) resolution tests. To measure the full null-space model uncertainties, we use unconstrained Monte Carlo inversion and examine the statistics of the resulting model ensembles. In an application to 1-D first-arrival traveltime inversion, the τ-p method is used to build a set of models that are equivalent to the IASP91 model within small, ~0.02 per cent, time deviations. The resulting velocity variances are much larger, ~2-3 per cent within the regions above the mantle discontinuities, and are interpreted as being due to the null-space. Depth-variant depth averaging is required for constraining the velocities within meaningful bounds, and the averaging scalelength could also be used as a measure of depth resolution. Velocity variances show structure-dependent, negative correlation with the depth-averaging scalelength. Neither the smoothest (Herglotz-Wiechert) nor the mean velocity-depth functions reproduce the discontinuities in the IASP91 model; however, the discontinuities can be identified by the increased null-space velocity (co-)variances. Although derived for a 1-D case, the above conclusions also relate to higher dimensions.

  6. Comparison of different strategies for using fossil calibrations to generate the time prior in Bayesian molecular clock dating.

    PubMed

    Barba-Montoya, Jose; Dos Reis, Mario; Yang, Ziheng

    2017-09-01

    Fossil calibrations are the utmost source of information for resolving the distances between molecular sequences into estimates of absolute times and absolute rates in molecular clock dating analysis. The quality of calibrations is thus expected to have a major impact on divergence time estimates even if a huge amount of molecular data is available. In Bayesian molecular clock dating, fossil calibration information is incorporated in the analysis through the prior on divergence times (the time prior). Here, we evaluate three strategies for converting fossil calibrations (in the form of minimum- and maximum-age bounds) into the prior on times, which differ according to whether they borrow information from the maximum age of ancestral nodes and minimum age of descendent nodes to form constraints for any given node on the phylogeny. We study a simple example that is analytically tractable, and analyze two real datasets (one of 10 primate species and another of 48 seed plant species) using three Bayesian dating programs: MCMCTree, MrBayes and BEAST2. We examine how different calibration strategies, the birth-death process, and automatic truncation (to enforce the constraint that ancestral nodes are older than descendent nodes) interact to determine the time prior. In general, truncation has a great impact on calibrations so that the effective priors on the calibration node ages after the truncation can be very different from the user-specified calibration densities. The different strategies for generating the effective prior also had considerable impact, leading to very different marginal effective priors. Arbitrary parameters used to implement minimum-bound calibrations were found to have a strong impact upon the prior and posterior of the divergence times. Our results highlight the importance of inspecting the joint time prior used by the dating program before any Bayesian dating analysis. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Chaos control of the brushless direct current motor using adaptive dynamic surface control based on neural network with the minimum weights.

    PubMed

    Luo, Shaohua; Wu, Songli; Gao, Ruizhen

    2015-07-01

    This paper investigates chaos control for the brushless DC motor (BLDCM) system by adaptive dynamic surface approach based on neural network with the minimum weights. The BLDCM system contains parameter perturbation, chaotic behavior, and uncertainty. With the help of radial basis function (RBF) neural network to approximate the unknown nonlinear functions, the adaptive law is established to overcome uncertainty of the control gain. By introducing the RBF neural network and adaptive technology into the dynamic surface control design, a robust chaos control scheme is developed. It is proved that the proposed control approach can guarantee that all signals in the closed-loop system are globally uniformly bounded, and the tracking error converges to a small neighborhood of the origin. Simulation results are provided to show that the proposed approach works well in suppressing chaos and parameter perturbation.

  8. Adaptive Control Using Residual Mode Filters Applied to Wind Turbines

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Balas, Mark J.

    2011-01-01

    Many dynamic systems containing a large number of modes can benefit from adaptive control techniques, which are well suited to applications that have unknown parameters and poorly known operating conditions. In this paper, we focus on a model reference direct adaptive control approach that has been extended to handle adaptive rejection of persistent disturbances. We extend this adaptive control theory to accommodate problematic modal subsystems of a plant that inhibit the adaptive controller by causing the open-loop plant to be non-minimum phase. We will augment the adaptive controller using a Residual Mode Filter (RMF) to compensate for problematic modal subsystems, thereby allowing the system to satisfy the requirements for the adaptive controller to have guaranteed convergence and bounded gains. We apply these theoretical results to design an adaptive collective pitch controller for a high-fidelity simulation of a utility-scale, variable-speed wind turbine that has minimum phase zeros.

  9. Radiocarbon dating of open systems with bomb effect

    NASA Technical Reports Server (NTRS)

    Mckay, C. P.; Long, A.; Friedmann, E. I.

    1986-01-01

    The application of radiocarbon dating is extended to include systems that are slowly exchanging carbon with the atmosphere. Simple formulae are derived that relate the true age and the exchange rate of carbon to the apparent radiocarbon age. A radiocarbon age determination does not give a unique true age and exchange rate but determines a locus of values bounded by a minimum age and a minimum exchange rate. It is found that for radiocarbon ages as large as 10,000 years it is necessary to correct for the anthropogenic radiocarbon produced in the atmosphere by nuclear weapons testing. A one-term exponential approximation, with an e-folding time of 14.43 years, is used to model this effect and is shown to be accurate to within 3 percent for exchange time constants of 100 years and greater. The approach developed here is not specific to radiocarbon and can be applied to other radioisotopes in open systems.
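
    An illustrative numerical version of such an open-system model (our simplification with an assumed constant atmosphere, not the paper's formulae): let the specific activity A relax toward the atmospheric value at exchange rate k while decaying at lambda, and compare the closed-system (apparent) age with the true age.

      import numpy as np

      lam = np.log(2.0) / 5730.0                  # 14C decay constant (1/yr)

      def apparent_age(true_age, k, a0=1.0, dt=1.0):
          """Integrate dA/dt = k*(A_atm - A) - lam*A with A_atm = 1 held constant."""
          A = a0
          for _ in range(int(true_age / dt)):
              A += dt * (k * (1.0 - A) - lam * A)
          return -np.log(A) / lam                 # age inferred assuming a closed system

      for k in (0.0, 1e-4, 1e-3):
          print(f"k={k:.0e}/yr -> apparent age {apparent_age(10000.0, k):7.0f} yr "
                f"(true 10000 yr)")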

  10. Optimal Binarization of Gray-Scaled Digital Images via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A. (Inventor); Klinko, Steven J. (Inventor)

    2007-01-01

    A technique for finding an optimal threshold for binarization of a gray scale image employs fuzzy reasoning. A triangular membership function is employed which is dependent on the degree to which the pixels in the image belong to either the foreground class or the background class. Use of a simplified linear fuzzy entropy factor function facilitates short execution times and use of membership values between 0.0 and 1.0 for improved accuracy. To improve accuracy further, the membership function employs lower and upper bound gray level limits that can vary from image to image and are selected to be equal to the minimum and the maximum gray levels, respectively, that are present in the image to be converted. To identify the optimal binarization threshold, an iterative process is employed in which different possible thresholds are tested and the one providing the minimum fuzzy entropy measure is selected.
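
    A sketch in the spirit of the procedure (hedged: the membership and fuzzy-entropy factor functions below are generic stand-ins, not the patented ones): gray levels receive a membership in [0, 1] tied to their class mean and bounded by the image's own minimum and maximum gray levels, and the threshold minimizing the fuzzy entropy is kept.

      import numpy as np

      def optimal_threshold(img):
          g_min, g_max = int(img.min()), int(img.max())   # image-specific bounds
          best_t, best_h = g_min, np.inf
          for t in range(g_min + 1, g_max):
              fg, bg = img[img > t], img[img <= t]
              if fg.size == 0 or bg.size == 0:
                  continue
              # Triangular-style membership: 1 at the class mean, falling
              # linearly with distance over the image's gray-level range.
              mu = np.where(img > t,
                            1.0 - np.abs(img - fg.mean()) / (g_max - g_min),
                            1.0 - np.abs(img - bg.mean()) / (g_max - g_min))
              mu = np.clip(mu, 1e-6, 1 - 1e-6)
              h = -np.mean(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))  # fuzzy entropy
              if h < best_h:
                  best_t, best_h = t, h
          return best_t

      rng = np.random.default_rng(6)
      img = np.concatenate([rng.normal(60, 8, 500),
                            rng.normal(180, 8, 500)]).clip(0, 255)
      print("threshold:", optimal_threshold(img))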

  11. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
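
    A much-simplified Python analogue (a point estimate under bound constraints via SciPy, not the authors' MATLAB code, which derives the full posterior density): minimize the relative entropy of m against the prior expected model, subject to Gm = d and the lower/upper bounds.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(8)
      G = rng.uniform(0.0, 1.0, (5, 20))          # forward operator (illustrative)
      m_true = np.exp(-((np.arange(20) - 8.0) / 3.0) ** 2)   # "source history"
      d = G @ m_true                               # noise-free data for simplicity
      prior = np.full(20, 0.5)                     # prior expected value of m

      def rel_entropy(m):
          return np.sum(m * np.log(m / prior))     # relative-entropy objective

      res = minimize(rel_entropy, x0=prior.copy(),
                     method="SLSQP",
                     bounds=[(1e-9, 1.0)] * 20,    # prior lower/upper bounds on m
                     constraints={"type": "eq", "fun": lambda m: G @ m - d})
      print("converged:", res.success, " misfit:", np.abs(G @ res.x - d).max())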

  12. Chaos control of the brushless direct current motor using adaptive dynamic surface control based on neural network with the minimum weights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Shaohua; Department of Mechanical Engineering, Chongqing Aerospace Polytechnic, Chongqing, 400021; Wu, Songli

    2015-07-15

    This paper investigates chaos control for the brushless DC motor (BLDCM) system by adaptive dynamic surface approach based on neural network with the minimum weights. The BLDCM system contains parameter perturbation, chaotic behavior, and uncertainty. With the help of radial basis function (RBF) neural network to approximate the unknown nonlinear functions, the adaptive law is established to overcome uncertainty of the control gain. By introducing the RBF neural network and adaptive technology into the dynamic surface control design, a robust chaos control scheme is developed. It is proved that the proposed control approach can guarantee that all signals in the closed-loop system are globally uniformly bounded, and the tracking error converges to a small neighborhood of the origin. Simulation results are provided to show that the proposed approach works well in suppressing chaos and parameter perturbation.

  13. Finding local genome rearrangements.

    PubMed

    Simonaitis, Pijus; Swenson, Krister M

    2018-01-01

    The double cut and join (DCJ) model of genome rearrangement is well studied due to its mathematical simplicity and power to account for the many events that transform gene order. These studies have mostly been devoted to the understanding of minimum length scenarios transforming one genome into another. In this paper we search instead for rearrangement scenarios that minimize the number of rearrangements whose breakpoints are unlikely due to some biological criteria. One such criterion has recently become accessible due to the advent of the Hi-C experiment, facilitating the study of 3D spatial distance between breakpoint regions. We establish a link between the minimum number of unlikely rearrangements required by a scenario and the problem of finding a maximum edge-disjoint cycle packing on a certain transformed version of the adjacency graph. This link leads to a 3/2-approximation as well as an exact integer linear programming formulation for our problem, which we prove to be NP-complete. We also present experimental results on fruit flies, showing that Hi-C data is informative when used as a criterion for rearrangements. A new variant of the weighted DCJ distance problem is addressed that ignores scenario length in its objective function. A solution to this problem provides a lower bound on the number of unlikely moves necessary when transforming one gene order into another. This lower bound aids in the study of rearrangement scenarios with respect to chromatin structure, and could eventually be used in the design of a fixed parameter algorithm with a more general objective function.

  14. Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed

    USGS Publications Warehouse

    Balk, B.; Elder, K.; Baron, Jill S.

    1998-01-01

    Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km2), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates that have minimum variances. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.
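
    A compact ordinary-kriging sketch (illustrative synthetic data and an assumed exponential variogram, not the authors' fitted model): solve the kriging system for the unbiased, minimum-variance estimate and its kriging variance at an unsampled point.

      import numpy as np

      rng = np.random.default_rng(9)
      pts = rng.uniform(0.0, 1000.0, (30, 2))            # sample locations (m)
      depth = 1.5 + 0.001 * pts[:, 0] + 0.1 * rng.standard_normal(30)  # snow depth (m)

      def gamma(h, sill=0.05, rng_m=400.0):
          """Exponential variogram model (parameters assumed, not fitted here)."""
          return sill * (1.0 - np.exp(-h / rng_m))

      def krige(x0):
          n = len(pts)
          H = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
          A = np.ones((n + 1, n + 1)); A[:n, :n] = gamma(H); A[n, n] = 0.0
          b = np.ones(n + 1); b[:n] = gamma(np.linalg.norm(pts - x0, axis=1))
          w = np.linalg.solve(A, b)                      # weights + Lagrange multiplier
          return w[:n] @ depth, w @ b                    # estimate, kriging variance

      est, var = krige(np.array([500.0, 500.0]))
      print(f"kriged depth = {est:.2f} m, kriging variance = {var:.4f}")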

  15. Unbiased estimation in seamless phase II/III trials with unequal treatment effect variances and hypothesis-driven selection rules.

    PubMed

    Robertson, David S; Prevost, A Toby; Bowden, Jack

    2016-09-30

    Seamless phase II/III clinical trials offer an efficient way to select an experimental treatment and perform confirmatory analysis within a single trial. However, combining the data from both stages in the final analysis can induce bias into the estimates of treatment effects. Methods for bias adjustment developed thus far have made restrictive assumptions about the design and selection rules followed. In order to address these shortcomings, we apply recent methodological advances to derive the uniformly minimum variance conditionally unbiased estimator for two-stage seamless phase II/III trials. Our framework allows for the precision of the treatment arm estimates to take arbitrary values, can be utilised for all treatments that are taken forward to phase III and is applicable when the decision to select or drop treatment arms is driven by a multiplicity-adjusted hypothesis testing procedure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  16. Additive-Multiplicative Approximation of Genotype-Environment Interaction

    PubMed Central

    Gimelfarb, A.

    1994-01-01

    A model of genotype-environment interaction in quantitative traits is considered. The model represents an expansion of the traditional additive (first degree polynomial) approximation of genotypic and environmental effects to a second degree polynomial incorporating a multiplicative term besides the additive terms. An experimental evaluation of the model is suggested and applied to a trait in Drosophila melanogaster. The environmental variance of a genotype in the model is shown to be a function of the genotypic value: it is a convex parabola. The broad sense heritability in a population depends not only on the genotypic and environmental variances, but also on the position of the genotypic mean in the population relative to the minimum of the parabola. It is demonstrated, using the model, that G×E interaction may cause a substantial non-linearity in offspring-parent regression and a reversed response to directional selection. It is also shown that directional selection may be accompanied by an increase in the heritability. PMID:7896113
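
    To make the parabola statement concrete, one illustrative additive-multiplicative form (symbols assumed here, not necessarily the paper's notation) is:

    ```latex
    % Phenotype with additive genotypic effect g, environmental effect e with
    % E[e] = 0, Var(e) = \sigma_e^2, and a multiplicative interaction term:
    P = \mu + g + e + k\,g e
    % Conditional on the genotype, the environmental variance is a convex
    % parabola in the genotypic value, minimized at g = -1/k:
    \operatorname{Var}(P \mid g) = (1 + k g)^2 \sigma_e^2
    ```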

  17. Combinatorics of least-squares trees.

    PubMed

    Mihaescu, Radu; Pachter, Lior

    2008-09-09

    A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.

  18. A Quantitative Microscopy Technique for Determining the Number of Specific Proteins in Cellular Compartments

    PubMed Central

    Mutch, Sarah A.; Gadd, Jennifer C.; Fujimoto, Bryant S.; Kensel-Hammes, Patricia; Schiro, Perry G.; Bajjalieh, Sandra M.; Chiu, Daniel T.

    2013-01-01

    This protocol describes a method to determine both the average number and variance of proteins in the few to tens of copies in isolated cellular compartments, such as organelles and protein complexes. Other currently available protein quantification techniques either provide an average number but lack information on the variance or are not suitable for reliably counting proteins present in the few to tens of copies. This protocol entails labeling the cellular compartment with fluorescent primary-secondary antibody complexes, TIRF (total internal reflection fluorescence) microscopy imaging of the cellular compartment, digital image analysis, and deconvolution of the fluorescence intensity data. A minimum of 2.5 days is required to complete the labeling, imaging, and analysis of a set of samples. As an illustrative example, we describe in detail the procedure used to determine the copy number of proteins in synaptic vesicles. The same procedure can be applied to other organelles or signaling complexes. PMID:22094731

  19. Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model

    NASA Astrophysics Data System (ADS)

    Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.

    2013-12-01

    We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then calibrated the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with the daily variance prescribed according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM), contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov chain Monte Carlo algorithm, exhibit significant correlations between model parameters and show good predictive capabilities for the calibrated model. However, LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.

  20. Random regression models using Legendre orthogonal polynomials to evaluate the milk production of Alpine goats.

    PubMed

    Silva, F G; Torres, R A; Brito, L F; Euclydes, R F; Melo, A L P; Souza, N O; Ribeiro, J I; Rodrigues, M T

    2013-12-11

    The objective of this study was to identify the best random regression model using Legendre orthogonal polynomials to genetically evaluate Alpine goats and to estimate the parameters for test-day milk yield. We analyzed 20,710 test-day milk yield records of 667 goats from the Goat Sector of the Universidade Federal de Viçosa. The evaluated models combined distinct fitting orders for the fixed curve polynomials (2-5), random genetic (1-7) and permanent environmental (1-7) curves, and numbers of residual variance classes (2, 4, 5, and 6). WOMBAT software was used for all genetic analyses. The best random regression model for genetic evaluation of test-day milk yield of Alpine goats considered a fixed curve of order 4, a curve of additive genetic effects of order 2, a curve of permanent environmental effects of order 7, and a minimum of 5 residual variance classes, because it was the most economical model among those equivalent to the complete model by the likelihood ratio test. Phenotypic variance and heritability were higher at the end of the lactation period, indicating that the length of lactation has more genetic components relative to the production peak and persistence. It is very important that the evaluation utilize the best combination of fixed, additive genetic, and permanent environmental regressions and number of heterogeneous residual variance classes for genetic evaluation using random regression models, thereby enhancing the precision and accuracy of parameter estimates and prediction of genetic values.
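
    For concreteness, the sketch below builds normalized Legendre covariates of a chosen order on a standardized days-in-milk scale, the kind of design matrix a random regression model uses. The day range and polynomial order are assumptions for illustration; WOMBAT performs this construction internally.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    # Sketch: Legendre covariates for a random regression model. Days in milk
    # are mapped to [-1, 1]; the day range (5-305 d) and order are illustrative.
    def legendre_covariates(days, order):
        t = 2.0 * (days - days.min()) / (days.max() - days.min()) - 1.0
        # Column j holds the normalized Legendre polynomial of degree j at t
        return np.column_stack([
            np.sqrt((2 * j + 1) / 2.0) * legendre.legval(t, [0] * j + [1])
            for j in range(order + 1)
        ])

    days = np.arange(5, 306, 30, dtype=float)
    X = legendre_covariates(days, order=4)   # order-4 fixed lactation curve
    print(X.shape)                           # (11, 5)
    ```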

  1. A training image evaluation and selection method based on minimum data event distance for multiple-point geostatistics

    NASA Astrophysics Data System (ADS)

    Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke

    2017-07-01

    A training image (TI) can be regarded as a database of spatial structures and their low to higher order statistics used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of TIs often vary, meaning that the compatibilities of different CTIs with the conditioning data differ. Therefore, evaluation and optimal selection of CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on minimum data event distance (MDevD). In the proposed method, a set of MDevD properties are established through calculation of the MDevD of conditioning data events in each CTI. Then, CTIs are evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property are, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. To illustrate the method, a C++ implementation of the method is attached to the paper.
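
    A rough sketch of the distance scan is given below. It uses a simple mismatch-fraction distance between one conditioning data event and every compatible event location in a 2D TI; the function name and this Hamming-style distance are assumptions and may differ from the paper's exact MDevD definition.

    ```python
    import numpy as np

    # Sketch: minimum data-event distance of one conditioning event in a TI.
    def min_data_event_distance(ti, event_offsets, event_values):
        """ti: 2D training image; event_offsets: (k, 2) relative lag vectors;
        event_values: the k conditioning values of one data event."""
        nrow, ncol = ti.shape
        best = np.inf
        for i in range(nrow):
            for j in range(ncol):
                mismatches = []
                for (di, dj), v in zip(event_offsets, event_values):
                    r, c = i + di, j + dj
                    if not (0 <= r < nrow and 0 <= c < ncol):
                        break                      # event does not fit here
                    mismatches.append(ti[r, c] != v)
                else:
                    best = min(best, np.mean(mismatches))  # mismatch fraction
        return best

    ti = np.random.randint(0, 2, (50, 50))          # toy categorical TI
    offs = np.array([[0, 0], [0, 1], [1, 0]])
    print(min_data_event_distance(ti, offs, np.array([1, 0, 1])))
    ```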

  2. Cortical neuron activation induced by electromagnetic stimulation: a quantitative analysis via modelling and simulation.

    PubMed

    Wu, Tiecheng; Fan, Jie; Lee, Kim Seng; Li, Xiaoping

    2016-02-01

    Previous simulation works concerned with the mechanism of non-invasive neuromodulation have isolated many of the factors that can influence stimulation potency, but an inclusive account of the interplay between these factors on realistic neurons is still lacking. To give a comprehensive investigation on the stimulation-evoked neuronal activation, we developed a simulation scheme which incorporates highly detailed physiological and morphological properties of pyramidal cells. The model was implemented on a multitude of neurons; their thresholds and corresponding activation points with respect to various field directions and pulse waveforms were recorded. The results showed that the simulated thresholds had a minor anisotropy and reached minimum when the field direction was parallel to the dendritic-somatic axis; the layer 5 pyramidal cells always had lower thresholds but substantial variances were also observed within layers; reducing pulse length could magnify the threshold values as well as the variance; tortuosity and arborization of axonal segments could obstruct action potential initiation. The dependence of the initiation sites on both the orientation and the duration of the stimulus implies that the cellular excitability might represent the result of the competition between various firing-capable axonal components, each with a unique susceptibility determined by the local geometry. Moreover, the measurements obtained in simulation intimately resemble recordings in physiological and clinical studies, which seems to suggest that, with minimum simplification of the neuron model, the cable theory-based simulation approach can have sufficient verisimilitude to give quantitatively accurate evaluation of cell activities in response to the externally applied field.

  3. Dimensionality and noise in energy selective x-ray imaging

    PubMed Central

    Alvarez, Robert E.

    2013-01-01

    Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases the variance or, at best, leaves it unchanged. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10³. With the soft tissue component, it is 2.7 × 10⁴. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three-dimensional processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two-dimensional processing for adipose tissue is a factor of two, and with the contrast agent as the third material it is also a factor of two for both components in either two or three dimensions. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems. PMID:24320442
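
    The dimensionality effect can be reproduced with a toy linearized model: with log-measurements L ≈ A t + noise, the Fisher matrix is F = AᵀC⁻¹A and the CRLB is F⁻¹, so appending a basis function (a column of A) can only increase each component's variance. All numbers below are made up for illustration.

    ```python
    import numpy as np

    # CRLB sketch for a linearized energy-selective model L = A t + noise.
    # A holds basis attenuation coefficients at the measured energies.
    def crlb(A, C):
        F = A.T @ np.linalg.inv(C) @ A     # Fisher information matrix
        return np.linalg.inv(F)            # CRLB on the component covariance

    A3 = np.array([[0.50, 0.20, 0.9],      # rows: energy bins
                   [0.35, 0.18, 0.4],      # cols: bone, soft tissue, 3rd basis
                   [0.25, 0.17, 0.2]])
    C = np.diag([1e-4, 1e-4, 1e-4])        # measurement noise covariance
    v2 = np.diag(crlb(A3[:, :2], C))       # two-basis component variances
    v3 = np.diag(crlb(A3, C))              # three-basis component variances
    print(v3[:2] / v2)                     # variance inflation per component
    ```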

  4. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.

  5. Soil Moisture Initialization Error and Subgrid Variability of Precipitation in Seasonal Streamflow Forecasting

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Walker, Gregory K.; Mahanama, Sarith P.; Reichle, Rolf H.

    2013-01-01

    Offline simulations over the conterminous United States (CONUS) with a land surface model are used to address two issues relevant to the forecasting of large-scale seasonal streamflow: (i) the extent to which errors in soil moisture initialization degrade streamflow forecasts, and (ii) the extent to which a realistic increase in the spatial resolution of forecasted precipitation would improve streamflow forecasts. The addition of error to a soil moisture initialization field is found to lead to a nearly proportional reduction in streamflow forecast skill. The linearity of the response allows the determination of a lower bound for the increase in streamflow forecast skill achievable through improved soil moisture estimation, e.g., through satellite-based soil moisture measurements. An increase in the resolution of precipitation is found to have an impact on large-scale streamflow forecasts only when evaporation variance is significant relative to the precipitation variance. This condition is met only in the western half of the CONUS domain. Taken together, the two studies demonstrate the utility of a continental-scale land surface modeling system as a tool for addressing the science of hydrological prediction.

  6. Comparison of Kasai Autocorrelation and Maximum Likelihood Estimators for Doppler Optical Coherence Tomography

    PubMed Central

    Chan, Aaron C.; Srinivasan, Vivek J.

    2013-01-01

    In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator’s performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramér-Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
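
    The Kasai estimator itself is compact: it takes the phase of the lag-one autocorrelation of the complex signal. A minimal simulation with arbitrary parameters is sketched below.

    ```python
    import numpy as np

    # Kasai (lag-one autocorrelation) Doppler frequency estimator, a standard
    # formula; the simulation parameters here are invented.
    def kasai_frequency(z, dt):
        """z: complex signal sampled at interval dt (s); returns frequency (Hz)."""
        r1 = np.sum(z[1:] * np.conj(z[:-1]))    # lag-one autocorrelation
        return np.angle(r1) / (2.0 * np.pi * dt)

    rng = np.random.default_rng(0)
    dt, f_true = 1e-5, 2.0e3
    t = np.arange(512) * dt
    z = np.exp(2j * np.pi * f_true * t) + 0.1 * (
        rng.standard_normal(512) + 1j * rng.standard_normal(512))
    print(kasai_frequency(z, dt))   # ~2000 Hz
    ```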

  7. Nucleotide-dependent conformational states of actin

    PubMed Central

    Pfaendtner, Jim; Branduardi, Davide; Parrinello, Michele; Pollard, Thomas D.; Voth, Gregory A.

    2009-01-01

    The influence of the state of the bound nucleotide (ATP, ADP-Pi, or ADP) on the conformational free-energy landscape of actin is investigated. Nucleotide-dependent folding of the DNase-I binding (DB) loop in monomeric actin and the actin trimer is carried out using all-atom molecular dynamics (MD) calculations accelerated with a multiscale implementation of the metadynamics algorithm. Additionally, an investigation of the opening and closing of the actin nucleotide binding cleft is performed. Nucleotide-dependent free-energy profiles for all of these conformational changes are calculated within the framework of metadynamics. We find that in ADP-bound monomer, the folded and unfolded states of the DB loop have similar relative free-energy. This result helps explain the experimental difficulty in obtaining an ordered crystal structure for this region of monomeric actin. However, we find that in the ADP-bound actin trimer, the folded DB loop is stable and in a free-energy minimum. It is also demonstrated that the nucleotide binding cleft favors a closed conformation for the bound nucleotide in the ATP and ADP-Pi states, whereas the ADP state favors an open conformation, both in the monomer and trimer. These results suggest a mechanism of allosteric interactions between the nucleotide binding cleft and the DB loop. This behavior is confirmed by an additional simulation that shows the folding free-energy as a function of the nucleotide cleft width, which demonstrates that the barrier for folding changes significantly depending on the value of the cleft width. PMID:19620726

  8. Computation of the target state and feedback controls for time optimal consensus in multi-agent systems

    NASA Astrophysics Data System (ADS)

    Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj

    2018-02-01

    N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms for computing this time-optimal consensus point, the control law to be used by each agent and the time taken for the consensus to occur, are proposed. Two types of multi-agent systems are considered, namely (1) coupled single-integrator agents on a plane and, (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm, using convexity of attainable sets and Helly's theorem, is proposed, to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N²) run time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.

  9. A dynamic-solver-consistent minimum action method: With an application to 2D Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Wan, Xiaoliang; Yu, Haijun

    2017-02-01

    This paper discusses the necessity and strategy to unify the development of a dynamic solver and a minimum action method (MAM) for a spatially extended system when employing the large deviation principle (LDP) to study the effects of small random perturbations. A dynamic solver is used to approximate the unperturbed system, and a minimum action method is used to approximate the LDP, which corresponds to solving an Euler-Lagrange equation related to but more complicated than the unperturbed system. We will clarify possible inconsistencies induced by independent numerical approximations of the unperturbed system and the LDP, based on which we propose to define both the dynamic solver and the MAM on the same approximation space for spatial discretization. The semi-discrete LDP can then be regarded as the exact LDP of the semi-discrete unperturbed system, which is a finite-dimensional ODE system. We achieve this methodology for the two-dimensional Navier-Stokes equations using a divergence-free approximation space. The method developed can be used to study the nonlinear instability of wall-bounded parallel shear flows, and be generalized straightforwardly to three-dimensional cases. Numerical experiments are presented.

  10. Estimating the intensity of a cyclic Poisson process in the presence of additive and multiplicative linear trend

    NASA Astrophysics Data System (ADS)

    Wayan Mangku, I.

    2017-10-01

    In this paper we survey some results on estimation of the intensity function of a cyclic Poisson process in the presence of additive and multiplicative linear trend. We do not assume any parametric form for the cyclic component of the intensity function, except that it is periodic. Moreover, we consider the case when only a single realization of the Poisson process is observed in a bounded interval. The considered estimators are weakly and strongly consistent when the size of the observation interval indefinitely expands. Asymptotic approximations to the bias and variance of those estimators are presented.
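
    A generic kernel-type estimate of the cyclic component, which averages counts in small windows around s + kτ over the observed periods, can be sketched as follows. This is an illustrative construction, not necessarily the estimator analyzed in the survey, and the trend correction is omitted for brevity.

    ```python
    import numpy as np

    # Sketch: periodic intensity estimate from one realization on [0, W].
    def periodic_intensity(events, tau, s, h, W):
        k = np.arange(int(W // tau))
        centers = s + k * tau
        counts = [np.sum(np.abs(events - c) <= h) for c in centers]
        return np.mean(counts) / (2 * h)     # average count per window width

    rng = np.random.default_rng(0)
    tau, W = 1.0, 200.0
    # Inhomogeneous Poisson via thinning, lambda(t) = 2 + sin(2*pi*t):
    t = rng.uniform(0, W, rng.poisson(3 * W))
    events = np.sort(t[rng.uniform(0, 3, t.size) < 2 + np.sin(2 * np.pi * t)])
    print(periodic_intensity(events, tau, s=0.25, h=0.05, W=W))  # ~3.0
    ```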

  11. Gaussian Mean Field Lattice Gas

    NASA Astrophysics Data System (ADS)

    Scoppola, Benedetto; Troiani, Alessio

    2018-03-01

    We study rigorously a lattice gas version of the Sherrington-Kirkpatrick spin glass model. In discrete optimization literature this problem is known as unconstrained binary quadratic programming and belongs to the NP-hard class. We prove that the fluctuations of the ground state energy tend to vanish in the thermodynamic limit, and we give a lower bound on this ground state energy. Then we present a heuristic algorithm, based on a probabilistic cellular automaton, which seems to be able to find configurations with energy very close to the minimum, even for quite large instances.

  12. A no-go theorem for monodromy inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andriot, David, E-mail: david.andriot@aei.mpg.de

    2016-03-01

    We study the embedding of the monodromy inflation mechanism by E. Silverstein and A. Westphal (2008) in a concrete compactification setting. To that end, we look for an appropriate vacuum of type IIA supergravity, corresponding to the minimum of the inflaton potential. We prove a no-go theorem on the existence of such a vacuum, using ten-dimensional equations of motion. Anti-de Sitter and Minkowski vacua are ruled out; de Sitter vacua are not excluded, but have a lower bound on their cosmological constant which is too high for phenomenology.

  13. Spinon confinement in a quasi-one-dimensional XXZ Heisenberg antiferromagnet

    NASA Astrophysics Data System (ADS)

    Lake, Bella; Bera, Anup K.; Essler, Fabian H. L.; Vanderstraeten, Laurens; Hubig, Claudius; Schollwock, Ulrich; Islam, A. T. M. Nazmul; Schneidewind, Astrid; Quintero-Castro, Diana L.

    Half-integer spin Heisenberg chains constitute a key paradigm for quantum number fractionalization: flipping a spin creates a minimum of two elementary spinon excitations. These have been observed in numerous experiments. We report on inelastic neutron scattering experiments on the quasi-one-dimensional anisotropic spin-1/2 Heisenberg antiferromagnet SrCo2V2O8. These reveal a mechanism for temperature-induced spinon confinement, manifesting itself in the formation of sequences of spinon bound states. A theoretical description of this effect is achieved by a combination of analytical and numerical methods.

  14. Ab initio R-matrix calculations of e+-molecule scattering

    NASA Technical Reports Server (NTRS)

    Danby, Grahame; Tennyson, Jonathan

    1990-01-01

    The adaptation of the molecular R-matrix method, originally developed for electron-molecule collision studies, to positron scattering is discussed. Ab initio R-matrix calculations are presented for collisions of low energy positrons with a number of diatomic systems including H2, HF and N2. Differential elastic cross sections for positron-H2 show a minimum at about 45 deg for collision energies between 0.3 and 0.5 Ryd. The calculations predict a bound state of positron-HF. Calculations on inelastic processes in N2 and O2 are also discussed.

  15. Automated agitation management accounting for saturation dynamics.

    PubMed

    Rudge, A D; Chase, J G; Shaw, G M; Lee, D

    2004-01-01

    Agitation-sedation cycling in the critically ill is damaging to patient health and increases length of stay and cost. A physiologically representative model of the agitation-sedation system is used as a platform to evaluate feedback controllers offering improved agitation management. A heavy-derivative controller with upper and lower infusion rate bounds maintains minimum plasma concentrations through a low constant infusion, and minimizes outbursts of agitation through strong, timely boluses. The controller provides improved agitation management on data from 37 critically ill patients, given the saturation of effect at high concentration. Approval was obtained from the Canterbury Ethics Board for this research.

  16. Relations between the efficiency, power and dissipation for linear irreversible heat engine at maximum trade-off figure of merit

    NASA Astrophysics Data System (ADS)

    Iyyappan, I.; Ponmurugan, M.

    2018-03-01

    A trade-off figure of merit (Ω̇) criterion accounts for the best compromise between the useful input energy and the lost input energy of heat devices. When the heat engine is working at the maximum Ω̇ criterion, its efficiency increases significantly over the efficiency at maximum power. We derive the general relations between the power, the efficiency at the maximum Ω̇ criterion, and the minimum dissipation for the linear irreversible heat engine. The efficiency at the maximum Ω̇ criterion has the lower bound …

  17. Frozen lattice and absorptive model for high angle annular dark field scanning transmission electron microscopy: A comparison study in terms of integrated intensity and atomic column position measurement.

    PubMed

    Alania, M; Lobato, I; Van Aert, S

    2018-01-01

    In this paper, both the frozen lattice (FL) and the absorptive potential (AP) approximation models are compared in terms of the integrated intensity and the precision with which atomic columns can be located from an image acquired using high angle annular dark field (HAADF) scanning transmission electron microscopy (STEM). The comparison is made for atoms of Cu, Ag, and Au. The integrated intensity is computed for both an isolated atomic column and an atomic column inside an FCC structure. The precision has been computed using the so-called Cramér-Rao Lower Bound (CRLB), which provides a theoretical lower bound on the variance with which parameters can be estimated. It is shown that the AP model results in accurate measurements of the integrated intensity only for small detector ranges under relatively low angles and for small thicknesses. In terms of the attainable precision, both methods show similar results indicating picometer range precision under realistic experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems

    NASA Technical Reports Server (NTRS)

    Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.

    2005-01-01

    The current standards for handling uncertainty in control systems use interval bounds for definition of the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller design. With these methods, worst case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of response cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system, and a non-collocated mass spring system, show the added information provided by this hybrid analysis.

  19. Quantum speed limit time in a magnetic resonance

    NASA Astrophysics Data System (ADS)

    Ivanchenko, E. A.

    2017-12-01

    A visualization of the dynamics of a qudit spin vector in a time-dependent magnetic field is realized by mapping the solution for the spin vector onto a three-dimensional spherical curve (vector hodograph). The obtained results clearly display the quantum interference of precessional and nutational effects on the spin vector in magnetic resonance. For any spin, lower bounds on the quantum speed limit (QSL) time are found. It is shown that the lower bound decreases when using multilevel spin systems. Under certain conditions the nonzero minimal time necessary to reach a state orthogonal to the initial one is attained at spin S = 2. Estimates of the products of two and three standard deviations of the spin components are presented. We discuss the dynamics of the mutual uncertainty, conditional uncertainty, and conditional variance in terms of spin standard deviations. The study can find practical applications in magnetic resonance, 3D visualization of computational data, and the design of optimized information processing devices for quantum computation and communication.
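
    For context, the two standard lower bounds on the time needed to evolve to an orthogonal state (Mandelstam-Tamm and Margolus-Levitin) have the form below; the paper derives analogous bounds adapted to spin systems.

    ```latex
    % Standard quantum speed limit bounds for reaching an orthogonal state:
    % Mandelstam-Tamm (energy variance) and Margolus-Levitin (mean energy
    % above the ground state).
    \tau_{\perp} \;\ge\; \max\!\left\{ \frac{\pi\hbar}{2\,\Delta E},\;
                                       \frac{\pi\hbar}{2\,\langle E - E_{0}\rangle} \right\}
    ```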

  20. Assessment of the relative merits of a few methods to detect evolutionary trends.

    PubMed

    Laurin, Michel

    2010-12-01

    Some of the most basic questions about the history of life concern evolutionary trends. These include determining whether or not metazoans have become more complex over time, whether or not body size tends to increase over time (the Cope-Depéret rule), or whether or not brain size has increased over time in various taxa, such as mammals and birds. Despite the proliferation of studies on such topics, assessment of the reliability of results in this field is hampered by the variability of techniques used and the lack of statistical validation of these methods. To solve this problem, simulations are performed using a variety of evolutionary models (gradual Brownian motion, speciational Brownian motion, and Ornstein-Uhlenbeck), with or without a drift of variable amplitude, with variable variance of tips, and with bounds placed close or far from the starting values and final means of simulated characters. These are used to assess the relative merits (power, Type I error rate, bias, and mean absolute value of error on slope estimate) of several statistical methods that have recently been used to assess the presence of evolutionary trends in comparative data. Results show widely divergent performance of the methods. The simple, nonphylogenetic regression (SR) and variance partitioning using phylogenetic eigenvector regression (PVR) with a broken stick selection procedure have greatly inflated Type I error rate (0.123-0.180 at a 0.05 threshold), which invalidates their use in this context. However, they have the greatest power. Most variants of Felsenstein's independent contrasts (FIC; five of which are presented) have adequate Type I error rate, although two have a slightly inflated Type I error rate with at least one of the two reference trees (0.064-0.090 error rate at a 0.05 threshold). The power of all contrast-based methods is always much lower than that of SR and PVR, except under Brownian motion with a strong trend and distant bounds. Mean absolute value of error on slope of all FIC methods is slightly higher than that of phylogenetic generalized least squares (PGLS), SR, and PVR. PGLS performs well, with low Type I error rate, low error on regression coefficient, and power comparable with some FIC methods. Four variants of skewness analysis are examined, and a new method to assess significance of results is presented. However, all have consistently low power, except in rare combinations of trees, trend strength, and distance between final means and bounds. Globally, the results clearly show that FIC-based methods and PGLS are globally better than nonphylogenetic methods and variance partitioning with PVR. FIC methods and PGLS are sensitive to the model of evolution (and, hence, to branch length errors). Our results suggest that regressing raw character contrasts against raw geological age contrasts yields a good combination of power and Type I error rate. New software to facilitate batch analysis is presented.

  1. Potential Seasonal Terrestrial Water Storage Monitoring from GPS Vertical Displacements: A Case Study in the Lower Three-Rivers Headwater Region, China.

    PubMed

    Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang

    2016-09-19

    This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC) with careful pre- and post-processing to estimate the seasonal crustal deformation in response to the hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual EWH changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2-3.9 cm and 4.8-5.2 cm, respectively. In the research areas, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8-24.7 cm and a minimum of 3.1-6.9 cm.

  2. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    NASA Astrophysics Data System (ADS)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for reduction of environmental emission. In this work an empirical model is developed to minimize the power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 aluminum with coated tungsten inserts. The relationship between the power consumption and machining parameters is adequately modeled. This model is used for formulation of a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on the energy consumption has been found using analysis of variance. The validation of the developed empirical model is proved using confirmation experiments. The results indicate that the developed model is effective and has potential to be adopted by the industry for minimum power consumption of machine tools.

  3. Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods

    NASA Astrophysics Data System (ADS)

    Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong

    2008-12-01

    Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel, a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that will correctly handle colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our introduced algorithm under low SNR conditions. Simulation results show the superior performance of our proposed methods.

  4. Comparative efficacy of storage bags, storability and damage potential of bruchid beetle.

    PubMed

    Harish, G; Nataraja, M V; Ajay, B C; Holajjer, Prasanna; Savaliya, S D; Gedia, M V

    2014-12-01

    Groundnut during storage is attacked by a number of stored grain pests, and management of these insect pests, particularly the bruchid beetle, Caryedon serratus (Olivier), is of prime importance as they directly damage the pods and kernels. In this regard, the different storage bags that could be used and the duration for which groundnut can be stored were studied. The super grain bag recorded the minimum number of eggs laid, less damage, and minimum weight loss in pods and kernels in comparison to the other storage bags. Analysis of variance for multiple regression models was found to be significant in all bags for the variables, viz., number of eggs laid, damage in pods and kernels, and weight loss in pods and kernels throughout the season. Multiple comparison results showed a high probability of egg laying and pod damage in the lino bag, fertilizer bag, and gunny bag, whereas the super grain bag was found to be more effective in managing C. serratus owing to its very low air circulation.

  5. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
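
    As a baseline illustration, the sketch below computes global minimum variance weights from a Ledoit-Wolf shrinkage covariance on synthetic heavy-tailed returns. The paper's hybrid Tyler M-estimator/shrinkage construction and the online tuning of the shrinkage intensity are more involved than this.

    ```python
    import numpy as np
    from sklearn.covariance import LedoitWolf

    # Global minimum variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
    def min_variance_weights(returns):
        sigma = LedoitWolf().fit(returns).covariance_
        ones = np.ones(sigma.shape[0])
        w = np.linalg.solve(sigma, ones)
        return w / w.sum()

    rng = np.random.default_rng(1)
    returns = rng.standard_t(df=4, size=(250, 50))  # heavy-tailed toy returns
    w = min_variance_weights(returns)
    print(w.sum(), w.min())                         # weights sum to 1
    ```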

  6. Noise sensitivity of portfolio selection in constant conditional correlation GARCH models

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, I.; Kondor, I.

    2007-11-01

    This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying some filtering method on the conditional correlation matrix (such as Random Matrix Theory based filtering). As an empirical support for the simulation results, the analysis is also carried out for a time series of S&P500 stock prices.

  7. A Sparse Matrix Approach for Simultaneous Quantification of Nystagmus and Saccade

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Stone, Lee; Boyle, Richard D.

    2012-01-01

    The vestibulo-ocular reflex (VOR) consists of two intermingled non-linear subsystems, namely nystagmus and saccade. Typically, nystagmus is analysed using a single sufficiently long signal or a concatenation of several. Saccade information is discarded rather than analysed, because the segments are too short to provide consistent, minimum variance estimates. This paper presents a novel sparse matrix approach to system identification of the VOR. It allows for the simultaneous estimation of both nystagmus and saccade signals. We show via simulation of the VOR that our technique provides consistent and unbiased estimates in the presence of additive output noise.

  8. Statistical indicators of collective behavior and functional clusters in gene networks of yeast

    NASA Astrophysics Data System (ADS)

    Živković, J.; Tadić, B.; Wick, N.; Thurner, S.

    2006-03-01

    We analyze gene expression time-series data of yeast (S. cerevisiae) measured along two full cell-cycles. We quantify these data by using q-exponentials, gene expression ranking and a temporal mean-variance analysis. We construct gene interaction networks based on correlation coefficients and study the formation of the corresponding giant components and minimum spanning trees. By coloring genes according to their cell function we find functional clusters in the correlation networks and functional branches in the associated trees. Our results suggest that a percolation point of functional clusters can be identified on these gene expression correlation networks.
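
    A minimal sketch of the network construction step is below; the correlation-to-distance map d = sqrt(2(1 - c)) is one common convention and may differ from the paper's choice.

    ```python
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    # Minimum spanning tree of a gene-gene correlation network (toy data).
    rng = np.random.default_rng(2)
    expr = rng.standard_normal((30, 40))     # 30 genes x 40 time points
    c = np.corrcoef(expr)                    # gene-gene correlations
    d = np.sqrt(2.0 * (1.0 - c))             # correlation -> distance
    np.fill_diagonal(d, 0.0)
    mst = minimum_spanning_tree(d)           # sparse matrix of tree edges
    rows, cols = mst.nonzero()
    print(len(rows))                         # n - 1 = 29 edges
    ```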

  9. Gravity anomalies, compensation mechanisms, and the geodynamics of western Ishtar Terra, Venus

    NASA Technical Reports Server (NTRS)

    Grimm, Robert E.; Phillips, Roger J.

    1991-01-01

    Pioneer Venus line-of-sight orbital accelerations were utilized to calculate the geoid and vertical gravity anomalies for western Ishtar Terra on various planes of altitude z_0. The apparent depth of isostatic compensation at z_0 = 1400 km is 180 ± 20 km, based on the usual method of minimum variance in the isostatic anomaly. An attempt is made here to explain this observation, as well as the regional elevation, peripheral mountain belts, and inferred age of western Ishtar Terra, in terms of one of three broad geodynamic models.

  10. Minimal Model of Prey Localization through the Lateral-Line System

    NASA Astrophysics Data System (ADS)

    Franosch, Jan-Moritz P.; Sobotka, Marion C.; Elepfandt, Andreas; van Hemmen, J. Leo

    2003-10-01

    The clawed frog Xenopus is an aquatic predator catching prey at night by detecting water movements caused by its prey. We present a general method, a “minimal model” based on a minimum-variance estimator, to explain prey detection through the frog's many lateral-line organs, even when several of them are defunct. We show how waveform reconstruction allows Xenopus' neuronal system to determine both the direction and the character of the prey and even to distinguish two simultaneous wave sources. The results can be applied to many aquatic amphibians, fish, or reptiles such as crocodilians.

  11. Beamforming approaches for untethered, ultrasonic neural dust motes for cortical recording: a simulation study.

    PubMed

    Bertrand, Alexander; Seo, Dongjin; Maksimovic, Filip; Carmena, Jose M; Maharbiz, Michel M; Alon, Elad; Rabaey, Jan M

    2014-01-01

    In this paper, we examine the use of beamforming techniques to interrogate a multitude of neural implants in a distributed, ultrasound-based intra-cortical recording platform known as Neural Dust. We propose a general framework to analyze system design tradeoffs in the ultrasonic beamformer that extracts neural signals from modulated ultrasound waves that are backscattered by free-floating neural dust (ND) motes. Simulations indicate that high-resolution linearly-constrained minimum variance beamforming sufficiently suppresses interference from unselected ND motes and can be incorporated into the ND-based cortical recording system.
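
    The core of the high-resolution beamformer is the textbook LCMV solution w = R⁻¹A(AᴴR⁻¹A)⁻¹g, which passes the selected mote with unit gain while nulling the others. A synthetic sketch (all signatures and data invented) is below.

    ```python
    import numpy as np

    # Textbook LCMV beamformer: minimize w^H R w subject to A^H w = g.
    # R: covariance of received samples; A: steering/backscatter signatures.
    def lcmv_weights(R, A, g):
        Ri_A = np.linalg.solve(R, A)                    # R^{-1} A
        return Ri_A @ np.linalg.solve(A.conj().T @ Ri_A, g)

    rng = np.random.default_rng(3)
    A = rng.standard_normal((16, 3)) + 1j * rng.standard_normal((16, 3))
    x = A @ rng.standard_normal((3, 1000)) + 0.1 * rng.standard_normal((16, 1000))
    R = (x @ x.conj().T) / 1000
    g = np.array([1.0, 0.0, 0.0])           # pass mote 1, null motes 2 and 3
    w = lcmv_weights(R, A, g)
    print(np.abs(w.conj().T @ A))           # ~[1, 0, 0]
    ```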

  12. Statistical evaluation of metal fill widths for emulated metal fill in parasitic extraction methodology

    NASA Astrophysics Data System (ADS)

    J-Me, Teh; Noh, Norlaili Mohd.; Aziz, Zalina Abdul

    2015-05-01

    In the chip industry today, the key goal of a chip development organization is to develop and market chips within a short time frame to gain foothold on market share. This paper proposes a design flow around the area of parasitic extraction to improve the design cycle time. The proposed design flow utilizes the usage of metal fill emulation as opposed to the current flow which performs metal fill insertion directly. By replacing metal fill structures with an emulation methodology in earlier iterations of the design flow, this is targeted to help reduce runtime in fill insertion stage. Statistical design of experiments methodology utilizing the randomized complete block design was used to select an appropriate emulated metal fill width to improve emulation accuracy. The experiment was conducted on test cases of different sizes, ranging from 1000 gates to 21000 gates. The metal width was varied from 1 x minimum metal width to 6 x minimum metal width. Two-way analysis of variance and Fisher's least significant difference test were used to analyze the interconnect net capacitance values of the different test cases. This paper presents the results of the statistical analysis for the 45 nm process technology. The recommended emulated metal fill width was found to be 4 x the minimum metal width.
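
    The blocked analysis described here maps directly onto a two-way ANOVA with fill width as the treatment and test case as the block; a sketch with synthetic capacitance data (all values invented) is shown below.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm

    # Randomized complete block design: width as treatment, test case as block.
    rng = np.random.default_rng(4)
    widths = np.repeat([1, 2, 3, 4, 5, 6], 5)     # multiples of min metal width
    blocks = np.tile(np.arange(5), 6)             # 5 test-case blocks
    cap = 10 + 0.3 * widths + 0.5 * blocks + rng.normal(0, 0.2, widths.size)
    df = pd.DataFrame({"width": widths, "block": blocks, "cap": cap})
    model = ols("cap ~ C(width) + C(block)", data=df).fit()
    print(anova_lm(model, typ=2))                 # F-tests for width and block
    ```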

  13. Claw length recommendations for dairy cow foot trimming

    PubMed Central

    Archer, S. C.; Newsome, R.; Dibble, H.; Sturrock, C. J.; Chagunda, M. G. G.; Mason, C. S.; Huxley, J. N.

    2015-01-01

    The aim was to describe variation in length of the dorsal hoof wall in contact with the dermis for cows on a single farm, and hence, derive minimum appropriate claw lengths for routine foot trimming. The hind feet of 68 Holstein-Friesian dairy cows were collected post mortem, and the internal structures were visualised using x-ray µCT. The internal distance from the proximal limit of the wall horn to the distal tip of the dermis was measured from cross-sectional sagittal images. A constant was added to allow for a minimum sole thickness of 5 mm and an average wall thickness of 8 mm. Data were evaluated using descriptive statistics and two-level linear regression models with claw nested within cow. Based on 219 claws, the recommended dorsal wall length from the proximal limit of hoof horn was up to 90 mm for 96 per cent of claws, and the median value was 83 mm. Dorsal wall length increased by 1 mm per year of age, yet 85 per cent of the null model variance remained unexplained. Overtrimming can have severe consequences; the authors propose that the minimum recommended claw length stated in training materials for all Holstein-Friesian cows should be increased to 90 mm. PMID:26220848

  14. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.

  15. An internal pilot design for prospective cancer screening trials with unknown disease prevalence.

    PubMed

    Brinton, John T; Ringham, Brandy M; Glueck, Deborah H

    2015-10-13

    For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I error inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves the goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound the Type I error below a goal level for studies with small sample sizes.
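
    A minimal sketch of the re-estimation step is shown below. It assumes a generic normal-approximation sample size formula rather than the authors' exact procedure, and all inputs are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Internal-pilot re-estimation: after accruing a pilot fraction, plug the
    # updated prevalence and variance estimates into a sample size formula.
    def reestimate_n(prev_hat, var_hat, delta, alpha=0.05, power=0.9):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_diseased = (z / delta) ** 2 * var_hat      # diseased cases needed
        return int(np.ceil(n_diseased / prev_hat))   # total subjects to screen

    # e.g., pilot estimates: 2% prevalence, variance 0.04, detectable diff 0.05
    print(reestimate_n(prev_hat=0.02, var_hat=0.04, delta=0.05))
    ```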

  16. Extremal values on Zagreb indices of trees with given distance k-domination number.

    PubMed

    Pei, Lidan; Pan, Xiangfeng

    2018-01-01

    Let G = (V(G), E(G)) be a graph. A set D ⊆ V(G) is a distance k-dominating set of G if for every vertex u ∈ V(G) \ D, d(u, v) ≤ k for some vertex v ∈ D, where k is a positive integer. The distance k-domination number γ_k(G) of G is the minimum cardinality among all distance k-dominating sets of G. The first Zagreb index of G is defined as M_1(G) = Σ_{v ∈ V(G)} d(v)² and the second Zagreb index is M_2(G) = Σ_{uv ∈ E(G)} d(u)d(v). In this paper, we obtain upper bounds for the Zagreb indices of n-vertex trees with given distance k-domination number and characterize the extremal trees, which generalizes the results of Borovićanin and Furtula (Appl. Math. Comput. 276:208-218, 2016). Notably, for an n-vertex tree T, a sharp upper bound on the distance k-domination number γ_k(T) is determined.
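
    Under the standard definitions above (reconstructed here from the literature, since the record's formulas were garbled), the indices are straightforward to compute; the sketch below uses a small explicit tree.

    ```python
    import networkx as nx

    # Zagreb indices under the standard definitions:
    #   M1(G) = sum of squared vertex degrees
    #   M2(G) = sum over edges uv of deg(u) * deg(v)
    def zagreb_indices(G):
        deg = dict(G.degree())
        m1 = sum(d * d for d in deg.values())
        m2 = sum(deg[u] * deg[v] for u, v in G.edges())
        return m1, m2

    # A small 5-vertex tree: edges 0-1, 1-2, 1-3, 3-4
    T = nx.Graph([(0, 1), (1, 2), (1, 3), (3, 4)])
    print(zagreb_indices(T))   # (16, 14)
    ```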

  17. Description and first application of a new technique to measure the gravitational mass of antihydrogen

    NASA Astrophysics Data System (ADS)

    Alpha Collaboration; Amole, C.; Ashkezari, M. D.; Baquero-Ruiz, M.; Bertsche, W.; Butler, E.; Capra, A.; Cesar, C. L.; Charlton, M.; Eriksson, S.; Fajans, J.; Friesen, T.; Fujiwara, M. C.; Gill, D. R.; Gutierrez, A.; Hangst, J. S.; Hardy, W. N.; Hayden, M. E.; Isaac, C. A.; Jonsell, S.; Kurchaninov, L.; Little, A.; Madsen, N.; McKenna, J. T. K.; Menary, S.; Napoli, S. C.; Nolan, P.; Olin, A.; Pusa, P.; Rasmussen, C. Ø.; Robicheaux, F.; Sarid, E.; Silveira, D. M.; So, C.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.; Zhmoginov, A. I.; Charman, A. E.

    2013-04-01

    Physicists have long wondered whether the gravitational interactions between matter and antimatter might be different from those between matter and itself. Although there are many indirect indications that no such differences exist and that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. Here we describe a novel direct test methodology; we search for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap. In the absence of systematic errors, we can reject ratios of the gravitational to inertial mass of antihydrogen >75 at a statistical significance level of 5%; worst-case systematic errors increase the minimum rejection ratio to 110. A similar search places somewhat tighter bounds on a negative gravitational mass, that is, on antigravity. This methodology, coupled with ongoing experimental improvements, should allow us to bound the ratio within the more interesting near equivalence regime.

  18. Description and first application of a new technique to measure the gravitational mass of antihydrogen

    PubMed Central

    Amole, C.; Ashkezari, M. D.; Baquero-Ruiz, M.; Bertsche, W.; Butler, E.; Capra, A.; Cesar, C. L.; Charlton, M.; Eriksson, S.; Fajans, J.; Friesen, T.; Fujiwara, M. C.; Gill, D. R.; Gutierrez, A.; Hangst, J. S.; Hardy, W. N.; Hayden, M. E.; Isaac, C. A.; Jonsell, S.; Kurchaninov, L.; Little, A.; Madsen, N.; McKenna, J. T. K.; Menary, S.; Napoli, S. C.; Nolan, P.; Olin, A.; Pusa, P.; Rasmussen, C. Ø; Robicheaux, F.; Sarid, E.; Silveira, D. M.; So, C.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.; Zhmoginov, A. I.; Charman, A. E.

    2013-01-01

    Physicists have long wondered whether the gravitational interactions between matter and antimatter might be different from those between matter and itself. Although there are many indirect indications that no such differences exist and that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. Here we describe a novel direct test methodology; we search for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap. In the absence of systematic errors, we can reject ratios of the gravitational to inertial mass of antihydrogen >75 at a statistical significance level of 5%; worst-case systematic errors increase the minimum rejection ratio to 110. A similar search places somewhat tighter bounds on a negative gravitational mass, that is, on antigravity. This methodology, coupled with ongoing experimental improvements, should allow us to bound the ratio within the more interesting near equivalence regime. PMID:23653197

  19. Capacity planning of a wide-sense nonblocking generalized survivable network

    NASA Astrophysics Data System (ADS)

    Ho, Kwok Shing; Cheung, Kwok Wai

    2006-06-01

    Generalized survivable networks (GSNs) have two interesting properties that are essential attributes for future backbone networks--full survivability against link failures and support for dynamic traffic demands. GSNs incorporate the nonblocking network concept into survivable network models. Given a set of nodes and a topology that is at least two-edge-connected, a certain minimum capacity is required on each edge to form a GSN. The edge capacity is bounded because each node has an input-output capacity limit that serves as a constraint on any allowable traffic demand matrix. The GSN capacity planning problem is NP-hard. We first give a rigorous mathematical framework and then offer two different solution approaches: the two-phase approach is fast, but the joint optimization approach yields a better bound. We carried out numerical computations for eight networks with different topologies and found that the cost of a GSN is only a fraction (from 52% to 89%) more than that of a static survivable network.

  20. Saddle point localization of molecular wavefunctions.

    PubMed

    Mellau, Georg Ch; Kyuberis, Alexandra A; Polyansky, Oleg L; Zobov, Nikolai; Field, Robert W

    2016-09-15

    The quantum mechanical description of isomerization is based on bound eigenstates of the molecular potential energy surface. For the near-minimum regions there is a textbook-based relationship between the potential and eigenenergies. Here we show how the saddle point region that connects the two minima is encoded in the eigenstates of the model quartic potential and in the energy levels of the [H, C, N] potential energy surface. We model the spacing of the eigenenergies with the energy dependent classical oscillation frequency decreasing to zero at the saddle point. The eigenstates with the smallest spacing are localized at the saddle point. The analysis of the HCN ↔ HNC isomerization states shows that the eigenstates with small energy spacing relative to the effective (v1, v3, ℓ) bending potentials are highly localized in the bending coordinate at the transition state. These spectroscopically detectable states represent a chemical marker of the transition state in the eigenenergy spectrum. The method developed here provides a basis for modeling characteristic patterns in the eigenenergy spectrum of bound states.
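
    A numerical illustration of the spacing diagnostic on a model symmetric double well (a quartic potential whose barrier top plays the role of the saddle point); the parameters are arbitrary, chosen only so several levels lie below the barrier, and the even-parity ladder is used so that tunneling doublets do not mask the dip:

      import numpy as np

      # Double well V(x) = x**4 - b*x**2; the saddle (barrier top) is at E = 0.
      b = 10.0
      N, L = 1500, 6.0
      x = np.linspace(-L, L, N)
      dx = x[1] - x[0]

      # Finite-difference Hamiltonian (hbar = m = 1).
      H = np.diag(1.0 / dx ** 2 + x ** 4 - b * x ** 2)
      H += np.diag(-0.5 / dx ** 2 * np.ones(N - 1), 1)
      H += np.diag(-0.5 / dx ** 2 * np.ones(N - 1), -1)
      E, psi = np.linalg.eigh(H)

      # Keep one parity ladder and locate the smallest level spacing.
      even = [n for n in range(60) if np.dot(psi[:, n], psi[::-1, n]) > 0]
      spac = np.diff(E[even])
      i = int(np.argmin(spac))
      print("smallest spacing %.4f near E = %.3f (saddle at 0)" % (spac[i], E[even[i]]))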

  1. Beamforming Based Full-Duplex for Millimeter-Wave Communication

    PubMed Central

    Liu, Xiao; Xiao, Zhenyu; Bai, Lin; Choi, Jinho; Xia, Pengfei; Xia, Xiang-Gen

    2016-01-01

    In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors. PMID:27455256
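
    A toy numpy sketch in the spirit of the closed-form schemes named here (maximum-ratio transmission at the transmitter, zero-forcing of the residual self-interference at the receiver); the channel dimensions and realizations are arbitrary placeholders, and this is not the paper's iterative algorithm:

      import numpy as np

      rng = np.random.default_rng(0)
      Nt, Nr = 8, 8
      h_tx = rng.normal(size=Nt) + 1j * rng.normal(size=Nt)  # channel to remote node
      g_rx = rng.normal(size=Nr) + 1j * rng.normal(size=Nr)  # channel from remote node
      H_si = 0.3 * (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt)))

      # Tx beam: maximum-ratio transmission toward the remote receiver.
      f = h_tx.conj() / np.linalg.norm(h_tx)

      # Rx beam: project the receive combiner out of the self-interference direction.
      si = H_si @ f
      P = np.eye(Nr) - np.outer(si, si.conj()) / np.vdot(si, si)
      w = P @ g_rx
      w /= np.linalg.norm(w)

      signal = abs(np.vdot(w, g_rx)) ** 2
      leakage = abs(np.vdot(w, H_si @ f)) ** 2
      print("signal power %.3f, residual SI %.2e" % (signal, leakage))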

  2. Probing Majorana bound states via counting statistics of a single electron transistor

    NASA Astrophysics Data System (ADS)

    Li, Zeng-Zhao; Lam, Chi-Hang; You, J. Q.

    2015-06-01

    We propose an approach for probing Majorana bound states (MBSs) in a nanowire via counting statistics of a nearby charge detector in the form of a single-electron transistor (SET). We consider the impacts on the counting statistics by both the local coupling between the detector and an adjacent MBS at one end of a nanowire and the nonlocal coupling to the MBS at the other end. We show that the Fano factor and the skewness of the SET current are minimized for a symmetric SET configuration in the absence of the MBSs or when coupled to a fermionic state. However, the minimum points of operation are shifted appreciably in the presence of the MBSs to asymmetric SET configurations with a higher tunnel rate at the drain than at the source. This feature persists even when varying the nonlocal coupling and the pairing energy between the two MBSs. We expect that these MBS-induced shifts can be measured experimentally with available technologies and can serve as important signatures of the MBSs.
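
    For orientation, a sketch of the textbook single-level SET result without MBS coupling, where the Fano factor F = (Γs² + Γd²)/(Γs + Γd)² has its minimum at the symmetric configuration; the MBS-induced shift of that minimum is the paper's result and is not modeled here:

      import numpy as np

      def fano_single_level(gamma_s, gamma_d):
          """Sequential-tunneling Fano factor for a single-level SET."""
          return (gamma_s ** 2 + gamma_d ** 2) / (gamma_s + gamma_d) ** 2

      # Sweep the source/drain asymmetry at fixed total rate.
      a = np.linspace(0.05, 0.95, 181)  # fraction of the total rate at the source
      F = fano_single_level(a, 1.0 - a)
      print("minimum F = %.3f at a = %.2f" % (F.min(), a[np.argmin(F)]))  # 0.5 at 0.5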

  3. Description and first application of a new technique to measure the gravitational mass of antihydrogen.

    PubMed

    Charman, A E; Amole, C; Ashkezari, M D; Baquero-Ruiz, M; Bertsche, W; Butler, E; Capra, A; Cesar, C L; Charlton, M; Eriksson, S; Fajans, J; Friesen, T; Fujiwara, M C; Gill, D R; Gutierrez, A; Hangst, J S; Hardy, W N; Hayden, M E; Isaac, C A; Jonsell, S; Kurchaninov, L; Little, A; Madsen, N; McKenna, J T K; Menary, S; Napoli, S C; Nolan, P; Olin, A; Pusa, P; Rasmussen, C Ø; Robicheaux, F; Sarid, E; Silveira, D M; So, C; Thompson, R I; van der Werf, D P; Wurtele, J S; Zhmoginov, A I

    2013-01-01

    Physicists have long wondered whether the gravitational interactions between matter and antimatter might be different from those between matter and itself. Although there are many indirect indications that no such differences exist and that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. Here we describe a novel direct test methodology; we search for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap. In the absence of systematic errors, we can reject ratios of the gravitational to inertial mass of antihydrogen >75 at a statistical significance level of 5%; worst-case systematic errors increase the minimum rejection ratio to 110. A similar search places somewhat tighter bounds on a negative gravitational mass, that is, on antigravity. This methodology, coupled with ongoing experimental improvements, should allow us to bound the ratio within the more interesting near equivalence regime.

  4. Time-optimal spinup maneuvers of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Singh, G.; Kabamba, P. T.; Mcclamroch, N. H.

    1990-01-01

    Attitude controllers for spacecraft have been based on the assumption that the bodies being controlled are rigid. Future spacecraft, however, may be quite flexible. Many applications require spinning up/down these vehicles. In this work the minimum time control of these maneuvers is considered. The time-optimal control is shown to possess an important symmetry property. Taking advantage of this property, the necessary and sufficient conditions for optimality are transformed into a system of nonlinear algebraic equations in the control switching times during one half of the maneuver, the maneuver time, and the costates at the mid-maneuver time. These equations can be solved using a homotopy approach. Control spillover measures are introduced and upper bounds on these measures are obtained. For a special case these upper bounds can be expressed in closed form for an infinite dimensional evaluation model. Rotational stiffening effects are ignored in the optimal control analysis. Based on a heuristic argument a simple condition is given which justifies the omission of these nonlinear effects. This condition is validated by numerical simulation.
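
    A generic illustration of the homotopy idea on a small stand-in system, assuming scipy; the target equations here are arbitrary and far simpler than the switching-time equations of the paper:

      import numpy as np
      from scipy.optimize import fsolve

      def homotopy_residual(z, t):
          """H(z, t) = t*F(z) + (1 - t)*(z - z0): trivial at t = 0, target at t = 1."""
          x, y = z
          F = np.array([x ** 2 + y ** 2 - 4.0, np.exp(x) + y - 1.0])
          return t * F + (1.0 - t) * (z - np.array([1.0, 1.0]))

      z = np.array([1.0, 1.0])             # solution of the easy t = 0 problem
      for t in np.linspace(0.1, 1.0, 10):  # march the homotopy parameter
          z = fsolve(homotopy_residual, z, args=(t,))
      print(z)                             # a root of the target system F(z) = 0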

  5. Upper bound for the span of pencil graph

    NASA Astrophysics Data System (ADS)

    Parvathi, N.; Vimala Rani, A.

    2018-04-01

    An L(2,1)-coloring (also called a radio coloring or λ-coloring) of a graph G is a function f from the vertex set V(G) to the set of nonnegative integers such that |f(x) - f(y)| ≥ 2 if d(x,y) = 1 and |f(x) - f(y)| ≥ 1 if d(x,y) = 2, where d(x,y) denotes the distance between x and y in G. The L(2,1)-coloring number or span λ(G) of G is the smallest number k such that G has an L(2,1)-coloring with max{f(v) : v ∈ V(G)} = k, and the minimum number of colors used in such a coloring is called the radio number rn(G) of G. Griggs and Yeh conjectured that λ(G) ≤ Δ² for any simple graph with maximum degree Δ ≥ 2. In this article, we consider some special graphs, such as the n-sunlet graph and the pencil graph families, and derive upper bounds on λ(G) and rn(G).
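
    A brute-force span computation for very small instances, assuming networkx; the sunlet constructor is a hypothetical helper (a cycle with one pendant vertex per cycle vertex), and the exhaustive search grows far too quickly for anything but toy graphs:

      import itertools
      import networkx as nx

      def sunlet(n):
          """n-sunlet: the cycle C_n with a pendant vertex on each cycle vertex."""
          G = nx.cycle_graph(n)
          G.add_edges_from((i, n + i) for i in range(n))
          return G

      def span(G):
          """Smallest k admitting an L(2,1)-coloring with labels in {0, ..., k}."""
          dist = dict(nx.all_pairs_shortest_path_length(G))
          nodes = list(G.nodes())
          k = 0
          while True:
              for f in itertools.product(range(k + 1), repeat=len(nodes)):
                  if all((abs(f[i] - f[j]) >= 2 if dist[u][v] == 1
                          else abs(f[i] - f[j]) >= 1)
                         for (i, u), (j, v) in itertools.combinations(enumerate(nodes), 2)
                         if dist[u][v] <= 2):
                      return k
              k += 1

      print(span(sunlet(3)))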

  6. Obtaining electrostatically bound CdS-SiO2 aggregates from electrophoretic concentrates of CdS nanoparticles

    NASA Astrophysics Data System (ADS)

    Bulavchenko, A. I.; Sap'yanik, A. A.; Demidova, M. G.; Rakhmanova, M. I.; Popovetskii, P. S.

    2015-05-01

    Nonaqueous electrophoresis reveals that the electrokinetic potential of CdS nanoparticles increases slightly (from 85 to 120 mV) as the concentration (0-5 × 10⁻³ M) of sodium bis(2-ethylhexyl) sulfosuccinate (AOT) in n-decane increases, while negatively charged SiO2 particles acquire a positive charge (switching from -75 up to +135 mV). The energies of interparticle interactions in the CdS-CdS and CdS-SiO2 systems are calculated from these parameters and literature values of the Hamaker constants according to the Derjaguin-Landau-Verwey-Overbeek (DLVO) theory. It is concluded that the presence of a minimum (2.5 k_BT) in the interaction energy curves of the CdS-SiO2 system indicates the formation of CdS-SiO2 aggregates electrostatically bound by heterocoagulation at low concentrations of AOT. The luminescent properties of the obtained ultrafine CdS-SiO2 powders depend on the CdS content.
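
    A sketch of the kind of DLVO curve computation described, using the nonretarded sphere-sphere van der Waals term and the Hogg-Healy-Fuerstenau constant-potential electrostatic term; every numerical value below (radii, Hamaker constant, Debye length, surface potentials, solvent permittivity) is a placeholder, not a value from the paper:

      import numpy as np

      kT = 1.380649e-23 * 298.0        # thermal energy, J
      eps = 2.0 * 8.8541878128e-12     # permittivity of a nonpolar solvent (placeholder)
      R1, R2 = 5e-9, 20e-9             # particle radii, m (placeholders)
      A = 1e-20                        # Hamaker constant, J (placeholder)
      kappa = 1.0 / 50e-9              # inverse Debye length, 1/m (placeholder)
      psi1, psi2 = 0.10, -0.075        # surface potentials, V (placeholders)

      h = np.linspace(0.5e-9, 100e-9, 400)  # surface-to-surface separation
      Reff = R1 * R2 / (R1 + R2)

      V_vdw = -A * Reff / (6.0 * h)
      V_edl = (np.pi * eps * Reff * (psi1 ** 2 + psi2 ** 2)
               * (2 * psi1 * psi2 / (psi1 ** 2 + psi2 ** 2)
                  * np.log((1 + np.exp(-kappa * h)) / (1 - np.exp(-kappa * h)))
                  + np.log(1 - np.exp(-2 * kappa * h))))

      V = (V_vdw + V_edl) / kT         # total interaction energy in units of kT
      print("minimum %.2f kT at h = %.1f nm" % (V.min(), h[np.argmin(V)] * 1e9))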

  7. Entropic uncertainty from effective anticommutators

    NASA Astrophysics Data System (ADS)

    Kaniewski, Jedrzej; Tomamichel, Marco; Wehner, Stephanie

    2014-07-01

    We investigate entropic uncertainty relations for two or more binary measurements, for example, spin-1/2 or polarization measurements. We argue that the effective anticommutators of these measurements, i.e., the anticommutators evaluated on the state prior to measuring, are an expedient measure of measurement incompatibility. Based on the knowledge of pairwise effective anticommutators we derive a class of entropic uncertainty relations in terms of conditional Rényi entropies. Our uncertainty relations are formulated in terms of effective measures of incompatibility, which can be certified in a device-independent fashion. Consequently, we discuss potential applications of our findings to device-independent quantum cryptography. Moreover, to investigate the tightness of our analysis we consider the simplest (and very well studied) scenario of two measurements on a qubit. We find that our results outperform the celebrated bound due to Maassen and Uffink [Phys. Rev. Lett. 60, 1103 (1988), 10.1103/PhysRevLett.60.1103] and provide an analytical expression for the minimum uncertainty which also outperforms some recent bounds based on majorization.
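
    A quick numerical check of the Maassen-Uffink bound H(X) + H(Z) ≥ -2 log₂ c for two qubit measurements, where c is the largest overlap between eigenvectors of the two bases; the state below is an arbitrary example:

      import numpy as np

      def shannon(p):
          p = p[p > 1e-12]
          return -np.sum(p * np.log2(p))

      # Pauli Z and X eigenbases on a qubit (columns are basis vectors).
      z_basis = np.eye(2)
      x_basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

      psi = np.array([np.cos(0.3), np.sin(0.3) * np.exp(0.7j)])  # arbitrary pure state
      pZ = np.abs(z_basis.conj().T @ psi) ** 2
      pX = np.abs(x_basis.conj().T @ psi) ** 2

      c = np.max(np.abs(x_basis.conj().T @ z_basis))  # max overlap = 1/sqrt(2)
      print("H(X) + H(Z) = %.3f >= %.3f" % (shannon(pX) + shannon(pZ), -2 * np.log2(c)))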

  8. Penetration and screening of perpendicularly launched electromagnetic waves through bounded supercritical plasma confined in multicusp magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dey, Indranuj; Bhattacharjee, Sudeep

    2011-02-15

    The question of electromagnetic wave penetration and screening by a bounded supercritical (ω_p > ω, with ω_p and ω being the electron-plasma and wave frequencies, respectively) plasma confined in a minimum-B multicusp field, for waves launched in the k ⊥ B_0 mode, is addressed through experiments and numerical simulations. The scale lengths of radial plasma nonuniformity |n_e/(∂n_e/∂r)| and magnetostatic field (B_0) inhomogeneity |B_0/(∂B_0/∂r)| are much smaller than the free-space (λ_0) and guided (λ_g) wavelengths. Contrary to predictions of plane-wave dispersion theory and the Clemow-Mullaly-Allis (CMA) diagram, for a bounded plasma a finite propagation occurs through the central plasma regions where α_p² = ω_p²/ω² ≥ 1 and β_c² = ω_ce²/ω² << 1 (~10⁻⁴), with ω_ce being the electron cyclotron frequency. Wave screening, as predicted by the plane-wave model, does not remain valid due to phase mixing and superposition of reflected waves from the conducting boundary, leading to the formation of electromagnetic standing-wave modes. The waves are found to satisfy a modified upper hybrid resonance (UHR) relation in the minimum-B field and are damped at the local electron cyclotron resonance (ECR) location.

  9. Information-theoretic measures of hydrogen-like ions in weakly coupled Debye plasmas

    NASA Astrophysics Data System (ADS)

    Zan, Li Rong; Jiao, Li Guang; Ma, Jia; Ho, Yew Kam

    2017-12-01

    Recent developments in information theory provide researchers an alternative and useful tool to quantitatively investigate the variation of the electronic structure when atoms interact with an external environment. In this work, we make systematic studies of the information-theoretic measures for hydrogen-like ions immersed in weakly coupled plasmas modeled by the Debye-Hückel potential. Shannon entropy, Fisher information, and Fisher-Shannon complexity in both position and momentum spaces are quantified with high accuracy for the hydrogen atom in a large number of stationary states. The plasma screening effect on embedded atoms can significantly affect the electronic density distributions in both conjugate spaces, and it is quantified by the variation of the information quantities. It is shown that the composite quantities (the Shannon entropy sum and the Fisher information product in combined spaces, and the Fisher-Shannon complexity in each individual space) give a more comprehensive description of the atomic structure information than single ones. The nodes of the wave functions play a significant role in the changes of the composite information quantities caused by plasmas. With continuously increasing screening strength, all composite quantities in circular states increase monotonically, while in higher-lying excited states where nodal structures exist, they first decrease to a minimum and then increase rapidly before the bound state approaches the continuum limit. The minimum represents the greatest reduction of the uncertainty properties of the atom in plasmas. The lower bounds for the uncertainty product of the system based on composite information quantities are discussed. Our work presents a comprehensive survey of information-theoretic measures for simple atoms embedded in Debye model plasmas.
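
    A numerical sketch of one such measure in the unscreened limit: the position-space Shannon entropy of the hydrogen 1s state, which should reproduce the analytic value 3 + ln π ≈ 4.1447 (atomic units); the Debye-Hückel case would replace the analytic wavefunction with a numerically solved screened one:

      import numpy as np

      # Hydrogen 1s density in atomic units: rho(r) = exp(-2r)/pi.
      r = np.linspace(1e-6, 40.0, 200000)
      rho = np.exp(-2.0 * r) / np.pi

      # S_r = -integral of rho*ln(rho) d^3r, with the radial measure 4*pi*r^2 dr.
      S_r = np.trapz(-rho * np.log(rho) * 4.0 * np.pi * r ** 2, r)
      print(S_r, 3.0 + np.log(np.pi))  # both ~ 4.14473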

  10. Outcomes of high-dose levofloxacin therapy remain bound to the levofloxacin minimum inhibitory concentration in complicated urinary tract infections.

    PubMed

    Armstrong, Eliana S; Mikulca, Janelle A; Cloutier, Daniel J; Bliss, Caleb A; Steenbergen, Judith N

    2016-11-25

    Fluoroquinolones are a guideline-recommended therapy for complicated urinary tract infections, including pyelonephritis. Elevated drug concentrations of fluoroquinolones in the urine and therapy with high-dose levofloxacin are believed to overcome resistance and effectively treat infections caused by resistant bacteria. The ASPECT-cUTI phase 3 clinical trial (ClinicalTrials.gov, NCT01345929 and NCT01345955, both registered April 28, 2011) provided an opportunity to test this hypothesis by examining the clinical and microbiological outcomes of high-dose levofloxacin treatment by levofloxacin minimum inhibitory concentration. Patients were randomly assigned 1:1 to ceftolozane/tazobactam (1.5 g intravenous every 8 h) or levofloxacin (750 mg intravenous once daily) for 7 days of therapy. The ASPECT-cUTI study provided data on 370 patients with at least one isolate of Enterobacteriaceae at baseline who were treated with levofloxacin. Outcomes were assessed at the test-of-cure (5-9 days after treatment) and late follow-up (21-42 days after treatment) visits in the microbiologically evaluable population (N = 327). Test-of-cure clinical cure rates above 90% were observed at minimum inhibitory concentrations ≤4 μg/mL. Microbiological eradication rates were consistently >90% at levofloxacin minimum inhibitory concentrations ≤0.06 μg/mL. Lack of eradication of causative pathogens at the test-of-cure visit increased the likelihood of relapse by the late follow-up visit. Results from this study do not support levofloxacin therapy for complicated urinary tract infections caused by organisms with levofloxacin minimum inhibitory concentrations ≥4 μg/mL. ClinicalTrials.gov, NCT01345929 and NCT01345955.

  11. A statistical method for estimating rates of soil development and ages of geologic deposits: A design for soil-chronosequence studies

    USGS Publications Warehouse

    Switzer, P.; Harden, J.W.; Mark, R.K.

    1988-01-01

    A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized either by a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in the age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. In the second method, "absolute" older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets are repeatedly drawn from the distributional specification; calibration parameters are reestimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age-estimate variability is described and applied to the examples. © 1988 International Association for Mathematical Geology.
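
    A compact sketch of the Monte Carlo step for calibration-curve variability, assuming a linear calibration on a log-age scale; the dated-surface values below are invented placeholders, not data from the study:

      import numpy as np

      rng = np.random.default_rng(3)

      # Calibration surfaces: (most probable age, younger bound, older bound) in kyr,
      # and (mean soil-development index, replicate standard error).
      ages = [(8.0, 4.0, 15.0), (40.0, 25.0, 60.0), (130.0, 90.0, 180.0)]
      soil = [(0.8, 0.10), (1.6, 0.15), (2.4, 0.20)]

      slopes = []
      for _ in range(2000):
          # Triangular draws for age, Gaussian draws for the soil property.
          t = np.log([rng.triangular(lo, mode, hi) for mode, lo, hi in ages])
          y = [rng.normal(mu, se) for mu, se in soil]
          slopes.append(np.polyfit(t, y, 1)[0])

      print("slope %.3f +/- %.3f" % (np.mean(slopes), np.std(slopes)))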

  12. Investigations of interference between electromagnetic transponders and wireless MOSFET dosimeters: A phantom study

    PubMed Central

    Su, Zhong; Zhang, Lisha; Ramakrishnan, V.; Hagan, Michael; Anscher, Mitchell

    2011-01-01

    Purpose: To evaluate both the Calypso System’s (Calypso Medical Technologies, Inc., Seattle, WA) localization accuracy in the presence of wireless metal–oxide–semiconductor field-effect transistor (MOSFET) dosimeters of the dose verification system (DVS, Sicel Technologies, Inc., Morrisville, NC) and the dosimeters’ reading accuracy in the presence of wireless electromagnetic transponders inside a phantom. Methods: A custom-made, solid-water phantom was fabricated with space for transponders and dosimeters. Two inserts were machined with positioning grooves precisely matching the dimensions of the transponders and dosimeters and were arranged in orthogonal and parallel orientations, respectively. To test the transponder localization accuracy with/without the presence of dosimeters (hypothesis 1), multivariate analyses were performed on transponder-derived localization data with and without dosimeters at each preset distance to detect statistically significant localization differences between the control and test sets. To test dosimeter dose-reading accuracy with/without the presence of transponders (hypothesis 2), an approach of alternating the transponder presence in seven identical fraction dose (100 cGy) deliveries and measurements was implemented. Two-way analysis of variance was performed to examine statistically significant dose-reading differences between the two groups and the different fractions. A relative-dose analysis method was also used to evaluate transponder impact on dose-reading accuracy after the dose-fading effect was removed by a second-order polynomial fit. Results: Multivariate analysis indicated that hypothesis 1 was false; there was a statistically significant difference between the localization data from the control and test sets. However, the upper and lower bounds of the 95% confidence intervals of the localized positional differences between the control and test sets were less than 0.1 mm, which was significantly smaller than the minimum clinical localization resolution of 0.5 mm. For hypothesis 2, analysis of variance indicated that there was no statistically significant difference between the dosimeter readings with and without the presence of transponders. Both orthogonal and parallel configurations had differences of polynomial-fit dose to measured dose values within 1.75%. Conclusions: The phantom study indicated that the Calypso System’s localization accuracy was not clinically affected by the presence of DVS wireless MOSFET dosimeters, and the dosimeter-measured doses were not affected by the presence of transponders. Thus, the same patients could be implanted with both transponders and dosimeters to benefit from the improved accuracy of radiotherapy treatments offered by conjunctional use of the two systems. PMID:21776780

  13. Nature of Fluctuations on Directional Discontinuities Inside a Solar Ejection: Wind and IMP 8 Observations

    NASA Technical Reports Server (NTRS)

    Vasquez, Bernard J.; Farrugia, Charles J.; Markovskii, Sergei A.; Hollweg, Joseph V.; Richardson, Ian G.; Ogilvie, Keith W.; Lepping, Ronald P.; Lin, Robert P.; Larson, Davin; White, Nicholas E. (Technical Monitor)

    2001-01-01

    A solar ejection passed the Wind spacecraft between December 23 and 26, 1996. On closer examination, we find a sequence of ejecta material, as identified by abnormally low proton temperatures, separated by plasmas with typical solar wind temperatures at 1 AU. Large and abrupt changes in field and plasma properties occurred near the separation boundaries of these regions. At the one boundary we examine here, a series of directional discontinuities was observed. We argue that Alfvenic fluctuations in the immediate vicinity of these discontinuities distort minimum variance normals, introducing uncertainty into the identification of the discontinuities as either rotational or tangential. Carrying out a series of tests on plasma and field data including minimum variance, velocity and magnetic field correlations, and jump conditions, we conclude that the discontinuities are tangential. Furthermore, we find waves superposed on these tangential discontinuities (TDs). The presence of discontinuities allows the existence of both surface waves and ducted body waves. Both probably form in the solar atmosphere where many transverse nonuniformities exist and where theoretically they have been expected. We add to prior speculation that waves on discontinuities may in fact be a common occurrence. In the solar wind, these waves can attain large amplitudes and low frequencies. We argue that such waves can generate dynamical changes at TDs through advection or forced reconnection. The dynamics might so extensively alter the internal structure that the discontinuity would no longer be identified as tangential. Such processes could help explain why the occurrence frequency of TDs observed throughout the solar wind falls off with increasing heliocentric distance. The presence of waves may also alter the nature of the interactions of TDs with the Earth's bow shock in so-called hot flow anomalies.
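
    A sketch of the minimum variance step referred to here: eigen-decompose the covariance matrix of the measured magnetic field and take the eigenvector with the smallest variance as the estimated discontinuity normal; the field samples below are synthetic:

      import numpy as np

      rng = np.random.default_rng(5)

      # Synthetic field: rotation in the x-y plane (true normal along z) plus noise.
      t = np.linspace(0.0, 1.0, 300)
      B = np.c_[5 * np.cos(2 * t), 5 * np.sin(2 * t),
                0.5 + 0.2 * rng.normal(size=t.size)]

      # Magnetic variance matrix M_ij = <B_i B_j> - <B_i><B_j>.
      M = np.cov(B.T)
      w, V = np.linalg.eigh(M)  # eigenvalues in ascending order

      n_hat = V[:, 0]           # minimum variance direction = normal estimate
      print("eigenvalues", np.round(w, 3), "normal", np.round(n_hat, 3))
      # A small minimum-to-intermediate eigenvalue ratio means a well-determined
      # normal; Alfvenic fluctuations near the discontinuity degrade that ratio.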

  14. Analytical Dimensional Reduction of a Fuel Optimal Powered Descent Subproblem

    NASA Technical Reports Server (NTRS)

    Rea, Jeremy R.; Bishop, Robert H.

    2010-01-01

    Current renewed interest in exploration of the moon, Mars, and other planetary objects is driving technology development in many fields of space system design. In particular, there is a desire to land both robotic and human missions on the moon and elsewhere. The landing guidance system must be able to deliver the vehicle to a desired soft landing while meeting several constraints necessary for the safety of the vehicle. Due to performance limitations of current launch vehicles, it is desired to minimize the amount of fuel used. In addition, the landing site may change in real-time in order to avoid previously undetected hazards which become apparent during the landing maneuver. This complicated maneuver can be broken into simpler subproblems that bound the full problem. One such subproblem is to find a minimum-fuel landing solution that meets constraints on the initial state, final state, and bounded thrust acceleration magnitude. With the assumptions of constant gravity and negligible atmosphere, the form of the optimal steering law is known, and the equations of motion can be integrated analytically, resulting in a system of five equations in five unknowns. It is shown that this system of equations can be reduced analytically to two equations in two unknowns. With an additional assumption of constant thrust acceleration magnitude, this system can be reduced further to one equation in one unknown. It is shown that these unknowns can be bounded analytically. An algorithm is developed to quickly and reliably solve the resulting one-dimensional bounded search, and it is used as a real-time guidance applied to a lunar landing test case.
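
    The final one-equation solve maps naturally onto a safeguarded bracketing routine; a sketch using scipy's Brent method on a stand-in equation (the actual function comes from the analytically integrated optimality conditions, and the analytic bounds supply the bracket):

      import numpy as np
      from scipy.optimize import brentq

      def residual(tgo):
          """Stand-in for the single remaining optimality equation g(t_go) = 0."""
          return tgo ** 3 - 4.0 * tgo - np.exp(-tgo) + 1.0

      # Analytic bounds bracket the root, so Brent's method converges quickly and
      # reliably -- the property a real-time guidance loop needs.
      lo, hi = 0.1, 10.0
      assert residual(lo) * residual(hi) < 0
      print(brentq(residual, lo, hi))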

  15. Dimensionality and noise in energy selective x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alvarez, Robert E.

    Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves the same, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10³. With the soft tissue component, it is 2.7 × 10⁴. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three-dimension processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two-dimension processing for adipose tissue is a factor of two, and with the contrast agent as the third material the increase is also a factor of two for both components with either two or three dimensions. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems.
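
    A linearized sketch of the dimensionality effect: with sensitivity matrix A and measurement noise covariance C, the CRLB for the basis-set coefficients is (Aᵀ C⁻¹ A)⁻¹, and adding a third basis function can only increase (or leave unchanged) each diagonal entry; the numbers are synthetic, not the paper's attenuation data:

      import numpy as np

      rng = np.random.default_rng(2)

      # Sensitivity of 5 spectral measurements to 3 basis-material coefficients.
      A3 = np.abs(rng.normal(1.0, 0.3, size=(5, 3)))  # synthetic sensitivities
      A2 = A3[:, :2]                                  # drop the third basis function
      C = np.diag(rng.uniform(0.5, 1.5, size=5))      # measurement noise covariance

      def crlb(A, C):
          """CRLB for unbiased estimates: inverse of the Fisher matrix A^T C^-1 A."""
          return np.linalg.inv(A.T @ np.linalg.solve(C, A))

      v2 = np.diag(crlb(A2, C))
      v3 = np.diag(crlb(A3, C))[:2]
      print("variance ratio, 3 vs 2 basis functions:", np.round(v3 / v2, 2))  # >= 1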

  16. Post-magmatic tectonic deformation of the outer Izu-Bonin-Mariana forearc system: initial results of IODP Expedition 352

    NASA Astrophysics Data System (ADS)

    Kurz, Walter; Ferré, Eric C.; Robertson, Alastair; Avery, Aaron; Christeson, Gail L.; Morgan, Sally; Kutterorf, Steffen; Sager, William W.; Carvallo, Claire; Shervais, John; IODP Expedition 352 Scientific Party

    2015-04-01

    IODP Expedition 352 was designed to drill through the entire volcanic sequence of the Bonin forearc. Four sites were drilled, two on the outer fore arc and two on the upper trench slope. Site survey seismic data, combined with borehole data, indicate that tectonic deformation in the outer IBM fore arc is mainly post-magmatic. Post-magmatic extension resulted in the formation of asymmetric sedimentary basins such as the half-grabens at sites 352-U1439 and 352-U1442 located on the upper trench slope. Along their eastern margins these basins are bounded by west-dipping normal faults. Sedimentation was mainly syn-tectonic. The lowermost sequence of the sedimentary units was tilted eastward by ~20°. These tilted bedding planes were subsequently covered by sub-horizontally deposited sedimentary beds. Based on biostratigraphic constraints, the minimum age of the oldest sediments is ~35 Ma; the timing of the sedimentary unconformities lies between ~27 and 32 Ma. At sites 352-U1440 and 352-U1441, located on the outer forearc, post-magmatic deformation resulted mainly in strike-slip faults possibly bounding the sedimentary basins. The sedimentary units within these basins were not significantly affected by post-sedimentary tectonic tilting. Biostratigraphic ages indicate that the minimum age of the basement-cover contact lies between ~29.5 and 32 Ma. Overall, the post-magmatic tectonic structures observed during Expedition 352 reveal a multiphase tectonic evolution of the outer IBM fore arc. At sites 352-U1439 and 352-U1442, shear with dominant reverse to oblique-reverse displacement was localized along distinct subhorizontal cataclastic shear zones as well as steeply dipping slickensides and shear fractures. These structures, forming within a contractional tectonic regime, were either re-activated as or cross-cut by normal faults as well as strike-slip faults. Extension was also accommodated by steeply dipping to subvertical mineralized veins and extensional fractures. Faults observed at sites 352-U1440 and 352-U1441 show mainly strike-slip displacement. The sediments overlying the igneous basement, of maximum Late Eocene to Recent age, document ash and aeolian input, together with mass wasting of the fault-bounded sediment ponds.

  17. Local properties and global structure of McVittie spacetimes with non-flat Friedmann-Lemaître-Robertson-Walker backgrounds

    NASA Astrophysics Data System (ADS)

    Nolan, Brien C.

    2017-11-01

    McVittie spacetimes embed the vacuum Schwarzschild(-(anti) de Sitter) spacetime in an isotropic, Friedmann-Lemaître-Robertson-Walker (FLRW) background universe. The global structure of such spacetimes is well understood when the FLRW background is spatially flat. In this paper, we study the global structure of McVittie spacetimes with spatially non-flat FLRW backgrounds. We derive some basic results on the metric, curvature and matter content of these spacetimes and provide a representation of the metric that makes the study of their global properties possible. In the closed case, we find that at each instant of time, the spacetime is confined to a region bounded by a (positive) minimum and a maximum area radius, and is bounded either to the future or to the past by a scalar curvature singularity. This allowed region only exists when the background scale factor is above a certain minimum, and so is bounded away from the Big Bang singularity, as in the flat case. In the open case, the situation is different, and we focus mainly on this case. In K < 0 McVittie spacetimes, radial null geodesics originate in finite affine time in the past at a boundary formed by the union of the Big Bang singularity of the FLRW background and a hypersurface (of varying causal character) which is non-singular in the sense of scalar curvature. Furthermore, in the case of eternally expanding open universes with Λ ≥ 0, we prove that black holes are ubiquitous: ingoing radial null geodesics extend in finite affine time to a hypersurface that forms the boundary of the region from which photons can escape to future null infinity. We determine the structure of the conformal diagrams that can arise in the open case. Finally, we revisit the black hole interpretation of McVittie spacetimes in the spatially flat case, and show that this interpretation holds also in the case of a vanishing cosmological constant, contrary to a previous claim of ours.

  18. Percolation

    NASA Astrophysics Data System (ADS)

    Dávila, Alán; Escudero, Christian; López, Jorge, Dr.

    2004-10-01

    Several methods have been developed to study phase transitions in nuclear fragmentation; the one used in this research is percolation. This method allows us to fit the resulting data to heavy-ion collision experiments. Energy is put into a system such as an atomic nucleus or a molecule, and the particles move away from each other until their links are broken, although some particles remain linked. The fragment distribution is found to follow a power law, so we are witnessing a critical phenomenon. In our model the particles are represented as occupied sites in a cubical array. Each particle has a bond to each of its 6 neighbors, and each bond is active if the two particles are linked or inactive if they are not. When two or more particles are linked, a fragment is formed. The probability for a specific link to be broken cannot be calculated, so the probability for a bond to be active is used as the parameter when fitting the data. For a given probability p, several arrays are generated and the fragments are counted. The fragment distribution is then fit to a power law, and the probability that produces the best fit is the critical probability indicating a phase transition. The best fit is found by seeking the fragment distribution that minimizes chi-squared when compared to a power law. As additional evidence of criticality, the entropy and the normalized variance of the mass are also calculated for each probability.
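
    A compact bond-percolation sketch on a cubic lattice, using union-find to assemble the fragments; the lattice size is illustrative, and the bond probability is set near the known simple-cubic bond threshold (~0.2488):

      import numpy as np

      rng = np.random.default_rng(4)

      def fragment_sizes(L, p):
          """Bond percolation on an L^3 lattice; returns the fragment sizes."""
          parent = np.arange(L ** 3)

          def find(i):
              while parent[i] != i:
                  parent[i] = parent[parent[i]]  # path halving
                  i = parent[i]
              return i

          idx = lambda x, y, z: (x * L + y) * L + z
          for x in range(L):
              for y in range(L):
                  for z in range(L):
                      for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                          if x + dx < L and y + dy < L and z + dz < L \
                                  and rng.random() < p:
                              a = find(idx(x, y, z))
                              b = find(idx(x + dx, y + dy, z + dz))
                              parent[a] = b  # active bond: merge the two fragments
          roots = np.array([find(i) for i in range(L ** 3)])
          return np.bincount(np.unique(roots, return_inverse=True)[1])

      sizes = fragment_sizes(20, 0.2488)
      print("fragments:", sizes.size, "largest:", sizes.max())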

  19. Correlation of serum biomarkers (TSA & LSA) and epithelial dysplasia in early diagnosis of oral precancer and oral cancer.

    PubMed

    Sawhney, Hemant; Kumar, C Anand

    Oral cancer is currently one of the most frequent causes of cancer-related deaths and is usually preceded by oral precancerous lesions and conditions. Altered glycosylation of glycoconjugates such as sialic acid and fucose is among the important molecular changes that accompany malignant transformation. The purpose of our study was to evaluate the usefulness of serum Total Sialic Acid (TSA) and serum Lipid-Bound Sialic Acid (LSA) as markers of oral precancerous lesions and to correlate them histopathologically with grades of epithelial dysplasia. Blood samples were collected from 50 patients with oral precancer (leukoplakia and OSMF), 25 patients with untreated oral cancer, and 25 healthy subjects. Serum sialic acid (total and lipid-bound) levels were measured spectrophotometrically. Tissue samples from all the patients were evaluated for dysplasia. Serum levels of total and lipid-bound sialic acid were significantly elevated in patients with oral precancer and cancer when compared with healthy subjects. An analysis of variance documented a progressive rise in serum levels of sialic acid with the degree of dysplastic change in oral precancer patients. We observed a positive correlation between serum levels of the markers and the extent of malignant disease (TNM clinical staging) as well as histopathological grade. The results suggest that serum levels of TSA and LSA increase progressively with grade of dysplasia in the precancer groups and the cancer group when compared with healthy controls. These glycoconjugates, especially LSA, have clinical utility in indicating a premalignant change.

  20. Statistical evaluation of a project to estimate fish trajectories through the intakes of Kaplan hydropower turbines

    NASA Astrophysics Data System (ADS)

    Sutton, Virginia Kay

    This paper examines statistical issues associated with estimating the paths of juvenile salmon through the intakes of Kaplan turbines. Passive sensors (hydrophones) detecting signals from ultrasonic transmitters implanted in individual fish released into the preturbine region were used to obtain the information needed to estimate fish paths through the intake. The aim and location of the sensors determine the spatial region in which the transmitters can be detected, and formulas relating this region to sensor aiming directions are derived. Cramer-Rao lower bounds for the variance of estimators of fish location are used to optimize the placement of each sensor. Finally, a statistical methodology is developed for analyzing angular data collected from optimally placed sensors.
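
    A generic sketch of the placement criterion described: compute the Fisher information of a two-dimensional position estimate from noisy range measurements and score a sensor layout by the trace of the resulting Cramer-Rao bound; the geometry and noise level are hypothetical:

      import numpy as np

      def crlb_trace(sensors, target, sigma=0.1):
          """Trace of the position CRLB for range-only Gaussian measurements."""
          diff = target - sensors                                  # one row per sensor
          u = diff / np.linalg.norm(diff, axis=1, keepdims=True)   # unit bearings
          F = (u.T @ u) / sigma ** 2                               # Fisher information
          return np.trace(np.linalg.inv(F))

      target = np.array([0.0, 0.0])
      spread = np.array([[-1.0, -1.0], [1.0, -1.0], [0.0, 1.5]])   # well spread out
      aligned = np.array([[-0.1, -5.0], [0.0, -5.0], [0.1, -5.0]]) # nearly one bearing

      print("spread  CRLB trace: %.4f" % crlb_trace(spread, target))
      print("aligned CRLB trace: %.4f" % crlb_trace(aligned, target))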
