Sample records for statistical parameters including

  1. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process-model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ in that (1) CSS/PCC can be more awkward because sensitivity and interdependence are considered separately and (2) the identifiability statistic requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
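
    A minimal numpy sketch of how CSS and PCC can be computed from a weighted Jacobian of model sensitivities (illustrative only; the synthetic Jacobian, weights, and parameter values below are assumptions, not data from the paper):

    ```python
    import numpy as np

    # Synthetic example: n observations, p parameters (made-up values).
    rng = np.random.default_rng(0)
    n, p = 50, 4
    J = rng.normal(size=(n, p))          # sensitivities d(sim_i)/d(b_j)
    b = np.abs(rng.normal(1.0, 0.2, p))  # parameter values, used for scaling
    w = np.ones(n)                       # observation weights

    # Composite scaled sensitivity:
    # css_j = sqrt( (1/n) * sum_i (J_ij * b_j * sqrt(w_i))^2 )
    S = J * b[None, :] * np.sqrt(w)[:, None]
    css = np.sqrt((S ** 2).mean(axis=0))

    # Parameter correlation coefficients from the inverse normal matrix:
    cov = np.linalg.inv(J.T @ (w[:, None] * J))
    d = np.sqrt(np.diag(cov))
    pcc = cov / np.outer(d, d)           # off-diagonal entries are the PCCs

    print("CSS:", np.round(css, 3))
    print("PCC:\n", np.round(pcc, 2))
    ```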

  2. Comment on “Two statistics for evaluating parameter identifiability and error reduction” by John Doherty and Randall J. Hunt

    USGS Publications Warehouse

    Hill, Mary C.

    2010-01-01

    Doherty and Hunt (2009) present important ideas for first-order second-moment sensitivity analysis, but five issues are discussed in this comment. First, considering the composite scaled sensitivity (CSS) jointly with parameter correlation coefficients (PCC) in a CSS/PCC analysis addresses the difficulties with CSS mentioned in the introduction. Second, their new parameter identifiability statistic is actually likely to do a poor job of measuring parameter identifiability in common situations. The statistic instead performs the very useful role of showing how model parameters are included in the estimated singular value decomposition (SVD) parameters. Its close relation to CSS is shown. Third, the idea from p. 125 that a suitable truncation point for SVD parameters can be identified using the prediction variance is challenged using results from Moore and Doherty (2005). Fourth, the relative error reduction statistic of Doherty and Hunt is shown to belong to an emerging set of statistics here named perturbed calculated variance statistics. Finally, the perturbed calculated variance statistics OPR and PPR mentioned on p. 121 are shown to explicitly include the parameter null-space component of uncertainty. Indeed, OPR and PPR results that account for null-space uncertainty have appeared in the literature since 2000.

  3. Seismic activity prediction using computational intelligence techniques in northern Pakistan

    NASA Astrophysics Data System (ADS)

    Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat

    2017-10-01

    An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology involves an interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed based upon past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters to be used in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters. These models include a feed-forward neural network, recurrent neural network, random forest, multilayer perceptron, radial basis neural network, and support vector machine. The performance of every prediction model is evaluated, and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions along with an accuracy of 75% and a positive predictive value of 78% in the context of northern Pakistan.
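
    A hedged sketch of the comparison workflow (train two classifiers, then apply McNemar's test to their test-set predictions); the data, feature count, and model settings are placeholders, not the study's earthquake catalog:

    ```python
    import numpy as np
    from scipy.stats import chi2
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in for six seismic-parameter features (assumption).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 6))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 400) > 0).astype(int)
    X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

    mlp = MLPClassifier(max_iter=2000, random_state=0).fit(X_tr, y_tr)
    rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    a = mlp.predict(X_te) == y_te
    b = rf.predict(X_te) == y_te

    # McNemar's test on the discordant pairs (continuity-corrected):
    n01, n10 = np.sum(a & ~b), np.sum(~a & b)
    if n01 + n10 > 0:
        stat = (abs(n01 - n10) - 1) ** 2 / (n01 + n10)
        print("chi2 =", stat, "p =", 1 - chi2.cdf(stat, df=1))
    ```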

  4. Critical discussion of evaluation parameters for inter-observer variability in target definition for radiation therapy.

    PubMed

    Fotina, I; Lütgendorf-Caucig, C; Stock, M; Pötter, R; Georg, D

    2012-02-01

    Inter-observer studies represent a valid method for the evaluation of target definition uncertainties and contouring guidelines. However, data from the literature do not yet give clear guidelines for reporting contouring variability. Thus, the purpose of this work was to compare and discuss various methods to determine variability on the basis of clinical cases and a literature review. In this study, 7 prostate and 8 lung cases were contoured on CT images by 8 experienced observers. Analysis of variability included descriptive statistics, calculation of overlap measures, and statistical measures of agreement. Cross tables with ratios and correlations were established for overlap parameters. It was shown that the minimal set of parameters to be reported should include at least one of three volume overlap measures (i.e., generalized conformity index, Jaccard coefficient, or conformation number). High correlation between these parameters and scatter of the results was observed. A combination of descriptive statistics, overlap measure, and statistical measure of agreement or reliability analysis is required to fully report the interrater variability in delineation.
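
    Two of the recommended overlap measures can be computed directly from binary contour masks; a small sketch with synthetic masks (the 64x64 grid and random contours are placeholders for real delineations):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    masks = [rng.random((64, 64)) > 0.5 for _ in range(8)]  # 8 observers

    # Jaccard coefficient for one pair of delineations:
    jaccard = (masks[0] & masks[1]).sum() / (masks[0] | masks[1]).sum()

    # Generalized conformity index: summed pairwise intersections divided
    # by summed pairwise unions over all observer pairs.
    pairs = [(i, j) for i in range(8) for j in range(i + 1, 8)]
    ci_gen = (sum((masks[i] & masks[j]).sum() for i, j in pairs)
              / sum((masks[i] | masks[j]).sum() for i, j in pairs))
    print(round(jaccard, 3), round(ci_gen, 3))
    ```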

  5. A Primer on the Statistical Modelling of Learning Curves in Health Professions Education

    ERIC Educational Resources Information Center

    Pusic, Martin V.; Boutis, Kathy; Pecaric, Martin R.; Savenkov, Oleksander; Beckstead, Jason W.; Jaber, Mohamad Y.

    2017-01-01

    Learning curves are a useful way of representing the rate of learning over time. Features include an index of baseline performance (y-intercept), the efficiency of learning over time (slope parameter) and the maximal theoretical performance achievable (upper asymptote). Each of these parameters can be statistically modelled on an individual and…
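
    A small sketch of fitting such a three-parameter learning curve (baseline, learning rate, upper asymptote) to per-trial performance; the exponential functional form and the synthetic data are assumptions for illustration:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical learner: performance over 40 practice trials.
    t = np.arange(1, 41, dtype=float)
    rng = np.random.default_rng(3)
    y = 0.9 - (0.9 - 0.4) * np.exp(-0.12 * t) + rng.normal(0, 0.03, t.size)

    # Exponential learning curve: intercept y0, rate k, asymptote ymax.
    def curve(t, y0, k, ymax):
        return ymax - (ymax - y0) * np.exp(-k * t)

    (y0, k, ymax), _ = curve_fit(curve, t, y, p0=[0.5, 0.1, 1.0])
    print(f"baseline={y0:.2f}, rate={k:.3f}, asymptote={ymax:.2f}")
    ```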

  6. Bootstrap Methods: A Very Leisurely Look.

    ERIC Educational Resources Information Center

    Hinkle, Dennis E.; Winstead, Wayland H.

    The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for generating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
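
    The same idea is easy to reproduce outside SAS; a minimal Python sketch of bootstrapping the standard error of a regression slope (synthetic data, pairs resampled with replacement):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.normal(size=100)
    y = 2.0 * x + rng.normal(size=100)

    def slope(x, y):
        return np.polyfit(x, y, 1)[0]

    boots = []
    for _ in range(2000):
        idx = rng.integers(0, x.size, x.size)  # resample with replacement
        boots.append(slope(x[idx], y[idx]))
    print("slope =", slope(x, y), "bootstrap SE =", np.std(boots, ddof=1))
    ```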

  7. The scatter of mechanical values of carbon fiber composites and its causes. [statistical values of strength

    NASA Technical Reports Server (NTRS)

    Roth, S.

    1979-01-01

    The scatter of experimental data on the parameters of structural components was investigated. Strength parameters which are determined by the resin or by the adhesion between fiber and resin were included. Emphasis was placed on the statistical characteristics of the mechanical parameters of carbon fiber composites and on the possibilities which exist to reduce this scatter. It is found that quality control tests of fiber and resin are important for such a reduction.

  8. Influence of eye biometrics and corneal micro-structure on noncontact tonometry.

    PubMed

    Jesus, Danilo A; Majewska, Małgorzata; Krzyżanowska-Berkowska, Patrycja; Iskander, D Robert

    2017-01-01

    Tonometry is widely used as the main screening tool supporting glaucoma diagnosis. Still, its accuracy could be improved if full knowledge about the variation of the corneal biomechanical properties was available. In this study, Optical Coherence Tomography (OCT) speckle statistics are used to infer the organisation of the corneal micro-structure and hence, to analyse its influence on intraocular pressure (IOP) measurements. Fifty-six subjects were recruited for this prospective study. Macro and micro-structural corneal parameters as well as subject age were considered. Macro-structural analysis included the parameters that are associated with the ocular anatomy, such as central corneal thickness (CCT), corneal radius, axial length, anterior chamber depth and white-to-white corneal diameter. Micro-structural parameters which included OCT speckle statistics were related to the internal organisation of the corneal tissue and its physiological changes during lifetime. The corneal speckle obtained from OCT was modelled with the Generalised Gamma (GG) distribution that is characterised with a scale parameter and two shape parameters. In macro-structure analysis, only CCT showed a statistically significant correlation with IOP (R2 = 0.25, p<0.001). The scale parameter and the ratio of the shape parameters of GG distribution showed statistically significant correlation with IOP (R2 = 0.19, p<0.001 and R2 = 0.17, p<0.001, respectively). For the studied group, a weak, although significant correlation was found between age and IOP (R2 = 0.053, p = 0.04). Forward stepwise regression showed that CCT and the scale parameter of the Generalised Gamma distribution can be combined in a regression model (R2 = 0.39, p<0.001) to study the role of the corneal structure on IOP. We show, for the first time, that corneal micro-structure influences the IOP measurements obtained from noncontact tonometry. OCT speckle statistics can be employed to learn about the corneal micro-structure and hence, to further calibrate the IOP measurements.
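
    A sketch of fitting the Generalised Gamma distribution to speckle amplitudes with scipy (the simulated speckle sample stands in for real OCT data; scipy's gengamma has two shape parameters and a scale, matching the model described):

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic OCT speckle amplitudes (placeholder for corneal data).
    rng = np.random.default_rng(5)
    speckle = stats.gengamma.rvs(a=2.0, c=1.5, scale=0.8,
                                 size=5000, random_state=rng)

    # Fit a Generalised Gamma: shape parameters a, c and a scale.
    a, c, loc, scale = stats.gengamma.fit(speckle, floc=0)
    print(f"shape a={a:.2f}, shape c={c:.2f}, scale={scale:.2f}, "
          f"shape ratio a/c={a / c:.2f}")
    ```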

  9. Influence of eye biometrics and corneal micro-structure on noncontact tonometry

    PubMed Central

    Majewska, Małgorzata; Krzyżanowska-Berkowska, Patrycja; Iskander, D. Robert

    2017-01-01

    Purpose Tonometry is widely used as the main screening tool supporting glaucoma diagnosis. Still, its accuracy could be improved if full knowledge about the variation of the corneal biomechanical properties was available. In this study, Optical Coherence Tomography (OCT) speckle statistics are used to infer the organisation of the corneal micro-structure and hence, to analyse its influence on intraocular pressure (IOP) measurements. Methods Fifty-six subjects were recruited for this prospective study. Macro and micro-structural corneal parameters as well as subject age were considered. Macro-structural analysis included the parameters that are associated with the ocular anatomy, such as central corneal thickness (CCT), corneal radius, axial length, anterior chamber depth and white-to-white corneal diameter. Micro-structural parameters which included OCT speckle statistics were related to the internal organisation of the corneal tissue and its physiological changes during lifetime. The corneal speckle obtained from OCT was modelled with the Generalised Gamma (GG) distribution that is characterised with a scale parameter and two shape parameters. Results In macro-structure analysis, only CCT showed a statistically significant correlation with IOP (R2 = 0.25, p<0.001). The scale parameter and the ratio of the shape parameters of GG distribution showed statistically significant correlation with IOP (R2 = 0.19, p<0.001 and R2 = 0.17, p<0.001, respectively). For the studied group, a weak, although significant correlation was found between age and IOP (R2 = 0.053, p = 0.04). Forward stepwise regression showed that CCT and the scale parameter of the Generalised Gamma distribution can be combined in a regression model (R2 = 0.39, p<0.001) to study the role of the corneal structure on IOP. Conclusions We show, for the first time, that corneal micro-structure influences the IOP measurements obtained from noncontact tonometry. OCT speckle statistics can be employed to learn about the corneal micro-structure and hence, to further calibrate the IOP measurements. PMID:28472178

  10. The Empirical Nature and Statistical Treatment of Missing Data

    ERIC Educational Resources Information Center

    Tannenbaum, Christyn E.

    2009-01-01

    Introduction. Missing data is a common problem in research and can produce severely misleading analyses, including biased estimates of statistical parameters, and erroneous conclusions. In its 1999 report, the APA Task Force on Statistical Inference encouraged authors to report complications such as missing data and discouraged the use of…

  11. Statistical Characterization of the Mechanical Parameters of Intact Rock Under Triaxial Compression: An Experimental Proof of the Jinping Marble

    NASA Astrophysics Data System (ADS)

    Jiang, Quan; Zhong, Shan; Cui, Jie; Feng, Xia-Ting; Song, Leibo

    2016-12-01

    We investigated the statistical characteristics and probability distribution of the mechanical parameters of natural rock using triaxial compression tests. Twenty cores of Jinping marble were tested at each of five levels of confining stress (5, 10, 20, 30, and 40 MPa). From these full stress-strain data, we summarized the numerical characteristics and determined the probability distribution form of several important mechanical parameters, including deformational parameters, characteristic strength, characteristic strains, and failure angle. The statistical results for the mechanical parameters of rock presented new information about the marble's probabilistic distribution characteristics. The normal and log-normal distributions were appropriate for describing random strengths of rock; the coefficients of variation of the peak strengths had no relationship to the confining stress; the only acceptable random distribution for both Young's elastic modulus and Poisson's ratio was the log-normal function; and the cohesive strength had a different probability distribution pattern than the frictional angle. The triaxial tests and statistical analysis also provided experimental evidence for deciding the minimum reliable number of experimental samples and for picking appropriate parameter distributions to use in reliability calculations for rock engineering.
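
    A sketch of the kind of distribution screening described (fit candidate distributions to strengths at one confining stress and check goodness of fit); the sample values are synthetic, and the p-values are optimistic because the parameters are fitted to the same data:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    strength = rng.lognormal(np.log(120), 0.08, size=20)  # MPa, synthetic

    for name, dist in [("normal", stats.norm), ("log-normal", stats.lognorm)]:
        params = dist.fit(strength)
        ks = stats.kstest(strength, dist.cdf, args=params)
        print(name, "KS p-value:", round(ks.pvalue, 3))

    cv = strength.std(ddof=1) / strength.mean()
    print("coefficient of variation:", round(cv, 3))
    ```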

  12. The Use of Breast Magnetic Resonance Imaging Parameters to Identify Possible Signaling Pathways of a Serum Biomarker, HE4.

    PubMed

    Durur-Karakaya, Afak; Durur-Subasi, Irmak; Karaman, Adem; Akcay, Mufide Nuran; Palabiyik, Saziye Sezin; Erdemci, Burak; Alper, Fatih; Acemoglu, Hamit

    2016-01-01

    This study aimed to investigate the relationship between breast magnetic resonance imaging (MRI) parameters; clinical features such as age, tumor diameter, N, T, and TNM stages; and serum human epididymis protein 4 (HE4) levels in patients with breast carcinoma, and to use this as a means of estimating possible signaling pathways of the biomarker HE4. Thirty-seven patients with breast cancer were evaluated by breast MRI and serum HE4 levels before therapy. Correlations between the serum level of HE4 and parameters including age, tumor diameter, T and N stages, dynamic curve type, enhancement ratio (ER), slope washin (S-WI), time to peak (TTP), and slope washout (S-WO) were investigated statistically. Human epididymis protein 4 levels in early and advanced stages of disease were also compared statistically. Breast MRI parameters showed statistically significant correlations with serum HE4 levels. Of these MRI parameters, S-WI had a higher correlation coefficient than the others. Human epididymis protein 4 levels were not statistically different between early and advanced stages of disease. The high correlation with MRI parameters related to neoangiogenesis may indicate the signaling pathway of HE4.

  13. Effect of Nocturnal Intermittent Peritoneal Dialysis on Intraocular Pressure and Anterior Segment Optical Coherence Tomography Parameters.

    PubMed

    Chong, Ka Lung; Samsudin, Amir; Keng, Tee Chau; Kamalden, Tengku Ain; Ramli, Norlina

    2017-02-01

    To evaluate the effect of nocturnal intermittent peritoneal dialysis (NIPD) on intraocular pressure (IOP) and anterior segment optical coherence tomography (ASOCT) parameters. Systemic changes associated with NIPD were also analyzed. Observational study. Nonglaucomatous patients on NIPD underwent systemic and ocular assessment including mean arterial pressure (MAP), body weight, serum osmolarity, visual acuity, IOP measurement, and ASOCT within 2 hours both before and after NIPD. The Zhongshan Angle Assessment Program (ZAAP) was used to measure ASOCT parameters including anterior chamber depth, anterior chamber width, anterior chamber area, anterior chamber volume, lens vault, angle opening distance, trabecular-iris space area, and angle recess area. T tests and Pearson correlation tests were performed with P<0.05 considered statistically significant. A total of 46 eyes from 46 patients were included in the analysis. There were statistically significant reductions in IOP (-1.8±0.6 mm Hg, P=0.003), MAP (-11.9±3.1 mm Hg, P<0.001), body weight (-0.7±2.8 kg, P<0.001), and serum osmolarity (-3.4±2.0 mOsm/L, P=0.002) after NIPD. None of the ASOCT parameters showed statistically significant changes after NIPD. There were no statistically significant correlations between the changes in IOP, MAP, body weight, and serum osmolarity (all P>0.05). NIPD results in reductions in IOP, MAP, body weight, and serum osmolarity in nonglaucomatous patients.

  14. Two statistics for evaluating parameter identifiability and error reduction

    USGS Publications Warehouse

    Doherty, John; Hunt, Randall J.

    2009-01-01

    Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
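
    A compact numpy sketch of the identifiability statistic as defined here (direction cosine between a parameter axis and its projection onto the truncated solution space); the Jacobian and truncation level are synthetic choices:

    ```python
    import numpy as np

    # Weighted sensitivity (Jacobian) matrix: n observations x p parameters.
    rng = np.random.default_rng(7)
    J = rng.normal(size=(30, 5))

    # The first k right singular vectors span the calibration solution space.
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    k = 3  # truncation point (a somewhat subjective choice, as noted above)

    # Identifiability of parameter j: norm of the projection of the unit
    # vector e_j onto the solution space = sqrt(sum_{i<k} V[j,i]^2).
    identifiability = np.sqrt((Vt[:k, :] ** 2).sum(axis=0))
    print(np.round(identifiability, 3))  # 0 = non-identifiable, 1 = fully
    ```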

  15. Generalized massive optimal data compression

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
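
    A worked toy version of score compression for Gaussian data with a parameter-dependent mean and fixed covariance (the linear model, noise level, and fiducial values are illustrative assumptions):

    ```python
    import numpy as np

    # Score compression: t = dmu^T C^{-1} (d - mu), one statistic per
    # parameter, evaluated at a fiducial point.
    rng = np.random.default_rng(8)
    N = 100
    x = np.linspace(0, 1, N)

    def mean(theta):                 # toy model: straight line
        return theta[0] + theta[1] * x

    theta_fid = np.array([1.0, 2.0])
    C = np.eye(N) * 0.1              # known noise covariance
    d = mean(theta_fid) + rng.multivariate_normal(np.zeros(N), C)

    dmu = np.stack([np.ones(N), x])  # dmu/dtheta, shape (n_params, N)
    Cinv = np.linalg.inv(C)
    t = dmu @ Cinv @ (d - mean(theta_fid))  # n compressed statistics

    # Their covariance is the Fisher matrix, so information is preserved.
    F = dmu @ Cinv @ dmu.T
    print(t, "\nFisher:\n", F)
    ```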

  16. Statistical error model for a solar electric propulsion thrust subsystem

    NASA Technical Reports Server (NTRS)

    Bantell, M. H.

    1973-01-01

    The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.

  17. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.

  18. Fitting a three-parameter lognormal distribution with applications to hydrogeochemical data from the National Uranium Resource Evaluation Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1979-10-01

    The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Shapiro-Francia tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets analyzed, including geochemical data from the National Uranium Resource Evaluation Program.

  19. Standard and goodness-of-fit parameter estimation methods for the three-parameter lognormal distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1982-01-01

    A class of goodness-of-fit estimators is found to provide a useful alternative in certain situations to the standard maximum likelihood method, which has some undesirable estimation characteristics for estimation from the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined and example data sets are analyzed.
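
    A sketch of the goodness-of-fit estimation idea shared by these two records: choose the lognormal threshold that maximizes a normality statistic of the log-transformed, threshold-shifted data. The synthetic sample and the use of scipy's Shapiro-Wilk statistic as the criterion are assumptions for illustration:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    x = np.exp(rng.normal(1.0, 0.5, 200)) + 5.0  # true threshold gamma = 5

    # Grid search: maximize Shapiro-Wilk W of log(x - gamma) over gamma.
    gammas = np.linspace(0.0, x.min() - 1e-6, 200)
    W = [stats.shapiro(np.log(x - g)).statistic for g in gammas]
    g_hat = gammas[int(np.argmax(W))]

    logx = np.log(x - g_hat)
    print(f"gamma={g_hat:.2f}, mu={logx.mean():.2f}, "
          f"sigma={logx.std(ddof=1):.2f}")
    ```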

  20. Sequential Least-Squares Using Orthogonal Transformations. [spacecraft communication/spacecraft tracking-data smoothing

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.

    1975-01-01

    Square root information estimation, starting from its beginnings in least-squares parameter estimation, is considered. Special attention is devoted to discussions of sensitivity and perturbation matrices, computed solutions and their formal statistics, consider-parameters and consider-covariances, and the effects of a priori statistics. The constant-parameter model is extended to include time-varying parameters and process noise, and the error analysis capabilities are generalized. Efficient and elegant smoothing results are obtained as easy consequences of the filter formulation. The value of the techniques is demonstrated by the navigation results that were obtained for the Mariner Venus-Mercury (Mariner 10) multiple-planetary space probe and for the Viking Mars space mission.

  1. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

    Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, and the Fisher matrix calculation is performed on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of weak-lensing three-point statistics to break this degeneracy is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
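
    A toy illustration of the Gaussianization step on a single skewed "posterior" (one-dimensional and synthetic; the actual method transforms each parameter of a multivariate posterior and then applies the Fisher formalism in the transformed space):

    ```python
    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(10)
    samples = rng.lognormal(0.0, 0.6, size=10000)  # skewed "posterior"

    z, lam = stats.boxcox(samples)  # z = (x^lam - 1)/lam, lam fit by MLE
    print("Box-Cox lambda:", round(lam, 3))
    print("skewness before/after:",
          round(stats.skew(samples), 2), round(stats.skew(z), 2))

    # Gaussian 68% interval in z, mapped back to the original parameter:
    lo, hi = z.mean() - z.std(), z.mean() + z.std()
    print("68% interval:", inv_boxcox(lo, lam), inv_boxcox(hi, lam))
    ```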

  2. On the statistical distribution in a deformed solid

    NASA Astrophysics Data System (ADS)

    Gorobei, N. N.; Luk'yanenko, A. S.

    2017-09-01

    A modification of the Gibbs distribution in a thermally insulated, mechanically deformed solid is proposed, in which the body's linear dimensions (shape parameters) are excluded from statistical averaging and included among the macroscopic parameters of state alongside the temperature. Formally, this modification reduces to corresponding additional conditions when calculating the statistical sum. The shape parameters and the temperature themselves are found from the conditions of mechanical and thermal equilibrium of a body, and their change is determined using the first law of thermodynamics. Known thermodynamic phenomena are analyzed for a simple model of a solid, i.e., an ensemble of anharmonic oscillators, within the proposed formalism to first order in the anharmonicity constant. The modified distribution is considered separately for the classical and quantum temperature regions.

  3. Coherent Doppler lidar signal covariance including wind shear and wind turbulence

    NASA Technical Reports Server (NTRS)

    Frehlich, R. G.

    1993-01-01

    The performance of coherent Doppler lidar is determined by the statistics of the coherent Doppler signal. The derivation and calculation of the covariance of the Doppler lidar signal is presented for random atmospheric wind fields with wind shear. The random component is described by a Kolmogorov turbulence spectrum. The signal parameters are clarified for a general coherent Doppler lidar system. There are two distinct physical regimes: one where the transmitted pulse determines the signal statistics and the other where the wind field dominates the signal statistics. The Doppler shift of the signal is identified in terms of the wind field and system parameters.

  4. A design methodology for nonlinear systems containing parameter uncertainty

    NASA Technical Reports Server (NTRS)

    Young, G. E.; Auslander, D. M.

    1983-01-01

    In the present design methodology for nonlinear systems containing parameter uncertainty, a generalized sensitivity analysis is incorporated which employs parameter space sampling and statistical inference. For the case of a system with j adjustable and k nonadjustable parameters, this methodology (which includes an adaptive random search strategy) is used to determine the combination of j adjustable parameter values which maximize the probability of those performance indices which simultaneously satisfy design criteria in spite of the uncertainty due to k nonadjustable parameters.

  5. The Effect of Folate and Folate Plus Zinc Supplementation on Endocrine Parameters and Sperm Characteristics in Sub-Fertile Men: A Systematic Review and Meta-Analysis.

    PubMed

    Irani, Morvarid; Amirian, Malihe; Sadeghi, Ramin; Lez, Justine Le; Latifnejad Roudsari, Robab

    2017-08-29

    To evaluate the effect of folate and folate plus zinc supplementation on endocrine parameters and sperm characteristics in sub-fertile men. We conducted a systematic review and meta-analysis. Electronic databases of Medline, Scopus, Google Scholar and Persian databases (SID, Iran medex, Magiran, Medlib, Iran doc) were searched from 1966 to December 2016 using a set of relevant keywords including "folate or folic acid AND (infertility, infertile, sterility)". All available randomized controlled trials (RCTs), conducted on samples of sub-fertile men with semen analyses, who took oral folic acid or folate plus zinc, were included. Data collected included endocrine parameters and sperm characteristics. Statistical analyses were done with Comprehensive Meta-analysis Version 2. In total, seven studies were included, six of which had sufficient data for meta-analysis. Sperm concentration was statistically higher in men supplemented with folate than with placebo (P < .001). However, folate supplementation alone did not seem to be more effective than the placebo on sperm morphology (P = .056) and motility (P = .652). Folate plus zinc supplementation did not show any statistically different effect on serum testosterone (P = .86), inhibin B (P = .84), FSH (P = .054), and sperm motility (P = .169) as compared to the placebo. Yet, folate plus zinc showed a statistically higher effect on sperm concentration (P < .001), morphology (P < .001), and serum folate level (P < .001) as compared to placebo. Folate plus zinc supplementation has a positive effect on sperm characteristics in sub-fertile men. However, these results should be interpreted with caution due to the important heterogeneity of the studies included in this meta-analysis. Further trials are still needed to confirm the current findings.

  6. Calculation of Weibull strength parameters, Batdorf flaw density constants and related statistical quantities using PC-CARES

    NASA Technical Reports Server (NTRS)

    Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.

    1990-01-01

    This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running PC-DOS/MS-DOS or IBM/MS-OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. Included in the manual is the description of the calculation of shape and scale parameters of the two-parameter Weibull distribution using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. The methods for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, as well as the techniques for calculating the Batdorf flaw-density constants are also described.
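
    A sketch of least-squares estimation of the two-parameter Weibull shape and scale, one of the methods the manual describes, using median-rank plotting positions (the strength values are made up; this is not the PC-CARES code):

    ```python
    import numpy as np

    strength = np.sort(np.array([312., 327., 345., 351., 360.,
                                 368., 374., 381., 392., 410.]))  # MPa
    n = strength.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)  # median ranks

    # Weibull CDF: F = 1 - exp(-(s/s0)^m)
    # =>  ln(-ln(1-F)) = m*ln(s) - m*ln(s0), a straight line in ln(s).
    x = np.log(strength)
    y = np.log(-np.log(1.0 - F))
    m, c = np.polyfit(x, y, 1)
    s0 = np.exp(-c / m)
    print(f"Weibull modulus m={m:.2f}, characteristic strength "
          f"s0={s0:.1f} MPa")
    ```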

  7. Machine Learning Predictions of a Multiresolution Climate Model Ensemble

    NASA Astrophysics Data System (ADS)

    Anderson, Gemma J.; Lucas, Donald D.

    2018-05-01

    Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
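
    A hedged sketch of the emulation setup (a random forest trained on parameter vectors plus a resolution indicator, then queried at high resolution); the toy response function and ensemble sizes are invented, not the climate model ensemble:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(11)
    params = rng.uniform(size=(300, 5))          # perturbed parameters
    resolution = rng.integers(0, 2, 300)         # 0 = low, 1 = high
    y = (params @ np.array([2., -1., .5, 0., .3])
         + 0.2 * resolution + rng.normal(0, 0.05, 300))

    X = np.column_stack([params, resolution])
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # Predict the high-resolution response at new parameter settings:
    new = np.column_stack([rng.uniform(size=(3, 5)), np.ones(3)])
    print(rf.predict(new))
    ```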

  8. Reporting of various methodological and statistical parameters in negative studies published in prominent Indian Medical Journals: a systematic review.

    PubMed

    Charan, J; Saxena, D

    2014-01-01

    Biased negative studies not only reflect poor research effort but also have an impact on 'patient care' as they prevent further research with similar objectives, leading to potential research areas remaining unexplored. Hence, published 'negative studies' should be methodologically strong. All parameters that may help a reader to judge the validity of results and conclusions should be reported in published negative studies. There is a paucity of data on the reporting of statistical and methodological parameters in negative studies published in Indian medical journals. The present systematic review was designed with the aim to critically evaluate negative studies published in prominent Indian medical journals for the reporting of statistical and methodological parameters. Systematic review. All negative studies published in 15 Science Citation Indexed (SCI) medical journals published from India were included in the present study. Investigators involved in the study evaluated all negative studies for the reporting of various parameters. Primary endpoints were the reporting of "power" and "confidence interval." Power was reported in 11.8% of studies. Confidence interval was reported in 15.7% of studies. The majority of parameters, such as sample size calculation (13.2%), type of sampling method (50.8%), name of statistical tests (49.1%), adjustment of multiple endpoints (1%), and post hoc power calculation (2.1%), were reported poorly. Reporting was more frequent in clinical trials than in other study designs, and in journals with an impact factor greater than 1 than in journals with an impact factor below 1. Negative studies published in prominent Indian medical journals do not report statistical and methodological parameters adequately, and this may create problems in the critical appraisal of findings reported in these journals by their readers.

  9. Parametric study of the swimming performance of a fish robot propelled by a flexible caudal fin.

    PubMed

    Low, K H; Chong, C W

    2010-12-01

    In this paper, we aim to study the swimming performance of fish robots using a statistical approach. A fish robot employing a carangiform swimming mode was used as the experimental platform for the performance study. The experiments aimed to investigate the effect of various design parameters on the thrust capability of the fish robot with a flexible caudal fin. The controllable parameters associated with the fin include frequency, amplitude of oscillation, aspect ratio, and the rigidity of the caudal fin. The significance of these parameters was determined in the first set of experiments by using a statistical approach. A more detailed parametric experimental study was then conducted with only the significant parameters. As a result, the parametric study could be completed with fewer experiments and less time spent. With the obtained experimental results, we were able to understand the relationship between the various parameters and a possible adjustment of parameters to obtain a higher thrust. The proposed statistical method for experimentation provides an objective and thorough analysis of the effects of individual or combined parameters on the swimming performance. Such an efficient experimental design helps to optimize the process and determine the factors that influence variability.

  10. The Effects of Measurement Error on Statistical Models for Analyzing Change. Final Report.

    ERIC Educational Resources Information Center

    Dunivant, Noel

    The results of six major projects are discussed including a comprehensive mathematical and statistical analysis of the problems caused by errors of measurement in linear models for assessing change. In a general matrix representation of the problem, several new analytic results are proved concerning the parameters which affect bias in…

  11. Dose-escalation designs in oncology: ADEPT and the CRM.

    PubMed

    Shu, Jianfen; O'Quigley, John

    2008-11-20

    The ADEPT software package is not a statistical method in its own right as implied by Gerke and Siedentop (Statist. Med. 2008; DOI: 10.1002/sim.3037). ADEPT implements two-parameter CRM models as described in O'Quigley et al. (Biometrics 1990; 46(1):33-48). All of the basic ideas (use of a two-parameter logistic model, use of a two-dimensional prior for the unknown slope and intercept parameters, sequential estimation and subsequent patient allocation based on minimization of some loss function, flexibility to use cohorts instead of one by one inclusion) are strictly identical. The only, and quite trivial, difference arises in the setting of the prior. O'Quigley et al. (Biometrics 1990; 46(1):33-48) used priors having an analytic expression whereas Whitehead and Brunier (Statist. Med. 1995; 14:33-48) use pseudo-data to play the role of the prior. The question of interest is whether two-parameter CRM works as well, or better, than the one-parameter CRM recommended in O'Quigley et al. (Biometrics 1990; 46(1):33-48). Gerke and Siedentop argue that it does. The published literature suggests otherwise. The conclusions of Gerke and Siedentop stem from three highly particular, and somewhat contrived, situations. Unlike one-parameter CRM (Biometrika 1996; 83:395-405; J. Statist. Plann. Inference 2006; 136:1765-1780; Biometrika 2005; 92:863-873), no statistical properties appear to have been studied for two-parameter CRM. In particular, for two-parameter CRM, the parameter estimates are inconsistent. This ought to be a source of major concern to those proposing its use. Worse still, for finite samples the behavior of estimates can be quite wild despite having incorporated the kind of dampening priors discussed by Gerke and Siedentop. An example in which we illustrate this behavior describes a single patient included at level 1 of 6 levels and experiencing a dose limiting toxicity. The subsequent recommendation is to experiment at level 6! Such problematic behavior is not common. Even so, we show that the allocation behavior of two-parameter CRM is very much less stable than that of one-parameter CRM.

  12. MODFLOW-2000, the U.S. Geological Survey modular ground-water model; user guide to the observation, sensitivity, and parameter-estimation processes and three post-processing programs

    USGS Publications Warehouse

    Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.

    2000-01-01

    This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. 
The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
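
    A compact sketch of the Gauss-Newton idea behind the Parameter-Estimation Process (minimizing a weighted least-squares objective), applied to a toy exponential model rather than a groundwater model, and without the damping or Marquardt modifications the real code uses:

    ```python
    import numpy as np

    # Objective: S(b) = sum_i w_i * (y_i - f(x_i, b))^2
    def f(x, b):
        return b[0] * np.exp(-b[1] * x)

    def jac(x, b):  # analytic Jacobian of f with respect to b
        return np.column_stack([np.exp(-b[1] * x),
                                -b[0] * x * np.exp(-b[1] * x)])

    rng = np.random.default_rng(12)
    x = np.linspace(0, 5, 40)
    y = f(x, [2.0, 0.7]) + rng.normal(0, 0.02, x.size)
    w = np.ones_like(x)

    b = np.array([1.0, 1.0])  # starting values
    for _ in range(20):
        r = y - f(x, b)       # residuals
        J = jac(x, b)
        # Normal equations: (J^T W J) db = J^T W r
        db = np.linalg.solve(J.T @ (w[:, None] * J), J.T @ (w * r))
        b = b + db
        if np.max(np.abs(db)) < 1e-10:
            break
    print("estimated parameters:", b)
    ```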

  13. Surface flaw reliability analysis of ceramic components with the SCARE finite element postprocessor program

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John P.; Nemeth, Noel N.

    1987-01-01

    The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program on statistical fast fracture reliability analysis with quadratic elements for volume distributed imperfections is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface flaw induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose, finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for poly-axial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.

  14. Statistical properties of filtered pseudorandom digital sequences formed from the sum of maximum-length sequences

    NASA Technical Reports Server (NTRS)

    Wallace, G. R.; Weathers, G. D.; Graf, E. R.

    1973-01-01

    The statistics of filtered pseudorandom digital sequences called hybrid-sum sequences, formed from the modulo-two sum of several maximum-length sequences, are analyzed. The results indicate that a relation exists between the statistics of the filtered sequence and the characteristic polynomials of the component maximum length sequences. An analysis procedure is developed for identifying a large group of sequences with good statistical properties for applications requiring the generation of analog pseudorandom noise. By use of the analysis approach, the filtering process is approximated by the convolution of the sequence with a sum of unit step functions. A parameter reflecting the overall statistical properties of filtered pseudorandom sequences is derived. This parameter is called the statistical quality factor. A computer algorithm to calculate the statistical quality factor for the filtered sequences is presented, and the results for two examples of sequence combinations are included. The analysis reveals that the statistics of the signals generated with the hybrid-sum generator are potentially superior to the statistics of signals generated with maximum-length generators. Furthermore, fewer calculations are required to evaluate the statistics of a large group of hybrid-sum generators than are required to evaluate the statistics of the same size group of approximately equivalent maximum-length sequences.
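
    A small sketch of generating a hybrid-sum sequence (modulo-two sum of two maximum-length LFSR sequences) and filtering it; the register taps and the moving-average filter are illustrative choices, not the paper's configuration:

    ```python
    import numpy as np

    # Fibonacci LFSR; taps below give maximal-length registers
    # (x^4 + x^3 + 1 and x^5 + x^3 + 1).
    def lfsr(taps, nbits, length):
        state = [1] * nbits
        out = []
        for _ in range(length):
            out.append(state[-1])
            fb = 0
            for t in taps:
                fb ^= state[t - 1]
            state = [fb] + state[:-1]
        return np.array(out)

    a = lfsr([4, 3], 4, 300)
    b = lfsr([5, 3], 5, 300)
    hybrid = a ^ b                      # modulo-two sum
    analog = 2.0 * hybrid - 1.0         # map {0,1} -> {-1,+1}

    # Simple moving-average filter; the statistics of the result depend
    # on the component polynomials, as the abstract notes.
    filt = np.convolve(analog, np.ones(8) / 8, mode="valid")
    print(filt.mean(), filt.std())
    ```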

  15. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

  16. Leads Detection Using Mixture Statistical Distribution Based CRF Algorithm from Sentinel-1 Dual Polarization SAR Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Li, Fei; Zhang, Shengkai; Zhu, Tingting

    2017-04-01

    Synthetic Aperture Radar (SAR) is important for polar remote sensing since it provides continuous observations in all weather, day and night. SAR can be used for extracting surface roughness information characterized by the variance of dielectric properties and different polarization channels, which makes it possible to observe different ice types and surface structure for deformation analysis. In November 2016, the 33rd cruise of the Chinese National Antarctic Research Expedition (CHINARE) set sail into the Antarctic sea ice zone. An accurate spatial distribution of leads in the sea ice zone is essential for routine planning of ship navigation. In this study, the semantic relationship between leads and sea ice categories has been described by a Conditional Random Fields (CRF) model, and leads characteristics have been modeled by statistical distributions in SAR imagery. In the proposed algorithm, a mixture statistical distribution based CRF is developed by considering the contextual information and the statistical characteristics of sea ice to improve leads detection in Sentinel-1A dual polarization SAR imagery. The unary and pairwise potentials in the CRF model are constructed by integrating the posterior probability estimated from the statistical distributions. For mixture statistical distribution parameter estimation, the Method of Logarithmic Cumulants (MoLC) is exploited for single-distribution parameter estimation, and the iterative Expectation Maximization (EM) algorithm is used to calculate the parameters of the mixture statistical distribution based CRF model. In the posterior probability inference, a graph-cut energy minimization method is adopted for the initial leads detection. Post-processing procedures, including an aspect ratio constraint and spatial smoothing, are utilized to improve the visual result. The proposed method is validated on Sentinel-1A SAR C-band Extra Wide Swath (EW) Ground Range Detected (GRD) imagery with a pixel spacing of 40 meters near the Prydz Bay area, East Antarctica. The main contributions are as follows: 1) a mixture statistical distribution based CRF algorithm has been developed for leads detection from Sentinel-1A dual polarization images; 2) an assessment of the proposed mixture statistical distribution based CRF method against a single distribution based CRF algorithm has been presented; 3) preferable parameter sets, including the statistical distributions, the aspect ratio threshold, and the spatial smoothing window size, have been provided. In the future, the proposed algorithm will be developed for operational processing of the Sentinel series data sets due to its low computational cost and high accuracy in leads detection.

  17. Development of a statistical model for cervical cancer cell death with irreversible electroporation in vitro.

    PubMed

    Yang, Yongji; Moser, Michael A J; Zhang, Edwin; Zhang, Wenjun; Zhang, Bing

    2018-01-01

    The aim of this study was to develop a statistical model for cell death by irreversible electroporation (IRE) and to show that the statistical model is more accurate than the electric field threshold model in the literature using cervical cancer cells in vitro. The HeLa cell line was cultured and treated with different IRE protocols in order to obtain data for modeling the statistical relationship between cell death and pulse-setting parameters. In total, 340 in vitro experiments were performed with a commercial IRE pulse system, including a pulse generator and an electric cuvette. The trypan blue staining technique was used to evaluate cell death after 4 hours of incubation following IRE treatment. The Peleg-Fermi model was used in the study to build the statistical relationship using the cell viability data obtained from the in vitro experiments. A finite element model of IRE for the electric field distribution was also built. Comparison of ablation zones between the statistical model and the electric threshold model (drawn from the finite element model) was used to show the accuracy of the proposed statistical model in the description of the ablation zone and its applicability to different pulse-setting parameters. The statistical models describing the relationships between HeLa cell death and pulse length and the number of pulses, respectively, were built. The values of the curve fitting parameters were obtained using the Peleg-Fermi model for the treatment of cervical cancer with IRE. The difference in the ablation zone between the statistical model and the electric threshold model was also illustrated to show the accuracy of the proposed statistical model in the representation of the ablation zone in IRE. This study concluded that: (1) the proposed statistical model accurately described the ablation zone of IRE with cervical cancer cells, and was more accurate compared with the electric field model; (2) the proposed statistical model was able to estimate the value of the electric field threshold for the computer simulation of IRE in the treatment of cervical cancer; and (3) the proposed statistical model was able to express the change in ablation zone with the change in pulse-setting parameters.
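
    A sketch of fitting a Peleg-Fermi survival curve to viability data (the field values, viabilities, and parameter names below are made-up placeholders, not the paper's measurements):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Peleg-Fermi model: survival S(E) = 1 / (1 + exp((E - Ec) / A)),
    # with critical field Ec and kinetic constant A (for a fixed pulse
    # number; in the full model Ec and A depend on the number of pulses).
    def peleg_fermi(E, Ec, A):
        return 1.0 / (1.0 + np.exp((E - Ec) / A))

    E = np.array([500., 750., 1000., 1250., 1500., 1750., 2000.])  # V/cm
    S = np.array([0.97, 0.90, 0.70, 0.42, 0.20, 0.08, 0.03])       # viability

    (Ec, A), _ = curve_fit(peleg_fermi, E, S, p0=[1200., 200.])
    print(f"critical field Ec={Ec:.0f} V/cm, kinetic constant A={A:.0f} V/cm")
    ```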

  18. Global Sensitivity Analysis of Environmental Systems via Multiple Indices based on Statistical Moments of Model Outputs

    NASA Astrophysics Data System (ADS)

    Guadagnini, A.; Riva, M.; Dell'Oca, A.

    2017-12-01

    We propose to base the sensitivity analysis of the uncertain parameters of environmental models on a set of indices built from the main (statistical) moments, i.e., mean, variance, skewness and kurtosis, of the probability density function (pdf) of a target model output. This enables us to perform Global Sensitivity Analysis (GSA) of a model in terms of multiple statistical moments and yields a quantification of the impact of model parameters on the features driving the shape of the pdf of the model output. Our GSA approach can be coupled with the construction of a reduced-complexity model that approximates the full model response at a reduced computational cost. We demonstrate our approach through a variety of test cases. These include a commonly used analytical benchmark, a simplified model representing pumping in a coastal aquifer, a laboratory-scale tracer experiment, and the migration of fracturing fluid through a naturally fractured reservoir (source) to an overlying formation (target). Our strategy allows discriminating the relative importance of model parameters to the four statistical moments considered. We also provide an appraisal of the error associated with evaluating our sensitivity metrics when the original system model is replaced by the selected surrogate model. Our results suggest that one might need to construct a surrogate model of increasing accuracy depending on the statistical moment considered in the GSA. The methodological framework we propose can assist the development of analysis techniques targeted to model calibration, design of experiments, uncertainty quantification and risk assessment.
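
    One plausible estimator for a mean-based index of this kind is sketched below: the average absolute shift of the output mean when conditioning on one parameter, normalized by the unconditional mean, estimated by binning Monte Carlo samples. The exact index definitions in the work above may differ; the toy model and all names are illustrative. Analogous indices follow by swapping the mean for variance, skewness or kurtosis.

```python
import numpy as np

def moment_sensitivity(x, y, n_bins=20):
    """Binning estimator of a mean-based sensitivity index: the mean
    absolute difference between E[Y] and the conditional means
    E[Y | x in bin], normalized by |E[Y]|."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    which = np.clip(np.searchsorted(edges, x, side="right") - 1,
                    0, n_bins - 1)
    cond_means = np.array([y[which == b].mean() for b in range(n_bins)])
    return np.mean(np.abs(y.mean() - cond_means)) / abs(y.mean())

# Toy model: Y depends strongly on x1 and weakly on x2.
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 10_000), rng.uniform(-1, 1, 10_000)
y = 5 * x1 + 0.5 * x2**2
print(moment_sensitivity(x1, y), moment_sensitivity(x2, y))
```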

  19. Directional statistics-based reflectance model for isotropic bidirectional reflectance distribution functions.

    PubMed

    Nishino, Ko; Lombardi, Stephen

    2011-01-01

    We introduce a novel parametric bidirectional reflectance distribution function (BRDF) model that can accurately encode a wide variety of real-world isotropic BRDFs with a small number of parameters. The key observation we make is that a BRDF may be viewed as a statistical distribution on a unit hemisphere. We derive a novel directional statistics distribution, which we refer to as the hemispherical exponential power distribution, and model real-world isotropic BRDFs as mixtures of it. We derive a canonical probabilistic method for estimating the parameters, including the number of components, of this novel directional statistics BRDF model. We show that the model captures the full spectrum of real-world isotropic BRDFs with high accuracy, but a small footprint. We also demonstrate the advantages of the novel BRDF model by showing its use for reflection component separation and for exploring the space of isotropic BRDFs.

  20. 42 CFR 493.1256 - Standard: Control procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... for having control procedures that monitor the accuracy and precision of the complete analytic process..., include two control materials, including one that is capable of detecting errors in the extraction process... control materials having previously determined statistical parameters. (e) For reagent, media, and supply...

  1. A computer program (MODFLOWP) for estimating parameters of a transient, three-dimensional ground-water flow model using nonlinear regression

    USGS Publications Warehouse

    Hill, Mary Catherine

    1992-01-01

    This report documents a new version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW) which, with the new Parameter-Estimation Package that also is documented in this report, can be used to estimate parameters by nonlinear regression. The new version of MODFLOW is called MODFLOWP (pronounced MOD-FLOW*P), and functions nearly identically to MODFLOW when the Parameter-Estimation Package is not used. Parameters are estimated by minimizing a weighted least-squares objective function by the modified Gauss-Newton method or by a conjugate-direction method. Parameters used to calculate the following MODFLOW model inputs can be estimated: Transmissivity and storage coefficient of confined layers; hydraulic conductivity and specific yield of unconfined layers; vertical leakance; vertical anisotropy (used to calculate vertical leakance); horizontal anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge rates; maximum evapotranspiration; pumpage rates; and the hydraulic head at constant-head boundaries. Any spatial variation in parameters can be defined by the user. Data used to estimate parameters can include existing independent estimates of parameter values, observed hydraulic heads or temporal changes in hydraulic heads, and observed gains and losses along head-dependent boundaries (such as streams). Model output includes statistics for analyzing the parameter estimates and the model; these statistics can be used to quantify the reliability of the resulting model, to suggest changes in model construction, and to compare results of models constructed in different ways.
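
    The Gauss-Newton step named above can be sketched generically. This is a toy illustration of minimizing a weighted least-squares objective, not the MODFLOWP implementation; the residual function, starting values and weights in the usage comment are hypothetical, and the damping of the "modified" Gauss-Newton method is omitted.

```python
import numpy as np

def gauss_newton_wls(residuals, p0, weights, n_iter=20, fd_step=1e-6):
    """Gauss-Newton minimization of a weighted least-squares objective
    S(p) = sum_i w_i * r_i(p)**2, where residuals(p) returns
    observed-minus-simulated values."""
    p = np.asarray(p0, dtype=float)
    w = np.asarray(weights, dtype=float)
    for _ in range(n_iter):
        r = residuals(p)
        # Forward-difference Jacobian, J[i, j] = d r_i / d p_j.
        J = np.column_stack([(residuals(p + fd_step * e) - r) / fd_step
                             for e in np.eye(p.size)])
        # Weighted normal equations: (J^T W J) dp = -J^T W r.
        JW = J.T * w
        p = p + np.linalg.solve(JW @ J, -JW @ r)
    return p

# Hypothetical usage with a two-parameter head model `simulate`:
#   p_hat = gauss_newton_wls(lambda p: heads_obs - simulate(p),
#                            p0=[1.0, 1e-4], weights=1.0 / head_var)
```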

  2. The community pharmacist's role in reducing CVD risk factors in Lebanon: a cross-sectional longitudinal study.

    PubMed

    Fahs, Iqbal; Hallit, Souheil; Rahal, Mohamad; Malaeb, Diana

    2018-06-13

    Objective: To assess the role of the pharmacist in modifying CVD risk factors among Lebanese adults in urban and rural areas.
    Materials (Subjects) and Methods:
    In a prospective survey, 865 out of 1000 participants aged ≥ 45 years, previously interviewed, agreed to be followed up at 1- and 2-year time points. Parameters including blood pressure, lipid profile, blood glucose, average number of risk factors, and atherosclerotic cardiovascular disease (ASCVD) risk were assessed at the beginning of the study and again after 1 and 2 years.
    Results:
    After patient education, the mean body mass index (BMI) and systolic blood pressure (SBP) decreased statistically significantly during both follow-ups. The lipid profile also improved significantly during both follow-ups, and ASCVD risk decreased significantly during the first follow-up to around 9%. Further statistically significant improvement in ASCVD risk, to around 8%, occurred during the second follow-up. The monitoring parameters likewise showed statistically significant improvements.
    Conclusion:
    This study showed that a plan that includes pharmacists, who regularly monitor and follow-up patients, could improve CVD prevention through reduction of risk factors.
    ©2018 The Author(s). Published by S. Karger AG, Basel.

  3. Effects of foot reflexology on anxiety and physiological parameters in patients undergoing coronary artery bypass graft surgery: A clinical trial.

    PubMed

    Abbaszadeh, Yaser; Allahbakhshian, Atefeh; Seyyedrasooli, Alehe; Sarbakhsh, Parvin; Goljarian, Sakineh; Safaei, Naser

    2018-05-01

    This study aimed to investigate the effect of foot reflexology on anxiety and physiological parameters in patients after CABG surgery. This was a single-blind, three-arm, parallel-group, randomized controlled trial with three groups of 40 male patients undergoing CABG. Participants were placed in three groups, named intervention, placebo, and control. Physiological parameters were measured including systolic and diastolic blood pressure, mean arterial pressure, heart rate, respiratory rate, percutaneous oxygen saturation, and anxiety of participants. Results showed a statistically significant difference between intervention and control groups in terms of the level of anxiety (p < 0.05). Also, results showed a statistically significant effect on all physiological parameters except heart rate (p < 0.05). This study indicated that foot reflexology may be used by nurses as an adjunct to standard ICU care to reduce anxiety and stabilize physiological parameters such as systolic, diastolic, mean arterial pressure, and heart rate. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Corneal endothelial cell density and morphology in Phramongkutklao Hospital

    PubMed Central

    Sopapornamorn, Narumon; Lekskul, Manapon; Panichkul, Suthee

    2008-01-01

    Objective To describe corneal endothelial density and morphology in patients of Phramongkutklao Hospital and the relationship between endothelial cell parameters and other factors. Methods Four hundred and four eyes of 202 volunteers were included. Noncontact specular microscopy was performed after taking a history and testing visual acuity, intraocular pressure measurement, Schirmer’s test and a routine eye examination by slit lamp microscope. The studied parameters included mean endothelial cell density (MCD), coefficient of variation (CV), and percentage of hexagonality. Results The mean age of the volunteers was 45.73 years, with a range of 20 to 80 years. Their mean (SD) MCD, mean (SD) percentage of CV and mean (SD) percentage of hexagonality were 2623.49 (325) cells/mm2, 39.43 (8.23)% and 51.50 (10.99)%, respectively. MCD decreased statistically significantly with age (p < 0.01). There was a significant difference in the percentage of CV between genders. There were no statistically significant associations between the parameters and the other factors. Conclusion The normative data of the corneal endothelium of Thai eyes indicated that MCD decreased significantly with age. Previous studies have reported no difference in MCD, percentage of CV, or percentage of hexagonality between genders. Nevertheless, a significantly different percentage of CV between genders was found in this study. PMID:19668398

  5. Phase-Angle Dependence of Determinations of Diameter, Albedo, and Taxonomy: A Case Study of NEO 3691 Bede

    NASA Technical Reports Server (NTRS)

    Wooden, Diane H.; Lederer, Susan M.; Jehin, Emmanuel; Howell, Ellen S.; Fernandez, Yan; Harker, David E.; Ryan, Erin; Lovell, Amy; Woodward, Charles E.; Benner, Lance A.

    2015-01-01

    Parameters important for NEO risk assessment and mitigation include Near-Earth Object diameter and taxonomic classification, which translates to surface composition. Diameters of NEOs are derived from the thermal fluxes measured by WISE, NEOWISE, Spitzer Warm Mission and ground-based telescopes including the IRTF and UKIRT. Diameter and its coupled parameters Albedo and IR beaming parameter (a proxy for thermal inertia and/or surface roughness) are dependent upon the phase angle, which is the Sun-target-observer angle. Orbit geometries of NEOs, however, typically provide for observations at phase angles greater than 20 degrees. At higher phase angles, the observed thermal emission is sampling both the day and night sides of the NEO. We compare thermal models for NEOs that exclude (NEATM) and include (NESTM) night-side emission. We present a case study of NEO 3691 Bede, which is a higher albedo object, X (Ec) or Cgh taxonomy, to highlight the range of H magnitudes for this object (depending on the albedo and phase function slope parameter G), and to examine at different phase angles the taxonomy and thermal model fits for this NEO. Observations of 3691 Bede include our observations with IRTF+SpeX and with the 10 micrometer UKIRT+Michelle instrument, as well as WISE and Spitzer Warm mission data. By examining 3691 Bede as a case study, we highlight the interplay between the derivation of basic physical parameters and observing geometry, and we discuss the uncertainties in H magnitude, taxonomy assignment amongst the X-class (P, M, E), and diameter determinations. Systematic dependencies in the derivation of basic characterization parameters of H-magnitude, diameter, albedo and taxonomy with observing geometry are important to understand. These basic characterization parameters affect the statistical assessments of the NEO population, which in turn, affects the assignment of statistically-assessed basic parameters to discovered but yet-to-be-fully-characterized NEOs.

  6. TRACKING FRESHWATER DIVERSIONS AND ALGAL BLOOMS THAT IMPACT THE NEW ORLEANS STANDARD METROPOLITAN STATISTICAL AREA -

    EPA Science Inventory

    This project will monitor selected water quality parameters, including water temperature, turbidity, salinity, and algal blooms to assess the impacts of freshwater diversions for several selected areas within the New Orleans metropolitan area. The specific areas of study include ...

  7. The use of heart rate turbulence and heart rate variability in the assessment of autonomic regulation and circadian rhythm in patients with systemic lupus erythematosus without apparent heart disease.

    PubMed

    Poliwczak, A R; Waszczykowska, E; Dziankowska-Bartkowiak, B; Koziróg, M; Dworniak, K

    2018-03-01

    Background Systemic lupus erythematosus is a progressive autoimmune disease. There are reports suggesting that patients even without overt signs of cardiovascular complications have impaired autonomic function. The aim of this study was to assess autonomic function using heart rate turbulence and heart rate variability parameters obtained from 24-hour ECG Holter monitoring. Methods Twenty-six women with systemic lupus erythematosus and 30 healthy women were included. Twenty-four-hour ambulatory ECG-Holter monitoring was performed under home conditions. The basic parameters of heart rate turbulence and heart rate variability were calculated. The analyses were performed for the entire day and separately for daytime activity and night-time rest. Results There were no statistically significant differences in the basic anthropometric parameters. The mean duration of disease was 11.52 ± 7.42 years. There was a statistically significantly higher turbulence onset (To) value in patients with systemic lupus erythematosus, median To = -0.17% (minimum -1.47, maximum 3.0) versus To = -1.36% (minimum -4.53, maximum -0.41), P < 0.001. There were no such differences for turbulence slope (Ts). In the 24-hour analysis almost all heart rate variability parameters were significantly lower in the systemic lupus erythematosus group than in the healthy controls, including SDANN, r-MSSD and p50NN. For the morning activity and night resting periods, the results were similar to those for the whole day. In the control group, higher values during morning activity were noted for parameters that characterise sympathetic activity, especially SDANN, and values were significantly lower for parasympathetic parameters, including r-MSSD and p50NN, which prevailed at night. There were no statistically significant changes in p50NN or the low- and very-low-frequency components for the systemic lupus erythematosus patients. There was a positive correlation between disease duration and SDNN, R = 0.417; P < 0.05 and SDANN, R = 0.464; P < 0.05, and a negative correlation between the low/high frequency ratio and r-MSSD, R = -0.454; P < 0.05; p50NN, R = -0.435; P < 0.05 and high frequency, R = -0.478; P < 0.05. In contrast, there was no statistically significant correlation between heart rate turbulence and the other variables evaluated, including disease duration and the type of autoantibodies. Our study confirms the presence of autonomic disorders with respect to both heart rate variability and heart rate turbulence parameters and the presence of diurnal disturbances of the sympathetic-parasympathetic balance. Further studies are required.

  8. The Impact of Model Misspecification on Parameter Estimation and Item-Fit Assessment in Log-Linear Diagnostic Classification Models

    ERIC Educational Resources Information Center

    Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver

    2012-01-01

    Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…

  9. PROUCL 4.0 SOFTWARE

    EPA Science Inventory

    Statistical inference, including both estimation and hypotheses testing approaches, is routinely used to: estimate environmental parameters of interest, such as exposure point concentration (EPC) terms, not-to-exceed values, and background level threshold values (BTVs) for contam...

  10. Surface defect detection in tiling Industries using digital image processing methods: analysis and evaluation.

    PubMed

    Karimi, Mohammad H; Asemani, Davud

    2014-05-01

    Ceramic and tile industries must include a grading stage to quantify the quality of their products. In practice, human inspection is often used for grading. An automatic grading system is essential to enhance quality control and the marketing of the products. Since there generally exist six different types of defects, originating from various stages of tile manufacturing lines and having distinct textures and morphologies, many image processing techniques have been proposed for defect detection. In this paper, a survey is made of the pattern recognition and image processing algorithms that have been used to detect surface defects. Each method appears to be limited to detecting a subgroup of defects. The detection techniques may be divided into three main groups: statistical pattern recognition, feature vector extraction and texture/image classification. Methods such as the wavelet transform, filtering, morphology and the contourlet transform are more effective for pre-processing tasks. Others, including statistical methods, neural networks and model-based algorithms, can be applied to extract the surface defects. Statistical methods are often appropriate for identifying large defects such as Spots, whereas techniques such as wavelet processing provide an acceptable response for the detection of small defects such as Pinholes. A thorough survey is made in this paper of the existing algorithms in each subgroup. The evaluation parameters, including supervised and unsupervised parameters, are also discussed. Using various performance parameters, different defect detection algorithms are compared and evaluated. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Computer programs for computing particle-size statistics of fluvial sediments

    USGS Publications Warehouse

    Stevens, H.H.; Hubbell, D.W.

    1986-01-01

    Two versions of computer programs for inputting data and computing particle-size statistics of fluvial sediments are presented. The FORTRAN 77 versions are for use on the Prime computer, and the BASIC versions are for use on microcomputers. The size-statistics program computes Inman, Trask, and Folk statistical parameters from phi values and sizes determined for 10 specified percent-finer values from the input size and percent-finer data. The program also determines the percentages of gravel, sand, silt, and clay, and the Meyer-Peter effective diameter. Documentation and listings for both versions of the programs are included. (Author's abstract)
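
    The graphic statistics named above are simple percentile formulas. A sketch of the Inman (1952) and Folk & Ward (1957) parameters is shown below, assuming the cumulative curve is given as phi values against monotonically increasing percent-finer values; the Trask parameters (computed on millimeter quartiles) are omitted.

```python
import numpy as np

def graphic_size_statistics(phi, percent_finer):
    """Inman and Folk & Ward graphic statistics from a cumulative
    grain-size curve; `percent_finer` must be strictly increasing."""
    # Interpolate phi at the standard percentiles.
    pct = np.array([5, 16, 25, 50, 75, 84, 95], dtype=float)
    p5, p16, p25, p50, p75, p84, p95 = np.interp(pct, percent_finer, phi)
    inman = {
        "median_phi": p50,
        "mean_phi": (p16 + p84) / 2,
        "sorting_phi": (p84 - p16) / 2,
    }
    folk = {
        "graphic_mean": (p16 + p50 + p84) / 3,
        "inclusive_sd": (p84 - p16) / 4 + (p95 - p5) / 6.6,
        "inclusive_skewness":
            (p16 + p84 - 2 * p50) / (2 * (p84 - p16))
            + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)),
    }
    return inman, folk

# Usage with a hypothetical sieve curve:
phi = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
pf = np.array([2.0, 10.0, 35.0, 70.0, 92.0, 99.0])
print(graphic_size_statistics(phi, pf))
```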

  12. Characterizing Uncertainty and Variability in PBPK Models ...

    EPA Pesticide Factsheets

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological impro

  13. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size needed for statistics to closely estimate particular population parameters has long been an issue. Although the sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the statistics are close to the parameters of a particular population. To date, a guideline that uses a p-value of less than 0.05 has been widely used as inferential evidence. Therefore, this study audited results analyzed from various subsamples and statistical analyses and compared them with the parameters of three different populations. Eight types of statistical analysis, each with eight subsamples, were analyzed. The results showed that the statistics were consistent and close to the parameters when the sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters of a medium-sized population.

  14. Characterizing uncertainty and variability in physiologically based pharmacokinetic models: state of the science and needs for research and implementation.

    PubMed

    Barton, Hugh A; Chiu, Weihsueh A; Setzer, R Woodrow; Andersen, Melvin E; Bailer, A John; Bois, Frédéric Y; Dewoskin, Robert S; Hays, Sean; Johanson, Gunnar; Jones, Nancy; Loizou, George; Macphail, Robert C; Portier, Christopher J; Spendiff, Martin; Tan, Yu-Mei

    2007-10-01

    Physiologically based pharmacokinetic (PBPK) models are used in mode-of-action based risk and safety assessments to estimate internal dosimetry in animals and humans. When used in risk assessment, these models can provide a basis for extrapolating between species, doses, and exposure routes or for justifying nondefault values for uncertainty factors. Characterization of uncertainty and variability is increasingly recognized as important for risk assessment; this represents a continuing challenge for both PBPK modelers and users. Current practices show significant progress in specifying deterministic biological models and nondeterministic (often statistical) models, estimating parameters using diverse data sets from multiple sources, using them to make predictions, and characterizing uncertainty and variability of model parameters and predictions. The International Workshop on Uncertainty and Variability in PBPK Models, held 31 Oct-2 Nov 2006, identified the state-of-the-science, needed changes in practice and implementation, and research priorities. For the short term, these include (1) multidisciplinary teams to integrate deterministic and nondeterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through improved documentation of model structure(s), parameter values, sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include (1) theoretical and practical methodological improvements for nondeterministic/statistical modeling; (2) better methods for evaluating alternative model structures; (3) peer-reviewed databases of parameters and covariates, and their distributions; (4) expanded coverage of PBPK models across chemicals with different properties; and (5) training and reference materials, such as case studies, bibliographies/glossaries, model repositories, and enhanced software. The multidisciplinary dialogue initiated by this Workshop will foster the collaboration, research, data collection, and training necessary to make characterizing uncertainty and variability a standard practice in PBPK modeling and risk assessment.

  15. Efficient Moment-Based Inference of Admixture Parameters and Sources of Gene Flow

    PubMed Central

    Levin, Alex; Reich, David; Patterson, Nick; Berger, Bonnie

    2013-01-01

    The recent explosion in available genetic data has led to significant advances in understanding the demographic histories of and relationships among human populations. It is still a challenge, however, to infer reliable parameter values for complicated models involving many populations. Here, we present MixMapper, an efficient, interactive method for constructing phylogenetic trees including admixture events using single nucleotide polymorphism (SNP) genotype data. MixMapper implements a novel two-phase approach to admixture inference using moment statistics, first building an unadmixed scaffold tree and then adding admixed populations by solving systems of equations that express allele frequency divergences in terms of mixture parameters. Importantly, all features of the model, including topology, sources of gene flow, branch lengths, and mixture proportions, are optimized automatically from the data and include estimates of statistical uncertainty. MixMapper also uses a new method to express branch lengths in easily interpretable drift units. We apply MixMapper to recently published data for Human Genome Diversity Cell Line Panel individuals genotyped on a SNP array designed especially for use in population genetics studies, obtaining confident results for 30 populations, 20 of them admixed. Notably, we confirm a signal of ancient admixture in European populations—including previously undetected admixture in Sardinians and Basques—involving a proportion of 20–40% ancient northern Eurasian ancestry. PMID:23709261

  16. Hybrid Gibbs Sampling and MCMC for CMB Analysis at Small Angular Scales

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Eriksen, H. K.; Wandelt, B. D.; Gorski, K. M.; Huey, G.; O'Dwyer, I. J.; Dickinson, C.; Banday, A. J.; Lawrence, C. R.

    2008-01-01

    A) Gibbs sampling has now been validated as an efficient, statistically exact, and practically useful method for "low-L" analysis (as demonstrated on WMAP temperature and polarization data). B) We are extending Gibbs sampling to directly propagate uncertainties in both foreground and instrument models to the total uncertainty in cosmological parameters for the entire range of angular scales relevant for Planck. C) This is made possible by the inclusion of foreground model parameters in Gibbs sampling and by hybrid MCMC and Gibbs sampling for the low signal-to-noise (high-L) regime. D) Future items to be included in the Bayesian framework include: 1) integration with a hybrid likelihood (or posterior) code for cosmological parameters; 2) other uncertainties in instrumental systematics (i.e., beam uncertainties, noise estimation, calibration errors, and others).

  17. Statistical modeling of space shuttle environmental data

    NASA Technical Reports Server (NTRS)

    Tubbs, J. D.; Brewer, D. W.

    1983-01-01

    Statistical models which use a class of bivariate gamma distributions are examined. Topics discussed include: (1) the ratio of positively correlated gamma variates; (2) a method to determine whether unequal shape parameters are necessary in a bivariate gamma distribution; (3) differential equations for the modal location of a family of bivariate gamma distributions; and (4) analysis of some wind gust data using the analytical results developed for the modeling application.

  18. Markov chain Monte Carlo estimation of quantum states

    NASA Astrophysics Data System (ADS)

    Diguglielmo, James; Messenger, Chris; Fiurášek, Jaromír; Hage, Boris; Samblowski, Aiko; Schmidt, Tabea; Schnabel, Roman

    2009-03-01

    We apply a Bayesian data analysis scheme known as the Markov chain Monte Carlo to the tomographic reconstruction of quantum states. This method yields a vector, known as the Markov chain, which contains the full statistical information concerning all reconstruction parameters, including their statistical correlations, with no a priori assumptions as to the form of the distribution from which it has been obtained. From this vector we can derive, e.g., the marginal distributions and uncertainties of all model parameters, and also of other quantities such as the purity of the reconstructed state. We demonstrate the utility of this scheme by reconstructing the Wigner function of phase-diffused squeezed states. These states possess non-Gaussian statistics and therefore represent a nontrivial case of tomographic reconstruction. We compare our results to those obtained through pure maximum-likelihood and Fisher information approaches.
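
    A minimal random-walk Metropolis sketch of the chain-based inference described above is given below, with a toy two-parameter Gaussian posterior standing in for the tomographic likelihood. Marginal means, uncertainties and parameter correlations are then read directly off the chain, as the abstract describes.

```python
import numpy as np

def metropolis(log_post, p0, step, n_samples=50_000, seed=1):
    """Random-walk Metropolis sampler.  The returned chain carries the
    full statistical information on the parameters, so marginals and
    correlations are simple chain averages."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p0, dtype=float)
    lp = log_post(p)
    chain = np.empty((n_samples, p.size))
    for i in range(n_samples):
        q = p + step * rng.standard_normal(p.size)
        lq = log_post(q)
        if np.log(rng.uniform()) < lq - lp:   # accept/reject step
            p, lp = q, lq
        chain[i] = p
    return chain

# Toy posterior: two correlated Gaussian parameters.
log_post = lambda p: -0.5 * (p[0]**2 + (p[1] - 0.8 * p[0])**2)
chain = metropolis(log_post, p0=[0.0, 0.0], step=0.7)
print(chain.mean(axis=0), chain.std(axis=0))   # marginal means, sigmas
print(np.corrcoef(chain.T)[0, 1])              # parameter correlation
```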

  19. Correlation between the different therapeutic properties of Chinese medicinal herbs and delayed luminescence.

    PubMed

    Pang, Jingxiang; Fu, Jialei; Yang, Meina; Zhao, Xiaolei; van Wijk, Eduard; Wang, Mei; Fan, Hua; Han, Jinxiang

    2016-03-01

    In the practice and principle of Chinese medicine, herbal materials are classified according to their therapeutic properties. 'Cold' and 'heat' are the most important classes of Chinese medicinal herbs according to the theory of traditional Chinese medicine (TCM). In this work, delayed luminescence (DL) was measured for different samples of Chinese medicinal herbs using a sensitive photomultiplier detection system. A comparison of DL parameters, including mean intensity and statistical entropy, was undertaken to discriminate between the 'cold' and 'heat' properties of Chinese medicinal herbs. The results suggest that there are significant differences in mean intensity and statistical entropy, and that this method combined with statistical analysis may provide novel parameters for the characterization of Chinese medicinal herbs in relation to their energetic properties. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Statistical Mechanics of Node-perturbation Learning with Noisy Baseline

    NASA Astrophysics Data System (ADS)

    Hara, Kazuyuki; Katahira, Kentaro; Okada, Masato

    2017-02-01

    Node-perturbation learning is a type of statistical gradient descent algorithm that can be applied to problems where the objective function is not explicitly formulated, including reinforcement learning. It estimates the gradient of an objective function by using the change in the objective function in response to a perturbation. The value of the objective function for an unperturbed output is called a baseline. Cho et al. proposed node-perturbation learning with a noisy baseline. In this paper, we report on building the statistical mechanics of Cho's model and on deriving coupled differential equations of order parameters that depict the learning dynamics. We also show how to derive the generalization error by solving the differential equations of order parameters. On the basis of the results, we show that Cho's results also apply in more general cases and examine some general performance properties of Cho's model.
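
    The gradient estimate described above can be written down for a single linear unit. In this hedged sketch, the squared error of a perturbed output is compared against a (possibly noisy) baseline and projected back onto the perturbation; the toy task and all names are illustrative, not the paper's formulation.

```python
import numpy as np

def node_perturbation_step(w, x, target, rng,
                           eta=0.05, sigma=0.1, baseline_noise=0.0):
    """One node-perturbation update for a linear unit y = w @ x.  The
    gradient of the squared error is estimated from the change in the
    objective under an output perturbation xi; `baseline_noise` mimics
    a noisy baseline (set 0 for a noiseless one)."""
    y = w @ x
    xi = sigma * rng.standard_normal()
    err_pert = (y + xi - target) ** 2
    err_base = (y - target) ** 2 + baseline_noise * rng.standard_normal()
    # (perturbed - baseline) objective, projected onto the perturbation.
    grad_est = (err_pert - err_base) / sigma**2 * xi
    return w - eta * grad_est * x

# Toy usage: learn a linear map without ever forming its gradient.
rng = np.random.default_rng(0)
w, w_true = np.zeros(3), np.array([1.0, -2.0, 0.5])
for _ in range(5000):
    x = rng.standard_normal(3)
    w = node_perturbation_step(w, x, target=w_true @ x, rng=rng)
print(w)   # roughly approaches w_true
```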

  1. The landscape of W± and Z bosons produced in pp collisions up to LHC energies

    NASA Astrophysics Data System (ADS)

    Basso, Eduardo; Bourrely, Claude; Pasechnik, Roman; Soffer, Jacques

    2017-10-01

    We consider a selection of recent experimental results on electroweak W±, Z gauge boson production in pp collisions at BNL RHIC and CERN LHC energies in comparison with the predictions of perturbative QCD calculations based on different sets of NLO parton distribution functions, including the statistical PDF model known from fits to the DIS data. We show that the current statistical PDF parametrization (fitted to the DIS data only) underestimates the LHC data on W±, Z gauge boson production cross sections at NLO by about 20%. This suggests that there is a need to refit the parameters of the statistical PDF including the latest LHC data.

  2. Voice analysis before and after vocal rehabilitation in patients following open surgery on vocal cords.

    PubMed

    Bunijevac, Mila; Petrović-Lazić, Mirjana; Jovanović-Simić, Nadica; Vuković, Mile

    2016-02-01

    The major role of the larynx in speech, respiration and swallowing makes carcinomas of this region, and their treatment, very influential on patients' quality of life. The aim of this study was to assess the importance of voice therapy in patients after open surgery on the vocal cords. This study included 21 male patients and a control group of 19 subjects. The vowel (A) was recorded and analyzed for each examinee. All the patients were recorded twice: first when they contacted the clinic, and second after a three-month vocal therapy, which was held twice per week on an outpatient basis. The voice analysis was carried out in the Ear, Nose and Throat (ENT) Clinic, Clinical Hospital Center "Zvezdara" in Belgrade. The values of the acoustic parameters of the patients who underwent open surgery on the vocal cords before vocal rehabilitation and those of the control group subjects differed significantly in all specified parameters. These results suggest that the voice of the patients was damaged before vocal rehabilitation. The results of the acoustic parameters of the vowel (A) before and after vocal rehabilitation of the patients with open surgery on the vocal cords were statistically significantly different. Among the parameters, for Jitter (%) and Shimmer (%) the observed difference was highly statistically significant (p < 0.01). The voice turbulence index and the noise/harmonic ratio were also notably improved, and the observed difference was statistically significant (p < 0.05). The analysis of the tremor intensity index showed no significant improvement, and the observed difference was not statistically significant (p > 0.05). In conclusion, there was a significant improvement in the acoustic parameters of the vowel (A) in the study subjects three months following vocal therapy. Only one out of five representative parameters showed no significant improvement.

  3. Choosing an Appropriate Modelling Framework for Analysing Multispecies Co-culture Cell Biology Experiments.

    PubMed

    Markham, Deborah C; Simpson, Matthew J; Baker, Ruth E

    2015-04-01

    In vitro cell biology assays play a crucial role in informing our understanding of the migratory, proliferative and invasive properties of many cell types in different biological contexts. While mono-culture assays involve the study of a population of cells composed of a single cell type, co-culture assays study a population of cells composed of multiple cell types (or subpopulations of cells). Such co-culture assays can provide more realistic insights into many biological processes including tissue repair, tissue regeneration and malignant spreading. Typically, system parameters, such as motility and proliferation rates, are estimated by calibrating a mathematical or computational model to the observed experimental data. However, parameter estimates can be highly sensitive to the choice of model and modelling framework. This observation motivates us to consider the fundamental question of how we can best choose a model to facilitate accurate parameter estimation for a particular assay. In this work we describe three mathematical models of mono-culture and co-culture assays that include different levels of spatial detail. We study various spatial summary statistics to explore if they can be used to distinguish between the suitability of each model over a range of parameter space. Our results for mono-culture experiments are promising, in that we suggest two spatial statistics that can be used to direct model choice. However, co-culture experiments are far more challenging: we show that these same spatial statistics which provide useful insight into mono-culture systems are insufficient for co-culture systems. Therefore, we conclude that great care ought to be exercised when estimating the parameters of co-culture assays.

  4. Use of ocean color scanner data in water quality mapping

    NASA Technical Reports Server (NTRS)

    Khorram, S.

    1981-01-01

    Remotely sensed data, in combination with in situ data, are used in assessing water quality parameters within the San Francisco Bay-Delta. The parameters include suspended solids, chlorophyll, and turbidity. Regression models are developed between each of the water quality parameter measurements and the Ocean Color Scanner (OCS) data. The models are then extended to the entire study area for mapping water quality parameters. The results include a series of color-coded maps, each pertaining to one of the water quality parameters, and the statistical analysis of the OCS data and regression models. It is found that concurrently collected OCS data and surface truth measurements are highly useful in mapping the selected water quality parameters and locating areas having relatively high biological activity. In addition, it is found to be virtually impossible, at least within this test site, to locate such areas on U-2 color and color-infrared photography.

  5. Investigation of Polarization Phase Difference Related to Forest Fields Characterizations

    NASA Astrophysics Data System (ADS)

    Majidi, M.; Maghsoudi, Y.

    2013-09-01

    The information content of Synthetic Aperture Radar (SAR) data is significantly contained in the radiometric polarization channels; hence, polarimetric SAR data should be analyzed in relation to target structure. The importance of the phase difference between two co-polarized scattered signals has been recognized in geophysical remote sensing because of the possible association between biophysical parameters and the measured Polarization Phase Difference (PPD) statistics of the recorded backscattered signal components. This paper examines the phase difference statistics of two Radarsat-2 images to assess the feasibility of a relationship with the physical properties of scattering targets and tries to understand the relevance of PPD statistics to various types of forest fields. The effect of variation in incidence angle on the PPD statistics is also investigated. The experimental forest pieces used in this research are characterized by white pine (Pinus strobus L.), red pine (Pinus resinosa Ait.), jack pine (Pinus banksiana Lamb.), white spruce (Picea glauca (Moench) Voss), black spruce (Picea mariana (Mill.) B.S.P.), poplar (Populus L.), red oak (Quercus rubra L.), aspen and ground vegetation. The experimental results show that despite the wide diversity of biophysical parameters, the PPD statistics are almost the same. Forest field distributions, as distributed targets, have close-to-zero means regardless of the incidence angle. The PPD distributions are functions of both target and sensor parameters, but for a more appropriate examination of PPD statistics the observations should be made in the leaf-off season or in bands with lower frequencies.

  6. Statistical analysis of NaOH pretreatment effects on sweet sorghum bagasse characteristics

    NASA Astrophysics Data System (ADS)

    Putri, Ary Mauliva Hada; Wahyuni, Eka Tri; Sudiyani, Yanni

    2017-01-01

    We statistically analyze the characteristics of sweet sorghum bagasse before and after NaOH pretreatment. These characteristics include the percentages of lignocellulosic materials and the degree of crystallinity. We use the chi-square method to obtain the values of the fitted parameters, and then apply Student's t-test to check whether they are significantly different from zero at the 99.73% confidence level (C.L.). We find, in the cases of hemicellulose and lignin, that their percentages decrease statistically significantly after pretreatment. Crystallinity, on the other hand, does not behave similarly, as the data show that all fitted parameters in this case might be consistent with zero. Our statistical result is then cross-examined against the observations from X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy, showing good agreement. This result may indicate that the 10% NaOH pretreatment might not be sufficient to change the crystallinity index of the sweet sorghum bagasse.
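
    The fit-then-test procedure described above can be sketched as a weighted least-squares (chi-square) fit followed by a t-test of each parameter against zero at the 99.73% level. The linear trend model and the data values below are illustrative placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import t as student_t

def fit_and_test(x, y, y_err, conf=0.9973):
    """Chi-square (weighted least-squares) linear fit, then a Student's
    t-test of whether each fitted parameter differs from zero at the
    stated confidence level."""
    line = lambda u, a, b: a * u + b
    popt, pcov = curve_fit(line, x, y, sigma=y_err, absolute_sigma=True)
    stderr = np.sqrt(np.diag(pcov))
    dof = x.size - popt.size
    t_crit = student_t.ppf(0.5 + conf / 2, dof)
    # (estimate, standard error, significantly nonzero?) per parameter.
    return [(p, se, abs(p / se) > t_crit) for p, se in zip(popt, stderr)]

# Hypothetical crystallinity index vs. pretreatment condition:
x = np.array([0.0, 1, 2, 3, 4])
y = np.array([55.0, 55.8, 54.6, 55.2, 55.5])
print(fit_and_test(x, y, y_err=np.full(5, 1.0)))
```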

  7. On the Relationship Between Transfer Function-derived Response Times and Hydrograph Analysis Timing Parameters: Are there Similarities?

    NASA Astrophysics Data System (ADS)

    Bansah, S.; Ali, G.; Haque, M. A.; Tang, V.

    2017-12-01

    The proportion of precipitation that becomes streamflow is a function of internal catchment characteristics - including geology, landscape characteristics and vegetation - that influence overall storage dynamics. The timing and quantity of water discharged by a catchment are indeed embedded in event hydrographs. Event hydrograph timing parameters, such as the response lag and the time of concentration, are important descriptors of how long it takes a catchment to respond to input precipitation and how long it takes the latter to filter through the catchment. However, the extent to which hydrograph timing parameters relate to average response times derived from fitting transfer functions to annual hydrographs is unknown. In this study, we used a gamma transfer function to determine catchment average response times as well as event-specific hydrograph parameters across a network of eight nested prairie catchments ranging from 0.19 km2 to 74.6 km2 located in south-central Manitoba (Canada). Various statistical analyses were then performed to correlate average response times - estimated using the parameters of the fitted gamma transfer function - with event-specific hydrograph parameters. Preliminary results show significant interannual variations in response times and hydrograph timing parameters: the former were on the order of a few hours to days, while the latter ranged from a few days to weeks. Some statistically significant relationships were detected between response times and event-specific hydrograph parameters. Future analyses will involve comparing the statistical distributions of event-specific hydrograph parameters with those of runoff response times and baseflow transit times in order to quantify catchment storage dynamics across a range of temporal scales.
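
    The role of a gamma transfer function can be illustrated by convolving a rainfall series with a gamma kernel whose shape-scale product gives the average response time. This is a conceptual sketch under those assumptions, not the study's code; the parameter values are arbitrary.

```python
import numpy as np
from scipy.stats import gamma

def gamma_transfer_response(precip, shape, scale, dt=1.0):
    """Convolve a precipitation series with a gamma transfer function
    h(t); the mean of the kernel, shape * scale, is the catchment's
    average response time."""
    t = np.arange(0.0, 20 * shape * scale, dt)
    h = gamma.pdf(t, a=shape, scale=scale)
    h /= h.sum()                        # unit-volume kernel
    return np.convolve(precip, h)[:precip.size]

precip = np.zeros(200)
precip[5] = 10.0                        # a single rainfall impulse
q = gamma_transfer_response(precip, shape=2.0, scale=12.0)
print("average response time ~", 2.0 * 12.0, "time steps")
```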

  8. Effects of Intra-Family Parameters: Educative Style and Academic Knowledge of Parents and Their Economic Conditions on Teenagers' Personality and Behavior

    ERIC Educational Resources Information Center

    Bakhtavar, Mohammad; Bayova, Rana

    2015-01-01

    The present study aims to investigate the effects of intra-family parameters; educative styles and academic knowledge of parents and their economic condition on teenagers' personality and behavior. The present study is a descriptive survey. The statistical sample of the study included 166 teenage students from Baku, Azerbaijan and 332 of their…

  9. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    PubMed

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  10. THE Role OF Anisotropy AND Intermittency IN Solar Wind/Magnetosphere Coupling

    NASA Astrophysics Data System (ADS)

    Jankovicova, D.; Voros, Z.

    2006-12-01

    Turbulent fluctuations are common in the solar wind as well as in the Earth's magnetosphere. The fluctuations of both the magnetic field and plasma parameters exhibit non-Gaussian statistics. Neither the amplitude of these fluctuations nor their spectral characteristics can provide a full statistical description of multi-scale features in turbulence, which motivates a statistical approach that includes the estimation of experimentally accessible statistical moments. In this contribution, we directly estimate the third (skewness) and the fourth (kurtosis) statistical moments from the available time series of magnetic measurements in the solar wind (ACE and WIND spacecraft) and in the Earth's magnetosphere (SYM-H index). We then evaluate how the statistical moments change during intervals of strong and weak solar wind/magnetosphere coupling.
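
    Estimating these moments scale by scale is straightforward: compute the skewness and kurtosis of the field increments at several time lags. A minimal sketch with a synthetic series standing in for the spacecraft data follows; the Brownian-like toy signal gives near-Gaussian (near-zero) values, whereas intermittent turbulence would show kurtosis growing toward small lags.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def increment_moments(b, scales):
    """Skewness and excess kurtosis of increments
    db(tau) = b(t + tau) - b(t) at several time lags."""
    out = []
    for tau in scales:
        db = b[tau:] - b[:-tau]
        out.append((tau, skew(db), kurtosis(db)))  # kurtosis is excess
    return out

# Toy series standing in for ACE/WIND magnetic measurements:
rng = np.random.default_rng(5)
b = np.cumsum(rng.standard_normal(100_000))
for tau, s, k in increment_moments(b, scales=[1, 10, 100]):
    print(tau, round(s, 3), round(k, 3))
```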

  11. Space-weather Parameters for 1,000 Active Regions Observed by SDO/HMI

    NASA Astrophysics Data System (ADS)

    Bobra, M.; Liu, Y.; Hoeksema, J. T.; Sun, X.

    2013-12-01

    We present statistical studies of several space-weather parameters, derived from observations of the photospheric vector magnetic field by the Helioseismic and Magnetic Imager (HMI) aboard the Solar Dynamics Observatory, for a thousand active regions. Each active region has been observed every twelve minutes during the entirety of its disk passage. Some of these parameters, such as energy density and shear angle, indicate the deviation of the photospheric magnetic field from that of a potential field. Other parameters include flux, helicity, field gradients, polarity inversion line properties, and measures of complexity. We show that some of these parameters are useful for event prediction.

  12. [Correlations of 18F-Fluorodeoxyglucose Positron Emission Tomography/Magnetic Resonance Imaging Parameters with the Pathological Differentiation of Head and Neck Squamous Cell Carcinoma and Their Diagnostic Efficiencies].

    PubMed

    Dang, Hao Dan; Chen, Yu; Shi, Xiao Hua; Hou, Bo; Xing, Hai Qun; Zhang, Tao; Chen, Xing Ming; Zhang, Zhu Hua; Xue, Hua Dan; Jin, Zheng Yu

    2018-04-28

    Objective To evaluate the correlation of positron emission tomography/magnetic resonance imaging (PET/MR) parameters with the pathological differentiation of head and neck squamous cell carcinoma (HNSCC) and the diagnostic efficiencies of the PET/MR parameters. Methods Patients with clinical suspicion of HNSCC were included and underwent PET/MR scans. HNSCC was pathologically confirmed in all these patients. The PET/MR examination included PET and MR sequences of diffusion-weighted imaging (DWI) and T2- and T1-weighted imaging. The multiple PET/MR parameters measured and estimated included the mean apparent diffusion coefficient (ADCmean) and the maximum and mean standardized uptake values (SUVmax and SUVmean). The correlations of all the parameters and their distributions between the different tumor differentiation groups were analyzed. Logistic regression was used to build a model combining multiple PET/MR parameters for predicting differentiation. The receiver operating characteristic curve was calculated for each parameter and for the combination. Results In total, 23 patients, all male, were included in this study: 9 patients had well-differentiated tumors, with an average age of (61.0±6.8) years; 14 patients had moderately-differentiated (n=10) or poorly-differentiated (n=4) tumors, with an average age of (62.0±9.1) years. There was a statistical correlation between SUVmean and SUVmax (P<0.001); however, ADCmean showed no statistical correlation with SUVmax or SUVmean (P=0.42, P=0.13). ADCmean and SUVmean showed significant differences between the well-differentiated group and the moderately-to-poorly-differentiated group (P=0.005, P=0.007). Compared with the individual parameters, the combination of the PET/MR parameters SUVmean and ADCmean had higher efficacy in predicting tumor differentiation, with an area under the curve of 0.84. Conclusions The distributions of ADCmean, SUVmax and SUVmean differ among HNSCC with different pathological differentiation. Compared with the individual parameters, the combination of the PET/MR parameters has higher efficiency in predicting tumor differentiation.

  13. Transmission overhaul and replacement predictions using Weibull and renewal theory

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lewicki, D. G.

    1989-01-01

    A method to estimate the frequency of transmission overhauls is presented. This method is based on the two-parameter Weibull statistical distribution for component life. A second method is presented to estimate the number of replacement components needed to support the transmission overhaul pattern. The second method is based on renewal theory. Confidence statistics are applied with both methods to improve the statistical estimate of sample behavior. A transmission example is also presented to illustrate the use of the methods. Transmission overhaul frequency and component replacement calculations are included in the example.
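
    Under the two-parameter Weibull life model named above, the expected number of replacement components over an overhaul horizon can be approximated by renewal simulation. The shape and scale values below are illustrative, not the report's data, and the percentile spread is a simple stand-in for the report's confidence statistics.

```python
import numpy as np

def expected_replacements(shape, scale, horizon, n_sim=10_000, seed=3):
    """Monte Carlo renewal estimate: average number of component
    replacements per socket over an operating horizon, with component
    lives drawn from a two-parameter Weibull distribution."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_sim, dtype=int)
    for i in range(n_sim):
        t = 0.0
        while True:
            t += scale * rng.weibull(shape)   # next component life
            if t > horizon:
                break
            counts[i] += 1
    return counts.mean(), np.percentile(counts, [5, 95])

# E.g., a gear with shape 2.5 and characteristic life 3000 h,
# supported over a 10000-h overhaul horizon:
mean_n, bounds = expected_replacements(2.5, 3000.0, 10_000.0)
print(mean_n, bounds)
```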

  14. Bayesian approach to inverse statistical mechanics.

    PubMed

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  15. Bayesian approach to inverse statistical mechanics

    NASA Astrophysics Data System (ADS)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  16. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.

  17. ProUCL Version 4.0 Technical Guide

    EPA Science Inventory

    Statistical inference, including both estimation and hypotheses testing approaches, is routinely used to: estimate environmental parameters of interest, such as exposure point concentration (EPC) terms, not-to-exceed values, and background level threshold values (BTVs) for contam...

  18. [Vulnerability to atmospheric and geomagnetic factors of the body functions in healthy male dwellers of the Russian North].

    PubMed

    Markov, A L; Zenchenko, T A; Solonin, Iu G; Boĭko, E R

    2013-01-01

    From April 2009 through November 2011, a Mars-500 satellite study of Russian Northerners (Syktyvkar citizens) was performed using the standard ECOSAN-2007 procedure to evaluate the susceptibility of the main functional parameters of the body to atmospheric and geomagnetic factors. Seventeen essentially healthy men aged 25 to 46 years were investigated. Statistical data treatment included correlation analysis and single-factor analysis of variance. Comparison of the number of statistically significant correlations across all functional parameters showed that participants were most often sensitive to atmospheric pressure, temperature, relative humidity and oxygen partial pressure (29-35 %), and to geomagnetic activity (28 %). Dependence of the functional parameters on the rate of temperature and pressure change was weak and comparable with random coincidence (11 %). Among the hemodynamic parameters, systolic pressure was particularly sensitive to space and terrestrial weather variations (29 %); sensitivity of heart rate and diastolic pressure was found in 25 % and 21 % of participants, respectively. Among the heart rate variability (HRV) parameters, the largest number of statistically reliable correlations was found for the centralization index (32 %) and the high-frequency HRV spectrum (31 %); the index of regulatory systems activity showed the weakest dependence (19 %). The life index, maximal breath-holding and Skibinskaya's cardiorespiratory index were also susceptible. Individual responses of the functional parameters to terrestrial and space weather changes varied among participants, which points to the necessity of an individual approach to evaluating a person's reactions to environmental changes.

  19. Estimation of trabecular bone parameters in children from multisequence MRI using texture-based regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lekadir, Karim, E-mail: karim.lekadir@upf.edu; Hoogendoorn, Corné; Armitage, Paul

    Purpose: This paper presents a statistical approach for the prediction of trabecular bone parameters from low-resolution multisequence magnetic resonance imaging (MRI) in children, thus addressing the limitations of high-resolution modalities such as HR-pQCT, including the significant exposure of young patients to radiation and the limited applicability of such modalities to peripheral bones in vivo. Methods: A statistical predictive model is constructed from a database of MRI and HR-pQCT datasets to relate the low-resolution MRI appearance in the cancellous bone to the trabecular parameters extracted from the high-resolution images. The description of the MRI appearance is achieved between subjects by using a collection of feature descriptors, which describe the texture properties inside the cancellous bone and are invariant to the geometry and size of the trabecular areas. The predictive model is built by fitting to the training data a nonlinear partial least squares regression between the input MRI features and the output trabecular parameters. Results: Detailed validation based on a sample of 96 datasets shows correlations >0.7 between the trabecular parameters predicted from low-resolution multisequence MRI based on the proposed statistical model and the values extracted from high-resolution HR-pQCT. Conclusions: The obtained results indicate the promise of the proposed predictive technique for the estimation of trabecular parameters in children from multisequence MRI, thus reducing the need for high-resolution radiation-based scans for a fragile population that is under development and growth.
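
    A rough sketch of the texture-to-parameter regression follows, using scikit-learn's linear PLSRegression as a stand-in for the paper's nonlinear partial least squares. The synthetic feature matrix, targets and shapes are illustrative only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic stand-in data: rows are subjects, X holds texture feature
# descriptors from low-resolution MRI, Y holds trabecular parameters
# measured on HR-pQCT (e.g., bone volume fraction, thickness).
rng = np.random.default_rng(4)
X = rng.standard_normal((96, 40))           # 96 subjects, 40 features
true_w = rng.standard_normal((40, 3))
Y = X @ true_w + 0.5 * rng.standard_normal((96, 3))

pls = PLSRegression(n_components=5)
pls.fit(X, Y)
Y_hat = pls.predict(X)
# Per-parameter correlation between predicted and reference values,
# the kind of validation statistic quoted above (>0.7 in the paper).
for j in range(Y.shape[1]):
    print(np.corrcoef(Y[:, j], Y_hat[:, j])[0, 1])
```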

  20. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and the observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation, and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
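
    For the information criteria, a minimal sketch of how AICc and BIC can be computed from least-squares residuals under Gaussian assumptions is given below; the residual vectors and parameter counts for the "alternative models" are placeholders, not the Maggia Valley results.

      import numpy as np

      def information_criteria(residuals, n_params):
          """AIC, AICc, and BIC from least-squares residuals (Gaussian errors assumed)."""
          n = residuals.size
          sse = np.sum(residuals**2)
          aic = n * np.log(sse / n) + 2 * n_params
          aicc = aic + 2 * n_params * (n_params + 1) / (n - n_params - 1)
          bic = n * np.log(sse / n) + n_params * np.log(n)
          return aic, aicc, bic

      # Rank alternative models: the lowest criterion value wins
      rng = np.random.default_rng(1)
      for name, k in [("homogeneous K", 2), ("zoned K", 5), ("interpolated K", 9)]:
          res = rng.normal(scale=1.0, size=100)   # placeholder residual vector per model
          print(name, ["%.1f" % v for v in information_criteria(res, k)])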

  1. Spatial trends in Pearson Type III statistical parameters

    USGS Publications Warehouse

    Lichty, R.W.; Karlinger, M.R.

    1995-01-01

    Spatial trends in the statistical parameters (mean, standard deviation, and skewness coefficient) of a Pearson Type III distribution of the logarithms of annual flood peaks for small rural basins (less than 90 km²) are delineated using a climate factor CT (T = 2-, 25-, and 100-yr recurrence intervals), which quantifies the effects of long-term climatic data (rainfall and pan evaporation) on observed T-yr floods. Maps showing trends in average parameter values demonstrate the geographically varying influence of climate on the magnitude of the Pearson Type III statistical parameters. The spatial trends in the variability of the parameter values characterize the sensitivity of the statistical parameters to the interaction of basin-runoff characteristics (hydrology) and climate. -from Authors
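
    A hedged sketch of the log-Pearson Type III fitting step, using scipy.stats.pearson3 as the standardized frequency-factor distribution; the peak-flow series is synthetic, and the method-of-moments fit shown here is one common choice, not necessarily the authors' exact procedure.

      import numpy as np
      from scipy import stats

      # Annual peak flows (m^3/s) for a small rural basin -- placeholder data
      rng = np.random.default_rng(2)
      peaks = stats.lognorm(s=0.6, scale=50).rvs(size=40, random_state=rng)

      logq = np.log10(peaks)
      mean, sd, skew = logq.mean(), logq.std(ddof=1), stats.skew(logq, bias=False)

      # T-year flood from the fitted Pearson Type III distribution of log-peaks
      for T in (2, 25, 100):
          kT = stats.pearson3.ppf(1 - 1/T, skew)   # frequency factor (standardized quantile)
          qT = 10 ** (mean + kT * sd)
          print(f"Q{T} = {qT:.1f} m^3/s")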

  2. Ultrasound biomicroscopy and iris pigment dispersion: a case-control study.

    PubMed

    Mora, P; Sangermani, C; Ghirardini, S; Carta, A; Ungaro, N; Gandolfi, Sa

    2010-04-01

    The study involved eyes affected by pigment dispersion syndrome (PDS) or pigmentary glaucoma (PG) investigated by ultrasound biomicroscopy (UBM). Different irido-corneal parameters were assessed and compared with those from healthy controls. The aim was to investigate the capacity of UBM to differentiate the cases and, potentially, to confirm the pathogenic mechanisms. Patients with a first diagnosis of PDS or PG were included. A cohort of healthy volunteers matched for sex, age, and refractive error was recruited. All underwent UBM examination; the following parameters were assessed in relaxed and stimulated accommodative states in one eye: iris-lens contact (ILC), irido-corneal angle (ICA), and iris concavity (IC). A receiver operating characteristic (ROC) analysis assessed the ability of UBM to discriminate between subjects with and without PDS/PG. There were 24 eyes in the case group: four diagnosed as PG and the remaining 20 as PDS. There were 25 eyes in the control group. The two groups were statistically superimposable except for baseline intraocular pressure, which was higher in the case group (p=0.0001). All UBM parameters were statistically different between the two groups. ICA in near vision was the best-performing parameter, reaching a sensitivity (=specificity) of 0.875 with a cut-off at 53.0 degrees. The second most sensitive parameter was IC, again in near vision. All UBM parameters examined were statistically different between the two groups. ROC analysis showed ICA and IC in near vision to be the most discriminatory parameters. This evidence confirms the importance of iris movements in inducing the particular features of PDS/PG.
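
    A minimal sketch of the ROC cut-off selection (the point where sensitivity equals specificity), assuming smaller ICA values indicate disease; the angle data below are simulated placeholders, not the study measurements.

      import numpy as np
      from sklearn.metrics import roc_curve

      # y = 1 for PDS/PG eyes, 0 for controls; x = irido-corneal angle (ICA) in near vision
      rng = np.random.default_rng(3)
      y = np.r_[np.ones(24), np.zeros(25)].astype(int)
      x = np.r_[rng.normal(48, 4, 24), rng.normal(57, 4, 25)]   # placeholder degrees

      # Cases have *smaller* ICA, so score = -x makes higher scores indicate disease
      fpr, tpr, thr = roc_curve(y, -x)
      balanced = np.argmin(np.abs(tpr - (1 - fpr)))   # sensitivity = specificity point
      print(f"cut-off = {-thr[balanced]:.1f} deg, "
            f"sens = {tpr[balanced]:.3f}, spec = {1 - fpr[balanced]:.3f}")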

  3. Analysis of Acoustic Emission Parameters from Corrosion of AST Bottom Plate in Field Testing

    NASA Astrophysics Data System (ADS)

    Jomdecha, C.; Jirarungsatian, C.; Suwansin, W.

    Field testing of aboveground storage tanks (ASTs) to monitor corrosion of the bottom plate is presented in this chapter. Acoustic emission (AE) testing data from ten ASTs with different sizes, materials, and products were employed to monitor the bottom plate condition. AE sensors of 30 and 150 kHz were used to monitor the corrosion activity with up to 24 channels, including guard sensors. AE parameters were analyzed to explore the AE parameter patterns of occurring corrosion compared to laboratory results. Amplitude, count, duration, and energy were the main parameters of the analysis. A pattern recognition technique combined with statistical analysis was implemented to eliminate electrical and environmental noises. The results showed specific AE patterns of corrosion activities related to the empirical results. In addition, a planar location algorithm was utilized to locate the significant AE events from corrosion. Both the parameter patterns and the AE event locations can be used to interpret and locate the corrosion activities. Finally, a basic statistical grading technique was used to evaluate the bottom plate condition of the ASTs.

  4. Finding Mass Constraints Through Third Neutrino Mass Eigenstate Decay

    NASA Astrophysics Data System (ADS)

    Gangolli, Nakul; de Gouvêa, André; Kelly, Kevin

    2018-01-01

    In this paper we aim to constrain the decay parameter for the third neutrino mass, utilizing already accepted constraints on the other mixing parameters from the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix. The main purpose of this project is to determine the parameters that will allow the Jiangmen Underground Neutrino Observatory (JUNO) to observe a decay parameter with some statistical significance. Another goal is to determine the parameters that JUNO could detect in the case that the third neutrino mass is lighter than the first two neutrino species. We also replicate the results that were found in the JUNO Conceptual Design Report (CDR). By utilizing χ² analysis, constraints have been put on the mixing angles, mass squared differences, and the third neutrino decay parameter. These statistical tests take into account background noise and normalization corrections, and thus the finalized bounds are a good approximation of the true bounds that JUNO can detect. If the decay parameter is not included in our models, the 99% confidence interval lies within the bounds 0 s to 2.80×10⁻¹² s. However, if we account for a decay parameter of 3×10⁻⁵ eV², then the 99% confidence interval lies within 8.73×10⁻¹² s to 8.73×10⁻¹¹ s.

  5. Statistical Inference for Data Adaptive Target Parameters.

    PubMed

    Hubbard, Alan E; Kherad-Pajouh, Sara; van der Laan, Mark J

    2016-05-01

    Consider the case in which one observes n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target, we partition the sample into V equal-size subsamples and use this partitioning to define V splits into an estimation sample (one of the V subsamples) and a corresponding complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data adaptive statistical target parameter as the average of these V sample-specific target parameters. We present an estimator (and corresponding central limit theorem) for this type of data adaptive target parameter. This general methodology for generating data adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. This new framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for a greater involvement of statistical inference in problems that are increasingly addressed by clever, yet ad hoc, pattern-finding methods. To suggest this potential, and to verify the predictions of the theory, extensive simulation studies, along with a data analysis based on adaptively determined intervention rules, are shown and give insight into how to structure such an approach. The results show that the data adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.
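
    A toy Python sketch of the sample-split construction, under illustrative assumptions: the "algorithm" that maps a parameter-generating sample to a target (here, picking the most correlated covariate and estimating its slope) and the fold-based standard error are simplifications of the paper's estimator and central limit theorem.

      import numpy as np
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(4)
      n, p = 500, 10
      X = rng.normal(size=(n, p))
      y = 0.8 * X[:, 3] + rng.normal(size=n)      # covariate 3 truly matters

      estimates = []
      for gen, est in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
          # Parameter-generating sample: data-adaptively pick the most correlated covariate
          cors = [abs(np.corrcoef(X[gen, j], y[gen])[0, 1]) for j in range(p)]
          j_star = int(np.argmax(cors))
          # Estimation sample: estimate the chosen target (an OLS slope) on held-out data
          b = np.polyfit(X[est, j_star], y[est], 1)[0]
          estimates.append(b)

      psi = np.mean(estimates)                    # sample-split data adaptive target parameter
      se = np.std(estimates, ddof=1) / np.sqrt(len(estimates))
      print(f"estimate = {psi:.3f} +/- {1.96 * se:.3f}")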

  6. Application of Statistically Derived CPAS Parachute Parameters

    NASA Technical Reports Server (NTRS)

    Romero, Leah M.; Ray, Eric S.

    2013-01-01

    The Capsule Parachute Assembly System (CPAS) Analysis Team is responsible for determining parachute inflation parameters and dispersions that are ultimately used in verifying system requirements. A model memo is internally released semi-annually documenting parachute inflation and other key parameters reconstructed from flight test data. Dispersion probability distributions published in previous versions of the model memo were uniform because insufficient data were available for determination of statistically based distributions. Uniform distributions do not accurately represent the expected distributions, since extreme parameter values are just as likely to occur as the nominal value. CPAS has taken incremental steps to move away from uniform distributions. Model Memo version 9 (MMv9) made the first use of non-uniform dispersions, but only for the reefing cutter timing, for which a large number of samples was available. In order to maximize the utility of the available flight test data, clusters of parachutes were reconstructed individually starting with Model Memo version 10. This allowed for statistical assessment of steady-state drag area (CDS) and parachute inflation parameters such as the canopy fill distance (n), profile shape exponent (expopen), over-inflation factor (C_k), and ramp-down time (t_k) distributions. Built-in MATLAB distributions were applied to the histograms, and parameters such as scale (σ) and location (μ) were output. Engineering judgment was used to determine the "best fit" distribution based on the test data. Results include normal, log normal, and uniform (where available data remain insufficient) fits of nominal and failure (loss of parachute and skipped stage) cases for all CPAS parachutes. This paper discusses the uniform methodology that was previously used, the process and results of the statistical assessment, how the dispersions were incorporated into Monte Carlo analyses, and the application of the distributions in trajectory benchmark testing assessments with parachute inflation parameters, drag area, and reefing cutter timing used by CPAS.
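
    As a sketch of the distribution-fitting step (in Python rather than the MATLAB tooling described above), the snippet below fits candidate normal, log-normal, and uniform distributions to a placeholder sample of reconstructed fill distances and compares log-likelihoods; the data and the final "best fit" choice are illustrative only.

      import numpy as np
      from scipy import stats

      # Reconstructed canopy fill distances from flight tests (placeholder values)
      rng = np.random.default_rng(5)
      fill_dist = stats.lognorm(s=0.3, scale=8.0).rvs(size=40, random_state=rng)

      candidates = {"normal": stats.norm, "lognormal": stats.lognorm, "uniform": stats.uniform}
      for name, dist in candidates.items():
          params = dist.fit(fill_dist)           # MLE fit; returns (shape...,) + (loc, scale)
          ll = np.sum(dist.logpdf(fill_dist, *params))
          print(f"{name:9s} log-likelihood = {ll:.1f}, params = {np.round(params, 3)}")
      # Engineering judgment (guided by the log-likelihoods) picks the "best fit",
      # mirroring how MMv10 assigned normal/log-normal/uniform dispersions per parameter.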

  7. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    PubMed

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.

  9. Calculation of evapotranspiration using color-infrared photography. [remote sensing in Arizona]

    NASA Technical Reports Server (NTRS)

    Jones, J. E.

    1977-01-01

    Data from 38 color-infrared photographic missions flown during a five-year period over the Gila River Phreatophyte Project in southeastern Arizona were analyzed to determine the possibility of identifying and measuring vegetative parameters and their associated hydrologic variables by spectral analysis of the photographs. The derived spectral equations are discussed, and a table of 24 statistical parameters describing the spectral and hydrologic variables is included.

  10. Determination of polarimetric parameters of honey by near-infrared transflectance spectroscopy.

    PubMed

    García-Alvarez, M; Ceresuela, S; Huidobro, J F; Hermida, M; Rodríguez-Otero, J L

    2002-01-30

    NIR transflectance spectroscopy was used to determine polarimetric parameters (direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides) and sucrose in honey. In total, 156 honey samples were collected during 1992 (45 samples), 1995 (56 samples), and 1996 (55 samples). Samples were analyzed by NIR spectroscopy and polarimetric methods. Calibration (118 samples) and validation (38 samples) sets were made up; honeys from the three years were included in both sets. Calibrations were performed by modified partial least-squares regression, with scatter correction by the standard normal variate and detrend methods. For direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides, good statistics (bias, SEV, and R²) were obtained for the validation set, and no statistically significant (p = 0.05) differences were found between instrumental and polarimetric methods for these parameters. Statistical data for sucrose were not as good as those of the other parameters. Therefore, NIR spectroscopy is not an effective method for quantitative analysis of sucrose in these honey samples. However, NIR spectroscopy may be an acceptable method for semiquantitative evaluation of sucrose in honeys, such as those in our study, containing up to 3% sucrose. Further work is necessary to validate the uncertainty at higher levels.

  11. SU-C-BRD-03: Analysis of Accelerator Generated Text Logs for Preemptive Maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Able, CM; Baydush, AH; Nguyen, C

    2014-06-15

    Purpose: To develop a model to analyze medical accelerator generated parameter and performance data that will provide an early warning of performance degradation and impending component failure. Methods: A robust 6 MV VMAT quality assurance treatment delivery was used to test the constancy of accelerator performance. The generated text log files were decoded and analyzed using statistical process control (SPC) methodology. The text file data are a single snapshot of energy-specific and overall system parameters. A total of 36 system parameters were monitored, including RF generation, electron gun control, energy control, beam uniformity control, DC voltage generation, and cooling systems. The parameters were analyzed using Individual and Moving Range (I/MR) charts. The chart limits were calculated using a hybrid technique that included the use of the standard 3σ limits and the parameter/system specification. Synthetic errors/changes were introduced to determine the initial effectiveness of I/MR charts in detecting relevant changes in operating parameters. The magnitude of the synthetic errors/changes was based on: the value of 1 standard deviation from the mean operating parameter of 483 TB systems, a small fraction (≤ 5%) of the operating range, or a fraction of the minor fault deviation. Results: There were 34 parameters in which synthetic errors were introduced. There were 2 parameters (radial position steering coil, and positive 24V DC) in which the errors did not exceed the limit of the I/MR chart. The I chart limit was exceeded for all of the remaining parameters (94.2%). The MR chart limit was exceeded in 29 of the 32 parameters (85.3%) in which the I chart limit was exceeded. Conclusion: Statistical process control I/MR evaluation of text log file parameters may be effective in providing an early warning of performance degradation or component failure for digital medical accelerator systems. Research is supported by Varian Medical Systems, Inc.
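
    A minimal sketch of the I/MR limit calculation with the standard subgroup-of-two constants (d2 = 1.128, D4 = 3.267); the monitored parameter, its values, and the injected error are invented for illustration, and the hybrid specification-based limits described above are not modeled.

      import numpy as np

      def imr_limits(x):
          """Individual (I) and moving-range (MR) chart limits from a parameter's history."""
          mr = np.abs(np.diff(x))
          mr_bar = mr.mean()
          sigma = mr_bar / 1.128                 # d2 constant for subgroups of size 2
          i_limits = (x.mean() - 3 * sigma, x.mean() + 3 * sigma)
          mr_limit = 3.267 * mr_bar              # D4 constant for subgroups of size 2
          return i_limits, mr_limit

      rng = np.random.default_rng(6)
      gun_current = rng.normal(0.55, 0.002, 200)   # daily snapshots of one log-file parameter
      gun_current[-1] += 0.01                      # synthetic error injected into the last run

      (i_lo, i_hi), mr_hi = imr_limits(gun_current[:-1])
      x, mr = gun_current[-1], abs(gun_current[-1] - gun_current[-2])
      print("I-chart violation:", not (i_lo <= x <= i_hi), "| MR-chart violation:", mr > mr_hi)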

  12. Treatment of automotive industry oily wastewater by electrocoagulation: statistical optimization of the operational parameters.

    PubMed

    GilPavas, Edison; Molina-Tirado, Kevin; Gómez-García, Miguel Angel

    2009-01-01

    An electrocoagulation process was used for the treatment of oily wastewater generated by an automotive industry in Medellín (Colombia). An electrochemical cell consisting of four parallel electrodes (Fe and Al) in bipolar configuration was implemented. A multifactorial experimental design was used for evaluating the influence of several parameters including: type and arrangement of electrodes, pH, and current density. Oil and grease removal was defined as the response variable for the statistical analysis. Additionally, BOD(5), COD, and TOC were monitored during the treatment process. According to the results, at the optimum parameter values (current density = 4.3 mA/cm², distance between electrodes = 1.5 cm, Fe as anode, and pH = 12) it was possible to reach ca. 95% oil and grease removal, with COD removal and mineralization of 87.4% and 70.6%, respectively. A final biodegradability (BOD(5)/COD) of 0.54 was reached.

  13. Quantum Space Charge Waves in a Waveguide Filled with Fermi-Dirac Plasmas Including Relativistic Wake Field and Quantum Statistical Pressure Effects

    NASA Astrophysics Data System (ADS)

    Hong, Woo-Pyo; Jung, Young-Dae

    2018-03-01

    The effects of quantum statistical degeneracy pressure on the propagation of the quantum space charge wave are investigated in a cylindrically bounded plasma waveguide filled with relativistically degenerate quantum Fermi-Dirac plasmas and the relativistic ion wake field. The results show that the domain of the degenerate parameter for the resonant beam instability significantly increases with an increase of the scaled beam velocity. It is found that the instability domain of the wave number increases with an increase of the degenerate parameter. It is also found that the growth rate for the resonant beam instability decreases with an increase of the degenerate parameter. In addition, it is shown that the lowest harmonic mode provides the maximum value of the growth rates. Moreover, it is shown that the instability domain of the wave number decreases with an increase of the beam velocity.

  14. RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.

    PubMed

    Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z

    2017-04-01

    We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and the single-hit multi-target model, are included in the software. RAD-ADAPT uses the maximum likelihood estimation method to obtain parameter estimates, with the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R, and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated using an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on the human lung cancer cell line A549 after ionizing radiation. Copyright © 2017 Elsevier B.V. All rights reserved.
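
    A sketch of the kind of Poisson maximum likelihood fit RAD-ADAPT performs for the linear-quadratic model, written here in plain Python/SciPy rather than the R/ADAPT stack described above; the dose-colony data and starting values are placeholders.

      import numpy as np
      from scipy.optimize import minimize

      # Colony counts at graded doses (Gy); cells plated per dish -- placeholder data
      dose     = np.array([0, 0, 2, 2, 4, 4, 6, 6], dtype=float)
      plated   = np.array([100, 100, 200, 200, 500, 500, 2000, 2000], dtype=float)
      colonies = np.array([52, 48, 61, 55, 49, 57, 38, 44], dtype=float)

      def neg_loglik(theta):
          lpe, alpha, beta = theta                     # log plating efficiency, LQ parameters
          mu = plated * np.exp(lpe - alpha * dose - beta * dose**2)
          return -np.sum(colonies * np.log(mu) - mu)   # Poisson log-likelihood (no factorial)

      fit = minimize(neg_loglik, x0=[np.log(0.5), 0.3, 0.03], method="Nelder-Mead")
      lpe, alpha, beta = fit.x
      print(f"PE = {np.exp(lpe):.2f}, alpha = {alpha:.3f} /Gy, beta = {beta:.4f} /Gy^2")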

  15. Interrelationships Between 3 Keratoconic Cone Parameters.

    PubMed

    Tu, Kyaw L; Tourkmani, Abdo K; Srinivas, Singaram

    2017-09-01

    This study aimed to find out the interrelationships between 3 parameters of the keratoconic cone. A total of 101 keratoconic eyes of 58 patients were included in this retrospective case series study. A complete eye examination was performed. Kmax (K) and pachymetry at the thinnest point (T) were obtained from the Pentacam tomographer. The vertex-to-thinnest-pachymetry distance (D, for decentration) was calculated using trigonometry. Pearson correlation coefficients between T and D, between T and K, and between D and K were calculated. There is a statistically significant positive correlation between thinnest point pachymetry and decentration (R = 0.366, P = 0.0002), and also statistically significant negative correlations between thinnest point pachymetry and Kmax (R = -0.719, P < 0.00001) and between decentration and Kmax (R = -0.281, P = 0.0044). The interrelationships between the 3 keratoconic cone parameters suggest that the thinner cones are largely central, that is, decenter less, but show greater steepening.

  16. Calculating background levels for ecological risk parameters in toxic harbor sediment

    USGS Publications Warehouse

    Leadon, C.J.; McDonnell, T.R.; Lear, J.; Barclift, D.

    2007-01-01

    Establishing background levels for biological parameters is necessary in assessing the ecological risks from harbor sediment contaminated with toxic chemicals. For chemicals in sediment, the term contaminated is defined as having concentrations above background and above significant human health or ecological risk levels. For biological parameters, a site could be considered contaminated if levels of the parameter are either higher or lower than the background level, depending on the specific parameter. Biological parameters can include tissue chemical concentrations in ecological receptors, bioassay responses, bioaccumulation levels, and benthic community metrics. Chemical parameters can include sediment concentrations of a variety of potentially toxic chemicals. Indirectly, contaminated harbor sediment can impact shellfish, fish, birds, marine mammals, and human populations. This paper summarizes the methods used to define background levels for chemical and biological parameters from a survey of ecological risk investigations of marine harbor sediment at California Navy bases. Background levels for regional biological indices used to quantify ecological risks for benthic communities are also described. Generally, background stations are positioned in relatively clean areas exhibiting the same physical and general chemical characteristics as nearby areas with contaminated harbor sediment. The number of background stations and the number of sample replicates per background station depend on the statistical design of the sediment ecological risk investigation, developed through the data quality objective (DQO) process. Biological data from the background stations can be compared to data from a contaminated site by using minimum or maximum background levels or comparative statistics. In Navy ecological risk assessments (ERAs), calculated background levels and appropriate ecological risk screening criteria are used to identify sampling stations and sites with contaminated sediments.

  17. TU-FG-201-03: Automatic Pre-Delivery Verification Using Statistical Analysis of Consistencies in Treatment Plan Parameters by the Treatment Site and Modality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, S; Wu, Y; Chang, X

    Purpose: A novel computer software system, APDV (Automatic Pre-Delivery Verification), has been developed for verifying patient treatment plan parameters immediately prior to treatment delivery in order to automatically detect and prevent catastrophic errors. Methods: APDV is designed to continuously monitor new DICOM plan files on the TMS computer at the treatment console. When new plans to be delivered are detected, APDV checks the consistency of plan parameters and high-level plan statistics using underlying rules and statistical properties based on the given treatment site, technique, and modality. These rules were quantitatively derived by retrospectively analyzing all the EBRT treatment plans of the past 8 years at the authors' institution. Therapists and physicists will be notified with a warning message displayed on the TMS computer if any critical errors are detected, and check results, confirmations, and dismissal actions will be saved into a database for further review. Results: APDV was implemented as a stand-alone program using C# to ensure the required real-time performance. Mean values and standard deviations were quantitatively derived for various plan parameters including MLC usage, MU/cGy ratio, beam SSD, beam weighting, and beam gantry angles (only for lateral targets) per treatment site, technique, and modality. 2D rules combining the MU/cGy ratio and averaged SSD values were also derived using joint probabilities of confidence error ellipses. The statistics of these major treatment plan parameters quantitatively evaluate the consistency of any treatment plan, which facilitates the automatic APDV checking procedures. Conclusion: APDV could be useful in detecting and preventing catastrophic errors immediately before treatment delivery. Future plans include automatic patient identity and patient setup checks after daily patient images are acquired by the machine and become available on the TMS computer. This project is supported by the Agency for Healthcare Research and Quality (AHRQ) under award 1R01HS0222888. The senior author received research grants from ViewRay Inc. and Varian Medical Systems.
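
    To illustrate the 2D consistency check, here is a hedged sketch of a joint confidence-ellipse test via the Mahalanobis distance; the historical (MU/cGy, SSD) statistics, the 99.9% threshold, and the function names are assumptions for illustration, not APDV's actual rules.

      import numpy as np
      from scipy import stats

      # Historical (MU/cGy, mean SSD) pairs for one site/technique/modality -- placeholders
      rng = np.random.default_rng(7)
      hist = rng.multivariate_normal([1.1, 92.0], [[0.01, -0.05], [-0.05, 4.0]], size=400)

      mu, cov = hist.mean(axis=0), np.cov(hist, rowvar=False)
      cov_inv = np.linalg.inv(cov)
      threshold = stats.chi2.ppf(0.999, df=2)      # 99.9% confidence error ellipse

      def check_plan(mu_per_cgy, mean_ssd):
          d = np.array([mu_per_cgy, mean_ssd]) - mu
          return d @ cov_inv @ d <= threshold      # inside ellipse -> plan looks consistent

      print(check_plan(1.12, 91.0))   # typical plan  -> True
      print(check_plan(2.50, 80.0))   # outlier plan  -> False (would trigger a warning)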

  18. Equations for normal-mode statistics of sound scattering by a rough elastic boundary in an underwater waveguide, including backscattering.

    PubMed

    Morozov, Andrey K; Colosi, John A

    2017-09-01

    Underwater sound scattering by a rough sea surface, ice, or a rough elastic bottom is studied. The study includes both the scattering from the rough boundary and the elastic effects in the solid layer. A coupled mode matrix is approximated by a linear function of one random perturbation parameter such as the ice-thickness or a perturbation of the surface position. A full two-way coupled mode solution is used to derive the stochastic differential equation for the second order statistics in a Markov approximation.

  19. Parameter Estimation and Model Validation of Nonlinear Dynamical Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abarbanel, Henry; Gill, Philip

    In the performance period of this work under a DOE contract, the co-PIs, Philip Gill and Henry Abarbanel, developed new methods of statistical data assimilation for problems of DOE interest, including geophysical and biological problems. This included numerical optimization algorithms for variational principles and new parallel-processing Monte Carlo routines for performing the path integrals of statistical data assimilation. These results are summarized in the monograph "Predicting the Future: Completing Models of Observed Complex Systems" by Henry Abarbanel, published by Springer-Verlag in June 2013. Additional results and details have appeared in the peer-reviewed literature.

  20. SPIPS: Spectro-Photo-Interferometry of Pulsating Stars

    NASA Astrophysics Data System (ADS)

    Mérand, Antoine

    2017-10-01

    SPIPS (Spectro-Photo-Interferometry of Pulsating Stars) combines radial velocimetry, interferometry, and photometry to estimate physical parameters of pulsating stars, including the presence of infrared excess, color excess, Teff, and the distance/p-factor ratio. The global model-based parallax-of-pulsation method is implemented in Python. Derived parameters have a high level of confidence: statistical precision is improved (compared to other methods) due to the large number of data taken into account, accuracy is improved by using consistent physical modeling, and the reliability of the derived parameters is strengthened by redundancy in the data.

  1. Validating an Air Traffic Management Concept of Operation Using Statistical Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning; Davies, Misty Dawn

    2013-01-01

    Validating a concept of operation for a complex, safety-critical system (like the National Airspace System) is challenging because of the high dimensionality of the controllable parameters and the infinite number of states of the system. In this paper, we use statistical modeling techniques to explore the behavior of a conflict detection and resolution algorithm designed for the terminal airspace. These techniques predict the robustness of the system simulation to both nominal and off-nominal behaviors within the overall airspace. They can also be used to evaluate the output of the simulation against recorded airspace data. Additionally, the techniques carry with them a mathematical measure of the worth of each prediction: a statistical uncertainty for any robustness estimate. Uncertainty quantification (UQ) is the process of quantitative characterization and, ultimately, reduction of uncertainties in complex systems. UQ is important for understanding the influence of uncertainties on the behavior of a system and is therefore valuable for design, analysis, and verification and validation. In this paper, we apply advanced statistical modeling methodologies and techniques to an advanced air traffic management system, namely the Terminal Tactical Separation Assured Flight Environment (T-TSAFE). We show initial results for a parameter analysis and safety boundary (envelope) detection in the high-dimensional parameter space. For our boundary analysis, we developed a new sequential approach based upon the design of computer experiments, allowing us to incorporate knowledge from domain experts into our modeling and to determine the most likely boundary shapes and their parameters. We carried out the analysis on system parameters and describe an initial approach that will allow us to include time-series inputs, such as radar track data, into the analysis.

  2. A Handbook of Sound and Vibration Parameters

    DTIC Science & Technology

    1978-09-18

    ... fixed in space. (Reference 1.) No motion at any node. Static Divergence: (See Divergence.) Statistical Energy Analysis (SEA): Statistical energy analysis is ... parameters of the circuits come from statistics of the vibrational characteristics of the structure. Statistical energy analysis is uniquely successful ...

  3. Emulating Simulations of Cosmic Dawn for 21 cm Power Spectrum Constraints on Cosmology, Reionization, and X-Ray Heating

    NASA Astrophysics Data System (ADS)

    Kern, Nicholas S.; Liu, Adrian; Parsons, Aaron R.; Mesinger, Andrei; Greig, Bradley

    2017-10-01

    Current and upcoming radio interferometric experiments are aiming to make a statistical characterization of the high-redshift 21 cm fluctuation signal spanning the hydrogen reionization and X-ray heating epochs of the universe. However, connecting 21 cm statistics to the underlying physical parameters is complicated by the theoretical challenge of modeling the relevant physics at computational speeds quick enough to enable exploration of the high-dimensional and weakly constrained parameter space. In this work, we use machine learning algorithms to build a fast emulator that can accurately mimic an expensive simulation of the 21 cm signal across a wide parameter space. We embed our emulator within a Markov Chain Monte Carlo framework in order to perform Bayesian parameter constraints over a large number of model parameters, including those that govern the Epoch of Reionization, the Epoch of X-ray Heating, and cosmology. As a worked example, we use our emulator to present an updated parameter constraint forecast for the Hydrogen Epoch of Reionization Array experiment, showing that its characterization of a fiducial 21 cm power spectrum will considerably narrow the allowed parameter space of reionization and heating parameters, and could help strengthen Planck's constraints on σ_8. We provide both our generalized emulator code and its implementation specifically for 21 cm parameter constraints as publicly available software.
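
    A compact sketch of the emulate-then-sample workflow under strong simplifications: a toy two-parameter "simulator", a scikit-learn Gaussian process in place of the authors' emulator, and a bare-bones Metropolis sampler in place of their MCMC framework.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      # Stand-in "simulator": maps 2 parameters to a power-spectrum-like amplitude
      def simulator(theta):
          return np.sin(theta[:, 0]) + theta[:, 1] ** 2

      rng = np.random.default_rng(8)
      X_train = rng.uniform(0, 2, size=(200, 2))           # design of training runs
      y_train = simulator(X_train)
      emu = GaussianProcessRegressor(kernel=RBF(0.5), normalize_y=True).fit(X_train, y_train)

      # Metropolis MCMC using the cheap emulator instead of the expensive simulation
      y_obs, sigma = 1.3, 0.05
      def log_post(theta):
          if np.any(theta < 0) or np.any(theta > 2):
              return -np.inf                               # flat prior on [0, 2]^2
          pred = emu.predict(theta[None, :])[0]
          return -0.5 * ((pred - y_obs) / sigma) ** 2

      chain, theta = [], np.array([1.0, 1.0])
      lp = log_post(theta)
      for _ in range(5000):
          prop = theta + rng.normal(0, 0.1, size=2)
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:         # accept/reject step
              theta, lp = prop, lp_prop
          chain.append(theta.copy())
      print("posterior mean:", np.mean(chain[1000:], axis=0))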

  4. Revised Planning Methodology For Signalized Intersections And Operational Analysis Of Exclusive Left-Turn Lanes, Part-II: Models And Procedures (Final Report)

    DOT National Transportation Integrated Search

    1996-04-01

    This report also describes the procedures for direct estimation of intersection capacity with simulation, including a set of rigorous statistical tests for simulation parameter calibration from field data.

  5. Interpolative modeling of GaAs FET S-parameter data bases for use in Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Campbell, L.; Purviance, J.

    1992-01-01

    A statistical interpolation technique is presented for modeling GaAs FET S-parameter measurements for use in the statistical analysis and design of circuits. This is accomplished by interpolating among the measurements in a GaAs FET S-parameter data base in a statistically valid manner.

  6. Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    In order to provide a complete description of a material's thermoelectric power factor, an uncertainty interval is required in addition to the measured nominal value. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. This work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters, in addition to uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.
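
    A small sketch of one conventional way to combine such error sources: root-sum-square aggregation of bias and precision terms, propagated into the power factor PF = S²/ρ. All numerical values and the specific bias contributions listed are placeholders, not ZEM-3 specifications.

      import numpy as np

      def combined_u(bias_terms, precision):
          """Root-sum-square of systematic bias contributions and statistical precision."""
          return np.sqrt(np.sum(np.square(bias_terms)) + precision ** 2)

      S, rho = 180e-6, 2.0e-5          # Seebeck coefficient (V/K), resistivity (ohm*m)
      u_S   = combined_u([3e-6, 2e-6, 1e-6], 2e-6)   # probe placement, cold finger, ... (V/K)
      u_rho = combined_u([4e-7, 3e-7], 2e-7)         # geometry tolerance, ... (ohm*m)

      # Power factor PF = S^2 / rho; relative uncertainties add in quadrature,
      # with the Seebeck term doubled because S enters squared.
      pf = S**2 / rho
      u_pf = pf * np.sqrt((2 * u_S / S) ** 2 + (u_rho / rho) ** 2)
      print(f"PF = {pf*1e3:.3f} +/- {u_pf*1e3:.3f} mW/(m*K^2)")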

  7. The effects of platelet apheresis on blood saving and coagulation in bilateral total hip replacement: A prospective study on 60 patients.

    PubMed

    Qu, Zhijun; Wang, Geng; Xu, Chengshi; Zhang, Dazhi; Qu, Xiangdong; Zhou, Haibin; Ma, Jun

    2016-10-01

    Preoperative platelet-rich plasma (PRP) harvesting has been used in cardiopulmonary surgery for more than 10 years. There is no previous study dealing with PRP in bilateral total hip replacement. This study aimed to investigate the effects of PRP on blood saving and blood coagulation function in patients undergoing bilateral total hip replacement. A prospective, randomized clinical trial was conducted. Sixty patients were enrolled: 30 patients undergoing PRP harvest in the PRP group and 30 controls. The surgery time, total transfusion volume, blood loss, allogenic blood transfusion, autologous blood transfusion, urine volume, drainage volume, blood parameters (including fibrinogen, D-dimer, prothrombin time, international normalized ratio, activated partial thromboplastin time, platelet count, and haemoglobin), thrombelastogram (TEG), and blood-gas parameters were studied perioperatively. The measurement data were analyzed statistically. There was no statistical difference between the two groups in baseline characteristics, surgery time, total transfusion volume, blood loss, autologous blood transfusion, etc. Allogenic blood transfusion in the PRP group was less than in the control group, with a statistical difference (p = 0.024). Fibrinogen in the PRP group was higher than in the control group (p = 0.008). Among the TEG indicators, activated clotting time and coagulation time K in the PRP group were less than in the control group. Clotting rate and maximum amplitude in the PRP group were higher. The blood-gas parameters presented no statistical difference. The results suggested that PRP probably plays a positive role in blood coagulation function as well as blood saving in patients with bilateral total hip replacement. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  8. Multivariate Meta-Analysis of Heterogeneous Studies Using Only Summary Statistics: Efficiency and Robustness

    PubMed Central

    Liu, Dungang; Liu, Regina; Xie, Minge

    2014-01-01

    Meta-analysis has been widely used to synthesize evidence from multiple studies for common hypotheses or parameters of interest. However, it has not yet been fully developed for incorporating heterogeneous studies, which arise often in applications due to different study designs, populations, or outcomes. For heterogeneous studies, the parameter of interest may not be estimable for certain studies, and in such a case these studies are typically excluded from conventional meta-analysis. The exclusion of part of the studies can lead to a non-negligible loss of information. This paper introduces a meta-analysis for heterogeneous studies by combining the confidence density functions derived from the summary statistics of the individual studies, hence referred to as the CD approach. It includes all the studies in the analysis and makes use of all information, direct as well as indirect. Under a general likelihood inference framework, this new approach is shown to have several desirable properties, including: i) it is asymptotically as efficient as the maximum likelihood approach using individual participant data (IPD) from all studies; ii) unlike the IPD analysis, it suffices to use summary statistics to carry out the CD approach, and individual-level data are not required; and iii) it is robust against misspecification of the working covariance structure of the parameter estimates. Besides its own theoretical significance, the last property also substantially broadens the applicability of the CD approach. All the properties of the CD approach are further confirmed by data simulated from a randomized clinical trial setting as well as by real data on aircraft landing performance. Overall, one obtains a unifying approach for combining summary statistics, subsuming many of the existing meta-analysis methods as special cases. PMID:26190875
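
    Under a normal approximation, each study's confidence density is N(theta_i, s_i²) and the CD combination reduces to inverse-variance weighting, with non-estimable studies contributing a flat density; the sketch below shows only this special case, with made-up study summaries.

      import numpy as np

      # Study summaries: point estimates and standard errors; None = parameter not estimable
      estimates = [0.42, 0.55, None, 0.38, 0.61]
      ses       = [0.10, 0.08, None, 0.15, 0.20]

      # Normal confidence densities combine by inverse-variance weighting;
      # a study with no information on the parameter gets weight 0 (flat density).
      w  = np.array([1 / s**2 if s is not None else 0.0 for s in ses])
      th = np.array([e if e is not None else 0.0 for e in estimates])

      theta_hat = np.sum(w * th) / np.sum(w)
      se_comb = 1 / np.sqrt(np.sum(w))
      print(f"combined estimate = {theta_hat:.3f} +/- {1.96 * se_comb:.3f} (95% half-width)")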

  9. A new statistical method for characterizing the atmospheres of extrasolar planets

    NASA Astrophysics Data System (ADS)

    Henderson, Cassandra S.; Skemer, Andrew J.; Morley, Caroline V.; Fortney, Jonathan J.

    2017-10-01

    By detecting light from extrasolar planets, we can measure their compositions and bulk physical properties. The technologies used to make these measurements are still in their infancy, and a lack of self-consistency suggests that previous observations have underestimated their systematic errors. We demonstrate a statistical method, newly applied to exoplanet characterization, which uses a Bayesian formalism to account for underestimated error bars. We use this method to compare photometry of a substellar companion, GJ 758b, with custom atmospheric models. Our method produces a probability distribution of atmospheric model parameters, including temperature, gravity, cloud model (f_sed), and chemical abundance, for GJ 758b. This distribution is less sensitive to highly variant data and appropriately reflects a greater uncertainty in the parameter fits.
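
    A minimal sketch of the underlying idea: fit a variance-inflation (error-bar underestimation) factor jointly with the model by maximum likelihood. The data, model fluxes, and single "scale" parameter are toy stand-ins for the paper's atmospheric-model comparison.

      import numpy as np
      from scipy.optimize import minimize

      # Photometric fluxes with *reported* errors that may be too small
      rng = np.random.default_rng(9)
      model = np.array([1.0, 0.8, 0.6, 0.9, 1.1, 0.7])      # model fluxes at observed bands
      sig_rep = np.full(6, 0.02)
      obs = model + rng.normal(0, 0.06, size=6)             # true scatter 3x the reported one

      def neg_loglik(params):
          scale, ln_f = params                              # flux scale + log error inflation
          var = sig_rep**2 * np.exp(2 * ln_f)               # inflate reported variances
          r = obs - scale * model
          return 0.5 * np.sum(r**2 / var + np.log(2 * np.pi * var))

      fit = minimize(neg_loglik, x0=[1.0, 0.0], method="Nelder-Mead")
      print(f"scale = {fit.x[0]:.2f}, error inflation factor = {np.exp(fit.x[1]):.1f}")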

  10. Correlations of fatty acid supplementation, aeroallergens, shampoo, and ear cleanser with multiple parameters in pruritic dogs.

    PubMed

    Nesbitt, Gene H; Freeman, Lisa M; Hannah, Steven S

    2004-01-01

    Seventy-two pruritic dogs were fed one of four diets controlled for n-6:n-3 fatty acid ratios and total dietary intake of fatty acids. Multiple parameters were evaluated, including clinical and cytological findings, aeroallergen testing, microbial sampling techniques, and the effects of an antifungal/antibacterial shampoo and ear cleanser. Significant correlations were observed between many clinical parameters, anatomical sampling sites, and microbial counts when data from the diet groups were combined. There were no statistically significant differences between individual diets for any of the clinical parameters. The importance of total clinical management in the control of pruritus was demonstrated.

  11. Selecting Summary Statistics in Approximate Bayesian Computation for Calibrating Stochastic Models

    PubMed Central

    Burr, Tom; Skurikhin, Alexei

    2013-01-01

    Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the “go-to” option when the data and/or parameter dimension is large because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the user requirement to choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics, the posterior distribution of model parameters is closely approximated and for other choices of summary statistics, the posterior distribution is not closely approximated. A strategy to choose effective summary statistics is suggested in cases where the stochastic computer model can be run at many trial parameter settings, as in the example. PMID:24288668
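
    A bare-bones ABC rejection sampler in Python, illustrating the role of the user-chosen summary statistics; the Gaussian toy model, tolerance, and prior are placeholders rather than the mitochondrial DNA population dynamics model from the paper.

      import numpy as np

      rng = np.random.default_rng(10)
      observed = rng.normal(3.0, 1.0, size=200)       # "data" from unknown mean theta = 3

      def summaries(x):
          return np.array([x.mean(), x.std()])        # user-chosen summary statistics

      s_obs, accepted = summaries(observed), []
      for _ in range(20_000):
          theta = rng.uniform(0, 10)                  # draw from the prior
          sim = rng.normal(theta, 1.0, size=200)      # run the stochastic model
          if np.linalg.norm(summaries(sim) - s_obs) < 0.15:
              accepted.append(theta)                  # keep parameters whose summaries match

      post = np.array(accepted)
      print(f"posterior mean = {post.mean():.2f}, 90% interval "
            f"({np.quantile(post, 0.05):.2f}, {np.quantile(post, 0.95):.2f})")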

  13. Phase 1 of the near term hybrid passenger vehicle development program, appendix A. Mission analysis and performance specification studies. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Traversi, M.; Barbarek, L. A. C.

    1979-01-01

    A handy reference for JPL minimum requirements and guidelines is presented as well as information on the use of the fundamental information source represented by the Nationwide Personal Transportation Survey. Data on U.S. demographic statistics and highway speeds are included along with methodology for normal parameters evaluation, synthesis of daily distance distributions, and projection of car ownership distributions. The synthesis of tentative mission quantification results, of intermediate mission quantification results, and of mission quantification parameters are considered and 1985 in place fleet fuel economy data are included.

  14. Characteristics of quantitative perfusion parameters on dynamic contrast‐enhanced MRI in mammographically occult breast cancer

    PubMed Central

    Ryu, Jung Kyu; Rhee, Sun Jung; Song, Jeong Yoon; Cho, Soo Hyun

    2016-01-01

    The purpose of this study was to compare the characteristics of quantitative perfusion parameters obtained from dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) in patients with mammographically occult (MO) breast cancers and those with mammographically visible (MV) breast cancers. Quantitative parameters (AUC, K_trans, k_ep, v_e, v_p, and w_i) from 13 MO breast cancers and 16 MV breast cancers were mapped after the DCE-MRI data were acquired. Various prognostic factors, including axillary nodal status, estrogen receptor (ER), progesterone receptor (PR), Ki-67, p53, E-cadherin, and human epidermal growth factor receptor 2 (HER2), were obtained in each group. Fisher's exact test was used to compare differences in the various prognostic factors between the two groups. The Mann-Whitney U test was applied to compare the quantitative parameters between these two groups. Finally, Spearman's correlation was used to investigate the relationships between perfusion indices and four factors (age, tumor size, Ki-67, and p53) for each group. Although age, tumor size, and the prognostic factors were not statistically different between the two groups, the mean values of the quantitative parameters, except w_i, were higher in the MV group than in the MO group, without statistical significance (p=0.219). The k_ep value was significantly different between the two groups (p=0.048), but the other parameters were not. In the MO group, v_p with size, v_e with p53, and K_trans and v_p with Ki-67 had significant correlations (p<0.05). However, in the MV group, only k_ep showed a significant correlation, with age. The k_ep value was the only perfusion parameter with a statistically significant difference between MO and MV breast cancers. PACS number(s): 87.19.U-, 87.61.-c PMID:27685105

  15. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.

  16. Large ensemble modeling of the last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert

    2016-05-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~20,000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data, and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.

  17. ANESTHETIC INDUCTION AND RECOVERY PARAMETERS IN BEARDED DRAGONS (POGONA VITTICEPS): COMPARISON OF ISOFLURANE DELIVERED IN 100% OXYGEN VERSUS 21% OXYGEN.

    PubMed

    O, Odette; Churgin, Sarah M; Sladky, Kurt K; Smith, Lesley J

    2015-09-01

    Inland bearded dragons (Pogona vitticeps, n=6) were anesthetized for 1 hr using isoflurane in either 100% oxygen or 21% oxygen (FI 21; medical-grade room air). Parameters of anesthetic depth were recorded throughout both induction and recovery by an observer blinded to the fraction of inspired oxygen (FiO2), including the loss and return of withdrawal and righting reflexes, muscle tone, ability to intubate or extubate, and return to spontaneous respiration. Physiologic data were recorded every 5 min throughout the anesthetic procedures, including heart rate, body temperature, end-tidal CO2, hemoglobin oxygen saturation (SpO2), and percent expired isoflurane. Lizards were subjected to application of a noxious stimulus (needle stick) at 0, 30, and 60 min, and responses recorded. Following a minimum 7-day washout period, the experiment was repeated with each lizard subjected to the other protocol in a randomized, complete crossover design. The only statistically significant difference was a lower mean SpO2 in the group inspiring 21% oxygen (P<0.0020). No statistically significant differences were detected in any parameters during induction or recovery; however, all values were uniformly shorter for the FI 21 group, indicating a possible clinically significant difference. A larger sample size may have detected statistically significant differences. Further studies are needed to evaluate these effects in other reptile species and with the concurrent use of injectable anesthetic and analgesic drugs.

  18. SCARE: A post-processor program to MSC/NASTRAN for the reliability analysis of structural ceramic components

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, J. P.

    1985-01-01

    A computer program was developed for calculating the statistical fast fracture reliability and failure probability of ceramic components. The program includes the two-parameter Weibull material fracture strength distribution model, using the principle of independent action for polyaxial stress states and Batdorf's shear-sensitive as well as shear-insensitive crack theories, all for volume distributed flaws in macroscopically isotropic solids. Both penny-shaped cracks and Griffith cracks are included in the Batdorf shear-sensitive crack response calculations, using Griffith's maximum tensile stress or critical coplanar strain energy release rate criteria to predict mixed mode fracture. Weibull material parameters can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and fracture data. The reliability prediction analysis uses MSC/NASTRAN stress, temperature and volume output, obtained from the use of three-dimensional, quadratic, isoparametric, or axisymmetric finite elements. The statistical fast fracture theories employed, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
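
    As a sketch of the statistical core, the snippet below evaluates the two-parameter Weibull fast-fracture failure probability from element stresses and volumes; it uses only the maximum principal stress per element (the full principle-of-independent-action sum over all tensile principal stresses, and the Batdorf theories, are omitted), and all inputs are placeholders rather than NASTRAN output.

      import numpy as np

      def weibull_failure_probability(sigma, volume, m, sigma0):
          """Two-parameter Weibull fast-fracture P_f for volume-distributed flaws.

          sigma  : maximum principal tensile stress in each element (MPa)
          volume : element volumes (mm^3); m, sigma0 : Weibull modulus and scale
          """
          tensile = np.clip(sigma, 0.0, None)               # compressive stresses do not fail
          risk = np.sum(volume * (tensile / sigma0) ** m)   # risk-of-rupture integral
          return 1.0 - np.exp(-risk)

      # Element stresses/volumes as would come from a finite element post-processing pass
      sigma = np.array([120.0, 250.0, 310.0, -40.0])   # MPa
      vol = np.array([5.0, 2.0, 1.0, 6.0])             # mm^3
      print(f"P_f = {weibull_failure_probability(sigma, vol, m=10.0, sigma0=400.0):.4f}")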

  19. OPEN PROBLEM: Orbits' statistics in chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Arnold, V.

    2008-07-01

    This paper shows how the measurement of the stochasticity degree of a finite sequence of real numbers, published by Kolmogorov in Italian in a journal of insurance statistics, can be usefully applied to measure the objective stochasticity degree of sequences originating from dynamical systems theory and from number theory. Namely, whenever the value of Kolmogorov's stochasticity parameter for a given sequence of numbers is too small (or too big), one may conclude that the conjecture describing this sequence as a sample of independent values of a random variable is highly improbable. Kolmogorov used this strategy fighting (in a paper in 'Doklady', 1940) against Lysenko, who had tried to disprove the classical genetics law of Mendel experimentally. Calculating his stochasticity parameter value for the numbers from Lysenko's experiment reports, Kolmogorov deduced that, while these numbers differed from the exact fulfilment of Mendel's 3 : 1 law, any smaller deviation would be a manifestation of falsification in the report's numbers. The calculation of the values of the stochasticity parameter would be useful for many other generators of pseudorandom numbers and for many other chaotically looking statistics, including even the prime numbers distribution (discussed in this paper as an example).
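
    A short sketch of Kolmogorov's stochasticity parameter, lambda_n = sqrt(n) * sup|F_n - F|, for a sequence conjectured to be an i.i.d. sample; values of lambda that are too large or too small both discredit the conjecture (the latter being the "too good to be true" case used against Lysenko). The data below are synthetic.

      import numpy as np
      from scipy import stats

      def kolmogorov_lambda(sample, cdf):
          """Kolmogorov's stochasticity parameter: sqrt(n) * sup |F_n - F|."""
          n = sample.size
          d = stats.kstest(sample, cdf).statistic
          return np.sqrt(n) * d

      rng = np.random.default_rng(11)
      genuine = rng.uniform(size=400)                      # behaves like a random sample
      too_regular = (np.arange(400) + 0.5) / 400           # suspiciously even spacing

      for name, x in [("genuine", genuine), ("too regular", too_regular)]:
          lam = kolmogorov_lambda(x, "uniform")
          p_big = stats.kstwobign.sf(lam)                  # P(lambda >= observed), limiting law
          print(f"{name:12s} lambda = {lam:.3f}, tail prob = {p_big:.3f}")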

  20. New insights into time series analysis. II - Non-correlated observations

    NASA Astrophysics Data System (ADS)

    Ferreira Lopes, C. E.; Cross, N. J. G.

    2017-08-01

    Context. Statistical parameters are used to draw conclusions in a vast number of fields such as finance, weather, industry, and science. These parameters are also used to identify variability patterns in photometric data in order to select non-stochastic variations that are indicative of astrophysical effects. New, more efficient selection methods are mandatory to analyze the huge amount of astronomical data. Aims: We seek to improve the current methods used to select non-stochastic variations in non-correlated data. Methods: We used standard and new data-mining parameters to analyze non-correlated data to find the best way to discriminate between stochastic and non-stochastic variations. A new approach that includes a modified Strateva function was used to select non-stochastic variations. Monte Carlo simulations and public time-domain data were used to estimate its accuracy and performance. Results: We introduce 16 modified statistical parameters covering different features of a statistical distribution, such as average, dispersion, and shape parameters. Many of the dispersion and shape parameters are unbound parameters, i.e. equations that do not require the calculation of the average. Unbound parameters are computed in a single loop, which decreases running time. Moreover, the majority of these parameters have lower errors than previous parameters, which is mainly observed for distributions with few measurements. A set of non-correlated variability indices, sample-size corrections, and a new noise model, along with tests of different apertures and cut-offs on the data (BAS approach), are introduced. The number of mis-selections is reduced by about 520% using a single waveband and by 1200% combining all wavebands. On the other hand, the even-mean also improves the correlated indices introduced in Paper I. The mis-selection rate is reduced by about 18% if the even-mean is used instead of the mean to compute the correlated indices in the WFCAM database. Even-statistics allows us to improve the effectiveness of both correlated and non-correlated indices. Conclusions: The selection of non-stochastic variations is improved by non-correlated indices. The even-averages provide a better estimation of the mean and median for almost all statistical distributions analyzed. The correlated variability indices proposed in the first paper of this series are also improved if the even-mean is used. The even-parameters will also be useful for classifying light curves in the last step of this project. We consider that the first step of this project, in which we set out new techniques and methods that provide a huge improvement in the efficiency of selection of variable stars, is now complete. Many of these techniques may be useful for a large number of fields. Next, we will commence a new step of this project concerning the analysis of period-search methods.
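
    The paper's specific even-parameters are not reproduced here, but the idea of an "unbound" dispersion measure, one that never requires the average and is accumulated in a single loop, can be illustrated with the Gini mean difference over sorted values (an illustrative stand-in, not one of the paper's 16 parameters):

        import numpy as np

        def gini_mean_difference(values):
            """Mean absolute pairwise difference, accumulated in a single loop
            over the sorted values; a dispersion measure that never touches the
            mean. Illustrative only, not the paper's even-parameters."""
            x = np.sort(np.asarray(values, dtype=float))
            n = x.size
            total = 0.0
            for i, xi in enumerate(x, start=1):   # single pass after sorting
                total += (2 * i - n - 1) * xi
            return 2.0 * total / (n * (n - 1))

        rng = np.random.default_rng(1)
        print(gini_mean_difference(rng.normal(size=1000)))  # ~ 2*sigma/sqrt(pi) for a Gaussian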

  1. Estimation of Quasi-Stiffness and Propulsive Work of the Human Ankle in the Stance Phase of Walking

    PubMed Central

    Shamaei, Kamran; Sawicki, Gregory S.; Dollar, Aaron M.

    2013-01-01

    Characterizing the quasi-stiffness and work of lower extremity joints is critical for evaluating human locomotion and designing assistive devices such as prostheses and orthoses intended to emulate the biological behavior of human legs. This work aims to establish statistical models that allow us to predict the ankle quasi-stiffness and net mechanical work for adults walking on level ground. During the stance phase of walking, the ankle joint propels the body through three distinctive phases of nearly constant stiffness known as the quasi-stiffness of each phase. Using a generic equation for the ankle moment obtained through an inverse dynamics analysis, we identify key independent parameters needed to predict ankle quasi-stiffness and propulsive work and also the functional form of each correlation. These parameters include gait speed, ankle excursion, and subject height and weight. Based on the identified form of the correlation and key variables, we applied linear regression on experimental walking data for 216 gait trials across 26 subjects (speeds of 0.75–2.63 m/s) to obtain statistical models of varying complexity. The most general forms of the statistical models include all the key parameters and have an R2 of 75% to 81% in the prediction of the ankle quasi-stiffnesses and propulsive work. The most specific models include only subject height and weight and could predict the ankle quasi-stiffnesses and work for optimal walking speed with an average error of 13% to 30%. We discuss how these models provide a useful framework and foundation for designing subject- and gait-specific prosthetic and exoskeletal devices intended to emulate biological ankle function during level ground walking. PMID:23555839
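
    As a rough sketch of how such statistical models can be fitted (the paper's exact functional forms are not reproduced here), an ordinary least-squares regression of quasi-stiffness on the key parameters named above might look like this, with entirely synthetic data:

        import numpy as np

        def fit_quasi_stiffness(v, theta, H, W, K):
            """OLS fit of quasi-stiffness K on gait speed v, ankle excursion theta,
            height H, and weight W. Returns coefficients and R^2."""
            X = np.column_stack([np.ones_like(v), v, theta, H, W])  # design matrix
            beta, *_ = np.linalg.lstsq(X, K, rcond=None)
            K_hat = X @ beta
            ss_res = np.sum((K - K_hat) ** 2)
            ss_tot = np.sum((K - K.mean()) ** 2)
            return beta, 1.0 - ss_res / ss_tot

        # Synthetic example (all numbers made up):
        rng = np.random.default_rng(0)
        v, theta = rng.uniform(0.8, 2.6, 40), rng.uniform(10, 30, 40)
        H, W = rng.uniform(1.5, 1.9, 40), rng.uniform(50, 100, 40)
        K = 2.0 + 1.5 * v + 0.05 * W + rng.normal(0, 0.2, 40)
        beta, r2 = fit_quasi_stiffness(v, theta, H, W, K)
        print(beta, r2)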

  2. PMMA/PS coaxial electrospinning: a statistical analysis on processing parameters

    NASA Astrophysics Data System (ADS)

    Rahmani, Shahrzad; Arefazar, Ahmad; Latifi, Masoud

    2017-08-01

    Coaxial electrospinning, as a versatile method for producing core-shell fibers, is known to be very sensitive to two classes of influential factors: material and processing parameters. Although coaxial electrospinning has been the focus of many studies, the effects of processing parameters on the outcomes of this method have not yet been well investigated. A good knowledge of the impacts of processing parameters and their interactions on coaxial electrospinning can make it possible to better control and optimize this process. Hence, in this study, the statistical technique of response surface methodology (RSM) using design of experiments was applied to four processing factors: voltage, distance, and core and shell flow rates. Transmission electron microscopy (TEM), scanning electron microscopy (SEM), oil immersion, and fluorescence microscopy were used to characterize fiber morphology. The core and shell diameters of fibers were measured, and the effects of all factors and their interactions were discussed. Two polynomial models with acceptable R-squares were proposed to describe the core and shell diameters as functions of the processing parameters. Voltage and distance were recognized as the most significant and influential factors on shell diameter, while core diameter was mainly under the influence of core and shell flow rates besides the voltage.
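
    A hedged sketch of the RSM fitting step: a full quadratic model (intercept, linear, two-factor interaction, and pure quadratic terms) in the four coded processing factors, fitted by least squares to synthetic responses; the coefficients below are illustrative, not the paper's.

        import numpy as np
        from itertools import combinations

        def rsm_design_matrix(X):
            """Full quadratic response-surface terms for factors X (n x k):
            intercept, linear, two-factor interactions, pure quadratics."""
            n, k = X.shape
            cols = [np.ones(n)] + [X[:, i] for i in range(k)]
            cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
            cols += [X[:, i] ** 2 for i in range(k)]
            return np.column_stack(cols)

        # Hypothetical factors: voltage, distance, core flow, shell flow (coded -1..1).
        rng = np.random.default_rng(2)
        X = rng.uniform(-1, 1, size=(30, 4))
        y = 500 + 80 * X[:, 0] - 60 * X[:, 1] + rng.normal(0, 10, 30)  # fake shell diameter (nm)
        beta, *_ = np.linalg.lstsq(rsm_design_matrix(X), y, rcond=None)
        print(beta[:5])   # intercept and linear effects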

  3. Statistical Study of ICMEs and Their Sheaths During Solar Cycle 23 (1996 - 2008)

    NASA Astrophysics Data System (ADS)

    Mitsakou, E.; Moussas, X.

    2014-08-01

    We have created a new catalog of 325 interplanetary coronal mass ejections (ICMEs) using their in-situ plasma signatures from 1996 to 2008; this time period includes Solar Cycle 23. The data set came from the OMNI near-Earth database. The one-minute resolution data that we used include magnetic-field strength, solar-wind speed, proton density, proton temperature, and plasma β. We compared this new catalog with other published catalogs. For every event, we indicated the presence of an ICME-driven shock. We identified the boundaries of ICMEs and their sheaths, and examined the statistical properties of characteristic parameters. We derived the duration and radial width of ICMEs and sheaths in the region near Earth. The statistical analysis of all events shows that, on average, sheaths travel faster than ICMEs, which indicates the expansion of CMEs in the interplanetary medium. They have higher mean magnetic-field strength values than ICMEs, and they are denser. They have higher mean proton temperature and plasma β than ICMEs, but they are smaller than ICMEs and last for a shorter time. The events were divided into different categories according to whether they included a shock and according to the phase of Solar Cycle 23 in which they are observed, i.e. ascending, maximum, or descending phase. We compared the different categories. We present a catalog of events available to the scientific community that studies ICMEs, and show the distribution and statistical properties of various parameters during these phenomena that govern the solar wind, the interplanetary medium, and space weather.

  4. Sequential Markov chain Monte Carlo filter with simultaneous model selection for electrocardiogram signal modeling.

    PubMed

    Edla, Shwetha; Kovvali, Narayan; Papandreou-Suppappola, Antonia

    2012-01-01

    Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex and the T wave) in a single heart beat, which can vary across individuals and diseases. Also, existing statistical ECG models exhibit a reliance upon obtaining a priori information from the ECG data by using preprocessing algorithms to initialize the filter parameters, or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that can perform simultaneous model selection, by adaptively choosing from different representations depending upon the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. In addition, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia.

  5. Establishment and Assessment of Plasma Disruption and Warning Databases from EAST

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Robert, Granetz; Xiao, Bingjia; Li, Jiangang; Yang, Fei; Li, Junjun; Chen, Dalong

    2016-12-01

    Disruption and disruption-warning databases for the EAST tokamak have been established by a disruption research group. The disruption database, based on Structured Query Language (SQL), comprises 41 disruption parameters, which include current quench characteristics, EFIT equilibrium characteristics, kinetic parameters, halo currents, and vertical motion. Presently, most disruption databases are based on plasma experiments of non-superconducting tokamak devices. The purposes of the EAST database are to find disruption characteristics and disruption statistics for the fully superconducting tokamak EAST, to elucidate the physics underlying tokamak disruptions, to explore the influence of disruptions on superconducting magnets, and to extrapolate toward future burning plasma devices. In order to quantitatively assess the usefulness of various plasma parameters for predicting disruptions, an SQL database for EAST similar to that of Alcator C-Mod has been created by compiling values for a number of proposed disruption-relevant parameters sampled from all plasma discharges in the 2015 campaign. Detailed statistical results and analysis of the two databases on the EAST tokamak are presented. Supported by the National Magnetic Confinement Fusion Science Program of China (No. 2014GB103000)

  6. Pattern statistics on Markov chains and sensitivity to parameter estimation

    PubMed Central

    Nuel, Grégory

    2006-01-01

    Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition. Usually its parameters must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant common words in a set of sequences, ...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. Conclusion: We establish that the use of high-order Markov models could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation. PMID:17044916

  7. Pattern statistics on Markov chains and sensitivity to parameter estimation.

    PubMed

    Nuel, Grégory

    2006-10-17

    In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition. Usually its parameters must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant common words in a set of sequences, ...). In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta method to give an explicit expression for sigma, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. We establish that the use of high-order Markov models could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation.
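
    A generic sketch of the delta method used above: the standard deviation of a statistic g(theta-hat) is approximated from the covariance of the parameter estimates via a first-order Taylor expansion (this is the general recipe, not the paper's specific derivation for pattern statistics):

        import numpy as np

        def delta_method_sd(g, theta_hat, cov):
            """First-order delta method: sd of g(theta_hat) given the covariance
            of the estimates, using a central-difference numerical gradient."""
            theta_hat = np.asarray(theta_hat, dtype=float)
            eps = 1e-6
            grad = np.empty_like(theta_hat)
            for i in range(theta_hat.size):
                step = np.zeros_like(theta_hat)
                step[i] = eps
                grad[i] = (g(theta_hat + step) - g(theta_hat - step)) / (2 * eps)
            return float(np.sqrt(grad @ cov @ grad))

        # Toy example: sd of the odds g(p) = p/(1-p) from an estimated probability.
        print(delta_method_sd(lambda t: t[0] / (1 - t[0]), [0.3], np.array([[0.002]])))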

  8. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, that represents an approximation of the cumulative distribution function (CDF) for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the CDF. These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF statistic. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
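
    A minimal sketch of this estimation scheme: the Kolmogorov-Smirnov distance between the EDF of the failure data and a three-parameter Weibull CDF is minimized with scipy's Powell routine (an illustration under synthetic data, not the authors' code):

        import numpy as np
        from scipy import optimize, stats

        def ks_statistic(params, data):
            """KS distance between the EDF of the data and a three-parameter
            Weibull CDF (shape m, location gamma, scale eta)."""
            m, gamma, eta = params
            if m <= 0 or eta <= 0 or gamma >= data.min():
                return 1.0                       # infeasible -> worst possible distance
            x = np.sort(data)
            n = x.size
            F = stats.weibull_min.cdf(x, m, loc=gamma, scale=eta)
            i = np.arange(1, n + 1)
            return max(np.max(i / n - F), np.max(F - (i - 1) / n))

        data = stats.weibull_min.rvs(2.5, loc=100.0, scale=300.0, size=60, random_state=3)
        res = optimize.minimize(ks_statistic, x0=[1.5, 50.0, 200.0],
                                args=(data,), method="Powell")
        print(res.x)   # estimated (shape, threshold, scale)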

  9. Surveillance of industrial processes with correlated parameters

    DOEpatents

    White, Andrew M.; Gross, Kenny C.; Kubic, William L.; Wigeland, Roald A.

    1996-01-01

    A system and method for surveillance of an industrial process. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions.
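
    A minimal sketch of the two ingredients described above, a Mahalanobis distance of the sensor vector from a normal-operation baseline and a Wald-type sequential probability ratio test; names and thresholds are illustrative, not the patented procedure:

        import numpy as np

        def mahalanobis_sq(x, mean, cov_inv):
            """Squared Mahalanobis distance of a sensor vector from the
            normal-operation baseline (generic sketch)."""
            d = x - mean
            return float(d @ cov_inv @ d)

        def sprt_alarm(log_lr_stream, a=0.01, b=0.01):
            """Wald sequential probability ratio test on a stream of
            log-likelihood ratios; a, b are error-rate targets."""
            upper, lower = np.log((1 - b) / a), np.log(b / (1 - a))
            s = 0.0
            for llr in log_lr_stream:
                s += llr
                if s >= upper:
                    return "alarm"
                if s <= lower:
                    return "normal"
            return "continue"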

  10. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2005-01-01

    A parametric cost model for ground-based telescopes is developed using multivariable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature are examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived.

  11. Progress in Turbulence Detection via GNSS Occultation Data

    NASA Technical Reports Server (NTRS)

    Cornman, L. B.; Goodrich, R. K.; Axelrad, P.; Barlow, E.

    2012-01-01

    The increased availability of radio occultation (RO) data offers the ability to detect and study turbulence in the Earth's atmosphere. An analysis of how RO data can be used to determine the strength and location of turbulent regions is presented. This includes the derivation of a model for the power spectrum of the log-amplitude and phase fluctuations of the permittivity (or index of refraction) field. The bulk of the paper is then concerned with the estimation of the model parameters. Parameter estimators are introduced and some of their statistical properties are studied. These estimators are then applied to simulated log-amplitude RO signals. This includes the analysis of global statistics derived from a large number of realizations, as well as case studies that illustrate various specific aspects of the problem. Improvements to the basic estimation methods are discussed, and their beneficial properties are illustrated. The estimation techniques are then applied to real occultation data. Only two cases are presented, but they illustrate some of the salient features inherent in real data.

  12. The effects of DRIE operational parameters on vertically aligned micropillar arrays

    NASA Astrophysics Data System (ADS)

    Miller, Kane; Li, Mingxiao; Walsh, Kevin M.; Fu, Xiao-An

    2013-03-01

    Vertically aligned silicon micropillar arrays have been created by deep reactive ion etching (DRIE) and used for a number of microfabricated devices, including microfluidic devices, micropreconcentrators, and photovoltaic cells. This paper delineates an experimental design performed on the Bosch DRIE process for micropillar arrays. The arrays are fabricated with direct-write optical lithography without a photomask, and the effects of DRIE process parameters, including etch cycle time, passivation cycle time, platen power, and coil power, on profile angle, scallop depth, and scallop peak-to-peak distance are studied by statistical design of experiments. Scanning electron microscope images are used for measuring the resultant profile angles and characterizing the scalloping effect on the pillar sidewalls. The experimental results indicate the effects of the determining factors (etch cycle time, passivation cycle time, and platen power) on the micropillar profile angles and scallop depths. An optimized DRIE process recipe for creating nearly 90° profile angles and smooth sidewalls (invisible scalloping) has been obtained as a result of the statistical design of experiments.

  13. Probabilistic evaluation of damage potential in earthquake-induced liquefaction in a 3-D soil deposit

    NASA Astrophysics Data System (ADS)

    Halder, A.; Miller, F. J.

    1982-03-01

    A probabilistic model to evaluate the risk of liquefaction at a site and to limit or eliminate damage during earthquake-induced liquefaction is proposed. The model is extended to consider three-dimensional nonhomogeneous soil properties. The parameters relevant to the liquefaction phenomenon are identified, including: (1) soil parameters; (2) parameters required to consider laboratory test and sampling effects; and (3) loading parameters. The fundamentals of risk-based design concepts pertinent to liquefaction are reviewed. A detailed statistical evaluation of the soil parameters in the proposed liquefaction model is provided, and the uncertainty associated with the estimation of in situ relative density is evaluated for both direct and indirect methods. It is found that, in evaluating liquefaction potential, the uncertainties in the load parameters could be higher than those in the resistance parameters.

  14. A multimodal wave spectrum-based approach for statistical downscaling of local wave climate

    USGS Publications Warehouse

    Hegermiller, Christie; Antolinez, Jose A A; Rueda, Ana C.; Camus, Paula; Perez, Jorge; Erikson, Li; Barnard, Patrick; Mendez, Fernando J.

    2017-01-01

    Characterization of wave climate by bulk wave parameters is insufficient for many coastal studies, including those focused on assessing coastal hazards and long-term wave climate influences on coastal evolution. This issue is particularly relevant for studies using statistical downscaling of atmospheric fields to local wave conditions, which are often multimodal in large ocean basins (e.g. the Pacific). Swell may be generated in vastly different wave generation regions, yielding complex wave spectra that are inadequately represented by a single set of bulk wave parameters. Furthermore, the relationship between atmospheric systems and local wave conditions is complicated by variations in arrival time of wave groups from different parts of the basin. Here, we address these two challenges by improving upon the spatiotemporal definition of the atmospheric predictor used in statistical downscaling of local wave climate. The improved methodology separates the local wave spectrum into “wave families,” defined by spectral peaks and discrete generation regions, and relates atmospheric conditions in distant regions of the ocean basin to local wave conditions by incorporating travel times computed from effective energy flux across the ocean basin. When applied to locations with multimodal wave spectra, including Southern California and Trujillo, Peru, the new methodology improves the ability of the statistical model to project significant wave height, peak period, and direction for each wave family, retaining more information from the full wave spectrum. This work forms the basis of statistical downscaling by weather types, which has recently been applied to coastal flooding and morphodynamic applications.

  15. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.

  16. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Kranz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.
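
    A minimal sketch of the likelihood approach described in these two records, maximum-likelihood estimation of a two-parameter Weibull model from fatigue data with type I (right) censored run-outs; a generic illustration, not the verified NASA software:

        import numpy as np
        from scipy import optimize

        def weibull_mle_censored(t, failed):
            """Two-parameter Weibull MLE with type I (right) censoring.
            t: times; failed: True for failures, False for censored run-outs."""
            t = np.asarray(t, float)
            failed = np.asarray(failed, bool)

            def negloglik(params):
                log_m, log_eta = params          # log scale keeps m, eta positive
                m, eta = np.exp(log_m), np.exp(log_eta)
                z = t / eta
                ll = (failed.sum() * np.log(m / eta)
                      + (m - 1.0) * np.log(z[failed]).sum()
                      - (z ** m).sum())          # survivors contribute -(t/eta)^m only
                return -ll

            res = optimize.minimize(negloglik, x0=[0.0, np.log(t.mean())],
                                    method="Nelder-Mead")
            return np.exp(res.x)                 # (shape m, scale eta)

        # Hypothetical gear test: three failures, three suspensions at 300 h.
        t = np.array([120.0, 190.0, 250.0, 300.0, 300.0, 300.0])
        failed = np.array([True, True, True, False, False, False])
        print(weibull_mle_censored(t, failed))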

  17. SEDIDAT: A BASIC program for the collection and statistical analysis of particle settling velocity data

    NASA Astrophysics Data System (ADS)

    Wright, Robyn; Thornberg, Steven M.

    SEDIDAT is a series of compiled IBM-BASIC (version 2.0) programs that direct the collection, statistical calculation, and graphic presentation of particle settling velocity and equivalent spherical diameter for samples analyzed using the settling tube technique. The programs follow a menu-driven format that is easily understood by students and scientists with little previous computer experience. Settling velocity is measured directly (cm/sec) and also converted into Chi units. Equivalent spherical diameter (reported in Phi units) is calculated using a modified Gibbs equation for different particle densities. Input parameters, such as water temperature, settling distance, particle density, run time, and Phi/Chi interval, are changed easily at operator discretion. Optional output to a dot-matrix printer includes a summary of moment and graphic statistical parameters, a tabulation of individual and cumulative weight percents, a listing of major distribution modes, and cumulative and histogram plots of raw time, settling velocity, Chi, and Phi data.
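
    For concreteness, the standard base-2 logarithmic transforms involved are easy to state; note that treating the program's Chi unit as -log2 of settling velocity in cm/s, by analogy with Phi, is an assumption here:

        import math

        def phi_from_diameter(d_mm):
            """Phi = -log2(diameter in mm), the standard grain-size scale."""
            return -math.log2(d_mm)

        def chi_from_velocity(v_cm_s):
            """Assumed: Chi = -log2(settling velocity in cm/s), a base-2 log
            transform analogous to Phi (the program defines its own units)."""
            return -math.log2(v_cm_s)

        print(phi_from_diameter(0.25))   # 0.25 mm fine sand -> 2.0 Phi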

  18. Predictive Model for the Design of Zwitterionic Polymer Brushes: A Statistical Design of Experiments Approach.

    PubMed

    Kumar, Ramya; Lahann, Joerg

    2016-07-06

    The performance of polymer interfaces in biology is governed by a wide spectrum of interfacial properties. With the ultimate goal of identifying design parameters for stem cell culture coatings, we developed a statistical model that describes the dependence of brush properties on surface-initiated polymerization (SIP) parameters. Employing a design of experiments (DOE) approach, we identified operating boundaries within which four gel architecture regimes can be realized, including a new regime of associated brushes in thin films. Our statistical model can accurately predict the brush thickness and the degree of intermolecular association of poly[{2-(methacryloyloxy) ethyl} dimethyl-(3-sulfopropyl) ammonium hydroxide] (PMEDSAH), a previously reported synthetic substrate for feeder-free and xeno-free culture of human embryonic stem cells. DOE-based multifunctional predictions offer a powerful quantitative framework for designing polymer interfaces. For example, model predictions can be used to decrease the critical thickness at which the wettability transition occurs by simply increasing the catalyst quantity from 1 to 3 mol %.

  19. Verification of Eulerian-Eulerian and Eulerian-Lagrangian simulations for turbulent fluid-particle flows

    DOE PAGES

    Patel, Ravi G.; Desjardins, Olivier; Kong, Bo; ...

    2017-09-01

    Here, we present a verification study of three simulation techniques for fluid–particle flows, including an Euler–Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature–based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model. We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.

  20. Conditions, interventions, and outcomes in nursing research: a comparative analysis of North American and European/International journals. (1981-1990).

    PubMed

    Abraham, I L; Chalifoux, Z L; Evers, G C; De Geest, S

    1995-04-01

    This study compared the conceptual foci and methodological characteristics of research projects which tested the effects of nursing interventions, published in four general nursing research journals with predominantly North American, and two with predominantly European/International authorship and readership. Dimensions and variables of comparison included: nature of subjects, design issues, statistical methodology, statistical power, and types of interventions and outcomes. Although some differences emerged, the most striking and consistent finding was that there were no statistically significant differences (and thus similarities) in the content foci and methodological parameters of the intervention studies published in both groups of journals. We conclude that European/International and North American nursing intervention studies, as reported in major general nursing research journals, are highly similar in the parameters studied, yet in need of overall improvement. Certainly, there is no empirical support for the common (explicit or implicit) ethnocentric American bias that leadership in nursing intervention research resides with and in the United States of America.

  1. Verification of Eulerian-Eulerian and Eulerian-Lagrangian simulations for turbulent fluid-particle flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, Ravi G.; Desjardins, Olivier; Kong, Bo

    Here, we present a verification study of three simulation techniques for fluid–particle flows, including an Euler–Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature–based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model. We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.

  2. Comparison of Sperm Parameters in Patients with Infertility Induced by Genital Infection versus Varicocele

    PubMed Central

    Pajovic, Bogdan; Dimitrovski, Antonio; Radojevic, Nemanja; Vukovic, Marko

    2015-01-01

    Background: Male infertility is a common and complex problem and, despite much research in this field, the major cause of infertility unfortunately remains unknown. Genital infection and varicocele are important causes of infertility. Aims: To compare the influence of genital infection and varicocele individually on male infertility based on semen analysis. Study Design: Cross-sectional study. Methods: The study included 120 infertile patients divided into two groups according to the presence of genital infection or varicocele. The first group included 60 examinees with proven genital infection, but without varicocele formation. The second included 60 patients with varicocele, regardless of the varicocele grade, but without genital infection. The fertility parameters were compared, and an assessment was performed of the impact of infection and varicocele on the quality of spermatogenesis. Results: There is a statistically significant difference regarding abnormal forms of spermatozoa (45.94±9.79 vs. 25.27±6.54) and progressive motility (8.15±1.24 vs. 24.95±7.2) between the two groups of patients. However, acidity of ejaculates, minimum sperm concentration, total spermatozoa motility, and ejaculate volume showed no statistically significant difference. Conclusion: The study showed a stronger negative influence of genital infection than of varicocele on fertility parameters. The significance of our study lies in the lack of contemporary research individually comparing the influence of varicocele and genital infection on male infertility. PMID:26185712

  3. Comparison of Sperm Parameters in Patients with Infertility Induced by Genital Infection versus Varicocele.

    PubMed

    Pajovic, Bogdan; Dimitrovski, Antonio; Radojevic, Nemanja; Vukovic, Marko

    2015-07-01

    Male infertility is a common and complex problem and, despite much research in this field, the major cause of infertility unfortunately remains unknown. Genital infection and varicocele are important causes of infertility. To compare the influence of genital infection and varicocele individually on male infertility based on semen analysis. Cross-sectional study. The study included 120 infertile patients divided into two groups according to the presence of genital infection or varicocele. The first group included 60 examinees with proven genital infection, but without varicocele formation. The second included 60 patients with varicocele, regardless of the varicocele grade, but without genital infection. The fertility parameters were compared, and an assessment was performed of the impact of infection and varicocele on the quality of spermatogenesis. There is a statistically significant difference regarding abnormal forms of spermatozoa (45.94±9.79 vs. 25.27±6.54) and progressive motility (8.15±1.24 vs. 24.95±7.2) between the two groups of patients. However, acidity of ejaculates, minimum sperm concentration, total spermatozoa motility, and ejaculate volume showed no statistically significant difference. The study showed a stronger negative influence of genital infection than of varicocele on fertility parameters. The significance of our study lies in the lack of contemporary research individually comparing the influence of varicocele and genital infection on male infertility.

  4. Improving the efficiency of the cardiac catheterization laboratories through understanding the stochastic behavior of the scheduled procedures.

    PubMed

    Stepaniak, Pieter S; Soliman Hamad, Mohamed A; Dekker, Lukas R C; Koolen, Jacques J

    2014-01-01

    In this study, we sought to analyze the stochastic behavior of Catheterization Laboratory (Cath Lab) procedures in our institution. Statistical models may help to improve estimated case durations to support management in the cost-effective use of expensive surgical resources. We retrospectively analyzed all the procedures performed in the Cath Labs in 2012. The duration of procedures is strictly positive (larger than zero) and usually has a large minimum duration. Because of the strictly positive character of the Cath Lab procedures, a fit of a lognormal model may be desirable. Having a minimum duration requires an estimate of the threshold (shift) parameter of the lognormal model; therefore, the 3-parameter lognormal model is of interest. To avoid heterogeneous groups of observations, we tested every group-cardiologist-procedure combination for the normal, 2-parameter, and 3-parameter lognormal distributions. The total number of elective and emergency procedures performed was 6,393 (8,186 h). The final analysis included 6,135 procedures (7,779 h). Electrophysiology (intervention) procedures fit the 3-parameter lognormal model in 86.1% (80.1%) of cases. Using Friedman test statistics, we conclude that the 3-parameter lognormal model is superior to the 2-parameter lognormal model. Furthermore, the 2-parameter lognormal model is superior to the normal model. Cath Lab procedures are well modelled by lognormal models. This information helps to improve and refine Cath Lab schedules and hence their efficient use.
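
    A minimal sketch of the 3-parameter lognormal fit: scipy's lognorm already carries a location (shift) parameter, so an unconstrained fit estimates the threshold directly, while fixing loc = 0 recovers the 2-parameter model for comparison; the case durations below are synthetic.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        # Fake case durations (min) with a 20-min minimum setup time:
        durations = 20.0 + rng.lognormal(mean=3.5, sigma=0.4, size=500)

        shape3, loc3, scale3 = stats.lognorm.fit(durations)            # 3-parameter
        shape2, loc2, scale2 = stats.lognorm.fit(durations, floc=0.0)  # 2-parameter
        print(f"estimated threshold (shift): {loc3:.1f} min")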

  5. Principle of maximum entropy for reliability analysis in the design of machine components

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.

  6. A perceptual space of local image statistics.

    PubMed

    Victor, Jonathan D; Thengone, Daniel J; Rizvi, Syed M; Conte, Mary M

    2015-12-01

    Local image statistics are important for visual analysis of textures, surfaces, and form. There are many kinds of local statistics, including those that capture luminance distributions, spatial contrast, oriented segments, and corners. While sensitivity to each of these kinds of statistics has been well studied, much less is known about visual processing when multiple kinds of statistics are relevant, in large part because the dimensionality of the problem is high and different kinds of statistics interact. To approach this problem, we focused on binary images on a square lattice - a reduced set of stimuli which nevertheless taps many kinds of local statistics. In this 10-parameter space, we determined psychophysical thresholds to each kind of statistic (16 observers) and all of their pairwise combinations (4 observers). Sensitivities and isodiscrimination contours were consistent across observers. Isodiscrimination contours were elliptical, implying a quadratic interaction rule, which in turn determined ellipsoidal isodiscrimination surfaces in the full 10-dimensional space, and made predictions for sensitivities to complex combinations of statistics. These predictions, including the prediction of a combination of statistics that was metameric to random, were verified experimentally. Finally, check size had only a mild effect on sensitivities over the range from 2.8 to 14 min, but sensitivities to second- and higher-order statistics were substantially lower at 1.4 min. In sum, local image statistics form a perceptual space that is highly stereotyped across observers, in which different kinds of statistics interact according to simple rules. Copyright © 2015 Elsevier Ltd. All rights reserved.
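
    A sketch of the quadratic interaction rule implied by elliptical isodiscrimination contours: perceptual distance is d(c) = sqrt(c'Qc), so the predicted threshold along any mixture direction u is 1/sqrt(u'Qu); the matrix entries below are illustrative, not the measured sensitivities.

        import numpy as np

        # Q's diagonal is ~ 1/threshold^2 per statistic; off-diagonal terms
        # encode pairwise interactions (illustrative values).
        Q = np.array([[4.0, 1.0],
                      [1.0, 9.0]])

        def threshold(u, Q):
            """Predicted threshold along direction u under the quadratic rule."""
            u = np.asarray(u, float)
            u = u / np.linalg.norm(u)
            return 1.0 / np.sqrt(u @ Q @ u)

        print(threshold([1, 0], Q))   # single-statistic threshold = 1/2
        print(threshold([1, 1], Q))   # predicted threshold for an equal mixture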

  7. A perceptual space of local image statistics

    PubMed Central

    Victor, Jonathan D.; Thengone, Daniel J.; Rizvi, Syed M.; Conte, Mary M.

    2015-01-01

    Local image statistics are important for visual analysis of textures, surfaces, and form. There are many kinds of local statistics, including those that capture luminance distributions, spatial contrast, oriented segments, and corners. While sensitivity to each of these kinds of statistics has been well studied, much less is known about visual processing when multiple kinds of statistics are relevant, in large part because the dimensionality of the problem is high and different kinds of statistics interact. To approach this problem, we focused on binary images on a square lattice – a reduced set of stimuli which nevertheless taps many kinds of local statistics. In this 10-parameter space, we determined psychophysical thresholds to each kind of statistic (16 observers) and all of their pairwise combinations (4 observers). Sensitivities and isodiscrimination contours were consistent across observers. Isodiscrimination contours were elliptical, implying a quadratic interaction rule, which in turn determined ellipsoidal isodiscrimination surfaces in the full 10-dimensional space, and made predictions for sensitivities to complex combinations of statistics. These predictions, including the prediction of a combination of statistics that was metameric to random, were verified experimentally. Finally, check size had only a mild effect on sensitivities over the range from 2.8 to 14 min, but sensitivities to second- and higher-order statistics were substantially lower at 1.4 min. In sum, local image statistics form a perceptual space that is highly stereotyped across observers, in which different kinds of statistics interact according to simple rules. PMID:26130606

  8. A practical approach for the scale-up of roller compaction process.

    PubMed

    Shi, Weixian; Sprockel, Omar L

    2016-09-01

    An alternative approach for the scale-up of ribbon formation during roller compaction was investigated, which required only one batch at the commercial scale to set the operational conditions. The scale-up of ribbon formation was based on a probability method, which was sufficient to describe the mechanism of ribbon formation at both scales. In this method, a statistical relationship between roller compaction parameters and ribbon attributes (thickness and density) was first defined with a DoE using a pilot Alexanderwerk WP120 roller compactor. While the milling speed was included in the design, it had no practical effect on granule properties within the study range despite its statistical significance. The statistical relationship was then adapted to a commercial Alexanderwerk WP200 roller compactor with one experimental run. The experimental run served as a calibration of the statistical model parameters. The proposed transfer method was then confirmed by conducting a mapping study on the Alexanderwerk WP200 using a factorial DoE, which showed a match between the predictions and the verification experiments. The study demonstrates the applicability of the roller compaction transfer method using the statistical model from the development scale calibrated with one experimental point at the commercial scale. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Instantaneous polarization statistic property of EM waves incident on time-varying reentry plasma

    NASA Astrophysics Data System (ADS)

    Bai, Bowen; Liu, Yanming; Li, Xiaoping; Yao, Bo; Shi, Lei

    2018-06-01

    An analytical method is proposed in this paper to study the effect of the time-varying reentry plasma sheath on the instantaneous polarization statistical properties of electromagnetic (EM) waves. Based on the disturbance properties of the hypersonic fluid, a spatial-temporal model of the time-varying reentry plasma sheath is established. An analytical technique referred to as transmission line analogy is developed to calculate the instantaneous transmission coefficient of EM wave propagation in time-varying plasma. Then, the instantaneous polarization statistical theory of EM wave propagation in the time-varying plasma sheath is developed. Taking the S-band telemetry right-hand circularly polarized wave as an example, the effects of incident angle and plasma parameters, including the electron density and the collision frequency, on the EM wave's polarization statistics are studied systematically. Statistical results indicate that the lower the collision frequency and the larger the electron density and incident angle, the worse the deterioration of the polarization properties. Meanwhile, at critical values of electron density, collision frequency, and incident angle, the transmitted waves have both right- and left-hand polarization modes, and the polarization mode will reverse. The calculation results could provide useful information for adaptive polarization receiving in a spacecraft's reentry communication.

  10. Surveillance of industrial processes with correlated parameters

    DOEpatents

    White, A.M.; Gross, K.C.; Kubic, W.L.; Wigeland, R.A.

    1996-12-17

    A system and method for surveillance of an industrial process are disclosed. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions. 10 figs.

  11. A Situational-Awareness System For Networked Infantry Including An Accelerometer-Based Shot-Identification Algorithm For Direct-Fire Weapons

    DTIC Science & Technology

    2016-09-01

    noise density and temperature sensitivity of these devices are all on the same order of magnitude. Even the worst-case noise density of the GCDC...accelerations from a handgun firing were distinct from other impulsive events on the wrist, such as using a hammer. Loeffler first identified potential shots by...spikes, taking various statistical parameters. He used a logistic regression model on these parameters and was able to classify 98.9% of shots

  12. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature were examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter were derived.

  13. Statistical Parameter Study of the Time Interval Distribution for Nonparalyzable, Paralyzable, and Hybrid Dead Time Models

    NASA Astrophysics Data System (ADS)

    Syam, Nur Syamsi; Maeng, Seongjin; Kim, Myo Gwang; Lim, Soo Yeon; Lee, Sang Hoon

    2018-05-01

    A large dead time of a Geiger Mueller (GM) detector may cause a large count loss in radiation measurements and consequently may cause distortion of the Poisson statistic of radiation events into a new distribution. The new distribution will have different statistical parameters compared to the original distribution. Therefore, the variance, skewness, and excess kurtosis in association with the observed count rate of the time interval distribution for well-known nonparalyzable, paralyzable, and nonparalyzable-paralyzable hybrid dead time models of a Geiger Mueller detector were studied using Monte Carlo simulation (GMSIM). These parameters were then compared with the statistical parameters of a perfect detector to observe the change in the distribution. The results show that the behaviors of the statistical parameters for the three dead time models were different. The values of the skewness and the excess kurtosis of the nonparalyzable model are equal or very close to those of the perfect detector, which are ≅2 for skewness, and ≅6 for excess kurtosis, while the statistical parameters in the paralyzable and hybrid model obtain minimum values that occur around the maximum observed count rates. The different trends of the three models resulting from the GMSIM simulation can be used to distinguish the dead time behavior of a GM counter; i.e. whether the GM counter can be described best by using the nonparalyzable, paralyzable, or hybrid model. In a future study, these statistical parameters need to be analyzed further to determine the possibility of using them to determine a dead time for each model, particularly for paralyzable and hybrid models.
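
    A minimal Monte Carlo sketch in the spirit of such a simulation (not the GMSIM code itself): Poisson arrivals are thinned by a nonparalyzable or paralyzable dead time, and the skewness and excess kurtosis of the recorded intervals are compared against the perfect-detector (exponential) values of 2 and 6.

        import numpy as np

        def observed_intervals(rate, dead_time, model, n_events=200_000, seed=5):
            """Intervals between counts *recorded* by a GM counter with the
            given dead time model ('nonparalyzable' or 'paralyzable')."""
            rng = np.random.default_rng(seed)
            arrivals = np.cumsum(rng.exponential(1.0 / rate, n_events))
            recorded = []
            blocked_until = -np.inf
            for t in arrivals:
                if t >= blocked_until:
                    recorded.append(t)
                    blocked_until = t + dead_time
                elif model == "paralyzable":
                    blocked_until = t + dead_time   # every event extends the dead period
            return np.diff(recorded)

        for model in ("nonparalyzable", "paralyzable"):
            dt = observed_intervals(rate=1e4, dead_time=1e-4, model=model)
            m, var = dt.mean(), dt.var()
            skew = np.mean((dt - m) ** 3) / var ** 1.5
            exkurt = np.mean((dt - m) ** 4) / var ** 2 - 3.0
            print(model, f"skewness={skew:.2f} excess kurtosis={exkurt:.2f}")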

  14. High-temperature behavior of a deformed Fermi gas obeying interpolating statistics.

    PubMed

    Algin, Abdullah; Senay, Mustafa

    2012-04-01

    An outstanding idea originally introduced by Greenberg is to investigate whether there is equivalence between intermediate statistics, which may be different from anyonic statistics, and q-deformed particle algebra. Also, a model to be studied for addressing such an idea could possibly provide us some new consequences about the interactions of particles as well as their internal structures. Motivated mainly by this idea, in this work, we consider a q-deformed Fermi gas model whose statistical properties enable us to effectively study interpolating statistics. Starting with a generalized Fermi-Dirac distribution function, we derive several thermostatistical functions of a gas of these deformed fermions in the thermodynamical limit. We study the high-temperature behavior of the system by analyzing the effects of q deformation on the most important thermostatistical characteristics of the system such as the entropy, specific heat, and equation of state. It is shown that such a deformed fermion model in two and three spatial dimensions exhibits the interpolating statistics in a specific interval of the model deformation parameter 0 < q < 1. In particular, for two and three spatial dimensions, it is found from the behavior of the third virial coefficient of the model that the deformation parameter q interpolates completely between attractive and repulsive systems, including the free boson and fermion cases. From the results obtained in this work, we conclude that such a model could provide much physical insight into some interacting theories of fermions, and could be useful to further study the particle systems with intermediate statistics.

  15. The power and robustness of maximum LOD score statistics.

    PubMed

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter values. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several sets of genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single set of genetic parameter values. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter values, appeared to be robust to the generating models.
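
    For the simplest fully informative case (no phenocopies, complete penetrance), the LOD score reduces to a one-parameter function of the recombination fraction theta, and maximizing over genetic parameters becomes a grid search; a minimal sketch (the paper's generating models are considerably richer):

        import numpy as np

        def max_lod(recombinants, nonrecombinants,
                    thetas=np.linspace(0.01, 0.49, 49)):
            """Maximum LOD over a grid of recombination fractions theta:
            LOD(theta) = log10[ theta^r (1-theta)^(n-r) / 0.5^n ]."""
            r = recombinants
            n = recombinants + nonrecombinants
            lod = (r * np.log10(thetas) + (n - r) * np.log10(1.0 - thetas)
                   - n * np.log10(0.5))
            i = np.argmax(lod)
            return thetas[i], lod[i]

        theta_hat, lod_max = max_lod(recombinants=2, nonrecombinants=18)
        print(f"theta_hat={theta_hat:.2f}, max LOD={lod_max:.2f}")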

  16. ZERODUR: deterministic approach for strength design

    NASA Astrophysics Data System (ADS)

    Hartmann, Peter

    2012-12-01

    There is increasing demand for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems. The data sets were too small to obtain distribution parameters with sufficient accuracy and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed infeasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to provide a much better fit to the data. Moreover, it delivers a threshold value, i.e. a minimum breakage stress, which allows statistical uncertainty to be removed by introducing a deterministic method for calculating design strength. Considerations from the theory of fracture mechanics, which have proven reliable in proof-test qualifications of delicate structures made from brittle materials, enable fatigue due to stress corrosion to be included in a straightforward way. With the formulae derived, either the lifetime can be calculated from a given stress or the allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull approach and are no longer subject to statistical uncertainty.
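
    The three-parameter Weibull form referred to above adds a threshold stress below which the failure probability is zero; a standard way to write it (notation assumed here, not taken from the paper) is

        F(\sigma) = 1 - \exp\!\left[-\left(\frac{\sigma - \sigma_0}{\eta}\right)^{m}\right],
        \qquad \sigma > \sigma_0; \qquad F(\sigma) = 0 \quad \text{for } \sigma \le \sigma_0,

    where \sigma_0 is the threshold (minimum breakage stress), \eta the scale parameter, and m the Weibull modulus. It is this \sigma_0 that supplies the deterministic minimum strength for design.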

  17. Cine phase-contrast MRI evaluation of normal aqueductal cerebrospinal fluid flow according to sex and age.

    PubMed

    Unal, Ozkan; Kartum, Alp; Avcu, Serhat; Etlik, Omer; Arslan, Halil; Bora, Aydin

    2009-12-01

    The aim of this study was cerebrospinal fluid (CSF) flow quantification in the cerebral aqueduct using the cine phase-contrast magnetic resonance imaging (MRI) technique in both sexes and five different age groups to provide normative data. Sixty subjects with no cerebral pathology were included in this study. Subjects were divided into five age groups: ≤14 years, 15-24 years, 25-34 years, 35-44 years, and ≥45 years. Phase, rephase, and magnitude images were acquired on a 1.5 T MR unit at the level of the cerebral aqueduct with a spoiled gradient echo through-plane cine phase-contrast sequence. At this level, peak flow velocity (cm/s), average flow rate (cm/s), average flow (L/min), volumes in cranial and caudal directions (mL), and net volumes (mL) were studied. There was a statistically significant difference in peak flow between the age group of ≤14 years and the older age groups. There were no statistically significant differences in average velocity, cranial and caudal volume, net volume, and average flow parameters among the different age groups. Statistically significant differences were not detected in flow parameters between sexes. When using cine phase-contrast MRI in the cerebral aqueduct, only the peak velocity showed a statistically significant difference between age groups; it was higher in subjects aged ≤14 years than in older age groups. When performing age-dependent clinical studies including adolescents, this should be taken into consideration.

  18. Intrinsic factor antibody negative atrophic gastritis; is it different from pernicious anaemia?

    PubMed

    Amarapurkar, D N; Amarapurkar, A D

    2010-01-01

    H. pylori gastritis and autoimmune gastritis are the two main types of chronic atrophic gastritis. Parietal cell antibody (PCA) and intrinsic factor antibody (IFA) are characteristic of autoimmune gastritis, of which IFA is more specific. Patients who are IFA negative are considered under the category of chronic atrophic gastritis. The aim was to differentiate IFA-positive from IFA-negative chronic atrophic gastritis. Fifty consecutive patients with biopsy-proven chronic atrophic gastritis were included in this study. All patients underwent haematological and biochemical tests, including serum LDH, vitamin B12, and fasting serum gastrin levels. PCA and IFA antibodies were tested in all patients. Multiple gastric biopsies from the body and antrum of the stomach were taken and evaluated for the presence of intestinal metaplasia, endocrine cell hyperplasia, carcinoid, and H. pylori infection. Patients were grouped as group A (IFA positive) and group B (IFA negative). The mean laboratory values and histological parameters were compared between the two groups using appropriate statistical methods. Eighteen patients were in group A (mean age 55.5 ± 13 years, male:female = 16:2) and thirty-two in group B (mean age 49.7 ± 13 years, male:female = 25:7). There was no statistically significant difference between the median values of haemoglobin, MCV, LDH, vitamin B12, and serum gastrin in the two groups. None of the histological parameters showed any significant difference. There was no statistically significant difference in haematological, biochemical, and histological parameters between IFA-positive and IFA-negative gastritis. These may represent a spectrum of the same disease, in which H. pylori may be responsible for initiating the process.

  19. Evaluation of body weight, body mass index, and body fat percentage changes in early stages of fixed orthodontic therapy.

    PubMed

    Sandeep, K Sai; Singaraju, Gowri Sankar; Reddy, V Karunakar; Mandava, Prasad; Bhavikati, Venkata N; Reddy, Rohit

    2016-01-01

    The aim of this study was to evaluate and compare the changes in body weight, body mass index (BMI), and body fat percentage (BFP) during the initial stages of fixed orthodontic treatment. The sample for this observational prospective study included 68 individuals with fixed orthodontic appliances in the age group of 18-25 years of both sexes (25 males and 43 females). The control group consisted of 60 individuals (24 males and 36 females). The weight, BMI, and BFP were measured using a Body Composition Monitor at three points of time: "T1" initial; "T2" after 1 month; and "T3" after 3 months. The results were tabulated and analyzed with the Statistical Package for the Social Sciences software. The mean changes in the different parameters, both between the study and control groups and between males and females in the study group, were compared using a two-tailed unpaired Student's t-test. Statistical significance was set at P ≤ 0.05. There was an overall decrease in body weight, BMI, and BFP after 1 month in the study cohort, which was statistically significant compared to the control group (P < 0.0001). This was followed by an increase in the parameters by the end of the 3rd month. Comparison of the parameters between the study and control groups at the start of treatment and at the end of the 3rd month showed no statistical significance. There was a marked variation in the changes of these parameters between males and females of the study group, which was statistically significant (P < 0.0001). There is a definite reduction in weight, BMI, and BFP at the end of the first month, followed by a gain in weight by the end of the 3rd month, although not fully back to the initial values.
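
    As a minimal illustration of the comparison described above, the sketch below runs a two-tailed unpaired Student's t-test on synthetic weight-change data; the group sizes match the study (68 and 60), but all values are invented.

    ```python
    # Hedged sketch of the study's two-tailed unpaired Student's t-test,
    # using synthetic weight-change data (invented values, not the study's).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    study = rng.normal(-0.8, 1.0, 68)    # weight change (kg) after 1 month, n = 68
    control = rng.normal(0.0, 1.0, 60)   # untreated controls, n = 60

    t, p = stats.ttest_ind(study, control)  # two-tailed, equal variances assumed
    print(f"t = {t:.2f}, P = {p:.4f}, significant at P <= 0.05: {p <= 0.05}")
    ```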

  20. An Economic Impact Study: How and Why To Do One.

    ERIC Educational Resources Information Center

    Graefe, Martin; Wells, Matt

    1996-01-01

    An economic impact study tells the community about a camp's contribution, and is good advertising. Describes an economic impact study and its benefits. Uses Concordia Language Villages' study to illustrate features of an impact study, including goals and scope, parameters and assumptions, statistical information, research methodology, review…

  1. A New Look at Bias in Aptitude Tests.

    ERIC Educational Resources Information Center

    Scheuneman, Janice Dowd

    1981-01-01

    Statistical bias in measurement and ethnic-group bias in testing are discussed, reviewing predictive and construct validity studies. Item bias is reconceptualized to include distance of item content from respondent's experience. Differing values of mean and standard deviation for bias parameter are analyzed in a simulation. References are…

  2. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    DOE PAGES

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    2016-07-20

    A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.
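
    To make the idea concrete, the sketch below shows one way field and space dependencies can enter a mismatch measure: a first-order grid-neighborhood Gaussian Markov random field precision matrix Q turns a model-minus-observation residual field r into the quadratic form r'Qr. The Laplacian-plus-diagonal construction, the grid size, and the kappa value are illustrative assumptions, not the paper's implementation.

    ```python
    # Hedged sketch: a dependency-aware quadratic-form mismatch statistic built
    # from a first-order neighborhood GMRF precision matrix on a small grid.
    import numpy as np
    import scipy.sparse as sp

    def gmrf_precision(nx, ny, kappa=0.1):
        """Precision matrix Q = graph Laplacian + kappa*I for a first-order
        neighborhood GMRF on an nx-by-ny grid (kappa > 0 keeps Q invertible)."""
        n = nx * ny
        idx = np.arange(n).reshape(ny, nx)
        rows, cols = [], []
        for dy, dx in ((0, 1), (1, 0)):          # right and down neighbors
            a = idx[:ny - dy, :nx - dx].ravel()
            b = idx[dy:, dx:].ravel()
            rows += [a, b]; cols += [b, a]       # symmetric adjacency
        rows, cols = np.concatenate(rows), np.concatenate(cols)
        A = sp.coo_matrix((np.ones(rows.size), (rows, cols)), shape=(n, n)).tocsr()
        deg = np.asarray(A.sum(axis=1)).ravel()
        return sp.diags(deg + kappa) - A

    Q = gmrf_precision(20, 20)
    resid = np.random.default_rng(1).normal(size=400)  # model-minus-observation field
    stat = float(resid @ (Q @ resid))                  # dependency-aware mismatch
    print(stat)
    ```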

  3. Characterizing microstructural features of biomedical samples by statistical analysis of Mueller matrix images

    NASA Astrophysics Data System (ADS)

    He, Honghui; Dong, Yang; Zhou, Jialing; Ma, Hui

    2017-03-01

    As one of the salient features of light, polarization contains abundant structural and optical information about media. Recently, as a comprehensive description of polarization properties, Mueller matrix polarimetry has been applied to various biomedical studies such as cancerous tissue detection. In previous works, it has been found that the structural information encoded in the 2D Mueller matrix images can be presented by other transformed parameters with a more explicit relationship to certain microstructural features. In this paper, we present a statistical analysis method to transform the 2D Mueller matrix images into frequency distribution histograms (FDHs) and their central moments to reveal the dominant structural features of samples quantitatively. The experimental results of porcine heart, intestine, stomach, and liver tissues demonstrate that the transformation parameters and central moments based on the statistical analysis of Mueller matrix elements have simple relationships to the dominant microstructural properties of biomedical samples, including the density and orientation of fibrous structures, the depolarization power, and the diattenuation and absorption abilities. It is shown in this paper that the statistical analysis of 2D images of Mueller matrix elements may provide quantitative or semi-quantitative criteria for biomedical diagnosis.
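
    In essence, the transformation described above amounts to histogramming the pixel values of a Mueller matrix element image and computing their central moments. A minimal sketch with a synthetic image (the element, image size, and bin count are arbitrary choices):

    ```python
    # Sketch: frequency distribution histogram (FDH) and central moments of one
    # (synthetic) Mueller matrix element image.
    import numpy as np

    m22 = np.random.default_rng(2).normal(0.6, 0.05, size=(256, 256))  # stand-in element
    values = m22.ravel()

    hist, edges = np.histogram(values, bins=100, density=True)  # the FDH

    mu = values.mean()
    central = {k: np.mean((values - mu) ** k) for k in (2, 3, 4)}
    skewness = central[3] / central[2] ** 1.5      # normalized 3rd central moment
    kurtosis = central[4] / central[2] ** 2        # normalized 4th central moment
    print(mu, central[2], skewness, kurtosis)
    ```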

  4. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.

  5. Probability distributions of the electroencephalogram envelope of preterm infants.

    PubMed

    Saji, Ryoya; Hirasawa, Kyoko; Ito, Masako; Kusuda, Satoshi; Konishi, Yukuo; Taga, Gentaro

    2015-06-01

    To determine the stationary characteristics of electroencephalogram (EEG) envelopes for prematurely born (preterm) infants and to investigate the intrinsic characteristics of early brain development in preterm infants. Twenty neurologically normal sets of EEGs recorded in infants with a post-conceptional age (PCA) range of 26-44 weeks (mean 37.5 ± 5.0 weeks) were analyzed. The Hilbert transform was applied to extract the envelope. We determined the most suitable probability distribution of the envelope and performed a statistical analysis. It was found that (i) the probability distributions for preterm EEG envelopes were best fitted by lognormal distributions at 38 weeks PCA or less, and by gamma distributions at 44 weeks PCA; (ii) the scale parameter of the lognormal distribution had positive correlations with PCA as well as a strong negative correlation with the percentage of low-voltage activity; (iii) the shape parameter of the lognormal distribution had significant positive correlations with PCA; (iv) the mode statistic showed significant linear relationships with PCA and was therefore considered a useful index for PCA prediction. These statistics, including the scale parameter of the lognormal distribution and the skewness and mode derived from a suitable probability distribution, may be good indexes for estimating the stationary nature of developing brain activity in preterm infants. The stationary characteristics, such as discontinuity, asymmetry, and unimodality, of preterm EEGs are well indicated by the statistics estimated from the probability distribution of the preterm EEG envelopes. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
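
    A minimal sketch of the envelope pipeline described above, with a synthetic amplitude-modulated signal standing in for preterm EEG: extract the envelope via the Hilbert transform, then fit a lognormal distribution to it.

    ```python
    # Sketch: Hilbert-transform envelope extraction and lognormal fit.
    # The signal is synthetic, not preterm EEG.
    import numpy as np
    from scipy.signal import hilbert
    from scipy import stats

    rng = np.random.default_rng(3)
    t = np.arange(0, 60, 1 / 250)                  # 60 s at 250 Hz
    eeg = rng.normal(size=t.size) * (1 + 0.5 * np.sin(2 * np.pi * 0.1 * t))

    envelope = np.abs(hilbert(eeg))                # instantaneous amplitude
    shape, loc, scale = stats.lognorm.fit(envelope, floc=0)  # location fixed at 0
    print(f"lognormal shape = {shape:.3f}, scale = {scale:.3f}")
    ```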

  6. Chemical freezeout parameters within generic nonextensive statistics

    NASA Astrophysics Data System (ADS)

    Tawfik, Abdel; Yassin, Hayam; Abo Elyazeed, Eman R.

    2018-06-01

    The particle production in relativistic heavy-ion collisions seems to take place in a dynamically disordered system which can be best described by an extended exponential entropy. To distinguish between the applicability of this entropy and that of Boltzmann-Gibbs (BG) statistics in generating various particle ratios, generic (non)extensive statistics is introduced to the hadron resonance gas model. Accordingly, the degree of (non)extensivity is determined by the possible modifications in the phase space. Both BG extensivity and Tsallis nonextensivity are included as special cases defined by specific values of the equivalence classes (c, d). We found that the particle ratios at energies ranging between 3.8 and 2760 GeV are best reproduced by nonextensive statistics, where c and d range between ~0.9 and ~1. The present work aims at illustrating that the proposed approach is capable of manifesting the statistical nature of the system of interest. We do not aim at deeper physical insights here. In other words, while the resulting nonextensivity is neither BG nor Tsallis, the freezeout parameters are found to be very compatible with BG and accordingly with the well-known freezeout phase diagram, which is in excellent agreement with recent lattice calculations. We conclude that the particle production is nonextensive but need not be accompanied by a radical change in the intensive or extensive thermodynamic quantities, such as internal energy and temperature. Only the two critical exponents defining the equivalence classes (c, d) are the physical parameters characterizing the (non)extensivity.

  7. General displaced SU(1, 1) number states: Revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dehghani, A., E-mail: alireza.dehghani@gmail.com, E-mail: a-dehghani@tabrizu.ac.ir

    2014-04-15

    The most general displaced number states, based on the bosonic and an irreducible representation of the Lie algebra symmetry of su(1, 1) and associated with the Calogero-Sutherland model, are introduced. Here, we utilize the Barut-Girardello displacement operator instead of its Klauder-Perelomov counterpart to construct a new kind of displaced number states, which can also be classified in the nonlinear coherent states regime with special nonlinearity functions. They depend on two parameters and can be converted into the well-known Barut-Girardello coherent and number states, respectively, depending on which of the parameters is set to zero. A discussion of the statistical properties of these states is included. Notable are their squeezing properties and anti-bunching effects, which can be enhanced by increasing the energy quantum number. Depending on the particular choice of the parameters in the above scenario, we are able to determine their compliance with flexible statistics. A major part of the paper is devoted to showing that these states should, in fact, also be considered a new kind of photon-added coherent states, which can be reproduced through an iterated action of a creation operator on new nonlinear Barut-Girardello coherent states; the latter also carry outstanding statistical features.

  8. Performance evaluation of spectral vegetation indices using a statistical sensitivity function

    USGS Publications Warehouse

    Ji, Lei; Peters, Albert J.

    2007-01-01

    A great number of spectral vegetation indices (VIs) have been developed to estimate biophysical parameters of vegetation. Traditional techniques for evaluating the performance of VIs are regression-based statistics, such as the coefficient of determination and root mean square error. These statistics, however, are not capable of quantifying the detailed relationship between VIs and biophysical parameters because the sensitivity of a VI is usually a function of the biophysical parameter instead of a constant. To better quantify this relationship, we developed a “sensitivity function” for measuring the sensitivity of a VI to biophysical parameters. The sensitivity function is defined as the first derivative of the regression function, divided by the standard error of the dependent variable prediction. The function elucidates the change in sensitivity over the range of the biophysical parameter. The Student's t- or z-statistic can be used to test the significance of VI sensitivity. Additionally, we developed a “relative sensitivity function” that compares the sensitivities of two VIs when the biophysical parameters are unavailable.
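
    Because the sensitivity function is defined as the first derivative of the regression function divided by the standard error of the dependent-variable prediction, it can be sketched in a few lines. Here the regression is a quadratic fit to synthetic VI-versus-parameter data and, as a simplification, a constant residual standard error stands in for the point-wise prediction error.

    ```python
    # Sketch of the sensitivity function s(x) = f'(x) / SE(x) with a quadratic
    # fit and a constant residual SE (a simplification of the paper's SE(x)).
    import numpy as np

    rng = np.random.default_rng(4)
    lai = rng.uniform(0, 6, 200)                   # biophysical parameter (e.g., LAI)
    vi = 0.9 * (1 - np.exp(-0.6 * lai)) + rng.normal(0, 0.03, lai.size)  # saturating VI

    f = np.poly1d(np.polyfit(lai, vi, 2))          # regression function f(x)
    df = f.deriv()                                 # first derivative f'(x)
    se = np.sqrt(np.sum((vi - f(lai)) ** 2) / (lai.size - 3))  # residual SE

    x = np.linspace(0.1, 6, 50)
    sensitivity = df(x) / se                       # large where the VI still responds
    print(x[np.argmax(np.abs(sensitivity))])       # parameter value of peak sensitivity
    ```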

  9. VizieR Online Data Catalog: The ESO DIBs Large Exploration Survey (Cox+, 2017)

    NASA Astrophysics Data System (ADS)

    Cox, N. L. J.; Cami, J.; Farhang, A.; Smoker, J.; Monreal-Ibero, A.; Lallement, R.; Sarre, P. J.; Marshall, C. C. M.; Smith, K. T.; Evans, C. J.; Royer, P.; Linnartz, H.; Cordiner, M. A.; Joblin, C.; van Loon, J. T.; Foing, B. H.; Bhatt, N. H.; Bron, E.; Elyajouri, M.; de Koter, A.; Ehrenfreund, P.; Javadi, A.; Kaper, L.; Khosroshadi, H. G.; Laverick, M.; Le Petit, F.; Mulas, G.; Roueff, E.; Salama, F.; Spaans, M.

    2018-01-01

    We constructed a statistically representative survey sample that probes a wide range of interstellar environment parameters including reddening E(B-V), visual extinction AV, total-to-selective extinction ratio RV, and molecular hydrogen fraction fH2. EDIBLES provides the community with optical (~305-1042nm) spectra at high spectral resolution (R~70000 in the blue arm and 100000 in the red arm) and high signal-to-noise (S/N; median value ~500-1000), for a statistically significant sample of interstellar sightlines. Many of the >100 sightlines included in the survey already have auxiliary ultraviolet, infrared, and/or polarisation data available on the dust and gas components. (2 data files).

  10. SU-E-T-79: Comparison of Doses Received by the Hippocampus in Patients Treated with Single Vs Multiple Isocenter Based Stereotactic Radiation Therapy to the Brain for Multiple Brain Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Algan, O; Giem, J; Young, J

    Purpose: To investigate the doses received by the hippocampus and normal brain tissue during a course of stereotactic radiotherapy utilizing a single isocenter (SI) versus multiple isocenters (MI) in patients with multiple intracranial metastases. Methods: Seven patients imaged with MRI including an SPGR sequence and diagnosed with 2–3 brain metastases were included in this retrospective study. Two sets of stereotactic IMRT treatment plans (MI vs SI) were generated. The hippocampus was contoured on SPGR sequences, and doses received by the hippocampus and whole brain were calculated. The prescribed dose was 25 Gy in 5 fractions. The two groups were compared using t-test analysis. Results: There were 17 lesions in 7 patients. The median tumor, right hippocampus, left hippocampus, and brain volumes were 3.37 cc, 2.56 cc, 3.28 cc, and 1417 cc, respectively. In comparing the two treatment plans, there was no difference in the PTV coverage except in the tail of the DVH curve. All tumors had V95 > 99.5%. The only statistically significant parameter was the V100 (72% vs 45%, p=0.002, favoring MI). All other evaluated parameters including the V95 and V98 did not reveal any statistically significant differences. None of the evaluated dosimetric parameters for the hippocampus (V100, V80, V60, V40, V20, V10, D100, D90, D70, D50, D30, D10) revealed any statistically significant differences (all p-values > 0.31) between MI and SI plans. The total brain dose was slightly higher in the SI plans, especially in the lower dose regions, although this difference was not statistically significant. Utilizing brain-sub-PTV volumes did not change these results. Conclusion: The use of SI treatment planning for patients with up to 3 brain metastases produces similar PTV coverage and similar normal tissue doses to the hippocampus and the brain compared to MI plans. SI treatment planning should be considered in patients with multiple brain metastases undergoing stereotactic treatment.

  11. Electromagnetic wave scattering from rough terrain

    NASA Astrophysics Data System (ADS)

    Papa, R. J.; Lennon, J. F.; Taylor, R. L.

    1980-09-01

    This report presents two aspects of a program designed to calculate electromagnetic scattering from rough terrain: (1) the use of statistical estimation techniques to determine topographic parameters and (2) the results of a single-roughness-scale scattering calculation based on those parameters, including comparison with experimental data. In the statistical part of the present calculation, digitized topographic maps are used to generate data bases for the required scattering cells. The application of estimation theory to the data leads to the specification of statistical parameters for each cell. The estimated parameters are then used in a hypothesis test to decide on a probability density function (PDF) that represents the height distribution in the cell. Initially, the formulation uses a single observation of the multivariate data. A subsequent approach involves multiple observations of the heights on a bivariate basis, and further refinements are being considered. The electromagnetic scattering analysis, the second topic, calculates the amount of specular and diffuse multipath power reaching a monopulse receiver from a pulsed beacon positioned over a rough Earth. The program allows for spatial inhomogeneities and multiple specular reflection points. The analysis of shadowing by the rough surface has been extended to the case where the surface heights are distributed exponentially. The calculated loss of boresight pointing accuracy attributable to diffuse multipath is then compared with the experimental results. The extent of the specular region, the use of localized height variations, and the effect of the azimuthal variation in power pattern are all assessed.

  12. The statistical analysis of energy release in small-scale coronal structures

    NASA Astrophysics Data System (ADS)

    Ulyanov, Artyom; Kuzin, Sergey; Bogachev, Sergey

    We present the results of a statistical analysis of impulsive flare-like brightenings, which occur in large numbers in the quiet regions of the solar corona. For our study, we utilized high-cadence observations performed with two EUV telescopes - TESIS/Coronas-Photon and AIA/SDO. In total, we processed 6 sequences of images, registered throughout the period between 2009 and 2013, covering the rising phase of the 24th solar cycle. Based on a high-speed DEM estimation method, we developed a new technique to evaluate the main parameters of detected events (geometrical sizes, duration, temperature, and thermal energy). We then obtained the statistical distributions of these parameters and examined their variations depending on the level of solar activity. The results imply that near the minimum of the solar cycle the energy release in the quiet corona is mainly provided by small-scale events (nanoflares), whereas larger events (microflares) prevail at the peak of activity. Furthermore, we investigated the coronal conditions that specified the formation and triggering of the registered flares. By means of photospheric magnetograms obtained with the MDI/SoHO and HMI/SDO instruments, we examined the topology of local magnetic fields at different stages: the pre-flare phase, the peak of intensity, and the ending phase. To do so, we introduced a number of topological parameters including the total magnetic flux, the distance between magnetic sources, and their mutual arrangement. The correlation found between changes in these parameters and the formation of flares may offer an important tool for flare forecasting.

  13. A multibody knee model with discrete cartilage prediction of tibio-femoral contact mechanics.

    PubMed

    Guess, Trent M; Liu, Hongzeng; Bhashyam, Sampath; Thiagarajan, Ganesh

    2013-01-01

    Combining musculoskeletal simulations with anatomical joint models capable of predicting cartilage contact mechanics would provide a valuable tool for studying the relationships between muscle force and cartilage loading. As a step towards producing multibody musculoskeletal models that include representation of cartilage tissue mechanics, this research developed a subject-specific multibody knee model that represented the tibia plateau cartilage as discrete rigid bodies that interacted with the femur through deformable contacts. Parameters for the compliant contact law were derived using three methods: (1) simplified Hertzian contact theory, (2) simplified elastic foundation contact theory and (3) parameter optimisation from a finite element (FE) solution. The contact parameters and contact friction were evaluated during a simulated walk in a virtual dynamic knee simulator, and the resulting kinematics were compared with measured in vitro kinematics. The effects on predicted contact pressures and cartilage-bone interface shear forces during the simulated walk were also evaluated. The compliant contact stiffness parameters had a statistically significant effect on predicted contact pressures as well as all tibio-femoral motions except flexion-extension. The contact friction was not statistically significant to contact pressures, but was statistically significant to medial-lateral translation and all rotations except flexion-extension. The magnitude of kinematic differences between model formulations was relatively small, but contact pressure predictions were sensitive to model formulation. The developed multibody knee model was computationally efficient and had a computation time 283 times faster than a FE simulation using the same geometries and boundary conditions.

  14. Reference charts for fetal biometric parameters in twin pregnancies according to chorionicity.

    PubMed

    Araujo Júnior, Edward; Ruano, Rodrigo; Javadian, Pouya; Martins, Wellington P; Elito, Julio; Pires, Claudio Rodrigues; Zanforlin Filho, Sebastião Marques

    2014-04-01

    The objective of this article is to determine reference values for fetal biometric parameters in twin pregnancies and to compare these values between monochorionic and dichorionic pregnancies. A retrospective cross-sectional study was conducted among 157 monochorionic and 176 dichorionic twin pregnancies between 14 and 38 weeks of gestation. Biometric measurements included the biparietal diameter (BPD), abdominal circumference (AC), femur length (FL), and estimated fetal weight (EFW). To evaluate the correlation between biometric parameters and gestational age, polynomial regression models were created, with adjustments using the coefficient of determination (R²). Comparison between monochorionic and dichorionic pregnancies was performed using analysis of covariance. The mean BPD, AC, FL, and EFW for the dichorionic pregnancies were 56.16 mm, 191.1 mm, 41.08 mm, and 816.1 g, respectively. The mean BPD, AC, FL, and EFW for the monochorionic pregnancies were 57.14 mm, 184.2 mm, 39.29 mm, and 723.4 g, respectively. There was a statistical difference between monochorionic and dichorionic pregnancies for all the biometric parameters (BPD p = 0.012; AC p = 0.047; FL p = 0.007; EFW p = 0.011). Reference curves of biometric parameters in twin pregnancies were determined. Biometric parameters were statistically different between monochorionic and dichorionic pregnancies. © 2014 John Wiley & Sons, Ltd.

  15. Disaster Metrics: Evaluation of de Boer's Disaster Severity Scale (DSS) Applied to Earthquakes.

    PubMed

    Bayram, Jamil D; Zuabi, Shawki; McCord, Caitlin M; Sherak, Raphael A G; Hsu, Edberdt B; Kelen, Gabor D

    2015-02-01

    Quantitative measurement of the medical severity following multiple-casualty events (MCEs) is an important goal in disaster medicine. In 1990, de Boer proposed a 13-point, 7-parameter scale called the Disaster Severity Scale (DSS). Parameters include cause, duration, radius, number of casualties, nature of injuries, rescue time, and effect on surrounding community. Hypothesis: This study aimed to examine the reliability and dimensionality (number of salient themes) of de Boer's DSS scale through its application to 144 discrete earthquake events. A search for earthquake events was conducted via National Oceanic and Atmospheric Administration (NOAA) and US Geological Survey (USGS) databases. Two experts in the field of disaster medicine independently reviewed and assigned scores for parameters that had no data readily available (nature of injuries, rescue time, and effect on surrounding community), and differences were reconciled via consensus. Principal Component Analysis was performed using SPSS Statistics for Windows Version 22.0 (IBM Corp; Armonk, New York USA) to evaluate the reliability and dimensionality of the DSS. A total of 144 individual earthquakes from 2003 through 2013 were identified and scored. Of 13 points possible, the mean score was 6.04, the mode = 5, minimum = 4, maximum = 11, and standard deviation = 2.23. Three parameters in the DSS had zero variance (i.e., the parameter received the same score in all 144 earthquakes). Because of the zero contribution to variance, these three parameters (cause, duration, and radius) were removed to run the statistical analysis. Cronbach's alpha score, a coefficient of internal consistency, for the remaining four parameters was found to be robust at 0.89. Principal Component Analysis showed uni-dimensional characteristics with only one component having an eigenvalue greater than one at 3.17. The 4-parameter DSS, however, suffered from restriction of scoring range on both parameter and scale levels. Jan de Boer's DSS in its 7-parameter format fails to hold statistically in a dataset of 144 earthquakes subjected to analysis. A modified 4-parameter scale was found to quantitatively assess medical severity more directly, but remains flawed due to range restriction on both individual parameter and scale levels. Further research is needed in the field of disaster metrics to develop a scale that is reliable in its complete set of parameters, capable of better fine discrimination, and uni-dimensional in measurement of the medical severity of MCEs.
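
    For reference, Cronbach's alpha for k items is k/(k-1) times (1 minus the ratio of the summed item variances to the variance of the total score). A sketch on synthetic 144-by-4 scores (the study's dimensions, but invented values):

    ```python
    # Sketch: Cronbach's alpha for 4 items scored over 144 events (synthetic data).
    import numpy as np

    rng = np.random.default_rng(5)
    common = rng.normal(size=(144, 1))                           # shared severity factor
    scores = np.rint(common + rng.normal(0, 0.6, (144, 4)) + 2)  # 144 events x 4 items

    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()                  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)                   # variance of total score
    alpha = k / (k - 1) * (1 - item_var / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")
    ```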

  16. Using the MCNP Taylor series perturbation feature (efficiently) for shielding problems

    NASA Astrophysics Data System (ADS)

    Favorite, Jeffrey

    2017-09-01

    The Taylor series or differential operator perturbation method, implemented in MCNP and invoked using the PERT card, can be used for efficient parameter studies in shielding problems. This paper shows how only two PERT cards are needed to generate an entire parameter study, including statistical uncertainty estimates (an additional three PERT cards can be used to give exact statistical uncertainties). One realistic example problem involves a detailed helium-3 neutron detector model and its efficiency as a function of the density of its high-density polyethylene moderator. The MCNP differential operator perturbation capability is extremely accurate for this problem. A second problem involves the density of the polyethylene reflector of the BeRP ball and is an example of first-order sensitivity analysis using the PERT capability. A third problem is an analytic verification of the PERT capability.

  17. Algorithmic detectability threshold of the stochastic block model

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.

  18. LANDSAT menhaden and thread herring resources investigation. [Gulf of Mexico

    NASA Technical Reports Server (NTRS)

    Kemmerer, A. J. (Principal Investigator); Brucks, J. T.; Butler, J. A.; Faller, K. H.; Holley, H. J.; Leming, T. D.; Savastano, K. J.; Vanselous, T. M.

    1977-01-01

    The author has identified the following significant results. The relationship between the distribution of menhaden and selected oceanographic parameters (water color, turbidity, and possibly chlorophyll concentrations) was established. Similar relationships for thread herring were not established nor were relationships relating to the abundance of either species. Use of aircraft and LANDSAT remote sensing instruments to measure or infer a set of basic oceanographic parameters was evaluated. Parameters which could be accurately inferred included surface water temperature, salinity, and color. Water turbidity (Secchi disk) was evaluated as marginally inferrable from the LANDSAT MSS data and chlorophyll-a concentrations as less than marginal. These evaluations considered the parameters only as experienced in the two test areas using available sensors and statistical techniques.

  19. Selection and Reporting of Statistical Methods to Assess Reliability of a Diagnostic Test: Conformity to Recommended Methods in a Peer-Reviewed Journal

    PubMed Central

    Park, Ji Eun; Han, Kyunghwa; Sung, Yu Sub; Chung, Mi Sun; Koo, Hyun Jung; Yoon, Hee Mang; Choi, Young Jun; Lee, Seung Soo; Kim, Kyung Won; Shin, Youngbin; An, Suah; Cho, Hyo-Min

    2017-01-01

    Objective To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Materials and Methods Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Results Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Conclusion Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology is necessary. PMID:29089821
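
    One of the methods whose reporting the study audits, the weighted kappa for ordinal ratings, can be computed directly. A sketch with synthetic ratings; the quadratic weighting is an assumption here, since choosing and reporting the scheme is exactly what the study checks.

    ```python
    # Sketch: quadratically weighted kappa between two raters on ordinal grades.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(6)
    rater1 = rng.integers(1, 5, 100)                            # grades 1-4
    rater2 = np.clip(rater1 + rng.integers(-1, 2, 100), 1, 4)   # mostly agrees

    kappa_w = cohen_kappa_score(rater1, rater2, weights="quadratic")
    print(f"weighted kappa = {kappa_w:.2f}")
    ```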

  20. Impact of spurious shear on cosmological parameter estimates from weak lensing observables

    DOE PAGES

    Petri, Andrea; May, Morgan; Haiman, Zoltán; ...

    2014-12-30

    Residual errors in shear measurements, after corrections for instrument systematics and atmospheric effects, can impact cosmological parameters derived from weak lensing observations. Here we combine convergence maps from our suite of ray-tracing simulations with random realizations of spurious shear. This allows us to quantify the errors and biases of the triplet (Ω_m, w, σ_8) derived from the power spectrum (PS), as well as from three different sets of non-Gaussian statistics of the lensing convergence field: Minkowski functionals (MFs), low-order moments (LMs), and peak counts (PKs). Our main results are as follows: (i) We find an order of magnitude smaller biases from the PS than in previous work. (ii) The PS and LMs yield biases much smaller than the morphological statistics (MFs, PKs). (iii) For strictly Gaussian spurious shear with integrated amplitude as low as its current estimate of σ_sys^2 ≈ 10^-7, biases from the PS and LMs would be unimportant even for a survey with the statistical power of the Large Synoptic Survey Telescope. However, we find that for surveys larger than ≈ 100 deg^2, non-Gaussianity in the noise (not included in our analysis) will likely be important and must be quantified to assess the biases. (iv) The morphological statistics (MFs, PKs) introduce important biases even for Gaussian noise, which must be corrected in large surveys. The biases are in different directions in (Ω_m, w, σ_8) parameter space, allowing self-calibration by combining multiple statistics. Our results warrant follow-up studies with more extensive lensing simulations and more accurate spurious shear estimates.

  1. Statistical analysis of multivariate atmospheric variables. [cloud cover

    NASA Technical Reports Server (NTRS)

    Tubbs, J. D.

    1979-01-01

    Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate to near-normal; (5) test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) test of fit for continuous distributions based upon the generalized minimum chi-square; (7) effect of correlated observations on confidence sets based upon chi-square statistics; and (8) generation of random variates from specified distributions.

  2. PyEvolve: a toolkit for statistical modelling of molecular evolution.

    PubMed

    Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A

    2004-01-05

    Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences - ignoring the biological significance of sequence differences. A suite of sophisticated likelihood based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for mutation of methylated CpG's, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10 species subset. Up to five-fold parallel performance gains over serial were recorded. Compared to leading alternative software, PyEvolve exhibited significantly better real world performance for parameter rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution, or the development of new methods in the field. The toolkit can be used interactively or by writing and executing scripts. The toolkit uses efficient processes for specifying the parameterisation of statistical models, and implements numerous optimisations that make highly parameter rich likelihood functions solvable within hours on multi-cpu hardware. PyEvolve can be readily adapted in response to changing computational demands and hardware configurations to maximise performance. PyEvolve is released under the GPL and can be downloaded from http://cbis.anu.edu.au/software.

  3. OPTESIM, a Versatile Toolbox for Numerical Simulation of Electron Spin Echo Envelope Modulation (ESEEM) that Features Hybrid Optimization and Statistical Assessment of Parameters

    PubMed Central

    Sun, Li; Hernandez-Guzman, Jessica; Warncke, Kurt

    2009-01-01

    Electron spin echo envelope modulation (ESEEM) is a technique of pulsed-electron paramagnetic resonance (EPR) spectroscopy. The analysis of ESEEM data to extract information about the nuclear and electronic structure of a disordered (powder) paramagnetic system requires accurate and efficient numerical simulations. A single coupled nucleus of known nuclear g value (gN) and spin I=1 can have up to eight adjustable parameters in the nuclear part of the spin Hamiltonian. We have developed OPTESIM, an ESEEM simulation toolbox, for automated numerical simulation of powder two- and three-pulse one-dimensional ESEEM for an arbitrary number (N) and type (I, gN) of coupled nuclei, and arbitrary mutual orientations of the hyperfine tensor principal axis systems for N>1. OPTESIM is based in the Matlab environment and includes the following features: (1) a fast algorithm for translation of the spin Hamiltonian into simulated ESEEM, (2) different optimization methods that can be hybridized to achieve an efficient coarse-to-fine grained search of the parameter space and convergence to a global minimum, (3) statistical analysis of the simulation parameters, which allows the identification of simultaneous confidence regions at specific confidence levels. OPTESIM also includes a geometry-preserving spherical averaging algorithm as default for N>1, and global optimization over multiple experimental conditions, such as the dephasing time (τ) for three-pulse ESEEM and external magnetic field values. Application examples for simulation of 14N coupling (N=1, N=2) in biological and chemical model paramagnets are included. Automated, optimized simulations by using OPTESIM converge on dramatically shorter time scales relative to manual simulations. PMID:19553148

  4. The use and misuse of statistical analyses. [in geophysics and space physics

    NASA Technical Reports Server (NTRS)

    Reiff, P. H.

    1983-01-01

    The statistical techniques most often used in space physics include Fourier analysis, linear correlation, auto- and cross-correlation, power spectral density, and superposed epoch analysis. Tests are presented which can evaluate the significance of the results obtained through each of these. Data presented without some form of error analysis are frequently useless, since they offer no way of assessing whether a bump on a spectrum or on a superposed epoch analysis is real or merely a statistical fluctuation. Among many of the published linear correlations, for instance, the uncertainty in the intercept and slope is not given, so that the significance of the fitted parameters cannot be assessed.
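
    As an example of the error analysis advocated above, a linear correlation can be reported with uncertainties on both fitted parameters and a p-value for the correlation itself. A sketch on synthetic data (the intercept_stderr field assumes SciPy 1.6 or later):

    ```python
    # Sketch: linear fit reported with slope/intercept uncertainties and the
    # significance of the correlation (synthetic data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    x = rng.normal(size=80)
    y = 0.4 * x + rng.normal(0, 1.0, 80)           # weak linear relation plus noise

    res = stats.linregress(x, y)
    print(f"slope = {res.slope:.2f} +/- {res.stderr:.2f}, "
          f"intercept = {res.intercept:.2f} +/- {res.intercept_stderr:.2f}")
    print(f"r = {res.rvalue:.2f}, p = {res.pvalue:.3g}")
    ```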

  5. Statistical analysis of speckle noise reduction techniques for echocardiographic images

    NASA Astrophysics Data System (ADS)

    Saini, Kalpana; Dewal, M. L.; Rohit, Manojkumar

    2011-12-01

    Echocardiography is a safe, easy, and fast technology for diagnosing cardiac diseases. As with other ultrasound images, echocardiographic images contain speckle noise. In some cases this speckle noise is useful, such as in motion detection, but in general noise removal is required for better analysis of the image and proper diagnosis. Different adaptive and anisotropic filters are included in the statistical analysis. Statistical parameters such as the Signal-to-Noise Ratio (SNR), Peak Signal-to-Noise Ratio (PSNR), and Root Mean Square Error (RMSE) are calculated as performance measures. Another important consideration is that blurring may occur during speckle removal, so it is preferred that a filter be able to enhance edges during noise removal.
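
    The three quoted performance measures are straightforward to compute. A sketch comparing a synthetic "denoised" image against a noise-free reference, assuming an 8-bit peak value of 255 for the PSNR:

    ```python
    # Sketch: RMSE, SNR, and PSNR between a reference image and a filtered one
    # (both synthetic stand-ins).
    import numpy as np

    rng = np.random.default_rng(8)
    reference = rng.uniform(0, 255, size=(128, 128))       # stand-in clean image
    denoised = reference + rng.normal(0, 5, reference.shape)

    mse = np.mean((reference - denoised) ** 2)
    rmse = np.sqrt(mse)
    snr = 10 * np.log10(np.mean(reference ** 2) / mse)     # dB
    psnr = 10 * np.log10(255.0 ** 2 / mse)                 # dB, 8-bit peak
    print(f"RMSE = {rmse:.2f}, SNR = {snr:.1f} dB, PSNR = {psnr:.1f} dB")
    ```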

  6. Maps on statistical manifolds exactly reduced from the Perron-Frobenius equations for solvable chaotic maps

    NASA Astrophysics Data System (ADS)

    Goto, Shin-itiro; Umeno, Ken

    2018-03-01

    Maps on a parameter space for expressing distribution functions are exactly derived from the Perron-Frobenius equations for a generalized Boole transform family. Here the generalized Boole transform family is a one-parameter family of maps defined on a subset of the real line whose invariant probability distribution function is the Cauchy distribution with some parameters. With this reduction, some relations between the statistical picture and the orbital one are shown. From the viewpoint of information geometry, the parameter space can be identified with a statistical manifold, and it is shown that the derived maps can be characterized. Also, with a symplectic structure induced from the statistical structure, symplectic and information-geometric aspects of the derived maps are discussed.
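
    For a concrete instance of a solvable chaotic map with a Cauchy invariant distribution, the classical map T(x) = (x - 1/x)/2 is conjugate to angle doubling through x = cot(pi*theta) and therefore preserves the standard Cauchy law. The paper's generalized family carries additional parameters not modeled in this sketch.

    ```python
    # Numerical check that T(x) = (x - 1/x)/2 preserves the standard Cauchy
    # distribution (the classical solvable case; the generalized family differs).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    x = stats.cauchy.rvs(size=200_000, random_state=rng)  # start in the invariant law

    for _ in range(10):                                   # iterate the chaotic map
        x = 0.5 * (x - 1.0 / x)

    q = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
    print(np.quantile(x, q))                              # sample quantiles ...
    print(stats.cauchy.ppf(q))                            # ... stay close to Cauchy
    ```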

  7. A growing social network model in geographical space

    NASA Astrophysics Data System (ADS)

    Antonioni, Alberto; Tomassini, Marco

    2017-09-01

    In this work we propose a new model for the generation of social networks that includes their often ignored spatial aspects. The model is a growing one and links are created either taking space into account, or disregarding space and only considering the degree of target nodes. These two effects can be mixed linearly in arbitrary proportions through a parameter. We numerically show that for a given range of the combination parameter, and for given mean degree, the generated network class shares many important statistical features with those observed in actual social networks, including the spatial dependence of connections. Moreover, we show that the model provides a good qualitative fit to some measured social networks.
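
    A minimal sketch of a growth rule of this type, assuming a linear mix: with probability mu the new node attaches to an existing node chosen by degree (preferential attachment), and otherwise by inverse spatial distance. The specific kernels are illustrative assumptions, not necessarily those of the paper.

    ```python
    # Sketch: growing spatial network mixing degree-driven and space-driven
    # attachment through a parameter mu (kernels are illustrative).
    import numpy as np

    rng = np.random.default_rng(10)
    n, mu = 500, 0.5
    pos = rng.uniform(size=(n, 2))                 # node positions in the unit square
    deg = np.zeros(n, dtype=int)
    edges = [(0, 1)]; deg[0] = deg[1] = 1          # seed link

    for new in range(2, n):
        old = np.arange(new)
        if rng.random() < mu:                      # degree-driven attachment
            p = deg[old] / deg[old].sum()
        else:                                      # space-driven attachment
            d = np.linalg.norm(pos[old] - pos[new], axis=1)
            p = (1 / d) / (1 / d).sum()
        target = rng.choice(old, p=p)
        edges.append((new, target))
        deg[new] += 1; deg[target] += 1

    print(f"mean degree = {deg.mean():.2f}, max degree = {deg.max()}")
    ```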

  8. On the Response of the Special Sensor Microwave/Imager to the Marine Environment: Implications for Atmospheric Parameter Retrievals. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1990-01-01

    A reasonably rigorous basis for understanding and extracting the physical information content of Special Sensor Microwave/Imager (SSM/I) satellite images of the marine environment is provided. To this end, a comprehensive algebraic parameterization is developed for the response of the SSM/I to a set of nine atmospheric and ocean surface parameters. The brightness temperature model includes a closed-form approximation to microwave radiative transfer in a non-scattering atmosphere and fitted models for surface emission and scattering based on geometric optics calculations for the roughened sea surface. The combined model is empirically tuned using suitable sets of SSM/I data and coincident surface observations. The brightness temperature model is then used to examine the sensitivity of the SSM/I to realistic variations in the scene being observed and to evaluate the theoretical maximum precision of global SSM/I retrievals of integrated water vapor, integrated cloud liquid water, and surface wind speed. A general minimum-variance method for optimally retrieving geophysical parameters from multichannel brightness temperature measurements is outlined, and several global statistical constraints of the type required by this method are computed. Finally, a unified set of efficient statistical and semi-physical algorithms is presented for obtaining fields of surface wind speed, integrated water vapor, cloud liquid water, and precipitation from SSM/I brightness temperature data. Features include: a semi-physical method for retrieving integrated cloud liquid water at 15 km resolution and with rms errors as small as approximately 0.02 kg/sq m; a 3-channel statistical algorithm for integrated water vapor which was constructed so as to have improved linear response to water vapor and reduced sensitivity to precipitation; and two complementary indices of precipitation activity (based on 37 GHz attenuation and 85 GHz scattering, respectively), each of which are relatively insensitive to variations in other environmental parameters.

  9. The Vertical Flux Method (VFM) for regional estimates of temporally and spatially varying nitrate fluxes in unsaturated zone and groundwater

    NASA Astrophysics Data System (ADS)

    Green, C. T.; Liao, L.; Nolan, B. T.; Juckem, P. F.; Ransom, K.; Harter, T.

    2017-12-01

    Process-based modeling of regional NO3- fluxes to groundwater is critical for understanding and managing water quality. Measurements of atmospheric tracers of groundwater age and dissolved-gas indicators of denitrification progress have the potential to improve estimates of NO3- reactive transport processes. This presentation introduces a regionalized version of a vertical flux method (VFM) that uses simple mathematical estimates of advective-dispersive reactive transport with regularization procedures to calibrate estimated tracer concentrations to observed equivalents. The calibrated VFM provides estimates of chemical, hydrologic, and reaction parameters (source concentration time series, recharge, effective porosity, dispersivity, reaction rate coefficients) and derived values (e.g. mean unsaturated zone travel time, eventual depth of the NO3- front) for individual wells. Statistical learning methods are used to extrapolate parameters and predictions from wells to continuous areas. The regional VFM was applied to 473 well samples in central-eastern Wisconsin. Chemical measurements included O2, NO3-, N2 from denitrification, and atmospheric tracers of groundwater age including carbon-14, chlorofluorocarbons, tritium, and tritiogenic helium. VFM results were consistent with observed chemistry, and calibrated parameters were in line with independent estimates. Results indicated that (1) unsaturated zone travel times were a substantial portion of the transit time to wells and streams, (2) fractions of N leached to groundwater have changed over time, with increasing fractions from manure and decreasing fractions from fertilizer, and (3) under current practices and conditions, 60% of the shallow aquifer will eventually be affected by NO3- contamination. Based on GIS coverages of variables related to soils, land use, and hydrology, the VFM results at individual wells were extrapolated regionally using boosted regression trees, a statistical learning approach that related the GIS variables to the VFM parameters and predictions. Future work will explore applications at larger scales with direct integration of the statistical prediction model with the mechanistic VFM.
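
    The extrapolation step can be sketched with gradient-boosted regression trees, mapping GIS-like covariates at sampled wells to a VFM-estimated parameter and predicting it at unsampled locations. Covariates, targets, and hyperparameters below are synthetic placeholders.

    ```python
    # Sketch: boosted regression trees regionalizing a (synthetic) VFM parameter
    # from (synthetic) GIS covariates.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(11)
    X = rng.uniform(size=(473, 5))                 # e.g., soil, land use, hydrology
    y = 2 * X[:, 0] - X[:, 1] ** 2 + rng.normal(0, 0.1, 473)  # stand-in parameter

    brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
    brt.fit(X, y)

    X_unsampled = rng.uniform(size=(1000, 5))      # covariates at unsampled locations
    y_unsampled = brt.predict(X_unsampled)         # regionalized parameter field
    print(y_unsampled[:5])
    ```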

  10. Minnesota State Grant Statistics Over Time. 2003 Edition.

    ERIC Educational Resources Information Center

    Minnesota Higher Education Services Office, 2004

    2004-01-01

    This report contains data on the number and dollar amounts of Minnesota State Grants received by Minnesota undergraduates for Fiscal Years 1978 through 2003. Combined federal Pell and State Grant awards are also reported. The report includes a table showing the parameters used in the State Grant Program from Fiscal Year 1984 through 2003. The…

  11. A Case Study to Examine Peer Grouping and Aspirant Selection. Professional File. Article 132, Fall 2013

    ERIC Educational Resources Information Center

    D'Allegro, Mary Lou; Zhou, Kai

    2013-01-01

    Peer selection based on the similarity of a couple of institutional parameters, by itself, is insufficient. Several other considerations, including clarity of purpose, alignment of institutional information to that purpose, identification of appropriate statistical procedures, review of preliminary peer sets, and the application of additional…

  12. Determining fundamental properties of matter created in ultrarelativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Novak, J.; Novak, K.; Pratt, S.; Vredevoogd, J.; Coleman-Smith, C. E.; Wolpert, R. L.

    2014-03-01

    Posterior distributions for physical parameters describing relativistic heavy-ion collisions, such as the viscosity of the quark-gluon plasma, are extracted through a comparison of hydrodynamic-based transport models to experimental results from 100A GeV + 100A GeV Au+Au collisions at the Relativistic Heavy Ion Collider. By simultaneously varying six parameters and by evaluating several classes of observables, we are able to explore the complex intertwined dependencies of observables on model parameters. The methods provide a full multidimensional posterior distribution for the model output, including a range of acceptable values for each parameter, and reveal correlations between them. The breadth of observables and the number of parameters considered here go beyond previous studies in this field. The statistical tools, which are based upon Gaussian process emulators, are tested in detail and should be extendable to larger data sets and a higher number of parameters.
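
    A hedged sketch of a Gaussian process emulator in this spirit: fit a GP to a small design of (six-parameter point, observable) pairs produced by a cheap stand-in function, then predict the observable, with uncertainty, at new parameter points. The kernel and design size are assumptions, not those of the study.

    ```python
    # Sketch: Gaussian process emulator over a 6-parameter space (toy model output).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(12)
    theta = rng.uniform(0, 1, size=(40, 6))        # 40 design points, 6 parameters
    obs = np.sin(3 * theta[:, 0]) + theta[:, 1] ** 2 + rng.normal(0, 0.01, 40)

    gp = GaussianProcessRegressor(
        kernel=ConstantKernel() * RBF(length_scale=np.ones(6)),
        normalize_y=True,
    )
    gp.fit(theta, obs)

    theta_new = rng.uniform(0, 1, size=(5, 6))
    mean, std = gp.predict(theta_new, return_std=True)  # prediction + uncertainty
    print(mean, std)
    ```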

  13. Statistical Aspects of North Atlantic Basin Tropical Cyclones During the Weather Satellite Era, 1960-2013. Part 2

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2014-01-01

    This Technical Publication (TP) is part 2 of a two-part study of the North Atlantic basin tropical cyclones that occurred during the weather satellite era, 1960-2013. In particular, this TP examines the inferred statistical relationships between 25 tropical cyclone parameters and 9 specific climate-related factors, including the (1) Oceanic Niño Index (ONI), (2) Southern Oscillation Index (SOI), (3) Atlantic Multidecadal Oscillation (AMO) index, (4) Quasi-Biennial Oscillation (QBO) index, (5) North Atlantic Oscillation (NAO) index of the Climate Prediction Center (CPC), (6) NAO index of the Climate Research Unit (CRU), (7) Armagh surface air temperature (ASAT), (8) Global Land-Ocean Temperature Index (GLOTI), and (9) Mauna Loa carbon dioxide (CO2) (MLCO2) index. Part 1 of this two-part study examined the statistical aspects of the 25 tropical cyclone parameters (e.g., frequencies, peak wind speed (PWS), accumulated cyclone energy (ACE), etc.) and provided the results of statistical testing (i.e., runs-testing, the t-statistic for independent samples, and Poisson distributions). Also, the study gave predictions for the frequencies of the number of tropical cyclones (NTC), number of hurricanes (NH), number of major hurricanes (NMH), and number of United States land-falling hurricanes (NUSLFH) expected for the 2014 season, based on the statistics of the overall interval 1960-2013, the subinterval 1995-2013, and whether the year 2014 would be either an El Niño year (ENY) or a non-El Niño year (NENY).

  14. Photospheric Magnetic Field Properties of Flaring versus Flare-quiet Active Regions. II. Discriminant Analysis

    NASA Astrophysics Data System (ADS)

    Leka, K. D.; Barnes, G.

    2003-10-01

    We apply statistical tests based on discriminant analysis to the wide range of photospheric magnetic parameters described in a companion paper by Leka & Barnes, with the goal of identifying those properties that are important for the production of energetic events such as solar flares. The photospheric vector magnetic field data from the University of Hawai'i Imaging Vector Magnetograph are well sampled both temporally and spatially, and we include here data covering 24 flare-event and flare-quiet epochs taken from seven active regions. The mean value and rate of change of each magnetic parameter are treated as separate variables, thus evaluating both the parameter's state and its evolution, to determine which properties are associated with flaring. Considering single variables first, Hotelling's T²-tests show small statistical differences between flare-producing and flare-quiet epochs. Even pairs of variables considered simultaneously, which do show a statistical difference for a number of properties, have high error rates, implying a large degree of overlap of the samples. To better distinguish between flare-producing and flare-quiet populations, larger numbers of variables are simultaneously considered; lower error rates result, but no unique combination of variables is clearly the best discriminator. The sample size is too small to directly compare the predictive power of large numbers of variables simultaneously. Instead, we rank all possible four-variable permutations based on Hotelling's T²-test and look for the most frequently appearing variables in the best permutations, with the interpretation that they are most likely to be associated with flaring. These variables include an increasing kurtosis of the twist parameter and a larger standard deviation of the twist parameter, but a smaller standard deviation of the distribution of the horizontal shear angle and a horizontal field that has a smaller standard deviation but a larger kurtosis. To support the "sorting all permutations" method of selecting the most frequently occurring variables, we show that the results of a single 10-variable discriminant analysis are consistent with the ranking. We demonstrate that individually, the variables considered here have little ability to differentiate between flaring and flare-quiet populations, but with multivariable combinations, the populations may be distinguished.

  15. Trans-dimensional inversion of microtremor array dispersion data with hierarchical autoregressive error models

    NASA Astrophysics Data System (ADS)

    Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.

    2012-02-01

    This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (v_s) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the v_s profile) and of the data-error statistics in the resulting v_s parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, v_s uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of v_s-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.

  16. Statistical Characteristics of Cloud over Beijing, China Obtained From Ka-band Doppler Radar Observation

    NASA Astrophysics Data System (ADS)

    LIU, J.; Bi, Y.; Duan, S.; Lu, D.

    2017-12-01

    It is well known that cloud characteristics, such as top and base heights, the layering structure of microphysical parameters, spatial coverage, and temporal duration, are important factors influencing both the radiation budget and its vertical partitioning, as well as the hydrological cycle through precipitation. Cloud structure, its statistical distribution, and its typical values also show characteristic geographical and seasonal variations. Ka-band radar is a powerful tool for obtaining the above parameters around the world; one example is the ARM cloud radar in Oklahoma, US. Since 2006, CloudSat, part of NASA's A-Train satellite constellation, has continuously observed cloud structure with global coverage, but it monitors clouds over a given local site only twice a day, at the same local times. Using the IAP Ka-band Doppler radar, which has been operating continuously since early 2013 on the roof of the IAP building in Beijing, we obtained the statistical characteristics of clouds, including cloud layering, cloud top and base heights, and the thickness of each cloud layer and its distribution; monthly, seasonal, and diurnal variations were analyzed, and a statistical analysis of cloud reflectivity profiles was also made. The analysis covers both non-precipitating and precipitating clouds. Some preliminary comparisons of the results with CloudSat/CALIPSO products for the same period and area are also made.

  17. [Value influence of different compatibilities of main active parts in yangyintongnao granule on pharmacokinetics parameters in rats with cerebral ischemia reperfusion injury by total amount statistic moment method].

    PubMed

    Guo, Ying; Yang, Jiehong; Znang, Hengyi; Fu, Xuchun; Zhnag, Yuyan; Wan, Haitong

    2010-02-01

    To study the influence of different combinations of the main active parts in Yangyintongnao granule on the pharmacokinetic parameters of two active components, ligustrazine and puerarin, using the total amount statistic moment method for pharmacokinetics. Combinations were formed according to the dosages of the four active parts (alkaloid, flavone, saponin, naphtha) in an orthogonal experiment L9(3^4). Blood concentrations of ligustrazine and puerarin were determined by HPLC at different times. The zero-rank moment (AUC) and first-rank moment (MRT, mean residence time) of ligustrazine and puerarin were calculated, and from them the total amount statistic moment parameters of Yangyintongnao granule were obtained. The influence of the different combinations on the pharmacokinetic parameters was analyzed by orthogonal test. Flavone had a stronger effect than saponin on the total AUC, and ligustrazine had the strongest effect on the total MRT. Saponin had little effect on the two parameters, whereas naphtha had a larger effect on both of them, indicating that naphtha may promote the metabolism of ligustrazine and puerarin in rats. Total amount statistic moment parameters can be used to guide the compatibility design of TCM.
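
    The zero- and first-rank moments named above are standard non-compartmental quantities; a minimal sketch (trapezoidal rule, no tail extrapolation, illustrative numbers) follows. The "total amount" combination across components is specific to the paper and is not reproduced here.

```python
import numpy as np

def auc_mrt(t, c):
    """Zero-rank moment (AUC) and mean residence time (MRT = AUMC/AUC)
    of a concentration-time profile, by the trapezoidal rule with no
    extrapolation to infinity."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    dt = np.diff(t)
    auc = np.sum(0.5 * (c[1:] + c[:-1]) * dt)                     # AUC
    aumc = np.sum(0.5 * (t[1:] * c[1:] + t[:-1] * c[:-1]) * dt)   # AUMC
    return auc, aumc / auc

t = [0.25, 0.5, 1, 2, 4, 8]          # sampling times, h (illustrative)
c = [1.8, 2.9, 2.2, 1.1, 0.4, 0.1]   # plasma concentration, mg/L
print(auc_mrt(t, c))
```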

  18. Phenomenon of statistical instability of the third type systems—complexity

    NASA Astrophysics Data System (ADS)

    Eskov, V. V.; Gavrilenko, T. V.; Eskov, V. M.; Vokhmina, Yu. V.

    2017-11-01

    The problem of the existence and special properties of third type systems has been formulated within the new chaos-self-organization theory. In fact, a global problem of the possibility of the existence of steady-state regimes for homeostatic systems has been considered. These systems include not only medical and biological systems, but also the dynamics of meteorological parameters, as well as the ambient parameters of the environment in which humans are located. The new approach has been used to give a new definition for homeostatic systems (complexity).

  19. Using sensitivity analysis in model calibration efforts

    USGS Publications Warehouse

    Tiedeman, Claire; Hill, Mary C.

    2003-01-01

    In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
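
    Once the Jacobian of simulated equivalents with respect to parameters is available, CSS and parameter correlation coefficients are cheap to form. The sketch below follows the usual definitions (dimensionless scaled sensitivities averaged over observations for CSS; correlations from the weighted least-squares parameter covariance for PCC); variable names are illustrative.

```python
import numpy as np

def composite_scaled_sensitivities(jac, params, weights):
    """Composite scaled sensitivities (CSS) from a model Jacobian.

    jac:     (ND, NP) derivatives of simulated observation equivalents
             with respect to the NP parameters
    params:  (NP,) parameter values used for scaling
    weights: (ND,) observation weights (e.g., 1 / variance)
    """
    dss = jac * params[None, :] * np.sqrt(weights)[:, None]  # scaled sens.
    return np.sqrt((dss**2).mean(axis=0))                    # CSS per param

def parameter_correlations(jac, weights):
    """Parameter correlation coefficients (PCC) from the weighted
    least-squares parameter covariance (J' W J)^-1; assumes the
    regression problem is well posed."""
    xtwx = jac.T @ (jac * weights[:, None])
    cov = np.linalg.inv(xtwx)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)
```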

  20. GENASIS Basics: Object-oriented utilitarian functionality for large-scale physics simulations (Version 2)

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2017-05-01

    GenASiS Basics provides Fortran 2003 classes furnishing extensible object-oriented utilitarian functionality for large-scale physics simulations on distributed memory supercomputers. This functionality includes physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. This revision, Version 2 of Basics, makes mostly minor additions to functionality and includes some simplifying name changes.

  1. Estimating urban ground-level PM10 using MODIS 3km AOD product and meteorological parameters from WRF model

    NASA Astrophysics Data System (ADS)

    Ghotbi, Saba; Sotoudeheian, Saeed; Arhami, Mohammad

    2016-09-01

    Satellite remote sensing products of AOD from MODIS, along with appropriate meteorological parameters, were used to develop statistical models and estimate ground-level PM10. Most previous studies obtained meteorological data from synoptic weather stations, with rather sparse spatial distribution, and used them along with the 10 km AOD product to develop statistical models applicable to PM variations at regional scale (resolution of ≥10 km). In the current study, meteorological parameters were simulated at 3 km resolution using the WRF model and used along with the rather new 3 km AOD product (launched in 2014). The resulting PM statistical models were assessed for a polluted and highly variable urban area, Tehran, Iran. Despite the critical particulate pollution problem, very few PM studies have been conducted in this area. Direct PM-AOD associations were rather poor, owing to factors such as variations in particle optical properties, in addition to the bright-background problem for satellite retrievals, as the study area lies in the semi-arid Middle East. The statistical approach of linear mixed effects (LME) was used, and three types of statistical models were examined: a single-variable LME model (using AOD as the independent variable) and multivariable LME models using meteorological data from two sources, the WRF model and synoptic stations. Meteorological simulations were performed using a multiscale approach and a physics configuration appropriate for the studied region, and the results showed rather good agreement with recordings of the synoptic stations. The single-variable LME model was able to explain about 61%-73% of daily PM10 variations, reflecting a rather acceptable performance. Statistical model performance improved with the multivariable LME and the incorporation of meteorological data as auxiliary variables, particularly when using the fine-resolution outputs from WRF (R2 = 0.73-0.81). In addition, PM estimates at rather fine resolution were mapped for the studied city, and the resulting concentration maps were consistent with PM recordings at the existing stations.
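
    A minimal sketch of the day-grouped linear mixed effects structure commonly used in PM-AOD calibration follows, using statsmodels; the synthetic data frame and covariate names (blh, rh) are stand-ins, not the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: in the study these would be station PM10,
# MODIS 3 km AOD, and WRF meteorology matched by day.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "date": rng.integers(0, 60, n),        # day index (grouping factor)
    "aod": rng.uniform(0.05, 1.2, n),
    "blh": rng.uniform(200, 2500, n),      # boundary-layer height, m
    "rh": rng.uniform(10, 90, n),          # relative humidity, %
})
df["pm10"] = (40 + 80 * df["aod"] - 0.01 * df["blh"]
              + 0.2 * df["rh"] + rng.normal(0, 10, n))

# Day-specific random intercept and AOD slope, the usual structure in
# PM-AOD linear mixed effects (LME) calibrations.
model = smf.mixedlm("pm10 ~ aod + blh + rh", df,
                    groups=df["date"], re_formula="~aod")
print(model.fit().summary())
```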

  2. A statistical approach to quasi-extinction forecasting.

    PubMed

    Holmes, Elizabeth Eli; Sabo, John L; Viscido, Steven Vincent; Fagan, William Fredric

    2007-12-01

    Forecasting population decline to a certain critical threshold (the quasi-extinction risk) is one of the central objectives of population viability analysis (PVA), and such predictions figure prominently in the decisions of major conservation organizations. In this paper, we argue that accurate forecasting of a population's quasi-extinction risk does not necessarily require knowledge of the underlying biological mechanisms. Because of the stochastic and multiplicative nature of population growth, the ensemble behaviour of population trajectories converges to common statistical forms across a wide variety of stochastic population processes. This paper provides a theoretical basis for this argument. We show that the quasi-extinction surfaces of a variety of complex stochastic population processes (including age-structured, density-dependent and spatially structured populations) can be modelled by a simple stochastic approximation: the stochastic exponential growth process overlaid with Gaussian errors. Using simulated and real data, we show that this model can be estimated with 20-30 years of data and can provide relatively unbiased quasi-extinction risk with confidence intervals considerably smaller than (0,1). This was found to be true even for simulated data derived from some of the noisiest population processes (density-dependent feedback, species interactions and strong age-structure cycling). A key advantage of statistical models is that their parameters and the uncertainty of those parameters can be estimated from time series data using standard statistical methods. In contrast for most species of conservation concern, biologically realistic models must often be specified rather than estimated because of the limited data available for all the various parameters. Biologically realistic models will always have a prominent place in PVA for evaluating specific management options which affect a single segment of a population, a single demographic rate, or different geographic areas. However, for forecasting quasi-extinction risk, statistical models that are based on the convergent statistical properties of population processes offer many advantages over biologically realistic models.
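
    The stochastic exponential growth approximation described above leads to a simple quasi-extinction calculation. The sketch below estimates the drift and variance of log abundance from census increments and applies the standard first-passage result for Brownian motion with drift (probability exp(-2*mu*d/sigma^2) of ever reaching the threshold when mu > 0); the census numbers are illustrative.

```python
import numpy as np

def quasi_extinction_risk(counts, dt, threshold):
    """Diffusion-approximation sketch of quasi-extinction risk.

    counts: population time series (regular census interval dt)
    threshold: quasi-extinction population size
    Returns (mu_hat, sigma2_hat, P[ever falling to the threshold]).
    """
    x = np.log(np.asarray(counts, float))
    inc = np.diff(x)
    mu = inc.mean() / dt                  # drift of log abundance
    sigma2 = inc.var(ddof=1) / dt         # diffusion (process variance)
    d = x[-1] - np.log(threshold)         # current log distance to threshold
    p_hit = 1.0 if mu <= 0 else np.exp(-2.0 * mu * d / sigma2)
    return mu, sigma2, min(1.0, p_hit)

counts = [742, 801, 690, 745, 810, 930, 890, 970]   # illustrative censuses
print(quasi_extinction_risk(counts, dt=1.0, threshold=100))
```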

  3. Measuring the statistical validity of summary meta-analysis and meta-regression results for use in clinical practice.

    PubMed

    Willis, Brian H; Riley, Richard D

    2017-09-20

    An important question for clinicians appraising a meta-analysis is: are the findings likely to be valid in their own practice? Does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity, where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple ('leave-one-out') cross-validation technique, we demonstrate how we may test meta-analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta-analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta-analysis and a tailored meta-regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within-study variance, between-study variance, study sample size, and the number of studies in the meta-analysis. Finally, we apply Vn to two published meta-analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta-analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
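
    The paper's Vn statistic has its own derivation; the sketch below only illustrates the general leave-one-out idea on which such validation rests, using a DerSimonian-Laird random-effects fit and predictive z-scores. It should not be read as the authors' exact statistic.

```python
import numpy as np

def dl_pool(y, v):
    """DerSimonian-Laird random-effects pooled estimate and tau^2.
    y: study effect estimates; v: their within-study variances."""
    w = 1.0 / v
    yw = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - yw) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)
    theta = np.sum(w_re * y) / np.sum(w_re)
    return theta, tau2, 1.0 / np.sum(w_re)   # estimate, tau^2, var(estimate)

def loo_validation_z(y, v):
    """Leave-one-out predictive z-scores: how surprising is each study
    relative to a meta-analysis of the remaining studies?"""
    y, v = np.asarray(y, float), np.asarray(v, float)
    z = np.empty(len(y))
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        theta, tau2, var_theta = dl_pool(y[keep], v[keep])
        z[i] = (y[i] - theta) / np.sqrt(v[i] + tau2 + var_theta)
    return z
```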

  4. Orthotopic bladder substitution in men revisited: identification of continence predictors.

    PubMed

    Koraitim, M M; Atta, M A; Foda, M K

    2006-11-01

    We determined the impact of the functional characteristics of the neobladder and urethral sphincter on continence results, and determined the most significant predictors of continence. A total of 88 male patients 29 to 70 years old underwent orthotopic bladder substitution with tubularized ileocecal segment (40) and detubularized sigmoid (25) or ileum (23). Uroflowmetry, cystometry and urethral pressure profilometry were performed at 13 to 36 months (mean 19) postoperatively. The correlation between urinary continence and 28 urodynamic variables was assessed. Parameters that correlated significantly with continence were entered into a multivariate analysis using a logistic regression model to determine the most significant predictors of continence. Maximum urethral closure pressure was the only parameter that showed a statistically significant correlation with diurnal continence. Nocturnal continence had not only a statistically significant positive correlation with maximum urethral closure pressure, but also statistically significant negative correlations with maximum contraction amplitude, and baseline pressure at mid and maximum capacity. Three of these 4 parameters, including maximum urethral closure pressure, maximum contraction amplitude and baseline pressure at mid capacity, proved to be significant predictors of continence on multivariate analysis. While daytime continence is determined by maximum urethral closure pressure, during the night it is the net result of 2 forces that have about equal influence but in opposite directions, that is maximum urethral closure pressure vs maximum contraction amplitude plus baseline pressure at mid capacity. Two equations were derived from the logistic regression model to predict the probability of continence after orthotopic bladder substitution, including Z1 (diurnal) = 0.605 + 0.0085 maximum urethral closure pressure and Z2 (nocturnal) = 0.841 + 0.01 [maximum urethral closure pressure - (maximum contraction amplitude + baseline pressure at mid capacity)].
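
    Assuming the reported Z1/Z2 are linear predictors from a standard logistic regression, so that p = 1/(1 + exp(-Z)), the probabilities can be evaluated as below; the link function and the pressure units (presumably cm H2O) are assumptions of this sketch.

```python
import math

def continence_probability(mucp, mca=None, bp_mid=None, nocturnal=False):
    """Probability of continence from the published linear predictors.

    mucp:   maximum urethral closure pressure
    mca:    maximum contraction amplitude (nocturnal model only)
    bp_mid: baseline pressure at mid capacity (nocturnal model only)
    Assumes the standard logistic link p = 1 / (1 + exp(-Z)).
    """
    if nocturnal:
        z = 0.841 + 0.01 * (mucp - (mca + bp_mid))
    else:
        z = 0.605 + 0.0085 * mucp
    return 1.0 / (1.0 + math.exp(-z))

print(continence_probability(mucp=60))                         # diurnal
print(continence_probability(60, mca=30, bp_mid=15, nocturnal=True))
```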

  5. An initial-abstraction, constant-loss model for unit hydrograph modeling for applicable watersheds in Texas

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.

    2007-01-01

    Estimation of representative hydrographs from design storms, which are known as design hydrographs, provides for cost-effective, risk-mitigated design of drainage structures such as bridges, culverts, roadways, and other infrastructure. During 2001-07, the U.S. Geological Survey (USGS), in cooperation with the Texas Department of Transportation, investigated runoff hydrographs, design storms, unit hydrographs, and watershed-loss models to enhance design hydrograph estimation in Texas. Design hydrographs ideally should mimic the general volume, peak, and shape of observed runoff hydrographs. Design hydrographs commonly are estimated in part by unit hydrographs. A unit hydrograph is defined as the runoff hydrograph that results from a unit pulse of excess rainfall uniformly distributed over the watershed at a constant rate for a specific duration. A time-distributed, watershed-loss model is required for modeling by unit hydrographs. This report develops a specific time-distributed, watershed-loss model known as an initial-abstraction, constant-loss model. For this watershed-loss model, a watershed is conceptualized to have the capacity to store or abstract an absolute depth of rainfall at and near the beginning of a storm. Depths of total rainfall less than this initial abstraction do not produce runoff. The watershed also is conceptualized to have the capacity to remove rainfall at a constant rate (loss) after the initial abstraction is satisfied. Additional rainfall inputs after the initial abstraction is satisfied contribute to runoff if the rainfall rate (intensity) is larger than the constant loss. The initial-abstraction, constant-loss model thus is a two-parameter model. The initial-abstraction, constant-loss model is investigated through detailed computational and statistical analysis of observed rainfall and runoff data for 92 USGS streamflow-gaging stations (watersheds) in Texas with contributing drainage areas from 0.26 to 166 square miles. The analysis is limited to a previously described, watershed-specific, gamma distribution model of the unit hydrograph. In particular, the initial-abstraction, constant-loss model is tuned to the gamma distribution model of the unit hydrograph. A complex computational analysis of observed rainfall and runoff for the 92 watersheds was done to determine, by storm, optimal values of initial abstraction and constant loss. Optimal parameter values for a given storm were defined as those values that produced a modeled runoff hydrograph with volume equal to the observed runoff hydrograph and also minimized the residual sum of squares of the two hydrographs. Subsequently, the means of the optimal parameters were computed on a watershed-specific basis. These means for each watershed are considered the most representative, are tabulated, and are used in further statistical analyses. Statistical analyses of watershed-specific initial abstraction and constant loss include documentation of the distribution of each parameter using the generalized lambda distribution. The analyses show that watershed development has substantial influence on initial abstraction and limited influence on constant loss. The means and medians of the 92 watershed-specific parameters are tabulated with respect to watershed development; although they have considerable uncertainty, these parameters can be used for parameter prediction for ungaged watersheds.
The statistical analyses of watershed-specific initial abstraction and constant loss also include development of predictive procedures for estimation of each parameter for ungaged watersheds. Both regression equations and regression trees for estimation of initial abstraction and constant loss are provided. The watershed characteristics included in the regression analyses are (1) main-channel length, (2) a binary factor representing watershed development, (3) a binary factor representing watersheds with an abundance of rocky and thin-soiled terrain, and (4) curve number.
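
    The conceptual model described above is straightforward to compute. The sketch below turns an incremental rainfall hyetograph into excess rainfall using the two parameters (an initial abstraction depth, then a constant loss rate); the storm values are illustrative.

```python
import numpy as np

def excess_rainfall(rain, dt, ia, loss_rate):
    """Excess rainfall under an initial-abstraction, constant-loss model.

    rain:      incremental rainfall depths per interval (e.g., inches)
    dt:        interval length (hours)
    ia:        initial abstraction depth satisfied before any runoff
    loss_rate: constant loss rate (depth per hour) after ia is satisfied
    """
    excess = np.zeros(len(rain))
    remaining_ia = ia
    for i, p in enumerate(rain):
        absorbed = min(p, remaining_ia)      # fill initial abstraction first
        remaining_ia -= absorbed
        p -= absorbed
        excess[i] = max(0.0, p - loss_rate * dt)  # then constant loss
    return excess

storm = [0.05, 0.30, 0.65, 0.40, 0.10]   # illustrative hyetograph, inches
print(excess_rainfall(storm, dt=0.25, ia=0.4, loss_rate=0.2))
```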

  6. Quons, an interpolation between Bose and Fermi oscillators

    NASA Technical Reports Server (NTRS)

    Greenberg, O. W.

    1993-01-01

    After a brief mention of Bose and Fermi oscillators and of particles which obey other types of statistics, including intermediate statistics, parastatistics, paronic statistics, anyon statistics, and infinite statistics, I discuss the statistics of 'quons' (pronounced to rhyme with muons), particles whose annihilation and creation operators obey the q-deformed commutation relation (the quon algebra or q-mutator) which interpolates between fermions and bosons. I emphasize that the operator for interaction with an external source must be an effective Bose operator in all cases. To accomplish this for parabose, parafermi and quon operators, I introduce parabose, parafermi, and quon Grassmann numbers, respectively. I also discuss interactions of non-relativistic quons, quantization of quon fields with antiparticles, calculation of vacuum matrix elements of relativistic quon fields, demonstration of the TCP theorem, cluster decomposition, and Wick's theorem for relativistic quon fields, and the failure of local commutativity of observables for relativistic quon fields. I conclude with the bound on the parameter q for electrons due to the Ramberg-Snow experiment.
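
    For reference, the q-mutator relation described above is usually written as

    \[ a_k\,a_l^{\dagger} - q\,a_l^{\dagger} a_k = \delta_{kl}, \qquad -1 \le q \le 1, \]

    with q = -1 recovering the fermion anticommutator and q = +1 the boson commutator.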

  7. A Parameter Subset Selection Algorithm for Mixed-Effects Models

    DOE PAGES

    Schmidt, Kathleen L.; Smith, Ralph C.

    2016-01-01

    Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable in the sense that parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. In conclusion, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.

  8. Temperature and Voltage Offsets in High-ZT Thermoelectrics

    NASA Astrophysics Data System (ADS)

    Levy, George S.

    2018-06-01

    Thermodynamic temperature can take on different meanings. Kinetic temperature is an expectation value and a function of the kinetic energy distribution. Statistical temperature is a parameter of the distribution. Kinetic temperature and statistical temperature, identical in Maxwell-Boltzmann statistics, can differ in other statistics such as those of Fermi-Dirac or Bose-Einstein when a field is present. Thermal equilibrium corresponds to zero statistical temperature gradient, not zero kinetic temperature gradient. Since heat carriers in thermoelectrics are fermions, the difference between these two temperatures may explain voltage and temperature offsets observed during meticulous Seebeck measurements in which the temperature-voltage curve does not go through the origin. In conventional semiconductors, temperature offsets produced by fermionic electrical carriers are not observable because they are shorted by heat phonons in the lattice. In high-ZT materials, however, these offsets have been detected but attributed to faulty laboratory procedures. Additional supporting evidence for spontaneous voltages and temperature gradients includes data collected in epistatic experiments and in the plasma Q-machine. Device fabrication guidelines for testing the hypothesis are suggested including using unipolar junctions stacked in a superlattice, alternating n/n+ and p/p+ junctions, selecting appropriate dimensions, doping, and loading.

  9. Temperature and Voltage Offsets in High-ZT Thermoelectrics

    NASA Astrophysics Data System (ADS)

    Levy, George S.

    2017-10-01

    Thermodynamic temperature can take on different meanings. Kinetic temperature is an expectation value and a function of the kinetic energy distribution. Statistical temperature is a parameter of the distribution. Kinetic temperature and statistical temperature, identical in Maxwell-Boltzmann statistics, can differ in other statistics such as those of Fermi-Dirac or Bose-Einstein when a field is present. Thermal equilibrium corresponds to zero statistical temperature gradient, not zero kinetic temperature gradient. Since heat carriers in thermoelectrics are fermions, the difference between these two temperatures may explain voltage and temperature offsets observed during meticulous Seebeck measurements in which the temperature-voltage curve does not go through the origin. In conventional semiconductors, temperature offsets produced by fermionic electrical carriers are not observable because they are shorted by heat phonons in the lattice. In high-ZT materials, however, these offsets have been detected but attributed to faulty laboratory procedures. Additional supporting evidence for spontaneous voltages and temperature gradients includes data collected in epistatic experiments and in the plasma Q-machine. Device fabrication guidelines for testing the hypothesis are suggested including using unipolar junctions stacked in a superlattice, alternating n/n+ and p/p+ junctions, selecting appropriate dimensions, doping, and loading.

  10. Statistical characteristics of trajectories of diamagnetic unicellular organisms in a magnetic field.

    PubMed

    Gorobets, Yu I; Gorobets, O Yu

    2015-01-01

    This paper proposes a statistical model for describing the orientation of trajectories of unicellular diamagnetic organisms in a magnetic field. On the basis of this model, a statistical parameter, the effective energy, is calculated. The resulting effective energy is a statistical characteristic of the trajectories of diamagnetic microorganisms in a magnetic field, connected with their metabolism. The statistical model applies when the energy of the thermal motion of the bacteria is negligible in comparison with their energy in the magnetic field and the bacteria exhibit significant "active random movement", i.e. randomizing motion of a non-thermal nature, for example self-propulsion by means of flagella. The energy of this randomizing active self-motion of bacteria is characterized by a new statistical parameter for biological objects, which replaces the energy of randomizing thermal motion in the calculation of the statistical distribution. Copyright © 2014 Elsevier Ltd. All rights reserved.
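
    A heavily hedged sketch of how such an effective-energy parameter might be fitted is given below: it assumes a Boltzmann-like orientation law with an assumed cos^2 orientation energy, with E_eff in place of kT. Both the energy form and the fitting setup are illustrative assumptions, not the paper's expressions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def fit_effective_energy(angles, u0=1.0):
    """Maximum-likelihood fit of E_eff in an assumed orientation law
    p(theta) ~ exp(u0 * cos(theta)**2 / E_eff), theta being the angle
    between a trajectory segment and the field (illustrative only)."""
    angles = np.asarray(angles, float)

    def neg_loglik(e_eff):
        z, _ = quad(lambda t: np.exp(u0 * np.cos(t)**2 / e_eff), 0, np.pi)
        return -(np.sum(u0 * np.cos(angles)**2 / e_eff)
                 - len(angles) * np.log(z))

    res = minimize_scalar(neg_loglik, bounds=(1e-3, 1e3), method="bounded")
    return res.x

rng = np.random.default_rng(6)
angles = rng.normal(0.0, 0.5, 300) % np.pi   # stand-in trajectory angles
print(fit_effective_energy(angles))
```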

  11. Dissolution comparisons using a Multivariate Statistical Distance (MSD) test and a comparison of various approaches for calculating the measurements of dissolution profile comparison.

    PubMed

    Cardot, J-M; Roudier, B; Schütz, H

    2017-07-01

    The f2 test is generally used for comparing dissolution profiles. In cases of high variability, the f2 test is not applicable, and the Multivariate Statistical Distance (MSD) test is frequently proposed as an alternative by the FDA and EMA. The guidelines provide only general recommendations. MSD tests can be performed either on raw data, with or without time as a variable, or on parameters of models. In addition, data can be limited, as in the case of the f2 test, to dissolutions of up to 85%, or all available data can be used. In the context of the present paper, the recommended calculation includes all raw dissolution data up to the first point greater than 85% as a variable, without the various times as parameters. The proposed MSD overcomes several drawbacks found in other methods.
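
    The f2 similarity factor has a standard closed form, and the Mahalanobis distance is the usual ingredient of MSD comparisons; both are sketched below. The confidence-region step of the formal MSD test is omitted, so this is an illustration rather than the full procedure.

```python
import numpy as np

def f2(ref, test):
    """Similarity factor f2 for mean dissolution profiles (percent
    dissolved at common time points); f2 > 50 indicates similarity."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

def mahalanobis_distance(ref_units, test_units):
    """Multivariate distance between two sets of unit-level dissolution
    profiles (rows = units, columns = time points), using the pooled
    covariance."""
    xr = np.asarray(ref_units, float)
    xt = np.asarray(test_units, float)
    d = xr.mean(axis=0) - xt.mean(axis=0)
    s = ((len(xr) - 1) * np.cov(xr, rowvar=False) +
         (len(xt) - 1) * np.cov(xt, rowvar=False)) / (len(xr) + len(xt) - 2)
    return float(np.sqrt(d @ np.linalg.solve(s, d)))

print(f2([18, 39, 57, 72, 84, 92], [15, 34, 52, 69, 83, 91]))
```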

  12. A Backscatter-Lidar Forward-Operator

    NASA Astrophysics Data System (ADS)

    Geisinger, Armin; Behrendt, Andreas; Wulfmeyer, Volker; Vogel, Bernhard; Mattis, Ina; Flentje, Harald; Förstner, Jochen; Potthast, Roland

    2015-04-01

    We have developed a forward operator which is capable of calculating virtual lidar profiles from atmospheric state simulations. The operator allows us to compare lidar measurements and model simulations in terms of the same measured quantity: the lidar backscatter profile. This simplifies qualitative comparisons and also makes quantitative comparisons possible, including statistical error quantification. Implemented in an aerosol-capable model system, the operator will act as a component for assimilating backscatter-lidar measurements. As many weather services already maintain networks of backscatter lidars, such data are already acquired operationally. To estimate and quantify errors due to missing or uncertain aerosol information, we started sensitivity studies of several scattering parameters such as the aerosol size and both the real and imaginary parts of the complex index of refraction. Furthermore, quantitative and statistical comparisons between measurements and virtual measurements are shown in this study, i.e. applying the backscatter-lidar forward operator to model output.
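
    The core of such a forward operator is the single-scattering lidar equation: attenuated backscatter is the model backscatter profile damped by the two-way transmission. A minimal sketch (ground-based, vertically pointing geometry; no multiple scattering or instrument function) follows.

```python
import numpy as np

def attenuated_backscatter(z, beta, alpha):
    """Single-scattering lidar forward model.

    z:     range gates [m]; beta: backscatter coefficient [m^-1 sr^-1];
    alpha: extinction coefficient [m^-1] from the simulated atmosphere.
    Returns beta * exp(-2 * integral of alpha), i.e. the attenuated
    backscatter a lidar would see.
    """
    z, beta, alpha = (np.asarray(a, float) for a in (z, beta, alpha))
    tau = np.concatenate(([0.0], np.cumsum(
        0.5 * (alpha[1:] + alpha[:-1]) * np.diff(z))))   # optical depth
    return beta * np.exp(-2.0 * tau)

z = np.arange(0.0, 5000.0, 30.0)          # gates every 30 m
beta = 1e-6 * np.exp(-z / 2000.0)         # illustrative aerosol profile
alpha = 50.0 * beta                       # assumed lidar ratio of 50 sr
print(attenuated_backscatter(z, beta, alpha)[:3])
```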

  13. Quantum-statistical theory of microwave detection using superconducting tunnel junctions

    NASA Astrophysics Data System (ADS)

    Deviatov, I. A.; Kuzmin, L. S.; Likharev, K. K.; Migulin, V. V.; Zorin, A. B.

    1986-09-01

    A quantum-statistical theory of microwave and millimeter-wave detection using superconducting tunnel junctions is developed, with a rigorous account of quantum, thermal, and shot noise arising from fluctuation sources associated with the junctions, signal source, and matching circuits. The problem of the noise characterization in the quantum sensitivity range is considered and a general noise parameter Theta(N) is introduced. This parameter is shown to be an adequate figure of merit for most receivers of interest while some devices can require a more complex characterization. Analytical expressions and/or numerically calculated plots for Theta(N) are presented for the most promising detection modes including the parametric amplification, heterodyne mixing, and quadratic videodetection, using both the quasiparticle-current and the Cooper-pair-current nonlinearities. Ultimate minimum values of Theta(N) for each detection mode are compared and found to be in agreement with limitations imposed by the quantum-mechanical uncertainty principle.

  14. Remote sensing-aided systems for snow qualification, evapotranspiration estimation, and their application in hydrologic models

    NASA Technical Reports Server (NTRS)

    Korram, S.

    1977-01-01

    The design of general remote sensing-aided methodologies was studied to provide estimates of several important inputs to water yield forecast models. These input parameters are snow area extent, snow water content, and evapotranspiration. The study area is the Feather River Watershed (780,000 hectares), Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All the relevant and available information types needed in the estimation process were defined. These include Landsat, meteorological satellite, and aircraft imagery; topographic and geologic data; ground truth data; and climatic data from ground stations. A cost-effective multistage sampling approach was employed in quantification of all the required parameters. Physical and statistical models for both snow quantification and evapotranspiration estimation were developed. These models use the information obtained from aerial and ground data through an appropriate statistical sampling design.

  15. A new item response theory model to adjust data allowing examinee choice

    PubMed Central

    Costa, Marcelo Azevedo; Braga Oliveira, Rivert Paulo

    2018-01-01

    In a typical questionnaire testing situation, examinees are not allowed to choose which items they answer because of a technical issue in obtaining satisfactory statistical estimates of examinee ability and item difficulty. This paper introduces a new item response theory (IRT) model that incorporates information from a novel representation of questionnaire data using network analysis. Three scenarios in which examinees select a subset of items were simulated. In the first scenario, the assumptions required to apply the standard Rasch model are met, thus establishing a reference for parameter accuracy. The second and third scenarios include five increasing levels of violating those assumptions. The results show substantial improvements over the standard model in item parameter recovery. Furthermore, the accuracy was closer to the reference in almost every evaluated scenario. To the best of our knowledge, this is the first proposal to obtain satisfactory IRT statistical estimates in the last two scenarios. PMID:29389996
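
    For context, the standard Rasch model used as the reference in the first scenario is sketched below, with missing responses (items an examinee did not choose) handled by NaN masking; this is the baseline model, not the authors' network-based extension.

```python
import numpy as np

def rasch_probability(theta, b):
    """Standard Rasch model: probability that an examinee of ability
    theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def rasch_log_likelihood(responses, theta, b):
    """Log-likelihood of a 0/1 response matrix (rows = examinees,
    columns = items); entries must be float, with NaN marking items an
    examinee did not answer."""
    p = rasch_probability(theta[:, None], b[None, :])
    ll = responses * np.log(p) + (1 - responses) * np.log(1 - p)
    return np.nansum(ll)
```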

  16. Anisotropic analysis of trabecular architecture in human femur bone radiographs using quaternion wavelet transforms.

    PubMed

    Sangeetha, S; Sujatha, C M; Manamalli, D

    2014-01-01

    In this work, the anisotropy of the compressive and tensile strength regions of femur trabecular bone is analysed using quaternion wavelet transforms. Normal and abnormal femur trabecular bone radiographic images are considered for this study. The sub-anatomic regions, which include compressive and tensile regions, are delineated using pre-processing procedures. These delineated regions are subjected to quaternion wavelet transforms, and statistical parameters are derived from the transformed images. These parameters are correlated with apparent porosity, which is derived from the strength regions. Further, anisotropy is also calculated from the transformed images and analysed. Results show that the anisotropy values derived from the second and third phase components of the quaternion wavelet transform are distinct for normal and abnormal samples, with high statistical significance for both compressive and tensile regions. These investigations demonstrate that architectural anisotropy derived from QWT analysis is able to differentiate normal and abnormal samples.

  17. Static shape control for flexible structures

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1986-01-01

    An integrated methodology is described for defining static shape control laws for large flexible structures. The techniques include modeling, identifying, and estimating the control laws of distributed systems characterized in terms of infinite dimensional state and parameter spaces. The models are expressed as interconnected elliptic partial differential equations governing a range of static loads, with the capability of analyzing electromagnetic fields around antenna systems. A second-order analysis is carried out for statistical errors, and model parameters are determined by maximizing an appropriately defined likelihood functional which adjusts the model to observational data. The parameter estimates are derived from the conditional mean of the observational data, resulting in a least squares superposition of shape functions obtained from the structural model.

  18. A new statistical method for design and analyses of component tolerance

    NASA Astrophysics Data System (ADS)

    Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam

    2017-03-01

    Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products. Engineers use handbooks to conduct tolerancing. While the use of statistical methods for tolerancing is not new, engineers often assume known distributions, including the normal distribution. Yet, if the statistical distribution of the given variable is unknown, a new statistical method must be employed to design tolerances. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerances. We use the percentile method (PM) to estimate the distribution parameters. The findings indicate that, when the distribution of the component data is unknown, the proposed method can be used to expedite the design of component tolerances. Moreover, in the case of assembled sets, a wider tolerance for each component with the same target performance can be utilized.
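
    A sketch of the generalized lambda distribution's quantile function (Ramberg-Schmeiser parameterization) and its use for tolerance limits follows; the lambda values are illustrative and would in practice come from the percentile-method fit described above.

```python
import numpy as np

def gld_quantile(u, lam1, lam2, lam3, lam4):
    """Quantile function of the generalized lambda distribution
    (Ramberg-Schmeiser parameterization):
    Q(u) = lam1 + (u**lam3 - (1 - u)**lam4) / lam2."""
    u = np.asarray(u, float)
    return lam1 + (u**lam3 - (1.0 - u)**lam4) / lam2

# Two-sided tolerance limits covering 99.73% of the characteristic,
# assuming the lambda parameters were already estimated by the
# percentile method (values below are illustrative).
lams = (10.0, 0.5, 0.15, 0.12)
lower, upper = gld_quantile([0.00135, 0.99865], *lams)
print(lower, upper)
```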

  19. THE MEASUREMENT OF BONE QUALITY USING GRAY LEVEL CO-OCCURRENCE MATRIX TEXTURAL FEATURES.

    PubMed

    Shirvaikar, Mukul; Huang, Ning; Dong, Xuanliang Neil

    2016-10-01

    In this paper, statistical methods for the estimation of bone quality to predict the risk of fracture are reported. Bone mineral density and bone architecture properties are the main contributors of bone quality. Dual-energy X-ray Absorptiometry (DXA) is the traditional clinical measurement technique for bone mineral density, but does not include architectural information to enhance the prediction of bone fragility. Other modalities are not practical due to cost and access considerations. This study investigates statistical parameters based on the Gray Level Co-occurrence Matrix (GLCM) extracted from two-dimensional projection images and explores links with architectural properties and bone mechanics. Data analysis was conducted on Micro-CT images of 13 trabecular bones (with an in-plane spatial resolution of about 50μm). Ground truth data for bone volume fraction (BV/TV), bone strength and modulus were available based on complex 3D analysis and mechanical tests. Correlation between the statistical parameters and biomechanical test results was studied using regression analysis. The results showed Cluster-Shade was strongly correlated with the microarchitecture of the trabecular bone and related to mechanical properties. Once the principal thesis of utilizing second-order statistics is established, it can be extended to other modalities, providing cost and convenience advantages for patients and doctors.
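
    Cluster-Shade is a third-order GLCM statistic; a dependency-free sketch of a normalized, symmetric GLCM for one pixel offset and the Cluster-Shade formula follows. Image quantization, offsets, and gray-level counts are choices the reader would tune.

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Symmetric, normalized gray level co-occurrence matrix for one
    pixel offset (dx, dy); image must already be quantized to `levels`
    integer gray levels."""
    g = np.zeros((levels, levels))
    rows, cols = image.shape
    for i in range(rows - dy):
        for j in range(cols - dx):
            g[image[i, j], image[i + dy, j + dx]] += 1
    g += g.T                       # count both directions (symmetric)
    return g / g.sum()

def cluster_shade(p):
    """Cluster shade of a normalized GLCM p: a skewness-like third-order
    statistic, sum_{i,j} ((i - mu_i) + (j - mu_j))**3 * p(i, j)."""
    i, j = np.indices(p.shape)
    mu_i = np.sum(i * p)
    mu_j = np.sum(j * p)
    return np.sum(((i - mu_i) + (j - mu_j)) ** 3 * p)

rng = np.random.default_rng(5)
img = rng.integers(0, 16, size=(64, 64))
print(cluster_shade(glcm(img, levels=16)))
```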

  20. THE MEASUREMENT OF BONE QUALITY USING GRAY LEVEL CO-OCCURRENCE MATRIX TEXTURAL FEATURES

    PubMed Central

    Shirvaikar, Mukul; Huang, Ning; Dong, Xuanliang Neil

    2016-01-01

    In this paper, statistical methods for the estimation of bone quality to predict the risk of fracture are reported. Bone mineral density and bone architecture properties are the main contributors of bone quality. Dual-energy X-ray Absorptiometry (DXA) is the traditional clinical measurement technique for bone mineral density, but does not include architectural information to enhance the prediction of bone fragility. Other modalities are not practical due to cost and access considerations. This study investigates statistical parameters based on the Gray Level Co-occurrence Matrix (GLCM) extracted from two-dimensional projection images and explores links with architectural properties and bone mechanics. Data analysis was conducted on Micro-CT images of 13 trabecular bones (with an in-plane spatial resolution of about 50μm). Ground truth data for bone volume fraction (BV/TV), bone strength and modulus were available based on complex 3D analysis and mechanical tests. Correlation between the statistical parameters and biomechanical test results was studied using regression analysis. The results showed Cluster-Shade was strongly correlated with the microarchitecture of the trabecular bone and related to mechanical properties. Once the principal thesis of utilizing second-order statistics is established, it can be extended to other modalities, providing cost and convenience advantages for patients and doctors. PMID:28042512

  1. The effect of the dynamic wet troposphere on radio interferometric measurements

    NASA Technical Reports Server (NTRS)

    Treuhaft, R. N.; Lanyi, G. E.

    1987-01-01

    A statistical model of water vapor fluctuations is used to describe the effect of the dynamic wet troposphere on radio interferometric measurements. It is assumed that the spatial structure of refractivity is approximated by Kolmogorov turbulence theory and that the temporal fluctuations are caused by spatial patterns moved over a site by the wind; these assumptions are examined for the VLBI delay and delay rate observables. The results suggest that the delay rate measurement error is usually dominated by water vapor fluctuations, and water-vapor-induced VLBI parameter errors and correlations are determined as a function of the delay observable errors. A method is proposed for including the water vapor fluctuations in the parameter estimation method to obtain improved parameter estimates and parameter covariances.

  2. Statistical inference involving binomial and negative binomial parameters.

    PubMed

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2009-05-01

    Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
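
    The paper derives its own tests; the sketch below only illustrates the sampling setup with a generic likelihood-ratio test of equal success probability, combining a geometric likelihood (trials to the first success) with a binomial likelihood (after the first success). The pooled MLE under the null works out to (1 + y) / (n + m).

```python
from scipy import stats

def lrt_equal_p(n_first, y, m):
    """Generic likelihood-ratio test that the success probability is the
    same before and after the first success.

    n_first: trial number on which the first success occurred
             (geometric / negative binomial sampling)
    y, m:    successes and trials observed after the first success
             (binomial sampling)
    Returns the LR statistic and its chi-square(1) p-value.
    """
    def loglik(p1, p2):
        return (stats.geom.logpmf(n_first, p1) +
                stats.binom.logpmf(y, m, p2))
    p1_hat, p2_hat = 1.0 / n_first, y / m          # unrestricted MLEs
    p0_hat = (1.0 + y) / (n_first + m)             # pooled MLE under H0
    lam = 2.0 * (loglik(p1_hat, p2_hat) - loglik(p0_hat, p0_hat))
    return lam, stats.chi2.sf(lam, df=1)

print(lrt_equal_p(n_first=12, y=9, m=40))
```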

  3. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    PubMed

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
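
    A minimal sketch of the underlying HMC machinery (leapfrog integration plus a Metropolis accept/reject on the Hamiltonian) is shown below for a generic log-posterior; the paper's polymer reinterpretation, multiple time-scale integration, and analytic treatment of harmonic modes are not reproduced here.

```python
import numpy as np

def hmc_step(q, log_post, grad, eps=0.1, n_leapfrog=20, rng=None):
    """One Hamiltonian Monte Carlo step for a generic log-posterior."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(q.shape)          # resample momenta
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad(q_new)          # half step in momentum
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new                  # full step in position
        p_new += eps * grad(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * grad(q_new)
    # Metropolis accept/reject on the total energy
    h_old = -log_post(q) + 0.5 * p @ p
    h_new = -log_post(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.uniform()) < h_old - h_new else q

# Example: sample a standard normal "posterior"
log_post = lambda q: -0.5 * q @ q
grad = lambda q: -q
q = np.zeros(2)
samples = []
for _ in range(1000):
    q = hmc_step(q, log_post, grad)
    samples.append(q)
print(np.mean(samples, axis=0), np.std(samples, axis=0))
```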

  4. EFFECTS OF AROMATHERAPY MASSAGE ON THE SLEEP QUALITY AND PHYSIOLOGICAL PARAMETERS OF PATIENTS IN A SURGICAL INTENSIVE CARE UNIT.

    PubMed

    Özlü, Zeynep Karaman; Bilican, Pınar

    2017-01-01

    Surgical pain in inpatients, together with clinical disease-related concerns, unfamiliar experiences after surgery, and postoperative restrictions on position, is known to seriously affect quality of sleep. The study was conducted to determine the effect of aromatherapy massage on quality of sleep and physiological parameters in surgical intensive care patients. This is an experimental study. The sample consisted of 60 patients who were divided into an experimental group and a control group of 30 patients each. The participants were postoperative patients, without complications, who were unconscious and extubated. A data collection form on personal characteristics of the patients, a registration form on their physical parameters, and the Richards-Campbell Sleep Scale (RCSQ) were used to collect the data of the study. The Richards-Campbell Sleep Scale indicated that the experimental group had a mean score of 53.80 ± 13.20 while the control group had a mean score of 29.08 ± 9.71, a statistically significant difference between the group means. In the comparison of physiological parameters, a statistically significant difference was detected only for diastolic blood pressure. Results of the study showed that aromatherapy massage enhanced the sleep quality of patients in a surgical intensive care unit and resulted in some positive changes in their physiological parameters.

  5. EFFECTS OF AROMATHERAPY MASSAGE ON THE SLEEP QUALITY AND PHYSIOLOGICAL PARAMETERS OF PATIENTS IN A SURGICAL INTENSIVE CARE UNIT

    PubMed Central

    Özlü, Zeynep Karaman; Bilican, Pınar

    2017-01-01

    Background: Surgical pain in inpatients, together with clinical disease-related concerns, unfamiliar experiences after surgery, and postoperative restrictions on position, is known to seriously affect quality of sleep. The study was conducted to determine the effect of aromatherapy massage on quality of sleep and physiological parameters in surgical intensive care patients. Materials and Methods: This is an experimental study. The sample consisted of 60 patients who were divided into an experimental group and a control group of 30 patients each. The participants were postoperative patients, without complications, who were unconscious and extubated. A data collection form on personal characteristics of the patients, a registration form on their physical parameters, and the Richards-Campbell Sleep Scale (RCSQ) were used to collect the data of the study. Results: The Richards-Campbell Sleep Scale indicated that the experimental group had a mean score of 53.80 ± 13.20 while the control group had a mean score of 29.08 ± 9.71, a statistically significant difference between the group means. In the comparison of physiological parameters, a statistically significant difference was detected only for diastolic blood pressure. Conclusions: Results of the study showed that aromatherapy massage enhanced the sleep quality of patients in a surgical intensive care unit and resulted in some positive changes in their physiological parameters. PMID:28480419

  6. Using Perturbed Physics Ensembles and Machine Learning to Select Parameters for Reducing Regional Biases in a Global Climate Model

    NASA Astrophysics Data System (ADS)

    Li, S.; Rupp, D. E.; Hawkins, L.; Mote, P.; McNeall, D. J.; Sarah, S.; Wallom, D.; Betts, R. A.

    2017-12-01

    This study investigates the potential to reduce known summer hot/dry biases over the Pacific Northwest in the UK Met Office's atmospheric model (HadAM3P) by simultaneously varying multiple model parameters. The bias-reduction process is done through a series of steps: 1) generation of a perturbed physics ensemble (PPE) through the volunteer computing network weather@home; 2) using machine learning to train "cheap" and fast statistical emulators of the climate model, to rule out regions of parameter space that lead to model variants that do not satisfy observational constraints, where the observational constraints (e.g., top-of-atmosphere energy flux, magnitude of the annual temperature cycle, summer/winter temperature and precipitation) are introduced sequentially; 3) designing a new PPE by "pre-filtering" using the emulator results. Steps 1) through 3) are repeated until results are considered satisfactory (3 times in our case). The process includes a sensitivity analysis to find dominant parameters for various model output metrics, which reduces the number of parameters to be perturbed with each new PPE. Relative to observational uncertainty, we achieve regional improvements without introducing large biases in other parts of the globe. Our results illustrate the potential of using machine learning to train cheap and fast statistical emulators of climate models, in combination with PPEs, for systematic model improvement.
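
    A toy version of the emulate-then-pre-filter step is sketched below with scikit-learn: a Gaussian process is trained on ensemble (parameters, metric) pairs and then used to discard candidate parameter settings that cannot plausibly satisfy an observational constraint. The design, metric, and constraint are synthetic stand-ins.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# X: PPE parameter settings (n_runs, n_params); y: one model output
# metric (e.g., a regional summer temperature bias) -- both synthetic.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 5))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

# Pre-filter a large candidate sample: keep settings whose emulated
# metric plausibly satisfies the constraint |bias| < 0.5, allowing a
# 2-standard-deviation emulator uncertainty margin.
candidates = rng.uniform(0, 1, size=(50_000, 5))
mean, sd = gp.predict(candidates, return_std=True)
ok = np.abs(mean) - 2 * sd < 0.5
print(candidates[ok].shape)
```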

  7. Inverse and forward modeling under uncertainty using MRE-based Bayesian approach

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Rubin, Y.

    2004-12-01

    A stochastic inverse approach for subsurface characterization is proposed and applied to the shallow vadose zone at a winery field site in northern California and to a gas reservoir at the Ormen Lange field site in the North Sea. The approach is formulated in a Bayesian-stochastic framework, whereby the unknown parameters are identified in terms of their statistical moments or their probabilities. Instead of the traditional single-valued estimation/prediction provided by deterministic methods, the approach gives a probability distribution for an unknown parameter. This allows calculating the mean, the mode, and the confidence interval, which is useful for a rational treatment of uncertainty and its consequences. The approach also allows incorporating data of various types and different error levels, including measurements of state variables as well as information such as bounds on or statistical moments of the unknown parameters, which may represent prior information. To obtain the minimally subjective prior probabilities required for the Bayesian approach, the principle of Minimum Relative Entropy (MRE) is employed. The approach is tested at field sites for flow parameter identification and soil moisture estimation in the vadose zone and for gas saturation estimation at great depth below the ocean floor. Results indicate the potential of coupling various types of field data within an MRE-based Bayesian formalism for improving the estimation of the parameters of interest.
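
    As an illustration of how an MRE/maximum-entropy prior is pinned down, the sketch below solves for the prior on a bounded interval with a prescribed mean relative to a uniform reference, which takes a truncated-exponential form; the bounds and mean are illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def mre_prior_multiplier(a, b, mean):
    """Lagrange multiplier lam for the minimum-relative-entropy prior on
    [a, b] with a prescribed mean (uniform reference measure), i.e.
    p(x) proportional to exp(-lam * x)."""
    def mean_of(lam):
        if abs(lam) < 1e-12:
            return 0.5 * (a + b)       # lam = 0 reduces to the uniform
        z, _ = quad(lambda x: np.exp(-lam * (x - a)), a, b)
        m, _ = quad(lambda x: x * np.exp(-lam * (x - a)), a, b)
        return m / z
    return brentq(lambda l: mean_of(l) - mean, -500.0, 500.0)

# Example: a parameter bounded in [0, 1] with prior expected value 0.3
print(mre_prior_multiplier(0.0, 1.0, 0.3))
```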

  8. Large ensemble modeling of last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.

    2015-11-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.
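
    A sketch of the simple score-weighted averaging referred to above: each run's contribution is weighted by a decreasing function of its aggregate misfit, giving a weighted mean and envelope. The weighting function and numbers are stand-ins, not the paper's scheme.

```python
import numpy as np

def weighted_stats(values, scores, alpha=1.0):
    """Score-weighted ensemble mean and 5-95% envelope: smaller
    model-data misfit scores receive larger weights."""
    values = np.asarray(values, float)
    w = np.exp(-alpha * np.asarray(scores, float))
    w /= w.sum()
    order = np.argsort(values)
    cdf = np.cumsum(w[order])
    lo, hi = np.interp([0.05, 0.95], cdf, values[order])
    return np.sum(w * values), (lo, hi)

# values: e.g., equivalent sea-level rise at +5000 yr from each of the
# 625 runs; scores: aggregate misfits (illustrative random stand-ins)
rng = np.random.default_rng(2)
esl = rng.normal(3.0, 1.0, 625)
misfit = rng.uniform(0, 5, 625)
print(weighted_stats(esl, misfit))
```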

  9. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: Dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, Sparse tensorization methods[2] utilizing node-nested hierarchies, Sampling methods[4] for high-dimensional random variable spaces.

  10. Statistical mechanics of two-dimensional shuffled foams: prediction of the correlation between geometry and topology.

    PubMed

    Durand, Marc; Käfer, Jos; Quilliet, Catherine; Cox, Simon; Talebi, Shirin Ataei; Graner, François

    2011-10-14

    We propose an analytical model for the statistical mechanics of shuffled two-dimensional foams with moderate bubble size polydispersity. It predicts without any adjustable parameters the correlations between the number of sides n of the bubbles (topology) and their areas A (geometry) observed in experiments and numerical simulations of shuffled foams. Detailed statistics show that in shuffled cellular patterns n correlates better with √A (as claimed by Desch and Feltham) than with A (as claimed by Lewis and widely assumed in the literature). At the level of the whole foam, standard deviations Δn and ΔA are in proportion. Possible applications include correlations of the detailed distributions of n and A, three-dimensional foams, and biological tissues.
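
    The Desch/Feltham-versus-Lewis question above reduces to comparing two correlations on (n, A) data from segmented foam images; a toy check on synthetic stand-in data is sketched below.

```python
import numpy as np

# Does the side number n correlate better with sqrt(A) or with A?
# n_sides and areas would come from segmented foam images; the values
# below are synthetic stand-ins built with an n ~ sqrt(A) relation.
rng = np.random.default_rng(3)
areas = rng.gamma(shape=9.0, scale=1.0, size=2000)
n_sides = np.round(2.0 * np.sqrt(areas) + rng.normal(0, 0.6, 2000))

r_sqrt = np.corrcoef(n_sides, np.sqrt(areas))[0, 1]
r_lin = np.corrcoef(n_sides, areas)[0, 1]
print(f"corr(n, sqrt(A)) = {r_sqrt:.3f}, corr(n, A) = {r_lin:.3f}")
```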

  11. Effects of the magnetic field direction on the Tsallis statistic

    NASA Astrophysics Data System (ADS)

    González-Casanova, Diego F.; Lazarian, A.; Cho, J.

    2018-04-01

    We extend the use of the Tsallis statistic to measure the differences in gas dynamics relative to the mean magnetic field arising from the natural eddy-type motions in magnetohydrodynamical (MHD) turbulence. The variation in gas dynamics was estimated using the Tsallis parameters of the incremental probability distribution function of the observables (intensity and velocity centroid) obtained from compressible MHD simulations. We find that the Tsallis statistic is sensitive to the anisotropy produced by the magnetic field; even when anisotropy is present, the Tsallis statistic can be used to determine MHD parameters such as the sonic Mach number. We quantify the goodness of the Tsallis parameters using the coefficient of determination to measure the differences in the gas dynamics. These parameters also determine the level of magnetization and compressibility of the medium. To further simulate realistic spectroscopic observational data, we introduced smoothing, noise, and cloud boundaries to the MHD simulations.
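
    In this literature the Tsallis statistic is typically obtained by fitting a q-Gaussian-like form to the PDF of increments at a given lag; a sketch with an assumed parameterization follows (the exact amplitude/width conventions vary between papers).

```python
import numpy as np
from scipy.optimize import curve_fit

def tsallis_pdf(df, a, q, w):
    """Tsallis (q-Gaussian-like) form commonly fit to increment PDFs:
    a * [1 + (q - 1) * (df / w)**2] ** (-1 / (q - 1))."""
    return a * (1.0 + (q - 1.0) * (df / w) ** 2) ** (-1.0 / (q - 1.0))

def fit_increments(field, lag):
    """Fit the Tsallis form to the increment histogram of a 2-D map
    (e.g., velocity centroids) at a given spatial lag."""
    inc = (np.roll(field, lag, axis=0) - field).ravel()
    inc = (inc - inc.mean()) / inc.std()
    hist, edges = np.histogram(inc, bins=60, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    popt, _ = curve_fit(tsallis_pdf, centers, hist, p0=(0.5, 1.5, 1.0),
                        bounds=([1e-3, 1.001, 0.1], [10.0, 3.0, 10.0]))
    return popt            # amplitude a, entropic index q, width w

rng = np.random.default_rng(4)
print(fit_increments(rng.standard_normal((256, 256)), lag=8))
```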

  12. Statistical Signal Process in R Language in the Pharmacovigilance Programme of India.

    PubMed

    Kumar, Aman; Ahuja, Jitin; Shrivastava, Tarani Prakash; Kumar, Vipin; Kalaiselvan, Vivekanandan

    2018-05-01

    The Ministry of Health & Family Welfare, Government of India, initiated the Pharmacovigilance Programme of India (PvPI) in July 2010. The purpose of the PvPI is to collect data on adverse reactions due to medications, analyze it, and use the findings to recommend informed regulatory intervention, besides communicating the risk to health care professionals and the public. The goal of the present study was to apply statistical tools to find the relationship between drugs and ADRs for signal detection by R programming. Four statistical parameters were proposed for quantitative signal detection: IC025, PRR and PRRlb, chi-square, and N11; we calculated these 4 values using R programming. We analyzed 78,983 drug-ADR combinations, and the total count of drug-ADR combinations was 420,060. During the calculation of the statistical parameters, we used 3 variables: (1) N11 (number of counts), (2) N1. (drug margin), and (3) N.1 (ADR margin). The structure and calculation of these 4 statistical parameters in R language are easily understandable. On the basis of the IC value (IC value > 0), out of the 78,983 drug-ADR combinations we found 8,667 combinations to be significantly associated. The calculation of statistical parameters in R language is time saving and allows new signals in the Indian ICSR (Individual Case Safety Reports) database to be identified easily.
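
    For readers who want the arithmetic behind such signal detection, the sketch below computes N11, PRR with a 95% lower bound, the 2x2 chi-square, and an information component for one drug-ADR pair. It is written in Python rather than R, the example counts are invented, and the closed-form IC025 approximation is an assumption (one published shortcut), not necessarily the formula used by PvPI.

```python
import numpy as np
from scipy.stats import chi2_contingency

def signal_stats(n11, n1_, n_1, n):
    """Disproportionality statistics for one drug-ADR pair from a 2x2 table.
    n11: reports with drug and ADR; n1_: all reports with the drug;
    n_1: all reports with the ADR; n: total reports in the database."""
    n10, n01 = n1_ - n11, n_1 - n11
    n00 = n - n11 - n10 - n01

    # Proportional reporting ratio and a 95% lower bound (log-normal approx.)
    prr = (n11 / n1_) / (n01 / (n - n1_))
    se_log = np.sqrt(1/n11 - 1/n1_ + 1/n01 - 1/(n - n1_))
    prr_lb = np.exp(np.log(prr) - 1.96 * se_log)

    # Chi-square on the 2x2 contingency table (with continuity correction)
    chi2 = chi2_contingency([[n11, n10], [n01, n00]], correction=True)[0]

    # Information component with a shrinkage-style expected count, and IC025
    # via a common closed-form approximation (an assumption in this sketch)
    e11 = n1_ * n_1 / n
    ic = np.log2((n11 + 0.5) / (e11 + 0.5))
    ic025 = ic - 3.3 * n11**-0.5 - 2.19 / n11

    return dict(N11=n11, PRR=prr, PRR_lb=prr_lb, chi2=chi2, IC=ic, IC025=ic025)

# Invented example pair in a database of 420,060 drug-ADR reports
print(signal_stats(n11=25, n1_=1200, n_1=800, n=420060))
```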

  13. Comparative Evaluation of Platelet-Rich Fibrin Biomaterial and Open Flap Debridement in the Treatment of Two and Three Wall Intrabony Defects

    PubMed Central

    Ajwani, Himanshu; Shetty, Sharath; Gopalakrishnan, Dharmarajan; Kathariya, Rahul; Kulloli, Anita; Dolas, R S; Pradeep, A R

    2015-01-01

    Background: Platelet-rich concentrates are the most widely used regenerative biomaterials. Stimulation and acceleration of soft and hard tissue healing are due to local and continuous delivery of growth factors and proteins, mimicking the needs of the physiological wound healing and reparative tissue processes. This article aims to evaluate the clinical efficacy of open flap debridement (OFD) with or without platelet-rich fibrin (PRF) in the treatment of intrabony defects. Materials and Methods: Twenty subjects with forty intrabony defects were treated with either autologous PRF with open-flap debridement (test, n = 20) or open-flap debridement alone (control, n = 20). Soft tissue parameters included plaque index, sulcus bleeding index, probing depth, relative attachment level, and gingival marginal level (GML). Hard tissue parameters included the distances from the cemento-enamel junction to the base of the defect (CEJ-BOD), from the alveolar crest to the base of the defect (AC-BOD), and from the CEJ to the AC. The parameters were recorded at baseline and at 9 months postoperatively, calculated from standardized radiographs using image-analysis software. Results: Statistically significant (P = 0.005) intragroup improvements were seen in all the hard and soft tissue parameters in both test and control groups, except for GML. Statistically significant improvements were seen in the mean defect fill (CEJ-BOD and AC-BOD) (P = 0.003) when intergroup comparisons were made. Conclusions: Adjunctive use of PRF with OFD significantly improves defect fill when compared to OFD alone. PRF has consistently shown regenerative potential; it is a simple, easily obtained, and inexpensive biomaterial compared with bone grafts. PMID:25954068

  14. Parameter estimation method that directly compares gravitational wave observations to numerical relativity

    NASA Astrophysics Data System (ADS)

    Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.

    2017-11-01

    We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we construct the full posterior parameter distribution only for generic nonprecessing binaries, drawing inferences away from the set of NR simulations used via interpolation of a single scalar quantity (the marginalized log likelihood, ln L) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤ 2 and l ≤ 3 harmonic modes. Using the l ≤ 3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.
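
    The core numerical idea, interpolating a single scalar ln L across the simulation set and reading off a posterior, can be sketched in one dimension. Everything below is hypothetical (the mass grid, the ln L values, the flat prior); the actual method works over the full binary parameter space.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.interpolate import CubicSpline

# Hypothetical marginalized log-likelihoods lnL from comparing the data to a
# handful of NR simulations that differ only in total mass M (one parameter)
M_sim = np.array([60.0, 65.0, 70.0, 75.0, 80.0])   # simulation grid (Msun)
lnL_sim = np.array([-4.1, -1.2, 0.0, -0.9, -3.6])  # assumed lnL values

lnL = CubicSpline(M_sim, lnL_sim)                  # interpolate the scalar lnL
M = np.linspace(M_sim[0], M_sim[-1], 400)
post = np.exp(lnL(M) - lnL(M).max())               # flat prior assumed
post /= trapezoid(post, M)                         # normalized posterior

mean = trapezoid(M * post, M)
sd = np.sqrt(trapezoid((M - mean) ** 2 * post, M))
print(f"M = {mean:.1f} +/- {sd:.1f} Msun from the interpolated-lnL posterior")
```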

  15. The prognostic value of clinical characteristics and parameters of cerebrospinal fluid hydrodynamics in shunting for idiopathic normal pressure hydrocephalus.

    PubMed

    Delwel, E J; de Jong, D A; Avezaat, C J J

    2005-10-01

    It is difficult to predict which patients with symptoms and radiological signs of normal pressure hydrocephalus (NPH) will benefit from a shunting procedure and which patients will not. The risk of this procedure is also higher in patients with NPH than in the overall population of hydrocephalic patients. The aim of this study was to investigate which clinical characteristics, CT parameters and parameters of cerebrospinal fluid dynamics could predict improvement after shunting. Eighty-three consecutive patients with symptoms and radiological signs of NPH were included in a prospective study. Parameters of the cerebrospinal fluid dynamics were calculated from computerised data obtained by a constant-flow lumbar infusion test. Sixty-six patients considered candidates for surgery were treated with a medium-pressure Spitz-Holter valve; in seventeen patients a shunting procedure was not considered indicated. Clinical and radiological follow-up was performed for at least one year postoperatively. The odds ratio, the sensitivity and specificity, and the positive and negative predictive values of individual measured parameters and of combinations of parameters did not show a statistically significant relation to clinical improvement after shunting. We conclude that neither individual parameters nor combinations of measured parameters show any statistically significant relation to clinical improvement following shunting procedures in patients suspected of NPH. We suggest restricting the term normal pressure hydrocephalus to cases that improve after shunting and using the term normal pressure hydrocephalus syndrome for patients suspected of NPH and for patients not improving after implantation of a proven well-functioning shunt.
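
    The predictive measures named above are simple functions of a 2x2 table of test outcome versus shunt response. A minimal sketch with invented counts (not the study's data):

```python
import numpy as np

def predictive_stats(tp, fp, fn, tn):
    """Diagnostic statistics for one candidate predictor of shunt response:
    tp = improved and test positive, fp = not improved and test positive,
    fn = improved and test negative, tn = not improved and test negative."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    oddsr = (tp * tn) / (fp * fn)
    # 95% CI of the odds ratio via the log-odds standard error
    se = np.sqrt(1/tp + 1/fp + 1/fn + 1/tn)
    ci = np.exp(np.log(oddsr) + np.array([-1.96, 1.96]) * se)
    return sens, spec, ppv, npv, oddsr, ci

# Hypothetical counts for one CSF-dynamics parameter in 66 shunted patients
sens, spec, ppv, npv, oddsr, ci = predictive_stats(tp=20, fp=14, fn=15, tn=17)
print(f"sens={sens:.2f} spec={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f} "
      f"OR={oddsr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```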

  16. ModelTest Server: a web-based tool for the statistical selection of models of nucleotide substitution online

    PubMed Central

    Posada, David

    2006-01-01

    ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
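
    The selection criteria themselves are short computations once the likelihood scores are in hand. Below is a hedged Python sketch (the server itself wraps ModelTest): the model names, scores, and parameter counts are invented, and Akaike weights are included as one common way to express model selection uncertainty.

```python
import numpy as np

# Hypothetical -ln L scores for candidate substitution models fitted to an
# alignment of n_sites sites; k = number of free model parameters
n_sites = 1000
models = {            # name: (negative log-likelihood, k)
    "JC69":  (5210.4, 0),
    "HKY85": (5105.7, 4),
    "GTR":   (5098.2, 8),
    "GTR+G": (5061.9, 9),
}

rows = []
for name, (nll, k) in models.items():
    aic = 2 * nll + 2 * k               # Akaike information criterion
    bic = 2 * nll + k * np.log(n_sites) # Bayesian information criterion
    rows.append((name, aic, bic))

aics = np.array([r[1] for r in rows])
weights = np.exp(-0.5 * (aics - aics.min()))
weights /= weights.sum()                # Akaike weights for model averaging

for (name, aic, bic), w in zip(rows, weights):
    print(f"{name:7s} AIC={aic:9.1f} BIC={bic:9.1f} weight={w:.3f}")
```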

  17. Model Parameter Variability for Enhanced Anaerobic Bioremediation of DNAPL Source Zones

    NASA Astrophysics Data System (ADS)

    Mao, X.; Gerhard, J. I.; Barry, D. A.

    2005-12-01

    The objective of the Source Area Bioremediation (SABRE) project, an international collaboration of twelve companies, two government agencies and three research institutions, is to evaluate the performance of enhanced anaerobic bioremediation for the treatment of chlorinated ethene source areas containing dense, non-aqueous phase liquids (DNAPL). This 4-year, 5.7-million-dollar research effort focuses on a pilot-scale demonstration of enhanced bioremediation at a trichloroethene (TCE) DNAPL field site in the United Kingdom, and includes a significant program of laboratory and modelling studies. Prior to field implementation, a large-scale, multi-laboratory microcosm study was performed to determine the optimal system properties to support dehalogenation of TCE in site soil and groundwater. This statistically-based suite of experiments measured the influence of key variables (electron donor, nutrient addition, bioaugmentation, TCE concentration and sulphate concentration) in promoting the reductive dechlorination of TCE to ethene. As well, a comprehensive biogeochemical numerical model was developed for simulating the anaerobic dehalogenation of chlorinated ethenes. An appropriate (reduced) version of this model was combined with a parameter estimation method based on fitting of the experimental results. Each of over 150 individual microcosm calibrations involved matching predicted and observed time-varying concentrations of all chlorinated compounds. This study focuses on an analysis of this suite of fitted model parameter values. This includes determining the statistical correlation between parameters typically employed in standard Michaelis-Menten type rate descriptions (e.g., maximum dechlorination rates, half-saturation constants) and the key experimental variables. The analysis provides insight into the degree to which aqueous phase TCE and cis-DCE inhibit dechlorination of less-chlorinated compounds. Overall, this work provides a database of the numerical modelling parameters typically employed for simulating TCE dechlorination relevant for a range of system conditions (e.g., bioaugmented, high TCE concentrations, etc.). The significance of the obtained variability of parameters is illustrated with one-dimensional simulations of enhanced anaerobic bioremediation of residual TCE DNAPL.
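
    The calibration step described here amounts to fitting Michaelis-Menten rate parameters to observed concentration histories. A minimal single-compound sketch, with an assumed initial concentration and invented observations (the real calibrations track all chlorinated species simultaneously):

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def tce_conc(t, vmax, ks):
    """TCE concentration under a Michaelis-Menten dechlorination rate,
    dC/dt = -vmax * C / (ks + C), integrated from an assumed C0."""
    C0 = 10.0                                     # assumed initial TCE (mg/L)
    rhs = lambda C, t: -vmax * C / (ks + C)
    return odeint(rhs, C0, t)[:, 0]

# Hypothetical microcosm observations (days, mg/L)
t_obs = np.array([0, 5, 10, 20, 30, 45, 60], float)
c_obs = np.array([10.0, 8.1, 6.4, 3.5, 1.8, 0.5, 0.1])

popt, pcov = curve_fit(tce_conc, t_obs, c_obs, p0=[0.3, 2.0],
                       bounds=(0, np.inf))
perr = np.sqrt(np.diag(pcov))
print(f"vmax = {popt[0]:.3f} +/- {perr[0]:.3f} mg/L/d, "
      f"ks = {popt[1]:.2f} +/- {perr[1]:.2f} mg/L")
```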

  18. Measurements of Time-Dependent CP-Asymmetry Parameters in B Meson Decays to η'K0 and of Branching Fractions of SU(3) Related Modes with the BaBar Experiment at SLAC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biassoni, Pietro

    2009-01-01

    In this thesis work we have measured the following upper limits at 90% confidence level for B meson decays (in units of 10⁻⁶), using a statistics of 465.0 × 10⁶ BB̄ pairs: B(B⁰ → ηK⁰) < 1.6, B(B⁰ → ηη) < 1.4, B(B⁰ → η'η') < 2.1, B(B⁰ → ηΦ) < 0.52, B(B⁰ → ηω) < 1.6, B(B⁰ → η'Φ) < 1.2, B(B⁰ → η'ω) < 1.7. We do not observe any of these decay modes; the statistical significance of our measurements is in the range 1.3-3.5 standard deviations. We have a 3.5σ evidence for B → ηω and a 3.1σ evidence for B → η'ω. The absence of an observation of B⁰ → ηK⁰ opens an issue related to the large difference compared to the branching fraction of the charged mode B⁺ → ηK⁺, which is measured to be 3.7 ± 0.4 ± 0.1 [118]. Our results represent substantial improvements over the previous ones [109, 110, 111] and are consistent with theoretical predictions. All these results were presented at the Flavor Physics and CP Violation (FPCP) 2008 Conference in Taipei, Taiwan, and will soon be included in a paper to be submitted to Physical Review D. For the time-dependent analysis, we have reconstructed 1820 ± 48 flavor-tagged B⁰ → η'K⁰ events, using the final BABAR statistics of 467.4 × 10⁶ BB̄ pairs. We use these events to measure the time-dependent asymmetry parameters S and C. We find S = 0.59 ± 0.08 ± 0.02 and C = -0.06 ± 0.06 ± 0.02. A non-zero value of C would represent a directly CP non-conserving component in B⁰ → η'K⁰, while S would be equal to sin2β measured in B⁰ → J/ΨK⁰s [108], a mixing-decay interference effect, provided the decay is dominated by amplitudes with a single weak phase. The new measured value of S can be considered in agreement with the expectations of the Standard Model, within the experimental and theoretical uncertainties. The inconsistency of our result for S with CP conservation (S = 0) has a significance of 7.1 standard deviations (statistical and systematic uncertainties included). Our result for the direct-CP violation parameter C is 0.9 standard deviations from zero (statistical and systematic uncertainties included). Our results are in agreement with the previous ones [18]. Although the statistics is only 20% larger than in the previous measurement, we improved the error on S by 20% and the error on C by 14%. This error is the smallest ever achieved, by both BABAR and Belle, in a measurement of time-dependent CP violation parameters in a b → s transition.

  19. The impact of varicocelectomy on sperm parameters: a meta-analysis.

    PubMed

    Schauer, Ingrid; Madersbacher, Stephan; Jost, Romy; Hübner, Wilhelm Alexander; Imhof, Martin

    2012-05-01

    We determined the impact of 3 surgical techniques (high ligation, inguinal varicocelectomy and the subinguinal approach) for varicocelectomy on sperm parameters (count and motility) and pregnancy rates. By searching the literature using MEDLINE and the Cochrane Library with the last search performed in February 2011, focusing on the last 20 years, a total of 94 articles published between 1975 and 2011 reporting on sperm parameters before and after varicocelectomy were identified. Inclusion criteria for this meta-analysis were at least 2 semen analyses (before and 3 or more months after the procedure), patient age older than 19 years, clinical subfertility and/or abnormal semen parameters, and a clinically palpable varicocele. To rule out skewing factors a bias analysis was performed, and statistical analysis was done with RevMan5(®) and SPSS 15.0(®). A total of 14 articles were included in the statistical analysis. All 3 surgical approaches led to significant or highly significant postoperative improvement of both parameters with only slight numeric differences among the techniques. This difference did not reach statistical significance for sperm count (p = 0.973) or sperm motility (p = 0.372). After high ligation surgery sperm count increased by 10.85 million per ml (p = 0.006) and motility by 6.80% (p <0.00001) on the average. Inguinal varicocelectomy led to an improvement in sperm count of 7.17 million per ml (p <0.0001) while motility changed by 9.44% (p = 0.001). Subinguinal varicocelectomy provided an increase in sperm count of 9.75 million per ml (p = 0.002) and sperm motility by 12.25% (p = 0.001). Inguinal varicocelectomy showed the highest pregnancy rate of 41.48% compared to 26.90% and 26.56% after high ligation and subinguinal varicocelectomy, respectively, and the difference was statistically significant (p = 0.035). This meta-analysis suggests that varicocelectomy leads to significant improvements in sperm count and motility regardless of surgical technique, with the inguinal approach offering the highest pregnancy rate. Copyright © 2012 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  20. The platelet activating factor acetyl hydrolase, oxidized low-density lipoprotein, paraoxonase 1 and arylesterase levels in treated and untreated patients with polycystic ovary syndrome.

    PubMed

    Carlioglu, Ayse; Kaygusuz, Ikbal; Karakurt, Feridun; Gumus, Ilknur Inegol; Uysal, Aysel; Kasapoglu, Benan; Armutcu, Ferah; Uysal, Sema; Keskin, Esra Aktepe; Koca, Cemile

    2014-11-01

    To evaluate the platelet activating factor acetylhydrolase (PAF-AH), oxidized low-density lipoprotein (ox-LDL), paraoxonase 1 (PON1) and arylesterase (ARE) levels and the effects of metformin and Diane-35 (ethinyl oestradiol + cyproterone acetate) therapies on these parameters, and to determine the PON1 polymorphisms among PCOS patients. Ninety patients with PCOS and 30 age- and body mass index-matched healthy controls were included in the study. Patients were divided into three groups: metformin treatment, Diane-35 treatment and no medication groups. The treatment with metformin or Diane-35 was continued for 6 months and all subjects were evaluated with clinical and biochemical parameters 6 months later. One-way ANOVA, t test and non-parametric Mann-Whitney U tests were used for statistical analysis. PAF-AH and ox-LDL levels were statistically significantly higher in untreated PCOS patients than in controls, and they were statistically significantly lower in patients treated with metformin or Diane-35 than in untreated PCOS patients. In contrast, there were lower PON1 (not statistically significant) and ARE (statistically significant) levels in untreated PCOS patients than in the control group, and these levels significantly increased after metformin and Diane-35 treatments. In PCOS patients, serum PON1 levels for the QQ, QR and RR phenotypes were statistically significantly lower than in the control group. In patients with PCOS, proatherogenic markers increase. The treatment of PCOS with metformin or Diane-35 had positive effects on the lipid profile, increased the PON1 level, which protects against atherosclerosis, and decreased the proatherogenic PAF-AH and ox-LDL levels.

  1. Measuring the statistical validity of summary meta‐analysis and meta‐regression results for use in clinical practice

    PubMed Central

    Riley, Richard D.

    2017-01-01

    An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945
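
    The leave-one-out idea is easy to prototype: refit the summary without study i and ask whether study i falls within its predicted sampling error. The sketch below is a simplified fixed-effect stand-in for the paper's Vn statistic, not a reproduction of it; the study data are invented.

```python
import numpy as np

# Hypothetical study effect estimates y_i with within-study variances v_i
y = np.array([0.42, 0.55, 0.30, 0.61, 0.48, 0.37, 0.52, 0.45])
v = np.array([0.010, 0.020, 0.015, 0.030, 0.012, 0.018, 0.025, 0.016])

def fe_summary(y, v):
    """Fixed-effect (inverse-variance) summary estimate and its variance."""
    w = 1.0 / v
    return np.sum(w * y) / np.sum(w), 1.0 / np.sum(w)

# Leave-one-out: does the summary from the remaining studies "predict"
# the left-out study within its sampling error?
z = []
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    mu_i, var_i = fe_summary(y[keep], v[keep])
    z.append((y[i] - mu_i) / np.sqrt(v[i] + var_i))
z = np.array(z)

V = np.sum(z**2)   # roughly chi-square(n) under homogeneity, as a calibration
print(f"LOO validation statistic = {V:.2f} over n = {len(y)} studies")
```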

  2. Uncertainty quantification and risk analyses of CO2 leakage in heterogeneous geological formations

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Murray, C. J.; Rockhold, M. L.

    2012-12-01

    A stochastic sensitivity analysis framework is adopted to evaluate the impact of spatial heterogeneity in permeability on CO2 leakage risk. The leakage is defined as the total mass of CO2 moving into the overburden through the caprock-overburden interface, in both gaseous and liquid (dissolved) phases. The entropy-based framework has the ability to quantify the uncertainty associated with the input parameters in the form of prior pdfs (probability density functions). Effective sampling of the prior pdfs enables us to fully explore the parameter space and systematically evaluate the individual and combined effects of the parameters of interest on CO2 leakage risk. The parameters considered in the study include the mean, variance, and horizontal-to-vertical spatial anisotropy ratio of the caprock permeability, and the same parameters for the reservoir permeability. Given the sampled spatial variogram parameters, multiple realizations of permeability fields were generated using GSLIB subroutines. For each permeability field, a numerical simulator, STOMP (in the water-salt-CO2-energy operational mode), is used to simulate the CO2 migration within the reservoir and caprock up to 50 years after injection. Due to the intensive computational demand, we run both eSTOMP, a scalable version of the simulator, and the serial STOMP on various supercomputers. We then perform statistical analyses and summarize the relationships between the parameters of interest (mean/variance/anisotropy ratio of caprock and reservoir permeability) and the CO2 leakage ratio. We also present the effects of those parameters on the CO2 plume radius and reservoir injectivity. The statistical analysis provides a reduced-order model that can be used to estimate the impact of heterogeneity on caprock leakage.
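
    The last step, a reduced-order model linking sampled parameters to leakage, can be illustrated with a least-squares surrogate. In the sketch below the leakage_ratio function is a hypothetical stand-in for a STOMP run, and the prior ranges are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sample the prior pdfs of the heterogeneity parameters (assumed ranges):
n = 200
X = np.column_stack([
    rng.uniform(-18, -16, n),   # log10 mean caprock permeability [m^2]
    rng.uniform(0.5, 2.0, n),   # caprock ln-permeability variance
    rng.uniform(-14, -12, n),   # log10 mean reservoir permeability [m^2]
    rng.uniform(1.0, 10.0, n),  # spatial anisotropy ratio
])

def leakage_ratio(x):
    """Stand-in for a STOMP simulation; returns a hypothetical leakage ratio."""
    k_cap, var_cap, k_res, aniso = x
    return 1e-3 * np.exp(0.8 * (k_cap + 17) + 0.5 * var_cap - 0.1 * aniso)

y = np.array([leakage_ratio(x) for x in X])

# Reduced-order model: least-squares fit of log-leakage on the parameters
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
print("log-leakage sensitivities:", np.round(coef[1:], 3))
```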

  3. Ground-Based Telescope Parametric Cost Model

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
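
    A multi-variable parametric cost model of this kind is typically a power law fitted in log space. The sketch below uses invented telescope data and an f-number stand-in for the radius-of-curvature term; it shows the regression mechanics, not the paper's actual coefficients.

```python
import numpy as np

# Hypothetical telescope data: aperture D (m), primary f-ratio, cost (M$)
D    = np.array([2.4, 3.5, 4.2, 6.5, 8.1, 10.0])
fnum = np.array([2.5, 1.8, 1.7, 1.2, 1.1, 1.0])   # radius-of-curvature proxy
cost = np.array([9.0, 25.0, 42.0, 110.0, 210.0, 350.0])

# Multi-variable power-law model: cost = a * D^b * fnum^c, linear in log space
A = np.column_stack([np.ones(D.size), np.log(D), np.log(fnum)])
(lna, b, c), *_ = np.linalg.lstsq(A, np.log(cost), rcond=None)
print(f"cost ~ {np.exp(lna):.2f} * D^{b:.2f} * f#^{c:.2f}")

# Single-variable model based on aperture diameter alone
(lna1, b1), *_ = np.linalg.lstsq(A[:, :2], np.log(cost), rcond=None)
print(f"cost ~ {np.exp(lna1):.2f} * D^{b1:.2f}")
```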

  4. Global tilt and lumbar lordosis index: two parameters correlating with health-related quality of life scores-but how do they truly impact disability?

    PubMed

    Boissière, Louis; Takemoto, Mitsuru; Bourghli, Anouar; Vital, Jean-Marc; Pellisé, Ferran; Alanay, Ahmet; Yilgor, Caglar; Acaroglu, Emre; Perez-Grueso, Francisco Javier; Kleinstück, Frank; Obeid, Ibrahim

    2017-04-01

    Many radiological parameters have been reported to correlate with patient disability, including sagittal vertical axis (SVA), pelvic tilt (PT), and pelvic incidence minus lumbar lordosis (PI-LL). The European literature reports other parameters such as the lumbar lordosis index (LLI) and the global tilt (GT). Although most parameters correlate with health-related quality of life scores (HRQLs), their impact on disability remains unclear. This study aimed to validate these parameters by investigating their correlation with HRQLs. It also aimed to evaluate the relationship between each of these sagittal parameters and HRQLs to fully understand their impact in adult spinal deformity management. A retrospective review of a multicenter, prospective database was carried out. The database inclusion criteria were adults (>18 years old) presenting any of the following radiographic parameters: scoliosis (Cobb ≥20°), SVA ≥5 cm, thoracic kyphosis ≥60° or PT ≥25°. All patients with complete data at baseline were included. Health-related quality of life scores, demographic variables (DVs), and radiographic parameters were collected at baseline. Differences in HRQLs among groups of each DV were assessed with analyses of variance. Correlations between radiographic variables and HRQLs were assessed using the Spearman rank correlation. Multivariate linear regression models were fitted for each of the HRQLs (Oswestry Disability Index [ODI], Scoliosis Research Society-22 subtotal score, or physical component summaries) with sagittal parameters and covariates as independent variables. A value of p<.05 was considered statistically significant. Among a total of 755 included patients (mean age, 52.1 years), 431 were non-surgical candidates and 324 were surgical candidates. Global tilt and LLI correlated significantly with HRQLs (r=0.4 and -0.3, respectively) in univariate analysis. Demographic variables such as age, gender, body mass index, past surgery, and surgical or non-surgical candidacy were significant predictors of ODI score. The likelihood ratio tests for the addition of the sagittal parameters showed that SVA, GT, T1 sagittal tilt, PI-LL, and LLI were statistically significant predictors of ODI score even when adjusted for covariates. The differences in R² values from Model 1 were 1.5% at maximum, indicating that the addition of sagittal parameters to the reference model increased the explained variance of ODI by only 1.5%. GT and LLI appear to be independent radiographic parameters impacting ODI variance. Although most of the parameters described in the literature are correlated with ODI, the impact of these radiographic parameters is less than 2% of ODI variance, whereas 40% is explained by DVs. The importance of radiographic parameters lies more in their purpose to describe and understand the malalignment mechanisms than in their univariate correlation with HRQLs. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. THERMUS—A thermal model package for ROOT

    NASA Astrophysics Data System (ADS)

    Wheaton, S.; Cleymans, J.; Hauer, M.

    2009-01-01

    THERMUS is a package of C++ classes and functions allowing statistical-thermal model analyses of particle production in relativistic heavy-ion collisions to be performed within the ROOT framework of analysis. Calculations are possible within three statistical ensembles; a grand-canonical treatment of the conserved charges B, S and Q, a fully canonical treatment of the conserved charges, and a mixed-canonical ensemble combining a canonical treatment of strangeness with a grand-canonical treatment of baryon number and electric charge. THERMUS allows for the assignment of decay chains and detector efficiencies specific to each particle yield, which enables sensible fitting of model parameters to experimental data.
    Program summary
    Program title: THERMUS, version 2.1
    Catalogue identifier: AEBW_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBW_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 17 152
    No. of bytes in distributed program, including test data, etc.: 93 581
    Distribution format: tar.gz
    Programming language: C++
    Computer: PC, Pentium 4, 1 GB RAM (not hardware dependent)
    Operating system: Linux: FEDORA, RedHat, etc.
    Classification: 17.7
    External routines: Numerical Recipes in C [1], ROOT [2]
    Nature of problem: Statistical-thermal model analyses of heavy-ion collision data require the calculation of both primordial particle densities and contributions from resonance decay. A set of thermal parameters (the number depending on the particular model imposed) and a set of thermalized particles, with their decays specified, is required as input to these models. The output is then a complete set of primordial thermal quantities for each particle, together with the contributions to the final particle yields from resonance decay. In many applications of statistical-thermal models it is required to fit experimental particle multiplicities or particle ratios. In such analyses, the input is a set of experimental yields and ratios, a set of particles comprising the assumed hadron resonance gas formed in the collision and the constraints to be placed on the system. The thermal model parameters consistent with the specified constraints leading to the best fit to the experimental data are then output.
    Solution method: THERMUS is a package designed for incorporation into the ROOT [2] framework, used extensively by the heavy-ion community. As such, it utilizes a great deal of ROOT's functionality in its operation. ROOT features used in THERMUS include its containers, the wrapper TMinuit implementing the MINUIT fitting package, and the TMath class of mathematical functions and routines. Arguably the most useful feature is the utilization of CINT as the control language, which allows interactive access to the THERMUS objects. Three distinct statistical ensembles are included in THERMUS, while additional options to include quantum statistics, resonance width and excluded volume corrections are also available. THERMUS provides a default particle list including all mesons (up to the K4*(2045)) and baryons (up to the Ω) listed in the July 2002 Particle Physics Booklet [3]. For each typically unstable particle in this list, THERMUS includes a text file listing its decays.
    With thermal parameters specified, THERMUS calculates primordial thermal densities either by performing numerical integrations or else, in the case of the Boltzmann approximation without resonance width in the grand-canonical ensemble, by evaluating Bessel functions. Particle decay chains are then used to evaluate experimental observables (i.e. particle yields following resonance decay). Additional detector efficiency factors allow fine-tuning of the model predictions to a specific detector arrangement. When parameters are required to be constrained, use is made of the 'Numerical Recipes in C' [1] function which applies the Broyden globally convergent secant method of solving nonlinear systems of equations. Since the NRC software is not freely available, it has to be purchased by the user. THERMUS provides the means of imposing a large number of constraints on the chosen model (amongst others, THERMUS can fix the baryon-to-charge ratio of the system, the strangeness density of the system and the primordial energy per hadron). Fits to experimental data are accomplished in THERMUS by using the ROOT TMinuit class. In its default operation, the standard χ² function is minimized, yielding the set of best-fit thermal parameters. THERMUS allows the assignment of separate decay chains to each experimental input. In this way, the model is able to match the specific feed-down corrections of a particular data set.
    Running time: Depending on the analysis required, run-times vary from seconds (for the evaluation of particle multiplicities given a set of parameters) to several minutes (for fits to experimental data subject to constraints).
    References: [1] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, Cambridge, 2002. [2] R. Brun, F. Rademakers, Nucl. Inst. Meth. Phys. Res. A 389 (1997) 81. See also http://root.cern.ch/. [3] K. Hagiwara et al., Phys. Rev. D 66 (2002) 010001.

  6. Sub-poissonian photon statistics in the coherent state Jaynes-Cummings model in non-resonance

    NASA Astrophysics Data System (ADS)

    Zhang, Jia-tai; Fan, An-fu

    1992-03-01

    We study a model of a two-level atom (TLA) interacting off-resonance with a single-mode quantized cavity field (QCF). The photon number probability function, the mean photon number and Mandel's fluctuation parameter are calculated. Sub-Poissonian distributions of the photon statistics are obtained in the non-resonant interaction. These statistical properties depend strongly on the detuning parameters.
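
    Mandel's fluctuation parameter is directly computable from any photon-number distribution. The sketch below checks a coherent (Poissonian) state against a toy narrowed distribution; the narrowing device is illustrative only, not the Jaynes-Cummings dynamics.

```python
import numpy as np
from scipy.stats import poisson

def mandel_q(p):
    """Mandel's fluctuation parameter from a photon-number distribution p(n).
    Q < 0 indicates sub-Poissonian statistics; Q = 0 is Poissonian."""
    n = np.arange(p.size)
    mean = np.sum(n * p)
    var = np.sum(n**2 * p) - mean**2
    return (var - mean) / mean

# Poissonian reference (coherent state with <n> = 4)
p_coh = poisson.pmf(np.arange(60), mu=4.0)
print(f"coherent state: Q = {mandel_q(p_coh):+.3f}")

# A narrowed (number-squeezed) distribution as a sub-Poissonian example
p_sub = p_coh**2          # squaring narrows the distribution (toy model)
p_sub /= p_sub.sum()
print(f"narrowed state: Q = {mandel_q(p_sub):+.3f}")
```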

  7. Distribution of water quality parameters in Dhemaji district, Assam (India).

    PubMed

    Buragohain, Mridul; Bhuyan, Bhabajit; Sarma, H P

    2010-07-01

    The primary objective of this study is to present a statistically significant water quality database of Dhemaji district, Assam (India), with special reference to pH, fluoride, nitrate, arsenic, iron, sodium and potassium. 25 water samples collected from different locations in five development blocks of Dhemaji district have been studied separately. The implications presented are based on statistical analyses of the raw data. Normal distribution statistics and reliability analysis (correlation and covariance matrices) have been employed to find out the distribution pattern, localisation of data, and other related information. Statistical observations show that all the parameters under investigation exhibit non-uniform distributions with a long asymmetric tail on either the right or the left side of the median. The width of the third quartile was consistently found to be greater than that of the second quartile for each parameter. Differences among mean, mode and median, and significant skewness and kurtosis values, indicate that the distribution of the various water quality parameters in the study area deviates widely from normality. Thus, the intrinsic water quality is not encouraging, owing to the asymmetric distribution of the various water quality parameters in the study area.

  8. Relationship of body weight parameters with the incidence of common spontaneous tumors in Tg.rasH2 mice.

    PubMed

    Paranjpe, Madhav G; Denton, Melissa D; Vidmar, Tom J; Elbekai, Reem H

    2014-10-01

    The mechanistic relationship between increased food consumption, increased body weights, and increased incidence of tumors has been well established in 2-year rodent models. Body weight parameters such as initial body weights, terminal body weights, food consumption, and the body weight gains in grams and percentages were analyzed to determine whether such relationship exists between these parameters with the incidence of common spontaneous tumors in Tg.rasH2 mice. None of these body weight parameters had any statistically significant relationship with the incidence of common spontaneous tumors in Tg.rasH2 males, namely lung tumors, splenic hemangiosarcomas, nonsplenic hemangiosarcomas, combined incidence of all hemangiosarcomas, and Harderian gland tumors. These parameters also did not have any statistically significant relationship with the incidence of lung and Harderian gland tumors in females. However, in females, increased initial body weights did have a statistically significant relationship with the nonsplenic hemangiosarcomas, and increased terminal body weights did have a statistically significant relationship with the incidence of splenic hemangiosarcomas, nonsplenic hemangiosarcomas, and the combined incidence of all hemangiosarcomas. In addition, increased body weight gains in grams and percentages had a statistically significant relationship with the combined incidence of all hemangiosarcomas in females, but not separately with splenic and nonsplenic hemangiosarcomas. © 2013 by The Author(s).

  9. Statistical Distributions of Optical Flares from Gamma-Ray Bursts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Shuang-Xi; Yu, Hai; Wang, F. Y.

    2017-07-20

    We statistically study gamma-ray burst (GRB) optical flares from the Swift/UVOT catalog. We compile 119 optical flares, including 77 flares with redshift measurements. Some tight correlations among the timescales of optical flares are found. For example, the rise time is correlated with the decay time, and the duration time is correlated with the peak time of optical flares. These two tight correlations indicate that longer rise times are associated with longer decay times of optical flares, and also suggest that broader optical flares peak at later times, which is consistent with the corresponding correlations of X-ray flares. We also study the frequency distributions of optical flare parameters, including the duration time, rise time, decay time, peak time, and waiting time. Similar power-law distributions for optical and X-ray flares are found. Our statistical results imply that GRB optical flares and X-ray flares may share a similar physical origin, and both of them are possibly related to central engine activities.

  10. VizieR Online Data Catalog: Fundamental parameters of Kepler stars (Silva Aguirre+, 2015)

    NASA Astrophysics Data System (ADS)

    Silva Aguirre, V.; Davies, G. R.; Basu, S.; Christensen-Dalsgaard, J.; Creevey, O.; Metcalfe, T. S.; Bedding, T. R.; Casagrande, L.; Handberg, R.; Lund, M. N.; Nissen, P. E.; Chaplin, W. J.; Huber, D.; Serenelli, A. M.; Stello, D.; van Eylen, V.; Campante, T. L.; Elsworth, Y.; Gilliland, R. L.; Hekker, S.; Karoff, C.; Kawaler, S. D.; Kjeldsen, H.; Lundkvist, M. S.

    2016-02-01

    Our sample has been extracted from the 77 exoplanet host stars presented in Huber et al. (2013, Cat. J/ApJ/767/127). We have made use of the full time-base of observations from the Kepler satellite to uniformly determine precise fundamental stellar parameters, including ages, for a sample of exoplanet host stars where high-quality asteroseismic data were available. We devised a Bayesian procedure flexible in its input and applied it to different grids of models to study systematics from input physics and extract statistically robust properties for all stars. (4 data files).

  11. Wash-out in N{sub 2}-dominated leptogenesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hahn-Woernle, F., E-mail: fhahnwo@mppmu.mpg.de

    2010-08-01

    We study the wash-out of a cosmological baryon asymmetry produced via leptogenesis by subsequent interactions. To this end, we focus on a scenario in which a lepton asymmetry is established in the out-of-equilibrium decays of the next-to-lightest right-handed neutrino. We apply the full classical Boltzmann equations, without the assumption of kinetic equilibrium and including all quantum statistical factors, to calculate the wash-out of the lepton asymmetry by interactions of the lightest right-handed state. We include scattering processes with top quarks in our analysis. This is of particular interest since the wash-out is enhanced by scatterings and by the use of mode equations with quantum statistical distribution functions. In this way we provide a restriction on the parameter space for this scenario.

  12. Peri-implant soft tissue colour around titanium and zirconia abutments: a prospective randomized controlled clinical study.

    PubMed

    Cosgarea, Raluca; Gasparik, Cristina; Dudea, Diana; Culic, Bogdan; Dannewitz, Bettina; Sculean, Anton

    2015-05-01

    To objectively determine the difference in colour between the peri-implant soft tissue at titanium and zirconia abutments. Eleven patients, each with two contralaterally inserted osseointegrated dental implants, were included in this study. The implants were restored either with titanium abutments and porcelain-fused-to-metal crowns, or with zirconia abutments and ceramic crowns. Before and after crown cementation, multi-spectral images of the peri-implant soft tissues and the gingiva of the neighbouring teeth were taken with a colorimeter. The colour parameters L*, a*, b*, c* and the colour differences ΔE were calculated. Descriptive statistics, including non-parametric tests and correlation coefficients, were used for statistical analyses of the data. Compared to the gingiva of the neighbouring teeth, the peri-implant soft tissue around titanium and zirconia (test group) showed distinguishable ΔE both before and after crown cementation. Colour differences around titanium were statistically significantly different from those around zirconia (P = 0.01) only at 1 mm prior to crown cementation. Compared to the gingiva of the neighbouring teeth, statistically significant (P < 0.01) differences were found for all colour parameters, both before and after crown cementation, for both abutments; more significant differences were registered for titanium abutments. Tissue thickness correlated positively with c*-values for titanium at 1 mm and 2 mm from the gingival margin. Within their limits, the present data indicate that: (i) the peri-implant soft tissue around titanium and zirconia showed colour differences when compared to the soft tissue around natural teeth, and (ii) the peri-implant soft tissue around zirconia demonstrated a better colour match to the soft tissue at natural teeth than titanium. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
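
    The reported ΔE values come from the standard CIELAB colour distance. A minimal sketch with invented colorimeter readings, using the classic CIE 1976 formula (the study may have used a variant):

```python
import numpy as np

def delta_e(lab1, lab2):
    """CIELAB colour difference (the classic Delta E*ab, CIE 1976)."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Hypothetical colorimeter readings (L*, a*, b*) at 1 mm from the margin
gingiva_tooth = (58.2, 18.5, 12.1)   # soft tissue at a natural tooth
peri_titanium = (54.9, 15.8, 10.0)   # peri-implant tissue, titanium abutment
peri_zirconia = (57.1, 17.6, 11.4)   # peri-implant tissue, zirconia abutment

print(f"dE titanium vs tooth: {delta_e(peri_titanium, gingiva_tooth):.2f}")
print(f"dE zirconia vs tooth: {delta_e(peri_zirconia, gingiva_tooth):.2f}")
```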

  13. The Tully-Fisher relation for flat galaxies

    NASA Astrophysics Data System (ADS)

    Makarov, D. I.; Zaitseva, N. A.; Bizyaev, D. V.

    2018-06-01

    We construct a multiparametric Tully-Fisher (TF) relation for a large sample of edge-on galaxies from the Revised Flat Galaxy Catalog using H I data from the EDD database and parameters from the EGIS catalog. We incorporate a variety of additional parameters including structural parameters of edge-on galaxies in different bandpasses. Besides the rotation curve maximum, only the H I-to-optical luminosity ratio and optical colours play a statistically significant role in the multiparametric TF relation. We are able to decrease the standard deviation of the multiparametric TF relation down to 0.32 mag, which is at the level of best modern samples of galaxies used for studies of the matter motion in the Universe via the TF-relation.

  14. Identification of atypical flight patterns

    NASA Technical Reports Server (NTRS)

    Statler, Irving C. (Inventor); Ferryman, Thomas A. (Inventor); Amidan, Brett G. (Inventor); Whitney, Paul D. (Inventor); White, Amanda M. (Inventor); Willse, Alan R. (Inventor); Cooley, Scott K. (Inventor); Jay, Joseph Griffith (Inventor); Lawrence, Robert E. (Inventor); Mosbrucker, Chris (Inventor)

    2005-01-01

    Method and system for analyzing aircraft data, including multiple selected flight parameters for a selected phase of a selected flight, and for determining when the selected phase of the selected flight is atypical, when compared with corresponding data for the same phase for other similar flights. A flight signature is computed using continuous-valued and discrete-valued flight parameters for the selected flight parameters and is optionally compared with a statistical distribution of other observed flight signatures, yielding atypicality scores for the same phase for other similar flights. A cluster analysis is optionally applied to the flight signatures to define an optimal collection of clusters. A level of atypicality for a selected flight is estimated, based upon an index associated with the cluster analysis.
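
    One common way to score a flight signature against a statistical distribution of other signatures is a Mahalanobis-type distance; whether the patented method uses exactly this is not stated, so treat the sketch below, with random stand-in signatures, as illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical flight signatures: one row of summarized flight parameters
# (e.g., mean airspeed, sink rate, flap timing) per flight, for one phase
signatures = rng.normal(size=(500, 6))
new_flight = rng.normal(size=6) + np.array([0, 0, 3.5, 0, 0, 0])  # one odd axis

mu = signatures.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(signatures, rowvar=False))

def atypicality(x):
    """Squared Mahalanobis distance of a signature from the fleet distribution."""
    d = x - mu
    return float(d @ cov_inv @ d)

scores = np.array([atypicality(s) for s in signatures])
score_new = atypicality(new_flight)
pct = (scores < score_new).mean() * 100
print(f"atypicality = {score_new:.1f} (above {pct:.1f}% of comparable flights)")
```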

  15. Information-Decay Pursuit of Dynamic Parameters in Student Models

    DTIC Science & Technology

    1994-04-01

    [Fragmentary OCR text; only partially recoverable. The report notes that commercially available computer programs for structuring and using Bayesian inference include ERGO (Noetic Systems, Inc., 1991), and cites Tukey, J.W. (1977), Data Analysis and Regression: A Second Course in Statistics (Reading, MA: Addison-Wesley). The remainder of the fragment is distribution-list and address debris.]

  16. Immunohistochemical expression of matrix metalloproteinase 13 in chronic periodontitis.

    PubMed

    Nagasupriya, Alapati; Rao, Donimukkala Bheemalingeswara; Ravikanth, Manyam; Kumar, Nalabolu Govind; Ramachandran, Cinnamanoor Rajmani; Saraswathi, Thillai Rajashekaran

    2014-01-01

    The extracellular matrix is a complex integrated system responsible for the physiologic properties of connective tissue. Collagen is the major extracellular component that is altered in pathologic conditions, mainly periodontitis. The destruction involves proteolytic enzymes, primarily matrix metalloproteinases (MMPs), which play a key role in mediating and regulating the connective tissue destruction in periodontitis. The study group included 40 patients with clinically diagnosed chronic periodontitis. The control group included 20 patients with clinically normal gingiva covering impacted third molars undergoing extraction or in areas where crown-lengthening procedures were performed. MMP-13 expression was demonstrated using immunohistochemistry in all the gingival biopsies, and the data were analyzed statistically. MMP-13 expression was higher in chronic periodontitis than in normal gingiva. MMP-13 was expressed by fibroblasts, lymphocytes, macrophages, plasma cells, and basal cells of the sulcular epithelium. Comparative evaluation of all the clinical and histologic parameters against MMP-13 expression showed highly statistically significant associations (Spearman correlation coefficient). Elevated levels of MMP-13 may play a role in the pathogenesis of chronic periodontitis. There is a direct correlation between increased expression of MMP-13 and various clinical and histologic parameters of disease severity.

  17. A practical and systematic review of Weibull statistics for reporting strengths of dental materials

    PubMed Central

    Quinn, George D.; Quinn, Janet B.

    2011-01-01

    Objectives: To review the history, theory and current applications of Weibull analyses sufficient to make informed decisions regarding practical use of the analysis in dental material strength testing. Data: References are made to examples in the engineering and dental literature, but this paper also includes illustrative analyses of Weibull plots, fractographic interpretations, and Weibull distribution parameters obtained for a dense alumina, two feldspathic porcelains, and a zirconia. Sources: Informational sources include Weibull's original articles, later articles specific to applications and theoretical foundations of Weibull analysis, texts on statistics and fracture mechanics and the international standards literature. Study Selection: The chosen Weibull analyses are used to illustrate technique, the importance of flaw size distributions, physical meaning of Weibull parameters and concepts of "equivalent volumes" to compare measured strengths obtained from different test configurations. Conclusions: Weibull analysis has a strong theoretical basis and can be of particular value in dental applications, primarily because of test specimen size limitations and the use of different test configurations. Also endemic to dental materials, however, is increased difficulty in satisfying application requirements, such as confirming fracture origin type and diligence in obtaining quality strength data. PMID:19945745
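
    The working core of a Weibull strength analysis fits the two parameters from ranked strength data. A short sketch with invented strengths, using the median-rank estimator and the usual linearization (the standards cited in the paper prescribe maximum likelihood variants):

```python
import numpy as np

# Hypothetical flexural strengths (MPa) of one dental ceramic, n = 15
s = np.sort(np.array([512, 534, 559, 566, 578, 590, 601, 612, 620,
                      633, 645, 660, 672, 688, 705], float))

# Median-rank probability estimator, then linearize:
# ln ln(1/(1-F)) = m ln(sigma) - m ln(sigma_0)
n = s.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
x, y = np.log(s), np.log(-np.log(1.0 - F))

m, intercept = np.polyfit(x, y, 1)       # slope = Weibull modulus m
sigma0 = np.exp(-intercept / m)          # characteristic strength (63.2%)
print(f"Weibull modulus m = {m:.1f}, characteristic strength = {sigma0:.0f} MPa")
```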

  18. [Status and progress of stimulating parameters in acupuncture treatment of ischemic cerebrovascular disease].

    PubMed

    Wei, Yuan-yuan; Fan, Xiao-nong; Wang, Shu; Shi, Xue-min

    2008-08-01

    Acute ischemic cerebrovascular disease is one of the critical diseases seriously endangering human health. Acupuncture therapy has been generally acknowledged as an effective treatment method for many types of disorders. In recent years, many scientific researchers have studied the relationship between the effects of acupuncture in relieving cerebral ischemia-induced sequelae and the stimulating parameters. The acupuncture stimulating parameters include the frequency of electroacupuncture (EA), the frequency of acupuncture treatment, and the acquired quantity of stimulation, etc., for clinical patients and experimental animals. It was found that different stimulating parameters may have different efficacies. Current research results provide a good basis not only for analysis of the factors of acupuncture-produced effects, but also for determination of the optimal combination of stimulating parameters. However, acupuncture therapeutic effect involves multiple factors and multiple levels, and current quantitative acupuncture parameter research has been mainly restricted to animal experiments. Hence, more research in which statistics specialists take part is definitely needed.

  19. Effects of suvorexant, an orexin receptor antagonist, on sleep parameters as measured by polysomnography in healthy men.

    PubMed

    Sun, Hong; Kennedy, William P; Wilbraham, Darren; Lewis, Nicole; Calder, Nicole; Li, Xiaodong; Ma, Junshui; Yee, Ka Lai; Ermlich, Susan; Mangin, Eric; Lines, Christopher; Rosen, Laura; Chodakewitz, Jeffrey; Murphy, Gail M

    2013-02-01

    Suvorexant (MK-4305) is an orexin receptor antagonist being developed for the treatment of insomnia. This report describes the effects of nighttime administration of suvorexant on polysomnography (PSG) sleep parameters in healthy young men. Randomized, double-blind, placebo-controlled, 4-period crossover PSG study, followed by an additional 5th period to assess pharmacokinetics. Sleep laboratory. Healthy young men between 18 and 45 years of age (22 enrolled, 19 completed). Periods 1-4: suvorexant (10 mg, 50 mg, or 100 mg) or placebo 1 h before nighttime PSG recording. Period 5: suvorexant 10 mg, 50 mg, or 100 mg. In Periods 1-4, overnight sleep parameters were recorded by PSG and next-morning residual effects were assessed by psychomotor performance tests and subjective assessments. Statistically significant sleep-promoting effects were observed with all doses of suvorexant compared to placebo. Suvorexant 50 mg and 100 mg significantly decreased latency to persistent sleep and wake after sleep onset time, and increased sleep efficiency. Suvorexant 10 mg significantly decreased wake after sleep onset time. There were no statistically significant effects of suvorexant on EEG frequency bands, including delta (slow wave) activity, based on power spectral analysis. Suvorexant was well tolerated. There was no evidence of next-day residual effects for suvorexant 10 mg. Suvorexant 50 mg statistically significantly reduced subjective alertness, and suvorexant 100 mg significantly increased reaction time and reduced subjective alertness. There were no statistically significant effects of any suvorexant dose on digit symbol substitution test performance. In Period 5, plasma samples were collected for pharmacokinetic evaluation of suvorexant. The median Tmax was 3 hours and the apparent terminal t1/2 was 9-13 hours. In healthy young men without sleep disorders, suvorexant promoted sleep with some evidence of residual effects at the highest doses.

  20. Establishment of a center of excellence for applied mathematical and statistical research

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Gray, H. L.

    1983-01-01

    The state of the art was assessed with regard to efforts in support of the crop production estimation problem, and alternative generic proportion estimation techniques were investigated. Topics covered include modeling the greenness profile (Badhwar's model), parameter estimation using mixture models such as CLASSY, and minimum distance estimation as an alternative to maximum likelihood estimation. Approaches to the problem of obtaining proportion estimates when the underlying distributions are asymmetric are examined, including the properties of the Weibull distribution.

  1. Climate Considerations Of The Electricity Supply Systems In Industries

    NASA Astrophysics Data System (ADS)

    Asset, Khabdullin; Zauresh, Khabdullina

    2014-12-01

    The study focuses on analysis of the climate aspects of electricity supply systems in a pellet industry. The developed analysis model consists of two modules: a module for statistical evaluation of active power losses and a module for evaluation of climate aspects. The statistical data module is built on a universal mathematical model of electrical systems and components of industrial load, and forms a basis for detailed accounting of power losses by voltage level. On the basis of the universal model, a set of programs was designed to perform calculations and experimental research. It helps to obtain the statistical characteristics of the power losses and loads of the electricity supply systems and to define the nature of changes in these characteristics. Within the module, several methods and algorithms are developed for calculating parameters of equivalent circuits of low- and high-voltage ADC and SD with a massive smooth rotor with laminated poles. The climate aspects module includes an analysis of the experimental data of the power supply system in pellet production. It allows identification of GHG emission reduction parameters: operation hours, type of electrical motors, values of load factor and deviation from the standard value of voltage.

  2. The Global Signature of Ocean Wave Spectra

    NASA Astrophysics Data System (ADS)

    Portilla-Yandún, Jesús

    2018-01-01

    A global atlas of ocean wave spectra is developed and presented. The development is based on a new technique for deriving wave spectral statistics, which is applied to the extensive ERA-Interim database from the European Centre for Medium-Range Weather Forecasts. Spectral statistics is based on the idea of long-term wave systems, which are unique and distinct at every geographical point. The identification of those wave systems allows their separation from the overall spectrum using the partition technique. Their further characterization is made using standard integrated parameters, which turn out to be much more meaningful when applied to the individual components than to the total spectrum. The parameters developed include the density distribution of spectral partitions, which is the main descriptor; the identified wave systems; the individual distributions of the characteristic frequencies, directions, wave height, wave age, and the seasonal variability of wind and waves; return periods derived from extreme value analysis; and crossing-sea probabilities. This information is made available in web format for public use at http://www.modemat.epn.edu.ec/#/nereo. It is found that wave spectral statistics offers the possibility to synthesize data while providing a direct and comprehensive view of the local and regional wave conditions.

  3. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.

    PubMed

    Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis

    2017-10-16

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information in the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatial-temporal covariance matrix or by the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem.
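
    The key implementation detail is that any positive-definite space-time autocorrelation can be passed to SVR as a Mercer kernel. The sketch below uses scikit-learn with an assumed separable exponential correlation and invented river samples; it is not the paper's estimated autocorrelation kernel.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)

# Hypothetical river samples: columns are (distance along river km, day);
# y is one water quality variable (e.g., dissolved oxygen), non-uniformly sampled
X = np.column_stack([rng.uniform(0, 20, 120), rng.uniform(0, 365, 120)])
y = (8 - 0.1 * X[:, 0] + 0.5 * np.sin(2 * np.pi * X[:, 1] / 365)
     + rng.normal(0, 0.2, 120))

def autocorr_kernel(A, B):
    """Mercer kernel from a (hypothetical) separable space-time correlation."""
    ds = np.abs(A[:, [0]] - B[:, 0])          # pairwise spatial lags
    dt = np.abs(A[:, [1]] - B[:, 1])          # pairwise temporal lags
    return np.exp(-ds / 5.0) * np.exp(-dt / 60.0)  # assumed correlation lengths

model = SVR(kernel=autocorr_kernel, C=10.0, epsilon=0.1).fit(X, y)
grid = np.column_stack([np.full(50, 10.0), np.linspace(0, 365, 50)])
print(model.predict(grid)[:5])                # interpolated series at km 10
```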

  4. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods

    PubMed Central

    Vizcaíno, Iván P.; Muñoz-Romero, Sergio; Cumbal, Luis H.

    2017-01-01

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information in the quality parameter measurements through Support Vector Regression (SVR) based on Mercer’s kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer’s kernel given by either the Mahalanobis spatial-temporal covariance matrix or by the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem. PMID:29035333

  5. Statistical Properties of Echosignal Obtained from Human Dermis In Vivo

    NASA Astrophysics Data System (ADS)

    Piotrzkowska, Hanna; Litniewski, Jerzy; Nowicki, Andrzej; Szymańska, Elżbieta

    The paper presents the classification of healthy skin and skin lesions (basal cell carcinoma and actinic keratosis) based on the statistical parameters of the envelope of ultrasonic echoes. The envelope was modeled using Rayleigh and non-Rayleigh (K-distribution) statistics. Furthermore, the characteristic parameter of the K-distribution, the effective number of scatterers, was investigated. The attenuation coefficient was also used for skin lesion assessment.
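
    As a hedged illustration of how the effective number of scatterers can be recovered from an envelope, the sketch below applies the standard K-distribution intensity-moment relation ⟨I²⟩/⟨I⟩² = 2(1 + 1/α) to synthetic data (not dermis echoes); this is a generic moment estimator, not necessarily the paper's method:

      import numpy as np

      rng = np.random.default_rng(1)
      alpha_true = 3.0                         # effective number of scatterers
      texture = rng.gamma(alpha_true, 1.0 / alpha_true, 100000)
      speckle = rng.exponential(1.0, 100000)   # Rayleigh-intensity component
      envelope = np.sqrt(texture * speckle)    # K-distributed envelope

      ratio = np.mean(envelope**4) / np.mean(envelope**2) ** 2
      alpha_hat = 1.0 / (ratio / 2.0 - 1.0)    # -> infinity for pure Rayleigh
      print(f"estimated effective number of scatterers ~ {alpha_hat:.2f}")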

  6. Optimal Regulation of Structural Systems with Uncertain Parameters.

    DTIC Science & Technology

    1981-02-02

    been addressed, in part, by Statistical Energy Analysis. Motivated by a concern with high frequency vibration and acoustical-structural... Parameter Systems," AFOSR-TR-79-0753 (May, 1979). 25. R. H. Lyon, Statistical Energy Analysis of Dynamical Systems: Theory and Applications (M.I.T. Press, Cambridge, Mass., 1975). 26. E. E. Ungar, "Statistical Energy Analysis of Vibrating Systems," Trans. ASME, J. Eng. Ind. 89, 626 (1967).

  7. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process, which induce errors in the estimated HOS parameters and hinder precise, real-time sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust to the small-sample-size effect than the HOS parameters, and more accurate in sEMG PDF shape screening applications.
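
    A small Monte Carlo sketch of the motivating problem, plus one kernel-density-based shape statistic as a stand-in (not the paper's CSM statistic), assuming scipy; all distributions and sizes are illustrative assumptions:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      n, trials = 50, 2000

      # Small-sample behavior of a HOS parameter: the sample kurtosis of a
      # lognormal (sEMG-like) PDF is very noisy at n = 50.
      kurt = [stats.kurtosis(rng.lognormal(0.0, 0.5, n)) for _ in range(trials)]
      print(f"sample kurtosis: mean {np.mean(kurt):.2f}, sd {np.std(kurt):.2f}")

      # One functional alternative: an L1 shape distance between a kernel
      # density estimate of the standardized sample and a Gaussian reference.
      raw = rng.lognormal(0.0, 0.5, n)
      sample = (raw - raw.mean()) / raw.std()
      x = np.linspace(-4.0, 4.0, 400)
      kde = stats.gaussian_kde(sample)
      dist = np.sum(np.abs(kde(x) - stats.norm.pdf(x))) * (x[1] - x[0])
      print(f"L1 shape distance to the Gaussian reference: {dist:.3f}")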

  8. Compounding approach for univariate time series with nonstationary variances

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
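
    A minimal sketch of the windowing step described above, on a synthetic locally Gaussian series with slowly varying variance; the series and window length are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(3)
      # Locally Gaussian series with a slowly varying standard deviation
      t = np.arange(20000)
      sigma_t = 1.0 + 0.5 * np.sin(2.0 * np.pi * t / 5000.0)
      series = sigma_t * rng.standard_normal(t.size)

      window = 100
      local_var = series.reshape(-1, window).var(axis=1)

      # The empirical distribution of these local variances is the parameter
      # distribution that gets compounded with the local Gaussian.
      print(f"{local_var.size} windows, "
            f"mean {local_var.mean():.3f}, spread {local_var.std():.3f}")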

  9. Compounding approach for univariate time series with nonstationary variances.

    PubMed

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.

  10. Noise level and MPEG-2 encoder statistics

    NASA Astrophysics Data System (ADS)

    Lee, Jungwoo

    1997-01-01

    Most source material in the movie and broadcasting industries is still in analog film or tape format, which typically contains random noise originating from film, CCD cameras, and tape recording. The performance of the MPEG-2 encoder may be significantly degraded by this noise. It is also affected by the scene type, which includes spatial and temporal activity. The statistical properties of noise originating from cameras and tape players are analyzed, and models for the two types of noise are developed. The relationship between the noise, the scene type, and encoder statistics for a number of MPEG-2 parameters, such as motion vector magnitude, prediction error, and quant scale, is discussed. This analysis is intended to be a tool for designing robust MPEG encoding algorithms such as preprocessing and rate control.

  11. [How to start a neuroimaging study].

    PubMed

    Narumoto, Jin

    2012-06-01

    In order to help researchers understand how to start a neuroimaging study, several tips are described in this paper. These include 1) Choice of an imaging modality, 2) Statistical method, and 3) Interpretation of the results. 1) There are several imaging modalities available in clinical research. Advantages and disadvantages of each modality are described. 2) Statistical Parametric Mapping, which is the most common statistical software for neuroimaging analysis, is described in terms of parameter setting in normalization and level of significance. 3) In the discussion section, the region which shows a significant difference between patients and normal controls should be discussed in relation to the neurophysiology of the disease, making reference to previous reports from neuroimaging studies in normal controls, lesion studies and animal studies. A typical pattern of discussion is described.

  12. A comparative study of hematological parameters of α and β thalassemias in a high prevalence zone: Saudi Arabia

    PubMed Central

    Mehdi, Syed Riaz; Al Dahmash, Badr Abdullah

    2011-01-01

    BACKGROUND AND AIMS: Saudi Arabia falls in the high-prevalence zone of α and β thalassemias. Early screening for the type of thalassemia is essential for further investigations and management. The study was carried out to differentiate the type of thalassemia based on red cell indices and other hematological parameters. MATERIALS AND METHODS: The study was carried out on 991 clinically suspected cases of thalassemia in Riyadh, Saudi Arabia. The hematological parameters were studied on a Coulter STKS. Cellulose acetate hemoglobin electrophoresis and high-performance liquid chromatography (HPLC) were performed on all the blood samples. Gene deletion studies were carried out by the restriction fragment length polymorphism (RFLP) technique using the restriction endonuclease BamHI. STATISTICAL ANALYSIS: Statistical analysis was performed on SPSS version 11.5. RESULTS: The hemoglobin electrophoresis and gene studies revealed that there were 406 (40.96%) and 59 (5.95%) cases of β thalassemia trait and β thalassemia major, respectively, including adults and children. 426 cases of various deletion forms of α thalassemia were seen. Microcytosis was a common feature in β thalassemia trait and in the (-α/-α) and (--/αα) types of α thalassemia. MCH was a more significant distinguishing feature among thalassemias. β thalassemia major and α thalassemia (-α/αα) had almost normal hematological parameters. CONCLUSION: MCV and RBC counts are not statistically significant features for discriminating between α and β thalassemias. There is a need to develop a discrimination index to differentiate between α and β thalassemia traits, along the lines of the discriminatory indices available for distinguishing β thalassemia trait from iron deficiency anemia. PMID:22345994

  13. Large-scale galaxy bias

    NASA Astrophysics Data System (ADS)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  14. Large-scale galaxy bias

    NASA Astrophysics Data System (ADS)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.
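
    A toy numerical illustration of the leading term of the bias expansion, δ_g ≈ b₁ δ_m on quasi-linear scales, under which the galaxy power spectrum is b₁² times the matter power spectrum; the field and the bias value below are synthetic assumptions:

      import numpy as np

      rng = np.random.default_rng(8)
      b1 = 1.8                                   # linear bias parameter
      delta_m = rng.standard_normal(256)         # stand-in Gaussian matter field
      delta_g = b1 * delta_m                     # leading-order local bias

      p_m = np.abs(np.fft.rfft(delta_m)) ** 2    # matter power (toy estimate)
      p_g = np.abs(np.fft.rfft(delta_g)) ** 2    # galaxy power
      print(np.allclose(p_g, b1**2 * p_m))       # True: P_gg = b1^2 * P_mm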

  15. Peri-implant parameters, tumor necrosis factor-alpha, and interleukin-1 beta levels in vaping individuals.

    PubMed

    Al-Aali, Khulud A; Alrabiah, Mohammed; ArRejaie, Aws S; Abduljabbar, Tariq; Vohra, Fahim; Akram, Zohaib

    2018-03-25

    To the authors' knowledge, no study has assessed clinical, radiographic, and immunological peri-implant parameters among individuals vaping electronic cigarettes (e-cigs). This pilot study aimed to compare clinical and radiographic peri-implant parameters and levels of tumor necrosis factor alpha (TNF-α) and interleukin (IL)-1β among individuals vaping e-cigs and never-smokers (NS). Forty-seven individuals vaping e-cigs (group 1) and 45 NS (group 2) were included. Demographic and implant-related data were collected using a structured baseline questionnaire. Peri-implant plaque index (PI), bleeding on probing (BOP), and probing depth (PD) were recorded, and peri-implant bone loss (PIBL) was assessed using standardized digital radiographs. An enzyme-linked immunosorbent assay was used to assess the levels of TNF-α and IL-1β in peri-implant sulcular fluid. Bleeding on probing showed statistically significantly higher values in group 2 patients as compared to group 1 patients (P < .01). Probing depth ≥ 4 mm and PIBL were statistically significantly higher in group 1 patients as compared to group 2 patients (P < .05). Mean concentrations of TNF-α (P < .001) and IL-1β (P < .01) were statistically significantly increased in individuals in group 1 as compared with group 2. Significant positive correlations were found between TNF-α levels and both BOP (P = .024) and PIBL (P = .016), and between IL-1β and PIBL (P = .018), in group 1. Clinical and radiographic peri-implant parameters are compromised among vaping individuals. Increased levels of proinflammatory cytokines in peri-implant sulcular fluid may suggest a greater local inflammatory response to peri-implant inflammation in vaping individuals. © 2018 Wiley Periodicals, Inc.

  16. An Open Label Clinical Trial to Evaluate the Efficacy and Tolerance of a Retinol and Vitamin C Facial Regimen in Women With Mild-to-Moderate Hyperpigmentation and Photodamaged Facial Skin.

    PubMed

    Herndon, James H; Jiang, Lily I; Kononov, Tatiana; Fox, Theresa

    2016-04-01

    A 12-week open-label, single-center clinical usage trial was conducted to determine the effectiveness of a dual-product regimen consisting of a 0.5% retinol treatment and an anti-aging moisturizer with 30% vitamin C in women with mild-to-moderate hyperpigmented and photodamaged facial skin. Clinical grading of several efficacy parameters, tolerability evaluations, subject self-assessment questionnaires, and digital photography were completed at baseline and at weeks 4, 8, and 12. A total of 44 women completed the study. Effective ingredients incorporated into the 0.5% retinol treatment included encapsulated retinol at a concentration of 0.5%, bakuchiol, and Ophiopogon japonicus root extract. The anti-aging moisturizer contained 30% vitamin C in the form of tetrahexyldecyl ascorbate (THD ascorbate), alpha-tocopheryl acetate (vitamin E), and ubiquinone (coenzyme Q10). The facial regimen produced a statistically significant decrease (improvement) in clinical grading scores for all parameters assessed at weeks 8 and 12 when compared with baseline scores; the majority of these parameters were also improved at week 4. The test regimen was well received by the subjects across various inquiries regarding facial skin condition, product efficacy, and product attributes. Several tolerability parameters were assessed, with no statistically significant increase except for dryness: a statistically significant increase in clinical grading scores for dryness on the face occurred at weeks 4 and 8 when compared to baseline scores. The increase in dryness is expected when introducing a retinol product to a facial regimen, and the dryness did not persist to the week 12 time point.

  17. Evaluation of wind field statistics near and inside clouds using a coherent Doppler lidar

    NASA Astrophysics Data System (ADS)

    Lottman, Brian Todd

    1998-09-01

    This work proposes advanced techniques for measuring spatial wind field statistics near and inside clouds using a vertically pointing, solid-state coherent Doppler lidar on a fixed ground-based platform. The coherent Doppler lidar is an ideal instrument for high spatial and temporal resolution velocity estimates. The basic parameters of the lidar are discussed, including a complete statistical description of the Doppler lidar signal. This description is extended to cases with simple functional forms for aerosol backscatter and velocity. An estimate for the mean velocity over a sensing volume is produced by estimating the mean spectra. There are many traditional spectral estimators, which are useful for conditions with slowly varying velocity and backscatter. A new class of estimators, termed novel estimators, is introduced that produces reliable velocity estimates for conditions with large variations in aerosol backscatter and velocity with range, such as cloud conditions. The performance of traditional and novel estimators is computed for a variety of deterministic atmospheric conditions using computer-simulated data. Wind field statistics are produced from actual data for a cloud deck and for multilayer clouds. Unique results include detection of possible spectral signatures for rain, estimates of the structure function inside a cloud deck, reliable velocity estimation techniques near and inside thin clouds, and estimates of simple wind field statistics between cloud layers.
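
    A hedged sketch of a basic spectral mean-velocity estimate for a single range gate (the periodogram peak converted via v = λf/2); the signal parameters are synthetic assumptions, not instrument values, and the paper's novel estimators are not reproduced here:

      import numpy as np

      rng = np.random.default_rng(4)
      fs = 50e6                          # complex sample rate [Hz]
      wavelength = 2.05e-6               # solid-state lidar wavelength [m]
      n = 256                            # samples in one range gate
      v_true = 5.0                       # line-of-sight velocity [m/s]

      t = np.arange(n) / fs
      f_dopp = 2.0 * v_true / wavelength
      signal = np.exp(2j * np.pi * f_dopp * t) + 0.5 * (
          rng.standard_normal(n) + 1j * rng.standard_normal(n))

      spec = np.abs(np.fft.fft(signal)) ** 2          # periodogram
      freqs = np.fft.fftfreq(n, 1.0 / fs)
      v_hat = wavelength * freqs[np.argmax(spec)] / 2.0
      print(f"v_hat = {v_hat:.2f} m/s")               # ~5 m/s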

  18. Statistical models for the analysis and design of digital polymerase chain (dPCR) experiments

    USGS Publications Warehouse

    Dorazio, Robert; Hunter, Margaret

    2015-01-01

    Statistical methods for the analysis and design of experiments using digital PCR (dPCR) have received only limited attention and have been misused in many instances. To address this issue and to provide a more general approach to the analysis of dPCR data, we describe a class of statistical models for the analysis and design of experiments that require quantification of nucleic acids. These models are mathematically equivalent to generalized linear models of binomial responses that include a complementary, log–log link function and an offset that is dependent on the dPCR partition volume. These models are both versatile and easy to fit using conventional statistical software. Covariates can be used to specify different sources of variation in nucleic acid concentration, and a model’s parameters can be used to quantify the effects of these covariates. For purposes of illustration, we analyzed dPCR data from different types of experiments, including serial dilution, evaluation of copy number variation, and quantification of gene expression. We also showed how these models can be used to help design dPCR experiments, as in selection of sample sizes needed to achieve desired levels of precision in estimates of nucleic acid concentration or to detect differences in concentration among treatments with prescribed levels of statistical power.
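
    A minimal sketch of this model class, assuming a recent statsmodels: a binomial GLM with complementary log-log link and a log(partition volume) offset, whose exponentiated intercept is the concentration estimate; the counts and volume are illustrative assumptions:

      import numpy as np
      import statsmodels.api as sm

      # One dPCR reaction: positives out of total partitions, and the
      # partition volume (all values are illustrative assumptions).
      positives, partitions = 4500, 20000
      volume_ul = 0.85e-3                      # partition volume [uL]

      y = np.array([[positives, partitions - positives]])  # (success, failure)
      X = np.ones((1, 1))                                   # intercept only
      offset = np.log(np.array([volume_ul]))

      # cloglog(p) = log(lambda) + log(volume): a partition is positive
      # unless it received zero copies (Poisson occupancy).
      model = sm.GLM(y, X,
                     family=sm.families.Binomial(link=sm.families.links.CLogLog()),
                     offset=offset)
      fit = model.fit()
      lam = np.exp(fit.params[0])              # copies per microliter
      print(f"estimated concentration ~ {lam:.0f} copies/uL")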

  19. Statistical Models for the Analysis and Design of Digital Polymerase Chain Reaction (dPCR) Experiments.

    PubMed

    Dorazio, Robert M; Hunter, Margaret E

    2015-11-03

    Statistical methods for the analysis and design of experiments using digital PCR (dPCR) have received only limited attention and have been misused in many instances. To address this issue and to provide a more general approach to the analysis of dPCR data, we describe a class of statistical models for the analysis and design of experiments that require quantification of nucleic acids. These models are mathematically equivalent to generalized linear models of binomial responses that include a complementary, log-log link function and an offset that is dependent on the dPCR partition volume. These models are both versatile and easy to fit using conventional statistical software. Covariates can be used to specify different sources of variation in nucleic acid concentration, and a model's parameters can be used to quantify the effects of these covariates. For purposes of illustration, we analyzed dPCR data from different types of experiments, including serial dilution, evaluation of copy number variation, and quantification of gene expression. We also showed how these models can be used to help design dPCR experiments, as in selection of sample sizes needed to achieve desired levels of precision in estimates of nucleic acid concentration or to detect differences in concentration among treatments with prescribed levels of statistical power.

  20. Flood characteristics of urban watersheds in the United States

    USGS Publications Warehouse

    Sauer, Vernon B.; Thomas, W.O.; Stricker, V.A.; Wilson, K.V.

    1983-01-01

    A nationwide study of flood magnitude and frequency in urban areas was made for the purpose of reviewing available literature, compiling an urban flood data base, and developing methods of estimating urban floodflow characteristics in ungaged areas. The literature review contains synopses of 128 recent publications related to urban floodflow. A data base of 269 gaged basins in 56 cities and 31 States, including Hawaii, contains a wide variety of topographic and climatic characteristics, land-use variables, indices of urbanization, and flood-frequency estimates. Three sets of regression equations were developed to estimate flood discharges for ungaged sites for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years. Two sets of regression equations are based on seven independent parameters and the third is based on three independent parameters. The only difference in the two sets of seven-parameter equations is the use of basin lag time in one and lake and reservoir storage in the other. Of primary importance in these equations is an independent estimate of the equivalent rural discharge for the ungaged basin. The equations adjust the equivalent rural discharge to an urban condition. The primary adjustment factor, or index of urbanization, is the basin development factor, a measure of the extent of development of the drainage system in the basin. This measure includes evaluations of storm drains (sewers), channel improvements, and curb-and-gutter streets. The basin development factor is statistically very significant and offers a simple and effective way of accounting for drainage development and runoff response in urban areas. Percentage of impervious area is also included in the seven-parameter equations as an additional measure of urbanization and apparently accounts for increased runoff volumes. This factor is not highly significant for large floods, which supports the generally held concept that imperviousness is not a dominant factor when soils become more saturated during large storms. Other parameters in the seven-parameter equations include drainage area size, channel slope, rainfall intensity, lake and reservoir storage, and basin lag time. These factors are all statistically significant and provide logical indices of basin conditions. The three-parameter equations include only the three most significant parameters: rural discharge, basin-development factor, and drainage area size. All three sets of regression equations provide unbiased estimates of urban flood frequency. The seven-parameter regression equations without basin lag time have average standard errors of regression varying from ±37 percent for the 5-year flood to ±44 percent for the 100-year flood and ±49 percent for the 500-year flood. The other two sets of regression equations have similar accuracy. Several tests for bias, sensitivity, and hydrologic consistency are included which support the conclusion that the equations are useful throughout the United States. All estimating equations were developed from data collected on drainage basins where temporary in-channel storage, due to highway embankments, was not significant. Consequently, estimates made with these equations do not account for the reducing effect of this temporary detention storage.

  1. Evaluation of bond strength of resin cements using different general-purpose statistical software packages for two-parameter Weibull statistics.

    PubMed

    Roos, Malgorzata; Stawarczyk, Bogna

    2012-07-01

    This study evaluated and compared Weibull parameters of resin bond strength values using six different general-purpose statistical software packages for the two-parameter Weibull distribution. Two hundred human teeth were randomly divided into 4 groups (n=50), prepared and bonded on dentin according to the manufacturers' instructions using the following resin cements: (i) Variolink (VAN, conventional resin cement), (ii) Panavia21 (PAN, conventional resin cement), (iii) RelyX Unicem (RXU, self-adhesive resin cement) and (iv) G-Cem (GCM, self-adhesive resin cement). Subsequently, all specimens were stored in water for 24h at 37°C. Shear bond strength was measured and the data were analyzed using the Anderson-Darling goodness-of-fit test (MINITAB 16) and two-parameter Weibull statistics with the following statistical software packages: Excel 2011, SPSS 19, MINITAB 16, R 2.12.1, SAS 9.1.3, and STATA 11.2 (p≤0.05). Additionally, the three-parameter Weibull was fitted using MINITAB 16. Two-parameter Weibull statistics calculated with MINITAB and STATA can be compared using an omnibus test and 95% CIs. In SAS, only 95% CIs were directly obtained from the output. R provided no estimates of 95% CIs. In both SAS and R, the global comparison of the characteristic bond strength among groups is provided by means of Weibull regression. EXCEL and SPSS provided no default information about 95% CIs and no significance test for the comparison of Weibull parameters among the groups. In summary, the conventional resin cement VAN showed the highest Weibull modulus and characteristic bond strength. There are discrepancies in the Weibull statistics depending on the software package and the estimation method, and the information content in the default output provided by the software packages differs to a very high extent. Copyright © 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
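
    For comparison, a two-parameter Weibull MLE is a one-liner in Python's scipy (not one of the six packages compared in the study); the bond-strength data below are synthetic assumptions:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      bond_strength = stats.weibull_min.rvs(c=8.0, scale=14.0, size=50,
                                            random_state=rng)   # MPa, synthetic

      # floc=0 fixes the location parameter, giving the two-parameter fit
      m, _, s0 = stats.weibull_min.fit(bond_strength, floc=0)
      print(f"Weibull modulus m ~ {m:.2f}, "
            f"characteristic bond strength ~ {s0:.2f} MPa")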

  2. Income inequality, poverty, and population health: evidence from recent data for the United States.

    PubMed

    Ram, Rati

    2005-12-01

    In this study, state-level US data for the years 2000 and 1990 are used to provide additional evidence on the roles of income inequality and poverty in population health. Five main points are noted. First, contrary to the suggestion made in several recent studies, the income inequality parameter is observed to be quite robust and carries statistical significance in mortality equations estimated from several observation sets and a fairly wide variety of specificational choices. Second, the evidence does not indicate that significance of income inequality is lost when education variables are included. Third, similarly, the income inequality parameter shows significance when a race variable is added, and also when both race and urbanization terms are entered. Fourth, while poverty is seen to have some mortality-increasing consequence, the role of income inequality appears stronger. Fifth, income inequality retains statistical significance when a quadratic income term is added and also if the log-log version of a fairly inclusive model is estimated. I therefore suggest that the recent skepticism articulated by several scholars in regard to the robustness of the income inequality parameters in mortality equations estimated from the US data should be reconsidered.

  3. Observability of ionospheric space-time structure with ISR: A simulation study

    NASA Astrophysics Data System (ADS)

    Swoboda, John; Semeter, Joshua; Zettergren, Matthew; Erickson, Philip J.

    2017-02-01

    The sources of error from electronically steerable array (ESA) incoherent scatter radar (ISR) systems are investigated both theoretically and with use of an open-source ISR simulator, developed by the authors, called Simulator for ISR (SimISR). The main sources of error incorporated in the simulator include statistical uncertainty, which arises due to nature of the measurement mechanism and the inherent space-time ambiguity from the sensor. SimISR can take a field of plasma parameters, parameterized by time and space, and create simulated ISR data at the scattered electric field (i.e., complex receiver voltage) level, subsequently processing these data to show possible reconstructions of the original parameter field. To demonstrate general utility, we show a number of simulation examples, with two cases using data from a self-consistent multifluid transport model. Results highlight the significant influence of the forward model of the ISR process and the resulting statistical uncertainty on plasma parameter measurements and the core experiment design trade-offs that must be made when planning observations. These conclusions further underscore the utility of this class of measurement simulator as a design tool for more optimal experiment design efforts using flexible ESA class ISR systems.

  4. Quantitative analysis of spatial variability of geotechnical parameters

    NASA Astrophysics Data System (ADS)

    Fang, Xing

    2018-04-01

    Geotechnical parameters are the basic parameters of geotechnical engineering design, and they have strong regional characteristics. The spatial variability of geotechnical parameters has also come to be recognized, and it is gradually being introduced into the reliability analysis of geotechnical engineering. Based on the statistical theory of geostatistical spatial information, the spatial variability of geotechnical parameters is quantitatively analyzed, and the correlation coefficients between geotechnical parameters are calculated. A residential district surveyed by the Tianjin Survey Institute was selected as the research object; the area contains 68 boreholes and 9 mechanically stratified layers. The parameters are water content, natural unit weight, void ratio, liquid limit, plasticity index, liquidity index, compressibility coefficient, compressive modulus, internal friction angle, cohesion, and the SP index. According to the principle of statistical correlation, the correlation coefficients of the geotechnical parameters are calculated, and from these coefficients the pattern of variation of the geotechnical parameters is obtained.

  5. [Dexpanthenol nasal spray as an effective therapeutic principle for treatment of rhinitis sicca anterior].

    PubMed

    Kehrl, W; Sonnemann, U

    1998-09-01

    Controlled clinical studies on the medical treatment of rhinitis sicca anterior have not yet been published; therapy recommendations are based on experience rather than on the results of controlled clinical studies. The aim of this study was to examine the efficacy and tolerance of a new form of application of dexpanthenol in physiologic saline solution (Nasicur). A randomized comparison of parallel groups was performed: one group was treated with the nasal spray while the control group received a placebo. Score-based assessments of nasal breathing resistance and the extent of crust formation were defined as the target parameters. Statistical analysis was carried out using the Wilcoxon test at α ≤ 0.05. Forty-eight outpatients diagnosed with rhinitis sicca anterior were included in this study; twenty-four received the medication, and 29 were treated with a placebo. The superiority of the dexpanthenol nasal spray over the placebo medication was demonstrated for both target parameters as clinically relevant and statistically significant. For the other treatment outcome parameters, the placebo spray also showed clinical improvement, and the dexpanthenol nasal spray showed no statistically significant difference from placebo. The clinically proven efficacy is underlined by the good tolerance of both treatments, which was validated by the objective rhinoscopy findings. Good compliance was confirmed. The result of this controlled clinical study confirms that the dexpanthenol nasal spray is an effective medicinal treatment for rhinitis sicca anterior and is more effective than common medications.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Richard O.

    The application of statistics to environmental pollution monitoring studies requires a knowledge of statistical analysis methods particularly well suited to pollution data. This book fills that need by providing sampling plans, statistical tests, parameter estimation procedure techniques, and references to pertinent publications. Most of the statistical techniques are relatively simple, and examples, exercises, and case studies are provided to illustrate procedures. The book is logically divided into three parts. Chapters 1, 2, and 3 are introductory chapters. Chapters 4 through 10 discuss field sampling designs and Chapters 11 through 18 deal with a broad range of statistical analysis procedures. Some statistical techniques given here are not commonly seen in statistics books. For example, see methods for handling correlated data (Sections 4.5 and 11.12), for detecting hot spots (Chapter 10), and for estimating a confidence interval for the mean of a lognormal distribution (Section 13.2). Also, Appendix B lists a computer code that estimates and tests for trends over time at one or more monitoring stations using nonparametric methods (Chapters 16 and 17). Unfortunately, some important topics could not be included because of their complexity and the need to limit the length of the book. For example, only brief mention could be made of time series analysis using Box-Jenkins methods and of kriging techniques for estimating spatial and spatial-time patterns of pollution, although multiple references on these topics are provided. Also, no discussion of methods for assessing risks from environmental pollution could be included.
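
    As an illustration of the kind of nonparametric trend test covered in the book's Chapters 16 and 17, here is a hedged sketch of the Mann-Kendall S statistic with its large-sample normal approximation (ties ignored); the monitoring series is a synthetic assumption:

      import numpy as np
      from scipy import stats

      def mann_kendall(x):
          """Mann-Kendall S statistic and two-sided normal-approx. p-value."""
          n = len(x)
          s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
          var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance with no ties
          z = (s - np.sign(s)) / np.sqrt(var_s)         # continuity correction
          return s, 2 * stats.norm.sf(abs(z))

      rng = np.random.default_rng(6)
      conc = 0.02 * np.arange(40) + rng.standard_normal(40)  # trend + noise
      s, p = mann_kendall(conc)
      print(f"S = {s}, p = {p:.4f}")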

  7. A statistical methodology for estimating transport parameters: Theory and applications to one-dimensional advective-dispersive systems

    USGS Publications Warehouse

    Wagner, Brian J.; Gorelick, Steven M.

    1986-01-01

    A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for dispersion coefficient and zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
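
    An illustrative sketch of the regression idea, assuming scipy and substituting the closed-form one-term Ogata-Banks solution for the paper's finite-difference transport model; all data and parameter values are synthetic assumptions:

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erfc

      x_obs = 50.0                             # observation point [cm]

      def c_model(t, v, D):
          # One-term Ogata-Banks solution, continuous injection, C/C0
          return 0.5 * erfc((x_obs - v * t) / (2.0 * np.sqrt(D * t)))

      rng = np.random.default_rng(7)
      t = np.linspace(5.0, 200.0, 40)          # sampling times [h]
      c_obs = c_model(t, 0.5, 2.0) + 0.02 * rng.standard_normal(t.size)

      # Weighted nonlinear least squares for velocity v and dispersion D
      popt, pcov = curve_fit(c_model, t, c_obs, p0=[0.3, 1.0],
                             bounds=([0.01, 0.01], [5.0, 50.0]),
                             sigma=np.full(t.size, 0.02), absolute_sigma=True)
      perr = np.sqrt(np.diag(pcov))            # linearized standard errors
      print(f"v = {popt[0]:.3f} +/- {perr[0]:.3f} cm/h, "
            f"D = {popt[1]:.3f} +/- {perr[1]:.3f} cm^2/h")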

  8. Variations in the Parameters of Background Seismic Noise during the Preparation Stages of Strong Earthquakes in the Kamchatka Region

    NASA Astrophysics Data System (ADS)

    Kasimova, V. A.; Kopylova, G. N.; Lyubushin, A. A.

    2018-03-01

    The results of the long (2011-2016) investigation of background seismic noise (BSN) in Kamchatka by the method suggested by Doct. Sci. (Phys.-Math.) A.A. Lyubushin with the use of the data from the network of broadband seismic stations of the Geophysical Survey of the Russian Academy of Sciences are presented. For characterizing the BSN field and its variability, continuous time series of the statistical parameters of the multifractal singularity spectra and wavelet expansion calculated from the records at each station are used. These parameters include the generalized Hurst exponent α*, singularity spectrum support width Δα, wavelet spectral exponent β, minimal normalized entropy of wavelet coefficients En, and spectral measure of their coherent behavior. The peculiarities in the spatiotemporal distribution of the BSN parameters as a probable response to the earthquakes with Mw = 6.8-8.3 that occurred in Kamchatka in 2013 and 2016 are considered. It is established that these seismic events were preceded by regular variations in the BSN parameters, which lasted for a few months and consisted in the reduction of the median and mean α*, Δα, and β values estimated over all the stations and in the increase of the En values. Based on the increase in the spectral measure of the coherent behavior of the four-variate time series of the median and mean values of the considered statistics, the effect of the enhancement of the synchronism in the joint (collective) behavior of these parameters during a certain period prior to the mantle earthquake in the Sea of Okhotsk (May 24, 2013, Mw = 8.3) is diagnosed. The procedures for revealing the precursory effects in the variations of the BSN parameters are described and the examples of these effects are presented.
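
    A hedged sketch of one ingredient above: the normalized entropy of wavelet coefficients for a single record and a single Daubechies basis, assuming the pywt package (the method's minimization over a family of bases is omitted, and the signal is a synthetic stand-in for a seismic noise record):

      import numpy as np
      import pywt

      rng = np.random.default_rng(9)
      signal = rng.standard_normal(4096)       # stand-in for a noise record

      # Normalized entropy of squared wavelet coefficients; values near 1
      # indicate energy spread evenly across coefficients.
      coeffs = np.concatenate(pywt.wavedec(signal, "db4", level=5))
      p = coeffs**2 / np.sum(coeffs**2)
      En = -np.sum(p * np.log(p + 1e-300)) / np.log(p.size)
      print(f"normalized wavelet entropy En = {En:.3f}")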

  9. Application of nonlinear least-squares regression to ground-water flow modeling, west-central Florida

    USGS Publications Warehouse

    Yobbi, D.K.

    2000-01-01

    A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal parameter estimates range from about 140 times greater than to about 0.01 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.

  10. Modeling the Risk of Radiation-Induced Acute Esophagitis for Combined Washington University and RTOG Trial 93-11 Lung Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Ellen X.; Bradley, Jeffrey D.; El Naqa, Issam

    2012-04-01

    Purpose: To construct a maximally predictive model of the risk of severe acute esophagitis (AE) for patients who receive definitive radiation therapy (RT) for non-small-cell lung cancer. Methods and Materials: The dataset includes Washington University and RTOG 93-11 clinical trial data (events/patients: 120/374, WUSTL = 101/237, RTOG9311 = 19/137). Statistical model building was performed based on dosimetric and clinical parameters (patient age, sex, weight loss, pretreatment chemotherapy, concurrent chemotherapy, fraction size). A wide range of dose-volume parameters were extracted from dearchived treatment plans, including Dx, Vx, MOHx (mean of hottest x% volume), MOCx (mean of coldest x% volume), and gEUD (generalized equivalent uniform dose) values. Results: The most significant single parameters for predicting acute esophagitis (RTOG Grade 2 or greater) were MOH85, mean esophagus dose (MED), and V30. A superior-inferior weighted dose-center position was derived but not found to be significant. Fraction size was found to be significant on univariate logistic analysis (Spearman R = 0.421, p < 0.00001) but not multivariate logistic modeling. Cross-validation model building was used to determine that an optimal model size needed only two parameters (MOH85 and concurrent chemotherapy, robustly selected on bootstrap model-rebuilding). Mean esophagus dose (MED) is preferred instead of MOH85, as it gives nearly the same statistical performance and is easier to compute. AE risk is given as a logistic function of (0.0688 × MED + 1.50 × ConChemo − 3.13), where MED is in Gy and ConChemo is either 1 (yes) if concurrent chemotherapy was given, or 0 (no). This model correlates to the observed risk of AE with a Spearman coefficient of 0.629 (p < 0.000001). Conclusions: Multivariate statistical model building with cross-validation suggests that a two-variable logistic model based on mean dose and the use of concurrent chemotherapy robustly predicts acute esophagitis risk in combined-data WUSTL and RTOG 93-11 trial datasets.
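
    The final two-variable model quoted above reduces to a one-line function; the sketch below simply evaluates it, with illustrative example inputs:

      import math

      def acute_esophagitis_risk(med_gy: float, con_chemo: int) -> float:
          """Logistic risk of RTOG grade >= 2 acute esophagitis."""
          z = 0.0688 * med_gy + 1.50 * con_chemo - 3.13
          return 1.0 / (1.0 + math.exp(-z))

      # Illustrative inputs: mean esophagus dose 25 Gy, concurrent chemo given
      print(f"{acute_esophagitis_risk(25.0, 1):.2f}")   # ~0.52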

  11. Definition of a simple statistical parameter for the quantification of orientation in two dimensions: application to cells on grooves of nanometric depths.

    PubMed

    Davidson, P; Bigerelle, M; Bounichane, B; Giazzon, M; Anselme, K

    2010-07-01

    Contact guidance is generally evaluated by measuring the orientation angle of cells. However, statistical analyses are rarely performed on these parameters. Here we propose a statistical analysis based on a new parameter σ, the orientation parameter, defined as the dispersion of the distribution of orientation angles. This parameter can be used to obtain a truncated Gaussian distribution that models the distribution of the data between −90° and +90°. We established a threshold value of the orientation parameter below which the data can be considered to be aligned within a 95% confidence interval. Applying our orientation parameter to cells on grooves and using a modelling approach, we established the relationship σ = α_meas + (52° − α_meas)/(1 + C_GDE·R), where the parameter C_GDE represents the sensitivity of cells to groove depth, and R the groove depth. The values of C_GDE obtained allowed us to compare the contact guidance of human osteoprogenitor (HOP) cells across experiments involving different groove depths, times in culture and inoculation densities. We demonstrate that HOP cells are able to identify and respond to the presence of grooves 30, 100, 200 and 500 nm deep and that the deeper the grooves, the higher the cell orientation. The evolution of the sensitivity (C_GDE) with culture time is roughly sigmoidal with an asymptote, which is a function of inoculation density. The σ parameter defined here is a universal parameter that can be applied to all orientation measurements and does not require a mathematical background or knowledge of directional statistics. Copyright 2010 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
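
    The fitted relationship above is easy to evaluate directly; in the sketch below, the α_meas and C_GDE values are illustrative assumptions, not the paper's estimates:

      def orientation_sigma(depth_nm, alpha_meas=5.0, c_gde=0.02):
          """Predicted dispersion of orientation angles vs. groove depth R."""
          return alpha_meas + (52.0 - alpha_meas) / (1.0 + c_gde * depth_nm)

      for depth in (30, 100, 200, 500):
          print(f"{depth:>3d} nm -> sigma = {orientation_sigma(depth):.1f} deg")
      # Deeper grooves give a smaller sigma, i.e., stronger cell alignment;
      # sigma tends to 52 deg (random orientation) as the depth goes to zero.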

  12. Statistical mechanics in the context of special relativity.

    PubMed

    Kaniadakis, G

    2002-11-01

    In Ref. [Physica A 296, 405 (2001)], starting from the one-parameter deformation of the exponential function exp_κ(x) = (√(1 + κ²x²) + κx)^(1/κ), a statistical mechanics has been constructed which reduces to the ordinary Boltzmann-Gibbs statistical mechanics as the deformation parameter κ approaches zero. The distribution f = exp_κ(−βE + βμ) obtained within this statistical mechanics shows a power-law tail and depends on the nonspecified parameter β, containing all the information about the temperature of the system. On the other hand, the entropic form S_κ = ∫ d³p (c_κ f^(1+κ) + c_{−κ} f^(1−κ)), which after maximization produces the distribution f and reduces to the standard Boltzmann-Shannon entropy S₀ as κ → 0, contains the coefficient c_κ whose expression involves, beside the Boltzmann constant, another nonspecified parameter α. In the present effort we show that S_κ is the unique existing entropy obtained by a continuous deformation of S₀ that preserves unaltered its fundamental properties of concavity, additivity, and extensivity. These properties of S_κ permit us to determine unequivocally the values of the above-mentioned parameters β and α. Subsequently, we explain the origin of the deformation mechanism introduced by κ and show that this deformation emerges naturally within Einstein's special relativity. Furthermore, we extend the theory in order to treat statistical systems in a time-dependent and relativistic context. Then, we show that it is possible to determine, in a self-consistent scheme within special relativity, the value of the free parameter κ, which turns out to depend on the light speed c and reduces to zero as c → ∞, recovering in this way the ordinary statistical mechanics and thermodynamics. The statistical mechanics presented here does not contain free parameters, preserves unaltered the mathematical and epistemological structure of the ordinary statistical mechanics, and is suitable to describe a very large class of experimentally observed phenomena in low- and high-energy physics and in natural, economic, and social sciences. Finally, in order to test the correctness and predictability of the theory, as a working example we consider the cosmic ray spectrum, which spans 13 decades in energy and 33 decades in flux, finding high-quality agreement between our predictions and observed data.
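
    A small numerical check of the deformed exponential defined above, verifying that exp_κ(x) approaches the ordinary exp(x) as κ → 0:

      import numpy as np

      def exp_kappa(x, kappa):
          # (sqrt(1 + kappa^2 x^2) + kappa x)^(1/kappa); ordinary exp as kappa -> 0
          return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

      x = np.linspace(-2.0, 5.0, 8)
      for kappa in (0.5, 0.1, 0.01):
          err = np.max(np.abs(exp_kappa(x, kappa) - np.exp(x)) / np.exp(x))
          print(f"kappa = {kappa}: max relative deviation from exp = {err:.3g}")
      # For large x the deformed exponential behaves as a power law,
      # exp_kappa(x) ~ (2 kappa x)**(1/kappa), which produces the heavy tails.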

  13. Statistical metrics for the characterization of karst network geometry and topology

    NASA Astrophysics Data System (ADS)

    Collon, Pauline; Bernasconi, David; Vuilleumier, Cécile; Renard, Philippe

    2017-04-01

    Statistical metrics can be used to analyse the morphology of natural or simulated karst systems; they allow describing, comparing, and quantifying their geometry and topology. In this paper, we present and discuss a set of such metrics. We study their properties and their usefulness based on a set of more than 30 karstic networks mapped by speleologists. The data set includes some of the largest explored cave systems in the world and represents a broad range of geological and speleogenetic conditions, allowing us to test the proposed metrics, their variability, and their usefulness for the discrimination of different morphologies. All the proposed metrics require that the topographical surveys of the caves be first converted to graphs consisting of vertices and edges. This data preprocessing includes several quality-check operations and some corrections to ensure that the karst is represented as accurately as possible. The statistical parameters relating to the geometry of the system are then directly computed on the graphs, while the topological parameters are computed on a reduced version of the network focusing only on its structure. Among the tested metrics, we include some that were previously proposed, such as tortuosity or Howard's coefficients. We also investigate the possibility of using new metrics derived from graph theory. In total, 21 metrics are introduced, discussed in detail, and compared on the basis of our data set. This work shows that orientation analysis and, in particular, the entropy of the orientation data can help to detect the existence of inception features. The statistics on branch length are useful to describe the extension of the conduits within the network. Rather surprisingly, the tortuosity does not vary very significantly; it could be heavily influenced by the survey methodology. The degree of interconnectivity of the network, related to the presence of maze patterns, can be measured using different metrics such as Howard's parameters, the global cyclic coefficient, or the average vertex degree. The average vertex degree of the reduced graph proved to be the most useful, as it is simple to compute, it properly discriminates the interconnected systems (mazes) from the acyclic ones (tree-like structures), and it permits us to classify the acyclic systems as a function of the total number of branches. This topological information is completed by three parameters, allowing us to refine the description. The correlation of vertex degree is rather simple to obtain; it is systematically positive on all studied data sets, indicating a predominance of assortative networks among karst systems. The average shortest path length is related to transport efficiency and is shown to be mainly correlated to the size of the network. Finally, central point dominance allows us to identify the presence of a centralized organization.
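
    A hedged sketch of a few of these topological metrics, assuming networkx, on a toy graph standing in for a reduced karst network (vertices as junctions and ends, edges as conduits):

      import networkx as nx

      # Toy reduced network: a tree-like part plus one loop (maze-like part)
      G = nx.Graph([(0, 1), (1, 2), (1, 3), (3, 4), (4, 5), (5, 1)])

      avg_degree = 2 * G.number_of_edges() / G.number_of_nodes()
      aspl = nx.average_shortest_path_length(G)
      assort = nx.degree_assortativity_coefficient(G)

      # Central point dominance from betweenness centrality
      bc = nx.betweenness_centrality(G)
      b_max = max(bc.values())
      cpd = sum(b_max - b for b in bc.values()) / (len(bc) - 1)

      print(f"avg vertex degree = {avg_degree:.2f}, avg shortest path = {aspl:.2f}")
      print(f"assortativity = {assort:.2f}, central point dominance = {cpd:.2f}")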

  14. Comparison of different parameters for recording sagittal maxillo-mandibular relation using natural head posture: A cephalometric study

    PubMed Central

    Singh, Ashish Kumar; Ganeshkar, Sanjay V.; Mehrotra, Praveen; Bhagchandani, Jitendra

    2013-01-01

    Background: Commonly used parameters for anteroposterior assessment of the jaw relationship include several analyses such as ANB, NA-Pog, AB-NPog, Wits appraisal, Harvold's unit length difference, and the Beta angle. Several parameters (with different ranges and values) account for the sagittal relation, yet the published literature on comparisons and correlations of these measurements is scarce. The objective of this study was therefore to correlate these values in subjects of Indian origin. Materials and Methods: The sample consisted of fifty adult individuals (age group 18-26 years) with an equal number of males and females. The selection criteria included subjects with no previous history of orthodontic and/or orthognathic surgical treatment; an orthognathic facial profile; Angle's Class I molar relation; a clinical Frankfort mandibular plane angle (FMA) of 30±5°; and no gross facial asymmetry. The cephalograms were taken in natural head position (NHP). Seven sagittal skeletal parameters were measured in the cephalograms and subjected to statistical evaluation, with the Wits reading on the true horizontal as reference. A correlation coefficient analysis was done to assess the significance of association between these variables. Results: The ANB angle showed a statistically significant correlation for the total sample, though the values were insignificant for the individual groups and therefore may not be very accurate. Wits appraisal was seen to have a significant correlation only in the female sample group. Conclusions: If cephalograms cannot be recorded in a NHP, then the best indicator for recording the A-P skeletal dimension would be the angle AB-NPog, followed by Harvold's unit length difference. However, considering biologic variability, more than one reading should necessarily be used to verify the same. PMID:24987638

  15. Development of a design space and predictive statistical model for capsule filling of low-fill-weight inhalation products.

    PubMed

    Faulhammer, E; Llusa, M; Wahl, P R; Paudel, A; Lawrence, S; Biserni, S; Calzolari, V; Khinast, J G

    2016-01-01

    The objectives of this study were to develop a predictive statistical model for low-fill-weight capsule filling of inhalation products with dosator nozzles via the quality by design (QbD) approach and, based on that, to create refined models that include quadratic terms for significant parameters. Various controllable process parameters and uncontrolled material attributes of 12 powders were initially screened using a linear model with partial least squares (PLS) regression to determine their effect on the critical quality attributes (CQAs; fill weight and weight variability). After identifying the critical material attributes (CMAs) and critical process parameters (CPPs) that influenced the CQAs, model refinement was performed to study whether interactions or quadratic terms influence the model. Based on the assessment of the effects of the CPPs and CMAs on fill weight and weight variability for low-fill-weight inhalation products, we developed an excellent linear predictive model for fill weight (R² = 0.96, Q² = 0.96 for powders with good flow properties and R² = 0.94, Q² = 0.93 for cohesive powders) and a model that provides a good approximation of the fill weight variability for each powder group. We validated the model, established a design space for the performance of different types of inhalation-grade lactose in low-fill-weight capsule filling, and successfully used the CMAs and CPPs to predict the fill weight of powders that were not included in the development set.
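
    A minimal sketch of the PLS screening step, assuming scikit-learn; the factor names and data are illustrative assumptions, not the study's parameters:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(10)
      factors = ["dosing_chamber_length", "powder_layer_height",
                 "filling_speed", "particle_d50", "flowability_ffc"]
      X = rng.standard_normal((48, len(factors)))   # scaled settings/attributes
      fill_weight = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * rng.standard_normal(48)

      pls = PLSRegression(n_components=2).fit(X, fill_weight)
      for name, coef in zip(factors, np.ravel(pls.coef_)):
          print(f"{name:>22s}: {coef:+.2f}")        # rank influence by magnitude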

  16. Multiple Myeloma Index for Risk of Infection.

    PubMed

    Valkovic, T; Gacic, V; Nacinovic-Duletic, A

    2018-01-01

    Based on our earlier research into the main characteristics and risk factors for infections in hospitalized patients with multiple myeloma, we created the numerical Multiple Myeloma Index for Risk of Infection (MMIRI) to predict infection in myeloma patients. The factors included that could influence the pathogenesis and incidence of infections were sex, performance status, Durie-Salmon stage of disease, International Staging System, serum creatinine level, immune paresis, neutropenia, serum ferritin level, the presence of any catheters, disease duration, stable/progressive disease, and type of therapy. For each of these parameters, the strength of association with infection was statistically estimated, and a specific number of points, proportional to the strength of the association, was assigned to each parameter. When designing the MMIRI, we included only those parameters that we determined were pathophysiologically associated with infection. After further statistical analysis, we identified an optimal cutoff score of 6 or above as indicating a significant risk for infection, with a sensitivity of 93.2% and a specificity of 80.2%. In a retrospective receiver operating characteristic analysis, the scoring system showed an area under the curve of 0.918. The potential value of the MMIRI lies in the possibility of identifying those patients who would benefit from the prophylactic administration of antibiotics and other anti-infective measures while minimizing the contribution to antibiotic resistance related to the overuse of these drugs. As far as we know, this index represents the first attempt to create such an instrument for predicting the occurrence of infections in myeloma patients.

  17. Sleep respiratory parameters in children with idiopathic epilepsy: A cross-sectional study.

    PubMed

    Gogou, Maria; Haidopoulou, Katerina; Eboriadou, Maria; Pavlidou, Efterpi; Hatzistylianou, Maria; Pavlou, Evaggelos

    2016-10-01

    The aim of this study is to explore and compare through polysomnography respiratory sleep parameters between children with idiopathic epilepsy and healthy children. Our cross-sectional study included 40 children with idiopathic epilepsy and 27 healthy children, who underwent overnight polysomnography. Data about sleep respiratory parameters were obtained and statistically analyzed. The level of statistical significance was set at 0.05. The prevalence of Obstructive Sleep Apnea Syndrome was significantly higher in the epilepsy group (35% vs 7.4%, p<0.01). Moreover, the odds ratio of an obstructive apnea index ≥1 in the epilepsy group was 10.6 (95% Confidence Intervals: 3.08-37.08) in comparison to the control group. The mean value of the obstructive apnea-hypopnea index was significantly higher in children with epilepsy compared to healthy children (2.46±1.22 vs 1.21±0.83, p=0.027). The mean values of central apnea index and desaturation index were comparable between these two groups. Longest apnea duration was significantly higher in the group of poor seizure control. All other sleep respiratory variables did not differ significantly between children with poor and good seizure control and between children with generalized and focal epilepsy. Children with epilepsy seem to present more prominent sleep breathing instability in comparison to healthy children, which mainly includes a predisposition to obstructive respiratory events. More studies are needed to investigate the relationship between sleep apneas and seizure control.

  18. Nonlinear Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.; Badavi, Forooz F.

    1989-01-01

    Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, interactive curve-fitting routine based on description of quadratic expansion of χ² statistic. Utilizes nonlinear optimization algorithm calculating best statistically weighted values of parameters of fitting function such that χ² is minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
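
    NLINEAR itself is written in FORTRAN 77; a minimal sketch of the same idea in Python, statistically weighted nonlinear least squares that minimizes χ² and reports parameter estimates and goodness of fit, might look like this, assuming an exponential model and synthetic data.

    ```python
    # Sketch of a statistically weighted nonlinear curve fit minimizing chi^2
    # (assumed exponential model and synthetic data, not the NLINEAR code itself).
    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, b, c):
        return a * np.exp(-b * x) + c

    rng = np.random.default_rng(2)
    x = np.linspace(0, 5, 40)
    sigma = 0.05 * np.ones_like(x)                    # measurement uncertainties
    y = model(x, 2.5, 1.3, 0.4) + sigma * rng.normal(size=x.size)

    popt, pcov = curve_fit(model, x, y, sigma=sigma, absolute_sigma=True, p0=(1, 1, 0))
    chi2 = np.sum(((y - model(x, *popt)) / sigma) ** 2)
    dof = x.size - popt.size
    print("parameters:", popt)
    print("1-sigma errors:", np.sqrt(np.diag(pcov)))
    print(f"reduced chi^2 = {chi2 / dof:.2f}")        # goodness of fit
    ```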

  19. Statistical hadronization with exclusive channels in e⁺e⁻ annihilation

    DOE PAGES

    Ferroni, L.; Becattini, F.

    2012-01-01

    We present a systematic analysis of exclusive hadronic channels in e⁺e⁻ collisions at centre-of-mass energies between 2.1 and 2.6 GeV within the statistical hadronization model. Because of the low multiplicities involved, calculations have been carried out in the full microcanonical ensemble, including conservation of energy-momentum, angular momentum, parity, isospin, and all relevant charges. We show that the data are in overall good agreement with the model for an energy density of about 0.5 GeV/fm³ and an extra strangeness suppression parameter γ_S ≈ 0.7, essentially the same values found with fits to inclusive multiplicities at higher energy.

  20. A new exact and more powerful unconditional test of no treatment effect from binary matched pairs.

    PubMed

    Lloyd, Chris J

    2008-09-01

    We consider the problem of testing for a difference in the probability of success from matched binary pairs. Starting with three standard inexact tests, the nuisance parameter is first estimated and then the residual dependence is eliminated by maximization, producing what I call an E+M P-value. The E+M P-value based on McNemar's statistic is shown numerically to dominate previous suggestions, including partially maximized P-values as described in Berger and Sidik (2003, Statistical Methods in Medical Research 12, 91-108). The latter method, however, may have computational advantages for large samples.
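
    The estimate-then-maximize construction can be illustrated as follows: estimate the nuisance discordance probability, then take the worst-case exact unconditional tail probability over a neighborhood of that estimate. This is an illustrative reading of the E+M idea with McNemar's statistic, not Lloyd's actual algorithm; the counts and the neighborhood width are invented.

    ```python
    # Illustrative sketch of an estimate-then-maximize (E+M) unconditional p-value
    # for binary matched pairs, using McNemar's statistic. Not the paper's exact
    # code: the nuisance (discordance probability) is maximized on a grid near
    # its estimate rather than over the full unit interval.
    import numpy as np
    from scipy.stats import binom

    def em_pvalue(b, c, n, halfwidth=0.2, grid=201):
        """b, c: discordant pair counts; n: total number of pairs."""
        t_obs = abs(b - c) / np.sqrt(b + c)            # McNemar's statistic
        pi_hat = (b + c) / n                           # estimate of the nuisance
        worst = 0.0
        for pi in np.linspace(max(pi_hat - halfwidth, 1e-6),
                              min(pi_hat + halfwidth, 1.0), grid):
            p_d = binom.pmf(np.arange(n + 1), n, pi)   # P(D = d discordant pairs)
            total = 0.0
            for d in range(1, n + 1):
                ks = np.arange(d + 1)                  # possible b given D = d
                t = np.abs(2 * ks - d) / np.sqrt(d)    # statistic given D = d
                total += p_d[d] * binom.pmf(ks, d, 0.5)[t >= t_obs].sum()
            worst = max(worst, total)                  # maximize over the nuisance
        return worst

    print(f"E+M-style p-value: {em_pvalue(b=12, c=4, n=50):.4f}")
    ```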

  1. Statistical performance evaluation of ECG transmission using wireless networks.

    PubMed

    Shakhatreh, Walid; Gharaibeh, Khaled; Al-Zaben, Awad

    2013-07-01

    This paper presents a simulation of the transmission of biomedical signals (using an ECG signal as an example) over wireless networks. The effects of channel impairments, including SNR, path-loss exponent, and path delay, and of network impairments, such as packet loss probability, on the diagnosability of the received ECG signal are investigated. The ECG signal is transmitted through a wireless network system composed of two communication protocols: an 802.15.4 ZigBee protocol and an 802.11b protocol. The performance of the transmission is evaluated using higher-order statistical parameters such as kurtosis and negative entropy, in addition to common techniques such as PRD, RMS, and cross-correlation.
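
    The evaluation metrics named above can be sketched compactly; the signals are synthetic placeholders, and negentropy is approximated with the common log-cosh contrast, an assumption on our part.

    ```python
    # Sketch of the signal-quality metrics named in the abstract (synthetic
    # signals; negentropy via the standard log-cosh approximation, an assumption).
    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(3)
    t = np.linspace(0, 10, 2000)
    original = np.sin(2 * np.pi * 1.2 * t)           # stand-in for an ECG trace
    received = original + 0.05 * rng.normal(size=t.size)

    prd = 100 * np.sqrt(np.sum((original - received) ** 2) / np.sum(original ** 2))
    rms = np.sqrt(np.mean((original - received) ** 2))
    xcorr = np.corrcoef(original, received)[0, 1]

    z = (received - received.mean()) / received.std()
    gauss = rng.normal(size=z.size)
    negentropy = (np.mean(np.log(np.cosh(z))) - np.mean(np.log(np.cosh(gauss)))) ** 2

    print(f"PRD = {prd:.2f}%  RMS = {rms:.4f}  xcorr = {xcorr:.4f}")
    print(f"kurtosis = {kurtosis(received):.3f}  negentropy ~ {negentropy:.2e}")
    ```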

  2. A note on some statistical properties of rise time parameters used in muon arrival time measurements

    NASA Technical Reports Server (NTRS)

    Vanderwalt, D. J.; Devilliers, E. J.

    1985-01-01

    Most investigations of the muon arrival time distribution in EAS during the past decade made use of parameters which can collectively be called rise time parameters. The rise time parameter T_A/B is defined as the time taken for the integrated pulse from a detector to rise from A% to B% of its full amplitude. The use of these parameters is usually restricted to the determination of their radial dependence. This radial dependence of the rise time parameters is usually taken as a signature of the particle interaction characteristics in the shower. As these parameters have a stochastic nature, it seems reasonable that one should also take notice of this aspect of the rise time parameters. A statistical approach to rise time parameters is presented.
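
    Computing T_A/B from an integrated detector pulse reduces to interpolating the normalized integral at the two amplitude fractions; a minimal sketch, assuming A = 10, B = 90 and a synthetic pulse:

    ```python
    # Sketch: rise time T_{A/B} of an integrated pulse, i.e. the time to rise
    # from A% to B% of full amplitude (synthetic pulse; A=10, B=90 assumed).
    import numpy as np

    def rise_time(t, pulse, a=0.10, b=0.90):
        integ = np.cumsum(pulse)
        integ /= integ[-1]                       # normalize to full amplitude
        t_a = np.interp(a, integ, t)             # time at A% of amplitude
        t_b = np.interp(b, integ, t)             # time at B% of amplitude
        return t_b - t_a

    t = np.linspace(0, 100, 1000)                # ns
    pulse = np.exp(-0.5 * ((t - 40) / 8) ** 2)   # stand-in muon arrival profile
    print(f"T_10/90 = {rise_time(t, pulse):.1f} ns")
    ```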

  3. Performance evaluation of a hybrid-passive landfill leachate treatment system using multivariate statistical techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallace, Jack, E-mail: jack.wallace@ce.queensu.ca; Champagne, Pascale, E-mail: champagne@civil.queensu.ca; Monnier, Anne-Charlotte, E-mail: anne-charlotte.monnier@insa-lyon.fr

    Highlights: • Performance of a hybrid passive landfill leachate treatment system was evaluated. • 33 water chemistry parameters were sampled for 21 months and statistically analyzed. • Parameters were strongly linked and explained most (>40%) of the variation in data. • Alkalinity, ammonia, COD, heavy metals, and iron were criteria for performance. • Eight other parameters were key in modeling system dynamics and criteria. - Abstract: A pilot-scale hybrid-passive treatment system operated at the Merrick Landfill in North Bay, Ontario, Canada, treats municipal landfill leachate and provides for subsequent natural attenuation. Collected leachate is directed to a hybrid-passive treatment system, followed by controlled release to a natural attenuation zone before entering the nearby Little Sturgeon River. The study presents a comprehensive evaluation of the performance of the system using multivariate statistical techniques to determine the interactions between parameters, major pollutants in the leachate, and the biological and chemical processes occurring in the system. Five parameters (ammonia, alkalinity, chemical oxygen demand (COD), "heavy" metals of interest (with atomic weights above calcium), and iron) were set as criteria for the evaluation of system performance based on their toxicity to aquatic ecosystems and importance in treatment with respect to discharge regulations. System data for a full range of water quality parameters over a 21-month period were analyzed using principal components analysis (PCA), as well as principal components (PC) and partial least squares (PLS) regressions. PCA indicated a high degree of association for most parameters with the first PC, which explained a high percentage (>40%) of the variation in the data, suggesting strong statistical relationships among most of the parameters in the system. Regression analyses identified 8 parameters (set as independent variables) that were most frequently retained for modeling the five criteria parameters (set as dependent variables) at a statistically significant level: conductivity, dissolved oxygen (DO), nitrite (NO₂⁻), organic nitrogen (N), oxidation-reduction potential (ORP), pH, sulfate, and total volatile solids (TVS). The criteria parameters and the significant explanatory parameters were most important in modeling the dynamics of the passive treatment system during the study period. Such techniques and procedures were found to be highly valuable and could be applied to other sites to determine parameters of interest in similar naturalized engineered systems.
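
    The PCA step of such a workflow, standardizing the parameter matrix and checking how much variation the leading component explains, can be sketched as follows with synthetic stand-ins for the water chemistry columns.

    ```python
    # Sketch: PCA on standardized water-quality data, reporting the variance
    # explained by the leading components (synthetic stand-in for 33 parameters).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    latent = rng.normal(size=(90, 1))                        # one shared driver
    data = latent @ rng.normal(size=(1, 33)) + 0.8 * rng.normal(size=(90, 33))

    pca = PCA(n_components=5).fit(StandardScaler().fit_transform(data))
    print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
    # A first ratio above ~0.4 would mirror the strong associations reported above.
    ```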

  4. Estimation Model of Spacecraft Parameters and Cost Based on a Statistical Analysis of COMPASS Designs

    NASA Technical Reports Server (NTRS)

    Gerberich, Matthew W.; Oleson, Steven R.

    2013-01-01

    The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database, to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate the overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters through basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model in regards to past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data of completed NASA missions.

  5. Trajectory Dispersed Vehicle Process for Space Launch System

    NASA Technical Reports Server (NTRS)

    Statham, Tamara; Thompson, Seth

    2017-01-01

    The Space Launch System (SLS) vehicle is part of NASA's deep space exploration plans, which include manned missions to Mars. Manufacturing uncertainties in design parameters are key considerations throughout SLS development, as they have significant effects on focus parameters such as liftoff thrust-to-weight ratio, vehicle payload, maximum dynamic pressure, and compression loads. This presentation discusses how the SLS program captures these uncertainties by utilizing a 3-degree-of-freedom (DOF) process called Trajectory Dispersed (TD) analysis. This analysis biases nominal trajectories to identify extremes in the design parameters for various potential SLS configurations and missions. The process uses a design of experiments (DOE) and response surface methodologies (RSM) to statistically sample uncertainties, and develops the resulting vehicles using a maximum likelihood estimation (MLE) process to target the uncertainty biases. These vehicles represent various missions and configurations, which are used as key inputs to a variety of analyses in the SLS design process, including 6-DOF dispersions, separation clearances, and engine-out failure studies.

  6. Ergodicity of Truncated Stochastic Navier Stokes with Deterministic Forcing and Dispersion

    NASA Astrophysics Data System (ADS)

    Majda, Andrew J.; Tong, Xin T.

    2016-10-01

    Turbulence in idealized geophysical flows is a very rich and important topic. The anisotropic effects of explicit deterministic forcing, dispersive effects from rotation due to the β -plane and F-plane, and topography together with random forcing all combine to produce a remarkable number of realistic phenomena. These effects have been studied through careful numerical experiments in the truncated geophysical models. These important results include transitions between coherent jets and vortices, and direct and inverse turbulence cascades as parameters are varied, and it is a contemporary challenge to explain these diverse statistical predictions. Here we contribute to these issues by proving with full mathematical rigor that for any values of the deterministic forcing, the β - and F-plane effects and topography, with minimal stochastic forcing, there is geometric ergodicity for any finite Galerkin truncation. This means that there is a unique smooth invariant measure which attracts all statistical initial data at an exponential rate. In particular, this rigorous statistical theory guarantees that there are no bifurcations to multiple stable and unstable statistical steady states as geophysical parameters are varied in contrast to claims in the applied literature. The proof utilizes a new statistical Lyapunov function to account for enstrophy exchanges between the statistical mean and the variance fluctuations due to the deterministic forcing. It also requires careful proofs of hypoellipticity with geophysical effects and uses geometric control theory to establish reachability. To illustrate the necessity of these conditions, a two-dimensional example is developed which has the square of the Euclidean norm as the Lyapunov function and is hypoelliptic with nonzero noise forcing, yet fails to be reachable or ergodic.

  7. Implications of the methodological choices for hydrologic portrayals of climate change over the contiguous United States: Statistically downscaled forcing data and hydrologic models

    USGS Publications Warehouse

    Mizukami, Naoki; Clark, Martyn P.; Gutmann, Ethan D.; Mendoza, Pablo A.; Newman, Andrew J.; Nijssen, Bart; Livneh, Ben; Hay, Lauren E.; Arnold, Jeffrey R.; Brekke, Levi D.

    2016-01-01

    Continental-domain assessments of climate change impacts on water resources typically rely on statistically downscaled climate model outputs to force hydrologic models at a finer spatial resolution. This study examines the effects of four statistical downscaling methods [bias-corrected constructed analog (BCCA), bias-corrected spatial disaggregation applied at daily (BCSDd) and monthly scales (BCSDm), and asynchronous regression (AR)] on retrospective hydrologic simulations using three hydrologic models with their default parameters (the Community Land Model, version 4.0; the Variable Infiltration Capacity model, version 4.1.2; and the Precipitation–Runoff Modeling System, version 3.0.4) over the contiguous United States (CONUS). Biases of hydrologic simulations forced by statistically downscaled climate data relative to the simulation with observation-based gridded data are presented. Each statistical downscaling method produces different meteorological portrayals including precipitation amount, wet-day frequency, and the energy input (i.e., shortwave radiation), and their interplay affects estimations of precipitation partitioning between evapotranspiration and runoff, extreme runoff, and hydrologic states (i.e., snow and soil moisture). The analyses show that BCCA underestimates annual precipitation by as much as −250 mm, leading to unreasonable hydrologic portrayals over the CONUS for all models. Although the other three statistical downscaling methods produce a comparable precipitation bias ranging from −10 to 8 mm across the CONUS, BCSDd severely overestimates the wet-day fraction by up to 0.25, leading to different precipitation partitioning compared to the simulations with other downscaled data. Overall, the choice of downscaling method contributes to less spread in runoff estimates (by a factor of 1.5–3) than the choice of hydrologic model with use of the default parameters if BCCA is excluded.

  8. CEval: All-in-one software for data processing and statistical evaluations in affinity capillary electrophoresis.

    PubMed

    Dubský, Pavel; Ördögová, Magda; Malý, Michal; Riesová, Martina

    2016-05-06

    We introduce the CEval software (downloadable for free at echmet.natur.cuni.cz), which was developed for quicker and easier electropherogram evaluation and further data processing in (affinity) capillary electrophoresis. This software allows for automatic peak detection and evaluation of common peak parameters, such as migration time, area, width, etc. Additionally, the software includes a nonlinear regression engine that performs peak fitting with the Haarhoff-van der Linde (HVL) function, including an automated initial guess of the HVL function parameters. HVL is a fundamental peak-shape function in electrophoresis, based on which the correct effective mobility of the analyte represented by the peak is evaluated. Effective mobilities of an analyte at various concentrations of a selector can be further stored and plotted in an affinity CE mode. Consequently, the mobility of the free analyte, μA, the mobility of the analyte-selector complex, μAS, and the apparent complexation constant, K', are first guessed automatically from the linearized data plots and subsequently estimated by means of nonlinear regression. An option that allows two complexation dependencies to be fitted at once is especially convenient for enantioseparations. Statistical processing of these data is also included, which allowed us to: i) express the 95% confidence intervals for the μA, μAS, and K' least-squares estimates, and ii) perform hypothesis testing on the estimated parameters for the first time. We demonstrate the benefits of the CEval software by inspecting the complexation of tryptophan methyl ester with two cyclodextrins, neutral heptakis(2,6-di-O-methyl)-β-CD and charged heptakis(6-O-sulfo)-β-CD.
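
    The affinity-CE estimation step can be sketched with the standard 1:1 binding isotherm, μ_eff(c) = (μ_A + μ_AS·K'c)/(1 + K'c), fitted by nonlinear regression; the mobilities below are synthetic and this is not CEval's actual engine.

    ```python
    # Sketch: estimate an apparent complexation constant K' from affinity CE data
    # by nonlinear regression of the standard 1:1 binding isotherm
    # (synthetic mobilities; illustrative only).
    import numpy as np
    from scipy.optimize import curve_fit

    def isotherm(c, mu_a, mu_as, k):
        return (mu_a + mu_as * k * c) / (1 + k * c)

    c = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # selector conc., mM
    rng = np.random.default_rng(7)
    mu = isotherm(c, 20.0, 5.0, 0.3) + 0.1 * rng.normal(size=c.size)

    (mu_a, mu_as, k), pcov = curve_fit(isotherm, c, mu, p0=(mu[0], mu[-1], 0.1))
    err = np.sqrt(np.diag(pcov))
    print(f"mu_A = {mu_a:.2f}+-{err[0]:.2f}, mu_AS = {mu_as:.2f}+-{err[1]:.2f}, "
          f"K' = {k:.3f}+-{err[2]:.3f} per mM")
    ```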

  9. Thyroid V50 Highly Predictive of Hypothyroidism in Head-and-Neck Cancer Patients Treated With Intensity-modulated Radiotherapy (IMRT).

    PubMed

    Sachdev, Sean; Refaat, Tamer; Bacchus, Ian D; Sathiaseelan, Vythialinga; Mittal, Bharat B

    2017-08-01

    Radiation-induced hypothyroidism affects a significant number of patients with head-and-neck squamous cell cancer (HNSCC). We examined detailed dosimetric and clinical parameters to better determine the risk of hypothyroidism in euthyroid HNSCC patients treated with intensity-modulated radiation therapy (IMRT). From 2006 to 2010, 75 clinically euthyroid patients with HNSCC were treated with sequential IMRT. The cohort included 59 men and 16 women with a median age of 55 years (range, 30 to 89 y) who were treated to a median dose of 70 Gy (range, 60 to 75 Gy), with concurrent chemotherapy in nearly all (95%) cases. Detailed thyroid dosimetric parameters, including maximum dose, mean dose, and other parameters (e.g., V50, the percentage of thyroid volume receiving at least 50 Gy), were obtained. Freedom from hypothyroidism was evaluated using the Kaplan-Meier method. Univariate and multivariate analyses were conducted using Cox regression. After a median follow-up period of 50 months, 25 patients (33%) became hypothyroid. On univariate analysis, thyroid V50 was highly correlated with developing hypothyroidism (P=0.035). Other dosimetric parameters, including mean thyroid dose (P=0.11) and maximum thyroid dose (P=0.39), did not reach statistical significance. On multivariate analysis incorporating patient, tumor, and treatment variables, V50 remained highly statistically significant (P=0.037). Regardless of other factors, for V50>60%, the odds ratio of developing hypothyroidism was 6.76 (P=0.002). In HNSCC patients treated with IMRT, thyroid V50 highly predicts the risk of developing hypothyroidism. V50>60% puts patients at a significantly higher risk of becoming hypothyroid. This can be a useful dose constraint to consider during treatment planning.

  10. Measurement of the hyperelastic properties of 44 pathological ex vivo breast tissue samples

    NASA Astrophysics Data System (ADS)

    O'Hagan, Joseph J.; Samani, Abbas

    2009-04-01

    The elastic and hyperelastic properties of biological soft tissues have been of interest to the medical community. There are several biomedical applications where parameters characterizing such properties are critical for a reliable clinical outcome. These applications include surgery planning, needle biopsy, and brachytherapy, where tissue biomechanical modeling is involved. Another important application is interpreting nonlinear elastography images. While there has been considerable research on the measurement of the linear elastic modulus of small tissue samples, little research has been conducted on measuring parameters that characterize the nonlinear elasticity of tissues in tissue slice specimens. This work presents hyperelastic measurement results of 44 pathological ex vivo breast tissue samples. For each sample, five hyperelastic models have been used, including the Yeoh, N = 2 polynomial, N = 1 Ogden, Arruda-Boyce, and Veronda-Westmann models. Results show that the Yeoh, polynomial, and Ogden models are the most accurate in terms of fitting experimental data. The results indicate that almost all of the parameters corresponding to the pathological tissues are from two times to over two orders of magnitude larger than those of normal tissues, with C11 showing the most significant difference. Furthermore, statistical analysis indicates that C02 of the Yeoh model, and C11 and C20 of the polynomial model, have very good potential for cancer classification, as they show statistically significant differences for various cancer types, especially for invasive lobular carcinoma. In addition to the potential for use in cancer classification, the presented data are very important for applications such as surgery planning and virtual-reality-based clinician training systems, where accurate nonlinear tissue response modeling is required.

  11. A Comparison of the Forecast Skills among Three Numerical Models

    NASA Astrophysics Data System (ADS)

    Lu, D.; Reddy, S. R.; White, L. J.

    2003-12-01

    Three numerical weather forecast models (MM5, COAMPS, and WRF), operated through a joint effort of NOAA HU-NCAS and Jackson State University (JSU) during summer 2003, were chosen to evaluate their forecast skill against observations. The models forecast over the same region with the same initialization, boundary conditions, forecast length, and spatial resolution. The AVN global dataset was ingested for initial conditions. A grid resolution of 27 km was chosen, representative of current mesoscale models. Forecasts 36 h in length were performed, with output at 12-h intervals. The key parameters used to evaluate forecast skill include 12-h accumulated precipitation, sea level pressure, wind, surface temperature, and dew point. Precipitation is evaluated statistically using conventional skill scores, the Threat Score (TS) and Bias Score (BS), for different threshold values based on 12-h rainfall observations, whereas other statistical measures such as Mean Error (ME), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) are applied to the other forecast parameters.
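
    The two categorical precipitation scores reduce to simple ratios over a hits/misses/false-alarms contingency table; a minimal sketch with invented counts:

    ```python
    # Sketch: Threat Score (TS) and Bias Score (BS) for a precipitation threshold,
    # computed from a forecast/observation contingency table (invented counts).
    hits, misses, false_alarms = 42, 18, 25

    ts = hits / (hits + misses + false_alarms)      # 1 is perfect, 0 is no skill
    bs = (hits + false_alarms) / (hits + misses)    # >1 overforecast, <1 underforecast
    print(f"TS = {ts:.2f}, BS = {bs:.2f}")
    ```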

  12. A self-consistency approach to improve microwave rainfall rate estimation from space

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Mack, Robert A.; Hakkarinen, Ida M.

    1989-01-01

    A multichannel statistical approach is used to retrieve rainfall rates from the brightness temperature T(B) observed by passive microwave radiometers flown on a high-altitude NASA aircraft. T(B) statistics are based upon data generated by a cloud radiative model. This model simulates variabilities in the underlying geophysical parameters of interest, and computes their associated T(B) in each of the available channels. By further imposing the requirement that the observed T(B) agree with the T(B) values corresponding to the retrieved parameters through the cloud radiative transfer model, the results can be made to agree quite well with coincident radar-derived rainfall rates. Some information regarding the cloud vertical structure is also obtained by such an added requirement. The applicability of this technique to satellite retrievals is also investigated. Data which might be observed by satellite-borne radiometers, including the effects of nonuniformly filled footprints, are simulated by the cloud radiative model for this purpose.

  13. Statistical classifiers on multifractal parameters for optical diagnosis of cervical cancer

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sabyasachi; Pratiher, Sawon; Kumar, Rajeev; Krishnamoorthy, Vigneshram; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2017-06-01

    An augmented set of multifractal parameters with physical interpretations has been proposed to quantify the varying distribution and shape of the multifractal spectrum. A statistical classifier with an accuracy of 84.17% validates the adequacy of multi-feature MFDFA characterization of elastic scattering spectroscopy for the optical diagnosis of cancer.

  14. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of the ensemble NWP model's solar irradiance forecasts was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
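
    The NRMSE metric itself is compact; a sketch with synthetic series, assuming normalization by the 51-kW plant rating (the abstract does not state which normalization was used):

    ```python
    # Sketch: normalized root mean squared error of a power forecast
    # (synthetic series; normalization by plant capacity assumed).
    import numpy as np

    rng = np.random.default_rng(5)
    observed = np.clip(30 + 10 * rng.normal(size=96), 0, 51)   # kW, 15-min values
    forecast = observed + 3 * rng.normal(size=96)
    capacity = 51.0                                            # kW plant rating

    nrmse = np.sqrt(np.mean((forecast - observed) ** 2)) / capacity
    print(f"NRMSE = {100 * nrmse:.1f}% of capacity")
    ```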

  15. Paleomagnetism.org: An online multi-platform open source environment for paleomagnetic data analysis

    NASA Astrophysics Data System (ADS)

    Koymans, Mathijs R.; Langereis, Cor G.; Pastor-Galán, Daniel; van Hinsbergen, Douwe J. J.

    2016-08-01

    This contribution provides an overview of Paleomagnetism.org, an open-source, multi-platform online environment for paleomagnetic data analysis. Paleomagnetism.org provides an interactive environment where paleomagnetic data can be interpreted, evaluated, visualized, and exported. The application is split into an interpretation portal, a statistics portal, and a portal for miscellaneous paleomagnetic tools. In the interpretation portal, principal component analysis can be performed on visualized demagnetization diagrams. Interpreted directions and great circles can be combined to find great circle solutions. These directions can be used in the statistics portal or exported as data and figures. The tools in the statistics portal cover standard Fisher statistics for directions and VGPs, including other statistical parameters used as reliability criteria. Other available tools include an eigenvector-approach foldtest, two reversal tests (including a Monte Carlo simulation on mean directions), and a coordinate bootstrap on the original data. An implementation is included for the detection and correction of inclination shallowing in sediments following TK03.GAD. Finally, we provide a module to visualize VGPs and expected paleolatitudes, declinations, and inclinations relative to widely used global apparent polar wander path models in the coordinates of the major continent-bearing plates. The tools in the miscellaneous portal include a net tectonic rotation (NTR) analysis to restore a body to its paleo-vertical and a bootstrapped oroclinal test using linear regression techniques, including a modified foldtest around a vertical axis. Paleomagnetism.org provides an integrated approach for researchers to work with visualized (e.g., hemisphere projections, Zijderveld diagrams) paleomagnetic data. The application constructs a custom exportable file that can be shared freely and included in public databases. This exported file contains all data and can later be imported into the application by other researchers. The accessibility and simplicity with which paleomagnetic data can be interpreted, analyzed, visualized, and shared makes Paleomagnetism.org of interest to the community.
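
    The Fisher statistics underlying the statistics portal follow the standard formulas for the resultant vector length R, precision parameter k = (N-1)/(N-R), and the α95 confidence cone; a sketch with invented declination/inclination pairs:

    ```python
    # Sketch: Fisher statistics for paleomagnetic directions: mean direction,
    # precision parameter k, and alpha95 (synthetic declination/inclination data).
    import numpy as np

    dec = np.radians(np.array([351.0, 2.0, 8.0, 355.0, 5.0, 358.0]))
    inc = np.radians(np.array([55.0, 61.0, 58.0, 52.0, 64.0, 57.0]))

    # Unit vectors: x north, y east, z down.
    x = np.cos(inc) * np.cos(dec)
    y = np.cos(inc) * np.sin(dec)
    z = np.sin(inc)
    n = dec.size
    r = np.sqrt(x.sum() ** 2 + y.sum() ** 2 + z.sum() ** 2)   # resultant length

    mean_dec = np.degrees(np.arctan2(y.sum(), x.sum())) % 360
    mean_inc = np.degrees(np.arcsin(z.sum() / r))
    k = (n - 1) / (n - r)                                     # precision parameter
    a95 = np.degrees(np.arccos(1 - (n - r) / r * (20 ** (1 / (n - 1)) - 1)))

    print(f"mean D/I = {mean_dec:.1f}/{mean_inc:.1f}, k = {k:.1f}, a95 = {a95:.1f} deg")
    ```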

  16. Statistical Analysis of Large-Scale Structure of Universe

    NASA Astrophysics Data System (ADS)

    Tugay, A. V.

    While galaxy cluster catalogs were compiled many decades ago, other structural elements of the cosmic web have been detected with confidence only in the most recent works. For example, extragalactic filaments have been described by the velocity field and the SDSS galaxy distribution in recent years. The large-scale structure of the Universe could also be mapped in the future using ATHENA observations in X-rays and SKA in the radio band. Until detailed observations become available for most of the volume of the Universe, some integral statistical parameters can be used for its description. Methods such as the galaxy correlation function, the power spectrum, statistical moments, and peak statistics are commonly used for this purpose. The parameters of the power spectrum and other statistics are important for constraining models of dark matter, dark energy, inflation, and brane cosmology. In the present work we describe the growth of large-scale density fluctuations in the one- and three-dimensional cases with Fourier harmonics of hydrodynamical parameters. As a result, we obtain a power-law relation for the matter power spectrum.

  17. Histogram analysis of ADC in rectal cancer: associations with different histopathological findings including expression of EGFR, Hif1-alpha, VEGF, p53, PD1, and KI 67. A preliminary study.

    PubMed

    Meyer, Hans Jonas; Höhn, Annekathrin; Surov, Alexey

    2018-04-06

    Functional imaging modalities like diffusion-weighted imaging are increasingly used to predict tumor behavior, such as cellularity and vascularity, in different tumors. Histogram analysis is an emergent image analysis technique in which every voxel is used to obtain a histogram, and therefore statistical information about tumors can be provided. The purpose of this study was to elucidate possible associations between ADC histogram parameters and several immunohistochemical features in rectal cancer. Overall, 11 patients with histologically proven rectal cancer were included in the study. There were 2 (18.18%) women and 9 men with a mean age of 67.1 years. The KI 67 index and the expression of p53, EGFR, VEGF, and Hif1-alpha were semiautomatically estimated. The tumors were divided into PD1-positive and PD1-negative lesions. ADC histogram analysis was performed as a whole-lesion measurement using an in-house Matlab application. Spearman's correlation analysis revealed a strong correlation between EGFR expression and ADCmax (ρ=0.72, P=0.02). None of the vascular parameters (VEGF, Hif1-alpha) correlated with ADC parameters. Kurtosis and skewness correlated inversely with p53 expression (ρ=-0.64, P=0.03 and ρ=-0.81, P=0.002, respectively). ADCmedian and ADCmode correlated with Ki67 (ρ=-0.62, P=0.04 and ρ=-0.65, P=0.03, respectively). PD1-positive tumors showed statistically significantly lower ADCmax values in comparison to PD1-negative tumors (1.93 ± 0.36 vs 2.32 ± 0.47 ×10⁻³ mm²/s, P=0.04). Several associations were identified between histogram parameters derived from ADC maps and EGFR, KI 67, and p53 expression in rectal cancer. Furthermore, ADCmax differed between PD1-positive and PD1-negative tumors, indicating an important role of ADC parameters for possible future treatment prediction.

  19. Evaluating performances of simplified physically based landslide susceptibility models.

    NASA Astrophysics Data System (ADS)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall-induced shallow landslides cause significant damage, involving loss of life and property. Predicting locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually, two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis. It was integrated into the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing pixel-by-pixel model results and measurement data. Moreover, the package's integration into NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and robustness of models and model parameters according to a procedure that includes: i) model parameter estimation by optimizing each of the GOF indices separately, ii) model evaluation in the ROC plane using each of the optimal parameter sets, and iii) evaluation of GOF robustness by assessing sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk Monitoring, Early Warning and Mitigation Along the Main Lifelines", CUP B31H11000370005, in the framework of the National Operational Program for "Research and Competitiveness" 2007-2013.

  20. Fractional superstatistics from a kinetic approach

    NASA Astrophysics Data System (ADS)

    Ourabah, Kamel; Tribeche, Mouloud

    2018-03-01

    Through a kinetic approach, in which temperature fluctuations are taken into account, we obtain generalized fractional statistics interpolating between Fermi-Dirac and Bose-Einstein statistics. The latter correspond to the superstatistical analogues of the Polychronakos and Haldane-Wu statistics. The virial coefficients corresponding to these statistics are worked out and compared to those of an ideal two-dimensional anyon gas. It is shown that the obtained statistics reproduce correctly the second and third virial coefficients of an anyon gas. On this basis, a link is established between the statistical parameter and the strength of fluctuations. A further generalization is suggested by allowing the statistical parameter to fluctuate. As a by-product, superstatistics of ewkons, introduced recently to deal with dark energy [Phys. Rev. E 94, 062115 (2016), 10.1103/PhysRevE.94.062115], are also obtained within the same method.

  1. Using Electronic Data Interchange to Report Product Quality

    DTIC Science & Technology

    1993-03-01

    [Extraction-garbled segment table; the recoverable entries are EDI transaction-set segments, including SPS (Sampling Parameters for Summary Statistics), REF (Reference Numbers), DTM (Date/Time Reference), and STA (Statistics).]

  2. Crossover between the Gaussian orthogonal ensemble, the Gaussian unitary ensemble, and Poissonian statistics.

    PubMed

    Schweiner, Frank; Laturner, Jeanine; Main, Jörg; Wunner, Günter

    2017-11-01

    Until now, analytical formulas for the level spacing distribution function have been derived within random matrix theory only for specific crossovers between Poissonian statistics (P), the statistics of a Gaussian orthogonal ensemble (GOE), or the statistics of a Gaussian unitary ensemble (GUE). We investigate arbitrary crossovers in the triangle between all three statistics. To this end we propose a formula for the level spacing distribution function depending on two parameters. Comparing the behavior of our formula for the special cases of P→GUE, P→GOE, and GOE→GUE with the results from random matrix theory, we prove that these crossovers are described reasonably well. Recent investigations by F. Schweiner et al. [Phys. Rev. E 95, 062205 (2017)] have shown that the Hamiltonian of magnetoexcitons in cubic semiconductors can exhibit all three statistics depending on the system parameters. Evaluating the numerical results for magnetoexcitons in dependence on the excitation energy and on a parameter connected with the cubic valence band structure, and comparing the results with the proposed formula, allows us to distinguish between regular and chaotic behavior as well as between existent or broken antiunitary symmetries. Increasing one of the two parameters, transitions between different crossovers, e.g., from the P→GOE to the P→GUE crossover, are observed and discussed.
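
    For reference, the three limiting nearest-neighbour spacing distributions between which such a two-parameter formula must interpolate are the Poissonian law and the Wigner surmises for the GOE and GUE (unit mean spacing):

    ```latex
    P_{\mathrm{P}}(s) = e^{-s}, \qquad
    P_{\mathrm{GOE}}(s) = \frac{\pi}{2}\, s\, e^{-\pi s^{2}/4}, \qquad
    P_{\mathrm{GUE}}(s) = \frac{32}{\pi^{2}}\, s^{2}\, e^{-4 s^{2}/\pi}.
    ```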

  3. Landslides triggered by the 12 January 2010 Port-au-Prince, Haiti, Mw = 7.0 earthquake: visual interpretation, inventory compiling, and spatial distribution statistical analysis

    NASA Astrophysics Data System (ADS)

    Xu, C.; Shyu, J. B. H.; Xu, X.

    2014-07-01

    The 12 January 2010 Port-au-Prince, Haiti, earthquake (Mw = 7.0) triggered tens of thousands of landslides. The purpose of this study is to investigate the correlations of the occurrence of landslides and the thicknesses of their erosion with topographic, geologic, and seismic parameters. A total of 30 828 landslides triggered by the earthquake covered a total area of 15.736 km2, distributed over an area of more than 3000 km2, and the volume of landslide accumulation materials is estimated to be about 29 700 000 m3. These landslides are of various types, mostly shallow disrupted landslides and rock falls, but they also include coherent deep-seated landslides and rock slides. The landslides were delineated using pre- and post-earthquake high-resolution satellite images. Spatial distribution maps and contour maps of landslide number density, landslide area percentage, and landslide erosion thickness were constructed in order to analyze the spatial distribution patterns of co-seismic landslides. Statistics of the size distribution and morphometric parameters of co-seismic landslides were compiled and compared with other earthquake events in the world. Four proxies of co-seismic landslide abundance, including landslide centroid number density (LCND), landslide top number density (LTND), landslide area percentage (LAP), and landslide erosion thickness (LET), were used to correlate co-seismic landslides with various environmental parameters. These parameters include elevation, slope angle, slope aspect, slope curvature, topographic position, distance from drainages, lithology, distance from the epicenter, distance from the Enriquillo-Plantain Garden fault, distance along the fault, and peak ground acceleration (PGA). A comparison of these impact parameters on co-seismic landslides shows that slope angle is the strongest control on co-seismic landslide occurrence. Our co-seismic landslide inventory is much more detailed than the inventories in several previous publications. Therefore, we carried out comparisons of inventories of landslides triggered by the Haiti earthquake with other published results and proposed possible reasons for any differences. We suggest that the empirical functions between earthquake magnitude and co-seismic landslides need to be updated on the basis of the abundant and more complete co-seismic landslide inventories recently available.

  4. Landslides triggered by the 12 January 2010 Mw 7.0 Port-au-Prince, Haiti, earthquake: visual interpretation, inventory compiling and spatial distribution statistical analysis

    NASA Astrophysics Data System (ADS)

    Xu, C.; Shyu, J. B. H.; Xu, X.-W.

    2014-02-01

    The 12 January 2010 Port-au-Prince, Haiti, earthquake (Mw 7.0) triggered tens of thousands of landslides. The purpose of this study is to investigate the correlations of the occurrence of landslides and their erosion thicknesses with topographic factors, seismic parameters, and distance from roads. A total of 30 828 landslides triggered by the earthquake covered a total area of 15.736 km2, distributed over an area of more than 3000 km2, and the volume of landslide accumulation materials is estimated to be about 29 700 000 m3. These landslides are of various types, mostly shallow disrupted landslides and rock falls, but they also include coherent deep-seated landslides and rock slides. The landslides were delineated using pre- and post-earthquake high-resolution satellite images. Spatial distribution maps and contour maps of landslide number density, landslide area percentage, and landslide erosion thickness were constructed in order to analyze the spatial distribution patterns of co-seismic landslides. Statistics of the size distribution and morphometric parameters of co-seismic landslides were compiled and compared with other earthquake events in the world. Four proxies of co-seismic landslide abundance, including landslide centroid number density (LCND), landslide top number density (LTND), landslide area percentage (LAP), and landslide erosion thickness (LET), were used to correlate co-seismic landslides with various landslide controlling parameters. These controlling parameters include elevation, slope angle, slope aspect, slope curvature, topographic position, distance from drainages, lithology, distance from the epicenter, distance from the Enriquillo-Plantain Garden fault, distance along the fault, and peak ground acceleration (PGA). A comparison of these impact parameters on co-seismic landslides shows that slope angle is the strongest control on co-seismic landslide occurrence. Our co-seismic landslide inventory is much more detailed than the inventories in several previous publications. Therefore, we carried out comparisons of inventories of landslides triggered by the Haiti earthquake with other published results and proposed possible reasons for any differences. We suggest that the empirical functions between earthquake magnitude and co-seismic landslides need to be updated on the basis of the abundant and more complete co-seismic landslide inventories recently available.

  5. Cosmology constraints from shear peak statistics in Dark Energy Survey Science Verification data

    NASA Astrophysics Data System (ADS)

    Kacprzak, T.; Kirk, D.; Friedrich, O.; Amara, A.; Refregier, A.; Marian, L.; Dietrich, J. P.; Suchyta, E.; Aleksić, J.; Bacon, D.; Becker, M. R.; Bonnett, C.; Bridle, S. L.; Chang, C.; Eifler, T. F.; Hartley, W. G.; Huff, E. M.; Krause, E.; MacCrann, N.; Melchior, P.; Nicola, A.; Samuroff, S.; Sheldon, E.; Troxel, M. A.; Weller, J.; Zuntz, J.; Abbott, T. M. C.; Abdalla, F. B.; Armstrong, R.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. A.; Bertin, E.; Brooks, D.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Castander, F. J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Evrard, A. E.; Neto, A. Fausti; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Jain, B.; James, D. J.; Jarvis, M.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Lima, M.; March, M.; Marshall, J. L.; Martini, P.; Miller, C. J.; Miquel, R.; Mohr, J. J.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Rykoff, E. S.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Vikram, V.; Walker, A. R.; Zhang, Y.; DES Collaboration

    2016-12-01

    Shear peak statistics has gained a lot of attention recently as a practical alternative to two-point statistics for constraining cosmological parameters. We perform a shear peak statistics analysis of the Dark Energy Survey (DES) Science Verification (SV) data, using weak gravitational lensing measurements from a 139 deg2 field. We measure the abundance of peaks identified in aperture mass maps, as a function of their signal-to-noise ratio, in the signal-to-noise range 0 < S/N < 4. Peaks with S/N > 4 would require significant corrections, which is why we do not include them in our analysis. We compare our results to the cosmological constraints from the two-point analysis on the SV field and find them to be in good agreement in both the central value and its uncertainty. We discuss prospects for future peak statistics analysis with upcoming DES data.

  6. Identification of dynamic systems, theory and formulation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1985-01-01

    The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes the unification of the various areas of estimation; estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.

  7. Correlation of RNA secondary structure statistics with thermodynamic stability and applications to folding.

    PubMed

    Wu, Johnny C; Gardner, David P; Ozer, Stuart; Gutell, Robin R; Ren, Pengyu

    2009-08-28

    The accurate prediction of the secondary and tertiary structure of an RNA with different folding algorithms is dependent on several factors, including the energy functions. However, an RNA higher-order structure cannot be predicted accurately from its sequence based on a limited set of energy parameters. The inter- and intramolecular forces between this RNA and other small molecules and macromolecules, in addition to other factors in the cell such as pH, ionic strength, and temperature, influence the complex dynamics associated with the transition of a single-stranded RNA to its secondary and tertiary structure. Since all of the factors that affect the formation of an RNA's 3D structure cannot be determined experimentally, statistically derived potential energy has been used in the prediction of protein structure. In the current work, we evaluate the statistical free energy of various secondary structure motifs, including base-pair stacks, hairpin loops, and internal loops, using their statistical frequency obtained from the comparative analysis of more than 50,000 RNA sequences stored in the RNA Comparative Analysis Database (rCAD) at the Comparative RNA Web (CRW) Site. Statistical energy was computed from the structural statistics for several datasets. While the statistical energy for a base-pair stack correlates with experimentally derived free energy values, suggesting a Boltzmann-like distribution, variation is observed between different molecules and their location on the phylogenetic tree of life. Our statistical energy values calculated for several structural elements were utilized in the Mfold RNA-folding algorithm. The combined statistical energy values for base-pair stacks, hairpins, and internal loop flanks result in a significant improvement in the accuracy of secondary structure prediction; the hairpin flanks contribute the most.
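
    The central construction, a Boltzmann-like energy from observed motif frequencies, follows the standard knowledge-based-potential form E = -RT ln(f_obs/f_ref); a sketch with invented base-pair-stack counts and a uniform reference state, which is a simplification of the paper's actual reference statistics:

    ```python
    # Sketch: Boltzmann-like statistical energy from comparative frequencies,
    # E = -RT ln(f_obs / f_ref) (invented stack counts; RT at 37 C assumed).
    import math

    RT = 0.6163  # kcal/mol at 310.15 K

    counts = {"GC/CG": 9000, "AU/UA": 4000, "GU/UG": 1200}   # hypothetical stacks
    total = sum(counts.values())
    f_ref = 1.0 / len(counts)                                # uniform reference

    for stack, c in counts.items():
        f_obs = c / total
        energy = -RT * math.log(f_obs / f_ref)
        print(f"{stack}: {energy:+.2f} kcal/mol")
    ```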

  8. Does Previous Hip Surgery Affect the Outcome of Tönnis Triple Periacetabular Osteotomy? Mid-Term Results.

    PubMed

    Konya, Mehmet Nuri; Aydın, Bahattin Kerem; Yıldırım, Timur; Sofu, Hakan; Gürsu, Sarper

    2016-03-01

    Hip dysplasia (HD) is one of the major causes of coxarthrosis. The goal of treating HD with Tönnis triple periacetabular osteotomy (TPAO) is to improve the function of the hip joint while relieving pain, delaying and possibly preventing end-stage arthritis. The aim of this study is to compare the clinical and radiological results of TPAO to determine whether previous surgery has a negative effect on its outcome. Patients operated on with TPAO between 2005 and 2010 were included in this study. Patients were divided into 2 groups: primary acetabular dysplasia (PAD) and residual acetabular dysplasia (RAD). Pre- and postoperatively, hip range of motion, Harris hip score (HHS), Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) hip score, visual analog scores (VAS), impingement tests, and the presence of the Trendelenburg sign (TS) were investigated for clinical evaluation. For radiological analysis, pre- and postoperative anteroposterior (AP) pelvis and faux profil radiographs were used. The acetabular index, lateral center edge (LCE) angle, and Sharp angle were measured on AP pelvis radiographs; the anterior center edge (ACE) angle was measured on faux profil radiographs. All clinical and radiological data of the groups were analyzed separately for the pre- and postoperative scores, as was the amount of improvement in all parameters. SPSS 20 (SPSS Inc., Chicago, IL) was used for statistical analysis. The Wilcoxon test, McNemar test, paired t tests, and Mann-Whitney U tests were used to compare the groups. P < 0.05 was defined as statistically significant. The study included 27 patients: 17 in the PAD group and 10 in the RAD group. The mean follow-up period was 6.2 years (5.2-10.3 years). In all patients, the radiological and clinical outcomes were better after TPAO, except for the hip flexion parameter. When the groups were evaluated pre- versus postoperatively, more statistically significant parameters were found in the PAD group than in the RAD group. Extension, impingement, TS, VAS, HHS, and WOMAC score among the clinical outcomes, and LCE, ACE, Sharp angle, and coverage ratio among the radiological results, were significantly better in the PAD group postoperatively; in the RAD group, only extension, VAS, HHS, and WOMAC differed significantly clinically, and only LCE and coverage ratio differed significantly radiologically, compared with the preoperative measurements. The change in the parameters used for the evaluation of clinical and radiological results did not show a significant difference between the groups. Our data suggest that TPAO can be performed on patients with HD in both groups. Although fewer parameters changed significantly after TPAO in RAD patients, the improvement in radiological and clinical results was similar between the groups. Further long-term follow-up studies with larger numbers of patients are needed to determine the true results of TPAO.

  9. Studies of vorticity imbalance and stability, moisture budget, atmospheric energetics, and gradients of meteorological parameters during AVE 3

    NASA Technical Reports Server (NTRS)

    Scoggins, J. R. (Editor)

    1978-01-01

    Four diagnostic studies of AVE 3 are presented. AVE 3 represents a high-wind-speed wintertime situation, while most AVEs analyzed previously represented springtime conditions with rather low wind speeds. The general areas of analysis include the examination of budgets of vorticity, moisture, kinetic energy, and potential energy, and a synoptic and statistical study of the horizontal gradients of meteorological parameters. Conclusions are integrated with and compared to those obtained in previously analyzed experiments (mostly springtime weather situations) so as to establish a more definitive understanding of the structure and dynamics of the atmosphere under a wide range of synoptic conditions.

  10. Meta-analysis of five photodisinfection clinical trials for periodontitis

    NASA Astrophysics Data System (ADS)

    Andersen, Roger C.; Loebel, Nicolas G.; Andersen, Dane M.

    2009-06-01

    Photodynamic therapy (PDT) has been demonstrated to effectively kill human periopathogens in vitro. To evaluate the efficacy of PDT in vivo, a series of clinical trials was carried out in multiple centers and populations. Clinical parameters including clinical attachment level, pocket probing depth, and bleeding on probing were all evaluated. All groups received the standard of care, scaling and root planing, and the treatment group additionally received a single treatment of PDT. Across the 309 patients and over 40,000 pockets treated in these 5 trials, photodynamic therapy provided a statistically significant improvement in clinical parameters over scaling and root planing alone.

  11. The Efficacy of Galaxy Shape Parameters in Photometric Redshift Estimation: A Neural Network Approach

    NASA Astrophysics Data System (ADS)

    Singal, J.; Shmakova, M.; Gerke, B.; Griffith, R. L.; Lotz, J.

    2011-05-01

    We present a determination of the effects of including galaxy morphological parameters in photometric redshift estimation with an artificial neural network method. Neural networks, which recognize patterns in the information content of data in an unbiased way, can be a useful estimator of the additional information contained in extra parameters, such as those describing morphology, if the input data are treated on an equal footing. We use imaging and five band photometric magnitudes from the All-wavelength Extended Groth Strip International Survey (AEGIS). It is shown that certain principal components of the morphology information are correlated with galaxy type. However, we find that for the data used the inclusion of morphological information does not have a statistically significant benefit for photometric redshift estimation with the techniques employed here. The inclusion of these parameters may result in a tradeoff between extra information and additional noise, with the additional noise becoming more dominant as more parameters are added.

  12. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    PubMed

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
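
    For intuition, the sketch below pairs simulated annealing with a Monte Carlo estimate of property satisfaction, a crude stand-in for statistical model checking. The toy stochastic update, the parameter names k_on/k_off, and the target interval are all hypothetical; this is not the authors' glucose-insulin model.

      import math
      import random

      def satisfaction_probability(params, n_runs=200):
          # Monte Carlo stand-in for statistical model checking: estimate the
          # probability that a toy stochastic model satisfies an observed fact
          k_on, k_off = params
          hits = 0
          for _ in range(n_runs):
              x = 10.0
              for _ in range(50):  # crude stochastic state update (hypothetical)
                  x += k_on * random.random() - 0.1 * k_off * x
              hits += (8.0 <= x <= 12.0)  # the "experimentally observed fact"
          return hits / n_runs

      def simulated_annealing(init, steps=300, t0=1.0):
          current, cur_score = list(init), satisfaction_probability(init)
          best, best_score = list(current), cur_score
          for i in range(steps):
              temp = t0 * (1.0 - i / steps) + 1e-6
              cand = [max(1e-3, p + random.gauss(0.0, 0.1)) for p in current]
              score = satisfaction_probability(cand)
              # accept uphill moves always, downhill with Boltzmann probability
              if score >= cur_score or random.random() < math.exp((score - cur_score) / temp):
                  current, cur_score = cand, score
                  if score > best_score:
                      best, best_score = list(cand), score
          return best, best_score

      print(simulated_annealing([1.0, 1.0]))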

  13. Development of uncertainty-based work injury model using Bayesian structural equation modelling.

    PubMed

    Chatterjee, Snehamoy

    2014-01-01

    This paper proposed a Bayesian method-based structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading parameters and structural parameters of the SEM. In the first approach, the prior distributions were considered as fixed distribution functions with specific parameter values, whereas in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov chain Monte Carlo sampling in the form of Gibbs sampling was applied for sampling from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant under the experts' opinion-based priors, whereas two coefficients are not statistically significant when the fixed prior-based distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit for work injury, with a high coefficient of determination (0.91) and a lower mean squared error compared to traditional SEM.
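
    The alternation between full conditionals that Gibbs sampling performs can be illustrated on a minimal conjugate normal model; the data, priors, and burn-in below are hypothetical stand-ins, far simpler than the paper's SEM. An expert-informed prior would enter simply through different values of (m0, s0, a0, b0).

      import numpy as np

      rng = np.random.default_rng(0)
      y = rng.normal(2.0, 1.5, size=100)      # hypothetical observed data
      n, ybar = len(y), y.mean()

      # priors: mu ~ N(m0, s0^2), precision ~ Gamma(a0, b0); expert-informed
      # priors would simply supply different (m0, s0, a0, b0)
      m0, s0, a0, b0 = 0.0, 10.0, 1.0, 1.0

      mu, prec = 0.0, 1.0
      draws = []
      for it in range(6000):
          # full conditional of mu given the precision (conjugate normal update)
          v = 1.0 / (1.0 / s0**2 + n * prec)
          mu = rng.normal(v * (m0 / s0**2 + prec * n * ybar), np.sqrt(v))
          # full conditional of the precision given mu (conjugate gamma update)
          prec = rng.gamma(a0 + n / 2.0, 1.0 / (b0 + 0.5 * np.sum((y - mu) ** 2)))
          if it >= 1000:                      # discard burn-in
              draws.append((mu, 1.0 / np.sqrt(prec)))

      mu_s, sd_s = np.array(draws).T
      print("posterior means:", mu_s.mean(), sd_s.mean())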

  14. Constraints on Cosmological Parameters from the Angular Power Spectrum of a Combined 2500 deg2 SPT-SZ and Planck Gravitational Lensing Map

    NASA Astrophysics Data System (ADS)

    Simard, G.; Omori, Y.; Aylor, K.; Baxter, E. J.; Benson, B. A.; Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Cho, H.-M.; Chown, R.; Crawford, T. M.; Crites, A. T.; de Haan, T.; Dobbs, M. A.; Everett, W. B.; George, E. M.; Halverson, N. W.; Harrington, N. L.; Henning, J. W.; Holder, G. P.; Hou, Z.; Holzapfel, W. L.; Hrubes, J. D.; Knox, L.; Lee, A. T.; Leitch, E. M.; Luong-Van, D.; Manzotti, A.; McMahon, J. J.; Meyer, S. S.; Mocanu, L. M.; Mohr, J. J.; Natoli, T.; Padin, S.; Pryke, C.; Reichardt, C. L.; Ruhl, J. E.; Sayre, J. T.; Schaffer, K. K.; Shirokoff, E.; Staniszewski, Z.; Stark, A. A.; Story, K. T.; Vanderlinde, K.; Vieira, J. D.; Williamson, R.; Wu, W. L. K.

    2018-06-01

    We report constraints on cosmological parameters from the angular power spectrum of a cosmic microwave background (CMB) gravitational lensing potential map created using temperature data from 2500 deg2 of South Pole Telescope (SPT) data supplemented with data from Planck in the same sky region, with the statistical power in the combined map primarily from the SPT data. We fit the lensing power spectrum to a model including cold dark matter and a cosmological constant (ΛCDM), and to models with single-parameter extensions to ΛCDM. We find constraints that are comparable to and consistent with those found using the full-sky Planck CMB lensing data, e.g., σ_8 Ω_m^0.25 = 0.598 ± 0.024 from the lensing data alone with weak priors placed on other parameters. Combining with primary CMB data, we explore single-parameter extensions to ΛCDM. We find Ω_k = −0.012 (+0.021/−0.023) or M_ν < 0.70 eV at 95% confidence, in good agreement with results including the lensing potential as measured by Planck. We include two parameters that scale the effect of lensing on the CMB: A_L, which scales the lensing power spectrum in both the lens reconstruction power and in the smearing of the acoustic peaks, and A_φφ, which scales only the amplitude of the lensing reconstruction power spectrum. We find A_φφ × A_L = 1.01 ± 0.08 for the lensing map made from combined SPT and Planck data, indicating that the amount of lensing is in excellent agreement with expectations from the observed CMB angular power spectrum when not including the information from smearing of the acoustic peaks.

  15. QCD Precision Measurements and Structure Function Extraction at a High Statistics, High Energy Neutrino Scattering Experiment:. NuSOnG

    NASA Astrophysics Data System (ADS)

    Adams, T.; Batra, P.; Bugel, L.; Camilleri, L.; Conrad, J. M.; de Gouvêa, A.; Fisher, P. H.; Formaggio, J. A.; Jenkins, J.; Karagiorgi, G.; Kobilarcik, T. R.; Kopp, S.; Kyle, G.; Loinaz, W. A.; Mason, D. A.; Milner, R.; Moore, R.; Morfín, J. G.; Nakamura, M.; Naples, D.; Nienaber, P.; Olness, F. I.; Owens, J. F.; Pate, S. F.; Pronin, A.; Seligman, W. G.; Shaevitz, M. H.; Schellman, H.; Schienbein, I.; Syphers, M. J.; Tait, T. M. P.; Takeuchi, T.; Tan, C. Y.; van de Water, R. G.; Yamamoto, R. K.; Yu, J. Y.

    We extend the physics case for a new high-energy, ultra-high statistics neutrino scattering experiment, NuSOnG (Neutrino Scattering On Glass), to address a variety of issues including precision QCD measurements, extraction of structure functions, and the derived Parton Distribution Functions (PDFs). This experiment uses a Tevatron-based neutrino beam to obtain a sample of Deep Inelastic Scattering (DIS) events which is over two orders of magnitude larger than past samples. We outline an innovative method for fitting the structure functions using a parametrized energy shift which yields reduced systematic uncertainties. High statistics measurements, in combination with improved systematics, will enable NuSOnG to perform discerning tests of fundamental Standard Model parameters as we search for deviations which may hint at "Beyond the Standard Model" physics.

  16. User manual for Blossom statistical package for R

    USGS Publications Warehouse

    Talbert, Marian; Cade, Brian S.

    2005-01-01

    Blossom is an R package with functions for making statistical comparisons with distance-function based permutation tests developed by P.W. Mielke, Jr. and colleagues at Colorado State University (Mielke and Berry, 2001) and for testing parameters estimated in linear models with permutation procedures developed by B. S. Cade and colleagues at the Fort Collins Science Center, U.S. Geological Survey. This manual is intended to provide documentation of the statistical methods and interpretations identical to that provided by the manual of Cade and Richards (2005) for the original Fortran program, but with changes made to command inputs and outputs to reflect the new implementation as a package for R (R Development Core Team, 2012). This implementation in R has allowed for numerous improvements not supported by the Cade and Richards (2005) Fortran implementation, including the use of categorical predictor variables in most routines.

  17. Analysis of sensitivity of simulated recharge to selected parameters for seven watersheds modeled using the precipitation-runoff modeling system

    USGS Publications Warehouse

    Ely, D. Matthew

    2006-01-01

    Recharge is a vital component of the ground-water budget, and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One approach to estimating ground-water recharge uses process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls in determining ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify model parameters that have the greatest effect on simulated ground-water recharge and that compare and contrast the hydrologic system responses to those parameters. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations that wetlands and other landscape features have with runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands, instead of to the ground-water system, resulting in little sensitivity of simulated recharge to any parameters. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX). Parameter sensitivities for the MOPEX watersheds, Amite River, Louisiana and Mississippi, English River, Iowa, and South Branch Potomac River, West Virginia, were similar and most sensitive to small changes in air temperature and a user-defined flow routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. A rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.

  18. Development of an analytical solution for the Budyko watershed parameter in terms of catchment physical features

    NASA Astrophysics Data System (ADS)

    Reaver, N.; Kaplan, D. A.; Jawitz, J. W.

    2017-12-01

    The Budyko hypothesis states that a catchment's long-term water and energy balances are dependent on two relatively easy-to-measure quantities: rainfall depth and potential evaporation. This hypothesis is expressed as a simple function, the Budyko equation, which allows for the prediction of a catchment's actual evapotranspiration and discharge from measured rainfall depth and potential evaporation, data which are widely available. However, the two main analytically derived forms of the Budyko equation contain a single unknown watershed parameter, whose value varies across catchments; variation in this parameter has been used to explain the hydrological behavior of different catchments. The watershed parameter is generally thought of as a lumped quantity that represents the influence of all catchment biophysical features (e.g. soil type and depth, vegetation type, timing of rainfall, etc.). Previous work has shown that the parameter is statistically correlated with catchment properties, but an explicit expression has been elusive. While the watershed parameter can be determined empirically by fitting the Budyko equation to measured data in gauged catchments where actual evapotranspiration can be estimated, this limits the utility of the framework for predicting impacts to catchment hydrology due to changing climate and land use. In this study, we developed an analytical solution for the lumped catchment parameter for both forms of the Budyko equation. We combined these solutions with a statistical soil moisture model to obtain analytical solutions for the Budyko equation parameter as a function of measurable catchment physical features, including rooting depth, soil porosity, and soil wilting point. We tested the predictive power of these solutions using the U.S. catchments in the MOPEX database. We also compared the Budyko equation parameter estimates generated from our analytical solutions (i.e. predicted parameters) with those obtained through the calibration of the Budyko equation to discharge data (i.e. empirical parameters), and found good agreement. These results suggest that it is possible to predict the Budyko equation watershed parameter directly from physical features, even for ungauged catchments.
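
    For orientation, a minimal sketch of the empirical route the authors compare against: Fu's form of the Budyko curve, E/P = 1 + φ − (1 + φ^ω)^(1/ω) with aridity index φ = PET/P, calibrated by solving for the ω that reproduces an observed evaporative index. The catchment numbers below are hypothetical.

      from scipy.optimize import brentq

      def fu_evaporative_index(phi, omega):
          # Fu's form of the Budyko curve: E/P as a function of the aridity
          # index phi = PET/P and the lumped watershed parameter omega
          return 1.0 + phi - (1.0 + phi ** omega) ** (1.0 / omega)

      def calibrate_omega(phi, e_over_p):
          # empirical route: solve for the omega that reproduces an observed
          # long-term evaporative index E/P
          return brentq(lambda w: fu_evaporative_index(phi, w) - e_over_p,
                        1.0 + 1e-6, 20.0)

      # hypothetical catchment: PET/P = 1.2, observed E/P = 0.65
      omega = calibrate_omega(1.2, 0.65)
      print(omega, fu_evaporative_index(1.2, omega))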

  19. MTS dye based colorimetric CTLL-2 cell proliferation assay for product release and stability monitoring of interleukin-15: assay qualification, standardization and statistical analysis.

    PubMed

    Soman, Gopalan; Yang, Xiaoyi; Jiang, Hengguang; Giardina, Steve; Vyas, Vinay; Mitra, George; Yovandich, Jason; Creekmore, Stephen P; Waldmann, Thomas A; Quiñones, Octavio; Alvord, W Gregory

    2009-08-31

    A colorimetric cell proliferation assay using a soluble tetrazolium salt [CellTiter 96® Aqueous One Solution cell proliferation reagent, containing 3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium, inner salt, and an electron coupling reagent, phenazine ethosulfate] was optimized and qualified for quantitative determination of IL-15-dependent CTLL-2 cell proliferation activity. An in-house recombinant human (rHu)IL-15 reference lot was standardized (IU/mg) against an international reference standard. Specificity of the assay for IL-15 was documented by illustrating the ability of neutralizing anti-IL-15 antibodies to block product-specific CTLL-2 cell proliferation and the lack of a blocking effect with anti-IL-2 antibodies. Under the defined assay conditions, the linear dose-response concentration range was between 0.04 and 0.17 ng/ml for the rHuIL-15 produced in-house and 0.5-3.0 IU/ml for the international standard. Statistical analysis of the data was performed with scripts written in the R statistical language and environment utilizing a four-parameter logistic regression fit procedure. The overall variation in the ED50 values for the in-house reference standard from 55 independent estimates performed over the period of 1 year was 12.3% of the average. Excellent intra-plate and within-day/inter-plate consistency was observed for all four parameter estimates in the model. Different preparations of rHuIL-15 showed excellent intra-plate consistency in the parameter estimates corresponding to the lower and upper asymptotes as well as to the 'slope' factor at the midpoint. The ED50 values showed statistically significant differences for different lots and for control versus stressed samples. Three R scripts improve data analysis capabilities, allowing one to describe assay variations, to draw inferences between data sets from formal statistical tests, and to set up improved assay acceptance criteria based on comparability and consistency in the four parameters of the model. The assay is precise, accurate and robust and can be fully validated. Applications of the assay were established, including process development support, release of the rHuIL-15 product for pre-clinical and clinical studies, and monitoring of storage stability.
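
    The four-parameter logistic model in the abstract has the closed form R(x) = bottom + (top − bottom) / (1 + (x/ED50)^slope). The study fits it with R scripts; the sketch below performs the same kind of fit in Python, with entirely hypothetical dose-response readings, so every number is illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def four_pl(x, bottom, top, ed50, slope):
          # four-parameter logistic: lower/upper asymptotes, midpoint (ED50),
          # and 'slope' factor at the midpoint
          return bottom + (top - bottom) / (1.0 + (x / ed50) ** slope)

      # hypothetical dose-response readings (ng/ml vs. absorbance)
      dose = np.array([0.02, 0.04, 0.06, 0.09, 0.13, 0.17, 0.25])
      resp = np.array([0.18, 0.35, 0.62, 0.98, 1.31, 1.52, 1.64])

      p0 = [resp.min(), resp.max(), np.median(dose), -1.0]
      (bottom, top, ed50, slope), _ = curve_fit(four_pl, dose, resp, p0=p0, maxfev=10000)
      print(f"ED50 = {ed50:.3f} ng/ml, slope = {slope:.2f}")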

  20. Statistical methods for the beta-binomial model in teratology.

    PubMed Central

    Yamamoto, E; Yanagimoto, T

    1994-01-01

    The beta-binomial model is widely used for analyzing teratological data involving littermates. Recent developments in statistical analyses of teratological data are briefly reviewed with emphasis on this model. For statistical inference of the parameters in the beta-binomial distribution, separation of the likelihood leads to a form of likelihood inference that reduces the biases of estimators and improves the accuracy of empirical significance levels of tests. Separate inference of the parameters can be conducted in a unified way. PMID:8187716
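
    As a concrete reference point for the distribution itself (not the separated-likelihood technique the note reviews), a minimal maximum-likelihood fit of the beta-binomial parameters to hypothetical litter data:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import betaln, gammaln

      def neg_log_lik(theta, x, n):
          # negative beta-binomial log-likelihood for litters with x affected
          # pups out of n; theta = (log a, log b) keeps a, b positive
          a, b = np.exp(theta)
          ll = (gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)
                + betaln(x + a, n - x + b) - betaln(a, b))
          return -ll.sum()

      # hypothetical teratology data: affected pups per litter, litter sizes
      x = np.array([0, 1, 2, 0, 5, 1, 3, 0, 2, 6])
      n = np.array([8, 10, 9, 7, 11, 10, 9, 8, 10, 12])

      res = minimize(neg_log_lik, x0=np.log([1.0, 5.0]), args=(x, n))
      a, b = np.exp(res.x)
      print(f"mean p = {a / (a + b):.3f}, intra-litter correlation = {1 / (a + b + 1):.3f}")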

  1. Probability density cloud as a geometrical tool to describe statistics of scattered light.

    PubMed

    Yaitskova, Natalia

    2017-04-01

    First-order statistics of scattered light are described using the representation of the probability density cloud, which visualizes the two-dimensional distribution of the complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of the phase. The moment-generating function for intensity is obtained in closed form through these parameters. An example of an exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.

  2. The Asymmetry Parameter and Branching Ratio of Sigma Plus Radiative Decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foucher, Maurice Emile

    1992-05-01

    We have measured the asymmetry parameter and branching ratio of the Σ+ radiative decay. This high-statistics experiment (FNAL 761) was performed in the Proton Center charged hyperon beam at Fermi National Accelerator Laboratory in Batavia, Illinois. We find for the asymmetry parameter −0.720 ± 0.086 ± 0.045, where the first error is statistical and the second is systematic. This result is based on a sample of 34754 ± 212 events. We find a preliminary value for the branching ratio Br(Σ+ → pγ)/Br(Σ+ → pπ0) = (2.14 ± 0.07 ± 0.11) × 10^-3, where the first error is statistical and the second is systematic. This result is based on a sample of 31040 ± 650 events. Both results are in agreement with previous low-statistics measurements.

  3. On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.

    PubMed

    Koyama, Shinsuke

    2015-07-01

    We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
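
    A minimal simulation of the assumed relationship, using gamma-distributed interspike intervals so that var(ISI) = scale × mean(ISI)^exponent holds by construction; the rates and power-law constants are hypothetical, and this shows only the generative side, not the paper's maximum likelihood inference.

      import numpy as np

      rng = np.random.default_rng(1)

      def sample_isis(rate, scale, exponent, n=20000):
          # draw interspike intervals whose variance follows the power law
          # var = scale * mean**exponent, using gamma-distributed intervals
          mean = 1.0 / rate
          var = scale * mean ** exponent
          shape = mean ** 2 / var          # gamma shape from mean and variance
          return rng.gamma(shape, var / mean, size=n)

      for rate in (5.0, 20.0, 80.0):
          isi = sample_isis(rate, scale=0.5, exponent=2.5)
          # empirical var / mean**exponent should recover the scale (~0.5)
          print(rate, isi.mean(), isi.var() / isi.mean() ** 2.5)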

  4. Different Manhattan project: automatic statistical model generation

    NASA Astrophysics Data System (ADS)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscapes). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc.) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.

  5. Hyperparameterization of soil moisture statistical models for North America with Ensemble Learning Models (Elm)

    NASA Astrophysics Data System (ADS)

    Steinberg, P. D.; Brener, G.; Duffy, D.; Nearing, G. S.; Pelissier, C.

    2017-12-01

    Hyperparameterization of statistical models, i.e., automated model scoring and selection via methods such as evolutionary algorithms, grid searches, and randomized searches, can improve forecast model skill by reducing errors associated with model parameterization, model structure, and the statistical properties of training data. Ensemble Learning Models (Elm), and the related Earthio package, provide a flexible interface for automating the selection of parameters and model structure for machine learning models common in climate science and land cover classification, offering convenient tools for loading NetCDF, HDF, Grib, or GeoTiff files, decomposition methods like PCA and manifold learning, and parallel training and prediction with unsupervised and supervised classification, clustering, and regression estimators. Continuum Analytics is using Elm to experiment with statistical soil moisture forecasting based on meteorological forcing data from NASA's North American Land Data Assimilation System (NLDAS). There, Elm uses the NSGA-2 multiobjective optimization algorithm to optimize the statistical preprocessing of forcing data and improve goodness-of-fit for statistical models (i.e., feature engineering). This presentation will discuss Elm and its components, including dask (distributed task scheduling), xarray (data structures for n-dimensional arrays), and scikit-learn (statistical preprocessing, clustering, classification, regression), and it will show how NSGA-2 is being used to automate the selection of soil moisture forecast statistical models for North America.
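
    Elm's own API is not reproduced here; as a generic illustration of the automated scoring-and-selection loop the abstract describes, the sketch below runs a randomized hyperparameter search with scikit-learn over a synthetic regression problem standing in for forcing-to-soil-moisture training pairs.

      from scipy.stats import loguniform, randint
      from sklearn.datasets import make_regression
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import RandomizedSearchCV

      # synthetic stand-in for gridded forcing-to-soil-moisture training pairs
      X, y = make_regression(n_samples=500, n_features=8, noise=5.0, random_state=0)

      search = RandomizedSearchCV(
          GradientBoostingRegressor(random_state=0),
          param_distributions={
              "n_estimators": randint(50, 400),
              "learning_rate": loguniform(1e-3, 0.3),
              "max_depth": randint(2, 6),
          },
          n_iter=25, cv=3, scoring="neg_mean_squared_error", random_state=0,
      )
      search.fit(X, y)
      print(search.best_params_, -search.best_score_)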

  6. Statistics of some atmospheric turbulence records relevant to aircraft response calculations

    NASA Technical Reports Server (NTRS)

    Mark, W. D.; Fischer, R. W.

    1981-01-01

    Methods for characterizing atmospheric turbulence are described. The methods illustrated include maximum likelihood estimation of the integral scale and intensity of records obeying the von Karman transverse power spectral form, constrained least-squares estimation of the parameters of a parametric representation of autocorrelation functions, estimation of the power spectral density of the instantaneous variance of a record with temporally fluctuating variance, and estimation of the probability density functions of various turbulence components. Descriptions of the computer programs used in the computations are given, and a full listing of these programs is included.

  7. Monitoring walking and cycling of middle-aged to older community dwellers using wireless wearable accelerometers.

    PubMed

    Zhang, Yuting; Beenakker, Karel G M; Butala, Pankil M; Lin, Cheng-Chieh; Little, Thomas D C; Maier, Andrea B; Stijntjes, Marjon; Vartanian, Richard; Wagenaar, Robert C

    2012-01-01

    Changes in gait parameters have been shown to be an important indicator of several age-related cognitive and physical declines of older adults. In this paper we propose a method to monitor and analyze walking and cycling activities based on a triaxial accelerometer worn on one ankle. We use an algorithm that can (1) distinguish between static and dynamic functional activities, (2) detect walking and cycling events, (3) identify gait parameters, including step frequency, number of steps, number of walking periods, and total walking duration per day, and (4) evaluate cycling parameters, including cycling frequency, number of cycling periods, and total cycling duration. Our algorithm is evaluated against triaxial accelerometer data obtained from a group of 297 middle-aged to older adults wearing an activity monitor on the right ankle for approximately one week while performing unconstrained daily activities in home and community settings. The correlation coefficients between each of the detected gait and cycling parameters on two weekdays were all statistically significant, ranging from 0.668 to 0.873. These results demonstrate good test-retest reliability of our method in monitoring walking and cycling activities and analyzing gait and cycling parameters. This algorithm is efficient and causal in time and thus implementable for real-time monitoring and feedback.
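
    The paper's own algorithm is not spelled out in this abstract; the sketch below shows the generic core of step detection from an ankle accelerometer (band-pass filtering followed by peak picking) on a synthetic signal. The sampling rate, filter band, and thresholds are assumptions, not the authors' settings.

      import numpy as np
      from scipy.signal import butter, filtfilt, find_peaks

      rng = np.random.default_rng(0)
      fs = 50.0                                  # sampling rate in Hz (assumed)
      t = np.arange(0.0, 30.0, 1.0 / fs)
      # synthetic ankle acceleration magnitude: ~2 Hz gait plus sensor noise
      acc = 1.0 + 0.6 * np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(t.size)

      # band-pass around plausible step frequencies before peak picking
      b, a = butter(3, [0.5 / (fs / 2), 5.0 / (fs / 2)], btype="band")
      acc_f = filtfilt(b, a, acc)

      # one step per peak; the refractory distance suppresses double counting
      peaks, _ = find_peaks(acc_f, height=0.3, distance=int(0.3 * fs))
      print(f"steps = {peaks.size}, step frequency = {peaks.size / t[-1]:.2f} Hz")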

  8. Measurement of Muon Neutrino Quasielastic Scattering on Carbon

    NASA Astrophysics Data System (ADS)

    Aguilar-Arevalo, A. A.; Bazarko, A. O.; Brice, S. J.; Brown, B. C.; Bugel, L.; Cao, J.; Coney, L.; Conrad, J. M.; Cox, D. C.; Curioni, A.; Djurcic, Z.; Finley, D. A.; Fleming, B. T.; Ford, R.; Garcia, F. G.; Garvey, G. T.; Green, C.; Green, J. A.; Hart, T. L.; Hawker, E.; Imlay, R.; Johnson, R. A.; Kasper, P.; Katori, T.; Kobilarcik, T.; Kourbanis, I.; Koutsoliotas, S.; Laird, E. M.; Link, J. M.; Liu, Y.; Liu, Y.; Louis, W. C.; Mahn, K. B. M.; Marsh, W.; Martin, P. S.; McGregor, G.; Metcalf, W.; Meyers, P. D.; Mills, F.; Mills, G. B.; Monroe, J.; Moore, C. D.; Nelson, R. H.; Nienaber, P.; Ouedraogo, S.; Patterson, R. B.; Perevalov, D.; Polly, C. C.; Prebys, E.; Raaf, J. L.; Ray, H.; Roe, B. P.; Russell, A. D.; Sandberg, V.; Schirato, R.; Schmitz, D.; Shaevitz, M. H.; Shoemaker, F. C.; Smith, D.; Sorel, M.; Spentzouris, P.; Stancu, I.; Stefanski, R. J.; Sung, M.; Tanaka, H. A.; Tayloe, R.; Tzanov, M.; van de Water, R.; Wascko, M. O.; White, D. H.; Wilking, M. J.; Yang, H. J.; Zeller, G. P.; Zimmerman, E. D.

    2008-01-01

    The observation of neutrino oscillations is clear evidence for physics beyond the standard model. To make precise measurements of this phenomenon, neutrino oscillation experiments, including MiniBooNE, require an accurate description of neutrino charged current quasielastic (CCQE) cross sections to predict signal samples. Using a high-statistics sample of νμ CCQE events, MiniBooNE finds that a simple Fermi gas model, with appropriate adjustments, accurately characterizes the CCQE events observed in a carbon-based detector. The extracted parameters include an effective axial mass, M_A^eff = 1.23 ± 0.20 GeV, that describes the four-momentum dependence of the axial-vector form factor of the nucleon, and a Pauli-suppression parameter, κ = 1.019 ± 0.011. Such a modified Fermi gas model may also be used by future accelerator-based experiments measuring neutrino oscillations on nuclear targets.

  9. SPOTting Model Parameters Using a Ready-Made Python Package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2017-04-01

    The choice of a specific parameter estimation method often depends more on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: to parameterize the Rosenbrock, Griewank and Ackley functions; on a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and in a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
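
    A minimal usage sketch following SPOTPY's documented setup-class convention (parameters/simulation/evaluation/objectivefunction) on the Rosenbrock function from the case studies; exact argument names may differ between SPOTPY versions, so treat this as an illustration rather than verified API.

      import spotpy

      class SpotSetup:
          # SPOTPY setup class for the Rosenbrock toy problem
          def __init__(self):
              self.params = [spotpy.parameter.Uniform("x", -5, 5),
                             spotpy.parameter.Uniform("y", -5, 5)]

          def parameters(self):
              return spotpy.parameter.generate(self.params)

          def simulation(self, vector):
              x, y = vector
              return [100.0 * (y - x ** 2) ** 2 + (1.0 - x) ** 2]

          def evaluation(self):
              return [0.0]  # the function's known minimum value

          def objectivefunction(self, simulation, evaluation):
              # the Monte Carlo sampler maximizes, so return negative RMSE
              return -spotpy.objectivefunctions.rmse(evaluation, simulation)

      sampler = spotpy.algorithms.mc(SpotSetup(), dbname="rosenbrock_mc", dbformat="csv")
      sampler.sample(2000)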

  10. SPOTting Model Parameters Using a Ready-Made Python Package.

    PubMed

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method often depends more on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: to parameterize the Rosenbrock, Griewank and Ackley functions; on a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and in a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.

  11. SPOTting Model Parameters Using a Ready-Made Python Package

    PubMed Central

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method often depends more on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: to parameterize the Rosenbrock, Griewank and Ackley functions; on a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and in a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function. PMID:26680783

  12. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article deals with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in that range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated using the problem of estimating the uncertainty of a parameter appearing in the model describing the transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in that range with the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.
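
    The statistical-processing stage can be pictured with a toy example: given best-estimate multipliers of a closing correlation recovered from many experiments (synthetic numbers here), estimate the variation range and check whether a Gaussian law describes the scatter, as the KORSAR study concluded for its parameter.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # stand-in for the case-calculation stage: best-estimate multipliers of
      # a closing correlation inferred from 60 experiments (synthetic numbers)
      k = rng.normal(1.0, 0.08, size=60)

      w, p = stats.shapiro(k)                  # is a Gaussian law plausible?
      mu, sigma = k.mean(), k.std(ddof=1)
      print("observed range:", k.min(), k.max())
      print("Shapiro-Wilk p-value:", p)
      print("95% band under a Gaussian law:", mu - 1.96 * sigma, mu + 1.96 * sigma)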

  13. The GnRH analogue triptorelin confers ovarian radio-protection to adult female rats.

    PubMed

    Camats, N; García, F; Parrilla, J J; Calaf, J; Martín-Mateo, M; Caldés, M Garcia

    2009-10-02

    There is a controversy regarding the effects of analogues of gonadotrophin-releasing hormone (GnRH) in radiotherapy. This led us to study the possible radio-protection of ovarian function by a GnRH agonist analogue (GnRHa), triptorelin, in adult female rats (Rattus norvegicus sp.). The effects of X-irradiation on the oocytes of ovarian primordial follicles, with and without GnRHa treatment, were compared directly in the female rats (F0) with reproductive parameters, and in the somatic cells of the resulting foetuses (F1) with cytogenetical parameters. To do this, the ovaries and uteri from 82 females were extracted for the reproductive analysis, and 236 foetuses were obtained for cytogenetical analysis. The cytogenetical study was based on data from 22,151 analysed metaphases. The cytogenetical parameters analysed to assess the existence of chromosomal instability were the number of aberrant metaphases (2,234) and the number (2,854) and type of structural chromosomal aberrations, including gaps and breaks. Concerning the reproductive analysis of the ovaries and the uteri, the parameters analysed were the number of corpora lutea, implantations, implantation losses, and foetuses. Triptorelin confers radio-protection on the ovaries against chromosomal instability, and this protection differs between single and fractionated doses. The cytogenetical analysis shows a general decrease in most of the parameters in the triptorelin-treated groups with respect to their controls, and some of these differences were statistically significant. The reproductive analysis indicates that there is also radio-protection by the agonist, although it is less pronounced than the cytogenetical protection. Only some of the analysed parameters show a statistically significant decrease in the triptorelin-treated groups.

  14. Classification of arterial and venous cerebral vasculature based on wavelet postprocessing of CT perfusion data.

    PubMed

    Havla, Lukas; Schneider, Moritz J; Thierfelder, Kolja M; Beyer, Sebastian E; Ertl-Wagner, Birgit; Reiser, Maximilian F; Sommer, Wieland H; Dietrich, Olaf

    2016-02-01

    The purpose of this study was to propose and evaluate a new wavelet-based technique for classification of arterial and venous vessels using time-resolved cerebral CT perfusion data sets. Fourteen consecutive patients (mean age 73 yr, range 17-97) with suspected stroke but no pathology in follow-up MRI were included. A CT perfusion scan with 32 dynamic phases was performed during intravenous bolus contrast-agent application. After rigid-body motion correction, a Paul wavelet (order 1) was used to calculate voxelwise the wavelet power spectrum (WPS) of each attenuation-time course. The angiographic intensity A was defined as the maximum of the WPS, located at the coordinates T (time axis) and W (scale/width axis) within the WPS. Using these three parameters (A, T, W) separately as well as combined by (1) Fisher's linear discriminant analysis (FLDA), (2) logistic regression (LogR) analysis, or (3) support vector machine (SVM) analysis, their potential to classify 18 different arterial and venous vessel segments per subject was evaluated. The best vessel classification was obtained using all three parameters A and T and W [area under the curve (AUC): 0.953 with FLDA and 0.957 with LogR or SVM]. In direct comparison, the wavelet-derived parameters provided performance at least equal to conventional attenuation-time-course parameters. The maximum AUC obtained from the proposed wavelet parameters was slightly (although not statistically significantly) higher than the maximum AUC (0.945) obtained from the conventional parameters. A new method to classify arterial and venous cerebral vessels with high statistical accuracy was introduced based on the time-domain wavelet transform of dynamic CT perfusion data in combination with linear or nonlinear multidimensional classification techniques.
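
    The classification step generalizes readily: given the three wavelet features per vessel segment (amplitude A, peak time T, peak width W), fit linear discriminant analysis and an SVM and score them by cross-validated AUC. The feature distributions below are synthetic stand-ins, not the study's measurements.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      # hypothetical wavelet features per vessel segment: amplitude A, peak
      # time T, peak width W; arteries enhance earlier/narrower than veins
      arteries = np.column_stack([rng.normal(300, 60, 120),
                                  rng.normal(8, 1.5, 120),
                                  rng.normal(4, 1.0, 120)])
      veins = np.column_stack([rng.normal(260, 60, 120),
                               rng.normal(14, 2.0, 120),
                               rng.normal(7, 1.5, 120)])
      X = np.vstack([arteries, veins])
      y = np.array([0] * 120 + [1] * 120)      # 0 = artery, 1 = vein

      for name, clf in [("FLDA", LinearDiscriminantAnalysis()),
                        ("SVM", make_pipeline(StandardScaler(), SVC()))]:
          auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
          print(name, round(auc, 3))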

  15. Magnification Bias in Gravitational Arc Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caminha, G. B.; Estrada, J.; Makler, M.

    2013-08-29

    The statistics of gravitational arcs in galaxy clusters is a powerful probe of cluster structure and may provide complementary cosmological constraints. Despite recent progress, discrepancies still remain between modelling and observations of arc abundance, especially regarding the redshift distribution of strong lensing clusters. Besides, fast "semi-analytic" methods still have to incorporate the success obtained with simulations. In this paper we discuss the contribution of the magnification in gravitational arc statistics. Although lensing conserves surface brightness, the magnification increases the signal-to-noise ratio of the arcs, enhancing their detectability. We present an approach to include this and other observational effects in semi-analytic calculations for arc statistics. The cross section for arc formation (σ) is computed through a semi-analytic method based on the ratio of the eigenvalues of the magnification tensor. Using this approach we obtained the scaling of σ with respect to the magnification and other parameters, allowing for a fast computation of the cross section. We apply this method to evaluate the expected number of arcs per cluster using an elliptical Navarro-Frenk-White matter distribution. Our results show that the magnification has a strong effect on the arc abundance, enhancing the fraction of arcs, moving the peak of the arc fraction to higher redshifts, and softening its decrease at high redshifts. We argue that the effect of magnification should be included in arc statistics modelling and that it could help to reconcile arc statistics predictions with the observational data.

  16. Bayesian statistics and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived in which the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable number of derivatives to be computed, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
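
    The error-propagation application reduces to a few lines: draw measurement vectors from their distribution, push them through the nonlinear function, and read off the sample mean and covariance, with no Jacobians needed. The mean, covariance, and transform below are hypothetical.

      import numpy as np

      rng = np.random.default_rng(42)

      # measurements: a random vector with a given mean and covariance
      mean = np.array([10.0, 0.5])
      cov = np.array([[0.04, 0.001],
                      [0.001, 0.0025]])

      def f(x):
          # nonlinear function of the measurements, e.g. (r, a) -> (r cos a, r sin a)
          r, a = x[..., 0], x[..., 1]
          return np.stack([r * np.cos(a), r * np.sin(a)], axis=-1)

      samples = rng.multivariate_normal(mean, cov, size=200_000)
      values = f(samples)

      # Monte Carlo estimates replace the linearized (derivative-based) ones
      print("expectation:", values.mean(axis=0))
      print("covariance:\n", np.cov(values, rowvar=False))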

  17. Cross-sectional, Observational Study of Anterior Segment Parameters Using Anterior Segment Optical Coherence Tomography in North Indian Population

    PubMed Central

    Dalal, Latika Khatri; Dhasmana, Renu; Maitreya, Amit

    2017-01-01

    Purpose: To study the anterior segment (AS) parameters using AS optical coherence tomography (AS-OCT) in the North Indian population. Methods: A hospital-based, observational, cross-sectional study was conducted over a period of 1 year. It included 251 normal individuals aged 20–70 years. Participants underwent imaging with AS-OCT. Ocular parameters included anterior chamber angle (ACA), iris cross-sectional area (ICSA), iris thickness (IT), and iris curvature (IC). The parameters were measured nasally and temporally for both sexes and different age groups. Results: The mean age of participants was 48.3 ± 13.9 years and 50.6% were men. The ACA decreased with age whereas ICSA, IT, and IC increased with age. The ACA (P = 0.0001 nasally and temporally), ICSA (P = 0.011 nasally, P = 0.027 temporally), IT750 (P = 0.001 nasally, P = 0.011 temporally), IT1500 (P = 0.002 nasally, P = 0.002 temporally), and IC (P = 0.059 nasally, P = 0.128 temporally) underwent statistically significant changes with increasing age. No significant difference was seen in parameters between the sexes. Conclusion: In this subset of the Indian population, the change in the AC parameters with age influences the AC dimensions, predisposing the eye to glaucomatous conditions. These data are applicable clinically for the assessment and surgical management of patients requiring AS surgery. PMID:28671154

  18. New Predictive Parameters of Bell’s Palsy: Neutrophil to Lymphocyte Ratio and Platelet to Lymphocyte Ratio

    PubMed Central

    Atan, Doğan; İkincioğulları, Aykut; Köseoğlu, Sabri; Özcan, Kürşat Murat; Çetin, Mehmet Ali; Ensari, Serdar; Dere, Hüseyin

    2015-01-01

    Background: Bell’s palsy is the most frequent cause of unilateral facial paralysis. Inflammation is thought to play an important role in the pathogenesis of Bell’s palsy. Aims: The neutrophil to lymphocyte ratio (NLR) and platelet to lymphocyte ratio (PLR) are simple and inexpensive tests which are indicative of inflammation and can be calculated by all physicians. The aim of this study was to reveal correlations of Bell’s palsy and degree of paralysis with NLR and PLR. Study Design: Case-control study. Methods: The retrospective study was performed between January 2010 and December 2013. Ninety-nine patients diagnosed with Bell’s palsy were included in the Bell’s palsy group, and ninety-nine healthy individuals with the same demographic characteristics as the Bell’s palsy group were included in the control group. From the analyses, NLR and PLR were calculated. Results: The mean NLR was 4.37 in the Bell’s palsy group and 1.89 in the control group, a statistically significant difference (p<0.001). The mean PLR was 137.5 in the Bell’s palsy group and 113.75 in the control group, a statistically significant difference (p=0.008). No statistically significant relation was detected between the degree of facial paralysis and NLR or PLR. Conclusion: The NLR and the PLR were significantly higher in patients with Bell’s palsy. This is the first study to reveal a relation between Bell’s palsy and PLR. NLR and PLR can be used as auxiliary parameters in the diagnosis of Bell’s palsy. PMID:26167340

  19. Probabilistic-driven oriented Speckle reducing anisotropic diffusion with application to cardiac ultrasonic images.

    PubMed

    Vegas-Sanchez-Ferrero, G; Aja-Fernandez, S; Martin-Fernandez, M; Frangi, A F; Palencia, C

    2010-01-01

    A novel anisotropic diffusion filter is proposed in this work with application to cardiac ultrasonic images. It includes probabilistic models which describe the probability density function (PDF) of tissues and adapts the diffusion tensor to the image iteratively. For this purpose, a preliminary study is performed in order to select the probability models that best fit the statistical behavior of each tissue class in cardiac ultrasonic images. Then, the parameters of the diffusion tensor are defined taking into account the statistical properties of the image at each voxel. When the structure tensor of the probability of belonging to each tissue is included in the diffusion tensor definition, better boundary estimates can be obtained than by calculating the boundaries directly from the image. This is the main contribution of this work. Additionally, the proposed method follows the statistical properties of the image in each iteration. This is considered a second contribution, since state-of-the-art methods suppose that the noise or statistical properties of the image do not change during the filtering process.

  20. Theoretic aspects of the identification of the parameters in the optimal control model

    NASA Technical Reports Server (NTRS)

    Vanwijk, R. A.; Kok, J. J.

    1977-01-01

    The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.

  1. Factors influencing medical informatics examination grade--can biorhythm, astrological sign, seasonal aspect, or bad statistics predict outcome?

    PubMed

    Petrovecki, Mladen; Rahelić, Dario; Bilić-Zulle, Lidija; Jelec, Vjekoslav

    2003-02-01

    To investigate whether and to what extent various parameters, such as individual characteristics, computer habits, situational factors, and pseudoscientific variables, influence the Medical Informatics examination grade, and how inadequate statistical analysis can lead to wrong conclusions. The study included a total of 382 second-year undergraduate students at the Rijeka University School of Medicine in the period from the 1996/97 to the 2000/01 academic year. After passing the Medical Informatics exam, students filled out an anonymous questionnaire about their attitude toward learning medical informatics. They were asked to grade the course organization and curriculum content, and to provide their date of birth; sex; study year; high school grades; Medical Informatics examination grade, type, and term; and to describe their computer habits. From these data, we determined their zodiac signs and biorhythms. Data were compared by the use of the t-test, one-way ANOVA with Tukey's honestly significant difference test, and randomized complete block design ANOVA. Out of the 21 variables analyzed, only 10 correlated with the average grade. Students taking the Medical Informatics examination in the 1998/99 academic year earned a lower average grade than any other generation. Significantly higher Medical Informatics exam grades were earned by students who finished a grammar high school; owned and regularly used a computer, the Internet, and e-mail (p ≤ 0.002 for all items); passed the oral exam without taking a written test (p=0.004); or did not repeat the exam (p<0.001). Better high-school students and students with better grades from the high-school informatics course also scored significantly better (p=0.032 and p<0.001, respectively). Grade in high-school mathematics, student's sex, and the time of year when the examination was taken were not related to the grade, and neither were pseudoscientific parameters such as the student's zodiac sign, zodiac sign quality, or biorhythm cycles, except when intentionally inadequate statistics were used for data analysis. Medical Informatics examination grades correlated with the general learning capacity and computer habits of students, but showed no relation to the other investigated parameters, such as examination term or pseudoscientific parameters. Inadequate statistical analysis can always confirm false conclusions.

  2. Conducting Simulation Studies in the R Programming Environment.

    PubMed

    Hallgren, Kevin A

    2013-10-12

    Simulation studies allow researchers to answer specific questions about data analysis, statistical power, and best-practices for obtaining accurate results in empirical research. Despite the benefits that simulation research can provide, many researchers are unfamiliar with available tools for conducting their own simulation studies. The use of simulation studies need not be restricted to researchers with advanced skills in statistics and computer programming, and such methods can be implemented by researchers with a variety of abilities and interests. The present paper provides an introduction to methods used for running simulation studies using the R statistical programming environment and is written for individuals with minimal experience running simulation studies or using R. The paper describes the rationale and benefits of using simulations and introduces R functions relevant for many simulation studies. Three examples illustrate different applications for simulation studies, including (a) the use of simulations to answer a novel question about statistical analysis, (b) the use of simulations to estimate statistical power, and (c) the use of simulations to obtain confidence intervals of parameter estimates through bootstrapping. Results and fully annotated syntax from these examples are provided.
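
    The paper's examples are written in R; purely to keep the code in this collection in one language, the bootstrap confidence-interval idea from example (c) is sketched in Python below, with a hypothetical skewed sample.

      import numpy as np

      rng = np.random.default_rng(0)
      data = rng.exponential(scale=2.0, size=80)    # hypothetical skewed sample

      # percentile bootstrap for the sample median: resample with replacement,
      # recompute the statistic, and take empirical quantiles
      n_boot = 10_000
      medians = np.array([np.median(rng.choice(data, size=data.size, replace=True))
                          for _ in range(n_boot)])
      lo, hi = np.percentile(medians, [2.5, 97.5])
      print(f"median = {np.median(data):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")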

  3. Adaptive firefly algorithm: parameter analysis and its application.

    PubMed

    Cheung, Ngaam J; Ding, Xue-Ming; Shen, Hong-Bin

    2014-01-01

    As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate parameter selection and adaptation strategies in a modified firefly algorithm, the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa, including (1) a distance-based light absorption coefficient; (2) a gray coefficient enabling fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated over widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to a real-world problem, protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4 Å and 1.5 Å from the native constraints under noise-free and 10% Gaussian white noise conditions, respectively.
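
    For readers new to FA, a minimal baseline implementation on a sphere function is sketched below; it uses one simple decaying randomization schedule (one of many the paper compares) and, to stay short, recomputes brightness only once per sweep. All constants are conventional defaults, not AdaFa's adapted values.

      import numpy as np

      rng = np.random.default_rng(0)

      def sphere(x):
          return np.sum(x ** 2, axis=-1)

      def firefly(f, dim=5, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2):
          X = rng.uniform(-5, 5, size=(n, dim))
          for t in range(iters):
              intensity = -f(X)                  # brighter = lower cost
              alpha_t = alpha * 0.97 ** t        # one simple randomization schedule
              for i in range(n):
                  for j in range(n):
                      if intensity[j] > intensity[i]:
                          # attraction decays with squared distance
                          r2 = np.sum((X[i] - X[j]) ** 2)
                          beta = beta0 * np.exp(-gamma * r2)
                          X[i] += beta * (X[j] - X[i]) + alpha_t * (rng.random(dim) - 0.5)
          best = X[np.argmin(f(X))]
          return best, f(best)

      print(firefly(sphere))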

  4. Adaptive Firefly Algorithm: Parameter Analysis and its Application

    PubMed Central

    Shen, Hong-Bin

    2014-01-01

    As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate parameter selection and adaptation strategies in a modified firefly algorithm, the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa, including (1) a distance-based light absorption coefficient; (2) a gray coefficient enabling fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated over widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to a real-world problem, protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4 Å and 1.5 Å from the native constraints under noise-free and 10% Gaussian white noise conditions, respectively. PMID:25397812

  5. Joint inversion of marine seismic AVA and CSEM data using statistical rock-physics models and Markov random fields: Stochastic inversion of AVA and CSEM data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, J.; Hoversten, G.M.

    2011-09-15

    Joint inversion of seismic AVA and CSEM data requires rock-physics relationships to link seismic attributes to electrical properties. Ideally, we can connect them through reservoir parameters (e.g., porosity and water saturation) by developing physics-based models, such as Gassmann’s equations and Archie’s law, using nearby borehole logs. This could be difficult in the exploration stage because the information available is typically insufficient for choosing suitable rock-physics models and for subsequently obtaining reliable estimates of the associated parameters. The use of improper rock-physics models and inaccuracy in the estimates of model parameters may cause misleading inversion results. Conversely, it is easy to derive statistical relationships among seismic and electrical attributes and reservoir parameters from distant borehole logs. In this study, we develop a Bayesian model to jointly invert seismic AVA and CSEM data for reservoir parameter estimation using statistical rock-physics models; the spatial dependence of geophysical and reservoir parameters is captured by lithotypes through Markov random fields. We apply the developed model to a synthetic case, which simulates a CO2 monitoring application. We derive statistical rock-physics relations from borehole logs at one location and estimate the seismic P- and S-wave velocity ratio, acoustic impedance, density, electrical resistivity, lithotypes, porosity, and water saturation at three different locations by conditioning to seismic AVA and CSEM data. Comparison of the inversion results with their corresponding true values shows that the correlation-based statistical rock-physics models provide significant information for improving the joint inversion results.

  6. Variability-aware compact modeling and statistical circuit validation on SRAM test array

    NASA Astrophysics Data System (ADS)

    Qiao, Ying; Spanos, Costas J.

    2016-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated using a collaborating foundry's 28nm FDSOI technology. Our in-house customized Monte Carlo simulation bench can incorporate these statistical compact models, and simulation results on SRAM writability performance are very close to measurements in distribution estimation. Our proposed statistical compact model parameter extraction methodology also has the potential of predicting non-Gaussian behavior in statistical circuit performances through mixtures of Gaussian distributions.
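
    The closing idea, capturing non-Gaussian parameter scatter with a mixture of Gaussians, looks like this in practice; the bimodal threshold-voltage sample below is synthetic, and BIC is used to pick the number of components.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # synthetic threshold-voltage extractions with a bimodal, non-Gaussian
      # component, e.g. two populations of bit transistors
      vth = np.concatenate([rng.normal(0.45, 0.015, 4000),
                            rng.normal(0.50, 0.020, 1000)]).reshape(-1, 1)

      # choose the number of mixture components by BIC
      best = min((GaussianMixture(k, random_state=0).fit(vth) for k in (1, 2, 3)),
                 key=lambda m: m.bic(vth))
      print(best.n_components, best.means_.ravel(),
            np.sqrt(best.covariances_).ravel())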

  7. Ensemble engineering and statistical modeling for parameter calibration towards optimal design of microbial fuel cells

    NASA Astrophysics Data System (ADS)

    Sun, Hongyue; Luo, Shuai; Jin, Ran; He, Zhen

    2017-07-01

    Mathematical modeling is an important tool for investigating the performance of microbial fuel cells (MFCs) towards their optimized design. To overcome the shortcomings of traditional MFC models, an ensemble model is developed in this study by integrating an engineering model with statistical analytics for extrapolation scenarios. Such an ensemble model can reduce the labor of parameter calibration and requires fewer measurement data to achieve accuracy comparable to a traditional statistical model in both the normal and extreme operation regions. Based on different weightings of current generation and organic removal efficiency, the ensemble model can recommend input factor settings that achieve the best current generation and organic removal efficiency. The model predicts a set of optimal design factors for the present tubular MFCs, including an anode flow rate of 3.47 mL min⁻¹, an organic concentration of 0.71 g L⁻¹, and a catholyte pumping flow rate of 14.74 mL min⁻¹, to achieve a peak current of 39.2 mA. To maintain 100% organic removal efficiency, the anode flow rate and organic concentration should be kept below 1.04 mL min⁻¹ and 0.22 g L⁻¹, respectively. The developed ensemble model can potentially be modified to model other types of MFCs or bioelectrochemical systems.
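
    The statistical half of such a calibration can be sketched with a quadratic response surface fitted to designed-experiment data and searched for its optimum. This is only an assumed stand-in for the paper's ensemble model: the data below are synthetic and the engineering model is not reproduced.

    ```python
    # Fit a quadratic response surface to (flow, concentration) vs. current,
    # then grid-search for the predicted peak. All values are synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    flow = rng.uniform(0.5, 5.0, 40)          # anode flow rate (mL/min)
    conc = rng.uniform(0.1, 1.0, 40)          # organic concentration (g/L)
    # Synthetic response with an interior optimum plus noise.
    current = 40 - 3*(flow - 3.5)**2 - 25*(conc - 0.7)**2 + rng.normal(0, 0.5, 40)

    A = np.column_stack([np.ones_like(flow), flow, conc,
                         flow**2, conc**2, flow*conc])
    coef, *_ = np.linalg.lstsq(A, current, rcond=None)

    F, C = np.meshgrid(np.linspace(0.5, 5, 200), np.linspace(0.1, 1, 200))
    pred = (coef[0] + coef[1]*F + coef[2]*C +
            coef[3]*F**2 + coef[4]*C**2 + coef[5]*F*C)
    k = np.unravel_index(np.argmax(pred), pred.shape)
    print(f"predicted peak current {pred[k]:.1f} mA at "
          f"flow={F[k]:.2f} mL/min, conc={C[k]:.2f} g/L")
    ```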

  8. Statistics of partially-polarized fields: beyond the Stokes vector and coherence matrix

    NASA Astrophysics Data System (ADS)

    Charnotskii, Mikhail

    2017-08-01

    Traditionally, partially polarized light is characterized by the four Stokes parameters. An equivalent description is also provided by the correlation tensor of the optical field. These statistics specify only the second moments of the complex amplitudes of the narrow-band two-dimensional electric field of the optical wave. The electric field vector of a random quasi-monochromatic wave is a nonstationary, oscillating, two-dimensional real random variable. We introduce a novel statistical description of these partially polarized waves: the Period-Averaged Probability Density Function (PA-PDF) of the field. The PA-PDF contains more information on the polarization state of the field than the Stokes vector. In particular, in addition to the conventional distinction between the polarized and depolarized components of the field, the PA-PDF allows the coherent and fluctuating components of the field to be separated. We present several model examples of fields with identical Stokes vectors and very distinct shapes of the PA-PDF. In the simplest case of a nonstationary, oscillating normal 2-D probability distribution of the real electric field and a stationary 4-D probability distribution of the complex amplitudes, the newly introduced PA-PDF is determined by 13 parameters that include the first moments and covariance matrix of the quadrature components of the oscillating vector field.

  9. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 1: Methodology and applications

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  10. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  11. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  12. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  13. Canopy reflectance modelling of semiarid vegetation

    NASA Technical Reports Server (NTRS)

    Franklin, Janet

    1994-01-01

    Three different types of remote sensing algorithms for estimating vegetation amount and other land surface biophysical parameters were tested for semiarid environments. These included statistical linear models, the Li-Strahler geometric-optical canopy model, and linear spectral mixture analysis. The two study areas were the National Science Foundation's Jornada Long Term Ecological Research site near Las Cruces, NM, in the northern Chihuahuan desert, and the HAPEX-Sahel site near Niamey, Niger, in West Africa, comprising semiarid rangeland and subtropical crop land. The statistical approach (simple and multiple regression) resulted in high correlations between SPOT satellite spectral reflectance and shrub and grass cover, although these correlations varied with the spatial scale of aggregation of the measurements. The Li-Strahler model produced estimates of shrub size and density for both study sites, with large standard errors. In the Jornada, the estimates were accurate enough to be useful for characterizing structural differences among three shrub strata. In Niger, the range of shrub cover and size in short-fallow shrublands is so low that the necessity of spatially distributed estimation of shrub size and density is questionable. Spectral mixture analysis of multiscale, multitemporal, multispectral radiometer data and imagery for Niger showed a positive relationship between fractions of spectral endmembers and surface parameters of interest including soil cover, vegetation cover, and leaf area index.

  14. On the effect of model parameters on forecast objects

    NASA Astrophysics Data System (ADS)

    Marzban, Caren; Jones, Corinne; Li, Ning; Sandgathe, Scott

    2018-04-01

    Many physics-based numerical models produce a gridded, spatial field of forecasts, e.g., a temperature map. The field for some quantities generally consists of spatially coherent and disconnected objects. Such objects arise in many problems, including precipitation forecasts in atmospheric models, eddy currents in ocean models, and models of forest fires. Certain features of these objects (e.g., location, size, intensity, and shape) are generally of interest. Here, a methodology is developed for assessing the impact of model parameters on the features of forecast objects. The main ingredients of the methodology include the use of (1) Latin hypercube sampling for varying the values of the model parameters, (2) statistical clustering algorithms for identifying objects, (3) multivariate multiple regression for assessing the impact of multiple model parameters on the distribution (across the forecast domain) of object features, and (4) methods for reducing the number of hypothesis tests and controlling the resulting errors. The final output of the methodology is a series of box plots and confidence intervals that visually display the sensitivities. The methodology is demonstrated on precipitation forecasts from a mesoscale numerical weather prediction model.
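
    Step (1) of the methodology can be sketched in a few lines with SciPy's quasi-Monte Carlo module; the two model parameters and their bounds below are hypothetical.

    ```python
    # Latin hypercube sampling of model-parameter space (scipy >= 1.7).
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=2, seed=0)
    unit = sampler.random(n=50)                     # 50 samples in [0, 1)^2
    # Assumed bounds for two hypothetical parameters of the forecast model.
    samples = qmc.scale(unit, l_bounds=[0.1, 1e-4], u_bounds=[1.0, 1e-2])
    print(samples[:3])
    ```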

  15. The determination of the acoustic parameters of volcanic rocks from compressional velocity measurements

    USGS Publications Warehouse

    Carroll, R.D.

    1969-01-01

    A statistical analysis was made of the relationship of various acoustic parameters of volcanic rocks to compressional wave velocities for data obtained in a volcanic region in Nevada. Some additional samples, chiefly granitic rocks, were also included in the study to extend the range of parameters and the variety of siliceous rock types sampled. Laboratory acoustic measurements obtained on 62 dry core samples were grouped with similar measurements obtained from geophysical logging devices at several depth intervals in a hole from which 15 of the core samples had been obtained. The effects of lithostatic and hydrostatic load in changing the rock acoustic parameters measured in the hole were noticeable when compared with the laboratory measurements on the same core. The results of the analyses determined by grouping all of the data, however, indicate that the dynamic Young's, shear, and bulk moduli, the shear velocity, the shear and compressional characteristic impedances, and the amplitude and energy reflection coefficients may be reliably estimated on the basis of the compressional wave velocities of the rocks investigated. Less precise estimates can be made of density based on the rock compressional velocity. The possible extension of these relationships to include many siliceous rocks is suggested. © 1969.

  16. TU-FG-201-09: Predicting Accelerator Dysfunction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Able, C; Nguyen, C; Baydush, A

    Purpose: To develop an integrated statistical process control (SPC) framework using digital performance and component data accumulated within the accelerator system that can detect dysfunction prior to unscheduled downtime. Methods: Seven digital accelerators were monitored for 12 to 18 months. The accelerators were operated in a 'run to failure' mode, with the individual institutions determining when service would be initiated. Institutions were required to submit detailed service reports. Trajectory and text log files resulting from a robust daily VMAT QA delivery were decoded and evaluated using Individual and Moving Range (I/MR) control charts. The SPC evaluation was presented in a customized dashboard interface that allows the user to review 525 monitored parameters (480 MLC parameters). Chart limits were calculated using a hybrid technique that includes the standard SPC 3σ limits and an empirical factor based on the parameter/system specification. The individual (I) grand mean values and control limit ranges of the I/MR charts of all accelerators were compared using statistical (ranked analysis of variance (ANOVA)) and graphical analyses to determine the consistency of operating parameters. Results: When an alarm or warning was directly connected to field service, process control charts predicted dysfunction consistently on beam-generation-related parameters (BGP): RF Driver Voltage, Gun Grid Voltage, and Forward Power (W); beam uniformity parameters: angle and position steering coil currents; and the gantry position accuracy parameter: cross-correlation max-value. Control charts for individual MLC cross-correlation max-value/position detected 50% to 60% of MLCs serviced prior to dysfunction or failure. In general, non-random changes were detected 5 to 80 days prior to a service intervention. The ANOVA comparison of BGP determined that each accelerator parameter operated at a distinct value. Conclusion: The SPC framework shows promise. Long-term monitoring coordinated with service will be required to definitively determine the effectiveness of the model. Varian Medical Systems, Inc. provided funding in support of the research presented.
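
    As a rough illustration of the I/MR charting described above, the sketch below computes the textbook 3σ limits (2.66 = 3/d2 and 3.267 = D4 for subgroups of size 2); the paper's hybrid limits with empirical specification factors are not reproduced, and the readings are invented.

    ```python
    # Individual/Moving-Range (I/MR) chart limits with standard SPC constants.
    import numpy as np

    def imr_limits(x):
        x = np.asarray(x, float)
        mr = np.abs(np.diff(x))            # moving ranges of consecutive points
        mr_bar = mr.mean()
        center = x.mean()
        i_lcl, i_ucl = center - 2.66 * mr_bar, center + 2.66 * mr_bar
        mr_ucl = 3.267 * mr_bar
        return (i_lcl, center, i_ucl), mr_ucl

    # e.g. a daily "RF driver voltage" reading (synthetic values)
    readings = [5.02, 5.01, 5.03, 5.00, 5.02, 5.04, 5.01, 5.09]
    (i_lcl, center, i_ucl), mr_ucl = imr_limits(readings[:-1])
    print("alarm on latest point:", not i_lcl <= readings[-1] <= i_ucl)
    ```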

  17. Bayesian uncertainty analysis for complex systems biology models: emulation, global parameter searches and evaluation of gene functions.

    PubMed

    Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith

    2018-01-02

    Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. It both provides insight into model behaviour and identifies the sets of rate parameters of interest.
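
    The core of a history-matching iteration is an implausibility test on each candidate parameter setting. The sketch below shows this under assumed variances and a toy stand-in emulator; the 3-sigma cutoff is a common convention, not necessarily the paper's choice.

    ```python
    # History-matching sketch: reject parameter settings whose emulated output
    # is implausibly far from the observed value. All numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(9)

    z, v_obs = 1.8, 0.05**2                      # observed trend value, obs. variance

    def emulator(theta):
        # Stand-in for a trained emulator: returns (mean, variance) of model output.
        return 2.0 * theta[0] - 0.5 * theta[1], 0.02**2

    thetas = rng.uniform(0, 1, size=(10000, 2))  # candidate rate parameters
    keep = []
    for th in thetas:
        mu, v_em = emulator(th)
        impl = abs(z - mu) / np.sqrt(v_obs + v_em)   # implausibility measure
        if impl < 3.0:                               # conventional 3-sigma cutoff
            keep.append(th)
    print(f"{len(keep)} of {len(thetas)} parameter settings remain non-implausible")
    ```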

  18. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumes that reaction properties in a sediment, including surface area, reactive site concentration, reaction rate, and extent, can be predicted from the field-scale grain size distribution by linearly adding the reaction properties of individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants, using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model in predicting rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The results indicated that the additivity model provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions had to be simulated in order to apply it. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated; it provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2–8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to U(VI) desorption in the sediment.
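
    Numerically, the additivity idea amounts to a mass-fraction-weighted sum over grain-size fractions, as in this minimal sketch; all values are invented, not the paper's data.

    ```python
    # Composite-sediment properties as mass-fraction-weighted sums of the
    # properties of individual grain-size fractions (illustrative values).
    import numpy as np

    mass_frac = np.array([0.15, 0.35, 0.30, 0.20])          # fractions (sum to 1)
    site_conc = np.array([4.1, 2.6, 1.2, 0.3])              # reactive sites (umol/g)
    rate_k    = np.array([2.0e-3, 1.1e-3, 6.0e-4, 2.5e-4])  # rate constants (1/h)

    composite_sites = mass_frac @ site_conc   # additivity works well for such totals
    composite_k     = mass_frac @ rate_k      # direct rate scaling is only approximate
    print(composite_sites, composite_k)
    ```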

  19. A statistical study of decaying kink oscillations detected using SDO/AIA

    NASA Astrophysics Data System (ADS)

    Goddard, C. R.; Nisticò, G.; Nakariakov, V. M.; Zimovets, I. V.

    2016-01-01

    Context. Despite intensive studies of kink oscillations of coronal loops in the last decade, a large-scale statistically significant investigation of the oscillation parameters has not been made using data from the Solar Dynamics Observatory (SDO). Aims: We carry out a statistical study of kink oscillations using extreme ultraviolet imaging data from a previously compiled catalogue. Methods: We analysed 58 kink oscillation events observed by the Atmospheric Imaging Assembly (AIA) on board SDO during its first four years of operation (2010-2014). Parameters of the oscillations, including the initial apparent amplitude, period, length of the oscillating loop, and damping are studied for 120 individual loop oscillations. Results: Analysis of the initial loop displacement and oscillation amplitude leads to the conclusion that the initial loop displacement prescribes the initial amplitude of oscillation in general. The period is found to scale with the loop length, and a linear fit of the data cloud gives a kink speed of Ck = (1330 ± 50) km s⁻¹. The main body of the data corresponds to kink speeds in the range Ck = (800–3300) km s⁻¹. Measurements of 52 exponential damping times were made, and it was noted that at least 21 of the damping profiles may be better approximated by a combination of non-exponential and exponential profiles rather than a purely exponential damping envelope. There are nine additional cases where the profile appears to be purely non-exponential and no damping time was measured. A scaling of the exponential damping time with the period is found, following the previously established linear scaling between these two parameters.
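
    The period-length scaling can be sketched as a zero-intercept regression: for a fundamental kink mode P = 2L/Ck, so Ck = 2/slope. The numbers below are invented stand-ins for the catalogue data.

    ```python
    # Kink speed from a least-squares fit of period vs. loop length through origin.
    import numpy as np

    L = np.array([220, 310, 150, 400, 270]) * 1e3   # loop lengths (km)
    P = np.array([5.4, 7.8, 3.9, 10.1, 6.6]) * 60   # periods (s)

    slope = (L @ P) / (L @ L)                       # least squares through the origin
    print(f"kink speed ~ {2 / slope:.0f} km/s")
    ```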

  20. Corneal biomechanical parameters and intraocular pressure: the effect of topical anesthesia

    PubMed Central

    Ogbuehi, Kelechi C

    2012-01-01

    Background The intraocular pressures and biomechanical parameters measured by the ocular response analyzer make the analyzer a useful tool for the diagnosis and management of anterior segment disease. This observational study was designed to investigate the effect of topical anesthesia on the parameters measured by the ocular response analyzer: corneal hysteresis, corneal resistance factor, Goldmann-correlated intraocular pressure (IOPg), and corneal-compensated intraocular pressure (IOPcc). Methods Two sets of measurements were made for 78 eyes of 39 subjects, approximately 1 week apart. In session 1, each eye of each subject was randomized into one of three groups: polyvinyl alcohol (0.5%), tetracaine hydrochloride (0.5%), or oxybuprocaine hydrochloride (0.4%). In session 2, eyes that were in the polyvinyl alcohol group in session 1 were assigned to the tetracaine group, those in the tetracaine group in session 1 were assigned to the oxybuprocaine group, and those in the oxybuprocaine group in session 1 were assigned to the polyvinyl alcohol group. For both sessions, each subject first had his or her central corneal thickness assessed with a specular microscope, followed by measurements of intraocular pressure and corneal biomechanical parameters with the Ocular Response Analyzer. All measurements were repeated for 2 minutes and 5 minutes following the instillation of either polyvinyl alcohol, tetracaine, or oxybuprocaine. The level of statistical significance was 0.05. Results Polyvinyl alcohol, tetracaine hydrochloride, and oxybuprocaine hydrochloride had no statistically significant (P > 0.05) effect on any of the biomechanical parameters of the cornea. There was no statistically significant effect on either IOPg (P > 0.05) or IOPcc (P > 0.05) 2 minutes after the eye drops were instilled in either session. Five minutes after the eye drops were instilled, polyvinyl alcohol showed no statistically significant effect on either IOPg (P > 0.05) or IOPcc (P > 0.05) in either session. Oxybuprocaine and tetracaine caused statistically significant (P < 0.05) reductions in IOPg in session 1, but only tetracaine had a significant (P < 0.05) effect in session 2. Tetracaine also caused a statistically significant (P < 0.05) reduction in IOPcc in session 1. Conclusion The statistically significant effect of topical anesthesia on IOPg varies with the anesthetic used, and while this effect was statistically significant in this study, the small effect is probably not clinically relevant. There was no effect on any of the biomechanical parameters of the cornea. PMID:22791966

  1. Corneal biomechanical parameters and intraocular pressure: the effect of topical anesthesia.

    PubMed

    Ogbuehi, Kelechi C

    2012-01-01

    The intraocular pressures and biomechanical parameters measured by the ocular response analyzer make the analyzer a useful tool for the diagnosis and management of anterior segment disease. This observational study was designed to investigate the effect of topical anesthesia on the parameters measured by the ocular response analyzer: corneal hysteresis, corneal resistance factor, Goldmann-correlated intraocular pressure (IOPg), and corneal-compensated intraocular pressure (IOPcc). Two sets of measurements were made for 78 eyes of 39 subjects, approximately 1 week apart. In session 1, each eye of each subject was randomized into one of three groups: polyvinyl alcohol (0.5%), tetracaine hydrochloride (0.5%), or oxybuprocaine hydrochloride (0.4%). In session 2, eyes that were in the polyvinyl alcohol group in session 1 were assigned to the tetracaine group, those in the tetracaine group in session 1 were assigned to the oxybuprocaine group, and those in the oxybuprocaine group in session 1 were assigned to the polyvinyl alcohol group. For both sessions, each subject first had his or her central corneal thickness assessed with a specular microscope, followed by measurements of intraocular pressure and corneal biomechanical parameters with the Ocular Response Analyzer. All measurements were repeated for 2 minutes and 5 minutes following the instillation of either polyvinyl alcohol, tetracaine, or oxybuprocaine. The level of statistical significance was 0.05. Polyvinyl alcohol, tetracaine hydrochloride, and oxybuprocaine hydrochloride had no statistically significant (P > 0.05) effect on any of the biomechanical parameters of the cornea. There was no statistically significant effect on either IOPg (P > 0.05) or IOPcc (P > 0.05) 2 minutes after the eye drops were instilled in either session. Five minutes after the eye drops were instilled, polyvinyl alcohol showed no statistically significant effect on either IOPg (P > 0.05) or IOPcc (P > 0.05) in either session. Oxybuprocaine and tetracaine caused statistically significant (P < 0.05) reductions in IOPg in session 1, but only tetracaine had a significant (P < 0.05) effect in session 2. Tetracaine also caused a statistically significant (P < 0.05) reduction in IOPcc in session 1. The statistically significant effect of topical anesthesia on IOPg varies with the anesthetic used, and while this effect was statistically significant in this study, the small effect is probably not clinically relevant. There was no effect on any of the biomechanical parameters of the cornea.

  2. Effect of ultrasound frequency on the Nakagami statistics of human liver tissues.

    PubMed

    Tsui, Po-Hsiang; Zhou, Zhuhuang; Lin, Ying-Hsiu; Hung, Chieh-Ming; Chung, Shih-Jou; Wan, Yung-Liang

    2017-01-01

    The analysis of the backscattered statistics using the Nakagami parameter is an emerging ultrasound technique for assessing hepatic steatosis and fibrosis. Previous studies indicated that the echo amplitude distribution of a normal liver follows the Rayleigh distribution (the Nakagami parameter m is close to 1). However, using different frequencies may change the backscattered statistics of normal livers. This study explored the frequency dependence of the backscattered statistics in human livers and then discussed the sources of ultrasound scattering in the liver. A total of 30 healthy participants were enrolled to undergo a standard care ultrasound examination of the liver, which is a natural model containing diffuse and coherent scatterers. The liver of each volunteer was scanned from the right intercostal view to obtain image raw data at different central frequencies ranging from 2 to 3.5 MHz. Phantoms with diffuse scatterers only were also made to perform ultrasound scanning using the same protocol for comparison with the clinical data. The Nakagami parameter-frequency correlation was evaluated using Pearson correlation analysis. The median and interquartile range of the Nakagami parameter obtained from livers was 1.00 (0.98-1.05) for 2 MHz, 0.93 (0.89-0.98) for 2.3 MHz, 0.87 (0.84-0.92) for 2.5 MHz, 0.82 (0.77-0.88) for 3.3 MHz, and 0.81 (0.76-0.88) for 3.5 MHz. The Nakagami parameter decreased with increasing central frequency (r = -0.67, p < 0.0001). However, the effect of ultrasound frequency on the statistical distribution of the backscattered envelopes was not found in the phantom results (r = -0.147, p = 0.0727). The current results demonstrated that the backscattered statistics of normal livers are frequency-dependent. Moreover, coherent scatterers may be the primary factor dominating the frequency dependence of the backscattered statistics in the liver.

  3. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  4. Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.

    PubMed

    Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L

    2008-04-01

    The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdf). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 × 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (σ²/μ) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved sharpness of the edges. In conclusion, a trade-off between preservation of sharpness of edges and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.

  5. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    PubMed

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-05-01

    The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. Various estimators may affect the Nakagami parameter in the detection of changes in backscattered statistics. In particular, the moment-based estimator (MBE) and the maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters using the MBE, the first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparison. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect the physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
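
    A minimal sketch of the two estimator families follows, using the fact that the intensity R² of a Nakagami envelope is gamma distributed with shape m. The closed-form Greenwood-Durand-type approximation below is an assumption and may differ in detail from the paper's MLEgw.

    ```python
    # Moment-based vs. approximate-MLE estimation of the Nakagami m parameter.
    import numpy as np

    rng = np.random.default_rng(3)
    m_true, omega = 0.8, 1.0
    intensity = rng.gamma(shape=m_true, scale=omega / m_true, size=5000)  # R^2
    envelope = np.sqrt(intensity)

    def m_mbe(r):
        i = r**2
        return i.mean()**2 / i.var()            # moment-based estimator

    def m_mle_approx(r):
        i = r**2                                 # gamma-shape MLE approximation
        s = np.log(i.mean()) - np.log(i).mean()
        return (3 - s + np.sqrt((s - 3)**2 + 24*s)) / (12*s)

    print(f"MBE: {m_mbe(envelope):.3f}, approx. MLE: {m_mle_approx(envelope):.3f}")
    ```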

  6. The Statistical Power of the Cluster Randomized Block Design with Matched Pairs--A Simulation Study

    ERIC Educational Resources Information Center

    Dong, Nianbo; Lipsey, Mark

    2010-01-01

    This study uses simulation techniques to examine the statistical power of the group-randomized design and the matched-pair (MP) randomized block design under various parameter combinations. Both nearest neighbor matching and random matching are used for the MP design. The power of each design for any parameter combination was calculated from…
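
    A bare-bones version of such a power simulation is sketched below, with independently drawn (i.e., randomly matched) cluster effects and a paired t-test on cluster means; all parameter values are illustrative, not the study's settings.

    ```python
    # Monte Carlo power for a matched-pair cluster-randomized design.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    def power(n_pairs=10, m=20, icc=0.10, effect=0.3, reps=2000, alpha=0.05):
        tau2, sig2 = icc, 1 - icc          # between/within variance (total = 1)
        hits = 0
        for _ in range(reps):
            u = rng.normal(0, np.sqrt(tau2), (n_pairs, 2))           # cluster effects
            ybar = u + rng.normal(0, np.sqrt(sig2 / m), (n_pairs, 2))  # cluster means
            ybar[:, 1] += effect           # treated cluster in each pair
            d = ybar[:, 1] - ybar[:, 0]
            if stats.ttest_1samp(d, 0).pvalue < alpha:
                hits += 1
        return hits / reps

    print("estimated power:", power())
    ```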

  7. Inference of reaction rate parameters based on summary statistics from experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain-branching reaction H + O2 → OH + O. The available published data are in the form of summary statistics, namely nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
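
    The Gauss-Hermite moment computation mentioned above can be sketched for a toy observable: the mean of an Arrhenius-like rate k(θ) = exp(θ) under a Gaussian density on θ (the μ and σ values are invented).

    ```python
    # Gauss-Hermite quadrature for E[exp(theta)] with theta ~ N(mu, sigma^2).
    import numpy as np

    mu, sigma = -1.0, 0.3
    x, w = np.polynomial.hermite.hermgauss(20)      # nodes/weights for weight e^{-x^2}
    mean_k = (w * np.exp(mu + np.sqrt(2) * sigma * x)).sum() / np.sqrt(np.pi)
    print(mean_k, np.exp(mu + sigma**2 / 2))        # matches the lognormal mean
    ```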

  8. Inference of reaction rate parameters based on summary statistics from experiments

    DOE PAGES

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin; ...

    2016-10-15

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain-branching reaction H + O2 → OH + O. The available published data are in the form of summary statistics, namely nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.

  9. Applications of the DOE/NASA wind turbine engineering information system

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.; Spera, D. A.

    1981-01-01

    A statistical analysis of data obtained from the Technology and Engineering Information Systems was made. The systems analyzed consist of the following elements: (1) sensors which measure critical parameters (e.g., wind speed and direction, output power, blade loads and component vibrations); (2) remote multiplexing units (RMUs) on each wind turbine which frequency-modulate, multiplex and transmit sensor outputs; (3) on-site instrumentation to record, process and display the sensor output; and (4) statistical analysis of data. Two examples of the capabilities of these systems are presented. The first illustrates the standardized format for application of statistical analysis to each directly measured parameter. The second shows the use of a model to estimate the variability of the rotor thrust loading, which is a derived parameter.

  10. A consistent framework for Horton regression statistics that leads to a modified Hack's law

    USGS Publications Warehouse

    Furey, P.R.; Troutman, B.M.

    2008-01-01

    A statistical framework is introduced that resolves important problems with the interpretation and use of traditional Horton regression statistics. The framework is based on a univariate regression model that leads to an alternative expression for the Horton ratio, connects Horton regression statistics to distributional simple scaling, and improves the accuracy in estimating Horton plot parameters. The model is used to examine data for drainage area A and mainstream length L from two groups of basins located in different physiographic settings. Results show that confidence intervals for the Horton plot regression statistics are quite wide. Nonetheless, an analysis of covariance shows that regression intercepts, but not regression slopes, can be used to distinguish between basin groups. The univariate model is generalized to include n > 1 dependent variables. For the case where the dependent variables represent ln A and ln L, the generalized model performs somewhat better at distinguishing between basin groups than two separate univariate models. The generalized model leads to a modification of Hack's law where L depends on both A and Strahler order ω. Data show that ω plays a statistically significant role in the modified Hack's law expression. © 2008 Elsevier B.V.
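
    The modified Hack's law fit reduces to an ordinary least-squares regression of ln L on ln A and Strahler order, as in this sketch with invented basin data; the paper's generalized multivariate model is more elaborate.

    ```python
    # OLS fit of ln L = a + b*ln A + c*omega (modified Hack's law form).
    import numpy as np

    A     = np.array([12, 55, 140, 620, 2400.])   # drainage areas (km^2)
    Lng   = np.array([6.1, 14, 22, 52, 110.])     # mainstream lengths (km)
    omega = np.array([2, 3, 3, 4, 5])             # Strahler order

    X = np.column_stack([np.ones_like(A), np.log(A), omega])
    coef, *_ = np.linalg.lstsq(X, np.log(Lng), rcond=None)
    print("intercept, ln-A exponent, omega coefficient:", coef)
    ```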

  11. Identifying the Source of Misfit in Item Response Theory Models.

    PubMed

    Liu, Yang; Maydeu-Olivares, Alberto

    2014-01-01

    When an item response theory model fails to fit adequately, the items for which the model provides a good fit and those for which it does not must be determined. To this end, we compare the performance of several fit statistics for item pairs with known asymptotic distributions under maximum likelihood estimation of the item parameters: (a) a mean and variance adjustment to the bivariate Pearson's X², (b) a bivariate subtable analog to Reiser's (1996) overall goodness-of-fit test, (c) a z statistic for the bivariate residual cross product, and (d) Maydeu-Olivares and Joe's (2006) M2 statistic applied to bivariate subtables. The unadjusted Pearson's X² with heuristically determined degrees of freedom is also included in the comparison. For binary and ordinal data, our simulation results suggest that the z statistic has the best Type I error and power behavior among all the statistics under investigation when the observed information matrix is used in its computation. However, if one has to use the cross-product information, the mean and variance adjusted X² is recommended. We illustrate the use of pairwise fit statistics in 2 real-data examples and discuss possible extensions of the current research in various directions.

  12. Active contours on statistical manifolds and texture segmentation

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman

    2005-01-01

    A new approach to active contours on statistical manifolds is presented. The statistical manifolds are 2-dimensional Riemannian manifolds that are statistically defined by maps that transform a parameter domain onto a set of probability density functions. In this novel framework, color or texture features are measured at each image point and their statistical...

  13. Active contours on statistical manifolds and texture segmentation

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman

    2005-01-01

    A new approach to active contours on statistical manifolds is presented. The statistical manifolds are 2-dimensional Riemannian manifolds that are statistically defined by maps that transform a parameter domain onto a set of probability density functions. In this novel framework, color or texture features are measured at each image point and their statistical...

  14. Urticaceae pollen concentration in the atmosphere of North Western Spain.

    PubMed

    Vega-Maray, Ana Maria; Valencia-Barrera, Rosa; Fernandez-Gonzalez, Delia; Fraile, Roberto

    2003-01-01

    Plants of the Urticaceae family can develop into a pest on soils enriched with nitrogen. Urticaceae pollen is a biohazard because it elicits severe pollinosis. Pollen grains were sampled using a Lanzoni seven-day recording trap from February 1995 to December 2000 in the atmosphere of the city of Ponferrada (Leon, North Western Spain). The Spearman test was used to analyse the statistical correlation between Urticaceae pollen and certain meteorological factors in different main pollination periods. Maximum values are reached in June and July; minimum levels are recorded in January and December. The parameters bearing the greatest positive influence on the occurrence of Urticaceae pollen grains are temperature (maximum, minimum, and mean), humidity (absolute humidity, wet-bulb temperature, dew point, and mixing ratio), and a south-western wind direction; negative influences are relative humidity, rainfall, and periods without wind. The highest correlation coefficients were obtained with temperature and wet-bulb temperature. Absolute humidity and wet-bulb temperature yielded better correlations than relative humidity; hence, these two parameters should be included in this type of study. The choice of main pollination period used in the statistical analysis influences the coefficient values. The behaviour of the pollen grains in the atmosphere during the year also influences the results.
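
    The correlation step itself is a one-liner with SciPy; the daily values below are synthetic placeholders for the pollen and weather series.

    ```python
    # Spearman rank correlation between pollen counts and a meteorological variable.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(10)
    temp = rng.uniform(5, 30, 60)                   # daily mean temperature (deg C)
    pollen = 2.5 * temp + rng.normal(0, 10, 60)     # pollen grains per m^3

    rho, p = spearmanr(temp, pollen)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
    ```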

  15. A Statistical Approach to Identify Superluminous Supernovae and Probe Their Diversity

    NASA Astrophysics Data System (ADS)

    Inserra, C.; Prajs, S.; Gutierrez, C. P.; Angus, C.; Smith, M.; Sullivan, M.

    2018-02-01

    We investigate the identification of hydrogen-poor superluminous supernovae (SLSNe I) using a photometric analysis, without including an arbitrary magnitude threshold. We assemble a homogeneous sample of previously classified SLSNe I from the literature, and fit their light curves using Gaussian processes. From the fits, we identify four photometric parameters that have a high statistical significance when correlated, and combine them in a parameter space that conveys information on their luminosity and color evolution. This parameter space presents a new definition for SLSNe I, which can be used to analyze existing and future transient data sets. We find that 90% of previously classified SLSNe I meet our new definition. We also examine the evidence for two subclasses of SLSNe I, combining their photometric evolution with spectroscopic information, namely the photospheric velocity and its gradient. A cluster analysis reveals the presence of two distinct groups. “Fast” SLSNe show fast light-curve and color evolution, large velocities, and a large velocity gradient. “Slow” SLSNe show slow light-curve and color evolution, small expansion velocities, and an almost non-existent velocity gradient. Finally, we discuss the impact of our analyses on the understanding of the powering engine of SLSNe, and their implementation as cosmological probes in current and future surveys.
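
    The light-curve fitting step can be sketched with an off-the-shelf Gaussian-process regressor; the kernel choice and the synthetic light curve below are assumptions, not the paper's settings.

    ```python
    # GP fit to a sparse, noisy light curve; photometric parameters such as the
    # peak epoch can then be read off the smooth fit.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(5)
    t = np.sort(rng.uniform(0, 100, 25))                          # days
    mag = 0.0005*(t - 30)**2 - 21 + rng.normal(0, 0.05, t.size)   # abs. magnitude

    gp = GaussianProcessRegressor(RBF(20.0) + WhiteKernel(0.05**2),
                                  normalize_y=True).fit(t[:, None], mag)
    grid = np.linspace(0, 100, 500)
    fit = gp.predict(grid[:, None])
    print("peak epoch ~ day", grid[np.argmin(fit)])   # magnitudes: minimum = peak
    ```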

  16. New generation of hydraulic pedotransfer functions for Europe

    PubMed Central

    Tóth, B; Weynants, M; Nemes, A; Makó, A; Bilas, G; Tóth, G

    2015-01-01

    A range of continental-scale soil datasets exists in Europe, with different spatial representations and based on different principles. We developed comprehensive pedotransfer functions (PTFs) for applications principally on spatial datasets with continental coverage. The PTF development included the prediction of soil water retention at various matric potentials and the prediction of parameters characterizing the soil moisture retention and hydraulic conductivity curves (MRC and HCC) of European soils. We developed PTFs with a hierarchical approach, determined by the input requirements. The PTFs were derived using three statistical methods: (i) linear regression where there were quantitative input variables, (ii) a regression tree for qualitative, quantitative, and mixed types of information, and (iii) mean statistics of developer-defined soil groups (class PTFs) when only qualitative input parameters were available. Data from the recently established European Hydropedological Data Inventory (EU-HYDI), which holds the most comprehensive geographical and thematic coverage of hydro-pedological data in Europe, were used to train and test the PTFs. The applied modelling techniques and the EU-HYDI allowed the development of hydraulic PTFs that are more reliable and applicable for a greater variety of input parameters than those previously available for Europe. Therefore, the new set of PTFs offers tailored, advanced tools for a wide range of applications on the continent. PMID:25866465
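
    The hierarchical idea can be illustrated by pairing a linear-regression PTF (quantitative inputs) with a regression-tree PTF; the training data below are synthetic placeholders, not EU-HYDI records, and the predictors are assumed.

    ```python
    # Two PTF flavors predicting water content at field capacity from texture
    # and organic carbon (all values synthetic).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(6)
    n = 300
    clay = rng.uniform(5, 60, n)           # %
    sand = rng.uniform(5, 80, n)           # %
    oc   = rng.uniform(0.2, 4, n)          # % organic carbon
    # synthetic water content at field capacity (cm3/cm3)
    wfc = 0.15 + 0.004*clay - 0.001*sand + 0.01*oc + rng.normal(0, 0.02, n)

    X = np.column_stack([clay, sand, oc])
    ptf_lin  = LinearRegression().fit(X, wfc)          # quantitative-input PTF
    ptf_tree = DecisionTreeRegressor(max_depth=4).fit(X, wfc)
    print(ptf_lin.predict([[30, 40, 1.5]]), ptf_tree.predict([[30, 40, 1.5]]))
    ```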

  17. Wireless Channel Characterization: Modeling the 5 GHz Microwave Landing System Extension Band for Future Airport Surface Communications

    NASA Technical Reports Server (NTRS)

    Matolak, D. W.; Apaza, Rafael; Foore, Lawrence R.

    2006-01-01

    We describe a recently completed wideband wireless channel characterization project for the 5 GHz Microwave Landing System (MLS) extension band, for airport surface areas. This work included mobile measurements at large and small airports, and fixed point-to-point measurements. Mobile measurements were made via transmission from the air traffic control tower (ATCT), or from an airport field site (AFS), to a receiving ground vehicle on the airport surface. The point-to-point measurements were between the ATCT and AFSs. Detailed statistical channel models were developed from all these measurements. Measured quantities include propagation path loss and power delay profiles, from which we obtain delay spreads, frequency-domain correlation (coherence bandwidths), fading amplitude statistics, and channel parameter correlations. In this paper we review the project motivation and measurement coordination, and illustrate measurement results. Example channel modeling results for several propagation conditions are also provided, highlighting new findings.

  18. SEDPAK—A comprehensive operational system and data-processing package in APPLESOFT BASIC for a settling tube, sediment analyzer

    NASA Astrophysics Data System (ADS)

    Goldbery, R.; Tehori, O.

    SEDPAK provides a comprehensive software package for the operation of a settling tube and sand analyzer (2–0.063 mm) and includes data-processing programs for statistical and graphic output of results. The programs are menu-driven and written in APPLESOFT BASIC, conforming with Apple DOS 3.3. Data storage and retrieval from disc is an important feature of SEDPAK. Additional features of SEDPAK include the condensation of raw settling data via standard size-calibration curves to yield statistical grain-size parameters, plots of grain-size frequency distributions, and cumulative log/probability curves. The program also has a module for processing grain-size frequency data from sieved samples. A further feature of SEDPAK is the option for automatic data processing and graphic output of a sequential or nonsequential array of samples on one side of a disc.
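
    The abstract does not say which grain-size statistics SEDPAK computes; the Folk and Ward graphic measures sketched here are one standard choice for condensing a cumulative curve into summary parameters (sample values invented).

    ```python
    # Folk & Ward graphic mean and inclusive graphic sorting from a cumulative
    # weight-% curve on the phi scale (synthetic sand sample).
    import numpy as np

    phi = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
    cum = np.array([2, 8, 20, 42, 65, 82, 92, 97, 100.])

    def pct(p):                      # phi value at the p-th cumulative percentile
        return np.interp(p, cum, phi)

    mean_phi = (pct(16) + pct(50) + pct(84)) / 3
    sorting = (pct(84) - pct(16)) / 4 + (pct(95) - pct(5)) / 6.6
    print(f"graphic mean {mean_phi:.2f} phi, inclusive sorting {sorting:.2f} phi")
    ```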

  19. Data communications in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-11-12

    Data communications in a parallel active messaging interface ('PAMI') of a parallel computer composed of compute nodes that execute a parallel application, each compute node including application processors that execute the parallel application and at least one management processor dedicated to gathering information regarding data communications. The PAMI is composed of data communications endpoints, each endpoint comprising a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Embodiments function by gathering call site statistics describing data communications resulting from execution of data communications instructions and identifying, in dependence upon the call site statistics, a data communications algorithm for use in executing a data communications instruction at a call site in the parallel application.

  20. OPR-PPR, a Computer Program for Assessing Data Importance to Model Predictions Using Linear Statistics

    USGS Publications Warehouse

    Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.

    2007-01-01

    The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
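
    Behind the OPR statistic is the linear prediction-variance formula. The sketch below computes the percent increase in prediction standard deviation when one observation row is omitted, assuming unit observation weights, omitting the common error variance (which cancels in the percentage), and using toy sensitivity matrices rather than MODFLOW-2000 output.

    ```python
    # Percent change in linear prediction standard deviation when an observation
    # is dropped, in the spirit of the OPR statistic.
    import numpy as np

    rng = np.random.default_rng(7)
    J = rng.normal(size=(12, 3))          # d(obs)/d(param) sensitivities (toy)
    z = rng.normal(size=3)                # d(prediction)/d(param) sensitivities

    def pred_sd(J):
        cov = np.linalg.inv(J.T @ J)      # parameter covariance, up to sigma^2
        return np.sqrt(z @ cov @ z)

    base = pred_sd(J)
    for i in range(J.shape[0]):
        opr = 100 * (pred_sd(np.delete(J, i, axis=0)) - base) / base
        print(f"omit obs {i:2d}: prediction sd increases {opr:5.1f}%")
    ```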

  1. Sensitivity of the model error parameter specification in weak-constraint four-dimensional variational data assimilation

    NASA Astrophysics Data System (ADS)

    Shaw, Jeremy A.; Daescu, Dacian N.

    2017-08-01

    This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.

  2. Estimation of spatial-temporal gait parameters using a low-cost ultrasonic motion analysis system.

    PubMed

    Qi, Yongbin; Soh, Cheong Boon; Gunawan, Erry; Low, Kay-Soon; Thomas, Rijil

    2014-08-20

    In this paper, a low-cost motion analysis system using a wireless ultrasonic sensor network is proposed and investigated. A methodology has been developed to extract spatial-temporal gait parameters including stride length, stride duration, stride velocity, stride cadence, and stride symmetry from 3D foot displacements estimated by the combination of a spherical positioning technique and an unscented Kalman filter. The performance of this system is validated against a camera-based system in the laboratory with 10 healthy volunteers. Numerical results show the feasibility of the proposed system, with an average error of 2.7% across all estimated gait parameters. The influence of walking speed on the measurement accuracy of the proposed system is also evaluated. Statistical analysis demonstrates its capability of being used as a gait assessment tool for some medical applications.
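
    As a rough illustration of the spatial-temporal parameters listed, the sketch below derives them from a 3D foot trajectory once heel-strike indices are known; the heel-strike detection, spherical positioning, and unscented Kalman filtering steps of the paper are assumed to have already produced these inputs.

    ```python
    import numpy as np

    def stride_parameters(t, pos, strike_idx):
        """Stride parameters from time stamps t (s), foot positions
        pos (n x 3, metres), and indices of successive heel strikes
        of the same foot."""
        strikes = pos[strike_idx]
        lengths = np.linalg.norm(np.diff(strikes, axis=0), axis=1)  # stride length (m)
        durations = np.diff(t[strike_idx])                          # stride duration (s)
        velocity = lengths / durations                              # stride velocity (m/s)
        cadence = 60.0 / durations                                  # strides per minute
        # simple symmetry index: relative difference of consecutive stride lengths
        symmetry = 2.0 * np.abs(lengths[1:] - lengths[:-1]) / (lengths[1:] + lengths[:-1])
        return lengths, durations, velocity, cadence, symmetry
    ```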

  3. Statistical Validation of Surrogate Endpoints: Another Look at the Prentice Criterion and Other Criteria.

    PubMed

    Saraf, Sanatan; Mathew, Thomas; Roy, Anindya

    2015-01-01

    For the statistical validation of surrogate endpoints, an alternative formulation is proposed for testing Prentice's fourth criterion, under a bivariate normal model. In such a setup, the criterion involves inference concerning an appropriate regression parameter, and the criterion holds if the regression parameter is zero. Testing such a null hypothesis has been criticized in the literature since it can only be used to reject a poor surrogate, and not to validate a good surrogate. In order to circumvent this, an equivalence hypothesis is formulated for the regression parameter, namely the hypothesis that the parameter is equivalent to zero. Such an equivalence hypothesis is formulated as an alternative hypothesis, so that the surrogate endpoint is statistically validated when the null hypothesis is rejected. Confidence intervals for the regression parameter and tests for the equivalence hypothesis are proposed using bootstrap methods and small sample asymptotics, and their performances are numerically evaluated and recommendations are made. The choice of the equivalence margin is a regulatory issue that needs to be addressed. The proposed equivalence testing formulation is also adopted for other parameters that have been proposed in the literature on surrogate endpoint validation, namely, the relative effect and proportion explained.
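
    The interval-inclusion logic of such an equivalence test can be sketched with a percentile bootstrap (a generic TOST-style illustration; the paper's small-sample asymptotics and bivariate-normal formulation are not reproduced): equivalence to zero is declared when the (1 - 2α) bootstrap interval for the regression slope falls inside (-δ, δ).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def equivalence_bootstrap(x, y, delta, alpha=0.05, n_boot=2000):
        """Bootstrap equivalence test for a regression slope beta:
        H0: |beta| >= delta vs H1: |beta| < delta; H0 is rejected
        (surrogate 'validated' in this toy setup) when the percentile
        interval lies entirely inside (-delta, delta)."""
        n = len(x)
        slopes = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, n, size=n)          # resample pairs with replacement
            slopes[b] = np.polyfit(x[idx], y[idx], 1)[0]
        lo, hi = np.percentile(slopes, [100 * alpha, 100 * (1 - alpha)])
        return (-delta < lo) and (hi < delta), (lo, hi)
    ```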

  4. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
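
    Of the four SA approaches compared, the standardized-regression-coefficient one admits the shortest sketch (assuming a matrix X of sampled parameter values and a vector y of the corresponding model responses; names are hypothetical):

    ```python
    import numpy as np

    def src_sensitivity(X, y):
        """Standardized regression coefficients (SRC): regress the
        standardized response on standardized parameters; |SRC| ranks
        parameter importance when the response is close to linear."""
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)
        ys = (y - y.mean()) / y.std()
        coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        return coef
    ```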

  5. A variational approach to parameter estimation in ordinary differential equations.

    PubMed

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely used in the fields of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.

  6. An architecture for efficient gravitational wave parameter estimation with multimodal linear surrogate models

    NASA Astrophysics Data System (ADS)

    O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.

    2017-07-01

    The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to expense in waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all ℓ ≤ 4 modes. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.

  7. The Value of Data and Metadata Standardization for Interoperability in Giovanni Or: Why Your Product's Metadata Causes Us Headaches!

    NASA Technical Reports Server (NTRS)

    Smit, Christine; Hegde, Mahabaleshwara; Strub, Richard; Bryant, Keith; Li, Angela; Petrenko, Maksym

    2017-01-01

    Giovanni is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization.

  8. Extended Kalman Filter for Estimation of Parameters in Nonlinear State-Space Models of Biochemical Networks

    PubMed Central

    Sun, Xiaodian; Jin, Li; Xiong, Momiao

    2008-01-01

    It is system dynamics that determines the function of cells, tissues and organisms. Developing mathematical models and estimating their parameters is an essential task in studying the dynamic behaviors of biological systems, which include metabolic networks, genetic regulatory networks and signal transduction pathways, under perturbation of external stimuli. In general, biological dynamic systems are partially observed. Therefore, a natural way to model dynamic biological systems is to employ nonlinear state-space equations. Although statistical methods for parameter estimation of linear models in biological dynamic systems have been developed intensively in recent years, the estimation of both states and parameters of nonlinear dynamic systems remains a challenging task. In this report, we apply the extended Kalman filter (EKF) to the estimation of both states and parameters of nonlinear state-space models. To evaluate the performance of the EKF for parameter estimation, we apply the EKF to a simulation dataset and two real datasets: the JAK-STAT and Ras/Raf/MEK/ERK signal transduction pathway datasets. The preliminary results show that the EKF can accurately estimate the parameters and predict states in nonlinear state-space equations for modeling dynamic biochemical networks. PMID:19018286
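
    A generic predict/update step of the EKF on an augmented state (states stacked with parameters, the usual device for joint estimation; f, h and their Jacobians are placeholders for a specific network model):

    ```python
    import numpy as np

    def ekf_step(x, P, u, y, f, h, F_jac, H_jac, Q, R):
        """One EKF cycle on an augmented state x = [states; parameters];
        parameters evolve as constants plus the small noise encoded in Q."""
        # predict
        x_pred = f(x, u)
        F = F_jac(x, u)                      # Jacobian of the dynamics at the estimate
        P_pred = F @ P @ F.T + Q
        # update
        H = H_jac(x_pred)                    # Jacobian of the observation model
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
        x_new = x_pred + K @ (y - h(x_pred))
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new
    ```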

  9. Representing Micro-Macro Linkages by Actor-Based Dynamic Network Models

    PubMed Central

    Snijders, Tom A.B.; Steglich, Christian E.G.

    2014-01-01

    Stochastic actor-based models for network dynamics have the primary aim of statistical inference about processes of network change, but may be regarded as a kind of agent-based model. Similar to many other agent-based models, they are based on local rules for actor behavior. Different from many other agent-based models, by including elements of generalized linear statistical models they aim to be realistic, detailed representations of network dynamics in empirical data sets. Statistical parallels to micro-macro considerations can be found in the estimation of parameters determining local actor behavior from empirical data, and in the assessment of goodness of fit from the correspondence with network-level descriptives. This article studies several network-level consequences of dynamic actor-based models applied to represent cross-sectional network data. Two examples illustrate how network-level characteristics can be obtained as emergent features implied by micro-specifications of actor-based models. PMID:25960578

  10. Pointwise probability reinforcements for robust statistical inference.

    PubMed

    Frénay, Benoît; Verleysen, Michel

    2014-02-01

    Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include, e.g., outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR, and a regularisation allows controlling the amount of reinforcement which compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually, since an abnormality degree is obtained for each observation.
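
    A toy rendering of the reinforcement idea on a Gaussian location model (a sketch of the concept only; the paper's exact likelihood, regularisation, and optimisation scheme may differ): each observation's density is boosted by a nonnegative r_i, penalised so that only abnormally frequent points buy reinforcement instead of biasing the estimate.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def ppr_fit(x, lam=5.0):
        """Maximize sum_i log(N(x_i; mu, 1) + r_i) - lam * sum_i r_i
        over mu and r_i >= 0; the fitted r_i serve as per-observation
        abnormality degrees."""
        n = len(x)

        def negloglik(theta):
            mu, r = theta[0], theta[1:]
            return -(np.log(norm.pdf(x, mu, 1.0) + r).sum() - lam * r.sum())

        theta0 = np.concatenate([[np.median(x)], np.zeros(n)])
        bounds = [(None, None)] + [(0.0, None)] * n
        res = minimize(negloglik, theta0, bounds=bounds, method="L-BFGS-B")
        return res.x[0], res.x[1:]   # robust location estimate, reinforcements
    ```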

  11. 'Chain pooling' model selection as developed for the statistical analysis of a rotor burst protection experiment

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1977-01-01

    A statistical decision procedure called chain pooling had been developed for model selection in fitting the results of a two-level fixed-effects full or fractional factorial experiment without replication. The basic strategy included the use of one nominal level of significance for a preliminary test and a second nominal level of significance for the final test. The subject has been reexamined from the point of view of using as many as three successive statistical model deletion procedures in fitting the results of a single experiment. The investigation consisted of random number studies intended to simulate the results of a proposed aircraft turbine-engine rotor-burst-protection experiment. As a conservative approach, population model coefficients were chosen to represent a saturated 2^4 experiment with a distribution of parameter values unfavorable to the decision procedures. Three model selection strategies were developed.

  12. A Management Information System Model for Program Management. Ph.D. Thesis - Oklahoma State Univ.; [Computerized Systems Analysis]

    NASA Technical Reports Server (NTRS)

    Shipman, D. L.

    1972-01-01

    The development of a model to simulate the information system of a program management type of organization is reported. The model statistically determines the following parameters: types of messages, destinations, delivery durations, types of processing, processing durations, communication channels, outgoing messages, and priorities. The total management information system of the program management organization is considered, including formal and informal information flows and both facilities and equipment. The model is written in the General Purpose System Simulation 2 computer programming language for use on the Univac 1108, Executive 8 computer. The model is simulated on a daily basis and collects queue and resource utilization statistics for each decision point. The statistics are then used by management to evaluate proposed resource allocations, to evaluate proposed changes to the system, and to identify potential problem areas. The model employs both empirical and theoretical distributions, which are adjusted to simulate the information flow being studied.

  13. Process air quality data

    NASA Technical Reports Server (NTRS)

    Butler, C. M.; Hogge, J. E.

    1978-01-01

    Air quality sampling was conducted. Data for air quality parameters, recorded on written forms, punched cards or magnetic tape, are available for 1972 through 1975. Computer software was developed to (1) calculate several daily statistical measures of location, (2) plot time histories of data or the calculated daily statistics, (3) calculate simple correlation coefficients, and (4) plot scatter diagrams. Computer software was developed for processing air quality data to include time series analysis and goodness of fit tests. Computer software was developed to (1) calculate a larger number of daily statistical measures of location, and a number of daily, monthly and yearly measures of location, dispersion, skewness and kurtosis, (2) decompose the extended time series model, and (3) perform some goodness of fit tests. The computer program is described, documented and illustrated by examples. Recommendations are made for continued development of air quality data processing research.

  14. The potential of statistical shape modelling for geometric morphometric analysis of human teeth in archaeological research

    PubMed Central

    Fernee, Christianne; Browne, Martin; Zakrzewski, Sonia

    2017-01-01

    This paper introduces statistical shape modelling (SSM) for use in osteoarchaeology research. SSM is a full field, multi-material analytical technique, and is presented as a supplementary geometric morphometric (GM) tool. Lower mandibular canines from two archaeological populations and one modern population were sampled, digitised using micro-CT, aligned, registered to a baseline and statistically modelled using principal component analysis (PCA). Sample material properties were incorporated as a binary enamel/dentin parameter. Results were assessed qualitatively and quantitatively using anatomical landmarks. Finally, the technique’s application was demonstrated for inter-sample comparison through analysis of the principal component (PC) weights. It was found that SSM could provide high detail qualitative and quantitative insight with respect to archaeological inter- and intra-sample variability. This technique has value for archaeological, biomechanical and forensic applications including identification, finite element analysis (FEA) and reconstruction from partial datasets. PMID:29216199

  15. Cosmology constraints from shear peak statistics in Dark Energy Survey Science Verification data

    DOE PAGES

    Kacprzak, T.; Kirk, D.; Friedrich, O.; ...

    2016-08-19

    Shear peak statistics has gained a lot of attention recently as a practical alternative to two-point statistics for constraining cosmological parameters. We perform a shear peak statistics analysis of the Dark Energy Survey (DES) Science Verification (SV) data, using weak gravitational lensing measurements from a 139 deg^2 field. We measure the abundance of peaks identified in aperture mass maps, as a function of their signal-to-noise ratio, in the signal-to-noise range 0 < S/N < 4. To predict the peak counts as a function of cosmological parameters we use a suite of N-body simulations spanning 158 models with varying Ω_m and σ_8, fixing w = -1, Ω_b = 0.04, h = 0.7 and n_s = 1, to which we have applied the DES SV mask and redshift distribution. In our fiducial analysis we measure σ_8(Ω_m/0.3)^0.6 = 0.77 ± 0.07, after marginalising over the shear multiplicative bias and the error on the mean redshift of the galaxy sample. We introduce models of intrinsic alignments, blending, and source contamination by cluster members. These models indicate that peaks with S/N > 4 would require significant corrections, which is why we do not include them in our analysis. We compare our results to the cosmological constraints from the two-point analysis on the SV field and find them to be in good agreement in both the central value and its uncertainty. Finally, we discuss prospects for future peak statistics analyses with upcoming DES data.
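
    A minimal sketch of the peak-abundance measurement (peaks taken as pixels exceeding their eight neighbours in an aperture-mass signal-to-noise map; masking, calibration, and the simulation-based prediction step are omitted):

    ```python
    import numpy as np

    def peak_counts(snr_map, bins=np.linspace(0.0, 4.0, 9)):
        """Histogram of map peaks by signal-to-noise in 0 < S/N < 4;
        a peak is a pixel strictly above its eight neighbours
        (map edges are ignored)."""
        m = snr_map
        core = m[1:-1, 1:-1]
        is_peak = np.ones_like(core, dtype=bool)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                shifted = m[1 + di:m.shape[0] - 1 + di, 1 + dj:m.shape[1] - 1 + dj]
                is_peak &= core > shifted
        counts, _ = np.histogram(core[is_peak], bins=bins)
        return counts
    ```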

  16. Influence of Structural Features and Fracture Processes on Surface Roughness: A Case Study from the Krosno Sandstones of the Górka-Mucharz Quarry (Little Beskids, Southern Poland)

    NASA Astrophysics Data System (ADS)

    Pieczara, Łukasz

    2015-09-01

    The paper presents the results of analysis of surface roughness parameters in the Krosno Sandstones of Mucharz, southern Poland. It was aimed at determining whether these parameters are influenced by structural features (mainly the laminar distribution of mineral components and directional distribution of non-isometric grains) and fracture processes. The tests applied in the analysis enabled us to determine and describe the primary statistical parameters used in the quantitative description of surface roughness, as well as specify the usefulness of contact profilometry as a method of visualizing spatial differentiation of fracture processes in rocks. These aims were achieved by selecting a model material (Krosno Sandstones from the Górka-Mucharz Quarry) and an appropriate research methodology. The schedule of laboratory analyses included: identification analyses connected with non-destructive ultrasonic tests, aimed at the preliminary determination of rock anisotropy; point-load strength tests (cleaved surfaces were obtained through the destruction of rock samples); microscopic analysis (observation of thin sections in order to determine the mechanism inducing fracture processes); and a test method of measuring surface roughness (two- and three-dimensional diagrams, topographic and contour maps, and statistical parameters of surface roughness). The highest values of roughness indicators were achieved for surfaces formed under the influence of intragranular fracture processes (cracks propagating directly through grains). This is related to the structural features of the Krosno Sandstones (distribution of lamination and bedding).
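
    The primary statistical parameters of surface roughness referred to above can be computed from a measured profile with the standard definitions (heights referenced to the mean line):

    ```python
    import numpy as np

    def roughness_parameters(z):
        """Primary roughness statistics of a profile z of surface heights:
        Ra (arithmetic mean deviation), Rq (root mean square), and the
        skewness and kurtosis of the height distribution."""
        zc = z - z.mean()                 # reference heights to the mean line
        Ra = np.abs(zc).mean()
        Rq = np.sqrt((zc ** 2).mean())
        Rsk = (zc ** 3).mean() / Rq ** 3
        Rku = (zc ** 4).mean() / Rq ** 4
        return Ra, Rq, Rsk, Rku
    ```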

  17. [Individual social factors and their association with environmental socioeconomic factors--a descriptive small-area analysis in the city of Dortmund].

    PubMed

    Neuner, B; Berger, K

    2010-11-01

    Apart from individual resources and individual risk factors, environmental socioeconomic factors are determinants of individual health and illness. The aim of this investigation was to evaluate the association of small-area environmental socioeconomic parameters (proportion of 14-year-old and younger population, proportion of married citizens, proportion of unemployed, and the number of private cars per inhabitant) with individual socioeconomic parameters (education, income, unemployment, social class and the country of origin) in Dortmund, a major city in Germany. After splitting the small-area environmental socioeconomic parameters of 62 statistical administration units into quintiles, differences in the distribution of individual social parameters were evaluated using adjusted tests for trend. Overall, 1,312 study participants (mean age 53.6 years, 52.9% women) were included. Independently of age and gender, individual social parameters were unequally distributed across areas with different small-area environmental socioeconomic parameters. A place of birth abroad and social class were significantly associated with all small-area environmental socioeconomic parameters. If the impact of environmental socioeconomic parameters on individual health or illness is determined, the unequal small-area distribution of individual social parameters should be considered.

  18. How does spa treatment affect cardiovascular function and vascular endothelium in patients with generalized osteoarthritis? A pilot study through plasma asymmetric di-methyl arginine (ADMA) and L-arginine/ADMA ratio

    NASA Astrophysics Data System (ADS)

    Karaarslan, Fatih; Ozkuk, Kagan; Seringec Karabulut, Serap; Bekpinar, Seldag; Karagulle, Mufit Zeki; Erdogan, Nergis

    2017-12-01

    The study aims to investigate the effect of spa treatment on vascular endothelium and clinical symptoms of generalized osteoarthritis. Forty generalized osteoarthritis (GOA) patients referred to a government spa hospital, and 40 GOA patients followed on university hospital locomotor system disease ambulatory clinics were included as study and control groups, respectively. Study group received spa treatment including thermal water baths, physical therapy modalities, and exercises. Control group was followed with home exercises for 15 days. Plasma ADMA, L-arginine, L-arginine/ADMA ratio, routine blood analyses, 6-min walking test, including fingertip O2 saturation, systolic/diastolic blood pressure, and pulse rate, were measured at the beginning and at the end of treatment. Groups were evaluated with VAS pain, patient, and physician global assessment; HAQ; and WOMAC at the beginning, at the end, and after 1 month of treatment. In study group, L-arginine and L-arginine/ADMA ratio showed statistically significant increase after treatment. Plasma ADMA levels did not change. There is no significant difference in intergroup comparison. Study group displayed statistically significant improvements in all clinical parameters. The study showed that spa treatment does not cause any harm to the vascular endothelium through ADMA. Significant increase in plasma L-arginine and L-arginine/ADMA ratio suggests that balneotherapy may play a preventive role on cardiovascular diseases. Balneotherapy provides meaningful improvements on clinical parameters of GOA.

  19. How does spa treatment affect cardiovascular function and vascular endothelium in patients with generalized osteoarthritis? A pilot study through plasma asymmetric di-methyl arginine (ADMA) and L-arginine/ADMA ratio.

    PubMed

    Karaarslan, Fatih; Ozkuk, Kagan; Seringec Karabulut, Serap; Bekpinar, Seldag; Karagulle, Mufit Zeki; Erdogan, Nergis

    2018-05-01

    The study aims to investigate the effect of spa treatment on vascular endothelium and clinical symptoms of generalized osteoarthritis. Forty generalized osteoarthritis (GOA) patients referred to a government spa hospital, and 40 GOA patients followed on university hospital locomotor system disease ambulatory clinics were included as study and control groups, respectively. Study group received spa treatment including thermal water baths, physical therapy modalities, and exercises. Control group was followed with home exercises for 15 days. Plasma ADMA, L-arginine, L-arginine/ADMA ratio, routine blood analyses, 6-min walking test, including fingertip O2 saturation, systolic/diastolic blood pressure, and pulse rate, were measured at the beginning and at the end of treatment. Groups were evaluated with VAS pain, patient, and physician global assessment; HAQ; and WOMAC at the beginning, at the end, and after 1 month of treatment. In study group, L-arginine and L-arginine/ADMA ratio showed statistically significant increase after treatment. Plasma ADMA levels did not change. There is no significant difference in intergroup comparison. Study group displayed statistically significant improvements in all clinical parameters. The study showed that spa treatment does not cause any harm to the vascular endothelium through ADMA. Significant increase in plasma L-arginine and L-arginine/ADMA ratio suggests that balneotherapy may play a preventive role on cardiovascular diseases. Balneotherapy provides meaningful improvements on clinical parameters of GOA.

  20. [The effect of fluoride on electrochemical corrosion of the dental pure titanium before and after adhesion of Streptococcus mutans].

    PubMed

    Geng, Li; Qiao, Guang-yan; Gu, Kai-ka

    2016-04-01

    To investigate the effect of fluoride on electrochemical corrosion of dental pure titanium before and after adhesion of Streptococcus mutans. The dental pure titanium specimens were tested by an electrochemical measurement system including electrochemical impedance spectroscopy (EIS) and potentiodynamic polarization curve (PD) methods in artificial saliva with 0 g/L and 1.0 g/L sodium fluoride, before and after being dipped into culture medium with Streptococcus mutans for 24 h. The corrosion parameters, including the polarization resistance (R(ct)), corrosion potential (E(corr)), pitting breakdown potential (E(b)), and the difference between E(corr) and E(b) representing the "pseudo-passivation" (ΔE), obtained from the electrochemical tests, were used to evaluate the corrosion resistance of dental pure titanium. The data were statistically analyzed by 2×2 factorial analysis to examine the effects of sodium fluoride and adhesion of Streptococcus mutans using the SPSS 12.0 software package. The results showed that the corrosion parameters including R(ct), E(corr), E(b), and ΔE of pure titanium differed significantly between before and after adhesion of Streptococcus mutans in the same solution (P<0.05), and between artificial saliva with 0 g/L and 1.0 g/L sodium fluoride (P<0.05). The dental pure titanium was prone to corrosion in artificial saliva with sodium fluoride. The corrosion resistance of pure titanium decreased distinctly after immersion in culture medium with Streptococcus mutans.

  1. GuiTope: an application for mapping random-sequence peptides to protein sequences.

    PubMed

    Halperin, Rebecca F; Stafford, Phillip; Emery, Jack S; Navalkar, Krupa Arun; Johnston, Stephen Albert

    2012-01-03

    Random-sequence peptide libraries are a commonly used tool to identify novel ligands for binding antibodies, other proteins, and small molecules. It is often of interest to compare the selected peptide sequences to the natural protein binding partners to infer the exact binding site or the importance of particular residues. The ability to search a set of sequences for similarity to a set of peptides may sometimes enable the prediction of an antibody epitope or a novel binding partner. We have developed a software application designed specifically for this task. GuiTope provides a graphical user interface for aligning peptide sequences to protein sequences. All alignment parameters are accessible to the user including the ability to specify the amino acid frequency in the peptide library; these frequencies often differ significantly from those assumed by popular alignment programs. It also includes a novel feature to align di-peptide inversions, which we have found improves the accuracy of antibody epitope prediction from peptide microarray data and shows utility in analyzing phage display datasets. Finally, GuiTope can randomly select peptides from a given library to estimate a null distribution of scores and calculate statistical significance. GuiTope provides a convenient method for comparing selected peptide sequences to protein sequences, including flexible alignment parameters, novel alignment features, ability to search a database, and statistical significance of results. The software is available as an executable (for PC) at http://www.immunosignature.com/software and ongoing updates and source code will be available at sourceforge.net.
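
    The null-distribution feature can be sketched as an empirical p-value over randomly drawn library peptides; score_fn below is a placeholder standing in for GuiTope's alignment scoring, not its actual API.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def empirical_pvalue(observed_score, library_peptides, protein, score_fn,
                         n_null=1000):
        """Significance of a peptide/protein alignment score, estimated by
        scoring random library peptides against the same protein."""
        draws = rng.choice(len(library_peptides), size=n_null, replace=True)
        null_scores = np.array([score_fn(library_peptides[i], protein)
                                for i in draws])
        # add-one correction keeps the estimate away from exactly zero
        return (1 + np.sum(null_scores >= observed_score)) / (n_null + 1)
    ```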

  2. How does spa treatment affect cardiovascular function and vascular endothelium in patients with generalized osteoarthritis? A pilot study through plasma asymmetric di-methyl arginine (ADMA) and L-arginine/ADMA ratio

    NASA Astrophysics Data System (ADS)

    Karaarslan, Fatih; Ozkuk, Kagan; Seringec Karabulut, Serap; Bekpinar, Seldag; Karagulle, Mufit Zeki; Erdogan, Nergis

    2018-05-01

    The study aims to investigate the effect of spa treatment on vascular endothelium and clinical symptoms of generalized osteoarthritis. Forty generalized osteoarthritis (GOA) patients referred to a government spa hospital, and 40 GOA patients followed on university hospital locomotor system disease ambulatory clinics were included as study and control groups, respectively. Study group received spa treatment including thermal water baths, physical therapy modalities, and exercises. Control group was followed with home exercises for 15 days. Plasma ADMA, L-arginine, L-arginine/ADMA ratio, routine blood analyses, 6-min walking test, including fingertip O2 saturation, systolic/diastolic blood pressure, and pulse rate, were measured at the beginning and at the end of treatment. Groups were evaluated with VAS pain, patient, and physician global assessment; HAQ; and WOMAC at the beginning, at the end, and after 1 month of treatment. In study group, L-arginine and L-arginine/ADMA ratio showed statistically significant increase after treatment. Plasma ADMA levels did not change. There is no significant difference in intergroup comparison. Study group displayed statistically significant improvements in all clinical parameters. The study showed that spa treatment does not cause any harm to the vascular endothelium through ADMA. Significant increase in plasma L-arginine and L-arginine/ADMA ratio suggests that balneotherapy may play a preventive role on cardiovascular diseases. Balneotherapy provides meaningful improvements on clinical parameters of GOA.

  3. Supervised Machine Learning for Regionalization of Environmental Data: Distribution of Uranium in Groundwater in Ukraine

    NASA Astrophysics Data System (ADS)

    Govorov, Michael; Gienko, Gennady; Putrenko, Viktor

    2018-05-01

    In this paper, several supervised machine learning algorithms were explored to define homogeneous regions of concentration of uranium in surface waters in Ukraine using multiple environmental parameters. The previous study was focused on finding the primary environmental parameters related to uranium in ground waters using several methods of spatial statistics and unsupervised classification. At this step, we refined the regionalization using Artificial Neural Network (ANN) techniques including the Multilayer Perceptron (MLP), Radial Basis Function (RBF), and Convolutional Neural Network (CNN). The study is focused on building local ANN models, which may significantly improve the prediction results of machine learning algorithms by taking into consideration non-stationarity and autocorrelation in spatial data.

  4. Further developments in cloud statistics for computer simulations

    NASA Technical Reports Server (NTRS)

    Chang, D. T.; Willand, J. H.

    1972-01-01

    This study is a part of NASA's continued program to provide global statistics of cloud parameters for computer simulation. The primary emphasis was on the development of the data bank of the global statistical distributions of cloud types and cloud layers and their applications in the simulation of the vertical distributions of in-cloud parameters such as liquid water content. These statistics were compiled from actual surface observations as recorded in Standard WBAN forms. Data for a total of 19 stations were obtained and reduced. These stations were selected to be representative of the 19 primary cloud climatological regions defined in previous studies of cloud statistics. Using the data compiled in this study, a limited study was conducted of the homogeneity of cloud regions, the latitudinal dependence of cloud-type distributions, the dependence of these statistics on sample size, and other factors in the statistics which are of significance to the problem of simulation. The application of the statistics in cloud simulation was investigated. In particular, the inclusion of the new statistics in an expanded multi-step Monte Carlo simulation scheme is suggested and briefly outlined.

  5. Experimental statistics for biological sciences.

    PubMed

    Bang, Heejung; Davidian, Marie

    2010-01-01

    In this chapter, we cover basic and fundamental principles and methods in statistics - from "What are Data and Statistics?" to "ANOVA and linear regression," which are the basis of any statistical thinking and undertaking. Readers can easily find the selected topics in most introductory statistics textbooks, but we have tried to assemble and structure them in a succinct and reader-friendly manner in a stand-alone chapter. This text has long been used in real classroom settings for both undergraduate and graduate students who do or do not major in statistical sciences. We hope that from this chapter, readers would understand the key statistical concepts and terminologies, how to design a study (experimental or observational), how to analyze the data (e.g., describe the data and/or estimate the parameter(s) and make inference), and how to interpret the results. This text would be most useful if it is used as a supplemental material, while the readers take their own statistical courses or it would serve as a great reference text associated with a manual for any statistical software as a self-teaching guide.

  6. Effects of dance movement therapy on selected cardiovascular parameters and estimated maximum oxygen consumption in hypertensive patients.

    PubMed

    Aweto, H A; Owoeye, O B A; Akinbo, S R A; Onabajo, A A

    2012-01-01

    Objective: Arterial hypertension is a medical condition associated with increased risks of death, cardiovascular mortality and cardiovascular morbidity including stroke, coronary heart disease, atrial fibrillation and renal insufficiency. Regular physical exercise is considered to be an important part of the non-pharmacologic treatment of hypertension. The purpose of this study was to investigate the effects of dance movement therapy (DMT) on selected cardiovascular parameters and estimated maximum oxygen consumption in hypertensive patients. Fifty (50) subjects with hypertension participated in the study. They were randomly assigned to 2 equal groups: A (DMT group) and B (control group). Group A carried out dance movement therapy 2 times a week for 4 weeks while group B underwent educational sessions 2 times a week for the same duration. All the subjects were on anti-hypertensive drugs. 38 subjects completed the study, with the DMT group having a total of 23 subjects (10 males and 13 females) and the control group 15 subjects (6 males and 9 females). Descriptive statistics of mean and standard deviation and inferential statistics of paired and independent t-tests were used for data analysis. Following four weeks of dance movement therapy, paired t-test analysis showed that there was a statistically significant difference in the resting systolic blood pressure (RSBP) (p < 0.001), resting diastolic blood pressure (RDBP) (p < 0.001), resting heart rate (RHR) (p = 0.024), maximum heart rate (MHR) (p = 0.002) and estimated maximum oxygen consumption (VO2max) (p = 0.023) in subjects in group A (p < 0.05), while there was no significant difference observed in the outcome variables of subjects in group B (p > 0.05). Independent t-test analysis between the differences in the pre- and post-intervention scores of groups A and B also showed statistically significant differences in all the outcome variables (p < 0.05). DMT was effective in improving cardiovascular parameters and estimated maximum oxygen consumption in hypertensive patients.

  7. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    PubMed

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

    The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance determination (ARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.
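
    An ARD-style sketch using scikit-learn (an illustration of the general technique, not the authors' MatLab implementation): one RBF length-scale per descriptor is learned by maximizing the marginal likelihood, and short length-scales flag the most relevant descriptors.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def ard_relevance(X, y):
        """Fit a GP with an anisotropic RBF kernel and return inverse
        length-scales as relevance scores (larger = more relevant)."""
        d = X.shape[1]
        kernel = RBF(length_scale=np.ones(d)) + WhiteKernel(noise_level=0.1)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
        return 1.0 / np.asarray(gp.kernel_.k1.length_scale)
    ```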

  8. Investigating the impact of design characteristics on statistical efficiency within discrete choice experiments: A systematic survey.

    PubMed

    Vanniyasingam, Thuva; Daly, Caitlin; Jin, Xuejing; Zhang, Yuan; Foster, Gary; Cunningham, Charles; Thabane, Lehana

    2018-06-01

    This study reviews simulation studies of discrete choice experiments to (i) determine how survey design features affect statistical efficiency and (ii) appraise their reporting quality. Statistical efficiency was measured using relative design (D-) efficiency, D-optimality, or D-error. For this systematic survey, we searched Journal Storage (JSTOR), ScienceDirect, PubMed, and OVID, which included a search within EMBASE. Searches were conducted up to 2016 for simulation studies investigating the impact of DCE design features on statistical efficiency. Studies were screened and data were extracted independently and in duplicate. Results for each included study were summarized by design characteristic. Previously developed criteria for the reporting quality of simulation studies were also adapted and applied to each included study. Of 371 potentially relevant studies, 9 were found to be eligible, with several varying in study objectives. Statistical efficiency improved when increasing the number of choice tasks or alternatives; decreasing the number of attributes and attribute levels; using an unrestricted continuous "manipulator" attribute; using model-based approaches with covariates incorporating response behaviour; using sampling approaches that incorporate previous knowledge of response behaviour; incorporating heterogeneity in a model-based design; correctly specifying Bayesian priors; minimizing parameter prior variances; and using an appropriate method to create the DCE design for the research question. The simulation studies performed well in terms of reporting quality. Improvement is needed with regard to clearly specifying study objectives, numbers of failures, random number generators, starting seeds, and the software used. These results identify the best approaches to structure a DCE. An investigator can manipulate design characteristics to help reduce response burden and increase statistical efficiency. Since studies varied in their objectives, conclusions were made on several design characteristics; however, the validity of each conclusion was limited. Further research should be conducted to explore all conclusions in various design settings and scenarios. Additional reviews to explore other statistical efficiency outcomes and databases could also be performed to enhance the conclusions identified from this review.
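
    For reference, one common formalization of the efficiency measure (conventions vary across the surveyed studies, so this is an assumption rather than the definition every study used) is

    \[ \text{D-error}(X) = \bigl[\det \Omega(X,\tilde{\beta})\bigr]^{1/K}, \qquad \Omega(X,\tilde{\beta}) = \mathcal{I}(X,\tilde{\beta})^{-1}, \]

    where X is the design, \tilde{\beta} the assumed (prior) parameter values, \mathcal{I} the information matrix, and K the number of model parameters; minimizing D-error maximizes D-efficiency.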

  9. Statistics, Computation, and Modeling in Cosmology

    NASA Astrophysics Data System (ADS)

    Jewell, Jeff; Guiness, Joe; SAMSI 2016 Working Group in Cosmology

    2017-01-01

    Current and future ground and space based missions are designed to not only detect, but map out with increasing precision, details of the universe in its infancy to the present-day. As a result we are faced with the challenge of analyzing and interpreting observations from a wide variety of instruments to form a coherent view of the universe. Finding solutions to a broad range of challenging inference problems in cosmology is one of the goals of the “Statistics, Computation, and Modeling in Cosmology” working groups, formed as part of the year long program on ‘Statistical, Mathematical, and Computational Methods for Astronomy’, hosted by the Statistical and Applied Mathematical Sciences Institute (SAMSI), a National Science Foundation funded institute. Two application areas have emerged for focused development in the cosmology working group, involving advanced algorithmic implementations of exact Bayesian inference for the Cosmic Microwave Background and statistical modeling of galaxy formation. The former includes study and development of advanced Markov Chain Monte Carlo algorithms designed to confront challenging inference problems, including inference for spatial Gaussian random fields in the presence of sources of galactic emission (an example of a source separation problem). Extending these methods to future redshift survey data probing the nonlinear regime of large scale structure formation is also included in the working group activities. In addition, the working group is focused on the study of ‘Galacticus’, a galaxy formation model applied to dark matter-only cosmological N-body simulations operating on time-dependent halo merger trees. The working group is interested in calibrating the Galacticus model to match statistics of galaxy survey observations, specifically stellar mass functions, luminosity functions, and color-color diagrams. The group will use subsampling approaches and fractional factorial designs to explore the Galacticus parameter space in a statistically and computationally efficient way. The group will also use the Galacticus simulations to study the relationship between the topological and physical structure of the halo merger trees and the properties of the resulting galaxies.

  10. Association between pathology and texture features of multi parametric MRI of the prostate

    NASA Astrophysics Data System (ADS)

    Kuess, Peter; Andrzejewski, Piotr; Nilsson, David; Georg, Petra; Knoth, Johannes; Susani, Martin; Trygg, Johan; Helbich, Thomas H.; Polanec, Stephan H.; Georg, Dietmar; Nyholm, Tufve

    2017-10-01

    The role of multi-parametric (mp)MRI in the diagnosis and treatment of prostate cancer has increased considerably. An alternative to visual inspection of mpMRI is evaluation using histogram-based (first order statistics) parameters and textural features (second order statistics). The aims of the present work were to investigate the relationship between benign and malignant sub-volumes of the prostate and textures obtained from mpMR images. The performance of tumor prediction was investigated based on the combination of histogram-based and textural parameters. Subsequently, the relative importance of mpMR images was assessed and the benefit of additional imaging analyzed. Finally, sub-structures based on the PI-RADS classification were investigated as potential regions for automatic detection of malignant lesions. Twenty-five patients who received mpMRI prior to radical prostatectomy were included in the study. The imaging protocol included T2, DWI, and DCE. Delineation of tumor regions was performed based on pathological information. First and second order statistics were derived from each structure and for all image modalities. The resulting data were processed with multivariate analysis, using PCA (principal component analysis) and OPLS-DA (orthogonal partial least squares discriminant analysis) for separation of malignant and healthy tissue. PCA showed a clear difference between tumor and healthy regions in the peripheral zone for all investigated images. The predictive ability of the OPLS-DA models increased for all image modalities when first and second order statistics were combined. The predictive value reached a plateau after adding ADC and T2, and did not increase further with the addition of other image information. The present study indicates a distinct difference in the signatures between malignant and benign prostate tissue. This is an absolute prerequisite for automatic tumor segmentation, but only the first step in that direction. For the specific identified signature, DCE did not add complementary information to T2 and ADC maps.
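
    Feature extraction of the kind described can be sketched with scikit-image's gray-level co-occurrence utilities (a generic illustration, not the study's pipeline; the region of interest is assumed to be quantized to 8-bit gray levels):

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def first_and_second_order_features(roi):
        """First order (histogram) and second order (GLCM texture)
        statistics for a delineated region of an MR image."""
        q = (255 * (roi - roi.min()) / (np.ptp(roi) + 1e-12)).astype(np.uint8)
        first = {"mean": roi.mean(), "std": roi.std(),
                 "skew": ((roi - roi.mean()) ** 3).mean() / (roi.std() ** 3 + 1e-12)}
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        second = {p: graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")}
        return first, second
    ```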

  11. Influence of manufacturing parameters on the strength of PLA parts using Layered Manufacturing technique: A statistical approach

    NASA Astrophysics Data System (ADS)

    Jaya Christiyan, K. G.; Chandrasekhar, U.; Mathivanan, N. Rajesh; Venkateswarlu, K.

    2018-02-01

    3D printing was successfully used to fabricate samples of polylactic acid (PLA). Processing parameters such as lay-up speed, lay-up thickness, and nozzle diameter were varied. All samples were tested for flexural strength using a three-point load test. A statistical mathematical model was developed to correlate the processing parameters with flexural strength. The results clearly demonstrated that lay-up thickness and nozzle diameter influenced flexural strength significantly, whereas lay-up speed hardly influenced the flexural strength.

  12. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    NASA Astrophysics Data System (ADS)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of the practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. Besides, we adopt this model to carry out simulations of two widely used sources: weak coherent source (WCS) and heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. And when taking source errors and statistical fluctuations into account, the performance of decoy-state QKD using HSPS suffered less than that of decoy-state QKD using WCS.

  13. Study of air traffic over KLFIR

    NASA Astrophysics Data System (ADS)

    Nusyirwan, I. F.; Rohani, J. Mohd

    2017-12-01

    This paper shares an overview of work currently being conducted with the Department of Civil Aviation Malaysia related to air traffic. The aim is to study air traffic performance over the KL and KK FIRs, and the area of interest in this paper is the Kuala Lumpur Flight Information Region (KLFIR). The air traffic performance parameters include general air traffic movement measures, such as level allocation, number of movements, and sector load analysis, as well as more specific parameters such as airborne delays, effects of weather on air movements, and ground delays. To achieve this, a substantial effort has been undertaken that includes the development of live data collection and real-time statistical analysis algorithms. The main outcome of this multi-disciplinary work is a long-term analysis of air traffic performance in Malaysia, which will put the country at par in the aviation community, namely the International Civil Aviation Organization (ICAO).

  14. User's Guide for Monthly Vector Wind Profile Model

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1999-01-01

    The background, theoretical concepts, and methodology for construction of vector wind profiles based on a statistical model are presented. The derived monthly vector wind profiles are to be applied by the launch vehicle design community for establishing realistic estimates of critical vehicle design parameter dispersions related to wind profile dispersions. During initial studies a number of months are used to establish the model profiles that produce the largest monthly dispersions of ascent vehicle aerodynamic load indicators. The largest monthly dispersions for wind, which occur during the winter high-wind months, are used for establishing the design reference dispersions for the aerodynamic load indicators. This document includes a description of the computational process for the vector wind model including specification of input data, parameter settings, and output data formats. Sample output data listings are provided to aid the user in the verification of test output.

  15. A simple dynamic subgrid-scale model for LES of particle-laden turbulence

    NASA Astrophysics Data System (ADS)

    Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz

    2017-04-01

    In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
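
    Written generically, the differential filter amounts to solving an elliptic equation for the filtered field (sign and scaling conventions vary; α is the single width-related parameter that the model determines dynamically):

    \[ \left(1 - \alpha^{2}\nabla^{2}\right)\bar{u}_{i} = \tilde{u}_{i}, \qquad u_{i}' = \tilde{u}_{i} - \bar{u}_{i}, \]

    where \tilde{u}_i is the resolved LES velocity and u_i' the modeled subgrid-scale velocity seen by the particles.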

  16. Second-generation corneal deformation signal waveform analysis in normal, forme fruste keratoconic, and manifest keratoconic corneas after statistical correction for potentially confounding factors.

    PubMed

    Zhang, Lijun; Danesh, Jennifer; Tannan, Anjali; Phan, Vivian; Yu, Fei; Hamilton, D Rex

    2015-10-01

    To evaluate the difference in corneal biomechanical waveform parameters between manifest keratoconus, forme fruste keratoconus, and healthy eyes with a second-generation biomechanical waveform analyzer (Ocular Response Analyzer 2). Jules Stein Eye Institute, University of California, Los Angeles, California, USA. Retrospective chart review. The biomechanical waveform analyzer was used to obtain corneal hysteresis (CH), corneal resistance factor (CRF), and 37 biomechanical waveform parameters in manifest keratoconus eyes, forme fruste keratoconus eyes, and healthy eyes. Useful distinguishing parameters were found using t tests and a multivariable logistic regression model with stepwise variable selection. Potential confounders were controlled for. The study included 68 manifest keratoconus eyes, 64 forme fruste keratoconus eyes, and 249 healthy eyes. There was a statistical difference in the mean CRF between the normal group (10.2 mm Hg ± 1.7 [SD]) and keratoconus group (6.3 ± 1.9 mm Hg) (P = .003), and between the normal group and the forme fruste keratoconus group (7.8 ± 1.4 mm Hg) (P < .0001). There was no statistical difference in the mean CH between the normal group and the keratoconus group or the forme fruste keratoconus group. The CRF, height of peak 1 (P1) (P = .001), downslope of P1 (dslope1) (P = .027), upslope of peak 2 (P2) (P = .004), and downslope of P2 (P = .006) distinguished the normal group from the keratoconus groups. The CRF, downslope of P2 derived from the upper 50% of the applanation peak (P = .035), dslope1 (P = .014), and upslope of P1 (P = .008) distinguished the normal group from the forme fruste keratoconus group. Differences in multiple biomechanical waveform parameters can differentiate between healthy and diseased conditions and might improve early diagnosis of keratoconus and forme fruste keratoconus.

  17. Comparison of dark energy models: A perspective from the latest observational data

    NASA Astrophysics Data System (ADS)

    Li, Miao; Li, Xiaodong; Zhang, Xin

    2010-09-01

    We compare some popular dark energy models under the assumption of a flat universe by using the latest observational data, including the type Ia supernovae Constitution compilation, the baryon acoustic oscillation measurement from the Sloan Digital Sky Survey, the cosmic microwave background measurement given by the seven-year Wilkinson Microwave Anisotropy Probe observations, and the determination of H0 from the Hubble Space Telescope. Model comparison statistics such as the Bayesian and Akaike information criteria are applied to assess the worth of the models. These statistics favor models that give a good fit with fewer parameters. Based on this analysis, we find that the simplest cosmological constant model that has only one free parameter is still preferred by the current data. For other dynamical dark energy models, we find that some of them, such as the α dark energy, constant w, generalized Chaplygin gas, Chevallier-Polarski-Linder parametrization, and holographic dark energy models, can provide good fits to the current data, and three of them, namely, the Ricci dark energy, agegraphic dark energy, and Dvali-Gabadadze-Porrati models, are clearly disfavored by the data.
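    For reference, the information criteria used in this kind of model comparison have simple closed forms. A minimal sketch follows, assuming a Gaussian likelihood so that -2 ln L_max = chi^2_min; the numeric values are illustrative placeholders, not results from the paper.

      import math

      def aic(chi2_min, k):
          # Akaike information criterion: AIC = -2 ln L_max + 2k = chi2_min + 2k
          return chi2_min + 2 * k

      def bic(chi2_min, k, n):
          # Bayesian information criterion; n is the number of data points
          return chi2_min + k * math.log(n)

      # Placeholder numbers for two hypothetical fits to the same data:
      print(aic(465.5, 1), bic(465.5, 1, 557))  # cosmological-constant-like: 1 free parameter
      print(aic(465.3, 2), bic(465.3, 2, 557))  # constant-w-like: 2 free parameters

    Because both criteria add a penalty that grows with the number of free parameters, a model with more parameters must improve the fit substantially to be favored.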

  18. An accurate behavioral model for single-photon avalanche diode statistical performance simulation

    NASA Astrophysics Data System (ADS)

    Xu, Yue; Zhao, Tingchen; Li, Ding

    2018-01-01

    An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model, and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, so the behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and successfully operated on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good accordance with the test data, validating the high simulation accuracy.

  19. Prompt Injections of Highly Relativistic Electrons Induced by Interplanetary Shocks: A Statistical Study of Van Allen Probes Observations

    NASA Technical Reports Server (NTRS)

    Schiller, Q.; Kanekal, S. G.; Jian, L. K.; Li, X.; Jones, A.; Baker, D. N.; Jaynes, A.; Spence, H. E.

    2016-01-01

    We conduct a statistical study on the sudden response of outer radiation belt electrons due to interplanetary (IP) shocks during the Van Allen Probes era, i.e., 2012 to 2015. Data from the Relativistic Electron-Proton Telescope instrument on board the Van Allen Probes are used to investigate the highly relativistic electron response (E greater than 1.8 MeV) within the first few minutes after shock impact. We investigate the relationship of IP shock parameters, such as Mach number, with the highly relativistic electron response, including spectral properties and the radial location of the shock-induced injection. We find that the driving solar wind structure of the shock does not affect the occurrence of enhancement events: 25% of IP shocks are associated with prompt energization, and 14% are associated with MeV electron depletion. Parameters that represent IP shock strength are found to correlate best with the highest levels of energization, suggesting that shock strength may play a key role in the severity of the enhancements. However, not every shock results in an enhancement, indicating that magnetospheric preconditioning may be required.

  20. Statistical Mechanical Analysis of Online Learning with Weight Normalization in Single Layer Perceptron

    NASA Astrophysics Data System (ADS)

    Yoshida, Yuki; Karakida, Ryo; Okada, Masato; Amari, Shun-ichi

    2017-04-01

    Weight normalization, an optimization method for neural networks recently proposed by Salimans and Kingma (2016), decomposes the weight vector of a neural network into a radial length and a direction vector, and the decomposed parameters follow their own steepest-descent updates. The authors reported that learning with weight normalization converges faster in several tasks, including image recognition and reinforcement learning, than learning with the conventional parameterization. However, it has remained theoretically unexplained how weight normalization improves convergence speed. In this study, we applied a statistical mechanical technique to analyze on-line learning in single-layer linear and nonlinear perceptrons with weight normalization. By deriving order parameters of the learning dynamics, we confirmed quantitatively that weight normalization achieves fast convergence by automatically tuning the effective learning rate, regardless of the nonlinearity of the neural network. This property is realized when the initial value of the radial length is near the global minimum; our theory therefore suggests that it is important to choose the initial value of the radial length appropriately when using weight normalization.
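    The reparameterization referred to above is w = g * v / ||v||, with gradient descent applied to the scalar length g and the direction vector v separately. A minimal NumPy sketch of the gradient mapping follows; the function names are ours, for illustration only, not from the paper.

      import numpy as np

      def w_from(g, v):
          # Weight normalization (Salimans & Kingma, 2016): w = g * v / ||v||
          return g * v / np.linalg.norm(v)

      def grads(grad_w, g, v):
          # Map the loss gradient w.r.t. w onto (g, v) via the chain rule.
          norm_v = np.linalg.norm(v)
          grad_g = grad_w @ v / norm_v
          grad_v = (g / norm_v) * grad_w - (g * grad_g / norm_v**2) * v
          return grad_g, grad_v

      rng = np.random.default_rng(0)
      v = rng.normal(size=5)       # direction parameters
      g = 1.0                      # radial length; its initialization matters (see abstract)
      grad_w = rng.normal(size=5)  # stand-in for a loss gradient
      print(grads(grad_w, g, v))

    Note that grad_v is orthogonal to v, so gradient steps rotate the direction vector while g alone controls the effective scale, which is the mechanism behind the self-tuned effective learning rate.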

  1. Comparison of Ganglion Cell and Retinal Nerve Fiber Layer Thickness in Pigment Dispersion Syndrome, Pigmentary Glaucoma, and Healthy Subjects with Spectral-domain OCT.

    PubMed

    Arifoglu, Hasan Basri; Simavli, Huseyin; Midillioglu, Inci; Berk Ergun, Sule; Simsek, Saban

    2017-01-01

    To evaluate the ganglion cell complex (GCC) and retinal nerve fiber layer (RNFL) thickness in pigment dispersion syndrome (PDS) and pigmentary glaucoma (PG) with RTVue spectral domain optical coherence tomography (SD-OCT). A total of 102 subjects were enrolled: 29 with PDS, 18 with PG, and 55 normal subjects. Full ophthalmic examination including visual field analysis was performed. SD-OCT was used to analyze GCC superior, GCC inferior, and average RNFL thickness. To compare the discrimination capabilities, the areas under the receiver operating characteristic curves were assessed. Superior GCC, inferior GCC, and RNFL thickness values of patients with PG were statistically significantly lower than those of patients with PDS (p < 0.001) and healthy individuals (p < 0.001 for all). No statistically significant difference was found between PDS and normal subjects in the same parameters (p > 0.05). The SD-OCT-derived GCC and RNFL thickness parameters can be useful to discriminate PG from both PDS and normal subjects.

  2. VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA

    PubMed Central

    Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu

    2009-01-01

    We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation penalty (SCAD) and adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. Particularly, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial is presented to illustrate the proposed methodology. PMID:20336190
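    For readers unfamiliar with SCAD, the penalty is usually defined through its derivative (the standard Fan-Li form, stated here as general background rather than anything specific to the missing-data extension above): for $\theta > 0$,

      $p_\lambda'(\theta) = \lambda \left\{ I(\theta \le \lambda) + \frac{(a\lambda - \theta)_+}{(a-1)\lambda} I(\theta > \lambda) \right\}$, commonly with $a = 3.7$,

    which applies LASSO-like shrinkage to small coefficients while leaving large coefficients nearly unpenalized, the behavior underlying the oracle property mentioned in the abstract.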

  3. The effect of abdominal obesity in patients with polycystic ovary syndrome on metabolic parameters.

    PubMed

    Franik, G; Bizoń, A; Włoch, S; Pluta, D; Blukacz, Ł; Milnerowicz, H; Madej, P

    2017-11-01

    Polycystic ovary syndrome and obesity contribute to metabolic complications in women of reproductive age. The aim of the present study was to analyze the effect of abdominal obesity, expressed using the waist/hip ratio (WHR), on metabolic parameters in patients with polycystic ovary syndrome. The study included 659 women with PCOS, aged between 17 and 44 years, with WHR < 0.8 or ≥ 0.8. Patients were tested for follicle-stimulating hormone, luteinizing hormone, 17-beta-estradiol, dehydroepiandrosterone sulfate, androstenedione, sex hormone binding globulin, and total lipid profile during the follicular phase (within 3 and 5 days of their menstrual cycle). Fasting glucose and insulin concentrations, and their concentrations after oral glucose administration, were also determined, and the De Ritis and Castelli I and II indices were calculated. Women with WHR ≥ 0.8 had higher concentrations of glucose and insulin (both fasting and 120 min after oral administration of 75 g glucose), as well as a higher HOMA-IR value, than women with WHR < 0.8. Abdominal obesity also disturbed hormonal parameters: a higher free androgen index and lower concentrations of sex hormone binding globulin and dehydroepiandrosterone sulfate were found in women with WHR ≥ 0.8. Follicle-stimulating hormone, luteinizing hormone, androstenedione, and 17-beta-estradiol were at similar levels in both groups. Elevated triglyceride, total cholesterol, and low-density lipoprotein levels, as well as a decreased high-density lipoprotein level, were found in the serum of women with WHR ≥ 0.8 compared to women with WHR < 0.8. Statistically significant correlations were found between the WHR value and glucose, insulin, sex hormone binding globulin, the free androgen index, and lipid profile parameters. Abdominal obesity thus causes additional disorders of metabolic and hormonal parameters in PCOS women, as confirmed by the differences in the analyzed parameters between PCOS women with WHR < 0.8 and WHR ≥ 0.8 and by the statistically significant correlations between WHR and the analyzed parameters.
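    For context, the HOMA-IR index referred to above is conventionally computed as follows (the standard definition, not a formula specific to this study):

      HOMA-IR = fasting glucose (mmol/L) × fasting insulin (µU/mL) / 22.5.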

  4. Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates

    NASA Astrophysics Data System (ADS)

    Todorovic, Andrijana; Plavsic, Jasna

    2015-04-01

    A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method, and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by one year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts at the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters. Correlation coefficients among the optimised model parameters and total precipitation P, mean temperature T and mean flow Q are calculated to give an insight into parameter dependence on the hydrometeorological drivers. The results reveal high sensitivity of almost all model parameters to the calibration period. The highest variability is displayed by the refreezing coefficient, water holding capacity, and temperature gradient. The only statistically significant (decreasing) trend is detected in the evapotranspiration reduction threshold. Statistically significant correlation is detected between the precipitation gradient and precipitation depth, and between the time-area histogram base and flows. All other correlations are not statistically significant, implying that changes in optimised parameters cannot generally be linked to changes in P, T or Q. As for model performance, the model reproduces the observed runoff satisfactorily, though the runoff is slightly overestimated in wet periods. The Nash-Sutcliffe efficiency coefficient (NSE) ranges from 0.44 to 0.79. Higher NSE values are obtained over wetter periods, which is supported by a statistically significant correlation between NSE and flows. Overall, no systematic variations in parameters or in model performance are detected. Parameter variability may therefore rather be attributed to errors in data or inadequacies in the model structure. Further research is required to examine the impact of the calibration strategy or model structure on the variability of optimised parameters in time.
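    The Nash-Sutcliffe efficiency entering the composite objective function has the standard definition (shown here for completeness; the log-flow variant replaces $Q$ by $\ln Q$):

      $\mathrm{NSE} = 1 - \sum_t (Q_{\mathrm{obs},t} - Q_{\mathrm{sim},t})^2 / \sum_t (Q_{\mathrm{obs},t} - \bar{Q}_{\mathrm{obs}})^2$,

    so NSE = 1 indicates a perfect fit, while NSE = 0 means the model performs no better than the mean of the observations.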

  5. 40 CFR Appendix IV to Part 265 - Tests for Significance

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... introductory statistics texts. ... student's t-test involves calculation of the value of a t-statistic for each comparison of the mean... parameter with its initial background concentration or value. The calculated value of the t-statistic must...
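    The excerpt above is truncated, but the comparison it describes is the standard one-sample Student's t form (our reconstruction for orientation, not the regulation's exact text):

      $t = (\bar{x} - \mu_b) / (s / \sqrt{n})$,

    where $\bar{x}$ and $s$ are the mean and standard deviation of $n$ replicate measurements of the monitoring parameter and $\mu_b$ is its initial background concentration or value.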

  6. The dynamical core of the Aeolus 1.0 statistical-dynamical atmosphere model: validation and parameter optimization

    NASA Astrophysics Data System (ADS)

    Totz, Sonja; Eliseev, Alexey V.; Petri, Stefan; Flechsig, Michael; Caesar, Levke; Petoukhov, Vladimir; Coumou, Dim

    2018-02-01

    We present and validate a set of equations for representing the atmosphere's large-scale general circulation in an Earth system model of intermediate complexity (EMIC). These dynamical equations have been implemented in Aeolus 1.0, which is a statistical-dynamical atmosphere model (SDAM) and includes radiative transfer and cloud modules (Coumou et al., 2011; Eliseev et al., 2013). The statistical dynamical approach is computationally efficient and thus enables us to perform climate simulations at multimillennia timescales, which is a prime aim of our model development. Further, this computational efficiency enables us to scan large and high-dimensional parameter space to tune the model parameters, e.g., for sensitivity studies. Here, we present novel equations for the large-scale zonal-mean wind as well as those for planetary waves. Together with synoptic parameterization (as presented by Coumou et al., 2011), these form the mathematical description of the dynamical core of Aeolus 1.0. We optimize the dynamical core parameter values by tuning all relevant dynamical fields to ERA-Interim reanalysis data (1983-2009), forcing the dynamical core with prescribed surface temperature, surface humidity and cumulus cloud fraction. We test the model's performance in reproducing the seasonal cycle and the influence of the El Niño-Southern Oscillation (ENSO). We use a simulated annealing optimization algorithm, which approximates the global minimum of a high-dimensional function. With non-tuned parameter values, the model performs reasonably in terms of its representation of zonal-mean circulation, planetary waves and storm tracks. The simulated annealing optimization improves in particular the model's representation of the Northern Hemisphere jet stream and storm tracks as well as the Hadley circulation. The regions of high azonal wind velocities (planetary waves) are accurately captured for all validation experiments. The zonal-mean zonal wind and the integrated lower troposphere mass flux show good results in particular in the Northern Hemisphere. In the Southern Hemisphere, the model tends to produce too-weak zonal-mean zonal winds and a too-narrow Hadley circulation. We discuss possible reasons for these model biases as well as planned future model improvements and applications.
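    As a sketch of the simulated annealing optimization described above (a generic implementation under our own naming, not the Aeolus tuning code itself):

      import math, random

      def simulated_annealing(cost, x0, neighbor, t0=1.0, cooling=0.995, steps=20000, seed=1):
          # Generic simulated annealing: accepts uphill moves with Boltzmann probability,
          # which lets the search escape local minima of a high-dimensional cost function.
          # `cost` maps a parameter vector to a scalar misfit (e.g. model-vs-reanalysis error);
          # `neighbor` proposes a perturbed parameter vector.
          random.seed(seed)
          x, fx = x0, cost(x0)
          best, fbest = x, fx
          t = t0
          for _ in range(steps):
              y = neighbor(x)
              fy = cost(y)
              if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                  x, fx = y, fy
                  if fx < fbest:
                      best, fbest = x, fx
              t *= cooling  # geometric cooling schedule
          return best, fbest

      # Toy demo: recover the minimum of a quadratic in two "parameters".
      cost = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
      neighbor = lambda p: [pi + random.gauss(0, 0.1) for pi in p]
      print(simulated_annealing(cost, [0.0, 0.0], neighbor))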

  7. Selection of physiological parameters for optoelectronic system supporting behavioral therapy of autistic children

    NASA Astrophysics Data System (ADS)

    Landowska, A.; Karpienko, K.; Wróbel, M.; Jedrzejewska-Szczerska, M.

    2014-11-01

    In this article a procedure for selecting physiological parameters for an optoelectronic system supporting behavioral therapy of autistic children is proposed. The authors designed and conducted an experiment in which a group of 30 healthy volunteers (16 females and 14 males) was examined. Under controlled conditions, participants were exposed to a stressful situation caused by a picture or sound (a 1 kHz constant tone, which was gradually silenced and finished with a shot sound). For each volunteer, a set of physiological parameters was recorded, including skin conductance, heart rate, peripheral temperature, respiration rate and electromyography. The selected characteristics were measured in different locations in order to choose the most suitable one for the designed therapy-supporting system. The bio-statistical analysis allowed us to discern the physiological parameters most associated with changes in a patient's emotional state: skin conductance, temperature and respiration rate. This allowed us to design an optoelectronic sensor network for supporting behavioral therapy of children with autism.

  9. HEART: an automated beat-to-beat cardiovascular analysis package using Matlab.

    PubMed

    Schroeder, M J Mark J; Perreault, Bill; Ewert, D L Daniel L; Koenig, S C Steven C

    2004-07-01

    A computer program is described for beat-to-beat analysis of cardiovascular parameters from high-fidelity pressure and flow waveforms. The Hemodynamic Estimation and Analysis Research Tool (HEART) is a post-processing analysis software package developed in Matlab that enables scientists and clinicians to document, load, view, calibrate, and analyze experimental data that have been digitally saved in ASCII or binary format. Analysis routines include traditional hemodynamic parameter estimates as well as more sophisticated analyses such as lumped arterial model parameter estimation and vascular impedance frequency spectra. Cardiovascular parameter values of all analyzed beats can be viewed and statistically analyzed. An attractive feature of the HEART program is the ability to analyze data with visual quality assurance throughout the process, thus establishing a framework within which Good Laboratory Practice (GLP) compliance can be achieved. Additionally, the development of HEART on the Matlab platform provides users with the flexibility to adapt or create study-specific analysis files according to their specific needs. Copyright 2003 Elsevier Ltd.

  10. Stochastic differential equation (SDE) model of opening gold share price of bursa saham malaysia

    NASA Astrophysics Data System (ADS)

    Hussin, F. N.; Rahman, H. A.; Bahar, A.

    2017-09-01

    The Black-Scholes option pricing model is one of the most recognized stochastic differential equation models in mathematical finance. Two parameter estimation methods are utilized here for the geometric Brownian motion (GBM) model: the historical method and the discrete method. The historical method is a statistical method that uses the independence and normality of logarithmic returns, yielding the simplest parameter estimates. The discrete method, in contrast, uses the transition density of the lognormal diffusion process, with estimates derived by maximum likelihood. These two methods are used to find parameter estimates for Malaysian gold share price data, namely the Financial Times and Stock Exchange (FTSE) Bursa Malaysia Emas and FTSE Bursa Malaysia Emas Shariah indices. Modelling gold share prices is important since gold price fluctuations affect the worldwide economy, including Malaysia. It is found that the discrete method gives better parameter estimates than the historical method, owing to its smaller Root Mean Square Error (RMSE) value.
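    A minimal sketch of the "historical" estimation method for GBM parameters, assuming the model dS = mu*S*dt + sigma*S*dW and using synthetic data rather than the Bursa Malaysia series:

      import numpy as np

      def estimate_gbm(prices, dt=1.0):
          # For GBM, log returns r_t = ln(S_t / S_{t-1}) are i.i.d. normal with
          # mean (mu - sigma^2/2)*dt and variance sigma^2*dt.
          r = np.diff(np.log(prices))
          sigma = r.std(ddof=1) / np.sqrt(dt)
          mu = r.mean() / dt + 0.5 * sigma**2
          return mu, sigma

      # Illustrative synthetic price series (not FTSE Bursa Malaysia data):
      rng = np.random.default_rng(42)
      s = 100 * np.exp(np.cumsum(rng.normal(0.0004, 0.01, size=500)))
      print(estimate_gbm(s))

    The discrete (maximum likelihood) method coincides with these estimates for equally spaced data under the lognormal transition density, but generalizes more naturally to irregular sampling.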

  11. Some Biochemical and Hematological Parameters among Petrol Station Attendants: A Comparative Study

    PubMed Central

    Abou-ElWafa, Hala Samir; Albadry, Ahmed A.; Bazeed, Fagr B.

    2015-01-01

    Objective. To describe selected biochemical and hematological parameters (blood picture, liver enzymes, and kidney functions) in petrol station attendants in Mansoura city. Methods. This is a comparative cross-sectional study. The exposed group included 102 petrol station attendants. They were compared to a matched group of 102 healthy male service and office workers at the Faculty of Medicine, Mansoura University. The results of blood picture, liver enzymes, and kidney functions were compared between both groups. Results. Mean red blood cell (RBC) count, hemoglobin level, and hematocrit (HCT) level were significantly lower in petrol station attendants than in the comparison group. All other blood picture parameters showed a nonsignificant difference between both groups. Liver enzymes, renal functions, serum albumin, and total protein showed statistically nonsignificant differences between both groups, except for alanine aminotransferase (ALT), which was significantly higher in petrol station attendants. Conclusions. Some laboratory parameters among petrol station attendants showed changes that could be attributed to workplace exposure and should be given attention at preemployment and periodic medical examination. PMID:26634207

  12. Statistical parameters of random heterogeneity estimated by analysing coda waves based on finite difference method

    NASA Astrophysics Data System (ADS)

    Emoto, K.; Saito, T.; Shiomi, K.

    2017-12-01

    Short-period (<1 s) seismograms are strongly affected by small-scale (<10 km) heterogeneities in the lithosphere. In general, short-period seismograms are analysed based on the statistical method by considering the interaction between seismic waves and randomly distributed small-scale heterogeneities. Statistical properties of the random heterogeneities have been estimated by analysing short-period seismograms. However, generally, the small-scale random heterogeneity is not taken into account for the modelling of long-period (>2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, a decomposition into body or surface waves, or the scattering properties assumed in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε²a³/(1 + a²m²)², where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.

  13. Cytological Study of Breast Carcinoma Before and After Oncotherapy with Special Reference to Morphometry and Proliferative Activity.

    PubMed

    Koley, Sananda; Chakrabarti, Srabani; Pathak, Swapan; Manna, Asim Kumar; Basu, Siddhartha

    2015-12-01

    Our study was done to assess the cytological changes due to oncotherapy in breast carcinoma, especially on morphometry and proliferative activity. Cytological aspirates were collected from a total of 32 cases of invasive ductal carcinoma, both before and after oncotherapy. Morphometry was done on the stained cytological smears to assess the different morphological parameters of cell dimension by using an ocular morphometer and the software AutoCAD 2007. Staining was done with Ki-67 and proliferating cell nuclear antigen (PCNA) as proliferative markers. The different morphological parameters were compared before and after oncotherapy by unpaired Student's t test, and p values were obtained. Statistically significant differences were found in morphometric parameters, e.g., mean nuclear diameter, mean nuclear area, mean cell diameter, and mean cell area, and in the expression of the proliferative markers (Ki-67 and PCNA). There are thus statistically significant differences between the morphological parameters of breast carcinoma cells before and after oncotherapy.

  14. A statistical investigation into the relationship between meteorological parameters and suicide

    NASA Astrophysics Data System (ADS)

    Dixon, Keith W.; Shulman, Mark D.

    1983-06-01

    Many previous studies of relationships between weather and suicides have been inconclusive and contradictory. This study investigated the relationship between suicide frequency and meteorological conditions in people who are psychologically predisposed to commit suicide. Linear regressions of diurnal temperature change, departure of temperature from the climatic norm, mean daytime sky cover, and the number of hours of precipitation for each day were performed on daily suicide totals using standard computer methods. Statistical analyses of suicide data for days with and without frontal passages were also performed. Days with five or more suicides (clusterdays) were isolated, and their weather parameters compared with those of nonclusterdays. Results show that neither suicide totals nor clusterday occurrence can be predicted using these meteorological parameters, since statistically significant relationships were not found. Although the data hinted that frontal passages and large daily temperature changes may occur on days with above average suicide totals, it was concluded that the influence of the weather parameters used, on the suicide rate, is a minor one, if indeed one exists.

  15. Optimization of pulsed laser welding process parameters in order to attain minimum underfill and undercut defects in thin 316L stainless steel foils

    NASA Astrophysics Data System (ADS)

    Pakmanesh, M. R.; Shamanian, M.

    2018-02-01

    In this study, the optimization of pulsed Nd:YAG laser welding parameters was performed on the lap joint of a 316L stainless steel foil with the aim of reducing weld defects through response surface methodology. For this purpose, the effects of peak power, pulse duration, and frequency were investigated. The most important weld defects seen with this method are underfill and undercut. By fitting a second-order polynomial, the response surface method could be employed to balance the welding parameters. The results showed that underfill increased with increasing power and decreasing frequency, and first increased and then decreased with increasing pulse duration; the most important parameter affecting it was the power, whose contribution was 65%. The undercut increased with increasing power, pulse duration, and frequency; the most important parameter affecting it was again the power, whose contribution was 64%. Finally, by superimposing the different responses, improved conditions were identified to attain a weld with no defects.
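    The fitted second-order response surface has the generic form (with $x_1$, $x_2$, $x_3$ standing for peak power, pulse duration, and frequency; the coefficients are fitted quantities not stated in the abstract):

      $y = \beta_0 + \sum_i \beta_i x_i + \sum_i \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j + \epsilon$,

    with one such polynomial fitted for each response (underfill and undercut) before the responses are superimposed.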

  16. Constraints on Cosmological Parameters from the Angular Power Spectrum of a Combined 2500 deg$^2$ SPT-SZ and Planck Gravitational Lensing Map

    DOE PAGES

    Simard, G.; et al.

    2018-06-20

    We report constraints on cosmological parameters from the angular power spectrum of a cosmic microwave background (CMB) gravitational lensing potential map created using temperature data from 2500 deg$^2$ of South Pole Telescope (SPT) data supplemented with data from Planck in the same sky region, with the statistical power in the combined map primarily from the SPT data. We fit the corresponding lensing angular power spectrum to a model including cold dark matter and a cosmological constant ($\Lambda$CDM), and to models with single-parameter extensions to $\Lambda$CDM. We find constraints that are comparable to and consistent with constraints found using the full-sky Planck CMB lensing data. Specifically, we find $\sigma_8 \Omega_{\rm m}^{0.25} = 0.598 \pm 0.024$ from the lensing data alone with relatively weak priors placed on the other $\Lambda$CDM parameters. In combination with primary CMB data from Planck, we explore single-parameter extensions to the $\Lambda$CDM model. We find $\Omega_k = -0.012^{+0.021}_{-0.023}$ or $M_{\

  18. Launch Vehicle Propulsion Design with Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.

    2005-01-01

    The techniques described herein define an optimization and evaluation approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of propulsion parameters, technology factors, and cost factors, and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters (chamber pressure, area ratio, and oxidizer/fuel ratio) is optimized in the seven design cases, and results are plotted to show the impacts on engine mass and overall vehicle mass.

  19. Hydrology and trout populations of cold-water rivers of Michigan and Wisconsin

    USGS Publications Warehouse

    Hendrickson, G.E.; Knutilla, R.L.

    1974-01-01

    Statistical multiple-regression analyses showed significant relationships between trout populations and hydrologic parameters. Parameters showing the higher levels of significance were temperature, hardness of water, percentage of gravel bottom, percentage of bottom vegetation, variability of streamflow, and discharge per unit drainage area. Trout populations increase with lower levels of annual maximum water temperatures, with increase in water hardness, and with increase in percentage of gravel and bottom vegetation. Trout populations also increase with decrease in variability of streamflow, and with increase in discharge per unit drainage area. Most hydrologic parameters were significant when evaluated collectively, but no parameter, by itself, showed a high degree of correlation with trout populations in regression analyses that included all the streams sampled. Regression analyses of stream segments that were restricted to certain limits of hardness, temperature, or percentage of gravel bottom showed improvements in correlation. Analyses of trout populations, in pounds per acre and pounds per mile, and hydrologic parameters resulted in regression equations from which trout populations could be estimated with standard errors of 89 and 84 per cent, respectively.

  20. Investigation of Statistical Inference Methodologies Through Scale Model Propagation Experiments

    DTIC Science & Technology

    2015-09-30

    statistical inference methodologies for ocean-acoustic problems by investigating and applying statistical methods to data collected from scale-model... to begin planning experiments for statistical inference applications. APPROACH: In the ocean acoustics community over the past two decades... solutions for waveguide parameters. With the introduction of statistical inference to the field of ocean acoustics came the desire to interpret marginal

  1. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    PubMed

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of such quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMM can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
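    As one concrete example of the latent-scale quantities discussed above, heritability in a GLMM is often written as follows (a standard textbook form given for orientation; the paper's exact expressions for observed-scale parameters are more involved):

      $h^2_{\mathrm{lat}} = \sigma^2_A / (\sigma^2_A + \sigma^2_R + \sigma^2_d)$,

    where $\sigma^2_A$ is the additive genetic variance, $\sigma^2_R$ collects the other latent-scale variance components, and $\sigma^2_d$ is the link-specific distribution variance (e.g., $\pi^2/3$ for a logit link).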

  2. Assessing statistical differences between parameters estimates in Partial Least Squares path modeling.

    PubMed

    Rodríguez-Entrena, Macario; Schuberth, Florian; Gelhard, Carsten

    2018-01-01

    Structural equation modeling using partial least squares (PLS-SEM) has become a mainstream modeling approach in various disciplines. Nevertheless, the prior literature still lacks practical guidance on how to properly test for differences between parameter estimates. Whereas existing techniques such as parametric and non-parametric approaches in PLS multi-group analysis only allow assessing differences between parameters estimated for different subpopulations, the study at hand introduces a technique that also allows assessing whether two parameter estimates derived from the same sample are statistically different. To illustrate this advancement to PLS-SEM, we refer to a reduced version of the well-established technology acceptance model.

  3. Sensitivity analysis of helicopter IMC decelerating steep approach and landing performance to navigation system parameters

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Results of a study to investigate, by means of a computer simulation, the performance sensitivity of helicopter IMC DSAL operations as a function of navigation system parameters are presented. A mathematical model generically representing a navigation system is formulated. The scenario simulated consists of a straight-in helicopter approach to landing along a 6 deg glideslope. The deceleration magnitude chosen is 0.3 g. The navigation model parameters are varied and the statistics of the total system errors (TSE) computed. These statistics are used to determine the critical navigation system parameters that affect the performance of the closed-loop navigation, guidance and control system of a UH-1H helicopter.

  4. Role of morphometry in the cytological differentiation of benign and malignant thyroid lesions

    PubMed Central

    Khatri, Pallavi; Choudhury, Monisha; Jain, Manjula; Thomas, Shaji

    2017-01-01

    Context: Thyroid nodules represent a common problem, with an estimated prevalence of 4–7%. Although fine needle aspiration cytology (FNAC) has been accepted as a first line diagnostic test, the rate of false negative reports of malignancy is still high. Nuclear morphometry is the measurement of nuclear parameters by image analysis. Image analysis can merge the advantages of morphologic interpretation with those of quantitative data. Aims: To evaluate the nuclear morphometric parameters in fine needle aspirates of thyroid lesions and to study its role in differentiating benign from malignant thyroid lesions. Material and Methods: The study included 19 benign and 16 malignant thyroid lesions. Image analysis was performed on Giemsa-stained FNAC slides by Nikon NIS-Elements Advanced Research software (Version 4.00). Nuclear morphometric parameters analyzed included nuclear size, shape, texture, and density parameters. Statistical Analysis: Normally distributed continuous variables were compared using the unpaired t-test for two groups and analysis of variance was used for three or more groups. Tukey or Tamhane's T2 multiple comparison test was used to assess the differences between the individual groups. Categorical variables were analyzed using the chi square test. Results and Conclusion: Five out of the six nuclear size parameters as well as all the texture and density parameters studied were significant in distinguishing between benign and malignant thyroid lesions (P < 0.05). Cut-off values were derived to differentiate between benign and malignant cases. PMID:28182069

  5. On the treatment of evapotranspiration, soil moisture accounting, and aquifer recharge in monthly water balance models

    USGS Publications Warehouse

    Alley, William M.

    1984-01-01

    Several two- to six-parameter regional water balance models are examined by using 50-year records of monthly streamflow at 10 sites in New Jersey. These models include variants of the Thornthwaite-Mather model, the Palmer model, and the more recent Thomas abcd model. Prediction errors are relatively similar among the models. However, simulated values of state variables such as soil moisture storage differ substantially among the models, and fitted parameter values for different models sometimes indicated an entirely different type of basin response to precipitation. Some problems in parameter identification are noted, including difficulties in identifying an appropriate time lag factor for the Thornthwaite-Mather-type model for basins with little groundwater storage, very high correlations between upper and lower storages in the Palmer-type model, and large sensitivity of parameter a of the abcd model to bias in estimates of precipitation and potential evapotranspiration. Modifications to the threshold concept of the Thornthwaite-Mather model were statistically valid for the six stations in northern New Jersey. The abcd model resulted in a simulated seasonal cycle of groundwater levels similar to fluctuations observed in nearby wells but with greater persistence. These results suggest that extreme caution should be used in attaching physical significance to model parameters and in using the state variables of the models in indices of drought and basin productivity.

  6. Statistical and population genetics issues of two Hungarian datasets from the aspect of DNA evidence interpretation.

    PubMed

    Szabolcsi, Zoltán; Farkas, Zsuzsa; Borbély, Andrea; Bárány, Gusztáv; Varga, Dániel; Heinrich, Attila; Völgyi, Antónia; Pamjav, Horolma

    2015-11-01

    When the DNA profile from a crime scene matches that of a suspect, the weight of the DNA evidence depends on the unbiased estimation of the match probability of the profiles. For this reason, it is necessary to establish and expand databases that reflect the actual allele frequencies in the population concerned. 21,473 complete DNA profiles from Databank samples were used to establish the allele frequency database to represent the population of Hungarian suspects. We used fifteen STR loci (PowerPlex ESI16), including five new ESS loci. The aim was to calculate the statistical and forensic efficiency parameters for the Databank samples and to compare the newly detected data to the earlier report. The population substructure caused by relatedness may influence the estimated profile frequencies. As our Databank profiles were considered non-random samples, possible relationships between the suspects can be assumed. Therefore, the population inbreeding effect was estimated using the FIS calculation. The overall inbreeding parameter was found to be 0.0106. Furthermore, we tested the impact of the two allele frequency datasets on 101 randomly chosen STR profiles, including full and partial profiles. The 95% confidence interval estimates for the profile frequencies (pM) resulted in a tighter range when we used the new dataset compared to the previously published ones. We found that FIS had less effect on frequency values in the 21,473 samples than the application of the minimum allele frequency. No genetic substructure was detected by STRUCTURE analysis. Due to the low level of the inbreeding effect and the high number of samples, the new dataset provides unbiased and precise estimates of LR for statistical interpretation of forensic casework and allows us to use lower allele frequencies. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
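    The inbreeding coefficient used above is conventionally estimated from heterozygosities (the standard population-genetic definition, shown for orientation):

      $F_{IS} = 1 - H_{\mathrm{obs}} / H_{\mathrm{exp}}$,

    so the reported value of 0.0106 corresponds to a roughly 1% deficit of observed heterozygotes relative to Hardy-Weinberg expectation.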

  7. Generalized Background Error covariance matrix model (GEN_BE v2.0)

    NASA Astrophysics Data System (ADS)

    Descombes, G.; Auligné, T.; Vandenberghe, F.; Barker, D. M.

    2014-07-01

    The specification of state background error statistics is a key component of data assimilation since it affects the impact observations will have on the analysis. In the variational data assimilation approach, applied in geophysical sciences, the dimensions of the background error covariance matrix (B) are usually too large to be explicitly determined and B needs to be modeled. Recent efforts to include new variables in the analysis such as cloud parameters and chemical species have required the development of the code to GENerate the Background Errors (GEN_BE) version 2.0 for the Weather Research and Forecasting (WRF) community model to allow for a simpler, flexible, robust, and community-oriented framework that gathers methods used by meteorological operational centers and researchers. We present the advantages of this new design for the data assimilation community by performing benchmarks and showing some of the new features on data assimilation test cases. As data assimilation for clouds remains a challenge, we present a multivariate approach that includes hydrometeors in the control variables and new correlated errors. In addition, the GEN_BE v2.0 code is employed to diagnose error parameter statistics for chemical species, which shows that it is a tool flexible enough to involve new control variables. While the generation of the background errors statistics code has been first developed for atmospheric research, the new version (GEN_BE v2.0) can be easily extended to other domains of science and be chosen as a testbed for diagnostic and new modeling of B. Initially developed for variational data assimilation, the model of the B matrix may be useful for variational ensemble hybrid methods as well.
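    For orientation, the background error covariance matrix modeled by GEN_BE is defined with respect to the (unknown) true state (standard variational data assimilation notation, not code-specific):

      $\mathbf{B} = \langle (\mathbf{x}_b - \mathbf{x}_t)(\mathbf{x}_b - \mathbf{x}_t)^T \rangle$,

    whose dimension, the squared size of the model state, is what makes explicit computation infeasible and motivates the modeling described above.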

  8. Generalized background error covariance matrix model (GEN_BE v2.0)

    NASA Astrophysics Data System (ADS)

    Descombes, G.; Auligné, T.; Vandenberghe, F.; Barker, D. M.; Barré, J.

    2015-03-01

    The specification of state background error statistics is a key component of data assimilation since it affects the impact observations will have on the analysis. In the variational data assimilation approach, applied in geophysical sciences, the dimensions of the background error covariance matrix (B) are usually too large to be explicitly determined and B needs to be modeled. Recent efforts to include new variables in the analysis such as cloud parameters and chemical species have required the development of the code to GENerate the Background Errors (GEN_BE) version 2.0 for the Weather Research and Forecasting (WRF) community model. GEN_BE allows for a simpler, flexible, robust, and community-oriented framework that gathers methods used by some meteorological operational centers and researchers. We present the advantages of this new design for the data assimilation community by performing benchmarks of different modeling of B and showing some of the new features in data assimilation test cases. As data assimilation for clouds remains a challenge, we present a multivariate approach that includes hydrometeors in the control variables and new correlated errors. In addition, the GEN_BE v2.0 code is employed to diagnose error parameter statistics for chemical species, which shows that it is a tool flexible enough to implement new control variables. While the generation of the background errors statistics code was first developed for atmospheric research, the new version (GEN_BE v2.0) can be easily applied to other domains of science and chosen to diagnose and model B. Initially developed for variational data assimilation, the model of the B matrix may be useful for variational ensemble hybrid methods as well.

  9. Can Eosinophil Count, Platelet Count, and Mean Platelet Volume Be a Positive Predictive Factor in Penile Arteriogenic Erectile Dysfunction Etiopathogenesis?

    PubMed Central

    Sönmez, Mehmet Giray; Göğer, Yunus Emre; Sönmez, Leyla Öztürk; Aydın, Arif; Balasar, Mehmet; Kara, Cengiz

    2016-01-01

    Blood count parameters of patients presenting with erectile dysfunction (ED) were examined in this study, and it was investigated whether eosinophil count (EC), platelet count (PC), and mean platelet volume (MPV), among the suspected predictive parameters which may play a role especially in penile arteriogenic ED etiopathogenesis, contribute to the pathogenesis. Patients presenting with an ED complaint were evaluated. Based on the medical history, ED severity was determined by measuring the International Index of Erectile Function. Penile Doppler ultrasonography was performed in patients suspected to have vasculogenic ED. According to the penile Doppler ultrasonography result, patients with arterial deficiency were included in the penile arteriogenic ED group and patients with normal results were included in the nonvasculogenic ED group. A total of 36 patients from the penile arteriogenic ED group and 32 patients from the nonvasculogenic ED group participated in the study. Compared with the nonvasculogenic ED group, the penile arteriogenic ED group had a significantly lower International Index of Erectile Function score and significantly higher EC, MPV, and PC values (p < .001, p = .021, p = .018, p = .034, respectively). No statistically significant difference was observed between the two groups in age, white blood cells, red blood cells, and hemoglobin values. Pansystolic volume velocities, measured at the 5th, 10th, 15th, and 20th minutes on the right and left sides, were statistically significantly lower in the penile arteriogenic ED group than in the nonvasculogenic ED group. A high MPV value and PC are significant predictive factors for penile arteriogenic ED and vasculogenic ED, and a high EC is specifically predictive of arteriogenic ED. PMID:27895254

  10. Substituting values for censored data from Texas, USA, reservoirs inflated and obscured trends in analyses commonly used for water quality target development.

    PubMed

    Grantz, Erin; Haggard, Brian; Scott, J Thad

    2018-06-12

    We calculated four median datasets of chlorophyll a (Chl a), total phosphorus (TP), and transparency using multiple approaches to handling censored observations, including substituting fractions of the quantification limit (QL; dataset 1 = 1QL, dataset 2 = 0.5QL) and statistical methods for censored datasets (datasets 3-4), for approximately 100 Texas, USA reservoirs. Trend analyses of differences between dataset 1 and 3 medians indicated that the percent difference increased linearly above thresholds in percent censored data (%Cen). This relationship was extrapolated to estimate medians for site-parameter combinations with %Cen > 80%, which were combined with dataset 3 as dataset 4. Changepoint analysis of Chl a-TP and transparency-TP relationships indicated threshold differences of up to 50% between datasets. Recursive analysis identified secondary thresholds in dataset 4. The threshold differences show that information introduced via substitution, or missing due to limitations of the statistical methods, biased values, underestimated error, and inflated the strength of TP thresholds identified in datasets 1-3. Analysis of covariance identified differences in linear regression models relating transparency to TP between datasets 1, 2, and the more statistically robust datasets 3-4. The study findings identify high-risk scenarios for biased analytical outcomes when using substitution. These include a high probability of median overestimation when %Cen > 50-60% for a single QL, or when %Cen is as low as 16% for multiple QLs. Changepoint analysis was uniquely vulnerable to substitution effects when using medians from sites with %Cen > 50%. Linear regression analysis was less sensitive to substitution and missing-data effects, but differences in model parameters for transparency cannot be discounted and could be magnified by log-transformation of the variables.
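    A toy numeric illustration of why the substitution choice matters; the values and the 10 ug/L quantification limit are invented for illustration and are not the study's data:

      import numpy as np

      ql = 10.0
      observed = np.array([12.0, 18.0, 25.0, 40.0])  # quantified TP values
      n_censored = 4                                 # results reported as "<QL"

      dataset1 = np.concatenate([np.full(n_censored, ql), observed])        # substitute 1*QL
      dataset2 = np.concatenate([np.full(n_censored, 0.5 * ql), observed])  # substitute 0.5*QL

      # With 50% censoring the two substitution rules give clearly different medians
      # (11.0 vs 8.5 here), and the gap widens as the censored fraction grows.
      print(np.median(dataset1), np.median(dataset2))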

  11. Dispersion of a Passive Scalar Fluctuating Plume in a Turbulent Boundary Layer. Part I: Velocity and Concentration Measurements

    NASA Astrophysics Data System (ADS)

    Nironi, Chiara; Salizzoni, Pietro; Marro, Massimo; Mejean, Patrick; Grosjean, Nathalie; Soulhac, Lionel

    2015-09-01

    The prediction of the probability density function (PDF) of a pollutant concentration within atmospheric flows is of primary importance in estimating the hazard related to accidental releases of toxic or flammable substances and their effects on human health. This need motivates studies devoted to the characterization of concentration statistics of pollutants dispersion in the lower atmosphere, and their dependence on the parameters controlling their emissions. As is known from previous experimental results, concentration fluctuations are significantly influenced by the diameter of the source and its elevation. In this study, we aim to further investigate the dependence of the dispersion process on the source configuration, including source size, elevation and emission velocity. To that end we study experimentally the influence of these parameters on the statistics of the concentration of a passive scalar, measured at several distances downwind of the source. We analyze the spatial distribution of the first four moments of the concentration PDFs, with a focus on the variance, its dissipation and production and its spectral density. The information provided by the dataset, completed by estimates of the intermittency factors, allow us to discuss the role of the main mechanisms controlling the scalar dispersion and their link to the form of the PDF. The latter is shown to be very well approximated by a Gamma distribution, irrespective of the emission conditions and the distance from the source. Concentration measurements are complemented by a detailed description of the velocity statistics, including direct estimates of the Eulerian integral length scales from two-point correlations, a measurement that has been rarely presented to date.
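    The Gamma concentration PDF mentioned above, written in terms of the mean $\langle c \rangle$ and shape parameter $k$ (a standard form in the concentration-fluctuation literature, where $k$ is commonly related to the fluctuation intensity $i_c = \sigma_c/\langle c \rangle$ by $k = 1/i_c^2$):

      $p(c) = \frac{k^k}{\Gamma(k)\,\langle c \rangle^k}\, c^{k-1} \exp(-k c / \langle c \rangle)$,

    so the full one-point statistics are fixed by the mean and variance fields alone.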

  12. Lateral parapatellar and subvastus approaches are superior to the medial parapatellar approach in terms of soft tissue perfusion.

    PubMed

    Koçak, Aykut; Özmeriç, Ahmet; Koca, Gökhan; Senes, Mehmet; Yumuşak, Nihat; Iltar, Serkan; Korkmaz, Meliha; Alemdaroğlu, Kadir Bahadır

    2017-08-23

    The arthrotomy techniques of knee surgery may cause varying degrees of disruption to the tissue blood supply. The aim of this study was to investigate the effects of the medial parapatellar (MPPa), midvastus (MVa), subvastus (SVa) and lateral parapatellar (LPPa) approaches on regional tissue perfusion of the knee. In this experimental study, a total of 28 female rabbits were assigned to four different arthrotomy techniques: Group MPPa, Group MVa, Group SVa and Group LPPa. The blood supply of the tissue around the knee was examined by scintigraphic imaging, including the perfusion reserve and Tmax; biochemical alterations in oxidative stress parameters, including malondialdehyde (MDA) and fluorescent oxidation products (FlOPs), and histopathological findings were evaluated on tissue samples after 3 weeks. The perfusion reserve was increased in all four groups compared to the healthy, contralateral knees. In Group LPPa, vascularity was significantly increased compared to Group MPPa (p = 0.006). Among the biochemical parameters, the increase in MDA levels was statistically significant in Group MPPa compared with Group LPPa (p = 0.004), and in Group MVa compared with Group LPPa (p = 0.006). The increase in MDA levels was striking in Groups MPPa and MVa compared with the control group (p = 0.004 for each). The increase in another oxidative stress parameter, tissue FlOPs levels, was statistically significant in Group MPPa compared with the control group (p = 0.035). The LPPa and SVa caused less oxidative stress and less disruption of the muscle blood supply, as measured by biochemical and scintigraphic parameters, than the MPPa and MVa. Therefore, in clinical practice, the SVa is preferable to the MPPa and MVa in total knee arthroplasty, and the LPPa should be preferred more frequently in selected cases with critical soft tissue viability.

  13. Non-linear matter power spectrum covariance matrix errors and cosmological parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Blot, L.; Corasaniti, P. S.; Amendola, L.; Kitching, T. D.

    2016-06-01

    The covariance of the matter power spectrum is a key element of the analysis of galaxy clustering data. Independent realizations of observational measurements can be used to sample the covariance; nevertheless, statistical sampling errors will propagate into the cosmological parameter inference, potentially limiting the capabilities of the upcoming generation of galaxy surveys. The impact of these errors as a function of the number of realizations has been previously evaluated for Gaussian distributed data. However, non-linearities in the late-time clustering of matter cause departures from Gaussian statistics. Here, we address the impact of non-Gaussian errors on the sample covariance and precision matrix errors using a large ensemble of N-body simulations. In the range of modes where finite volume effects are negligible (0.1 ≲ k [h Mpc⁻¹] ≲ 1.2), we find deviations of the variance of the sample covariance with respect to Gaussian predictions above ∼10 per cent at k > 0.3 h Mpc⁻¹. Over the entire range these reduce to about ∼5 per cent for the precision matrix. Finally, we perform a Fisher analysis to estimate the effect of covariance errors on the cosmological parameter constraints. In particular, assuming Euclid-like survey characteristics, we find that more than 5000 independent realizations are necessary to reduce the contribution of sampling errors to the cosmological parameter uncertainties to the subpercent level. We also show that restricting the analysis to large scales, k ≲ 0.2 h Mpc⁻¹, results in a considerable loss of constraining power, while using the linear covariance to include smaller scales leads to an underestimation of the errors on the cosmological parameters.
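
    A small numpy sketch of the Gaussian baseline against which such departures are measured: sampling a covariance from a finite set of realizations and applying the standard (Hartlap) debiasing factor to its inverse. The dimensions are placeholders, not the paper's:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_bins, n_real = 20, 200   # hypothetical: power-spectrum bins, N-body realizations

    # Stand-in "measurements": one band-power vector per realization.
    samples = rng.normal(size=(n_real, n_bins))

    # Sample covariance of the band powers across realizations.
    C_hat = np.cov(samples, rowvar=False)

    # The naive inverse is a biased estimate of the precision matrix; for
    # Gaussian data the standard (Hartlap) debiasing factor is
    # (n_real - n_bins - 2) / (n_real - 1).
    hartlap = (n_real - n_bins - 2) / (n_real - 1)
    Psi_hat = hartlap * np.linalg.inv(C_hat)

    print("Hartlap factor:", round(hartlap, 3))
    ```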

  14. Modeling the shape and composition of the human body using dual energy X-ray absorptiometry images

    PubMed Central

    Shepherd, John A.; Fan, Bo; Schwartz, Ann V.; Cawthon, Peggy; Cummings, Steven R.; Kritchevsky, Stephen; Nevitt, Michael; Santanasto, Adam; Cootes, Timothy F.

    2017-01-01

    There is growing evidence that body shape and regional body composition are strong indicators of metabolic health. The purpose of this study was to develop statistical models that accurately describe holistic body shape, thickness, and leanness. We hypothesized that there are unique body shape features that are predictive of mortality beyond standard clinical measures. We developed algorithms to process whole-body dual-energy X-ray absorptiometry (DXA) scans into body thickness and leanness images. We performed statistical appearance modeling (SAM) and principal component analysis (PCA) to efficiently encode the variance of body shape, leanness, and thickness across a sample of 400 older Americans from the Health ABC study. The sample included 200 cases and 200 controls based on 6-year mortality status, matched on sex, race, and BMI. The final model contained 52 points outlining the torso, upper arms, thighs, and bony landmarks. Correlation analyses were performed on the PCA parameters to identify body shape features that vary across groups and with metabolic risk. Stepwise logistic regression was performed to identify sex and race, and to predict mortality risk, as a function of body shape parameters. These parameters are novel body composition features that uniquely identify body phenotypes of different groups and predict mortality risk. Three parameters from a SAM of body leanness and thickness accurately identified sex (training AUC = 0.99) and six accurately identified race (training AUC = 0.91) in the sample dataset. Three parameters from a SAM of only body thickness predicted mortality (training AUC = 0.66, validation AUC = 0.62). Further study is warranted to identify specific shape/composition features that predict other health outcomes.
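
    A hedged sketch of this kind of pipeline (PCA encoding followed by logistic regression on a few component scores); the data here are random placeholders, so the resulting AUC is illustrative only:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)

    # Hypothetical stand-in for flattened thickness/leanness images:
    # 400 subjects x 5000 pixels, with binary 6-year mortality labels.
    X = rng.normal(size=(400, 5000))
    y = rng.integers(0, 2, size=400)

    # Encode shape/composition variance compactly, as in a statistical
    # appearance model, then regress the outcome on a few PCA scores.
    pca = PCA(n_components=10).fit(X)
    scores = pca.transform(X)

    clf = LogisticRegression(max_iter=1000).fit(scores[:, :3], y)
    auc = roc_auc_score(y, clf.predict_proba(scores[:, :3])[:, 1])
    print(f"training AUC = {auc:.2f}")   # random labels here, so expect ~0.5
    ```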

  15. Multivariate figures of merit (FOM) investigation on the effect of instrument parameters on a Fourier transform-near infrared spectroscopy (FT-NIRS) based content uniformity method on core tablets.

    PubMed

    Doddridge, Greg D; Shi, Zhenqi

    2015-01-01

    Since near infrared spectroscopy (NIRS) was introduced to the pharmaceutical industry, effort has been spent on leveraging the power of chemometrics to extract the best possible signal to correlate with the analyte of interest. In contrast, only a few studies have addressed the potential impact of instrument parameters, such as resolution and co-adds (i.e., the number of averaged replicate spectra), on method performance in terms of error statistics. In this study, a holistic approach was used to evaluate the effect of the instrument parameters of an FT-NIR spectrometer on the performance of a content uniformity method with respect to a list of figures of merit. The figures of merit included error statistics, signal-to-noise ratio (S/N), sensitivity, analytical sensitivity, effective resolution, selectivity, limit of detection (LOD), and noise. A Bruker MPA FT-NIR spectrometer was used to investigate an experimental design in terms of resolution (4 cm⁻¹ and 32 cm⁻¹) and co-adds (256 and 16), plus a center point at 8 cm⁻¹ and 32 co-adds. Given the balance among underlying chemistry, instrument parameters, chemometrics, and measurement time, 8 cm⁻¹ and 32 co-adds in combination with appropriate 2nd derivative preprocessing was found to fit the intended purpose of a content uniformity method best. Considerations for optimizing both instrument parameters and chemometrics are proposed and discussed in order to maximize method performance for future NIRS method development in R&D.
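
    A minimal sketch of the 2nd-derivative preprocessing step, assuming a Savitzky-Golay filter (a common choice; the abstract does not name the specific derivative algorithm used):

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    # Hypothetical FT-NIR spectrum: absorbance on an evenly spaced wavenumber grid.
    wavenumbers = np.linspace(4000, 10000, 1557)
    spectrum = np.exp(-0.5 * ((wavenumbers - 6800) / 150) ** 2) \
        + 0.002 * (wavenumbers - 4000) / 6000   # one band plus a sloping baseline

    # Second-derivative preprocessing: suppresses additive and linear baseline
    # effects while sharpening overlapped bands. Window and polynomial order
    # are tuning choices, balanced against the noise amplification they cause.
    d2 = savgol_filter(spectrum, window_length=15, polyorder=3, deriv=2)
    print(d2.min(), d2.max())
    ```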

  16. Statistical theory of dynamo

    NASA Astrophysics Data System (ADS)

    Kim, E.; Newton, A. P.

    2012-04-01

    One major problem in dynamo theory is the multi-scale nature of MHD turbulence, which requires a statistical theory in terms of probability distribution functions. In this contribution, we present the statistical theory of magnetic fields in a simplified mean field α-Ω dynamo model by varying the statistical properties of alpha, including marginal stability and intermittency, and then utilize observational data of solar activity to fine-tune the mean field dynamo model. Specifically, we first present a comprehensive investigation into the effect of the stochastic parameters in a simplified α-Ω dynamo model. By considering the manifold of marginal stability (the region of parameter space where the mean growth rate is zero), we show that stochastic fluctuations are conducive to dynamo action. Furthermore, by considering the cases of fluctuating alpha that are periodic and Gaussian coloured random noise with identical characteristic time-scales and fluctuating amplitudes, we show that the transition to dynamo is significantly facilitated for stochastic alpha with random noise. We also show that probability density functions (PDFs) of the growth rate, magnetic field and magnetic energy can provide a wealth of useful information regarding dynamo behaviour and intermittency. Finally, the precise statistical properties of the dynamo, such as temporal correlation and fluctuating amplitude, are found to depend on the distribution of the fluctuations of the stochastic parameters. We then use observations of solar activity to constrain the parameters of stochastic α-Ω nonlinear dynamo models. This is achieved by performing a comprehensive statistical comparison, computing PDFs of solar activity from observations and from our simulations of the mean field dynamo model. The observational data used are the time history of solar activity inferred from 14C data over the past 11,000 years on a long time scale, and direct observations of sunspot numbers for the years 1795-1995 on a short time scale. Monte Carlo simulations are performed on these data to obtain PDFs of the solar activity on both long and short time scales. These PDFs are then compared with the PDFs predicted by numerical simulation of our α-Ω dynamo model, where α is assumed to have both a mean part α0 and a fluctuating part α'. By varying the correlation time of the fluctuating α', the ratio of the fluctuating to the mean alpha amplitude <α'²>/α0² (where angular brackets <> denote an ensemble average), and the ratio of poloidal to toroidal magnetic fields, we show that the results from our stochastic dynamo model can match the PDFs of solar activity on both long and short time scales. In particular, good agreement is obtained when the fluctuation in alpha is roughly equal to the mean part, with a correlation time shorter than the solar period.
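
    A toy sketch of the kind of stochastic model described: a mean-field amplitude whose growth rate carries a fluctuating alpha, modelled here as coloured (Ornstein-Uhlenbeck) noise. All parameter values and the saturation rule are illustrative, not taken from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Ensemble of field amplitudes with a schematic growth rate alpha0 + alpha'(t)
    # minus a critical threshold; alpha'(t) is OU noise with correlation time tau.
    dt, n_steps, tau = 0.01, 20000, 0.5
    alpha0, sigma_alpha, alpha_crit = 1.0, 1.0, 1.2   # mean, rms fluctuation, threshold

    B = np.full(1000, 1e-3)          # ensemble of field amplitudes
    a = np.zeros(1000)               # alpha' for each ensemble member
    for _ in range(n_steps):
        a += (-a / tau) * dt + sigma_alpha * np.sqrt(2 * dt / tau) * rng.normal(size=a.size)
        growth = (alpha0 + a) - alpha_crit        # schematic growth rate
        B *= np.exp(growth * dt)
        B = np.minimum(B, 1.0)                    # crude nonlinear saturation

    # PDF of the magnetic energy across the ensemble.
    hist, edges = np.histogram(B**2, bins=50, density=True)
    print(hist[:5])
    ```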

  17. Distribution of ULF energy (f < 80 mHz) in the inner magnetosphere - A statistical analysis of AMPTE CCE magnetic field data

    NASA Technical Reports Server (NTRS)

    Takahashi, Kazue; Anderson, Brian J.

    1992-01-01

    Magnetic field measurements made with the AMPTE CCE spacecraft are used to investigate the distribution of ULF energy in the inner magnetosphere. The database is used to examine the spatial distribution of ULF energy. The spatial distribution of wave power and the spectral structures are used to identify several pulsation types, including multiharmonic toroidal oscillations, equatorial compressional Pc 3 oscillations, second harmonic poloidal oscillations, and nightside compressional oscillations. The frequencies of the toroidal oscillations are used to determine the statistical radial profile of the plasma mass density and Alfven velocity. A clear signature of the plasmapause is found in the profiles of these average parameters.

  18. Desensitized Optimal Filtering and Sensor Fusion Toolkit

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2015-01-01

    Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions, as well as Monte Carlo analysis capability, is included to enable statistical performance evaluations.
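
    For context, a minimal sketch of the kind of linear Kalman filter such a toolkit builds on; the desensitized variant additionally penalizes the estimate's sensitivity to uncertain model parameters, which is not shown here:

    ```python
    import numpy as np

    def kalman_step(x, P, F, Q, H, R, z):
        # Predict.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update with measurement z.
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Constant-velocity example with a position-only measurement (values hypothetical).
    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = 1e-4 * np.eye(2)
    R = np.array([[0.01]])
    x, P = np.zeros(2), np.eye(2)
    x, P = kalman_step(x, P, F, Q, H, R, z=np.array([0.05]))
    print(x)
    ```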

  19. Evaluation of the clinical and antimicrobial effects of the Er:YAG laser or topical gaseous ozone as adjuncts to initial periodontal therapy.

    PubMed

    Yılmaz, Selçuk; Algan, Serdar; Gursoy, Hare; Noyan, Ulku; Kuru, Bahar Eren; Kadir, Tanju

    2013-06-01

    The aim of this study was to evaluate the clinical and microbiological results of treatment with the Er:YAG laser and topical gaseous ozone application as adjuncts to initial periodontal therapy in chronic periodontitis (CP) patients. Although many studies have evaluated the effectiveness of the Er:YAG laser as an adjunct to initial periodontal therapy, few studies have focused on the use of gaseous ozone as an adjunct. Thirty patients with CP were randomly divided into three parallel groups, each composed of 10 individuals with at least four teeth having at least one approximal site with a probing depth (PD) of ≥5 mm and a sulcus bleeding index (SBI) ≥2 in each quadrant. The groups received: (1) scaling and root planing (SRP) + Er:YAG laser; (2) SRP + topical gaseous ozone; or (3) SRP alone. The microbiological and clinical parameters were monitored at day 0 and day 90. At the end of the observation period, statistically significant improvements in clinical parameters were observed within each group. In parallel with the clinical changes, all treatments reduced the total bacterial count and the proportion of obligately anaerobic microorganisms. Although intergroup comparisons of microbiological parameters showed no significant differences, clinical findings, including attachment gain and PD reduction, were statistically significant in favor of the SRP+Er:YAG laser group. Although statistically nonsignificant, the reduction in obligate anaerobes was greatest in the SRP+Er:YAG laser group, with a similar decrease in the SRP+topical gaseous ozone group, suggesting that ozone has an antimicrobial effect comparable to that of the Er:YAG laser.

  20. Model for predicting the injury severity score.

    PubMed

    Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi

    2015-07-01

    To determine a formula that predicts the injury severity score from parameters obtained in the emergency department on arrival, we reviewed the medical records of trauma patients who were transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products were examined in those patients on arrival. To determine the formula that predicts the injury severity score, multiple linear regression analysis was carried out. The injury severity score was set as the dependent variable, and the other parameters were set as candidate objective variables. IBM SPSS Statistics 20 was used for the statistical analysis. Statistical significance was set at P < 0.05. To select objective variables, the stepwise method was used. A total of 122 patients were included in this study. The formula for predicting the injury severity score (ISS) was: ISS = 13.252 − 0.078 × (mean blood pressure) + 0.12 × (fibrin degradation products). The P-value of this formula from analysis of variance was <0.001, and the multiple correlation coefficient (R) was 0.739 (R² = 0.546). The multiple correlation coefficient adjusted for the degrees of freedom was 0.538. The Durbin-Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was developed with ordinary parameters such as fibrin degradation products and mean blood pressure. This formula is useful because the injury severity score can be predicted easily in the emergency department.
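
    The reported regression formula transcribes directly to code (the abstract does not state units, and the example inputs below are hypothetical):

    ```python
    # Direct transcription of the regression formula reported in the abstract.
    def predict_iss(mean_bp: float, fdp: float) -> float:
        return 13.252 - 0.078 * mean_bp + 0.12 * fdp

    # Example: a hypothetical patient with mean blood pressure 80 and FDP 60.
    print(predict_iss(80.0, 60.0))   # 13.252 - 6.24 + 7.2 = 14.212
    ```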

  1. Analysis of Anatomic and Functional Measures in X-Linked Retinoschisis

    PubMed Central

    Cukras, Catherine A.; Huryn, Laryssa A.; Jeffrey, Brett P.; Turriff, Amy; Sieving, Paul A.

    2018-01-01

    Purpose: To examine the symmetry of structural and functional parameters between eyes in patients with X-linked retinoschisis (XLRS), as well as changes in visual acuity and electrophysiology over time. Methods: This is a single-center observational study of 120 males with XLRS who were evaluated at the National Eye Institute. Examinations included best-corrected visual acuity for all participants, as well as ERG recording and optical coherence tomography (OCT) on a subset of participants. Statistical analyses were performed using nonparametric Spearman correlations and linear regression. Results: Our analyses demonstrated a statistically significant correlation of structural and functional measures between the two eyes of XLRS patients for all parameters. OCT central macular thickness (n = 78; Spearman r = 0.83, P < 0.0001) and ERG b/a ratio (n = 78; Spearman r = 0.82, P < 0.0001) were the most strongly correlated between a participant's eyes, whereas visual acuity was less strongly correlated (n = 120; Spearman r = 0.47, P < 0.0001). Stability of visual acuity was observed, with an average change of less than one letter (n = 74; OD −0.66 and OS −0.70 letters) over a mean follow-up time of 6.8 years. There was no statistically significant change in the ERG b/a ratio within eyes over time. Conclusions: Although a broad spectrum of clinical phenotypes is observed across individuals with XLRS, our study demonstrates a significant correlation of structural and functional findings between the two eyes and stability of measures of acuity and ERG parameters over time. These results highlight the utility of the fellow eye as a reference for monocular interventional trials.

  2. The influence of control group reproduction on the statistical power of the Environmental Protection Agency's Medaka Extended One Generation Reproduction Test (MEOGRT).

    PubMed

    Flynn, Kevin; Swintek, Joe; Johnson, Rodney

    2017-02-01

    Because of various Congressional mandates to protect the environment from endocrine disrupting chemicals (EDCs), the United States Environmental Protection Agency (USEPA) initiated the Endocrine Disruptor Screening Program. Within this framework, the Office of Research and Development of the USEPA developed the Medaka Extended One Generation Reproduction Test (MEOGRT) to characterize the endocrine action of a suspected EDC. One important endpoint of the MEOGRT is the fecundity of medaka breeding pairs. Power analyses were conducted to determine the number of replicates needed in proposed test designs and to determine the effects that varying reproductive parameters (e.g., mean fecundity, variance, and days with no egg production) would have on the statistical power of the test. The MEOGRT Reproduction Power Analysis Tool (MRPAT) is a software tool developed to expedite these power analyses by both calculating estimates of the needed reproductive parameters (e.g., population mean and variance) and performing the power analysis under user-specified scenarios. Example scenarios are detailed that highlight the importance of the reproductive parameters for statistical power. When control fecundity is increased from 21 to 38 eggs per pair per day and the variance is decreased from 49 to 20, the gain in power is equivalent to increasing replication by 2.5 times. On the other hand, if 10% of the breeding pairs, including controls, do not spawn, the power to detect a 40% decrease in fecundity drops to 0.54, from nearly 0.98 when all pairs have some level of egg production. Perhaps most importantly, MRPAT was used to inform the decision-making process that led to the final recommendation for the MEOGRT of 24 control breeding pairs and 12 breeding pairs in each exposure group.
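
    A hedged sketch of a Monte Carlo power calculation in the spirit of MRPAT (its actual statistical machinery is not described in the abstract): daily fecundity is modelled as overdispersed negative binomial counts via a gamma-Poisson mixture, and groups are compared with a rank test. Test duration and variance values are assumptions:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    def nb_counts(mu, var, size):
        # Negative binomial via gamma-Poisson mixture, parameterized by mean/variance.
        p = mu / var                   # requires var > mu (overdispersion)
        n = mu * p / (1 - p)
        lam = rng.gamma(shape=n, scale=(1 - p) / p, size=size)
        return rng.poisson(lam)

    def power(n_ctrl_pairs, n_trt_pairs, mu_ctrl, mu_trt, var,
              n_days=21, n_sim=1000, alpha=0.05):
        hits = 0
        for _ in range(n_sim):
            c = nb_counts(mu_ctrl, var, (n_ctrl_pairs, n_days)).mean(axis=1)
            t = nb_counts(mu_trt, var, (n_trt_pairs, n_days)).mean(axis=1)
            if stats.mannwhitneyu(c, t, alternative="greater").pvalue < alpha:
                hits += 1
        return hits / n_sim

    # Power to detect a 40% fecundity decrease with the recommended design
    # of 24 control and 12 treatment pairs.
    print(power(24, 12, mu_ctrl=38, mu_trt=38 * 0.6, var=49))
    ```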

  3. A randomized double-blind placebo-controlled crossover-style trial of buspirone in functional dysphagia and ineffective esophageal motility.

    PubMed

    Aggarwal, Nitin; Thota, Prashanthi Nagavenkata; Lopez, Rocio; Gabbard, Scott

    2018-02-01

    Studies suggest that ineffective esophageal motility (IEM) is the manometric correlate of functional dysphagia (FD). Currently, there is no accepted therapy for either condition. Buspirone is a serotonin-modulating medication and has been shown to augment esophageal peristaltic amplitude in healthy volunteers. We aimed to determine whether buspirone improves manometric parameters and symptoms in patients with overlapping IEM/FD. We performed a prospective, double-blind, placebo-controlled, crossover-style trial of 10 patients with IEM/FD. The study consisted of two 2-week treatment arms with a 2-week washout period. Outcomes measured at baseline, the end of week 2, and week 6 included high resolution esophageal manometry (HREM), the Mayo Dysphagia Questionnaire-14 (MDQ-14), and the GERD-HRQL. The mean age of our 10 patients was 53 ± 9 years and 70% were female. After treatment with buspirone, 30% of patients had normalization of IEM on manometry; however, there was 30% normalization in the placebo group as well. Comparing buspirone to placebo, there was no statistically significant difference in the HREM parameters measured. There was also no statistically significant difference in symptom outcomes for buspirone compared to placebo. Of note, patients had a statistically significant decrease in the total GERD-HRQL score when treated with placebo compared to baseline levels. Despite previous data demonstrating improved esophageal motility in healthy volunteers, our study shows no difference in HREM parameters or symptom scores in IEM/FD patients treated with buspirone compared to placebo. Further research is necessary to identify novel agents for this condition.

  4. Statistics of the work done on a quantum critical system by quenching a control parameter.

    PubMed

    Silva, Alessandro

    2008-09-19

    We study the statistics of the work done on a quantum critical system by quenching a control parameter in the Hamiltonian. We elucidate the relation between the probability distribution of the work and the Loschmidt echo, a quantity that usually emerges in the context of dephasing. Using this connection, we characterize the statistics of the work done on a quantum Ising chain by quenching the transverse field locally or globally. We show that for local quenches starting at criticality, the probability distribution of the work displays an interesting edge singularity.
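
    For orientation, the connection alluded to is usually written as follows (a standard relation for a quench from H₀ to H₁ prepared in the ground state |ψ₀⟩; the details are filled in here from the general theory, not from the abstract):

    ```latex
    % Characteristic function of the work distribution for a quench H_0 -> H_1:
    \chi(t) \;=\; \int \mathrm{d}W \, e^{iWt} \, P(W)
            \;=\; \langle \psi_0 | \, e^{iH_1 t} e^{-iH_0 t} \, | \psi_0 \rangle ,
    % so P(W) is the Fourier transform of \chi(t), while the Loschmidt echo
    \mathcal{L}(t) \;=\; \bigl| \langle \psi_0 | \, e^{iH_1 t} e^{-iH_0 t} \, | \psi_0 \rangle \bigr|^2
                  \;=\; |\chi(t)|^2 .
    ```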

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kacprzak, T.; Kirk, D.; Friedrich, O.

    Shear peak statistics has gained a lot of attention recently as a practical alternative to two-point statistics for constraining cosmological parameters. We perform a shear peak statistics analysis of the Dark Energy Survey (DES) Science Verification (SV) data, using weak gravitational lensing measurements from a 139 deg² field. We measure the abundance of peaks identified in aperture mass maps, as a function of their signal-to-noise ratio, in the signal-to-noise range 0 < S/N < 4. To predict the peak counts as a function of cosmological parameters we use a suite of N-body simulations spanning 158 models with varying Ω_m and σ_8, fixing w = −1, Ω_b = 0.04, h = 0.7 and n_s = 1, to which we have applied the DES SV mask and redshift distribution. In our fiducial analysis we measure σ_8(Ω_m/0.3)^0.6 = 0.77 ± 0.07, after marginalising over the shear multiplicative bias and the error on the mean redshift of the galaxy sample. We introduce models of intrinsic alignments, blending, and source contamination by cluster members. These models indicate that peaks with S/N > 4 would require significant corrections, which is why we do not include them in our analysis. We compare our results to the cosmological constraints from the two-point analysis on the SV field and find them to be in good agreement in both the central value and its uncertainty. Finally, we discuss prospects for future peak statistics analyses with upcoming DES data.

  6. Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.

    PubMed

    Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B

    2016-01-01

    We investigated the effects of low-dose multi-detector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard dose) and with a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and with maximum-likelihood-based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were determined biomechanically and correlated with the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR correlated significantly with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. Stronger regularization can corrupt the microstructure analysis, because the trabecular structure is a very fine detail that may be lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, superior noise-resolution trade-offs can be achieved with the proposed methods compared to other approaches.

  7. Interpreting the Weibull fitting parameters for diffusion-controlled release data

    NASA Astrophysics Data System (ADS)

    Ignacio, Maxime; Chubynsky, Mykyta V.; Slater, Gary W.

    2017-11-01

    We examine the diffusion-controlled release of molecules from passive delivery systems using both analytical solutions of the diffusion equation and numerically exact Lattice Monte Carlo data. For very short times, the release process follows a √t power law, typical of diffusion processes, while the long-time asymptotic behavior is exponential. The crossover time between these two regimes is determined by the boundary conditions and initial loading of the system. We show that while the widely used Weibull function provides a reasonable fit (in terms of statistical error), it has two major drawbacks: (i) it does not capture the correct limits and (ii) there is no direct connection between the fitting parameters and the properties of the system. Using a physically motivated interpolating fitting function that correctly includes both time regimes, we are able to predict the values of the Weibull parameters, which allows us to propose a physical interpretation.
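
    A minimal sketch of the Weibull fit the paper critiques, on synthetic release data (values illustrative):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical release curve: fraction of drug released vs. time.
    t = np.linspace(0.1, 10, 40)
    data = 1 - np.exp(-(t / 2.0) ** 0.7) \
        + 0.01 * np.random.default_rng(5).normal(size=t.size)

    # Weibull function commonly fitted to release data:
    # M(t)/M_inf = 1 - exp(-(t/tau)^beta).
    def weibull(t, tau, beta):
        return 1 - np.exp(-(t / tau) ** beta)

    (tau, beta), _ = curve_fit(weibull, t, data, p0=[1.0, 1.0])
    print(f"tau = {tau:.2f}, beta = {beta:.2f}")
    # The paper's point: the fit can look good statistically even though
    # neither the short-time sqrt(t) regime nor the long-time exponential
    # is captured with physically meaningful parameters.
    ```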

  8. Effects of vegetation canopy on the radar backscattering coefficient

    NASA Technical Reports Server (NTRS)

    Mo, T.; Blanchard, B. J.; Schmugge, T. J.

    1983-01-01

    Airborne L- and C-band scatterometer data, taken over both vegetation-covered and bare fields, were systematically analyzed and theoretically reproduced, using a recently developed model for calculating radar backscattering coefficients of rough soil surfaces. The results show that the model can reproduce the observed angular variations of radar backscattering coefficient quite well via a least-squares fit method. Best fits to the data provide estimates of the statistical properties of the surface roughness, which is characterized by two parameters: the standard deviation of surface height, and the surface correlation length. In addition, the processes of vegetation attenuation and volume scattering require two canopy parameters, the canopy optical thickness and a volume scattering factor. Canopy parameter values for individual vegetation types, including alfalfa, milo and corn, were also determined from the best-fit results. The uncertainties in the scatterometer data were also explored.

  9. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.

  10. Testing the statistical compatibility of independent data sets

    NASA Astrophysics Data System (ADS)

    Maltoni, M.; Schwetz, T.

    2003-08-01

    We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ² minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistic is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness of fit is discussed.
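
    A sketch of the construction in its commonly cited "parameter goodness-of-fit" form (the abstract itself does not spell it out, so this is an assumption about the intended statistic):

    ```latex
    % Compare the chi-square minimum of the combined fit with the sum of the
    % minima of the individual data sets r:
    \bar{\chi}^2 \;=\; \chi^2_{\mathrm{min,\,global}} \;-\; \sum_r \chi^2_{\mathrm{min},\,r} ,
    % evaluated for a number of degrees of freedom set by the parameters shared
    % among the data sets, so that data points insensitive to those parameters
    % cannot dilute a genuine disagreement.
    ```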

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calcino, Josh; Davis, Tamara, E-mail: j.calcino@uq.edu.au, E-mail: tamarad@physics.uq.edu.au

    Recent papers have shown that a small systematic redshift shift (Δz ∼ 10⁻⁵) in measurements of type Ia supernovae can cause a significant bias (∼1%) in the recovery of cosmological parameters. Such a redshift shift could be caused, for example, by a gravitational redshift due to the density of our local environment. The sensitivity of supernova data to redshift shifts means supernovae make excellent probes of inhomogeneities. We therefore invert the analysis, and try to diagnose the nature of our local gravitational environment by fitting for Δz as an extra free parameter alongside the usual cosmological parameters. Using the Joint Light-curve SN Ia dataset we find the best fit includes a systematic redshift shift of Δz = (2.6 +2.7/−2.8) × 10⁻⁴. This is a larger shift than would be expected due to gravitational redshifts in a standard Λ-Cold Dark Matter universe (though still consistent with zero), and would correspond to a monopole Doppler shift of about 100 km s⁻¹ moving away from the Milky Way. However, since most supernova measurements are made to a redshift precision of no better than 10⁻³, it is possible that a systematic error smaller than the statistical error remains in the data and is responsible for the shift, or that it is an insignificant statistical fluctuation. We find that when Δz is included as a free parameter while fitting to the JLA SN Ia data, the constraint on the matter density shifts to Ω_m = 0.313 +0.042/−0.040, bringing it into better agreement with the CMB cosmological parameter constraints from Planck. A positive Δz ∼ 2.6 × 10⁻⁴ would also cause us to overestimate the supernova measurement of Hubble's constant by ΔH₀ ∼ 1 km s⁻¹ Mpc⁻¹. However, this overestimation should diminish as the low-redshift cutoff is increased, and this is not seen in the most recent data.

  12. Optimization of operating parameters in polysilicon chemical vapor deposition reactor with response surface methodology

    NASA Astrophysics Data System (ADS)

    An, Li-sha; Liu, Chun-jiao; Liu, Ying-wen

    2018-05-01

    In the polysilicon chemical vapor deposition reactor, the operating parameters interact in complex ways to affect polysilicon output. It is therefore very important to address the coupling of multiple parameters and to solve the optimization in a computationally efficient manner. Here, we adopted Response Surface Methodology (RSM) to analyze the complex coupling effects of different operating parameters on the silicon deposition rate (R) and to achieve effective optimization of the silicon CVD system. Based on a finite set of numerical experiments, an accurate RSM regression model is obtained and applied to predict R for different operating parameters, including temperature (T), pressure (P), inlet velocity (V), and inlet mole fraction of H2 (M). Analysis of variance is conducted to assess the adequacy of the regression model and examine the statistical significance of each factor. Consequently, the optimum combination of operating parameters for the silicon CVD reactor is: T = 1400 K, P = 3.82 atm, V = 3.41 m/s, M = 0.91. The validation tests and optimum solution show that the results are in good agreement with those from the CFD model, with deviations of the predicted values of less than 4.19%. This work provides theoretical guidance for operating the polysilicon CVD process.
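
    A hedged sketch of the RSM regression step (a full quadratic surface in T, P, V, M fitted to a small set of runs); the design points and responses below are synthetic placeholders, not the paper's CFD results:

    ```python
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(6)

    # Synthetic design points over plausible operating ranges.
    X = np.column_stack([
        rng.uniform(1300, 1450, 30),   # T [K]
        rng.uniform(1.0, 5.0, 30),     # P [atm]
        rng.uniform(1.0, 5.0, 30),     # V [m/s]
        rng.uniform(0.7, 0.95, 30),    # M [-]
    ])
    R = rng.normal(1.0, 0.1, 30)       # deposition rate from CFD (placeholder)

    # Quadratic response surface: all linear, interaction, and squared terms.
    quad = PolynomialFeatures(degree=2, include_bias=False)
    model = LinearRegression().fit(quad.fit_transform(X), R)

    # Query the surrogate at the paper's reported optimum.
    x_opt = np.array([[1400.0, 3.82, 3.41, 0.91]])
    print(model.predict(quad.transform(x_opt)))
    ```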

  13. Criteria for radiologic diagnosis of hypochondroplasia in neonates.

    PubMed

    Saito, Tomoko; Nagasaki, Keisuke; Nishimura, Gen; Wada, Masaki; Nyuzuki, Hiromi; Takagi, Masaki; Hasegawa, Tomonobu; Amano, Naoko; Murotsuki, Jun; Sawai, Hideaki; Yamada, Takahiro; Sato, Shuhei; Saitoh, Akihiko

    2016-04-01

    A radiologic diagnosis of hypochondroplasia is hampered by the absence of age-dependent radiologic criteria, particularly in the neonatal period. To establish radiologic criteria and a scoring system for identifying neonates with fibroblast growth factor receptor 3 (FGFR3)-associated hypochondroplasia, this retrospective study included 7 hypochondroplastic neonates and 30 controls. All subjects underwent radiologic examination within 28 days after birth. We evaluated parameters reflecting the presence of (1) short ilia, (2) squared ilia, (3) short greater sciatic notch, (4) horizontal acetabula, (5) short femora, (6) broad femora, (7) metaphyseal flaring, (8) lumbosacral interpedicular distance narrowing and (9) ovoid radiolucency of the proximal femora. Only parameters 1, 3, 4, 5 and 6 differed statistically between the two groups. Parameters 3, 5 and 6 did not overlap between the groups, while parameters 1 and 4 did. Based on these results, we propose a scoring system for hypochondroplasia. Two major criteria (parameters 3 and 6) were assigned scores of 2, whereas four minor criteria (parameters 1, 4, 5 and 9) were assigned scores of 1. All neonates with hypochondroplasia in our series scored ≥6. Our set of diagnostic radiologic criteria might be useful for early identification of hypochondroplastic neonates.

  14. Evaluation of Smartphone Inertial Sensor Performance for Cross-Platform Mobile Applications

    PubMed Central

    Kos, Anton; Tomažič, Sašo; Umek, Anton

    2016-01-01

    Smartphone sensors are being used increasingly in mobile applications. The performance of sensors varies considerably among smartphone models, and the development of a cross-platform mobile application can be a very complex and demanding task. A publicly accessible resource containing real-life smartphone sensor parameters could be of great help to cross-platform developers. To address this issue, we have designed and implemented a pilot participatory sensing application for measuring, gathering, and analyzing smartphone sensor parameters. We start with smartphone accelerometer and gyroscope bias and noise parameters. The application database presently includes sensor parameters for more than 60 smartphone models across platforms. It is a modest but important start, offering information on several statistical parameters of the measured smartphone sensors and insights into their performance. The next step, a large-scale cloud-based version of the application, is already planned. The large database of smartphone sensor parameters may prove particularly useful for cross-platform developers. It may also interest individual participants, who would be able to check and compare their smartphone sensors against a large number of similar or identical models.
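
    A minimal sketch of the first two parameters the application gathers, estimated from a stationary recording (synthetic samples with an assumed true bias and noise level; real data would come from the device's accelerometer API):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic stationary accelerometer axis, gravity removed:
    # assumed true bias 0.05 m/s^2 and white-noise level 0.02 m/s^2.
    fs = 100.0                                              # sample rate [Hz]
    samples = 0.05 + 0.02 * rng.normal(size=int(60 * fs))   # 60 s recording

    bias = samples.mean()             # static offset
    noise_rms = samples.std(ddof=1)   # white-noise level
    print(f"bias = {bias:.4f} m/s^2, noise RMS = {noise_rms:.4f} m/s^2")
    ```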

  15. Determining wave direction using curvature parameters.

    PubMed

    de Queiroz, Eduardo Vitarelli; de Carvalho, João Luiz Baptista

    2016-01-01

    The curvature of the sea wave was tested as a parameter for estimating wave direction, in the search for better estimates in shallow waters, where waves of different sizes, frequencies and directions intersect and are difficult to characterize. We used numerical simulations of the sea surface to determine wave direction calculated from the curvature of the waves. Using 1000 numerical simulations, the statistical variability of the wave direction was determined. The results showed good performance by the curvature parameter for estimating wave direction. Accuracy of the estimates was improved by including wave slope parameters in addition to curvature. The results indicate that curvature is a promising parameter for estimating wave direction.
    •In this study, the accuracy and precision of curvature parameters for measuring wave direction are analyzed using a model simulation that generates 1000 wave records with directional resolution.
    •The model allows the simultaneous simulation of time-series wave properties such as sea surface elevation, slope and curvature, which were used to analyze the variability of the estimated directions.
    •The simultaneous acquisition of slope and curvature parameters can contribute to wave direction estimates, increasing the accuracy and precision of the results.

  16. Broadband spectral fitting of blazars using XSPEC

    NASA Astrophysics Data System (ADS)

    Sahayanathan, Sunder; Sinha, Atreyee; Misra, Ranjeev

    2018-03-01

    The broadband spectral energy distribution (SED) of blazars is generally interpreted as radiation arising from synchrotron and inverse Compton mechanisms. Traditionally, the underlying source parameters responsible for these emission processes, such as particle energy density, magnetic field, etc., are obtained through simple visual reproduction of the observed fluxes. However, this procedure is incapable of providing confidence ranges for the estimated parameters. In this work, we propose an efficient algorithm to perform a statistical fit of the observed broadband spectrum of blazars using different emission models. Moreover, we use observable quantities as the fit parameters, rather than the direct source parameters which govern the resultant SED. This significantly improves the convergence time and eliminates the uncertainty regarding initial guess parameters. This approach also has the added advantage of identifying degenerate parameters, which can be removed by including more observable information and/or additional constraints. A computer code developed on the basis of this algorithm is implemented as a user-defined routine in the standard X-ray spectral fitting package, XSPEC. Further, we demonstrate the efficacy of the algorithm by fitting the well-sampled SED of the blazar 3C 279 during its gamma-ray flare in 2014.

  17. Masked areas in shear peak statistics. A forward modeling approach

    DOE PAGES

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-03-09

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint on models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.

  19. High-resolution x-ray guided three-dimensional diffuse optical tomography of joint tissues in hand osteoarthritis: Morphological and functional assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan Zhen; Zhang Qizhi; Sobel, Eric S.

    Purpose: The aim of this study was to investigate the potential use of multimodality functional imaging techniques to identify the quantitative optical findings that can be used to distinguish between osteoarthritic and normal finger joints. Methods: Between 2006 and 2009, the distal interphalangeal finger joints of 40 female subjects, including 22 patients and 18 healthy controls, were examined clinically and scanned by a hybrid imaging system. This system integrated an x-ray tomosynthesis setup with a diffuse optical imaging system. Optical absorption and scattering images were recovered based on a regularization-based hybrid reconstruction algorithm. A receiver operating characteristic curve was used to calculate the statistical significance of specific optical features obtained from the osteoarthritic and healthy joint groups. Results: The three-dimensional optical and x-ray images captured made it possible to quantify the optical properties and joint space width of finger joints. Based on the recovered optical absorption and scattering parameters, the authors observed statistically significant differences between healthy and osteoarthritic finger joints. Conclusions: The statistical results revealed that sensitivity and specificity values up to 92% and 100%, respectively, can be achieved when the optical properties of joint tissues are used as classifiers. This suggests that these optical imaging parameters are possible indicators for diagnosing osteoarthritis and monitoring its progression.

  20. An overview of groundwater chemistry studies in Malaysia.

    PubMed

    Kura, Nura Umar; Ramli, Mohammad Firuz; Sulaiman, Wan Nor Azmin; Ibrahim, Shaharin; Aris, Ahmad Zaharin

    2018-03-01

    In this paper, numerous studies on groundwater in Malaysia are reviewed with the aim of evaluating past trends and the current status, so as to discern the sustainability of the country's water resources. It was found that most of the previous groundwater studies (44%) focused on the islands and concentrated mostly on qualitative assessment, with particular emphasis on seawater intrusion. These were followed by inland studies, with Selangor state leading in the number of studies, reflecting the current water challenges facing the state. From a methodological perspective, geophysics, graphical methods, and statistical analysis are the dominant techniques (38%, 25%, and 25%, respectively). Geophysical methods, especially the 2D resistivity method, cut across many subjects such as seawater intrusion studies, quantitative assessment, and hydraulic parameter estimation. The statistical techniques used include multivariate statistical analysis and ANOVA, among others, mostly in quality-related studies using major ions, in situ parameters, and heavy metals. Conversely, numerical techniques like MODFLOW were somewhat less popular, likely because of their complexity and high data demand. This work will help researchers identify the specific areas that need improvement and focus, while at the same time providing policymakers and managers with an executive summary and knowledge of the current situation in groundwater studies and of where more work is needed for sustainable development.
