Ng, S K; McLachlan, G J
2003-04-15
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is by maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonically increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonically increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
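For reference, the two-component specification described in this abstract is commonly written as follows (our notation, following standard presentations of mixture models for competing risks; it need not match the authors' exact formulation). With failure types g = 1, ..., G and covariate vector x,

\[
\pi_g(x) = \frac{\exp(\beta_{g0} + \beta_g^\top x)}{\sum_{h=1}^{G} \exp(\beta_{h0} + \beta_h^\top x)},
\qquad
h_g(t \mid x) = h_{g0}(t)\, \exp(\gamma_g^\top x),
\]

so that the overall survivor function is S(t | x) = \sum_g \pi_g(x) S_g(t | x), with the component-baseline hazards h_{g0} left completely unspecified in the semi-parametric method.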
On non-parametric maximum likelihood estimation of the bivariate survivor function.
Prentice, R L
The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations (a "bundle") replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRFs) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's method (A semi-parametric approach to estimate IRFs, unpublished doctoral dissertation, 2007) for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating the multiple-group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637
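For reference, a hedged sketch of the extended quasi-likelihood in the sense of Nelder and Pregibon, which appears to be the construction used here: for an observation y with mean \mu, variance function V, and dispersion \phi,

\[
Q^{+}(y;\mu) = -\tfrac{1}{2}\,\log\{2\pi\phi V(y)\} - \frac{d(y;\mu)}{2\phi},
\qquad
d(y;\mu) = 2\int_{\mu}^{y} \frac{y-u}{V(u)}\,du ,
\]

where for gene expression V would encode the postulated (e.g. quadratic in \mu) variance structure of the data.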
Statistics of some atmospheric turbulence records relevant to aircraft response calculations
NASA Technical Reports Server (NTRS)
Mark, W. D.; Fischer, R. W.
1981-01-01
Methods for characterizing atmospheric turbulence are described. The methods illustrated include maximum likelihood estimation of the integral scale and intensity of records obeying the von Karman transverse power spectral form, constrained least-squares estimation of the parameters of a parametric representation of autocorrelation functions, estimation of the power spectral density of the instantaneous variance of a record with temporally fluctuating variance, and estimation of the probability density functions of various turbulence components. Descriptions of the computer programs used in the computations are given, and a full listing of these programs is included.
Measurement of CIB power spectra with CAM-SPEC from Planck HFI maps
NASA Astrophysics Data System (ADS)
Mak, Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine
2015-08-01
We present new measurements of the cosmic infrared background (CIB) anisotropies and its first likelihood using Planck HFI data at 353, 545, and 857 GHz. The measurements are based on cross-frequency power spectra and likelihood analysis using the CAM-SPEC package, rather than map-based template removal of foregrounds as done in the previous Planck CIB analysis. We construct the likelihood of the CIB temperature fluctuations, an extension of the CAM-SPEC likelihood as used in CMB analysis to higher frequencies, and use it to derive the best estimate of the CIB power spectrum over three decades in multipole moment, l, covering 50 ≤ l ≤ 2500. We adopt parametric models of the CIB and foreground contaminants (Galactic cirrus, infrared point sources, and cosmic microwave background anisotropies), and calibrate the data set uniformly across frequencies with known Planck beam and noise properties in the likelihood construction. We validate our likelihood through simulations and an extensive suite of consistency tests, and assess the impact of instrumental and data-selection effects on the final CIB power spectrum constraints. Two approaches are developed for interpreting the CIB power spectrum. The first is a simple parametric model that describes the cross-frequency power using amplitudes, correlation coefficients, and a known multipole dependence. The second is based on physical models for galaxy clustering and the evolution of the infrared emission of galaxies. The new approaches fit all auto- and cross-power spectra very well, with a best fit of χ²_ν = 1.04 (parametric model). Using the best foreground solution, we find that the cleaned CIB power spectra are in good agreement with previous Planck and Herschel measurements.
Parametric modelling of cost data in medical studies.
Nixon, R M; Thompson, S G
2004-04-30
The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BC(a) bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
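A hedged sketch of this kind of analysis (synthetic data, two of the candidate distributions, and a plain percentile bootstrap rather than the BC(a) variant used in the paper):

```python
# Sketch: ML fits of skewed parametric models to cost data, with a
# parametric bootstrap CI for the population mean. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
costs = rng.gamma(shape=1.5, scale=800.0, size=200)  # right-skewed "costs"

# Maximum likelihood fits (location fixed at 0 for two-parameter versions)
g_shape, _, g_scale = stats.gamma.fit(costs, floc=0)
ln_shape, _, ln_scale = stats.lognorm.fit(costs, floc=0)

print("gamma-model mean:  ", g_shape * g_scale)
print("lognorm-model mean:", stats.lognorm.mean(ln_shape, scale=ln_scale))

# Parametric bootstrap CI for the gamma-model mean (percentile method)
boot = []
for _ in range(2000):
    resample = stats.gamma.rvs(g_shape, scale=g_scale,
                               size=len(costs), random_state=rng)
    a, _, s = stats.gamma.fit(resample, floc=0)
    boot.append(a * s)
print("95% CI for mean:", np.percentile(boot, [2.5, 97.5]))
```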
Minimum distance classification in remote sensing
NASA Technical Reports Server (NTRS)
Wacker, A. G.; Landgrebe, D. A.
1972-01-01
The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
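A minimal sketch of the comparison being made, assuming synthetic Gaussian classes and per-vector classification; the study's sample classifiers, features, and data differ:

```python
# Sketch: nearest-class-mean (minimum distance) classifier versus a
# Gaussian maximum likelihood classifier on synthetic two-class data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d = 300, 4
X0 = rng.normal(0.0, 1.0, (n, d))
X1 = rng.normal(1.0, 1.5, (n, d))
X, y = np.vstack([X0, X1]), np.r_[np.zeros(n), np.ones(n)]

means = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
covs = [np.cov(X[y == k].T) for k in (0, 1)]

# Minimum distance: assign to the nearest class mean (Euclidean)
md_pred = np.argmin(((X[:, None, :] - means[None]) ** 2).sum(-1), axis=1)

# Gaussian ML: assign to the class with the larger log-likelihood
ll = np.column_stack([stats.multivariate_normal.logpdf(X, means[k], covs[k])
                      for k in (0, 1)])
ml_pred = np.argmax(ll, axis=1)

print("min-distance accuracy:", (md_pred == y).mean())
print("Gaussian ML accuracy: ", (ml_pred == y).mean())
```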
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
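A minimal sketch of method 1, the Monte Carlo p-value computation, with an illustrative placeholder statistic (the actual vxdbel statistics are density-based empirical likelihood ratios):

```python
# Sketch: exact Monte Carlo p-value with the standard add-one correction,
# so the estimated p-value is never exactly zero.
import numpy as np

def mc_p_value(t_obs, simulate_stat, n_mc=9999, rng=None):
    """p = (1 + #{T_mc >= t_obs}) / (n_mc + 1)."""
    rng = rng or np.random.default_rng()
    t_mc = np.array([simulate_stat(rng) for _ in range(n_mc)])
    return (1 + np.sum(t_mc >= t_obs)) / (n_mc + 1)

# Example: null distribution of a max-|mean| statistic across K=3 samples
rng = np.random.default_rng(42)
sim = lambda r: np.abs(r.normal(size=(3, 30)).mean(axis=1)).max()
print(mc_p_value(t_obs=0.5, simulate_stat=sim, n_mc=2000, rng=rng))
```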
A Parametric k-Means Algorithm
Tarpey, Thaddeus
2007-01-01
The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
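A minimal sketch of the idea, assuming a univariate normal model; for N(mu, sigma) and k = 3 the principal points are known in closed form, which gives a quick check:

```python
# Sketch of parametric k-means: fit a distribution by ML, simulate a very
# large sample from the fit, and run k-means on that sample to estimate
# the k principal points.
import numpy as np
from scipy import stats
from scipy.cluster.vq import kmeans

rng = np.random.default_rng(7)
data = rng.normal(loc=170.0, scale=8.0, size=500)   # e.g. a sizing variable

mu_hat, sigma_hat = stats.norm.fit(data)            # ML estimates
big = stats.norm.rvs(mu_hat, sigma_hat, size=200_000, random_state=rng)

k = 3
centers, _ = kmeans(big.reshape(-1, 1), k)          # "parametric k-means"
print("estimated principal points:", np.sort(centers.ravel()))
# For N(mu, sigma) and k=3, theory gives mu and mu +/- 1.224*sigma.
```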
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with a Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with a Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the Differential Evolution Adaptive Metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and the Morris- and DREAM(ZS)-based global sensitivity analyses yield almost identical rankings of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
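As a hedged illustration of what a generalized likelihood buys over a Gaussian one, the sketch below implements a simplified relative of the Schoups-Vrugt construction: Gaussian AR(1) errors with heteroscedastic scale sigma_t = a + b|y_sim,t| (the formal version additionally allows skewed, heavy-tailed error distributions):

```python
# Sketch: log-likelihood with heteroscedastic, AR(1)-correlated Gaussian
# residuals. All parameter names are illustrative assumptions.
import numpy as np

def gen_log_like(y_obs, y_sim, a, b, phi):
    resid = y_obs - y_sim
    sigma = a + b * np.abs(y_sim)        # heteroscedastic scale
    e = resid / sigma                    # standardized residuals
    innov = e[1:] - phi * e[:-1]         # AR(1) innovations
    s2 = 1.0 - phi**2                    # innovation variance (stationary)
    ll = -0.5 * np.log(2 * np.pi) - np.log(sigma[0]) - 0.5 * e[0]**2
    ll += np.sum(-0.5 * np.log(2 * np.pi * s2) - np.log(sigma[1:])
                 - 0.5 * innov**2 / s2)
    return ll

y_sim = np.linspace(1, 10, 50)
y_obs = y_sim + 0.1 * np.abs(y_sim) * np.random.default_rng(0).normal(size=50)
print(gen_log_like(y_obs, y_sim, a=0.05, b=0.1, phi=0.3))
```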
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreement between the theoretical predictions and the Monte Carlo results is observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with conventional static PET reconstruction.
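The Patlak model underlying both reconstruction routes can be sketched as follows (a common formulation; the exact variant in this work may differ): for times t beyond the equilibrium time t^*,

\[
C_T(t) \approx K_i \int_0^{t} C_p(\tau)\, d\tau + V\, C_p(t), \qquad t > t^* ,
\]

where C_T is the tissue activity, C_p the plasma input function, K_i the net influx rate constant, and V an apparent distribution volume; K_i and V are the parametric images being estimated.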
Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies
Rukhin, Andrew L.
2011-01-01
A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving the likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583
Use of Bayes theorem to correct size-specific sampling bias in growth data.
Troynikov, V S
1999-03-01
The Bayesian decomposition of the posterior distribution was used to develop a likelihood function to correct bias in the estimates of population parameters from data collected randomly with size-specific selectivity. Positive distributions with time as a parameter were used for the parametrization of growth data. Numerical illustrations are provided. Alternative applications of the likelihood to estimate selectivity parameters are discussed.
Parametric Model Based On Imputations Techniques for Partly Interval Censored Data
NASA Astrophysics Data System (ADS)
Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah
2017-12-01
The term ‘survival analysis’ has been used in a broad sense to describe a collection of statistical procedures for data analysis. In this case, the outcome variable of interest is the time until an event occurs, where the time to failure of a specific experimental unit might be censored: right, left, interval, or partly interval censored (PIC). In this paper, analysis was conducted based on a parametric Cox model for PIC data. Moreover, several imputation techniques were used: midpoint, left & right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, such as the Turnbull and Cox models, based on clinical trial data (breast cancer data), which showed the validity of the proposed model. Results for the data set indicated that the parametric Cox model was superior in terms of estimation of survival functions, likelihood ratio tests, and their p-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median showed better results with respect to the estimation of the survival function.
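A minimal sketch of the imputation step on synthetic intervals; the paper's exact definitions of the 'mean' and 'median' rules may differ from the ones assumed here:

```python
# Sketch: each event time known only to lie in (L, R] is replaced by a
# single imputed value, after which standard right-censored machinery
# (e.g. a parametric Cox fit) can be applied.
import numpy as np

rng = np.random.default_rng(3)
L = np.array([1.0, 2.0, 0.5, 4.0])   # interval left endpoints
R = np.array([3.0, 2.5, 2.0, 7.0])   # interval right endpoints

mid = (L + R) / 2
imputed = {
    "midpoint": mid,
    "left":     L.copy(),
    "right":    R.copy(),
    "random":   rng.uniform(L, R),            # uniform draw inside interval
    "mean":     np.full_like(L, mid.mean()),  # assumed: mean of midpoints
    "median":   np.full_like(L, np.median(mid)),
}
for name, t in imputed.items():
    print(f"{name:8s}", np.round(t, 2))
```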
A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins
Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.
2013-01-01
This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186
Generalized linear mixed models with varying coefficients for longitudinal data.
Zhang, Daowen
2004-03-01
The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.
Spatial design and strength of spatial signal: Effects on covariance estimation
Irvine, Kathryn M.; Gitelman, Alix I.; Hoeting, Jennifer A.
2007-01-01
In a spatial regression context, scientists are often interested in a physical interpretation of components of the parametric covariance function. For example, spatial covariance parameter estimates in ecological settings have been interpreted to describe spatial heterogeneity or “patchiness” in a landscape that cannot be explained by measured covariates. In this article, we investigate the influence of the strength of spatial dependence on maximum likelihood (ML) and restricted maximum likelihood (REML) estimates of covariance parameters in an exponential-with-nugget model, and we also examine these influences under different sampling designs—specifically, lattice designs and more realistic random and cluster designs—at differing intensities of sampling (n=144 and 361). We find that neither ML nor REML estimates perform well when the range parameter and/or the nugget-to-sill ratio is large—ML tends to underestimate the autocorrelation function and REML produces highly variable estimates of the autocorrelation function. The best estimates of both the covariance parameters and the autocorrelation function come under the cluster sampling design and large sample sizes. As a motivating example, we consider a spatial model for stream sulfate concentration.
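A minimal sketch of ML estimation for the exponential-with-nugget model on synthetic data (REML would add a term for the fixed-effects projection, omitted here):

```python
# Sketch: maximum likelihood for a zero-mean Gaussian process with
# covariance C(h) = sill * exp(-h / range) + nugget * 1{h = 0}.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(5)
coords = rng.uniform(0, 10, (100, 2))
H = cdist(coords, coords)

# Simulate data from known parameters for a quick self-check
C_true = 2.0 * np.exp(-H / 3.0) + 0.5 * np.eye(len(H))
z = rng.multivariate_normal(np.zeros(len(H)), C_true)

def nll(p):
    sill, rng_par, nugget = np.exp(p)   # log-parametrization keeps all > 0
    C = sill * np.exp(-H / rng_par) + nugget * np.eye(len(H))
    sign, logdet = np.linalg.slogdet(C)
    return 0.5 * (logdet + z @ np.linalg.solve(C, z))

fit = minimize(nll, x0=np.log([1.0, 1.0, 0.1]), method="Nelder-Mead")
print("ML estimates (sill, range, nugget):", np.exp(fit.x))
```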
Convergence optimization of parametric MLEM reconstruction for estimation of Patlak plot parameters.
Angelis, Georgios I; Thielemans, Kris; Tziortzi, Andri C; Turkheimer, Federico E; Tsoumpas, Charalampos
2011-07-01
In dynamic positron emission tomography data many researchers have attempted to exploit kinetic models within reconstruction such that parametric images are estimated directly from measurements. This work studies a direct parametric maximum likelihood expectation maximization algorithm applied to [18F]DOPA data using a reference-tissue input function. We use a modified version for direct reconstruction with a gradually descending scheme of subsets (i.e. 18-6-1) initialized with the FBP parametric image for faster convergence and higher accuracy. The results, compared with analytic reconstructions, show quantitative robustness (i.e. minimal bias) and clinical reproducibility within six human acquisitions in the region of clinical interest. Bland-Altman plots for all the studies showed sufficient quantitative agreement between the directly reconstructed parametric maps and the indirect FBP (-0.035x + 0.48E-5). Copyright © 2011 Elsevier Ltd. All rights reserved.
Gaussian copula as a likelihood function for environmental models
NASA Astrophysics Data System (ADS)
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently in use employ Gaussian processes as a likelihood function, because of their favourable analytical properties. The Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. A problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semi-parametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions - and they could help us to better capture the statistical properties of errors and make more reliable predictions.
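A hedged sketch of the construction: empirical marginals for the errors, normal scores, and a Gaussian copula whose density serves as the likelihood (function and variable names are ours, not from the paper):

```python
# Sketch: Gaussian copula as a likelihood. Marginals come from the
# empirical distribution of past model errors; dependence comes from a
# multivariate normal fitted to the normal scores.
import numpy as np
from scipy import stats

def fit_copula(past_errors):                 # shape (n_events, T)
    u = (stats.rankdata(past_errors, axis=0) - 0.5) / len(past_errors)
    zscores = stats.norm.ppf(u)
    return np.corrcoef(zscores, rowvar=False)

def copula_log_like(err, past_errors, R):
    # empirical marginal CDF values of the new error vector
    u = np.array([stats.percentileofscore(past_errors[:, j], err[j]) / 100
                  for j in range(len(err))]).clip(1e-3, 1 - 1e-3)
    z = stats.norm.ppf(u)
    # marginal densities via kernel estimates on past errors
    kde_logpdf = sum(stats.gaussian_kde(past_errors[:, j]).logpdf(err[j])[0]
                     for j in range(len(err)))
    log_c = (stats.multivariate_normal.logpdf(z, cov=R)
             - stats.norm.logpdf(z).sum())   # copula density on z-scale
    return log_c + kde_logpdf

past = np.random.default_rng(6).normal(size=(200, 3))
R = fit_copula(past)
print(copula_log_like(np.array([0.1, -0.2, 0.3]), past, R))
```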
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, J.R.; Heger, A.S.; Koen, B.V.
1984-04-01
This report is the result of a preliminary feasibility study of the applicability of Stein and related parametric empirical Bayes (PEB) estimators to the Nuclear Plant Reliability Data System (NPRDS). A new estimator is derived for the means of several independent Poisson distributions with different sampling times. This estimator is applied to data from NPRDS in an attempt to improve failure rate estimation. Theoretical and Monte Carlo results indicate that the new PEB estimator can perform significantly better than the standard maximum likelihood estimator if the estimation of the individual means can be combined through the loss function or through a parametric class of prior distributions.
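A hedged sketch of this type of estimator: a gamma prior across units fitted by marginal maximum likelihood, with different exposure times per unit; the report's own estimator and data differ:

```python
# Sketch: parametric empirical Bayes for several Poisson means with
# different sampling times. x_i | lambda_i ~ Poisson(lambda_i * t_i),
# lambda_i ~ Gamma(a, rate=b); (a, b) fitted by marginal ML.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

x = np.array([0, 2, 1, 5, 3])              # failure counts (illustrative)
t = np.array([1.0, 4.0, 2.0, 8.0, 3.0])    # exposure times

def neg_marginal_ll(p):
    a, b = np.exp(p)                        # keep both positive
    # marginal of x_i is negative binomial; log-pmf written out directly
    return -np.sum(gammaln(a + x) - gammaln(a) - gammaln(x + 1)
                   + a * np.log(b) + x * np.log(t)
                   - (a + x) * np.log(b + t))

a, b = np.exp(minimize(neg_marginal_ll, np.log([1.0, 1.0])).x)
print("PEB (shrunken) rates:", (a + x) / (b + t))  # posterior means
print("MLE rates:           ", x / t)
```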
Modeling personnel turnover in the parametric organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A model is developed for simulating the dynamics of a newly formed organization, credible during all phases of organizational development. The model development process is broken down into the activities of determining the tasks required for parametric cost analysis (PCA), determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the model, implementing the model, and testing it. The model, parameterized by the likelihood of job function transition, has demonstrated the capability to represent the transition of personnel across functional boundaries within a parametric organization using a linear dynamical system, and the ability to predict the staffing profiles required to meet functional needs at the desired time. The model can be extended by revisions of the state and transition structure to provide refinements in functional definition for the parametric and extended organization.
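A minimal sketch of such a linear dynamical system with an assumed (purely illustrative) transition matrix over job functions:

```python
# Sketch: headcount per job function evolved by a Markov-style linear
# dynamical system. The functions and probabilities are invented for
# illustration, not taken from the report.
import numpy as np

functions = ["analyst", "modeler", "manager", "departed"]
A = np.array([[0.80, 0.10, 0.02, 0.08],   # rows: from-function
              [0.05, 0.85, 0.05, 0.05],   # cols: to-function
              [0.00, 0.02, 0.90, 0.08],
              [0.00, 0.00, 0.00, 1.00]])  # departure is absorbing

staff = np.array([10.0, 5.0, 2.0, 0.0])
for period in range(8):
    staff = staff @ A                      # expected headcount next period
print(dict(zip(functions, np.round(staff, 2))))
```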
Schmidt, K; Witte, H
1999-11-01
Recently the assumption of the independence of individual frequency components in a signal has been rejected, for example, for the EEG during defined physiological states such as sleep or sedation [9, 10]. Thus, the use of higher-order spectral analysis capable of detecting interrelations between individual signal components has proved useful. The aim of the present study was to investigate the quality of various non-parametric and parametric estimation algorithms using simulated as well as true physiological data. We employed standard algorithms available for MATLAB. The results clearly show that parametric bispectral estimation is superior to non-parametric estimation in terms of the quality of peak localisation and the discrimination from other peaks.
Hutson, Alan D
2018-01-01
In this note, we develop a novel semi-parametric estimator of the survival curve that is comparable to the product-limit estimator under very relaxed assumptions. The estimator is based on a beta parametrization that warps the empirical distribution of the observed censored and uncensored data. The parameters are obtained using a pseudo-maximum likelihood approach, adjusting the survival curve to account for the censored observations. In the univariate setting, the new estimator tends to better extend the range of the survival estimation given a high degree of censoring. However, the key feature of this paper is that we develop a new two-group semi-parametric exact permutation test for comparing survival curves that is generally superior to the classic log-rank and Wilcoxon tests and provides the best global power across a variety of alternatives. The new test is readily extended to the k-group setting. PMID:26988931
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The various kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (examples are PAR and PARMA), and disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; ii) non-parametric models (examples are bootstrap/kernel-based methods such as k-nearest neighbor (k-NN) and matched block bootstrap (MABB), and non-parametric disaggregation models), which characterize the laws of chance describing the streamflow process without recourse to prior assumptions as to the form or structure of these laws; iii) hybrid models, which blend both parametric and non-parametric models advantageously to model the streamflows effectively. Despite many of these developments that have taken place in the field of stochastic modeling of streamflows over the last four decades, accurate prediction of the storage and the critical drought characteristics has been posing a persistent challenge to the stochastic modeler. This is partly because, usually, the stochastic streamflow model parameters are estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation) and subsequently the efficacy of the models is validated based on the accuracy of prediction of the estimates of the water-use characteristics, which requires a large number of trial simulations and inspection of many plots and tables. Even then, accurate prediction of the storage and the critical drought characteristics may not be ensured. In this study a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric model, PAR(1), and matched block bootstrap (MABB)) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained based on a search over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) as well as the non-parametric (MABB) components). This is achieved using an efficient evolutionary search based optimization tool, namely, the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps in reducing the drudgery involved in the process of manual selection of the hybrid model, in addition to predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics accurately. The proposed optimization framework is used to model the multi-season streamflows of River Beaver and River Weber of USA. In the case of both rivers, the proposed GA-based hybrid model yields a much better prediction of the storage capacity (where simultaneous exploration of both parametric and non-parametric components is done) when compared with the MLE-based hybrid models (where the hybrid model selection is done in two stages, thus probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.
Why preferring parametric forecasting to nonparametric methods?
Jabot, Franck
2015-05-07
A recent series of papers by Charles T. Perretti and collaborators have shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise because of two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It will be argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed thanks to simple Bayesian model checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts with unknown reliability. This argumentation is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches, until methods have been developed to assess the reliability of nonparametric forecasting. Copyright © 2015 Elsevier Ltd. All rights reserved.
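For concreteness, the theta-logistic model mentioned here is commonly written in the discrete-time stochastic form below (the exact variant used in the cited studies may differ):

\[
N_{t+1} = N_t \exp\!\left[\, r\!\left(1 - \left(\frac{N_t}{K}\right)^{\theta}\right) + \varepsilon_t \right],
\qquad \varepsilon_t \sim N(0, \sigma^2),
\]

with growth rate r, carrying capacity K, and shape parameter \theta controlling the nonlinearity of density dependence.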
Cost-Aware Design of a Discrimination Strategy for Unexploded Ordnance Cleanup
2011-02-25
Acronyms: ANN: Artificial Neural Network; AUC: Area Under the Curve; BRAC: Base Realignment And Closure; DLRT: Distance Likelihood Ratio Test; EER: …
Non-linear auto-regressive models for cross-frequency coupling in neural time series
Tallot, Lucille; Grabot, Laetitia; Doyère, Valérie; Grenier, Yves; Gramfort, Alexandre
2017-01-01
We address the issue of reliably detecting and quantifying cross-frequency coupling (CFC) in neural time series. Based on non-linear auto-regressive models, the proposed method provides a generative and parametric model of the time-varying spectral content of the signals. As this method models the entire spectrum simultaneously, it avoids the pitfalls related to incorrect filtering or the use of the Hilbert transform on wide-band signals. As the model is probabilistic, it also provides a score of the model “goodness of fit” via the likelihood, enabling easy and legitimate model selection and parameter comparison; this data-driven feature is unique to our model-based approach. Using three datasets obtained with invasive neurophysiological recordings in humans and rodents, we demonstrate that these models are able to replicate previous results obtained with other metrics, but also reveal new insights such as the influence of the amplitude of the slow oscillation. Using simulations, we demonstrate that our parametric method can reveal neural couplings with shorter signals than non-parametric methods. We also show how the likelihood can be used to find optimal filtering parameters, suggesting new properties on the spectrum of the driving signal, but also to estimate the optimal delay between the coupled signals, enabling a directionality estimation in the coupling. PMID:29227989
Estimating Function Approaches for Spatial Point Processes
NASA Astrophysics Data System (ADS)
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotic optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives to balance the trade-off between computational complexity and estimating efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. However, the original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also merits in the relaxation of the constraint on the tuning parameter, H. Third, we studied the quasi-likelihood type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.
Analysis of survival in breast cancer patients by using different parametric models
NASA Astrophysics Data System (ADS)
Enera Amran, Syahila; Asrul Afendi Abdullah, M.; Kek, Sie Long; Afiqah Muhamad Jamil, Siti
2017-09-01
In biomedical applications or clinical trials, right censoring often arises when studying time-to-event data. In this case, some individuals are still alive at the end of the study or are lost to follow-up at a certain time. Handling censored data properly is an important issue in order to prevent biased information in the analysis. Therefore, this study was carried out to analyze right-censored data with three different parametric models: the exponential, Weibull, and log-logistic models. Data on breast cancer patients from Hospital Sultan Ismail, Johor Bahru, from 30 December 2008 until 15 February 2017 were used to illustrate right-censored data. The variables included in this study are the survival time t of each breast cancer patient, the patient's age X1, and the treatment given to the patient X2. In order to determine the best parametric model for analysing the survival of breast cancer patients, the performance of each model was compared based on the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the log-likelihood value using the statistical software R. When analysing the breast cancer data, all three distributions showed consistency with the data, with the line graph of the cumulative hazard function resembling a straight line through the origin. As a result, the log-logistic model was the best-fitting parametric model compared with the exponential and Weibull models, since it has the smallest AIC and BIC values and the largest log-likelihood.
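A hedged sketch of this comparison on synthetic right-censored data (the study itself uses R and hospital data):

```python
# Sketch: exponential, Weibull, and log-logistic models fitted to
# right-censored survival times by maximum likelihood, ranked by
# log-likelihood and AIC (BIC is analogous).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
event = 24.0 * rng.weibull(1.5, 300)     # true event times (months)
censor = rng.uniform(0, 60, 300)         # random right-censoring times
d = event <= censor                      # True = event observed
t = np.minimum(event, censor)

models = {  # name: (n_params, log-density, log-survival); params > 0
    "exponential":  (1, lambda t, s: -np.log(s) - t / s,
                        lambda t, s: -t / s),
    "weibull":      (2, lambda t, k, s: np.log(k / s) + (k - 1) * np.log(t / s) - (t / s) ** k,
                        lambda t, k, s: -((t / s) ** k)),
    "log-logistic": (2, lambda t, k, s: np.log(k / s) + (k - 1) * np.log(t / s) - 2 * np.log1p((t / s) ** k),
                        lambda t, k, s: -np.log1p((t / s) ** k)),
}

for name, (npar, logpdf, logsf) in models.items():
    def nll(logp):
        p = np.exp(logp)
        return -(logpdf(t[d], *p).sum() + logsf(t[~d], *p).sum())
    x0 = np.log(np.r_[np.ones(npar - 1), 20.0])   # shape ~1, scale ~20
    fit = minimize(nll, x0, method="Nelder-Mead")
    print(f"{name:13s} loglik = {-fit.fun:8.2f}  AIC = {2 * npar + 2 * fit.fun:8.2f}")
```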
Spatial hydrological drought characteristics in Karkheh River basin, southwest Iran using copulas
NASA Astrophysics Data System (ADS)
Dodangeh, Esmaeel; Shahedi, Kaka; Shiau, Jenq-Tzong; MirAkbari, Maryam
2017-08-01
Investigation of drought characteristics such as severity, duration, and frequency is crucial for water resources planning and management in a river basin. While the methodology for multivariate drought frequency analysis is well established through the use of copulas, the estimation of the associated parameters by various parameter estimation methods and the effects on the obtained results have not yet been investigated. This research aims at conducting a comparative analysis between the parametric maximum likelihood and the non-parametric Kendall τ methods for copula parameter estimation. The methods were employed to study joint severity-duration probability and recurrence intervals in the Karkheh River basin (southwest Iran), which is facing severe water-deficit problems. Daily streamflow data at three hydrological gauging stations (Tang Sazbon, Huleilan and Polchehr) near the Karkheh dam were used to draw flow duration curves (FDC) for these three stations. The Q75 index extracted from the FDC was set as the threshold level to extract drought characteristics such as drought duration and severity on the basis of run theory. Drought duration and severity were separately modeled using univariate probabilistic distributions, and gamma-GEV, LN2-exponential, and LN2-gamma were selected as the best paired drought severity-duration inputs for copulas according to the Akaike Information Criterion (AIC), Kolmogorov-Smirnov and chi-square tests. The Archimedean Clayton and Frank copulas and the extreme-value Gumbel copula were employed to construct joint cumulative distribution functions (JCDF) of droughts for each station. The Frank copula at Tang Sazbon and the Gumbel copula at the Huleilan and Polchehr stations were identified as the best copulas based on performance evaluation criteria including AIC, BIC, log-likelihood and root mean square error (RMSE) values. Based on the RMSE values, the non-parametric Kendall τ method is preferred to the parametric maximum likelihood estimation method. The results showed greater drought return periods from the parametric ML method in comparison to the non-parametric Kendall τ estimation method. The results also showed that stations located on tributaries (Huleilan and Polchehr) have close return periods, while the station along the main river (Tang Sazbon) has smaller return periods for drought events with identical drought duration and severity.
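As a reference for how such return periods are typically obtained from a fitted copula C (a standard construction; details in the paper may differ): for drought events with mean interarrival time \mu_T, the joint return period of a drought with duration at least d and severity at least s is

\[
T_{d,s} = \frac{\mu_T}{P(D \ge d,\, S \ge s)}
        = \frac{\mu_T}{1 - F_D(d) - F_S(s) + C\{F_D(d), F_S(s)\}} ,
\]

where F_D and F_S are the fitted marginal distributions of drought duration and severity.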
Direct Parametric Reconstruction With Joint Motion Estimation/Correction for Dynamic Brain PET Data.
Jiao, Jieqing; Bousse, Alexandre; Thielemans, Kris; Burgos, Ninon; Weston, Philip S J; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Markiewicz, Pawel; Ourselin, Sebastien
2017-01-01
Direct reconstruction of parametric images from raw photon counts has been shown to improve the quantitative analysis of dynamic positron emission tomography (PET) data. However it suffers from subject motion which is inevitable during the typical acquisition time of 1-2 hours. In this work we propose a framework to jointly estimate subject head motion and reconstruct the motion-corrected parametric images directly from raw PET data, so that the effects of distorted tissue-to-voxel mapping due to subject motion can be reduced in reconstructing the parametric images with motion-compensated attenuation correction and spatially aligned temporal PET data. The proposed approach is formulated within the maximum likelihood framework, and efficient solutions are derived for estimating subject motion and kinetic parameters from raw PET photon count data. Results from evaluations on simulated [11C]raclopride data using the Zubal brain phantom and real clinical [18F]florbetapir data of a patient with Alzheimer's disease show that the proposed joint direct parametric reconstruction motion correction approach can improve the accuracy of quantifying dynamic PET data with large subject motion.
Stochastic Ordering Using the Latent Trait and the Sum Score in Polytomous IRT Models.
ERIC Educational Resources Information Center
Hemker, Bas T.; Sijtsma, Klaas; Molenaar, Ivo W.; Junker, Brian W.
1997-01-01
Stochastic ordering properties are investigated for a broad class of item response theory (IRT) models for which the monotone likelihood ratio does not hold. A taxonomy is given for nonparametric and parametric models for polytomous models based on the hierarchical relationship between the models. (SLD)
NASA Astrophysics Data System (ADS)
Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.
2018-03-01
A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying signals, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency analysis-based demodulation and a non-parametric Gaussian latent feature model is proposed to isolate and recover components of such signals. The former aims to remove high-order frequency modulation (FM) such that the latter is able to infer demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to a generalised-demodulation method with singular-value decomposition, a parametric time-frequency analysis method with filtering, and an empirical mode decomposition-based method, in recovering the amplitude and phase of superimposed components.
Heggeseth, Brianna C; Jewell, Nicholas P
2013-07-20
Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence: that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.
Quantum state estimation when qubits are lost: a no-data-left-behind approach
Williams, Brian P.; Lougovski, Pavel
2017-04-06
We present an approach to Bayesian mean estimation of quantum states using hyperspherical parametrization and an experiment-specific likelihood which allows utilization of all available data, even when qubits are lost. With this method, we report the first closed-form Bayesian mean and maximum likelihood estimates for the ideal single qubit. Due to computational constraints, we utilize numerical sampling to determine the Bayesian mean estimate for a photonic two-qubit experiment in which our novel analysis reduces burdens associated with experimental asymmetries and inefficiencies. This method can be applied to quantum states of any dimension and experimental complexity.
The Role of Parametric Assumptions in Adaptive Bayesian Estimation
ERIC Educational Resources Information Center
Alcala-Quintana, Rocio; Garcia-Perez, Miguel A.
2004-01-01
Variants of adaptive Bayesian procedures for estimating the 5% point on a psychometric function were studied by simulation. Bias and standard error were the criteria to evaluate performance. The results indicated a superiority of (a) uniform priors, (b) model likelihood functions that are odd symmetric about threshold and that have parameter…
A Bootstrap Generalization of Modified Parallel Analysis for IRT Dimensionality Assessment
ERIC Educational Resources Information Center
Finch, Holmes; Monahan, Patrick
2008-01-01
This article introduces a bootstrap generalization to the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. This methodology, based on the use of Marginal Maximum Likelihood nonlinear factor analysis, provides for the calculation of a test statistic based on a parametric bootstrap using the MPA…
Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras
NASA Astrophysics Data System (ADS)
He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and search the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
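A simplified sketch of the dense grid search described here, assuming white noise in place of the full Gaussian process term for the stochastic variations:

```python
# Sketch: for each trial frequency, fit a sinusoid-plus-constant by least
# squares and keep the frequency with the smallest residual sum of squares
# (equivalently, the highest Gaussian profile log-likelihood).
import numpy as np

rng = np.random.default_rng(9)
t = np.sort(rng.uniform(0, 1500, 60))        # sparse observation times (days)
y = 2.0 * np.sin(2 * np.pi * t / 332.0) + rng.normal(0, 0.3, t.size)

freqs = np.linspace(1 / 1000, 1 / 100, 5000)
best = (np.inf, None)
for f in freqs:
    X = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t)])
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    if rss.size and rss[0] < best[0]:
        best = (rss[0], f)
print("estimated period:", 1 / best[1], "days")
```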
Chan, Aaron C.; Srinivasan, Vivek J.
2013-01-01
In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator's performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we derive the Cramér-Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
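As a concrete illustration of the model class, here is a minimal Python sketch (not the authors' code) of the textbook result that, for a single complex tone in additive white Gaussian noise, the frequency MLE is the maximizer of the periodogram; the sampling rate and SNR below are hypothetical.

```python
import numpy as np

def mle_doppler_frequency(x, fs, n_grid=2**16):
    """Frequency MLE for a single complex tone in additive white Gaussian
    noise: the maximizer of the periodogram (zero-padding the FFT gives a
    dense frequency grid)."""
    spec = np.abs(np.fft.fft(x, n=n_grid)) ** 2
    freqs = np.fft.fftfreq(n_grid, d=1.0 / fs)
    return freqs[np.argmax(spec)]

# Synthetic A-scan sequence: Doppler shift of 1.2 kHz sampled at 50 kHz.
rng = np.random.default_rng(0)
n, fs, f0 = 256, 50e3, 1.2e3
t = np.arange(n) / fs
x = np.exp(2j * np.pi * f0 * t) + 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print(mle_doppler_frequency(x, fs))  # close to 1200 Hz
```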
flexsurv: A Platform for Parametric Survival Modeling in R
Jackson, Christopher H.
2018-01-01
flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
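flexsurv itself is an R package; the following Python sketch (an illustration under stated assumptions, not the package's API) shows the same full-likelihood principle for one built-in case, a right-censored Weibull fit: events contribute the density, censored times contribute the survival function.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_log_lik(params, t, event):
    """Full log-likelihood for right-censored data: log-density for
    observed events, log-survival for censored times."""
    shape, scale = np.exp(params)          # log-parametrization keeps both positive
    logf = weibull_min.logpdf(t, shape, scale=scale)
    logS = weibull_min.logsf(t, shape, scale=scale)
    return -np.sum(np.where(event == 1, logf, logS))

rng = np.random.default_rng(1)
t_true = weibull_min.rvs(1.5, scale=10.0, size=300, random_state=rng)
c = rng.uniform(0, 25, size=300)           # administrative censoring times
t, event = np.minimum(t_true, c), (t_true <= c).astype(int)

fit = minimize(neg_log_lik, x0=np.log([1.0, 5.0]), args=(t, event), method="Nelder-Mead")
print(np.exp(fit.x))   # estimated (shape, scale), near (1.5, 10)
```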
Broët, Philippe; Tsodikov, Alexander; De Rycke, Yann; Moreau, Thierry
2004-06-01
This paper presents two-sample statistics suited for testing equality of survival functions against improper semi-parametric accelerated failure time alternatives. These tests are designed for comparing either the short- or the long-term effect of a prognostic factor, or both. These statistics are obtained as partial likelihood score statistics from a time-dependent Cox model. As a consequence, the proposed tests can be very easily implemented using widely available software. A breast cancer clinical trial is presented as an example to demonstrate the utility of the proposed tests.
NASA Astrophysics Data System (ADS)
Smetanin, S. N.; Jelínek, M., Jr.; Kubeček, V.; Jelínková, H.
2015-09-01
Optimal conditions for low-threshold collinear parametric Raman comb generation in calcite (CaCO3) are investigated experimentally under 20 ps laser pulse excitation, in agreement with the theoretical study. Collinear parametric Raman generation of the highest number of Raman components was achieved in short calcite crystals satisfying the optimal condition for Stokes-anti-Stokes coupling. At the excitation wavelength of 1064 nm, using the optimum-length crystal resulted in effective multi-octave frequency Raman comb generation containing up to five anti-Stokes and more than four Stokes components (from 674 nm to 1978 nm). Pumping at 532 nm resulted in frequency Raman comb generation from the 477 nm 2nd anti-Stokes up to the 692 nm 4th Stokes component. Using a crystal of non-optimal length leads to generation of the Stokes components only, with higher thresholds, because of cascade-like stimulated Raman scattering with suppressed parametric coupling.
Determining prescription durations based on the parametric waiting time distribution.
Støvring, Henrik; Pottegård, Anton; Hallas, Jesper
2016-12-01
The purpose of the study is to develop a method to estimate the duration of single prescriptions in pharmacoepidemiological studies when the single prescription duration is not available. We developed an estimation algorithm based on maximum likelihood estimation of a parametric two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies, and the method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively. Similar results were found with a Weibull FRD. The algorithm allows valid estimation of single prescription durations, especially when the WTD reliably separates current users from incident users, and may replace ad-hoc decision rules in automated implementations. Copyright © 2016 John Wiley & Sons, Ltd.
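A simplified sketch of the parametric-mixture idea (illustrative only; the published algorithm additionally inverts the estimated FRD to recover the IAD, a step omitted here). It exploits the fact that a Log-Normal mixture is a Gaussian mixture on the log scale, and reads off the 80th percentile of the prevalent-user component as a stand-in duration; all data below are hypothetical.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Simulated waiting times (days): prevalent users redeem early,
# incident users appear roughly uniformly over the window.
rng = np.random.default_rng(2)
prevalent = rng.lognormal(mean=np.log(90), sigma=0.4, size=800)
incident = rng.uniform(1, 365, size=200)
waits = np.concatenate([prevalent, incident])

# A Log-Normal mixture is a Gaussian mixture on the log scale.
gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(waits).reshape(-1, 1))
k = np.argmin(gm.means_.ravel())                # prevalent-user component (shorter waits)
mu, sd = gm.means_.ravel()[k], np.sqrt(gm.covariances_.ravel()[k])

# "Prescription duration": time by which 80% of current users return.
print(np.exp(norm.ppf(0.8, loc=mu, scale=sd)))
```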
Latent component-based gear tooth fault detection filter using advanced parametric modeling
NASA Astrophysics Data System (ADS)
Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.
2009-10-01
In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. The design of the filter consists of identifying the most appropriate latent component (LC) of the undamaged gearbox signal by analyzing the instant modules (IMs) and instant frequencies (IFs), and then using the component with the lowest IM as the filter output for detecting faults of the gearbox. The filter parameters are estimated using LC theory, in which an advanced parametric modeling method has been implemented. The proposed method is applied to signals extracted from a simulated gearbox for detection of simulated gear faults. In addition, the method is used for quality inspection of the produced Nissan-Junior vehicle gearbox by gear profile error detection on an industrial test bed. For evaluation purposes, the proposed method is compared with previous parametric TAR/AR-based filters, in which the parametric model residual is considered as the filter output and Yule-Walker and Kalman filters are implemented for estimating the parameters. The results confirm the high performance of the new fault detection method.
A Versatile Omnibus Test for Detecting Mean and Variance Heterogeneity
Bailey, Matthew; Kauwe, John S. K.; Maxwell, Taylor J.
2014-01-01
Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (GxG), or gene-by-environment (GxE) interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRTMV) or either effect alone (LRTM or LRTV) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to non-normality regardless of any trait transformation. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how linkage disequilibrium (LD) can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D' and relatively low r2 values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL, since it takes advantage of loci that also have mean effects without sacrificing much power to detect variance-only effects. We discuss using vQTL as an approach to detect gene-by-gene interactions, how vQTL are related to relationship loci (rQTL), and how both can create prior hypotheses for each other and reveal the relationships between traits and possibly between components of a composite trait. PMID:24482837
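A minimal covariate-free sketch of a joint mean-variance likelihood ratio test under a normal working model (closed-form MLEs; the published LRTMV also handles covariates, and the abstract's own caveat about non-normality argues for a parametric bootstrap in practice). The genotype coding and effect sizes below are hypothetical.

```python
import numpy as np
from scipy.stats import chi2, norm

def lrt_mv(y, g):
    """Joint likelihood-ratio test for mean and variance heterogeneity
    across the levels of g, under a normal working model (closed-form
    MLEs: per-group means and per-group ML variances)."""
    groups = np.unique(g)
    ll_alt = sum(norm.logpdf(y[g == a], y[g == a].mean(),
                             y[g == a].std()).sum() for a in groups)
    ll_null = norm.logpdf(y, y.mean(), y.std()).sum()
    stat = 2 * (ll_alt - ll_null)
    df = 2 * (len(groups) - 1)          # one mean and one variance per extra group
    return stat, chi2.sf(stat, df)

rng = np.random.default_rng(3)
g = rng.integers(0, 3, size=600)                         # genotype 0/1/2
y = 0.2 * g + rng.standard_normal(600) * (1 + 0.3 * g)   # mean and variance both shift
print(lrt_mv(y, g))
```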
Semi-Parametric Item Response Functions in the Context of Guessing. CRESST Report 844
ERIC Educational Resources Information Center
Falk, Carl F.; Cai, Li
2015-01-01
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
Decker, Anna L.; Hubbard, Alan; Crespi, Catherine M.; Seto, Edmund Y.W.; Wang, May C.
2015-01-01
While child and adolescent obesity is a serious public health concern, few studies have utilized parameters based on the causal inference literature to examine the potential impacts of early intervention. The purpose of this analysis was to estimate the causal effects of early interventions to improve physical activity and diet during adolescence on body mass index (BMI), a measure of adiposity. The most widespread statistical method in studies of child and adolescent obesity is multivariable regression, with the parameter of interest being the coefficient on the variable of interest. This approach does not appropriately adjust for time-dependent confounding, and the modeling assumptions may not always be met. An alternative parameter to estimate is one motivated by the causal inference literature, which can be interpreted as the mean change in the outcome under interventions to set the exposure of interest. The underlying data-generating distribution, upon which the estimator is based, can be estimated via a parametric or semi-parametric approach. Using data from the National Heart, Lung, and Blood Institute Growth and Health Study, a 10-year prospective cohort study of adolescent girls, we estimated the longitudinal impact of physical activity and diet interventions on 10-year BMI z-scores, using both parametric and semi-parametric estimation approaches. The parameters of interest were estimated with a recently released R package, ltmle, for estimating means based upon general longitudinal treatment regimes. We found that early, sustained intervention on total calories had a greater impact than a physical activity intervention or non-sustained interventions. Multivariable linear regression yielded inflated effect estimates compared to estimates based on targeted maximum-likelihood estimation and data-adaptive super learning. Our analysis demonstrates that sophisticated, optimal semiparametric estimation of longitudinal treatment-specific means via ltmle provides a powerful yet easy-to-use tool, removing impediments to putting theory into practice. PMID:26046009
Cosmological constraints on interacting light particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brust, Christopher; Cui, Yanou; Sigurdson, Kris
2017-08-01
Cosmological observations are becoming increasingly sensitive to the effects of light particles in the form of dark radiation (DR) at the time of recombination. The conventional observable of effective neutrino number, N_eff, is insufficient for probing generic, interacting models of DR. In this work, we perform likelihood analyses which allow both free-streaming effective neutrinos (parametrized by N_eff) and interacting effective neutrinos (parametrized by N_fld). We motivate an alternative parametrization of DR in terms of N_tot (the total effective number of neutrinos) and f_fs (the fraction of effective neutrinos which are free-streaming), which is less degenerate than using N_eff and N_fld. Using the Planck 2015 likelihoods in conjunction with measurements of baryon acoustic oscillations (BAO), we find constraints on the total amount of beyond-the-Standard-Model effective neutrinos (both free-streaming and interacting) of ΔN_tot < 0.39 at 2σ. In addition, we consider the possibility that this scenario alleviates the tensions between early-time and late-time cosmological observations, in particular the measurements of σ_8 (the amplitude of matter power fluctuations at 8 h^-1 Mpc), finding a mild preference for interactions among light species. We further forecast the sensitivities of a variety of future experiments, including Advanced ACTPol (a representative CMB Stage-III experiment), CMB Stage-IV, and the Euclid satellite. This study is relevant for probing non-standard neutrino physics as well as a wide variety of new particle physics models beyond the Standard Model that involve dark radiation.
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen
2018-07-01
Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Second, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference, and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
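A toy sketch of the density-estimation idea, assuming a linear-Gaussian simulator so the joint density of (parameter, compressed summary) can be fit with a single Gaussian and conditioned analytically; real DELFI uses flexible conditional density estimators, so this only illustrates the concept.

```python
import numpy as np

# Toy density-estimation likelihood-free inference: draw parameters from
# the prior, simulate compressed summaries, fit a joint Gaussian to
# (theta, t), then condition on the observed summary.
rng = np.random.default_rng(4)
theta = rng.normal(0.0, 1.0, size=5000)                # prior draws
t = 2.0 * theta + rng.normal(0.0, 0.5, size=5000)      # one summary per parameter

joint = np.cov(np.vstack([theta, t]))
m_th, m_t = theta.mean(), t.mean()

t_obs = 1.3
post_mean = m_th + joint[0, 1] / joint[1, 1] * (t_obs - m_t)   # Gaussian conditioning
post_var = joint[0, 0] - joint[0, 1] ** 2 / joint[1, 1]
print(post_mean, np.sqrt(post_var))
```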
Validation of a Parametric Approach for 3d Fortification Modelling: Application to Scale Models
NASA Astrophysics Data System (ADS)
Jacquot, K.; Chevrier, C.; Halin, G.
2013-02-01
The parametric modelling approach to the virtual representation of cultural heritage has been explored for years, since it can address many limitations of digitising tools. For example, essential historical sources for fortification virtual reconstructions, such as plans-reliefs, have several shortcomings when they are scanned. To overcome those problems, knowledge-based modelling can be used: knowledge models based on the analysis of the theoretical literature of a specific domain, such as bastioned fortification treatises, can be the cornerstone of the creation of a parametric library of fortification components. Implemented in Grasshopper, these components are manually adjusted to the available data (i.e. 3D surveys of plans-reliefs or scanned maps). Most of the fortification area is now modelled, and the question of accuracy assessment is raised. A specific method is used to evaluate the accuracy of the parametric components. The results of the assessment process will allow us to validate the parametric approach. The automation of the adjustment process can then be planned. The virtual model of the fortification is part of a larger project aimed at valorising and diffusing a unique cultural heritage item: the collection of plans-reliefs. As such, knowledge models are precious assets when automation and semantic enhancements are considered.
Coupled parametric design of flow control and duct shape
NASA Technical Reports Server (NTRS)
Florea, Razvan (Inventor); Bertuccioli, Luca (Inventor)
2009-01-01
A method for designing gas turbine engine components using a coupled parametric analysis of part geometry and flow control is disclosed. Included are the steps of parametrically defining the geometry of the duct wall shape, parametrically defining one or more flow control actuators in the duct wall, measuring a plurality of performance parameters or metrics (e.g., flow characteristics) of the duct and comparing the results of the measurement with desired or target parameters, and selecting the optimal duct geometry and flow control for at least a portion of the duct, the selection process including evaluating the plurality of performance metrics in a Pareto analysis. The use of this method in the design of inter-turbine transition ducts, serpentine ducts, inlets, diffusers, and similar components provides a design which reduces pressure losses and flow profile distortions.
Petersen, Sidsel R; Alkeskjold, Thomas T; Olausson, Christina B; Lægsgaard, Jesper
2014-08-15
The parametric gain range of a degenerate four-wave mixing process is determined in the undepleted pump regime. The gain range is considered with and without taking the mode field distributions of the four-wave mixing components into account. It is found that the mode field distributions have to be included to evaluate the parametric gain correctly in dispersion-tailored speciality fibers and that mode profile engineering can provide a way to increase the parametric gain range.
NASA Astrophysics Data System (ADS)
Smetanin, Sergei; Jelínek, Michal; Kubeček, Václav
2017-05-01
Lasers based on the stimulated-Raman-scattering process can be used for frequency conversion to wavelengths that are not readily available from solid-state lasers. Parametric Raman lasers allow generation not only of Stokes but also of anti-Stokes components. However, practically all known crystalline parametric Raman anti-Stokes lasers have very low conversion efficiencies of about 1%, against theoretically predicted values of up to 40%, because the angular tolerance of phase matching is narrow in comparison with the angular divergence of the interacting beams. In our investigation, to widen the angular tolerance of four-wave mixing and to obtain high conversion efficiency into the anti-Stokes wave, we propose and study a new scheme of parametric Raman anti-Stokes laser at 503 nm with phase-matched collinear beam interaction of orthogonally polarized Raman components in calcite under 532 nm, 20 ps laser pumping. We use only one 532 nm laser source to pump the Raman-active calcite crystal oriented at the phase-matching angle for four-wave mixing of orthogonally polarized Raman components. Additionally, we split the 532 nm laser radiation into orthogonally polarized components entering the Raman-active calcite crystal at certain incidence angles to fulfill tangential phase matching, compensating the walk-off of extraordinary waves for collinear beam interaction in the crystal with the widest angular tolerance of four-wave mixing. For the first time, a 503 nm anti-Stokes conversion efficiency of 30%, close to the theoretical limit of about 40%, is obtained at an overall optical efficiency of the parametric Raman anti-Stokes generation of up to 3.5% in calcite, owing to the realization of tangential phase matching insensitive to angular mismatch.
A parametric method for determining the number of signals in narrow-band direction finding
NASA Astrophysics Data System (ADS)
Wu, Qiang; Fuhrmann, Daniel R.
1991-08-01
A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach has the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).
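For contrast, a sketch of the eigenvalue-based comparator cited above (the Wax-Kailath MDL criterion), which the proposed method improves upon; the array geometry and snapshot count below are hypothetical.

```python
import numpy as np

def mdl_num_signals(X):
    """Wax-Kailath MDL estimate of the number of signals from the
    eigenvalues of the sample covariance of p-sensor snapshots X (p x N)."""
    p, N = X.shape
    ev = np.sort(np.linalg.eigvalsh(X @ X.conj().T / N))[::-1]   # descending
    mdl = []
    for k in range(p):
        tail = ev[k:]                        # smallest p-k eigenvalues
        geo = np.exp(np.mean(np.log(tail)))  # geometric mean
        ari = np.mean(tail)                  # arithmetic mean
        mdl.append(-N * (p - k) * np.log(geo / ari)
                   + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(mdl))

# Two unit-power sources on a 6-sensor array, 200 snapshots.
rng = np.random.default_rng(5)
p, N = 6, 200
A = (rng.standard_normal((p, 2)) + 1j * rng.standard_normal((p, 2))) / np.sqrt(2)
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N)))
print(mdl_num_signals(3 * A @ S + noise))   # expect 2
```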
Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.
2013-01-01
Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303
A preliminary study of longitudinal neuroadaptation associated with recovery from addiction.
Forster, Sarah E; Finn, Peter R; Brown, Joshua W
2016-11-01
Few studies have explored longitudinal change in event-related brain responses during early recovery from addiction. Moreover, existing findings yield evidence of both increased and decreased signaling within reward and control centers over time. The current study explored reward- and control-related signals in a risky decision-making task and specifically investigated parametric modulations of the BOLD signal, rather than signal magnitude alone. It was hypothesized that risk-related signals during decision-making and outcome evaluation would reflect recovery and that change in specific signals would correspond with improved treatment outcomes. Twenty-one substance dependent individuals were recruited upon enrollment in community-based substance use treatment programs, wherein they received treatment-as-usual. Participants completed functional neuroimaging assessments at baseline and 3-month follow-up while performing the Balloon Analogue Risk Task (BART). Risk- and reward-sensitive signals were identified using parametric modulators. Substance use was tracked throughout the 3-month study interval using the timeline follow-back procedure. Longitudinal contrasts of parametric modulators suggested improved formation of risk-informed outcome expectations at follow-up. Specifically, a greater response to high risk (low-likelihood) positive feedback was identified in caudal anterior cingulate cortex (ACC) and a greater response to low risk (low-likelihood) negative feedback was identified in caudal ACC and inferior frontal gyrus. In addition, attenuation of a ventromedial prefrontal cortex (vmPFC) "reward-seeking" signal (i.e., increasing response with greater reward) during risky decisions at follow-up was associated with less substance use during the study interval. Changes in risk- and reward-related signaling in ACC/vmPFC appear to reflect recovery and may support sobriety. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
GEE-Smoothing Spline in Semiparametric Model with Correlated Nominal Data
NASA Astrophysics Data System (ADS)
Ibrahim, Noor Akma; Suliadi
2010-11-01
In this paper we propose a GEE-smoothing spline approach to the estimation of semiparametric models with correlated nominal data. The method can be seen as an extension of parametric generalized estimating equations to semiparametric models. The nonparametric component is estimated using a smoothing spline, specifically the natural cubic spline. We use a profile algorithm in the estimation of both parametric and nonparametric components. The properties of the estimators are evaluated using simulation studies.
Rapid Parameterization Schemes for Aircraft Shape Optimization
NASA Technical Reports Server (NTRS)
Li, Wu
2012-01-01
A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.
Crowther, Michael J; Look, Maxime P; Riley, Richard D
2014-09-28
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
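A minimal sketch of the quadrature step for the simplest member of this model family (exponential PH with a normal random intercept); non-adaptive Gauss-Hermite nodes are used here, whereas the paper also supports adaptive quadrature, and full estimation would maximize the sum of such cluster terms over all parameters.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def cluster_loglik(t, d, beta0, sigma, n_nodes=15):
    """Marginal log-likelihood of one cluster under an exponential PH
    model with a normal random intercept b ~ N(0, sigma^2), integrated
    out by (non-adaptive) Gauss-Hermite quadrature."""
    x, w = hermgauss(n_nodes)                  # nodes/weights for weight exp(-x^2)
    b = np.sqrt(2.0) * sigma * x               # change of variables to N(0, sigma^2)
    lam = np.exp(beta0 + b[:, None])           # hazard at each node, each subject
    ll_nodes = np.sum(d * np.log(lam) - lam * t, axis=1)
    return np.log(np.sum(w * np.exp(ll_nodes)) / np.sqrt(np.pi))

# One cluster: five subjects, three events (hypothetical data).
t = np.array([2.1, 0.7, 3.5, 1.2, 4.0])
d = np.array([1, 1, 0, 1, 0])
print(cluster_loglik(t, d, beta0=-1.0, sigma=0.5))
```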
Parametric Excitation of Electrostatic Dust-Modes by Ion-Cyclotron Waves in a Dusty Plasma
NASA Astrophysics Data System (ADS)
Islam, M. K.; Salahuddin, M.; Ferdous, T.; Salimullah, M.
A large amplitude electrostatic ion-cyclotron wave propagating through a magnetized and collisional dusty plasma undergoes strong parametric instability off the low-frequency dust-modes. The presence of the dust-component has effect on the nonlinear coupling via the dust-modes. The ion-neutral collisions are seen to have significant effect on the damping and consequent overall growth of the parametric excitation process.
Empirical likelihood method for non-ignorable missing data problems.
Guan, Zhong; Qin, Jing
2017-01-01
The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness is the most difficult missing data problem, in which the missingness of a response depends on its own value. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available beyond the fully parametric model-based approach. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method we can obtain the constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of a real AIDS trial data set shows that the missingness of CD4 counts around two years is non-ignorable and the sample mean based on observed data only is biased.
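A sketch of the core empirical-likelihood computation in the simplest setting, testing a fully observed mean (the paper's constrained estimator for non-ignorable missingness builds additional constraints on top of this machinery): the Lagrange multiplier is solved by root finding, and -2 log R is referred to chi-square with 1 df.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_test_mean(x, mu0):
    """Owen's empirical likelihood ratio test of E[X] = mu0."""
    z = x - mu0
    if z.min() >= 0 or z.max() <= 0:
        return np.inf, 0.0                      # mu0 outside the convex hull
    # 1 + lambda*z must stay strictly positive for every observation:
    lo = (-1.0 + 1e-10) / z.max()
    hi = (-1.0 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    stat = 2.0 * np.sum(np.log1p(lam * z))
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(6)
x = rng.exponential(2.0, size=200)
print(el_test_mean(x, mu0=2.0))                 # should not reject
```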
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu
2012-01-01
Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, and epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution, using as data points either the lower, mid or upper bounds of the sampling intervals, or the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); we then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fits to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) relative to the original distributions from which propagule retention times were simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations, ranging from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of the cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time to peak propagule retrieval, except in the tail of the distribution, where broader sampling time-intervals may also produce accurate fits.
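A sketch of maximum-likelihood fitting to interval-censored retention times, assuming a Log-Normal distribution and fixed two-hour sampling intervals (both hypothetical choices); each propagule contributes F(hi) - F(lo) to the likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

def neg_loglik(params, lo, hi):
    """Interval-censored log-likelihood: each propagule is only known
    to have been retained for a time between lo and hi."""
    s, scale = np.exp(params)
    p = lognorm.cdf(hi, s, scale=scale) - lognorm.cdf(lo, s, scale=scale)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

# Simulate true retention times, then discretize into 2-hour sampling bins.
rng = np.random.default_rng(7)
t_true = rng.lognormal(mean=np.log(8.0), sigma=0.5, size=500)   # hours
lo = np.floor(t_true / 2.0) * 2.0
hi = lo + 2.0

fit = minimize(neg_loglik, x0=np.log([1.0, 5.0]), args=(lo, hi), method="Nelder-Mead")
print(np.exp(fit.x))   # (sigma, scale = exp(mu)), near (0.5, 8.0)
```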
Frey, H Christopher; Zhao, Yuchao
2004-11-15
Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparison of cumulative distributions of bootstrap confidence intervals and empirical data. The 95% uncertainty ranges of the emission inventories vary from as small as -25% to +42% for chromium to as large as -75% to +224% for arsenic with correlated surrogates. Uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.
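A minimal sketch of parametric bootstrap simulation for the uncertainty in a mean emission factor, assuming an uncensored Log-Normal sample (the paper additionally uses censored-data MLEs for nondetects); the data are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(8)
data = rng.lognormal(mean=0.0, sigma=1.0, size=40)   # stand-in emission factors

# MLE for the Log-Normal (closed form on the log scale).
mu_hat, sd_hat = np.log(data).mean(), np.log(data).std()

# Parametric bootstrap: refit on samples drawn from the fitted model and
# collect the implied mean emission factor each time.
boot_means = []
for _ in range(2000):
    bs = rng.lognormal(mu_hat, sd_hat, size=data.size)
    m, s = np.log(bs).mean(), np.log(bs).std()
    boot_means.append(np.exp(m + s**2 / 2))          # mean of a Log-Normal
print(np.percentile(boot_means, [2.5, 97.5]))        # 95% uncertainty range for the mean
```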
Parametric System Model for a Stirling Radioisotope Generator
NASA Technical Reports Server (NTRS)
Schmitz, Paul C.
2015-01-01
A Parametric System Model (PSM) was created in order to explore conceptual designs, the impact of component changes and power level on the performance of the Stirling Radioisotope Generator (SRG). Using the General Purpose Heat Source (GPHS approximately 250 Wth) modules as the thermal building block from which a SRG is conceptualized, trade studies are performed to understand the importance of individual component scaling on isotope usage. Mathematical relationships based on heat and power throughput, temperature, mass, and volume were developed for each of the required subsystems. The PSM uses these relationships to perform component- and system-level trades.
Parametric instability induced by X-mode wave heating at EISCAT
NASA Astrophysics Data System (ADS)
Wang, Xiang; Zhou, Chen; Liu, Moran; Honary, Farideh; Ni, Binbin; Zhao, Zhengyu
2016-10-01
In this paper, we present results of parametric instability induced by X-mode wave heating observed by the EISCAT (European Incoherent Scatter Scientific Association) radar at Tromsø, Norway. Three typical X-mode ionospheric heating experiments, on 22 October 2013, 19 October 2012, and 21 February 2013, are investigated in detail. Both parametric decay instability (PDI) and oscillating two-stream instability are observed during the X-mode heating period. We suggest that the full dispersion relation of the Langmuir wave can be employed to analyze the excitation of the X-mode parametric instability. A modified kinetic electron distribution is proposed and analyzed, which is able to satisfy the matching condition for parametric instability excitation. The parallel electric field component of the X-mode heating wave can also exceed the parametric instability excitation threshold under certain conditions.
Johnson, Rebecca N; Agapow, Paul-Michael; Crozier, Ross H
2003-11-01
The ant subfamily Formicinae is a large assemblage of 2458 species (J. Nat. Hist. 29 (1995) 1037), including species that weave leaf nests together with larval silk, and in which the metapleural gland, the ancestrally defining ant character, has been secondarily lost. We used sequences from two mitochondrial genes (cytochrome b and cytochrome oxidase 2) from 18 formicine and 4 outgroup taxa to derive a robust phylogeny, employing a search for tree islands using 10000 randomly constructed trees as starting points and deriving a maximum likelihood consensus tree from the ML tree and those not significantly different from it. Non-parametric bootstrapping showed that the ML consensus tree fit the data significantly better than three scenarios based on morphology, with that of Bolton (Identification Guide to the Ant Genera of the World, Harvard University Press, Cambridge, MA) being the best among these alternative trees. Trait mapping showed that weaving had arisen at least four times and possibly been lost once. A maximum likelihood analysis showed that loss of the metapleural gland is significantly associated with the weaver life-pattern. The graph of the frequencies with which trees were discovered versus their likelihood indicates that trees with high likelihoods have much larger basins of attraction than those with lower likelihoods. While this result indicates that single searches are more likely to find high- than low-likelihood tree islands, it also indicates that searching only for the single best tree may lose important information.
Survivorship analysis when cure is a possibility: a Monte Carlo study.
Goldman, A I
1984-01-01
Parametric survivorship analyses of clinical trials commonly involve the assumption of a hazard function that is constant over time. When the empirical curve obviously levels off, one can modify the hazard function model by use of a Gompertz or Weibull distribution with hazard decreasing over time. Some cancer treatments are thought to cure some patients within a short time of initiation. Then, instead of all patients having the same hazard, decreasing over time, a biologically more appropriate model assumes that an unknown proportion (1 - pi) have constant high risk whereas the remaining proportion (pi) have essentially no risk. This paper discusses the maximum likelihood estimation of pi and the power curves of the likelihood ratio test. Monte Carlo studies provide results for a variety of simulated trials; empirical data illustrate the methods.
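A sketch of the maximum likelihood estimation of pi under the stated model, with survival S(t) = pi + (1 - pi)exp(-lam t) and right censoring; the simulated trial below is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, event):
    """Cure-rate model: a fraction pi is cured (no risk), the rest have
    constant hazard lam. Events contribute the density, censored
    observations the survival function S(t) = pi + (1 - pi)*exp(-lam*t)."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))     # logit keeps pi in (0, 1)
    lam = np.exp(params[1])                   # log keeps the hazard positive
    logf = np.log(1.0 - pi) + np.log(lam) - lam * t
    logS = np.log(pi + (1.0 - pi) * np.exp(-lam * t))
    return -np.sum(np.where(event == 1, logf, logS))

rng = np.random.default_rng(9)
n, pi_true, lam_true = 400, 0.3, 0.5
cured = rng.random(n) < pi_true
t_event = np.where(cured, np.inf, rng.exponential(1.0 / lam_true, size=n))
c = rng.uniform(0, 8, size=n)                 # end-of-study censoring
t, event = np.minimum(t_event, c), (t_event <= c).astype(int)

fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(t, event), method="Nelder-Mead")
print(1 / (1 + np.exp(-fit.x[0])), np.exp(fit.x[1]))   # near (0.3, 0.5)
```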
Fuel cell on-site integrated energy system parametric analysis of a residential complex
NASA Technical Reports Server (NTRS)
Simons, S. N.
1977-01-01
A parametric energy-use analysis was performed for a large apartment complex served by a fuel cell on-site integrated energy system (OS/IES). The variables parameterized include operating characteristics for four phosphoric acid fuel cells, eight OS/IES energy recovery systems, and four climatic locations. The annual fuel consumption for selected parametric combinations is presented, and a breakeven economic analysis is presented for one parametric combination. The results show that fuel cell electrical efficiency and system component choice have the greatest effect on annual fuel consumption; fuel cell thermal efficiency and geographic location have less of an effect.
Dai, James Y.; Hughes, James P.
2012-01-01
The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
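A simplified sketch of a formal likelihood with heteroscedastic, lag-1 autocorrelated errors; for brevity the innovations are taken as Gaussian rather than Skew Exponential Power, so this shows the shape of such a likelihood, not the exact BAIPU form, and all parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(y_obs, y_sim, phi, s0, s1):
    """Gaussian stand-in for the formal likelihood: heteroscedastic
    scaling sigma_t = s0 + s1*|y_sim|, AR(1) whitening of the
    standardized residuals, plus the change-of-variables Jacobian."""
    sigma = s0 + s1 * np.abs(y_sim)
    eta = (y_obs - y_sim) / sigma              # standardized residuals
    a = eta[1:] - phi * eta[:-1]               # whitened innovations, ~ N(0, 1)
    ll = norm.logpdf(eta[0], scale=1.0 / np.sqrt(1.0 - phi**2))  # stationary start
    return ll + norm.logpdf(a).sum() - np.log(sigma).sum()

# Hypothetical synthetic check: observations built from the same error model.
rng = np.random.default_rng(10)
y_sim = 5.0 + np.sin(np.linspace(0.0, 6.0, 200))
eta = np.zeros(200)
for t in range(1, 200):
    eta[t] = 0.6 * eta[t - 1] + rng.standard_normal()
y_obs = y_sim + (0.1 + 0.05 * np.abs(y_sim)) * eta
print(log_likelihood(y_obs, y_sim, phi=0.6, s0=0.1, s1=0.05))
```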
Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric
2010-01-01
It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer the population mutation rate θ = 4Neμ, the population exponential growth rate R, and the error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take the error into account. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, the composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
A survey of kernel-type estimators for copula and their applications
NASA Astrophysics Data System (ADS)
Sumarjaya, I. W.
2017-10-01
Copulas have been widely used to model nonlinear dependence structure. Main applications of copulas include areas such as finance, insurance, hydrology and rainfall, to name but a few. The flexibility of copulas allows researchers to model dependence structure beyond the Gaussian distribution. Basically, a copula is a function that couples a multivariate distribution function to its one-dimensional marginal distribution functions. In general, there are three methods to estimate a copula: the parametric, nonparametric, and semiparametric methods. In this article we survey kernel-type estimators for copulas, such as the mirror reflection kernel, the beta kernel, the transformation method and the local likelihood transformation method. We then apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, despite variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
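A sketch of one surveyed estimator, the mirror-reflection kernel: pseudo-observations are reflected across the edges of the unit square before a standard Gaussian KDE, which reduces boundary bias; the data and grid below are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde, rankdata

def mirror_reflection_copula_kde(x, y, grid):
    """Mirror-reflection kernel estimator of a bivariate copula density:
    pseudo-observations are reflected across the four edges of the unit
    square (9 copies in total), then a Gaussian KDE is evaluated inside
    [0,1]^2 and rescaled because the mass was split over 9 copies."""
    n = len(x)
    u = rankdata(x) / (n + 1.0)               # pseudo-observations
    v = rankdata(y) / (n + 1.0)
    uu, vv = [], []
    for a in (u, -u, 2 - u):                  # reflect in the u-direction
        for b in (v, -v, 2 - v):              # reflect in the v-direction
            uu.append(a); vv.append(b)
    kde = gaussian_kde(np.vstack([np.concatenate(uu), np.concatenate(vv)]))
    return 9.0 * kde(grid)

# Evaluate on a small grid for two correlated (hypothetical) return series.
rng = np.random.default_rng(11)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=500)
g = np.linspace(0.1, 0.9, 5)
grid = np.vstack([np.repeat(g, 5), np.tile(g, 5)])
print(mirror_reflection_copula_kde(z[:, 0], z[:, 1], grid).round(2))
```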
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
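A minimal sketch of the model-based view: fitting an LNP filter by maximum likelihood reduces to a Poisson GLM with a concave log-likelihood (illustrative simulation; an exponential nonlinearity and the step size are assumptions).

```python
import numpy as np

# Maximum-likelihood fit of a linear-nonlinear-Poisson (LNP) neuron with
# exponential nonlinearity: rate = exp(X @ w). The Poisson log-likelihood
# sum(y*log(rate) - rate) is concave in w, so plain gradient ascent works.
rng = np.random.default_rng(12)
T, D = 5000, 8
X = rng.standard_normal((T, D))                 # stimulus (one row per time bin)
w_true = rng.standard_normal(D) * 0.3
y = rng.poisson(np.exp(X @ w_true - 1.0))       # spike counts

w = np.zeros(D + 1)
Xb = np.hstack([X, np.ones((T, 1))])            # add a bias column
for _ in range(500):
    rate = np.exp(Xb @ w)
    grad = Xb.T @ (y - rate)                    # gradient of the log-likelihood
    w += 1e-4 * grad
print(np.corrcoef(w[:-1], w_true)[0, 1])        # filter recovered up to noise
```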
NASA Astrophysics Data System (ADS)
Dai, Xiaoqian; Tian, Jie; Chen, Zhe
2010-03-01
Parametric images can represent both the spatial distribution and the quantification of the biological and physiological parameters of tracer kinetics. The linear least squares (LLS) method is a well-established linear regression method for generating parametric images by fitting compartment models with good computational efficiency. However, bias exists in LLS-based parameter estimates, owing to the noise present in tissue time activity curves (TTACs) that propagates as correlated error in the LLS linearized equations. To address this problem, a volume-wise principal component analysis (PCA) based method is proposed. In this method, the dynamic PET data are first properly pre-transformed to standardize the noise variance, as PCA is a data-driven technique and cannot itself separate signals from noise. Second, volume-wise PCA is applied to the PET data. The signals can be mostly represented by the first few principal components (PCs), and the noise is left in the subsequent PCs. The noise-reduced data are then obtained using the first few PCs by applying 'inverse PCA'. The data should also be transformed back according to the pre-transformation method used in the first step, to maintain the scale of the original data set. Finally, the resulting data set is used to generate parametric images using the linear least squares (LLS) estimation method. Compared with other noise-removal methods, the proposed method can achieve high statistical reliability in the generated parametric images. The effectiveness of the method is demonstrated both with computer simulation and with a clinical dynamic FDG PET study.
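A sketch of the central denoising step, truncated SVD as 'inverse PCA' (the paper's variance-standardizing pre-transformation is omitted here, and the data are hypothetical).

```python
import numpy as np

def pca_denoise(dynamic_data, n_pc=3):
    """Volume-wise PCA denoising: keep the first few principal components
    of the (time x voxel) matrix, where most of the kinetic signal lives,
    and discard the later noise-dominated components ('inverse PCA')."""
    mean = dynamic_data.mean(axis=0)
    centered = dynamic_data - mean
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    s[n_pc:] = 0.0                              # zero out the noise PCs
    return U @ np.diag(s) @ Vt + mean

# 30 time frames x 1000 voxels: two smooth kinetics plus noise.
rng = np.random.default_rng(13)
t = np.linspace(0, 1, 30)
signal = np.outer(1 - np.exp(-3 * t), rng.random(1000)) + np.outer(t, rng.random(1000))
noisy = signal + 0.3 * rng.standard_normal((30, 1000))
denoised = pca_denoise(noisy, n_pc=2)
print(np.abs(denoised - signal).mean(), np.abs(noisy - signal).mean())
```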
Parametric instability in the high power era of Advanced LIGO
NASA Astrophysics Data System (ADS)
Hardwick, Terra; Blair, Carl; Kennedy, Ross; Evans, Matthew; Fritschel, Peter; LIGO Virgo Scientific Collaboration
2017-01-01
After the first direct detections of gravitational waves, Advanced LIGO aims to increase its detection rate during the upcoming science runs through a series of detector improvements, including increased optical power. Higher circulating power increases the likelihood for three-mode parametric instabilities (PIs), in which mechanical modes of the mirrors scatter light into higher-order optical modes in the cavity and the resulting optical modes reinforce the mechanical modes via radiation pressure. Currently, LIGO uses two PI mitigation methods: thermal tuning to change the cavity g-factor and effectively decrease the frequency overlap between mechanical and optical modes, and active damping of mechanical modes with electrostatic actuation. While the combined methods provide stability at the current operating power, there is evidence that these will be insufficient for the next planned power increase; future suppression methods including acoustic mode dampers and dynamic g-factor modulation are discussed.
Simple and flexible SAS and SPSS programs for analyzing lag-sequential categorical data.
O'Connor, B P
1999-11-01
This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
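A sketch of the core computations such programs perform, written in Python rather than SAS/SPSS for illustration: lag-1 transitional frequencies, expected frequencies under independence, transitional probabilities, and Allison-Liker adjusted residuals for a hypothetical code stream.

```python
import numpy as np

def lag_sequential(codes, k):
    """Lag-1 transitional frequencies, expected frequencies under
    independence, transitional probabilities, and adjusted residuals
    for a stream of integer codes 0..k-1."""
    obs = np.zeros((k, k))
    for a, b in zip(codes[:-1], codes[1:]):
        obs[a, b] += 1
    n = obs.sum()
    row, col = obs.sum(1, keepdims=True), obs.sum(0, keepdims=True)
    exp = row * col / n                      # expected under independence
    prob = obs / row                         # transitional probabilities
    # Allison-Liker adjusted residuals:
    adj = (obs - exp) / np.sqrt(exp * (1 - row / n) * (1 - col / n))
    return obs, exp, prob, adj

codes = [0, 1, 0, 1, 2, 0, 1, 0, 2, 2, 0, 1, 0, 1]
obs, exp, prob, adj = lag_sequential(codes, k=3)
print(np.round(adj, 2))
```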
The lz(p)* Person-Fit Statistic in an Unfolding Model Context.
Tendeiro, Jorge N
2017-01-01
Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded unfolding model is used. Results from a simulation study indicate that the person-fit statistic performed relatively well in detecting midpoint response style patterns and not so well in detecting extreme response style patterns.
A hyperbolastic type-I diffusion process: Parameter estimation by means of the firefly algorithm.
Barrera, Antonio; Román-Román, Patricia; Torres-Ruiz, Francisco
2018-01-01
A stochastic diffusion process, whose mean function is a hyperbolastic curve of type I, is presented. The main characteristics of the process are studied and the problem of maximum likelihood estimation for the parameters of the process is considered. To this end, the firefly metaheuristic optimization algorithm is applied after bounding the parametric space by a stagewise procedure. Some examples based on simulated sample paths and real data illustrate this development. Copyright © 2017 Elsevier B.V. All rights reserved.
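The optimization step can be sketched as follows; this is a generic implementation of the standard firefly update (attractiveness beta0*exp(-gamma*r^2) plus a random walk), with all tuning constants (beta0, gamma, alpha) chosen arbitrarily rather than taken from the paper.

    import numpy as np

    def firefly_maximize(loglik, bounds, n_fireflies=25, n_iter=200,
                         beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
        """Maximize a log-likelihood over a bounded parameter space with
        the firefly algorithm: each firefly moves toward every brighter
        one, with attractiveness decaying in squared distance."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        x = rng.uniform(lo, hi, size=(n_fireflies, len(lo)))
        f = np.array([loglik(xi) for xi in x])
        for _ in range(n_iter):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if f[j] > f[i]:              # move firefly i toward brighter j
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        step = rng.uniform(-0.5, 0.5, len(lo)) * (hi - lo)
                        x[i] = np.clip(x[i] + beta * (x[j] - x[i]) + alpha * step,
                                       lo, hi)
                        f[i] = loglik(x[i])
        best = np.argmax(f)
        return x[best], f[best]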
Modelling road accident blackspots data with the discrete generalized Pareto distribution.
Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María
2014-10-01
This study shows how road traffic network events, in particular road accidents on blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities on Spanish blackspots in the period 2003-2007, from the Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and a particular case of the previous model). To that end, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods for their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: the Chi-square test and the discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that those probabilistic models can be useful to describe the road accident blackspot datasets analyzed. Copyright © 2014 Elsevier Ltd. All rights reserved.
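A minimal sketch of the maximum likelihood fit for the two-parameter discrete Lomax model is given below, assuming the usual discretization p(k) = S(k) - S(k+1) of a continuous Lomax survival function; the count vector is invented for illustration only.

    import numpy as np
    from scipy.optimize import minimize

    def discrete_lomax_nll(params, data):
        """Negative log-likelihood of a discretized Lomax distribution
        with survival S(k) = (1 + k/sigma)^(-alpha), k = 0, 1, 2, ..."""
        alpha, sigma = params
        S = lambda k: (1.0 + k / sigma) ** (-alpha)
        pmf = S(data) - S(data + 1)
        return -np.sum(np.log(pmf))

    counts = np.array([0, 1, 1, 2, 0, 3, 1, 0, 5, 2])   # e.g. crashes per blackspot
    fit = minimize(discrete_lomax_nll, x0=[1.0, 1.0], args=(counts,),
                   bounds=[(1e-3, None), (1e-3, None)])
    alpha_hat, sigma_hat = fit.x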
NASA Astrophysics Data System (ADS)
Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania
2017-03-01
Multiresponse semiparametric regression is a simultaneous-equation regression model that fuses parametric and nonparametric components. The model comprises several equations, each with two components: a parametric part and a nonparametric part. The model used here has a linear function as the parametric component and a truncated polynomial spline as the nonparametric component, and can handle both linear and nonlinear relationships between the responses and the sets of predictor variables. The aim of this paper is to demonstrate the application of the regression model to modeling the effect of regional socio-economic factors on the use of information technology. More specifically, the response variables are the percentage of households with internet access and the percentage of households with a personal computer, while the predictor variables are the percentage of literate people, the percentage of electrification and the percentage of economic growth. Based on identification of the relationships between response and predictor variables, economic growth is treated as the nonparametric predictor and the others as parametric predictors. The results show that multiresponse semiparametric regression can be applied well, as indicated by the high coefficient of determination of 90 percent.
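The following sketch shows a truncated-spline semiparametric fit for a single response (the paper fits both responses simultaneously, which this simplification ignores); knot locations and the spline degree are left to the user.

    import numpy as np

    def truncated_spline_basis(x, knots, degree=1):
        """Truncated polynomial spline basis: x, ..., x^degree, plus
        max(0, x - k)^degree for each knot k."""
        cols = [x ** d for d in range(1, degree + 1)]
        cols += [np.maximum(0.0, x - k) ** degree for k in knots]
        return np.column_stack(cols)

    def fit_semiparametric(y, X_par, x_nonpar, knots, degree=1):
        """OLS fit of y on parametric predictors X_par plus a truncated
        spline in the nonparametric predictor x_nonpar."""
        Z = truncated_spline_basis(x_nonpar, knots, degree)
        A = np.column_stack([np.ones(len(y)), X_par, Z])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        fitted = A @ coef
        r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
        return coef, r2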
NASA Astrophysics Data System (ADS)
Smetanin, S. N.; Jelínek, M.; Kubeček, V.
2017-07-01
Stimulated Raman scattering in crystals can be used for single-pass frequency conversion to the Stokes-shifted wavelengths. The anti-Stokes shift can also be achieved, but the phase-matching condition has to be fulfilled because of the parametric four-wave mixing process. To widen the angular tolerance of four-wave mixing and to obtain high conversion efficiency into the anti-Stokes wave, we developed a new scheme of a parametric Raman anti-Stokes laser at 503 nm with phase-matched collinear beam interaction of orthogonally polarized Raman components in calcite oriented at the phase-matching angle under 532 nm, 20 ps laser excitation. The excitation laser beam was split into two orthogonally polarized components entering the calcite at specific incidence angles so as to fulfill nearly collinear phase matching and also to compensate the walk-off of extraordinary waves for collinear beam interaction. The phase matching of the parametric Raman interaction is tangential and insensitive to angular mismatch if the Poynting vectors of the biharmonic pump and the parametrically generated (anti-Stokes) waves are collinear. For the first time, this made it possible to achieve experimentally the highest conversion efficiency into the anti-Stokes wave (503 nm), up to 30% from the probe wave and up to 3.5% from both pump and probe waves, in a single-pass picosecond parametric calcite Raman laser. The highest anti-Stokes pulse energy was 1.4 μJ.
A Bayesian Alternative for Multi-objective Ecohydrological Model Specification
NASA Astrophysics Data System (ADS)
Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.
2015-12-01
Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes of the catchments, and are usually more complex and more richly parametrized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling, with the development of Markov Chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations focused on a single-objective likelihood (streamflow or LAI) and multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and the corresponding posterior distributions to examine parameter sensitivity. Results show that different prior distributions can strongly influence posterior distributions for parameters, especially when the available data are limited or parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits in different cases based on multi-objective likelihoods vs. single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to different data types.
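A histogram-based estimate of the KLD between prior and posterior samples of one parameter, as used here to flag parameter sensitivity, can be sketched as follows; the binning scheme is an arbitrary choice.

    import numpy as np

    def kld_from_samples(prior_samples, posterior_samples, n_bins=50):
        """Histogram-based KL(posterior || prior) for one scalar
        parameter; a large value flags a data-sensitive parameter."""
        lo = min(prior_samples.min(), posterior_samples.min())
        hi = max(prior_samples.max(), posterior_samples.max())
        edges = np.linspace(lo, hi, n_bins + 1)
        p, _ = np.histogram(posterior_samples, bins=edges, density=True)
        q, _ = np.histogram(prior_samples, bins=edges, density=True)
        w = np.diff(edges)
        mask = (p > 0) & (q > 0)         # avoid log(0); crude but simple
        return np.sum(w[mask] * p[mask] * np.log(p[mask] / q[mask]))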
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of the estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards.
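Under the conditional independence assumption, the empirical likelihood ratio for a grid cell factorizes into univariate ratios, which can be sketched as below; the histogram bin count and the small stabilizing constant are arbitrary choices.

    import numpy as np

    def likelihood_ratio_map(predictors, landslide_mask, n_bins=20):
        """Empirical likelihood-ratio hazard index under conditional
        independence: the joint ratio is the product of univariate
        ratios f(x | landslide) / f(x | no landslide) from histograms.
        predictors: list of 1-D arrays (one value per grid cell);
        landslide_mask: boolean array marking mapped landslide cells."""
        ratio = np.ones(len(landslide_mask))
        for x in predictors:
            edges = np.histogram_bin_edges(x, bins=n_bins)
            f1, _ = np.histogram(x[landslide_mask], bins=edges, density=True)
            f0, _ = np.histogram(x[~landslide_mask], bins=edges, density=True)
            idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
            ratio *= (f1[idx] + 1e-12) / (f0[idx] + 1e-12)
        return ratio   # rank cells by ratio to build a relative hazard map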
Parametric Mass Modeling for Mars Entry, Descent and Landing System Analysis Study
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Komar, D. R.
2011-01-01
This paper provides an overview of the parametric mass models used for the Entry, Descent, and Landing Systems Analysis study conducted by NASA in FY2009-2010. The study examined eight unique exploration class architectures that included elements such as a rigid mid-L/D aeroshell, a lifting hypersonic inflatable decelerator, a drag supersonic inflatable decelerator, a lifting supersonic inflatable decelerator implemented with a skirt, and subsonic/supersonic retro-propulsion. Parametric models used in this study relate the component mass to vehicle dimensions and mission key environmental parameters such as maximum deceleration and total heat load. The use of a parametric mass model allows the simultaneous optimization of trajectory and mass sizing parameters.
Parametric manipulation of the conflict signal and control-state adaptation.
Forster, Sarah E; Carter, Cameron S; Cohen, Jonathan D; Cho, Raymond Y
2011-04-01
Mechanisms by which the brain monitors and modulates performance are an important focus of recent research. The conflict-monitoring hypothesis posits that the ACC detects conflict between competing response pathways which, in turn, signals for enhanced control. The N2, an ERP component that has been localized to ACC, has been observed after high conflict stimuli. As a candidate index of the conflict signal, the N2 would be expected to be sensitive to the degree of response conflict present, a factor that depends on both the features of external stimuli and the internal control state. In the present study, we sought to explore the relationship between N2 amplitude and these variables through use of a modified Eriksen flankers task in which target-distracter compatibility was parametrically varied. We hypothesized that greater target-distracter incompatibility would result in higher levels of response conflict, as indexed by both behavior and the N2 component. Consistent with this prediction, there were parametric degradations in behavioral performance and increases in N2 amplitudes with increasing incompatibility. Further, increasingly incompatible stimuli led to the predicted parametric increases in control on subsequent incompatible trials as evidenced by enhanced performance and reduced N2 amplitudes. These findings suggest that the N2 component and associated behavioral performance are finely sensitive to the degree of response conflict present and to the control adjustments that result from modulations in conflict.
Qiu, Yi; Wei, Xiaoming; Du, Shuxin; Wong, Kenneth K Y; Tsia, Kevin K; Xu, Yiqing
2018-04-16
We propose a passively mode-locked fiber optical parametric oscillator assisted by optical time-stretch. Thanks to the recently developed optical time-stretch technique, the onset oscillating spectral components can be temporally dispersed across the pump envelope and compete for the parametric gain with the other parts of the onset oscillating sidebands within the pump envelope. By matching the amount of dispersion in the optical time-stretch to the pulse width of the quasi-CW pump and oscillating one of the parametric sidebands inside the fiber cavity, we numerically show that the fiber parametric oscillator can operate in a single-pulse regime. By varying the amount of intracavity dispersion, we further verify that this single-pulse mode-locking regime originates from optical pulse stretching and compression.
Guillaume, Bryan; Wang, Changqing; Poh, Joann; Shen, Mo Jun; Ong, Mei Lyn; Tan, Pei Fang; Karnani, Neerja; Meaney, Michael; Qiu, Anqi
2018-06-01
Statistical inference on neuroimaging data is often conducted using a mass-univariate model, equivalent to fitting a linear model at every voxel with a known set of covariates. Due to the large number of linear models, it is challenging to check if the selection of covariates is appropriate and to modify this selection adequately. The use of standard diagnostics, such as residual plotting, is clearly not practical for neuroimaging data. However, the selection of covariates is crucial for linear regression to ensure valid statistical inference. In particular, the mean model of regression needs to be reasonably well specified. Unfortunately, this issue is often overlooked in the field of neuroimaging. This study aims to adopt the existing Confounder Adjusted Testing and Estimation (CATE) approach and to extend it for use with neuroimaging data. We propose a modification of CATE that can yield valid statistical inferences using Principal Component Analysis (PCA) estimators instead of Maximum Likelihood (ML) estimators. We then propose a non-parametric hypothesis testing procedure that can improve upon parametric testing. Monte Carlo simulations show that the modification of CATE allows for more accurate modelling of neuroimaging data and can in turn yield a better control of False Positive Rate (FPR) and Family-Wise Error Rate (FWER). We demonstrate its application to an Epigenome-Wide Association Study (EWAS) on neonatal brain imaging and umbilical cord DNA methylation data obtained as part of a longitudinal cohort study. Software for this CATE study is freely available at http://www.bioeng.nus.edu.sg/cfa/Imaging_Genetics2.html. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
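The abstract does not spell out the exact non-parametric procedure; as one common way to obtain FWER control non-parametrically in a mass-univariate setting, a max-statistic permutation test over voxels can be sketched as follows (the score statistic and the exchangeability assumption are simplifications of whatever the paper actually uses).

    import numpy as np

    def maxT_permutation_pvalues(Y, x, n_perm=1000, seed=0):
        """Max-statistic permutation test: Y is (n_subjects, n_voxels),
        x a single covariate of interest, assumed exchangeable under the
        null after confounder adjustment. Returns FWER-adjusted p-values."""
        rng = np.random.default_rng(seed)
        xc = x - x.mean()
        denom = np.linalg.norm(xc) * Y.std(axis=0) + 1e-12
        t_obs = np.abs(xc @ Y) / denom
        max_null = np.empty(n_perm)
        for b in range(n_perm):
            xp = rng.permutation(xc)
            max_null[b] = (np.abs(xp @ Y) / denom).max()
        # adjusted p-value: fraction of permutation maxima beating t_obs
        return (max_null[None, :] >= t_obs[:, None]).mean(axis=1)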
NASA Technical Reports Server (NTRS)
Clayton, Louie
2004-01-01
This paper provides a discussion of the history of Carbon Cloth Phenolic (CCP) ply lifting in the Redesigned Solid Rocket Motor (RSRM) Program, a brief presentation of theoretical methods used for analytical evaluation, and results of parametric analyses of CCP material subject to test conditions of the Laser Hardened Material Evaluation Laboratory (LHMEL). CCP ply lift can occur in regions of the RSRM nozzle where the ply angle to the flame surface is generally less than about 20 degrees. There is a heating-rate dependence on the likelihood and severity of the condition, with higher heating rates generally producing more ply lift. The event occurs in-depth, near the heated surface, where the load necessary to mechanically separate the CCP plies is produced by the initial stages of pyrolysis gas generation due to the thermal decomposition of the phenolic resin matrix. Due to the shallow lay-up angle of the composite, normal components of the in-depth mechanical load, due to "pore pressure", are imparted primarily as a cross-ply tensile force on the interlaminar ply boundaries. Tensile capability in the cross-ply (out-of-plane) direction is solely determined by the matrix material capability. The elevated-temperature matrix material capabilities are overcome by the pressure-induced mechanical normal stress, and ply lift occurs. A theoretical model used for CCP in-depth temperature, pressure, and normal stress prediction, based on first principles, is briefly discussed, followed by a parametric evaluation of response variables subject to boundary conditions typical of ongoing test programs at the LHMEL facility. Model response demonstrates general trends observed in test and provides insight into the interactivity of material properties and constitutive relationships.
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges
2013-01-01
Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included a Poisson log-likelihood term and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of each parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%-29% and 32%-70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
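A generic sketch of a preconditioned nonlinear conjugate gradient update with a diagonal preconditioner is shown below; the fixed step size and the Polak-Ribiere beta are stand-ins for the authors' line search and update rule, and grad/precond are hypothetical user-supplied callables (the preconditioner returning, per the paper, each parameter divided by the sensitivity of the activity to it).

    import numpy as np

    def pcg_optimize(grad, theta0, precond, n_iter=50, step=1e-3):
        """Preconditioned nonlinear conjugate gradient (Polak-Ribiere).
        grad(theta): gradient of the cost (negative Poisson log-likelihood
        plus regularization); precond(theta): diagonal preconditioner."""
        theta = theta0.copy()
        g = grad(theta)
        d = -precond(theta) * g
        for _ in range(n_iter):
            theta = theta + step * d       # fixed step; use a line search in practice
            g_new = grad(theta)
            m = precond(theta)
            beta = max(0.0, (g_new * m) @ (g_new - g) / ((g * m) @ g + 1e-12))
            d = -m * g_new + beta * d
            g = g_new
        return theta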
NASA Astrophysics Data System (ADS)
Wang, Xiang; Cannon, Patrick; Zhou, Chen; Honary, Farideh; Ni, Binbin; Zhao, Zhengyu
2016-04-01
Recent ionospheric modification experiments performed at Tromsø, Norway, have indicated that an X-mode pump wave is capable of stimulating high-frequency enhanced plasma lines, which manifests the excitation of a parametric instability. This paper investigates theoretically how the observations can be explained by the excitation of a parametric instability driven by an X-mode pump wave. The threshold of the parametric instability has been calculated for several recent experimental observations at Tromsø, illustrating that our derived equations for the excitation of the parametric instability under X-mode heating can explain the experimental observations. According to our theoretical calculation, a minimum fraction of the pump wave electric field needs to be directed along the geomagnetic field direction in order for the parametric instability threshold to be met. A full-wave finite-difference time-domain simulation has been performed to demonstrate that a small parallel component of the pump wave electric field can be achieved during X-mode heating in the presence of inhomogeneous plasma.
Wang, Shijun; Liu, Peter; Turkbey, Baris; Choyke, Peter; Pinto, Peter; Summers, Ronald M
2012-01-01
In this paper, we propose a new pharmacokinetic model for parameter estimation of dynamic contrast-enhanced (DCE) MRI by using Gaussian process inference. Our model is based on the Tofts dual-compartment model for the description of tracer kinetics and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. The parameter estimation is done through a maximum likelihood approach and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data. Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.
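A minimal version of the coordinate-descent likelihood maximization for a zero-mean Gaussian process is sketched below; build_K is a hypothetical user-supplied function mapping the kinetic parameters to the covariance matrix implied by the Tofts-based process model, and the bounded one-dimensional solver is an illustrative choice.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def gp_negloglik(y, K):
        """Negative log marginal likelihood of y ~ N(0, K), up to a constant."""
        L = np.linalg.cholesky(K + 1e-8 * np.eye(len(y)))
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

    def coordinate_descent_mle(y, build_K, theta0, bounds, n_sweeps=10):
        """Maximize the GP likelihood one parameter at a time, holding
        the others fixed (a simple coordinate-descent variant)."""
        theta = np.array(theta0, dtype=float)
        for _ in range(n_sweeps):
            for i, (lo, hi) in enumerate(bounds):
                def obj(v):
                    t = theta.copy()
                    t[i] = v
                    return gp_negloglik(y, build_K(t))
                theta[i] = minimize_scalar(obj, bounds=(lo, hi),
                                           method='bounded').x
        return theta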
Testing the causality of Hawkes processes with time reversal
NASA Astrophysics Data System (ADS)
Cordi, Marcus; Challet, Damien; Muni Toke, Ioane
2018-03-01
We show that univariate and symmetric multivariate Hawkes processes are only weakly causal: the true log-likelihoods of real and reversed event time vectors are almost equal, thus parameter estimation via maximum likelihood only weakly depends on the direction of the arrow of time. In ideal (synthetic) conditions, tests of goodness of parametric fit unambiguously reject backward event times, which implies that inferring kernels from time-symmetric quantities, such as the autocovariance of the event rate, only rarely produce statistically significant fits. Finally, we find that fitting financial data with many-parameter kernels may yield significant fits for both arrows of time for the same event time vector, sometimes favouring the backward time direction. This goes to show that a significant fit of Hawkes processes to real data with flexible kernels does not imply a definite arrow of time unless one tests it.
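The forward/backward comparison is easy to reproduce for the exponential-kernel case, for which the Hawkes log-likelihood has the standard recursive form sketched below; evaluating it on the event times and on the reversed vector sort(T - times) quantifies the weak dependence on the arrow of time that the authors describe.

    import numpy as np

    def hawkes_loglik(params, times, T):
        """Exact log-likelihood of a univariate Hawkes process on [0, T]
        with exponential kernel phi(t) = alpha*beta*exp(-beta*t), where
        alpha is the branching ratio and mu the baseline intensity."""
        mu, alpha, beta = params
        A = 0.0            # recursive sum of exponentially decayed history
        ll = -mu * T
        prev = None
        for t in times:
            if prev is not None:
                A = np.exp(-beta * (t - prev)) * (1.0 + A)
            ll += np.log(mu + alpha * beta * A)
            ll -= alpha * (1.0 - np.exp(-beta * (T - t)))
            prev = t
        return ll

    # compare the two arrows of time for the same event vector:
    # ll_fwd = hawkes_loglik(theta, times, T)
    # ll_bwd = hawkes_loglik(theta, np.sort(T - times), T)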
Population Structure and Genomic Breed Composition in an Angus-Brahman Crossbred Cattle Population.
Gobena, Mesfin; Elzo, Mauricio A; Mateescu, Raluca G
2018-01-01
Crossbreeding is a common strategy used in tropical and subtropical regions to enhance beef production, and having accurate knowledge of breed composition is essential for the success of a crossbreeding program. Although pedigree records have been traditionally used to obtain the breed composition of crossbred cattle, the accuracy of pedigree-based breed composition can be reduced by inaccurate and/or incomplete records and Mendelian sampling. Breed composition estimation from genomic data has multiple advantages, including higher accuracy without being affected by missing, incomplete, or inaccurate records and the ability to be used as independent authentication of breed in breed-labeled beef products. The present study was conducted with 676 Angus-Brahman crossbred cattle with genotype and pedigree information to evaluate the feasibility and accuracy of using genomic data to determine breed composition. We used genomic data in parametric and non-parametric methods to detect population structure due to differences in breed composition while accounting for the confounding effect of close familial relationships. By applying principal component analysis (PCA) and the maximum likelihood method of ADMIXTURE to genomic data, it was possible to successfully characterize population structure resulting from heterogeneous breed ancestry, while accounting for close familial relationships. PCA results offered additional insight into the different hierarchies of genetic variation structuring. The first principal component was strongly correlated with Angus-Brahman proportions, and the second represented variation within animals that have a relatively more extended Brangus lineage, indicating the presence of a distinct pattern of genetic variation in these cattle. Although there was strong agreement between breed proportions estimated from pedigree and genetic information, there were significant discrepancies between these two methods for certain animals. This was most likely due to inaccuracies in the pedigree-based estimation of breed composition, which supported the case for using genomic information to complement and/or replace pedigree information when estimating breed composition. Comparison with a supervised analysis where purebreds are used as the training set suggests that accurate predictions can be achieved even in the absence of purebred population information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, L.T.; Hickey, M.
This paper summarizes the progress to date by CH2M HILL and the UKAEA in development of a parametric modelling capability for estimating the costs of large nuclear decommissioning projects in the United Kingdom (UK) and Europe. The ability to successfully apply parametric cost estimating techniques will be a key factor to commercial success in the UK and European multi-billion dollar waste management, decommissioning and environmental restoration markets. The most useful parametric models will be those that incorporate individual components representing major elements of work: reactor decommissioning, fuel cycle facility decommissioning, waste management facility decommissioning and environmental restoration. Models must be sufficiently robust to estimate indirect costs and overheads, permit pricing analysis and adjustment, and accommodate the intricacies of international monetary exchange, currency fluctuations and contingency. The development of a parametric cost estimating capability is also a key component in building a forward estimating strategy. The forward estimating strategy will enable the preparation of accurate and cost-effective out-year estimates, even when work scope is poorly defined or as yet indeterminate. Preparation of cost estimates for work outside the organization's current sites, for which detailed measurement is not possible and historical cost data does not exist, will also be facilitated.
Towards the generation of a parametric foot model using principal component analysis: A pilot study.
Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan
2016-06-01
There have been many recent developments in patient-specific models, given their potential to provide more information on human pathophysiology and the increase in computational power. However, they are not yet successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study, PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models, and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, the results demonstrate that the method adopted herein constitutes a first step towards the realization of a parametric foot model for biomechanical analysis. Furthermore, the study showed that the methodology can successfully describe features of the foot and evaluate differences in the shape of healthy and diabetic subjects. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
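A minimal PCA shape-model sketch (NumPy) is given below; it assumes the skin or bone landmark sets have already been brought into correspondence and rigidly aligned (e.g. by Procrustes analysis), which is the hard part in practice.

    import numpy as np

    def build_shape_model(shapes, n_modes=3):
        """PCA statistical shape model: shapes is (n_subjects, n_points*3)
        of corresponding, rigidly aligned landmark coordinates."""
        mean = shapes.mean(axis=0)
        u, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
        modes = vt[:n_modes]                         # principal shape modes
        sd = s[:n_modes] / np.sqrt(len(shapes) - 1)  # mode standard deviations
        return mean, modes, sd

    def synthesize(mean, modes, sd, b):
        """New parametric foot instance from mode weights b (in SD units)."""
        return mean + (np.asarray(b) * sd) @ modes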
Covariate analysis of bivariate survival data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators, which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values, where the expected values are determined from a specified parametric distribution. The model estimation is based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey were analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models are compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
Costello, Darcé M.; Swendsen, Joel; Rose, Jennifer S.; Dierker, Lisa C.
2009-01-01
This study used semi-parametric group-based modeling to explore unconditional and conditional trajectories of self-reported depressed mood from age 12 to 25. Drawing on data from the National Longitudinal Study of Adolescent Health (N=11,559), four distinct trajectories were identified: no depressed mood, stable low depressed mood, early high declining depressed mood, and late escalating depressed mood. Baseline risk factors associated with greater likelihood of membership in depressed mood trajectory groups compared to the no depressed mood group included being female, Black/African American, Hispanic/Latino American, or Asian American, lower SES, using alcohol, tobacco, or other drugs on a weekly basis, and delinquent behavior. Baseline protective factors associated with greater likelihood of membership in the no depressed mood group compared to the depressed mood trajectory groups included two-parent family structure, feeling connected to parents, peers, or school, and self-esteem. With the exception of delinquent behavior, risk and protective factors also distinguished the likelihood of membership among several of the three depressed mood groups. The results add to basic etiologic research regarding developmental pathways of depressed mood in adolescence and young adulthood. PMID:18377115
Semiparametric time-to-event modeling in the presence of a latent progression event.
Rice, John D; Tsodikov, Alex
2017-06-01
In cancer research, interest frequently centers on factors influencing a latent event that must precede a terminal event. In practice it is often impossible to observe the latent event precisely, making inference about this process difficult. To address this problem, we propose a joint model for the unobserved time to the latent and terminal events, with the two events linked by the baseline hazard. Covariates enter the model parametrically as linear combinations that multiply, respectively, the hazard for the latent event and the hazard for the terminal event conditional on the latent one. We derive the partial likelihood estimators for this problem assuming the latent event is observed, and propose a profile likelihood-based method for estimation when the latent event is unobserved. The baseline hazard in this case is estimated nonparametrically using the EM algorithm, which allows for closed-form Breslow-type estimators at each iteration, bringing improved computational efficiency and stability compared with maximizing the marginal likelihood directly. We present simulation studies to illustrate the finite-sample properties of the method; its use in practice is demonstrated in the analysis of a prostate cancer data set. © 2016, The International Biometric Society.
Shape, Illumination, and Reflectance from Shading
2013-05-29
…the global entropy of log-reflectance; 3) an "absolute" prior on reflectance which prefers to paint the scene with some colors (white, gray, green, …). The smoothness term penalizes differences in log-RGB from pixel i to pixel j, where c(·; α, σ) is the negative log-likelihood of a discrete univariate Gaussian scale mixture (GSM), parametrized by mixing coefficients α and scales σ:

g_s(R) = Σ_i Σ_{j ∈ N(i)} C(R_i − R_j; α_R, σ_R, Σ_R)   (6)

where R_i − R_j is a 3-vector of the log-RGB differences.
A mixture model-based approach to the clustering of microarray expression data.
McLachlan, G J; Bean, R W; Peel, D
2002-03-01
This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic, used in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to effectively reduce the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/
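The gene-selection step can be approximated as below using Gaussian rather than t mixture components (a simplification of what EMMIX-GENE actually fits); the -2 log likelihood-ratio of two versus one components is used to rank genes.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def rank_genes_by_lr(X, seed=0):
        """Rank genes by the -2 log-likelihood-ratio of two- vs
        one-component mixtures fitted across tissues.
        X: (n_tissues, n_genes) expression matrix."""
        stats = []
        for g in range(X.shape[1]):
            y = X[:, g].reshape(-1, 1)
            ll1 = GaussianMixture(1, random_state=seed).fit(y).score(y) * len(y)
            ll2 = GaussianMixture(2, n_init=5,
                                  random_state=seed).fit(y).score(y) * len(y)
            stats.append(2.0 * (ll2 - ll1))
        order = np.argsort(stats)[::-1]      # most "clusterable" genes first
        return order, np.array(stats)[order]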
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
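The integration step itself is compact once MCMC samples from each power posterior are available, as in the sketch below; the choice of temperature ladder and the trapezoidal rule are the usual (but not the only) options.

    import numpy as np

    def thermodynamic_integration(loglik_samples_at_t, temps):
        """Marginal likelihood via path sampling: log p(D) equals the
        integral over t in [0, 1] of E_t[log L], where E_t is taken
        under the power posterior p(theta | D, t) ~ L(theta)^t p(theta).
        loglik_samples_at_t: dict mapping each temperature t to an array
        of log-likelihood values from an MCMC run at that power."""
        temps = sorted(temps)
        means = np.array([loglik_samples_at_t[t].mean() for t in temps])
        return np.trapz(means, temps)    # trapezoidal rule over the path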
Synthesis and Control of Flexible Systems with Component-Level Uncertainties
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Lim, Kyong B.
2009-01-01
An efficient and computationally robust method for synthesis of component dynamics is developed. The method defines the interface forces/moments as feasible vectors in transformed coordinates to ensure that connectivity requirements of the combined structure are met. The synthesized system is then defined in a transformed set of feasible coordinates. The simplicity of form is exploited to effectively deal with modeling parametric and non-parametric uncertainties at the substructure level. Uncertainty models of reasonable size and complexity are synthesized for the combined structure from those in the substructure models. In particular, we address frequency and damping uncertainties at the component level. The approach first considers the robustness of synthesized flexible systems. It is then extended to deal with non-synthesized dynamic models with component-level uncertainties by projecting uncertainties to the system level. A numerical example is given to demonstrate the feasibility of the proposed approach.
Somers, George T; Spencer, Ryan J
2012-04-01
Do undergraduate rural clinical rotations increase the likelihood of medical students choosing a rural career once pre-existing likelihood is accounted for? A prospective, controlled quasi-experiment using self-paired scores on the SOMERS Index of rural career choice likelihood, before and after 3 years of clinical rotations in either mainly rural or mainly urban locations. Monash University medical school, Australia. Fifty-eight undergraduate-entry medical students (35% of the 2002 entry class). The SOMERS Index of rural career choice likelihood and its component indicators. There was an overall decline in SOMERS Index score (22%) and in each of its components (12-41%). Graduating students who attended rural rotations were more likely to choose a rural career on graduation (difference in SOMERS score: 24.1 (95% CI, 15.0-33.3), P<0.0001); however, at entry, students choosing rural rotations had an even greater SOMERS score (difference: 27.1 (95% CI, 18.2-36.1), P<0.0001). Self-paired pre-post reductions in likelihood were not affected by attending mainly rural or urban rotations, nor were there differences based on rural background alone or sex. While rural rotations are an important component of undergraduate medical training, it is the nature of the students choosing to study in rural locations, rather than experiences during the course, that is the greater influence on rural career choice. In order to ease the rural medical workforce crisis, medical schools should attract more students with a pre-existing likelihood of choosing a rural career. The SOMERS Index was found to be a useful tool for this quantitative analysis. © 2012 The Authors. Australian Journal of Rural Health © 2012 National Rural Health Alliance Inc.
NASA Astrophysics Data System (ADS)
Luu, Gia Thien; Boualem, Abdelbassit; Duy, Tran Trung; Ravier, Philippe; Butteli, Olivier
Muscle Fiber Conduction Velocity (MFCV) can be calculated from the time delay between the surface electromyographic (sEMG) signals recorded by electrodes aligned with the fiber direction. In order to account for the non-stationarity during dynamic contraction (the most common situation in daily life), the developed methods have to consider that the MFCV changes over time, which induces time-varying delays (TVD), and that the data are non-stationary (their Power Spectral Density (PSD) changes). In this paper, the problem of TVD estimation is considered using a parametric method. First, a polynomial model of the TVD is proposed. Then, the TVD model parameters are estimated by a maximum likelihood estimation (MLE) strategy solved by a deterministic optimization technique (Newton) and by a stochastic optimization technique called simulated annealing (SA). The performance of the two techniques is also compared. We also derive two appropriate Cramér-Rao Lower Bounds (CRLBs), one for the estimated TVD model parameters and one for the TVD waveforms. Monte Carlo simulation results show that the estimation of both the model parameters and the TVD function is unbiased and that the variance obtained is close to the derived CRLBs. A comparison with non-parametric approaches to TVD estimation is also presented and shows the superiority of the proposed method.
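A simplified version of the parametric TVD estimation is sketched below: the delay is a low-order polynomial, the Gaussian likelihood reduces to a residual-variance criterion, and scipy's dual_annealing stands in for the simulated annealing used in the paper; the bounds and the two-channel variables t, semg1, semg2 are illustrative.

    import numpy as np
    from scipy.optimize import dual_annealing

    def neg_loglik(coeffs, t, s1, s2):
        """Profile Gaussian negative log-likelihood (up to constants) of
        channel s2 being a time-varying-delayed copy of s1, with the
        delay modelled as d(t) = c0 + c1*t + c2*t^2."""
        d = np.polyval(coeffs[::-1], t)         # coeffs given as (c0, c1, c2)
        s1_shifted = np.interp(t - d, t, s1)    # linear interpolation of s1
        resid = s2 - s1_shifted
        return 0.5 * len(t) * np.log(np.mean(resid ** 2))

    # res = dual_annealing(neg_loglik, bounds=[(-5e-3, 5e-3)] * 3,
    #                      args=(t, semg1, semg2), seed=1)
    # tvd_hat = np.polyval(res.x[::-1], t)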
The effect of pumping noise on the characteristics of a single-stage parametric amplifier
NASA Astrophysics Data System (ADS)
Medvedev, S. Iu.; Muzychuk, O. V.
1983-10-01
An analysis is made of the operation of a single-stage parametric amplifier based on an abrupt-junction varactor. Analytical expressions are obtained for the statistical moments of the output signal, the signal-to-noise ratio, and other characteristics in the case when the input signal and the pump are a mixture of a harmonic oscillation and Gaussian noise. It is shown that, when a noise component is present in the pump, an increase of its harmonic component to values close to the threshold leads to a sharp decrease in the signal-to-noise ratio at the amplifier output.
Stochastic stability of parametrically excited random systems
NASA Astrophysics Data System (ADS)
Labou, M.
2004-01-01
Multidegree-of-freedom dynamic systems subjected to parametric excitation are analyzed for stochastic stability. The variation of excitation intensity with time is described by the sum of a harmonic function and a stationary random process. The stability boundaries are determined by the stochastic averaging method. The effect of random parametric excitation on the stability of trivial solutions of systems of differential equations for the moments of phase variables is studied. It is assumed that the frequency of harmonic component falls within the region of combination resonances. Stability conditions for the first and second moments are obtained. It turns out that additional parametric excitation may have a stabilizing or destabilizing effect, depending on the values of certain parameters of random excitation. As an example, the stability of a beam in plane bending is analyzed.
Spectral decompositions of multiple time series: a Bayesian non-parametric approach.
Macaro, Christian; Prado, Raquel
2014-01-01
We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated to the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented. We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.
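Whittle's approximation, which underlies the posterior computations, can be sketched as follows for a single series; log_spec_fn is a hypothetical stand-in for the Bernstein-Dirichlet representation of the log spectral density, and frequencies are in cycles per sample.

    import numpy as np

    def whittle_negloglik(log_spec_fn, params, x):
        """Whittle's approximation to the negative log-likelihood of a
        stationary series x, given a parametrized log spectral density
        log_spec_fn(freqs, params)."""
        n = len(x)
        freqs = np.fft.rfftfreq(n)[1:-1]                 # drop 0 and Nyquist
        I = np.abs(np.fft.rfft(x - x.mean()))[1:-1] ** 2 / n   # periodogram
        log_f = log_spec_fn(freqs, params)
        return np.sum(log_f + I / np.exp(log_f))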
Ultra-Broad-Band Optical Parametric Amplifier or Oscillator
NASA Technical Reports Server (NTRS)
Strekalov, Dmitry; Matsko, Andrey; Savchenkov, Anatolly; Maleki, Lute
2009-01-01
A concept for an ultra-broad-band optical parametric amplifier or oscillator has emerged as a by-product of a theoretical study in fundamental quantum optics. The study was originally intended to address the question of whether the two-photon temporal correlation function of light [in particular, light produced by spontaneous parametric down conversion (SPDC)] can be considerably narrower than the inverse of the spectral width (bandwidth) of the light. The answer to the question was found to be negative. More specifically, on the basis of the universal integral relations between the quantum two-photon temporal correlation and the classical spectrum of light, it was found that the lower limit of the two-photon correlation time is set approximately by the inverse of the bandwidth. The mathematical solution for the minimum two-photon correlation time also provides the minimum relative frequency dispersion of the down-converted light components; in turn, the minimum relative frequency dispersion translates to the maximum bandwidth, which is important for the design of an ultra-broad-band optical parametric oscillator or amplifier. In the study, results of an analysis of the general integral relations were applied in the case of an optically nonlinear, frequency-dispersive crystal in which SPDC produces collinear photons. Equations were found for the crystal orientation and pump wavelength, specific to each parametric-down-converting crystal, that eliminate the relative frequency dispersion of collinear degenerate (equal-frequency) signal and idler components up to the fourth order in the frequency-detuning parameter.
Age-dependent biochemical quantities: an approach for calculating reference intervals.
Bjerner, J
2007-01-01
A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, because of three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, and there must either be a transformation procedure to obtain such a distribution or a more complex distribution has to be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers and two-stage transformations (modulus-exponential-normal) in order to render Gaussian distributions. Fractional polynomials are employed to model functions for mean and standard deviations dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers was made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed when performing the analysis, the current method is rewarding because the results are of practical use in patient care.
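The outlier-elimination step can be sketched in a few lines; k = 1.5 is the conventional inner-fence constant and an assumption here, since the abstract does not fix it.

    import numpy as np

    def tukey_fence(values, k=1.5):
        """Drop observations outside [Q1 - k*IQR, Q3 + k*IQR] before
        parametric reference-interval estimation."""
        q1, q3 = np.percentile(values, [25, 75])
        iqr = q3 - q1
        keep = (values >= q1 - k * iqr) & (values <= q3 + k * iqr)
        return np.asarray(values)[keep]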
Rigby, Robert A; Stasinopoulos, D Mikis
2004-10-15
The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^(nu) having a shifted and scaled (truncated) standard power exponential distribution with parameter tau. The distribution has four parameters and is denoted BCPE(mu, sigma, nu, tau). The parameters, mu, sigma, nu and tau, may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood, with respect to mu, sigma, nu and tau, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation provides a generalization of the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.
Estimating hazard ratios in cohort data with missing disease information due to death.
Binder, Nadine; Herrnböck, Anne-Sophie; Schumacher, Martin
2017-03-01
In clinical and epidemiological studies information on the primary outcome of interest, that is, the disease status, is usually collected at a limited number of follow-up visits. The disease status can often only be retrieved retrospectively in individuals who are alive at follow-up, but will be missing for those who died before. Right-censoring the death cases at the last visit (ad-hoc analysis) yields biased hazard ratio estimates of a potential risk factor, and the bias can be substantial and occur in either direction. In this work, we investigate three different approaches that use the same likelihood contributions derived from an illness-death multistate model in order to more adequately estimate the hazard ratio by including the death cases into the analysis: a parametric approach, a penalized likelihood approach, and an imputation-based approach. We investigate to which extent these approaches allow for an unbiased regression analysis by evaluating their performance in simulation studies and on a real data example. In doing so, we use the full cohort with complete illness-death data as reference and artificially induce missing information due to death by setting discrete follow-up visits. Compared to an ad-hoc analysis, all considered approaches provide less biased or even unbiased results, depending on the situation studied. In the real data example, the parametric approach is seen to be too restrictive, whereas the imputation-based approach could almost reconstruct the original event history information. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Pan, X. G.; Wang, J. Q.; Zhou, H. Y.
2013-05-01
A variance component estimation (VCE) method based on a semi-parametric estimator with a data-depth weighting matrix is proposed, because coupled system model errors and gross errors exist in the multi-source heterogeneous measurement data of space and ground combined TT&C (Telemetry, Tracking and Command). The uncertain model error is estimated with the semi-parametric estimation model, and outliers are restrained with the data-depth weighting matrix. With the model error and outliers thus restricted, the VCE can be improved and used to estimate the weight matrix for observation data affected by uncertain model errors or outliers. A simulation experiment was carried out under the circumstances of space and ground combined TT&C. The results show that the new VCE based on model error compensation can determine rational weights for the multi-source heterogeneous data and restrain outlying data.
Sleep analysis for wearable devices applying autoregressive parametric models.
Mendez, M O; Villantieri, O; Bianchi, A; Cerutti, S
2005-01-01
We applied time-variant and time-invariant parametric models to recordings from both healthy subjects and patients with sleep disorders, in order to assess the suitability of these approaches for sleep disorder diagnosis in wearable devices. The recordings present the Obstructive Sleep Apnea (OSA) pathology, which is characterized by fluctuations in the heart rate: bradycardia in the apneic phase and tachycardia at the recovery of ventilation. Data come from a web database at www.physionet.org. During OSA, the spectral indexes obtained by time-variant lattice filters presented oscillations that correspond to the bradycardia-tachycardia changes of the RR intervals, and greater values than in healthy subjects. Multivariate autoregressive models showed an increment in the very low frequency component (PVLF) at each apneic event. A rise in the high frequency component (PHF) also occurred at the restoration of breathing, in the spectra of both the quadratic coherence and the cross-spectrum in OSA. These autoregressive parametric approaches could help in the diagnosis of sleep disorders within wearable devices.
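The AR spectral indices (e.g., PVLF and PHF) can be sketched as follows for an evenly resampled RR series; the least-squares AR fit, the model order and the band edges are conventional choices rather than the exact settings of the study.

    import numpy as np

    def ar_band_powers(rr, order=9, fs=4.0):
        """AR spectral indices of an RR series resampled at fs Hz: fit AR
        coefficients by least squares, evaluate the parametric PSD, and
        integrate the VLF (<0.04 Hz) and HF (0.15-0.4 Hz) bands."""
        x = rr - np.mean(rr)
        # least-squares AR fit: x[t] = sum_k a_k * x[t-k] + e[t]
        A = np.column_stack([x[order - k - 1:len(x) - k - 1]
                             for k in range(order)])
        a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
        e_var = np.var(x[order:] - A @ a)
        f = np.linspace(1e-4, fs / 2, 512)
        z = np.exp(-2j * np.pi * np.outer(f, np.arange(1, order + 1)) / fs)
        psd = e_var / np.abs(1 - z @ a) ** 2 / fs
        df = f[1] - f[0]
        p_vlf = psd[f < 0.04].sum() * df
        p_hf = psd[(f >= 0.15) & (f < 0.4)].sum() * df
        return p_vlf, p_hf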
BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.
2013-03-20
Blind-source separation techniques are used to extract the transmission spectrum of the hot-Jupiter HD189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than parametric methods can be obtained for 11 spectral bins with bin sizes of ~0.09 μm. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric de-trending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.
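A minimal blind-detrending sketch in the spirit of this analysis, using scikit-learn's FastICA, is given below; identifying which independent component carries the transit signal is assumed to have been done separately (signal_idx is a placeholder), and the component count is arbitrary.

    import numpy as np
    from sklearn.decomposition import FastICA

    def ica_detrend(light_curves, n_components=5, signal_idx=0, seed=0):
        """Blind source separation of spectroscopic time series: unmix
        the non-Gaussian systematics, zero out the astrophysical
        component, and subtract the reconstructed systematics.
        light_curves: (n_channels, n_time) array."""
        ica = FastICA(n_components=n_components, random_state=seed,
                      max_iter=1000)
        sources = ica.fit_transform(light_curves.T)  # (n_time, n_components)
        systematics = sources.copy()
        systematics[:, signal_idx] = 0.0             # keep only noise sources
        noise_model = ica.inverse_transform(systematics).T
        return light_curves - noise_model            # detrended light curves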
Parametric estimation for reinforced concrete relief shelter for Aceh cases
NASA Astrophysics Data System (ADS)
Atthaillah; Saputra, Eri; Iqbal, Muhammad
2018-05-01
This paper presents work in progress (WIP) toward a rapid parametric framework for estimating the materials of post-disaster permanent shelters. The shelters considered are of reinforced concrete construction with brick walls. In post-disaster cases, design variations are inevitably needed to suit the conditions of the victims, and it is practically impossible to satisfy beneficiaries using conventional methods. This study offers a parametric framework to overcome the slow construction-material estimation that design variations otherwise entail. The work integrates the parametric tool Grasshopper to establish algorithms that simultaneously model, visualize, calculate, and write the calculated data to a spreadsheet in real time. Some customized Grasshopper components were created using GHPython scripting for a more optimized algorithm. The result of this study is a partial framework that successfully performs modeling, visualization, calculation, and writing of the calculated data simultaneously, so that design alterations do not escalate the time needed for modeling, visualization, and material estimation. In future development the parametric framework will be made open source.
Nonrelativistic approaches derived from point-coupling relativistic models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lourenco, O.; Dutra, M.; Delfino, A.
2010-03-15
We construct nonrelativistic versions of relativistic nonlinear hadronic point-coupling models, based on new normalized spinor wave functions after small-component reduction. These expansions give us energy density functionals that can be compared to their relativistic counterparts. We show that the agreement between the nonrelativistic limit approach and the Skyrme parametrizations becomes strongly dependent on the incompressibility of each model. We also show that the particular case A = B = 0 (Walecka model) leads to the same energy density functional as the Skyrme parametrizations SV and ZR2, while the truncation scheme, up to order ρ³, leads to parametrizations for which σ = 1.
Characteristics of stereo reproduction with parametric loudspeakers
NASA Astrophysics Data System (ADS)
Aoki, Shigeaki; Toba, Masayoshi; Tsujita, Norihisa
2012-05-01
A parametric loudspeaker utilizes the nonlinearity of the medium and is known as a super-directivity loudspeaker; it is one of the prominent applications of nonlinear ultrasonics. So far, applications have been limited to monaural reproduction sound systems for public address in museums, stations, streets, etc. In this paper, we discuss the characteristics of stereo reproduction with two parametric loudspeakers by comparing them with those of two ordinary dynamic loudspeakers. In subjective tests, three typical listening positions were selected to investigate the possibility of correct sound localization in a wide listening area. The binaural information was ILD (Interaural Level Difference) or ITD (Interaural Time Delay). The parametric loudspeaker was an equilateral hexagon with inner and outer diameters of 99 and 112 mm, respectively. Signals were 500 Hz, 1 kHz, 2 kHz and 4 kHz pure tones and pink noise. Three young males listened to the test signals 10 times in each listening condition. The subjective test results showed that listeners at the three typical listening positions perceived correct sound localization of all signals using the parametric loudspeakers, almost as with the ordinary dynamic loudspeakers, except for the case of sinusoidal waves with ITD. The parametric loudspeakers could avoid the contradiction between the binaural cues ILD and ITD that occurs in stereo reproduction with ordinary dynamic loudspeakers, because the super-directivity of the parametric loudspeaker suppresses the crosstalk components.
Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G
2016-04-01
This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained by applying PCA to a previously composed learning matrix of continuous spectra. The parametric method, in contrast, uses an ANN to filter out the baseline; previous studies have demonstrated that this is one of the most effective approaches for baseline removal. Both methods were evaluated using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. In addition, to demonstrate the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the one based on ANN in terms of both performance and simplicity. © The Author(s) 2016.
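A minimal sketch of the PCA-based idea, assuming a learning matrix containing baseline-only spectra and a least-squares projection onto the leading components; all names and data below are illustrative.

```python
import numpy as np

def fit_baseline_basis(baseline_matrix, n_components=3):
    """Learn a low-rank basis for continuous baselines via PCA (SVD)."""
    mean = baseline_matrix.mean(axis=0)
    _, _, vt = np.linalg.svd(baseline_matrix - mean, full_matrices=False)
    return mean, vt[:n_components]             # mean spectrum + basis vectors

def remove_baseline(spectrum, mean, basis):
    """Project a measured spectrum onto the baseline basis and subtract."""
    coeffs, *_ = np.linalg.lstsq(basis.T, spectrum - mean, rcond=None)
    baseline = mean + basis.T @ coeffs
    return spectrum - baseline, baseline

# Toy data: smooth polynomial baselines plus a narrow emission peak
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 500)
train = np.array([a + b * x + c * x ** 2
                  for a, b, c in rng.uniform(0, 2, (100, 3))])
mean, basis = fit_baseline_basis(train)
peak = 0.8 * np.exp(-0.5 * ((x - 0.6) / 0.01) ** 2)
measured = 1.0 + 0.5 * x + peak
corrected, est_baseline = remove_baseline(measured, mean, basis)
print("RMS difference from true peak:",
      np.sqrt(np.mean((corrected - peak) ** 2)))
```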
Meyer, Karin; Kirkpatrick, Mark
2005-01-01
Principal component analysis is a widely used 'dimension reduction' technique, albeit generally applied at the phenotypic level. It is shown that genetic principal components can be estimated directly through a simple reparameterisation of the usual linear mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce the computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566
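To make the quoted parameter reduction concrete, a tiny check of the two counts for k = 8 effects (as in the beef cattle application) and an illustrative m = 3 components:

```python
k, m = 8, 3
full = k * (k + 1) // 2             # parameters of a full covariance matrix
reduced = m * (2 * k - m + 1) // 2  # leading eigenvalues + eigenvectors
print(full, reduced)                # 36 vs 21 parameters
```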
Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay
2013-12-01
Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.
Conversion of Component-Based Point Definition to VSP Model and Higher Order Meshing
NASA Technical Reports Server (NTRS)
Ordaz, Irian
2011-01-01
Vehicle Sketch Pad (VSP) has become a powerful conceptual and parametric geometry tool with numerous export capabilities for third-party analysis codes as well as robust surface meshing capabilities for computational fluid dynamics (CFD) analysis. However, a capability gap currently exists for reconstructing a fully parametric VSP model of a geometry generated by third-party software. A computer code called GEO2VSP has been developed to close this gap and to allow the integration of VSP into a closed-loop geometry design process with other third-party design tools. Furthermore, the automated CFD surface meshing capabilities of VSP are demonstrated for component-based point definition geometries in a conceptual analysis and design framework.
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though effective algorithms based on dynamic programming exist. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data, and derive the length necessary for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
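A minimal sketch of the dynamic-programming likelihood computation referred to above, the forward algorithm for a discrete-output hidden Markov model; all parameter values are illustrative.

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Forward algorithm with per-step scaling; O(T * n_states^2)."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()            # rescale to avoid underflow
    return log_lik

pi = np.array([0.6, 0.4])               # initial state distribution
A = np.array([[0.7, 0.3], [0.2, 0.8]])  # transition matrix
B = np.array([[0.9, 0.1], [0.3, 0.7]])  # emission probabilities
obs = [0, 0, 1, 0, 1, 1]
print("log P(obs):", hmm_log_likelihood(obs, pi, A, B))
```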
Efficient logistic regression designs under an imperfect population identifier.
Albert, Paul S; Liu, Aiyi; Nansel, Tonja
2014-03-01
Motivated by actual study designs, this article considers efficient logistic regression designs where the population is identified with a binary test that is subject to diagnostic error. We consider the case where the imperfect test is obtained on all participants, while the gold standard test is measured on a small chosen subsample. Under maximum-likelihood estimation, we evaluate the optimal design in terms of sample selection as well as verification. We show that there may be substantial efficiency gains by choosing a small percentage of individuals who test negative on the imperfect test for inclusion in the sample (e.g., verifying 90% test-positive cases). We also show that a two-stage design may be a good practical alternative to a fixed design in some situations. Under optimal and nearly optimal designs, we compare maximum-likelihood and semi-parametric efficient estimators under correct and misspecified models with simulations. The methodology is illustrated with an analysis from a diabetes behavioral intervention trial. © 2013, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Chernick, Julian A.; Perlovsky, Leonid I.; Tye, David M.
1994-06-01
This paper describes applications of maximum likelihood adaptive neural system (MLANS) to the characterization of clutter in IR images and to the identification of targets. The characterization of image clutter is needed to improve target detection and to enhance the ability to compare performance of different algorithms using diverse imagery data. Enhanced unambiguous IFF is important for fratricide reduction while automatic cueing and targeting is becoming an ever increasing part of operations. We utilized MLANS which is a parametric neural network that combines optimal statistical techniques with a model-based approach. This paper shows that MLANS outperforms classical classifiers, the quadratic classifier and the nearest neighbor classifier, because on the one hand it is not limited to the usual Gaussian distribution assumption and can adapt in real time to the image clutter distribution; on the other hand MLANS learns from fewer samples and is more robust than the nearest neighbor classifiers. Future research will address uncooperative IFF using fused IR and MMW data.
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
The minimum distance approach to classification
NASA Technical Reports Server (NTRS)
Wacker, A. G.; Landgrebe, D. A.
1971-01-01
Work to advance the state of the art of minimum distance classification is reported. This is accomplished through a combination of theoretical and comprehensive experimental investigations based on multispectral scanner data. A survey of the literature for suitable distance measures was conducted and the results of this survey are presented. It is shown that minimum distance classification, using density estimators and Kullback-Leibler numbers as the distance measure, is equivalent to a form of maximum likelihood sample classification. It is also shown that for the parametric case, minimum distance classification is equivalent to nearest neighbor classification in the parameter space.
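A minimal sketch of the parametric flavor of this idea: model each class by a Gaussian, estimate the same statistics from the sample to be classified, and assign the sample to the class minimizing a Kullback-Leibler distance. The class statistics and data are illustrative.

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """KL divergence between two multivariate normals, KL(N0 || N1)."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def classify(sample, class_params):
    """Assign the sample to the class whose model is closest in KL."""
    mu_s, cov_s = sample.mean(axis=0), np.cov(sample, rowvar=False)
    dists = {name: kl_gaussian(mu_s, cov_s, mu, cov)
             for name, (mu, cov) in class_params.items()}
    return min(dists, key=dists.get), dists

rng = np.random.default_rng(1)
classes = {"wheat":  (np.array([2.0, 1.0]), np.eye(2)),
           "forest": (np.array([0.0, 3.0]), 0.5 * np.eye(2))}
sample = rng.multivariate_normal(classes["wheat"][0], np.eye(2), size=200)
label, dists = classify(sample, classes)
print(label, {k: round(v, 3) for k, v in dists.items()})
```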
NASA Astrophysics Data System (ADS)
Bakoban, Rana A.
2017-08-01
The coefficient of variation (CV) has several applications in applied statistics. In this paper, we adopt Bayesian and non-Bayesian approaches to the estimation of the CV under type-II censored data from the extension exponential distribution (EED). Point and interval estimates of the CV are obtained by both maximum likelihood and parametric bootstrap techniques. The Bayesian approach, implemented with the help of an MCMC method, is also presented. A real data set is presented and analyzed, and the results obtained are used to assess the theoretical findings.
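A minimal sketch of the parametric bootstrap idea for a CV confidence interval, using an uncensored gamma model for simplicity; the paper's extension exponential distribution and type-II censoring are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = stats.gamma.rvs(a=3.0, scale=2.0, size=80, random_state=rng)

def cv_mle(x):
    """CV under a fitted gamma model: CV = 1 / sqrt(shape)."""
    shape, loc, scale = stats.gamma.fit(x, floc=0)
    return 1.0 / np.sqrt(shape), (shape, scale)

cv_hat, (shape_hat, scale_hat) = cv_mle(data)

# Parametric bootstrap: simulate from the fitted model, re-estimate the CV
boot = []
for _ in range(500):
    sim = stats.gamma.rvs(a=shape_hat, scale=scale_hat, size=len(data),
                          random_state=rng)
    boot.append(cv_mle(sim)[0])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"CV estimate {cv_hat:.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```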
NASA Astrophysics Data System (ADS)
Anees, Asim; Aryal, Jagannath; O'Reilly, Małgorzata M.; Gale, Timothy J.; Wardlaw, Tim
2016-12-01
A robust non-parametric framework, based on multiple Radial Basis Function (RBF) kernels, is proposed in this study for detecting land/forest cover changes using Landsat 7 ETM+ images. One widely used framework is to find change vectors (a difference image) and use a supervised classifier to differentiate between change and no-change. Bayesian classifiers, e.g. the Maximum Likelihood Classifier (MLC) and Naive Bayes (NB), are widely used probabilistic classifiers that assume parametric models, e.g. Gaussian functions, for the class-conditional distributions. However, their performance can be limited if the data set deviates from the assumed model. The proposed framework exploits the useful properties of the Least Squares Probabilistic Classifier (LSPC) formulation, i.e. its non-parametric and probabilistic nature, to model class posterior probabilities of the difference image using a linear combination of a large number of Gaussian kernels. To this end, a simple technique based on 10-fold cross-validation is also proposed for tuning model parameters automatically instead of selecting a (possibly) suboptimal combination from pre-specified lists of values. The proposed framework has been tested and compared with the Support Vector Machine (SVM) and NB for detection of defoliation caused by leaf beetles (Paropsisterna spp.) in Eucalyptus nitens and Eucalyptus globulus plantations of two test areas in Tasmania, Australia, using raw bands and band combination indices of Landsat 7 ETM+. It was observed that, due to its multi-kernel non-parametric formulation and probabilistic nature, the LSPC outperforms the parametric NB with Gaussian assumption in the change detection framework, with Overall Accuracy (OA) ranging from 93.6% (κ = 0.87) to 97.4% (κ = 0.94) against 85.3% (κ = 0.69) to 93.4% (κ = 0.85), and is more robust to changing data distributions. Its performance was comparable to SVM, with the added advantages of being probabilistic and of handling multi-class problems naturally in its original formulation.
Parametric Mass Reliability Study
NASA Technical Reports Server (NTRS)
Holt, James P.
2014-01-01
The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, typically are the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs and the mass of ORU subcomponents to reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leebens-Mack, Jim; Raubeson, Linda A.; Cui, Liying
2005-05-27
While there has been strong support for Amborella and Nymphaeales (water lilies) as branching from basal-most nodes in the angiosperm phylogeny, this hypothesis has recently been challenged by phylogenetic analyses of 61 protein-coding genes extracted from the chloroplast genome sequences of Amborella, Nymphaea and 12 other available land plant chloroplast genomes. These character-rich analyses placed the monocots, represented by three grasses (Poaceae), as sister to all other extant angiosperm lineages. We have extracted protein-coding regions from draft sequences for six additional chloroplast genomes to test whether this surprising result could be an artifact of long-branch attraction due to limited taxon sampling. The added taxa include three monocots (Acorus, Yucca and Typha), a water lily (Nuphar), a ranunculid (Ranunculus), and a gymnosperm (Ginkgo). Phylogenetic analyses of the expanded DNA and protein datasets together with microstructural characters (indels) provided unambiguous support for Amborella and the Nymphaeales as branching from the basal-most nodes in the angiosperm phylogeny. However, their relative positions proved to be dependent on the method of analysis, with parsimony favoring Amborella as sister to all other angiosperms, and maximum likelihood and neighbor-joining methods favoring an Amborella + Nymphaeales clade as sister. The maximum likelihood phylogeny supported the latter hypothesis, but the likelihood for the former hypothesis was not significantly different. Parametric bootstrap analysis, single gene phylogenies, estimated divergence dates and conflicting indel characters all help to illuminate the nature of the conflict in the resolution of the most basal nodes in the angiosperm phylogeny. Molecular dating analyses provided median age estimates of 161 Mya for the most recent common ancestor of all extant angiosperms and 145 Mya for the most recent common ancestor of monocots, magnoliids and eudicots. Whereas long sequences reduce variance in branch lengths and molecular dating estimates, the impact of improved taxon sampling on the rooting of the angiosperm phylogeny together with the results of parametric bootstrap analyses demonstrate how long-branch attraction can mislead genome-scale phylogenetic analyses.
Models and analysis for multivariate failure time data
NASA Astrophysics Data System (ADS)
Shih, Joanna Huang
The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of the dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes; bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood; it is similar to the second procedure, but the margins are estimated by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
Variational Bayesian Parameter Estimation Techniques for the General Linear Model
Starke, Ludger; Ostwald, Dirk
2017-01-01
Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier
2011-10-01
Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general-purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. Availability: http://www.atgc-montpellier.fr/ReplacementMatrix/. Contact: olivier.gascuel@lirmm.fr. Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/
Likelihood of achieving air quality targets under model uncertainties.
Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W
2011-01-01
Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses.
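A minimal sketch of the Monte Carlo idea: propagate uncertain model inputs through a reduced-form response and estimate the probability that a control strategy attains the target. The response function and parameter distributions below are illustrative placeholders, not the Atlanta modeling.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Uncertain model inputs, e.g. multiplicative factors on emission and
# reaction rates (lognormal spreads are illustrative)
phi_emis = rng.lognormal(mean=0.0, sigma=0.3, size=n)
phi_chem = rng.lognormal(mean=0.0, sigma=0.2, size=n)

def reduced_form(delta_nox, phi_e, phi_c):
    """Illustrative reduced-form ozone response to a NOx control (ppb)."""
    sensitivity = 0.04 * phi_e * phi_c       # ppb ozone per ton/day NOx
    return sensitivity * delta_nox

baseline_o3 = 88.0                           # ppb, future-year design value
target = 85.0                                # ppb standard
improvement = reduced_form(120.0, phi_emis, phi_chem)
p_attain = np.mean(baseline_o3 - improvement <= target)
print(f"estimated likelihood of attainment: {p_attain:.2f}")
```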
Nonparametric predictive inference for combining diagnostic tests with parametric copula
NASA Astrophysics Data System (ADS)
Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.
2017-09-01
Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests, and the area under the ROC curve (AUC) is often used as a measure of the overall performance of a diagnostic test. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results, with the dependence structure modelled by a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations; it uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula is a well-known statistical concept for modelling dependence of random variables: it is a joint distribution function whose marginals are all uniformly distributed, and it can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density by a parametric method, namely maximum likelihood estimation (MLE). We investigate the performance of the proposed method via data sets from the literature and discuss the results to show how our method performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
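A minimal sketch of the parametric copula estimation step, fitting a Clayton copula by maximum likelihood to rank-based pseudo-observations. The Clayton family and the simulated data are illustrative; the paper combines such an estimate with NPI.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def clayton_neg_loglik(theta, u, v):
    """Negative log-likelihood of the Clayton copula density (theta > 0)."""
    logc = (np.log1p(theta) - (1 + theta) * (np.log(u) + np.log(v))
            - (2 + 1 / theta) * np.log(u ** -theta + v ** -theta - 1))
    return -np.sum(logc)

# Simulate two dependent test results via a Clayton copula (theta = 2)
rng = np.random.default_rng(4)
theta_true = 2.0
u = rng.random(1000)
w = rng.random(1000)
v = (u ** -theta_true * (w ** (-theta_true / (1 + theta_true)) - 1) + 1) \
    ** (-1 / theta_true)

# Pseudo-observations (empirical marginals), then MLE for theta
u_ps = rankdata(u) / (len(u) + 1)
v_ps = rankdata(v) / (len(v) + 1)
res = minimize_scalar(clayton_neg_loglik, bounds=(0.05, 20),
                      args=(u_ps, v_ps), method="bounded")
print("estimated theta:", round(res.x, 3), "(true 2.0)")
```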
Nishiura, Hiroshi
2009-01-01
Determination of the most appropriate quarantine period for those exposed to smallpox is crucial to the construction of an effective preparedness program against a potential bioterrorist attack. This study reanalyzed data on the incubation period distribution of smallpox to allow the optimal quarantine period to be objectively calculated. In total, 131 cases of smallpox were examined; incubation periods were extracted from four different sets of historical data and only cases arising from exposure for a single day were considered. The mean (median and standard deviation (SD)) incubation period was 12.5 (12.0, 2.2) days. Assuming lognormal and gamma distributions for the incubation period, maximum likelihood estimates (and corresponding 95% confidence interval (CI)) of the 95th percentile were 16.4 (95% CI: 15.6, 17.9) and 16.2 (95% CI: 15.5, 17.4) days, respectively. Using a non-parametric method, the 95th percentile point was estimated as 16 (95% CI: 15, 17) days. The upper 95% CIs of the incubation periods at the 90th, 95th and 99th percentiles were shorter than 17, 18 and 23 days, respectively, using both parametric and non-parametric methods. These results suggest that quarantine measures can ensure non-infection among those exposed to smallpox with probabilities higher than 95-99%, if the exposed individuals are quarantined for 18-23 days after the date of contact tracing.
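A minimal sketch of the parametric part of such an analysis: fitting a lognormal to incubation periods by maximum likelihood and reading off the 95th percentile. The simulated data merely mimic the reported mean and SD; they are not the historical records.

```python
import numpy as np
from scipy import stats

# Illustrative incubation periods in days (mean ~12.5, SD ~2.2)
rng = np.random.default_rng(5)
days = rng.lognormal(mean=np.log(12.3), sigma=0.18, size=131)

# Lognormal MLE: fit normal parameters on the log scale
log_d = np.log(days)
mu_hat, sigma_hat = log_d.mean(), log_d.std(ddof=0)   # MLEs

p95 = np.exp(mu_hat + stats.norm.ppf(0.95) * sigma_hat)
print(f"parametric 95th percentile: {p95:.1f} days")

# Nonparametric comparison, as in the paper's third approach
print(f"empirical 95th percentile: {np.percentile(days, 95):.1f} days")
```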
The Relation between Factor Score Estimates, Image Scores, and Principal Component Scores
ERIC Educational Resources Information Center
Velicer, Wayne F.
1976-01-01
Investigates the relation between factor score estimates, principal component scores, and image scores. The three methods compared are maximum likelihood factor analysis, principal component analysis, and a variant of rescaled image analysis. (RC)
THz-wave parametric sources and imaging applications
NASA Astrophysics Data System (ADS)
Kawase, Kodo
2004-12-01
We have studied the generation of terahertz (THz) waves by optical parametric processes based on laser light scattering from the polariton mode of nonlinear crystals. Using parametric oscillation of a MgO-doped LiNbO3 crystal pumped by a nanosecond Q-switched Nd:YAG laser, we have realized a widely tunable coherent THz-wave source with a simple configuration. We have also developed a novel basic technology for THz imaging which allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further, we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
Separation of β-amyloid binding and white matter uptake of 18F-flutemetamol using spectral analysis
Heurling, Kerstin; Buckley, Christopher; Vandenberghe, Rik; Laere, Koen Van; Lubberink, Mark
2015-01-01
The kinetic components of the binding of the β-amyloid ligand 18F-flutemetamol in grey and white matter were investigated through spectral analysis, and a method was developed for creating parametric images that separate grey and white matter uptake. Tracer uptake in grey and white matter and cerebellar cortex was analyzed through spectral analysis in six subjects, with (n=4) or without (n=2) apparent β-amyloid deposition, who had undergone dynamic 18F-flutemetamol scanning with arterial blood sampling. The spectra were divided into three components: slow, intermediate and fast basis function rates. The contribution of each of the components to the total volume of distribution (VT) was assessed for different tissue types. The slow component dominated in white matter (average 90%), had a higher contribution to grey matter VT in subjects with β-amyloid deposition (average 44%) than without (average 6%), and was absent in cerebellar cortex, attributing the slow component of 18F-flutemetamol uptake in grey matter to β-amyloid binding. Parametric images of voxel-based spectral analysis were created for VT, for the slow component, and for images segmented based on the slow component contribution, confirming that grey matter and white matter uptake can be discriminated at the voxel level using a threshold for the contribution of the slow component to VT. PMID:26550542
Lee, L.; Helsel, D.
2007-01-01
Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of the data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis", where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limit computations. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
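A minimal sketch, in Python rather than the S language used by the authors, of the flipping trick that lets right-censored K-M machinery handle left-censored concentrations; the data and the flip constant are illustrative.

```python
import numpy as np

def kaplan_meier(times, event):
    """Right-censored Kaplan-Meier survival estimate at each event time."""
    order = np.argsort(times)
    t, e = times[order], event[order]
    n = len(t)
    surv, s = [], 1.0
    for i in range(n):
        if e[i]:
            s *= 1 - 1 / (n - i)     # risk set shrinks by one each step
            surv.append((t[i], s))
    return surv

# Concentrations; censored entries hold their detection limits
conc = np.array([0.5, 1.2, 0.8, 2.0, 0.3, 1.5, 0.9, 2.4])
cens = np.array([True, False, True, False, True, False, False, False])

M = conc.max() + 1.0                  # flip constant above all observations
flipped = M - conc                    # left-censored -> right-censored
km = kaplan_meier(flipped, ~cens)     # detected values are the "events"
# Back-transform: F(x) = S_flipped(M - x) gives the concentration ECDF
for t_flip, s in km:
    print(f"F({M - t_flip:.2f}) = {s:.3f}")
```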
Cheung, Li C; Pan, Qing; Hyun, Noorie; Schiffman, Mark; Fetterman, Barbara; Castle, Philip E; Lorey, Thomas; Katki, Hormuzd A
2017-09-30
For cost-effectiveness and efficiency, many large-scale general-purpose cohort studies are being assembled within large health-care providers who use electronic health records. Two key features of such data are that incident disease is interval-censored between irregular visits and that there can be pre-existing (prevalent) disease. Because prevalent disease is not always immediately diagnosed, some disease diagnosed at later visits is actually undiagnosed prevalent disease. We consider prevalent disease as a point mass at time zero for clinical applications where there is no interest in the time of prevalent disease onset. We demonstrate that the naive Kaplan-Meier cumulative risk estimator underestimates risks at early time points and overestimates later risks. We propose a general family of mixture models for undiagnosed prevalent disease and interval-censored incident disease that we call prevalence-incidence models. Parameters for parametric prevalence-incidence models, such as the logistic regression and Weibull survival (logistic-Weibull) model, are estimated by direct likelihood maximization or by an EM algorithm. Non-parametric methods are proposed to calculate cumulative risks for cases without covariates. We compare naive Kaplan-Meier, logistic-Weibull, and non-parametric estimates of cumulative risk in the cervical cancer screening program at Kaiser Permanente Northern California. Kaplan-Meier provided poor estimates, while the logistic-Weibull model was a close fit to the non-parametric one. Our findings support our use of logistic-Weibull models to develop the risk estimates that underlie current US risk-based cervical cancer screening guidelines. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
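A minimal sketch of a likelihood of this general prevalence-incidence form, without covariates; the data layout and the exact likelihood contributions are illustrative assumptions, not the authors' code. A subject positive at the first visit may be prevalent (point mass p at time zero), a subject turning positive between a negative visit at L and a positive one at R contributes an interval-censored Weibull term, and a subject disease-free at the last visit contributes a survival term.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_loglik(params, L, R, status):
    """status 1: disease first seen at visit time R (last negative at L);
    status 0: still disease-free at last visit R."""
    logit_p, log_shape, log_scale = params
    p = 1.0 / (1.0 + np.exp(-logit_p))        # prevalence point mass
    F = lambda t: weibull_min.cdf(t, np.exp(log_shape),
                                  scale=np.exp(log_scale))
    ll = 0.0
    for l, r, d in zip(L, R, status):
        if d == 0:                 # disease-free through r
            term = (1 - p) * (1 - F(r))
        elif l == 0.0:             # positive at first visit: maybe prevalent
            term = p + (1 - p) * F(r)
        else:                      # incident somewhere in (l, r]
            term = (1 - p) * (F(r) - F(l))
        ll += np.log(max(term, 1e-300))
    return -ll

# Illustrative visit data: (last negative visit, first positive/last visit)
L = np.array([0.0, 0.0, 1.0, 2.0, 0.0, 3.0, 1.0, 2.0])
R = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 5.0, 5.0, 4.0])
status = np.array([1, 1, 1, 1, 0, 0, 0, 1])
res = minimize(neg_loglik, x0=[-1.0, 0.0, 1.0], args=(L, R, status),
               method="Nelder-Mead")
print("estimated prevalence p:", 1.0 / (1.0 + np.exp(-res.x[0])))
```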
Investigation on the coloured noise in GPS-derived position with time-varying seasonal signals
NASA Astrophysics Data System (ADS)
Gruszczynska, Marta; Klos, Anna; Bos, Machiel Simon; Bogusz, Janusz
2016-04-01
The seasonal signals in GPS-derived time series arise from real geophysical signals related to tidal (residual) or non-tidal (loadings from atmosphere, ocean and continental hydrosphere, thermoelastic strain, etc.) effects, and from numerical artefacts including aliasing from mismodelling at short periods or the repeatability of the GPS satellite constellation with respect to the Sun (draconitics). Singular Spectrum Analysis (SSA) is a method for the investigation of nonlinear dynamics, suitable for either stationary or non-stationary data series without prior knowledge of their character. The aim of SSA is to mathematically decompose the original time series into a sum of a slowly varying trend, seasonal oscillations and noise. In this presentation we explore the ability of SSA to subtract the time-varying seasonal signals in GPS-derived North-East-Up topocentric components and show the properties of the coloured noise in the residuals. For this purpose we used data from globally distributed IGS (International GNSS Service) permanent stations processed by the JPL (Jet Propulsion Laboratory) in PPP (Precise Point Positioning) mode. After introducing a threshold of 13 years, 264 stations remained, with a maximum length reaching 23 years. The data were initially pre-processed for outliers, offsets and gaps. SSA was applied to the pre-processed series to estimate the time-varying seasonal signals. We adopted a 3-year window as the optimal dimension of its size, determined with Akaike's Information Criterion (AIC) values. A Fisher-Snedecor test corrected for the presence of temporal correlation was used to determine the statistical significance of the reconstructed components. This procedure showed that the first four components, describing the annual and semi-annual signals, are significant at a 99.7% confidence level, which corresponds to the 3-sigma criterion. We compared the non-parametric SSA approach with the commonly chosen parametric Least-Squares Estimation (LSE), which assumes constant amplitudes and phases over time. We noticed a maximum difference in seasonal oscillation of 3.5 mm and a maximum change in velocity of 0.15 mm/year for the Up component (YELL, Yellowknife, Canada) when SSA and LSE are compared. The annual signal has the greatest influence on data variability in the time series, while the semi-annual signal in the Up component has a much smaller contribution to the total variance of the data. For some stations more than 35% of the total variance is explained by the annual signal. According to the Power Spectral Densities (PSD), we proved that SSA has the ability to properly subtract seasonal signals that change in time, with almost no influence on the power-law character of the stochastic part. Then, the modified Maximum Likelihood Estimation (MLE) in the Hector software was applied to the SSA-filtered time series. We noticed a significant improvement in spectral indices and power-law amplitudes in comparison to those classically determined with LSE, which will be the main subject of this presentation.
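A minimal sketch of the core SSA machinery (embedding, SVD, diagonal-averaging reconstruction) on a toy daily series; the window length and the grouping of components into "seasonal" are illustrative choices, not the AIC-selected 3-year window or the significance-tested grouping described above.

```python
import numpy as np

def ssa_reconstruct(x, window, components):
    """Reconstruct selected SSA components of x by diagonal averaging."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: lagged copies of the series as columns
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xc = sum(s[j] * np.outer(U[:, j], Vt[j]) for j in components)
    rec = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):          # average over each anti-diagonal
        for j in range(k):
            rec[i + j] += Xc[i, j]
            counts[i + j] += 1
    return rec / counts

# Toy series: trend + annual cycle with slowly varying amplitude + noise
rng = np.random.default_rng(8)
t = np.arange(365 * 6)
x = (0.002 * t
     + (1 + 0.3 * np.sin(2 * np.pi * t / (365 * 6))) * np.sin(2 * np.pi * t / 365)
     + 0.3 * rng.standard_normal(len(t)))
seasonal = ssa_reconstruct(x, window=365, components=[0, 1, 2])
print("RMS of reconstructed trend+seasonal:", np.std(seasonal).round(3))
```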
Parametrization of Drag and Turbulence for Urban Neighbourhoods with Trees
NASA Astrophysics Data System (ADS)
Krayenhoff, E. S.; Santiago, J.-L.; Martilli, A.; Christen, A.; Oke, T. R.
2015-08-01
Urban canopy parametrizations designed to be coupled with mesoscale models must predict the integrated effect of urban obstacles on the flow at each height in the canopy. To assess these neighbourhood-scale effects, results of microscale simulations may be horizontally averaged. Obstacle-resolving computational fluid dynamics (CFD) simulations of neutrally-stratified flow through canopies of blocks (buildings) with varying distributions and densities of porous media (tree foliage) are conducted, and the spatially-averaged impacts of these building-tree combinations on the flow are assessed. The accuracy with which a one-dimensional (column) model with a one-equation turbulence scheme represents spatially-averaged CFD results is evaluated. Individual physical mechanisms by which trees and buildings affect flow in the column model are evaluated in terms of relative importance. For the treed urban configurations considered, the effects of buildings and trees may be considered independently. Building drag coefficients and length-scale effects need not be altered due to the presence of tree foliage; therefore, parametrization of spatially-averaged flow through urban neighbourhoods with trees is greatly simplified. The new parametrization includes only source and sink terms significant for the prediction of spatially-averaged flow profiles: momentum drag due to buildings and trees (and the associated wake production of turbulent kinetic energy), modification of length scales by buildings, and enhanced dissipation of turbulent kinetic energy due to the small scale of tree foliage elements. Coefficients for the Santiago and Martilli (Boundary-Layer Meteorol 137: 417-439, 2010) parametrization of building drag coefficients and length scales are revised. Inclusion of foliage terms from the new parametrization, in addition to the Santiago and Martilli building terms, reduces the root-mean-square difference (RMSD) of the column model streamwise velocity component and turbulent kinetic energy relative to the CFD model by 89% in the canopy and 71% above the canopy, on average, for the highest leaf area density scenarios tested. RMSD values with the new parametrization are less than 20% of the mean layer magnitude for the streamwise velocity component within and above the canopy, and for above-canopy turbulent kinetic energy; RMSD values for within-canopy turbulent kinetic energy are negligible for most scenarios. The foliage-related portion of the new parametrization is required for scenarios with tree foliage of equal or greater height than the buildings, and for scenarios with foliage below roof height when building plan area densities are less than approximately 0.25.
Arribas-Gil, Ana; De la Cruz, Rolando; Lebarbier, Emilie; Meza, Cristian
2015-06-01
We propose a classification method for longitudinal data. The Bayes classifier is classically used to determine a classification rule where the underlying density in each class needs to be well modeled and estimated. This work is motivated by a real dataset of hormone levels measured at the early stages of pregnancy that can be used to predict normal versus abnormal pregnancy outcomes. The proposed model, which is a semiparametric linear mixed-effects model (SLMM), is a particular case of the semiparametric nonlinear mixed-effects class of models (SNMM) in which finite-dimensional (fixed effects and variance components) and infinite-dimensional (an unknown function) parameters have to be estimated. In SNMMs, maximum likelihood estimation is performed iteratively, alternating parametric and nonparametric procedures. However, if one can assume that the random effects and the unknown function interact in a linear way, more efficient estimation methods can be used. Our contribution is the proposal of a unified estimation procedure based on a penalized EM-type algorithm. The Expectation and Maximization steps are explicit. In this latter step, the unknown function is estimated in a nonparametric fashion using a lasso-type procedure. A simulation study and an application to real data are performed. © 2015, The International Biometric Society.
Critical discussion on the "observed" water balances of five sub-basins in the Everest region
NASA Astrophysics Data System (ADS)
Chevallier, P.; Eeckman, J.; Nepal, S.; Delclaux, F.; Wagnon, P.; Brun, F.; Koirala, D.
2017-12-01
The hydrometeorological components of five Dudh Koshi River sub-basins on the Nepalese side of Mount Everest have been monitored during four hydrological years (2013-2017), with altitudes ranging from 2000 m to the top of Everest, areas between 4.65 and 1207 km², and proportions of glaciated area between nil and 45%. This data set is completed with glacier mass balance observations. The analysis of the observed data and the resulting water balances shows large uncertainties of different types: aleatory, epistemic or semantic, following the classification proposed by Beven (2016). The discussion is illustrated using results from two modeling approaches, physical (ISBA, Noilhan and Planton, 1996) and conceptual (J2000, Krause, 2001), as well as large-scale glacier mass balances obtained by means of a recent remote sensing processing method. References: Beven, K., 2016. Facets of uncertainty: epistemic uncertainty, non-stationarity, likelihood, hypothesis testing, and communication. Hydrological Sciences Journal 61, 1652-1665. doi:10.1080/02626667.2015.1031761. Krause, P., 2001. Das hydrologische Modellsystem J2000: Beschreibung und Anwendung in großen Flußeinzugsgebieten, Schriften des Forschungszentrum Jülich, Reihe Umwelt/Environment, Band 29. Noilhan, J., Planton, S., 1989. A simple parametrization of land surface processes for meteorological models. Monthly Weather Review, 536-549.
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized the fitting of finite mixture models by maximum likelihood estimation because of its asymptotic properties: the estimator is consistent as the sample size increases to infinity and is asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest asymptotic variance among standard statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative effect between rubber price and exchange rate for all selected countries.
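A minimal sketch of maximum likelihood fitting of a two-component Gaussian mixture via the EM algorithm, on synthetic data standing in for the rubber-price and exchange-rate series.

```python
import numpy as np

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def em_mixture(x, n_iter=300):
    """Maximum likelihood for a two-component Gaussian mixture via EM."""
    w = 0.5
    mu = np.percentile(x, [25, 75]).astype(float)
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: responsibility of component 0 for each observation
        p0 = w * normal_pdf(x, mu[0], sd[0])
        p1 = (1 - w) * normal_pdf(x, mu[1], sd[1])
        r = p0 / (p0 + p1)
        # M-step: weighted maximum likelihood updates
        w = r.mean()
        mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
        sd = np.array([np.sqrt(np.average((x - mu[0]) ** 2, weights=r)),
                       np.sqrt(np.average((x - mu[1]) ** 2, weights=1 - r))])
    return w, mu, sd

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(-1.0, 0.5, 400), rng.normal(2.0, 0.8, 600)])
w, mu, sd = em_mixture(x)
print(f"weight {w:.2f}, means {mu.round(2)}, sds {sd.round(2)}")
```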
Bignardi, A B; El Faro, L; Torres Júnior, R A A; Cardoso, V L; Machado, P F; Albuquerque, L G
2011-10-31
We analyzed 152,145 test-day records from 7317 first lactations of Holstein cows recorded from 1995 to 2003. Our objective was to model variations in test-day milk yield during the first lactation of Holstein cows by random regression model (RRM), using various functions in order to obtain adequate and parsimonious models for the estimation of genetic parameters. Test-day milk yields were grouped into weekly classes of days in milk, ranging from 1 to 44 weeks. The contemporary groups were defined as herd-test-day. The analyses were performed using a single-trait RRM, including the direct additive, permanent environmental and residual random effects. In addition, contemporary group and linear and quadratic effects of the age of cow at calving were included as fixed effects. The mean trend of milk yield was modeled with a fourth-order orthogonal Legendre polynomial. The additive genetic and permanent environmental covariance functions were estimated by random regression on two parametric functions, Ali and Schaeffer and Wilmink, and on B-spline functions of days in milk. The covariance components and the genetic parameters were estimated by the restricted maximum likelihood method. Results from RRM parametric and B-spline functions were compared to RRM on Legendre polynomials and with a multi-trait analysis, using the same data set. Heritability estimates presented similar trends during mid-lactation (13 to 31 weeks) and between week 37 and the end of lactation, for all RRM. Heritabilities obtained by multi-trait analysis were of a lower magnitude than those estimated by RRM. The RRMs with a higher number of parameters were more useful to describe the genetic variation of test-day milk yield throughout the lactation. RRM using B-spline and Legendre polynomials as base functions appears to be the most adequate to describe the covariance structure of the data.
A Semi-parametric Multivariate Gap-filling Model for Eddy Covariance Latent Heat Flux
NASA Astrophysics Data System (ADS)
Li, M.; Chen, Y.
2010-12-01
Quantitative descriptions of latent heat fluxes are important for studying the water and energy exchanges between terrestrial ecosystems and the atmosphere. The eddy covariance approach has been recognized as the most reliable technique for measuring surface fluxes over time scales ranging from hours to years. However, unfavorable micrometeorological conditions, instrument failures, and other measurement limitations cause inevitable gaps in the time series data. Development and application of suitable gap-filling techniques are crucial to estimating long-term fluxes. In this study, a semi-parametric multivariate gap-filling model was developed to fill latent heat flux gaps in eddy covariance measurements. Our approach combines the advantages of a multivariate statistical analysis (principal component analysis, PCA) and a nonlinear interpolation technique (K-nearest-neighbors, KNN). The PCA method was first used to resolve the multicollinearity among various hydrometeorological factors, such as radiation, soil moisture deficit, LAI, and wind speed. The KNN method was then applied as a nonlinear interpolation tool to estimate each flux gap as the weighted sum of the latent heat fluxes of the K nearest neighbors in the PC domain. Two years, 2008 and 2009, of eddy covariance and hydrometeorological data from a subtropical mixed evergreen forest (the Lien-Hua-Chih Site) were collected to calibrate and validate the proposed approach with artificial gaps, after standard QC/QA procedures. The optimal K values and weighting factors were determined by a maximum likelihood test. The gap-filled latent heat fluxes show that the developed model successfully preserves the energy balance at daily, monthly, and yearly time scales. Annual evapotranspiration from this study forest was 747 mm in 2008 and 708 mm in 2009. Nocturnal evapotranspiration was estimated from the filled gaps, and the results are comparable with other studies. Seasonal and daily variability of the latent heat fluxes are also discussed.
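A minimal sketch of the PCA-plus-KNN combination described above; the driver variables, the inverse-distance weighting, and the toy data are illustrative assumptions.

```python
import numpy as np

def pca_knn_fill(X_obs, y_obs, X_gap, n_pc=3, k=5):
    """Fill flux gaps: project drivers onto principal components, then
    estimate each gap as a distance-weighted mean of the K nearest
    complete records in PC space."""
    mu, sd = X_obs.mean(axis=0), X_obs.std(axis=0)
    Z = (X_obs - mu) / sd                        # standardize drivers
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Z @ Vt[:n_pc].T                          # scores of complete records
    Pg = ((X_gap - mu) / sd) @ Vt[:n_pc].T       # scores of gap records
    filled = []
    for p in Pg:
        d = np.linalg.norm(P - p, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-9)                # inverse-distance weights
        filled.append(np.sum(w * y_obs[idx]) / w.sum())
    return np.array(filled)

rng = np.random.default_rng(7)
# Drivers: radiation, soil moisture deficit, LAI, wind speed (collinear toy data)
rad = rng.uniform(0, 800, 500)
X = np.column_stack([rad, 0.3 * rad + rng.normal(0, 20, 500),
                     rng.uniform(2, 6, 500), rng.uniform(0.5, 4, 500)])
y = 0.4 * rad + 10 * X[:, 2] + rng.normal(0, 15, 500)   # latent heat flux
gaps = pca_knn_fill(X[:450], y[:450], X[450:])
print("RMSE on held-out gaps:",
      np.sqrt(np.mean((gaps - y[450:]) ** 2)).round(1))
```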
Fitting Item Response Theory Models to Two Personality Inventories: Issues and Insights.
Chernyshenko, O S; Stark, S; Chan, K Y; Drasgow, F; Williams, B
2001-10-01
The present study compared the fit of several IRT models to two personality assessment instruments. Data from 13,059 individuals responding to the US-English version of the Fifth Edition of the Sixteen Personality Factor Questionnaire (16PF) and 1,770 individuals responding to Goldberg's 50 item Big Five Personality measure were analyzed. Various issues pertaining to the fit of the IRT models to personality data were considered. We examined two of the most popular parametric models designed for dichotomously scored items (i.e., the two- and three-parameter logistic models) and a parametric model for polytomous items (Samejima's graded response model). Also examined were Levine's nonparametric maximum likelihood formula scoring models for dichotomous and polytomous data, which were previously found to provide good fits to several cognitive ability tests (Drasgow, Levine, Tsien, Williams, & Mead, 1995). The two- and three-parameter logistic models fit some scales reasonably well but not others; the graded response model generally did not fit well. The nonparametric formula scoring models provided the best fit of the models considered. Several implications of these findings for personality measurement and personnel selection were described.
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM
López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874
NASA Technical Reports Server (NTRS)
Rosenberg, Leigh; Hihn, Jairus; Roust, Kevin; Warfield, Keith
2000-01-01
This paper presents an overview of a parametric cost model built at JPL to estimate the costs of future deep space robotic science missions. Due to recent dramatic changes in JPL business practices brought about by an internal reengineering effort known as Develop New Products (DNP), high-level historic cost data are no longer considered analogous to future missions; the historic data are therefore of little value in forecasting costs for projects developed using the DNP process. This has led to the development of an approach for obtaining expert opinion, and for combining actual data with expert opinion, to provide a cost database for future missions. In addition, the DNP cost model relies as much as possible on objective cost drivers, which reduces the likelihood of model input error. Version 2 is now under development; it expands the model capabilities, links the model more tightly to key design technical parameters, and is grounded in more rigorous statistical techniques. The challenges faced in building this model are discussed, as well as its background, development approach, status, validation, and future plans.
ANA: Astrophysical Neutrino Anisotropy
NASA Astrophysics Data System (ADS)
Denton, Peter
2017-08-01
ANA calculates the likelihood function for a model comprising two components of the astrophysical neutrino flux detected by IceCube. The first component is extragalactic: since point sources have not been found and there is increasing evidence that one source catalog cannot describe the entire data set, ANA models the extragalactic flux as isotropic. The second component is galactic, and a variety of catalogs of interest are provided; ANA takes the galactic contribution to be proportional to the matter density of the universe. The likelihood function has one free parameter, fgal, the fraction of the astrophysical flux that is galactic. ANA finds the best-fit value of fgal and scans over 0 ≤ fgal ≤ 1.
Cooling an Optically Trapped Ultracold Fermi Gas by Periodical Driving.
Li, Jiaming; de Melo, Leonardo F; Luo, Le
2017-03-30
We present a cooling method for a cold Fermi gas by parametrically driving atomic motions in a crossed-beam optical dipole trap (ODT). Our method employs the anharmonicity of the ODT, in which the hotter atoms at the edge of the trap feel the anharmonic components of the trapping potential, while the colder atoms in the center of the trap feel the harmonic one. By modulating the trap depth with frequencies that are resonant with the anharmonic components, we selectively excite the hotter atoms out of the trap while keeping the colder atoms in the trap, generating parametric cooling. This experimental protocol starts with a magneto-optical trap (MOT) that is loaded by a Zeeman slower. The precooled atoms in the MOT are then transferred to an ODT, and a bias magnetic field is applied to create an interacting Fermi gas. We then lower the trapping potential to prepare a cold Fermi gas near the degenerate temperature. After that, we sweep the magnetic field to the noninteracting regime of the Fermi gas, in which the parametric cooling can be manifested by modulating the intensity of the optical trapping beams. We find that the parametric cooling effect strongly depends on the modulation frequencies and amplitudes. With the optimized frequency and amplitude, we measure the dependence of the cloud energy on the modulation time. We observe that the cloud energy is changed in an anisotropic way, where the energy of the axial direction is significantly reduced by parametric driving. The cooling effect is limited to the axial direction because the dominant anharmonicity of the crossed-beam ODT is along the axial direction. Finally, we propose to extend this protocol for the trapping potentials of large anharmonicity in all directions, which provides a promising scheme for cooling quantum gases using external driving.
NASA Technical Reports Server (NTRS)
Egbert, James Allen
2016-01-01
In support of ground system development for the Space Launch System (SLS), engineers are tasked with building immense engineering models of extreme complexity. The various systems require rigorous analysis of pneumatic, hydraulic, cryogenic, and hypergolic systems. There are certain standards that each of these systems must meet, in the form of pressure vessel system (PVS) certification reports. These reports can be hundreds of pages long and require many hours to compile. Traditionally, each component is analyzed individually, often using hand calculations in the design process. The objective of this effort is to perform these analyses in an integrated fashion within the parametric CAD/CAE environment. This allows systems to be analyzed at the assembly level in a semi-automated fashion, which greatly improves accuracy and efficiency. To accomplish this, component-specific parameters from spec control drawings were stored in the Windchill database and attached to individual Creo Parametric models. These parameters were then accessed through the Prime Analysis capability within Creo Parametric. MathCAD Prime spreadsheets were created that automatically extracted these parameters, performed calculations, and generated reports. The reports described component compatibility based on local conditions such as pressure, temperature, density, and flow rates. The reports also determined component pairing compatibility, such as properly sizing relief valves with regulators, and stored the input conditions used to determine compatibility, increasing the traceability of component selection. The desired workflow for using this tool begins with a Creo Schematics diagram of a PVS. This schematic stores the local conditions and locations of components. The schematic then populates an assembly within Creo Parametric using Windchill database parts. These parts have their attributes already assigned, and the MathCAD spreadsheets can run through the database parts to determine which components are suited for specific locations within the assembly. This eliminates a significant amount of time from the design process and makes initial analysis assessments more accurate. Each component checked for a location within the assembly generates a report showing whether the component is compatible. These reports can be used to generate the PVS report without performing the same analysis multiple times. This process also has the potential to be expanded to further automate PVS reports. Software codes or macros could be used to automatically check hundreds of parts for each location on the schematic. If the software could recognize which type of component is necessary for each location, simply starting the macro could select all the components needed for the schematic, and in turn the system, saving many hours of initial component selection. Overall, this process helps to automate initial component selection for PVS systems to fit local design specifications. These selections automatically generate reports showing how the design criteria are met by the chosen component, which will make compilation of the PVS certification reports, currently a major investment of time and effort, considerably easier.
Parametric amplification in quasi-PT symmetric coupled waveguide structures
NASA Astrophysics Data System (ADS)
Zhong, Q.; Ahmed, A.; Dadap, J. I.; Osgood, R. M., Jr.; El-Ganainy, R.
2016-12-01
The concept of non-Hermitian parametric amplification was recently proposed as a means to achieve an efficient energy conversion throughout the process of nonlinear three wave mixing in the absence of phase matching. Here we investigate this effect in a waveguide coupler arrangement whose characteristics are tailored to introduce passive PT symmetry only for the idler component. By means of analytical solutions and numerical analysis, we demonstrate the utility of these novel schemes and obtain the optimal design conditions for these devices.
Differential diagnosis of normal pressure hydrocephalus by MRI mean diffusivity histogram analysis.
Ivkovic, M; Liu, B; Ahmed, F; Moore, D; Huang, C; Raj, A; Kovanlikaya, I; Heier, L; Relkin, N
2013-01-01
Accurate diagnosis of normal pressure hydrocephalus is challenging because the clinical symptoms and radiographic appearance of NPH often overlap those of other conditions, including age-related neurodegenerative disorders such as Alzheimer and Parkinson diseases. We hypothesized that radiologic differences between NPH and AD/PD can be characterized by a robust and objective MR imaging DTI technique that does not require intersubject image registration or operator-defined regions of interest, thus avoiding many pitfalls common in DTI methods. We collected 3T DTI data from 15 patients with probable NPH and 25 controls with AD, PD, or dementia with Lewy bodies. We developed a parametric model for the shape of intracranial mean diffusivity histograms that separates brain and ventricular components from a third component composed mostly of partial volume voxels. To accurately fit the shape of the third component, we constructed a parametric function named the generalized Voss-Dyke function. We then examined the use of the fitting parameters for the differential diagnosis of NPH from AD, PD, and DLB. Using parameters for the MD histogram shape, we distinguished clinically probable NPH from the 3 other disorders with 86% sensitivity and 96% specificity. The technique yielded 86% sensitivity and 88% specificity when differentiating NPH from AD only. An adequate parametric model for the shape of intracranial MD histograms can distinguish NPH from AD, PD, or DLB with high sensitivity and specificity.
Comparison of thawing and freezing dark energy parametrizations
NASA Astrophysics Data System (ADS)
Pantazis, G.; Nesseris, S.; Perivolaropoulos, L.
2016-05-01
Dark energy equation of state w(z) parametrizations with two parameters and given monotonicity are generically either convex or concave functions. This makes them suitable for fitting either freezing or thawing quintessence models but not both simultaneously. Fitting a data set based on a freezing model with an unsuitable (concave when increasing) w(z) parametrization [like Chevallier-Polarski-Linder (CPL)] can lead to significant misleading features like crossing of the phantom divide line, incorrect w(z=0), incorrect slope, etc., that are not present in the underlying cosmological model. To demonstrate this fact we generate scattered cosmological data at both the level of w(z) and the luminosity distance DL(z) based on either thawing or freezing quintessence models and fit them using parametrizations of convex and of concave type. We then compare statistically significant features of the best fit w(z) with actual features of the underlying model. We thus verify that the use of unsuitable parametrizations can lead to misleading conclusions. In order to avoid these problems it is important to either use both convex and concave parametrizations and select the one with the best χ2 or use principal component analysis thus splitting the redshift range into independent bins. In the latter case, however, significant information about the slope of w(z) at high redshifts is lost. Finally, we propose a new family of parametrizations w(z) = w0 + wa (z/(1+z))^n which generalizes the CPL and interpolates between thawing and freezing parametrizations as the parameter n increases to values larger than 1.
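A direct transcription of the proposed family, with the CPL case recovered at n = 1 (the parameter values below are arbitrary, chosen only for illustration):

```python
import numpy as np

def w_generalized(z, w0, wa, n):
    """Proposed family w(z) = w0 + wa * (z / (1 + z))**n; n = 1 recovers CPL."""
    return w0 + wa * (z / (1.0 + z))**n

z = np.linspace(0.0, 3.0, 7)
print(w_generalized(z, -1.0, 0.3, 1.0))   # CPL limit
print(w_generalized(z, -1.0, 0.3, 2.0))   # larger n: freezing-like behavior
```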
Predicting sun protection behaviors using protection motivation variables.
Ch'ng, Joanne W M; Glendon, A Ian
2014-04-01
Protection motivation theory components were used to predict sun protection behaviors (SPBs) using four outcome measures: typical reported behaviors, previous reported behaviors, current sunscreen use as determined by interview, and current observed behaviors (clothing worn) to control for common method bias. Sampled from two SE Queensland public beaches during summer, 199 participants aged 18-29 years completed a questionnaire measuring perceived severity, perceived vulnerability, response efficacy, response costs, and protection motivation (PM). Personal perceived risk (similar to threat appraisal) and response likelihood (similar to coping appraisal) were derived from their respective PM components. Protection motivation predicted all four SPB criterion variables. Personal perceived risk and response likelihood predicted protection motivation. Protection motivation completely mediated the effect of response likelihood on all four criterion variables. Alternative models are considered. Strengths and limitations of the study are outlined and suggestions made for future research.
A model-averaging method for assessing groundwater conceptual model uncertainty.
Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M
2010-01-01
This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
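For the information-criteria branch of the model averaging discussed above, a minimal sketch of Akaike-weight averaging is shown below. The AIC values, predictions, and variances are hypothetical placeholders, not values from the DVRFS study:

```python
import numpy as np

# Hypothetical AIC values for a set of alternative groundwater models
aic = np.array([1012.3, 1008.7, 1015.1, 1009.9])
delta = aic - aic.min()
weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()

# Model-averaged prediction; the variance includes between-model spread,
# which is how model uncertainty enters predictive uncertainty
preds = np.array([4.2, 4.6, 3.9, 4.4])           # e.g., predicted heads (m)
var_within = np.array([0.10, 0.08, 0.12, 0.09])  # within-model (parametric) variances
mean_avg = np.sum(weights * preds)
var_avg = np.sum(weights * (var_within + (preds - mean_avg) ** 2))
print(weights, mean_avg, var_avg)
```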
Samin, Oliver A.; Civil, Ian D.
1999-01-01
Retrospectively calculated NISS was compared with prospectively calculated ISS, using data derived from the trauma registry of the Trauma Services of the Auckland Hospital, to determine which score is a better predictor of patient outcome, defined here as the likelihood of death. The areas under the curve (AUC) for ISS and NISS were computed using the non-parametric approach: AUC for ISS = 0.95835 and AUC for NISS = 0.97350, p < 0.012. The misclassification rate for ISS was 2.77%, and the value for NISS was 2.43%.
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2013-01-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided. PMID:24790286
Paninski, Liam; Haith, Adrian; Szirtes, Gabor
2008-02-01
We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
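Although the paper's point is that this first-passage density can be computed efficiently by solving an integral equation, the quantity itself is easy to define by brute force. The sketch below estimates the first-passage-time density of a noisy leaky integrate-and-fire model by Euler-Maruyama simulation; all parameter values are hypothetical, and this Monte Carlo route is far slower than the integral-equation method described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 1e-4, 0.2                    # time step (s), horizon (s)
tau, v_th, v_reset = 0.02, 1.0, 0.0  # membrane time constant, threshold, reset
i_in, sigma = 60.0, 0.5              # constant input current, noise amplitude

n_trials, n_steps = 5000, int(T / dt)
v = np.full(n_trials, v_reset)
fpt = np.full(n_trials, np.nan)      # first passage time per trial
alive = np.ones(n_trials, dtype=bool)
for k in range(n_steps):
    dv = (-v / tau + i_in) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_trials)
    v = np.where(alive, v + dv, v)   # only evolve trials that have not spiked
    crossed = alive & (v >= v_th)
    fpt[crossed] = (k + 1) * dt
    alive &= ~crossed

hist, edges = np.histogram(fpt[~np.isnan(fpt)], bins=50, range=(0, T), density=True)
# hist approximates the spike-time likelihood that the integral-equation
# method computes exactly and much more efficiently.
```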
Computational Fatigue Life Analysis of Carbon Fiber Laminate
NASA Astrophysics Data System (ADS)
Shastry, Shrimukhi G.; Chandrashekara, C. V., Dr.
2018-02-01
In the present scenario, many traditional materials are being replaced by composite materials for their light weight and high strength. Industries such as the automotive and aerospace sectors use composite materials in many of their components. Replacing components that are subjected to static or impact loads is less challenging than replacing components subjected to dynamic loading. Replacing a component with one made of composite material demands many stages of parametric study; one such parametric study is the fatigue analysis of the composite material. This paper focuses on the fatigue life analysis of the composite material using computational techniques. A composite plate with a hole at its center is considered for the study. The analysis is carried out on the (0°/90°/90°/90°/90°)s laminate sequence and the (45°/-45°)2s laminate sequence using a computer script. The life cycles of the two lay-up sequences are compared with each other. It is observed that, for the same material and geometry of the component, cross-ply laminates show better fatigue life than angle-ply laminates.
Modeling and Prediction of Krueger Device Noise
NASA Technical Reports Server (NTRS)
Guo, Yueping; Burley, Casey L.; Thomas, Russell H.
2016-01-01
This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far-field noise is modeled using each noise component's respective spectral function, far-field directivity, Mach number dependence, component amplitude, and other parametric trends. Preliminary validations are carried out using small-scale experimental data, and two applications are discussed: one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise with design parameters, while the latter reveals its importance in relation to other airframe noise components.
A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...
Lehmann, A; Scheffler, Ch; Hermanussen, M
2010-02-01
Recent progress in modelling individual growth has been achieved by combining principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm, SD 7 mm. Seasonal height variation was found: low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining principal component analysis and the maximum likelihood principle enables growth modelling also in historical height data. Copyright (c) 2009 Elsevier GmbH. All rights reserved.
Topics in inference and decision-making with partial knowledge
NASA Technical Reports Server (NTRS)
Safavian, S. Rasoul; Landgrebe, David
1990-01-01
Two essential elements needed in the process of inference and decision-making are prior probabilities and likelihood functions. When both of these components are known accurately and precisely, the Bayesian approach provides a consistent and coherent solution to the problems of inference and decision-making. In many situations, however, one or both of these components may not be known, or at least may not be known precisely. This problem of partial knowledge about prior probabilities and likelihood functions is addressed here. There are at least two ways to cope with such a lack of precise knowledge: robust methods and interval-valued methods. First, ways of modeling imprecision and indeterminacies in prior probabilities and likelihood functions are examined; then it is shown how imprecision in these components carries over to the posterior probabilities. Finally, the problem of decision-making with imprecise posterior probabilities and the consequences of such actions are addressed. These problems arise, for example, in statistical pattern recognition, such as the classification of high-dimensional multispectral remote sensing image data.
Josephson parametric converter saturation and higher order effects
NASA Astrophysics Data System (ADS)
Liu, G.; Chien, T.-C.; Cao, X.; Lanes, O.; Alpern, E.; Pekker, D.; Hatridge, M.
2017-11-01
Microwave parametric amplifiers based on Josephson junctions have become indispensable components of many quantum information experiments. One key limitation which has not been well predicted by theory is the gain saturation behavior which limits the amplifier's ability to process large amplitude signals. The typical explanation for this behavior in phase-preserving amplifiers based on three-wave mixing, such as the Josephson Parametric Converter, is pump depletion, in which the consumption of pump photons to produce amplification results in a reduction in gain. However, in this work, we present experimental data and theoretical calculations showing that the fourth-order Kerr nonlinearities inherent in Josephson junctions are the dominant factor. The Kerr-based theory has the unusual property of causing saturation to both lower and higher gains, depending on bias conditions. This work presents an efficient methodology for optimizing device performance in the presence of Kerr nonlinearities while retaining device tunability and points to the necessity of controlling higher-order Hamiltonian terms to make further improvements in parametric devices.
Brayton Power Conversion System Parametric Design Modelling for Nuclear Electric Propulsion
NASA Technical Reports Server (NTRS)
Ashe, Thomas L.; Otting, William D.
1993-01-01
The parametrically based closed Brayton cycle (CBC) computer design model was developed for inclusion into the NASA LeRC overall Nuclear Electric Propulsion (NEP) end-to-end systems model. The code is intended to provide greater depth to the NEP system modeling which is required to more accurately predict the impact of specific technology on system performance. The CBC model is parametrically based to allow for conducting detailed optimization studies and to provide for easy integration into an overall optimizer driver routine. The power conversion model includes the modeling of the turbines, alternators, compressors, ducting, and heat exchangers (hot-side heat exchanger and recuperator). The code predicts performance to significant detail. The system characteristics determined include estimates of mass, efficiency, and the characteristic dimensions of the major power conversion system components. These characteristics are parametrically modeled as a function of input parameters such as the aerodynamic configuration (axial or radial), turbine inlet temperature, cycle temperature ratio, power level, lifetime, materials, and redundancy.
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
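To make the setting concrete, here is a minimal maximum likelihood fit of a cumulative-Gaussian psychometric function. The data are synthetic and sampled uniformly rather than adaptively, and no bias-reduction term is included, so this reproduces the standard fit whose spread-parameter bias the paper addresses:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
stim = rng.uniform(-3, 3, 400)          # stimulus levels (adaptive in practice)
mu_true, sigma_true = 0.4, 1.1
resp = rng.random(400) < norm.cdf(stim, mu_true, sigma_true)  # binary responses

def nll(params):
    mu, log_sigma = params
    p = norm.cdf(stim, mu, np.exp(log_sigma)).clip(1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (~resp) * np.log(1 - p))

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, sigma_hat)  # spread estimate is biased for staircase-style data
```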
Dawson, Ree; Lavori, Philip W
2012-01-01
Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because often there is no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy and its standard error are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominately reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.
Growth of perturbations in dark energy parametrization scenarios
NASA Astrophysics Data System (ADS)
Mehrabi, Ahmad
2018-04-01
In this paper, we study the evolution of dark matter perturbations in the linear regime by considering the possibility of dark energy perturbations. To do this, two popular parametrizations, Chevallier-Polarski-Linder (CPL) and Barboza-Alcaniz (BA), with the same number of free parameters and different redshift dependencies have been considered. We integrate the full relativistic equations to obtain the growth of matter fluctuations for both clustering and smooth versions of CPL and BA dark energy. The growth rate is larger (smaller) than in ΛCDM in the smooth cases when w < -1 (w > -1), but dark energy clustering gives a larger (smaller) growth index when w > -1 (w < -1). We measure the relative difference of the growth rate with respect to the concordance ΛCDM model and study how it changes depending on the free parameters. Furthermore, it is found that the difference in growth rates between smooth CPL and BA is negligible, less than 0.5%, while for the clustering case the difference is considerable and may be as large as 2%. Eventually, using the latest geometrical and growth rate observational data, we perform an overall likelihood analysis and show that both smooth and clustering cases of the CPL and BA parametrizations are consistent with observations. In particular, we find that the dark energy figure of merit is approximately 70 for BA and approximately 30 for CPL, which indicates that the BA parametrization is more tightly constrained than the CPL one.
Galka, Andreas; Siniatchkin, Michael; Stephani, Ulrich; Groening, Kristina; Wolff, Stephan; Bosch-Bayard, Jorge; Ozaki, Tohru
2010-12-01
The analysis of time series obtained by functional magnetic resonance imaging (fMRI) may be approached by fitting predictive parametric models, such as nearest-neighbor autoregressive models with exogenous input (NNARX). As a part of the modeling procedure, it is possible to apply instantaneous linear transformations to the data. Spatial smoothing, a common preprocessing step, may be interpreted as such a transformation. The autoregressive parameters may be constrained such that they provide a response behavior that corresponds to the canonical haemodynamic response function (HRF). We present an algorithm for estimating the parameters of the linear transformations and of the HRF within a rigorous maximum-likelihood framework. Using this approach, an optimal amount of both the spatial smoothing and the HRF can be estimated simultaneously for a given fMRI data set. An example from a motor-task experiment is discussed. It is found that, for this data set, weak, but non-zero, spatial smoothing is optimal. Furthermore, it is demonstrated that activated regions can be estimated within the maximum-likelihood framework.
Statistical field estimators for multiscale simulations.
Eapen, Jacob; Li, Ju; Yip, Sidney
2005-11-01
We present a systematic approach for generating smooth and accurate fields from particle simulation data using the notions of statistical inference. As an extension to a parametric representation based on the maximum likelihood technique previously developed for velocity and temperature fields, a nonparametric estimator based on the principle of maximum entropy is proposed for particle density and stress fields. Both estimators are applied to represent molecular dynamics data on shear-driven flow in an enclosure which exhibits a high degree of nonlinear characteristics. We show that the present density estimator is a significant improvement over ad hoc bin averaging and is also free of systematic boundary artifacts that appear in the method of smoothing kernel estimates. Similarly, the velocity fields generated by the maximum likelihood estimator do not show any edge effects that can be erroneously interpreted as slip at the wall. For low Reynolds numbers, the velocity fields and streamlines generated by the present estimator are benchmarked against Newtonian continuum calculations. For shear velocities that are a significant fraction of the thermal speed, we observe a form of shear localization that is induced by the confining boundary.
A parametric shell analysis of the shuttle 51-L SRB AFT field joint
NASA Technical Reports Server (NTRS)
Davis, Randall C.; Bowman, Lynn M.; Hughes, Robert M., IV; Jackson, Brian J.
1990-01-01
Following the Shuttle 51-L accident, an investigation was conducted to determine the cause of the failure. Investigators at the Langley Research Center focused attention on the structural behavior of the field joints with O-ring seals in the steel solid rocket booster (SRB) cases. The shell-of-revolution computer program BOSOR4 was used to model the aft field joint of the solid rocket booster case. The shell model consisted of the SRB wall and joint geometry present during the Shuttle 51-L flight. A parametric study of the joint was performed on the geometry, including joint clearances and contact between the joint components, and on the loads, both induced and applied. In addition, combinations of geometry and loads were evaluated. The analytical results from the parametric study showed that contact between the joint components was a primary contributor to allowing hot gases to blow by the O-rings. Based upon understanding the original joint behavior, various proposed joint modifications are shown and analyzed in order to provide additional insight and information. Finally, experimental results from a hydrostatic pressurization of a test rocket booster case to study joint motion are presented and verified analytically.
Local parametric instability near elliptic points in vortex flows under shear deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koshel, Konstantin V., E-mail: kvkoshel@poi.dvo.ru; Institute of Applied Mathematics, FEB RAS, 7, Radio Street, Vladivostok 690022; Far Eastern Federal University, 8, Sukhanova Street, Vladivostok 690950
The dynamics of two point vortices embedded in an oscillatory external flow consisting of shear and rotational components is addressed. The region associated with steady-state elliptic points of the vortex motion is established to experience local parametric instability. The instability forces the point vortices with initial positions corresponding to the steady-state elliptic points to move in spiral-like divergent trajectories. This divergent motion continues until the nonlinear effects suppress their motion near the region associated with the steady-state separatrices. The local parametric instability is then demonstrated not to contribute considerably to enhancing the size of the chaotic motion regions. Instead, the size of the chaotic motion region mostly depends on overlaps of the nonlinear resonances emerging in the perturbed system.
L.U.St: a tool for approximated maximum likelihood supertree reconstruction.
Akanni, Wasiu A; Creevey, Christopher J; Wilkinson, Mark; Pisani, Davide
2014-06-12
Supertrees combine disparate, partially overlapping trees to generate a synthesis that provides a high-level perspective that cannot be attained from the inspection of individual phylogenies. Supertrees can be seen as meta-analytical tools that can be used to make inferences based on results of previous scientific studies. Their meta-analytical application has increased in popularity since it was realised that the power of statistical tests for the study of evolutionary trends critically depends on the use of taxon-dense phylogenies. Further to that, supertrees have found applications in phylogenomics where they are used to combine gene trees and recover species phylogenies based on genome-scale data sets. Here, we present the L.U.St package, a python tool for approximate maximum likelihood supertree inference, and illustrate its application using a genomic data set for the placental mammals. L.U.St allows the calculation of the approximate likelihood of a supertree, given a set of input trees, performs heuristic searches to look for the supertree of highest likelihood, and performs statistical tests of two or more supertrees. To this end, L.U.St implements a winning sites test allowing ranking of a collection of a priori selected hypotheses, given as a collection of input supertree topologies. It also outputs a file of input-tree-wise likelihood scores that can be used as input to CONSEL for calculation of standard tests of two trees (e.g. Kishino-Hasegawa, Shimodaira-Hasegawa and Approximately Unbiased tests). This is the first fully parametric implementation of a supertree method; it has clearly understood properties and provides several advantages over currently available supertree approaches. It is easy to implement and works on any platform that has python installed. bitBucket page - https://afro-juju@bitbucket.org/afro-juju/l.u.st.git. Contact: Davide.Pisani@bristol.ac.uk.
Automation and Evaluation of the SOWH Test with SOWHAT.
Church, Samuel H; Ryan, Joseph F; Dunn, Casey W
2015-11-01
The Swofford-Olsen-Waddell-Hillis (SOWH) test evaluates statistical support for incongruent phylogenetic topologies. It is commonly applied to determine if the maximum likelihood tree in a phylogenetic analysis is significantly different from an alternative hypothesis. The SOWH test compares the observed difference in log-likelihood between two topologies to a null distribution of differences in log-likelihood generated by parametric resampling. The test is a well-established phylogenetic method for topology testing, but it is sensitive to model misspecification, computationally burdensome to perform, and its implementation requires the investigator to make several decisions that each have the potential to affect the outcome of the test. We analyzed the effects of multiple factors using seven data sets to which the SOWH test was previously applied. These factors include the number of sample replicates, the likelihood software used, the introduction of gaps to simulated data, the use of distinct models of evolution for data simulation and likelihood inference, and a suggested test correction wherein an unresolved "zero-constrained" tree is used to simulate sequence data. To facilitate these analyses and future applications of the SOWH test, we wrote SOWHAT, a program that automates the SOWH test. We find that inadequate bootstrap sampling can change the outcome of the SOWH test. The results also show that using a zero-constrained tree for data simulation can result in a wider null distribution and higher p-values, but does not change the outcome of the SOWH test for most of the data sets tested here. These results will help others implement and evaluate the SOWH test and allow us to provide recommendations for future applications of the SOWH test. SOWHAT is available for download from https://github.com/josephryan/SOWHAT. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
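The logic of the test is compact enough to sketch. The toy example below substitutes a trivial Gaussian model for the phylogenetic likelihood (both "topologies" are stand-ins), but the resampling structure, an observed log-likelihood difference compared against refits of data simulated under the constrained hypothesis, is the same:

```python
import numpy as np
from scipy.stats import norm

# SOWH-style parametric bootstrap, illustrated with a hypothetical toy model
rng = np.random.default_rng(4)
data = rng.normal(0.3, 1.0, 50)

def loglik_best(x):    # analogue of the ML tree: mean left free
    return norm.logpdf(x, x.mean(), 1.0).sum()

def loglik_null(x):    # analogue of the alternative topology: mean fixed at 0
    return norm.logpdf(x, 0.0, 1.0).sum()

delta_obs = loglik_best(data) - loglik_null(data)

# Null distribution: simulate under the constrained model, refit both
delta_null = []
for _ in range(1000):
    sim = rng.normal(0.0, 1.0, data.size)
    delta_null.append(loglik_best(sim) - loglik_null(sim))
p_value = np.mean(np.array(delta_null) >= delta_obs)
print(delta_obs, p_value)
```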
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
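A minimal sketch of the simulation-based (parametric) likelihood approximation described in both versions of this note: summary statistics from repeated stochastic model runs are fitted with a Gaussian, whose density at the observed summaries serves as the likelihood inside a Metropolis sampler. The simulator, summary statistics, and tuning constants below are hypothetical stand-ins for a model like FORMIND:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)

def stochastic_model(theta, n_runs=100):
    # stand-in for an expensive stochastic simulator; returns one
    # summary-statistic vector per run
    return rng.normal(theta, 1.0, size=(n_runs, 2))

obs_summary = np.array([1.0, 1.2])   # summaries of the observed field data

def approx_loglik(theta):
    sims = stochastic_model(theta)
    mu, cov = sims.mean(axis=0), np.cov(sims.T) + 1e-6 * np.eye(2)
    return multivariate_normal.logpdf(obs_summary, mu, cov)

# Metropolis sampler with a flat prior (assumed for simplicity)
theta, ll, chain = 0.0, approx_loglik(0.0), []
for _ in range(2000):
    prop = theta + rng.normal(0, 0.3)
    ll_prop = approx_loglik(prop)
    if np.log(rng.random()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    chain.append(theta)
print(np.mean(chain[500:]))          # posterior mean after burn-in
```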
Parametric disordering of meta-atoms and nonlinear topological transitions in liquid metacrystals
NASA Astrophysics Data System (ADS)
Zharov, Alexander A.; Zharova, Nina A.; Zharov, Alexander A.
2017-09-01
We show that an amplitude-modulated electromagnetic wave incident on a liquid metacrystal may cause parametric instability of the meta-atoms, resulting in isotropization of the medium, which can be treated in terms of an effective temperature. This makes it possible to switch the sign of certain components of the dielectric permittivity and/or magnetic permeability tensors, which, in turn, modifies the topology of the isofrequency surface. At the same time, it changes the conditions of electromagnetic wave propagation, appearing in the form of focusing or defocusing nonlinearity.
Empirical velocity profiles for galactic rotation curves
NASA Astrophysics Data System (ADS)
López Fune, E.
2018-04-01
A unified parametrization of the circular velocity, which accurately fits 850 galaxy rotation curves without requiring advance knowledge of the luminous matter components or a fixed dark matter halo model, is proposed. A notable feature is that the associated gravitational potential increases with the distance from the galaxy centre, giving rise to a length-scale that indicates a finite size of the galaxy, beyond which the Keplerian fall-off of the parametrized circular velocity is recovered according to Newtonian gravity, making possible the estimation of the total mass enclosed by the galaxy.
Rebarber, Andrei; Bender, Samuel; Silverstein, Michael; Saltzman, Daniel H; Klauser, Chad K; Fox, Nathan S
2014-02-01
To report the obstetrical outcomes in patients with twin pregnancies who underwent an emergency/physical exam-indicated cerclage and to compare them to patients with singleton pregnancies undergoing the same procedure. Patients who underwent emergency/physical exam-indicated cerclage in the second trimester in one maternal-fetal medicine practice from July 1997 to March 2012 were reviewed. We defined an emergency/physical exam-indicated cerclage as any cerclage placed in a patient with a dilated cervix on examination or membranes visible at the external cervical os on speculum examination. We compared outcomes between patients with singleton and twin pregnancies using non-parametric testing. There were 43 patients (12 twin and 31 singleton pregnancies) who underwent emergency/physical exam-indicated cerclage placement. The median gestational age at cerclage placement, cervical dilation, maternal age, and cerclage type were similar between the groups. Comparing twins to singletons, the median time from cerclage placement to delivery was similar (92 vs. 106 days, p=0.330), as was the median gestational age at delivery (33.5 vs. 35.0 weeks, p=0.244). The likelihood of delivery at >32 weeks (75.0% vs. 71.0%, p>0.999) and the likelihood of neonatal survival to discharge (83.3% vs. 83.9%, p>0.999) were also similar. Emergency/physical exam-indicated cerclage in twin pregnancies can be associated with favorable outcomes, including a high likelihood of delivery at >32 weeks and a high likelihood of survival. Their outcomes appear similar to singleton pregnancies. Cerclage should be considered an option for patients with twin pregnancies and a dilated cervix in the second trimester. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Methods for fitting a parametric probability distribution to most probable number data.
Williams, Michael S; Ebel, Eric D
2012-07-02
Every year hundreds of thousands, if not millions, of samples are collected and analyzed to assess microbial contamination in food and water. The concentration of pathogenic organisms at the end of the production process is low for most commodities, so a highly sensitive screening test is used to determine whether the organism of interest is present in a sample. In some applications, samples that test positive are subjected to quantitation. The most probable number (MPN) technique is a common method to quantify the level of contamination in a sample because it is able to provide estimates at low concentrations. This technique uses a series of dilution count experiments to derive estimates of the concentration of the microorganism of interest. An application for these data is food-safety risk assessment, where the MPN concentration estimates can be fitted to a parametric distribution to summarize the range of potential exposures to the contaminant. Many different methods (e.g., substitution methods, maximum likelihood and regression on order statistics) have been proposed to fit microbial contamination data to a distribution, but the development of these methods rarely considers how the MPN technique influences the choice of distribution function and fitting method. An often-overlooked aspect of applying these methods is whether the data represent actual measurements of the average concentration of the microorganism per milliliter or are real-valued estimates of the average concentration, as is the case with MPN data. In this study, we propose two methods for fitting MPN data to a probability distribution. The first method uses a maximum likelihood estimator that takes average concentration values as the data inputs. The second is a Bayesian latent variable method that uses the counts of the number of positive tubes at each dilution to estimate the parameters of the contamination distribution. The performance of the two fitting methods is compared for two data sets that represent Salmonella and Campylobacter concentrations on chicken carcasses. The results demonstrate a bias in the maximum likelihood estimator that increases with reductions in average concentration. The Bayesian method provided unbiased estimates of the concentration distribution parameters for all data sets. We provide computer code for the Bayesian fitting method. Published by Elsevier B.V.
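The first fitting method from the abstract is easy to sketch: treat the per-sample MPN concentration estimates as data and maximize a lognormal likelihood. The values below are hypothetical CFU/ml estimates, and, as the study notes, this estimator is biased at low concentrations:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

mpn_estimates = np.array([0.4, 1.1, 0.7, 2.3, 0.2, 5.1, 0.9, 1.8])  # hypothetical

def nll(params):
    log_mu, log_sigma = params
    # lognormal with median exp(log_mu) and shape exp(log_sigma)
    return -lognorm.logpdf(mpn_estimates, s=np.exp(log_sigma),
                           scale=np.exp(log_mu)).sum()

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
print(np.exp(fit.x))  # estimated median concentration and shape parameter
```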
Bayesian hierarchical functional data analysis via contaminated informative priors.
Scarpa, Bruno; Dunson, David B
2009-09-01
A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.
New Opportunities for Remote Sensing Ionospheric Irregularities by Fitting Scintillation Spectra
NASA Astrophysics Data System (ADS)
Carrano, C. S.; Rino, C. L.; Groves, K. M.
2017-12-01
In a recent paper, we presented a phase screen theory for the spectrum of intensity scintillations when the refractive index irregularities follow a two-component power law [Carrano and Rino, DOI: 10.1002/2015RS005903]. More recently we have investigated the inverse problem, whereby phase screen parameters are inferred from scintillation time series. This is accomplished by fitting the spectrum of intensity fluctuations with a parametrized theoretical model using Maximum Likelihood (ML) methods. The Markov-Chain Monte-Carlo technique provides a-posteriori errors and confidence intervals. The Akaike Information Criterion (AIC) provides justification for the use of one- or two-component irregularity models. We refer to this fitting as Irregularity Parameter Estimation (IPE) since it provides a statistical description of the irregularities from the scintillations they produce. In this talk, we explore some new opportunities for remote sensing ionospheric irregularities afforded by IPE. Statistical characterization of irregularities and the plasma bubbles in which they are embedded provides insight into the development of the underlying instability. In a companion paper by Rino et al., IPE is used to interpret scintillation due to simulated EPB structure. IPE can be used to reconcile multi-frequency scintillation observations and to construct high fidelity scintillation simulation tools. In space-to-ground propagation scenarios, for which an estimate of the distance to the scattering region is available a-priori, IPE enables retrieval of zonal irregularity drift. In radio occultation scenarios, the distance to the irregularities is generally unknown but IPE enables retrieval of Fresnel frequency. A geometric model for the effective scan velocity maps Fresnel frequency to Fresnel scale, yielding the distance to the irregularities. We demonstrate this approach by geolocating irregularities observed by the CORISS instrument onboard the C/NOFS satellite.
Three-Dimensional Modeling of Aircraft High-Lift Components with Vehicle Sketch Pad
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2016-01-01
Vehicle Sketch Pad (OpenVSP) is a parametric geometry modeler that has been used extensively for conceptual design studies of aircraft, including studies using higher-order analysis. OpenVSP can model flap and slat surfaces using simple shearing of the airfoil coordinates, which is an appropriate level of complexity for lower-order aerodynamic analysis methods. For three-dimensional analysis, however, there is not a built-in method for defining the high-lift components in OpenVSP in a realistic manner, or for controlling their complex motions in a parametric manner that is intuitive to the designer. This paper seeks instead to utilize OpenVSP's existing capabilities, and establish a set of best practices for modeling high-lift components at a level of complexity suitable for higher-order analysis methods. Techniques are described for modeling the flap and slat components as separate three-dimensional surfaces, and for controlling their motion using simple parameters defined in the local hinge-axis frame of reference. To demonstrate the methodology, an OpenVSP model for the Energy-Efficient Transport (EET) AR12 wind-tunnel model has been created, taking advantage of OpenVSP's Advanced Parameter Linking capability to translate the motions of the high-lift components from the hinge-axis coordinate system to a set of transformations in OpenVSP's frame of reference.
Butterfly valve in a virtual environment
NASA Astrophysics Data System (ADS)
Talekar, Aniruddha; Patil, Saurabh; Thakre, Prashant; Rajkumar, E.
2017-11-01
Assembly of components is one of the processes involved in product design and development. The present paper deals with the assembly of the components of a simple butterfly valve in a virtual environment. The assembly has been carried out using virtual reality software by trial-and-error methods. The parts are modelled using parametric software (SolidWorks), meshed accordingly, and then imported into the virtual environment for assembly.
Meta II: Multi-Model Language Suite for Cyber Physical Systems
2013-03-01
The (AVM META) projects have developed tools for designing cyber physical (CPS, or mechatronic) systems. These systems are increasingly complex, take much... Exemplified by modern amphibious and ground military... and parametric interface of Simulink models and defines associations with CyPhy components and component interfaces. 2. Embedded Systems Modeling
A modular approach for item response theory modeling with the R package flirt.
Jeon, Minjeong; Rijmen, Frank
2016-06-01
The new R package flirt is introduced for flexible item response theory (IRT) modeling of psychological, educational, and behavior assessment data. flirt integrates a generalized linear and nonlinear mixed modeling framework with graphical model theory. The graphical model framework allows for efficient maximum likelihood estimation. The key feature of flirt is its modular approach to facilitate convenient and flexible model specifications. Researchers can construct customized IRT models by simply selecting various modeling modules, such as parametric forms, number of dimensions, item and person covariates, person groups, link functions, etc. In this paper, we describe major features of flirt and provide examples to illustrate how flirt works in practice.
Survival Data and Regression Models
NASA Astrophysics Data System (ADS)
Grégoire, G.
2014-12-01
We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right-censored data and develop two types of regression models. The first concerns the so-called accelerated failure time (AFT) models, which are parametric models where a function of a parameter depends linearly on the covariables. The second is a semiparametric model, where the covariables enter in a multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is the maximum likelihood methodology; although we recall some essential results from ML theory, we refer to the chapter "Logistic Regression" for a more detailed presentation.
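As a concrete illustration of the parametric AFT case, the following sketch fits a Weibull AFT model to synthetic right-censored data by maximum likelihood; the Weibull choice, the data, and all variable names are assumptions, not taken from the chapter:

```python
# Minimal Weibull AFT fit with right censoring (illustrative assumptions only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                          # one covariate
beta_true, sigma_true = 0.5, 0.8
# Weibull AFT: log T = mu + beta*x + sigma*eps, eps ~ minimum extreme value
t_true = np.exp(1.0 + beta_true * x - sigma_true * rng.gumbel(size=n))
c = rng.exponential(scale=8.0, size=n)          # censoring times
y = np.minimum(t_true, c)
delta = t_true <= c                             # True = event observed

def negloglik(params):
    mu, beta, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (np.log(y) - mu - beta * x) / sigma
    ll_event = z - np.exp(z) - log_sigma - np.log(y)   # log density for events
    ll_cens = -np.exp(z)                               # log survival for censored
    return -np.sum(np.where(delta, ll_event, ll_cens))

res = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
print("mu, beta, sigma:", res.x[0], res.x[1], np.exp(res.x[2]))
```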
Estimation of the sea surface's two-scale backscatter parameters
NASA Technical Reports Server (NTRS)
Wentz, F. J.
1978-01-01
The relationship between the sea-surface normalized radar cross section and the friction velocity vector is determined using a parametric two-scale scattering model. The model parameters are found by nonlinear maximum likelihood estimation. The estimation is based on aircraft scatterometer measurements and the sea-surface anemometer measurements collected during the JONSWAP '75 experiment. The estimates of the ten model parameters converge to realistic values that are in good agreement with the available oceanographic data. The rms discrepancy between the model and the cross section measurements is 0.7 dB, which is the rms sum of a 0.3 dB average measurement error and a 0.6 dB modeling error.
Statistical methods for astronomical data with upper limits. I - Univariate distributions
NASA Technical Reports Server (NTRS)
Feigelson, E. D.; Nelson, P. I.
1985-01-01
The statistical treatment of univariate censored data is discussed. A heuristic derivation of the Kaplan-Meier maximum-likelihood estimator from first principles is presented which results in an expression amenable to analytic error analysis. Methods for comparing two or more censored samples are given along with simple computational examples, stressing the fact that most astronomical problems involve upper limits while the standard mathematical methods require lower limits. The application of univariate survival analysis to six data sets in the recent astrophysical literature is described, and various aspects of the use of survival analysis in astronomy, such as the limitations of various two-sample tests and the role of parametric modelling, are discussed.
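A minimal sketch of the point stressed above, that upper limits can be handled by "flipping" the data so that standard right-censoring machinery applies; the data, the flipping constant M, and the tie handling (ignored for brevity) are illustrative assumptions:

```python
# Kaplan-Meier product-limit estimate applied to upper limits by flipping
# x -> M - x, converting left censoring into the right-censored form KM expects.
import numpy as np

values = np.array([3.2, 1.8, 4.1, 2.7, 5.0, 2.2])     # detections and limits
is_limit = np.array([False, True, False, True, False, False])

M = values.max() + 1.0
t = M - values                 # flipped data: upper limits are now right-censored
event = ~is_limit

order = np.argsort(t)
t, event = t[order], event[order]
n_at_risk = np.arange(len(t), 0, -1)
# product-limit estimator (ties ignored for brevity)
S = np.cumprod(np.where(event, (n_at_risk - 1) / n_at_risk, 1.0))
for ti, Si in zip(t, S):
    print(f"flipped t = {ti:.2f}   S(t) = {Si:.3f}")
```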
THz-wave parametric source and its imaging applications
NASA Astrophysics Data System (ADS)
Kawase, Kodo
2004-08-01
Widely tunable coherent terahertz (THz) wave generation has been demonstrated based on parametric oscillation in a MgO-doped LiNbO3 crystal pumped by a Q-switched Nd:YAG laser. This method exhibits multiple advantages, such as wide tunability, coherence, and compactness of the system. We have developed a novel basic technology for THz imaging, which allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Furthermore, we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casarini, L.; Bonometto, S.A.; Tessarotto, E.
2016-08-01
We discuss an extension of the Coyote emulator to predict non-linear matter power spectra of dark energy (DE) models with a scale-factor-dependent equation of state of the form w = w_0 + (1 - a)w_a. The extension is based on the mapping rule between non-linear spectra of DE models with constant equation of state and those with a time-varying one, originally introduced in ref. [40]. Using a series of N-body simulations, we show that the spectral equivalence is accurate to sub-percent level across the same range of modes and redshift covered by the Coyote suite. Thus, the extended emulator provides a very efficient and accurate tool to predict non-linear power spectra for DE models with the w_0-w_a parametrization. According to the same criteria, we have developed a numerical code that we have implemented in a dedicated module for the CAMB code, which can be used in combination with the Coyote emulator in likelihood analyses of non-linear matter power spectrum measurements. All codes can be found at https://github.com/luciano-casarini/pkequal.
Development of probabilistic emission inventories of air toxics for Jacksonville, Florida, USA.
Zhao, Yuchao; Frey, H Christopher
2004-11-01
Probabilistic emission inventories were developed for 1,3-butadiene, mercury (Hg), arsenic (As), benzene, formaldehyde, and lead for Jacksonville, FL. To quantify inter-unit variability in empirical emission factor data, the Maximum Likelihood Estimation (MLE) method or the Method of Matching Moments was used to fit parametric distributions. For data sets that contain nondetected measurements, a method based upon MLE was used for parameter estimation. To quantify the uncertainty in urban air toxic emission factors, parametric bootstrap simulation and empirical bootstrap simulation were applied to uncensored and censored data, respectively. The probabilistic emission inventories were developed based on the product of the uncertainties in the emission factors and in the activity factors. The uncertainties in the urban air toxics emission inventories range from as small as -25 to +30% for Hg to as large as -83 to +243% for As. The key sources of uncertainty in the emission inventory for each toxic are identified based upon sensitivity analysis. Typically, uncertainty in the inventory of a given pollutant can be attributed primarily to a small number of source categories. Priorities for improving the inventories and for refining the probabilistic analysis are discussed.
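A hedged sketch of the censored-data MLE step described above: fitting a lognormal emission-factor distribution when some measurements are nondetects below a single detection limit. The distributional choice and the data are assumptions:

```python
# MLE for a lognormal with left-censored (nondetect) observations.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.5, sigma=0.7, size=200)   # emission-factor data
dl = 1.0                                           # detection limit
detected = x >= dl
n_nd = np.count_nonzero(~detected)                 # number of nondetects

def negloglik(params):
    mu, log_s = params
    s = np.exp(log_s)
    # detected values contribute the lognormal density ...
    ll_det = (norm.logpdf(np.log(x[detected]), mu, s) - np.log(x[detected])).sum()
    # ... nondetects contribute P(X < detection limit)
    ll_nd = n_nd * norm.logcdf((np.log(dl) - mu) / s)
    return -(ll_det + ll_nd)

res = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("mu, sigma:", res.x[0], np.exp(res.x[1]))
```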
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM.
López, J D; Litvak, V; Espinosa, J J; Friston, K; Barnes, G R
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost function in terms of the variational Free energy, an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. © 2013. Published by Elsevier Inc. All rights reserved.
Varma, N K; Kushwaha, R; Beydoun, A; Williams, W J; Drury, I
1997-10-01
The purpose of this paper is to compare the morphological features of interictal epileptiform discharges (IED) in patients with benign epilepsy of childhood with centrotemporal spikes to IED of those with symptomatic localization related epilepsies using information theory. Three patients from each clinical group were selected. Two-second epochs centered at the peak negativity of the sharp waves were analyzed from a referential montage during stage I sleep. The epochs from the two groups were compared using parametric and information theory analysis. Information analysis determined the likelihood of correctly identifying the clinical group based on the IED. Standard parametric, morphological and spectral analyses were also performed. We found no significant difference in the morphology of the sharp wave between the two groups. The after-going slow wave contained the greatest information that separated the two groups. This result was supported by morphological and spectral differences in the after-going slow wave. Greater distinguishing information is held in the after-going slow wave than the sharp wave for the identification of clinical groups. Information analysis may assist in differentiating clinical syndromes from EEG signals.
Zhu, Qiaohao; Carriere, K C
2016-01-01
Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on the detection and correction of publication bias in meta-analysis focuses mainly on funnel-plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish two major situations, in which publication bias may be induced by (1) small effect size or (2) large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology and to compare it with the non-parametric trim-and-fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and correcting publication bias under various situations.
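To illustrate the truncated-distribution formulation in the fixed-effects, small-effect-size scenario, here is a sketch (synthetic data; the cutoff c and the normality assumption are illustrative) that estimates the underlying mean, SD, and implied truncation proportion by ML:

```python
# Truncated-normal MLE: published effects are draws truncated below at c.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
theta_true, sigma_true, c = 0.2, 0.4, 0.1
y = rng.normal(theta_true, sigma_true, 2000)
y = y[y > c]                        # publication filter: small effects unseen

def negloglik(params):
    theta, log_s = params
    s = np.exp(log_s)
    # truncated-normal log-density: log phi - log(1 - Phi at the cutoff)
    return -(norm.logpdf(y, theta, s) - norm.logsf(c, theta, s)).sum()

res = minimize(negloglik, x0=[np.mean(y), np.log(np.std(y))],
               method="Nelder-Mead")
theta_hat, s_hat = res.x[0], np.exp(res.x[1])
trunc_prop = norm.cdf(c, theta_hat, s_hat)   # estimated unpublished share
print(theta_hat, s_hat, trunc_prop)
```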
Comparing Pixel- and Object-Based Approaches in Effectively Classifying Wetland-Dominated Landscapes
Berhane, Tedros M.; Lane, Charles R.; Wu, Qiusheng; Anenkhonov, Oleg A.; Chepinoga, Victor V.; Autrey, Bradley C.; Liu, Hongxing
2018-01-01
Wetland ecosystems straddle both terrestrial and aquatic habitats, performing many ecological functions directly and indirectly benefitting humans. However, global wetland losses are substantial. Satellite remote sensing and classification informs wise wetland management and monitoring. Both pixel- and object-based classification approaches using parametric and non-parametric algorithms may be effectively used in describing wetland structure and habitat, but which approach should one select? We conducted both pixel- and object-based image analyses (OBIA) using parametric (Iterative Self-Organizing Data Analysis Technique, ISODATA, and maximum likelihood, ML) and non-parametric (random forest, RF) approaches in the Barguzin Valley, a large wetland (~500 km2) in the Lake Baikal, Russia, drainage basin. Four Quickbird multispectral bands plus various spatial and spectral metrics (e.g., texture, Non-Differentiated Vegetation Index, slope, aspect, etc.) were analyzed using field-based regions of interest sampled to characterize an initial 18 ISODATA-based classes. Parsimoniously using a three-layer stack (Quickbird band 3, water ratio index (WRI), and mean texture) in the analyses resulted in the highest accuracy, 87.9% with pixel-based RF, followed by OBIA RF (segmentation scale 5, 84.6% overall accuracy), followed by pixel-based ML (83.9% overall accuracy). Increasing the predictors from three to five by adding Quickbird bands 2 and 4 decreased the pixel-based overall accuracy while increasing the OBIA RF accuracy to 90.4%. However, McNemar’s chi-square test confirmed no statistically significant difference in overall accuracy among the classifiers (pixel-based ML, RF, or object-based RF) for either the three- or five-layer analyses. Although potentially useful in some circumstances, the OBIA approach requires substantial resources and user input (such as segmentation scale selection—which was found to substantially affect overall accuracy). Hence, we conclude that pixel-based RF approaches are likely satisfactory for classifying wetland-dominated landscapes. PMID:29707381
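A minimal pixel-based random forest sketch in the spirit of the study's best-performing configuration (a three-layer stack and labeled training pixels); the data are synthetic stand-ins, and scikit-learn is assumed:

```python
# Pixel-based random forest classification on a three-feature stack.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
n_pixels = 5000
X = rng.normal(size=(n_pixels, 3))      # columns stand in for: band 3, WRI, texture
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy wetland / upland label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, rf.predict(X_te)))
```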
Sengupta Chattopadhyay, Amrita; Hsiao, Ching-Lin; Chang, Chien Ching; Lian, Ie-Bin; Fann, Cathy S J
2014-01-01
Identifying susceptibility genes that influence complex diseases is extremely difficult because loci often influence the disease state through genetic interactions. Numerous approaches to detect disease-associated SNP-SNP interactions have been developed, but none consistently generates high-quality results under different disease scenarios. Using summarizing techniques to combine a number of existing methods may provide a solution to this problem. Here we used three popular non-parametric methods (Gini, absolute probability difference (APD), and entropy) to develop two novel summary scores, namely the principal component score (PCS) and the Z-sum score (ZSS), with which to predict disease-associated genetic interactions. We used a simulation study to compare the performance of the non-parametric scores, the summary scores, the scaled-sum score (SSS; used in polymorphism interaction analysis (PIA)), and multifactor dimensionality reduction (MDR). The non-parametric methods achieved high power, but no single non-parametric method outperformed all others under a variety of epistatic scenarios. PCS and ZSS, however, outperformed MDR. PCS, ZSS and SSS displayed controlled type-I errors (<0.05), compared to the individual Gini, APD, and entropy scores (GS, APDS, ES; >0.05). A real-data study using the Genetic Analysis Workshop 16 (GAW16) rheumatoid arthritis dataset identified a number of interesting SNP-SNP interactions. © 2013 Elsevier B.V. All rights reserved.
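A hedged sketch of the two summary scores: the Z-sum score as the sum of z-standardized method scores, and the principal component score as the projection onto the first principal component. The three input columns stand in for the per-pair Gini, APD, and entropy statistics; the construction details are assumptions:

```python
# Combining three method scores into ZSS and PCS summary scores.
import numpy as np

rng = np.random.default_rng(5)
scores = rng.gamma(2.0, 1.0, size=(1000, 3))   # rows: SNP pairs; cols: methods

z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
zss = z.sum(axis=1)                            # Z-sum score per SNP pair

# principal component score: projection onto the first right singular vector
_, _, Vt = np.linalg.svd(z, full_matrices=False)
pcs = z @ Vt[0]
print(zss[:3], pcs[:3])
```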
Nearly noiseless amplification of microwave signals with a Josephson parametric amplifier
NASA Astrophysics Data System (ADS)
Castellanos-Beltran, Manuel
2009-03-01
A degenerate parametric amplifier transforms an incident coherent state by amplifying one of its quadrature components while deamplifying the other. This transformation, when performed by an ideal parametric amplifier, is completely deterministic and reversible; therefore the amplifier in principle can be noiseless. We attempt to realize a noiseless amplifier of this type at microwave frequencies with a Josephson parametric amplifier (JPA). To this end, we have built a superconducting microwave cavity containing many dc-SQUIDs. This arrangement creates a non-linear medium in a cavity and is closely analogous to an optical parametric amplifier. In my talk, I will describe the current performance of this circuit, where I show that I can amplify signals with less added noise than a quantum-limited amplifier that amplifies both quadratures. In addition, the JPA also squeezes the electromagnetic vacuum fluctuations by 10 dB. Finally, I will discuss our effort to put two such amplifiers in series in order to undo the first stage of squeezing with a second stage of amplification, demonstrating that the amplification process is truly reversible. M. A. Castellanos-Beltran, K. D. Irwin, G. C. Hilton, L. R. Vale and K. W. Lehnert, Nature Physics, published online, http://dx.doi.org/10.1038/nphys1090 (2008).
Maximum likelihood clustering with dependent feature trees
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first-order dependent feature trees using criteria of mutual information and distance measures. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed-point iterations. The field structure of the data is also taken into account in developing maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.
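A simplified sketch of the underlying fixed-point (EM) iteration for decomposing a normal mixture; the paper's dependent feature trees and field structure are omitted here, so this shows only the generic one-dimensional case:

```python
# Generic EM fixed-point iteration for a two-component 1-D Gaussian mixture.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1.5, 200)])

w, mu, sd = np.array([0.5, 0.5]), np.array([1.0, 3.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: posterior probability of each component per observation
    dens = w * norm.pdf(x[:, None], mu, sd)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted ML updates of weights, means, standard deviations
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
print(w, mu, sd)
```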
Free response approach in a parametric system
NASA Astrophysics Data System (ADS)
Huang, Dishan; Zhang, Yueyue; Shao, Hexi
2017-07-01
In this study, a new approach to predict the free response of a parametric system is investigated. The solution is proposed in the special form of a trigonometric series modulated by an exponentially decaying function of time, based on the concept of frequency splitting. By applying harmonic balance, the parametric vibration equation is transformed into an infinite set of homogeneous linear equations, from which the principal oscillation frequency can be computed and all coefficients of the harmonic components can be obtained. With initial conditions, the arbitrary constants in the general solution can be determined. To analyze computational accuracy and consistency, an approach error function is defined, which is used to assess the computational error of the proposed approach and of a standard numerical approach based on the Runge-Kutta algorithm. Furthermore, an example of a dynamic model of airplane wing flutter on a turbine engine is given to illustrate the applicability of the proposed approach. Numerical solutions show that the proposed approach exhibits high accuracy in its mathematical expression, and it is valuable for theoretical research and engineering applications of parametric systems.
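An illustrative ansatz of the kind described (the symbols are assumptions, not the paper's notation): a principal oscillation at frequency ω0, split into sidebands at multiples of the parametric frequency ωp, under an exponentially decaying envelope:

```latex
% Illustrative form only; \sigma, \omega_0, \omega_p, c_k, \varphi_k are assumed symbols.
x(t) = e^{-\sigma t}\sum_{k=-\infty}^{\infty}
       c_k \cos\!\bigl[(\omega_0 + k\,\omega_p)\,t + \varphi_k\bigr]
```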
Kral, L
2007-05-01
We present a complex stabilization and control system for a commercially available optical parametric oscillator. The system is able to stabilize the oscillator's output wavelength at a narrow spectral line of atomic iodine with subpicometer precision, allowing utilization of this solid-state parametric oscillator as a front end of a high-power photodissociation laser chain formed by iodine gas amplifiers. In such a setup, precise wavelength matching between the front end and the amplifier chain is necessary due to the extremely narrow spectral lines of the gaseous iodine (approximately 20 pm). The system is based on a personal computer, a heated iodine cell, and a few other low-cost components. It automatically identifies the proper peak within the iodine absorption spectrum, and then keeps the oscillator tuned to this peak with high precision and reliability. The use of the solid-state oscillator as the front end allows us to use the whole iodine laser system as a pump laser for optical parametric chirped pulse amplification, as it enables precise time synchronization with a signal Ti:sapphire laser.
Brain segmentation and the generation of cortical surfaces
NASA Technical Reports Server (NTRS)
Joshi, M.; Cui, J.; Doolittle, K.; Joshi, S.; Van Essen, D.; Wang, L.; Miller, M. I.
1999-01-01
This paper describes methods for white matter segmentation in brain images and the generation of cortical surfaces from the segmentations. We have developed a system that allows a user to start with a brain volume, obtained by modalities such as MRI or cryosection, and constructs a complete digital representation of the cortical surface. The methodology consists of three basic components: local parametric modeling and Bayesian segmentation; surface generation and local quadratic coordinate fitting; and surface editing. Segmentations are computed by parametrically fitting known density functions to the histogram of the image using the expectation maximization algorithm [DLR77]. The parametric fits are obtained locally rather than globally over the whole volume to overcome local variations in gray levels. To represent the boundary of the gray and white matter we use triangulated meshes generated using isosurface generation algorithms [GH95]. A complete system of local parametric quadratic charts [JWM+95] is superimposed on the triangulated graph to facilitate smoothing and geodesic curve tracking. Algorithms for surface editing include extraction of the largest closed surface. Results for several macaque brains are presented comparing automated and hand surface generation. Copyright 1999 Academic Press.
A Parametric Oscillator Experiment for Undergraduates
NASA Astrophysics Data System (ADS)
Huff, Alison; Thompson, Johnathon; Pate, Jacob; Kim, Hannah; Chiao, Raymond; Sharping, Jay
We describe an upper-division undergraduate-level analytic mechanics experiment or classroom demonstration of a weakly-damped pendulum driven into parametric resonance. Students can derive the equations of motion from first principles and extract key oscillator features, such as quality factor and parametric gain, from experimental data. The apparatus is compact, portable and easily constructed from inexpensive components. Motion control and data acquisition are accomplished using an Arduino micro-controller incorporating a servo motor, laser sensor, and data logger. We record the passage time of the pendulum through its equilibrium position and obtain the maximum speed per oscillation as a function of time. As examples of the interesting physics which the experiment reveals, we present contour plots depicting the energy of the system as functions of driven frequency and modulation depth. We observe the transition to steady state oscillation and compare the experimental oscillation threshold with theoretical expectations. A thorough understanding of this hands-on laboratory exercise provides a foundation for current research in quantum information and opto-mechanics, where damped harmonic motion, quality factor, and parametric amplification are central.
Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.
Han, Lei; Zhang, Yu; Zhang, Tong
2016-08-01
The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications, including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets, including thousands of millions of variables, show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem when achieving comparable log-likelihood on test data.
Delineating parameter unidentifiabilities in complex models
NASA Astrophysics Data System (ADS)
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.
Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L
2008-04-01
The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdf). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 x 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (σ²/μ) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved sharpness of the edges. In conclusion, a trade-off between preservation of sharpness of edges and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.
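A short sketch of the parametric-image step: the moment-based Gamma scale parameter (σ²/μ) estimated in a sliding window. The image, window size, and uniform window shape are assumptions:

```python
# Local moment-based Gamma scale parameter (variance/mean) as a parametric image.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(7)
img = rng.gamma(shape=2.0, scale=1.5, size=(128, 128))   # envelope-like data

win = 9                                   # window roughly matched to speckle size
local_mean = uniform_filter(img, size=win)
local_mean_sq = uniform_filter(img ** 2, size=win)
local_var = local_mean_sq - local_mean ** 2
scale_map = local_var / local_mean        # sigma^2 / mu, the Gamma scale estimate
print(scale_map.mean())
```

For a Gamma distribution with shape k and scale θ, the mean is kθ and the variance is kθ², so the ratio variance/mean recovers θ directly, which is what makes this local moment map a scale-parameter image.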
The 12 GHz mixer/local oscillator and parametric amplifier. [considering all solid state circuitry
NASA Technical Reports Server (NTRS)
Dickens, L. E.
1976-01-01
The results of the initial implementation of the proposed design, the design modifications, and limitations are presented. Also included are data on component parts of the breadboard amplifier and the converter.
Scalar limitations of diffractive optical elements
NASA Technical Reports Server (NTRS)
Johnson, Eric G.; Hochmuth, Diane; Moharam, M. G.; Pommet, Drew
1993-01-01
In this paper, scalar limitations of diffractive optic components are investigated using coupled wave analyses. Results are presented for linear phase gratings and fanout devices. In addition, a parametric curve is given which correlates feature size with scalar performance.
Britz, Juliane; Pitts, Michael A
2011-11-01
We used an intermittent stimulus presentation to investigate event-related potential (ERP) components associated with perceptual reversals during binocular rivalry. The combination of spatiotemporal ERP analysis with source imaging and statistical parametric mapping of the concomitant source differences yielded differences in three time windows: reversals showed increased activity in early visual (∼120 ms) and in inferior frontal and anterior temporal areas (∼400-600 ms) and decreased activity in the ventral stream (∼250-350 ms). The combination of source imaging and statistical parametric mapping suggests that these differences were due to differences in generator strength and not generator configuration, unlike the initiation of reversals in right inferior parietal areas. These results are discussed within the context of the extensive network of brain areas that has been implicated in the initiation, implementation, and appraisal of bistable perceptual reversals. Copyright © 2011 Society for Psychophysiological Research.
Gas engine heat pump cycle analysis. Volume 1: Model description and generic analysis
NASA Astrophysics Data System (ADS)
Fischer, R. D.
1986-10-01
This task prepared performance and cost information to assist in evaluating the selection of HVAC components, values for component design variables, and system configurations and operating strategy. A steady-state computer model for performance simulation of engine-driven and electrically driven heat pumps was prepared and used effectively for parametric and seasonal performance analyses. Parametric analysis showed the effect of variables associated with the design of recuperators, brine coils, the domestic hot water heat exchanger, compressor size, engine efficiency, and insulation on exhaust and brine piping. Seasonal performance data were prepared for residential and commercial units in six cities, with system configurations closely related to existing or contemplated hardware of the five GRI engine contractors. Similar data were prepared for an advanced variable-speed electric unit for comparison purposes. The effect of domestic hot water production on operating costs was determined. Four fan-operating strategies and two brine-loop configurations were explored.
Development of Parametric Mass and Volume Models for an Aerospace SOFC/Gas Turbine Hybrid System
NASA Technical Reports Server (NTRS)
Tornabene, Robert; Wang, Xiao-yen; Steffen, Christopher J., Jr.; Freeh, Joshua E.
2005-01-01
In aerospace power systems, mass and volume are key considerations to produce a viable design. The utilization of fuel cells is being studied for a commercial aircraft electrical power unit. Based on preliminary analyses, a SOFC/gas turbine system may be a potential solution. This paper describes the parametric mass and volume models that are used to assess an aerospace hybrid system design. The design tool utilizes input from the thermodynamic system model and produces component sizing, performance, and mass estimates. The software is designed such that the thermodynamic model is linked to the mass and volume model to provide immediate feedback during the design process. It allows for automating an optimization process that accounts for mass and volume in its figure of merit. Each component in the system is modeled with a combination of theoretical and empirical approaches. A description of the assumptions and design analyses is presented.
Melanie Stidham; Sarah McCaffrey; Eric Toman; Bruce Shindler
2014-01-01
Within the wildland-urban interface (WUI), wildfire risk contains both individual and collective components. The likelihood that a particular home will be threatened by wildfire in any given year is low, but at a broader scale the likelihood that a home somewhere in the WUI will be threatened is substantially higher. From a risk mitigation perspective, individuals may...
NASA Astrophysics Data System (ADS)
Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin
2017-11-01
Current research on broken rotor bar (BRB) fault detection in induction motors is primarily focused on high-frequency-resolution analysis of the stator current. Compared with the discrete Fourier transform, parametric spectrum estimation techniques have higher frequency accuracy and resolution. However, the existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to their large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least squares estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least squares problem solved through singular value decomposition is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and an actual motor; the results indicate that the proposed method retains the accuracy of parametric spectrum estimation while substantially reducing the computational cost.
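A minimal sketch of the least-squares stage: once the component frequencies are known, amplitudes and phases follow from a linear fit solved with an SVD-based solver. The frequencies and the signal below are synthetic stand-ins for the stator current:

```python
# Amplitude/phase estimation at known frequencies via linear least squares (SVD).
import numpy as np

fs, n = 1000.0, 2000
t = np.arange(n) / fs
freqs = [50.0, 46.0, 54.0]        # fundamental and two sideband frequencies
signal = (10 * np.cos(2 * np.pi * 50 * t + 0.3)
          + 0.4 * np.cos(2 * np.pi * 46 * t - 1.0)
          + 0.3 * np.cos(2 * np.pi * 54 * t + 0.5)
          + np.random.default_rng(8).normal(0, 0.1, n))

# design matrix with a cosine/sine pair per known frequency
A = np.column_stack([fn(2 * np.pi * f * t) for f in freqs
                     for fn in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)   # SVD-based solver
# a*cos(wt + phi) = (a cos phi) cos wt - (a sin phi) sin wt
amp = np.hypot(coef[0::2], coef[1::2])
phase = np.arctan2(-coef[1::2], coef[0::2])
print(amp, phase)
```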
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Lee, Jina; Lefantzi, Sophia
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for fossil-fuel CO2 emissions (ffCO2), to be used in atmospheric inversions. Such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas, to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic-data inversion to test its suitability for use in atmospheric inverse problems. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions, i.e., to limit ffCO2 emissions within an irregularly shaped region (the United States, in our case). We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights could be estimated from the observations. Further, our method for the imposition of boundary conditions leads to a roughly 10-fold computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization.
An Evaluation of the Effects of Variable Sampling on Component, Image, and Factor Analysis.
ERIC Educational Resources Information Center
Velicer, Wayne F.; Fava, Joseph L.
1987-01-01
Principal component analysis, image component analysis, and maximum likelihood factor analysis were compared to assess the effects of variable sampling. Results with respect to degree of saturation and average number of variables per factor were clear and dramatic. Differential effects on boundary cases and nonconvergence problems were also found.…
Known-component 3D-2D registration for quality assurance of spine surgery pedicle screw placement
NASA Astrophysics Data System (ADS)
Uneri, A.; De Silva, T.; Stayman, J. W.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gokaslan, Z. L.; Wolinsky, J.-P.; Siewerdsen, J. H.
2015-10-01
A 3D-2D image registration method is presented that exploits knowledge of interventional devices (e.g. K-wires or spine screws—referred to as ‘known components’) to extend the functionality of intraoperative radiography/fluoroscopy by providing quantitative measurement and quality assurance (QA) of the surgical product. The known-component registration (KC-Reg) algorithm uses robust 3D-2D registration combined with 3D component models of surgical devices known to be present in intraoperative 2D radiographs. Component models were investigated that vary in fidelity from simple parametric models (e.g. approximation of a screw as a simple cylinder, referred to as ‘parametrically-known’ component [pKC] registration) to precise models based on device-specific CAD drawings (referred to as ‘exactly-known’ component [eKC] registration). 3D-2D registration from three intraoperative radiographs was solved using the covariance matrix adaptation evolution strategy (CMA-ES) to maximize image-gradient similarity, relating device placement relative to 3D preoperative CT of the patient. Spine phantom and cadaver studies were conducted to evaluate registration accuracy and demonstrate QA of the surgical product by verification of the type of devices delivered and conformance within the ‘acceptance window’ of the spinal pedicle. Pedicle screws were successfully registered to radiographs acquired from a mobile C-arm, providing TRE 1-4 mm and <5° using simple parametric (pKC) models, further improved to <1 mm and <1° using eKC registration. Using advanced pKC models, screws that did not match the device models specified in the surgical plan were detected with an accuracy of >99%. Visualization of registered devices relative to surgical planning and the pedicle acceptance window provided potentially valuable QA of the surgical product and reliable detection of pedicle screw breach. 3D-2D registration combined with 3D models of known surgical devices offers a novel method for intraoperative QA. The method provides a near-real-time independent check against pedicle breach, facilitating revision within the same procedure if necessary and providing more rigorous verification of the surgical product.
NASA Astrophysics Data System (ADS)
Mazzetti, S.; Giannini, V.; Russo, F.; Regge, D.
2018-05-01
Computer-aided diagnosis (CAD) systems are increasingly being used in clinical settings to report multi-parametric magnetic resonance imaging (mp-MRI) of the prostate. Usually, CAD systems automatically highlight cancer-suspicious regions to the radiologist, reducing reader variability and interpretation errors. Nevertheless, implementing this software requires the selection of which mp-MRI parameters can best discriminate between malignant and non-malignant regions. To exploit functional information, some parameters are derived from dynamic contrast-enhanced (DCE) acquisitions. In particular, much CAD software employs pharmacokinetic features, such as Ktrans and kep, derived from the Tofts model, to estimate a likelihood map of malignancy. However, non-pharmacokinetic models can also be used to describe DCE-MRI curves, without any requirement for prior knowledge or measurement of the arterial input function, which could potentially lead to large errors in parameter estimation. In this work, we implemented an empirical function derived from the phenomenological universalities (PUN) class to fit DCE-MRI. The parameters of the PUN model are used in combination with T2-weighted and diffusion-weighted acquisitions to feed a support vector machine classifier to produce a voxel-wise malignancy likelihood map of the prostate. The results were all compared to those for a CAD system based on Tofts pharmacokinetic features to describe DCE-MRI curves, using different quality aspects of image segmentation, while also evaluating the number and size of false positive (FP) candidate regions. This study included 61 patients with 70 biopsy-proven prostate cancers (PCa). The metrics used to evaluate segmentation quality between the two CAD systems were not statistically different, although the PUN-based CAD reported fewer FPs, of smaller size, compared to the Tofts-based CAD. In conclusion, the CAD software based on PUN parameters is a feasible means with which to detect PCa without affecting segmentation quality, and hence it could be successfully applied in clinical settings, improving the automated diagnosis process and reducing computational complexity.
Janssen, Eva; van Osch, Liesbeth; Lechner, Lilian; Candel, Math; de Vries, Hein
2012-01-01
Despite the increased recognition of affect in guiding probability estimates, perceived risk has been mainly operationalised in a cognitive way and the differentiation between rational and intuitive judgements is largely unexplored. This study investigated the validity of a measurement instrument differentiating cognitive and affective probability beliefs and examined whether behavioural decision making is mainly guided by cognition or affect. Data were obtained from four surveys focusing on smoking (N=268), fruit consumption (N=989), sunbed use (N=251) and sun protection (N=858). Correlational analyses showed that affective likelihood was more strongly correlated with worry compared to cognitive likelihood and confirmatory factor analysis provided support for a two-factor model of perceived likelihood instead of a one-factor model (i.e. cognition and affect combined). Furthermore, affective likelihood was significantly associated with the various outcome variables, whereas the association for cognitive likelihood was absent in three studies. The findings provide support for the construct validity of the measures used to assess cognitive and affective likelihood. Since affective likelihood might be a better predictor of health behaviour than the commonly used cognitive operationalisation, both dimensions should be considered in future research.
PET image reconstruction using multi-parametric anato-functional priors
NASA Astrophysics Data System (ADS)
Mehranian, Abolfazl; Belzunce, Martin A.; Niccolini, Flavia; Politis, Marios; Prieto, Claudia; Turkheimer, Federico; Hammers, Alexander; Reader, Andrew J.
2017-08-01
In this study, we investigate the application of multi-parametric anato-functional (MR-PET) priors for the maximum a posteriori (MAP) reconstruction of brain PET data in order to address the limitations of the conventional anatomical priors in the presence of PET-MR mismatches. In addition to partial volume correction benefits, the suitability of these priors for the reconstruction of low-count PET data is also introduced and demonstrated, compared with standard maximum-likelihood (ML) reconstruction of high-count data. The conventional local Tikhonov and total variation (TV) priors and current state-of-the-art anatomical priors, including the Kaipio prior and the non-local Tikhonov prior with Bowsher and Gaussian similarity kernels, are investigated and presented in a unified framework. The Gaussian kernels are calculated using both voxel- and patch-based feature vectors. To cope with PET and MR mismatches, the Bowsher and Gaussian priors are extended to multi-parametric priors. In addition, we propose a modified joint Burg entropy prior that by definition exploits all parametric information in the MAP reconstruction of PET data. The performance of the priors was extensively evaluated using 3D simulations and two clinical brain datasets of [18F]florbetaben and [18F]FDG radiotracers. For simulations, several anato-functional mismatches were intentionally introduced between the PET and MR images, and furthermore, for the FDG clinical dataset, two PET-unique active tumours were embedded in the PET data. Our simulation results showed that the joint Burg entropy prior far outperformed the conventional anatomical priors in terms of preserving PET-unique lesions, while still reconstructing functional boundaries with corresponding MR boundaries. In addition, the multi-parametric extension of the Gaussian and Bowsher priors led to enhanced preservation of edge and PET-unique features and also an improved bias-variance performance. In agreement with the simulation results, the clinical results also showed that the Gaussian prior with voxel-based feature vectors, the Bowsher and the joint Burg entropy priors were the best performing priors. However, for the FDG dataset with simulated tumours, the TV and proposed priors were capable of preserving the PET-unique tumours. Finally, an important outcome was the demonstration that the MAP reconstruction of a low-count FDG PET dataset using the proposed joint entropy prior can lead to comparable image quality to a conventional ML reconstruction with up to 5 times more counts. In conclusion, multi-parametric anato-functional priors provide a solution to address the pitfalls of the conventional priors and are therefore likely to increase the diagnostic confidence in MR-guided PET image reconstructions.
Entangled Parametric Hierarchies: Problems for an Overspecified Universal Grammar
Boeckx, Cedric; Leivada, Evelina
2013-01-01
This study addresses the feasibility of the classical notion of parameter in linguistic theory from the perspective of parametric hierarchies. A novel program-based analysis is implemented in order to show certain empirical problems related to these hierarchies. The program was developed on the basis of an enriched data base spanning 23 contemporary and 5 ancient languages. The empirical issues uncovered cast doubt on classical parametric models of language acquisition as well as on the conceptualization of an overspecified Universal Grammar that has parameters among its primitives. Pinpointing these issues leads to the proposal that (i) the (bio)logical problem of language acquisition does not amount to a process of triggering innately pre-wired values of parameters and (ii) it paves the way for viewing language, epigenetic (‘parametric’) variation as an externalization-related epiphenomenon, whose learning component may be more important than what sometimes is assumed. PMID:24019867
NASA Astrophysics Data System (ADS)
Dai, Jun; Zhou, Haigang; Zhao, Shaoquan
2017-01-01
This paper considers a multi-scale futures hedging strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time-series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. These parametric methods are then compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedging efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by parametric hedging models based on the features of the sequence distributions. In addition, if minimum LPM is selected as the hedging target, the hedging period, degree of risk aversion, and target return can each affect the multi-scale hedge ratios and hedging efficiency.
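A hedged sketch of the minimum-LPM hedging idea (without the wavelet decomposition): compute the lower partial moment of the hedged return on a grid of hedge ratios and keep the minimizer. The returns, target level tau, and moment order n are assumptions:

```python
# Minimum-LPM hedge ratio by grid search over hedged-portfolio returns.
import numpy as np

rng = np.random.default_rng(9)
futures = rng.normal(0, 0.012, 1000)
spot = 0.9 * futures + rng.normal(0, 0.006, 1000)   # correlated spot returns

def lpm(x, tau=0.0, n=2):
    """n-th order lower partial moment about target tau."""
    return np.mean(np.maximum(tau - x, 0.0) ** n)

ratios = np.linspace(0.0, 1.5, 151)
risks = [lpm(spot - h * futures) for h in ratios]
h_star = ratios[int(np.argmin(risks))]
print("minimum-LPM hedge ratio:", h_star)
```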
A comparison of minimum distance and maximum likelihood techniques for proportion estimation
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.
1982-01-01
The estimation of mixing proportions p_1, p_2, ..., p_m in the mixture density f(x) = Σ_{i=1}^{m} p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i's usually represent crop proportions. In these remote sensing applications, the component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when the component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When the component distributions are not symmetric, however, neither of these normal-based techniques provides satisfactory results.
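A sketch of the ML side of the comparison: with the normal component densities f_i(x) treated as known, the mixing proportions can be estimated by a simple EM fixed-point update. The component parameters and data are illustrative:

```python
# ML estimation of mixing proportions with known normal components (EM update).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
x = np.concatenate([rng.normal(0, 1, 700), rng.normal(3, 1, 300)])
comp = np.column_stack([norm.pdf(x, 0, 1), norm.pdf(x, 3, 1)])  # f_i(x)

p = np.array([0.5, 0.5])
for _ in range(200):
    r = p * comp
    r /= r.sum(axis=1, keepdims=True)   # posterior component memberships
    p = r.mean(axis=0)                  # ML update of the proportions
print("estimated proportions:", p)
```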
Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1981-01-01
A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.
Schmid, Matthias; Küchenhoff, Helmut; Hoerauf, Achim; Tutz, Gerhard
2016-02-28
Survival trees are a popular alternative to parametric survival modeling when there are interactions between the predictor variables or when the aim is to stratify patients into prognostic subgroups. A limitation of classical survival tree methodology is that most algorithms for tree construction are designed for continuous outcome variables. Hence, classical methods might not be appropriate if failure time data are measured on a discrete time scale (as is often the case in longitudinal studies where data are collected, e.g., quarterly or yearly). To address this issue, we develop a method for discrete survival tree construction. The proposed technique is based on the result that the likelihood of a discrete survival model is equivalent to the likelihood of a regression model for binary outcome data. Hence, we modify tree construction methods for binary outcomes such that they result in optimized partitions for the estimation of discrete hazard functions. By applying the proposed method to data from a randomized trial in patients with filarial lymphedema, we demonstrate how discrete survival trees can be used to identify clinically relevant patient groups with similar survival behavior. Copyright © 2015 John Wiley & Sons, Ltd.
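A minimal sketch of the equivalence the method rests on: expanding discrete survival data into person-period binary records and fitting a binary-outcome model (a logistic regression stands in here for the tree partitioner). The data and variable names are assumptions:

```python
# Person-period expansion of discrete survival data + binary-outcome fit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 300
x = rng.normal(size=n)
time = rng.integers(1, 6, size=n)        # discrete failure/censoring time
event = rng.random(n) < 0.7              # True if failure observed

rows, labels = [], []
for xi, ti, ei in zip(x, time, event):
    for period in range(1, ti + 1):      # one row per period at risk
        rows.append([xi, period])
        labels.append(1 if (ei and period == ti) else 0)
X, y = np.array(rows), np.array(labels)

clf = LogisticRegression().fit(X, y)     # discrete hazard model
print("P(failure in period | x=0, t=3):",
      clf.predict_proba([[0.0, 3]])[0, 1])
```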
Semiparametric Time-to-Event Modeling in the Presence of a Latent Progression Event
Rice, John D.; Tsodikov, Alex
2017-01-01
Summary In cancer research, interest frequently centers on factors influencing a latent event that must precede a terminal event. In practice it is often impossible to observe the latent event precisely, making inference about this process difficult. To address this problem, we propose a joint model for the unobserved time to the latent and terminal events, with the two events linked by the baseline hazard. Covariates enter the model parametrically as linear combinations that multiply, respectively, the hazard for the latent event and the hazard for the terminal event conditional on the latent one. We derive the partial likelihood estimators for this problem assuming the latent event is observed, and propose a profile likelihood–based method for estimation when the latent event is unobserved. The baseline hazard in this case is estimated nonparametrically using the EM algorithm, which allows for closed-form Breslow-type estimators at each iteration, bringing improved computational efficiency and stability compared with maximizing the marginal likelihood directly. We present simulation studies to illustrate the finite-sample properties of the method; its use in practice is demonstrated in the analysis of a prostate cancer data set. PMID:27556886
Spontaneous symmetry breaking in coupled parametrically driven waveguides.
Dror, Nir; Malomed, Boris A
2009-01-01
We introduce a system of linearly coupled parametrically driven damped nonlinear Schrödinger equations, which models a laser based on a nonlinear dual-core waveguide with parametric amplification symmetrically applied to both cores. The model may also be realized in terms of parallel ferromagnetic films, in which the parametric gain is provided by an external field. We analyze spontaneous symmetry breaking (SSB) of fundamental and multiple solitons in this system, which was not studied systematically before in linearly coupled dissipative systems with intrinsic nonlinearity. For fundamental solitons, the analysis reveals three distinct SSB scenarios. Unlike the standard dual-core-fiber model, the present system gives rise to a vast bistability region, which may be relevant to applications. Other noteworthy findings are restabilization of the symmetric soliton after it was destabilized by the SSB bifurcation, and the existence of a generic situation with all solitons unstable in the single-component (decoupled) model, while both symmetric and asymmetric solitons may be stable in the coupled system. The stability of the asymmetric solitons is identified via direct simulations, while for symmetric and antisymmetric ones the stability is verified too through the computation of stability eigenvalues, families of antisymmetric solitons being entirely unstable. In this way, full stability maps for the symmetric solitons are produced. We also investigate the SSB bifurcation of two-soliton bound states (it breaks the symmetry between the two components, while the two peaks in the shape of the soliton remain mutually symmetric). The family of the asymmetric double-peak states may decouple from its symmetric counterpart, being no longer connected to it by the bifurcation, with a large portion of the asymmetric family remaining stable.
NASA Astrophysics Data System (ADS)
Watanabe, Yuuki; Kawase, Kodo; Ikari, Tomofumi; Ito, Hiromasa; Ishikawa, Youichi; Minamide, Hiroaki
2003-10-01
We separated the component spatial patterns of frequency-dependent absorption in chemicals and frequency-independent components such as plastic, paper, and measurement noise in terahertz (THz) spectroscopic images, using known spectral curves. Our measurement system, which uses a widely tunable coherent THz-wave parametric oscillator source, can image at a specific frequency in the range 1-2 THz. The component patterns of chemicals can easily be extracted by use of the frequency-independent components. This method could be successfully used for nondestructive inspection for the detection of illegal drugs and devices of bioterrorism concealed, e.g., inside mail and packages.
NASA Astrophysics Data System (ADS)
Noh, S. J.; Rakovec, O.; Kumar, R.; Samaniego, L. E.
2015-12-01
Accurate and reliable streamflow prediction is essential to mitigate the social and economic damage caused by water-related disasters such as floods and droughts. Sequential data assimilation (DA) may facilitate improved streamflow prediction by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored, mainly due to practical limitations of methods for specifying modeling uncertainty with limited ensemble members. However, if the parametric uncertainty associated with routing and runoff components is not incorporated properly, the predictive uncertainty of the model ensemble may be insufficient to capture the dynamics of the observations, which may degrade predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we evaluate the impacts of streamflow data assimilation over European river basins. In particular, a multi-parametric ensemble approach is tested to account for the effects of parametric uncertainty in DA. Because augmentation of parameters is not required within an assimilation window, the approach can be more stable with limited ensemble members and has potential for operational use. To account for the response times and non-Gaussian characteristics of internal hydrologic processes, lagged particle filtering is utilized. The presentation focuses on the gains and limitations of streamflow data assimilation and the multi-parametric ensemble method over large-scale basins.
Genetic mixed linear models for twin survival data.
Ha, Il Do; Lee, Youngjo; Pawitan, Yudi
2007-07-01
Twin studies are useful for assessing the relative importance of the genetic (heritable) component versus the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to the analysis of twin survival data. Due to the limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated with survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.
Multisite EPR oximetry from multiple quadrature harmonics.
Ahmad, R; Som, S; Johnson, D H; Zweier, J L; Kuppusamy, P; Potter, L C
2012-01-01
Multisite continuous wave (CW) electron paramagnetic resonance (EPR) oximetry using multiple quadrature field modulation harmonics is presented. First, a recently developed digital receiver is used to extract multiple harmonics of field modulated projection data. Second, a forward model is presented that relates the projection data to unknown parameters, including linewidth at each site. Third, a maximum likelihood estimator of unknown parameters is reported using an iterative algorithm capable of jointly processing multiple quadrature harmonics. The data modeling and processing are applicable for parametric lineshapes under nonsaturating conditions. Joint processing of multiple harmonics leads to 2-3-fold acceleration of EPR data acquisition. For demonstration in two spatial dimensions, both simulations and phantom studies on an L-band system are reported. Copyright © 2011 Elsevier Inc. All rights reserved.
Estimation of correlation functions by stochastic approximation.
NASA Technical Reports Server (NTRS)
Habibi, A.; Wintz, P. A.
1972-01-01
Consideration of the autocorrelation function of a zero-mean stationary random process. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
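The first of the two recursive techniques can be sketched under an assumed exponential correlation family R(tau; a) = exp(-a*tau): each successive record yields point estimates of the correlation at a set of lags, and a Robbins-Monro step moves the parameter toward the least-mean-square fit. This is a toy rendition (the functional form, gains, and AR(1) data are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, n_lags, rec_len = 0.5, 10, 500
lags = np.arange(1, n_lags + 1)

def record():
    # AR(1) record whose true autocorrelation is exp(-a_true * lag)
    phi = np.exp(-a_true)
    x = np.zeros(rec_len)
    for t in range(1, rec_len):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def point_corr(x):
    # standard point estimator of the normalized autocorrelation
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-k], x[k:]) / len(x) / c0 for k in lags])

a_hat = 1.0                       # initial guess
for j in range(1, 201):           # successive records
    r_hat = point_corr(record())
    resid = np.exp(-a_hat * lags) - r_hat
    grad = np.dot(resid, -lags * np.exp(-a_hat * lags))  # d/da of the squared error
    a_hat -= (1.0 / j) * grad     # Robbins-Monro gain 1/j
print(a_hat)                      # approaches a_true = 0.5
```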
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
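A toy Python rendition of the procedure for a two-component normal mixture: the plain EM fixed point corresponds to step-size 1, and the generalization takes an over- or under-relaxed step omega in (0, 2) toward the EM update. All data, initial values, and the choice omega = 1.5 are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(2.0, 1.0, 700)])

def em_map(theta):
    """One EM fixed-point update for a two-component normal mixture."""
    p, m1, m2, s1, s2 = theta
    f1 = p * np.exp(-0.5 * ((x - m1) / s1) ** 2) / s1
    f2 = (1 - p) * np.exp(-0.5 * ((x - m2) / s2) ** 2) / s2
    w = f1 / (f1 + f2)                       # E-step responsibilities
    m1n, m2n = np.average(x, weights=w), np.average(x, weights=1 - w)
    s1n = np.sqrt(np.average((x - m1n) ** 2, weights=w))
    s2n = np.sqrt(np.average((x - m2n) ** 2, weights=1 - w))
    return np.array([w.mean(), m1n, m2n, s1n, s2n])

theta = np.array([0.5, -1.0, 1.0, 1.0, 1.0])
omega = 1.5                                  # step-size in (0, 2); 1 recovers plain EM
for _ in range(200):
    theta = theta + omega * (em_map(theta) - theta)
print(theta)                                 # ~ [0.3, -2, 2, 1, 1]
```

Per the paper's result, step-sizes above 1 can accelerate convergence when the components are well separated, which is what the omega = 1.5 run illustrates.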
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biros, George
Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is the construction of likelihood functions that capture unmodeled couplings between observations and parameters. We created parallel algorithms for non-parametric density estimation using high-dimensional N-body methods and combined them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We developed the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we created OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10-petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus was on fundamental mathematical/computational methods and algorithms, we assessed our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
Terahertz parametric sources and imaging applications
NASA Astrophysics Data System (ADS)
Kawase, Kodo; Ogawa, Yuichi; Minamide, Hiroaki; Ito, Hiromasa
2005-07-01
We have studied the generation of terahertz (THz) waves by optical parametric processes based on laser light scattering from the polariton mode of nonlinear crystals. Using parametric oscillation of LiNbO3 or MgO-doped LiNbO3 crystal pumped by a nano-second Q-switched Nd:YAG laser, we have realized a widely tunable coherent THz-wave source with a simple configuration. We report the detailed characteristics of the oscillation and the radiation including tunability, spatial and temporal coherency, uni-directivity, and efficiency. A Fourier transform limited THz-wave spectrum narrowing was achieved by introducing the injection seeding method. Further, we have developed a spectroscopic THz imaging system using a THz-wave parametric oscillator, which allows detection and identification of drugs concealed in envelopes, by introducing the component spatial pattern analysis. Several images of the envelope are recorded at different THz frequencies and then processed. The final result is an image that reveals what substances are present in the envelope, in what quantity, and how they are distributed across the envelope area. The example presented here shows the identification of three drugs, two of which are illegal, while one is an over-the-counter drug.
Tsamados, Michel; Feltham, Daniel; Petty, Alek; Schroeder, David; Flocco, Daniela
2015-10-13
We present a modelling study of processes controlling the summer melt of the Arctic sea ice cover. We perform a sensitivity study and focus our interest on the thermodynamics at the ice-atmosphere and ice-ocean interfaces. We use the Los Alamos community sea ice model CICE, and additionally implement and test three new parametrization schemes: (i) a prognostic mixed layer; (ii) a three equation boundary condition for the salt and heat flux at the ice-ocean interface; and (iii) a new lateral melt parametrization. Recent additions to the CICE model are also tested, including explicit melt ponds, a form drag parametrization and a halodynamic brine drainage scheme. The various sea ice parametrizations tested in this sensitivity study introduce a wide spread in the simulated sea ice characteristics. For each simulation, the total melt is decomposed into its surface, bottom and lateral melt components to assess the processes driving melt and how this varies regionally and temporally. Because this study quantifies the relative importance of several processes in driving the summer melt of sea ice, this work can serve as a guide for future research priorities. © 2015 The Author(s).
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
Significance of parametric spectral ratio methods in detection and recognition of whispered speech
NASA Astrophysics Data System (ADS)
Mathur, Arpit; Reddy, Shankar M.; Hegde, Rajesh M.
2012-12-01
In this article the significance of a new parametric spectral ratio method that can be used to detect whispered speech segments within normally phonated speech is described. Adaptation methods based on maximum likelihood linear regression (MLLR) are then used to realize a mismatched train-test style speech recognition system. The proposed parametric spectral ratio method computes a ratio spectrum of the linear prediction (LP) and the minimum variance distortionless response (MVDR) methods. The smoothed ratio spectrum is then used to detect whispered segments of speech within neutral speech segments effectively. The proposed LP-MVDR ratio method exhibits robustness at different SNRs, as indicated by the whisper diarization experiments conducted on the CHAINS and the cell phone whispered speech corpora. The proposed method also performs reasonably better than conventional methods for whisper detection. In order to integrate the proposed whisper detection method into a conventional speech recognition engine with minimal changes, adaptation methods based on MLLR are used herein. The hidden Markov models corresponding to neutral-mode speech are adapted to the whispered-mode speech data in the whispered regions as detected by the proposed ratio method. The performance of this method is first evaluated on whispered speech data from the CHAINS corpus. The second set of experiments is conducted on the cell phone corpus of whispered speech. This corpus is collected using a setup that is used commercially for handling public transactions. The proposed whispered speech recognition system exhibits reasonably better performance compared to several conventional methods. The results indicate the possibility of a whispered-speech recognition system for cell phone based transactions.
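The two spectral envelopes in the ratio are both classical and easy to compute from frame autocorrelations; the sketch below shows one plausible reading of the LP/MVDR ratio (not the authors' exact pipeline, and with white noise standing in for a speech frame). The LP envelope comes from the autocorrelation normal equations, the MVDR envelope from the inverse autocorrelation matrix:

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

def lp_mvdr_ratio(x, order=12, nfft=512):
    """Ratio of the LP envelope to the MVDR envelope on a common grid."""
    x = x - x.mean()
    r = np.correlate(x, x, "full")[len(x) - 1:] / len(x)   # autocorrelation
    # LP coefficients from the (Levinson-type) normal equations R a = r
    a = solve_toeplitz(r[:order], r[1:order + 1])
    err = r[0] - np.dot(a, r[1:order + 1])                 # prediction-error power
    w = np.linspace(0, np.pi, nfft)
    E = np.exp(-1j * np.outer(w, np.arange(1, order + 1)))
    lp_spec = err / np.abs(1.0 - E @ a) ** 2               # LP envelope
    # MVDR envelope: 1 / (e^H R^{-1} e)
    Rinv = np.linalg.inv(toeplitz(r[:order + 1]))
    V = np.exp(-1j * np.outer(w, np.arange(order + 1)))
    mvdr_spec = 1.0 / np.real(np.einsum("fi,ij,fj->f", V.conj(), Rinv, V))
    return lp_spec / mvdr_spec

ratio = lp_mvdr_ratio(np.random.default_rng(3).normal(size=2000))
# whispered frames, lacking periodic excitation, shape this ratio differently
# from normally phonated frames, which is what the detector thresholds on
```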
Goll, Daniel S.; Brovkin, Victor; Liski, Jari; ...
2015-08-12
The quantification of sources and sinks of carbon from land use and land cover changes (LULCC) is uncertain. We investigated how the parametrization of LULCC and of organic matter decomposition, as well as initial land cover, affects the historical and future carbon fluxes in an Earth System Model (ESM). Using the land component of the Max Planck Institute ESM, we found that the historical (1750–2010) LULCC flux varied up to 25% depending on the fraction of biomass which enters the atmosphere directly due to burning or is used in short-lived products. The uncertainty in the decadal LULCC fluxes of the recent past due to the parametrization of decomposition and direct emissions was 0.6 Pg C yr⁻¹, which is 3 times larger than the uncertainty previously attributed to model and method in general. Preindustrial natural land cover had a larger effect on decadal LULCC fluxes than the aforementioned parameter sensitivity (1.0 Pg C yr⁻¹). Regional differences between reconstructed and dynamically computed land covers, in particular at low latitudes, led to differences in historical LULCC emissions of 84–114 Pg C, globally. This effect is larger than the effects of forest regrowth, shifting cultivation, or climate feedbacks and comparable to the effect of differences among studies in the terminology of LULCC. Finally, we find that the practice of calibrating the net land carbon balance to provide realistic boundary conditions for the climate component of an ESM hampers the applicability of the land component outside its primary field of application.
Reconstruction of interaction rate in holographic dark energy
NASA Astrophysics Data System (ADS)
Mukherjee, Ankan
2016-11-01
The present work is based on the holographic dark energy model with the Hubble horizon as the infrared cut-off. The interaction rate between dark energy and dark matter has been reconstructed for three different parametrizations of the deceleration parameter. Observational constraints on the model parameters have been obtained by maximum likelihood analysis using the observational Hubble parameter data (OHD), type Ia supernova data (SNe), baryon acoustic oscillation data (BAO) and the distance prior of the cosmic microwave background (CMB), namely the CMB shift parameter data (CMBShift). The interaction rate obtained in the present work remains always positive and increases with expansion. It is very similar to the result obtained by Sen and Pavon [1], where the interaction rate was reconstructed for a parametrization of the dark energy equation of state. Tighter constraints on the interaction rate have been obtained in the present work as it is based on larger data sets. The nature of the dark energy equation of state parameter has also been studied for the present models. Though the reconstruction is done from different parametrizations, the overall nature of the interaction rate is very similar in all the cases. Different information criteria and the Bayesian evidence, which have been invoked in the context of model selection, show that these models are in close proximity to each other.
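The maximum likelihood step can be sketched for one commonly used parametrization of the deceleration parameter, q(z) = q0 + q1 z/(1+z). Since q = -1 + (1+z) H'/H, the Hubble rate follows by direct integration, and a Gaussian chi-square over H(z) data gives the likelihood. This is an illustrative parametrization with synthetic data, not necessarily one of the paper's three reconstructions:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def hubble(z, H0, q0, q1):
    # H(z) = H0 * exp( int_0^z [1 + q(z')] / (1 + z') dz' )
    integrand = lambda zp: (1.0 + q0 + q1 * zp / (1.0 + zp)) / (1.0 + zp)
    return H0 * np.exp(quad(integrand, 0.0, z)[0])

def neg_log_like(theta, z_obs, H_obs, sigma):
    H0, q0, q1 = theta
    model = np.array([hubble(z, H0, q0, q1) for z in z_obs])
    return 0.5 * np.sum(((H_obs - model) / sigma) ** 2)   # Gaussian chi^2 / 2

# synthetic OHD-like data, for illustration only
rng = np.random.default_rng(4)
z_obs = np.linspace(0.1, 2.0, 25)
sigma = 5.0 * np.ones_like(z_obs)
H_obs = np.array([hubble(z, 70.0, -0.6, 0.9) for z in z_obs]) + rng.normal(0, sigma)

fit = minimize(neg_log_like, x0=[68.0, -0.5, 0.5],
               args=(z_obs, H_obs, sigma), method="Nelder-Mead")
print(fit.x)   # maximum-likelihood H0, q0, q1
```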
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2015-01-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function. PMID:27346982
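The exponential-tilting special case is compact enough to sketch: for each group, weights proportional to exp(lambda'x) are found by minimizing a convex dual so that the group's weighted covariate means match the combined-sample means. This is a first-moment version of the three-way balance, with invented names and data:

```python
import numpy as np
from scipy.optimize import minimize

def exp_tilting_weights(X_group, target_means):
    """Exponential-tilting calibration weights so that the group's weighted
    covariate means match target_means (e.g., the combined-sample means)."""
    Xc = X_group - target_means                          # center at the target
    dual = lambda lam: np.log(np.exp(Xc @ lam).sum())    # convex dual objective
    res = minimize(dual, np.zeros(X_group.shape[1]), method="BFGS")
    w = np.exp(Xc @ res.x)
    return w / w.sum()

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 3))
treat = rng.integers(0, 2, size=500).astype(bool)
target = X.mean(axis=0)                     # combined-group moments
w0 = exp_tilting_weights(X[~treat], target)
w1 = exp_tilting_weights(X[treat], target)
print(X[~treat].T @ w0 - target)            # ~0: balance achieved without a
                                            # propensity or outcome model
```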
Static and Dynamic Characteristics of a Long-Span Cable-Stayed Bridge with CFRP Cables
Xie, Xu; Li, Xiaozhang; Shen, Yonggang
2014-01-01
In this study, the scope of CFRP cables in cable-stayed bridges is studied by establishing a numerical model of a 1400-m-span bridge of this type. The mechanical properties and characteristics of CFRP stay cables and of a cable-stayed bridge with CFRP cables are here subjected to comprehensive analysis. The differences in the damping properties of free vibration, nonlinear parametric vibration and wind fluctuating vibration between steel cables and CFRP cables are determined. The structural stiffness, wind resistance and traffic vibration of the cable-stayed bridge with CFRP cables are also analyzed. It was found that the static performances of a cable-stayed bridge with CFRP cables and steel cables are basically the same. The natural frequencies of CFRP cables do not coincide with the major natural frequencies of the cable-stayed bridge, so the likelihood of CFRP cable-bridge coupling vibration is minuscule. For CFRP cables, the response amplitudes of both parametric vibration and wind fluctuating vibration are smaller than those of steel cables. It can be concluded from the research that the use of CFRP cables does not change the dynamic characteristics of the vehicle-bridge coupling vibration. Therefore, they can be used in long-span cable-stayed bridges with an excellent mechanical performance. PMID:28788710
NASA Technical Reports Server (NTRS)
Stahle, C. V.; Gongloff, H. R.
1977-01-01
A preliminary assessment of vibroacoustic test plan optimization for free-flyer STS payloads is presented, and the effects of the number of missions on alternate test plans for Spacelab sortie payloads are also examined. The component vibration failure probability and the number of components in the housekeeping subassemblies are provided. Decision models are used to evaluate the cost effectiveness of seven alternate test plans using protoflight hardware.
Mackey, Dawn C.; Hubbard, Alan E.; Cawthon, Peggy M.; Cauley, Jane A.; Cummings, Steven R.; Tager, Ira B.
2011-01-01
Few studies have examined the relation between usual physical activity level and rate of hip fracture in older men or applied semiparametric methods from the causal inference literature that estimate associations without assuming a particular parametric model. Using the Physical Activity Scale for the Elderly, the authors measured usual physical activity level at baseline (2000–2002) in 5,682 US men ≥65 years of age who were enrolled in the Osteoporotic Fractures in Men Study. Physical activity levels were classified as low (bottom quartile of Physical Activity Scale for the Elderly score), moderate (middle quartiles), or high (top quartile). Hip fractures were confirmed by central review. Marginal associations between physical activity and hip fracture were estimated with 3 estimation methods: inverse probability-of-treatment weighting, G-computation, and doubly robust targeted maximum likelihood estimation. During 6.5 years of follow-up, 95 men (1.7%) experienced a hip fracture. The unadjusted risk of hip fracture was lower in men with a high physical activity level versus those with a low physical activity level (relative risk = 0.51, 95% confidence interval: 0.28, 0.92). In semiparametric analyses that controlled confounding, hip fracture risk was not lower with moderate (e.g., targeted maximum likelihood estimation relative risk = 0.92, 95% confidence interval: 0.62, 1.44) or high (e.g., targeted maximum likelihood estimation relative risk = 0.88, 95% confidence interval: 0.53, 2.03) physical activity relative to low. This study does not support a protective effect of usual physical activity on hip fracture in older men. PMID:21303805
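Of the three estimators, inverse probability-of-treatment weighting is the simplest to sketch: fit a propensity model for the exposure, weight each subject by the inverse probability of the exposure actually received, and compare weighted risks. The data below are synthetic and binary for brevity; the study itself used the Physical Activity Scale and three exposure levels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 5000
conf = rng.normal(size=(n, 2))                       # confounders
active = rng.random(n) < 1 / (1 + np.exp(-(conf @ [0.8, -0.5])))
p_frac = 1 / (1 + np.exp(-(-4 + conf @ [0.5, 0.5] - 0.1 * active)))
fracture = rng.random(n) < p_frac

# inverse probability-of-treatment weighting (one of the paper's three estimators)
ps = LogisticRegression().fit(conf, active).predict_proba(conf)[:, 1]
w = np.where(active, 1 / ps, 1 / (1 - ps))
risk1 = np.average(fracture[active], weights=w[active])
risk0 = np.average(fracture[~active], weights=w[~active])
print("IPTW relative risk:", risk1 / risk0)
```

G-computation and targeted maximum likelihood estimation would add an outcome regression (and, for TMLE, a targeting step); the weighting step above is the common ingredient.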
NASA Astrophysics Data System (ADS)
Baluev, Roman V.
2013-08-01
We present PlanetPack, a new software tool that we developed to facilitate and standardize the advanced analysis of radial velocity (RV) data for the goals of exoplanet detection, characterization, and basic dynamical N-body simulations. PlanetPack is a command-line interpreter that can run either in an interactive mode or in a batch mode of automatic script interpretation. Its major abilities include: (i) advanced RV curve fitting with the proper maximum-likelihood treatment of unknown RV jitter; (ii) user-friendly multi-Keplerian as well as Newtonian N-body RV fits; (iii) use of more efficient maximum-likelihood periodograms that involve full multi-planet fitting (sometimes called “residual” or “recursive” periodograms); (iv) easily calculated parametric 2D likelihood function level contours, reflecting the asymptotic confidence regions; (v) user-friendly fitting under some useful functional constraints; (vi) basic tasks of short- and long-term planetary dynamical simulation using a fast Everhart-type integrator based on Gauss-Legendre spacings; (vii) fitting the data with red noise (auto-correlated errors); (viii) various analytical and numerical methods for determining statistical significance. Further functionality may be added to PlanetPack in the future. During the development of this software, a lot of effort was made to improve the computational speed, especially for CPU-demanding tasks. PlanetPack was written in pure C++ (standard of 1998/2003), and is expected to be compilable and usable on a wide range of platforms.
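Ability (i), the maximum-likelihood treatment of unknown RV jitter, comes down to adding a free variance term in quadrature to the quoted measurement errors. A minimal sketch with a toy circular one-planet model (PlanetPack itself fits full multi-Keplerian and Newtonian models in C++; all names here are invented):

```python
import numpy as np

def rv_log_likelihood(params, t, rv, sigma):
    """Gaussian log-likelihood with an unknown jitter term added in
    quadrature to the quoted per-point errors."""
    K, P, phi, v0, jitter = params
    model = v0 + K * np.sin(2 * np.pi * t / P + phi)   # toy circular orbit
    s2 = sigma ** 2 + jitter ** 2
    return -0.5 * np.sum((rv - model) ** 2 / s2 + np.log(2 * np.pi * s2))

# maximize over (K, P, phi, v0, jitter), e.g. by passing the negative of this
# function to scipy.optimize.minimize
```

The log-determinant term is what lets the jitter be estimated rather than fixed: inflating s2 lowers the chi-square part but is penalized by log(s2).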
Shi, J Q; Wang, B; Will, E J; West, R M
2012-11-20
We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime. Copyright © 2012 John Wiley & Sons, Ltd.
Control of Risks Through the Use of Procedures: A Method for Evaluating the Change in Risk
NASA Technical Reports Server (NTRS)
Praino, Gregory T.; Sharit, Joseph
2010-01-01
This paper considers how procedures can be used to control risks faced by an organization and proposes a means of recognizing whether a particular procedure reduces risk or contributes to the organization's exposure. The proposed method was developed out of the review of work documents and the governing procedures performed in the wake of the Columbia accident by NASA and the Space Shuttle prime contractor, United Space Alliance, LLC. A technique was needed to understand the rules, or procedural controls, in place at the time in the context of how important the role of each rule was. The proposed method assesses procedural risks, the residual risk associated with a hazard after a procedure's influence is accounted for, by considering each clause of a procedure as a unique procedural control that may be beneficial or harmful. For procedural risks with consequences severe enough to threaten the survival of the organization, the method measures the characteristics of each risk on a scale that is an alternative to the traditional consequence/likelihood couple. The dual benefits of the substitute scales are that they eliminate both the need to quantify a relationship between different consequence types and the need for the extensive history a probabilistic risk assessment would require. Control Value is used as an analog for the consequence, where the value of a rule is based on how well the control reduces the severity of the consequence when operating successfully. This value is composed of two parts: the inevitability of the consequence in the absence of the control, and the opportunity to intervene before the consequence is realized. High-value controls will be ones where there is minimal need for intervention but maximum opportunity to actively prevent the outcome. Failure Likelihood is used as the substitute for the conventional likelihood of the outcome. For procedural controls, a failure is considered to be any non-malicious violation of the rule, whether intended or not. The model used for describing the Failure Likelihood considers how well a task was established by evaluating that task on five components. The components selected to define a well-established task are: that it be defined, that it be assigned to someone capable, that that person be trained appropriately, that the actions be organized to enable proper completion, and that some form of independent monitoring be performed. Validation of the method was based on the information provided by a group of experts in Space Shuttle ground processing when they were presented with 5 scenarios, each identifying a clause from a procedure. For each scenario, they recorded their perception of how important the associated rule was and how likely it was to fail. They then rated the components of Control Value and Failure Likelihood for all the scenarios. The order in which each reviewer ranked the scenarios' Control Value and Failure Likelihood was compared to the order in which they ranked the scenarios on each of the associated components: inevitability and opportunity for Control Value, and definition, assignment, training, organization and monitoring for Failure Likelihood. This order comparison showed how the components contributed to a relative relationship with the substitute risk element. With the relationship established for Space Shuttle ground processing, this method can be used to gauge whether the introduction or removal of a particular rule will increase or decrease the risk associated with the hazard it is intended to control.
Fan, Zhen; Dani, Melanie; Femminella, Grazia D; Wood, Melanie; Calsolaro, Valeria; Veronese, Mattia; Turkheimer, Federico; Gentleman, Steve; Brooks, David J; Hinz, Rainer; Edison, Paul
2018-07-01
Neuroinflammation and microglial activation play an important role in amnestic mild cognitive impairment (MCI) and Alzheimer's disease. In this study, we investigated the spatial distribution of neuroinflammation in MCI subjects, using spectral analysis (SA) to generate parametric maps and quantify 11C-PBR28 PET, and compared these with compartmental and other kinetic models of quantification. Thirteen MCI subjects and nine healthy controls were enrolled in this study. Subjects underwent 11C-PBR28 PET scans with arterial cannulation. Spectral analysis with an arterial plasma input function was used to generate 11C-PBR28 parametric maps. These maps were then compared with regional 11C-PBR28 VT (volume of distribution) using a two-tissue compartment model and Logan graphic analysis. Amyloid load was also assessed with 18F-Flutemetamol PET. With SA, three component peaks were identified in addition to blood volume. The 11C-PBR28 impulse response function (IRF) at 90 min produced the lowest coefficient of variation. Single-subject analysis using this IRF demonstrated microglial activation in five out of seven amyloid-positive MCI subjects. IRF parametric maps of 11C-PBR28 uptake revealed a group-wise significant increase in neuroinflammation in amyloid-positive MCI subjects versus healthy controls in multiple cortical association areas, and particularly in the temporal lobe. Interestingly, compartmental analysis detected a group-wise increase in 11C-PBR28 binding in the thalamus of amyloid-positive MCI subjects, while Logan parametric maps did not perform well. This study demonstrates for the first time that spectral analysis can be used to generate parametric maps of 11C-PBR28 uptake, and is able to detect microglial activation in amyloid-positive MCI subjects. IRF parametric maps of 11C-PBR28 uptake allow voxel-wise single-subject analysis and could be used to evaluate microglial activation in individual subjects.
The mechanism by which nonlinearity sustains turbulence in plane Couette flow
NASA Astrophysics Data System (ADS)
Nikolaidis, M.-A.; Farrell, B. F.; Ioannou, P. J.
2018-04-01
Turbulence in wall-bounded shear flow results from a synergistic interaction between linear non-normality and nonlinearity in which non-normal growth of a subset of perturbations configured to transfer energy from the externally forced component of the turbulent state to the perturbation component maintains the perturbation energy, while the subset of energy-transferring perturbations is replenished by nonlinearity. Although it is accepted that both linear non-normality mediated energy transfer from the forced component of the mean flow and nonlinear interactions among perturbations are required to maintain the turbulent state, the detailed physical mechanism by which these processes interact in maintaining turbulence has not been determined. In this work a statistical state dynamics based analysis is performed on turbulent Couette flow at R = 600 and a comparison to DNS is used to demonstrate that the perturbation component in Couette flow turbulence is replenished by a non-normality mediated parametric growth process in which the fluctuating streamwise mean flow has been adjusted to marginal Lyapunov stability. It is further shown that the alternative mechanism in which the subspace of non-normally growing perturbations is maintained directly by perturbation-perturbation nonlinearity does not contribute to maintaining the turbulent state. This work identifies parametric interaction between the fluctuating streamwise mean flow and the streamwise varying perturbations to be the mechanism of the nonlinear interaction maintaining the perturbation component of the turbulent state, and identifies the associated Lyapunov vectors with positive energetics as the structures of the perturbation subspace supporting the turbulence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Xinliang; Lu, Quanming; Hao, Yufei
2014-01-01
The parametric instabilities of an Alfvén wave in a proton-electron plasma system are found to have great influence on proton dynamics, where part of the protons can be accelerated through the Landau resonance with the excited ion acoustic waves, and a beam component along the background magnetic field is formed. In this paper, with a one-dimensional hybrid simulation model, we investigate the evolution of the parametric instabilities of a monochromatic left-hand polarized Alfvén wave in a proton-electron-alpha plasma with a low beta. When the drift velocity between the protons and alpha particles is sufficiently large, the wave numbers of the backward daughter Alfvén waves can be cascaded toward higher values due to the modulational instability during the nonlinear evolution of the parametric instabilities, and the alpha particles are resonantly heated in both the parallel and perpendicular directions by the backward waves. On the other hand, when the drift velocity of alpha particles is small, the alpha particles are heated in the linear growth stage of the parametric instabilities due to the Landau resonance with the excited ion acoustic waves. Therefore, the heating occurs only in the parallel direction, and there is no obvious heating in the perpendicular direction. The relevance of our results to the preferential heating of heavy ions observed in the solar wind within 0.3 AU is also discussed in this paper.
Illiquidity premium and expected stock returns in the UK: A new approach
NASA Astrophysics Data System (ADS)
Chen, Jiaqi; Sherif, Mohamed
2016-09-01
This study examines the relative importance of liquidity risk for the time series and cross-section of stock returns in the UK. We propose a simple way to capture the multidimensionality of illiquidity. Our analysis indicates that existing illiquidity measures have considerable asset-specific components, which justifies our new approach. Further, we use an alternative test of the Amihud (2002) measure, together with parametric and non-parametric methods, to investigate whether liquidity risk is priced in the UK. We find that the inclusion of the illiquidity factor in the capital asset pricing model plays a significant role in explaining the cross-sectional variation in stock returns, in particular with the Fama-French three-factor model. Further, using Hansen-Jagannathan non-parametric bounds, we find that the illiquidity-augmented capital asset pricing models yield a small distance error, whereas other non-liquidity-based models fail to yield economically plausible distance values. Our findings have important implications for managing the liquidity risk of equity portfolios.
Subharmonics, Chaos, and Beyond
NASA Technical Reports Server (NTRS)
Adler, Laszlo; Yost, William T.; Cantrell, John H.
2011-01-01
While studying finite amplitude ultrasonic wave resonance in a one-dimensional liquid-filled cavity, formed by a narrow-band transducer and a plane reflector, subharmonics of the driver's frequency were observed in addition to the expected harmonic structure. Subsequently it was realized that the system is one of many examples in which parametric resonance takes place and in which the observed subharmonics are parametrically generated. Parametric resonance occurs in any physical system that has a periodically modulated natural frequency. The generation mechanism also requires a sufficiently high threshold value of the driving amplitude so that the system becomes increasingly nonlinear in response. The nonlinear features were recently investigated and are the objective of this presentation. An ultrasonic interferometer with optical precision was built. The transducers were compressional undamped quartz and lithium niobate crystals ranging from 1 to 10 MHz, driven by a high-power amplifier. Both an optical diffraction system and a receive transducer attached to an aligned reflector with lapped flat and parallel surfaces were used to observe the generated frequency components in the cavity.
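The mechanism described here, a periodically modulated natural frequency with a damping-set amplitude threshold, is captured by the damped Mathieu equation. A short simulation (parameter values are illustrative, not the experiment's) shows the hallmark subharmonic: a response at half the modulation frequency once the drive exceeds threshold:

```python
import numpy as np
from scipy.integrate import solve_ivp

w0, gamma, h = 1.0, 0.02, 0.2      # natural frequency, damping, modulation depth

def rhs(t, y):
    x, v = y
    # stiffness modulated at 2*w0: the classic parametric-resonance condition
    return [v, -2.0 * gamma * v - w0**2 * (1.0 + h * np.cos(2.0 * w0 * t)) * x]

t = np.arange(0.0, 400.0, 0.05)
sol = solve_ivp(rhs, (t[0], t[-1]), [1e-3, 0.0], t_eval=t, max_step=0.05)

freqs = 2.0 * np.pi * np.fft.rfftfreq(t.size, d=0.05)
spec = np.abs(np.fft.rfft(sol.y[0]))
print(freqs[spec.argmax()])         # ~ w0, i.e. half the 2*w0 drive frequency
```

With h below roughly 4*gamma/w0 the response decays instead, which is the threshold behavior the abstract alludes to.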
Scalability of the muscular action in a parametric 3D model of the index finger.
Sancho-Bru, Joaquín L; Vergara, Margarita; Rodríguez-Cervantes, Pablo-Jesús; Giurintano, David J; Pérez-González, Antonio
2008-01-01
A method for scaling the muscle action is proposed and used to achieve a 3D inverse dynamic model of the human finger with all its components scalable. The method is based on scaling the physiological cross-sectional area (PCSA) in a Hill muscle model. Different anthropometric parameters and maximal grip force data have been measured, and their correlations have been analyzed and used for scaling the PCSA of each muscle. A linear relationship between the normalized PCSA and the product of the length and breadth of the hand has finally been used for scaling, with a slope of 0.01315 cm⁻², with the length and breadth of the hand expressed in centimeters. The parametric muscle model has been included in a parametric finger model previously developed by the authors, and it has been validated by reproducing the results of an experiment in which subjects from different population groups exerted maximal voluntary forces with their index finger in a controlled posture.
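The quoted scaling rule fits in two lines; the abstract gives only the slope, so a zero intercept is assumed here and the helper name is invented:

```python
def pcsa_scale_factor(hand_length_cm, hand_breadth_cm):
    """Normalized PCSA from the reported linear rule: slope 0.01315 cm^-2
    times hand length x breadth (zero intercept assumed)."""
    return 0.01315 * hand_length_cm * hand_breadth_cm

print(pcsa_scale_factor(18.0, 8.0))   # ~1.89 for an 18 cm x 8 cm hand
```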
Membrane reactor for water detritiation: a parametric study on operating parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mascarade, J.; Liger, K.; Troulay, M.
2015-03-15
This paper presents the results of a parametric study of a single-stage finger-type packed-bed membrane reactor (PBMR) used for heavy water vapor de-deuteration. The study covered three operating parameters: the membrane temperature, the total feed flow rate, and the feed composition, varied through the D2O content. Mass spectrometer analysis of the streams leaving the PBMR allowed speciation of the deuterated species, and measurement of the amounts of each molecular component allowed calculation of the reaction quotient at the packed-bed outlet. While temperature variation mainly influences permeation efficiency, feed flow rate perturbation reveals the dependence of conversion and permeation properties on the contact time between the catalyst and the reacting mixture. The study shows that the isotopic exchange reactions occurring on the surface of the catalyst particles are not thermodynamically balanced. Moreover, variation of the heavy water content in the feed exhibits competition between permeation and conversion kinetics.
A Multi-Wavelength IR Laser for Space Applications
NASA Technical Reports Server (NTRS)
Li, Steven X.; Yu, Anthony W.; Sun, Xiaoli; Fahey, Molly E.; Numata, Kenji; Krainak, Michael A.
2017-01-01
We present a laser technology development with space flight heritage to generate laser wavelengths in the near- to mid-infrared (NIR to MIR) for space lidar applications. Integrating an optical parametric crystal to the LOLA (Lunar Orbiter Laser Altimeter) laser transmitter design affords selective laser wavelengths from NIR to MIR that are not easily obtainable from traditional diode pumped solid-state lasers. By replacing the output coupler of the LOLA laser with a properly designed parametric crystal, we successfully demonstrated a monolithic intra-cavity optical parametric oscillator (iOPO) laser based on all high technology readiness level (TRL) subsystems and components. Several desired wavelengths have been generated including 2.1 microns, 2.7 microns and 3.4 microns. This laser can also be used in trace-gas remote sensing, as many molecules possess their unique vibrational transitions in NIR to MIR wavelength region, as well as in time-of-flight mass spectrometer where desorption of samples using MIR laser wavelengths have been successfully demonstrated.
Return-to-Work Within a Complex and Dynamic Organizational Work Disability System.
Jetha, Arif; Pransky, Glenn; Fish, Jon; Hettinger, Lawrence J
2016-09-01
Background Return-to-work (RTW) within a complex organizational system can be associated with suboptimal outcomes. Purpose To apply a sociotechnical systems perspective to investigate complexity in RTW; to utilize system dynamics modeling (SDM) to examine how feedback relationships between individual, psychosocial, and organizational factors make up the work disability system and influence RTW. Methods SDMs were developed within two companies. Thirty stakeholders including senior managers, and frontline supervisors and workers participated in model building sessions. Participants were asked questions that elicited information about the structure of the work disability system and were translated into feedback loops. To parameterize the model, participants were asked to estimate the shape and magnitude of the relationship between key model components. Data from published literature were also accessed to supplement participant estimates. Data were entered into a model created in the software program Vensim. Simulations were conducted to examine how financial incentives and light duty work disability-related policies, utilized by the participating companies, influenced RTW likelihood and preparedness. Results The SDMs were multidimensional, including individual attitudinal characteristics, health factors, and organizational components. Among the causal pathways uncovered, psychosocial components including workplace social support, supervisor and co-worker pressure, and supervisor-frontline worker communication impacted RTW likelihood and preparedness. Interestingly, SDM simulations showed that work disability-related policies in both companies resulted in a diminishing or opposing impact on RTW preparedness and likelihood. Conclusion SDM provides a novel systems view of RTW. Policy and psychosocial component relationships within the system have important implications for RTW, and may contribute to unanticipated outcomes.
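The stock-and-flow logic behind such a model can be sketched with a simple Euler integration. This toy loop is entirely invented (it is not the study's Vensim models): RTW preparedness is a stock raised by a support-driven inflow and drained by a pressure-driven outflow, and it settles at an equilibrium set by their balance:

```python
import numpy as np

dt, T = 0.25, 52.0                 # weekly steps over one year
t = np.arange(0, T, dt)
prep = np.zeros_like(t)            # RTW preparedness stock, in [0, 1]
support, pressure = 0.6, 0.3       # assumed constant psychosocial inputs

for k in range(1, t.size):
    inflow = support * (1 - prep[k - 1])   # diminishing returns near 1
    outflow = pressure * prep[k - 1]
    prep[k] = prep[k - 1] + dt * (inflow - outflow)

# prep approaches support / (support + pressure), the loop's equilibrium;
# policy levers would enter by shifting support or pressure over time
```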
Austin, Peter C
2010-04-22
Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
NASA/Air Force Cost Model: NAFCOM
NASA Technical Reports Server (NTRS)
Winn, Sharon D.; Hamcher, John W. (Technical Monitor)
2002-01-01
The NASA/Air Force Cost Model (NAFCOM) is a parametric estimating tool for space hardware. It is based on historical NASA and Air Force space projects and is primarily used in the very early phases of a development project. NAFCOM can be used at the subsystem or component levels.
Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei
2016-03-01
We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, called nuisance parameters. We use the extended likelihood function for point and interval estimation of parameters in basically the same way as the least-squares function is used in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study of a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained using our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
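A minimal numerical sketch of the construction for a straight-line fit with one common multiplicative Type B component: the least-squares function is extended by the PDF of a nuisance correction factor nu, and nu is then maximized out (jointly minimizing over all three parameters is equivalent to minimizing the profile likelihood in the slope and intercept). The data, the Gaussian form assumed for the Type B PDF, and all values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 12)
sigma = 0.2 * np.ones_like(x)          # Type A (statistical) uncertainties
u_B = 0.03                             # common Type B component, relative
y = 1.02 * (0.5 + 0.8 * x) + rng.normal(0, sigma)   # data with a 2% bias

def extended_nll(theta):
    a, b, nu = theta                   # nu: nuisance correction factor
    resid = y - nu * (a + b * x)
    # likelihood of the data times the PDF encoding the Type B knowledge
    return 0.5 * np.sum((resid / sigma) ** 2) + 0.5 * ((nu - 1.0) / u_B) ** 2

fit = minimize(extended_nll, x0=[0.0, 1.0, 1.0], method="Nelder-Mead")
a_hat, b_hat, nu_hat = fit.x           # report (a_hat, b_hat); nu is profiled out
```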
Learning Time-Varying Coverage Functions
Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le
2015-01-01
Coverage functions are an important class of discrete functions that capture the law of diminishing returns arising naturally from applications in social network analysis, machine learning, and algorithmic game theory. In this paper, we propose a new problem of learning time-varying coverage functions, and develop a novel parametrization of these functions using random features. Based on the connection between time-varying coverage functions and counting processes, we also propose an efficient parameter learning algorithm based on likelihood maximization, and provide a sample complexity analysis. We applied our algorithm to the influence function estimation problem in information diffusion in social networks, and show that with few assumptions about the diffusion processes, our algorithm is able to estimate influence significantly more accurately than existing approaches on both synthetic and real world data. PMID:25960624
Future evolution in a backreaction model and the analogous scalar field cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Amna; Majumdar, A.S., E-mail: amnaalig@gmail.com, E-mail: archan@bose.res.in
We investigate the future evolution of the universe using the Buchert framework for averaged backreaction in the context of a two-domain partition of the universe. We show that this approach allows for the possibility of the global acceleration vanishing at a finite future time, provided that none of the subdomains accelerate individually. The model at large scales is analogously described in terms of a homogeneous scalar field emerging with a potential that is fixed and free from phenomenological parametrization. The dynamics of this scalar field is explored in the analogous FLRW cosmology. We use observational data from Type Ia Supernovae, Baryon Acoustic Oscillations, and Cosmic Microwave Background to constrain the parameters of the model for a viable cosmology, providing the corresponding likelihood contours.
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
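The semi-nonparametric alternative that nests the MNL is not reproduced here, but the likelihood-ratio mechanics the test relies on can be sketched with any nested pair of MNL specifications. A hedged example using statsmodels, with arbitrary data-generating values:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(2)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
# Simulate a 3-alternative choice from an MNL with both covariates active.
util = np.column_stack([np.zeros(n),
                        0.8 * x1 + 0.3 * x2,
                        -0.5 * x1 + 0.6 * x2])
probs = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])

full = sm.MNLogit(y, sm.add_constant(np.column_stack([x1, x2]))).fit(disp=0)
restricted = sm.MNLogit(y, sm.add_constant(x1)).fit(disp=0)

# Likelihood-ratio test of the restriction (x2 coefficients equal to zero)
lr = 2 * (full.llf - restricted.llf)
df = full.df_model - restricted.df_model
print("LR stat:", lr, "p-value:", chi2.sf(lr, df))
```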
Proportion estimation using prior cluster purities
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
The prior distribution of CLASSY component purities is studied, and this information incorporated into maximum likelihood crop proportion estimators. The method is tested on Transition Year spring small grain segments.
NASA Technical Reports Server (NTRS)
1973-01-01
A computer program for rapid parametric evaluation of various types of cryogenics spacecraft systems is presented. The mathematical techniques of the program provide the capability for in-depth analysis combined with rapid problem solution for the production of a large quantity of soundly based trade-study data. The program requires a large data bank capable of providing characteristics performance data for a wide variety of component assemblies used in cryogenic systems. The program data requirements are divided into: (1) the semipermanent data tables and source data for performance characteristics and (2) the variable input data which contains input parameters which may be perturbated for parametric system studies.
Simplifying the circuit of Josephson parametric converters
NASA Astrophysics Data System (ADS)
Abdo, Baleegh; Brink, Markus; Chavez-Garcia, Jose; Keefe, George
Josephson parametric converters (JPCs) are quantum-limited three-wave mixing devices that can play various important roles in quantum information processing in the microwave domain, including amplification of quantum signals, transduction of quantum information, remote entanglement of qubits, nonreciprocal amplification, and circulation of signals. However, the input-output and biasing circuit of a state-of-the-art JPC consists of bulky components, i.e. two commercial off-chip broadband 180-degree hybrids, four phase-matched short coax cables, and one superconducting magnetic coil. Such bulky hardware significantly hinders the integration of JPCs in scalable quantum computing architectures. In my talk, I will present ideas on how to simplify the JPC circuit and show preliminary experimental results
NASA Astrophysics Data System (ADS)
Sumantari, Y. D.; Slamet, I.; Sugiyanto
2017-06-01
Semiparametric regression is a statistical analysis method that combines parametric and nonparametric regression. There are various approach techniques in nonparametric regression, one of which is the spline. Central Java is one of the most densely populated provinces in Indonesia. Population density in this province can be modeled by semiparametric regression because it consists of parametric and nonparametric components. Therefore, the purpose of this paper is to determine the factors that influence population density in Central Java using the semiparametric spline regression model. The result shows that the factors which influence population density in Central Java are the number of active Family Planning (FP) participants and the district minimum wage.
Model-based tomographic reconstruction of objects containing known components.
Stayman, J Webster; Otake, Yoshito; Prince, Jerry L; Khanna, A Jay; Siewerdsen, Jeffrey H
2012-10-01
The likelihood of finding manufactured components (surgical tools, implants, etc.) within a tomographic field-of-view has been steadily increasing. One reason is the aging population and proliferation of prosthetic devices, such that more people undergoing diagnostic imaging have existing implants, particularly hip and knee implants. Another reason is that use of intraoperative imaging (e.g., cone-beam CT) for surgical guidance is increasing, wherein surgical tools and devices such as screws and plates are placed within or near to the target anatomy. When these components contain metal, the reconstructed volumes are likely to contain severe artifacts that adversely affect the image quality in tissues both near and far from the component. Because physical models of such components exist, there is a unique opportunity to integrate this knowledge into the reconstruction algorithm to reduce these artifacts. We present a model-based penalized-likelihood estimation approach that explicitly incorporates known information about component geometry and composition. The approach uses an alternating maximization method that jointly estimates the anatomy and the position and pose of each of the known components. We demonstrate that the proposed method can produce nearly artifact-free images even near the boundary of a metal implant in simulated vertebral pedicle screw reconstructions and even under conditions of substantial photon starvation. The simultaneous estimation of device pose also provides quantitative information on device placement that could be valuable to quality assurance and verification of treatment delivery.
A parametric ribcage geometry model accounting for variations among the adult population.
Wang, Yulong; Cao, Libo; Bai, Zhonghao; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen
2016-09-06
The objective of this study is to develop a parametric ribcage model that can account for morphological variations among the adult population. Ribcage geometries, including 12 pairs of ribs, the sternum, and the thoracic spine, were collected from CT scans of 101 adult subjects through image segmentation, landmark identification (1016 landmarks for each subject), symmetry adjustment, and template mesh mapping (26,180 elements for each subject). Generalized Procrustes analysis (GPA), principal component analysis (PCA), and regression analysis were used to develop a parametric ribcage model, which can predict nodal locations of the template mesh according to age, sex, height, and body mass index (BMI). Two regression models, a quadratic model for estimating the ribcage size and a linear model for estimating the ribcage shape, were developed. The results showed that the ribcage size was dominated by height (p=0.000) and the age-sex interaction (p=0.007), and the ribcage shape was significantly affected by age (p=0.0005), sex (p=0.0002), height (p=0.0064) and BMI (p=0.0000). Along with proper assignment of cortical bone thickness, material properties and failure properties, this parametric ribcage model can directly serve as the mesh of finite element ribcage models for quantifying effects of human characteristics on thoracic injury risks. Copyright © 2016 Elsevier Ltd. All rights reserved.
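The GPA/PCA/regression pipeline is straightforward to sketch. The example below, with random placeholder arrays standing in for the aligned ribcage geometries and subject covariates, shows the core step: PCA of aligned coordinates followed by linear regression of component scores on age, sex, height, and BMI, giving a predictor of nodal locations for a new subject description.

```python
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_coords = 101, 300   # e.g., 100 landmarks x 3 coordinates (placeholder)
X = rng.normal(size=(n_subj, n_coords))             # aligned geometries
covars = np.column_stack([rng.uniform(20, 80, n_subj),   # age
                          rng.integers(0, 2, n_subj),    # sex
                          rng.normal(170, 10, n_subj),   # height
                          rng.normal(26, 4, n_subj)])    # BMI

# PCA of the aligned coordinates via SVD
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
scores = U * s                 # per-subject principal component scores
k = 10                         # retained components

# Linear regression of the leading PC scores on the covariates
A = np.column_stack([np.ones(n_subj), covars])
B, *_ = np.linalg.lstsq(A, scores[:, :k], rcond=None)

def predict_geometry(age, sex, height, bmi):
    """Predict landmark coordinates for a new subject description."""
    pred_scores = np.array([1.0, age, sex, height, bmi]) @ B
    return mean + pred_scores @ Vt[:k]

print(predict_geometry(45, 1, 175, 24).shape)
```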
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures
Theobald, Douglas L.; Wuttke, Deborah S.
2008-01-01
Summary THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907
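For contrast with THESEUS's ML criterion, the conventional least-squares superposition it improves on can be written in a few lines (the Kabsch algorithm, with uniform atom weights, which is exactly what ML superpositioning relaxes):

```python
import numpy as np

def ls_superpose(mobile, target):
    """Least-squares (Kabsch) superposition of two (n, 3) coordinate sets.
    THESEUS replaces this uniform-weight criterion with an ML criterion
    that down-weights variable atoms; this is the LS baseline."""
    mu_m, mu_t = mobile.mean(axis=0), target.mean(axis=0)
    H = (mobile - mu_m).T @ (target - mu_t)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (mobile - mu_m) @ R.T + mu_t

rng = np.random.default_rng(4)
target = rng.normal(size=(50, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
mobile = target @ Rz.T + np.array([1.0, -2.0, 0.5])   # rotated + translated copy
fitted = ls_superpose(mobile, target)
print("RMSD:", np.sqrt(np.mean(np.sum((fitted - target)**2, axis=1))))
```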
Zhan, Tingting; Chevoneva, Inna; Iglewicz, Boris
2010-01-01
The family of weighted likelihood estimators largely overlaps with minimum divergence estimators. They are robust to data contamination compared to the MLE. We define the class of generalized weighted likelihood estimators (GWLE), provide its influence function and discuss the efficiency requirements. We introduce a new truncated cubic-inverse weight, which is both first- and second-order efficient and more robust than previously reported weights. We also discuss new ways of selecting the smoothing bandwidth and weighted starting values for the iterative algorithm. The advantage of the truncated cubic-inverse weight is illustrated in a simulation study of a three-component normal mixture model with large overlaps and heavy contamination. A real data example is also provided. PMID:20835375
An analytic modeling and system identification study of rotor/fuselage dynamics at hover
NASA Technical Reports Server (NTRS)
Hong, Steven W.; Curtiss, H. C., Jr.
1993-01-01
A combination of analytic modeling and system identification methods have been used to develop an improved dynamic model describing the response of articulated rotor helicopters to control inputs. A high-order linearized model of coupled rotor/body dynamics including flap and lag degrees of freedom and inflow dynamics with literal coefficients is compared to flight test data from single rotor helicopters in the near hover trim condition. The identification problem was formulated using the maximum likelihood function in the time domain. The dynamic model with literal coefficients was used to generate the model states, and the model was parametrized in terms of physical constants of the aircraft rather than the stability derivatives resulting in a significant reduction in the number of quantities to be identified. The likelihood function was optimized using the genetic algorithm approach. This method proved highly effective in producing an estimated model from flight test data which included coupled fuselage/rotor dynamics. Using this approach it has been shown that blade flexibility is a significant contributing factor to the discrepancies between theory and experiment shown in previous studies. Addition of flexible modes, properly incorporating the constraint due to the lag dampers, results in excellent agreement between flight test and theory, especially in the high frequency range.
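The estimation strategy, maximizing a likelihood with a genetic algorithm rather than gradient-based search, can be illustrated on a toy problem. The sketch below fits two physical constants of an assumed exponential-decay response by a minimal GA; the model, noise level, and GA settings are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 5, 60)
y = 2.0 * np.exp(-0.7 * t) + rng.normal(scale=0.05, size=t.size)

def loglik(p):
    a, b = p
    # Gaussian log-likelihood with known noise SD, up to a constant
    return -0.5 * np.sum((y - a * np.exp(-b * t)) ** 2) / 0.05**2

# Minimal generational GA: tournament selection, blend crossover, mutation
pop = rng.uniform([0.0, 0.0], [5.0, 3.0], size=(60, 2))
best, best_ll = None, -np.inf
for gen in range(150):
    fitness = np.array([loglik(p) for p in pop])
    i = int(np.argmax(fitness))
    if fitness[i] > best_ll:                       # keep best-ever solution
        best, best_ll = pop[i].copy(), fitness[i]
    idx = [max(rng.integers(0, 60, 3), key=lambda j: fitness[j])
           for _ in range(60)]                     # tournament selection
    parents = pop[idx]
    w = rng.uniform(size=(60, 1))
    pop = w * parents + (1 - w) * parents[rng.permutation(60)]  # crossover
    pop += rng.normal(scale=0.05, size=pop.shape)               # mutation

print("ML estimate (a, b):", best)
```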
Nie, Hui; Cheng, Jing; Small, Dylan S
2011-12-01
In many clinical studies with a survival outcome, administrative censoring occurs when follow-up ends at a prespecified date and many subjects are still alive. An additional complication in some trials is that there is noncompliance with the assigned treatment. For this setting, we study the estimation of the causal effect of treatment on survival probability up to a given time point among those subjects who would comply with the assignment to both treatment and control. We first discuss the standard instrumental variable (IV) method for survival outcomes and parametric maximum likelihood methods, and then develop an efficient plug-in nonparametric empirical maximum likelihood estimation (PNEMLE) approach. The PNEMLE method does not make any assumptions on outcome distributions, and makes use of the mixture structure in the data to gain efficiency over the standard IV method. Theoretical results of the PNEMLE are derived and the method is illustrated by an analysis of data from a breast cancer screening trial. From our limited mortality analysis with administrative censoring times 10 years into the follow-up, we find a significant benefit of screening is present after 4 years (at the 5% level) and this persists at 10 years follow-up. © 2011, The International Biometric Society.
Simulated maximum likelihood method for estimating kinetic rates in gene expression.
Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin
2007-01-01
Kinetic rate in gene expression is a key measurement of the stability of gene products and gives important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed aimed at evaluating kinetic rates from experimental observations, these methods cannot tackle noise in gene expression that may arise from discrete processes of gene expression, small numbers of mRNA transcripts, fluctuations in the activity of transcription factors and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density based on the discrete nature of stochastic simulations. The genetic optimization algorithm is used as an efficient tool to search for optimal reaction rates. Numerical results indicate that the proposed methods can give robust estimations of kinetic rates with good accuracy.
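A compact version of the simulated maximum likelihood idea, using an Ornstein-Uhlenbeck-type SDE as a stand-in for a gene-expression model and a Gaussian kernel density estimate of the transition density (a simple grid search replaces the paper's genetic optimization):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
dt, n_obs, theta_true, sigma = 0.1, 200, 1.5, 0.4

# Observed trajectory of dX = -theta*X dt + sigma dW (Euler simulation)
x = np.empty(n_obs)
x[0] = 1.0
for i in range(1, n_obs):
    x[i] = x[i-1] - theta_true * x[i-1] * dt + sigma * np.sqrt(dt) * rng.normal()

def sim_loglik(theta, n_sim=500):
    """Simulated log-likelihood: kernel estimate of each transition density
    from n_sim one-step simulations started at the observed point."""
    ll = 0.0
    for i in range(1, n_obs):
        ends = (x[i-1] - theta * x[i-1] * dt
                + sigma * np.sqrt(dt) * rng.normal(size=n_sim))
        ll += np.log(gaussian_kde(ends)(x[i])[0] + 1e-300)
    return ll

grid = np.linspace(0.5, 3.0, 11)
print("SML estimate of theta:",
      grid[np.argmax([sim_loglik(th) for th in grid])])
```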
SEMIPARAMETRIC EFFICIENT ESTIMATION FOR SHARED-FRAILTY MODELS WITH DOUBLY-CENSORED CLUSTERED DATA
Wang, Jane-Ling
2018-01-01
In this paper, we investigate frailty models for clustered survival data that are subject to both left- and right-censoring, termed "doubly-censored data". This model extends the current survival literature by broadening the application of frailty models from right-censored data to the more complicated situation with additional left censoring. Our approach is motivated by a recent Hepatitis B study where the sample consists of families. We adopt a likelihood approach that aims at the nonparametric maximum likelihood estimator (NPMLE). A new algorithm is proposed, which not only works well for clustered data but also improves on existing algorithms for independent, doubly-censored data, a special case in which the frailty variable is a constant equal to one. This special case is well known to be a computational challenge due to the left-censoring feature of the data. The new algorithm not only resolves this challenge but also accommodates the additional frailty variable effectively. Asymptotic properties of the NPMLE are established, along with semi-parametric efficiency of the NPMLE for the finite-dimensional parameters. The consistency of bootstrap estimators for the standard errors of the NPMLE is also discussed. We conducted simulations to illustrate the numerical performance and robustness of the proposed algorithm, which is also applied to the Hepatitis B data. PMID:29527068
Systematic Review of Video-Based Instruction Component and Parametric Analyses
ERIC Educational Resources Information Center
Bennett, Kyle D.; Aljehany, Mashal Salman; Altaf, Enas Mohammednour
2017-01-01
Video-based instruction (VBI) has a substantial amount of research supporting its use with individuals with autism spectrum disorder and other developmental disabilities. However, it has typically been implemented as a treatment package containing multiple interventions. Additionally, there are procedural variations of VBI. Thus, it is difficult…
ERIC Educational Resources Information Center
Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.
2011-01-01
Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…
Turing instability in reaction-diffusion systems with nonlinear diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zemskov, E. P., E-mail: zemskov@ccas.ru
2013-10-15
The Turing instability is studied in two-component reaction-diffusion systems with nonlinear diffusion terms, and the regions in parametric space where Turing patterns can form are determined. The boundaries between super- and subcritical bifurcations are found. Calculations are performed for one-dimensional Brusselator and Oregonator models.
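With linear diffusion, the textbook special case of the systems studied here, the Turing analysis reduces to checking the eigenvalues of the reaction Jacobian shifted by -k^2 D over wavenumbers k. A sketch for the Brusselator, with illustrative parameter values:

```python
import numpy as np

a, b = 2.0, 4.5      # Brusselator kinetics parameters (illustrative)
Du, Dv = 1.0, 10.0   # diffusion coefficients (linear diffusion case)

# Jacobian of the reaction terms at the homogeneous steady state (u*, v*) = (a, b/a)
J = np.array([[b - 1.0, a**2],
              [-b,     -a**2]])

k = np.linspace(0.0, 3.0, 301)
growth = [np.max(np.linalg.eigvals(J - kk**2 * np.diag([Du, Dv])).real)
          for kk in k]

# Turing instability: stable without diffusion, unstable for some k > 0
print("stable without diffusion:", max(np.linalg.eigvals(J).real) < 0)
print("max growth rate over k:", max(growth),
      "at k =", k[int(np.argmax(growth))])
```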
Rotations with Rodrigues' Vector
ERIC Educational Resources Information Center
Pina, E.
2011-01-01
The rotational dynamics was studied from the point of view of Rodrigues' vector. This vector is defined here by its connection with other forms of parametrization of the rotation matrix. The rotation matrix was expressed in terms of this vector. The angular velocity was computed using the components of Rodrigues' vector as coordinates. It appears…
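The connection described can be made concrete: for the Rodrigues vector g = tan(theta/2) n, the rotation matrix has the closed form R = I + 2(K + K^2)/(1 + g.g), with K the skew-symmetric matrix of g. A quick numerical check:

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def rotation_from_rodrigues(g):
    """Rotation matrix from the Rodrigues vector g = tan(theta/2) * n."""
    K = skew(g)
    return np.eye(3) + 2.0 / (1.0 + g @ g) * (K + K @ K)

# 90-degree rotation about z: g = tan(45 deg) * (0, 0, 1) = (0, 0, 1)
R = rotation_from_rodrigues(np.array([0.0, 0.0, 1.0]))
print(np.round(R, 6))                      # maps the x-axis to the y-axis
print(np.allclose(R @ R.T, np.eye(3)))     # orthogonality check
```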
Additivity and maximum likelihood estimation of nonlinear component biomass models
David L.R. Affleck
2015-01-01
Since Parresol's (2001) seminal paper on the subject, it has become common practice to develop nonlinear tree biomass equations so as to ensure compatibility among total and component predictions and to fit equations jointly using multi-step least squares (MSLS) methods. In particular, many researchers have specified total tree biomass models by aggregating the...
Probabilistic Modeling of the Renal Stone Formation Module
NASA Technical Reports Server (NTRS)
Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.
2013-01-01
The Integrated Medical Model (IMM) is a probabilistic tool, used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the inflight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Due to a high salt diet and high concentrations of calcium in the blood (due to bone depletion caused by unloading in the microgravity environment), astronauts are at a considerably elevated risk for developing renal calculi (nephrolithiasis) while in space. Lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM) to create a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions. The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions to the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component of the renal stone model couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the physical and chemical outputs and inputs of the deterministic stone growth model. These two parts of the model are necessary to capture the uncertainty in the likelihood estimate. The model will be driven by Monte Carlo simulations, continuously randomly sampling the probability distributions of the electrolyte concentrations and system parameters that are inputs into the deterministic model. The total urine chemistry concentrations are used to determine the urine chemistry activity using the Joint Expert Speciation System (JESS), a biochemistry model. Information from JESS is then fed into the deterministic growth model. Outputs from JESS and the deterministic model are passed back to the probabilistic model, where a multivariate regression is used to assess the likelihood of a stone forming and the likelihood of a stone requiring clinical intervention. The parameters used to quantify these risks include: relative supersaturation (RS) of calcium oxalate, citrate/calcium ratio, crystal number density, total urine volume, pH, magnesium excretion, maximum stone width, and ureteral location. Methods and Validation: The RSFM is designed to perform a Monte Carlo simulation to generate probability distributions of clinically significant renal stones, as well as provide an associated uncertainty in the estimate. Initially, early versions will be used to test integration of the components and assess component validation and verification (V&V), with later versions used to address questions regarding design reference mission scenarios. Once integrated with the deterministic component, the credibility assessment of the integrated model will follow NASA STD 7009 requirements.
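The architecture, probability distributions feeding a deterministic growth model whose output is mapped to a symptom likelihood, can be sketched generically. Everything below (the growth function, the input distributions, and the regression coefficients) is a placeholder for illustration; the real module uses JESS urine chemistry and the Kassemi growth model:

```python
import numpy as np

rng = np.random.default_rng(7)
n_draws = 10_000

def growth_model(calcium, oxalate, volume):
    """Placeholder for the deterministic stone-growth model: returns a
    notional maximum stone width (mm). Purely illustrative."""
    supersat = calcium * oxalate / volume
    return 2.0 * np.log1p(supersat)

# Sample notional input distributions for urine chemistry / physiology
calcium = rng.lognormal(mean=1.5, sigma=0.3, size=n_draws)
oxalate = rng.lognormal(mean=0.2, sigma=0.4, size=n_draws)
volume = rng.normal(1.8, 0.4, size=n_draws).clip(0.5)

width = growth_model(calcium, oxalate, volume)
# Notional logistic regression mapping stone width to symptom probability
p_symptomatic = 1 / (1 + np.exp(-(0.9 * width - 4.0)))

draws = rng.random(n_draws) < p_symptomatic
print("estimated likelihood of a symptomatic stone:",
      draws.mean(), "+/-", draws.std(ddof=1) / np.sqrt(n_draws))
```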
Small signal amplifiers and converters for millimeter wave Satcom systems
NASA Technical Reports Server (NTRS)
Okean, H. C.
1979-01-01
This paper describes the current state of the art and the various design tradeoffs encompassing the variety of small signal active circuit 'building blocks' deployed in millimeter wave Satcom receivers and transmitters. Included in this category are such low noise receiver components as parametric and FET amplifiers and low loss mixer downconverters, as well as low level transmitter driver components such as resistive and varactor upconverters. Current and projected state of the art performance data will be presented along with specific examples of operating hardware.
Multi-modal vibration amplitudes of taut inclined cables due to direct and/or parametric excitation
NASA Astrophysics Data System (ADS)
Macdonald, J. H. G.
2016-02-01
Cables are often prone to potentially damaging large amplitude vibrations. The dynamic excitation may be from external loading or motion of the cable ends, the latter including direct excitation, normally from components of end motion transverse to the cable, and parametric excitation induced by axial components of end motion causing dynamic tension variations. Geometric nonlinearity can be important, causing stiffening behaviour and nonlinear modal coupling. Previous analyses of the vibrations, often neglecting sag, have generally dealt with direct and parametric excitation separately or have reverted to numerical solutions of the responses. Here a nonlinear cable model is adopted, applicable to taut cables such as on cable-stayed bridges, that allows for cable inclination, small sag (such that the vibration modes are similar to those of a taut string), multiple modes in both planes and end motion and/or external forcing close to any natural frequency. Based on the method of scaling and averaging it is found that, for sinusoidal inputs and positive damping, non-zero steady state responses can only occur in the modes in each plane with natural frequencies close to the excitation frequency and those with natural frequencies close to half this frequency. Analytical solutions, in the form of non-dimensional polynomial equations, are derived for the steady state vibration amplitudes in up to three modes simultaneously: the directly excited mode, the corresponding nonlinearly coupled mode in the orthogonal plane and a parametrically excited mode with half the natural frequency. The stability of the solutions is also identified. The outputs of the equations are consistent with previous results, where available. Example results from the analytical solutions are presented for a typical inclined bridge cable subject to vertical excitation of the lower end, and they are validated by numerical integration of the equations of motion and against some previous experimental results. It is shown that the modal interactions and sag (although very small) affect the responses significantly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, M. Vargas dos; Reis, R.R.R.; Waga, I., E-mail: vargas@if.ufrj.br, E-mail: ribamar@if.ufrj.br, E-mail: ioav@if.ufrj.br
2016-02-01
We revisit the kink-like parametrization of the deceleration parameter q(z) [1], which considers a transition, at redshift z_t, from cosmic deceleration to acceleration. In this parametrization the initial (z >> z_t) value of the q-parameter is q_i, its final (z = -1) value is q_f, and the duration of the transition is parametrized by τ. By assuming a flat space geometry we obtain constraints on the free parameters of the model using recent data from type Ia supernovae (SN Ia), baryon acoustic oscillations (BAO), the cosmic microwave background (CMB) and the Hubble parameter H(z). The use of H(z) data introduces an explicit dependence of the combined likelihood on the present value of the Hubble parameter H_0, allowing us to explore the influence of different priors when marginalizing over this parameter. We also study the importance of the CMB information in the results by considering data from WMAP7, WMAP9 (Wilkinson Microwave Anisotropy Probe, 7 and 9 years) and Planck 2015. We show that the contours and best fit do not depend much on the different CMB data used and that the new BAO data considered are responsible for most of the improvement in the results. Assuming a flat space geometry, q_i = 1/2 and expressing the present value of the deceleration parameter q_0 as a function of the other three free parameters, we obtain z_t = 0.67 (+0.10/-0.08), τ = 0.26 (+0.14/-0.10) and q_0 = -0.48 (+0.11/-0.13), at the 68% confidence level, with a uniform prior over H_0. If in addition we fix q_f = -1, as in flat ΛCDM, DGP and Chaplygin quartessence, which are special models described by our parametrization, we get z_t = 0.66 (+0.03/-0.04), τ = 0.33 (+0.04/-0.04) and q_0 = -0.54 (+0.05/-0.07), in excellent agreement with flat ΛCDM, for which τ = 1/3. We also obtain for flat wCDM, another dark energy model described by our parametrization, the constraint on the equation-of-state parameter -1.22 < w < -0.78 at more than the 99% confidence level.
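The fitting machinery can be sketched with a generic smooth deceleration-to-acceleration transition sharing the same ingredients (q_i, q_f, z_t, τ). Note this is not the paper's exact kink formula, and the H(z) data points below are placeholders used only to show the chi-squared mechanics:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def q(z, zt, tau, qi=0.5, qf=-1.0):
    """Illustrative smooth transition: q -> qi at high z, q -> qf at z = -1."""
    return qi + (qf - qi) / (1.0 + ((1.0 + z) / (1.0 + zt)) ** (1.0 / tau))

def E(z, zt, tau):
    """H(z)/H0 from q(z): ln E = integral_0^z (1 + q)/(1 + z') dz'."""
    integrand = lambda zp: (1.0 + q(zp, zt, tau)) / (1.0 + zp)
    return np.exp(quad(integrand, 0.0, z)[0])

# Placeholder H(z) data (km/s/Mpc), purely to show the fitting mechanics
z_obs = np.array([0.1, 0.4, 0.9, 1.5, 2.0])
H_obs = np.array([73.0, 82.0, 112.0, 153.0, 200.0])
sigma = np.array([5.0, 6.0, 8.0, 12.0, 15.0])
H0 = 70.0

def chi2(theta):
    zt, tau = theta
    model = H0 * np.array([E(z, zt, tau) for z in z_obs])
    return np.sum(((H_obs - model) / sigma) ** 2)

fit = minimize(chi2, x0=[0.7, 0.3], method="Nelder-Mead")
print("z_t, tau:", fit.x)
```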
Cure modeling in real-time prediction: How much does it help?
Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F
2017-08-01
Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016 BMC Biomedical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intending to cover those situations where it appears that a non-negligible fraction of subjects is cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model, but the difference was unremarkable until late in the trial when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
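A minimal Weibull cure-mixture fit of the kind compared here, with the cure fraction pi entering the censored-observation likelihood through S(t) = pi + (1 - pi) S_W(t). The data are simulated; this is not the RTOG 0129 analysis:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n, pi_true, k_true, lam_true = 500, 0.3, 1.4, 2.0

cured = rng.random(n) < pi_true
t_event = lam_true * rng.weibull(k_true, size=n)
t_event[cured] = np.inf                    # cured subjects never fail
censor = rng.uniform(0, 6, size=n)         # administrative censoring
time = np.minimum(t_event, censor)
event = (t_event <= censor).astype(float)

def negloglik(theta):
    pi = 1 / (1 + np.exp(-theta[0]))       # logit-transformed cure fraction
    k, lam = np.exp(theta[1]), np.exp(theta[2])
    z = (time / lam) ** k
    log_f = np.log(k / lam) + (k - 1) * np.log(time / lam) - z   # Weibull density
    surv = np.exp(-z)                                            # Weibull survival
    ll = (event * (np.log1p(-pi) + log_f)
          + (1 - event) * np.log(pi + (1 - pi) * surv))
    return -ll.sum()

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
print("cure fraction:", 1 / (1 + np.exp(-fit.x[0])),
      "shape:", np.exp(fit.x[1]), "scale:", np.exp(fit.x[2]))
```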
Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.
Böhning, Dankmar; Kuhnert, Ronny
2006-12-01
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
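The capture-recapture use mentioned at the end is easy to make concrete for the simplest case, a single (unmixed) zero-truncated Poisson: the MLE solves xbar = lambda/(1 - exp(-lambda)), and the Horvitz-Thompson estimator of population size is N_hat = n/(1 - exp(-lambda_hat)):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(9)
N, lam_true = 1000, 1.2
counts = rng.poisson(lam_true, size=N)
observed = counts[counts > 0]              # zero-truncated sample
n, xbar = observed.size, observed.mean()

# Zero-truncated Poisson MLE: mean of the truncated distribution equals xbar
lam_hat = brentq(lambda lam: lam / (1 - np.exp(-lam)) - xbar, 1e-6, 50.0)

# Horvitz-Thompson estimate of the total population size
N_hat = n / (1 - np.exp(-lam_hat))
print("lambda:", lam_hat, "estimated N:", N_hat, "true N:", N)
```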
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Technical Reports Server (NTRS)
Peffley, Al F.
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
An introduction of component fusion extend Kalman filtering method
NASA Astrophysics Data System (ADS)
Geng, Yue; Lei, Xusheng
2018-05-01
In this paper, the Component Fusion Extend Kalman Filtering (CFEKF) algorithm is proposed, assuming that each component of the error propagation is independent and Gaussian distributed. The CFEKF is obtained through the maximum likelihood of the propagation error, which allows the state transition matrix and the measurement matrix to be adjusted adaptively. By minimizing the linearization error, CFEKF can effectively improve the estimation accuracy of the nonlinear system state. The computation of CFEKF is similar to that of the EKF, which makes it easy to apply.
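For reference, the standard EKF predict/update cycle that CFEKF extends is sketched below; the abstract's adaptive adjustment of the transition and measurement matrices is not reproduced, and the toy system is an assumption:

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of a standard EKF. CFEKF additionally
    adapts the transition and measurement matrices from the likelihood
    of the propagation error; that adaptation is not shown here."""
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy nonlinear system: near-constant-velocity state, range-only measurement
f = lambda x: np.array([x[0] + 0.1 * x[1], x[1]])
h = lambda x: np.array([np.hypot(x[0], x[1])])
F_jac = lambda x: np.array([[1.0, 0.1], [0.0, 1.0]])
H_jac = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])

x, P = np.array([1.0, 0.5]), np.eye(2)
x, P = ekf_step(x, P, np.array([1.3]), f, h, F_jac, H_jac,
                Q=0.01 * np.eye(2), R=np.array([[0.05]]))
print(x)
```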
Kawada, Tomoyuki; Otsuka, Toshiaki; Inagaki, Hirofumi; Wakayama, Yoko; Li, Qing; Katsumata, Masao
2009-10-01
The Framingham Risk Score (FRS) has frequently been used in the United States to predict the 10-year risk of coronary heart disease (CHD). Components of the metabolic syndrome and several lifestyle factors have also been evaluated to estimate the risk of CHD. To determine the relationship between the FRS and components of metabolic syndrome as coronary risk indicators, the authors conducted a cross-sectional study of 2,619 Japanese male workers, ranging in age from 40 to 64 years, at a single workplace. Although the estimation by the FRS and metabolic syndrome involved some different factors, significant association of the risk estimated by the 2 methods was observed. When logistic regression analysis was conducted with adjustment for several lifestyle factors, the FRS and serum insulin were found to be significantly associated with the likelihood of metabolic syndrome. The odds ratios and 95% confidence intervals of the FRS per standard deviation increment and of serum insulin per 1 microIU/mL increment for the prediction of metabolic syndrome were 2.50 (2.17-2.88) and 1.24 (1.20-1.27), respectively. A preventive effect of abstaining from drinking every day and eating breakfast almost daily against the likelihood of metabolic syndrome was also observed. In conclusion, the FRS and insulin were found to be significantly associated with the likelihood of metabolic syndrome, even after controlling for weight change.
An overview of fatigue failures at the Rocky Flats Wind System Test Center
NASA Technical Reports Server (NTRS)
Waldon, C. A.
1981-01-01
Potential small wind energy conversion (SWECS) design problems were identified to improve product quality and reliability. Mass produced components such as gearboxes, generators, bearings, etc., are generally reliable due to their widespread uniform use in other industries. The likelihood of failure increases, though, in the interfacing of these components and in SWECS components designed for a specific system use. Problems relating to the structural integrity of such components are discussed and analyzed with techniques currently used in quality assurance programs in other manufacturing industries.
Dissociating response conflict and error likelihood in anterior cingulate cortex.
Yeung, Nick; Nieuwenhuis, Sander
2009-11-18
Neuroimaging studies consistently report activity in anterior cingulate cortex (ACC) in conditions of high cognitive demand, leading to the view that ACC plays a crucial role in the control of cognitive processes. According to one prominent theory, the sensitivity of ACC to task difficulty reflects its role in monitoring for the occurrence of competition, or "conflict," between responses to signal the need for increased cognitive control. However, a contrasting theory proposes that ACC is the recipient rather than source of monitoring signals, and that ACC activity observed in relation to task demand reflects the role of this region in learning about the likelihood of errors. Response conflict and error likelihood are typically confounded, making the theories difficult to distinguish empirically. The present research therefore used detailed computational simulations to derive contrasting predictions regarding ACC activity and error rate as a function of response speed. The simulations demonstrated a clear dissociation between conflict and error likelihood: fast response trials are associated with low conflict but high error likelihood, whereas slow response trials show the opposite pattern. Using the N2 component as an index of ACC activity, an EEG study demonstrated that when conflict and error likelihood are dissociated in this way, ACC activity tracks conflict and is negatively correlated with error likelihood. These findings support the conflict-monitoring theory and suggest that, in speeded decision tasks, ACC activity reflects current task demands rather than the retrospective coding of past performance.
Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.
Youssef, Noha H; Elshahed, Mostafa S
2008-09-01
Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
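One simple way to implement the spirit of the correction, fitting a saturating curve to richness estimates versus library size and reading off the asymptote, is sketched below; the curve form is an assumption, and all but the final data point (the bias-uncorrected ML value of 15,009 at 13,001 clones) are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

# Library sizes analyzed and the richness estimates they produced
# (only the last point is taken from the abstract; the rest are notional)
lib_size = np.array([1000, 2000, 4000, 6000, 9000, 13001], dtype=float)
richness = np.array([6200, 8900, 11400, 12800, 14100, 15009], dtype=float)

def saturating(n, r_max, half):
    """Two-parameter saturation curve: estimate -> r_max as n -> infinity."""
    return r_max * n / (half + n)

(r_max, half), _ = curve_fit(saturating, lib_size, richness, p0=[20000, 5000])
print("sample size-unbiased richness estimate (asymptote):", r_max)
```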
Fourment, Mathieu; Holmes, Edward C
2014-07-24
Early methods for estimating divergence times from gene sequence data relied on the assumption of a molecular clock. More sophisticated methods were created to model rate variation and used auto-correlation of rates, local clocks, or the so called "uncorrelated relaxed clock" where substitution rates are assumed to be drawn from a parametric distribution. In the case of Bayesian inference methods the impact of the prior on branching times is not clearly understood, and if the amount of data is limited the posterior could be strongly influenced by the prior. We develop a maximum likelihood method--Physher--that uses local or discrete clocks to estimate evolutionary rates and divergence times from heterochronous sequence data. Using two empirical data sets we show that our discrete clock estimates are similar to those obtained by other methods, and that Physher outperformed some methods in the estimation of the root age of an influenza virus data set. A simulation analysis suggests that Physher can outperform a Bayesian method when the real topology contains two long branches below the root node, even when evolution is strongly clock-like. These results suggest it is advisable to use a variety of methods to estimate evolutionary rates and divergence times from heterochronous sequence data. Physher and the associated data sets used here are available online at http://code.google.com/p/physher/.
Bossard, N; Descotes, F; Bremond, A G; Bobin, Y; De Saint Hilaire, P; Golfier, F; Awada, A; Mathevet, P M; Berrerd, L; Barbier, Y; Estève, J
2003-11-01
The prognostic value of cathepsin D has been recently recognized, but as with many quantitative tumor markers, its clinical use remains unclear, partly because of methodological issues in defining cut-off values. Guidelines have been proposed for analyzing quantitative prognostic factors, underlining the need for keeping data continuous instead of categorizing them. Flexible approaches, parametric and non-parametric, have been proposed in order to improve the knowledge of the functional form relating a continuous factor to the risk. We studied the prognostic value of cathepsin D in a retrospective hospital cohort of 771 patients with breast cancer, and focused our overall survival analysis, based on the Cox regression, on two flexible approaches: smoothing splines and fractional polynomials. We also determined a cut-off value from the maximum likelihood estimate of a threshold model. These different approaches complemented each other for (1) identifying the functional form relating cathepsin D to the risk and obtaining a cut-off value, and (2) optimizing the adjustment for a complex covariate like age at diagnosis in the final multivariate Cox model. We found a significant increase in the death rate, reaching 70% with a doubling of the level of cathepsin D, above the threshold of 37.5 pmol/mg. The proper prognostic impact of this marker could be confirmed, and a methodology providing appropriate ways to use markers in clinical practice was proposed.
A mixture model for bovine abortion and foetal survival.
Hanson, Timothy; Bedrick, Edward J; Johnson, Wesley O; Thurmond, Mark C
2003-05-30
The effect of spontaneous abortion on the dairy industry is substantial, costing the industry on the order of US$200 million per year in California alone. We analyse data from a cohort study of nine dairy herds in Central California. A key feature of the analysis is the observation that only a relatively small proportion of cows will abort (around 10-15 per cent), so that it is inappropriate to analyse the time-to-abortion (TTA) data as if it were standard censored survival data, with cows that fail to abort by the end of the study treated as censored observations. We thus broaden the scope to consider the analysis of foetal lifetime distribution (FLD) data for the cows, with the dual goals of characterizing the effects of various risk factors on (i) the likelihood of abortion and, conditional on abortion status, on (ii) the risk of early versus late abortion. A single model is developed to accomplish both goals, with two sets of specific herd effects modelled as random effects. Because multimodal foetal hazard functions are expected for the TTA data, both a parametric mixture model and a non-parametric model are developed. Furthermore, the two sets of analyses are linked because of anticipated dependence between the random herd effects. All modelling and inferences are accomplished using modern Bayesian methods. Copyright 2003 John Wiley & Sons, Ltd.
On parametrized cold dense matter equation-of-state inference
NASA Astrophysics Data System (ADS)
Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.
2018-07-01
Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrized dense matter equations of state. In particular, we generalize and examine two inference paradigms from the literature: (i) direct posterior equation-of-state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective while the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilizing archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation-of-state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.
Reconstruction of interaction rate in holographic dark energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, Ankan, E-mail: ankan_ju@iiserkol.ac.in
2016-11-01
The present work is based on the holographic dark energy model with the Hubble horizon as the infrared cut-off. The interaction rate between dark energy and dark matter has been reconstructed for three different parametrizations of the deceleration parameter. Observational constraints on the model parameters have been obtained by maximum likelihood analysis using the observational Hubble parameter data (OHD), type Ia supernova data (SNe), baryon acoustic oscillation data (BAO) and the distance prior of the cosmic microwave background (CMB), namely the CMB shift parameter data (CMBShift). The interaction rate obtained in the present work remains always positive and increases with expansion. It is very similar to the result obtained by Sen and Pavon [1], where the interaction rate was reconstructed for a parametrization of the dark energy equation of state. Tighter constraints on the interaction rate have been obtained in the present work as it is based on larger data sets. The nature of the dark energy equation-of-state parameter has also been studied for the present models. Though the reconstruction is done from different parametrizations, the overall nature of the interaction rate is very similar in all the cases. Different information criteria and the Bayesian evidence, which have been invoked in the context of model selection, show that these models are in close proximity to each other.
A Statistician's View of Upcoming Grand Challenges
NASA Astrophysics Data System (ADS)
Meng, Xiao Li
2010-01-01
In this session we have seen some snapshots of the broad spectrum of challenges, in this age of huge, complex, computer-intensive models, data, instruments, and questions. These challenges bridge astronomy at many wavelengths, basic physics, machine learning, and statistics. At one end of our spectrum, we think of 'compressing' the data with non-parametric methods. This raises the question of creating 'pseudo-replicas' of the data for uncertainty estimates. What would be involved in, e.g., bootstrap and related methods? Somewhere in the middle are these non-parametric methods for encapsulating the uncertainty information. At the far end, we find more model-based approaches, with the physics model embedded in the likelihood and analysis. The other distinctive problem is really the 'black-box' problem, where one has a complicated, e.g. fundamental physics-based, computer code, or 'black box', and one needs to know how changing the parameters at input -- due to uncertainties of any kind -- will map to changing the output. All of these connect to challenges in complexity of data and computation speed. Dr. Meng will highlight ways to 'cut corners' with advanced computational techniques, such as Parallel Tempering and Equal Energy methods. As well, there are cautionary tales of running automated analysis with real data -- where "30 sigma" outliers due to data artifacts can be more common than the astrophysical event of interest.
Towards the Optimal Pixel Size of dem for Automatic Mapping of Landslide Areas
NASA Astrophysics Data System (ADS)
Pawłuszek, K.; Borkowski, A.; Tarolli, P.
2017-05-01
Determining the appropriate spatial resolution of a digital elevation model (DEM) is a key step for effective landslide analysis based on remote sensing data. Several studies have demonstrated that choosing the finest DEM resolution is not always the best solution, and various DEM resolutions can be applicable for diverse landslide applications. Thus, this study aims to assess the influence of spatial resolution on automatic landslide mapping. A pixel-based approach using parametric and non-parametric classification methods, namely the feed-forward neural network (FFNN) and maximum likelihood classification (ML), was applied in this study. Additionally, this allowed us to determine the impact of the classification method on the selection of DEM resolution. Landslide-affected areas were mapped based on four DEMs generated at 1 m, 2 m, 5 m and 10 m spatial resolution from airborne laser scanning (ALS) data. The performance of the landslide mapping was then evaluated by applying a landslide inventory map and computing a confusion matrix. The results of this study suggest that the finest DEM resolution is not always the best fit, although working at 1 m resolution on the micro-topography scale can show different results. The best performance was found using the 5 m DEM for FFNN and the 1 m DEM for ML classification.
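The parametric (ML) side of the comparison is a per-class Gaussian maximum likelihood classifier over DEM-derived features. A sketch with placeholder features (e.g., slope and roughness) for landslide and non-landslide pixels:

```python
import numpy as np

def fit_ml_classifier(X, y):
    """Per-class Gaussian maximum likelihood classifier, as commonly used
    for pixel-based terrain mapping from DEM-derived features."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def predict_ml(params, X):
    scores = []
    for c, (mu, cov) in params.items():
        diff = X - mu
        inv = np.linalg.inv(cov)
        logdet = np.linalg.slogdet(cov)[1]
        # Gaussian log-likelihood up to a shared constant
        scores.append(-0.5 * (np.einsum("ij,jk,ik->i", diff, inv, diff) + logdet))
    return np.array(list(params))[np.argmax(scores, axis=0)]

rng = np.random.default_rng(10)
# Placeholder DEM-derived features (slope, roughness) for two classes
X0 = rng.normal([10, 0.2], [3, 0.05], size=(200, 2))   # non-landslide
X1 = rng.normal([22, 0.45], [4, 0.08], size=(200, 2))  # landslide
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 200)

params = fit_ml_classifier(X, y)
print("training accuracy:", (predict_ml(params, X) == y).mean())
```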
Preliminary design study of advanced multistage axial flow core compressors
NASA Technical Reports Server (NTRS)
Wisler, D. C.; Koch, C. C.; Smith, L. H., Jr.
1977-01-01
A preliminary design study was conducted to identify an advanced core compressor for use in new high-bypass-ratio turbofan engines to be introduced into commercial service in the 1980's. An evaluation of anticipated compressor and related component 1985 state-of-the-art technology was conducted. A parametric screening study covering a large number of compressor designs was conducted to determine the influence of the major compressor design features on efficiency, weight, cost, blade life, aircraft direct operating cost, and fuel usage. The trends observed in the parametric screening study were used to develop three high-efficiency, high-economic-payoff compressor designs. These three compressors were studied in greater detail to better evaluate their aerodynamic and mechanical feasibility.
A Methodology for the Parametric Reconstruction of Non-Steady and Noisy Meteorological Time Series
NASA Astrophysics Data System (ADS)
Rovira, F.; Palau, J. L.; Millán, M.
2009-09-01
Climatic and meteorological time series often show some persistence (in time) in the variability of certain features. One could regard annual, seasonal and diurnal time variability as trivial persistence in the variability of some meteorological magnitudes (e.g., global radiation, air temperature above the surface, etc.). In these cases, the traditional Fourier transform into frequency space will show the principal harmonics as the components with the largest amplitude. Nevertheless, meteorological measurements often show other non-steady (in time) variability. Some fluctuations in measurements (at different time scales) are driven by processes that prevail on some days (or months) of the year but disappear on others. By decomposing a time series into time-frequency space through the continuous wavelet transformation, one is able to determine both the dominant modes of variability and how those modes vary in time. This study is based on a numerical methodology to analyse non-steady principal harmonics in noisy meteorological time series. The methodology combines the continuous wavelet transform with the development of a parametric model that includes the time evolution of the principal and most statistically significant harmonics of the original time series. The parameterisation scheme proposed in this study consists of reproducing the original time series by means of a statistically significant finite sum of sinusoidal signals (waves), each defined by the three usual parameters: amplitude, frequency and phase. To ensure the statistical significance of the parametric reconstruction of the original signal, we propose a standard Student's t analysis of the confidence level of the amplitude in the parametric spectrum for the different wave components. Once the level of significance of the different waves composing the parametric model is assured, we can obtain the statistically significant principal harmonics (in time) of the original time series by using the Fourier transform of the modelled signal. Acknowledgements: The CEAM Foundation is supported by the Generalitat Valenciana and BANCAIXA (València, Spain). This study has been partially funded by the European Commission (FP VI, Integrated Project CIRCE - No. 036961) and by the Ministerio de Ciencia e Innovación, research projects "TRANSREG" (CGL2007-65359/CLI) and "GRACCIE" (CSD2007-00067, Program CONSOLIDER-INGENIO 2010).
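The sketch below illustrates, under simplifying assumptions, the kind of parametric reconstruction the abstract describes: pick the dominant harmonics of a noisy series and rebuild the signal as a finite sum of amplitude/frequency/phase triples (Python; a stationary FFT stands in for the paper's wavelet-based, time-varying analysis, and the synthetic data are hypothetical).

```python
import numpy as np

def reconstruct_from_harmonics(y, dt, n_waves=3):
    """Approximate a series by its n_waves largest-amplitude Fourier
    components, each defined by amplitude, frequency and phase."""
    n = len(y)
    spec = np.fft.rfft(y - y.mean())
    freqs = np.fft.rfftfreq(n, d=dt)
    order = np.argsort(np.abs(spec))[::-1][:n_waves]   # dominant harmonics
    t = np.arange(n) * dt
    model = np.full(n, y.mean())
    for k in order:
        amp = 2 * np.abs(spec[k]) / n
        phase = np.angle(spec[k])
        model += amp * np.cos(2 * np.pi * freqs[k] * t + phase)
    return model

# Noisy synthetic "temperature" series with seasonal + diurnal cycles.
t = np.arange(0, 365, 1 / 24.0)   # time in days, hourly sampling
y = (15 + 8 * np.sin(2 * np.pi * t / 365) + 4 * np.sin(2 * np.pi * t)
     + np.random.default_rng(0).normal(0, 1, t.size))
fit = reconstruct_from_harmonics(y, dt=1 / 24.0, n_waves=2)
print("residual std:", np.std(y - fit))
```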
Shuttle cryogenic supply system optimization study. Volume 6: Appendixes
NASA Technical Reports Server (NTRS)
1973-01-01
The optimization of the cryogenic supply system for space shuttles is discussed. The subjects considered are: (1) auxiliary power unit parametric data, (2) propellant acquisition, (3) thermal protection and thermodynamic properties, (4) instrumentation and controls, and (5) initial component redundancy evaluations. Diagrams of the systems are provided. Graphs of the performance capabilities are included.
Force Project Technology Presentation to the NRCC
2014-02-04
Functional bridge components: Smart Odometer; Adv Pretreatment; Smart Bridge; Multi-functional Gap Crossing; Fuel Automated Tracking System; Adv... comprehensive matrix of candidate composite material systems and textile reinforcement architectures via modeling/analyses and testing. Product(s): Validated Dynamic Modeling tool based on a parametric study using material models to reliably predict the textile mechanics of the hose.
NASA Astrophysics Data System (ADS)
Chen, Maomao; Zhou, Yuan; Su, Han; Zhang, Dong; Luo, Jianwen
2017-04-01
Imaging of the pharmacokinetic parameters in dynamic fluorescence molecular tomography (DFMT) can provide three-dimensional metabolic information for biological studies and drug development. However, owing to the ill-posed nature of the FMT inverse problem, the relatively low quality of the parametric images makes it difficult to investigate the different metabolic processes of the fluorescent targets with small distances. An excitation-resolved multispectral DFMT method is proposed; it is based on the fact that the fluorescent targets with different concentrations show different variations in the excitation spectral domain and can be considered independent signal sources. With an independent component analysis method, the spatial locations of different fluorescent targets can be decomposed, and the fluorescent yields of the targets at different time points can be recovered. Therefore, the metabolic process of each component can be independently investigated. Simulations and phantom experiments are carried out to evaluate the performance of the proposed method. The results demonstrated that the proposed excitation-resolved multispectral method can effectively improve the reconstruction accuracy of the parametric images in DFMT.
Fiske, Ian J.; Royle, J. Andrew; Gross, Kevin
2014-01-01
Ecologists and wildlife biologists increasingly use latent variable models to study patterns of species occurrence when detection is imperfect. These models have recently been generalized to accommodate both a more expansive description of state than simple presence or absence, and Markovian dynamics in the latent state over successive sampling seasons. In this paper, we write these multi-season, multi-state models as hidden Markov models to find both maximum likelihood estimates of model parameters and finite-sample estimators of the trajectory of the latent state over time. These estimators are especially useful for characterizing population trends in species of conservation concern. We also develop parametric bootstrap procedures that allow formal inference about latent trend. We examine model behavior through simulation, and we apply the model to data from the North American Amphibian Monitoring Program.
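A hedged sketch of the hidden-Markov-model machinery the abstract relies on: the scaled forward algorithm below computes the log-likelihood of an observed detection history given transition and detection matrices (Python; the two-state setup and all parameter values are illustrative, not the paper's).

```python
import numpy as np

def forward_log_likelihood(obs, pi, trans, emit):
    """Scaled forward algorithm: log P(obs) for an HMM with initial
    probs pi (S,), transition matrix trans (S,S), emissions emit (S,O)."""
    alpha = pi * emit[:, obs[0]]
    c = alpha.sum()
    log_lik = np.log(c)
    alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        c = alpha.sum()          # rescale to avoid numerical underflow
        log_lik += np.log(c)
        alpha /= c
    return log_lik

# Two latent occupancy states (unoccupied/occupied), imperfect detection.
pi = np.array([0.6, 0.4])
trans = np.array([[0.8, 0.2],    # colonization probability 0.2
                  [0.3, 0.7]])   # extinction probability 0.3
emit = np.array([[1.0, 0.0],     # unoccupied site is never detected
                 [0.4, 0.6]])    # occupied site detected 60% of the time
history = [0, 1, 0, 1, 1]        # detection record across seasons
print(forward_log_likelihood(history, pi, trans, emit))
```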
Ryberg, Karen R.; Vecchia, Aldo V.
2013-01-01
The seawaveQ R package fits a parametric regression model (seawaveQ) to pesticide concentration data from streamwater samples to assess variability and trends. The model incorporates the strong seasonality and high degree of censoring common in pesticide data and users can incorporate numerous ancillary variables, such as streamflow anomalies. The model is fitted to pesticide data using maximum likelihood methods for censored data and is robust in terms of pesticide, stream location, and degree of censoring of the concentration data. This R package standardizes this methodology for trend analysis, documents the code, and provides help and tutorial information, as well as providing additional utility functions for plotting pesticide and other chemical concentration data.
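Not the seawaveQ code itself: the sketch below illustrates the general idea the package standardizes, maximum likelihood for left-censored concentrations with a seasonal regressor (a Tobit-style likelihood). The model form, data, and reporting limit are all hypothetical (Python standing in for the R implementation).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def censored_negloglik(theta, t, y, censored):
    """Left-censored Gaussian regression on log-concentration with an
    annual sine/cosine pair (Tobit-style likelihood)."""
    b0, b1, b2, log_sigma = theta
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * np.sin(2 * np.pi * t) + b2 * np.cos(2 * np.pi * t)
    ll_obs = norm.logpdf(y, mu, sigma)    # detected concentrations
    ll_cens = norm.logcdf(y, mu, sigma)   # y holds the reporting limit
    return -np.sum(np.where(censored, ll_cens, ll_obs))

rng = np.random.default_rng(0)
t = rng.uniform(0, 5, 200)                # sample times in years
y_true = -1.0 + 0.8 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, 200)
limit = -1.2                              # hypothetical reporting limit
censored = y_true < limit
y = np.where(censored, limit, y_true)
fit = minimize(censored_negloglik, x0=[0, 0, 0, 0],
               args=(t, y, censored), method="Nelder-Mead")
print(fit.x)
```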
Sung, Yun Ju; Di, Yanming; Fu, Audrey Q; Rothstein, Joseph H; Sieh, Weiva; Tong, Liping; Thompson, Elizabeth A; Wijsman, Ellen M
2007-01-01
We performed multipoint linkage analyses with multiple programs and models for several gene expression traits in the Centre d'Etude du Polymorphisme Humain families. All analyses provided consistent results for both peak location and shape. Variance-components (VC) analysis gave wider peaks and Bayes factors gave fewer peaks. Among programs from the MORGAN package, lm_multiple performed better than lm_markers, resulting in less Markov-chain Monte Carlo (MCMC) variability between runs, and the program lm_twoqtl provided higher LOD scores by also including either a polygenic component or an additional quantitative trait locus.
NASA Astrophysics Data System (ADS)
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2016-10-01
This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems, yielding an uncertain ensemble that combines non-parametric uncertainty with mixed fuzzy and interval parametric uncertainties. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) and a second-order fuzzy interval perturbation FE/SEA method (SFIPFE/SEA) are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by a first-order Taylor series, while SFIPFE/SEA improves the accuracy by retaining second-order Taylor terms, neglecting the mixed second-order terms. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images in combination with mechanistic models enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
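A hedged sketch of the profile-likelihood idea used above, on a deliberately simple exponential-decay model rather than the paper's PDE-constrained problem: fix one parameter on a grid and re-optimize the rest, tracing out the profile (Python; all names and data are illustrative).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 40)
y = 2.0 * np.exp(-0.7 * t) + rng.normal(0, 0.1, t.size)  # noisy decay

def negloglik(params, k_fixed):
    """Gaussian negative log-likelihood for y = a * exp(-k t),
    with the decay rate k held fixed (profiled out)."""
    a, log_s = params
    resid = y - a * np.exp(-k_fixed * t)
    s2 = np.exp(2 * log_s)
    return 0.5 * np.sum(resid**2 / s2 + np.log(2 * np.pi * s2))

# Profile likelihood for the decay rate k.
k_grid = np.linspace(0.4, 1.0, 25)
profile = np.array([minimize(negloglik, x0=[2.0, -2.0], args=(k,),
                             method="Nelder-Mead").fun for k in k_grid])
inside = k_grid[profile - profile.min() < 1.92]   # ~95% CI (chi2_1 / 2)
print(f"k in [{inside.min():.2f}, {inside.max():.2f}]")
```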
Newman, Phil; Adams, Roger; Waddington, Gordon
2012-09-01
To examine the relationship between two clinical test results and future diagnosis of medial tibial stress syndrome (MTSS) in personnel at a military trainee establishment. Data from a preparticipation musculoskeletal screening test performed on 384 Australian Defence Force Academy Officer Cadets were compared against 693 injuries reported by 326 of the Officer Cadets in the following 16 months. Data were held in an injury surveillance database and analysed using χ² and Fisher's exact tests, and receiver operating characteristic curve analysis. Diagnosis of MTSS was confirmed by an independent blinded health practitioner. The palpation and oedema clinical tests were each found to be significant predictors of later onset of MTSS. Specifically: shin palpation test OR 4.63, 95% CI 2.5 to 8.5, positive likelihood ratio 3.38, negative likelihood ratio 0.732, Pearson χ² p<0.001; shin oedema test OR 76.1, 95% CI 9.6 to 602.7, positive likelihood ratio 7.26, negative likelihood ratio 0.095, Fisher's exact p<0.001; combined shin palpation and shin oedema tests positive likelihood ratio 7.94, negative likelihood ratio <0.001, Fisher's exact p<0.001. Female gender was found to be an independent risk factor (OR 2.97, 95% CI 1.66 to 5.31, positive likelihood ratio 2.09, negative likelihood ratio 0.703, Pearson χ² p<0.001) for developing MTSS. The tests for MTSS employed here are components of a normal clinical examination used to diagnose MTSS. This paper confirms that these tests, together with female gender, can also be confidently applied in predicting those in an asymptomatic population who are at greater risk of developing MTSS symptoms with activity at some point in the future.
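A small sketch of how the reported screening statistics relate to a 2x2 table: given counts of test results against later diagnosis, it computes the odds ratio and the positive/negative likelihood ratios quoted in abstracts like this one (Python; the counts below are made up, not the study's data).

```python
def screening_stats(tp, fp, fn, tn):
    """Odds ratio and likelihood ratios from a 2x2 screening table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "odds_ratio": (tp * tn) / (fp * fn),
        "LR+": sens / (1 - spec),   # positive likelihood ratio
        "LR-": (1 - sens) / spec,   # negative likelihood ratio
    }

# Hypothetical counts: positive shin test vs. later MTSS diagnosis.
print(screening_stats(tp=25, fp=40, fn=10, tn=309))
```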
Self-consistent Bulge/Disk/Halo Galaxy Dynamical Modeling Using Integral Field Kinematics
NASA Astrophysics Data System (ADS)
Taranu, D. S.; Obreschkow, D.; Dubinski, J. J.; Fogarty, L. M. R.; van de Sande, J.; Catinella, B.; Cortese, L.; Moffett, A.; Robotham, A. S. G.; Allen, J. T.; Bland-Hawthorn, J.; Bryant, J. J.; Colless, M.; Croom, S. M.; D'Eugenio, F.; Davies, R. L.; Drinkwater, M. J.; Driver, S. P.; Goodwin, M.; Konstantopoulos, I. S.; Lawrence, J. S.; López-Sánchez, Á. R.; Lorente, N. P. F.; Medling, A. M.; Mould, J. R.; Owers, M. S.; Power, C.; Richards, S. N.; Tonini, C.
2017-11-01
We introduce a method for modeling disk galaxies designed to take full advantage of data from integral field spectroscopy (IFS). The method fits equilibrium models to simultaneously reproduce the surface brightness, rotation, and velocity dispersion profiles of a galaxy. The models are fully self-consistent 6D distribution functions for a galaxy with a Sérsic profile stellar bulge, exponential disk, and parametric dark-matter halo, generated by an updated version of GalactICS. By creating realistic flux-weighted maps of the kinematic moments (flux, mean velocity, and dispersion), we simultaneously fit photometric and spectroscopic data using both maximum-likelihood and Bayesian (MCMC) techniques. We apply the method to a GAMA spiral galaxy (G79635) with kinematics from the SAMI Galaxy Survey and deep g- and r-band photometry from the VST-KiDS survey, comparing parameter constraints with those from traditional 2D bulge-disk decomposition. Our method returns broadly consistent results for shared parameters while constraining the mass-to-light ratios of stellar components and reproducing the H I-inferred circular velocity well beyond the limits of the SAMI data. Although the method is tailored for fitting integral field kinematic data, it can use other dynamical constraints like central fiber dispersions and H I circular velocities, and is well-suited for modeling galaxies with a combination of deep imaging and H I and/or optical spectra (resolved or otherwise). Our implementation (MagRite) is computationally efficient and can generate well-resolved models and kinematic maps in under a minute on modern processors.
Tang, Wan; Lu, Naiji; Chen, Tian; Wang, Wenjuan; Gunzler, Douglas David; Han, Yu; Tu, Xin M
2015-10-30
Zero-inflated Poisson (ZIP) and negative binomial (ZINB) models are widely used to model zero-inflated count responses. These models extend the Poisson and negative binomial (NB) to address excessive zeros in the count response. By adding a degenerate distribution centered at 0 and interpreting it as describing a non-risk group in the population, the ZIP (ZINB) models a two-component population mixture. As in applications of Poisson and NB, the key difference between ZIP and ZINB is the allowance for overdispersion by the ZINB in its NB component in modeling the count response for the at-risk group. Overdispersion arising in practice too often does not follow the NB, and applications of ZINB to such data yield invalid inference. If sources of overdispersion are known, other parametric models may be used to directly model the overdispersion. Such models too are subject to assumed distributions. Further, this approach may not be applicable if information about the sources of overdispersion is unavailable. In this paper, we propose a distribution-free alternative and compare its performance with these popular parametric models as well as a moment-based approach proposed by Yu et al. [Statistics in Medicine 2013; 32: 2390-2405]. Like the generalized estimating equations, the proposed approach requires no elaborate distribution assumptions. Compared with the approach of Yu et al., it is more robust to overdispersed zero-inflated responses. We illustrate our approach with both simulated and real study data. Copyright © 2015 John Wiley & Sons, Ltd.
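A hedged sketch of the standard zero-inflated Poisson likelihood that these models build on (not the paper's distribution-free alternative): mixture weight pi for the structural-zero, non-risk group and Poisson mean lam for the at-risk group, fitted by direct maximization (Python; data are simulated).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def zip_negloglik(theta, y):
    """Negative log-likelihood of a zero-inflated Poisson:
    P(0) = pi + (1-pi) e^{-lam};  P(k>0) = (1-pi) Poisson(k; lam)."""
    pi = expit(theta[0])           # keep pi in (0, 1)
    lam = np.exp(theta[1])         # keep lam positive
    zero = y == 0
    ll0 = np.log(pi + (1 - pi) * np.exp(-lam))
    llk = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -(zero.sum() * ll0 + llk[~zero].sum())

rng = np.random.default_rng(0)
y = np.where(rng.random(500) < 0.3, 0, rng.poisson(2.5, 500))  # ZIP sample
fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
print("pi =", expit(fit.x[0]), "lam =", np.exp(fit.x[1]))
```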
Mistaking minds and machines: How speech affects dehumanization and anthropomorphism.
Schroeder, Juliana; Epley, Nicholas
2016-11-01
Treating a human mind like a machine is an essential component of dehumanization, whereas attributing a humanlike mind to a machine is an essential component of anthropomorphism. Here we tested how a cue closely connected to a person's actual mental experience, a humanlike voice, affects the likelihood of mistaking a person for a machine, or a machine for a person. We predicted that paralinguistic cues in speech are particularly likely to convey the presence of a humanlike mind, such that removing voice from communication (leaving only text) would increase the likelihood of mistaking the text's creator for a machine. Conversely, adding voice to a computer-generated script (resulting in speech) would increase the likelihood of mistaking the text's creator for a human. Four experiments confirmed these hypotheses, demonstrating that people are more likely to infer a human (vs. computer) creator when they hear a voice expressing thoughts than when they read the same thoughts in text. Adding human visual cues to text (i.e., seeing a person perform a script in a subtitled video clip) did not increase the likelihood of inferring a human creator compared with only reading text, suggesting that defining features of personhood may be conveyed more clearly in speech (Experiments 1 and 2). Removing the naturalistic paralinguistic cues that convey humanlike capacity for thinking and feeling, such as varied pace and intonation, eliminates the humanizing effect of speech (Experiment 4). We discuss implications for dehumanizing others through text-based media, and for anthropomorphizing machines through speech-based media. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Evans, Jennifer A; Elliott, Jeffrey A; Gorman, Michael R
2011-07-01
The endogenous circadian pacemaker of mammals is synchronized to the environmental day by the ambient cycle of relative light and dark. The present studies assessed the actions of light in a novel circadian entrainment paradigm where activity rhythms are bifurcated following exposure to a 24-h light:dark:light:dark (LDLD) cycle. Bifurcated entrainment under LDLD reflects the temporal dissociation of component oscillators that comprise the circadian system and is facilitated when daily scotophases are dimly lit rather than completely dark. Although bifurcation can be stably maintained in LDLD, it is quickly reversed under constant conditions. Here the authors examine whether dim scotophase illumination acts to maintain bifurcated entrainment under LDLD through potential interactions with the parametric actions of bright light during the two daily photophases. In three experiments, wheel-running rhythms of Syrian hamsters were bifurcated under LDLD with dimly lit scotophases, and after several weeks, dim scotophase illumination was either retained or extinguished. Additionally, "full" and "skeleton" photophases were employed under LDLD cycles with dimly lit or completely dark scotophases to distinguish parametric from nonparametric effects of bright light. Rhythm bifurcation was more stable in full versus skeleton LDLD cycles. Dim light facilitated the maintenance of bifurcated entrainment under full LDLD cycles but did not prevent the loss of rhythm bifurcation in skeleton LDLD cycles. These studies indicate that parametric actions of bright light maintain the bifurcated entrainment state; that dim scotophase illumination increases the stability of the bifurcated state; and that dim light interacts with the parametric effects of bright light to increase the stability of rhythm bifurcation under full LDLD cycles. A further understanding of the novel actions of dim light may lead to new strategies for understanding, preventing, and treating chronobiological disturbances.
Component pattern analysis of chemicals using multispectral THz imaging system
NASA Astrophysics Data System (ADS)
Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuki
2004-04-01
We have developed a novel basic technology for terahertz (THz) imaging, which allows detection and identification of chemicals by introducing the component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
Isotope Brayton electric power system for the 500 to 2500 watt range
NASA Technical Reports Server (NTRS)
Macosko, R. P.; Barna, G. J.; Block, H. B.; Ingle, B. D.
1972-01-01
An extensive study was conducted at the Lewis Research Center to evaluate an isotope Brayton electric power system for use in the 500 to 2500 W power range. Overall system simplicity was emphasized in order to reduce parasitic power losses and improve system reliability. Detailed parametric cycle analysis, conceptual component designs, and evaluation of system packaging were included. A single-loop system (gas) with six major components including one rotating unit was selected. Calculated net system efficiency varies from 23 to 28 percent over the power range.
Lee, E Henry; Wickham, Charlotte; Beedlow, Peter A; Waschmann, Ronald S; Tingey, David T
2017-10-01
A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for climate and forest disturbances (i.e., pests, diseases, fire). The statistical method is illustrated with a tree-ring width time series for a mature closed-canopy Douglas-fir stand on the west slopes of the Cascade Mountains of Oregon, USA that is impacted by Swiss needle cast disease caused by the foliar fungus, Phaecryptopus gaeumannii (Rhode) Petrak. The likelihood-based TSIA method is proposed for the field of dendrochronology to understand the interaction of temperature, water, and forest disturbances that are important in forest ecology and climate change studies.
Kück, Patrick; Meusemann, Karen; Dambach, Johannes; Thormann, Birthe; von Reumont, Björn M; Wägele, Johann W; Misof, Bernhard
2010-03-31
Methods of alignment masking, which refers to the technique of excluding alignment blocks prior to tree reconstructions, have been successful in improving the signal-to-noise ratio in sequence alignments. However, the lack of formally well defined methods to identify randomness in sequence alignments has prevented a routine application of alignment masking. In this study, we compared the effects on tree reconstructions of the most commonly used profiling method (GBLOCKS), which uses a predefined set of rules in combination with alignment masking, with a new profiling approach (ALISCORE) based on Monte Carlo resampling within a sliding window, using different data sets and alignment methods. While the GBLOCKS approach excludes variable sections above a certain threshold, a choice that is left arbitrary, the ALISCORE algorithm is free of a priori rating of parameter space and therefore more objective. ALISCORE was successfully extended to amino acids using a proportional model and empirical substitution matrices to score randomness in multiple sequence alignments. A complex bootstrap resampling leads to an even distribution of scores of randomly similar sequences to assess randomness of the observed sequence similarity. Testing performance on real data, both masking methods, GBLOCKS and ALISCORE, helped to improve tree resolution. The sliding window approach was less sensitive to different alignments of identical data sets and performed equally well on all data sets. Concurrently, ALISCORE is capable of dealing with different substitution patterns and heterogeneous base composition. ALISCORE and the most relaxed GBLOCKS gap parameter setting performed best on all data sets. Correspondingly, Neighbor-Net analyses showed the greatest decrease in conflict. Alignment masking improves the signal-to-noise ratio in multiple sequence alignments prior to phylogenetic reconstruction. Given the robust performance of alignment profiling, alignment masking should routinely be used to improve tree reconstructions. Parametric methods of alignment profiling can be easily extended to more complex likelihood-based models of sequence evolution, which opens the possibility of further improvements.
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images, and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings, and in most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets, whereas for some of the datasets MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) gave very similar results for large and very large datasets (> 150 samples); differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
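A hedged sketch of the parametric-versus-non-parametric comparison the tool automates: fit an inverse-gamma model to landslide areas by maximum likelihood and compare against a kernel density estimate (Python with scipy, standing in for the authors' R/WPS implementation; the plain two-parameter inverse gamma is a stand-in for the landslide-specific model, and the data are synthetic).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
areas = stats.invgamma.rvs(a=1.4, scale=5e3, size=300, random_state=rng)  # m^2

# Parametric MLE fit (inverse gamma), location pinned at zero.
a_hat, loc_hat, scale_hat = stats.invgamma.fit(areas, floc=0)

# Non-parametric alternative: Gaussian KDE of log-area.
kde = stats.gaussian_kde(np.log(areas))

grid = np.logspace(np.log10(areas.min()), np.log10(areas.max()), 5)
pdf_mle = stats.invgamma.pdf(grid, a_hat, loc_hat, scale_hat)
pdf_kde = kde(np.log(grid)) / grid   # change of variables back to area
for g, p1, p2 in zip(grid, pdf_mle, pdf_kde):
    print(f"area {g:9.1f}  MLE {p1:.2e}  KDE {p2:.2e}")
```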
Parametric Modulation of Error-Related ERP Components by the Magnitude of Visuo-Motor Mismatch
ERIC Educational Resources Information Center
Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik
2011-01-01
Errors generate typical brain responses, characterized by two successive event-related potentials (ERP) following incorrect action: the error-related negativity (ERN) and the positivity error (Pe). However, it is unclear whether these error-related responses are sensitive to the magnitude of the error, or instead show all-or-none effects. We…
Automation of the CFD Process on Distributed Computing Systems
NASA Technical Reports Server (NTRS)
Tejnil, Ed; Gee, Ken; Rizk, Yehia M.
2000-01-01
A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational resources required to compute and store the information. The scripts were continually modified to improve the utilization of the computational resources and reduce the likelihood of data loss due to failures. An ad-hoc file server was created to manage the large amount of data being generated as part of the design event. Files were stored and retrieved as needed to create new jobs and analyze the results. Additional information is contained in the original.
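A hedged sketch of the ad-hoc first-in-first-out job management described above, reduced to its core: a queue of solver jobs drained by a fixed pool of workers (Python stands in for the original UNIX shell/Perl scripts; the case names and the run command are hypothetical placeholders).

```python
import queue
import subprocess
import threading

jobs = queue.Queue()
for case in ["alpha00", "alpha04", "alpha08"]:   # parametric design points
    jobs.put(case)

def worker():
    """Drain the FIFO queue, running one solver job at a time."""
    while True:
        try:
            case = jobs.get_nowait()
        except queue.Empty:
            return
        # Placeholder invocation; a real script would launch the flow solver.
        subprocess.run(["echo", f"running flow solver for {case}"], check=True)
        jobs.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]  # 2 "resources"
for t in threads:
    t.start()
for t in threads:
    t.join()
```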
Macera, Márcia A C; Louzada, Francisco; Cancho, Vicente G; Fontes, Cor J F
2015-03-01
In this paper, we introduce a new model for recurrent event data characterized by a fully parametric baseline rate function, based on the exponential-Poisson distribution. The model arises from a latent competing risk scenario, in the sense that there is no information about which cause was responsible for the event occurrence. The time of each recurrence is then given by the minimum lifetime value among all latent causes. The new model has a particular case, the classical homogeneous Poisson process. The properties of the proposed model are discussed, including its hazard rate function, survival function, and ordinary moments. The inferential procedure is based on the maximum likelihood approach. We consider the important issue of model selection between the proposed model and its particular case via the likelihood ratio test and the score test. Goodness of fit of the recurrent event models is assessed using Cox-Snell residuals. A simulation study evaluates the performance of the estimation procedure for small and moderate sample sizes. Applications to two real data sets are provided to illustrate the proposed methodology. One of them, first analyzed by our team of researchers, concerns the recurrence of malaria, an infectious disease caused by a protozoan parasite that infects red blood cells. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the partial duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived, and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
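A hedged sketch of the partial-duration step: take excesses over a threshold, fit a Generalized Pareto distribution by maximum likelihood, and derive a return level (Python with scipy; the threshold here is a fixed quantile for simplicity, not the paper's bootstrap-MSE selection, and the data are synthetic).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
daily_rain = rng.gamma(shape=0.4, scale=12.0, size=30 * 365)  # mm, 30 years

u = np.quantile(daily_rain, 0.98)            # fixed-quantile threshold
excess = daily_rain[daily_rain > u] - u
shape, loc, scale = stats.genpareto.fit(excess, floc=0)

# T-year return level: GPD quantile at p = 1 - 1/(lambda * T),
# where lambda is the mean number of exceedances per year.
events_per_year = excess.size / 30.0
T = 100
p = 1 - 1 / (T * events_per_year)
return_level = u + stats.genpareto.ppf(p, shape, loc, scale)
print(f"threshold {u:.1f} mm, {T}-yr return level {return_level:.1f} mm")
```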
Learning Quantitative Sequence-Function Relationships from Massively Parallel Experiments
NASA Astrophysics Data System (ADS)
Atwal, Gurinder S.; Kinney, Justin B.
2016-03-01
A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such "massively parallel" experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of "diffeomorphic modes"—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.
The Educational Consequences of Teen Childbearing
Kane, Jennifer B.; Morgan, S. Philip; Harris, Kathleen Mullan; Guilkey, David K.
2013-01-01
A huge literature shows that teen mothers face a variety of detriments across the life course, including truncated educational attainment. To what extent is this association causal? The estimated effects of teen motherhood on schooling vary widely, ranging from no discernible difference to 2.6 fewer years among teen mothers. The magnitude of educational consequences is therefore uncertain, despite voluminous policy and prevention efforts that rest on the assumption of a negative and presumably causal effect. This study adjudicates between two potential sources of inconsistency in the literature—methodological differences or cohort differences—by using a single, high-quality data source: namely, The National Longitudinal Study of Adolescent Health. We replicate analyses across four different statistical strategies: ordinary least squares regression; propensity score matching; and parametric and semiparametric maximum likelihood estimation. Results demonstrate educational consequences of teen childbearing, with estimated effects between 0.7 and 1.9 fewer years of schooling among teen mothers. We select our preferred estimate (0.7), derived from semiparametric maximum likelihood estimation, on the basis of weighing the strengths and limitations of each approach. Based on the range of estimated effects observed in our study, we speculate that variable statistical methods are the likely source of inconsistency in the past. We conclude by discussing implications for future research and policy, and recommend that future studies employ a similar multimethod approach to evaluate findings.
Empirical projection-based basis-component decomposition method
NASA Astrophysics Data System (ADS)
Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland
2009-02-01
Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which, in addition to the conventional approach of Alvarez and Macovski, a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only a moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
Closed-cycle gas dynamic laser design investigation
NASA Technical Reports Server (NTRS)
Ketch, G. W.; Young, W. E.
1977-01-01
A conceptual design study was made of a closed cycle gas-dynamic laser to provide definition of the major components in the laser loop. The system potential application is for long range power transmission by way of high power laser beams to provide satellite propulsion energy for orbit changing or station keeping. A parametric cycle optimization was conducted to establish the thermodynamic requirements for the system components. A conceptual design was conducted of the closed cycle system and the individual components to define physical characteristics and establish the system size and weight. Technology confirmation experimental demonstration programs were outlined to develop, evaluate, and demonstrate the technology base needed for this closed cycle GDL system.
Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data.
Xu, Lizhen; Paterson, Andrew D; Turpin, Williams; Xu, Wei
2015-01-01
Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitude and direction of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimations. We also evaluate the abilities of model selection strategies using Akaike information criterion (AIC) or Vuong test to identify the correct model. The simulation studies show that hurdle and zero inflated models have well controlled type I errors, higher power, better goodness of fit measures, and are more accurate and efficient in the parameter estimation. Besides that, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero inflated models. However, the estimation and interpretation of the parameters for the zero components differs, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero inflated data and implement it in a gut microbiome study of > 400 independent subjects.
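A hedged sketch of the Vuong model-selection step mentioned above, comparing a zero-inflated Poisson fit against a plain Poisson fit on the same counts via pointwise log-likelihood ratios (Python; the crude moment-style parameter estimates are for illustration only, not the paper's workflow).

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln

def poisson_logpmf(y, lam):
    return -lam + y * np.log(lam) - gammaln(y + 1)

def zip_logpmf(y, pi, lam):
    return np.where(y == 0,
                    np.log(pi + (1 - pi) * np.exp(-lam)),
                    np.log(1 - pi) + poisson_logpmf(y, lam))

rng = np.random.default_rng(0)
y = np.where(rng.random(400) < 0.35, 0, rng.poisson(3.0, 400))

# Crude moment-style fits, good enough for the illustration.
lam_pois = y.mean()
lam_pos = y[y > 0].mean()
pi_hat = max((np.mean(y == 0) - np.exp(-lam_pos)) /
             (1 - np.exp(-lam_pos)), 1e-6)
lam_zip = y.mean() / (1 - pi_hat)

# Vuong statistic: scaled mean of pointwise log-likelihood ratios.
m = zip_logpmf(y, pi_hat, lam_zip) - poisson_logpmf(y, lam_pois)
vuong = np.sqrt(len(y)) * m.mean() / m.std(ddof=1)
print("Vuong z =", round(vuong, 2), "p =", round(1 - stats.norm.cdf(vuong), 4))
```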
Sampling-free Bayesian inversion with adaptive hierarchical tensor representations
NASA Astrophysics Data System (ADS)
Eigel, Martin; Marschall, Manuel; Schneider, Reinhold
2018-03-01
A sampling-free approach to Bayesian inversion with an explicit polynomial representation of the parameter densities is developed, based on an affine-parametric representation of a linear forward model. This becomes feasible due to the complete treatment in function spaces, which requires an efficient model reduction technique for numerical computations. The advocated perspective yields the crucial benefit that error bounds can be derived for all occurring approximations, leading to provable convergence subject to the discretization parameters. Moreover, it enables a fully adaptive a posteriori control with automatic problem-dependent adjustments of the employed discretizations. The method is discussed in the context of modern hierarchical tensor representations, which are used for the evaluation of a random PDE (the forward model) and the subsequent high-dimensional quadrature of the log-likelihood, alleviating the ‘curse of dimensionality’. Numerical experiments demonstrate the performance and confirm the theoretical results.
Regression analysis of informative current status data with the additive hazards model.
Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo
2015-04-01
This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models when the censoring is noninformative, and there also exists a large literature on parametric analysis of informative current status data in the context of tumorigenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented in which the copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved, and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well for practical situations. An illustrative example is also provided.
A computer program for multiple decrement life table analyses.
Poole, W K; Cooley, P C
1977-06-01
Life table analysis has traditionally been the tool of choice in analyzing the distribution of "survival" times when a parametric form for the survival curve cannot be reasonably assumed. Chiang, in two papers [1, 2], formalized the theory of life table analyses in a Markov chain framework and derived maximum likelihood estimates of the relevant parameters for the analyses. He also discussed how the techniques could be generalized to consider competing risks and follow-up studies. Although various computer programs exist for doing different types of life table analysis [3], to date there has not been a generally available, well documented computer program to carry out multiple decrement analyses, either by Chiang's or any other method. This paper describes such a program developed by Research Triangle Institute. A user's manual, which supplements the contents of this paper with a discussion of the formulas used and the program listing, is available at printing cost.
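A hedged sketch of the basic multiple-decrement computation such a program performs: from counts of deaths by cause within age intervals, derive conditional probabilities of exit by each cause and track survivors (Python; made-up numbers, and a simplification of Chiang's full method).

```python
# Hypothetical cohort: 10,000 alive at the start, deaths by two
# competing causes (A, B) in each of three age intervals.
alive = 10000.0
intervals = [
    (120, 40),    # (deaths from cause A, deaths from cause B)
    (200, 90),
    (310, 150),
]

l = alive
print("interval  qA      qB      q_total  survivors")
for i, (dA, dB) in enumerate(intervals):
    qA, qB = dA / l, dB / l        # crude conditional exit probabilities
    q = qA + qB                    # total probability of decrement
    l_next = l * (1 - q)
    print(f"{i:8d}  {qA:.4f}  {qB:.4f}  {q:.4f}   {l_next:8.0f}")
    l = l_next
```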
Lörincz, András; Póczos, Barnabás
2003-06-01
In optimizations, the dimension of the problem may severely, sometimes exponentially, increase optimization time. Parametric function approximators (FAPPs) have been suggested to overcome this problem. Here, a novel FAPP, cost component analysis (CCA), is described. In CCA, the search space is resampled according to the Boltzmann distribution generated by the energy landscape. That is, CCA converts the optimization problem to density estimation. The structure of the induced density is searched by independent component analysis (ICA). The advantage of CCA is that each independent ICA component can be optimized separately. In turn, (i) CCA intends to partition the original problem into subproblems, (ii) separating (partitioning) the original optimization problem into subproblems may aid interpretation, and, most importantly, (iii) CCA may give rise to large gains in optimization time. Numerical simulations illustrate the working of the algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Y.; Parsons, T.; King, R.
This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer if up-tower, and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture primary drivers for the sizing and design of major drivetrain components.
A numerical study of blood flow using mixture theory.
Wu, Wei-Tao; Aubry, Nadine; Massoudi, Mehrdad; Kim, Jeongho; Antaki, James F
2014-03-01
In this paper, we consider the two-dimensional flow of blood in a rectangular microfluidic channel. We use Mixture Theory to treat this problem as a two-component system: one component is the red blood cells (RBCs), modeled as a generalized Reiner-Rivlin type fluid that accounts for the effects of volume fraction (hematocrit) and the influence of shear rate upon viscosity; the other component, plasma, is assumed to behave as a linear viscous fluid. A CFD solver based on OpenFOAM® was developed and employed to simulate a specific problem, namely blood flow in a two-dimensional micro-channel. Finally, to better understand this two-component flow system and the effects of the different parameters, the equations are made dimensionless and a parametric study is performed.
Assessing state efforts to integrate transportation, land use and climate change.
DOT National Transportation Integrated Search
2016-12-01
Climate change is increasingly recognized as a threat to life on earth. Continued emission of greenhouse gases will cause further : warming and long-lasting changes in all components of the climate system, increasing the likelihood of severe, perv...
Global population structure and adaptive evolution of aflatoxin-producing fungi
USDA-ARS?s Scientific Manuscript database
We employed interspecific principal component analyses for six different categories (geography, species, precipitation, temperature, aflatoxin chemotype profile, and mating type) and inferred maximum likelihood phylogenies for six combined loci, including two aflatoxin cluster regions (aflM/alfN and...
Maity, Arnab; Carroll, Raymond J; Mammen, Enno; Chatterjee, Nilanjan
2009-01-01
Motivated by the problem of testing for genetic effects on complex traits in the presence of gene-environment interaction, we develop score tests in general semiparametric regression problems that involve a Tukey-style 1-degree-of-freedom form of interaction between parametrically and non-parametrically modelled covariates. We find that the score test in this type of model, as recently developed by Chatterjee and co-workers in the fully parametric setting, is biased and requires undersmoothing to be valid in the presence of non-parametric components. Moreover, in the presence of repeated outcomes, the asymptotic distribution of the score test depends on the estimation of functions which are defined as solutions of integral equations, making implementation difficult and computationally taxing. We develop profiled score statistics which are unbiased and asymptotically efficient and can be computed by using standard bandwidth selection methods. In addition, to overcome the difficulty of solving functional equations, we give easy interpretations of the target functions, which in turn allow us to develop estimation procedures that can be easily implemented by using standard computational methods. We present simulation studies to evaluate the type I error and power of the proposed method compared with a naive test that does not consider interaction. Finally, we illustrate our methodology by analysing data from a case-control study of colorectal adenoma that was designed to investigate the association between colorectal adenoma and the candidate gene NAT2 in relation to smoking history.
NASA Astrophysics Data System (ADS)
Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.
2018-06-01
The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.
Parametric Crowd Generation Software for MS&T Simulations and Training
2007-02-20
Contents: Technology Overview; Dynemotion System Components; Dynemotion System Architecture; Dynemotion-Enabled NPC Brain Cycles; Dynemotion API...Contents; Development Project Background Information; Potential Application and Impact for the DoD; Project Objectives, Scope...Methodology; Benefits of the Project; Project Innovation; Lessons Learned and Open Questions; Research and Development Challenges.
ERIC Educational Resources Information Center
Negley, Sandra Anne
The main purpose of this study was to determine and describe the effects of the comprehension component of the criterion-referenced reading program, Wisconsin Design for Reading Skills Development (WDRSD), on reading achievement and self-concept of sixth-grade students. Parametric techniques were employed in a quasi-experimental study involving 44…
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.
Theobald, Douglas L; Wuttke, Deborah S
2006-09-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
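To make the contrast between the two criteria concrete, a simplified sketch (diagonal covariance only; THESEUS additionally models correlations among atoms, which this omits):

```latex
% LS vs. a diagonal-covariance ML criterion for superposing structures
% X_i = {x_ij} onto a mean form M = {m_j} via rotations R_i and
% translations t_i; sigma_j^2 is atom j's variance across the ensemble,
% so variable regions are automatically down-weighted under ML.
\[
  \text{LS:}\;\; \min \sum_{i,j} \lVert x_{ij} - (R_i m_j + t_i) \rVert^{2},
  \qquad
  \text{ML:}\;\; \min \sum_{i,j} \frac{\lVert x_{ij} - (R_i m_j + t_i) \rVert^{2}}{\sigma_j^{2}} .
\]
```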
OBIST methodology incorporating modified sensitivity of pulses for active analogue filter components
NASA Astrophysics Data System (ADS)
Khade, R. H.; Chaudhari, D. S.
2018-03-01
In this paper, an oscillation-based built-in self-test (OBIST) method is used to diagnose catastrophic and parametric faults in integrated circuits. Sallen-Key low-pass and high-pass filter circuits with different gains are used to investigate defects. Variations in seven parameters of the operational amplifier (OP-AMP) - gain, input impedance, output impedance, slew rate, input bias current, input offset current and input offset voltage - as well as catastrophic and parametric defects in components outside the OP-AMP are introduced in the circuit, and the simulation results are analysed. The oscillator output signal is converted to pulses which are used to generate a signature of the circuit. The signature and pulse count change with the type of fault present in the circuit under test (CUT). The change in oscillation frequency is observed for fault detection. The designer has the flexibility to predefine the tolerance band of the cut-off frequency and the range of pulses for which the circuit should be accepted. The fault coverage depends upon the required tolerance band of the CUT. We propose a modification of the sensitivity of the parameter (pulses) to avoid test escape and enhance yield. Results show that the method provides 100% fault coverage for catastrophic faults.
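The accept/reject logic described above reduces to two designer-set bands; a minimal sketch with hypothetical numbers (function name and values are ours):

```python
def obist_pass(f_osc_hz, pulse_count, f_nominal_hz, f_tol_frac, pulse_lo, pulse_hi):
    """Accept the CUT only if the oscillation frequency lies inside the
    designer-defined tolerance band AND the pulse count falls inside the
    accepted signature range; anything else flags a fault."""
    f_ok = abs(f_osc_hz - f_nominal_hz) <= f_tol_frac * f_nominal_hz
    p_ok = pulse_lo <= pulse_count <= pulse_hi
    return f_ok and p_ok

# Example: 10 kHz nominal cut-off, +/-5% band, 950-1050 pulses accepted.
print(obist_pass(10_300.0, 990, 10_000.0, 0.05, 950, 1050))   # True  -> pass
print(obist_pass(11_000.0, 990, 10_000.0, 0.05, 950, 1050))   # False -> fault
```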
Solar tower power plant using a particle-heated steam generator: Modeling and parametric study
NASA Astrophysics Data System (ADS)
Krüger, Michael; Bartsch, Philipp; Pointner, Harald; Zunft, Stefan
2016-05-01
Within the framework of the project HiTExStor II, a system model for the entire power plant, consisting of a volumetric air receiver, air-sand heat exchanger, sand storage system, steam generator and water-steam cycle, was implemented in the software "Ebsilon Professional". For the steam generator, two technologies were considered: fluidized bed coolers and moving bed heat exchangers. Physical models for the non-conventional power plant components, such as the air-sand heat exchanger, fluidized bed cooler and moving bed heat exchanger, had to be created and implemented in the simulation environment. Using the simulation model of the power plant, the individual components and subassemblies were designed and the operating parameters were optimized in extensive parametric studies in terms of the essential degrees of freedom. The annual net electricity output for different systems was determined in annual performance calculations at a selected location (Huelva, Spain) using the optimized values of the studied parameters. The solution with moderate regenerative feed-water heating was found to be the most advantageous. Furthermore, the system with the moving bed heat exchanger prevails over the system with the fluidized bed cooler due to a 6% higher net electricity yield.
Hirayama, Jun-ichiro; Hyvärinen, Aapo; Kiviniemi, Vesa; Kawanabe, Motoaki; Yamashita, Okito
2016-01-01
Characterizing the variability of resting-state functional brain connectivity across subjects and/or over time has recently attracted much attention. Principal component analysis (PCA) serves as a fundamental statistical technique for such analyses. However, performing PCA on high-dimensional connectivity matrices yields complicated “eigenconnectivity” patterns, for which systematic interpretation is a challenging issue. Here, we overcome this issue with a novel constrained PCA method for connectivity matrices by extending the idea of the previously proposed orthogonal connectivity factorization method. Our new method, modular connectivity factorization (MCF), explicitly introduces the modularity of brain networks as a parametric constraint on eigenconnectivity matrices. In particular, MCF analyzes the variability in both intra- and inter-module connectivities, simultaneously finding network modules in a principled, data-driven manner. The parametric constraint provides a compact module-based visualization scheme with which the result can be intuitively interpreted. We develop an optimization algorithm to solve the constrained PCA problem and validate our method in simulation studies and with a resting-state functional connectivity MRI dataset of 986 subjects. The results show that the proposed MCF method successfully reveals the underlying modular eigenconnectivity patterns in more general situations and is a promising alternative to existing methods. PMID:28002474
Tips and Tricks for Successful Application of Statistical Methods to Biological Data.
Schlenker, Evelyn
2016-01-01
This chapter discusses experimental design and the use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, and normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing the use of parametric tests. Alternatively, with skewed data, nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance between committing type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (randomized clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premises and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
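The parametric-versus-nonparametric decision sketched above, in code (illustrative only; the data are synthetic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.lognormal(mean=1.0, sigma=0.6, size=30)   # right-skewed group A
b = rng.lognormal(mean=1.3, sigma=0.6, size=30)   # right-skewed group B

# Parametric route: a log transform normalizes lognormal data, after
# which Student's t test is appropriate.
print(stats.shapiro(np.log(a)).pvalue)            # normality check after transform
print(stats.ttest_ind(np.log(a), np.log(b)))

# Nonparametric route: the rank-based Mann-Whitney U test on raw data.
print(stats.mannwhitneyu(a, b))
```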
Ionic network analysis of tectosilicates: the example of coesite at variable pressure.
Reifenberg, Melina; Thomas, Noel W
2018-04-01
The method of ionic network analysis [Thomas (2017). Acta Cryst. B73, 74-86] is extended to tectosilicates through the example of coesite, the high-pressure polymorph of SiO2. The structural refinements of Černok et al. [Z. Kristallogr. (2014), 229, 761-773] are taken as the starting point for applying the method. Its purpose is to predict the unit-cell parameters and atomic coordinates at (p-T-X) values in-between those of diffraction experiments. The essential development step for tectosilicates is to define a pseudocubic parameterization of the O4 cages of the SiO4 tetrahedra. The six parameters a_PC, b_PC, c_PC, α_PC, β_PC and γ_PC allow a full quantification of the tetrahedral structure, i.e. distortion and enclosed volume. Structural predictions for coesite require that two separate quasi-planar networks are defined, one for the silicon ions and the other for the O4 cage midpoints. A set of parametric curves is used to describe the evolution with pressure of these networks and the pseudocubic parameters. These are derived by fitting to the crystallographic data. Application of the method to monoclinic feldspars and to quartz and cristobalite is discussed. Further, a novel two-parameter quantification of the degree of tetrahedral distortion is described. At pressures in excess of ca 20.45 GPa it is not possible to find a self-consistent solution to the parametric curves for coesite, pointing to the likelihood of a phase transition.
Estimation of rates-across-sites distributions in phylogenetic substitution models.
Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J
2003-10-01
Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
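A minimal sketch of the equal-probability discrete gamma construction discussed above (this version uses category medians; implementations may instead use category means):

```python
import numpy as np
from scipy.stats import gamma

def discrete_gamma_rates(alpha, k):
    """Equal-probability discrete gamma with k categories and mean rate 1:
    take the median of each probability bin as the category rate."""
    mids = (2 * np.arange(k) + 1) / (2 * k)          # bin midpoints in probability
    rates = gamma.ppf(mids, a=alpha, scale=1.0 / alpha)
    return rates / rates.mean()                      # renormalize to mean 1

print(discrete_gamma_rates(alpha=0.5, k=4))    # few categories: top rate truncated
print(discrete_gamma_rates(alpha=0.5, k=100))  # many categories: better tail coverage
```

With small k the largest category rate understates the gamma tail, which is exactly the underestimation of the largest rates reported in the article.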
Terahertz parametric sources and imaging applications
NASA Astrophysics Data System (ADS)
Yamashita, M.; Ogawa, Y.; Otani, C.; Kawase, K.
2005-12-01
We have studied the generation of terahertz (THz) waves by optical parametric processes based on laser light scattering from the polariton mode of nonlinear crystals. Using parametric oscillation of a LiNbO3 or MgO-doped LiNbO3 crystal pumped by a nanosecond Q-switched Nd:YAG laser, we have realized a widely tunable coherent THz-wave source with a simple configuration. We report the detailed characteristics of the oscillation and the radiation, including tunability, spatial and temporal coherence, unidirectivity, and efficiency. Fourier-transform-limited narrowing of the THz-wave spectrum was achieved by introducing the injection-seeding method. Further, we have developed a spectroscopic THz imaging system using a TPO, which allows detection and identification of drugs concealed in envelopes by introducing component spatial pattern analysis. Several images of the envelope are recorded at different THz frequencies and then processed. The final result is an image that reveals what substances are present in the envelope, in what quantity, and how they are distributed across the envelope area. The example presented here shows the identification of three drugs, two of which are illegal, while one is an over-the-counter drug.
Orbit transfer rocket engine technology program: Advanced engine study
NASA Technical Reports Server (NTRS)
Erickson, C. M.
1992-01-01
In Task D.6 of the Advanced Engine Study, three primary subtasks were accomplished: (1) design of parametric data; (2) engine requirement variation studies; and (3) vehicle study/engine study coordination. Parametric data were generated for vacuum thrusts ranging from 7500 lbf to 50,000 lbf, nozzle expansion ratios from 600 to 1200, and engine mixture ratios from 5:1 to 7:1. Failure Modes and Effects Analysis (FMEA) was used as a departure point for these parametric analyses. These data are intended to assist in definition and trade studies. In the Engine Requirements Variation Studies, the individual effects of increasing the throttling ratio from 10:1 to 20:1 and requiring the engine to operate at a maximum mixture ratio of 12:1 were determined. Off design engine balances were generated at these extreme conditions and individual component operating requirements analyzed in detail. Potential problems were identified and possible solutions generated. In the Vehicle Study/Engine Study coordination subtask, vehicle contractor support was provided as needed, addressing a variety of issues uncovered during vehicle trade studies. This support was primarily provided during Technical Interchange Meetings (TIM) in which Space Exploration Initiative (SEI) studies were addressed.
Horovitz, Silvina G; Rossion, Bruno; Skudlarski, Pawel; Gore, John C
2004-08-01
Face perception is typically associated with activation in the inferior occipital, superior temporal (STG), and fusiform gyri (FG) and with an occipitotemporal electrophysiological component peaking around 170 ms on the scalp, the N170. However, the relationship between the N170 and the multiple face-sensitive activations observed in neuroimaging is unclear. It has recently been shown that the amplitude of the N170 component monotonically decreases as Gaussian noise is added to a picture of a face [Jemel et al., 2003]. To help clarify the sources of the N170 without a priori assumptions regarding their number and locations, ERPs and fMRI were recorded in five subjects in the same experiment, in separate sessions. We used a parametric paradigm in which the amplitude of the N170 was modulated by varying the level of noise in a picture, and identified regions where the percent signal change in fMRI correlated with the ERP data. N170 signals were observed for pictures of both cars and faces but were stronger for faces. A monotonic decrease with added noise was observed for the N170 at right-hemisphere sites but was less clear at left and occipital central sites. Correlations between fMRI signal and N170 amplitudes for faces were highly significant (P < 0.001) in the bilateral fusiform gyrus and superior temporal gyrus. For cars, the strongest correlations were observed in the parahippocampal region and in the STG (P < 0.005). Besides contributing to clarifying the spatiotemporal course of face processing, this study illustrates how ERP information may be used synergistically in fMRI analyses. Parametric designs may be developed further to provide some timing information on fMRI activity and help identify the generators of ERP signals.
Parametric resonance in quantum electrodynamics vacuum birefringence
NASA Astrophysics Data System (ADS)
Arza, Ariel; Elias, Ricardo Gabriel
2018-05-01
Vacuum magnetic birefringence is one of the most interesting nonlinear phenomena in quantum electrodynamics, because it is a pure photon-photon effect of the theory and it directly signals the violation of the classical superposition principle of electromagnetic fields in the full quantum theory. We perform analytical and numerical calculations for an electromagnetic wave interacting with an oscillating external magnetic field. We find that in an ideal cavity, when the external field frequency is close to the electromagnetic wave frequency, the normal and parallel components of the wave undergo parametric resonance at different rates, producing a vacuum birefringence effect that grows in time. We also study the case where there is no cavity and the oscillating magnetic field is spatially localized in a region of length L. In both cases we also find a rotation of the elliptical axis.
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies.
Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong
2017-05-07
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of a kinetic model to PET time-activity curves (TACs)) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans, each containing 1/8th of the total number of events, were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprising the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml·min⁻¹·ml⁻¹), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at a matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at a matched bias level.
In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM and OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
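A toy rendition of the indirect method's voxel-wise kinetic fit (one-tissue compartment model): this simplifies the paper's two spillover fractions to a single blood term, and all names, grids and values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.05, 5.0, 16)                   # frame mid-times (min), assumed uniform
dt = t[1] - t[0]
cp = 50.0 * t * np.exp(-3.0 * t)                 # toy image-derived input function

def tissue_tac(t, K1, k2, v_b):
    """One-tissue compartment TAC: C_T = (1-v_b)*(K1*exp(-k2*t) conv Cp) + v_b*Cp,
    with the convolution done discretely on the frame grid."""
    conv = np.convolve(cp, K1 * np.exp(-k2 * t))[: t.size] * dt
    return (1.0 - v_b) * conv + v_b * cp

true = (0.8, 0.4, 0.2)                           # K1, k2, blood fraction
tac = tissue_tac(t, *true) + np.random.default_rng(1).normal(0, 0.05, t.size)
est, _ = curve_fit(tissue_tac, t, tac, p0=(0.5, 0.5, 0.1))
print(dict(zip(("K1", "k2", "v_b"), est.round(3))))
```

The direct method replaces this post-reconstruction fit by folding the same compartment model into the Poisson log-likelihood of the sinogram data.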
NASA Astrophysics Data System (ADS)
Alevizos, Evangelos; Snellen, Mirjam; Simons, Dick; Siemes, Kerstin; Greinert, Jens
2018-06-01
This study applies three classification methods exploiting the angular dependence of acoustic seafloor backscatter, along with high-resolution sub-bottom profiling, for seafloor sediment characterization in the Eckernförde Bay, Baltic Sea, Germany. This area is well suited for acoustic backscatter studies due to its shallowness, its smooth bathymetry and the presence of a wide range of sediment types. Backscatter data were acquired using a Seabeam1180 (180 kHz) multibeam echosounder, and sub-bottom profiler data were recorded using a SES-2000 parametric sonar transmitting at 6 and 12 kHz. The high density of seafloor soundings allowed backscatter layers to be extracted for five beam angles over a large part of the surveyed area. A Bayesian probability method was employed for sediment classification based on the backscatter variability at a single incidence angle, whereas Maximum Likelihood Classification (MLC) and Principal Components Analysis (PCA) were applied to the multi-angle layers. The Bayesian approach was used for identifying the optimum number of acoustic classes because cluster validation is carried out prior to class assignment and class outputs are ordinal categorical values. The method is based on the principle that backscatter values from a single incidence angle follow a normal distribution for a particular sediment type. The resulting Bayesian classes were well correlated to median grain sizes and the percentage of coarse material. The MLC method uses angular response information from five layers of training areas extracted from the Bayesian classification map. The subsequent PCA analysis is based on the transformation of these five layers into two principal components that comprise most of the data variability. These principal components were clustered into five classes after running an external cluster validation test. In general, both methods, MLC and PCA, separated the various sediment types effectively, showing good agreement (kappa > 0.7) with the Bayesian approach, which also correlates well with ground-truth data (r² > 0.7). In addition, sub-bottom data were used in conjunction with the Bayesian classification results to characterize acoustic classes with respect to their geological and stratigraphic interpretation. The joint interpretation of seafloor and sub-seafloor data sets proved to be an efficient approach for better understanding seafloor backscatter patchiness and for discriminating acoustically similar classes in different geological/bathymetric settings.
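As a stand-in for the single-angle Bayesian step (not the authors' exact procedure), a Gaussian-mixture sketch that echoes its two ingredients, per-sediment normality and validation of the class count before assignment:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy single-angle backscatter strengths (dB): three sediment types.
db = np.concatenate([rng.normal(-34, 1.5, 400),    # mud
                     rng.normal(-28, 1.5, 400),    # sand
                     rng.normal(-22, 1.5, 200)])   # coarse material
X = db.reshape(-1, 1)

# Choose the number of acoustic classes by BIC before assigning classes.
models = {k: GaussianMixture(k, random_state=0).fit(X) for k in range(1, 7)}
best_k = min(models, key=lambda k: models[k].bic(X))
classes = models[best_k].predict(X)                # ordinal class per sounding
print(best_k, np.bincount(classes))
```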
Using the Health Belief Model for Bulimia Prevention.
ERIC Educational Resources Information Center
Grodner, Michele
1991-01-01
Discusses application of the Health Belief Model to the prevention of bulimia, describing each model component. The article considers the individual's beliefs about bulimia and bulimic-like behaviors as a means of predicting the likelihood of behavior change to prevent clinically diagnosable bulimia. (SM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Jonathan; Thompson, Sandra E.; Brothers, Alan J.
The ability to estimate the likelihood of future events based on current and historical data is essential to the decision-making process of many government agencies. Successful predictions related to terror events and characterization of the associated risks will support the development of options for countering these events. The predictive tasks involve both technical and social component models. The social components have presented a particularly difficult challenge. This paper outlines some technical considerations of this modeling activity. Both data and predictions associated with the technical and social models will likely be known with differing certainties or accuracies; a critical challenge is linking across these model domains while respecting this fundamental difference in certainty level. This paper describes the technical approach being taken to develop the social model and the identification of the significant interfaces between the technical and social modeling in the context of analysis of diversion of nuclear material.
Distillation of the two-mode squeezed state.
Kurochkin, Yury; Prasad, Adarsh S; Lvovsky, A I
2014-02-21
We experimentally demonstrate entanglement distillation of the two-mode squeezed state obtained by parametric down-conversion. Applying the photon annihilation operator to both modes, we raise the fraction of the photon-pair component in the state, resulting in the increase of both squeezing and entanglement by about 50%. Because of the low amount of initial squeezing, the distilled state does not experience significant loss of Gaussian character.
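The mechanism can be seen in a two-line calculation (standard two-mode squeezed vacuum notation; λ = tanh r):

```latex
% Joint photon annihilation on the two-mode squeezed vacuum:
\[
  \lvert \psi \rangle = \sqrt{1-\lambda^{2}} \sum_{n=0}^{\infty} \lambda^{n} \lvert n,n\rangle,
  \qquad
  \hat{a}\,\hat{b}\,\lvert \psi \rangle \propto \sum_{m=0}^{\infty} (m+1)\,\lambda^{m} \lvert m,m\rangle .
\]
% The (m+1) weights tilt the photon-number distribution toward higher
% pair components, so for small lambda the |1,1> fraction, and with it
% the squeezing and entanglement, increases, as reported.
```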
Atmospheric cloud physics thermal systems analysis
NASA Technical Reports Server (NTRS)
1977-01-01
Engineering analyses performed on the Atmospheric Cloud Physics (ACPL) Science Simulator expansion chamber and associated thermal control/conditioning system are reported. Analyses were made to develop a verified thermal model and to perform parametric thermal investigations to evaluate systems performance characteristics. Thermal network representations of solid components and the complete fluid conditioning system were solved simultaneously using the Systems Improved Numerical Differencing Analyzer (SINDA) computer program.
The Importance of Practice in the Development of Statistics.
1983-01-01
NRC Technical Summary Report #2471: The Importance of Practice in the Development of Statistics. Topics include component analysis, bioassay, limits for a ratio, quality control, sampling inspection, non-parametric tests, transformation theory, ARIMA time series models, sequential tests, cumulative sum charts, data analysis plotting techniques, and a resolution of the Bayes-frequentist controversy.
Energy-exchange collisions of dark-bright-bright vector solitons.
Radhakrishnan, R; Manikandan, N; Aravinthan, K
2015-12-01
By constructing a more general form of the three-component dark-bright-bright mixed vector one-soliton solution of the generalized Manakov model, with nine free real parameters, we find that a dark component can guide the practically interesting bright-bright vector one-soliton into two different parametric domains, giving rise to different physical situations. Moreover, our main investigation of the collision dynamics of such mixed vector solitons, based on the multisoliton solution of the generalized Manakov model constructed with the help of the Hirota technique, reveals that the dark-bright-bright vector two-soliton supports energy-exchange collision dynamics. In particular, the dark component preserves its initial form, along with the energy-exchange collision property of the bright-bright vector two-soliton solution of the Manakov model, during collision. In addition, the interactions between bound-state dark-bright-bright vector solitons reveal oscillations in their amplitudes. A similar kind of breathing effect was also observed experimentally in Bose-Einstein condensates. Some possible ways are suggested theoretically not only to control this breathing effect but also to manage the beating, bouncing, jumping, and attraction effects in the collision dynamics of dark-bright-bright vector solitons. The roles of the multiple free parameters in our solution are examined in defining the polarization vector, envelope speed, envelope width, envelope amplitude, grayness, and complex modulation of the solution. It is interesting to note that the polarization vector of our mixed vector one-soliton evolves on a sphere or hyperboloid depending upon the initial parametric choices.
Joint analysis of BICEP2/keck array and Planck Data.
Ade, P A R; Aghanim, N; Ahmed, Z; Aikin, R W; Alexander, K D; Arnaud, M; Aumont, J; Baccigalupi, C; Banday, A J; Barkats, D; Barreiro, R B; Bartlett, J G; Bartolo, N; Battaner, E; Benabed, K; Benoît, A; Benoit-Lévy, A; Benton, S J; Bernard, J-P; Bersanelli, M; Bielewicz, P; Bischoff, C A; Bock, J J; Bonaldi, A; Bonavera, L; Bond, J R; Borrill, J; Bouchet, F R; Boulanger, F; Brevik, J A; Bucher, M; Buder, I; Bullock, E; Burigana, C; Butler, R C; Buza, V; Calabrese, E; Cardoso, J-F; Catalano, A; Challinor, A; Chary, R-R; Chiang, H C; Christensen, P R; Colombo, L P L; Combet, C; Connors, J; Couchot, F; Coulais, A; Crill, B P; Curto, A; Cuttaia, F; Danese, L; Davies, R D; Davis, R J; de Bernardis, P; de Rosa, A; de Zotti, G; Delabrouille, J; Delouis, J-M; Désert, F-X; Dickinson, C; Diego, J M; Dole, H; Donzelli, S; Doré, O; Douspis, M; Dowell, C D; Duband, L; Ducout, A; Dunkley, J; Dupac, X; Dvorkin, C; Efstathiou, G; Elsner, F; Enßlin, T A; Eriksen, H K; Falgarone, E; Filippini, J P; Finelli, F; Fliescher, S; Forni, O; Frailis, M; Fraisse, A A; Franceschi, E; Frejsel, A; Galeotta, S; Galli, S; Ganga, K; Ghosh, T; Giard, M; Gjerløw, E; Golwala, S R; González-Nuevo, J; Górski, K M; Gratton, S; Gregorio, A; Gruppuso, A; Gudmundsson, J E; Halpern, M; Hansen, F K; Hanson, D; Harrison, D L; Hasselfield, M; Helou, G; Henrot-Versillé, S; Herranz, D; Hildebrandt, S R; Hilton, G C; Hivon, E; Hobson, M; Holmes, W A; Hovest, W; Hristov, V V; Huffenberger, K M; Hui, H; Hurier, G; Irwin, K D; Jaffe, A H; Jaffe, T R; Jewell, J; Jones, W C; Juvela, M; Karakci, A; Karkare, K S; Kaufman, J P; Keating, B G; Kefeli, S; Keihänen, E; Kernasovskiy, S A; Keskitalo, R; Kisner, T S; Kneissl, R; Knoche, J; Knox, L; Kovac, J M; Krachmalnicoff, N; Kunz, M; Kuo, C L; Kurki-Suonio, H; Lagache, G; Lähteenmäki, A; Lamarre, J-M; Lasenby, A; Lattanzi, M; Lawrence, C R; Leitch, E M; Leonardi, R; Levrier, F; Lewis, A; Liguori, M; Lilje, P B; Linden-Vørnle, M; López-Caniego, M; Lubin, P M; Lueker, M; Macías-Pérez, J F; Maffei, B; Maino, D; Mandolesi, N; Mangilli, A; Maris, M; Martin, P G; Martínez-González, E; Masi, S; Mason, P; Matarrese, S; Megerian, K G; Meinhold, P R; Melchiorri, A; Mendes, L; Mennella, A; Migliaccio, M; Mitra, S; Miville-Deschênes, M-A; Moneti, A; Montier, L; Morgante, G; Mortlock, D; Moss, A; Munshi, D; Murphy, J A; Naselsky, P; Nati, F; Natoli, P; Netterfield, C B; Nguyen, H T; Nørgaard-Nielsen, H U; Noviello, F; Novikov, D; Novikov, I; O'Brient, R; Ogburn, R W; Orlando, A; Pagano, L; Pajot, F; Paladini, R; Paoletti, D; Partridge, B; Pasian, F; Patanchon, G; Pearson, T J; Perdereau, O; Perotto, L; Pettorino, V; Piacentini, F; Piat, M; Pietrobon, D; Plaszczynski, S; Pointecouteau, E; Polenta, G; Ponthieu, N; Pratt, G W; Prunet, S; Pryke, C; Puget, J-L; Rachen, J P; Reach, W T; Rebolo, R; Reinecke, M; Remazeilles, M; Renault, C; Renzi, A; Richter, S; Ristorcelli, I; Rocha, G; Rossetti, M; Roudier, G; Rowan-Robinson, M; Rubiño-Martín, J A; Rusholme, B; Sandri, M; Santos, D; Savelainen, M; Savini, G; Schwarz, R; Scott, D; Seiffert, M D; Sheehy, C D; Spencer, L D; Staniszewski, Z K; Stolyarov, V; Sudiwala, R; Sunyaev, R; Sutton, D; Suur-Uski, A-S; Sygnet, J-F; Tauber, J A; Teply, G P; Terenzi, L; Thompson, K L; Toffolatti, L; Tolan, J E; Tomasi, M; Tristram, M; Tucci, M; Turner, A D; Valenziano, L; Valiviita, J; Van Tent, B; Vibert, L; Vielva, P; Vieregg, A G; Villa, F; Wade, L A; Wandelt, B D; Watson, R; Weber, A C; Wehus, I K; White, M; White, S D M; Willmert, J; Wong, C L; Yoon, K W; Yvon, D; Zacchei, A; 
Zonca, A
2015-03-13
We report the results of a joint analysis of data from BICEP2/Keck Array and Planck. BICEP2 and Keck Array have observed the same approximately 400 deg^{2} patch of sky centered on RA 0 h, Dec. -57.5°. The combined maps reach a depth of 57 nK deg in Stokes Q and U in a band centered at 150 GHz. Planck has observed the full sky in polarization at seven frequencies from 30 to 353 GHz, but much less deeply in any given region (1.2 μK deg in Q and U at 143 GHz). We detect 150×353 cross-correlation in B modes at high significance. We fit the single- and cross-frequency power spectra at frequencies ≥150 GHz to a lensed-ΛCDM model that includes dust and a possible contribution from inflationary gravitational waves (as parametrized by the tensor-to-scalar ratio r), using a prior on the frequency spectral behavior of polarized dust emission from previous Planck analysis of other regions of the sky. We find strong evidence for dust and no statistically significant evidence for tensor modes. We probe various model variations and extensions, including adding a synchrotron component in combination with lower frequency data, and find that these make little difference to the r constraint. Finally, we present an alternative analysis which is similar to a map-based cleaning of the dust contribution, and show that this gives similar constraints. The final result is expressed as a likelihood curve for r, and yields an upper limit r_{0.05}<0.12 at 95% confidence. Marginalizing over dust and r, lensing B modes are detected at 7.0σ significance.
Cluster and propensity based approximation of a network
2013-01-01
Background The models in this article generalize current models for both correlation networks and multigraph networks. Correlation networks are widely applied in genomics research. In contrast to general networks, it is straightforward to test the statistical significance of an edge in a correlation network. It is also easy to decompose the underlying correlation matrix and generate informative network statistics such as the module eigenvector. However, correlation networks only capture the connections between numeric variables. An open question is whether one can find suitable decompositions of the similarity measures employed in constructing general networks. Multigraph networks are attractive because they support likelihood based inference. Unfortunately, it is unclear how to adjust current statistical methods to detect the clusters inherent in many data sets. Results Here we present an intuitive and parsimonious parametrization of a general similarity measure such as a network adjacency matrix. The cluster and propensity based approximation (CPBA) of a network not only generalizes correlation network methods but also multigraph methods. In particular, it gives rise to a novel and more realistic multigraph model that accounts for clustering and provides likelihood based tests for assessing the significance of an edge after controlling for clustering. We present a novel Majorization-Minimization (MM) algorithm for estimating the parameters of the CPBA. To illustrate the practical utility of the CPBA of a network, we apply it to gene expression data and to a bi-partite network model for diseases and disease genes from the Online Mendelian Inheritance in Man (OMIM). Conclusions The CPBA of a network is theoretically appealing since a) it generalizes correlation and multigraph network methods, b) it improves likelihood based significance tests for edge counts, c) it directly models higher-order relationships between clusters, and d) it suggests novel clustering algorithms. The CPBA of a network is implemented in Fortran 95 and bundled in the freely available R package PropClust. PMID:23497424
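One plausible reading of the multiplicative structure behind the CPBA (stated as our assumption, not the paper's exact notation): the similarity between nodes i and j factorizes into node propensities and a cluster-by-cluster term,

```latex
\[
  A_{ij} \;\approx\; p_i \, p_j \, q_{\,cl(i),\,cl(j)} ,
\]
% p_i, p_j: node propensities; q: cluster-cluster similarity matrix
% indexed by the cluster labels cl(i), cl(j).
```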
Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L
2005-12-01
Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation
Meyer, Karin
2016-01-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681
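In symbols, the modification amounts to maximizing a penalized log-likelihood (notation ours):

```latex
% Penalized ML estimation of covariance components theta: psi sets the
% stringency of the mild default penalty P(.), which is placed on
% scale-free functions (e.g. correlations) of the covariances.
\[
  \hat{\theta}_{P} = \arg\max_{\theta}\,
  \bigl\{ \log L(\theta) - \psi\, P(\theta) \bigr\} .
\]
```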
Social Support and HIV Risks Among Migrant and Non-Migrant Market Workers in Almaty, Kazakhstan.
Mergenova, Gaukhar; Shaw, Stacey A; Terlikbayeva, Assel; Gilbert, Louisa; Gensburg, Lenore; Primbetova, Sholpan; El-Bassel, Nabila
2017-08-01
Migration processes are among the primary factors facilitating the heterosexual spread of HIV. The study examines the relationship between social support, sexual HIV risk behaviors and sexually transmitted infections (STIs) among 1342 male migrant and non-migrant market workers from Barakholka Market in Almaty, Kazakhstan. We found that: (1) a higher level of perceived social support [Enhancing Recovery in Coronary Heart Disease (ENRICHD) Social Support Instrument (ESSI) score] was associated with a lower likelihood of having sex with a female sex worker (FSW) [OR = 0.952 (0.927, 0.978), p < .001]; (2) higher availability of friends was associated with a higher likelihood of having STIs [OR = 1.244 (1.007, 1.537), p < .05]; (3) larger network size was associated with a higher likelihood of having STIs [OR = 1.201 (1.026, 1.407), p < .05]; and (4) loneliness was associated with an increased likelihood of having unprotected sex with any female partner [RR = 1.102 (1.027, 1.182), p < .05]. Results suggest that social support factors should be considered as a component of HIV and STI prevention programs for male migrant workers from Central Asia in Kazakhstan.
NASA Astrophysics Data System (ADS)
Zha, N.; Capaldi, D. P. I.; Pike, D.; McCormack, D. G.; Cunningham, I. A.; Parraga, G.
2015-03-01
Pulmonary x-ray computed tomography (CT) may be used to characterize emphysema and airways disease in patients with chronic obstructive pulmonary disease (COPD). One analysis approach, parametric response mapping (PRM), utilizes registered inspiratory and expiratory CT image volumes and CT-density-histogram thresholds, but there is no consensus regarding the threshold values used, or their clinical meaning. Principal component analysis (PCA) of the CT density histogram can be exploited to quantify emphysema using data-driven CT-density-histogram thresholds. Thus, the objective of this proof-of-concept demonstration was to develop a PRM approach using PCA-derived thresholds in COPD patients and ex-smokers without airflow limitation. Methods: Fifteen COPD ex-smokers and 5 normal ex-smokers were evaluated. Thoracic CT images were acquired at full inspiration and full expiration, and these images were non-rigidly co-registered. PCA was performed on the CT density histograms, from which the components with eigenvalues greater than one were summed. Since the values of the principal component curve correlate directly with the variability in the sample, the maximum and minimum points on the curve were used as threshold values for the PCA-adjusted PRM technique. Results: A significant correlation was determined between conventional and PCA-adjusted PRM with 3He MRI apparent diffusion coefficient (p < 0.001), with CT RA950 (p < 0.0001), as well as with 3He MRI ventilation defect percent, a measurement of both small airways disease (p = 0.049 and p = 0.06, respectively) and emphysema (p = 0.02). Conclusions: PRM generated using PCA thresholds of the CT density histogram showed significant correlations with CT and 3He MRI measurements of emphysema, but not airways disease.
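A schematic of the PCA-threshold step (synthetic histograms; the eigenvalue-greater-than-one rule is imitated here with a relative cutoff, so treat this purely as an illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
hu_bins = np.arange(-1000, -500, 5)                         # HU bin centres (100 bins)
hists = rng.dirichlet(np.full(hu_bins.size, 2.0), size=20)  # subjects x bins

# PCA of the density histograms via the bin-by-bin covariance matrix.
centered = hists - hists.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(centered, rowvar=False))
keep = evals > 1e-6 * evals.max()                  # stand-in for "eigenvalue > 1"

# Sum the retained component curves; the curve's extrema give the
# data-driven density thresholds that replace fixed cutoffs.
curve = evecs[:, keep].sum(axis=1)
print(hu_bins[np.argmin(curve)], hu_bins[np.argmax(curve)])
```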
Parametric Inlet Tested in Glenn's 10- by 10-Foot Supersonic Wind Tunnel
NASA Technical Reports Server (NTRS)
Slater, John W.; Davis, David O.; Solano, Paul A.
2005-01-01
The Parametric Inlet is an innovative concept for the inlet of a gas-turbine propulsion system for supersonic aircraft. The concept approaches the performance of past inlet concepts, but with less mechanical complexity, lower weight, and greater aerodynamic stability and safety. Potential applications include supersonic cruise aircraft and missiles. The Parametric Inlet uses tailored surfaces to turn the incoming supersonic flow inward toward an axis of symmetry. The terminal shock spans the opening of the subsonic diffuser leading to the engine. The external cowl area is smaller, which reduces cowl drag. The use of only external supersonic compression avoids inlet unstart--an unsafe shock instability present in previous inlet designs that use internal supersonic compression. This eliminates the need for complex mechanical systems to control unstart, which reduces weight. The conceptual design was conceived by TechLand Research, Inc. (North Olmsted, OH), which received funding through NASA's Small-Business Innovation Research program. The Boeing Company (Seattle, WA) also participated in the conceptual design. The NASA Glenn Research Center became involved starting with the preliminary design of a model for testing in Glenn's 10- by 10-Foot Supersonic Wind Tunnel (10×10 SWT). The inlet was sized for a speed of Mach 2.35 while matching requirements of an existing cold pipe used in previous inlet tests. The parametric aspects of the model included interchangeable components for different cowl lip, throat slot, and sidewall leading-edge shapes and different vortex generator configurations. Glenn researchers used computational fluid dynamics (CFD) tools for three-dimensional, turbulent flow analysis to further refine the aerodynamic design.
Comment on "Parametric Instability Induced by X-Mode Wave Heating at EISCAT" by Wang et al. (2016)
NASA Astrophysics Data System (ADS)
Blagoveshchenskaya, N. F.; Borisova, T. D.; Yeoman, T. K.
2017-12-01
In their recent article, Wang et al. (2016) analyzed observations from EISCAT (European Incoherent Scatter) Scientific Association Russian X-mode heating experiments and claimed to explain the potential mechanisms for the parametric decay instability (PDI) and oscillating two-stream instability (OTSI). Wang et al. (2016) state that they cannot separate the HF-enhanced plasma and ion lines excited by the O or X mode in the EISCAT UHF radar spectra. Because of this, they distinguished the parametric instability excited by O-/X-mode heating waves according to their different excitation heights. Their reflection heights were determined from ionosonde records, which provide only a rough measure of excitation altitudes and cannot be used to separate the O- and X-mode effects. A serious limitation of their analysis is the use of a 30 s integration time for the UHF radar data. There are also serious disagreements between their analysis and the actual observations. It is the radical difference in the behavior of the X- and O-mode plasma and ion line spectra, derived with a 5 s resolution, that provides the correct separation of the X- and O-mode effects. Wang et al. (2016) do not discuss or explain how the parallel component of the electric field is generated under X-mode heating. Apart from the leakage to the O mode, their results do not explain the potential mechanisms for PDI and OTSI and add nothing to understanding the physical factors accounting for the parametric instability generated by an X-mode HF pump wave.
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis
2016-04-01
There have been tremendous improvements in distributed hydrologic modeling (DHM), which have made process-based simulation with a high spatiotemporal resolution applicable on a large spatial scale. Despite increasing information on the heterogeneous properties of a catchment, DHM is still subject to uncertainties inherent in model structure, parameters and input forcing. Sequential data assimilation (DA) may facilitate improved streamflow prediction via DHM by using real-time observations to correct internal model states. In conventional DA methods such as state updating, however, parametric uncertainty is often ignored, mainly due to practical limitations of methodology to specify modeling uncertainty with limited ensemble members. If parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of DHM may be insufficient to capture the dynamics of observations, which may degrade predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we present a global multi-parametric ensemble approach to incorporate the parametric uncertainty of DHM in DA to improve streamflow predictions. To effectively represent and control the uncertainty of high-dimensional parameters with a limited number of ensemble members, the MPR method is incorporated within DA. Lagged particle filtering is utilized to account for the response times and non-Gaussian characteristics of internal hydrologic processes. Hindcasting experiments are implemented to evaluate the impacts of the proposed DA method on streamflow predictions in multiple European river basins having different climate and catchment characteristics. Because augmentation of parameters is not required within an assimilation window, the approach remains stable with limited ensemble members and is viable for practical use.
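A generic sequential-importance-resampling step of the kind the lagged particle filter builds on (shapes and names are ours; the actual mHM implementation differs):

```python
import numpy as np

def lagged_pf_update(weights, sim_q, obs_q, sigma, rng):
    """Reweight particles by the Gaussian likelihood of a lagged window
    of observed streamflow given each particle's simulation, then
    resample when the effective sample size collapses.
    weights: (n,), sim_q: (n, n_lags), obs_q: (n_lags,)."""
    loglik = -0.5 * np.sum(((sim_q - obs_q) / sigma) ** 2, axis=1)
    w = weights * np.exp(loglik - loglik.max())
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < 0.5 * w.size:          # degenerate ensemble
        idx = rng.choice(w.size, size=w.size, p=w)   # resample particle indices
        return np.full(w.size, 1.0 / w.size), idx
    return w, np.arange(w.size)
```

The lagged window lets slowly responding internal states be informed by observations that arrive after the state's effective response time.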
Supercritical nonlinear parametric dynamics of Timoshenko microbeams
NASA Astrophysics Data System (ADS)
Farokhi, Hamed; Ghayesh, Mergen H.
2018-06-01
The nonlinear supercritical parametric dynamics of a Timoshenko microbeam subject to an axial harmonic excitation force is examined theoretically, by means of different numerical techniques, and employing a high-dimensional analysis. The time-variant axial load is assumed to consist of a mean value along with harmonic fluctuations. In terms of modelling, a continuous expression for the elastic potential energy of the system is developed based on the modified couple stress theory, taking into account small-size effects; the kinetic energy of the system is also modelled as a continuous function of the displacement field. Hamilton's principle is employed to balance the energies and to obtain the continuous model of the system. Employing the Galerkin scheme along with an assumed-mode technique, the energy terms are reduced, yielding a second-order reduced-order model with a finite number of degrees of freedom. A transformation is carried out to convert the second-order reduced-order model into a double-dimensional first-order one. A bifurcation analysis is performed for the system in the absence of the axial load fluctuations. Moreover, a mean value for the axial load is selected in the supercritical range, and the principal parametric resonant response, due to the time-variant component of the axial load, is obtained; as opposed to transversely excited systems, for parametrically excited systems (such as the problem here) the nonlinear resonance occurs in the vicinity of twice any natural frequency of the linear system. This is accomplished via the pseudo-arclength continuation technique, direct time integration, an eigenvalue analysis, and the Floquet theory for stability. The natural frequencies of the system prior to and beyond buckling are also determined. Moreover, the effect of different system parameters on the nonlinear supercritical parametric dynamics of the system is analysed, with special consideration of the effect of the length-scale parameter.
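A single-mode caricature makes the "twice any natural frequency" statement transparent (notation ours; N_cr is the critical buckling load):

```latex
% Axial load N(t) = N_0 + N_1 cos(Omega t) enters the modal stiffness,
% giving a Mathieu-type equation whose principal parametric resonance
% lies near Omega = 2 omega_n:
\[
  \ddot{q} + \omega_n^{2}\left(1 - \frac{N_0 + N_1 \cos \Omega t}{N_{cr}}\right) q
  + \text{(nonlinear terms)} = 0 .
\]
```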
Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L
2018-01-01
We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster-level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.
Adaptive time-sequential binary sensing for high dynamic range imaging
NASA Astrophysics Data System (ADS)
Hu, Chenhui; Lu, Yue M.
2012-06-01
We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switching from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With an oracle knowledge of the incident light intensity, one can pick an optimal threshold (for that light intensity), and the corresponding Fisher information contained in the output sequence follows closely that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find the globally optimal solution owing to the concavity of the log-likelihood functions. Compared with conventional image sensors and the strategy that utilizes a constant single-photon threshold considered in previous work, the proposed scheme attains orders of magnitude improvement in sensor dynamic range.
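A toy simulation conveys the flavor of such a threshold-updating rule. The simple up/down rule below is an assumption for illustration (the paper's rule and its optimality analysis differ in detail); it shows a threshold random walk whose steady-state visits concentrate near an intensity-dependent level.

```python
# Toy adaptive one-bit pixel: Poisson photon arrivals, ordered threshold
# states, and a hypothetical up/down updating rule (not the paper's exact
# rule). The visit histogram concentrates near a level set by the intensity.
import numpy as np

rng = np.random.default_rng(0)
lam = 20.0                        # true light intensity (photons/frame)
thresholds = np.arange(1, 101)    # ordered threshold states
state = 0                         # start at the lowest threshold
visits = np.zeros(len(thresholds))

for t in range(100_000):
    q = thresholds[state]
    b = rng.poisson(lam) >= q     # one-bit pixel output
    visits[state] += 1
    # A '1' suggests the threshold is too low, a '0' that it is too high.
    state = min(state + 1, len(thresholds) - 1) if b else max(state - 1, 0)

print("most-visited threshold:", thresholds[np.argmax(visits)])  # near lam
```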
Anderson v. University of Wisconsin: Handicap and Race Discrimination in Readmission Procedures.
ERIC Educational Resources Information Center
Smith, Elizabeth R.
1989-01-01
"Anderson v. University of Wisconsin" gives important guidance to universities by detailing the components of race and handicap discrimination claims, and illustrating how these claims can succeed. Readmission procedures that could reduce the likelihood of charges of discrimination are suggested. (Author/MLW)
Nagashima, Yoshihiko; Oosako, Takuya; Takase, Yuichi; Ejiri, Akira; Watanabe, Osamu; Kobayashi, Hiroaki; Adachi, Yuuki; Tojo, Hiroshi; Yamaguchi, Takashi; Kurashina, Hiroki; Yamada, Kotaro; An, Byung Il; Kasahara, Hiroshi; Shimpo, Fujio; Kumazawa, Ryuhei; Hayashi, Hiroyuki; Matsuzawa, Haduki; Hiratsuka, Junichi; Hanashima, Kentaro; Kakuda, Hidetoshi; Sakamoto, Takuya; Wakatsuki, Takuma
2010-06-18
We present an observation of beat oscillation generation by coupled modes associated with parametric decay instability (PDI) during radio frequency (rf) wave heating experiments on the Tokyo Spherical Tokamak-2. Nearly identical PDI spectra, which are characterized by the coexistence of the rf pump wave, the lower-sideband wave, and the low-frequency oscillation in the ion-cyclotron range of frequency, are observed at various locations in the edge plasma. A bispectral power analysis was used to experimentally discriminate beat oscillation from the resonant mode for the first time. The pump and lower-sideband waves have resonant mode components, while the low-frequency oscillation is exclusively excited by nonlinear coupling of the pump and lower-sideband waves. Newly discovered nonlocal transport channels in spectral space and in real space via PDI are described.
[The research protocol VI: How to choose the appropriate statistical test. Inferential statistics].
Flores-Ruiz, Eric; Miranda-Novales, María Guadalupe; Villasís-Keever, Miguel Ángel
2017-01-01
The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference means drawing conclusions from tests performed on data obtained from a sample of a population. Statistical tests are used to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was obtained. However, choosing the appropriate statistical test often poses a challenge for novice researchers. Choosing the test requires taking three aspects into account: the research design, the number of measurements, and the scale of measurement of the variables. Statistical tests fall into two groups, parametric and nonparametric; parametric tests can only be used if the data follow a normal distribution. Choosing the right statistical test makes it easier for readers to understand and apply the results.
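A minimal sketch of this decision flow, using SciPy and assuming a Shapiro-Wilk normality check at the 5% level (one common, but not the only, choice):

```python
# Toy illustration: check the normality assumption, then choose between a
# parametric and a non-parametric two-sample test accordingly.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.lognormal(mean=0.0, sigma=0.8, size=30)   # skewed sample
group_b = rng.lognormal(mean=0.4, sigma=0.8, size=30)

# Shapiro-Wilk normality check on each group.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))

if normal:
    stat, p = stats.ttest_ind(group_a, group_b)          # parametric
    test = "independent t-test"
else:
    stat, p = stats.mannwhitneyu(group_a, group_b)       # non-parametric
    test = "Mann-Whitney U"

print(f"{test}: statistic={stat:.3f}, p={p:.4f}")
```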
DOE Office of Scientific and Technical Information (OSTI.GOV)
Advani, S.H.; Lee, T.S.; Moon, H.
1992-10-01
The analysis of pertinent energy components or affiliated characteristic times for hydraulic stimulation processes serves as an effective tool for fracture configuration design, optimization, and control. This evaluation, in conjunction with parametric sensitivity studies, provides a rational basis for quantifying dominant process mechanisms and the roles of specified reservoir properties relative to controllable hydraulic fracture variables for a wide spectrum of treatment scenarios. Results are detailed for the following multi-task effort: (a) application of the characteristic time concept and parametric sensitivity studies for specialized fracture geometries (rectangular, penny-shaped, elliptical) and three-layered elliptic crack models (in situ stress, elastic moduli, and fracture toughness contrasts); (b) incorporation of leak-off effects for the models investigated in (a); (c) simulation of generalized hydraulic fracture models and investigation of the role of controllable variables and uncontrollable system properties; (d) development of guidelines for hydraulic fracture design and optimization.
Thermal Testing and Analysis of an Efficient High-Temperature Multi-Screen Internal Insulation
NASA Technical Reports Server (NTRS)
Weiland, Stefan; Handrick, Karin; Daryabeigi, Kamran
2007-01-01
Conventional multi-layer insulations exhibit excellent insulation performance, but they are limited to the temperature range with which their components, reflective foils and spacer materials, are compatible. For high-temperature applications, the internal multi-screen insulation (IMI) has been developed, which utilizes unique ceramic material technology to produce reflective screens with high temperature stability. For analytical insulation sizing, a parametric material model is developed that includes the main contributors to heat flow, radiation and conduction. The adaptation of model parameters based on effective steady-state thermal conductivity measurements performed at NASA Langley Research Center (LaRC) allows extrapolation to arbitrary stack configurations and temperature ranges beyond those covered in the conductivity measurements. Experimental validation of the parametric material model was performed during the thermal qualification test of the X-38 Chin-panel, where test results and predictions showed good agreement.
NASA Astrophysics Data System (ADS)
Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank
In part I, we presented the theoretical foundations of the GOAT algorithm for the optimal control of quantum systems. Here in part II, we focus on several applications of GOAT to superconducting qubit architectures. First, we consider a controlled-Z gate on Xmon qubits with an Erf parametrization of the optimal pulse. We show that a fast and accurate gate can be obtained with only 16 parameters, as compared to the hundreds of parameters required by other algorithms. We present numerical evidence that such a parametrization should allow efficient in-situ calibration of the pulse. Next, we consider the flux-tunable coupler by IBM. We show that the optimization can be carried out in a more realistic model of the system than was employed in the original study, which is expected to further simplify the calibration process. Moreover, GOAT reduced the complexity of the optimal pulse to only 6 Fourier components, composed with analytic wrappers.
NASA Technical Reports Server (NTRS)
To, Wing H.
2005-01-01
Quantum optical experiments require all the components involved to be extremely stable relative to each other. The stability can be measured using an interferometric experiment. A pair of coherent photons produced by parametric down-conversion can be prepared to be orthogonally polarized initially. By rotating the polarization of one of the wave packets, the two can be recombined at a beam splitter such that interference occurs. Theoretically, the interference creates four terms in the wave function: two terms with both photons going to the same detector, and two terms with the photons going to different detectors. The latter, however, cancel each other out, so under ideal conditions no photons arrive at the two detectors simultaneously. The stability of the test-bed can then be inferred from the dependence of the coincidence count on the rotation angle.
Modified neural networks for rapid recovery of tokamak plasma parameters for real time control
NASA Astrophysics Data System (ADS)
Sengupta, A.; Ranjan, P.
2002-07-01
Two modified neural network techniques are used for the identification of the equilibrium plasma parameters of the Superconducting Steady State Tokamak I from external magnetic measurements. This is expected to ultimately assist in real-time plasma control. In the first method, unlike the conventional structure in which a single network with an optimal number of processing elements calculates the outputs, a multinetwork system connected in parallel performs the calculations; this network is called the double neural network. The accuracy of the recovered parameters is clearly higher than that of the conventional network. The other type of neural network used here is based on statistical function parametrization combined with a neural network. A principal component transformation removes linear dependences from the measurements, and a dimensional reduction process reduces the dimensionality of the input space. This reduced and transformed input set, rather than the entire set, is fed into the neural network input. This is known as the principal component transformation-based neural network. The accuracy of the recovered parameters in this second type of modified network is found to be a further improvement over that of the double neural network. This result differs from an earlier work, in which the double neural network showed better performance. The conventional network and the function parametrization methods have also been used for comparison. The conventional network has been used to optimize the set of magnetic diagnostics, and the effective set of sensors it identifies is compared with that of the principal component based network. Fault tolerance of the neural networks has been tested: the double neural network showed the greatest resistance to faults in the diagnostics, while the principal component based network performed poorly. Finally, the processing times of the methods have been compared. The double network and the principal component network require the least computation time, although the conventional network also performs well enough to be used in real time.
Fast Multiscale Algorithms for Wave Propagation in Heterogeneous Environments
2016-01-07
methods for waves’’, Nonlinear solvers for high- intensity focused ultrasound with application to cancer treatment, AIMS, Palo Alto, 2012. ``Hermite...formulation but different parametrizations. . . . . . . . . . . . 6 4 Density µ(t) at mode 0 for scattering of a plane Gaussian pulse from a sphere. On the...spatiotemporal scales. Two crucial components of the highly-efficient, general-purpose wave simulator we envision are • Reliable, low -cost methods for truncating
Effect of Loss on Multiplexed Single-Photon Sources (Open Access Publisher’s Version)
2015-04-28
lossy components on near- and long-term experimental goals, we simulate the multiplexed sources when used for many-photon state generation under various...efficient integer factorization and digital quantum simulation [7, 8], which relies critically on the development of a high-performance, on-demand photon...(SPDC) or spontaneous four-wave mixing: parametric processes which use a pump laser in a nonlinear material to spontaneously generate photon pairs
2017-03-23
solutions obtained through their proposed method to comparative instances of a generalized assignment problem with either ordinal cost components or...method flag: designates the method by which the changed/new assignment problem instance is solved. methodFlag = 0: SMAWarmstart returns a matching...of randomized perturbations. We examine the contrasts between these methods in the context of assigning Army Officers among a set of identified
Parametric Weight Study of Cryogenic Metallic Tanks for the "Bimodal" NTR Mars Vehicle Concept
NASA Astrophysics Data System (ADS)
Kosareo, Daniel N.; Roche, Joseph M.
2006-01-01
A parametric weight assessment of large cryogenic metallic tanks was conducted using the design optimization capabilities in the ANSYS® finite element analysis code. This analysis was performed to support the sizing of a "bimodal" nuclear thermal rocket (NTR) Mars vehicle concept developed at the NASA Glenn Research Center. The tank design study was driven by two load conditions: an in-line, "Shuttle-derived" heavy-lift launch with the tanks filled and pressurized, and a burst-test pressure. The main tank structural arrangement is a state-of-the-art metallic construction which uses an aluminum-lithium alloy stiffened internally with a ring and stringer framework. The tanks must carry liquid hydrogen in separate launches to orbit, where all vehicle components will dock and mate. All tank designs stayed within the available mass and payload volume limits of both the in-line heavy-lift and Shuttle-derived launch vehicles. Weight trends were developed over a range of tank lengths with varying stiffener cross-sections and tank wall thicknesses. The object of this parametric study was to verify that the proper mass was allocated for the tanks in the overall vehicle sizing model. This paper summarizes the tank weights over a range of tank lengths.
Castro, Marcelo P; Pataky, Todd C; Sole, Gisela; Vilas-Boas, Joao Paulo
2015-07-16
Ground reaction force (GRF) data from men and women are commonly pooled for analyses. However, it may not be justifiable to pool sexes on the basis of discrete parameters extracted from continuous GRF gait waveforms, because this can miss continuous effects. Forty healthy participants (20 men and 20 women) walked at a cadence of 100 steps per minute across two force plates, recording GRFs. Two statistical methods were used to test the null hypothesis of no mean GRF differences between sexes: (i) Statistical Parametric Mapping, using the entire three-component GRF waveform; and (ii) a traditional approach, using the first and second vertical GRF peaks. Statistical Parametric Mapping results suggested large sex differences, which post-hoc analyses attributed predominantly to higher anterior-posterior and vertical GRFs in early stance in women compared to men. The traditional approach found statistically significant differences for the first GRF peak and similar values for the second GRF peak. These contrasting results emphasise that different parts of the waveform carry different signal strengths, and thus that the traditional approach allows one to choose arbitrary metrics and reach arbitrary conclusions. We suggest that researchers and clinicians consider both the entire gait waveforms and sex-specificity when analysing GRF data. Copyright © 2015 Elsevier Ltd. All rights reserved.
Parametric Analysis of Cyclic Phase Change and Energy Storage in Solar Heat Receivers
NASA Technical Reports Server (NTRS)
Hall, Carsie A., III; Glakpe, Emmanuel K.; Cannon, Joseph N.; Kerslake, Thomas W.
1997-01-01
A parametric study on cyclic melting and freezing of an encapsulated phase change material (PCM), integrated into a solar heat receiver, has been performed. The cyclic nature of the present melt/freeze problem is relevant to latent heat thermal energy storage (LHTES) systems used to power solar Brayton engines in microgravity environments. Specifically, a physical and numerical model of the solar heat receiver component of NASA Lewis Research Center's Ground Test Demonstration (GTD) project was developed. Multi-conjugate effects, such as the convective flow of a low-Prandtl-number fluid coupled with thermal conduction in the phase change material, containment tube and working fluid conduit, were accounted for in the model. A single-band thermal radiation model was also included to quantify reradiative energy exchange inside the receiver and losses through the aperture. The eutectic LiF-CaF2 was used as the phase change material and a mixture of He/Xe was used as the working fluid coolant. A modified version of the computer code HOTTube was used to generate results in the two-phase regime. Results indicate that receiver gas exit temperatures are highly sensitive to parametric changes in receiver gas inlet temperature and receiver heat input.
Preprototype nitrogen supply subsystem development
NASA Technical Reports Server (NTRS)
Heppner, D. B.; Fort, J. H.; Schubert, F. H.
1982-01-01
The design and development of a test stand for the Nitrogen Generation Module (NGM) and a series of tests which verified its operation and performance capability are described. Over 900 hours of parametric testing were achieved. The results from this testing were then used to design an advanced NGM and a self-contained, preprototype Nitrogen Supply Subsystem. The subsystem consists of three major components: the nitrogen generation module, the pressure controller, and the hydrazine storage tank with ancillary components. The most important improvement is the elimination of all sealing surfaces, achieved with an entirely welded or brazed construction. Additionally, performance was improved by increasing the hydrogen separating capability by 20% with no increase in overall packaging size.
Two-component Thermal Dust Emission Model: Application to the Planck HFI Maps
NASA Astrophysics Data System (ADS)
Meisner, Aaron M.; Finkbeiner, Douglas P.
2014-06-01
We present full-sky, 6.1 arcminute resolution maps of dust optical depth and temperature derived by fitting the Finkbeiner et al. (1999) two-component dust emission model to the Planck HFI and IRAS 100 micron maps. This parametrization of the far infrared thermal dust SED as the sum of two modified blackbodies serves as an important alternative to the commonly adopted single modified blackbody dust emission model. We expect our Planck-based maps of dust temperature and optical depth to form the basis for a next-generation, high-resolution extinction map which will additionally incorporate small-scale detail from WISE imaging.
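A generic two modified-blackbody fit conveys the idea of this parametrization. The sketch below is not the exact Finkbeiner et al. (1999) model, which couples the two components through additional physical constraints; the fixed spectral indices, band set, and synthetic data are illustrative assumptions.

```python
# Generic fit of a far-IR SED as the sum of two modified blackbodies
# (a simplified stand-in for the two-component dust emission model).
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8

def planck(nu, T):                       # spectral radiance B_nu(T)
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def two_mbb(nu, tau1, T1, tau2, T2, beta1=1.67, beta2=2.70, nu0=3.0e12):
    # Cold + warm modified blackbodies; spectral indices fixed for the fit.
    return (tau1 * (nu / nu0)**beta1 * planck(nu, T1)
            + tau2 * (nu / nu0)**beta2 * planck(nu, T2))

# Planck HFI bands plus IRAS 100 micron (3000 GHz), in Hz.
nu = np.array([143, 217, 353, 545, 857, 3000]) * 1e9
rng = np.random.default_rng(9)
truth = (2e-4, 9.4, 1e-5, 16.2)          # tau1, T1, tau2, T2 (illustrative)
y = two_mbb(nu, *truth) * (1 + 0.02 * rng.standard_normal(nu.size))

popt, _ = curve_fit(two_mbb, nu, y, p0=(1e-4, 10.0, 1e-5, 15.0), maxfev=20000)
print("fitted tau1, T1, tau2, T2:", popt)
```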
Intercultural Conflict and Mediation: An Intergroup Perspective
ERIC Educational Resources Information Center
Rubenfeld, Sara; Clement, Richard
2012-01-01
This study investigates the role of components of intercultural competence in the use of intercultural mediation behaviors. Through the use of the Revised Intercultural Mediation Measure, an instrument revised by the authors, 291 Anglophone and 161 Francophone participants in Canada were asked to indicate their likelihood of employing various…
NASA Astrophysics Data System (ADS)
Sardet, Laure; Patilea, Valentin
When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as lognormal, Weibull and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and are well adapted to capture the skewness, the long tails and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically with two or three components. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data and of the quantiles of the individual claims distribution in a non-life insurance application.
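The pipeline can be sketched in a few lines: transform the data through the fitted mixture CDF, smooth on the unit interval with a beta kernel (Chen, 1999), and back-transform by the change-of-variables formula. The mixture parameters below are assumed already estimated (e.g. by EM), and the fixed bandwidth stands in for the paper's bandwidth rule.

```python
# Sketch of the semi-parametric claims-density estimate. Mixture weights and
# components are illustrative placeholders for EM-fitted values.
import numpy as np
from scipy import stats

w = np.array([0.7, 0.3])                       # mixture weights
comps = [stats.lognorm(s=0.6, scale=1.0),      # two lognormal components
         stats.lognorm(s=0.4, scale=5.0)]

def mix_cdf(x): return sum(wi * c.cdf(x) for wi, c in zip(w, comps))
def mix_pdf(x): return sum(wi * c.pdf(x) for wi, c in zip(w, comps))

rng = np.random.default_rng(2)
n = 500
z = rng.choice(2, size=n, p=w)
x = np.where(z == 0, comps[0].rvs(size=n, random_state=rng),
                     comps[1].rvs(size=n, random_state=rng))

u = mix_cdf(x)          # claims mapped to the unit interval
b = 0.05                # fixed bandwidth; the paper proposes a data-driven rule

def beta_kernel_density(t):
    # Chen's beta kernel: Beta(t/b + 1, (1-t)/b + 1) averaged over the sample.
    return stats.beta.pdf(u, t / b + 1, (1 - t) / b + 1).mean()

xs = np.linspace(0.1, 20, 200)
dens = np.array([beta_kernel_density(mix_cdf(xi)) for xi in xs]) * mix_pdf(xs)
print("integral ~", dens.sum() * (xs[1] - xs[0]))   # should be close to 1
```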
Localized Principal Component Analysis based Curve Evolution: A Divide and Conquer Approach
Appia, Vikram; Ganapathy, Balaji; Yezzi, Anthony; Faber, Tracy
2014-01-01
We propose a novel localized principal component analysis (PCA) based curve evolution approach which evolves the segmenting curve semi-locally within various target regions (divisions) in an image and then combines these locally accurate segmentation curves to obtain a global segmentation. The training data for our approach consists of training shapes and associated auxiliary (target) masks. The masks indicate the various regions of the shape exhibiting highly correlated variations locally which may be rather independent of the variations in the distant parts of the global shape. Thus, in a sense, we are clustering the variations exhibited in the training data set. We then use a parametric model to implicitly represent each localized segmentation curve as a combination of the local shape priors obtained by representing the training shapes and the masks as a collection of signed distance functions. We also propose a parametric model to combine the locally evolved segmentation curves into a single hybrid (global) segmentation. Finally, we combine the evolution of these semilocal and global parameters to minimize an objective energy function. The resulting algorithm thus provides a globally accurate solution, which retains the local variations in shape. We present some results to illustrate how our approach performs better than the traditional approach with fully global PCA. PMID:25520901
NASA Astrophysics Data System (ADS)
Smetanin, S. N.
2014-11-01
Using mathematical modelling we have studied the conditions of low-threshold collinear optical frequency comb generation under transient (picosecond) stimulated Raman scattering (SRS) and parametric four-wave coupling of SRS components in crystals. It is shown that Raman-parametric generation of an octave-spanning optical frequency comb occurs most effectively under intermediate, transient SRS at a pump pulse duration exceeding the dephasing time by five-to-twenty times. We have found the optimal values of not only the laser pump pulse duration, but also of the Raman crystal lengths corresponding to highly efficient generation of an optical frequency comb from the second anti-Stokes to the fourth Stokes Raman components. For the KGd(WO4)2 (high dispersion) and Ba(NO3)2 (low dispersion) crystals pumped at a wavelength of 1.064 μm and a pulse duration five or more times greater than the dephasing time, the optimum length of the crystal was 0.3 and 0.6 cm, respectively, which is consistent with the condition of the most effective Stokes - anti-Stokes coupling ΔkL ≈ 15, where Δk is the wave detuning from phase matching of Stokes - anti-Stokes coupling, determined by the refractive index dispersion of the SRS medium.
Shin, Hyunjin; Mutlu, Miray; Koomen, John M.; Markey, Mia K.
2007-01-01
Noise in mass spectrometry can interfere with identification of the biochemical substances in the sample. For example, the electric motors and circuits inside the mass spectrometer or in nearby equipment generate random noise that may distort the true shape of mass spectra. This paper presents a stochastic signal processing approach to analyzing noise from electrical noise sources (i.e., noise from instrumentation) in MALDI TOF mass spectrometry. Noise from instrumentation was hypothesized to be a mixture of thermal noise, 1/f noise, and electric or magnetic interference in the instrument. Parametric power spectral density estimation was conducted to derive the power distribution of noise from instrumentation with respect to frequencies. As expected, the experimental results show that noise from instrumentation contains 1/f noise and prominent periodic components in addition to thermal noise. These periodic components imply that the mass spectrometers used in this study may not be completely shielded from the internal or external electrical noise sources. However, according to a simulation study of human plasma mass spectra, noise from instrumentation does not seem to affect mass spectra significantly. In conclusion, analysis of noise from instrumentation using stochastic signal processing here provides an intuitive perspective on how to quantify noise in mass spectrometry through spectral modeling. PMID:19455245
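Parametric power spectral density estimation of this kind is commonly done with an autoregressive model fitted via the Yule-Walker equations; the sketch below is a generic illustration of the technique, not the authors' exact procedure, and the 60 Hz interference line is a made-up stand-in for instrument noise.

```python
# Autoregressive (Yule-Walker) parametric PSD estimate: useful for exposing
# sharp periodic interference lines riding on broadband noise.
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_psd(x, order, nfreq=512, fs=1.0):
    x = np.asarray(x, float) - np.mean(x)
    n = x.size
    # Biased autocovariance estimates r[0..order].
    r = np.array([x[:n - k] @ x[k:] / n for k in range(order + 1)])
    a = solve_toeplitz(r[:-1], r[1:])      # AR coefficients (Yule-Walker)
    sigma2 = r[0] - a @ r[1:]              # innovation variance
    f = np.linspace(0, fs / 2, nfreq)
    k = np.arange(1, order + 1)
    A = 1 - np.exp(-2j * np.pi * np.outer(f / fs, k)) @ a
    return f, sigma2 / (fs * np.abs(A) ** 2)

rng = np.random.default_rng(10)
fs = 1000.0
t = np.arange(0, 4, 1 / fs)
# Thermal-like white noise plus a 60 Hz interference line.
x = rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * 60 * t)
f, psd = ar_psd(x, order=30, fs=fs)
print("spectral peak near 60 Hz:", f[np.argmax(psd)])
```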
Bazot, Marc; Daraï, Emile
2018-03-01
The aim of the present review, conducted according to PRISMA statement recommendations, was to evaluate the contribution of transvaginal sonography (TVS) and magnetic resonance imaging (MRI) to diagnose adenomyosis. Although there is a lack of consensus on adenomyosis classification, three subtypes are described, internal, external adenomyosis, and adenomyomas. Using TVS, whatever the subtype, pooled sensitivities, pooled specificities, and pooled positive likelihood ratios are 0.72-0.82, 0.85-0.81, and 4.67-3.7, respectively, but with a high heterogeneity between the studies. MRI has a pooled sensitivity of 0.77, specificity of 0.89, positive likelihood ratio of 6.5, and negative likelihood ratio of 0.2 for all subtypes. Our results suggest that MRI is more useful than TVS in the diagnosis of adenomyosis. Further studies are required to determine the performance of direct signs (cystic component) and indirect signs (characteristics of junctional zone) to avoid misdiagnosis of adenomyosis. Copyright © 2018 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Peters, C. (Principal Investigator)
1980-01-01
A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations given a sequence of independent random vectors whose distributions are not identical but have the same parameter set. In addition, it is shown that the consistent solution is a MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and the other in which the parameters in a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.
A bias-corrected estimator in multiple imputation for missing data.
Tomita, Hiroaki; Fujisawa, Hironori; Henmi, Masayuki
2018-05-29
Multiple imputation (MI) is one of the most popular methods of dealing with missing data, and its use has been increasing rapidly in medical studies. Although MI is appealing in practice, since ordinary statistical methods can be applied to the completed data set once the missing values are imputed, the method of imputation remains problematic. If the missing values are imputed from a parametric model, the validity of the imputation is not guaranteed, and the final estimate of a parameter of interest can be biased unless the parametric model is correctly specified. Nonparametric methods have also been proposed for MI, but producing imputation values from nonparametrically estimated distributions is not straightforward. In this paper, we propose a new MI method that yields a consistent (asymptotically unbiased) final estimate even if the imputation model is misspecified. The key idea is to impute from a model that produces imputation values easily, and then to make a proper correction in the likelihood function after the imputation, using as a weight the density ratio between the true conditional density of the missing variable and the imputation model. Although the conditional density must be estimated nonparametrically, it is not used for the imputation itself. The performance of our method is evaluated by both theory and simulation studies. A real data analysis is also conducted to illustrate our method using the Duke Cardiac Catheterization Coronary Artery Disease Diagnostic Dataset. Copyright © 2018 John Wiley & Sons, Ltd.
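A deliberately simplified, covariate-free sketch can convey the weighting idea: impute from a misspecified parametric model, then reweight the imputed values by the ratio of a nonparametric density estimate of the observed data to the imputation density. The paper's estimator corrects the full likelihood with conditional densities; everything below is an illustrative reduction with made-up data.

```python
# Covariate-free illustration of density-ratio reweighting after imputation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
y_full = rng.gamma(2.0, 2.0, size=2000)        # true (skewed) variable
observed = rng.random(2000) > 0.3              # MCAR missingness
y_obs = y_full[observed]
n_mis = (~observed).sum()

# Misspecified normal imputation model fitted to the observed values.
mu, sd = y_obs.mean(), y_obs.std()
y_imp = rng.normal(mu, sd, size=n_mis)

# Density-ratio weights: estimated true density over imputation density.
kde = stats.gaussian_kde(y_obs)
wts = kde(y_imp) / stats.norm.pdf(y_imp, mu, sd)

target = lambda y: (y > 6.0).astype(float)     # estimand: P(Y > 6)
naive = np.mean(np.r_[target(y_obs), target(y_imp)])
weighted_imp = np.sum(wts * target(y_imp)) / np.sum(wts)
corrected = (target(y_obs).sum() + n_mis * weighted_imp) / 2000
print(f"true {1 - stats.gamma(2, scale=2).cdf(6):.3f} | "
      f"naive {naive:.3f} | corrected {corrected:.3f}")
```

Under this reweighting, draws from the misspecified normal model are effectively importance-weighted toward the observed-data distribution, which is why the corrected tail probability moves toward the truth while the naive estimate remains biased.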
Autonomous power expert system
NASA Technical Reports Server (NTRS)
Walters, Jerry L.; Petrik, Edward J.; Roth, Mary Ellen; Truong, Long Van; Quinn, Todd; Krawczonek, Walter M.
1990-01-01
The Autonomous Power Expert (APEX) system was designed to monitor and diagnose fault conditions that occur within the Space Station Freedom Electrical Power System (SSF/EPS) Testbed. APEX is designed to interface with SSF/EPS testbed power management controllers to provide enhanced autonomous operation and control capability. The APEX architecture consists of three components: (1) a rule-based expert system, (2) a testbed data acquisition interface, and (3) a power scheduler interface. Fault detection, fault isolation, justification of probable causes, recommended actions, and incipient fault analysis are the main functions of the expert system component. The data acquisition component requests and receives pertinent parametric values from the EPS testbed and asserts the values into a knowledge base. Power load profile information is obtained from a remote scheduler through the power scheduler interface component. The current APEX design and development work is discussed. Operation and use of APEX by way of the user interface screens is also covered.
Harmonic component detection: Optimized Spectral Kurtosis for operational modal analysis
NASA Astrophysics Data System (ADS)
Dion, J.-L.; Tawfiq, I.; Chevallier, G.
2012-01-01
This work is a contribution to the field of Operational Modal Analysis (OMA), in which the modal parameters of mechanical structures are identified using only measured responses. The study deals with structural responses coupled with harmonic components that are amplitude- and frequency-modulated over a narrow range, a common combination for mechanical systems with engines and other rotating machines in operation. These harmonic components generate misleading data that are interpreted erroneously by the classical OMA methods. The present work attempts to distinguish spectral maxima stemming from harmonic components from those stemming from structural modes. The proposed detection method is based on the so-called Optimized Spectral Kurtosis and is compared with other definitions of Spectral Kurtosis described in the literature. After a parametric study of the method, a critical study is performed on numerical simulations and then on an experimental structure in operation in order to assess the method's performance.
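The principle the method builds on can be demonstrated with a plain (non-optimized) spectral kurtosis computed from an STFT: deterministic harmonic components drive the normalized spectral kurtosis toward -1, while randomly excited broadband content stays near 0 (Gaussian). The signal and parameters below are illustrative assumptions, not the paper's optimized estimator.

```python
# Spectral kurtosis from an STFT: SK(f) = E|X|^4 / (E|X|^2)^2 - 2.
# A pure tone gives SK ~ -1; stationary Gaussian noise gives SK ~ 0.
import numpy as np
from scipy.signal import stft

fs = 1024.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(4)
# Random broadband response plus a deterministic harmonic at 100 Hz.
x = rng.standard_normal(t.size) + 2.0 * np.sin(2 * np.pi * 100 * t)

f, _, X = stft(x, fs=fs, nperseg=256)
P2 = np.mean(np.abs(X) ** 2, axis=1)
P4 = np.mean(np.abs(X) ** 4, axis=1)
sk = P4 / P2 ** 2 - 2.0

print("SK at the 100 Hz bin:", sk[np.argmin(np.abs(f - 100))])  # near -1
print("median SK elsewhere:", np.median(sk))                    # near 0
```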
First impressions: gait cues drive reliable trait judgements.
Thoresen, John C; Vuong, Quoc C; Atkinson, Anthony P
2012-09-01
Personality trait attribution can underpin important social decisions and yet requires little effort; even a brief exposure to a photograph can generate lasting impressions. Body movement is a channel readily available to observers and allows judgements to be made when facial and body appearances are less visible; e.g., from great distances. Across three studies, we assessed the reliability of trait judgements of point-light walkers and identified motion-related visual cues driving observers' judgements. The findings confirm that observers make reliable, albeit inaccurate, trait judgements, and these were linked to a small number of motion components derived from a Principal Component Analysis of the motion data. Parametric manipulation of the motion components linearly affected trait ratings, providing strong evidence that the visual cues captured by these components drive observers' trait judgements. Subsequent analyses suggest that reliability of trait ratings was driven by impressions of emotion, attractiveness and masculinity. Copyright © 2012 Elsevier B.V. All rights reserved.
Tau-REx: A new look at the retrieval of exoplanetary atmospheres
NASA Astrophysics Data System (ADS)
Waldmann, Ingo
2014-11-01
The field of exoplanetary spectroscopy is as fast moving as it is new. With an increasing amount of space and ground based instruments obtaining data on a large set of extrasolar planets we are indeed entering the era of exoplanetary characterisation. Permanently at the edge of instrument feasibility, it is as important as it is difficult to find the most optimal and objective methodologies to analysing and interpreting current data. This is particularly true for smaller and fainter Earth and Super-Earth type planets.For low to mid signal to noise (SNR) observations, we are prone to two sources of biases: 1) Prior selection in the data reduction and analysis; 2) Prior constraints on the spectral retrieval. In Waldmann et al. (2013), Morello et al. (2014) and Waldmann (2012, 2014) we have shown a prior-free approach to data analysis based on non-parametric machine learning techniques. Following these approaches we will present a new take on the spectral retrieval of extrasolar planets. Tau-REx (tau-retrieval of exoplanets) is a new line-by-line, atmospheric retrieval framework. In the past the decision on what opacity sources go into an atmospheric model were usually user defined. Manual input can lead to model biases and poor convergence of the atmospheric model to the data. In Tau-REx we have set out to solve this. Through custom built pattern recognition software, Tau-REx is able to rapidly identify the most likely atmospheric opacities from a large number of possible absorbers/emitters (ExoMol or HiTran data bases) and non-parametrically constrain the prior space for the Bayesian retrieval. Unlike other (MCMC based) techniques, Tau-REx is able to fully integrate high-dimensional log-likelihood spaces and to calculate the full Bayesian Evidence of the atmospheric models. We achieve this through a combination of Nested Sampling and a high degree of code parallelisation. This allows for an exact and unbiased Bayesian model selection and a fully mapping of potential model-data degeneracies. Together with non-parametric data de-trending of exoplanetary spectra, we can reach an un- precedented level of objectivity in our atmospheric characterisation of these foreign worlds.
Dumont, Martine; Jurysta, Fabrice; Lanquart, Jean-Pol; Noseda, André; van de Borne, Philippe; Linkowski, Paul
2007-12-01
To investigate the dynamics of the synchronization between heart rate variability and sleep electroencephalogram power spectra and the effect of sleep apnea-hypopnea syndrome. Heart rate and sleep electroencephalogram signals were recorded in controls and in patients with sleep apnea-hypopnea syndrome who were matched for age, gender, sleep parameters, and blood pressure. Spectral analysis was applied to electrocardiogram and electroencephalogram sleep recordings to obtain power values every 20 s. Synchronization likelihood was computed between time series of the normalized high-frequency spectral component of RR intervals and all electroencephalographic frequency bands. Detrended fluctuation analysis was applied to the synchronizations in order to qualify their dynamic behaviors. For all sleep bands, the fluctuations of the synchronization between sleep EEG and heart activity appear scale free, and the scaling exponent is close to one, as for 1/f noise. We could not detect any effect of sleep apnea-hypopnea syndrome. The synchronizations between the high-frequency component of heart rate variability and all sleep power bands exhibited robust fluctuations characterized by self-similar temporal behavior of the 1/f noise type. No effects of sleep apnea-hypopnea syndrome were observed in these synchronizations. Sleep apnea-hypopnea syndrome does not affect the interdependence between the high-frequency component of heart rate variability and the sleep power bands as measured by synchronization likelihood.
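The tool used above to qualify the dynamics, detrended fluctuation analysis, can be sketched in a few lines: a scaling exponent near 1 indicates 1/f-type behaviour, while white noise yields about 0.5. The window sizes and test signal below are illustrative choices.

```python
# Minimal detrended fluctuation analysis (DFA) with linear detrending.
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        # Detrend each window with a least-squares line, keep the residuals.
        resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        F.append(np.sqrt(np.mean(np.square(resid))))
    # Scaling exponent alpha: slope of log F(s) versus log s.
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(5)
white = rng.standard_normal(4096)
print("alpha(white noise), expected ~0.5:",
      round(dfa(white, [8, 16, 32, 64, 128, 256]), 2))
```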
Component variations and their effects on bipolar nickel-hydrogen cell performance
NASA Technical Reports Server (NTRS)
Manzo, Michelle A.; Gahn, Randall F.; Gonzalez-Sanabria, Olga D.; Cataldo, Robert L.; Gemeiner, Russel P.
1987-01-01
A 50 cell bipolar nickel-hydrogen battery was assembled to demonstrate the feasibility of constructing a high voltage stack of cells. Various component combinations were tested in this battery. The battery had approximately 1 ampere-hour of capacity and was constructed from components with an active area of 2 x 2 inches. The components were parametrically varied to give a comparison of nickel electrodes, hydrogen electrodes, separators, fill procedures and electrolyte reservoir plate thicknesses. Groups of five cells were constructed using the same components; ten combinations were tested in all. The battery was thoroughly characterized at various charge and discharge rates as well as with various pulse patterns and rates. Over a period of 1400 40-percent DOD LEO cycles some of the groups began to exhibit performance differences. In general, only separator variations had a significant effect on cell performance. It also appears that shunt currents may have been operating within the stack, resulting in electrolyte transfer from one cell to another, thus contributing to cell performance variations.
Accurate Structural Correlations from Maximum Likelihood Superpositions
Theobald, Douglas L; Wuttke, Deborah S
2008-01-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091
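The downstream analysis can be sketched by running PCA on an inter-atom correlation matrix computed from an ensemble of already superposed structures. The plain sample covariance below stands in for the paper's maximum likelihood estimate, and the random ensemble is a placeholder for real NMR models.

```python
# PCA of an inter-atom positional correlation matrix from a structure
# ensemble (ordinary sample estimate; the paper uses an ML estimate).
import numpy as np

rng = np.random.default_rng(6)
n_models, n_atoms = 20, 50
ensemble = rng.standard_normal((n_models, n_atoms, 3))  # stand-in NMR models

mean_structure = ensemble.mean(axis=0)
disp = ensemble - mean_structure                        # positional fluctuations
# Scalar covariance between atoms i and j: mean dot product of displacements.
cov = np.einsum('mik,mjk->ij', disp, disp) / n_models
d = np.sqrt(np.diag(cov))
corr = cov / np.outer(d, d)

evals, evecs = np.linalg.eigh(corr)
pc1 = evecs[:, -1]          # dominant mode of structural correlation
print("fraction of variance in PC1:", evals[-1] / evals.sum())
```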
NASA Astrophysics Data System (ADS)
Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.
2014-08-01
We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumptions that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project 'Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find statistically significantly better performance for testing periods of 10-20 yr. The testing results suggest that our model is a viable candidate model for long-term forecasting on timescales of years to decades for the European region.
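The spatial component can be sketched as a magnitude-dependent weighted sum of two kernel densities, one built from past epicentres and one from fault-trace points. The coordinates, weights and linear weighting ramp below are placeholders, not SHARE values.

```python
# Hybrid spatial density: weighted sum of seismicity- and fault-based KDEs,
# with the fault weight increasing for larger target magnitudes.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
quakes = rng.uniform(0, 100, size=(2, 300))   # past epicentres (x, y)
faults = rng.uniform(40, 60, size=(2, 80))    # sampled fault-trace points

k_eq, k_flt = gaussian_kde(quakes), gaussian_kde(faults)

def spatial_density(xy, magnitude):
    # Illustrative ramp: faults dominate for large magnitudes.
    w_fault = np.clip((magnitude - 5.0) / 3.0, 0.0, 1.0)
    return (1 - w_fault) * k_eq(xy) + w_fault * k_flt(xy)

gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.vstack([gx.ravel(), gy.ravel()])
density_m65 = spatial_density(grid, magnitude=6.5)
print("density mass on the grid:", density_m65.sum())
```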
NASA Astrophysics Data System (ADS)
Strader, Anne; Schneider, Max; Schorlemmer, Danijel; Liukis, Maria
2016-04-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) was developed to rigorously test earthquake forecasts retrospectively and prospectively through reproducible, completely transparent experiments within a controlled environment (Zechar et al., 2010). During 2006-2011, thirteen five-year time-invariant prospective earthquake mainshock forecasts developed by the Regional Earthquake Likelihood Models (RELM) working group were evaluated through the CSEP testing center (Schorlemmer and Gerstenberger, 2007). The number, spatial, and magnitude components of the forecasts were compared to the respective observed seismicity components using a set of consistency tests (Schorlemmer et al., 2007, Zechar et al., 2010). In the initial experiment, all but three forecast models passed every test at the 95% significance level, with all forecasts displaying consistent log-likelihoods (L-test) and magnitude distributions (M-test) with the observed seismicity. In the ten-year RELM experiment update, we reevaluate these earthquake forecasts over an eight-year period from 2008-2016, to determine the consistency of previous likelihood testing results over longer time intervals. Additionally, we test the Uniform California Earthquake Rupture Forecast (UCERF2), developed by the U.S. Geological Survey (USGS), and the earthquake rate model developed by the California Geological Survey (CGS) and the USGS for the National Seismic Hazard Mapping Program (NSHMP) against the RELM forecasts. Both the UCERF2 and NSHMP forecasts pass all consistency tests, though the Helmstetter et al. (2007) and Shen et al. (2007) models exhibit greater information gain per earthquake according to the T- and W- tests (Rhoades et al., 2011). Though all but three RELM forecasts pass the spatial likelihood test (S-test), multiple forecasts fail the M-test due to overprediction of the number of earthquakes during the target period. Though there is no significant difference between the UCERF2 and NSHMP models, residual scores show that the NSHMP model is preferred in locations with earthquake occurrence, due to the lower seismicity rates forecasted by the UCERF2 model.
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (simple linear regression, multiple linear regression, one-way and two-way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most up-to-date information and newly added models.
Least-squares reverse time migration in elastic media
NASA Astrophysics Data System (ADS)
Ren, Zhiming; Liu, Yang; Sen, Mrinal K.
2017-02-01
Elastic reverse time migration (RTM) can yield accurate subsurface information (e.g. PP and PS reflectivity) by imaging multicomponent seismic data. However, existing RTM methods are still insufficient to provide satisfactory results because of the finite recording aperture, limited bandwidth and imperfect illumination. In addition, P- and S-wave separation and polarity reversal correction are indispensable in conventional elastic RTM. Here, we propose an iterative elastic least-squares RTM (LSRTM) method, in which the imaging accuracy is improved gradually with iteration. We first use the Born approximation to formulate the elastic de-migration operator, and employ the Lagrange multiplier method to derive the adjoint equations and gradients with respect to reflectivity. Then, an efficient inversion workflow (only four forward computations needed in each iteration) is introduced to update the reflectivity. Synthetic and field data examples reveal that the proposed LSRTM method can obtain higher-quality images than conventional elastic RTM. We also analyse the influence of model parametrizations and misfit functions in elastic LSRTM. We observe that Lamé parameter, velocity and impedance parametrizations give similar and plausible migration results when the structures of the different models are correlated. For an uncorrelated subsurface model, velocity and impedance parametrizations produce fewer artefacts caused by parameter crosstalk than the Lamé coefficient parametrization. Correlation- and convolution-type misfit functions are effective when amplitude errors are involved and when the source wavelet is unknown, respectively. Finally, we discuss the dependence of elastic LSRTM on migration velocities and its noise resistance. Imaging results show that the new elastic LSRTM method performs well as long as the low-frequency components of the migration velocities are correct, and that the quality of its images degrades with increasing noise.
Spacecraft Conceptual Design Compared to the Apollo Lunar Lander
NASA Technical Reports Server (NTRS)
Young, C.; Bowie, J.; Rust, R.; Lenius, J.; Anderson, M.; Connolly, J.
2011-01-01
Future human exploration of the Moon will require an optimized spacecraft design with each subsystem achieving the required minimum capability and maintaining high reliability. The objective of this study was to trade capability against reliability and minimize mass for the lunar lander spacecraft. The NASA parametric concept for a 3-person vehicle to the lunar surface with a 30% mass margin was considerably heavier than the Apollo 15 Lunar Module "as flown" mass of 16.4 metric tons. The additional mass was attributed to mission requirements and system design choices that were made to meet the realities of modern spaceflight. The parametric tool used to size the current concept, Envision, accounts for primary and secondary mass requirements. For example, adding an astronaut increases the mass requirements for suits, water, food, and oxygen, as well as the volume. The environmental control subsystems become heavier with the increased requirements, more structure is needed to support the additional mass, and propellant usage also increases. For comparison, an "Apollo-like" vehicle was created by removing these additional requirements. Utilizing the Envision parametric mass calculation tool and a quantitative reliability estimation tool designed by Valador Inc., it was determined that with today's technology a Lunar Module (LM) with Apollo capability could be built with less mass and similar reliability. The reliability of this new lander was compared to that of the Apollo Lunar Module utilizing the same methodology, adjusting for mission timeline changes as well as component differences. Interestingly, the parametric concept's overall estimated risk of loss of mission (LOM) and loss of crew (LOC) did not significantly improve when compared to Apollo.
PBOOST: a GPU-based tool for parallel permutation tests in genome-wide association studies.
Yang, Guangyuan; Jiang, Wei; Yang, Qiang; Yu, Weichuan
2015-05-01
The importance of testing associations allowing for interactions has been demonstrated by Marchini et al. (2005), and a fast method for detecting such associations has been proposed by Wan et al. (2010a). The method is based on a likelihood ratio test with the assumption that the statistic follows the χ² distribution. Many single nucleotide polymorphism (SNP) pairs with significant associations allowing for interactions have been detected using their method. However, the χ² test assumes that the expected value in each cell of the contingency table is at least five. This assumption is violated in some identified SNP pairs, in which case the likelihood ratio test may no longer be applicable. A permutation test is an ideal approach to checking the P-values calculated in likelihood ratio tests because of its non-parametric nature. The P-values of SNP pairs having significant associations with disease are always extremely small, so a huge number of permutations is needed to achieve correspondingly high resolution for the P-values. In order to investigate whether the P-values from likelihood ratio tests are reliable, a fast permutation tool capable of large numbers of permutations is desirable. We developed a GPU-based permutation tool named PBOOST with highly reliable P-value estimation. Using simulation data, we found that the P-values from likelihood ratio tests have a relative error of more than 100% when 50% of the cells in the contingency table have expected counts less than five, or when any cell of the contingency table has a zero expected count. In terms of speed, PBOOST completed 10⁷ permutations for a single SNP pair from the Wellcome Trust Case Control Consortium (WTCCC) genome data (Wellcome Trust Case Control Consortium, 2007) within 1 min on a single Nvidia Tesla M2090 device, while a single Intel Xeon E5-2650 CPU took 60 min to finish the same task. More importantly, when simultaneously testing 256 SNP pairs for 10⁷ permutations, our tool took only 5 min, while the CPU program took 10 h. By permuting on a GPU cluster consisting of 40 nodes, we completed 10¹² permutations for all 280 SNP pairs reported with P-values smaller than 1.6 × 10⁻¹² in the WTCCC datasets in 1 week. The source code and sample data are available at http://bioinformatics.ust.hk/PBOOST.zip. gyang@ust.hk; eeyu@ust.hk Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
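A vectorized CPU sketch conveys the permutation logic that PBOOST accelerates on GPUs. Here a chi-square statistic on the 9x2 genotype-pair table stands in for the likelihood ratio statistic, all data are simulated, and the permutation count is far below the 10⁷-10¹² range the real tool targets.

```python
# Permutation check of a SNP-pair association statistic: recompute the
# statistic under shuffled case/control labels and compare with the observed.
import numpy as np

rng = np.random.default_rng(8)
n = 2000
snp1 = rng.integers(0, 3, n)          # genotypes coded 0/1/2
snp2 = rng.integers(0, 3, n)
labels = rng.integers(0, 2, n)        # case/control status
pair = snp1 * 3 + snp2                # 9 joint genotype categories

def chi2_stat(y):
    obs = np.zeros((9, 2))
    np.add.at(obs, (pair, y), 1)      # 9x2 contingency table
    exp = obs.sum(1, keepdims=True) * obs.sum(0) / n
    mask = exp > 0
    return np.sum((obs[mask] - exp[mask]) ** 2 / exp[mask])

t_obs = chi2_stat(labels)
n_perm = 10_000
count = sum(chi2_stat(rng.permutation(labels)) >= t_obs for _ in range(n_perm))
print("permutation p-value:", (count + 1) / (n_perm + 1))
```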
Micro-Structured Materials for Generation of Coherent Light and Optical Signal Processing
2008-12-22
Bliss and D. Weyburne, "GaAs optical parametric oscillator with circularly polarized and depolarized pump," Optics Letters, Vol. 32, No. 18, pp. ... Because we measure the space-charge field by propagating the intense green laser beam along the crystal c-axis, the polarization of the light is ... ordinary. Most applications utilize light with extraordinary polarization to make use of the largest component of the nonlinear or electro-optic tensor.
Preliminary design of a mini-Brayton Compressor-Alternator-Turbine (CAT)
NASA Technical Reports Server (NTRS)
1973-01-01
The preliminary design of a mini-Brayton compressor-alternator-turbine system is discussed. The program design goals are listed. The optimum system characteristics over the entire range of power output were determined by performing a wide-range parametric study. The ability to develop the required components to the degree necessary within the limitations of present technology is evaluated. The sensitivity of the system to various individual design parameters was analyzed.
Gallium arsenide (GaAs) solar cell modeling studies
NASA Technical Reports Server (NTRS)
Heinbockel, J. H.
1980-01-01
Various models were constructed which will allow for the variation of system components. Computer studies were then performed using the models constructed in order to study the effects of various system changes. In particular, GaAs and Si flat plate solar power arrays were studied and compared. Series and shunt resistance models were constructed. Models for the chemical kinetics of the annealing process were prepared. For all models constructed, various parametric studies were performed.
Use of artificial landscapes to isolate controls on burn probability
Marc-Andre Parisien; Carol Miller; Alan A. Ager; Mark A. Finney
2010-01-01
Techniques for modeling burn probability (BP) combine the stochastic components of fire regimes (ignitions and weather) with sophisticated fire growth algorithms to produce high-resolution spatial estimates of the relative likelihood of burning. Despite the numerous investigations of fire patterns from either observed or simulated sources, the specific influence of...
Further Iterations on Using the Problem-Analysis Framework
ERIC Educational Resources Information Center
Annan, Michael; Chua, Jocelyn; Cole, Rachel; Kennedy, Emma; James, Robert; Markusdottir, Ingibjorg; Monsen, Jeremy; Robertson, Lucy; Shah, Sonia
2013-01-01
A core component of applied educational and child psychology practice is the skilfulness with which practitioners are able to rigorously structure and conceptualise complex real world human problems. This is done in such a way that when they (with others) jointly work on them, there is an increased likelihood of positive outcomes being achieved…
Long-term success of stump sprout regeneration in baldcypress
Richard F. Keim; Jim L. Chambers; Melinda S. Hughes; Emile S. Gardiner; William H. Conner; John W. Day; Stephen P. Faulkner; Kenneth W. McLeod; Craig A. Miller; J. Andrew Nyman; Gary P. Shaffer; Luben D. Dimov
2006-01-01
Baldcypress [Taxodium distichum (L.) Rich.] is one of very few conifers that produces stump sprouts capable of becoming full-grown trees. Previous studies have addressed early survival of baldcypress stump sprouts but have not addressed the likelihood of sprouts becoming an important component of mature stands. We surveyed stands throughout south...
Feasibility of potable water generators to meet vessel numeric ballast water discharge limits.
Albert, Ryan J; Viveiros, Edward; Falatko, Debra S; Tamburri, Mario N
2017-07-15
Ballast water is taken on-board vessels into ballast water tanks to maintain vessel draft, buoyancy, and stability. Unmanaged ballast water contains aquatic organisms that, when transported and discharged to non-native waters, may establish as invasive species. Technologies capable of achieving regulatory limits designed to decrease the likelihood of invasion include onboard ballast water management systems. However, to date, the treatment development and manufacturing marketplace is limited to large vessels with substantial ballast requirements. For smaller vessels or vessels with reduced ballast requirements, we evaluated the feasibility of meeting the discharge limits by generating ballast water using onboard potable water generators. Case studies and parametric analyses demonstrated the architectural feasibility of installing potable water generators onboard actual vessels with minimal impacts for most vessel types evaluated. Furthermore, land-based testing of a potable water generator demonstrated capability to meet current numeric discharge limits for living organisms in all size classes. Published by Elsevier Ltd.
Estimating population diversity with CatchAll
Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.
2012-01-01
Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246
The potential for sexual transmission to compromise control of Ebola virus outbreaks.
Vinson, John E; Drake, John M; Rohani, Pejman; Park, Andrew W
2016-06-01
Recent evidence suggests that sexual contact may give rise to transmission of Ebola virus long after infection has been cleared from the blood. We develop a simple mathematical model incorporating contact transmission and sexual transmission, parametrized from data on the 2013-2015 West African Ebola epidemic. The model explores scenarios where contact transmission is reduced following infection events (capturing behaviour change) and quantifies how these transmission-reducing actions may be compromised by sexual transmission, in terms of increased likelihood, size and duration of outbreaks. We characterize the extent of sexual transmission through the probability that an initial infection resolves to sexual infectiousness and through the sexual transmission rate, and relate these parameters to the overall case burden. We find that sexual transmission can have large effects on epidemic dynamics (increasing attack ratios from 25% in scenarios without sexual transmission but with contact-transmission-reducing behaviour, up to 80% in equivalent scenarios with sexual transmission). © 2016 The Author(s).
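As a rough illustration of this model class (not the authors' exact equations or fitted parameters), the sketch below integrates a compartment model in which a fraction p of recoveries resolve to a convalescent, sexually infectious class; all rates and the population size are placeholder values.

```python
import numpy as np
from scipy.integrate import odeint

# S(usceptible), I(nfectious by contact), C(onvalescent, sexually
# infectious), R(emoved). All parameter values are illustrative only.
beta, beta_s = 0.30, 0.02   # contact and sexual transmission rates, /day
gamma, eta = 1/10, 1/90     # recovery rate; loss of sexual infectiousness
p = 0.1                     # prob. of resolving to sexual infectiousness

def deriv(y, t, N):
    S, I, C, R = y
    new_inf = (beta * I + beta_s * C) * S / N
    return [-new_inf,
            new_inf - gamma * I,
            p * gamma * I - eta * C,
            (1 - p) * gamma * I + eta * C]

N = 1e6
t = np.linspace(0, 730, 731)               # two years, daily steps
sol = odeint(deriv, [N - 1, 1, 0, 0], t, args=(N,))
attack_ratio = sol[-1, 3] / N              # fraction ever infected
print(f"attack ratio: {attack_ratio:.2%}")
```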
Goerg, Georg M.
2015-01-01
I present a parametric, bijective transformation to generate heavy-tail versions of arbitrary random variables. The tail behavior of this heavy-tail Lambert W × F_X random variable Y depends on a tail parameter δ ≥ 0: for δ = 0, Y ≡ X; for δ > 0, Y has heavier tails than X. For X Gaussian it reduces to Tukey's h distribution. The Lambert W function provides an explicit inverse transformation, which can thus remove heavy tails from observed data. It also provides closed-form expressions for the cumulative distribution function (cdf) and probability density function (pdf); as a special case, these yield analytic expressions for Tukey's h pdf and cdf. Parameters can be estimated by maximum likelihood, and applications to S&P 500 log-returns demonstrate the usefulness of the presented methodology. The R package LambertW implements most of the introduced methodology and is publicly available on CRAN. PMID:26380372
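A minimal sketch of the transformation pair for a standardized input U (this follows the construction the abstract describes; the function names are ours): the forward map is Z = U·exp(δU²/2), and since δZ² = δU²·exp(δU²), the principal branch of the Lambert W function inverts it exactly.

```python
import numpy as np
from scipy.special import lambertw

def heavy_tail(u, delta):
    # Forward map: delta = 0 leaves the variable unchanged
    return u * np.exp(0.5 * delta * u ** 2)

def remove_heavy_tail(z, delta):
    # Inverse map via W(delta * z^2) = delta * u^2 on the principal branch
    if delta == 0:
        return z
    return np.sign(z) * np.sqrt(lambertw(delta * z ** 2).real / delta)

rng = np.random.default_rng(0)
u = rng.standard_normal(100_000)
z = heavy_tail(u, delta=0.2)            # Tukey-h-like heavy-tailed sample
u_back = remove_heavy_tail(z, 0.2)      # recovers u to floating-point error
print(np.max(np.abs(u - u_back)))
```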
An empirical study of scanner system parameters
NASA Technical Reports Server (NTRS)
Landgrebe, D.; Biehl, L.; Simmons, W.
1976-01-01
The selection of the current combination of parametric values (instantaneous field of view, number and location of spectral bands, signal-to-noise ratio, etc.) of a multispectral scanner is a complex problem due to the strong interrelationships these parameters have with one another. The study was done with the proposed scanner known as the Thematic Mapper in mind. Since an adequate theoretical procedure for this problem has apparently not yet been devised, an empirical simulation approach was used, with candidate parameter values selected by heuristic means. The results obtained using a conventional maximum likelihood pixel classifier suggest that although the classification accuracy declines slightly as the IFOV is decreased, this is more than made up for by an improved mensuration accuracy. Further, the use of a classifier involving both spatial and spectral features shows a very substantial tendency to resist degradation as the signal-to-noise ratio is decreased. Finally, further evidence is provided of the importance of having at least one spectral band in each of the major available portions of the optical spectrum.
Hagar, Yolanda C; Harvey, Danielle J; Beckett, Laurel A
2016-08-30
We develop a multivariate cure survival model to estimate lifetime patterns of colorectal cancer screening. Screening data cover long periods of time, with sparse observations for each person. Some events may occur before the study begins or after the study ends, so the data are both left-censored and right-censored, and some individuals are never screened (the 'cured' population). We propose a multivariate parametric cure model that can be used with left-censored and right-censored data. Our model allows for the estimation of the time to screening as well as the average number of times individuals will be screened. We calculate likelihood functions based on the observations for each subject using a distribution that accounts for within-subject correlation and estimate parameters using Markov chain Monte Carlo methods. We apply our methods to the estimation of lifetime colorectal cancer screening behavior in the SEER-Medicare data set. Copyright © 2016 John Wiley & Sons, Ltd.
Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.
2013-01-01
Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects account for pairings of subjects and repeated measures within subjects, since comparing diseased with non-diseased subjects while minimizing bias is of central importance. In essence, non-parametric piecewise-constant hazards are estimated and smoothed, allowing for time-varying covariates and comparisons across segments of the night. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence between Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. This re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
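The likelihood equivalence invoked here can be written out in a few lines (a standard argument, reconstructed from the abstract's description rather than from the paper itself). With event indicator δ_i, follow-up time t_i, and exponential hazard λ_i, the survival log-likelihood is

```latex
\ell_{\text{surv}}(\beta) = \sum_i \left[ \delta_i \log \lambda_i - \lambda_i t_i \right],
\qquad \lambda_i = e^{x_i^\top \beta},
```

while treating δ_i as Poisson with mean μ_i = λ_i t_i = exp(x_i^⊤β + log t_i), i.e. a log-linear model with offset log t_i, gives

```latex
\ell_{\text{Pois}}(\beta) = \sum_i \left[ \delta_i \log(\lambda_i t_i) - \lambda_i t_i \right]
= \ell_{\text{surv}}(\beta) + \sum_i \delta_i \log t_i .
```

The extra term does not involve β, so the two formulations yield identical score equations and hence identical estimates.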
Evaluating the risk of industrial espionage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bott, T.F.
1998-12-31
A methodology for estimating the relative probabilities of different compromise paths for protected information by insider and visitor intelligence collectors has been developed, based on an event-tree analysis of the intelligence collection operation. The analyst identifies target information and ultimate users who might attempt to gain that information, and then uses an event tree to develop a set of compromise paths. Probability models are developed for each of the compromise paths that use parameters based on expert judgment or historical data on security violations. The resulting probability estimates indicate the relative likelihood of different compromise paths and provide an input for security resource allocation. Application of the methodology is demonstrated using a national security example. A set of compromise paths and probability models specifically addressing this example espionage problem are developed. The probability models for hard-copy information compromise paths are quantified as an illustration of the results, using parametric values representative of historical data available in secure facilities, supplemented where necessary by expert judgment.
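To make the event-tree arithmetic concrete, here is a toy sketch in which each compromise path is a chain of conditional branch probabilities and the path probability is their product; the events and numbers are hypothetical placeholders, not those of the report.

```python
from math import prod

# Each path: a sequence of (branch event, conditional probability) pairs.
compromise_paths = {
    "hardcopy-removal": [("access document", 0.10),
                         ("evade exit search", 0.30),
                         ("deliver to user", 0.80)],
    "verbal-disclosure": [("recall content", 0.50),
                          ("meet collector", 0.05),
                          ("deliver to user", 0.90)],
}

# Path probability = product of its conditional branch probabilities.
path_probability = {name: prod(p for _, p in branches)
                    for name, branches in compromise_paths.items()}

# Relative likelihoods, e.g. for ranking security resource allocation.
total = sum(path_probability.values())
relative = {name: p / total for name, p in path_probability.items()}
print(relative)
```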
Laser-Plasma Interactions on NIKE and the Fusion Test Facility
NASA Astrophysics Data System (ADS)
Phillips, Lee; Weaver, James
2008-11-01
Recently proposed designs for a Fusion Test Facility (FTF) (Obenchain et al., Phys. Plasmas 13 056320 (2006)) for direct-drive ICF targets for energy applications involve high implosion velocities combined with higher laser irradiances. The use of high irradiances increases the likelihood of deleterious laser-plasma instabilities (LPI), but the proposed use of a 248 nm KrF laser to drive these targets is expected to minimize the LPI risk. Using simulation results from NRL's FAST hydrocode, we examine the proposed operational regimes of the FTF in relation to the thresholds for the SRS, SBS, and 2-plasmon instabilities. Simulations are also used to help design and interpret ongoing experiments being conducted at NRL's NIKE facility for the purpose of generating and studying LPI. Target geometries and laser pulse shapes were devised to create plasma conditions with long scale lengths and low electron temperatures that allow the growth of parametric instabilities. These simulations include the effects of finite beam angles through the use of raytracing.
Inference from single occasion capture experiments using genetic markers.
Hettiarachchige, Chathurika K H; Huggins, Richard M
2018-05-01
Accurate estimation of the size of animal populations is an important task in ecological science. Recent advances in molecular genetics allow the use of genetic data to estimate the size of a population from a single capture occasion rather than the repeated occasions of the usual capture-recapture experiments. Estimating population size using genetic data has, however, sometimes led to estimates that differ markedly from each other and from classical capture-recapture estimates. Here, we develop a closed-form estimator that uses genetic information to estimate the size of a population consisting of mothers and daughters, focusing on estimating the number of mothers, using data from a single sample. We demonstrate that the estimator is consistent and propose a parametric bootstrap to estimate the standard errors. The estimator is evaluated in a simulation study and applied to real data. We also consider maximum likelihood in this setting and discover problems that preclude its general use. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
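The parametric bootstrap for standard errors follows a generic recipe, sketched below with hypothetical `estimate` and `simulate` stand-ins (the paper's genetic sampling model is not reproduced here): fit once, simulate many datasets from the fitted model, re-estimate on each, and take the spread of the replicates.

```python
import numpy as np

def bootstrap_se(estimate, simulate, data, n_boot=1000, seed=0):
    """Parametric-bootstrap standard error of a closed-form estimator.

    estimate : data -> parameter estimate
    simulate : (parameter, rng) -> synthetic dataset from the fitted model
    """
    rng = np.random.default_rng(seed)
    theta_hat = estimate(data)
    replicates = [estimate(simulate(theta_hat, rng)) for _ in range(n_boot)]
    return theta_hat, np.std(replicates, ddof=1)

# Toy demonstration: mean of a Poisson sample and its bootstrap SE.
data = np.random.default_rng(1).poisson(4.0, size=50)
est, se = bootstrap_se(
    estimate=np.mean,
    simulate=lambda lam, rng: rng.poisson(lam, size=50),
    data=data)
print(f"estimate {est:.2f}, bootstrap SE {se:.2f}")
```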
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao
1991-01-01
Various types of multistage decoding for multilevel block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance, are discussed. Error performance of the codes is analyzed for a memoryless additive channel based on the various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was also found that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10⁻⁶. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
NASA Astrophysics Data System (ADS)
Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto
2017-03-01
Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of this research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations on multiple individuals observed over time; a panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance of the error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
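For reference, the two-way error component random effects model the abstract refers to has the standard form (our notation; the abstract reports only the fitted fixed-effect part):

```latex
Y_{it} = \beta_0 + \beta_1 X_{1,it} + \beta_2 X_{2,it} + u_i + v_t + \varepsilon_{it},
\qquad
u_i \sim (0, \sigma_u^2),\;
v_t \sim (0, \sigma_v^2),\;
\varepsilon_{it} \sim (0, \sigma_\varepsilon^2),
```

where u_i is the individual effect, v_t the time effect, and ε_it the idiosyncratic error. MIVQUE targets the three variance components, and the reported Ŷ* equation corresponds to the estimated fixed-effect part.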
MODELING LEFT-TRUNCATED AND RIGHT-CENSORED SURVIVAL DATA WITH LONGITUDINAL COVARIATES
Su, Yu-Ru; Wang, Jane-Ling
2018-01-01
There is a surge in medical follow-up studies that include longitudinal covariates in the modeling of survival data. So far, the focus has been largely on right-censored survival data. We consider survival data that are subject to both left truncation and right censoring. Left truncation is well known to produce a biased sample, and the sampling bias issue has been resolved in the literature for the case involving baseline or time-varying covariates that are observable. The problem remains open, however, for the important case where longitudinal covariates are present in survival models. A joint likelihood approach has been shown in the literature to provide an effective way to overcome those difficulties for right-censored data, but this approach faces substantial additional challenges in the presence of left truncation. Here we propose an alternative likelihood to overcome these difficulties and show that the regression coefficient in the survival component can be estimated unbiasedly and efficiently. Issues about the bias for the longitudinal component are discussed. The new approach is illustrated numerically through simulations and data from a multi-center AIDS cohort study. PMID:29479122
A Maximum-Likelihood Approach to Force-Field Calibration.
Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam
2015-09-28
A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2); and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms and use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2 and in comparable performance on the test set but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage but in much poorer performance on the training set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than the previous parametrizations of UNRES. 
The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions.
Lewis, Melissa A; Rhew, Isaac C; Fairlie, Anne M; Swanson, Alex; Anderson, Judyth; Kaysen, Debra
2018-03-06
The purpose of this study was to evaluate personalized feedback intervention (PFI) framing with two web-delivered PFIs aimed at reducing young adult alcohol-related risky sexual behavior (RSB). Combined PFIs typically use an additive approach whereby independent components on drinking and components on RSB are presented without discussion of the influence of alcohol on RSB. In contrast, an integrated PFI highlights the RSB-alcohol connection by presenting integrated alcohol and RSB components that focus on the role of intoxication as a barrier to risk reduction in sexual situations. In a randomized controlled trial, 402 (53.98% female) sexually active young adults aged 18-25 were randomly assigned to a combined PFI, an integrated PFI, or an attention control. All assessment and intervention procedures were web-based. At the 1-month follow-up, those assigned to the integrated condition had a lower likelihood of having any casual sex partners compared to the control group. At the 6-month follow-up, the combined condition had a lower likelihood of having any casual sex partners compared to the control group. For alcohol-related RSB at the 1-month follow-up, both interventions showed a lower likelihood of any drinking prior to sex compared to the control group. For alcohol-related sexual consequences, results showed a reduction in the non-zero count of consequences in the integrated condition compared to the control at the 1-month follow-up. For typical drinks per week, those in the combined condition showed a greater reduction in the non-zero count of drinks than those in the control condition at the 1-month follow-up. While there were no significant differences between the two interventions, the current findings highlight the utility of two efficacious web-based alcohol and RSB interventions in a national sample of at-risk young adults.
Michelsen, Brigitte; Kristianslund, Eirik Klami; Sexton, Joseph; Hammer, Hilde Berner; Fagerli, Karen Minde; Lie, Elisabeth; Wierød, Ada; Kalstad, Synøve; Rødevand, Erik; Krøll, Frode; Haugeberg, Glenn; Kvien, Tore K
2017-11-01
To investigate the predictive value of baseline depression/anxiety on the likelihood of achieving joint remission in rheumatoid arthritis (RA) and psoriatic arthritis (PsA) as well as the associations between baseline depression/anxiety and the components of the remission criteria at follow-up. We included 1326 patients with RA and 728 patients with PsA from the prospective observational NOR-DMARD study starting first-time tumour necrosis factor inhibitors or methotrexate. The predictive value of depression/anxiety on remission was explored in prespecified logistic regression models and the associations between baseline depression/anxiety and the components of the remission criteria in prespecified multiple linear regression models. Baseline depression/anxiety according to EuroQoL-5D-3L, Short Form-36 (SF-36) Mental Health subscale ≤56 and SF-36 Mental Component Summary ≤38 negatively predicted 28-joint Disease Activity Score <2.6, Simplified Disease Activity Index ≤3.3, Clinical Disease Activity Index ≤2.8, ACR/EULAR Boolean and Disease Activity Index for Psoriatic Arthritis ≤4 remission after 3 and 6 months treatment in RA (p≤0.008) and partly in PsA (p from 0.001 to 0.73). Baseline depression/anxiety was associated with increased patient's and evaluator's global assessment, tender joint count and joint pain in RA at follow-up, but not with swollen joint count and acute phase reactants. Depression and anxiety may reduce likelihood of joint remission based on composite scores in RA and PsA and should be taken into account in individual patients when making a shared decision on a treatment target. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Resolving the percentage of component terrains within single resolution elements
NASA Technical Reports Server (NTRS)
Marsh, S. E.; Switzer, P.; Kowalik, W. S.; Lyon, R. J. P.
1980-01-01
An approximate maximum likelihood technique employing a widely available discriminant analysis program, developed for resolving the percentage of component terrains within single resolution elements, is discussed. The method uses all four channels of Landsat data simultaneously and does not require prior knowledge of the percentage of components in mixed pixels. It was tested in five cases chosen to represent mixtures of outcrop, soil, and vegetation that would typically be encountered in geologic studies with Landsat data. In all five cases, the method proved superior to single-band weighted average and linear regression techniques and permitted an estimate of the total area occupied by component terrains to within plus or minus 6% of the true area covered. Its major drawback is a consistent overestimation of the pixel component percentage of the darker materials (vegetation) and an underestimation of the pixel component percentage of the brighter materials (sand).
Shuttle cryogenic supply system optimization study
NASA Technical Reports Server (NTRS)
1971-01-01
Technical information on different cryogenic supply systems is presented for selecting representative designs. Parametric data and sensitivity studies, and an evaluation of related technology status, are included. An integrated mathematical model for hardware program support was developed. The life support system, power generation, and propellant supply are considered. The major study conclusions are the following: optimum integrated systems tend toward maximizing liquid storage; vacuum jacketing of tanks has a major effect on integrated systems; the advantages of subcritical over supercritical storage decrease as the quantity of propellant or reactant decreases; Shuttle duty cycles are not severe; the operational mode has a significant effect on reliability; components are available for most subsystem applications; and subsystems and components require a minimum amount of technology development.
Isotope Brayton electric power system for the 500 to 2500 watt range.
NASA Technical Reports Server (NTRS)
Macosko, R. P.; Barna, G. J.; Block, H. B.; Ingle, B. D.
1972-01-01
An extensive study was conducted at the Lewis Research Center to evaluate an isotope Brayton electric power system for use in the 500 to 2500 W power range. The study emphasized overall system simplicity in order to reduce parasitic power losses and improve system reliability. The study included detailed parametric cycle analysis, conceptual component designs, and evaluation of system packaging. The study has resulted in the selection of a single-loop system (gas) with six major components including one rotating unit. Calculated net system efficiency varies from 23 to 28% over the power range. The use of the Pu-238 heat source being developed for the Multi-Hundred-Watt Radioisotope Thermoelectric Generator program was assumed.
Solar-powered airplane design for long-endurance, high-altitude flight
NASA Technical Reports Server (NTRS)
Youngblood, J. W.; Talay, T. A.
1982-01-01
This paper describes the performance analysis and design of a solar-powered airplane for long-endurance, unmanned, high-altitude cruise flight utilizing electric propulsion and solar energy collection/storage devices. For a fixed calendar date and geocentric latitude, the daily energy balance, airplane sizing, and airplane aerodynamics relations combine to determine airplane size and geometry to meet mission requirements. Vehicle component weight loadings, aerodynamic parameters, and current and projected values of power train component characteristics form the basis of the solution. For a specified mission, a candidate airplane design is presented to demonstrate the feasibility of solar-powered long endurance flight. Parametric data are presented to illustrate the airplane's mission flexibility.
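A toy version of the daily energy-balance closure that drives such sizing is sketched below; every number is an illustrative placeholder, and the real analysis couples this balance to the sizing and aerodynamic relations for a given date and latitude.

```python
# Cruise power from the level-flight force balance, then the collector
# (wing) area needed so daytime solar collection covers daytime cruise
# plus the storage-penalized nighttime cruise energy.
W = 3000.0          # aircraft weight, N
L_over_D = 30.0     # lift-to-drag ratio
v = 30.0            # cruise speed, m/s
eta_prop = 0.8      # propulsive efficiency

P_cruise = W * v / (L_over_D * eta_prop)     # required power, W

day_h, night_h = 14.0, 10.0                  # daylight and night hours
eta_cell, eta_storage = 0.15, 0.6            # cell and round-trip storage
flux = 600.0                                 # mean daytime flux, W/m^2

E_day = P_cruise * day_h * 3600.0            # daytime cruise energy, J
E_night = P_cruise * night_h * 3600.0 / eta_storage
collector_area = (E_day + E_night) / (eta_cell * flux * day_h * 3600.0)
print(f"cruise power {P_cruise:.0f} W, collector area {collector_area:.1f} m^2")
```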
Sizing Power Components of an Electrically Driven Tail Cone Thruster and a Range Extender
NASA Technical Reports Server (NTRS)
Jansen, Ralph H.; Bowman, Cheryl; Jankovsky, Amy
2016-01-01
The aeronautics industry has been challenged on many fronts to increase efficiency, reduce emissions, and decrease dependency on carbon-based fuels. This paper provides an overview of the turboelectric and hybrid electric technologies being developed under NASA's Advanced Air Transportation Technology (AATT) Project and discusses how these technologies can impact vehicle design. The discussion includes an overview of key hybrid electric studies and technology investments, the approach to making informed investment decisions based on key performance parameters and mission studies, and the power system architectures for two candidate aircraft. Finally, the power components for a single-aisle turboelectric aircraft with an electrically driven tail cone thruster and for a hybrid-electric nine-passenger aircraft with a range extender are parametrically sized, and the sensitivity of these components to key parameters is presented.
Baig, Mirza Rafi
2012-01-01
Introduction: The current study aims to explore the knowledge, attitudes, and perceptions of hospital pharmacists towards HIV/AIDS and patients living with HIV/AIDS (PLWHA) in the state of Kedah, Malaysia. Material and methods: This was a cross-sectional study conducted among the hospital pharmacists in three government hospitals in Kedah, using a self-administered 43-item questionnaire. Data analysis was done using non-parametric tests and multinomial regression. Results: A total of 75 respondents participated in this study, a response rate of 60.8%. The majority were found to be well aware of the causes of HIV/AIDS. However, 34 (45.3%) believed erroneously that HIV/AIDS cannot be transmitted through tattooing or body piercing. Nearly 25 (33.3%) of the respondents believed that preventing the use of intravenous drugs may not be effective in preventing HIV/AIDS, and endorsed social isolation as a measure to prevent HIV/AIDS. The majority (66.6%) had negative attitudes and about 20% held extremely negative attitudes. Findings from regression modelling revealed that hospital (-2 log likelihood = 215.182, χ² = 18.060, Df = 8, p = 0.021) and gender (-2 log likelihood = 213.643, χ² = 16.521, Df = 8, p = 0.035) were the factors most likely to affect the attitudes of respondents. Conclusions: Overall, more than one third of the respondents were found to have negative attitudes towards PLWHA. Gender, job experience, and hospitals with more HIV/AIDS patient visits were the main factors affecting attitudes. PMID:24482660
Vegetables- and antioxidant-related nutrients, genetic susceptibility, and non-Hodgkin lymphoma risk
Kelemen, Linda E.; Wang, Sophia S.; Lim, Unhee; Cozen, Wendy; Schenk, Maryjean; Hartge, Patricia; Li, Yan; Rothman, Nathaniel; Davis, Scott; Chanock, Stephen J.; Ward, Mary H.
2009-01-01
Genetic susceptibility to DNA oxidation, carcinogen metabolism, and altered DNA repair may increase non-Hodgkin lymphoma (NHL) risk, whereas vegetables- and antioxidant-related nutrients may decrease risk. We evaluated the interaction of a priori-defined dietary factors with 28 polymorphisms in these metabolic pathways. Incident cases (n = 1,141) were identified during 1998–2000 from four cancer registries and frequency-matched to population-based controls (n = 949). We estimated diet-gene joint effects using two-phase semi-parametric maximum-likelihood methods, which utilized genotype data from all subjects as well as 371 cases and 311 controls with available diet information. Adjusted odds ratios (95% confidence intervals) were lower among common allele carriers with higher dietary intakes. For the GSTM3 3-base insertion and higher total vegetable intake, the risk was 0.56 (0.35–0.92, p interaction = 0.03); for GSTP1 A114V and higher cruciferous vegetable intake, the risk was 0.52 (0.34–0.81, p interaction = 0.02); for OGG1 S326C and higher daily zinc intake, the risk was 0.71 (0.47–1.08, p interaction = 0.04) and for XRCC3 T241M and higher green leafy vegetable intake, the risk was 0.63 (0.41–0.97, p interaction = 0.03). Calculation of the false positive report probability determined a high likelihood of falsely positive associations. Although most associations have not been examined previously with NHL, our results suggest the examined polymorphisms are not modifiers of the association between vegetable and zinc intakes and NHL risk. PMID:18204928
Morales, Leonardo Fabio; Gordon-Larsen, Penny; Guilkey, David
2016-12-01
We estimate a structural dynamic model of the determinants of obesity. In addition to including many of the well-recognized endogenous factors mentioned in the literature as obesity determinants, we also model the individual's residential location as a choice variable, which is the main contribution of this paper to the literature. This allows us to control for an individual's self-selection into communities that possess the types of amenities in the built environment which in turn affect obesity-related behaviors such as physical activity (PA) and fast food consumption. We specify reduced-form equations for a set of endogenous demand decisions, together with an obesity structural equation. The whole system of equations is jointly estimated by a semi-parametric full information log-likelihood method that allows for a general pattern of correlation in the errors across equations. Our model predicts a reduction in adult obesity of 7 percentage points as a result of a continued high level of PA from adolescence into adulthood; a reduction of 0.7 (3) percentage points in adult obesity as a result of a one standard deviation reduction in weekly fast food consumption for women (men); and a reduction of 0.02 (0.05) in adult obesity as a result of a one standard deviation change in several neighborhood amenities for women (men). Another key finding is that controlling for residential self-selection has substantive implications. To our knowledge, this has not yet been documented within a full information maximum likelihood framework. Copyright © 2016 Elsevier B.V. All rights reserved.
Model averaging in linkage analysis.
Matthysse, Steven
2006-06-05
Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc.
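The convergence-free construction described here is easy to demonstrate on a toy target (our illustration, not the author's code): draw the first state exactly from the target by rejection sampling, then apply any Metropolis kernel whose equilibrium distribution is that target; by detailed balance, every subsequent state is then also exactly target-distributed, so no burn-in diagnostics are needed.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Toy unnormalized density on [0, 1] (stand-in for the empirical-Bayes
    # posterior over model space described above); max value is about 1.0.
    return np.exp(-50 * (x - 0.3) ** 2) + 0.5 * np.exp(-50 * (x - 0.8) ** 2)

M = 1.5  # envelope constant: target(x) <= M * 1 (uniform proposal density)

def rejection_sample():
    # Draw the chain's starting point exactly from the target distribution.
    while True:
        x = rng.uniform()
        if rng.uniform() < target(x) / M:
            return x

def metropolis_step(x, step=0.1):
    # Metropolis kernel with the target as its equilibrium distribution;
    # proposals outside [0, 1] are rejected (target is zero there).
    y = x + step * rng.standard_normal()
    if 0.0 <= y <= 1.0 and rng.uniform() < target(y) / target(x):
        return y
    return x

x = rejection_sample()
chain = [x]
for _ in range(10_000):      # every element already has the target law
    x = metropolis_step(x)
    chain.append(x)
```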
NASA Astrophysics Data System (ADS)
Runyan, T. E.; Wood, W. T.; Palmsten, M. L.; Zhang, R.
2016-12-01
Gas hydrates, specifically methane hydrates, are sparsely sampled on a global scale, and their accumulation is difficult to predict geospatially. Several attempts have been made at estimating global inventories, and to some extent geospatial distribution, using geospatial extrapolations guided by geophysical and geochemical methods. Our objective is to quantitatively predict the geospatial likelihood of encountering methane hydrates, with uncertainty. Predictions could be incorporated into analyses of drilling hazards as well as climate change. We use global data sets (including water depth, temperature, pressure, TOC, sediment thickness, and heat flow) as parameters to train a k-nearest neighbor (KNN) machine learning technique. The KNN is unsupervised and non-parametric; we do not impose any interpretive influence on the prior probability distribution, so our results are strictly data driven. We have selected as test sites several locations where gas hydrates have been well studied, each with a significantly different geologic setting. These include: the Blake Ridge (U.S. East Coast), Hydrate Ridge (U.S. West Coast), and the Gulf of Mexico. We then use KNN to quantify similarities between these sites and determine, via the distance in parameter space, the likelihood and uncertainty of encountering gas hydrate anywhere in the world. Here we operate under the assumption that proximity in parameter space is proportional to the probability of the occurrence of gas hydrate. We then compare the global similarity maps made from our several test sites to identify the geologic (geophysical, bio-geochemical) parameters best suited for predicting gas hydrate occurrence.
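A minimal sketch of the distance-in-parameter-space idea follows, assuming standardized predictor grids; the arrays are random placeholders for the global data sets, and the distance-to-score conversion is one simple choice among many, not the authors' calibration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_grid = rng.normal(size=(5000, 6))   # grid cells x predictors (depth, TOC, ...)
X_sites = rng.normal(size=(30, 6))    # predictor values at known hydrate sites

# Standardize so every predictor contributes comparably to the distance.
scaler = StandardScaler().fit(X_grid)
nn = NearestNeighbors(n_neighbors=3).fit(scaler.transform(X_sites))
dist, _ = nn.kneighbors(scaler.transform(X_grid))

# Convert mean distance to the k nearest known sites into a relative score;
# the working assumption (as in the abstract) is similarity ~ likelihood.
score = 1.0 / (1.0 + dist.mean(axis=1))
print(score.min(), score.max())
```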
NASA Astrophysics Data System (ADS)
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe efficiency coefficient (NSE) and the likelihood function with Gaussian, independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the elimination of heteroscedasticity in model residuals; and (3) three likelihood functions (NSE, the Generalized Error Distribution with BC (BC-GED), and the Skew Generalized Error Distribution with BC (BC-SGED)) are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. The performance of the calibrated models is compared using observed river discharges and groundwater levels. The results show that the minimum-variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, under which large errors have low probability while small errors near zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness in the error distribution may be unnecessary, because the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
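The NSE-likelihood equivalence in step (1) can be reconstructed in outline (a standard argument in our notation). For residuals e_t = y_t^obs - y_t^sim assumed Gaussian i.i.d., profiling out σ² leaves

```latex
\ell(\theta) = -\frac{n}{2}\log\!\left(\sum_{t=1}^{n} e_t^2\right) + \text{const},
\qquad
\mathrm{NSE}(\theta) = 1 - \frac{\sum_t e_t^2}{\sum_t \left(y_t^{\mathrm{obs}} - \bar{y}^{\mathrm{obs}}\right)^2},
```

so maximizing the likelihood is equivalent to minimizing the sum of squared errors, which in turn maximizes NSE because its denominator is fixed by the observations.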
Review of Statistical Methods for Analysing Healthcare Resources and Costs
Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G
2011-01-01
We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their suitability for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although the methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of the above data characteristics, may be preferable, but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344
Building and using a statistical 3D motion atlas for analyzing myocardial contraction in MRI
NASA Astrophysics Data System (ADS)
Rougon, Nicolas F.; Petitjean, Caroline; Preteux, Francoise J.
2004-05-01
We address the issue of modeling and quantifying myocardial contraction from 4D MR sequences and present an unsupervised approach for building and using a statistical 3D motion atlas for the normal heart. This approach relies on a state-of-the-art variational non-rigid registration (NRR) technique using generalized information measures, which allows for robust intra-subject motion estimation and inter-subject anatomical alignment. The atlas is built from a collection of jointly acquired tagged and cine MR exams in short- and long-axis views. Subject-specific non-parametric motion estimates are first obtained by incremental NRR of tagged images onto the end-diastolic (ED) frame. Individual motion data are then transformed into the coordinate system of a reference subject using subject-to-reference mappings derived by NRR of cine ED images. Finally, principal component analysis of the aligned motion data is performed for each cardiac phase, yielding a mean model and a set of eigenfields encoding kinematic variability. The latter define an organ-dedicated hierarchical motion basis which enables parametric motion measurement from arbitrary tagged MR exams. To this end, the atlas is transformed into subject coordinates by reference-to-subject NRR of ED cine frames. Atlas-based motion estimation is then achieved by parametric NRR of tagged images onto the ED frame, yielding a compact description of myocardial contraction during diastole.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty of process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance; the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general and can be applied to a wide range of problems in hydrology and beyond.
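One way to write such an index (our reconstruction from the description above, not necessarily the paper's exact notation): for a process K with alternative process models M_k, model probabilities P(M_k), and model-specific parameters θ_k, the importance of K to output Δ can be measured as

```latex
PS_K = \frac{\operatorname{Var}_K\!\left[\, \mathbb{E}_{\sim K}(\Delta \mid K) \,\right]}{\operatorname{Var}(\Delta)},
\qquad
\operatorname{Var}_K\!\left[g\right]
= \sum_k P(M_k)\, \mathbb{E}_{\theta_k \mid M_k}\!\left[g^2\right]
- \left(\sum_k P(M_k)\, \mathbb{E}_{\theta_k \mid M_k}\!\left[g\right]\right)^{2},
```

so the outer variance averages over both which process model holds and its parameter values, while the inner expectation integrates out all other processes.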
NASA Technical Reports Server (NTRS)
Kovach, L. S.; Zdankiewicz, E. M.
1987-01-01
Vapor compression distillation technology for phase-change recovery of potable water from wastewater has evolved into a technically mature approach for use aboard the Space Station. A program to parametrically test an advanced preprototype Vapor Compression Distillation Subsystem (VCDS) was completed during 1985 and 1986. In parallel with parametric testing, a hardware improvement program was initiated to test the feasibility of incorporating several key improvements into the advanced preprototype VCDS following the initial parametric tests. Specific areas of improvement included long-life self-lubricated bearings, a lightweight, highly efficient compressor, and a long-life magnetic drive. With the exception of the self-lubricated bearings, these improvements were incorporated. The advanced preprototype VCDS was designed to reclaim 95 percent of the available wastewater at a nominal water recovery rate of 1.36 kg/h, achieved at a solids concentration of 2.3 percent and a 308 K condenser temperature. While this performance was maintained through the initial testing, a 300 percent improvement in water production rate, with a correspondingly lower specific energy, was achieved following incorporation of the improvements. Testing involved the characterization of key VCDS performance factors as a function of recycle loop solids concentration, distillation unit temperature, and fluids pump speed. The objective of this effort was to expand the VCDS data base to enable defining optimum performance characteristics for flight hardware development.
NASA Astrophysics Data System (ADS)
Foyo-Moreno, I.; Vida, J.; Olmo, F. J.; Alados-Arboledas, L.
2000-11-01
Since the discovery of ozone depletion over Antarctica and the globally declining trend of stratospheric ozone concentration, public and scientific concern has been raised in recent decades. A very important consequence is increased broadband and spectral UV radiation in the environment, with the biological effects and health risks that may follow. The absence of widespread measurements of this radiometric flux has led to the development and use of alternative estimation procedures such as parametric approaches. Parametric models compute the radiant energy using available atmospheric parameters; some compute the global solar irradiance at surface level by addition of its direct-beam and diffuse components. In the present work, we compare two cloudless-sky parametrization schemes. Both methods provide an estimate of the solar spectral irradiance that can be integrated spectrally within the limits of interest. For this test we used data recorded at a radiometric station located at Granada (37.180°N, 3.580°W, 660 m a.m.s.l.), an inland location. The database includes hourly values of the relevant variables covering the years 1994-95. The performance of the models has been tested in relation to their predictive capability for global solar irradiance in the UV range (290-385 nm). Our study shows that information concerning aerosol radiative effects is fundamental to obtaining a good estimate. The original version of SPCTRAL2 provides estimates of the experimental values with negligible mean bias deviation, suggesting not only the appropriateness of the model but also how well the aerosol features fixed in it suit Granada conditions. SMARTS2 offers increased flexibility in the selection of the different aerosol models included in the code and provides its best results when the urban models are selected. Although SMARTS2 provides slightly worse results overall, both models give estimates of solar ultraviolet irradiance with mean bias deviation below 5% and root mean square deviation close to the experimental errors.
NASA Technical Reports Server (NTRS)
Jackson, L. Neal; Crenshaw, John, Sr.; Davidson, William L.; Herbert, Frank J.; Bilodeau, James W.; Stoval, J. Michael; Sutton, Terry
1989-01-01
The optimum hardware miniaturization level with the lowest cost impact for space biology hardware was determined. Space biology hardware and/or components, subassemblies, and assemblies that are the most likely candidates for miniaturization were defined, and the relative cost impacts of such miniaturization were analyzed. A mathematical or statistical analysis method capable of supporting the development of parametric cost-impact analyses for levels of production design miniaturization is provided.
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through the addition of 25 kW solar dynamic (SD) power modules. The SD module rejects waste heat from the power conversion cycle to space through a pumped-loop, multi-panel, deployable radiator. The baseline radiator configuration was defined during the Space Station conceptual design phase and is a function of the state point and heat rejection requirements of the power conversion unit. Requirements determined by the overall station design, such as mass, system redundancy, micrometeoroid and space debris impact survivability, launch packaging, costs, and thermal and structural interaction with other station components, have also been design drivers for the radiator configuration. Extensive thermal and power cycle modeling capabilities have been developed; these are powerful tools in Station design and analysis but prove cumbersome and costly for simple component preliminary design studies. To aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates the heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations. A brief description and discussion of the numerical model, its capabilities and limitations, and the results of the parametric studies performed are presented.
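A back-of-envelope version of the heat balance such a model resolves is sketched below; the numbers are illustrative placeholders, not Freedom design values, and the real model resolves these balances per flow tube and panel.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def panel_area(Q, emissivity, T_rad, T_sink):
    """Radiating area from Q = eps * sigma * A * (T_rad^4 - T_sink^4)."""
    return Q / (emissivity * SIGMA * (T_rad ** 4 - T_sink ** 4))

def fluid_outlet_temp(Q, m_dot, c_p, T_in):
    """Pumped-loop energy balance: Q = m_dot * c_p * (T_in - T_out)."""
    return T_in - Q / (m_dot * c_p)

# Placeholder case: reject 70 kW of cycle waste heat.
A = panel_area(Q=70e3, emissivity=0.85, T_rad=330.0, T_sink=250.0)
T_out = fluid_outlet_temp(Q=70e3, m_dot=0.9, c_p=2400.0, T_in=340.0)
print(f"radiator area {A:.1f} m^2, fluid outlet {T_out:.1f} K")
```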
NASA Astrophysics Data System (ADS)
Meadows, Alexander R.; Cupal, Josef; Hříbek, Petr; Durák, Michal; Kramer, Daniel; Rus, Bedřich
2017-05-01
We present the design of a collinear femtosecond optical parametric amplification (OPA) system producing a tunable output at wavelengths between 1030 nm and 1080 nm from a Ti:sapphire pump laser at a wavelength of 795 nm. Generation of a supercontinuum seed pulse is followed by one stage of amplification in beta barium borate (BBO) and two stages of amplification in potassium titanyl arsenate (KTA), resulting in a 225 μJ output pulse with a duration of 90 fs. The output of the system has been measured by self-referenced spectral interferometry to yield the complete spectrum and spectral phase of the pulse. Compared to KTP, the greater transparency of KTA in the spectral range from 3-4 μm allows for reduced idler absorption and enhanced gain from the OPA process when it is pumped by the fundamental frequency of a Ti:sapphire laser. In turn, the use of the Ti:sapphire fundamental at 795 nm as a pump improves the efficiency with which light can be converted to wavelengths between 1030 nm and 1080 nm and subsequently used to test components for Nd-based laser systems. This OPA system is operated at 1 kHz for diagnostic development and laser-induced damage threshold testing of optical components for the ELI-Beamlines project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smetanin, S N
2014-11-30
Using mathematical modelling we have studied the conditions of low-threshold collinear optical frequency comb generation under transient (picosecond) stimulated Raman scattering (SRS) and parametric four-wave coupling of SRS components in crystals. It is shown that Raman-parametric generation of an octave-spanning optical frequency comb occurs most effectively under intermediate, transient SRS at a pump pulse duration exceeding the dephasing time by five to twenty times. We have found the optimal values of not only the laser pump pulse duration, but also of the Raman crystal lengths corresponding to highly efficient generation of an optical frequency comb from the second anti-Stokes to the fourth Stokes Raman components. For the KGd(WO₄)₂ (high dispersion) and Ba(NO₃)₂ (low dispersion) crystals pumped at a wavelength of 1.064 μm and a pulse duration five or more times greater than the dephasing time, the optimum length of the crystal was 0.3 and 0.6 cm, respectively, which is consistent with the condition of the most effective Stokes – anti-Stokes coupling ΔkL ≈ 15, where Δk is the wave detuning from phase matching of Stokes – anti-Stokes coupling, determined by the refractive index dispersion of the SRS medium. (nonlinear optical phenomena)
Marmarelis, Vasilis Z.; Berger, Theodore W.
2009-01-01
Parametric and non-parametric modeling methods are combined to study the short-term plasticity (STP) of synapses in the central nervous system (CNS). The nonlinear dynamics of STP are modeled by means of: (1) previously proposed parametric models based on mechanistic hypotheses and/or specific dynamical processes, and (2) non-parametric models (in the form of Volterra kernels) that transform the presynaptic signals into postsynaptic signals. In order to use the two approaches synergistically, we estimate the Volterra kernels of the parametric models of STP for four types of synapses using synthetic broadband input–output data. Results show that the non-parametric models accurately and efficiently replicate the input–output transformations of the parametric models. Volterra kernels provide a general and quantitative representation of STP. PMID:18506609
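To make the non-parametric side of this pairing concrete, the sketch below estimates a discrete first-order Volterra kernel (plus pairwise second-order terms) from synthetic broadband input-output data by ordinary least squares. The synthetic system, memory length, and estimator are illustrative assumptions; the paper's kernel-estimation machinery may differ.

```python
# Minimal Volterra-kernel estimation from broadband input-output data.
import numpy as np

rng = np.random.default_rng(0)
M = 5                                    # memory length (samples), assumed
x = rng.standard_normal(2000)            # broadband input

# Synthetic "true" system: linear kernel plus one quadratic interaction.
y = np.convolve(x, np.exp(-np.arange(M)), mode="full")[: len(x)]
y += 0.3 * np.roll(x, 1) * np.roll(x, 2)

# Build the regression matrix of lagged and pairwise-lagged inputs, then
# estimate first- and second-order kernels by ordinary least squares.
lags = np.column_stack([np.roll(x, k) for k in range(M)])
pairs = np.column_stack([lags[:, i] * lags[:, j]
                         for i in range(M) for j in range(i, M)])
X = np.column_stack([np.ones(len(x)), lags, pairs])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
k1 = coef[1 : 1 + M]        # first-order Volterra kernel estimate
print(np.round(k1, 3))      # ~ exp(-[0,1,2,3,4]) up to edge effects
```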
Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors
ERIC Educational Resources Information Center
Guerra-Peña, Kiero; Steinley, Douglas
2016-01-01
Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indiscriminately, using the same fit statistics and likelihood ratio tests. This…
ERIC Educational Resources Information Center
Ham, Sandra; Walker, Erica
This paper describes the Milwaukee Public Schools' involvement in Equity 2000, a standards-based reform initiative to enhance mathematics education and achievement among students of color, thereby increasing their likelihood of college enrollment and completion. The study highlights efforts to support and sustain a key component of Equity 2000:…
Additive nonlinear biomass equations: A likelihood-based approach
David L. R. Affleck; Ulises Dieguez-Aranda
2016-01-01
Since Parresol's (Can. J. For. Res. 31:865-878, 2001) seminal article on the topic, it has become standard to develop nonlinear tree biomass equations to ensure compatibility among total and component predictions and to fit these equations using multistep generalized least-squares methods. In particular, many studies have specified equations for total tree...
Taking a Broader View: Using Institutional Research's Natural Qualities for Transformation
ERIC Educational Resources Information Center
Leimer, Christina
2009-01-01
Many authors (Kezar, 2005; Duke, 2002; Keeling, Underhile, and Wall, 2007; Matier, Sidle, and Hurst, 1994) propose a new model of higher education organizations or a style of decision making whose central component is collaboration. This key characteristic helps make full use of all of the institution's resources to increase the likelihood that it…
2007-12-14
successful Standardization Program for Hull, Mechanical, and Electrical (HM&E) equipment and components of ships in the surface force costs the United......standardization will significantly increase the likelihood of achieving transformation successes. Properly identifying and understanding the root cause of a
Stratoudaki, Theodosia; Ellwood, Robert; Sharples, Steve; Clark, Matthew; Somekh, Michael G; Collison, Ian J
2011-04-01
A dual frequency mixing technique has been developed for measuring velocity changes caused by material nonlinearity. The technique is based on the parametric interaction between two surface acoustic waves (SAWs): the low frequency pump SAW generated by a transducer and the high frequency probe SAW generated and detected using laser ultrasonics. The pump SAW stresses the material under the probe SAW. The stress (typically <5 MPa) is controlled by varying the timing between the pump and probe waves. The nonlinear interaction is measured as a phase modulation of the probe SAW and equated to a velocity change. The velocity-stress relationship is used as a measure of material nonlinearity. Experiments were conducted to observe the pump-probe interaction by changing the pump frequency and to compare the nonlinear response of aluminum and fused silica. Experiments showed these two materials had opposite nonlinear responses, consistent with previously published data. The technique could be applied to lifetime predictions of engineered components by measuring changes in nonlinear response caused by fatigue.
The benefits of adaptive parametrization in multi-objective Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ghisu, Tiziano; Parks, Geoffrey T.; Jaeggi, Daniel M.; Jarrett, Jerome P.; Clarkson, P. John
2010-10-01
In real-world optimization problems, large design spaces and conflicting objectives are often combined with a large number of constraints, resulting in a highly multi-modal, challenging, fragmented landscape. The local search at the heart of Tabu Search, while being one of its strengths in highly constrained optimization problems, requires a large number of evaluations per optimization step. In this work, a modification of the pattern search algorithm is proposed: this modification, based on a Principal Component Analysis of the approximation set, allows both a re-alignment of the search directions, thereby creating a more effective parametrization, and an informed reduction of the size of the design space itself. These changes make the optimization process more computationally efficient and more effective: higher-quality solutions are identified in fewer iterations. These advantages are demonstrated on a number of standard analytical test functions (from the ZDT and DTLZ families) and on a real-world problem (the optimization of an axial compressor preliminary design).
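A hedged sketch of the central idea follows: take the current approximation set, extract its principal directions, and use them as re-aligned pattern-search directions, discarding low-variance directions to shrink the design space. The step logic and variance threshold are illustrative choices, not the authors' implementation.

```python
# PCA-based re-alignment of pattern-search directions (illustrative).
import numpy as np

def realigned_directions(approximation_set, keep_fraction=0.95):
    """Return orthogonal search directions aligned with the principal
    components of the approximation set, dropping directions that explain
    little variance (an informed design-space reduction)."""
    X = np.asarray(approximation_set, dtype=float)
    Xc = X - X.mean(axis=0)
    # Right singular vectors = principal directions of the set.
    _, s, vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / np.sum(s**2)
    n_keep = int(np.searchsorted(np.cumsum(var), keep_fraction)) + 1
    return vt[:n_keep]       # each row is a search direction

# Usage: generate pattern-search trial points +/- step along each direction.
pts = np.random.default_rng(1).normal(size=(40, 6))   # stand-in archive
dirs = realigned_directions(pts)
x0, step = pts[0], 0.1
trials = np.vstack([x0 + step * dirs, x0 - step * dirs])
```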
Generating grids directly on CAD database surfaces using a parametric evaluator approach
NASA Technical Reports Server (NTRS)
Gatzhe, Timothy D.; Melson, Thomas G.
1995-01-01
A very important, but often overlooked, step in grid generation is acquiring a suitable geometry definition of the vehicle to be analyzed. In the past, geometry was usually obtained by generating a number of cross-sections of each component. A number of recent efforts have focused on non-uniform rational B-spline (NURBS) surfaces to provide a single type of analytic surface to deal with inside the grid generator. This approach has required the development of tools to read other types of surfaces and convert them, either exactly or by approximation, into a NURBS surface. This paper describes a more generic parametric evaluator approach, which does not rely on a particular surface type internal to the grid generation system and is less restrictive in the number of surface types that can be represented exactly. This approach has been implemented in the McDonnell Douglas grid generation system, MACGS, and offers direct access to all types of surfaces from a Unigraphics part file.
NASA Astrophysics Data System (ADS)
Scherzinger, J.; Al Jebali, R.; Annand, J. R. M.; Fissum, K. G.; Hall-Wilton, R.; Kanaki, K.; Lundin, M.; Nilsson, B.; Perrey, H.; Rosborg, A.; Svensson, H.
2016-12-01
The response of a NE-213 liquid-scintillator detector has been measured using tagged neutrons from 2 to 6 MeV originating from an Am/Be neutron source. The neutron energies were determined using the time-of-flight technique. Pulse-shape discrimination was employed to discern between gamma-rays and neutrons. The behavior of both the fast (35 ns) and the combined fast and slow (475 ns) components of the neutron scintillation-light pulses was studied. Three different prescriptions were used to relate the neutron maximum energy-transfer edges to the corresponding recoil-proton scintillation-light yields, and the results were compared to simulations. The overall normalizations of parametrizations which predict the fast or total light yield of the scintillation pulses were also tested. Our results agree with both existing data and existing parametrizations. We observe a clear sensitivity to the portion and length of the neutron scintillation-light pulse considered.
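For context, recoil-proton light yield in organic scintillators is often parametrized with a saturating functional form such as the one sketched below. The coefficients shown are placeholder values of the kind quoted in common literature fits, not the parametrizations tested in this work.

```python
# A common saturating parametrization of recoil-proton light output.
import numpy as np

def light_yield(E_p, a=0.83, b=2.82, c=0.25, d=0.93):
    """Light output (MeVee) for recoil protons of energy E_p (MeV),
    using the saturating form L = a*E - b*(1 - exp(-c*E**d)).
    Coefficient values are placeholders, not fitted results."""
    return a * E_p - b * (1.0 - np.exp(-c * E_p**d))

for E in (2.0, 4.0, 6.0):   # the tagged-neutron energy range studied
    print(E, round(light_yield(E), 3))
```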
Kim, D.; Burge, J.; Lane, T.; Pearlson, G. D; Kiehl, K. A; Calhoun, V. D.
2008-01-01
We utilized a discrete dynamic Bayesian network (dDBN) approach (Burge et al., 2007) to determine differences in brain regions between patients with schizophrenia and healthy controls on a measure of effective connectivity, termed the approximate conditional likelihood score (ACL) (Burge and Lane, 2005). The ACL score represents a class-discriminative measure of effective connectivity by measuring the relative likelihood of the correlation between brain regions in one group versus another. The algorithm is capable of finding non-linear relationships between brain regions because it uses discrete rather than continuous values and attempts to model temporal relationships with a first-order Markov and stationary assumption constraint (Papoulis, 1991). Since Bayesian networks are overly sensitive to noisy data, we introduced an independent component analysis (ICA) filtering approach that attempted to reduce the noise found in fMRI data by unmixing the raw datasets into a set of independent spatial component maps. Components that represented noise were removed and the remaining components reconstructed into the dimensions of the original fMRI datasets. We applied the dDBN algorithm to a group of 35 patients with schizophrenia and 35 matched healthy controls using an ICA filtered and unfiltered approach. We determined that filtering the data significantly improved the magnitude of the ACL score. Patients showed the greatest ACL scores in several regions, most markedly the cerebellar vermis and hemispheres. Our findings suggest that schizophrenia patients exhibit weaker connectivity than healthy controls in multiple regions, including bilateral temporal and frontal cortices, plus cerebellum during an auditory paradigm. PMID:18602482
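A minimal sketch of the ICA filtering step described above, assuming scikit-learn's FastICA: unmix the data, zero out components judged to be noise, and reconstruct in the original dimensions. The component count and noise-component indices are illustrative assumptions; in practice noise components are identified by inspection or classification.

```python
# ICA denoising: unmix, remove noise components, reconstruct.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
data = rng.standard_normal((200, 1000))   # stand-in for (time, voxels) fMRI

ica = FastICA(n_components=20, random_state=0)
sources = ica.fit_transform(data)         # temporal components
noise_idx = [3, 7]                        # components flagged as noise (assumed)
sources[:, noise_idx] = 0.0
cleaned = ica.inverse_transform(sources)  # back to the original dimensions
```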
Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu
2015-02-12
The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to the limited signal-to-noise ratio (SNR) of PET measurements and the high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate the log-likelihood function with respect to the kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least square (WNLS) method was employed. The proposed multi-tracer DPIR (MT-DPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimations.
From cosmic deceleration to acceleration: new constraints from SN Ia and BAO/CMB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giostri, R.; Santos, M. Vargas dos; Waga, I.
2012-03-01
We use type Ia supernovae (SN Ia) data in combination with recent baryonic acoustic oscillations (BAO) and cosmic microwave background (CMB) observations to constrain a kink-like parametrization of the deceleration parameter (q). This q-parametrization can be written in terms of the initial (q_i) and present (q_0) values of the deceleration parameter, the redshift of the cosmic transition from deceleration to acceleration (z_t) and the redshift width of such transition (τ). By assuming a flat space geometry, q_i = 1/2, and adopting a likelihood approach to deal with the SN Ia data, we obtain, at the 68% confidence level (C.L.), that: z_t = 0.56 (+0.13, −0.10), τ = 0.47 (+0.16, −0.20) and q_0 = −0.31 (+0.11, −0.11) when we combine BAO/CMB observations with SN Ia data processed with the MLCS2k2 light-curve fitter. When in this combination we use the SALT2 fitter we get instead, at the same C.L.: z_t = 0.64 (+0.13, −0.07), τ = 0.36 (+0.11, −0.17) and q_0 = −0.53 (+0.17, −0.13). Our results indicate, with a quite general and model-independent approach, that MLCS2k2 favors Dvali-Gabadadze-Porrati-like cosmological models, while SALT2 favors ΛCDM-like ones. Progress in determining the transition redshift and/or the present value of the deceleration parameter depends crucially on resolving the discrepancy between results obtained with these two light-curve fitters.
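To illustrate how a deceleration-parameter parametrization feeds into a distance-based likelihood, the sketch below integrates q(z) to a dimensionless luminosity distance. The tanh transition used here is a smooth stand-in, not necessarily the exact kink form of the paper, and all parameter values are placeholders.

```python
# Hedged sketch: from q(z) to a dimensionless luminosity distance, which
# could then feed a SN Ia chi-square. The tanh form is illustrative only.
import numpy as np
from scipy.integrate import cumulative_trapezoid

def q_of_z(z, q0=-0.5, qi=0.5, zt=0.6, tau=0.4):
    """Smoothly interpolate from q0 today to qi at high redshift."""
    return q0 + 0.5 * (qi - q0) * (1.0 + np.tanh((z - zt) / tau))

def dimensionless_dL(zgrid, **pars):
    """H0*dL/c, using d ln E/dz = (1 + q)/(1 + z) with E = H/H0."""
    lnE = cumulative_trapezoid((1 + q_of_z(zgrid, **pars)) / (1 + zgrid),
                               zgrid, initial=0.0)
    invE = np.exp(-lnE)
    comoving = cumulative_trapezoid(invE, zgrid, initial=0.0)
    return (1 + zgrid) * comoving

z = np.linspace(0.0, 1.5, 301)
print(dimensionless_dL(z)[-1])   # distance at z = 1.5 for the default q(z)
```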
NASA Astrophysics Data System (ADS)
Oh, Jungsu S.; Kim, Jae Seung; Chae, Sun Young; Oh, Minyoung; Oh, Seung Jun; Cha, Seung Nam; Chang, Ho-Jong; Lee, Chong Sik; Lee, Jae Hong
2017-03-01
We present an optimized voxelwise statistical parametric mapping (SPM) of partial-volume (PV)-corrected positron emission tomography (PET) of 11C Pittsburgh Compound B (PiB), incorporating the anatomical precision of magnetic resonance imaging (MRI) and the amyloid β (Aβ) burden-specificity of PiB PET. First, we applied region-based partial-volume correction (PVC), termed the geometric transfer matrix (GTM) method, to PiB PET, creating MRI-based lobar parcels filled with mean PiB uptakes. Then, we conducted a voxelwise PVC by multiplying the original PET by the ratio of a GTM-based PV-corrected PET to a 6-mm-smoothed PV-corrected PET. Finally, we conducted spatial normalizations of the PV-corrected PETs onto the study-specific template. As such, we increased the accuracy of the SPM normalization and the tissue specificity of SPM results. Moreover, lobar smoothing (instead of whole-brain smoothing) was applied to increase the signal-to-noise ratio in the image without degrading the tissue specificity. Thereby, we could optimize a voxelwise group comparison between subjects with high and normal Aβ burdens (from 10 patients with Alzheimer's disease, 30 patients with Lewy body dementia, and 9 normal controls). Our SPM framework outperformed the conventional one in terms of the accuracy of the spatial normalization (85% of maximum likelihood tissue classification volume) and the tissue specificity (larger gray matter and smaller cerebrospinal fluid volume fraction in the SPM results). Our SPM framework optimized the SPM of a PV-corrected Aβ PET in terms of anatomical precision, normalization accuracy, and tissue specificity, resulting in better detection and localization of Aβ burdens in patients with Alzheimer's disease and Lewy body dementia.
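The voxelwise correction step lends itself to a compact sketch: multiply the original PET by the ratio of the GTM-based PV-corrected image to its smoothed version. Array shapes, the smoothing FWHM, and the stand-in volumes below are assumptions for illustration.

```python
# Voxelwise PVC as PET * (GTM-PVC / smooth(GTM-PVC)), per the description.
import numpy as np
from scipy.ndimage import gaussian_filter

def voxelwise_pvc(pet, gtm_pvc, fwhm_mm=6.0, voxel_mm=2.0, eps=1e-6):
    """Apply the ratio correction voxel by voxel."""
    sigma = fwhm_mm / (2.355 * voxel_mm)     # FWHM -> sigma in voxels
    smoothed = gaussian_filter(gtm_pvc, sigma)
    return pet * gtm_pvc / np.maximum(smoothed, eps)

pet = np.random.rand(64, 64, 32)             # stand-in PiB PET volume
gtm = gaussian_filter(pet, 1.0) + 0.5        # stand-in GTM-filled parcels
corrected = voxelwise_pvc(pet, gtm)
```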
Exact Bayesian Inference for Phylogenetic Birth-Death Models.
Parag, K V; Pybus, O G
2018-04-26
Inferring the rates of change of a population from a reconstructed phylogeny of genetic sequences is a central problem in macro-evolutionary biology, epidemiology, and many other disciplines. A popular solution involves estimating the parameters of a birth-death process (BDP), which links the shape of the phylogeny to its birth and death rates. Modern BDP estimators rely on random Markov chain Monte Carlo (MCMC) sampling to infer these rates. Such methods, while powerful and scalable, cannot be guaranteed to converge, leading to results that may be hard to replicate or difficult to validate. We present a conceptually and computationally different parametric BDP inference approach using flexible and easy to implement Snyder filter (SF) algorithms. This method is deterministic so its results are provable, guaranteed, and reproducible. We validate the SF on constant rate BDPs and find that it solves BDP likelihoods known to produce robust estimates. We then examine more complex BDPs with time-varying rates. Our estimates compare well with a recently developed parametric MCMC inference method. Lastly, we perform model selection on an empirical Agamid species phylogeny, obtaining results consistent with the literature. The SF makes no approximations, beyond those required for parameter quantisation and numerical integration, and directly computes the posterior distribution of model parameters. It is a promising alternative inference algorithm that may serve either as a standalone Bayesian estimator or as a useful diagnostic reference for validating more involved MCMC strategies. The Snyder filter is implemented in Matlab and the time-varying BDP models are simulated in R. The source code and data are freely available at https://github.com/kpzoo/snyder-birth-death-code. kris.parag@zoo.ox.ac.uk. Supplementary material is available at Bioinformatics online.
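As background for the constant-rate validation case, the sketch below simulates one trajectory of a linear birth-death process with the Gillespie algorithm. Rates and the starting lineage count are illustrative; the Snyder filter itself is not reproduced here.

```python
# Gillespie simulation of a constant-rate linear birth-death process.
import numpy as np

def simulate_bdp(birth=1.0, death=0.5, n0=2, t_max=10.0, seed=0):
    """Return (times, lineage counts) of one BDP trajectory."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_max and n > 0:
        rate = (birth + death) * n               # total event rate
        t += rng.exponential(1.0 / rate)         # waiting time to next event
        n += 1 if rng.random() < birth / (birth + death) else -1
        times.append(t); counts.append(n)
    return np.array(times), np.array(counts)

times, counts = simulate_bdp()
print(counts[-1])   # lineage count at the end of the simulation
```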
Jensen, Frants H; Wahlberg, Magnus; Beedholm, Kristian; Johnson, Mark; de Soto, Natacha Aguilar; Madsen, Peter T
2015-05-01
Echolocating animals exercise an extensive control over the spectral and temporal properties of their biosonar signals to facilitate perception of their actively generated auditory scene when homing in on prey. The intensity and directionality of the biosonar beam defines the field of view of echolocating animals by affecting the acoustic detection range and angular coverage. However, the spatial relationship between an echolocating predator and its prey changes rapidly, resulting in different biosonar requirements throughout prey pursuit and capture. Here, we measured single-click beam patterns using a parametric fit procedure to test whether free-ranging Atlantic spotted dolphins (Stenella frontalis) modify their biosonar beam width. We recorded echolocation clicks using a linear array of receivers and estimated the beam width of individual clicks using a parametric spectral fit, cross-validated with well-established composite beam pattern estimates. The dolphins apparently increased the biosonar beam width, to a large degree without changing the signal frequency, when they approached the recording array. This is comparable to bats that also expand their field of view during prey capture, but achieve this by decreasing biosonar frequency. This behaviour may serve to decrease the risk that rapid escape movements of prey take them outside the biosonar beam of the predator. It is likely that shared sensory requirements have resulted in bats and toothed whales expanding their acoustic field of view at close range to increase the likelihood of successfully acquiring prey using echolocation, representing a case of convergent evolution of echolocation behaviour between these two taxa. © 2015. Published by The Company of Biologists Ltd.
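As an illustration of a parametric single-click beam fit, the sketch below models the normalized beam as a flat circular piston and fits the nondimensional parameter ka (and hence the beam width) to off-axis click levels. The piston model and the synthetic data are assumptions; the paper's fit procedure may differ in detail.

```python
# Fit a circular-piston beam pattern to off-axis click levels.
import numpy as np
from scipy.special import j1
from scipy.optimize import curve_fit

def piston_beam_db(theta, ka):
    """Piston beam pattern (dB) vs off-axis angle theta (rad), parameter k*a."""
    x = ka * np.sin(theta)
    x = np.where(np.abs(x) < 1e-9, 1e-9, x)      # avoid division by zero on axis
    p = 2.0 * j1(x) / x                          # normalized to 0 dB on axis
    return 20.0 * np.log10(np.abs(p))

# Synthetic off-axis levels from a ka = 15 beam, plus measurement noise.
angles = np.deg2rad(np.linspace(-25, 25, 15))
levels = piston_beam_db(angles, 15.0) + np.random.default_rng(2).normal(0, 0.5, 15)
(ka_fit,), _ = curve_fit(piston_beam_db, angles, levels, p0=[10.0])
print(f"fitted ka = {ka_fit:.1f}")   # larger ka means a narrower beam
```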
Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Kværna, Tormod; Harris, David B.; Dodge, Douglas A.
2016-04-01
Aftershock sequences following very large earthquakes present enormous challenges to near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase association algorithms and a significant deterioration in the quality of underlying fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams which are then scanned by a phase association algorithm to form event hypotheses. We consider the scenario where a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid search algorithm which may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove over half of the original detections which could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched field processing.
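A proof-of-concept sketch of the detection-pruning step follows: each detection whose arrival time matches a predicted arrival from a located aftershock, within a tolerance, is removed before re-running phase association. The data layout, station names, and tolerance are illustrative assumptions.

```python
# Prune detections explained by located aftershocks before re-association.
def prune_detections(detections, predicted_arrivals, tol_s=2.0):
    """detections: list of (station, time_s); predicted_arrivals: same
    layout, from located aftershocks. Returns the unexplained detections."""
    kept = []
    for sta, t in detections:
        explained = any(sta == psta and abs(t - pt) <= tol_s
                        for psta, pt in predicted_arrivals)
        if not explained:
            kept.append((sta, t))
    return kept

dets = [("ARCES", 100.0), ("ARCES", 250.0), ("NOA", 101.5)]
preds = [("ARCES", 99.2), ("NOA", 100.8)]
print(prune_detections(dets, preds))   # only the 250.0 s detection remains
```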
Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin
2015-11-01
In environmental studies, concentration measurements frequently fall below detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistics (rROS), and gamma regression on order statistics (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low skewed data, the performance of different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable; particularly, when the sample size is small or the censoring percentage is high. In such conditions, MLE under gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Related to model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of data is misspecified. However, the methods of rROS, GROS, and MLE under gamma distribution are generally robust to model misspecifications regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on gamma distribution, rROS and GROS. Copyright © 2015 Elsevier Ltd. All rights reserved.
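To make the censored-MLE idea concrete, the sketch below fits a lognormal model to left-censored data: detected values contribute the log-density, and non-detects contribute the log-CDF at the detection limit. The synthetic data and the single shared detection limit are simplifying assumptions.

```python
# MLE for left-censored lognormal data with a common detection limit.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.0, sigma=1.0, size=100)   # true concentrations
dl = 0.5                                           # detection limit, assumed
detected = x >= dl                                 # non-detects are censored

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                      # keep sigma positive
    ll = stats.norm.logpdf(np.log(x[detected]), mu, sigma).sum()
    n_cens = np.count_nonzero(~detected)
    ll += n_cens * stats.norm.logcdf(np.log(dl), mu, sigma)
    return -ll

res = optimize.minimize(neg_log_lik, x0=[0.0, 0.0])
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)   # should be near the true (0.0, 1.0)
```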
Vigan, Marie; Stirnemann, Jérôme; Mentré, France
2014-05-01
Analysis of repeated time-to-event data is increasingly performed in pharmacometrics using parametric frailty models. The aims of this simulation study were (1) to assess the estimation performance of the Stochastic Approximation Expectation Maximization (SAEM) algorithm in MONOLIX and the Adaptive Gaussian Quadrature (AGQ) and Laplace algorithms in PROC NLMIXED of SAS, and (2) to evaluate the properties of tests of a dichotomous covariate on the occurrence of events. The simulation setting is inspired by an analysis of the occurrence of bone events after the initiation of treatment by imiglucerase in patients with Gaucher Disease (GD). We simulated repeated events with an exponential model and various dropout rates: no, low, or high. Several values of the baseline hazard model, variability, number of subjects, and effect of covariate were studied. For each scenario, 100 datasets were simulated for estimation performance and 500 for test performance. We evaluated estimation performance through relative bias and relative root mean square error (RRMSE). We studied the properties of the Wald and likelihood ratio tests (LRT). We used these methods to analyze the occurrence of bone events in patients with GD after starting an enzyme replacement therapy. SAEM with three chains and AGQ provided good parameter estimates, much better than SAEM with one chain and Laplace, which often provided poor estimates. Despite a small number of repeated events, SAEM with three chains and AGQ gave small biases and RRMSE. Type I errors were close to 5%, and power varied as expected for SAEM with three chains and AGQ. The probability of having at least one event under treatment was 19.1%.
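A compact sketch of a simulation setting of this kind appears below: repeated events per subject from a constant-hazard (exponential) model with a lognormal frailty and a dichotomous covariate. All parameter values are illustrative assumptions, not those of the study.

```python
# Simulate repeated events under an exponential frailty model.
import numpy as np

def simulate_repeated_events(n_subj=100, base_hazard=0.1, beta=0.6,
                             frailty_sd=0.5, follow_up=10.0, seed=4):
    rng = np.random.default_rng(seed)
    data = []
    for i in range(n_subj):
        covariate = i % 2                       # dichotomous covariate
        eta = rng.normal(0.0, frailty_sd)       # subject-level frailty
        hazard = base_hazard * np.exp(beta * covariate + eta)
        t = rng.exponential(1.0 / hazard)
        while t < follow_up:                    # record repeated events
            data.append((i, covariate, t))
            t += rng.exponential(1.0 / hazard)
    return data

events = simulate_repeated_events()
print(len(events), "events among 100 subjects")
```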
A review of parametric approaches specific to aerodynamic design process
NASA Astrophysics Data System (ADS)
Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li
2018-04-01
Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches offer a large design space with few variables. Commonly used parametric methods are summarized in this paper, and their principles are introduced briefly. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function Transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which are widely applied in airfoil design. This survey compares their capabilities in airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is more limited; the most popular one is the Free-form Deformation method. Methods extended from two-dimensional parametric approaches have promising prospects in aircraft modeling. Since different parametric methods differ in their characteristics, a real design process requires a flexible choice among them to suit the subsequent optimization procedure.
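Of the two-dimensional methods surveyed, the Class/Shape function Transformation (CST) is compact enough to sketch: a class function fixes the round nose and sharp trailing edge, and Bernstein-weighted shape coefficients deform it. The weights and exponents below are illustrative choices, not values from the survey.

```python
# CST airfoil parametrization: class function times Bernstein shape function.
import numpy as np
from math import comb

def cst_surface(x, weights, n1=0.5, n2=1.0, te_thickness=0.0):
    """Upper- or lower-surface ordinates for chordwise stations x in [0, 1].
    n1 = 0.5, n2 = 1.0 gives a round-nose, sharp-trailing-edge airfoil."""
    n = len(weights) - 1
    class_fn = x**n1 * (1.0 - x)**n2
    shape_fn = sum(w * comb(n, k) * x**k * (1.0 - x)**(n - k)
                   for k, w in enumerate(weights))
    return class_fn * shape_fn + x * te_thickness

x = np.linspace(0.0, 1.0, 101)
upper = cst_surface(x, [0.17, 0.16, 0.15, 0.14])      # illustrative weights
lower = cst_surface(x, [-0.14, -0.12, -0.10, -0.08])
```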