Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano
2015-01-01
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
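The ML-versus-SMM comparison can be illustrated on a toy problem (this is a hedged sketch, not the authors' education model): recover the rate of an exponential distribution by ML in closed form, and by SMM via matching a simulated first moment over a grid of candidate rates. The grid, sample sizes, and seeds are illustrative assumptions.

```python
import random

random.seed(0)

# Toy model: X ~ Exponential(rate). True rate to recover:
TRUE_RATE = 2.0
data = [random.expovariate(TRUE_RATE) for _ in range(5000)]

# --- Maximum likelihood: for the exponential, the MLE is 1 / sample mean.
mle = 1.0 / (sum(data) / len(data))

# --- Simulated method of moments: pick the rate whose *simulated* sample
# mean best matches the observed sample mean (grid search with common
# random numbers across candidates, a typical SMM tuning choice).
target = sum(data) / len(data)

def simulated_moment(rate, n=5000, seed=1):
    rng = random.Random(seed)            # same draws for every candidate rate
    return sum(rng.expovariate(rate) for _ in range(n)) / n

grid = [1.0 + 0.01 * k for k in range(200)]   # candidate rates 1.00 .. 2.99
smm = min(grid, key=lambda r: (simulated_moment(r) - target) ** 2)

print(round(mle, 2), round(smm, 2))      # both should be close to 2.0
```

Note how the SMM answer depends on tuning parameters (grid resolution, number of simulated draws), which is exactly the sensitivity the abstract investigates.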
Kubo, Taichi
2008-02-01
We have measured the top quark mass with the dynamical likelihood method. The data, corresponding to an integrated luminosity of 1.7 fb^{-1}, were collected in proton-antiproton collisions at a center-of-mass energy of 1.96 TeV with the CDF detector at the Fermilab Tevatron during the period March 2002-March 2007. We select $t\bar{t}$ pair production candidates by requiring one high-energy lepton and four jets, at least one of which must be tagged as a b-jet. To reconstruct the top quark mass, we use the dynamical likelihood method, based on the maximum likelihood method, in which the likelihood is defined as the differential cross section multiplied by the transfer function from observed quantities to parton quantities, as a function of the top quark mass and the jet energy scale (JES). With this method, we measure the top quark mass to be 171.6 ± 2.0 (stat. + JES) ± 1.3 (syst.) = 171.6 ± 2.4 GeV/c^{2}.
Yorita, Kohei
2005-03-01
We have measured the top quark mass with the dynamical likelihood method (DLM) using the CDF II detector at the Fermilab Tevatron. The Tevatron produces top and anti-top pairs in pp collisions at a center of mass energy of 1.96 TeV. The data sample used in this paper was accumulated from March 2002 through August 2003 which corresponds to an integrated luminosity of 162 pb^{-1}.
Synthesizing Regression Results: A Factored Likelihood Method
ERIC Educational Resources Information Center
Wu, Meng-Jia; Becker, Betsy Jane
2013-01-01
Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…
Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.
ERIC Educational Resources Information Center
Butler, Ronald W.
The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions
NASA Astrophysics Data System (ADS)
Barrett, Harrison H.; Dainty, Christopher; Lara, David
2007-02-01
Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack-Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack-Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods.
Abulencia, A.; Acosta, D.; Adelman, Jahred A.; Affolder, Anthony A.; Akimoto, T.; Albrow, M.G.; Ambrose, D.; Amerio, S.; Amidei, D.; Anastassov, A.; Anikeev, K.; /Taiwan, Inst. Phys. /Argonne /Barcelona, IFAE /Baylor U. /INFN, Bologna /Bologna U. /Brandeis U. /UC, Davis /UCLA /UC, San Diego /UC, Santa Barbara
2005-12-01
This report describes a measurement of the top quark mass, M{sub top}, with the dynamical likelihood method (DLM) using the CDF II detector at the Fermilab Tevatron. The Tevatron produces top/anti-top (t{bar t}) pairs in p{bar p} collisions at a center-of-mass energy of 1.96 TeV. The data sample used in this analysis was accumulated from March 2002 through August 2004, which corresponds to an integrated luminosity of 318 pb{sup -1}. They use the t{bar t} candidates in the ''lepton+jets'' decay channel, requiring at least one jet identified as a b quark by finding a displaced secondary vertex. The DLM defines a likelihood for each event based on the differential cross section as a function of M{sub top} per unit phase space volume of the final partons, multiplied by the transfer functions from jet to parton energies. The method takes into account all possible jet combinations in an event, and the likelihoods are multiplied event by event to derive the top quark mass by the maximum likelihood method. Using 63 t{bar t} candidates observed in the data, with 9.2 events expected from background, they measure the top quark mass to be 173.2{sub -2.4}{sup +2.6}(stat.) {+-} 3.2(syst.) GeV/c{sup 2}, or 173.2{sub -4.0}{sup +4.1} GeV/c{sup 2}.
Measuring coherence of computer-assisted likelihood ratio methods.
Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H
2015-04-01
Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used.
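A widely used summary of both discrimination and calibration for LR methods is the log-likelihood-ratio cost (Cllr). The abstract does not specify its exact metrics, so the following is only a minimal sketch with hypothetical LR values: 0 would mean perfect LRs and 1 means the method is no better than always reporting LR = 1.

```python
import math

# Hypothetical LR values produced by a method on ground-truth-labelled data:
same_source_lrs = [20.0, 55.0, 3.0, 140.0, 8.0]   # should be large
diff_source_lrs = [0.05, 0.3, 0.01, 0.6, 0.002]   # should be small

def cllr(ss, ds):
    """Log-likelihood-ratio cost: 0 for perfect LRs, 1 for uninformative LRs."""
    pen_ss = sum(math.log2(1 + 1.0 / lr) for lr in ss) / len(ss)  # penalize small same-source LRs
    pen_ds = sum(math.log2(1 + lr) for lr in ds) / len(ds)        # penalize large diff-source LRs
    return 0.5 * (pen_ss + pen_ds)

print(round(cllr(same_source_lrs, diff_source_lrs), 3))
```

Computing Cllr separately per quality or quantity stratum of the trace specimens gives a simple handle on the coherence question the abstract raises.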
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
ERIC Educational Resources Information Center
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics
NASA Astrophysics Data System (ADS)
Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc
2016-03-01
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient, with low variance that remains constant in time, and are consequently suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high-dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
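The centering idea can be sketched on a static one-dimensional toy model (the paper treats stochastic dynamics; this exponential example, with assumed rate and sample size, only illustrates how subtracting sample means from the observable and the score yields a covariance-form likelihood-ratio estimator):

```python
import random

random.seed(42)
LAM = 2.0
N = 200_000
xs = [random.expovariate(LAM) for _ in range(N)]

# Observable f(X) = X, so E[f] = 1/LAM, and the true sensitivity is
#   d E[f] / d LAM = -1/LAM**2 = -0.25.
f = xs
# Score of the exponential density p(x) = LAM * exp(-LAM * x):
#   d/dLAM log p(x) = 1/LAM - x
score = [1.0 / LAM - x for x in xs]

# Plain likelihood-ratio estimator: sample mean of f * score.
plain = sum(fi * si for fi, si in zip(f, score)) / N

# Centered estimator: sample covariance of f and score. Subtracting the
# means removes a zero-mean term and typically lowers the variance.
fbar = sum(f) / N
sbar = sum(score) / N
centered = sum((fi - fbar) * (si - sbar) for fi, si in zip(f, score)) / N

print(round(plain, 3), round(centered, 3))   # both near -0.25
```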
Empirical likelihood method for non-ignorable missing data problems.
Guan, Zhong; Qin, Jing
2017-01-01
The missing response problem is ubiquitous in survey sampling, medical, social science, and epidemiology studies. It is well known that non-ignorable missingness, where the missingness of a response depends on its own value, is the most difficult missing data problem. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available apart from fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method, we obtain the constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of a real AIDS trial dataset shows that the missingness of CD4 counts around two years is non-ignorable and that the sample mean based on the observed data only is biased.
Evaluating maximum likelihood estimation methods to determine the hurst coefficients
NASA Astrophysics Data System (ADS)
Kendziorski, C. M.; Bassingthwaighte, J. B.; Tonellato, P. J.
1999-12-01
A maximum likelihood estimation method implemented in S-PLUS (S-MLE) to estimate the Hurst coefficient (H) is evaluated. The Hurst coefficient, with 0.5 < H < 1, characterizes long-memory time series by quantifying the rate of decay of the autocorrelation function. S-MLE was developed to estimate H for fractionally differenced (fd) processes. However, in practice it is difficult to distinguish between fd processes and fractional Gaussian noise (fGn) processes. Thus, the method is evaluated for estimating H for both fd and fGn processes. S-MLE gave biased results of H for fGn processes of any length and for fd processes of lengths less than 2^10. A modified method is proposed to correct for this bias. It gives reliable estimates of H for both fd and fGn processes of length greater than or equal to 2^11.
Error detection for genetic data, using likelihood methods
Ehm, M.G.; Kimmel, M.; Cottingham, R.W. Jr.
1996-01-01
As genetic maps become denser, the effect of laboratory typing errors becomes more serious. We review a general method for detecting errors in pedigree genotyping data that is a variant of the likelihood-ratio test statistic. It pinpoints individuals and loci with relatively unlikely genotypes. Power and significance studies using Monte Carlo methods are shown by using simulated data with pedigree structures similar to the CEPH pedigrees and a larger experimental pedigree used in the study of idiopathic dilated cardiomyopathy (DCM). The studies show the index detects errors for small values of θ with high power and an acceptable false positive rate. The method was also used to check for errors in DCM laboratory pedigree data and to estimate the error rate in CEPH chromosome 6 data. The errors flagged by our method in the DCM pedigree were confirmed by the laboratory. The results are consistent with estimated false-positive and false-negative rates obtained using simulation. 21 refs., 5 figs., 2 tabs.
Constrained maximum likelihood modal parameter identification applied to structural dynamics
NASA Astrophysics Data System (ADS)
El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim
2016-05-01
A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix, and therefore the residue matrices, are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped. Therefore, normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that enables us to establish modal models satisfying such physically motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.
Comparisons of likelihood and machine learning methods of individual classification
Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.
2002-01-01
“Assignment tests” are designed to determine population membership for individuals. One particular application based on a likelihood estimate (LE) was introduced by Paetkau et al. (1995; see also Vásquez-Domínguez et al. 2001) to assign an individual to the population of origin on the basis of its multilocus genotype and the expectations of observing this genotype in each potential source population. The LE approach can be implemented statistically in a Bayesian framework as a convenient way to evaluate hypotheses of plausible genealogical relationships (e.g., that an individual possesses an ancestor in another population) (Dawson and Belkhir 2001; Pritchard et al. 2000; Rannala and Mountain 1997). Other studies have evaluated the confidence of the assignment (Almudevar 2000) and characteristics of genotypic data (e.g., degree of population divergence, number of loci, number of individuals, number of alleles) that lead to greater population assignment (Bernatchez and Duchesne 2000; Cornuet et al. 1999; Haig et al. 1997; Shriver et al. 1997; Smouse and Chevillon 1998). The main statistical and conceptual differences between methods leading to the use of an assignment test are given in, for example, Cornuet et al. (1999) and Rosenberg et al. (2001). However…
Likelihood based observability analysis and confidence intervals for predictions of dynamic models
2012-01-01
Background: Predicting a system's behavior based on a mathematical model is a primary task in Systems Biology. If the model parameters are estimated from experimental data, the parameter uncertainty has to be translated into confidence intervals for model predictions. For dynamic models of biochemical networks, the nonlinearity in combination with the large number of parameters hampers the calculation of prediction confidence intervals and renders classical approaches hardly feasible. Results: In this article, reliable confidence intervals are calculated based on the prediction profile likelihood. Such prediction confidence intervals of the dynamic states can be utilized for a data-based observability analysis. The method is also applicable if there are non-identifiable parameters, yielding insufficiently specified model predictions that can be interpreted as non-observability. Moreover, a validation profile likelihood is introduced that should be applied when noisy validation experiments are to be interpreted. Conclusions: The presented methodology allows the propagation of uncertainty from experimental data to model predictions. Although presented in the context of ordinary differential equations, the concept is general and also applicable to other types of models. Matlab code which can be used as a template to implement the method is provided at http://www.fdmold.uni-freiburg.de/∼ckreutz/PPL. PMID:22947028
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan
2016-02-05
Evaluating the marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
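Thermodynamic integration can be sketched on a conjugate toy problem where the marginal likelihood is known in closed form (one observation, normal prior and likelihood). The identity used is log Z = ∫₀¹ E_t[log L] dt, where E_t is the expectation under the power posterior prior × likelihoodᵗ; the temperature ladder and Metropolis settings below are illustrative assumptions.

```python
import random, math

random.seed(7)
y = 1.0                                   # single observed data point

def log_prior(th): return -0.5 * th * th - 0.5 * math.log(2 * math.pi)   # N(0,1)
def log_lik(th):   return -0.5 * (y - th) ** 2 - 0.5 * math.log(2 * math.pi)  # N(th,1)

def mcmc_mean_loglik(t, n=20000, step=1.0):
    """Metropolis sampling of the power posterior prior * likelihood**t,
    returning the sample average of log-likelihood at temperature t."""
    th, acc = 0.0, []
    for _ in range(n):
        prop = th + random.gauss(0, step)
        a = (log_prior(prop) + t * log_lik(prop)) - (log_prior(th) + t * log_lik(th))
        if math.log(random.random() + 1e-300) < a:
            th = prop
        acc.append(log_lik(th))
    burn = n // 5
    return sum(acc[burn:]) / (n - burn)

# Temperature ladder concentrated near t=0, then trapezoidal integration:
ts = [(k / 10) ** 3 for k in range(11)]
es = [mcmc_mean_loglik(t) for t in ts]
log_z = sum((ts[i + 1] - ts[i]) * (es[i + 1] + es[i]) / 2 for i in range(10))

exact = -0.5 * math.log(2 * math.pi * 2.0) - y * y / 4.0   # marginal: y ~ N(0, 2)
print(round(log_z, 2), round(exact, 2))
```

The cubed ladder samples temperatures densely near the prior, where E_t[log L] changes fastest, which is the usual motivation for non-uniform power schedules.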
NASA Technical Reports Server (NTRS)
Murphy, P. C.
1984-01-01
An algorithm for maximum likelihood (ML) estimation is developed primarily for multivariable dynamic systems. The algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). The method determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared with integrating the analytically determined sensitivity equations or using a finite-difference method. Different surface-fitting methods are discussed and demonstrated. Aircraft estimation problems are solved by using both simulated and real-flight data to compare MNRES with commonly used methods; in these solutions MNRES is found to be equally accurate and substantially faster. MNRES eliminates the need to derive sensitivity equations, thus producing a more generally applicable algorithm.
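The surface-fitting details of MNRES are specific to the report; as a hedged stand-in, the sketch below runs a damped Gauss-Newton fit of an exponential-decay model in which the sensitivities come from finite differences rather than analytically derived sensitivity equations. The model, data, and damping constant are assumptions for illustration.

```python
import math

# Synthetic decay data from true parameters a=2, b=0.5 (noise-free for clarity).
ts = [0.5 * k for k in range(10)]
ys = [2.0 * math.exp(-0.5 * t) for t in ts]

def model(p, t):  return p[0] * math.exp(-p[1] * t)
def residuals(p): return [y - model(p, t) for t, y in zip(ts, ys)]

def fd_jacobian(p, h=1e-6):
    """Sensitivities d(model)/d(param) by finite differences -- a simple
    stand-in for estimated (rather than analytically derived) sensitivities."""
    J = []
    for t in ts:
        row = []
        for j in range(len(p)):
            q = list(p); q[j] += h
            row.append((model(q, t) - model(p, t)) / h)
        J.append(row)
    return J

MU = 0.5   # Levenberg-style damping to guard against early overshoot

def gauss_newton(p, iters=100):
    for _ in range(iters):
        J, r = fd_jacobian(p), residuals(p)
        # Solve the damped 2x2 normal equations (J^T J + MU*I) dp = J^T r.
        a11 = sum(row[0] * row[0] for row in J) + MU
        a12 = sum(row[0] * row[1] for row in J)
        a22 = sum(row[1] * row[1] for row in J) + MU
        b1  = sum(row[0] * ri for row, ri in zip(J, r))
        b2  = sum(row[1] * ri for row, ri in zip(J, r))
        det = a11 * a22 - a12 * a12
        p = [p[0] + (a22 * b1 - a12 * b2) / det,
             p[1] + (a11 * b2 - a12 * b1) / det]
    return p

a_hat, b_hat = gauss_newton([1.0, 1.0])
print(round(a_hat, 3), round(b_hat, 3))   # recovers a ≈ 2, b ≈ 0.5
```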
A composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews
Liu, Yulun; Ning, Jing; Nie, Lei; Zhu, Hongjian; Chu, Haitao
2014-01-01
Diagnostic systematic review is a vital step in the evaluation of diagnostic technologies. In many applications, it involves pooling pairs of sensitivity and specificity of a dichotomized diagnostic test from multiple studies. We propose a composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews. This method provides an alternative way to make inference on diagnostic measures such as sensitivity, specificity, likelihood ratios and the diagnostic odds ratio. Its main advantages over the standard likelihood method are the avoidance of the non-convergence problem, which is non-trivial when the number of studies is relatively small, its computational simplicity, and some robustness to model mis-specification. Simulation studies show that the composite likelihood method maintains high relative efficiency compared to the standard likelihood method. We illustrate our method in a diagnostic review of the performance of contemporary diagnostic imaging technologies for detecting metastases in patients with melanoma. PMID:25512146
An Empirical Likelihood Method for Semiparametric Linear Regression with Right Censored Data
Fang, Kai-Tai; Li, Gang; Lu, Xuyang; Qin, Hong
2013-01-01
This paper develops a new empirical likelihood method for semiparametric linear regression with a completely unknown error distribution and right censored survival data. The method is based on the Buckley-James (1979) estimating equation. It inherits some appealing properties of the complete data empirical likelihood method. For example, it does not require variance estimation which is problematic for the Buckley-James estimator. We also extend our method to incorporate auxiliary information. We compare our method with the synthetic data empirical likelihood of Li and Wang (2003) using simulations. We also illustrate our method using Stanford heart transplantation data. PMID:23573169
Maximum-Likelihood Adaptive Filter for Partially Observed Boolean Dynamical Systems
NASA Astrophysics Data System (ADS)
Imani, Mahdi; Braga-Neto, Ulisses M.
2017-01-01
Partially-observed Boolean dynamical systems (POBDS) are a general class of nonlinear models with application in estimation and control of Boolean processes based on noisy and incomplete measurements. The optimal minimum mean square error (MMSE) algorithms for POBDS state estimation, namely, the Boolean Kalman filter (BKF) and Boolean Kalman smoother (BKS), are intractable in the case of large systems, due to computational and memory requirements. To address this, we propose approximate MMSE filtering and smoothing algorithms based on the auxiliary particle filter (APF) method from sequential Monte-Carlo theory. These algorithms are used jointly with maximum-likelihood (ML) methods for simultaneous state and parameter estimation in POBDS models. In the presence of continuous parameters, ML estimation is performed using the expectation-maximization (EM) algorithm; we develop for this purpose a special smoother which reduces the computational complexity of the EM algorithm. The resulting particle-based adaptive filter is applied to a POBDS model of Boolean gene regulatory networks observed through noisy RNA-Seq time series data, and performance is assessed through a series of numerical experiments using the well-known cell cycle gene regulatory model.
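A one-variable analogue of optimal MMSE filtering for a Boolean state can make the principle concrete (the full BKF operates on vectors of Boolean variables; this binary-state sketch with assumed noise levels only illustrates the threshold-the-posterior idea):

```python
import random

random.seed(3)
P_FLIP, P_ERR = 0.1, 0.2     # state transition noise and observation noise

# Simulate a binary hidden state and noisy observations of it.
T = 200
states, obs = [], []
s = 0
for _ in range(T):
    if random.random() < P_FLIP: s = 1 - s
    states.append(s)
    obs.append(s if random.random() > P_ERR else 1 - s)

# Exact forward (filtering) recursion for P(state = 1 | observations so far).
# The MMSE estimate of a Boolean state is this posterior thresholded at 1/2.
post = 0.5
estimates = []
for y in obs:
    # predict: push the posterior through the flip channel
    pred = post * (1 - P_FLIP) + (1 - post) * P_FLIP
    # update: weight by the observation likelihoods and renormalize
    like1 = (1 - P_ERR) if y == 1 else P_ERR
    like0 = P_ERR if y == 1 else (1 - P_ERR)
    post = like1 * pred / (like1 * pred + like0 * (1 - pred))
    estimates.append(1 if post >= 0.5 else 0)

accuracy = sum(e == s for e, s in zip(estimates, states)) / T
print(round(accuracy, 2))
```

For large Boolean networks this exact recursion over all 2^n states becomes intractable, which is the motivation for the particle-filter approximations in the abstract.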
ERIC Educational Resources Information Center
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.
2016-01-01
The aim of this study is to determine the difference in variance between the maximum likelihood and expected a posteriori estimation methods as a function of the number of items in an aptitude test. The variance reflects the accuracy achieved by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…
2015-08-01
Modified Maximum Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model. When data are completely separated or quasi-completely separated, the traditional maximum likelihood estimation (MLE) method generates infinite estimates. The bias-reduction (BR) method…
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Lele, Subhash R; Dennis, Brian; Lutscher, Frithjof
2007-07-01
We introduce a new statistical computing method, called data cloning, to calculate maximum likelihood estimates and their standard errors for complex ecological models. Although the method uses the Bayesian framework and exploits the computational simplicity of the Markov chain Monte Carlo (MCMC) algorithms, it provides valid frequentist inferences such as the maximum likelihood estimates and their standard errors. The inferences are completely invariant to the choice of the prior distributions and therefore avoid the inherent subjectivity of the Bayesian approach. The data cloning method is easily implemented using standard MCMC software. Data cloning is particularly useful for analysing ecological situations in which hierarchical statistical models, such as state-space models and mixed effects models, are appropriate. We illustrate the method by fitting two nonlinear population dynamics models to data in the presence of process and observation noise.
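The mechanics of data cloning can be seen in a conjugate normal example, where the cloned posterior is available in closed form (no MCMC needed for this sketch; real applications would use standard MCMC software as the abstract notes, and the data and prior here are illustrative assumptions):

```python
import random

random.seed(11)
data = [random.gauss(1.7, 1.0) for _ in range(25)]
n, ybar = len(data), sum(data) / len(data)   # MLE of the mean is ybar

# Conjugate model: y_i ~ N(mu, 1), prior mu ~ N(0, tau2). Cloning the data
# K times multiplies the likelihood's weight by K, so the posterior mean
# approaches the MLE and the posterior variance shrinks like 1/K,
# independently of the prior.
def cloned_posterior(K, tau2=100.0):
    prec = 1.0 / tau2 + K * n           # posterior precision
    mean = (K * n * ybar) / prec
    return mean, 1.0 / prec

for K in (1, 10, 100):
    m, v = cloned_posterior(K)
    print(K, round(m, 4), round(K * v, 4))  # K*v estimates the MLE's variance
```

As K grows, K times the posterior variance converges to 1/n, the frequentist variance of the MLE, which is how data cloning recovers standard errors from MCMC output.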
Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes
NASA Astrophysics Data System (ADS)
Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen
2016-06-01
Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique of LBS is map building for unknown environments, a technique also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV), with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan with the occupancy likelihood map learned from all previous scans at multiple scales, to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, the scan is unavoidably distorted to varying extents. To compensate for the scan distortion caused by the motion of the UGV, we propose to integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. Besides, to reduce as much as possible the effect of dynamic objects, such as the walking pedestrians often present in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.
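The increase-or-decrease map update the authors describe is in the spirit of classic log-odds occupancy grid updates; the following sketch uses assumed inverse-sensor values and clamping so that evidence left behind by moving objects can later be overturned:

```python
import math

def logit(p): return math.log(p / (1 - p))

# Inverse sensor model: how much one scan shifts a cell's log-odds.
L_OCC, L_FREE, L_MIN, L_MAX = logit(0.7), logit(0.3), -4.0, 4.0

def update_cell(l, hit):
    """Increase or decrease a grid cell's occupancy log-odds after a scan,
    clamping so that stale evidence about moving objects can be overturned."""
    l += L_OCC if hit else L_FREE
    return max(L_MIN, min(L_MAX, l))

def prob(l): return 1.0 / (1.0 + math.exp(-l))

# A cell occupied by a walking pedestrian: 3 hits, then 6 misses once they leave.
l = 0.0                               # log-odds 0  <=>  p = 0.5 (unknown)
for hit in [True, True, True] + [False] * 6:
    l = update_cell(l, hit)
print(round(prob(l), 3))              # evidence now favours "free"
```

Working in log-odds makes each scan's contribution additive, and the clamp bounds how much contradictory evidence is needed to flip a cell, which is what keeps transient pedestrians from permanently marking the map.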
ERIC Educational Resources Information Center
Samejima, Fumiko
1977-01-01
A method of estimating item characteristic functions is proposed, in which a set of test items, whose operating characteristics are known and which give a constant test information function for a wide range of ability, are used. The method is based on maximum likelihood estimation procedures. (Author/JKS)
A Maximum Likelihood Method for Latent Class Regression Involving a Censored Dependent Variable.
ERIC Educational Resources Information Center
Jedidi, Kamel; And Others
1993-01-01
A method is proposed to simultaneously estimate regression functions and subject membership in "k" latent classes or groups given a censored dependent variable for a cross-section of subjects. Maximum likelihood estimates are obtained using an EM algorithm. The method is illustrated through a consumer psychology application. (SLD)
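A simplified sketch of latent class estimation via EM, using a two-component normal mixture without the regression structure or censoring of the proposed method (component counts, variances, and starting values are illustrative assumptions):

```python
import random, math

random.seed(5)
# Two latent classes with different means; class labels are unobserved.
data = [random.gauss(0.0, 1.0) for _ in range(300)] + \
       [random.gauss(4.0, 1.0) for _ in range(300)]

def norm_pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

# EM for a 2-component normal mixture (unit variances, for brevity).
pi, mu = 0.5, [-1.0, 1.0]
for _ in range(50):
    # E-step: posterior probability that each point belongs to class 1.
    w = [pi * norm_pdf(x, mu[1]) /
         (pi * norm_pdf(x, mu[1]) + (1 - pi) * norm_pdf(x, mu[0]))
         for x in data]
    # M-step: re-estimate the mixing weight and the class means.
    pi = sum(w) / len(data)
    mu[1] = sum(wi * x for wi, x in zip(w, data)) / sum(w)
    mu[0] = sum((1 - wi) * x for wi, x in zip(w, data)) / sum(1 - wi for wi in w)

print(round(pi, 2), round(min(mu), 2), round(max(mu), 2))
```

Each EM iteration is guaranteed not to decrease the observed-data likelihood; the censored-regression version in the abstract replaces these closed-form M-steps with censored-likelihood maximizations but keeps the same E/M alternation.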
Huang, Jinxin; Clarkson, Eric; Kupinski, Matthew; Lee, Kye-Sung; Maki, Kara L; Ross, David S; Aquavella, James V; Rolland, Jannick P
2013-01-01
Understanding tear film dynamics is a prerequisite for advancing the management of Dry Eye Disease (DED). In this paper, we discuss the use of optical coherence tomography (OCT) and statistical decision theory to analyze the tear film dynamics of a digital phantom. We implement a maximum-likelihood (ML) estimator to interpret OCT data based on mathematical models of Fourier-domain OCT and the tear film. With the methodology of task-based assessment, we quantify the tradeoffs among key imaging system parameters. Assuming the broadband light source is characterized by circular Gaussian statistics, we find ML estimates of 40 nm ± 4 nm for an axial resolution of 1 μm and an integration time of 5 μs. Finally, the estimator is validated with a digital phantom of tear film dynamics, which reveals estimates of nanometer precision.
Evaluation of dynamic coastal response to sea-level rise modifies inundation likelihood
Lentz, Erika E.; Thieler, E. Robert; Plant, Nathaniel G.; Stippa, Sawyer R.; Horton, Radley M.; Gesch, Dean B.
2016-01-01
Sea-level rise (SLR) poses a range of threats to natural and built environments, making assessments of SLR-induced hazards essential for informed decision making. We develop a probabilistic model that evaluates the likelihood that an area will inundate (flood) or dynamically respond (adapt) to SLR. The broad-area applicability of the approach is demonstrated by producing 30 × 30 m resolution predictions for more than 38,000 km2 of diverse coastal landscape in the northeastern United States. Probabilistic SLR projections, coastal elevation and vertical land movement are used to estimate likely future inundation levels. Then, conditioned on future inundation levels and the current land-cover type, we evaluate the likelihood of dynamic response versus inundation. We find that nearly 70% of this coastal landscape has some capacity to respond dynamically to SLR, and we show that inundation models over-predict land likely to submerge. This approach is well suited to guiding coastal resource management decisions that weigh future SLR impacts and uncertainty against ecological targets and economic constraints.
Evaluation of Dynamic Coastal Response to Sea-level Rise Modifies Inundation Likelihood
NASA Technical Reports Server (NTRS)
Lentz, Erika E.; Thieler, E. Robert; Plant, Nathaniel G.; Stippa, Sawyer R.; Horton, Radley M.; Gesch, Dean B.
2016-01-01
Sea-level rise (SLR) poses a range of threats to natural and built environments, making assessments of SLR-induced hazards essential for informed decision making. We develop a probabilistic model that evaluates the likelihood that an area will inundate (flood) or dynamically respond (adapt) to SLR. The broad-area applicability of the approach is demonstrated by producing 30 × 30 m resolution predictions for more than 38,000 km2 of diverse coastal landscape in the northeastern United States. Probabilistic SLR projections, coastal elevation and vertical land movement are used to estimate likely future inundation levels. Then, conditioned on future inundation levels and the current land-cover type, we evaluate the likelihood of dynamic response versus inundation. We find that nearly 70% of this coastal landscape has some capacity to respond dynamically to SLR, and we show that inundation models over-predict land likely to submerge. This approach is well suited to guiding coastal resource management decisions that weigh future SLR impacts and uncertainty against ecological targets and economic constraints.
Xia, Xuhua
2016-09-01
While pairwise sequence alignment (PSA) by dynamic programming is guaranteed to generate one of the optimal alignments, multiple sequence alignment (MSA) of highly divergent sequences often results in poorly aligned sequences, plaguing all subsequent phylogenetic analysis. One way to avoid this problem is to use only PSA to reconstruct phylogenetic trees, which can only be done with distance-based methods. I compared the accuracy of this new computational approach (named PhyPA, for phylogenetics by pairwise alignment) against the maximum likelihood method using MSA (the ML+MSA approach), based on nucleotide, amino acid and codon sequences simulated with different topologies and tree lengths. I present the surprising discovery that the fast PhyPA method consistently outperforms the slow ML+MSA approach for highly diverged sequences, even when all optimization options were turned on for the ML+MSA approach. Only when sequences are not highly diverged (i.e., when a reliable MSA can be obtained) does the ML+MSA approach outperform PhyPA. The true topologies are always recovered by ML with the true alignment from the simulation. However, with MSA derived from alignment programs such as MAFFT or MUSCLE, the recovered topology consistently has higher likelihood than that for the true topology. Thus, the failure to recover the true topology by the ML+MSA approach is due not to insufficient search of tree space, but to the distortion of phylogenetic signal by MSA methods. I have implemented PhyPA in DAMBE, together with two approaches that make use of multi-gene data sets to derive phylogenetic support for subtrees, equivalent to resampling techniques such as bootstrapping and jackknifing.
Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods
ERIC Educational Resources Information Center
Zhong, Xiaoling; Yuan, Ke-Hai
2011-01-01
In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…
Likelihood methods for regression models with expensive variables missing by design.
Zhao, Yang; Lawless, Jerald F; McLeish, Donald L
2009-02-01
In some applications involving regression the values of certain variables are missing by design for some individuals. For example, in two-stage studies (Zhao and Lipsitz, 1992), data on "cheaper" variables are collected on a random sample of individuals in stage I, and then "expensive" variables are measured for a subsample of these in stage II. So the "expensive" variables are missing by design at stage I. Both estimating function and likelihood methods have been proposed for cases where either covariates or responses are missing. We extend the semiparametric maximum likelihood (SPML) method for missing covariate problems (e.g. Chen, 2004; Ibrahim et al., 2005; Zhang and Rockette, 2005, 2007) to deal with more general cases where covariates and/or responses are missing by design, and show that profile likelihood ratio tests and interval estimation are easily implemented. Simulation studies are provided to examine the performance of the likelihood methods and to compare their efficiencies with estimating function methods for problems involving (a) a missing covariate and (b) a missing response variable. We illustrate the ease of implementation of SPML and demonstrate its high efficiency.
NASA Astrophysics Data System (ADS)
Magnard, C.; Small, D.; Meier, E.
2015-03-01
The phase estimation of cross-track multibaseline synthetic aperture interferometric data is usually thought to be very efficiently achieved using the maximum likelihood (ML) method. The suitability of this method is investigated here as applied to airborne single pass multibaseline data. Experimental interferometric data acquired with a Ka-band sensor were processed using (a) a ML method that fuses the complex data from all receivers and (b) a coarse-to-fine method that only uses the intermediate baselines to unwrap the phase values from the longest baseline. The phase noise was analyzed for both methods: in most cases, a small improvement was found when the ML method was used.
Efficient and exact maximum likelihood quantisation of genomic features using dynamic programming.
Song, Mingzhou; Haralick, Robert M; Boissinot, Stéphane
2010-01-01
An efficient and exact dynamic programming algorithm is introduced to quantise a continuous random variable into a discrete random variable that maximises the likelihood of the quantised probability distribution for the original continuous random variable. Quantisation is often useful before statistical analysis and modelling of large discrete network models from observations of multiple continuous random variables. The quantisation algorithm is applied to genomic features including the recombination rate distribution across the chromosomes and the non-coding transposable element LINE-1 in the human genome. The association pattern is studied between the recombination rate, obtained by quantisation at genomic locations around LINE-1 elements, and the length groups of LINE-1 elements, also obtained by quantisation on LINE-1 length. The exact and density-preserving quantisation approach provides an alternative superior to the inexact and distance-based univariate iterative k-means clustering algorithm for discretisation.
Williamson, Ross S; Sahani, Maneesh; Pillow, Jonathan W
2015-04-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
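The equivalence described above means that estimating the MID filter amounts to maximizing the Poisson log-likelihood of a linear-nonlinear-Poisson model. A minimal sketch of that objective (toy data and a made-up filter, not the authors' code or dataset):

```python
import math

# Illustrative sketch: the Poisson log-likelihood of an LNP model with rate
# f(w . x) per bin (the constant log y! term is dropped).  The paper's result
# is that maximizing single-spike information (MID) over the filter w is
# equivalent to maximizing this quantity when spiking is truly Poisson.

def lnp_loglik(w, stimuli, counts, dt=1.0, f=math.exp):
    """Log-likelihood of spike counts under an LNP model with filter w."""
    ll = 0.0
    for x, y in zip(stimuli, counts):
        rate = f(sum(wi * xi for wi, xi in zip(w, x))) * dt
        ll += y * math.log(rate) - rate
    return ll

# Toy example: one-dimensional stimulus, two time bins, flat filter.
stimuli = [[1.0], [-1.0]]
counts = [1, 0]
ll0 = lnp_loglik([0.0], stimuli, counts)
```

In practice one would maximize `lnp_loglik` over `w` with any gradient-based optimizer; the nonlinearity `f` here is exponential purely for concreteness.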
Kadengye, Damazo T; Cools, Wilfried; Ceulemans, Eva; Van den Noortgate, Wim
2012-06-01
Missing data, such as item responses in multilevel data, are ubiquitous in educational research settings. Researchers in the item response theory (IRT) context have shown that ignoring such missing data can create problems in the estimation of the IRT model parameters. Consequently, several imputation methods for dealing with missing item data have been proposed and shown to be effective when applied with traditional IRT models. Additionally, a nonimputation direct likelihood analysis has been shown to be an effective tool for handling missing observations in clustered data settings. This study investigates the performance of six simple imputation methods, which have been found to be useful in other IRT contexts, versus a direct likelihood analysis, in multilevel data from educational settings. Multilevel item response data were simulated on the basis of two empirical data sets, and some of the item scores were deleted, such that they were either missing completely at random or missing at random. An explanatory IRT model was used for modeling the complete, incomplete, and imputed data sets. We showed that direct likelihood analysis of the incomplete data sets produced unbiased parameter estimates that were comparable to those from a complete data analysis. Multiple-imputation approaches of the two-way mean and corrected item mean substitution methods displayed varying degrees of effectiveness in imputing data that in turn could produce unbiased parameter estimates. The simple random imputation, adjusted random imputation, item means substitution, and regression imputation methods seemed to be less effective in imputing missing item scores in multilevel data settings.
Retrospective Likelihood Based Methods for Analyzing Case-Cohort Genetic Association Studies
Shen, Yuanyuan; Cai, Tianxi; Chen, Yu; Yang, Ying; Chen, Jinbo
2016-01-01
The case-cohort (CCH) design is a cost-effective design for assessing genetic susceptibility with time-to-event data, especially when the event rate is low. In this work, we propose a powerful pseudo score test for assessing the association between a single nucleotide polymorphism (SNP) and the event time under the CCH design. The pseudo score is derived from a pseudo likelihood, which is an estimated retrospective likelihood that treats the SNP genotype as the dependent variable and the time-to-event outcome and other covariates as independent variables. It exploits the fact that the genetic variable is often distributed independently of covariates or related only to a low-dimensional subset. Estimates of hazard ratio parameters for association can be obtained by maximizing the pseudo likelihood. A unique advantage of our method is that it allows the censoring distribution to depend on covariates that are only measured for the CCH sample, while not requiring knowledge of follow-up or covariate information on subjects not selected into the CCH sample. In addition to these flexibilities, the proposed method has high relative efficiency compared with commonly used alternative approaches. We study large-sample properties of this method and assess its finite-sample performance using both simulated and real data examples. PMID:26177343
Efficient Simulation and Likelihood Methods for Non-Neutral Multi-Allele Models
Joyce, Paul; Genz, Alan
2012-01-01
Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequence, became more readily available, a need to connect population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate the population-genetic parameters using full DNA sequences. Now, we are in the genomics era where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as the auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to patterns of allele frequencies produced by non-neutral models, making the rejection method inefficient. For example, in some cases the methods in DNJ (2001) require 10^9 rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation methods a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection. PMID:22697240
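The inefficiency the abstract describes comes from the rejection step itself. A generic rejection sampler (with a made-up Beta target, not the DNJ population-genetic model) illustrates the mechanism: the expected number of tries per accepted sample is the envelope constant M, and it explodes when the proposal poorly matches the target.

```python
import random

# Generic rejection sampler: draw x from the proposal, accept with probability
# target(x) / (M * proposal(x)).  The target here is Beta(3, 1) with density
# 3 x^2 on (0, 1), the proposal is uniform, and M = 3 -- all chosen purely for
# illustration of the accept/reject mechanics.

def rejection_sample(target_pdf, proposal_draw, proposal_pdf, M, rng):
    """Return one accepted draw and the number of tries it took."""
    tries = 0
    while True:
        tries += 1
        x = proposal_draw(rng)
        if rng.random() < target_pdf(x) / (M * proposal_pdf(x)):
            return x, tries

rng = random.Random(1)
draws = [rejection_sample(lambda x: 3 * x * x,   # target density
                          lambda r: r.random(),  # uniform proposal draw
                          lambda x: 1.0,         # uniform proposal density
                          3.0, rng)
         for _ in range(500)]
mean_tries = sum(t for _, t in draws) / len(draws)  # about M = 3 on average
```

Here M = 3 costs only ~3 tries per sample; the DNJ setting corresponds to an effective M on the order of 10^9, which is why direct simulation is the practical alternative.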
Method and apparatus for implementing a traceback maximum-likelihood decoder in a hypercube network
NASA Technical Reports Server (NTRS)
Pollara-Bozzola, Fabrizio (Inventor)
1989-01-01
A method and a structure to implement maximum-likelihood decoding of convolutional codes on a network of microprocessors interconnected as an n-dimensional cube (hypercube). By proper reordering of states in the decoder, only communication between adjacent processors is required. Communication time is limited to that required for communication only of the accumulated metrics and not the survivor parameters of a Viterbi decoding algorithm. The survivor parameters are stored at a local processor's memory and a trace-back method is employed to ascertain the decoding result. Faster and more efficient operation is enabled, and decoding of large constraint length codes is feasible using standard VLSI technology.
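The metric-update and trace-back steps described above can be sketched generically; the toy trellis and costs below are illustrative, not the patented hypercube mapping of states to processors.

```python
# Hedged sketch of Viterbi decoding with trace-back: accumulated path metrics
# are updated at every step, the survivor (best-predecessor) choices are
# stored, and the decoded state sequence is recovered by tracing back from the
# best final state -- the survivors never need to be communicated.

def viterbi(n_states, steps, branch_metric, start_state=0):
    """branch_metric(t, s_prev, s_next) -> cost; returns the min-cost state path."""
    INF = float("inf")
    metric = [INF] * n_states
    metric[start_state] = 0.0
    survivors = []                    # survivors[t][s] = best predecessor of s
    for t in range(steps):
        new_metric = [INF] * n_states
        prev = [0] * n_states
        for s_next in range(n_states):
            for s_prev in range(n_states):
                m = metric[s_prev] + branch_metric(t, s_prev, s_next)
                if m < new_metric[s_next]:
                    new_metric[s_next], prev[s_next] = m, s_prev
        survivors.append(prev)
        metric = new_metric
    s = min(range(n_states), key=lambda i: metric[i])
    path = [s]                        # trace back through stored survivors
    for prev in reversed(survivors):
        s = prev[s]
        path.append(s)
    return list(reversed(path))

# Toy 2-state trellis in which moving to state 1 is always cheapest.
cost = {(0, 0): 1.0, (0, 1): 0.1, (1, 0): 1.0, (1, 1): 0.1}
path = viterbi(2, 4, lambda t, a, b: cost[(a, b)])
```

In the patent's arrangement, only the `metric` array would be exchanged between adjacent processors; each processor keeps its own slice of `survivors` locally for the trace-back.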
Fachet, Melanie; Flassig, Robert J; Rihko-Struckmann, Liisa; Sundmacher, Kai
2014-12-01
In this work, a photoautotrophic growth model incorporating light and nutrient effects on growth and pigmentation of Dunaliella salina was formulated. The model equations were taken from literature and modified according to the experimental setup with special emphasis on model reduction. The proposed model has been evaluated with experimental data of D. salina cultivated in a flat-plate photobioreactor under stressed and non-stressed conditions. Simulation results show that the model can represent the experimental data accurately. The identifiability of the model parameters was studied using the profile likelihood method. This analysis revealed that three model parameters are practically non-identifiable. However, some of these non-identifiabilities can be resolved by model reduction and additional measurements. As a conclusion, our results suggest that the proposed model equations result in a predictive growth model for D. salina.
Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.
Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier
2017-02-01
The data to which the authors refer throughout this article are likelihood ratios (LR) computed from the comparison of fingermarks with 5-12 minutiae to fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. They present a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and setting up validation reports. These data can also be used as a baseline for comparing fingermark evidence in the same minutiae configuration as presented in Meuwly, Ramos and Haraksim [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports of forensic methods, according to [1]. Alongside the data, a justification and motivation for the use of the methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validation of the LR methods, which are used to calculate the LR values from the data and the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared. But these images do not constitute the core data for the validation, unlike the LRs, which are shared.
Maximum-likelihood methods for array processing based on time-frequency distributions
NASA Astrophysics Data System (ADS)
Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.
1999-11-01
This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
Maximum Likelihood, Profile Likelihood, and Penalized Likelihood: A Primer
Cole, Stephen R.; Chu, Haitao; Greenland, Sander
2014-01-01
The method of maximum likelihood is widely used in epidemiology, yet many epidemiologists receive little or no education in the conceptual underpinnings of the approach. Here we provide a primer on maximum likelihood and some important extensions which have proven useful in epidemiologic research, and which reveal connections between maximum likelihood and Bayesian methods. For a given data set and probability model, maximum likelihood finds values of the model parameters that give the observed data the highest probability. As with all inferential statistical methods, maximum likelihood is based on an assumed model and cannot account for bias sources that are not controlled by the model or the study design. Maximum likelihood is nonetheless popular, because it is computationally straightforward and intuitive and because maximum likelihood estimators have desirable large-sample properties in the (largely fictitious) case in which the model has been correctly specified. Here, we work through an example to illustrate the mechanics of maximum likelihood estimation and indicate how improvements can be made easily with commercial software. We then describe recent extensions and generalizations which are better suited to observational health research and which should arguably replace standard maximum likelihood as the default method. PMID:24173548
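A minimal worked example of the mechanics described above (the data are made up, not from the article): for a binomial proportion, the likelihood of p given k successes in n trials is maximized at p = k/n, and a crude grid search over the log-likelihood recovers that closed-form answer.

```python
import math

# Binomial maximum likelihood, the textbook case: find the value of p that
# gives the observed data (k successes in n trials) the highest probability.

def log_likelihood(p, k, n):
    """Binomial log-likelihood of p (the constant binomial coefficient is dropped)."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

k, n = 7, 20
grid = [i / 1000 for i in range(1, 1000)]        # candidate values of p
p_hat = max(grid, key=lambda p: log_likelihood(p, k, n))
# p_hat coincides with the closed-form MLE, k / n = 0.35
```

In practice one maximizes with calculus or a numerical optimizer rather than a grid, but the logic (pick the parameter value that makes the data most probable) is exactly the one the primer walks through.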
Nonparametric maximum likelihood estimation of probability densities by penalty function methods
NASA Technical Reports Server (NTRS)
Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.
1974-01-01
Unless it is known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.
NASA Technical Reports Server (NTRS)
Klein, V.
1980-01-01
A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.
Equivalence between modularity optimization and maximum likelihood methods for community detection
NASA Astrophysics Data System (ADS)
Newman, M. E. J.
2016-11-01
We demonstrate an equivalence between two widely used methods of community detection in networks, the method of modularity maximization and the method of maximum likelihood applied to the degree-corrected stochastic block model. Specifically, we show an exact equivalence between maximization of the generalized modularity that includes a resolution parameter and the special case of the block model known as the planted partition model, in which all communities in a network are assumed to have statistically similar properties. Among other things, this equivalence provides a mathematically principled derivation of the modularity function, clarifies the conditions and assumptions of its use, and gives an explicit formula for the optimal value of the resolution parameter.
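The generalized modularity in question can be computed directly; the toy graph and partition below are chosen for illustration and are not from the paper.

```python
# Sketch of the generalized modularity with resolution parameter gamma, the
# quantity the paper shows to be equivalent to maximum likelihood under the
# planted partition model:
#   Q(gamma) = (1/2m) * sum_ij [A_ij - gamma * k_i k_j / (2m)] * delta(g_i, g_j)

def modularity(adj, groups, gamma=1.0):
    """Generalized modularity of a partition of an undirected graph."""
    n = len(adj)
    degree = [sum(row) for row in adj]
    two_m = sum(degree)                      # 2m = sum of degrees
    q = 0.0
    for i in range(n):
        for j in range(n):
            if groups[i] == groups[j]:
                q += adj[i][j] - gamma * degree[i] * degree[j] / two_m
    return q / two_m

# Two triangles joined by a single edge: the natural two-community split.
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
split = [0, 0, 0, 1, 1, 1]
q_split = modularity(adj, split)             # 5/14, clearly positive
q_trivial = modularity(adj, [0] * 6)         # all-in-one partition scores 0
```

Putting every node in one community always gives Q = 0 at gamma = 1, while the two-triangle split scores 5/14; the equivalence result says maximizing Q(gamma) over partitions is the same computation as maximum likelihood fitting of the planted partition model.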
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
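The workhorse subroutine (Euclidean projection of real numbers summing to one onto the probability simplex) can be sketched with a standard sort-based algorithm; this version is O(d log d) rather than the paper's linear-time variant, and the input vector is a made-up example.

```python
# Sketch of projecting a real vector (summing to one, e.g. the eigenvalues of
# the candidate matrix mu) onto the probability simplex: find the shift theta
# such that clipping max(v_i + theta, 0) yields the closest distribution in
# Euclidean distance.  Applied to mu's eigenvalues in its own eigenbasis, this
# is the step that repairs nonphysical negative eigenvalues.

def closest_distribution(v):
    """Closest probability distribution (2-norm) to real numbers v summing to 1."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (1.0 - css) / j
        if uj + t > 0:            # condition holds for a prefix of the sort
            theta = t
    return [max(x + theta, 0.0) for x in v]

# Example: one "eigenvalue" is negative; the projection zeroes it and spreads
# the correction over the rest while preserving the unit sum.
p = closest_distribution([0.6, 0.6, -0.2])   # -> [0.5, 0.5, 0.0]
```

The design point of the paper is that the sort can be avoided entirely, making this step linear in d.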
NASA Astrophysics Data System (ADS)
Sobolev, V. S.; Zhuravel', F. A.; Kashcheeva, G. A.
2016-11-01
This paper presents a comparative analysis of the errors of two alternative methods of estimating the central frequency of signals of laser Doppler systems, one of which is based on the maximum likelihood criterion and the other on the so-called pulse-pair technique. Using computer simulation, the standard deviations of the Doppler signal frequency from its true values are determined for both methods and plots of the ratios of these deviations as a measure of the accuracy gain of one of them are constructed. The results can be used by developers of appropriate systems to choose an optimal algorithm of signal processing based on a compromise between the accuracy and speed of the systems as well as the labor intensity of calculations.
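The pulse-pair technique compared in the paper admits a very short sketch: the central frequency is estimated from the phase of the lag-one autocorrelation of the complex signal. The sampling rate and test frequency below are invented for the demonstration.

```python
import cmath
import math

# Hedged sketch of the pulse-pair estimator: for complex samples taken every
# dt seconds, the mean (Doppler) frequency is arg(R1) / (2*pi*dt), where R1 is
# the lag-one autocorrelation.  Unambiguous only for |f| < 1/(2*dt).

def pulse_pair_frequency(samples, dt):
    """Estimate the central frequency of a complex signal from its lag-1 phase."""
    r1 = sum(b * a.conjugate() for a, b in zip(samples, samples[1:]))
    return cmath.phase(r1) / (2 * math.pi * dt)

# Noise-free complex exponential at 125 Hz sampled at 1 kHz.
dt, f_true = 1e-3, 125.0
signal = [cmath.exp(2j * math.pi * f_true * k * dt) for k in range(64)]
f_hat = pulse_pair_frequency(signal, dt)
```

The appeal of the method, and the reason it is compared against the ML estimator, is exactly this brevity: one pass over the samples, no search over candidate frequencies.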
Determination of instrumentation errors from measured data using maximum likelihood method
NASA Technical Reports Server (NTRS)
Keskar, D. A.; Klein, V.
1980-01-01
The maximum likelihood method is used for estimation of unknown initial conditions, constant bias and scale factor errors in measured flight data. The model for the system to be identified consists of the airplane six-degree-of-freedom kinematic equations, and the output equations specifying the measured variables. The estimation problem is formulated in a general way and then, for practical use, simplified by ignoring the effect of process noise. The algorithm developed is first applied to computer generated data having different levels of process noise for the demonstration of the robustness of the method. Then the real flight data are analyzed and the results compared with those obtained by the extended Kalman filter algorithm.
A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation.
Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf
2016-04-26
This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the Likelihood Ratio framework as defined within the Bayes' inference model. In the context of the inference of identity of source, the Likelihood Ratio is used to evaluate the strength of the evidence for a trace specimen, e.g. a fingermark, and a reference specimen, e.g. a fingerprint, to originate from common or different sources. Some theoretical aspects of probabilities necessary for this Guideline were discussed prior to its elaboration, which started after a workshop of forensic researchers and practitioners involved in this topic. In the workshop, the following questions were addressed: "which aspects of a forensic evaluation scenario need to be validated?", "what is the role of the LR as part of a decision process?" and "how to deal with uncertainty in the LR calculation?". The questions: "what to validate?" focuses on the validation methods and criteria and "how to validate?" deals with the implementation of the validation protocol. Answers to these questions were deemed necessary with several objectives. First, concepts typical for validation standards [1], such as performance characteristics, performance metrics and validation criteria, will be adapted or applied by analogy to the LR framework. Second, a validation strategy will be defined. Third, validation methods will be described. Finally, a validation protocol and an example of validation report will be proposed, which can be applied to the forensic fields developing and validating LR methods for the evaluation of the strength of evidence at source level under the following propositions.
Likelihood ratio meta-analysis: New motivation and approach for an old method.
Dormuth, Colin R; Filion, Kristian B; Platt, Robert W
2016-03-01
A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded in excluding the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate, and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed effect and random effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience.
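The summing of per-study log-likelihood functions can be sketched under the usual normal approximation, where each study contributes a quadratic curve; the study estimates and the 1/32 likelihood factor below are illustrative assumptions, not the paper's data.

```python
import math

# Sketch of LR-based pooling: each study's log-likelihood in the effect size
# theta is approximated as a quadratic around its estimate; the curves are
# summed, the maximum gives the pooled estimate, and the "intrinsic" interval
# is where the combined likelihood stays within a factor (1/32 here) of its
# maximum.

def combined_loglik(theta, estimates, ses):
    """Sum of per-study quadratic log-likelihood approximations."""
    return sum(-(theta - e) ** 2 / (2 * s * s) for e, s in zip(estimates, ses))

def pool(estimates, ses, k=32.0):
    """Pooled estimate and 1/k likelihood ('intrinsic') interval."""
    w = [1.0 / (s * s) for s in ses]
    theta_hat = sum(wi * ei for wi, ei in zip(w, estimates)) / sum(w)
    half_width = math.sqrt(2.0 * math.log(k) / sum(w))  # log-lik drop of log k
    return theta_hat, (theta_hat - half_width, theta_hat + half_width)

# Two invented studies with equal precision:
est, (lo, hi) = pool([0.10, 0.30], [0.10, 0.10])
```

Under this approximation the pooled point estimate is the familiar inverse-variance weighted mean, so the method agrees with a traditional fixed-effect analysis on the estimate while reporting a wider, likelihood-based interval.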
Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds
Conroy, M.J.; Morgan, B.J.T.; North, P.M.
1985-01-01
It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate this reporting rate by comparing recoveries of rings offering a monetary reward with recoveries of ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design, and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio. A FORTRAN program, available from the author, is used to perform the computations.
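The core comparison behind the reward design can be sketched in a few lines; the ring counts below are invented, and the assumption (standard in such studies) is that reward rings are reported with certainty.

```python
# Illustrative sketch of the reward-study estimator: if reward rings are
# always reported, the reporting rate for ordinary rings is the ratio of the
# ordinary recovery rate to the reward recovery rate.  All numbers here are
# made up for the demonstration.

def reporting_rate(ringed_std, recovered_std, ringed_reward, recovered_reward):
    """MLE of the ordinary-ring reporting rate from a simple reward study."""
    f_std = recovered_std / ringed_std            # ordinary recovery rate
    f_reward = recovered_reward / ringed_reward   # reward recovery rate
    return f_std / f_reward

# 2000 ordinary rings with 60 recoveries; 500 reward rings with 50 recoveries.
lam = reporting_rate(2000, 60, 500, 50)           # -> 0.3
```

The paper's models generalize exactly this ratio by stratifying both rates geographically and temporally and testing, by likelihood ratio, whether the reporting rate can be held constant across strata.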
A Maximum Likelihood Method for Reconstruction of the Evolution of Eukaryotic Gene Structure
Carmel, Liran; Rogozin, Igor B.; Wolf, Yuri I.; Koonin, Eugene V.
2012-01-01
Spliceosomal introns are one of the principal distinctive features of eukaryotes. Nevertheless, different large-scale studies disagree about even the most basic features of their evolution. In order to come up with a more reliable reconstruction of intron evolution, we developed a model that is far more comprehensive than previous ones. This model is rich in parameters, and estimating them accurately is infeasible by straightforward likelihood maximization. Thus, we have developed an expectation-maximization algorithm that allows for efficient maximization. Here, we outline the model and describe the expectation-maximization algorithm in detail. Since the method works with intron presence–absence maps, it is expected to be instrumental for the analysis of the evolution of other binary characters as well. PMID:19381540
An alternative empirical likelihood method in missing response problems and causal inference.
Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao
2016-11-30
Missing responses are common problems in medical, social, and economic studies. When responses are missing at random, a complete-case data analysis may result in biases. A popular method for removing this bias is inverse probability weighting, proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to the estimator of Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied in the estimation of average treatment effect in observational causal inferences. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
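The augmented inverse probability weighting (AIPW) estimator referenced above has a short closed form, and simulating it makes the bias of complete-case analysis visible. In this sketch the data-generating model, propensity, and outcome regression are all invented for illustration, and both working models are taken as correct, which is the case where AIPW attains the efficiency bound.

```python
import math, random

random.seed(1)
n = 20000
data = []
for _ in range(n):
    x = random.gauss(0, 1)
    y = 2 + x + random.gauss(0, 1)          # outcome model: E[Y|X] = 2 + X
    p = 1 / (1 + math.exp(-(0.5 + x)))      # propensity of Y being observed
    r = 1 if random.random() < p else 0     # response indicator
    data.append((x, y, r, p))

def m(x):
    # Outcome regression used for augmentation (here, the true one).
    return 2 + x

# AIPW (doubly robust) estimate of E[Y]:
aipw = sum(r * y / p - (r - p) / p * m(x) for x, y, r, p in data) / n

# Complete-case mean, biased because observation depends on X (hence on Y):
observed = [y for x, y, r, p in data if r]
naive = sum(observed) / len(observed)
```

The augmentation term has mean zero, so AIPW stays unbiased even though the complete-case mean overshoots the true value of 2.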
Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods
Benevides, Leandro de Jesus; de Carvalho, Daniel Santana; Andrade, Roberto Fernandes Silva; Bomfim, Gilberto Cafezeiro; Fernandes, Flora Maria de Campos
2016-01-01
Abstract Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how various apo E proteins are related in groups of organisms and whether they evolved from a common ancestor. Here, we aimed at performing a phylogenetic study on apo E carrying organisms. We employed a classical and robust method, such as Maximum Likelihood (ML), and compared the results using a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from ML method, as well as from the constructed networks showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The accordance in results from the different methods shows that the complex networks approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups. PMID:27560837
Latz, Ellen
2016-01-01
The potential of soils to naturally suppress inherent plant pathogens is an important ecosystem function. Usually, pathogen infection assays are used for estimating the suppressive potential of soils. In natural soils, however, co-occurring pathogens might simultaneously infect plants, complicating the estimation of a focal pathogen's infection rate (initial slope of the infection curve) as a measure of soil suppressiveness. Here, we present a method, implemented in R, that corrects for these unwanted effects using a two-pathogen mono-molecular infection model. We fit the two-pathogen mono-molecular infection model to data using an integrative approach that combines a numerical simulation of the model with an iterative maximum likelihood fit. We show that, in the presence of co-occurring pathogens, using uncorrected data leads to a critical under- or overestimation of soil suppressiveness measures. In contrast, our new approach enables precise estimation of soil suppressiveness measures such as plant infection rate and plant resistance time. Our method allows a correction of measured infection parameters that is necessary when different pathogens are present. Moreover, our model can be (1) adapted to use other models such as the logistic or the Gompertz model; and (2) extended by a facilitation parameter if infections in plants increase the susceptibility to new infections. We propose our method to be particularly useful for exploring soil suppressiveness of natural soils from different sites (e.g., in biodiversity experiments). PMID:27833800
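The flavor of such a fit can be sketched without the paper's R machinery. Below, a simple competing-risks variant of the mono-molecular model stands in for the two-pathogen model (this is an assumption of the sketch, not the authors' exact equations), the co-occurring pathogen's rate is treated as known, and the focal rate is recovered by grid-search maximum likelihood with Gaussian errors (equivalent to least squares).

```python
import math, random

random.seed(7)

def focal_infected(t, r1, r2):
    # Competing-risks variant of the mono-molecular model: the focal
    # pathogen claims a share r1/(r1+r2) of all infected plants.
    r = r1 + r2
    return r1 / r * (1 - math.exp(-r * t))

# Simulate noisy observations with true rates r1 = 0.30 (focal), r2 = 0.10.
times = [t * 0.5 for t in range(1, 21)]
obs = [focal_infected(t, 0.30, 0.10) + random.gauss(0, 0.01) for t in times]

def sse(r1):
    # Sum of squared errors = Gaussian negative log-likelihood up to constants.
    return sum((o - focal_infected(t, r1, 0.10)) ** 2 for t, o in zip(times, obs))

# Grid-search ML for the focal infection rate, co-occurring rate held fixed.
grid = [i / 1000 for i in range(100, 601)]
r1_hat = min(grid, key=sse)
```

Fitting the one-pathogen model to the same data would attribute part of the co-occurring pathogen's infections to the focal one, which is exactly the bias the abstract warns about.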
Shih, Weichung Joe; Li, Gang; Wang, Yining
2016-03-01
Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted critical value and additional constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one.
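Why the weighted method preserves type-I error is easy to see in simulation: with pre-specified weights, the weighted statistic is standard normal under the null no matter how the second-stage size is chosen from interim data. The sketch below uses an invented sample-size rule and weights; it is an illustration of the principle, not any of the four reviewed procedures in full.

```python
import math, random

random.seed(2)

def two_stage_weighted_reject(n1, n2_rule, w1, crit=1.96, mu=0.0):
    # Stage 1: z-statistic from n1 observations.
    z1 = sum(random.gauss(mu, 1) for _ in range(n1)) / math.sqrt(n1)
    # Interim look: choose the second-stage size from z1 (any rule is allowed).
    n2 = n2_rule(z1)
    z2 = sum(random.gauss(mu, 1) for _ in range(n2)) / math.sqrt(n2)
    # Pre-specified weights, fixed before the trial: w1^2 + w2^2 = 1.
    w2 = math.sqrt(1 - w1 ** 2)
    return w1 * z1 + w2 * z2 > crit

# Data-driven rule: enlarge stage 2 when the interim result looks promising.
rule = lambda z1: 200 if z1 >= 1.0 else 50

reps = 10000
rejections = sum(two_stage_weighted_reject(100, rule, w1=math.sqrt(0.5))
                 for _ in range(reps))
type1 = rejections / reps   # stays near 0.025 (one-sided) despite adaptive n2
```

Replacing the weighted statistic with the naive pooled z-statistic in the same simulation would inflate the rejection rate, which is what the adjusted critical value of the likelihood method corrects for.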
Two-locus models of disease: Comparison of likelihood and nonparametric linkage methods
Goldin, L.R.; Weeks, D.E.
1993-10-01
The power to detect linkage for likelihood and nonparametric (Haseman-Elston, affected-sib-pair, and affected-pedigree-member) methods is compared for the case of a common, dichotomous trait resulting from the segregation of two loci. Pedigree data for several two-locus epistatic and heterogeneity models have been simulated, with one of the loci linked to a marker locus. Replicate samples of 20 three-generation pedigrees (16 individuals/pedigree) were simulated and then ascertained for having at least 6 affected individuals. The power of linkage detection calculated under the correct two-locus model is only slightly higher than that under a single-locus model with reduced penetrance. As expected, the nonparametric linkage methods have somewhat lower power than does the lod-score method, the difference depending on the mode of transmission of the linked locus. Thus, for many pedigree linkage studies, the lod-score method will have the best power. However, this conclusion depends on how many times the lod score will be calculated for a given marker. The Haseman-Elston method would likely be preferable to calculating lod scores under a large number of genetic models (i.e., varying both the mode of transmission and the penetrances), since such an analysis requires an increase in the critical value of the lod criterion. The power of the affected-pedigree-member method is lower than that of the other methods, largely because marker genotypes for unaffected individuals are not used.
Quantifying uncertainty in predictions of groundwater levels using formal likelihood methods
NASA Astrophysics Data System (ADS)
Marchant, Ben; Mackay, Jonathan; Bloomfield, John
2016-09-01
Informal and formal likelihood methods can be used to quantify uncertainty in modelled predictions of groundwater levels (GWLs). Informal methods use a relatively subjective criterion to identify sets of plausible or behavioural parameters of the GWL models. In contrast, formal methods specify a statistical model for the residuals or errors of the GWL model. The formal uncertainty estimates are only reliable when the assumptions of the statistical model are appropriate. We apply the formal approach to historical reconstructions of GWL hydrographs from four UK boreholes. We test whether a model which assumes Gaussian and independent errors is sufficient to represent the residuals or whether a model which includes temporal autocorrelation and a general non-Gaussian distribution is required. Groundwater level hydrographs are often observed at irregular time intervals so we use geostatistical methods to quantify the temporal autocorrelation rather than more standard time series methods such as autoregressive models. According to the Akaike Information Criterion, the more general statistical model better represents the residuals of the GWL model. However, no substantial difference between the accuracy of the GWL predictions and the estimates of their uncertainty is observed when the two statistical models are compared. When the general model is applied, significant temporal correlation over periods ranging from 3 to 20 months is evident for the different boreholes. When the GWL model parameters are sampled using a Markov Chain Monte Carlo approach the distributions based on the general statistical model differ from those of the Gaussian model, particularly for the boreholes with the most autocorrelation. These results suggest that the independent Gaussian model of residuals is sufficient to estimate the uncertainty of a GWL prediction on a single date. However, if realistically autocorrelated simulations of GWL hydrographs for multiple dates are required or if the
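The formal residual model described above, a Gaussian likelihood with temporal autocorrelation handled geostatistically so that irregular sampling times are allowed, can be written down directly. The sketch below assumes an exponential covariance function and invented residuals and parameters; comparing it against a near-zero correlation range reproduces the independent-Gaussian special case.

```python
import math

def exp_cov(times, sigma2, phi, nugget=1e-9):
    # Exponential (geostatistical) covariance over irregular times.
    n = len(times)
    return [[sigma2 * math.exp(-abs(times[i] - times[j]) / phi)
             + (nugget if i == j else 0.0) for j in range(n)] for i in range(n)]

def cholesky(a):
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = (math.sqrt(a[i][i] - s) if i == j
                       else (a[i][j] - s) / L[j][j])
    return L

def gauss_loglik(resid, times, sigma2, phi):
    # log N(resid; 0, C) with C from exp_cov, via Cholesky factorization.
    L = cholesky(exp_cov(times, sigma2, phi))
    z = []
    for i in range(len(resid)):           # forward-solve L z = resid
        z.append((resid[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i])
    logdet = 2 * sum(math.log(L[i][i]) for i in range(len(resid)))
    return -0.5 * (logdet + sum(v * v for v in z)
                   + len(resid) * math.log(2 * math.pi))

# Irregularly spaced observation months and GWL-model residuals (made up):
months = [0.0, 1.5, 2.0, 5.5, 9.0]
resid = [0.4, 0.3, 0.35, -0.1, -0.2]
ll_corr = gauss_loglik(resid, months, sigma2=0.1, phi=6.0)    # correlated model
ll_indep = gauss_loglik(resid, months, sigma2=0.1, phi=1e-6)  # ~independent
```

For smoothly varying residuals like these, the correlated model attains the higher log-likelihood, which is the pattern the Akaike Information Criterion comparison in the abstract picks up.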
A maximum-likelihood multi-resolution weak lensing mass reconstruction method
NASA Astrophysics Data System (ADS)
Khiabanian, Hossein
Gravitational lensing is formed when the light from a distant source is "bent" around a massive object. Lensing analysis has increasingly become the method of choice for studying dark matter, so much so that it is one of the main tools that will be employed in future surveys to study dark energy and its equation of state as well as the evolution of galaxy clustering. Unlike other popular techniques for selecting galaxy clusters (such as studying the X-ray emission or observing the over-densities of galaxies), weak gravitational lensing does not have the disadvantage of relying on luminous matter and provides a parameter-free reconstruction of the projected mass distribution in clusters without dependence on baryon content. Gravitational lensing also provides a unique test for the presence of truly dark clusters, though it is otherwise an expensive detection method. Therefore it is essential to make use of all the information provided by the data to improve the quality of the lensing analysis. This thesis project has been motivated by the limitations encountered with the commonly used direct reconstruction methods of producing mass maps. We have developed a multi-resolution maximum-likelihood reconstruction method for producing two-dimensional mass maps using weak gravitational lensing data. To utilize all the shear information, we employ an iterative inverse method with a properly selected regularization coefficient which fits the deflection potential at the position of each galaxy. By producing mass maps with multiple resolutions in the different parts of the observed field, we can achieve a uniform signal-to-noise level by increasing the resolution in regions of higher distortions or regions with an over-density of background galaxies. In addition, we are able to better study the substructure of massive clusters at a resolution which is not attainable in the rest of the observed field.
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
1998-01-01
Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…
Rius, Jordi
2006-09-01
The maximum-likelihood method is applied to direct methods to derive a more general probability density function of the triple-phase sums which is capable of predicting negative values. This study also proves that maximization of the origin-free modulus sum function S yields, within the limitations imposed by the assumed approximations, the maximum-likelihood estimates of the phases. It thus represents the formal theoretical justification of the S function that was initially derived from Patterson-function arguments [Rius (1993). Acta Cryst. A49, 406-409].
A Composite-Likelihood Method for Detecting Incomplete Selective Sweep from Population Genomic Data.
Vy, Ha My T; Kim, Yuseob
2015-06-01
Adaptive evolution occurs as beneficial mutations arise and then increase in frequency by positive natural selection. How, when, and where in the genome such evolutionary events occur is a fundamental question in evolutionary biology. It is possible to detect ongoing positive selection or an incomplete selective sweep in species with sexual reproduction because, when a beneficial mutation is on the way to fixation, homologous chromosomes in the population are divided into two groups: one carrying the beneficial allele with very low polymorphism at nearby linked loci and the other carrying the ancestral allele with a normal pattern of sequence variation. Previous studies developed long-range haplotype tests to capture this difference between the two groups as the signal of an incomplete selective sweep. In this study, we propose a composite-likelihood-ratio (CLR) test for detecting incomplete selective sweeps based on the joint sampling probabilities for allele frequencies of the two groups as a function of the strength of selection and the recombination rate. Tested against simulated data, this method yielded statistical power and accuracy in parameter estimation that are higher than those of the iHS test and comparable to those of the more recently developed nSL test. This procedure was also applied to African Drosophila melanogaster population genomic data to detect candidate genes under ongoing positive selection. Upon visual inspection of sequence polymorphism, candidates detected by our CLR method exhibited clear haplotype structures predicted under incomplete selective sweeps. Our results suggest that different methods capture different aspects of genetic information regarding incomplete sweeps and thus are partially complementary to each other.
NASA Astrophysics Data System (ADS)
Stollenwerk, Nico
2009-09-01
Basic stochastic processes, like the SIS and SIR epidemics, are used to describe data from an internet-based surveillance system, the InfluenzaNet. Via generating functions, analytic expressions for the probability can be derived in some simplified situations. From these, likelihood functions for parameter estimation are constructed. Partial differential equations thus appear in an epidemiological application without invoking any explicitly spatial aspect. All steps can eventually be bridged by numerical simulations where analytic treatment becomes difficult [1, 2].
Estimating parameters of a multiple autoregressive model by the modified maximum likelihood method
NASA Astrophysics Data System (ADS)
Bayrak, Özlem Türker; Akkaya, Aysen D.
2010-02-01
We consider a multiple autoregressive model with non-normal error distributions, the latter being more prevalent in practice than the usually assumed normal distribution. Since the maximum likelihood equations have convergence problems (Puthenpura and Sinha, 1986) [11], we work out modified maximum likelihood equations by expressing the maximum likelihood equations in terms of ordered residuals and linearizing intractable nonlinear functions (Tiku and Suresh, 1992) [8]. The solutions, called modified maximum likelihood estimators, are explicit functions of the sample observations and therefore easy to compute. Under some very general regularity conditions, they are asymptotically unbiased and efficient (Vaughan and Tiku, 2000) [4]. We show that for small sample sizes, they have negligible bias and are considerably more efficient than the traditional least squares estimators. We show that our estimators are robust to plausible deviations from an assumed distribution and are therefore enormously advantageous as compared to the least squares estimators. We give a real life example.
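To see why non-normal errors make direct likelihood maximization awkward, and what the modified ML equations sidestep, consider plain numerical ML for a first-order autoregression with Student-t errors. This sketch is a baseline for comparison only (the modified ML estimators are explicit, whereas here the likelihood must be searched numerically); all settings are invented.

```python
import math, random

random.seed(5)

def t_logpdf(x, df):
    # Log density of Student's t with df degrees of freedom.
    return (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
            - 0.5 * math.log(df * math.pi)
            - (df + 1) / 2 * math.log(1 + x * x / df))

# Simulate an AR(1) series y_t = phi*y_{t-1} + e_t with t_5 errors.
phi_true, df = 0.6, 5
y = [0.0]
for _ in range(2000):
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    e = random.gauss(0, 1) / math.sqrt(chi2 / df)   # a t_df variate
    y.append(phi_true * y[-1] + e)

def loglik(phi):
    # Conditional log-likelihood under t errors; no closed-form maximizer.
    return sum(t_logpdf(y[t] - phi * y[t - 1], df) for t in range(1, len(y)))

# Brute-force grid search stands in for an iterative maximizer.
grid = [i / 200 for i in range(0, 200)]
phi_hat = max(grid, key=loglik)
```

The modified ML approach replaces the intractable score terms with linear approximations in the ordered residuals, giving closed-form estimators with no search at all.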
NASA Astrophysics Data System (ADS)
Lovreglio, Ruggiero; Ronchi, Enrico; Nilsson, Daniel
2015-11-01
The formulation of pedestrian floor field cellular automaton models is generally based on hypothetical assumptions to represent reality. This paper proposes a novel methodology to calibrate these models using experimental trajectories. The methodology is based on likelihood function optimization and allows verifying whether the parameters defining a model statistically affect pedestrian navigation. Moreover, it allows comparing different model specifications or the parameters of the same model estimated using different data collection techniques, e.g. virtual reality experiment, real data, etc. The methodology is here implemented using navigation data collected in a Virtual Reality tunnel evacuation experiment including 96 participants. A trajectory dataset in the proximity of an emergency exit is used to test and compare different metrics, i.e. Euclidean and modified Euclidean distance, for the static floor field. In the present case study, modified Euclidean metrics provide better fitting with the data. A new formulation using random parameters for pedestrian cellular automaton models is also defined and tested.
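The calibration idea, choosing floor-field parameters to maximize the likelihood of observed cell-to-cell moves, can be sketched compactly. Here the transition rule P(move) ∝ exp(k·S) with a Euclidean static floor field is an assumed simplification, the trajectories are simulated rather than taken from the virtual-reality experiment, and the sensitivity parameter k is recovered by grid search.

```python
import math, random

random.seed(3)

EXIT = (0, 0)
NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def field(cell):
    # Static floor field: negative Euclidean distance to the exit,
    # so cells closer to the exit score higher.
    return -math.hypot(cell[0] - EXIT[0], cell[1] - EXIT[1])

def step_probs(cell, k):
    # Floor-field CA rule: P(move to c) proportional to exp(k * S(c)).
    scores = [math.exp(k * field((cell[0] + dx, cell[1] + dy)))
              for dx, dy in NEIGHBOURS]
    z = sum(scores)
    return [s / z for s in scores]

# Simulate trajectories with true sensitivity k = 2.0 ...
k_true = 2.0
moves = []   # (cell, index of chosen neighbour)
for _ in range(300):
    cell = (random.randint(1, 10), random.randint(1, 10))
    for _ in range(5):
        p = step_probs(cell, k_true)
        u, acc, idx = random.random(), 0.0, 0
        for i, pi in enumerate(p):
            acc += pi
            if u <= acc:
                idx = i
                break
        moves.append((cell, idx))
        dx, dy = NEIGHBOURS[idx]
        cell = (cell[0] + dx, cell[1] + dy)

# ... and recover k by maximizing the trajectory log-likelihood on a grid.
def loglik(k):
    return sum(math.log(step_probs(cell, k)[idx]) for cell, idx in moves)

k_hat = max((i / 20 for i in range(0, 101)), key=loglik)
```

Comparing alternative metrics for the static floor field, as the paper does with Euclidean versus modified Euclidean distance, amounts to swapping the `field` function and comparing maximized log-likelihoods.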
HIV AND POPULATION DYNAMICS: A GENERAL MODEL AND MAXIMUM-LIKELIHOOD STANDARDS FOR EAST AFRICA*
HEUVELINE, PATRICK
2014-01-01
In high-prevalence populations, the HIV epidemic undermines the validity of past empirical models and related demographic techniques. A parsimonious model of HIV and population dynamics is presented here and fit to 46,000 observations, gathered from 11 East African populations. The fitted model simulates HIV and population dynamics with standard demographic inputs and only two additional parameters for the onset and scale of the epidemic. The underestimation of the general prevalence of HIV in samples of pregnant women and the fertility impact of HIV are examples of the dynamic interactions that demographic models must reproduce and are shown here to increase over time even with constant prevalence levels. As a result, the impact of HIV on population growth appears to have been underestimated by current population projections that ignore this dynamic. PMID:12846130
Comparisons of Four Methods for Estimating a Dynamic Factor Model
ERIC Educational Resources Information Center
Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.
2008-01-01
Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…
A likelihood method to cross-calibrate air-shower detectors
NASA Astrophysics Data System (ADS)
Dembinski, Hans Peter; Kégl, Balázs; Mariş, Ioana C.; Roth, Markus; Veberič, Darko
2016-01-01
We present a detailed statistical treatment of the energy calibration of hybrid air-shower detectors, which combine a surface detector array and a fluorescence detector, to obtain an unbiased estimate of the calibration curve. The special features of calibration data from air showers prevent unbiased results if a standard least-squares fit is applied to the problem. We develop a general maximum-likelihood approach, based on the detailed statistical model, to solve the problem. Our approach was developed for the Pierre Auger Observatory, but the applied principles are general and can be transferred to other air-shower experiments, even to the cross-calibration of other observables. Since our general likelihood function is expensive to compute, we derive two approximations with significantly smaller computational cost. In recent years, both have been used to calibrate data from the Pierre Auger Observatory. We demonstrate that these approximations introduce negligible bias when they are applied to simulated toy experiments, which mimic realistic experimental conditions.
How to use dynamic light scattering to improve the likelihood of growing macromolecular crystals.
Borgstahl, Gloria E O
2007-01-01
Dynamic light scattering (DLS) has become one of the most useful diagnostic tools for crystallization. The main purpose of using DLS in crystal screening is to help the investigator understand the size distribution, stability, and aggregation state of macromolecules in solution. It can also be used to understand how experimental variables influence aggregation. With commercially available instruments, DLS is easy to perform, and most of the sample is recoverable. Most usefully, the homogeneity or monodispersity of a sample, as measured by DLS, can be predictive of crystallizability.
Rao, D C; Vogler, G P; McGue, M; Russell, J M
1987-01-01
A general method for maximum-likelihood estimation of familial correlations from pedigree data is presented. The method is applicable to any type of data structure, including pedigrees in which variable numbers of individuals are present within classes of relatives, data in which multiple phenotypic measures are obtained on each individual, and multiple group analyses in which some correlations are equated across groups. The method is applied to data on high-density lipoprotein cholesterol and total cholesterol levels obtained from participants in the Swedish Twin Family Study. Results indicate that there is strong familial resemblance for both traits but little cross-trait resemblance. PMID:3687943
A method for selecting M dwarfs with an increased likelihood of unresolved ultracool companionship
NASA Astrophysics Data System (ADS)
Cook, N. J.; Pinfield, D. J.; Marocco, F.; Burningham, B.; Jones, H. R. A.; Frith, J.; Zhong, J.; Luo, A. L.; Qi, Z. X.; Lucas, P. W.; Gromadzki, M.; Day-Jones, A. C.; Kurtev, R. G.; Guo, Y. X.; Wang, Y. F.; Bai, Y.; Yi, Z. P.; Smart, R. L.
2016-04-01
Locating ultracool companions to M dwarfs is important for constraining low-mass formation models, the measurement of substellar dynamical masses and radii, and for testing ultracool evolutionary models. We present an optimized method for identifying M dwarfs which may have unresolved ultracool companions. We construct a catalogue of 440 694 M dwarf candidates, from Wide-Field Infrared Survey Explorer, Two Micron All-Sky Survey and Sloan Digital Sky Survey, based on optical- and near-infrared colours and reduced proper motion. With strict reddening, photometric and quality constraints we isolate a subsample of 36 898 M dwarfs and search for possible mid-infrared M dwarf + ultracool dwarf candidates by comparing M dwarfs which have similar optical/near-infrared colours (chosen for their sensitivity to effective temperature and metallicity). We present 1082 M dwarf + ultracool dwarf candidates for follow-up. Using simulated ultracool dwarf companions to M dwarfs, we estimate that the occurrence of unresolved ultracool companions amongst our M dwarf + ultracool dwarf candidates should be at least four times the average for our full M dwarf catalogue. We discuss possible contamination and bias and predict yields of candidates based on our simulations.
DREAM3: Network Inference Using Dynamic Context Likelihood of Relatedness and the Inferelator
2010-03-22
A maximum likelihood method for high resolution proton radiography/proton CT
NASA Astrophysics Data System (ADS)
Collins-Fekete, Charles-Antoine; Brousmiche, Sébastien; Portillo, Stephen K. N.; Beaulieu, Luc; Seco, Joao
2016-12-01
Multiple Coulomb scattering (MCS) is the largest contributor to blurring in proton imaging. In this work, we developed a maximum likelihood least squares estimator that improves proton radiography's spatial resolution. The water equivalent thickness (WET) along projections defined from the source to the detector pixels was estimated such that it maximizes the likelihood of the energy loss of every proton crossing the volume. The length spent in each projection was calculated through the optimized cubic spline path estimate. The proton radiographies were produced using Geant4 simulations. Three phantoms were studied here: a slanted cube in a tank of water to measure 2D spatial resolution, a voxelized head phantom for clinical performance evaluation, and a parametric Catphan phantom (CTP528) for 3D spatial resolution. Two proton beam configurations were used: a parallel and a conical beam. Proton beams of 200 and 330 MeV were simulated to acquire the radiography. Spatial resolution is increased from 2.44 lp cm⁻¹ to 4.53 lp cm⁻¹ in the 200 MeV beam and from 3.49 lp cm⁻¹ to 5.76 lp cm⁻¹ in the 330 MeV beam. Beam configuration does not affect the reconstructed spatial resolution, as investigated between a radiography acquired with the parallel (3.49 lp cm⁻¹ to 5.76 lp cm⁻¹) or conical beam (from 3.49 lp cm⁻¹ to 5.56 lp cm⁻¹). The improved images were then used as input in a photon tomography algorithm. The proton CT reconstruction of the Catphan phantom shows high spatial resolution (from 2.79 to 5.55 lp cm⁻¹ for the parallel beam and from 3.03 to 5.15 lp cm⁻¹ for the conical beam) and the reconstruction of the head phantom, although qualitative, shows high contrast in the gradient region. The proposed formulation of the optimization demonstrates serious potential to increase the spatial resolution (up by 65%) in proton radiography and greatly accelerate proton computed tomography reconstruction.
NASA Astrophysics Data System (ADS)
Osmaston, Miles
2013-04-01
In my oral(?) contribution to this session [1] I use my studies of the fundamental physics of gravitation to derive a reason for expecting the vertical gradient of electron density (= radial electric field) in the ionosphere to be closely affected by another field, directly associated with the ordinary gravitational potential (g) present at the Earth's surface. I have called that other field the Gravity-Electric (G-E) field. A calibration of this linkage relationship could be provided by noting corresponding co-seismic changes in (g) and in the ionosphere when, for example, a major normal-fault slippage occurs. But we are here concerned with precursory changes. This means we are looking for mechanisms which, on suitably short timescales, would generate pre-quake elastic deformation that changes the local (g). This poster supplements my talk by noting, for more relaxed discussion, what I see as potentially relevant plate dynamical mechanisms. Timescale constraints. If monitoring for ionospheric precursors is on only short timescales, their detectability is limited to correspondingly tectonically active regions. But as our monitoring becomes more precise and over longer terms, this constraint will relax. Most areas of the Earth are undergoing very slow heating or cooling and corresponding volume or epeirogenic change; major earthquakes can result but we won't have detected any accumulating ionospheric precursor. Transcurrent faulting. In principle, slip on a straight fault, even in a stick-slip manner, should produce little vertical deformation, but a kink, such as has caused the Transverse Ranges on the San Andreas Fault, would seem worth monitoring for precursory build-up in the ionosphere. Plate closure - subducting plate downbend. The traditionally presumed elastic flexure downbend mechanism is incorrect. 'Seismic coupling' has long been recognized by seismologists, invoking the repeated occurrence of 'asperities' to temporarily lock subduction and allow stress
Sentürk, Damla; Dalrymple, Lorien S; Mu, Yi; Nguyen, Danh V
2014-11-10
We propose a new weighted hurdle regression method for modeling count data, with particular interest in modeling cardiovascular events in patients on dialysis. Cardiovascular disease remains one of the leading causes of hospitalization and death in this population. Our aim is to jointly model the relationship/association between covariates and (i) the probability of cardiovascular events, a binary process, and (ii) the rate of events once the realization is positive, when the 'hurdle' is crossed, using a zero-truncated Poisson distribution. When the observation period or follow-up time, from the start of dialysis, varies among individuals, the estimated probability of positive cardiovascular events during the study period will be biased. Furthermore, when the model contains covariates, the estimated relationship between the covariates and the probability of cardiovascular events will also be biased. These challenges are addressed with the proposed weighted hurdle regression method. Estimation for the weighted hurdle regression model uses a weighted likelihood approach, in which standard maximum likelihood estimation can be applied. The method is illustrated with data from the United States Renal Data System. Simulation studies show the ability of the proposed method to successfully adjust for differential follow-up times and incorporate the effects of covariates in the weighting.
Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)
2016-05-01
where the system’s stochastic model is either incomplete or too complex to be described in mathematical terms. Feature based methods often provide an...developing mathematical rules that guarantees optimal performance in noise, i.e., rules that guarantee the lowest error in classification. The method is...suitable in problems where models are available and have low complexity. Its main disadvantage is the development of rules due to the mathematical
Zhao, Yueqin; Yi, Min; Tiwari, Ram C
2016-05-02
A likelihood ratio test, recently developed for the detection of signals of adverse events for a drug of interest in the FDA Adverse Event Reporting System (FAERS) database, is extended to detect signals of adverse events simultaneously for all the drugs in a drug class. The extended likelihood ratio test methods, based on the Poisson model (Ext-LRT) and the zero-inflated Poisson model (Ext-ZIP-LRT), are discussed and are analytically shown, like the likelihood ratio test method, to control the type-I error and false discovery rate. Simulation studies are performed to evaluate the performance characteristics of Ext-LRT and Ext-ZIP-LRT. The proposed methods are applied to the Gadolinium drug class in the FAERS database. An in-house likelihood ratio test tool, incorporating the Ext-LRT methodology, is being developed at the Food and Drug Administration.
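A minimal sketch of the Poisson log-likelihood-ratio statistic that underlies this kind of signal detection; comparing one drug-event count against the remaining counts is one common textbook form, assumed here for illustration rather than taken from the FDA tool:

```python
import numpy as np

def poisson_llr(n, e, n_total, e_total):
    """Log-likelihood-ratio statistic for one drug-event pair: observed count n
    versus its expected count e, relative to the remaining counts. Only
    elevated reporting rates (n/e above the background rate) count as signals."""
    n_rest, e_rest = n_total - n, e_total - e
    if n / e <= n_rest / e_rest:
        return 0.0
    return n * np.log(n / e) + n_rest * np.log(n_rest / e_rest)
```

Under the null hypothesis the statistic is zero; in practice its significance threshold is calibrated by Monte Carlo simulation.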
A Maximum Likelihood Ensemble Data Assimilation Method Tailored to the Inner Radiation Belt
NASA Astrophysics Data System (ADS)
Guild, T. B.; O'Brien, T. P., III; Mazur, J. E.
2014-12-01
The Earth's radiation belts are composed of energetic protons and electrons whose fluxes span many orders of magnitude, whose distributions are log-normal, and where data-model differences can be large and also log-normal. This physical system thus challenges standard data assimilation methods, which rely on underlying assumptions of Gaussian distributions of measurements and data-model differences, where innovations to the model are small. We have therefore developed a data assimilation method tailored to these properties of the inner radiation belt, analogous to the ensemble Kalman filter but for the unique cases of non-Gaussian model and measurement errors, and non-linear model and measurement distributions. We apply this method to the inner radiation belt proton populations, using the SIZM inner belt model [Selesnick et al., 2007] and SAMPEX/PET and HEO proton observations to select the most likely ensemble members contributing to the state of the inner belt. We will describe the algorithm, the method of generating ensemble members, and our choice of minimizing differences in instrument counts rather than phase-space densities, and demonstrate the method with our reanalysis of the inner radiation belt throughout solar cycle 23. We will report on progress to continue our assimilation into solar cycle 24 using the Van Allen Probes/RPS observations.
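The member-selection step can be sketched as a Poisson likelihood comparison on raw instrument counts; ranking members by likelihood (rather than the full filter update) and the toy numbers are illustrative assumptions:

```python
import numpy as np
from scipy.special import gammaln

def poisson_loglik(counts, expected):
    """Log-likelihood of observed instrument counts given one ensemble
    member's predicted counts, under independent Poisson statistics."""
    counts = np.asarray(counts, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return float(np.sum(counts * np.log(expected) - expected - gammaln(counts + 1)))

def rank_members(observed, predictions):
    """Order ensemble members from most to least likely given the observations."""
    ll = np.array([poisson_loglik(observed, p) for p in predictions])
    return np.argsort(ll)[::-1]
```

Working directly in counts keeps the Poisson measurement model exact, which is the motivation the abstract gives for avoiding phase-space densities.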
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1976-01-01
A maximum likelihood estimation method was applied to flight data and procedures to facilitate the routine analysis of a large amount of flight data were described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple maneuver analysis also proved to be useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.
NASA Astrophysics Data System (ADS)
Islam, Fahima Fahmida
Sparse tomography is an efficient technique which saves time as well as minimizes cost. However, the scarcity of angular data makes the image reconstruction problem ill-posed: even with exact data constraints, the inversion cannot be performed uniquely. Therefore, selection of a suitable method to optimize the reconstruction problem plays an important role in sparse-data CT. Use of a regularization function is a well-known way to control artifacts in limited-angle data acquisition. In this work, we propose a directional total variation regularized, ordered subset (OS) type image reconstruction method for neutron limited-data CT. Total variation (TV) regularization is edge-preserving: it not only preserves sharp edges but also reduces many of the artifacts that are common in limited-data CT. However, TV itself is not direction dependent, so it is not well suited to images with a dominant direction; for such images it is important to measure the total variation along that direction. Hence, a directional TV is used as the prior term. TV regularization assumes piecewise smoothness, and since the original image is not piecewise constant, a sparsifying transform is used to convert it into a sparse, piecewise-constant representation. This regularizing function (directional TV) is combined with the likelihood function to form the objective function. To optimize this objective function, an OS-type algorithm is used. Generally, two methods are available to make the OS method convergent. This work proposes an OS-type directional TV regularized likelihood reconstruction method which yields fast convergence as well as good image quality. The initial iteration starts with the filtered back projection (FBP) reconstructed image. Convergence is indicated by the convergence index between two successive reconstructed images. The quality of the image is assessed by showing
New methods to assess severity and likelihood of urban flood risk from intense rainfall
NASA Astrophysics Data System (ADS)
Fewtrell, Tim; Foote, Matt; Bates, Paul; Ntelekos, Alexandros
2010-05-01
the construction of appropriate probabilistic flood models. This paper will describe new research being undertaken to assess the practicality of ultra-high resolution, ground based laser-scanner data for flood modelling in urban centres, using new hydraulic propagation methods to determine the feasibility of such data to be applied within stochastic event models. Results from the collection of 'point cloud' data collected from a mobile terrestrial laser-scanner system in a key urban centre, combined with appropriate datasets, will be summarized here and an initial assessment of the potential for the use of such data in stochastic event sets will be made. Conclusions are drawn from comparisons with previous studies and underlying DEM products of similar resolutions in terms of computational time, flood extent and flood depth. Based on the above, the study provides some current recommendations on the most appropriate resolution of input data for urban hydraulic modelling.
NASA Technical Reports Server (NTRS)
Rheinfurth, M. H.; Wilson, H. B.
1991-01-01
The monograph was prepared to give the practicing engineer a clear understanding of dynamics with special consideration given to the dynamic analysis of aerospace systems. It is conceived to be both a desk-top reference and a refresher for aerospace engineers in government and industry. It could also be used as a supplement to standard texts for in-house training courses on the subject. Beginning with the basic concepts of kinematics and dynamics, the discussion proceeds to treat the dynamics of a system of particles. Both classical and modern formulations of the Lagrange equations, including constraints, are discussed and applied to the dynamic modeling of aerospace structures using the modal synthesis technique.
Terwilliger, J.D.
1995-03-01
Historically, most methods for detecting linkage disequilibrium were designed for use with diallelic marker loci, for which the analysis is straightforward. With the advent of polymorphic markers with many alleles, the normal approach to their analysis has been either to extend the methodology for two-allele systems (leading to an increase in df and to a corresponding loss of power) or to select the allele believed to be associated and then collapse the other alleles, reducing, in a biased way, the locus to a diallelic system. I propose a likelihood-based approach to testing for linkage disequilibrium, an approach that becomes more conservative as the number of alleles increases, and as the number of markers considered jointly increases in a multipoint test for linkage disequilibrium, while maintaining high power. Properties of this method for detecting associations and fine mapping the location of disease traits are investigated. It is found to be, in general, more powerful than conventional methods, and it provides a tractable framework for the fine mapping of new disease loci. Application to the cystic fibrosis data of Kerem et al. is included to illustrate the method. 12 refs., 4 figs., 4 tabs.
The Likelihood Function and Likelihood Statistics
NASA Astrophysics Data System (ADS)
Robinson, Edward L.
2016-01-01
The likelihood function is a necessary component of Bayesian statistics but not of frequentist statistics. The likelihood function can, however, serve as the foundation for an attractive variant of frequentist statistics sometimes called likelihood statistics. We will first discuss the definition and meaning of the likelihood function, giving some examples of its use and abuse - most notably in the so-called prosecutor's fallacy. Maximum likelihood estimation is the aspect of likelihood statistics familiar to most people. When data points are known to have Gaussian probability distributions, maximum likelihood parameter estimation leads directly to least-squares estimation. When the data points have non-Gaussian distributions, least-squares estimation is no longer appropriate. We will show how the maximum likelihood principle leads to logical alternatives to least squares estimation for non-Gaussian distributions, taking the Poisson distribution as an example. The likelihood ratio is the ratio of the likelihoods of, for example, two hypotheses or two parameters. Likelihood ratios can be treated much like un-normalized probability distributions, greatly extending the applicability and utility of likelihood statistics. Likelihood ratios are prone to the same complexities that afflict posterior probability distributions in Bayesian statistics. We will show how meaningful information can be extracted from likelihood ratios by the Laplace approximation, by marginalizing, or by Markov chain Monte Carlo sampling.
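As a toy illustration of the Gaussian-versus-Poisson point, consider counts with expected value mu_i = a * x_i: the Poisson maximum-likelihood slope (sum(y)/sum(x)) differs from the least-squares slope. The data are made up for the example:

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize_scalar

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 3.0, 7.0, 8.0])          # Poisson-distributed counts

def neg_poisson_loglik(a):
    mu = a * x                                # model: E[y_i] = a * x_i
    return -np.sum(y * np.log(mu) - mu - gammaln(y + 1))

a_ml = minimize_scalar(neg_poisson_loglik, bounds=(1e-6, 100.0),
                       method="bounded").x   # Poisson MLE: sum(y)/sum(x) = 2.0
a_ls = np.sum(x * y) / np.sum(x ** 2)        # least-squares slope: 61/30
```

The two estimators agree only when the Gaussian noise assumption behind least squares is appropriate; for counts, the Poisson MLE weights the data differently.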
Cuenca, José; Aleza, Pablo; Juárez, José; García-Lor, Andrés; Froelicher, Yann; Navarro, Luis; Ollitrault, Patrick
2015-01-01
Polyploidisation is a key source of diversification and speciation in plants. Most researchers consider sexual polyploidisation leading to unreduced gametes as its main origin. Unreduced gametes are useful in several crop breeding schemes. Their formation mechanism, i.e., First-Division Restitution (FDR) or Second-Division Restitution (SDR), greatly impacts the gametic and population structures and, therefore, the breeding efficiency. Previous methods to identify the underlying mechanism required the analysis of a large set of markers over large progeny. This work develops a new maximum-likelihood method to identify the unreduced gamete formation mechanism both at the population and individual levels using independent centromeric markers. Knowledge of marker-centromere distances greatly improves the statistical power of the comparison between the SDR and FDR hypotheses. Simulations demonstrated the importance of selecting markers very close to the centromere to obtain significant conclusions at the individual level. This new method was used to identify the meiotic restitution mechanism in nineteen mandarin genotypes used as female parents in triploid citrus breeding. SDR was identified for 85.3% of 543 triploid hybrids and FDR for 0.6%. No significant conclusions were obtained for 14.1% of the hybrids. At the population level, SDR was the predominant mechanism for the 19 parental mandarins. PMID:25894579
NASA Astrophysics Data System (ADS)
Wang, Yan; Huang, Hong; Huang, Lida; Ristic, Branko
2017-03-01
Source term estimation for atmospheric dispersion deals with estimation of the emission strength and location of an emitting source using all available information, including site description, meteorological data, concentration observations and prior information. In this paper, Bayesian methods for source term estimation are evaluated using Prairie Grass field observations. The methods include those that require the specification of the likelihood function and those which are likelihood free, also known as approximate Bayesian computation (ABC) methods. The performances of five different likelihood functions in the former and six different distance measures in the latter case are compared for each component of the source parameter vector, based on the Nemenyi test over all 68 data sets available in the Prairie Grass field experiment. Several likelihood functions and distance measures are introduced to source term estimation for the first time, and the ABC method is improved in several respects. Results show that the discrepancy measures, that is, the likelihood functions and distance measures collectively, have a significant influence on source estimation. There is no single winning algorithm, but these methods can be used collectively to provide more robust estimates.
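In its simplest rejection form, the likelihood-free branch of these methods reduces to the sketch below; the toy prior, deterministic simulator, and tolerance are assumptions for illustration, not the paper's dispersion model:

```python
import numpy as np

def abc_rejection(observed, simulate, sample_prior, distance, n_draws, eps, rng):
    """Plain ABC rejection: keep prior draws whose simulated data fall
    within tolerance eps of the observations under the chosen distance."""
    accepted = []
    for _ in range(n_draws):
        theta = sample_prior(rng)
        if distance(simulate(theta, rng), observed) <= eps:
            accepted.append(theta)
    return np.array(accepted)

rng = np.random.default_rng(0)
post = abc_rejection(
    observed=2.0,
    simulate=lambda theta, rng: theta,              # deterministic toy simulator
    sample_prior=lambda rng: rng.uniform(0.0, 5.0), # flat prior on the source strength
    distance=lambda sim, obs: abs(sim - obs),
    n_draws=2000, eps=0.5, rng=rng)
```

The accepted draws approximate the posterior; the choice of `distance` plays the role the likelihood function plays in the specified-likelihood methods, which is exactly the comparison the paper makes.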
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
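A sketch of the inverse-variance-weighted fit behind the slope method; it assumes the supplied variances describe the log range-corrected signal, and the synthetic noiseless profile is illustrative:

```python
import numpy as np

def slope_method(r, signal, logvar):
    """Weighted least-squares fit of ln(signal * r^2) = intercept - 2*sigma*r,
    with weights equal to the inverse variance of the log signal; equivalent
    to the MLE under independent Gaussian noise on the log signal."""
    y = np.log(signal * r ** 2)
    w = 1.0 / np.asarray(logvar, dtype=float)
    X = np.column_stack([np.ones_like(r), r])
    A = X.T @ (w[:, None] * X)
    b = X.T @ (w * y)
    intercept, slope = np.linalg.solve(A, b)
    return intercept, -slope / 2.0    # zero-range intercept, extinction sigma
```

With equal weights this collapses to the traditional unweighted slope fit; the paper's point is that inverse-variance weights make it the maximum likelihood estimator.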
Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin
2013-01-01
Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration required solving the mixed model equations as many extra times as there were parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.
NASA Astrophysics Data System (ADS)
Ma, Yin-Zhe; Scott, Douglas
2013-01-01
It has been argued recently that the galaxy peculiar velocity field provides evidence of excessive power on scales of 50 h^-1 Mpc, which seems to be inconsistent with the standard Λ cold dark matter (ΛCDM) cosmological model. We discuss several assumptions and conventions used in studies of the large-scale bulk flow to check whether this claim is robust under a variety of conditions. Rather than using a composite catalogue we select samples from the SN, ENEAR, Spiral Field I-band Survey (SFI++) and First Amendment Supernovae (A1SN) catalogues, and correct for Malmquist bias in each according to the IRAS PSCz density field. We also use slightly different assumptions about the small-scale velocity dispersion and the parametrization of the matter power spectrum when calculating the variance of the bulk flow. By combining the likelihood of individual catalogues using a Bayesian hyper-parameter method, we find that the joint likelihood of the amplitude parameter gives σ8 = 0.65 (+0.47, -0.35) (68 per cent confidence region), which is entirely consistent with the ΛCDM model. In addition, the bulk flow magnitude, v ~ 310 km s^-1, and direction, (l, b) ~ (280° ± 8°, 5.1° ± 6°), found by each of the catalogues are all consistent with each other, and with the bulk flow results from most previous studies. Furthermore, the bulk flow velocities in different shells of the surveys constrain (σ8, Ωm) to be (1.01 (+0.26, -0.20), 0.31 (+0.28, -0.14)) for SFI++ and (1.04 (+0.32, -0.24), 0.28 (+0.30, -0.14)) for ENEAR, which are consistent with the 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) best-fitting values. We finally discuss the differences between our conclusions and those of the studies claiming the largest bulk flows.
Wang, Chaolong; Schroeder, Kari B; Rosenberg, Noah A
2012-10-01
Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy-Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. Because the data sets
Nagy, László G; Urban, Alexander; Orstadius, Leif; Papp, Tamás; Larsson, Ellen; Vágvölgyi, Csaba
2010-12-01
Recently developed comparative phylogenetic methods offer a wide spectrum of applications in evolutionary biology, although it is generally accepted that their statistical properties are incompletely known. Here, we examine and compare the statistical power of the ML and Bayesian methods with regard to selection of best-fit models of fruiting-body evolution and hypothesis testing of ancestral states on a real-life data set of a physiological trait (autodigestion) in the family Psathyrellaceae. Our phylogenies are based on the first multigene data set generated for the family. Two different coding regimes (binary and multistate) and two data sets differing in taxon sampling density are examined. The Bayesian method outperformed Maximum Likelihood with regard to statistical power in all analyses. This is particularly evident if the signal in the data is weak, i.e. in cases when the ML approach does not provide support to choose among competing hypotheses. Results based on binary and multistate coding differed only modestly, although it was evident that multistate analyses were less conclusive in all cases. It seems that increased taxon sampling density has favourable effects on inference of ancestral states, while model parameters are influenced to a smaller extent. The model best fitting our data implies that the rate of losses of deliquescence equals zero, although model selection in ML does not provide proper support to reject three of the four candidate models. The results also support the hypothesis that non-deliquescence (lack of autodigestion) has been ancestral in Psathyrellaceae, and that deliquescent fruiting bodies represent the preferred state, having evolved independently several times during evolution.
NASA Technical Reports Server (NTRS)
Gayman, W. H.
1974-01-01
Test method and apparatus determine fluid effective mass and damping in frequency range where effective mass may be considered as total mass less sum of slosh masses. Apparatus is designed so test tank and its mounting yoke are supported from structural test wall by series of flexures.
ERIC Educational Resources Information Center
Han, Kyung T.; Guo, Fanmin
2014-01-01
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
Stepwise Signal Extraction via Marginal Likelihood
Du, Chao; Kao, Chu-Lan Michael
2015-01-01
This paper studies the estimation of a stepwise signal. To determine the number and locations of change-points of the stepwise signal, we formulate a maximum marginal likelihood estimator, which can be computed with a quadratic cost using dynamic programming. We carry out an extensive investigation of the choice of the prior distribution and study the asymptotic properties of the maximum marginal likelihood estimator. We propose to treat each possible set of change-points equally and adopt an empirical Bayes approach to specify the prior distribution of segment parameters. A detailed simulation study is performed to compare the effectiveness of this method with other existing methods. We demonstrate our method on single-molecule enzyme reaction data and on DNA array CGH data. Our study shows that this method is applicable to a wide range of models and offers appealing results in practice. PMID:27212739
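The quadratic-cost dynamic program has the following generic shape; here a per-segment negative sum-of-squares score with a fixed per-segment penalty stands in for the paper's marginal-likelihood segment score, which is an assumption for illustration:

```python
import numpy as np

def best_segmentation(y, penalty):
    """O(n^2) dynamic program over change-points: maximize the summed
    per-segment scores minus a fixed penalty per segment."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    csum = np.concatenate([[0.0], np.cumsum(y)])
    csq = np.concatenate([[0.0], np.cumsum(y ** 2)])

    def score(i, j):                      # score of segment y[i:j]
        s, s2, m = csum[j] - csum[i], csq[j] - csq[i], j - i
        return -(s2 - s * s / m)          # negative within-segment sum of squares

    best = np.full(n + 1, -np.inf)
    best[0] = 0.0
    prev = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):             # best[j]: optimal score for y[:j]
        for i in range(j):
            val = best[i] + score(i, j) - penalty
            if val > best[j]:
                best[j], prev[j] = val, i
    cps, j = [], n                        # backtrack the change-points
    while j > 0:
        cps.append(int(prev[j]))
        j = prev[j]
    return sorted(cps)[1:]                # drop the leading zero
```

Any segment score that decomposes additively over segments, including the marginal likelihood the paper uses, fits the same recursion.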
Profile Likelihood and Incomplete Data.
Zhang, Zhiwei
2010-04-01
According to the law of likelihood, statistical evidence is represented by likelihood functions and its strength measured by likelihood ratios. This point of view has led to a likelihood paradigm for interpreting statistical evidence, which carefully distinguishes evidence about a parameter from error probabilities and personal belief. Like other paradigms of statistics, the likelihood paradigm faces challenges when data are observed incompletely, due to non-response or censoring, for instance. Standard methods to generate likelihood functions in such circumstances generally require assumptions about the mechanism that governs the incomplete observation of data, assumptions that usually rely on external information and cannot be validated with the observed data. Without reliable external information, the use of untestable assumptions driven by convenience could potentially compromise the interpretability of the resulting likelihood as an objective representation of the observed evidence. This paper proposes a profile likelihood approach for representing and interpreting statistical evidence with incomplete data without imposing untestable assumptions. The proposed approach is based on partial identification and is illustrated with several statistical problems involving missing data or censored data. Numerical examples based on real data are presented to demonstrate the feasibility of the approach.
Howard, Philip D; Dixon, Louise
2013-06-01
Recent studies of multiwave risk assessment have investigated the association between changes in risk factors and violent recidivism. This study analyzed a large multiwave data set of English and Welsh offenders (N = 196,493), assessed in realistic correctional conditions using the static/dynamic Offender Assessment System (OASys). It aimed to compare the predictive validity of the OASys Violence Predictor (OVP) under mandated repeated assessment and one-time initial assessment conditions. Scores on 5 of OVP's 7 purportedly dynamic risk factors changed in 6 to 15% of pairs of successive assessments, whereas the other 2 seldom changed. Violent reoffenders had higher initial total and dynamic OVP scores than nonreoffenders, yet nonreoffenders' dynamic scores fell by significantly more between initial and final assessment. OVP scores from the current assessment achieved greater predictive validity than those from the initial assessment. Cox regression models showed that, for total OVP scores and most risk factors, both the initial score and the change in score from initial to current assessment significantly predicted reoffending. These results consistently showed that OVP includes several causal dynamic risk factors for violent recidivism, which can be measured reliably in operational settings. This adds to the evidence base that links changes in risk factors to changes in future reoffending risk and links the use of repeated assessments to incremental improvements in predictive validity. Further research could quantify the costs and benefits of reassessment in correctional practice, study associations between treatment and dynamic risk factors, and separate the effects of improvements and deteriorations in dynamic risk.
NASA Technical Reports Server (NTRS)
Grove, R. D.; Bowles, R. L.; Mayhew, S. C.
1972-01-01
A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
NASA Technical Reports Server (NTRS)
Bueno, R. A.
1977-01-01
Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft application are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in identifying correctly the mode of a failure may arise. These issues are closely examined as well as the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
Dynamic Method for Identifying Collected Sample Mass
NASA Technical Reports Server (NTRS)
Carson, John
2008-01-01
G-Sample is designed for sample collection missions to identify the presence and quantity of sample material gathered by spacecraft equipped with end effectors. The software method uses a maximum-likelihood estimator to identify the collected sample's mass based on onboard force-sensor measurements, thruster firings, and a dynamics model of the spacecraft. This makes sample mass identification a computation rather than a process requiring additional hardware. Simulation examples of G-Sample are provided for spacecraft model configurations with a sample collection device mounted on the end of an extended boom. In the absence of thrust knowledge errors, the results indicate that G-Sample can identify the amount of collected sample mass to within 10 grams (with 95-percent confidence) by using a force sensor with a noise and quantization floor of 50 micrometers. These results hold even in the presence of realistic parametric uncertainty in actual spacecraft inertia, center-of-mass offset, and first flexibility modes. Thrust profile knowledge is shown to be a dominant sensitivity for G-Sample, entering in a nearly one-to-one relationship with the final mass estimation error. This means thrust profiles should be well characterized with onboard accelerometers prior to sample collection. An overall sample-mass estimation error budget has been developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
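Stripped to a single axis, the mass identification is a least-squares (equivalently, Gaussian maximum-likelihood) ratio of force to acceleration samples; the one-axis reduction and the numbers in the test are illustrative assumptions, not the G-Sample implementation:

```python
import numpy as np

def estimate_mass(force, accel, noise_var=1.0):
    """Maximum-likelihood mass estimate under F = m * a with i.i.d. Gaussian
    force-sensor noise; the MLE is the least-squares ratio (a.F) / (a.a)."""
    force = np.asarray(force, dtype=float)
    accel = np.asarray(accel, dtype=float)
    m_hat = accel @ force / (accel @ accel)
    var_m = noise_var / (accel @ accel)   # variance of the estimator
    return m_hat, np.sqrt(var_m)
```

The estimator variance shrinks as more (and larger) accelerations are commanded, which mirrors the abstract's point that thrust profile knowledge dominates the error budget.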
Computational Methods for Structural Mechanics and Dynamics
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.
Bayesian computation via empirical likelihood
Mengersen, Kerrie L.; Pudlo, Pierre; Robert, Christian P.
2013-01-01
Approximate Bayesian computation has become an essential tool for the analysis of complex stochastic models when the likelihood function is numerically unavailable. However, the well-established statistical method of empirical likelihood provides another route to such settings that bypasses simulations from the model and the choices of the approximate Bayesian computation parameters (summary statistics, distance, tolerance), while being convergent in the number of observations. Furthermore, bypassing model simulations may lead to significant time savings in complex models, for instance those found in population genetics. The Bayesian computation with empirical likelihood algorithm we develop in this paper also provides an evaluation of its own performance through an associated effective sample size. The method is illustrated using several examples, including estimation of standard distributions, time series, and population genetics models. PMID:23297233
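As a reminder of the empirical-likelihood machinery the paper builds on, here is a textbook sketch for a scalar mean (toy data; this is not the authors' BCel algorithm). The profile empirical log-likelihood puts weights w_i = 1/(n(1 + lam*(x_i - mu))) on the observations, with the Lagrange multiplier lam chosen so the weighted mean equals mu:

```python
import math

def empirical_loglik(mu, xs, tol=1e-10):
    """Profile empirical log-likelihood for a mean: maximize sum(log w_i)
    subject to sum(w_i) = 1 and sum(w_i * (x_i - mu)) = 0.
    Lagrangian solution: w_i = 1 / (n * (1 + lam * (x_i - mu))), where lam
    is the root of g(lam) = sum(d_i / (1 + lam * d_i)), found by bisection."""
    n = len(xs)
    d = [x - mu for x in xs]
    if min(d) >= 0 or max(d) <= 0:
        return float("-inf")  # mu outside the convex hull of the data
    def g(lam):
        return sum(di / (1 + lam * di) for di in d)
    lo = -1.0 / max(d) + 1e-9   # g -> +inf at this end
    hi = -1.0 / min(d) - 1e-9   # g -> -inf at this end; g is strictly decreasing
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return -sum(math.log(n * (1 + lam * di)) for di in d)

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ll_at_mean = empirical_loglik(3.0, xs)  # maximized at the sample mean: -n*log(n)
ll_off = empirical_loglik(2.5, xs)
```

The Bayesian use in the paper amounts to treating exp of this profile log-likelihood as a surrogate likelihood in place of model simulations.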
Molenaar, P C; Nesselroade, J R
1998-07-01
The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data, a central product of intraindividual investigations, requires special modeling techniques. The dynamic factor model (DFM), which is a generalization of the traditional common factor model, has been proposed by Molenaar (1985) for systematically extracting information from multivariate time series via latent variable modeling. Implementation of the DFM has taken several forms, one of which involves specifying it as a covariance-structure model and estimating its parameters from a block-Toeplitz matrix derived from the multivariate time series. We compare two methods for estimating DFM parameters within a covariance-structure framework, pseudo-maximum likelihood (p-ML) and asymptotically distribution free (ADF) estimation, by means of a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates of comparable precision, but only the ADF method gives standard errors and chi-square statistics that appear to be consistent. The relative ordering of the values of all estimates appears to be very similar across methods. When the manifest time series is relatively short, the two methods appear to perform about equally well.
The phylogenetic likelihood library.
Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A
2015-03-01
We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL).
Di Maro, Antimo; Citores, Lucía; Russo, Rosita; Iglesias, Rosario; Ferreras, José Miguel
2014-08-01
Ribosome-inactivating proteins (RIPs) from angiosperms are rRNA N-glycosidases that have been proposed as defence proteins against viruses and fungi. They have been classified as type 1 RIPs, consisting of single-chain proteins, and type 2 RIPs, consisting of an A chain with RIP properties covalently linked to a B chain with lectin properties. In this work we have carried out a broad search of RIP sequence data banks from angiosperms in order to study their main structural characteristics and phylogenetic evolution. The comparison of the sequences revealed the presence, outside of the active site, of a novel structure that might be involved in the internal protein dynamics linked to enzyme catalysis. The B-chains also presented another conserved structure that might function either in supporting the beta-trefoil structure or in the communication between the two sugar-binding sites. A systematic phylogenetic analysis of RIP sequences revealed that the most primitive type 1 RIPs were similar to those of extant monocots (Poaceae and Asparagaceae). The primitive RIPs evolved into the related dicot type 1 RIPs (like those from Caryophyllales, Lamiales and Euphorbiales). The gene of a type 1 RIP related to the extant Euphorbiaceae type 1 RIPs fused with a double beta-trefoil lectin gene similar to the extant Cucurbitaceae lectins to generate the type 2 RIPs, and finally this gene underwent deletions rendering either type 1 RIPs (like those from Cucurbitaceae, Rosaceae and Iridaceae) or lectins without an A chain (like those from Adoxaceae).
ERIC Educational Resources Information Center
Lee, Sik-Yum; Xia, Ye-Mao
2006-01-01
By means of more than a dozen user-friendly packages, structural equation models (SEMs) are widely used in behavioral, educational, social, and psychological research. As the underlying theory and methods in these packages are vulnerable to outliers and distributions with longer-than-normal tails, a fundamental problem in the field is the…
Costa, Rui J.; Wilkinson-Herbots, Hilde
2017-01-01
The isolation-with-migration (IM) model is commonly used to make inferences about gene flow during speciation, using polymorphism data. However, it has been reported that the parameter estimates obtained by fitting the IM model are very sensitive to the model’s assumptions—including the assumption of constant gene flow until the present. This article is concerned with the isolation-with-initial-migration (IIM) model, which drops precisely this assumption. In the IIM model, one ancestral population divides into two descendant subpopulations, between which there is an initial period of gene flow and a subsequent period of isolation. We derive a very fast method of fitting an extended version of the IIM model, which also allows for asymmetric gene flow and unequal population sizes. This is a maximum-likelihood method, applicable to data on the number of segregating sites between pairs of DNA sequences from a large number of independent loci. In addition to obtaining parameter estimates, our method can also be used, by means of likelihood-ratio tests, to distinguish between alternative models representing the following divergence scenarios: (a) divergence with potentially asymmetric gene flow until the present, (b) divergence with potentially asymmetric gene flow until some point in the past and in isolation since then, and (c) divergence in complete isolation. We illustrate the procedure on pairs of Drosophila sequences from ∼30,000 loci. The computing time needed to fit the most complex version of the model to this data set is only a couple of minutes. The R code to fit the IIM model can be found in the supplementary files of this article. PMID:28193727
Markov chain Monte Carlo without likelihoods.
Marjoram, Paul; Molitor, John; Plagnol, Vincent; Tavare, Simon
2003-12-23
Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion.
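The accept/reject idea can be sketched with a toy normal model standing in for an intractable population-genetics simulator (the model, prior range, tolerance, and proposal scale below are all invented for illustration): a proposal is accepted only when data simulated under it reproduce the observed summary to within a tolerance, so no likelihood is ever evaluated.

```python
import random
import statistics

def simulate_summary(theta, n=100, rng=random):
    """Toy forward model standing in for an intractable simulator:
    n draws from N(theta, 1), summarized by the sample mean."""
    return statistics.fmean(rng.gauss(theta, 1.0) for _ in range(n))

def likelihood_free_mcmc(obs_summary, steps=3000, eps=0.1,
                         prior_lo=-5.0, prior_hi=5.0, seed=1):
    """Marjoram-style MCMC without likelihoods: a proposal is accepted only
    if data simulated under it match the observed summary within eps.
    With a uniform prior and a symmetric proposal, the MH ratio is 1."""
    rng = random.Random(seed)
    theta = obs_summary  # pragmatic start near the data to shorten burn-in
    chain = []
    for _ in range(steps):
        prop = theta + rng.gauss(0.0, 0.5)
        if prior_lo <= prop <= prior_hi and \
                abs(simulate_summary(prop, rng=rng) - obs_summary) < eps:
            theta = prop
        chain.append(theta)  # on rejection the current state is repeated
    return chain

chain = likelihood_free_mcmc(obs_summary=2.0)
posterior_mean = statistics.fmean(chain[1000:])
```

As eps shrinks, the stationary distribution approaches the exact posterior at the cost of a falling acceptance rate, which is one of the open trade-offs the paper discusses.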
Augmented Likelihood Image Reconstruction.
Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M
2016-01-01
The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim of reducing these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During the iterations, temporarily appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are later used for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.
Santra, Kalyan; Smith, Emily A.; Petrich, Jacob W.; Song, Xueyu
2016-12-12
It is often convenient to know the minimum amount of data needed in order to obtain a result of desired accuracy and precision. It is a necessity in the case of subdiffraction-limited microscopies, such as stimulated emission depletion (STED) microscopy, owing to the limited sample volumes and the extreme sensitivity of the samples to photobleaching and photodamage. We present a detailed comparison of probability-based techniques (the maximum likelihood method and methods based on the binomial and the Poisson distributions) with residual minimization-based techniques for retrieving the fluorescence decay parameters for various two-fluorophore mixtures, as a function of the total number of photon counts, in time-correlated, single-photon counting experiments. The probability-based techniques proved to be the most robust (insensitive to initial values) in retrieving the target parameters and, in fact, performed equivalently to 2-3 significant figures. This is to be expected, as we demonstrate that the three methods are fundamentally related. Furthermore, methods based on the Poisson and binomial distributions have the desirable feature of providing a bin-by-bin analysis of a single fluorescence decay trace, which thus permits statistics to be acquired using only the one trace for not only the mean and median values of the fluorescence decay parameters but also for the associated standard deviations. Lastly, these probability-based methods lend themselves well to the analysis of the sparse data sets that are encountered in subdiffraction-limited microscopies.
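A minimal sketch of the Poisson-based approach follows (single-exponential decay with an invented lifetime and count level; not the two-fluorophore analysis of the paper, and with Poisson sampling approximated by rounded Gaussians, which is adequate at these counts). The estimator maximizes the Poisson log-likelihood sum_i (k_i log mu_i - mu_i) over a grid of candidate lifetimes:

```python
import math
import random

def expected_counts(tau, n_bins, dt, total):
    """Expected photon counts per time bin for a single-exponential decay."""
    raw = [math.exp(-i * dt / tau) for i in range(n_bins)]
    norm = sum(raw)
    return [total * r / norm for r in raw]

def poisson_mle_tau(counts, dt, tau_grid):
    """Maximize the Poisson log-likelihood sum_i (k_i*log(mu_i) - mu_i)
    over a grid of candidate lifetimes."""
    total, n = sum(counts), len(counts)
    def loglik(tau):
        mus = expected_counts(tau, n, dt, total)
        return sum(k * math.log(mu) - mu for k, mu in zip(counts, mus))
    return max(tau_grid, key=loglik)

rng = random.Random(2)
true_tau, dt, n_bins = 2.5, 0.1, 100  # ns, ns, bins (all invented)
mus = expected_counts(true_tau, n_bins, dt, total=50000)
# Gaussian approximation stands in for Poisson sampling at these count levels
counts = [max(1, round(rng.gauss(mu, math.sqrt(mu)))) for mu in mus]
tau_hat = poisson_mle_tau(counts, dt, [1.0 + 0.05 * i for i in range(61)])
```

The bin-by-bin character the abstract highlights is visible here: each bin contributes its own term k_i log mu_i - mu_i, so per-bin statistics can be extracted from a single trace.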
Chen, Qingxia; Ibrahim, Joseph G
2014-07-01
Multiple Imputation, Maximum Likelihood and Fully Bayesian methods are the three most commonly used model-based approaches in missing data problems. Although it is easy to show that when the responses are missing at random (MAR), the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small sample and asymptotic expressions of the estimates and standard errors, and fully examine how these estimates are related for the three approaches in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. One simulation and a real data set from a liver cancer clinical trial are given to compare the properties of these methods when the responses are MAR.
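The fact the paper leans on, that complete-case regression remains unbiased when the responses are missing at random, is easy to check in simulation (coefficients and missingness rates below are invented): missingness depends only on the always-observed covariate, never on the response itself.

```python
import random

def ols(xs, ys):
    """Simple-regression OLS: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my - slope * mx, slope

rng = random.Random(3)
x = [rng.gauss(0.0, 1.0) for _ in range(20000)]
y = [1.0 + 2.0 * xi + rng.gauss(0.0, 1.0) for xi in x]
# MAR: the response is dropped with a probability that depends only on x
kept = [(xi, yi) for xi, yi in zip(x, y)
        if rng.random() < (0.9 if xi > 0 else 0.4)]
b0, b1 = ols([p[0] for p in kept], [p[1] for p in kept])
```

Even though the retained sample is skewed toward positive x, the regression coefficients are recovered, because OLS conditions on x.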
Novel methods for molecular dynamics simulations.
Elber, R
1996-04-01
In the past year, significant progress was made in the development of molecular dynamics methods for the liquid phase and for biological macromolecules. Specifically, faster algorithms to pursue molecular dynamics simulations were introduced and advances were made in the design of new optimization algorithms guided by molecular dynamics protocols. A technique to calculate the quantum spectra of protein vibrations was introduced.
NASA Astrophysics Data System (ADS)
Inaniwa, Taku; Kohno, Toshiyuki; Tomitani, Takehiro
2005-12-01
In radiation therapy with hadron beams, conformal irradiation to a tumour can be achieved by using the properties of incident ions such as the high dose concentration around the Bragg peak. For the effective utilization of such properties, it is necessary to evaluate the volume irradiated with hadron beams and the deposited dose distribution in a patient's body. Several methods have been proposed for this purpose, one of which uses the positron emitters generated through fragmentation reactions between incident ions and target nuclei. In the previous paper, we showed that the maximum likelihood estimation (MLE) method could be applicable to the estimation of beam end-point from the measured positron emitting activity distribution for mono-energetic beam irradiations. In a practical treatment, a spread-out Bragg peak (SOBP) beam is used to achieve a uniform biological dose distribution in the whole target volume. Therefore, in the present paper, we proposed to extend the MLE method to estimations of the position of the distal and proximal edges of the SOBP from the detected annihilation gamma ray distribution. We confirmed the effectiveness of the method by means of simulations. Although polyethylene was adopted as a substitute for a soft tissue target in validating the method, the proposed method is equally applicable to general cases, provided that the reaction cross sections between the incident ions and the target nuclei are known. The relative advantage of incident beam species to determine the position of the distal and the proximal edges was compared. Furthermore, we ascertained the validity of applying the MLE method to determinations of the position of the distal and the proximal edges of an SOBP by simulations and we gave a physical explanation of the distal and the proximal information.
NASA Astrophysics Data System (ADS)
Flandoli, F.; Giorgi, E.; Aspinall, W. A.; Neri, A.
2009-04-01
Expert elicitation is a method to obtain estimates for variables of interest when data are sparse or ambiguous. A team of experts is assembled and each is asked to provide three values for each target variable (typically the 5% quantile, the median, and the 95% quantile). If a weight can be associated with each expert, then the different opinions can be pooled into a weighted mean, providing an estimate of the uncertain variable. The key challenge is to assign a proper weight to each expert. To determine this weight empirically, the experts can be asked a set of 'seed' questions whose values are known to the analyst (facilitator). In this approach, the experts provide three separate quantile values for each question, and each expert's ability to quantify uncertainty can be evaluated. For instance, the Cooke classical model quantifies the collective scientific uncertainty through an expert scoring scheme in which weights are ascribed to individual experts on the basis of empirically determined calibration and informativeness scores obtained from a probability analysis of individual performances. In our work, we compare this method to a new algorithm in which the calibration score is replaced by one based on the likelihood of observing the expert performances. The simple idea behind this is to reward more strongly those experts whose seed-item median values are systematically closer to the true values. Given the three quantile values provided by each expert for each question, we fit a Beta distribution to each test-item response and compute the probability that the location parameter of that distribution corresponds to the real value by chance. For each expert, the geometric mean of these probabilities is computed as the likelihood factor, L(e), of the expert, thus providing an alternative 'calibration' score. An information factor, I(e), is also computed as the arithmetic mean of the relative entropies of the expert's distributions
NASA Astrophysics Data System (ADS)
Olivares, G.; Teferle, F. N.
2013-12-01
Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to estimate the stochastic parameters alongside the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Markov chain Monte Carlo (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters, so that, via Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. The algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference database of the PSMSL. Although the parameter estimates from both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE: for example, it provides the spectral index uncertainty without further computations, is computationally stable, and detects multimodality.
Optical methods in fault dynamics
NASA Astrophysics Data System (ADS)
Uenishi, K.; Rossmanith, H. P.
2003-10-01
The Rayleigh pulse interaction with a pre-stressed, partially contacting interface between similar and dissimilar materials is investigated experimentally as well as numerically. This study is intended to obtain an improved understanding of the interface (fault) dynamics during the earthquake rupture process. Using dynamic photoelasticity in conjunction with high-speed cinematography, snapshots of time-dependent isochromatic fringe patterns associated with Rayleigh pulse-interface interaction are experimentally recorded. It is shown that interface slip (instability) can be triggered dynamically by a pulse which propagates along the interface at the Rayleigh wave speed. For the numerical investigation, the finite difference wave simulator SWIFD is used for solving the problem under different combinations of contacting materials. The effect of acoustic impedance ratio of the two contacting materials on the wave patterns is discussed. The results indicate that upon interface rupture, Mach (head) waves, which carry a relatively large amount of energy in a concentrated form, can be generated and propagated from the interface contact region (asperity) into the acoustically softer material. Such Mach waves can cause severe damage onto a particular region inside an adjacent acoustically softer area. This type of damage concentration might be a possible reason for the generation of the "damage belt" in Kobe, Japan, on the occasion of the 1995 Hyogo-ken Nanbu (Kobe) Earthquake.
Andrews, Steven S.; Rutherford, Suzannah
2016-01-01
Experimental measurements require calibration to transform measured signals into physically meaningful values. The conventional approach has two steps: the experimenter deduces a conversion function using measurements on standards and then calibrates (or normalizes) measurements on unknown samples with this function. The deduction of the conversion function from only the standard measurements causes the results to be quite sensitive to experimental noise. It also implies that any data collected without reliable standards must be discarded. Here we show that a “1-step calibration method” reduces these problems for the common situation in which samples are measured in batches, where a batch could be an immunoblot (Western blot), an enzyme-linked immunosorbent assay (ELISA), a sequence of spectra, or a microarray, provided that some sample measurements are replicated across multiple batches. The 1-step method computes all calibration results iteratively from all measurements. It returns the most probable values for the sample compositions under the assumptions of a statistical model, making them the maximum likelihood predictors. It is less sensitive to measurement error on standards and enables use of some batches that do not include standards. In direct comparison of both real and simulated immunoblot data, the 1-step method consistently exhibited smaller errors than the conventional “2-step” method. These results suggest that the 1-step method is likely to be most useful for cases where experimenters want to analyze existing data that are missing some standard measurements and where experimenters want to extract the best results possible from their data. Open source software for both methods is available for download or on-line use. PMID:26908370
Likelihood Principle and Maximum Likelihood Estimator of Location Parameter for Cauchy Distribution.
1986-05-01
The consistency (or strong consistency) of the maximum likelihood estimator has been studied by many researchers, for example Wald (1949) and Wolfowitz (1953, 1965).
Salas-Leiva, Dayana E.; Meerow, Alan W.; Calonje, Michael; Griffith, M. Patrick; Francisco-Ortega, Javier; Nakamura, Kyoko; Stevenson, Dennis W.; Lewis, Carl E.; Namoff, Sandra
2013-01-01
Background and aims Despite a recent new classification, a stable phylogeny for the cycads has been elusive, particularly regarding resolution of Bowenia, Stangeria and Dioon. In this study, five single-copy nuclear genes (SCNGs) are applied to the phylogeny of the order Cycadales. The specific aim is to evaluate several gene tree–species tree reconciliation approaches for developing an accurate phylogeny of the order, to contrast them with concatenated parsimony analysis and to resolve the erstwhile problematic phylogenetic position of these three genera. Methods DNA sequences of five SCNGs were obtained for 20 cycad species representing all ten genera of Cycadales. These were analysed with parsimony, maximum likelihood (ML) and three Bayesian methods of gene tree–species tree reconciliation, using Cycas as the outgroup. A calibrated date estimation was developed with Bayesian methods, and biogeographic analysis was also conducted. Key Results Concatenated parsimony, ML and three species tree inference methods resolve exactly the same tree topology with high support at most nodes. Dioon and Bowenia are the first and second branches of Cycadales after Cycas, respectively, followed by an encephalartoid clade (Macrozamia–Lepidozamia–Encephalartos), which is sister to a zamioid clade, of which Ceratozamia is the first branch, and in which Stangeria is sister to Microcycas and Zamia. Conclusions A single, well-supported phylogenetic hypothesis of the generic relationships of the Cycadales is presented. However, massive extinction events inferred from the fossil record that eliminated broader ancestral distributions within Zamiaceae compromise accurate optimization of ancestral biogeographical areas for that hypothesis. While major lineages of Cycadales are ancient, crown ages of all modern genera are no older than 12 million years, supporting a recent hypothesis of mostly Miocene radiations. This phylogeny can contribute to an accurate infrafamilial
Maximum Likelihood Fusion Model
2014-08-09
Keywords: data fusion, hypothesis testing, maximum likelihood estimation, mobile robot navigation. Figure 1.1: Illustration of mobile robotic agents; land rovers such as Pioneer robots and Segways, used in simultaneous localization and mapping.
NASA Astrophysics Data System (ADS)
Marques, G. O. L. C.
2011-01-01
This paper addresses the efficiency of the maximum likelihood (ML) method in jointly estimating the fractional integration parameters d_s and d, respectively associated with seasonal and non-seasonal long-memory components in discrete stochastic processes. The influence of the size of the non-seasonal parameter on seasonal parameter estimation, and vice versa, was analyzed in the space (d, d_s) ∈ (0,1) × (0,1) by using the mean squared error statistics MSE(d̂_s) and MSE(d̂). The study was based on Monte Carlo simulation experiments using the ML estimator with Whittle's approximation in the frequency domain. Numerical results revealed that the efficiency of jointly estimating each integration parameter is affected in different ways by their sizes: as d_s and d increase simultaneously to 1, both MSE(d̂_s) and MSE(d̂) become larger; however, the effects on MSE(d̂_s) are much stronger than the effects on MSE(d̂). Moreover, as each parameter tends individually to 1, MSE(d̂) becomes larger, but MSE(d̂_s) is barely influenced.
Barrett, Harrison H.; White, Timothy; Parra, Lucas C.
2010-01-01
As photon-counting imaging systems become more complex, there is a trend toward measuring more attributes of each individual event. In various imaging systems the attributes can include several position variables, time variables, and energies. If more than about four attributes are measured for each event, it is not practical to record the data in an image matrix. Instead it is more efficient to use a simple list where every attribute is stored for every event. It is the purpose of this paper to discuss the concept of likelihood for such list-mode data. We present expressions for list-mode likelihood with an arbitrary number of attributes per photon and for both preset counts and preset time. Maximization of this likelihood can lead to a practical reconstruction algorithm with list-mode data, but that aspect is covered in a separate paper [IEEE Trans. Med. Imaging (to be published)]. An expression for lesion detectability for list-mode data is also derived and compared with the corresponding expression for conventional binned data. PMID:9379247
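The "list-mode likelihood" idea, one likelihood term per recorded event rather than per histogram bin, can be made concrete with a single attribute per event. The energy densities below are invented (a Gaussian photopeak over a flat background), not the paper's multi-attribute formalism; here the preset-count list-mode likelihood is maximized over a signal fraction:

```python
import math
import random

def signal_pdf(e):
    """Hypothetical signal attribute density: Gaussian peak at 511 keV."""
    return math.exp(-0.5 * ((e - 511.0) / 20.0) ** 2) / (20.0 * math.sqrt(2 * math.pi))

def background_pdf(e):
    """Hypothetical background: flat between 300 and 700 keV."""
    return 1.0 / 400.0 if 300.0 <= e <= 700.0 else 0.0

def listmode_loglik(frac, events):
    """Preset-count list-mode log-likelihood: one term per recorded event,
    with no binning of the measured attribute."""
    return sum(math.log(frac * signal_pdf(e) + (1.0 - frac) * background_pdf(e))
               for e in events)

rng = random.Random(4)
events = [rng.gauss(511.0, 20.0) if rng.random() < 0.7 else rng.uniform(300.0, 700.0)
          for _ in range(5000)]
grid = [i / 100 for i in range(1, 100)]
frac_hat = max(grid, key=lambda f: listmode_loglik(f, events))
```

Because every event keeps its exact attribute value, nothing is lost to bin widths, which is the advantage over binned data that the paper quantifies for lesion detectability.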
[Contrastive study on dynamic spectrum extraction method].
Li, Gang; Zhou, Mei; Wang, Hui-quan; Xiong, Chan; Lin, Ling
2012-05-01
The dynamic spectrum method extracts the absorbance of the arterial pulse blood at several wavelengths. The method can reduce influences such as measurement conditions, individual differences and spectral overlap, and offers a new way to detect blood components noninvasively. However, how to choose a dynamic spectrum extraction method is one of the key issues for this weak spectral signal. Two methods currently exist to extract the dynamic spectrum signal: frequency-domain analysis and single-trial estimation in the time domain. In the present research, a comprehensive comparison and analysis of the two methods was carried out. Theoretical analysis and experimental results show that the two methods extract the dynamic spectrum from different angles but are the same in essence, sharing the basic principle of the dynamic spectrum and the same statistical and averaging properties of the signal. With a pulse wave of relatively stable period and amplitude, a high-precision dynamic spectrum can be obtained by either method. With an unstable pulse wave, due to finger shake and contact-pressure changes, the dynamic spectrum extracted by single-trial estimation is more accurate than that obtained by frequency-domain analysis.
SPT Lensing Likelihood: South Pole Telescope CMB lensing likelihood code
NASA Astrophysics Data System (ADS)
Feeney, Stephen M.; Peiris, Hiranya V.; Verde, Licia
2014-11-01
The SPT lensing likelihood code, written in Fortran90, performs a Gaussian likelihood based upon the lensing potential power spectrum using a file from CAMB (ascl:1102.026) which contains the normalization required to get the power spectrum that the likelihood call is expecting.
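For a diagonal covariance, the Gaussian likelihood such a code evaluates reduces to the familiar chi-squared form. A minimal illustration with invented band powers and variances (the real code uses the full lensing-potential covariance and the CAMB-derived normalization):

```python
import math

def gaussian_loglike(data, model, var):
    """Gaussian log-likelihood with a diagonal covariance:
    -0.5 * sum((d_i - m_i)^2 / var_i + log(2*pi*var_i))."""
    return -0.5 * sum((d - m) ** 2 / v + math.log(2 * math.pi * v)
                      for d, m, v in zip(data, model, var))

# invented band powers, theory prediction, and variances
data  = [1.02, 0.98, 1.05, 0.97]
model = [1.00, 1.00, 1.00, 1.00]
var   = [0.01, 0.01, 0.01, 0.01]
ll = gaussian_loglike(data, model, var)
```

A better-fitting model always scores higher: evaluating the same function with model equal to data gives the maximum attainable log-likelihood for these variances.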
DALI: Derivative Approximation for LIkelihoods
NASA Astrophysics Data System (ADS)
Sellentin, Elena
2015-07-01
DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.
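The idea of extending the Fisher matrix with higher derivative terms can be illustrated on a one-dimensional toy likelihood. Everything below is an invented example, not DALI's actual API; it simply compares a second-order (Fisher-level) Taylor expansion of -2 ln L with an expansion carried one order further:

```python
import numpy as np

def neg2lnL(theta):
    # toy non-Gaussian (skewed) likelihood, purely illustrative
    return (theta + 0.3 * theta**2) ** 2

def taylor_approx(f, x0, x, order, h=1e-3):
    """Finite-difference Taylor expansion of f around x0 up to given order."""
    d1 = (f(x0 + h) - f(x0 - h)) / (2 * h)
    d2 = (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2
    d3 = (f(x0 + 2*h) - 2*f(x0 + h) + 2*f(x0 - h) - f(x0 - 2*h)) / (2 * h**3)
    terms = [f(x0), d1 * (x - x0), d2 * (x - x0)**2 / 2, d3 * (x - x0)**3 / 6]
    return sum(terms[:order + 1])

x = 0.5
fisher = taylor_approx(neg2lnL, 0.0, x, order=2)  # Gaussian (Fisher) level
dali = taylor_approx(neg2lnL, 0.0, x, order=3)    # one order beyond Fisher
exact = neg2lnL(x)
```

Away from the best fit, the third-order expansion tracks the skewed likelihood noticeably better than the purely quadratic (Gaussian) one, which is the qualitative point of going beyond the Fisher matrix.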
Maximum Likelihood Estimation of Multivariate Polyserial and Polychoric Correlation Coefficients.
ERIC Educational Resources Information Center
Poon, Wai-Yin; Lee, Sik-Yum
1987-01-01
Reparameterization is used to find the maximum likelihood estimates of parameters in a multivariate model having some component variable observable only in polychotomous form. Maximum likelihood estimates are found by a Fletcher Powell algorithm. In addition, the partition maximum likelihood method is proposed and illustrated. (Author/GDC)
NASA Technical Reports Server (NTRS)
Klein, V.
1979-01-01
Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.
Sampling variability and estimates of density dependence: a composite-likelihood approach.
Lele, Subhash R
2006-01-01
It is well known that sampling variability, if not properly taken into account, affects various ecologically important analyses. Statistical inference for stochastic population dynamics models is difficult when, in addition to the process error, there is also sampling error. The standard maximum-likelihood approach suffers from large computational burden. In this paper, I discuss an application of the composite-likelihood method for estimation of the parameters of the Gompertz model in the presence of sampling variability. The main advantage of the method of composite likelihood is that it reduces the computational burden substantially with little loss of statistical efficiency. Missing observations are a common problem with many ecological time series. The method of composite likelihood can accommodate missing observations in a straightforward fashion. Environmental conditions also affect the parameters of stochastic population dynamics models. This method is shown to handle such nonstationary population dynamics processes as well. Many ecological time series are short, and statistical inferences based on such short time series tend to be less precise. However, spatial replications of short time series provide an opportunity to increase the effective sample size. Application of likelihood-based methods for spatial time-series data for population dynamics models is computationally prohibitive. The method of composite likelihood is shown to have significantly less computational burden, making it possible to analyze large spatial time-series data. After discussing the methodology in general terms, I illustrate its use by analyzing a time series of counts of American Redstart (Setophaga ruticilla) from the Breeding Bird Survey data, San Joaquin kit fox (Vulpes macrotis mutica) population abundance data, and spatial time series of Bull trout (Salvelinus confluentus) redds count data.
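A pairwise composite likelihood for a Gompertz state-space model can be sketched from its stationary distribution. The parameterization and function names below are our own illustrative reconstruction, not the paper's code: on the log scale the process is X_{t+1} = a + b·X_t + ε_t with sampling error added on top, so any pair of adjacent observations is bivariate normal.

```python
import numpy as np

def bvn_logpdf(y1, y2, mu, var, cov):
    """Log density of a bivariate normal with common mean and variance."""
    det = var**2 - cov**2
    r1, r2 = y1 - mu, y2 - mu
    quad = (var * r1**2 - 2 * cov * r1 * r2 + var * r2**2) / det
    return -np.log(2 * np.pi) - 0.5 * np.log(det) - 0.5 * quad

def gompertz_pairwise_cl(params, y):
    """Composite log-likelihood from adjacent pairs of observations of a
    Gompertz state-space model (illustrative: a, b, process variance sigma2,
    sampling variance tau2)."""
    a, b, sigma2, tau2 = params
    mu = a / (1 - b)            # stationary mean of log abundance
    v = sigma2 / (1 - b**2)     # stationary process variance
    var_y = v + tau2            # sampling error inflates the marginal variance
    cov_y = v * b               # lag-1 covariance of the observations
    return sum(bvn_logpdf(y[t], y[t + 1], mu, var_y, cov_y)
               for t in range(len(y) - 1))
```

Maximizing this sum over the parameters requires only bivariate densities, which is why the composite approach is so much cheaper than the full likelihood; missing observations simply drop their pairs from the sum.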
The Sherpa Maximum Likelihood Estimator
NASA Astrophysics Data System (ADS)
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Dynamic discretization method for solving Kepler's equation
NASA Astrophysics Data System (ADS)
Feinstein, Scott A.; McLaughlin, Craig A.
2006-09-01
Kepler’s equation needs to be solved many times for a variety of problems in Celestial Mechanics. Therefore, computing the solution to Kepler’s equation in an efficient manner is of great importance to that community. There are some historical and many modern methods that address this problem. Of the methods known to the authors, Fukushima’s discretization technique performs the best. By taking more of a system approach and combining the use of discretization with the standard computer science technique known as dynamic programming, we were able to achieve even better performance than Fukushima’s technique. We begin by defining Kepler’s equation for the elliptical case and describe existing solution methods. We then present our dynamic discretization method and show the results of a comparative analysis. This analysis will demonstrate that, for the conditions of our tests, dynamic discretization performs the best.
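A minimal sketch of the two ingredients, Newton iteration plus a cached grid of starting values, is below. This is our simplified reading of the discretization-plus-dynamic-programming idea, not the authors' code:

```python
import math
from functools import lru_cache

def kepler_newton(M, e, E0, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration."""
    E = E0
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Discretization idea (simplified): solve once on a coarse grid of M and
# reuse the nearest grid value as the initial guess; "dynamic programming"
# here amounts to caching the grid per eccentricity.
@lru_cache(maxsize=None)
def _grid(e, n=64):
    return tuple(kepler_newton(2 * math.pi * k / n, e, 2 * math.pi * k / n)
                 for k in range(n + 1))

def kepler_dynamic(M, e, n=64):
    M = M % (2 * math.pi)
    k = int(round(M / (2 * math.pi) * n))
    return kepler_newton(M, e, _grid(e, n)[k])
```

With a good initial guess from the grid, Newton typically converges in one or two iterations, which is where the speedup over naive starting values comes from.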
Dynamic Waypoint Navigation Using Voronoi Classifier Methods
2004-12-01
Robotics Mobility Laboratory Warren, MI 48397-5000 ABSTRACT This paper details the development of a dynamic waypoint navigation method ...elements of the environment are known initially and are used in the computation of the initial path). The drawback to this method is that the robot
Simulating protein dynamics: Novel methods and applications
NASA Astrophysics Data System (ADS)
Vishal, V.
This Ph.D. dissertation describes several methodological advances in molecular dynamics (MD) simulations. Methods like Markov State Models can be used effectively in combination with distributed computing to obtain long-time-scale behavior from an ensemble of short simulations. Advanced computing architectures like graphics processors can be used to greatly extend the scope of MD. Applications of MD techniques to problems like Alzheimer's disease and fundamental questions in protein dynamics are described.
SWECS tower dynamics analysis methods and results
NASA Technical Reports Server (NTRS)
Wright, A. D.; Sexton, J. H.; Butterfield, C. P.; Thresher, R. M.
1981-01-01
Several different tower dynamics analysis methods and computer codes were used to determine the natural frequencies and mode shapes of both guyed and freestanding wind turbine towers. These analysis methods are described and the results for two types of towers, a guyed tower and a freestanding tower, are shown. The advantages and disadvantages in the use of and the accuracy of each method are also described.
Quasi-likelihood for Spatial Point Processes
Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus
2014-01-01
Fitting regression models for intensity functions of spatial point processes is of great interest in ecological and epidemiological studies of association between spatially referenced events and geographical or environmental covariates. When Cox or cluster process models are used to accommodate clustering not accounted for by the available covariates, likelihood-based inference becomes computationally cumbersome due to the complicated nature of the likelihood function and the associated score function. It is therefore of interest to consider alternative, more easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation which in practice is solved numerically. The derivation of the optimal estimating function has close similarities to the derivation of quasi-likelihood for standard data sets. The approximate solution is further equivalent to a quasi-likelihood score for binary spatial data. We therefore use the term quasi-likelihood for our optimal estimating function approach. We demonstrate in a simulation study and a data example that our quasi-likelihood method for spatial point processes is both statistically and computationally efficient. PMID:26041970
Disequilibrium mapping: Composite likelihood for pairwise disequilibrium
Devlin, B.; Roeder, K.; Risch, N.
1996-08-15
The pattern of linkage disequilibrium between a disease locus and a set of marker loci has been shown to be a useful tool for geneticists searching for disease genes. Several methods have been advanced to utilize the pairwise disequilibrium between the disease locus and each of a set of marker loci. However, none of the methods take into account the information from all pairs simultaneously while also modeling the variability in the disequilibrium values due to the evolutionary dynamics of the population. We propose a composite likelihood (CL) model that has these features when the physical distances between the marker loci are known or can be approximated. In this instance, and assuming that there is a single disease mutation, the CL model depends on only three parameters: the recombination fraction between the disease locus and an arbitrary marker locus, θ; the age of the mutation; and a variance parameter. When the CL is maximized over a grid of θ, it provides a graph that can direct the search for the disease locus. We also show how the CL model can be generalized to account for multiple disease mutations. Evolutionary simulations demonstrate the power of the analyses, as well as their potential weaknesses. Finally, we analyze the data from two mapped diseases, cystic fibrosis and diastrophic dysplasia, finding that the CL method performs well in both cases. 28 refs., 6 figs., 4 tabs.
Method for monitoring slow dynamics recovery
NASA Astrophysics Data System (ADS)
Haller, Kristian C. E.; Hedberg, Claes M.
2012-11-01
Slow Dynamics is a specific material property, which for example is connected to the degree of damage. It is therefore of importance to be able to attain proper measurements of it. Usually it has been monitored by acoustic resonance methods which have very high sensitivity as such. However, because the acoustic wave is acting both as conditioner and as probe, the measurement is affecting the result which leads to a mixing of the fast nonlinear response to the excitation and the slow dynamics material recovery. In this article a method is introduced which, for the first time, removes the fast dynamics from the process and allows the behavior of the slow dynamics to be monitored by itself. The new method has the ability to measure at the shortest possible recovery times, and at very small conditioning strains. For the lowest strains the sound speed increases with strain, while at higher strains a linear decreasing dependence is observed. This is the first method and test that has been able to monitor the true material state recovery process.
Solution Methods for Stochastic Dynamic Linear Programs.
1980-12-01
Linear Programming, IIASA, Laxenburg, Austria, June 2-6, 1980. [2] Aghili, P., R.H. Cramer and H.W. Thompson, "On the applicability of two-stage...Laxenburg, Austria, May, 1978. [52] Propoi, A. and V. Krivonozhko, "The simplex method for dynamic linear programs", RR-78-14, IIASA, Vienna, Austria
Interfacial gauge methods for incompressible fluid dynamics
Saye, Robert
2016-01-01
Designing numerical methods for incompressible fluid flow involving moving interfaces, for example, in the computational modeling of bubble dynamics, swimming organisms, or surface waves, presents challenges due to the coupling of interfacial forces with incompressibility constraints. A class of methods, denoted interfacial gauge methods, is introduced for computing solutions to the corresponding incompressible Navier-Stokes equations. These methods use a type of “gauge freedom” to reduce the numerical coupling between fluid velocity, pressure, and interface position, allowing high-order accurate numerical methods to be developed more easily. Making use of an implicit mesh discontinuous Galerkin framework, developed in tandem with this work, high-order results are demonstrated, including surface tension dynamics in which fluid velocity, pressure, and interface geometry are computed with fourth-order spatial accuracy in the maximum norm. Applications are demonstrated with two-phase fluid flow displaying fine-scaled capillary wave dynamics, rigid body fluid-structure interaction, and a fluid-jet free surface flow problem exhibiting vortex shedding induced by a type of Plateau-Rayleigh instability. The developed methods can be generalized to other types of interfacial flow and facilitate precise computation of complex fluid interface phenomena. PMID:27386567
NASA Technical Reports Server (NTRS)
Grove, R. D.; Mayhew, S. C.
1973-01-01
A computer program (Langley program C1123) has been developed for estimating aircraft stability and control parameters from flight test data. These parameters are estimated by the maximum likelihood estimation procedure implemented on a real-time digital simulation system, which uses the Control Data 6600 computer. This system allows the investigator to interact with the program in order to obtain satisfactory results. Part of this system, the control and display capabilities, is described for this program. This report also describes the computer program by presenting the program variables, subroutines, flow charts, listings, and operational features. Program usage is demonstrated with a test case using pseudo or simulated flight data.
Evaluation of Dynamic Methods for Earthwork Assessment
NASA Astrophysics Data System (ADS)
Vlček, Jozef; Ďureková, Dominika; Zgútová, Katarína
2015-05-01
Rapid development of road construction imposes demands for fast, high-quality methods of earthwork evaluation. Dynamic methods are now adopted in numerous civil engineering sectors, and evaluation of earthwork quality in particular can be sped up using dynamic equipment. This paper presents the results of parallel measurements with chosen devices for determining the level of compaction of soils. The measurements were used to develop correlations between the values obtained from the various apparatuses. The correlations show that the examined apparatuses are suitable for examination of the compaction level of fine-grained soils, with consideration of the boundary conditions of the equipment used. The presented methods are quick and results can be obtained immediately after measurement; they are thus suitable in cases when construction works have to be performed in a short period of time.
Spectral Methods for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Zang, T. A.; Streett, C. L.; Hussaini, M. Y.
1994-01-01
As a tool for large-scale computations in fluid dynamics, spectral methods were prophesied in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral
Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1985-01-01
Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
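The cost-function minimization described above can be made concrete with a toy output-error example. The first-order model and line search below are our own illustration (assumed names, noiseless data), not the report's aircraft equations of motion:

```python
import numpy as np

def simulate(a, u, dt=0.1):
    """First-order response y' = -a*y + u (explicit Euler), standing in for
    the aircraft equations of motion."""
    y = np.zeros(len(u))
    for t in range(1, len(u)):
        y[t] = y[t - 1] + dt * (-a * y[t - 1] + u[t - 1])
    return y

def ml_cost(a, z, u, sigma2=1e-4):
    """Negative log-likelihood cost for Gaussian measurement noise; the ML
    estimate minimizes this, as in the output-error method."""
    r = z - simulate(a, u)
    return 0.5 * np.sum(r**2) / sigma2

# generate data with true a = 2.0, then recover it by a coarse line search
u = np.ones(100)
z = simulate(2.0, u)
grid = np.linspace(0.5, 4.0, 351)
a_hat = grid[np.argmin([ml_cost(a, z, u) for a in grid])]
```

Real estimators replace the line search with Newton-type minimization of the same cost, but the shape of the cost surface, flat far from the data, sharply curved at the minimum, is the same idea the report illustrates graphically.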
Growing local likelihood network: Emergence of communities
NASA Astrophysics Data System (ADS)
Chen, S.; Small, M.
2015-10-01
In many real situations, networks grow only via local interactions. New nodes are added to the growing network with information only pertaining to a small subset of existing nodes. Multilevel marketing, social networks, and disease models can all be depicted as growing networks based on local (network path-length) distance information. In these examples, all nodes whose distance from a chosen center is less than d form a subgraph. Hence, we grow networks with information only from these subgraphs. Moreover, we use a likelihood-based method, where at each step we modify the networks by changing their likelihood to be closer to the expected degree distribution. Combining the local information and the likelihood method, we grow networks that exhibit novel features. We discover that the likelihood method, over certain parameter ranges, can generate networks with highly modulated communities, even when global information is not available. Communities and clusters are abundant in real-life networks, and the method proposed here provides a natural mechanism for the emergence of communities in scale-free networks. In addition, the algorithmic implementation of network growth via local information is substantially faster than global methods and allows for the exploration of much larger networks.
Mesoscopic Simulation Methods for Polymer Dynamics
NASA Astrophysics Data System (ADS)
Larson, Ronald
2015-03-01
We assess the accuracy and efficiency of mesoscopic simulation methods, namely Brownian Dynamics (BD), Stochastic Rotation Dynamics (SRD) and Dissipative Particle Dynamics (DPD), for polymers in solution at equilibrium and in flows in microfluidic geometries. Both SRD and DPD use solvent "particles" to carry momentum, and so account automatically for hydrodynamic interactions both within isolated polymer coils, and with other polymer molecules and with nearby solid boundaries. We assess quantitatively the effects of artificial particle inertia and fluid compressibility and show that they can be made small with appropriate choice of simulation parameters. We then use these methods to study flow-induced migration of polymer chains produced by: 1) hydrodynamic interactions, 2) streamline curvature or stress-gradients, and 3) convection of wall depletion zones. We show that huge concentration gradients can be produced by these mechanisms in microfluidic geometries that can be exploited for separation of polymers by size in periodic contraction-expansion geometries. We also assess the range of conditions for which BD, SRD or DPD is preferable for mesoscopic simulations. Finally, we show how such methods can be used to simulate quantitatively the swimming of micro-organisms such as E. coli. In collaboration with Lei Jiang and Tongyang Zhao, University of Michigan, Ann Arbor, MI.
A Likelihood-Based SLIC Superpixel Algorithm for SAR Images Using Generalized Gamma Distribution
Zou, Huanxin; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng
2016-01-01
The simple linear iterative clustering (SLIC) method is a recently proposed popular superpixel algorithm. However, this method may generate bad superpixels for synthetic aperture radar (SAR) images due to effects of speckle and the large dynamic range of pixel intensity. In this paper, an improved SLIC algorithm for SAR images is proposed. This algorithm exploits the likelihood information of SAR image pixel clusters. Specifically, a local clustering scheme combining intensity similarity with spatial proximity is proposed. Additionally, for post-processing, a local edge-evolving scheme that combines spatial context and likelihood information is introduced as an alternative to the connected components algorithm. To estimate the likelihood information of SAR image clusters, we incorporated a generalized gamma distribution (GΓD). Finally, the superiority of the proposed algorithm was validated using both simulated and real-world SAR images. PMID:27438840
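The local clustering scheme that combines intensity similarity with spatial proximity can be sketched with the standard SLIC distance. Note that the paper's actual contribution replaces the intensity term with a likelihood under a generalized gamma speckle model, which we do not reproduce here; all parameter values below are illustrative:

```python
import numpy as np

def slic_distance(pixel, center, m=10.0, S=20.0):
    """Combined SLIC dissimilarity: intensity difference plus spatial
    proximity, with compactness weight m and grid interval S (illustrative)."""
    (i1, x1, y1), (i2, x2, y2) = pixel, center
    d_int = abs(i1 - i2)                 # intensity similarity term
    d_sp = np.hypot(x1 - x2, y1 - y2)    # spatial proximity term
    return np.hypot(d_int, (m / S) * d_sp)

# assign a pixel (intensity, x, y) to the nearer of two cluster centers
px = (0.50, 5.0, 5.0)
centers = [(0.48, 0.0, 0.0), (0.90, 6.0, 6.0)]
label = int(np.argmin([slic_distance(px, c) for c in centers]))
```

The compactness weight m trades off spatial regularity against intensity homogeneity; larger m yields more grid-like superpixels, which matters for speckled SAR data where intensity alone is unreliable.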
Likelihood analysis of earthquake focal mechanism distributions
NASA Astrophysics Data System (ADS)
Kagan, Yan Y.; Jackson, David D.
2015-06-01
In our paper published earlier we discussed forecasts of earthquake focal mechanism and ways to test the forecast efficiency. Several verification methods were proposed, but they were based on ad hoc, empirical assumptions, thus their performance is questionable. We apply a conventional likelihood method to measure the skill of earthquake focal mechanism orientation forecasts. The advantage of such an approach is that earthquake rate prediction can be adequately combined with focal mechanism forecast, if both are based on the likelihood scores, resulting in a general forecast optimization. We measure the difference between two double-couple sources as the minimum rotation angle that transforms one into the other. We measure the uncertainty of a focal mechanism forecast (the variability), and the difference between observed and forecasted orientations (the prediction error), in terms of these minimum rotation angles. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random (or equally probable). For 3-D rotation the random rotation angle distribution is not uniform. To better understand the resulting complexities, we calculate the information (likelihood) score for two theoretical rotational distributions (Cauchy and von Mises-Fisher), which are used to approximate earthquake source orientation pattern. We then calculate the likelihood score for earthquake source forecasts and for their validation by future seismicity data. Several issues need to be explored when analyzing observational results: their dependence on forecast and data resolution, internal dependence of scores on forecasted angle and random variability of likelihood scores. Here, we propose a simple tentative solution but extensive theoretical and statistical analysis is needed.
Comparing Methods for Dynamic Airspace Configuration
NASA Technical Reports Server (NTRS)
Zelinski, Shannon; Lai, Chok Fung
2011-01-01
This paper compares airspace design solutions for dynamically reconfiguring airspace in response to nominal daily traffic volume fluctuation. Airspace designs from seven algorithmic methods and a representation of current day operations in Kansas City Center were simulated with two times today's demand traffic. A three-configuration scenario was used to represent current day operations. Algorithms used projected unimpeded flight tracks to design initial 24-hour plans to switch between three configurations at predetermined reconfiguration times. At each reconfiguration time, algorithms used updated projected flight tracks to update the subsequent planned configurations. Compared to the baseline, most airspace design methods reduced delay and increased reconfiguration complexity, with similar traffic pattern complexity results. Design updates enabled several methods to reduce the delay of their original designs by as much as half. Freeform design methods reduced delay and increased reconfiguration complexity the most.
B-spline Method in Fluid Dynamics
NASA Technical Reports Server (NTRS)
Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)
2001-01-01
B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide a straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulation, and treatment of pressure oscillations in Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.
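The compact support and partition-of-unity properties mentioned above follow from the Cox-de Boor recursion, sketched here in a textbook formulation (not the paper's implementation):

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, t, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

# partition of unity on a uniform knot vector, with compact support:
# only p+1 bases are nonzero at any t, yet they sum to one
knots = [0, 1, 2, 3, 4, 5, 6, 7]
p = 2
t = 3.5
total = sum(bspline_basis(i, p, t, knots) for i in range(len(knots) - p - 1))
```

Each basis function of degree p is nonzero on only p+1 knot spans, which is what gives B-spline schemes their banded, sparse linear systems.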
Implicit integration methods for dislocation dynamics
NASA Astrophysics Data System (ADS)
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; Hommes, G.; Aubry, S.; Arsenlis, A.
2015-03-01
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high-order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
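The structure of such a solver, an implicit step whose inner nonlinear equation is resolved with Newton's method, can be sketched for the second-order trapezoidal integrator the paper uses as its baseline. This is a scalar toy problem, not the dislocation-dynamics code:

```python
def newton_scalar(g, dg, x0, tol=1e-12, maxit=50):
    """Newton's method for a scalar nonlinear equation g(x) = 0."""
    x = x0
    for _ in range(maxit):
        dx = g(x) / dg(x)
        x -= dx
        if abs(dx) < tol:
            break
    return x

def trapezoidal_step(f, df, t, y, h):
    """One implicit (second-order) trapezoidal step; each step requires a
    nonlinear solve, here done with Newton's method."""
    g = lambda z: z - y - 0.5 * h * (f(t, y) + f(t + h, z))
    dg = lambda z: 1.0 - 0.5 * h * df(t + h, z)
    return newton_scalar(g, dg, y)

# stiff decay y' = -50*y, y(0) = 1: implicit stepping stays stable at h = 0.1,
# where explicit Euler (amplification factor 1 - 50*h = -4) would blow up
f = lambda t, y: -50.0 * y
df = lambda t, y: -50.0
y, t, h = 1.0, 0.0, 0.1
for _ in range(20):
    y = trapezoidal_step(f, df, t, y, h)
    t += h
```

Higher-order implicit Runge-Kutta methods replace the single trapezoidal stage with several coupled stages, each again requiring the kind of nonlinear solve shown here, which is why the choice between accelerated fixed point and Newton iterations matters for run time.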
Integration based profile likelihood calculation for PDE constrained parameter estimation problems
NASA Astrophysics Data System (ADS)
Boiger, R.; Hasenauer, J.; Hroß, S.; Kaltenbacher, B.
2016-12-01
Partial differential equation (PDE) models are widely used in engineering and the natural sciences to describe spatio-temporal processes. The parameters of the considered processes are often unknown and have to be estimated from experimental data. Due to partial observations and measurement noise, these parameter estimates are subject to uncertainty. This uncertainty can be assessed using profile likelihoods, a reliable but computationally intensive approach. In this paper, we present the integration based approach for profile likelihood calculation developed by Chen and Jennrich (2002 J. Comput. Graph. Stat. 11 714-32) and adapt it to inverse problems with PDE constraints. While existing methods for profile likelihood calculation in parameter estimation problems with PDE constraints rely on repeated optimization, the proposed approach exploits a dynamical system evolving along the likelihood profile. We derive the dynamical system for the unreduced estimation problem, prove convergence and study the properties of the integration based approach for the PDE case. To evaluate the proposed method, we compare it with state-of-the-art algorithms for a simple reaction-diffusion model of a cellular patterning process. We observe good accuracy of the method as well as a significant speed-up compared to established methods. Integration based profile calculation facilitates rigorous uncertainty analysis for computationally demanding parameter estimation problems with PDE constraints.
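The core idea, replacing repeated optimization with a dynamical system that evolves along the likelihood profile, can be sketched on a toy quadratic log-likelihood (the function, step size, and finite-difference scheme are all hypothetical, not the paper's PDE setting). The implicit function theorem gives db/da = -L_ab/L_bb for the nuisance parameter b along the profile of a:

```python
def loglik(a, b):
    # Hypothetical quadratic log-likelihood standing in for the
    # PDE-constrained objective; its exact profile is b*(a) = 2 - (a - 1)/4
    return -((a - 1)**2 + (a - 1) * (b - 2) + 2 * (b - 2)**2)

def hess_terms(f, a, b, h=1e-5):
    # central finite differences for the mixed and nuisance second derivatives
    Lab = (f(a+h, b+h) - f(a+h, b-h) - f(a-h, b+h) + f(a-h, b-h)) / (4*h*h)
    Lbb = (f(a, b+h) - 2*f(a, b) + f(a, b-h)) / (h*h)
    return Lab, Lbb

a, b = 1.0, 2.0          # start at the joint maximum
da = 0.01
for _ in range(100):     # trace the profile out to a = 2 by Euler steps
    Lab, Lbb = hess_terms(loglik, a, b)
    b += -(Lab / Lbb) * da   # implicit-function ODE along the profile
    a += da
# repeated optimization would give the same endpoint, b*(2) = 1.75
```

No inner optimization is performed after the starting point; the nuisance parameter is carried along by integrating the ODE, which is the source of the speed-up the abstract reports.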
Factors Influencing Likelihood of Voice Therapy Attendance.
Misono, Stephanie; Marmor, Schelomo; Roy, Nelson; Mau, Ted; Cohen, Seth M
2017-03-01
Objective To identify factors associated with the likelihood of attending voice therapy among patients referred for it in the CHEER (Creating Healthcare Excellence through Education and Research) practice-based research network infrastructure. Study Design Prospectively enrolled cross-sectional study. Setting CHEER network of community and academic sites. Methods Data were collected on patient-reported demographics, voice-related diagnoses, voice-related handicap (Voice Handicap Index-10), likelihood of attending voice therapy (VT), and opinions on factors influencing likelihood of attending VT. The relationships between patient characteristics/opinions and likelihood of attending VT were investigated. Results A total of 170 patients with various voice-related diagnoses reported receiving a recommendation for VT. Of those, 85% indicated that they were likely to attend it, regardless of voice-related handicap severity. The most common factors influencing likelihood of VT attendance were insurance/copay, relief that it was not cancer, and travel. Those who were not likely to attend VT identified, as important factors, unclear potential improvement, not understanding the purpose of therapy, and concern that it would be too hard. In multivariate analysis, factors associated with greater likelihood of attending VT included shorter travel distance, age (40-59 years), and being seen in an academic practice. Conclusions Most patients reported plans to attend VT as recommended. Patients who intended to attend VT reported different considerations in their decision making from those who did not plan to attend. These findings may inform patient counseling and efforts to increase access to voice care.
Optimization of dynamic systems using collocation methods
NASA Astrophysics Data System (ADS)
Holden, Michael Eric
The time-based simulation is an important tool for the engineer. Often a time-domain simulation is the most expedient to construct, the most capable of handling complex modeling issues, or the most understandable with an engineer's physical intuition. Aeroelastic systems, for example, are often most easily solved with a nonlinear time-based approach to allow the use of high fidelity models. Simulations of automatic flight control systems can also be easier to model in the time domain, especially when nonlinearities are present. Collocation is an optimization method for systems that incorporate a time-domain simulation. Instead of integrating the equations of motion for each design iteration, the optimizer iteratively solves the simulation as it finds the optimal design. This forms a smooth, well-posed, sparse optimization problem, transforming the numerical integration's sequential calculation into a set of constraints that can be evaluated in any order, or even in parallel. The collocation method used in this thesis has been improved from existing techniques in several ways, in particular with a very simple and computationally inexpensive method of applying dynamic constraints, such as damping, that are more traditionally calculated with linear models in the frequency domain. This thesis applies the collocation method to a range of aircraft design problems, from minimizing the weight of a wing with a flutter constraint, to gain-scheduling the stability augmentation system of a small-scale flight control testbed, to aeroservoelastic design of a large aircraft concept. Collocation methods have not been applied to aeroelastic simulations in the past, although the combination of nonlinear aerodynamic analyses with structural dynamics and stability constraints is well-suited to collocation. The results prove the collocation method's worth as a tool for aircraft design, particularly when applied to the multidisciplinary numerical models used today.
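The collocation idea, turning sequential numerical integration into a set of simultaneous defect constraints, can be sketched for a linear test ODE, where the constraints assemble into one linear system (the ODE, mesh, and horizon are illustrative, not from the thesis):

```python
import numpy as np

# Trapezoidal collocation for y' = -y, y(0) = 1 on [0, 2].
# Rather than stepping sequentially, every defect constraint
#   y[k+1] - y[k] - (h/2) * (-y[k] - y[k+1]) = 0
# becomes one row of a single system; rows can be assembled in any order.
n, T = 40, 2.0
h = T / n
A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
A[0, 0], b[0] = 1.0, 1.0              # initial condition row
for k in range(n):
    A[k + 1, k] = -(1.0 - h / 2.0)    # defect constraint for interval k
    A[k + 1, k + 1] = 1.0 + h / 2.0
y = np.linalg.solve(A, b)
# y[-1] approximates exp(-2)
```

In an actual design problem these defect rows would appear as equality constraints in the optimizer, alongside the design variables, rather than being solved on their own.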
New methods for quantum mechanical reaction dynamics
Thompson, Ward Hugh
1996-12-01
Quantum mechanical methods are developed to describe the dynamics of bimolecular chemical reactions. We focus on developing approaches for directly calculating the desired quantity of interest. Methods for the calculation of single matrix elements of the scattering matrix (S-matrix) and initial state-selected reaction probabilities are presented. This is accomplished by the use of absorbing boundary conditions (ABC) to obtain a localized (L^{2}) representation of the outgoing wave scattering Green's function. This approach enables the efficient calculation of only a single column of the S-matrix with a proportionate savings in effort over the calculation of the entire S-matrix. Applying this method to the calculation of the initial (or final) state-selected reaction probability, a more averaged quantity, requires even less effort than the state-to-state S-matrix elements. It is shown how the same representation of the Green's function can be effectively applied to the calculation of negative ion photodetachment intensities. Photodetachment spectroscopy of the anion ABC^{-} can be a very useful method for obtaining detailed information about the neutral ABC potential energy surface, particularly if the ABC^{-} geometry is similar to the transition state of the neutral ABC. Total and arrangement-selected photodetachment spectra are calculated for the H_{3}O^{-} system, providing information about the potential energy surface for the OH + H_{2} reaction when compared with experimental results. Finally, we present methods for the direct calculation of the thermal rate constant from the flux-position and flux-flux correlation functions. The spirit of transition state theory is invoked by concentrating on the short time dynamics in the area around the transition state that determine reactivity. These methods are made efficient by evaluating the required quantum mechanical trace in the basis of eigenstates of the
Schwarz method for earthquake source dynamics
Badea, Lori; Ionescu, Ioan R.; Wolf, Sylvie
2008-04-01
Dynamic faulting under slip-dependent friction in a linear elastic domain (in-plane and 3D configurations) is considered. The use of an implicit time-stepping scheme (Newmark method) allows much larger values of the time step than the critical CFL time step, and higher accuracy to handle the non-smoothness of the interface constitutive law (slip weakening friction). The finite element form of the quasi-variational inequality is solved by a Schwarz domain decomposition method, by separating the inner nodes of the domain from the nodes on the fault. In this way, the quasi-variational inequality splits into two subproblems. The first one is a large linear system of equations, and its unknowns are related to the mesh nodes of the first subdomain (i.e. lying inside the domain). The unknowns of the second subproblem are the degrees of freedom of the mesh nodes of the second subdomain (i.e. lying on the domain boundary where the conditions of contact and friction are imposed). This nonlinear subproblem is solved by the same Schwarz algorithm, leading to some local nonlinear subproblems of a very small size. Numerical experiments are performed to illustrate convergence in time and space, instability capturing, energy dissipation and the influence of normal stress variations. We have used the proposed numerical method to compute source dynamics phenomena on complex and realistic 2D fault models (branched fault systems).
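A minimal 1D analogue of the Schwarz idea, alternating Dirichlet solves on two overlapping subdomains that exchange boundary data, assuming a simple Poisson problem in place of the paper's elastodynamic fault model:

```python
import numpy as np

# Alternating Schwarz for -u'' = 1 on (0, 1), u(0) = u(1) = 0, split into
# two overlapping subdomains (a toy stand-in for the interior/fault split).
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)
u = np.zeros(n)                  # initial guess; boundary values stay 0
exact = x * (1.0 - x) / 2.0      # analytic solution

def solve_patch(u, lo, hi):
    # Dirichlet solve of the centered-difference system on nodes lo..hi,
    # using the current values of u at lo-1 and hi+1 as boundary data
    m = hi - lo + 1
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    rhs = f[lo:hi + 1].copy()
    rhs[0] += u[lo - 1] / h**2
    rhs[-1] += u[hi + 1] / h**2
    u[lo:hi + 1] = np.linalg.solve(A, rhs)

for _ in range(30):              # Schwarz sweeps
    solve_patch(u, 1, 60)        # subdomain 1: nodes 1..60
    solve_patch(u, 41, n - 2)    # subdomain 2: nodes 41..99 (overlap 41..60)
```

The iteration contracts geometrically at a rate set by the overlap width; each patch solve here is a small independent system, which is what makes the decomposition attractive.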
Dynamic data filtering system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-04-29
A computer-implemented dynamic data filtering system and method that selectively chooses operating data of a monitored asset to modify or expand the learned scope of an empirical model of normal operation, while simultaneously rejecting operating data indicative of excessive degradation or impending failure of the monitored asset. The selectively chosen data are then used to adaptively recalibrate the empirical model to more accurately monitor aging or operating-condition changes of the asset.
Direct anharmonic correction method by molecular dynamics
NASA Astrophysics Data System (ADS)
Liu, Zhong-Li; Li, Rui; Zhang, Xiu-Lu; Qu, Nuo; Cai, Ling-Cang
2017-04-01
The quick calculation of accurate anharmonic effects of lattice vibrations is crucial to the calculation of thermodynamic properties, the construction of multi-phase diagrams and equations of state of materials, and the theoretical design of new materials. In this paper, we propose a direct free energy interpolation (DFEI) method based on the temperature-dependent phonon density of states (TD-PDOS) reduced from molecular dynamics simulations. Using the DFEI method, after anharmonic free energy corrections we reproduced the thermal expansion coefficients, the specific heat, the thermal pressure, the isothermal bulk modulus, and the Hugoniot P-V-T relationships of Cu easily and accurately. Extensive tests on other materials, including metals, alloys, semiconductors and insulators, also show that the DFEI method can easily uncover the residual anharmonicity that the quasi-harmonic approximation (QHA) omits. The DFEI method is thus a very efficient way to conduct anharmonic corrections beyond the QHA; more importantly, it is much more straightforward and easier than previous anharmonic methods.
Concurrent DSMC Method Using Dynamic Domain Decomposition
NASA Astrophysics Data System (ADS)
Wu, J.-S.; Tseng, K.-C.
2003-05-01
In the current study, a parallel two-dimensional direct simulation Monte Carlo method is reported, which incorporates a multi-level graph-partitioning technique to dynamically decompose the computational domain. The current DSMC method is implemented on an unstructured mesh using a particle ray-tracing technique, which takes advantage of the cell connectivity information. The standard Message Passing Interface (MPI) is used to communicate data between processors. In addition, different strategies for applying the Stop at Rise (SAR) [7] scheme are utilized to determine when to adapt the workload distribution among processors. A corresponding analysis of parallel performance is reported using the results of a high-speed driven cavity flow on IBM-SP2 parallel machines (distributed memory, 160 MHz CPU, 256 MB RAM each) with up to 64 processors. Small, medium and large problems, based on the number of particles and cells, are simulated. Results, applying the SAR scheme every two time steps, show that parallel efficiency is 57%, 90% and 107% for the small, medium and large problems, respectively, at 64 processors. In general, the benefit of applying the SAR scheme at larger periods decreases gradually with increasing problem size. A detailed time analysis shows that, with dynamic load balancing, the degree of imbalance levels off very rapidly at a relatively low value (30%-40%) as the number of processors increases, whereas without dynamic load balancing it grows with the number of processors to a value 5-6 times larger. Finally, the completed code is applied to compute a near-continuum gas flow to demonstrate its superior computational capability.
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1991-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
Methods and systems for combustion dynamics reduction
Kraemer, Gilbert Otto; Varatharajan, Balachandar; Srinivasan, Shiva; Lynch, John Joseph; Yilmaz, Ertan; Kim, Kwanwoo; Lacy, Benjamin; Crothers, Sarah; Singh, Kapil Kumar
2009-08-25
Methods and systems for combustion dynamics reduction are provided. A combustion chamber may include a first premixer and a second premixer. Each premixer may include at least one fuel injector, at least one air inlet duct, and at least one vane pack for at least partially mixing the air from the air inlet duct or ducts and fuel from the fuel injector or injectors. Each vane pack may include a plurality of fuel orifices through which at least a portion of the fuel and at least a portion of the air may pass. The vane pack or packs of the first premixer may be positioned at a first axial position and the vane pack or packs of the second premixer may be positioned at a second axial position axially staggered with respect to the first axial position.
NMR Methods to Study Dynamic Allostery
Grutsch, Sarina; Brüschweiler, Sven; Tollinger, Martin
2016-01-01
Nuclear magnetic resonance (NMR) spectroscopy provides a unique toolbox of experimental probes for studying dynamic processes on a wide range of timescales, ranging from picoseconds to milliseconds and beyond. Along with NMR hardware developments, recent methodological advancements have enabled the characterization of allosteric proteins at unprecedented detail, revealing intriguing aspects of allosteric mechanisms and increasing the proportion of the conformational ensemble that can be observed by experiment. Here, we present an overview of NMR spectroscopic methods for characterizing equilibrium fluctuations in free and bound states of allosteric proteins that have been most influential in the field. By combining NMR experimental approaches with molecular simulations, atomistic-level descriptions of the mechanisms by which allosteric phenomena take place are now within reach. PMID:26964042
NASA Astrophysics Data System (ADS)
Shang, Yilun
2016-08-01
How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.
Applications of Langevin and Molecular Dynamics methods
NASA Astrophysics Data System (ADS)
Lomdahl, P. S.
Computer simulation of complex nonlinear and disordered phenomena from materials science is rapidly becoming an active new area serving as a guide for experiments and for testing of theoretical concepts. This is especially true when novel massively parallel computer systems and techniques are used on these problems. In particular, the Langevin dynamics simulation technique has proven useful in situations where the time evolution of a system in contact with a heat bath is to be studied. The traditional way to study systems in contact with a heat bath has been via the Monte Carlo method. While this method has indeed been used successfully in many applications, it has difficulty addressing true dynamical questions. Large systems of coupled stochastic ODE's (or Langevin equations) are commonly the end result of a theoretical description of higher dimensional nonlinear systems in contact with a heat bath. The coupling is often local in nature, because it reflects local interactions formulated on a lattice; the lattice, for example, represents the underlying discreteness of a substrate of atoms or discrete k-values in Fourier space. The fundamental unit of parallelism thus has a direct analog in the physical system the authors are interested in. In these lecture notes the authors illustrate the use of Langevin stochastic simulation techniques on a number of nonlinear problems from materials science and condensed matter physics that have attracted attention in recent years. First, the authors review the idea behind the fluctuation-dissipation theorem, which forms the basis for the numerical Langevin stochastic simulation scheme. The authors then show applications of the technique to various problems from condensed matter and materials science.
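A minimal Langevin sketch in the spirit of these notes: an overdamped Euler-Maruyama integrator for a harmonic potential, with the noise amplitude set by the fluctuation-dissipation theorem so the sampled variance approaches kT. The potential, time step, and run length are illustrative choices, not taken from the lectures:

```python
import numpy as np

# Overdamped Langevin sampling of U(x) = x**2 / 2 (unit friction, kT = 1).
# The noise amplitude sqrt(2*kT*dt) is fixed by the fluctuation-dissipation
# theorem; the stationary variance of x should then approach kT.
rng = np.random.default_rng(0)
kT, dt, nsteps, burn = 1.0, 0.01, 200_000, 20_000
x = 0.0
samples = []
for i in range(nsteps):
    force = -x                                   # -dU/dx for the harmonic well
    x += force * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
    if i >= burn:
        samples.append(x)
var = float(np.var(samples))                     # ~ kT by equipartition
```

Unlike a Monte Carlo chain, the trajectory itself carries dynamical information (correlation times, relaxation), which is the advantage the notes emphasize.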
Likelihood reinstates Archaeopteryx as a primitive bird.
Lee, Michael S Y; Worthy, Trevor H
2012-04-23
The widespread view that Archaeopteryx was a primitive (basal) bird has been recently challenged by a comprehensive phylogenetic analysis that placed Archaeopteryx with deinonychosaurian theropods. The new phylogeny suggested that typical bird flight (powered by the front limbs only) either evolved at least twice, or was lost/modified in some deinonychosaurs. However, this parsimony-based result was acknowledged to be weakly supported. Maximum-likelihood and related Bayesian methods applied to the same dataset yield a different and more orthodox result: Archaeopteryx is restored as a basal bird with bootstrap frequency of 73 per cent and posterior probability of 1. These results are consistent with a single origin of typical (forelimb-powered) bird flight. The Archaeopteryx-deinonychosaur clade retrieved by parsimony is supported by more characters (which are on average more homoplasious), whereas the Archaeopteryx-bird clade retrieved by likelihood-based methods is supported by fewer characters (but on average less homoplasious). Both positions for Archaeopteryx remain plausible, highlighting the hazy boundary between birds and advanced theropods. These results also suggest that likelihood-based methods (in addition to parsimony) can be useful in morphological phylogenetics.
Semiclassical methods in chemical reaction dynamics
Keshavamurthy, Srihari
1994-12-01
Semiclassical approximations, simple as well as rigorous, are formulated in order to be able to describe gas phase chemical reactions in large systems. We formulate a simple but accurate semiclassical model for incorporating multidimensional tunneling in classical trajectory simulations. This model is based on the existence of locally conserved actions around the saddle point region on a multidimensional potential energy surface. Using classical perturbation theory and monitoring the imaginary action as a function of time along a classical trajectory, we calculate state-specific unimolecular decay rates for a model two dimensional potential with coupling. Results are in good agreement with exact quantum results for the potential over a wide range of coupling constants. We propose a new semiclassical hybrid method to calculate state-to-state S-matrix elements for bimolecular reactive scattering. The accuracy of the Van Vleck-Gutzwiller propagator and the short time dynamics of the system make this method self-consistent and accurate. We also go beyond the stationary phase approximation by doing the resulting integrals exactly (numerically). As a result, classically forbidden probabilities are calculated with purely real time classical trajectories within this approach. Application to the one dimensional Eckart barrier demonstrates the accuracy of this approach. Successful application of the semiclassical hybrid approach to collinear reactive scattering is prevented by the phenomenon of chaotic scattering. The modified Filinov approach to evaluating the integrals is discussed, but application to collinear systems requires a more careful analysis. In three and higher dimensional scattering systems, chaotic scattering is suppressed and hence the accuracy and usefulness of the semiclassical method should be tested for such systems.
NASA Technical Reports Server (NTRS)
Napolitano, Marcello R.
1995-01-01
This report is a compilation of PID (parameter identification) results for both longitudinal and lateral directional analysis that was completed during Fall 1994. It had been established earlier that the maneuvers available for PID containing independent control surface inputs from OBES were not well suited for extracting the cross-coupling static (i.e., C(sub N beta)) or dynamic (i.e., C(sub Npf)) derivatives. This was due to the fact that these maneuvers were designed with the goal of minimizing any lateral directional motion during longitudinal maneuvers and vice versa. This allows for greater simplification in the aerodynamic model as far as coupling between longitudinal and lateral directions is concerned. As a result, efforts were made to reanalyze this data and extract static and dynamic derivatives for the F/A-18 HARV (High Angle of Attack Research Vehicle) without the inclusion of the cross-coupling terms, such that more accurate estimates of classical model terms could be acquired. Four longitudinal flights containing static PID maneuvers were examined. The classical state equations already available in pEst for alphadot, qdot and thetadot were used. Three lateral directional flights of PID static maneuvers were also examined. The classical state equations already available in pEst for betadot, pdot, rdot and phidot were used. Enclosed with this document are the full set of longitudinal and lateral directional parameter estimate plots showing coefficient estimates along with Cramer-Rao bounds. In addition, a representative time history match for each type of maneuver tested at each angle of attack is also enclosed.
Nonparametric Bayes Factors Based On Empirical Likelihood Ratios
Vexler, Albert; Deng, Wei; Wilding, Gregory E.
2012-01-01
Bayes methodology provides posterior distribution functions based on parametric likelihoods adjusted for prior distributions. A distribution-free alternative to the parametric likelihood is use of empirical likelihood (EL) techniques, well known in the context of nonparametric testing of statistical hypotheses. Empirical likelihoods have been shown to exhibit many of the properties of conventional parametric likelihoods. In this article, we propose and examine Bayes factors (BF) methods that are derived via the EL ratio approach. Following Kass & Wasserman [10], we consider Bayes factors type decision rules in the context of standard statistical testing techniques. We show that the asymptotic properties of the proposed procedure are similar to the classical BF’s asymptotic operating characteristics. Although we focus on hypothesis testing, the proposed approach also yields confidence interval estimators of unknown parameters. Monte Carlo simulations were conducted to evaluate the theoretical results as well as to demonstrate the power of the proposed test. PMID:23180904
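A small sketch of the empirical likelihood ratio underlying such tests, for the simplest case of a univariate mean. The Newton solve for the Lagrange multiplier and the synthetic data are illustrative, not from the article:

```python
import numpy as np

def el_ratio_stat(x, mu, iters=50):
    # -2 * log empirical likelihood ratio for the mean mu (asymptotically
    # chi-square with 1 df). The Lagrange multiplier lam solves
    # sum((x - mu) / (1 + lam*(x - mu))) = 0; Newton iteration from lam = 0.
    d = x - mu
    lam = 0.0
    for _ in range(iters):
        g = np.sum(d / (1.0 + lam * d))
        dg = -np.sum(d**2 / (1.0 + lam * d)**2)
        lam -= g / dg
    return float(2.0 * np.sum(np.log1p(lam * d)))

rng = np.random.default_rng(1)
x = rng.normal(0.3, 1.0, size=50)                # illustrative data
stat_at_mean = el_ratio_stat(x, x.mean())        # essentially 0 at the MLE
stat_shifted = el_ratio_stat(x, x.mean() + 0.2)  # grows away from it
```

No parametric family is assumed anywhere: the "likelihood" is built entirely from data-dependent weights, which is what makes the EL ratio a distribution-free ingredient for Bayes factors.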
Factors Associated with Young Adults’ Pregnancy Likelihood
Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan
2014-01-01
OBJECTIVES While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849
Dynamic stiffness method for space frames under distributed harmonic loads
NASA Astrophysics Data System (ADS)
Dumir, P. C.; Saha, D. C.; Sengupta, S.
1992-10-01
An exact dynamic equivalent load vector for space frames subjected to harmonic distributed loads has been derived using the dynamic stiffness approach. A Taylor series expansion of the dynamic equivalent load vector reveals that the static consistent equivalent load vector used in a 12 degree of freedom two-noded finite element for a space frame is just the first term of the series. The dynamic stiffness approach using the exact dynamic equivalent load vector requires discretization of a member subjected to distributed loads into only one element. The results of the dynamic stiffness method are compared with those of the finite element method for illustrative problems.
A Maximum-Likelihood Approach to Force-Field Calibration.
Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam
2015-09-28
A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2
Weibull distribution based on maximum likelihood with interval inspection data
NASA Technical Reports Server (NTRS)
Rheinfurth, M. H.
1985-01-01
The two Weibull parameters are determined by the method of maximum likelihood. The test data used were failures observed at inspection intervals. The application was the reliability analysis of the SSME oxidizer turbine blades.
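A hedged sketch of the interval-inspection likelihood: each failure is known only to lie in an inspection window (L, R], contributing F(R) - F(L) to the likelihood, here maximized by a simple grid search on synthetic data. The data, inspection width, and grid are invented; the original application used SSME turbine blade inspections:

```python
import numpy as np

# Synthetic interval-censored Weibull data (shape beta = 2, scale eta = 100)
rng = np.random.default_rng(2)
beta_true, eta_true = 2.0, 100.0
t = eta_true * rng.weibull(beta_true, size=500)  # latent failure times
left = np.floor(t / 20.0) * 20.0                 # inspections every 20 units
right = left + 20.0

def loglik(beta, eta):
    # Each unit contributes log(F(right) - F(left)) under the Weibull CDF
    F = lambda s: 1.0 - np.exp(-(s / eta) ** beta)
    return np.sum(np.log(F(right) - F(left)))

betas = np.linspace(1.0, 3.0, 81)
etas = np.linspace(60.0, 140.0, 81)
ll = np.array([[loglik(b, e) for e in etas] for b in betas])
ib, ie = np.unravel_index(np.argmax(ll), ll.shape)
beta_hat, eta_hat = betas[ib], etas[ie]          # near the true (2, 100)
```

A production implementation would use a gradient-based optimizer rather than a grid, but the interval likelihood itself is the same.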
Properties of maximum likelihood male fertility estimation in plant populations.
Morgan, M T
1998-01-01
Computer simulations are used to evaluate maximum likelihood methods for inferring male fertility in plant populations. The maximum likelihood method can provide substantial power to characterize male fertilities at the population level. Results emphasize, however, the importance of adequate experimental design and evaluation of fertility estimates, as well as limitations to inference (e.g., about the variance in male fertility or the correlation between fertility and phenotypic trait value) that can be reasonably drawn. PMID:9611217
A Dynamic Management Method for Fast Manufacturing Resource Reconfiguration
NASA Astrophysics Data System (ADS)
Yuan, Zhiye
To reconfigure manufacturing resources quickly and optimally, a dynamic management method for fast manufacturing resource reconfiguration based on holons was proposed. In this method, a dynamic management structure for fast manufacturing resource reconfiguration was established based on holons. Moreover, the cooperation relationships among holons for fast manufacturing resource reconfiguration and the holon-based manufacturing information cooperation mechanism were constructed. Finally, the simulation system of the dynamic management method for fast manufacturing resource reconfiguration was demonstrated and validated with Flexsim software. The results show that the proposed method can dynamically and optimally reconfigure manufacturing resources and can effectively improve the efficiency of manufacturing processes.
Approximate likelihood for large irregularly spaced spatial data
Fuentes, Montserrat
2008-01-01
Likelihood approaches for large irregularly spaced spatial datasets are often very difficult, if not infeasible, to implement due to computational limitations. Even when we can assume normality, exact calculation of the likelihood for a Gaussian spatial process observed at n locations requires O(n^3) operations. We present a version of Whittle's approximation to the Gaussian log likelihood for spatial regular lattices with missing values and for irregularly spaced datasets. This method requires O(n log^2 n) operations and does not involve calculating determinants. We present simulations and theoretical results to show the benefits and the performance of the spatial likelihood approximation method presented here for spatial irregularly spaced datasets and lattices with missing values. We apply these methods to estimate the spatial structure of sea surface temperatures (SST) using satellite data with missing values. PMID:19079638
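The idea behind Whittle's approximation can be illustrated in its simpler one-dimensional, regularly spaced form (the paper's spatial-lattice and missing-data machinery is not reproduced here): the approximate log likelihood -Σ_k [log f(ω_k) + I(ω_k)/f(ω_k)] replaces determinants and matrix solves with a periodogram. The AR(1) model and all constants below are our own assumptions for the sketch.

```python
import cmath
import math
import random

def periodogram(x):
    """Naive DFT periodogram I(w_k) = |sum_t x_t e^{-i w_k t}|^2 / (2*pi*n)."""
    n = len(x)
    I = []
    for k in range(1, n // 2):              # skip frequency zero
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        I.append(abs(s) ** 2 / (2 * math.pi * n))
    return I

def ar1_spectral_density(w, phi, sigma2):
    return sigma2 / (2 * math.pi * (1.0 - 2.0 * phi * math.cos(w) + phi * phi))

def whittle_loglik(phi, n, I):
    """Whittle approximation: -sum_k ( log f(w_k) + I(w_k) / f(w_k) )."""
    ll = 0.0
    for k, Ik in enumerate(I, start=1):
        w = 2 * math.pi * k / n
        f = ar1_spectral_density(w, phi, 1.0)
        ll -= math.log(f) + Ik / f
    return ll

random.seed(1)
n, phi_true = 128, 0.6
x = [0.0]
for _ in range(n - 1):
    x.append(phi_true * x[-1] + random.gauss(0.0, 1.0))

I = periodogram(x)  # computed once; each likelihood evaluation is then cheap
phi_hat = max((0.01 * i for i in range(-95, 96)),
              key=lambda p: whittle_loglik(p, n, I))
print("Whittle estimate of phi:", phi_hat)
```

In practice the periodogram would be computed with an FFT, which is what gives Whittle-type methods their O(n log n)-class cost; the naive DFT here keeps the sketch dependency-free.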
Dynamic Programming Method for Impulsive Control Problems
ERIC Educational Resources Information Center
Balkew, Teshome Mogessie
2015-01-01
In many control systems changes in the dynamics occur unexpectedly or are applied by a controller as needed. The time at which a controller implements changes is not necessarily known a priori. For example, many manufacturing systems and flight operations have complicated control systems, and changes in the control systems may be automatically…
System and Method for Dynamic Aeroelastic Control
NASA Technical Reports Server (NTRS)
Suh, Peter M. (Inventor)
2015-01-01
The present invention proposes a hardware and software architecture for dynamic modal structural monitoring that uses a robust modal filter to monitor a potentially very large-scale array of sensors in real time while remaining tolerant of asymmetric sensor noise and sensor failures, in order to achieve aircraft performance optimization such as minimizing aircraft flutter and drag and maximizing fuel efficiency.
Section 9: Ground Water - Likelihood of Release
HRS training. The ground water pathway likelihood of release factor category reflects the likelihood that there has been, or will be, a release of hazardous substances in any of the aquifers underlying the site.
Recovering Velocity Distributions Via Penalized Likelihood
NASA Astrophysics Data System (ADS)
Merritt, David
1997-07-01
Line-of-sight velocity distributions are crucial for unravelling the dynamics of hot stellar systems. We present a new formalism based on penalized likelihood for deriving such distributions from kinematical data, and evaluate the performance of two algorithms that extract N(V) from absorption-line spectra and from sets of individual velocities. Both algorithms are superior to existing ones in that the solutions are nearly unbiased even when the data are so poor that a great deal of smoothing is required. In addition, the discrete-velocity algorithm is able to remove a known distribution of measurement errors from the estimate of N(V). The formalism is used to recover the velocity distribution of stars in five fields near the center of the globular cluster omega Centauri.
CosmoSlik: Cosmology sampler of likelihoods
NASA Astrophysics Data System (ADS)
Millea, Marius
2017-01-01
CosmoSlik quickly puts together, runs, and analyzes an MCMC chain for analysis of cosmological data. It is highly modular and comes with plugins for CAMB (ascl:1102.026), CLASS (ascl:1106.020), the Planck likelihood, the South Pole Telescope likelihood, other cosmological likelihoods, emcee (ascl:1303.002), and more. It offers ease-of-use, flexibility, and modularity.
Improved maximum likelihood reconstruction of complex multi-generational pedigrees.
Sheehan, Nuala A; Bartlett, Mark; Cussens, James
2014-11-01
The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as
Constraint likelihood analysis for a network of gravitational wave detectors
Klimenko, S.; Rakhmanov, M.; Mitselmakher, G.; Mohanty, S.
2005-12-15
We propose a coherent method for detection and reconstruction of gravitational wave signals with a network of interferometric detectors. The method is derived by using the likelihood ratio functional for unknown signal waveforms. In the likelihood analysis, the global maximum of the likelihood ratio over the space of waveforms is used as the detection statistic. We identify a problem with this approach. In the case of an aligned pair of detectors, the detection statistic depends on the cross correlation between the detectors as expected, but this dependence disappears even for infinitesimally small misalignments. We solve the problem by applying constraints on the likelihood functional and obtain a new class of statistics. The resulting method can be applied to data from a network consisting of any number of detectors with arbitrary detector orientations. The method allows reconstruction of the source coordinates and the waveforms of two polarization components of a gravitational wave. We study the performance of the method with numerical simulations and find the reconstruction of the source coordinates to be more accurate than in the standard likelihood method.
LIKEDM: Likelihood calculator of dark matter detection
NASA Astrophysics Data System (ADS)
Huang, Xiaoyuan; Tsai, Yue-Lin Sming; Yuan, Qiang
2017-04-01
With the large progress in searches for dark matter (DM) particles with indirect and direct methods, we develop a numerical tool that enables fast calculations of the likelihoods of specified DM particle models given a number of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), γ-rays from the Fermi space telescope, and underground direct detection experiments. The purpose of this tool - LIKEDM, likelihood calculator for dark matter detection - is to bridge the gap between a particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi γ-ray data, as well as the DM velocity distribution and the nuclear form factor, have been dealt with in the code. We release the first version (v1.0) focusing on the constraints from indirect detection of DM with charged cosmic and gamma rays. Direct detection will be implemented in the next version. This manual describes the framework, usage, and related physics of the code.
Parametric likelihood inference for interval censored competing risks data.
Hudgens, Michael G; Li, Chenxi; Fine, Jason P
2014-03-01
Parametric estimation of the cumulative incidence function (CIF) is considered for competing risks data subject to interval censoring. Existing parametric models of the CIF for right censored competing risks data are adapted to the general case of interval censoring. Maximum likelihood estimators for the CIF are considered under the assumed models, extending earlier work on nonparametric estimation. A simple naive likelihood estimator is also considered that utilizes only part of the observed data. The naive estimator enables separate estimation of models for each cause, unlike full maximum likelihood in which all models are fit simultaneously. The naive likelihood is shown to be valid under mixed case interval censoring, but not under an independent inspection process model, in contrast with full maximum likelihood which is valid under both interval censoring models. In simulations, the naive estimator is shown to perform well and yield comparable efficiency to the full likelihood estimator in some settings. The methods are applied to data from a large, recent randomized clinical trial for the prevention of mother-to-child transmission of HIV.
Dynamic decoupling nonlinear control method for aircraft gust alleviation
NASA Astrophysics Data System (ADS)
Lv, Yang; Wan, Xiaopeng; Li, Aijun
2008-10-01
A dynamic decoupling nonlinear control method for MIMO systems is presented in this paper. The dynamic inversion method is used to decouple the multivariable system. The nonlinear control method is used to overcome the poor decoupling performance that arises when the system model is inaccurate. The nonlinear control method has a correcting function and is expressed in analytic form, which makes it easy to adjust the parameters of the controller and to optimize the design of the control system. The method is used to design the vertical transition mode of an active control aircraft for gust alleviation. Simulation results show that the designed vertical transition mode improves the gust alleviation effect by about 34% compared with the normal aircraft.
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.
2017-01-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
Prediction of Dynamic Stall Characteristics Using Advanced Nonlinear Panel Methods,
This paper presents preliminary results of work in which a surface singularity panel method is being extended for modelling the dynamic interaction between a separated wake and a surface undergoing an unsteady motion. The method combines the capabilities of an unsteady time-stepping code and a technique for modelling extensive separation using free vortex sheets. Routines are developed for treating the dynamic interaction between the separated
Parameter estimation in X-ray astronomy using maximum likelihood
NASA Technical Reports Server (NTRS)
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
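A minimal illustration of the Poisson-likelihood idea (a toy one-parameter model of our own, not the nonlinear spectral functions of the paper): for counts n_i with model m_i = A·s_i, minimizing the Poisson ML fit statistic C = 2 Σ (m_i - n_i ln m_i) gives the closed-form amplitude Â = Σn_i / Σs_i, with none of the large-count Gaussian approximation that minimum chi-squared relies on.

```python
import math

def cash_statistic(A, shape, counts):
    """C = 2 * sum( m_i - n_i * ln m_i ): the Poisson maximum-likelihood
    fit statistic, valid even for very low counts per channel."""
    c = 0.0
    for s, n in zip(shape, counts):
        m = A * s
        c += 2.0 * (m - (n * math.log(m) if n > 0 else 0.0))
    return c

shape  = [0.5, 1.0, 2.0, 1.5, 0.8]   # fixed spectral shape per channel (assumed)
counts = [1, 3, 5, 2, 1]             # low Poisson counts per channel

# Closed-form ML amplitude for this one-parameter model: A = sum(n) / sum(s),
# obtained by setting dC/dA = 2 * sum(s_i - n_i / A) = 0.
A_closed = sum(counts) / sum(shape)

# The same value minimizes the Cash statistic, found here by a fine grid scan.
A_grid = min((0.001 * i for i in range(1, 10000)),
             key=lambda A: cash_statistic(A, shape, counts))
print(A_closed, A_grid)
```

With several free parameters, as in real spectral fits, the closed form disappears and the statistic is minimized numerically, but the low-count validity argument is the same.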
Non-Concave Penalized Likelihood with NP-Dimensionality
Fan, Jianqing; Lv, Jinchi
2011-01-01
Penalized likelihood methods are fundamental to ultra-high dimensional variable selection. How high dimensionality such methods can handle remains largely unknown. In this paper, we show that in the context of generalized linear models, such methods possess model selection consistency with oracle properties even for dimensionality of Non-Polynomial (NP) order of sample size, for a class of penalized likelihood approaches using folded-concave penalty functions, which were introduced to ameliorate the bias problems of convex penalty functions. This fills a long-standing gap in the literature where the dimensionality is allowed to grow slowly with the sample size. Our results are also applicable to penalized likelihood with the L1-penalty, which is a convex function at the boundary of the class of folded-concave penalty functions under consideration. The coordinate optimization is implemented for finding the solution paths, whose performance is evaluated by a few simulation examples and the real data analysis. PMID:22287795
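The bias-reduction property of folded-concave penalties is easiest to see in the univariate (orthonormal-design) case, where both the lasso and SCAD solutions have closed-form thresholding rules. The rules below follow Fan and Li's SCAD formulation with the conventional a = 3.7; the example values are our own.

```python
def soft_threshold(z, lam):
    """Lasso (L1) univariate solution: shrinks every coefficient by lam."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def scad_threshold(z, lam, a=3.7):
    """SCAD univariate solution (Fan & Li): a folded-concave penalty that
    matches the lasso near zero but leaves large coefficients unbiased."""
    az = abs(z)
    if az <= 2.0 * lam:
        return soft_threshold(z, lam)
    if az <= a * lam:
        return ((a - 1.0) * z - (a * lam if z > 0 else -a * lam)) / (a - 2.0)
    return z  # no shrinkage at all for large signals

# Small signal: SCAD and lasso agree (both shrink toward zero).
print(scad_threshold(1.5, 1.0), soft_threshold(1.5, 1.0))   # 0.5 0.5
# Large signal: lasso still biases the estimate by lam; SCAD returns it untouched.
print(scad_threshold(5.0, 1.0), soft_threshold(5.0, 1.0))   # 5.0 4.0
```

Coordinate optimization of the kind evaluated in the paper applies such a univariate rule cyclically to each coordinate of the penalized likelihood.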
Dynamic characteristics of a WPC—comparison of transfer matrix method and FE method
NASA Astrophysics Data System (ADS)
Chen, Guo-Long; Nie, Wu
2003-12-01
To find the difference in dynamic characteristics between a conventional monohull ship and a wave penetrating catamaran (WPC), a WPC was taken as the object of study; its dynamic characteristics were computed by the transfer matrix method and the finite element method respectively. From the comparison of the natural frequency and mode shape results, it was concluded that the FE method is more suitable for the dynamic characteristics analysis of a WPC. Special features of the dynamic characteristics of the WPC were identified, and some beneficial suggestions were proposed to optimize the strength of a WPC in the design period.
Davtyan, Aram; Dama, James F.; Voth, Gregory A.; Andersen, Hans C.
2015-04-21
Coarse-grained (CG) models of molecular systems, with fewer mechanical degrees of freedom than an all-atom model, are used extensively in chemical physics. It is generally accepted that a coarse-grained model that accurately describes equilibrium structural properties (as a result of having a well constructed CG potential energy function) does not necessarily exhibit appropriate dynamical behavior when simulated using conservative Hamiltonian dynamics for the CG degrees of freedom on the CG potential energy surface. Attempts to develop accurate CG dynamic models usually focus on replacing Hamiltonian motion by stochastic but Markovian dynamics on that surface, such as Langevin or Brownian dynamics. However, depending on the nature of the system and the extent of the coarse-graining, a Markovian dynamics for the CG degrees of freedom may not be appropriate. In this paper, we consider the problem of constructing dynamic CG models within the context of the Multi-Scale Coarse-graining (MS-CG) method of Voth and coworkers. We propose a method of converting a MS-CG model into a dynamic CG model by adding degrees of freedom to it in the form of a small number of fictitious particles that interact with the CG degrees of freedom in simple ways and that are subject to Langevin forces. The dynamic models are members of a class of nonlinear systems interacting with special heat baths that were studied by Zwanzig [J. Stat. Phys. 9, 215 (1973)]. The properties of the fictitious particles can be inferred from analysis of the dynamics of all-atom simulations of the system of interest. This is analogous to the fact that the MS-CG method generates the CG potential from analysis of equilibrium structures observed in all-atom simulation data. The dynamic models generate a non-Markovian dynamics for the CG degrees of freedom, but they can be easily simulated using standard molecular dynamics programs. We present tests of this method on a series of simple examples that demonstrate that
Vestige: Maximum likelihood phylogenetic footprinting
Wakefield, Matthew J; Maxwell, Peter; Huttley, Gavin A
2005-01-01
Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational processes, DNA repair and
Robust Dynamic Multi-objective Vehicle Routing Optimization Method.
Guo, Yi-Nan; Cheng, Jian; Luo, Sha; Gong, Dun-Wei
2017-03-21
For dynamic multi-objective vehicle routing problems, the waiting time of vehicles, the number of serving vehicles, and the total distance of routes are normally considered as the optimization objectives. In addition to the above objectives, this paper focuses on fuel consumption, which leads to environmental pollution and energy consumption. Considering the vehicles' load and the driving distance, a corresponding carbon emission model was built and set as an optimization objective. Dynamic multi-objective vehicle routing problems with hard time windows and randomly appearing dynamic customers were subsequently modeled. In existing planning methods, when a new service demand comes up, a global vehicle routing optimization method is triggered to find the optimal routes for non-served customers, which is time-consuming. Therefore, a robust two-phase dynamic multi-objective vehicle routing method is proposed. Three highlights of the novel method are: (i) After finding optimal robust virtual routes for all customers by adopting multi-objective particle swarm optimization in the first phase, static vehicle routes for static customers are formed by removing all dynamic customers from the robust virtual routes in the next phase. (ii) The dynamically appearing customers are appended to the routes according to their service times and the vehicles' status. Global vehicle routing optimization is triggered only when no suitable locations can be found for dynamic customers. (iii) A metric measuring the algorithm's robustness is given. The statistical results indicate that the routes obtained by the proposed method have better stability and robustness, but may be sub-optimal. Moreover, time-consuming global vehicle routing optimization is avoided as dynamic customers appear.
Likelihood maximization for list-mode emission tomographic image reconstruction.
Byrne, C
2001-10-01
The maximum a posteriori (MAP) Bayesian iterative algorithm using priors that are gamma distributed, due to Lange, Bahn and Little, is extended to include parameter choices that fall outside the gamma distribution model. Special cases of the resulting iterative method include the expectation maximization maximum likelihood (EMML) method based on the Poisson model in emission tomography, as well as algorithms obtained by Parra and Barrett and by Huesman et al. that converge to maximum likelihood and maximum conditional likelihood estimates of radionuclide intensities for list-mode emission tomography. The approach taken here is optimization-theoretic and does not rely on the usual expectation maximization (EM) formalism. Block-iterative variants of the algorithms are presented. A self-contained, elementary proof of convergence of the algorithm is included.
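The EMML update at the heart of these algorithms is a simple multiplicative iteration. The sketch below illustrates it for a small binned (rather than list-mode) toy system of our own construction; the data here admit an exact nonnegative solution, so the reprojection converges to the recorded counts.

```python
# System matrix P[i][j]: probability that an emission from pixel j
# is recorded in detector bin i (hypothetical values).
P = [
    [0.8, 0.1, 0.0],
    [0.2, 0.6, 0.2],
    [0.0, 0.3, 0.8],
]
y = [10.0, 20.0, 30.0]          # recorded counts per detector bin
lam = [1.0, 1.0, 1.0]           # initial nonnegative intensity estimate

def forward(P, lam):
    """Forward projection (P lam)_i."""
    return [sum(P[i][j] * lam[j] for j in range(len(lam))) for i in range(len(P))]

# EMML multiplicative update:
#   lam_j <- lam_j * ( sum_i P_ij * y_i / (P lam)_i ) / ( sum_i P_ij )
col_sums = [sum(P[i][j] for i in range(len(P))) for j in range(len(lam))]
for _ in range(500):
    proj = forward(P, lam)
    lam = [
        lam[j] * sum(P[i][j] * y[i] / proj[i] for i in range(len(P))) / col_sums[j]
        for j in range(len(lam))
    ]

print("estimated intensities:", [round(v, 3) for v in lam])
print("reprojection:", [round(v, 3) for v in forward(P, lam)])
```

The update preserves nonnegativity automatically, which is why it and its block-iterative variants are attractive for emission tomography; the list-mode algorithms discussed in the abstract replace the binned sum over detectors with a sum over recorded events.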
Maximum-likelihood estimation of admixture proportions from genetic data.
Wang, Jinliang
2003-01-01
For an admixed population, an important question is how much genetic contribution comes from each parental population. Several methods have been developed to estimate such admixture proportions, using data on genetic markers sampled from parental and admixed populations. In this study, I propose a likelihood method to estimate jointly the admixture proportions, the genetic drift that occurred to the admixed population and each parental population during the period between the hybridization and sampling events, and the genetic drift in each ancestral population within the interval between their split and hybridization. The results from extensive simulations using various combinations of relevant parameter values show that in general much more accurate and precise estimates of admixture proportions are obtained from the likelihood method than from previous methods. The likelihood method also yields reasonable estimates of genetic drift that occurred to each population, which translate into relative effective sizes (N(e)) or absolute average N(e)'s if the times when the relevant events (such as population split, admixture, and sampling) occurred are known. The proposed likelihood method also has features such as relatively low computational requirement compared with previous ones, flexibility for admixture models, and marker types. In particular, it allows for missing data from a contributing parental population. The method is applied to a human data set and a wolflike canids data set, and the results obtained are discussed in comparison with those from other estimators and from previous studies. PMID:12807794
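In its simplest drift-free form, the likelihood for an admixture proportion m treats the admixed allele frequency at each locus as m·p1 + (1 - m)·p2 and scores the sampled allele counts binomially. The sketch below is a deliberately simplified illustration that ignores the drift terms central to the paper; all allele frequencies and sample sizes are hypothetical.

```python
import math

# Assumed-known parental allele frequencies at independent loci.
p1 = [0.9, 0.2, 0.7, 0.4, 0.8]
p2 = [0.1, 0.8, 0.3, 0.9, 0.2]

m_true = 0.3
n_genes = 200   # sampled gene copies per locus in the admixed population
# Deterministic "observed" allele counts set at their expected values.
counts = [round(n_genes * (m_true * a + (1 - m_true) * b))
          for a, b in zip(p1, p2)]

def loglik(m):
    """Binomial log-likelihood of the admixed sample, ignoring drift."""
    ll = 0.0
    for a, b, k in zip(p1, p2, counts):
        p = m * a + (1 - m) * b
        ll += k * math.log(p) + (n_genes - k) * math.log(1 - p)
    return ll

m_hat = max((0.001 * i for i in range(1, 1000)), key=loglik)
print("estimated admixture proportion:", m_hat)
```

The full method in the abstract additionally estimates the drift that occurred in each population, which broadens the likelihood around m; this sketch shows only the core admixture term.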
Computational Methods for Dynamic Stability and Control Derivatives
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.
2003-01-01
Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.
Computational Methods for Dynamic Stability and Control Derivatives
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.
2004-01-01
Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.
Method to describe stochastic dynamics using an optimal coordinate.
Krivov, Sergei V
2013-12-01
A general method to describe the stochastic dynamics of Markov processes is suggested. The method aims to solve three related problems: the determination of an optimal coordinate for the description of stochastic dynamics; the reconstruction of time from an ensemble of stochastic trajectories; and the decomposition of stationary stochastic dynamics into eigenmodes which do not decay exponentially with time. The problems are solved by introducing additive eigenvectors which are transformed by a stochastic matrix in a simple way - every component is translated by a constant distance. Such solutions have peculiar properties. For example, an optimal coordinate for stochastic dynamics with detailed balance is a multivalued function. An optimal coordinate for a random walk on a line corresponds to the conventional eigenvector of the one-dimensional Dirac equation. The equation for the optimal coordinate in a slowly varying potential reduces to the Hamilton-Jacobi equation for the action function.
A review of substructure coupling methods for dynamic analysis
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Chang, C. J.
1976-01-01
The state of the art is assessed in substructure coupling for dynamic analysis. A general formulation, which permits all previously described methods to be characterized by a few constituent matrices, is developed. Limited results comparing the accuracy of various methods are presented.
An inverse dynamic method yielding flexible manipulator state trajectories
NASA Technical Reports Server (NTRS)
Kwon, Dong-Soo; Book, Wayne J.
1990-01-01
An inverse dynamic equation for a flexible manipulator is derived in state form. By dividing the inverse system into a causal part and an anticausal part, the torque required for a given end-point trajectory is calculated in the time domain, along with the trajectories of all state variables. The open-loop control of the inverse dynamic method shows excellent results in simulation. For practical applications, a control strategy that combines feedback tracking control with the inverse dynamic feedforward control is illustrated, and good experimental results are presented.
An efficient threshold dynamics method for wetting on rough surfaces
NASA Astrophysics Data System (ADS)
Xu, Xianmin; Wang, Dong; Wang, Xiao-Ping
2017-02-01
The threshold dynamics method developed by Merriman, Bence and Osher (MBO) is an efficient method for simulating the motion by mean curvature flow when the interface is away from the solid boundary. Direct generalization of MBO-type methods to the wetting problem with interfaces intersecting the solid boundary is not easy because solving the heat equation in a general domain with a wetting boundary condition is not as efficient as it is with the original MBO method. The dynamics of the contact point also follows a different law compared with the dynamics of the interface away from the boundary. In this paper, we develop an efficient volume preserving threshold dynamics method for simulating wetting on rough surfaces. This method is based on minimization of the weighted surface area functional over an extended domain that includes the solid phase. The method is simple, stable with O(N log N) complexity per time step and is not sensitive to the inhomogeneity or roughness of the solid boundary.
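The basic MBO iteration away from solid boundaries, which the paper generalizes, alternates a short heat-equation step with pointwise thresholding at 1/2. The sketch below is our own minimal 2D version, without the wetting boundary condition or the volume preservation of the paper; it shrinks a disk, as expected under mean curvature flow.

```python
import math

N = 60
# Indicator function of a disk of radius 18 on an N x N grid.
phi = [[1.0 if (i - N / 2) ** 2 + (j - N / 2) ** 2 <= 18 ** 2 else 0.0
        for j in range(N)] for i in range(N)]

def gaussian_kernel(sigma, radius):
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(phi, kernel):
    """Separable Gaussian convolution = one short heat-equation step
    (replicated values at the grid edge stand in for boundary conditions)."""
    r = len(kernel) // 2
    n = len(phi)
    tmp = [[sum(kernel[d + r] * phi[i][min(max(j + d, 0), n - 1)]
                for d in range(-r, r + 1)) for j in range(n)] for i in range(n)]
    return [[sum(kernel[d + r] * tmp[min(max(i + d, 0), n - 1)][j]
                 for d in range(-r, r + 1)) for j in range(n)] for i in range(n)]

kernel = gaussian_kernel(4.0, 8)
area0 = sum(map(sum, phi))
for _ in range(10):
    # MBO step: diffuse, then threshold back to an indicator function.
    phi = [[1.0 if v >= 0.5 else 0.0 for v in row] for row in blur(phi, kernel)]
area1 = sum(map(sum, phi))
print("disk area before/after MBO steps:", area0, area1)  # the disk shrinks
```

In production codes the diffusion step is done with an FFT, which is the source of the O(N log N) per-step cost quoted in the abstract; the direct separable convolution here keeps the sketch self-contained.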
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at identifying the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that fault diagnosis in practice is a multistep process, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Unlike the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which identifies the most effective diagnostic test to perform in the next step. It can therefore reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and its validity is verified.
A Dynamic Integrated Fault Diagnosis Method for Power Transformers
Gao, Wensheng; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at identifying the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that fault diagnosis in practice is a multistep process, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Unlike the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which identifies the most effective diagnostic test to perform in the next step. It can therefore reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and its validity is verified. PMID:25685841
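The evidence-selection idea can be illustrated with a toy Bayesian update: given a prior over hypothetical failure modes and assumed test likelihoods (all names and numbers below are invented for illustration, not taken from the paper), the next test can be chosen greedily as the one minimizing expected posterior entropy:

```python
import numpy as np

# Hypothetical failure modes and priors (illustrative numbers only).
modes = ["overheating", "winding fault", "partial discharge"]
prior = np.array([0.5, 0.3, 0.2])

# Assumed P(test positive | mode) for two candidate diagnostic tests.
p_pos = {
    "gas analysis":   np.array([0.9, 0.4, 0.3]),
    "PD measurement": np.array([0.1, 0.3, 0.9]),
}

def posterior(prior, like):
    """Bayes update of the mode distribution given a test outcome."""
    post = prior * like
    return post / post.sum()

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def expected_entropy(prior, test):
    """Expected posterior entropy after running `test` (smaller = more informative)."""
    p1 = p_pos[test]
    pr_pos = (prior * p1).sum()
    h_pos = entropy(posterior(prior, p1))
    h_neg = entropy(posterior(prior, 1 - p1))
    return pr_pos * h_pos + (1 - pr_pos) * h_neg

# Greedy evidence selection: pick the test with the lowest expected entropy.
best = min(p_pos, key=lambda t: expected_entropy(prior, t))
post = posterior(prior, p_pos[best])
```

This is only the greedy information-gain skeleton; the paper's method works over a full Bayesian network of conditions, modes, and symptoms.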
Method for recovering dynamic position of photoelectric encoder
NASA Astrophysics Data System (ADS)
Wu, Yong-zhi; Wan, Qiu-hua; Zhao, Chang-hai; Sun, Ying; Liang, Li-hui; Liu, Yi-sheng
2009-05-01
This paper presents a method to recover the dynamic position of a photoelectric encoder. When the encoder operates dynamically, its original outputs are, in theory, two sine or triangular signals with a phase difference of π/2. In practice, however, the actual output signals deviate from this ideal, and interpolating on the basis of the deviated signals results in interpolation errors. In the dynamic state, the true original signal data obtained by the data acquisition system form an equation in time. By processing these data with equiangular resampling and harmonic analysis, the equation is converted from the time domain to the position domain, and an original position equation can be formed; the interpolation errors can then be obtained. With this method, the interpolation errors can be checked in the dynamic state, and it also provides a basis for electronic interpolation so as to improve the dynamic interpolation precision of the encoder. Software simulation and experimental analysis both prove the method effective. This method is the theoretical basis for precision checking and calibration in motion.
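A common electronic-interpolation step, which the deviation analysis above feeds into, recovers the intra-period position from the two quadrature channels with a four-quadrant arctangent. A sketch with an artificial harmonic distortion on one channel (the distortion model is illustrative, not the paper's):

```python
import numpy as np

# Ideal quadrature outputs at phase theta, with a small third-harmonic
# deviation on the sine channel (hypothetical distortion, for illustration).
theta_true = np.linspace(0, 2 * np.pi, 400, endpoint=False)
sin_ch = np.sin(theta_true) + 0.01 * np.cos(3 * theta_true)
cos_ch = np.cos(theta_true)

# Electronic interpolation: recover the phase, i.e. the position within
# one grating period, from the two channels.
theta_est = np.mod(np.arctan2(sin_ch, cos_ch), 2 * np.pi)

# Interpolation error caused by the deviation from ideal sine signals.
err = np.angle(np.exp(1j * (theta_est - theta_true)))
max_err_rad = np.abs(err).max()
```

The maximum phase error is on the order of the distortion amplitude, which is exactly the kind of deviation the paper's dynamic checking procedure is designed to measure.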
Improved dynamic analysis method using load-dependent Ritz vectors
NASA Technical Reports Server (NTRS)
Escobedo-Torres, J.; Ricles, J. M.
1993-01-01
The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternative solution methods need to be considered. It is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load-dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load-dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternative method is very promising, further studies are necessary to fully understand its attributes and limitations.
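The load-dependent Ritz vector recurrence can be sketched in a few lines: a static solve against the spatial load pattern, then repeated solves with mass-matrix orthogonalization. The spring-mass chain below stands in for a space-structure model and is purely illustrative:

```python
import numpy as np

def load_dependent_ritz(K, M, f, m):
    """Generate m load-dependent Ritz vectors for the system (K, M)
    under spatial load f: static solve, then K^-1 M recurrence with
    M-orthonormalization (Gram-Schmidt)."""
    n = K.shape[0]
    R = np.zeros((n, m))
    x = np.linalg.solve(K, f)                 # static response to the load
    R[:, 0] = x / np.sqrt(x @ M @ x)          # M-normalize
    for i in range(1, m):
        x = np.linalg.solve(K, M @ R[:, i - 1])
        for j in range(i):                    # M-orthogonalize against previous
            x -= (x @ M @ R[:, j]) * R[:, j]
        R[:, i] = x / np.sqrt(x @ M @ x)
    return R

# Small fixed-fixed spring-mass chain as a stand-in for a large model.
n = 10
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
f = np.zeros(n)
f[-1] = 1.0                                   # tip load
R = load_dependent_ritz(K, M, f, m=4)
```

Because the basis is generated from the actual load, a handful of vectors often captures the response that would require many exact eigenvectors, which is the efficiency argument the abstract makes.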
Can the ring polymer molecular dynamics method be interpreted as real time quantum dynamics?
Jang, Seogjoo; Sinitskiy, Anton V.; Voth, Gregory A.
2014-04-21
The ring polymer molecular dynamics (RPMD) method has gained popularity in recent years as a simple approximation for calculating real time quantum correlation functions in condensed media. However, the extent to which RPMD captures real dynamical quantum effects, and why it fails in certain situations, have not been clearly understood. Addressing this issue has been difficult in the absence of a genuine justification for the RPMD algorithm starting from the quantum Liouville equation. To this end, a new and exact path integral formalism for the calculation of real time quantum correlation functions is presented in this work, which can serve as a rigorous foundation for the analysis of the RPMD method as well as providing an alternative derivation of the well established centroid molecular dynamics method. The new formalism utilizes the cyclic symmetry of the imaginary time path integral in the most general sense and enables the expression of Kubo-transformed quantum time correlation functions as those of physical observables pre-averaged over the imaginary time path. Upon filtering with a centroid constraint function, the formulation results in the centroid dynamics formalism. Upon filtering with the position representation of the imaginary time path integral, we obtain an exact quantum dynamics formalism involving the same variables as the RPMD method. The analysis of the RPMD approximation based on this approach clarifies that an explicit quantum dynamical justification does not exist for the use of the ring polymer harmonic potential term (imaginary time kinetic energy) as implemented in the RPMD method. We analyze why this can cause substantial errors in nonlinear correlation functions of harmonic oscillators; such errors can be significant for general correlation functions of anharmonic systems. We also demonstrate that the short time accuracy of the exact path integral limit of RPMD is of lower order than that for finite discretizations of the path.
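The ring-polymer harmonic term discussed above has a well-known free-particle normal-mode spectrum: the eigenvalues of the periodic spring (Laplacian) matrix coupling adjacent beads are 4 sin²(πk/n). A small numerical check (the bead count is illustrative):

```python
import numpy as np

n = 8  # number of ring-polymer beads
# Spring (Laplacian) matrix of the closed ring: bead i couples to i±1,
# with periodic wrap-around between the first and last beads.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, -1] -= 1.0
A[-1, 0] -= 1.0

eigs = np.sort(np.linalg.eigvalsh(A))
# Analytic spectrum of the periodic chain: 4 sin^2(pi k / n), k = 0..n-1.
expected = np.sort(4 * np.sin(np.pi * np.arange(n) / n) ** 2)
```

The zero eigenvalue (k = 0) is the centroid mode, which is the object singled out by the centroid-constraint filtering mentioned in the abstract.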
Comparison of induced rules based on likelihood estimation
NASA Astrophysics Data System (ADS)
Tsumoto, Shusaku
2002-03-01
Rule induction methods have been applied to knowledge discovery in databases and data mining, and the empirical results obtained show that they are very powerful and that important knowledge has been extracted from datasets. However, comparison and evaluation of rules are based not on statistical evidence but on rather naive indices, such as conditional probabilities and functions of conditional probabilities. In this paper, we introduce two approaches to the statistical comparison of induced rules. For the statistical evaluation, the likelihood ratio test and Fisher's exact test play an important role: the likelihood ratio statistic measures the statistical information in an information table and is used to measure the difference between two tables.
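The likelihood ratio statistic for a rule can be computed as the G-statistic of the 2x2 contingency table pairing the rule's condition with its conclusion; a sketch (the counts are invented for illustration):

```python
import math

def g_statistic(table):
    """Likelihood-ratio (G) statistic for a 2x2 contingency table:
    G = 2 * sum O * ln(O / E), with E the independence expectation."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    g = 0.0
    for i in range(2):
        for j in range(2):
            o = table[i][j]
            e = row[i] * col[j] / n
            if o > 0:
                g += o * math.log(o / e)
    return 2.0 * g

# Rule "condition -> class": observed co-occurrence counts.
g = g_statistic([[30, 10], [10, 30]])   # strong association, g ~ 20.9
```

Under independence G is asymptotically chi-square with one degree of freedom, so it supports exactly the kind of statistical rule comparison the abstract argues for, in place of raw conditional probabilities.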
Investigation of Ribosomes Using Molecular Dynamics Simulation Methods.
Makarov, G I; Makarova, T M; Sumbatyan, N V; Bogdanov, A A
2016-12-01
The ribosome as a complex molecular machine undergoes significant conformational changes while synthesizing a protein molecule. Molecular dynamics simulations have been used as complementary approaches to X-ray crystallography and cryoelectron microscopy, as well as biochemical methods, to answer many questions that modern structural methods leave unsolved. In this review, we demonstrate that all-atom modeling of ribosome molecular dynamics is particularly useful in describing the process of tRNA translocation, atomic details of behavior of nascent peptides, antibiotics, and other small molecules in the ribosomal tunnel, and the putative mechanism of allosteric signal transmission to functional sites of the ribosome.
Nonstationary hydrological time series forecasting using nonlinear dynamic methods
NASA Astrophysics Data System (ADS)
Coulibaly, Paulin; Baldwin, Connely K.
2005-06-01
Recent evidence of nonstationary trends in water resources time series, as a result of natural and/or anthropogenic climate variability and change, has raised interest in nonlinear dynamic system modeling methods. In this study, the effectiveness of dynamically driven recurrent neural networks (RNN) for modeling complex time-varying water resources systems is investigated. An optimal dynamic RNN approach is proposed to directly forecast different nonstationary hydrological time series. The proposed method automatically selects the most optimally trained network in each case. The simulation performance of the dynamic RNN-based model is compared with the results obtained from optimal multivariate adaptive regression splines (MARS) models. It is shown that the dynamically driven RNN model can be a good alternative for modeling the complex dynamics of a hydrological system, performing better than the MARS model on the three selected hydrological time series, namely the historical storage volumes of the Great Salt Lake, the Saint Lawrence River flows, and the Nile River flows.
NASA Astrophysics Data System (ADS)
Cattivelli, Federico S.; Estabrook, Polly; Satorius, Edgar H.; Sayed, Ali H.
2008-11-01
One of the most crucial stages of the Mars exploration missions is the entry, descent, and landing (EDL) phase. During EDL, maintaining reliable communication from the spacecraft to Earth is extremely important for the success of future missions, especially in case of mission failure. EDL is characterized by severe dynamics, with large decelerations caused by atmospheric friction, parachute deployment, and rocket firing, among others. These dynamics cause a severe Doppler shift on the carrier communications link to Earth. Maximum-likelihood methods have been proposed to estimate this Doppler shift. So far these methods have proved successful, but it is expected that the next Mars mission, known as the Mars Science Laboratory, will suffer from higher dynamics and lower SNR, so improving the existing estimation methods becomes a necessity. We propose a maximum-likelihood approach that takes into account the power in the data tones to enhance carrier recovery, improving the estimation performance by up to 3 dB. Simulations are performed using real data obtained during the EDL stage of the Mars Exploration Rover B (MERB) mission.
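For a single tone in white Gaussian noise, the maximum-likelihood frequency estimate reduces to the periodogram maximizer, which is presumably the core of such carrier-recovery schemes; the paper's data-tone weighting is not modeled here, and all signal parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic received carrier: complex exponential at an unknown
# Doppler-shifted frequency plus complex white Gaussian noise.
n, f0 = 4096, 0.1234          # samples; true frequency in cycles/sample
t = np.arange(n)
x = np.exp(2j * np.pi * f0 * t)
x += 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# ML estimate for a single tone in white noise: the frequency maximizing
# the periodogram. Zero-padding the FFT refines the search grid.
nfft = 8 * n
spec = np.abs(np.fft.fft(x, nfft))
f_hat = np.fft.fftfreq(nfft)[np.argmax(spec)]
```

The coherent processing gain of the full record makes the spectral peak stand far above the noise floor, so the estimate lands within a fraction of an FFT bin of the true frequency.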
Accelerated molecular dynamics methods: introduction and recent developments
Uberuaga, Blas Pedro; Voter, Arthur F; Perez, Danny; Shim, Y; Amar, J G
2009-01-01
A long-standing limitation in the use of molecular dynamics (MD) simulation is that it can only be applied directly to processes that take place on very short timescales: nanoseconds if empirical potentials are employed, or picoseconds if we rely on electronic structure methods. Many processes of interest in chemistry, biochemistry, and materials science require study over microseconds and beyond, due either to the natural timescale for the evolution or to the duration of the experiment of interest. Setting aside the case of liquids, the dynamics on these time scales is typically characterized by infrequent transitions from state to state, usually involving an energy barrier. There is a long and venerable tradition in chemistry of using transition state theory (TST) [10, 19, 23] to directly compute rate constants for these kinds of activated processes. If needed, dynamical corrections to the TST rate, and even quantum corrections, can be computed to achieve an accuracy suitable for the problem at hand. These rate constants then allow us to understand the system behavior on longer time scales than we can directly reach with MD. For complex systems with many reaction paths, the TST rates can be fed into a stochastic simulation procedure such as kinetic Monte Carlo, and a direct simulation of the advance of the system through its possible states can be obtained in a probabilistically exact way. A problem that has become more evident in recent years, however, is that for many systems of interest there is a complexity that makes it difficult, if not impossible, to determine all the relevant reaction paths to which TST should be applied. This is a serious issue, as omitted transition pathways can have uncontrollable consequences on the simulated long-time kinetics. Over the last decade or so, we have been developing a new class of methods for treating the long-time dynamics in these complex, infrequent-event systems. Rather than trying to guess in advance what
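The TST rate constant mentioned above is, in its simplest harmonic form, an attempt frequency times a Boltzmann factor; a sketch with typical (illustrative) solid-state values:

```python
import math

def tst_rate(prefactor_hz, barrier_ev, temperature_k):
    """Harmonic TST rate k = nu * exp(-Ea / kB T) for an activated process."""
    k_b = 8.617333262e-5          # Boltzmann constant, eV/K
    return prefactor_hz * math.exp(-barrier_ev / (k_b * temperature_k))

# Typical solid-state diffusion hop: ~1e13 Hz attempt frequency, 0.5 eV
# barrier. At room temperature this gives on the order of 1e4 hops/s --
# far beyond the nanosecond reach of direct MD, which is the motivation
# for accelerated-dynamics methods.
k300 = tst_rate(1e13, 0.5, 300.0)
```

Rates like this are exactly what gets fed into kinetic Monte Carlo to advance the system from state to state on long timescales.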
Maximum Marginal Likelihood Estimation for Semiparametric Item Analysis.
ERIC Educational Resources Information Center
Ramsay, J. O.; Winsberg, S.
1991-01-01
A method is presented for estimating the item characteristic curve (ICC) using polynomial regression splines. Estimation of spline ICCs is described by maximizing the marginal likelihood formed by integrating ability over a beta prior distribution. Simulation results compare this approach with the joint estimation of ability and item parameters.…
Maximum likelihood estimates of polar motion parameters
NASA Technical Reports Server (NTRS)
Wilson, Clark R.; Vicente, R. O.
1990-01-01
Two estimators developed by Jeffreys (1940, 1968) are described and used in conjunction with polar-motion data to determine the frequency (Fc) and quality factor (Qc) of the Chandler wobble. Data are taken from a monthly polar-motion series, satellite laser-ranging results, and optical astrometry and intercompared for use via interpolation techniques. Maximum likelihood arguments were employed to develop the estimators, and the assumption that polar motion relates to a Gaussian random process is assessed in terms of the accuracies of the estimators. The present results agree with those from Jeffreys' earlier study but are inconsistent with the later estimator; a Monte Carlo evaluation of the estimators confirms that the 1968 method is more accurate. The later estimator method shows good performance because the Fourier coefficients derived from the data have signal/noise levels that are superior to those for an individual datum. The method is shown to be valuable for general spectral-analysis problems in which isolated peaks must be analyzed from noisy data.
Efficient maximum likelihood parameterization of continuous-time Markov processes
McGibbon, Robert T.; Pande, Vijay S.
2015-01-01
Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016
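For a two-state process the interval-transition matrix has a closed form, so the maximum-likelihood idea can be sketched without the general matrix-exponential machinery the paper's estimator uses; the rates, observation interval, and counts below are illustrative:

```python
import numpy as np

def transition_matrix(a, b, tau):
    """Exact P(tau) for a two-state CTMC with rates a (0->1) and b (1->0)."""
    s = a + b
    e = np.exp(-s * tau)
    return np.array([[(b + a * e) / s, a * (1 - e) / s],
                     [b * (1 - e) / s, (a + b * e) / s]])

def log_likelihood(counts, a, b, tau):
    """Multinomial log-likelihood of observed transition counts."""
    return float((counts * np.log(transition_matrix(a, b, tau))).sum())

# Expected transition counts generated from the true rates a=1.2, b=0.7.
tau, n_obs = 0.5, 10000.0
counts = n_obs * transition_matrix(1.2, 0.7, tau)

# Maximum likelihood by brute-force grid search over the two rates.
grid = np.arange(0.1, 3.001, 0.05)
a_hat, b_hat = max(((a, b) for a in grid for b in grid),
                   key=lambda ab: log_likelihood(counts, ab[0], ab[1], tau))
```

With idealized (expected) counts the likelihood peaks at the true rates; real data observed at a finite interval behaves the same way in the large-sample limit, which is the regime the paper's estimator addresses efficiently for many states.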
Adiabatic molecular-dynamics-simulation-method studies of kinetic friction
NASA Astrophysics Data System (ADS)
Zhang, J.; Sokoloff, J. B.
2005-06-01
An adiabatic molecular-dynamics method is developed and used to study the Muser-Robbins model for dry friction (i.e., nonzero kinetic friction in the slow sliding speed limit). In this model, dry friction between two crystalline surfaces rotated with respect to each other is due to mobile molecules (i.e., dirt particles) adsorbed at the interface. Our adiabatic method allows us to quickly locate interface potential-well minima, which become unstable during sliding of the surfaces. Since dissipation due to friction in the slow sliding speed limit results from mobile molecules dropping out of such unstable wells, our method provides a way to calculate dry friction, which agrees extremely well with results found by conventional molecular dynamics for the same system, but our method is more than a factor of 10 faster.
Continuation Methods for Qualitative Analysis of Aircraft Dynamics
NASA Technical Reports Server (NTRS)
Cummings, Peter A.
2004-01-01
A class of numerical methods for constructing bifurcation curves for systems of coupled, non-linear ordinary differential equations is presented. Foundations are discussed, and several variations are outlined along with their respective capabilities. Appropriate background material from dynamical systems theory is presented.
The Feldenkrais Method: A Dynamic Approach to Changing Motor Behavior.
ERIC Educational Resources Information Center
Buchanan, Patricia A.; Ulrich, Beverly D.
2001-01-01
Describes the Feldenkrais Method of somatic education, noting parallels with a dynamic systems theory (DST) approach to motor behavior. Feldenkrais uses movement and perception to foster individualized improvement in function. DST explains that a human-environment system continually adapts to changing conditions and assembles behaviors…
Do dynamic-based MR knee kinematics methods produce the same results as static methods?
d'Entremont, Agnes G; Nordmeyer-Massner, Jurek A; Bos, Clemens; Wilson, David R; Pruessmann, Klaas P
2013-06-01
MR-based methods provide low risk, noninvasive assessment of joint kinematics; however, these methods often use static positions or require many identical cycles of movement. The study objective was to compare the 3D kinematic results approximated from a series of sequential static poses of the knee with the 3D kinematic results obtained from continuous dynamic movement of the knee. To accomplish this objective, we compared kinematic data from a validated static MR method to a fast static MR method, and compared kinematic data from both static methods to a newly developed dynamic MR method. Ten normal volunteers were imaged using the three kinematic methods (dynamic, static standard, and static fast). Results showed that the two sets of static results were in agreement, indicating that the sequences (standard and fast) may be used interchangeably. Dynamic kinematic results were significantly different from both static results in eight of 11 kinematic parameters: patellar flexion, patellar tilt, patellar proximal translation, patellar lateral translation, patellar anterior translation, tibial abduction, tibial internal rotation, and tibial anterior translation. Three-dimensional MR kinematics measured from dynamic knee motion are often different from those measured in a static knee at several positions, indicating that dynamic-based kinematics provides information that is not obtainable from static scans.
Review of dynamic optimization methods in renewable natural resource management
Williams, B.K.
1989-01-01
In recent years, the applications of dynamic optimization procedures in natural resource management have proliferated. A systematic review of these applications is given in terms of a number of optimization methodologies and natural resource systems. The applicability of the methods to renewable natural resource systems is compared in terms of system complexity, system size, and precision of the optimal solutions. Recommendations are made concerning the appropriate methods for certain kinds of biological resource problems.
On the existence of maximum likelihood estimates for presence-only data
Hefley, Trevor J.; Hooten, Mevin B.
2015-01-01
It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.
NASA Astrophysics Data System (ADS)
Suh, Youngjoo; Kim, Hoirin
2014-12-01
In this paper, a new discriminative likelihood score weighting technique is proposed for speaker identification. The proposed method employs a discriminative weighting of frame-level log-likelihood scores with acoustic-phonetic classification in the Gaussian mixture model (GMM)-based speaker identification. Experiments performed on the Aurora noise-corrupted TIMIT database showed that the proposed approach provides meaningful performance improvement with an overall relative error reduction of 15.8% over the maximum likelihood-based baseline GMM approach.
Empirical Likelihood-Based Confidence Interval of ROC Curves.
Su, Haiyan; Qin, Yongsong; Liang, Hua
2009-11-01
In this article we propose an empirical likelihood-based confidence interval for receiver operating characteristic curves which are based on a continuous-scale test. The approach is easily understood, simply implemented, and computationally efficient. The results from our simulation studies indicate that the finite-sample numerical performance slightly outperforms the most promising methods published recently. Two real datasets are analyzed by using the proposed method and the existing bootstrap-based method.
Predicting crash likelihood and severity on freeways with real-time loop detector data.
Xu, Chengcheng; Tarko, Andrew P; Wang, Wei; Liu, Pan
2013-08-01
Real-time crash risk prediction using traffic data collected from loop detector stations is useful in dynamic safety management systems aimed at improving traffic safety through application of proactive safety countermeasures. The major drawback of most of the existing studies is that they focus on the crash risk without consideration of crash severity. This paper presents an effort to develop a model that predicts the crash likelihood at different levels of severity with a particular focus on severe crashes. The crash data and traffic data used in this study were collected on the I-880 freeway in California, United States. This study considers three levels of crash severity: fatal/incapacitating injury crashes (KA), non-incapacitating/possible injury crashes (BC), and property-damage-only crashes (PDO). The sequential logit model was used to link the likelihood of crash occurrences at different severity levels to various traffic flow characteristics derived from detector data. The elasticity analysis was conducted to evaluate the effect of the traffic flow variables on the likelihood of crash and its severity. The results show that the traffic flow characteristics contributing to crash likelihood were quite different at different levels of severity. The PDO crashes were more likely to occur under congested traffic flow conditions with highly variable speed and frequent lane changes, while the KA and BC crashes were more likely to occur under less congested traffic flow conditions. High speed, coupled with a large speed difference between adjacent lanes under uncongested traffic conditions, was found to increase the likelihood of severe crashes (KA). This study applied the 20-fold cross-validation method to estimate the prediction performance of the developed models. The validation results show that the model's crash prediction performance at each severity level was satisfactory. The findings of this study can be used to predict the probabilities of crash at
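A sequential (continuation-ratio) logit can be sketched as a chain of binary logits: first severe (KA) versus not, then BC versus PDO among the remainder. The feature vector and weights below are invented for illustration, not estimates from the paper:

```python
import numpy as np

def sequential_logit_probs(x, w1, w2):
    """Sequential logit over three ordered severity levels: first split
    severe (KA) vs. not-severe, then BC vs. PDO among the non-severe."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    p_ka = sigmoid(x @ w1)                # P(KA | crash)
    p_bc = (1 - p_ka) * sigmoid(x @ w2)   # P(BC | crash)
    p_pdo = 1 - p_ka - p_bc               # remaining mass
    return np.array([p_ka, p_bc, p_pdo])

# Hypothetical features with a leading intercept:
# [1, mean speed, speed variance, lane-change rate] (standardized).
x = np.array([1.0, 0.8, -0.3, 0.5])
w1 = np.array([-2.0, 1.5, -0.8, 0.2])     # severe: high speed, low congestion
w2 = np.array([0.5, -0.4, 1.1, 0.9])
probs = sequential_logit_probs(x, w1, w2)
```

Because the splits are nested, the three probabilities are guaranteed to be positive and sum to one, which is what makes the model convenient for severity-level elasticity analysis.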
Tensor-based dynamic reconstruction method for electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.
2017-03-01
Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
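A standard building block of such low-rank + sparse models is singular-value soft-thresholding, the proximal operator of the nuclear norm; a matrix (rather than tensor) sketch on synthetic data, with all sizes and thresholds illustrative:

```python
import numpy as np

def svt(D, tau):
    """Singular-value soft-thresholding: shrink each singular value by tau,
    clipping at zero. This is the proximal operator of the nuclear norm,
    the usual surrogate for a low-rank constraint."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(1)

# Rank-2 "image" with prescribed singular values 5 and 3, plus small noise.
U, _ = np.linalg.qr(rng.standard_normal((20, 2)))
V, _ = np.linalg.qr(rng.standard_normal((20, 2)))
D = U @ np.diag([5.0, 3.0]) @ V.T + 0.01 * rng.standard_normal((20, 20))

L = svt(D, tau=1.0)                       # noise singular values vanish
rank = np.linalg.matrix_rank(L, tol=1e-6)
```

In the paper's tensor setting, an analogous shrinkage is applied along the unfoldings of the image stack, separating the slowly varying low-rank background from the rapidly changing sparse perturbations.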
Dynamic Rupture Benchmarking of the ADER-DG Method
NASA Astrophysics Data System (ADS)
Pelties, C.; Gabriel, A.
2012-12-01
We will verify the arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) method in various test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite (Harris et al. 2009). The ADER-DG scheme is able to solve the spontaneous rupture problem with high-order accuracy in space and time on three-dimensional unstructured tetrahedral meshes. Strong mesh coarsening or refinement at areas of interest can be applied to keep the computational costs feasible. Moreover, the method does not generate spurious high-frequency contributions in the slip rate spectra and therefore does not require any artificial damping as demonstrated in previous presentations and publications (Pelties et al. 2010 and 2012). We will show that the mentioned features hold also for more advanced setups as e.g. a branching fault system, heterogeneous background stresses and bimaterial faults. The advanced geometrical flexibility combined with an enhanced accuracy will make the ADER-DG method a useful tool to study earthquake dynamics on complex fault systems in realistic rheologies. References: Harris, R.A., M. Barall, R. Archuleta, B. Aagaard, J.-P. Ampuero, H. Bhat, V. Cruz-Atienza, L. Dalguer, P. Dawson, S. Day, B. Duan, E. Dunham, G. Ely, Y. Kaneko, Y. Kase, N. Lapusta, Y. Liu, S. Ma, D. Oglesby, K. Olsen, A. Pitarka, S. Song, and E. Templeton, The SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise, Seismological Research Letters, vol. 80, no. 1, pages 119-126, 2009 Pelties, C., J. de la Puente, and M. Kaeser, Dynamic Rupture Modeling in Three Dimensions on Unstructured Meshes Using a Discontinuous Galerkin Method, AGU 2010 Fall Meeting, abstract #S21C-2068 Pelties, C., J. de la Puente, J.-P. Ampuero, G. Brietzke, and M. Kaeser, Three-Dimensional Dynamic Rupture Simulation with a High-order Discontinuous Galerkin Method on Unstructured Tetrahedral Meshes, JGR. - Solid Earth, VOL. 117, B02309, 2012
Dynamic Rupture Benchmarking of the ADER-DG Method
NASA Astrophysics Data System (ADS)
Gabriel, Alice; Pelties, Christian
2013-04-01
We will verify the arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) method in various test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite (Harris et al. 2009). The ADER-DG scheme is able to solve the spontaneous rupture problem with high-order accuracy in space and time on three-dimensional unstructured tetrahedral meshes. Strong mesh coarsening or refinement at areas of interest can be applied to keep the computational costs feasible. Moreover, the method does not generate spurious high-frequency contributions in the slip rate spectra and therefore does not require any artificial damping as demonstrated in previous presentations and publications (Pelties et al. 2010 and 2012). We will show that the mentioned features hold also for more advanced setups as e.g. a branching fault system, heterogeneous background stresses and bimaterial faults. The advanced geometrical flexibility combined with an enhanced accuracy will make the ADER-DG method a useful tool to study earthquake dynamics on complex fault systems in realistic rheologies. References: Harris, R.A., M. Barall, R. Archuleta, B. Aagaard, J.-P. Ampuero, H. Bhat, V. Cruz-Atienza, L. Dalguer, P. Dawson, S. Day, B. Duan, E. Dunham, G. Ely, Y. Kaneko, Y. Kase, N. Lapusta, Y. Liu, S. Ma, D. Oglesby, K. Olsen, A. Pitarka, S. Song, and E. Templeton, The SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise, Seismological Research Letters, vol. 80, no. 1, pages 119-126, 2009 Pelties, C., J. de la Puente, and M. Kaeser, Dynamic Rupture Modeling in Three Dimensions on Unstructured Meshes Using a Discontinuous Galerkin Method, AGU 2010 Fall Meeting, abstract #S21C-2068 Pelties, C., J. de la Puente, J.-P. Ampuero, G. Brietzke, and M. Kaeser, Three-Dimensional Dynamic Rupture Simulation with a High-order Discontinuous Galerkin Method on Unstructured Tetrahedral Meshes, JGR. - Solid Earth, VOL. 117, B02309, 2012
Censored Median Regression and Profile Empirical Likelihood
Subramanian, Sundarraman
2007-01-01
We implement profile empirical likelihood based inference for censored median regression models. Inference for any specified sub-vector is carried out by profiling out the nuisance parameters from the “plug-in” empirical likelihood ratio function proposed by Qin and Tsao. To obtain the critical value of the profile empirical likelihood ratio statistic, we first investigate its asymptotic distribution. The limiting distribution is a sum of weighted chi square distributions. Unlike for the full empirical likelihood, however, the derived asymptotic distribution has an intractable covariance structure. Therefore, we employ the bootstrap to obtain the critical value, and compare the resulting confidence intervals with the ones obtained through Basawa and Koul’s minimum dispersion statistic. Furthermore, we obtain confidence intervals for the age and treatment effects in a lung cancer data set. PMID:19112527
Corrected profile likelihood confidence interval for binomial paired incomplete data.
Pradhan, Vivek; Menon, Sandeep; Das, Ujjwal
2013-01-01
Clinical trials often use paired binomial data as their clinical endpoint. The confidence interval is frequently used to estimate the treatment performance. Tang et al. (2009) have proposed exact and approximate unconditional methods for constructing a confidence interval in the presence of incomplete paired binary data. The approach proposed by Tang et al. can be overly conservative with large expected confidence interval width (ECIW) in some situations. We propose a profile likelihood-based method with a Jeffreys' prior correction to construct the confidence interval. This approach generates confidence interval with a much better coverage probability and shorter ECIWs. The performances of the method along with the corrections are demonstrated through extensive simulation. Finally, three real world data sets are analyzed by all the methods. Statistical Analysis System (SAS) codes to execute the profile likelihood-based methods are also presented.
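The profile-likelihood construction can be sketched in the simplest complete-data case, a single binomial proportion: the 95% interval collects all p whose likelihood-ratio statistic stays below the χ²(1) critical value. The paper's paired-incomplete-data setting and Jeffreys' prior correction are not reproduced here:

```python
import math

def profile_likelihood_ci(k, n, crit=3.841):
    """95% profile likelihood CI for a binomial proportion: all p for
    which 2*(l(p_hat) - l(p)) <= chi-square(1) critical value."""
    def loglik(p):
        return k * math.log(p) + (n - k) * math.log(1 - p)
    l_hat = loglik(k / n)
    grid = [i / 10000 for i in range(1, 10000)]
    inside = [p for p in grid if 2 * (l_hat - loglik(p)) <= crit]
    return inside[0], inside[-1]

lo, hi = profile_likelihood_ci(40, 100)   # point estimate 0.40
```

Unlike the Wald interval, this interval is automatically contained in (0, 1) and is asymmetric around the point estimate, properties that carry over to the paired-data method in the paper.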
Dynamic Optical Grating Device and Associated Method for Modulating Light
NASA Technical Reports Server (NTRS)
Park, Yeonjoon (Inventor); Choi, Sang H. (Inventor); King, Glen C. (Inventor); Chu, Sang-Hyon (Inventor)
2012-01-01
A dynamic optical grating device and associated method for modulating light is provided that is capable of controlling the spectral properties and propagation of light without moving mechanical components, by the use of a dynamic electric and/or magnetic field. By changing the electric field and/or magnetic field, the index of refraction, the extinction coefficient, the transmittivity, and the reflectivity of the optical grating device may be controlled in order to control the spectral properties of the light reflected or transmitted by the device.
Maximum-Likelihood Detection Of Noncoherent CPM
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood-sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation (CPM) over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends whose structures depend only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
Analysis of Nonlinear Dynamics by Square Matrix Method
Yu, Li Hua
2016-07-25
The nonlinear dynamics of a system with periodic structure can be analyzed using a square matrix. In this paper, we show that because of the special property of the square matrix constructed for nonlinear dynamics, we can reduce the dimension of the matrix from the original large number required for high-order calculations to a low dimension in the first step of the analysis. Then a stable Jordan decomposition is obtained with much lower dimension. The transformation to Jordan form provides an excellent action-angle approximation to the solution of the nonlinear dynamics, in good agreement with trajectories and tunes obtained from tracking. More importantly, the deviation from constancy of the new action-angle variable provides a measure of the stability of the phase space trajectories and their tunes. Thus the square matrix provides a novel method to optimize nonlinear dynamical systems. The method is illustrated by many examples of comparison between theory and numerical simulation. Finally, we show in particular that the square matrix method can be used for optimization to reduce the nonlinearity of a system.
Fast inference in generalized linear models via expected log-likelihoods.
Ramirez, Alexandro D; Paninski, Liam
2014-04-01
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" (EL) can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
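The substitution at the heart of the method can be shown in a few lines for a Poisson GLM with Gaussian covariates, where the expectation has a closed form; the coefficients and toy data below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5000, 3
C = np.diag([1.0, 0.5, 0.25])            # known covariate covariance (by design)
X = rng.multivariate_normal(np.zeros(d), C, size=N)
theta = np.array([0.2, -0.1, 0.3])       # hypothetical coefficients
y = rng.poisson(np.exp(X @ theta))       # Poisson GLM, canonical log link

def exact_loglik(th):
    eta = X @ th
    return y @ eta - np.exp(eta).sum()   # O(N) nonlinear sum on every call

def expected_loglik(th):
    # replace sum_i exp(x_i . th) with N * E[exp(x . th)]; for Gaussian
    # covariates the expectation is the closed form exp(th' C th / 2).
    # The linear term depends on the data only through y @ X, which can
    # be precomputed once as a sufficient statistic.
    return y @ (X @ th) - N * np.exp(0.5 * th @ C @ th)
```

With the expensive per-observation sum gone, each likelihood evaluation during optimization becomes essentially free once `y @ X` is cached.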
Computational Methods for Structural Mechanics and Dynamics, part 1
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.
Vortex element methods for fluid dynamic analysis of engineering systems
NASA Astrophysics Data System (ADS)
Lewis, Reginald Ivan
The surface-vorticity method of computational fluid mechanics is described, with an emphasis on turbomachinery applications, in an introduction for engineers. Chapters are devoted to surface singularity modeling; lifting bodies, two-dimensional airfoils, and cascades; mixed-flow and radial cascades; bodies of revolution, ducts, and annuli; ducted propellers and fans; three-dimensional and meridional flows in turbomachines; free vorticity shear layers and inverse methods; vortex dynamics in inviscid flows; the simulation of viscous diffusion in discrete vortex modeling; vortex-cloud modeling by the boundary-integral method; vortex-cloud models for lifting bodies and cascades; and grid systems for vortex dynamics and meridional flows. Diagrams, graphs, and the listings for a set of computer programs are provided.
System and method for reducing combustion dynamics in a combustor
Uhm, Jong Ho; Ziminsky, Willy Steve; Johnson, Thomas Edward; Srinivasan, Shiva; York, William David
2016-11-29
A system for reducing combustion dynamics in a combustor includes an end cap that extends radially across the combustor and includes an upstream surface axially separated from a downstream surface. A combustion chamber is downstream of the end cap, and tubes extend from the upstream surface through the downstream surface. Each tube provides fluid communication through the end cap to the combustion chamber. The system further includes means for reducing combustion dynamics in the combustor. A method for reducing combustion dynamics in a combustor includes flowing a working fluid through tubes that extend axially through an end cap that extends radially across the combustor and obstructing at least a portion of the working fluid flowing through a first set of the tubes.
A Non-smooth Newton Method for Multibody Dynamics
Erleben, K.; Ortiz, R.
2008-09-01
In this paper we deal with the simulation of rigid bodies. Rigid body dynamics have become very important for simulating rigid body motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contributions of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.
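A common concrete instance of this reformulation replaces the complementarity conditions with the Fischer-Burmeister function, whose root coincides with the solution of a linear complementarity problem. The following is a lightly smoothed sketch of that idea, not the authors' exact non-smooth formulation:

```python
import numpy as np

def fb(a, b, eps=1e-10):
    # smoothed Fischer-Burmeister function: zero iff a >= 0, b >= 0, a*b = 0
    return a + b - np.sqrt(a * a + b * b + eps)

def solve_lcp(M, q, iters=50):
    """Solve w = M z + q, w >= 0, z >= 0, w.z = 0 by Newton's method on
    F(z) = fb(z, M z + q) = 0."""
    n = len(q)
    z = np.ones(n)
    for _ in range(iters):
        w = M @ z + q
        F = fb(z, w)
        if np.linalg.norm(F) < 1e-12:
            break
        r = np.sqrt(z * z + w * w + 1e-10)
        J = np.diag(1.0 - z / r) + np.diag(1.0 - w / r) @ M   # Jacobian of F
        z = z - np.linalg.solve(J, F)
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])    # symmetric positive definite
q = np.array([-5.0, -6.0])
z = solve_lcp(M, q)
w = M @ z + q
```

For this small problem the solution is interior (z > 0, w = 0), so Newton's method converges rapidly; near-kink behavior is what the non-smooth machinery of the paper addresses properly.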
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody J.H.; Marzouk, Youssef M.
2016-01-01
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
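A well-known building block behind such function-space samplers is the preconditioned Crank-Nicolson (pCN) proposal, whose acceptance ratio involves only the likelihood potential and is therefore invariant under refinement of the discretization. A minimal sketch on a toy Gaussian-prior inverse problem follows; the details are illustrative and are not the DILI algorithm itself:

```python
import numpy as np

def pcn_sampler(phi, d, n_samples, beta=0.3, rng=None):
    """Preconditioned Crank-Nicolson MCMC for a N(0, I_d) prior.
    phi(u) is the negative log-likelihood; the acceptance ratio involves
    only phi, which is what makes the proposal discretization-robust."""
    if rng is None:
        rng = np.random.default_rng(0)
    u = np.zeros(d)
    out = np.empty((n_samples, d))
    for i in range(n_samples):
        v = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(d)
        if np.log(rng.uniform()) < phi(u) - phi(v):   # accept/reject
            u = v
        out[i] = u
    return out

# toy linear-Gaussian inverse problem: observe y = u + N(0, 0.5^2) noise;
# the posterior mean is then 0.8 * y coordinate-wise
y = np.array([1.0, -0.5, 0.3, 0.0])
phi = lambda u: np.sum((y - u) ** 2) / (2 * 0.25)
samples = pcn_sampler(phi, d=4, n_samples=20000)
```

DILI augments this kind of proposal with likelihood-informed, Hessian-based directions; the sketch shows only the dimension-independent backbone.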
Physically constrained maximum likelihood mode filtering.
Papp, Joseph C; Preisig, James C; Morozov, Andrey K
2010-04-01
Mode filtering is most commonly implemented using the sampled mode shapes or pseudoinverse algorithms. Buck et al. [J. Acoust. Soc. Am. 103, 1813-1824 (1998)] placed these techniques in the context of a broader maximum a posteriori (MAP) framework. However, the MAP algorithm requires that the signal and noise statistics be known a priori. Adaptive array processing algorithms are candidates for improving performance without the need for a priori signal and noise statistics. A variant of the physically constrained, maximum likelihood (PCML) algorithm [A. L. Kraay and A. B. Baggeroer, IEEE Trans. Signal Process. 55, 4048-4063 (2007)] is developed for mode filtering that achieves the same performance as the MAP mode filter yet does not need a priori knowledge of the signal and noise statistics. The central innovation of this adaptive mode filter is that the received signal's sample covariance matrix, as estimated by the algorithm, is constrained to be that which can be physically realized given a modal propagation model and an appropriate noise model. Shallow water simulation results are presented showing the benefit of using the PCML method in adaptive mode filtering.
Reducing the likelihood of long tennis matches.
Barnett, Tristan; Alan, Brown; Pollard, Graham
2006-01-01
Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed, causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative after-effects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game; this tends to occur more frequently on slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key points: the cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match; a final tiebreaker set, as currently used in the US Open, reduces the length of matches; a new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.
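The generating-function calculations underlying such results reduce, in the simplest case, to a recursion over the game score. A sketch for a standard deuce/advantage game (the 50-40 game rule itself is not reproduced here):

```python
from functools import lru_cache

def p_game(p):
    """Probability that the server wins a standard deuce/advantage game
    when winning each point independently with probability p."""
    q = 1.0 - p
    p_deuce = p * p / (1.0 - 2.0 * p * q)   # win two clear points from deuce
    @lru_cache(maxsize=None)
    def win_from(a, b):
        # a = server's points won, b = receiver's; (3, 3) is deuce
        if a == 4:
            return 1.0
        if b == 4:
            return 0.0
        if a == 3 and b == 3:
            return p_deuce
        return p * win_from(a + 1, b) + q * win_from(a, b + 1)
    return win_from(0, 0)
```

The same recursion, extended to sets and matches and tracked through a generating function for the number of points played, gives the match-length distributions compared in the paper.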
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
Population Dynamics of the Stationary Phase Utilizing the ARGOS Method
NASA Astrophysics Data System (ADS)
Algarni, S.; Charest, A. J.; Iannacchione, G. S.
2015-03-01
The Area Recorded Generalized Optical Scattering (ARGOS) approach to light scattering employs a large image-capture array with a well-defined geometry, in which images may be manipulated to extract structure, via the intensity at a specific scattering wave vector, I(q), and dynamics, via the intensity at a specific scattering wave vector over time, I(q,t). The ARGOS method captures morphological dynamics noninvasively over long time periods and allows for a variety of aqueous conditions. This is important because traditional growth models do not provide for conditions similar to the natural environment. The present study found that the population dynamics of bacteria do not follow a traditional growth model and that the ARGOS method allowed for the observation of bacterial changes, in terms of both individual particles and population dynamics, in real time. The observations of relative total intensity suggest that there is no stationary phase and that the bacterial population demonstrates sinusoidal-type patterns consistently subsequent to the log-phase growth. These observations were compared to shape changes by modeling fractal dimension and to size changes by modeling effective radius.
Analysis of nonlinear dynamics by square matrix method
NASA Astrophysics Data System (ADS)
Yu, Li Hua
2017-03-01
The nonlinear dynamics of a system with periodic structure can be analyzed using a square matrix. We show that because of the special property of the square matrix constructed for nonlinear dynamics, we can reduce the dimension of the matrix from the original large number required for high-order calculations to a low dimension in the first step of the analysis. Then a stable Jordan decomposition is obtained with much lower dimension. The Jordan decomposition leads to a transformation to a new variable, which is an accurate action-angle variable, in good agreement with trajectories and tunes obtained from tracking. More importantly, the deviation from constancy of the new action-angle variable provides a measure of the stability of the phase space trajectories and of tune fluctuation. Thus the square matrix theory shows good potential for the theoretical understanding of complicated dynamical systems and for guiding the optimization of dynamic apertures. The method is illustrated by many examples of comparison between theory and numerical simulation. In particular, we show that the square matrix method can be used for fast optimization to reduce the nonlinearity of a system.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
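The kernel method referred to here builds spatial basis functions from features of the co-registered MR image, typically via a k-nearest-neighbour Gaussian kernel, and models the emission image as a kernel matrix times coefficients. The sketch below illustrates only the construction of such a kernel matrix; the parameters and feature choice are illustrative:

```python
import numpy as np

def kernel_matrix(features, sigma=1.0, k=8):
    """Row-normalised Gaussian kernel over each voxel's k nearest
    neighbours in MR-derived feature space; the emission image is then
    modelled as K @ alpha."""
    n = len(features)
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    K = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[:k]           # includes the voxel itself
        K[i, nn] = np.exp(-d2[i, nn] / (2.0 * sigma**2))
    return K / K.sum(axis=1, keepdims=True)

# stand-in features: 20 "voxels" with 3 MR patch features each
features = np.random.default_rng(1).normal(size=(20, 3))
K = kernel_matrix(features)
```

In the full reconstruction, the EM update estimates the coefficients of these spatial bases jointly with the spectral temporal bases described in the abstract.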
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.
Novosad, Philip; Reader, Andrew J
2016-06-21
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
Parallel methods for dynamic simulation of multiple manipulator systems
NASA Technical Reports Server (NTRS)
Mcmillan, Scott; Sadayappan, P.; Orin, David E.
1993-01-01
In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of a CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four-manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four-processor case, this form of parallelism, in conjunction with a serial integration method, results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy, because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.
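The serial building block of such integrators is a predictor-corrector (PECE) step; multi-point block variants distribute several such evaluations across processors. A sketch of the serial two-step Adams-Bashforth/Adams-Moulton pair, not the specific four-point block method of the paper:

```python
def pc_integrate(f, y0, t0, t1, n):
    """Serial PECE stepping: two-step Adams-Bashforth predictor with an
    Adams-Moulton (trapezoidal) corrector. Block multi-point variants
    evaluate several such steps concurrently across processors."""
    h = (t1 - t0) / n
    t, y = t0, y0
    f_prev = f(t, y)
    # bootstrap the first step with Heun's method
    y_pred = y + h * f_prev
    y = y + 0.5 * h * (f_prev + f(t + h, y_pred))
    t += h
    for _ in range(n - 1):
        f_cur = f(t, y)
        y_pred = y + h * (1.5 * f_cur - 0.5 * f_prev)     # AB2 predictor
        y = y + 0.5 * h * (f_cur + f(t + h, y_pred))      # AM2 corrector
        f_prev = f_cur
        t += h
    return y

# y' = y, y(0) = 1 integrated to t = 1 should approximate e
approx_e = pc_integrate(lambda t, y: y, 1.0, 0.0, 1.0, 1000)
```

The right-hand-side evaluation `f(t, y)` is where the manipulator dynamics computation lives, and it dominates the cost, which is why evaluating several predicted points in parallel pays off.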
A Dynamic Interval Decision-Making Method Based on GRA
NASA Astrophysics Data System (ADS)
Xue-jun, Tang; Jia, Chen
According to the basic theory of grey relational analysis, this paper constructs a three-dimensional grey interval relational degree model over the three dimensions of time, index and scheme. On this basis, it sets up and solves a single-objective optimization model, obtains each scheme's degree of affiliation to the positive/negative ideal scheme, and ranks the schemes accordingly. The result shows that the three-dimensional grey relational degree simplifies the traditional dynamic multi-attribute decision-making method and can better handle dynamic multi-attribute decision-making problems with interval numbers. Finally, this paper demonstrates the practicality and efficiency of the model through a case study.
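For reference, the classic one-dimensional grey relational degree on which such models build can be computed as follows; the distinguishing coefficient rho = 0.5 is the conventional default:

```python
def grey_relational_degrees(reference, alternatives, rho=0.5):
    """Classic grey relational analysis: relational degree of each
    alternative sequence to the reference sequence.
    rho is the distinguishing coefficient (0.5 is the usual default)."""
    diffs = [[abs(r - x) for r, x in zip(reference, alt)] for alt in alternatives]
    dmin = min(min(row) for row in diffs)
    dmax = max(max(row) for row in diffs)
    coeffs = [[(dmin + rho * dmax) / (d + rho * dmax) for d in row]
              for row in diffs]
    return [sum(row) / len(row) for row in coeffs]

# an exact match scores 1.0; a reversed sequence scores lower
degrees = grey_relational_degrees([1.0, 2.0, 3.0],
                                  [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]])
```

The three-dimensional model of the paper extends this degree across time, index and scheme, and over interval-valued rather than crisp data.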
Analysis methods for wind turbine control and electrical system dynamics
NASA Technical Reports Server (NTRS)
Hinrichsen, E. N.
1995-01-01
The integration of new energy technologies into electric power systems requires methods which recognize the full range of dynamic events in both the new generating unit and the power system. Since new energy technologies are initially perceived as small contributors to large systems, little attention is generally paid to system integration, i.e. dynamic events in the power system are ignored. As a result, most new energy sources are only capable of base-load operation, i.e. they have no load following or cycling capability. Wind turbines are no exception. Greater awareness of this implicit (and often unnecessary) limitation is needed. Analysis methods are recommended which include very low penetration (infinite bus) as well as very high penetration (stand-alone) scenarios.
Search area Expanding Strategy and Dynamic Priority Setting Method in the Improved 2-opt Method
NASA Astrophysics Data System (ADS)
Matayoshi, Mitsukuni; Nakamura, Morikazu; Miyagi, Hayao
We propose a new 2-opt-based method for use in a memetic algorithm, that is, a genetic algorithm (GA) with a local search. The basic idea derives from the fast 2-opt method (1) and the improved 2-opt method (20). Our new search method uses the “priority” employed in the improved 2-opt method, which represents the contribution level of a gene exchange. Matayoshi's method exchanges genes based on their previous contribution to fitness-value improvement. Using this concept of priority, we propose a search-area-expanding strategy for the improved 2-opt method: our method enlarges the search area according to the priority. Computer experiments show that the computation time needed to find the exact solution depends on the value of the priority. Since an appropriate priority may not be known beforehand, we also propose a method to adapt it to a suitable value: if no improvement is achieved for a certain number of generations, our dynamic priority setting method modifies the priority through a mutation operation. Experimental results show that the search-area-expanding strategy embedded with the dynamic priority setting method can find the exact solution at an earlier generation than the comparison methods.
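The underlying 2-opt move, on which the priority scheme is layered, exchanges two tour edges whenever this shortens the tour. A plain, priority-free sketch:

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Plain 2-opt: reverse a segment whenever the edge exchange shortens
    the tour, until no improving exchange remains (a local optimum)."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                         - math.dist(pts[a], pts[b]) - math.dist(pts[c], pts[d]))
                if delta < -1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# ten cities on the unit circle: the optimal tour is the circle order
pts = [(math.cos(2 * math.pi * k / 10), math.sin(2 * math.pi * k / 10))
       for k in range(10)]
random.seed(3)
tour = list(range(10))
random.shuffle(tour)
tour = two_opt(tour, pts)
```

A priority scheme such as the paper's reorders or restricts the (i, j) candidate pairs based on past contribution, rather than scanning them exhaustively as here.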
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation and 2) they fail to learn personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on the maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in complex wild environments, i.e., the Labeled Faces in the Wild database.
A method for analyzing dynamic stall of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Crimi, P.; Reeves, B. L.
1972-01-01
A model for each of the basic flow elements involved in the unsteady stall of a two-dimensional airfoil in incompressible flow is presented. The interaction of these elements is analyzed using a digital computer. Computations of the loading during transient and sinusoidal pitching motions are in good qualitative agreement with measured loads. The method was used to confirm that large torsional response of helicopter blades detected in flight tests can be attributed to dynamic stall.
Dynamic Data Driven Methods for Self-aware Aerospace Vehicles
2015-04-08
Subject terms: dynamic data driven application systems (DDDAS); surrogate modeling; reduced order modeling; multifidelity methods; self-aware UAV.
Advanced three-dimensional dynamic analysis by boundary element methods
NASA Technical Reports Server (NTRS)
Banerjee, P. K.; Ahma, S.
1985-01-01
Advanced formulations of boundary element method for periodic, transient transform domain and transient time domain solution of three-dimensional solids have been implemented using a family of isoparametric boundary elements. The necessary numerical integration techniques as well as the various solution algorithms are described. The developed analysis has been incorporated in a fully general purpose computer program BEST3D which can handle up to 10 subregions. A number of numerical examples are presented to demonstrate the accuracy of the dynamic analyses.
Least-squares finite element method for fluid dynamics
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Povinelli, Louis A.
1989-01-01
An overview is given of new developments of the least squares finite element method (LSFEM) in fluid dynamics. Special emphasis is placed on the universality of LSFEM; the symmetry and positiveness of the algebraic systems obtained from LSFEM; the accommodation of LSFEM to equal order interpolations for incompressible viscous flows; and the natural numerical dissipation of LSFEM for convective transport problems and high speed compressible flows. The performance of LSFEM is illustrated by numerical examples.
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)
Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.
2015-01-01
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly-impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant’s personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual’s data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
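The distributed-likelihood loop can be sketched in a few lines: the "devices" below are ordinary lists standing in for participants' hardware, and the optimizer sees only scalar likelihood values (a toy normal-mean model, not the MIDDLE software itself):

```python
import math
import random

random.seed(1)
# each "device" privately holds one participant's data; only scalar
# likelihood values ever leave it (hypothetical toy data)
devices = [[random.gauss(2.0, 1.0) for _ in range(20)] for _ in range(30)]

def local_nll(data, mu):
    # computed on the participant's device
    return sum(0.5 * (x - mu) ** 2 for x in data)

def aggregate_nll(mu):
    # the central optimizer only ever sees these scalars
    return sum(local_nll(d, mu) for d in devices)

def golden_minimize(f, lo, hi, tol=1e-8):
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

mu_hat = golden_minimize(aggregate_nll, -10.0, 10.0)   # distributed MLE of the mean
```

Because the negative log-likelihood is additive across participants, the estimate equals the one computed on pooled data, yet no raw observation ever reaches the optimizer.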
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
Tong, Wenxu; Wei, Ying; Murga, Leonel F; Ondrechen, Mary Jo; Williams, Ronald J
2009-01-01
A new monotonicity-constrained maximum likelihood approach, called Partial Order Optimum Likelihood (POOL), is presented and applied to the problem of functional site prediction in protein 3D structures, an important current challenge in genomics. The input consists of electrostatic and geometric properties derived from the 3D structure of the query protein alone. Sequence-based conservation information, where available, may also be incorporated. Electrostatics features from THEMATICS are combined with multidimensional isotonic regression to form maximum likelihood estimates of probabilities that specific residues belong to an active site. This allows likelihood ranking of all ionizable residues in a given protein based on THEMATICS features. The corresponding ROC curves and statistical significance tests demonstrate that this method outperforms prior THEMATICS-based methods, which in turn have been shown previously to outperform other 3D-structure-based methods for identifying active site residues. Then it is shown that the addition of one simple geometric property, the size rank of the cleft in which a given residue is contained, yields improved performance. Extension of the method to include predictions of non-ionizable residues is achieved through the introduction of environment variables. This extension results in even better performance than THEMATICS alone and constitutes to date the best functional site predictor based on 3D structure only, achieving nearly the same level of performance as methods that use both 3D structure and sequence alignment data. Finally, the method also easily incorporates such sequence alignment data, and when this information is included, the resulting method is shown to outperform the best current methods using any combination of sequence alignments and 3D structures. Included is an analysis demonstrating that when THEMATICS features, cleft size rank, and alignment-based conservation scores are used individually or in combination
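POOL's monotonicity constraint is enforced by isotonic regression. As a hedged illustration of the core mechanic only (the authors use a multidimensional version; this one-dimensional pool-adjacent-violators algorithm and the sample scores are simplifications of mine):

```python
def pava(y):
    # Pool Adjacent Violators: least-squares non-decreasing fit to y.
    # Each block stores [mean, count]; merge backwards on every violation.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, n2 = blocks.pop()
            m1, n1 = blocks.pop()
            blocks.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    return [m for m, n in blocks for _ in range(n)]

# Hypothetical per-residue activity scores ordered by increasing value of a
# THEMATICS-like feature; PAVA turns them into monotone probability estimates.
fit = pava([0.1, 0.3, 0.2, 0.6, 0.5, 0.9])
```

The fitted sequence is non-decreasing and preserves the overall mean, which is what makes the pooled values usable as monotonicity-constrained likelihood estimates.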
Comparing the Performance of Two Dynamic Load Distribution Methods
NASA Technical Reports Server (NTRS)
Kale, L. V.
1987-01-01
Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: to effectively utilize the available processors, the load must be distributed uniformly to all the processors. However, the structure of these computations cannot be predicted in advance, so static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods with extensive simulation studies. The two schemes are: the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that although simpler, the CWN is significantly more effective at distributing the work than the Gradient Model.
Numerical continuation methods for large-scale dissipative dynamical systems
NASA Astrophysics Data System (ADS)
Umbría, Juan Sánchez; Net, Marta
2016-11-01
A tutorial on continuation and bifurcation methods for the analysis of truncated dissipative partial differential equations is presented. It focuses on the computation of equilibria, periodic orbits, their loci of codimension-one bifurcations, and invariant tori. To make it more self-contained, it includes some definitions of basic concepts of dynamical systems, and some preliminaries on the general underlying techniques used to solve non-linear systems of equations by inexact Newton methods, and eigenvalue problems by means of subspace or Arnoldi iterations.
NASA Astrophysics Data System (ADS)
Wiemker, Rafael; Wormanns, Dag; Beyer, Florian; Blaffert, Thomas; Buelow, Thomas
2005-04-01
For the differential diagnosis of pulmonary nodules, assessment of contrast enhancement at chest CT scans after administration of contrast agent has been suggested. The likelihood of malignancy is considered very low if the contrast enhancement is lower than a certain threshold (10-20 HU). Automated average density measurement methods have been developed for that purpose. However, a certain fraction of malignant nodules does not exhibit significant enhancement when averaged over the whole nodule volume. The purpose of this paper is to test a new method for the reduction of false negative results. We have investigated a method of showing not only a single averaged contrast enhancement number, but a more detailed enhancement curve for each nodule, showing the enhancement as a function of distance to the boundary. A test set consisting of 11 malignant and 11 benign pulmonary lesions was used for validation, with diagnoses known from biopsy or follow-up for more than 24 months. For each nodule dynamic CT scans were available: the unenhanced native scan and scans 60, 120, 180 and 240 seconds after onset of contrast injection (1-4 mm reconstructed slice thickness). The suggested method for measurement and visualization of contrast enhancement as radially resolved curves has reduced false negative results (apparently unenhancing but truly malignant nodules), and thus improved sensitivity. It proved to be a valuable tool for differential diagnosis between malignant and benign lesions using dynamic CT.
Dynamic Analysis of a Spur Gear by the Dynamic Stiffness Method
NASA Astrophysics Data System (ADS)
HUANG, K. J.; LIU, T. S.
2000-07-01
This study treats a spur gear tooth as a variable cross-section Timoshenko beam to construct a dynamic model capable of obtaining the transient response of spur gears with involute profiles. The dynamic responses of a single tooth and a gear pair are investigated. Firstly, polynomials are used to represent the gear blank and the tooth profile. The dynamic stiffness matrix and natural frequencies of the gear are in turn calculated. The forced response of a tooth subject to a shaft-driven transmission torque is calculated by performing modal analysis. This study takes into account time-varying stiffness and mass matrices and the gear meshing forces at moving meshing points. The forced response at arbitrary points in a gear tooth can be obtained. Calculation results of fillet stresses and strains are compared with those in the literature to verify the proposed method.
A Method for Molecular Dynamics on Curved Surfaces
Paquay, Stefan; Kusters, Remy
2016-01-01
Dynamics simulations of constrained particles can greatly aid in understanding the temporal and spatial evolution of biological processes such as lateral transport along membranes and self-assembly of viruses. Most theoretical efforts in the field of diffusive transport have focused on solving the diffusion equation on curved surfaces, for which it is not tractable to incorporate particle interactions even though these play a crucial role in crowded systems. We show here that it is possible to take such interactions into account by combining standard constraint algorithms with the classical velocity Verlet scheme to perform molecular dynamics simulations of particles constrained to an arbitrarily curved surface. Furthermore, unlike Brownian dynamics schemes in local coordinates, our method is based on Cartesian coordinates, allowing for the reuse of many other standard tools without modifications, including parallelization through domain decomposition. We show that by applying the schemes to the Langevin equation for various surfaces, we obtain confined Brownian motion, which has direct applications to many biological and physical problems. Finally we present two practical examples that highlight the applicability of the method: 1) the influence of crowding and shape on the lateral diffusion of proteins in curved membranes; and 2) the self-assembly of a coarse-grained virus capsid protein model. PMID:27028633
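A minimal sketch of the idea, combining velocity Verlet with SHAKE/RATTLE-style projections for the simplest curved surface, the unit sphere. This is an illustration under those assumptions, not the authors' code (which handles arbitrarily curved surfaces, interactions, and thermostats); the position projection plays the role of SHAKE and the velocity projection the role of RATTLE, all in plain Cartesian coordinates.

```python
import numpy as np

def step(r, v, force, dt=1e-3):
    # Velocity Verlet with holonomic constraint |r| = 1 (unit sphere).
    r_new = r + dt * v + 0.5 * dt**2 * force(r)
    r_new /= np.linalg.norm(r_new)            # SHAKE: put position back on surface
    v_new = v + 0.5 * dt * (force(r) + force(r_new))
    v_new -= np.dot(v_new, r_new) * r_new     # RATTLE: keep velocity tangent
    return r_new, v_new

force = lambda r: np.zeros(3)                  # free (geodesic) motion
r = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
for _ in range(1000):
    r, v = step(r, v, force)
```

For a true constraint solver the normalization would be replaced by a Lagrange-multiplier solve, but for the sphere the projection is exact and the particle traces a great circle at nearly constant speed.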
A Poisson-Boltzmann dynamics method with nonperiodic boundary condition
NASA Astrophysics Data System (ADS)
Lu, Qiang; Luo, Ray
2003-12-01
We have developed a well-behaved and efficient finite difference Poisson-Boltzmann dynamics method with a nonperiodic boundary condition. This is made possible, in part, by a rather fine grid spacing used for the finite difference treatment of the reaction field interaction. The stability is also made possible by a new dielectric model that is smooth both over time and over space, an important issue in the application of implicit solvents. In addition, the electrostatic focusing technique facilitates the use of an accurate yet efficient nonperiodic boundary condition: boundary grid potentials computed by the sum of potentials from individual grid charges. Finally, the particle-particle particle-mesh technique is adopted in the computation of the Coulombic interaction to balance accuracy and efficiency in simulations of large biomolecules. Preliminary testing shows that the nonperiodic Poisson-Boltzmann dynamics method is numerically stable in trajectories at least 4 ns long. The new model is also fairly efficient: its efficiency is comparable to that of the pairwise generalized Born solvent model, making it a strong candidate for dynamics simulations of biomolecules in dilute aqueous solutions. Note that the current treatment of total electrostatic interactions is with no cutoff, which is important for simulations of biomolecules. Rigorous treatment of the Debye-Hückel screening is also possible within the Poisson-Boltzmann framework: its importance is demonstrated by a simulation of a highly charged protein.
Fast inference in generalized linear models via expected log-likelihoods
Ramirez, Alexandro D.; Paninski, Liam
2015-01-01
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
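The substitution described above can be made concrete for a Poisson GLM with exponential link, whose exact log-likelihood (up to a constant) is y'Xw - sum_i exp(x_i . w). The sketch below is an illustration under assumptions of mine, not the paper's code: for standard-normal covariates, E[exp(x . w)] = exp(|w|^2 / 2) in closed form, so the expected log-likelihood avoids the pass over all covariate rows.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5000, 3
X = rng.standard_normal((N, d))       # covariates ~ N(0, I), known to experimenter
w = np.array([0.2, -0.1, 0.3])
y = rng.poisson(np.exp(X @ w))        # Poisson GLM responses

def exact_loglik(w):
    eta = X @ w
    return y @ eta - np.exp(eta).sum()        # O(N) sum over all covariates

def expected_loglik(w):
    # Replace sum_i exp(x_i . w) by N * E[exp(x . w)] = N * exp(|w|^2 / 2);
    # no pass over X is needed for the nonlinear term.
    return y @ (X @ w) - N * np.exp(w @ w / 2)
```

Maximizing `expected_loglik` instead of `exact_loglik` is the "maximum EL" idea: the data-dependent linear term is shared, and only the covariate-expectation term is approximated.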
Evaluation of the sensing block method for dynamic force measurement
NASA Astrophysics Data System (ADS)
Zhang, Qinghui; Chen, Hao; Li, Wenzhao; Song, Li
2017-01-01
The sensing block method was proposed for dynamic force measurement by Tanimura et al. in 1994. Compared with the Split Hopkinson pressure bar (SHPB) technique, it can provide a much longer measuring time for dynamic property tests of materials. However, the signals recorded by the sensing block are always accompanied by additional oscillations. Tanimura et al. discussed the effect of the force rising edge on the test results, but more research is still needed. In this paper, some more dominant factors have been extracted through dimensional analysis. Finite element simulation has been performed to assess these factors. Based on the analysis and simulation, some valuable results are obtained, and some criteria proposed in this paper can be applied in the design or selection of the sensing block.
System and method for reducing combustion dynamics in a combustor
Uhm, Jong Ho; Johnson, Thomas Edward; Zuo, Baifang; York, William David
2015-09-01
A system for reducing combustion dynamics in a combustor includes an end cap having an upstream surface axially separated from a downstream surface, and tube bundles extend from the upstream surface through the downstream surface. A divider inside a tube bundle defines a diluent passage that extends axially through the downstream surface, and a diluent supply in fluid communication with the divider provides diluent flow to the diluent passage. A method for reducing combustion dynamics in a combustor includes flowing a fuel through tube bundles, flowing a diluent through a diluent passage inside a tube bundle, wherein the diluent passage extends axially through at least a portion of the end cap into a combustion chamber, and forming a diluent barrier in the combustion chamber between the tube bundle and at least one other adjacent tube bundle.
Dynamic Methods for Investigating the Conformational Changes of Biological Macromolecules
NASA Astrophysics Data System (ADS)
Vidolova-Angelova, E.; Peshev, Z.; Shaquiri, Z.; Angelov, D.
2010-01-01
Fast conformational changes of biological macromolecules, such as RNA folding and DNA-protein interactions, play a crucial role in their biological functions. These conformational changes are thought to take place on time scales from sub-milliseconds to a few seconds. The development of appropriate dynamic methods possessing both high spatial (one nucleotide) and time resolution is therefore of particular interest. Here, we present two different approaches we developed for studying nucleic acid conformational changes such as salt-induced tRNA folding and the interaction of the transcription factor NF-κB with its recognition DNA sequence. Importantly, only a single laser pulse is sufficient for accurately measuring the whole decay curve. This peculiarity can be used in dynamical experiments.
Hybrid pairwise likelihood analysis of animal behavior experiments.
Cattelan, Manuela; Varin, Cristiano
2013-12-01
The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fight dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons.
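As context, here is a minimal sketch of the independence baseline that such analyses extend: a Bradley-Terry paired-comparison model, P(i beats j) = a_i / (a_i + a_j), fitted by Hunter's (2004) MM algorithm. The tournament counts and code are illustrative assumptions of mine; the paper's hybrid pairwise likelihood additionally models the dependence between fights involving the same animal, which this baseline ignores.

```python
# Hypothetical round-robin tournament: (winner, loser) -> number of fights won.
wins = {(0, 1): 7, (1, 0): 3,
        (0, 2): 6, (2, 0): 4,
        (1, 2): 5, (2, 1): 5}
n = 3
a = [1.0] * n                          # Bradley-Terry ability parameters
for _ in range(200):                   # MM updates (Hunter, 2004)
    for i in range(n):
        w_i = sum(c for (x, y), c in wins.items() if x == i)
        denom = sum((wins.get((i, j), 0) + wins.get((j, i), 0)) / (a[i] + a[j])
                    for j in range(n) if j != i)
        a[i] = w_i / denom
    s = sum(a)
    a = [x / s for x in a]             # fix the scale: abilities sum to one
```

With this balanced schedule the fitted abilities order the animals by total wins (animal 0 first), which is the pattern a dependence-aware model would then refine.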
Informative Parameters of Dynamic Geo-electricity Methods
NASA Astrophysics Data System (ADS)
Tursunmetov, R.
With the growing complexity of geological tasks and the need to reveal anomaly zones connected with ore, oil, gas and water availability, methods of dynamic geo-electricity have come into use. In these methods the geological environment is considered as an interphase, irregular one. The main dynamic element of this environment is the double electric layer, which develops on the boundary between the solid and liquid phases. In ore- or water-saturated environments, double electric layers become electrochemically or electrokinetically active elements of the geo-electric environment, which, in turn, form a natural electric field. This field influences the distribution of artificially created fields, and their interaction bears a complicated superposition or non-linear character. The geological environment is therefore considered an active one, able to accumulate and transform artificially superposed fields. Its main dynamic property is the non-linear behavior of specific electric resistance and soil polarization depending on current density and measurement frequency, which serve as informative parameters for dynamic geo-electricity methods. The study of disperse soil electric properties in an impulse-frequency regime, together with the temporal and frequency characteristics of the electric field, is of main interest for the definition of geo-electric anomalies. The study of the volt-ampere characteristics of the electromagnetic field has great practical significance. These characteristics are determined by electrochemically active ore- and water-saturated fields. The mentioned parameters depend on the polarity of the initiated field, in particular on the character, composition and mineralization of the ore-saturated zone and on the availability of a natural electric field under cathode and anode mineralization. The non-linear behavior of the environment's dynamic properties affects the structure of the initiated field, which allows the location of anomalous zones to be defined. Finally, the study of the dynamic properties of soil anisotropy in space will allow the identification of filtration flows
Some splitting methods for equations of geophysical fluid dynamics
NASA Astrophysics Data System (ADS)
Ji, Zhongzhen; Wang, Bin
1995-03-01
In this paper, equations of atmospheric and oceanic dynamics are reduced to a kind of evolutionary equation in operator form. On this basis it is concluded that the separability of motion stages is relative, and it is shown that the traditional splitting methods, established on the physical separability of the fast and slow stages, neglect the interaction between the two stages to some extent. Three splitting patterns are then summed up from the splitting methods in common use and compared. The comparison shows that only the improved splitting pattern (ISP) can be of second order and preserve the interaction well. Finally, applications of some splitting methods to numerical simulations of typhoon tracks make clear that ISP performs best and can save more than 80% of CPU time.
A novel method to study cerebrospinal fluid dynamics in rats
Karimy, Jason K.; Kahle, Kristopher T.; Kurland, David B.; Yu, Edward; Gerzanich, Volodymyr; Simard, J. Marc
2014-01-01
Background: Cerebrospinal fluid (CSF) flow dynamics play critical roles in both the immature and adult brain, with implications for neurodevelopment and disease processes such as hydrocephalus and neurodegeneration. Remarkably, the only reported method to date for measuring CSF formation in laboratory rats is the indirect tracer dilution method (a.k.a. ventriculocisternal perfusion), which has limitations. New Method: Anesthetized rats were mounted in a stereotaxic apparatus, both lateral ventricles were cannulated, and the Sylvian aqueduct was occluded. Fluid exited one ventricle at a rate equal to the rate of CSF formation plus the rate of infusion (if any) into the contralateral ventricle. Pharmacological agents infused at a constant known rate into the contralateral ventricle were tested for their effect on CSF formation in real-time. Results: The measured rate of CSF formation was increased by blockade of the Sylvian aqueduct but was not changed by increasing the outflow pressure (0–3 cm of H2O). In male Wistar rats, CSF formation was age-dependent: 0.39±0.06, 0.74±0.05, 1.02±0.04 and 1.40±0.06 µL/min at 8, 9, 10 and 12 weeks, respectively. CSF formation was reduced 57% by intraventricular infusion of the carbonic anhydrase inhibitor, acetazolamide. Comparison with Existing Methods: Tracer dilution methods do not permit ongoing real-time determination of the rate of CSF formation, are not readily amenable to pharmacological manipulations, and require critical assumptions. Direct measurement of CSF formation overcomes these limitations. Conclusions: Direct measurement of CSF formation in rats is feasible. Our method should prove useful for studying CSF dynamics in normal physiology and disease models. PMID:25554415
Applicability of optical scanner method for fine root dynamics
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Ohashi, Mizue; Makita, Naoki; Khoon Kho, Lip; Katayama, Ayumi; Matsumoto, Kazuho; Ikeno, Hidetoshi
2016-04-01
Fine root dynamics is one of the important components of forest carbon cycling, as ~60% of tree photosynthetic production can be allocated to root growth and metabolic activities. Various techniques have been developed for monitoring fine root biomass, production and mortality in order to understand the carbon pools and fluxes resulting from fine root dynamics. The minirhizotron method is now a widely used technique, in which a transparent tube is inserted into the soil and researchers count the increase and decrease of roots along the tube using images taken by a minirhizotron camera or minirhizotron video camera inside the tube. This method allows root behavior to be observed directly without destruction, but it has several weaknesses, e.g., the difficulty of scaling up the results to the stand level because of the small observation windows. Also, most of the image analysis is performed manually, which may yield insufficiently quantitative and objective data. Recently, the scanner method has been proposed, which can produce much larger (A4-size) images at lower cost than the minirhizotron methods. However, laborious and time-consuming image analysis still limits the applicability of this method. In this study, therefore, we aimed to develop a new protocol for scanner image analysis to extract root behavior in soil. We evaluated the applicability of this method in two ways: 1) the impact of different observers, including root-study professionals, semi-professionals and non-professionals, on the detected results of root dynamics such as abundance, growth, and decomposition; and 2) the impact of window size on the results using a random-sampling exercise. We applied our new protocol to analyze temporal changes in root behavior from sequential scanner images derived from a Bornean tropical forest. The results detected by the six observers showed considerable concordance in the temporal changes in the abundance and growth of fine roots, but less in the decomposition. We also examined
A dynamic calibration method for the pressure transducer
NASA Astrophysics Data System (ADS)
Wang, Zhongyu; Wang, Zhuoran; Li, Qiang
2016-01-01
Pressure transducers are widely used in industry. A calibrated pressure transducer can increase the performance of precision instruments in a closed mechanical relationship. Calibration is the key to ensuring that a pressure transducer has high precision and good dynamic characteristics. Unfortunately, current calibration methods can usually be applied only under good laboratory conditions, and only one pressure transducer can be calibrated at a time, so the calibration efficiency falls short of the requirements of modern industry. A dynamic and fast calibration technology, comprising a calibration device and a corresponding data processing method, is proposed in this paper. Firstly, the pressure transducers to be calibrated are placed in a small cavity chamber, and the calibration process contains only a single loop; the outputs of each calibrated transducer are recorded automatically by the control terminal. Secondly, LabView programming is used for information acquisition and data processing, so that the repeatability and nonlinearity indicators can be obtained directly. Finally, several pressure transducers are calibrated simultaneously in an experiment to verify the suggested calibration technology. The experimental results show that this method can be used to calibrate pressure transducers in practical engineering measurement.
Coupled-cluster methods for core-hole dynamics
NASA Astrophysics Data System (ADS)
Picon, Antonio; Cheng, Lan; Hammond, Jeff R.; Stanton, John F.; Southworth, Stephen H.
2014-05-01
Coupled cluster (CC) is a powerful numerical method used in quantum chemistry in order to take into account electron correlation with high accuracy and size consistency. In the CC framework, excited, ionized, and electron-attached states can be described by the equation of motion (EOM) CC technique. However, bringing CC methods to describe molecular dynamics induced by x rays is challenging. X rays have the special feature of interacting with core-shell electrons that are close to the nucleus. Core-shell electrons can be ionized or excited to a valence shell, leaving a core-hole that will decay very fast (e.g. 2.4 fs for K-shell of Ne) by emitting photons (fluorescence process) or electrons (Auger process). Both processes are a clear manifestation of a many-body effect, involving electrons in the continuum in the case of Auger processes. We review our progress of developing EOM-CC methods for core-hole dynamics. Results of the calculations will be compared with measurements on core-hole decays in atomic Xe and molecular XeF2. This work is funded by the Office of Basic Energy Sciences, Office of Science, U.S. Department of Energy, under Contract No. DE-AC02-06CH11357.
Numerical likelihood analysis of cosmic ray anisotropies
Carlos Hojvat et al.
2003-07-02
A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended in order to include measurement errors, and makes maximum unbiased use of all available information.
Quantum dynamics by the constrained adiabatic trajectory method
Leclerc, A.; Jolicard, G.; Guerin, S.; Killingbeck, J. P.
2011-03-15
We develop the constrained adiabatic trajectory method (CATM), which allows one to solve the time-dependent Schrödinger equation constraining the dynamics to a single Floquet eigenstate, as if it were adiabatic. This constrained Floquet state (CFS) is determined from the Hamiltonian modified by an artificial time-dependent absorbing potential whose forms are derived according to the initial conditions. The main advantage of this technique for practical implementation is that the CFS is easy to determine even for large systems since its corresponding eigenvalue is well isolated from the others through its imaginary part. The properties and limitations of the CATM are explored through simple examples.
A method for the evaluation of wide dynamic range cameras
NASA Astrophysics Data System (ADS)
Wong, Ping Wah; Lu, Yu Hua
2012-01-01
We propose a multi-component metric for the evaluation of digital or video cameras under wide dynamic range (WDR) scenes. The method is based on a single image capture using a specifically designed WDR test chart and light box. Test patterns on the WDR test chart include gray ramps, color patches, arrays of gray patches, white bars, and a relatively dark gray background. The WDR test chart is professionally made using 3 layers of transparencies to produce a contrast ratio of approximately 110 dB for WDR testing. A light box is designed to provide a uniform surface with light level at about 80K to 100K lux, which is typical of a sunny outdoor scene. From a captured image, 9 image quality component scores are calculated. The components include number of resolvable gray steps, dynamic range, linearity of tone response, grayness of gray ramp, number of distinguishable color patches, smearing resistance, edge contrast, grid clarity, and weighted signal-to-noise ratio. A composite score is calculated from the 9 component scores to reflect the comprehensive image quality in cameras under WDR scenes. Experimental results have demonstrated that the multi-component metric corresponds very well to subjective evaluation of wide dynamic range behavior of cameras.
Recent developments in maximum likelihood estimation of MTMM models for categorical data.
Jeon, Minjeong; Rijmen, Frank
2014-01-01
Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of this study is to introduce three newly developed ML methods that are eligible for estimating MTMM models with categorical responses: variational maximization-maximization (e.g., Rijmen and Jeon, 2013), alternating imputation posterior (e.g., Cho and Rabe-Hesketh, 2011), and Monte Carlo local likelihood (e.g., Jeon et al., under revision). Each method is briefly described, and its applicability for MTMM models with categorical data is discussed.
Multiscale molecular dynamics using the matched interface and boundary method
Geng Weihua; Wei, G.W.
2011-01-20
The Poisson-Boltzmann (PB) equation is an established multiscale model for electrostatic analysis of biomolecules and other dielectric systems. The PB based molecular dynamics (MD) approach has the potential to tackle large biological systems. Obstacles that hinder the current development of PB based MD methods are concerns about accuracy, stability, efficiency and reliability. The presence of a complex solvent-solute interface, geometric singularities and charge singularities leads to challenges in the numerical solution of the PB equation and in electrostatic force evaluation in PB based MD methods. Recently, the matched interface and boundary (MIB) method has been utilized to develop the first second-order accurate PB solver that is numerically stable in dealing with discontinuous dielectric coefficients, complex geometric singularities and singular source charges. The present work develops the PB based MD approach using the MIB method. A new formulation of electrostatic forces is derived to allow the use of sharp molecular surfaces. Accurate reaction field forces are obtained by directly differentiating the electrostatic potential. Dielectric boundary forces are evaluated at the solvent-solute interface using an accurate Cartesian-grid surface integration method. The electrostatic forces located at reentrant surfaces are appropriately assigned to related atoms. Extensive numerical tests are carried out to validate the accuracy and stability of the present electrostatic force calculation. The new PB based MD method is implemented in conjunction with the AMBER package. MIB based MD simulations of biomolecules are demonstrated via a few example systems.
A new method for parameter estimation in nonlinear dynamical equations
NASA Astrophysics Data System (ADS)
Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao
2015-01-01
Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). The method exploits the self-organizing, adaptive and self-learning features of EM, which are inspired by biological natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated by various numerical tests on the classic chaotic Lorenz system (Lorenz 1963). The results indicate that the new method provides fast and effective parameter estimation regardless of whether some or all parameters of the Lorenz equations are unknown, and that it has a good convergence rate. Because noise is inevitable in observational data, the influence of observational noise on the performance of the method has also been investigated. Strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise. However, the precision of the parameter estimates remains acceptable for relatively weak noise, e.g. an SNR of 20 or 30 dB, indicating that the method is reasonably robust to noise.
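To make the idea concrete, the following sketch (not the authors' algorithm; population size, mutation scale, search bounds and trajectory length are illustrative assumptions) evolves candidate (sigma, rho, beta) values against a short simulated Lorenz trajectory:

```python
import numpy as np

def lorenz_rk4(state, p, dt=0.01):
    # One fourth-order Runge-Kutta step of the Lorenz equations.
    sigma, rho, beta = p
    f = lambda s: np.array([sigma * (s[1] - s[0]),
                            s[0] * (rho - s[2]) - s[1],
                            s[0] * s[1] - beta * s[2]])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def trajectory(p, s0, n=100):
    s, out = np.array(s0, float), np.empty((n, 3))
    for i in range(n):
        s = lorenz_rk4(s, p)
        out[i] = s
    return out

def fitness(p, obs, s0):
    # Negative mean squared error against the observed (short) trajectory.
    return -np.mean((trajectory(p, s0, len(obs)) - obs) ** 2)

def evolve(obs, s0, pop_size=30, gens=40, seed=0):
    # Gaussian mutation plus truncation selection over parents + children.
    rng = np.random.default_rng(seed)
    pop = rng.uniform([5, 20, 1], [15, 35, 4], size=(pop_size, 3))
    for _ in range(gens):
        children = pop + rng.normal(0.0, 0.3, pop.shape)
        union = np.vstack([pop, children])
        scores = np.array([fitness(p, obs, s0) for p in union])
        pop = union[np.argsort(scores)[-pop_size:]]  # keep the fittest
    return pop[-1]

true_p, s0 = (10.0, 28.0, 8.0 / 3.0), (1.0, 1.0, 1.0)
obs = trajectory(true_p, s0)
est = evolve(obs, s0)
```

Because fitness is computed on a short trajectory segment, chaotic divergence does not yet dominate the error surface; longer observation windows or noisy data would call for the robustness considerations discussed in the abstract.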
Visualization Methods to Quantify DNAPL Dynamics in Chemical Remediation
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, X.; Jawitz, J. W.
2006-12-01
A novel multiple-wavelength visualization method is under development for quantifying multiphase fluid dynamics in porous media. This technique is applied here for in situ characterization of laboratory-scale DNAPL chemical remediation, including co-solvent flushing and surfactant flushing. Development of this method is motivated by the limitations of current quantitative imaging methods. The method considers both light absorption (Beer's Law) and interfacial diffraction (Fresnel's Law). Furthermore, the use of multiple wavelengths makes it possible to eliminate the interface structure effect. By using images taken at two wavelengths with band-pass filters, the heterogeneous DNAPL saturation distribution in a two-dimensional laboratory chamber can be quantified at any time during chemical remediation. Previously published DNAPL visualization techniques have been shown to be reasonably accurate for post-spill conditions, but are ineffective once significant dissolution has occurred. The method introduced here is shown to achieve mass balances of 90% and greater even during chemical remediation. Furthermore, the heterogeneous saturation distribution in the chamber (i.e. a Eulerian description) and the distribution over stream tubes (i.e. a Lagrangian description) are quantified using the new method and shown to be superior to those obtained using the binary imaging technique.
Maximum Likelihood and Bayesian Parameter Estimation in Item Response Theory.
ERIC Educational Resources Information Center
Lord, Frederic M.
There are currently three main approaches to parameter estimation in item response theory (IRT): (1) joint maximum likelihood, exemplified by LOGIST, yielding maximum likelihood estimates; (2) marginal maximum likelihood, exemplified by BILOG, yielding maximum likelihood estimates of item parameters (ability parameters can be estimated…
A maximum-likelihood estimation of pairwise relatedness for autopolyploids
Huang, K; Guo, S T; Shattuck, M R; Chen, S T; Qi, X G; Zhang, P; Li, B G
2015-01-01
Relatedness between individuals is central to ecological genetics. Multiple methods are available to quantify relatedness from molecular data, including method-of-moment and maximum-likelihood estimators. We describe a maximum-likelihood estimator for autopolyploids, and quantify its statistical performance under a range of biologically relevant conditions. The statistical performances of five additional polyploid estimators of relatedness were also quantified under identical conditions. When comparing truncated estimators, the maximum-likelihood estimator exhibited lower root mean square error under some conditions but was more biased for non-relatives, especially when the number of alleles per locus was low. However, even under these conditions, this bias was reduced to statistical insignificance with more thorough genetic sampling. We also considered ambiguity in polyploid heterozygote genotyping and developed a weighting methodology for candidate genotypes. The statistical performances of three polyploid estimators were compared under both ideal and actual conditions (including inbreeding and double reduction). The software package POLYRELATEDNESS is available to perform this estimation and supports a maximum ploidy of eight. PMID:25370210
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
Gopich, Irina V.
2015-01-21
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
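A schematic version of such a likelihood calculation for a two-state model can be written directly from the description above. The treatment here is simplified (the photon count rate enters only through the interphoton times, and background and detection effects are ignored), and all parameter values are illustrative:

```python
import numpy as np
from scipy.linalg import expm

def log_likelihood(params, colors, dts):
    # Two conformational states with FRET efficiencies E1, E2 and
    # interconversion rates k12, k21; colors: 1 = acceptor, 0 = donor.
    E1, E2, k12, k21 = params
    K = np.array([[-k12, k21], [k12, -k21]])     # rate matrix
    peq = np.array([k21, k12]) / (k12 + k21)     # equilibrium populations
    E = np.array([E1, E2])
    v, logL = peq.copy(), 0.0
    for c, dt in zip(colors, dts):
        v = expm(K * dt) @ v                     # state relaxation between photons
        v = (E if c == 1 else 1.0 - E) * v       # color factor at each photon
        s = v.sum()
        logL += np.log(s)
        v /= s                                   # rescale to avoid underflow
    return logL

def simulate(E1, E2, k12, k21, n_photons, count_rate, rng):
    # Photons at Poisson times; the hidden state jumps between photons.
    rates, E = (k12, k21), (E1, E2)
    state, colors, dts = 0, [], []
    for _ in range(n_photons):
        dt = rng.exponential(1.0 / count_rate)
        remaining = dt
        jump = rng.exponential(1.0 / rates[state])
        while jump < remaining:
            remaining -= jump
            state = 1 - state
            jump = rng.exponential(1.0 / rates[state])
        colors.append(int(rng.random() < E[state]))
        dts.append(dt)
    return colors, np.array(dts)

rng = np.random.default_rng(1)
colors, dts = simulate(0.9, 0.2, 2.0, 3.0, 500, 50.0, rng)
```

Maximizing log_likelihood over the four parameters (e.g. with scipy.optimize) would give the estimates, and the curvature of the likelihood at the maximum would give their uncertainties, as in the abstract.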
An automated dynamic water vapor permeation test method
NASA Astrophysics Data System (ADS)
Gibson, Phillip; Kendrick, Cyrus; Rivin, Donald; Charmchii, Majid; Sicuranza, Linda
1995-05-01
This report describes an automated apparatus developed to measure the transport of water vapor through materials under a variety of conditions. The apparatus is more convenient to use than the traditional test methods for textiles and clothing materials, and allows one to use a wider variety of test conditions to investigate the concentration-dependent and nonlinear transport behavior of many of the semipermeable membrane laminates which are now available. The dynamic moisture permeation cell (DMPC) has been automated to permit multiple setpoint testing under computer control, and to facilitate investigation of transient phenomena. Results generated with the DMPC are in agreement with and of comparable accuracy to those from the ISO 11092 (sweating guarded hot plate) method of measuring water vapor permeability.
Computational methods. [Calculation of dynamic loading to offshore platforms
Maeda, H. (Inst. of Industrial Science)
1993-02-01
With regard to computational methods for hydrodynamic forces, the identification of marine hydrodynamics in offshore technology is first discussed. General computational methods and the state of the art are then reviewed, with flow problems in offshore technology categorized as developed, developing or undeveloped, followed by directions for future work. Marine hydrodynamics consists of water-surface and underwater fluid dynamics. It covers not only hydrodynamics proper but also aerodynamics, such as wind loads and current-wave-wind interaction; hydrodynamic phenomena such as cavitation and underwater noise; multi-phase flow, such as two-phase flow in pipes, air bubbles in water, and surface and internal waves; and magneto-hydrodynamics, such as propulsion based on superconductivity. Among these, two key topics are singled out as the identification of marine hydrodynamics in offshore technology: the free surface and vortex shedding.
A spatiotemporal characterization method for the dynamic cytoskeleton
Alhussein, Ghada; Shanti, Aya; Farhat, Ilyas A. H.; Timraz, Sara B. H.; Alwahab, Noaf S. A.; Pearson, Yanthe E.; Martin, Matthew N.; Christoforou, Nicolas
2016-01-01
The significant gap between quantitative and qualitative understanding of cytoskeletal function is a pressing problem; microscopy and labeling techniques have improved qualitative investigations of localized cytoskeleton behavior, whereas quantitative analyses of whole cell cytoskeleton networks remain challenging. Here we present a method that accurately quantifies cytoskeleton dynamics. Our approach digitally subdivides cytoskeleton images using interrogation windows, within which box-counting is used to infer a fractal dimension (D_f) to characterize spatial arrangement, and gray value intensity (GVI) to determine actin density. A partitioning algorithm further obtains cytoskeleton characteristics from the perinuclear, cytosolic, and periphery cellular regions. We validated our measurement approach on Cytochalasin-treated cells using transgenically modified dermal fibroblast cells expressing fluorescent actin cytoskeletons. This method differentiates between normal and chemically disrupted actin networks, and quantifies rates of cytoskeletal degradation. Furthermore, GVI distributions were found to be inversely proportional to D_f, having several biophysical implications for cytoskeleton formation/degradation. We additionally demonstrated detection sensitivity of differences in D_f and GVI for cells seeded on substrates with varying degrees of stiffness, and coated with different attachment proteins. This general approach can be further implemented to gain insights on dynamic growth, disruption, and structure of the cytoskeleton (and other complex biological morphology) due to biological, chemical, or physical stimuli. © 2016 The Authors. Cytoskeleton Published by Wiley Periodicals, Inc. PMID:27015595
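The box-counting step used to infer D_f can be sketched for a binary mask (the study applies it to fluorescent-actin interrogation windows together with gray-value intensity; this standalone version is illustrative):

```python
import numpy as np

def fractal_dimension(mask):
    # Box-counting estimate of the fractal dimension D_f of a square
    # binary mask (illustrative sketch of the box-counting step).
    n = mask.shape[0]
    assert mask.ndim == 2 and mask.shape[1] == n
    sizes, counts = [], []
    size = n // 2
    while size >= 1:
        k = n // size
        # count boxes of side `size` that contain any foreground pixel
        view = mask[:k * size, :k * size].reshape(k, size, k, size)
        counts.append(view.any(axis=(1, 3)).sum())
        sizes.append(size)
        size //= 2
    # D_f is the slope of log(count) against log(1/size)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

For a filled square the estimate approaches 2, and for a one-pixel-wide line it approaches 1, which is a convenient sanity check.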
Likelihood-based modification of experimental crystal structure electron density maps
Terwilliger, Thomas C.
2005-04-16
A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if the structure factor set {F_h} were correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize its likelihood for the experimental crystal structure. An improved electron density map is constructed from the maximized structure factors.
Confidence interval of the likelihood ratio associated with mixed stain DNA evidence.
Beecham, Gary W; Weir, Bruce S
2011-01-01
Likelihood ratios are necessary to properly interpret mixed stain DNA evidence. They can flexibly consider alternate hypotheses and can account for population substructure. The likelihood ratio should be seen as an estimate and not a fixed value, because the calculations are functions of allelic frequency estimates that were estimated from a small portion of the population. Current methods do not account for uncertainty in the likelihood ratio estimates and are therefore an incomplete picture of the strength of the evidence. We propose the use of a confidence interval to report the consequent variation of likelihood ratios. The confidence interval is calculated using the standard forensic likelihood ratio formulae and a variance estimate derived using the Taylor expansion. The formula is explained, and a computer program has been made available. Numeric work shows that the evidential strength of DNA profiles decreases as the variation among populations increases.
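For the simplest single-contributor case, a heterozygous match with likelihood ratio 1/(2*p_a*p_b), the Taylor-expansion (delta-method) interval can be sketched as follows; the mixture formulae of the paper are more involved, and the allele frequencies and database size here are illustrative:

```python
import math

def lr_heterozygote_ci(pa, pb, n_alleles, z=1.96):
    # Likelihood ratio 1/(2*pa*pb) for a heterozygous single-source match,
    # with a delta-method (Taylor expansion) confidence interval that
    # propagates the sampling variance of the allele frequency estimates.
    lr = 1.0 / (2.0 * pa * pb)
    # multinomial sampling variances/covariance over n_alleles sampled alleles
    var_a = pa * (1.0 - pa) / n_alleles
    var_b = pb * (1.0 - pb) / n_alleles
    cov_ab = -pa * pb / n_alleles
    # delta method on log LR: d(log LR)/d pa = -1/pa, d(log LR)/d pb = -1/pb
    var_log = var_a / pa**2 + var_b / pb**2 + 2.0 * cov_ab / (pa * pb)
    sd = math.sqrt(var_log)
    return lr, (lr * math.exp(-z * sd), lr * math.exp(z * sd))

lr, (lo, hi) = lr_heterozygote_ci(0.1, 0.1, n_alleles=200)
```

Working on the log scale keeps the interval's endpoints positive, and the interval widens as the reference database (n_alleles) shrinks, which is the behavior the abstract argues should be reported.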
Maximum likelihood estimation for life distributions with competing failure modes
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1979-01-01
Systems that are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subjected to. Maximum likelihood estimation methods are discussed, and specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate that the methods are promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
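A minimal sketch of such an estimation, assuming a smallest extreme-value (Gumbel) life distribution for the mode of interest and treating failures from a hypothetical competing mode as censoring:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_l  # smallest extreme-value distribution

rng = np.random.default_rng(0)
t1 = gumbel_l.rvs(loc=5.0, scale=0.8, size=400, random_state=rng)  # mode of interest
t2 = rng.normal(5.5, 1.0, size=400)      # hypothetical competing failure mode
t = np.minimum(t1, t2)                   # observed failure time
mode1 = t1 <= t2                         # True where mode 1 caused the failure

def negloglik(params):
    mu, s = params
    if s <= 0:
        return np.inf
    # exact times for mode-1 failures; other failures censor the mode-1 life
    ll = gumbel_l.logpdf(t[mode1], loc=mu, scale=s).sum()
    ll += gumbel_l.logsf(t[~mode1], loc=mu, scale=s).sum()
    return -ll

res = minimize(negloglik, x0=[4.0, 1.0], method="Nelder-Mead")
mu_hat, s_hat = res.x
```

Under independent competing modes, each mode's parameters can be estimated this way from its own failures plus the censoring times imposed by the other modes.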
Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.
Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram
2017-02-01
In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of them are "true zeros", indicating drug-adverse event pairs that cannot occur; these are distinguished from the remaining zero counts, which simply indicate that the drug-adverse event pairs have not occurred, or have not been reported, yet. In this paper, a zero-inflated Poisson (ZIP) model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the model parameters are obtained using the expectation-maximization algorithm. The test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the ZIP model based likelihood ratio test performs similarly to the Poisson model based likelihood ratio test when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs, with varying percentages of observed zero-count cells, from the 2006 to 2011 Adverse Event Reporting System database.
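The expectation-maximization step for fitting the zero-inflated Poisson part can be sketched as follows (a generic ZIP fit on synthetic counts, not the full stratified signal-detection procedure of the paper):

```python
import numpy as np

def zip_em(counts, n_iter=200):
    # EM for a zero-inflated Poisson: with probability pi a cell is a
    # structural ("true") zero, otherwise the count is Poisson(lam).
    counts = np.asarray(counts, float)
    pi, lam = 0.5, max(counts.mean(), 1e-6)
    for _ in range(n_iter):
        # E-step: posterior probability that each observed zero is structural
        p0 = pi + (1.0 - pi) * np.exp(-lam)
        z = np.where(counts == 0, pi / p0, 0.0)
        # M-step: update mixing weight and Poisson mean
        pi = z.mean()
        lam = ((1.0 - z) * counts).sum() / (1.0 - z).sum()
    return pi, lam

# Synthetic cell counts: 30% structural zeros, Poisson(4) otherwise.
rng = np.random.default_rng(7)
structural = rng.random(2000) < 0.3
counts = np.where(structural, 0, rng.poisson(4.0, 2000))
pi_hat, lam_hat = zip_em(counts)
```

A likelihood ratio statistic against an ordinary Poisson fit would then be 2*(loglik_zip - loglik_pois), referred to its appropriate null distribution.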
New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes
Zhao, Ying-Qi; Zeng, Donglin; Laber, Eric B.; Kosorok, Michael R.
2014-01-01
Dynamic treatment regimes (DTRs) are sequential decision rules for individual patients that can adapt over time to an evolving illness. The goal is to accommodate heterogeneity among patients and find the DTR which will produce the best long-term outcome if implemented. We introduce two new statistical learning methods for estimating the optimal DTR, termed backward outcome weighted learning (BOWL) and simultaneous outcome weighted learning (SOWL). These approaches convert individualized treatment selection into either a sequential or a simultaneous classification problem, and can thus be applied by modifying existing machine learning techniques. The proposed methods are based on directly maximizing, over all DTRs, a nonparametric estimator of the expected long-term outcome; this is fundamentally different from regression-based methods such as Q-learning, which attempt such maximization indirectly and rely heavily on the correctness of postulated regression models. We prove that the resulting rules are consistent, and provide finite sample bounds for the errors incurred by using the estimated rules. Simulation results suggest the proposed methods produce superior DTRs compared with Q-learning, especially in small samples. We illustrate the methods using data from a clinical trial for smoking cessation. PMID:26236062
Sensitivity based method for structural dynamic model improvement
NASA Astrophysics Data System (ADS)
Lin, R. M.; Du, H.; Ong, J. H.
1993-05-01
Sensitivity analysis, the study of how a structure's dynamic characteristics change with design variables, has been used to predict structural modification effects in design for many decades. In this paper, methods for calculating the eigensensitivity, frequency response function sensitivity and its modified new formulation are presented. The implementation of these sensitivity analyses to the practice of finite element model improvement using vibration test data, which is one of the major applications of experimental modal testing, is discussed. Since it is very difficult in practice to measure all the coordinates which are specified in the finite element model, sensitivity based methods become essential and are, in fact, the only appropriate methods of tackling the problem of finite element model improvement. Comparisons of these methods are made in terms of the amount of measured data required, the speed of convergence and the magnitudes of modelling errors. Also, it is identified that the inverse iteration technique can be effectively used to minimize the computational costs involved. The finite element model of a plane truss structure is used in numerical case studies to demonstrate the effectiveness of the applications of these sensitivity based methods to practical engineering structures.
Long-time atomistic dynamics through a new self-adaptive accelerated molecular dynamics method
NASA Astrophysics Data System (ADS)
Gao, N.; Yang, L.; Gao, F.; Kurtz, R. J.; West, D.; Zhang, S.
2017-04-01
A self-adaptive accelerated molecular dynamics method is developed to model infrequent atomic-scale events, especially those that occur on a rugged free-energy surface. Key in the new development is the use of the total displacement of the system at a given temperature to construct a boost-potential, which is slowly increased to accelerate the dynamics. By allowing the system to evolve from one steady-state configuration to another by overcoming the transition state, this self-evolving approach makes it possible to explore the coupled motion of species that migrate on vastly different time scales. The migration of single vacancies (V) and small He-V clusters, and the growth of nano-sized He-V clusters, in Fe for times on the order of seconds are studied with this new method. An interstitial-assisted mechanism is first explored for the migration of a helium-rich He-V cluster, and a new two-component Ostwald ripening mechanism is suggested for He-V cluster growth.
Long-time atomistic dynamics through a new self-adaptive accelerated molecular dynamics method.
Gao, N; Yang, L; Gao, F; Kurtz, R J; West, D; Zhang, S
2017-04-12
A self-adaptive accelerated molecular dynamics method is developed to model infrequent atomic-scale events, especially those that occur on a rugged free-energy surface. Key in the new development is the use of the total displacement of the system at a given temperature to construct a boost-potential, which is slowly increased to accelerate the dynamics. By allowing the system to evolve from one steady-state configuration to another by overcoming the transition state, this self-evolving approach makes it possible to explore the coupled motion of species that migrate on vastly different time scales. The migration of single vacancies (V) and small He-V clusters, and the growth of nano-sized He-V clusters, in Fe for times on the order of seconds are studied with this new method. An interstitial-assisted mechanism is first explored for the migration of a helium-rich He-V cluster, and a new two-component Ostwald ripening mechanism is suggested for He-V cluster growth.
Dynamic characterization of satellite components through non-invasive methods
Mullens, Joshua G; Wiest, Heather K; Mascarenas, David D; Park, Gyuhae
2011-01-24
The rapid deployment of satellites is hindered by the need to flight-qualify their components and the resulting mechanical assembly. Conventional methods for qualification testing of satellite components are costly and time consuming. Furthermore, full-scale vehicles must be subjected to launch loads during testing. The harsh testing environment increases the risk of component damage during qualification. The focus of this research effort was to assess the performance of Structural Health Monitoring (SHM) techniques as replacement for traditional vibration testing. SHM techniques were applied on a small-scale structure representative of a responsive satellite. The test structure consisted of an extruded aluminum space-frame covered with aluminum shear plates, which was assembled using bolted joints. Multiple piezoelectric patches were bonded to the test structure and acted as combined actuators and sensors. Various methods of SHM were explored including impedance-based health monitoring, wave propagation, and conventional frequency response functions. Using these methods in conjunction with finite element modeling, the dynamic properties of the test structure were established and areas of potential damage were identified and localized. The adequacy of the results from each SHM method was validated by comparison to results from conventional vibration testing.
Efficient sensitivity analysis method for chaotic dynamical systems
Liao, Haitao
2016-05-15
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which depends on the final state of the Lagrange multipliers. The use of LU factorization to calculate the Lagrange multipliers improves both convergence behavior and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.
Dynamic characterization of satellite components through non-invasive methods
Mullins, Joshua G; Wiest, Heather K; Mascarenas, David D. L.; Macknelly, David
2010-10-21
The rapid deployment of satellites is hindered by the need to flight-qualify their components and the resulting mechanical assembly. Conventional methods for qualification testing of satellite components are costly and time consuming. Furthermore, full-scale vehicles must be subjected to launch loads during testing. This harsh testing environment increases the risk of component damage during qualification. The focus of this research effort was to assess the performance of Structural Health Monitoring (SHM) techniques as a replacement for traditional vibration testing. SHM techniques were applied on a small-scale structure representative of a responsive satellite. The test structure consisted of an extruded aluminum space-frame covered with aluminum shear plates, which was assembled using bolted joints. Multiple piezoelectric patches were bonded to the test structure and acted as combined actuators and sensors. Various methods of SHM were explored including impedance-based health monitoring, wave propagation, and conventional frequency response functions. Using these methods in conjunction with finite element modelling, the dynamic properties of the test structure were established and areas of potential damage were identified and localized. The adequacy of the results from each SHM method was validated by comparison to results from conventional vibration testing.
cosmoabc: Likelihood-free inference for cosmology
NASA Astrophysics Data System (ADS)
Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.
2015-05-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts, without computing the likelihood function.
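The core of ABC, plain rejection sampling on a summary-statistic distance (cosmoabc itself uses the more efficient Population Monte Carlo variant), fits in a few lines; the toy Gaussian model, prior and tolerance below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
obs = rng.normal(2.0, 1.0, size=500)    # stand-in for an observed catalog
s_obs = obs.mean()                      # summary statistic

def simulator(mu, rng):
    # forward simulation of mock data for parameter mu
    return rng.normal(mu, 1.0, size=500)

# ABC rejection: draw from the prior and keep parameters whose mock data
# reproduce the observed summary statistic within tolerance eps.
prior_draws = rng.uniform(-5.0, 5.0, size=20000)
eps = 0.05
accepted = [mu for mu in prior_draws
            if abs(simulator(mu, rng).mean() - s_obs) < eps]
posterior_mean = float(np.mean(accepted))
```

The accepted draws approximate the posterior; shrinking eps sharpens the approximation at the cost of more rejected simulations, which is the inefficiency the adaptive importance sampling scheme is designed to reduce.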
Spectral likelihood expansions for Bayesian inference
NASA Astrophysics Data System (ADS)
Nagel, Joseph B.; Sudret, Bruno
2016-03-01
A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
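A minimal sketch with a uniform prior on [-1, 1] and Legendre polynomials: orthogonality turns the expansion coefficients directly into the evidence and the posterior mean, mirroring the semi-analytic post-processing described above (the one-dimensional Gaussian-shaped likelihood is an illustrative assumption):

```python
import numpy as np
from numpy.polynomial import legendre

def likelihood(theta):
    # Gaussian-shaped likelihood of a scalar parameter (illustrative)
    return np.exp(-(theta - 0.3) ** 2 / (2.0 * 0.2 ** 2))

# Spectral step: project the likelihood onto Legendre polynomials by
# linear least squares on a grid; the uniform prior is the reference density.
x = np.linspace(-1.0, 1.0, 400)
c = legendre.legfit(x, likelihood(x), deg=30)

# Orthogonality of the P_k gives semi-analytic posterior quantities:
#   evidence       Z = (1/2) * integral of L over [-1, 1] = c[0]
#   posterior mean   = (c[1] / 3) / c[0]   (theta = P_1, int P_1^2 = 2/3)
Z = c[0]
post_mean = c[1] / (3.0 * c[0])
```

No Markov chain Monte Carlo is involved: once the coefficients are fitted by least squares, evidence and moments follow from the coefficients alone.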
Likelihood-Based Climate Model Evaluation
NASA Technical Reports Server (NTRS)
Braverman, Amy; Cressie, Noel; Teixeira, Joao
2012-01-01
Climate models are deterministic, mathematical descriptions of the physics of climate. Confidence in predictions of future climate is increased if the physics are verifiably correct. A necessary (but not sufficient) condition is that past and present climate be simulated well. The approach quantifies the likelihood that a summary statistic computed from a set of observations arises from a physical system with the characteristics captured by a model-generated time series. Given a prior on models, one can go further and compute the posterior distribution of a model given the observations.
Space station static and dynamic analyses using parallel methods
NASA Technical Reports Server (NTRS)
Gupta, V.; Newell, J.; Storaasli, O.; Baddourah, M.; Bostic, S.
1993-01-01
Algorithms for high-performance parallel computers are applied to perform static analyses of large-scale Space Station finite-element models (FEMs). Several parallel-vector algorithms under development at NASA Langley are assessed. Sparse matrix solvers were found to be more efficient than banded symmetric or iterative solvers for the static analysis of large-scale applications. In addition, new sparse and 'out-of-core' solvers were found superior to substructure (superelement) techniques, which require significant additional cost and time to perform static condensation during global FEM matrix generation as well as the subsequent recovery and expansion. A method to extend the fast parallel static solution techniques to reduce the computation time for dynamic analysis is also described. The resulting static and dynamic algorithms offer design economy for preliminary multidisciplinary design optimization and FEM validation against test modes. The algorithms are being optimized for parallel computers to solve one-million degrees-of-freedom (DOF) FEMs. The high-performance computers at NASA enabled effective software development and testing, and provided efficient, accurate solutions with a timeliness of system response and graphical interpretation of results rarely found in industry. Based on the authors' experience, similar cooperation between industry and government should be encouraged for large-scale projects in the future.
Applications of Computational Methods for Dynamic Stability and Control Derivatives
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Spence, Angela M.
2004-01-01
Initial steps in the application of a low-order panel method computational fluid dynamics (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions of some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.
Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark
2002-01-01
Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.
Measuring methods for evaluation of dynamic tyre properties
NASA Astrophysics Data System (ADS)
Kmoch, Klaus
1992-01-01
Extensive measuring methods for the macroscopic assessment of tire properties, based on classical mechanics and dynamics, are presented. Theoretical results and measurements were incorporated into an expert system in which the pneumatic tire is represented as a wheel with particular elastic properties. A laser scanner test bed was used to measure the geometry of the tire surface. The tire was excited with a shaker to obtain acceleration signals and to estimate global parameters such as stiffness, damping, and the influence of nonlinearity, which was found to increase with excitation force. Tire dynamic behavior at low velocities was examined with microscopy and infrared thermography in order to quantify the temperature rise and the tangential and normal forces in the contact area; slip-stick oscillations were recorded with microphones. A drum test bed was used to study tire behavior at high velocities, and the tire-vehicle interaction was established with acceleration measurements; the influence of nonuniformity on rolling stability was ascertained. The results were compared with data from theoretical models, formulated as point-mass systems or multibody problems.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of the sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) evaluated so far, 92% have been determined to be fraudulent or abusive. Despite the small sample, these results are encouraging.
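MALCOM itself maps sequences into a continuous space, but the likelihood-scoring idea it shares with N-gram analysis can be sketched with a simple Laplace-smoothed bigram model over categorical symbols. The procedure codes and sequences below are invented for illustration, not taken from the insurance data:

```python
from collections import Counter
import math

def train_bigram(sequences, alpha=1.0):
    """Return a log-likelihood scorer from a Laplace-smoothed bigram model."""
    vocab = sorted({s for seq in sequences for s in seq})
    pair_counts = Counter()
    ctx_counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            pair_counts[(a, b)] += 1
            ctx_counts[a] += 1

    def log_likelihood(seq):
        ll = 0.0
        for a, b in zip(seq, seq[1:]):
            # Smoothed transition probability P(b | a)
            p = (pair_counts[(a, b)] + alpha) / (ctx_counts[a] + alpha * len(vocab))
            ll += math.log(p)
        return ll

    return log_likelihood

# Typical "medical histories": procedure A is usually followed by B, then C.
train = [["A", "B", "C"]] * 50 + [["A", "B", "B", "C"]] * 5
score = train_bigram(train)
typical = score(["A", "B", "C"]) / 2      # per-transition log-likelihood
unusual = score(["A", "C", "A"]) / 2      # anomalous ordering scores lower
assert typical > unusual
```

Histories whose per-transition log-likelihood falls far below the training-set norm would be flagged for review, which is the anomaly-detection use described in the abstract.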
Implementing efficient dynamic formal verification methods for MPI programs.
Vakkalanka, S.; DeLisi, M.; Gopalakrishnan, G.; Kirby, R. M.; Thakur, R.; Gropp, W.; Mathematics and Computer Science; Univ. of Utah; Univ. of Illinois
2008-01-01
We examine the problem of formally verifying MPI programs for safety properties through an efficient dynamic (runtime) method in which the processes of a given MPI program are executed under the control of an interleaving scheduler. To ensure full coverage for given input test data, the algorithm must take into consideration MPI's out-of-order completion semantics. The algorithm must also ensure that nondeterministic constructs (e.g., MPI wildcard receive matches) are executed in all possible ways. Our new algorithm rewrites wildcard receives to specific receives, one for each sender that can potentially match with the receive. It then recursively explores each case of the specific receives. The list of potential senders matching a receive is determined through a runtime algorithm that exploits MPI's operation ordering semantics. Our verification tool ISP that incorporates this algorithm efficiently verifies several programs and finds bugs missed by existing informal verification tools.
Dynamically controlled crystallization method and apparatus and crystals obtained thereby
NASA Technical Reports Server (NTRS)
Arnowitz, Leonard (Inventor); Steinberg, Emanuel (Inventor)
1999-01-01
A method and apparatus for dynamically controlling the crystallization of proteins including a crystallization chamber or chambers for holding a protein in a salt solution, one or more salt solution chambers, two communication passages respectively coupling the crystallization chamber with each of the salt solution chambers, and transfer mechanisms configured to respectively transfer salt solution between each of the salt solution chambers and the crystallization chamber. The transfer mechanisms are interlocked to maintain the volume of salt solution in the crystallization chamber substantially constant. Salt solution of different concentrations is transferred into and out of the crystallization chamber to adjust the salt concentration in the crystallization chamber to achieve precise control of the crystallization process.
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, Jon D.
1990-01-01
Uncertainty of frequency response using the fuzzy set method and on-orbit response prediction using laboratory test data to refine an analytical model are emphasized with respect to large space structures. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to help facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.
A dynamically adjusted mixed emphasis method for building boosting ensembles.
Gomez-Verdejo, Vanessa; Arenas-Garcia, Jerónimo; Figueiras-Vidal, Aníbal R
2008-01-01
Progressively emphasizing samples that are difficult to classify correctly is the basis for the recognized high performance of Real AdaBoost (RA) ensembles. The corresponding emphasis function can be written as the product of a factor that measures the quadratic error and a factor related to the proximity to the classification border; this fact opens the door to exploring the potential advantages of adjustable combined forms of these factors. In this paper, we introduce a principled procedure to select the combination parameter each time a new learner is added to the ensemble, simply by maximizing the associated edge parameter, calling the resulting method the dynamically adapted weighted emphasis RA (DW-RA). A number of application examples illustrate the performance improvements obtained by DW-RA.
Computational methods of the Advanced Fluid Dynamics Model
Bohl, W.R.; Wilhelm, D.; Parker, F.R.; Berthier, J.; Maudlin, P.J.; Schmuck, P.; Goutagny, L.; Ichikawa, S.; Ninokata, H.; Luck, L.B.
1987-01-01
To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development.
Modern wing flutter analysis by computational fluid dynamics methods
NASA Technical Reports Server (NTRS)
Cunningham, Herbert J.; Batina, John T.; Bennett, Robert M.
1988-01-01
The application and assessment of the recently developed CAP-TSD transonic small-disturbance code for flutter prediction is described. The CAP-TSD code has been developed for aeroelastic analysis of complete aircraft configurations and was previously applied to the calculation of steady and unsteady pressures with favorable results. Generalized aerodynamic forces and flutter characteristics are calculated and compared with linear theory results and with experimental data for a 45 deg sweptback wing. These results are in good agreement with the experimental flutter data which is the first step toward validating CAP-TSD for general transonic aeroelastic applications. The paper presents these results and comparisons along with general remarks regarding modern wing flutter analysis by computational fluid dynamics methods.
Testing and Validation of the Dynamic Inertia Measurement Method
NASA Technical Reports Server (NTRS)
Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David
2015-01-01
The Dynamic Inertia Measurement (DIM) method uses a ground vibration test setup to determine the mass properties of an object using information from frequency response functions. Most conventional mass properties testing involves using spin tables or pendulum-based swing tests, which for large aerospace vehicles becomes increasingly difficult and time-consuming, and therefore expensive, to perform. The DIM method has been validated on small test articles but has not been successfully proven on large aerospace vehicles. In response, the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) conducted mass properties testing on an "iron bird" test article that is comparable in mass and scale to a fighter-type aircraft. The simple two-I-beam design of the "iron bird" was selected to ensure accurate analytical mass properties. Traditional swing testing was also performed to compare the level of effort, amount of resources, and quality of data with the DIM method. The DIM test showed favorable results for the center of gravity and moments of inertia; however, the products of inertia showed disagreement with analytical predictions.
Data assimilation in problems of mantle dynamics: Methods and applications
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.; Schubert, G.; Tsepelev, I.; Korotkii, A.
2009-05-01
We present and compare several methods (backward advection, adjoint, and quasi-reversibility) for the assimilation of geophysical and geodetic data in geodynamical models. These methods allow observations and unknown initial conditions for mantle temperature and flow to be incorporated into a three-dimensional dynamic model in order to determine the initial conditions in the geological past. Once the conditions are determined, the evolution of mantle structures can be restored. Using the quasi-reversibility method we reconstruct the evolution of the descending lithospheric slab beneath the south-eastern Carpathians. We show that the geometry of the mantle structures changes with time, diminishing the degree of surface curvature of the structures, because heat diffusion tends to smooth the complex thermal surfaces of mantle bodies over time. Present seismic tomography images of mantle structures do not allow definition of the sharp shapes of these structures in the past. Assimilation of mantle temperature and flow instead provides a quantitative tool to restore the thermal shapes of prominent structures in the past from their diffuse shapes at present.
Introduction to finite-difference methods for numerical fluid dynamics
Scannapieco, E.; Harlow, F.H.
1995-09-01
This work is intended to be a beginner's exercise book for the study of basic finite-difference techniques in computational fluid dynamics. It is written for a student level ranging from high-school senior to university senior. Equations are derived from basic principles using algebra. Some discussion of partial-differential equations is included, but knowledge of calculus is not essential. The student is expected, however, to have some familiarity with the FORTRAN computer language, as the syntax of the computer codes themselves is not discussed. Topics examined in this work include: one-dimensional heat flow, one-dimensional compressible fluid flow, two-dimensional compressible fluid flow, and two-dimensional incompressible fluid flow with additions of the equations of heat flow and the k-epsilon model for turbulence transport. Emphasis is placed on numerical instabilities and methods by which they can be avoided, techniques that can be used to evaluate the accuracy of finite-difference approximations, and the writing of the finite-difference codes themselves. Concepts introduced in this work include: flux and conservation, implicit and explicit methods, Lagrangian and Eulerian methods, shocks and rarefactions, donor-cell and cell-centered advective fluxes, compressible and incompressible fluids, the Boussinesq approximation for heat flow, Cartesian tensor notation, the Boussinesq approximation for the Reynolds stress tensor, and the modeling of transport equations. A glossary is provided which defines these and other terms.
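The one-dimensional heat-flow topic, and the stability limit the book emphasizes, can be sketched with a minimal explicit (forward-time, centered-space) scheme, here in Python rather than FORTRAN; the grid size, diffusivity, and boundary temperatures are arbitrary illustrative choices:

```python
import numpy as np

# Explicit (FTCS) finite differences for the 1-D heat equation
# dT/dt = k * d2T/dx2, stable when the diffusion number k*dt/dx**2 <= 0.5.
nx, k = 51, 1.0
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / k          # respects the stability limit
T = np.zeros(nx)
T[0], T[-1] = 1.0, 0.0        # fixed-temperature boundaries

for _ in range(5000):
    # Update interior points from the centered second difference
    T[1:-1] += k * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

# Steady state with these boundaries is the straight line T = 1 - x.
x = np.linspace(0.0, 1.0, nx)
assert np.allclose(T, 1.0 - x, atol=1e-3)
```

Choosing `dt` above the 0.5 limit makes the computed temperature oscillate and blow up, which is exactly the kind of numerical instability the book teaches students to recognize and avoid.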
A Dynamic Integration Method for Borderland Database using OSM data
NASA Astrophysics Data System (ADS)
Zhou, X.-G.; Jiang, Y.; Zhou, K.-X.; Zeng, L.
2013-11-01
Spatial data is fundamental to borderland analyses of geography, natural resources, demography, politics, economy, and culture. As the spatial region used in borderland research usually covers the borderland regions of several neighboring countries, the data is difficult for any single research institution or government to obtain. VGI has been proven to be a very successful means of acquiring timely and detailed global spatial data at very low cost, so VGI is one reasonable source of borderland spatial data. OpenStreetMap (OSM) is known as the most successful VGI resource. But the OSM data model is far different from traditional authoritative geographic information, so OSM data needs to be converted to the scientist's customized data model. And with the real world changing fast, the converted data needs to be updated. Therefore, a dynamic integration method for borderland data is presented in this paper. In this method, a machine learning mechanism is used to convert the OSM data model to the user data model; a method for selecting the objects in the research area that changed over a given period from the OSM whole-world daily diff file is presented, and a change-only information file in the designed form is produced automatically. Based on the rules and algorithms mentioned above, we enabled the automatic (or semiautomatic) integration and updating of the borderland database by programming. The developed system was intensively tested.
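The change-selection step can be sketched as follows: given an osmChange (.osc) diff, keep only the node edits falling inside the researcher's borderland bounding box. The XML snippet and bounding box below are invented, and the paper's model-conversion rules are omitted:

```python
import xml.etree.ElementTree as ET

# Invented miniature osmChange diff: one node edit per action type.
OSC = """<osmChange version="0.6">
  <modify><node id="1" lat="40.1" lon="70.2"/></modify>
  <create><node id="2" lat="10.0" lon="10.0"/></create>
  <delete><node id="3" lat="40.5" lon="70.9"/></delete>
</osmChange>"""

def changes_in_bbox(osc_xml, south, west, north, east):
    """Select (action, node id) pairs whose coordinates fall in the bbox."""
    changed = []
    root = ET.fromstring(osc_xml)
    for action in root:                      # <modify>, <create>, <delete>
        for node in action.findall("node"):
            lat, lon = float(node.get("lat")), float(node.get("lon"))
            if south <= lat <= north and west <= lon <= east:
                changed.append((action.tag, node.get("id")))
    return changed

result = changes_in_bbox(OSC, 39.0, 69.0, 41.0, 71.0)
assert result == [("modify", "1"), ("delete", "3")]
```

A production pipeline would additionally resolve ways and relations whose member nodes changed, which is why the paper's selection method works over the whole daily diff file rather than node records alone.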
Fast method for dynamic thresholding in volume holographic memories
NASA Astrophysics Data System (ADS)
Porter, Michael S.; Mitkas, Pericles A.
1998-11-01
It is essential for parallel optical memory interfaces to incorporate processing that dynamically differentiates between data-bit values. These thresholding points will vary as a result of system noise -- due to contrast fluctuations, variations in data page composition, reference beam misalignment, etc. To maintain reasonable data integrity it is necessary to select the threshold close to its optimal level. In this paper, a neural network (NN) approach is proposed as a fast method of determining the threshold that meets the required transfer rate. The multi-layered perceptron network can be incorporated as part of a smart photodetector array (SPA). Other methods have suggested performing the operation by means of a histogram or by use of statistical information. These approaches fail in that they unnecessarily switch to a 1-D paradigm; in this serial domain, global thresholding is pointless, since sequence detection could be applied instead. The discussed approach is a parallel solution with less overhead than multi-rail encoding. As part of this method, a small set of values is designated as threshold-determination data bits; these are interleaved with the information data bits and are used as inputs to the NN. The approach has been tested using both simulated data and data obtained from a volume holographic memory system. Results show convergence of the training and an ability to generalize to untrained data for binary and multi-level gray-scale data-page images. Methodologies for improving performance through proper training-set selection are discussed.
The ONIOM molecular dynamics method for biochemical applications: cytidine deaminase
Matsubara, Toshiaki; Dupuis, Michel; Aida, Misako
2007-03-22
We derived and implemented the ONIOM-molecular dynamics (MD) method for biochemical applications. The implementation allows the characterization of the functions of real enzymes taking account of their thermal motion. In this method, direct MD is performed by calculating the ONIOM energy and gradients of the system on the fly. We describe the first application of this ONIOM-MD method to cytidine deaminase. The environmental effects on the substrate in the active site are examined. The ONIOM-MD simulations show that the product uridine is strongly perturbed by the thermal motion of the environment and dissociates easily from the active site.
Novel Dynamics and Controls Analysis Methods for Nonlinear Structural Systems
1990-08-30
A recursive formulation of dynamics has been derived in the aerospace and mechanism dynamics research literature [Placek], [Agrawal], [Kurdila]. Cited works include: Placek, B., "Simulation of Constrained Multibody Dynamics," in proceedings published by Computational Mechanics Publications, 1990; and Placek, B., "Contribution to the Solution of the Equations of Motion of the Discrete Dynamical System with Holonomic Constraints."
The reversibility error method (REM): a new, dynamical fast indicator for planetary dynamics
NASA Astrophysics Data System (ADS)
Panichi, Federico; Goździewski, Krzysztof; Turchetti, Giorgio
2017-02-01
We describe the reversibility error method (REM) and its applications to planetary dynamics. REM is based on the time-reversibility analysis of the phase-space trajectories of conservative Hamiltonian systems. Round-off errors break the time reversibility, and the displacement from the initial condition that occurs when we integrate forward and backward for the same time interval is related to the dynamical character of the trajectory. If the motion is chaotic, in the sense of a non-zero maximal Lyapunov characteristic exponent (mLCE), then REM increases exponentially with time, as exp(λt), while if the motion is regular (quasi-periodic), REM increases as a power law in time, as t^α, where α and λ are real coefficients. We compare REM with a variant of the mLCE, the mean exponential growth factor of nearby orbits. The test set includes the restricted three-body problem and five resonant planetary systems: HD 37124, Kepler-60, Kepler-36, Kepler-29 and Kepler-26. We found very good agreement between the outcomes of these algorithms. Moreover, the numerical implementation of REM is astonishingly simple and rests on a solid theoretical background. REM requires only a symplectic and time-reversible (symmetric) integrator of the equations of motion. The method is also CPU efficient. It may be particularly useful for the dynamical analysis of multiple planetary systems in the Kepler sample, characterized by low-eccentricity orbits and relatively weak mutual interactions. As an interesting side result, we found a possible occurrence of stable chaos in the Kepler-29 planetary system.
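A minimal REM computation, using the Chirikov standard map as a stand-in for the paper's planetary systems (which instead use symplectic N-body integrators): the map is exactly invertible, so in exact arithmetic the backward pass retraces the forward pass, and the residual is pure round-off amplified by the dynamics. The initial condition and iteration count are arbitrary illustrative choices:

```python
import math

def rem(x0, p0, K, n):
    """Reversibility error: iterate the standard map forward n times, then
    backward with the exact inverse map; the residual relative to the initial
    point is round-off error amplified by the trajectory's dynamics."""
    x, p = x0, p0
    for _ in range(n):            # forward map
        p = p + K * math.sin(x)
        x = x + p
    for _ in range(n):            # backward (exact inverse map)
        x = x - p
        p = p - K * math.sin(x)
    return math.hypot(x - x0, p - p0)

rem_regular = rem(1.0, 0.5, K=0.5, n=50)   # quasi-periodic: power-law growth
rem_chaotic = rem(1.0, 0.5, K=7.0, n=50)   # chaotic: exponential growth
assert rem_regular < 1e-8
assert rem_chaotic > 1e-4
```

For the regular orbit the residual stays near machine precision, while for the chaotic orbit the exponential amplification saturates it to the scale of the orbit itself, which is the REM signature described in the abstract.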
Substructure method in high-speed monorail dynamic problems
NASA Astrophysics Data System (ADS)
Ivanchenko, I. I.
2008-12-01
The study of actions of high-speed moving loads on bridges and elevated tracks remains a topical problem for transport. In the present study, we propose a new method for moving load analysis of elevated tracks (monorail structures or bridges), which permits studying the interaction between two strained objects consisting of rod systems and rigid bodies with viscoelastic links; one of these objects is the moving load (monorail rolling stock), and the other is the carrying structure (monorail elevated track or bridge). The methods for moving load analysis of structures were developed in numerous papers [1-15]. At the first stage, when solving the problem about a beam under the action of the simplest moving load such as a moving weight, two fundamental methods can be used; the same methods are realized for other structures and loads. The first method is based on the use of a generalized coordinate in the expansion of the deflection in the natural shapes of the beam, and the problem is reduced to solving a system of ordinary differential equations with variable coefficients [1-3]. In the second method, after the "beam-weight" system is decomposed, just as in the problem with the weight impact on the beam [4], solving the problem is reduced to solving an integral equation for the dynamic weight reaction [6, 7]. In [1-3], an increase in the number of retained forms leads to an increase in the order of the system of equations; in [6, 7], difficulties arise when solving the integral equations related to the conditional stability of the step procedures. The method proposed in [9, 14] for beams and rod systems combines the above approaches and eliminates their drawbacks, because it permits retaining any necessary number of shapes in the deflection expansion and has a resolving system of equations with an unconditionally stable integration scheme and with a minimum number of unknowns, just as in the method of integral equations [6, 7]. This method is further developed for
Steered Molecular Dynamics Methods Applied to Enzyme Mechanism and Energetics.
Ramírez, C L; Martí, M A; Roitberg, A E
2016-01-01
One of the main goals of chemistry is to understand the underlying principles of chemical reactions, in terms of both the reaction mechanism and the thermodynamics that govern it. Using hybrid quantum mechanics/molecular mechanics (QM/MM)-based methods in combination with a biased sampling scheme, it is possible to simulate chemical reactions occurring inside complex environments such as an enzyme or aqueous solution and to determine the corresponding free energy profile, which provides a direct comparison with experimentally determined kinetic and equilibrium parameters. Among the most promising biasing schemes is the multiple steered molecular dynamics method, which in combination with Jarzynski's Relationship (JR) allows the equilibrium free energy profile to be obtained from a finite set of nonequilibrium reactive trajectories by exponentially averaging the individual work profiles. However, obtaining statistically converged and accurate profiles is far from easy and may result in increased computational cost if the steering speed and number of trajectories are chosen inappropriately. In this small review, using the extensively studied chorismate to prephenate conversion reaction, we first present a systematic study of how key parameters such as pulling speed, number of trajectories, and reaction progress are related to the resulting work distributions and, in turn, the accuracy of the free energy obtained with JR. Second, in the context of QM/MM strategies, we introduce the Hybrid Differential Relaxation Algorithm and show how it allows more accurate free energy profiles to be obtained using faster pulling speeds and a smaller number of trajectories, and thus a smaller computational cost.
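The exponential work average at the heart of JR can be sketched without any QM/MM machinery on a synthetic Gaussian work distribution; the free-energy difference and dissipated work below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0
dF_true = 2.0                 # assumed free-energy difference (illustrative)
# Nonequilibrium work values: Gaussian with mean dF + dissipated work; for a
# Gaussian work distribution the fluctuation relation fixes the variance via
# <W> - dF = var(W) / (2 kT).
w_diss = 0.5
W = rng.normal(dF_true + w_diss, np.sqrt(2.0 * kT * w_diss), size=200000)

# Jarzynski's Relationship: exp(-dF/kT) = <exp(-W/kT)> over trajectories.
dF_jar = -kT * np.log(np.mean(np.exp(-W / kT)))
dF_avg = W.mean()             # naive work average is biased by dissipation

assert abs(dF_jar - dF_true) < 0.05
assert dF_avg - dF_true > 0.4
```

The exponential average recovers the equilibrium free energy while the plain mean overshoots it by the dissipated work; with few trajectories or fast pulling (broader work distributions), the exponential average is dominated by rare low-work trajectories, which is the convergence difficulty the review analyzes.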
Dynamically controlled crystallization method and apparatus and crystals obtained thereby
NASA Technical Reports Server (NTRS)
Arnowitz, Leonard (Inventor); Steinberg, Emanuel (Inventor)
2003-01-01
A method and apparatus for dynamically controlling the crystallization of molecules including a crystallization chamber (14) or chambers for holding molecules in a precipitant solution, one or more precipitant solution reservoirs (16, 18), communication passages (17, 19) respectively coupling the crystallization chamber(s) with each of the precipitant solution reservoirs, and transfer mechanisms (20, 21, 22, 24, 26, 28) configured to respectively transfer precipitant solution between each of the precipitant solution reservoirs and the crystallization chamber(s). The transfer mechanisms are interlocked to maintain a constant volume of precipitant solution in the crystallization chamber(s). Precipitant solutions of different concentrations are transferred into and out of the crystallization chamber(s) to adjust the concentration of precipitant in the crystallization chamber(s) to achieve precise control of the crystallization process. The method and apparatus can be used effectively to grow crystals under reduced gravity conditions such as microgravity conditions of space, and under conditions of reduced or enhanced effective gravity as induced by a powerful magnetic field.
Detection of abrupt changes in dynamic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1984-01-01
Some of the basic ideas associated with the detection of abrupt changes in dynamic systems are presented. Multiple-filter-based techniques and residual-based methods, including the multiple model and generalized likelihood ratio (GLR) methods, are considered. Issues such as the effect of unknown onset time on algorithm complexity and structure, and robustness to model uncertainty, are discussed.
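The GLR idea for a jump in the mean of white residuals can be sketched as follows; the scan over candidate onset times also shows why an unknown onset time inflates algorithm complexity. The signal parameters are invented for illustration:

```python
import numpy as np

def glr_mean_shift(r, sigma):
    """GLR statistic for a mean jump of unknown size and onset in white
    Gaussian residuals: the jump size is maximized out analytically, and the
    unknown onset forces an explicit scan over every candidate time k."""
    n = len(r)
    best_stat, best_k = -np.inf, 0
    for k in range(n - 1):
        s = r[k:].sum()
        stat = s**2 / (2.0 * sigma**2 * (n - k))   # max log-likelihood ratio
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_stat, best_k

rng = np.random.default_rng(1)
r = rng.normal(0.0, 1.0, 200)
r[120:] += 2.0                       # abrupt change at t = 120
stat, k_hat = glr_mean_shift(r, sigma=1.0)
assert 115 <= k_hat <= 125           # onset localized near the true time
```

In practice the scan is truncated to a sliding window of recent candidate onsets to bound the per-sample cost, which is one of the complexity trade-offs the abstract alludes to.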
A computationally efficient spectral method for modeling core dynamics
NASA Astrophysics Data System (ADS)
Marti, P.; Calkins, M. A.; Julien, K.
2016-08-01
An efficient, spectral numerical method is presented for solving problems in a spherical shell geometry that employs spherical harmonics in the angular dimensions and Chebyshev polynomials in the radial direction. We exploit the three-term recurrence relation for Chebyshev polynomials that renders all matrices sparse in spectral space. This approach is significantly more efficient than the collocation approach and is generalizable to both the Galerkin and tau methodologies for enforcing boundary conditions. The sparsity of the matrices reduces the computational complexity of the linear solution of implicit-explicit time stepping schemes to O(N) operations, compared to O(N^2) operations for a collocation method. The method is illustrated by considering several example problems of important dynamical processes in the Earth's liquid outer core. Results are presented from both fully nonlinear, time-dependent numerical simulations and eigenvalue problems arising from the investigation of the onset of convection and the inertial wave spectrum. We compare the explicit and implicit temporal discretization of the Coriolis force; the latter becomes computationally feasible given the sparsity of the differential operators. We find that implicit treatment of the Coriolis force allows for significantly larger time step sizes compared to explicit algorithms; for hydrodynamic and dynamo problems at an Ekman number of E = 10^-5, time step sizes can be increased by a factor of 3 to 16 times that of the explicit algorithm, depending on the order of the time stepping scheme. The implementation with explicit Coriolis force scales well to at least 2048 cores, while the implicit implementation scales to 512 cores.
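The sparsity claim can be checked directly on the simplest radial operator: by the three-term recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), multiplication by x acts tridiagonally on Chebyshev coefficients (x T_0 = T_1 and x T_n = (T_{n-1} + T_{n+1})/2 for n >= 1). A small sketch using NumPy's Chebyshev utilities, with an arbitrary truncation degree:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Build the sparse (tridiagonal) multiplication-by-x matrix in Chebyshev
# coefficient space for coefficients a_0..a_{N-1}; the output degree can
# grow by one, hence the extra row.
N = 8
M = np.zeros((N + 1, N))
M[1, 0] = 1.0                 # x*T_0 = T_1
for n in range(1, N):
    M[n - 1, n] = 0.5         # x*T_n contributes T_{n-1}/2 ...
    M[n + 1, n] = 0.5         # ... and T_{n+1}/2

# Verify against direct evaluation: b = M a must satisfy f_b(x) = x * f_a(x).
rng = np.random.default_rng(2)
a = rng.normal(size=N)
b = M @ a
x = np.linspace(-1.0, 1.0, 101)
assert np.allclose(C.chebval(x, b), x * C.chebval(x, a))
```

A collocation discretization of the same operator would be a dense N-by-N matrix; it is this banded structure, extended to the full radial operators, that yields the O(N) linear solves reported in the abstract.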
Driving the Model to Its Limit: Profile Likelihood Based Model Reduction.
Maiwald, Tim; Hass, Helge; Steiert, Bernhard; Vanlier, Joep; Engesser, Raphael; Raue, Andreas; Kipkeew, Friederike; Bock, Hans H; Kaschek, Daniel; Kreutz, Clemens; Timmer, Jens
2016-01-01
In systems biology, one of the major tasks is to tailor model complexity to the information content of the data. A useful model should describe the data and produce well-determined parameter estimates and predictions. Too small a model will not be able to describe the data, whereas a model which is too large tends to overfit measurement errors and does not provide precise predictions. Typically, the model is modified and tuned to fit the data, which often results in an oversized model. To restore the balance between model complexity and available measurements, either new data has to be gathered or the model has to be reduced. In this manuscript, we present a data-based method for reducing non-linear models. The profile likelihood is utilised to assess parameter identifiability and designate likely candidates for reduction. Parameter dependencies are analysed along profiles, providing context-dependent suggestions for the type of reduction. We discriminate four distinct scenarios, each associated with a specific model reduction strategy. Iterating the presented procedure eventually results in an identifiable model, which is capable of generating precise and testable predictions. Source code for all toy examples is provided within the freely available, open-source modelling environment Data2Dynamics based on MATLAB available at http://www.data2dynamics.org/, as well as the R packages dMod/cOde available at https://github.com/dkaschek/. Moreover, the concept is generally applicable and can readily be used with any software capable of calculating the profile likelihood. PMID:27588423
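A toy sketch of the profile-likelihood diagnostic (not the Data2Dynamics implementation): in a model where only the product of two parameters is identifiable, re-optimizing the nuisance parameter at each fixed value of the other yields a flat profile, flagging that parameter as a reduction candidate. The model, data, and noise level below are invented:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy model in which only the product a*b is identifiable: y = a*b*t + noise.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * t + rng.normal(0.0, 0.1, t.size)    # true a*b = 2

def nll(a, b):
    """Gaussian negative log-likelihood (up to a constant)."""
    return 0.5 * np.sum((y - a * b * t) ** 2) / 0.1**2

def profile(a):
    """Profile likelihood of a: re-optimize the nuisance parameter b."""
    return minimize_scalar(lambda b: nll(a, b)).fun

values = [profile(a) for a in (0.5, 1.0, 2.0, 4.0)]
assert max(values) - min(values) < 1e-6       # flat profile: a not identifiable
```

The flat profile corresponds to the paper's structurally non-identifiable scenario, where the suggested reduction is to fix one parameter or merge the pair (here a*b) into a single effective parameter.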
NASA Astrophysics Data System (ADS)
Zeng, X.
2015-12-01
A large number of model executions is required to obtain the predictions of alternative conceptual models and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from each model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for the alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on the posterior model weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating the conceptual models' marginal likelihoods, and the BMA-TIE prediction has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated estimates of a conceptual model's marginal likelihood obtained with TIE show significantly less variability than those obtained with the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed to build the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
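Two of the four estimators can be sketched on a conjugate toy problem with a known marginal likelihood (a far simpler setting than the groundwater model); the model, observation, and sample sizes are invented for illustration:

```python
import numpy as np

# Toy model: y ~ N(theta, 1) with prior theta ~ N(0, 1) and one observation;
# the exact marginal likelihood is N(y; 0, 2).
rng = np.random.default_rng(4)
y = 1.3
exact = np.exp(-y**2 / 4.0) / np.sqrt(4.0 * np.pi)

def lik(theta):
    return np.exp(-(y - theta) ** 2 / 2.0) / np.sqrt(2.0 * np.pi)

# Arithmetic mean estimator (AME): average the likelihood over prior draws.
theta_prior = rng.normal(0.0, 1.0, 200000)
ame = lik(theta_prior).mean()

# Harmonic mean estimator (HME): harmonic mean of the likelihood over
# posterior draws (here the posterior is theta | y ~ N(y/2, 1/2)).
theta_post = rng.normal(y / 2.0, np.sqrt(0.5), 200000)
hme = 1.0 / np.mean(1.0 / lik(theta_post))

assert abs(ame - exact) / exact < 0.02
assert abs(hme - exact) / exact < 0.2
```

In this well-behaved conjugate case both estimators land near the exact value; in realistic high-dimensional models the HME's reciprocal-likelihood average can have huge or infinite variance, which motivates the stabilized variant and the thermodynamic integration estimator the study finds most reliable.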
Targeted Maximum Likelihood Estimation for Causal Inference in Observational Studies.
Schuler, Megan S; Rose, Sherri
2017-01-01
Estimation of causal effects using observational data continues to grow in popularity in the epidemiologic literature. While many applications of causal effect estimation use propensity score methods or G-computation, targeted maximum likelihood estimation (TMLE) is a well-established alternative method with desirable statistical properties. TMLE is a doubly robust maximum-likelihood-based approach that includes a secondary "targeting" step that optimizes the bias-variance tradeoff for the target parameter. Under standard causal assumptions, estimates can be interpreted as causal effects. Because TMLE has not been as widely implemented in epidemiologic research, we aim to provide an accessible presentation of TMLE for applied researchers. We give step-by-step instructions for using TMLE to estimate the average treatment effect in the context of an observational study. We discuss conceptual similarities and differences between TMLE and 2 common estimation approaches (G-computation and inverse probability weighting) and present findings on their relative performance using simulated data. Our simulation study compares methods under parametric regression misspecification; our results highlight TMLE's property of double robustness. Additionally, we discuss best practices for TMLE implementation, particularly the use of ensembled machine learning algorithms. Our simulation study demonstrates all methods using super learning, highlighting that incorporation of machine learning may outperform parametric regression in observational data settings.
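The targeting step described above can be sketched compactly for the average treatment effect with a binary treatment and outcome. In this illustrative toy (simulated data, with the true nuisance functions plugged in where fitted regressions or Super Learner would normally be used), the initial predictions are fluctuated along the "clever covariate" by a one-parameter maximum likelihood fit:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit, logit

rng = np.random.default_rng(2)
n = 5000
W = rng.normal(size=n)                      # confounder (invented DGP)
g = expit(0.5 * W)                          # true P(A=1 | W)
A = rng.binomial(1, g)
Y = rng.binomial(1, expit(-0.5 + A + 0.8 * W))

# Step 1: initial estimates. For brevity we plug in the true nuisance
# functions; in practice these come from (machine-learning) regressions.
Q1 = expit(-0.5 + 1.0 + 0.8 * W)            # E[Y | A=1, W]
Q0 = expit(-0.5 + 0.8 * W)                  # E[Y | A=0, W]
QA = np.where(A == 1, Q1, Q0)

# Step 2: targeting. Fluctuate along the clever covariate H by a
# one-parameter maximum likelihood fit of eps.
H = A / g - (1 - A) / (1 - g)

def negloglik(eps):
    p = np.clip(expit(logit(QA) + eps * H), 1e-9, 1 - 1e-9)
    return -np.sum(Y * np.log(p) + (1 - Y) * np.log(1 - p))

eps = minimize_scalar(negloglik, bounds=(-1.0, 1.0), method="bounded").x

# Step 3: substitution estimator with the targeted predictions.
Q1_star = expit(logit(Q1) + eps / g)
Q0_star = expit(logit(Q0) - eps / (1 - g))
ate_tmle = np.mean(Q1_star - Q0_star)
```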
Empirical Likelihood for Estimating Equations with Nonignorably Missing Data.
Tang, Niansheng; Zhao, Puying; Zhu, Hongtu
2014-04-01
We develop empirical likelihood (EL) inference for parameters in generalized estimating equations with nonignorably missing response data. We consider an exponential tilting model for the nonignorable missingness mechanism, and propose modified estimating equations by imputing missing data through a kernel regression method. We establish some asymptotic properties of the EL estimators of the unknown parameters under different scenarios. With the use of auxiliary information, the EL estimators are statistically more efficient. Simulation studies are used to assess the finite sample performance of our proposed EL estimators. We apply our EL estimators to investigate a data set on earnings obtained from the New York Social Indicators Survey.
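The Lagrange-multiplier machinery underlying EL can be shown for the simplest estimating equation, E[X − μ] = 0. A minimal sketch with invented data (the paper's nonignorable-missingness and imputation layers build on top of this core computation):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(3)
x = rng.normal(1.0, 2.0, 200)               # invented sample

def el_log_ratio(mu):
    # -2 log empirical likelihood ratio for H0: E[X] = mu
    # (asymptotically chi-square with 1 degree of freedom).
    z = x - mu
    n = len(z)
    # Solve sum_i z_i / (1 + lam*z_i) = 0 for the Lagrange multiplier,
    # bracketing lam so every weight p_i = 1/(n*(1 + lam*z_i)) is in (0, 1].
    lo = (1.0 / n - 1.0) / z.max() + 1e-10
    hi = (1.0 / n - 1.0) / z.min() - 1e-10
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log(1 + lam * z))

stat_at_mean = el_log_ratio(x.mean())       # ~0: EL peaks at the sample mean
stat_off = el_log_ratio(x.mean() + 1.0)     # grows as mu moves away
```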
Maximum likelihood decoding of Reed Solomon Codes
Sudan, M.
1996-12-31
We present a randomized algorithm which takes as input n distinct points (x_i, y_i), i = 1, ..., n, from F × F (where F is a field) and integer parameters t and d, and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(√(nd)). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed Solomon Codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides some maximum likelihood decoding for any efficient (i.e., constant or even polynomial rate) code.
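Sudan's list-decoding algorithm itself (bivariate interpolation followed by factorization) does not fit in a short sketch, but the classical Berlekamp-Welch unique decoder below illustrates the underlying task of recovering a low-degree polynomial from noisy evaluations over a small prime field. All parameters here are invented for illustration:

```python
def solve_mod(A, b, p):
    # Gauss-Jordan elimination over GF(p); returns one solution of A x = b
    # (free variables set to 0). Assumes the system is consistent.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    rows, cols = len(M), len(M[0]) - 1
    r, pivots = 0, []
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [v * inv % p for v in M[r]]
        for i in range(rows):
            if i != r and M[i][c] % p:
                fac = M[i][c]
                M[i] = [(v - fac * w) % p for v, w in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    x = [0] * cols
    for i, c in enumerate(pivots):
        x[c] = M[i][cols]
    return x

def poly_divmod(num, den, p):
    # Division of polynomials (ascending coefficient lists) over GF(p).
    num, inv = num[:], pow(den[-1], p - 2, p)
    out = [0] * (len(num) - len(den) + 1)
    for i in range(len(out) - 1, -1, -1):
        out[i] = num[i + len(den) - 1] * inv % p
        for j, dc in enumerate(den):
            num[i + j] = (num[i + j] - out[i] * dc) % p
    return out, num            # quotient, remainder

p, d, e = 101, 2, 3            # field, message degree, error budget
f = [7, 3, 1]                  # secret polynomial 7 + 3x + x^2
xs = list(range(1, 11))        # n = 10 >= d + 2e + 1 evaluation points
ys = [sum(c * pow(x, k, p) for k, c in enumerate(f)) % p for x in xs]
for i in (0, 4, 8):            # corrupt e positions
    ys[i] = (ys[i] + 5) % p

# Berlekamp-Welch: find Q (deg <= d+e) and monic E (deg e) with
# Q(x_i) = y_i * E(x_i) for all i; then f = Q / E exactly.
A, b = [], []
for x, y in zip(xs, ys):
    row = [pow(x, k, p) for k in range(d + e + 1)]
    row += [(-y * pow(x, j, p)) % p for j in range(e)]
    A.append(row)
    b.append(y * pow(x, e, p) % p)
sol = solve_mod(A, b, p)
Q, E = sol[:d + e + 1], sol[d + e + 1:] + [1]
f_rec, rem = poly_divmod(Q, E, p)   # f_rec should equal f, rem all zero
```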
Maximum Likelihood Analysis in the PEN Experiment
NASA Astrophysics Data System (ADS)
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3×10^{-3} to 5×10^{-4} using a stopped-beam approach. During runs in 2008-10, PEN has acquired over 2×10^{7} π_{e2} events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
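The per-event likelihood construction described above can be illustrated with a two-component toy: each event receives a probability under each process PDF, and a single fraction is fitted by maximum likelihood. The Gaussian shapes and numbers below are invented stand-ins for the five PEN processes:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(4)
f_true, n = 0.3, 4000
sig = rng.normal(70.0, 2.0, int(f_true * n))   # "signal" energy peak
bkg = rng.normal(40.0, 8.0, n - len(sig))      # "background" shape
energy = np.concatenate([sig, bkg])

# Per-event probabilities of the observable under each (known) process PDF.
pdf_sig = norm(70.0, 2.0).pdf(energy)
pdf_bkg = norm(40.0, 8.0).pdf(energy)

def nll(frac):
    # Mixture likelihood; the real analysis sums five such process terms.
    return -np.sum(np.log(frac * pdf_sig + (1 - frac) * pdf_bkg))

f_hat = minimize_scalar(nll, bounds=(1e-4, 1 - 1e-4), method="bounded").x
```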
Multiplicative earthquake likelihood models incorporating strain rates
NASA Astrophysics Data System (ADS)
Rhoades, D. A.; Christophersen, A.; Gerstenberger, M. C.
2017-01-01