Science.gov

Sample records for dynamical likelihood method

  1. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
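
    The moment-matching idea behind SMM can be shown with a minimal sketch. The toy model, moment choices, and weighting matrix below are illustrative assumptions, not the structural education model of the paper: simulated moments are matched to data moments by minimizing a quadratic-form criterion.

```python
# Minimal sketch of a simulated method of moments (SMM) criterion on a toy model.
# The model, moments, and weighting matrix are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=5_000)      # stand-in "observed" sample
data_moments = np.array([data.mean(), data.var()])

base_draws = rng.standard_normal(20_000)               # fixed draws: common random numbers

def simulated_moments(theta):
    mu, sigma = theta
    sim = mu + abs(sigma) * base_draws                 # simulate from the candidate model
    return np.array([sim.mean(), sim.var()])

def smm_objective(theta, W=np.eye(2)):
    g = data_moments - simulated_moments(theta)        # moment discrepancies
    return float(g @ W @ g)                            # quadratic-form SMM criterion

est = minimize(smm_objective, x0=[0.0, 1.0], method="Nelder-Mead")
print(est.x)                                           # should land near (1, 2)
```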

  2. Measurement of the Top Quark Mass by Dynamical Likelihood Method using the Lepton + Jets Events with the Collider Detector at Fermilab

    SciTech Connect

    Kubo, Taichi

    2008-02-01

    We have measured the top quark mass with the dynamical likelihood method. The data, corresponding to an integrated luminosity of 1.7 fb⁻¹, were collected in proton-antiproton collisions at a center-of-mass energy of 1.96 TeV with the CDF detector at the Fermilab Tevatron during the period March 2002 through March 2007. We select $t\bar{t}$ pair production candidates by requiring one high-energy lepton and four jets, at least one of which must be tagged as a b-jet. To reconstruct the top quark mass, we use the dynamical likelihood method, based on the maximum likelihood method, in which the likelihood is defined as the differential cross section multiplied by the transfer function from observed quantities to parton quantities, as a function of the top quark mass and the jet energy scale (JES). With this method, we measure the top quark mass to be 171.6 ± 2.0 (stat. + JES) ± 1.3 (syst.) = 171.6 ± 2.4 GeV/c².
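
    The per-event likelihood combination described above can be illustrated schematically. The sketch below uses synthetic placeholder likelihood curves (not likelihoods built from the CDF differential cross section or transfer functions): per-event curves are multiplied, and the mass hypothesis that maximizes the product is taken as the estimate.

```python
# Schematic illustration of combining per-event likelihood curves and maximizing the
# product over a grid of mass hypotheses. The curves are synthetic placeholders.
import numpy as np

mass_grid = np.linspace(150.0, 190.0, 401)             # GeV/c^2 hypotheses
rng = np.random.default_rng(1)

def fake_event_likelihood(true_mass=171.6, width=12.0):
    peak = rng.normal(true_mass, 3.0)                  # event-to-event smearing
    return np.exp(-0.5 * ((mass_grid - peak) / width) ** 2)

log_like = np.zeros_like(mass_grid)
for _ in range(60):                                    # 60 candidate events
    log_like += np.log(fake_event_likelihood() + 1e-300)   # product -> sum of logs

m_hat = mass_grid[np.argmax(log_like)]
print(f"maximum-likelihood mass estimate: {m_hat:.1f} GeV/c^2")
```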

  3. Measurement of the Top Quark Mass by Dynamical Likelihood Method using the Lepton plus Jets Events in 1.96 Tev Proton-Antiproton Collisions

    SciTech Connect

    Yorita, Kohei

    2005-03-01

    We have measured the top quark mass with the dynamical likelihood method (DLM) using the CDF II detector at the Fermilab Tevatron. The Tevatron produces top and anti-top pairs in $p\bar{p}$ collisions at a center-of-mass energy of 1.96 TeV. The data sample used in this paper was accumulated from March 2002 through August 2003, corresponding to an integrated luminosity of 162 pb⁻¹.

  4. Synthesizing Regression Results: A Factored Likelihood Method

    ERIC Educational Resources Information Center

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  5. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…

  6. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2007-02-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack-Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack-Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods.

  7. Measurement of the top quark mass with the dynamical likelihood method using lepton plus jets events with b-tags in p anti-p collisions at √s = 1.96 TeV

    SciTech Connect

    Abulencia, A.; Acosta, D.; Adelman, Jahred A.; Affolder, Anthony A.; Akimoto, T.; Albrow, M.G.; Ambrose, D.; Amerio, S.; Amidei, D.; Anastassov, A.; Anikeev, K.; et al.

    2005-12-01

    This report describes a measurement of the top quark mass, M_top, with the dynamical likelihood method (DLM) using the CDF II detector at the Fermilab Tevatron. The Tevatron produces top/anti-top ($t\bar{t}$) pairs in $p\bar{p}$ collisions at a center-of-mass energy of 1.96 TeV. The data sample used in this analysis was accumulated from March 2002 through August 2004, corresponding to an integrated luminosity of 318 pb⁻¹. They use the $t\bar{t}$ candidates in the "lepton+jets" decay channel, requiring at least one jet identified as a b quark by finding a displaced secondary vertex. The DLM defines a likelihood for each event based on the differential cross section as a function of M_top per unit phase-space volume of the final partons, multiplied by the transfer functions from jet to parton energies. The method takes into account all possible jet combinations in an event, and the likelihoods are multiplied event by event to derive the top quark mass by the maximum likelihood method. Using 63 $t\bar{t}$ candidates observed in the data, with 9.2 events expected from background, they measure the top quark mass to be 173.2 +2.6/−2.4 (stat.) ± 3.2 (syst.) GeV/c², or 173.2 +4.1/−4.0 GeV/c².

  8. Measuring coherence of computer-assisted likelihood ratio methods.

    PubMed

    Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H

    2015-04-01

    Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used.

  9. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  10. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-01

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
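
    In generic notation (not the paper's), the centered likelihood-ratio identity and the Fisher information matrix referred to above can be restated schematically, assuming the score has zero mean and enough regularity to differentiate under the expectation.

```latex
% Schematic, generic-notation statement: for an observable f of a path X_{0:t} generated
% under parameters \theta, the likelihood-ratio (score) method gives the sensitivity as a
% covariance, because the score has zero mean:
\[
  \partial_{\theta_k}\,\mathbb{E}_{\theta}\!\left[f(X_{0:t})\right]
  = \operatorname{Cov}_{\theta}\!\bigl(f(X_{0:t}),\,
    \partial_{\theta_k}\log P_{\theta}(X_{0:t})\bigr).
\]
% The covariance matrix of the score itself is the Fisher information matrix that the
% abstract says appears as a submatrix of the covariance formulation:
\[
  \mathcal{I}(\theta)_{kl}
  = \operatorname{Cov}_{\theta}\!\bigl(\partial_{\theta_k}\log P_{\theta}(X_{0:t}),\,
    \partial_{\theta_l}\log P_{\theta}(X_{0:t})\bigr).
\]
```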

  11. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  12. Empirical likelihood method for non-ignorable missing data problems.

    PubMed

    Guan, Zhong; Qin, Jing

    2017-01-01

    The missing response problem is ubiquitous in survey sampling, medical, social science, and epidemiology studies. It is well known that non-ignorable missingness is the most difficult missing data problem, in which the missingness of a response depends on its own value. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available beyond fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method, we obtain constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of data from a real AIDS trial shows that the missingness of CD4 counts around two years is non-ignorable and that the sample mean based on observed data only is biased.

  13. Evaluating maximum likelihood estimation methods to determine the hurst coefficients

    NASA Astrophysics Data System (ADS)

    Kendziorski, C. M.; Bassingthwaighte, J. B.; Tonellato, P. J.

    1999-12-01

    A maximum likelihood estimation method implemented in S-PLUS (S-MLE) to estimate the Hurst coefficient (H) is evaluated. The Hurst coefficient, with 0.5 < H < 1, characterizes long-memory time series by quantifying the rate of decay of the autocorrelation function. S-MLE was developed to estimate H for fractionally differenced (fd) processes. However, in practice it is difficult to distinguish between fd processes and fractional Gaussian noise (fGn) processes. Thus, the method is evaluated for estimating H for both fd and fGn processes. S-MLE gave biased estimates of H for fGn processes of any length and for fd processes of lengths less than 2^10. A modified method is proposed to correct for this bias. It gives reliable estimates of H for both fd and fGn processes of length greater than or equal to 2^11.
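
    For concreteness, a minimal exact-Gaussian likelihood sketch for H is given below. It profiles out the variance and maximizes the resulting log-likelihood over H for an fGn correlation matrix; it is an illustrative stand-in, not the S-PLUS S-MLE implementation or the proposed bias correction.

```python
# Minimal exact-Gaussian maximum-likelihood sketch for the Hurst coefficient H of
# fractional Gaussian noise (fGn); illustrative only, not the S-MLE code.
import numpy as np
from scipy.linalg import toeplitz, cho_factor, cho_solve
from scipy.optimize import minimize_scalar

def fgn_autocorr(n, H):
    # autocorrelation of fGn at lags 0..n-1 (unit variance)
    k = np.arange(n, dtype=float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def neg_profile_loglik(H, x):
    n = len(x)
    R = toeplitz(fgn_autocorr(n, H))                 # fGn correlation matrix
    c = cho_factor(R)
    quad = x @ cho_solve(c, x)                       # x' R^{-1} x
    logdet = 2.0 * np.sum(np.log(np.diag(c[0])))
    sigma2_hat = quad / n                            # variance profiled out analytically
    return 0.5 * (n * np.log(sigma2_hat) + logdet)

# toy check: white noise corresponds to H = 0.5, so the estimate should land near 0.5
x = np.random.default_rng(0).standard_normal(512)
res = minimize_scalar(neg_profile_loglik, args=(x,), bounds=(0.05, 0.95), method="bounded")
print("estimated H:", round(res.x, 3))
```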

  14. Error detection for genetic data, using likelihood methods

    SciTech Connect

    Ehm, M.G.; Kimmel, M.; Cottingham, R.W. Jr.

    1996-01-01

    As genetic maps become denser, the effect of laboratory typing errors becomes more serious. We review a general method for detecting errors in pedigree genotyping data that is a variant of the likelihood-ratio test statistic. It pinpoints individuals and loci with relatively unlikely genotypes. Power and significance studies using Monte Carlo methods are shown using simulated data with pedigree structures similar to the CEPH pedigrees and a larger experimental pedigree used in the study of idiopathic dilated cardiomyopathy (DCM). The studies show that the index detects errors for small values of θ with high power and an acceptable false-positive rate. The method was also used to check for errors in the DCM laboratory pedigree data and to estimate the error rate in CEPH chromosome 6 data. The errors flagged by our method in the DCM pedigree were confirmed by the laboratory. The results are consistent with the estimated false-positive and false-negative rates obtained using simulation.

  15. Constrained maximum likelihood modal parameter identification applied to structural dynamics

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim

    2016-05-01

    A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints is presented. The constraints imposed on the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix, and therefore the residue matrices, are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped; therefore, normal (real) modes are needed for comparison with these analytical models. The work presented in this paper is a further development of a recently introduced modal parameter identification method, called ML-MM, that makes it possible to establish a modal model satisfying such constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.

  16. Comparisons of likelihood and machine learning methods of individual classification

    USGS Publications Warehouse

    Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.

    2002-01-01

    “Assignment tests” are designed to determine population membership for individuals. One particular application based on a likelihood estimate (LE) was introduced by Paetkau et al. (1995; see also Vásquez-Domínguez et al. 2001) to assign an individual to the population of origin on the basis of multilocus genotype and expectations of observing this genotype in each potential source population. The LE approach can be implemented statistically in a Bayesian framework as a convenient way to evaluate hypotheses of plausible genealogical relationships (e.g., that an individual possesses an ancestor in another population) (Dawson and Belkhir 2001; Pritchard et al. 2000; Rannala and Mountain 1997). Other studies have evaluated the confidence of the assignment (Almudevar 2000) and characteristics of genotypic data (e.g., degree of population divergence, number of loci, number of individuals, number of alleles) that lead to greater population assignment (Bernatchez and Duchesne 2000; Cornuet et al. 1999; Haig et al. 1997; Shriver et al. 1997; Smouse and Chevillon 1998). Main statistical and conceptual differences between methods leading to the use of an assignment test are given in, for example, Cornuet et al. (1999) and Rosenberg et al. (2001). However…

  17. Likelihood based observability analysis and confidence intervals for predictions of dynamic models

    PubMed Central

    2012-01-01

    Background: Predicting a system’s behavior based on a mathematical model is a primary task in Systems Biology. If the model parameters are estimated from experimental data, the parameter uncertainty has to be translated into confidence intervals for model predictions. For dynamic models of biochemical networks, the nonlinearity in combination with the large number of parameters hampers the calculation of prediction confidence intervals and renders classical approaches hardly feasible. Results: In this article, reliable confidence intervals are calculated based on the prediction profile likelihood. Such prediction confidence intervals of the dynamic states can be utilized for a data-based observability analysis. The method is also applicable if there are non-identifiable parameters, yielding some insufficiently specified model predictions that can be interpreted as non-observability. Moreover, a validation profile likelihood is introduced that should be applied when noisy validation experiments are to be interpreted. Conclusions: The presented methodology allows the propagation of uncertainty from experimental data to model predictions. Although presented in the context of ordinary differential equations, the concept is general and also applicable to other types of models. Matlab code that can be used as a template to implement the method is provided at http://www.fdmold.uni-freiburg.de/∼ckreutz/PPL. PMID:22947028

  18. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    SciTech Connect

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in the arithmetic mean evaluation) or the posterior parameter space (as in the harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling, which conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing it with two variants of the Laplace approximation method and three MC methods, including the nested sampling method recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. It is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. In summary, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
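
    The path-sampling identity underlying thermodynamic integration, log p(y) = ∫₀¹ E_β[log L(θ)] dβ with the expectation taken under the power posterior proportional to p(θ) L(θ)^β, can be checked on a toy conjugate model where each power posterior is sampled exactly. The sketch below is illustrative only; the study runs MCMC at each power coefficient on very different models.

```python
# Sketch of thermodynamic integration (path sampling) for the log marginal likelihood on
# a conjugate normal-normal toy model, where each power posterior can be sampled exactly.
import numpy as np

rng = np.random.default_rng(42)
sigma2, tau2, n = 1.0, 4.0, 30                      # data variance, prior variance, sample size
y = rng.normal(0.7, np.sqrt(sigma2), n)

def log_lik(theta):                                  # log p(y | theta) for an array of thetas
    return (-0.5 * n * np.log(2 * np.pi * sigma2)
            - 0.5 * np.sum((y[:, None] - theta) ** 2, axis=0) / sigma2)

betas = np.linspace(0.0, 1.0, 41) ** 3               # ladder concentrated near beta = 0
means = []
for b in betas:
    prec = 1.0 / tau2 + b * n / sigma2               # power-posterior precision
    mu = (b * y.sum() / sigma2) / prec               # power-posterior mean
    theta = rng.normal(mu, np.sqrt(1.0 / prec), 5000)
    means.append(log_lik(theta).mean())              # Monte Carlo estimate of E_beta[log L]
means = np.array(means)

log_ml_ti = np.sum(np.diff(betas) * (means[:-1] + means[1:]) / 2.0)   # trapezoid rule

# exact log marginal likelihood of this conjugate model, for comparison
quad = y @ y / sigma2 - tau2 * y.sum() ** 2 / (sigma2 * (sigma2 + n * tau2))
log_ml_exact = -0.5 * (n * np.log(2 * np.pi * sigma2) + np.log(1 + n * tau2 / sigma2) + quad)
print(round(log_ml_ti, 2), round(log_ml_exact, 2))
```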

  19. An algorithm for maximum likelihood estimation using an efficient method for approximating sensitivities

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1984-01-01

    An algorithm for maximum likelihood (ML) estimation is developed primarily for multivariable dynamic systems. The algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). The method determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared with integrating the analytically determined sensitivity equations or using a finite-difference method. Different surface-fitting methods are discussed and demonstrated. Aircraft estimation problems are solved by using both simulated and real-flight data to compare MNRES with commonly used methods; in these solutions MNRES is found to be equally accurate and substantially faster. MNRES eliminates the need to derive sensitivity equations, thus producing a more generally applicable algorithm.

  20. A composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews

    PubMed Central

    Liu, Yulun; Ning, Jing; Nie, Lei; Zhu, Hongjian; Chu, Haitao

    2014-01-01

    Diagnostic systematic review is a vital step in the evaluation of diagnostic technologies. In many applications, it involves pooling pairs of sensitivity and specificity of a dichotomized diagnostic test from multiple studies. We propose a composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews. This method provides an alternative way to make inference on diagnostic measures such as sensitivity, specificity, likelihood ratios, and the diagnostic odds ratio. Its main advantages over the standard likelihood method are the avoidance of the non-convergence problem, which is non-trivial when the number of studies is relatively small, its computational simplicity, and some robustness to model mis-specification. Simulation studies show that the composite likelihood method maintains high relative efficiency compared to the standard likelihood method. We illustrate our method in a diagnostic review of the performance of contemporary diagnostic imaging technologies for detecting metastases in patients with melanoma. PMID:25512146

  1. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in the arithmetic mean evaluation) or the posterior parameter space (as in the harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling, which conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing it with two variants of the Laplace approximation method and three MC methods, including the nested sampling method recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. It is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. In summary, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.

  2. An Empirical Likelihood Method for Semiparametric Linear Regression with Right Censored Data

    PubMed Central

    Fang, Kai-Tai; Li, Gang; Lu, Xuyang; Qin, Hong

    2013-01-01

    This paper develops a new empirical likelihood method for semiparametric linear regression with a completely unknown error distribution and right censored survival data. The method is based on the Buckley-James (1979) estimating equation. It inherits some appealing properties of the complete data empirical likelihood method. For example, it does not require variance estimation which is problematic for the Buckley-James estimator. We also extend our method to incorporate auxiliary information. We compare our method with the synthetic data empirical likelihood of Li and Wang (2003) using simulations. We also illustrate our method using Stanford heart transplantation data. PMID:23573169

  3. Maximum-Likelihood Adaptive Filter for Partially Observed Boolean Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Imani, Mahdi; Braga-Neto, Ulisses M.

    2017-01-01

    Partially-observed Boolean dynamical systems (POBDS) are a general class of nonlinear models with application in estimation and control of Boolean processes based on noisy and incomplete measurements. The optimal minimum mean square error (MMSE) algorithms for POBDS state estimation, namely, the Boolean Kalman filter (BKF) and Boolean Kalman smoother (BKS), are intractable in the case of large systems, due to computational and memory requirements. To address this, we propose approximate MMSE filtering and smoothing algorithms based on the auxiliary particle filter (APF) method from sequential Monte-Carlo theory. These algorithms are used jointly with maximum-likelihood (ML) methods for simultaneous state and parameter estimation in POBDS models. In the presence of continuous parameters, ML estimation is performed using the expectation-maximization (EM) algorithm; we develop for this purpose a special smoother which reduces the computational complexity of the EM algorithm. The resulting particle-based adaptive filter is applied to a POBDS model of Boolean gene regulatory networks observed through noisy RNA-Seq time series data, and performance is assessed through a series of numerical experiments using the well-known cell cycle gene regulatory model.

  4. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the difference in variance between the maximum likelihood and expected a posteriori estimation methods viewed from the number of test items of an aptitude test. The variance reflects the accuracy achieved by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  5. Modified Maximum Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model

    DTIC Science & Technology

    2015-08-01

    When data are completely separated or quasi-completely separated, the traditional maximum likelihood estimation (MLE) method generates infinite estimates. The bias-reduction (BR) method…

  6. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  7. Data cloning: easy maximum likelihood estimation for complex ecological models using Bayesian Markov chain Monte Carlo methods.

    PubMed

    Lele, Subhash R; Dennis, Brian; Lutscher, Frithjof

    2007-07-01

    We introduce a new statistical computing method, called data cloning, to calculate maximum likelihood estimates and their standard errors for complex ecological models. Although the method uses the Bayesian framework and exploits the computational simplicity of the Markov chain Monte Carlo (MCMC) algorithms, it provides valid frequentist inferences such as the maximum likelihood estimates and their standard errors. The inferences are completely invariant to the choice of the prior distributions and therefore avoid the inherent subjectivity of the Bayesian approach. The data cloning method is easily implemented using standard MCMC software. Data cloning is particularly useful for analysing ecological situations in which hierarchical statistical models, such as state-space models and mixed effects models, are appropriate. We illustrate the method by fitting two nonlinear population dynamics models to data in the presence of process and observation noise.
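
    A toy version of the data-cloning recipe is sketched below on a conjugate normal model, where the posterior for the K-times-cloned data is available in closed form: as K grows, the posterior mean approaches the MLE and K times the posterior variance approaches the MLE's sampling variance. This illustrates the principle only; it is not the MCMC workflow the paper describes.

```python
# Toy illustration of data cloning on a conjugate normal model: clone the data K times,
# compute the (here analytic, in general MCMC-based) posterior, and observe that the
# posterior mean approaches the MLE while K * posterior variance approaches var(MLE).
import numpy as np

rng = np.random.default_rng(3)
sigma2, tau2 = 2.0, 10.0                       # known data variance, prior variance
y = rng.normal(5.0, np.sqrt(sigma2), 25)
mle, mle_var = y.mean(), sigma2 / len(y)       # analytic MLE of the mean and its variance

for K in (1, 10, 100, 1000):
    post_prec = 1.0 / tau2 + K * len(y) / sigma2     # precision with K clones of the data
    post_mean = (K * y.sum() / sigma2) / post_prec
    post_var = 1.0 / post_prec
    print(K, round(post_mean, 4), round(K * post_var, 4))

print("MLE:", round(mle, 4), "  var(MLE):", round(mle_var, 4))
```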

  8. Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes

    NASA Astrophysics Data System (ADS)

    Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen

    2016-06-01

    Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique behind LBS is map building for unknown environments, a technique also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV), with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan with the occupancy likelihood map learned from all previous scans at multiple scales, to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, scan distortion unavoidably exists to varying extents. To compensate for the scan distortion caused by the motion of the UGV, we propose to integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. In addition, to reduce as much as possible the effect of dynamic objects, such as walking pedestrians, that often appear in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.

  9. A Method of Estimating Item Characteristic Functions Using the Maximum Likelihood Estimate of Ability

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1977-01-01

    A method of estimating item characteristic functions is proposed, in which a set of test items, whose operating characteristics are known and which give a constant test information function for a wide range of ability, are used. The method is based on maximum likelihood estimation procedures. (Author/JKS)

  10. A Maximum Likelihood Method for Latent Class Regression Involving a Censored Dependent Variable.

    ERIC Educational Resources Information Center

    Jedidi, Kamel; And Others

    1993-01-01

    A method is proposed to simultaneously estimate regression functions and subject membership in "k" latent classes or groups given a censored dependent variable for a cross-section of subjects. Maximum likelihood estimates are obtained using an EM algorithm. The method is illustrated through a consumer psychology application. (SLD)

  11. Maximum-likelihood estimation in Optical Coherence Tomography in the context of the tear film dynamics.

    PubMed

    Huang, Jinxin; Clarkson, Eric; Kupinski, Matthew; Lee, Kye-Sung; Maki, Kara L; Ross, David S; Aquavella, James V; Rolland, Jannick P

    2013-01-01

    Understanding tear film dynamics is a prerequisite for advancing the management of Dry Eye Disease (DED). In this paper, we discuss the use of optical coherence tomography (OCT) and statistical decision theory to analyze the tear film dynamics of a digital phantom. We implement a maximum-likelihood (ML) estimator to interpret OCT data based on mathematical models of Fourier-domain OCT and the tear film. Using the methodology of task-based assessment, we quantify the tradeoffs among key imaging system parameters. Assuming that the broadband light source is characterized by circular Gaussian statistics, we find ML estimates of 40 nm ± 4 nm for an axial resolution of 1 μm and an integration time of 5 μs. Finally, the estimator is validated with a digital phantom of tear film dynamics, which reveals estimates of nanometer precision.

  12. Evaluation of dynamic coastal response to sea-level rise modifies inundation likelihood

    USGS Publications Warehouse

    Lentz, Erika E.; Thieler, E. Robert; Plant, Nathaniel G.; Stippa, Sawyer R.; Horton, Radley M.; Gesch, Dean B.

    2016-01-01

    Sea-level rise (SLR) poses a range of threats to natural and built environments, making assessments of SLR-induced hazards essential for informed decision making. We develop a probabilistic model that evaluates the likelihood that an area will inundate (flood) or dynamically respond (adapt) to SLR. The broad-area applicability of the approach is demonstrated by producing 30 × 30 m resolution predictions for more than 38,000 km² of diverse coastal landscape in the northeastern United States. Probabilistic SLR projections, coastal elevation and vertical land movement are used to estimate likely future inundation levels. Then, conditioned on future inundation levels and the current land-cover type, we evaluate the likelihood of dynamic response versus inundation. We find that nearly 70% of this coastal landscape has some capacity to respond dynamically to SLR, and we show that inundation models over-predict land likely to submerge. This approach is well suited to guiding coastal resource management decisions that weigh future SLR impacts and uncertainty against ecological targets and economic constraints.

  13. Evaluation of Dynamic Coastal Response to Sea-level Rise Modifies Inundation Likelihood

    NASA Technical Reports Server (NTRS)

    Lentz, Erika E.; Thieler, E. Robert; Plant, Nathaniel G.; Stippa, Sawyer R.; Horton, Radley M.; Gesch, Dean B.

    2016-01-01

    Sea-level rise (SLR) poses a range of threats to natural and built environments, making assessments of SLR-induced hazards essential for informed decision making. We develop a probabilistic model that evaluates the likelihood that an area will inundate (flood) or dynamically respond (adapt) to SLR. The broad-area applicability of the approach is demonstrated by producing 30 × 30 m resolution predictions for more than 38,000 km² of diverse coastal landscape in the northeastern United States. Probabilistic SLR projections, coastal elevation and vertical land movement are used to estimate likely future inundation levels. Then, conditioned on future inundation levels and the current land-cover type, we evaluate the likelihood of dynamic response versus inundation. We find that nearly 70% of this coastal landscape has some capacity to respond dynamically to SLR, and we show that inundation models over-predict land likely to submerge. This approach is well suited to guiding coastal resource management decisions that weigh future SLR impacts and uncertainty against ecological targets and economic constraints.

  14. PhyPA: Phylogenetic method with pairwise sequence alignment outperforms likelihood methods in phylogenetics involving highly diverged sequences.

    PubMed

    Xia, Xuhua

    2016-09-01

    While pairwise sequence alignment (PSA) by dynamic programming is guaranteed to generate one of the optimal alignments, multiple sequence alignment (MSA) of highly divergent sequences often results in poorly aligned sequences, plaguing all subsequent phylogenetic analysis. One way to avoid this problem is to use only PSA to reconstruct phylogenetic trees, which can only be done with distance-based methods. I compared the accuracy of this new computational approach (named PhyPA, for phylogenetics by pairwise alignment) against the maximum likelihood method using MSA (the ML+MSA approach), based on nucleotide, amino acid and codon sequences simulated with different topologies and tree lengths. I present a surprising discovery that the fast PhyPA method consistently outperforms the slow ML+MSA approach for highly diverged sequences, even when all optimization options were turned on for the ML+MSA approach. Only when sequences are not highly diverged (i.e., when a reliable MSA can be obtained) does the ML+MSA approach outperform PhyPA. The true topologies are always recovered by ML with the true alignment from the simulation. However, with MSAs derived from alignment programs such as MAFFT or MUSCLE, the recovered topology consistently has higher likelihood than that for the true topology. Thus, the failure to recover the true topology by ML+MSA is not caused by insufficient search of tree space, but by the distortion of phylogenetic signal by MSA methods. I have implemented in DAMBE both PhyPA and two approaches that make use of multi-gene data sets to derive phylogenetic support for subtrees, equivalent to resampling techniques such as bootstrapping and jackknifing.

  15. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  16. Likelihood methods for regression models with expensive variables missing by design.

    PubMed

    Zhao, Yang; Lawless, Jerald F; McLeish, Donald L

    2009-02-01

    In some applications involving regression the values of certain variables are missing by design for some individuals. For example, in two-stage studies (Zhao and Lipsitz, 1992), data on "cheaper" variables are collected on a random sample of individuals in stage I, and then "expensive" variables are measured for a subsample of these in stage II. So the "expensive" variables are missing by design at stage I. Both estimating function and likelihood methods have been proposed for cases where either covariates or responses are missing. We extend the semiparametric maximum likelihood (SPML) method for missing covariate problems (e.g. Chen, 2004; Ibrahim et al., 2005; Zhang and Rockette, 2005, 2007) to deal with more general cases where covariates and/or responses are missing by design, and show that profile likelihood ratio tests and interval estimation are easily implemented. Simulation studies are provided to examine the performance of the likelihood methods and to compare their efficiencies with estimating function methods for problems involving (a) a missing covariate and (b) a missing response variable. We illustrate the ease of implementation of SPML and demonstrate its high efficiency.

  17. Phase Noise Investigation of Maximum Likelihood Estimation Method for Airborne Multibaseline SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Magnard, C.; Small, D.; Meier, E.

    2015-03-01

    The phase estimation of cross-track multibaseline synthetic aperture interferometric data is usually thought to be very efficiently achieved using the maximum likelihood (ML) method. The suitability of this method is investigated here as applied to airborne single pass multibaseline data. Experimental interferometric data acquired with a Ka-band sensor were processed using (a) a ML method that fuses the complex data from all receivers and (b) a coarse-to-fine method that only uses the intermediate baselines to unwrap the phase values from the longest baseline. The phase noise was analyzed for both methods: in most cases, a small improvement was found when the ML method was used.

  18. Efficient and exact maximum likelihood quantisation of genomic features using dynamic programming.

    PubMed

    Song, Mingzhou; Haralick, Robert M; Boissinot, Stéphane

    2010-01-01

    An efficient and exact dynamic programming algorithm is introduced to quantise a continuous random variable into a discrete random variable that maximises the likelihood of the quantised probability distribution for the original continuous random variable. Quantisation is often useful before statistical analysis and modelling of large discrete network models from observations of multiple continuous random variables. The quantisation algorithm is applied to genomic features including the recombination rate distribution across the chromosomes and the non-coding transposable element LINE-1 in the human genome. The association pattern is studied between the recombination rate, obtained by quantisation at genomic locations around LINE-1 elements, and the length groups of LINE-1 elements, also obtained by quantisation on LINE-1 length. The exact and density-preserving quantisation approach provides an alternative superior to the inexact and distance-based univariate iterative k-means clustering algorithm for discretisation.
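
    The flavor of the dynamic-programming recursion can be conveyed with a small sketch for one continuous variable: contiguous bins over the sorted sample, a piecewise-constant density, and a table dp[b, j] holding the best log-likelihood of the first j points split into b bins. This is an illustrative reimplementation under those assumptions, not the authors' algorithm or code.

```python
# Sketch of exact maximum-likelihood quantisation of one continuous variable into k bins by
# dynamic programming: contiguous bins over the sorted sample, a piecewise-constant density,
# and the partition maximising the log-likelihood.
import numpy as np

def ml_quantise(x, k):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # candidate bin edges: the range endpoints plus midpoints between adjacent sorted values
    edges = np.concatenate(([x[0]], (x[:-1] + x[1:]) / 2.0, [x[-1]]))

    def bin_loglik(i, j):                      # points i..j-1 form one bin (half-open [i, j))
        cnt, width = j - i, edges[j] - edges[i]
        if width <= 0:
            return -np.inf                     # degenerate bin caused by tied values
        return cnt * (np.log(cnt / n) - np.log(width))

    dp = np.full((k + 1, n + 1), -np.inf)      # dp[b, j]: best log-likelihood, first j points, b bins
    back = np.zeros((k + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for b in range(1, k + 1):
        for j in range(b, n + 1):
            for i in range(b - 1, j):          # the last bin covers points i..j-1
                cand = dp[b - 1, i] + bin_loglik(i, j)
                if cand > dp[b, j]:
                    dp[b, j], back[b, j] = cand, i
    cuts, j = [], n                            # walk back-pointers to recover bin boundaries
    for b in range(k, 0, -1):
        cuts.append(edges[j])
        j = back[b, j]
    return dp[k, n], sorted(cuts)

x = np.random.default_rng(0).normal(size=200)
loglik, boundaries = ml_quantise(x, k=4)
print(round(loglik, 2), [round(c, 2) for c in boundaries])
```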

  19. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    PubMed

    Williamson, Ross S; Sahani, Maneesh; Pillow, Jonathan W

    2015-04-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
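
    The LNP log-likelihood in terms of which the equivalence is stated is easy to write down; the sketch below fits a single linear filter under an exponential nonlinearity and Poisson spiking on synthetic data. All names and settings are illustrative assumptions, not taken from the paper or from MID software.

```python
# Sketch of the linear-nonlinear-Poisson (LNP) log-likelihood, fit on synthetic data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
X = rng.standard_normal((5000, 20))              # stimuli: trials x stimulus dimensions
w_true = rng.standard_normal(20) / np.sqrt(20)   # "true" linear filter
rate = np.exp(X @ w_true - 1.0)                  # exponential nonlinearity
y = rng.poisson(rate)                            # spike counts

def negloglik(w):                                # negative Poisson log-likelihood (up to a constant)
    lam = np.exp(X @ w - 1.0)
    return -(y @ (X @ w - 1.0) - lam.sum())

def grad(w):                                     # its gradient: -X' (y - lambda)
    lam = np.exp(X @ w - 1.0)
    return -(X.T @ (y - lam))

w_hat = minimize(negloglik, np.zeros(20), jac=grad, method="L-BFGS-B").x
print(round(float(np.corrcoef(w_hat, w_true)[0, 1]), 3))   # filter recovery check
```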

  20. Simple imputation methods versus direct likelihood analysis for missing item scores in multilevel educational data.

    PubMed

    Kadengye, Damazo T; Cools, Wilfried; Ceulemans, Eva; Van den Noortgate, Wim

    2012-06-01

    Missing data, such as item responses in multilevel data, are ubiquitous in educational research settings. Researchers in the item response theory (IRT) context have shown that ignoring such missing data can create problems in the estimation of the IRT model parameters. Consequently, several imputation methods for dealing with missing item data have been proposed and shown to be effective when applied with traditional IRT models. Additionally, a non-imputation direct likelihood analysis has been shown to be an effective tool for handling missing observations in clustered data settings. This study investigates the performance of six simple imputation methods, which have been found to be useful in other IRT contexts, versus a direct likelihood analysis, in multilevel data from educational settings. Multilevel item response data were simulated on the basis of two empirical data sets, and some of the item scores were deleted, such that they were missing either completely at random or at random. An explanatory IRT model was used for modeling the complete, incomplete, and imputed data sets. We showed that direct likelihood analysis of the incomplete data sets produced unbiased parameter estimates that were comparable to those from a complete data analysis. Multiple-imputation approaches of the two-way mean and corrected item mean substitution methods displayed varying degrees of effectiveness in imputing data that in turn could produce unbiased parameter estimates. The simple random imputation, adjusted random imputation, item means substitution, and regression imputation methods seemed to be less effective in imputing missing item scores in multilevel data settings.

  1. Retrospective Likelihood Based Methods for Analyzing Case-Cohort Genetic Association Studies

    PubMed Central

    Shen, Yuanyuan; Cai, Tianxi; Chen, Yu; Yang, Ying; Chen, Jinbo

    2016-01-01

    The case-cohort (CCH) design is a cost-effective design for assessing genetic susceptibility with time-to-event data, especially when the event rate is low. In this work, we propose a powerful pseudo-score test for assessing the association between a single nucleotide polymorphism (SNP) and the event time under the CCH design. The pseudo-score is derived from a pseudo-likelihood, which is an estimated retrospective likelihood that treats the SNP genotype as the dependent variable and the time-to-event outcome and other covariates as independent variables. It exploits the fact that the genetic variable is often distributed independently of covariates or related only to a low-dimensional subset. Estimates of hazard ratio parameters for association can be obtained by maximizing the pseudo-likelihood. A unique advantage of our method is that it allows the censoring distribution to depend on covariates that are only measured for the CCH sample, while not requiring knowledge of follow-up or covariate information on subjects not selected into the CCH sample. In addition to these flexibilities, the proposed method has high relative efficiency compared with commonly used alternative approaches. We study the large sample properties of this method and assess its finite sample performance using both simulated and real data examples. PMID:26177343

  2. Efficient Simulation and Likelihood Methods for Non-Neutral Multi-Allele Models

    PubMed Central

    Joyce, Paul; Genz, Alan

    2012-01-01

    Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequences, became more readily available, connecting population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate the population-genetic parameters using full DNA sequences. Now, we are in the genomics era, where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as the auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to those produced by non-neutral models, making the rejection method inefficient. For example, in some cases the methods in DNJ (2001) require 10^9 rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection. PMID:22697240

  3. Method and apparatus for implementing a traceback maximum-likelihood decoder in a hypercube network

    NASA Technical Reports Server (NTRS)

    Pollara-Bozzola, Fabrizio (Inventor)

    1989-01-01

    A method and a structure to implement maximum-likelihood decoding of convolutional codes on a network of microprocessors interconnected as an n-dimensional cube (hypercube). By proper reordering of states in the decoder, only communication between adjacent processors is required. Communication time is limited to that required for communication only of the accumulated metrics and not the survivor parameters of a Viterbi decoding algorithm. The survivor parameters are stored at a local processor's memory and a trace-back method is employed to ascertain the decoding result. Faster and more efficient operation is enabled, and decoding of large constraint length codes is feasible using standard VLSI technology.

  4. A dynamic growth model of Dunaliella salina: parameter identification and profile likelihood analysis.

    PubMed

    Fachet, Melanie; Flassig, Robert J; Rihko-Struckmann, Liisa; Sundmacher, Kai

    2014-12-01

    In this work, a photoautotrophic growth model incorporating light and nutrient effects on growth and pigmentation of Dunaliella salina was formulated. The model equations were taken from literature and modified according to the experimental setup with special emphasis on model reduction. The proposed model has been evaluated with experimental data of D. salina cultivated in a flat-plate photobioreactor under stressed and non-stressed conditions. Simulation results show that the model can represent the experimental data accurately. The identifiability of the model parameters was studied using the profile likelihood method. This analysis revealed that three model parameters are practically non-identifiable. However, some of these non-identifiabilities can be resolved by model reduction and additional measurements. As a conclusion, our results suggest that the proposed model equations result in a predictive growth model for D. salina.
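
    The profile-likelihood calculation used for such identifiability analyses can be sketched on a toy model: fix the parameter of interest on a grid, re-optimize the remaining parameters at each grid point, and compare the profiled objective with a chi-square threshold. The exponential-decay model below is a placeholder, not the D. salina growth model.

```python
# Minimal profile-likelihood sketch on a toy exponential-decay model with known noise level.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 40)
sigma = 0.05
y = 2.0 * np.exp(-0.8 * t) + rng.normal(0.0, sigma, t.size)   # synthetic decay data

def neg2loglik(a, b):                                          # -2 log L up to a constant
    return np.sum((y - a * np.exp(-b * t)) ** 2) / sigma ** 2

b_grid = np.linspace(0.4, 1.2, 81)
profile = np.array([
    minimize_scalar(lambda a, bb=b: neg2loglik(a, bb),
                    bounds=(0.1, 5.0), method="bounded").fun   # re-fit the nuisance parameter a
    for b in b_grid
])
threshold = profile.min() + chi2.ppf(0.95, df=1)               # likelihood-based 95% interval
inside = b_grid[profile <= threshold]
print("95% profile-likelihood interval for b: [%.2f, %.2f]" % (inside.min(), inside.max()))
```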

  5. Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.

    PubMed

    Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier

    2017-02-01

    The data to which the authors refer throughout this article are likelihood ratios (LRs) computed from the comparison of 5-12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. The data are a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. They can also be used as a baseline for comparing fingermark evidence in the same minutiae configuration as presented in (D. Meuwly, D. Ramos, R. Haraksim) [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, the data may serve as a reproducibility exercise, in order to train the generation of validation reports of forensic methods, according to [1]. Alongside the data, a justification and motivation for the choice of methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints for the validation and simulated data for the development is described and justified. Validation criteria are set for the purpose of validating the LR methods, which are used to calculate the LR values from the data, and for the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared, but these images do not constitute the core data for the validation, unlike the LRs, which are shared.

  6. Maximum-likelihood methods for array processing based on time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.

  7. Maximum Likelihood, Profile Likelihood, and Penalized Likelihood: A Primer

    PubMed Central

    Cole, Stephen R.; Chu, Haitao; Greenland, Sander

    2014-01-01

    The method of maximum likelihood is widely used in epidemiology, yet many epidemiologists receive little or no education in the conceptual underpinnings of the approach. Here we provide a primer on maximum likelihood and some important extensions which have proven useful in epidemiologic research, and which reveal connections between maximum likelihood and Bayesian methods. For a given data set and probability model, maximum likelihood finds values of the model parameters that give the observed data the highest probability. As with all inferential statistical methods, maximum likelihood is based on an assumed model and cannot account for bias sources that are not controlled by the model or the study design. Maximum likelihood is nonetheless popular, because it is computationally straightforward and intuitive and because maximum likelihood estimators have desirable large-sample properties in the (largely fictitious) case in which the model has been correctly specified. Here, we work through an example to illustrate the mechanics of maximum likelihood estimation and indicate how improvements can be made easily with commercial software. We then describe recent extensions and generalizations which are better suited to observational health research and which should arguably replace standard maximum likelihood as the default method. PMID:24173548
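
    As a worked miniature of the mechanics described above (not the article's own example): for a binomial sample the log-likelihood can be maximized numerically and checked against the closed-form MLE, the sample proportion.

```python
# Worked miniature of maximum likelihood for a binomial sample: numerically maximise the
# log-likelihood and check against the closed-form MLE (the sample proportion). The counts
# below are made up for illustration.
import numpy as np
from scipy.optimize import minimize_scalar

successes, trials = 37, 120

def neg_loglik(p):
    return -(successes * np.log(p) + (trials - successes) * np.log(1.0 - p))

fit = minimize_scalar(neg_loglik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(round(fit.x, 4), round(successes / trials, 4))   # both approximately 0.3083
```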

  8. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    When it is known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

  9. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.

  10. Equivalence between modularity optimization and maximum likelihood methods for community detection

    NASA Astrophysics Data System (ADS)

    Newman, M. E. J.

    2016-11-01

    We demonstrate an equivalence between two widely used methods of community detection in networks, the method of modularity maximization and the method of maximum likelihood applied to the degree-corrected stochastic block model. Specifically, we show an exact equivalence between maximization of the generalized modularity that includes a resolution parameter and the special case of the block model known as the planted partition model, in which all communities in a network are assumed to have statistically similar properties. Among other things, this equivalence provides a mathematically principled derivation of the modularity function, clarifies the conditions and assumptions of its use, and gives an explicit formula for the optimal value of the resolution parameter.
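
    For orientation, the generalized modularity with a resolution parameter that the record refers to is usually written as below; the notation is the standard one and is assumed here rather than quoted from the paper.

```latex
% Generalized modularity with resolution parameter \gamma (standard notation):
% A_{ij} is the adjacency matrix, k_i the degree of node i, m the number of
% edges, and g_i the community assignment of node i.
Q(\gamma) \;=\; \frac{1}{2m} \sum_{ij} \left[ A_{ij} - \gamma\,\frac{k_i k_j}{2m} \right] \delta(g_i, g_j)
```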

  11. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    PubMed

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
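
    The final step described, finding the closest probability vector to a set of real eigenvalues summing to one, is equivalent to a Euclidean projection onto the probability simplex. The sketch below uses the standard sort-based projection (not necessarily the exact linear-time routine the authors describe) to build the nearest physical density matrix; the example matrix is hypothetical.

```python
import numpy as np

def project_to_simplex(values):
    """Euclidean projection of a real vector onto the probability simplex
    (entries >= 0 that sum to 1), via the standard sort-based algorithm."""
    u = np.sort(values)[::-1]                       # descending order
    css = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.max(np.nonzero(u + (1.0 - css) / k > 0)[0]) + 1
    shift = (1.0 - css[rho - 1]) / rho
    return np.maximum(values + shift, 0.0)

def nearest_physical_state(mu):
    """Given a Hermitian candidate matrix mu (possibly with negative eigenvalues),
    return the closest density matrix by projecting its eigenvalues onto the
    simplex while keeping its eigenvectors."""
    evals, evecs = np.linalg.eigh(mu)
    probs = project_to_simplex(evals)
    return (evecs * probs) @ evecs.conj().T

# Hypothetical 2x2 example with an unphysical negative eigenvalue.
mu = np.array([[1.1, 0.2], [0.2, -0.1]])
rho = nearest_physical_state(mu)
print(np.linalg.eigvalsh(rho), np.trace(rho))
```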

  12. Comparative analysis of the performance of laser Doppler systems using maximum likelihood and phase increment methods

    NASA Astrophysics Data System (ADS)

    Sobolev, V. S.; Zhuravel', F. A.; Kashcheeva, G. A.

    2016-11-01

    This paper presents a comparative analysis of the errors of two alternative methods of estimating the central frequency of signals of laser Doppler systems, one of which is based on the maximum likelihood criterion and the other on the so-called pulse-pair technique. Using computer simulation, the standard deviations of the Doppler signal frequency from its true values are determined for both methods and plots of the ratios of these deviations as a measure of the accuracy gain of one of them are constructed. The results can be used by developers of appropriate systems to choose an optimal algorithm of signal processing based on a compromise between the accuracy and speed of the systems as well as the labor intensity of calculations.
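
    The pulse-pair (phase-increment) estimator the record compares against is conventionally computed from the lag-one autocorrelation of the complex Doppler signal. The sketch below follows that standard definition rather than the authors' exact implementation; the signal parameters are illustrative.

```python
import numpy as np

def pulse_pair_frequency(z, sample_interval):
    """Phase-increment (pulse-pair) estimate of the central frequency of a
    complex Doppler signal z sampled at a fixed interval (in seconds)."""
    r1 = np.mean(z[1:] * np.conj(z[:-1]))      # lag-one autocorrelation estimate
    return np.angle(r1) / (2.0 * np.pi * sample_interval)

# Hypothetical example: 5 kHz Doppler tone in complex white noise, 100 kHz sampling.
rng = np.random.default_rng(1)
fs, f0, n = 100e3, 5e3, 1024
t = np.arange(n) / fs
z = np.exp(2j * np.pi * f0 * t) + 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print(pulse_pair_frequency(z, 1.0 / fs))
```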

  13. Determination of instrumentation errors from measured data using maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Keskar, D. A.; Klein, V.

    1980-01-01

    The maximum likelihood method is used for estimation of unknown initial conditions, constant bias and scale factor errors in measured flight data. The model for the system to be identified consists of the airplane six-degree-of-freedom kinematic equations, and the output equations specifying the measured variables. The estimation problem is formulated in a general way and then, for practical use, simplified by ignoring the effect of process noise. The algorithm developed is first applied to computer generated data having different levels of process noise for the demonstration of the robustness of the method. Then the real flight data are analyzed and the results compared with those obtained by the extended Kalman filter algorithm.

  14. A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation.

    PubMed

    Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf

    2016-04-26

    This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the Likelihood Ratio framework as defined within the Bayesian inference model. In the context of the inference of identity of source, the Likelihood Ratio is used to evaluate the strength of the evidence for a trace specimen, e.g. a fingermark, and a reference specimen, e.g. a fingerprint, to originate from common or different sources. Some theoretical aspects of probabilities necessary for this Guideline were discussed prior to its elaboration, which started after a workshop of forensic researchers and practitioners involved in this topic. In the workshop, the following questions were addressed: "which aspects of a forensic evaluation scenario need to be validated?", "what is the role of the LR as part of a decision process?" and "how to deal with uncertainty in the LR calculation?". The question "what to validate?" focuses on the validation methods and criteria, and the question "how to validate?" deals with the implementation of the validation protocol. Answers to these questions were deemed necessary to meet several objectives. First, concepts typical for validation standards [1], such as performance characteristics, performance metrics and validation criteria, will be adapted or applied by analogy to the LR framework. Second, a validation strategy will be defined. Third, validation methods will be described. Finally, a validation protocol and an example of a validation report will be proposed, which can be applied to the forensic fields developing and validating LR methods for the evaluation of the strength of evidence at source level under the following propositions.

  15. Likelihood ratio meta-analysis: New motivation and approach for an old method.

    PubMed

    Dormuth, Colin R; Filion, Kristian B; Platt, Robert W

    2016-03-01

    A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded in excluding the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher-potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed-effect and random-effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience.
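
    A minimal numeric sketch of the mechanics described, assuming each study's log-likelihood for a log relative risk is approximately normal and summarized by an estimate and standard error; the support threshold used for the illustrative 'intrinsic' interval is an assumption, not necessarily the one used by the authors.

```python
import numpy as np

# Hypothetical per-study summaries: log relative risks and their standard errors.
estimates = np.array([0.25, 0.10, 0.40])
std_errors = np.array([0.15, 0.20, 0.25])

theta = np.linspace(-1.0, 1.5, 5001)

# Each study contributes an (approximately normal) log-likelihood function;
# the combined evidence is simply their sum over studies.
log_lik = -0.5 * ((theta[:, None] - estimates) / std_errors) ** 2
total = log_lik.sum(axis=1)

mle = theta[np.argmax(total)]
# Illustrative "intrinsic" interval: values of theta whose likelihood is within
# a factor of 8 of the maximum (the support threshold here is an assumption).
supported = theta[total >= total.max() - np.log(8.0)]
print(mle, supported.min(), supported.max())
```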

  16. Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds

    USGS Publications Warehouse

    Conroy, M.J.; Morgan, B.J.T.; North, P.M.

    1985-01-01

    It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate this reporting rate by comparing recoveries of rings offering a monetary reward with recoveries of ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio. A FORTRAN program, available from the author, is used to perform the computations.

  17. A Maximum Likelihood Method for Reconstruction of the Evolution of Eukaryotic Gene Structure

    PubMed Central

    Carmel, Liran; Rogozin, Igor B.; Wolf, Yuri I.; Koonin, Eugene V.

    2012-01-01

    Spliceosomal introns are one of the principal distinctive features of eukaryotes. Nevertheless, different large-scale studies disagree about even the most basic features of their evolution. In order to come up with a more reliable reconstruction of intron evolution, we developed a model that is far more comprehensive than previous ones. This model is rich in parameters, and estimating them accurately is infeasible by straightforward likelihood maximization. Thus, we have developed an expectation-maximization algorithm that allows for efficient maximization. Here, we outline the model and describe the expectation-maximization algorithm in detail. Since the method works with intron presence–absence maps, it is expected to be instrumental for the analysis of the evolution of other binary characters as well. PMID:19381540

  18. An alternative empirical likelihood method in missing response problems and causal inference.

    PubMed

    Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao

    2016-11-30

    Missing responses are common problems in medical, social, and economic studies. When responses are missing at random, a complete-case data analysis may result in biases. A popular bias-correction method is the inverse probability weighting approach proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and the propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to that of Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied to the estimation of the average treatment effect in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
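
    For orientation only, the augmented inverse probability weighting (AIPW) estimator that the proposed empirical-likelihood estimator is positioned against can be sketched as follows for estimating a mean with responses missing at random; the models, data, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_mean(x, y, observed):
    """Doubly robust AIPW estimate of E[Y] when some responses are missing at random.

    x: (n, p) covariates; y: responses (only used where observed == 1);
    observed: 1 if the response was observed, 0 if missing.
    """
    # Propensity (probability of being observed) and outcome regression models.
    propensity = LogisticRegression().fit(x, observed).predict_proba(x)[:, 1]
    outcome = LinearRegression().fit(x[observed == 1], y[observed == 1]).predict(x)
    # Missing entries never contribute directly, so fill them with zeros.
    y_filled = np.where(observed == 1, y, 0.0)
    return np.mean(observed * y_filled / propensity + (1.0 - observed / propensity) * outcome)

# Hypothetical synthetic data with covariate-dependent missingness.
rng = np.random.default_rng(2)
x = rng.standard_normal((500, 2))
y = x @ np.array([1.0, -0.5]) + rng.standard_normal(500)
observed = rng.binomial(1, 1.0 / (1.0 + np.exp(-x[:, 0])))
print(aipw_mean(x, y, observed))
```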

  19. Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods

    PubMed Central

    Benevides, Leandro de Jesus; de Carvalho, Daniel Santana; Andrade, Roberto Fernandes Silva; Bomfim, Gilberto Cafezeiro; Fernandes, Flora Maria de Campos

    2016-01-01

    Abstract Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how various apo E proteins are related in groups of organisms and whether they evolved from a common ancestor. Here, we aimed at performing a phylogenetic study on apo E carrying organisms. We employed a classical and robust method, such as Maximum Likelihood (ML), and compared the results using a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from ML method, as well as from the constructed networks showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The accordance in results from the different methods shows that the complex networks approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups. PMID:27560837

  20. Analyzing pathogen suppressiveness in bioassays with natural soils using integrative maximum likelihood methods in R

    PubMed Central

    Latz, Ellen

    2016-01-01

    The potential of soils to naturally suppress inherent plant pathogens is an important ecosystem function. Usually, pathogen infection assays are used for estimating the suppressive potential of soils. In natural soils, however, co-occurring pathogens might simultaneously infect plants, complicating the estimation of a focal pathogen’s infection rate (the initial slope of the infection curve) as a measure of soil suppressiveness. Here, we present a method in R that corrects for these unwanted effects by developing a two-pathogen monomolecular infection model. We fit this model to data using an integrative approach that combines a numerical simulation of the model with an iterative maximum likelihood fit. We show that, in the presence of co-occurring pathogens, using uncorrected data leads to a critical under- or overestimation of soil suppressiveness measures. In contrast, our new approach enables precise estimation of soil suppressiveness measures such as plant infection rate and plant resistance time. Our method allows a correction of measured infection parameters that is necessary when different pathogens are present. Moreover, our model can (1) be adapted to other models such as the logistic or the Gompertz model; and (2) be extended by a facilitation parameter if infections increase plant susceptibility to new infections. We propose that our method is particularly useful for exploring soil suppressiveness of natural soils from different sites (e.g., in biodiversity experiments). PMID:27833800
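
    The authors implement their approach in R; purely for illustration, the following Python sketch fits a single-pathogen monomolecular infection curve by maximum likelihood (the two-pathogen model and the correction described in the record are not reproduced here), with hypothetical bioassay counts.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical bioassay data: plants infected (out of 20) at each observation time.
times = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
infected = np.array([3, 8, 13, 16, 18])
n_plants = 20

def negative_log_likelihood(rate):
    # Monomolecular infection curve: expected infected fraction 1 - exp(-rate * t).
    p = np.clip(1.0 - np.exp(-rate * times), 1e-9, 1 - 1e-9)
    return -np.sum(infected * np.log(p) + (n_plants - infected) * np.log(1.0 - p))

fit = minimize_scalar(negative_log_likelihood, bounds=(1e-4, 2.0), method="bounded")
print(fit.x)   # estimated infection rate
```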

  1. Methods for flexible sample-size design in clinical trials: Likelihood, weighted, dual test, and promising zone approaches.

    PubMed

    Shih, Weichung Joe; Li, Gang; Wang, Yining

    2016-03-01

    Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one.

  2. Two-locus models of disease: Comparison of likelihood and nonparametric linkage methods

    SciTech Connect

    Goldin, L.R. ); Weeks, D.E. )

    1993-10-01

    The power to detect linkage for likelihood and nonparametric (Haseman-Elston, affected-sib-pair, and affected-pedigree-member) methods is compared for the case of a common, dichotomous trait resulting from the segregation of two loci. Pedigree data for several two-locus epistatic and heterogeneity models have been simulated, with one of the loci linked to a marker locus. Replicate samples of 20 three-generation pedigrees (16 individuals/pedigree) were simulated and then ascertained for having at least 6 affected individuals. The power of linkage detection calculated under the correct two-locus model is only slightly higher than that under a single locus model with reduced penetrance. As expected, the nonparametric linkage methods have somewhat lower power than does the lod-score method, the difference depending on the mode of transmission of the linked locus. Thus, for many pedigree linkage studies, the lod-score method will have the best power. However, this conclusion depends on how many times the lod score will be calculated for a given marker. The Haseman-Elston method would likely be preferable to calculating lod scores under a large number of genetic models (i.e., varying both the mode of transmission and the penetrances), since such an analysis requires an increase in the critical value of the lod criterion. The power of the affected-pedigree-member method is lower than the other methods, which can be shown to be largely due to the fact that marker genotypes for unaffected individuals are not used. 31 refs., 1 fig., 5 tabs.

  3. Quantifying uncertainty in predictions of groundwater levels using formal likelihood methods

    NASA Astrophysics Data System (ADS)

    Marchant, Ben; Mackay, Jonathan; Bloomfield, John

    2016-09-01

    Informal and formal likelihood methods can be used to quantify uncertainty in modelled predictions of groundwater levels (GWLs). Informal methods use a relatively subjective criterion to identify sets of plausible or behavioural parameters of the GWL models. In contrast, formal methods specify a statistical model for the residuals or errors of the GWL model. The formal uncertainty estimates are only reliable when the assumptions of the statistical model are appropriate. We apply the formal approach to historical reconstructions of GWL hydrographs from four UK boreholes. We test whether a model which assumes Gaussian and independent errors is sufficient to represent the residuals or whether a model which includes temporal autocorrelation and a general non-Gaussian distribution is required. Groundwater level hydrographs are often observed at irregular time intervals so we use geostatistical methods to quantify the temporal autocorrelation rather than more standard time series methods such as autoregressive models. According to the Akaike Information Criterion, the more general statistical model better represents the residuals of the GWL model. However, no substantial difference between the accuracy of the GWL predictions and the estimates of their uncertainty is observed when the two statistical models are compared. When the general model is applied, significant temporal correlation over periods ranging from 3 to 20 months is evident for the different boreholes. When the GWL model parameters are sampled using a Markov Chain Monte Carlo approach the distributions based on the general statistical model differ from those of the Gaussian model, particularly for the boreholes with the most autocorrelation. These results suggest that the independent Gaussian model of residuals is sufficient to estimate the uncertainty of a GWL prediction on a single date. However, if realistically autocorrelated simulations of GWL hydrographs for multiple dates are required or if the
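
    As a sketch of what a formal likelihood with temporally autocorrelated residuals can look like for irregularly observed groundwater levels, the function below evaluates a Gaussian log-likelihood under an exponential covariance with a nugget term; the parameterization and values are illustrative assumptions, not the exact geostatistical model used in the study.

```python
import numpy as np

def gaussian_loglik_autocorrelated(residuals, times_in_months, sigma, corr_range, nugget):
    """Log-likelihood of groundwater-model residuals under a zero-mean Gaussian
    model with exponential temporal covariance (irregular observation times allowed)."""
    lags = np.abs(times_in_months[:, None] - times_in_months[None, :])
    cov = sigma ** 2 * np.exp(-lags / corr_range) + nugget ** 2 * np.eye(len(residuals))
    chol = np.linalg.cholesky(cov)
    alpha = np.linalg.solve(chol, residuals)
    log_det = 2.0 * np.sum(np.log(np.diag(chol)))
    n = len(residuals)
    return -0.5 * (n * np.log(2.0 * np.pi) + log_det + alpha @ alpha)

# Hypothetical residuals at irregular times (in months).
t = np.array([0.0, 1.0, 3.0, 4.5, 7.0, 11.0])
r = np.array([0.2, 0.1, -0.3, -0.1, 0.4, 0.0])
print(gaussian_loglik_autocorrelated(r, t, sigma=0.3, corr_range=5.0, nugget=0.05))
```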

  4. A maximum-likelihood multi-resolution weak lensing mass reconstruction method

    NASA Astrophysics Data System (ADS)

    Khiabanian, Hossein

    Gravitational lensing is formed when the light from a distant source is "bent" around a massive object. Lensing analysis has increasingly become the method of choice for studying dark matter, so much so that it is one of the main tools that will be employed in future surveys to study dark energy and its equation of state as well as the evolution of galaxy clustering. Unlike other popular techniques for selecting galaxy clusters (such as studying the X-ray emission or observing the over-densities of galaxies), weak gravitational lensing does not have the disadvantage of relying on the luminous matter and provides a parameter-free reconstruction of the projected mass distribution in clusters without dependence on baryon content. Gravitational lensing also provides a unique test for the presence of truly dark clusters, though it is otherwise an expensive detection method. Therefore it is essential to make use of all the information provided by the data to improve the quality of the lensing analysis. This thesis project has been motivated by the limitations encountered with the commonly used direct reconstruction methods of producing mass maps. We have developed a multi-resolution maximum-likelihood reconstruction method for producing two-dimensional mass maps using weak gravitational lensing data. To utilize all the shear information, we employ an iterative inverse method with a properly selected regularization coefficient which fits the deflection potential at the position of each galaxy. By producing mass maps with multiple resolutions in different parts of the observed field, we can achieve a uniform signal-to-noise level by increasing the resolution in regions of higher distortions or regions with an over-density of background galaxies. In addition, we are able to better study the substructure of massive clusters at a resolution which is not attainable in the rest of the observed field.

  5. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  6. Application of maximum likelihood to direct methods: the probability density function of the triple-phase sums. XI.

    PubMed

    Rius, Jordi

    2006-09-01

    The maximum-likelihood method is applied to direct methods to derive a more general probability density function of the triple-phase sums which is capable of predicting negative values. This study also proves that maximization of the origin-free modulus sum function S yields, within the limitations imposed by the assumed approximations, the maximum-likelihood estimates of the phases. It thus represents the formal theoretical justification of the S function that was initially derived from Patterson-function arguments [Rius (1993). Acta Cryst. A49, 406-409].

  7. A Composite-Likelihood Method for Detecting Incomplete Selective Sweep from Population Genomic Data.

    PubMed

    Vy, Ha My T; Kim, Yuseob

    2015-06-01

    Adaptive evolution occurs as beneficial mutations arise and then increase in frequency by positive natural selection. How, when, and where in the genome such evolutionary events occur is a fundamental question in evolutionary biology. It is possible to detect ongoing positive selection or an incomplete selective sweep in species with sexual reproduction because, when a beneficial mutation is on the way to fixation, homologous chromosomes in the population are divided into two groups: one carrying the beneficial allele with very low polymorphism at nearby linked loci and the other carrying the ancestral allele with a normal pattern of sequence variation. Previous studies developed long-range haplotype tests to capture this difference between two groups as the signal of an incomplete selective sweep. In this study, we propose a composite-likelihood-ratio (CLR) test for detecting incomplete selective sweeps based on the joint sampling probabilities for allele frequencies of two groups as a function of strength of selection and recombination rate. Tested against simulated data, this method yielded statistical power and accuracy in parameter estimation that are higher than the iHS test and comparable to the more recently developed nSL test. This procedure was also applied to African Drosophila melanogaster population genomic data to detect candidate genes under ongoing positive selection. Upon visual inspection of sequence polymorphism, candidates detected by our CLR method exhibited clear haplotype structures predicted under incomplete selective sweeps. Our results suggest that different methods capture different aspects of genetic information regarding incomplete sweeps and thus are partially complementary to each other.

  8. From Dynamical Processes to Likelihood Functions, An Application to Internet Surveillance Data for Influenza Like Illnesses

    NASA Astrophysics Data System (ADS)

    Stollenwerk, Nico

    2009-09-01

    Basic stochastic processes, like the SIS and SIR epidemics, are used to describe data from an internet-based surveillance system, the InfluenzaNet. Via generating functions, analytic expressions for the probability can be derived in some simplified situations. From these, likelihood functions for parameter estimation are constructed. This is a nice application in which partial differential equations appear in epidemiology without invoking any explicitly spatial aspect. All steps can eventually be bridged by numeric simulations in case of analytical difficulties [1, 2].

  9. Estimating parameters of a multiple autoregressive model by the modified maximum likelihood method

    NASA Astrophysics Data System (ADS)

    Bayrak, Özlem Türker; Akkaya, Aysen D.

    2010-02-01

    We consider a multiple autoregressive model with non-normal error distributions, the latter being more prevalent in practice than the usually assumed normal distribution. Since the maximum likelihood equations have convergence problems (Puthenpura and Sinha, 1986) [11], we work out modified maximum likelihood equations by expressing the maximum likelihood equations in terms of ordered residuals and linearizing intractable nonlinear functions (Tiku and Suresh, 1992) [8]. The solutions, called modified maximum likelihood estimators, are explicit functions of the sample observations and therefore easy to compute. Under some very general regularity conditions, they are asymptotically unbiased and efficient (Vaughan and Tiku, 2000) [4]. We show that for small sample sizes they have negligible bias and are considerably more efficient than the traditional least squares estimators. We show that our estimators are robust to plausible deviations from an assumed distribution and are therefore enormously advantageous compared to the least squares estimators. We give a real-life example.

  10. Calibrating floor field cellular automaton models for pedestrian dynamics by using likelihood function optimization

    NASA Astrophysics Data System (ADS)

    Lovreglio, Ruggiero; Ronchi, Enrico; Nilsson, Daniel

    2015-11-01

    The formulation of pedestrian floor field cellular automaton models is generally based on hypothetical assumptions to represent reality. This paper proposes a novel methodology to calibrate these models using experimental trajectories. The methodology is based on likelihood function optimization and allows verifying whether the parameters defining a model statistically affect pedestrian navigation. Moreover, it allows comparing different model specifications or the parameters of the same model estimated using different data collection techniques, e.g. virtual reality experiment, real data, etc. The methodology is here implemented using navigation data collected in a Virtual Reality tunnel evacuation experiment including 96 participants. A trajectory dataset in the proximity of an emergency exit is used to test and compare different metrics, i.e. Euclidean and modified Euclidean distance, for the static floor field. In the present case study, modified Euclidean metrics provide better fitting with the data. A new formulation using random parameters for pedestrian cellular automaton models is also defined and tested.
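
    The calibration idea, maximizing the likelihood of observed pedestrian steps under a probabilistic choice rule over neighbouring cells, can be sketched as below with a simple logit rule driven only by a static floor field; the paper's model specification is richer, so the utility terms, data, and parameter bounds here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def step_log_likelihood(k_static, static_field_per_step, chosen_index_per_step):
    """Log-likelihood of observed pedestrian steps under a logit floor-field rule.

    static_field_per_step: one array per observed step, holding the static
    floor-field value of each candidate neighbouring cell (e.g. distance to the exit).
    chosen_index_per_step: index of the cell the pedestrian actually moved to.
    """
    total = 0.0
    for field, chosen in zip(static_field_per_step, chosen_index_per_step):
        scores = -k_static * np.asarray(field)          # lower field value = more attractive
        scores -= scores.max()                          # numerical stabilization
        log_probs = scores - np.log(np.sum(np.exp(scores)))
        total += log_probs[chosen]
    return total

# Hypothetical observations: candidate-cell distances to the exit and chosen cells.
fields = [np.array([3.0, 2.0, 2.5]), np.array([1.5, 2.5, 2.0]),
          np.array([2.0, 1.0, 3.0]), np.array([1.0, 1.5, 2.0])]
choices = [1, 0, 1, 1]
fit = minimize_scalar(lambda k: -step_log_likelihood(k, fields, choices),
                      bounds=(0.0, 20.0), method="bounded")
print(fit.x)   # estimated sensitivity to the static floor field
```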

  11. HIV AND POPULATION DYNAMICS: A GENERAL MODEL AND MAXIMUM-LIKELIHOOD STANDARDS FOR EAST AFRICA*

    PubMed Central

    HEUVELINE, PATRICK

    2014-01-01

    In high-prevalence populations, the HIV epidemic undermines the validity of past empirical models and related demographic techniques. A parsimonious model of HIV and population dynamics is presented here and fit to 46,000 observations, gathered from 11 East African populations. The fitted model simulates HIV and population dynamics with standard demographic inputs and only two additional parameters for the onset and scale of the epidemic. The underestimation of the general prevalence of HIV in samples of pregnant women and the fertility impact of HIV are examples of the dynamic interactions that demographic models must reproduce and are shown here to increase over time even with constant prevalence levels. As a result, the impact of HIV on population growth appears to have been underestimated by current population projections that ignore this dynamic. PMID:12846130

  12. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…
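
    The Kalman-filter route mentioned first rests on the innovations (prediction-error) decomposition of the likelihood of a state-space model. As a hedged illustration (a scalar AR(1)-plus-noise model rather than the DAFS model itself), the following computes that log-likelihood; the parameter values are hypothetical and would normally be chosen by maximizing this quantity.

```python
import numpy as np

def kalman_loglik_ar1_plus_noise(y, phi, q, r):
    """Innovations log-likelihood of the scalar state-space model
    x_t = phi * x_{t-1} + w_t (variance q),  y_t = x_t + v_t (variance r),
    computed with a standard Kalman filter recursion."""
    x, p = 0.0, q / max(1.0 - phi ** 2, 1e-8)    # stationary initialization
    loglik = 0.0
    for obs in y:
        x_pred, p_pred = phi * x, phi ** 2 * p + q          # predict
        s = p_pred + r                                      # innovation variance
        innov = obs - x_pred
        loglik += -0.5 * (np.log(2.0 * np.pi * s) + innov ** 2 / s)
        gain = p_pred / s                                   # update
        x, p = x_pred + gain * innov, (1.0 - gain) * p_pred
    return loglik

# Hypothetical series; in practice (phi, q, r) would be chosen to maximize this value.
rng = np.random.default_rng(3)
y = np.cumsum(rng.standard_normal(50)) * 0.1 + rng.standard_normal(50) * 0.2
print(kalman_loglik_ar1_plus_noise(y, phi=0.8, q=0.05, r=0.04))
```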

  13. A likelihood method to cross-calibrate air-shower detectors

    NASA Astrophysics Data System (ADS)

    Dembinski, Hans Peter; Kégl, Balázs; Mariş, Ioana C.; Roth, Markus; Veberič, Darko

    2016-01-01

    We present a detailed statistical treatment of the energy calibration of hybrid air-shower detectors, which combine a surface detector array and a fluorescence detector, to obtain an unbiased estimate of the calibration curve. The special features of calibration data from air showers prevent unbiased results, if a standard least-squares fit is applied to the problem. We develop a general maximum-likelihood approach, based on the detailed statistical model, to solve the problem. Our approach was developed for the Pierre Auger Observatory, but the applied principles are general and can be transferred to other air-shower experiments, even to the cross-calibration of other observables. Since our general likelihood function is expensive to compute, we derive two approximations with significantly smaller computational cost. In the recent years both have been used to calibrate data of the Pierre Auger Observatory. We demonstrate that these approximations introduce negligible bias when they are applied to simulated toy experiments, which mimic realistic experimental conditions.

  14. How to use dynamic light scattering to improve the likelihood of growing macromolecular crystals.

    PubMed

    Borgstahl, Gloria E O

    2007-01-01

    Dynamic light scattering (DLS) has become one of the most useful diagnostic tools for crystallization. The main purpose of using DLS in crystal screening is to help the investigator understand the size distribution, stability, and aggregation state of macromolecules in solution. It can also be used to understand how experimental variables influence aggregation. With commercially available instruments, DLS is easy to perform, and most of the sample is recoverable. Most usefully, the homogeneity or monodispersity of a sample, as measured by DLS, can be predictive of crystallizability.

  15. Maximum-likelihood estimation of familial correlations from multivariate quantitative data on pedigrees: a general method and examples.

    PubMed Central

    Rao, D C; Vogler, G P; McGue, M; Russell, J M

    1987-01-01

    A general method for maximum-likelihood estimation of familial correlations from pedigree data is presented. The method is applicable to any type of data structure, including pedigrees in which variable numbers of individuals are present within classes of relatives, data in which multiple phenotypic measures are obtained on each individual, and multiple group analyses in which some correlations are equated across groups. The method is applied to data on high-density lipoprotein cholesterol and total cholesterol levels obtained from participants in the Swedish Twin Family Study. Results indicate that there is strong familial resemblance for both traits but little cross-trait resemblance. PMID:3687943

  16. A method for selecting M dwarfs with an increased likelihood of unresolved ultracool companionship

    NASA Astrophysics Data System (ADS)

    Cook, N. J.; Pinfield, D. J.; Marocco, F.; Burningham, B.; Jones, H. R. A.; Frith, J.; Zhong, J.; Luo, A. L.; Qi, Z. X.; Lucas, P. W.; Gromadzki, M.; Day-Jones, A. C.; Kurtev, R. G.; Guo, Y. X.; Wang, Y. F.; Bai, Y.; Yi, Z. P.; Smart, R. L.

    2016-04-01

    Locating ultracool companions to M dwarfs is important for constraining low-mass formation models, the measurement of substellar dynamical masses and radii, and for testing ultracool evolutionary models. We present an optimized method for identifying M dwarfs which may have unresolved ultracool companions. We construct a catalogue of 440 694 M dwarf candidates, from Wide-Field Infrared Survey Explorer, Two Micron All-Sky Survey and Sloan Digital Sky Survey, based on optical- and near-infrared colours and reduced proper motion. With strict reddening, photometric and quality constraints we isolate a subsample of 36 898 M dwarfs and search for possible mid-infrared M dwarf + ultracool dwarf candidates by comparing M dwarfs which have similar optical/near-infrared colours (chosen for their sensitivity to effective temperature and metallicity). We present 1082 M dwarf + ultracool dwarf candidates for follow-up. Using simulated ultracool dwarf companions to M dwarfs, we estimate that the occurrence of unresolved ultracool companions amongst our M dwarf + ultracool dwarf candidates should be at least four times the average for our full M dwarf catalogue. We discuss possible contamination and bias and predict yields of candidates based on our simulations.

  17. DREAM3: Network Inference Using Dynamic Context Likelihood of Relatedness and the Inferelator

    DTIC Science & Technology

    2010-03-22

    [Record text recovered only in fragments: author affiliations (Courant Institute of Mathematical Sciences, New York University, New York, United States of America; Department of Computer Science) and the opening of the abstract, "Background: Many current works …".]

  18. A maximum likelihood method for high resolution proton radiography/proton CT

    NASA Astrophysics Data System (ADS)

    Collins-Fekete, Charles-Antoine; Brousmiche, Sébastien; Portillo, Stephen K. N.; Beaulieu, Luc; Seco, Joao

    2016-12-01

    Multiple Coulomb scattering (MCS) is the largest contributor to blurring in proton imaging. In this work, we developed a maximum likelihood least squares estimator that improves proton radiography’s spatial resolution. The water equivalent thickness (WET) through projections defined from the source to the detector pixels were estimated such that they maximize the likelihood of the energy loss of every proton crossing the volume. The length spent in each projection was calculated through the optimized cubic spline path estimate. The proton radiographies were produced using Geant4 simulations. Three phantoms were studied here: a slanted cube in a tank of water to measure 2D spatial resolution, a voxelized head phantom for clinical performance evaluation as well as a parametric Catphan phantom (CTP528) for 3D spatial resolution. Two proton beam configurations were used: a parallel and a conical beam. Proton beams of 200 and 330 MeV were simulated to acquire the radiography. Spatial resolution is increased from 2.44 lp cm-1 to 4.53 lp cm-1 in the 200 MeV beam and from 3.49 lp cm-1 to 5.76 lp cm-1 in the 330 MeV beam. Beam configurations do not affect the reconstructed spatial resolution as investigated between a radiography acquired with the parallel (3.49 lp cm-1 to 5.76 lp cm-1) or conical beam (from 3.49 lp cm-1 to 5.56 lp cm-1). The improved images were then used as input in a photon tomography algorithm. The proton CT reconstruction of the Catphan phantom shows high spatial resolution (from 2.79 to 5.55 lp cm-1 for the parallel beam and from 3.03 to 5.15 lp cm-1 for the conical beam) and the reconstruction of the head phantom, although qualitative, shows high contrast in the gradient region. The proposed formulation of the optimization demonstrates serious potential to increase the spatial resolution (up by 65%) in proton radiography and greatly accelerate proton computed tomography reconstruction.

  19. A maximum likelihood method for high resolution proton radiography/proton CT.

    PubMed

    Collins-Fekete, Charles-Antoine; Brousmiche, Sébastien; Portillo, Stephen K N; Beaulieu, Luc; Seco, Joao

    2016-12-07

    Multiple Coulomb scattering (MCS) is the largest contributor to blurring in proton imaging. In this work, we developed a maximum likelihood least squares estimator that improves proton radiography's spatial resolution. The water equivalent thickness (WET) through projections defined from the source to the detector pixels were estimated such that they maximize the likelihood of the energy loss of every proton crossing the volume. The length spent in each projection was calculated through the optimized cubic spline path estimate. The proton radiographies were produced using Geant4 simulations. Three phantoms were studied here: a slanted cube in a tank of water to measure 2D spatial resolution, a voxelized head phantom for clinical performance evaluation as well as a parametric Catphan phantom (CTP528) for 3D spatial resolution. Two proton beam configurations were used: a parallel and a conical beam. Proton beams of 200 and 330 MeV were simulated to acquire the radiography. Spatial resolution is increased from 2.44 lp cm-1 to 4.53 lp cm-1 in the 200 MeV beam and from 3.49 lp cm-1 to 5.76 lp cm-1 in the 330 MeV beam. Beam configurations do not affect the reconstructed spatial resolution as investigated between a radiography acquired with the parallel (3.49 lp cm-1 to 5.76 lp cm-1) or conical beam (from 3.49 lp cm-1 to 5.56 lp cm-1). The improved images were then used as input in a photon tomography algorithm. The proton CT reconstruction of the Catphan phantom shows high spatial resolution (from 2.79 to 5.55 lp cm-1 for the parallel beam and from 3.03 to 5.15 lp cm-1 for the conical beam) and the reconstruction of the head phantom, although qualitative, shows high contrast in the gradient region. The proposed formulation of the optimization demonstrates serious potential to increase the spatial resolution (up by 65%) in proton radiography and greatly accelerate proton computed tomography reconstruction.

  20. Plate dynamical mechanisms as constraints on the likelihood of earthquake precursors in the ionosphere

    NASA Astrophysics Data System (ADS)

    Osmaston, Miles

    2013-04-01

    In my oral(?) contribution to this session [1] I use my studies of the fundamental physics of gravitation to derive a reason for expecting the vertical gradient of electron density (= radial electric field) in the ionosphere to be closely affected by another field, directly associated with the ordinary gravitational potential (g) present at the Earth's surface. I have called that other field the Gravity-Electric (G-E) field. A calibration of this linkage relationship could be provided by noting corresponding co-seismic changes in (g) and in the ionosphere when, for example, a major normal-fault slippage occurs. But we are here concerned with precursory changes. This means we are looking for mechanisms which, on suitably short timescales, would generate pre-quake elastic deformation that changes the local (g). This poster supplements my talk by noting, for more relaxed discussion, what I see as potentially relevant plate dynamical mechanisms. Timescale constraints. If monitoring for ionospheric precursors is on only short timescales, their detectability is limited to correspondingly tectonically active regions. But as our monitoring becomes more precise and over longer terms, this constraint will relax. Most areas of the Earth are undergoing very slow heating or cooling and corresponding volume or epeirogenic change; major earthquakes can result but we won't have detected any accumulating ionospheric precursor. Transcurrent faulting. In principle, slip on a straight fault, even in a stick-slip manner, should produce little vertical deformation, but a kink, such as has caused the Transverse Ranges on the San Andreas Fault, would seem worth monitoring for precursory build-up in the ionosphere. Plate closure - subducting plate downbend. The traditionally presumed elastic flexure downbend mechanism is incorrect. 'Seismic coupling' has long been recognized by seismologists, invoking the repeated occurrence of 'asperities' to temporarily lock subduction and allow stress

  1. Weighted hurdle regression method for joint modeling of cardiovascular events likelihood and rate in the US dialysis population.

    PubMed

    Sentürk, Damla; Dalrymple, Lorien S; Mu, Yi; Nguyen, Danh V

    2014-11-10

    We propose a new weighted hurdle regression method for modeling count data, with particular interest in modeling cardiovascular events in patients on dialysis. Cardiovascular disease remains one of the leading causes of hospitalization and death in this population. Our aim is to jointly model the relationship/association between covariates and (i) the probability of cardiovascular events, a binary process, and (ii) the rate of events once the realization is positive, i.e., when the 'hurdle' is crossed, using a zero-truncated Poisson distribution. When the observation period or follow-up time, from the start of dialysis, varies among individuals, the estimated probability of positive cardiovascular events during the study period will be biased. Furthermore, when the model contains covariates, the estimated relationship between the covariates and the probability of cardiovascular events will also be biased. These challenges are addressed with the proposed weighted hurdle regression method. Estimation for the weighted hurdle regression model is a weighted likelihood approach, for which standard maximum likelihood estimation can be utilized. The method is illustrated with data from the United States Renal Data System. Simulation studies show the ability of the proposed method to successfully adjust for differential follow-up times and incorporate the effects of covariates in the weighting.
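
    To fix ideas, a plain (unweighted) hurdle log-likelihood with a follow-up-time offset can be written as below; the proposed weighting scheme and covariate structure of the paper are not reproduced, and all names and example values are illustrative assumptions.

```python
import numpy as np
from scipy.special import gammaln

def hurdle_neg_loglik(params, x, counts, followup_years):
    """Negative log-likelihood of a simple hurdle model: a logistic model for
    whether any event occurs, and a zero-truncated Poisson (with a follow-up-time
    offset) for the number of events given that at least one occurred.
    params = [logistic coefficients..., Poisson coefficients...]."""
    p = x.shape[1]
    beta_zero, beta_count = params[:p], params[p:]
    prob_any = 1.0 / (1.0 + np.exp(-(x @ beta_zero)))
    mu = np.exp(x @ beta_count + np.log(followup_years))      # offset for exposure time
    any_event = counts > 0
    ll_binary = np.where(any_event, np.log(prob_any), np.log(1.0 - prob_any))
    # Zero-truncated Poisson log-pmf, counted only where at least one event occurred.
    ll_count = (counts * np.log(mu) - mu - gammaln(counts + 1.0)
                - np.log(1.0 - np.exp(-mu)))
    return -np.sum(ll_binary + np.where(any_event, ll_count, 0.0))

# Hypothetical call; `params` would normally be chosen with scipy.optimize.minimize.
x = np.array([[1.0, 0.5], [1.0, -0.2], [1.0, 1.3]])
print(hurdle_neg_loglik(np.zeros(4), x, np.array([0, 2, 1]), np.array([1.0, 2.5, 0.5])))
```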

  2. Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)

    DTIC Science & Technology

    2016-05-01

    where the system’s stochastic model is either incomplete or too complex to be described in mathematical terms. Feature based methods often provide an...developing mathematical rules that guarantees optimal performance in noise, i.e., rules that guarantee the lowest error in classification. The method is...suitable in problems where models are available and have low complexity. Its main disadvantage is the development of rules due to the mathematical

  3. Extended likelihood ratio test-based methods for signal detection in a drug class with application to FDA's adverse event reporting system database.

    PubMed

    Zhao, Yueqin; Yi, Min; Tiwari, Ram C

    2016-05-02

    A likelihood ratio test, recently developed for the detection of signals of adverse events for a drug of interest in the FDA Adverse Event Reporting System (FAERS) database, is extended to detect signals of adverse events simultaneously for all the drugs in a drug class. The extended likelihood ratio test methods, based on the Poisson model (Ext-LRT) and the zero-inflated Poisson model (Ext-ZIP-LRT), are discussed and are analytically shown, like the likelihood ratio test method, to control the type-I error and false discovery rate. Simulation studies are performed to evaluate the performance characteristics of Ext-LRT and Ext-ZIP-LRT. The proposed methods are applied to the Gadolinium drug class in the FAERS database. An in-house likelihood ratio test tool, incorporating the Ext-LRT methodology, is being developed at the Food and Drug Administration.

  4. A Maximum Likelihood Ensemble Data Assimilation Method Tailored to the Inner Radiation Belt

    NASA Astrophysics Data System (ADS)

    Guild, T. B.; O'Brien, T. P., III; Mazur, J. E.

    2014-12-01

    The Earth's radiation belts are composed of energetic protons and electrons whose fluxes span many orders of magnitude, whose distributions are log-normal, and where data-model differences can be large and also log-normal. This physical system thus challenges standard data assimilation methods relying on underlying assumptions of Gaussian distributions of measurements and data-model differences, where innovations to the model are small. We have therefore developed a data assimilation method tailored to these properties of the inner radiation belt, analogous to the ensemble Kalman filter but for the unique cases of non-Gaussian model and measurement errors, and non-linear model and measurement distributions. We apply this method to the inner radiation belt proton populations, using the SIZM inner belt model [Selesnick et al., 2007] and SAMPEX/PET and HEO proton observations to select the most likely ensemble members contributing to the state of the inner belt. We will describe the algorithm, the method of generating ensemble members, our choice of minimizing the difference between instrument counts not phase space densities, and demonstrate the method with our reanalysis of the inner radiation belt throughout solar cycle 23. We will report on progress to continue our assimilation into solar cycle 24 using the Van Allen Probes/RPS observations.

  5. Practical aspects of a maximum likelihood estimation method to extract stability and control derivatives from flight data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1976-01-01

    A maximum likelihood estimation method was applied to flight data and procedures to facilitate the routine analysis of a large amount of flight data were described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple maneuver analysis also proved to be useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.

  6. Insufficient ct data reconstruction based on directional total variation (dtv) regularized maximum likelihood expectation maximization (mlem) method

    NASA Astrophysics Data System (ADS)

    Islam, Fahima Fahmida

    Sparse tomography is an efficient technique that saves time and minimizes cost. However, because only a few angular data are available, the image reconstruction problem becomes ill-posed. In an ill-posed problem, even with exact data constraints, the inversion cannot be performed uniquely. Therefore, the selection of a suitable method to optimize the reconstruction problem plays an important role in sparse-data CT. The use of a regularization function is a well-known way to control artifacts in limited-angle data acquisition. In this work, we propose a directional total variation regularized, ordered-subset (OS) type image reconstruction method for neutron limited-data CT. Total variation (TV) regularization acts as an edge-preserving regularization that not only preserves sharp edges but also reduces many of the artifacts that are very common in limited-data CT. However, TV itself is not direction dependent and is therefore not very suitable for images with a dominant direction; for such images it is important to know the total variation along a particular direction. Hence, a directional TV (DTV) is used here as the prior term. TV regularization assumes piecewise smoothness; since the original image is not piecewise constant, a sparsifying transform is used to convert it into a sparse, piecewise-constant image. This regularization function (DTV) is combined with the likelihood function to form the objective function, which is optimized with an OS-type algorithm. Generally, two methods are available to make the OS method convergent. This work proposes an OS-type, directional-TV-regularized likelihood reconstruction method that yields fast convergence as well as good image quality. The initial iteration starts from the filtered back projection (FBP) reconstructed image, and convergence is indicated by the convergence index between two successive reconstructed images. The quality of the image is assessed by showing
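
    The likelihood part of such reconstructions is typically handled with the classical MLEM multiplicative update, which OS-type and regularized variants build on. The sketch below shows that basic update only (no directional TV prior and no ordered subsets); the toy system matrix is hypothetical.

```python
import numpy as np

def mlem(system_matrix, projections, n_iterations=50):
    """Basic (unregularized) MLEM reconstruction: the multiplicative update that
    OS-type and directional-TV-regularized variants build on.
    system_matrix: (n_rays, n_pixels) forward projector A; projections: measured y."""
    A, y = system_matrix, projections
    x = np.ones(A.shape[1])                         # flat initial image
    sensitivity = A.T @ np.ones(A.shape[0])         # A^T 1, the normalization term
    for _ in range(n_iterations):
        forward = A @ x
        ratio = y / np.maximum(forward, 1e-12)      # avoid division by zero
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x

# Tiny hypothetical example: 3 rays through a 2-pixel "image".
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_image = np.array([2.0, 5.0])
print(mlem(A, A @ true_image))
```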

  7. New methods to assess severity and likelihood of urban flood risk from intense rainfall

    NASA Astrophysics Data System (ADS)

    Fewtrell, Tim; Foote, Matt; Bates, Paul; Ntelekos, Alexandros

    2010-05-01

    the construction of appropriate probabilistic flood models. This paper will describe new research being undertaken to assess the practicality of ultra-high resolution, ground based laser-scanner data for flood modelling in urban centres, using new hydraulic propagation methods to determine the feasibility of such data to be applied within stochastic event models. Results from the collection of ‘point cloud' data collected from a mobile terrestrial laser-scanner system in a key urban centre, combined with appropriate datasets, will be summarized here and an initial assessment of the potential for the use of such data in stochastic event sets will be made. Conclusions are drawn from comparisons with previous studies and underlying DEM products of similar resolutions in terms of computational time, flood extent and flood depth. Based on the above, the study provides some current recommendations on the most appropriate resolution of input data for urban hydraulic modelling.

  8. Methods of applied dynamics

    NASA Technical Reports Server (NTRS)

    Rheinfurth, M. H.; Wilson, H. B.

    1991-01-01

    The monograph was prepared to give the practicing engineer a clear understanding of dynamics with special consideration given to the dynamic analysis of aerospace systems. It is conceived to be both a desk-top reference and a refresher for aerospace engineers in government and industry. It could also be used as a supplement to standard texts for in-house training courses on the subject. Beginning with the basic concepts of kinematics and dynamics, the discussion proceeds to treat the dynamics of a system of particles. Both classical and modern formulations of the Lagrange equations, including constraints, are discussed and applied to the dynamic modeling of aerospace structures using the modal synthesis technique.

  9. A powerful likelihood method for the analysis of linkage disequilibrium between trait loci and one or more polymorphic marker loci

    SciTech Connect

    Terwilliger, J.D.

    1995-03-01

    Historically, most methods for detecting linkage disequilibrium were designed for use with diallelic marker loci, for which the analysis is straightforward. With the advent of polymorphic markers with many alleles, the normal approach to their analysis has been either to extend the methodology for two-allele systems (leading to an increase in df and to a corresponding loss of power) or to select the allele believed to be associated and then collapse the other alleles, reducing, in a biased way, the locus to a diallelic system. I propose a likelihood-based approach to testing for linkage disequilibrium, an approach that becomes more conservative as the number of alleles increases, and as the number of markers considered jointly increases in a multipoint test for linkage disequilibrium, while maintaining high power. Properties of this method for detecting associations and fine mapping the location of disease traits are investigated. It is found to be, in general, more powerful than conventional methods, and it provides a tractable framework for the fine mapping of new disease loci. Application to the cystic fibrosis data of Kerem et al. is included to illustrate the method. 12 refs., 4 figs., 4 tabs.

  10. The Likelihood Function and Likelihood Statistics

    NASA Astrophysics Data System (ADS)

    Robinson, Edward L.

    2016-01-01

    The likelihood function is a necessary component of Bayesian statistics but not of frequentist statistics. The likelihood function can, however, serve as the foundation for an attractive variant of frequentist statistics sometimes called likelihood statistics. We will first discuss the definition and meaning of the likelihood function, giving some examples of its use and abuse - most notably in the so-called prosecutor's fallacy. Maximum likelihood estimation is the aspect of likelihood statistics familiar to most people. When data points are known to have Gaussian probability distributions, maximum likelihood parameter estimation leads directly to least-squares estimation. When the data points have non-Gaussian distributions, least-squares estimation is no longer appropriate. We will show how the maximum likelihood principle leads to logical alternatives to least squares estimation for non-Gaussian distributions, taking the Poisson distribution as an example. The likelihood ratio is the ratio of the likelihoods of, for example, two hypotheses or two parameters. Likelihood ratios can be treated much like un-normalized probability distributions, greatly extending the applicability and utility of likelihood statistics. Likelihood ratios are prone to the same complexities that afflict posterior probability distributions in Bayesian statistics. We will show how meaningful information can be extracted from likelihood ratios by the Laplace approximation, by marginalizing, or by Markov chain Monte Carlo sampling.
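
    The Poisson case mentioned at the end is the standard textbook illustration of how the maximum likelihood principle departs from least squares; the derivation below is that standard result, shown for reference rather than quoted from the presentation.

```latex
% Poisson log-likelihood for counts x_1, ..., x_n with rate \lambda,
% and its maximizer (standard result):
\ell(\lambda) = \sum_{i=1}^{n}\left( x_i \ln\lambda - \lambda - \ln x_i! \right),
\qquad
\frac{d\ell}{d\lambda} = \frac{1}{\lambda}\sum_{i=1}^{n} x_i - n = 0
\;\Longrightarrow\;
\hat{\lambda} = \bar{x}.
```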

  11. Maximum-likelihood method identifies meiotic restitution mechanism from heterozygosity transmission of centromeric loci: application in citrus

    PubMed Central

    Cuenca, José; Aleza, Pablo; Juárez, José; García-Lor, Andrés; Froelicher, Yann; Navarro, Luis; Ollitrault, Patrick

    2015-01-01

    Polyploidisation is a key source of diversification and speciation in plants. Most researchers consider sexual polyploidisation leading to unreduced gametes as its main origin. Unreduced gametes are useful in several crop breeding schemes. Their formation mechanism, i.e., First-Division Restitution (FDR) or Second-Division Restitution (SDR), greatly impacts the gametic and population structures and, therefore, the breeding efficiency. Previous methods to identify the underlying mechanism required the analysis of a large set of markers over large progeny. This work develops a new maximum-likelihood method to identify the unreduced gamete formation mechanism both at the population and individual levels using independent centromeric markers. Knowledge of marker-centromere distances greatly improves the statistical power of the comparison between the SDR and FDR hypotheses. Simulated data demonstrated the importance of selecting markers very close to the centromere to obtain significant conclusions at the individual level. This new method was used to identify the meiotic restitution mechanism in nineteen mandarin genotypes used as female parents in triploid citrus breeding. SDR was identified for 85.3% of 543 triploid hybrids and FDR for 0.6%. No significant conclusions were obtained for 14.1% of the hybrids. At the population level, SDR was the predominant mechanism for the 19 parental mandarins. PMID:25894579
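
    As a rough illustration of the kind of per-individual likelihood comparison described above, the sketch below scores an observed heterozygosity-retention pattern at centromeric markers under two hypothetical retention probabilities standing in for FDR and SDR; the probabilities and data are invented placeholders, not the published distance-dependent model:

        import numpy as np

        # Observed retention (1) / loss (0) of parental heterozygosity at a few
        # centromeric markers in one unreduced gamete; values are illustrative.
        obs = np.array([0, 0, 1, 0, 0])

        # Hypothetical probabilities of retaining heterozygosity close to the
        # centromere under each restitution mechanism (placeholders only).
        p_fdr = 0.95   # FDR: heterozygosity largely retained near the centromere
        p_sdr = 0.10   # SDR: heterozygosity largely lost near the centromere

        def log_like(p, data):
            # Bernoulli log-likelihood of the observed retention pattern.
            return np.sum(data * np.log(p) + (1 - data) * np.log(1 - p))

        llr = log_like(p_fdr, obs) - log_like(p_sdr, obs)
        print(f"log-likelihood ratio FDR vs SDR: {llr:.2f}  "
              f"({'FDR' if llr > 0 else 'SDR'} favoured)")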

  12. Maximum-likelihood method identifies meiotic restitution mechanism from heterozygosity transmission of centromeric loci: application in citrus.

    PubMed

    Cuenca, José; Aleza, Pablo; Juárez, José; García-Lor, Andrés; Froelicher, Yann; Navarro, Luis; Ollitrault, Patrick

    2015-04-20

    Polyploidisation is a key source of diversification and speciation in plants. Most researchers consider sexual polyploidisation leading to unreduced gamete as its main origin. Unreduced gametes are useful in several crop breeding schemes. Their formation mechanism, i.e., First-Division Restitution (FDR) or Second-Division Restitution (SDR), greatly impacts the gametic and population structures and, therefore, the breeding efficiency. Previous methods to identify the underlying mechanism required the analysis of a large set of markers over large progeny. This work develops a new maximum-likelihood method to identify the unreduced gamete formation mechanism both at the population and individual levels using independent centromeric markers. Knowledge of marker-centromere distances greatly improves the statistical power of the comparison between the SDR and FDR hypotheses. Simulating data demonstrated the importance of selecting markers very close to the centromere to obtain significant conclusions at individual level. This new method was used to identify the meiotic restitution mechanism in nineteen mandarin genotypes used as female parents in triploid citrus breeding. SDR was identified for 85.3% of 543 triploid hybrids and FDR for 0.6%. No significant conclusions were obtained for 14.1% of the hybrids. At population level SDR was the predominant mechanisms for the 19 parental mandarins.

  13. Evaluation of Bayesian source estimation methods with Prairie Grass observations and Gaussian plume model: A comparison of likelihood functions and distance measures

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Huang, Hong; Huang, Lida; Ristic, Branko

    2017-03-01

    Source term estimation for atmospheric dispersion deals with estimation of the emission strength and location of an emitting source using all available information, including site description, meteorological data, concentration observations and prior information. In this paper, Bayesian methods for source term estimation are evaluated using Prairie Grass field observations. The methods include those that require the specification of the likelihood function and those which are likelihood free, also known as approximate Bayesian computation (ABC) methods. The performances of five different likelihood functions in the former case and six different distance measures in the latter are compared for each component of the source parameter vector, based on the Nemenyi test over all 68 data sets available in the Prairie Grass field experiment. Several likelihood functions and distance measures are introduced to source term estimation for the first time, and the ABC method is improved in several respects. Results show that the choice of discrepancy measure, whether a likelihood function or a distance measure, has a significant influence on source estimation. There is no single winning algorithm, but these methods can be used collectively to provide more robust estimates.
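
    A minimal sketch of the likelihood-free (ABC rejection) idea applied to source estimation is given below, using a deliberately simplified one-dimensional plume-like forward model; the forward model, prior ranges, distance measure and tolerance are assumptions for illustration and do not reproduce the paper's setup:

        import numpy as np

        rng = np.random.default_rng(42)

        # Toy 1-D "plume": concentration at receptor x from a source of strength q at x0.
        def forward(q, x0, x, width=5.0):
            return q * np.exp(-(x - x0) ** 2 / (2.0 * width ** 2))

        receptors = np.linspace(0.0, 50.0, 20)
        obs = forward(10.0, 22.0, receptors) + rng.normal(0.0, 0.2, receptors.size)

        # ABC rejection: sample (q, x0) from uniform priors, keep draws whose
        # simulated concentrations are within a tolerance of the observations.
        n_draws, tol = 200_000, 1.0
        q_s = rng.uniform(0.0, 30.0, n_draws)
        x0_s = rng.uniform(0.0, 50.0, n_draws)
        sims = forward(q_s[:, None], x0_s[:, None], receptors[None, :])
        dist = np.sqrt(((sims - obs) ** 2).mean(axis=1))   # RMSE distance measure
        keep = dist < tol

        print(f"accepted {keep.sum()} of {n_draws} draws")
        print(f"posterior mean q  = {q_s[keep].mean():.2f}")
        print(f"posterior mean x0 = {x0_s[keep].mean():.2f}")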

  14. Accuracy of maximum likelihood and least-squares estimates in the lidar slope method with noisy data.

    PubMed

    Eberhard, Wynn L

    2017-04-01

    The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
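
    As a sketch of the inverse-variance-weighted fit that the paper identifies with the MLE, the code below fits a straight line to a synthetic range-corrected log lidar signal, weighting each point by the inverse of its (assumed known) noise variance; the signal model and noise levels are invented:

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic range-corrected log signal: S(r) = ln(C*beta) - 2*alpha*r + noise,
        # with a noise variance that grows with range (values are illustrative).
        r = np.linspace(0.2, 3.0, 60)                  # km
        alpha_true, lnCbeta = 0.4, 2.0                 # extinction (1/km), intercept
        sigma = 0.02 * np.exp(r)                       # per-point noise std. dev.
        S = lnCbeta - 2.0 * alpha_true * r + rng.normal(0.0, sigma)

        # Weighted least squares for a straight line with weights = 1/variance
        # (equivalent to the MLE when the noise is independent and Gaussian).
        w = 1.0 / sigma**2
        Sw, Swx, Swy = w.sum(), (w * r).sum(), (w * S).sum()
        Swxx, Swxy = (w * r * r).sum(), (w * r * S).sum()
        den = Sw * Swxx - Swx**2
        slope = (Sw * Swxy - Swx * Swy) / den
        intercept = (Swxx * Swy - Swx * Swxy) / den

        print(f"retrieved extinction = {-slope / 2:.3f} 1/km (true {alpha_true})")
        print(f"retrieved intercept  = {intercept:.3f} (true {lnCbeta})")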

  15. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    PubMed

    Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin

    2013-01-01

    Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded additional solutions of the mixed model equations, one for each parameter to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  16. Cosmic bulk flows on 50 h-1 Mpc scales: a Bayesian hyper-parameter method and multishell likelihood analysis

    NASA Astrophysics Data System (ADS)

    Ma, Yin-Zhe; Scott, Douglas

    2013-01-01

    It has been argued recently that the galaxy peculiar velocity field provides evidence of excessive power on scales of 50 h^{-1} Mpc, which seems to be inconsistent with the standard Λ cold dark matter (ΛCDM) cosmological model. We discuss several assumptions and conventions used in studies of the large-scale bulk flow to check whether this claim is robust under a variety of conditions. Rather than using a composite catalogue we select samples from the SN, ENEAR, Spiral Field I-band Survey (SFI++) and First Amendment Supernovae (A1SN) catalogues, and correct for Malmquist bias in each according to the IRAS PSCz density field. We also use slightly different assumptions about the small-scale velocity dispersion and the parametrization of the matter power spectrum when calculating the variance of the bulk flow. By combining the likelihoods of the individual catalogues using a Bayesian hyper-parameter method, we find that the joint likelihood of the amplitude parameter gives σ8 = 0.65^{+0.47}_{-0.35} (68 per cent confidence region), which is entirely consistent with the ΛCDM model. In addition, the bulk flow magnitude, v ~ 310 km s^{-1}, and direction, (l, b) ~ (280° ± 8°, 5.1° ± 6°), found by each of the catalogues are all consistent with each other, and with the bulk flow results from most previous studies. Furthermore, the bulk flow velocities in different shells of the surveys constrain (σ8, Ωm) to be (1.01^{+0.26}_{-0.20}, 0.31^{+0.28}_{-0.14}) for SFI++ and (1.04^{+0.32}_{-0.24}, 0.28^{+0.30}_{-0.14}) for ENEAR, which are consistent with the 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) best-fitting values. We finally discuss the differences between our conclusions and those of the studies claiming the largest bulk flows.

  17. A maximum-likelihood method to correct for allelic dropout in microsatellite data with no replicate genotypes.

    PubMed

    Wang, Chaolong; Schroeder, Kari B; Rosenberg, Noah A

    2012-10-01

    Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy-Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. Because the data sets

  18. The evolution of autodigestion in the mushroom family Psathyrellaceae (Agaricales) inferred from Maximum Likelihood and Bayesian methods.

    PubMed

    Nagy, László G; Urban, Alexander; Orstadius, Leif; Papp, Tamás; Larsson, Ellen; Vágvölgyi, Csaba

    2010-12-01

    Recently developed comparative phylogenetic methods offer a wide spectrum of applications in evolutionary biology, although it is generally accepted that their statistical properties are incompletely known. Here, we examine and compare the statistical power of the ML and Bayesian methods with regard to selection of best-fit models of fruiting-body evolution and hypothesis testing of ancestral states on a real-life data set of a physiological trait (autodigestion) in the family Psathyrellaceae. Our phylogenies are based on the first multigene data set generated for the family. Two different coding regimes (binary and multistate) and two data sets differing in taxon sampling density are examined. The Bayesian method outperformed Maximum Likelihood with regard to statistical power in all analyses. This is particularly evident if the signal in the data is weak, i.e. in cases when the ML approach does not provide support to choose among competing hypotheses. Results based on binary and multistate coding differed only modestly, although it was evident that multistate analyses were less conclusive in all cases. It seems that increased taxon sampling density has favourable effects on inference of ancestral states, while model parameters are influenced to a smaller extent. The model best fitting our data implies that the rate of losses of deliquescence equals zero, although model selection in ML does not provide proper support to reject three of the four candidate models. The results also support the hypothesis that non-deliquescence (lack of autodigestion) has been ancestral in Psathyrellaceae, and that deliquescent fruiting bodies represent the preferred state, having evolved independently several times during evolution.

  19. Fluid dynamics test method

    NASA Technical Reports Server (NTRS)

    Gayman, W. H.

    1974-01-01

    Test method and apparatus determine fluid effective mass and damping in frequency range where effective mass may be considered as total mass less sum of slosh masses. Apparatus is designed so test tank and its mounting yoke are supported from structural test wall by series of flexures.

  20. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  1. Stepwise Signal Extraction via Marginal Likelihood

    PubMed Central

    Du, Chao; Kao, Chu-Lan Michael

    2015-01-01

    This paper studies the estimation of a stepwise signal. To determine the number and locations of change-points of the stepwise signal, we formulate a maximum marginal likelihood estimator, which can be computed with a quadratic cost using dynamic programming. We carry out an extensive investigation of the choice of the prior distribution and study the asymptotic properties of the maximum marginal likelihood estimator. We propose to treat each possible set of change-points equally and adopt an empirical Bayes approach to specify the prior distribution of segment parameters. A detailed simulation study is performed to compare the effectiveness of this method with other existing methods. We demonstrate our method on single-molecule enzyme reaction data and on DNA array CGH data. Our study shows that this method is applicable to a wide range of models and offers appealing results in practice. PMID:27212739
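
    The dynamic program mentioned above can be sketched as follows, using a within-segment sum of squares plus a per-change-point penalty as a stand-in for the paper's marginal likelihood; the penalty value and simulated signal are illustrative choices:

        import numpy as np

        def segment_cost(prefix, prefix2, i, j):
            # Sum of squared deviations of y[i:j] from its mean, via prefix sums.
            n = j - i
            s, s2 = prefix[j] - prefix[i], prefix2[j] - prefix2[i]
            return s2 - s * s / n

        def stepwise_fit(y, penalty):
            # Optimal partitioning: O(n^2) dynamic program over change-points.
            n = len(y)
            prefix = np.concatenate(([0.0], np.cumsum(y)))
            prefix2 = np.concatenate(([0.0], np.cumsum(np.asarray(y, float) ** 2)))
            best = np.full(n + 1, np.inf)
            best[0], last = 0.0, np.zeros(n + 1, dtype=int)
            for j in range(1, n + 1):
                for i in range(j):
                    c = best[i] + segment_cost(prefix, prefix2, i, j) + penalty
                    if c < best[j]:
                        best[j], last[j] = c, i
            # Backtrack the change-point locations.
            cps, j = [], n
            while j > 0:
                cps.append(int(last[j]))
                j = last[j]
            return sorted(cps)[1:]          # drop the leading 0

        rng = np.random.default_rng(0)
        y = np.concatenate([rng.normal(m, 0.3, 40) for m in (0.0, 2.0, 0.5)])
        print("estimated change-points:", stepwise_fit(y, penalty=3.0))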

  2. Profile Likelihood and Incomplete Data.

    PubMed

    Zhang, Zhiwei

    2010-04-01

    According to the law of likelihood, statistical evidence is represented by likelihood functions and its strength measured by likelihood ratios. This point of view has led to a likelihood paradigm for interpreting statistical evidence, which carefully distinguishes evidence about a parameter from error probabilities and personal belief. Like other paradigms of statistics, the likelihood paradigm faces challenges when data are observed incompletely, due to non-response or censoring, for instance. Standard methods to generate likelihood functions in such circumstances generally require assumptions about the mechanism that governs the incomplete observation of data, assumptions that usually rely on external information and cannot be validated with the observed data. Without reliable external information, the use of untestable assumptions driven by convenience could potentially compromise the interpretability of the resulting likelihood as an objective representation of the observed evidence. This paper proposes a profile likelihood approach for representing and interpreting statistical evidence with incomplete data without imposing untestable assumptions. The proposed approach is based on partial identification and is illustrated with several statistical problems involving missing data or censored data. Numerical examples based on real data are presented to demonstrate the feasibility of the approach.

  3. Identifying change in the likelihood of violent recidivism: causal dynamic risk factors in the OASys violence predictor.

    PubMed

    Howard, Philip D; Dixon, Louise

    2013-06-01

    Recent studies of multiwave risk assessment have investigated the association between changes in risk factors and violent recidivism. This study analyzed a large multiwave data set of English and Welsh offenders (N = 196,493), assessed in realistic correctional conditions using the static/dynamic Offender Assessment System (OASys). It aimed to compare the predictive validity of the OASys Violence Predictor (OVP) under mandated repeated assessment and one-time initial assessment conditions. Scores on 5 of OVP's 7 purportedly dynamic risk factors changed in 6 to 15% of pairs of successive assessments, whereas the other 2 seldom changed. Violent reoffenders had higher initial total and dynamic OVP scores than nonreoffenders, yet nonreoffenders' dynamic scores fell by significantly more between initial and final assessment. OVP scores from the current assessment achieved greater predictive validity than those from the initial assessment. Cox regression models showed that, for total OVP scores and most risk factors, both the initial score and the change in score from initial to current assessment significantly predicted reoffending. These results consistently showed that OVP includes several causal dynamic risk factors for violent recidivism, which can be measured reliably in operational settings. This adds to the evidence base that links changes in risk factors to changes in future reoffending risk and links the use of repeated assessments to incremental improvements in predictive validity. Further research could quantify the costs and benefits of reassessment in correctional practice, study associations between treatment and dynamic risk factors, and separate the effects of improvements and deteriorations in dynamic risk.

  4. Procedure for estimating stability and control parameters from flight test data by using maximum likelihood methods employing a real-time digital system

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Bowles, R. L.; Mayhew, S. C.

    1972-01-01

    A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.

  5. Performance and sensitivity analysis of the generalized likelihood ratio method for failure detection. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bueno, R. A.

    1977-01-01

    Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft application are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in identifying correctly the mode of a failure may arise. These issues are closely examined as well as the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.

  6. Dynamic Method for Identifying Collected Sample Mass

    NASA Technical Reports Server (NTRS)

    Carson, John

    2008-01-01

    G-Sample is designed for sample collection missions to identify the presence and quantity of sample material gathered by spacecraft equipped with end effectors. The software method uses a maximum-likelihood estimator to identify the collected sample's mass based on onboard force-sensor measurements, thruster firings, and a dynamics model of the spacecraft. This makes sample mass identification a computation rather than a process requiring additional hardware. Simulation examples of G-Sample are provided for spacecraft model configurations with a sample collection device mounted on the end of an extended boom. In the absence of thrust knowledge errors, the results indicate that G-Sample can identify the amount of collected sample mass to within 10 grams (with 95-percent confidence) by using a force sensor with a noise and quantization floor of 50 micrometers. These results hold even in the presence of realistic parametric uncertainty in actual spacecraft inertia, center-of-mass offset, and first flexibility modes. Thrust profile knowledge is shown to be a dominant sensitivity for G-Sample, entering in a nearly one-to-one relationship with the final mass estimation error. This means thrust profiles should be well characterized with onboard accelerometers prior to sample collection. An overall sample-mass estimation error budget has been developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
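
    A heavily simplified sketch of the underlying idea, estimating an unknown added mass from noisy force measurements and known accelerations by maximum likelihood (which reduces to least squares under Gaussian sensor noise), is given below; the one-axis model and all numbers are invented and are not G-Sample itself:

        import numpy as np

        rng = np.random.default_rng(7)

        # Known base mass and commanded accelerations during thruster firings (one axis).
        m_base = 250.0                       # kg
        a = rng.uniform(0.05, 0.2, 200)      # m/s^2, assumed known

        # Simulated force-sensor readings with Gaussian noise.
        m_sample_true = 0.040                # 40 g of collected sample
        noise_std = 0.02                     # sensor noise (arbitrary units/N)
        F = (m_base + m_sample_true) * a + rng.normal(0.0, noise_std, a.size)

        # Gaussian noise -> the ML estimate of the sample mass is the least-squares one:
        # minimize sum (F_i - (m_base + m) a_i)^2  ->  m = sum(a*(F - m_base*a)) / sum(a^2)
        m_hat = np.sum(a * (F - m_base * a)) / np.sum(a * a)
        sigma_m = noise_std / np.sqrt(np.sum(a * a))   # 1-sigma uncertainty

        print(f"estimated sample mass: {1e3*m_hat:.1f} +/- {1e3*sigma_m:.1f} g")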

  7. Computational Methods for Structural Mechanics and Dynamics

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  8. Bayesian computation via empirical likelihood

    PubMed Central

    Mengersen, Kerrie L.; Pudlo, Pierre; Robert, Christian P.

    2013-01-01

    Approximate Bayesian computation has become an essential tool for the analysis of complex stochastic models when the likelihood function is numerically unavailable. However, the well-established statistical method of empirical likelihood provides another route to such settings that bypasses simulations from the model and the choices of the approximate Bayesian computation parameters (summary statistics, distance, tolerance), while being convergent in the number of observations. Furthermore, bypassing model simulations may lead to significant time savings in complex models, for instance those found in population genetics. The Bayesian computation with empirical likelihood algorithm we develop in this paper also provides an evaluation of its own performance through an associated effective sample size. The method is illustrated using several examples, including estimation of standard distributions, time series, and population genetics models. PMID:23297233

  9. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance-Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    PubMed

    Molenaar, P C; Nesselroade, J R

    1998-07-01

    The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data - a central product of intraindividual investigations - requires special modeling techniques. The dynamic factor model (DFM), which is a generalization of the traditional common factor model, has been proposed by Molenaar (1985) for systematically extracting information from multivariate time-series via latent variable modeling. Implementation of the DFM model has taken several forms, one of which involves specifying it as a covariance-structure model and estimating its parameters from a block-Toeplitz matrix derived from the multivariate time-series. We compare two methods for estimating DFM parameters within a covariance-structure framework - pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation - by means of a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates of comparable precision, but only the ADF method gives standard errors and chi-square statistics that appear to be consistent. The relative ordering of the values of all estimates appears to be very similar across methods. When the manifest time-series is relatively short, the two methods appear to perform about equally well.

  10. The phylogenetic likelihood library.

    PubMed

    Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A

    2015-03-01

    We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL).

  11. Sequence comparison and phylogenetic analysis by the Maximum Likelihood method of ribosome-inactivating proteins from angiosperms.

    PubMed

    Di Maro, Antimo; Citores, Lucía; Russo, Rosita; Iglesias, Rosario; Ferreras, José Miguel

    2014-08-01

    Ribosome-inactivating proteins (RIPs) from angiosperms are rRNA N-glycosidases that have been proposed as defence proteins against virus and fungi. They have been classified as type 1 RIPs, consisting of single-chain proteins, and type 2 RIPs, consisting of an A chain with RIP properties covalently linked to a B chain with lectin properties. In this work we have carried out a broad search of RIP sequence data banks from angiosperms in order to study their main structural characteristics and phylogenetic evolution. The comparison of the sequences revealed the presence, outside of the active site, of a novel structure that might be involved in the internal protein dynamics linked to enzyme catalysis. Also the B-chains presented another conserved structure that might function either supporting the beta-trefoil structure or in the communication between both sugar-binding sites. A systematic phylogenetic analysis of RIP sequences revealed that the most primitive type 1 RIPs were similar to that of the actual monocots (Poaceae and Asparagaceae). The primitive RIPs evolved to the dicot type 1 related RIPs (like those from Caryophyllales, Lamiales and Euphorbiales). The gene of a type 1 RIP related with the actual Euphorbiaceae type 1 RIPs fused with a double beta trefoil lectin gene similar to the actual Cucurbitaceae lectins to generate the type 2 RIPs and finally this gene underwent deletions rendering either type 1 RIPs (like those from Cucurbitaceae, Rosaceae and Iridaceae) or lectins without A chain (like those from Adoxaceae).

  12. Maximum Likelihood Methods in Treating Outliers and Symmetrically Heavy-Tailed Distributions for Nonlinear Structural Equation Models with Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Xia, Ye-Mao

    2006-01-01

    By means of more than a dozen user friendly packages, structural equation models (SEMs) are widely used in behavioral, education, social, and psychological research. As the underlying theory and methods in these packages are vulnerable to outliers and distributions with longer-than-normal tails, a fundamental problem in the field is the…

  13. Inference of Gene Flow in the Process of Speciation: An Efficient Maximum-Likelihood Method for the Isolation-with-Initial-Migration Model

    PubMed Central

    Costa, Rui J.; Wilkinson-Herbots, Hilde

    2017-01-01

    The isolation-with-migration (IM) model is commonly used to make inferences about gene flow during speciation, using polymorphism data. However, it has been reported that the parameter estimates obtained by fitting the IM model are very sensitive to the model’s assumptions—including the assumption of constant gene flow until the present. This article is concerned with the isolation-with-initial-migration (IIM) model, which drops precisely this assumption. In the IIM model, one ancestral population divides into two descendant subpopulations, between which there is an initial period of gene flow and a subsequent period of isolation. We derive a very fast method of fitting an extended version of the IIM model, which also allows for asymmetric gene flow and unequal population sizes. This is a maximum-likelihood method, applicable to data on the number of segregating sites between pairs of DNA sequences from a large number of independent loci. In addition to obtaining parameter estimates, our method can also be used, by means of likelihood-ratio tests, to distinguish between alternative models representing the following divergence scenarios: (a) divergence with potentially asymmetric gene flow until the present, (b) divergence with potentially asymmetric gene flow until some point in the past and in isolation since then, and (c) divergence in complete isolation. We illustrate the procedure on pairs of Drosophila sequences from ∼30,000 loci. The computing time needed to fit the most complex version of the model to this data set is only a couple of minutes. The R code to fit the IIM model can be found in the supplementary files of this article. PMID:28193727

  14. Markov chain Monte Carlo without likelihoods.

    PubMed

    Marjoram, Paul; Molitor, John; Plagnol, Vincent; Tavare, Simon

    2003-12-23

    Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion.
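
    The general scheme can be illustrated on a toy problem (estimating a Poisson rate): a proposed parameter value is accepted only if data simulated from it reproduce the observed summary statistic to within a tolerance. The toy model, prior, proposal and tolerance below are assumptions for illustration, not the population-genetics application of the paper:

        import numpy as np

        rng = np.random.default_rng(11)

        # Observed data: Poisson counts with an unknown rate; the sample mean is the
        # summary statistic compared between real and simulated data.
        data = rng.poisson(4.0, size=50)
        s_obs = data.mean()

        def simulate_mean(theta, n=50):
            return rng.poisson(theta, size=n).mean()

        # Likelihood-free MCMC: uniform prior on (0, 20), symmetric Gaussian proposal.
        # A proposal is accepted only if data simulated from it reproduce the observed
        # summary to within eps (prior and proposal ratios are 1 here).
        eps, theta, chain = 0.3, s_obs, []
        for _ in range(20_000):
            prop = theta + rng.normal(0.0, 0.5)
            if 0.0 < prop < 20.0 and abs(simulate_mean(prop) - s_obs) <= eps:
                theta = prop
            chain.append(theta)

        chain = np.array(chain[5_000:])
        print(f"posterior mean {chain.mean():.2f}, 95% interval "
              f"({np.percentile(chain, 2.5):.2f}, {np.percentile(chain, 97.5):.2f})")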

  15. Augmented Likelihood Image Reconstruction.

    PubMed

    Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M

    2016-01-01

    The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim to reduce these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During the iterations, artifacts that appear temporarily are reduced with a bilateral filter, and new projection values are calculated, which are used later in the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.

  16. Photon Counting Data Analysis: Application of the Maximum Likelihood and Related Methods for the Determination of Lifetimes in Mixtures of Rose Bengal and Rhodamine B

    SciTech Connect

    Santra, Kalyan; Smith, Emily A.; Petrich, Jacob W.; Song, Xueyu

    2016-12-12

    It is often convenient to know the minimum amount of data needed in order to obtain a result of desired accuracy and precision. It is a necessity in the case of subdiffraction-limited microscopies, such as stimulated emission depletion (STED) microscopy, owing to the limited sample volumes and the extreme sensitivity of the samples to photobleaching and photodamage. We present a detailed comparison of probability-based techniques (the maximum likelihood method and methods based on the binomial and the Poisson distributions) with residual minimization-based techniques for retrieving the fluorescence decay parameters for various two-fluorophore mixtures, as a function of the total number of photon counts, in time-correlated, single-photon counting experiments. The probability-based techniques proved to be the most robust (insensitive to initial values) in retrieving the target parameters and, in fact, performed equivalently to 2-3 significant figures. This is to be expected, as we demonstrate that the three methods are fundamentally related. Furthermore, methods based on the Poisson and binomial distributions have the desirable feature of providing a bin-by-bin analysis of a single fluorescence decay trace, which thus permits statistics to be acquired using only the one trace for not only the mean and median values of the fluorescence decay parameters but also for the associated standard deviations. Lastly, these probability-based methods lend themselves well to the analysis of the sparse data sets that are encountered in subdiffraction-limited microscopies.
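
    A compact sketch of the probability-based approach is shown below: binned photon counts are fitted by maximizing the Poisson log-likelihood of a two-exponential decay model. The lifetimes, amplitudes and binning are invented and do not correspond to the rose bengal/rhodamine B measurements:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)

        # Synthetic two-component decay histogram (amplitudes/lifetimes are invented).
        t = np.linspace(0.0, 20.0, 200)          # bin centres, ns

        def model(p, t):
            a1, tau1, a2, tau2 = p
            return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

        true = (800.0, 0.6, 300.0, 3.5)
        counts = rng.poisson(model(true, t))

        # Poisson negative log-likelihood (the data-only log-factorial term is dropped).
        def nll(p):
            if np.any(np.asarray(p) <= 0.0):
                return np.inf
            mu = model(p, t)
            return np.sum(mu - counts * np.log(mu))

        fit = minimize(nll, x0=(500.0, 1.0, 500.0, 5.0), method="Nelder-Mead",
                       options={"maxiter": 20_000, "fatol": 1e-8, "xatol": 1e-8})
        a1, tau1, a2, tau2 = fit.x
        print(f"tau1 = {tau1:.2f} ns, tau2 = {tau2:.2f} ns  (true: {true[1]}, {true[3]})")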

  17. A note on the relationships between multiple imputation, maximum likelihood and fully Bayesian methods for missing responses in linear regression models.

    PubMed

    Chen, Qingxia; Ibrahim, Joseph G

    2014-07-01

    Multiple Imputation, Maximum Likelihood and Fully Bayesian methods are the three most commonly used model-based approaches in missing data problems. Although it is easy to show that when the responses are missing at random (MAR), the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small sample and asymptotic expressions of the estimates and standard errors, and fully examine how these estimates are related for the three approaches in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. One simulation and a real data set from a liver cancer clinical trial are given to compare the properties of these methods when the responses are MAR.

  18. Novel methods for molecular dynamics simulations.

    PubMed

    Elber, R

    1996-04-01

    In the past year, significant progress was made in the development of molecular dynamics methods for the liquid phase and for biological macromolecules. Specifically, faster algorithms to pursue molecular dynamics simulations were introduced and advances were made in the design of new optimization algorithms guided by molecular dynamics protocols. A technique to calculate the quantum spectra of protein vibrations was introduced.

  19. Simulation for position determination of distal and proximal edges for SOBP irradiation in hadron therapy by using the maximum likelihood estimation method

    NASA Astrophysics Data System (ADS)

    Inaniwa, Taku; Kohno, Toshiyuki; Tomitani, Takehiro

    2005-12-01

    In radiation therapy with hadron beams, conformal irradiation to a tumour can be achieved by using the properties of incident ions such as the high dose concentration around the Bragg peak. For the effective utilization of such properties, it is necessary to evaluate the volume irradiated with hadron beams and the deposited dose distribution in a patient's body. Several methods have been proposed for this purpose, one of which uses the positron emitters generated through fragmentation reactions between incident ions and target nuclei. In the previous paper, we showed that the maximum likelihood estimation (MLE) method could be applicable to the estimation of beam end-point from the measured positron emitting activity distribution for mono-energetic beam irradiations. In a practical treatment, a spread-out Bragg peak (SOBP) beam is used to achieve a uniform biological dose distribution in the whole target volume. Therefore, in the present paper, we proposed to extend the MLE method to estimations of the position of the distal and proximal edges of the SOBP from the detected annihilation gamma ray distribution. We confirmed the effectiveness of the method by means of simulations. Although polyethylene was adopted as a substitute for a soft tissue target in validating the method, the proposed method is equally applicable to general cases, provided that the reaction cross sections between the incident ions and the target nuclei are known. The relative advantage of incident beam species to determine the position of the distal and the proximal edges was compared. Furthermore, we ascertained the validity of applying the MLE method to determinations of the position of the distal and the proximal edges of an SOBP by simulations and we gave a physical explanation of the distal and the proximal information.

  20. Metrics for expert judgement in volcanic hazard assessment: comparing the Cooke classical model with a new method based on individual performance likelihood

    NASA Astrophysics Data System (ADS)

    Flandoli, F.; Giorgi, E.; Aspinall, W. A.; Neri, A.

    2009-04-01

    Expert elicitation is a method to obtain estimates for variables of interest when data are sparse or ambiguous. A team of experts is created and each is asked to provide three values for each target variable (typically the 5% quantile, the median, and the 95% quantile). If some weight can be associated with each expert, then the different opinions can be pooled to generate a weighted mean, thus providing an estimate of the uncertain variable. The key challenge is to assign a proper weight to each expert. To determine this weight empirically, the experts can be asked a set of 'seed' questions, whose values are known by the analyst (facilitator). In this approach, the experts provide three separate quantile values for each question, and the expert's capability of quantifying uncertainty can be evaluated. For instance, the Cooke classical model quantifies the collective scientific uncertainty through an expert scoring scheme by which weights are ascribed to individual experts on the basis of empirically determined calibration and informativeness scores obtained from a probability analysis of individual performances. In our work, we compare such a method to a new algorithm in which the calibration score is substituted by one based on the likelihood of observing these expert performances. The simple idea behind this is that of rewarding more strongly those experts whose seed item median values are systematically closer to the true values. Given the three quantile values provided by every expert for each question, we fit a Beta distribution to each test item response, and compute the probability that the location parameter of that distribution corresponds to the real value, by chance. For each expert, the geometric mean of these probabilities is computed as the likelihood factor, L(e), of the expert, thus providing an alternative 'calibration' score. An information factor, I(e), is also computed as the arithmetic mean of the relative entropies of the expert's distributions

  1. A Comparison of Bayesian Monte Carlo Markov Chain and Maximum Likelihood Estimation Methods for the Statistical Analysis of Geodetic Time Series

    NASA Astrophysics Data System (ADS)

    Olivares, G.; Teferle, F. N.

    2013-12-01

    Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Monte Carlo Markov Chain (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters and, thereby, using Monte Carlo integration, estimates all parameters and their uncertainties simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference data base of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE: for example, without further computations it provides the spectral index uncertainty, it is computationally stable and it detects multimodality.
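
    To illustrate the comparison in miniature, the sketch below estimates a rate and a white-noise amplitude from a synthetic position series, once with the closed-form Gaussian MLE and once with a basic Metropolis sampler; the power-law noise component and the real GPS/MSL data handling are omitted, so this is only a toy:

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic "position" series: offset + rate*t + white noise only.
        t = np.arange(0.0, 10.0, 0.05)                    # years
        a_true, b_true, sig_true = 5.0, 3.2, 2.0          # mm, mm/yr, mm
        y = a_true + b_true * t + rng.normal(0.0, sig_true, t.size)

        # Closed-form Gaussian MLE: ordinary least squares plus sigma^2 = RSS / n.
        X = np.column_stack([np.ones_like(t), t])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        sig_mle = np.sqrt(np.mean((y - X @ beta) ** 2))

        # Basic Metropolis sampler over (offset, rate, log sigma) with flat priors,
        # started at the MLE to keep the burn-in short.
        def log_post(p):
            a, b, log_s = p
            r = y - a - b * t
            return -t.size * log_s - 0.5 * np.sum(r * r) / np.exp(2.0 * log_s)

        cur = np.array([beta[0], beta[1], np.log(sig_mle)])
        cur_lp, step, chain = log_post(cur), np.array([0.3, 0.05, 0.05]), []
        for _ in range(30_000):
            prop = cur + step * rng.normal(size=3)
            lp = log_post(prop)
            if np.log(rng.uniform()) < lp - cur_lp:
                cur, cur_lp = prop, lp
            chain.append(cur.copy())
        chain = np.array(chain[5_000:])

        print(f"MLE : rate {beta[1]:.3f} mm/yr, sigma {sig_mle:.3f} mm")
        print(f"MCMC: rate {chain[:, 1].mean():.3f} mm/yr, "
              f"sigma {np.exp(chain[:, 2]).mean():.3f} mm")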

  2. Optical methods in fault dynamics

    NASA Astrophysics Data System (ADS)

    Uenishi, K.; Rossmanith, H. P.

    2003-10-01

    The Rayleigh pulse interaction with a pre-stressed, partially contacting interface between similar and dissimilar materials is investigated experimentally as well as numerically. This study is intended to obtain an improved understanding of the interface (fault) dynamics during the earthquake rupture process. Using dynamic photoelasticity in conjunction with high-speed cinematography, snapshots of time-dependent isochromatic fringe patterns associated with Rayleigh pulse-interface interaction are experimentally recorded. It is shown that interface slip (instability) can be triggered dynamically by a pulse which propagates along the interface at the Rayleigh wave speed. For the numerical investigation, the finite difference wave simulator SWIFD is used for solving the problem under different combinations of contacting materials. The effect of acoustic impedance ratio of the two contacting materials on the wave patterns is discussed. The results indicate that upon interface rupture, Mach (head) waves, which carry a relatively large amount of energy in a concentrated form, can be generated and propagated from the interface contact region (asperity) into the acoustically softer material. Such Mach waves can cause severe damage onto a particular region inside an adjacent acoustically softer area. This type of damage concentration might be a possible reason for the generation of the "damage belt" in Kobe, Japan, on the occasion of the 1995 Hyogo-ken Nanbu (Kobe) Earthquake.

  3. A Method and On-Line Tool for Maximum Likelihood Calibration of Immunoblots and Other Measurements That Are Quantified in Batches

    PubMed Central

    Andrews, Steven S.; Rutherford, Suzannah

    2016-01-01

    Experimental measurements require calibration to transform measured signals into physically meaningful values. The conventional approach has two steps: the experimenter deduces a conversion function using measurements on standards and then calibrates (or normalizes) measurements on unknown samples with this function. The deduction of the conversion function from only the standard measurements causes the results to be quite sensitive to experimental noise. It also implies that any data collected without reliable standards must be discarded. Here we show that a “1-step calibration method” reduces these problems for the common situation in which samples are measured in batches, where a batch could be an immunoblot (Western blot), an enzyme-linked immunosorbent assay (ELISA), a sequence of spectra, or a microarray, provided that some sample measurements are replicated across multiple batches. The 1-step method computes all calibration results iteratively from all measurements. It returns the most probable values for the sample compositions under the assumptions of a statistical model, making them the maximum likelihood predictors. It is less sensitive to measurement error on standards and enables use of some batches that do not include standards. In direct comparison of both real and simulated immunoblot data, the 1-step method consistently exhibited smaller errors than the conventional “2-step” method. These results suggest that the 1-step method is likely to be most useful for cases where experimenters want to analyze existing data that are missing some standard measurements and where experimenters want to extract the best results possible from their data. Open source software for both methods is available for download or on-line use. PMID:26908370
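
    The flavour of joint estimation across batches can be sketched as follows: log band intensities are modelled as a sample abundance plus a batch scale factor, and both sets of unknowns are fitted together by alternating least squares, with samples replicated across batches tying the batches together. The model, data layout and solver are simplified assumptions, not the authors' published algorithm or software:

        import numpy as np

        rng = np.random.default_rng(8)

        # Simulated log intensities: L[i, j] = log(abundance_i) + log(scale_j) + noise.
        # Every sample is run on batch 0; other sample/batch pairs are missing at random.
        n_samples, n_batches = 8, 4
        true_x = rng.normal(0.0, 1.0, n_samples)          # log abundances
        true_b = rng.normal(0.0, 0.5, n_batches)          # log batch scale factors
        L = true_x[:, None] + true_b[None, :] + rng.normal(0.0, 0.1, (n_samples, n_batches))
        L[:, 1:][rng.random((n_samples, n_batches - 1)) < 0.3] = np.nan

        # Joint estimation of both sets of unknowns by alternating least squares
        # on the log scale; replicated samples tie the batches together.
        x_hat = np.zeros(n_samples)
        b_hat = np.zeros(n_batches)
        for _ in range(200):
            b_hat = np.nanmean(L - x_hat[:, None], axis=0)
            x_hat = np.nanmean(L - b_hat[None, :], axis=1)
        b_hat -= b_hat[0]                                 # gauge: scales relative to batch 0
        x_hat = np.nanmean(L - b_hat[None, :], axis=1)

        err = (x_hat - x_hat[0]) - (true_x - true_x[0])
        print("error in relative log abundances:", np.round(err, 3))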

  4. Photon Counting Data Analysis: Application of the Maximum Likelihood and Related Methods for the Determination of Lifetimes in Mixtures of Rose Bengal and Rhodamine B

    DOE PAGES

    Santra, Kalyan; Smith, Emily A.; Petrich, Jacob W.; ...

    2016-12-12

    It is often convenient to know the minimum amount of data needed in order to obtain a result of desired accuracy and precision. It is a necessity in the case of subdiffraction-limited microscopies, such as stimulated emission depletion (STED) microscopy, owing to the limited sample volumes and the extreme sensitivity of the samples to photobleaching and photodamage. We present a detailed comparison of probability-based techniques (the maximum likelihood method and methods based on the binomial and the Poisson distributions) with residual minimization-based techniques for retrieving the fluorescence decay parameters for various two-fluorophore mixtures, as a function of the total number of photon counts, in time-correlated, single-photon counting experiments. The probability-based techniques proved to be the most robust (insensitive to initial values) in retrieving the target parameters and, in fact, performed equivalently to 2-3 significant figures. This is to be expected, as we demonstrate that the three methods are fundamentally related. Furthermore, methods based on the Poisson and binomial distributions have the desirable feature of providing a bin-by-bin analysis of a single fluorescence decay trace, which thus permits statistics to be acquired using only the one trace for not only the mean and median values of the fluorescence decay parameters but also for the associated standard deviations. Lastly, these probability-based methods lend themselves well to the analysis of the sparse data sets that are encountered in subdiffraction-limited microscopies.

  5. Likelihood Principle and Maximum Likelihood Estimator of Location Parameter for Cauchy Distribution.

    DTIC Science & Technology

    1986-05-01

    The consistency (or strong consistency) of the maximum likelihood estimator has been studied by many researchers, for example Wald (1949) and Wolfowitz (1953, 1965). Cited works include Wald, A. (1949), Note on the consistency of maximum likelihood estimates, Ann. Math. Statist.; and Wolfowitz, J. (1953), The method of maximum likelihood and Wald theory of decision functions, Indag. Math., Vol. 15, 114-119.

  6. Phylogeny of the cycads based on multiple single-copy nuclear genes: congruence of concatenated parsimony, likelihood and species tree inference methods

    PubMed Central

    Salas-Leiva, Dayana E.; Meerow, Alan W.; Calonje, Michael; Griffith, M. Patrick; Francisco-Ortega, Javier; Nakamura, Kyoko; Stevenson, Dennis W.; Lewis, Carl E.; Namoff, Sandra

    2013-01-01

    Background and aims Despite a recent new classification, a stable phylogeny for the cycads has been elusive, particularly regarding resolution of Bowenia, Stangeria and Dioon. In this study, five single-copy nuclear genes (SCNGs) are applied to the phylogeny of the order Cycadales. The specific aim is to evaluate several gene tree–species tree reconciliation approaches for developing an accurate phylogeny of the order, to contrast them with concatenated parsimony analysis and to resolve the erstwhile problematic phylogenetic position of these three genera. Methods DNA sequences of five SCNGs were obtained for 20 cycad species representing all ten genera of Cycadales. These were analysed with parsimony, maximum likelihood (ML) and three Bayesian methods of gene tree–species tree reconciliation, using Cycas as the outgroup. A calibrated date estimation was developed with Bayesian methods, and biogeographic analysis was also conducted. Key Results Concatenated parsimony, ML and three species tree inference methods resolve exactly the same tree topology with high support at most nodes. Dioon and Bowenia are the first and second branches of Cycadales after Cycas, respectively, followed by an encephalartoid clade (Macrozamia–Lepidozamia–Encephalartos), which is sister to a zamioid clade, of which Ceratozamia is the first branch, and in which Stangeria is sister to Microcycas and Zamia. Conclusions A single, well-supported phylogenetic hypothesis of the generic relationships of the Cycadales is presented. However, massive extinction events inferred from the fossil record that eliminated broader ancestral distributions within Zamiaceae compromise accurate optimization of ancestral biogeographical areas for that hypothesis. While major lineages of Cycadales are ancient, crown ages of all modern genera are no older than 12 million years, supporting a recent hypothesis of mostly Miocene radiations. This phylogeny can contribute to an accurate infrafamilial

  7. Maximum Likelihood Fusion Model

    DTIC Science & Technology

    2014-08-09

    Keywords: data fusion, hypothesis testing, maximum likelihood estimation, mobile robot navigation, simultaneous localization and mapping. Figure 1.1: Illustration of mobile robotic agents, including land rovers such as Pioneer robots and Segways.

  8. Empirical aspects of the Whittle-based maximum likelihood method in jointly estimating seasonal and non-seasonal fractional integration parameters

    NASA Astrophysics Data System (ADS)

    Marques, G. O. L. C.

    2011-01-01

    This paper addresses the efficiency of the maximum likelihood (ML) method in jointly estimating the fractional integration parameters ds and d, respectively associated with the seasonal and non-seasonal long-memory components of discrete stochastic processes. The influence of the size of the non-seasonal parameter on seasonal parameter estimation, and vice versa, was analyzed in the space d × ds ∈ (0,1) × (0,1) by using the mean squared error statistics MSE(dˆs) and MSE(dˆ). This study was based on Monte Carlo simulation experiments using the ML estimator with Whittle’s approximation in the frequency domain. Numerical results revealed that efficiency in jointly estimating each integration parameter is affected in different ways by their sizes: as ds and d increase simultaneously to 1, MSE(dˆs) and MSE(dˆ) both become larger; however, the effects on MSE(dˆs) are much stronger than the effects on MSE(dˆ). Moreover, as each parameter tends individually to 1, MSE(dˆ) becomes larger, but MSE(dˆs) is barely influenced.
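
    A minimal sketch of the Whittle approximation is given below for the simpler, purely non-seasonal case: the parameter d of an ARFIMA(0, d, 0) process is estimated by maximizing the concentrated Whittle log-likelihood built from the periodogram and the spectral shape |2 sin(λ/2)|^{-2d}. The simulation scheme and the single-parameter restriction are simplifications for illustration:

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(9)

        # Simulate an ARFIMA(0, d, 0) series via its (truncated) MA(infinity) coefficients.
        def simulate_arfima(d, n, burn=500):
            k = np.arange(1, n + burn)
            psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))   # psi_k
            eps = rng.normal(size=n + burn)
            x = np.convolve(eps, psi)[:n + burn]
            return x[burn:]

        d_true, n = 0.3, 2048
        x = simulate_arfima(d_true, n)

        # Periodogram at the Fourier frequencies lambda_j = 2*pi*j/n, j = 1..n/2-1.
        j = np.arange(1, n // 2)
        lam = 2.0 * np.pi * j / n
        I = np.abs(np.fft.fft(x)[j]) ** 2 / (2.0 * np.pi * n)

        # Concentrated Whittle objective for f(lam) proportional to |2 sin(lam/2)|^(-2d).
        def whittle(d):
            g = np.abs(2.0 * np.sin(lam / 2.0)) ** (-2.0 * d)
            return np.log(np.mean(I / g)) + np.mean(np.log(g))

        d_hat = minimize_scalar(whittle, bounds=(-0.49, 0.49), method="bounded").x
        print(f"Whittle estimate d = {d_hat:.3f} (true {d_true})")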

  9. List-mode likelihood

    PubMed Central

    Barrett, Harrison H.; White, Timothy; Parra, Lucas C.

    2010-01-01

    As photon-counting imaging systems become more complex, there is a trend toward measuring more attributes of each individual event. In various imaging systems the attributes can include several position variables, time variables, and energies. If more than about four attributes are measured for each event, it is not practical to record the data in an image matrix. Instead it is more efficient to use a simple list where every attribute is stored for every event. It is the purpose of this paper to discuss the concept of likelihood for such list-mode data. We present expressions for list-mode likelihood with an arbitrary number of attributes per photon and for both preset counts and preset time. Maximization of this likelihood can lead to a practical reconstruction algorithm with list-mode data, but that aspect is covered in a separate paper [IEEE Trans. Med. Imaging (to be published)]. An expression for lesion detectability for list-mode data is also derived and compared with the corresponding expression for conventional binned data. PMID:9379247

  10. [Contrastive study on dynamic spectrum extraction method].

    PubMed

    Li, Gang; Zhou, Mei; Wang, Hui-quan; Xiong, Chan; Lin, Ling

    2012-05-01

    The dynamic spectrum method extracts the absorbance of arterial pulse blood at selected wavelengths. The method can reduce influences such as measurement conditions, individual differences and spectral overlap, and it offers a new way for noninvasive detection of blood components. However, the choice of dynamic spectrum extraction method is one of the key links for recovering the weak ingredient spectrum signal. Two methods are currently used to extract the dynamic spectral signal: frequency-domain analysis and single-trial estimation in the time domain. In the present research, a comparative analysis of the two methods was carried out. Theoretical analysis and experimental results show that the two methods extract the dynamic spectrum from different angles but are the same in essence, sharing the basic principle of the dynamic spectrum and the statistical and averaging properties of the signal. With a pulse wave of relatively stable period and amplitude, a high-precision dynamic spectrum can be obtained by either method. With an unstable pulse wave, caused for example by finger shake or contact-pressure changes, the dynamic spectrum extracted by single-trial estimation is more accurate than that obtained by frequency-domain analysis.

  11. SPT Lensing Likelihood: South Pole Telescope CMB lensing likelihood code

    NASA Astrophysics Data System (ADS)

    Feeney, Stephen M.; Peiris, Hiranya V.; Verde, Licia

    2014-11-01

    The SPT lensing likelihood code, written in Fortran90, evaluates a Gaussian likelihood of the lensing potential power spectrum. It uses a file from CAMB (ascl:1102.026) that contains the normalization required to produce the power spectrum the likelihood call expects.
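
    For readers unfamiliar with this class of code, a generic Gaussian band-power likelihood has the following shape (a Python sketch with assumed array inputs; it is not the SPT Fortran90 implementation):

      import numpy as np

      def gaussian_lnlike(bandpowers_obs, bandpowers_theory, cov):
          """-2 ln L = (d - m)^T C^{-1} (d - m), up to a model-independent constant.
          bandpowers_obs:    measured lensing-potential band powers
          bandpowers_theory: binned theory prediction (e.g. built from a CAMB
                             C_L^{phiphi} rescaled by the normalization file)
          cov:               band-power covariance matrix
          """
          r = np.asarray(bandpowers_obs) - np.asarray(bandpowers_theory)
          return -0.5 * (r @ np.linalg.solve(np.asarray(cov), r))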

  12. DALI: Derivative Approximation for LIkelihoods

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena

    2015-07-01

    DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.

  13. Maximum Likelihood Estimation of Multivariate Polyserial and Polychoric Correlation Coefficients.

    ERIC Educational Resources Information Center

    Poon, Wai-Yin; Lee, Sik-Yum

    1987-01-01

    Reparameterization is used to find the maximum likelihood estimates of parameters in a multivariate model having some component variable observable only in polychotomous form. Maximum likelihood estimates are found by a Fletcher Powell algorithm. In addition, the partition maximum likelihood method is proposed and illustrated. (Author/GDC)

  14. Determination of stability and control parameters of a light airplane from flight data using two estimation methods. [equation error and maximum likelihood methods

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1979-01-01

    Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are also demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.

  15. Sampling variability and estimates of density dependence: a composite-likelihood approach.

    PubMed

    Lele, Subhash R

    2006-01-01

    It is well known that sampling variability, if not properly taken into account, affects various ecologically important analyses. Statistical inference for stochastic population dynamics models is difficult when, in addition to the process error, there is also sampling error. The standard maximum-likelihood approach suffers from large computational burden. In this paper, I discuss an application of the composite-likelihood method for estimation of the parameters of the Gompertz model in the presence of sampling variability. The main advantage of the method of composite likelihood is that it reduces the computational burden substantially with little loss of statistical efficiency. Missing observations are a common problem with many ecological time series. The method of composite likelihood can accommodate missing observations in a straightforward fashion. Environmental conditions also affect the parameters of stochastic population dynamics models. This method is shown to handle such nonstationary population dynamics processes as well. Many ecological time series are short, and statistical inferences based on such short time series tend to be less precise. However, spatial replications of short time series provide an opportunity to increase the effective sample size. Application of likelihood-based methods for spatial time-series data for population dynamics models is computationally prohibitive. The method of composite likelihood is shown to have significantly less computational burden, making it possible to analyze large spatial time-series data. After discussing the methodology in general terms, I illustrate its use by analyzing a time series of counts of American Redstart (Setophaga ruticilla) from the Breeding Bird Survey data, San Joaquin kit fox (Vulpes macrotis mutica) population abundance data, and spatial time series of Bull trout (Salvelinus confluentus) redds count data.
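
    A stripped-down illustration of the idea (a sketch, not Lele's actual estimator) is a pairwise composite log-likelihood for a stationary Gompertz state-space model on the log scale, where each adjacent pair of observations contributes a bivariate normal term and pairs containing a missing value are simply dropped:

      import numpy as np
      from scipy.stats import multivariate_normal

      def pairwise_cl(params, y):
          """Pairwise composite log-likelihood for the stationary model
              X_{t+1} = a + c*X_t + eps_t,  eps ~ N(0, sig2)  (process error)
              Y_t     = X_t + eta_t,        eta ~ N(0, tau2)  (sampling error)
          assuming |c| < 1; y may contain np.nan for missing observations."""
          a, c, sig2, tau2 = params
          mu = a / (1.0 - c)                  # stationary mean of X
          v = sig2 / (1.0 - c ** 2)           # stationary variance of X
          cov = np.array([[v + tau2, c * v],
                          [c * v, v + tau2]])
          ll = 0.0
          for t in range(len(y) - 1):
              pair = np.array([y[t], y[t + 1]])
              if np.any(np.isnan(pair)):      # missing data: skip the pair
                  continue
              ll += multivariate_normal.logpdf(pair, mean=[mu, mu], cov=cov)
          return ll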

  16. The Sherpa Maximum Likelihood Estimator

    NASA Astrophysics Data System (ADS)

    Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.

    2011-07-01

    A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
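
    The underlying likelihood comparison can be illustrated with a Poisson (Cash-statistic) fit statistic for binned counts under the two hypotheses. The sketch below is a generic Python illustration of that comparison, not the CSC MLE tool itself, and the model arrays are assumed inputs:

      import numpy as np

      def cash(counts, model):
          """Cash statistic C = 2 * sum(model - counts*ln(model)), i.e. -2 ln L
          for Poisson data up to a model-independent constant."""
          model = np.clip(np.asarray(model, dtype=float), 1e-30, None)
          return 2.0 * np.sum(model - np.asarray(counts, dtype=float) * np.log(model))

      # For one candidate source region:
      #   delta_c = cash(counts, bkg_model) - cash(counts, bkg_model + src_model)
      # A larger delta_c means the PSF-convolved source model improves the fit,
      # i.e. the candidate is more likely to be a real source.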

  17. Dynamic discretization method for solving Kepler's equation

    NASA Astrophysics Data System (ADS)

    Feinstein, Scott A.; McLaughlin, Craig A.

    2006-09-01

    Kepler’s equation needs to be solved many times for a variety of problems in Celestial Mechanics. Therefore, computing the solution to Kepler’s equation in an efficient manner is of great importance to that community. There are some historical and many modern methods that address this problem. Of the methods known to the authors, Fukushima’s discretization technique performs the best. By taking more of a system approach and combining the use of discretization with the standard computer science technique known as dynamic programming, we were able to achieve even better performance than Fukushima. We begin by defining Kepler’s equation for the elliptical case and describe existing solution methods. We then present our dynamic discretization method and show the results of a comparative analysis. This analysis will demonstrate that, for the conditions of our tests, dynamic discretization performs the best.
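
    For context, the elliptical problem is to solve M = E - e*sin(E) for the eccentric anomaly E. The sketch below illustrates the general flavor of table-assisted solvers (a coarse discretization supplying the starting guess, followed by Newton refinement); it is an illustration of that idea only, not the authors' dynamic discretization algorithm:

      import numpy as np

      def make_table(e, n=1024):
          """Tabulate E on a uniform grid of mean anomaly M in [0, 2*pi) by
          inverting M = E - e*sin(E) on a fine E grid (the discretization step)."""
          E_fine = np.linspace(0.0, 2.0 * np.pi, 16 * n)
          M_fine = E_fine - e * np.sin(E_fine)
          M_grid = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
          return np.interp(M_grid, M_fine, E_fine)

      def solve_kepler(M, e, E_table, newton_iters=3):
          """Table lookup for the starting guess, then Newton steps on
          f(E) = E - e*sin(E) - M = 0."""
          M = np.mod(M, 2.0 * np.pi)
          n = len(E_table)
          E = E_table[min(int(M / (2.0 * np.pi) * n), n - 1)]
          for _ in range(newton_iters):
              E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
          return E

      # E_table = make_table(e=0.3); E = solve_kepler(1.234, 0.3, E_table)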

  18. Dynamic Waypoint Navigation Using Voronoi Classifier Methods

    DTIC Science & Technology

    2004-12-01

    This paper details the development of a dynamic waypoint navigation method ... elements of the environment are known initially and are used in the computation of the initial path. The drawback to this method is that the robot

  19. Simulating protein dynamics: Novel methods and applications

    NASA Astrophysics Data System (ADS)

    Vishal, V.

    This Ph.D dissertation describes several methodological advances in molecular dynamics (MD) simulations. Methods like Markov State Models can be used effectively in combination with distributed computing to obtain long time scale behavior from an ensemble of short simulations. Advanced computing architectures like Graphics Processors can be used to greatly extend the scope of MD. Applications of MD techniques to problems like Alzheimer's Disease and fundamental questions in protein dynamics are described.

  20. SWECS tower dynamics analysis methods and results

    NASA Technical Reports Server (NTRS)

    Wright, A. D.; Sexton, J. H.; Butterfield, C. P.; Thresher, R. M.

    1981-01-01

    Several different tower dynamics analysis methods and computer codes were used to determine the natural frequencies and mode shapes of both guyed and freestanding wind turbine towers. These analysis methods are described and the results for two types of towers, a guyed tower and a freestanding tower, are shown. The advantages and disadvantages in the use of and the accuracy of each method are also described.

  1. Quasi-likelihood for Spatial Point Processes

    PubMed Central

    Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus

    2014-01-01

    Summary Fitting regression models for intensity functions of spatial point processes is of great interest in ecological and epidemiological studies of association between spatially referenced events and geographical or environmental covariates. When Cox or cluster process models are used to accommodate clustering not accounted for by the available covariates, likelihood based inference becomes computationally cumbersome due to the complicated nature of the likelihood function and the associated score function. It is therefore of interest to consider alternative more easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation which in practice is solved numerically. The derivation of the optimal estimating function has close similarities to the derivation of quasi-likelihood for standard data sets. The approximate solution is further equivalent to a quasi-likelihood score for binary spatial data. We therefore use the term quasi-likelihood for our optimal estimating function approach. We demonstrate in a simulation study and a data example that our quasi-likelihood method for spatial point processes is both statistically and computationally efficient. PMID:26041970
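
    For orientation, the class of first-order estimating functions referred to here has the standard form (generic notation: W is the observation window, X the observed point pattern, ρ(u;β) the intensity model and φ a weight function),

      e_\varphi(\beta) \;=\; \sum_{u \in X \cap W} \varphi(u;\beta) \;-\; \int_W \varphi(u;\beta)\, \rho(u;\beta)\, \mathrm{d}u ,

    which is unbiased by the Campbell formula; choosing φ = ∇_β log ρ recovers the Poisson (composite) likelihood score, while the optimal φ is the one obtained from the Fredholm integral equation mentioned above.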

  2. Disequilibrium mapping: Composite likelihood for pairwise disequilibrium

    SciTech Connect

    Devlin, B.; Roeder, K.; Risch, N.

    1996-08-15

    The pattern of linkage disequilibrium between a disease locus and a set of marker loci has been shown to be a useful tool for geneticists searching for disease genes. Several methods have been advanced to utilize the pairwise disequilibrium between the disease locus and each of a set of marker loci. However, none of the methods take into account the information from all pairs simultaneously while also modeling the variability in the disequilibrium values due to the evolutionary dynamics of the population. We propose a Composite Likelihood (CL) model that has these features when the physical distances between the marker loci are known or can be approximated. In this instance, and assuming that there is a single disease mutation, the CL model depends on only three parameters: the recombination fraction between the disease locus and an arbitrary marker locus, θ; the age of the mutation; and a variance parameter. When the CL is maximized over a grid of θ, it provides a graph that can direct the search for the disease locus. We also show how the CL model can be generalized to account for multiple disease mutations. Evolutionary simulations demonstrate the power of the analyses, as well as their potential weaknesses. Finally, we analyze the data from two mapped diseases, cystic fibrosis and diastrophic dysplasia, finding that the CL method performs well in both cases. 28 refs., 6 figs., 4 tabs.

  3. Method for monitoring slow dynamics recovery

    NASA Astrophysics Data System (ADS)

    Haller, Kristian C. E.; Hedberg, Claes M.

    2012-11-01

    Slow Dynamics is a specific material property, which for example is connected to the degree of damage. It is therefore of importance to be able to attain proper measurements of it. Usually it has been monitored by acoustic resonance methods which have very high sensitivity as such. However, because the acoustic wave is acting both as conditioner and as probe, the measurement is affecting the result which leads to a mixing of the fast nonlinear response to the excitation and the slow dynamics material recovery. In this article a method is introduced which, for the first time, removes the fast dynamics from the process and allows the behavior of the slow dynamics to be monitored by itself. The new method has the ability to measure at the shortest possible recovery times, and at very small conditioning strains. For the lowest strains the sound speed increases with strain, while at higher strains a linear decreasing dependence is observed. This is the first method and test that has been able to monitor the true material state recovery process.

  4. Solution Methods for Stochastic Dynamic Linear Programs.

    DTIC Science & Technology

    1980-12-01


  5. Interfacial gauge methods for incompressible fluid dynamics

    PubMed Central

    Saye, Robert

    2016-01-01

    Designing numerical methods for incompressible fluid flow involving moving interfaces, for example, in the computational modeling of bubble dynamics, swimming organisms, or surface waves, presents challenges due to the coupling of interfacial forces with incompressibility constraints. A class of methods, denoted interfacial gauge methods, is introduced for computing solutions to the corresponding incompressible Navier-Stokes equations. These methods use a type of “gauge freedom” to reduce the numerical coupling between fluid velocity, pressure, and interface position, allowing high-order accurate numerical methods to be developed more easily. Making use of an implicit mesh discontinuous Galerkin framework, developed in tandem with this work, high-order results are demonstrated, including surface tension dynamics in which fluid velocity, pressure, and interface geometry are computed with fourth-order spatial accuracy in the maximum norm. Applications are demonstrated with two-phase fluid flow displaying fine-scaled capillary wave dynamics, rigid body fluid-structure interaction, and a fluid-jet free surface flow problem exhibiting vortex shedding induced by a type of Plateau-Rayleigh instability. The developed methods can be generalized to other types of interfacial flow and facilitate precise computation of complex fluid interface phenomena. PMID:27386567

  6. Interfacial gauge methods for incompressible fluid dynamics.

    PubMed

    Saye, Robert

    2016-06-01

    Designing numerical methods for incompressible fluid flow involving moving interfaces, for example, in the computational modeling of bubble dynamics, swimming organisms, or surface waves, presents challenges due to the coupling of interfacial forces with incompressibility constraints. A class of methods, denoted interfacial gauge methods, is introduced for computing solutions to the corresponding incompressible Navier-Stokes equations. These methods use a type of "gauge freedom" to reduce the numerical coupling between fluid velocity, pressure, and interface position, allowing high-order accurate numerical methods to be developed more easily. Making use of an implicit mesh discontinuous Galerkin framework, developed in tandem with this work, high-order results are demonstrated, including surface tension dynamics in which fluid velocity, pressure, and interface geometry are computed with fourth-order spatial accuracy in the maximum norm. Applications are demonstrated with two-phase fluid flow displaying fine-scaled capillary wave dynamics, rigid body fluid-structure interaction, and a fluid-jet free surface flow problem exhibiting vortex shedding induced by a type of Plateau-Rayleigh instability. The developed methods can be generalized to other types of interfacial flow and facilitate precise computation of complex fluid interface phenomena.

  7. A real-time digital program for estimating aircraft stability and control parameters from flight test data by using the maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Mayhew, S. C.

    1973-01-01

    A computer program (Langley program C1123) has been developed for estimating aircraft stability and control parameters from flight test data. These parameters are estimated by the maximum likelihood estimation procedure implemented on a real-time digital simulation system, which uses the Control Data 6600 computer. This system allows the investigator to interact with the program in order to obtain satisfactory results. Part of this system, the control and display capabilities, is described for this program. This report also describes the computer program by presenting the program variables, subroutines, flow charts, listings, and operational features. Program usage is demonstrated with a test case using pseudo or simulated flight data.

  8. Evaluation of Dynamic Methods for Earthwork Assessment

    NASA Astrophysics Data System (ADS)

    Vlček, Jozef; Ďureková, Dominika; Zgútová, Katarína

    2015-05-01

    The rapid development of road construction creates demand for fast, high-quality methods of earthwork quality evaluation. Dynamic methods are now adopted in numerous civil engineering sectors. In particular, evaluation of earthwork quality can be sped up using dynamic equipment. This paper presents the results of parallel measurements with selected devices for determining the level of compaction of soils. The measurements were used to develop correlations between the values obtained from the various apparatuses. The correlations show that the examined apparatuses are suitable for assessing the compaction level of fine-grained soils, with consideration of the boundary conditions of the equipment used. The presented methods are quick, results are available immediately after measurement, and they are thus suitable when construction works must be performed in a short period of time.

  9. Spectral Methods for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Streett, C. L.; Hussaini, M. Y.

    1994-01-01

    As a tool for large-scale computations in fluid dynamics, spectral methods were prophesied in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral

  10. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
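
    The cost function at the heart of this kind of output-error maximum likelihood estimation is typically of the weighted least-squares form (stated here generically as a reminder, not quoted from the report),

      J(\xi) \;=\; \tfrac{1}{2} \sum_{i=1}^{N} \big[ z(t_i) - \hat{y}_{\xi}(t_i) \big]^{\mathsf T} R^{-1} \big[ z(t_i) - \hat{y}_{\xi}(t_i) \big] \;+\; \tfrac{N}{2} \ln \det R ,

    where z are the measured responses, ŷ_ξ the model responses for the parameter vector ξ (the stability and control derivatives), and R the measurement-noise covariance, which is either assumed known or estimated along with ξ.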

  11. Growing local likelihood network: Emergence of communities

    NASA Astrophysics Data System (ADS)

    Chen, S.; Small, M.

    2015-10-01

    In many real situations, networks grow only via local interactions. New nodes are added to the growing network with information only pertaining to a small subset of existing nodes. Multilevel marketing, social networks, and disease models can all be depicted as growing networks based on local (network path-length) distance information. In these examples, all nodes whose distance from a chosen center is less than d form a subgraph. Hence, we grow networks with information only from these subgraphs. Moreover, we use a likelihood-based method, where at each step we modify the networks by changing their likelihood to be closer to the expected degree distribution. Combining the local information and the likelihood method, we grow networks that exhibit novel features. We discover that the likelihood method, over certain parameter ranges, can generate networks with highly modulated communities, even when global information is not available. Communities and clusters are abundant in real-life networks, and the method proposed here provides a natural mechanism for the emergence of communities in scale-free networks. In addition, the algorithmic implementation of network growth via local information is substantially faster than global methods and allows for the exploration of much larger networks.

  12. Mesoscopic Simulation Methods for Polymer Dynamics

    NASA Astrophysics Data System (ADS)

    Larson, Ronald

    2015-03-01

    We assess the accuracy and efficiency of mesoscopic simulation methods, namely Brownian Dynamics (BD), Stochastic Rotation Dynamics (SRD) and Dissipative Particle Dynamics (DPD), for polymers in solution at equilibrium and in flows in microfluidic geometries. Both SRD and DPD use solvent ``particles'' to carry momentum, and so account automatically for hydrodynamic interactions both within isolated polymer coils, and with other polymer molecules and with nearby solid boundaries. We assess quantitatively the effects of artificial particle inertia and fluid compressibility and show that they can be made small with appropriate choice of simulation parameters. We then use these methods to study flow-induced migration of polymer chains produced by: 1) hydrodynamic interactions, 2) streamline curvature or stress-gradients, and 3) convection of wall depletion zones. We show that huge concentration gradients can be produced by these mechanisms in microfluidic geometries that can be exploited for separation of polymers by size in periodic contraction-expansion geometries. We also assess the range of conditions for which BD, SRD or DPD is preferable for mesoscopic simulations. Finally, we show how such methods can be used to simulate quantitatively the swimming of micro-organisms such as E. coli. In collaboration with Lei Jiang and Tongyang Zhao, University of Michigan, Ann Arbor, MI.

  13. A Likelihood-Based SLIC Superpixel Algorithm for SAR Images Using Generalized Gamma Distribution

    PubMed Central

    Zou, Huanxin; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng

    2016-01-01

    The simple linear iterative clustering (SLIC) method is a recently proposed popular superpixel algorithm. However, this method may generate bad superpixels for synthetic aperture radar (SAR) images due to effects of speckle and the large dynamic range of pixel intensity. In this paper, an improved SLIC algorithm for SAR images is proposed. This algorithm exploits the likelihood information of SAR image pixel clusters. Specifically, a local clustering scheme combining intensity similarity with spatial proximity is proposed. Additionally, for post-processing, a local edge-evolving scheme that combines spatial context and likelihood information is introduced as an alternative to the connected components algorithm. To estimate the likelihood information of SAR image clusters, we incorporated a generalized gamma distribution (GΓD). Finally, the superiority of the proposed algorithm was validated using both simulated and real-world SAR images. PMID:27438840

  14. Likelihood analysis of earthquake focal mechanism distributions

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.; Jackson, David D.

    2015-06-01

    In our paper published earlier we discussed forecasts of earthquake focal mechanism and ways to test the forecast efficiency. Several verification methods were proposed, but they were based on ad hoc, empirical assumptions, thus their performance is questionable. We apply a conventional likelihood method to measure the skill of earthquake focal mechanism orientation forecasts. The advantage of such an approach is that earthquake rate prediction can be adequately combined with focal mechanism forecast, if both are based on the likelihood scores, resulting in a general forecast optimization. We measure the difference between two double-couple sources as the minimum rotation angle that transforms one into the other. We measure the uncertainty of a focal mechanism forecast (the variability), and the difference between observed and forecasted orientations (the prediction error), in terms of these minimum rotation angles. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random (or equally probable). For 3-D rotation the random rotation angle distribution is not uniform. To better understand the resulting complexities, we calculate the information (likelihood) score for two theoretical rotational distributions (Cauchy and von Mises-Fisher), which are used to approximate earthquake source orientation pattern. We then calculate the likelihood score for earthquake source forecasts and for their validation by future seismicity data. Several issues need to be explored when analyzing observational results: their dependence on forecast and data resolution, internal dependence of scores on forecasted angle and random variability of likelihood scores. Here, we propose a simple tentative solution but extensive theoretical and statistical analysis is needed.
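
    For reference, the non-uniformity mentioned above is the classical result that a rotation drawn uniformly from SO(3) has rotation-angle density

      p(\phi) \;=\; \frac{1 - \cos\phi}{\pi}, \qquad 0 \le \phi \le \pi,

    so small angles are rare under the random null hypothesis; double-couple symmetry further modifies this reference distribution (the maximum possible rotation angle between two double couples is 120 degrees), which is why the likelihood score must be computed against the appropriate null rather than a uniform angle distribution.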

  15. Comparing Methods for Dynamic Airspace Configuration

    NASA Technical Reports Server (NTRS)

    Zelinski, Shannon; Lai, Chok Fung

    2011-01-01

    This paper compares airspace design solutions for dynamically reconfiguring airspace in response to nominal daily traffic volume fluctuation. Airspace designs from seven algorithmic methods and a representation of current day operations in Kansas City Center were simulated with two times today's demand traffic. A three-configuration scenario was used to represent current day operations. Algorithms used projected unimpeded flight tracks to design initial 24-hour plans to switch between three configurations at predetermined reconfiguration times. At each reconfiguration time, algorithms used updated projected flight tracks to update the subsequent planned configurations. Compared to the baseline, most airspace design methods reduced delay and increased reconfiguration complexity, with similar traffic pattern complexity results. Design updates enabled several methods to cut delay by as much as half relative to their original designs. Freeform design methods reduced delay and increased reconfiguration complexity the most.

  16. B-spline Method in Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide a straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on Cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulation, and treatment of pressure oscillations in Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.

  17. Implicit integration methods for dislocation dynamics

    NASA Astrophysics Data System (ADS)

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; Hommes, G.; Aubry, S.; Arsenlis, A.

    2015-03-01

    In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high-order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
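
    As a generic illustration of the building block being discussed here (an implicit step whose nonlinear system is resolved by Newton or fixed-point iteration), consider one trapezoidal step for y' = f(t, y); this sketch is schematic and is not taken from the dislocation dynamics code:

      import numpy as np

      def trapezoidal_step(f, jac, t, y, h, tol=1e-10, max_iter=20):
          """One implicit trapezoidal step: solve
              g(Y) = Y - y - (h/2) * (f(t, y) + f(t + h, Y)) = 0
          for Y with Newton's method; jac(t, y) returns df/dy."""
          fy = f(t, y)
          Y = y + h * fy                          # explicit Euler predictor
          I = np.eye(len(y))
          for _ in range(max_iter):
              g = Y - y - 0.5 * h * (fy + f(t + h, Y))
              if np.linalg.norm(g) < tol:
                  break
              J = I - 0.5 * h * jac(t + h, Y)     # Jacobian of g
              Y = Y - np.linalg.solve(J, g)
          return Y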

  18. Implicit integration methods for dislocation dynamics

    DOE PAGES

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...

    2015-01-20

    In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.

  19. Implicit integration methods for dislocation dynamics

    SciTech Connect

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; Hommes, G.; Aubry, S.; Arsenlis, A.

    2015-01-20

    In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.

  20. Integration based profile likelihood calculation for PDE constrained parameter estimation problems

    NASA Astrophysics Data System (ADS)

    Boiger, R.; Hasenauer, J.; Hroß, S.; Kaltenbacher, B.

    2016-12-01

    Partial differential equation (PDE) models are widely used in engineering and natural sciences to describe spatio-temporal processes. The parameters of the considered processes are often unknown and have to be estimated from experimental data. Due to partial observations and measurement noise, these parameter estimates are subject to uncertainty. This uncertainty can be assessed using profile likelihoods, a reliable but computationally intensive approach. In this paper, we present the integration based approach for the profile likelihood calculation developed by Chen and Jennrich (2002, J. Comput. Graph. Stat. 11, 714-32) and adapt it to inverse problems with PDE constraints. While existing methods for profile likelihood calculation in parameter estimation problems with PDE constraints rely on repeated optimization, the proposed approach exploits a dynamical system evolving along the likelihood profile. We derive the dynamical system for the unreduced estimation problem, prove convergence and study the properties of the integration based approach for the PDE case. To evaluate the proposed method, we compare it with state-of-the-art algorithms for a simple reaction-diffusion model for a cellular patterning process. We observe good accuracy of the method as well as a significant speed-up compared to established methods. Integration based profile calculation facilitates rigorous uncertainty analysis for computationally demanding parameter estimation problems with PDE constraints.
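
    The core idea can be stated compactly. Write the objective as ℓ(θ, η), with θ the parameter being profiled and η the remaining parameters; along the profile path η(θ) the stationarity condition ∇_η ℓ(θ, η(θ)) = 0 holds, and differentiating it with respect to θ yields the ordinary differential equation

      \frac{\mathrm{d}\eta}{\mathrm{d}\theta} \;=\; -\big[ \nabla^2_{\eta\eta}\, \ell(\theta, \eta(\theta)) \big]^{-1} \nabla^2_{\eta\theta}\, \ell(\theta, \eta(\theta)),

    which can be integrated to trace the profile instead of re-optimizing over η at every value of θ (this is the generic form of the idea; the paper develops it for the unreduced, PDE-constrained problem).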

  1. Factors Influencing Likelihood of Voice Therapy Attendance.

    PubMed

    Misono, Stephanie; Marmor, Schelomo; Roy, Nelson; Mau, Ted; Cohen, Seth M

    2017-03-01

    Objective To identify factors associated with the likelihood of attending voice therapy among patients referred for it in the CHEER (Creating Healthcare Excellence through Education and Research) practice-based research network infrastructure. Study Design Prospectively enrolled cross-sectional study. Setting CHEER network of community and academic sites. Methods Data were collected on patient-reported demographics, voice-related diagnoses, voice-related handicap (Voice Handicap Index-10), likelihood of attending voice therapy (VT), and opinions on factors influencing likelihood of attending VT. The relationships between patient characteristics/opinions and likelihood of attending VT were investigated. Results A total of 170 patients with various voice-related diagnoses reported receiving a recommendation for VT. Of those, 85% indicated that they were likely to attend it, regardless of voice-related handicap severity. The most common factors influencing likelihood of VT attendance were insurance/copay, relief that it was not cancer, and travel. Those who were not likely to attend VT identified, as important factors, unclear potential improvement, not understanding the purpose of therapy, and concern that it would be too hard. In multivariate analysis, factors associated with greater likelihood of attending VT included shorter travel distance, age (40-59 years), and being seen in an academic practice. Conclusions Most patients reported plans to attend VT as recommended. Patients who intended to attend VT reported different considerations in their decision making from those who did not plan to attend. These findings may inform patient counseling and efforts to increase access to voice care.

  2. Optimization of dynamic systems using collocation methods

    NASA Astrophysics Data System (ADS)

    Holden, Michael Eric

    The time-based simulation is an important tool for the engineer. Often a time-domain simulation is the most expedient to construct, the most capable of handling complex modeling issues, or the most understandable with an engineer's physical intuition. Aeroelastic systems, for example, are often most easily solved with a nonlinear time-based approach to allow the use of high fidelity models. Simulations of automatic flight control systems can also be easier to model in the time domain, especially when nonlinearities are present. Collocation is an optimization method for systems that incorporate a time-domain simulation. Instead of integrating the equations of motion for each design iteration, the optimizer iteratively solves the simulation as it finds the optimal design. This forms a smooth, well-posed, sparse optimization problem, transforming the numerical integration's sequential calculation into a set of constraints that can be evaluated in any order, or even in parallel. The collocation method used in this thesis has been improved from existing techniques in several ways, in particular with a very simple and computationally inexpensive method of applying dynamic constraints, such as damping, that are more traditionally calculated with linear models in the frequency domain. This thesis applies the collocation method to a range of aircraft design problems, from minimizing the weight of a wing with a flutter constraint, to gain-scheduling the stability augmentation system of a small-scale flight control testbed, to aeroservoelastic design of a large aircraft concept. Collocation methods have not been applied to aeroelastic simulations in the past, although the combination of nonlinear aerodynamic analyses with structural dynamics and stability constraints is well-suited to collocation. The results prove the collocation method's worth as a tool for aircraft design, particularly when applied to the multidisciplinary numerical models used today.
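
    To make the "integration as constraints" idea concrete, a trapezoidal direct-collocation transcription of dynamics x' = f(x, u) replaces sequential time stepping with defect constraints that the optimizer drives to zero (a generic form, not necessarily the thesis' particular scheme or its dynamic constraints),

      0 \;=\; x_{k+1} - x_k - \frac{h_k}{2}\big[ f(x_k, u_k) + f(x_{k+1}, u_{k+1}) \big], \qquad k = 0, \dots, N-1,

    with the states x_k and controls u_k at all nodes entering the nonlinear program as free variables, so the constraints can be evaluated in any order or in parallel.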

  3. New methods for quantum mechanical reaction dynamics

    SciTech Connect

    Thompson, Ward Hugh

    1996-12-01

    Quantum mechanical methods are developed to describe the dynamics of bimolecular chemical reactions. We focus on developing approaches for directly calculating the desired quantity of interest. Methods for the calculation of single matrix elements of the scattering matrix (S-matrix) and initial state-selected reaction probabilities are presented. This is accomplished by the use of absorbing boundary conditions (ABC) to obtain a localized (L2) representation of the outgoing wave scattering Green's function. This approach enables the efficient calculation of only a single column of the S-matrix with a proportionate savings in effort over the calculation of the entire S-matrix. Applying this method to the calculation of the initial (or final) state-selected reaction probability, a more averaged quantity, requires even less effort than the state-to-state S-matrix elements. It is shown how the same representation of the Green's function can be effectively applied to the calculation of negative ion photodetachment intensities. Photodetachment spectroscopy of the anion ABC- can be a very useful method for obtaining detailed information about the neutral ABC potential energy surface, particularly if the ABC- geometry is similar to the transition state of the neutral ABC. Total and arrangement-selected photodetachment spectra are calculated for the H3O- system, providing information about the potential energy surface for the OH + H2 reaction when compared with experimental results. Finally, we present methods for the direct calculation of the thermal rate constant from the flux-position and flux-flux correlation functions. The spirit of transition state theory is invoked by concentrating on the short time dynamics in the area around the transition state that determine reactivity. These methods are made efficient by evaluating the required quantum mechanical trace in the basis of eigenstates of the
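
    The central object here is the outgoing-wave Green's function with an absorbing potential added to enforce the boundary conditions. In the notation commonly used for ABC formulations (quoted as background, not as this dissertation's specific derivation),

      \hat{G}(E) \;=\; \big(E + i\hat{\varepsilon} - \hat{H}\big)^{-1}, \qquad
      N(E) \;=\; 4\,\mathrm{tr}\big[ \hat{\varepsilon}_r\, \hat{G}^{\dagger}(E)\, \hat{\varepsilon}_p\, \hat{G}(E) \big],

    where ε̂ = ε̂_r + ε̂_p absorbs flux in the reactant and product arrangements and N(E) is the cumulative reaction probability; single S-matrix columns and state-selected probabilities follow from matrix elements of Ĝ(E) acting on the chosen asymptotic state.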

  4. Schwarz method for earthquake source dynamics

    SciTech Connect

    Badea, Lori Ionescu, Ioan R. Wolf, Sylvie

    2008-04-01

    Dynamic faulting under slip-dependent friction in a linear elastic domain (in-plane and 3D configurations) is considered. The use of an implicit time-stepping scheme (Newmark method) allows much larger values of the time step than the critical CFL time step, and higher accuracy to handle the non-smoothness of the interface constitutive law (slip weakening friction). The finite element form of the quasi-variational inequality is solved by a Schwarz domain decomposition method, by separating the inner nodes of the domain from the nodes on the fault. In this way, the quasi-variational inequality splits into two subproblems. The first one is a large linear system of equations, and its unknowns are related to the mesh nodes of the first subdomain (i.e. lying inside the domain). The unknowns of the second subproblem are the degrees of freedom of the mesh nodes of the second subdomain (i.e. lying on the domain boundary where the conditions of contact and friction are imposed). This nonlinear subproblem is solved by the same Schwarz algorithm, leading to some local nonlinear subproblems of a very small size. Numerical experiments are performed to illustrate convergence in time and space, instability capturing, energy dissipation and the influence of normal stress variations. We have used the proposed numerical method to compute source dynamics phenomena on complex and realistic 2D fault models (branched fault systems)

  5. Dynamic data filtering system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M

    2014-04-29

    A computer-implemented dynamic data filtering system and method for selectively choosing operating data of a monitored asset that modifies or expands a learned scope of an empirical model of normal operation of the monitored asset while simultaneously rejecting operating data of the monitored asset that is indicative of excessive degradation or impending failure of the monitored asset, and utilizing the selectively chosen data for adaptively recalibrating the empirical model to more accurately monitor asset aging changes or operating condition changes of the monitored asset.

  6. Direct anharmonic correction method by molecular dynamics

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-Li; Li, Rui; Zhang, Xiu-Lu; Qu, Nuo; Cai, Ling-Cang

    2017-04-01

    The quick calculation of accurate anharmonic effects of lattice vibrations is crucial to the calculation of thermodynamic properties, the construction of multi-phase diagrams and equations of state of materials, and the theoretical design of new materials. In this paper, we propose a direct free energy interpolation (DFEI) method based on the temperature-dependent phonon density of states (TD-PDOS) reduced from molecular dynamics simulations. Using the DFEI method, after anharmonic free energy corrections we reproduced the thermal expansion coefficients, the specific heat, the thermal pressure, the isothermal bulk modulus, and the Hugoniot P-V-T relationships of Cu easily and accurately. Extensive tests on other materials, including a metal, an alloy, a semiconductor and an insulator, also show that the DFEI method can easily uncover the remaining anharmonicity that the quasi-harmonic approximation (QHA) omits. This demonstrates that the DFEI method is indeed a very efficient way to apply anharmonic corrections beyond the QHA. More importantly, it is much more straightforward and easier to use than previous anharmonic methods.

  7. Concurrent DSMC Method Using Dynamic Domain Decomposition

    NASA Astrophysics Data System (ADS)

    Wu, J.-S.; Tseng, K.-C.

    2003-05-01

    In the current study, a parallel two-dimensional direct simulation Monte Carlo method is reported, which incorporates a multi-level graph-partitioning technique to dynamically decompose the computational domain. The current DSMC method is implemented on an unstructured mesh using a particle ray-tracing technique, which takes advantage of the cell connectivity information. The standard Message Passing Interface (MPI) is used to communicate data between processors. In addition, different strategies applying the Stop at Rise (SAR) [7] scheme are utilized to determine when to adapt the workload distribution among processors. A corresponding analysis of parallel performance is reported using the results of a high-speed driven cavity flow on IBM-SP2 parallel machines (memory-distributed, CPU 160 MHz, RAM 256 MB each) with up to 64 processors. Small, medium and large problems, based on the number of particles and cells, are simulated. Results, applying the SAR scheme every two time steps, show that parallel efficiency is 57%, 90% and 107% for the small, medium and large problems, respectively, at 64 processors. In general, the benefits of applying the SAR scheme at larger periods decrease gradually with increasing problem size. Detailed time analysis shows that the degree of imbalance levels off very rapidly at a relatively low value (30%˜40%) with increasing number of processors when dynamic load balancing is applied, while without dynamic load balancing it increases with the number of processors to values 5˜6 times larger. Finally, the completed code is applied to compute a near-continuum gas flow to demonstrate its superior computational capability.

  8. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  9. Methods and systems for combustion dynamics reduction

    DOEpatents

    Kraemer, Gilbert Otto; Varatharajan, Balachandar; Srinivasan, Shiva; Lynch, John Joseph; Yilmaz, Ertan; Kim, Kwanwoo; Lacy, Benjamin; Crothers, Sarah; Singh, Kapil Kumar

    2009-08-25

    Methods and systems for combustion dynamics reduction are provided. A combustion chamber may include a first premixer and a second premixer. Each premixer may include at least one fuel injector, at least one air inlet duct, and at least one vane pack for at least partially mixing the air from the air inlet duct or ducts and fuel from the fuel injector or injectors. Each vane pack may include a plurality of fuel orifices through which at least a portion of the fuel and at least a portion of the air may pass. The vane pack or packs of the first premixer may be positioned at a first axial position and the vane pack or packs of the second premixer may be positioned at a second axial position axially staggered with respect to the first axial position.

  10. NMR Methods to Study Dynamic Allostery

    PubMed Central

    Grutsch, Sarina; Brüschweiler, Sven; Tollinger, Martin

    2016-01-01

    Nuclear magnetic resonance (NMR) spectroscopy provides a unique toolbox of experimental probes for studying dynamic processes on a wide range of timescales, ranging from picoseconds to milliseconds and beyond. Along with NMR hardware developments, recent methodological advancements have enabled the characterization of allosteric proteins at unprecedented detail, revealing intriguing aspects of allosteric mechanisms and increasing the proportion of the conformational ensemble that can be observed by experiment. Here, we present an overview of NMR spectroscopic methods for characterizing equilibrium fluctuations in free and bound states of allosteric proteins that have been most influential in the field. By combining NMR experimental approaches with molecular simulations, atomistic-level descriptions of the mechanisms by which allosteric phenomena take place are now within reach. PMID:26964042

  11. On the likelihood of forests

    NASA Astrophysics Data System (ADS)

    Shang, Yilun

    2016-08-01

    How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.

  12. Applications of Langevin and Molecular Dynamics methods

    NASA Astrophysics Data System (ADS)

    Lomdahl, P. S.

    Computer simulation of complex nonlinear and disordered phenomena from materials science is rapidly becoming an active new area serving as a guide for experiments and for testing of theoretical concepts. This is especially true when novel massively parallel computer systems and techniques are used on these problems. In particular, the Langevin dynamics simulation technique has proven useful in situations where the time evolution of a system in contact with a heat bath is to be studied. The traditional way to study systems in contact with a heat bath has been via the Monte Carlo method. While this method has indeed been used successfully in many applications, it has difficulty addressing true dynamical questions. Large systems of coupled stochastic ODE's (or Langevin equations) are commonly the end result of a theoretical description of higher dimensional nonlinear systems in contact with a heat bath. The coupling is often local in nature, because it reflects local interactions formulated on a lattice; the lattice, for example, represents the underlying discreteness of a substrate of atoms or discrete k-values in Fourier space. The fundamental unit of parallelism thus has a direct analog in the physical system the authors are interested in. In these lecture notes the authors illustrate the use of Langevin stochastic simulation techniques on a number of nonlinear problems from materials science and condensed matter physics that have attracted attention in recent years. First, the authors review the idea behind the fluctuation-dissipation theorem which forms the basis for the numerical Langevin stochastic simulation scheme. The authors then show applications of the technique to various problems from condensed matter and materials science.
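
    A minimal example of the simulation scheme being described is an overdamped Langevin (Euler-Maruyama) integrator in which the noise amplitude is tied to the friction and temperature through the fluctuation-dissipation relation; this generic single-particle sketch stands in for the large coupled lattice systems discussed in the notes:

      import numpy as np

      def overdamped_langevin(grad_U, x0, gamma, kT, dt, n_steps, rng=None):
          """Integrate dx = -(1/gamma)*grad_U(x) dt + sqrt(2*kT/gamma) dW with the
          Euler-Maruyama scheme; the sqrt(2*kT*dt/gamma) noise amplitude is fixed
          by the fluctuation-dissipation theorem."""
          rng = np.random.default_rng() if rng is None else rng
          x = np.array(x0, dtype=float)
          traj = np.empty((n_steps + 1,) + x.shape)
          traj[0] = x
          noise_amp = np.sqrt(2.0 * kT * dt / gamma)
          for n in range(n_steps):
              x = x - (dt / gamma) * grad_U(x) + noise_amp * rng.standard_normal(x.shape)
              traj[n + 1] = x
          return traj

      # Example: double-well potential U(x) = (x^2 - 1)^2, grad_U(x) = 4*x*(x^2 - 1)
      # traj = overdamped_langevin(lambda x: 4*x*(x**2 - 1), [0.0],
      #                            gamma=1.0, kT=0.5, dt=1e-3, n_steps=100000)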

  13. Likelihood reinstates Archaeopteryx as a primitive bird.

    PubMed

    Lee, Michael S Y; Worthy, Trevor H

    2012-04-23

    The widespread view that Archaeopteryx was a primitive (basal) bird has been recently challenged by a comprehensive phylogenetic analysis that placed Archaeopteryx with deinonychosaurian theropods. The new phylogeny suggested that typical bird flight (powered by the front limbs only) either evolved at least twice, or was lost/modified in some deinonychosaurs. However, this parsimony-based result was acknowledged to be weakly supported. Maximum-likelihood and related Bayesian methods applied to the same dataset yield a different and more orthodox result: Archaeopteryx is restored as a basal bird with bootstrap frequency of 73 per cent and posterior probability of 1. These results are consistent with a single origin of typical (forelimb-powered) bird flight. The Archaeopteryx-deinonychosaur clade retrieved by parsimony is supported by more characters (which are on average more homoplasious), whereas the Archaeopteryx-bird clade retrieved by likelihood-based methods is supported by fewer characters (but on average less homoplasious). Both positions for Archaeopteryx remain plausible, highlighting the hazy boundary between birds and advanced theropods. These results also suggest that likelihood-based methods (in addition to parsimony) can be useful in morphological phylogenetics.

  14. Semiclassical methods in chemical reaction dynamics

    SciTech Connect

    Keshavamurthy, Srihari

    1994-12-01

    Semiclassical approximations, simple as well as rigorous, are formulated in order to be able to describe gas phase chemical reactions in large systems. We formulate a simple but accurate semiclassical model for incorporating multidimensional tunneling in classical trajectory simulations. This model is based on the existence of locally conserved actions around the saddle point region on a multidimensional potential energy surface. Using classical perturbation theory and monitoring the imaginary action as a function of time along a classical trajectory, we calculate state-specific unimolecular decay rates for a model two-dimensional potential with coupling. Results are in good agreement with exact quantum results for the potential over a wide range of coupling constants. We propose a new semiclassical hybrid method to calculate state-to-state S-matrix elements for bimolecular reactive scattering. The accuracy of the Van Vleck-Gutzwiller propagator and the short time dynamics of the system make this method self-consistent and accurate. We also go beyond the stationary phase approximation by doing the resulting integrals exactly (numerically). As a result, classically forbidden probabilities are calculated with purely real time classical trajectories within this approach. Application to the one-dimensional Eckart barrier demonstrates the accuracy of this approach. Successful application of the semiclassical hybrid approach to collinear reactive scattering is prevented by the phenomenon of chaotic scattering. The modified Filinov approach to evaluating the integrals is discussed, but application to collinear systems requires a more careful analysis. In three and higher dimensional scattering systems, chaotic scattering is suppressed and hence the accuracy and usefulness of the semiclassical method should be tested for such systems.
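
    To make the imaginary-action idea concrete, here is a minimal one-dimensional WKB tunnelling estimate for an Eckart barrier. This is only a textbook-level sketch under assumed parameter values (hbar = m = 1, assumed V0 and width), not the trajectory-based multidimensional scheme or the exact quantum comparison described in the abstract.

```python
import numpy as np
from scipy.integrate import quad

# WKB transmission P(E) ~ exp(-2 Im S / hbar), Im S = int sqrt(2m(V - E)) dx,
# for an Eckart barrier V(x) = V0 / cosh^2(x/a).  All values are assumptions.
hbar, m = 1.0, 1.0
V0, a   = 10.0, 1.0

def V(x):
    return V0 / np.cosh(x / a) ** 2

def wkb_transmission(E):
    if E >= V0:
        return 1.0
    # Classical turning points where V(x) = E (barrier is symmetric about 0).
    xt = a * np.arccosh(np.sqrt(V0 / E))
    theta, _ = quad(lambda x: np.sqrt(2.0 * m * max(V(x) - E, 0.0)), -xt, xt)
    return np.exp(-2.0 * theta / hbar)

for E in (2.0, 5.0, 8.0):
    print(f"E = {E:4.1f}  WKB transmission ~ {wkb_transmission(E):.3e}")
```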

  15. Determination of stability and control derivatives from the NASA F/A-18 HARV from flight data using the maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Napolitano, Marcello R.

    1995-01-01

    This report is a compilation of PID (parameter identification) results for both longitudinal and lateral-directional analysis completed during Fall 1994. It had earlier been established that the maneuvers available for PID containing independent control surface inputs from OBES were not well suited for extracting the cross-coupling static (i.e., C(sub N beta)) or dynamic (i.e., C(sub Npf)) derivatives. This was because these maneuvers were designed with the goal of minimizing any lateral-directional motion during longitudinal maneuvers and vice versa, which allows for greater simplification of the aerodynamic model as far as coupling between the longitudinal and lateral directions is concerned. As a result, efforts were made to reanalyze these data and extract static and dynamic derivatives for the F/A-18 HARV (High Angle of Attack Research Vehicle) without the inclusion of the cross-coupling terms, so that more accurate estimates of classical model terms could be acquired. Four longitudinal flights containing static PID maneuvers were examined. The classical state equations already available in pEst for alphadot, qdot and thetadot were used. Three lateral-directional flights of PID static maneuvers were also examined. The classical state equations already available in pEst for betadot, pdot, rdot and phidot were used. Enclosed with this document are the full set of longitudinal and lateral-directional parameter estimate plots showing coefficient estimates along with Cramer-Rao bounds. In addition, a representative time history match for each type of maneuver tested at each angle of attack is also enclosed.

  16. Nonparametric Bayes Factors Based On Empirical Likelihood Ratios

    PubMed Central

    Vexler, Albert; Deng, Wei; Wilding, Gregory E.

    2012-01-01

    Bayes methodology provides posterior distribution functions based on parametric likelihoods adjusted for prior distributions. A distribution-free alternative to the parametric likelihood is the use of empirical likelihood (EL) techniques, well known in the context of nonparametric testing of statistical hypotheses. Empirical likelihoods have been shown to exhibit many of the properties of conventional parametric likelihoods. In this article, we propose and examine Bayes factor (BF) methods that are derived via the EL ratio approach. Following Kass & Wasserman [10], we consider Bayes-factor-type decision rules in the context of standard statistical testing techniques. We show that the asymptotic properties of the proposed procedure are similar to the classical BF’s asymptotic operating characteristics. Although we focus on hypothesis testing, the proposed approach also yields confidence interval estimators of unknown parameters. Monte Carlo simulations were conducted to evaluate the theoretical results as well as to demonstrate the power of the proposed test. PMID:23180904

  17. Factors Associated with Young Adults’ Pregnancy Likelihood

    PubMed Central

    Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan

    2014-01-01

    OBJECTIVES While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849

  18. Dynamic stiffness method for space frames under distributed harmonic loads

    NASA Astrophysics Data System (ADS)

    Dumir, P. C.; Saha, D. C.; Sengupta, S.

    1992-10-01

    An exact dynamic equivalent load vector for space frames subjected to harmonic distributed loads has been derived using the dynamic stiffness approach. A Taylor series expansion of the dynamic equivalent load vector reveals that the static consistent equivalent load vector used in the 12-degree-of-freedom two-noded finite element for a space frame is just the first term of the series. The dynamic stiffness approach using the exact dynamic equivalent load vector requires discretization of a member subjected to distributed loads into only one element. The results of the dynamic stiffness method are compared with those of the finite element method for illustrative problems.
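
    The flavour of the dynamic stiffness idea can be seen on the simplest member, an axial rod vibrating harmonically: its exact 2x2 dynamic stiffness reduces to the familiar static stiffness EA/L*[[1,-1],[-1,1]] as the frequency goes to zero, analogous to the Taylor-series observation made above for the equivalent load vector of space frames. The sketch below is only this analogous single-member illustration, with assumed material and geometric values, not the space-frame formulation of the paper.

```python
import numpy as np

# Exact dynamic stiffness of a uniform axial rod at circular frequency w.
# Material/geometry values below are illustrative assumptions (steel rod, SI units).
E, A, rho, L = 210e9, 1e-3, 7800.0, 2.0

def dynamic_stiffness_rod(w):
    if w == 0.0:
        # Static limit: the usual bar stiffness matrix.
        return E * A / L * np.array([[1.0, -1.0], [-1.0, 1.0]])
    k = w * np.sqrt(rho / E)                   # axial wavenumber
    c = E * A * k / np.sin(k * L)
    return c * np.array([[np.cos(k * L), -1.0], [-1.0, np.cos(k * L)]])

for w in (0.0, 100.0, 2000.0):
    D = dynamic_stiffness_rod(w)
    print(f"w = {w:7.1f} rad/s, D[0,0] = {D[0,0]:.4e} N/m")
```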

  19. A Maximum-Likelihood Approach to Force-Field Calibration.

    PubMed

    Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam

    2015-09-28

    A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2

  20. Weibull distribution based on maximum likelihood with interval inspection data

    NASA Technical Reports Server (NTRS)

    Rheinfurth, M. H.

    1985-01-01

    The two Weibull parameters are determined by the method of maximum likelihood. The test data used were failures observed at inspection intervals. The application was the reliability analysis of the SSME oxidizer turbine blades.
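
    A minimal sketch of this kind of estimation is shown below: each failure is only known to lie between two consecutive inspections (a, b], units surviving the last inspection are right-censored, and the Weibull shape and scale are found by maximizing the interval-censored likelihood. The inspection schedule and data are synthetic assumptions, not the SSME turbine-blade data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
true_shape, true_scale = 2.5, 100.0
t = true_scale * rng.weibull(true_shape, size=200)   # latent failure times
inspections = np.arange(0.0, 201.0, 25.0)            # assumed inspection schedule

# Build (a, b] intervals; b = inf marks units still running at the last inspection.
idx = np.searchsorted(inspections, t, side="right")
a = inspections[idx - 1]
b = np.where(idx < len(inspections),
             inspections[np.minimum(idx, len(inspections) - 1)], np.inf)

def sf(t, shape, scale):
    """Weibull survival function S(t) = exp(-(t/scale)^shape)."""
    return np.exp(-(t / scale) ** shape)

def negloglik(params):
    shape, scale = np.exp(params)                    # enforce positivity
    lik = np.where(np.isinf(b), sf(a, shape, scale),
                   sf(a, shape, scale) - sf(b, shape, scale))
    return -np.sum(np.log(np.clip(lik, 1e-300, None)))

res = minimize(negloglik, x0=np.log([1.0, 50.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
print(f"estimated shape = {shape_hat:.2f}, scale = {scale_hat:.1f}")
```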

  1. Properties of maximum likelihood male fertility estimation in plant populations.

    PubMed Central

    Morgan, M T

    1998-01-01

    Computer simulations are used to evaluate maximum likelihood methods for inferring male fertility in plant populations. The maximum likelihood method can provide substantial power to characterize male fertilities at the population level. Results emphasize, however, the importance of adequate experimental design and evaluation of fertility estimates, as well as limitations to inference (e.g., about the variance in male fertility or the correlation between fertility and phenotypic trait value) that can be reasonably drawn. PMID:9611217

  2. A Dynamic Management Method for Fast Manufacturing Resource Reconfiguration

    NASA Astrophysics Data System (ADS)

    Yuan, Zhiye

    To reconfigure manufacturing resources quickly and optimally, a dynamic management method for fast manufacturing resource reconfiguration based on holons was proposed. In this method, a dynamic management structure for fast manufacturing resource reconfiguration was established based on holons. Moreover, the cooperation relationships among holons for fast manufacturing resource reconfiguration and a holonic manufacturing information cooperation mechanism were constructed. Finally, a simulation of the dynamic management method for fast manufacturing resource reconfiguration was demonstrated and validated with the Flexsim software. It is shown that the proposed method can dynamically and optimally reconfigure manufacturing resources and can effectively improve the efficiency of manufacturing processes.

  3. Approximate likelihood for large irregularly spaced spatial data

    PubMed Central

    Fuentes, Montserrat

    2008-01-01

    SUMMARY Likelihood approaches for large irregularly spaced spatial datasets are often very difficult, if not infeasible, to implement due to computational limitations. Even when we can assume normality, exact calculation of the likelihood for a Gaussian spatial process observed at n locations requires O(n³) operations. We present a version of Whittle’s approximation to the Gaussian log likelihood for spatial regular lattices with missing values and for irregularly spaced datasets. This method requires O(n log₂ n) operations and does not involve calculating determinants. We present simulations and theoretical results to show the benefits and the performance of the spatial likelihood approximation method presented here for spatial irregularly spaced datasets and lattices with missing values. We apply these methods to estimate the spatial structure of sea surface temperatures (SST) using satellite data with missing values. PMID:19079638
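
    A hedged one-dimensional sketch of Whittle's approximation is given below, applied to an AR(1) time series rather than the spatial lattices treated in the paper: the log likelihood is approximated by a sum over Fourier frequencies of log f(w) + I(w)/f(w), where I is the periodogram (one FFT, hence the n log n cost) and f is the model spectral density. The data, parameter values, and normalization conventions are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, phi_true, sigma_true = 4096, 0.6, 1.0

# Simulate an AR(1) process x_t = phi * x_{t-1} + e_t (synthetic data).
e = sigma_true * rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + e[t]

# Periodogram I(w_k) = |sum_t x_t e^{-i w_k t}|^2 / (2 pi n) at Fourier
# frequencies w_k = 2 pi k / n, k = 1, ..., n-1 (zero frequency excluded).
w = 2.0 * np.pi * np.arange(1, n) / n
I = np.abs(np.fft.fft(x))[1:] ** 2 / (2.0 * np.pi * n)

def neg_whittle(params):
    phi, log_sig2 = params
    sig2 = np.exp(log_sig2)
    # AR(1) spectral density f(w) = sig2 / (2 pi |1 - phi e^{-iw}|^2).
    f = sig2 / (2.0 * np.pi * (1.0 - 2.0 * phi * np.cos(w) + phi ** 2))
    return np.sum(np.log(f) + I / f)          # constants dropped

res = minimize(neg_whittle, x0=[0.0, 0.0], method="Nelder-Mead")
print("phi_hat = %.3f, sigma2_hat = %.3f" % (res.x[0], np.exp(res.x[1])))
```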

  4. Dynamic Programming Method for Impulsive Control Problems

    ERIC Educational Resources Information Center

    Balkew, Teshome Mogessie

    2015-01-01

    In many control systems changes in the dynamics occur unexpectedly or are applied by a controller as needed. The time at which a controller implements changes is not necessarily known a priori. For example, many manufacturing systems and flight operations have complicated control systems, and changes in the control systems may be automatically…

  5. System and Method for Dynamic Aeroelastic Control

    NASA Technical Reports Server (NTRS)

    Suh, Peter M. (Inventor)

    2015-01-01

    The present invention proposes a hardware and software architecture for dynamic modal structural monitoring that uses a robust modal filter to monitor a potentially very large-scale array of sensors in real time, tolerating asymmetric sensor noise and sensor failures, to achieve aircraft performance optimization such as minimizing aircraft flutter and drag and maximizing fuel efficiency.

  6. Section 9: Ground Water - Likelihood of Release

    EPA Pesticide Factsheets

    HRS training. The ground water pathway likelihood of release factor category reflects the likelihood that there has been, or will be, a release of hazardous substances in any of the aquifers underlying the site.

  7. Recovering Velocity Distributions Via Penalized Likelihood

    NASA Astrophysics Data System (ADS)

    Merritt, David

    1997-07-01

    Line-of-sight velocity distributions are crucial for unravelling the dynamics of hot stellar systems. We present a new formalism based on penalized likelihood for deriving such distributions from kinematical data, and evaluate the performance of two algorithms that extract N(V) from absorption-line spectra and from sets of individual velocities. Both algorithms are superior to existing ones in that the solutions are nearly unbiased even when the data are so poor that a great deal of smoothing is required. In addition, the discrete-velocity algorithm is able to remove a known distribution of measurement errors from the estimate of N(V). The formalism is used to recover the velocity distribution of stars in five fields near the center of the globular cluster omega Centauri.

  8. CosmoSlik: Cosmology sampler of likelihoods

    NASA Astrophysics Data System (ADS)

    Millea, Marius

    2017-01-01

    CosmoSlik quickly puts together, runs, and analyzes an MCMC chain for analysis of cosmological data. It is highly modular and comes with plugins for CAMB (ascl:1102.026), CLASS (ascl:1106.020), the Planck likelihood, the South Pole Telescope likelihood, other cosmological likelihoods, emcee (ascl:1303.002), and more. It offers ease-of-use, flexibility, and modularity.

  9. Improved maximum likelihood reconstruction of complex multi-generational pedigrees.

    PubMed

    Sheehan, Nuala A; Bartlett, Mark; Cussens, James

    2014-11-01

    The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as

  10. Constraint likelihood analysis for a network of gravitational wave detectors

    SciTech Connect

    Klimenko, S.; Rakhmanov, M.; Mitselmakher, G.; Mohanty, S.

    2005-12-15

    We propose a coherent method for detection and reconstruction of gravitational wave signals with a network of interferometric detectors. The method is derived by using the likelihood ratio functional for unknown signal waveforms. In the likelihood analysis, the global maximum of the likelihood ratio over the space of waveforms is used as the detection statistic. We identify a problem with this approach. In the case of an aligned pair of detectors, the detection statistic depends on the cross correlation between the detectors as expected, but this dependence disappears even for infinitesimally small misalignments. We solve the problem by applying constraints on the likelihood functional and obtain a new class of statistics. The resulting method can be applied to data from a network consisting of any number of detectors with arbitrary orientations. The method allows reconstruction of the source coordinates and the waveforms of the two polarization components of a gravitational wave. We study the performance of the method with numerical simulations and find the reconstruction of the source coordinates to be more accurate than in the standard likelihood method.

  11. LIKEDM: Likelihood calculator of dark matter detection

    NASA Astrophysics Data System (ADS)

    Huang, Xiaoyuan; Tsai, Yue-Lin Sming; Yuan, Qiang

    2017-04-01

    With the large progress in searches for dark matter (DM) particles with indirect and direct methods, we develop a numerical tool that enables fast calculations of the likelihoods of specified DM particle models given a number of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), γ-rays from the Fermi space telescope, and underground direct detection experiments. The purpose of this tool - LIKEDM, likelihood calculator for dark matter detection - is to bridge the gap between a particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi γ-ray data, as well as the DM velocity distribution and the nuclear form factor, have been dealt with in the code. We release the first version (v1.0) focusing on the constraints from indirect detection of DM with charged cosmic and gamma rays. Direct detection will be implemented in the next version. This manual describes the framework, usage, and related physics of the code.

  12. Parametric likelihood inference for interval censored competing risks data.

    PubMed

    Hudgens, Michael G; Li, Chenxi; Fine, Jason P

    2014-03-01

    Parametric estimation of the cumulative incidence function (CIF) is considered for competing risks data subject to interval censoring. Existing parametric models of the CIF for right censored competing risks data are adapted to the general case of interval censoring. Maximum likelihood estimators for the CIF are considered under the assumed models, extending earlier work on nonparametric estimation. A simple naive likelihood estimator is also considered that utilizes only part of the observed data. The naive estimator enables separate estimation of models for each cause, unlike full maximum likelihood in which all models are fit simultaneously. The naive likelihood is shown to be valid under mixed case interval censoring, but not under an independent inspection process model, in contrast with full maximum likelihood which is valid under both interval censoring models. In simulations, the naive estimator is shown to perform well and yield comparable efficiency to the full likelihood estimator in some settings. The methods are applied to data from a large, recent randomized clinical trial for the prevention of mother-to-child transmission of HIV.

  13. Dynamic decoupling nonlinear control method for aircraft gust alleviation

    NASA Astrophysics Data System (ADS)

    Lv, Yang; Wan, Xiaopeng; Li, Aijun

    2008-10-01

    A dynamic decoupling nonlinear control method for MIMO systems is presented in this paper. The dynamic inversion method is used to decouple the multivariable system, and the nonlinear control method is used to overcome the poor decoupling effect when the system model is inaccurate. The nonlinear control method has a correcting function and is expressed in analytic form, which makes it easy to adjust the parameters of the controller and to optimize the design of the control system. The method is used to design the vertical transition mode of an active control aircraft for gust alleviation. Simulation results show that the designed vertical transition mode improves the gust alleviation effect by about 34% compared with the normal aircraft.
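
    For the dynamic inversion step alone, a minimal scalar sketch is given below: for a plant x' = f(x) + g(x)u, choosing u = (v - f(x))/g(x) cancels the nonlinearity and leaves the linear error dynamics set by v. The plant functions, gain, and reference are toy assumptions, not the aircraft gust-alleviation model of the paper, and the nonlinear correcting controller is omitted.

```python
import numpy as np

def f(x):            # assumed plant drift (nonlinear)
    return -0.5 * x + 0.3 * np.sin(x)

def g(x):            # assumed (nonzero) control effectiveness
    return 1.0 + 0.2 * np.cos(x)

k, dt, x, x_ref = 4.0, 1e-3, 2.0, 0.0
for step in range(5000):
    v = -k * (x - x_ref)            # desired linear dynamics: e' = -k e
    u = (v - f(x)) / g(x)           # dynamic inversion control law
    x += dt * (f(x) + g(x) * u)     # Euler integration of the plant

print("tracking error after 5 s:", abs(x - x_ref))
```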

  14. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.

    2017-01-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119

  15. Prediction of Dynamic Stall Characteristics Using Advanced Nonlinear Panel Methods,

    DTIC Science & Technology

    This paper presents preliminary results of work in which a surface singularity panel method is being extended for modelling the dynamic interaction between a separated wake and a surface undergoing an unsteady motion. The method combines the capabilities of an unsteady time-stepping code and a technique for modelling extensive separation using free vortex sheets. Routines are developed for treating the dynamic interaction between the separated

  16. Parameter estimation in X-ray astronomy using maximum likelihood

    NASA Technical Reports Server (NTRS)

    Wachter, K.; Leach, R.; Kellogg, E.

    1979-01-01

    Methods of estimating parameter values and confidence regions by maximum likelihood and Fisher efficient scores, starting from Poisson probabilities, are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternative, minimum chi-squared, because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
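
    The sketch below shows one common way to do this kind of Poisson maximum-likelihood fit: minimizing 2*sum(m_i - d_i ln m_i) over the parameters of a spectral model (the form often written as the Cash statistic). The power-law model, channel grid, and parameter values are illustrative assumptions, not the paper's data or exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
E = np.linspace(1.0, 10.0, 40)                  # channel energies (assumed, keV)

def model_counts(params, E):
    norm, index = params
    return norm * E ** (-index)                 # simple power-law model (assumed)

true = (200.0, 1.7)
d = rng.poisson(model_counts(true, E))          # simulated observed counts

def cash(params):
    m = model_counts(params, E)
    if np.any(m <= 0):
        return np.inf
    return 2.0 * np.sum(m - d * np.log(m))      # Poisson negative log-likelihood (x2)

res = minimize(cash, x0=[100.0, 1.0], method="Nelder-Mead")
print("best-fit norm = %.1f, photon index = %.2f" % tuple(res.x))
# Approximate confidence regions follow from contours of this statistic,
# e.g. a change of 1 for one interesting parameter (asymptotically chi-square).
```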

  17. Non-Concave Penalized Likelihood with NP-Dimensionality

    PubMed Central

    Fan, Jianqing; Lv, Jinchi

    2011-01-01

    Penalized likelihood methods are fundamental to ultra-high dimensional variable selection. How high dimensionality such methods can handle remains largely unknown. In this paper, we show that in the context of generalized linear models, such methods possess model selection consistency with oracle properties even for dimensionality of Non-Polynomial (NP) order of sample size, for a class of penalized likelihood approaches using folded-concave penalty functions, which were introduced to ameliorate the bias problems of convex penalty functions. This fills a long-standing gap in the literature where the dimensionality is allowed to grow slowly with the sample size. Our results are also applicable to penalized likelihood with the L1-penalty, which is a convex function at the boundary of the class of folded-concave penalty functions under consideration. The coordinate optimization is implemented for finding the solution paths, whose performance is evaluated by a few simulation examples and the real data analysis. PMID:22287795
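
    For concreteness, a minimal sketch of one folded-concave penalty of the kind discussed above is shown below: the SCAD penalty with the usual choice a = 3.7. Only the penalty function and its derivative are given; the penalized-likelihood solver and the theory in the paper are not reproduced.

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty p_lam(|t|), elementwise."""
    t = np.abs(np.asarray(t, dtype=float))
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    large = t > a * lam
    p = np.empty_like(t)
    p[small] = lam * t[small]
    p[mid] = (2.0 * a * lam * t[mid] - t[mid] ** 2 - lam ** 2) / (2.0 * (a - 1.0))
    p[large] = lam ** 2 * (a + 1.0) / 2.0
    return p

def scad_derivative(t, lam, a=3.7):
    """p'_lam(t) for t >= 0: lam up to t = lam, then decaying linearly to 0."""
    t = np.abs(np.asarray(t, dtype=float))
    return lam * ((t <= lam) + np.maximum(a * lam - t, 0.0)
                  / ((a - 1.0) * lam) * (t > lam))

theta = np.linspace(0.0, 5.0, 6)
print(scad_penalty(theta, lam=1.0))     # constant beyond a*lam: no bias for large signals
print(scad_derivative(theta, lam=1.0))  # zero derivative beyond a*lam
```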

  18. Dynamic characteristics of a WPC—comparison of transfer matrix method and FE method

    NASA Astrophysics Data System (ADS)

    Chen, Guo-Long; Nie, Wu

    2003-12-01

    To find the differences in dynamic characteristics between a conventional monohull ship and a wave-penetrating catamaran (WPC), a WPC was taken as the object of study; its dynamic characteristics were computed by the transfer matrix method and the finite element method, respectively. Comparison of the natural frequency and mode shape results shows that the FEM is more suitable for dynamic characteristics analysis of a WPC. Special features of the dynamic characteristics of the WPC are given, and some suggestions are proposed for optimizing the strength of a WPC at the design stage.

  19. Dynamic force matching: A method for constructing dynamical coarse-grained models with realistic time dependence

    SciTech Connect

    Davtyan, Aram; Dama, James F.; Voth, Gregory A.; Andersen, Hans C.

    2015-04-21

    Coarse-grained (CG) models of molecular systems, with fewer mechanical degrees of freedom than an all-atom model, are used extensively in chemical physics. It is generally accepted that a coarse-grained model that accurately describes equilibrium structural properties (as a result of having a well constructed CG potential energy function) does not necessarily exhibit appropriate dynamical behavior when simulated using conservative Hamiltonian dynamics for the CG degrees of freedom on the CG potential energy surface. Attempts to develop accurate CG dynamic models usually focus on replacing Hamiltonian motion by stochastic but Markovian dynamics on that surface, such as Langevin or Brownian dynamics. However, depending on the nature of the system and the extent of the coarse-graining, a Markovian dynamics for the CG degrees of freedom may not be appropriate. In this paper, we consider the problem of constructing dynamic CG models within the context of the Multi-Scale Coarse-graining (MS-CG) method of Voth and coworkers. We propose a method of converting a MS-CG model into a dynamic CG model by adding degrees of freedom to it in the form of a small number of fictitious particles that interact with the CG degrees of freedom in simple ways and that are subject to Langevin forces. The dynamic models are members of a class of nonlinear systems interacting with special heat baths that were studied by Zwanzig [J. Stat. Phys. 9, 215 (1973)]. The properties of the fictitious particles can be inferred from analysis of the dynamics of all-atom simulations of the system of interest. This is analogous to the fact that the MS-CG method generates the CG potential from analysis of equilibrium structures observed in all-atom simulation data. The dynamic models generate a non-Markovian dynamics for the CG degrees of freedom, but they can be easily simulated using standard molecular dynamics programs. We present tests of this method on a series of simple examples that demonstrate that

  20. Dynamic force matching: A method for constructing dynamical coarse-grained models with realistic time dependence

    NASA Astrophysics Data System (ADS)

    Davtyan, Aram; Dama, James F.; Voth, Gregory A.; Andersen, Hans C.

    2015-04-01

    Coarse-grained (CG) models of molecular systems, with fewer mechanical degrees of freedom than an all-atom model, are used extensively in chemical physics. It is generally accepted that a coarse-grained model that accurately describes equilibrium structural properties (as a result of having a well constructed CG potential energy function) does not necessarily exhibit appropriate dynamical behavior when simulated using conservative Hamiltonian dynamics for the CG degrees of freedom on the CG potential energy surface. Attempts to develop accurate CG dynamic models usually focus on replacing Hamiltonian motion by stochastic but Markovian dynamics on that surface, such as Langevin or Brownian dynamics. However, depending on the nature of the system and the extent of the coarse-graining, a Markovian dynamics for the CG degrees of freedom may not be appropriate. In this paper, we consider the problem of constructing dynamic CG models within the context of the Multi-Scale Coarse-graining (MS-CG) method of Voth and coworkers. We propose a method of converting a MS-CG model into a dynamic CG model by adding degrees of freedom to it in the form of a small number of fictitious particles that interact with the CG degrees of freedom in simple ways and that are subject to Langevin forces. The dynamic models are members of a class of nonlinear systems interacting with special heat baths that were studied by Zwanzig [J. Stat. Phys. 9, 215 (1973)]. The properties of the fictitious particles can be inferred from analysis of the dynamics of all-atom simulations of the system of interest. This is analogous to the fact that the MS-CG method generates the CG potential from analysis of equilibrium structures observed in all-atom simulation data. The dynamic models generate a non-Markovian dynamics for the CG degrees of freedom, but they can be easily simulated using standard molecular dynamics programs. We present tests of this method on a series of simple examples that demonstrate that

  1. Vestige: Maximum likelihood phylogenetic footprinting

    PubMed Central

    Wakefield, Matthew J; Maxwell, Peter; Huttley, Gavin A

    2005-01-01

    Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational processes, DNA repair and

  2. Robust Dynamic Multi-objective Vehicle Routing Optimization Method.

    PubMed

    Guo, Yi-Nan; Cheng, Jian; Luo, Sha; Gong, Dun-Wei

    2017-03-21

    For dynamic multi-objective vehicle routing problems, the waiting time of vehicles, the number of serving vehicles, and the total distance of routes are normally considered as the optimization objectives. In addition to these objectives, this paper focuses on fuel consumption, which leads to environmental pollution and energy consumption. Considering the vehicles' load and driving distance, a corresponding carbon emission model was built and set as an optimization objective. Dynamic multi-objective vehicle routing problems with hard time windows and randomly appearing dynamic customers were subsequently modeled. In existing planning methods, when a new service demand comes up, a global vehicle routing optimization is triggered to find optimal routes for non-served customers, which is time-consuming. Therefore, a two-phase robust dynamic multi-objective vehicle routing method is proposed. Three highlights of the novel method are: (i) after finding optimal robust virtual routes for all customers by adopting multi-objective particle swarm optimization in the first phase, static vehicle routes for static customers are formed by removing all dynamic customers from the robust virtual routes in the next phase; (ii) dynamically appearing customers are appended for service according to their service times and the vehicles' status, and global vehicle routing optimization is triggered only when no suitable locations can be found for dynamic customers; (iii) a metric measuring the algorithms' robustness is given. The statistical results indicate that the routes obtained by the proposed method have better stability and robustness but may be suboptimal. Moreover, time-consuming global vehicle routing optimization is avoided as dynamic customers appear.

  3. Likelihood maximization for list-mode emission tomographic image reconstruction.

    PubMed

    Byrne, C

    2001-10-01

    The maximum a posteriori (MAP) Bayesian iterative algorithm using priors that are gamma distributed, due to Lange, Bahn and Little, is extended to include parameter choices that fall outside the gamma distribution model. Special cases of the resulting iterative method include the expectation maximization maximum likelihood (EMML) method based on the Poisson model in emission tomography, as well as algorithms obtained by Parra and Barrett and by Huesman et al. that converge to maximum likelihood and maximum conditional likelihood estimates of radionuclide intensities for list-mode emission tomography. The approach taken here is optimization-theoretic and does not rely on the usual expectation maximization (EM) formalism. Block-iterative variants of the algorithms are presented. A self-contained, elementary proof of convergence of the algorithm is included.
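
    A minimal sketch of the EMML special case mentioned above is given below: the multiplicative update x_j <- x_j * sum_i A_ij y_i/(Ax)_i / sum_i A_ij for Poisson emission data. The system matrix and data are random toy assumptions; a real list-mode reconstruction would build A from detected events and would typically use the block-iterative variants discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n_bins, n_pixels = 60, 30

A = rng.random((n_bins, n_pixels))              # toy system (projection) matrix
x_true = rng.gamma(shape=2.0, scale=1.0, size=n_pixels)
y = rng.poisson(A @ x_true)                     # observed Poisson counts

x = np.ones(n_pixels)                           # strictly positive start
sensitivity = A.sum(axis=0)                     # s_j = sum_i A_ij

for it in range(200):
    ratio = y / np.clip(A @ x, 1e-12, None)     # y_i / (A x)_i
    x = x * (A.T @ ratio) / sensitivity         # multiplicative EMML update

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```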

  4. Maximum-likelihood estimation of admixture proportions from genetic data.

    PubMed Central

    Wang, Jinliang

    2003-01-01

    For an admixed population, an important question is how much genetic contribution comes from each parental population. Several methods have been developed to estimate such admixture proportions, using data on genetic markers sampled from parental and admixed populations. In this study, I propose a likelihood method to estimate jointly the admixture proportions, the genetic drift that occurred to the admixed population and each parental population during the period between the hybridization and sampling events, and the genetic drift in each ancestral population within the interval between their split and hybridization. The results from extensive simulations using various combinations of relevant parameter values show that in general much more accurate and precise estimates of admixture proportions are obtained from the likelihood method than from previous methods. The likelihood method also yields reasonable estimates of genetic drift that occurred to each population, which translate into relative effective sizes (N(e)) or absolute average N(e)'s if the times when the relevant events (such as population split, admixture, and sampling) occurred are known. The proposed likelihood method also has features such as relatively low computational requirement compared with previous ones, flexibility for admixture models, and marker types. In particular, it allows for missing data from a contributing parental population. The method is applied to a human data set and a wolflike canids data set, and the results obtained are discussed in comparison with those from other estimators and from previous studies. PMID:12807794

  5. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2003-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  6. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2004-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  7. Method to describe stochastic dynamics using an optimal coordinate.

    PubMed

    Krivov, Sergei V

    2013-12-01

    A general method to describe the stochastic dynamics of Markov processes is suggested. The method aims to solve three related problems: the determination of an optimal coordinate for the description of stochastic dynamics; the reconstruction of time from an ensemble of stochastic trajectories; and the decomposition of stationary stochastic dynamics into eigenmodes which do not decay exponentially with time. The problems are solved by introducing additive eigenvectors which are transformed by a stochastic matrix in a simple way - every component is translated by a constant distance. Such solutions have peculiar properties. For example, an optimal coordinate for stochastic dynamics with detailed balance is a multivalued function. An optimal coordinate for a random walk on a line corresponds to the conventional eigenvector of the one-dimensional Dirac equation. The equation for the optimal coordinate in a slowly varying potential reduces to the Hamilton-Jacobi equation for the action function.

  8. A review of substructure coupling methods for dynamic analysis

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.; Chang, C. J.

    1976-01-01

    The state of the art is assessed in substructure coupling for dynamic analysis. A general formulation, which permits all previously described methods to be characterized by a few constituent matrices, is developed. Limited results comparing the accuracy of various methods are presented.

  9. An inverse dynamic method yielding flexible manipulator state trajectories

    NASA Technical Reports Server (NTRS)

    Kwon, Dong-Soo; Book, Wayne J.

    1990-01-01

    An inverse dynamic equation for a flexible manipulator is derived in a state form. By dividing the inverse system into the causal part and the anticausal part, torque is calculated in the time domain for a certain end point trajectory, as well as trajectories of all state variables. The open loop control of the inverse dynamic method shows an excellent result in simulation. For practical applications, a control strategy adapting feedback tracking control to the inverse dynamic feedforward control is illustrated, and its good experimental result is presented.

  10. An efficient threshold dynamics method for wetting on rough surfaces

    NASA Astrophysics Data System (ADS)

    Xu, Xianmin; Wang, Dong; Wang, Xiao-Ping

    2017-02-01

    The threshold dynamics method developed by Merriman, Bence and Osher (MBO) is an efficient method for simulating the motion by mean curvature flow when the interface is away from the solid boundary. Direct generalization of MBO-type methods to the wetting problem with interfaces intersecting the solid boundary is not easy because solving the heat equation in a general domain with a wetting boundary condition is not as efficient as it is with the original MBO method. The dynamics of the contact point also follows a different law compared with the dynamics of the interface away from the boundary. In this paper, we develop an efficient volume preserving threshold dynamics method for simulating wetting on rough surfaces. This method is based on minimization of the weighted surface area functional over an extended domain that includes the solid phase. The method is simple, stable with O(N log N) complexity per time step and is not sensitive to the inhomogeneity or roughness of the solid boundary.
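
    For reference, here is a hedged sketch of the original MBO scheme the abstract builds on, for motion by mean curvature on a periodic-free square domain: diffuse the characteristic function of the set for a short time, then threshold at 1/2. The volume-preserving, wetting-boundary version developed in the paper is more involved and is not reproduced; grid size and time step are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

n, dx, dt = 200, 1.0 / 200, 2e-4
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

# Initial set: a square, whose corners should round off under curvature flow.
chi = ((np.abs(X - 0.5) < 0.2) & (np.abs(Y - 0.5) < 0.2)).astype(float)

sigma_pixels = np.sqrt(2.0 * dt) / dx      # heat-kernel std dev in grid units
for step in range(50):
    u = gaussian_filter(chi, sigma=sigma_pixels, mode="nearest")  # diffusion step
    chi = (u >= 0.5).astype(float)                                # threshold step

print("remaining area fraction:", chi.mean())
```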

  11. A dynamic integrated fault diagnosis method for power transformers.

    PubMed

    Gao, Wensheng; Bai, Cuifen; Liu, Tong

    2015-01-01

    In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian networks is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Unlike the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.

  12. A Dynamic Integrated Fault Diagnosis Method for Power Transformers

    PubMed Central

    Gao, Wensheng; Liu, Tong

    2015-01-01

    In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian networks is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Unlike the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified. PMID:25685841

  13. Method for recovering dynamic position of photoelectric encoder

    NASA Astrophysics Data System (ADS)

    Wu, Yong-zhi; Wan, Qiu-hua; Zhao, Chang-hai; Sun, Ying; Liang, Li-hui; Liu, Yi-sheng

    2009-05-01

    This paper presents a method to recover the dynamic position of a photoelectric encoder. When working in the dynamic state, the original outputs of the photoelectric encoder are, in theory, two sine or triangular signals with a phase difference of π/2; in practice, however, the actual output signals deviate from this ideal. Interpolating on the basis of these deviated signals results in interpolation errors. In the dynamic state, the original signal data obtained by the data acquisition system form an equation in time. Through equiangular data processing and harmonic analysis, the equation is converted from the time domain to the position domain and an original position equation can be formed; the interpolation errors can then also be obtained. With this method, the interpolation errors can be checked in the dynamic state, providing a basis for electrical interpolation and thereby improving the dynamic interpolation precision of the encoder. Software simulation and experimental analysis both show the method to be effective. This method provides a theoretical basis for precision checking and calibration in motion.

  14. Improved dynamic analysis method using load-dependent Ritz vectors

    NASA Technical Reports Server (NTRS)

    Escobedo-Torres, J.; Ricles, J. M.

    1993-01-01

    The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by having a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered, and it is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load-dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load-dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
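
    A minimal sketch of generating load-dependent Ritz vectors is shown below, under the standard recurrence (solve with the stiffness matrix, M-orthogonalize against previous vectors, mass-normalize) for a system M x'' + K x = f g(t) with a fixed spatial load pattern f. The small spring-mass chain is an illustrative assumption, not the space station model used in the report.

```python
import numpy as np
from scipy.linalg import eigh

n, n_ritz = 12, 4

M = np.eye(n)                                            # unit masses (assumed)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # unit springs, fixed ends
f = np.zeros(n); f[-1] = 1.0                             # spatial load pattern

def mass_normalize(v):
    return v / np.sqrt(v @ M @ v)

ritz = [mass_normalize(np.linalg.solve(K, f))]           # static response to f
for i in range(1, n_ritz):
    x = np.linalg.solve(K, M @ ritz[-1])                 # inverse-iteration step
    for r in ritz:                                       # Gram-Schmidt in the M inner product
        x -= (r @ M @ x) * r
    ritz.append(mass_normalize(x))

Q = np.column_stack(ritz)
Kr, Mr = Q.T @ K @ Q, Q.T @ M @ Q                        # reduced system in the Ritz basis
print("approximate natural frequencies:", np.sqrt(eigh(Kr, Mr, eigvals_only=True)))
```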

  15. Can the ring polymer molecular dynamics method be interpreted as real time quantum dynamics?

    SciTech Connect

    Jang, Seogjoo; Sinitskiy, Anton V.; Voth, Gregory A.

    2014-04-21

    The ring polymer molecular dynamics (RPMD) method has gained popularity in recent years as a simple approximation for calculating real time quantum correlation functions in condensed media. However, the extent to which RPMD captures real dynamical quantum effects and why it fails under certain situations have not been clearly understood. Addressing this issue has been difficult in the absence of a genuine justification for the RPMD algorithm starting from the quantum Liouville equation. To this end, a new and exact path integral formalism for the calculation of real time quantum correlation functions is presented in this work, which can serve as a rigorous foundation for the analysis of the RPMD method as well as providing an alternative derivation of the well established centroid molecular dynamics method. The new formalism utilizes the cyclic symmetry of the imaginary time path integral in the most general sense and enables the expression of Kubo-transformed quantum time correlation functions as that of physical observables pre-averaged over the imaginary time path. Upon filtering with a centroid constraint function, the formulation results in the centroid dynamics formalism. Upon filtering with the position representation of the imaginary time path integral, we obtain an exact quantum dynamics formalism involving the same variables as the RPMD method. The analysis of the RPMD approximation based on this approach clarifies that an explicit quantum dynamical justification does not exist for the use of the ring polymer harmonic potential term (imaginary time kinetic energy) as implemented in the RPMD method. It is analyzed why this can cause substantial errors in nonlinear correlation functions of harmonic oscillators. Such errors can be significant for general correlation functions of anharmonic systems. We also demonstrate that the short time accuracy of the exact path integral limit of RPMD is of lower order than those for finite discretization of path. The

  16. Comparison of induced rules based on likelihood estimation

    NASA Astrophysics Data System (ADS)

    Tsumoto, Shusaku

    2002-03-01

    Rule induction methods have been applied to knowledge discovery in databases and data mining, and the empirical results obtained show that they are very powerful and that important knowledge has been extracted from datasets. However, comparison and evaluation of rules are based not on statistical evidence but on rather naive indices, such as conditional probabilities and functions of conditional probabilities. In this paper, we introduce two approaches to the statistical comparison of induced rules. For the statistical evaluation, the likelihood ratio test and Fisher's exact test play an important role: the likelihood ratio statistic measures statistical information about an information table and is used to measure the difference between two tables.
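
    As a hedged illustration of the two tools named above, the sketch below computes the likelihood-ratio statistic G^2 = 2*sum O*ln(O/E) and a Fisher exact p-value for a 2x2 contingency table that cross-classifies "rule condition satisfied" against "target class"; the counts are made up and the table is not from the paper.

```python
import numpy as np
from scipy.stats import fisher_exact

table = np.array([[30, 10],
                  [20, 40]])                       # assumed 2x2 counts

# Expected counts under independence, from the margins.
expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
mask = table > 0                                   # avoid log(0) terms
G2 = 2.0 * np.sum(table[mask] * np.log(table[mask] / expected[mask]))

odds_ratio, p_fisher = fisher_exact(table)
print(f"likelihood-ratio statistic G^2 = {G2:.2f}")
print(f"Fisher exact test: odds ratio = {odds_ratio:.2f}, p = {p_fisher:.4g}")
```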

  17. Investigation of Ribosomes Using Molecular Dynamics Simulation Methods.

    PubMed

    Makarov, G I; Makarova, T M; Sumbatyan, N V; Bogdanov, A A

    2016-12-01

    The ribosome as a complex molecular machine undergoes significant conformational changes while synthesizing a protein molecule. Molecular dynamics simulations have been used as complementary approaches to X-ray crystallography and cryoelectron microscopy, as well as biochemical methods, to answer many questions that modern structural methods leave unsolved. In this review, we demonstrate that all-atom modeling of ribosome molecular dynamics is particularly useful in describing the process of tRNA translocation, atomic details of behavior of nascent peptides, antibiotics, and other small molecules in the ribosomal tunnel, and the putative mechanism of allosteric signal transmission to functional sites of the ribosome.

  18. Nonstationary hydrological time series forecasting using nonlinear dynamic methods

    NASA Astrophysics Data System (ADS)

    Coulibaly, Paulin; Baldwin, Connely K.

    2005-06-01

    Recent evidence of nonstationary trends in water resources time series, as a result of natural and/or anthropogenic climate variability and change, has raised more interest in nonlinear dynamic system modeling methods. In this study, the effectiveness of dynamically driven recurrent neural networks (RNN) for complex time-varying water resources system modeling is investigated. An optimal dynamic RNN approach is proposed to directly forecast different nonstationary hydrological time series. The proposed method automatically selects the most optimally trained network in each case. The simulation performance of the dynamic RNN-based model is compared with the results obtained from optimal multivariate adaptive regression splines (MARS) models. It is shown that the dynamically driven RNN model can be a good alternative for modeling the complex dynamics of a hydrological system, performing better than the MARS model on the three selected hydrological time series, namely the historical storage volumes of the Great Salt Lake, the Saint-Lawrence River flows, and the Nile River flows.

  19. Carrier Recovery Enhancement for Maximum-Likelihood Doppler Shift Estimation in Mars Exploration Missions

    NASA Astrophysics Data System (ADS)

    Cattivelli, Federico S.; Estabrook, Polly; Satorius, Edgar H.; Sayed, Ali H.

    2008-11-01

    One of the most crucial stages of the Mars exploration missions is the entry, descent, and landing (EDL) phase. During EDL, maintaining reliable communication from the spacecraft to Earth is extremely important for the success of future missions, especially in case of mission failure. EDL is characterized by very deep accelerations, caused by friction, parachute deployment and rocket firing among others. These dynamics cause a severe Doppler shift on the carrier communications link to Earth. Methods have been proposed to estimate the Doppler shift based on Maximum Likelihood. So far these methods have proved successful, but it is expected that the next Mars mission, known as the Mars Science Laboratory, will suffer from higher dynamics and lower SNR. Thus, improving the existing estimation methods becomes a necessity. We propose a Maximum Likelihood approach that takes into account the power in the data tones to enhance carrier recovery, and improve the estimation performance by up to 3 dB. Simulations are performed using real data obtained during the EDL stage of the Mars Exploration Rover B (MERB) mission.
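
    A minimal sketch of the underlying frequency-estimation step is given below: for a single tone in white Gaussian noise, the maximum-likelihood carrier (Doppler) estimate maximizes the periodogram, here evaluated on a zero-padded FFT grid. The signal parameters are synthetic assumptions, not MER or MSL telemetry, and the data-tone enhancement proposed in the paper is not included.

```python
import numpy as np

rng = np.random.default_rng(6)
fs, n = 8000.0, 2048                   # assumed sample rate (Hz) and block length
f_true, amp = 1234.5, 0.5              # assumed carrier offset and amplitude
t = np.arange(n) / fs

x = amp * np.exp(2j * np.pi * f_true * t)
x += (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2.0)   # unit-variance noise

pad = 8 * n                            # zero-padding refines the frequency grid
spectrum = np.fft.fft(x, n=pad)
freqs = np.fft.fftfreq(pad, d=1.0 / fs)
f_hat = freqs[np.argmax(np.abs(spectrum) ** 2)]   # ML estimate = periodogram peak

print(f"true Doppler offset {f_true:.1f} Hz, ML estimate {f_hat:.1f} Hz")
```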

  20. Accelerated molecular dynamics methods: introduction and recent developments

    SciTech Connect

    Uberuaga, Blas Pedro; Voter, Arthur F; Perez, Danny; Shim, Y; Amar, J G

    2009-01-01

    A long-standing limitation in the use of molecular dynamics (MD) simulation is that it can only be applied directly to processes that take place on very short timescales: nanoseconds if empirical potentials are employed, or picoseconds if we rely on electronic structure methods. Many processes of interest in chemistry, biochemistry, and materials science require study over microseconds and beyond, due either to the natural timescale for the evolution or to the duration of the experiment of interest. Setting aside the case of liquids, the dynamics on these time scales is typically characterized by infrequent state-to-state transitions, usually involving an energy barrier. There is a long and venerable tradition in chemistry of using transition state theory (TST) [10, 19, 23] to directly compute rate constants for these kinds of activated processes. If needed, dynamical corrections to the TST rate, and even quantum corrections, can be computed to achieve an accuracy suitable for the problem at hand. These rate constants then allow us to understand the system behavior on longer time scales than we can directly reach with MD. For complex systems with many reaction paths, the TST rates can be fed into a stochastic simulation procedure such as kinetic Monte Carlo, and a direct simulation of the advance of the system through its possible states can be obtained in a probabilistically exact way. A problem that has become more evident in recent years, however, is that for many systems of interest there is a complexity that makes it difficult, if not impossible, to determine all the relevant reaction paths to which TST should be applied. This is a serious issue, as omitted transition pathways can have uncontrollable consequences on the simulated long-time kinetics. Over the last decade or so, we have been developing a new class of methods for treating the long-time dynamics in these complex, infrequent-event systems. Rather than trying to guess in advance what
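    As a concrete illustration of feeding TST rate constants into a stochastic state-to-state simulation, the sketch below performs standard rejection-free kinetic Monte Carlo steps from a table of escape rates. The rates and the state graph are invented for illustration.

    ```python
    # Minimal kinetic Monte Carlo (KMC) sketch: given TST rate constants k_i for
    # the escape pathways out of the current state, pick the next event with
    # probability k_i / sum(k) and advance the clock by an exponential waiting time.
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical escape rates (1/s) out of each state; the state graph is invented.
    rates = {
        0: [(1, 2.0e3), (2, 5.0e2)],        # state 0 -> states 1, 2
        1: [(0, 1.0e3), (2, 1.0e2)],
        2: [(0, 4.0e2), (1, 8.0e2)],
    }

    state, time = 0, 0.0
    for step in range(10):
        targets, ks = zip(*rates[state])
        ks = np.asarray(ks, dtype=float)
        total = ks.sum()
        # Select the event proportionally to its rate.
        next_state = targets[rng.choice(len(ks), p=ks / total)]
        # Exponentially distributed residence time with mean 1/total.
        time += rng.exponential(1.0 / total)
        state = next_state
        print(f"step {step}: t = {time:.3e} s, state = {state}")
    ```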

  1. Maximum Marginal Likelihood Estimation for Semiparametric Item Analysis.

    ERIC Educational Resources Information Center

    Ramsay, J. O.; Winsberg, S.

    1991-01-01

    A method is presented for estimating the item characteristic curve (ICC) using polynomial regression splines. Estimation of spline ICCs is described by maximizing the marginal likelihood formed by integrating ability over a beta prior distribution. Simulation results compare this approach with the joint estimation of ability and item parameters.…

  2. Maximum likelihood estimates of polar motion parameters

    NASA Technical Reports Server (NTRS)

    Wilson, Clark R.; Vicente, R. O.

    1990-01-01

    Two estimators developed by Jeffreys (1940, 1968) are described and used in conjunction with polar-motion data to determine the frequency (Fc) and quality factor (Qc) of the Chandler wobble. Data are taken from a monthly polar-motion series, satellite laser-ranging results, and optical astrometry and intercompared for use via interpolation techniques. Maximum likelihood arguments were employed to develop the estimators, and the assumption that polar motion relates to a Gaussian random process is assessed in terms of the accuracies of the estimators. The present results agree with those from Jeffreys' earlier study but are inconsistent with the later estimator; a Monte Carlo evaluation of the estimators confirms that the 1968 method is more accurate. The later estimator method shows good performance because the Fourier coefficients derived from the data have signal/noise levels that are superior to those for an individual datum. The method is shown to be valuable for general spectral-analysis problems in which isolated peaks must be analyzed from noisy data.

  3. Efficient maximum likelihood parameterization of continuous-time Markov processes

    PubMed Central

    McGibbon, Robert T.; Pande, Vijay S.

    2015-01-01

    Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016
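    A common baseline for fitting a continuous-time Markov model from data observed at a fixed interval tau is to estimate the discrete-time transition matrix from row-normalized counts and take a matrix logarithm. The sketch below shows that naive approach on synthetic data; it is not the constrained maximum-likelihood estimator introduced in the paper, and the transition matrix is invented.

    ```python
    # Baseline sketch (not the paper's estimator): estimate a rate matrix Q for a
    # continuous-time Markov process from a state sequence sampled every tau.
    import numpy as np
    from scipy.linalg import logm

    def estimate_generator(states, n_states, tau):
        # Row-normalized transition counts give the MLE of the discrete-time
        # transition matrix P(tau); Q is then approximated by logm(P) / tau.
        counts = np.zeros((n_states, n_states))
        for a, b in zip(states[:-1], states[1:]):
            counts[a, b] += 1
        P = counts / counts.sum(axis=1, keepdims=True)
        Q = np.real(logm(P)) / tau
        return P, Q

    # Tiny synthetic example: simulate a 3-state chain with an assumed P(tau).
    rng = np.random.default_rng(2)
    P_true = np.array([[0.90, 0.07, 0.03],
                       [0.05, 0.90, 0.05],
                       [0.02, 0.08, 0.90]])
    state, traj = 0, [0]
    for _ in range(5000):
        state = rng.choice(3, p=P_true[state])
        traj.append(state)

    P_hat, Q_hat = estimate_generator(traj, n_states=3, tau=0.1)
    print("estimated P(tau):\n", P_hat.round(3))
    print("estimated Q:\n", Q_hat.round(3))
    ```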

  4. Adiabatic molecular-dynamics-simulation-method studies of kinetic friction

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Sokoloff, J. B.

    2005-06-01

    An adiabatic molecular-dynamics method is developed and used to study the Muser-Robbins model for dry friction (i.e., nonzero kinetic friction in the slow sliding speed limit). In this model, dry friction between two crystalline surfaces rotated with respect to each other is due to mobile molecules (i.e., dirt particles) adsorbed at the interface. Our adiabatic method allows us to quickly locate interface potential-well minima, which become unstable during sliding of the surfaces. Since dissipation due to friction in the slow sliding speed limit results from mobile molecules dropping out of such unstable wells, our method provides a way to calculate dry friction, which agrees extremely well with results found by conventional molecular dynamics for the same system, but our method is more than a factor of 10 faster.

  5. Continuation Methods for Qualitative Analysis of Aircraft Dynamics

    NASA Technical Reports Server (NTRS)

    Cummings, Peter A.

    2004-01-01

    A class of numerical methods for constructing bifurcation curves for systems of coupled, non-linear ordinary differential equations is presented. Foundations are discussed, and several variations are outlined along with their respective capabilities. Appropriate background material from dynamical systems theory is presented.
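    A minimal flavor of numerical continuation, far simpler than the bifurcation-tracking methods surveyed here: step a parameter, and reuse the previous equilibrium as the initial guess for a Newton corrector. The example equation is invented; real bifurcation analysis would use pseudo-arclength continuation to follow branches around folds.

    ```python
    # Minimal natural-parameter continuation sketch: trace the equilibrium branch
    # x(lam) of f(x, lam) = x**3 + x - lam by stepping lam and Newton-correcting
    # from the previous solution.
    import numpy as np

    def f(x, lam):
        return x**3 + x - lam

    def dfdx(x, lam):
        return 3 * x**2 + 1

    def newton(x0, lam, tol=1e-12, max_iter=50):
        x = x0
        for _ in range(max_iter):
            step = f(x, lam) / dfdx(x, lam)
            x -= step
            if abs(step) < tol:
                break
        return x

    branch = []
    x = 0.0
    for lam in np.linspace(0.0, 5.0, 26):
        x = newton(x, lam)          # previous solution seeds the next Newton solve
        branch.append((lam, x))

    for lam, x in branch[::5]:
        print(f"lam = {lam:.2f}  equilibrium x = {x:.4f}")
    ```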

  6. The Feldenkrais Method: A Dynamic Approach to Changing Motor Behavior.

    ERIC Educational Resources Information Center

    Buchanan, Patricia A.; Ulrich, Beverly D.

    2001-01-01

    Describes the Feldenkrais Method of somatic education, noting parallels with a dynamic systems theory (DST) approach to motor behavior. Feldenkrais uses movement and perception to foster individualized improvement in function. DST explains that a human-environment system continually adapts to changing conditions and assembles behaviors…

  7. Do dynamic-based MR knee kinematics methods produce the same results as static methods?

    PubMed

    d'Entremont, Agnes G; Nordmeyer-Massner, Jurek A; Bos, Clemens; Wilson, David R; Pruessmann, Klaas P

    2013-06-01

    MR-based methods provide low risk, noninvasive assessment of joint kinematics; however, these methods often use static positions or require many identical cycles of movement. The study objective was to compare the 3D kinematic results approximated from a series of sequential static poses of the knee with the 3D kinematic results obtained from continuous dynamic movement of the knee. To accomplish this objective, we compared kinematic data from a validated static MR method to a fast static MR method, and compared kinematic data from both static methods to a newly developed dynamic MR method. Ten normal volunteers were imaged using the three kinematic methods (dynamic, static standard, and static fast). Results showed that the two sets of static results were in agreement, indicating that the sequences (standard and fast) may be used interchangeably. Dynamic kinematic results were significantly different from both static results in eight of 11 kinematic parameters: patellar flexion, patellar tilt, patellar proximal translation, patellar lateral translation, patellar anterior translation, tibial abduction, tibial internal rotation, and tibial anterior translation. Three-dimensional MR kinematics measured from dynamic knee motion are often different from those measured in a static knee at several positions, indicating that dynamic-based kinematics provides information that is not obtainable from static scans.

  8. Review of dynamic optimization methods in renewable natural resource management

    USGS Publications Warehouse

    Williams, B.K.

    1989-01-01

    In recent years, the applications of dynamic optimization procedures in natural resource management have proliferated. A systematic review of these applications is given in terms of a number of optimization methodologies and natural resource systems. The applicability of the methods to renewable natural resource systems is compared in terms of system complexity, system size, and precision of the optimal solutions. Recommendations are made concerning the appropriate methods for certain kinds of biological resource problems.
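    One of the workhorse methods in this literature is backward-induction dynamic programming. The sketch below solves a toy finite-horizon harvest problem on a discretized stock grid; the growth model, horizon, and grid are invented for illustration and do not come from the review.

    ```python
    # Toy dynamic programming sketch for a renewable-resource harvest problem:
    # maximize total harvest over T periods subject to logistic stock growth,
    # by backward induction on a discretized stock grid. All numbers are invented.
    import numpy as np

    K, r, T = 100.0, 0.4, 10               # carrying capacity, growth rate, horizon
    stock_grid = np.linspace(0.0, K, 101)  # discretized stock levels

    def grow(s):
        # Logistic growth of the escapement (stock left after harvest).
        return np.clip(s + r * s * (1 - s / K), 0.0, K)

    V = np.zeros(len(stock_grid))          # terminal value: nothing after T
    policy = np.zeros((T, len(stock_grid)))

    for t in reversed(range(T)):
        V_next = V.copy()
        for i, s in enumerate(stock_grid):
            harvests = stock_grid[stock_grid <= s]          # feasible harvests
            escapement = grow(s - harvests)
            # Interpolate the continuation value at next period's stock level.
            values = harvests + np.interp(escapement, stock_grid, V_next)
            j = np.argmax(values)
            V[i] = values[j]
            policy[t, i] = harvests[j]

    print("optimal first-period harvest at full stock:", policy[0, -1])
    print("value of starting at full stock:", V[-1].round(2))
    ```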

  9. On the existence of maximum likelihood estimates for presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.

  10. Discriminative likelihood score weighting based on acoustic-phonetic classification for speaker identification

    NASA Astrophysics Data System (ADS)

    Suh, Youngjoo; Kim, Hoirin

    2014-12-01

    In this paper, a new discriminative likelihood score weighting technique is proposed for speaker identification. The proposed method employs a discriminative weighting of frame-level log-likelihood scores with acoustic-phonetic classification in the Gaussian mixture model (GMM)-based speaker identification. Experiments performed on the Aurora noise-corrupted TIMIT database showed that the proposed approach provides meaningful performance improvement with an overall relative error reduction of 15.8% over the maximum likelihood-based baseline GMM approach.
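    The baseline that the paper improves upon is GMM-based speaker identification, in which each speaker's GMM scores the test frames and the speaker with the highest total log-likelihood wins. The sketch below shows that baseline with uniform frame weights using scikit-learn; the features and model sizes are invented, and the proposed discriminative weighting is not implemented.

    ```python
    # Baseline GMM speaker identification sketch: train one GMM per speaker on
    # that speaker's feature frames, then pick the speaker whose GMM gives the
    # highest summed frame log-likelihood on the test utterance.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    dim, n_frames = 12, 500                     # pretend MFCC-like features

    # Synthetic training data for two "speakers" with different feature means.
    train = {
        "speaker_A": rng.normal(loc=0.0, scale=1.0, size=(n_frames, dim)),
        "speaker_B": rng.normal(loc=1.5, scale=1.0, size=(n_frames, dim)),
    }

    models = {
        name: GaussianMixture(n_components=8, covariance_type="diag",
                              random_state=0).fit(frames)
        for name, frames in train.items()
    }

    # Test utterance actually drawn from speaker_B's distribution.
    test_frames = rng.normal(loc=1.5, scale=1.0, size=(200, dim))

    scores = {name: gmm.score_samples(test_frames).sum()   # summed frame log-likelihoods
              for name, gmm in models.items()}
    print(scores)
    print("identified:", max(scores, key=scores.get))
    ```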

  11. Empirical Likelihood-Based Confidence Interval of ROC Curves.

    PubMed

    Su, Haiyan; Qin, Yongsong; Liang, Hua

    2009-11-01

    In this article we propose an empirical likelihood-based confidence interval for receiver operating characteristic curves which are based on a continuous-scale test. The approach is easily understood, simply implemented, and computationally efficient. The results from our simulation studies indicate that the finite-sample numerical performance slightly outperforms the most promising methods published recently. Two real datasets are analyzed by using the proposed method and the existing bootstrap-based method.

  12. Predicting crash likelihood and severity on freeways with real-time loop detector data.

    PubMed

    Xu, Chengcheng; Tarko, Andrew P; Wang, Wei; Liu, Pan

    2013-08-01

    Real-time crash risk prediction using traffic data collected from loop detector stations is useful in dynamic safety management systems aimed at improving traffic safety through application of proactive safety countermeasures. The major drawback of most of the existing studies is that they focus on the crash risk without consideration of crash severity. This paper presents an effort to develop a model that predicts the crash likelihood at different levels of severity with a particular focus on severe crashes. The crash data and traffic data used in this study were collected on the I-880 freeway in California, United States. This study considers three levels of crash severity: fatal/incapacitating injury crashes (KA), non-incapacitating/possible injury crashes (BC), and property-damage-only crashes (PDO). The sequential logit model was used to link the likelihood of crash occurrences at different severity levels to various traffic flow characteristics derived from detector data. The elasticity analysis was conducted to evaluate the effect of the traffic flow variables on the likelihood of crash and its severity. The results show that the traffic flow characteristics contributing to crash likelihood were quite different at different levels of severity. The PDO crashes were more likely to occur under congested traffic flow conditions with highly variable speed and frequent lane changes, while the KA and BC crashes were more likely to occur under less congested traffic flow conditions. High speed, coupled with a large speed difference between adjacent lanes under uncongested traffic conditions, was found to increase the likelihood of severe crashes (KA). This study applied the 20-fold cross-validation method to estimate the prediction performance of the developed models. The validation results show that the model's crash prediction performance at each severity level was satisfactory. The findings of this study can be used to predict the probabilities of crash at
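    A sequential logit of this kind can be fit as a cascade of binary logistic regressions on nested subsets of the data. The sketch below illustrates that structure on synthetic data; the covariates, coefficients, and staging are invented and it is not the authors' fitted model.

    ```python
    # Sketch of a sequential (continuation-ratio) logit for crash severity with
    # three levels (PDO < BC < KA), fit as two binary logistic regressions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n = 5000
    X = rng.normal(size=(n, 3))          # e.g., speed, speed variance, occupancy

    # Generate synthetic severities from an assumed sequential process.
    p_injury = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1] - 1.0)))
    injury = rng.random(n) < p_injury
    p_ka = 1 / (1 + np.exp(-(1.2 * X[:, 0] - 1.5)))
    severity = np.where(~injury, "PDO", np.where(rng.random(n) < p_ka, "KA", "BC"))

    # Stage 1: injury crash (KA or BC) vs PDO, on all crashes.
    stage1 = LogisticRegression(max_iter=1000).fit(X, severity != "PDO")
    # Stage 2: KA vs BC, only on injury crashes.
    mask = severity != "PDO"
    stage2 = LogisticRegression(max_iter=1000).fit(X[mask], severity[mask] == "KA")

    # Predicted probabilities for each severity level.
    p1 = stage1.predict_proba(X)[:, 1]           # P(injury)
    p2 = stage2.predict_proba(X)[:, 1]           # P(KA | injury)
    probs = np.column_stack([1 - p1, p1 * (1 - p2), p1 * p2])  # PDO, BC, KA
    print("mean predicted probabilities (PDO, BC, KA):", probs.mean(axis=0).round(3))
    ```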

  13. Tensor-based dynamic reconstruction method for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.

    2017-03-01

    Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.

  14. Dynamic Rupture Benchmarking of the ADER-DG Method

    NASA Astrophysics Data System (ADS)

    Pelties, C.; Gabriel, A.

    2012-12-01

    We will verify the arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) method in various test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite (Harris et al. 2009). The ADER-DG scheme is able to solve the spontaneous rupture problem with high-order accuracy in space and time on three-dimensional unstructured tetrahedral meshes. Strong mesh coarsening or refinement at areas of interest can be applied to keep the computational costs feasible. Moreover, the method does not generate spurious high-frequency contributions in the slip rate spectra and therefore does not require any artificial damping as demonstrated in previous presentations and publications (Pelties et al. 2010 and 2012). We will show that the mentioned features hold also for more advanced setups as e.g. a branching fault system, heterogeneous background stresses and bimaterial faults. The advanced geometrical flexibility combined with an enhanced accuracy will make the ADER-DG method a useful tool to study earthquake dynamics on complex fault systems in realistic rheologies. References: Harris, R.A., M. Barall, R. Archuleta, B. Aagaard, J.-P. Ampuero, H. Bhat, V. Cruz-Atienza, L. Dalguer, P. Dawson, S. Day, B. Duan, E. Dunham, G. Ely, Y. Kaneko, Y. Kase, N. Lapusta, Y. Liu, S. Ma, D. Oglesby, K. Olsen, A. Pitarka, S. Song, and E. Templeton, The SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise, Seismological Research Letters, vol. 80, no. 1, pages 119-126, 2009 Pelties, C., J. de la Puente, and M. Kaeser, Dynamic Rupture Modeling in Three Dimensions on Unstructured Meshes Using a Discontinuous Galerkin Method, AGU 2010 Fall Meeting, abstract #S21C-2068 Pelties, C., J. de la Puente, J.-P. Ampuero, G. Brietzke, and M. Kaeser, Three-Dimensional Dynamic Rupture Simulation with a High-order Discontinuous Galerkin Method on Unstructured Tetrahedral Meshes, JGR. - Solid Earth, VOL. 117, B02309, 2012

  15. Dynamic Rupture Benchmarking of the ADER-DG Method

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice; Pelties, Christian

    2013-04-01

    We will verify the arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) method in various test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite (Harris et al. 2009). The ADER-DG scheme is able to solve the spontaneous rupture problem with high-order accuracy in space and time on three-dimensional unstructured tetrahedral meshes. Strong mesh coarsening or refinement at areas of interest can be applied to keep the computational costs feasible. Moreover, the method does not generate spurious high-frequency contributions in the slip rate spectra and therefore does not require any artificial damping as demonstrated in previous presentations and publications (Pelties et al. 2010 and 2012). We will show that the mentioned features hold also for more advanced setups as e.g. a branching fault system, heterogeneous background stresses and bimaterial faults. The advanced geometrical flexibility combined with an enhanced accuracy will make the ADER-DG method a useful tool to study earthquake dynamics on complex fault systems in realistic rheologies. References: Harris, R.A., M. Barall, R. Archuleta, B. Aagaard, J.-P. Ampuero, H. Bhat, V. Cruz-Atienza, L. Dalguer, P. Dawson, S. Day, B. Duan, E. Dunham, G. Ely, Y. Kaneko, Y. Kase, N. Lapusta, Y. Liu, S. Ma, D. Oglesby, K. Olsen, A. Pitarka, S. Song, and E. Templeton, The SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise, Seismological Research Letters, vol. 80, no. 1, pages 119-126, 2009 Pelties, C., J. de la Puente, and M. Kaeser, Dynamic Rupture Modeling in Three Dimensions on Unstructured Meshes Using a Discontinuous Galerkin Method, AGU 2010 Fall Meeting, abstract #S21C-2068 Pelties, C., J. de la Puente, J.-P. Ampuero, G. Brietzke, and M. Kaeser, Three-Dimensional Dynamic Rupture Simulation with a High-order Discontinuous Galerkin Method on Unstructured Tetrahedral Meshes, JGR. - Solid Earth, VOL. 117, B02309, 2012

  16. Censored Median Regression and Profile Empirical Likelihood

    PubMed Central

    Subramanian, Sundarraman

    2007-01-01

    We implement profile empirical likelihood based inference for censored median regression models. Inference for any specified sub-vector is carried out by profiling out the nuisance parameters from the “plug-in” empirical likelihood ratio function proposed by Qin and Tsao. To obtain the critical value of the profile empirical likelihood ratio statistic, we first investigate its asymptotic distribution. The limiting distribution is a sum of weighted chi square distributions. Unlike for the full empirical likelihood, however, the derived asymptotic distribution has intractable covariance structure. Therefore, we employ the bootstrap to obtain the critical value, and compare the resulting confidence intervals with the ones obtained through Basawa and Koul’s minimum dispersion statistic. Furthermore, we obtain confidence intervals for the age and treatment effects in a lung cancer data set. PMID:19112527

  17. Corrected profile likelihood confidence interval for binomial paired incomplete data.

    PubMed

    Pradhan, Vivek; Menon, Sandeep; Das, Ujjwal

    2013-01-01

    Clinical trials often use paired binomial data as their clinical endpoint. The confidence interval is frequently used to estimate the treatment performance. Tang et al. (2009) have proposed exact and approximate unconditional methods for constructing a confidence interval in the presence of incomplete paired binary data. The approach proposed by Tang et al. can be overly conservative with large expected confidence interval width (ECIW) in some situations. We propose a profile likelihood-based method with a Jeffreys' prior correction to construct the confidence interval. This approach generates confidence intervals with much better coverage probabilities and shorter ECIWs. The performance of the method, along with the corrections, is demonstrated through extensive simulation. Finally, three real-world data sets are analyzed by all the methods. Statistical Analysis System (SAS) codes to execute the profile likelihood-based methods are also presented.

  18. Dynamic Optical Grating Device and Associated Method for Modulating Light

    NASA Technical Reports Server (NTRS)

    Park, Yeonjoon (Inventor); Choi, Sang H. (Inventor); King, Glen C. (Inventor); Chu, Sang-Hyon (Inventor)

    2012-01-01

    A dynamic optical grating device and associated method for modulating light is provided that is capable of controlling the spectral properties and propagation of light without moving mechanical components by the use of a dynamic electric and/or magnetic field. By changing the electric field and/or magnetic field, the index of refraction, the extinction coefficient, the transmittivity, and the reflectivity of the optical grating device may be controlled in order to control the spectral properties of the light reflected or transmitted by the device.

  19. Maximum-Likelihood Detection Of Noncoherent CPM

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, the complexity of which depends on N.

  20. Analysis of Nonlinear Dynamics by Square Matrix Method

    SciTech Connect

    Yu, Li Hua

    2016-07-25

    The nonlinear dynamics of a system with periodic structure can be analyzed using a square matrix. In this paper, we show that because of the special property of the square matrix constructed for nonlinear dynamics, we can reduce the dimension of the matrix from the original large dimension required for high-order calculation to a low dimension in the first step of the analysis. Then a stable Jordan decomposition is obtained with much lower dimension. The transformation to Jordan form provides an excellent action-angle approximation to the solution of the nonlinear dynamics, in good agreement with trajectories and tunes obtained from tracking. More importantly, the deviation from constancy of the new action-angle variable provides a measure of the stability of the phase space trajectories and their tunes. Thus the square matrix provides a novel method to optimize nonlinear dynamic systems. The method is illustrated by many examples of comparison between theory and numerical simulation. Finally, in particular, we show that the square matrix method can be used for optimization to reduce the nonlinearity of a system.

  1. Fast inference in generalized linear models via expected log-likelihoods.

    PubMed

    Ramirez, Alexandro D; Paninski, Liam

    2014-04-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
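    To make the idea concrete, consider a Poisson GLM with the canonical log link and Gaussian covariates: the data-dependent term sum_i exp(x_i' w) in the exact log-likelihood is replaced by n E[exp(x' w)] = n exp(mu' w + w' Sigma w / 2), which uses only the known covariate distribution. The sketch below compares the two objectives on synthetic data; it is a minimal illustration, not the authors' estimators.

    ```python
    # Sketch comparing the exact Poisson-GLM log-likelihood with the "expected
    # log-likelihood" approximation when covariates are Gaussian with known
    # mean and covariance. Synthetic data; illustration only.
    import numpy as np
    from scipy.special import gammaln

    rng = np.random.default_rng(5)
    n, d = 2000, 4
    mu, Sigma = np.zeros(d), 0.3 * np.eye(d)
    X = rng.multivariate_normal(mu, Sigma, size=n)
    w_true = np.array([0.5, -0.3, 0.2, 0.1])
    y = rng.poisson(np.exp(X @ w_true))

    def exact_loglik(w):
        eta = X @ w
        return np.sum(y * eta - np.exp(eta) - gammaln(y + 1))

    def expected_loglik(w):
        # Replace sum_i exp(x_i @ w) by n * E[exp(x @ w)] using the Gaussian MGF.
        eta = X @ w
        e_term = n * np.exp(mu @ w + 0.5 * w @ Sigma @ w)
        return np.sum(y * eta) - e_term - np.sum(gammaln(y + 1))

    print("exact   :", exact_loglik(w_true).round(2))
    print("expected:", expected_loglik(w_true).round(2))
    ```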

  2. Computational Methods for Structural Mechanics and Dynamics, part 1

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.

  3. Vortex element methods for fluid dynamic analysis of engineering systems

    NASA Astrophysics Data System (ADS)

    Lewis, Reginald Ivan

    The surface-vorticity method of computational fluid mechanics is described, with an emphasis on turbomachinery applications, in an introduction for engineers. Chapters are devoted to surface singularity modeling; lifting bodies, two-dimensional airfoils, and cascades; mixed-flow and radial cascades; bodies of revolution, ducts, and annuli; ducted propellers and fans; three-dimensional and meridional flows in turbomachines; free vorticity shear layers and inverse methods; vortex dynamics in inviscid flows; the simulation of viscous diffusion in discrete vortex modeling; vortex-cloud modeling by the boundary-integral method; vortex-cloud models for lifting bodies and cascades; and grid systems for vortex dynamics and meridional flows. Diagrams, graphs, and the listings for a set of computer programs are provided.

  4. System and method for reducing combustion dynamics in a combustor

    SciTech Connect

    Uhm, Jong Ho; Ziminsky, Willy Steve; Johnson, Thomas Edward; Srinivasan, Shiva; York, William David

    2016-11-29

    A system for reducing combustion dynamics in a combustor includes an end cap that extends radially across the combustor and includes an upstream surface axially separated from a downstream surface. A combustion chamber is downstream of the end cap, and tubes extend from the upstream surface through the downstream surface. Each tube provides fluid communication through the end cap to the combustion chamber. The system further includes means for reducing combustion dynamics in the combustor. A method for reducing combustion dynamics in a combustor includes flowing a working fluid through tubes that extend axially through an end cap that extends radially across the combustor and obstructing at least a portion of the working fluid flowing through a first set of the tubes.

  5. A Non-smooth Newton Method for Multibody Dynamics

    SciTech Connect

    Erleben, K.; Ortiz, R.

    2008-09-01

    In this paper we deal with the simulation of rigid bodies. Rigid body dynamics have become very important for simulating rigid body motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contribution of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.

  6. Dimension-independent likelihood-informed MCMC

    SciTech Connect

    Cui, Tiangang; Law, Kody J.H.; Marzouk, Youssef M.

    2016-01-01

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
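    A well-known building block in this dimension-independent family, though much simpler than the likelihood-informed proposals developed here, is the preconditioned Crank-Nicolson (pCN) proposal, whose acceptance ratio involves only the log-likelihood and is therefore robust to mesh refinement. A minimal sketch on a toy Gaussian-prior problem (not the DILI algorithm itself) follows.

    ```python
    # Minimal preconditioned Crank-Nicolson (pCN) MCMC sketch for a target
    # proportional to exp(-Phi(u)) * N(0, C). Toy problem; invented settings.
    import numpy as np

    rng = np.random.default_rng(6)
    dim = 50
    C_sqrt = np.eye(dim)                 # prior covariance C = I for simplicity
    data = 1.0                           # single noisy observation of mean(u)

    def neg_log_lik(u):
        # Phi(u): misfit of the observation "mean(u) = data" with unit noise.
        return 0.5 * (u.mean() - data) ** 2

    beta = 0.2                           # pCN step size in (0, 1]
    u = np.zeros(dim)
    phi_u = neg_log_lik(u)
    accepted = 0
    n_steps = 5000

    for _ in range(n_steps):
        xi = C_sqrt @ rng.standard_normal(dim)            # sample from the prior
        v = np.sqrt(1 - beta**2) * u + beta * xi          # pCN proposal
        phi_v = neg_log_lik(v)
        # Acceptance probability depends only on the likelihood, not the prior.
        if np.log(rng.random()) < phi_u - phi_v:
            u, phi_u = v, phi_v
            accepted += 1

    print("acceptance rate:", accepted / n_steps)
    print("posterior mean of mean(u):", u.mean())
    ```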

  7. Physically constrained maximum likelihood mode filtering.

    PubMed

    Papp, Joseph C; Preisig, James C; Morozov, Andrey K

    2010-04-01

    Mode filtering is most commonly implemented using the sampled mode shapes or pseudoinverse algorithms. Buck et al. [J. Acoust. Soc. Am. 103, 1813-1824 (1998)] placed these techniques in the context of a broader maximum a posteriori (MAP) framework. However, the MAP algorithm requires that the signal and noise statistics be known a priori. Adaptive array processing algorithms are candidates for improving performance without the need for a priori signal and noise statistics. A variant of the physically constrained, maximum likelihood (PCML) algorithm [A. L. Kraay and A. B. Baggeroer, IEEE Trans. Signal Process. 55, 4048-4063 (2007)] is developed for mode filtering that achieves the same performance as the MAP mode filter yet does not need a priori knowledge of the signal and noise statistics. The central innovation of this adaptive mode filter is that the received signal's sample covariance matrix, as estimated by the algorithm, is constrained to be that which can be physically realized given a modal propagation model and an appropriate noise model. Shallow water simulation results are presented showing the benefit of using the PCML method in adaptive mode filtering.

  8. Reducing the likelihood of long tennis matches.

    PubMed

    Barnett, Tristan; Alan, Brown; Pollard, Graham

    2006-01-01

    Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key points: (1) The cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match. (2) A final tiebreaker set reduces the length of matches as currently being used in the US Open. (3) A new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.

  9. Dimension-independent likelihood-informed MCMC

    SciTech Connect

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.

  10. Dimension-independent likelihood-informed MCMC

    DOE PAGES

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.

  11. Population Dynamics of the Stationary Phase Utilizing the ARGOS Method

    NASA Astrophysics Data System (ADS)

    Algarni, S.; Charest, A. J.; Iannacchione, G. S.

    2015-03-01

    The Area Recorded Generalized Optical Scattering (ARGOS) approach to light scattering employs a large image-capture array, allowing for a well-defined geometry in which images may be manipulated to extract structure, via the intensity at a specific scattering wave vector I(q), and dynamics, via the time-dependent intensity I(q,t). The ARGOS method provides morphological dynamics noninvasively over a long time period and allows for a variety of aqueous conditions. This is important because traditional growth models do not provide for conditions similar to the natural environment. The present study found that the population dynamics of bacteria do not follow a traditional growth model and that the ARGOS method allowed for the observation of bacterial changes in terms of individual particles and population dynamics in real time. The observations of relative total intensity suggest that there is no stationary phase and that the bacterial population demonstrates sinusoidal-type patterns consistently subsequent to the log-phase growth. These observations were compared to shape changes by modeling fractal dimension and to size changes by modeling effective radius.

  12. Analysis of nonlinear dynamics by square matrix method

    NASA Astrophysics Data System (ADS)

    Yu, Li Hua

    2017-03-01

    The nonlinear dynamics of a system with periodic structure can be analyzed using a square matrix. We show that because of the special property of the square matrix constructed for nonlinear dynamics, we can reduce the dimension of the matrix from the original large dimension required for high-order calculations to a low dimension in the first step of the analysis. Then a stable Jordan decomposition is obtained with much lower dimension. The Jordan decomposition leads to a transformation to a new variable, which is an accurate action-angle variable, in good agreement with trajectories and tunes obtained from tracking. More importantly, the deviation from constancy of the new action-angle variable provides a measure of the stability of the phase space trajectories and tune fluctuation. Thus the square matrix theory shows good potential for the theoretical understanding of complicated dynamical systems and for guiding the optimization of dynamic apertures. The method is illustrated by many examples of comparison between theory and numerical simulation. In particular, we show that the square matrix method can be used for fast optimization to reduce the nonlinearity of a system.

  13. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
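    The backbone of this type of reconstruction is a kernelized EM (MLEM) update, in which the image is represented as x = K alpha with K built from the MR image and the EM update is applied to the coefficients alpha. The toy sketch below shows that update with a made-up system matrix and kernel, and without the temporal (spectral) basis functions used in the paper.

    ```python
    # Toy sketch of the kernelized MLEM update used in kernel-method PET
    # reconstruction: image x = K @ alpha, data y ~ Poisson(A @ x).
    # System matrix A and kernel K are small synthetic stand-ins.
    import numpy as np

    rng = np.random.default_rng(7)
    n_pix, n_bins = 64, 128

    A = rng.random((n_bins, n_pix)) * 0.1          # made-up system matrix
    # Made-up kernel matrix (row-normalized similarities from a "prior image").
    prior = rng.random(n_pix)
    K = np.exp(-(prior[:, None] - prior[None, :]) ** 2 / 0.01)
    K /= K.sum(axis=1, keepdims=True)

    x_true = rng.random(n_pix) * 10.0
    y = rng.poisson(A @ x_true)                    # simulated sinogram counts

    alpha = np.ones(n_pix)                         # initial coefficients
    sens = K.T @ (A.T @ np.ones(n_bins))           # sensitivity term K^T A^T 1
    for it in range(50):
        expected = A @ (K @ alpha) + 1e-12         # forward projection
        alpha *= (K.T @ (A.T @ (y / expected))) / sens
    x_rec = K @ alpha                              # reconstructed image

    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
    ```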

  14. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.

    PubMed

    Novosad, Philip; Reader, Andrew J

    2016-06-21

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral

  15. Parallel methods for dynamic simulation of multiple manipulator systems

    NASA Technical Reports Server (NTRS)

    Mcmillan, Scott; Sadayappan, P.; Orin, David E.

    1993-01-01

    In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.

  16. A Dynamic Interval Decision-Making Method Based on GRA

    NASA Astrophysics Data System (ADS)

    Xue-jun, Tang; Jia, Chen

    According to the basic theory of grey relational analysis, this paper constructs a three-dimensional grey interval relational degree model over the three dimensions of time, index, and scheme. On this basis, it sets up and solves a single-objective optimization model, obtains each scheme's degree of association with the positive/negative ideal scheme, and ranks the schemes accordingly. The results show that the three-dimensional grey relational degree simplifies the traditional dynamic multi-attribute decision-making method and can better handle dynamic multi-attribute decision-making problems with interval numbers. Finally, the paper demonstrates the practicality and efficiency of the model through a case study.
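    As a reminder of the underlying machinery, classical grey relational analysis compares each scheme's normalized index sequence with a reference sequence via the grey relational coefficient xi_i(k) = (D_min + rho * D_max) / (D_i(k) + rho * D_max). The sketch below computes plain two-dimensional grey relational grades for a small made-up decision matrix; the paper's three-dimensional, time-indexed interval extension is not implemented.

    ```python
    # Plain grey relational analysis (GRA) sketch: compute grey relational grades
    # of each scheme against the ideal (reference) scheme. Decision matrix is
    # invented; this is the classical 2-D GRA, not the paper's 3-D interval model.
    import numpy as np

    # Rows = schemes, columns = benefit-type indices (already on comparable scales).
    X = np.array([
        [0.80, 0.60, 0.90],
        [0.70, 0.85, 0.75],
        [0.95, 0.55, 0.65],
    ])
    rho = 0.5                                  # distinguishing coefficient

    # Normalize each column to [0, 1] (benefit criteria: larger is better).
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    ref = Xn.max(axis=0)                       # positive ideal scheme

    delta = np.abs(Xn - ref)                   # absolute differences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)   # grey relational coefficients
    grades = xi.mean(axis=1)                   # equal-weight grey relational grades

    ranking = np.argsort(-grades)
    print("grades :", grades.round(3))
    print("ranking:", ranking + 1)             # schemes ranked best to worst
    ```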

  17. Analysis methods for wind turbine control and electrical system dynamics

    NASA Technical Reports Server (NTRS)

    Hinrichsen, E. N.

    1995-01-01

    The integration of new energy technologies into electric power systems requires methods which recognize the full range of dynamic events in both the new generating unit and the power system. Since new energy technologies are initially perceived as small contributors to large systems, little attention is generally paid to system integration, i.e. dynamic events in the power system are ignored. As a result, most new energy sources are only capable of base-load operation, i.e. they have no load following or cycling capability. Wind turbines are no exception. Greater awareness of this implicit (and often unnecessary) limitation is needed. Analysis methods are recommended which include very low penetration (infinite bus) as well as very high penetration (stand-alone) scenarios.

  18. Search area Expanding Strategy and Dynamic Priority Setting Method in the Improved 2-opt Method

    NASA Astrophysics Data System (ADS)

    Matayoshi, Mitsukuni; Nakamura, Morikazu; Miyagi, Hayao

    We propose a new 2-opt-based method within a memetic algorithm, that is, a genetic algorithm (GA) combined with a local search. The basic idea derives from the fast 2-opt method (1) and the improved 2-opt method (20). Our new search method uses the "Priority" employed in the improved 2-opt method, where the Priority represents each gene exchange's level of contribution to fitness improvement; Matayoshi's method exchanges genes based on their previous contribution to the fitness value. Building on this concept, we propose the search-area expanding strategy for the improved 2-opt method, which expands the search area according to the Priority. Computer experiments show that the computation time needed to find the exact solution depends on the Priority value. Because an appropriate priority cannot always be set beforehand, we also propose a method that adapts the priority automatically: if no improvement is achieved for a certain number of generations, the dynamic priority setting method modifies the priority through a mutation operation. Experimental results show that the search-area expanding strategy combined with dynamic priority setting finds the exact solution in earlier generations than the comparison methods.
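    For reference, the plain 2-opt move that all of these variants build on reverses a segment of the tour whenever doing so shortens it. The sketch below is a standard 2-opt local search on random cities; it includes neither the Priority bookkeeping nor the GA wrapper described in the paper.

    ```python
    # Standard 2-opt local search for the TSP (the baseline move that the
    # improved-2-opt and priority-based variants extend). Cities are random.
    import numpy as np

    rng = np.random.default_rng(8)
    n = 30
    cities = rng.random((n, 2))

    def tour_length(tour):
        pts = cities[tour]
        return np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1).sum()

    def two_opt(tour):
        improved = True
        while improved:
            improved = False
            for i in range(1, n - 1):
                for j in range(i + 1, n):
                    # Reverse the segment tour[i:j+1] and keep it if shorter.
                    candidate = np.concatenate(
                        [tour[:i], tour[i:j + 1][::-1], tour[j + 1:]])
                    if tour_length(candidate) < tour_length(tour):
                        tour = candidate
                        improved = True
        return tour

    tour = np.arange(n)
    print("initial length:", tour_length(tour).round(3))
    tour = two_opt(tour)
    print("2-opt length  :", tour_length(tour).round(3))
    ```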

  19. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods are proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation and 2) they fail to learn personalized correspondence for each probe image. To this end, we first build a model, termed the morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on the maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in complex, unconstrained environments, i.e., the Labeled Faces in the Wild database.

  20. A method for analyzing dynamic stall of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Crimi, P.; Reeves, B. L.

    1972-01-01

    A model for each of the basic flow elements involved in the unsteady stall of a two-dimensional airfoil in incompressible flow is presented. The interaction of these elements is analyzed using a digital computer. Computations of the loading during transient and sinusoidal pitching motions are in good qualitative agreement with measured loads. The method was used to confirm that large torsional response of helicopter blades detected in flight tests can be attributed to dynamic stall.

  1. Dynamic Data Driven Methods for Self-aware Aerospace Vehicles

    DTIC Science & Technology

    2015-04-08

    Subject terms: dynamic data driven application systems (DDDAS); surrogate modeling; reduced order modeling; multifidelity methods; self-aware UAV.

  2. Advanced three-dimensional dynamic analysis by boundary element methods

    NASA Technical Reports Server (NTRS)

    Banerjee, P. K.; Ahma, S.

    1985-01-01

    Advanced formulations of boundary element method for periodic, transient transform domain and transient time domain solution of three-dimensional solids have been implemented using a family of isoparametric boundary elements. The necessary numerical integration techniques as well as the various solution algorithms are described. The developed analysis has been incorporated in a fully general purpose computer program BEST3D which can handle up to 10 subregions. A number of numerical examples are presented to demonstrate the accuracy of the dynamic analyses.

  3. Least-squares finite element method for fluid dynamics

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1989-01-01

    An overview is given of new developments of the least squares finite element method (LSFEM) in fluid dynamics. Special emphasis is placed on the universality of LSFEM; the symmetry and positiveness of the algebraic systems obtained from LSFEM; the accommodation of LSFEM to equal order interpolations for incompressible viscous flows; and the natural numerical dissipation of LSFEM for convective transport problems and high speed compressible flows. The performance of LSFEM is illustrated by numerical examples.

  4. Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)

    PubMed Central

    Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.

    2015-01-01

    Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly-impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participants’ personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual’s data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
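    A stripped-down illustration of distributed likelihood estimation: each simulated "device" holds its own data and exposes only a function that returns the log-likelihood of that data for a given parameter vector; the central optimizer sums these values and never sees the raw data. This is a conceptual sketch with plain function calls standing in for networked devices, not the MIDDLE software.

    ```python
    # Conceptual sketch of distributed likelihood estimation: each "device"
    # returns only the log-likelihood of its private data for a candidate
    # parameter vector; the optimizer aggregates the returned scalars.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(9)

    def make_device(private_data):
        # The closure keeps the data private; only a scalar log-likelihood leaves.
        def local_loglik(params):
            mu, log_sigma = params
            return norm.logpdf(private_data, loc=mu, scale=np.exp(log_sigma)).sum()
        return local_loglik

    # Each participant's data stays on their own "device".
    devices = [make_device(rng.normal(loc=2.0, scale=1.5, size=rng.integers(20, 60)))
               for _ in range(25)]

    def negative_total_loglik(params):
        # The central optimizer only aggregates the returned log-likelihood values.
        return -sum(device(params) for device in devices)

    result = minimize(negative_total_loglik, x0=np.array([0.0, 0.0]),
                      method="Nelder-Mead")
    mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
    print(f"estimated mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
    ```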

  6. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  7. Partial order optimum likelihood (POOL): maximum likelihood prediction of protein active site residues using 3D Structure and sequence properties.

    PubMed

    Tong, Wenxu; Wei, Ying; Murga, Leonel F; Ondrechen, Mary Jo; Williams, Ronald J

    2009-01-01

    A new monotonicity-constrained maximum likelihood approach, called Partial Order Optimum Likelihood (POOL), is presented and applied to the problem of functional site prediction in protein 3D structures, an important current challenge in genomics. The input consists of electrostatic and geometric properties derived from the 3D structure of the query protein alone. Sequence-based conservation information, where available, may also be incorporated. Electrostatics features from THEMATICS are combined with multidimensional isotonic regression to form maximum likelihood estimates of probabilities that specific residues belong to an active site. This allows likelihood ranking of all ionizable residues in a given protein based on THEMATICS features. The corresponding ROC curves and statistical significance tests demonstrate that this method outperforms prior THEMATICS-based methods, which in turn have been shown previously to outperform other 3D-structure-based methods for identifying active site residues. Then it is shown that the addition of one simple geometric property, the size rank of the cleft in which a given residue is contained, yields improved performance. Extension of the method to include predictions of non-ionizable residues is achieved through the introduction of environment variables. This extension results in even better performance than THEMATICS alone and constitutes to date the best functional site predictor based on 3D structure only, achieving nearly the same level of performance as methods that use both 3D structure and sequence alignment data. Finally, the method also easily incorporates such sequence alignment data, and when this information is included, the resulting method is shown to outperform the best current methods using any combination of sequence alignments and 3D structures. Included is an analysis demonstrating that when THEMATICS features, cleft size rank, and alignment-based conservation scores are used individually or in combination
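
    The monotonicity constraint at the heart of POOL can be illustrated with ordinary one-dimensional isotonic regression. The Python sketch below uses synthetic feature values and labels; the real method uses multidimensional isotonic regression over several THEMATICS-derived properties, so this is only a schematic illustration.

      import numpy as np
      from sklearn.isotonic import IsotonicRegression

      # Synthetic stand-in: one electrostatics-like score per residue and a binary
      # label (1 = known active-site residue).  The fitted curve is constrained to
      # be monotone, so a higher score can never yield a lower probability estimate.
      rng = np.random.default_rng(1)
      score = rng.uniform(0.0, 1.0, size=200)
      label = (rng.uniform(size=200) < 0.2 + 0.6 * score).astype(float)

      iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True, out_of_bounds="clip")
      prob = iso.fit_transform(score, label)

      # Rank residues by their estimated probability of belonging to an active site.
      ranking = np.argsort(-prob)
      print("top-ranked residues:", ranking[:10])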

  8. Comparing the Performance of Two Dynamic Load Distribution Methods

    NASA Technical Reports Server (NTRS)

    Kale, L. V.

    1987-01-01

    Parallel processing of symbolic computations on a message-passing multiprocessor presents one challenge: to utilize the available processors effectively, the load must be distributed uniformly among them. However, the structure of these computations cannot be predicted in advance, so static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods through extensive simulation studies. The two schemes are the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that, although simpler, CWN is significantly more effective at distributing the work than the Gradient Model.

  9. Numerical continuation methods for large-scale dissipative dynamical systems

    NASA Astrophysics Data System (ADS)

    Umbría, Juan Sánchez; Net, Marta

    2016-11-01

    A tutorial on continuation and bifurcation methods for the analysis of truncated dissipative partial differential equations is presented. It focuses on the computation of equilibria, periodic orbits, their loci of codimension-one bifurcations, and invariant tori. To make it more self-contained, it includes some definitions of basic concepts of dynamical systems, and some preliminaries on the general underlying techniques used to solve non-linear systems of equations by inexact Newton methods, and eigenvalue problems by means of subspace or Arnoldi iterations.

  10. Improved sensitivity of dynamic CT with a new visualization method for radial distribution of lung nodule enhancement

    NASA Astrophysics Data System (ADS)

    Wiemker, Rafael; Wormanns, Dag; Beyer, Florian; Blaffert, Thomas; Buelow, Thomas

    2005-04-01

    For differential diagnosis of pulmonary nodules, assessment of contrast enhancement on chest CT scans after administration of contrast agent has been suggested. The likelihood of malignancy is considered very low if the contrast enhancement is below a certain threshold (10-20 HU). Automated average-density measurement methods have been developed for that purpose. However, a certain fraction of malignant nodules does not exhibit significant enhancement when averaged over the whole nodule volume. The purpose of this paper is to test a new method for reducing false negative results. We have investigated a method that shows not only a single averaged contrast enhancement number but a more detailed enhancement curve for each nodule, giving the enhancement as a function of distance to the boundary. A test set consisting of 11 malignant and 11 benign pulmonary lesions was used for validation, with diagnoses known from biopsy or follow-up of more than 24 months. For each nodule, dynamic CT scans were available: the unenhanced native scan and scans 60, 120, 180 and 240 seconds after onset of contrast injection (1-4 mm reconstructed slice thickness). The suggested method for measurement and visualization of contrast enhancement as radially resolved curves reduced false negative results (apparently unenhancing but truly malignant nodules) and thus improved sensitivity. It proved to be a valuable tool for differential diagnosis between malignant and benign lesions using dynamic CT.

  11. Dynamic Analysis of a Spur Gear by the Dynamic Stiffness Method

    NASA Astrophysics Data System (ADS)

    HUANG, K. J.; LIU, T. S.

    2000-07-01

    This study treats a spur gear tooth as a variable cross-section Timoshenko beam to construct a dynamic model, being able to obtain transient response for spur gears of involute profiles. The dynamic responses of a single tooth and a gear pair are investigated. Firstly, polynomials are used to represent the gear blank and the tooth profile. The dynamic stiffness matrix and natural frequencies of the gear are in turn calculated. The forced response of a tooth subject to a shaft-driven transmission torque is calculated by performing modal analysis. This study takes into account time-varying stiffness and mass matrices and the gear meshing forces at moving meshing points. The forced response at arbitrary points in a gear tooth can be obtained. Calculation results of fillet stresses and strains are compared with those in the literature to verify the proposed method.

  12. A Method for Molecular Dynamics on Curved Surfaces

    PubMed Central

    Paquay, Stefan; Kusters, Remy

    2016-01-01

    Dynamics simulations of constrained particles can greatly aid in understanding the temporal and spatial evolution of biological processes such as lateral transport along membranes and self-assembly of viruses. Most theoretical efforts in the field of diffusive transport have focused on solving the diffusion equation on curved surfaces, for which it is not tractable to incorporate particle interactions even though these play a crucial role in crowded systems. We show here that it is possible to take such interactions into account by combining standard constraint algorithms with the classical velocity Verlet scheme to perform molecular dynamics simulations of particles constrained to an arbitrarily curved surface. Furthermore, unlike Brownian dynamics schemes in local coordinates, our method is based on Cartesian coordinates, allowing for the reuse of many other standard tools without modifications, including parallelization through domain decomposition. We show that by applying the schemes to the Langevin equation for various surfaces, we obtain confined Brownian motion, which has direct applications to many biological and physical problems. Finally we present two practical examples that highlight the applicability of the method: 1) the influence of crowding and shape on the lateral diffusion of proteins in curved membranes; and 2) the self-assembly of a coarse-grained virus capsid protein model. PMID:27028633
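
    A heavily simplified Python sketch of the constrained-dynamics idea: velocity Verlet steps followed by projection of positions back onto a sphere and of velocities onto the local tangent plane. The pair potential, particle numbers and projection scheme are illustrative assumptions of this sketch, not the authors' RATTLE-based algorithm.

      import numpy as np

      R = 1.0                      # radius of the spherical surface
      rng = np.random.default_rng(2)

      def to_sphere(x):
          return R * x / np.linalg.norm(x, axis=1, keepdims=True)

      def tangential(v, x):
          n = x / np.linalg.norm(x, axis=1, keepdims=True)
          return v - np.sum(v * n, axis=1, keepdims=True) * n

      def forces(x, k=5.0, cutoff=0.5):
          # Soft pairwise repulsion between particles closer than `cutoff`.
          f = np.zeros_like(x)
          for i in range(len(x)):
              d = x[i] - x
              r = np.linalg.norm(d, axis=1)
              m = (r > 1e-12) & (r < cutoff)
              f[i] = np.sum(k * (cutoff - r[m])[:, None] * d[m] / r[m][:, None], axis=0)
          return f

      n, dt, steps = 50, 1e-3, 2000
      x = to_sphere(rng.normal(size=(n, 3)))
      v = tangential(rng.normal(scale=0.1, size=(n, 3)), x)

      for _ in range(steps):                             # velocity Verlet with projections
          v += 0.5 * dt * forces(x)
          x = to_sphere(x + dt * v)                      # enforce the surface constraint
          v = tangential(v + 0.5 * dt * forces(x), x)    # keep velocities tangent

      print("max deviation from the sphere:", float(np.max(np.abs(np.linalg.norm(x, axis=1) - R))))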

  13. A Poisson-Boltzmann dynamics method with nonperiodic boundary condition

    NASA Astrophysics Data System (ADS)

    Lu, Qiang; Luo, Ray

    2003-12-01

    We have developed a well-behaved and efficient finite difference Poisson-Boltzmann dynamics method with a nonperiodic boundary condition. This is made possible, in part, by a rather fine grid spacing used for the finite difference treatment of the reaction field interaction. The stability is also made possible by a new dielectric model that is smooth both over time and over space, an important issue in the application of implicit solvents. In addition, the electrostatic focusing technique facilitates the use of an accurate yet efficient nonperiodic boundary condition: boundary grid potentials computed by the sum of potentials from individual grid charges. Finally, the particle-particle particle-mesh technique is adopted in the computation of the Coulombic interaction to balance accuracy and efficiency in simulations of large biomolecules. Preliminary testing shows that the nonperiodic Poisson-Boltzmann dynamics method is numerically stable in trajectories at least 4 ns long. The new model is also fairly efficient: it is comparable to that of the pairwise generalized Born solvent model, making it a strong candidate for dynamics simulations of biomolecules in dilute aqueous solutions. Note that the current treatment of total electrostatic interactions is with no cutoff, which is important for simulations of biomolecules. Rigorous treatment of the Debye-Hückel screening is also possible within the Poisson-Boltzmann framework: its importance is demonstrated by a simulation of a highly charged protein.

  14. Fast inference in generalized linear models via expected log-likelihoods

    PubMed Central

    Ramirez, Alexandro D.; Paninski, Liam

    2015-01-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
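
    A small Python illustration of the approximation for a Poisson GLM with exponential link and standard-normal covariates (a toy setup chosen here, not the paper's retinal data): the data-dependent sum over exp(x_i . theta) is replaced by n times E[exp(x . theta)], which has a closed form in this design, so each evaluation costs O(p) instead of O(np).

      import numpy as np

      rng = np.random.default_rng(3)
      n, p = 100_000, 5
      X = rng.normal(size=(n, p))                      # covariates with a known distribution
      theta_true = rng.normal(scale=0.3, size=p)
      y = rng.poisson(np.exp(X @ theta_true))          # Poisson GLM, exponential link

      def exact_loglik(theta):
          eta = X @ theta
          return float(y @ eta - np.sum(np.exp(eta)))  # O(n p) per evaluation

      s = y @ X                                        # sufficient statistic, computed once
      def expected_loglik(theta):
          # sum_i exp(x_i . theta) is replaced by n * E[exp(x . theta)];
          # for x ~ N(0, I) that expectation is exp(|theta|^2 / 2).
          return float(s @ theta - n * np.exp(0.5 * theta @ theta))

      theta = rng.normal(scale=0.3, size=p)
      print(exact_loglik(theta), expected_loglik(theta))   # close for this design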

  15. Evaluation of the sensing block method for dynamic force measurement

    NASA Astrophysics Data System (ADS)

    Zhang, Qinghui; Chen, Hao; Li, Wenzhao; Song, Li

    2017-01-01

    The sensing block method was proposed for dynamic force measurement by Tanimura et al. in 1994. Compared with the split Hopkinson pressure bar (SHPB) technique, it can provide a much longer measuring time for testing the dynamic properties of materials. However, the signals recorded by a sensing block are always accompanied by additional oscillations. Tanimura et al. discussed the effect of the force rising edge on the test results, but more research is still needed. In this paper, some further dominant factors are extracted through dimensional analysis, and finite element simulations are performed to assess them. Based on the analysis and simulation, some valuable results are obtained, and the criteria proposed in this paper can be applied to the design or selection of a sensing block.

  16. System and method for reducing combustion dynamics in a combustor

    DOEpatents

    Uhm, Jong Ho; Johnson, Thomas Edward; Zuo, Baifang; York, William David

    2015-09-01

    A system for reducing combustion dynamics in a combustor includes an end cap having an upstream surface axially separated from a downstream surface, and tube bundles extend from the upstream surface through the downstream surface. A divider inside a tube bundle defines a diluent passage that extends axially through the downstream surface, and a diluent supply in fluid communication with the divider provides diluent flow to the diluent passage. A method for reducing combustion dynamics in a combustor includes flowing a fuel through tube bundles, flowing a diluent through a diluent passage inside a tube bundle, wherein the diluent passage extends axially through at least a portion of the end cap into a combustion chamber, and forming a diluent barrier in the combustion chamber between the tube bundle and at least one other adjacent tube bundle.

  17. Dynamic Methods for Investigating the Conformational Changes of Biological Macromolecules

    NASA Astrophysics Data System (ADS)

    Vidolova-Angelova, E.; Peshev, Z.; Shaquiri, Z.; Angelov, D.

    2010-01-01

    Fast conformational changes of biological macromolecules, such as RNA folding and DNA-protein interactions, play a crucial role in their biological functions. These conformational changes are thought to take place on time scales from sub-milliseconds to a few seconds. The development of appropriate dynamic methods possessing both high spatial (one nucleotide) and temporal resolution is therefore of considerable interest. Here, we present two different approaches we developed for studying nucleic acid conformational changes, namely salt-induced tRNA folding and the interaction of the transcription factor NF-κB with its recognition DNA sequence. Importantly, a single laser pulse is sufficient for accurately measuring the whole decay curve. This feature can be exploited in dynamical experiments.

  18. Hybrid pairwise likelihood analysis of animal behavior experiments.

    PubMed

    Cattelan, Manuela; Varin, Cristiano

    2013-12-01

    The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons.

  19. Informative Parameters of Dynamic Geo-electricity Methods

    NASA Astrophysics Data System (ADS)

    Tursunmetov, R.

    With the growing complexity of geological tasks and the need to reveal anomalous zones associated with ore, oil, gas and water, methods of dynamic geo-electricity have come into use. In these methods the geological environment is treated as an interphase, irregular medium. Its main dynamic element is the double electric layer that develops on the boundary between the solid and liquid phases. In ore- or water-saturated environments, double electric layers become electrochemically or electrokinetically active elements of the geo-electric environment, which in turn generate a natural electric field. This field influences the distribution of artificially created fields, and their interaction has a complicated superposition or nonlinear character. The geological environment is therefore considered an active medium, able to accumulate and transform artificially superposed fields. Its main dynamic property is the nonlinear behavior of specific electric resistance and soil polarization with respect to current density and measurement frequency, which serve as informative parameters for dynamic geo-electricity methods. The study of the electric properties of disperse soils in an impulse-frequency regime, together with the temporal and frequency characteristics of the electric field, is of main interest for defining geo-electric anomalies. The volt-ampere characteristics of the electromagnetic field are of considerable practical significance; they are determined by electrochemically active ore- and water-saturated zones. These parameters depend on the polarity of the applied field, in particular on the character, composition and mineralization of the ore-saturated zone and on the presence of a natural electric field under cathode and anode mineralization. The nonlinear behavior of the environment's dynamic properties affects the structure of the applied field, which allows anomalous zones to be located. Finally, the study of the dynamic properties of soil anisotropy in space will allow filtration flows to be identified.

  20. Some splitting methods for equations of geophysical fluid dynamics

    NASA Astrophysics Data System (ADS)

    Ji, Zhongzhen; Wang, Bin

    1995-03-01

    In this paper, the equations of atmospheric and oceanic dynamics are reduced to a kind of evolutionary equation in operator form. On this basis it is concluded that the separability of motion stages is relative, and it is shown that the traditional splitting methods, which rest on the physical separability of the fast and slow stages, neglect the interaction between the two stages to some extent. Three splitting patterns are then summarized from the splitting methods in common use and compared. The comparison shows that only the improved splitting pattern (ISP) can be second-order accurate and preserve the interaction well. Finally, applications of several splitting methods to numerical simulations of typhoon tracks show that ISP performs best and can save more than 80% of the CPU time.

  1. A novel method to study cerebrospinal fluid dynamics in rats

    PubMed Central

    Karimy, Jason K.; Kahle, Kristopher T.; Kurland, David B.; Yu, Edward; Gerzanich, Volodymyr; Simard, J. Marc

    2014-01-01

    Background Cerebrospinal fluid (CSF) flow dynamics play critical roles in both the immature and adult brain, with implications for neurodevelopment and disease processes such as hydrocephalus and neurodegeneration. Remarkably, the only reported method to date for measuring CSF formation in laboratory rats is the indirect tracer dilution method (a.k.a., ventriculocisternal perfusion), which has limitations. New Method Anesthetized rats were mounted in a stereotaxic apparatus, both lateral ventricles were cannulated, and the Sylvian aqueduct was occluded. Fluid exited one ventricle at a rate equal to the rate of CSF formation plus the rate of infusion (if any) into the contralateral ventricle. Pharmacological agents infused at a constant known rate into the contralateral ventricle were tested for their effect on CSF formation in real-time. Results The measured rate of CSF formation was increased by blockade of the Sylvian aqueduct but was not changed by increasing the outflow pressure (0–3 cm of H2O). In male Wistar rats, CSF formation was age-dependent: 0.39±0.06, 0.74±0.05, 1.02±0.04 and 1.40±0.06 µL/min at 8, 9, 10 and 12 weeks, respectively. CSF formation was reduced 57% by intraventricular infusion of the carbonic anhydrase inhibitor, acetazolamide. Comparison with existing methods Tracer dilution methods do not permit ongoing real-time determination of the rate of CSF formation, are not readily amenable to pharmacological manipulations, and require critical assumptions. Direct measurement of CSF formation overcomes these limitations. Conclusions Direct measurement of CSF formation in rats is feasible. Our method should prove useful for studying CSF dynamics in normal physiology and disease models. PMID:25554415

  2. Applicability of optical scanner method for fine root dynamics

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Ohashi, Mizue; Makita, Naoki; Khoon Kho, Lip; Katayama, Ayumi; Matsumoto, Kazuho; Ikeno, Hidetoshi

    2016-04-01

    Fine root dynamics is one of the important components of forest carbon cycling, as ~60 % of tree photosynthetic production can be allocated to root growth and metabolic activities. Various techniques have been developed for monitoring fine root biomass, production, and mortality in order to understand the carbon pools and fluxes resulting from fine root dynamics. The minirhizotron method is now a widely used technique, in which a transparent tube is inserted into the soil and researchers count the appearance and disappearance of roots along the tube using images taken by a minirhizotron camera or minirhizotron video camera inside the tube. This method allows root behavior to be observed directly and non-destructively, but it has several weaknesses, e.g., the difficulty of scaling the results up to stand level because of the small observation windows. Also, most of the image analysis is performed manually, which may yield insufficiently quantitative and objective data. Recently, a scanner method has been proposed, which can produce much larger (A4-size) images at lower cost than the minirhizotron methods. However, laborious and time-consuming image analysis still limits the applicability of this method. In this study, therefore, we aimed to develop a new protocol for scanner image analysis to extract root behavior in soil. We evaluated the applicability of this method in two ways: 1) the impact of different observers, including root-study professionals, semi-professionals and non-professionals, on the detected results of root dynamics such as abundance, growth, and decomposition, and 2) the impact of window size on the results, using a random-sampling exercise. We applied our new protocol to analyze temporal changes in root behavior from sequential scanner images from a Bornean tropical forest. The results detected by the six observers showed considerable concordance in the temporal changes in the abundance and growth of fine roots, but less in decomposition. We also examined

  3. A dynamic calibration method for the pressure transducer

    NASA Astrophysics Data System (ADS)

    Wang, Zhongyu; Wang, Zhuoran; Li, Qiang

    2016-01-01

    Pressure transducers are widely used in industry. A calibrated pressure transducer can improve the performance of the precision instruments it is mechanically coupled to, and calibration is the key to ensuring that a pressure transducer has high precision and good dynamic characteristics. Unfortunately, current calibration methods can usually be applied only under well-controlled laboratory conditions, and only one pressure transducer can be calibrated at a time, so the calibration efficiency can hardly meet the requirements of modern industry. A dynamic and fast calibration technology, comprising a calibration device and a corresponding data-processing method, is proposed in this paper. Firstly, the pressure transducers to be calibrated are placed in a small cavity chamber, and the calibration process consists of a single loop; the outputs of each calibrated transducer are recorded automatically by the control terminal. Secondly, LabView programming is used for data acquisition and processing, from which the repeatability and nonlinearity indicators can be obtained directly. Finally, several pressure transducers are calibrated simultaneously in an experiment to verify the suggested calibration technology. The experimental results show that this method can be used to calibrate pressure transducers in practical engineering measurement.

  4. Coupled-cluster methods for core-hole dynamics

    NASA Astrophysics Data System (ADS)

    Picon, Antonio; Cheng, Lan; Hammond, Jeff R.; Stanton, John F.; Southworth, Stephen H.

    2014-05-01

    Coupled cluster (CC) is a powerful numerical method used in quantum chemistry in order to take into account electron correlation with high accuracy and size consistency. In the CC framework, excited, ionized, and electron-attached states can be described by the equation of motion (EOM) CC technique. However, bringing CC methods to describe molecular dynamics induced by x rays is challenging. X rays have the special feature of interacting with core-shell electrons that are close to the nucleus. Core-shell electrons can be ionized or excited to a valence shell, leaving a core-hole that will decay very fast (e.g. 2.4 fs for K-shell of Ne) by emitting photons (fluorescence process) or electrons (Auger process). Both processes are a clear manifestation of a many-body effect, involving electrons in the continuum in the case of Auger processes. We review our progress of developing EOM-CC methods for core-hole dynamics. Results of the calculations will be compared with measurements on core-hole decays in atomic Xe and molecular XeF2. This work is funded by the Office of Basic Energy Sciences, Office of Science, U.S. Department of Energy, under Contract No. DE-AC02-06CH11357.

  5. Numerical likelihood analysis of cosmic ray anisotropies

    SciTech Connect

    Carlos Hojvat et al.

    2003-07-02

    A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended in order to include measurement errors, and makes maximum unbiased use of all available information.

  6. Quantum dynamics by the constrained adiabatic trajectory method

    SciTech Connect

    Leclerc, A.; Jolicard, G.; Guerin, S.; Killingbeck, J. P.

    2011-03-15

    We develop the constrained adiabatic trajectory method (CATM), which allows one to solve the time-dependent Schroedinger equation constraining the dynamics to a single Floquet eigenstate, as if it were adiabatic. This constrained Floquet state (CFS) is determined from the Hamiltonian modified by an artificial time-dependent absorbing potential whose forms are derived according to the initial conditions. The main advantage of this technique for practical implementation is that the CFS is easy to determine even for large systems since its corresponding eigenvalue is well isolated from the others through its imaginary part. The properties and limitations of the CATM are explored through simple examples.

  7. A method for the evaluation of wide dynamic range cameras

    NASA Astrophysics Data System (ADS)

    Wong, Ping Wah; Lu, Yu Hua

    2012-01-01

    We propose a multi-component metric for the evaluation of digital or video cameras under wide dynamic range (WDR) scenes. The method is based on a single image capture using a specifically designed WDR test chart and light box. Test patterns on the WDR test chart include gray ramps, color patches, arrays of gray patches, white bars, and a relatively dark gray background. The WDR test chart is professionally made using 3 layers of transparencies to produce a contrast ratio of approximately 110 dB for WDR testing. A light box is designed to provide a uniform surface with light level at about 80K to 100K lux, which is typical of a sunny outdoor scene. From a captured image, 9 image quality component scores are calculated. The components include number of resolvable gray steps, dynamic range, linearity of tone response, grayness of gray ramp, number of distinguishable color patches, smearing resistance, edge contrast, grid clarity, and weighted signal-to-noise ratio. A composite score is calculated from the 9 component scores to reflect the comprehensive image quality in cameras under WDR scenes. Experimental results have demonstrated that the multi-component metric corresponds very well to subjective evaluation of wide dynamic range behavior of cameras.

  8. Recent developments in maximum likelihood estimation of MTMM models for categorical data.

    PubMed

    Jeon, Minjeong; Rijmen, Frank

    2014-01-01

    Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of this study is to introduce three newly developed ML methods that are suitable for estimating MTMM models with categorical responses: variational maximization-maximization (e.g., Rijmen and Jeon, 2013), alternating imputation posterior (e.g., Cho and Rabe-Hesketh, 2011), and Monte Carlo local likelihood (e.g., Jeon et al., under revision). Each method is briefly described, and its applicability to MTMM models with categorical data is discussed.

  9. Multiscale molecular dynamics using the matched interface and boundary method

    SciTech Connect

    Geng Weihua; Wei, G.W.

    2011-01-20

    The Poisson-Boltzmann (PB) equation is an established multiscale model for electrostatic analysis of biomolecules and other dielectric systems. PB based molecular dynamics (MD) approach has a potential to tackle large biological systems. Obstacles that hinder the current development of PB based MD methods are concerns in accuracy, stability, efficiency and reliability. The presence of complex solvent-solute interface, geometric singularities and charge singularities leads to challenges in the numerical solution of the PB equation and electrostatic force evaluation in PB based MD methods. Recently, the matched interface and boundary (MIB) method has been utilized to develop the first second order accurate PB solver that is numerically stable in dealing with discontinuous dielectric coefficients, complex geometric singularities and singular source charges. The present work develops the PB based MD approach using the MIB method. New formulation of electrostatic forces is derived to allow the use of sharp molecular surfaces. Accurate reaction field forces are obtained by directly differentiating the electrostatic potential. Dielectric boundary forces are evaluated at the solvent-solute interface using an accurate Cartesian-grid surface integration method. The electrostatic forces located at reentrant surfaces are appropriately assigned to related atoms. Extensive numerical tests are carried out to validate the accuracy and stability of the present electrostatic force calculation. The new PB based MD method is implemented in conjunction with the AMBER package. MIB based MD simulations of biomolecules are demonstrated via a few example systems.

  10. A new method for parameter estimation in nonlinear dynamical equations

    NASA Astrophysics Data System (ADS)

    Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao

    2015-01-01

    Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). The method exploits the self-organizing, adaptive and self-learning features of EM, which are inspired by natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated with various numerical tests on the classic chaotic Lorenz equations (Lorenz 1963). The results indicate that the new method can be used for fast and effective parameter estimation regardless of whether some or all parameters of the Lorenz equations are unknown, and it has a good convergence rate. Noise is inevitable in observational data, so the influence of observational noise on the performance of the presented method has also been investigated. The results indicate that strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise. However, the precision of the parameter estimates remains acceptable for relatively weak noise, e.g. an SNR of 20 or 30 dB, indicating that the presented method also has some robustness to noise.
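
    An illustrative Python sketch in the same spirit: a simple evolution strategy (a stand-in chosen here for the paper's evolutionary-modelling algorithm, with made-up population sizes and mutation schedule) that recovers the Lorenz parameters by minimizing the mismatch to a short noisy trajectory.

      import numpy as np

      def lorenz_traj(params, x0=(1.0, 1.0, 1.0), dt=0.01, steps=200):
          sigma, rho, beta = params
          s = np.array(x0, dtype=float)
          out = np.empty((steps, 3))
          f = lambda u: np.array([sigma * (u[1] - u[0]),
                                  u[0] * (rho - u[2]) - u[1],
                                  u[0] * u[1] - beta * u[2]])
          for i in range(steps):                      # fixed-step RK4 integration
              k1 = f(s); k2 = f(s + 0.5 * dt * k1)
              k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
              s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
              out[i] = s
          return out

      rng = np.random.default_rng(4)
      observed = lorenz_traj((10.0, 28.0, 8.0 / 3.0)) + rng.normal(scale=0.1, size=(200, 3))

      def cost(params):
          return float(np.mean((lorenz_traj(params) - observed) ** 2))

      # (mu, lambda) evolution strategy: mutate parents, keep the best, shrink the mutation step.
      pop = rng.uniform([5.0, 20.0, 1.0], [15.0, 35.0, 4.0], size=(20, 3))
      spread = np.array([2.0, 2.0, 0.5])
      for generation in range(40):
          children = np.repeat(pop, 4, axis=0) + rng.normal(size=(80, 3)) * spread
          children = np.clip(children, 0.1, 50.0)     # keep parameters in a sensible box
          scores = np.array([cost(c) for c in children])
          pop = children[np.argsort(scores)[:20]]
          spread *= 0.93
      print("best (sigma, rho, beta):", pop[0])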

  11. Visualization Methods to Quantify DNAPL Dynamics in Chemical Remediation

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, X.; Jawitz, J. W.

    2006-12-01

    A novel multiple-wavelength visualization method is under development for quantifying multiphase fluid dynamics in porous media. This technique is applied here to the in situ characterization of laboratory-scale DNAPL chemical remediation, including co-solvent flushing and surfactant flushing. Development of this method is motivated by the limitations of current quantitative imaging methods. The method considers both light absorption (Beer's law) and interfacial diffraction (Fresnel's law). Furthermore, the use of multiple wavelengths makes it possible to eliminate the interface structure effect. By using images taken at two wavelengths through band-pass filters, the heterogeneous DNAPL saturation distribution in a two-dimensional laboratory chamber can be quantified at any time during chemical remediation. Previously published DNAPL visualization techniques have been shown to be somewhat accurate for post-spill conditions, but are ineffective once significant dissolution has occurred. The method introduced here achieves mass balances of 90% and greater even during chemical remediation. Furthermore, the heterogeneous saturation distribution in the chamber (an Eulerian description) and the distribution over stream tubes (a Lagrangian description) are quantified with the new method and shown to be superior to those obtained using the binary imaging technique.

  12. Maximum Likelihood and Bayesian Parameter Estimation in Item Response Theory.

    ERIC Educational Resources Information Center

    Lord, Frederic M.

    There are currently three main approaches to parameter estimation in item response theory (IRT): (1) joint maximum likelihood, exemplified by LOGIST, yielding maximum likelihood estimates; (2) marginal maximum likelihood, exemplified by BILOG, yielding maximum likelihood estimates of item parameters (ability parameters can be estimated…

  13. A maximum-likelihood estimation of pairwise relatedness for autopolyploids

    PubMed Central

    Huang, K; Guo, S T; Shattuck, M R; Chen, S T; Qi, X G; Zhang, P; Li, B G

    2015-01-01

    Relatedness between individuals is central to ecological genetics. Multiple methods are available to quantify relatedness from molecular data, including method-of-moment and maximum-likelihood estimators. We describe a maximum-likelihood estimator for autopolyploids and quantify its statistical performance under a range of biologically relevant conditions. The statistical performances of five additional polyploid estimators of relatedness were also quantified under identical conditions. When comparing truncated estimators, the maximum-likelihood estimator exhibited lower root mean square error under some conditions and was more biased for non-relatives, especially when the number of alleles per locus was low. However, even under these conditions, this bias was reduced to statistical insignificance with more robust genetic sampling. We also considered ambiguity in polyploid heterozygote genotyping and developed a weighting methodology for candidate genotypes. The statistical performances of three polyploid estimators under both ideal and actual conditions (including inbreeding and double reduction) were compared. The software package POLYRELATEDNESS is available to perform this estimation and supports a maximum ploidy of eight. PMID:25370210

  14. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    SciTech Connect

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
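
    The error-from-curvature step mentioned above can be sketched generically in Python: maximize a log-likelihood numerically and read parameter standard deviations off the inverse of the observed information (the Hessian of the negative log-likelihood at the maximum). The toy model below (independent Bernoulli photon colors plus exponential dwell times) is a stand-in chosen here, not the full photon-by-photon two-state likelihood of the paper.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(5)
      E_true, k_true = 0.3, 2.0
      colors = rng.uniform(size=2000) < E_true           # acceptor (True) vs donor photons
      dwells = rng.exponential(1.0 / k_true, size=500)   # exponential dwell times, rate k

      def negloglik(params):
          E, k = params
          if not (0.0 < E < 1.0) or k <= 0.0:
              return np.inf
          ll = (np.sum(colors) * np.log(E) + np.sum(~colors) * np.log(1.0 - E)
                + len(dwells) * np.log(k) - k * np.sum(dwells))
          return -ll

      fit = minimize(negloglik, x0=[0.5, 1.0], method="Nelder-Mead")

      def hessian(f, x, h=1e-4):
          # Central finite-difference Hessian (observed information of -log L).
          n = len(x); H = np.zeros((n, n)); I = np.eye(n)
          for i in range(n):
              for j in range(n):
                  H[i, j] = (f(x + h * I[i] + h * I[j]) - f(x + h * I[i] - h * I[j])
                             - f(x - h * I[i] + h * I[j]) + f(x - h * I[i] - h * I[j])) / (4 * h * h)
          return H

      cov = np.linalg.inv(hessian(negloglik, fit.x))
      print("estimates:", fit.x, "standard deviations:", np.sqrt(np.diag(cov)))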

  15. An automated dynamic water vapor permeation test method

    NASA Astrophysics Data System (ADS)

    Gibson, Phillip; Kendrick, Cyrus; Rivin, Donald; Charmchii, Majid; Sicuranza, Linda

    1995-05-01

    This report describes an automated apparatus developed to measure the transport of water vapor through materials under a variety of conditions. The apparatus is more convenient to use than the traditional test methods for textiles and clothing materials, and allows one to use a wider variety of test conditions to investigate the concentration-dependent and nonlinear transport behavior of many of the semipermeable membrane laminates which are now available. The dynamic moisture permeation cell (DMPC) has been automated to permit multiple setpoint testing under computer control, and to facilitate investigation of transient phenomena. Results generated with the DMPC are in agreement with and of comparable accuracy to those from the ISO 11092 (sweating guarded hot plate) method of measuring water vapor permeability.

  16. Computational methods. [Calculation of dynamic loading to offshore platforms

    SciTech Connect

    Maeda, H. . Inst. of Industrial Science)

    1993-02-01

    With regard to computational methods for hydrodynamic forces, the role of marine hydrodynamics in offshore technology is first identified. General computational methods, the state of the art, and the uncertainties in the flow problems of offshore technology are then discussed, with developed, developing and undeveloped problems categorized, followed by an outline of future work. Marine hydrodynamics consists of water-surface and underwater fluid dynamics. It covers not only hydrodynamics proper but also aerodynamics, such as wind loads and current-wave-wind interaction; hydrodynamic phenomena such as cavitation and underwater noise; multi-phase flow, such as two-phase flow in pipes, air bubbles in water, and surface and internal waves; and magneto-hydrodynamics, such as propulsion based on superconductivity. Among these, two key topics are singled out as characterizing marine hydrodynamics in offshore technology: the free surface and vortex shedding.

  17. A spatiotemporal characterization method for the dynamic cytoskeleton

    PubMed Central

    Alhussein, Ghada; Shanti, Aya; Farhat, Ilyas A. H.; Timraz, Sara B. H.; Alwahab, Noaf S. A.; Pearson, Yanthe E.; Martin, Matthew N.; Christoforou, Nicolas

    2016-01-01

    The significant gap between quantitative and qualitative understanding of cytoskeletal function is a pressing problem; microscopy and labeling techniques have improved qualitative investigations of localized cytoskeleton behavior, whereas quantitative analyses of whole-cell cytoskeleton networks remain challenging. Here we present a method that accurately quantifies cytoskeleton dynamics. Our approach digitally subdivides cytoskeleton images using interrogation windows, within which box-counting is used to infer a fractal dimension (Df) characterizing spatial arrangement, and gray value intensity (GVI) to determine actin density. A partitioning algorithm further obtains cytoskeleton characteristics from the perinuclear, cytosolic, and periphery cellular regions. We validated our measurement approach on Cytochalasin-treated cells using transgenically modified dermal fibroblast cells expressing fluorescent actin cytoskeletons. This method differentiates between normal and chemically disrupted actin networks, and quantifies rates of cytoskeletal degradation. Furthermore, GVI distributions were found to be inversely proportional to Df, having several biophysical implications for cytoskeleton formation/degradation. We additionally demonstrated detection sensitivity of differences in Df and GVI for cells seeded on substrates with varying degrees of stiffness, and coated with different attachment proteins. This general approach can be further implemented to gain insights on dynamic growth, disruption, and structure of the cytoskeleton (and other complex biological morphology) due to biological, chemical, or physical stimuli. PMID:27015595
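
    A compact Python sketch of the box-counting step for one interrogation window, using a synthetic binary image in place of a thresholded actin micrograph (the window partitioning and GVI statistics are omitted); it only illustrates how a Df estimate can be read off the slope of log(count) versus log(1/box size).

      import numpy as np

      def box_count_dimension(image, sizes=(2, 4, 8, 16, 32)):
          """Estimate a fractal dimension Df of a binary 2-D image by box counting."""
          img = np.asarray(image, dtype=bool)
          counts = []
          for s in sizes:
              h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
              blocks = img[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))  # boxes containing signal
          # Df is the slope of log(count) against log(1 / box size).
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
          return slope

      rng = np.random.default_rng(6)
      demo = rng.uniform(size=(256, 256)) < 0.02     # sparse stand-in for a thresholded actin image
      print("estimated Df:", box_count_dimension(demo))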

  18. Likelihood-based modification of experimental crystal structure electron density maps

    DOEpatents

    Terwilliger, Thomas C.

    2005-04-16

    A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if structure factor set {F_h} were correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize its likelihood for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.
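
    Schematically, in notation chosen here rather than quoted from the patent, the combined likelihood being maximized can be written as:

      \mathcal{L}(\{F_h\}) \;\propto\;
          P\!\left(\{F_h^{\mathrm{OBS}}\} \mid \{F_h\}\right)\,
          P\!\left(\rho(\{F_h\}) \mid \text{prior knowledge}\right),
      \qquad
      \{F_h\}^{*} = \arg\max_{\{F_h\}} \mathcal{L}(\{F_h\}),

    where ρ({F_h}) denotes the electron density map computed from the structure factors.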

  19. Confidence interval of the likelihood ratio associated with mixed stain DNA evidence.

    PubMed

    Beecham, Gary W; Weir, Bruce S

    2011-01-01

    Likelihood ratios are necessary to properly interpret mixed-stain DNA evidence. They can flexibly accommodate alternative hypotheses and can account for population substructure. The likelihood ratio should be seen as an estimate and not a fixed value, because the calculations are functions of allele frequency estimates obtained from a small portion of the population. Current methods do not account for uncertainty in the likelihood ratio estimates and therefore give an incomplete picture of the strength of the evidence. We propose the use of a confidence interval to report the consequent variation of likelihood ratios. The confidence interval is calculated using the standard forensic likelihood ratio formulae and a variance estimate derived using the Taylor expansion. The formula is explained, and a computer program has been made available. Numerical work shows that the evidential strength of DNA profiles decreases as the variation among populations increases.
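
    To illustrate the general idea only, here is a hypothetical single-locus Python example: the likelihood ratio is a function of allele-frequency estimates, so its sampling variance can be propagated with a first-order Taylor (delta-method) expansion and turned into an approximate interval. The formula, sample size and the neglect of frequency covariances are simplifying assumptions of this sketch, not the paper's full mixture formulae.

      import numpy as np

      p1_hat, p2_hat = 0.08, 0.12     # estimated frequencies of the two profile alleles
      n = 200                         # individuals sampled, i.e. 2n alleles observed

      lr = 1.0 / (2.0 * p1_hat * p2_hat)          # heterozygote match probability 2*p1*p2
      var_p1 = p1_hat * (1.0 - p1_hat) / (2 * n)  # binomial sampling variance of each estimate
      var_p2 = p2_hat * (1.0 - p2_hat) / (2 * n)

      # Delta method on log(LR): d log(LR)/dp = -1/p for each frequency
      # (the covariance between p1_hat and p2_hat is ignored here for brevity).
      var_log_lr = var_p1 / p1_hat**2 + var_p2 / p2_hat**2
      lo, hi = np.exp(np.log(lr) + np.array([-1.96, 1.96]) * np.sqrt(var_log_lr))
      print(f"LR = {lr:.0f}, approximate 95% interval: ({lo:.0f}, {hi:.0f})")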

  20. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems that are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables to which the item is subjected. Maximum likelihood estimation methods are discussed, and specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.

  1. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of them are "true zeros", indicating that the corresponding drug-adverse event pairs cannot occur; these are distinguished from the remaining zero counts, which are modeled zeros and simply indicate that the pairs have not occurred, or have not been reported, yet. In this paper, a zero-inflated Poisson (ZIP) model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the model parameters are obtained using the expectation-maximization (EM) algorithm. The test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite-sample performance for signal detection is evaluated through a simulation study. The simulation results show that the ZIP-based likelihood ratio test performs similarly to the ordinary Poisson-based likelihood ratio test when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
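
    The ZIP-fitting step via EM can be sketched in a few lines of Python; the data below are simulated, and the code stops at the maximum likelihood estimates, without the signal-detection likelihood ratio test or the stratification described above.

      import numpy as np

      def fit_zip_em(y, n_iter=200):
          """Fit a zero-inflated Poisson by EM; returns (pi, lam)."""
          y = np.asarray(y, dtype=float)
          pi, lam = 0.5, max(y.mean(), 1e-6)          # crude starting values
          for _ in range(n_iter):
              # E-step: probability that each observed zero is a structural ("true") zero.
              z = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
              # M-step: update the mixing weight and the Poisson mean.
              pi = z.mean()
              lam = np.sum((1 - z) * y) / np.sum(1 - z)
          return pi, lam

      rng = np.random.default_rng(7)
      n = 5000
      structural_zero = rng.uniform(size=n) < 0.3
      counts = np.where(structural_zero, 0, rng.poisson(2.5, size=n))
      print(fit_zip_em(counts))   # should recover roughly pi = 0.3, lambda = 2.5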

  2. New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes

    PubMed Central

    Zhao, Ying-Qi; Zeng, Donglin; Laber, Eric B.; Kosorok, Michael R.

    2014-01-01

    Dynamic treatment regimes (DTRs) are sequential decision rules for individual patients that can adapt over time to an evolving illness. The goal is to accommodate heterogeneity among patients and find the DTR which will produce the best long-term outcome if implemented. We introduce two new statistical learning methods for estimating the optimal DTR, termed backward outcome weighted learning (BOWL) and simultaneous outcome weighted learning (SOWL). These approaches convert individualized treatment selection into either a sequential or a simultaneous classification problem, and can thus be applied by modifying existing machine learning techniques. The proposed methods are based on directly maximizing, over all DTRs, a nonparametric estimator of the expected long-term outcome; this is fundamentally different from regression-based methods, for example Q-learning, which attempt such maximization indirectly and rely heavily on the correctness of postulated regression models. We prove that the resulting rules are consistent, and provide finite-sample bounds for the errors of the estimated rules. Simulation results suggest the proposed methods produce superior DTRs compared with Q-learning, especially in small samples. We illustrate the methods using data from a clinical trial for smoking cessation. PMID:26236062

  3. Sensitivity based method for structural dynamic model improvement

    NASA Astrophysics Data System (ADS)

    Lin, R. M.; Du, H.; Ong, J. H.

    1993-05-01

    Sensitivity analysis, the study of how a structure's dynamic characteristics change with design variables, has been used to predict structural modification effects in design for many decades. In this paper, methods for calculating the eigensensitivity, frequency response function sensitivity and its modified new formulation are presented. The implementation of these sensitivity analyses to the practice of finite element model improvement using vibration test data, which is one of the major applications of experimental modal testing, is discussed. Since it is very difficult in practice to measure all the coordinates which are specified in the finite element model, sensitivity based methods become essential and are, in fact, the only appropriate methods of tackling the problem of finite element model improvement. Comparisons of these methods are made in terms of the amount of measured data required, the speed of convergence and the magnitudes of modelling errors. Also, it is identified that the inverse iteration technique can be effectively used to minimize the computational costs involved. The finite element model of a plane truss structure is used in numerical case studies to demonstrate the effectiveness of the applications of these sensitivity based methods to practical engineering structures.

  4. Long-time atomistic dynamics through a new self-adaptive accelerated molecular dynamics method

    NASA Astrophysics Data System (ADS)

    Gao, N.; Yang, L.; Gao, F.; Kurtz, R. J.; West, D.; Zhang, S.

    2017-04-01

    A self-adaptive accelerated molecular dynamics method is developed to model infrequent atomic-scale events, especially those events that occur on a rugged free-energy surface. Key in the new development is the use of the total displacement of the system at a given temperature to construct a boost-potential, which is slowly increased to accelerate the dynamics. The temperature is slowly increased to accelerate the dynamics. By allowing the system to evolve from one steady-state configuration to another by overcoming the transition state, this self-evolving approach makes it possible to explore the coupled motion of species that migrate on vastly different time scales. The migrations of single vacancy (V) and small He-V clusters, and the growth of nano-sized He-V clusters in Fe for times in the order of seconds are studied by this new method. An interstitial-assisted mechanism is first explored for the migration of a helium-rich He-V cluster, while a new two-component Ostwald ripening mechanism is suggested for He-V cluster growth.

  6. Dynamic characterization of satellite components through non-invasive methods

    SciTech Connect

    Mullens, Joshua G; Wiest, Heather K; Mascarenas, David D; Park, Gyuhae

    2011-01-24

    The rapid deployment of satellites is hindered by the need to flight-qualify their components and the resulting mechanical assembly. Conventional methods for qualification testing of satellite components are costly and time consuming. Furthermore, full-scale vehicles must be subjected to launch loads during testing. The harsh testing environment increases the risk of component damage during qualification. The focus of this research effort was to assess the performance of Structural Health Monitoring (SHM) techniques as a replacement for traditional vibration testing. SHM techniques were applied on a small-scale structure representative of a responsive satellite. The test structure consisted of an extruded aluminum space-frame covered with aluminum shear plates, which was assembled using bolted joints. Multiple piezoelectric patches were bonded to the test structure and acted as combined actuators and sensors. Various methods of SHM were explored, including impedance-based health monitoring, wave propagation, and conventional frequency response functions. Using these methods in conjunction with finite element modeling, the dynamic properties of the test structure were established and areas of potential damage were identified and localized. The adequacy of the results from each SHM method was validated by comparison to results from conventional vibration testing.

  7. Efficient sensitivity analysis method for chaotic dynamical systems

    SciTech Connect

    Liao, Haitao

    2016-05-15

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities of chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations yields an explicit expression for the sensitivity coefficient which depends on the final state of the Lagrange multipliers. Using the LU factorization technique to calculate the Lagrange multipliers leads to better convergence behavior and lower computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.

  8. Dynamic characterization of satellite components through non-invasive methods

    SciTech Connect

    Mullins, Joshua G; Wiest, Heather K; Mascarenas, David D. L.; Macknelly, David

    2010-10-21

    The rapid deployment of satellites is hindered by the need to flight-qualify their components and the resulting mechanical assembly. Conventional methods for qualification testing of satellite components are costly and time consuming. Furthermore, full-scale vehicles must be subjected to launch loads during testing. This harsh testing environment increases the risk of component damage during qualification. The focus of this research effort was to assess the performance of Structural Health Monitoring (SHM) techniques as a replacement for traditional vibration testing. SHM techniques were applied on a small-scale structure representative of a responsive satellite. The test structure consisted of an extruded aluminum space-frame covered with aluminum shear plates, which was assembled using bolted joints. Multiple piezoelectric patches were bonded to the test structure and acted as combined actuators and sensors. Various methods of SHM were explored including impedance-based health monitoring, wave propagation, and conventional frequency response functions. Using these methods in conjunction with finite element modelling, the dynamic properties of the test structure were established and areas of potential damage were identified and localized. The adequacy of the results from each SHM method was validated by comparison to results from conventional vibration testing.

  9. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
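
    The basic ABC rejection loop that samplers of this kind build on (forward-simulate mock data, keep parameter draws whose simulated summary is close to the observed one) can be sketched as follows; the toy Poisson "number counts" model, flat prior, and tolerance are illustrative assumptions and do not use the cosmoabc API.

      import numpy as np

      # Minimal ABC rejection sampler for a toy "number counts" problem:
      # observed counts are Poisson with unknown mean lambda; the summary
      # statistic is simply the sample mean of the counts.
      rng = np.random.default_rng(1)
      true_lambda = 4.0
      observed = rng.poisson(true_lambda, size=100)
      s_obs = observed.mean()

      def simulate(lam):
          return rng.poisson(lam, size=observed.size).mean()

      n_draws, tolerance = 100_000, 0.1
      prior_draws = rng.uniform(0.0, 10.0, size=n_draws)        # flat prior on lambda
      accepted = np.array([lam for lam in prior_draws
                           if abs(simulate(lam) - s_obs) < tolerance])

      print(f"accepted {accepted.size} draws; "
            f"posterior mean ~ {accepted.mean():.2f} (truth {true_lambda})")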

  10. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
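
    A minimal one-dimensional sketch of the idea, under assumed settings (uniform prior on [-1, 1], Gaussian likelihood for a single datum, Legendre basis): once the likelihood is fit with orthogonal polynomials, the evidence and the posterior mean follow semi-analytically from the first two expansion coefficients.

      import numpy as np
      from numpy.polynomial import legendre as leg

      # Toy spectral likelihood expansion: fit the likelihood L(theta) with
      # Legendre polynomials by least squares, then use orthogonality to read
      # off the evidence and posterior mean from the coefficients.
      y_obs, sigma = 0.3, 0.4

      def likelihood(theta):
          return np.exp(-0.5 * ((y_obs - theta) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

      theta = np.linspace(-1.0, 1.0, 401)        # regression points
      L_vals = likelihood(theta)
      coef = leg.legfit(theta, L_vals, deg=12)   # Legendre coefficients c_0, c_1, ...

      # With prior density 1/2 on [-1, 1] and Legendre orthogonality:
      #   evidence Z = c_0,   posterior mean = c_1 / (3 c_0)
      Z_spectral = coef[0]
      mean_spectral = coef[1] / (3.0 * coef[0])

      # Brute-force quadrature for comparison.
      dtheta = theta[1] - theta[0]
      Z_quad = np.sum(0.5 * L_vals) * dtheta
      mean_quad = np.sum(0.5 * theta * L_vals) * dtheta / Z_quad

      print(f"evidence : spectral {Z_spectral:.4f}  quadrature {Z_quad:.4f}")
      print(f"post.mean: spectral {mean_spectral:.4f}  quadrature {mean_quad:.4f}")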

  11. Likelihood-Based Climate Model Evaluation

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Cressie, Noel; Teixeira, Joao

    2012-01-01

    Climate models are deterministic, mathematical descriptions of the physics of climate. Confidence in predictions of future climate is increased if the physics are verifiably correct. A necessary (but not sufficient) condition is that past and present climate be simulated well. We quantify the likelihood that a summary statistic computed from a set of observations arises from a physical system with the characteristics captured by a model-generated time series. Given a prior on models, we can go further and obtain the posterior distribution of a model given the observations.

  12. Space station static and dynamic analyses using parallel methods

    NASA Technical Reports Server (NTRS)

    Gupta, V.; Newell, J.; Storaasli, O.; Baddourah, M.; Bostic, S.

    1993-01-01

    Algorithms for high-performance parallel computers are applied to perform static analyses of large-scale Space Station finite-element models (FEMs). Several parallel-vector algorithms under development at NASA Langley are assessed. Sparse matrix solvers were found to be more efficient than banded symmetric or iterative solvers for the static analysis of large-scale applications. In addition, new sparse and 'out-of-core' solvers were found superior to substructure (superelement) techniques, which require significant additional cost and time to perform static condensation during global FEM matrix generation as well as the subsequent recovery and expansion. A method to extend the fast parallel static solution techniques to reduce the computation time for dynamic analysis is also described. The resulting static and dynamic algorithms offer design economy for preliminary multidisciplinary design optimization and FEM validation against test modes. The algorithms are being optimized for parallel computers to solve one-million degree-of-freedom (DOF) FEMs. The high-performance computers at NASA afforded effective software development and testing, efficient and accurate solutions with timely system response, and graphical interpretation of results rarely found in industry. Based on the authors' experience, similar cooperation between industry and government should be encouraged for large-scale projects in the future.

  13. Applications of Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.

    2004-01-01

    Initial steps in the application of a low-order panel method computational fluid dynamics (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions of some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.

  14. Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods

    NASA Technical Reports Server (NTRS)

    Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark

    2002-01-01

    Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.

  15. Measuring methods for evaluation of dynamic tyre properties

    NASA Astrophysics Data System (ADS)

    Kmoch, Klaus

    1992-01-01

    Extensive measuring methods for the macroscopic assessment of tire properties, based on classical mechanics and dynamics, are presented. Theoretical results and measurements were included in an expert system in which the pneumatic tire is represented as a wheel with particular elastic properties. For geometry measurement of the tire surface, a laser scanner test bed was used. The tire was excited with a shaker in order to obtain acceleration signals and to estimate global parameters such as stiffness, damping, and the influence of nonlinearity, which is found to increase with excitation force. Tire dynamic behavior was examined at low velocities with microscopy and infrared thermography in order to quantify temperature increase and tangential and normal forces in the contact area; the slip-stick oscillations were recorded with microphones. A drum test bed was used for studying tire behavior at high velocities, and the tire-vehicle interaction was established with acceleration measurements; the influence of nonuniformity on rolling stability was ascertained. The results were compared with data from theoretical models, which are point-mass systems or multibody formulations.

  16. Maximum likelihood continuity mapping for fraud detection

    SciTech Connect

    Hogden, J.

    1997-05-01

    The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
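
    A much simpler stand-in for the continuity-map idea, sequence scoring with a first-order Markov model fit to typical sequences, conveys how likelihood-based anomaly detection works; the symbol alphabet, smoothing, and cyclic training data below are illustrative assumptions, not MALCOM itself.

      import numpy as np

      # Sketch of likelihood-based sequence anomaly scoring with a first-order
      # Markov model.  Sequences with low per-symbol log-likelihood under the
      # model trained on "typical" sequences are flagged as anomalous.
      def fit_markov(sequences, n_symbols, alpha=1.0):
          counts = np.full((n_symbols, n_symbols), alpha)     # Laplace smoothing
          for seq in sequences:
              for a, b in zip(seq[:-1], seq[1:]):
                  counts[a, b] += 1
          return counts / counts.sum(axis=1, keepdims=True)

      def avg_loglik(seq, trans):
          return np.mean([np.log(trans[a, b]) for a, b in zip(seq[:-1], seq[1:])])

      rng = np.random.default_rng(0)
      # "Typical" sequences mostly cycle 0 -> 1 -> 2 -> 0 ...
      typical = [[(i + s) % 3 for i in range(20)] for s in range(200)]
      trans = fit_markov(typical, n_symbols=3)

      normal_seq = [(i + 1) % 3 for i in range(20)]
      anomalous_seq = list(rng.integers(0, 3, size=20))       # random symbol order

      print(f"normal   : {avg_loglik(normal_seq, trans):.2f}")
      print(f"anomalous: {avg_loglik(anomalous_seq, trans):.2f}  (lower = more suspicious)")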

  17. Implementing efficient dynamic formal verification methods for MPI programs.

    SciTech Connect

    Vakkalanka, S.; DeLisi, M.; Gopalakrishnan, G.; Kirby, R. M.; Thakur, R.; Gropp, W.; Mathematics and Computer Science; Univ. of Utah; Univ. of Illinois

    2008-01-01

    We examine the problem of formally verifying MPI programs for safety properties through an efficient dynamic (runtime) method in which the processes of a given MPI program are executed under the control of an interleaving scheduler. To ensure full coverage for given input test data, the algorithm must take into consideration MPI's out-of-order completion semantics. The algorithm must also ensure that nondeterministic constructs (e.g., MPI wildcard receive matches) are executed in all possible ways. Our new algorithm rewrites wildcard receives to specific receives, one for each sender that can potentially match with the receive. It then recursively explores each case of the specific receives. The list of potential senders matching a receive is determined through a runtime algorithm that exploits MPI's operation ordering semantics. Our verification tool ISP that incorporates this algorithm efficiently verifies several programs and finds bugs missed by existing informal verification tools.

  18. Dynamically controlled crystallization method and apparatus and crystals obtained thereby

    NASA Technical Reports Server (NTRS)

    Arnowitz, Leonard (Inventor); Steinberg, Emanuel (Inventor)

    1999-01-01

    A method and apparatus for dynamically controlling the crystallization of proteins including a crystallization chamber or chambers for holding a protein in a salt solution, one or more salt solution chambers, two communication passages respectively coupling the crystallization chamber with each of the salt solution chambers, and transfer mechanisms configured to respectively transfer salt solution between each of the salt solution chambers and the crystallization chamber. The transfer mechanisms are interlocked to maintain the volume of salt solution in the crystallization chamber substantially constant. Salt solution of different concentrations is transferred into and out of the crystallization chamber to adjust the salt concentration in the crystallization chamber to achieve precise control of the crystallization process.

  19. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, Jon D.

    1990-01-01

    Uncertainty of frequency response using the fuzzy set method and on-orbit response prediction using laboratory test data to refine an analytical model are emphasized with respect to large space structures. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to help facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.

  20. A dynamically adjusted mixed emphasis method for building boosting ensembles.

    PubMed

    Gomez-Verdejo, Vanessa; Arenas-Garcia, Jerónimo; Figueiras-Vidal, Aníbal R

    2008-01-01

    Progressively emphasizing samples that are difficult to classify correctly is the basis for the recognized high performance of real Adaboost (RA) ensembles. The corresponding emphasis function can be written as the product of a factor that measures the quadratic error and a factor related to the proximity to the classification border; this fact opens the door to exploring the potential advantages of adjustable combined forms of these factors. In this paper, we introduce a principled procedure to select the combination parameter each time a new learner is added to the ensemble, just by maximizing the associated edge parameter, and call the resulting method the dynamically adapted weighted emphasis RA (DW-RA). A number of application examples illustrate the performance improvements obtained by DW-RA.
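
    The sketch below only illustrates the notion of blending an error-based emphasis term with a border-proximity term through a mixing parameter; the exact functional form and the edge-based rule that DW-RA uses to choose that parameter are not reproduced, so treat it purely as an assumed toy.

      import numpy as np

      # Illustrative mixed emphasis for boosting.
      # lam -> 1 : emphasize samples with large quadratic error
      # lam -> 0 : emphasize samples close to the classification border (f(x) ~ 0)
      def mixed_emphasis(f_x, y, lam):
          raw = np.exp(lam * (f_x - y) ** 2 - (1.0 - lam) * f_x ** 2)
          return raw / raw.sum()                 # normalized sample weights

      y   = np.array([+1, +1, -1, -1])           # labels
      f_x = np.array([0.9, 0.1, -0.05, 0.8])     # current ensemble outputs
      for lam in (0.0, 0.5, 1.0):
          print(lam, np.round(mixed_emphasis(f_x, y, lam), 3))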

  1. Computational methods of the Advanced Fluid Dynamics Model

    SciTech Connect

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.; Berthier, J.; Maudlin, P.J.; Schmuck, P.; Goutagny, L.; Ichikawa, S.; Ninokata, H.; Luck, L.B.

    1987-01-01

    To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development.

  2. Modern wing flutter analysis by computational fluid dynamics methods

    NASA Technical Reports Server (NTRS)

    Cunningham, Herbert J.; Batina, John T.; Bennett, Robert M.

    1988-01-01

    The application and assessment of the recently developed CAP-TSD transonic small-disturbance code for flutter prediction is described. The CAP-TSD code has been developed for aeroelastic analysis of complete aircraft configurations and was previously applied to the calculation of steady and unsteady pressures with favorable results. Generalized aerodynamic forces and flutter characteristics are calculated and compared with linear theory results and with experimental data for a 45 deg sweptback wing. These results are in good agreement with the experimental flutter data, which is a first step toward validating CAP-TSD for general transonic aeroelastic applications. The paper presents these results and comparisons along with general remarks regarding modern wing flutter analysis by computational fluid dynamics methods.

  3. Testing and Validation of the Dynamic Inertia Measurement Method

    NASA Technical Reports Server (NTRS)

    Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David

    2015-01-01

    The Dynamic Inertia Measurement (DIM) method uses a ground vibration test setup to determine the mass properties of an object using information from frequency response functions. Most conventional mass properties testing involves using spin tables or pendulum-based swing tests, which for large aerospace vehicles becomes increasingly difficult and time-consuming, and therefore expensive, to perform. The DIM method has been validated on small test articles but has not been successfully proven on large aerospace vehicles. In response, the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) conducted mass properties testing on an "iron bird" test article that is comparable in mass and scale to a fighter-type aircraft. The simple two-I-beam design of the "iron bird" was selected to ensure accurate analytical mass properties. Traditional swing testing was also performed to compare the level of effort, amount of resources, and quality of data with the DIM method. The DIM test showed favorable results for the center of gravity and moments of inertia; however, the products of inertia showed disagreement with analytical predictions.

  4. Data assimilation in problems of mantle dynamics: Methods and applications

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Schubert, G.; Tsepelev, I.; Korotkii, A.

    2009-05-01

    We present and compare several methods (backward advection, adjoint, and quasi-reversibility) for the assimilation of geophysical and geodetic data in geodynamical models. These methods allow observations and unknown initial conditions for mantle temperature and flow to be incorporated into a three-dimensional dynamic model in order to determine the initial conditions in the geological past. Once the conditions are determined, the evolution of mantle structures can be restored. Using the quasi-reversibility method, we reconstruct the evolution of the descending lithospheric slab beneath the south-eastern Carpathians. We show that the geometry of the mantle structures changes with time, diminishing the degree of surface curvature of the structures, because heat diffusion tends to smooth the complex thermal surfaces of mantle bodies with time. Present seismic tomography images of mantle structures do not allow definition of the sharp shapes of these structures in the past. Assimilation of mantle temperature and flow instead provides a quantitative tool to restore the thermal shapes of prominent structures in the past from their diffusive shapes at present.

  5. Introduction to finite-difference methods for numerical fluid dynamics

    SciTech Connect

    Scannapieco, E.; Harlow, F.H.

    1995-09-01

    This work is intended to be a beginner's exercise book for the study of basic finite-difference techniques in computational fluid dynamics. It is written for a student level ranging from high-school senior to university senior. Equations are derived from basic principles using algebra. Some discussion of partial-differential equations is included, but knowledge of calculus is not essential. The student is expected, however, to have some familiarity with the FORTRAN computer language, as the syntax of the computer codes themselves is not discussed. Topics examined in this work include: one-dimensional heat flow, one-dimensional compressible fluid flow, two-dimensional compressible fluid flow, and two-dimensional incompressible fluid flow with additions of the equations of heat flow and the K-epsilon model for turbulence transport. Emphasis is placed on numerical instabilities and methods by which they can be avoided, techniques that can be used to evaluate the accuracy of finite-difference approximations, and the writing of the finite-difference codes themselves. Concepts introduced in this work include: flux and conservation, implicit and explicit methods, Lagrangian and Eulerian methods, shocks and rarefactions, donor-cell and cell-centered advective fluxes, compressible and incompressible fluids, the Boussinesq approximation for heat flow, Cartesian tensor notation, the Boussinesq approximation for the Reynolds stress tensor, and the modeling of transport equations. A glossary is provided which defines these and other terms.
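
    In the spirit of the primer, a minimal explicit (FTCS) finite-difference sketch for one-dimensional heat flow is given below; the grid, diffusivity, and boundary temperatures are arbitrary illustrative choices, and the time step respects the usual explicit stability limit alpha*dt/dx^2 <= 1/2.

      import numpy as np

      # Explicit (FTCS) finite differences for u_t = alpha * u_xx on [0, 1]
      # with fixed-temperature ends.
      nx, alpha = 51, 1.0
      dx = 1.0 / (nx - 1)
      dt = 0.4 * dx**2 / alpha                 # satisfies the explicit stability limit
      u = np.zeros(nx)
      u[0], u[-1] = 1.0, 0.0                   # boundary temperatures

      for step in range(2000):
          u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

      # After many steps the profile approaches the linear steady state.
      print(np.round(u[::10], 3))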

  6. A Dynamic Integration Method for Borderland Database using OSM data

    NASA Astrophysics Data System (ADS)

    Zhou, X.-G.; Jiang, Y.; Zhou, K.-X.; Zeng, L.

    2013-11-01

    Spatial data is fundamental to borderland analysis of geography, natural resources, demography, politics, economy, and culture. As the spatial region used in borderland research usually covers several neighboring countries' border regions, the data are difficult to acquire for any single research institution or government. VGI has been proven to be a very successful means of acquiring timely and detailed global spatial data at very low cost. Therefore VGI will be one reasonable source of borderland spatial data. OpenStreetMap (OSM) has been known as the most successful VGI resource. But the OSM data model differs greatly from traditional authoritative geographic information, so the OSM data needs to be converted to the researcher's customized data model. With the real world changing fast, the converted data needs to be updated. Therefore, a dynamic integration method for borderland data is presented in this paper. In this method, a machine learning mechanism is used to convert the OSM data model to the user data model; a method for selecting the changed objects in the research area over a given period from the OSM whole-world daily diff file is presented, and a change-only information file in the designed form is produced automatically. Based on the rules and algorithms mentioned above, we enabled the automatic (or semi-automatic) integration and updating of the borderland database by programming. The developed system was intensively tested.

  7. Fast method for dynamic thresholding in volume holographic memories

    NASA Astrophysics Data System (ADS)

    Porter, Michael S.; Mitkas, Pericles A.

    1998-11-01

    It is essential for parallel optical memory interfaces to incorporate processing that dynamically differentiates between data-bit values. These thresholding points will vary as a result of system noise -- due to contrast fluctuations, variations in data page composition, reference beam misalignment, etc. To maintain reasonable data integrity it is necessary to select the threshold close to its optimal level. In this paper, a neural network (NN) approach is proposed as a fast method of determining the threshold to meet the required transfer rate. The multi-layered perceptron network can be incorporated as part of a smart photodetector array (SPA). Other methods have suggested performing the operation by means of a histogram or by use of statistical information. These approaches fail in that they unnecessarily switch to a 1-D paradigm. In this serial domain, global thresholding is pointless since sequence detection could be applied. The discussed approach is a parallel solution with less overhead than multi-rail encoding. As part of this method, a small set of values is designated as threshold-determination data bits; these are interleaved with the information data bits and are used as inputs to the NN. The approach has been tested using both simulated data and data obtained from a volume holographic memory system. Results show convergence of the training and an ability to generalize to untrained data for binary and multi-level gray-scale data page images. Methodologies are discussed for improving the performance by proper training set selection.

  8. The ONIOM molecular dynamics method for biochemical applications: cytidine deaminase

    SciTech Connect

    Matsubara, Toshiaki; Dupuis, Michel; Aida, Misako

    2007-03-22

    We derived and implemented the ONIOM molecular dynamics (MD) method for biochemical applications. The implementation allows characterization of the functions of real enzymes taking account of their thermal motion. In this method, the direct MD is performed by calculating the ONIOM energy and gradients of the system on the fly. We describe the first application of this ONIOM-MD method to cytidine deaminase. The environmental effects on the substrate in the active site are examined. The ONIOM-MD simulations show that the product uridine is strongly perturbed by the thermal motion of the environment and dissociates easily from the active site. TM and MA were supported in part by grants from the Ministry of Education, Culture, Sports, Science and Technology of Japan. MD was supported by the Division of Chemical Sciences, Office of Basic Energy Sciences, and by the Office of Biological and Environmental Research of the U.S. Department of Energy (DOE). Battelle operates Pacific Northwest National Laboratory for DOE.

  9. Novel Dynamics and Controls Analysis Methods for Nonlinear Structural Systems

    DTIC Science & Technology

    1990-08-30

    Only fragmentary excerpts of this report's reference list survive in the record. They cite a paper on the simulation of constrained multibody dynamics (in proceedings published by Computational Mechanics Publications, 1990) and Placek, B., "Contribution to the Solution of the Equations of Motion of the Discrete Dynamical System with Holonomic Constraints", and note that the referenced formulation of dynamics has been derived in the aerospace and mechanism dynamics research literature by Placek, Agrawal, and Kurdila.

  10. The reversibility error method (REM): a new, dynamical fast indicator for planetary dynamics

    NASA Astrophysics Data System (ADS)

    Panichi, Federico; Goździewski, Krzyszof; Turchetti, Giorgio

    2017-02-01

    We describe the reversibility error method (REM) and its applications to planetary dynamics. REM is based on the time-reversibility analysis of the phase-space trajectories of conservative Hamiltonian systems. Round-off errors break the time reversibility, and the displacement from the initial condition that occurs when we integrate forward and backward for the same time interval is related to the dynamical character of the trajectory. If the motion is chaotic, in the sense of a non-zero maximal Lyapunov characteristic exponent (mLCE), then REM increases exponentially with time, as exp(λt), while when the motion is regular (quasi-periodic), REM increases as a power law in time, as t^α, where α and λ are real coefficients. We compare REM with a variant of the mLCE, the mean exponential growth factor of nearby orbits. The test set includes the restricted three-body problem and five resonant planetary systems: HD 37124, Kepler-60, Kepler-36, Kepler-29 and Kepler-26. We found very good agreement between the outcomes of these algorithms. Moreover, the numerical implementation of REM is astonishingly simple and rests on a solid theoretical background. REM requires only a symplectic and time-reversible (symmetric) integrator of the equations of motion. The method is also CPU efficient. It may be particularly useful for the dynamical analysis of multiple planetary systems in the Kepler sample, characterized by low-eccentricity orbits and relatively weak mutual interactions. As an interesting side result, we found a possible occurrence of stable chaos in the Kepler-29 planetary system.
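
    A minimal sketch of the REM idea on the Chirikov standard map (an assumed toy system, not the planetary models of the paper): iterate forward n times, apply the exact algebraic inverse n times, and measure the displacement from the initial condition caused by round-off.

      import numpy as np

      # Reversibility error on the standard map.  Round-off breaks exact
      # reversibility; the resulting displacement grows exponentially for
      # chaotic orbits and only slowly for regular ones.
      K = 1.5

      def forward(x, p):
          p = p + K * np.sin(x)
          return x + p, p

      def backward(x, p):
          x = x - p
          return x, p - K * np.sin(x)

      def rem(x0, p0, n):
          x, p = x0, p0
          for _ in range(n):
              x, p = forward(x, p)
          for _ in range(n):
              x, p = backward(x, p)
          return np.hypot(x - x0, p - p0)

      n = 500
      print(f"regular orbit (near elliptic point):   REM = {rem(np.pi + 0.3, 0.0, n):.2e}")
      print(f"chaotic orbit (near hyperbolic point): REM = {rem(0.1, 0.0, n):.2e}")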

  11. Substructure method in high-speed monorail dynamic problems

    NASA Astrophysics Data System (ADS)

    Ivanchenko, I. I.

    2008-12-01

    The study of actions of high-speed moving loads on bridges and elevated tracks remains a topical problem for transport. In the present study, we propose a new method for moving load analysis of elevated tracks (monorail structures or bridges), which permits studying the interaction between two strained objects consisting of rod systems and rigid bodies with viscoelastic links; one of these objects is the moving load (monorail rolling stock), and the other is the carrying structure (monorail elevated track or bridge). The methods for moving load analysis of structures were developed in numerous papers [1-15]. At the first stage, when solving the problem about a beam under the action of the simplest moving load such as a moving weight, two fundamental methods can be used; the same methods are realized for other structures and loads. The first method is based on the use of a generalized coordinate in the expansion of the deflection in the natural shapes of the beam, and the problem is reduced to solving a system of ordinary differential equations with variable coefficients [1-3]. In the second method, after the "beam-weight" system is decomposed, just as in the problem with the weight impact on the beam [4], solving the problem is reduced to solving an integral equation for the dynamic weight reaction [6, 7]. In [1-3], an increase in the number of retained forms leads to an increase in the order of the system of equations; in [6, 7], difficulties arise when solving the integral equations related to the conditional stability of the step procedures. The method proposed in [9, 14] for beams and rod systems combines the above approaches and eliminates their drawbacks, because it permits retaining any necessary number of shapes in the deflection expansion and has a resolving system of equations with an unconditionally stable integration scheme and with a minimum number of unknowns, just as in the method of integral equations [6, 7]. This method is further developed for

  12. Steered Molecular Dynamics Methods Applied to Enzyme Mechanism and Energetics.

    PubMed

    Ramírez, C L; Martí, M A; Roitberg, A E

    2016-01-01

    One of the main goals of chemistry is to understand the underlying principles of chemical reactions, in terms of both the reaction mechanism and the thermodynamics that govern it. Using hybrid quantum mechanics/molecular mechanics (QM/MM)-based methods in combination with a biased sampling scheme, it is possible to simulate chemical reactions occurring inside complex environments such as an enzyme or aqueous solution and to determine the corresponding free energy profile, which provides direct comparison with experimentally determined kinetic and equilibrium parameters. Among the most promising biasing schemes is the multiple steered molecular dynamics method, which in combination with Jarzynski's Relationship (JR) allows obtaining the equilibrium free energy profile from a finite set of nonequilibrium reactive trajectories by exponentially averaging the individual work profiles. However, obtaining statistically converged and accurate profiles is far from easy and may result in increased computational cost if the steering speed and number of trajectories are inappropriately chosen. In this small review, using the extensively studied chorismate to prephenate conversion reaction, we first present a systematic study of how key parameters such as pulling speed, number of trajectories, and reaction progress are related to the resulting work distributions and in turn to the accuracy of the free energy obtained with JR. Second, and in the context of QM/MM strategies, we introduce the Hybrid Differential Relaxation Algorithm and show how it allows obtaining more accurate free energy profiles using faster pulling speeds and a smaller number of trajectories, and thus a smaller computational cost.
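
    The exponential work averaging at the heart of Jarzynski's relationship can be sketched with synthetic work values (a Gaussian work distribution is assumed here as a stand-in for steered-MD work profiles); for Gaussian work the exact free-energy difference is <W> - var(W)/(2 kT), which the estimator should approach as the number of trajectories grows.

      import numpy as np

      # Jarzynski estimate dF = -kT * ln < exp(-W / kT) > from a set of
      # nonequilibrium work values.  Broader work distributions need many more
      # trajectories before this exponential average converges.
      rng = np.random.default_rng(3)
      kT = 0.596                        # kcal/mol at ~300 K
      mean_W, std_W = 12.0, 1.0         # illustrative work statistics (kcal/mol)
      n_traj = 2000
      W = rng.normal(mean_W, std_W, size=n_traj)

      dF_jarzynski = -kT * np.log(np.mean(np.exp(-W / kT)))
      dF_gaussian = mean_W - std_W**2 / (2.0 * kT)   # exact for Gaussian work
      print(f"Jarzynski estimate : {dF_jarzynski:.2f} kcal/mol")
      print(f"Gaussian reference : {dF_gaussian:.2f} kcal/mol")
      print(f"naive <W>          : {W.mean():.2f} kcal/mol  (biased high)")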

  13. Dynamically controlled crystallization method and apparatus and crystals obtained thereby

    NASA Technical Reports Server (NTRS)

    Arnowitz, Leonard (Inventor); Steinberg, Emanuel (Inventor)

    2003-01-01

    A method and apparatus for dynamically controlling the crystallization of molecules including a crystallization chamber (14) or chambers for holding molecules in a precipitant solution, one or more precipitant solution reservoirs (16, 18), communication passages (17, 19) respectively coupling the crystallization chamber(s) with each of the precipitant solution reservoirs, and transfer mechanisms (20, 21, 22, 24, 26, 28) configured to respectively transfer precipitant solution between each of the precipitant solution reservoirs and the crystallization chamber(s). The transfer mechanisms are interlocked to maintain a constant volume of precipitant solution in the crystallization chamber(s). Precipitant solutions of different concentrations are transferred into and out of the crystallization chamber(s) to adjust the concentration of precipitant in the crystallization chamber(s) to achieve precise control of the crystallization process. The method and apparatus can be used effectively to grow crystals under reduced gravity conditions such as microgravity conditions of space, and under conditions of reduced or enhanced effective gravity as induced by a powerful magnetic field.

  14. Detection of abrupt changes in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1984-01-01

    Some of the basic ideas associated with the detection of abrupt changes in dynamic systems are presented. Multiple-filter-based techniques, residual-based methods, and the multiple model and generalized likelihood ratio methods are considered. Issues such as the effect of unknown onset time on algorithm complexity and structure, and robustness to model uncertainty, are discussed.
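
    One of the simplest instances of the generalized likelihood ratio idea, detecting a mean shift of unknown size and onset time in Gaussian noise with known variance, can be sketched as follows; the signal model and parameters are illustrative assumptions.

      import numpy as np

      # GLR test for an abrupt mean shift: maximize the log-likelihood ratio of
      # "mean shifts at k by its MLE" versus "no change" over candidate onsets k.
      rng = np.random.default_rng(4)
      n, sigma, true_onset, jump = 200, 1.0, 120, 1.2
      y = rng.normal(0.0, sigma, size=n)
      y[true_onset:] += jump

      def glr_statistic(y, k):
          tail = y[k:]
          return tail.size * tail.mean() ** 2 / (2.0 * sigma**2)

      stats = np.array([glr_statistic(y, k) for k in range(1, n - 1)])
      k_hat = 1 + int(np.argmax(stats))
      print(f"estimated onset: {k_hat} (true {true_onset}), GLR max = {stats.max():.1f}")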

  15. A computationally efficient spectral method for modeling core dynamics

    NASA Astrophysics Data System (ADS)

    Marti, P.; Calkins, M. A.; Julien, K.

    2016-08-01

    An efficient, spectral numerical method is presented for solving problems in a spherical shell geometry that employs spherical harmonics in the angular dimensions and Chebyshev polynomials in the radial direction. We exploit the three-term recurrence relation for Chebyshev polynomials that renders all matrices sparse in spectral space. This approach is significantly more efficient than the collocation approach and is generalizable to both the Galerkin and tau methodologies for enforcing boundary conditions. The sparsity of the matrices reduces the computational complexity of the linear solution of implicit-explicit time stepping schemes to O(N) operations, compared to O(N^2) operations for a collocation method. The method is illustrated by considering several example problems of important dynamical processes in the Earth's liquid outer core. Results are presented from both fully nonlinear, time-dependent numerical simulations and eigenvalue problems arising from the investigation of the onset of convection and the inertial wave spectrum. We compare the explicit and implicit temporal discretization of the Coriolis force; the latter becomes computationally feasible given the sparsity of the differential operators. We find that implicit treatment of the Coriolis force allows for significantly larger time step sizes compared to explicit algorithms; for hydrodynamic and dynamo problems at an Ekman number of E = 10^-5, time step sizes can be increased by a factor of 3 to 16 times that of the explicit algorithm, depending on the order of the time stepping scheme. The implementation with explicit Coriolis force scales well to at least 2048 cores, while the implicit implementation scales to 512 cores.

  16. Driving the Model to Its Limit: Profile Likelihood Based Model Reduction.

    PubMed

    Maiwald, Tim; Hass, Helge; Steiert, Bernhard; Vanlier, Joep; Engesser, Raphael; Raue, Andreas; Kipkeew, Friederike; Bock, Hans H; Kaschek, Daniel; Kreutz, Clemens; Timmer, Jens

    2016-01-01

    In systems biology, one of the major tasks is to tailor model complexity to information content of the data. A useful model should describe the data and produce well-determined parameter estimates and predictions. Too small of a model will not be able to describe the data whereas a model which is too large tends to overfit measurement errors and does not provide precise predictions. Typically, the model is modified and tuned to fit the data, which often results in an oversized model. To restore the balance between model complexity and available measurements, either new data has to be gathered or the model has to be reduced. In this manuscript, we present a data-based method for reducing non-linear models. The profile likelihood is utilised to assess parameter identifiability and designate likely candidates for reduction. Parameter dependencies are analysed along profiles, providing context-dependent suggestions for the type of reduction. We discriminate four distinct scenarios, each associated with a specific model reduction strategy. Iterating the presented procedure eventually results in an identifiable model, which is capable of generating precise and testable predictions. Source code for all toy examples is provided within the freely available, open-source modelling environment Data2Dynamics based on MATLAB available at http://www.data2dynamics.org/, as well as the R packages dMod/cOde available at https://github.com/dkaschek/. Moreover, the concept is generally applicable and can readily be used with any software capable of calculating the profile likelihood.
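
    A hedged sketch of computing a profile likelihood for a single parameter is given below, using an assumed two-parameter exponential-decay model rather than one of the paper's systems-biology examples; for each fixed value of the profiled parameter the remaining parameter is re-optimized, and a flat profile would flag a non-identifiable parameter as a reduction candidate.

      import numpy as np
      from scipy.optimize import minimize

      # Profile likelihood for the decay rate k of y = A * exp(-k * t) + noise.
      rng = np.random.default_rng(5)
      t = np.linspace(0.0, 4.0, 25)
      true_A, true_k, noise = 2.0, 0.8, 0.1
      y = true_A * np.exp(-true_k * t) + rng.normal(0.0, noise, size=t.size)

      def neg2loglik(A, k):                     # -2 log L up to a constant
          residuals = y - A * np.exp(-k * t)
          return np.sum(residuals**2) / noise**2

      k_grid = np.linspace(0.4, 1.4, 21)
      profile = []
      for k in k_grid:
          fit = minimize(lambda p: neg2loglik(p[0], k), x0=[1.0])   # re-optimize A
          profile.append(fit.fun)
      profile = np.array(profile)
      profile -= profile.min()

      # Values of k with profile <= 3.84 lie inside an approximate 95% confidence
      # interval (chi-square with one degree of freedom).
      inside = k_grid[profile <= 3.84]
      print(f"profile-likelihood 95% CI for k: [{inside.min():.2f}, {inside.max():.2f}]")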

  17. Driving the Model to Its Limit: Profile Likelihood Based Model Reduction

    PubMed Central

    Vanlier, Joep; Engesser, Raphael; Raue, Andreas; Kipkeew, Friederike; Bock, Hans H.; Kaschek, Daniel; Kreutz, Clemens; Timmer, Jens

    2016-01-01

    In systems biology, one of the major tasks is to tailor model complexity to information content of the data. A useful model should describe the data and produce well-determined parameter estimates and predictions. Too small of a model will not be able to describe the data whereas a model which is too large tends to overfit measurement errors and does not provide precise predictions. Typically, the model is modified and tuned to fit the data, which often results in an oversized model. To restore the balance between model complexity and available measurements, either new data has to be gathered or the model has to be reduced. In this manuscript, we present a data-based method for reducing non-linear models. The profile likelihood is utilised to assess parameter identifiability and designate likely candidates for reduction. Parameter dependencies are analysed along profiles, providing context-dependent suggestions for the type of reduction. We discriminate four distinct scenarios, each associated with a specific model reduction strategy. Iterating the presented procedure eventually results in an identifiable model, which is capable of generating precise and testable predictions. Source code for all toy examples is provided within the freely available, open-source modelling environment Data2Dynamics based on MATLAB available at http://www.data2dynamics.org/, as well as the R packages dMod/cOde available at https://github.com/dkaschek/. Moreover, the concept is generally applicable and can readily be used with any software capable of calculating the profile likelihood. PMID:27588423

  18. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through a model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment with a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators, including the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated estimates of a conceptual model's marginal likelihood obtained with TIE show significantly less variability than those obtained with the other estimators. In addition, the SG surrogates are efficient for facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
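
    The contrast between marginal-likelihood estimators can be sketched on a conjugate toy model where the evidence is known in closed form (an assumed normal-normal model, not the groundwater study, and without the thermodynamic integration estimator): the arithmetic-mean estimator averages the likelihood over prior draws, while the harmonic-mean estimator averages reciprocal likelihoods over posterior draws and is typically far less stable.

      import numpy as np
      from scipy.special import logsumexp
      from scipy.stats import multivariate_normal, norm

      # Toy comparison of the arithmetic-mean (AME) and harmonic-mean (HME)
      # marginal-likelihood estimators against the exact evidence of a
      # conjugate normal model: y_i ~ N(mu, 1), prior mu ~ N(0, 1).
      rng = np.random.default_rng(6)
      n, mu_true = 20, 0.7
      y = rng.normal(mu_true, 1.0, size=n)

      def loglik(mu):
          return norm.logpdf(y[:, None], loc=mu, scale=1.0).sum(axis=0)

      # Exact log evidence: y ~ N(0, I + 1 1^T) after integrating out mu.
      exact = multivariate_normal.logpdf(y, mean=np.zeros(n),
                                         cov=np.eye(n) + np.ones((n, n)))

      M = 50_000
      mu_prior = rng.normal(0.0, 1.0, size=M)                   # prior samples
      log_ame = logsumexp(loglik(mu_prior)) - np.log(M)

      post_mean, post_sd = y.sum() / (n + 1), np.sqrt(1.0 / (n + 1))
      mu_post = rng.normal(post_mean, post_sd, size=M)          # posterior samples
      log_hme = -(logsumexp(-loglik(mu_post)) - np.log(M))

      print(f"exact log evidence : {exact:.3f}")
      print(f"AME estimate       : {log_ame:.3f}")
      print(f"HME estimate       : {log_hme:.3f}   (typically unstable / biased)")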

  19. Targeted Maximum Likelihood Estimation for Causal Inference in Observational Studies.

    PubMed

    Schuler, Megan S; Rose, Sherri

    2017-01-01

    Estimation of causal effects using observational data continues to grow in popularity in the epidemiologic literature. While many applications of causal effect estimation use propensity score methods or G-computation, targeted maximum likelihood estimation (TMLE) is a well-established alternative method with desirable statistical properties. TMLE is a doubly robust maximum-likelihood-based approach that includes a secondary "targeting" step that optimizes the bias-variance tradeoff for the target parameter. Under standard causal assumptions, estimates can be interpreted as causal effects. Because TMLE has not been as widely implemented in epidemiologic research, we aim to provide an accessible presentation of TMLE for applied researchers. We give step-by-step instructions for using TMLE to estimate the average treatment effect in the context of an observational study. We discuss conceptual similarities and differences between TMLE and 2 common estimation approaches (G-computation and inverse probability weighting) and present findings on their relative performance using simulated data. Our simulation study compares methods under parametric regression misspecification; our results highlight TMLE's property of double robustness. Additionally, we discuss best practices for TMLE implementation, particularly the use of ensembled machine learning algorithms. Our simulation study demonstrates all methods using super learning, highlighting that incorporation of machine learning may outperform parametric regression in observational data settings.

  20. Empirical Likelihood for Estimating Equations with Nonignorably Missing Data.

    PubMed

    Tang, Niansheng; Zhao, Puying; Zhu, Hongtu

    2014-04-01

    We develop an empirical likelihood (EL) inference on parameters in generalized estimating equations with nonignorably missing response data. We consider an exponential tilting model for the nonignorably missing mechanism, and propose modified estimating equations by imputing missing data through a kernel regression method. We establish some asymptotic properties of the EL estimators of the unknown parameters under different scenarios. With the use of auxiliary information, the EL estimators are statistically more efficient. Simulation studies are used to assess the finite sample performance of our proposed EL estimators. We apply our EL estimators to investigate a data set on earnings obtained from the New York Social Indicators Survey.

  1. Maximum likelihood decoding of Reed Solomon Codes

    SciTech Connect

    Sudan, M.

    1996-12-31

    We present a randomized algorithm which takes as input n distinct points (x_i, y_i), i = 1, ..., n, from F x F (where F is a field) and integer parameters t and d, and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(√(nd)). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed Solomon Codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides some maximum likelihood decoding for any efficient (i.e., constant or even polynomial rate) code.

  2. Maximum Likelihood Analysis in the PEN Experiment

    NASA Astrophysics Data System (ADS)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ → e+ ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped-beam approach. During runs in 2008-10, PEN has acquired over 2 × 10^7 π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ ν, π+ → μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.

  3. Multiplicative earthquake likelihood models incorporating strain rates

    NASA Astrophysics Data System (ADS)

    Rhoades, D. A.; Christophersen, A.; Gerstenberger, M. C.

    2017-01-01

    SUMMARYWe examine the potential for strain-rate variables to improve long-term earthquake <span class="hlt">likelihood</span> models. We derive a set of multiplicative hybrid earthquake <span class="hlt">likelihood</span> models in which cell rates in a spatially uniform baseline model are scaled using combinations of covariates derived from earthquake catalogue data, fault data, and strain-rates for the New Zealand region. Three components of the strain rate estimated from GPS data over the period 1991-2011 are considered: the shear, rotational and dilatational strain rates. The hybrid model parameters are optimised for earthquakes of M 5 and greater over the period 1987-2006 and tested on earthquakes from the period 2012-2015, which is independent of the strain rate estimates. The shear strain rate is overall the most informative individual covariate, as indicated by Molchan error diagrams as well as multiplicative modelling. Most models including strain rates are significantly more informative than the best models excluding strain rates in both the fitting and testing period. A hybrid that combines the shear and dilatational strain rates with a smoothed seismicity covariate is the most informative model in the fitting period, and a simpler model without the dilatational strain rate is the most informative in the testing period. These results have implications for probabilistic seismic hazard analysis and can be used to improve the background model component of medium-term and short-term earthquake forecasting models.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010ApJS..190..297B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010ApJS..190..297B"><span>Particle-gas <span class="hlt">Dynamics</span> with Athena: <span class="hlt">Method</span> and Convergence</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bai, Xue-Ning; Stone, James M.</p> <p>2010-10-01</p> <p>The Athena magnetohydrodynamics code has been extended to integrate the motion of particles coupled with the gas via aerodynamic drag in order to study the <span class="hlt">dynamics</span> of gas and solids in protoplanetary disks (PPDs) and the formation of planetesimals. Our particle-gas hybrid scheme is based on a second-order predictor-corrector <span class="hlt">method</span>. Careful treatment of the momentum feedback on the gas guarantees exact conservation. The hybrid scheme is stable and convergent in most regimes relevant to PPDs. We describe a semi-implicit integrator generalized from the leap-frog approach. In the absence of drag force, it preserves the geometric properties of a particle orbit. We also present a fully implicit integrator that is unconditionally stable for all regimes of particle-gas coupling. Using our hybrid code, we study the numerical convergence of the nonlinear saturated state of the streaming instability. We find that gas flow properties are well converged with modest grid resolution (128 cells per pressure length ηr for dimensionless stopping time τ s = 0.1) and an equal number of particles and grid cells. On the other hand, particle clumping properties converge only at higher resolutions, and finer resolution leads to stronger clumping before convergence is reached. 
Finally, we find that the measurement of particle transport properties resulted from the streaming instability may be subject to error of about ±20%.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2233897','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2233897"><span>A Subspace <span class="hlt">Method</span> for <span class="hlt">Dynamical</span> Estimation of Evoked Potentials</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Georgiadis, Stefanos D.; Ranta-aho, Perttu O.; Tarvainen, Mika P.; Karjalainen, Pasi A.</p> <p>2007-01-01</p> <p>It is a challenge in evoked potential (EP) analysis to incorporate prior physiological knowledge for estimation. In this paper, we address the problem of single-channel trial-to-trial EP characteristics estimation. Prior information about phase-locked properties of the EPs is assesed by means of estimated signal subspace and eigenvalue decomposition. Then for those situations that <span class="hlt">dynamic</span> fluctuations from stimulus-to-stimulus could be expected, prior information can be exploited by means of state-space modeling and recursive Bayesian mean square estimation <span class="hlt">methods</span> (Kalman filtering and smoothing). We demonstrate that a few dominant eigenvectors of the data correlation matrix are able to model trend-like changes of some component of the EPs, and that Kalman smoother algorithm is to be preferred in terms of better tracking capabilities and mean square error reduction. We also demonstrate the effect of strong artifacts, particularly eye blinks, on the quality of the signal subspace and EP estimates by means of independent component analysis applied as a prepossessing step on the multichannel measurements. PMID:18288257</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1175020','DOE-PATENT-XML'); return false;" href="http://www.osti.gov/scitech/servlets/purl/1175020"><span><span class="hlt">Method</span> for increasing the <span class="hlt">dynamic</span> range of mass spectrometers</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Belov, Mikhail; Smith, Richard D.; Udseth, Harold R.</p> <p>2004-09-07</p> <p>A <span class="hlt">method</span> for enhancing the <span class="hlt">dynamic</span> range of a mass spectrometer by first passing a sample of ions through the mass spectrometer having a quadrupole ion filter, whereupon the intensities of the mass spectrum of the sample are measured. From the mass spectrum, ions within this sample are then identified for subsequent ejection. As further sampling introduces more ions into the mass spectrometer, the appropriate rf voltages are applied to a quadrupole ion filter, thereby selectively ejecting the undesired ions previously identified. In this manner, the desired ions may be collected for longer periods of time in an ion trap, thus allowing better collection and subsequent analysis of the desired ions. The ion trap used for accumulation may be the same ion trap used for mass analysis, in which case the mass analysis is performed directly, or it may be an intermediate trap. In the case where collection is an intermediate trap, the desired ions are accumulated in the intermediate trap, and then transferred to a separate mass analyzer. 
  346. Method for increasing the dynamic range of mass spectrometers

    DOEpatents

    Belov, Mikhail; Smith, Richard D.; Udseth, Harold R.

    2004-09-07

    A method for enhancing the dynamic range of a mass spectrometer by first passing a sample of ions through the mass spectrometer having a quadrupole ion filter, whereupon the intensities of the mass spectrum of the sample are measured. From the mass spectrum, ions within this sample are then identified for subsequent ejection. As further sampling introduces more ions into the mass spectrometer, the appropriate rf voltages are applied to the quadrupole ion filter, thereby selectively ejecting the undesired ions previously identified. In this manner, the desired ions may be collected for longer periods of time in an ion trap, thus allowing better collection and subsequent analysis of the desired ions. The ion trap used for accumulation may be the same ion trap used for mass analysis, in which case the mass analysis is performed directly, or it may be an intermediate trap. In the case where collection is in an intermediate trap, the desired ions are accumulated in the intermediate trap and then transferred to a separate mass analyzer. The present invention finds particular utility where the mass analysis is performed in an ion trap mass spectrometer or a Fourier transform ion cyclotron resonance mass spectrometer.

  347. On the feasibility of a transient dynamic design analysis method

    NASA Astrophysics Data System (ADS)

    Ohara, George J.; Cunniff, Patrick F.

    1992-04-01

    This Annual Report summarizes the progress made during the first year of a two-year grant from the Office of Naval Research. The dynamic behavior of structures subjected to mechanical shock loading provides a continuing problem for design engineers concerned with shipboard foundations supporting critical equipment. Two particular problems associated with shock response are currently under investigation. The first topic explores the possibility of developing a transient design analysis method that does not degrade the current level of the Navy's shock-proofness requirements for heavy shipboard equipment. The second topic examines the prospects of developing scaling rules for the shock response of simple internal equipment of submarines subjected to various attack situations. This effort has been divided into two tasks: chemical explosive scaling for a given hull, and scaling of equipment response across different hull sizes. The computer is used as a surrogate shock machine for these studies. Hence, the results of the research can provide trends, ideas, suggestions, and scaling rules to the Navy. In using these results, the shock-hardening program should use measured data rather than calculated data.

  348. PARTICLE-GAS DYNAMICS WITH ATHENA: METHOD AND CONVERGENCE

    SciTech Connect

    Bai Xuening; Stone, James M.

    2010-10-15

    The Athena magnetohydrodynamics code has been extended to integrate the motion of particles coupled with the gas via aerodynamic drag, in order to study the dynamics of gas and solids in protoplanetary disks (PPDs) and the formation of planetesimals. Our particle-gas hybrid scheme is based on a second-order predictor-corrector method. Careful treatment of the momentum feedback on the gas guarantees exact conservation. The hybrid scheme is stable and convergent in most regimes relevant to PPDs. We describe a semi-implicit integrator generalized from the leap-frog approach. In the absence of drag force, it preserves the geometric properties of a particle orbit. We also present a fully implicit integrator that is unconditionally stable for all regimes of particle-gas coupling. Using our hybrid code, we study the numerical convergence of the nonlinear saturated state of the streaming instability. We find that gas flow properties are well converged with modest grid resolution (128 cells per pressure length ηr for dimensionless stopping time τs = 0.1) and an equal number of particles and grid cells. On the other hand, particle clumping properties converge only at higher resolutions, and finer resolution leads to stronger clumping before convergence is reached. Finally, we find that the measurement of particle transport properties resulting from the streaming instability may be subject to an error of about ±20%.
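    The record above describes a semi-implicit integrator for the particle-gas drag coupling. As a rough, hypothetical illustration of why treating the drag term implicitly keeps the update stable even for very short stopping times, here is a minimal sketch of a semi-implicit velocity update for a single particle with linear drag; the function name, the two-component vectors and the step sizes are invented for the example and are not taken from the Athena implementation.

        import numpy as np

        def semi_implicit_drag_step(v, u_gas, a_ext, t_stop, dt):
            """One velocity update with the linear drag term -(v - u_gas)/t_stop treated
            implicitly, so the step stays stable even when dt >> t_stop.
            v: particle velocity, u_gas: local gas velocity, a_ext: other accelerations."""
            # Implicit solve of  v_new = v + dt*(a_ext - (v_new - u_gas)/t_stop)
            return (v + dt * (a_ext + u_gas / t_stop)) / (1.0 + dt / t_stop)

        # example: a particle relaxing toward a steady gas flow with dt = 10 * t_stop
        v = np.array([1.0, 0.0])
        u = np.array([0.0, 0.0])
        for _ in range(10):
            v = semi_implicit_drag_step(v, u, a_ext=np.zeros(2), t_stop=0.01, dt=0.1)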
  349. Likelihood-free Bayesian computation for structural model calibration: a feasibility study

    NASA Astrophysics Data System (ADS)

    Jin, Seung-Seop; Jung, Hyung-Jo

    2016-04-01

    Finite element (FE) model updating is often used to associate FE models with corresponding existing structures for condition assessment. FE model updating is an inverse problem and is prone to be ill-posed and ill-conditioned when there are many errors and uncertainties in both the FE model and its corresponding measurements. In this case, it is important to quantify these uncertainties properly. Bayesian FE model updating is one of the well-known methods to quantify parameter uncertainty by updating our prior belief about the parameters with the available measurements. In Bayesian inference, the likelihood plays a central role in summarizing the overall residuals between model predictions and corresponding measurements. Therefore, the likelihood should be chosen carefully to reflect the characteristics of the residuals. It is generally known that very little or no information is available regarding the statistical characteristics of the residuals. In most cases, the likelihood is assumed to be an independent, identically distributed Gaussian distribution with zero mean and constant variance. However, this assumption may lead to biased and over/underestimated parameter estimates, so that the uncertainty quantification and prediction become questionable. To alleviate the potential misuse of an inadequate likelihood, this study introduced approximate Bayesian computation (i.e., likelihood-free Bayesian inference), which relaxes the need for an explicit likelihood by analyzing the similarity in behavior between model predictions and measurements. We performed FE model updating based on likelihood-free Markov chain Monte Carlo (MCMC) without using the likelihood. Based on the results of the numerical study, we observed that likelihood-free Bayesian computation can quantify the updating parameters correctly, and its predictive capability for measurements not used in calibration is also secured.

  350. Transfer Entropy as a Log-Likelihood Ratio

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ² distribution is established for the transfer entropy estimator. The result generalizes the equivalence, in the Gaussian case, of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
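    In the Gaussian case noted above, transfer entropy coincides with Granger causality, and the log-likelihood ratio of nested autoregressive models estimates it. The sketch below illustrates only that special case; the lag order, the least-squares fitting and the variable names are choices of the example, not of the paper.

        import numpy as np

        def gaussian_transfer_entropy(x, y, p=2):
            """Estimate transfer entropy from y to x for jointly Gaussian processes as
            0.5*log(residual variance of the restricted AR model / that of the full model).
            The corresponding log-likelihood ratio statistic is 2*N times this value."""
            N = len(x) - p
            # lagged design matrices: one row of p past values per prediction target x[t]
            past_x = np.array([x[t - p:t][::-1] for t in range(p, len(x))])
            past_y = np.array([y[t - p:t][::-1] for t in range(p, len(x))])
            target = x[p:]
            X_restricted = np.column_stack([np.ones(N), past_x])
            X_full = np.column_stack([np.ones(N), past_x, past_y])
            rss = lambda X: np.sum((target - X @ np.linalg.lstsq(X, target, rcond=None)[0]) ** 2)
            te = 0.5 * np.log(rss(X_restricted) / rss(X_full))
            return te, 2.0 * N * te   # (transfer entropy estimate, LR test statistic)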
  351. Assessing allelic dropout and genotype reliability using maximum likelihood

    PubMed Central

    Miller, Craig R; Joyce, Paul; Waits, Lisette P

    2002-01-01

    A growing number of population genetic studies utilize nuclear DNA microsatellite data from museum specimens and noninvasive sources. Genotyping errors are elevated in these low-quantity DNA sources, potentially compromising the power and accuracy of the data. The most conservative method for addressing this problem is effective, but requires extensive replication of individual genotypes. In search of a more efficient method, we developed a maximum-likelihood approach that minimizes errors by estimating genotype reliability and strategically directing replication at loci most likely to harbor errors. The model assumes that false and contaminant alleles can be removed from the dataset and that the allelic dropout rate is even across loci. Simulations demonstrate that the proposed method marks a vast improvement in efficiency while maintaining accuracy. When allelic dropout rates are low (0-30%), the reduction in the number of PCR replicates is typically 40-50%. The model is robust to moderate violations of the even dropout rate assumption. For datasets that contain false and contaminant alleles, a replication strategy is proposed. Our current model addresses only allelic dropout, the most prevalent source of genotyping error. However, the developed likelihood framework can incorporate additional error-generating processes as they become more clearly understood. PMID:11805071

  352. Developmental Changes in Children's Understanding of Future Likelihood and Uncertainty

    ERIC Educational Resources Information Center

    Lagattuta, Kristin Hansen; Sayfan, Liat

    2011-01-01

    Two measures assessed 4-10-year-olds' and adults' (N = 201) understanding of future likelihood and uncertainty. In one task, participants sequenced sets of event pictures varying by one physical dimension according to increasing future likelihood. In a separate task, participants rated characters' thoughts about the likelihood of future events,…

  353. Maximum likelihood identification and optimal input design for identifying aircraft stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Stepner, D. E.; Mehra, R. K.

    1973-01-01

    A new method of extracting aircraft stability and control derivatives from flight test data is developed, based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. A method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are given first, and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.
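    As a hedged illustration of the output-error special case mentioned above (no process noise, Gaussian measurement noise), the following sketch fits two parameters of a toy one-state model by maximizing a concentrated log-likelihood; the model, input signal and noise level are invented and far simpler than an aircraft stability-derivative problem.

        import numpy as np
        from scipy.optimize import minimize

        def simulate(theta, u, dt):
            """Placeholder one-state linear model x' = a*x + b*u, y = x (illustrative only)."""
            a, b = theta
            x, ys = 0.0, []
            for uk in u:
                x = x + dt * (a * x + b * uk)
                ys.append(x)
            return np.array(ys)

        def negative_log_likelihood(theta, u, y_meas, dt):
            """Output-error ML: Gaussian noise variance concentrated out of the likelihood."""
            r = y_meas - simulate(theta, u, dt)
            sigma2 = np.mean(r ** 2)
            return 0.5 * len(r) * np.log(2 * np.pi * sigma2) + 0.5 * len(r)

        # fit the two parameters to synthetic flight-test-like data
        dt, u = 0.05, np.sin(np.linspace(0, 10, 200))
        y_meas = simulate([-1.2, 0.8], u, dt) + 0.01 * np.random.randn(200)
        fit = minimize(negative_log_likelihood, x0=[-0.5, 0.5],
                       args=(u, y_meas, dt), method="Nelder-Mead")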
  354. An improved version of the Green's function molecular dynamics method

    NASA Astrophysics Data System (ADS)

    Kong, Ling Ti; Denniston, Colin; Müser, Martin H.

    2011-02-01

    This work presents an improved version of the Green's function molecular dynamics method (Kong et al., 2009; Campañá and Müser, 2004 [1,2]), which enables one to study the elastic response of a three-dimensional solid to an external stress field by taking into consideration only atoms near the surface. In the previous implementation, the effective elastic coefficients measured at the Γ-point were altered to reduce finite-size effects: their eigenvalues corresponding to the acoustic modes were set to zero. This scheme was found to work well for simple Bravais lattices as long as only atoms within the last layer were treated as Green's function atoms. However, it failed to function as expected in all other cases. It turns out that a violation of the acoustic sum rule for the effective elastic coefficients at Γ (Kong, 2010 [3]) was responsible for this behavior. In the new version, the acoustic sum rule is enforced by adopting an iterative procedure, which is found to be physically more meaningful than the previous one. In addition, the new algorithm allows one to treat lattices with bases, and the Green's function slab is no longer confined to one layer.

    New version program summary:
    Program title: FixGFC/FixGFMD v1.12
    Catalogue identifier: AECW_v1_1
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECW_v1_1.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 206 436
    No. of bytes in distributed program, including test data, etc.: 4 314 850
    Distribution format: tar.gz
    Programming language: C++
    Computer: All
    Operating system: Linux
    Has the code been vectorized or parallelized?: Yes. Code has been parallelized using MPI directives.
    RAM: Depends on the problem
    Classification: 7.7
    External routines: LAMMPS ( http://lammps.sandia.gov/), MPI ( http

  355. Subsample ignorable likelihood for accelerated failure time models with missing predictors

    PubMed

    Zhang, Nanhua; Little, Roderick J

    2015-07-01

    Missing values in predictors are a common problem in survival analysis. In this paper, we review estimation methods for accelerated failure time models with missing predictors, and apply a new method called subsample ignorable likelihood (IL) (Little and Zhang, J R Stat Soc 60:591-605, 2011) to this class of models. The approach applies a likelihood-based method to a subsample of observations that are complete on a subset of the covariates, chosen based on assumptions about the missing data mechanism. We give conditions on the missing data mechanism under which the subsample IL method is consistent, while both complete-case analysis and ignorable maximum likelihood are inconsistent. We illustrate the properties of the proposed method by simulation and apply the method to a real dataset.

  356. A fast, always positive definite and normalizable approximation of non-Gaussian likelihoods

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena

    2015-10-01

    In this paper we extend the previously published DALI approximation for likelihoods to cases in which the parameter dependence is in the covariance matrix. The approximation recovers non-Gaussian likelihoods and reduces to the Fisher matrix approach in the case of Gaussianity. It works with the minimal assumptions of having Gaussian errors on the data and a covariance matrix that possesses a converging Taylor approximation. The resulting approximation works in cases of severe parameter degeneracies and in cases where the Fisher matrix is singular. It is at least 1000 times faster than a typical Monte Carlo Markov Chain run over the same parameter space. Two example applications, to cases of extremely non-Gaussian likelihoods, are presented; one demonstrates how the method succeeds in completely reconstructing a ring-shaped likelihood. A public code is released here: http://lnasellentin.github.io/DALI/.
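    As a point of reference for the Gaussian limit mentioned above, the sketch below computes an ordinary Fisher matrix for Gaussian data with a parameter-independent covariance; it is not the DALI expansion itself, and the straight-line model is purely illustrative.

        import numpy as np

        def fisher_matrix(dmu_dtheta, cov):
            """Fisher matrix for Gaussian data with parameter-independent covariance:
            F_ij = (dmu/dtheta_i)^T C^{-1} (dmu/dtheta_j).
            dmu_dtheta: (n_params, n_data) model derivatives; cov: (n_data, n_data)."""
            cinv = np.linalg.inv(cov)
            return dmu_dtheta @ cinv @ dmu_dtheta.T

        # toy example: straight-line model mu = a + b*x with unit-variance data
        x = np.linspace(0.0, 1.0, 50)
        derivs = np.vstack([np.ones_like(x), x])     # d mu / d a, d mu / d b
        F = fisher_matrix(derivs, np.eye(len(x)))
        param_cov = np.linalg.inv(F)                 # forecast parameter covariance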
  357. Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates

    PubMed

    Chen, Ming-Hui; Ibrahim, Joseph G; Shao, Qi-Man

    2009-10-01

    In this paper, we carry out an in-depth theoretical investigation of the existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975), both in the full data setting and in the presence of missing covariate data. The main motivation for this work arises from missing data problems, where models can easily become difficult to estimate with certain missing data configurations or large missing data fractions. We establish necessary and sufficient conditions for existence of the maximum partial likelihood estimate (MPLE) for completely observed data (i.e., no missing data) settings, as well as sufficient conditions for existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology.

  358. AN EFFICIENT APPROXIMATION TO THE LIKELIHOOD FOR GRAVITATIONAL WAVE STOCHASTIC BACKGROUND DETECTION USING PULSAR TIMING DATA

    SciTech Connect

    Ellis, J. A.; Siemens, X.; Van Haasteren, R.

    2013-05-20

    Direct detection of gravitational waves by pulsar timing arrays will become feasible over the next few years. In the low-frequency regime (10^-7 Hz to 10^-9 Hz), we expect that a superposition of gravitational waves from many sources will manifest itself as an isotropic stochastic gravitational wave background. Currently, a number of techniques exist to detect such a signal; however, many detection methods are computationally challenging. Here we introduce an approximation to the full likelihood function for a pulsar timing array that results in computational savings proportional to the square of the number of pulsars in the array. Through a series of simulations we show that the approximate likelihood function reproduces results obtained from the full likelihood function. We further show, both analytically and through simulations, that, on average, this approximate likelihood function gives unbiased parameter estimates for astrophysically realistic stochastic background amplitudes.

  359. Multimodal Likelihoods in Educational Assessment: Will the Real Maximum Likelihood Score Please Stand up?

    ERIC Educational Resources Information Center

    Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike

    2011-01-01

    It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing on multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…

  360. Multipoint likelihoods for genetic linkage: The untyped founder problem

    SciTech Connect

    O'Connell, J.R.; Chiarulli, D.M.; Weeks, D.E.

    1994-09-01

    Too many untyped founders in a pedigree cause the Elston-Stewart algorithm to grind to a halt. Our solution to this problem involves recoding alleles based on symmetry and identity-by-descent to greatly reduce the number of multi-locus genotypes. We also use modified genotype elimination to better organize the calculation, substantially reducing the amount of memory needed. We never have to consider multi-locus genotypes that are not valid. Thus, for typed pedigrees, the calculation is independent of the number of alleles at a locus. In addition, our locus-by-locus method allows us to group similar calculations to avoid recomputation, costly bookkeeping for valid genotypes, and large memory allocation. We were able to compute a 4-locus likelihood for a 41-member simple pedigree with the first two generations untyped and an allele product of over 1500 in under an hour. This likelihood cannot be computed at all with LINKAGE, since some of its arrays would require over a gigabyte of memory. Our locus-by-locus method is also well suited for parallelization, since we can factor the computation into smaller independent pieces. This will enable us to tackle problems of even greater complexity.

  361. Likelihood-Based Inference of B Cell Clonal Families

    PubMed Central

    Ralph, Duncan K.

    2016-01-01

    The human immune system depends on a highly diverse collection of antibody-making B cells. B cell receptor sequence diversity is generated by a random recombination process called “rearrangement,” forming progenitor B cells, then a Darwinian process of lineage diversification and selection called “affinity maturation.” The resulting receptors can be sequenced in high throughput for research and diagnostics. Such a collection of sequences contains a mixture of various lineages, each of which may be quite numerous, or may consist of only a single member. As a step toward understanding the process and result of this diversification, one may wish to reconstruct lineage membership, i.e., to cluster sampled sequences according to which came from the same rearrangement events. We call this clustering problem “clonal family inference.” In this paper we describe and validate a likelihood-based framework for clonal family inference based on a multi-hidden Markov Model (multi-HMM) framework for B cell receptor sequences. We describe an agglomerative algorithm to find a maximum likelihood clustering, two approximate algorithms with various trade-offs of speed versus accuracy, and a third, fast algorithm for finding specific lineages. We show that under simulation these algorithms greatly improve upon existing clonal family inference methods, and that they also give significantly different clusters than previous methods when applied to two real data sets. PMID:27749910

  362. Identification of Human Gustatory Cortex by Activation Likelihood Estimation

    PubMed Central

    Veldhuizen, Maria G.; Albrecht, Jessica; Zelano, Christina; Boesveldt, Sanne; Breslin, Paul; Lundström, Johan N.

    2010-01-01

    Over the last two decades, neuroimaging methods have identified a variety of taste-responsive brain regions. Their precise location, however, remains in dispute. For example, taste stimulation activates areas throughout the insula and overlying operculum, but identification of subregions has been inconsistent. Furthermore, literature reviews and summaries of gustatory brain activations tend to reiterate rather than resolve this ambiguity. Here we used a new meta-analytic method [activation likelihood estimation (ALE)] to obtain a probability map of the location of gustatory brain activation across fourteen studies. The map of activation likelihood values can also serve as a source of independent coordinates for future region-of-interest analyses. We observed significant cortical activation probabilities in: bilateral anterior insula and overlying frontal operculum, bilateral mid-dorsal insula and overlying Rolandic operculum, bilateral posterior insula/parietal operculum/postcentral gyrus, left lateral orbitofrontal cortex (OFC), right medial OFC, pregenual anterior cingulate cortex (prACC), and right mediodorsal thalamus. This analysis confirms the involvement of multiple cortical areas within the insula and overlying operculum in gustatory processing and provides a functional “taste map” which can be used as an inclusive mask in the data analyses of future studies. In light of this new analysis, we discuss human central processing of gustatory stimuli and identify topics where increased research effort is warranted. PMID:21305668

  363. Likelihood of achieving air quality targets under model uncertainties

    PubMed

    Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W

    2011-01-01

    Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced-form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances the ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses.
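    A minimal sketch of the general idea above, propagating an uncertain model input through a reduced-form response by Monte Carlo to estimate the probability of meeting a target, is given below; the response function, distribution and numbers are invented and bear no relation to the Atlanta modeling.

        import numpy as np

        rng = np.random.default_rng(0)

        def reduced_form_response(emission_cut, sensitivity):
            """Toy reduced-form model: ozone improvement (ppb) = sensitivity * fractional cut."""
            return sensitivity * emission_cut

        # uncertain model input: sensitivity drawn from an illustrative log-normal distribution
        n_draws, target_ppb, emission_cut = 10000, 4.0, 0.30
        sensitivity = rng.lognormal(mean=np.log(12.0), sigma=0.4, size=n_draws)
        improvement = reduced_form_response(emission_cut, sensitivity)
        likelihood_of_attainment = np.mean(improvement >= target_ppb)
        print(f"Estimated probability of meeting the target: {likelihood_of_attainment:.2f}")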
  364. A simple objective method for determining a dynamic journal collection

    PubMed

    Bastille, J D; Mankin, C J

    1980-10-01

    In order to determine the content of a journal collection responsive to both user needs and space and dollar constraints, quantitative measures of the use of a 647-title collection have been related to space and cost requirements to develop objective criteria for a dynamic collection for the Treadwell Library at the Massachusetts General Hospital, a large medical research center. Data were collected for one calendar year (1977) and stored with the elements of each title's profile in a computerized file. To account for the effect of the bulk of the journal runs on the number of uses, raw use data have been adjusted using the linear shelf space required for each title to produce a factor called density of use. Titles have been ranked by raw use and by density of use, with space and cost requirements for each. Data have also been analyzed for five special categories of use. Given automated means of collecting and storing data, use measures should be collected continuously. Using raw use frequency ranking to relate use to space and costs seems sensible, since a decision-point cutoff can be chosen in terms of the potential interlibrary loans generated; but it places new titles at risk while protecting titles with long, little-used runs. Basing decisions on density of use frequency ranking seems to produce a larger yield of titles with fewer potential interlibrary loans and to identify titles with overlong runs which may be pruned or converted to microform. The method developed is simple and practical. Its design will be improved to apply to data collected in 1980 for a continuous study of journal use. The problem addressed is essentially one of inventory control. Viewed as such, it makes good financial sense to measure use as part of the routine operation of the library to provide information for effective management decisions.

  365. Modelling default and likelihood reasoning as probabilistic

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. "Likely" and "by default" are in fact treated as duals, in the same sense as "possibility" and "necessity." To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlight their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  366. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes that have better performance than those presently in use, yet does not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near-optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.
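    The following hypothetical sketch illustrates the candidate-list idea from the abstract above on a tiny (7,4) code: hard-decide each symbol, flip subsets of the least reliable positions, and keep the candidate codeword with the best correlation metric. The generator matrix, reliability measure and search sizes are choices of the example, not of the cited report.

        import numpy as np
        from itertools import combinations

        # toy (7,4) linear code built by brute force from a generator matrix (illustrative only)
        G = np.array([[1, 0, 0, 0, 1, 1, 0],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
        codebook = np.array([(np.array(m) @ G) % 2 for m in np.ndindex(2, 2, 2, 2)])

        def approx_ml_decode(llr, n_weak=3):
            """Candidate-list decoding: hard-decide symbols from their log-likelihood ratios,
            flip subsets of the n_weak least reliable positions, map each trial word to the
            nearest codeword, and keep the candidate with the best correlation metric."""
            hard = (llr < 0).astype(int)
            weakest = np.argsort(np.abs(llr))[:n_weak]
            candidates = [hard.copy()]
            for k in range(1, n_weak + 1):
                for pat in combinations(weakest, k):
                    trial = hard.copy()
                    trial[list(pat)] ^= 1
                    candidates.append(trial)
            best_word, best_metric = None, -np.inf
            for trial in candidates:
                # nearest-codeword step done exhaustively here; a practical decoder would
                # use an algebraic hard-decision decoder for this step instead
                cand = codebook[np.argmin(np.sum(codebook != trial, axis=1))]
                metric = np.sum((1 - 2 * cand) * llr)   # correlation with the received LLRs
                if metric > best_metric:
                    best_word, best_metric = cand, metric
            return best_word

        decoded = approx_ml_decode(np.array([2.1, -0.2, 0.4, 1.8, -1.3, 0.1, 0.9]))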
  367. Groups, information theory, and Einstein's likelihood principle

    NASA Astrophysics Data System (ADS)

    Sicuro, Gabriele; Tempesta, Piergiulio

    2016-04-01

    We propose a unifying picture in which the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies in both physical and social science contexts.

  368. Groups, information theory, and Einstein's likelihood principle

    PubMed

    Sicuro, Gabriele; Tempesta, Piergiulio

    2016-04-01

    We propose a unifying picture in which the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies in both physical and social science contexts.

  369. Application of Koopman Mode Decomposition Methods in Dynamic Stall

    DTIC Science & Technology

    2014-03-11

    ...efforts to study dynamic stall, motivated by the interest in improving maneuverability and performance of rotorcraft air vehicles, progress is needed for

  370. Bayesian and maximum likelihood estimation of hierarchical response time models

    PubMed Central

    Farrell, Simon; Ludwig, Casimir

    2008-01-01

    Hierarchical (or multilevel) statistical models have become increasingly popular in psychology in the last few years. We consider the application of multilevel modeling to the ex-Gaussian, a popular model of response times. Single-level estimation is compared with hierarchical estimation of the parameters of the ex-Gaussian distribution. Additionally, for each approach, maximum likelihood (ML) estimation is compared with Bayesian estimation. A set of simulations and analyses of parameter recovery show that, although all methods perform adequately well, hierarchical methods are better able to recover the parameters of the ex-Gaussian by reducing the variability in recovered parameters. At each level, little overall difference was observed between the ML and Bayesian methods. PMID:19001592
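    As a minimal single-level counterpart to the hierarchical fits described above, the sketch below recovers ex-Gaussian parameters for one simulated participant by maximum likelihood, using SciPy's exponnorm parameterization (shape K = tau/sigma); the simulated parameter values and starting points are arbitrary.

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize

        def exgauss_negloglik(params, rt):
            """Negative log-likelihood of the ex-Gaussian (mu, sigma, tau) for response times rt."""
            mu, sigma, tau = params
            if sigma <= 0 or tau <= 0:
                return np.inf
            return -np.sum(stats.exponnorm.logpdf(rt, K=tau / sigma, loc=mu, scale=sigma))

        # simulate one participant's response times and recover the parameters by ML
        rng = np.random.default_rng(1)
        rt = rng.normal(0.4, 0.05, size=500) + rng.exponential(0.15, size=500)
        start = [np.mean(rt) - np.std(rt), np.std(rt) / 2, np.std(rt) / 2]
        fit = minimize(exgauss_negloglik, start, args=(rt,), method="Nelder-Mead")
        mu_hat, sigma_hat, tau_hat = fit.x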
  371. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed

    Park, Yongseok; Taylor, Jeremy M G; Kalbfleisch, John D

    2012-06-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small- and large-sample properties than for alternative estimators. An example using prostate cancer data illustrates the method.

  372. Pattern recognition using maximum likelihood estimation and orthogonal subspace projection

    NASA Astrophysics Data System (ADS)

    Islam, M. M.; Alam, M. S.

    2006-08-01

    Hyperspectral sensor imagery (HSI) is a relatively new area of research; however, it is extensively used in geology, agriculture, defense, intelligence, and law enforcement applications. Much of the current research focuses on object detection with a low false alarm rate. Over the past several years, many object detection algorithms have been developed, including the linear detector, quadratic detector, adaptive matched filter, etc. In those methods the available data cube was used directly to determine the background mean and the covariance matrix, assuming that the number of object pixels is low compared to that of the data pixels. In this paper, we have used the orthogonal subspace projection (OSP) technique to find the background matrix from the given image data. Our algorithm consists of three parts. In the first part, we calculate the background matrix using the OSP technique. In the second part, we determine the maximum likelihood estimates of the parameters. Finally, we consider the likelihood ratio, commonly known as the Neyman-Pearson quadratic detector, to recognize the objects. The proposed technique has been investigated via computer simulation, where excellent performance has been observed.

  373. Likelihood free inference for Markov processes: a comparison

    PubMed

    Owen, Jamie; Wilkinson, Darren J; Gillespie, Colin S

    2015-04-01

    Approaches to Bayesian inference for problems with intractable likelihoods have become increasingly important in recent years. Approximate Bayesian computation (ABC) and "likelihood free" Markov chain Monte Carlo techniques are popular methods for tackling inference in these scenarios, but such techniques are computationally expensive. In this paper we compare the two approaches to inference, with a particular focus on parameter inference for stochastic kinetic models, widely used in systems biology. Discrete time transition kernels for models of this type are intractable for all but the most trivial systems, yet forward simulation is usually straightforward. We discuss the relative merits and drawbacks of each approach whilst considering the computational cost implications and efficiency of these techniques. In order to explore the properties of each approach we examine a range of observation regimes using two example models. We use a Lotka-Volterra predator-prey model to explore the impact of full or partial species observations using various time course observations under the assumption of known and unknown measurement error. Further investigation into the impact of observation error is then made using a Schlögl system, a test case which exhibits bi-modal state stability in some regions of parameter space.
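    A bare-bones illustration of the likelihood-free setting described above might combine a Gillespie simulation of a stochastic Lotka-Volterra model with an ABC rejection step; the reaction rates, prior ranges, summary statistic and tolerance below are all invented for the example and are not those used in the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        def gillespie_lv(theta, x0=50, y0=100, t_end=10.0):
            """Gillespie simulation of a stochastic Lotka-Volterra model with rates theta."""
            c1, c2, c3 = theta            # prey birth, predation, predator death
            x, y, t, traj = x0, y0, 0.0, []
            while t < t_end:
                rates = np.array([c1 * x, c2 * x * y, c3 * y])
                total = rates.sum()
                if total == 0:
                    break
                t += rng.exponential(1.0 / total)
                r = rng.choice(3, p=rates / total)
                if r == 0:
                    x += 1
                elif r == 1:
                    x -= 1; y += 1
                else:
                    y -= 1
                traj.append((t, x, y))
            return traj

        def abc_rejection(observed_summary, n_trials=2000, tol=30.0):
            """Likelihood-free rejection sampler: keep parameter draws whose simulated
            summary statistic lies within tol of the observed one."""
            accepted = []
            for _ in range(n_trials):
                theta = rng.uniform([0.1, 0.001, 0.1], [1.0, 0.01, 1.0])   # illustrative prior
                traj = gillespie_lv(theta)
                if not traj:
                    continue
                sim_summary = np.mean([x for _, x, _ in traj])             # mean prey count
                if abs(sim_summary - observed_summary) < tol:
                    accepted.append(theta)
            return np.array(accepted)

        posterior_draws = abc_rejection(observed_summary=55.0)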
  374. Penalized maximum-likelihood image reconstruction for lesion detection

    NASA Astrophysics Data System (ADS)

    Qi, Jinyi; Huesman, Ronald H.

    2006-08-01

    Detecting cancerous lesions is one major application of emission tomography. In this paper, we study penalized maximum-likelihood image reconstruction for this important clinical task. Compared to analytical reconstruction methods, statistical approaches can improve the image quality by accurately modelling the photon detection process and measurement noise in imaging systems. To explore the full potential of penalized maximum-likelihood image reconstruction for lesion detection, we derived simplified theoretical expressions that allow fast evaluation of the detectability of a random lesion. The theoretical results are used to design the regularization parameters to improve lesion detectability. We conducted computer-based Monte Carlo simulations to compare the proposed penalty function, the conventional penalty function, and a penalty function for an isotropic point spread function. Lesion detectability is measured by a channelized Hotelling observer. The results show that the proposed penalty function outperforms the other penalty functions for lesion detection. The relative improvement depends on the size of the lesion. However, we found that the penalty function optimized for a 5 mm lesion still outperforms the other two penalty functions for detecting a 14 mm lesion. Therefore, it is feasible to use the penalty function designed for small lesions in image reconstruction, because detection of large lesions is relatively easy.

  375. A comparative study on the restrictions of dynamic test methods

    NASA Astrophysics Data System (ADS)

    Majzoobi, GH.; Lahmi, S.

    2015-09-01

    The dynamic behavior of materials is investigated using different devices, each of which has some restrictions. For instance, the stress-strain curve of materials at high strain rates can be captured only with the Hopkinson bar. However, with a new approach some of the other techniques can also be used to obtain the constants of material models such as the Johnson-Cook model. In this work, the restrictions of devices such as the drop hammer, Taylor test, flying wedge, shot impact test, dynamic tensile extrusion and Hopkinson bars, which are used to characterize material properties at high strain rates, are described. The level of strain and strain rate, and the limits on them, are very important in examining the efficiency of each device. For instance, necking or bulging in tensile and compressive Hopkinson bars, fragmentation in dynamic tensile extrusion, and petaling in the Taylor test are issues restricting the strain rate attainable with each device.

  376. Method for making a dynamic pressure sensor and a pressure sensor made according to the method

    NASA Technical Reports Server (NTRS)

    Zuckerwar, Allan J. (Inventor); Robbins, William E. (Inventor); Robins, Glenn M. (Inventor)

    1994-01-01

    A method for providing a perfectly flat top with a sharp edge on a dynamic pressure sensor using a cup-shaped stretched membrane as a sensing element is described. First, metal is deposited on the membrane and surrounding areas. Next, the side wall of the pressure sensor with the deposited metal is machined to a predetermined size. Finally, deposited metal is removed from the top of the membrane in small steps, by machining or lapping while the pressure sensor is mounted in a jig or the wall of a test object, until the true top surface of the membrane appears. A thin indicator layer having a color contrasting with that of the membrane may be applied to the top of the membrane before metal is deposited, to facilitate determining when to stop metal removal from the top surface of the membrane.

  377. On time discretizations for spectral methods [numerical integration of Fourier and Chebyshev methods for dynamic partial differential equations]

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Turkel, E.

    1980-01-01

    New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.

  378. Nonholonomic Hamiltonian method for molecular dynamics simulations of reacting shocks

    NASA Astrophysics Data System (ADS)

    Bass, Joseph; Fahrenthold, Eric P.

    2017-01-01

    Conventional molecular dynamics simulations of reacting shocks employ a holonomic Hamiltonian formulation: the breaking and forming of covalent bonds is described by potential functions. In general the potential functions (a) are algebraically complex, (b) must satisfy strict smoothness requirements, and (c) contain many fitted parameters. In recent research the authors have developed a new nonholonomic formulation of reacting molecular dynamics. In this formulation bond orders are determined by rate equations, and the bonding-debonding process need not be described by differentiable functions. This simplifies the representation of complex chemistry and reduces the number of fitted parameters.

  379. Method for discovering relationships in data by dynamic quantum clustering

    DOEpatents

    Weinstein, Marvin; Horn, David

    2014-10-28

    Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploring relationships among data points through observation of varying dynamical distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.

  380. Identification of space shuttle main engine dynamics

    NASA Technical Reports Server (NTRS)

    Duyar, Ahmet; Guo, Ten-Huei; Merrill, Walter C.

    1989-01-01

    System identification techniques are used to represent the dynamic behavior of the Space Shuttle Main Engine. The transfer function matrices of the linearized models of both the closed-loop and the open-loop system are obtained by using the recursive maximum likelihood method.
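    As a rough illustration of recursive identification of the kind mentioned above, the sketch below implements recursive least squares for a simple ARX model, a close relative of recursive maximum likelihood under Gaussian noise assumptions; the model orders, initialization and forgetting factor are arbitrary choices of the example, not of the cited report.

        import numpy as np

        def recursive_least_squares(y, u, na=2, nb=2, lam=1.0):
            """Recursive least-squares estimation of an ARX model
            y[t] = -a1*y[t-1] - ... - a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb] + e[t].
            Returns the parameter trajectory; lam is an optional forgetting factor."""
            n = na + nb
            theta = np.zeros(n)
            P = 1e4 * np.eye(n)          # large initial covariance = weak prior
            history = []
            for t in range(max(na, nb), len(y)):
                phi = np.concatenate((-y[t - na:t][::-1], u[t - nb:t][::-1]))
                k = P @ phi / (lam + phi @ P @ phi)
                theta = theta + k * (y[t] - phi @ theta)
                P = (P - np.outer(k, phi @ P)) / lam
                history.append(theta.copy())
            return np.array(history)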
Two methods for absolute calibration of dynamic pressure transducers

    NASA Astrophysics Data System (ADS)

    Swift, G. W.; Migliori, A.; Garrett, S. L.; Wheatley, J. C.

    1982-12-01

    Two techniques are described for absolute calibration of a dynamic pressure transducer from 0 to 400 Hz in 1-MPa helium gas. One technique is based on a comparison to a mercury manometer; the other is based on the principle of reciprocity. The two techniques agree within the instrumental uncertainties of 1%.

Thermal dynamics of thermoelectric phenomena from frequency resolved methods

    NASA Astrophysics Data System (ADS)

    García-Cañadas, J.; Min, G.

    2016-03-01

    Understanding the dynamics of thermoelectric (TE) phenomena is important for the detailed knowledge of the operation of TE materials and devices. By analyzing the impedance response of both a single TE element and a TE device under suspended conditions, we provide new insights into the thermal dynamics of these systems. The analysis is performed employing parameters such as the thermal penetration depth, the characteristic thermal diffusion frequency and the thermal diffusion time. It is shown that in both systems the dynamics of the thermoelectric response is governed by how the Peltier heat production/absorption at the junctions evolves. In a single thermoelement, at high frequencies the thermal waves diffuse semi-infinitely from the junctions towards the half-length. When the frequency is reduced, the thermal waves can penetrate further and eventually reach the half-length, where they start to cancel each other and further penetration is blocked. In the case of a TE module, semi-infinite thermal diffusion along the thickness of the ceramic layers occurs at the highest frequencies. As the frequency is decreased, heat storage in the ceramics becomes dominant and starts to compete with the diffusion of the thermal waves towards the half-length of the thermoelements. Finally, the cancellation of the waves occurs at the lowest frequencies. It is demonstrated that the analysis is able to identify and separate the different physical processes and to provide a detailed understanding of the dynamics of different thermoelectric effects.
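The thermoelectric record above frames its analysis in terms of the thermal penetration depth and a characteristic thermal diffusion frequency. As a small worked example, the sketch below evaluates the standard expression delta = sqrt(2*alpha/omega) for a made-up diffusivity; the numbers are illustrative and are not taken from the record.

    # Thermal penetration depth delta = sqrt(2*alpha/omega) for a hypothetical
    # material; shows how the probed depth shrinks as the frequency increases.
    import math

    alpha = 1.0e-6                     # thermal diffusivity in m^2/s (made up)
    for f_hz in (0.01, 0.1, 1.0, 10.0):
        omega = 2.0 * math.pi * f_hz
        delta = math.sqrt(2.0 * alpha / omega)
        print(f"f = {f_hz:6.2f} Hz  ->  penetration depth = {delta * 1e3:.2f} mm")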
An improved semi-implicit method for structural dynamics analysis

    NASA Technical Reports Server (NTRS)

    Park, K. C.

    1982-01-01

    A semi-implicit algorithm is presented for direct time integration of the structural dynamics equations. The algorithm avoids the factoring of the implicit difference solution matrix and mitigates the unacceptable accuracy losses which plagued previous semi-implicit algorithms. This substantial accuracy improvement is achieved by augmenting the solution matrix with two simple diagonal matrices of the order of the integration truncation error.

Dynamic expansion points: an extension to Hadjidemetriou's mapping method

    NASA Astrophysics Data System (ADS)

    Lhotka, Christoph

    2009-06-01

    Series expansions are widely used objects in perturbation theory in Celestial Mechanics and in Physics in general. Their application is nevertheless limited, by convergence problems of the series on the one hand, and to regions of phase space where the (small) expansion parameters remain small on the other. In the mapping case, the latter problem is overcome by, for example, using different expansion points to cover the whole phase space, resulting in a set of dynamical mappings for one dynamical system. In addition, the accuracy of such expansions depends not only on the order of truncation but also on the definition of the grid of expansion points in phase space. A simple modification of the usual approach makes it possible to increase the accuracy of the expanded mappings and to cover the whole phase space where the series converge. Convergence problems due to the nonintegrability of the system can never be ruled out, but the convergence of series expansions in mapping models which are convergent can be improved. The underlying idea is based on dynamic expansion points, which are the main subject of this article. As I will show, it is possible to derive unique linear mappings, based on dynamically expanded generating functions, for the 3:1 resonance and the coupled standard map, which are valid in their whole phase spaces.
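The record above derives dynamically expanded mappings for, among other systems, the coupled standard map. For orientation only, here is a minimal Python sketch that iterates the ordinary (single) Chirikov standard map; the coupled version and the dynamically expanded generating functions of the record are not reproduced, and the parameter value is arbitrary.

    # One-degree-of-freedom Chirikov standard map:
    #   p' = p + K*sin(theta),  theta' = theta + p'  (mod 2*pi)
    import math

    def standard_map(theta, p, K=0.9):
        p_new = p + K * math.sin(theta)
        theta_new = (theta + p_new) % (2.0 * math.pi)
        return theta_new, p_new

    theta, p = 1.0, 0.3
    for _ in range(1000):
        theta, p = standard_map(theta, p)
    print("final point:", theta, p)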
Technologies and Truth Games: Research as a Dynamic Method

    ERIC Educational Resources Information Center

    Hassett, Dawnene D.

    2010-01-01

    This article offers a way of thinking about literacy instruction that critiques current reasoning, but also provides a space to dynamically think outside of prevalent practices. It presents a framework for both planning and studying literacy pedagogy that combines a practical everyday model of the reading process with Michel Foucault's (1988c)…

Likelihood-free simulation-based optimal design with an application to spatial extremes.

    PubMed

    Hainy, Markus; Müller, Werner G; Wagner, Helga

    In this paper we employ a novel method to find the optimal design for problems where the likelihood is not available analytically, but simulation from the likelihood is feasible. To approximate the expected utility we make use of approximate Bayesian computation methods. We detail the approach for a model on spatial extremes, where the goal is to find the optimal design for efficiently estimating the parameters determining the dependence structure. The method is applied to determine the optimal design of weather stations for modeling maximum annual summer temperatures.

Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    NASA Astrophysics Data System (ADS)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
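The CMB record above compresses map data onto signal-to-noise covariance eigenvectors before evaluating the likelihood. The sketch below illustrates that kind of linear compression on toy covariance matrices; random matrices stand in for the signal and noise covariances, so this is not the authors' pipeline or WMAP data.

    # Compress a data vector onto the leading eigenvectors of the noise-whitened
    # signal covariance (a Karhunen-Loeve-style signal-to-noise basis).
    import numpy as np

    rng = np.random.default_rng(1)
    npix, nmodes = 200, 50
    A = rng.normal(size=(npix, npix))
    S = A @ A.T / npix                               # toy signal covariance
    N = np.diag(rng.uniform(0.5, 2.0, npix))         # toy diagonal noise covariance

    Ninv_sqrt = np.diag(1.0 / np.sqrt(np.diag(N)))   # noise whitening
    evals, evecs = np.linalg.eigh(Ninv_sqrt @ S @ Ninv_sqrt)
    top = evecs[:, np.argsort(evals)[::-1][:nmodes]] # keep the highest-S/N modes

    d = rng.multivariate_normal(np.zeros(npix), S + N)  # toy data vector
    d_compressed = top.T @ (Ninv_sqrt @ d)              # compressed data
    print(d_compressed.shape)                           # (nmodes,)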
What is the best method to fit time-resolved data? A comparison of the residual minimization and the maximum likelihood techniques as applied to experimental time-correlated, single-photon counting data

    DOE PAGES

    Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; ...

    2016-02-10

    The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as “residual minimization” (RM) and “maximum likelihood” (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of “photon counts” was approximately 20, 200, 1000, 3000, and 6000 and there were about 2–200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson’s weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. Here, the robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
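The time-resolved fluorescence record above compares residual minimization with maximum likelihood fitting of sparse photon-counting decays. As a rough illustration of the ML side only, the sketch below fits a single-exponential decay to synthetic Poisson counts by minimizing the Poisson negative log-likelihood; it omits the instrument-response convolution used in the actual analysis, and all numbers are invented.

    # Maximum-likelihood (Poisson) fit of a single-exponential decay to sparse
    # synthetic photon-counting data.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 5.0, 64)          # time bins in ns (hypothetical)
    tau_true, total = 0.53, 200            # 530 ps lifetime, about 200 counts

    def model(tau):
        shape = np.exp(-t / tau)
        return total * shape / shape.sum() # expected counts per bin

    counts = rng.poisson(model(tau_true))

    def neg_log_likelihood(tau):
        mu = model(tau)
        return np.sum(mu - counts * np.log(mu + 1e-12))  # Poisson NLL up to a constant

    fit = minimize_scalar(neg_log_likelihood, bounds=(0.05, 5.0), method="bounded")
    print("fitted lifetime (ns):", fit.x)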
What is the best method to fit time-resolved data? A comparison of the residual minimization and the maximum likelihood techniques as applied to experimental time-correlated, single-photon counting data

    SciTech Connect

    Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; Smith, Emily A.; Vaswani, Namrata; Petrich, Jacob W.

    2016-02-10

    The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as “residual minimization” (RM) and “maximum likelihood” (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of “photon counts” was approximately 20, 200, 1000, 3000, and 6000 and there were about 2–200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson’s weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. Here, the robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
Prioritizing Rare Variants with Conditional Likelihood Ratios

    PubMed Central

    Li, Weili; Dobbins, Sara; Tomlinson, Ian; Houlston, Richard; Pal, Deb K.; Strug, Lisa J.

    2016-01-01

    Background: Prioritizing individual rare variants within associated genes or regions often consists of an ad hoc combination of statistical and biological considerations. From the statistical perspective, rare variants are often ranked using Fisher's exact p values, which can lead to different rankings of the same set of variants depending on whether 1- or 2-sided p values are used. Results: We propose a likelihood ratio-based measure, maxLRc, for the statistical component of ranking rare variants under a case-control study design that avoids the hypothesis-testing paradigm. We prove analytically that the maxLRc is always well-defined, even when the data has zero cell counts in the 2×2 disease-variant table. Via simulation, we show that the maxLRc outperforms Fisher's exact p values in most practical scenarios considered. Using next-generation sequence data from 27 rolandic epilepsy cases and 200 controls in a region previously shown to be linked to and associated with rolandic epilepsy, we demonstrate that rankings assigned by the maxLRc and exact p values can differ substantially. Conclusion: The maxLRc provides reliable statistical prioritization of rare variants using only the observed data, avoiding the need to specify parameters associated with hypothesis testing that can result in ranking discrepancies across p value procedures; and it is applicable to common variant prioritization. PMID:25659987

Likelihood analysis of supersymmetric SU(5) GUTs.

    PubMed

    Bagnaschi, E; Costa, J C; Sakurai, K; Borsato, M; Buchmueller, O; Cavanaugh, R; Chobanova, V; Citron, M; De Roeck, A; Dolan, M J; Ellis, J R; Flächer, H; Heinemeyer, S; Isidori, G; Lucio, M; Martínez Santos, D; Olive, K A; Richards, A; de Vries, K J; Weiglein, G

    2017-01-01

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan \beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel ${\tilde u_R}/{\tilde c_R} - \tilde{\chi}^0_1$ coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of $\tilde{\nu}_\tau$ coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.
Likelihood Analysis of Supersymmetric SU(5) GUTs

    SciTech Connect

    Bagnaschi, E.; Costa, J. C.; Sakurai, K.; Borsato, M.; Buchmueller, O.; Cavanaugh, R.; Chobanova, V.; Citron, M.; De Roeck, A.; Dolan, M. J.; Ellis, J. R.; Flächer, H.; Heinemeyer, S.; Isidori, G.; Lucio, M.; Martínez Santos, D.; Olive, K. A.; Richards, A.; de Vries, K. J.; Weiglein, G.

    2016-10-31

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan \beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel ${\tilde u_R}/{\tilde c_R} - \tilde{\chi}^0_1$ coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ${\tilde \nu_\tau}$ coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.
Likelihood analysis of supersymmetric SU(5) GUTs

    NASA Astrophysics Data System (ADS)

    Bagnaschi, E.; Costa, J. C.; Sakurai, K.; Borsato, M.; Buchmueller, O.; Cavanaugh, R.; Chobanova, V.; Citron, M.; De Roeck, A.; Dolan, M. J.; Ellis, J. R.; Flächer, H.; Heinemeyer, S.; Isidori, G.; Lucio, M.; Martínez Santos, D.; Olive, K. A.; Richards, A.; de Vries, K. J.; Weiglein, G.

    2017-02-01

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan \beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + $E_T^{miss}$ events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel ${\tilde u_R}/{\tilde c_R} - \tilde{\chi}^0_1$ coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of $\tilde{\nu}_\tau$ coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.

MAGPI: A Framework for Maximum Likelihood MR Phase Imaging Using Multiple Receive Coils

    PubMed Central

    Dagher, Joseph; Nael, Kambiz

    2015-01-01

    Purpose: Combining MR phase images from multiple receive coils is a challenging problem, complicated by ambiguities introduced by phase wrapping, noise, and the unknown phase-offset between the coils. Various techniques have been proposed to mitigate the effect of these ambiguities, but most of the existing methods require additional reference scans and/or use ad hoc post-processing techniques that do not guarantee any optimality. Theory and Methods: Here, the phase estimation problem is formulated rigorously using a Maximum-Likelihood (ML) approach. The proposed framework jointly designs the acquisition-processing chain: the optimized pulse sequence is a single Multi-Echo Gradient Echo scan and the corresponding post-processing algorithm is a voxel-per-voxel ML estimator of the underlying tissue phase. Results: Our proposed framework (MAGPI) achieves substantial improvements in the phase estimate, resulting in phase SNR gains by up to an order of magnitude compared to existing methods. Conclusion: The advantages of MAGPI are: (1) ML-optimal combination of phase data from multiple receive coils, without a reference scan; (2) ML-optimal estimation of the underlying tissue phase, without the need for spatial processing; and (3) robust dynamic estimation of channel-dependent phase-offsets. PMID:25946426
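The MAGPI record above formulates multi-coil phase estimation as a maximum-likelihood problem. The sketch below is only a toy baseline for the same task: it removes the unknown per-coil phase offsets by differencing successive echoes and then combines the coils with magnitude weighting. It is not the MAGPI estimator, and the signal model and noise level are invented.

    # Toy multi-coil phase combination: echo-to-echo phase differences are free of
    # the per-coil phase offsets, so they can be summed over coils and averaged.
    import numpy as np

    rng = np.random.default_rng(3)
    n_coils, n_echoes = 8, 3
    tissue_phase = 0.7                                  # radians per echo spacing
    coil_offset = rng.uniform(-np.pi, np.pi, n_coils)   # unknown per-coil offsets
    mag = rng.uniform(0.5, 2.0, n_coils)                # coil sensitivities

    echoes = np.arange(1, n_echoes + 1)
    noise = 0.05 * (rng.normal(size=(n_coils, n_echoes))
                    + 1j * rng.normal(size=(n_coils, n_echoes)))
    signal = mag[:, None] * np.exp(1j * (coil_offset[:, None] + tissue_phase * echoes)) + noise

    diff = signal[:, 1:] * np.conj(signal[:, :-1])      # offset-free per-coil phasors
    est = np.angle(diff.sum(axis=0)).mean()             # magnitude-weighted combination
    print("estimated tissue phase per echo:", est)      # close to 0.7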
Photogrammetry and optical methods in structural dynamics - A review

    NASA Astrophysics Data System (ADS)

    Baqersad, Javad; Poozesh, Peyman; Niezrecki, Christopher; Avitabile, Peter

    2017-03-01

    In the last few decades, there has been a surge of research in the area of non-contact measurement techniques. Photogrammetry has received considerable attention due to its ability to achieve full-field measurement and its robustness to work in testing environments and on testing articles in which using other measurement techniques may not be practical. More recently, researchers have used this technique to study transient phenomena and to perform measurements on vibrating structures. The current paper reviews the most current trends in the photogrammetry technique (point tracking, digital image correlation, and target-less approaches) and compares the applications of photogrammetry to other measurement techniques used in structural dynamics (e.g. laser Doppler vibrometry and interferometry techniques). The paper does not present the theoretical background of the optical techniques, but instead presents the general principles of each approach and highlights the novel structural dynamic measurement concepts and applications that are enhanced by utilizing optical techniques.

A method of measuring dynamic strain under electromagnetic forming conditions.

    PubMed

    Chen, Jinling; Xi, Xuekui; Wang, Sijun; Lu, Jun; Guo, Chenglong; Wang, Wenquan; Liu, Enke; Wang, Wenhong; Liu, Lin; Wu, Guangheng

    2016-04-01

    Dynamic strain measurement is rather important for the characterization of mechanical behaviors in the electromagnetic forming process, but it has been hindered by the high strain rate and serious electromagnetic interference for years. In this work, a simple and effective strain measuring technique for physical and mechanical behavior studies in the electromagnetic forming process has been developed. High-resolution (~5 ppm) strain curves of a bulging aluminum tube in a pulsed electromagnetic field have been successfully measured using this technique. The measured strain rate is about 10^5 s^-1, which depends on the discharging conditions, nearly one order of magnitude higher than that under conventional split Hopkinson pressure bar loading conditions (~10^4 s^-1). It has been found that the dynamic fracture toughness of an aluminum alloy is significantly enhanced during electromagnetic forming, which explains why the formability is much larger under electromagnetic forming conditions in comparison with conventional forging processes.
Phase portrait methods for verifying fluid dynamic simulations

    SciTech Connect

    Stewart, H.B.

    1989-01-01

    As computing resources become more powerful and accessible, engineers more frequently face the difficult and challenging engineering problem of accurately simulating nonlinear dynamic phenomena. Although mathematical models are usually available, in the form of initial value problems for differential equations, the behavior of the solutions of nonlinear models is often poorly understood. A notable example is fluid dynamics: while the Navier-Stokes equations are believed to correctly describe turbulent flow, no exact mathematical solution of these equations in the turbulent regime is known. Differential equations can of course be solved numerically, but how are we to assess numerical solutions of complex phenomena without some understanding of the mathematical problem and its solutions to guide us?

A comparative study of computational methods in cosmic gas dynamics

    NASA Technical Reports Server (NTRS)

    Van Albada, G. D.; Van Leer, B.; Roberts, W. W., Jr.

    1982-01-01

    Many theoretical investigations of fluid flows in astrophysics require extensive numerical calculations. The selection of an appropriate computational method is, therefore, important for the astronomer who has to solve an astrophysical flow problem. The present investigation aims to provide an informational basis for such a selection by comparing a variety of numerical methods with the aid of a test problem. The test problem involves a simple, one-dimensional model of the gas flow in a spiral galaxy. The numerical methods considered include the beam scheme, Godunov's method (G), the second-order flux-splitting method (FS2), MacCormack's method, and the flux-corrected transport methods of Boris and Book (1973). It is found that the best second-order method (FS2) outperforms the best first-order method (G) by a huge margin.
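The comparison above includes Godunov's first-order method among the schemes tested. For linear advection, Godunov's method reduces to first-order upwinding, sketched below on a toy periodic problem rather than the spiral-galaxy gas-flow test problem used in the study.

    # First-order upwind (Godunov) update for u_t + a*u_x = 0 on a periodic grid.
    import numpy as np

    nx, a, cfl = 200, 1.0, 0.8
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = x[1] - x[0]
    dt = cfl * dx / a
    u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)    # square pulse initial data

    for _ in range(100):
        flux = a * u                                 # upwind flux for a > 0
        u = u - dt / dx * (flux - np.roll(flux, 1))  # conservative update

    print("total mass after 100 steps:", u.sum() * dx)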
Seasonal species interactions minimize the impact of species turnover on the likelihood of community persistence.

    PubMed

    Saavedra, Serguei; Rohr, Rudolf P; Fortuna, Miguel A; Selva, Nuria; Bascompte, Jordi

    2016-04-01

    Many of the observed species interactions embedded in ecological communities are not permanent, but are characterized by temporal changes that are observed along with abiotic and biotic variations. While work has been done describing and quantifying these changes, little is known about their consequences for species coexistence. Here, we investigate the extent to which changes of species composition impact the likelihood of persistence of the predator-prey community in the highly seasonal Białowieża Primeval Forest (northeast Poland), and the extent to which seasonal changes of species interactions (predator diet) modulate the expected impact. This likelihood is estimated extending recent developments on the study of structural stability in ecological communities. We find that the observed species turnover strongly varies the likelihood of community persistence between summer and winter. Importantly, we demonstrate that the observed seasonal interaction changes minimize the variation in the likelihood of persistence associated with species turnover across the year. We find that these community dynamics can be explained as the coupling of individual species to their environment by minimizing both the variation in persistence conditions and the interaction changes between seasons. Our results provide a homeostatic explanation for seasonal species interactions and suggest that monitoring the association of interaction changes with the level of variation in community dynamics can provide a good indicator of the response of species to environmental pressures.
Drawing dynamical and parameters planes of iterative families and methods.

    PubMed

    Chicharro, Francisco I; Cordero, Alicia; Torregrosa, Juan R

    2013-01-01

    The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide us the excellent schemes (or dreadful ones).

Drawing Dynamical and Parameters Planes of Iterative Families and Methods

    PubMed Central

    Chicharro, Francisco I.

    2013-01-01

    The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide us the excellent schemes (or dreadful ones). PMID:24376386
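The two records above draw dynamical planes of an iterative family using MATLAB code. As a generic illustration of the same kind of picture, the Python sketch below colors the dynamical plane of Newton's method applied to z^2 - 1; Newton's method is used as a simple stand-in for the parametric fourth-order Kim family studied by the authors.

    # Color points of the complex plane by the root to which Newton's method on
    # p(z) = z**2 - 1 converges (a basic "dynamical plane" picture).
    import numpy as np

    n, iters = 400, 40
    re, im = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n))
    z = re + 1j * im
    for _ in range(iters):
        z = z - (z**2 - 1.0) / (2.0 * z + 1e-16)    # Newton step, guarded at z = 0

    basin = np.where(z.real > 0, 1, 0)              # 1 -> root +1, 0 -> root -1
    print("pixels attracted to +1:", int(basin.sum()), "of", n * n)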
Dynamic State Estimation Utilizing High Performance Computing Methods

    SciTech Connect

    Schneider, Kevin P.; Huang, Zhenyu; Yang, Bo; Hauer, Matthew L.; Nieplocha, Jaroslaw

    2009-03-18

    The state estimation tools which are currently deployed in power system control rooms are based on a quasi-steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available and their accuracy is compromised. This paper presents an overview of the Kalman Filtering process and then focuses on the implementation of the prediction component on multiple processors.

Computational Fluid Dynamics. [numerical methods and algorithm development]

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

Method and system for dynamic probabilistic risk assessment

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)

    2013-01-01

    The DEFT methodology, system and computer readable medium extends the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems, by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, supports all common PRA analysis functions and cutsets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
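One of the records above gives an overview of the Kalman filtering process for dynamic state estimation in power systems. The sketch below is a textbook serial predict/update step for a generic linear model with a toy constant-velocity example; it is not the multi-processor implementation discussed in that record.

    # Generic linear Kalman filter: one predict/update step.
    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        x_pred = F @ x                       # predict the state
        P_pred = F @ P @ F.T + Q             # predict the covariance
        S = H @ P_pred @ H.T + R             # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])               # position is measured
    Q = 0.01 * np.eye(2)
    R = np.array([[0.5]])
    x, P = np.zeros(2), np.eye(2)
    for z in [1.1, 2.0, 2.9, 4.2]:           # hypothetical measurements
        x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
    print("state estimate:", x)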
Electronically nonadiabatic dynamics via semiclassical initial value methods.

    PubMed

    Miller, William H

    2009-02-26

    In the late 1970s Meyer and Miller (MM) [J. Chem. Phys. 1979, 70, 3214.] presented a classical Hamiltonian corresponding to a finite set of electronic states of a molecular system (i.e., the various potential energy surfaces and their couplings), so that classical trajectory simulations could be carried out by treating the nuclear and electronic degrees of freedom (DOF) in an equivalent dynamical framework (i.e., by classical mechanics), thereby describing nonadiabatic dynamics in a more unified manner. Much later Stock and Thoss (ST) [Phys. Rev. Lett. 1997, 78, 578.] showed that the MM model is actually not a "model", but rather a "representation" of the nuclear-electronic system; i.e., were the MMST nuclear-electronic Hamiltonian taken as a Hamiltonian operator and used in the Schrodinger equation, the exact (quantum) nuclear-electronic dynamics would be obtained. In recent years various initial value representations (IVRs) of semiclassical (SC) theory have been used with the MMST Hamiltonian to describe electronically nonadiabatic processes. Of special interest is the fact that, though the classical trajectories generated by the MMST Hamiltonian (and which are the "input" for an SC-IVR treatment) are "Ehrenfest trajectories", when they are used within the SC-IVR framework, the nuclear motion emerges from regions of nonadiabaticity on one potential energy surface (PES) or another, and not on an average PES as in the traditional Ehrenfest model. Examples are presented to illustrate and (hopefully) illuminate this behavior.

Electronically Nonadiabatic Dynamics via Semiclassical Initial Value Methods

    SciTech Connect

    Miller, William H.

    2008-12-11

    In the late 1970's Meyer and Miller (MM) [J. Chem. Phys. 70, 3214 (1979)] presented a classical Hamiltonian corresponding to a finite set of electronic states of a molecular system (i.e., the various potential energy surfaces and their couplings), so that classical trajectory simulations could be carried out treating the nuclear and electronic degrees of freedom (DOF) in an equivalent dynamical framework (i.e., by classical mechanics), thereby describing non-adiabatic dynamics in a more unified manner. Much later Stock and Thoss (ST) [Phys. Rev. Lett. 78, 578 (1997)] showed that the MM model is actually not a 'model', but rather a 'representation' of the nuclear-electronic system; i.e., were the MMST nuclear-electronic Hamiltonian taken as a Hamiltonian operator and used in the Schroedinger equation, the exact (quantum) nuclear-electronic dynamics would be obtained. In recent years various initial value representations (IVRs) of semiclassical (SC) theory have been used with the MMST Hamiltonian to describe electronically non-adiabatic processes. Of special interest is the fact that though the classical trajectories generated by the MMST Hamiltonian (and which are the 'input' for an SC-IVR treatment) are 'Ehrenfest trajectories', when they are used within the SC-IVR framework the nuclear motion emerges from regions of non-adiabaticity on one potential energy surface (PES) or another, and not on an average PES as in the traditional Ehrenfest model. Examples are presented to illustrate and (hopefully) illuminate this behavior.
Computationally Efficient Composite Likelihood Statistics for Demographic Inference.

    PubMed

    Coffman, Alec J; Hsieh, Ping Hsun; Gravel, Simon; Gutenkunst, Ryan N

    2016-02-01

    Many population genetics tools employ composite likelihoods, because fully modeling genomic linkage is challenging. But traditional approaches to estimating parameter uncertainties and performing model selection require full likelihoods, so these tools have relied on computationally expensive maximum-likelihood estimation (MLE) on bootstrapped data. Here, we demonstrate that statistical theory can be applied to adjust composite likelihoods and perform robust computationally efficient statistical inference in two demographic inference tools: ∂a∂i and TRACTS. On both simulated and real data, the adjustments perform comparably to MLE bootstrapping while using orders of magnitude less computational time.

An updated maximum likelihood approach to open cluster distance determination

    NASA Astrophysics Data System (ADS)

    Palmer, M.; Arenou, F.; Luri, X.; Masana, E.

    2014-04-01

    Aims: An improved method for estimating distances to open clusters is presented and applied to Hipparcos data for the Pleiades and the Hyades. The method is applied in the context of the historic Pleiades distance problem, with a discussion of previous criticisms of Hipparcos parallaxes. This is followed by an outlook for Gaia, where the improved method could be especially useful. Methods: Based on maximum likelihood estimation, the method combines parallax, position, apparent magnitude, colour, proper motion, and radial velocity information to estimate the parameters describing an open cluster precisely and without bias. Results: We find the distance to the Pleiades to be 120.3 ± 1.5 pc, in accordance with previously published work using the same dataset. We find that error correlations cannot be responsible for the still present discrepancy between Hipparcos and photometric methods. Additionally, the three-dimensional space velocity and physical structure of the Pleiades is parametrised, where we find strong evidence of mass segregation. The distance to the Hyades is found to be 46.35 ± 0.35 pc, also in accordance with previous results. Through the use of simulations, we confirm that the method is unbiased, so it will be useful for accurate open cluster parameter estimation with Gaia at distances up to several thousand parsec. Appendices are available in electronic form at http://www.aanda.org
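The open-cluster record above estimates cluster parameters by maximum likelihood from parallax, photometric, and kinematic data together. The sketch below keeps only the parallax part: a toy ML estimate of a common cluster distance from noisy parallaxes with Gaussian errors, using invented numbers.

    # Maximum-likelihood distance of a cluster from noisy parallaxes alone,
    # assuming Gaussian parallax errors and a single common distance.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(4)
    d_true, sigma = 120.0, 0.4                               # pc, parallax error in mas
    plx_obs = 1000.0 / d_true + sigma * rng.normal(size=50)  # parallaxes in mas

    def neg_log_like(d):
        mu = 1000.0 / d                    # model parallax (mas) at distance d (pc)
        return 0.5 * np.sum((plx_obs - mu) ** 2) / sigma**2

    fit = minimize_scalar(neg_log_like, bounds=(50.0, 300.0), method="bounded")
    print("ML distance estimate (pc):", fit.x)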
Increasing Power of Groupwise Association Test with Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Sul, Jae Hoon; Han, Buhm; Eskin, Eleazar

    Sequencing studies have been discovering a large number of rare variants, allowing the identification of the effects of rare variants on disease susceptibility. As a method to increase the statistical power of studies on rare variants, several groupwise association tests that group rare variants in genes and detect associations between groups and diseases have been proposed. One major challenge in these methods is to determine which variants are causal in a group, and to overcome this challenge, previous methods used prior information that specifies how likely each variant is causal. Another source of information that can be used to determine causal variants is the observed data, because case individuals are likely to have more causal variants than control individuals. In this paper, we introduce a likelihood ratio test (LRT) that uses both data and prior information to infer which variants are causal and uses this finding to determine whether a group of variants is involved in a disease. We demonstrate through simulations that LRT achieves higher power than previous methods. We also evaluate our method on mutation screening data of the susceptibility gene for ataxia telangiectasia, and show that LRT can detect an association in real data. To increase the computational speed of our method, we show how we can decompose the computation of LRT, and propose an efficient permutation test. With this optimization, we can efficiently compute an LRT statistic and its significance at a genome-wide level. The software for our method is publicly available at http://genetics.cs.ucla.edu/rarevariants.
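The groupwise association record above builds a likelihood ratio test over causal-variant configurations using both data and prior information. As a much simpler illustration of a likelihood ratio test in the same setting, the sketch below compares the rate of rare-variant carriers in cases versus controls under a binomial model; it is a generic LRT with invented counts, not the authors' statistic.

    # Likelihood ratio test for equal carrier rates in cases and controls
    # (binomial model: pooled rate under the null, separate rates under the alternative).
    import math
    from scipy.stats import chi2

    def binom_loglik(k, n, p):
        if p in (0.0, 1.0):
            ok = (p == 0.0 and k == 0) or (p == 1.0 and k == n)
            return 0.0 if ok else float("-inf")
        return k * math.log(p) + (n - k) * math.log(1.0 - p)

    def group_lrt(k_case, n_case, k_ctrl, n_ctrl):
        p_pool = (k_case + k_ctrl) / (n_case + n_ctrl)
        ll_null = binom_loglik(k_case, n_case, p_pool) + binom_loglik(k_ctrl, n_ctrl, p_pool)
        ll_alt = (binom_loglik(k_case, n_case, k_case / n_case)
                  + binom_loglik(k_ctrl, n_ctrl, k_ctrl / n_ctrl))
        stat = 2.0 * (ll_alt - ll_null)
        return stat, chi2.sf(stat, df=1)    # asymptotic chi-square p-value

    print(group_lrt(k_case=18, n_case=500, k_ctrl=7, n_ctrl=500))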
H.264 SVC Complexity Reduction Based on Likelihood Mode Decision

    PubMed Central

    Balaji, L.; Thyagharajan, K. K.

    2015-01-01

    H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC runs on different electronic devices such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, in which users demand various scalings of the same content. The scaling may be in resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling consumes more encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well when compared to the previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% detriment in PSNR and a 0.17% increment in bit rate compared with the full search method. PMID:26221623

Sequential Generalized Likelihood Ratio Tests for Vaccine Safety Evaluation

    PubMed Central

    Shih, Mei-Chiung; Lai, Tze Leung; Heyse, Joseph F.; Chen, Jie

    2010-01-01

    SUMMARY: The evaluation of vaccine safety involves pre-clinical animal studies, pre-licensure randomized clinical trials and post-licensure safety studies. Sequential design and analysis are of particular interest because they allow early termination of the trial or quick detection that the vaccine exceeds a prescribed bound on the adverse event rate. After a review of recent developments in this area, we propose a new class of sequential generalized likelihood ratio tests for evaluating adverse event rates in two-armed pre-licensure clinical trials and single-armed post-licensure studies. The proposed approach is illustrated using data from the Rotavirus Efficacy and Safety Trial (REST). Simulation studies of the performance of the proposed approach and other methods are also given. PMID:20799244
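The vaccine-safety record above studies sequential likelihood ratio tests for adverse event rates. The sketch below illustrates the general idea with a simple Poisson SPRT-style monitor against a fixed elevated rate; the tests proposed in the record are generalized likelihood ratio tests with their own boundaries, which are not reproduced here, and all rates, counts, and the threshold are invented.

    # Sequential Poisson log-likelihood-ratio monitoring of an adverse event rate
    # against a prescribed bound lambda0, signalling when the cumulative log-LR
    # crosses an illustrative threshold.
    import math

    lambda0, lambda1 = 1.0, 2.0              # acceptable vs. elevated events per period
    threshold = math.log(20.0)               # illustrative stopping boundary
    events_per_period = [0, 2, 1, 3, 4, 2]   # hypothetical surveillance counts

    llr = 0.0
    for period, k in enumerate(events_per_period, start=1):
        llr += k * math.log(lambda1 / lambda0) - (lambda1 - lambda0)
        print(f"period {period}: cumulative log-LR = {llr:.2f}")
        if llr >= threshold:
            print("signal: the adverse event rate appears to exceed the bound")
            break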
Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs

    SciTech Connect

    Nix, D.A.; Hogden, J.E.

    1998-12-01

    The authors describe Maximum-Likelihood Continuity Mapping (MALCOM) as an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automata architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a far more realistic model of the speech production process. The authors support this claim by generating continuity maps for three speakers and using the resulting MALCOM paths to predict measured speech articulator data. The correlations between the MALCOM paths (obtained from only the speech acoustics) and the actual articulator movements average 0.77 on an independent test set not used to train MALCOM nor the predictor. On average, this unsupervised model achieves 92% of the performance obtained using the corresponding supervised method.

H.264 SVC Complexity Reduction Based on Likelihood Mode Decision.

    PubMed

    Balaji, L; Thyagharajan, K K

    2015-01-01

    H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC runs on different electronic devices such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, in which users demand various scalings of the same content. The scaling may be in resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling consumes more encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well when compared to the previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% detriment in PSNR and a 0.17% increment in bit rate compared with the full search method.
415. Maximum likelihood: Extracting unbiased information from complex networks

    NASA Astrophysics Data System (ADS)

    Garlaschelli, Diego; Loffredo, Maria I.

    2008-07-01

    The choice of free parameters in network models is subjective, since it depends on what topological properties are being monitored. However, we show that the maximum likelihood (ML) principle indicates a unique, statistically rigorous parameter choice, associated with a well-defined topological feature. We then find that, if the ML condition is incompatible with the built-in parameter choice, network models turn out to be intrinsically ill defined or biased. To overcome this problem, we construct a class of safely unbiased models. We also propose an extension of these results that leads to the fascinating possibility to extract, only from topological data, the “hidden variables” underlying network organization, making them “no longer hidden.” We test our method on World Trade Web data, where we recover the empirical gross domestic product using only topological information.

416. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

417. Average Likelihood Methods for Code Division Multiple Access (CDMA)

    DTIC Science & Technology

    2014-05-01

    AFOSR). Some of the work accomplished was possible by using the AFRL High Performance Computing servers (Condor HPC). Approved for Public Release... A Matlab implementation of the parameters is shown below. The code was executed using AFRL Condor HPC cluster for calculating the values of
418. Development and Evaluation of a Hybrid Dynamical-Statistical Downscaling Method

    NASA Astrophysics Data System (ADS)

    Walton, Daniel Burton

    Regional climate change studies usually rely on downscaling of global climate model (GCM) output in order to resolve important fine-scale features and processes that govern local climate. Previous efforts have used one of two techniques: (1) dynamical downscaling, in which a regional climate model is forced at the boundaries by GCM output, or (2) statistical downscaling, which employs historical empirical relationships to go from coarse to fine resolution. Studies using these methods have been criticized because they either dynamically downscaled only a few GCMs, or used statistical downscaling on an ensemble of GCMs, but missed important dynamical effects in the climate change signal. This study describes the development and evaluation of a hybrid dynamical-statistical downscaling method that utilizes aspects of both dynamical and statistical downscaling to address these concerns. The first step of the hybrid method is to use dynamical downscaling to understand the most important physical processes that contribute to the climate change signal in the region of interest. Then a statistical model is built based on the patterns and relationships identified from dynamical downscaling. This statistical model can be used to downscale an entire ensemble of GCMs quickly and efficiently. The hybrid method is first applied to a domain covering the Los Angeles Region to generate projections of temperature change between the 2041-2060 and 1981-2000 periods for 32 CMIP5 GCMs. The hybrid method is also applied to a larger region covering all of California and the adjacent ocean. The hybrid method works well in both areas, primarily because a single feature, the land-sea contrast in the warming, controls the overwhelming majority of the spatial detail. Finally, the dynamically downscaled temperature change patterns are compared to those produced by two commonly-used statistical methods, BCSD and BCCA. Results show that dynamical downscaling recovers important spatial features that the
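In its simplest form, the statistical step of a hybrid scheme like this reduces to scaling a fixed high-resolution pattern by each GCM's regional-mean change. The sketch below only illustrates that idea under the land-sea-contrast assumption mentioned in the record; the grid, coastline, pattern values, and ensemble numbers are all hypothetical.

    import numpy as np

    # Fine-scale warming approximated as (regional-mean GCM warming) x (fixed
    # spatial pattern encoding the land-sea contrast). Values are illustrative.
    ny, nx = 40, 60
    is_land = np.zeros((ny, nx), dtype=bool)
    is_land[:, 25:] = True                    # crude coastline
    pattern = np.where(is_land, 1.2, 0.8)     # land warms more than ocean

    def hybrid_downscale(regional_mean_warming):
        """Cheap statistical step applied to one GCM projection."""
        return regional_mean_warming * pattern

    ensemble_means = [1.8, 2.1, 2.4, 2.0]     # deg C, hypothetical GCM ensemble
    fine_fields = [hybrid_downscale(dt) for dt in ensemble_means]
    print(fine_fields[0].shape, float(fine_fields[0].max()))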
419. Dynamic blocked transfer stiffness method of characterizing the magnetic field and frequency dependent dynamic viscoelastic properties of MRE

    NASA Astrophysics Data System (ADS)

    Poojary, Umanath R.; Hegde, Sriharsha; Gangadharan, K. V.

    2016-11-01

    Magneto rheological elastomer (MRE) is a potential resilient element for the semi active vibration isolator. MRE based isolators adapt to different frequency of vibrations arising from the source to isolate the structure over wider frequency range. The performance of MRE isolator depends on the magnetic field and frequency dependent characteristics of MRE. Present study is focused on experimentally evaluating the dynamic stiffness and loss factor of MRE through dynamic blocked transfer stiffness method. The dynamic stiffness variations of MRE exhibit strong magnetic field and mild frequency dependency. Enhancements in dynamic stiffness saturate with the increase in magnetic field and the frequency. The inconsistent variations of loss factor with the magnetic field substantiate the inability of MRE to have independent control over its damping characteristics.

420. Dynamic light scattering Monte Carlo: a method for simulating time-varying dynamics for ordered motion in heterogeneous media

    PubMed Central

    Davis, Mitchell A.; Dunn, Andrew K.

    2015-01-01

    Few methods exist that can accurately handle dynamic light scattering in the regime between single and highly multiple scattering. We demonstrate dynamic light scattering Monte Carlo (DLS-MC), a numerical method by which the electric field autocorrelation function may be calculated for arbitrary geometries if the optical properties and particle motion are known or assumed. DLS-MC requires no assumptions regarding the number of scattering events, the final form of the autocorrelation function, or the degree of correlation between scattering events. Furthermore, the method is capable of rapidly determining the effect of particle motion changes on the autocorrelation function in heterogeneous samples. We experimentally validated the method and demonstrated that the simulations match both the expected form and the experimental results. We also demonstrate the perturbation capabilities of the method by calculating the autocorrelation function of flow in a representation of mouse microvasculature and determining the sensitivity to flow changes as a function of depth. PMID:26191723
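For orientation, the single-scattering limit that DLS-MC generalizes has a closed form: for freely diffusing particles the normalized field autocorrelation is g1(tau) = exp(-q^2 D tau). The sketch below evaluates it for hypothetical particle and optical parameters; it is not the Monte Carlo method of the record.

    import numpy as np

    kB, T = 1.380649e-23, 293.15      # J/K, K
    eta = 1.0e-3                      # Pa*s (water)
    radius = 100e-9                   # m, assumed particle radius
    n_medium, wavelength = 1.33, 633e-9
    theta = np.deg2rad(90.0)          # scattering angle

    D = kB * T / (6.0 * np.pi * eta * radius)                      # Stokes-Einstein
    q = 4.0 * np.pi * n_medium * np.sin(theta / 2.0) / wavelength  # scattering vector

    tau = np.logspace(-6, -1, 200)    # s
    g1 = np.exp(-q**2 * D * tau)      # field autocorrelation, single scattering
    print(f"D = {D:.2e} m^2/s, q = {q:.2e} 1/m, g1(1 ms) = {np.interp(1e-3, tau, g1):.3f}")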
421. Accelerating ab initio molecular dynamics simulations by linear prediction methods

    NASA Astrophysics Data System (ADS)

    Herr, Jonathan D.; Steele, Ryan P.

    2016-09-01

    Acceleration of ab initio molecular dynamics (AIMD) simulations can be reliably achieved by extrapolation of electronic data from previous timesteps. Existing techniques utilize polynomial least-squares regression to fit previous steps' Fock or density matrix elements. In this work, the recursive Burg 'linear prediction' technique is shown to be a viable alternative to polynomial regression, and the extrapolation-predicted Fock matrix elements were three orders of magnitude closer to converged elements. Accelerations of 1.8-3.4× were observed in test systems, and in all cases, linear prediction outperformed polynomial extrapolation. Importantly, these accelerations were achieved without reducing the MD integration timestep.

422. Dynamic contrast-enhanced endoscopic ultrasound: A quantification method

    PubMed Central

    Dietrich, Christoph F.; Dong, Yi; Froehlich, Eckhart; Hocke, Michael

    2017-01-01

    Dynamic contrast-enhanced ultrasound (DCE-US) has been recently standardized by guidelines and recommendations. The European Federation of Societies for US in Medicine and Biology position paper describes the use for DCE-US. Comparatively, little is known about the use of contrast-enhanced endoscopic US (CE-EUS). This current paper reviews and discusses the clinical use of CE-EUS and DCE-US. The most important clinical use of DCE-US is the prediction of tumor response to new drugs against vascular angioneogenesis. PMID:28218195
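Quantification in dynamic contrast-enhanced ultrasound is commonly based on time-intensity curve parameters such as peak enhancement, time to peak, and area under the curve. The sketch below computes these from a synthetic bolus curve; it is a generic illustration rather than anything taken from the paper above, and the curve shape and noise level are assumptions.

    import numpy as np

    t = np.linspace(0.0, 60.0, 301)                           # s
    tic = 40.0 * (t / 8.0) * np.exp(1.0 - t / 8.0)            # synthetic washin/washout
    tic += np.random.default_rng(0).normal(0.0, 0.5, t.size)  # measurement noise

    baseline = tic[t < 2.0].mean()
    enhancement = tic - baseline
    peak_enhancement = float(enhancement.max())
    time_to_peak = float(t[enhancement.argmax()])
    pos = np.clip(enhancement, 0.0, None)                     # ignore negative noise
    area_under_curve = float(np.sum(0.5 * (pos[1:] + pos[:-1]) * np.diff(t)))

    print(f"PE={peak_enhancement:.1f} a.u., TTP={time_to_peak:.1f} s, AUC={area_under_curve:.0f} a.u.*s")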
423. Optimal control methods for controlling bacterial populations with persister dynamics

    NASA Astrophysics Data System (ADS)

    Cogan, N. G.

    2016-06-01

    Bacterial tolerance to antibiotics is a well-known phenomenon; however, only recent studies of bacterial biofilms have shown how multifaceted tolerance really is. By joining into a structured community and offering shared protection and gene transfer, bacterial populations can protect themselves genotypically, phenotypically and physically. In this study, we collect a line of research that focuses on phenotypic (or plastic) tolerance. The dynamics of persister formation are becoming better understood, even though there are major questions that remain. The thrust of our results indicates that even without a detailed description of the biological mechanisms, theoretical studies can offer strategies that can eradicate bacterial populations with existing drugs.

424. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2002-10-19

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.
425. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2004-01-28

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

426. Time-resolved methods in biophysics. 9. Laser temperature-jump methods for investigating biomolecular dynamics

    PubMed

    Kubelka, Jan

    2009-04-01

    Many important biochemical processes occur on the time-scales of nanoseconds and microseconds. The introduction of the laser temperature-jump (T-jump) to biophysics more than a decade ago opened these previously inaccessible time regimes up to direct experimental observation. Since then, laser T-jump methodology has evolved into one of the most versatile and generally applicable methods for studying fast biomolecular kinetics. This perspective is a review of the principles and applications of the laser T-jump technique in biophysics. A brief overview of the T-jump relaxation kinetics and the historical development of laser T-jump methodology is presented. The physical principles and practical experimental considerations that are important for the design of the laser T-jump experiments are summarized. These include the Raman conversion for generating heating pulses, considerations of size, duration and uniformity of the temperature jump, as well as potential adverse effects due to photo-acoustic waves, cavitation and thermal lensing, and their elimination. The laser T-jump apparatus developed at the NIH Laboratory of Chemical Physics is described in detail along with a brief survey of other laser T-jump designs in use today. Finally, applications of the laser T-jump in biophysics are reviewed, with an emphasis on the broad range of problems where the laser T-jump methodology has provided important new results and insights into the dynamics of the biomolecular processes.
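In the simplest two-state case, the observable after a temperature jump relaxes exponentially with rate k_obs = k_fold + k_unfold, so a single-exponential fit recovers the relaxation rate. The sketch below fits synthetic data; the rate, amplitude, and noise level are hypothetical.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(42)
    k_true = 2.0e5                                    # 1/s, ~5 microsecond relaxation
    t = np.linspace(0.0, 20e-6, 400)                  # s
    signal = 1.0 + 0.3 * np.exp(-k_true * t) + rng.normal(0.0, 0.01, t.size)

    def relaxation(t, s_inf, amp, k_obs):
        # two-state relaxation toward the new equilibrium after the T-jump
        return s_inf + amp * np.exp(-k_obs * t)

    popt, _ = curve_fit(relaxation, t, signal, p0=(1.0, 0.3, 1.0e5))
    print(f"fitted k_obs = {popt[2]:.2e} 1/s (true {k_true:.1e})")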
427. A general method for modeling population dynamics and its applications

    PubMed

    Shestopaloff, Yuri K

    2013-12-01

    Studying populations, be it a microbe colony or mankind, is important for understanding how complex systems evolve and exist. Such knowledge also often provides insights into evolution, history and different aspects of human life. By and large, populations' prosperity and decline is about transformation of certain resources into quantity and other characteristics of populations through growth, replication, expansion and acquisition of resources. We introduce a general model of population change, applicable to different types of populations, which interconnects numerous factors influencing population dynamics, such as nutrient influx and nutrient consumption, reproduction period, reproduction rate, etc. It is also possible to take into account specific growth features of individual organisms. We considered two recently discovered distinct growth scenarios: first, when organisms do not change their grown mass regardless of nutrients availability, and the second when organisms can reduce their grown mass by several times in a nutritionally poor environment. We found that nutrient supply and reproduction period are two major factors influencing the shape of population growth curves. There is also a difference in population dynamics between these two groups. Organisms belonging to the second group are significantly more adaptive to reduction of nutrients and far more resistant to extinction. Also, such organisms have substantially more frequent and lesser in amplitude fluctuations of population quantity for the same periodic nutrient supply (compared to the first group). The proposed model allows adequately describing virtually any possible growth scenario, including complex ones with periodic and irregular nutrient supply and other changing parameters, which present approaches cannot do.

428. Methods for simulating the dynamics of complex biological processes

    PubMed

    Schilstra, Maria J; Martin, Stephen R; Keating, Sarah M

    2008-01-01

    In this chapter, we provide the basic information required to understand the central concepts in the modeling and simulation of complex biochemical processes. We underline the fact that most biochemical processes involve sequences of interactions between distinct entities (molecules, molecular assemblies), and also stress that models must adhere to the laws of thermodynamics. Therefore, we discuss the principles of mass-action reaction kinetics, the dynamics of equilibrium and steady state, and enzyme kinetics, and explain how to assess transition probabilities and reactant lifetime distributions for first-order reactions. Stochastic simulation of reaction systems in well-stirred containers is introduced using a relatively simple, phenomenological model of microtubule dynamic instability in vitro. We demonstrate that deterministic simulation [by numerical integration of coupled ordinary differential equations (ODE)] produces trajectories that would be observed if the results of many rounds of stochastic simulation of the same system were averaged. In Section V, we highlight several practical issues with regard to the assessment of parameter values. We draw some attention to the development of a standard format for model storage and exchange, and provide a list of selected software tools that may facilitate the model building process, and can be used to simulate the modeled systems.
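The point about stochastic trajectories averaging toward the deterministic ODE solution can be seen with a much simpler system than the microtubule model mentioned above. The sketch below uses a reversible first-order reaction A <-> B with hypothetical rate constants; it is a generic Gillespie simulation, not code from the chapter.

    import numpy as np

    kf, kr = 1.0, 0.5          # 1/s, assumed rate constants
    n_total, t_end = 100, 5.0  # molecules, seconds

    def gillespie_run(rng):
        t, nA = 0.0, n_total
        times, counts = [0.0], [nA]
        while t < t_end:
            a1, a2 = kf * nA, kr * (n_total - nA)   # propensities of A->B, B->A
            a0 = a1 + a2
            t += rng.exponential(1.0 / a0)          # waiting time to next reaction
            nA += -1 if rng.random() < a1 / a0 else 1
            times.append(t); counts.append(nA)
        return np.interp(np.linspace(0.0, t_end, 50), times, counts)

    rng = np.random.default_rng(0)
    mean_nA = np.mean([gillespie_run(rng) for _ in range(200)], axis=0)

    # Deterministic mass-action solution for nA(0) = n_total
    t_grid = np.linspace(0.0, t_end, 50)
    ode_nA = n_total * (kr + kf * np.exp(-(kf + kr) * t_grid)) / (kf + kr)
    print(f"final stochastic mean {mean_nA[-1]:.1f} vs ODE {ode_nA[-1]:.1f}")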
429. Efficient Fully Implicit Time Integration Methods for Modeling Cardiac Dynamics

    PubMed Central

    Rose, Donald J.; Henriquez, Craig S.

    2013-01-01

    Implicit methods are well known to have greater stability than explicit methods for stiff systems, but they often are not used in practice due to perceived computational complexity. This paper applies the Backward Euler method and a second-order one-step two-stage composite backward differentiation formula (C-BDF2) for the monodomain equations arising from mathematically modeling the electrical activity of the heart. The C-BDF2 scheme is an L-stable implicit time integration method and easily implementable. It uses the simplest Forward Euler and Backward Euler methods as fundamental building blocks. The nonlinear system resulting from application of the Backward Euler method for the monodomain equations is solved for the first time by a nonlinear elimination method, which eliminates local and non-symmetric components by using a Jacobian-free Newton solver, called a Newton-Krylov solver. Unlike other fully implicit methods proposed for the monodomain equations in the literature, the Jacobian of the global system after the nonlinear elimination has a much smaller size, is symmetric and possibly positive definite, which can be solved efficiently by standard optimal solvers. Numerical results are presented demonstrating that the C-BDF2 scheme can yield accurate results with less CPU time than explicit methods for both a single patch and spatially extended domains. PMID:19126449
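The Backward Euler building block named in the record amounts to solving y_{n+1} = y_n + h f(y_{n+1}) at every step, typically with Newton's method. The sketch below does this for a small stiff linear test system, not the monodomain equations; the matrix, step size, and tolerances are arbitrary choices for illustration.

    import numpy as np

    A = np.array([[-1000.0, 999.0],
                  [    0.0,  -1.0]])   # stiff test system y' = A y

    def f(y):
        return A @ y

    def jacobian(y):
        return A

    def backward_euler_step(y_n, h, tol=1e-10, max_iter=20):
        y = y_n.copy()                              # Newton initial guess
        for _ in range(max_iter):
            residual = y - y_n - h * f(y)
            if np.linalg.norm(residual) < tol:
                break
            J = np.eye(len(y)) - h * jacobian(y)    # d(residual)/dy
            y -= np.linalg.solve(J, residual)
        return y

    y, h = np.array([1.0, 1.0]), 0.1   # h far above the explicit stability limit
    for _ in range(50):
        y = backward_euler_step(y, h)
    print("solution at t = 5:", y)     # decays smoothly, no instability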
430. An improved dynamic method to measure kLa in bioreactors

    PubMed

    Damiani, Andrew L; Kim, Min Hea; Wang, Jin

    2014-10-01

    An accurate measurement or estimation of the volumetric mass transfer coefficient kLa is crucial for the design, operation, and scale up of bioreactors. Among different physical and chemical methods, the classical dynamic method is the most widely applied method to simultaneously estimate both kLa and the cell's oxygen utilization rate. Despite several important follow-up articles to improve the original dynamic method, some limitations exist that make the classical dynamic method less effective under certain conditions. For example, for the case of high cell density with moderate agitation, the dissolved oxygen concentration barely increases during the re-gassing step of the classical dynamic method, which makes kLa estimation impossible. To address these limitations, in this work we present an improved dynamic method that consists of both an improved model and an improved procedure. The improved model takes into account the mass transfer between the headspace and the broth; in addition, nitrogen is bubbled through the broth when air is shut off. The improved method not only enables a faster and more accurate estimation of kLa, but also allows the measurement of kLa for high cell density with medium/low agitation that is impossible with the classical dynamic method. Scheffersomyces stipitis was used as the model system to demonstrate the effectiveness of the improved method; in addition, experiments were conducted to examine the effect of cell density and agitation speed on kLa.

431. Sparse array 3-D ISAR imaging based on maximum likelihood estimation and CLEAN technique

    PubMed

    Ma, Changzheng; Yeo, Tat Soon; Tan, Chee Seng; Tan, Hwee Siang

    2010-08-01

    Large 2-D sparse array provides high angular resolution microwave images but artifacts are also induced by the high sidelobes of the beam pattern, thus limiting its dynamic range. The CLEAN technique has been used in the literature to extract strong scatterers for use in subsequent signal cancelation (artifacts removal). However, the performance of the DFT parameters estimation based CLEAN algorithm for the estimation of the signal amplitudes is known to be poor, and this affects the signal cancelation. In this paper, DFT is used only to provide the initial estimates, and the maximum likelihood parameters estimation method with steepest descent implementation is then used to improve the precision of the calculated scatterers positions and amplitudes. Time domain information is also used to reduce the sidelobe levels. As a result, clear, artifact-free images could be obtained. The effects of multiple reflections and rotation speed estimation error are also discussed. The proposed method has been verified using numerical simulations and it has been shown to be effective.
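The CLEAN idea referred to in the record is easiest to see in one dimension: repeatedly locate the strongest response in the dirty image, record a fraction of it as a point scatterer, and subtract that scatterer convolved with the known point-spread function. The sketch below is a generic 1-D toy, not the 3-D maximum-likelihood variant of the paper; the PSF, loop gain, and stopping threshold are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 128
    psf = np.sinc(np.linspace(-4.0, 4.0, 33))       # assumed known beam pattern
    true_pos, true_amp = [30, 70, 71], [1.0, 0.8, 0.6]

    dirty = rng.normal(0.0, 0.02, n)                # noise floor
    for p, a in zip(true_pos, true_amp):
        dirty[p - 16:p + 17] += a * psf             # scatterers smeared by the PSF

    components, residual = [], dirty.copy()
    for _ in range(10):
        k = int(np.argmax(np.abs(residual)))
        if abs(residual[k]) < 0.1:                  # residual looks noise-like: stop
            break
        amp = 0.5 * residual[k]                     # loop gain 0.5
        components.append((k, round(float(amp), 3)))
        lo, hi = max(0, k - 16), min(n, k + 17)
        residual[lo:hi] -= amp * psf[lo - (k - 16):hi - (k - 16)]

    print("extracted (position, partial amplitude):", components)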
432. A likelihood-based biostatistical model for analyzing consumer movement in simultaneous choice experiments

    PubMed

    Zeilinger, Adam R; Olson, Dawn M; Andow, David A

    2014-08-01

    Consumer feeding preference among resource choices has critical implications for basic ecological and evolutionary processes, and can be highly relevant to applied problems such as ecological risk assessment and invasion biology. Within consumer choice experiments, also known as feeding preference or cafeteria experiments, measures of relative consumption and measures of consumer movement can provide distinct and complementary insights into the strength, causes, and consequences of preference. Despite the distinct value of inferring preference from measures of consumer movement, rigorous and biologically relevant analytical methods are lacking. We describe a simple, likelihood-based, biostatistical model for analyzing the transient dynamics of consumer movement in a paired-choice experiment. With experimental data consisting of repeated discrete measures of consumer location, the model can be used to estimate constant consumer attraction and leaving rates for two food choices, and differences in choice-specific attraction and leaving rates can be tested using model selection. The model enables calculation of transient and equilibrial probabilities of consumer-resource association, which could be incorporated into larger scale movement models. We explore the effect of experimental design on parameter estimation through stochastic simulation and describe methods to check that data meet model assumptions. Using a dataset of modest sample size, we illustrate the use of the model to draw inferences on consumer preference as well as underlying behavioral mechanisms. Finally, we include a user's guide and computer code scripts in R to facilitate use of the model by other researchers.

433. Statistical prediction of dynamic distortion of inlet flow using minimum dynamic measurement. An application to the Melick statistical method and inlet flow dynamic distortion prediction without RMS measurements

    NASA Technical Reports Server (NTRS)

    Schweikhard, W. G.; Chen, Y. S.

    1986-01-01

    The Melick method of inlet flow dynamic distortion prediction by statistical means is outlined. A hypothetical vortex model is used as the basis for the mathematical formulations. The main variables are identified by matching the theoretical total pressure rms ratio with the measured total pressure rms ratio. Data comparisons, using the HiMAT inlet test data set, indicate satisfactory prediction of the dynamic peak distortion for cases with boundary layer control device vortex generators. A method for the dynamic probe selection was developed. Validity of the probe selection criteria is demonstrated by comparing the reduced-probe predictions with the 40-probe predictions. It is indicated that the number of dynamic probes can be reduced to as few as two and still retain good accuracy.
434. A Likelihood Approach for Real-Time Calibration of Stochastic Compartmental Epidemic Models

    PubMed Central

    Zimmer, Christoph; Cohen, Ted

    2017-01-01

    Stochastic transmission dynamic models are especially useful for studying the early emergence of novel pathogens given the importance of chance events when the number of infectious individuals is small. However, methods for parameter estimation and prediction for these types of stochastic models remain limited. In this manuscript, we describe a calibration and prediction framework for stochastic compartmental transmission models of epidemics. The proposed method, Multiple Shooting for Stochastic systems (MSS), applies a linear noise approximation to describe the size of the fluctuations, and uses each new surveillance observation to update the belief about the true epidemic state. Using simulated outbreaks of a novel viral pathogen, we evaluate the accuracy of MSS for real-time parameter estimation and prediction during epidemics. We assume that weekly counts for the number of new diagnosed cases are available and serve as an imperfect proxy of incidence. We show that MSS produces accurate estimates of key epidemic parameters (i.e. mean duration of infectiousness, R0, and Reff) and can provide an accurate estimate of the unobserved number of infectious individuals during the course of an epidemic. MSS also allows for accurate prediction of the number and timing of future hospitalizations and the overall attack rate. We compare the performance of MSS to three state-of-the-art benchmark methods: 1) a likelihood approximation with an assumption of independent Poisson observations; 2) a particle filtering method; and 3) an ensemble Kalman filter method. We find that MSS significantly outperforms each of these three benchmark methods in the majority of epidemic scenarios tested. In summary, MSS is a promising method that may improve on current approaches for calibration and prediction using stochastic models of epidemics. PMID:28095403
435. A Likelihood Approach for Real-Time Calibration of Stochastic Compartmental Epidemic Models

    PubMed

    Zimmer, Christoph; Yaesoubi, Reza; Cohen, Ted

    2017-01-01

    Stochastic transmission dynamic models are especially useful for studying the early emergence of novel pathogens given the importance of chance events when the number of infectious individuals is small. However, methods for parameter estimation and prediction for these types of stochastic models remain limited. In this manuscript, we describe a calibration and prediction framework for stochastic compartmental transmission models of epidemics. The proposed method, Multiple Shooting for Stochastic systems (MSS), applies a linear noise approximation to describe the size of the fluctuations, and uses each new surveillance observation to update the belief about the true epidemic state. Using simulated outbreaks of a novel viral pathogen, we evaluate the accuracy of MSS for real-time parameter estimation and prediction during epidemics. We assume that weekly counts for the number of new diagnosed cases are available and serve as an imperfect proxy of incidence. We show that MSS produces accurate estimates of key epidemic parameters (i.e. mean duration of infectiousness, R0, and Reff) and can provide an accurate estimate of the unobserved number of infectious individuals during the course of an epidemic. MSS also allows for accurate prediction of the number and timing of future hospitalizations and the overall attack rate. We compare the performance of MSS to three state-of-the-art benchmark methods: 1) a likelihood approximation with an assumption of independent Poisson observations; 2) a particle filtering method; and 3) an ensemble Kalman filter method. We find that MSS significantly outperforms each of these three benchmark methods in the majority of epidemic scenarios tested. In summary, MSS is a promising method that may improve on current approaches for calibration and prediction using stochastic models of epidemics.
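Of the benchmark approaches listed in these two records, the independent-Poisson observation likelihood is the simplest to write down: simulate the epidemic model for candidate parameters and score the observed weekly counts against the model's weekly incidence. The sketch below does this with a deterministic SIR model and a grid search over the transmission rate; it illustrates that benchmark idea only, not MSS, and every numerical value is synthetic.

    import numpy as np
    from scipy.stats import poisson

    N, I0, gamma = 10000, 5, 1.0 / 7.0   # population, initial infectious, recovery rate (1/day)

    def weekly_incidence(beta, weeks=12):
        S, I = N - I0, float(I0)
        out = []
        for _ in range(weeks):
            new_week = 0.0
            for _ in range(7):                     # daily Euler steps
                new_inf = beta * S * I / N
                S, I = S - new_inf, I + new_inf - gamma * I
                new_week += new_inf
            out.append(new_week)
        return np.array(out)

    observed = np.random.default_rng(7).poisson(weekly_incidence(0.35))   # synthetic data

    def log_likelihood(beta):
        lam = np.maximum(weekly_incidence(beta), 1e-9)
        return poisson.logpmf(observed, lam).sum()

    betas = np.linspace(0.2, 0.6, 81)
    best = betas[int(np.argmax([log_likelihood(b) for b in betas]))]
    print(f"maximum-likelihood transmission rate ~ {best:.3f} per day (true 0.35)")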
436. A review of action estimation methods for galactic dynamics

    NASA Astrophysics Data System (ADS)

    Sanders, Jason L.; Binney, James

    2016-04-01

    We review the available methods for estimating actions, angles and frequencies of orbits in both axisymmetric and triaxial potentials. The methods are separated into two classes. Unless an orbit has been trapped by a resonance, convergent, or iterative, methods are able to recover the actions to arbitrarily high accuracy given sufficient computing time. Faster non-convergent methods rely on the potential being sufficiently close to a separable potential, and the accuracy of the action estimate cannot be improved through further computation. We critically compare the accuracy of the methods and the required computation time for a range of orbits in an axisymmetric multicomponent Galactic potential. We introduce a new method for estimating actions that builds on the adiabatic approximation of Schönrich & Binney and discuss the accuracy required for the actions, angles and frequencies using suitable distribution functions for the thin and thick discs, the stellar halo and a star stream. We conclude that for studies of the disc and smooth halo component of the Milky Way, the most suitable compromise between speed and accuracy is the Stäckel Fudge, whilst when studying streams the non-convergent methods do not offer sufficient accuracy and the most suitable method is computing the actions from an orbit integration via a generating function. All the software used in this study can be downloaded from https://github.com/jls713/tact.

437. Method and apparatus for dynamic focusing of ultrasound energy

    DOEpatents

    Candy, James V.

    2002-01-01

    Method and system disclosed herein include noninvasively detecting, separating and destroying multiple masses (tumors, cysts, etc.) through a plurality of iterations from tissue (e.g., breast tissue). The method and system may open new frontiers with the implication of noninvasive treatment of masses in the biomedical area along with the expanding technology of acoustic surgery.
438. Testing and Validation of the Dynamic Inertia Measurement Method

    NASA Technical Reports Server (NTRS)

    Chin, Alexander; Herrera, Claudia; Spivey, Natalie; Fladung, William; Cloutier, David

    2015-01-01

    This presentation describes the DIM method and how it measures the inertia properties of an object by analyzing the frequency response functions measured during a ground vibration test (GVT). The DIM method has been in development at the University of Cincinnati and has shown success on a variety of small-scale test articles. The NASA AFRC version was modified for larger applications.

439. Optical measurement methods to study dynamic behavior in MEMS

    NASA Astrophysics Data System (ADS)

    Rembe, Christian; Kant, Rishi; Muller, Richard S.

    2001-10-01

    The maturing designs of moving microelectromechanical systems (MEMS) make it more and more important to have precise measurements and visual means to characterize dynamic microstructures. The Berkeley Sensor & Actuator Center (BSAC) has a forefront project aimed at developing these capabilities and at providing high-speed Internet (Supernet) access for remote use of its facilities. Already in operation are three optical-characterization tools: a stroboscopic-interferometer system, a computer-microvision system, and a laser-Doppler vibrometer. This paper describes the precision and limitations of these systems and discusses their further development. In addition, we describe the results of experimental studies on the different MEMS devices, and give an overview of high-speed visualization of rapidly moving MEMS structures.

440. Decentralized Bayesian search using approximate dynamic programming methods

    PubMed

    Zhao, Yijia; Patek, Stephen D; Beling, Peter A

    2008-08-01

    We consider decentralized Bayesian search problems that involve a team of multiple autonomous agents searching for targets on a network of search points operating under the following constraints: 1) interagent communication is limited; 2) the agents do not have the opportunity to agree in advance on how to resolve equivalent but incompatible strategies; and 3) each agent lacks the ability to control or predict with certainty the actions of the other agents. We formulate the multiagent search-path-planning problem as a decentralized optimal control problem and introduce approximate dynamic heuristics that can be implemented in a decentralized fashion. After establishing some analytical properties of the heuristics, we present computational results for a search problem involving two agents on a 5 x 5 grid.
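The Bayesian bookkeeping underneath search problems of this kind is a posterior over target location that is reweighted after every unsuccessful look: the searched cell keeps only its miss-probability mass. The sketch below runs a greedy single-agent version on a 5 x 5 grid; it only illustrates that update, not the decentralized approximate dynamic programming scheme of the paper, and the detection probability is an assumption.

    import numpy as np

    rng = np.random.default_rng(5)
    grid, p_detect = (5, 5), 0.8
    belief = np.full(grid, 1.0 / 25.0)                     # uniform prior over cells
    target = (int(rng.integers(5)), int(rng.integers(5)))  # hidden true location

    for step in range(1, 101):
        cell = np.unravel_index(int(np.argmax(belief)), grid)   # greedy: most likely cell
        if cell == target and rng.random() < p_detect:
            print(f"target found at {cell} on step {step}")
            break
        belief[cell] *= (1.0 - p_detect)                   # Bayes update after a miss
        belief /= belief.sum()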
441. The Dud-Alternative Effect in Likelihood Judgment

    ERIC Educational Resources Information Center

    Windschitl, Paul D.; Chambers, John R.

    2004-01-01

    The judged likelihood of a focal outcome should generally decrease as the list of alternative possibilities increases. For example, the likelihood that a runner will win a race goes down when 2 new entries are added to the field. However, 6 experiments demonstrate that the presence of implausible alternatives (duds) often increases the judged…

442. Maximum likelihood estimation in meta-analytic structural equation modeling

    PubMed

    Oort, Frans J; Jak, Suzanne

    2016-06-01

    Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients that are reported by a number of independent studies. MASEM typically consists of two stages. The method that has been found to perform best in terms of statistical properties is the two-stage structural equation modeling, in which maximum likelihood analysis is used to estimate the common correlation matrix in the first stage, and weighted least squares analysis is used to fit structural equation models to the common correlation matrix in the second stage. In the present paper, we propose an alternative method, ML MASEM, that uses ML estimation throughout. In a simulation study, we use both methods and compare chi-square distributions, bias in parameter estimates, false positive rates, and true positive rates. Both methods appear to yield unbiased parameter estimates and false and true positive rates that are close to the expected values. ML MASEM parameter estimates are found to be significantly less biased than two-stage structural equation modeling estimates, but the differences are very small. The choice between the two methods may therefore be based on other fundamental or practical arguments. Copyright © 2016 John Wiley & Sons, Ltd.
443. Computerized methods for determining respiratory phase on dynamic chest radiographs obtained by a dynamic flat-panel detector (FPD) system

    PubMed

    Tanaka, Rie; Sanada, Shigeru; Kobayashi, Takeshi; Suzuki, Masayuki; Matsui, Takeshi; Matsui, Osamu

    2006-03-01

    Chest radiography using a dynamic flat-panel detector with a large field of view can provide sequential chest radiographs during respiration. These images provide information regarding respiratory kinetics, which is effective for diagnosis of pulmonary diseases. For valid analysis of respiratory kinetics in diagnosis of pulmonary diseases, it is crucial to determine the association between the kinetics and respiratory phase. We developed four methods to determine the respiratory phase based on image information associated with respiration and compared the results in dynamic chest radiographs of 37 subjects. Here, the properties of each method and future tasks are discussed. The method based on the change in size of the lung gave the most stable results, and that based on the change in distance from the lung apex to the diaphragm was the most promising method for determining the respiratory phase.
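The most stable method in the record, tracking the change in lung size, comes down to labeling each frame by the sign of the change in a lung-area signal. The sketch below applies that idea to a synthetic breathing trace; in practice the area would come from lung-field segmentation of each radiograph, and the cycle length, noise level, and smoothing window here are assumptions.

    import numpy as np

    frames_per_cycle, n_frames = 40, 120
    t = np.arange(n_frames)
    lung_area = 1.0 + 0.15 * np.sin(2.0 * np.pi * t / frames_per_cycle)    # a.u.
    lung_area += np.random.default_rng(2).normal(0.0, 0.005, n_frames)     # noise

    smoothed = np.convolve(lung_area, np.ones(5) / 5.0, mode="same")       # denoise
    phase = np.where(np.gradient(smoothed) >= 0.0, "inspiration", "expiration")
    print(list(phase[:10]))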
444. Numerical method for gas dynamics combining characteristic and conservation concepts

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.

    1981-01-01

    An efficient implicit numerical method that solves the compressible Navier-Stokes equations in arbitrary curvilinear coordinates by the finite-volume technique is presented. An intrinsically dissipative difference scheme and a fully implicit treatment of boundary conditions, based on characteristic and conservation concepts, are used to improve stability and accuracy. Efficiency is achieved by using a diagonal form of the implicit algorithm and spatially varying time-steps. Comparisons of various schemes and methods are presented for one- and two-dimensional flows, including transonic separated flow past a thick circular-arc airfoil in a channel. The new method is equal to or better than a version of MacCormack's hybrid method in accuracy and it converges to a steady state up to an order of magnitude faster.

445. Free energy reconstruction from steered dynamics without post-processing

    SciTech Connect

    Athenes, Manuel; Marinica, Mihai-Cosmin

    2010-09-20

    Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in Fe-α. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.
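A common reference point for turning steered, nonequilibrium work values into a free energy difference is the Jarzynski estimator, dF = -kT ln< exp(-W/kT) >. The sketch below applies it to synthetic Gaussian work samples, for which the exact answer is mean(W) - var(W)/(2 kT); it is a generic illustration, not the Bayesian construction of the record above, and all numbers are hypothetical.

    import numpy as np

    kT = 0.596                                   # kcal/mol near 300 K
    rng = np.random.default_rng(11)
    mean_W, sigma_W, n_pulls = 5.0, 1.0, 5000    # hypothetical work statistics
    work = rng.normal(mean_W, sigma_W, n_pulls)  # one work value per pull

    # exponential average computed via log-sum-exp for numerical stability
    log_avg = np.logaddexp.reduce(-work / kT) - np.log(n_pulls)
    dF_estimate = -kT * log_avg
    dF_gaussian = mean_W - sigma_W**2 / (2.0 * kT)
    print(f"Jarzynski estimate {dF_estimate:.2f} vs Gaussian closed form {dF_gaussian:.2f} kcal/mol")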
446. Study of methods of improving the performance of the Langley Research Center Transonic Dynamics Tunnel (TDT)

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A study has been made of possible ways to improve the performance of the Langley Research Center's Transonic Dynamics Tunnel (TDT). The major effort was directed toward obtaining increased dynamic pressure in the Mach number range from 0.8 to 1.2, but methods to increase Mach number capability were also considered. Methods studied for increasing dynamic pressure capability were higher total pressure, auxiliary suction, reducing circuit losses, reduced test medium temperature, smaller test section, and higher molecular weight test medium. Increased Mach number methods investigated were nozzle block inserts, variable geometry nozzle, changes in test section wall configuration, and auxiliary suction.

447. Application of a novel finite difference method to dynamic crack problems

    NASA Technical Reports Server (NTRS)

    Chen, Y. M.; Wilkins, M. L.

    1976-01-01

    A versatile finite difference method (HEMP and HEMP 3D computer programs) was developed originally for solving dynamic problems in continuum mechanics. It was extended to analyze the stress field around cracks in a solid with finite geometry subjected to dynamic loads and to simulate numerically the dynamic fracture phenomena with success. This method is an explicit finite difference method applied to the Lagrangian formulation of the equations of continuum mechanics in two and three space dimensions and time. The calculational grid moves with the material and in this way it gives a more detailed description of the physics of the problem than the Eulerian formulation.
The calculational grid moves with the material and in this way it gives a more detailed description of the physics of the problem than the Eulerian formulation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22320730','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22320730"><span>The multi-configuration electron-nuclear <span class="hlt">dynamics</span> <span class="hlt">method</span> applied to LiH.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ulusoy, Inga S; Nest, Mathias</p> <p>2012-02-07</p> <p>The multi-configuration electron-nuclear <span class="hlt">dynamics</span> (MCEND) <span class="hlt">method</span> is a nonadiabatic quantum <span class="hlt">dynamics</span> approach to the description of molecular processes. MCEND is a combination of the multi-configuration time-dependent Hartree (MCTDH) <span class="hlt">method</span> for atoms and its antisymmetrized equivalent MCTDHF for electrons. The purpose of this <span class="hlt">method</span> is to simultaneously describe nuclear and electronic wave packets in a quantum <span class="hlt">dynamical</span> way, without the need to calculate potential energy surfaces and diabatic coupling functions. In this paper we present first exemplary calculations of MCEND applied to the LiH molecule, and discuss computational and numerical details of our implementation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/872984','DOE-PATENT-XML'); return false;" href="http://www.osti.gov/scitech/servlets/purl/872984"><span><span class="hlt">Dynamically</span> balanced fuel nozzle and <span class="hlt">method</span> of operation</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Richards, George A.; Janus, Michael C.; Robey, Edward H.</p> <p>2000-01-01</p> <p>An apparatus and <span class="hlt">method</span> of operation designed to reduce undesirably high pressure oscillations in lean premix combustion systems burning hydrocarbon fuels are provided. Natural combustion and nozzle acoustics are employed to generate multiple fuel pockets which, when burned in the combustor, counteract the oscillations caused by variations in heat release in the combustor. A hybrid of active and passive control techniques, the apparatus and <span class="hlt">method</span> eliminate combustion oscillations over a wide operating range, without the use of moving parts or electronics.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/22364383','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/22364383"><span>ON THE <span class="hlt">LIKELIHOOD</span> OF PLANET FORMATION IN CLOSE BINARIES</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Jang-Condell, Hannah</p> <p>2015-02-01</p> <p>To date, several exoplanets have been discovered orbiting stars with close binary companions (a ≲ 30 AU). The fact that planets can form in these <span class="hlt">dynamically</span> challenging environments implies that planet formation must be a robust process. The initial protoplanetary disks in these systems from which planets must form should be tidally truncated to radii of a few AU, which indicates that the efficiency of planet formation must be high. 
Here, we examine the truncation of circumstellar protoplanetary disks in close binary systems, studying how the <span class="hlt">likelihood</span> of planet formation is affected over a range of disk parameters. If the semimajor axis of the binary is too small or its eccentricity is too high, the disk will have too little mass for planet formation to occur. However, we find that the stars in the binary systems known to have planets should have once hosted circumstellar disks that were capable of supporting planet formation despite their truncation. We present a way to characterize the feasibility of planet formation based on binary orbital parameters such as stellar mass, companion mass, eccentricity, and semimajor axis. Using this measure, we can quantify the robustness of planet formation in close binaries and better understand the overall efficiency of planet formation in general.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26200781','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26200781"><span>The <span class="hlt">Likelihood</span> of Experiencing Relative Poverty over the Life Course.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Rank, Mark R; Hirschl, Thomas A</p> <p>2015-01-01</p> <p>Research on poverty in the United States has largely consisted of examining cross-sectional levels of absolute poverty. In this analysis, we focus on understanding relative poverty within a life course context. Specifically, we analyze the <span class="hlt">likelihood</span> of individuals falling below the 20th percentile and the 10th percentile of the income distribution between the ages of 25 and 60. A series of life tables are constructed using the nationally representative Panel Study of Income <span class="hlt">Dynamics</span> data set. This includes panel data from 1968 through 2011. Results indicate that the prevalence of relative poverty is quite high. Consequently, between the ages of 25 to 60, 61.8 percent of the population will experience a year below the 20th percentile, and 42.1 percent will experience a year below the 10th percentile. Characteristics associated with experiencing these levels of poverty include those who are younger, nonwhite, female, not married, with 12 years or less of education, or who have a work disability.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://cfpub.epa.gov/si/si_public_record_report.cfm?direntryid=335600&keyword=water&subject=water%20research&showcriteria=2&fed_org_id=111&datebeginpublishedpresented=04/02/2012&dateendpublishedpresented=04/02/2017&sortby=pubdateyear','PESTICIDES'); return false;" href="https://cfpub.epa.gov/si/si_public_record_report.cfm?direntryid=335600&keyword=water&subject=water%20research&showcriteria=2&fed_org_id=111&datebeginpublishedpresented=04/02/2012&dateendpublishedpresented=04/02/2017&sortby=pubdateyear"><span>Bayesian Monte Carlo and Maximum <span class="hlt">Likelihood</span> Approach for ...</span></a></p> <p><a target="_blank" href="http://www.epa.gov/pesticides/search.htm">EPA Pesticide Factsheets</a></p> <p></p> <p></p> <p>Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. 
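As a rough, self-contained illustration of the life-table logic described in the abstract above, the sketch below turns assumed age-specific probabilities of first falling below an income percentile into a cumulative likelihood over the working life. The hazard values are invented for illustration only and are not estimates from the Panel Study of Income Dynamics.

# Minimal life-table sketch: cumulative probability of ever experiencing
# relative poverty between ages 25 and 60, given assumed age-specific hazards.
hazards = {age: 0.03 if age < 40 else 0.02 for age in range(25, 61)}

def cumulative_risk(hazards):
    """Probability of experiencing at least one year below the threshold."""
    p_never = 1.0
    for age in sorted(hazards):
        p_never *= 1.0 - hazards[age]  # survive this age without a first spell
    return 1.0 - p_never

print(f"Cumulative likelihood by age 60: {cumulative_risk(hazards):.3f}")

The same accumulation, applied to hazards estimated from panel data, is what produces life-course figures of the kind reported in the abstract.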
452. Bayesian Monte Carlo and Maximum Likelihood Approach for ...

EPA Pesticide Factsheets

Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology which combines Bayesian Monte Carlo simulation and Maximum Likelihood estimation (BMCML) to calibrate a lake oxygen recovery model. We first derive an analytical solution of the differential equation governing lake-averaged oxygen dynamics as a function of time-variable wind speed. Statistical inferences on model parameters and predictive uncertainty are then drawn by Bayesian conditioning of the analytical solution on observed daily wind speed and oxygen concentration data obtained from an earlier study during two recovery periods on a eutrophic lake in upstate New York. The model is calibrated using oxygen recovery data for one year, and the statistical inferences are validated using recovery data for another year. Compared with an essentially two-step regression and optimization approach, the BMCML results are more comprehensive and perform relatively better in predicting the observed temporal dissolved oxygen (DO) levels in the lake. BMCML also produces calibration and validation results comparable with those obtained using the popular Markov Chain Monte Carlo (MCMC) technique, and is computationally simpler and easier to implement than MCMC. Next, using the calibrated model, we derive an optimal relationship between the liquid film-transfer coefficient
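The general BMCML pattern sketched in the abstract above, drawing parameter sets by Monte Carlo, weighting them by a Gaussian likelihood against observations, and taking the maximum-likelihood draw as the calibrated value, can be illustrated roughly as follows. The oxygen model, prior range, noise level, and data here are placeholders, not the authors' actual lake model or measurements.

import numpy as np

rng = np.random.default_rng(0)

# Placeholder "model": exponential DO recovery toward saturation.
def do_model(t, k, do_sat=9.0, do_0=2.0):
    return do_sat - (do_sat - do_0) * np.exp(-k * t)

t_obs = np.arange(0, 30.0)                                        # days
do_obs = do_model(t_obs, 0.15) + rng.normal(0, 0.3, t_obs.size)   # synthetic data

# Bayesian Monte Carlo: sample the recovery rate from a uniform prior.
k_samples = rng.uniform(0.01, 0.5, 10_000)
sigma = 0.3                                                       # assumed measurement error (mg/L)

# Gaussian log-likelihood of each sampled parameter value.
resid = do_obs[None, :] - do_model(t_obs[None, :], k_samples[:, None])
log_like = -0.5 * np.sum((resid / sigma) ** 2, axis=1)

weights = np.exp(log_like - log_like.max())
weights /= weights.sum()

k_ml = k_samples[np.argmax(log_like)]     # maximum-likelihood draw
k_mean = np.sum(weights * k_samples)      # posterior mean under the uniform prior
print(f"ML estimate of k: {k_ml:.3f}, posterior mean: {k_mean:.3f}")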
453. Peptide dynamics by molecular dynamics simulation and diffusion theory method with improved basis sets

NASA Astrophysics Data System (ADS)

Hsu, Po Jen; Lai, S. K.; Rapallo, Arnaldo

2014-03-01

Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit

454. Peptide dynamics by molecular dynamics simulation and diffusion theory method with improved basis sets

SciTech Connect

Hsu, Po Jen; Lai, S. K.; Rapallo, Arnaldo

2014-03-14

Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit

455. Peptide dynamics by molecular dynamics simulation and diffusion theory method with improved basis sets

PubMed

Hsu, Po Jen; Lai, S K; Rapallo, Arnaldo

2014-03-14

Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit

456. Dynamic measurements and uncertainty estimation of clinical thermometers using Monte Carlo method

NASA Astrophysics Data System (ADS)

Ogorevc, Jaka; Bojkovski, Jovan; Pušnik, Igor; Drnovšek, Janko

2016-09-01

Clinical thermometers in intensive care units are used for the continuous measurement of body temperature. This study describes a procedure for dynamic measurement uncertainty evaluation in order to examine the requirements for clinical thermometer dynamic properties in standards and recommendations. Thermistors were used as temperature sensors, transient temperature measurements were performed in water and air, and the measurement data were processed for the investigation of thermometer dynamic properties. The thermometers were mathematically modelled. A Monte Carlo method was implemented for dynamic measurement uncertainty evaluation. The measurement uncertainty was analysed for static and dynamic conditions. Results showed that dynamic uncertainty is much larger than steady-state uncertainty. The results of the dynamic uncertainty analysis were applied to an example of clinical measurements and compared to the current requirements of the ISO standard for clinical thermometers. It can be concluded that there is no need for dynamic evaluation of clinical thermometers for continuous measurement, as the dynamic measurement uncertainty was within the demands of the target uncertainty, whereas in the case of intermittent predictive thermometers the thermometer dynamic properties had a significant impact on the measurement result. Estimation of dynamic uncertainty is crucial for the assurance of traceable and comparable measurements.
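A Monte Carlo evaluation of dynamic measurement uncertainty of the general kind outlined above can be pictured as follows: a first-order thermistor response is simulated repeatedly with the time constant and sensor noise drawn from assumed distributions, and the spread of the indicated temperature at a fixed read-out time is taken as the dynamic uncertainty. The sensor parameters below are invented for illustration and are not the values of the instruments studied in the paper.

import numpy as np

rng = np.random.default_rng(1)

def first_order_response(t, t_ambient, t_body, tau):
    """Indicated temperature of a first-order sensor stepped from ambient to body."""
    return t_body - (t_body - t_ambient) * np.exp(-t / tau)

n_trials = 20_000
read_time = 30.0                                   # s after probe placement
tau = rng.normal(10.0, 1.0, n_trials)              # assumed time-constant spread (s)
noise = rng.normal(0.0, 0.02, n_trials)            # assumed sensor noise (deg C)

indicated = first_order_response(read_time, 23.0, 37.0, tau) + noise

print(f"mean indication: {indicated.mean():.3f} deg C")
print(f"dynamic standard uncertainty: {indicated.std(ddof=1):.3f} deg C")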
457. Eliciting information from experts on the likelihood of rapid climate change

PubMed

Arnell, Nigel W; Tompkins, Emma L; Adger, W Neil

2005-12-01

The threat of so-called rapid or abrupt climate change has generated considerable public interest because of its potentially significant impacts. The collapse of the North Atlantic Thermohaline Circulation or the West Antarctic Ice Sheet, for example, would have potentially catastrophic effects on temperatures and sea level, respectively. But how likely are such extreme climatic changes? Is it possible actually to estimate likelihoods? This article reviews the societal demand for the likelihoods of rapid or abrupt climate change, and different methods for estimating likelihoods: past experience, model simulation, or the elicitation of expert judgments. The article describes a survey to estimate the likelihoods of two characterizations of rapid climate change, and explores the issues associated with such surveys and the value of the information produced. The surveys were based on key scientists chosen for their expertise in the climate science of abrupt climate change. Most survey respondents ascribed low likelihoods to rapid climate change, due either to the collapse of the Thermohaline Circulation or to increased positive feedbacks. In each case one assessment was an order of magnitude higher than the others. We explore a high rate of refusal to participate in this expert survey: many scientists prefer to rely on output from future climate model simulations.

458. Protein folding, protein structure and the origin of life: Theoretical methods and solutions of dynamical problems

NASA Technical Reports Server (NTRS)

Weaver, D. L.

1982-01-01

Theoretical methods and solutions of the dynamics of protein folding, protein aggregation, protein structure, and the origin of life are discussed. The elements of a dynamic model representing the initial stages of protein folding are presented. The calculation and experimental determination of the model parameters are discussed. The use of computer simulation for modeling protein folding is considered.

459. Numerical methods in vehicle system dynamics: state of the art and current developments

NASA Astrophysics Data System (ADS)

Arnold, M.; Burgermeister, B.; Führer, C.; Hippmann, G.; Rill, G.

2011-07-01

Robust and efficient numerical methods are an essential prerequisite for the computer-based dynamical analysis of engineering systems. In vehicle system dynamics, the methods and software tools from multibody system dynamics provide the integration platform for the analysis, simulation and optimisation of the complex dynamical behaviour of vehicles and vehicle components and their interaction with hydraulic components, electronic devices and control structures. Based on the principles of classical mechanics, the modelling of vehicles and their components results in nonlinear systems of ordinary differential equations (ODEs) or differential-algebraic equations (DAEs) of moderate dimension that describe the dynamical behaviour in the frequency range required and with a level of detail being characteristic of vehicle system dynamics. Most practical problems in this field may be transformed to generic problems of numerical mathematics like systems of nonlinear equations in the (quasi-)static analysis and explicit ODEs or DAEs with a typical semi-explicit structure in the dynamical analysis. This transformation to mathematical standard problems allows one to use sophisticated, freely available numerical software that is based on well-approved numerical methods like the Newton-Raphson iteration for nonlinear equations or Runge-Kutta and linear multistep methods for ODE/DAE time integration. Substantial speed-ups of these numerical standard methods may be achieved by exploiting some specific structure of the mathematical models in vehicle system dynamics. In the present paper, we follow this framework and start with some modelling aspects being relevant from the numerical viewpoint. The focus of the paper is on numerical methods for static and dynamic problems, including software issues and a discussion of which method fits best for which class of problems. Adaptive components in state-of-the-art numerical software like stepsize and order control in time integration are

460. Mapping gravitational lensing of the CMB using local likelihoods

SciTech Connect

Anderes, Ethan; Knox, Lloyd; Engelen, Alexander van

2011-02-15

We present a new estimation method for mapping the gravitational lensing potential from observed CMB intensity and polarization fields. Our method uses Bayesian techniques to estimate the average curvature of the potential over small local regions. These local curvatures are then used to construct an estimate of a low pass filter of the gravitational potential. By utilizing Bayesian/likelihood methods one can easily overcome problems with missing and/or nonuniform pixels and problems with partial sky observations (E- and B-mode mixing, for example). Moreover, our methods are local in nature, which allows us to easily model spatially varying beams, and are highly parallelizable. We note that our estimates do not rely on the typical Taylor approximation which is used to construct estimates of the gravitational potential by Fourier coupling. We present our methodology with a flat sky simulation under nearly ideal experimental conditions with a noise level of 1 μK-arcmin for the temperature field, √2 μK-arcmin for the polarization fields, and an instrumental beam full width at half maximum (FWHM) of 0.25 arcmin.

461. Maximum likelihood positioning and energy correction for scintillation detectors

PubMed

Lerche, Christoph W; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten

2016-02-21

An algorithm for determining the crystal pixel and the gamma ray energy with scintillation detectors for PET is presented. The algorithm uses Likelihood Maximisation (ML) and is therefore inherently robust to missing data caused by defective or paralysed photodetector pixels. We tested the algorithm on a highly integrated MRI-compatible small animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and length of 12 mm. Light sharing was used to read out the scintillation light from the 30 × 30 scintillator pixel array with an 8 × 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner's spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage for the ML method was observed when comparing the PET scanner's overall single and prompt detection efficiency, image noise, and energy resolution to the CoG based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving the highest energy resolution, count rate performance and spatial resolution at the same time.
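The core of an ML positioning scheme of the kind described above is a table of expected light patterns per crystal pixel and a Poisson log-likelihood that simply skips dead photodetector pixels. The sketch below illustrates that idea with a made-up Gaussian light-spread model and an 8 × 8 readout; it is not the authors' detector calibration or reconstruction code.

import numpy as np

# Expected light distribution on an 8x8 SiPM array for each crystal pixel,
# generated here from a toy Gaussian light-spread model (a placeholder for a
# measured calibration table).
def expected_light(crystal_xy, total_photons=2000.0, sigma=1.2):
    gx, gy = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    cx, cy = crystal_xy
    pattern = np.exp(-((gx - cx) ** 2 + (gy - cy) ** 2) / (2 * sigma**2))
    return total_photons * pattern / pattern.sum()

crystal_grid = [(x, y) for x in np.linspace(0, 7, 30) for y in np.linspace(0, 7, 30)]
templates = np.array([expected_light(c) for c in crystal_grid])   # shape (900, 8, 8)

def ml_position(counts, alive):
    """Return the crystal index maximizing the Poisson log-likelihood,
    ignoring dead or paralysed SiPM pixels via the `alive` mask."""
    lam = np.clip(templates, 1e-9, None)
    loglik = counts * np.log(lam) - lam            # Poisson terms (constant dropped)
    loglik = np.where(alive, loglik, 0.0).sum(axis=(1, 2))
    return int(np.argmax(loglik))

rng = np.random.default_rng(2)
true_idx = 450
counts = rng.poisson(templates[true_idx])
alive = np.ones((8, 8), dtype=bool)
alive[3, 5] = False                                # one defective pixel
print(crystal_grid[true_idx], crystal_grid[ml_position(counts, alive)])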
462. Likelihood ratio based tests for longitudinal drug safety data

PubMed

Huang, Lan; Zalkikar, Jyoti; Tiwari, Ram

2014-06-30

This article presents longitudinal likelihood ratio test (LongLRT) methods for large databases with exposure information. These methods are applied to a pooled large longitudinal clinical trial dataset for drugs treating osteoporosis with concomitant use of proton pump inhibitors (PPIs). When the interest is in the evaluation of a signal of an adverse event for a particular drug compared with placebo or a comparator, the special case of the LongLRT, referred to as sequential LRT (SeqLRT), is also presented. The results show that there is some possible evidence of concomitant use of PPIs leading to more adverse events associated with osteoporosis. The performance of the proposed LongLRT and SeqLRT methods is evaluated using simulated datasets and shown to be good in terms of (conditional) power and control of type I error over time. The proposed methods can also be applied to large observational databases with exposure information under the US Food and Drug Administration Sentinel Initiative for active surveillance. Published 2014. This article is a US Government work and is in the public domain in the USA.
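As a much-simplified illustration of the likelihood ratio idea behind such tests, the sketch below computes a Poisson likelihood ratio statistic comparing an adverse-event rate on drug versus comparator, with exposure taken as person-time. The counts are invented, and the actual LongLRT/SeqLRT procedures involve far more structure (repeated looks over time and many drug-event pairs) than this two-group comparison.

import math

def poisson_loglik(count, exposure, rate):
    # log of the Poisson pmf up to the count-dependent constant log(count!)
    return count * math.log(rate * exposure) - rate * exposure

def lrt_statistic(c_drug, e_drug, c_comp, e_comp):
    """2 * log likelihood ratio: separate event rates vs. one common rate."""
    rate_common = (c_drug + c_comp) / (e_drug + e_comp)
    rate_drug, rate_comp = c_drug / e_drug, c_comp / e_comp
    ll_alt = (poisson_loglik(c_drug, e_drug, rate_drug)
              + poisson_loglik(c_comp, e_comp, rate_comp))
    ll_null = (poisson_loglik(c_drug, e_drug, rate_common)
               + poisson_loglik(c_comp, e_comp, rate_common))
    return 2.0 * (ll_alt - ll_null)

# Hypothetical counts: 30 events in 1000 person-years on drug vs 18 in 1200 on comparator.
print(f"LRT statistic: {lrt_statistic(30, 1000.0, 18, 1200.0):.2f}")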
463. Dynamic multiplexed analysis method using ion mobility spectrometer

SciTech Connect

Belov, Mikhail E

2010-05-18

A method for multiplexed analysis using an ion mobility spectrometer in which the effectiveness and efficiency of the multiplexed method is optimized by automatically adjusting the rates of passage of analyte materials through an IMS drift tube during operation of the system. This automatic adjustment is performed by the IMS instrument itself after determining the appropriate levels of adjustment according to the method of the present invention. In one example, the adjustment of the rates of passage for these materials is determined by quantifying the total number of analyte molecules delivered to the ion trap in a preselected period of time, comparing this number to the charge capacity of the ion trap, selecting a gate opening sequence, and implementing the selected gate opening sequence to obtain a preselected rate of analytes within said IMS drift tube.

464. Dynamically screened local correlation method using enveloping localized orbitals

PubMed

Auer, Alexander A; Nooijen, Marcel

2006-07-14

In this paper we present a local coupled cluster approach based on a dynamical screening scheme, in which amplitudes are either calculated at the coupled cluster level (in this case CCSD) or at the level of perturbation theory, employing a threshold-driven procedure based on MP2 energy increments. In this way, controllable accuracy and smooth convergence towards the exact result are obtained in the framework of an a posteriori approximation scheme. For the representation of the occupied space a new set of local orbitals is presented with the size of a minimal basis set. This set is atom centered, is nonorthogonal, and has shapes which are fairly independent of the details of the molecular system of interest. Two slightly different versions of combined local coupled cluster and perturbation theory equations are considered. In the limit both converge to the untruncated CCSD result. Benchmark calculations for four systems (heptane, serine, water hexamer, and oxadiazole-2-oxide) are carried out, and decay of the amplitudes, truncation error, and convergence towards the exact CCSD result are analyzed.

465. Dynamically screened local correlation method using enveloping localized orbitals

NASA Astrophysics Data System (ADS)

Auer, Alexander A.; Nooijen, Marcel

2006-07-01

In this paper we present a local coupled cluster approach based on a dynamical screening scheme, in which amplitudes are either calculated at the coupled cluster level (in this case CCSD) or at the level of perturbation theory, employing a threshold-driven procedure based on MP2 energy increments. In this way, controllable accuracy and smooth convergence towards the exact result are obtained in the framework of an a posteriori approximation scheme. For the representation of the occupied space a new set of local orbitals is presented with the size of a minimal basis set. This set is atom centered, is nonorthogonal, and has shapes which are fairly independent of the details of the molecular system of interest. Two slightly different versions of combined local coupled cluster and perturbation theory equations are considered. In the limit both converge to the untruncated CCSD result. Benchmark calculations for four systems (heptane, serine, water hexamer, and oxadiazole-2-oxide) are carried out, and decay of the amplitudes, truncation error, and convergence towards the exact CCSD result are analyzed.

466. An Adaptive Likelihood Distribution Algorithm for the Localization of Passive RFID Tags

NASA Astrophysics Data System (ADS)

Ota, Yuuki; Hori, Toshihiro; Onishi, Taiki; Wada, Tomotaka; Mutsuura, Kouichi; Okada, Hiromi

The RFID (Radio Frequency IDentification) tag technology is expected to serve as a tool for localization. Through the localization of RFID tags, a mobile robot equipped with RFID readers can recognize its surrounding environment. In addition, RFID tags can be applied to navigation systems for pedestrians. In this paper, we propose an adaptive likelihood distribution scheme for the localization of RFID tags. This method adjusts the likelihood distribution depending on the signal intensity from the RFID tags. We carry out the performance evaluation of the estimated position error by both computer simulations and implementation experiments. We show that the proposed system is more effective than the conventional system.

467. A general methodology for maximum likelihood inference from band-recovery data

USGS Publications Warehouse

Conroy, M.J.; Williams, B.K.

1984-01-01

A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
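The numerical likelihood maximization that such a procedure relies on can be illustrated with a deliberately simplified one-age-class setup in which banded birds survive each year with probability S and are recovered with probability f in a given year. The data and the two-parameter likelihood below are invented for illustration and do not reproduce the models or software of the paper.

import numpy as np

# Hypothetical data: 1000 birds banded, recoveries in years 1..5 after banding.
n_banded = 1000
recoveries = np.array([62, 48, 35, 27, 20])

def log_likelihood(s, f):
    """Multinomial log-likelihood of a toy band-recovery model:
    P(recovered in year j) = f * s**(j-1); the remainder are never recovered."""
    years = np.arange(1, recoveries.size + 1)
    p = f * s ** (years - 1)
    p_never = 1.0 - p.sum()
    if p_never <= 0 or np.any(p <= 0):
        return -np.inf
    return recoveries @ np.log(p) + (n_banded - recoveries.sum()) * np.log(p_never)

# Crude grid search for the maximum likelihood estimates of S and f.
s_grid = np.linspace(0.30, 0.95, 131)
f_grid = np.linspace(0.01, 0.20, 96)
ll = np.array([[log_likelihood(s, f) for f in f_grid] for s in s_grid])
i, j = np.unravel_index(np.argmax(ll), ll.shape)
print(f"MLE: survival S ~ {s_grid[i]:.3f}, recovery rate f ~ {f_grid[j]:.3f}")

In practice the grid search would be replaced by a gradient-based optimizer, which is essentially what a general-purpose numerical MLE program automates for richer model families.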
468. A Measure of the goodness of fit in unbinned likelihood fits; end of Bayesianism?

SciTech Connect

Rajendran Raja

2004-03-12

Maximum likelihood fits to data can be done using binned data (histograms) and unbinned data. With binned data, one gets not only the fitted parameters but also a measure of the goodness of fit. With unbinned data, currently, the fitted parameters are obtained but no measure of goodness of fit is available. This remains, to date, an unsolved problem in statistics. Using Bayes' theorem and likelihood ratios, the authors provide a method by which both the fitted quantities and a measure of the goodness of fit are obtained for unbinned likelihood fits, as well as errors in the fitted quantities. The quantity conventionally interpreted as a Bayesian prior is seen in this scheme to be a number, not a distribution, that is determined from data.

469. New developments in adaptive methods for computational fluid dynamics

NASA Technical Reports Server (NTRS)

Oden, J. T.; Bass, Jon M.

1990-01-01

New developments in a posteriori error estimates, smart algorithms, and h- and h-p adaptive finite element methods are discussed in the context of two- and three-dimensional compressible and incompressible flow simulations. Applications to rotor-stator interaction, rotorcraft aerodynamics, shock and viscous boundary layer interaction and fluid-structure interaction problems are discussed.

470. The shape of the competition and carrying capacity kernels affects the likelihood of disruptive selection

PubMed

Baptestini, Elizabeth M; de Aguiar, Marcus A M; Bolnick, Daniel I; Araújo, Márcio S

2009-07-07

Many quantitative genetic and adaptive dynamics models suggest that disruptive selection can maintain genetic polymorphism and be the driving force causing evolutionary divergence. These models also suggest that disruptive selection arises from frequency-dependent intraspecific competition. For convenience or historical precedence, these models assume that carrying capacity and competition functions follow a Gaussian distribution. Here, we propose a new analytical framework that relaxes the assumption of Gaussian competition and carrying capacity functions, and investigate how alternative shapes affect the likelihood of disruptive selection. We found that the shapes of both the carrying capacity and competition kernels interact to determine the likelihood of disruptive selection. For certain regions of the parameter space disruptive selection is facilitated, whereas for others it becomes more difficult. Our results suggest that the relationship between the degree of frequency dependence and the likelihood of disruptive selection is more complex than previously thought, depending on how resources are distributed and how competition interference takes place. It is now important to describe the empirical patterns of resource distribution and competition in nature as a way to determine the likelihood of disruptive selection in natural populations.

471. A dynamic scanning method based on signal-statistics for scanning electron microscopy

PubMed

Timischl, F

2014-01-01

A novel dynamic scanning method for noise reduction in scanning electron microscopy and related applications is presented. The scanning method dynamically adjusts the scanning speed of the electron beam depending on the statistical behavior of the detector signal and gives SEM images with uniform and predefined standard deviation, independent of the signal value itself. In the case of partially saturated images, the proposed method decreases image acquisition time without sacrificing image quality. The effectiveness of the proposed method is shown and compared to the conventional scanning method and median filtering using numerical simulations.
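One minimal way to picture signal-statistics-driven scanning of this general kind is a per-pixel loop that keeps sampling the detector until the standard error of the running mean falls below a preset target, so that the dwell time adapts to the local noise instead of being fixed. The detector noise model and thresholds below are invented for illustration and are not the scheme published in the paper.

import math
import random

def acquire_pixel(true_signal, target_sigma=0.05, max_dwell=50_000):
    """Keep sampling a noisy detector until the standard error of the running
    mean falls below the predefined target standard deviation, then stop."""
    total, total_sq, n = 0.0, 0.0, 0
    while n < max_dwell:
        sample = random.gauss(true_signal, math.sqrt(true_signal))  # shot-noise-like toy detector
        total += sample
        total_sq += sample * sample
        n += 1
        if n >= 10:
            var = max((total_sq - total * total / n) / (n - 1), 0.0)
            if math.sqrt(var / n) < target_sigma:
                break
    return total / n, n

random.seed(0)
for signal in (0.2, 1.0, 5.0):
    est, dwell = acquire_pixel(signal)
    print(f"signal={signal:.1f}  estimate={est:.3f}  dwell samples={dwell}")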
472. A hybrid numerical-experimental method for determination of dynamic fracture properties of material

NASA Astrophysics Data System (ADS)

Mihradi, S.; Putra, I. S.; Dirgantara, T.; Widagdo, D.; Truong, L. X.

2010-03-01

A novel hybrid numerical-experimental method to obtain dynamic fracture properties of materials has been developed in the present work. Specimens were tested in a one-point bending configuration in the Hopkinson's bar apparatus, from which the impact loading profiles were measured. In this dynamic fracture experiment, the crack tip position was measured by two strips of special strain gages, each strip carrying five gages. Since the strain gage record only gives the strain signal of each gage as a function of time, a novel method is proposed to determine the time at which the crack tip passed each strain gage and the time when the crack finally stopped. From the data of crack tip position as a function of time, the crack speed can then be calculated. These data, i.e. the loading profile and the crack speed, were then used as the input of the Node-Based FEM program developed for dynamic fracture problems. With the proposed method, three dynamic fracture properties of materials, i.e. dynamic fracture toughness for crack initiation (KIcd), fracture toughness for crack propagation (KID), and crack arrest toughness (KIa), can be obtained simultaneously. The results obtained from the investigation of the dynamic fracture properties of Polymethyl Methacrylate (PMMA) by the present method compare well with those in the literature and with direct experimental measurements. The good agreement suggests that the hybrid method developed in the present work can be used reliably to determine the dynamic fracture properties of materials.

473. A hybrid numerical-experimental method for determination of dynamic fracture properties of material

NASA Astrophysics Data System (ADS)

Mihradi, S.; Putra, I. S.; Dirgantara, T.; Widagdo, D.; Truong, L. X.

2009-12-01

A novel hybrid numerical-experimental method to obtain dynamic fracture properties of materials has been developed in the present work. Specimens were tested in a one-point bending configuration in the Hopkinson's bar apparatus, from which the impact loading profiles were measured. In this dynamic fracture experiment, the crack tip position was measured by two strips of special strain gages, each strip carrying five gages. Since the strain gage record only gives the strain signal of each gage as a function of time, a novel method is proposed to determine the time at which the crack tip passed each strain gage and the time when the crack finally stopped. From the data of crack tip position as a function of time, the crack speed can then be calculated. These data, i.e. the loading profile and the crack speed, were then used as the input of the Node-Based FEM program developed for dynamic fracture problems. With the proposed method, three dynamic fracture properties of materials, i.e. dynamic fracture toughness for crack initiation (KIcd), fracture toughness for crack propagation (KID), and crack arrest toughness (KIa), can be obtained simultaneously. The results obtained from the investigation of the dynamic fracture properties of Polymethyl Methacrylate (PMMA) by the present method compare well with those in the literature and with direct experimental measurements. The good agreement suggests that the hybrid method developed in the present work can be used reliably to determine the dynamic fracture properties of materials.

474. Small Body GN and C Research Report: G-SAMPLE - An In-Flight Dynamical Method for Identifying Sample Mass [External Release Version]

NASA Technical Reports Server (NTRS)

Carson, John M., III; Bayard, David S.

2006-01-01

G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
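The underlying estimation problem, inferring an unknown mass from measured forces and a known excitation through a dynamics model, reduces in the simplest single-axis case with Gaussian sensor noise to a least-squares fit, which is also the maximum-likelihood estimate. The sketch below uses a made-up rigid-body setup and synthetic data; it is not the G-SAMPLE flight algorithm.

import numpy as np

rng = np.random.default_rng(3)

# Synthetic single-axis experiment: during a calibration burn the spacecraft
# follows a known commanded acceleration profile, and a force sensor at the
# sample canister measures the force needed to carry the sample mass along.
m_sample_true = 1.0                                  # kg, to be estimated
accel = rng.uniform(0.01, 0.05, 200)                 # commanded acceleration (m/s^2)
force_meas = m_sample_true * accel + rng.normal(0, 5e-4, accel.size)  # sensor noise (N)

# With Gaussian sensor noise, the maximum-likelihood mass estimate is the
# least-squares solution of F = m * a.
m_hat = np.dot(accel, force_meas) / np.dot(accel, accel)
sigma_m = np.std(force_meas - m_hat * accel, ddof=1) / np.sqrt(np.dot(accel, accel))
print(f"estimated sample mass: {m_hat:.3f} +/- {sigma_m:.3f} kg")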
475. Dynamic analysis methods for detecting anomalies in asynchronously interacting systems

SciTech Connect

Kumar, Akshat; Solis, John Hector; Matschke, Benjamin

2014-01-01

Detecting modifications to digital system designs, whether malicious or benign, is problematic due to the complexity of the systems being analyzed. Moreover, static analysis techniques and tools can only be used during the initial design and implementation phases to verify safety and liveness properties. It is computationally intractable to guarantee that any previously verified properties still hold after a system, or even a single component, has been produced by a third-party manufacturer. In this paper we explore new approaches for creating a robust system design by investigating highly-structured computational models that simplify verification and analysis. Our approach avoids the need to fully reconstruct the implemented system by incorporating a small verification component that dynamically detects deviations from the design specification at run-time. The first approach encodes information extracted from the original system design algebraically into a verification component. During run-time this component randomly queries the implementation for trace information and verifies that no design-level properties have been violated. If any deviation is detected then a pre-specified fail-safe or notification behavior is triggered. Our second approach utilizes a partitioning methodology to view liveness and safety properties as a distributed decision task and the implementation as a proposed protocol that solves this task. Thus the problem of verifying safety and liveness properties is translated to that of verifying that the implementation solves the associated decision task. We build upon results from distributed systems and algebraic topology to construct a learning mechanism for verifying safety and liveness properties from samples of run-time executions.

476. A self-consistent field method for galactic dynamics

NASA Technical Reports Server (NTRS)

Hernquist, Lars; Ostriker, Jeremiah P.

1992-01-01

The present study describes an algorithm for evolving collisionless stellar systems in order to investigate the evolution of systems with density profiles like the R^1/4 law, using only a few terms in the expansions. A good fit is obtained for a truncated isothermal distribution, which renders the method appropriate for galaxies with flat rotation curves. Calculations employing N of about 10^6 to 10^7 are straightforward on existing supercomputers, making possible simulations having significantly smoother fields than with direct methods such as tree-codes. Orbits are found in a given static or time-dependent gravitational field; the potential, phi(r, t), is revised from the resultant density, rho(r, t). Possible scientific uses of this technique are discussed, including tidal perturbations of dwarf galaxies, the adiabatic growth of central masses in spheroidal galaxies, instabilities in realistic galaxy models, and secular processes in galactic evolution.

477. Catastrophic fault diagnosis in dynamic systems using bond graph methods

SciTech Connect

Yarom, Tamar

1990-01-01

Detection and diagnosis of faults has become a critical issue in high performance engineering systems as well as in mass-produced equipment. It is particularly helpful when the diagnosis can be made at the initial design level with respect to a prospective fault list. A number of powerful methods have been developed for aiding in the general fault analysis of designs. Catastrophic faults represent the limit case of complete local failure of connections or components. They result in the interruption of energy transfer between corresponding points in the system. In this work the conventional approach to fault detection and diagnosis is extended by means of bond-graph methods to a wide variety of engineering systems. Attention is focused on catastrophic fault diagnosis. A catastrophic fault dictionary is generated from the system model based on topological properties of the bond graph. The dictionary is processed by existing methods to extract a catastrophic fault report to aid the engineer in performing a design analysis.

478. A Stochastic Newmark Method for Engineering Dynamical Systems

NASA Astrophysics Data System (ADS)

Roy, D.; Dash, M. K.

2002-01-01

The purpose of this study is to develop a stochastic Newmark integration principle based on an implicit stochastic Taylor (Ito-Taylor or Stratonovich-Taylor) expansion of the vector field. As in the deterministic case, implicitness in stochastic Taylor expansions for the displacement and velocity vectors is achieved by introducing a couple of non-unique integration parameters, α and β. A rigorous error analysis is performed to put bounds on the local and global errors in computing displacements and velocities. The stochastic Newmark method is elegantly adaptable for obtaining strong sample-path solutions of linear and non-linear multi-degree-of-freedom (m.d.o.f.) stochastic engineering systems with continuous and Lipschitz-bounded vector fields under (filtered) white-noise inputs. The method has presently been numerically illustrated, to a limited extent, for sample-path integration of a hardening Duffing oscillator under additive and multiplicative white-noise excitations. The results are indicative of consistency, convergence and stochastic numerical stability of the stochastic Newmark method (SNM).
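For orientation, the deterministic Newmark recursion that such a stochastic scheme generalizes advances displacement and velocity with two free parameters (written gamma and beta below). The sketch applies it to a single-degree-of-freedom linear oscillator with a white-noise-like forcing drawn independently at each step; it illustrates the structure of the update only, not the Ito-Taylor-based derivation or error bounds of the paper.

import math
import random

def newmark_sdof(m=1.0, c=0.1, k=4.0, dt=0.01, n_steps=2000,
                 gamma=0.5, beta=0.25, noise_std=0.2, seed=0):
    """Newmark-beta integration of m*x'' + c*x' + k*x = f(t) for a single
    degree of freedom, with a random forcing sampled at each step."""
    random.seed(seed)
    x, v = 1.0, 0.0                        # initial displacement and velocity
    a = (0.0 - c * v - k * x) / m          # consistent initial acceleration
    for _ in range(n_steps):
        f_new = random.gauss(0.0, noise_std) / math.sqrt(dt)   # white-noise-like load
        # predictors
        x_pred = x + dt * v + dt * dt * (0.5 - beta) * a
        v_pred = v + dt * (1.0 - gamma) * a
        # implicit solve for the new acceleration (linear problem, closed form)
        a_new = (f_new - c * v_pred - k * x_pred) / (m + c * gamma * dt + k * beta * dt * dt)
        x = x_pred + dt * dt * beta * a_new
        v = v_pred + dt * gamma * a_new
        a = a_new
    return x, v

print(newmark_sdof())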
Assessing compatibility of direct detection data: halo-independent global likelihood analyses

SciTech Connect

Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

2016-10-18

We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a "constrained parameter goodness-of-fit" test statistic, whose p-value we then use to define a "plausibility region" (e.g. where p ≥ 10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p < 10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.
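The construction of the global likelihood is the part that translates most directly into code. Below is a minimal sketch, not the authors' analysis: it combines one unbinned extended likelihood (a Poisson term for the total expected count plus the event-level differential rate), one Gaussian likelihood, and one Poisson counting likelihood into a single global log-likelihood. The rate functions, data values, and parameterization are hypothetical placeholders; in the paper the maximization is carried out over piecewise-constant halo functions, which is not attempted here.

```python
import numpy as np
from scipy.special import gammaln

def extended_loglike(event_energies, rate_fn, mu_expected):
    """Unbinned extended likelihood: Poisson term for the total expected count
    plus the log of the differential rate at each observed event."""
    return (-mu_expected
            + np.sum(np.log(np.maximum(rate_fn(event_energies), 1e-300))))

def gaussian_loglike(observed, predicted, sigma):
    return -0.5 * ((observed - predicted) / sigma) ** 2

def poisson_loglike(n_observed, mu_predicted):
    return n_observed * np.log(mu_predicted) - mu_predicted - gammaln(n_observed + 1)

def global_loglike(theta, datasets):
    """Sum the per-experiment log-likelihood terms for a parameter vector theta.
    `datasets` is a list of (kind, payload) pairs; purely illustrative."""
    total = 0.0
    for kind, payload in datasets:
        if kind == "extended":
            events, rate_fn, mu_fn = payload
            total += extended_loglike(events, lambda e: rate_fn(e, theta), mu_fn(theta))
        elif kind == "gaussian":
            obs, pred_fn, sigma = payload
            total += gaussian_loglike(obs, pred_fn(theta), sigma)
        elif kind == "poisson":
            n_obs, mu_fn = payload
            total += poisson_loglike(n_obs, mu_fn(theta))
    return total

# toy usage with made-up numbers
theta0 = np.array([1.0])
toy_datasets = [
    ("extended", (np.array([8.0, 10.5, 12.0]),
                  lambda e, th: th[0] * np.exp(-e / 10.0),   # made-up dR/dE
                  lambda th: th[0] * 10.0)),                 # expected total count
    ("gaussian", (3.2, lambda th: 3.0 * th[0], 0.5)),
    ("poisson",  (4, lambda th: 3.5 * th[0])),
]
print(global_loglike(theta0, toy_datasets))
```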
An Alternative Estimator for the Maximum Likelihood Estimator for the Two Extreme Response Patterns.

ERIC Educational Resources Information Center

Samejima, Fumiko

In the methods and approaches developed for estimating the operating characteristics of the discrete item responses, the maximum likelihood estimate of the examinee based upon the "Old Test" has an important role. When Old Test does not provide a sufficient amount of test information for the upper and lower part of the ability interval,…

An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

ERIC Educational Resources Information Center

Magis, David; Raiche, Gilles

2010-01-01

In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

Integrated likelihoods in parametric survival models for highly clustered censored data.

PubMed

Cortese, Giuliana; Sartori, Nicola

2016-07-01

In studies that involve censored time-to-event data, stratification is frequently encountered due to different reasons, such as stratified sampling or model adjustment due to violation of model assumptions. Often, the main interest is not in the clustering variables, and the cluster-related parameters are treated as nuisance. When inference is about a parameter of interest in presence of many nuisance parameters, standard likelihood methods often perform very poorly and may lead to severe bias. This problem is particularly evident in models for clustered data with cluster-specific nuisance parameters, when the number of clusters is relatively high with respect to the within-cluster size. However, it is still unclear how the presence of censoring would affect this issue. We consider clustered failure time data with independent censoring, and propose frequentist inference based on an integrated likelihood. We then apply the proposed approach to a stratified Weibull model. Simulation studies show that appropriately defined integrated likelihoods provide very accurate inferential results in all circumstances, such as for highly clustered data or heavy censoring, even in extreme settings where standard likelihood procedures lead to strongly misleading results. We show that the proposed method performs generally as well as the frailty model, but it is superior when the frailty distribution is seriously misspecified. An application, which concerns treatments for a frequent disease in late-stage HIV-infected people, illustrates the proposed inferential method in Weibull regression models, and compares different inferential conclusions from alternative methods.
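To make the idea of an integrated likelihood concrete, here is a minimal sketch, not the authors' implementation: each stratum contributes a Weibull likelihood with its own log-scale intercept, that intercept is integrated out numerically against an assumed normal weight function, and the resulting integrated log-likelihood is maximized over the shared parameters (a treatment effect and a common shape). The toy data, the choice of weight function, and its spread are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def stratum_loglike(alpha, beta, shape, t, d, x):
    """Weibull log-likelihood for one stratum: scale exp(alpha + beta*x),
    event indicator d (1 = event, 0 = censored)."""
    lam = np.exp(alpha + beta * x)
    log_f = np.log(shape) + np.log(lam) + (shape - 1) * np.log(t) - lam * t**shape
    log_S = -lam * t**shape
    return np.sum(d * log_f + (1 - d) * log_S)

def integrated_loglike(params, strata, weight_sd=10.0):
    """Integrate each stratum-specific intercept against a N(0, weight_sd^2) weight."""
    beta, log_shape = params
    shape = np.exp(log_shape)
    total = 0.0
    for t, d, x in strata:
        integrand = lambda a: np.exp(
            stratum_loglike(a, beta, shape, t, d, x) - 0.5 * (a / weight_sd) ** 2)
        val, _ = quad(integrand, -20, 20, limit=200)
        total += np.log(max(val, 1e-300))
    return total

# toy data: many small strata, a binary treatment x, and some censoring
rng = np.random.default_rng(1)
strata = []
for _ in range(30):
    x = rng.integers(0, 2, size=5)
    t = rng.weibull(1.5, size=5) * np.exp(-0.3 * x)
    d = (rng.random(5) > 0.2).astype(float)
    strata.append((t, d, x))

fit = minimize(lambda p: -integrated_loglike(p, strata), x0=[0.0, 0.0],
               method="Nelder-Mead")
print("estimated treatment effect:", fit.x[0], "shape:", np.exp(fit.x[1]))
```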
A conditional likelihood approach for regression analysis using biomarkers measured with batch-specific error.

PubMed

Wang, Ming; Flanders, W Dana; Bostick, Roberd M; Long, Qi

2012-12-20

Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when batch effect is additive and the predominant source of error, which requires no assumptions on the distribution of measurement error. Although a regression model with batch as a categorical covariable yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular, logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we also examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method by using data from a colorectal adenoma study.
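The linear-regression case noted in the abstract (batch dummies give the same estimates as the conditional approach) is easy to demonstrate. The sketch below is illustrative only and does not reproduce the authors' conditional likelihood for logistic regression: it simulates an additive batch-specific shift in a predictor and compares a naive least-squares fit with a fit based on within-batch centering, which is equivalent to including batch as a categorical covariate.

```python
import numpy as np

rng = np.random.default_rng(2)
n_batches, per_batch = 40, 10
batch = np.repeat(np.arange(n_batches), per_batch)
x_true = rng.normal(size=batch.size)
batch_shift = rng.normal(scale=1.0, size=n_batches)[batch]  # additive batch-specific error
x_obs = x_true + batch_shift
y = 1.0 + 2.0 * x_true + rng.normal(scale=0.5, size=batch.size)

# naive OLS on the mismeasured predictor (slope attenuated toward 0)
X_naive = np.column_stack([np.ones_like(x_obs), x_obs])
beta_naive = np.linalg.lstsq(X_naive, y, rcond=None)[0]

# within-batch centering, equivalent to batch dummies for linear regression
def center_by_batch(v):
    means = np.array([v[batch == b].mean() for b in range(n_batches)])
    return v - means[batch]

xc, yc = center_by_batch(x_obs), center_by_batch(y)
beta_centered = (xc @ yc) / (xc @ xc)

print("naive slope:   ", beta_naive[1])
print("centered slope:", beta_centered)   # close to the true value 2.0
```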
The Relative Performance of Full Information Maximum Likelihood Estimation for Missing Data in Structural Equation Models.

ERIC Educational Resources Information Center

Enders, Craig K.; Bandalos, Deborah L.

2001-01-01

Used Monte Carlo simulation to examine the performance of four missing data methods in structural equation models: (1) full information maximum likelihood (FIML); (2) listwise deletion; (3) pairwise deletion; and (4) similar response pattern imputation. Results show that FIML estimation is superior across all conditions of the design. (SLD)
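FIML can be demonstrated outside of a full SEM: for multivariate-normal data, each case contributes the likelihood of just its observed variables, using the corresponding sub-vector of the mean and sub-matrix of the covariance. The sketch below is illustrative rather than a reproduction of the cited simulation; it estimates a mean vector and covariance matrix by FIML on data with values missing completely at random and contrasts it with listwise deletion, which discards roughly half of the rows here.

```python
import numpy as np
from scipy.optimize import minimize

def fiml_negloglike(params, data, p):
    """Casewise multivariate-normal negative log-likelihood over observed entries only."""
    mu = params[:p]
    L = np.zeros((p, p))
    L[np.tril_indices(p)] = params[p:]
    sigma = L @ L.T + 1e-8 * np.eye(p)   # Cholesky-style parameterization
    nll = 0.0
    for row in data:
        obs = ~np.isnan(row)
        if not obs.any():
            continue                      # skip rows with no observed values
        r = row[obs] - mu[obs]
        s = sigma[np.ix_(obs, obs)]
        _, logdet = np.linalg.slogdet(s)
        nll += 0.5 * (logdet + r @ np.linalg.solve(s, r))
    return nll

rng = np.random.default_rng(3)
p = 3
true_mu = np.array([0.0, 1.0, 2.0])
A = rng.normal(size=(p, p))
true_sigma = A @ A.T + np.eye(p)
data = rng.multivariate_normal(true_mu, true_sigma, size=300)
data[rng.random(data.shape) < 0.2] = np.nan   # ~20% missing completely at random

x0 = np.concatenate([np.zeros(p), np.eye(p)[np.tril_indices(p)]])
fit = minimize(fiml_negloglike, x0, args=(data, p), method="L-BFGS-B")
mu_fiml = fit.x[:p]

complete = data[~np.isnan(data).any(axis=1)]  # listwise deletion keeps complete rows only
print("FIML mean estimate:    ", mu_fiml)
print("listwise mean estimate:", complete.mean(axis=0))
print("true mean:             ", true_mu)
```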
Binary Detection using Multi-Hypothesis Log-Likelihood, Image Processing

DTIC Science & Technology

2014-03-27

Comparing … is important to compare them to another modern technique. The third objective is to compare results from another image detection method, specifically…

A Comparison of Maximum Likelihood and Bayesian Estimation for Polychoric Correlation Using Monte Carlo Simulation

ERIC Educational Resources Information Center

Choi, Jaehwa; Kim, Sunhee; Chen, Jinsong; Dannels, Sharon

2011-01-01

The purpose of this study is to compare the maximum likelihood (ML) and Bayesian estimation methods for polychoric correlation (PCC) under diverse conditions using a Monte Carlo simulation. Two new Bayesian estimates, maximum a posteriori (MAP) and expected a posteriori (EAP), are compared to ML, the classic solution, to estimate PCC. Different…

Assessment of the RR versus QT relation by a new symbolic dynamics method. Gender differences in repolarization dynamics.

PubMed

Baranowski, Rafał; Zebrowski, Jan J

2002-04-01

A new method based on symbolic dynamics was applied to assess RR-QT dynamics and to compare gender differences. Segments of 10,000 RR and QT intervals from the night were selected. The values of RR and QT were coded as follows. Each RR and QT interval was compared with its mean over the last 50 beats [xRR, xQT]: when the interval was larger than x + delta it was coded as a "2", where delta is the tolerance parameter; when it was less than x - delta it was coded as a "0"; and when it was larger than x - delta and less than x + delta it was coded as a "1". The tolerance parameter delta was equal to 10 ms for RR and 4 ms for QT. We obtained pairs of symbols representing the values of RR and QT (symbolic words). The results were presented in the form of the probability density of the symbolic words. Mean RR, mean QT, SDRR, SDQT, and QTc (Bazett formula) were also calculated. Electrocardiogram data of healthy individuals, 20 women and 20 men (mean age 39 +/- 12), were analyzed. There were significant gender differences in RR-QT dynamics. During heart rate acceleration the probability of QT shortening (the probability of the word "00") was higher in men than in women (P = .003). During heart rate deceleration QT lengthening (the word "22") was more frequently observed in men than in women (P = .003) as well. The QT reaction to RR interval changes is less complex in women than in men. In discriminant analysis, when QTc was ignored in the model, the RR-QT dynamics separated genders with 67% accuracy (chi(2) = 9.1, P < .003). RR-QT dynamics can be analyzed with symbolic dynamics methods. The gender differences in repolarization are not only due to QTc duration alone but also result from the dependence of the duration of QT on the RR duration.
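The coding rule is simple enough to state directly in code. The sketch below is an illustration rather than the authors' software: it maps paired RR and QT series to ternary symbols using the mean of the previous 50 beats and the stated tolerances (10 ms for RR, 4 ms for QT), then tabulates the relative frequency of two-symbol words such as "00" and "22". The synthetic data are purely illustrative.

```python
import numpy as np
from collections import Counter

def encode(series, delta, window=50):
    """Code each interval as '0', '1' or '2' relative to the mean of the
    previous `window` intervals, with tolerance `delta` (same units as series)."""
    symbols = []
    for i in range(window, len(series)):
        x = np.mean(series[i - window:i])
        v = series[i]
        if v > x + delta:
            symbols.append("2")
        elif v < x - delta:
            symbols.append("0")
        else:
            symbols.append("1")
    return symbols

def word_probabilities(rr_ms, qt_ms, delta_rr=10.0, delta_qt=4.0):
    rr_sym = encode(rr_ms, delta_rr)
    qt_sym = encode(qt_ms, delta_qt)
    words = [r + q for r, q in zip(rr_sym, qt_sym)]   # RR symbol followed by QT symbol
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in sorted(counts.items())}

# toy example with synthetic beat-to-beat data (milliseconds)
rng = np.random.default_rng(4)
rr = 900 + np.cumsum(rng.normal(0, 8, size=2000))
qt = 380 + 0.15 * (rr - 900) + rng.normal(0, 3, size=2000)
probs = word_probabilities(rr, qt)
print(probs.get("00", 0.0), probs.get("22", 0.0))  # joint shortening / joint lengthening
```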
The Interpretation of Dynamic Contact Angles Measured by the Wilhelmy Plate Method

PubMed

Ramé

1997-01-01

We present an analysis for properly interpreting apparent dynamic contact angles measured using the Wilhelmy plate method at low capillary numbers, Ca. This analysis removes the ambiguity in current dynamic measurements, which interpret data with the same formula as static measurements. We properly account for all forces, including viscous forces, acting on the plate as it moves into or out of a liquid bath. Our main result, valid at O(1) as Ca --> 0, relates the apparent dynamic contact angle to material-dependent, geometry-independent parameters necessary for describing dynamic wetting of a system. The special case of the apparent contact angle = pi/2 was solved to O(Ca). This O(Ca) solution can guide numerical work necessary for higher Ca's and arbitrary values of the apparent contact angle. These results make the Wilhelmy plate a viable method for determining material parameters for dynamic spreading.

NMR and computational methods in the structural and dynamic characterization of ligand-receptor interactions.

PubMed

Ghitti, Michela; Musco, Giovanna; Spitaleri, Andrea

2014-01-01

The recurrent failures in drug discovery campaigns and the asymmetry between the enormous financial investments and the relatively scarce results have fostered the development of strategies based on complementary methods. In this context, in recent years the rigid lock-and-key binding concept had to be revisited in favour of a dynamic model of molecular recognition accounting for conformational changes of both the ligand and the receptor.
The high level of complexity required by a dynamic description of the processes underlying molecular recognition calls for a multidisciplinary investigation approach. In this perspective, the combination of nuclear magnetic resonance spectroscopy with molecular docking, conformational searches, and molecular dynamics simulations has given new insights into the dynamic mechanisms governing ligand-receptor interactions, thus making an enormous contribution to the identification and design of new and effective drugs. Herein a succinct overview of the applications of both NMR and computational methods to the structural and dynamic characterization of ligand-receptor interactions will be presented.

Single particle maximum likelihood reconstruction from superresolution microscopy images.

PubMed

Verdier, Timothée; Gunzenhauser, Julia; Manley, Suliana; Castelnovo, Martin

2017-01-01

Point localization superresolution microscopy enables fluorescently tagged molecules to be imaged beyond the optical diffraction limit, reaching single molecule localization precisions down to a few nanometers. For small objects whose sizes are a few times this precision, localization uncertainty prevents the straightforward extraction of a structural model from the reconstructed images. We demonstrate in the present work that this limitation can be overcome at the single particle level, requiring no particle averaging, by using a maximum likelihood reconstruction (MLR) method perfectly suited to the stochastic nature of such superresolution imaging. We validate this method by extracting structural information from both simulated and experimental PALM data of immature virus-like particles of the Human Immunodeficiency Virus (HIV-1). MLR allows us to measure the radii of individual viruses with a precision of a few nanometers and confirms the incomplete closure of the viral protein lattice. The quantitative results of our analysis are consistent with previous cryoelectron microscopy characterizations. Our study establishes the framework for a method that can be broadly applied to PALM data to determine the structural parameters for an existing structural model, and is particularly well suited to heterogeneous features due to its single particle implementation.

A new uncertain analysis method and its application in vehicle dynamics

NASA Astrophysics Data System (ADS)

Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing

2015-01-01

This paper proposes a new uncertain analysis method for vehicle dynamics involving hybrid uncertainty parameters. The Polynomial Chaos (PC) theory that accounts for the random uncertainty is systematically integrated with the Chebyshev inclusion function theory that describes the interval uncertainty, to deliver a Polynomial-Chaos-Chebyshev-Interval (PCCI) method. The PCCI method is non-intrusive, because it does not require the amendment of the original solver for different and complicated dynamics problems. Two types of evaluation indexes are established: the first includes the interval mean (IM) and interval variance (IV); the second comprises the mean of the lower bound (MLB), the variance of the lower bound (VLB), the mean of the upper bound (MUB) and the variance of the upper bound (VUB). The Monte Carlo method is combined with the scanning method to produce the reference results, and a 4-DOF vehicle roll plane model is then employed to demonstrate the effectiveness of the proposed method for vehicle dynamics.
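The reference solution mentioned in the abstract (Monte Carlo over the random parameters combined with scanning over the interval parameters) is straightforward to sketch. The code below is an illustrative double loop on a toy response function; it is not the PCCI method and not the cited 4-DOF model, and the response function, parameter ranges, and the exact index definitions are assumptions made for the example (MLB/VLB/MUB/VUB are read here as the mean/variance, over the random samples, of the per-sample lower/upper bounds over the scanned interval parameter).

```python
import numpy as np

def response(random_param, interval_param):
    """Toy response standing in for a vehicle-dynamics output."""
    return np.sin(interval_param) * random_param + 0.1 * random_param**2

rng = np.random.default_rng(5)
random_samples = rng.normal(loc=1.0, scale=0.2, size=5000)   # Monte Carlo loop
interval_grid = np.linspace(0.8, 1.2, 101)                   # scanning loop

# bounds over the interval parameter for every random sample
values = response(random_samples[:, None], interval_grid[None, :])
lower = values.min(axis=1)
upper = values.max(axis=1)

MLB, VLB = lower.mean(), lower.var()
MUB, VUB = upper.mean(), upper.var()

# per-interval-point statistics over the random samples, then their ranges
means = values.mean(axis=0)
variances = values.var(axis=0)
IM = (means.min(), means.max())          # interval of the mean
IV = (variances.min(), variances.max())  # interval of the variance

print("MLB, MUB:", MLB, MUB)
print("IM:", IM, "IV:", IV)
```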
A multi-similarity spectral clustering method for community detection in dynamic networks.

PubMed

Qin, Xuanmei; Dai, Weidi; Jiao, Pengfei; Wang, Wenjun; Yuan, Ning

2016-08-16

Community structure is one of the fundamental characteristics of complex networks. Many methods have been proposed for community detection. However, most of these methods are designed for static networks and are not suitable for dynamic networks that evolve over time. Recently, the evolutionary clustering framework was proposed for clustering dynamic data, and it can also be used for community detection in dynamic networks. In this paper, a multi-similarity spectral clustering (MSSC) method is proposed as an improvement to the former evolutionary clustering method. To detect the community structure in dynamic networks, our method considers the different similarity metrics of networks. First, multiple similarity matrices are constructed for each snapshot of the dynamic networks. Then, a dynamic co-training algorithm is proposed by bootstrapping the clustering of different similarity measures. Compared with a number of baseline models, the experimental results show that the proposed MSSC method has better performance on some widely used synthetic and real-world datasets with ground-truth community structure that change over time.
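As an illustration of the basic ingredients (multiple similarity matrices per snapshot plus spectral clustering), here is a minimal sketch. It is not the MSSC algorithm: the co-training/bootstrapping step and the temporal smoothing of the evolutionary clustering framework are replaced by a simple average of the similarity matrices, and the two similarity views (adjacency and normalized common-neighbour counts) are choices made for the example.

```python
import numpy as np

def spectral_communities(similarity, k):
    """Plain spectral clustering: k-means on the leading eigenvectors of the
    symmetric normalized Laplacian."""
    d = similarity.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    laplacian = np.eye(len(similarity)) - d_inv_sqrt @ similarity @ d_inv_sqrt
    _, vecs = np.linalg.eigh(laplacian)          # ascending eigenvalues
    embedding = vecs[:, :k]
    rng = np.random.default_rng(0)               # tiny k-means
    centers = embedding[rng.choice(len(embedding), k, replace=False)]
    for _ in range(50):
        labels = np.argmin(((embedding[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([embedding[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

def snapshot_communities(adjacency, k=2):
    """Combine two similarity views of one snapshot and cluster."""
    common_neighbors = adjacency @ adjacency     # second similarity metric
    np.fill_diagonal(common_neighbors, 0)
    cn = common_neighbors / max(common_neighbors.max(), 1)
    combined = 0.5 * adjacency + 0.5 * cn        # stand-in for the co-training step
    return spectral_communities(combined, k)

# toy snapshot: two 5-node cliques joined by a single edge
A = np.zeros((10, 10))
A[:5, :5] = 1
A[5:, 5:] = 1
np.fill_diagonal(A, 0)
A[4, 5] = A[5, 4] = 1
print(snapshot_communities(A, k=2))
```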
Dynamic Moire methods for detection of loosened space shuttle tiles

NASA Astrophysics Data System (ADS)

Snow, W. L.; Burner, A. W.; Goad, W. K.

1981-09-01

Moire fringe methods for detecting loose space shuttle tiles were investigated with a test panel consisting of a loose tile surrounded by four securely bonded tiles. The test panel was excited from 20 to 150 Hz with in-plane sinusoidal acceleration of 2 g (peak). If the shuttle orbiter can be subjected to periodic excitation of 1 to 2 g (peak) and rigid-body periodic displacements do not mask the change in the Moire pattern due to tile looseness, then the use of projected Moire fringes to detect out-of-plane rocking appears to be the most viable indicator of tile looseness, since no modifications to the tiles are required.

Method and device for measurement of dynamic viscosity

NASA Astrophysics Data System (ADS)

Ciornei, F. C.; Alaci, S.; Amarandei, D.; Irimescu, L.; Romanu, I. C.; Acsinte, L. I.

2017-02-01

The paper proposes a methodology and ensuing test rig for finding the viscosity of a liquid lubricant. The principle consists in obtaining a contact between two spherical surfaces, one concave and the other convex. One of the surfaces is kept immobile while a rotation about the common normal at the contact point is imposed on the other, and the law of motion of the mobile lens is then recorded. This law of motion allows estimation of the friction torque, which in turn depends on the viscosity. Applying the method to mineral oils, values comparable to the ones quoted by the producer were obtained.
Increasing Power of Groupwise Association Test with Likelihood Ratio Test

PubMed Central

Sul, Jae Hoon; Han, Buhm

2011-01-01

Sequencing studies have been discovering a large number of rare variants, allowing the identification of the effects of rare variants on disease susceptibility. As a method to increase the statistical power of studies on rare variants, several groupwise association tests that group rare variants in genes and detect associations between genes and diseases have been proposed. One major challenge in these methods is to determine which variants are causal in a group, and to overcome this challenge, previous methods used prior information that specifies how likely each variant is causal. Another source of information that can be used to determine causal variants is the observed data, because case individuals are likely to have more causal variants than control individuals. In this article, we introduce a likelihood ratio test (LRT) that uses both data and prior information to infer which variants are causal and uses this finding to determine whether a group of variants is involved in a disease. We demonstrate through simulations that LRT achieves higher power than previous methods. We also evaluate our method on mutation screening data of the susceptibility gene for ataxia telangiectasia, and show that LRT can detect an association in real data. To increase the computational speed of our method, we show how we can decompose the computation of LRT, and propose an efficient permutation test. With this optimization, we can efficiently compute an LRT statistic and its significance at a genome-wide level. The software for our method is publicly available at http://genetics.cs.ucla.edu/rarevariants. PMID:21919745
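A stripped-down groupwise likelihood ratio test with a permutation p-value is easy to sketch. The example below is illustrative only and is not the published LRT (which models per-variant causal status using prior probabilities): here the group statistic is the likelihood ratio for a difference in rare-allele carrier rates between cases and controls, and significance is assessed by permuting case/control labels.

```python
import numpy as np

def binom_loglike(k, n, p):
    p = min(max(p, 1e-12), 1 - 1e-12)
    return k * np.log(p) + (n - k) * np.log(1 - p)

def group_lrt(carrier, case):
    """2 * (log-lik with separate carrier rates in cases and controls
            minus log-lik with a single pooled rate)."""
    k1, n1 = carrier[case == 1].sum(), (case == 1).sum()
    k0, n0 = carrier[case == 0].sum(), (case == 0).sum()
    p1, p0, p = k1 / n1, k0 / n0, (k1 + k0) / (n1 + n0)
    return 2 * (binom_loglike(k1, n1, p1) + binom_loglike(k0, n0, p0)
                - binom_loglike(k1 + k0, n1 + n0, p))

def permutation_pvalue(genotypes, case, n_perm=2000, seed=6):
    """genotypes: individuals x rare variants (0/1/2); collapsed to carrier status."""
    rng = np.random.default_rng(seed)
    carrier = (genotypes.sum(axis=1) > 0).astype(float)
    observed = group_lrt(carrier, case)
    null = np.array([group_lrt(carrier, rng.permutation(case)) for _ in range(n_perm)])
    return observed, (1 + np.sum(null >= observed)) / (1 + n_perm)

# toy data: 500 cases and 500 controls, 10 rare variants enriched in cases
rng = np.random.default_rng(7)
case = np.concatenate([np.ones(500), np.zeros(500)])
geno = (rng.random((1000, 10)) < np.where(case[:, None] == 1, 0.02, 0.01)).astype(int)
stat, pval = permutation_pvalue(geno, case)
print("LRT statistic:", stat, "permutation p-value:", pval)
```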
Convolutional codes. II - Maximum-likelihood decoding. III - Sequential decoding

NASA Technical Reports Server (NTRS)

Forney, G. D., Jr.

1974-01-01

Maximum-likelihood decoding is characterized as the determination of the shortest path through a topological structure called a trellis. Aspects of code structure are discussed along with questions regarding maximum-likelihood decoding on memoryless channels. A general bounding technique is introduced. The technique is used to obtain asymptotic bounds on the probability of error for maximum-likelihood decoding and list-of-2 decoding. The basic features of sequential algorithms are discussed along with a stack algorithm, questions of computational distribution, and the martingale approach to computational bounds.

Motif-Synchronization: A new method for analysis of dynamic brain networks with EEG

NASA Astrophysics Data System (ADS)

Rosário, R. S.; Cardoso, P. T.; Muñoz, M. A.; Montoya, P.; Miranda, J. G. V.

2015-12-01

The major aim of this work was to propose a new association method known as Motif-Synchronization. This method was developed to provide information about the synchronization degree and direction between two nodes of a network by counting the number of occurrences of some patterns between any two time series. The second objective of this work was to present a new methodology for the analysis of dynamic brain networks, by combining the Time-Varying Graph (TVG) method with a directional association method. We further applied the new algorithms to a set of human electroencephalogram (EEG) signals to perform a dynamic analysis of the brain functional networks (BFN).
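The counting idea behind a motif-based synchronization measure can be sketched directly, though the details below are assumptions rather than the published definition: each series is converted to a sequence of ordinal motifs (the rank pattern of three consecutive samples), the fraction of matching motifs is computed over a range of lags in each direction, and the asymmetry of the best match indicates the dominant direction of influence.

```python
import numpy as np

def ordinal_motifs(x, m=3):
    """Map each window of m consecutive samples to its rank-pattern index."""
    windows = np.lib.stride_tricks.sliding_window_view(x, m)
    return np.array([hash(tuple(np.argsort(w))) for w in windows])

def motif_sync(x, y, max_lag=10, m=3):
    """Fraction of coinciding motifs maximized over lags; returns (degree, direction).
    direction > 0 suggests x leads y, < 0 suggests y leads x (illustrative convention)."""
    mx, my = ordinal_motifs(x, m), ordinal_motifs(y, m)
    best_xy = best_yx = 0.0
    for lag in range(0, max_lag + 1):
        n = min(len(mx) - lag, len(my) - lag)
        best_xy = max(best_xy, np.mean(mx[:n] == my[lag:lag + n]))   # x earlier than y
        best_yx = max(best_yx, np.mean(my[:n] == mx[lag:lag + n]))   # y earlier than x
    degree = max(best_xy, best_yx)
    direction = np.sign(best_xy - best_yx)
    return degree, direction

# toy example: y is a noisy, delayed copy of x, so x should lead y
rng = np.random.default_rng(8)
x = np.sin(np.linspace(0, 40, 2000)) + 0.2 * rng.normal(size=2000)
y = np.roll(x, 5) + 0.2 * rng.normal(size=2000)
print(motif_sync(x, y))
```

In a TVG-style analysis this pairwise measure would be evaluated on sliding windows of the EEG channels to build one directed graph per time step.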