Analysis of neighborhood dynamics of forest ecosystems using likelihood methods and modeling.
Canham, Charles D; Uriarte, María
2006-02-01
Advances in computing power in the past 20 years have led to a proliferation of spatially explicit, individual-based models of population and ecosystem dynamics. In forest ecosystems, the individual-based models encapsulate an emerging theory of "neighborhood" dynamics, in which fine-scale spatial interactions regulate the demography of component tree species. The spatial distribution of component species, in turn, regulates spatial variation in a whole host of community and ecosystem properties, with subsequent feedbacks on component species. The development of these models has been facilitated by the development of new methods of analysis of field data, in which critical demographic rates and ecosystem processes are analyzed in terms of the spatial distributions of neighboring trees and physical environmental factors. The analyses are based on likelihood methods and information theory, and they allow a tight linkage between the models and explicit parameterization of the models from field data. Maximum likelihood methods have a long history of use for point and interval estimation in statistics. In contrast, likelihood principles have only more gradually emerged in ecology as the foundation for an alternative to traditional hypothesis testing. The alternative framework stresses the process of identifying and selecting among competing models, or in the simplest case, among competing point estimates of a parameter of a model. There are four general steps involved in a likelihood analysis: (1) model specification, (2) parameter estimation using maximum likelihood methods, (3) model comparison, and (4) model evaluation. Our goal in this paper is to review recent developments in the use of likelihood methods and modeling for the analysis of neighborhood processes in forest ecosystems. We will focus on a single class of processes, seed dispersal and seedling dispersion, because recent papers provide compelling evidence of the potential power of the approach, and illustrate…
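The four steps listed in this abstract can be sketched on a toy dataset. The following is a minimal illustration, not the authors' method: invented seedling counts per quadrat, a Poisson model, an analytic MLE, and AIC-based comparison against a fixed null rate.

```python
import math

def poisson_loglik(lam, counts):
    # Log-likelihood of a Poisson rate lambda given observed counts
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in counts)

# Hypothetical seedling counts per quadrat (step 1: model specification = Poisson)
counts = [2, 0, 3, 1, 4, 2, 1, 0, 2, 3]

# Step 2: maximum likelihood estimation (the Poisson MLE is the sample mean)
lam_hat = sum(counts) / len(counts)

# Step 3: model comparison via AIC against a null model with rate fixed at 1.0
aic_fit = 2 * 1 - 2 * poisson_loglik(lam_hat, counts)   # one free parameter
aic_null = 2 * 0 - 2 * poisson_loglik(1.0, counts)      # zero free parameters
print(lam_hat, aic_fit < aic_null)
```

Step 4 (model evaluation) would then examine residuals or goodness of fit, which is omitted here for brevity.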
Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano
2015-01-01
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
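The ML-versus-SMM comparison can be sketched for a far simpler model than the paper's structural one. Everything below is an illustrative assumption: Bernoulli "choice" data, the sample size, and a grid search standing in for SMM's moment-matching optimization.

```python
import random

random.seed(0)

# Hypothetical binary choice data with true probability p = 0.6
true_p = 0.6
data = [1 if random.random() < true_p else 0 for _ in range(5000)]
obs_mean = sum(data) / len(data)

# ML estimate: for Bernoulli data the MLE is simply the sample mean
p_ml = obs_mean

# SMM estimate: pick the p whose *simulated* mean best matches the observed mean
def simulated_moment(p, n=5000, seed=1):
    rng = random.Random(seed)  # common random numbers across candidate p values
    return sum(1 if rng.random() < p else 0 for _ in range(n)) / n

grid = [i / 100 for i in range(1, 100)]
p_smm = min(grid, key=lambda p: (simulated_moment(p) - obs_mean) ** 2)
print(p_ml, p_smm)
```

Both estimators land near the true value here; the paper's point is how this comparison plays out for a realistic dynamic model and different SMM tuning choices.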
MAXIMUM LIKELIHOOD ESTIMATION FOR SOCIAL NETWORK DYNAMICS
Snijders, Tom A.B.; Koskinen, Johan; Schweinberger, Michael
2014-01-01
A model for network panel data is discussed, based on the assumption that the observed data are discrete observations of a continuous-time Markov process on the space of all directed graphs on a given node set, in which changes in tie variables are independent conditional on the current graph. The model for tie changes is parametric and designed for applications to social network analysis, where the network dynamics can be interpreted as being generated by choices made by the social actors represented by the nodes of the graph. An algorithm for calculating the Maximum Likelihood estimator is presented, based on data augmentation and stochastic approximation. An application to an evolving friendship network is given and a small simulation study is presented which suggests that for small data sets the Maximum Likelihood estimator is more efficient than the earlier proposed Method of Moments estimator. PMID:25419259
Kubo, Taichi
2008-02-01
We have measured the top quark mass with the dynamical likelihood method. The data, corresponding to an integrated luminosity of 1.7 fb^{-1}, were collected in proton-antiproton collisions at a center-of-mass energy of 1.96 TeV with the CDF detector at the Fermilab Tevatron during the period March 2002-March 2007. We select $t\bar{t}$ pair production candidates by requiring one high-energy lepton and four jets, at least one of which must be tagged as a b-jet. To reconstruct the top quark mass, we use the dynamical likelihood method, based on the maximum likelihood method, in which the likelihood is defined as the differential cross section multiplied by the transfer function from observed quantities to parton quantities, as a function of the top quark mass and the jet energy scale (JES). With this method, we measure the top quark mass to be 171.6 ± 2.0 (stat.+JES) ± 1.3 (syst.) = 171.6 ± 2.4 GeV/c^{2}.
Yorita, Kohei
2005-03-01
We have measured the top quark mass with the dynamical likelihood method (DLM) using the CDF II detector at the Fermilab Tevatron. The Tevatron produces top and anti-top pairs in pp collisions at a center of mass energy of 1.96 TeV. The data sample used in this paper was accumulated from March 2002 through August 2003 which corresponds to an integrated luminosity of 162 pb^{-1}.
Likelihood methods for cluster dark energy surveys
Hu, Wayne; Cohn, J. D.
2006-03-15
Galaxy cluster counts at high redshift, binned into spatial pixels and binned into ranges in an observable proxy for mass, contain a wealth of information on both the dark energy equation of state and the mass selection function required to extract it. The likelihood of the number counts follows a Poisson distribution whose mean fluctuates with the large-scale structure of the universe. We develop a joint likelihood method that accounts for these distributions. Maximization of the likelihood over a theoretical model that includes both the cosmology and the observable-mass relations allows for a joint extraction of dark energy and cluster structural parameters.
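The core of such an analysis, maximizing a joint Poisson likelihood over binned counts, can be sketched in a few lines. This toy version ignores the large-scale-structure fluctuations the paper models, and the counts, exposures, and one-parameter mean model are invented.

```python
import math

# Hypothetical cluster counts in spatial pixels, with model mean mu = A * exposure
counts = [12, 7, 15, 9, 11]
exposure = [1.0, 0.6, 1.3, 0.8, 1.0]

def log_likelihood(A):
    # Joint Poisson log-likelihood summed over independent pixels
    ll = 0.0
    for n, e in zip(counts, exposure):
        mu = A * e
        ll += n * math.log(mu) - mu - math.lgamma(n + 1)
    return ll

# Maximize over the amplitude A on a grid; the concave likelihood has a unique peak
grid = [a / 100 for a in range(500, 2001)]
A_hat = max(grid, key=log_likelihood)
print(A_hat)
```

The analytic Poisson MLE for this model is sum(counts)/sum(exposure), so the grid search is only for illustration of the maximization step.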
Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.
ERIC Educational Resources Information Center
Butler, Ronald W.
The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
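A minimal one-dimensional Kalman filter illustrates the recursive estimation the report builds on. This is the generic textbook recursion with hand-picked noise variances for a constant-level model, not the report's adaptive likelihood method.

```python
# Minimal 1-D Kalman filter: track a constant level observed with noise
def kalman_filter(zs, q=0.01, r=0.5, x0=0.0, p0=1.0):
    x, p = x0, p0                 # state estimate and its variance
    estimates = []
    for z in zs:
        p = p + q                 # predict: variance grows by process noise q
        k = p / (p + r)           # Kalman gain from predicted and obs. variance
        x = x + k * (z - x)       # update state with the innovation z - x
        p = (1 - k) * p           # updated (reduced) state variance
        estimates.append(x)
    return estimates

# Hypothetical noisy observations of a constant level near 5.0
zs = [4.8, 5.3, 4.9, 5.2, 5.1, 4.7, 5.0, 5.2]
est = kalman_filter(zs)
print(est[-1])
```

Adaptive filtering, the report's subject, would additionally estimate q and r from the data by likelihood methods rather than fixing them.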
Synthesizing regression results: a factored likelihood method.
Wu, Meng-Jia; Becker, Betsy Jane
2013-06-01
Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported in the regression studies to calculate synthesized standardized slopes. It uses available correlations to estimate missing ones through a series of regressions, allowing us to synthesize correlations among variables as if each included study contained all the same variables. Great accuracy and stability of this method under fixed-effects models were found through Monte Carlo simulation. An example was provided to demonstrate the steps for calculating the synthesized slopes through sweep operators. By rearranging the predictors in the included regression models or omitting a relatively small number of correlations from those models, we can easily apply the factored likelihood method to many situations involving synthesis of linear models. Limitations and other possible methods for synthesizing more complicated models are discussed. Copyright © 2012 John Wiley & Sons, Ltd. PMID:26053653
Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions
Barrett, Harrison H.; Dainty, Christopher; Lara, David
2008-01-01
Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255
A Generalized, Likelihood-Free Method for Posterior Estimation
Turner, Brandon M.; Sederberg, Per B.
2014-01-01
Recent advancements in Bayesian modeling have allowed for likelihood-free posterior estimation. Such estimation techniques are crucial to the understanding of simulation-based models, whose likelihood functions may be difficult or even impossible to derive. However, current approaches are limited by their dependence on sufficient statistics and/or tolerance thresholds. In this article, we provide a new approach that requires no summary statistics, error terms, or thresholds, and is generalizable to all models in psychology that can be simulated. We use our algorithm to fit a variety of cognitive models with known likelihood functions to ensure the accuracy of our approach. We then apply our method to two real-world examples to illustrate the types of complex problems our method solves. In the first example, we fit an error-correcting criterion model of signal detection, whose criterion dynamically adjusts after every trial. We then fit two models of choice response time to experimental data: the Linear Ballistic Accumulator model, which has a known likelihood, and the Leaky Competing Accumulator model whose likelihood is intractable. The estimated posterior distributions of the two models allow for direct parameter interpretation and model comparison by means of conventional Bayesian statistics – a feat that was not previously possible. PMID:24258272
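For contrast with the threshold-free approach the authors propose, classic rejection ABC (which does rely on a summary statistic and a tolerance) can be sketched in a few lines. The Bernoulli data, uniform prior, and tolerance below are all arbitrary choices for illustration.

```python
import random

# Hypothetical observed data from a Bernoulli process
observed = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
obs_mean = sum(observed) / len(observed)

def simulate(p, n, rng):
    # Simulate n Bernoulli(p) trials and return the sample mean (summary statistic)
    return sum(1 if rng.random() < p else 0 for _ in range(n)) / n

# Rejection ABC: keep prior draws whose simulated summary is close to the observed one
rng = random.Random(42)
posterior = []
while len(posterior) < 500:
    p = rng.random()                                   # uniform prior on [0, 1]
    if abs(simulate(p, len(observed), rng) - obs_mean) <= 0.15:
        posterior.append(p)

post_mean = sum(posterior) / len(posterior)
print(post_mean)
```

The dependence on the summary statistic (the mean) and the tolerance (0.15) is exactly the limitation the paper's method is designed to remove.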
Abulencia, A.; Acosta, D.; Adelman, Jahred A.; Affolder, Anthony A.; Akimoto, T.; Albrow, M.G.; Ambrose, D.; Amerio, S.; Amidei, D.; Anastassov, A.; Anikeev, K.; et al.
2005-12-01
This report describes a measurement of the top quark mass, M_top, with the dynamical likelihood method (DLM) using the CDF II detector at the Fermilab Tevatron. The Tevatron produces top/anti-top (t-tbar) pairs in p-pbar collisions at a center-of-mass energy of 1.96 TeV. The data sample used in this analysis was accumulated from March 2002 through August 2004, which corresponds to an integrated luminosity of 318 pb^{-1}. They use the t-tbar candidates in the "lepton+jets" decay channel, requiring at least one jet identified as a b quark by finding a displaced secondary vertex. The DLM defines a likelihood for each event based on the differential cross section as a function of M_top per unit phase space volume of the final partons, multiplied by the transfer functions from jet to parton energies. The method takes into account all possible jet combinations in an event, and the event likelihoods are multiplied together to derive the top quark mass by the maximum likelihood method. Using 63 t-tbar candidates observed in the data, with 9.2 events expected from background, they measure the top quark mass to be 173.2 +2.6/-2.4 (stat.) ± 3.2 (syst.) GeV/c^{2}, or 173.2 +4.1/-4.0 GeV/c^{2}.
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
ERIC Educational Resources Information Center
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics
NASA Astrophysics Data System (ADS)
Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc
2016-03-01
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
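The basic likelihood ratio (score-function) sensitivity estimator, in its uncentered textbook form rather than the centered covariance formulation proposed here, looks like this for a single Poisson random variable. The observable, rate, and sample size are chosen arbitrarily.

```python
import math
import random

# Likelihood-ratio sensitivity estimator:
#   d/d(lam) E[f(X)] = E[ f(X) * d log p(X; lam) / d(lam) ],  X ~ Poisson(lam)
lam = 4.0

def sample_poisson(lam, rng):
    # Knuth's multiplication method for sampling a Poisson variate
    threshold = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod < threshold:
            return k
        k += 1

def f(x):
    return x * x  # observable whose sensitivity to lam we estimate

rng = random.Random(7)
n = 200_000
total = 0.0
for _ in range(n):
    x = sample_poisson(lam, rng)
    total += f(x) * (x / lam - 1.0)  # score of the Poisson log-density
est = total / n

# Analytic check: E[X^2] = lam^2 + lam, so the true derivative is 2*lam + 1 = 9
print(est)
```

The paper's centered estimators exist precisely because this plain form can have variance that grows in time for stochastic dynamics; the single-variable case above hides that issue.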
Spatially explicit maximum likelihood methods for capture-recapture studies.
Borchers, D L; Efford, M G
2008-06-01
Live-trapping capture-recapture studies of animal populations with fixed trap locations inevitably have a spatial component: animals close to traps are more likely to be caught than those far away. This is not addressed in conventional closed-population estimates of abundance and without the spatial component, rigorous estimates of density cannot be obtained. We propose new, flexible capture-recapture models that use the capture locations to estimate animal locations and spatially referenced capture probability. The models are likelihood-based and hence allow use of Akaike's information criterion or other likelihood-based methods of model selection. Density is an explicit parameter, and the evaluation of its dependence on spatial or temporal covariates is therefore straightforward. Additional (nonspatial) variation in capture probability may be modeled as in conventional capture-recapture. The method is tested by simulation, using a model in which capture probability depends only on location relative to traps. Point estimators are found to be unbiased and standard error estimators almost unbiased. The method is used to estimate the density of Red-eyed Vireos (Vireo olivaceus) from mist-netting data from the Patuxent Research Refuge, Maryland, U.S.A. Estimates agree well with those from an existing spatially explicit method based on inverse prediction. A variety of additional spatially explicit models are fitted; these include models with temporal stratification, behavioral response, and heterogeneous animal home ranges. PMID:17970815
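The core idea, capture probability declining with distance from an animal's activity center, can be sketched with a half-normal detection function and a grid search over candidate centers. The trap layout, capture history, and detection parameters below are all invented, and a real analysis would maximize over detection parameters and density as well.

```python
import math

# Hypothetical 2x2 trap grid and one animal's capture history (1 = caught there)
traps = [(0, 0), (0, 2), (2, 0), (2, 2)]
history = [1, 0, 1, 0]

def p_capture(center, trap, g0=0.8, sigma=1.0):
    # Half-normal detection: capture probability decays with distance to the trap
    d2 = (center[0] - trap[0]) ** 2 + (center[1] - trap[1]) ** 2
    return g0 * math.exp(-d2 / (2 * sigma ** 2))

def log_lik(center):
    # Bernoulli likelihood of the capture history given a candidate activity center
    ll = 0.0
    for trap, caught in zip(traps, history):
        p = p_capture(center, trap)
        ll += math.log(p) if caught else math.log(1 - p)
    return ll

# Grid search over candidate activity centers
grid = [(x / 10, y / 10) for x in range(-10, 31) for y in range(-10, 31)]
center_hat = max(grid, key=log_lik)
print(center_hat)
```

The estimated center lands between the two traps where the animal was caught, pulled away from the traps where it was not, which is the spatial signal the paper's models exploit.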
Constrained maximum likelihood modal parameter identification applied to structural dynamics
NASA Astrophysics Data System (ADS)
El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim
2016-05-01
A new modal parameter estimation method that directly establishes modal models of structural dynamic systems satisfying two physically motivated constraints is presented. The constraints imposed on the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (reciprocity) comes from the fact that modal analysis theory shows the FRF matrix, and therefore the residue matrices, to be symmetric for non-gyroscopic, non-circulatory, passive mechanical systems. In other words, such systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped; normal (real) modes are therefore needed for comparison with these analytical models. The work in this paper further develops a recently introduced modal parameter identification method, ML-MM, to establish modal models that satisfy such constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars, a type of data still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.
Comparisons of likelihood and machine learning methods of individual classification
Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.
2002-01-01
“Assignment tests” are designed to determine population membership for individuals. One particular application based on a likelihood estimate (LE) was introduced by Paetkau et al. (1995; see also Vásquez-Domínguez et al. 2001) to assign an individual to its population of origin on the basis of its multilocus genotype and the expectations of observing this genotype in each potential source population. The LE approach can be implemented statistically in a Bayesian framework as a convenient way to evaluate hypotheses of plausible genealogical relationships (e.g., that an individual possesses an ancestor in another population) (Dawson and Belkhir 2001; Pritchard et al. 2000; Rannala and Mountain 1997). Other studies have evaluated the confidence of the assignment (Almudevar 2000) and the characteristics of genotypic data (e.g., degree of population divergence, number of loci, number of individuals, number of alleles) that lead to greater population assignment (Bernatchez and Duchesne 2000; Cornuet et al. 1999; Haig et al. 1997; Shriver et al. 1997; Smouse and Chevillon 1998). The main statistical and conceptual differences between methods leading to the use of an assignment test are given in, for example, Cornuet et al. (1999) and Rosenberg et al. (2001). However…
A composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews
Liu, Yulun; Ning, Jing; Nie, Lei; Zhu, Hongjian; Chu, Haitao
2014-01-01
Diagnostic systematic review is a vital step in the evaluation of diagnostic technologies. In many applications, it involves pooling pairs of sensitivity and specificity of a dichotomized diagnostic test from multiple studies. We propose a composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews. This method provides an alternative way to make inference on diagnostic measures such as sensitivity, specificity, likelihood ratios and diagnostic odds ratio. Its main advantages over the standard likelihood method are the avoidance of the non-convergence problem, which is non-trivial when the number of studies is relatively small, the computational simplicity, and some robustness to model mis-specifications. Simulation studies show that the composite likelihood method maintains high relative efficiency compared to that of the standard likelihood method. We illustrate our method in a diagnostic review of the performance of contemporary diagnostic imaging technologies for detecting metastases in patients with melanoma. PMID:25512146
Using continuous DNA interpretation methods to revisit likelihood ratio behaviour.
Taylor, Duncan
2014-07-01
Continuous DNA interpretation systems make use of more information from DNA profiles than analysts have previously been able to with binary, threshold based systems. With these new continuous DNA interpretation systems and a new, more powerful, DNA profiling kit (GlobalFiler) there is an opportunity to re-examine the behaviour of a commonly used statistic in forensic science, the likelihood ratio (LR). The theoretical behaviour of the LR has been known for some time, although in many instances the behaviour has not been able to be thoroughly demonstrated due to limitations of the biological and mathematical models being used. In this paper the effects of profile complexity, replicate amplifications, assuming contributors, adding incorrect information, and adding irrelevant information to the calculation of the LR are explored. The empirical results are compared to theoretical expectations and explained. The work finishes with the results being used to dispel common misconceptions around reliability, accuracy, informativeness and reproducibility. PMID:24727432
Yang, Shuying; De Angelis, Daniela
2013-01-01
The maximum likelihood method is a popular statistical inferential procedure widely used in many areas to obtain the estimates of the unknown parameters of a population of interest. This chapter gives a brief description of the important concepts underlying the maximum likelihood method, the definition of the key components, the basic theory of the method, and the properties of the resulting estimates. Confidence interval and likelihood ratio test are also introduced. Finally, a few examples of applications are given to illustrate how to derive maximum likelihood estimates in practice. A list of references to relevant papers and software for a further understanding of the method and its implementation is provided.
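The concepts this chapter lists (MLE, confidence interval, likelihood ratio test) can be illustrated for the simplest case, a normal mean with known standard deviation. The data below are made up.

```python
import math

# Hypothetical sample; the MLE of a normal mean (sigma known) is the sample mean
data = [4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.7, 5.0]
sigma = 0.5
n = len(data)
mu_hat = sum(data) / n

# 95% Wald confidence interval: mu_hat +/- 1.96 * sigma / sqrt(n)
half = 1.96 * sigma / math.sqrt(n)
ci = (mu_hat - half, mu_hat + half)

def loglik(mu):
    # Normal log-likelihood with known sigma
    return sum(-0.5 * ((x - mu) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi)) for x in data)

# Likelihood ratio test of H0: mu = 4.0; 2*(LL(mu_hat) - LL(mu0)) ~ chi2(1) under H0
lrt = 2 * (loglik(mu_hat) - loglik(4.0))
print(mu_hat, ci, lrt > 3.84)
```

For this model the LRT statistic reduces analytically to n*(mu_hat - mu0)^2 / sigma^2, a useful check on the numeric value.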
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that is recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of their accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improves predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
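A toy version of thermodynamic integration can make the path-sampling idea concrete. Here the parameter is discrete so the power-posterior expectations can be computed by direct summation instead of Markov chain Monte Carlo; the model and data are invented.

```python
import math

# Toy model: theta has a discrete uniform prior; the likelihood is Bernoulli
thetas = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = 1.0 / len(thetas)
successes, trials = 7, 10

def loglik(theta):
    return successes * math.log(theta) + (trials - successes) * math.log(1 - theta)

def expected_loglik(t):
    # Expectation of log L under the power posterior p_t proportional to prior * L^t
    weights = [prior * math.exp(t * loglik(th)) for th in thetas]
    z = sum(weights)
    return sum(w / z * loglik(th) for w, th in zip(weights, thetas))

# Thermodynamic integration: log Z = integral over t in [0,1] of E_t[log L];
# approximate the integral with the trapezoid rule along the prior-to-posterior path
ts = [i / 100 for i in range(101)]
log_z_ti = sum((expected_loglik(ts[i]) + expected_loglik(ts[i + 1])) / 2 * 0.01
               for i in range(100))

# Exact marginal likelihood for comparison (possible because theta is discrete)
log_z_exact = math.log(sum(prior * math.exp(loglik(th)) for th in thetas))
print(log_z_ti, log_z_exact)
```

In the environmental-modeling setting of the paper, each expected_loglik(t) would instead be estimated by an MCMC run targeting the power posterior at that t.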
An Empirical Likelihood Method for Semiparametric Linear Regression with Right Censored Data
Fang, Kai-Tai; Li, Gang; Lu, Xuyang; Qin, Hong
2013-01-01
This paper develops a new empirical likelihood method for semiparametric linear regression with a completely unknown error distribution and right censored survival data. The method is based on the Buckley-James (1979) estimating equation. It inherits some appealing properties of the complete data empirical likelihood method. For example, it does not require variance estimation which is problematic for the Buckley-James estimator. We also extend our method to incorporate auxiliary information. We compare our method with the synthetic data empirical likelihood of Li and Wang (2003) using simulations. We also illustrate our method using Stanford heart transplantation data. PMID:23573169
NASA Astrophysics Data System (ADS)
Llacer, Jorge; Solberg, Timothy D.; Promberger, Claus
2001-10-01
This paper presents a description of tests carried out to compare the behaviour of five algorithms in inverse radiation therapy planning: (1) The Dynamically Penalized Likelihood (DPL), an algorithm based on statistical estimation theory; (2) an accelerated version of the same algorithm; (3) a new fast adaptive simulated annealing (ASA) algorithm; (4) a conjugate gradient method; and (5) a Newton gradient method. A three-dimensional mathematical phantom and two clinical cases have been studied in detail. The phantom consisted of a U-shaped tumour with a partially enclosed 'spinal cord'. The clinical examples were a cavernous sinus meningioma and a prostate case. The algorithms have been tested in carefully selected and controlled conditions so as to ensure fairness in the assessment of results. It has been found that all five methods can yield relatively similar optimizations, except when a very demanding optimization is carried out. For the easier cases, the differences are principally in robustness, ease of use and optimization speed. In the more demanding case, there are significant differences in the resulting dose distributions. The accelerated DPL emerges as possibly the algorithm of choice for clinical practice. An appendix describes the differences in behaviour between the new ASA method and the one based on a patent by the Nomos Corporation.
A SIMPLE LIKELIHOOD METHOD FOR QUASAR TARGET SELECTION
Kirkpatrick, Jessica A.; Schlegel, David J.; Ross, Nicholas P.; Myers, Adam D.; Hennawi, Joseph F.; Sheldon, Erin S.; Schneider, Donald P.; Weaver, Benjamin A.
2011-12-20
We present a new method for quasar target selection using photometric fluxes and a Bayesian probabilistic approach. For our purposes, we target quasars using Sloan Digital Sky Survey (SDSS) photometry to a magnitude limit of g = 22. The efficiency and completeness of this technique are measured using the Baryon Oscillation Spectroscopic Survey (BOSS) data taken in 2010. This technique was used for the uniformly selected (CORE) sample of targets in BOSS year-one spectroscopy to be realized in the ninth SDSS data release. When targeting at a density of 40 objects deg^{-2} (the BOSS quasar targeting density), the efficiency of this technique in recovering z > 2.2 quasars is 40%. The completeness compared to all quasars identified in BOSS data is 65%. This paper also describes possible extensions and improvements for this technique.
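A stripped-down version of such a Bayesian target-selection rule can be sketched with a single color, Gaussian class likelihoods, and invented population parameters; the paper itself works with multi-band fluxes and empirically estimated densities.

```python
import math

def gauss(x, mu, sig):
    # Gaussian density used as a stand-in for each class's color distribution
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2 * math.pi))

def quasar_probability(color, prior_q=0.05):
    # Posterior probability of "quasar" given one color, via Bayes' rule;
    # class means, widths, and the prior are all assumed for illustration
    like_q = gauss(color, 0.2, 0.3)   # quasars assumed bluer on average
    like_s = gauss(color, 1.0, 0.4)   # stars/contaminants
    num = prior_q * like_q
    return num / (num + (1 - prior_q) * like_s)

# Select candidates whose posterior quasar probability exceeds 0.5
candidates = [c / 10 for c in range(-5, 16)]
selected = [c for c in candidates if quasar_probability(c) > 0.5]
print(selected)
```

Note how the small prior (quasars are rare) forces the likelihood ratio to be large before an object is selected, which is the mechanism controlling the efficiency/completeness trade-off quoted in the abstract.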
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes
NASA Astrophysics Data System (ADS)
Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen
2016-06-01
Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique behind LBS is map building for unknown environments, also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV), with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan against the occupancy likelihood map learned from all previous scans at multiple scales, to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, scans are unavoidably distorted to varying extents. To compensate for the scan distortion caused by the motion of the UGV, we propose to integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. In addition, to reduce the effect of dynamic objects, such as walking pedestrians, that often appear in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.
NASA Astrophysics Data System (ADS)
Fu, Qiang; Luk, Wai-Shing; Tao, Jun; Zeng, Xuan; Cai, Wei
In this paper, a novel intra-die spatial correlation extraction method referred to as MLEMTC (Maximum Likelihood Estimation for Multiple Test Chips) is presented. In the MLEMTC method, a joint likelihood function is formulated by multiplying the set of individual likelihood functions for all test chips. This joint likelihood function is then maximized to extract a unique group of parameter values of a single spatial correlation function, which can be used for statistical circuit analysis and design. Moreover, to deal with the purely random component and measurement error contained in measurement data, the spatial correlation function combined with the correlation of white noise is used in the extraction, which significantly improves the accuracy of the extraction results. Furthermore, an LU decomposition based technique is developed to calculate the log-determinant of the positive definite matrix within the likelihood function, which solves the numerical stability problem encountered in the direct calculation. Experimental results have shown that the proposed method is efficient and practical.
Evaluation of Dynamic Coastal Response to Sea-level Rise Modifies Inundation Likelihood
NASA Technical Reports Server (NTRS)
Lentz, Erika E.; Thieler, E. Robert; Plant, Nathaniel G.; Stippa, Sawyer R.; Horton, Radley M.; Gesch, Dean B.
2016-01-01
Sea-level rise (SLR) poses a range of threats to natural and built environments, making assessments of SLR-induced hazards essential for informed decision making. We develop a probabilistic model that evaluates the likelihood that an area will inundate (flood) or dynamically respond (adapt) to SLR. The broad-area applicability of the approach is demonstrated by producing 30x30m resolution predictions for more than 38,000 sq km of diverse coastal landscape in the northeastern United States. Probabilistic SLR projections, coastal elevation and vertical land movement are used to estimate likely future inundation levels. Then, conditioned on future inundation levels and the current land-cover type, we evaluate the likelihood of dynamic response versus inundation. We find that nearly 70% of this coastal landscape has some capacity to respond dynamically to SLR, and we show that inundation models over-predict land likely to submerge. This approach is well suited to guiding coastal resource management decisions that weigh future SLR impacts and uncertainty against ecological targets and economic constraints.
Simulated likelihood methods for complex double-platform line transect surveys.
Schweder, T; Skaug, H J; Langaas, M; Dimakos, X K
1999-09-01
The conventional line transect approach of estimating effective search width from the perpendicular distance distribution is inappropriate in certain types of surveys, e.g., when an unknown fraction of the animals on the track line is detected, the animals can be observed only at discrete points in time, there are errors in positional measurements, and covariate heterogeneity exists in detectability. For such situations a hazard probability framework for independent observer surveys is developed. The likelihood of the data, including observed positions of both initial and subsequent observations of animals, is established under the assumption of no measurement errors. To account for measurement errors and possibly other complexities, this likelihood is modified by a function estimated from extensive simulations. This general method of simulated likelihood is explained and the methodology applied to data from a double-platform survey of minke whales in the northeastern Atlantic in 1995. PMID:11314993
Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods
ERIC Educational Resources Information Center
Zhong, Xiaoling; Yuan, Ke-Hai
2011-01-01
In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…
Xia, Xuhua
2016-09-01
While pairwise sequence alignment (PSA) by dynamic programming is guaranteed to generate one of the optimal alignments, multiple sequence alignment (MSA) of highly divergent sequences often results in poorly aligned sequences, plaguing all subsequent phylogenetic analysis. One way to avoid this problem is to use only PSA to reconstruct phylogenetic trees, which can only be done with distance-based methods. I compared the accuracy of this new computational approach (named PhyPA for phylogenetics by pairwise alignment) against the maximum likelihood method using MSA (the ML+MSA approach), based on nucleotide, amino acid and codon sequences simulated with different topologies and tree lengths. I present a surprising discovery that the fast PhyPA method consistently outperforms the slow ML+MSA approach for highly diverged sequences even when all optimization options were turned on for the ML+MSA approach. Only when sequences are not highly diverged (i.e., when a reliable MSA can be obtained) does the ML+MSA approach outperform PhyPA. The true topologies are always recovered by ML with the true alignment from the simulation. However, with MSA derived from alignment programs such as MAFFT or MUSCLE, the recovered topology consistently has higher likelihood than that for the true topology. Thus, the failure to recover the true topology by the ML+MSA approach is due not to insufficient search of tree space, but to the distortion of phylogenetic signal by MSA methods. I have implemented in DAMBE both PhyPA and two approaches that make use of multi-gene data sets to derive phylogenetic support for subtrees, equivalent to resampling techniques such as bootstrapping and jackknifing.
Kadengye, Damazo T; Cools, Wilfried; Ceulemans, Eva; Van den Noortgate, Wim
2012-06-01
Missing data, such as item responses in multilevel data, are ubiquitous in educational research settings. Researchers in the item response theory (IRT) context have shown that ignoring such missing data can create problems in the estimation of the IRT model parameters. Consequently, several imputation methods for dealing with missing item data have been proposed and shown to be effective when applied with traditional IRT models. Additionally, a nonimputation direct likelihood analysis has been shown to be an effective tool for handling missing observations in clustered data settings. This study investigates the performance of six simple imputation methods, which have been found to be useful in other IRT contexts, versus a direct likelihood analysis, in multilevel data from educational settings. Multilevel item response data were simulated on the basis of two empirical data sets, and some of the item scores were deleted, such that they were missing either completely at random or at random. An explanatory IRT model was used for modeling the complete, incomplete, and imputed data sets. We showed that direct likelihood analysis of the incomplete data sets produced unbiased parameter estimates that were comparable to those from a complete data analysis. Multiple-imputation approaches of the two-way mean and corrected item mean substitution methods displayed varying degrees of effectiveness in imputing data that in turn could produce unbiased parameter estimates. The simple random imputation, adjusted random imputation, item means substitution, and regression imputation methods seemed to be less effective in imputing missing item scores in multilevel data settings.
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
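The equivalence this abstract establishes rests on the Poisson log-likelihood of a linear-nonlinear-Poisson (LNP) model. A minimal sketch of that objective, assuming an exponential nonlinearity and dropping the constant terms that do not depend on the filter (the function and argument names are illustrative):

```python
import numpy as np

def lnp_loglik(w, stim, spikes, dt=1.0, nonlin=np.exp):
    """Log-likelihood of binned spike counts under an LNP model.

    stim: (T, d) stimulus matrix; w: length-d linear filter;
    spikes: (T,) spike counts per bin. The conditional intensity is
    nonlin(stim @ w); terms constant in w (e.g. log k!) are dropped.
    """
    rate = nonlin(stim @ w) * dt              # expected count per bin
    return float(np.sum(spikes * np.log(rate) - rate))
```

Maximizing this over `w` is, per the abstract, what MID effectively does, and normalizing the maximized value recovers the empirical single-spike information under the Poisson assumption.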
Maximum-likelihood methods in cryo-EM. Part II: application to experimental data
Scheres, Sjors H.W.
2010-01-01
With the advent of computationally feasible approaches to maximum likelihood image processing for cryo-electron microscopy, these methods have proven particularly useful in the classification of structurally heterogeneous single-particle data. A growing number of experimental studies have applied these algorithms to study macromolecular complexes with a wide range of structural variability, including non-stoichiometric complex formation, large conformational changes and combinations of both. This chapter aims to share the practical experience that has been gained from the application of these novel approaches. Current insights on how to prepare the data and how to perform two- or three-dimensional classifications are discussed together with aspects related to high-performance computing. Thereby, this chapter will hopefully be of practical use for those microscopists wanting to apply maximum likelihood methods in their own investigations. PMID:20888966
Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors
Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.
2009-01-01
In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
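The indirect route described above ends in a pixel-by-pixel Patlak fit. For a single time-activity curve the Patlak model says C_T(t)/C_p(t) is linear in ∫C_p dτ / C_p(t) after equilibrium, with slope Ki (the influx constant) and an intercept. A sketch of that fit by ordinary least squares, under the assumption of noiseless trapezoid-integrated input (names are illustrative, not from the paper):

```python
import numpy as np

def patlak(ct, cp, t, t_star_idx):
    """Patlak analysis of one time-activity curve.

    ct: tissue TAC; cp: plasma input function; t: sample times;
    t_star_idx: index after which the Patlak plot is assumed linear.
    Returns (Ki, intercept) from an ordinary least-squares line fit.
    """
    # running integral of cp by the trapezoid rule
    integ = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    x = (integ / cp)[t_star_idx:]      # "stretched time" axis
    y = (ct / cp)[t_star_idx:]         # normalized tissue activity
    ki, intercept = np.polyfit(x, y, 1)
    return ki, intercept
```

The direct method in the abstract instead pushes this linear model through the reconstruction, estimating Ki from the sinogram data rather than from reconstructed TACs.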
Comparison of ROC-based and likelihood methods for fingerprint verification
NASA Astrophysics Data System (ADS)
Srinivasan, Harish; Srihari, Sargur N.; Beal, Matthew J.; Phatak, Prasad; Fang, Gang
2006-04-01
The fingerprint verification task answers the question of whether or not two fingerprints belong to the same finger. The paper focuses on the classification aspect of fingerprint verification. Classification is the third and final step after the two earlier steps of feature extraction, where a known set of features (minutiae points) have been extracted from each fingerprint, and scoring, where a matcher has determined a degree of match between the two sets of features. Since this is a binary classification problem involving a single variable, the commonly used threshold method is related to the so-called receiver operating characteristics (ROC). In the ROC approach the optimal threshold on the score is determined so as to determine match or non-match. Such a method works well when there is a well-registered fingerprint image. On the other hand, more sophisticated methods are needed when there exists only a partial imprint of a finger, as in the case of latent prints in forensics or due to limitations of the biometric device. In such situations it is useful to consider classification methods based on computing the likelihood ratio of match/non-match. Such methods are commonly used in some biometric and forensic domains such as speaker verification, where there is a much higher degree of uncertainty. This paper compares the two approaches empirically for the fingerprint classification task when the number of available minutiae is varied. In both ROC-based and likelihood ratio methods, learning is from a general population of ensemble of pairs, each of which is labeled as being from the same finger or from different fingers. In the ROC-based method the best operating point is derived from the ROC curve. In the likelihood method the distributions of same finger and different finger scores are modeled using Gaussian and Gamma distributions. The performances of the two methods are compared for varying numbers of minutiae points available. Results show that the
Method and apparatus for implementing a traceback maximum-likelihood decoder in a hypercube network
NASA Technical Reports Server (NTRS)
Pollara-Bozzola, Fabrizio (Inventor)
1989-01-01
A method and a structure to implement maximum-likelihood decoding of convolutional codes on a network of microprocessors interconnected as an n-dimensional cube (hypercube). By proper reordering of states in the decoder, only communication between adjacent processors is required. Communication time is limited to that required for communication only of the accumulated metrics and not the survivor parameters of a Viterbi decoding algorithm. The survivor parameters are stored at a local processor's memory and a trace-back method is employed to ascertain the decoding result. Faster and more efficient operation is enabled, and decoding of large constraint length codes is feasible using standard VLSI technology.
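The decoding organization described here, propagating accumulated metrics forward while survivor parameters are stored locally and unwound afterwards, is the standard Viterbi traceback. A generic sketch in log-probability form (the hypercube state-reordering and message-passing are not modeled; array names are illustrative):

```python
import numpy as np

def viterbi(obs_loglik, log_trans, log_init):
    """Maximum-likelihood state sequence via Viterbi with traceback.

    obs_loglik: (T, S) per-step log-likelihoods of each state;
    log_trans: (S, S) log transition matrix (from, to);
    log_init: (S,) log initial-state distribution.
    Only accumulated metrics are carried forward; survivors (back
    pointers) are stored and unwound at the end, as in the abstract.
    """
    T, S = obs_loglik.shape
    metric = log_init + obs_loglik[0]
    survivors = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = metric[:, None] + log_trans               # candidate metrics (from, to)
        survivors[t] = np.argmax(cand, axis=0)           # best predecessor per state
        metric = cand[survivors[t], np.arange(S)] + obs_loglik[t]
    path = [int(np.argmax(metric))]                      # best terminal state
    for t in range(T - 1, 0, -1):                        # trace back
        path.append(int(survivors[t][path[-1]]))
    return path[::-1]
```

In the patented design, each hypercube processor holds a slice of `metric` and `survivors`; only the metric slice must cross processor boundaries each step.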
Fachet, Melanie; Flassig, Robert J; Rihko-Struckmann, Liisa; Sundmacher, Kai
2014-12-01
In this work, a photoautotrophic growth model incorporating light and nutrient effects on growth and pigmentation of Dunaliella salina was formulated. The model equations were taken from literature and modified according to the experimental setup with special emphasis on model reduction. The proposed model has been evaluated with experimental data of D. salina cultivated in a flat-plate photobioreactor under stressed and non-stressed conditions. Simulation results show that the model can represent the experimental data accurately. The identifiability of the model parameters was studied using the profile likelihood method. This analysis revealed that three model parameters are practically non-identifiable. However, some of these non-identifiabilities can be resolved by model reduction and additional measurements. As a conclusion, our results suggest that the proposed model equations result in a predictive growth model for D. salina.
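The profile likelihood check used in this abstract generalizes to any parametric model: fix one parameter on a grid, re-optimize the rest, and inspect the resulting profile; a flat profile signals practical non-identifiability. A generic sketch (the growth model itself is not reproduced; the interface is an assumption):

```python
import numpy as np
from scipy.optimize import minimize

def profile_likelihood(negloglik, mle, i, grid):
    """Profile of parameter i: re-optimize the other parameters at each
    fixed value of parameter i. negloglik maps a full parameter vector to
    the negative log-likelihood; mle is the joint optimum found beforehand.
    """
    free = [j for j in range(len(mle)) if j != i]
    prof = []
    for v in grid:
        def obj(free_vals):
            full = np.array(mle, float)
            full[free] = free_vals
            full[i] = v
            return negloglik(full)
        res = minimize(obj, np.array(mle, float)[free], method="Nelder-Mead")
        prof.append(res.fun)
    return np.array(prof)
```

A parameter is practically identifiable if its profile rises past a chi-square-based threshold on both sides of the optimum; the D. salina analysis found three parameters whose profiles stay flat.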
New method to compute Rcomplete enables maximum likelihood refinement for small datasets
Luebben, Jens; Gruene, Tim
2015-01-01
The crystallographic reliability index Rcomplete is based on a method proposed more than two decades ago. Because its calculation is computationally expensive, its use did not spread into the crystallographic community, which instead favored the cross-validation method known as Rfree. The importance of Rfree has grown beyond a pure validation tool. However, its application requires a sufficiently large dataset. In this work we assess the reliability of Rcomplete and we compare it with k-fold cross-validation, bootstrapping, and jackknifing. As opposed to proper cross-validation as realized with Rfree, Rcomplete relies on a method of reducing bias from the structural model. We compare two different methods of reducing model bias and question the widely spread notion that random parameter shifts are required for this purpose. We show that Rcomplete has as little statistical bias as Rfree, with the benefit of a much smaller variance. Because the calculation of Rcomplete is based on the entire dataset instead of a small subset, it allows the estimation of maximum likelihood parameters even for small datasets. Rcomplete enables maximum likelihood-based refinement to be extended to virtually all areas of crystallographic structure determination, including high-pressure studies, neutron diffraction studies, and datasets from free electron lasers. PMID:26150515
Maximum likelihood method for fitting the Fundamental Plane of the 6dF Galaxy Survey
NASA Astrophysics Data System (ADS)
Magoulas, C.; Colless, M.; Jones, D.; Springob, C.; Mould, J.
2010-04-01
We have used over 10,000 early-type galaxies from the 6dF Galaxy Survey (6dFGS) to construct the Fundamental Plane across the optical and near-infrared passbands. We demonstrate that a maximum likelihood fit to a multivariate Gaussian model for the distribution of galaxies in size, surface brightness and velocity dispersion can properly account for selection effects, censoring and observational errors, leading to precise and unbiased parameters for the Fundamental Plane and its intrinsic scatter. This method allows an accurate and robust determination of the dependencies of the Fundamental Plane on variations in the stellar populations and environment of early-type galaxies.
Maximum-likelihood methods for array processing based on time-frequency distributions
NASA Astrophysics Data System (ADS)
Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.
1999-11-01
This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for nonstationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multidimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
Nonparametric maximum likelihood estimation of probability densities by penalty function methods
NASA Technical Reports Server (NTRS)
Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.
1974-01-01
Unless it is known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation, which should avoid many of these difficulties, is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.
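The penalized-likelihood idea can be sketched on a discrete grid: maximize the sample log-likelihood minus a roughness penalty, with the density kept positive and normalized by construction. This is only a finite-difference stand-in for the reproducing-kernel machinery of the abstract (the squared second difference plays the role of a spline roughness penalty; all names and the penalty weight are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def penalized_mle_density(samples, grid, lam=5.0):
    """Maximum penalized likelihood density estimate on a fixed grid.

    theta parameterizes the log-density; subtracting the log normalizer
    keeps the estimate a proper density, and a squared second-difference
    penalty discourages rough solutions.
    """
    h = grid[1] - grid[0]
    idx = np.clip(np.searchsorted(grid, samples), 0, len(grid) - 1)

    def neg_objective(theta):
        log_norm = np.log(np.sum(np.exp(theta)) * h)
        loglik = np.sum(theta[idx] - log_norm)        # log-likelihood of the sample
        roughness = np.sum(np.diff(theta, 2) ** 2)    # discrete curvature penalty
        return -(loglik - lam * roughness)

    theta = minimize(neg_objective, np.zeros(len(grid)), method="L-BFGS-B").x
    return np.exp(theta) / (np.sum(np.exp(theta)) * h)
```

With `lam = 0` this reduces to the unstable unpenalized nonparametric problem; increasing `lam` trades fidelity for smoothness.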
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d(4)) for the basis change plus O(d(3)) for finding ρ where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d(3)) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
Yang, Shuying; Roger, James
2010-01-01
Pharmacokinetic (PK) data often contain concentration measurements below the quantification limit (BQL). While specific values cannot be assigned to these observations, nevertheless these observed BQL data are informative and generally known to be lower than the lower limit of quantification (LLQ). Setting BQLs as missing data violates the usual missing at random (MAR) assumption applied to the statistical methods, and therefore leads to biased or less precise parameter estimation. By definition, these data lie within the interval [0, LLQ], and can be considered as censored observations. Statistical methods that handle censored data, such as maximum likelihood and Bayesian methods, are thus useful in the modelling of such data sets. The main aim of this work was to investigate the impact of the amount of BQL observations on the bias and precision of parameter estimates in population PK models (non-linear mixed effects models in general) under maximum likelihood method as implemented in SAS and NONMEM, and a Bayesian approach using Markov chain Monte Carlo (MCMC) as applied in WinBUGS. A second aim was to compare these different methods in dealing with BQL or censored data in a practical situation. The evaluation was illustrated by simulation based on a simple PK model, where a number of data sets were simulated from a one-compartment first-order elimination PK model. Several quantification limits were applied to each of the simulated data to generate data sets with certain amounts of BQL data. The average percentage of BQL ranged from 25% to 75%. Their influence on the bias and precision of all population PK model parameters such as clearance and volume distribution under each estimation approach was explored and compared.
Determination of instrumentation errors from measured data using maximum likelihood method
NASA Technical Reports Server (NTRS)
Keskar, D. A.; Klein, V.
1980-01-01
The maximum likelihood method is used for estimation of unknown initial conditions, constant bias and scale factor errors in measured flight data. The model for the system to be identified consists of the airplane six-degree-of-freedom kinematic equations, and the output equations specifying the measured variables. The estimation problem is formulated in a general way and then, for practical use, simplified by ignoring the effect of process noise. The algorithm developed is first applied to computer generated data having different levels of process noise for the demonstration of the robustness of the method. Then the real flight data are analyzed and the results compared with those obtained by the extended Kalman filter algorithm.
Likelihood ratio meta-analysis: New motivation and approach for an old method.
Dormuth, Colin R; Filion, Kristian B; Platt, Robert W
2016-03-01
A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded in excluding the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed-effect and random-effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience.
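The "sum the LogLR functions" step above has a direct numerical translation when each study reports a normal-theory estimate and standard error: each study's log-likelihood is a quadratic in the effect size, and the pooled function is their sum. A fixed-effect sketch (illustrative names; the intrinsic-interval cutoff is left to the caller):

```python
import numpy as np

def pooled_loglr(grid, estimates, ses):
    """Combined log-likelihood-ratio function over a grid of effect sizes.

    Each study contributes a normal log-likelihood, up to a constant;
    summing across studies and subtracting the maximum gives the log LR
    of each grid value relative to the best-supported value.
    """
    total = np.zeros_like(grid, dtype=float)
    for est, se in zip(estimates, ses):
        total += -0.5 * ((grid - est) / se) ** 2   # per-study log-likelihood
    return total - total.max()                     # log LR vs the pooled optimum
```

The pooled point estimate is the grid value where the function peaks; an 'intrinsic' interval collects the values whose log LR stays above a chosen evidence threshold (e.g. -log 8).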
Accelerated molecular dynamics methods
Perez, Danny
2011-01-04
The molecular dynamics method, although extremely powerful for materials simulations, is limited to times scales of roughly one microsecond or less. On longer time scales, dynamical evolution typically consists of infrequent events, which are usually activated processes. This course is focused on understanding infrequent-event dynamics, on methods for characterizing infrequent-event mechanisms and rate constants, and on methods for simulating long time scales in infrequent-event systems, emphasizing the recently developed accelerated molecular dynamics methods (hyperdynamics, parallel replica dynamics, and temperature accelerated dynamics). Some familiarity with basic statistical mechanics and molecular dynamics methods will be assumed.
Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds
Conroy, M.J.; Morgan, B.J.T.; North, P.M.
1985-01-01
It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate this reporting rate by comparing recoveries of rings offering a monetary reward with recoveries of ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design, and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio. A FORTRAN program, available from the author, is used to perform the computations.
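The estimation and testing logic can be sketched with binomial likelihoods. The reporting-rate estimate below assumes, as is standard in reward-study designs (though not stated in the abstract), that reward rings are always reported; the likelihood-ratio statistic compares a constant-rate model against stratum-specific rates (all names illustrative):

```python
import math

def binom_loglik(k, n, p):
    """Binomial log-likelihood, dropping the constant binomial coefficient."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def reporting_rate(std_recov, std_ringed, reward_recov, reward_ringed):
    """ML reporting-rate estimate: ratio of recovery fractions for ordinary
    versus reward rings, assuming reward rings are always reported."""
    return (std_recov / std_ringed) / (reward_recov / reward_ringed)

def lr_test_constant_rate(strata):
    """LR statistic for H0: one common recovery probability across strata
    versus stratum-specific probabilities. strata: [(recoveries, ringed), ...]."""
    k_tot = sum(k for k, n in strata)
    n_tot = sum(n for k, n in strata)
    ll0 = sum(binom_loglik(k, n, k_tot / n_tot) for k, n in strata)  # constrained
    ll1 = sum(binom_loglik(k, n, k / n) for k, n in strata)          # unconstrained
    return 2.0 * (ll1 - ll0)
```

The statistic is referred to a chi-square distribution with degrees of freedom equal to the number of extra free parameters, mirroring the constrained-versus-unconstrained comparisons in the abstract.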
Pseudo-empirical Likelihood-Based Method Using Calibration for Longitudinal Data with Drop-Out
Chen, Baojiang; Zhou, Xiao-Hua; Chan, Kwun Chuen Gary
2014-01-01
In observational studies, interest mainly lies in estimation of the population-level relationship between the explanatory variables and dependent variables, and the estimation is often undertaken using a sample of longitudinal data. In some situations, the longitudinal data sample features biases and loss of estimation efficiency due to non-random drop-out. However, inclusion of population-level information can increase estimation efficiency. In this paper we propose an empirical likelihood-based method to incorporate population-level information in a longitudinal study with drop-out. The population-level information is incorporated via constraints on functions of the parameters, and non-random drop-out bias is corrected by using a weighted generalized estimating equations method. We provide a three-step estimation procedure that makes computation easier. Some commonly used methods are compared in simulation studies, which demonstrate that our proposed method can correct the non-random drop-out bias and increase the estimation efficiency, especially for small sample size or when the missing proportion is high. In some situations, the efficiency improvement is substantial. Finally, we apply this method to an Alzheimer's disease study. PMID:25587200
NASA Astrophysics Data System (ADS)
Song, Yanxing; Yang, Jingsong; Cheng, Lina; Liu, Shucong
2014-09-01
An image restoration method based on Poisson maximum likelihood estimation (PMLE) for earthquake ruin scenes is proposed in this paper. The PMLE algorithm is introduced first, and an automatic acceleration method is used to speed up the iterative process; an image of an earthquake ruin scene is then processed with this restoration method. The spectral correlation method and PSNR (peak signal-to-noise ratio) are chosen to validate the restoration effect. The simulation results show that the number of iterations affects the PSNR of the processed image and the operation time, and that the method can restore images of earthquake ruin scenes effectively and has good practicability.
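The classic fixed-point iteration for Poisson maximum-likelihood deconvolution is the Richardson-Lucy update; the abstract's acceleration scheme is not reproduced here. A plain one-dimensional sketch, assuming a normalized point-spread function (a 2-D version is the same update with 2-D convolutions):

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=50):
    """Richardson-Lucy deconvolution: the fixed-point iteration of the
    Poisson maximum-likelihood restoration problem, in one dimension.

    observed: blurred, nonnegative signal; psf: normalized kernel.
    Each step multiplies the estimate by the back-projected ratio of the
    data to the current model prediction.
    """
    psf = np.asarray(psf, dtype=float)
    psf_mirror = psf[::-1]                             # adjoint of the blur
    est = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(iterations):
        conv = np.convolve(est, psf, mode='same')      # model prediction
        ratio = observed / np.maximum(conv, 1e-12)     # data / prediction
        est = est * np.convolve(ratio, psf_mirror, mode='same')
    return est
```

Each iteration preserves nonnegativity, which is one reason this family of methods suits photon-limited imagery.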
Two-group time-to-event continual reassessment method using likelihood estimation.
Salter, Amber; O'Quigley, John; Cutter, Gary R; Aban, Inmaculada B
2015-11-01
The presence of patient heterogeneity in dose finding studies is inherent (i.e. groups with different maximum tolerated doses). When this type of heterogeneity is not accounted for in the trial design, subjects may be exposed to toxic or suboptimal doses. Options to handle patient heterogeneity include conducting separate trials or splitting the trial into arms. However, cost and/or lack of resources may limit the feasibility of these options, and when information is shared between the groups, neither option benefits from the shared information. Extending current dose finding designs to handle patient heterogeneity maximizes the utility of existing methods within a single trial. We propose a modification to the time-to-event continual reassessment method to accommodate two groups using a two-parameter model and maximum likelihood estimation. The operating characteristics of the design are investigated through simulations under different scenarios, including the scenario where one conducts two separate trials, one for each group, using the one-sample time-to-event continual reassessment method.
NASA Astrophysics Data System (ADS)
Sheen, D. H.; Seong, Y. J.; Park, J. H.; Lim, I. S.
2015-12-01
Early this year, the Korea Meteorological Administration (KMA) began to operate the first stage of an earthquake early warning system (EEWS) and to provide early warning information to the general public. The KMA EEWS is based on the Earthquake Alarm Systems version 2 (ElarmS-2), developed at the University of California, Berkeley. This method estimates the earthquake location using a simple grid search algorithm that finds the location with the minimum variance of the origin time on successively finer grids. A robust maximum likelihood earthquake location (MAXEL) method for early warning, based on the equal differential times of P arrivals, was recently developed. MAXEL has been demonstrated to determine the event location successfully, even when an outlier is included in the small number of P arrivals. This presentation details the application of MAXEL to the KMA EEWS, its performance evaluation over seismic networks in South Korea with synthetic data, and a comparison of earthquake-location statistics based on ElarmS-2 and MAXEL.
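The ElarmS-2-style location step described in this abstract — choosing the grid node that minimizes the variance of the implied origin times, then refining the grid around it — can be sketched as follows. This is a minimal flat-geometry illustration with a fixed P-wave speed; the function names, coordinates, and velocity are illustrative assumptions, not the KMA implementation.

```python
import math

def origin_time_variance(candidate, stations, arrivals, v_p=6.5):
    """Variance of the origin times implied by a candidate epicenter.

    Each station implies an origin time t_i - dist_i / v_p; at the true
    location (flat geometry, km and s units assumed) these agree, so the
    variance is minimized there.
    """
    ots = [t - math.dist(candidate, s) / v_p for s, t in zip(stations, arrivals)]
    mean = sum(ots) / len(ots)
    return sum((o - mean) ** 2 for o in ots) / len(ots)

def grid_search(stations, arrivals, lo=(0.0, 0.0), hi=(100.0, 100.0),
                levels=4, n=11):
    """Successively finer grids: pick the min-variance node, then refine
    a smaller grid centered on it."""
    best = None
    for _ in range(levels):
        xs = [lo[0] + i * (hi[0] - lo[0]) / (n - 1) for i in range(n)]
        ys = [lo[1] + j * (hi[1] - lo[1]) / (n - 1) for j in range(n)]
        best = min(((x, y) for x in xs for y in ys),
                   key=lambda c: origin_time_variance(c, stations, arrivals))
        dx = (hi[0] - lo[0]) / (n - 1)
        dy = (hi[1] - lo[1]) / (n - 1)
        lo, hi = (best[0] - dx, best[1] - dy), (best[0] + dx, best[1] + dy)
    return best
```

For arrivals generated from a known epicenter, the variance is exactly zero at the true location, so the refinement converges onto it.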
Likelihood Methods for Testing Group Problem Solving Models with Censored Data.
ERIC Educational Resources Information Center
Regal, Ronald R.; Larntz, Kinley
1978-01-01
Models relating individual and group problem solving solution times under the condition of limited time (time limit censoring) are presented. Maximum likelihood estimation of parameters and a goodness of fit test are presented. (Author/JKS)
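Time-limit censoring of the kind described above is handled in a likelihood framework by letting solved problems contribute their density and unsolved ones the survival probability at the limit. Below is a minimal sketch under an assumed exponential solution-time model (a simplification; the paper's group models are richer), with hypothetical function names.

```python
import math

def exp_censored_loglik(rate, times, limit):
    """Log-likelihood for exponential solution times right-censored at the
    time limit: density f(t) = rate * exp(-rate * t) for observed solves,
    survival S(limit) = exp(-rate * limit) for censored subjects."""
    ll = 0.0
    for t in times:
        if t < limit:                      # problem solved within the limit
            ll += math.log(rate) - rate * t
        else:                              # censored: we only know t >= limit
            ll += -rate * limit
    return ll

def exp_censored_mle(times, limit):
    """Closed-form MLE for the exponential rate: solved count / total
    observed (possibly truncated) time."""
    solved = sum(1 for t in times if t < limit)
    total = sum(min(t, limit) for t in times)
    return solved / total
```

For exponential times the MLE has the familiar closed form events / total exposure, and the log-likelihood is maximized there.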
Quantifying uncertainty in predictions of groundwater levels using formal likelihood methods
NASA Astrophysics Data System (ADS)
Marchant, Ben; Mackay, Jonathan; Bloomfield, John
2016-09-01
Informal and formal likelihood methods can be used to quantify uncertainty in modelled predictions of groundwater levels (GWLs). Informal methods use a relatively subjective criterion to identify sets of plausible or behavioural parameters of the GWL models. In contrast, formal methods specify a statistical model for the residuals or errors of the GWL model. The formal uncertainty estimates are only reliable when the assumptions of the statistical model are appropriate. We apply the formal approach to historical reconstructions of GWL hydrographs from four UK boreholes. We test whether a model which assumes Gaussian and independent errors is sufficient to represent the residuals, or whether a model which includes temporal autocorrelation and a general non-Gaussian distribution is required. Groundwater level hydrographs are often observed at irregular time intervals, so we use geostatistical methods to quantify the temporal autocorrelation rather than more standard time series methods such as autoregressive models. According to the Akaike Information Criterion, the more general statistical model better represents the residuals of the GWL model. However, no substantial difference between the accuracy of the GWL predictions and the estimates of their uncertainty is observed when the two statistical models are compared. When the general model is applied, significant temporal correlation over periods ranging from 3 to 20 months is evident for the different boreholes. When the GWL model parameters are sampled using a Markov chain Monte Carlo approach, the distributions based on the general statistical model differ from those of the Gaussian model, particularly for the boreholes with the most autocorrelation. These results suggest that the independent Gaussian model of residuals is sufficient to estimate the uncertainty of a GWL prediction on a single date. However, if realistically autocorrelated simulations of GWL hydrographs for multiple dates are required or if the
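The model comparison step described in this abstract — scoring an independent Gaussian error model against an autocorrelated one via AIC — can be sketched for regularly spaced residuals. This toy version uses an AR(1) error model with a conditional likelihood rather than the geostatistical treatment the authors need for irregular sampling; the names and simplifications are illustrative.

```python
import math

def gauss_iid_aic(res):
    """AIC of the independent Gaussian error model (one parameter: sigma).
    The maximized Gaussian log-likelihood simplifies to
    -n/2 * (log(2*pi*sigma2_hat) + 1)."""
    n = len(res)
    sigma2 = sum(e * e for e in res) / n
    ll = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
    return 2 * 1 - 2 * ll

def gauss_ar1_aic(res):
    """AIC of a Gaussian AR(1) error model (two parameters: phi, sigma),
    using the conditional likelihood of res[1:] given res[0]."""
    n = len(res)
    phi = (sum(res[i] * res[i - 1] for i in range(1, n))
           / sum(e * e for e in res[:-1]))
    innov = [res[i] - phi * res[i - 1] for i in range(1, n)]
    sigma2 = sum(v * v for v in innov) / (n - 1)
    ll = -0.5 * (n - 1) * (math.log(2 * math.pi * sigma2) + 1)
    return 2 * 2 - 2 * ll
```

On strongly autocorrelated residuals the AR(1) model pays for its extra parameter and still wins on AIC, mirroring the paper's finding that the more general residual model scores better.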
Penalized likelihood methods for estimation of sparse high-dimensional directed acyclic graphs
SHOJAIE, ALI; MICHAILIDIS, GEORGE
2010-01-01
Summary Directed acyclic graphs are commonly used to represent causal relationships among random variables in graphical models. Applications of these models arise in the study of physical and biological systems where directed edges between nodes represent the influence of components of the system on each other. Estimation of directed graphs from observational data is computationally NP-hard. In addition, directed graphs with the same structure may be indistinguishable based on observations alone. When the nodes exhibit a natural ordering, the problem of estimating directed graphs reduces to the problem of estimating the structure of the network. In this paper, we propose an efficient penalized likelihood method for estimation of the adjacency matrix of directed acyclic graphs, when variables inherit a natural ordering. We study variable selection consistency of lasso and adaptive lasso penalties in high-dimensional sparse settings, and propose an error-based choice for selecting the tuning parameter. We show that although the lasso is only variable selection consistent under stringent conditions, the adaptive lasso can consistently estimate the true graph under the usual regularity assumptions. PMID:22434937
Cox regression with missing covariate data using a modified partial likelihood method.
Martinussen, Torben; Holst, Klaus K; Scheike, Thomas H
2016-10-01
Missing covariate values are a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM-algorithm. It exploits the fact that the observed hazard function is multiplicative in the baseline hazard function, the idea being to profile out this function before carrying out the estimation of the parameter of interest. In this step one uses a Breslow-type estimator to estimate the cumulative baseline hazard function. We focus on the situation where the observed covariates are categorical, which allows us to calculate estimators without having to assume anything about the distribution of the covariates. We show that the proposed estimator is consistent and asymptotically normal, and derive a consistent estimator of the variance-covariance matrix that does not involve any choice of a perturbation parameter. Moderate sample size performance of the estimators is investigated via simulation and by application to a real data example.
A Composite-Likelihood Method for Detecting Incomplete Selective Sweep from Population Genomic Data
Vy, Ha My T.; Kim, Yuseob
2015-01-01
Adaptive evolution occurs as beneficial mutations arise and then increase in frequency by positive natural selection. How, when, and where in the genome such evolutionary events occur is a fundamental question in evolutionary biology. It is possible to detect ongoing positive selection or an incomplete selective sweep in species with sexual reproduction because, when a beneficial mutation is on the way to fixation, homologous chromosomes in the population are divided into two groups: one carrying the beneficial allele with very low polymorphism at nearby linked loci and the other carrying the ancestral allele with a normal pattern of sequence variation. Previous studies developed long-range haplotype tests to capture this difference between two groups as the signal of an incomplete selective sweep. In this study, we propose a composite-likelihood-ratio (CLR) test for detecting incomplete selective sweeps based on the joint sampling probabilities for allele frequencies of two groups as a function of strength of selection and recombination rate. Tested against simulated data, this method yielded statistical power and accuracy in parameter estimation that are higher than the iHS test and comparable to the more recently developed nSL test. This procedure was also applied to African Drosophila melanogaster population genomic data to detect candidate genes under ongoing positive selection. Upon visual inspection of sequence polymorphism, candidates detected by our CLR method exhibited clear haplotype structures predicted under incomplete selective sweeps. Our results suggest that different methods capture different aspects of genetic information regarding incomplete sweeps and thus are partially complementary to each other. PMID:25911658
NASA Astrophysics Data System (ADS)
Stollenwerk, Nico
2009-09-01
Basic stochastic processes, such as the SIS and SIR epidemics, are used to describe data from an internet-based surveillance system, the InfluenzaNet. Via generating functions, analytic expressions for the probability can be derived in some simplifying situations, and from these, likelihood functions for parameter estimation are constructed. This is a nice application in which partial differential equations appear in epidemiology without invoking any explicitly spatial aspect. All steps can eventually be bridged by numerical simulations in case of analytical difficulties [1, 2].
An alternative method to measure the likelihood of a financial crisis in an emerging market
NASA Astrophysics Data System (ADS)
Özlale, Ümit; Metin-Özcan, Kıvılcım
2007-07-01
This paper utilizes an early warning system to measure the likelihood of a financial crisis in an emerging market economy. We introduce a methodology with which we can both obtain a likelihood series and analyze the time-varying effects of several macroeconomic variables on this likelihood. Since the issue is analyzed in a non-linear state space framework, the extended Kalman filter emerges as the optimal estimation algorithm. Taking the Turkish economy as our laboratory, the results indicate that both the derived likelihood measure and the estimated time-varying parameters are meaningful and can successfully explain the path that the Turkish economy followed between 2000 and 2006. The estimated parameters also suggest that an overvalued domestic currency, a current account deficit, and an increase in default risk raise the likelihood of an economic crisis. Overall, the findings suggest that the estimation methodology introduced in this paper can be applied to other emerging market economies as well.
ERIC Educational Resources Information Center
Tao, Jian; Shi, Ning-Zhong; Chang, Hua-Hua
2012-01-01
For mixed-type tests composed of both dichotomous and polytomous items, polytomous items often yield more information than dichotomous ones. To reflect the difference between the two types of items, polytomous items are usually pre-assigned with larger weights. We propose an item-weighted likelihood method to better assess examinees' ability…
A Maximum-Likelihood Method for the Estimation of Pairwise Relatedness in Structured Populations
Anderson, Amy D.; Weir, Bruce S.
2007-01-01
A maximum-likelihood estimator for pairwise relatedness is presented for the situation in which the individuals under consideration come from a large outbred subpopulation of the population for which allele frequencies are known. We demonstrate via simulations that a variety of commonly used estimators that do not take this kind of misspecification of allele frequencies into account will systematically overestimate the degree of relatedness between two individuals from a subpopulation. A maximum-likelihood estimator that includes FST as a parameter is introduced with the goal of producing the relatedness estimates that would have been obtained if the subpopulation allele frequencies had been known. This estimator is shown to work quite well, even when the value of FST is misspecified. Bootstrap confidence intervals are also examined and shown to exhibit close to nominal coverage when FST is correctly specified. PMID:17339212
Anisimova, Maria; Gil, Manuel; Dufayard, Jean-François; Dessimoz, Christophe; Gascuel, Olivier
2011-01-01
Phylogenetic inference and evaluating support for inferred relationships is at the core of many studies testing evolutionary hypotheses. Despite the popularity of nonparametric bootstrap frequencies and Bayesian posterior probabilities, the interpretation of these measures of tree branch support remains a source of discussion. Furthermore, both methods are computationally expensive and become prohibitive for large data sets. Recent fast approximate likelihood-based measures of branch supports (approximate likelihood ratio test [aLRT] and Shimodaira–Hasegawa [SH]-aLRT) provide a compelling alternative to these slower conventional methods, offering not only speed advantages but also excellent levels of accuracy and power. Here we propose an additional method: a Bayesian-like transformation of aLRT (aBayes). Considering both probabilistic and frequentist frameworks, we compare the performance of the three fast likelihood-based methods with the standard bootstrap (SBS), the Bayesian approach, and the recently introduced rapid bootstrap. Our simulations and real data analyses show that with moderate model violations, all tests are sufficiently accurate, but aLRT and aBayes offer the highest statistical power and are very fast. With severe model violations aLRT, aBayes and Bayesian posteriors can produce elevated false-positive rates. With data sets for which such violation can be detected, we recommend using SH-aLRT, the nonparametric version of aLRT based on a procedure similar to the Shimodaira–Hasegawa tree selection. In general, the SBS seems to be excessively conservative and is much slower than our approximate likelihood-based methods. PMID:21540409
NASA Astrophysics Data System (ADS)
Nezhel'skaya, L. A.
2016-09-01
A flow of physical events (photons, electrons, and other elementary particles) is studied. One mathematical model of such flows is the modulated MAP event flow operating under conditions of an unextendable dead time period. It is assumed that the dead time period is an unknown fixed value. The problem of estimating the dead time period from observations of event arrival times is solved by the method of maximum likelihood.
A likelihood method to cross-calibrate air-shower detectors
NASA Astrophysics Data System (ADS)
Dembinski, Hans Peter; Kégl, Balázs; Mariş, Ioana C.; Roth, Markus; Veberič, Darko
2016-01-01
We present a detailed statistical treatment of the energy calibration of hybrid air-shower detectors, which combine a surface detector array and a fluorescence detector, to obtain an unbiased estimate of the calibration curve. The special features of calibration data from air showers prevent unbiased results if a standard least-squares fit is applied to the problem. We develop a general maximum-likelihood approach, based on the detailed statistical model, to solve the problem. Our approach was developed for the Pierre Auger Observatory, but the applied principles are general and can be transferred to other air-shower experiments, even to the cross-calibration of other observables. Since our general likelihood function is expensive to compute, we derive two approximations with significantly smaller computational cost. In recent years, both have been used to calibrate data of the Pierre Auger Observatory. We demonstrate that these approximations introduce negligible bias when they are applied to simulated toy experiments, which mimic realistic experimental conditions.
How to use dynamic light scattering to improve the likelihood of growing macromolecular crystals.
Borgstahl, Gloria E O
2007-01-01
Dynamic light scattering (DLS) has become one of the most useful diagnostic tools for crystallization. The main purpose of using DLS in crystal screening is to help the investigator understand the size distribution, stability, and aggregation state of macromolecules in solution. It can also be used to understand how experimental variables influence aggregation. With commercially available instruments, DLS is easy to perform, and most of the sample is recoverable. Most usefully, the homogeneity or monodispersity of a sample, as measured by DLS, can be predictive of crystallizability.
Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier
2010-05-01
PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
Comparisons of Four Methods for Estimating a Dynamic Factor Model
ERIC Educational Resources Information Center
Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.
2008-01-01
Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…
A method for selecting M dwarfs with an increased likelihood of unresolved ultracool companionship
NASA Astrophysics Data System (ADS)
Cook, N. J.; Pinfield, D. J.; Marocco, F.; Burningham, B.; Jones, H. R. A.; Frith, J.; Zhong, J.; Luo, A. L.; Qi, Z. X.; Lucas, P. W.; Gromadzki, M.; Day-Jones, A. C.; Kurtev, R. G.; Guo, Y. X.; Wang, Y. F.; Bai, Y.; Yi, Z. P.; Smart, R. L.
2016-04-01
Locating ultracool companions to M dwarfs is important for constraining low-mass formation models, the measurement of substellar dynamical masses and radii, and for testing ultracool evolutionary models. We present an optimized method for identifying M dwarfs which may have unresolved ultracool companions. We construct a catalogue of 440 694 M dwarf candidates, from Wide-Field Infrared Survey Explorer, Two Micron All-Sky Survey and Sloan Digital Sky Survey, based on optical- and near-infrared colours and reduced proper motion. With strict reddening, photometric and quality constraints we isolate a subsample of 36 898 M dwarfs and search for possible mid-infrared M dwarf + ultracool dwarf candidates by comparing M dwarfs which have similar optical/near-infrared colours (chosen for their sensitivity to effective temperature and metallicity). We present 1082 M dwarf + ultracool dwarf candidates for follow-up. Using simulated ultracool dwarf companions to M dwarfs, we estimate that the occurrence of unresolved ultracool companions amongst our M dwarf + ultracool dwarf candidates should be at least four times the average for our full M dwarf catalogue. We discuss possible contamination and bias and predict yields of candidates based on our simulations.
NASA Astrophysics Data System (ADS)
Osmaston, Miles
2013-04-01
In my oral(?) contribution to this session [1] I use my studies of the fundamental physics of gravitation to derive a reason for expecting the vertical gradient of electron density (= radial electric field) in the ionosphere to be closely affected by another field, directly associated with the ordinary gravitational potential (g) present at the Earth's surface. I have called that other field the Gravity-Electric (G-E) field. A calibration of this linkage relationship could be provided by noting corresponding co-seismic changes in (g) and in the ionosphere when, for example, a major normal-fault slippage occurs. But we are here concerned with precursory changes. This means we are looking for mechanisms which, on suitably short timescales, would generate pre-quake elastic deformation that changes the local (g). This poster supplements my talk by noting, for more relaxed discussion, what I see as potentially relevant plate dynamical mechanisms. Timescale constraints. If monitoring for ionospheric precursors is on only short timescales, their detectability is limited to correspondingly tectonically active regions. But as our monitoring becomes more precise and over longer terms, this constraint will relax. Most areas of the Earth are undergoing very slow heating or cooling and corresponding volume or epeirogenic change; major earthquakes can result but we won't have detected any accumulating ionospheric precursor. Transcurrent faulting. In principle, slip on a straight fault, even in a stick-slip manner, should produce little vertical deformation, but a kink, such as has caused the Transverse Ranges on the San Andreas Fault, would seem worth monitoring for precursory build-up in the ionosphere. Plate closure - subducting plate downbend. The traditionally presumed elastic flexure downbend mechanism is incorrect. 'Seismic coupling' has long been recognized by seismologists, invoking the repeated occurrence of 'asperities' to temporarily lock subduction and allow stress
A method for modeling bias in a person's estimates of likelihoods of events
NASA Technical Reports Server (NTRS)
Nygren, Thomas E.; Morera, Osvaldo
1988-01-01
It is of practical importance in decision situations involving risk to train individuals to transform uncertainties into subjective probability estimates that are both accurate and unbiased. We have found that in decision situations involving risk, people often introduce subjective bias in their estimation of the likelihoods of events depending on whether the possible outcomes are perceived as being good or bad. Until now, however, the successful measurement of individual differences in the magnitude of such biases has not been attempted. In this paper we illustrate a modification of a procedure originally outlined by Davidson, Suppes, and Siegel (3) to allow for a quantitatively-based methodology for simultaneously estimating an individual's subjective utility and subjective probability functions. The procedure is now an interactive computer-based algorithm, DSS, that allows for the measurement of biases in probability estimation by obtaining independent measures of two subjective probability functions (S+ and S-) for winning (i.e., good outcomes) and for losing (i.e., bad outcomes) respectively for each individual, and for different experimental conditions within individuals. The algorithm and some recent empirical data are described.
Vicario, Saverio; Caccone, Adalgisa; Gauthier, Jacques
2003-02-01
Contentious issues in Night Lizard (Xantusiidae) evolution are revisited using Maximum Likelihood-based Bayesian methods and compared with results from Neighbor-Joining and Maximum Parsimony analyses. Fragments of three mitochondrial genes, the 12S and 16S ribosomal genes, and the cytochrome b gene, are sampled across an ingroup composed of seven xantusiid species and a 12-species outgroup chosen to bracket ancestral states for six additional clades of scleroglossan lizards. Our phylogenetic analyses afford robust support for the following conclusions: Xantusiidae is part of Scincomorpha, rather than being allied with Gekkota; Lepidophyma is sister to Xantusia, rather than to Cricosaura; Xantusia riversiana is imbedded within, rather than being sister to, other Xantusia species; and rock-morph Xantusia are not closely related to one another. Convergence related to retarded rates of growth and development, or to physical constraints imposed by living in rock crevices, may be responsible for much of the character discordance underlying conflicts in xantusiid phylogeny. Fossil-calibrated Maximum Likelihood-based divergence time estimates suggest that although the xantusiid stem may have originated in the Mesozoic, the crown clade is exclusively Tertiary in age. Thus, the clade including extant Cricosaura does not appear to have been extant during the K-T boundary bolide impact, as has been suggested. Moreover, our divergence-time estimates indicate that the xantusiid island endemics, Cricosaura typica on Cuba and Xantusia riversiana on the California Channel Islands, arrived via dispersal rather than vicariance, as previously proposed.
Maximum likelihood method to correct for missed levels based on the Δ3(L) statistic
Mulhall, Declan
2011-05-15
The Δ3(L) statistic of random matrix theory is defined as the average of a set of random numbers {δ} derived from a spectrum. The distribution p(δ) of these random numbers is used as the basis of a maximum likelihood method to gauge the fraction x of levels missed in an experimental spectrum. The method is tested on an ensemble of depleted spectra from the Gaussian orthogonal ensemble (GOE) and accurately returned the correct fraction of missed levels. Neutron resonance data and acoustic spectra of an aluminum block were analyzed. All results were compared with an analysis based on an established expression for Δ3(L) for a depleted GOE spectrum. The effects of intruder levels are examined and seen to be very similar to those of missed levels. Shell model spectra were seen to give the same p(δ) as the GOE.
Calibrating CAT Pools and Online Pretest Items Using Marginal Maximum Likelihood Methods.
ERIC Educational Resources Information Center
Pommerich, Mary; Segall, Daniel O.
Research discussed in this paper was conducted as part of an ongoing large-scale simulation study to evaluate methods of calibrating pretest items for computerized adaptive testing (CAT) pools. The simulation was designed to mimic the operational CAT Armed Services Vocational Aptitude Battery (ASVAB) testing program, in which a single pretest item…
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1976-01-01
A maximum likelihood estimation method was applied to flight data, and procedures to facilitate the routine analysis of large amounts of flight data are described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple-maneuver analysis also proved useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for the overall analysis are also discussed.
Maximum likelihood methods reveal conservation of function among closely related kinesin families.
Lawrence, Carolyn J; Malmberg, Russell L; Muszynski, Michael G; Dawe, R Kelly
2002-01-01
We have reconstructed the evolution of the anciently derived kinesin superfamily using various alignment and tree-building methods. In addition to classifying previously described kinesins from protists, fungi, and animals, we analyzed a variety of kinesin sequences from the plant kingdom including 12 from Zea mays and 29 from Arabidopsis thaliana. Also included in our data set were four sequences from the anciently diverged amitochondriate protist Giardia lamblia. The overall topology of the best tree we found is more likely than previously reported topologies and allows us to make the following new observations: (1) kinesins involved in chromosome movement including MCAK, chromokinesin, and CENP-E may be descended from a single ancestor; (2) kinesins that form complex oligomers are limited to a monophyletic group of families; (3) kinesins that crosslink antiparallel microtubules at the spindle midzone including BIMC, MKLP, and CENP-E are closely related; (4) Drosophila NOD and human KID group with other characterized chromokinesins; and (5) Saccharomyces SMY1 groups with kinesin-I sequences, forming a family of kinesins capable of class V myosin interactions. In addition, we found that one monophyletic clade composed exclusively of sequences with a C-terminal motor domain contains all known minus end-directed kinesins.
New methods to assess severity and likelihood of urban flood risk from intense rainfall
NASA Astrophysics Data System (ADS)
Fewtrell, Tim; Foote, Matt; Bates, Paul; Ntelekos, Alexandros
2010-05-01
the construction of appropriate probabilistic flood models. This paper will describe new research being undertaken to assess the practicality of ultra-high resolution, ground based laser-scanner data for flood modelling in urban centres, using new hydraulic propagation methods to determine the feasibility of such data to be applied within stochastic event models. Results from the collection of 'point cloud' data collected from a mobile terrestrial laser-scanner system in a key urban centre, combined with appropriate datasets, will be summarized here and an initial assessment of the potential for the use of such data in stochastic event sets will be made. Conclusions are drawn from comparisons with previous studies and underlying DEM products of similar resolutions in terms of computational time, flood extent and flood depth. Based on the above, the study provides some current recommendations on the most appropriate resolution of input data for urban hydraulic modelling.
The Likelihood Function and Likelihood Statistics
NASA Astrophysics Data System (ADS)
Robinson, Edward L.
2016-01-01
The likelihood function is a necessary component of Bayesian statistics but not of frequentist statistics. The likelihood function can, however, serve as the foundation for an attractive variant of frequentist statistics sometimes called likelihood statistics. We will first discuss the definition and meaning of the likelihood function, giving some examples of its use and abuse - most notably in the so-called prosecutor's fallacy. Maximum likelihood estimation is the aspect of likelihood statistics familiar to most people. When data points are known to have Gaussian probability distributions, maximum likelihood parameter estimation leads directly to least-squares estimation. When the data points have non-Gaussian distributions, least-squares estimation is no longer appropriate. We will show how the maximum likelihood principle leads to logical alternatives to least-squares estimation for non-Gaussian distributions, taking the Poisson distribution as an example. The likelihood ratio is the ratio of the likelihoods of, for example, two hypotheses or two parameters. Likelihood ratios can be treated much like un-normalized probability distributions, greatly extending the applicability and utility of likelihood statistics. Likelihood ratios are prone to the same complexities that afflict posterior probability distributions in Bayesian statistics. We will show how meaningful information can be extracted from likelihood ratios by the Laplace approximation, by marginalizing, or by Markov chain Monte Carlo sampling.
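To make the Poisson point above concrete, here is a minimal sketch (the data and the grid-search estimator are illustrative, not from the paper): for Poisson counts the maximum likelihood estimate of a constant rate is the sample mean, a result that least squares reproduces only in the Gaussian case.

```python
import math

def poisson_log_likelihood(lam, counts):
    # log L(lambda) = sum_i [ k_i*log(lambda) - lambda - log(k_i!) ]
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in counts)

def mle_rate(counts, grid):
    # Grid-search maximizer of the Poisson log-likelihood.
    return max(grid, key=lambda lam: poisson_log_likelihood(lam, counts))

counts = [3, 5, 4, 6, 2]                   # hypothetical observed counts
grid = [i / 100 for i in range(1, 1001)]   # candidate rates 0.01 .. 10.00
lam_hat = mle_rate(counts, grid)           # equals the sample mean, 4.0
```

The concavity of the Poisson log-likelihood guarantees the grid search lands on the sample mean whenever the mean lies on the grid.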
Terwilliger, J.D.
1995-03-01
Historically, most methods for detecting linkage disequilibrium were designed for use with diallelic marker loci, for which the analysis is straightforward. With the advent of polymorphic markers with many alleles, the normal approach to their analysis has been either to extend the methodology for two-allele systems (leading to an increase in df and to a corresponding loss of power) or to select the allele believed to be associated and then collapse the other alleles, reducing, in a biased way, the locus to a diallelic system. I propose a likelihood-based approach to testing for linkage disequilibrium, an approach that becomes more conservative as the number of alleles increases, and as the number of markers considered jointly increases in a multipoint test for linkage disequilibrium, while maintaining high power. Properties of this method for detecting associations and fine mapping the location of disease traits are investigated. It is found to be, in general, more powerful than conventional methods, and it provides a tractable framework for the fine mapping of new disease loci. Application to the cystic fibrosis data of Kerem et al. is included to illustrate the method. 12 refs., 4 figs., 4 tabs.
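For contrast with the multiallelic approach proposed above, the baseline diallelic case can be sketched with a textbook likelihood-ratio (G) test of allelic association; this is the two-allele test that the paper extends, not Terwilliger's method itself, and the allele counts are invented.

```python
import math

def g_statistic(table):
    # Likelihood-ratio (G) statistic for independence in a 2x2 table.
    # table = [[a, b], [c, d]]: rows case/control, columns allele 1/2.
    # G = 2 * sum O*ln(O/E), approximately chi-square with 1 df.
    total = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    g = 0.0
    for i in range(2):
        for j in range(2):
            o = table[i][j]
            e = row_sums[i] * col_sums[j] / total
            if o > 0:
                g += o * math.log(o / e)
    return 2.0 * g

# Hypothetical allele counts at a diallelic marker in cases vs controls.
g = g_statistic([[60, 40], [40, 60]])   # about 8.05, significant at 1 df
```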
Cuenca, José; Aleza, Pablo; Juárez, José; García-Lor, Andrés; Froelicher, Yann; Navarro, Luis; Ollitrault, Patrick
2015-01-01
Polyploidisation is a key source of diversification and speciation in plants. Most researchers consider sexual polyploidisation leading to unreduced gametes as its main origin. Unreduced gametes are useful in several crop breeding schemes. Their formation mechanism, i.e., First-Division Restitution (FDR) or Second-Division Restitution (SDR), greatly impacts the gametic and population structures and, therefore, the breeding efficiency. Previous methods to identify the underlying mechanism required the analysis of a large set of markers over a large progeny. This work develops a new maximum-likelihood method to identify the unreduced gamete formation mechanism both at the population and individual levels using independent centromeric markers. Knowledge of marker-centromere distances greatly improves the statistical power of the comparison between the SDR and FDR hypotheses. Simulated data demonstrated the importance of selecting markers very close to the centromere to obtain significant conclusions at the individual level. This new method was used to identify the meiotic restitution mechanism in nineteen mandarin genotypes used as female parents in triploid citrus breeding. SDR was identified for 85.3% of 543 triploid hybrids and FDR for 0.6%. No significant conclusions were obtained for 14.1% of the hybrids. At the population level, SDR was the predominant mechanism for the 19 parental mandarins. PMID:25894579
Wen, Yalu; Lu, Qing
2016-09-01
Although compelling evidence suggests that the genetic etiology of complex diseases could be heterogeneous in subphenotype groups, little attention has been paid to phenotypic heterogeneity in genetic association analysis of complex diseases. Simply ignoring phenotypic heterogeneity in association analysis could result in attenuated estimates of genetic effects and low power of association tests if subphenotypes with similar clinical manifestations have heterogeneous underlying genetic etiologies. To facilitate the family-based association analysis allowing for phenotypic heterogeneity, we propose a clustered multiclass likelihood-ratio ensemble (CMLRE) method. The proposed method provides an alternative way to model the complex relationship between disease outcomes and genetic variants. It allows for heterogeneous genetic causes of disease subphenotypes and can be applied to various pedigree structures. Through simulations, we found CMLRE outperformed the commonly adopted strategies in a variety of underlying disease scenarios. We further applied CMLRE to a family-based dataset from the International Consortium to Identify Genes and Interactions Controlling Oral Clefts (ICOC) to investigate the genetic variants and interactions predisposing to subphenotypes of oral clefts. The analysis suggested that two subphenotypes, nonsyndromic cleft lip without palate (CL) and cleft lip with palate (CLP), shared similar genetic etiologies, while cleft palate only (CP) had its own genetic mechanism. The analysis further revealed that rs10863790 (IRF6), rs7017252 (8q24), and rs7078160 (VAX1) were jointly associated with CL/CLP, while rs7969932 (TBK1), rs227731 (17q22), and rs2141765 (TBK1) jointly contributed to CP. PMID:27321816
Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin
2013-01-01
Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.
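The Newton-type idea can be shown on a drastically reduced problem, assuming a single variance parameter with exact (not Monte Carlo) score and information; real REML problems involve many variance components and repeated solving of mixed model equations.

```python
def newton_variance_mle(xs, v0=1.0, iters=50):
    # Newton-Raphson on the log-likelihood of a normal variance v,
    # l(v) = -n/2*log(v) - S/(2v) with S = sum((x - mean)^2).
    # The analytical MLE is S/n, so convergence is easy to check.
    n = len(xs)
    m = sum(xs) / n
    s = sum((x - m) ** 2 for x in xs)
    v = v0
    for _ in range(iters):
        score = -n / (2 * v) + s / (2 * v * v)   # dl/dv
        hess = n / (2 * v * v) - s / v ** 3      # d2l/dv2
        v -= score / hess                        # Newton step
    return v

v_hat = newton_variance_mle([1.0, 2.0, 3.0, 4.0])   # analytical MLE: 5/4 = 1.25
```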
NASA Technical Reports Server (NTRS)
Rheinfurth, M. H.; Wilson, H. B.
1991-01-01
The monograph was prepared to give the practicing engineer a clear understanding of dynamics with special consideration given to the dynamic analysis of aerospace systems. It is conceived to be both a desk-top reference and a refresher for aerospace engineers in government and industry. It could also be used as a supplement to standard texts for in-house training courses on the subject. Beginning with the basic concepts of kinematics and dynamics, the discussion proceeds to treat the dynamics of a system of particles. Both classical and modern formulations of the Lagrange equations, including constraints, are discussed and applied to the dynamic modeling of aerospace structures using the modal synthesis technique.
Nagy, László G; Urban, Alexander; Orstadius, Leif; Papp, Tamás; Larsson, Ellen; Vágvölgyi, Csaba
2010-12-01
Recently developed comparative phylogenetic methods offer a wide spectrum of applications in evolutionary biology, although it is generally accepted that their statistical properties are incompletely known. Here, we examine and compare the statistical power of the ML and Bayesian methods with regard to selection of best-fit models of fruiting-body evolution and hypothesis testing of ancestral states on a real-life data set of a physiological trait (autodigestion) in the family Psathyrellaceae. Our phylogenies are based on the first multigene data set generated for the family. Two different coding regimes (binary and multistate) and two data sets differing in taxon sampling density are examined. The Bayesian method outperformed Maximum Likelihood with regard to statistical power in all analyses. This is particularly evident if the signal in the data is weak, i.e. in cases when the ML approach does not provide support to choose among competing hypotheses. Results based on binary and multistate coding differed only modestly, although it was evident that multistate analyses were less conclusive in all cases. It seems that increased taxon sampling density has favourable effects on inference of ancestral states, while model parameters are influenced to a smaller extent. The model best fitting our data implies that the rate of losses of deliquescence equals zero, although model selection in ML does not provide proper support to reject three of the four candidate models. The results also support the hypothesis that non-deliquescence (lack of autodigestion) has been ancestral in Psathyrellaceae, and that deliquescent fruiting bodies represent the preferred state, having evolved independently several times during evolution.
Generalized multidimensional dynamic allocation method.
Lebowitsch, Jonathan; Ge, Yan; Young, Benjamin; Hu, Feifang
2012-12-10
Dynamic allocation has received considerable attention since it was first proposed in the 1970s as an alternative means of allocating treatments in clinical trials which helps to secure the balance of prognostic factors across treatment groups. The purpose of this paper is to present a generalized multidimensional dynamic allocation method that simultaneously balances treatment assignments at three key levels: within the overall study, within each level of each prognostic factor, and within each stratum, that is, each combination of levels of different factors. Further, it offers capabilities for unbalanced and adaptive designs for trials. The treatment balancing performance of the proposed method is investigated through simulations which compare multidimensional dynamic allocation with traditional stratified block randomization and the Pocock-Simon method. On the basis of these results, we conclude that this generalized multidimensional dynamic allocation method is an improvement over conventional dynamic allocation methods and is flexible enough to be applied in most trial settings, including Phases I, II and III trials.
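A minimal sketch of minimization-style dynamic allocation in the spirit of Pocock-Simon follows; the factor structure, the range-based imbalance measure, and the 0.8 biasing probability are illustrative choices, not the paper's generalized method, which additionally balances overall and within-stratum counts.

```python
import random

def minimization_assign(patient_levels, counts, rng, n_arms=2, p_best=0.8):
    # counts[f][level][arm] = previous assignments at that factor level.
    # For each candidate arm, compute the marginal imbalance (range across
    # arms) the assignment would create; prefer the least-imbalanced arm
    # with probability p_best, otherwise randomize.
    imbalances = []
    for arm in range(n_arms):
        total = 0
        for f, level in enumerate(patient_levels):
            c = list(counts[f][level])
            c[arm] += 1
            total += max(c) - min(c)
        imbalances.append(total)
    best = min(range(n_arms), key=imbalances.__getitem__)
    arm = best if rng.random() < p_best else rng.randrange(n_arms)
    for f, level in enumerate(patient_levels):
        counts[f][level][arm] += 1
    return arm

rng = random.Random(1)
# Two hypothetical prognostic factors: sex (2 levels) and age band (3 levels).
counts = [[[0, 0] for _ in range(2)], [[0, 0] for _ in range(3)]]
arms = [minimization_assign((rng.randrange(2), rng.randrange(3)), counts, rng)
        for _ in range(200)]
```

Running this keeps the two arm totals close to 100 each, much tighter than simple randomization would.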
ERIC Educational Resources Information Center
Han, Kyung T.; Guo, Fanmin
2014-01-01
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
ERIC Educational Resources Information Center
Atalay Kabasakal, Kübra; Arsan, Nihan; Gök, Bilge; Kelecioglu, Hülya
2014-01-01
This simulation study compared the performances (Type I error and power) of Mantel-Haenszel (MH), SIBTEST, and item response theory-likelihood ratio (IRT-LR) methods under certain conditions. Manipulated factors were sample size, ability differences between groups, test length, the percentage of differential item functioning (DIF), and underlying…
Young, Jonathan; Thompson, Sandra E.; Brothers, Alan J.; Whitney, Paul D.; Coles, Garill A.; Henderson, Cindy L.; Wolf, Katherine E.; Hoopes, Bonnie L.
2008-12-01
The ability to estimate the likelihood of future events based on current and historical data is essential to the decision making process of many government agencies. Successful predictions related to terror events and characterizing the risks will support development of options for countering these events. The predictive tasks involve both technical and social component models. The social components have presented a particularly difficult challenge. This paper outlines some technical considerations of this modeling activity. Both data and predictions associated with the technical and social models will likely be known with differing certainties or accuracies – a critical challenge is linking across these model domains while respecting this fundamental difference in certainty level. This paper will describe the technical approach being taken to develop the social model and identification of the significant interfaces between the technical and social modeling in the context of analysis of diversion of nuclear material.
NASA Technical Reports Server (NTRS)
Gayman, W. H.
1974-01-01
Test method and apparatus determine fluid effective mass and damping in frequency range where effective mass may be considered as total mass less sum of slosh masses. Apparatus is designed so test tank and its mounting yoke are supported from structural test wall by series of flexures.
NASA Technical Reports Server (NTRS)
Grove, R. D.; Bowles, R. L.; Mayhew, S. C.
1972-01-01
A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
NASA Technical Reports Server (NTRS)
Bueno, R. A.
1977-01-01
Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft application are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in identifying correctly the mode of a failure may arise. These issues are closely examined as well as the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
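The core GLR computation for the simplest failure type, a step bias appearing in white Gaussian residuals, can be sketched as follows; this textbook scalar version ignores the Kalman-Bucy filter machinery of the aircraft application, and the residual values are invented.

```python
def glr_mean_shift(residuals, sigma=1.0):
    # For each candidate onset k, the GLR statistic for a step bias is
    # S_k^2 / (m * sigma^2), where S_k sums the last m = n - k residuals.
    # The maximizing k is the estimated failure time; a threshold on the
    # statistic declares a failure.
    n = len(residuals)
    best_k, best_stat = 0, float("-inf")
    for k in range(n):
        m = n - k
        s = sum(residuals[k:])
        stat = s * s / (m * sigma * sigma)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# Hypothetical residual sequence: a bias of about 2 appears at index 4.
residuals = [0.1, -0.2, 0.0, 0.15, 2.1, 1.9, 2.05, 1.95]
k, stat = glr_mean_shift(residuals)   # detects onset k = 4
```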
Ipsen, Andreas; Ebbels, Timothy M D
2014-10-01
In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time, tasks that may be best addressed through engineering efforts.
Dynamic Method for Identifying Collected Sample Mass
NASA Technical Reports Server (NTRS)
Carson, John
2008-01-01
G-Sample is designed for sample collection missions to identify the presence and quantity of sample material gathered by spacecraft equipped with end effectors. The software method uses a maximum-likelihood estimator to identify the collected sample's mass based on onboard force-sensor measurements, thruster firings, and a dynamics model of the spacecraft. This makes sample mass identification a computation rather than a process requiring additional hardware. Simulation examples of G-Sample are provided for spacecraft model configurations with a sample collection device mounted on the end of an extended boom. In the absence of thrust knowledge errors, the results indicate that G-Sample can identify the amount of collected sample mass to within 10 grams (with 95-percent confidence) by using a force sensor with a noise and quantization floor of 50 micrometers. These results hold even in the presence of realistic parametric uncertainty in actual spacecraft inertia, center-of-mass offset, and first flexibility modes. Thrust profile knowledge is shown to be a dominant sensitivity for G-Sample, entering in a nearly one-to-one relationship with the final mass estimation error. This means thrust profiles should be well characterized with onboard accelerometers prior to sample collection. An overall sample-mass estimation error budget has been developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
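Stripped of the spacecraft dynamics model, the underlying estimator reduces to fitting F = m*a by least squares, which is the maximum-likelihood solution under Gaussian force-sensor noise; all numbers below are invented for illustration, and the real method also accounts for thruster firings and flexible modes.

```python
def estimate_mass(forces, accels):
    # Under iid Gaussian noise on the force measurements, the MLE of m in
    # F = m*a is the least-squares slope: m = sum(a*F) / sum(a*a).
    num = sum(a * f for a, f in zip(accels, forces))
    den = sum(a * a for a in accels)
    return num / den

accels = [0.5, 1.0, 1.5, 2.0]          # hypothetical accelerations, m/s^2
true_mass = 0.25                       # kg, i.e. 250 g of collected sample
noise = [0.001, -0.002, 0.0015, -0.0005]
forces = [true_mass * a + e for a, e in zip(accels, noise)]
m_hat = estimate_mass(forces, accels)  # recovers ~0.25 kg
```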
Likelihood functions for the analysis of single-molecule binned photon sequences
Gopich, Irina V.
2011-01-01
We consider the analysis of a class of experiments in which the number of photons in consecutive time intervals is recorded. Sequences of photon counts or, alternatively, of FRET efficiencies can be studied using likelihood-based methods. For a kinetic model of the conformational dynamics and state-dependent Poisson photon statistics, the formalism to calculate the exact likelihood that this model describes such sequences of photons or FRET efficiencies is developed. Explicit analytic expressions for the likelihood function for a two-state kinetic model are provided. The important special case when conformational dynamics are so slow that at most a single transition occurs in a time bin is considered. By making a series of approximations, we eventually recover the likelihood function used in hidden Markov models. In this way, not only is insight gained into the range of validity of this procedure, but also an improved likelihood function can be obtained. PMID:22711967
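The forward recursion underlying such likelihoods can be sketched for a two-state model with per-bin Poisson emissions; this is the standard hidden-Markov approximation that the paper refines, and the transition matrix and photon rates below are hypothetical.

```python
import math

def bin_likelihood(counts, trans, rates, dt, pi=(0.5, 0.5)):
    # Forward (HMM) recursion: trans[r][s] is the per-bin probability of
    # moving from state r to s; emissions are Poisson with mean rates[s]*dt.
    # The exact treatment would also handle transitions within a bin.
    def emis(s, k):
        mu = rates[s] * dt
        return math.exp(-mu) * mu ** k / math.factorial(k)

    alpha = [pi[s] * emis(s, counts[0]) for s in (0, 1)]
    for k in counts[1:]:
        alpha = [sum(alpha[r] * trans[r][s] for r in (0, 1)) * emis(s, k)
                 for s in (0, 1)]
    return alpha[0] + alpha[1]

trans = [[0.9, 0.1], [0.2, 0.8]]                       # per-bin transitions
lik = bin_likelihood([0, 1, 5, 6], trans, rates=(1.0, 50.0), dt=0.1)
```

Comparing `lik` across candidate rate and transition parameters is the basis for maximum likelihood fitting of the photon trajectory.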
ERIC Educational Resources Information Center
Fennell, Mary L.; And Others
This document is part of a series of chapters described in SO 011 759. This chapter reports the results of Monte Carlo simulations designed to analyze problems of using maximum likelihood estimation (MLE: see SO 011 767) in research models which combine longitudinal and dynamic behavior data in studies of change. Four complications--censoring of…
Computational Methods for Structural Mechanics and Dynamics
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.
The phylogenetic likelihood library.
Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A
2015-03-01
We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL).
Di Maro, Antimo; Citores, Lucía; Russo, Rosita; Iglesias, Rosario; Ferreras, José Miguel
2014-08-01
Ribosome-inactivating proteins (RIPs) from angiosperms are rRNA N-glycosidases that have been proposed as defence proteins against virus and fungi. They have been classified as type 1 RIPs, consisting of single-chain proteins, and type 2 RIPs, consisting of an A chain with RIP properties covalently linked to a B chain with lectin properties. In this work we have carried out a broad search of RIP sequence data banks from angiosperms in order to study their main structural characteristics and phylogenetic evolution. The comparison of the sequences revealed the presence, outside of the active site, of a novel structure that might be involved in the internal protein dynamics linked to enzyme catalysis. Also the B-chains presented another conserved structure that might function either supporting the beta-trefoil structure or in the communication between both sugar-binding sites. A systematic phylogenetic analysis of RIP sequences revealed that the most primitive type 1 RIPs were similar to that of the actual monocots (Poaceae and Asparagaceae). The primitive RIPs evolved to the dicot type 1 related RIPs (like those from Caryophyllales, Lamiales and Euphorbiales). The gene of a type 1 RIP related with the actual Euphorbiaceae type 1 RIPs fused with a double beta trefoil lectin gene similar to the actual Cucurbitaceae lectins to generate the type 2 RIPs and finally this gene underwent deletions rendering either type 1 RIPs (like those from Cucurbitaceae, Rosaceae and Iridaceae) or lectins without A chain (like those from Adoxaceae).
Augmented Likelihood Image Reconstruction.
Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M
2016-01-01
The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim to reduce these artifacts by incorporating information about shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The afore-mentioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporally appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.
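The augmented Lagrangian idea, prior knowledge entering as an equality constraint, can be shown on a scalar toy problem; this is a generic sketch of the optimization scheme, not the CT reconstruction algorithm itself.

```python
def augmented_lagrangian_min(mu=10.0, iters=25):
    # Minimize f(x) = (x - 3)^2 subject to c(x) = x - 1 = 0.
    # Each outer iteration minimizes the augmented Lagrangian
    #   L(x) = (x-3)^2 + lam*(x-1) + (mu/2)*(x-1)^2
    # in closed form, then updates the multiplier: lam += mu * c(x).
    lam = 0.0
    x = 0.0
    for _ in range(iters):
        # Stationarity: 2(x-3) + lam + mu(x-1) = 0
        x = (6.0 - lam + mu) / (2.0 + mu)
        lam += mu * (x - 1.0)
    return x, lam

x, lam = augmented_lagrangian_min()
# Converges to the constrained minimizer x = 1 with multiplier lam = 4,
# matching the KKT condition 2(x-3) + lam = 0 at x = 1.
```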
ERIC Educational Resources Information Center
Lee, Sik-Yum; Xia, Ye-Mao
2006-01-01
By means of more than a dozen user friendly packages, structural equation models (SEMs) are widely used in behavioral, education, social, and psychological research. As the underlying theory and methods in these packages are vulnerable to outliers and distributions with longer-than-normal tails, a fundamental problem in the field is the…
Maximum likelihood versus likelihood-free quantum system identification in the atom maser
NASA Astrophysics Data System (ADS)
Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin
2014-10-01
We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle.
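The contrast between the two estimators in this abstract can be illustrated on a deliberately simple toy problem (a Gaussian mean, not the atom maser model; all names and parameter values below are illustrative): an ABC rejection sampler compared against the analytic maximum likelihood estimate.

```python
import random

def abc_rejection(data, prior_draw, simulate, summary, eps, n_draws, rng):
    """Approximate Bayesian computation by rejection: keep parameter draws
    whose simulated summary statistic lands within eps of the observed one."""
    s_obs = summary(data)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if abs(summary(simulate(theta, len(data), rng)) - s_obs) < eps:
            accepted.append(theta)
    return accepted

# Toy problem: unit-variance Gaussian with unknown mean.
rng = random.Random(0)
data = [rng.gauss(1.5, 1.0) for _ in range(200)]

mle = sum(data) / len(data)  # the MLE of a Gaussian mean is the sample average

posterior = abc_rejection(
    data,
    prior_draw=lambda r: r.uniform(-5.0, 5.0),             # flat prior on mu
    simulate=lambda mu, n, r: [r.gauss(mu, 1.0) for _ in range(n)],
    summary=lambda xs: sum(xs) / len(xs),                  # sample mean
    eps=0.05, n_draws=5000, rng=rng,
)
abc_est = sum(posterior) / len(posterior)
```

Here the sample mean is a sufficient statistic, so ABC tracks the MLE closely; the abstract's point is that for the atom maser some summary statistics (e.g. atom counts) discard information, while others (waiting times, runs of identical detections) do not.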
Ibrahim, Joseph G.
2014-01-01
Multiple Imputation, Maximum Likelihood and Fully Bayesian methods are the three most commonly used model-based approaches in missing data problems. Although it is easy to show that when the responses are missing at random (MAR), the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small sample and asymptotic expressions of the estimates and standard errors, and fully examine how these estimates are related for the three approaches in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. One simulation and a real data set from a liver cancer clinical trial are given to compare the properties of these methods when the responses are MAR. PMID:25309677
Andrews, Steven S; Rutherford, Suzannah
2016-01-01
Experimental measurements require calibration to transform measured signals into physically meaningful values. The conventional approach has two steps: the experimenter deduces a conversion function using measurements on standards and then calibrates (or normalizes) measurements on unknown samples with this function. The deduction of the conversion function from only the standard measurements causes the results to be quite sensitive to experimental noise. It also implies that any data collected without reliable standards must be discarded. Here we show that a "1-step calibration method" reduces these problems for the common situation in which samples are measured in batches, where a batch could be an immunoblot (Western blot), an enzyme-linked immunosorbent assay (ELISA), a sequence of spectra, or a microarray, provided that some sample measurements are replicated across multiple batches. The 1-step method computes all calibration results iteratively from all measurements. It returns the most probable values for the sample compositions under the assumptions of a statistical model, making them the maximum likelihood predictors. It is less sensitive to measurement error on standards and enables use of some batches that do not include standards. In direct comparison of both real and simulated immunoblot data, the 1-step method consistently exhibited smaller errors than the conventional "2-step" method. These results suggest that the 1-step method is likely to be most useful for cases where experimenters want to analyze existing data that are missing some standard measurements and where experimenters want to extract the best results possible from their data. Open source software for both methods is available for download or on-line use.
NASA Astrophysics Data System (ADS)
Olivares, G.; Teferle, F. N.
2013-12-01
Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters beside the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Markov chain Monte Carlo (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters and, thereby, using Monte Carlo integration, estimates all parameters and their uncertainties simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference database of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE; for example, without further computations it provides the spectral index uncertainty, is computationally stable and detects multimodality.
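The MCMC idea can be shown on a toy target (a Gaussian mean with a flat prior, not the power-law noise model of geodetic series; names and tuning values are illustrative) with a bare-bones random-walk Metropolis sampler:

```python
import math
import random

def metropolis(log_post, theta0, step, n_samples, rng, burn=500):
    """Random-walk Metropolis: propose theta' = theta + Normal(0, step),
    accept with probability min(1, post(theta') / post(theta))."""
    theta, lp = theta0, log_post(theta0)
    chain = []
    for i in range(n_samples + burn):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        delta = lp_prop - lp
        if delta >= 0 or rng.random() < math.exp(delta):
            theta, lp = prop, lp_prop
        if i >= burn:
            chain.append(theta)
    return chain

# Toy posterior: mean of unit-variance Gaussian data under a flat prior.
rng = random.Random(1)
data = [rng.gauss(2.0, 1.0) for _ in range(100)]

def log_post(mu):
    return -0.5 * sum((x - mu) ** 2 for x in data)

chain = metropolis(log_post, theta0=0.0, step=0.3, n_samples=4000, rng=rng)
post_mean = sum(chain) / len(chain)
```

The chain itself is the payoff: its spread gives parameter uncertainties (e.g. the spectral index uncertainty mentioned in the abstract) with no extra computation beyond the sampling.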
Salas-Leiva, Dayana E.; Meerow, Alan W.; Calonje, Michael; Griffith, M. Patrick; Francisco-Ortega, Javier; Nakamura, Kyoko; Stevenson, Dennis W.; Lewis, Carl E.; Namoff, Sandra
2013-01-01
Background and aims Despite a recent new classification, a stable phylogeny for the cycads has been elusive, particularly regarding resolution of Bowenia, Stangeria and Dioon. In this study, five single-copy nuclear genes (SCNGs) are applied to the phylogeny of the order Cycadales. The specific aim is to evaluate several gene tree–species tree reconciliation approaches for developing an accurate phylogeny of the order, to contrast them with concatenated parsimony analysis and to resolve the erstwhile problematic phylogenetic position of these three genera. Methods DNA sequences of five SCNGs were obtained for 20 cycad species representing all ten genera of Cycadales. These were analysed with parsimony, maximum likelihood (ML) and three Bayesian methods of gene tree–species tree reconciliation, using Cycas as the outgroup. A calibrated date estimation was developed with Bayesian methods, and biogeographic analysis was also conducted. Key Results Concatenated parsimony, ML and three species tree inference methods resolve exactly the same tree topology with high support at most nodes. Dioon and Bowenia are the first and second branches of Cycadales after Cycas, respectively, followed by an encephalartoid clade (Macrozamia–Lepidozamia–Encephalartos), which is sister to a zamioid clade, of which Ceratozamia is the first branch, and in which Stangeria is sister to Microcycas and Zamia. Conclusions A single, well-supported phylogenetic hypothesis of the generic relationships of the Cycadales is presented. However, massive extinction events inferred from the fossil record that eliminated broader ancestral distributions within Zamiaceae compromise accurate optimization of ancestral biogeographical areas for that hypothesis. While major lineages of Cycadales are ancient, crown ages of all modern genera are no older than 12 million years, supporting a recent hypothesis of mostly Miocene radiations. This phylogeny can contribute to an accurate infrafamilial
Thorn, Graeme J; King, John R
2016-01-01
The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. PMID:26561777
Likelihood and clinical trials.
Hill, G; Forbes, W; Kozak, J; MacNeill, I
2000-03-01
The history of the application of statistical theory to the analysis of clinical trials is reviewed. The current orthodoxy is a somewhat illogical hybrid of the original theory of significance tests of Edgeworth, Karl Pearson, and Fisher, and the subsequent decision theory approach of Neyman, Egon Pearson, and Wald. This hegemony is under threat from Bayesian statisticians. A third approach is that of likelihood, stemming from the work of Fisher and Barnard. This approach is illustrated using hypothetical data from the Lancet articles by Bradford Hill, which introduced clinicians to statistical theory. PMID:10760630
Optical methods in fault dynamics
NASA Astrophysics Data System (ADS)
Uenishi, K.; Rossmanith, H. P.
2003-10-01
The Rayleigh pulse interaction with a pre-stressed, partially contacting interface between similar and dissimilar materials is investigated experimentally as well as numerically. This study is intended to obtain an improved understanding of the interface (fault) dynamics during the earthquake rupture process. Using dynamic photoelasticity in conjunction with high-speed cinematography, snapshots of time-dependent isochromatic fringe patterns associated with Rayleigh pulse-interface interaction are experimentally recorded. It is shown that interface slip (instability) can be triggered dynamically by a pulse which propagates along the interface at the Rayleigh wave speed. For the numerical investigation, the finite difference wave simulator SWIFD is used for solving the problem under different combinations of contacting materials. The effect of the acoustic impedance ratio of the two contacting materials on the wave patterns is discussed. The results indicate that upon interface rupture, Mach (head) waves, which carry a relatively large amount of energy in a concentrated form, can be generated and propagated from the interface contact region (asperity) into the acoustically softer material. Such Mach waves can cause severe damage to a particular region inside an adjacent acoustically softer area. This type of damage concentration might be a possible reason for the generation of the "damage belt" in Kobe, Japan, on the occasion of the 1995 Hyogo-ken Nanbu (Kobe) Earthquake.
Likelihood estimation in image warping
NASA Astrophysics Data System (ADS)
Machado, Alexei M. C.; Campos, Mario F.; Gee, James C.
1999-05-01
The problem of matching two images can be posed as the search for a displacement field which assigns each point of one image to a point in the second image in such a way that a likelihood function is maximized subject to topological constraints. Since the images may be acquired by different scanners, the relationship between their intensity levels is generally unknown. The matching problem is usually solved iteratively by optimization methods. The evaluation of each candidate solution is based on an objective function which favors smooth displacements that yield likely intensity matches. This paper is concerned with the construction of a likelihood function that is derived from the information contained in the data and is thus applicable to data acquired from an arbitrary scanner. The basic assumption of the method is that the pair of images to be matched contains roughly the same proportion of tissues, which will be reflected in their gray-level histograms. Experiments with MRI images corrupted with strong non-linear intensity shading show the method's effectiveness for modeling intensity artifacts. Image matching can thus be made robust to a wide range of intensity degradations.
Glimm's method for gas dynamics
Colella, Phillip
1980-07-01
We investigate Glimm's method, a method for constructing approximate solutions to systems of hyperbolic conservation laws in one space variable by sampling explicit wave solutions. It is extended to several space variables by operator splitting. We consider two problems. 1) We propose a highly accurate form of the sampling procedure, in one space variable, based on the van der Corput sampling sequence. We test the improved sampling procedure numerically in the case of inviscid compressible flow in one space dimension and find that it gives high resolution results both in the smooth parts of the solution and at the discontinuities. 2) We investigate the operator splitting procedure by means of which the multidimensional method is constructed. An O(1) error stemming from the use of this procedure near shocks oblique to the spatial grid is analyzed numerically in the case of the equations for inviscid compressible flow in two space dimensions. We present a hybrid method which eliminates this error, consisting of Glimm's method, used in continuous parts of the flow, and the nonlinear Godunov method, used in regions where large pressure jumps are generated. The resulting method is seen to be a substantial improvement over either of the component methods for multidimensional calculations.
Glimm's method for gas dynamics
Colella, P.
1982-03-01
We investigate Glimm's method, a method for constructing approximate solutions to systems of hyperbolic conservation laws in one space variable by sampling explicit wave solutions. It is extended to several space variables by operator splitting. We consider two problems: (1) we propose a highly accurate form of the sampling procedure, in one space variable, based on the van der Corput sampling sequence. We test the improved sampling procedure numerically in the case of inviscid compressible flow in one space dimension and find that it gives high resolution results both in the smooth parts of the solution, as well as at discontinuities; (2) we investigate the operator splitting procedure by means of which the multidimensional method is constructed. An O(1) error stemming from the use of this procedure near shocks oblique to the spatial grid is analyzed numerically in the case of the equations for inviscid compressible flow in two space dimensions. We present a hybrid method which eliminates this error, consisting of Glimm's method, used in continuous parts of the flow, and the nonlinear Godunov method, used in regions where large pressure jumps are generated. The resulting method is seen to be a substantial improvement over either of the component methods for multidimensional calculations.
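The van der Corput sequence behind the improved sampling procedure in these two records is short to generate: the base-b digits of the index are mirrored about the radix point, giving a deterministic, low-discrepancy fill of the unit interval. A sketch:

```python
def van_der_corput(n, base=2):
    """n-th term of the van der Corput low-discrepancy sequence: the
    base-`base` digits of n, reversed about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

seq = [van_der_corput(n) for n in range(1, 8)]
# seq == [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
```

Each successive term lands in the largest remaining gap, which is why it equidistributes far faster than pseudo-random sampling in Glimm's scheme.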
NASA Technical Reports Server (NTRS)
Klein, V.
1979-01-01
Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.
Maximum Likelihood Estimation of Multivariate Polyserial and Polychoric Correlation Coefficients.
ERIC Educational Resources Information Center
Poon, Wai-Yin; Lee, Sik-Yum
1987-01-01
Reparameterization is used to find the maximum likelihood estimates of parameters in a multivariate model having some component variable observable only in polychotomous form. Maximum likelihood estimates are found by a Fletcher Powell algorithm. In addition, the partition maximum likelihood method is proposed and illustrated. (Author/GDC)
Model Fit after Pairwise Maximum Likelihood
Barendse, M. T.; Ligtvoet, R.; Timmerman, M. E.; Oort, F. J.
2016-01-01
Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136
Numerical methods for molecular dynamics
Skeel, R.D.
1991-01-01
This report summarizes our research progress to date on the use of multigrid methods for three-dimensional elliptic partial differential equations, with particular emphasis on application to the Poisson-Boltzmann equation of molecular biophysics. This research is motivated by the need for fast and accurate numerical solution techniques for three-dimensional problems arising in physics and engineering. In many applications these problems must be solved repeatedly, and the extremely large number of discrete unknowns required to accurately approximate solutions to partial differential equations in three-dimensional regions necessitates the use of efficient solution methods. This situation makes clear the importance of developing methods which are of optimal order (or nearly so), meaning that the number of operations required to solve the discrete problem is on the order of the number of discrete unknowns. Multigrid methods are generally regarded as being in this class of methods, and are in fact provably optimal order for an increasingly large class of problems. The fundamental goal of this research is to develop a fast and accurate numerical technique, based on multi-level principles, for the solutions of the Poisson-Boltzmann equation of molecular biophysics and similar equations occurring in other applications. An outline of the report is as follows. We first present some background material, followed by a survey of the literature on the use of multigrid methods for solving problems similar to the Poisson-Boltzmann equation. A short description of the software we have developed so far is then given, and numerical results are discussed. Finally, our research plans for the coming year are presented.
Galerkin Method for Nonlinear Dynamics
NASA Astrophysics Data System (ADS)
Noack, Bernd R.; Schlegel, Michael; Morzynski, Marek; Tadmor, Gilead
A Galerkin method is presented for control-oriented reduced-order models (ROM). This method generalizes linear approaches elaborated by M. Morzyński et al. for the nonlinear Navier-Stokes equation. These ROM are used as plants for control design in the chapters by G. Tadmor et al., S. Siegel, and R. King in this volume. Focus is placed on empirical ROM which compress flow data in the proper orthogonal decomposition (POD). The chapter shall provide a complete description for construction of straight-forward ROM as well as the physical understanding and teste
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Mehra, R. K.
1974-01-01
This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.
Molecular dynamic simulation methods for anisotropic liquids.
Aoki, Keiko M; Yoneya, Makoto; Yokoyama, Hiroshi
2004-03-22
Methods of molecular dynamics simulations for anisotropic molecules are presented. The new methods, with an anisotropic factor in the cell dynamics, dramatically reduce the artifacts related to cell shapes and overcome the difficulties of simulating anisotropic molecules under constant hydrostatic pressure or constant volume. The methods are especially effective for anisotropic liquids, such as smectic liquid crystals and membranes, in which the stacks of layers are compressible (elastic in the direction perpendicular to the layers) while the layer itself is liquid and elastic only under uniform compressive force. The methods can be used for crystals and isotropic liquids as well.
Quasi-likelihood for Spatial Point Processes
Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus
2014-01-01
Fitting regression models for intensity functions of spatial point processes is of great interest in ecological and epidemiological studies of association between spatially referenced events and geographical or environmental covariates. When Cox or cluster process models are used to accommodate clustering not accounted for by the available covariates, likelihood based inference becomes computationally cumbersome due to the complicated nature of the likelihood function and the associated score function. It is therefore of interest to consider alternative more easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation which in practice is solved numerically. The derivation of the optimal estimating function has close similarities to the derivation of quasi-likelihood for standard data sets. The approximate solution is further equivalent to a quasi-likelihood score for binary spatial data. We therefore use the term quasi-likelihood for our optimal estimating function approach. We demonstrate in a simulation study and a data example that our quasi-likelihood method for spatial point processes is both statistically and computationally efficient. PMID:26041970
Disequilibrium mapping: Composite likelihood for pairwise disequilibrium
Devlin, B.; Roeder, K.; Risch, N.
1996-08-15
The pattern of linkage disequilibrium between a disease locus and a set of marker loci has been shown to be a useful tool for geneticists searching for disease genes. Several methods have been advanced to utilize the pairwise disequilibrium between the disease locus and each of a set of marker loci. However, none of the methods take into account the information from all pairs simultaneously while also modeling the variability in the disequilibrium values due to the evolutionary dynamics of the population. We propose a composite likelihood (CL) model that has these features when the physical distances between the marker loci are known or can be approximated. In this instance, and assuming that there is a single disease mutation, the CL model depends on only three parameters: the recombination fraction between the disease locus and an arbitrary marker locus, θ; the age of the mutation; and a variance parameter. When the CL is maximized over a grid of θ, it provides a graph that can direct the search for the disease locus. We also show how the CL model can be generalized to account for multiple disease mutations. Evolutionary simulations demonstrate the power of the analyses, as well as their potential weaknesses. Finally, we analyze the data from two mapped diseases, cystic fibrosis and diastrophic dysplasia, finding that the CL method performs well in both cases. 28 refs., 6 figs., 4 tabs.
Dynamic discretization method for solving Kepler's equation
NASA Astrophysics Data System (ADS)
Feinstein, Scott A.; McLaughlin, Craig A.
2006-09-01
Kepler's equation needs to be solved many times for a variety of problems in celestial mechanics. Therefore, computing the solution to Kepler's equation in an efficient manner is of great importance to that community. There are some historical and many modern methods that address this problem. Of the methods known to the authors, Fukushima's discretization technique performs the best. By taking more of a systems approach and combining discretization with the standard computer science technique known as dynamic programming, we were able to achieve even better performance than Fukushima's method. We begin by defining Kepler's equation for the elliptical case and describe existing solution methods. We then present our dynamic discretization method and show the results of a comparative analysis. This analysis demonstrates that, for the conditions of our tests, dynamic discretization performs the best.
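For reference, the baseline against which such discretization schemes are measured is a plain Newton iteration on Kepler's equation M = E - e·sin(E) for the eccentric anomaly E; a minimal sketch (the starting-guess rule is a common heuristic, not taken from this record):

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    by Newton's method; the starting guess E = M is adequate for moderate
    eccentricity, with E = pi as a safer start for highly eccentric orbits."""
    E = M if e < 0.8 else math.pi
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M           # residual of Kepler's equation
        E -= f / (1.0 - e * math.cos(E))      # Newton update
        if abs(f) < tol:
            break
    return E

E = solve_kepler(M=1.0, e=0.3)
# Sanity check: E - e*sin(E) should reproduce M to within the tolerance.
```

Newton converges in a handful of iterations per call, but the cost of the repeated sin/cos evaluations is exactly what discretization-plus-lookup approaches like Fukushima's (and the dynamic discretization above) aim to amortize.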
Geometric methods in computational fluid dynamics. [turbomachinery
NASA Technical Reports Server (NTRS)
Eiseman, P. R.
1980-01-01
General methods for the construction of geometric computational fluid dynamic algorithms are presented which simulate a variety of flow fields in various nontrivial regions. Included are: basic developments with tensors; various forms for the equations of motion; generalized numerical methods and boundary conditions; and methods for mesh generation to meet the strong geometric constraints of turbomachines. Coordinate generation is shown generally to yield mesh descriptions from one or more transformations that are smoothly joined together to form a composite mesh.
Sun, Yanqing; Sundaram, Rajeshwari; Zhao, Yichuan
2009-01-01
The Cox model with time-dependent coefficients has been studied by a number of authors recently. In this paper, we develop empirical likelihood (EL) pointwise confidence regions for the time-dependent regression coefficients via local partial likelihood smoothing. The EL simultaneous confidence bands for a linear combination of the coefficients are also derived based on the strong approximation methods. The empirical likelihood ratio is formulated through the local partial log-likelihood for the regression coefficient functions. Our numerical studies indicate that the EL pointwise/simultaneous confidence regions/bands have satisfactory finite sample performances. Compared with the confidence regions derived directly based on the asymptotic normal distribution of the local constant estimator, the EL confidence regions are overall tighter and can better capture the curvature of the underlying regression coefficient functions. Two data sets, the gastric cancer data and the Mayo Clinic primary biliary cirrhosis data, are analyzed using the proposed method. PMID:19838322
Growing local likelihood network: Emergence of communities
NASA Astrophysics Data System (ADS)
Chen, S.; Small, M.
2015-10-01
In many real situations, networks grow only via local interactions. New nodes are added to the growing network with information only pertaining to a small subset of existing nodes. Multilevel marketing, social networks, and disease models can all be depicted as growing networks based on local (network path-length) distance information. In these examples, all nodes whose distance from a chosen center is less than d form a subgraph. Hence, we grow networks with information only from these subgraphs. Moreover, we use a likelihood-based method, where at each step we modify the networks by changing their likelihood to be closer to the expected degree distribution. Combining the local information and the likelihood method, we grow networks that exhibit novel features. We discover that the likelihood method, over certain parameter ranges, can generate networks with highly modulated communities, even when global information is not available. Communities and clusters are abundant in real-life networks, and the method proposed here provides a natural mechanism for the emergence of communities in scale-free networks. In addition, the algorithmic implementation of network growth via local information is substantially faster than global methods and allows for the exploration of much larger networks.
Simple dynamics for broad histogram method
NASA Astrophysics Data System (ADS)
de Oliveira, Paulo Murilo Castro
2002-08-01
The purpose of this text is: (1) to clarify the foundations of the broad histogram method, stressing the conceptual differences between it and reweighting procedures in general; and (2) to propose a very simple microcanonical dynamic rule, yet to be tested on theoretical grounds, which could provide a good improvement to numerical simulations.
Method for monitoring slow dynamics recovery
NASA Astrophysics Data System (ADS)
Haller, Kristian C. E.; Hedberg, Claes M.
2012-11-01
Slow dynamics is a specific material property, connected for example to the degree of damage. It is therefore important to be able to measure it properly. It has usually been monitored by acoustic resonance methods, which as such have very high sensitivity. However, because the acoustic wave acts both as conditioner and as probe, the measurement affects the result, mixing the fast nonlinear response to the excitation with the slow-dynamics material recovery. In this article a method is introduced which, for the first time, removes the fast dynamics from the process and allows the behavior of the slow dynamics to be monitored by itself. The new method has the ability to measure at the shortest possible recovery times, and at very small conditioning strains. For the lowest strains the sound speed increases with strain, while at higher strains a linearly decreasing dependence is observed. This is the first method and test that has been able to monitor the true material state recovery process.
Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1985-01-01
Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
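The output-error idea described above can be sketched on a toy first-order system: under Gaussian measurement noise, maximizing the likelihood of a decay parameter reduces to minimizing a sum-of-squares cost between measured and simulated responses. The system, noise level, and grid-search minimizer below are illustrative assumptions, not the flight-data setup of the report.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 5.0, 0.01)
a_true = 0.5
# noisy measurements of the response x' = -a x, x(0) = 1
y = np.exp(-a_true * t) + 0.01 * rng.standard_normal(t.size)

def cost(a):
    """Negative log-likelihood up to a constant (Gaussian noise -> output error)."""
    r = y - np.exp(-a * t)
    return 0.5 * np.sum(r * r)

# coarse grid search for the minimum, then a refinement pass around it
grid = np.linspace(0.1, 1.0, 91)
a0 = grid[np.argmin([cost(a) for a in grid])]
fine = np.linspace(a0 - 0.01, a0 + 0.01, 201)
a_hat = fine[np.argmin([cost(a) for a in fine])]
```

Plotting `cost` over the grid reproduces, in miniature, the cost-function pictures the report uses to illustrate the minimization process.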
Interfacial gauge methods for incompressible fluid dynamics.
Saye, Robert
2016-06-01
Designing numerical methods for incompressible fluid flow involving moving interfaces, for example, in the computational modeling of bubble dynamics, swimming organisms, or surface waves, presents challenges due to the coupling of interfacial forces with incompressibility constraints. A class of methods, denoted interfacial gauge methods, is introduced for computing solutions to the corresponding incompressible Navier-Stokes equations. These methods use a type of "gauge freedom" to reduce the numerical coupling between fluid velocity, pressure, and interface position, allowing high-order accurate numerical methods to be developed more easily. Making use of an implicit mesh discontinuous Galerkin framework, developed in tandem with this work, high-order results are demonstrated, including surface tension dynamics in which fluid velocity, pressure, and interface geometry are computed with fourth-order spatial accuracy in the maximum norm. Applications are demonstrated with two-phase fluid flow displaying fine-scaled capillary wave dynamics, rigid body fluid-structure interaction, and a fluid-jet free surface flow problem exhibiting vortex shedding induced by a type of Plateau-Rayleigh instability. The developed methods can be generalized to other types of interfacial flow and facilitate precise computation of complex fluid interface phenomena. PMID:27386567
A Likelihood-Based SLIC Superpixel Algorithm for SAR Images Using Generalized Gamma Distribution.
Zou, Huanxin; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng
2016-01-01
The simple linear iterative clustering (SLIC) method is a recently proposed and popular superpixel algorithm. However, this method may generate bad superpixels for synthetic aperture radar (SAR) images due to effects of speckle and the large dynamic range of pixel intensity. In this paper, an improved SLIC algorithm for SAR images is proposed. This algorithm exploits the likelihood information of SAR image pixel clusters. Specifically, a local clustering scheme combining intensity similarity with spatial proximity is proposed. Additionally, for post-processing, a local edge-evolving scheme that combines spatial context and likelihood information is introduced as an alternative to the connected components algorithm. To estimate the likelihood information of SAR image clusters, we incorporated a generalized gamma distribution (GΓD). Finally, the superiority of the proposed algorithm was validated using both simulated and real-world SAR images. PMID:27438840
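A minimal sketch of the SLIC-style distance that combines intensity similarity with spatial proximity, which the improved algorithm builds on. The paper's generalized-gamma likelihood term and edge-evolving post-processing are omitted; the compactness weight `m` and grid interval `S` follow the usual SLIC convention, and the synthetic image and seed placement are invented for illustration.

```python
import numpy as np

def slic_assign(img, centers, m=1.0, S=4.0):
    """One SLIC assignment pass: D^2 = d_intensity^2 + (m * d_spatial / S)^2."""
    H, W = img.shape
    labels = np.zeros((H, W), dtype=int)
    for r in range(H):
        for c in range(W):
            best, best_d = 0, np.inf
            for k, (cr, cc, ci) in enumerate(centers):
                di = img[r, c] - ci               # intensity distance
                ds = np.hypot(r - cr, c - cc)     # spatial distance
                d = di * di + (m * ds / S) ** 2
                if d < best_d:
                    best, best_d = k, d
            labels[r, c] = best
    return labels

# synthetic two-region image: dark left half, bright right half
img = np.zeros((8, 8))
img[:, 4:] = 1.0
centers = [(4.0, 2.0, 0.0), (4.0, 5.0, 1.0)]      # (row, col, intensity) seeds
labels = slic_assign(img, centers)
```

In full SLIC this assignment alternates with center updates and the search is restricted to a 2S x 2S window around each center; for SAR data the paper replaces the Euclidean intensity term with a GΓD likelihood term.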
Evaluation of Dynamic Methods for Earthwork Assessment
NASA Astrophysics Data System (ADS)
Vlček, Jozef; Ďureková, Dominika; Zgútová, Katarína
2015-05-01
Rapid development of road construction demands fast, high-quality methods for earthwork evaluation. Dynamic methods are now adopted in numerous civil engineering sectors. In particular, evaluation of earthwork quality can be sped up using dynamic equipment. This paper presents the results of parallel measurements with selected devices for determining the level of compaction of soils. The measurements were used to develop correlations between the values obtained from the various apparatuses. The correlations show that the examined apparatuses are suitable for assessing the compaction level of fine-grained soils, within the boundary conditions of the equipment used. The presented methods are quick, results are available immediately after measurement, and they are thus well suited to construction works that must be completed in a short period of time.
Spectral Methods for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Zang, T. A.; Streett, C. L.; Hussaini, M. Y.
1994-01-01
As a tool for large-scale computations in fluid dynamics, spectral methods were prophesied in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral
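The transform-method idea alluded to above (differentiate in spectral space, evaluate nonlinear products in physical space) can be sketched in a few lines; the grid size and test function are arbitrary choices.

```python
import numpy as np

N = 64
x = 2.0 * np.pi * np.arange(N) / N            # periodic grid on [0, 2*pi)
u = np.sin(x)

k = np.fft.fftfreq(N, d=1.0 / N)              # integer wavenumbers 0..N/2-1, -N/2..-1
du = np.fft.ifft(1j * k * np.fft.fft(u)).real  # derivative computed in spectral space

# the transform-method trick: form the nonlinear term u*u_x pointwise in
# physical space instead of as an expensive spectral convolution
uux = u * du

err = np.max(np.abs(du - np.cos(x)))           # spectral accuracy for smooth u
```

For a smooth periodic function the derivative is accurate to machine precision, illustrating why the solution "needed to be smooth"; dealiasing of the product term is omitted here.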
Mesoscopic Simulation Methods for Polymer Dynamics
NASA Astrophysics Data System (ADS)
Larson, Ronald
2015-03-01
We assess the accuracy and efficiency of mesoscopic simulation methods, namely Brownian Dynamics (BD), Stochastic Rotation Dynamics (SRD) and Dissipative Particle Dynamics (DPD), for polymers in solution at equilibrium and in flows in microfluidic geometries. Both SRD and DPD use solvent ``particles'' to carry momentum, and so account automatically for hydrodynamic interactions both within isolated polymer coils, and with other polymer molecules and with nearby solid boundaries. We assess quantitatively the effects of artificial particle inertia and fluid compressibility and show that they can be made small with appropriate choice of simulation parameters. We then use these methods to study flow-induced migration of polymer chains produced by: 1) hydrodynamic interactions, 2) streamline curvature or stress-gradients, and 3) convection of wall depletion zones. We show that huge concentration gradients can be produced by these mechanisms in microfluidic geometries that can be exploited for separation of polymers by size in periodic contraction-expansion geometries. We also assess the range of conditions for which BD, SRD or DPD is preferable for mesoscopic simulations. Finally, we show how such methods can be used to simulate quantitatively the swimming of micro-organisms such as E. coli. In collaboration with Lei Jiang and Tongyang Zhao, University of Michigan, Ann Arbor, MI.
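A minimal Brownian Dynamics sketch, assuming an overdamped particle in a harmonic well with unit friction and unit kT (an Ornstein-Uhlenbeck process, far simpler than the polymer chains discussed above): the Euler-Maruyama update should reproduce the stationary variance D/k.

```python
import numpy as np

rng = np.random.default_rng(0)
k, D, dt, nsteps = 1.0, 1.0, 0.01, 200_000

x = 0.0
samples = np.empty(nsteps)
for n in range(nsteps):
    # Euler-Maruyama step of overdamped Langevin dynamics:
    # dx = -k x dt + sqrt(2 D dt) * xi,  xi ~ N(0, 1)
    x += -k * x * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    samples[n] = x

var = samples[nsteps // 10:].var()   # discard burn-in; stationary variance ~ D/k
```

BD treats the solvent implicitly through this noise term; SRD and DPD instead carry momentum with explicit solvent particles, which is what buys them hydrodynamic interactions at extra cost.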
Development of semiclassical molecular dynamics simulation method.
Nakamura, Hiroki; Nanbu, Shinkoh; Teranishi, Yoshiaki; Ohta, Ayumi
2016-04-28
Various quantum mechanical effects such as nonadiabatic transitions, quantum mechanical tunneling and coherence play crucial roles in a variety of chemical and biological systems. In this paper, we propose a method to incorporate tunneling effects into the molecular dynamics (MD) method, which is purely based on classical mechanics. Caustics, which define the boundary between classically allowed and forbidden regions, are detected along classical trajectories and the optimal tunneling path with minimum action is determined by starting from each appropriate caustic. The real phase associated with tunneling can also be estimated. Numerical demonstration with use of a simple collinear chemical reaction O + HCl → OH + Cl is presented in order to help the reader to well comprehend the method proposed here. Generalization to the on-the-fly ab initio version is rather straightforward. By treating the nonadiabatic transitions at conical intersections by the Zhu-Nakamura theory, new semiclassical MD methods can be developed. PMID:27067383
Comparing Methods for Dynamic Airspace Configuration
NASA Technical Reports Server (NTRS)
Zelinski, Shannon; Lai, Chok Fung
2011-01-01
This paper compares airspace design solutions for dynamically reconfiguring airspace in response to nominal daily traffic volume fluctuation. Airspace designs from seven algorithmic methods and a representation of current day operations in Kansas City Center were simulated with two times today's demand traffic. A three-configuration scenario was used to represent current day operations. Algorithms used projected unimpeded flight tracks to design initial 24-hour plans to switch between three configurations at predetermined reconfiguration times. At each reconfiguration time, algorithms used updated projected flight tracks to update the subsequent planned configurations. Compared to the baseline, most airspace design methods reduced delay and increased reconfiguration complexity, with similar traffic pattern complexity results. Design updates enabled several methods to cut delay by as much as half relative to their original designs. Freeform design methods reduced delay and increased reconfiguration complexity the most.
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; Hommes, G.; Aubry, S.; Arsenlis, A.
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
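The solver comparison described above can be illustrated on a scalar stiff ODE y' = -50y: one implicit trapezoidal step solved both by plain fixed-point iteration and by Newton's method. The toy problem and tolerances are illustrative; real dislocation dynamics systems are large, nonlinear, and use accelerated variants.

```python
def f(y):
    return -50.0 * y        # stiff linear test problem y' = -50 y

def fprime(y):
    return -50.0

def trap_residual(y_next, y_n, dt):
    """Residual of the implicit trapezoidal step."""
    return y_next - y_n - 0.5 * dt * (f(y_n) + f(y_next))

def solve_fixed_point(y_n, dt, tol=1e-12, maxit=100):
    y = y_n
    for it in range(1, maxit + 1):
        y_new = y_n + 0.5 * dt * (f(y_n) + f(y))
        if abs(y_new - y) < tol:
            return y_new, it
        y = y_new
    return y, maxit

def solve_newton(y_n, dt, tol=1e-12, maxit=100):
    y = y_n
    for it in range(1, maxit + 1):
        r = trap_residual(y, y_n, dt)
        drdy = 1.0 - 0.5 * dt * fprime(y)   # Jacobian of the residual
        y_new = y - r / drdy
        if abs(y_new - y) < tol:
            return y_new, it
        y = y_new
    return y, maxit
```

For this linear problem the exact trapezoidal update from y = 1 with dt = 0.01 is (1 + z/2)/(1 - z/2) = 0.6 with z = -0.5; fixed point contracts only by the factor |z|/2 = 0.25 per iteration, while Newton lands on the answer immediately, which is the qualitative gap the paper exploits.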
Maximum likelihood solution for inclination-only data in paleomagnetism
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2010-08-01
We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
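The exponential cancellation mentioned above can be illustrated on one factor of the kind that appears in Fisher-type marginal likelihoods, κ/(2 sinh κ): evaluated naively it overflows for large precision κ, but taking logs and cancelling e^κ analytically keeps it finite everywhere. This is a one-term illustration, not the authors' full inclination-only likelihood.

```python
import math

def log_kappa_over_2sinh_naive(kappa):
    # overflows once sinh(kappa) exceeds the float range (kappa ~ 710)
    return math.log(kappa / (2.0 * math.sinh(kappa)))

def log_kappa_over_2sinh_stable(kappa):
    # sinh(k) = (e^k - e^-k)/2, so
    # log sinh(k) = k - log 2 + log1p(-exp(-2k));
    # the e^k factor cancels analytically against the log
    return math.log(kappa) - kappa - math.log1p(-math.exp(-2.0 * kappa))
```

The same pattern (factor out the dominant exponential, keep only `log1p`/`expm1` corrections) is what allows a log-likelihood containing such terms to be evaluated, and differentiated, anywhere in the parameter space.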
NASA Astrophysics Data System (ADS)
Shang, Yilun
2016-08-01
How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.
Schwarz method for earthquake source dynamics
Badea, Lori Ionescu, Ioan R. Wolf, Sylvie
2008-04-01
Dynamic faulting under slip-dependent friction in a linear elastic domain (in-plane and 3D configurations) is considered. The use of an implicit time-stepping scheme (Newmark method) allows much larger values of the time step than the critical CFL time step, and higher accuracy to handle the non-smoothness of the interface constitutive law (slip weakening friction). The finite element form of the quasi-variational inequality is solved by a Schwarz domain decomposition method, by separating the inner nodes of the domain from the nodes on the fault. In this way, the quasi-variational inequality splits into two subproblems. The first one is a large linear system of equations, and its unknowns are related to the mesh nodes of the first subdomain (i.e. lying inside the domain). The unknowns of the second subproblem are the degrees of freedom of the mesh nodes of the second subdomain (i.e. lying on the domain boundary where the conditions of contact and friction are imposed). This nonlinear subproblem is solved by the same Schwarz algorithm, leading to some local nonlinear subproblems of a very small size. Numerical experiments are performed to illustrate convergence in time and space, instability capturing, energy dissipation and the influence of normal stress variations. We have used the proposed numerical method to compute source dynamics phenomena on complex and realistic 2D fault models (branched fault systems).
Dynamic data filtering system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-04-29
A computer-implemented dynamic data filtering system and method for selectively choosing operating data of a monitored asset that modifies or expands a learned scope of an empirical model of normal operation of the monitored asset while simultaneously rejecting operating data of the monitored asset that is indicative of excessive degradation or impending failure of the monitored asset, and utilizing the selectively chosen data for adaptively recalibrating the empirical model to more accurately monitor asset aging changes or operating condition changes of the monitored asset.
On methods for studying stochastic disease dynamics.
Keeling, M J; Ross, J V
2008-02-01
Models that deal with the individual level of populations have shown the importance of stochasticity in ecology, epidemiology and evolution. An increasingly common approach to studying these models is through stochastic (event-driven) simulation. One striking disadvantage of this approach is the need for a large number of replicates to determine the range of expected behaviour. Here, for a class of stochastic models called Markov processes, we present results that overcome this difficulty and provide valuable insights, but which have been largely ignored by applied researchers. For these models, the so-called Kolmogorov forward equation (also called the ensemble or master equation) allows one to simultaneously consider the probability of each possible state occurring. Irrespective of the complexities and nonlinearities of population dynamics, this equation is linear and has a natural matrix formulation that provides many analytical insights into the behaviour of stochastic populations and allows rapid evaluation of process dynamics. Here, using epidemiological models as a template, these ensemble equations are explored and results are compared with traditional stochastic simulations. In addition, we describe further advantages of the matrix formulation of dynamics, providing simple exact methods for evaluating expected eradication (extinction) times of diseases, for comparing expected total costs of possible control programmes and for estimation of disease parameters. PMID:17638650
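The matrix formulation described above makes expected extinction times a single linear solve rather than many stochastic replicates. The sketch below assumes a standard SIS (susceptible-infected-susceptible) birth-death chain with infection rate βi(N − i)/N and recovery rate γi; restricted to the transient states i = 1..N, the generator Q_T satisfies Q_T τ = −1 for the vector τ of expected extinction times.

```python
import numpy as np

def sis_extinction_times(N, beta, gamma):
    """Expected time to disease extinction from each initial state, exactly.

    States i = 1..N infected are transient; i = 0 is absorbing (eradication).
    Solves Q_T tau = -1, where Q_T is the generator on the transient states.
    """
    A = np.zeros((N, N))
    for i in range(1, N + 1):
        up = beta * i * (N - i) / N      # infection event:  i -> i + 1
        down = gamma * i                 # recovery event:   i -> i - 1
        A[i - 1, i - 1] = -(up + down)
        if i < N:
            A[i - 1, i] = up
        if i > 1:
            A[i - 1, i - 2] = down       # recovery from i = 1 is absorption
    return np.linalg.solve(A, -np.ones(N))
```

For N = 20, β = 2, γ = 1 this returns twenty exact expectations in one solve; a simulation-based estimate of the same quantities would need thousands of replicates per initial condition.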
Dynamics of reactive collisions by optical methods
NASA Astrophysics Data System (ADS)
Ureña, A. González; Vetter, R.
This paper reviews recent developments in the study of reactive collisions using optical methods. Although the basic approach is from the experimental viewpoint, attention is paid to the conceptual and theoretical aspects of the physics underlying modern reaction dynamics. After a brief summary of basic concepts and definitions on both scalar and vectorial quantities characterizing the chemical reaction, a significant part of this paper describes the recent achievements using laser techniques, mainly via laser-induced fluorescence and chemiluminescence. Both high-resolution crossed-beam and high-resolution bulb studies are presented in a complementary fashion, as they provide a detailed picture of reaction dynamics through the measurement of quantum state specific differential cross-sections. Specific examples include the use of Doppler resolved laser-induced fluorescence, multiphoton ionization or CARS studies. Some examples are also included based on the use of product imaging techniques, the novel approach of obtaining quantum state resolved differential cross-sections for chemical reactions. In addition, new data on the collision energy dependence of the collision cross-section, i.e. the excitation function, obtained by highly sensitive collision energy cross-beam techniques is also presented and reviewed. Another part of the paper is dedicated to recent advances in the study of reaction dynamics using electronically excited species. Emphasis is placed not only on the opening of new channels for chemical reactions but also on the possible outcome of the reaction products associated with the different symmetries of the excited potential energy surfaces. Finally, a section is dedicated to recent developments in studies carried out in the area of van der Waals and cluster reactions. The possibility of clocking the chemical act as well as very efficient trapping of reaction intermediates is illustrated with some examples. Throughout the whole paper care is taken to
An empirical method for dynamic camouflage assessment
NASA Astrophysics Data System (ADS)
Blitch, John G.
2011-06-01
As camouflage systems become increasingly sophisticated in their potential to conceal military personnel and precious cargo, evaluation methods need to evolve as well. This paper presents an overview of one such attempt to explore alternative methods for empirical evaluation of dynamic camouflage systems which aspire to keep pace with a soldier's movement through rapidly changing environments that are typical of urban terrain. Motivating factors are covered first, followed by a description of the Blitz Camouflage Assessment (BCA) process and results from an initial proof of concept experiment conducted in November 2006. The conclusion drawn from these results, related literature and the author's personal experience suggest that operational evaluation of personal camouflage needs to be expanded beyond its foundation in signal detection theory and embrace the challenges posed by high levels of cognitive processing.
A dynamic transformation method for modal synthesis.
NASA Technical Reports Server (NTRS)
Kuhar, E. J.; Stahle, C. V.
1973-01-01
This paper presents a condensation method for large discrete parameter vibration analysis of complex structures that greatly reduces truncation errors and provides accurate definition of modes in a selected frequency range. A dynamic transformation is obtained from the partitioned equations of motion that relates modes not explicitly in the condensed solution to the retained modes at a selected system frequency. The generalized mass and stiffness matrices, obtained with existing modal synthesis methods, are reduced using this transformation and solved. Revised solutions are then obtained using new transformations at the calculated eigenvalues and are also used to assess the accuracy of the results. If all the modes of interest have not been obtained, the results are used to select a new set of retained coordinates and a new transformation frequency, and the procedure is repeated for another group of modes.
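The dynamic transformation can be sketched on a small spring-mass chain: slave DOFs are expressed through retained DOFs via x_s = −(K_ss − ω²M_ss)⁻¹(K_sr − ω²M_sr)x_r, where ω² = 0 recovers static (Guyan) condensation and a second pass at the first computed eigenvalue gives the dynamic update. The chain, the partition, and the two-pass loop below are illustrative choices, not the paper's modal-synthesis setting.

```python
import numpy as np

def condense(K, M, retained, w2):
    """Reduce (K, M) to the retained DOFs with the dynamic transformation at w2."""
    n = K.shape[0]
    r = list(retained)
    s = [i for i in range(n) if i not in r]
    Dss = K[np.ix_(s, s)] - w2 * M[np.ix_(s, s)]
    Dsr = K[np.ix_(s, r)] - w2 * M[np.ix_(s, r)]
    T = np.zeros((n, len(r)))
    T[r, np.arange(len(r))] = 1.0               # retained DOFs map to themselves
    T[s, :] = -np.linalg.solve(Dss, Dsr)        # slaves follow the transformation
    return T.T @ K @ T, T.T @ M @ T

def lowest_eig(K, M):
    """Smallest generalized eigenvalue of (K, M) via a Cholesky reduction."""
    L = np.linalg.cholesky(M)
    Li = np.linalg.inv(L)
    return np.linalg.eigvalsh(Li @ K @ Li.T)[0]

# fixed-free chain of 6 unit masses and unit springs
n = 6
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[-1, -1] = 1.0
M = np.eye(n)
exact = np.linalg.eigvalsh(K)[0]

lam0 = lowest_eig(*condense(K, M, [1, 3, 5], 0.0))    # static (Guyan) pass
lam1 = lowest_eig(*condense(K, M, [1, 3, 5], lam0))   # dynamic update at lam0
```

The second pass mirrors the paper's iteration: re-evaluating the transformation at the calculated eigenvalue sharpens the estimate, and the change between passes indicates how accurate the condensed solution is.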
Likelihood reinstates Archaeopteryx as a primitive bird
Lee, Michael S. Y.; Worthy, Trevor H.
2012-01-01
The widespread view that Archaeopteryx was a primitive (basal) bird has been recently challenged by a comprehensive phylogenetic analysis that placed Archaeopteryx with deinonychosaurian theropods. The new phylogeny suggested that typical bird flight (powered by the front limbs only) either evolved at least twice, or was lost/modified in some deinonychosaurs. However, this parsimony-based result was acknowledged to be weakly supported. Maximum-likelihood and related Bayesian methods applied to the same dataset yield a different and more orthodox result: Archaeopteryx is restored as a basal bird with bootstrap frequency of 73 per cent and posterior probability of 1. These results are consistent with a single origin of typical (forelimb-powered) bird flight. The Archaeopteryx–deinonychosaur clade retrieved by parsimony is supported by more characters (which are on average more homoplasious), whereas the Archaeopteryx–bird clade retrieved by likelihood-based methods is supported by fewer characters (but on average less homoplasious). Both positions for Archaeopteryx remain plausible, highlighting the hazy boundary between birds and advanced theropods. These results also suggest that likelihood-based methods (in addition to parsimony) can be useful in morphological phylogenetics. PMID:22031726
A special perturbation method in orbital dynamics
NASA Astrophysics Data System (ADS)
Peláez, Jesús; Hedo, José Manuel; Rodríguez de Andrés, Pedro
2007-02-01
The special perturbation method considered in this paper combines simplicity of computer implementation, speed and precision, and can propagate the orbit of any material particle. The paper describes the evolution of some orbital elements based on Euler parameters, which are constants in the unperturbed problem, but which evolve in the time scale imposed by the perturbation. The variation of parameters technique is used to develop expressions for the derivatives of seven elements for the general case, which includes any type of perturbation. These basic differential equations are slightly modified by introducing one additional equation for the time, reaching a total order of eight. The method was developed in the Grupo de Dinámica de Tethers (GDT) of the UPM, as a tool for dynamic simulations of tethers. However, it can be used in any other field and with any kind of orbit and perturbation. It is free of singularities related to small inclination and/or eccentricity. The use of Euler parameters makes it robust. The perturbation forces are handled in a very simple way: the method requires their components in the orbital frame or in an inertial frame. A comparison with other schemes is performed in the paper to show the good performance of the method.
NASA Technical Reports Server (NTRS)
Napolitano, Marcello R.
1995-01-01
This report is a compilation of PID (Proportional Integral Derivative) results for both longitudinal and lateral directional analysis that was completed during Fall 1994. It had earlier been established that the maneuvers available for PID containing independent control surface inputs from OBES were not well suited for extracting the cross-coupling static (i.e., C(sub N beta)) or dynamic (i.e., C(sub Npf)) derivatives. This was due to the fact that these maneuvers were designed with the goal of minimizing any lateral directional motion during longitudinal maneuvers and vice-versa. This allows for greater simplification in the aerodynamic model as far as coupling between longitudinal and lateral directions is concerned. As a result, efforts were made to reanalyze this data and extract static and dynamic derivatives for the F/A-18 HARV (High Angle of Attack Research Vehicle) without the inclusion of the cross-coupling terms, such that more accurate estimates of classical model terms could be acquired. Four longitudinal flights containing static PID maneuvers were examined. The classical state equations already available in pEst for alphadot, qdot, and thetadot were used. Three lateral directional flights of PID static maneuvers were also examined. The classical state equations already available in pEst for betadot, pdot, rdot, and phidot were used. Enclosed with this document are the full set of longitudinal and lateral directional parameter estimate plots showing coefficient estimates along with Cramer-Rao bounds. In addition, a representative time history match for each type of maneuver tested at each angle of attack is also enclosed.
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1992-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
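The divide-and-conquer idea can be illustrated with a minimal sketch (an assumption-laden toy, not the authors' backstep code): an alternating Schwarz iteration on two overlapping subdomains of a 1D Poisson model problem, each subdomain solved directly with Dirichlet data taken from the current global iterate. The grid size, overlap, and manufactured solution are illustrative choices.

```python
import numpy as np

def schwarz_poisson(f, n=101, overlap=10, sweeps=50):
    """Alternating Schwarz iteration for -u'' = f on [0,1], u(0) = u(1) = 0,
    with two overlapping subdomains solved directly in turn."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = np.zeros(n)
    mid = n // 2
    domains = [(1, mid + overlap), (mid - overlap, n - 1)]  # interior index ranges
    for _ in range(sweeps):
        for lo, hi in domains:
            m = hi - lo
            A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
                 - np.diag(np.ones(m - 1), -1)) / h**2
            b = f(x[lo:hi]).copy()
            b[0] += u[lo - 1] / h**2   # Dirichlet data from the current iterate
            b[-1] += u[hi] / h**2
            u[lo:hi] = np.linalg.solve(A, b)
    return x, u

# Manufactured solution u = sin(pi x), so f = pi^2 sin(pi x).
x, u = schwarz_poisson(lambda t: np.pi**2 * np.sin(np.pi * t))
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

With a generous overlap the iteration contracts quickly, and the remaining error is just the second-order discretization error of the 3-point stencil.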
Methods and systems for combustion dynamics reduction
Kraemer, Gilbert Otto; Varatharajan, Balachandar; Srinivasan, Shiva; Lynch, John Joseph; Yilmaz, Ertan; Kim, Kwanwoo; Lacy, Benjamin; Crothers, Sarah; Singh, Kapil Kumar
2009-08-25
Methods and systems for combustion dynamics reduction are provided. A combustion chamber may include a first premixer and a second premixer. Each premixer may include at least one fuel injector, at least one air inlet duct, and at least one vane pack for at least partially mixing the air from the air inlet duct or ducts and fuel from the fuel injector or injectors. Each vane pack may include a plurality of fuel orifices through which at least a portion of the fuel and at least a portion of the air may pass. The vane pack or packs of the first premixer may be positioned at a first axial position and the vane pack or packs of the second premixer may be positioned at a second axial position axially staggered with respect to the first axial position.
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1991-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
Weibull distribution based on maximum likelihood with interval inspection data
NASA Technical Reports Server (NTRS)
Rheinfurth, M. H.
1985-01-01
The two Weibull parameters based upon the method of maximum likelihood are determined. The test data used were failures observed at inspection intervals. The application was the reliability analysis of the SSME oxidizer turbine blades.
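The kind of computation described can be sketched as follows, using hypothetical inspection data rather than the SSME blade data: each observed failure contributes an interval probability F(t_i) - F(t_{i-1}) to the likelihood, and each unfailed unit contributes a survivor term, with the two Weibull parameters found by numerical maximization.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_cdf(t, shape, scale):
    return 1.0 - np.exp(-(t / scale) ** shape)

def neg_log_lik(params, intervals, survivor_times):
    shape, scale = np.exp(params)   # optimize on the log scale for positivity
    ll = 0.0
    for lo, hi in intervals:        # failure observed in (lo, hi]
        p = weibull_cdf(hi, shape, scale) - weibull_cdf(lo, shape, scale)
        ll += np.log(max(p, 1e-300))
    for t in survivor_times:        # unit still unfailed at last inspection t
        ll += -(t / scale) ** shape  # log of the survival function
    return -ll

# Hypothetical inspection data (hours): failures bracketed by inspections.
intervals = [(0, 100), (100, 200), (100, 200), (200, 300), (200, 300), (200, 300)]
survivors = [300.0] * 14            # units surviving past the final inspection

res = minimize(neg_log_lik, x0=np.log([1.0, 300.0]),
               args=(intervals, survivors), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
```

A shape estimate above one here reflects the increasing hazard suggested by failures accumulating at later inspections.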
A Maximum-Likelihood Approach to Force-Field Calibration.
Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam
2015-09-28
A new approach to the calibration of force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. (J. Phys. Chem. B 2012, 116, 6898-6907), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2
Semiclassical methods in chemical reaction dynamics
Keshavamurthy, S.
1994-12-01
Semiclassical approximations, simple as well as rigorous, are formulated in order to be able to describe gas phase chemical reactions in large systems. We formulate a simple but accurate semiclassical model for incorporating multidimensional tunneling in classical trajectory simulations. This model is based on the existence of locally conserved actions around the saddle point region on a multidimensional potential energy surface. Using classical perturbation theory and monitoring the imaginary action as a function of time along a classical trajectory, we calculate state-specific unimolecular decay rates for a model two dimensional potential with coupling. Results compare well with exact quantum results for the potential over a wide range of coupling constants. We propose a new semiclassical hybrid method to calculate state-to-state S-matrix elements for bimolecular reactive scattering. The accuracy of the Van Vleck-Gutzwiller propagator and the short time dynamics of the system make this method self-consistent and accurate. We also go beyond the stationary phase approximation by doing the resulting integrals exactly (numerically). As a result, classically forbidden probabilities are calculated with purely real time classical trajectories within this approach. Application to the one dimensional Eckart barrier demonstrates the accuracy of this approach. Successful application of the semiclassical hybrid approach to collinear reactive scattering is prevented by the phenomenon of chaotic scattering. The modified Filinov approach to evaluating the integrals is discussed, but application to collinear systems requires a more careful analysis. In three and higher dimensional scattering systems, chaotic scattering is suppressed and hence the accuracy and usefulness of the semiclassical method should be tested for such systems.
ROBUST MAXIMUM LIKELIHOOD ESTIMATION IN Q-SPACE MRI.
Landman, B A; Farrell, J A D; Smith, S A; Calabresi, P A; van Zijl, P C M; Prince, J L
2008-05-14
Q-space imaging is an emerging diffusion weighted MR imaging technique to estimate molecular diffusion probability density functions (PDF's) without the need to assume a Gaussian distribution. We present a robust M-estimator, Q-space Estimation by Maximizing Rician Likelihood (QEMRL), for diffusion PDF's based on maximum likelihood. PDF's are modeled by constrained Gaussian mixtures. In QEMRL, robust likelihood measures mitigate the impacts of imaging artifacts. In simulation and in vivo human spinal cord, the method improves reliability of estimated PDF's and increases tissue contrast. QEMRL enables more detailed exploration of the PDF properties than prior approaches and may allow acquisitions at higher spatial resolution.
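The core of maximizing a Rician likelihood can be sketched with a one-parameter toy (not the QEMRL mixture machinery); the main numerical point is evaluating log I0 stably via the exponentially scaled Bessel function i0e. The signal amplitude, noise level, and sample size below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import i0e

rng = np.random.default_rng(0)
A_true, sigma = 5.0, 1.0
n = 2000
# Magnitude of a complex signal in Gaussian noise is Rician distributed.
m = np.abs(A_true + rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n))

def neg_log_lik(A):
    """Negative Rician log-likelihood for amplitude A with known sigma."""
    z = m * A / sigma**2
    # log I0(z) computed stably as log(i0e(z)) + z to avoid overflow.
    ll = (np.log(m / sigma**2) - (m**2 + A**2) / (2 * sigma**2)
          + np.log(i0e(z)) + z)
    return -ll.sum()

A_hat = minimize_scalar(neg_log_lik, bounds=(0.1, 20.0), method="bounded").x
```

At this signal-to-noise ratio the maximum likelihood estimate is close to the true amplitude, whereas a naive Gaussian fit to the magnitudes would be biased upward.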
Intelligence's likelihood and evolutionary time frame
NASA Astrophysics Data System (ADS)
Bogonovich, Marc
2011-04-01
This paper outlines hypotheses relevant to the evolution of intelligent life and encephalization in the Phanerozoic. If general principles are inferable from patterns of Earth life, implications could be drawn for astrobiology. Many of the outlined hypotheses, relevant data, and associated evolutionary and ecological theory are not frequently cited in astrobiological journals. Thus opportunity exists to evaluate reviewed hypotheses with an astrobiological perspective. A quantitative method is presented for testing one of the reviewed hypotheses (hypothesis i; the diffusion hypothesis). Questions are presented throughout, which illustrate that the question of intelligent life's likelihood can be expressed as multiple, broadly ranging, more tractable questions.
A hybrid likelihood algorithm for risk modelling.
Kellerer, A M; Kreisheimer, M; Chmelevsky, D; Barclay, D
1995-03-01
The risk of radiation-induced cancer is assessed through the follow-up of large cohorts, such as atomic bomb survivors or underground miners who have been occupationally exposed to radon and its decay products. The models relate to the dose, age and time dependence of the excess tumour rates, and they contain parameters that are estimated in terms of maximum likelihood computations. The computations are performed with the software package EPI-CURE, which contains the two main options of person-by-person regression or of Poisson regression with grouped data. The Poisson regression is most frequently employed, but there are certain models that require an excessive number of cells when grouped data are used. One example involves computations that account explicitly for the temporal distribution of continuous exposures, as they occur with underground miners. In past work such models had to be approximated, but it is shown here that they can be treated explicitly in a suitably reformulated person-by-person computation of the likelihood. The algorithm uses the familiar partitioning of the log-likelihood into two terms, L1 and L0. The first term, L1, represents the contribution of the 'events' (tumours). It needs to be evaluated in the usual way, but constitutes no computational problem. The second term, L0, represents the event-free periods of observation. It is, in its usual form, unmanageable for large cohorts. However, it can be reduced to a simple form, in which the number of computational steps is independent of cohort size. The method requires less computing time and computer memory, but more importantly it leads to more stable numerical results by obviating the need for grouping the data. The algorithm may be most relevant to radiation risk modelling, but it can facilitate the modelling of failure-time data in general. PMID:7604154
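The L = L1 + L0 partition is easiest to see in the constant-hazard case (an illustrative sketch, not EPI-CURE's dose-response models): L1 collects the log-hazard terms at the events, while L0 collapses to the hazard times total person-time, a sum whose cost does not depend on how individuals are grouped.

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true = 0.02
times = rng.exponential(1.0 / lam_true, 500)   # latent failure times
censored = times > 40.0                        # administrative censoring at t = 40
obs_time = np.minimum(times, 40.0)             # observed follow-up per person
events = ~censored

def log_lik(lam):
    # L1: contribution of the events (log hazard at each event time).
    L1 = events.sum() * np.log(lam)
    # L0: the event-free periods of observation; for a constant hazard it
    # reduces to -lam * total person-time, independent of any grouping.
    L0 = -lam * obs_time.sum()
    return L1 + L0

lam_hat = events.sum() / obs_time.sum()        # closed-form maximizer
```

The closed-form maximizer (events divided by person-time) is exactly where the partitioned log-likelihood peaks, which the assertions below check by perturbing the estimate.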
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
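A two-component normal mixture of the kind described is typically fitted by the EM algorithm, which alternates posterior responsibilities with weighted maximum likelihood updates. The sketch below uses synthetic data rather than the stock and rubber price series; all settings are illustrative.

```python
import numpy as np

def em_two_normal(x, iters=200):
    """EM for a two-component univariate Gaussian mixture (a standard sketch)."""
    w, mu1, mu2 = 0.5, x.min(), x.max()      # crude but separating initial values
    s1 = s2 = x.std()
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point.
        d1 = w * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
        d2 = (1 - w) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
        r = d1 / (d1 + d2)
        # M-step: weighted maximum likelihood updates of all five parameters.
        w = r.mean()
        mu1, mu2 = (r * x).sum() / r.sum(), ((1 - r) * x).sum() / (1 - r).sum()
        s1 = np.sqrt((r * (x - mu1) ** 2).sum() / r.sum())
        s2 = np.sqrt(((1 - r) * (x - mu2) ** 2).sum() / (1 - r).sum())
    return w, (mu1, s1), (mu2, s2)

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 0.5, 600)])
w, (mu1, s1), (mu2, s2) = em_two_normal(x)
```

Each EM iteration increases the mixture log-likelihood, which is the sense in which the fit is a maximum likelihood estimate.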
Improved maximum likelihood reconstruction of complex multi-generational pedigrees.
Sheehan, Nuala A; Bartlett, Mark; Cussens, James
2014-11-01
The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as
System and Method for Dynamic Aeroelastic Control
NASA Technical Reports Server (NTRS)
Suh, Peter M. (Inventor)
2015-01-01
The present invention proposes a hardware and software architecture for dynamic modal structural monitoring that uses a robust modal filter to monitor a potentially very large-scale array of sensors in real time, while remaining tolerant of asymmetric sensor noise and sensor failures, in order to achieve aircraft performance optimization such as minimizing aircraft flutter and drag and maximizing fuel efficiency.
Constraint likelihood analysis for a network of gravitational wave detectors
Klimenko, S.; Rakhmanov, M.; Mitselmakher, G.; Mohanty, S.
2005-12-15
We propose a coherent method for detection and reconstruction of gravitational wave signals with a network of interferometric detectors. The method is derived by using the likelihood ratio functional for unknown signal waveforms. In the likelihood analysis, the global maximum of the likelihood ratio over the space of waveforms is used as the detection statistic. We identify a problem with this approach. In the case of an aligned pair of detectors, the detection statistic depends on the cross correlation between the detectors as expected, but this dependence disappears even for infinitesimally small misalignments. We solve the problem by applying constraints on the likelihood functional and obtain a new class of statistics. The resulting method can be applied to data from a network consisting of any number of detectors with arbitrary detector orientations. The method allows reconstruction of the source coordinates and the waveforms of two polarization components of a gravitational wave. We study the performance of the method with numerical simulations and find the reconstruction of the source coordinates to be more accurate than in the standard likelihood method.
PACO: PArticle COunting Method To Enforce Concentrations in Dynamic Simulations.
Berti, Claudio; Furini, Simone; Gillespie, Dirk
2016-03-01
We present PACO, a computationally efficient method for concentration boundary conditions in nonequilibrium particle simulations. Because it requires only particle counting, its computational effort is significantly smaller than other methods. PACO enables Brownian dynamics simulations of micromolar electrolytes (3 orders of magnitude lower than previously simulated). PACO for Brownian dynamics is integrated in the BROWNIES package (www.phys.rush.edu/BROWNIES). We also introduce a molecular dynamics PACO implementation that allows for very accurate control of concentration gradients.
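The particle-counting idea can be sketched in a few lines (a schematic 1D toy, not the BROWNIES implementation): after each dynamics step, particles in a control region are counted, and particles are inserted or deleted there until the count matches a target, which pins the concentration in that region. Domain size, target count, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
L, control = 10.0, (0.0, 1.0)       # domain [0, L]; control slab [0, 1)
target = 50                          # particle count that sets the concentration
x = rng.uniform(0, L, 300)

def enforce_count(x):
    """PACO-style step: count particles in the control region, then insert or
    delete particles there so the count matches the target."""
    inside = (x >= control[0]) & (x < control[1])
    n = inside.sum()
    if n < target:                   # insert uniformly placed particles
        new = rng.uniform(control[0], control[1], target - n)
        x = np.concatenate([x, new])
    elif n > target:                 # delete randomly chosen surplus particles
        idx = np.flatnonzero(inside)
        drop = rng.choice(idx, n - target, replace=False)
        x = np.delete(x, drop)
    return x

for _ in range(100):                 # diffusion step followed by enforcement
    x = x + rng.normal(0, 0.1, x.size)
    x = np.clip(x, 0.0, L)
    x = enforce_count(x)

count = ((x >= control[0]) & (x < control[1])).sum()
```

Because the method only counts and resamples particles, its cost per step is trivial compared with grand-canonical insertion schemes that must evaluate interaction energies.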
Parametric likelihood inference for interval censored competing risks data
Hudgens, Michael G.; Li, Chenxi
2014-01-01
Parametric estimation of the cumulative incidence function (CIF) is considered for competing risks data subject to interval censoring. Existing parametric models of the CIF for right censored competing risks data are adapted to the general case of interval censoring. Maximum likelihood estimators for the CIF are considered under the assumed models, extending earlier work on nonparametric estimation. A simple naive likelihood estimator is also considered that utilizes only part of the observed data. The naive estimator enables separate estimation of models for each cause, unlike full maximum likelihood in which all models are fit simultaneously. The naive likelihood is shown to be valid under mixed case interval censoring, but not under an independent inspection process model, in contrast with full maximum likelihood which is valid under both interval censoring models. In simulations, the naive estimator is shown to perform well and yield comparable efficiency to the full likelihood estimator in some settings. The methods are applied to data from a large, recent randomized clinical trial for the prevention of mother-to-child transmission of HIV. PMID:24400873
Quasi-likelihood estimation for relative risk regression models.
Carter, Rickey E; Lipsitz, Stuart R; Tilley, Barbara C
2005-01-01
For a prospective randomized clinical trial with two groups, the relative risk can be used as a measure of treatment effect and is directly interpretable as the ratio of success probabilities in the new treatment group versus the placebo group. For a prospective study with many covariates and a binary outcome (success or failure), relative risk regression may be of interest. If we model the log of the success probability as a linear function of covariates, the regression coefficients are log-relative risks. However, using such a log-linear model with a Bernoulli likelihood can lead to convergence problems in the Newton-Raphson algorithm. This is likely to occur when the success probabilities are close to one. A constrained likelihood method proposed by Wacholder (1986, American Journal of Epidemiology 123, 174-184), also has convergence problems. We propose a quasi-likelihood method of moments technique in which we naively assume the Bernoulli outcome is Poisson, with the mean (success probability) following a log-linear model. We use the Poisson maximum likelihood equations to estimate the regression coefficients without constraints. Using method of moment ideas, one can show that the estimates using the Poisson likelihood will be consistent and asymptotically normal. We apply these methods to a double-blinded randomized trial in primary biliary cirrhosis of the liver (Markus et al., 1989, New England Journal of Medicine 320, 1709-1713). PMID:15618526
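The proposed trick, solving the unconstrained Poisson score equations even though the outcome is Bernoulli, can be sketched with a Newton iteration on synthetic two-group data (an illustration, not the cirrhosis trial analysis; the true relative risk below is an assumption of the example).

```python
import numpy as np

def poisson_relative_risk(X, y, iters=25):
    """Fit log P(Y=1 | x) = X @ beta via the Poisson score equations.
    The Bernoulli outcome is naively treated as Poisson, so no constraints
    are needed to keep fitted probabilities below one (quasi-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)                       # fitted "Poisson" means
        H = (X.T * mu) @ X                          # X^T diag(mu) X
        beta += np.linalg.solve(H, X.T @ (y - mu))  # Newton step on the score
    return beta

rng = np.random.default_rng(4)
n = 5000
treat = rng.integers(0, 2, n)             # randomized treatment indicator
p = 0.2 * np.exp(np.log(1.5) * treat)     # true relative risk assumed = 1.5
y = rng.binomial(1, p)
X = np.column_stack([np.ones(n), treat])
b0, b1 = poisson_relative_risk(X, y)
rr_hat = np.exp(b1)                       # estimated relative risk
```

The point estimate is consistent by the method-of-moments argument in the abstract; in practice one would pair it with a robust (sandwich) variance, since the Poisson variance is misspecified for binary data.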
Approximated maximum likelihood estimation in multifractal random walks
NASA Astrophysics Data System (ADS)
Løvsletten, O.; Rypdal, M.
2012-04-01
We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
Assumed modes method and flexible multibody dynamics
NASA Technical Reports Server (NTRS)
Tadikonda, S. S. K.; Mordfin, T. G.; Hu, T. G.
1993-01-01
The use of assumed modes in flexible multibody dynamics algorithms requires the evaluation of several domain dependent integrals that are affected by the type of modes used. The implications of these integrals - often called zeroth, first and second order terms - are investigated in this paper, for arbitrarily shaped bodies. Guidelines are developed for the use of appropriate boundary conditions while generating the component modal models. The issue of whether and which higher order terms must be retained is also addressed. Analytical results, and numerical results using the Shuttle Remote Manipulator System as the multibody system, are presented to qualitatively and quantitatively address these issues.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
A general iterative procedure is given for determining the consistent maximum likelihood estimates of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
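A minimal sketch of Fisher scoring for a plain normal sample is given below (illustrative only; it converges to the familiar closed-form estimates, which makes it easy to check, whereas the report's setting is more involved).

```python
import numpy as np

def normal_mle_scoring(x, iters=50):
    """Fisher scoring for the mean and variance of a normal sample:
    parameter update = current + inverse expected information @ score."""
    mu, v = 0.0, 1.0                  # deliberately crude starting values
    n = x.size
    for _ in range(iters):
        score = np.array([
            (x - mu).sum() / v,                                   # d ll / d mu
            -n / (2 * v) + ((x - mu) ** 2).sum() / (2 * v**2),    # d ll / d v
        ])
        info = np.diag([n / v, n / (2 * v**2)])   # expected (Fisher) information
        mu, v = np.array([mu, v]) + np.linalg.solve(info, score)
    return mu, v

rng = np.random.default_rng(5)
x = rng.normal(10.0, 2.0, 1000)
mu_hat, v_hat = normal_mle_scoring(x)
```

For this model the scoring iteration lands on the sample mean after one step and on the mean squared deviation shortly after, matching the closed-form maximum likelihood estimates.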
Extrapolation methods for dynamic partial differential equations
NASA Technical Reports Server (NTRS)
Turkel, E.
1978-01-01
Several extrapolation procedures are presented for increasing the order of accuracy in time for evolutionary partial differential equations. These formulas are based on finite difference schemes in both the spatial and temporal directions. On practical grounds the methods are restricted to schemes that are fourth order in time and either second, fourth or sixth order in space. For hyperbolic problems the second order in space methods are not useful while the fourth order methods offer no advantage over the Kreiss-Oliger method unless very fine meshes are used. Advantages are first achieved using sixth order methods in space coupled with fourth order accuracy in time. Computational results are presented confirming the analytic discussions.
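The basic extrapolation mechanism, combining solutions at two step sizes to cancel the leading error term, can be sketched on a scalar ODE (an illustration of the principle; the paper treats full space-time difference schemes). For a second-order method, the combination (4*fine - coarse)/3 removes the O(h^2) term.

```python
import numpy as np

def rk2_solve(f, u0, T, n):
    """Explicit midpoint (second-order) integration of u' = f(u) to time T."""
    u, h = u0, T / n
    for _ in range(n):
        u = u + h * f(u + 0.5 * h * f(u))
    return u

f, u0, T = (lambda u: -u), 1.0, 1.0
exact = np.exp(-T)
coarse = rk2_solve(f, u0, T, 100)        # step size h
fine = rk2_solve(f, u0, T, 200)          # step size h/2
extrap = (4.0 * fine - coarse) / 3.0     # cancels the leading O(h^2) error term
err_fine = abs(fine - exact)
err_extrap = abs(extrap - exact)
```

The extrapolated value is substantially more accurate than either base solution, at the cost of one extra (cheaper) coarse integration.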
One-step Sparse Estimates in Nonconcave Penalized Likelihood Models.
Zou, Hui; Li, Runze
2008-08-01
Fan & Li (2001) propose a family of variable selection methods via penalized likelihood using concave penalty functions. The nonconcave penalized likelihood estimators enjoy the oracle properties, but maximizing the penalized likelihood function is computationally challenging, because the objective function is nondifferentiable and nonconcave. In this article we propose a new unified algorithm based on the local linear approximation (LLA) for maximizing the penalized likelihood for a broad class of concave penalty functions. Convergence and other theoretical properties of the LLA algorithm are established. A distinguished feature of the LLA algorithm is that at each LLA step, the LLA estimator can naturally adopt a sparse representation. Thus we suggest using the one-step LLA estimator from the LLA algorithm as the final estimates. Statistically, we show that if the regularization parameter is appropriately chosen, the one-step LLA estimates enjoy the oracle properties with good initial estimators. Computationally, the one-step LLA estimation methods dramatically reduce the computational cost in maximizing the nonconcave penalized likelihood. We conduct some Monte Carlo simulation to assess the finite sample performance of the one-step sparse estimation methods. The results are very encouraging.
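For an orthogonal design the one-step LLA estimator has a closed form: weighted soft-thresholding, with weights given by the SCAD penalty derivative evaluated at the initial estimates. The sketch below uses illustrative coefficient values; large signals are left unshrunk (weight zero) while small ones are thresholded to zero.

```python
import numpy as np

def scad_deriv(b, lam, a=3.7):
    """Derivative of the SCAD penalty of Fan & Li (2001) at |b|."""
    b = np.abs(b)
    return lam * ((b <= lam)
                  + np.maximum(a * lam - b, 0.0) / ((a - 1) * lam) * (b > lam))

def one_step_lla(z, lam):
    """One-step LLA for an orthogonal design: the penalized problem decouples
    coordinate-wise into soft-thresholding with adaptive weights w_j."""
    w = scad_deriv(z, lam)          # weights from the initial estimator z
    return np.sign(z) * np.maximum(np.abs(z) - w, 0.0)

z = np.array([4.0, 2.5, 0.3, -0.1])   # initial (e.g. OLS) estimates
beta = one_step_lla(z, lam=1.0)
```

This exhibits the oracle-like behavior discussed above: the coefficient at 4.0 exceeds a*lambda, so its weight is zero and it is not shrunk at all, while the two small coefficients get the full lasso weight and are set exactly to zero.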
Tree Method for Quantum Vortex Dynamics
NASA Astrophysics Data System (ADS)
Baggaley, A. W.; Barenghi, C. F.
2012-01-01
We present a numerical method to compute the evolution of vortex filaments in superfluid helium. The method is based on a tree algorithm which considerably speeds up the calculation of Biot-Savart integrals. We show that the computational cost scales as N log N rather than N^2, where N is the number of discretization points. We test the method and its properties for a variety of vortex configurations, ranging from simple vortex rings to a counterflow vortex tangle, and compare results against the Local Induction Approximation and the exact Biot-Savart law.
Dynamic fiber Bragg grating sensing method
NASA Astrophysics Data System (ADS)
Ho, Siu Chun Michael; Ren, Liang; Li, Hongnan; Song, Gangbing
2016-02-01
The measurement of high frequency vibrations is important in many scientific and engineering problems. This paper presents a novel, cost effective method using fiber optic fiber Bragg gratings (FBGs) for the measurement of high frequency vibrations. The method uses wavelength matched FBG sensors, with the first sensor acting as a transmission filter and the second sensor acting as the sensing portion. Energy fluctuations in the reflection spectrum of the second FBG due to wavelength mismatch between the sensors are captured by a photodiode. An in-depth analysis of the optical circuit is provided to predict the behavior of the method as well as identify ways to optimize the method. Simple demonstrations of the method were performed with the FBG sensing system installed on a piezoelectric transducer and on a wind turbine blade. Vibrations were measured with sampling frequencies up to 1 MHz for demonstrative purposes. The sensing method can be multiplexed for use with multiple sensors, and with care, can be retrofitted to work with FBG sensors already installed on a structure.
Maximum likelihood inference of reticulate evolutionary histories.
Yu, Yun; Dong, Jianrong; Liu, Kevin J; Nakhleh, Luay
2014-11-18
Hybridization plays an important role in the evolution of certain groups of organisms, adaptation to their environments, and diversification of their genomes. The evolutionary histories of such groups are reticulate, and methods for reconstructing them are still in their infancy and have limited applicability. We present a maximum likelihood method for inferring reticulate evolutionary histories while accounting simultaneously for incomplete lineage sorting. Additionally, we propose methods for assessing confidence in the amount of reticulation and the topology of the inferred evolutionary history. Our method obtains accurate estimates of reticulate evolutionary histories on simulated datasets. Furthermore, our method provides support for a hypothesis of a reticulate evolutionary history inferred from a set of house mouse (Mus musculus) genomes. As evidence of hybridization in eukaryotic groups accumulates, it is essential to have methods that infer reticulate evolutionary histories. The work we present here allows for such inference and provides a significant step toward putting phylogenetic networks on par with phylogenetic trees as a model of capturing evolutionary relationships. PMID:25368173
Methods for modeling contact dynamics of capture mechanisms
NASA Technical Reports Server (NTRS)
Williams, Philip J.; Tobbe, Patrick A.; Glaese, John
1991-01-01
In this paper, an analytical approach for studying the contact dynamics of space-based vehicles during docking/berthing maneuvers is presented. Methods for modeling physical contact between docking/berthing mechanisms, examples of how these models have been used to evaluate the dynamic behavior of automated capture mechanisms, and experimental verification of predicted results are shown.
Numerical methods for molecular dynamics. Progress report
Skeel, R.D.
1991-12-31
This report summarizes our research progress to date on the use of multigrid methods for three-dimensional elliptic partial differential equations, with particular emphasis on application to the Poisson-Boltzmann equation of molecular biophysics. This research is motivated by the need for fast and accurate numerical solution techniques for three-dimensional problems arising in physics and engineering. In many applications these problems must be solved repeatedly, and the extremely large number of discrete unknowns required to accurately approximate solutions to partial differential equations in three-dimensional regions necessitates the use of efficient solution methods. This situation makes clear the importance of developing methods which are of optimal order (or nearly so), meaning that the number of operations required to solve the discrete problem is on the order of the number of discrete unknowns. Multigrid methods are generally regarded as being in this class of methods, and are in fact provably optimal order for an increasingly large class of problems. The fundamental goal of this research is to develop a fast and accurate numerical technique, based on multi-level principles, for the solutions of the Poisson-Boltzmann equation of molecular biophysics and similar equations occurring in other applications. An outline of the report is as follows. We first present some background material, followed by a survey of the literature on the use of multigrid methods for solving problems similar to the Poisson-Boltzmann equation. A short description of the software we have developed so far is then given, and numerical results are discussed. Finally, our research plans for the coming year are presented.
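A two-grid correction scheme, the building block of multigrid, can be sketched for a 1D Poisson model problem (illustrative only; the report targets the 3D Poisson-Boltzmann equation). Smoothing damps high-frequency error on the fine grid, and the smooth remainder is solved cheaply on the coarse grid.

```python
import numpy as np

def lap1d(m, h):
    """Standard 3-point matrix for -u'' with homogeneous Dirichlet boundaries."""
    return (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
            - np.diag(np.ones(m - 1), -1)) / h**2

def two_grid(f, n=127, cycles=20, nu=3):
    """Two-grid cycle for -u'' = f on (0,1): weighted-Jacobi smoothing,
    full-weighting restriction, exact coarse solve, linear prolongation."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A, b = lap1d(n, h), f(x)
    nc = (n - 1) // 2                     # coarse grid: every other fine point
    Ac = lap1d(nc, 1.0 / (nc + 1))
    u = np.zeros(n)
    D = np.diag(A)
    for _ in range(cycles):
        for _ in range(nu):               # pre-smoothing (weighted Jacobi)
            u = u + (2.0 / 3.0) * (b - A @ u) / D
        r = b - A @ u
        rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])   # full weighting
        ec = np.linalg.solve(Ac, rc)      # exact coarse-grid correction
        ecp = np.concatenate([[0.0], ec, [0.0]])
        e = np.zeros(n)
        e[1::2] = ec                      # coarse points inject directly
        e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])  # in-between points interpolate
        u = u + e
    return x, u

# Manufactured solution u = sin(pi x), so f = pi^2 sin(pi x).
x, u = two_grid(lambda t: np.pi**2 * np.sin(np.pi * t))
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Replacing the exact coarse solve with a recursive call to the same cycle yields true multigrid, whose cost is proportional to the number of unknowns, the optimal-order behavior the report aims for.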
Maximum-likelihood estimation of admixture proportions from genetic data.
Wang, Jinliang
2003-01-01
For an admixed population, an important question is how much genetic contribution comes from each parental population. Several methods have been developed to estimate such admixture proportions, using data on genetic markers sampled from parental and admixed populations. In this study, I propose a likelihood method to estimate jointly the admixture proportions, the genetic drift that occurred to the admixed population and each parental population during the period between the hybridization and sampling events, and the genetic drift in each ancestral population within the interval between their split and hybridization. The results from extensive simulations using various combinations of relevant parameter values show that in general much more accurate and precise estimates of admixture proportions are obtained from the likelihood method than from previous methods. The likelihood method also yields reasonable estimates of genetic drift that occurred to each population, which translate into relative effective sizes (N(e)) or absolute average N(e)'s if the times when the relevant events (such as population split, admixture, and sampling) occurred are known. The proposed likelihood method also has features such as relatively low computational requirement compared with previous ones, flexibility for admixture models, and marker types. In particular, it allows for missing data from a contributing parental population. The method is applied to a human data set and a wolflike canids data set, and the results obtained are discussed in comparison with those from other estimators and from previous studies. PMID:12807794
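The likelihood for an admixture proportion can be sketched in its simplest form: one admixture event, known parental allele frequencies, and no drift, so the admixed frequency at each locus is q*p1 + (1-q)*p2 and the sampled allele counts are binomial. This is a deliberately stripped-down toy; the paper's model additionally estimates drift in each population.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
n_loci, n_genes = 100, 60
p1 = rng.uniform(0.1, 0.9, n_loci)     # parental population 1 frequencies
p2 = rng.uniform(0.1, 0.9, n_loci)     # parental population 2 frequencies
q_true = 0.7                            # contribution of parental population 1
ph = q_true * p1 + (1 - q_true) * p2    # admixed population frequencies
counts = rng.binomial(n_genes, ph)      # allele counts sampled at each locus

def neg_log_lik(q):
    """Binomial log-likelihood of the allele counts given admixture proportion q."""
    p = q * p1 + (1 - q) * p2
    return -(counts * np.log(p) + (n_genes - counts) * np.log(1 - p)).sum()

q_hat = minimize_scalar(neg_log_lik, bounds=(1e-3, 1 - 1e-3), method="bounded").x
```

With a hundred moderately informative loci the one-dimensional likelihood is sharply peaked near the true proportion.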
Recent applications of spectral methods in fluid dynamics
NASA Technical Reports Server (NTRS)
Zang, T. A.; Hussaini, M. Y.
1985-01-01
Origins of spectral methods, especially their relation to the method of weighted residuals, are surveyed. Basic Fourier and Chebyshev spectral concepts are reviewed and demonstrated through application to simple model problems. Both collocation and tau methods are considered. These techniques are then applied to a number of difficult, nonlinear problems of hyperbolic, parabolic, elliptic and mixed type. Fluid dynamical applications are emphasized.
Likelihood-based population independent component analysis
Eloyan, Ani; Crainiceanu, Ciprian M.; Caffo, Brian S.
2013-01-01
Independent component analysis (ICA) is a widely used technique for blind source separation, used heavily in several scientific research areas including acoustics, electrophysiology, and functional neuroimaging. We propose a scalable two-stage iterative true group ICA methodology for analyzing population level functional magnetic resonance imaging (fMRI) data where the number of subjects is very large. The method is based on likelihood estimators of the underlying source densities and the mixing matrix. As opposed to many commonly used group ICA algorithms, the proposed method does not require significant data reduction by a 2-fold singular value decomposition. In addition, the method can be applied to a large group of subjects since the memory requirements are not restrictive. The performance of our approach is compared with a commonly used group ICA algorithm via simulation studies. Furthermore, the proposed method is applied to a large collection of resting state fMRI datasets. The results show that established brain networks are well recovered by the proposed algorithm. PMID:23314416
Engineering applications of a dynamical state feedback chaotification method
NASA Astrophysics Data System (ADS)
Şahin, Savaş; Güzeliş, Cüneyt
2012-09-01
This paper presents two engineering applications of a chaotification method which can be applied to any input-state linearizable (nonlinear) system, including linear controllable ones as special cases. In the chaotification method used, a reference chaotic system and a linear system are combined into a special form by a dynamical state feedback that increases the order of the open-loop system so that it exhibits the same chaotic dynamics as the reference chaotic system. Promising dc motor applications of the method are implemented via the proposed dynamical state feedback, which is based on matching the closed-loop dynamics to the well-known Chua and Lorenz chaotic systems. The first application, a chaotified dc motor used for mixing an acid-base mixture with added corn syrup, is implemented via a personal computer and a microcontroller-based circuit. As a second application, a chaotified dc motor with a tachogenerator used in the feedback is realized using fully analog circuit elements.
MARGINAL EMPIRICAL LIKELIHOOD AND SURE INDEPENDENCE FEATURE SCREENING
Chang, Jinyuan; Tang, Cheng Yong; Wu, Yichao
2013-01-01
We study a marginal empirical likelihood approach in scenarios when the number of variables grows exponentially with the sample size. The marginal empirical likelihood ratios as functions of the parameters of interest are systematically examined, and we find that the marginal empirical likelihood ratio evaluated at zero can be used to differentiate whether an explanatory variable is contributing to a response variable or not. Based on this finding, we propose a unified feature screening procedure for linear models and the generalized linear models. Different from most existing feature screening approaches that rely on the magnitudes of some marginal estimators to identify true signals, the proposed screening approach is capable of further incorporating the level of uncertainties of such estimators. Such a merit inherits the self-studentization property of the empirical likelihood approach, and extends the insights of existing feature screening methods. Moreover, we show that our screening approach is less restrictive to distributional assumptions, and can be conveniently adapted to be applied in a broad range of scenarios such as models specified using general moment conditions. Our theoretical results and extensive numerical examples by simulations and data analysis demonstrate the merits of the marginal empirical likelihood approach. PMID:24415808
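As a one-dimensional sketch of the statistic the screening is built on: for each candidate feature, the marginal empirical likelihood ratio at zero is computed for the mean of the score g_ij = x_ij * y_i, with the Lagrange multiplier found by bisection (data, sample sizes, and the single-signal design below are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(9)
n, p = 200, 50
X = rng.normal(size=(n, p))
y = X[:, 0] * 1.0 + rng.normal(size=n)        # only feature 0 is a true signal

def el_stat(g):
    """2 * log empirical-likelihood ratio for H0: E[g] = 0 (1-D case)."""
    # admissible multipliers keep all weights positive; the score function
    # f(lam) = sum g_i / (1 + lam g_i) is decreasing, so bisect for its root
    lo = -1.0 / g.max() + 1e-8
    hi = -1.0 / g.min() - 1e-8
    f = lambda lam: np.sum(g / (1 + lam * g))
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2 * np.sum(np.log1p(lam * g))      # -2 log EL ratio at zero

stats = np.array([el_stat(X[:, j] * y) for j in range(p)])
top = int(np.argmax(stats))
```

Features whose scores have mean far from zero get a large ratio, so ranking by this statistic screens in the true signal; the paper's self-studentization property comes along for free since no variance estimate is plugged in.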
Dynamic compensation methods for self-powered neutron detectors
Auh, G.S. (Dept. of Transients and Setpoints)
1994-11-01
Among the three digital dynamic compensation methods that are developed for or applied to the rhodium self-powered neutron detector -- the dominant pole Tustin method of the core operating limit supervisory system, the direct inversion method, and the Kalman filter method -- the best method is selected. The direct inversion method is slightly improved from the previous version, and the Kalman filter method is proposed. The simulation results show that the direct inversion method is better than the dominant pole Tustin method, but the best compensation results can be obtained from the Kalman filter method. The direct inversion method gives better results than the dominant pole Tustin method because it does not contain the assumption of a single pole and zero. The Kalman filter method is the best among the three methods because it uses the information of previous time steps throughout its estimation process.
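A minimal sketch of the direct-inversion idea for a first-order (single pole) detector model: the lagged detector signal is compensated by adding back tau times its derivative. The time constant, sampling step, and step-change flux below are hypothetical, and the paper's Kalman-filter method additionally handles measurement noise.

```python
import numpy as np

tau, dt = 60.0, 1.0                   # hypothetical detector time constant (s), sample step
t = np.arange(0, 600, dt)
u = np.where(t < 200, 1.0, 1.2)       # true flux with a step change at t = 200 s

# first-order detector response, integrated with forward Euler
y = np.empty_like(u)
y[0] = u[0]
for k in range(1, len(t)):
    y[k] = y[k - 1] + dt / tau * (u[k] - y[k - 1])

# direct inversion: u = y + tau * dy/dt for the single-pole model
u_hat = y + tau * np.gradient(y, dt)
```

The raw signal still lags well behind the flux shortly after the step, while the compensated signal tracks it closely.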
Davtyan, Aram; Dama, James F.; Voth, Gregory A.; Andersen, Hans C.
2015-04-21
Coarse-grained (CG) models of molecular systems, with fewer mechanical degrees of freedom than an all-atom model, are used extensively in chemical physics. It is generally accepted that a coarse-grained model that accurately describes equilibrium structural properties (as a result of having a well constructed CG potential energy function) does not necessarily exhibit appropriate dynamical behavior when simulated using conservative Hamiltonian dynamics for the CG degrees of freedom on the CG potential energy surface. Attempts to develop accurate CG dynamic models usually focus on replacing Hamiltonian motion by stochastic but Markovian dynamics on that surface, such as Langevin or Brownian dynamics. However, depending on the nature of the system and the extent of the coarse-graining, a Markovian dynamics for the CG degrees of freedom may not be appropriate. In this paper, we consider the problem of constructing dynamic CG models within the context of the Multi-Scale Coarse-graining (MS-CG) method of Voth and coworkers. We propose a method of converting a MS-CG model into a dynamic CG model by adding degrees of freedom to it in the form of a small number of fictitious particles that interact with the CG degrees of freedom in simple ways and that are subject to Langevin forces. The dynamic models are members of a class of nonlinear systems interacting with special heat baths that were studied by Zwanzig [J. Stat. Phys. 9, 215 (1973)]. The properties of the fictitious particles can be inferred from analysis of the dynamics of all-atom simulations of the system of interest. This is analogous to the fact that the MS-CG method generates the CG potential from analysis of equilibrium structures observed in all-atom simulation data. The dynamic models generate a non-Markovian dynamics for the CG degrees of freedom, but they can be easily simulated using standard molecular dynamics programs. We present tests of this method on a series of simple examples that demonstrate that
A Particle Population Control Method for Dynamic Monte Carlo
NASA Astrophysics Data System (ADS)
Sweezy, Jeremy; Nolen, Steve; Adams, Terry; Zukaitis, Anthony
2014-06-01
A general particle population control method has been derived from splitting and Russian roulette for dynamic Monte Carlo particle transport. A well-known particle population control method, known as the particle population comb, has been shown to be a special case of this general method. This general method has been incorporated in Los Alamos National Laboratory's Monte Carlo Application Toolkit (MCATK), and examples of its use are shown for both super-critical and sub-critical systems.
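A minimal sketch of the particle population comb mentioned above (weights and population sizes are hypothetical): evenly spaced "teeth" are laid across the cumulative weight, and each tooth selects one surviving particle, so the population is reset to a target size while the total weight is preserved exactly.

```python
import numpy as np

rng = np.random.default_rng(7)

def comb(weights, n_target):
    """Particle population comb: resample to n_target equal-weight particles,
    preserving total statistical weight."""
    W = weights.sum()
    tooth = W / n_target
    offset = rng.uniform(0, tooth)                    # random comb phase
    positions = offset + tooth * np.arange(n_target)  # evenly spaced teeth
    cum = np.cumsum(weights)
    idx = np.searchsorted(cum, positions)             # particle under each tooth
    return idx, np.full(n_target, tooth)

w = rng.exponential(1.0, size=1000)   # hypothetical pre-control population
idx, new_w = comb(w, 200)
```

Particles with large weight can be selected by several teeth (splitting), while low-weight particles are likely to be skipped (roulette), which is the sense in which the comb is a special case of the general splitting/roulette scheme.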
Likelihood-Free Inference in High-Dimensional Models.
Kousathanas, Athanasios; Leuenberger, Christoph; Helfer, Jonas; Quinodoz, Mathieu; Foll, Matthieu; Wegmann, Daniel
2016-06-01
Methods that bypass analytical evaluations of the likelihood function have become an indispensable tool for statistical inference in many fields of science. These so-called likelihood-free methods rely on accepting and rejecting simulations based on summary statistics, which limits them to low-dimensional models for which the value of the likelihood is large enough to result in manageable acceptance rates. To get around these issues, we introduce a novel, likelihood-free Markov chain Monte Carlo (MCMC) method combining two key innovations: updating only one parameter per iteration and accepting or rejecting this update based on subsets of statistics approximately sufficient for this parameter. This increases acceptance rates dramatically, rendering this approach suitable even for models of very high dimensionality. We further derive that for linear models, a one-dimensional combination of statistics per parameter is sufficient and can be found empirically with simulations. Finally, we demonstrate that our method readily scales to models of very high dimensionality, using toy models as well as by jointly inferring the effective population size, the distribution of fitness effects (DFE) of segregating mutations, and selection coefficients for each locus from data of a recent experiment on the evolution of drug resistance in influenza. PMID:27052569
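A toy version of the paper's two innovations for a two-parameter Gaussian model (data, tolerance, and proposal scale are hypothetical, and the per-parameter summaries here are chosen by hand rather than constructed as in the paper): each MCMC iteration updates one parameter and accepts or rejects using only the summary statistic paired with that parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(2.0, 1.0, size=500)        # hypothetical observed data
s_obs = np.array([obs.mean(), obs.std()])   # one summary per parameter (mu, sigma)

def summarize(mu, sigma, n=500):
    x = rng.normal(mu, sigma, size=n)
    return np.array([x.mean(), x.std()])

theta = s_obs.copy()                        # initialize at the observed summaries
eps = 0.1
chain = []
for it in range(4000):
    k = it % 2                              # update one parameter per iteration
    prop = theta.copy()
    prop[k] += rng.normal(0.0, 0.2)
    if prop[1] > 0:
        s = summarize(*prop)
        # accept using only the statistic paired with parameter k
        if abs(s[k] - s_obs[k]) < eps:
            theta = prop
    chain.append(theta.copy())
post = np.array(chain[1000:])
mu_hat, sigma_hat = post.mean(axis=0)
```

Because each acceptance decision compares a single summary, acceptance rates do not collapse as more parameters are added, which is the mechanism behind the method's scalability.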
NASA Astrophysics Data System (ADS)
Pavese, Marc; Berard, Daniel R.; Voth, Gregory A.
1999-01-01
A fully quantum molecular dynamics method is presented which combines ab initio Car-Parrinello molecular dynamics with centroid molecular dynamics. The first technique allows the forces on the atoms to be obtained from ab initio electronic structure. The second technique, given the forces on the atoms, allows one to calculate an approximate quantum time evolution for the nuclei. The combination of the two, therefore, represents the first feasible approach to simulating the fully quantum dynamics of a many-body system. An application to excess proton translocation along a model water wire will be presented.
Theoretical method for analyzing quantum dynamics of correlated photons
Koshino, Kazuki; Nakatani, Masatoshi
2009-05-15
We present a theoretical method for the efficient analysis of quantum nonlinear dynamics of correlated photons. Since correlated photons can be regarded as a superposition of uncorrelated photons, semiclassical analysis can be applied to this problem. The proposed method is demonstrated for a V-type three-level atom as a nonlinear optical system.
Computational Methods for Dynamic Stability and Control Derivatives
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.
2004-01-01
Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.
Method to describe stochastic dynamics using an optimal coordinate.
Krivov, Sergei V
2013-12-01
A general method to describe the stochastic dynamics of Markov processes is suggested. The method aims to solve three related problems: the determination of an optimal coordinate for the description of stochastic dynamics; the reconstruction of time from an ensemble of stochastic trajectories; and the decomposition of stationary stochastic dynamics into eigenmodes which do not decay exponentially with time. The problems are solved by introducing additive eigenvectors which are transformed by a stochastic matrix in a simple way - every component is translated by a constant distance. Such solutions have peculiar properties. For example, an optimal coordinate for stochastic dynamics with detailed balance is a multivalued function. An optimal coordinate for a random walk on a line corresponds to the conventional eigenvector of the one-dimensional Dirac equation. The equation for the optimal coordinate in a slowly varying potential reduces to the Hamilton-Jacobi equation for the action function. PMID:24483410
Efficient maximum likelihood parameterization of continuous-time Markov processes
McGibbon, Robert T.; Pande, Vijay S.
2015-01-01
Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016
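For context, the classic baseline such estimators improve on is the matrix-logarithm approach: from the transition-probability matrix observed at lag tau, recover the rate (generator) matrix as log(T)/tau. A minimal round-trip sketch with a hypothetical 3-state generator (the paper's estimator additionally guarantees a valid generator and supports constraints like detailed balance, which this naive inverse does not):

```python
import numpy as np
from scipy.linalg import expm, logm

# Hypothetical 3-state rate matrix (off-diagonals >= 0, rows sum to zero)
K_true = np.array([[-0.6,  0.4,  0.2],
                   [ 0.3, -0.5,  0.2],
                   [ 0.1,  0.3, -0.4]])
tau = 0.5
T = expm(tau * K_true)          # transition probabilities at lag tau
K_est = logm(T).real / tau      # naive matrix-logarithm recovery of the generator
```

For well-conditioned problems the principal matrix logarithm recovers the generator, but with noisy empirical transition matrices it can produce negative off-diagonal rates, which motivates maximum likelihood estimation in the generator parameterization.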
Maximum-Likelihood Detection Of Noncoherent CPM
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
In order to diagnose transformer fault efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most possible failure mode. And then considering the evidence input into the diagnosis model is gradually acquired and the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.
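A toy sketch of the two ingredients described above, with entirely hypothetical failure modes, priors, and symptom likelihoods: a Bayesian update of the failure-mode posterior from an observed symptom, and selection of the next diagnostic test by expected entropy reduction (a common information-gain criterion, used here as a stand-in for the paper's evidence-selection rule).

```python
import numpy as np

# Hypothetical transformer failure modes and symptom likelihoods
modes = ["winding fault", "insulation aging", "core overheating"]
prior = np.array([0.2, 0.5, 0.3])
# p_sym[i, j] = P(symptom j present | failure mode i)
p_sym = np.array([[0.9, 0.2, 0.3],
                  [0.3, 0.8, 0.4],
                  [0.2, 0.3, 0.9]])

def update(prob, j, present):
    like = p_sym[:, j] if present else 1 - p_sym[:, j]
    post = prob * like
    return post / post.sum()

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum()

def best_next_test(prob, remaining):
    # pick the symptom whose observation maximizes expected entropy reduction
    gains = []
    for j in remaining:
        p_present = prob @ p_sym[:, j]
        h = p_present * entropy(update(prob, j, True)) \
            + (1 - p_present) * entropy(update(prob, j, False))
        gains.append(entropy(prob) - h)
    return remaining[int(np.argmax(gains))]

post = update(prior, 0, True)       # symptom 0 observed as present
nxt = best_next_test(post, [1, 2])  # most informative test to run next
```

This mirrors the multistep mechanism in the abstract: evidence arrives incrementally, the posterior over failure modes is revised, and the next test is chosen for diagnostic value rather than run indiscriminately.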
On the existence of maximum likelihood estimates for presence-only data
Hefley, Trevor J.; Hooten, Mevin B.
2015-01-01
It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.
Automated maximum likelihood separation of signal from baseline in noisy quantal data.
Bruno, William J; Ullah, Ghanim; Mak, Don-On Daniel; Pearson, John E
2013-07-01
Data recordings often include high-frequency noise and baseline fluctuations that are not generated by the system under investigation, which need to be removed before analyzing the signal for the system's behavior. In the absence of an automated method, experimentalists fall back on manual procedures for removing these fluctuations, which can be laborious and prone to subjective bias. We introduce a maximum likelihood formalism for separating signal from a drifting baseline plus noise, when the signal takes on integer multiples of some value, as in ion channel patch-clamp current traces. Parameters such as the quantal step size (e.g., current passing through a single channel), noise amplitude, and baseline drift rate can all be optimized automatically using the expectation-maximization algorithm, taking the number of open channels (or molecules in the on-state) at each time point as a hidden variable. Our goal here is to reconstruct the signal, not model the (possibly highly complex) underlying system dynamics. Thus, our likelihood function is independent of those dynamics. This may be thought of as restricting to the simplest possible hidden Markov model for the underlying channel current, in which successive measurements of the state of the channel(s) are independent. The resulting method is comparable to an experienced human in terms of results, but much faster. FORTRAN 90, C, R, and JAVA codes that implement the algorithm are available for download from our website. PMID:23823225
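A stripped-down sketch of the likelihood in question, under assumptions far simpler than the paper's (no baseline drift, known noise amplitude, known number of levels, and a grid search instead of EM; all signal parameters are hypothetical): measurements are modeled as an equal-weight Gaussian mixture centered at integer multiples of the quantal step, and the step size is chosen to maximize the independent-measurements likelihood.

```python
import numpy as np

rng = np.random.default_rng(2)
q_true, sigma = 1.5, 0.2
levels = rng.integers(0, 4, size=400)            # hidden number of open channels
y = q_true * levels + rng.normal(0, sigma, size=400)

def log_lik(q):
    # each point may come from any of the 4 levels, weighted equally
    d = y[:, None] - q * np.arange(4)[None, :]
    comp = np.exp(-0.5 * (d / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return np.log(comp.mean(axis=1) + 1e-300).sum()

qs = np.linspace(0.5, 2.5, 401)
q_hat = qs[np.argmax([log_lik(q) for q in qs])]
```

Note the likelihood treats successive measurements as independent, exactly the "simplest possible hidden Markov model" restriction the abstract describes.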
Improved dynamic analysis method using load-dependent Ritz vectors
NASA Technical Reports Server (NTRS)
Escobedo-Torres, J.; Ricles, J. M.
1993-01-01
The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by having a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered. It is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load-dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load-dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
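A minimal sketch of how load-dependent Ritz vectors are generated (random matrices stand in for a real structural model): the first vector is the static response to the load pattern, each subsequent vector is the static response to the inertia of the previous one, and all are mass-orthonormalized. No eigenproblem is solved.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20, 4
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)          # hypothetical SPD stiffness matrix
M = np.eye(n)                        # lumped (identity) mass matrix
f = rng.normal(size=n)               # spatial load pattern

X = np.zeros((n, m))
x = np.linalg.solve(K, f)            # first Ritz vector: static response to f
x /= np.sqrt(x @ M @ x)
X[:, 0] = x
for i in range(1, m):
    x = np.linalg.solve(K, M @ X[:, i - 1])   # static response to prior inertia
    for j in range(i):                         # M-orthogonalize (Gram-Schmidt)
        x -= (X[:, j] @ M @ x) * X[:, j]
    x /= np.sqrt(x @ M @ x)
    X[:, i] = x
```

Projecting K and M onto the columns of X yields a small reduced system; because the basis is seeded by the actual load, a handful of vectors can capture the response that many exact eigenvectors would be needed to represent.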
Can the ring polymer molecular dynamics method be interpreted as real time quantum dynamics?
Jang, Seogjoo; Sinitskiy, Anton V.; Voth, Gregory A.
2014-04-21
The ring polymer molecular dynamics (RPMD) method has gained popularity in recent years as a simple approximation for calculating real time quantum correlation functions in condensed media. However, the extent to which RPMD captures real dynamical quantum effects and why it fails under certain situations have not been clearly understood. Addressing this issue has been difficult in the absence of a genuine justification for the RPMD algorithm starting from the quantum Liouville equation. To this end, a new and exact path integral formalism for the calculation of real time quantum correlation functions is presented in this work, which can serve as a rigorous foundation for the analysis of the RPMD method as well as providing an alternative derivation of the well established centroid molecular dynamics method. The new formalism utilizes the cyclic symmetry of the imaginary time path integral in the most general sense and enables the expression of Kubo-transformed quantum time correlation functions as that of physical observables pre-averaged over the imaginary time path. Upon filtering with a centroid constraint function, the formulation results in the centroid dynamics formalism. Upon filtering with the position representation of the imaginary time path integral, we obtain an exact quantum dynamics formalism involving the same variables as the RPMD method. The analysis of the RPMD approximation based on this approach clarifies that an explicit quantum dynamical justification does not exist for the use of the ring polymer harmonic potential term (imaginary time kinetic energy) as implemented in the RPMD method. It is analyzed why this can cause substantial errors in nonlinear correlation functions of harmonic oscillators. Such errors can be significant for general correlation functions of anharmonic systems. We also demonstrate that the short time accuracy of the exact path integral limit of RPMD is of lower order than those for finite discretization of path. The
Dynamical systems and probabilistic methods in partial differential equations
Deift, P.; Levermore, C.D.; Wayne, C.E.
1996-12-31
This publication covers material presented at the American Mathematical Society summer seminar in June, 1994. This seminar sought to provide participants exposure to a wide range of interesting and ongoing work on dynamic systems and the application of probabilistic methods in applied mathematics. Topics discussed include: the application of dynamical systems theory to the solution of partial differential equations; specific work with the complex Ginzburg-Landau, nonlinear Schroedinger, and Korteweg-deVries equations; applications in the area of fluid mechanics; turbulence studies from the perspective of probabilistic methods. Separate abstracts have been indexed into the database from articles in this proceedings.
Fast inference in generalized linear models via expected log-likelihoods.
Ramirez, Alexandro D; Paninski, Liam
2014-04-01
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
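A toy illustration of the approximation for a Poisson GLM with canonical log link, assuming covariates with a known standard normal law (data and coefficients below are hypothetical): the data-dependent sum of exp(x·theta) in the exact log-likelihood is replaced by its closed-form expectation n·exp(|theta|²/2).

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 5000, 3
X = rng.normal(size=(n, p))             # covariates with known N(0, I) distribution
theta = np.array([0.3, -0.2, 0.1])
y = rng.poisson(np.exp(X @ theta))

def exact_ll(t):
    eta = X @ t
    return y @ eta - np.exp(eta).sum()  # up to the y!-term, dropped in both

def expected_ll(t):
    # replace sum_i exp(x_i . t) by n * E[exp(x . t)] = n * exp(|t|^2 / 2)
    return y @ (X @ t) - n * np.exp(t @ t / 2.0)

rel_err = abs(exact_ll(theta) - expected_ll(theta)) / abs(exact_ll(theta))
```

The expected version needs no pass over the covariates for the nonlinear term, which is where the computational savings come from when the covariate distribution is under the experimenter's control.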
Accelerated molecular dynamics methods: introduction and recent developments
Uberuaga, Blas Pedro; Voter, Arthur F; Perez, Danny; Shim, Y; Amar, J G
2009-01-01
A long-standing limitation in the use of molecular dynamics (MD) simulation is that it can only be applied directly to processes that take place on very short timescales: nanoseconds if empirical potentials are employed, or picoseconds if we rely on electronic structure methods. Many processes of interest in chemistry, biochemistry, and materials science require study over microseconds and beyond, due either to the natural timescale for the evolution or to the duration of the experiment of interest. Ignoring the case of liquids, the dynamics on these time scales is typically characterized by infrequent-event transitions, from state to state, usually involving an energy barrier. There is a long and venerable tradition in chemistry of using transition state theory (TST) [10, 19, 23] to directly compute rate constants for these kinds of activated processes. If needed, dynamical corrections to the TST rate, and even quantum corrections, can be computed to achieve an accuracy suitable for the problem at hand. These rate constants then allow us to understand the system behavior on longer time scales than we can directly reach with MD. For complex systems with many reaction paths, the TST rates can be fed into a stochastic simulation procedure such as kinetic Monte Carlo, and a direct simulation of the advance of the system through its possible states can be obtained in a probabilistically exact way. A problem that has become more evident in recent years, however, is that for many systems of interest there is a complexity that makes it difficult, if not impossible, to determine all the relevant reaction paths to which TST should be applied. This is a serious issue, as omitted transition pathways can have uncontrollable consequences on the simulated long-time kinetics. Over the last decade or so, we have been developing a new class of methods for treating the long-time dynamics in these complex, infrequent-event systems. Rather than trying to guess in advance what
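The TST-into-KMC pipeline described above can be sketched in a few lines (barriers, prefactors, and temperature are hypothetical): harmonic TST gives an Arrhenius rate per escape pathway, the residence time in the current state is exponential with the total rate, and a pathway is chosen with probability proportional to its rate.

```python
import numpy as np

rng = np.random.default_rng(8)
kB_T = 0.025                          # eV, roughly room temperature
# hypothetical escape pathways from the current state: (barrier eV, prefactor 1/s)
paths = [(0.50, 1e13), (0.55, 1e13), (0.60, 1e13)]
rates = np.array([nu * np.exp(-Ea / kB_T) for Ea, nu in paths])  # harmonic TST

R = rates.sum()
dt = rng.exponential(1.0 / R)         # KMC residence time in this state
chosen = np.searchsorted(np.cumsum(rates) / R, rng.uniform())    # pick a pathway
```

The method is probabilistically exact only if the pathway list is complete, which is precisely the assumption the accelerated-MD methods in this paper are designed to avoid.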
Physically constrained maximum likelihood mode filtering.
Papp, Joseph C; Preisig, James C; Morozov, Andrey K
2010-04-01
Mode filtering is most commonly implemented using the sampled mode shapes or pseudoinverse algorithms. Buck et al. [J. Acoust. Soc. Am. 103, 1813-1824 (1998)] placed these techniques in the context of a broader maximum a posteriori (MAP) framework. However, the MAP algorithm requires that the signal and noise statistics be known a priori. Adaptive array processing algorithms are candidates for improving performance without the need for a priori signal and noise statistics. A variant of the physically constrained, maximum likelihood (PCML) algorithm [A. L. Kraay and A. B. Baggeroer, IEEE Trans. Signal Process. 55, 4048-4063 (2007)] is developed for mode filtering that achieves the same performance as the MAP mode filter yet does not need a priori knowledge of the signal and noise statistics. The central innovation of this adaptive mode filter is that the received signal's sample covariance matrix, as estimated by the algorithm, is constrained to be that which can be physically realized given a modal propagation model and an appropriate noise model. Shallow water simulation results are presented showing the benefit of using the PCML method in adaptive mode filtering.
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
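The dimension-robust proposals that DILI builds on can be illustrated with the simpler preconditioned Crank-Nicolson (pCN) proposal, which is well defined on function space. This sketch assumes a standard Gaussian reference measure and uses a made-up one-dimensional likelihood; it omits the Hessian-informed, likelihood-adapted operator weighting that distinguishes DILI.

```python
import math
import random

def pcn_sample(phi, dim, beta=0.3, n_steps=5000, rng=None):
    """Preconditioned Crank-Nicolson MCMC with a standard Gaussian prior:
    the proposal u' = sqrt(1-beta^2) u + beta xi leaves the prior invariant,
    so the acceptance ratio involves only the negative log-likelihood phi.
    Because of this, the acceptance rate does not degrade as dim grows."""
    rng = rng or random.Random(0)
    u = [0.0] * dim
    a = math.sqrt(1.0 - beta * beta)
    samples = []
    for _ in range(n_steps):
        v = [a * ui + beta * rng.gauss(0.0, 1.0) for ui in u]
        if rng.random() < math.exp(min(0.0, phi(u) - phi(v))):
            u = v
        samples.append(u[0])
    return samples

# Toy likelihood pulling the first coordinate toward 1.0 (variance 0.25);
# with the N(0,1) prior the exact posterior mean of u[0] is 0.8.
phi = lambda u: 0.5 * (u[0] - 1.0) ** 2 / 0.25
samples = pcn_sample(phi, dim=10)
```

The same chain run with `dim=1000` would mix essentially as well, which is the discretization-independence property the paper formalizes.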
New visualization method for high dynamic range images in low dynamic range devices
NASA Astrophysics Data System (ADS)
Kim, Jun-Hyung; Kim, Hoon; Ko, Sung-Jea
2011-10-01
Various tone reproduction operators have been proposed to display high dynamic range (HDR) images on low dynamic range (LDR) devices. The gradient domain operator is a good candidate due to its capability of reducing the dynamic range while avoiding common artifacts such as halos and loss of image detail. However, the gradient domain operator requires high computational complexity and often introduces low-frequency artifacts such as reversal of contrast between distant image patches. In order to solve these problems, we present a new gradient domain tone reproduction method that adopts an energy functional with two terms: one for preserving global contrast and the other for enhancing image details. In the proposed method, the LDR image is obtained by minimizing the proposed energy functional through a numerical method. Simulation results demonstrate that the proposed method not only achieves significantly reduced computational complexity but also exhibits better visual quality compared with conventional algorithms.
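A toy one-dimensional analogue of such a two-term gradient-domain energy can make the idea concrete. This is an illustration of the general approach, not the authors' operator; the attenuation factor `alpha` and weight `lam` are made-up knobs, and real tone mapping works on 2D log-luminance with a Poisson solver rather than plain gradient descent.

```python
def tone_map_1d(log_lum, alpha=0.8, lam=0.1, n_iter=500, step=0.1):
    """Gradient descent on a two-term energy: a gradient-fidelity term
    pulling the output's gradients toward attenuated input gradients
    (dynamic range compression), plus a data term tying the output to a
    globally compressed copy of the input (global contrast preservation)."""
    n = len(log_lum)
    g = [alpha * (log_lum[i + 1] - log_lum[i]) for i in range(n - 1)]
    x = list(log_lum)
    for _ in range(n_iter):
        # Data term gradient.
        grad = [2.0 * lam * (x[i] - alpha * log_lum[i]) for i in range(n)]
        # Gradient-fidelity term gradient.
        for i in range(n - 1):
            d = (x[i + 1] - x[i]) - g[i]
            grad[i] -= 2.0 * d
            grad[i + 1] += 2.0 * d
        x = [x[i] - step * grad[i] for i in range(n)]
    return x

# A linear log-luminance ramp spanning 7 log units is compressed to ~5.6.
out = tone_map_1d([float(i) for i in range(8)])
```

For this quadratic energy the exact minimizer is `alpha * log_lum`, so the output's dynamic range shrinks by the factor `alpha` while local gradient structure is preserved.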
Sampling variability in forensic likelihood-ratio computation: A simulation study.
Ali, Tauseef; Spreeuwers, Luuk; Veldhuis, Raymond; Meuwly, Didier
2015-12-01
Recently, in the forensic biometric community, there has been growing interest in computing a metric called the "likelihood-ratio" when a pair of biometric specimens is compared using a biometric recognition system. Generally, a biometric recognition system outputs a score, and therefore a likelihood-ratio computation method is used to convert the score to a likelihood-ratio. The likelihood-ratio is the probability of the score given the hypothesis of the prosecution, Hp (the two biometric specimens arose from the same source), divided by the probability of the score given the hypothesis of the defense, Hd (the two biometric specimens arose from different sources). Given a set of training scores under Hp and a set of training scores under Hd, several methods exist to convert a score to a likelihood-ratio. In this work, we focus on the issue of sampling variability in the training sets and carry out a detailed empirical study to quantify its effect on commonly proposed likelihood-ratio computation methods. We study the effect of the sampling variability by varying: 1) the shapes of the probability density functions which model the distributions of scores in the two training sets; 2) the sizes of the training sets; and 3) the score for which a likelihood-ratio is computed. For this purpose, we introduce a simulation framework which can be used to study several properties of a likelihood-ratio computation method and to quantify the effect of sampling variability in the likelihood-ratio computation. It is empirically shown that the sampling variability can be considerable, particularly when the training sets are small. Furthermore, a given method of likelihood-ratio computation can behave very differently for different shapes of the probability density functions of the scores in the training sets and different scores for which likelihood-ratios are computed.
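As a deliberately minimal instance of a score-to-likelihood-ratio computation method of the kind the paper studies, one can fit a single Gaussian to each training set by maximum likelihood and take the ratio of densities at the questioned score. Real forensic calibration uses more careful models (and the paper's point is precisely that small training sets make such fits unstable), so treat this purely as a sketch with made-up training scores.

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_gaussian(scores):
    """ML fit of a Gaussian to a training set of scores."""
    n = len(scores)
    mu = sum(scores) / n
    var = sum((s - mu) ** 2 for s in scores) / n
    return mu, math.sqrt(var)

def likelihood_ratio(score, hp_scores, hd_scores):
    """LR = p(score | Hp) / p(score | Hd), each density fitted from its
    training set; the fitted parameters inherit the sampling variability
    of those (possibly small) sets."""
    mu_p, sd_p = fit_gaussian(hp_scores)
    mu_d, sd_d = fit_gaussian(hd_scores)
    return gaussian_pdf(score, mu_p, sd_p) / gaussian_pdf(score, mu_d, sd_d)

# Made-up same-source (Hp) and different-source (Hd) training scores.
hp_scores = [1.8, 2.0, 2.2, 1.9, 2.1]
hd_scores = [-0.2, 0.0, 0.2, -0.1, 0.1]
lr_same = likelihood_ratio(2.0, hp_scores, hd_scores)   # score near Hp
lr_diff = likelihood_ratio(0.0, hp_scores, hd_scores)   # score near Hd
```

Resampling `hp_scores` and `hd_scores` from their parent distributions and recomputing the LR is exactly the kind of experiment the paper's simulation framework automates.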
Adiabatic molecular-dynamics-simulation-method studies of kinetic friction
NASA Astrophysics Data System (ADS)
Zhang, J.; Sokoloff, J. B.
2005-06-01
An adiabatic molecular-dynamics method is developed and used to study the Müser-Robbins model for dry friction (i.e., nonzero kinetic friction in the slow sliding speed limit). In this model, dry friction between two crystalline surfaces rotated with respect to each other is due to mobile molecules (i.e., dirt particles) adsorbed at the interface. Our adiabatic method allows us to quickly locate interface potential-well minima, which become unstable during sliding of the surfaces. Since dissipation due to friction in the slow sliding speed limit results from mobile molecules dropping out of such unstable wells, our method provides a way to calculate dry friction. The results agree extremely well with those found by conventional molecular dynamics for the same system, but our method is more than a factor of 10 faster.
Extended Molecular Dynamics Methods for Vortex Dynamics in Nano-structured Superconductors
NASA Astrophysics Data System (ADS)
Kato, Masaru; Sato, Osamu
Using an improved molecular dynamics simulation method, we study vortex dynamics in nano-scale superconductors. Heat generation during vortex motion, heat transfer in the superconductor, and entropy forces on vortices are incorporated. Quasi-particle relaxation after vortex motion, and the attractive "retarded" forces it exerts on other vortices, are also incorporated using the condensation-energy field. We show the time development of the formation of vortex channel flow in a superconducting Corbino disk.
The Feldenkrais Method: A Dynamic Approach to Changing Motor Behavior.
ERIC Educational Resources Information Center
Buchanan, Patricia A.; Ulrich, Beverly D.
2001-01-01
Describes the Feldenkrais Method of somatic education, noting parallels with a dynamic systems theory (DST) approach to motor behavior. Feldenkrais uses movement and perception to foster individualized improvement in function. DST explains that a human-environment system continually adapts to changing conditions and assembles behaviors…
Forced vibration of flexible body systems. A dynamic stiffness method
NASA Astrophysics Data System (ADS)
Liu, T. S.; Lin, J. C.
1993-10-01
Due to the development of high-speed machinery, robots, and aerospace structures, research on flexible body systems undergoing both gross motion and elastic deformation has grown in importance. The finite element method and modal analysis are often used in formulating equations of motion for dynamic analysis of such systems, which entails time domain, forced vibration analysis. This study develops a new method based on dynamic stiffness to investigate forced vibration of flexible body systems. In contrast to the conventional finite element method, the shape functions and stiffness matrices used in this study are derived from equations of motion for continuum beams. Hence, the resulting shape functions are termed dynamic shape functions. By applying the dynamic shape functions, the mass and stiffness matrices of a beam element are derived. The virtual work principle is employed to formulate equations of motion. Not only the coupling of gross motion and elastic deformation, but also the stiffening effect of axial forces, is taken into account. Simulation results for a cantilever beam, a rotating beam, and a slider crank mechanism are compared with the literature to verify the proposed method.
Continuation Methods for Qualitative Analysis of Aircraft Dynamics
NASA Technical Reports Server (NTRS)
Cummings, Peter A.
2004-01-01
A class of numerical methods for constructing bifurcation curves for systems of coupled, non-linear ordinary differential equations is presented. Foundations are discussed, and several variations are outlined along with their respective capabilities. Appropriate background material from dynamical systems theory is presented.
Do dynamic-based MR knee kinematics methods produce the same results as static methods?
d'Entremont, Agnes G; Nordmeyer-Massner, Jurek A; Bos, Clemens; Wilson, David R; Pruessmann, Klaas P
2013-06-01
MR-based methods provide low risk, noninvasive assessment of joint kinematics; however, these methods often use static positions or require many identical cycles of movement. The study objective was to compare the 3D kinematic results approximated from a series of sequential static poses of the knee with the 3D kinematic results obtained from continuous dynamic movement of the knee. To accomplish this objective, we compared kinematic data from a validated static MR method to a fast static MR method, and compared kinematic data from both static methods to a newly developed dynamic MR method. Ten normal volunteers were imaged using the three kinematic methods (dynamic, static standard, and static fast). Results showed that the two sets of static results were in agreement, indicating that the sequences (standard and fast) may be used interchangeably. Dynamic kinematic results were significantly different from both static results in eight of 11 kinematic parameters: patellar flexion, patellar tilt, patellar proximal translation, patellar lateral translation, patellar anterior translation, tibial abduction, tibial internal rotation, and tibial anterior translation. Three-dimensional MR kinematics measured from dynamic knee motion are often different from those measured in a static knee at several positions, indicating that dynamic-based kinematics provides information that is not obtainable from static scans.
Hybrid finite element and Brownian dynamics method for charged particles
NASA Astrophysics Data System (ADS)
Huber, Gary A.; Miao, Yinglong; Zhou, Shenggao; Li, Bo; McCammon, J. Andrew
2016-04-01
Diffusion is often the rate-determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. A previous study introduced a new hybrid diffusion method that couples the strengths of each of these two methods, but was limited by the lack of interactions among the particles; the force on each particle had to be from an external field. This study further develops the method to allow charged particles. The method is derived for a general multidimensional system and is presented using a basic test case for a one-dimensional linear system with one charged species and a radially symmetric system with three charged species.
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation and 2) they fail to learn personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database. PMID:25163062
Dynamic analysis of piping using the structural overlap method
Curreri, J.; Bezler, P.; Hartzman, M.
1981-03-01
The structural overlap method is a procedure for analyzing the dynamic response of a piping system by performing a separate analysis on subsystems of the complete structure. Specific cases were investigated to obtain an estimate of the validity and application of the method. The case studies were increased in complexity in order to examine some of the problems involved in implementing the method. It is concluded that the overlap method should not be substituted for a complete analysis of a full system. However, if a sufficiently high natural frequency is associated with the overlap section or the overlap section is a substantial portion of the system, acceptable results could be obtained.
Review of dynamic optimization methods in renewable natural resource management
Williams, B.K.
1989-01-01
In recent years, the applications of dynamic optimization procedures in natural resource management have proliferated. A systematic review of these applications is given in terms of a number of optimization methodologies and natural resource systems. The applicability of the methods to renewable natural resource systems is compared in terms of system complexity, system size, and precision of the optimal solutions. Recommendations are made concerning the appropriate methods for certain kinds of biological resource problems.
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE).
Boker, Steven M; Brick, Timothy R; Pritikin, Joshua N; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D; Maes, Hermine H; Neale, Michael C
2015-01-01
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant's personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual's data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
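The distributed-likelihood loop described above can be sketched in a few lines. The Gaussian model, the toy "devices", and the grid-search stand-in for the optimizer are all illustrative assumptions of this sketch, not part of the MIDDLE software.

```python
import math

def local_loglik(theta, data):
    """Runs on a participant's device: log-likelihood of the private
    data under theta = (mu, sigma). Only this scalar leaves the device."""
    mu, sigma = theta
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

def aggregate_loglik(theta, devices):
    """Central optimizer side: sums the per-device likelihood values;
    raw observations never cross this boundary."""
    return sum(local_loglik(theta, d) for d in devices)

# Toy 'devices', each holding a participant's private observations.
devices = [[4.8, 5.1], [5.2], [4.9, 5.0, 5.0]]

# A coarse grid search stands in for the numerical optimizer that would
# repeatedly broadcast parameter vectors and collect likelihood values.
mu_hat = max((m / 10.0 for m in range(0, 101)),
             key=lambda m: aggregate_loglik((m, 1.0), devices))
```

A participant opting out simply stops contributing a term to `aggregate_loglik`; no stored copy of their data needs to be deleted, which is the consent-management property the paper emphasizes.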
A method for dynamic fracture initiation testing of ceramics
Duffy, J.; Suresh, S.; Cho, K.; Bopp, E.R.
1988-10-01
An experimental method is described whereby the dynamic fracture initiation toughness of ceramics and ceramic composites can be measured in pure tension or pure torsion at stress intensity factor rates of 10^5 to 10^6 MPa√m/s. In this procedure, circumferentially notched cylindrical rods are subjected to uniaxial cyclic compression at room temperature to introduce a self-arresting, concentric Mode I fatigue pre-crack, following the technique presented. Subsequently, dynamic fracture initiation is effected by stress wave loading with a sharp-fronted pulse which subjects the specimen to a dynamic load inducing either Mode I or Mode III fracture. Instrumentation appropriate to the loading mode provides a record of average stress at the fracture site as a function of time. The capability of this method to yield highly reproducible dynamic fracture initiation toughness values for ceramics is demonstrated with the aid of experiments conducted on a polycrystalline aluminum oxide. Guidelines for the dynamic fracture initiation testing of ceramics and ceramic composites are discussed.
Dynamic Rupture Benchmarking of the ADER-DG Method
NASA Astrophysics Data System (ADS)
Pelties, C.; Gabriel, A.
2012-12-01
We will verify the arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) method in various test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite (Harris et al. 2009). The ADER-DG scheme is able to solve the spontaneous rupture problem with high-order accuracy in space and time on three-dimensional unstructured tetrahedral meshes. Strong mesh coarsening or refinement in areas of interest can be applied to keep the computational costs feasible. Moreover, the method does not generate spurious high-frequency contributions in the slip rate spectra and therefore does not require any artificial damping, as demonstrated in previous presentations and publications (Pelties et al. 2010 and 2012). We will show that these features also hold for more advanced setups, such as a branching fault system, heterogeneous background stresses, and bimaterial faults. The advanced geometrical flexibility combined with an enhanced accuracy will make the ADER-DG method a useful tool to study earthquake dynamics on complex fault systems in realistic rheologies. References: Harris, R.A., M. Barall, R. Archuleta, B. Aagaard, J.-P. Ampuero, H. Bhat, V. Cruz-Atienza, L. Dalguer, P. Dawson, S. Day, B. Duan, E. Dunham, G. Ely, Y. Kaneko, Y. Kase, N. Lapusta, Y. Liu, S. Ma, D. Oglesby, K. Olsen, A. Pitarka, S. Song, and E. Templeton, The SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise, Seismological Research Letters, vol. 80, no. 1, pages 119-126, 2009 Pelties, C., J. de la Puente, and M. Kaeser, Dynamic Rupture Modeling in Three Dimensions on Unstructured Meshes Using a Discontinuous Galerkin Method, AGU 2010 Fall Meeting, abstract #S21C-2068 Pelties, C., J. de la Puente, J.-P. Ampuero, G. Brietzke, and M. Kaeser, Three-Dimensional Dynamic Rupture Simulation with a High-order Discontinuous Galerkin Method on Unstructured Tetrahedral Meshes, JGR. - Solid Earth, VOL. 117, B02309, 2012
Dynamic Rupture Benchmarking of the ADER-DG Method
NASA Astrophysics Data System (ADS)
Gabriel, Alice; Pelties, Christian
2013-04-01
We will verify the arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) method in various test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite (Harris et al. 2009). The ADER-DG scheme is able to solve the spontaneous rupture problem with high-order accuracy in space and time on three-dimensional unstructured tetrahedral meshes. Strong mesh coarsening or refinement in areas of interest can be applied to keep the computational costs feasible. Moreover, the method does not generate spurious high-frequency contributions in the slip rate spectra and therefore does not require any artificial damping, as demonstrated in previous presentations and publications (Pelties et al. 2010 and 2012). We will show that these features also hold for more advanced setups, such as a branching fault system, heterogeneous background stresses, and bimaterial faults. The advanced geometrical flexibility combined with an enhanced accuracy will make the ADER-DG method a useful tool to study earthquake dynamics on complex fault systems in realistic rheologies. References: Harris, R.A., M. Barall, R. Archuleta, B. Aagaard, J.-P. Ampuero, H. Bhat, V. Cruz-Atienza, L. Dalguer, P. Dawson, S. Day, B. Duan, E. Dunham, G. Ely, Y. Kaneko, Y. Kase, N. Lapusta, Y. Liu, S. Ma, D. Oglesby, K. Olsen, A. Pitarka, S. Song, and E. Templeton, The SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise, Seismological Research Letters, vol. 80, no. 1, pages 119-126, 2009 Pelties, C., J. de la Puente, and M. Kaeser, Dynamic Rupture Modeling in Three Dimensions on Unstructured Meshes Using a Discontinuous Galerkin Method, AGU 2010 Fall Meeting, abstract #S21C-2068 Pelties, C., J. de la Puente, J.-P. Ampuero, G. Brietzke, and M. Kaeser, Three-Dimensional Dynamic Rupture Simulation with a High-order Discontinuous Galerkin Method on Unstructured Tetrahedral Meshes, JGR. - Solid Earth, VOL. 117, B02309, 2012
Dynamic Optical Grating Device and Associated Method for Modulating Light
NASA Technical Reports Server (NTRS)
Park, Yeonjoon (Inventor); Choi, Sang H. (Inventor); King, Glen C. (Inventor); Chu, Sang-Hyon (Inventor)
2012-01-01
A dynamic optical grating device and associated method for modulating light is provided that is capable of controlling the spectral properties and propagation of light without moving mechanical components, by the use of a dynamic electric and/or magnetic field. By changing the electric and/or magnetic field, the index of refraction, the extinction coefficient, the transmittivity, and the reflectivity of the optical grating device may be controlled in order to control the spectral properties of the light reflected or transmitted by the device.
Computational Methods for Structural Mechanics and Dynamics, part 1
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.
Population-dynamics method with a multicanonical feedback control.
Nemoto, Takahiro; Bouchet, Freddy; Jack, Robert L; Lecomte, Vivien
2016-06-01
We discuss the Giardinà-Kurchan-Peliti population dynamics method for evaluating large deviations of time-averaged quantities in Markov processes [Phys. Rev. Lett. 96, 120603 (2006)]. This method exhibits systematic errors which can be large in some circumstances, particularly for systems with weak noise, with many degrees of freedom, or close to dynamical phase transitions. We show how these errors can be mitigated by introducing control forces within the algorithm. These forces are determined by an iteration-and-feedback scheme, inspired by multicanonical methods in equilibrium sampling. We demonstrate substantially improved results in a simple model, and we discuss potential applications to more complex systems. PMID:27415224
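A minimal sketch of the cloning (population dynamics) scheme the paper starts from. The toy dynamics, where the state is redrawn uniformly from {0, 1} every step, is my own choice: it makes the scaled cumulant generating function lambda(s) = log((1 + e^s)/2) exact, which is handy for checking the estimator, though it also makes the feedback from cloning trivial (the regime where the systematic errors the paper addresses do not arise).

```python
import math
import random

def cloning_scgf(s, n_walkers=1000, n_steps=200, rng=None):
    """Giardina-Kurchan-Peliti cloning estimate of lambda(s), the scaled
    cumulant generating function for the time average of b(state)=state.
    Each step: evolve every walker, weight it by exp(s * b), record the
    log of the population-mean weight, and resample the population."""
    rng = rng or random.Random(1)
    states = [rng.randint(0, 1) for _ in range(n_walkers)]
    log_mean_weights = []
    for _ in range(n_steps):
        states = [rng.randint(0, 1) for _ in states]      # toy Markov step
        weights = [math.exp(s * st) for st in states]     # bias exp(s*b)
        mean_w = sum(weights) / n_walkers
        log_mean_weights.append(math.log(mean_w))
        # Clone/kill: resample the population proportionally to weight.
        states = rng.choices(states, weights=weights, k=n_walkers)
    return sum(log_mean_weights) / n_steps

est = cloning_scgf(1.0)
```

The control forces proposed in the paper would modify the Markov step itself so that the weights stay close to uniform, reducing the resampling noise that dominates in harder problems.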
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
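The profiling idea, illustrated on the simplest possible case: the nuisance parameter (here a Gaussian variance) is maximized out in closed form, leaving a one-dimensional profile likelihood for the parameter of interest. The data and grid are made up, and this is far simpler than the GLMM setting of the article, but the mechanism, fixing some parameters turns the problem into a standard one, is the same.

```python
import math

def profile_loglik(psi, data):
    """Profile log-likelihood for the mean psi of a Gaussian sample:
    the full log-likelihood maximized over the nuisance sigma, whose
    ML value given psi is the mean squared deviation about psi."""
    n = len(data)
    s2 = sum((x - psi) ** 2 for x in data) / n
    return -0.5 * n * (math.log(2 * math.pi * s2) + 1.0)

data = [4.2, 5.1, 4.8, 5.5, 4.9]
# The profile is maximized at the sample mean (4.9 here); a grid search
# stands in for whatever standard optimizer the software provides.
psi_hat = max((p / 100.0 for p in range(300, 701)),
              key=lambda p: profile_loglik(p, data))
```

Plotting `profile_loglik` over the grid also yields profile-likelihood confidence intervals, one of the usual payoffs of this approach.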
Maximum likelihood clustering with dependent feature trees
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first order dependent feature trees using criteria of mutual information and distance measures. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed point iterations. The field structure of the data is also taken into account in developing maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.
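The fixed-point maximum likelihood iterations the abstract alludes to are, in the simplest case, the familiar EM updates for a Gaussian mixture. The sketch below handles a two-component one-dimensional mixture with made-up data; it does not model the dependent feature trees or the field structure that are the abstract's actual contribution.

```python
import math

def em_mixture(data, n_iter=100):
    """Fixed-point (EM) iterations for a two-component 1D Gaussian
    mixture: alternate posterior responsibilities (E-step) with ML
    updates of the weights, means, and variances (M-step)."""
    mu = [min(data), max(data)]        # spread-out initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        resp = []
        for x in data:
            p = [w[k] * math.exp(-0.5 * (x - mu[k]) ** 2 / var[k])
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            tot = p[0] + p[1]
            resp.append([p[0] / tot, p[1] / tot])
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return w, mu, var

# Two well-separated clusters around 0.0 and 5.0.
data = [0.1, -0.2, 0.0, 0.2, -0.1, 5.0, 5.2, 4.9, 5.1, 4.8]
w, mu, var = em_mixture(data)
```

Maximum likelihood clustering then assigns each observation to the component with the largest responsibility.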
NASA Astrophysics Data System (ADS)
Takahashi, Takashi; Matunaga, Saburo
In order to analyze the dynamics of space systems, such as cluster satellite systems and the capturing process of damaged satellites, it is necessary to treat such systems as reconfigurable multibody systems. In this paper, we discuss the numerical computation of the dynamics of a ground experiment system that simulates the capturing and berthing of a satellite by a dual manipulator on a flat floor. We have previously discussed an efficient dynamics algorithm for reconfigurable multibody systems with topological changes. However, contact dynamics, one of the most difficult issues in our study, remains to be discussed. We introduce two types of linear complementarity problem (LCP) for contact dynamics; the difference between them is whether impacts are considered. Dynamical systems with impacts and friction are non-conservative, and moreover the LCP is not always solvable. Therefore we must check whether the solutions of the numerical computation are correct, and how accurate they are. In this paper, we derive a method of numerical computation with guaranteed accuracy for the LCP of contact dynamics.
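For concreteness, a minimal LCP solver of the kind used in contact dynamics can be sketched with projected Gauss-Seidel (a common choice for contact problems, not the guaranteed-accuracy scheme the paper derives):

```python
def solve_lcp_pgs(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: find z >= 0 such that
    w = M z + q >= 0 and z . w = 0 (complementarity). Reliable for
    the symmetric positive definite matrices typical of frictionless
    contact; no accuracy guarantee in general."""
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Residual w_i with the current iterate, then project
            # the unconstrained Gauss-Seidel update onto z_i >= 0.
            w_i = q[i] + sum(M[i][j] * z[j] for j in range(n))
            z[i] = max(0.0, z[i] - w_i / M[i][i])
    return z

# A tiny two-contact example: the solution is z = [0.5, 0], with
# w = M z + q = [0, 2.5], so each contact satisfies complementarity.
M = [[2.0, 1.0],
     [1.0, 2.0]]
q = [-1.0, 2.0]
z = solve_lcp_pgs(M, q)
```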
A Non-smooth Newton Method for Multibody Dynamics
Erleben, K.; Ortiz, R.
2008-09-01
In this paper we deal with the simulation of rigid bodies. Rigid body dynamics has become very important for simulating rigid body motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contributions of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.
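A toy version of such a reformulation replaces each complementarity condition with the Fischer-Burmeister equation and applies Newton's method to the resulting non-smooth system (a sketch with a finite-difference Jacobian, not the authors' implementation):

```python
import math

def fb(a, b):
    """Fischer-Burmeister function: fb(a, b) = 0  <=>  a >= 0, b >= 0, a*b = 0."""
    return math.sqrt(a * a + b * b) - a - b

def residual(z, M, q):
    """The LCP  w = M z + q,  z >= 0 complementary to w >= 0,
    rewritten as the non-smooth system fb(z_i, w_i) = 0."""
    n = len(q)
    w = [q[i] + sum(M[i][j] * z[j] for j in range(n)) for i in range(n)]
    return [fb(z[i], w[i]) for i in range(n)]

def nonsmooth_newton_2x2(M, q, iters=50, h=1e-7):
    """Newton iteration on the FB system for a 2x2 problem. The
    Jacobian is approximated by finite differences for brevity; a
    production solver would use an element of the generalized
    Jacobian at genuinely non-smooth points."""
    z = [1.0, 1.0]
    for _ in range(iters):
        F = residual(z, M, q)
        if max(abs(f) for f in F) < 1e-12:
            break
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            zp = list(z)
            zp[j] += h
            Fp = residual(zp, M, q)
            for i in range(2):
                J[i][j] = (Fp[i] - F[i]) / h
        # Solve J dz = -F by Cramer's rule (adequate for 2x2).
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        z[0] += (-F[0] * J[1][1] + F[1] * J[0][1]) / det
        z[1] += (-J[0][0] * F[1] + J[1][0] * F[0]) / det
    return z

M = [[2.0, 1.0],
     [1.0, 2.0]]
q = [-1.0, 2.0]
z = nonsmooth_newton_2x2(M, q)   # converges to the LCP solution [0.5, 0]
```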
Activation Likelihood Estimation meta-analysis revisited
Eickhoff, Simon B.; Bzdok, Danilo; Laird, Angela R.; Kurth, Florian; Fox, Peter T.
2011-01-01
A widely used technique for coordinate-based meta-analysis of neuroimaging data is activation likelihood estimation (ALE), which determines the convergence of foci reported from different experiments. ALE analysis involves modelling these foci as probability distributions whose width is based on empirical estimates of the spatial uncertainty due to the between-subject and between-template variability of neuroimaging data. ALE results are assessed against a null-distribution of random spatial association between experiments, resulting in random-effects inference. In the present revision of this algorithm, we address two remaining drawbacks of the previous algorithm. First, the assessment of spatial association between experiments was based on a highly time-consuming permutation test, which nevertheless entailed the danger of underestimating the right tail of the null-distribution. In this report, we outline how this previous approach may be replaced by a faster and more precise analytical method. Second, the previously applied correction procedure, i.e. controlling the false discovery rate (FDR), is supplemented by new approaches for correcting the family-wise error rate and the cluster-level significance. The different alternatives for drawing inference on meta-analytic results are evaluated on an exemplary dataset on face perception as well as discussed with respect to their methodological limitations and advantages. In summary, we thus replaced the previous permutation algorithm with a faster and more rigorous analytical solution for the null-distribution and comprehensively address the issue of multiple-comparison corrections. The proposed revision of the ALE-algorithm should provide an improved tool for conducting coordinate-based meta-analyses on functional imaging data. PMID:21963913
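The core ALE computation, modelling foci as probability distributions and taking their probabilistic union across experiments, can be sketched in one dimension (the kernel width and peak value below are arbitrary illustration choices, not the empirical spatial-uncertainty estimates of the algorithm):

```python
import math

def modeled_activation(grid, foci, sigma=2.0, peak=0.5):
    """Modelled activation (MA) map for one experiment: at each voxel,
    the maximum over Gaussian kernels centred on the reported foci.
    A peak value below 1 keeps each MA value a probability."""
    return [max(peak * math.exp(-(v - f) ** 2 / (2.0 * sigma ** 2)) for f in foci)
            for v in grid]

def ale_map(ma_maps):
    """ALE score per voxel: the probability that at least one
    experiment 'activates' it, i.e. 1 - prod_e (1 - MA_e)."""
    ale = []
    for vals in zip(*ma_maps):
        miss = 1.0
        for ma in vals:
            miss *= (1.0 - ma)
        ale.append(1.0 - miss)
    return ale

grid = [i * 0.5 for i in range(41)]           # a 1-D "brain", 0 to 20 mm
exp1 = modeled_activation(grid, [10.0])       # one experiment: focus at 10 mm
exp2 = modeled_activation(grid, [10.5, 3.0])  # another: foci at 10.5 and 3 mm
ale = ale_map([exp1, exp2])
# Convergent foci near 10 mm score higher than the isolated focus at 3 mm.
```

In the actual algorithm these scores are then assessed against a null-distribution of random spatial association; the revision discussed here replaces the permutation-based null with an analytical one.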
Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis (PDS) method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven-degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
Parallel methods for dynamic simulation of multiple manipulator systems
NASA Technical Reports Server (NTRS)
Mcmillan, Scott; Sadayappan, P.; Orin, David E.
1993-01-01
In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of a CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four-manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four-processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.
Molecular Dynamics and Energy Minimization Based on Embedded Atom Method
1995-03-01
This program performs atomic scale computer simulations of the structure and dynamics of metallic systems using energetics based on the Embedded Atom Method. The program performs two types of calculations. First, it performs local energy minimization of all atomic positions to determine ground state and saddle point energies and structures. Second, it performs molecular dynamics simulations to determine thermodynamics or microscopic dynamics of the system. In both cases, various constraints can be applied to the system. The volume of the system can be varied automatically to achieve any desired external pressure. The temperature in molecular dynamics simulations can be controlled by a variety of methods. Further, the temperature control can be applied either to the entire system or just a subset of the atoms that would act as a thermal source/sink. The motion of one or more of the atoms can be constrained to either simulate the effects of bulk boundary conditions or to facilitate the determination of saddle point configurations. The simulations are performed with periodic boundary conditions.
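The local energy minimization step can be illustrated with steepest descent on a two-atom system; a Lennard-Jones pair potential stands in here for the EAM energetics, which would require tabulated embedding functions:

```python
def lj_energy(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair energy (an illustrative stand-in for EAM)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_force(r, eps=1.0, sigma=1.0):
    """Force on the pair: minus the derivative of the energy in r."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (12.0 * sr6 * sr6 - 6.0 * sr6) / r

def minimize_dimer(r0, lr=0.01, steps=2000):
    """Steepest-descent relaxation of a two-atom system: move the
    separation along the force until the ground state is reached."""
    r = r0
    for _ in range(steps):
        r += lr * lj_force(r)
    return r

# For eps = sigma = 1, the analytic minimum is at r = 2**(1/6) ~ 1.1225.
r_min = minimize_dimer(1.5)
```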
A novel method of dynamic correction in the time domain
NASA Astrophysics Data System (ADS)
Hessling, J. P.
2008-07-01
The dynamic error of measured signals is sometimes unacceptably large. If the dynamic properties of the measurement system are known, the true physical signal may to some extent be reconstructed. With a parametrized characterization of the system and sampled signals, time-domain digital filters may be utilized for correction. In the present work a general method for synthesizing such correction filters is developed. It maps the dynamic parameters of the measurement system directly onto the filter coefficients and utilizes time-reversed filtering. This avoids commonly used numerical optimization in the filter synthesis. The method of correction is simple with absolute repeatability and stability, and results in a low residual error. Explicit criteria to control both the horizontal (time) and vertical (amplitude) discretization errors are presented in terms of the utilization of bandwidth and noise gain, respectively. To evaluate how close to optimal the correction is, these errors are also formulated in relation to the signal-to-noise ratio of the original measurement system. For purposes of illustration, typical mechanical and piezo-electric transducer systems for measuring force, pressure or acceleration are simulated and dynamically corrected with such dedicated digital filters.
Method for dynamic fracture initiation testing of ceramics
Duffy, J.; Suresh, S.; Cho, K.; Bopp, E.R.
1987-05-01
An experimental method is described whereby the dynamic fracture initiation toughness of ceramics and ceramic composites can be measured in pure tension or pure torsion at stress intensity factor rates of 100,000 to 1,000,000 MPa·√m/s. In this procedure, circumferentially-notched cylindrical rods are subjected to uniaxial cyclic compression at room temperature to introduce a self-arresting, concentric Mode I fatigue pre-crack, following the technique presented by Suresh et al. (1987) and Suresh and Tschegg (1987). Subsequently, dynamic-fracture initiation is effected by stress-wave loading with a sharp-fronted pulse which subjects the specimen to a dynamic load inducing either Mode I or Mode III fracture. Instrumentation appropriate to the loading mode provides a record of average stress at the fracture site as a function of time. The capability of this method to yield highly reproducible dynamic fracture initiation toughness values for ceramics is demonstrated with the aid of experiments conducted on a polycrystalline aluminum oxide.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.
Novosad, Philip; Reader, Andrew J
2016-06-21
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
Fast inference in generalized linear models via expected log-likelihoods
Ramirez, Alexandro D.; Paninski, Liam
2015-01-01
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
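The computational gain described here comes from replacing an O(n) sum in the log-likelihood with a closed-form expectation over the covariates. A sketch for a Poisson GLM with a single standard-normal covariate (the setup is illustrative, not taken from the paper):

```python
import math
import random

random.seed(0)

# For a Poisson GLM with exponential link, the exact log-likelihood is
#   l(theta) = sum_i y_i * theta * x_i  -  sum_i exp(theta * x_i),
# and the second (covariate-only) sum is the expensive part. When the
# covariates are known to be i.i.d. standard normal, that sum can be
# replaced by its expectation n * E[exp(theta * x)] = n * exp(theta**2 / 2),
# which costs O(1) per likelihood evaluation instead of O(n).
n, theta = 5000, 0.5
xs = [random.gauss(0.0, 1.0) for _ in range(n)]

exact_term = sum(math.exp(theta * x) for x in xs)   # O(n) per evaluation
expected_term = n * math.exp(theta ** 2 / 2.0)      # O(1) per evaluation

rel_err = abs(exact_term - expected_term) / expected_term
# The two terms agree up to Monte Carlo error of order 1/sqrt(n).
```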
Numerical likelihood analysis of cosmic ray anisotropies
Carlos Hojvat et al.
2003-07-02
A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended in order to include measurement errors, and makes maximum unbiased use of all available information.
Analysis methods for wind turbine control and electrical system dynamics
NASA Astrophysics Data System (ADS)
Hinrichsen, E. N.
1995-05-01
The integration of new energy technologies into electric power systems requires methods which recognize the full range of dynamic events in both the new generating unit and the power system. Since new energy technologies are initially perceived as small contributors to large systems, little attention is generally paid to system integration, i.e. dynamic events in the power system are ignored. As a result, most new energy sources are only capable of base-load operation, i.e. they have no load following or cycling capability. Wind turbines are no exception. Greater awareness of this implicit (and often unnecessary) limitation is needed. Analysis methods are recommended which include very low penetration (infinite bus) as well as very high penetration (stand-alone) scenarios.
Multilevel methods for eigenspace computations in structural dynamics.
Arbenz, Peter; Lehoucq, Richard B.; Thornquist, Heidi K.; Bennighof, Jeff; Cochran, Bill; Hetmaniuk, Ulrich L.; Muller, Mark; Tuminaro, Raymond Stephen
2005-01-01
Modal analysis of three-dimensional structures frequently involves finite element discretizations with millions of unknowns and requires computing hundreds or thousands of eigenpairs. In this presentation we review methods based on domain decomposition for such eigenspace computations in structural dynamics. We distinguish approaches that solve the eigenproblem algebraically (with minimal connections to the underlying partial differential equation) from approaches that tightly couple the eigensolver with the partial differential equation.
Parallel processing numerical method for confined vortex dynamics and applications
NASA Astrophysics Data System (ADS)
Bistrian, Diana Alina
2013-10-01
This paper explores a combined analytical and numerical technique to investigate the hydrodynamic instability of confined swirling flows, with application to vortex rope dynamics in a Francis turbine diffuser under sophisticated boundary constraints. We present a new approach based on the method of orthogonal decomposition in Hilbert space, implemented with a spectral descriptor scheme in discrete space. A parallel implementation of the numerical scheme is conducted, reducing the computational time compared to other techniques.
Least-squares finite element method for fluid dynamics
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Povinelli, Louis A.
1989-01-01
An overview is given of new developments of the least squares finite element method (LSFEM) in fluid dynamics. Special emphasis is placed on the universality of LSFEM; the symmetry and positiveness of the algebraic systems obtained from LSFEM; the accommodation of LSFEM to equal order interpolations for incompressible viscous flows; and the natural numerical dissipation of LSFEM for convective transport problems and high speed compressible flows. The performance of LSFEM is illustrated by numerical examples.
A method for analyzing dynamic stall of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Crimi, P.; Reeves, B. L.
1972-01-01
A model for each of the basic flow elements involved in the unsteady stall of a two-dimensional airfoil in incompressible flow is presented. The interaction of these elements is analyzed using a digital computer. Computations of the loading during transient and sinusoidal pitching motions are in good qualitative agreement with measured loads. The method was used to confirm that large torsional response of helicopter blades detected in flight tests can be attributed to dynamic stall.
Likelihood approaches for the invariant density ratio model with biased-sampling data
Shen, Yu; Ning, Jing; Qin, Jing
2012-01-01
The full likelihood approach in statistical analysis is regarded as the most efficient means for estimation and inference. For complex length-biased failure time data, computational algorithms and theoretical properties are not readily available, especially when a likelihood function involves infinite-dimensional parameters. Relying on the invariance property of length-biased failure time data under the semiparametric density ratio model, we present two likelihood approaches for the estimation and assessment of the difference between two survival distributions. The most efficient maximum likelihood estimators are obtained by the EM algorithm and profile likelihood. We also provide a simple numerical method for estimation and inference based on conditional likelihood, which can be generalized to k-arm settings. Unlike conventional survival data, the mean of the population failure times can be consistently estimated given right-censored length-biased data under mild regularity conditions. To check the semiparametric density ratio model assumption, we use a test statistic based on the area between two survival distributions. Simulation studies confirm that the full likelihood estimators are more efficient than the conditional likelihood estimators. We analyse an epidemiological study to illustrate the proposed methods. PMID:23843663
A correction method suitable for dynamical seasonal prediction
NASA Astrophysics Data System (ADS)
Chen, H.; Lin, Z. H.
2006-05-01
Based on the hindcast results of summer rainfall anomalies over China for the period 1981-2000 by the Dynamical Climate Prediction System (IAP-DCP) developed by the Institute of Atmospheric Physics, a correction method that can account for the dependence of model's systematic biases on SST anomalies is proposed. It is shown that this correction method can improve the hindcast skill of the IAP-DCP for summer rainfall anomalies over China, especially in western China and southeast China, which may imply its potential application to real-time seasonal prediction.
Comparing the Performance of Two Dynamic Load Distribution Methods
NASA Technical Reports Server (NTRS)
Kale, L. V.
1987-01-01
Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: to effectively utilize the available processors, the load must be distributed uniformly to all the processors. However, the structure of these computations cannot be predicted in advance, so static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods with extensive simulation studies. The two schemes are: the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that although simpler, the CWN is significantly more effective at distributing the work than the Gradient Model.
A Method for Evaluating Dynamical Friction in Linear Ball Bearings
Fujii, Yusaku; Maru, Koichi; Jin, Tao; Yupapin, Preecha P.; Mitatha, Somsak
2010-01-01
A method is proposed for evaluating the dynamical friction of linear bearings, whose motion is not perfectly linear due to some play in its internal mechanism. In this method, the moving part of a linear bearing is made to move freely, and the force acting on the moving part is measured as the inertial force given by the product of its mass and the acceleration of its centre of gravity. To evaluate the acceleration of its centre of gravity, the acceleration of two different points on it is measured using a dual-axis optical interferometer. PMID:22163457
Recent developments in maximum likelihood estimation of MTMM models for categorical data
Jeon, Minjeong; Rijmen, Frank
2014-01-01
Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are eligible for estimating MTMM models with categorical responses: Variational maximization-maximization (e.g., Rijmen and Jeon, 2013), alternating imputation posterior (e.g., Cho and Rabe-Hesketh, 2011), and Monte Carlo local likelihood (e.g., Jeon et al., under revision). Each method is briefly described and its applicability for MTMM models with categorical data is discussed. PMID:24782791
Maximum-likelihood registration of range images with missing data.
Sharp, Gregory C; Lee, Sang W; Wehe, David K
2008-01-01
Missing data are common in range images, due to geometric occlusions, limitations in the sensor field of view, poor reflectivity, depth discontinuities, and cast shadows. Using registration to align these data often fails, because points without valid correspondences can be incorrectly matched. This paper presents a maximum likelihood method for registration of scenes with unmatched or missing data. Using ray casting, correspondences are formed between valid and missing points in each view. These correspondences are used to classify points by their visibility properties, including occlusions, field of view, and shadow regions. The likelihood of each point match is then determined using statistical properties of the sensor, such as noise and outlier distributions. Experiments demonstrate high rates of convergence on complex scenes with varying degrees of overlap. PMID:18000329
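The statistical weighting of point matches can be sketched as a posterior inlier probability under a Gaussian-plus-uniform mixture (all parameter values below are illustrative, not the sensor statistics of the paper):

```python
import math

def inlier_weight(residual, sigma=1.0, p_out=0.2, outlier_range=10.0):
    """Posterior probability that a point correspondence is valid,
    under a mixture of Gaussian sensor noise (inliers) and a uniform
    component modelling outliers and missing data."""
    gauss = (math.exp(-residual ** 2 / (2.0 * sigma ** 2))
             / (sigma * math.sqrt(2.0 * math.pi)))
    uniform = 1.0 / outlier_range
    num = (1.0 - p_out) * gauss
    return num / (num + p_out * uniform)

w_good = inlier_weight(0.3)  # small residual: treated as a valid match
w_bad = inlier_weight(5.0)   # large residual: down-weighted as outlier/missing
```

In a registration loop these weights would multiply each match's contribution to the alignment estimate, so unmatched or occluded points stop dragging the solution away.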
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
NASA Astrophysics Data System (ADS)
Lui, Kenneth W. K.; So, H. C.
2009-12-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
Use of historical information in a maximum-likelihood framework
Cohn, T.A.; Stedinger, J.R.
1987-01-01
This paper discusses flood-quantile estimators which can employ historical and paleoflood information, both when the magnitudes of historical flood peaks are known, and when only threshold-exceedance information is available. Maximum likelihood, quasi-maximum likelihood and curve fitting methods for simultaneous estimation of 1, 2 and 3 unknown parameters are examined. The information contained in a 100 yr record of historical observations, during which the flood perception threshold was near the 10 yr flood level (i.e., on average, one flood in ten is above the threshold and hence is recorded), is equivalent to roughly 43, 64 and 78 years of systematic record in terms of the improvement of the precision of 100 yr flood estimators when estimating 1, 2 and 3 parameters, respectively. With the perception threshold at the 100 yr flood level, the historical data was worth 13, 20 and 46 years of systematic data when estimating 1, 2 and 3 parameters, respectively. © 1987.
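The way threshold-exceedance information enters the likelihood can be sketched with a deliberately simple exponential flood-peak model (a stand-in for the distributions actually used in flood-frequency analysis; all numbers are invented for illustration):

```python
import math

def loglik(lam, systematic, n_exceed, n_hist, threshold):
    """Log-likelihood for an exponential flood-peak model (rate lam),
    combining a fully observed systematic record with a historical
    period of n_hist years in which only the number of threshold
    exceedances, n_exceed, is known."""
    ll = sum(math.log(lam) - lam * x for x in systematic)  # exact peaks
    p_exceed = math.exp(-lam * threshold)                  # P(X > threshold)
    ll += n_exceed * math.log(p_exceed)                    # exceedance years
    ll += (n_hist - n_exceed) * math.log(1.0 - p_exceed)   # quiet years
    return ll

def mle(systematic, n_exceed, n_hist, threshold):
    """One parameter, so a dense grid search is simple and robust."""
    grid = [1e-4 + 1e-5 * i for i in range(5000)]
    return max(grid, key=lambda lam: loglik(lam, systematic, n_exceed,
                                            n_hist, threshold))

systematic = [120.0, 85.0, 310.0, 150.0, 95.0, 200.0]  # observed peaks
# 8 exceedances of a 250-unit threshold during 100 historical years:
lam_combined = mle(systematic, n_exceed=8, n_hist=100, threshold=250.0)
lam_systonly = mle(systematic, n_exceed=0, n_hist=0, threshold=250.0)
# The historical exceedances pull the estimate toward heavier flooding.
```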
Role of Molecular Dynamics and Related Methods in Drug Discovery.
De Vivo, Marco; Masetti, Matteo; Bottegoni, Giovanni; Cavalli, Andrea
2016-05-12
Molecular dynamics (MD) and related methods are close to becoming routine computational tools for drug discovery. Their main advantage is in explicitly treating structural flexibility and entropic effects. This allows a more accurate estimate of the thermodynamics and kinetics associated with drug-target recognition and binding, as better algorithms and hardware architectures increase their use. Here, we review the theoretical background of MD and enhanced sampling methods, focusing on free-energy perturbation, metadynamics, steered MD, and other methods most consistently used to study drug-target binding. We discuss unbiased MD simulations that nowadays allow the observation of unsupervised ligand-target binding, assessing how these approaches help optimizing target affinity and drug residence time toward improved drug efficacy. Further issues discussed include allosteric modulation and the role of water molecules in ligand binding and optimization. We conclude by calling for more prospective studies to attest to these methods' utility in discovering novel drug candidates. PMID:26807648
Maximum likelihood as a common computational framework in tomotherapy.
Olivera, G H; Shepard, D M; Reckwerdt, P J; Ruchala, K; Zachman, J; Fitchard, E E; Mackie, T R
1998-11-01
Tomotherapy is a dose delivery technique using helical or axial intensity modulated beams. One of the strengths of the tomotherapy concept is that it can incorporate a number of processes into a single piece of equipment. These processes include treatment optimization planning, dose reconstruction and kilovoltage/megavoltage image reconstruction. A common computational technique that could be used for all of these processes would be very appealing. The maximum likelihood estimator, originally developed for emission tomography, can serve as a useful tool in imaging and radiotherapy. We believe that this approach can play an important role in the processes of optimization planning, dose reconstruction and kilovoltage and/or megavoltage image reconstruction. These processes involve computations that require comparable physical methods. They are also based on equivalent assumptions, and they have similar mathematical solutions. As a result, the maximum likelihood approach is able to provide a common framework for all three of these computational problems. We will demonstrate how maximum likelihood methods can be applied to optimization planning, dose reconstruction and megavoltage image reconstruction in tomotherapy. Results for planning optimization, dose reconstruction and megavoltage image reconstruction will be presented. Strengths and weaknesses of the methodology are analysed. Future directions for this work are also suggested. PMID:9832016
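The maximum likelihood estimator referred to here is, at its core, the classic MLEM iteration from emission tomography, which can be sketched on a tiny system (the system matrix and data below are invented for illustration):

```python
def mlem(A, y, iters=100):
    """MLEM iteration common to emission tomography and, as argued in
    the paper, to dose and megavoltage image reconstruction:
        x_j <- x_j * (sum_i A_ij * y_i / (A x)_i) / (sum_i A_ij)
    Starting from a positive image, x stays non-negative throughout."""
    n_meas, n_vox = len(A), len(A[0])
    x = [1.0] * n_vox
    sens = [sum(A[i][j] for i in range(n_meas)) for j in range(n_vox)]
    for _ in range(iters):
        # Forward project the current image, then backproject the
        # measured-to-predicted ratios and rescale by sensitivity.
        proj = [sum(A[i][j] * x[j] for j in range(n_vox)) for i in range(n_meas)]
        for j in range(n_vox):
            back = sum(A[i][j] * y[i] / proj[i] for i in range(n_meas))
            x[j] *= back / sens[j]
    return x

# Tiny 2-detector, 2-voxel system with noiseless data from x_true:
A = [[1.0, 0.5],
     [0.4, 1.0]]
x_true = [2.0, 3.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]
x_hat = mlem(A, y)   # converges to x_true for consistent data
```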
A maximum-likelihood estimation of pairwise relatedness for autopolyploids
Huang, K; Guo, S T; Shattuck, M R; Chen, S T; Qi, X G; Zhang, P; Li, B G
2015-01-01
Relatedness between individuals is central to ecological genetics. Multiple methods are available to quantify relatedness from molecular data, including method-of-moment and maximum-likelihood estimators. We describe a maximum-likelihood estimator for autopolyploids and quantify its statistical performance under a range of biologically relevant conditions. The statistical performances of five additional polyploid estimators of relatedness were also quantified under identical conditions. When comparing truncated estimators, the maximum-likelihood estimator exhibited lower root mean square error under some conditions but was more biased for non-relatives, especially when the number of alleles per locus was low. However, even under these conditions, this bias was reduced to statistical insignificance with more robust genetic sampling. We also considered ambiguity in polyploid heterozygote genotyping and developed a weighting methodology for candidate genotypes. The statistical performances of three polyploid estimators under both ideal and actual conditions (including inbreeding and double reduction) were compared. The software package POLYRELATEDNESS is available to perform this estimation and supports a maximum ploidy of eight. PMID:25370210
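The autopolyploid estimator itself is not reproduced in this record; as a generic illustration of the maximum-likelihood machinery such estimators share, here is a hedged sketch that maximizes a log-likelihood over a parameter grid for a simple Bernoulli model (the data and model are illustrative stand-ins, not genotype data):

```python
import numpy as np

def grid_mle(loglik, grid):
    """Generic grid-search MLE: return the grid point maximizing the log-likelihood."""
    vals = np.array([loglik(g) for g in grid])
    return grid[int(np.argmax(vals))]

# toy model: n Bernoulli observations with unknown success probability p
rng = np.random.default_rng(0)
data = rng.random(2000) < 0.3        # true p = 0.3
k, n = data.sum(), data.size

def loglik(p):
    return k * np.log(p) + (n - k) * np.log(1 - p)

grid = np.linspace(0.01, 0.99, 99)
p_hat = grid_mle(loglik, grid)       # lands next to the analytic MLE k/n
```

Real relatedness estimators replace `loglik` with the probability of observed genotype pairs given a relatedness parameter, but the maximize-over-candidates skeleton is the same.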
A Poisson-Boltzmann dynamics method with nonperiodic boundary condition
NASA Astrophysics Data System (ADS)
Lu, Qiang; Luo, Ray
2003-12-01
We have developed a well-behaved and efficient finite difference Poisson-Boltzmann dynamics method with a nonperiodic boundary condition. This is made possible, in part, by a rather fine grid spacing used for the finite difference treatment of the reaction field interaction. The stability is also made possible by a new dielectric model that is smooth both over time and over space, an important issue in the application of implicit solvents. In addition, the electrostatic focusing technique facilitates the use of an accurate yet efficient nonperiodic boundary condition: boundary grid potentials computed by the sum of potentials from individual grid charges. Finally, the particle-particle particle-mesh technique is adopted in the computation of the Coulombic interaction to balance accuracy and efficiency in simulations of large biomolecules. Preliminary testing shows that the nonperiodic Poisson-Boltzmann dynamics method is numerically stable in trajectories at least 4 ns long. The new model is also fairly efficient: its cost is comparable to that of the pairwise generalized Born solvent model, making it a strong candidate for dynamics simulations of biomolecules in dilute aqueous solutions. Note that the current treatment of total electrostatic interactions uses no cutoff, which is important for simulations of biomolecules. Rigorous treatment of the Debye-Hückel screening is also possible within the Poisson-Boltzmann framework: its importance is demonstrated by a simulation of a highly charged protein.
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
Gopich, Irina V.
2015-01-21
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
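The curvature idea described above can be sketched generically: the uncertainty of an ML estimate can be read off the second derivative of the log-likelihood at its maximum. The toy model below uses exponential dwell times (not actual photon sequences), chosen because the analytic answer sd = rate_hat / sqrt(n) is available for comparison:

```python
import numpy as np

# Uncertainty from likelihood curvature: sd(theta_hat) ~ 1/sqrt(-d2 logL/dtheta2),
# evaluated numerically at the maximum-likelihood estimate.
rng = np.random.default_rng(1)
n = 5000
rate_true = 2.0
dwell = rng.exponential(1.0 / rate_true, n)   # toy "dwell times"
total = dwell.sum()

def logL(rate):
    # exponential-model log-likelihood up to a constant
    return n * np.log(rate) - rate * total

rate_hat = n / total                          # analytic MLE for this model
h = 1e-4                                      # finite-difference step
curv = (logL(rate_hat + h) - 2 * logL(rate_hat) + logL(rate_hat - h)) / h**2
sd = 1.0 / np.sqrt(-curv)                     # should match rate_hat / sqrt(n)
```

For the two-state FRET model the log-likelihood is over photon colors and arrival times and has several parameters, so `curv` becomes a Hessian and `sd` comes from its inverse, but the principle is identical.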
System and method for reducing combustion dynamics in a combustor
Uhm, Jong Ho; Johnson, Thomas Edward; Zuo, Baifang; York, William David
2015-09-01
A system for reducing combustion dynamics in a combustor includes an end cap having an upstream surface axially separated from a downstream surface, and tube bundles extend from the upstream surface through the downstream surface. A divider inside a tube bundle defines a diluent passage that extends axially through the downstream surface, and a diluent supply in fluid communication with the divider provides diluent flow to the diluent passage. A method for reducing combustion dynamics in a combustor includes flowing a fuel through tube bundles, flowing a diluent through a diluent passage inside a tube bundle, wherein the diluent passage extends axially through at least a portion of the end cap into a combustion chamber, and forming a diluent barrier in the combustion chamber between the tube bundle and at least one other adjacent tube bundle.
System and method for reducing combustion dynamics in a combustor
Uhm, Jong Ho; Johnson, Thomas Edward; Zuo, Baifang; York, William David
2013-08-20
A system for reducing combustion dynamics in a combustor includes an end cap having an upstream surface axially separated from a downstream surface, and tube bundles extend through the end cap. A diluent supply in fluid communication with the end cap provides diluent flow to the end cap. Diluent distributors circumferentially arranged inside at least one tube bundle extend downstream from the downstream surface and provide fluid communication for the diluent flow through the end cap. A method for reducing combustion dynamics in a combustor includes flowing fuel through tube bundles that extend axially through an end cap, flowing a diluent through diluent distributors into a combustion chamber, wherein the diluent distributors are circumferentially arranged inside at least one tube bundle and each diluent distributor extends downstream from the end cap, and forming a diluent barrier in the combustion chamber between at least one pair of adjacent tube bundles.
Likelihood-based modification of experimental crystal structure electron density maps
Terwilliger, Thomas C.
2005-04-16
A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F.sub.h } is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F.sub.h.sup.OBS } if structure factor set {F.sub.h } was correct, and (2) the likelihood that an electron density map resulting from {F.sub.h } is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F.sub.h } is then adjusted to maximize the likelihood of {F.sub.h } for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.
cosmoabc: Likelihood-free inference for cosmology
NASA Astrophysics Data System (ADS)
Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.
2015-05-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
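A minimal ABC rejection sampler illustrating the core idea (cosmoabc itself implements a Population Monte Carlo refinement with adaptive importance sampling; the toy Gaussian problem, distance function, and prior below are illustrative assumptions, not cosmological ones):

```python
import numpy as np

def abc_rejection(observed, simulate, prior_draw, distance, eps, n_keep=500):
    """Plain ABC rejection: keep prior draws whose simulated data lie
    within eps of the observation under the chosen distance."""
    accepted = []
    while len(accepted) < n_keep:
        theta = prior_draw()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

rng = np.random.default_rng(2)
# toy problem: infer the mean of a Gaussian with known sd = 1
obs = rng.normal(3.0, 1.0, 200)
post = abc_rejection(
    observed=obs,
    simulate=lambda m: rng.normal(m, 1.0, 200),
    prior_draw=lambda: rng.uniform(0.0, 6.0),
    distance=lambda a, b: abs(a.mean() - b.mean()),
    eps=0.1,
)
```

The accepted draws approximate the posterior without ever evaluating a likelihood; shrinking `eps` trades acceptance rate for accuracy, which is exactly the tension the Population Monte Carlo scheme manages adaptively.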
Informative Parameters of Dynamic Geo-electricity Methods
NASA Astrophysics Data System (ADS)
Tursunmetov, R.
With the growing complexity of geological tasks and the need to reveal anomalous zones associated with ore, oil, gas, and water, methods of dynamic geo-electricity have come into use. In these methods the geological environment is treated as an interphase, irregular medium. Its main dynamic element is the double electric layer that develops at the boundary between the solid and liquid phases. In ore-bearing or water-saturated environments, double electric layers become electrochemically or electrokinetically active elements of the geo-electric medium, which in turn generate a natural electric field. This field influences the distribution of artificially created fields, and their interaction has a complicated superposition or nonlinear character. The geological environment is therefore considered an active medium, able to accumulate and transform artificially superposed fields. Its main dynamic property is the nonlinear dependence of specific electric resistance and soil polarization on current density and measurement frequency, and these dependences serve as the informative parameters of dynamic geo-electricity methods. For delineating geo-electric anomalies, the study of the electric properties of disperse soils in an impulse-frequency regime, together with the temporal and frequency characteristics of the electric field, is of main interest. The study of the volt-ampere characteristics of the electromagnetic field has great practical significance; these characteristics are determined by electrochemically active ore-bearing and water-saturated zones. The parameters depend on the polarity of the initiating field, in particular on the character, composition, and mineralization of the ore-saturated zone and on the presence of a natural electric field under cathode and anode mineralization. The nonlinear behavior of the medium's dynamic properties affects the structure of the initiated field, which allows the location of anomalous zones to be defined. Finally, the study of the spatial dynamics of soil anisotropy will allow the identification of filtration flows
Probabilistic anti-aliasing methods for dynamic variable resolution images
NASA Astrophysics Data System (ADS)
Panerai, Francesco M.; Juday, Richard D.
1996-11-01
We have attained initial function of a real-time acuity-based video transformation. It matches the transmitted local resolution of video images to the eccentrically varying acuity of the viewer's human visual system. In previous variable resolution imagery, a variable blockiness produces disturbing aliasing effects. We show how probabilistic methods can be useful to perform anti-aliasing on variable resolution images, so that smoothing interpolation need not be done to defeat the aliasing. Especially when used in dynamic imaging, the methods consistently reduce the high-frequency artifacts perceived by the human eye. The effectiveness of these techniques has been demonstrated with the NASA/Texas Instruments Programmable Remapper, which is able to apply the anti-aliasing methods on the fly to the low-bandwidth, acuity-based video signal. Video imagery will be shown to demonstrate the technique.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where ``medical history`` is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
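MALCOM's continuity map is not specified in this record; as a stand-in, any generative sequence model that assigns likelihoods can play the same role in anomaly detection. A sketch using a smoothed first-order Markov model over hypothetical procedure codes (all codes and data are invented for illustration):

```python
import math
from collections import Counter

def train_markov(seqs, alpha=1.0):
    """Fit a first-order Markov model with add-alpha smoothing and return
    a per-transition log-likelihood scorer for new sequences."""
    trans, ctx, vocab = Counter(), Counter(), set()
    for s in seqs:
        vocab.update(s)
        for a, b in zip(s, s[1:]):
            trans[(a, b)] += 1
            ctx[a] += 1
    V = len(vocab)

    def loglik(seq):
        ll = 0.0
        for a, b in zip(seq, seq[1:]):
            ll += math.log((trans[(a, b)] + alpha) / (ctx[a] + alpha * V))
        return ll / max(len(seq) - 1, 1)   # normalize by number of transitions

    return loglik

# hypothetical "typical" procedure histories used for training
typical = [["exam", "xray", "cast"], ["exam", "xray", "cast"], ["exam", "rx"]]
score = train_markov(typical)
lo = score(["cast", "cast", "cast"])       # anomalous ordering scores low
hi = score(["exam", "xray", "cast"])       # typical history scores high
```

Histories with unusually low normalized log-likelihood would be flagged for review, which mirrors the anomaly-detection use described in the abstract.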
Score-based likelihood ratios for handwriting evidence.
Hepler, Amanda B; Saunders, Christopher P; Davis, Linda J; Buscaglia, JoAnn
2012-06-10
Score-based approaches for computing forensic likelihood ratios are becoming more prevalent in the forensic literature. When two items of evidential value are entangled via a score function, several nuances arise when attempting to model the score behavior under the competing source-level propositions. Specific assumptions must be made in order to appropriately model the numerator and denominator probability distributions. This process is fairly straightforward for the numerator of the score-based likelihood ratio, entailing the generation of a database of scores obtained by pairing items of evidence from the same source. However, this process presents ambiguities for the denominator database generation - in particular, how best to generate a database of scores between two items of different sources. Many alternatives have appeared in the literature, three of which we will consider in detail. They differ in their approach to generating denominator databases, by pairing (1) the item of known source with randomly selected items from a relevant database; (2) the item of unknown source with randomly generated items from a relevant database; or (3) two randomly generated items. When the two items differ in type, perhaps one having higher information content, these three alternatives can produce very different denominator databases. While each of these alternatives has appeared in the literature, the decision of how to generate the denominator database is often made without calling attention to the subjective nature of this process. In this paper, we compare each of the three methods (and the resulting score-based likelihood ratios), which can be thought of as three distinct interpretations of the denominator proposition. Our goal in performing these comparisons is to illustrate the effect that subtle modifications of these propositions can have on inferences drawn from the evidence evaluation procedure. The study was performed using a data set composed of cursive writing
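A sketch of the score-based likelihood ratio computation: densities of same-source and different-source scores are estimated from the two databases and evaluated at the evidence score. The score distributions below are synthetic assumptions, and a plain Gaussian kernel density estimate stands in for whatever density model a given study uses:

```python
import numpy as np

def kde(samples, x, bw=0.1):
    """Plain Gaussian kernel density estimate at scalar x."""
    z = (x - samples) / bw
    return np.exp(-0.5 * z**2).sum() / (samples.size * bw * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
# hypothetical score databases: same-source scores cluster high, different-source low;
# how diff_scores is generated is exactly the choice among alternatives (1)-(3) above
same_scores = rng.normal(0.9, 0.05, 2000)
diff_scores = rng.normal(0.4, 0.15, 2000)

def slr(s):
    """Score-based likelihood ratio: same-source density over different-source density."""
    return kde(same_scores, s) / kde(diff_scores, s)
```

Swapping in a denominator database built by a different pairing scheme changes `diff_scores`, and thereby the resulting ratio, which is the sensitivity the paper investigates.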
Applicability of optical scanner method for fine root dynamics
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Ohashi, Mizue; Makita, Naoki; Khoon Kho, Lip; Katayama, Ayumi; Matsumoto, Kazuho; Ikeno, Hidetoshi
2016-04-01
Fine root dynamics is one of the important components in forest carbon cycling, as ~60% of tree photosynthetic production can be allocated to root growth and metabolic activities. Various techniques have been developed for monitoring fine root biomass, production, and mortality in order to understand carbon pools and fluxes resulting from fine root dynamics. The minirhizotron method is now a widely used technique, in which a transparent tube is inserted into the soil and researchers count the increase and decrease of roots along the tube using images taken by a minirhizotron camera or minirhizotron video camera inside the tube. This method allows us to observe root behavior directly without destruction, but it has several weaknesses, e.g., the difficulty of scaling up the results to stand level because of the small observation windows. Also, most of the image analysis is performed manually, which may yield insufficiently quantitative and objective data. Recently, a scanner method has been proposed, which can produce much larger (A4-size) images at lower cost than the minirhizotron methods. However, laborious and time-consuming image analysis still limits the applicability of this method. In this study, therefore, we aimed to develop a new protocol for scanner image analysis to extract root behavior in soil. We evaluated the applicability of this method in two ways: 1) the impact of different observers, including root-study professionals, semi-professionals, and non-professionals, on the detected results of root dynamics such as abundance, growth, and decomposition; and 2) the impact of window size on the results using a random-sampling-basis exercise. We applied our new protocol to analyze temporal changes of root behavior from sequential scanner images derived from a Bornean tropical forest. The results detected by the six observers showed considerable concordance in temporal changes in the abundance and growth of fine roots but less in decomposition. We also examined
Approximate maximum likelihood estimation of scanning observer templates
NASA Astrophysics Data System (ADS)
Abbey, Craig K.; Samuelson, Frank W.; Wunderlich, Adam; Popescu, Lucretiu M.; Eckstein, Miguel P.; Boone, John M.
2015-03-01
In localization tasks, an observer is asked to give the location of some target or feature of interest in an image. Scanning linear observer models incorporate the search implicit in this task through convolution of an observer template with the image being evaluated. Such models are becoming increasingly popular as predictors of human performance for validating medical imaging methodology. In addition to convolution, scanning models may utilize internal noise components to model inconsistencies in human observer responses. In this work, we build a probabilistic mathematical model of this process and show how it can, in principle, be used to obtain estimates of the observer template using maximum likelihood methods. The main difficulty of this approach is that a closed form probability distribution for a maximal location response is not generally available in the presence of internal noise. However, for a given image we can generate an empirical distribution of maximal locations using Monte Carlo sampling. We show that this probability is well approximated by applying an exponential function to the scanning template output. We also evaluate log-likelihood functions on the basis of this approximate distribution. Using 1,000 trials of simulated data as a validation test set, we find that a plot of the approximate log-likelihood function along a single parameter related to the template profile achieves its maximum value near the true value used in the simulation. This finding holds regardless of whether the trials are correctly localized or not. In a second validation study evaluating a parameter related to the relative magnitude of internal noise, only the incorrectly localized images produce a maximum in the approximate log-likelihood function that is near the true value of the parameter.
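A sketch of the two ingredients described above: a scanning response map via circular cross-correlation, and an exponential (softmax-like) approximation to the probability over maximal locations. The template, test image, and scale parameter `beta` are illustrative assumptions, not the paper's actual observer model:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def scan_response(image, template):
    """Circular cross-correlation of image with template (scanning linear observer):
    response[s] = sum_m image[m] * template[m - s]."""
    return np.real(ifft2(fft2(image) * np.conj(fft2(template))))

def approx_loc_prob(response, beta=5.0):
    """Exponential of the scaled template output, normalized to a probability
    map over candidate maximal locations (a softmax over the response)."""
    w = np.exp(beta * (response - response.max()))   # shift for numerical stability
    return w / w.sum()

rng = np.random.default_rng(5)
template = np.zeros((32, 32)); template[:3, :3] = 1.0          # small box template
image = rng.normal(0.0, 0.1, (32, 32)); image[10:13, 20:23] += 1.0  # target + noise
prob = approx_loc_prob(scan_response(image, template), beta=5.0)
```

Fitting `beta` and the template shape by maximizing the log of such probabilities over recorded maximal-location responses is the flavor of approximate ML estimation the abstract describes.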
A new method of dynamic multitarget tracking and measuring
NASA Astrophysics Data System (ADS)
Wang, Fei; Zheng, Nanning; Liu, Yuehu
2003-09-01
Considering the features of a dynamic multi-target tracking and measuring system (DMTTMS), we compare the DMTTMS with the single-target tracking and measuring system (STTMS) and analyze the difficulty of identifying homonymous image points in a DMTTMS. Three methods, based on the geometric properties of rays in the imaging principles of geometric optics, are presented to solve the problem of homonymous image point identification. A design scheme for a DMTTMS using multiple optical capture instruments is put forward. Furthermore, an algorithm is emphasized that can handle targets occluded by other targets. Simulation results show that the proposed scheme and algorithm are feasible and valid for the DMTTMS.
A method for the evaluation of wide dynamic range cameras
NASA Astrophysics Data System (ADS)
Wong, Ping Wah; Lu, Yu Hua
2012-01-01
We propose a multi-component metric for the evaluation of digital or video cameras under wide dynamic range (WDR) scenes. The method is based on a single image capture using a specifically designed WDR test chart and light box. Test patterns on the WDR test chart include gray ramps, color patches, arrays of gray patches, white bars, and a relatively dark gray background. The WDR test chart is professionally made using 3 layers of transparencies to produce a contrast ratio of approximately 110 dB for WDR testing. A light box is designed to provide a uniform surface with light level at about 80K to 100K lux, which is typical of a sunny outdoor scene. From a captured image, 9 image quality component scores are calculated. The components include number of resolvable gray steps, dynamic range, linearity of tone response, grayness of gray ramp, number of distinguishable color patches, smearing resistance, edge contrast, grid clarity, and weighted signal-to-noise ratio. A composite score is calculated from the 9 component scores to reflect the comprehensive image quality in cameras under WDR scenes. Experimental results have demonstrated that the multi-component metric corresponds very well to subjective evaluation of wide dynamic range behavior of cameras.
Comparison between dynamical and stochastic downscaling methods in central Italy
NASA Astrophysics Data System (ADS)
Camici, Stefania; Palazzi, Elisa; Pieri, Alexandre; Brocca, Luca; Moramarco, Tommaso; Provenzale, Antonello
2015-04-01
Global climate models (GCMs) are the primary tool to assess future climate change. However, most GCMs currently do not provide reliable information on scales below about 100 km and, hence, cannot be used as a direct input of hydrological models for climate change impact assessments. Therefore, a wide range of statistical and dynamical downscaling methods have been developed to overcome the scale discrepancy between the GCM climatic scenarios and the resolution required for hydrological applications and impact studies. In this context, the selection of a suitable downscaling method is an important issue. The use of different spatial domains, predictor variables, predictands and assessment criteria makes the relative performance of different methods difficult to assess, and general rules to select a priori the best downscaling method do not exist. Additionally, many studies have shown that, depending on the hydrological variable, each downscaling method significantly contributes to the overall uncertainty of the final hydrological response. Therefore, it is strongly recommended to test/evaluate different downscaling methods by using ground-based data before applying them to climate model data. In this study, the daily rainfall data from the ERA-Interim re-analysis database (provided by the European Centre for Medium-Range Weather Forecasts, ECMWF) for the period 1979-2008 and with a resolution of about 80 km, are downscaled using both dynamical and statistical methods. In the first case, the Weather Research and Forecasting (WRF) model was nested into the ERA-Interim re-analysis system to achieve a spatial resolution of about 4 km; in the second one, the stochastic rainfall downscaling method called RainFARM was applied to the ERA-Interim data to obtain one stochastic realization of the rainfall field with a resolution of ~1 km. The downscaled rainfall data obtained with the two methods are then used to force a continuous rainfall-runoff model in order to obtain a
Towards a method to characterize temporary groundwater dynamics during droughts
NASA Astrophysics Data System (ADS)
Heudorfer, Benedikt; Stahl, Kerstin
2016-04-01
In order to improve our understanding of the complex mechanisms involved in the development, propagation and termination of drought events, a major challenge is to grasp the role of groundwater systems. Research on how groundwater responds to meteorological drought events (i.e. short-term climate anomalies) is still limited. Part of the problem is that there is as yet no generic method to characterize the response of different groundwater systems to extreme climate anomalies. In order to explore possibilities for such a methodology, we evaluate two statistical approaches to characterize groundwater dynamics on short time scales by applying them to observed groundwater head data from different pre- and peri-mountainous groundwater systems in humid central Europe (Germany). The first method is based on the coefficient of variation in moving windows of various lengths; the second method is based on streamflow recession characteristics applied to groundwater data. With these methods, the gauges' behavior during low-head events and their response to precipitation were explored. The findings make it possible to distinguish between gauges dominated by cyclic patterns and gauges dominated by patterns on the seasonal or event scale (commonly referred to as slow- and fast-responding gauges, respectively). While there are some clues as to the factors that might control these patterns, the specific controls remain generally unclear for the gauges in this study. The key conclusion, however, is the open question of whether the variety of manifestations of groundwater dynamics, as they occur in real systems, can be subsumed under one unique method. Further studies on the topic are in progress.
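The first method mentioned above, the coefficient of variation in moving windows, can be sketched directly (the window length and the toy head series are illustrative assumptions, not the study's data):

```python
import numpy as np

def rolling_cv(series, window):
    """Coefficient of variation (sd / mean) in a moving window,
    one value per fully covered window position."""
    out = []
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        out.append(np.std(w) / np.mean(w))
    return np.array(out)

# toy head series: a flat period followed by an event-scale excursion
heads = np.concatenate([np.full(30, 10.0),
                        10 + 2 * np.sin(np.linspace(0, 6, 30))])
cv = rolling_cv(heads, window=10)
```

Comparing such rolling-CV traces across gauges, and across window lengths, is one way to separate stable (slow-responding) records from those with pronounced event-scale variability.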
Multiscale molecular dynamics using the matched interface and boundary method
Geng Weihua; Wei, G.W.
2011-01-20
The Poisson-Boltzmann (PB) equation is an established multiscale model for electrostatic analysis of biomolecules and other dielectric systems. A PB-based molecular dynamics (MD) approach has the potential to tackle large biological systems. Obstacles that hinder the current development of PB-based MD methods are concerns about accuracy, stability, efficiency and reliability. The presence of complex solvent-solute interfaces, geometric singularities and charge singularities leads to challenges in the numerical solution of the PB equation and electrostatic force evaluation in PB-based MD methods. Recently, the matched interface and boundary (MIB) method has been utilized to develop the first second order accurate PB solver that is numerically stable in dealing with discontinuous dielectric coefficients, complex geometric singularities and singular source charges. The present work develops the PB-based MD approach using the MIB method. A new formulation of electrostatic forces is derived to allow the use of sharp molecular surfaces. Accurate reaction field forces are obtained by directly differentiating the electrostatic potential. Dielectric boundary forces are evaluated at the solvent-solute interface using an accurate Cartesian-grid surface integration method. The electrostatic forces located at reentrant surfaces are appropriately assigned to related atoms. Extensive numerical tests are carried out to validate the accuracy and stability of the present electrostatic force calculation. The new PB-based MD method is implemented in conjunction with the AMBER package. MIB-based MD simulations of biomolecules are demonstrated via a few example systems.
Parallel computation of meshless methods for explicit dynamic analysis.
Danielson, K. T.; Hao, S.; Liu, W. K.; Uras, R. A.; Li, S.; Reactor Engineering; Northwestern Univ.; Waterways Experiment Station
2000-03-10
A parallel computational implementation of modern meshless methods is presented for explicit dynamic analysis. The procedures are demonstrated by application of the Reproducing Kernel Particle Method (RKPM). Aspects of a coarse grain parallel paradigm are detailed for a Lagrangian formulation using model partitioning. Integration points are uniquely defined on separate processors and particle definitions are duplicated, as necessary, so that all support particles for each point are defined locally on the corresponding processor. Several partitioning schemes are considered and a reduced graph-based procedure is presented. Partitioning issues are discussed and procedures to accommodate essential boundary conditions in parallel are presented. Explicit MPI message passing statements are used for all communications among partitions on different processors. The effectiveness of the procedure is demonstrated by highly deformable inelastic example problems.
An automated dynamic water vapor permeation test method
NASA Astrophysics Data System (ADS)
Gibson, Phillip; Kendrick, Cyrus; Rivin, Donald; Charmchii, Majid; Sicuranza, Linda
1995-05-01
This report describes an automated apparatus developed to measure the transport of water vapor through materials under a variety of conditions. The apparatus is more convenient to use than the traditional test methods for textiles and clothing materials, and allows one to use a wider variety of test conditions to investigate the concentration-dependent and nonlinear transport behavior of many of the semipermeable membrane laminates which are now available. The dynamic moisture permeation cell (DMPC) has been automated to permit multiple setpoint testing under computer control, and to facilitate investigation of transient phenomena. Results generated with the DMPC are in agreement with and of comparable accuracy to those from the ISO 11092 (sweating guarded hot plate) method of measuring water vapor permeability.
Efficient computations with the likelihood ratio distribution.
Kruijver, Maarten
2015-01-01
What is the probability that the likelihood ratio exceeds a threshold t, if a specified hypothesis is true? This question is asked, for instance, when performing power calculations for kinship testing, when computing true and false positive rates for familial searching and when computing the power of discrimination of a complex mixture. Answering this question is not straightforward, since there is a huge number of possible genotypic combinations to consider. Different solutions are found in the literature. Several authors estimate the threshold exceedance probability using simulation. Corradi and Ricciardi [1] propose a discrete approximation to the likelihood ratio distribution which yields a lower and upper bound on the probability. Nothnagel et al. [2] use the normal distribution as an approximation to the likelihood ratio distribution. Dørum et al. [3] introduce an algorithm that can be used for exact computation, but this algorithm is computationally intensive, unless the threshold t is very large. We present three new approaches to the problem. Firstly, we show how importance sampling can be used to make the simulation approach significantly more efficient. Importance sampling is a statistical technique that turns out to work well in the current context. Secondly, we present a novel algorithm for computing exceedance probabilities. The algorithm is exact, fast and can handle relatively large problems. Thirdly, we introduce an approach that combines the novel algorithm with the discrete approximation of Corradi and Ricciardi. This last approach can be applied to very large problems and yields a lower and upper bound on the exceedance probability. The use of the different approaches is illustrated with examples from forensic genetics, such as kinship testing, familial searching and mixture interpretation. The algorithms are implemented in an R-package called DNAprofiles, which is freely available from CRAN.
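The importance-sampling idea can be sketched on a toy discrete likelihood ratio: an exceedance probability under Hd equals the Hp-expectation of 1{LR > t}/LR, so one can sample under Hp and reweight. The three-outcome distributions below are illustrative assumptions, not genotype models; the efficiency gain over naive simulation grows when the exceedance event is rare under the sampling hypothesis:

```python
import numpy as np

rng = np.random.default_rng(4)
# toy outcome probabilities under the two hypotheses (illustrative numbers)
p_hp = np.array([0.5, 0.3, 0.2])
p_hd = np.array([0.2, 0.3, 0.5])
lr = p_hp / p_hd                      # likelihood ratio of each outcome

t = 1.0
exact = p_hd[lr > t].sum()            # exact P_Hd(LR > t) for this tiny model

# naive Monte Carlo: sample outcomes under Hd
x = rng.choice(3, 100_000, p=p_hd)
naive = np.mean(lr[x] > t)

# importance sampling: sample under Hp, reweight each hit by 1/LR
y = rng.choice(3, 100_000, p=p_hp)
is_est = np.mean((lr[y] > t) / lr[y])
```

Both estimators target the same quantity; the reweighted one concentrates samples where the likelihood ratio is large, which is where the exceedance event lives.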
Dynamic characterization of satellite components through non-invasive methods
Mullins, Joshua G; Wiest, Heather K; Mascarenas, David D. L.; Macknelly, David
2010-10-21
The rapid deployment of satellites is hindered by the need to flight-qualify their components and the resulting mechanical assembly. Conventional methods for qualification testing of satellite components are costly and time consuming. Furthermore, full-scale vehicles must be subjected to launch loads during testing. This harsh testing environment increases the risk of component damage during qualification. The focus of this research effort was to assess the performance of Structural Health Monitoring (SHM) techniques as a replacement for traditional vibration testing. SHM techniques were applied on a small-scale structure representative of a responsive satellite. The test structure consisted of an extruded aluminum space-frame covered with aluminum shear plates, which was assembled using bolted joints. Multiple piezoelectric patches were bonded to the test structure and acted as combined actuators and sensors. Various methods of SHM were explored including impedance-based health monitoring, wave propagation, and conventional frequency response functions. Using these methods in conjunction with finite element modelling, the dynamic properties of the test structure were established and areas of potential damage were identified and localized. The adequacy of the results from each SHM method was validated by comparison to results from conventional vibration testing.
Hopf Method Applied to Low and High Dimensional Dynamical Systems
NASA Astrophysics Data System (ADS)
Ma, Seungwook; Marston, Brad
2004-03-01
With an eye towards the goal of directly extracting statistical information from general circulation models (GCMs) of climate, thereby avoiding lengthy time integrations, we investigate the usage of the Hopf functional method (Uriel Frisch, Turbulence: The Legacy of A. N. Kolmogorov, Cambridge University Press, 1995, chapter 9.5). We use the method to calculate statistics over low-dimensional attractors, and for fluid flow on a rotating sphere. For the cases of the 3-dimensional Lorenz attractor, and a 5-dimensional nonlinear system introduced by Orszag as a toy model of turbulence (Steven Orszag, in Fluid Dynamics: Les Houches, 1977), a comparison of results obtained by low-order truncations of the cumulant expansion against statistics calculated by direct numerical integration forward in time shows surprisingly good agreement. The extension of the Hopf method to a high-dimensional barotropic model of inviscid fluid flow on a rotating sphere, which employs Arakawa's method to conserve energy and enstrophy (Akio Arakawa, J. Comp. Phys. 1, 119 (1966)), is discussed.
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations yields an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. Computing the Lagrange multipliers by LU factorization improves both convergence behavior and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.
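The central trick, recasting a time average as an extra state of the differential equation, can be sketched on a deliberately non-chaotic one-parameter ODE where plain direct differentiation is valid (the paper's shadowing machinery exists precisely because this fails for chaotic systems). The model and parameter values below are invented for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model: dx/dt = -a*x + 1; quantity of interest J = (1/T) * integral of x.
# The running integral of x and the tangent state v = dx/da are appended
# to the ODE state, so one forward solve yields both J and dJ/da.
a, T = 2.0, 50.0

def rhs(t, y):
    x, J, v, G = y
    return [-a * x + 1.0,     # state equation
            x,                # running integral of x
            -a * v - x,       # tangent equation: d/da of the state ODE
            v]                # running integral of v

sol = solve_ivp(rhs, (0.0, T), [0.0, 0.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
J = sol.y[1, -1] / T          # time-averaged quantity (analytically 0.495)
dJda = sol.y[3, -1] / T       # its sensitivity to a (analytically -0.245)
print(J, dJda)
```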
Dynamic characterization of satellite components through non-invasive methods
Mullens, Joshua G; Wiest, Heather K; Mascarenas, David D; Park, Gyuhae
2011-01-24
The rapid deployment of satellites is hindered by the need to flight-qualify their components and the resulting mechanical assembly. Conventional methods for qualification testing of satellite components are costly and time consuming. Furthermore, full-scale vehicles must be subjected to launch loads during testing. The harsh testing environment increases the risk of component damage during qualification. The focus of this research effort was to assess the performance of Structural Health Monitoring (SHM) techniques as a replacement for traditional vibration testing. SHM techniques were applied on a small-scale structure representative of a responsive satellite. The test structure consisted of an extruded aluminum space-frame covered with aluminum shear plates, which was assembled using bolted joints. Multiple piezoelectric patches were bonded to the test structure and acted as combined actuators and sensors. Various methods of SHM were explored including impedance-based health monitoring, wave propagation, and conventional frequency response functions. Using these methods in conjunction with finite element modeling, the dynamic properties of the test structure were established and areas of potential damage were identified and localized. The adequacy of the results from each SHM method was validated by comparison to results from conventional vibration testing.
An implicit finite element method for discrete dynamic fracture
Jobie M. Gerken
1999-12-01
A method for modeling the discrete fracture of two-dimensional linear elastic structures with a distribution of small cracks subject to dynamic conditions has been developed. The foundation for this numerical model is a plane element formulated from the Hu-Washizu energy principle. The distribution of small cracks is incorporated into the numerical model by including a small crack at each element interface. The additional strain field in an element adjacent to this crack is treated as an externally applied strain field in the Hu-Washizu energy principle. The resulting stiffness matrix is that of a standard plane element. The resulting load vector is that of a standard plane element with an additional term that includes the externally applied strain field. Except for the crack strain field equations, all terms of the stiffness matrix and load vector are integrated symbolically in Maple V so that fully integrated plane stress and plane strain elements are constructed. The crack strain field equations are integrated numerically. The modeling of dynamic behavior of simple structures was demonstrated within acceptable engineering accuracy. In the model of axial and transverse vibration of a beam and the breathing mode of vibration of a thin ring, the dynamic characteristics were shown to be within expected limits. The models dominated by tensile forces (the axially loaded beam and the pressurized ring) were within 0.5% of the theoretical values, while the shear dominated model (the transversely loaded beam) was within 5% of the calculated theoretical value. The constant strain field of the tensile problems can be modeled exactly by the numerical model, so the numerical results should therefore be exact; the discrepancies can be accounted for by errors in the calculation of frequency from the numerical results. The linear strain field of the transverse model must be modeled by a series of constant strain elements. This is an approximation to the true strain field, so some error in the computed response is expected.
Space station static and dynamic analyses using parallel methods
NASA Technical Reports Server (NTRS)
Gupta, V.; Newell, J.; Storaasli, O.; Baddourah, M.; Bostic, S.
1993-01-01
Algorithms for high-performance parallel computers are applied to perform static analyses of large-scale Space Station finite-element models (FEMs). Several parallel-vector algorithms under development at NASA Langley are assessed. Sparse matrix solvers were found to be more efficient than banded symmetric or iterative solvers for the static analysis of large-scale applications. In addition, new sparse and 'out-of-core' solvers were found superior to substructure (superelement) techniques, which require significant additional cost and time to perform static condensation during global FEM matrix generation as well as the subsequent recovery and expansion. A method to extend the fast parallel static solution techniques to reduce the computation time for dynamic analysis is also described. The resulting static and dynamic algorithms offer design economy for preliminary multidisciplinary design optimization and FEM validation against test modes. The algorithms are being optimized for parallel computers to solve one-million degrees-of-freedom (DOF) FEMs. The high-performance computers at NASA afforded effective software development and testing, and efficient and accurate solutions with timely system response and graphical interpretation of results rarely found in industry. Based on the authors' experience, similar cooperation between industry and government should be encouraged for similar large-scale projects in the future.
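The advantage of sparse storage for static analysis K u = f can be sketched with SciPy on a toy one-dimensional bar model (not the Space Station FEM; the stiffness and load values are arbitrary):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy static analysis K u = f: a fixed-free chain of n identical axial
# elements gives a tridiagonal stiffness matrix with ~3n nonzeros, so a
# sparse factorization avoids the O(n^2) storage of a dense solve.
n = 2000                                   # free degrees of freedom
k = 1.0e6                                  # element stiffness (arbitrary units)
main = np.full(n, 2.0 * k)
main[-1] = k                               # free end
K = sp.diags([np.full(n - 1, -k), main, np.full(n - 1, -k)],
             offsets=[-1, 0, 1], format="csc")
f = np.zeros(n)
f[-1] = 500.0                              # end load

u = spla.spsolve(K, f)                     # sparse LU solve
# For a chain of n springs in series, the tip displacement is F * n / k.
print(u[-1])
```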
Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark
2002-01-01
Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.
Study on the measurement method of a dynamic spectrum
NASA Astrophysics Data System (ADS)
Wang, Y.; Li, G.; Lin, L.; Liu, Y. L.; Li, X. X.; C-Y Lu, Stephen
2005-01-01
Continuous non-invasive blood component sensing and regulation is necessary for patients with metabolism disorders. Utilizing near-infrared spectroscopy to non-invasively sense blood component concentrations has been a focus topic in biomedical optics applications. It has been shown to be versatile, speedy and sensitive for several kinds of samples. However, there is no report of any successful non-invasive blood component concentration detection technique (except for arterial blood oxygen saturation) that can meet the requirements of clinical application. One of the key difficulties is the influence of individual discrepancies. The dynamic spectrum (DS) is a recently presented non-invasive measurement method for sensing blood component concentration. It can theoretically eliminate the individual discrepancies of the tissues except for the pulsatile component of the arterial blood. This indicates a brand new way to measure blood component concentration and the potential to provide absolute quantitation of hemodynamic variables. In this paper, the measurement methodology to acquire the DS from photoplethysmography (PPG) is studied. A dynamic spectrometer to acquire the DS is described.
Applications of Computational Methods for Dynamic Stability and Control Derivatives
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Spence, Angela M.
2004-01-01
Initial steps in the application of a low-order panel method computational fluid dynamic (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and they include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions about some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.
STECKMAP: STEllar Content and Kinematics via Maximum A Posteriori likelihood
NASA Astrophysics Data System (ADS)
Ocvirk, P.
2011-08-01
STECKMAP stands for STEllar Content and Kinematics via Maximum A Posteriori likelihood. It is a tool for interpreting galaxy spectra in terms of their stellar populations through the derivation of their star formation history, age-metallicity relation, kinematics and extinction. The observed spectrum is projected onto a temporal sequence of models of single stellar populations, so as to determine a linear combination of these models that best fits the observed spectrum. The weights of the various components of this linear combination indicate the stellar content of the population. This procedure is regularized using various penalizing functions. The principles of the method are detailed in Ocvirk et al. 2006.
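The core projection step can be sketched as a nonnegative least-squares fit of an observed spectrum onto a template basis. The templates, wavelength grid and weights below are invented, and STECKMAP's penalizing (regularization) functions are omitted:

```python
import numpy as np
from scipy.optimize import nnls

# Minimal sketch of the projection idea (not STECKMAP itself): recover
# nonnegative weights of single-stellar-population templates from an
# observed spectrum. Templates here are made-up smooth curves.
rng = np.random.default_rng(1)
wave = np.linspace(4000.0, 5000.0, 200)          # wavelength grid (angstrom)
templates = np.column_stack([np.exp(-((wave - c) / 300.0) ** 2)
                             for c in (4100.0, 4500.0, 4900.0)])
true_w = np.array([0.7, 0.0, 0.3])               # stellar-content weights
obs = templates @ true_w + 1e-3 * rng.standard_normal(wave.size)

weights, resid = nnls(templates, obs)            # nonnegative least squares
print(weights)
```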
CORA: Emission Line Fitting with Maximum Likelihood
NASA Astrophysics Data System (ADS)
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
The advent of pipeline-processed data both from space- and ground-based observatories often dispenses with the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
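A minimal sketch of the Poisson maximum-likelihood idea (not CORA's actual fixed-point algorithm): fit the amplitude of a known line profile to low-count data by minimizing the Poisson negative log-likelihood. All model values (line center, width, background) are made up:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# With low counts per bin, fit an emission line by maximizing the Poisson
# likelihood rather than chi-square. Model per bin: mu_i = A * g_i + b,
# with a fixed Gaussian line profile g and a known flat background b.
rng = np.random.default_rng(2)
x = np.linspace(13.3, 13.7, 60)                  # wavelength bins (angstrom)
g = np.exp(-0.5 * ((x - 13.5) / 0.02) ** 2)      # fixed line profile
A_true, b = 5.0, 0.5
counts = rng.poisson(A_true * g + b)             # observed low-count spectrum

def nll(A):
    # Poisson negative log-likelihood, dropping the A-independent log(n!) term.
    mu = A * g + b
    return np.sum(mu - counts * np.log(mu))

fit = minimize_scalar(nll, bounds=(0.0, 50.0), method="bounded")
A_hat = fit.x
print(A_hat)
```

With only a few dozen counts in the line, the maximum-likelihood amplitude scatters noticeably around the true value; that scatter is exactly the low-count regime the Poisson treatment is meant for.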
CORA - emission line fitting with Maximum Likelihood
NASA Astrophysics Data System (ADS)
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often dispenses with the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
Diffusion Tensor Estimation by Maximizing Rician Likelihood
Landman, Bennett; Bazin, Pierre-Louis; Prince, Jerry
2012-01-01
Diffusion tensor imaging (DTI) is widely used to characterize white matter in health and disease. Previous approaches to the estimation of diffusion tensors have either been statistically suboptimal or have used Gaussian approximations of the underlying noise structure, which is Rician in reality. This can cause quantities derived from these tensors — e.g., fractional anisotropy and apparent diffusion coefficient — to diverge from their true values, potentially leading to artifactual changes that confound clinically significant ones. This paper presents a novel maximum likelihood approach to tensor estimation, denoted Diffusion Tensor Estimation by Maximizing Rician Likelihood (DTEMRL). In contrast to previous approaches, DTEMRL considers the joint distribution of all observed data in the context of an augmented tensor model to account for variable levels of Rician noise. To improve numeric stability and prevent non-physical solutions, DTEMRL incorporates a robust characterization of positive definite tensors and a new estimator of underlying noise variance. In simulated and clinical data, mean squared error metrics show consistent and significant improvements from low clinical SNR to high SNR. DTEMRL may be readily supplemented with spatial regularization or a priori tensor distributions for Bayesian tensor estimation. PMID:23132746
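The role of the Rician noise model can be sketched in one dimension, far simpler than the paper's joint tensor fit: estimate a signal level from Rician-distributed magnitudes by maximum likelihood and compare with the naive Gaussian-style sample mean, which is biased upward at low SNR. The signal and noise values are invented:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

# Magnitude MR data are Rician; at low SNR the sample mean overestimates
# the underlying signal, while the Rician MLE does not.
rng = np.random.default_rng(3)
sigma, A_true = 1.0, 1.5                    # noise level, true signal
m = stats.rice.rvs(A_true / sigma, scale=sigma, size=5000, random_state=rng)

def nll(A):
    # Rician negative log-likelihood in the signal parameter A.
    return -np.sum(stats.rice.logpdf(m, A / sigma, scale=sigma))

A_mle = minimize_scalar(nll, bounds=(1e-6, 10.0), method="bounded").x
A_gauss = m.mean()                          # naive Gaussian-style estimate
print(A_mle, A_gauss)
```

At this SNR the naive mean sits well above the true signal while the Rician MLE recovers it, which is the bias mechanism the abstract describes for derived quantities such as fractional anisotropy.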
Maximum Likelihood Analysis in the PEN Experiment
NASA Astrophysics Data System (ADS)
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped-beam approach. During runs in 2008-10, PEN has acquired over 2 × 10^7 π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ ν, π+ → μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Dynamically controlled crystallization method and apparatus and crystals obtained thereby
NASA Technical Reports Server (NTRS)
Arnowitz, Leonard (Inventor); Steinberg, Emanuel (Inventor)
1999-01-01
A method and apparatus for dynamically controlling the crystallization of proteins including a crystallization chamber or chambers for holding a protein in a salt solution, one or more salt solution chambers, two communication passages respectively coupling the crystallization chamber with each of the salt solution chambers, and transfer mechanisms configured to respectively transfer salt solution between each of the salt solution chambers and the crystallization chamber. The transfer mechanisms are interlocked to maintain the volume of salt solution in the crystallization chamber substantially constant. Salt solution of different concentrations is transferred into and out of the crystallization chamber to adjust the salt concentration in the crystallization chamber to achieve precise control of the crystallization process.
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, Jon D.
1990-01-01
With respect to large space structures, emphasis is placed on uncertainty of frequency response using the fuzzy set method, and on on-orbit response prediction using laboratory test data to refine an analytical model. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to help facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.
Driving the Model to Its Limit: Profile Likelihood Based Model Reduction.
Maiwald, Tim; Hass, Helge; Steiert, Bernhard; Vanlier, Joep; Engesser, Raphael; Raue, Andreas; Kipkeew, Friederike; Bock, Hans H; Kaschek, Daniel; Kreutz, Clemens; Timmer, Jens
2016-01-01
In systems biology, one of the major tasks is to tailor model complexity to information content of the data. A useful model should describe the data and produce well-determined parameter estimates and predictions. Too small a model will not be able to describe the data, whereas a model which is too large tends to overfit measurement errors and does not provide precise predictions. Typically, the model is modified and tuned to fit the data, which often results in an oversized model. To restore the balance between model complexity and available measurements, either new data has to be gathered or the model has to be reduced. In this manuscript, we present a data-based method for reducing non-linear models. The profile likelihood is utilised to assess parameter identifiability and designate likely candidates for reduction. Parameter dependencies are analysed along profiles, providing context-dependent suggestions for the type of reduction. We discriminate four distinct scenarios, each associated with a specific model reduction strategy. Iterating the presented procedure eventually results in an identifiable model, which is capable of generating precise and testable predictions. Source code for all toy examples is provided within the freely available, open-source modelling environment Data2Dynamics based on MATLAB available at http://www.data2dynamics.org/, as well as the R packages dMod/cOde available at https://github.com/dkaschek/. Moreover, the concept is generally applicable and can readily be used with any software capable of calculating the profile likelihood.
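The profile-likelihood diagnostic can be sketched on a toy non-identifiable model: in y = a·b·x only the product a·b is determined by the data, and re-optimizing the nuisance parameter at each fixed value of a yields a flat profile, flagging a as a reduction candidate. All values below are invented:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy profile likelihood: fit y = a*b*x where only the product a*b is
# identifiable. A flat profile for a signals that the model can be
# reduced to a single parameter c = a*b.
rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 0.05 * rng.standard_normal(x.size)   # true a*b = 2

def sse(a, b):
    return np.sum((y - a * b * x) ** 2)

# Profile of a: re-optimise the nuisance parameter b at each fixed a.
a_grid = np.linspace(0.5, 4.0, 30)
profile = np.array([minimize_scalar(lambda b: sse(a, b),
                                    bounds=(1e-3, 100.0),
                                    method="bounded").fun
                    for a in a_grid])
flat = profile.max() - profile.min()               # ~0: a is not identifiable
print(flat)
```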
The ONIOM molecular dynamics method for biochemical applications: cytidine deaminase
Matsubara, Toshiaki; Dupuis, Michel; Aida, Misako
2007-03-22
We derived and implemented the ONIOM-molecular dynamics (MD) method for biochemical applications. The implementation allows the characterization of the functions of real enzymes taking into account their thermal motion. In this method, the direct MD is performed by calculating the ONIOM energy and gradients of the system on the fly. We describe the first application of this ONIOM-MD method to cytidine deaminase. The environmental effects on the substrate in the active site are examined. The ONIOM-MD simulations show that the product uridine is strongly perturbed by the thermal motion of the environment and dissociates easily from the active site. TM and MA were supported in part by grants from the Ministry of Education, Culture, Sports, Science and Technology of Japan. MD was supported by the Division of Chemical Sciences, Office of Basic Energy Sciences, and by the Office of Biological and Environmental Research of the U.S. Department of Energy DOE. Battelle operates Pacific Northwest National Laboratory for DOE.
Introduction to finite-difference methods for numerical fluid dynamics
Scannapieco, E.; Harlow, F.H.
1995-09-01
This work is intended to be a beginner's exercise book for the study of basic finite-difference techniques in computational fluid dynamics. It is written for a student level ranging from high-school senior to university senior. Equations are derived from basic principles using algebra. Some discussion of partial-differential equations is included, but knowledge of calculus is not essential. The student is expected, however, to have some familiarity with the FORTRAN computer language, as the syntax of the computer codes themselves is not discussed. Topics examined in this work include: one-dimensional heat flow, one-dimensional compressible fluid flow, two-dimensional compressible fluid flow, and two-dimensional incompressible fluid flow with additions of the equations of heat flow and the k-ε model for turbulence transport. Emphasis is placed on numerical instabilities and methods by which they can be avoided, techniques that can be used to evaluate the accuracy of finite-difference approximations, and the writing of the finite-difference codes themselves. Concepts introduced in this work include: flux and conservation, implicit and explicit methods, Lagrangian and Eulerian methods, shocks and rarefactions, donor-cell and cell-centered advective fluxes, compressible and incompressible fluids, the Boussinesq approximation for heat flow, Cartesian tensor notation, the Boussinesq approximation for the Reynolds stress tensor, and the modeling of transport equations. A glossary is provided which defines these and other terms.
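The book's first topic can be sketched directly (in Python rather than the book's FORTRAN): explicit forward-time, centred-space differencing of one-dimensional heat flow, honoring the stability limit α·Δt/Δx² ≤ 1/2 that the discussion of numerical instabilities refers to. Grid and material values are arbitrary:

```python
import numpy as np

# Explicit (FTCS) finite differencing of dT/dt = alpha * d2T/dx2 on [0, L]
# with fixed-temperature ends. Exceeding r = alpha*dt/dx^2 = 1/2 makes the
# scheme unstable, which is the instability the book emphasizes.
alpha, L, nx = 1.0, 1.0, 51
dx = L / (nx - 1)
dt = 0.4 * dx * dx / alpha          # safely inside the stability limit
r = alpha * dt / dx ** 2

T = np.zeros(nx)
T[0], T[-1] = 1.0, 0.0              # fixed-temperature boundaries
for _ in range(10_000):             # march to (near) steady state
    T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])

# Steady state with these boundaries is the straight line T = 1 - x.
x = np.linspace(0.0, L, nx)
err = np.max(np.abs(T - (1.0 - x)))
print(err)
```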
A Dynamic Integration Method for Borderland Database using OSM data
NASA Astrophysics Data System (ADS)
Zhou, X.-G.; Jiang, Y.; Zhou, K.-X.; Zeng, L.
2013-11-01
Spatial data are fundamental to borderland analysis of geography, natural resources, demography, politics, economy, and culture. As the spatial region used in borderland research usually covers several neighboring countries' borderland regions, the data are difficult for any single research institution or government to acquire. Volunteered Geographic Information (VGI) has been proven to be a very successful means of acquiring timely and detailed global spatial data at very low cost, and is therefore one reasonable source of borderland spatial data. OpenStreetMap (OSM) is the best-known VGI resource, but the OSM data model is far removed from traditional authoritative geographic information, so OSM data must be converted to the scientist's customized data model. Because the real world changes quickly, the converted data also needs updating. Therefore, a dynamic integration method for borderland data is presented in this paper. In this method, a machine-learning mechanism is used to convert the OSM data model to the user data model; a method is presented for selecting the changed objects in the research area over a given period from the OSM whole-world daily diff file, and a change-only information file in the designed format is produced automatically. Based on the rules and algorithms mentioned above, we enabled the automatic (or semi-automatic) integration and updating of the borderland database by programming. The developed system was intensively tested.
Testing and Validation of the Dynamic Inertia Measurement Method
NASA Technical Reports Server (NTRS)
Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David
2015-01-01
The Dynamic Inertia Measurement (DIM) method uses a ground vibration test setup to determine the mass properties of an object using information from frequency response functions. Most conventional mass properties testing involves using spin tables or pendulum-based swing tests, which for large aerospace vehicles becomes increasingly difficult and time-consuming, and therefore expensive, to perform. The DIM method has been validated on small test articles but has not been successfully proven on large aerospace vehicles. In response, the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) conducted mass properties testing on an "iron bird" test article that is comparable in mass and scale to a fighter-type aircraft. The simple two-I-beam design of the "iron bird" was selected to ensure accurate analytical mass properties. Traditional swing testing was also performed to compare the level of effort, amount of resources, and quality of data with the DIM method. The DIM test showed favorable results for the center of gravity and moments of inertia; however, the products of inertia showed disagreement with analytical predictions.
NASA Astrophysics Data System (ADS)
Lika, Konstadia; Kearney, Michael R.; Freitas, Vânia; van der Veer, Henk W.; van der Meer, Jaap; Wijsman, Johannes W. M.; Pecquerie, Laure; Kooijman, Sebastiaan A. L. M.
2011-11-01
The Dynamic Energy Budget (DEB) theory for metabolic organisation captures the processes of development, growth, maintenance, reproduction and ageing for any kind of organism throughout its life-cycle. However, the application of DEB theory is challenging because the state variables and parameters are abstract quantities that are not directly observable. We here present a new approach to parameter estimation, the covariation method, that permits all parameters of the standard Dynamic Energy Budget (DEB) model to be estimated from standard empirical datasets. Parameter estimates are based on the simultaneous minimization of a weighted sum of squared deviations between a number of data sets and model predictions, or on minimization of the negative log-likelihood function, both in a single-step procedure. The structure of DEB theory permits the unusual situation of using single data-points (such as the maximum reproduction rate), which we call "zero-variate" data, for estimating parameters. We also introduce the concept of "pseudo-data", exploiting the rules for the covariation of parameter values among species that are implied by the standard DEB model. This allows us to introduce the concept of a generalised animal, which has specified parameter values. We here outline the philosophy behind the approach and its technical implementation. In a companion paper, we assess the behaviour of the estimation procedure and present preliminary findings of emerging patterns in parameter values across diverse taxa.
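The single-step fit across data types described above can be sketched in miniature. The growth model, data values, and weights below are invented for illustration; this is not the DEB toolbox itself, only a sketch of combining uni-variate data, a zero-variate datum, and a pseudo-datum in one weighted objective:

```python
import numpy as np

# Synthetic "uni-variate" dataset: length-at-age observations.
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
L_obs = np.array([4.8, 8.2, 13.1, 17.6, 19.4])

# "Zero-variate" datum: a single observed maximum length.
L_max_obs = 20.0

# "Pseudo-datum": a prior-like default for the growth rate k, included
# with low weight, analogous to generalised-animal parameter values.
k_pseudo, w_pseudo = 0.5, 0.1

def loss(Linf, k):
    """Weighted sum of squared relative deviations across all data types."""
    L_pred = Linf * (1.0 - np.exp(-k * t))              # von Bertalanffy growth
    sse = np.sum(((L_pred - L_obs) / L_obs) ** 2)       # uni-variate data
    sse += ((Linf - L_max_obs) / L_max_obs) ** 2        # zero-variate datum
    sse += w_pseudo * ((k - k_pseudo) / k_pseudo) ** 2  # pseudo-datum
    return sse

# Single-step simultaneous fit (here by brute-force grid search).
Linfs = np.linspace(15.0, 25.0, 101)
ks = np.linspace(0.1, 1.0, 91)
grid = np.array([[loss(Li, ki) for ki in ks] for Li in Linfs])
i, j = np.unravel_index(np.argmin(grid), grid.shape)
print(f"Linf = {Linfs[i]:.2f}, k = {ks[j]:.3f}")
```

The point is that all three data types enter one objective simultaneously, rather than being fit sequentially.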
Transfer Entropy as a Log-Likelihood Ratio
NASA Astrophysics Data System (ADS)
Barnett, Lionel; Bossomaier, Terry
2012-09-01
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
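For the linear-Gaussian case the equivalence can be demonstrated numerically. The sketch below (an illustrative simulation, not the authors' code) computes the plug-in transfer entropy from two nested OLS fits and the corresponding log-likelihood-ratio statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000

# Coupled pair: x drives y with a one-step lag (Wiener-Granger sense).
x = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

def rss(target, regressors):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    r = target - regressors @ beta
    return r @ r

Y = y[1:]
full = np.column_stack([y[:-1], x[:-1]])   # predict y from its own past and x's past
restricted = y[:-1, None]                  # predict y from its own past only

# Plug-in transfer entropy (nats) in the Gaussian/linear case, and the
# equivalent log-likelihood-ratio test statistic for H0: TE = 0.
te = 0.5 * np.log(rss(Y, restricted) / rss(Y, full))
lr_stat = 2.0 * len(Y) * te   # asymptotically chi-squared(1) under H0
print(te, lr_stat)
```

Here the transfer entropy is half the Granger-causality statistic, and the LR statistic is just 2N times the plug-in estimate, which is the equivalence the paper establishes in general.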
Maximum likelihood identification of aircraft parameters with unsteady aerodynamic modelling
NASA Technical Reports Server (NTRS)
Keskar, D. A.; Wells, W. R.
1979-01-01
A simplified aerodynamic force model based on the physical principle of Prandtl's lifting line theory and the trailing vortex concept has been developed to account for unsteady aerodynamic effects in aircraft dynamics. Longitudinal equations of motion have been modified to include these effects. The presence of convolution integrals in the modified equations of motion led to a frequency domain analysis utilizing Fourier transforms. This reduces the integro-differential equations to relatively simple algebraic equations, thereby reducing computation time significantly. A parameter extraction program based on the maximum likelihood estimation technique is developed in the frequency domain. The extraction algorithm contains a new scheme for obtaining sensitivity functions by using numerical differentiation. The paper concludes with examples using computer-generated and real flight data.
Assessing allelic dropout and genotype reliability using maximum likelihood.
Miller, Craig R; Joyce, Paul; Waits, Lisette P
2002-01-01
A growing number of population genetic studies utilize nuclear DNA microsatellite data from museum specimens and noninvasive sources. Genotyping errors are elevated in these low quantity DNA sources, potentially compromising the power and accuracy of the data. The most conservative method for addressing this problem is effective, but requires extensive replication of individual genotypes. In search of a more efficient method, we developed a maximum-likelihood approach that minimizes errors by estimating genotype reliability and strategically directing replication at loci most likely to harbor errors. The model assumes that false and contaminant alleles can be removed from the dataset and that the allelic dropout rate is even across loci. Simulations demonstrate that the proposed method marks a vast improvement in efficiency while maintaining accuracy. When allelic dropout rates are low (0-30%), the reduction in the number of PCR replicates is typically 40-50%. The model is robust to moderate violations of the even dropout rate assumption. For datasets that contain false and contaminant alleles, a replication strategy is proposed. Our current model addresses only allelic dropout, the most prevalent source of genotyping error. However, the developed likelihood framework can incorporate additional error-generating processes as they become more clearly understood. PMID:11805071
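A minimal sketch of the even-dropout-rate model follows. The replicate counts, the 0.5 heterozygote prior, and the helper name `prob_true_het` are illustrative assumptions, not the authors' implementation:

```python
# Replicate genotyping results for known heterozygotes at several loci:
# (number of replicates showing allelic dropout, total replicates).
counts = [(2, 8), (1, 8), (3, 8), (0, 8), (2, 8)]

# ML estimate of a single dropout rate p (assumed even across loci):
# the binomial likelihood is maximised by the pooled frequency.
drops = sum(k for k, n in counts)
total = sum(n for k, n in counts)
p_hat = drops / total
print(f"ML dropout rate: {p_hat:.3f}")

def prob_true_het(n_replicates, p, het_prior=0.5):
    """Posterior probability that the true genotype is heterozygous, given
    that all n replicates scored as the same homozygote (Bayes' rule)."""
    like_het = p ** n_replicates   # every replicate dropped the same allele
    like_hom = 1.0                 # a true homozygote always looks homozygous
    return het_prior * like_het / (het_prior * like_het + (1 - het_prior) * like_hom)

# Reliability-directed replication: replicate a suspect single-allele call
# until its posterior probability of being a hidden heterozygote is low.
for n in range(1, 5):
    print(n, round(prob_true_het(n, p_hat), 4))
```

Each additional concordant replicate multiplies the heterozygote likelihood by p, which is why directing replication at loci with high dropout risk is so much more efficient than replicating everything uniformly.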
Developmental Changes in Children's Understanding of Future Likelihood and Uncertainty
ERIC Educational Resources Information Center
Lagattuta, Kristin Hansen; Sayfan, Liat
2011-01-01
Two measures assessed 4-10-year-olds' and adults' (N = 201) understanding of future likelihood and uncertainty. In one task, participants sequenced sets of event pictures varying by one physical dimension according to increasing future likelihood. In a separate task, participants rated characters' thoughts about the likelihood of future events,…
Likelihood-free Bayesian computation for structural model calibration: a feasibility study
NASA Astrophysics Data System (ADS)
Jin, Seung-Seop; Jung, Hyung-Jo
2016-04-01
Finite element (FE) model updating is often used to associate FE models with corresponding existing structures for condition assessment. FE model updating is an inverse problem and prone to ill-posedness and ill-conditioning when there are many errors and uncertainties in both the FE model and its corresponding measurements. In this case, it is important to quantify these uncertainties properly. Bayesian FE model updating is one of the well-known methods to quantify parameter uncertainty by updating our prior belief on the parameters with the available measurements. In Bayesian inference, the likelihood plays a central role in summarizing the overall residuals between model predictions and corresponding measurements. Therefore, the likelihood should be carefully chosen to reflect the characteristics of the residuals. It is generally known that very little or no information is available regarding the statistical characteristics of the residuals. In most cases, the likelihood is assumed to be an independent, identically distributed Gaussian distribution with zero mean and constant variance. However, this assumption may cause biased and over- or underestimated parameter estimates, so that the uncertainty quantification and prediction become questionable. To alleviate the potential misuse of an inadequate likelihood, this study introduces approximate Bayesian computation (i.e., likelihood-free Bayesian inference), which relaxes the need for an explicit likelihood by analyzing the behavioral similarities between model predictions and measurements. We performed FE model updating based on likelihood-free Markov chain Monte Carlo (MCMC) without using the likelihood. Based on the results of the numerical study, we observed that likelihood-free Bayesian computation can quantify the updating parameters correctly, and that its predictive capability for measurements not used in the calibration is also secured.
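The likelihood-free idea can be sketched with a toy ABC rejection sampler. A scalar stand-in replaces the FE model, and all numbers are invented; real applications use MCMC variants rather than plain rejection:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Measurements" from the real structure (here generated from a known truth).
theta_true = 2.0
observed = theta_true * np.ones(10) + 0.1 * rng.standard_normal(10)

def simulate(theta):
    """Forward-model prediction (stands in for an FE model run)."""
    return theta * np.ones(10) + 0.1 * rng.standard_normal(10)

def summary(d):
    """Summary statistic used to compare behaviors."""
    return d.mean()

# ABC rejection: no explicit likelihood; accept prior draws whose simulated
# response behaves similarly (within eps) to the measurements.
eps = 0.05
prior_draws = rng.uniform(0.0, 4.0, 20000)
accepted = [th for th in prior_draws
            if abs(summary(simulate(th)) - summary(observed)) < eps]

posterior = np.array(accepted)
print(len(posterior), posterior.mean(), posterior.std())
```

The accepted draws approximate the posterior without ever writing down a residual distribution, which is exactly what sidesteps the i.i.d.-Gaussian assumption criticized above.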
Stochastic Maximum Likelihood (SML) parametric estimation of overlapped Doppler echoes
NASA Astrophysics Data System (ADS)
Boyer, E.; Petitdidier, M.; Larzabal, P.
2004-11-01
This paper investigates the area of overlapped echo data processing. In such cases, classical methods, such as Fourier-like techniques or pulse-pair methods, fail to estimate the first three spectral moments of the echoes because of their lack of resolution. A promising method, based on a modelization of the covariance matrix of the time series and on a Stochastic Maximum Likelihood (SML) estimation of the parameters of interest, has recently been introduced in the literature. This method has been tested on simulations and on a few spectra from actual data, but no exhaustive investigation of the SML algorithm has been conducted on actual data; this paper fills that gap. The radar data came from the thunderstorm campaign that took place at the National Astronomy and Ionospheric Center (NAIC) in Arecibo, Puerto Rico, in 1998.
NASA Technical Reports Server (NTRS)
Stepner, D. E.; Mehra, R. K.
1973-01-01
A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. The method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are first given and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.
ERIC Educational Resources Information Center
Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike
2011-01-01
It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
Detection of abrupt changes in dynamic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1984-01-01
Some of the basic ideas associated with the detection of abrupt changes in dynamic systems are presented. Multiple filter-based techniques and residual-based method and the multiple model and generalized likelihood ratio methods are considered. Issues such as the effect of unknown onset time on algorithm complexity and structure and robustness to model uncertainty are discussed.
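A minimal generalized-likelihood-ratio sketch for detecting a mean jump at an unknown onset time follows (illustrative only; a known unit noise variance is assumed, and the jump size is replaced by its ML estimate exactly as in the GLR method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian signal with an abrupt mean jump at t = 120 (onset time unknown).
n, jump_at = 200, 120
x = rng.standard_normal(n)
x[jump_at:] += 1.5
sigma2 = 1.0

def glr(x, k):
    """Log-likelihood ratio of 'mean jumps at k' vs 'no change', with the
    unknown post-change mean replaced by its ML estimate (the tail mean)."""
    tail = x[k:]
    nu_hat = tail.mean()
    return len(tail) * nu_hat ** 2 / (2.0 * sigma2)

# Maximise over the unknown onset time -- the source of the extra
# algorithm complexity discussed above.
scores = np.array([glr(x, k) for k in range(1, n - 1)])
k_hat = 1 + int(np.argmax(scores))
print(k_hat, scores.max())
```

The maximization over every candidate onset time is exactly what makes the unknown onset time costly: each new sample in an online setting adds another hypothesis to score, which motivates the windowed approximations used in practice.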
Steered Molecular Dynamics Methods Applied to Enzyme Mechanism and Energetics.
Ramírez, C L; Martí, M A; Roitberg, A E
2016-01-01
One of the main goals of chemistry is to understand the underlying principles of chemical reactions, in terms of both the reaction mechanism and the thermodynamics that govern it. Using hybrid quantum mechanics/molecular mechanics (QM/MM)-based methods in combination with a biased sampling scheme, it is possible to simulate chemical reactions occurring inside complex environments such as an enzyme or aqueous solution, and to determine the corresponding free energy profile, which provides direct comparison with experimentally determined kinetic and equilibrium parameters. Among the most promising biasing schemes is the multiple steered molecular dynamics method, which in combination with Jarzynski's Relationship (JR) allows obtaining the equilibrium free energy profile from a finite set of nonequilibrium reactive trajectories by exponentially averaging the individual work profiles. However, obtaining statistically converged and accurate profiles is far from easy and may result in increased computational cost if the steering speed and number of trajectories are chosen inappropriately. In this small review, using the extensively studied chorismate to prephenate conversion reaction, we first present a systematic study of how key parameters such as pulling speed, number of trajectories, and reaction progress are related to the resulting work distributions and in turn the accuracy of the free energy obtained with JR. Second, and in the context of QM/MM strategies, we introduce the Hybrid Differential Relaxation Algorithm, and show how it allows obtaining more accurate free energy profiles using faster pulling speeds and a smaller number of trajectories, and thus a smaller computational cost.
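The exponential work average at the heart of JR can be sketched directly. Gaussian work values are assumed below because they make the exact free-energy change known in closed form (dF = ⟨W⟩ - var(W)/2kT), so the estimator can be checked; real steered-MD work distributions need not be Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0

# Nonequilibrium work values from many steered trajectories (synthetic).
mean_W, std_W = 5.0, 1.0
dF_exact = mean_W - std_W ** 2 / (2.0 * kT)   # Gaussian-case closed form: 4.5

W = rng.normal(mean_W, std_W, 100_000)

# Jarzynski estimator: exponential average of the individual work values.
dF_jarzynski = -kT * np.log(np.mean(np.exp(-W / kT)))

# The naive average <W> overestimates dF by the dissipated work (second law);
# the exponential average removes it, at the price of being dominated by
# rare low-work trajectories -- the convergence problem discussed above.
print(W.mean(), dF_jarzynski, dF_exact)
```

As the work variance grows (faster pulling), the exponential average is dominated by ever-rarer low-W trajectories, which is precisely why converged profiles require careful choice of pulling speed and trajectory count.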
Dynamically controlled crystallization method and apparatus and crystals obtained thereby
NASA Technical Reports Server (NTRS)
Arnowitz, Leonard (Inventor); Steinberg, Emanuel (Inventor)
2003-01-01
A method and apparatus for dynamically controlling the crystallization of molecules including a crystallization chamber (14) or chambers for holding molecules in a precipitant solution, one or more precipitant solution reservoirs (16, 18), communication passages (17, 19) respectively coupling the crystallization chamber(s) with each of the precipitant solution reservoirs, and transfer mechanisms (20, 21, 22, 24, 26, 28) configured to respectively transfer precipitant solution between each of the precipitant solution reservoirs and the crystallization chamber(s). The transfer mechanisms are interlocked to maintain a constant volume of precipitant solution in the crystallization chamber(s). Precipitant solutions of different concentrations are transferred into and out of the crystallization chamber(s) to adjust the concentration of precipitant in the crystallization chamber(s) to achieve precise control of the crystallization process. The method and apparatus can be used effectively to grow crystals under reduced gravity conditions such as microgravity conditions of space, and under conditions of reduced or enhanced effective gravity as induced by a powerful magnetic field.
Modelling default and likelihood reasoning as probabilistic
NASA Technical Reports Server (NTRS)
Buntine, Wray
1990-01-01
A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlight their approximate nature, the dualities, and the need for complementary reasoning about relevance.
Groups, information theory, and Einstein's likelihood principle
NASA Astrophysics Data System (ADS)
Sicuro, Gabriele; Tempesta, Piergiulio
2016-04-01
We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts.
Likelihood of achieving air quality targets under model uncertainties.
Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W
2011-01-01
Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses. PMID:21138291
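A toy version of the Monte Carlo attainment calculation follows. The reduced-form model, the lognormal sensitivity distribution, and all numbers are invented for illustration; they are not the paper's Atlanta inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_mc = 50_000

# Reduced-form model: ozone improvement (ppb) from an emission control is
# sensitivity * fractional reduction, with uncertain model sensitivity.
baseline = 88.0   # ppb, current design value
standard = 75.0   # ppb, attainment target
nox_cut = 0.30    # 30% NOx emission reduction

# Parametric uncertainty in pollutant-precursor response, sampled by
# Monte Carlo (lognormal keeps the sensitivity positive).
sensitivity = rng.lognormal(mean=np.log(45.0), sigma=0.25, size=n_mc)  # ppb per unit cut
future = baseline - sensitivity * nox_cut

# Likelihood of attainment = fraction of Monte Carlo draws meeting the target.
p_attain = np.mean(future <= standard)
print(f"P(attain {standard} ppb) = {p_attain:.2f}")
```

Replacing the deterministic bright-line test (a single sensitivity value, pass/fail) with this distribution of outcomes is what allows two control strategies to swap rank, as the paper observes.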
CMBFIT: Rapid WMAP likelihood calculations with normal parameters
NASA Astrophysics Data System (ADS)
Sandvik, Håvard B.; Tegmark, Max; Wang, Xiaomin; Zaldarriaga, Matias
2004-03-01
We present a method for ultrafast confrontation of the Wilkinson Microwave Anisotropy Probe (WMAP) cosmic microwave background observations with theoretical models, implemented as a publicly available software package called CMBFIT, useful for anyone wishing to measure cosmological parameters by combining WMAP with other observations. The method takes advantage of the underlying physics by transforming into a set of parameters where the WMAP likelihood surface is accurately fit by the exponential of a quartic or sextic polynomial. Building on previous physics based approximations by Hu et al., Kosowsky et al., and Chu et al., it combines their speed with precision cosmology grade accuracy. A FORTRAN code for computing the WMAP likelihood for a given set of parameters is provided, precalibrated against CMBFAST, accurate to Δln L ≈ 0.05 over the entire 2σ region of the parameter space for 6 parameter “vanilla” ΛCDM models. We also provide 7-parameter fits including spatial curvature, gravitational waves and a running spectral index.
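The surrogate idea can be sketched on a one-parameter likelihood. A Poisson log-likelihood stands in for the WMAP surface below; this is not the CMBFIT code, just the fit-a-polynomial-over-the-2σ-region pattern:

```python
import numpy as np

# "Exact" (in general, expensive-to-evaluate) log-likelihood: Poisson, n = 100.
n_obs = 100.0
def loglike(lam):
    return n_obs * np.log(lam) - lam   # up to an additive constant

# Fit a quartic polynomial over roughly the 2-sigma region and reuse it
# as a fast surrogate for the exact log-likelihood.
lam_grid = np.linspace(80.0, 120.0, 400)       # mean +/- ~2 sqrt(n)
u = lam_grid - 100.0                           # center for numerical conditioning
coeffs = np.polyfit(u, loglike(lam_grid), deg=4)
surrogate = np.poly1d(coeffs)

err = np.max(np.abs(surrogate(u) - loglike(lam_grid)))
print(f"max |Delta ln L| over the fit region: {err:.2e}")
```

The surrogate evaluates in nanoseconds once fitted; the key design point, as in CMBFIT, is choosing parameters in which the surface really is nearly polynomial, so a low-degree fit stays accurate across the whole confidence region.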
Likelihood-Based Inference of B Cell Clonal Families
Ralph, Duncan K.
2016-01-01
The human immune system depends on a highly diverse collection of antibody-making B cells. B cell receptor sequence diversity is generated by a random recombination process called “rearrangement” forming progenitor B cells, then a Darwinian process of lineage diversification and selection called “affinity maturation.” The resulting receptors can be sequenced in high throughput for research and diagnostics. Such a collection of sequences contains a mixture of various lineages, each of which may be quite numerous, or may consist of only a single member. As a step to understanding the process and result of this diversification, one may wish to reconstruct lineage membership, i.e. to cluster sampled sequences according to which came from the same rearrangement events. We call this clustering problem “clonal family inference.” In this paper we describe and validate a likelihood-based framework for clonal family inference based on a multi-hidden Markov Model (multi-HMM) framework for B cell receptor sequences. We describe an agglomerative algorithm to find a maximum likelihood clustering, two approximate algorithms with various trade-offs of speed versus accuracy, and a third, fast algorithm for finding specific lineages. We show that under simulation these algorithms greatly improve upon existing clonal family inference methods, and that they also give significantly different clusters than previous methods when applied to two real data sets. PMID:27749910
Effects of parameter estimation on maximum-likelihood bootstrap analysis.
Ripplinger, Jennifer; Abdo, Zaid; Sullivan, Jack
2010-08-01
Bipartition support in maximum-likelihood (ML) analysis is most commonly assessed using the nonparametric bootstrap. Although bootstrap replicates should theoretically be analyzed in the same manner as the original data, model selection is almost never conducted for bootstrap replicates, substitution-model parameters are often fixed to their maximum-likelihood estimates (MLEs) for the empirical data, and bootstrap replicates may be subjected to less rigorous heuristic search strategies than the original data set. Even though this approach may increase computational tractability, it may also lead to the recovery of suboptimal tree topologies and affect bootstrap values. However, since well-supported bipartitions are often recovered regardless of method, use of a less intensive bootstrap procedure may not significantly affect the results. In this study, we investigate the impact of parameter estimation (i.e., assessment of substitution-model parameters and tree topology) on ML bootstrap analysis. We find that while forgoing model selection and/or setting substitution-model parameters to their empirical MLEs may lead to significantly different bootstrap values, it probably would not change their biological interpretation. Similarly, even though the use of reduced search methods often results in significant differences among bootstrap values, only omitting branch swapping is likely to change any biological inferences drawn from the data.
Ellis, J. A.; Siemens, X.; Van Haasteren, R.
2013-05-20
Direct detection of gravitational waves by pulsar timing arrays will become feasible over the next few years. In the low frequency regime (10⁻⁷ Hz-10⁻⁹ Hz), we expect that a superposition of gravitational waves from many sources will manifest itself as an isotropic stochastic gravitational wave background. Currently, a number of techniques exist to detect such a signal; however, many detection methods are computationally challenging. Here we introduce an approximation to the full likelihood function for a pulsar timing array that results in computational savings proportional to the square of the number of pulsars in the array. Through a series of simulations we show that the approximate likelihood function reproduces results obtained from the full likelihood function. We further show, both analytically and through simulations, that, on average, this approximate likelihood function gives unbiased parameter estimates for astrophysically realistic stochastic background amplitudes.
NASA Astrophysics Data System (ADS)
Song, Qiong; Wang, Yuehuan; Yan, Xiaoyun; Liu, Dang
2015-12-01
In this paper we propose an independent sequential maximum likelihood approach to address joint track-to-track association and bias removal in multi-sensor information fusion systems. First, we enumerate all possible association hypotheses and estimate a bias for each. We then calculate the likelihood of each association after bias compensation. Finally, we choose the association with the maximum likelihood over all hypotheses as the association result, and the corresponding bias estimate as the registration result. Considering the high false-alarm rate and interference, we adopt independent sequential association to calculate the likelihood. Simulation results show that the proposed method produces the correct association results and simultaneously estimates the bias precisely for a small number of targets in a multi-sensor fusion system.
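A toy version of the enumerate-compensate-score loop for four targets in one position dimension (all geometry and noise levels are invented; the paper's sequential variant replaces the brute-force enumeration used here):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

# Sensor 1 tracks (reference) and sensor 2 tracks, which see the same
# targets in a shuffled order with an unknown constant bias.
truth = np.array([0.0, 10.0, 25.0, 40.0])
bias_true = 3.0
order = [2, 0, 3, 1]
s1 = truth + 0.2 * rng.standard_normal(4)
s2 = truth[order] + bias_true + 0.2 * rng.standard_normal(4)
sigma2 = 2 * 0.2 ** 2          # variance of a pairwise track difference

best = None
for perm in permutations(range(4)):        # enumerate every association hypothesis
    d = s2[list(perm)] - s1                # hypothesised matched pairs
    b_hat = d.mean()                       # ML bias estimate for this hypothesis
    loglike = -0.5 * np.sum((d - b_hat) ** 2) / sigma2   # after bias compensation
    if best is None or loglike > best[0]:
        best = (loglike, perm, b_hat)

loglike, perm, b_hat = best
print(perm, round(b_hat, 2))
```

Joint estimation matters here: scoring associations without first compensating the bias would penalise the correct pairing by the full 3-unit offset and could prefer a wrong one.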
Simulation of dynamic interface fracture using spectral boundary integral method
NASA Astrophysics Data System (ADS)
Harish, Ajay Bangalore
Simulation of three-dimensional dynamic fracture events constitutes one of the most challenging topics in the field of computational mechanics. Spontaneous dynamic fracture along the interface of two elastic solids is of great importance and interest to a number of disciplines in engineering and science. Applications include dynamic fractures in aircraft structures, earthquakes, thermal shocks in nuclear containment vessels and delamination in layered composite materials.
Applying dynamic methods in off-line signature recognition
NASA Astrophysics Data System (ADS)
Igarza, Juan Jose; Hernaez, Inmaculada; Goirizelaia, Inaki; Espinosa, Koldo
2004-08-01
In this paper we present the work developed on off-line signature verification using Hidden Markov Models (HMM). HMM is a well-known technique used by other biometric features, for instance, in speaker recognition and dynamic or on-line signature verification. Our goal here is to extend Left-to-Right (LR)-HMM to the field of static or off-line signature processing using results provided by image connectivity analysis. The chain encoding of perimeter points for each blob obtained by this analysis is an ordered set of points in the space, clockwise around the perimeter of the blob. We discuss two different ways of generating the models depending on the way the blobs obtained from the connectivity analysis are ordered. In the first proposed method, blobs are ordered according to their perimeter length. In the second proposal, blobs are ordered in their natural reading order, i.e. from the top to the bottom and left to right. Finally, two LR-HMM models are trained using the parameters obtained by the mentioned techniques. Verification results of the two techniques are compared and some improvements are proposed.
PARTICLE-GAS DYNAMICS WITH ATHENA: METHOD AND CONVERGENCE
Bai Xuening; Stone, James M. E-mail: jstone@astro.princeton.ed
2010-10-15
The Athena magnetohydrodynamics code has been extended to integrate the motion of particles coupled with the gas via aerodynamic drag in order to study the dynamics of gas and solids in protoplanetary disks (PPDs) and the formation of planetesimals. Our particle-gas hybrid scheme is based on a second-order predictor-corrector method. Careful treatment of the momentum feedback on the gas guarantees exact conservation. The hybrid scheme is stable and convergent in most regimes relevant to PPDs. We describe a semi-implicit integrator generalized from the leap-frog approach. In the absence of drag force, it preserves the geometric properties of a particle orbit. We also present a fully implicit integrator that is unconditionally stable for all regimes of particle-gas coupling. Using our hybrid code, we study the numerical convergence of the nonlinear saturated state of the streaming instability. We find that gas flow properties are well converged with modest grid resolution (128 cells per pressure length ηr for dimensionless stopping time τ_s = 0.1) and an equal number of particles and grid cells. On the other hand, particle clumping properties converge only at higher resolutions, and finer resolution leads to stronger clumping before convergence is reached. Finally, we find that the measurement of particle transport properties resulting from the streaming instability may be subject to error of about ±20%.
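The stability benefit of treating drag implicitly can be illustrated with a one-line update; the single-particle, fixed-gas-velocity setting and the function name are assumptions for illustration, not the Athena implementation:

```python
# Minimal sketch of implicit drag integration: for v' = -(v - u)/tau_s,
# a backward-Euler treatment of the drag term remains stable even when
# the time step greatly exceeds the stopping time tau_s (assumed setup:
# one particle, constant gas velocity u -- not the full hybrid scheme).
def implicit_drag_step(v, u, tau_s, dt):
    """Backward-Euler update of particle velocity v toward gas velocity u."""
    return (v + dt * u / tau_s) / (1.0 + dt / tau_s)
```

By contrast, an explicit forward-Euler update diverges once dt exceeds roughly 2 tau_s, while this update relaxes v toward u monotonically for any step size.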
Method for increasing the dynamic range of mass spectrometers
Belov, Mikhail; Smith, Richard D.; Udseth, Harold R.
2004-09-07
A method for enhancing the dynamic range of a mass spectrometer by first passing a sample of ions through the mass spectrometer having a quadrupole ion filter, whereupon the intensities of the mass spectrum of the sample are measured. From the mass spectrum, ions within this sample are then identified for subsequent ejection. As further sampling introduces more ions into the mass spectrometer, the appropriate rf voltages are applied to a quadrupole ion filter, thereby selectively ejecting the undesired ions previously identified. In this manner, the desired ions may be collected for longer periods of time in an ion trap, thus allowing better collection and subsequent analysis of the desired ions. The ion trap used for accumulation may be the same ion trap used for mass analysis, in which case the mass analysis is performed directly, or it may be an intermediate trap. In the case where collection occurs in an intermediate trap, the desired ions are accumulated in the intermediate trap and then transferred to a separate mass analyzer. The present invention finds particular utility where the mass analysis is performed in an ion trap mass spectrometer or a Fourier transform ion cyclotron resonance mass spectrometer.
Penalized maximum-likelihood image reconstruction for lesion detection
NASA Astrophysics Data System (ADS)
Qi, Jinyi; Huesman, Ronald H.
2006-08-01
Detecting cancerous lesions is one major application in emission tomography. In this paper, we study penalized maximum-likelihood image reconstruction for this important clinical task. Compared to analytical reconstruction methods, statistical approaches can improve the image quality by accurately modelling the photon detection process and measurement noise in imaging systems. To explore the full potential of penalized maximum-likelihood image reconstruction for lesion detection, we derived simplified theoretical expressions that allow fast evaluation of the detectability of a random lesion. The theoretical results are used to design the regularization parameters to improve lesion detectability. We conducted computer-based Monte Carlo simulations to compare the proposed penalty function, conventional penalty function, and a penalty function for isotropic point spread function. The lesion detectability is measured by a channelized Hotelling observer. The results show that the proposed penalty function outperforms the other penalty functions for lesion detection. The relative improvement is dependent on the size of the lesion. However, we found that the penalty function optimized for a 5 mm lesion still outperforms the other two penalty functions for detecting a 14 mm lesion. Therefore, it is feasible to use the penalty function designed for small lesions in image reconstruction, because detection of large lesions is relatively easy.
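A generic one-dimensional sketch of the penalized maximum-likelihood idea: a Poisson log-likelihood plus a quadratic smoothness penalty, optimized by plain gradient ascent. The penalty form, step size, and all names are assumptions for illustration, not the detectability-optimized penalty studied in the paper:

```python
# Toy 1-D penalized maximum-likelihood reconstruction from Poisson counts y.
# Objective: sum_i (y_i log x_i - x_i) - beta * sum_i (x_{i+1} - x_i)^2.
import numpy as np

def penalized_ll(x, y, beta):
    """Poisson log-likelihood plus quadratic smoothness penalty."""
    return np.sum(y * np.log(x) - x) - beta * np.sum(np.diff(x) ** 2)

def reconstruct(y, beta=0.5, iters=500, step=0.05):
    x = np.maximum(y.astype(float), 0.5)  # positive initial estimate
    for _ in range(iters):
        d = np.diff(x)
        grad = y / x - 1.0            # gradient of the Poisson log-likelihood
        grad[:-1] += 2.0 * beta * d   # gradient of the smoothness penalty
        grad[1:] -= 2.0 * beta * d
        x = np.maximum(x + step * grad, 1e-6)
    return x
```

Larger beta trades data fidelity for smoothness; the paper's contribution is choosing such regularization parameters to maximize lesion detectability rather than generic image quality.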
Likelihood free inference for Markov processes: a comparison.
Owen, Jamie; Wilkinson, Darren J; Gillespie, Colin S
2015-04-01
Approaches to Bayesian inference for problems with intractable likelihoods have become increasingly important in recent years. Approximate Bayesian computation (ABC) and "likelihood free" Markov chain Monte Carlo techniques are popular methods for tackling inference in these scenarios but such techniques are computationally expensive. In this paper we compare the two approaches to inference, with a particular focus on parameter inference for stochastic kinetic models, widely used in systems biology. Discrete time transition kernels for models of this type are intractable for all but the most trivial systems yet forward simulation is usually straightforward. We discuss the relative merits and drawbacks of each approach whilst considering the computational cost implications and efficiency of these techniques. In order to explore the properties of each approach we examine a range of observation regimes using two example models. We use a Lotka-Volterra predator-prey model to explore the impact of full or partial species observations using various time course observations under the assumption of known and unknown measurement error. Further investigation into the impact of observation error is then made using a Schlögl system, a test case which exhibits bi-modal state stability in some regions of parameter space. PMID:25720092
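A minimal ABC rejection sampler for a stochastic kinetic model illustrates the setting where forward simulation is straightforward but the likelihood is intractable; the Gillespie implementation, uniform prior, summary statistic, tolerance, and population cap below are all assumptions for illustration, not the paper's experimental design:

```python
# Illustrative ABC rejection for a stochastic Lotka-Volterra model.
import random

def gillespie_lv(c1, c2, c3, x0=50, y0=50, t_end=5.0, rng=None):
    """Gillespie simulation: prey birth, predation, predator death."""
    rng = rng or random.Random(1)
    x, y, t = x0, y0, 0.0
    while t < t_end and x + y < 1000:  # cap runaway populations (sketch only)
        h = [c1 * x, c2 * x * y, c3 * y]
        h0 = h[0] + h[1] + h[2]
        if h0 == 0.0:
            break
        t += rng.expovariate(h0)
        r = rng.uniform(0.0, h0)
        if r < h[0]:
            x += 1
        elif r < h[0] + h[1]:
            x, y = x - 1, y + 1
        else:
            y -= 1
    return x, y

def abc_rejection(data, n=200, eps=30.0, rng=None):
    """Keep prior draws whose simulated summaries land within eps of the data."""
    rng = rng or random.Random(2)
    accepted = []
    for _ in range(n):
        c1 = rng.uniform(0.5, 2.0)  # assumed uniform prior on prey birth rate
        sim = gillespie_lv(c1, 0.01, 0.6, rng=rng)
        # Crude summary statistic: distance between final populations.
        if abs(sim[0] - data[0]) + abs(sim[1] - data[1]) < eps:
            accepted.append(c1)
    return accepted
```

The accepted draws approximate the posterior under the chosen summaries and tolerance; shrinking eps sharpens the approximation at the cost of more rejections, which is the computational trade-off the paper examines.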
A method for modeling contact dynamics for automated capture mechanisms
NASA Technical Reports Server (NTRS)
Williams, Philip J.
1991-01-01
Logicon Control Dynamics develops contact dynamics models for space-based docking and berthing vehicles. The models compute contact forces for the physical contact between mating capture-mechanism surfaces. Realistic simulation requires the proportionality constants used in calculating contact forces to approximate the surface stiffness of the contacting bodies. These constants become quite large for rigid metallic bodies, so small penetrations of surface boundaries can produce large contact forces.
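The penalty-style contact model described above can be sketched as a spring-damper law; the stiffness and damping values are placeholder assumptions, not Logicon's model:

```python
# Generic penalty-method contact force: proportional to surface penetration,
# with a damping term on the penetration rate; zero when surfaces separate.
# The stiffness k and damping c are illustrative placeholders.
def contact_force(penetration, penetration_rate, k=1.0e7, c=1.0e3):
    """Spring-damper contact force; out-of-contact bodies feel no force."""
    if penetration <= 0.0:
        return 0.0
    return k * penetration + c * penetration_rate
```

With a stiffness of order 1e7 N/m, a penetration of only 0.1 mm already yields a kilonewton-scale force, which is why such models demand small integration steps.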
A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood.
Guindon, Stéphane; Gascuel, Olivier
2003-10-01
The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitates fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximum-likelihood principle, which clearly satisfies these requirements. The core of this method is a simple hill-climbing algorithm that adjusts tree topology and branch lengths simultaneously. This algorithm starts from an initial tree built by a fast distance-based method and modifies this tree to improve its likelihood at each iteration. Due to this simultaneous adjustment of the topology and branch lengths, only a few iterations are sufficient to reach an optimum. We used extensive and realistic computer simulations to show that the topological accuracy of this new method is at least as high as that of the existing maximum-likelihood programs and much higher than the performance of distance-based and parsimony approaches. The reduction of computing time is dramatic in comparison with other maximum-likelihood packages, while the likelihood maximization ability tends to be higher. For example, only 12 min were required on a standard personal computer to analyze a data set consisting of 500 rbcL sequences with 1,428 base pairs from plant plastids, thus reaching a speed of the same order as some popular distance-based and parsimony algorithms. This new method is implemented in the PHYML program, which is freely available on our web page: http://www.lirmm.fr/w3ifa/MAAS/.
An improved version of the Green's function molecular dynamics method
NASA Astrophysics Data System (ADS)
Kong, Ling Ti; Denniston, Colin; Müser, Martin H.
2011-02-01
This work presents an improved version of the Green's function molecular dynamics method (Kong et al., 2009; Campañá and Müser, 2004 [1,2]), which enables one to study the elastic response of a three-dimensional solid to an external stress field by taking into consideration only atoms near the surface. In the previous implementation, the effective elastic coefficients measured at the Γ-point were altered to reduce finite size effects: their eigenvalues corresponding to the acoustic modes were set to zero. This scheme was found to work well for simple Bravais lattices as long as only atoms within the last layer were treated as Green's function atoms. However, it failed to function as expected in all other cases. It turns out that a violation of the acoustic sum rule for the effective elastic coefficients at Γ (Kong, 2010 [3]) was responsible for this behavior. In the new version, the acoustic sum rule is enforced by adopting an iterative procedure, which is found to be physically more meaningful than the previous one. In addition, the new algorithm allows one to treat lattices with bases and the Green's function slab is no longer confined to one layer. New version program summary: Program title: FixGFC/FixGFMD v1.12; Catalogue identifier: AECW_v1_1; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECW_v1_1.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 206 436; No. of bytes in distributed program, including test data, etc.: 4 314 850; Distribution format: tar.gz; Programming language: C++; Computer: All; Operating system: Linux; Has the code been vectorized or parallelized?: Yes, parallelized using MPI directives; RAM: depends on the problem; Classification: 7.7; External routines: LAMMPS (http://lammps.sandia.gov/), MPI ( http
Maximum likelihood dipole fitting in spatially colored noise.
Baryshnikov, B V; Van Veen, B D; Wakai, R T
2004-11-30
We evaluated a maximum likelihood dipole-fitting algorithm for somatosensory evoked field (SEF) MEG data in the presence of spatially colored noise. The method exploits the temporal multiepoch structure of the evoked response data to estimate the spatial noise covariance matrix from the section of data being fit, which eliminates the stationarity assumption implicit in prestimulus based whitening approaches. The performance of the method, including its effectiveness in comparison to other localization techniques (dipole fitting, LCMV and MUSIC) was evaluated using the bootstrap technique. Synthetic data results demonstrated robustness of the algorithm in the presence of relatively high levels of noise when traditional dipole fitting algorithms fail. Application of the algorithm to adult somatosensory MEG data showed that while it is not advantageous for high SNR data, it definitely provides improved performance (measured by the spread of localizations) as the data sample size decreases.
Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation
NASA Astrophysics Data System (ADS)
Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.
2015-11-01
We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
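The signal-to-noise eigenvector compression amounts to a generalized eigenproblem solved in the noise-whitened basis; this numpy sketch (the function name and toy covariances are assumptions, not the paper's pipeline) shows the construction:

```python
# Sketch of signal-to-noise eigenmode (Karhunen-Loeve-style) compression:
# whiten by the noise covariance N, diagonalize the whitened signal
# covariance S, and keep the highest signal-to-noise modes.
import numpy as np

def sn_compress(S, N, keep):
    """Return a (keep x npix) operator projecting data onto top S/N modes."""
    L = np.linalg.cholesky(N)
    Li = np.linalg.inv(L)
    M = Li @ S @ Li.T                  # noise-whitened signal covariance
    vals, vecs = np.linalg.eigh(M)     # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]     # highest signal-to-noise first
    return vecs[:, order[:keep]].T @ Li  # rows compress a data vector
```

Discarding low signal-to-noise modes both shrinks the matrices entering each likelihood evaluation and implicitly regularizes nearly degenerate directions, which is the effect the paper quantifies.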
Tandon, Ankita; Wang, Ming; Roe, Kevin C.; Patel, Surju; Ghahramani, Nasrollah
2016-01-01
Background: There is wide variation in referral for kidney transplant and preemptive kidney transplant (PKT). Patient characteristics such as age, race, sex and geographic location have been cited as contributing factors to this disparity. We hypothesize that the characteristics of nephrologists interplay with the patients' characteristics to influence the referral decision. In this study, we used hypothetical case scenarios to assess nephrologists' decisions regarding transplant referral. Methods: A total of 3180 nephrologists were invited to participate. Among those interested, 252 were randomly selected to receive a survey in which nephrologists were asked whether they would recommend transplant for the 25 hypothetical patients. Logistic regression models with single covariates and multiple covariates were used to identify patient characteristics associated with likelihood of being referred for transplant and to identify nephrologists' characteristics associated with likelihood of referring for transplant. Results: Of the 252 potential participants, 216 completed the survey. A nephrologist's affiliation with an academic institution was associated with a higher likelihood of referral, and being ‘>10 years from fellowship’ was associated with lower likelihood of referring patients for transplant. Patient age <50 years was associated with higher likelihood of referral. Rural location and smoking history/chronic obstructive pulmonary disease were associated with lower likelihood of being referred for transplant. The nephrologist's affiliation with an academic institution was associated with higher likelihood of referring for preemptive transplant, and the patient having a rural residence was associated with lower likelihood of being referred for preemptive transplant. Conclusions: The variability in transplant referral is related to patients' age and geographic location as well as the nephrologists' affiliation with an academic institution and time since completion
The maximum likelihood dating of magnetostratigraphic sections
NASA Astrophysics Data System (ADS)
Man, Otakar
2011-04-01
In general, stratigraphic sections are dated by biostratigraphy and magnetic polarity stratigraphy (MPS) is subsequently used to improve the dating of specific section horizons or to correlate these horizons in different sections of similar age. This paper shows, however, that the identification of a record of a sufficient number of geomagnetic polarity reversals against a reference scale often does not require any complementary information. The deposition and possible subsequent erosion of the section is herein regarded as a stochastic process, whose discrete time increments are independent and normally distributed. This model enables the expression of the time dependence of the magnetic record of section increments in terms of probability. To date samples bracketing the geomagnetic polarity reversal horizons, their levels are combined with various sequences of successive polarity reversals drawn from the reference scale. Each particular combination gives rise to specific constraints on the unknown ages of the primary remanent magnetization of samples. The problem is solved by the constrained maximization of the likelihood function with respect to these ages and parameters of the model, and by subsequent maximization of this function over the set of possible combinations. A statistical test of the significance of this solution is given. The application of this algorithm to various published magnetostratigraphic sections that included nine or more polarity reversals gave satisfactory results. This possible self-sufficiency makes MPS less dependent on other dating techniques.
PAML 4: phylogenetic analysis by maximum likelihood.
Yang, Ziheng
2007-08-01
PAML, currently in version 4, is a package of programs for phylogenetic analyses of DNA and protein sequences using maximum likelihood (ML). The programs may be used to compare and test phylogenetic trees, but their main strengths lie in the rich repertoire of evolutionary models implemented, which can be used to estimate parameters in models of sequence evolution and to test interesting biological hypotheses. Uses of the programs include estimation of synonymous and nonsynonymous rates (d(N) and d(S)) between two protein-coding DNA sequences, inference of positive Darwinian selection through phylogenetic comparison of protein-coding genes, reconstruction of ancestral genes and proteins for molecular restoration studies of extinct life forms, combined analysis of heterogeneous data sets from multiple gene loci, and estimation of species divergence times incorporating uncertainties in fossil calibrations. This note discusses some of the major applications of the package, which includes example data sets to demonstrate their use. The package is written in ANSI C, and runs under Windows, Mac OSX, and UNIX systems. It is available at http://abacus.gene.ucl.ac.uk/software/paml.html.
Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; Smith, Emily A.; Vaswani, Namrata; Petrich, Jacob W.
2016-02-10
The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as “residual minimization” (RM) and “maximum likelihood” (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of “photon counts” was approximately 20, 200, 1000, 3000, and 6000 and there were about 2–200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson’s weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. Here, the robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
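A minimal sketch of ML fitting of a mono-exponential decay to sparse binned counts, using a grid search over the lifetime and no instrument-response convolution; the bin layout and all names are assumptions for illustration, not the in-house routines used in the paper:

```python
# Poisson maximum-likelihood lifetime fit for binned fluorescence decay data.
# The amplitude is profiled out by scaling the model to the total counts.
import numpy as np

def ml_fit_lifetime(t, counts, taus):
    """Grid-search tau maximizing the Poisson log-likelihood of binned counts."""
    best_tau, best_ll = None, -np.inf
    for tau in taus:
        model = np.exp(-t / tau)
        model *= counts.sum() / model.sum()   # scale model to total counts
        ll = np.sum(counts * np.log(model) - model)
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau
```

Because the Poisson log-likelihood handles bins with zero or few counts exactly, this estimator degrades gracefully as the data become sparse, which is the regime where the paper finds ML outperforming residual minimization.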
The dud-alternative effect in likelihood judgment.
Windschitl, Paul D; Chambers, John R
2004-01-01
The judged likelihood of a focal outcome should generally decrease as the list of alternative possibilities increases. For example, the likelihood that a runner will win a race goes down when 2 new entries are added to the field. However, 6 experiments demonstrate that the presence of implausible alternatives (duds) often increases the judged likelihood of a focal outcome. This dud-alternative effect was detected for judgments involving uncertainty about trivia facts and stochastic events. Nonnumeric likelihood measures and betting measures reliably detected the effect, but numeric likelihood measures did not. Time pressure increased the magnitude of the effect. The results were consistent with a contrast-effect account: The inclusion of duds increases the perceived strength of the evidence for the focal outcome, thereby affecting its judged likelihood.
Dynamic Characteristics of Penor Peat Using MASW Method
NASA Astrophysics Data System (ADS)
Zainorabidin, A.; Said, M. J. M.
2016-07-01
The dynamic behaviour of soil affects its mechanical properties, such as shear wave velocity, shear modulus, damping ratio and Poisson's ratio [1], which are important aspects to consider for structures influenced by dynamic movement. This study determines the dynamic behaviour of Penor peat, measuring its shear wave velocity using MASW and estimating its shear modulus. Peat soils are very problematic soils since they have high compressibility, low shear strength, high moisture content and low bearing capacity, making them unsuitable materials on which to construct any foundation structures. Shear wave velocity ranges between 32.94 and 95.89 m/s and shear modulus ranges between 0.93 and 8.01 MPa. The variation in both dynamic properties is due to changes in peat density and is affected by fibre content, organic content, degree of degradation and moisture content.
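The shear modulus estimate follows from the standard relation G = ρVs²; in this quick check the bulk density of about 870 kg/m³ is an assumption chosen for illustration (it happens to reproduce the reported 0.93-8.01 MPa modulus range from the reported velocities), not a value from the survey:

```python
# Small-strain shear modulus from shear wave velocity: G = rho * Vs^2.
# The ~870 kg/m^3 peat density is an assumed illustrative value.
def shear_modulus_mpa(rho_kg_m3, vs_m_s):
    """Shear modulus in MPa from density (kg/m^3) and shear wave velocity (m/s)."""
    return rho_kg_m3 * vs_m_s ** 2 / 1.0e6  # Pa -> MPa
```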
Dynamical multiple-time stepping methods for overcoming resonance instabilities.
Chin, Siu A
2004-01-01
Current molecular dynamics simulations of biomolecules using multiple time steps to update the slowly changing force are hampered by instabilities beginning at time steps near the half period of the fastest vibrating mode. These "resonance" instabilities have become a critical barrier preventing the long-time simulation of biomolecular dynamics. Attempts to tame these instabilities by altering the slowly changing force and efforts to damp them out by Langevin dynamics do not address the fundamental cause of these instabilities. In this work, we trace the instability to the nonanalytic character of the underlying spectrum and show that a correct splitting of the Hamiltonian, which renders the spectrum analytic, restores stability. The resulting Hamiltonian dictates that in addition to updating the momentum due to the slowly changing force, one must also update the position with a modified mass. Thus multiple-time stepping must be done dynamically.
Reconstruction of ancestral genomic sequences using likelihood.
Elias, Isaac; Tuller, Tamir
2007-03-01
A challenging task in computational biology is the reconstruction of genomic sequences of extinct ancestors, given the phylogenetic tree and the sequences at the leaves. This task is best solved by calculating the most likely estimate of the ancestral sequences, along with the most likely edge lengths. We deal with this problem and also with the variant in which the phylogenetic tree, in addition to the ancestral sequences, needs to be estimated. The latter problem is known to be NP-hard, while the computational complexity of the former is unknown. Currently, all algorithms for solving these problems are heuristics without performance guarantees. The biological importance of these problems calls for developing better algorithms with guarantees of finding either optimal or approximate solutions. We develop approximation, fixed-parameter tractable (FPT), and fast heuristic algorithms for two variants of the problem: when the phylogenetic tree is known and when it is unknown. The approximation algorithm guarantees a solution with a log-likelihood ratio of 2 relative to the optimal solution. The FPT algorithm has a running time which is polynomial in the length of the sequences and exponential in the number of taxa. This makes it useful for calculating the optimal solution for small trees. Moreover, we combine the approximation algorithm and the FPT algorithm into an algorithm with an arbitrarily good approximation guarantee (PTAS). We tested our algorithms on both synthetic and biological data. In particular, we used the FPT algorithm for computing the most likely ancestral mitochondrial genomes of Hominidae (the great apes), thereby answering an interesting biological question. Moreover, we show how the approximation algorithms find good solutions for reconstructing the ancestral genomes of a set of lentiviruses (relatives of HIV). Supplementary material for this work is available at www.nada.kth.se/~isaac/publications/aml/aml.html.
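The likelihood machinery can be illustrated on the simplest instance: maximum-likelihood reconstruction of a single root state on a star tree under a symmetric two-state model. This textbook sketch (all names are assumptions) is far simpler than the paper's FPT and approximation algorithms:

```python
# ML ancestral state reconstruction on a star tree, symmetric two-state model.
# For a symmetric chain, the probability of ending in the starting state
# after branch length t is (1 + exp(-2t)) / 2.
import math

def p_same(t):
    """Probability a two-state symmetric chain ends where it started after time t."""
    return 0.5 * (1.0 + math.exp(-2.0 * t))

def ml_root_state(leaves, branch_lengths):
    """Return the root state (0 or 1) maximizing the likelihood of the leaves."""
    best_state, best_ll = None, -math.inf
    for s in (0, 1):
        ll = sum(math.log(p_same(t) if leaf == s else 1.0 - p_same(t))
                 for leaf, t in zip(leaves, branch_lengths))
        if ll > best_ll:
            best_state, best_ll = s, ll
    return best_state
```

On real trees the same idea is applied recursively over internal nodes, which is what makes the joint reconstruction problem computationally hard.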
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point (such uniqueness was previously shown for the fork).
Regional Earthquake Likelihood Models: A realm on shaky grounds?
NASA Astrophysics Data System (ADS)
Kossobokov, V.
2005-12-01
Seismology is juvenile and its appropriate statistical tools to date may have a "medieval flavor" for those who rush to apply the fuzzy language of a highly developed probability theory. To become "quantitatively probabilistic," earthquake forecasts/predictions must be defined with scientific accuracy. Following the most popular objectivists' viewpoint on probability, we cannot claim "probabilities" adequate without a long series of "yes/no" forecast/prediction outcomes. Without the "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure") and, therefore, cannot objectively quantify a forecast/prediction method's performance. Likelihood scoring is one of the delicate tools of statistics, which can be worthless or even misleading when inappropriate probability models are used. This is a basic loophole for the misuse of likelihood, as well as other statistical methods, in practice. The flaw can be avoided by accurate verification of generic probability models on the empirical data. This is not an easy task within the framework of the Regional Earthquake Likelihood Models (RELM) methodology, which neither defines the forecast precision nor provides a means to judge the ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem and its members will do their best to close the hole with an adequate, data-supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for `tomorrow' (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated in just one figure, which suggests rejecting with confidence above 97% "the generic California clustering model" used in the automatic calculations. As a result, since the date of publication in Nature, the United States Geological Survey website delivers to the public, emergency
Increasing Power of Groupwise Association Test with Likelihood Ratio Test
NASA Astrophysics Data System (ADS)
Sul, Jae Hoon; Han, Buhm; Eskin, Eleazar
Sequencing studies have been discovering a large number of rare variants, allowing the identification of the effects of rare variants on disease susceptibility. As methods to increase the statistical power of studies on rare variants, several groupwise association tests that group rare variants in genes and detect associations between groups and diseases have been proposed. One major challenge in these methods is to determine which variants are causal in a group; to overcome this challenge, previous methods used prior information that specifies how likely each variant is to be causal. Another source of information that can be used to determine causal variants is the observed data, because case individuals are likely to carry more causal variants than control individuals. In this paper, we introduce a likelihood ratio test (LRT) that uses both data and prior information to infer which variants are causal, and uses this inference to determine whether a group of variants is involved in a disease. We demonstrate through simulations that the LRT achieves higher power than previous methods. We also evaluate our method on mutation-screening data of the susceptibility gene for ataxia telangiectasia, and show that the LRT can detect an association in real data. To increase the computational speed of our method, we show how to decompose the computation of the LRT, and propose an efficient permutation test. With this optimization, we can efficiently compute an LRT statistic and its significance at a genome-wide level. The software for our method is publicly available at http://genetics.cs.ucla.edu/rarevariants.
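The general shape of a likelihood ratio test, twice the log-likelihood gap between the fitted alternative and the null, can be shown with a binomial toy example; this sketch (assuming 0 < k < n) is illustrative only and is not the paper's groupwise rare-variant statistic:

```python
# Generic likelihood ratio test for a binomial proportion: compare the
# null p0 against the maximum-likelihood alternative p_hat = k / n.
# Assumes 0 < k < n so both log-likelihoods are finite.
import math

def binom_ll(k, n, p):
    """Binomial log-likelihood (dropping the constant choose(n, k) term)."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def lrt_statistic(k, n, p0):
    """2 * (alternative log-likelihood - null log-likelihood); >= 0 by construction."""
    p_hat = k / n
    return 2.0 * (binom_ll(k, n, p_hat) - binom_ll(k, n, p0))
```

Under the null, the statistic is asymptotically chi-square distributed with one degree of freedom, which is how a p-value would be attached; the paper instead calibrates its statistic with an efficient permutation test.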
Saavedra, Serguei; Rohr, Rudolf P; Fortuna, Miguel A; Selva, Nuria; Bascompte, Jordi
2016-04-01
Many of the observed species interactions embedded in ecological communities are not permanent, but are characterized by temporal changes that are observed along with abiotic and biotic variations. While work has been done describing and quantifying these changes, little is known about their consequences for species coexistence. Here, we investigate the extent to which changes of species composition impact the likelihood of persistence of the predator-prey community in the highly seasonal Białowieza Primeval Forest (northeast Poland), and the extent to which seasonal changes of species interactions (predator diet) modulate the expected impact. This likelihood is estimated by extending recent developments on the study of structural stability in ecological communities. We find that the observed species turnover causes the likelihood of community persistence to vary strongly between summer and winter. Importantly, we demonstrate that the observed seasonal interaction changes minimize the variation in the likelihood of persistence associated with species turnover across the year. We find that these community dynamics can be explained as the coupling of individual species to their environment, minimizing both the variation in persistence conditions and the interaction changes between seasons. Our results provide a homeostatic explanation for seasonal species interactions and suggest that monitoring the association of interaction changes with the level of variation in community dynamics can provide a good indicator of the response of species to environmental pressures. PMID:27220203
Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs
Nix, D.A.; Hogden, J.E.
1998-12-01
The authors describe Maximum-Likelihood Continuity Mapping (MALCOM) as an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automata architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a far more realistic model of the speech production process. The authors support this claim by generating continuity maps for three speakers and using the resulting MALCOM paths to predict measured speech articulator data. The correlations between the MALCOM paths (obtained from only the speech acoustics) and the actual articulator movements average 0.77 on an independent test set used to train neither MALCOM nor the predictor. On average, this unsupervised model achieves 92% of the performance obtained using the corresponding supervised method.
Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood
Bondell, Howard D.; Stefanski, Leonard A.
2013-01-01
Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805
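For contrast with the estimator above, a classic robust baseline is Huber M-estimation via iteratively reweighted least squares. The sketch below (simple linear regression, pure Python, MAD scale) is not the paper's two-stage generalized empirical likelihood estimator, just a standard point of comparison:

```python
def huber_irls(x, y, c=1.345, iters=50):
    """Huber M-estimator for simple linear regression via IRLS.
    Residuals beyond c * scale are downweighted proportionally to 1/|r|."""
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(iters):
        r = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
        # robust scale: median absolute residual / 0.6745 (guard against 0)
        s = sorted(abs(ri) for ri in r)[n // 2] / 0.6745 or 1.0
        w = [1.0 if abs(ri) <= c * s else c * s / abs(ri) for ri in r]
        sw = sum(w)
        sx = sum(wi * xi for wi, xi in zip(w, x))
        sy = sum(wi * yi for wi, yi in zip(w, y))
        sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        det = sw * sxx - sx * sx
        b0, b1 = (sxx * sy - sx * sxy) / det, (sw * sxy - sx * sy) / det
    return b0, b1

# y = 2x + 1 with one gross outlier at the last point
xs = list(range(10))
ys = [2.0 * xi + 1.0 for xi in xs]
ys[9] = 100.0
b0, b1 = huber_irls(xs, ys)
```

Despite the outlier, the fitted slope and intercept stay close to the true values (2 and 1), whereas ordinary least squares would be pulled strongly upward.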
Maximum likelihood: Extracting unbiased information from complex networks
NASA Astrophysics Data System (ADS)
Garlaschelli, Diego; Loffredo, Maria I.
2008-07-01
The choice of free parameters in network models is subjective, since it depends on what topological properties are being monitored. However, we show that the maximum likelihood (ML) principle indicates a unique, statistically rigorous parameter choice, associated with a well-defined topological feature. We then find that, if the ML condition is incompatible with the built-in parameter choice, network models turn out to be intrinsically ill defined or biased. To overcome this problem, we construct a class of safely unbiased models. We also propose an extension of these results that leads to the fascinating possibility to extract, only from topological data, the “hidden variables” underlying network organization, making them “no longer hidden.” We test our method on World Trade Web data, where we recover the empirical gross domestic product using only topological information.
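The ML principle the authors invoke is easiest to see in the simplest network model: for an Erdős-Rényi graph, the likelihood-maximizing connection probability is exactly the observed edge density. A grid check confirms the analytic MLE (toy counts, not the World Trade Web data):

```python
import math

def loglik(p, m, n_pairs):
    """Erdos-Renyi log-likelihood: m edges present out of n_pairs possible."""
    return m * math.log(p) + (n_pairs - m) * math.log(1.0 - p)

n, m = 20, 57                        # hypothetical graph: 20 nodes, 57 edges
n_pairs = n * (n - 1) // 2           # 190 possible undirected edges
p_hat = m / n_pairs                  # analytic ML estimate: the edge density
best = max((i / 1000 for i in range(1, 1000)),
           key=lambda p: loglik(p, m, n_pairs))   # brute-force grid check
```

The grid maximizer agrees with the closed-form density estimate, which is the "well-defined topological feature" the ML condition ties the parameter to in this trivial model.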
Maximum Likelihood Estimation of GEVD: Applications in Bioinformatics.
Thomas, Minta; Daemen, Anneleen; De Moor, Bart
2014-01-01
We propose a method, maximum likelihood estimation of generalized eigenvalue decomposition (MLGEVD), that employs a well-known technique relying on the generalization of singular value decomposition (SVD). The main aim of the work is to show the tight equivalence between MLGEVD and generalized ridge regression. This relationship reveals an important mathematical property of GEVD in which the second argument acts as prior information in the model. Thus we show that MLGEVD allows the incorporation of external knowledge about the quantities of interest into the estimation problem. We illustrate the importance of prior knowledge in clinical decision making/identifying differentially expressed genes with case studies for which microarray data sets with corresponding clinical/literature information are available. In all three case studies, MLGEVD outperformed GEVD on prediction in terms of test area under the ROC curve (test AUC). MLGEVD results in significantly improved diagnosis, prognosis and prediction of therapy response.
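The link between prior information and shrinkage is visible already in the one-covariate case of ridge regression, which the paper ties to MLGEVD. A hypothetical toy example:

```python
def ridge_slope(x, y, lam):
    """One-covariate ridge estimate: argmin_b sum (y - b*x)^2 + lam * b^2.
    lam encodes a zero-mean prior on the coefficient; lam = 0 recovers OLS."""
    return (sum(xi * yi for xi, yi in zip(x, y))
            / (sum(xi * xi for xi in x) + lam))

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
b_ols = ridge_slope(xs, ys, 0.0)       # no prior
b_ridge = ridge_slope(xs, ys, 5.0)     # prior pulls the estimate toward zero
```

The penalty term plays the role of the "second argument" carrying prior information: larger `lam` shrinks the estimate further toward the prior mean of zero.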
A penalized likelihood approach for mixture cure models.
Corbière, Fabien; Commenges, Daniel; Taylor, Jeremy M G; Joly, Pierre
2009-02-01
Cure models have been developed to analyze failure time data with a cured fraction. For such data, standard survival models are usually not appropriate because they do not account for the possibility of cure. Mixture cure models assume that the studied population is a mixture of susceptible individuals, who may experience the event of interest, and non-susceptible individuals that will never experience it. Important issues in mixture cure models are estimation of the baseline survival function for susceptibles and estimation of the variance of the regression parameters. The aim of this paper is to propose a penalized likelihood approach, which allows for flexible modeling of the hazard function for susceptible individuals using M-splines. This approach also permits direct computation of the variance of parameters using the inverse of the Hessian matrix. Properties and limitations of the proposed method are discussed and an illustration from a cancer study is presented.
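A minimal sketch of the mixture cure likelihood, with an exponential latency distribution and a grid search over the cure fraction (the paper's M-spline hazard and penalty terms are omitted; data and rate are hypothetical):

```python
import math

def cure_loglik(times, events, pi, lam):
    """Mixture cure log-likelihood with exponential latency S_u(t) = exp(-lam*t).
    pi is the cured (non-susceptible) fraction; events[i] = 1 for an observed
    failure, 0 for right censoring."""
    ll = 0.0
    for t, d in zip(times, events):
        if d:   # only susceptibles (probability 1 - pi) can fail
            ll += math.log((1.0 - pi) * lam) - lam * t
        else:   # censored: cured, or susceptible but still surviving at t
            ll += math.log(pi + (1.0 - pi) * math.exp(-lam * t))
    return ll

# early failures plus heavy late censoring suggest a sizable cured fraction
times = [0.2, 0.5, 1.0, 8.0, 9.0, 10.0, 10.0, 10.0]
events = [1, 1, 1, 0, 0, 0, 0, 0]
pi_hat = max((i / 100 for i in range(1, 100)),
             key=lambda pi: cure_loglik(times, events, pi, lam=1.0))
```

With the latency rate fixed at 1, the five long-term censored subjects out of eight drive the estimated cure fraction toward roughly 5/8.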
H.264 SVC Complexity Reduction Based on Likelihood Mode Decision
Balaji, L.; Thyagharajan, K. K.
2015-01-01
H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC runs on a range of electronic devices such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, on which users demand various scalings of the same content. These scalings include resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling consumes more encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well compared to previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% loss in PSNR and a 0.17% increase in bit rate compared with the full search method. PMID:26221623
NASA Technical Reports Server (NTRS)
Gottlieb, D.; Turkel, E.
1980-01-01
New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.
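A Fourier pseudo-spectral method with Runge-Kutta time stepping, in the spirit of the schemes discussed, can be sketched for the linear advection equation (an illustrative setup, not the shallow water application):

```python
import numpy as np

# Solve u_t + c*u_x = 0 on [0, 2*pi) with a Fourier spectral derivative
# in space and classical RK4 in time.
N, c, dt, steps = 64, 1.0, 1e-3, 1000
x = 2.0 * np.pi * np.arange(N) / N
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)      # spectral derivative operator i*k

def rhs(u):
    """Evaluate -c * du/dx via the FFT."""
    return -c * np.real(np.fft.ifft(ik * np.fft.fft(u)))

u = np.sin(x)
for _ in range(steps):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

exact = np.sin(x - c * dt * steps)          # the initial profile, translated
err = float(np.max(np.abs(u - exact)))
```

Because the spatial derivative is spectrally exact for this smooth solution, the error is dominated by the (already tiny) RK4 time-stepping error.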
A comparative study on the restrictions of dynamic test methods
NASA Astrophysics Data System (ADS)
Majzoobi, GH.; Lahmi, S.
2015-09-01
Dynamic behavior of materials is investigated using different devices, each of which has some restrictions. For instance, the stress-strain curve of a material at high strain rates can be captured only with the Hopkinson bar; however, with a new approach some of the other techniques can also be used to obtain the constants of material models such as the Johnson-Cook model. In this work, the restrictions of devices such as the drop hammer, Taylor test, flying wedge, shot impact test, dynamic tensile extrusion and Hopkinson bars, which are used to characterize material properties at high strain rates, are described. The attainable levels of strain and strain rate, and their restrictions, are central to assessing the efficiency of each device. For instance, necking or bulging in tensile and compressive Hopkinson bars, fragmentation in dynamic tensile extrusion, and petaling in the Taylor test limit the strain rate attainable in each device.
Method for making a dynamic pressure sensor and a pressure sensor made according to the method
NASA Technical Reports Server (NTRS)
Zuckerwar, Allan J. (Inventor); Robbins, William E. (Inventor); Robins, Glenn M. (Inventor)
1994-01-01
A method for providing a perfectly flat top with a sharp edge on a dynamic pressure sensor using a cup-shaped stretched membrane as a sensing element is described. First, metal is deposited on the membrane and surrounding areas. Next, the side wall of the pressure sensor with the deposited metal is machined to a predetermined size. Finally, deposited metal is removed from the top of the membrane in small steps, by machining or lapping while the pressure sensor is mounted in a jig or the wall of a test object, until the true top surface of the membrane appears. A thin indicator layer having a color contrasting with the color of the membrane may be applied to the top of the membrane before metal is deposited to facilitate the determination of when to stop metal removal from the top surface of the membrane.
The Repeated Replacement Method: A Pure Lagrangian Meshfree Method for Computational Fluid Dynamics
Walker, Wade A.
2012-01-01
In this paper we describe the repeated replacement method (RRM), a new meshfree method for computational fluid dynamics (CFD). RRM simulates fluid flow by modeling compressible fluids’ tendency to evolve towards a state of constant density, velocity, and pressure. To evolve a fluid flow simulation forward in time, RRM repeatedly “chops out” fluid from active areas and replaces it with new “flattened” fluid cells with the same mass, momentum, and energy. We call the new cells “flattened” because we give them constant density, velocity, and pressure, even though the chopped-out fluid may have had gradients in these primitive variables. RRM adaptively chooses the sizes and locations of the areas it chops out and replaces. It creates more and smaller new cells in areas of high gradient, and fewer and larger new cells in areas of lower gradient. This naturally leads to an adaptive level of accuracy, where more computational effort is spent on active areas of the fluid, and less effort is spent on inactive areas. We show that for common test problems, RRM produces results similar to other high-resolution CFD methods, while using a very different mathematical framework. RRM does not use Riemann solvers, flux or slope limiters, a mesh, or a stencil, and it operates in a purely Lagrangian mode. RRM also does not evaluate numerical derivatives, does not integrate equations of motion, and does not solve systems of equations. PMID:22866175
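The conservation constraint at the heart of RRM's "chop out and replace" step can be sketched for 1-D cells. The state representation below is hypothetical; the actual method also chooses the sizes and locations of replaced regions adaptively:

```python
def flatten_cells(cells):
    """Replace several 1-D fluid cells by one 'flattened' cell with the same
    total mass, momentum, and energy, i.e. constant density and velocity.
    Each cell is (length, density, velocity, total_energy)."""
    length = sum(c[0] for c in cells)
    mass = sum(c[0] * c[1] for c in cells)
    momentum = sum(c[0] * c[1] * c[2] for c in cells)
    energy = sum(c[3] for c in cells)
    return (length, mass / length, momentum / mass, energy)

# two cells with a density/velocity gradient, merged into one flat cell
merged = flatten_cells([(1.0, 1.0, 0.0, 2.5), (1.0, 0.5, 2.0, 3.0)])
```

The merged cell discards the gradients but conserves mass, momentum, and energy exactly, which is what lets the scheme evolve the flow without integrating equations of motion.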
Parallel replica method for dynamics of infrequent events
Voter, A.F.
1998-06-01
Although molecular-dynamics simulations can be parallelized effectively to treat large systems (10^6-10^8 atoms), to date the power of parallel computers has not been harnessed to make analogous gains in time scale. I present a simple approach for infrequent-event systems that extends the time scale with high parallel efficiency. Integrating a replica of the system independently on each processor until the first transition occurs gives the correct transition-time distribution, and hence the correct dynamics. I obtain >90% efficiency simulating Cu(100) surface vacancy diffusion on 15 processors. © 1998 The American Physical Society
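The statistical identity behind the method: the minimum of M independent exponential waiting times with rate k is exponential with rate M*k, so scaling the first replica's wall-clock transition time by M preserves the mean (and distribution) of the serial first-transition time. A seeded Monte Carlo check of the mean:

```python
import random

random.seed(42)

def first_transition_parallel(rate, m):
    """Run m independent replicas; wall-clock time to the first transition is
    the minimum waiting time, and the accumulated simulated time across all
    replicas is m times that minimum."""
    waits = [random.expovariate(rate) for _ in range(m)]
    return m * min(waits)

rate, m, trials = 0.5, 8, 20000
mean_parallel = sum(first_transition_parallel(rate, m)
                    for _ in range(trials)) / trials
# For exponential waits, m * min-of-m has the same mean (1/rate) as a single
# serial replica, which is why the parallel dynamics remain correct.
```

With 20,000 trials the sample mean sits very close to the serial expectation 1/rate = 2.0.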
Method for discovering relationships in data by dynamic quantum clustering
Weinstein, Marvin; Horn, David
2014-10-28
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
Maximum likelihood estimation applied to multiepoch MEG/EEG analysis
NASA Astrophysics Data System (ADS)
Baryshnikov, Boris V.
A maximum likelihood based algorithm for reducing the effects of spatially colored noise in evoked response MEG and EEG experiments is presented. The signal of interest is modeled as the low rank mean, while the noise is modeled as a Kronecker product of spatial and temporal covariance matrices. The temporal covariance is assumed known, while the spatial covariance is estimated as part of the algorithm. In contrast to prestimulus based whitening followed by principal component analysis, our algorithm does not require signal-free data for noise whitening and thus is more effective with non-stationary noise and produces better quality whitening for a given data record length. The efficacy of this approach is demonstrated using simulated and real MEG data. Next, a study in which we characterize MEG cortical response to coherent vs. incoherent motion is presented. It was found that coherent motion of the object induces not only an early sensory response around 180 ms relative to the stimulus onset but also a late field in the 250--500 ms range that has not been observed previously in similar random dot kinematogram experiments. The late field could not be resolved without signal processing using the maximum likelihood algorithm. The late activity localized to parietal areas. This is what would be expected. We believe that the late field corresponds to higher order processing related to the recognition of the moving object against the background. Finally, a maximum likelihood based dipole fitting algorithm is presented. It is suitable for dipole fitting of evoked response MEG data in the presence of spatially colored noise. The method exploits the temporal multiepoch structure of the evoked response data to estimate the spatial noise covariance matrix from the section of data being fit, eliminating the stationarity assumption implicit in prestimulus based whitening approaches. The preliminary results of the application of this algorithm to the simulated data show its
Thermal dynamics of thermoelectric phenomena from frequency resolved methods
NASA Astrophysics Data System (ADS)
García-Cañadas, J.; Min, G.
2016-03-01
Understanding the dynamics of thermoelectric (TE) phenomena is important for the detailed knowledge of the operation of TE materials and devices. By analyzing the impedance response of both a single TE element and a TE device under suspended conditions, we provide new insights into the thermal dynamics of these systems. The analysis is performed employing parameters such as the thermal penetration depth, the characteristic thermal diffusion frequency and the thermal diffusion time. It is shown that in both systems the dynamics of the thermoelectric response is governed by how the Peltier heat production/absorption at the junctions evolves. In a single thermoelement, at high frequencies the thermal waves diffuse semi-infinitely from the junctions towards the half-length. When the frequency is reduced, the thermal waves can penetrate further and eventually reach the half-length where they start to cancel each other and further penetration is blocked. In the case of a TE module, semi-infinite thermal diffusion along the thickness of the ceramic layers occurs at the highest frequencies. As the frequency is decreased, heat storage in the ceramics becomes dominant and starts to compete with the diffusion of the thermal waves towards the half-length of the thermoelements. Finally, the cancellation of the waves occurs at the lowest frequencies. It is demonstrated that the analysis is able to identify and separate the different physical processes and to provide a detailed understanding of the dynamics of different thermoelectric effects.
Technologies and Truth Games: Research as a Dynamic Method
ERIC Educational Resources Information Center
Hassett, Dawnene D.
2010-01-01
This article offers a way of thinking about literacy instruction that critiques current reasoning, but also provides a space to dynamically think outside of prevalent practices. It presents a framework for both planning and studying literacy pedagogy that combines a practical everyday model of the reading process with Michel Foucault's (1988c)…
Spatio-temporal point processes, partial likelihood, foot and mouth disease.
Diggle, Peter J
2006-08-01
Spatio-temporal point process data arise in many fields of application. An intuitively natural way to specify a model for a spatio-temporal point process is through its conditional intensity at location x and time t, given the history of the process up to time t. Often, this results in an analytically intractable likelihood. Likelihood-based inference then relies on Monte Carlo methods which are computationally intensive and require careful tuning to each application. A partial likelihood alternative is proposed, which is computationally straightforward and can be applied routinely. The method is applied to data from the 2001 foot and mouth epidemic in the UK, using a previously published model for the spatio-temporal spread of the disease.
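The partial likelihood construction can be sketched with a toy susceptibility model: at each event time, the intensity of the unit that failed is compared with the summed intensity over the current risk set. The kernel and data below are hypothetical, not the foot-and-mouth model:

```python
import math

def intensity(point, t, history, phi):
    """Toy conditional intensity: a baseline plus distance-decaying pressure
    from previously failed locations (hypothetical exponential kernel)."""
    return 0.1 + sum(math.exp(-math.dist(point, p) / phi)
                     for p, s in history if s < t)

def neg_log_partial_lik(events, locations, phi):
    """events: (location_index, time) pairs in time order. Each event
    contributes -log of its intensity relative to the summed intensity over
    the locations still at risk, so the intractable normalizer cancels."""
    history, failed, nll = [], set(), 0.0
    for idx, t in events:
        at_risk = [i for i in range(len(locations)) if i not in failed]
        lam = {i: intensity(locations[i], t, history, phi) for i in at_risk}
        nll -= math.log(lam[idx] / sum(lam.values()))
        failed.add(idx)
        history.append((locations[idx], t))
    return nll

locations = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (0.2, 0.1)]   # one cluster + outlier
events = [(0, 1.0), (1, 2.0), (3, 3.0)]                         # cluster fails in order
nll_near = neg_log_partial_lik(events, locations, phi=0.5)
nll_far = neg_log_partial_lik(events, locations, phi=5.0)
```

Because the failures spread within the spatial cluster, the short-range kernel (`phi=0.5`) attains the smaller negative log partial likelihood, illustrating how the spatial decay parameter can be estimated by routine optimization.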
Augmented composite likelihood for copula modeling in family studies under biased sampling.
Zhong, Yujie; Cook, Richard J
2016-07-01
The heritability of chronic diseases can be effectively studied by examining the nature and extent of within-family associations in disease onset times. Families are typically accrued through a biased sampling scheme in which affected individuals are identified and sampled along with their relatives, who may provide right-censored or current status data on their disease onset times. We develop likelihood and composite likelihood methods for modeling the within-family association in these times through copula models in which dependencies are characterized by Kendall's τ. Auxiliary data from independent individuals are exploited by augmenting composite likelihoods to increase precision of marginal parameter estimates and consequently increase efficiency in dependence parameter estimation. An application to a motivating family study in psoriatic arthritis illustrates the method and provides some evidence of excessive paternal transmission of risk. PMID:26819481
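Parameterizing dependence through Kendall's τ is concrete in, for example, the Clayton copula, where τ = θ/(θ+2). A small sketch computing the sample τ and inverting that relation (illustrative data, not a family study):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Sample Kendall's tau: (concordant - discordant) / total pairs."""
    conc = disc = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / (len(x) * (len(x) - 1) // 2)

def clayton_theta(tau):
    """Invert tau = theta / (theta + 2) for the Clayton copula."""
    return 2.0 * tau / (1.0 - tau)

x = [1, 2, 3, 4, 5, 6]
y = [2, 1, 4, 3, 6, 5]          # mostly concordant with x
tau = kendall_tau(x, y)          # 12 concordant, 3 discordant pairs -> 0.6
theta = clayton_theta(tau)
```

This one-to-one mapping is what lets a fitted copula dependence parameter be reported on the more interpretable τ scale.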
Zeilinger, Adam R; Olson, Dawn M; Andow, David A
2014-08-01
Consumer feeding preference among resource choices has critical implications for basic ecological and evolutionary processes, and can be highly relevant to applied problems such as ecological risk assessment and invasion biology. Within consumer choice experiments, also known as feeding preference or cafeteria experiments, measures of relative consumption and measures of consumer movement can provide distinct and complementary insights into the strength, causes, and consequences of preference. Despite the distinct value of inferring preference from measures of consumer movement, rigorous and biologically relevant analytical methods are lacking. We describe a simple, likelihood-based, biostatistical model for analyzing the transient dynamics of consumer movement in a paired-choice experiment. With experimental data consisting of repeated discrete measures of consumer location, the model can be used to estimate constant consumer attraction and leaving rates for two food choices, and differences in choice-specific attraction and leaving rates can be tested using model selection. The model enables calculation of transient and equilibrial probabilities of consumer-resource association, which could be incorporated into larger scale movement models. We explore the effect of experimental design on parameter estimation through stochastic simulation and describe methods to check that data meet model assumptions. Using a dataset of modest sample size, we illustrate the use of the model to draw inferences on consumer preference as well as underlying behavioral mechanisms. Finally, we include a user's guide and computer code scripts in R to facilitate use of the model by other researchers.
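The attraction/leaving-rate idea can be sketched as a discrete-time Markov chain over consumer locations, whose stationary distribution gives the equilibrium probabilities of association with each choice. The per-step probabilities are hypothetical, and this discrete-time chain stands in for the authors' formulation:

```python
def stationary(P, iters=2000):
    """Stationary distribution of a discrete-time Markov chain by power iteration."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

# states: 0 = on neither choice, 1 = on choice A, 2 = on choice B
a1, a2 = 0.3, 0.1     # per-step attraction probabilities (hypothetical)
l1, l2 = 0.05, 0.05   # per-step leaving probabilities (hypothetical)
P = [[1 - a1 - a2, a1, a2],
     [l1, 1 - l1, 0.0],
     [l2, 0.0, 1 - l2]]
pi = stationary(P)    # equilibrium probabilities of association
```

With equal leaving rates, the threefold higher attraction rate for choice A translates directly into a threefold higher equilibrium probability of being found on A than on B, matching the intuition that preference can be read off movement parameters.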
Moreno-Betancur, Margarita; Rey, Grégoire; Latouche, Aurélien
2015-06-01
Competing risks arise in the analysis of failure times when there is a distinction between different causes of failure. In many studies, it is difficult to obtain complete cause of failure information for all individuals. Thus, several authors have proposed strategies for semi-parametric modeling of competing risks when some causes of failure are missing under the missing at random (MAR) assumption. As many authors have stressed, while semi-parametric models are convenient, fully-parametric regression modeling of the cause-specific hazards (CSH) and cumulative incidence functions (CIF) may be of interest for prediction and is likely to contribute towards a fuller understanding of the time-dynamics of the competing risks mechanism. We propose a so-called "direct likelihood" approach for fitting fully-parametric regression models for these two functionals under MAR. The MAR assumption not being verifiable from the observed data, we propose an approach for performing sensitivity analyses to assess the robustness of inferences to departures from this assumption. The method relies on so-called "pattern-mixture models" from the missing data literature and was evaluated in a simulation study. This sensitivity analysis approach is applicable to various competing risks regression models (fully-parametric or semi-parametric, for the CSH or the CIF). We illustrate the proposed methods with the analysis of a breast cancer clinical trial, including suggestions for ad hoc graphical goodness-of-fit assessments under MAR.
EOTAS dynamic scheduling method based on wearable man-machine synergy
NASA Astrophysics Data System (ADS)
Liu, Zhijun; Wang, Dongmei; Yang, Yukun; Zhao, Jie
2011-12-01
By analyzing the inherent nature of dynamic scheduling requirements, a foundation for EOTAS dynamic scheduling is established on the basis of wearable computing and natural human-computer interaction, and a new concept of wearable man-machine cooperation is constructed for this purpose. Centered on its concrete implementation and application, an EOTAS dynamic scheduling method based on colored extended fuzzy Petri nets is proposed as a preliminary solution to the fast scheduling problem in EOTAS field applications within the operational environment.
The Dud-Alternative Effect in Likelihood Judgment
ERIC Educational Resources Information Center
Windschitl, Paul D.; Chambers, John R.
2004-01-01
The judged likelihood of a focal outcome should generally decrease as the list of alternative possibilities increases. For example, the likelihood that a runner will win a race goes down when 2 new entries are added to the field. However, 6 experiments demonstrate that the presence of implausible alternatives (duds) often increases the judged…
NASA Astrophysics Data System (ADS)
Zhou, Zhen; Chen, YongLi; Feng, LiShuang
2016-10-01
The characterization results of a typical electrically controlled metamaterial terahertz (THz) modulator obtained by the dynamic measurement method are presented and are in good agreement with the theoretical results predicted by a first-order model. The dynamic measurement method can characterize the modulation depth and modulation speed simultaneously. The reliability of the method is verified by terahertz time-domain spectroscopy (THz-TDS) and the current response method, which shows that the more intuitive and comprehensive dynamic measurement method can be used to characterize the electrically controlled metamaterial THz modulator accurately.
The Likelihood of Experiencing Relative Poverty over the Life Course.
Rank, Mark R; Hirschl, Thomas A
2015-01-01
Research on poverty in the United States has largely consisted of examining cross-sectional levels of absolute poverty. In this analysis, we focus on understanding relative poverty within a life course context. Specifically, we analyze the likelihood of individuals falling below the 20th percentile and the 10th percentile of the income distribution between the ages of 25 and 60. A series of life tables are constructed using the nationally representative Panel Study of Income Dynamics data set, which includes panel data from 1968 through 2011. Results indicate that the prevalence of relative poverty is quite high: between the ages of 25 and 60, 61.8 percent of the population will experience a year below the 20th percentile, and 42.1 percent will experience a year below the 10th percentile. Characteristics associated with experiencing these levels of poverty include being younger, nonwhite, female, not married, having 12 years or less of education, or having a work disability.
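The life-table logic behind such cumulative prevalence figures: the probability of ever experiencing the state over an age span is one minus the product of the single-interval survival probabilities. A sketch with made-up hazards, not the PSID estimates:

```python
def cumulative_risk(age_specific_hazards):
    """Probability of ever experiencing the event across the age span:
    1 minus the product of surviving each single-age interval."""
    surv = 1.0
    for h in age_specific_hazards:
        surv *= (1.0 - h)
    return 1.0 - surv

# hypothetical annual probabilities of first falling below the percentile
# cutoff, for the 36 years from age 25 to age 60 (illustration only)
hazards = [0.03] * 10 + [0.02] * 26
risk = cumulative_risk(hazards)
```

Even modest annual hazards of 2-3 percent compound over 36 years into a cumulative risk above one half, which is why life-course prevalence can far exceed any cross-sectional rate.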
ON THE LIKELIHOOD OF PLANET FORMATION IN CLOSE BINARIES
Jang-Condell, Hannah
2015-02-01
To date, several exoplanets have been discovered orbiting stars with close binary companions (a ≲ 30 AU). The fact that planets can form in these dynamically challenging environments implies that planet formation must be a robust process. The initial protoplanetary disks in these systems from which planets must form should be tidally truncated to radii of a few AU, which indicates that the efficiency of planet formation must be high. Here, we examine the truncation of circumstellar protoplanetary disks in close binary systems, studying how the likelihood of planet formation is affected over a range of disk parameters. If the semimajor axis of the binary is too small or its eccentricity is too high, the disk will have too little mass for planet formation to occur. However, we find that the stars in the binary systems known to have planets should have once hosted circumstellar disks that were capable of supporting planet formation despite their truncation. We present a way to characterize the feasibility of planet formation based on binary orbital parameters such as stellar mass, companion mass, eccentricity, and semimajor axis. Using this measure, we can quantify the robustness of planet formation in close binaries and better understand the overall efficiency of planet formation in general.
Phase portrait methods for verifying fluid dynamic simulations
Stewart, H.B.
1989-01-01
As computing resources become more powerful and accessible, engineers more frequently face the difficult and challenging problem of accurately simulating nonlinear dynamic phenomena. Although mathematical models are usually available, in the form of initial value problems for differential equations, the behavior of the solutions of nonlinear models is often poorly understood. A notable example is fluid dynamics: while the Navier-Stokes equations are believed to correctly describe turbulent flow, no exact mathematical solution of these equations in the turbulent regime is known. Differential equations can of course be solved numerically, but how are we to assess numerical solutions of complex phenomena without some understanding of the mathematical problem and its solutions to guide us?
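One concrete use of phase portraits for verification: integrate a system with a known attractor and confirm that the computed trajectory behaves as the qualitative theory predicts. A damped oscillator traced in the (x, v) plane with RK4, for illustration:

```python
def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step for state as a list."""
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def damped_oscillator(state, gamma=0.5):
    x, v = state
    return [v, -x - gamma * v]

# the phase-plane trajectory should spiral into the stable focus at the origin
state = [1.0, 0.0]
trajectory = [tuple(state)]
for _ in range(4000):
    state = rk4_step(damped_oscillator, state, 0.01)
    trajectory.append(tuple(state))
```

If the numerical trajectory failed to decay toward the origin, or spiraled outward, the phase portrait would immediately flag the simulation as inconsistent with the known dynamics of the model.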
A method of measuring dynamic strain under electromagnetic forming conditions
NASA Astrophysics Data System (ADS)
Chen, Jinling; Xi, Xuekui; Wang, Sijun; Lu, Jun; Guo, Chenglong; Wang, Wenquan; Liu, Enke; Wang, Wenhong; Liu, Lin; Wu, Guangheng
2016-04-01
Dynamic strain measurement is particularly important for the characterization of mechanical behavior in the electromagnetic forming process, but it has been hindered by high strain rates and serious electromagnetic interference for years. In this work, a simple and effective strain measuring technique for physical and mechanical behavior studies in the electromagnetic forming process has been developed. High-resolution (~5 ppm) strain curves of a bulging aluminum tube in a pulsed electromagnetic field have been successfully measured using this technique. The measured strain rate is about 10^5 s^-1, depending on the discharging conditions, nearly one order of magnitude higher than that under conventional split Hopkinson pressure bar loading conditions (~10^4 s^-1). It has been found that the dynamic fracture toughness of an aluminum alloy is significantly enhanced during electromagnetic forming, which explains why the formability is much larger under electromagnetic forming conditions in comparison with conventional forming processes.
Dynamic analysis method of offshore jack-up platforms in regular and random waves
NASA Astrophysics Data System (ADS)
Yu, Hao; Li, Xiaoyu; Yang, Shuguang
2012-03-01
A jack-up platform, with its particular structure, shows pronounced dynamic behavior under the complex environmental loads of extreme conditions. In this paper, taking a simplified 3-D finite element dynamic model under extreme storm conditions as the research object, a transient dynamic analysis method was proposed for both regular and irregular wave loads. The steps of dynamic analysis under extreme conditions were illustrated with an applied case, and the dynamic amplification factor (DAF) was calculated for each response parameter: base shear, overturning moment, and hull sway. Finally, the dynamic and static structural response results were compared and analyzed. The results indicated that static strength analysis of jack-up platforms is not sufficient under dynamic loads, including wave and current loads; further dynamic response analysis considering both computational efficiency and accuracy is necessary.
Method and system for dynamic probabilistic risk assessment
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)
2013-01-01
The DEFT methodology, system, and computer-readable medium extend the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, and supports all common PRA analysis functions, including cutsets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
Drawing Dynamical and Parameters Planes of Iterative Families and Methods
Chicharro, Francisco I.; Cordero, Alicia; Torregrosa, Juan R.
2013-01-01
The complex dynamical analysis of the parametric fourth-order Kim iterative family is performed on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the choice of the parameter yields excellent (or dreadful) schemes. PMID:24376386
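Drawing a dynamical plane, as described in the abstract above, amounts to iterating a scheme from a grid of complex seeds and coloring each seed by the root its orbit reaches. A minimal sketch of that procedure, using Newton's method on p(z) = z^2 - 1 as a stand-in iteration (the actual Kim family map is not reproduced here):

```python
import numpy as np

def dynamical_plane(step, roots, xlim=(-2.0, 2.0), ylim=(-2.0, 2.0),
                    n=200, iters=40, tol=1e-6):
    """Iterate `step` over a grid of complex seeds; label each point by the
    root its orbit converges to (0 = no convergence within `iters`)."""
    x = np.linspace(xlim[0], xlim[1], n)
    y = np.linspace(ylim[0], ylim[1], n)
    Z = x[None, :] + 1j * y[:, None]
    with np.errstate(all="ignore"):          # poles of the map produce inf/nan
        for _ in range(iters):
            Z = step(Z)
    plane = np.zeros(Z.shape, dtype=int)
    for k, r in enumerate(roots, start=1):
        plane[np.abs(Z - r) < tol] = k
    return plane

# Newton's method on p(z) = z^2 - 1 as a stand-in iteration.
newton = lambda z: z - (z**2 - 1) / (2 * z)
plane = dynamical_plane(newton, roots=[1.0, -1.0])
```

The integer array `plane` can then be handed to an image routine (e.g. matplotlib's `imshow`) to render the fractal basins of attraction.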
Computational Fluid Dynamics. [numerical methods and algorithm development
NASA Technical Reports Server (NTRS)
1992-01-01
This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are presented, along with examples of results obtained with the most recent algorithm developments.
Dynamic State Estimation Utilizing High Performance Computing Methods
Schneider, Kevin P.; Huang, Zhenyu; Yang, Bo; Hauer, Matthew L.; Nieplocha, Jaroslaw
2009-03-18
The state estimation tools that are currently deployed in power system control rooms are based on a quasi-steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available, and their accuracy is compromised. This paper presents an overview of the Kalman filtering process and then focuses on the implementation of the prediction component on multiple processors.
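The prediction component discussed above is the step of a Kalman filter that propagates the state estimate and its covariance through a dynamic model between measurements. A generic linear predict/update cycle can be sketched as follows (the matrices below are hypothetical toy values, not the power-system model of the paper):

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Kalman prediction: propagate state estimate and covariance one step."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Kalman update: fold measurement z into the predicted state."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P

# Toy 2-state constant-velocity system (hypothetical numbers).
F = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.05]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, np.array([0.3]), H, R)
```

In a parallel implementation, the matrix products of `kf_predict` are the natural candidates for distribution across processors.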
Electronically Nonadiabatic Dynamics via Semiclassical Initial Value Methods
Miller, William H.
2008-12-11
In the late 1970's Meyer and Miller (MM) [J. Chem. Phys. 70, 3214 (1979)] presented a classical Hamiltonian corresponding to a finite set of electronic states of a molecular system (i.e., the various potential energy surfaces and their couplings), so that classical trajectory simulations could be carried out treating the nuclear and electronic degrees of freedom (DOF) in an equivalent dynamical framework (i.e., by classical mechanics), thereby describing non-adiabatic dynamics in a more unified manner. Much later Stock and Thoss (ST) [Phys. Rev. Lett. 78, 578 (1997)] showed that the MM model is actually not a 'model', but rather a 'representation' of the nuclear-electronic system; i.e., were the MMST nuclear-electronic Hamiltonian taken as a Hamiltonian operator and used in the Schroedinger equation, the exact (quantum) nuclear-electronic dynamics would be obtained. In recent years various initial value representations (IVRs) of semiclassical (SC) theory have been used with the MMST Hamiltonian to describe electronically non-adiabatic processes. Of special interest is the fact that though the classical trajectories generated by the MMST Hamiltonian (and which are the 'input' for an SC-IVR treatment) are 'Ehrenfest trajectories', when they are used within the SC-IVR framework the nuclear motion emerges from regions of non-adiabaticity on one potential energy surface (PES) or another, and not on an average PES as in the traditional Ehrenfest model. Examples are presented to illustrate and (hopefully) illuminate this behavior.
Relativistic magnetohydrodynamics in dynamical spacetimes: Numerical methods and tests
Duez, Matthew D.; Liu, Yuk Tung; Shapiro, Stuart L.; Stephens, Branson C.
2005-07-15
Many problems at the forefront of theoretical astrophysics require the treatment of magnetized fluids in dynamical, strongly curved spacetimes. Such problems include the origin of gamma-ray bursts, magnetic braking of differential rotation in nascent neutron stars arising from stellar core collapse or binary neutron star merger, the formation of jets and magnetized disks around newborn black holes, etc. To model these phenomena, all of which involve both general relativity (GR) and magnetohydrodynamics (MHD), we have developed a GRMHD code capable of evolving MHD fluids in dynamical spacetimes. Our code solves the Einstein-Maxwell-MHD system of coupled equations in axisymmetry and in full 3+1 dimensions. We evolve the metric by integrating the Baumgarte-Shapiro-Shibata-Nakamura equations, and use a conservative, shock-capturing scheme to evolve the MHD equations. Our code gives accurate results in standard MHD code-test problems, including magnetized shocks and magnetized Bondi flow. To test our code's ability to evolve the MHD equations in a dynamical spacetime, we study the perturbations of a homogeneous, magnetized fluid excited by a gravitational plane wave, and we find good agreement between the analytic and numerical solutions.
Combining evidence using likelihood ratios in writer verification
NASA Astrophysics Data System (ADS)
Srihari, Sargur; Kovalenko, Dimitry; Tang, Yi; Ball, Gregory
2013-01-01
Forensic identification is the task of determining whether or not observed evidence arose from a known source. It involves determining a likelihood ratio (LR): the ratio of the joint probability of the evidence and source under the identification hypothesis (that the evidence came from the source) to that under the exclusion hypothesis (that the evidence did not arise from the source). In LR-based decision methods, particularly handwriting comparison, a variable number of pieces of input evidence is used. A decision based on many pieces of evidence can result in nearly the same LR as one based on few pieces of evidence. We consider methods for distinguishing between such situations. One is to provide confidence intervals together with the decisions; another is to combine the inputs using weights. We propose a new method that generalizes the Bayesian approach and uses an explicitly defined discount function. Empirical evaluation with several data sets, including synthetically generated ones and handwriting comparisons, shows the greater flexibility of the proposed method.
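The core computation above, an LR from score distributions under the identification and exclusion hypotheses, can be sketched with simple Gaussian score models (the distributions and numbers are hypothetical; the paper's discount-function generalization is not reproduced):

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(scores, same_mu, same_sigma, diff_mu, diff_sigma):
    """Naive-Bayes combination: product of per-score LRs (independence assumed)."""
    lr = 1.0
    for s in scores:
        lr *= gaussian_pdf(s, same_mu, same_sigma) / gaussian_pdf(s, diff_mu, diff_sigma)
    return lr

# Hypothetical same-writer vs different-writer score distributions.
lr_few = likelihood_ratio([0.8, 0.9], 0.9, 0.1, 0.3, 0.2)
lr_many = likelihood_ratio([0.8, 0.9] * 5, 0.9, 0.1, 0.3, 0.2)
```

Note that this naive product illustrates exactly the issue the paper raises: repeating the same evidence five times simply raises the LR to the fifth power, without expressing any change in confidence, which is what weighting or discounting schemes are meant to address.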
Calibration of two complex ecosystem models with different likelihood functions
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate change induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive for the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments, or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research the developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model
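The goodness-of-fit computation at the heart of such calibrations, a likelihood of measured data given simulated model output, can be as simple as an iid Gaussian error model. A sketch with made-up numbers (not the PaSim/Biome-BGC setup):

```python
import math

def gaussian_loglik(observed, simulated, sigma):
    """Log-likelihood of measurements given model output, assuming
    independent Gaussian errors with standard deviation sigma."""
    n = len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    return -0.5 * n * math.log(2 * math.pi * sigma ** 2) - sse / (2 * sigma ** 2)

# Toy flux-like series (hypothetical): the closer simulation scores higher.
obs = [1.0, 1.5, 0.8, 1.2]
sim_good = [1.1, 1.4, 0.9, 1.1]
sim_bad = [2.0, 0.2, 1.9, 0.1]
ll_good = gaussian_loglik(obs, sim_good, sigma=0.3)
ll_bad = gaussian_loglik(obs, sim_bad, sigma=0.3)
```

In a Bayesian calibration, candidate parameter sets are scored by such a likelihood (times a prior) and accepted or rejected accordingly; the choice of likelihood formulation is exactly what the study varies.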
Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin
2016-01-01
In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
Eliciting information from experts on the likelihood of rapid climate change.
Arnell, Nigel W; Tompkins, Emma L; Adger, W Neil
2005-12-01
The threat of so-called rapid or abrupt climate change has generated considerable public interest because of its potentially significant impacts. The collapse of the North Atlantic Thermohaline Circulation or the West Antarctic Ice Sheet, for example, would have potentially catastrophic effects on temperatures and sea level, respectively. But how likely are such extreme climatic changes? Is it possible actually to estimate likelihoods? This article reviews the societal demand for the likelihoods of rapid or abrupt climate change, and different methods for estimating likelihoods: past experience, model simulation, or through the elicitation of expert judgments. The article describes a survey to estimate the likelihoods of two characterizations of rapid climate change, and explores the issues associated with such surveys and the value of information produced. The surveys were based on key scientists chosen for their expertise in the climate science of abrupt climate change. Most survey respondents ascribed low likelihoods to rapid climate change, due either to the collapse of the Thermohaline Circulation or increased positive feedbacks. In each case one assessment was an order of magnitude higher than the others. We explore a high rate of refusal to participate in this expert survey: many scientists prefer to rely on output from future climate model simulations. PMID:16506972
Li, Xiantao; Yang, Jerry Z.; E, Weinan
2010-05-20
We present a multiscale model for numerical simulations of the dynamics of crystalline solids. The method combines the continuum nonlinear elastodynamics model, which models the stress waves and physical loading conditions, with the molecular dynamics model, which provides the nonlinear constitutive relation and resolves the atomic structures near local defects. The coupling of the two models is achieved based on a general framework for multiscale modeling, the heterogeneous multiscale method (HMM). We derive an explicit coupling condition at the atomistic/continuum interface. Application to the dynamics of brittle cracks under various loading conditions is presented as a test example.
A general methodology for maximum likelihood inference from band-recovery data
Conroy, M.J.; Williams, B.K.
1984-01-01
A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
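A numerical ML fit of the simplest one-cohort band-recovery setting can be sketched as below. The data, the crude grid search, and the parameterization P(recovered in year j) = f * s^(j-1) (survival s, recovery rate f) are illustrative simplifications, not the paper's FORTRAN implementation:

```python
import math

def neg_loglik(s, f, recoveries, banded):
    """Negative multinomial log-likelihood for a single banded cohort:
    P(recovered in year j) = f * s**(j-1); the rest are never recovered."""
    ll = 0.0
    p_never = 1.0
    for j, r in enumerate(recoveries, start=1):
        p = f * s ** (j - 1)
        p_never -= p
        ll += r * math.log(p)
    ll += (banded - sum(recoveries)) * math.log(p_never)
    return -ll

# Hypothetical data: 1000 banded birds, recoveries by year after banding.
recoveries = [80, 50, 30, 18]
banded = 1000
s_hat, f_hat = min(
    ((s / 100, f / 100) for s in range(30, 95) for f in range(2, 20)),
    key=lambda p: neg_loglik(p[0], p[1], recoveries, banded),
)
```

A real implementation would replace the grid search with a Newton-type optimizer and read interval estimates off the curvature (observed information) of the log-likelihood at the maximum.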
Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery
Luttman, A.
2012-03-30
The main long-term goal of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis based on observed spatiotemporal evolution is performed via an objective function based on expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis proceeding from observations and priors. A mathematical formulation for computing flow fields is set up, and the minimizer of the problem is computed. An application to oceanic flow based on sea surface temperature is presented.
Accelerating ab initio molecular dynamics simulations by linear prediction methods
NASA Astrophysics Data System (ADS)
Herr, Jonathan D.; Steele, Ryan P.
2016-09-01
Acceleration of ab initio molecular dynamics (AIMD) simulations can be reliably achieved by extrapolation of electronic data from previous timesteps. Existing techniques utilize polynomial least-squares regression to fit previous steps' Fock or density matrix elements. In this work, the recursive Burg 'linear prediction' technique is shown to be a viable alternative to polynomial regression, and the extrapolation-predicted Fock matrix elements were three orders of magnitude closer to converged elements. Accelerations of 1.8-3.4× were observed in test systems, and in all cases, linear prediction outperformed polynomial extrapolation. Importantly, these accelerations were achieved without reducing the MD integration timestep.
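Burg-style linear prediction, extrapolating the next value of a smooth series from a recursively fitted autoregressive model, can be sketched as follows. This is a generic Burg recursion applied to a toy sinusoid, not the Fock-matrix machinery of the paper:

```python
import numpy as np

def burg(x, order):
    """Burg's method: AR coefficients (a[0] = 1) minimizing the summed
    forward and backward prediction error, via the Levinson recursion."""
    f = np.asarray(x, float).copy()
    b = f.copy()
    a = np.array([1.0])
    for _ in range(order):
        ff, bb = f[1:], b[:-1]
        k = -2.0 * (ff @ bb) / (ff @ ff + bb @ bb)   # reflection coefficient
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        f, b = ff + k * bb, bb + k * ff
    return a

def predict_next(x, a):
    """One-step-ahead linear prediction from AR coefficients."""
    return -sum(a[i] * x[-i] for i in range(1, len(a)))

n = np.arange(64)
x = np.cos(0.3 * n)          # toy "history" of a smoothly varying quantity
a = burg(x, order=2)
pred = predict_next(x, a)    # extrapolated next value
```

In the AIMD setting, the same one-step prediction would be applied element-wise to the history of Fock or density matrix entries to seed the SCF cycle.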
Survey of decentralized control methods. [for large scale dynamic systems
NASA Technical Reports Server (NTRS)
Athans, M.
1975-01-01
An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.
Optimal control methods for controlling bacterial populations with persister dynamics
NASA Astrophysics Data System (ADS)
Cogan, N. G.
2016-06-01
Bacterial tolerance to antibiotics is a well-known phenomenon; however, only recent studies of bacterial biofilms have shown how multifaceted tolerance really is. By joining into a structured community and offering shared protection and gene transfer, bacterial populations can protect themselves genotypically, phenotypically and physically. In this study, we collect a line of research that focuses on phenotypic (or plastic) tolerance. The dynamics of persister formation are becoming better understood, even though major questions remain. The thrust of our results indicates that, even without a detailed description of the biological mechanisms, theoretical studies can offer strategies that can eradicate bacterial populations with existing drugs.
Domain decomposition methods with applications to fluid dynamics
NASA Astrophysics Data System (ADS)
Kuznetsov, Yu. A.
In this presentation, a brief review of domain decomposition methods with emphasis on the applications to solving elliptic problems arising from the Navier-Stokes equations via operator splitting methods is given. The singularly perturbed convection-diffusion equation is chosen as a model problem. We consider both overlapping (multiplicative and additive Schwarz) and nonoverlapping (Neumann-Dirichlet and Neumann-Neumann) domain decomposition methods. Some convergence results for particular cases are presented.
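The overlapping Schwarz idea reviewed above is easy to sketch in 1D: solve -u'' = f on two overlapping subintervals alternately, exchanging Dirichlet data at the artificial interfaces. A textbook alternating (multiplicative) Schwarz sketch on the Poisson model problem (not the convection-diffusion setting of the presentation):

```python
import numpy as np

def alternating_schwarz(f, n=49, overlap=8, sweeps=50):
    """Alternating Schwarz for -u'' = f on (0,1), u(0)=u(1)=0, using two
    overlapping subdomains and a central-difference discretization."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)      # interior grid points
    rhs = f(x)
    u = np.zeros(n)
    mid = n // 2
    for _ in range(sweeps):
        for d in (slice(0, mid + overlap), slice(mid - overlap, n)):
            m = d.stop - d.start
            A = (np.diag(np.full(m, 2.0))
                 - np.diag(np.ones(m - 1), 1)
                 - np.diag(np.ones(m - 1), -1)) / h**2
            b = rhs[d].copy()
            if d.start > 0:             # Dirichlet data from current iterate
                b[0] += u[d.start - 1] / h**2
            if d.stop < n:
                b[-1] += u[d.stop] / h**2
            u[d] = np.linalg.solve(A, b)
    return x, u

# Manufactured solution u = sin(pi x), so f = pi^2 sin(pi x).
x, u = alternating_schwarz(lambda x: np.pi**2 * np.sin(np.pi * x))
```

Updating `u` in place between the two subdomain solves makes this the multiplicative variant; solving both subdomains against the same old iterate would give the additive (Jacobi-like) variant mentioned in the review.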
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics
Anderson, R W; Pember, R B; Elliott, N S
2004-01-28
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.
Efficient Fully Implicit Time Integration Methods for Modeling Cardiac Dynamics
Rose, Donald J.; Henriquez, Craig S.
2013-01-01
Implicit methods are well known to have greater stability than explicit methods for stiff systems, but they often are not used in practice due to perceived computational complexity. This paper applies the Backward Euler method and a second-order one-step two-stage composite backward differentiation formula (C-BDF2) to the monodomain equations arising from mathematical modeling of the electrical activity of the heart. The C-BDF2 scheme is an L-stable implicit time integration method and is easily implementable, using the simplest Forward Euler and Backward Euler methods as fundamental building blocks. The nonlinear system resulting from application of the Backward Euler method to the monodomain equations is solved for the first time by a nonlinear elimination method, which eliminates local and non-symmetric components by using a Jacobian-free Newton solver (a Newton-Krylov solver). Unlike other fully implicit methods proposed for the monodomain equations in the literature, the Jacobian of the global system after the nonlinear elimination has much smaller size and is symmetric and possibly positive definite, so it can be solved efficiently by standard optimal solvers. Numerical results are presented demonstrating that the C-BDF2 scheme can yield accurate results with less CPU time than explicit methods for both a single patch and spatially extended domains. PMID:19126449
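The fundamental building block named above, Backward Euler with a Newton solve at each step, can be sketched for a scalar stiff relaxation ODE. This is a generic illustration of the scheme's L-stability; the monodomain equations and the C-BDF2 composite are beyond a short sketch:

```python
import math

def backward_euler(f, dfdy, y0, t0, t1, steps, newton_iters=8):
    """Backward Euler: solve y_{n+1} = y_n + h*f(t_{n+1}, y_{n+1})
    with a scalar Newton iteration at each step."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        t_next = t + h
        z = y                               # Newton initial guess
        for _ in range(newton_iters):
            g = z - y - h * f(t_next, z)
            dg = 1.0 - h * dfdy(t_next, z)
            z -= g / dg
        t, y = t_next, z
    return y

# Stiff relaxation problem: y' = -50 (y - cos t).  At h = 0.1 (|lambda| h = 5)
# Forward Euler would be unstable, while Backward Euler damps the fast mode.
f = lambda t, y: -50.0 * (y - math.cos(t))
dfdy = lambda t, y: -50.0
y_end = backward_euler(f, dfdy, y0=0.0, t0=0.0, t1=2.0, steps=20)
```

The solution relaxes quickly onto the slow manifold y ≈ cos t, which the implicit scheme tracks accurately even at a step size far beyond the explicit stability limit.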
Kubelka, Jan
2009-04-01
Many important biochemical processes occur on the time-scales of nanoseconds and microseconds. The introduction of the laser temperature-jump (T-jump) to biophysics more than a decade ago opened these previously inaccessible time regimes up to direct experimental observation. Since then, laser T-jump methodology has evolved into one of the most versatile and generally applicable methods for studying fast biomolecular kinetics. This perspective is a review of the principles and applications of the laser T-jump technique in biophysics. A brief overview of the T-jump relaxation kinetics and the historical development of laser T-jump methodology is presented. The physical principles and practical experimental considerations that are important for the design of the laser T-jump experiments are summarized. These include the Raman conversion for generating heating pulses, considerations of size, duration and uniformity of the temperature jump, as well as potential adverse effects due to photo-acoustic waves, cavitation and thermal lensing, and their elimination. The laser T-jump apparatus developed at the NIH Laboratory of Chemical Physics is described in detail along with a brief survey of other laser T-jump designs in use today. Finally, applications of the laser T-jump in biophysics are reviewed, with an emphasis on the broad range of problems where the laser T-jump methodology has provided important new results and insights into the dynamics of the biomolecular processes.
An improved dynamic method to measure kLa in bioreactors.
Damiani, Andrew L; Kim, Min Hea; Wang, Jin
2014-10-01
An accurate measurement or estimation of the volumetric mass transfer coefficient kLa is crucial for the design, operation, and scale-up of bioreactors. Among different physical and chemical methods, the classical dynamic method is the most widely applied for simultaneously estimating both kLa and the cells' oxygen utilization rate. Despite several important follow-up articles improving the original dynamic method, limitations exist that make the classical dynamic method less effective under certain conditions. For example, in the case of high cell density with moderate agitation, the dissolved oxygen concentration barely increases during the re-gassing step of the classical dynamic method, which makes kLa estimation impossible. To address these limitations, in this work we present an improved dynamic method that consists of both an improved model and an improved procedure. The improved model takes into account the mass transfer between the headspace and the broth; in addition, nitrogen is bubbled through the broth when the air is shut off. The improved method not only enables a faster and more accurate estimation of kLa, but also allows the measurement of kLa at high cell density with medium/low agitation, which is impossible with the classical dynamic method. Scheffersomyces stipitis was used as the model system to demonstrate the effectiveness of the improved method; in addition, experiments were conducted to examine the effect of cell density and agitation speed on kLa.
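The classical dynamic method that the abstract builds on estimates kLa from the dissolved-oxygen balance dC/dt = kLa*(C* - C) - OUR during re-gassing. A self-consistent toy sketch on simulated data (parameter values are made up, and the improved method's headspace transfer and nitrogen sparging are omitted):

```python
def simulate_regassing(kla, c_star, our, c0, dt, steps):
    """Forward-Euler simulation of dissolved oxygen during re-gassing:
    dC/dt = kLa*(C* - C) - OUR."""
    c, trace = c0, [c0]
    for _ in range(steps):
        c += dt * (kla * (c_star - c) - our)
        trace.append(c)
    return trace

def estimate_kla(trace, c_star, our, dt):
    """Least-squares slope of (dC/dt + OUR) against (C* - C); slope = kLa."""
    num = den = 0.0
    for i in range(len(trace) - 1):
        dcdt = (trace[i + 1] - trace[i]) / dt
        x = c_star - trace[i]
        num += x * (dcdt + our)
        den += x * x
    return num / den

# Made-up parameters: kLa = 0.05 s^-1, saturation C* = 8 mg/L, OUR = 0.2 mg/(L s).
trace = simulate_regassing(kla=0.05, c_star=8.0, our=0.2, c0=2.0, dt=0.1, steps=300)
kla_hat = estimate_kla(trace, c_star=8.0, our=0.2, dt=0.1)
```

The failure mode the paper describes shows up directly in this regression: when cell density is high, C approaches its steady value C* - OUR/kLa, the driving term (C* - C) barely changes, and the slope becomes ill-determined.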
Dynamical Monte Carlo methods for plasma-surface reactions
NASA Astrophysics Data System (ADS)
Guerra, Vasco; Marinov, Daniil
2016-08-01
Different dynamical Monte Carlo algorithms to investigate molecule formation on surfaces are developed, evaluated and compared with the deterministic approach based on reaction-rate equations. These include a null event algorithm, the n-fold way/BKL algorithm and a 'hybrid' variant of the latter. NO2 formation by NO oxidation on Pyrex and O recombination on silica with the formation of O2 are taken as case studies. The influence of the grid size on the CPU calculation time and the accuracy of the results is analysed. The role of Langmuir-Hinshelwood recombination involving two physisorbed atoms and the effect of back diffusion and its inclusion in a deterministic formulation are investigated and discussed. It is shown that dynamical Monte Carlo schemes are flexible, simple to implement, easily describe elementary processes that are not straightforward to include in deterministic simulations, can run very efficiently if appropriately chosen and give highly reliable results. Moreover, the present approach provides a relatively simple procedure to describe fully coupled surface and gas phase chemistries.
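The n-fold way/BKL idea, picking an event class with probability proportional to its total rate and advancing time by an exponential waiting time, can be sketched on a toy adsorption/desorption lattice (a generic kinetic Monte Carlo illustration, not the NO2/Pyrex chemistry of the paper):

```python
import random

def kmc_coverage(n_sites, k_ads, k_des, t_end, seed=1):
    """n-fold-way KMC with two event classes: adsorption on an empty site
    and desorption from a filled one; time advances by exponential waits."""
    rng = random.Random(seed)
    filled, t = 0, 0.0
    while True:
        r_ads = k_ads * (n_sites - filled)   # total adsorption rate
        r_des = k_des * filled               # total desorption rate
        r_tot = r_ads + r_des
        t += rng.expovariate(r_tot)
        if t > t_end:
            return filled / n_sites
        if rng.random() * r_tot < r_ads:     # pick class proportional to rate
            filled += 1
        else:
            filled -= 1

# Equal rates, so the equilibrium coverage is near k_ads/(k_ads + k_des) = 0.5.
theta = kmc_coverage(n_sites=500, k_ads=1.0, k_des=1.0, t_end=50.0)
```

Because every step executes an event (no rejected null events), this n-fold-way scheme stays efficient even when most individual event rates are small, which is the efficiency argument made in the abstract.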
One testing method of dynamic linearity of an accelerometer
NASA Astrophysics Data System (ADS)
Lei, Jing-Yu; Guo, Wei-Guo; Tan, Xue-Ming; Shi, Yun-Bo
2015-09-01
To effectively test the dynamic linearity of an accelerometer over a wide range of 10^4 g to about 20 × 10^4 g, a published patented technique is first experimentally verified and analysed, and its deficiency is identified; then, based on the theory of stress wave propagation in a thin long bar, the relation between the strain signal and the corresponding acceleration signal is obtained, and a special linkage of two coaxial projectiles is developed. These two coaxial metal cylinders (an inner cylinder and a circular tube) are used as projectiles; to prevent their mutual slip inside the gun barrel during movement, one end of the two projectiles is fastened by small screws. A Ti-6Al-4V bar with a diameter of 30 mm is used to propagate the loading stress pulse. The resultant compression wave is measured by the strain gauges on the bar, and a half-sine strain pulse is obtained. The accelerometer under test is attached to the other end of the bar by a vacuum clamp. In this clamp, the accelerometer bears only the compression wave; the reflected tension pulse releases the accelerometer from the bar. Using this system, the dynamic linearity of an accelerometer can easily be tested over a wider range of acceleration values, and actual measurement results are presented.
NASA Technical Reports Server (NTRS)
Schweikhard, W. G.; Chen, Y. S.
1986-01-01
The Melick method of inlet flow dynamic distortion prediction by statistical means is outlined. A hypothetical vortex model is used as the basis for the mathematical formulations. The main variables are identified by matching the theoretical total pressure rms ratio with the measured total pressure rms ratio. Data comparisons, using the HiMAT inlet test data set, indicate satisfactory prediction of the dynamic peak distortion for cases with boundary layer control device vortex generators. A method for dynamic probe selection was developed. The validity of the probe selection criteria is demonstrated by comparing the reduced-probe predictions with the 40-probe predictions. It is indicated that the number of dynamic probes can be reduced to as few as two while still retaining good accuracy.
Evaluation of Smoking Prevention Television Messages Based on the Elaboration Likelihood Model
ERIC Educational Resources Information Center
Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.
2011-01-01
Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from…
ERIC Educational Resources Information Center
Choi, Jaehwa; Kim, Sunhee; Chen, Jinsong; Dannels, Sharon
2011-01-01
The purpose of this study is to compare the maximum likelihood (ML) and Bayesian estimation methods for polychoric correlation (PCC) under diverse conditions using a Monte Carlo simulation. Two new Bayesian estimates, maximum a posteriori (MAP) and expected a posteriori (EAP), are compared to ML, the classic solution, to estimate PCC. Different…
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2010-01-01
In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
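The nonuniqueness problem and the MAP remedy can be illustrated with a toy Rasch model (a minimal sketch with hypothetical item difficulties, not the authors' procedure): for an all-correct response pattern the ML estimate of proficiency diverges, while a standard-normal prior keeps the MAP estimate finite.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neg_log_post(theta, responses, difficulties, prior_sd=None):
    """Negative Rasch log-likelihood; adds a N(0, prior_sd^2) log-prior
    term when prior_sd is given (MAP), otherwise pure ML."""
    nll = 0.0
    for u, b in zip(responses, difficulties):
        p = sigmoid(theta - b)
        nll -= math.log(p) if u == 1 else math.log(1.0 - p)
    if prior_sd is not None:
        nll += 0.5 * (theta / prior_sd) ** 2
    return nll

def estimate(responses, difficulties, prior_sd=None, lo=-8.0, hi=8.0):
    """Crude 1-D grid minimiser; adequate for this sketch."""
    grid = [lo + i * (hi - lo) / 4000 for i in range(4001)]
    return min(grid, key=lambda t: neg_log_post(t, responses, difficulties, prior_sd))

difficulties = [-1.0, 0.0, 1.0]     # hypothetical item difficulties
all_correct = [1, 1, 1]
theta_ml = estimate(all_correct, difficulties)                  # drifts to the grid edge
theta_map = estimate(all_correct, difficulties, prior_sd=1.0)   # stays finite
print(theta_ml, theta_map)
```

With every item answered correctly, the likelihood increases monotonically in proficiency, so the ML estimate runs off to the search boundary, whereas the MAP estimate settles near 1.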
Assessing compatibility of direct detection data: halo-independent global likelihood analyses
NASA Astrophysics Data System (ADS)
Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.
2016-10-01
We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a "constrained parameter goodness-of-fit" test statistic, whose p-value we then use to define a "plausibility region" (e.g. where p >= 10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p < 10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.
Increasing Power of Groupwise Association Test with Likelihood Ratio Test
Sul, Jae Hoon; Han, Buhm
2011-01-01
Sequencing studies have been discovering a large number of rare variants, allowing the identification of the effects of rare variants on disease susceptibility. As a method to increase the statistical power of studies on rare variants, several groupwise association tests that group rare variants in genes and detect associations between genes and diseases have been proposed. One major challenge in these methods is to determine which variants are causal in a group; to overcome this challenge, previous methods used prior information that specifies how likely each variant is to be causal. Another source of information that can be used to determine causal variants is the observed data, because case individuals are likely to carry more causal variants than control individuals. In this article, we introduce a likelihood ratio test (LRT) that uses both data and prior information to infer which variants are causal and uses this finding to determine whether a group of variants is involved in a disease. We demonstrate through simulations that the LRT achieves higher power than previous methods. We also evaluate our method on mutation screening data of the susceptibility gene for ataxia telangiectasia, and show that the LRT can detect an association in real data. To increase the computational speed of our method, we show how the computation of the LRT can be decomposed, and propose an efficient permutation test. With this optimization, we can efficiently compute an LRT statistic and its significance at a genome-wide level. The software for our method is publicly available at http://genetics.cs.ucla.edu/rarevariants. PMID:21919745
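The core of such a test can be sketched generically (a simplified two-proportion LRT on hypothetical carrier counts, not the authors' variant-level statistic): maximise the likelihood under the null and the alternative, then refer twice the log-likelihood difference to a chi-square(1) null.

```python
import math

def binom_loglik(k, n, p):
    """Binomial log-likelihood of k carriers among n (constant term dropped)."""
    p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard against log(0)
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def lrt_two_proportions(k_case, n_case, k_ctrl, n_ctrl):
    """LRT for H0: equal carrier rate in cases and controls vs H1: different.
    Returns (statistic, p-value) using the 1-df chi-square asymptotic null."""
    p_pool = (k_case + k_ctrl) / (n_case + n_ctrl)
    ll_null = binom_loglik(k_case, n_case, p_pool) + binom_loglik(k_ctrl, n_ctrl, p_pool)
    ll_alt = (binom_loglik(k_case, n_case, k_case / n_case)
              + binom_loglik(k_ctrl, n_ctrl, k_ctrl / n_ctrl))
    stat = 2.0 * (ll_alt - ll_null)
    p_value = math.erfc(math.sqrt(stat / 2.0))   # chi-square(1) upper tail
    return stat, p_value

stat, p = lrt_two_proportions(k_case=30, n_case=500, k_ctrl=10, n_ctrl=500)
print(round(stat, 2), round(p, 4))
```

An excess of carriers among cases (30/500 vs 10/500 in this made-up example) yields a large statistic and a small p-value; equal rates yield a statistic of zero.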
Method and apparatus for dynamic focusing of ultrasound energy
Candy, James V.
2002-01-01
The method and system disclosed herein noninvasively detect, separate, and destroy multiple masses (tumors, cysts, etc.) in tissue (e.g., breast tissue) through a plurality of iterations. The method and system may open new frontiers in the noninvasive treatment of masses in the biomedical area, along with the expanding technology of acoustic surgery.
A review of action estimation methods for galactic dynamics
NASA Astrophysics Data System (ADS)
Sanders, Jason L.; Binney, James
2016-04-01
We review the available methods for estimating actions, angles and frequencies of orbits in both axisymmetric and triaxial potentials. The methods are separated into two classes. Unless an orbit has been trapped by a resonance, convergent, or iterative, methods are able to recover the actions to arbitrarily high accuracy given sufficient computing time. Faster non-convergent methods rely on the potential being sufficiently close to a separable potential, and the accuracy of the action estimate cannot be improved through further computation. We critically compare the accuracy of the methods and the required computation time for a range of orbits in an axisymmetric multicomponent Galactic potential. We introduce a new method for estimating actions that builds on the adiabatic approximation of Schönrich & Binney and discuss the accuracy required for the actions, angles and frequencies using suitable distribution functions for the thin and thick discs, the stellar halo and a star stream. We conclude that for studies of the disc and smooth halo component of the Milky Way, the most suitable compromise between speed and accuracy is the Stäckel Fudge, whilst when studying streams the non-convergent methods do not offer sufficient accuracy and the most suitable method is computing the actions from an orbit integration via a generating function. All the software used in this study can be downloaded from https://github.com/jls713/tact.
Testing and Validation of the Dynamic Inertia Measurement Method
NASA Technical Reports Server (NTRS)
Chin, Alexander; Herrera, Claudia; Spivey, Natalie; Fladung, William; Cloutier, David
2015-01-01
This presentation describes the DIM method and how it measures the inertia properties of an object by analyzing the frequency response functions measured during a ground vibration test (GVT). The DIM method has been in development at the University of Cincinnati and has shown success on a variety of small scale test articles. The NASA AFRC version was modified for larger applications.
MDMS: Molecular Dynamics Meta-Simulator for evaluating exchange type sampling methods.
Smith, Daniel B; Okur, Asim; Brooks, Bernard
2012-08-30
Replica exchange methods have become popular tools to explore conformational space for small proteins. For larger biological systems, even with enhanced sampling methods, exploring the free energy landscape remains computationally challenging. This problem has led to the development of many improved replica exchange methods. Unfortunately, testing these methods remains expensive. We propose a Molecular Dynamics Meta-Simulator (MDMS) based on transition state theory to simulate a replica exchange simulation, eliminating the need to run explicit dynamics between exchange attempts. MDMS simulations allow for rapid testing of new replica exchange based methods, greatly reducing the amount of time needed for new method development.
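The exchange step that such meta-simulators emulate can be sketched with a toy replica-exchange Monte Carlo on a one-dimensional double well (an illustrative potential and temperature ladder; plain Metropolis moves stand in for the molecular dynamics between exchange attempts):

```python
import math, random

def energy(x):
    # 1-D double-well surrogate for a rugged free energy landscape
    return (x * x - 1.0) ** 2

def mc_sweep(x, beta, step=0.5, n=50):
    """Metropolis moves standing in for the dynamics between exchanges."""
    for _ in range(n):
        y = x + random.uniform(-step, step)
        if random.random() < math.exp(min(0.0, -beta * (energy(y) - energy(x)))):
            x = y
    return x

random.seed(0)
betas = [4.0, 2.0, 1.0, 0.5]          # inverse-temperature ladder, cold to hot
xs = [1.0 for _ in betas]
accepted = attempted = 0
for sweep in range(200):
    xs = [mc_sweep(x, b) for x, b in zip(xs, betas)]
    i = sweep % (len(betas) - 1)       # alternate over neighbour pairs
    attempted += 1
    # standard replica-exchange acceptance: min(1, exp((b_i - b_j)(E_i - E_j)))
    delta = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
    if random.random() < math.exp(min(0.0, delta)):
        xs[i], xs[i + 1] = xs[i + 1], xs[i]
        accepted += 1
print(accepted / attempted)
```

A meta-simulator in the spirit of MDMS would replace the `mc_sweep` call with a cheap stochastic model of state-to-state transitions, leaving the exchange bookkeeping unchanged.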
A method for dynamic nuclear polarization enhancement of membrane proteins.
Smith, Adam N; Caporini, Marc A; Fanucci, Gail E; Long, Joanna R
2015-01-26
Dynamic nuclear polarization (DNP) magic-angle spinning (MAS) solid-state NMR (ssNMR) spectroscopy has the potential to enhance NMR signals by orders of magnitude and to enable NMR characterization of proteins which are inherently dilute, such as membrane proteins. In this work spin-labeled lipid molecules (SL-lipids), when used as polarizing agents, lead to large and relatively homogeneous DNP enhancements throughout the lipid bilayer and to an embedded lung surfactant mimetic peptide, KL4. Specifically, DNP MAS ssNMR experiments at 600 MHz/395 GHz on KL4 reconstituted in liposomes containing SL-lipids reveal DNP enhancement values over two times larger for KL4 compared to liposome suspensions containing the biradical TOTAPOL. These findings suggest an alternative sample preparation strategy for DNP MAS ssNMR studies of lipid membranes and integral membrane proteins. PMID:25504310
Exploring biomolecular dynamics and interactions using advanced sampling methods
NASA Astrophysics Data System (ADS)
Luitz, Manuel; Bomblies, Rainer; Ostermeir, Katja; Zacharias, Martin
2015-08-01
Molecular dynamics (MD) and Monte Carlo (MC) simulations have emerged as a valuable tool to investigate the statistical mechanics and kinetics of biomolecules and synthetic soft matter materials. However, the major limitations for routine applications are the accuracy of the molecular mechanics force field and the maximum simulation time that can be achieved in current simulation studies. To improve sampling, a number of advanced sampling approaches have been designed in recent years. In particular, variants of the parallel tempering replica-exchange methodology are widely used in many simulation studies. Recent methodological advancements and a discussion of specific aims and advantages are given. This includes improved free energy simulation approaches and conformational search applications.
Piecewise-parabolic methods for astrophysical fluid dynamics
Woodward, P.R.
1983-11-01
A general description of some modern numerical techniques for the simulation of astrophysical fluid flow is presented. The methods are introduced with a thorough discussion of the especially simple case of advection. Attention is focused on the piecewise-parabolic method (PPM). A description of the SLIC method for treating multifluid problems is also given. The discussion is illustrated by a number of advection and hydrodynamics test problems. Finally, a study of Kelvin-Helmholtz instability of supersonic jets using PPM with SLIC fluid interfaces is presented.
Free energy reconstruction from steered dynamics without post-processing
Athenes, Manuel; Marinica, Mihai-Cosmin
2010-09-20
Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in α-Fe. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.
Performance-based selection of likelihood models for phylogeny estimation.
Minin, Vladimir; Abdo, Zaid; Joyce, Paul; Sullivan, Jack
2003-10-01
Phylogenetic estimation has largely come to rely on explicitly model-based methods. This approach requires that a model be chosen and that that choice be justified. To date, justification has largely been accomplished through use of likelihood-ratio tests (LRTs) to assess the relative fit of a nested series of reversible models. While this approach certainly represents an important advance over arbitrary model selection, the best fit of a series of models may not always provide the most reliable phylogenetic estimates for finite real data sets, where all available models are surely incorrect. Here, we develop a novel approach to model selection, which is based on the Bayesian information criterion, but incorporates relative branch-length error as a performance measure in a decision theory (DT) framework. This DT method includes a penalty for overfitting, is applicable prior to running extensive analyses, and simultaneously compares all models being considered and thus does not rely on a series of pairwise comparisons of models to traverse model space. We evaluate this method by examining four real data sets and by using those data sets to define simulation conditions. In the real data sets, the DT method selects the same or simpler models than conventional LRTs. In order to lend generality to the simulations, codon-based models (with parameters estimated from the real data sets) were used to generate simulated data sets, which are therefore more complex than any of the models we evaluate. On average, the DT method selects models that are simpler than those chosen by conventional LRTs. Nevertheless, these simpler models provide estimates of branch lengths that are more accurate both in terms of relative error and absolute error than those derived using the more complex (yet still wrong) models chosen by conventional LRTs. This method is available in a program called DT-ModSel. PMID:14530134
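The BIC backbone of such model selection can be sketched on a toy regression problem (hypothetical data; the decision-theoretic branch-length penalty of the paper is not reproduced): compute BIC = n log(RSS/n) + k log n for each candidate and keep the smallest.

```python
import math, random

def bic_gaussian(rss, n, k):
    """BIC of a Gaussian model with residual sum of squares rss and k free
    parameters, up to an additive constant shared by all candidate models."""
    return n * math.log(rss / n) + k * math.log(n)

def rss_constant(y):
    m = sum(y) / len(y)
    return sum((v - m) ** 2 for v in y)

def rss_linear(x, y):
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return sum((b - my - slope * (a - mx)) ** 2 for a, b in zip(x, y))

random.seed(1)
x = [i / 10 for i in range(50)]
y = [0.8 * a + random.gauss(0.0, 0.5) for a in x]   # truly linear data

bic0 = bic_gaussian(rss_constant(y), len(y), k=2)   # mean + sigma
bic1 = bic_gaussian(rss_linear(x, y), len(y), k=3)  # slope + intercept + sigma
best = "linear" if bic1 < bic0 else "constant"
print(best)
```

The k log n term is the overfitting penalty the abstract refers to: a richer model must reduce the residual sum of squares enough to pay for its extra parameter.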
Dynamically balanced fuel nozzle and method of operation
Richards, George A.; Janus, Michael C.; Robey, Edward H.
2000-01-01
An apparatus and method of operation designed to reduce undesirably high pressure oscillations in lean premix combustion systems burning hydrocarbon fuels are provided. Natural combustion and nozzle acoustics are employed to generate multiple fuel pockets which, when burned in the combustor, counteract the oscillations caused by variations in heat release in the combustor. A hybrid of active and passive control techniques, the apparatus and method eliminate combustion oscillations over a wide operating range, without the use of moving parts or electronics.
Application of a novel finite difference method to dynamic crack problems
NASA Technical Reports Server (NTRS)
Chen, Y. M.; Wilkins, M. L.
1976-01-01
A versatile finite difference method (HEMP and HEMP 3D computer programs) was developed originally for solving dynamic problems in continuum mechanics. It was extended to analyze the stress field around cracks in a solid with finite geometry subjected to dynamic loads and to simulate numerically the dynamic fracture phenomena with success. This method is an explicit finite difference method applied to the Lagrangian formulation of the equations of continuum mechanics in two and three space dimensions and time. The calculational grid moves with the material and in this way it gives a more detailed description of the physics of the problem than the Eulerian formulation.
NASA Technical Reports Server (NTRS)
1973-01-01
A study has been made of possible ways to improve the performance of the Langley Research Center's Transonic Dynamics Tunnel (TDT). The major effort was directed toward obtaining increased dynamic pressure in the Mach number range from 0.8 to 1.2, but methods to increase Mach number capability were also considered. Methods studied for increasing dynamic pressure capability were higher total pressure, auxiliary suction, reducing circuit losses, reduced test medium temperature, smaller test section and higher molecular weight test medium. Increased Mach number methods investigated were nozzle block inserts, variable geometry nozzle, changes in test section wall configuration, and auxiliary suction.
Speech processing using maximum likelihood continuity mapping
Hogden, John E.
2000-01-01
Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.
NASA Astrophysics Data System (ADS)
Hsu, Po Jen; Lai, S. K.; Rapallo, Arnaldo
2014-03-01
Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit
Dynamic measurements and uncertainty estimation of clinical thermometers using Monte Carlo method
NASA Astrophysics Data System (ADS)
Ogorevc, Jaka; Bojkovski, Jovan; Pušnik, Igor; Drnovšek, Janko
2016-09-01
Clinical thermometers in intensive care units are used for the continuous measurement of body temperature. This study describes a procedure for dynamic measurement uncertainty evaluation in order to examine the requirements for clinical thermometer dynamic properties in standards and recommendations. In this study thermistors were used as temperature sensors, transient temperature measurements were performed in water and air, and the measurement data were processed for the investigation of thermometer dynamic properties. The thermometers were mathematically modelled, and a Monte Carlo method was implemented for dynamic measurement uncertainty evaluation. The measurement uncertainty was analysed for static and dynamic conditions. Results showed that the dynamic uncertainty is much larger than the steady-state uncertainty. The results of the dynamic uncertainty analysis were applied to an example of clinical measurements and compared to current requirements in the ISO standard for clinical thermometers. It can be concluded that there was no need for dynamic evaluation of clinical thermometers for continuous measurement, since the dynamic measurement uncertainty remained within the target uncertainty. In the case of intermittent predictive thermometers, however, the thermometer dynamic properties had a significant impact on the measurement result. Estimation of dynamic uncertainty is crucial for the assurance of traceable and comparable measurements.
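The Monte Carlo uncertainty evaluation can be sketched for a first-order thermometer model (illustrative time constant, uncertainties, and read-out time, not the study's instrument data): sample the uncertain inputs, propagate each draw through the dynamic model, and summarise the spread of the indicated temperature.

```python
import math, random

def indicated(t, t_true, t0, tau):
    """First-order thermometer response to a temperature step at t = 0."""
    return t_true + (t0 - t_true) * math.exp(-t / tau)

random.seed(42)
T_TRUE, T0, T_READ = 37.0, 23.0, 10.0   # degC, degC, and read-out time in s
samples = []
for _ in range(10_000):
    tau = random.gauss(4.0, 0.4)        # time constant with 10 % uncertainty
    noise = random.gauss(0.0, 0.05)     # static sensor noise, degC
    samples.append(indicated(T_READ, T_TRUE, T0, tau) + noise)

mean = sum(samples) / len(samples)
sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / (len(samples) - 1))
print(round(mean, 2), round(sd, 3))
```

Even in this toy setting the dynamic contribution (uncertainty in the time constant) dominates the static sensor noise, echoing the study's finding that dynamic uncertainty exceeds steady-state uncertainty.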
Low-complexity approximations to maximum likelihood MPSK modulation classification
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2004-01-01
We present a new approximation to the maximum likelihood classifier to discriminate between M-ary and M'-ary phase-shift-keying transmitted on an additive white Gaussian noise (AWGN) channel and received noncoherently, partially coherently, or coherently.
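The exact average-likelihood classifier that such approximations simplify can be sketched for the coherent case (unit-energy constellations and a known noise level are assumed; the paper's low-complexity approximation is not reproduced): average the Gaussian likelihood of each received sample over the equiprobable constellation points of each candidate order.

```python
import cmath, math, random

def avg_loglik(samples, M, sigma):
    """Log-likelihood of coherent AWGN samples under unit-energy M-PSK,
    averaging over the M equiprobable constellation points."""
    const = [cmath.exp(2j * math.pi * m / M) for m in range(M)]
    ll = 0.0
    for r in samples:
        ll += math.log(sum(math.exp(-abs(r - s) ** 2 / (2 * sigma ** 2))
                           for s in const) / M)
    return ll

def classify(samples, sigma, orders=(2, 4, 8)):
    """Pick the modulation order with the highest average likelihood."""
    return max(orders, key=lambda M: avg_loglik(samples, M, sigma))

random.seed(7)
sigma = 0.3
qpsk = [cmath.exp(2j * math.pi * random.randrange(4) / 4)
        + complex(random.gauss(0, sigma), random.gauss(0, sigma))
        for _ in range(200)]
print(classify(qpsk, sigma))
```

The 1/M averaging factor is what penalises over-rich hypotheses: 8-PSK contains the QPSK points, but pays log 2 per sample for its unused half of the constellation.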
NASA Technical Reports Server (NTRS)
Weaver, D. L.
1982-01-01
Theoretical methods and solutions of the dynamics of protein folding, protein aggregation, protein structure, and the origin of life are discussed. The elements of a dynamic model representing the initial stages of protein folding are presented. The calculation and experimental determination of the model parameters are discussed. The use of computer simulation for modeling protein folding is considered.
Numerical methods in vehicle system dynamics: state of the art and current developments
NASA Astrophysics Data System (ADS)
Arnold, M.; Burgermeister, B.; Führer, C.; Hippmann, G.; Rill, G.
2011-07-01
Robust and efficient numerical methods are an essential prerequisite for the computer-based dynamical analysis of engineering systems. In vehicle system dynamics, the methods and software tools from multibody system dynamics provide the integration platform for the analysis, simulation and optimisation of the complex dynamical behaviour of vehicles and vehicle components and their interaction with hydraulic components, electronic devices and control structures. Based on the principles of classical mechanics, the modelling of vehicles and their components results in nonlinear systems of ordinary differential equations (ODEs) or differential-algebraic equations (DAEs) of moderate dimension that describe the dynamical behaviour in the frequency range required and with a level of detail characteristic of vehicle system dynamics. Most practical problems in this field may be transformed to generic problems of numerical mathematics, such as systems of nonlinear equations in the (quasi-)static analysis and explicit ODEs or DAEs with a typical semi-explicit structure in the dynamical analysis. This transformation to mathematical standard problems makes it possible to use sophisticated, freely available numerical software that is based on well-proven numerical methods like the Newton-Raphson iteration for nonlinear equations or Runge-Kutta and linear multistep methods for ODE/DAE time integration. Substantial speed-ups of these standard numerical methods may be achieved by exploiting the specific structure of the mathematical models in vehicle system dynamics. In the present paper, we follow this framework and start with some modelling aspects relevant from the numerical viewpoint. The focus of the paper is on numerical methods for static and dynamic problems, including software issues and a discussion of which method fits best for which class of problems. Adaptive components in state-of-the-art numerical software like stepsize and order control in time integration are
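As an example of the numerical standard methods mentioned, a classical fourth-order Runge-Kutta step can be applied to a damped linear oscillator (illustrative parameters standing in for a linearised vehicle suspension mode; a sketch, not the adaptive production solvers the paper discusses):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# damped oscillator x'' + 2*zeta*w*x' + w^2*x = 0, first-order form
w, zeta = 2.0, 0.1                  # illustrative stiffness and damping
rhs = lambda t, y: [y[1], -2.0 * zeta * w * y[1] - w * w * y[0]]

h, steps = 0.01, 500                # integrate to t = 5 s
y = [1.0, 0.0]                      # unit initial displacement, zero velocity
for i in range(steps):
    y = rk4_step(rhs, i * h, y, h)
print(round(y[0], 4))
```

For this linear test problem the result can be checked against the closed-form damped-oscillation solution, which is how solver accuracy is typically verified before tackling the nonlinear multibody case.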
Dynamic multiplexed analysis method using ion mobility spectrometer
Belov, Mikhail E
2010-05-18
A method for multiplexed analysis using an ion mobility spectrometer (IMS), in which the effectiveness and efficiency of the multiplexed method are optimized by automatically adjusting the rates of passage of analyte materials through an IMS drift tube during operation of the system. This automatic adjustment is performed by the IMS instrument itself after determining the appropriate levels of adjustment according to the method of the present invention. In one example, the adjustment of the rates of passage for these materials is determined by quantifying the total number of analyte molecules delivered to the ion trap in a preselected period of time, comparing this number to the charge capacity of the ion trap, selecting a gate opening sequence, and implementing the selected gate opening sequence to obtain a preselected rate of analytes within said IMS drift tube.
Out-of-atlas likelihood estimation using multi-atlas segmentation
Asman, Andrew J.; Chambless, Lola B.; Thompson, Reid C.; Landman, Bennett A.
2013-01-01
Purpose: Multi-atlas segmentation has been shown to be highly robust and accurate across an extraordinary range of potential applications. However, it is limited to the segmentation of structures that are anatomically consistent across a large population of potential target subjects (i.e., multi-atlas segmentation is limited to “in-atlas” applications). Herein, the authors propose a technique to determine the likelihood that a multi-atlas segmentation estimate is representative of the problem at hand, and, therefore, identify anomalous regions that are not well represented within the atlases. Methods: The authors derive a technique to estimate the out-of-atlas (OOA) likelihood for every voxel in the target image. These estimated likelihoods can be used to determine and localize the probability of an abnormality being present on the target image. Results: Using a collection of manually labeled whole-brain datasets, the authors demonstrate the efficacy of the proposed framework on two distinct applications. First, the authors demonstrate the ability to accurately and robustly detect malignant gliomas in the human brain—an aggressive class of central nervous system neoplasms. Second, the authors demonstrate how this OOA likelihood estimation process can be used within a quality control context for diffusion tensor imaging datasets to detect large-scale imaging artifacts (e.g., aliasing and image shading). Conclusions: The proposed OOA likelihood estimation framework shows great promise for robust and rapid identification of brain abnormalities and imaging artifacts using only weak dependencies on anomaly morphometry and appearance. The authors envision that this approach would allow for application-specific algorithms to focus directly on regions of high OOA likelihood, which would (1) reduce the need for human intervention, and (2) reduce the propensity for false positives. Using the dual perspective, this technique would allow for algorithms to focus on
A notion of graph likelihood and an infinite monkey theorem
NASA Astrophysics Data System (ADS)
Banerji, Christopher R. S.; Mansour, Toufik; Severini, Simone
2014-01-01
We play with a graph-theoretic analogue of the folklore infinite monkey theorem. We define a notion of graph likelihood as the probability that a given graph is constructed by a monkey in a number of time steps equal to the number of vertices. We present an algorithm to compute this graph invariant, together with closed formulas for some infinite classes, and leave the computational complexity of the likelihood as an open problem.
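Under one natural reading of the construction (at each of n time steps the monkey types a uniformly random vertex pair; this generative process is an assumption, since the paper's exact definition may differ), the likelihood of a small graph can be estimated by Monte Carlo:

```python
import random
from itertools import combinations

def monkey_graph(n, rng):
    """One assumed 'monkey' run: at each of n time steps, type a uniformly
    random unordered pair of distinct vertices and add that edge (repeats
    allowed, so the result can have fewer than n distinct edges)."""
    pairs = list(combinations(range(n), 2))
    edges = set()
    for _ in range(n):
        edges.add(rng.choice(pairs))
    return frozenset(edges)

def graph_likelihood(target_edges, n, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that the monkey produces
    exactly the target graph on n vertices."""
    rng = random.Random(seed)
    target = frozenset(tuple(sorted(e)) for e in target_edges)
    hits = sum(monkey_graph(n, rng) == target for _ in range(trials))
    return hits / trials

# likelihood of the triangle on 3 vertices: 3 steps, 3 possible pairs,
# so exactly the 3! all-distinct sequences succeed, i.e. 6/27
print(graph_likelihood([(0, 1), (0, 2), (1, 2)], n=3))
```

For the triangle the estimate should hover near 6/27 ≈ 0.222, which makes the sketch easy to sanity-check against an exact count.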
Dynamics of deformable multibody systems using recursive projection methods
NASA Astrophysics Data System (ADS)
Shabana, A. A.
1992-12-01
In this investigation, generalized Newton-Euler equations are developed for deformable bodies that undergo large translational and rotational displacements. The configuration of the deformable body is identified using coupled sets of reference and elastic variables. The nonlinear generalized Newton-Euler equations are formulated in terms of a set of time invariant scalars and matrices that depend on the spatial coordinates as well as the assumed displacement field. These time-invariant quantities appear in the nonlinear terms that represent the dynamic coupling between the rigid body modes and the elastic deformation. A set of recursive kinematic equations, in which the absolute accelerations are expressed in terms of the joint and elastic accelerations, is developed for several joint types. The recursive kinematic equations and the joint reaction relationships are combined with the generalized Newton-Euler equations in order to obtain a system of loosely coupled equations which have sparse matrix structure. Using matrix partitioning and recursive projection techniques based on optimal block factorization, an order-n solution for the system equations is obtained.
Performing dynamic time history analyses by extension of the response spectrum method
Hulbert, G.M.
1983-01-01
A method is presented to calculate the dynamic time history response of finite element models using results from response spectrum analyses. The proposed "modified" time history method does not represent a new mathematical approach to dynamic analysis but suggests a more efficient ordering of the analytical equations and procedures. The modified time history method is considerably faster and less expensive to use than normal time history methods. This paper presents the theory and implementation of the modified time history approach along with comparisons of the modified and normal time history methods for a prototypic seismic piping design problem.
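The reordering the abstract describes rests on a standard equivalence: a linear time history can be assembled mode by mode and superposed. A minimal sketch of that equivalence for a hypothetical undamped 2-DOF system (the matrices and load are illustrative, not the paper's piping model):

```python
import numpy as np

# Undamped 2-DOF system M x'' + K x = f(t), harmonic load on the second mass.
M = np.diag([2.0, 1.0])
K = np.array([[600.0, -200.0], [-200.0, 200.0]])
f = lambda t: np.array([0.0, 50.0 * np.sin(8.0 * t)])

# Mass-normalized modal basis via the symmetric eigenproblem.
Linv = np.diag(1.0 / np.sqrt(np.diag(M)))
w2, V = np.linalg.eigh(Linv @ K @ Linv)
Phi = Linv @ V                     # Phi^T M Phi = I, Phi^T K Phi = diag(w2)

dt, nsteps = 1.0e-3, 2000

def central_difference(acc, ndof):
    """Explicit central-difference march from rest; acc(x, t) returns x''."""
    x_prev = np.zeros(ndof)
    x = x_prev + 0.5 * dt**2 * acc(x_prev, 0.0)   # consistent first step
    for n_ in range(1, nsteps):
        x_prev, x = x, 2 * x - x_prev + dt**2 * acc(x, n_ * dt)
    return x

# (1) direct integration of the coupled equations
Minv = np.linalg.inv(M)
x_direct = central_difference(lambda x, t: Minv @ (f(t) - K @ x), 2)

# (2) integrate each uncoupled modal equation, then superpose
q_final = central_difference(lambda q, t: Phi.T @ f(t) - w2 * q, 2)
x_modal = Phi @ q_final
```

Because the modal transform is an exact linear change of basis, both routes agree to rounding error; the efficiency argument in the paper comes from reusing per-mode results already produced by response spectrum machinery.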
COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION FOR BANDED PROBABILITY DISTRIBUTIONS
Gjerløw, E.; Mikkelsen, K.; Eriksen, H. K.; Næss, S. K.; Seljebotn, D. S.; Górski, K. M.; Huey, G.; Jewell, J. B.; Rocha, G.; Wehus, I. K.
2013-11-10
We investigate sets of random variables that can be arranged sequentially such that a given variable only depends conditionally on its immediate predecessor. For such sets, we show that the full joint probability distribution may be expressed exclusively in terms of uni- and bivariate marginals. Under the assumption that the cosmic microwave background (CMB) power spectrum likelihood only exhibits correlations within a banded multipole range, Δl_C, we apply this expression to two outstanding problems in CMB likelihood analysis. First, we derive a statistically well-defined hybrid likelihood estimator, merging two independent (e.g., low- and high-l) likelihoods into a single expression that properly accounts for correlations between the two. Applying this expression to the Wilkinson Microwave Anisotropy Probe (WMAP) likelihood, we verify that the effect of correlations in the transition region on cosmological parameters is negligible for WMAP; the largest relative shift seen for any parameter is 0.06σ. However, because this may not hold for other experimental setups (e.g., for different instrumental noise properties or analysis masks), but must rather be verified on a case-by-case basis, we recommend our new hybridization scheme for future experiments for statistical self-consistency reasons. Second, we use the same expression to improve the convergence rate of the Blackwell-Rao likelihood estimator, reducing the required number of Monte Carlo samples by several orders of magnitude, and thereby extend it to high-l applications.
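The core identity, for a first-order Markov sequence, is p(x_1,...,x_n) = Π p(x_i, x_{i+1}) / Π_{i=2}^{n-1} p(x_i): the joint density written purely from bivariate and univariate marginals. A numerical check on a stationary Gaussian AR(1) chain (my toy example, not the CMB likelihood itself):

```python
import numpy as np

def log_phi1(x):                        # standard normal log-density
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_phi2(x, y, rho):                # bivariate normal, unit vars, corr rho
    q = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    return -0.5 * q - np.log(2 * np.pi) - 0.5 * np.log(1 - rho**2)

rho, n = 0.7, 6
rng = np.random.default_rng(2)
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):                   # stationary AR(1): all marginals N(0,1)
    x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.normal()

# exact joint: p(x1) * prod_i p(x_{i+1} | x_i), conditional N(rho*x_i, 1-rho^2)
log_joint = log_phi1(x[0]) + sum(
    log_phi1((x[i] - rho * x[i - 1]) / np.sqrt(1 - rho**2))
    - 0.5 * np.log(1 - rho**2) for i in range(1, n))

# banded expression: bivariate marginals divided by interior univariate marginals
log_banded = (sum(log_phi2(x[i], x[i + 1], rho) for i in range(n - 1))
              - sum(log_phi1(x[i]) for i in range(1, n - 1)))
```

The two log-densities agree to machine precision, which is exactly the property the paper exploits for hybridization and for accelerating the Blackwell-Rao estimator.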
Shack-Hartmann wavefront sensor with large dynamic range by adaptive spot search method.
Shinto, Hironobu; Saita, Yusuke; Nomura, Takanori
2016-07-10
A Shack-Hartmann wavefront sensor (SHWFS) that consists of a microlens array and an image sensor has been used to measure the wavefront aberrations of human eyes. However, a conventional SHWFS has finite dynamic range depending on the diameter of each microlens. The dynamic range cannot be easily expanded without a decrease of the spatial resolution. In this study, an adaptive spot search method to expand the dynamic range of an SHWFS is proposed. In the proposed method, spots are searched with the help of their approximate displacements measured with low spatial resolution and large dynamic range. By the proposed method, a wavefront can be correctly measured even if the spot is beyond the detection area. The adaptive spot search method is realized by using the special microlens array that generates both spots and discriminable patterns. The proposed method enables expanding the dynamic range of an SHWFS with a single shot and short processing time. The performance of the proposed method is compared with that of a conventional SHWFS by optical experiments. Furthermore, the dynamic range of the proposed method is quantitatively evaluated by numerical simulations.
NASA Technical Reports Server (NTRS)
Carson, John M., III; Bayard, David S.
2006-01-01
G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
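The estimator at the heart of such a scheme can be sketched in one dimension: with force measurements F_i = m·a_i plus Gaussian sensor noise, the maximum-likelihood mass is a least-squares fit through the origin. The numbers below are illustrative assumptions, not G-SAMPLE's actual thruster or sensor models.

```python
import numpy as np

rng = np.random.default_rng(3)
m_true = 1.0                               # kg, hypothetical sample mass
accel = rng.uniform(0.05, 0.2, size=200)   # commanded accelerations, m/s^2
force = m_true * accel + rng.normal(0.0, 0.002, size=200)  # noisy force sensor

# Gaussian-noise MLE of the mass: least squares through the origin
m_hat = np.sum(force * accel) / np.sum(accel * accel)
sigma2 = np.var(force - m_hat * accel)
ci95 = 1.96 * np.sqrt(sigma2 / np.sum(accel * accel))      # rough 95% interval
```

The real method fuses a full spacecraft dynamics model with thruster firings, but the "mass identification becomes a computation" idea reduces to exactly this kind of regression.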
NASA Astrophysics Data System (ADS)
Rui, Xiao-Ting; Kreuzer, Edwin; Rong, Bao; He, Bin
2012-04-01
In this paper, by defining new state vectors and developing new transfer matrices of various elements moving in space, the discrete time transfer matrix method of multi-rigid-flexible-body systems is expanded to study the dynamics of multibody systems with flexible beams moving in space. Formulations and a numerical example of a rigid-flexible-body three-pendulum system moving in space are given to validate the method. When the new method is used to study the dynamics of a multi-rigid-flexible-body system moving in space, the global dynamics equations of the system are not needed, the orders of the involved matrices are very low, and the computational speed is high, irrespective of the size of the system. The new method is simple, straightforward, practical, and provides a powerful tool for multi-rigid-flexible-body system dynamics.
Protein turnover methods in single-celled organisms: dynamic SILAC.
Claydon, Amy J; Beynon, Robert J
2011-01-01
Early achievements in proteomics were qualitative, typified by the identification of very small quantities of proteins. However, as the subject has developed, there has been a pressure to develop approaches to define the amounts of each protein--whether in a relative or an absolute sense. A further dimension to quantitative proteomics embeds the behavior of each protein in terms of its turnover. Virtually every protein in the cell is in a dynamic state, subject to continuous synthesis and degradation, the relative rates of which control the expansion or the contraction of the protein pool, and the absolute values of which dictate the temporal responsiveness of the protein pool. Strategies must therefore be developed to assess the turnover of individual proteins in the proteome. Because a protein can be turning over rapidly even when the protein pool is in steady state, the only acceptable approach to measure turnover is to use metabolic labels that are incorporated or lost from the protein pool as it is replaced. Using metabolic labeling on a proteome-wide scale in turn requires metabolic labels that contain stable isotopes, the incorporation or loss of which can be assessed by mass spectrometry. A typical turnover experiment is complex. The choice of metabolic label is dictated by several factors, including abundance in the proteome, metabolic redistribution of the label in the precursor pool, and the downstream mass spectrometric analytical protocols. Key issues include the need to control and understand the relative isotope abundance of the precursor, the optimization of label flux into and out of the protein pool, and a sampling strategy that ensures the coverage of the greatest range of turnover rates. Finally, the informatics approaches to data analysis will not be as straightforward as in other areas of proteomics. In this chapter, we will discuss the principles and practice of workflow development for turnover analysis, exemplified by the development of
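The quantitative core of a dynamic SILAC experiment is a simple kinetic model: after the label switch, the labeled fraction of a protein at steady state follows f(t) = 1 - exp(-k t), and the turnover rate k is fitted from a few time points. A hedged sketch with invented time points and rate (not data from any real experiment):

```python
import numpy as np

t = np.array([0.0, 6.0, 12.0, 24.0, 48.0])   # hours after label switch (assumed)
k_true = 0.08                                 # per hour, assumed turnover rate
rng = np.random.default_rng(11)
labeled = 1 - np.exp(-k_true * t) + rng.normal(0.0, 0.01, t.size)

# one-parameter least-squares fit of the labeling curve f(t) = 1 - exp(-k t)
ks = np.linspace(0.001, 0.5, 5000)
sse = ((labeled[None, :] - (1 - np.exp(-np.outer(ks, t)))) ** 2).sum(axis=1)
k_hat = ks[np.argmin(sse)]
```

The sampling-strategy point in the abstract shows up directly here: time points must straddle 1/k, or the grid search cannot distinguish fast from slow turnover.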
Dynamic analysis methods for detecting anomalies in asynchronously interacting systems
Kumar, Akshat; Solis, John Hector; Matschke, Benjamin
2014-01-01
Detecting modifications to digital system designs, whether malicious or benign, is problematic due to the complexity of the systems being analyzed. Moreover, static analysis techniques and tools can only be used during the initial design and implementation phases to verify safety and liveness properties. It is computationally intractable to guarantee that any previously verified properties still hold after a system, or even a single component, has been produced by a third-party manufacturer. In this paper we explore new approaches for creating a robust system design by investigating highly-structured computational models that simplify verification and analysis. Our approach avoids the need to fully reconstruct the implemented system by incorporating a small verification component that dynamically detects for deviations from the design specification at run-time. The first approach encodes information extracted from the original system design algebraically into a verification component. During run-time this component randomly queries the implementation for trace information and verifies that no design-level properties have been violated. If any deviation is detected then a pre-specified fail-safe or notification behavior is triggered. Our second approach utilizes a partitioning methodology to view liveness and safety properties as a distributed decision task and the implementation as a proposed protocol that solves this task. Thus the problem of verifying safety and liveness properties is translated to that of verifying that the implementation solves the associated decision task. We develop upon results from distributed systems and algebraic topology to construct a learning mechanism for verifying safety and liveness properties from samples of run-time executions.
A self-consistent field method for galactic dynamics
NASA Technical Reports Server (NTRS)
Hernquist, Lars; Ostriker, Jeremiah P.
1992-01-01
The present study describes an algorithm for evolving collisionless stellar systems in order to investigate the evolution of systems with density profiles like the R^(1/4) law, using only a few terms in the expansions. A good fit is obtained for a truncated isothermal distribution, which renders the method appropriate for galaxies with flat rotation curves. Calculations employing N of about 10^6-10^7 are straightforward on existing supercomputers, making possible simulations having significantly smoother fields than with direct methods such as tree-codes. Orbits are found in a given static or time-dependent gravitational field; the potential, phi(r, t), is revised from the resultant density, rho(r, t). Possible scientific uses of this technique are discussed, including tidal perturbations of dwarf galaxies, the adiabatic growth of central masses in spheroidal galaxies, instabilities in realistic galaxy models, and secular processes in galactic evolution.
Discrete time transfer matrix method for dynamics of multibody system with real-time control
NASA Astrophysics Data System (ADS)
Rong, Bao; Rui, Xiaoting; Wang, Guoping; Yang, Fufeng
2010-03-01
By taking the control and feedback parameters into account in state vectors, defining new state vectors and deducing new transfer equations and transfer matrices for the actuator, controlled element and feedback element, a new method, named the discrete time transfer matrix method for controlled multibody systems (CMS), is developed in this paper to study the dynamics of CMS with real-time control. This method does not need the global dynamics equations of the system. It offers modeling flexibility, a low order of the system matrix, and high computational efficiency, and is efficient for general CMS. Compared with ordinary dynamics methods, the proposed method has more advantages for dynamics design and real-time control of a complex CMS. Adopting the PID adaptive controller and modal velocity feedback control on PZT actuators, and applying the proposed method and an ordinary dynamics method, respectively, the tip trajectory tracking for a flexible manipulator is carried out. Formulations of the method as well as numerical simulation are given to validate the proposed method.
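The transfer-matrix idea itself, propagating a small state vector element by element instead of assembling global equations, is easiest to see in its classical frequency-domain form (a Holzer-type chain, not the authors' discrete-time controlled formulation). For a fixed-free spring-mass chain the state (x, F) transfers across each spring and mass, and natural frequencies are where the free-end force vanishes:

```python
import numpy as np

# Fixed-free chain: wall -k1- m1 -k2- m2 (free end). Values are illustrative.
k = [400.0, 300.0]
m = [2.0, 1.0]

def end_force(w):
    x, F = 0.0, 1.0              # fixed end: zero displacement, unit force
    for ki, mi in zip(k, m):
        x += F / ki              # across a spring: displacement jump, force constant
        F -= w**2 * mi * x       # across a mass: force drop under harmonic motion
    return F

# locate sign changes of the free-end force on a grid, then bisect each root
freqs = []
grid = np.linspace(0.1, 40.0, 2000)
for a, b in zip(grid[:-1], grid[1:]):
    if end_force(a) * end_force(b) < 0:
        lo, hi = a, b
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if end_force(lo) * end_force(mid) <= 0:
                hi = mid
            else:
                lo = mid
        freqs.append(0.5 * (lo + hi))
```

Note that only a 2-component state is ever stored, regardless of chain length; that is the "low order of the system matrix" property the abstract claims for the controlled extension.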
Maximum likelihood positioning and energy correction for scintillation detectors
NASA Astrophysics Data System (ADS)
Lerche, Christoph W.; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten
2016-02-01
An algorithm for determining the crystal pixel and the gamma ray energy with scintillation detectors for PET is presented. The algorithm uses Likelihood Maximisation (ML) and therefore is inherently robust to missing data caused by defective or paralysed photo detector pixels. We tested the algorithm on a highly integrated MRI compatible small animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and length of 12 mm. Light sharing was used to read out the scintillation light from the 30 × 30 scintillator pixel array with an 8 × 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner’s spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage for the ML method was observed when comparing the PET scanner’s overall single and prompt detection efficiency, image noise, and energy resolution to the CoG based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving highest energy resolution, count rate performance and spatial resolution at the same time.
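Why ML positioning tolerates dead pixels can be shown with a toy 1-D light-sharing model: each crystal has an expected light template over the photodetector pixels, and the Poisson log-likelihood is simply summed over working pixels only. The templates and sizes below are assumptions for illustration, not the detector's calibrated response.

```python
import numpy as np

# Toy 1-D readout: 8 crystals, 8 SiPM pixels, assumed Gaussian light templates.
n_pixels = 8
pix = np.arange(n_pixels)
centers = pix.astype(float)
templates = 200.0 * np.exp(-0.5 * ((pix - centers[:, None]) / 1.2) ** 2) + 1.0

def ml_position(counts, alive):
    """Poisson log-likelihood over working pixels only -> crystal index.
    Excluding dead pixels from the sum is what makes ML robust to them."""
    lam = templates[:, alive]
    y = counts[alive]
    ll = (y * np.log(lam) - lam).sum(axis=1)
    return int(np.argmax(ll))

rng = np.random.default_rng(4)
true_crystal = 5
counts = rng.poisson(templates[true_crystal])     # one scintillation event
alive = np.ones(n_pixels, dtype=bool)
alive[3] = False                                   # paralysed photodetector pixel
est = ml_position(counts, alive)
```

A center-of-gravity estimate, by contrast, is biased whenever a pixel is missing, because the centroid is computed from an asymmetric truncation of the light distribution.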
Maximum likelihood resampling of noisy, spatially correlated data
NASA Astrophysics Data System (ADS)
Goff, J.; Jenkins, C.
2005-12-01
In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application, which runs the risk of erasing high variability components of the field in addition to the noise components. We present here an alternative to filtering: a newly developed methodology for correcting noise in data by finding the "best" value given the data value, its uncertainty, and the data values and uncertainties at proximal locations. The motivating rationale is that data points that are close to each other in space cannot differ by "too much", where how much is "too much" is governed by the field correlation properties. Data with large uncertainties will frequently violate this condition, and in such cases need to be corrected, or "resampled." The best solution for resampling is determined by the maximum of the likelihood function defined by the intersection of two probability density functions (pdf): (1) the data pdf, with mean and variance determined by the data value and squared uncertainty, respectively, and (2) the geostatistical pdf, whose mean and variance are determined by the kriging algorithm applied to proximal data values. A Monte Carlo sampling of the data probability space eliminates non-uniqueness, and weights the solution toward data values with lower uncertainties. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum likelihood resampling algorithm. The method is also applied to three marine geology/geophysics data examples: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty
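For Gaussian pdfs, the maximum of the product of the data pdf and the geostatistical pdf has a closed form: an inverse-variance-weighted average. This sketch shows only that kernel of the method; the paper's full algorithm adds kriging to supply the second pdf and Monte Carlo sampling over the data space.

```python
import numpy as np

def ml_resample(x_data, var_data, x_krig, var_krig):
    """Mode of the product of two Gaussian pdfs: the inverse-variance-weighted
    compromise between a noisy observation and its kriged neighborhood value."""
    w1, w2 = 1.0 / var_data, 1.0 / var_krig
    x_best = (w1 * x_data + w2 * x_krig) / (w1 + w2)
    var_best = 1.0 / (w1 + w2)
    return x_best, var_best

# a noisy sounding (large uncertainty) pulled toward the kriged estimate
x, v = ml_resample(x_data=-42.0, var_data=9.0, x_krig=-40.0, var_krig=1.0)
```

The behavior the abstract describes falls out directly: data with large uncertainty (here var_data = 9) are corrected strongly toward the field estimate, while precise data would barely move.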
NASA Astrophysics Data System (ADS)
Yamaguchi, M.; Katayama, K.; Sawada, T.
2003-08-01
A recently developed lens-free heterodyne transient grating method was applied for the measurement of ultrafast photoexcited dynamics of several kinds of dye molecules in aqueous solutions. The principle of the lens-free heterodyne transient grating method was clarified in detail, especially for thick samples, such as liquid and semi-transparent solid samples. The ultrafast dynamics of malachite green and methyl orange molecules in aqueous solutions was successfully monitored, and the obtained time constants agreed with those in other reports.
A new uncertain analysis method and its application in vehicle dynamics
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing
2015-01-01
This paper proposes a new uncertain analysis method for vehicle dynamics involving hybrid uncertainty parameters. The Polynomial Chaos (PC) theory that accounts for the random uncertainty is systematically integrated with the Chebyshev inclusion function theory that describes the interval uncertainty, to deliver a Polynomial-Chaos-Chebyshev-Interval (PCCI) method. The PCCI method is non-intrusive, because it does not require the amendment of the original solver for different and complicated dynamics problems. Two types of evaluation indexes are established: the first includes the interval mean (IM) and interval variance (IV), and the second includes the mean of the lower bound (MLB), the variance of the lower bound (VLB), the mean of the upper bound (MUB) and the variance of the upper bound (VUB). The Monte Carlo method is combined with the scanning method to produce the reference results, and then a 4-DOF vehicle roll plane model is employed to demonstrate the effectiveness of the proposed method for vehicle dynamics.
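The hybrid-uncertainty bookkeeping can be sketched in a heavily simplified form: treat the random parameter by Monte Carlo (standing in for the PC expansion) and the interval parameter by evaluation at Chebyshev nodes, then report the interval mean and interval variance. The response function and parameter values are invented for illustration, not a vehicle model.

```python
import numpy as np

def response(p, q):                    # toy nonlinear response (assumed)
    return np.sin(q) + 0.1 * p * q**2

rng = np.random.default_rng(5)
p_samples = rng.normal(1.0, 0.05, size=4000)   # random parameter (MC stand-in)
q_lo, q_hi = 0.5, 1.0                          # interval parameter bounds

# Chebyshev nodes of the interval parameter (the interval side of PCCI)
kk = np.arange(1, 12)
nodes = 0.5 * (q_lo + q_hi) + 0.5 * (q_hi - q_lo) * np.cos((2 * kk - 1) * np.pi / 22)

means = np.array([response(p_samples, q).mean() for q in nodes])
varis = np.array([response(p_samples, q).var() for q in nodes])
IM = (means.min(), means.max())        # interval mean
IV = (varis.min(), varis.max())        # interval variance
```

The output is an interval of statistics rather than a single statistic, which is exactly the structure of the IM/IV indexes defined in the paper; the real method replaces both brute-force loops with polynomial surrogates.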
Maximum-likelihood estimation of circle parameters via convolution.
Zelniker, Emanuel E; Clarkson, I Vaughan L
2006-04-01
The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. These estimates can then be fed as preliminary inputs into various other numerical techniques, which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
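Of the estimators compared in the abstract, the Delogne-Kåsa estimator is the one simple enough to show in a few lines: it linearizes the circle equation x² + y² = 2ax + 2by + (r² − a² − b²) and solves a least-squares problem. This is the baseline estimator, not the paper's convolution-based MLE.

```python
import numpy as np

def kasa_fit(x, y):
    """Delogne-Kasa circle fit: linear least squares for center (a, b), radius r."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
rng = np.random.default_rng(6)
x = 3.0 + 2.0 * np.cos(theta) + rng.normal(0, 0.01, theta.size)
y = -1.0 + 2.0 * np.sin(theta) + rng.normal(0, 0.01, theta.size)
cx, cy, r = kasa_fit(x, y)
```

On full-circle data with small noise the DKE is nearly unbiased; its known weakness, and one motivation for the MLE, is bias on short arcs.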
Multi-Scale homogenization methods for Magma Dynamics
NASA Astrophysics Data System (ADS)
Simpson, G.; Spiegelman, M. W.; Weinstein, M.
2009-12-01
Developing accurate and tractable mathematical models for partially molten systems is critical for understanding the dynamics of mantle magmatic regions (mid-ocean ridges, subduction zones, hot spots) as well as modeling the geochemical evolution of the planet. Because these systems include interacting fluid and solid phases, developing such models is challenging. The composite material of melt and solid possesses emergent features, such as permeability and compressibility, not found in either phase alone. Previous work has used multiphase flow theory to derive macroscopic equations based on conservation principles and assumptions about interphase forces and interactions. Here, we present a complementary approach using homogenization, a multiple scale theory. Our point of departure is a model of the microstructure, assumed to possess an arbitrary, but periodic, geometry of interpenetrating melt and matrix. At this scale, incompressible Stokes flow is assumed to govern both phases, with appropriate interface conditions. Homogenization systematically leads to macroscopic equations for the melt and matrix velocities, as well as the bulk parameters, permeability and bulk viscosity, without requiring ad-hoc closures for interphase forces. We show that homogenization can lead to a range of macroscopic models depending on the relative contrast in melt and solid properties such as viscosity or velocity. In particular, we identify a regime that is in good agreement with previous formulations, without including their attendant assumptions. Thus, this work serves as independent verification of these models. In addition, homogenization provides consistent machinery for computing macroscopic constitutive relations such as permeability and bulk viscosity that are consistent with a given microstructure. We implement this machinery numerically to calculate effective permeability, bulk viscosity and a tensorial correction to the shear viscosity that accounts for fabric formation
C-arm perfusion imaging with a fast penalized maximum-likelihood approach
NASA Astrophysics Data System (ADS)
Frysch, Robert; Pfeiffer, Tim; Bannasch, Sebastian; Serowy, Steffen; Gugel, Sebastian; Skalej, Martin; Rose, Georg
2014-03-01
Perfusion imaging is an essential method for stroke diagnostics. One of the most important factors for a successful therapy is to get the diagnosis as fast as possible. Therefore our approach aims at perfusion imaging (PI) with a cone beam C-arm system providing perfusion information directly in the interventional suite. For PI the imaging system has to provide excellent soft tissue contrast resolution in order to allow the detection of small attenuation enhancement due to contrast agent in the capillary vessels. The limited dynamic range of flat panel detectors as well as the sparse sampling of the slow rotating C-arm in combination with standard reconstruction methods results in limited soft tissue contrast. We choose a penalized maximum-likelihood reconstruction method to get suitable results. To minimize the computational load, the 4D reconstruction task is reduced to several static 3D reconstructions. We also include an ordered subset technique with transitioning to a small number of subsets, which adds sharpness to the image with fewer iterations while also suppressing the noise. Instead of the standard multiplicative EM correction, we apply a Newton-based optimization to further accelerate the reconstruction algorithm. The latter optimization reduces the computation time by up to 70%. Further acceleration is provided by a multi-GPU implementation of the forward and backward projection, which fulfills the demands of cone beam geometry. In this preliminary study we evaluate this procedure on clinical data. Perfusion maps are computed and compared with reference images from magnetic resonance scans. We found a high correlation between both images.
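The "standard multiplicative EM correction" that the authors replace with a Newton step is the classic MLEM update x ← x · Aᵀ(y / Ax) / Aᵀ1. A minimal sketch on a tiny invented system (nothing here reflects the paper's C-arm geometry or penalty term):

```python
import numpy as np

# Tiny emission-tomography-style system: y = A x with x >= 0 (toy matrices).
A = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.4],
              [0.2, 0.3, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true                         # noiseless, consistent measurements

x = np.ones(3)                         # strictly positive initial estimate
sens = A.sum(axis=0)                   # sensitivity image A^T 1
for _ in range(5000):                  # multiplicative MLEM update
    x *= (A.T @ (y / (A @ x))) / sens
```

Each iteration preserves nonnegativity automatically, which is the appeal of the multiplicative form; its slow convergence is what motivates ordered subsets and the Newton-based acceleration described in the abstract.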
NASA Technical Reports Server (NTRS)
Murphy, Patrick Charles
1985-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
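The sensitivity-based update at the core of such an algorithm can be sketched with a Gauss-Newton iteration on a toy output model, using plain finite-difference sensitivities. MNRES itself replaces these finite differences with local surface fits to cut the computational cost; the model, data, and starting guess below are invented for illustration.

```python
import numpy as np

def model(theta, t):                     # hypothetical output model y = a*exp(b*t)
    a, b = theta
    return a * np.exp(b * t)

t = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(7)
y = model((2.0, -1.5), t) + rng.normal(0.0, 0.01, t.size)

theta = np.array([1.0, -1.0])            # initial parameter guess
h = 1e-6
for _ in range(50):                      # Gauss-Newton with FD sensitivities
    r = y - model(theta, t)
    S = np.column_stack([(model(theta + h * e, t) - model(theta, t)) / h
                         for e in np.eye(2)])
    theta = theta + np.linalg.lstsq(S, r, rcond=None)[0]
```

Every iteration costs one extra model evaluation per parameter just to rebuild S; reusing slope information across iterations, as MNRES does, is where the reported savings come from.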
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1985-01-01
The application of the Generalized Likelihood Ratio technique to the detection and identification of aircraft control element failures has been evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 aircraft. Simulation results show that the technique has potential but that the effects of wind turbulence and Kalman filter model errors are problems which must be overcome.
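The GLR idea can be sketched in its simplest form: a control element failure shows up as a bias of unknown onset in an otherwise white Gaussian innovation sequence, and the GLR statistic maximizes the likelihood ratio over the onset time. This is the generic statistic, not the B-737 simulation's filter or failure models.

```python
import numpy as np

def glr_bias(residuals, sigma):
    """GLR statistic for an unknown-onset constant bias in white Gaussian
    residuals: max over onset k of (sum_{i>=k} r_i)^2 / (sigma^2 * (n - k))."""
    r = np.asarray(residuals)
    n = r.size
    tail = np.cumsum(r[::-1])[::-1]    # tail[k] = sum of r[k:]
    lengths = n - np.arange(n)
    return np.max(tail**2 / (sigma**2 * lengths))

rng = np.random.default_rng(8)
clean = rng.normal(0.0, 1.0, 300)      # innovations, no failure
faulty = clean.copy()
faulty[200:] += 1.5                    # simulated control-element failure at k=200
s_clean = glr_bias(clean, 1.0)
s_fault = glr_bias(faulty, 1.0)
```

The abstract's caveat also shows up here: turbulence or Kalman filter model errors inflate the residuals' variance, raising the no-fault statistic and forcing a higher detection threshold.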
A multi-similarity spectral clustering method for community detection in dynamic networks.
Qin, Xuanmei; Dai, Weidi; Jiao, Pengfei; Wang, Wenjun; Yuan, Ning
2016-01-01
Community structure is one of the fundamental characteristics of complex networks. Many methods have been proposed for community detection. However, most of these methods are designed for static networks and are not suitable for dynamic networks that evolve over time. Recently, the evolutionary clustering framework was proposed for clustering dynamic data, and it can also be used for community detection in dynamic networks. In this paper, a multi-similarity spectral (MSSC) method is proposed as an improvement to the former evolutionary clustering method. To detect the community structure in dynamic networks, our method considers the different similarity metrics of networks. First, multiple similarity matrices are constructed for each snapshot of dynamic networks. Then, a dynamic co-training algorithm is proposed by bootstrapping the clustering of different similarity measures. Compared with a number of baseline models, the experimental results show that the proposed MSSC method has better performance on some widely used synthetic and real-world datasets with ground-truth community structure that changes over time. PMID:27528179
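The building block that MSSC extends, spectral clustering of a single similarity matrix for a single snapshot, fits in a few lines: form the graph Laplacian and split communities by the sign of the Fiedler vector. The planted two-block graph below is a toy example, not one of the paper's benchmarks.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 30                                   # two planted communities of 15 nodes
P = np.full((n, n), 0.05)                # cross-community edge probability
P[:15, :15] = P[15:, 15:] = 0.8          # within-community edge probability
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                              # undirected simple graph

L = np.diag(A.sum(axis=1)) - A           # unnormalized graph Laplacian
w, V = np.linalg.eigh(L)
fiedler = V[:, 1]                        # eigenvector of 2nd-smallest eigenvalue
labels = (fiedler > 0).astype(int)       # sign split ~ two communities
```

MSSC runs this spectral step per snapshot and per similarity metric, then co-trains the different metrics' clusterings against each other; the snippet shows only the single-similarity static core.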
NASA Astrophysics Data System (ADS)
Wang, Qing; Yao, Jing-Zheng
2010-12-01
Several algorithms are proposed for the development of a framework for the perturbation-based stochastic finite element method (PSFEM) for nonlinear dynamic problems with large variations. For this purpose, algorithms and a framework for SFEM based on the stochastic virtual work principle were studied. To demonstrate the validity and practicality of the algorithms and framework, numerical examples for nonlinear dynamic problems with large variations were calculated and compared with the Monte Carlo simulation method. The comparison shows that the proposed approaches are accurate and effective for the nonlinear dynamic analysis of structures with random parameters.
Motif-Synchronization: A new method for analysis of dynamic brain networks with EEG
NASA Astrophysics Data System (ADS)
Rosário, R. S.; Cardoso, P. T.; Muñoz, M. A.; Montoya, P.; Miranda, J. G. V.
2015-12-01
The major aim of this work was to propose a new association method known as Motif-Synchronization. This method was developed to provide information about the synchronization degree and direction between two nodes of a network by counting the number of occurrences of some patterns between any two time series. The second objective of this work was to present a new methodology for the analysis of dynamic brain networks, by combining the Time-Varying Graph (TVG) method with a directional association method. We further applied the new algorithms to a set of human electroencephalogram (EEG) signals to perform a dynamic analysis of the brain functional networks (BFN).
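A minimal sketch of the pattern-counting idea is below, using generic ordinal (rank-order) 3-point motifs and a lag scan. The motif definition, the lag-based direction rule, and all names are simplifying assumptions for illustration; the paper defines its own motif patterns and directionality index.

```python
import numpy as np

def motifs(x, m=3):
    """Ordinal 3-point motif sequence: rank pattern of each sliding window."""
    return [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]

def motif_sync(x, y, max_lag=5):
    """Synchronization degree and direction between two series (sketch):
    count coincident motifs at each lag; the best-matching lag's sign
    indicates which series leads."""
    mx, my = motifs(x), motifs(y)
    n = min(len(mx), len(my))
    best = max(range(-max_lag, max_lag + 1),
               key=lambda lag: sum(a == b for a, b in
                                   zip(mx[max(0, -lag):], my[max(0, lag):])))
    degree = sum(a == b for a, b in zip(mx, my)) / n
    return degree, best

t = np.linspace(0, 8 * np.pi, 400)
x = np.sin(t)
y = np.sin(t)                      # perfectly synchronized copy
degree, lag = motif_sync(x, y)
```

For an identical pair of series the motif sequences coincide everywhere, so the degree is 1 and the best lag is 0.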
Classification data mining method based on dynamic RBF neural networks
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Xu, Min; Zhang, Zhang; Duan, Luping
2009-04-01
With the wide application of databases and the rapid development of the Internet, the capacity to manufacture and collect data using information technology has grown greatly. Mining useful information and knowledge from large databases or data warehouses is therefore an urgent problem, and data mining (DM) technology has developed rapidly to meet this need. DM, however, often faces data that are noisy, disordered, and nonlinear. Fortunately, artificial neural networks (ANNs) are well suited to these problems of DM because of their robustness, adaptability, parallel processing, distributed memory, and high error tolerance. This paper gives a detailed discussion of ANN methods in DM based on an analysis of data mining technologies, with particular emphasis on classification data mining based on RBF neural networks. Pattern classification is an important application of RBF neural networks. In an on-line environment the training dataset is variable, so batch learning algorithms (e.g., OLS), which generate plenty of unnecessary retraining, have low efficiency. This paper derives an incremental learning algorithm (ILA) from the gradient descent algorithm to remove this bottleneck. ILA can adaptively adjust the parameters of RBF networks by minimizing the error cost, without any redundant retraining. Using the proposed method, an on-line classification system was constructed to solve the IRIS classification problem. Experimental results show that the algorithm has a fast convergence rate and excellent on-line classification performance.
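The per-sample gradient update at the heart of such an incremental rule can be sketched as follows. This toy version adapts only the output weights of a Gaussian RBF network; the paper's ILA also adapts other network parameters, and the data, centers, and learning rate here are assumptions for illustration.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF activations of input x for the given centers."""
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * width ** 2))

def incremental_train(X, y, centers, width=1.0, lr=0.5, epochs=200):
    """Per-sample gradient descent on the squared error: an incremental
    update driven by minimizing the error cost, with no batch retraining."""
    w = np.zeros(len(centers))
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            phi = rbf_features(xi, centers, width)
            w += lr * (yi - phi @ w) * phi   # online LMS-style weight update
    return w

# Toy two-class problem: points near (0,0) are class 0, near (3,3) class 1.
X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [2.9, 3.1]])
y = np.array([0.0, 0.0, 1.0, 1.0])
centers = np.array([[0.0, 0.0], [3.0, 3.0]])
w = incremental_train(X, y, centers)
pred = [float(rbf_features(xi, centers, 1.0) @ w > 0.5) for xi in X]
```

Each new sample adjusts the weights slightly toward lower error, so the network keeps learning under a changing on-line stream without retraining from scratch.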
Accommodative Lag by Autorefraction and Two Dynamic Retinoscopy Methods
2008-01-01
Purpose To evaluate two clinical procedures, MEM and Nott retinoscopy, for detecting accommodative lags of 1.00 diopter (D) or greater in children as identified by an open-field autorefractor. Methods 168 children 8 to <12 years old with low myopia, normal visual acuity, and no strabismus participated as part of an ancillary study within the screening process for a randomized trial. Accommodative response to a 3.00 D demand was first assessed by MEM and Nott retinoscopy, viewing binocularly with spherocylindrical refractive error corrected, with testing order randomized and each performed by a different masked examiner. The response was then determined viewing monocularly with spherical equivalent refractive error corrected, using an open-field autorefractor, which was the gold standard used for eligibility for the clinical trial. Sensitivity and specificity for accommodative lags of 1.00 D or more were calculated for each retinoscopy method compared to the autorefractor. Results 116 (69%) of the 168 children had accommodative lag of 1.00 D or more by autorefraction. MEM identified 66 of the children identified by autorefraction, for a sensitivity of 57% (95% CI = 47% to 66%) and a specificity of 63% (95% CI = 49% to 76%). Nott retinoscopy identified 35 children, for a sensitivity of 30% (95% CI = 22% to 39%) and a specificity of 81% (95% CI = 67% to 90%). Analysis of receiver operating characteristic (ROC) curves constructed for MEM and for Nott retinoscopy failed to reveal alternate cut points that would improve the combination of sensitivity and specificity for identifying accommodative lag ≥ 1.00 D as defined by autorefraction. Conclusions Neither MEM nor Nott retinoscopy provided adequate sensitivity and specificity to identify myopic children with accommodative lag ≥ 1.00 D as determined by autorefraction. A variety of methodological differences between the techniques may contribute to the modest to poor agreement. PMID:19214130
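The sensitivity and specificity figures above follow from simple counts. The sketch below reproduces the MEM numbers; the true-positive count (66 of 116) is taken from the abstract, while the true-negative count is inferred from the reported 63% specificity (about 33 of the 52 children without lag), an assumption for illustration.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# From the study: 116 of 168 children had lag >= 1.00 D by autorefraction;
# MEM flagged 66 of those 116 (true positives). The 33 true negatives are
# inferred from the reported 63% specificity, not stated directly.
sens, spec = sensitivity_specificity(tp=66, fn=116 - 66, tn=33, fp=52 - 33)
```

Here 66/116 gives the reported 57% sensitivity, and 33/52 the reported 63% specificity.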
W. D. Richins; J. M. Lacy; T. K. Larson; S. R. Novascone
2008-05-01
New nuclear power reactor designs will require resistance to a variety of possible malevolent attacks as well as traditional dynamic accident scenarios. The design/analysis team may be faced with a broad range of phenomena including air and ground blasts, high-velocity penetrators or shaped charges, and vehicle or aircraft impacts. With a host of software tools available to address these high-energy events, the analysis team must evaluate and select the software most appropriate for their particular set of problems. The accuracy of the selected software should then be validated with respect to the phenomena governing the interaction of the threat and structure. Several software codes are available for the study of blast, impact, and other shock phenomena. At the Idaho National Laboratory (INL), a study is underway to investigate the comparative characteristics of a group of shock and high-strain-rate physics codes including ABAQUS, LS-DYNA, CTH, ALEGRA, and ALE-3D. In Part I of this report, a series of five benchmark problems to exercise some important capabilities of the subject software was identified. The benchmark problems selected are a Taylor cylinder test, a split Hopkinson pressure bar test, a free air blast, the dynamic splitting tension (Brazilian) test, and projectile penetration of a concrete slab. Part II (this paper) reports the results of two of the benchmark problems: the Taylor cylinder and the dynamic Brazilian test. The Taylor cylinder test is a method to determine the dynamic yield properties of materials. The test specimen is a right circular cylinder which is impacted against a theoretically rigid target. The cylinder deforms upon impact, with the final shape depending upon the dynamic yield stress, in turn a function of strain and strain rate. The splitting tension test, or Brazilian test, is a method to measure the tensile strength of concrete using a cylindrical specimen. The specimen is loaded diametrically in compression, producing a
Coherence penalty functional: A simple method for adding decoherence in Ehrenfest dynamics
Akimov, Alexey V.; Long, Run; Prezhdo, Oleg V. (e-mail: oleg.prezhdo@rochester.edu)
2014-05-21
We present a new semiclassical approach for description of decoherence in electronically non-adiabatic molecular dynamics. The method is formulated on the grounds of the Ehrenfest dynamics and the Meyer-Miller-Thoss-Stock mapping of the time-dependent Schrödinger equation onto a fully classical Hamiltonian representation. We introduce a coherence penalty functional (CPF) that accounts for decoherence effects by randomizing the wavefunction phase and penalizing development of coherences in regions of strong non-adiabatic coupling. The performance of the method is demonstrated with several model and realistic systems. Compared to other semiclassical methods tested, the CPF method eliminates artificial interference and improves agreement with the fully quantum calculations on the models. When applied to study electron transfer dynamics in the nanoscale systems, the method shows an improved accuracy of the predicted time scales. The simplicity and high computational efficiency of the CPF approach make it a perfect practical candidate for applications in realistic systems.
Novel Method for Processing the Dynamic Calibration Signal of Pressure Sensor.
Wang, Zhongyu; Li, Qiang; Wang, Zhuoran; Yan, Hu
2015-07-21
Dynamic calibration is one of the important ways to acquire the dynamic performance parameters of a pressure sensor. This research focuses on the processing method for the output of a calibrated pressure sensor, and mainly attempts to solve the problem of extracting the true information of the step response under strong interference noise. A dynamic calibration system based on a shock tube is established to excite the time-domain response signal of a calibrated pressure sensor. A key processing step based on difference modeling is applied to the obtained signal, and several generating sequences are established. A fusion process for the generating sequences is then undertaken, and the true information of the step response of the calibrated pressure sensor can be obtained. Finally, by applying the common QR decomposition method to the true information, a dynamic model characterizing the dynamic performance of the calibrated pressure sensor is established. A typical pressure sensor was used to perform calibration tests, and a frequency-domain experiment for the sensor was also conducted. Results show that the proposed method can effectively filter strong interference noise in the output of the sensor and that the corresponding dynamic model can effectively characterize the dynamic performance of the pressure sensor.
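The final model-fitting step, least squares solved via QR decomposition, can be sketched with a generic discrete-time difference (ARX) model. The model order, the excitation signal, and the coefficient values below are assumptions for illustration; the paper's specific difference-modeling and fusion steps are not reproduced here.

```python
import numpy as np

def fit_arx_qr(y, u, na=2, nb=2):
    """Least-squares fit of y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
    with the overdetermined system solved via QR decomposition."""
    start = max(na, nb)
    A = np.array([np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
                  for k in range(start, len(y))])
    b = y[start:]
    Q, R = np.linalg.qr(A)
    return np.linalg.solve(R, Q.T @ b)  # theta = [a1, a2, b1, b2]

# Simulate a known stable second-order "sensor" model driven by a rich
# excitation signal (needed for identifiability), then recover it.
rng = np.random.default_rng(0)
true_theta = np.array([1.2, -0.4, 0.1, 0.1])
u = rng.standard_normal(300)
y = np.zeros(300)
for k in range(2, 300):
    y[k] = true_theta @ np.array([y[k - 1], y[k - 2], u[k - 1], u[k - 2]])
theta = fit_arx_qr(y, u)
```

With noiseless simulated data the QR-based solve recovers the generating coefficients to machine precision.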
Wang, Shaobu; Lu, Shuai; Zhou, Ning; Lin, Guang; Elizondo, Marcelo A.; Pai, M. A.
2014-09-04
In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest to mitigate the computational cost of transient stability studies. This paper presents an approach for deriving a reduced dynamic model of the external area based on dynamic response measurements, which comprises three steps: dynamic-feature extraction, attribution, and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step by matching the extracted dynamic features with the highest similarity, forming a suboptimal 'basis' of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated with a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method achieves better reduction ratios and smaller response errors than traditional coherency aggregation methods.
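The select-then-reconstruct pattern can be sketched on toy data. Here a greedy residual-norm rule stands in for the paper's SVD-based feature attribution, and least squares performs the reconstruction; the data, the selection rule, and all names are assumptions for illustration.

```python
import numpy as np

def pick_characteristic(Y, n_char):
    """Greedily pick generators whose responses have the largest residual
    outside the span of those already chosen (a stand-in for the paper's
    feature attribution step)."""
    chars, R = [], Y.copy()
    for _ in range(n_char):
        i = int(np.argmax(np.linalg.norm(R, axis=1)))
        chars.append(i)
        v = Y[i] / np.linalg.norm(Y[i])
        R = R - np.outer(R @ v, v)   # deflate the chosen direction
    return sorted(chars)

def dear_reduce(Y, n_char=2):
    """Approximate every generator's response as a linear combination of
    a few 'characteristic' generators (the reconstruction idea)."""
    chars = pick_characteristic(Y, n_char)
    W, _, _, _ = np.linalg.lstsq(Y[chars].T, Y.T, rcond=None)
    return chars, W.T @ Y[chars]

# Toy "measured dynamics": five generators mixing two underlying modes.
t = np.linspace(0.0, 10.0, 300)
modes = np.vstack([np.sin(t), np.cos(1.7 * t)])
mix = np.array([[1.0, 0.0], [0.0, 1.0], [0.8, 0.2], [0.3, 0.7], [0.5, 0.5]])
Y = mix @ modes
chars, Y_hat = dear_reduce(Y, n_char=2)
```

Because the toy responses span only two modes, two well-chosen characteristic generators reconstruct all five responses essentially exactly.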
McGuire, Connor; Kristman, Vicki L; Williams-Whitt, Kelly; Reguly, Paula; Shaw, William; Soklaridis, Sophie
2015-01-01
PURPOSE To determine the association between supervisors' leadership style and autonomy and supervisors' likelihood of supporting job accommodations for back-injured workers. METHODS A cross-sectional study of supervisors from Canadian and US employers was conducted using a web-based, self-report questionnaire that included a case vignette of a back-injured worker. Autonomy and two dimensions of leadership style (considerate and initiating structure) were included as exposures. The outcome, supervisors' likelihood of supporting job accommodations, was measured with the Job Accommodation Scale (JAS). We conducted univariate analyses of all variables and bivariate analyses of the JAS score with each exposure and potential confounding factor. We used multivariable generalized linear models to control for confounding factors. RESULTS A total of 796 supervisors participated. Considerate leadership style (β = .012; 95% CI: .009–.016) and autonomy (β = .066; 95% CI: .025–.11) were positively associated with supervisors' likelihood to accommodate after adjusting for appropriate confounding factors. An initiating structure leadership style was not significantly associated with supervisors' likelihood to accommodate (β = .0018; 95% CI: −.0026–.0061) after adjusting for appropriate confounders. CONCLUSIONS Autonomy and a considerate leadership style were positively associated with supervisors' likelihood to accommodate a back-injured worker. Providing supervisors with more autonomy over decisions of accommodation and developing their considerate leadership style may aid in increasing work accommodation for back-injured workers and preventing prolonged work disability. PMID:25595332
Park, T
1993-09-30
Liang and Zeger proposed an extension of generalized linear models to the analysis of longitudinal data. Their approach is closely related to quasi-likelihood methods and can handle both normal and non-normal outcome variables such as Poisson or binary outcomes. Their approach, however, has been applied mainly to non-normal outcome variables. This is probably due to the fact that there is a large class of multivariate linear models available for normal outcomes such as growth models and random-effects models. Furthermore, there are many iterative algorithms that yield maximum likelihood estimators (MLEs) of the model parameters. The multivariate linear model approach, based on maximum likelihood (ML) estimation, specifies the joint multivariate normal distribution of outcome variables while the approach of Liang and Zeger, based on the quasi-likelihood, specifies only the marginal distributions. In this paper, I compare the approach of Liang and Zeger and the ML approach for the multivariate normal outcomes. I show that the generalized estimating equation (GEE) reduces to the score equation only when the data do not have missing observations and the correlation is unstructured. In more general cases, however, the GEE estimation yields consistent estimators that may differ from the MLEs. That is, the GEE does not always reduce to the score equation even when the outcome variables are multivariate normal. I compare the small sample properties of the GEE estimators and the MLEs by means of a Monte Carlo simulation study. PMID:8248664
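The coincidence of the GEE and ML estimators for complete multivariate-normal data can be illustrated in the simplest case, an intercept-only mean model. Because the working covariance factors out of the estimating equation, any choice of it (independence or unstructured) yields the sample mean, which is also the MLE. The toy data below are an assumption for illustration.

```python
import numpy as np

def gee_mean(y, V):
    """Intercept-only GEE: solve sum_i V^{-1}(y_i - mu) = 0 for mu.
    V^{-1} factors out, so any working covariance gives the sample mean."""
    Vinv = np.linalg.inv(V)
    return np.linalg.solve(len(y) * Vinv, Vinv @ y.sum(axis=0))

# Simulated complete longitudinal data: 50 subjects, 3 correlated outcomes.
rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.5, 0.2], [0.5, 1.0, 0.3], [0.2, 0.3, 1.0]])
y = rng.multivariate_normal([1.0, 2.0, 3.0], cov, size=50)

mu_indep = gee_mean(y, np.eye(3))                 # independence working model
mu_unstr = gee_mean(y, np.cov(y, rowvar=False))   # unstructured working model
```

Both working models return exactly the sample mean vector, illustrating the abstract's point that divergences between GEE and ML arise only with missing data or more structured models.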
Efficient Strategies for Calculating Blockwise Likelihoods Under the Coalescent.
Lohse, Konrad; Chmelik, Martin; Martin, Simon H; Barton, Nicholas H
2016-02-01
The inference of demographic history from genome data is hindered by a lack of efficient computational approaches. In particular, it has proved difficult to exploit the information contained in the distribution of genealogies across the genome. We have previously shown that the generating function (GF) of genealogies can be used to analytically compute likelihoods of demographic models from configurations of mutations in short sequence blocks (Lohse et al. 2011). Although the GF has a simple, recursive form, the size of such likelihood calculations explodes quickly with the number of individuals and applications of this framework have so far been mainly limited to small samples (pairs and triplets) for which the GF can be written by hand. Here we investigate several strategies for exploiting the inherent symmetries of the coalescent. In particular, we show that the GF of genealogies can be decomposed into a set of equivalence classes that allows likelihood calculations from nontrivial samples. Using this strategy, we automated blockwise likelihood calculations for a general set of demographic scenarios in Mathematica. These histories may involve population size changes, continuous migration, discrete divergence, and admixture between multiple populations. To give a concrete example, we calculate the likelihood for a model of isolation with migration (IM), assuming two diploid samples without phase and outgroup information. We demonstrate the new inference scheme with an analysis of two individual butterfly genomes from the sister species Heliconius melpomene rosina and H. cydno. PMID:26715666
Exclusion probabilities and likelihood ratios with applications to kinship problems.
Slooten, Klaas-Jan; Egeland, Thore
2014-05-01
In forensic genetics, DNA profiles are compared in order to make inferences, paternity cases being a standard example. The statistical evidence can be summarized and reported in several ways. For example, in a paternity case, the likelihood ratio (LR) and the probability of not excluding a random man as father (RMNE) are two common summary statistics. There has been a long debate on the merits of the two statistics, also in the context of DNA mixture interpretation, and no general consensus has been reached. In this paper, we show that the RMNE is a certain weighted average of inverse likelihood ratios. This is true in any forensic context. We show that the likelihood ratio in favor of the correct hypothesis is, in expectation, bigger than the reciprocal of the RMNE probability. However, with the exception of pathological cases, it is also possible to obtain smaller likelihood ratios. We illustrate this result for paternity cases. Moreover, some theoretical properties of the likelihood ratio for a large class of general pairwise kinship cases, including expected value and variance, are derived. The practical implications of the findings are discussed and exemplified.
NASA Technical Reports Server (NTRS)
Papadopoulos, G. D.
1975-01-01
The output of a radio interferometer is the Fourier transform of the object under investigation. Due to the limited coverage of the Fourier plane, the reconstruction of the image of the source is blurred by the beam of the synthesized array. A maximum-likelihood processing technique is described which uses the statistical properties of the received noise-like signals. This technique has been used extensively in the processing of large-aperture seismic arrays. This inversion method results in a synthesized beam that is more uniform, has lower sidelobes, and higher resolution than the normal Fourier transform methods. The maximum-likelihood method algorithm was applied successfully to very long baseline and short baseline interferometric data.
Adaptive hybrid likelihood model for visual tracking based on Gaussian particle filter
NASA Astrophysics Data System (ADS)
Wang, Yong; Tan, Yihua; Tian, Jinwen
2010-07-01
We present a new scheme based on multiple-cue integration for visual tracking within a Gaussian particle filter framework. The proposed method integrates the color, shape, and texture cues of an object to construct a hybrid likelihood model. During the measurement step, the likelihood model can be switched adaptively according to environmental changes, which improves the object representation to deal with the complex disturbances, such as appearance changes, partial occlusions, and significant clutter. Moreover, the confidence weights of the cues are adjusted online through the estimation using a particle filter, which ensures the tracking accuracy and reliability. Experiments are conducted on several real video sequences, and the results demonstrate that the proposed method can effectively track objects in complex scenarios. Compared with previous similar approaches through some quantitative and qualitative evaluations, the proposed method performs better in terms of tracking robustness and precision.
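A minimal sketch of cue fusion inside a particle filter's measurement step is below. The weighted geometric mean for combining cue likelihoods and the exponential-blend weight adaptation are common choices assumed here for illustration; they are not necessarily the paper's exact formulas, and all names and values are hypothetical.

```python
import numpy as np

def hybrid_likelihood(color_l, shape_l, texture_l, cue_weights):
    """Combine per-cue likelihoods into one hybrid likelihood per particle
    (weighted geometric mean -- one common fusion choice, assumed here)."""
    L = np.vstack([color_l, shape_l, texture_l])
    w = np.asarray(cue_weights, dtype=float)
    w = w / w.sum()
    return np.exp(w @ np.log(L + 1e-12))

def update_cue_weights(weights, cue_reliability, alpha=0.8):
    """Adapt confidence weights online: blend old weights with the cues'
    current reliability scores (an illustrative adaptation rule)."""
    w = alpha * np.asarray(weights) + (1 - alpha) * np.asarray(cue_reliability)
    return w / w.sum()

# Five particles; the texture cue is currently unreliable (flat likelihood),
# so its confidence weight is reduced before fusion.
color = np.array([0.9, 0.1, 0.1, 0.1, 0.1])
shape = np.array([0.8, 0.2, 0.1, 0.1, 0.1])
texture = np.full(5, 0.2)
w = update_cue_weights([1 / 3, 1 / 3, 1 / 3], [0.5, 0.4, 0.1])
L = hybrid_likelihood(color, shape, texture, w)
best = int(np.argmax(L))
```

Down-weighting the uninformative texture cue lets the color and shape cues dominate, so the particle they both favor receives the highest hybrid likelihood.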
NASA Technical Reports Server (NTRS)
Graves, Philip L.
1989-01-01
A method of formulating the dynamical equations of a flexible, serial manipulator is presented, using the Method of Kinematic Influence. The resulting equations account for rigid body motion, structural motion due to link and joint flexibilities, and the coupling between these two motions. Nonlinear inertial loads are included in the equations. A finite order mode summation method is used to model flexibilities. The structural data may be obtained from experimental, finite element, or analytical methods. Nonlinear flexibilities may be included in the model.
Predictive Simulation and Design of Materials by Quasicontinuum and Accelerated Dynamics Methods
Luskin, Mitchell; James, Richard; Tadmor, Ellad
2014-03-30
This project developed the hyper-QC multiscale method to make possible the computation of previously inaccessible space and time scales for materials with thermally activated defects. The hyper-QC method combines the spatial coarse-graining feature of a finite temperature extension of the quasicontinuum (QC) method (aka “hot-QC”) with the accelerated dynamics feature of hyperdynamics. The hyper-QC method was developed, optimized, and tested from a rigorous mathematical foundation.
The composite dynamic method as evidence for age-specific waterfowl mortality
Burnham, Kenneth P.; Anderson, David R.
1979-01-01
For the past 25 years, estimation of mortality rates for waterfowl has been based almost entirely on the composite dynamic life table. We examined the specific assumptions of this method and derived a valid goodness-of-fit test. We performed this test on 45 data sets representing a cross section of banding samples for various waterfowl species, geographic areas, banding periods, and age/sex classes. We found that: (1) the composite dynamic method was rejected (P < 0.001) in 37 of the 45 data sets (in fact, 29 were rejected at P < 0.00001) and (2) recovery and harvest rates are year-specific (a critical violation of the necessary assumptions). We conclude that the restrictive assumptions required for the composite dynamic method to produce valid estimates of mortality rates are not met in waterfowl data. We also demonstrate that even when the required assumptions are met, the method produces very biased estimates of age-specific mortality rates. We believe the composite dynamic method should not be used in the analysis of waterfowl banding data. Furthermore, the composite dynamic method does not provide valid evidence for age-specific mortality rates in waterfowl.
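The kind of goodness-of-fit test described above can be sketched with a standard Pearson chi-square comparison of observed versus model-expected counts. The counts below are hypothetical, not the paper's data, and the closed-form survival function applies only to even degrees of freedom.

```python
import math

def chi2_sf_even_dof(x, dof):
    """Chi-square survival function for even dof:
    P(X > x) = exp(-x/2) * sum_{j=0}^{dof/2 - 1} (x/2)^j / j!"""
    return math.exp(-x / 2) * sum((x / 2) ** j / math.factorial(j)
                                  for j in range(dof // 2))

# Hypothetical observed vs. model-expected band recoveries by year
# (illustrative numbers only, not the paper's data sets).
observed = [120, 95, 80, 60, 45]
expected = [100, 100, 75, 70, 55]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
p_value = chi2_sf_even_dof(chi2, dof=len(observed) - 1)
reject = p_value < 0.001   # the rejection threshold used in the paper
```

A data set passes only when the year-by-year recoveries are consistent with the model's assumptions; in the paper, 37 of 45 real data sets failed at this threshold.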
Method and apparatus for characterizing and enhancing the dynamic performance of machine tools
Barkman, William E; Babelay, Jr., Edwin F
2013-12-17
Disclosed are various systems and methods for assessing and improving the capability of a machine tool. The disclosure applies to machine tools having at least one slide configured to move along a motion axis. Various patterns of dynamic excitation commands are employed to drive the one or more slides, typically involving repetitive short distance displacements. A quantification of a measurable merit of machine tool response to the one or more patterns of dynamic excitation commands is typically derived for the machine tool. Examples of measurable merits of machine tool performance include dynamic one axis positional accuracy of the machine tool, dynamic cross-axis stability of the machine tool, and dynamic multi-axis positional accuracy of the machine tool.
Factors Influencing the Intended Likelihood of Exposing Sexual Infidelity.
Kruger, Daniel J; Fisher, Maryanne L; Fitzgerald, Carey J
2015-08-01
There is a considerable body of literature on infidelity within romantic relationships. However, there is a gap in the scientific literature on factors influencing the likelihood of uninvolved individuals exposing sexual infidelity. Therefore, we devised an exploratory study examining a wide range of potentially relevant factors. Based in part on evolutionary theory, we anticipated nine potential domains or types of influences on the likelihoods of exposing or protecting cheaters, including kinship, strong social alliances, financial support, previous relationship behaviors (including infidelity and abuse), potential relationship transitions, stronger sexual and emotional aspects of the extra-pair relationship, and disease risk. The pattern of results supported these predictions (N = 159 men, 328 women). In addition, there appeared to be a small positive bias for participants to report infidelity when provided with any additional information about the situation. Overall, this study contributes a broad initial description of factors influencing the predicted likelihood of exposing sexual infidelity and encourages further studies in this area.
Maximum-likelihood estimation of gene location by linkage disequilibrium
Hill, W. G.; Weir, B. S.
1994-04-01
Linkage disequilibrium, D, between a polymorphic disease and mapped markers can, in principle, be used to help find the map position of the disease gene. Likelihoods are therefore derived for the value of D conditional on the observed number of haplotypes in the sample and on the population parameter Nc, where N is the effective population size and c the recombination fraction between the disease and marker loci. The likelihood is computed explicitly for the case of two loci with heterozygote superiority and, more generally, by computer simulations assuming a steady state of constant population size and selective pressures or neutrality. It is found that the likelihood is, in general, not very dependent on the degree of selection at the loci and is very flat. This suggests that precise information on map position will not be obtained from estimates of linkage disequilibrium.
Comparisons of several aerodynamic methods for application to dynamic loads analyses
NASA Technical Reports Server (NTRS)
Kroll, R. I.; Miller, R. D.
1976-01-01
The results of a study are presented in which the applicability at subsonic speeds of several aerodynamic methods for predicting dynamic gust loads on aircraft, including active control systems, was examined and compared. These aerodynamic methods varied from steady state to an advanced unsteady aerodynamic formulation. Brief descriptions of the structural and aerodynamic representations and of the motion and load equations are presented. Comparisons of numerical results achieved using the various aerodynamic methods are shown in detail. From these results, aerodynamic representations for dynamic gust analyses are identified. It was concluded that several aerodynamic methods are satisfactory for dynamic gust analyses of configurations having either controls fixed or active control systems that primarily affect the low frequency rigid body aircraft response.
The Development of a New Method of Idiographic Measurement for Dynamic Assessment Intervention
ERIC Educational Resources Information Center
Hurley, Emma; Murphy, Raegan
2015-01-01
This paper proposes a new method of idiographic measurement for dynamic assessment (DA) intervention. There are two main methods of measurement for DA intervention; split-half tests and integrated scoring systems. Split-half tests of ability have proved useful from a research perspective. Integrated scoring systems coupled with case studies are…
Quantum-Classical Nonadiabatic Dynamics: Coupled- vs Independent-Trajectory Methods.
Agostini, Federica; Min, Seung Kyu; Abedi, Ali; Gross, E K U
2016-05-10
Trajectory-based mixed quantum-classical approaches to coupled electron-nuclear dynamics suffer from well-studied problems such as the lack of (or incorrect account for) decoherence in the trajectory surface hopping method and the inability of reproducing the spatial splitting of a nuclear wave packet in Ehrenfest-like dynamics. In the context of electronic nonadiabatic processes, these problems can result in wrong predictions for quantum populations and in unphysical outcomes for the nuclear dynamics. In this paper, we propose a solution to these issues by approximating the coupled electronic and nuclear equations within the framework of the exact factorization of the electron-nuclear wave function. We present a simple quantum-classical scheme based on coupled classical trajectories and test it against the full quantum mechanical solution from wave packet dynamics for some model situations which represent particularly challenging problems for the above-mentioned traditional methods. PMID:27030209
Costa, L; Mantha, V R; Silva, A J; Fernandes, R J; Marinho, D A; Vilas-Boas, J P; Machado, L; Rouboa, A
2015-07-16
Computational fluid dynamics (CFD) plays an important role in quantifying, understanding and "observing" the water movements around the human body and their effects on drag (D). We aimed to investigate the flow effects around the swimmer and to compare the drag and drag coefficient (CD) values obtained from experiments (using cable velocimetry in a swimming pool) with those of CFD simulations for the two ventral gliding positions assumed during the breaststroke underwater cycle (with shoulders flexed and upper limbs extended above the head, GP1; with shoulders in neutral position and upper limbs extended along the trunk, GP2). Six well-trained male breaststroke swimmers (with reasonable homogeneity of body characteristics) participated in the experimental tests; afterwards, a 3D swimmer model was created to fit within the limits of the sample body size profile. The standard k-ε turbulence model was used to simulate the fluid flow around the swimmer model. Velocity ranged from 1.30 to 1.70 m/s for GP1 and 1.10 to 1.50 m/s for GP2. The values found for GP1 and GP2 were lower for CFD than for the experiments. Nevertheless, both CFD and experimental drag/drag coefficient values displayed a tendency to jointly increase/decrease with velocity, except for GP2 CD, where CFD and experimental values displayed opposite tendencies. These results suggest that CFD values obtained by single-model approaches should be considered with caution, due to small differences in body shape and dimensions relative to real swimmers. For better accuracy of CFD studies, realistic individual 3D models of swimmers are required, and specific kinematics respected.
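The drag coefficient compared in this study follows from the standard drag equation, which can be sketched directly. The numeric values below (drag force, speed, frontal area) are hypothetical illustrations, not measurements from the paper.

```python
def drag_coefficient(drag_n, velocity, area, rho=1000.0):
    """CD from the standard drag equation D = 0.5 * rho * v^2 * A * CD,
    rearranged as CD = 2 * D / (rho * v^2 * A)."""
    return 2.0 * drag_n / (rho * velocity ** 2 * area)

# Hypothetical gliding swimmer: 45 N of drag at 1.5 m/s with 0.07 m^2
# frontal area in water (rho ~ 1000 kg/m^3). Illustrative values only.
cd = drag_coefficient(drag_n=45.0, velocity=1.5, area=0.07)
```

Because CD divides out the v² dependence, it isolates shape and posture effects, which is why the two gliding positions are compared on CD as well as raw drag.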
Costa, L; Mantha, V R; Silva, A J; Fernandes, R J; Marinho, D A; Vilas-Boas, J P; Machado, L; Rouboa, A
2015-07-16
Computational fluid dynamics (CFD) plays an important role in quantifying, understanding and "observing" the water movements around the human body and their effects on drag (D). We aimed to investigate the flow effects around the swimmer and to compare the drag and drag coefficient (CD) values obtained from experiments (using cable velocimetry in a swimming pool) with those of CFD simulations for the two ventral gliding positions assumed during the breaststroke underwater cycle (with shoulders flexed and upper limbs extended above the head, GP1; with shoulders in neutral position and upper limbs extended along the trunk, GP2). Six well-trained male breaststroke swimmers (with reasonable homogeneity of body characteristics) participated in the experimental tests; afterwards, a 3D swimmer model was created to fit within the limits of the sample's body size profile. The standard k-ε turbulence model was used to simulate the fluid flow around the swimmer model. Velocity ranged from 1.30 to 1.70 m/s for GP1 and from 1.10 to 1.50 m/s for GP2. Values found for GP1 and GP2 were lower for CFD than for the experiments. Nevertheless, both CFD and experimental drag/drag coefficient values tended to increase/decrease jointly with velocity, except for GP2 CD, where CFD and experimental values displayed opposite tendencies. The results suggest that CFD values obtained by single-model approaches should be considered with caution because of small differences in body shape and dimensions relative to real swimmers. For better accuracy of CFD studies, realistic individual 3D models of swimmers are required, and the swimmers' specific kinematics must be respected. PMID:26087879
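The drag coefficient comparison described above rests on the standard hydrodynamic relation D = ½ρ·CD·A·v². A minimal sketch of inverting that relation for CD (the density, frontal area and drag values below are illustrative assumptions, not the study's measurements):

```python
def drag_coefficient(drag_n, rho, area_m2, v):
    """Invert D = 0.5 * rho * Cd * A * v^2 for the drag coefficient Cd."""
    return 2.0 * drag_n / (rho * area_m2 * v ** 2)

rho = 998.0   # water density, kg/m^3
area = 0.1    # projected frontal area, m^2 (illustrative)
for v, d in [(1.30, 35.0), (1.50, 48.0), (1.70, 62.0)]:  # illustrative drag values, N
    print(f"v={v:.2f} m/s  Cd={drag_coefficient(d, rho, area, v):.3f}")
```

Plotting Cd over the tested velocity range for both the CFD and experimental values makes the opposite GP2 tendencies reported above directly visible.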
NASA Astrophysics Data System (ADS)
Lee, Jinkyo
1993-01-01
Efficient and accurate analytical or semi-analytical solutions have been developed for the dynamics of one- and two-dimensional linear structures using an elemental dynamic flexibility formulation. This dissertation is divided into three parts. In the first, the elemental flexibility formulation is developed for Euler-Bernoulli beams with discontinuous section properties, which can be viewed as the synthesis of uniform beams, and the exactness of the solution is established. In the second, the elemental flexibility formulation is extended to thin rectangular plates with Levy boundary conditions, and conditions under which the exact solution can be achieved are presented. In the third, the structural-acoustic problem of a Helmholtz fluid enclosed by a partially flexible cavity is posed and solved. Here, a concise analytical representation of the structural dynamics is used in conjunction with a boundary element approach for the fluid medium to give an efficient and accurate semi-analytical solution. All three parts are organized along similar lines. Following an introduction and review of the pertinent literature, the governing equations are derived and solved, a series of example problems is presented, the results from the examples are compared with similar results from the literature, and the efficacy of the method compared with other methods is discussed. This is followed by a general conclusions section and a series of appendices.
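The exactness claims above concern the elemental flexibility formulation itself; as a point of reference, the classical uniform Euler-Bernoulli cantilever admits a closed-form characteristic equation, cos(βL)·cosh(βL) + 1 = 0, whose roots give the natural frequencies. A small sketch that finds the first three dimensionless roots by bisection (standard textbook material, not the dissertation's method):

```python
import math

def char_eq(x):
    # Cantilever Euler-Bernoulli characteristic equation: cos(x)*cosh(x) + 1 = 0
    return math.cos(x) * math.cosh(x) + 1.0

def bisect(f, a, b, tol=1e-12):
    """Bisection root finder; assumes f changes sign on [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# First three dimensionless roots beta*L; then omega_n = (beta*L)^2 * sqrt(E*I/(rho*A)) / L^2
roots = [bisect(char_eq, lo, hi) for lo, hi in [(1.5, 2.5), (4.0, 5.5), (7.0, 8.5)]]
print([round(r, 4) for r in roots])  # → [1.8751, 4.6941, 7.8548]
```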
Rupture Dynamics Simulation for Non-Planar fault by a Curved Grid Finite Difference Method
NASA Astrophysics Data System (ADS)
Zhang, Z.; Zhu, G.; Chen, X.
2011-12-01
We implement, for the first time, a non-staggered finite difference method with split nodes to solve the dynamic rupture problem on non-planar faults. The split-node method is widely used in dynamic rupture simulation because it represents the fault plane more precisely than alternatives such as the thick-fault and stress-glut approaches. The finite difference method is also a popular numerical method for kinematic and dynamic problems in seismology. However, previous work has focused mostly on the staggered-grid method because of its simplicity and computational efficiency, even though it is at a disadvantage relative to non-staggered schemes in describing boundary conditions, especially irregular boundaries such as non-planar faults. Zhang and Chen (2006) proposed a high-order non-staggered MacCormack finite difference method based on curved grids to solve irregular-boundary problems precisely. Building on this non-staggered grid method, we successfully simulate the spontaneous rupture problem. The fault plane is itself a boundary condition, which may of course be irregular, so the method should be capable of simulating the rupture process on arbitrarily bending fault planes. We first validate the method in Cartesian coordinates; for bending faults, curvilinear grids are used.
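Zhang and Chen's scheme is a MacCormack-type predictor-corrector on non-staggered grids. A one-dimensional sketch of the basic MacCormack step for linear advection u_t + c·u_x = 0 (a toy illustration of the predictor-corrector idea only; the rupture solver itself uses high-order operators on curvilinear grids):

```python
def maccormack_step(u, nu):
    """One MacCormack predictor-corrector step for u_t + c*u_x = 0 (nu = c*dt/dx),
    with periodic boundaries; second-order accurate on a regular (non-staggered) grid."""
    n = len(u)
    # Predictor: forward difference
    us = [u[i] - nu * (u[(i + 1) % n] - u[i]) for i in range(n)]
    # Corrector: backward difference applied to the predicted field
    return [0.5 * (u[i] + us[i] - nu * (us[i] - us[i - 1])) for i in range(n)]

u = [0.0] * 8
u[2] = 1.0
u = maccormack_step(u, 1.0)   # with nu = 1 the exact solution is a one-cell shift
print(u)
```

At the unit Courant number the scheme reproduces the exact solution, which makes a convenient self-check before moving to variable coefficients or curved grids.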
Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics
NASA Technical Reports Server (NTRS)
Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy
2006-01-01
This viewgraph presentation reviews the use of Adaptive Mesh Refinement (AMR) in simulating the crustal dynamics of the Earth's surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements when compared to generating and running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's surface is spurred by proposed future NASA missions, such as InSAR, for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computational fluid dynamics for predictive simulation of complex flows around complex structures.
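The core AMR idea, refining only where the solution varies sharply instead of meshing finely everywhere, can be illustrated in one dimension (a toy sketch; GeoFEST/PYRAMID operates on unstructured 3D finite element meshes):

```python
def refine(cells, f, tol):
    """Split any cell [a, b] where the jump |f(b) - f(a)| exceeds tol.
    One pass of gradient-based refinement on a 1D mesh of intervals."""
    out = []
    for a, b in cells:
        if abs(f(b) - f(a)) > tol:
            m = 0.5 * (a + b)
            out += [(a, m), (m, b)]   # split the flagged cell in two
        else:
            out.append((a, b))        # keep the coarse cell
    return out

f = lambda x: 0.0 if x < 0.5 else 1.0          # a sharp front at x = 0.5
cells = [(i / 4, (i + 1) / 4) for i in range(4)]
for _ in range(3):
    cells = refine(cells, f, 0.5)
print(len(cells))  # cells cluster around the front, not everywhere
```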
Learned predictions of error likelihood in the anterior cingulate cortex.
Brown, Joshua W; Braver, Todd S
2005-02-18
The anterior cingulate cortex (ACC) and the related medial wall play a critical role in recruiting cognitive control. Although ACC exhibits selective error and conflict responses, it has been unclear how these develop and become context-specific. With use of a modified stop-signal task, we show from integrated computational neural modeling and neuroimaging studies that ACC learns to predict error likelihood in a given context, even for trials in which there is no error or response conflict. These results support a more general error-likelihood theory of ACC function based on reinforcement learning, of which conflict and error detection are special cases.
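The error-likelihood account can be caricatured with a simple reinforcement-learning delta rule that maintains a per-context error prediction updated on every trial, so a prediction exists even on error-free trials (a toy sketch; the paper's computational neural model is more elaborate, and the contexts and error rates below are invented):

```python
import random

def learn_error_likelihood(trials, alpha=0.1):
    """Delta-rule estimate of per-context error likelihood:
    p[c] <- p[c] + alpha * (error - p[c]), updated on every trial."""
    p = {}
    for context, error in trials:
        v = p.get(context, 0.5)          # uninformative prior prediction
        p[context] = v + alpha * (error - v)
    return p

random.seed(0)
# 'hard' context fails 50% of the time, 'easy' context 10%
trials = [('hard', random.random() < 0.5) for _ in range(500)] + \
         [('easy', random.random() < 0.1) for _ in range(500)]
p = learn_error_likelihood(trials)
print(p['hard'] > p['easy'])  # predictions track the true error rates
```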
Nonparametric maximum likelihood estimation for the multisample Wicksell corpuscle problem
Chan, Kwun Chuen Gary; Qin, Jing
2016-01-01
We study nonparametric maximum likelihood estimation for the distribution of spherical radii using samples containing a mixture of one-dimensional, two-dimensional biased and three-dimensional unbiased observations. Since direct maximization of the likelihood function is intractable, we propose an expectation-maximization algorithm for implementing the estimator, which handles an indirect measurement problem and a sampling bias problem separately in the E- and M-steps, and circumvents the need to solve an Abel-type integral equation, which creates numerical instability in the one-sample problem. Extensions to ellipsoids are studied and connections to multiplicative censoring are discussed. PMID:27279657
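The sampling-bias half of the problem can be illustrated in the simplest one-sample, length-biased case, where the NPMLE reweights each observation by the reciprocal of its size (a standard illustration of bias correction; the paper's multisample EM algorithm additionally handles the indirect-measurement step):

```python
def debias_length_biased(xs):
    """NPMLE of the underlying distribution from a length-biased sample:
    observation x is sampled with probability proportional to x, so the
    estimator places mass proportional to 1/x on each observed point."""
    inv = [1.0 / x for x in xs]
    total = sum(inv)
    return [(x, w / total) for x, w in zip(xs, inv)]

sample = [1.0, 2.0, 4.0]                 # length-biased observations (illustrative)
masses = debias_length_biased(sample)
mean = sum(x * w for x, w in masses)     # equals the harmonic mean n / sum(1/x)
print(round(mean, 4))  # → 1.7143
```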
NASA Astrophysics Data System (ADS)
Zucca, Stefano; Firrone, Christian Maria
2014-02-01
Real applications in structural mechanics where the dynamic behavior is linear are rare. Usually, structures are made of components assembled together by means of joints whose behavior may be highly nonlinear. Depending on the amount of excitation, joints can dramatically change the dynamic behavior of the whole system, so modeling this type of constraint is crucial for a correct prediction of the amount of vibration. Solving the nonlinear equilibrium equations by means of the Harmonic Balance Method (HBM) is widely accepted as an effective approach to calculating the steady-state forced response in the frequency domain, as opposed to Direct Time Integration (DTI). The state-of-the-art contact element used to model the friction forces at joint interfaces is a node-to-node contact element, where the local contact compliance is modeled by means of linear springs and Coulomb's law governs the friction phenomena. In the literature, when the HBM is applied to vibrating systems with joint interfaces and the state-of-the-art contact model is used, an uncoupled approach is mostly employed: the static governing equations are solved in advance to compute the pre-stress effects, and then the dynamic governing equations are solved to predict the vibration amplitude of the system. As a result, the HBM steady-state solution may correlate poorly with the DTI solution, where static and dynamic loads are accounted for simultaneously. In this paper, the performance of the HBM is investigated by comparing the uncoupled approach with a fully coupled static/dynamic approach. To highlight the main differences between the two approaches, a lumped-parameter system characterized by a single friction contact is considered, showing the different levels of accuracy that the proposed approaches can provide for different configurations.
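The single-harmonic flavor of the HBM can be sketched on a system with a cubic (Duffing) nonlinearity rather than a friction contact: substituting x = A·cos(ωt + φ) into x'' + 2ζω₀x' + ω₀²x + βx³ = F·cos(ωt) and balancing the first-harmonic terms yields an algebraic equation for the amplitude A (an illustrative stand-in, with invented parameter values; the paper's Coulomb-friction contact model requires multi-harmonic treatment):

```python
def hbm_residual(A, w, w0=1.0, zeta=0.02, beta=0.5, F=0.1):
    """First-harmonic balance residual for the Duffing oscillator:
    [(w0^2 - w^2)*A + 0.75*beta*A^3]^2 + [2*zeta*w0*w*A]^2 - F^2 = 0."""
    return ((w0 ** 2 - w ** 2) * A + 0.75 * beta * A ** 3) ** 2 \
         + (2.0 * zeta * w0 * w * A) ** 2 - F ** 2

def solve_amplitude(w, lo=1e-6, hi=10.0):
    # Bisection on the residual (assumes a single sign change on [lo, hi],
    # which holds below resonance for this hardening system)
    for _ in range(200):
        m = 0.5 * (lo + hi)
        if hbm_residual(lo, w) * hbm_residual(m, w) <= 0:
            hi = m
        else:
            lo = m
    return 0.5 * (lo + hi)

print(solve_amplitude(w=0.5))  # steady-state amplitude off resonance
```

Sweeping ω and solving for A at each step traces the frequency response curve that the HBM delivers directly, which is what makes it attractive relative to DTI for steady-state analysis.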
Zhou, Xiaoming; Liang, Xin M; Zhao, Gang; Su, Youchao; Wang, Yang
2014-07-01
Roller pumps are commonly used in circulatory assist devices to deliver blood, but the inherent high mechanical stresses (especially wall shear stress) may cause considerable damage to cells. Conventional experimental approaches to evaluate and reduce device-induced cell damage require considerable effort and resources. In this work, we describe the use of a new computational fluid dynamics method to more effectively study roller pump systems. A generalized parametric model for the fluid field in a typical roller pump system is presented first, and analytical formulations of the moving boundary are then derived. Based on the model and formulations, the dynamic geometry and mesh of the fluid field can be updated automatically according to the time-dependent roller positions. The described method successfully simulated the pulsing flow generated by the pump, offering a convenient way to visualize the inherent flow pattern and to assess shear-induced cell damage. Moreover, the highly reconfigurable model and the semiautomated simulation process extend the usefulness of the presented method to a wider range of applications. Comparison studies were conducted, and valuable indications about the detailed effects of structural parameters and operational conditions on the produced wall shear stress were obtained. Given the good consistency between the simulated results and the existing experimental data, the presented method displays promising potential to more effectively guide the development of improved roller pump systems which produce less mechanical damage to cells.
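A first-order sanity check on tubing wall shear stress, of the kind the CFD analysis refines, follows from fully developed laminar (Poiseuille) tube flow, τ_w = 4μQ/(πR³) (the viscosity, flow rate and tubing radius below are illustrative assumptions, not values from the study):

```python
import math

def wall_shear_stress(flow_m3s, radius_m, mu=3.5e-3):
    """Wall shear stress for fully developed laminar Poiseuille tube flow:
    tau_w = 4 * mu * Q / (pi * R^3). A first-order estimate only; the
    roller-occlusion region needs the full CFD treatment."""
    return 4.0 * mu * flow_m3s / (math.pi * radius_m ** 3)

q = 5.0 / 60000.0   # 5 L/min expressed in m^3/s
r = 0.00635         # 6.35 mm tubing radius (1/2-inch ID)
print(round(wall_shear_stress(q, r), 2), "Pa")  # → 1.45 Pa
```

Comparing such baseline values against hemolysis thresholds shows why the transient peaks at the rollers, not the mean tubing flow, dominate the cell-damage assessment.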